tag:blogger.com,1999:blog-7862104.post4440272982570758090..comments2022-07-24T09:04:19.995-07:00Comments on DUB For the Future: I give up on CHI/UISTAnonymoushttp://www.blogger.com/profile/15776616183969942139noreply@blogger.comBlogger85125tag:blogger.com,1999:blog-7862104.post-5008135555423243722013-05-05T07:23:30.320-07:00As a fresh-off-the-boat PhD researcher I was shocked by the CHI 2013 conference, which I was attending for the first time. While the presentations are of good quality, the work behind them often seemed to be refurbished work from 5-10 years ago. This is especially striking given how devastating the reviews are perceived to be at our university; I often hear phrases like "make sure your evaluation is very strong", since reviewers will immediately shoot you down. Yet I arrived there and it seemed as if a lot of the evaluations and related-work research were done in such a way that they fit the goal.<br /><br />Even more striking is that everyone seems to like the work and there is almost no constructive critical input. I was glad to see Bill Buxton step up at one presentation and tell the presenter that inaccuracies in research might cause problems for researchers coming after him. <br /><br />Personally, CHI 2013 completely took away my idealistic view of research and partly demotivated me. I can now see what it is all about: craft your work so that it targets a good conference, and minimize the effort you spend on it, so you can publish, publish, publish, publish. <br /><br />I can now see why more and more similar applications, frameworks, and research results are emerging that are less and less usable/bug-free/accurate. 
They were never meant to be used in the first place; they were meant for a one-shot paper.De Rooms Brechthttps://www.researchgate.net/profile/Brecht_De_Rooms/?ev=hdr_xprfnoreply@blogger.comtag:blogger.com,1999:blog-7862104.post-85859988949897922012012-06-11T02:45:23.039-07:00It's amazing that this post is over 2.5 years old and read by thousands, and it still perfectly applies to the last round of UIST reviews. It's very sad.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-7862104.post-9428616190001119202010-08-05T08:22:50.686-07:00My understanding is that CHI is a different thing than it claims to be and consequently a different thing than many people expect. CHI attempts to be a venue for researchers and practitioners. But the majority of the reviewers are researchers who favor rigorous experimentation (atomic tasks, simplified settings, statistical analysis). And I would argue that rigorous experimentation is a good thing and a fundamental role of research. The problem is that the 'practitioners' part of CHI is only an illusion. The same reviewers judge design or system papers. But these papers are not research or scientific work, and as such they should be judged on a different basis. Practitioners value exploration and the generation of ideas, learning about new perspectives, ideas that failed, how systems are designed and built, etc.<br /><br />As a by-product we have system papers including controlled experiments which make little sense but let the research reviewers find what they look for. This is not to say that controlled experiments are not useful in system papers.<br /><br />The acceptance procedure at CHI is also questionable. It is common to get high and low scores and contradicting reviews. And PCs take an average of that. This is a blunder and irresponsible behavior. 
In such cases a PC should read the paper and use his expertise to take a side. Otherwise, what is the point of putting HCI experts on the PC?<br /><br />Looking at the rapidly increasing number of submissions and accepted papers at CHI, I get the feeling that new venues with distinct objectives are necessary. In my opinion this would be a good thing for the community.Krystian Samphttp://hci.deri.ie/~ksamp/noreply@blogger.comtag:blogger.com,1999:blog-7862104.post-20935070234790360912010-07-21T06:11:37.965-07:00Hi,<br /><br />I definitely feel the same thing. I have over 100 papers, and the journal/conference reviews I get are useless, sometimes bull****. I also review for CHI, but I am from communications.<br />The *top* professors in our fields should stop publishing and putting their names on papers.<br /><br />Once someone is a professor, he should start placing his work on the web rather than using papers to claim his contributions.<br /><br />BAN all full professors from publishing. Let them do only reviews.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-7862104.post-34950381370585599552010-05-27T06:03:31.133-07:00Hi, James. I'm coming very late to this discussion (I discovered it when I typed UIST 2011 into Google--it's the top hit) but thought I'd chip in a comment, since this is something I've thought about for a while.<br /><br />When I carry out a small experiment like a Fitts' Law study of a novel interaction technique, I generally have a few goals in mind: I want to understand whether a given technique works; I want to understand its generality, *why* it works, at some level of abstraction; and I want to persuade others to take up either my findings or the technique itself.<br /><br />When I build non-interactive systems and write about them for publication outside CHI venues (e.g., in AI or cognitive modeling), my goals are comparable, but with the main emphasis on the first point, demonstrating that a system actually does what I claim it can do.<br /><br />I've also built interactive systems. These are much harder to sell, as everyone agrees. 
Papers have been rejected because comparisons with existing systems in common use were too limited in scope, because performance improvements were too small or too localized, or even because we didn't carry out a summative evaluation, thinking that the novelty of the work, shown through demonstrations, would carry the paper. I've sometimes wished that, once we've demonstrated that a system basically works, reviewers would consider the question, "Does this system suggest a new direction for HCI?" Is it novel, is it plausible, does it have the potential to change the way people think about interaction? These questions aren't easily answered, I think, in the usual way we review papers, even though some aspects, like novelty, are commonly part of our guidelines for reviewing.<br /><br />One of the institutional barriers, mentioned briefly in one of Jonathan's comments, is that conferences like CHI and UIST are now considered archival. I suspect that this, along with other factors, leads to a reluctance to take risks--we might not want to accept formative ideas that could turn out to be misguided or even wrong (which we can't judge without an enormous amount of further effort). "Risky" papers can be published elsewhere (at workshops, in alt.chi, as extended abstracts, etc.), but I don't think they get nearly the attention that safer full papers get at a high-profile conference, and of course they don't count as much on a CV.<br /><br />I'd like to see a venue that emphasized looking forward and taking risks. Quality control would be harder but still manageable, I think. Systems papers would mainly serve to provide inspiration rather than a foundation for carrying out usability studies. Most of the usual evaluation work (how well does it perform, etc.) would be left for the future.<br /><br />-- Rob St. AmantRobert St. 
Amanthttps://www.blogger.com/profile/10959392496631877369noreply@blogger.comtag:blogger.com,1999:blog-7862104.post-39740486626100904642010-04-26T11:44:32.688-07:00James,<br /><br />Hear, hear. For those new to this debate, I stirred things up with my CHI 03 "alt.chi" presentation, "The Tyranny of Evaluation", http://web.media.mit.edu/~lieber/Misc/Tyranny-Evaluation.html. Comments on that original rant are appreciated. <br /><br />I wouldn't give up on either CHI or UIST, despite the frustration. If new venues appear that are more congenial to innovation, let's take them. But I do think that CHI/UIST has made an honest effort to at least listen to these concerns and respond. Witness that I was led to this blog post by the UIST review instructions! Right now, I think the CHI/UIST management understands the message better than the vast body of CHI/UIST reviewers, which accounts for why authors still get these kinds of reviews. The conference committees still have to work with whatever they are submitted and whatever the reviewers say. But it is still the committees that decide, so I encourage young innovators to keep at it and continue to submit and review papers, to give the committees material to push forward on making the conferences more innovative. <br /><br />On concrete suggestions for conference structure, I'd make two. First, OOPSLA (now SPLASH) has a section called Onward, explicitly for more speculative and less rigorous work. CHI could emulate this. Second, in my experience, reviewing works much better when reviewers choose the papers they want to review rather than having them distributed by a committee. The biggest source of incompetent and hostile reviews is people being thrust into reviewing papers they don't want. AAAI has a reviewer bid system that works well. 
<br /><br />Henry Liebermanlieber@media.mit.eduhttps://www.blogger.com/profile/09717917814966787048noreply@blogger.comtag:blogger.com,1999:blog-7862104.post-45871595770204219622010-04-21T09:20:27.096-07:00Please, please, please call it Transactions on UIST so we can call it Twist. Please. I'm begging.Bohttps://www.blogger.com/profile/14292806907045106934noreply@blogger.comtag:blogger.com,1999:blog-7862104.post-26044252536991837332010-04-20T23:34:54.309-07:00Ivan says: "starting [a] new conference will not help as long as the reviewing process is anonymous. Professionals should have the courage to stand by their opinions and defend them if needed, not hide behind anonymity."<br /><br />I see your point of view, but on the other hand anonymity can help people say what they really think without social pressure to be overly nice. <br /><br />I have been thinking more about this recently and believe that systems-oriented papers tend to be rather dense, cover a lot of ground, and simply have to leave some things out. The quick-review journal model, which has several back-and-forths with the reviewers, would be better at helping these papers get over the bar for publication, as some of the necessary ambiguity is teased out of the manuscript. I think you won't have a major problem with this issue of "reviewers hiding".<br /><br />As such, I'm getting ready to propose a new electronic journal to ACM on HCI Systems & Applications. 
The idea would be to link it with an existing conference (e.g., UIST) and have any papers accepted by the journal at least 3 months before the conference appear at that conference.Anonymoushttps://www.blogger.com/profile/15776616183969942139noreply@blogger.comtag:blogger.com,1999:blog-7862104.post-30193421380249414862010-02-02T10:06:49.308-08:00I guess the discussion is over, but I believe that starting a new conference will not help as long as the reviewing process is anonymous. Professionals should have the courage to stand by their opinions and defend them if needed, not hide behind anonymity.<br /><br />Just as a side note: the tech blogging community despises anonymous commenters as cowards and regards them as trolls. So what does that say about the CHI community?Ivan Poupyrevhttp://www.ivanpoupyrev.comnoreply@blogger.comtag:blogger.com,1999:blog-7862104.post-10044541823597437392009-12-17T06:15:48.805-08:00Thanks Jose. A bunch of people had sent that video to me (and I had seen it online around the same time). Quite funny (if Hitler can be funny), but I didn't think to post it!Anonymoushttps://www.blogger.com/profile/15776616183969942139noreply@blogger.comtag:blogger.com,1999:blog-7862104.post-41067585454208081482009-12-17T06:11:05.188-08:00Ryan, interesting comments... The one issue I think you miss is this: "...will people really remain reliant on walled gardens like conferences and journals for distributing their content?" I don't think these are really going away. People will use blogs and other ways of distributing content ALSO. But as long as universities exist in their current form, academic researchers (professors and students) will need to publish in prestigious, peer-reviewed venues. 
That system will be MUCH harder to change.Anonymoushttps://www.blogger.com/profile/15776616183969942139noreply@blogger.comtag:blogger.com,1999:blog-7862104.post-25344408341556785142009-12-16T19:13:50.946-08:00Just to bring a bit of comic relief to this very real issue in CHI:<br /><br />http://www.youtube.com/watch?v=-VRBWLpYCPY<br /><br />Jose RojasJose Rojashttps://www.blogger.com/profile/07193792593532065501noreply@blogger.comtag:blogger.com,1999:blog-7862104.post-59208214889776436852009-12-08T22:24:09.081-08:00Interesting discussion on "scientific novelty" vs. "systems engineering"...<br /><br />Worth reading in this context:<br /><br />Sharp, R. and Rehman, K. The 2005 UbiApp Workshop: What Makes Good Application-Led Research? IEEE Pervasive Computing, vol. 4, no. 3, pp. 80-82, 2005.<br /><br />Same issue, although on a somewhat different playground of ubiquitous computing.<br /><br />-TimppaTimo Ojalanoreply@blogger.comtag:blogger.com,1999:blog-7862104.post-22559719281742020952009-12-08T10:21:30.067-08:00Ryan, I love that "Sciencification". Is this, perhaps, a case of "be careful what you wish for"? Computer science in general has had a persistent complex about whether or not it's really a science (any field that has science in the title...). As a result, the field has arguably been steadily drifting from its "invent something cool and see if people find it useful" roots (too engineering focused) to instead focus on measuring phenomena ("if we can measure it, it's science"). You could argue (depending on your definitions) that CHI, UIST, and SIGGRAPH are certainly more scientific now than 10 years ago. 
Of course, they're also (for those of a more systems-y, engineering bent) much less interesting.<br /><br />I've personally begun to feel that a related problem with much of the research community is what could be termed "Capitalification" (to keep it in line with Sciencification). People seem much more interested in doing Science and Research these days than science and research. My dictionary defines "research" as "diligent and systematic inquiry or investigation into a subject in order to discover or revise facts, theories, applications, etc." My working definition of "Research" is "work that is likely to be accepted in an academic conference", and it is characterized by a strong differentiation between Research (can be published) and "advanced development" (which can be ground-breaking and innovative but is difficult if not impossible to publish). Companies like Apple, Google, Facebook, and Twitter do "advanced development" because, even though they have a fundamental impact on our use of computers, they don't publish; never mind that the dictionary definition of research is arguably a strong fit to what they do. However, studying mockups that don't work beyond the lab and that no one uses for more than 30 minutes is Research, because you can publish it. Similarly, a dictionary definition of science is "knowledge gained by systematic study", while Science appears to involve knowledge gained by running a laboratory study and doing some statistical analyses (Science appears to require p-values).<br /><br />However, I suspect that to a large extent this problem is self-correcting. Conferences like CHI and SIGGRAPH are dependent on attendees, and academic research is dependent on funding. If conferences and Research become sufficiently irrelevant, people will stop paying attention and resources will dry up, in which case Researchers will have to adapt.<br /><br />I suspect that we may see a situation with parallels to newspapers. 
When the tools to build innovative software and/or hardware are easily and cheaply available, and tools like blogs and open-source software repositories make it easy to share those innovations, will people really remain reliant on walled gardens like conferences and journals for distributing their content?Unknownhttps://www.blogger.com/profile/07476949785539433911noreply@blogger.comtag:blogger.com,1999:blog-7862104.post-56066205865103257422009-12-08T00:16:15.161-08:00Ryan, <br /><br />Great comparisons to issues going on at SIGGRAPH! I appreciate your commentary here and would like to keep in touch about how to reform the system for systems work. :)Anonymoushttps://www.blogger.com/profile/15776616183969942139noreply@blogger.comtag:blogger.com,1999:blog-7862104.post-20811914792327843912009-12-07T21:14:45.236-08:00Excellent (and depressing) post, James. I do interactive systems research in computer graphics, where the word "system" is the kiss of death, well known to indicate second-class work. I have had reviewers insist that I clearly mark my abstract with the scarlet word, lest any reader be confused and think they were looking at a "technique" or "framework". I had been considering switching over to UIST, but it seems like that might be a mistake! <br />"The problem" with my field seems very similar to what you are saying here about CHI and UIST. At SIGGRAPH this year, the lifetime award winners (Rob Cook and Michael Kass) used their acceptance talks to (if I may brazenly paraphrase) ask why SIGGRAPH has become so boring, and to encourage reviewers to be more generous with risk-taking research. We even had our own version of your post, when Michael Ashikhmin "quit graphics" in 2006. There was an emergency session at SIGGRAPH that year, after which... nothing changed.<br />It seems like we have the same evaluation problem, too. SIGGRAPH reviewers have recently started cribbing from CHI, demanding that the kinds of studies they see in CHI "interaction-technique" papers be applied to new 3D modeling systems. Testing with real users seems to be considered a waste of time; it is preferable to do an absurd (but measurable) comparison in the lab.<br />"Sciencification" is the underlying issue, in my opinion. 
The consensus seems to be that computer science needs to grow up and become a real science, and you can't be "doing science" unless you are measuring something. I was recently told that “if you don’t have some kind of evaluation metric, then you’re really just randomly sampling a high-dimensional space”. The implication was clear – find a squared error, or a statistic, or just something, *anything*, that will make your paper easy to review in an hour or less. Or wise up and do something easier to evaluate.<br />Maybe I’m a pessimist, but I don’t think the system can be changed. The simple fact is that unless the field is shrinking, new researchers outnumber the old. Anyone graduating now was raised in the current system, where getting a job/grant/tenure means optimizing for paper count, and the best way to do that is to stick with safe, easy-to-review work. And guess who is going to be running the papers committee in a few years…<br />I think for most fields (that survive), there is an interesting part at the beginning, when the field is small and dynamic, and then it gets big and dull, because we publish or perish, and the law of large numbers guarantees that the average paper in a big field is going to be… average. So, I vote for starting UISTSys. It will be small and interesting, at least for a while.<br />(This is long, but you might find it interesting: http://www.cs.utah.edu/~michael/leaving.html)Ryan Schmidthttp://www.dgp.toronto.edu/~rmsnoreply@blogger.comtag:blogger.com,1999:blog-7862104.post-61824971113492331852009-12-04T05:22:23.102-08:00Thanks for your comments Larry. I'd like to see how new approaches, like what you hint at, could be published at CHI. 
Your work has had such practical impact, so I especially appreciate the comment.Anonymoushttps://www.blogger.com/profile/15776616183969942139noreply@blogger.comtag:blogger.com,1999:blog-7862104.post-62174351184466275322009-12-04T05:18:27.886-08:00This is a brave and brilliant post with many fine contributions. It obviously touched a sensitive and significant subject. I am first and foremost a practitioner, a working designer. For the most part, I don't do real research, incremental or otherwise, but I have been a persistent innovator for decades and believe that those of us working at the coal face of IxD and design methods have something to offer the CHI community. However, there are certain topics and stances that are completely unpublishable because they violate or question the accepted canon and received "wisdom" of the field. Examples are alternatives to ethnography-based field inquiry (one reviewer said "there are no alternatives, it is the only acceptable method") and alternatives to user testing. One paper, in part about what to do when you CANNOT do user testing (there are such situations in the real world but not, apparently, in the world of CHI referees), has been repeatedly rejected because the project did not do testing. Duh; that's the point.<br />Anonymous reviewing is also a mockery because only the referees are anonymous; the authors are almost invariably known to the reviewers. On one recent rejection, a reviewer even went to some lengths to track down and verify who the authors were, then criticized us for failing to anonymize. (We had actually followed the published rules to the letter.) I have nearly 200 published papers including some classics and widely cited works, yet I have never been able to get anything published at CHI. If the old pros like me are doomed, pity the poor young academics needing "quality" placements. 
The reviewing process seems to have become increasingly capricious, with reviews that can be almost completely disconnected from the content of the paper. Even high scores can be ignored, as with one recent paper that was strongly recommended for acceptance by the reviewers, but the chair didn't like it, so the reviews were ignored and their recommendations were overridden. The scientific community puts great faith (the correct word) in blind refereeing, but even there, studies in the sociology of science suggest the process is broken. I am not sure "crowdsourcing" the refereeing is the answer, but we certainly ought to be trying some alternatives. CHI's "relaxed anonymous" model is a step, but a step too small.Larry Constantinenoreply@blogger.comtag:blogger.com,1999:blog-7862104.post-91576603037997611972009-11-24T17:18:07.950-08:00James, sorry to ask such a simple question (maybe I missed this somewhere in the discussion), but why don't you just start a SIGCHI Subcommittee on Systems? Desney would probably be cool with that.<br /><br />The next logical step beyond CHI Subcommittees is CHI Symposia, so you would eventually (sooner if you pushed for it) have your own symposium without losing the CHI brand.<br /><br />Don't get me wrong; starting a new thing is fun, though a little stressful. I have veered off the HCI track a little bit and just started the Symposium on Simulation for Architecture and Urban Design (www.simaud.org). Luckily, it is part of an existing conference, so a majority of the work is already being done...Azam Khanhttp://www.autodeskresearch.com/azamnoreply@blogger.comtag:blogger.com,1999:blog-7862104.post-84631483551691119042009-11-19T03:59:49.444-08:00As a normal researcher, I think everyone goes through this process of having a submission rejected. 
My opinion is that if you have a good paper, it will always be published somewhere. Therefore, passing judgment on one conference because you were rejected does not help you improve the quality of your work. You must accept the fact that every reviewer tries to be objective, but we are human beings, and we are always a bit subjective.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-7862104.post-87581779668514384442009-11-17T16:42:39.544-08:00This is an important discussion about the health of CHI. I want to begin my comment by acknowledging that all the CHI reviewers and ACs are volunteers who donate their time and energy. There couldn't be a conference without them and I am grateful for their willingness to volunteer. That said, there is no place for condescending or derisive comments in written reviews. They are simply out of place. ACs have the responsibility to maintain professionalism and there needs to be some mechanism for them to toss out bad reviews.<br /><br />That said, from a manager's perspective there are several things about CHI that seem paradoxical. It's an applied field that is still so new that it is dominated by people who have chosen academic careers in research. CHI is also a field that has a much weaker theoretical base than many other technical fields, which increases the importance of analyzing application experience. My extension of Antti's suggestion is that we need to re-define what constitutes "good work" and "contribution" as the field matures. Reviewers who have not worked on software engineering projects may not have enough appreciation of how severe the constraints are on application projects. I would love to see well-designed experiments that compare the impact of new technology (not the confounded one asked of Jim). But few production managers are going to keep their jobs if they allocate funds to do the same application project twice in order to compare the results. 
Did I miss the definitive experiment that compared OOAD with earlier forms of programming on industrial-strength problems before it was widely adopted? Applied researchers and engineers both need to learn how well new ideas actually worked in advanced applications, and that's going to be messy. But CHI can't realize its full potential impact as a field without being applied successfully in this kind of engineering project environment. My read on Bill's first comment is that by excluding engineering work CHI really risks marginalizing itself.<br /><br />Software engineering projects also include key contributions from several other disciplines and are often led by program managers from other fields. A project can't really be understood without somehow describing these contributions. Reports about these applications are critically important to advance CHI, but all these factors add an even bigger scope for authors whose reports are supposed to fit into a 10-page paper. It's a similar challenge for reviewers, who are expected to cover such a wide scope.<br /><br />In large part to address these kinds of issues, the Engineering Community was established in CHI 2006. One important objective for the 2010 Engineering Community was to attract more submissions about CHI engineering research and the application of CHI research to software engineering projects. A second objective was to increase the influence of reviewers who have the technical background, experience, and passion to review engineering work in CHI.<br /><br />Based on this thread and other sources (including my own frustrations) I'd say we still have a long way to go on both objectives. But one positive step that I'd like to point out was the new, serious effort on case studies this year. Case studies now have up to 16 pages to allow for background, method description, project description, outcome, discussion, etc. They are archived and featured as talks in sessions that are the peer of paper sessions in the program. 
There was a much smaller set of submissions about engineering case studies, which allowed them to reach more qualified reviewers. <br /><br />I'll be curious to see whether case studies can provide a good forum for early applications of research and integrative design projects. Also, during the hand-off meeting to the CHI 2011 committee I plan to raise the need for ACs to use the Communities to help recruit qualified reviewers for paper submissions about engineering work.Unknownhttps://www.blogger.com/profile/12408701499506468375noreply@blogger.comtag:blogger.com,1999:blog-7862104.post-78047212785550146212009-11-17T12:13:31.245-08:00I've been thinking about this since you posted it, and I finally have a (different) concrete suggestion. I am attending BECC right now, a conference in a field that is journal-based. I've come to the conclusion that our conference-based model is the problem. I think our community needs to embrace abstracts and posters as a real way to contribute to meetings, and take our reviewers' artificial definition of quality out of the equation. People will put their best foot forward even without rigorous peer review, since that's socially expected, and the work will be interesting, novel, and broader in nature if this succeeds. Of course the work also needs to be published, and getting our journals to speed that up would be a nice counterpart to my proposed solution. TACCESS already has a 6-month turnaround. I'd love (concretely) to see alt.chi support this shift.jmankoffhttps://www.blogger.com/profile/01140958610146705422noreply@blogger.comtag:blogger.com,1999:blog-7862104.post-91438266055721466302009-11-16T15:25:49.760-08:00critical begets critical
<br /><br />Reviewer #1 is reading a paper which is similar in method and type to a paper she submitted to CHI last year. This paper is almost as good as the paper she submitted last year. That paper got all '3' ratings, with comments about needing more subjects and more control. She decides that since this paper has those same issues, and isn't as good in other ways, she will rate it a 2.5 overall.<br /><br />How do you convince Reviewer #1 to be more generous to others than others were to her?Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-7862104.post-2381584353938011872009-11-13T15:22:38.924-08:00Scenario:
<br />- There is a generally known problem which has been approached in a few different ways but to little overall avail.<br />- A researcher thinks they have an idea for an iPhone app which will help solve the problem.<br />- They build it and try it out with a few users who are not the ones with the problem but who are part of the process that has the problem.<br />- There aren't any practical examples as a result of this 'trial' (as one would expect).<br /><br />The CHI submission:<br />- does not explain how decisions were made when building the app to try to help solve the problem.<br />- makes assumptions about the problem itself without justifying those assumptions.<br />- does not show how the app helps solve a realistic example of the problem.<br /><br />Some would reject this work for one or both of two reasons. First, it doesn't look like "good research" but rather a pet project of building a tool which may or may not be of any use to anyone. Second, letting such work into CHI sets a higher bar for a legitimate attempt at solving the problem in a later year, without this work having helped the field in any way.<br /><br />Others would say "it is a good idea to be working on and the researcher had to build the app" and accept it.<br /><br />Which is closer to the right answer on whether to accept or reject the work?Anonnoreply@blogger.com