Friday, November 27, 2009

Augmented Reality is mainstream

There was a short article on Augmented Reality (AR) in the NY Times Sunday Magazine a couple of weeks ago. I think this really shows AR has finally hit the mainstream. Of course, it was clear the author had no idea where this work came from (e.g., Feiner's group at Columbia, Blair's group at GVU, and Billinghurst's group at the HIT Lab in New Zealand). What do people think?

Thursday, November 12, 2009

Surface Area to Power the World

I thought these images were cute... Can anyone confirm the numbers (especially for solar)? The area looks much smaller than what I'd seen in some talks previously.
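For what it's worth, here is a minimal Python sketch of the kind of back-of-envelope calculation presumably behind these images. The demand, irradiance, and efficiency values are my own rough assumptions (not the numbers from the image), so treat the output only as an order-of-magnitude sanity check.

# Back-of-envelope check (my own rough assumptions, not the numbers from the image):
#   ~15 TW average global primary power demand (circa 2009)
#   ~200 W/m^2 time-averaged solar irradiance at ground level (day/night and weather averaged)
#   ~15% panel conversion efficiency

TOTAL_DEMAND_W = 15e12        # assumed average global power demand, in watts
AVG_IRRADIANCE_W_M2 = 200.0   # assumed time-averaged irradiance on the panels, W/m^2
PANEL_EFFICIENCY = 0.15       # assumed panel conversion efficiency

# Solve (irradiance * efficiency * area) = demand for the required area.
area_m2 = TOTAL_DEMAND_W / (AVG_IRRADIANCE_W_M2 * PANEL_EFFICIENCY)
area_km2 = area_m2 / 1e6
side_km = area_km2 ** 0.5

print(f"~{area_km2:,.0f} km^2, i.e., a square roughly {side_km:,.0f} km on a side")
# With these assumptions: ~500,000 km^2, a square about 700 km per side. The result
# is very sensitive to the assumed demand and efficiency, which is probably why
# different talks and images show such different areas.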


Saturday, November 07, 2009

I give up on CHI/UIST

The CHI reviews just came out and I have to say I'm pretty unhappy... not with the numbers per se (one paper I co-authored has a 4.5 average out of 5, and I'm sure I'll get a fair number of papers accepted), but with the attitude in the reviews. The reviewers simply do not value how hard it is to build real systems and to run controlled studies on real systems for real tasks. Contrast this with how easy it is to build a new interaction technique and then run tight, controlled studies on it with small, artificial tasks (don't tell me this is not true, as I have done it and published good papers in this style as well).

I really am ready to give up on CHI / UIST and go elsewhere (to another existing community or create a new one -- UISTSys anyone?).
I've talked about this for 3-5 years with many of you, but I think I've finally had it, as there has really been no change. In fact, I think it has gotten worse. The highest-ranked paper we wrote took 6-10 weeks of work and is well written, interesting to read, and synthesizes many studies from multiple communities. It is valuable to the CHI community, but it invents nothing new. I'd love to see it published at CHI, and I think there should be room for multiple kinds of work at CHI (including nice surveys, opinion pieces, interaction techniques, fieldwork, and systems work).

The papers we have submitted with truly new ideas and techniques, and years of work behind them, get reviews asking us to do 2-4 more years of work. For example, reviewers ask you to have a completely different system built by another team with no knowledge of your ideas and to run an A vs. B test against it (because the commercial system you compared to had different goals in mind). Oh, and 8-10 participants doing 3-4 hour sessions each isn't enough for an evaluation; you need lots more... They go on and on like this, essentially demanding a level of rigor that is almost impossible to meet within the career of a graduate student.

This attitude is a joke, and it offers researchers no incentive to do systems work. Why should they? Why should we put 3-4 person-years into every CHI publication when we can instead do 8 weeks of work on an idea piece, or create a new interaction technique and test it tightly in 8-12 weeks, and get a full CHI paper? I know it is not about counting publications, but until hiring and tenure policies change, this is essentially what happens in the real world. The HCI systems student with 3 papers over their career won't even get an interview. Nor will systems papers win best paper awards (yes, it happens occasionally, but I know for a fact that the winners are usually written by big teams doing 3-4 person-years of work).

Don't tell me that as much systems work is appearing now as in the past. It is not true, and many of the systems papers that do get in require big teams (yes, 3-4 person-years per paper). When will this community wake up and understand that it is going to drive out work on creating whole new systems (rather than small pieces of systems) and cede that important endeavor to industry?

One might think that the recent papers on this topic by Dan Olsen at UIST and by Saul Greenberg and Bill Buxton at CHI would have changed things, but I do not believe the community is listening. What is interesting is that it is probably the HCI systems researchers themselves who are at fault. We are our own worst enemies. I think we have been blinded by the perception that "true scientific" research is found only in controlled experiments and nice statistics.

What is the answer? I believe we need a new conference that values HCI systems work. I have also come to agree with Jonathan Grudin that conference acceptance rates need to be much higher (I'd advocate 30-35%) so that interesting, innovative work is not left out. This conference should be coupled with a coordinated, prestigious journal with a fast publication cycle (e.g., electronic publication less than 6 months after the conference paper first appears). That would give us the best of both worlds: systems publications seen by the larger community, plus the time (9-12 months) to do additional work and make the research more rigorous.

Addendum:
This post started as a status update on Facebook, but I quickly went over the maximum size for a status update (which I had never run into before); hence this blog post. Note that it was written hastily and late at night, so don't take it as a scholarly attempt to solve this problem (i.e., I cite no statistics to back up my claims here!). Nor is it an attempt to influence the PC on my papers under review; I really couldn't care less about any individual paper. It is the trend over time that has me upset. I've done quite well at publishing at CHI, so this is not sour grapes. It is frustration at how hard it has become to publish the papers that I believe are the most important. If it is happening to me, it is happening to many other people.