Microsoft has released an updated version of the TabletPC OS as part of Windows XP SP2 (it is downloadable now if you want to try it out). The most visible change for the TabletPC is the improved TIP (Text Input Panel). It is better, but hardly a radical improvement to the UI.
At a recent TabletPC workshop put on by Microsoft and the University of Washington, many researchers complained about Microsoft's refusal to release a pen-centric UI for the TabletPC. Instead, Microsoft has relied on a UI that is merely incremental over the existing desktop UI: the pen is little more than a mouse pointer in the existing GUI. The TIP is just one example of the kind of hack needed to let the pen enter text into existing applications without those applications having to know anything about it.
The reason Microsoft has given many of us privately in the past for this decision is that they actually did develop a purely pen-centric UI, and it tested quite well. The problem was that users either reported or exhibited problems when moving between that novel UI and their existing GUI apps, which they would still have to do in the future. This feels like either a bad compromise or a great research problem.
How do you allow someone to learn an entirely new UI that takes advantage of the unique input characteristics of the TabletPC platform, yet still allow them to easily use their existing GUI metaphor when it is more appropriate or necessary? Can you make this new UI easy to learn and allow it to become more pen-centric as the user gains expertise? Think about how Marking Menus start out as a simple pie menu (a fairly traditional GUI interaction -- except for the circular part) and then move to a gestural interface as the user becomes more expert. You could push this research even further if you consider multimodal UIs that use pen, gesture, and speech rather than just pens.
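To make that novice-to-expert transition concrete, here is a minimal sketch (in Python, with hypothetical command names) of the core idea behind a marking menu: the same angle-to-command mapping serves both the novice, who pauses to see the pie menu drawn on screen, and the expert, who just flicks the pen in the remembered direction.

```python
import math

# Hypothetical commands arranged in an 8-slice pie menu.
# A pen stroke's direction selects a slice; novices see the pie
# drawn after a short pause, experts flick without waiting --
# the selection logic is identical either way.
COMMANDS = ["open", "save", "cut", "copy", "paste", "undo", "redo", "close"]

def select_command(dx, dy):
    """Map a pen stroke vector (dx, dy) to one of 8 pie slices."""
    # Screen y grows downward, so negate dy to get a standard angle.
    angle = math.degrees(math.atan2(-dy, dx)) % 360
    # 45-degree slices, centered on the compass directions.
    slice_index = int(((angle + 22.5) % 360) // 45)
    return COMMANDS[slice_index]
```

The design point this illustrates is that the novice path (pause, see the menu, pick a slice) and the expert path (flick immediately) both route through the same selection function, so every use of the visible menu is also rehearsal of the gesture.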
What should Microsoft do? What is the interesting research here for those of us who want to push harder? What do you think?