Maybe failure is a strong word… wait – yeah, it is. I meant it to be.
I followed (along with a huge number of other folks) the whiplash of tech hype for the project called Origami (good name, btw). And while I was primarily watching the teasers on that site, Engadget was doing an excellent job of digging up the details at CeBIT. They had an initial So what is Origami post that was so forthright and informative that I almost didn’t believe it (that’s saying something), but then the details really began to roll out, and by the actual Intel press release, it was almost a letdown. Heh.
So the reason I think this is going to fail as it stands? The UI is broken for finger-based input. It’s a tablet PC – which is a cobbled-together UI that tried to take a pen and use it as a mouse. That’s a whole different issue, as I found out with a tablet PC recently: the vast majority of programs that I used just didn’t work worth a damn with the pen-based interactions. Aside from the interesting keypad/touchpad tablet PC pack that the UMPCs will have – I think that is still going to be a problem.
Think about it – how will you make the OS realize you intend to type into a specific text field (i.e. give it the focus)? Tab on a keyboard? That’s how I do it quickly with a keyboard. Click on it with a mouse? Okay – let’s translate that – poke at it with a finger?
The size of the UI targets was already difficult for a pen, and I suspect it’s almost completely whacked for a touch-screen mechanism.
I’m not saying it can’t ever work – just that there is precious little “good software” for this platform, and that the software really needs to anticipate and work with the new interaction mediums that users will bring to it with a device like this.
The best software I ever ran into on the straight-up tablet PC (as opposed to a UMPC, which I haven’t yet seen) is Microsoft’s OneNote. They had clearly thought about making the whole thing work with a pen in your hand, although even they had some “standard GUI widget problems” to my mind. Tabs were just too damn small – something that is an easy target with a mouse is a very different kind of target with a pen. I can only imagine that it would get more difficult if you were just poking with your finger.
Of course, you can’t even get into this arena without at least paying a little homage to the now-gone Apple Newton. Just way the hell ahead of its time, it had some incredible technology packed under the covers. The handwriting recognition got the biggest hype, but I think its overall UI experience was really the most impressive part. Granted, Palm sort of kicked its butt with the simpler and more focused Graffiti system – quicker to learn. But the breadth of functionality on the Newton was heading in the right direction for a general-purpose device.
More recently, there’s been a swell of interest in multi-touch interfaces, based mostly on the amazing demonstration of technology that Jeff Han put together with FTIR touch. If a single finger pointing and pecking would warp the UI experience – take a look at that video and think how multi-touch will mess with the current mouse/pointer idiom. The input data alone (multiple points and their interactions with each other) is a massive leap outward in potential. I have no idea if a UMPC is multi-touch capable, but that’s clearly a means of additional human-computer interaction that I expect will get explored in the near future.