The language of gestures, v0.9

Since I woke up this morning, I’ve been thinking about the patterns and new UI concepts that we are becoming accustomed to with this first release of the iPad. Like a lot of others, I’m intensely interested in the productivity apps: Pages, Keynote, Brushes, SketchBook Pro, OmniGraffle. The iPad applications that reach beyond the brief interface that is available, and so effective, on the iPhone. The applications that work on documents stored on the iPad (although, like many others, I wish they were stored in the cloud instead).

This past Thursday evening, after a very packed Seattle Xcoders meeting, I was chatting with a couple of fellows from Omni. They mentioned that in creating OmniGraffle, they had focused on gestures that were obvious and could all be explored quickly: tap, double-tap, tap-and-hold. All the functionality of the application had to be accessible from that basis. It makes sense, especially when it comes to needing to test without a device! And from what I’ve seen in the applications so far, most apps can be run with this basic set of gestures.
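For the curious, here’s roughly what that basic vocabulary looks like wired up with UIKit’s gesture recognizers. This is a sketch of my own, not Omni’s code, and the handler names are made up:

```swift
import UIKit

// A rough sketch of the basic gesture vocabulary: tap, double-tap,
// and tap-and-hold. Handler names are hypothetical.
class CanvasViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        view.addGestureRecognizer(tap)

        let doubleTap = UITapGestureRecognizer(target: self, action: #selector(handleDoubleTap(_:)))
        doubleTap.numberOfTapsRequired = 2
        view.addGestureRecognizer(doubleTap)

        let hold = UILongPressGestureRecognizer(target: self, action: #selector(handleHold(_:)))
        view.addGestureRecognizer(hold)
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        // e.g. select whatever is under the touch
    }

    @objc func handleDoubleTap(_ gesture: UITapGestureRecognizer) {
        // e.g. zoom in on, or open, the tapped item
    }

    @objc func handleHold(_ gesture: UILongPressGestureRecognizer) {
        // e.g. show the loupe or start a drag
    }
}
```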

Keynote and Pages each ship with an introduction document that acts as a sort of README for the basic use of the application. OmniGraffle followed suit. SketchBook Pro, on the other hand, has a little embedded video to show the functions, and is probably the most complex from a gesture perspective. A three-finger tap brings up the controls, which otherwise fade away to leave you focused on painting. Two-finger pinch-to-zoom, often paired with rotation, appears to be showing up in many applications now too.
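Those two patterns, the three-finger tap to toggle the controls and the simultaneous pinch-and-rotate, are straightforward to express with recognizers as well. Again, a sketch under my own assumptions (the view names and handlers are hypothetical), not anyone’s actual code:

```swift
import UIKit

// Sketch: a three-finger tap that toggles a tool palette, plus two-finger
// pinch and rotation recognized at the same time. All names are hypothetical.
class PaintingViewController: UIViewController, UIGestureRecognizerDelegate {
    let paletteView = UIView()      // stand-in for the tool palette
    let canvasView = UIImageView()  // stand-in for the painting canvas

    override func viewDidLoad() {
        super.viewDidLoad()

        let threeFingerTap = UITapGestureRecognizer(target: self, action: #selector(togglePalette(_:)))
        threeFingerTap.numberOfTouchesRequired = 3
        view.addGestureRecognizer(threeFingerTap)

        let pinch = UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:)))
        let rotate = UIRotationGestureRecognizer(target: self, action: #selector(handleRotate(_:)))
        pinch.delegate = self
        rotate.delegate = self
        canvasView.isUserInteractionEnabled = true
        canvasView.addGestureRecognizer(pinch)
        canvasView.addGestureRecognizer(rotate)
    }

    // Let the pinch and rotation recognizers run simultaneously.
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        return true
    }

    @objc func togglePalette(_ gesture: UITapGestureRecognizer) {
        UIView.animate(withDuration: 0.25) {
            self.paletteView.alpha = self.paletteView.alpha == 0 ? 1 : 0
        }
    }

    @objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        canvasView.transform = canvasView.transform.scaledBy(x: gesture.scale, y: gesture.scale)
        gesture.scale = 1.0   // reset so each callback applies an incremental delta
    }

    @objc func handleRotate(_ gesture: UIRotationGestureRecognizer) {
        canvasView.transform = canvasView.transform.rotated(by: gesture.rotation)
        gesture.rotation = 0.0
    }
}
```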

The downside of these tap-and-do-something sequences, which appear to be gaining general traction for pulling up those damned handy popover menus, is that they seem to slow down the interface. The trickiest of these sequences is the tap-to-select followed by another tap to bring up that menu. I find myself wanting to race forward, and the sequence gets confused with a double-tap gesture instead. I have to slow myself down to make sure I’m not confusing the system. The kicker is that instead of thinking about what I’m drawing or sketching or writing, I’m now thinking about the interface. That’s the last thing any application developer wants.
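Part of that hesitation is baked into how recognizers disambiguate: if a single tap and a double-tap coexist on the same view, the single tap typically has to wait for the double-tap to fail before it fires. A sketch of that setup (the function and action names are mine, not from any particular app):

```swift
import UIKit

// Sketch of why tap-then-tap feels slow when a double-tap also exists:
// the single tap is delayed until the double-tap has had a chance to fail.
func configureSelectionTaps(on view: UIView, target: Any, selectAction: Selector, zoomAction: Selector) {
    let doubleTap = UITapGestureRecognizer(target: target, action: zoomAction)
    doubleTap.numberOfTapsRequired = 2

    let singleTap = UITapGestureRecognizer(target: target, action: selectAction)
    singleTap.numberOfTapsRequired = 1
    // The disambiguation step: the select tap only fires once the system is
    // sure the touch wasn't the start of a double-tap.
    singleTap.require(toFail: doubleTap)

    view.addGestureRecognizer(doubleTap)
    view.addGestureRecognizer(singleTap)
}
```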

Tap-and-hold is a little inconsistent as yet. In editors, you commonly expect it to pull up a spyglass for detailed positioning. I really like what Omni did with it, allowing a tap-and-hold followed by a drag to act as a selection-box mechanism. I don’t know if many are doing it, but I’d love to see a three-finger tap-and-hold as a general “select this area of things” mechanism. And yeah – riffing that idea straight from the iPad game Command & Conquer.
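To make that idea concrete, here’s a speculative sketch of a three-finger tap-and-hold that rubber-bands a selection rectangle as you drag. The selection layer and the idea of selecting objects at the end are hypothetical; this is just how I’d imagine wiring it up:

```swift
import UIKit

// Speculative sketch: three-finger tap-and-hold drags out a selection box.
// selectionLayer and the final "select objects" step are hypothetical.
class SelectionViewController: UIViewController {
    private var selectionStart: CGPoint = .zero
    private let selectionLayer = CAShapeLayer()

    override func viewDidLoad() {
        super.viewDidLoad()
        selectionLayer.fillColor = UIColor.blue.withAlphaComponent(0.2).cgColor
        selectionLayer.strokeColor = UIColor.blue.cgColor
        view.layer.addSublayer(selectionLayer)

        let areaSelect = UILongPressGestureRecognizer(target: self, action: #selector(handleAreaSelect(_:)))
        areaSelect.numberOfTouchesRequired = 3
        view.addGestureRecognizer(areaSelect)
    }

    @objc func handleAreaSelect(_ gesture: UILongPressGestureRecognizer) {
        let point = gesture.location(in: view)   // centroid of the three touches
        switch gesture.state {
        case .began:
            selectionStart = point
        case .changed:
            // Rubber-band a rectangle from the starting point to the current centroid.
            let rect = CGRect(x: min(selectionStart.x, point.x),
                              y: min(selectionStart.y, point.y),
                              width: abs(point.x - selectionStart.x),
                              height: abs(point.y - selectionStart.y))
            selectionLayer.path = UIBezierPath(rect: rect).cgPath
        case .ended, .cancelled:
            selectionLayer.path = nil
            // In a real editor, selecting the objects inside the final rectangle would happen here.
        default:
            break
        }
    }
}
```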

ps: sorry for the relative lack of links. I’m writing this with WordPress for the iPad, and it turns out that making HTML links on text is a complete pain in the butt. The built-in keyboard is just very ill-suited to the effort. Although props to the WP developers: the text editor saw I was trying to make a link, intercepted the effort, and helped me place it. Another place where that upcoming multitasking will be damned handy – look up a link, come back, and paste it into place.
