SceneKit interaction handling – Experiment439

A staple of science fiction movies has been 3D holographic visualizations and controls. Most efforts I’ve seen at taking real visualization data and putting it into a 3D context haven’t been terribly successful. At the same time, the advance of AR and VR makes me suspect that we should be able to take advantage of the additional dimension when displaying and visualizing data.

I started a project, named Experiment439, to go through the process of creating and building a few visualizations, seeing what I can do with them, and seeing what might be refined out into a library that can be re-used.

I wanted to take a shot at this leveraging Apple’s SceneKit 3D abstraction and see how far I could get.

The SceneKit abstraction and organization for scenes is a nice setup, although it’s weak in one area – delegating interaction controls.

The pattern I’m most familiar with is the View Controller setup (and its many variants, depending on how you display data). Within SceneKit, an SCNNode can encapsulate other nodes (and controls their overall placement in the view), so it makes a fairly close analogue to the embedding of views within each other that I’m familiar with from iOS and macOS development. Coming up with something that encapsulates and controls an SCNNode (or a set of SCNNodes) seems like a pretty doable (and useful) abstraction.
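To make that concrete, here’s a minimal sketch of the kind of abstraction I have in mind – the BarChartNodeController name and shape are my own invention, not anything from SceneKit – a controller object that owns a container SCNNode and manages the child nodes within it, much like a view controller owns a view hierarchy:

```swift
import SceneKit

/// A hypothetical "node controller": it owns a container SCNNode and
/// manages the child nodes inside it, much like a UIViewController
/// owns and manages a view hierarchy.
class BarChartNodeController {
    /// The node to attach to a parent scene; children are placed inside it.
    let containerNode = SCNNode()

    func update(with values: [Double]) {
        // Rebuild the child nodes from the data.
        containerNode.childNodes.forEach { $0.removeFromParentNode() }
        for (index, value) in values.enumerated() {
            let bar = SCNNode(geometry: SCNBox(width: 0.8,
                                               height: CGFloat(value),
                                               length: 0.8,
                                               chamferRadius: 0))
            // Position each bar relative to the container, not the scene,
            // so the whole chart moves when containerNode moves.
            bar.position = SCNVector3(Float(index), Float(value) / 2, 0)
            containerNode.addChildNode(bar)
        }
    }
}
```

Attaching the whole thing to a scene is then just `scene.rootNode.addChildNode(controller.containerNode)`, and moving or hiding the container moves or hides everything within it.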

The part that gets complicated quickly is handling interaction. User-invoked events in SceneKit today are limited to projecting hit-tests from the perspective of the camera that’s rendering the scene. In AR apps on iOS, for example, the camera can be navigating the 3D space, but when you want to select, move, or otherwise interact, you’re fairly constrained to mapping touch events projected through the camera.
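Here’s roughly what that projection looks like in code: a tap handler (the gesture wiring and surrounding class are assumed) that uses SCNView’s hitTest(_:options:) to cast a ray from the rendering camera through the touch point:

```swift
import SceneKit
import UIKit

class TapCoordinator: NSObject {
    // Assumes this is the action of a UITapGestureRecognizer added to an SCNView.
    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        guard let scnView = gesture.view as? SCNView else { return }
        let location = gesture.location(in: scnView)
        // hitTest projects a ray from the rendering camera through the
        // touch point; only nodes intersecting that ray come back.
        let results = scnView.hitTest(location, options: nil)
        if let first = results.first {
            print("tapped node:", first.node.name ?? "unnamed")
        }
    }
}
```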

I’ve seen a few iOS AR apps that use the camera’s positioning as a “control input” – painting or placing objects where the iOS camera is positioned as you move about an AR environment.
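A rough sketch of that technique, assuming an ARSCNView with a running session: the view’s pointOfView node tracks the device camera, so you can read its world position and drop geometry there.

```swift
import ARKit
import SceneKit

// Place a small sphere at the device camera's current position.
// Assumes `sceneView` is an ARSCNView with a running AR session.
func placeMarker(in sceneView: ARSCNView) {
    guard let camera = sceneView.pointOfView else { return }
    let marker = SCNNode(geometry: SCNSphere(radius: 0.005))
    // The point of view's worldPosition is the device's location
    // in the AR session's world coordinate space.
    marker.position = camera.worldPosition
    sceneView.scene.rootNode.addChildNode(marker)
}
```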

You can still navigate a 3D space and scene, and see projected data – both 2D and 3D – very effectively, but coming up with control interactions equivalent to what you get in Mac and iOS apps has been significantly trickier.

A single button that gets toggled on/off isn’t too bad, but as soon as you step into the world of trying to move a 3D object through the perspective of the camera – shifting a slider or indicating a range – it gets hellishly complex.
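The usual workaround I’ve seen for dragging is to project the node’s position to find its screen-space depth, then unproject the moving touch point at that same depth. A sketch, with the gesture setup and node selection assumed:

```swift
import SceneKit
import UIKit

class DragCoordinator: NSObject {
    // The node being dragged, selected earlier (e.g. via a hit-test).
    var draggedNode: SCNNode?

    // Assumes this is the action of a UIPanGestureRecognizer added to an SCNView.
    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard let scnView = gesture.view as? SCNView,
              let node = draggedNode else { return }
        let location = gesture.location(in: scnView)
        // Project the node's current world position to recover its depth
        // (the z component, in normalized screen coordinates).
        let projected = scnView.projectPoint(node.worldPosition)
        // Unproject the 2D touch location at that same depth to get a
        // 3D point in the plane parallel to the screen.
        let unprojected = scnView.unprojectPoint(
            SCNVector3(Float(location.x), Float(location.y), projected.z))
        node.worldPosition = unprojected
    }
}
```

Even that only buys you movement in a plane parallel to the screen at a fixed depth; constraining the drag to an arbitrary axis or surface takes considerably more bookkeeping.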

With Apple’s WWDC 2019 around the corner (tomorrow, as I publish this) and rumors of significant updates to AR libraries and technologies, I’m hoping that there may be something to advance this space and make this experiment a bit easier – and, even more, to expand the capabilities of interacting with the displayed environment.

iOS AR apps today are a glorified window into a 3D space – amazing and wonderful, but heavily constrained. They let me navigate around visualization spaces more naturally than anything pinned to a desktop monitor, but at the cost of physically holding the device that I would also use to interact with the environment. I can’t help but feel a bit of jealousy for the VR controllers that track in space – most recently, reading the glowing reviews of the Valve Index VR controllers.

Better interaction capabilities of some kind will be key to taking AR beyond games and windows onto data that are nifty to see but not entirely useful. I’m hoping to see hints of what might be available or coming in the Apple ecosystem in the next few days.

Meanwhile, there’s still a tremendous amount to be done to create visualizations and display them usefully in 3D. A lot of the inspiration for the current structure of my experiment comes from Mike Bostock‘s amazing D3.js library, which has been so successful in helping people create effective data visualization and exploration tools.
