SceneKit interaction handling – Experiment439

A staple of science fiction movies has been 3D holographic visualizations and controls. Most efforts I’ve seen at taking real visualization data and putting them into a 3D context haven’t been terribly successful. At the same time, the advance of AR and VR makes me suspect that we should be able to take advantage of the additional dimension in displaying and visualizing data.

I started a project, named Experiment439, to go through the process of creating and building a few visualizations, seeing what I can do with them, and seeing what might be refined out of them into a library that can be re-used.

I wanted to take a shot at this leveraging Apple’s SceneKit 3D abstraction and see how far I could get.

The SceneKit abstraction and organization for scenes is a nice setup, although it’s weak in one area – delegating interaction controls.

The pattern I’m most familiar with is the View Controller setup (and its many variants depending on how you display data). Within SceneKit, an SCNNode can encapsulate other nodes (and controls their overall placement in the view), so it makes a fairly close analogue to the embedding of views within each other that I’m familiar with from IOS and MacOS development. Coming up with something that encapsulates and controls an SCNNode (or set of SCNNodes) seems like a pretty doable (and useful) abstraction.
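To make that idea a bit more concrete, here is a minimal sketch of the kind of abstraction I have in mind – the NodeController class and its responsibilities are hypothetical, not anything SceneKit provides:

```swift
import SceneKit

// Hypothetical sketch: a "node controller" that owns and manages a subtree of
// SCNNodes, loosely mirroring how a UIViewController owns a view hierarchy.
class NodeController {
    /// The root node this controller manages; add it to a scene to display it.
    let rootNode = SCNNode()

    /// Child controllers, mirroring view controller containment.
    private(set) var children: [NodeController] = []

    /// Embed another controller's nodes under this one, with an optional offset.
    func addChild(_ child: NodeController, at position: SCNVector3 = SCNVector3Zero) {
        child.rootNode.position = position
        rootNode.addChildNode(child.rootNode)
        children.append(child)
    }

    /// Subclasses override this to build up their node hierarchy.
    func buildNodes() { }
}
```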

The part that gets complicated quickly is handling interaction. User-invoked events in SceneKit today are limited to projecting hit-tests from the perspective of the camera that’s rendering the scene. In the case of AR apps on IOS for example, the camera can be navigating the 3D space, but when you want to select, move, or otherwise interact you’re fairly constrained to mapping touch events projected through the camera.
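For reference, that projection is what SceneKit’s hit-testing gives you: a sketch of routing a tap through the rendering camera in an SCNView (the gesture wiring here is assumed, not code from my project):

```swift
import SceneKit
import UIKit

class SceneViewController: UIViewController {
    @IBOutlet var scnView: SCNView!

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        // Project the 2D touch location into the scene from the camera's perspective.
        let location = gesture.location(in: scnView)
        let hits = scnView.hitTest(location, options: nil)
        // The first result is the node closest to the camera along that ray.
        if let hitNode = hits.first?.node {
            print("touched node: \(hitNode.name ?? "unnamed")")
        }
    }
}
```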

I’ve seen a few IOS AR apps that use the camera’s positioning as a “control input” – painting or placing objects where the IOS camera is positioned as you move about an AR environment.

You can still navigate a 3D space and scene, and see projected data – both 2D and 3D – very effectively, but coming up with equivalent interactions to what you get on Mac and IOS apps – control interactions – has been significantly trickier.

A single button that gets toggled on/off isn’t too bad, but as soon as you step into the world of trying to move a 3D object through the perspective of the camera – shifting a slider or indicating a range – it gets hellishly complex.

With Apple’s WWDC 2019 around the corner (tomorrow as I publish this) and the rumors of significant updates to AR libraries and technologies, I’m hoping that there may be something to advance this space and make this experiment a bit easier, and even more to expand on the capabilities of interacting with the displayed environment.

IOS AR apps today are glorified windows into a 3D space – amazing and wonderful, but heavily constrained. They let me navigate around visualization spaces more naturally than anything pinned to a desktop monitor, but at the cost of physically holding the device that you would also use to interact with the environment. I can’t help but feel a bit of jealousy for VR controllers that track in space – most recently after the glowing reviews of the Valve Index VR controllers.

Better interaction capabilities of some kind will be key to taking AR beyond games and windows on data that are nifty to see but not entirely useful. I’m hoping to see hints of what might be available or coming in the Apple ecosystem in the next few days.

Meanwhile, there is still a tremendous amount to be done to create visualizations and display them usefully in 3D. A lot of the inspiration for the current structure of my experiment has come from Mike Bostock’s amazing D3.js library, which has been so successful in helping people create effective data visualization and exploration tools.

IOS Dev Diary – using UIDocument

I have been working on an artist utility app whose primary purpose is to present an image with a super-thin grid overlay. The inspiration came from the cropping functionality in the Photos app – but that grid is ephemeral to the act of cropping an image, and isn’t easily viewable on a continued basis (such as on an iPad) when you want it to support your sketching or painting. A grid like this serves a couple of purposes: one is the “process by Leonardo” for helping to capture and copy an image by hand; the other is to double-check the framing and composition against what’s called the Rule of Thirds.

I originally didn’t think of this as an application that would have or use a document format, but after trying it out a bit and getting some feedback on the current usage, it became abundantly clear that it would benefit tremendously from being able to save the image and the framing settings that show the grid overlay. So naturally, I started digging into how to enable this, which led directly to UIDocument.

Using UIDocument pretty quickly raised the question of supporting a viewer for the files, which led to researching UIDocumentBrowser – a surprisingly invasive design change. Not bad, mind you – just a lot of moving parts and new concepts:

  • UIDocument instances are asynchronous – loading and saving the contents is separate from instantiating the document.
  • UIDocuments support cloud-hosted services from the get-go – which means they also include a concept of document states that might be surprising, including inConflict and editingDisabled, in addition to states reflecting loading, saving, and error conditions while doing these asynchronous actions.
  • UIDocument is built to be subclassed, but how you handle tracking the state changes & async is up to you.
  • UIDocumentBrowser is built to be controlled through a delegate setup on UIDocumentBrowserViewController, which you subclass – and it also demands to be the root of the view hierarchy.

Since my document data included UIImage and UIColor, both of which are annoying to persist with Swift struct coding, I ended up using NSKeyedArchiver, and then later NSSecureCoding, to save out the document.
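The document subclass ends up shaped roughly like this – a sketch using a hypothetical GridSettings model class rather than my actual document code:

```swift
import UIKit

// Hypothetical model class; the real document holds a UIImage, UIColor, and grid settings.
class GridSettings: NSObject, NSSecureCoding {
    static var supportsSecureCoding: Bool { return true }
    var lineColor: UIColor = .black

    override init() { super.init() }

    required init?(coder: NSCoder) {
        lineColor = coder.decodeObject(of: UIColor.self, forKey: "lineColor") ?? .black
        super.init()
    }

    func encode(with coder: NSCoder) {
        coder.encode(lineColor, forKey: "lineColor")
    }
}

class GridDocument: UIDocument {
    var settings = GridSettings()

    // Called when UIDocument saves; return the serialized contents.
    override func contents(forType typeName: String) throws -> Any {
        return try NSKeyedArchiver.archivedData(withRootObject: settings,
                                                requiringSecureCoding: true)
    }

    // Called (asynchronously) when UIDocument loads; decode the serialized contents.
    override func load(fromContents contents: Any, ofType typeName: String?) throws {
        guard let data = contents as? Data,
              let decoded = try NSKeyedUnarchiver.unarchivedObject(ofClass: GridSettings.self,
                                                                   from: data) else {
            throw CocoaError(.fileReadCorruptFile)
        }
        settings = decoded
    }
}
```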

One of the first lessons I barked my shin on here was when I went to make a ThumbnailPreview extension that loaded the document format and returned a thumbnail for the document icon. The first thing I hit was that NSKeyedUnarchiver was failing to decode the contents of my document when attempting to make the thumbnail, while the application was able to load and save the document just fine. It likely should have been more obvious to me, but the issue has to do with how keyed archiving works – it decodes by class name. In the plugin, the module name was different – so it was unable to load the class in question, which I found out when I went to the trouble of adding a delegate to the NSKeyedUnarchiver to see what on earth it was doing.

One solution might have been to add some translation on NSKeyedUnarchiver, using setClass(_:forClassName:), to map the archived class name to the module associated with the plugin. I took a different path: I broke the code representing my document model out into its own framework, embedded within the application – and then imported that framework into both the main application and the preview plugin.
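For completeness, the class-name translation approach I didn’t take would have looked something like this – a sketch reusing the hypothetical GridSettings class from above, with an invented module-qualified name:

```swift
import Foundation

// Sketch: in the thumbnail extension, map the app's module-qualified class name
// back to the class as it exists in this process, before unarchiving.
func decodeSettings(from documentData: Data) throws -> GridSettings? {
    let unarchiver = try NSKeyedUnarchiver(forReadingFrom: documentData)
    unarchiver.requiresSecureCoding = true
    // "GridApp.GridSettings" is a hypothetical example of the name the app archived under.
    unarchiver.setClass(GridSettings.self, forClassName: "GridApp.GridSettings")
    return unarchiver.decodeObject(of: GridSettings.self, forKey: NSKeyedArchiveRootObjectKey)
}
```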

UIDocument Lesson #1: it may be worth putting your model code into a framework so plugins and app extensions can use it.

The second big “huh, I didn’t think of that…” was in using UIDocument itself. Creating a UIDocument and loading its data are two very separate actions, and a UIDocument actually has quite a bit of state that it might be sharing. The DocumentBrowser sample code took the path of making an explicit delegate structure to call back as things loaded, which I ended up adopting. The other sample code that Apple provided (Particles) was a lot easier to start with and understand, but doesn’t really do anything with the more complex world of handling saving and loading, and the asynchronous calls to set all that up.

UIDocument Lesson #2: using a document involves async calls to save and load, and states that represent potential conflicts when the same document is being edited at the same time from different systems.
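Watching for those states boils down to observing UIDocument.stateChangedNotification and checking the documentState option set – a minimal sketch:

```swift
import UIKit

// Sketch: observe a document's state changes while it is open.
func observeState(of document: UIDocument) -> NSObjectProtocol {
    return NotificationCenter.default.addObserver(forName: UIDocument.stateChangedNotification,
                                                  object: document,
                                                  queue: .main) { _ in
        let state = document.documentState
        if state.contains(.inConflict) {
            // Another copy (e.g. from iCloud) was edited at the same time.
            print("document has conflicting versions")
        }
        if state.contains(.editingDisabled) {
            print("editing is temporarily disabled")
        }
        if state.contains(.savingError) {
            print("the last save failed")
        }
    }
}
```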

One particularly nice little feature of UIDocument is that it includes a Progress property that can be handed to the UIDocumentBrowser’s transition controller when you’ve selected a document, so you get a nice bit of animation as the document is loaded (either locally or from iCloud).

UIDocumentBrowser Lesson #1: the browser subclass has a convenient (but not obvious) means of getting an animated transition controller for use when opening a document – and you can apply a UIDocument’s Progress to show the loading.
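In practice that looks roughly like the following, inside the browser delegate callback – a sketch that leans on the hypothetical GridDocument from above and an invented presentEditor helper:

```swift
import UIKit

class DocumentBrowserViewController: UIDocumentBrowserViewController,
                                     UIDocumentBrowserViewControllerDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()
        delegate = self
    }

    func documentBrowser(_ controller: UIDocumentBrowserViewController,
                         didPickDocumentsAt documentURLs: [URL]) {
        guard let url = documentURLs.first else { return }
        let document = GridDocument(fileURL: url)

        // Ask the browser for an animated transition controller for this document,
        // and hand it the document's Progress so loading shows in the animation.
        let transitionController = controller.transitionController(forDocumentAt: url)
        transitionController.loadingProgress = document.progress

        document.open { success in
            // Hypothetical helper that presents the editor once the async load completes.
            if success { self.presentEditor(for: document) }
        }
    }

    func presentEditor(for document: GridDocument) {
        // Presentation of the editor view controller would go here.
    }
}
```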

The callbacks and completions were the trickiest to navigate, trying to isolate which view controller had responsibility for loading the document. I ended up making some of my own callbacks/completion handlers so that when I was setting up the “editor” view I could load the UIDocument and handle the success or failure, and also supply that result back to the UIDocumentBrowserViewController subclass I created to support the UIDocumentBrowser. I’m not entirely convinced I’ve done it the optimal way, but it seems to be working – including when I need to open the resulting document to create a QuickLook thumbnail preview.

The next step will be adding an IOS Action Extension, as that seems to be the only real way that you can interact with this code directly from Photos, which I really wanted to enable based on feedback. That will dovetail with also allowing the application to open image-based file URLs and create a document using that image file as its basis. The current workflow for this application is creating a new document and then choosing an image (from your photo library), so I think it could be made significantly simpler to invoke and use.

IOS 12 DevNote: Embedded Swift Frameworks and bitcode

A side project for the baristas at my favorite haunt has been a fun “getting back into it” programming exercise for IOS 12. It’s a silly, simple app that checks the status of the network and whether the local WIFI router is accessible, and provides some basic diagnostics and suggestions for the gang behind the counter.

It really boils down to two options:

    Yep, probably a good idea to restart that WIFI router
    Nope, you’re screwed – the internet problem is upstream and there’s nothing much you can do but wait (or call the Internet service provider)

It was a good excuse to try out the new Network.framework, and specifically NWPathMonitor. In addition to overall availability, I wanted to report on whether a few specific sites the shop often uses were responding, and on top of that I wanted to do some poking at the local WIFI router itself to make sure we could “get to it”, and then make recommendations from there.
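The NWPathMonitor piece is pleasantly small – a minimal sketch of the basic availability check:

```swift
import Foundation
import Network

// Sketch: watch overall network availability with NWPathMonitor (iOS 12+).
let monitor = NWPathMonitor()
monitor.pathUpdateHandler = { path in
    if path.status == .satisfied {
        print("network is available (wifi: \(path.usesInterfaceType(.wifi)))")
    } else {
        print("network is not available")
    }
}
monitor.start(queue: DispatchQueue(label: "netcheck.monitor"))
```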

As I dug into things, I ended up deciding to use a Swift framework, BlueSocket, with the idea that if I could open a socket to the WIFI router, then I could reasonably assume it was accessible. I could have used Carthage or CocoaPods, but I specifically wanted to try using git submodules for the dependencies, just to see how that could work.
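The router check itself reduces to something like this – a sketch assuming BlueSocket’s Socket API, with an invented address and port rather than the app’s real configuration:

```swift
import Socket   // BlueSocket's module

// Sketch: treat "can we open a TCP socket to the router?" as "the router is reachable".
// The default host and port here are hypothetical examples.
func routerIsReachable(host: String = "192.168.1.1", port: Int32 = 80) -> Bool {
    do {
        let socket = try Socket.create()
        defer { socket.close() }
        try socket.connect(to: host, port: port)
        return true
    } catch {
        return false
    }
}
```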

With Xcode 10, the general mechanism of dragging in a sub-project and binding it in works extremely easily and well, and the issues I had really didn’t hit until I tried to get a build up to the IOS App Store for TestFlight.

The first thing I encountered was that the sub-projects had a variable for CFBundleVersion: $(CURRENT_PROJECT_VERSION) that apparently wasn’t getting interpolated and set when it was built as a subproject. I ended up making a fork of the project and hard-coding the Info.plist with the specific version. Not ideal, but something that’s at least tractable. I’m really hoping that this coming WWDC shows some specific Xcode/IOS integration improvements when it comes to Swift Package Manager. Sometimes the Xcode build stuff can be very “black box”, and it would be really nice to have a clearer integration point for external dependencies.

The second issue was a real stumper – even though everything was validating locally for a locally built archive, the app store was denying it. The message that was coming back:

Invalid Bundle – One or more dynamic libraries that are referenced by your app are not present in the dylib search path.

Invalid Bundle – The app uses Swift, but one of the binaries could not link to it because it wasn’t found. Check that the app bundles correctly embed Swift standard libraries using the “Always Embed Swift Standard Libraries” build setting, and that each binary which uses Swift has correct search paths to the embedded Swift standard libraries using the “Runpath Search Paths” build setting.

I dug through all the linkages with otool, and everything was looking fine – until a Google trawl finally turned up a question on StackOverflow. Near the bottom there was a suggestion to disable bitcode (which is on by default when you upload an IOS archive). I gave that a shot, and it all flowed through brilliantly.

I can only guess that when you’re doing something with compiled-from-Swift dylibs, the bitcode process does something that the App Store really doesn’t like. It’s probably not a problem without the frameworks (with all the code built directly in the project), but with the frameworks in my project, bitcode needed to be turned off.

Made it through all that, and now it’s out being tested with TestFlight!

El Diablo Network Advisor