Sharp Knives

After writing extensively with the Swift programming language, I recently spent time writing code in C++11. The experience has been very instructional, and I now have a much deeper appreciation for the guard rails that Swift has constructed. To my surprise, it has also left me with a renewed appreciation of the compiler tooling – LLVM specifically, and the relative clarity of its error messages.

C++ as a language was significantly more approachable to me after working in Swift for a while. There are a lot of parallels and similarities, which was particularly nice since I was porting code (re-writing the algorithms in another programming language). I think porting between programming languages is the rough equivalent of translating between human languages: in comparing the two closely enough to move between them, you become far more aware of the idiomatic conveniences and conventions of each.

One of the surprising pieces was my realization that I am far more attached to the Swift concept of protocols than I had realized. C++ has a concept of interfaces, which is close – but definitely not the same in its breadth.

When combined with generic algorithms, the Swift language feels far more capable and specific to me. In Swift, you can constrain the implementation of your generic algorithms to only apply to types conforming to specific protocols, which appears to be something you can’t easily do in C++ – or at least in C++11, which was the version I was working within. I found that programming with generics in C++ is far more reliant on convention, or possibly on subclassing – examples were a bit hard to come by for me.
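
To make the contrast concrete, here’s a minimal sketch (the protocol and function names are my own, not from the code I was porting) of constraining a generic algorithm in Swift to types that conform to a protocol:

// A protocol describing the one capability the algorithm needs.
protocol Distanced {
    func distance(to other: Self) -> Double
}

// The generic function is constrained to Distanced, so the compiler rejects
// any call with a type that doesn't implement distance(to:).
func nearest<T: Distanced>(to target: T, in candidates: [T]) -> T? {
    return candidates.min { $0.distance(to: target) < $1.distance(to: target) }
}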

My limited experience with C++ also leads me to think that the conventions followed by different groups of programmers are more diverse than in Swift. The idiomatic patterns I found while reading other groups’ code varied dramatically – so much so that it was often hard to identify those patterns and understand what was a team’s convention.

My time with the C++ components also makes me appreciate all the more the tendency of languages these days (to borrow from Python) to come “batteries included” with standard libraries. The C++ standard library is more like tools to make tools, and some of the things commonly included with other languages (Python, Swift, etc.) just have to be found individually and assembled.

While I have a bit more to do with C++ before I’ll be done with my project, I relish shifting back to the guard rails of Swift. (I must admit, I’m now significantly more curious about Rust as well.)

iPad Lost Mode

This past Saturday, I was on a 7am EDT flight from Orlando, FL to Seattle, WA – which means I was up at 3:30am east-coast-time to make the flight. That would be pretty harsh, except that my normal timezone is 4 hours later than that. The flight was smooth, and I half-slept most of the time. I took out my iPad and put it into the seat back pocket to work on it or read a bit during the flight, but never used it.

The following day, after we were home, I went to get the iPad – and realized to my horror that I’d never pulled it from the seat-back pocket. The data was fortunately backed up, but losing the hardware – a gift last Christmas – wasn’t a hit I wanted to take. We called the airline’s baggage team, and they put in a lost report. I was pretty sure it was gone and resigned myself to having to replace it some months down the road. I could simply make do until then, feeling foolish for leaving it in the seat pocket when I was half-asleep.

I logged into the Find My iPhone app on my iPhone and marked the iPad as in “lost mode”. I wanted it found, and had never used this feature previously. I set up the message that it was lost and included my phone number, which shows on the lock screen. The iPad hadn’t been able to connect to any wifi networks it already knew, so it was simply “missing” from that system – but if and when it did connect, it would drop into “locked/lost” mode.

On Tuesday, the airline called and let me know they’d found an iPad matching my description. After a brief discussion, we verified that it was indeed mine. Within an hour I had several options for getting it back, and chose to have it delivered. I was expecting it to arrive “some time before 8pm on Wednesday”.

On Wednesday, I was at my usual coffee-house haunt and saw a notification in email: “Your iPad has been found”! I checked the location, and it was my home address! Wait – the email notification was from 30 minutes ago… So I called my sweetie and she ran downstairs and found… nothing!

She ran around the outside of the house three times, looking for any place where the box might have been left, but nothing. We immediately thought “OMG, it’s been stolen from our front porch” – package theft is pretty common in our neighborhood.

After looking again at the Find My iPhone app, I realized that while it was “found”, it reported as being on the street in front of our house – and only for a minute, before disappearing again into the mists of who-knows-where. In hindsight, I imagine the FedEx driver simply happened to be passing near our house at 9:17am, going slowly enough that the iPad was finally able to make a wifi connection it knew and register itself. FedEx, who was delivering it, reported that it was still undelivered, so after an hour or so of “theft panic”, that faded into a vague concern and hope that it really was still out for delivery. Fortunately updates from FedEx are nearly real-time, so if they’re reporting it is still in transit, there can be a reasonable amount of confidence that it is.

Over the course of the day, the FedEx delivery driver wandered around our neighborhood, delivering all their various packages. I got two more pings from the iPad, which also reported that it was “playing a noise” since it was lost. In both cases, it was near wifi networks I’d previously connected to, and it was in those locations for only a minute or two. I can only imagine the driver’s confusion, if they heard the sound coming from the back of their truck at all.

It was finally delivered home around 6pm – nearly nine hours after first reporting itself in the area. It was in lost mode, and I was able to log in and recover it – everything intact. I felt extremely relieved, and even more so that fortune had sided with me that day. Many friends have done or reported something similar, and the devices disappeared forever after that scenario.

I should mention that the Alaska Airlines team was fantastic throughout this whole nerve-wracking scenario. They were super understanding when I called in a panic, thorough about verifying the device was indeed mine when they found it, and efficient once the reports were filed and made available online to me – with really flexible delivery options, including allowing me to come pick it up if I wanted.

The “lost device” feature was lovely, except when it wasn’t. It was like hearing distant calls for help that you can’t reach in time; sort of like the terrible dream where you can never make it to the end of the hallway to escape whatever dread is coming. I don’t know of any capability or feature of the iPad that would have made that better; it was just hellishly nerve-wracking to wait while it yelped in the back of that truck, wandering our neighborhood.

A Using Combine update now available!

A new version of Using Combine (v0.7) is now available! 

The free HTML site of Using Combine has been updated automatically, and the PDF and ePub versions are available on Gumroad.

This version has relatively few updates, primarily focused on some of the missing publishers and resolving some of the more egregious flaws in ePub rendering. No significant changes have come with the later Xcode and iOS betas, and with Xcode 11 now in GM release, it was a good time for another update to be made available.

For the next release, I am focusing on fleshing out a number of the not-yet-written reference sections on operators, most of which are more specialized than the more generally used ones that have already been covered.

The project board at https://github.com/heckj/swiftui-notes/projects/1 also reflects all the various updates still remaining to be written.

Using Combine (v0.6) available!

design by Michael Critz

A new version of Using Combine is available! The free/online version of Using Combine is updated automatically as I merge changes, and the PDF and ePub versions are released periodically and available on Gumroad.

Purchase Using Combine on Gumroad.

The book now has some amazing cover art, designed and provided by Michael Critz, and has benefited from updates provided by a number of people, now in the acknowledgements section.

The updates also include a section broken out focusing on developing with Combine, as well as a number of general improvements and corrections.

For the next release, I am going to focus on fleshing out a number of the not-yet-written reference sections:

  • the publishers I haven’t touched on yet
  • a number of the operators, most of which are more specialized

I reviewed the content prior to this release to see what remained to be done, and updated the project planning board with the various updates still to be written.

I do expect we’ll see beta6 from Apple before too long, although exactly when is unknown. I thought it might appear last week, in which case I was planning to accommodate any updates in this release. Xcode 11 beta6 hasn’t hit the streets yet, and I wanted to get an update out regardless of its inclusion.

navigating Swift Combine, tuples, and XCTest

What started out as a GitHub repository to poke at SwiftUI changed course a few weeks ago and became a documentation/book project on Combine, Apple’s framework for handling asynchronous event streams, not unlike ReactiveX. My latest writing project (available for free online at https://heckj.github.io/swiftui-notes/) has been a foil for me to really dig into and learn Combine – how it works, how to use it, and so on. Apple’s beta documentation is unfortunately quite minimal.

One of the ways I’ve been working out how it all operates is by writing a copious number of unit tests, more or less “poking the beast” and seeing what the code does. This has been quite successful, and I’ve submitted what I suspect are a couple of bugs to Apple’s janky Feedback Assistant along with tests illustrating the results. As I’m doing the writing, I’m generating sample code and examples, and then often writing tests to help illuminate my understanding of how various Combine operators function.

In the writing, I’ve worked my way through the sections to where I’m tackling some of the operators that merge streams of data. CombineLatest is where I started, and testing it highlighted some of the more awkward (to me) pieces of testing Swift code.

The heart of the issue revolves around asserting equality with XCTest, Apple’s unit testing framework, and the fact that Combine takes advantage of tuples as lightweight types in operators like CombineLatest. In the test I created to validate how it operated, I collected the results into an ordered list. The originating streams had simple, equatable types – one String, the other Int. The resulting collection, however, held tuples of <(String, Int)>.
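
For context, a minimal sketch of the shape of the pipeline in question (the values are illustrative, not my actual test fixtures): combineLatest of a String publisher and an Int publisher emits (String, Int) tuples.

import Combine

let strings = PassthroughSubject<String, Never>()
let numbers = PassthroughSubject<Int, Never>()

// combineLatest pairs the latest value from each upstream publisher.
let cancellable = strings
    .combineLatest(numbers)
    .sink { value in
        print(value) // value is a (String, Int) tuple
    }

strings.send("a")
numbers.send(1)   // prints ("a", 1)
strings.send("b") // prints ("b", 1)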

To use XCTAssertEqual, the underlying types that you are validating need to conform to the Equatable protocol. In the case of checking a collection type, it drops down and relies on the Equatable conformance of the underlying element type. And that is where it breaks down – tuples in Swift aren’t allowed to conform to protocols, so they can’t declare (or implement) conformance to Equatable.
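
A small illustration of the problem, using a trivial example of my own rather than the actual test suite:

import XCTest

final class TupleEqualityExampleTests: XCTestCase {
    func testTupleCollections() {
        let results: [(String, Int)] = [("a", 1), ("b", 1)]

        // This won't compile: XCTAssertEqual requires Equatable elements,
        // and tuples can't declare conformance to Equatable.
        // XCTAssertEqual(results, [("a", 1), ("b", 1)])

        // Element-wise comparison still works, because == is defined for
        // tuples of Equatable members even though the tuple type itself
        // can't adopt the protocol.
        XCTAssertTrue(results.elementsEqual([("a", 1), ("b", 1)], by: { $0 == $1 }))
    }
}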

I’m certainly not the first to hit this issue – there’s a StackOverflow question from 2016 that highlights it, and Michael Tsai highlights the same on his blog (also from 2016). A slightly later, but very useful, StackOverflow Q&A entitled XCTest’ing a tuple was super helpful, with nearly identical advice to Paul Hudson’s fantastic Swift tips in Hacking with Swift: how to compare equality on tuples. I don’t know that my solution is a good one – it really feels like a bit of a hack – but I’m at least confident that it’s correct.

The actual collection of results that I’m testing is based on Tristan’s Combine helper libraries: Entwine and EntwineTest. Entwine provides a virtual time scheduler, allowing me to validate the timing of results from operators as well as the values themselves. I ended up writing a one-off function in the test itself that did two things:

  • It leveraged an idea I saw in how to test equality of Errors (which is also a pain in the tuckus) – by converting them to Strings using debugDescription or localizedDescription. This let me take the tuple and consistently dump it into a string format, which was much easier to compare.
  • Secondarily, I also wrote the function so that the resulting tests were easy to read in how they described the timing and the results that were expected for a relatively complex operator like combineLatest.

If you’re hitting something similar and want to see how I tackled it, the code is public at UsingCombineTests/MergingPipelineTests.swift. No promises that this is the best way to solve the problem, but it’s getting the job done:

// Compares one recorded (time, signal) pair from EntwineTest's virtual time
// scheduler against an expected time and tuple value, using debugDescription
// to work around tuples not conforming to Equatable.
func testSequenceMatch(
    sequenceItem: (VirtualTime, Signal<(String, Int), Never>),
    time: VirtualTime,
    inputvalues: (String, Int)) -> Bool {

    // the recorded virtual time must match exactly
    if sequenceItem.0 != time {
        return false
    }
    // compare the signal values via their string representations
    if sequenceItem.1.debugDescription !=
        Signal<(String, Int), Never>.input(inputvalues).debugDescription {
        return false
    }
    return true
}

XCTAssertTrue(
    testSequenceMatch(sequenceItem: outputSignals[0], 
                      time: 300, inputvalues: ("a", 1))
)

XCTAssertTrue(
    testSequenceMatch(sequenceItem: outputSignals[1], 
                      time: 400, inputvalues: ("b", 1))
)

Tristan, the author of Entwine and EntwineTest, provided me with some great insight into how this could be improved in the future. The heart of it is that while Swift tuples can’t conform to protocols like Hashable and Equatable, structs in Swift can. It’s probably not sane to make a struct for every possible combination in your code, but it’s perfectly reasonable to make an interim struct, map the tuples into it, and then use that struct to do the testing.

Tristan also pointed out that a different struct would be needed for each arity of tuple – for example, tuples of (String, Int) and (String, String, Int) would need different structs. The example implementation that Tristan showed:

struct Tuple2<T0, T1> {
  let t0: T0
  let t1: T1
  init(_ tuple: (T0, T1)) {
    self.t0 = tuple.0
    self.t1 = tuple.1
  }
  var raw: (T0, T1) {
    (t0, t1)
  }
}

extension Tuple2: Equatable where T0: Equatable, T1: Equatable {}
extension Tuple2: Hashable where T0: Hashable, T1: Hashable {}
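
Mapping into that struct is then enough to let XCTAssertEqual do the comparison. A sketch of the usage with illustrative values (this helper isn’t from the repository):

import XCTest

// Tuple2 (defined above) conditionally conforms to Equatable, so a collection
// of tuples can be compared by mapping both sides into structs.
func assertTupleResults(_ actual: [(String, Int)],
                        _ expected: [(String, Int)],
                        file: StaticString = #file,
                        line: UInt = #line) {
    XCTAssertEqual(actual.map { Tuple2($0) },
                   expected.map { Tuple2($0) },
                   file: file, line: line)
}

// assertTupleResults(recordedValues, [("a", 1), ("b", 1)])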

A huge thank you to Tristan for taking the time to explain the tuple <-> struct game in Swift!

After having spent quite a few years with dynamic languages, this feels like jumping through a lot of hoops to get the desired result, but I’m pleased that at least it also makes sense to me, so maybe I’m not entirely lost to the dynamic languages.

Writing with an AsciiDoc toolchain

I was about to start off with “I’m not a technical writer by trade”, but I realized that what I should be saying is “in addition to many other skills, I have done quite a bit of technical writing”. I have several published books and articles in my past, one book even still “in print”. I’ve written documentation for parts of many open source projects, countless internal corporate documents, and created my own wiki structures, some of which weren’t even complete disasters. I keep coming back to it again and again.

As it pertains to this post, I have written with ReStructuredText (and Sphinx to render it), Hugo, and Markdown with Jekyll. I had a strong preference for ReStructuredText for seriously-long-form writing (doc sites, books, etc.) because it pretty sanely represented inter-document linkages, which always felt like a terrible add-on hack and shim in Jekyll (with Markdown). Hugo, which the Kubernetes docs use, was better – but the CLI tooling on the Mac was damned awkward. I do favor Hugo over Jekyll, especially for its internationalization support.

my writin’ toolchain creds…

When I did my last book project (Kubernetes for Developers), I ended up working with Packt Publishing. They have their own wonky internal toolchain, more or less a hacked-up version of WordPress from what I could tell. While I was working on the content, I seriously explored self-publishing. One of the ideas I considered was looking more seriously at writing with AsciiDoc.

I’ve since taken a more serious stab at it, wanting to do some writing on Apple’s Combine framework. I decided I wanted to embrace the constraints of writing to a book format for this effort, and set up the content and toolchain to render HTML, PDF, and ePub concurrently. The core of the solution I landed on revolves around a lovely project called AsciiDoctor.

A caveat for folks reading this: if you want to do some online writing and only want to render it to HTML, then while you can likely make this AsciiDoctor-based toolchain work, it’s honestly not the best tool for the job. You will probably be a lot happier with Jekyll or Hugo. AsciiDoctor and AsciiDoc are missing a lot of the “add on” flexibility and additional rendering pieces that are easily found in HTML-specific rendering solutions (Jekyll, Hugo, and Sphinx).

The biggest constraint in my toolchain is the rendering to ePub. AsciiDoctor has a nice extension (asciidoctor-epub3), but it comes with some fairly strict structural constraints. If you want to adopt this toolchain for yourself, start by reading its composition rules.

The toolchain I use leverages prebuilt asciidoctor containers with Docker (so I don’t have to mess with maintaining Ruby environments). The container (asciidoctor/docker-asciidoctor) is kept updated by the asciidoctor project and is pretty easy to use. I did experiment with adding some extensions into my own docker image, but the baseline from the project is perfectly effective.

The key to using this setup is figuring out the correct invocation of the docker container so that it runs against a local directory and generates the relevant output.

A sample of how I do this:

# HTML
docker run --rm -v $(pwd):/documents/ --name asciidoc-to-html asciidoctor/docker-asciidoctor asciidoctor -v -t -D /documents/output -r ./docs/lib/google-analytics-docinfoprocessor.rb docs/using-combine-book.adoc

# copy the images directory over for the rendered HTML
cp -r docs/images output/images

# PDF
docker run --rm -v $(pwd):/documents/ --name asciidoc-to-pdf asciidoctor/docker-asciidoctor asciidoctor-pdf -v -t -D /documents/output docs/using-combine-book.adoc

# ePub
docker run --rm -v $(pwd):/documents/ --name asciidoc-to-epub3 asciidoctor/docker-asciidoctor asciidoctor-epub3 -v -t -D /documents/output docs/using-combine-book.adoc

I also wrote a simple bash script to make a one-line “re-render it all” command, and tweaked it so that it can also open the resulting file. If you’re so inclined, feel free to grab, copy, and use it yourself: re.bash

The example above, and my associated re.bash script, are set up to build content from a “docs” directory, with a specific “core” to the asciidoctor-based publication. In my case, I’ve called that file using-combine-book.adoc. My example embeds a Google Analytics code for the rendered HTML (omitting it for PDF and ePub of course), enables syntax-highlighting for source code blocks, and has the structure specifically enabled for ePub generation.

I’m hosting the content using GitHub Pages. At the time of this writing, they don’t support generating anything other than Jekyll with Markdown content through their built-in system. You can, however, set up a TravisCI job to render the content (using the same docker containers) and then publish it by having the TravisCI job push the content to a specific gh-pages branch on the GitHub repository.

Feel free to also snag and re-use the TravisCI configuration (.travis.yml). You’ll need to add 3 environment variables to your Travis job to make this work:

  • GH_USER_NAME
  • GH_USER_EMAIL
  • GH_TOKEN

The last of these is a “personal access token”, which is what allows the Travis job to push content to your repository as you. Do make sure you tweak the repository name and location to match your own if you copy this. It’s worth mentioning that I extended the basic example I found at http://mgreau.com/posts/2016/03/28/asciidoc-to-gh-pages-with-travis-ci-docker-asciidoctor.html, and the .travis.yml file I linked will only push updates when they’re merged to the master branch of the repository.

The end result is effective and I’m happy with the toolchain

It doesn’t come without its own constraints, primarily imposed by my desire to create an ePub format at the end of this process. I’ve also been happy with the effectiveness of AsciiDoc, although frankly I’m still just learning the basics. If you’re curious about the markup language itself, I’ve found the AsciiDoc Syntax Quick Reference and Asciidoctor User Manual to be invaluable references. I keep a tab open to each of those pages while I’m writing.

If you’re so inclined to view the current work-in-progress, you can see the rendered HTML content at https://heckj.github.io/swiftui-notes/, which is derived from the asciidoc content within https://github.com/heckj/swiftui-notes/tree/master/docs. I also publish the PDF and ePub in the same fashion, available at https://heckj.github.io/swiftui-notes/using-combine-book.pdf and https://heckj.github.io/swiftui-notes/using-combine-book.epub respectively.

Commodity and fashion with SwiftUI

I’m only just starting to dig into SwiftUI, the new declarative UI framework that Apple announced at WWDC this year, but already there are a few patterns emerging that will be fascinating to watch in the Mac and iOS development community in the coming months.

The huge benefit of SwiftUI is that, as a declarative framework, the process of creating and writing UI interfaces across Apple’s platforms should be far more consistent. I fully expect that best practices will get tried and shared, good ideas swiftly copied, and that culturally the community will coalesce around some common forms.

This is in tension with what Apple (and its developers) has tended to celebrate in the past decade: independence and creative expression. UI elements and design have been following a fashion pattern, with the farthest-reaching elements of that design being nearly unusable as those experiments pushed the boundaries beyond what was intuitively understood in user experiences. Sometimes the limits were pushed so far as to not even be explorable.

Color, layout, and graphic design are all clearly customizable with SwiftUI. I also expect that some of the crazier innovations (such as the now-common “pull to refresh” gesture) will become significantly harder to invent from within declarative structures. By its very nature, the declarative structure makes the common, well-established UI elements easy and accessible – so much so that I wouldn’t be surprised to see a lot of early SwiftUI apps “all look alike”. I expect the impact of “all looking alike” to drive a number of iOS dev teams a bit nuts.
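
As a tiny illustration of the “common things are easy” point – this is a toy example of mine, not Apple sample code – a standard settings-style row is just a few declarative lines, and already looks like every other SwiftUI settings row:

import SwiftUI

struct SettingsRow: View {
    @State private var isEnabled = true

    var body: some View {
        // Standard controls and layout, declared directly; customization is
        // layered on afterwards via modifiers.
        HStack {
            Text("Notifications")
            Spacer()
            Toggle("", isOn: $isEnabled)
                .labelsHidden()
        }
        .padding()
    }
}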

The “escape hatches” to do crazy things clearly do exist – and while I haven’t reached that level of learning with SwiftUI, it does seem to follow the progressive-disclosure idea of “make easy things simple, and hard things possible”.

It will be interesting to see what happens this fall when the first SwiftUI apps become available as experiments and where that takes consistency and usability on the Apple platforms.

SceneKit interaction handling – Experiment439

A staple of science fiction movies has been 3D holographic visualizations and controls. Most efforts I’ve seen at taking real visualization data and putting them into a 3D context haven’t been terribly successful. At the same time, the advance of AR and VR makes me suspect that we should be able to take advantage of the additional dimension in displaying and visualizing data.

I started a project, named Experiment439, to go through the process of creating and building a few visualizations, seeing what I can do with them, and seeing what might be refined out into a library that can be re-used.

I wanted to take a shot at this leveraging Apple’s SceneKit 3D abstraction and see how far I could get.

The SceneKit abstraction and organization for scenes is a nice setup, although it’s weak in one area – delegating interaction controls.

The pattern I’m most familiar with is the view controller setup (and its many variants, depending on how you display data). Within SceneKit, an SCNNode can encapsulate other nodes (and controls their overall placement in the view), so it makes a fairly close analogue to the embedding of views within each other that I’m familiar with from iOS and macOS development. Coming up with something that encapsulates and controls an SCNNode (or set of SCNNodes) seems like a pretty doable (and useful) abstraction.

The part that gets complicated quickly is handling interaction. User-invoked events in SceneKit today are limited to projecting hit-tests from the perspective of the camera that’s rendering the scene. In the case of AR apps on iOS, for example, the camera can be navigating the 3D space, but when you want to select, move, or otherwise interact, you’re fairly constrained to mapping touch events projected through the camera.
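
To illustrate the constraint, here’s a minimal sketch of what that looks like today (the class and handler names are mine): a touch is projected through the rendering camera with a hit test against the scene.

import SceneKit
import UIKit

class TapHandler: NSObject {
    // Attached to a UITapGestureRecognizer on an SCNView.
    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        guard let sceneView = gesture.view as? SCNView else { return }
        let location = gesture.location(in: sceneView)
        // hitTest projects the 2D touch into the scene from the camera's
        // perspective and returns the nodes under it.
        let hits = sceneView.hitTest(location, options: nil)
        if let first = hits.first {
            print("tapped node:", first.node.name ?? "unnamed")
        }
    }
}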

I’ve seen a few iOS AR apps that use the camera’s positioning as a “control input” – painting or placing objects where the iOS camera is positioned as you move about an AR environment.

You can still navigate a 3D space and scene, and see projected data – both 2D and 3D – very effectively, but coming up with equivalents to the control interactions you get in Mac and iOS apps has been significantly trickier.

A single button that gets toggled on/off isn’t too bad, but as soon as you step into the world of trying to move a 3D object through the perspective of the camera – shifting a slider or indicating a range – it gets hellishly complex.

With Apple’s WWDC 2019 around the corner (tomorrow as I publish this) and the rumors of significant updates to AR libraries and technologies, I’m hoping that there may be something to advance this space and make this experiment a bit easier, and even more to expand on the capabilities of interacting with the displayed environment.

iOS AR apps today are a glorified window into a 3D space – amazing and wonderful, but heavily constrained. They allow me to navigate around visualization spaces more naturally than anything pinned to a desktop monitor, but at the cost of physically holding the device that you would also use to interact with the environment. I can’t help but feel a bit of jealousy for the VR controllers that track in space, most recently the glowing reviews of the Valve Index VR controllers.

Better interaction capabilities of some kind will be key to taking AR beyond nifty-to-see but not-entirely-useful games and windows on data. I’m hoping to see hints of what might be available or coming in the Apple ecosystem in the next few days.

Meanwhile, there is still a tremendous amount to be done to make visualizations and display them usefully in 3D. A lot of the inspiration for the current structure of my experiment has come from Mike Bostock‘s amazing D3.js library, which has been so successful in helping to create effective data visualization and exploration tools.

iOS Dev Diary – using UIDocument

I have been working on an artist utility app whose primary purpose is to present an image with a super-thin grid overlay. The inspiration came from the cropping functionality in the Photos app – but that grid is ephemeral to the act of cropping an image, and isn’t easily viewable on a continued basis (such as on an iPad) when you want it to support your sketching or painting. Using a grid like this is done for a couple of purposes: one is the “process by Leonardo” for capturing and copying an image by hand; the other is to double-check the framing and composition against what’s called the Rule of Thirds.

I originally didn’t think of this as an application that would have or use a document format, but after trying it out a bit and getting some feedback on the current usage, it became abundantly clear that it would benefit tremendously from being able to save the image and the framing settings that define the grid overlay. So naturally, I started digging into how to enable this, which headed directly towards UIDocument.

Using UIDocument pretty quickly begged the question of supporting a viewer for the files, which led to researching UIDocumentBrowser – a rather surprisingly invasive design change. Not bad, mind you – just a lot of moving parts and new concepts:

  • UIDocument instances are asynchronous – loading and saving the contents are separate from instantiating the document.
  • UIDocuments support cloud-hosted services from the get-go – which means they also include a concept of states that might be surprising, including inConflict and editingDisabled, in addition to reflecting loading, saving, and error conditions during these asynchronous actions.
  • UIDocument is built to be subclassed, but how you handle tracking the state changes and async behavior is up to you (see the sketch after this list).
  • UIDocumentBrowser is built to be controlled through a delegate setup and UIDocumentBrowserViewController, which is subclassed and also demands to be the root of the view hierarchy.
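
A minimal sketch of what that subclassing looks like – the class name and the Data-based model are illustrative, not my app’s actual document:

import UIKit

class SketchDocument: UIDocument {
    var data = Data()

    // Called when the document is being saved.
    override func contents(forType typeName: String) throws -> Any {
        return data
    }

    // Called after the document's contents have been read from disk or iCloud.
    override func load(fromContents contents: Any, ofType typeName: String?) throws {
        guard let contents = contents as? Data else {
            throw CocoaError(.fileReadCorruptFile)
        }
        data = contents
    }
}

// Opening is asynchronous – the completion reports success or failure, and
// documentState can additionally report conflicts or disabled editing.
let someFileURL = URL(fileURLWithPath: "/tmp/example.sketch")
let document = SketchDocument(fileURL: someFileURL)
document.open { success in
    if success {
        print(document.documentState)
    }
}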

Since my document data included UIImage and UIColor, both of which are annoying to persist using Swift struct coding, I ended up using NSKeyedArchiver, and then later NSSecureCoding, to save out the document.
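
For reference, a hypothetical sketch of the kind of NSSecureCoding model involved – the class name and keys are invented for illustration:

import UIKit

final class GridDocumentModel: NSObject, NSSecureCoding {
    static var supportsSecureCoding: Bool { return true }

    let image: UIImage?
    let gridColor: UIColor

    init(image: UIImage?, gridColor: UIColor) {
        self.image = image
        self.gridColor = gridColor
        super.init()
    }

    func encode(with coder: NSCoder) {
        coder.encode(image, forKey: "image")
        coder.encode(gridColor, forKey: "gridColor")
    }

    required init?(coder: NSCoder) {
        image = coder.decodeObject(of: UIImage.self, forKey: "image")
        gridColor = coder.decodeObject(of: UIColor.self, forKey: "gridColor") ?? .black
        super.init()
    }
}

// The UIDocument contents(forType:) override can then archive this with:
// try NSKeyedArchiver.archivedData(withRootObject: model, requiringSecureCoding: true)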

One of the first lessons I barked my shin on here was when I went to make a thumbnail preview extension that loaded the document format and returned a thumbnail for the document icon. The first thing I hit was that NSKeyedUnarchiver was failing to load/decode the contents of my document when attempting to make the thumbnail, while the application was able to load and save the document just fine. It likely should have been more obvious to me, but the issue has to do with how NSKeyedArchiver works – it decodes by class name. In the plugin, the module name was different – so it was unable to load the class in question, which I found out when I went to the trouble of adding a delegate to the NSKeyedUnarchiver to see what on earth it was doing.

One solution might have been to add some translation on NSKeyedUnarchiver with setClass(_:forClassName:), mapping the class name to the module name associated with the plugin. I took the different path of taking the code that represented my document model and breaking it out into its own framework, embedded within the application – and then imported that framework into both the main application and the preview plugin.
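
For completeness, the translation route I didn’t take would look roughly like this, reusing the hypothetical GridDocumentModel from the sketch above (the “MyApp.” module prefix is also illustrative):

import Foundation

// Map the class name recorded in the archive (under the app's module) onto
// the class as the preview extension sees it, before unarchiving.
NSKeyedUnarchiver.setClass(GridDocumentModel.self, forClassName: "MyApp.GridDocumentModel")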

UIDocument Lesson #1: it may be worth putting your model code into a framework so plugins and app extensions can use it.

The second big “huh, I didn’t think of that…” was in using UIDocument. Creating a UIDocument and loading its data are two very separate actions, and a UIDocument actually has quite a bit of state that it might be sharing. The DocumentBrowser sample code took the path of making an explicit delegate structure to call back as things loaded, which I ended up adopting. The other sample code that Apple provided (Particles) was a lot easier to start with and understand, but doesn’t really do anything with the more complex world of handling saving and loading, and the asynchronous calls to set all that up.

UIDocument Lesson #2: using a document involves async calls to save and load, and states that represent even potential conflicts when the same doc is edited at the same time from different systems.

One particularly nice little feature of UIDocument is that it includes a Progress property that can be handed to and set on the UIDocumentBrowser’s transition controller when you’ve selected a document, so you get a nice bit of animation as the document is loaded (either locally or from iCloud).

UIDocumentBrowser Lesson #1: the browser subclass has a convenient (but not obvious) means of getting an animated transition controller for use when opening a document – and you can apply a UIDocument’s Progress to show the loading.
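
A hedged sketch of that hand-off in the browser delegate, reusing the SketchDocument sketch from earlier (treat the exact delegate and property names as approximate – this isn’t copied from my app):

import UIKit

func documentBrowser(_ controller: UIDocumentBrowserViewController,
                     didPickDocumentsAt documentURLs: [URL]) {
    guard let url = documentURLs.first else { return }
    let document = SketchDocument(fileURL: url)

    // The browser vends an animated transition controller for this document,
    // and the UIDocument's Progress drives the loading animation.
    let transitionController = controller.transitionController(forDocumentAt: url)
    transitionController.loadingProgress = document.progress

    document.open { success in
        if success {
            // present the editor view controller here
        }
    }
}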

The callbacks and completions were the trickiest to navigate, trying to isolate which view controller had responsibility for loading the document. I ended up making some of my own callback/completion handlers so that when I was setting up the “editor” view I could load the UIDocument and handle the success/failure, but also supply the success/failure from that back to the UIDocumentBrowserViewController subclass I created to support the UIDocumentBrowser. I’m not entirely convinced I’ve done it the optimal way, but it seems to be working – including when I need to open the resulting document to create a QuickLook thumbnail preview.

The next step will be adding an iOS Action Extension, as that seems to be the only real way to interact with this code directly from Photos – which I really wanted to enable based on feedback. That will dovetail with allowing the application to open image-based file URLs and create a document using that image file as its basis. The current workflow for this application is creating a new document and then choosing an image (from your photo library), so I think it could be made significantly simpler to invoke and use.