Using Combine (v0.6) available!

design by Michael Critz

A new version of Using Combine is available! The free/online version of Using Combine is updated automatically as I merge changes, and the PDF and ePub versions are released periodically and available on Gumroad.

Purchase Using Combine on Gumroad.

The book now has some amazing cover art, designed and provided by Michael Critz, and has benefited from updates provided by a number of people, now in the acknowledgements section.

The updates also include a section broken out focusing on developing with Combine, as well as a number of general improvements and corrections.

For the next release, I am going to focus on fleshing out a number of the not-yet-written reference sections:

  • the publishers I haven’t touched on yet
  • starting into a number of the operators, most of which are more specialized

I reviewed the content prior to this release to see what was remaining to be done, and updated the project planning board with the various updates still remaining to be written.

I do expect we’ll see beta6 from Apple before too long, although exactly when is unknown. I thought it might appear last week, in which case I was planning to accommodate any updates in this release. Xcode 11 beta6 hasn’t hit the streets yet, and I wanted to get an update out regardless of its inclusion.

navigating Swift Combine, tuples, and XCTest

What started out as a Github repository to poke at SwiftUI changed course a few weeks ago and became a documentation/book project on Combine, Apple’s provided framework for handling asynchronous event streams, not unlike ReactiveX. My latest writing project (available for free online at https://heckj.github.io/swiftui-notes/) has been a foil for me to really dig into and learn Combine, how it works, how to use it, etc. Apple’s beta documentation is unfortunately highly minimal.

One of the ways I’ve been working out how it’s all operating is writing a copious amount of unit tests, more or less “poking the beast” of the code and seeing how it’s operating. This has been quite successful, and I’ve submitted what I suspect are a couple of bugs to Apple’s janky FeedbackAssistant along with tests illustrating the results. As I’m doing the writing, I’m generating sample code and examples, and then often writing tests to help illuminate my understanding of how various Combine operators are functioning.
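A trivial example in that same spirit (hypothetical, and much simpler than the actual tests in the repository): subscribe to a small publisher chain and assert on exactly what it delivers.

import XCTest
import Combine

final class PokingCombineTests: XCTestCase {
    func testMapTransformsValues() {
        var received: [Int] = []

        // a simple pipeline: a sequence publisher, a map operator, and a sink
        let cancellable = [1, 2, 3].publisher
            .map { $0 * 10 }
            .sink { received.append($0) }

        // the sequence publisher delivers synchronously, so we can assert immediately
        XCTAssertEqual(received, [10, 20, 30])
        cancellable.cancel()
    }
}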

In the writing, I’ve worked my way through the sections to where I’m tackling some of the operators that merge streams of data. CombineLatest is where I started, and testing it highlighted some of the more awkward (to me) pieces of testing Swift code.

The heart of the issue revolves around asserting equality with XCTest, Apple’s unit testing framework, and the fact that Combine takes advantage of tuples as lightweight types in operators like CombineLatest. In the test I created to validate how it was operating, I collected the results of the data into an ordered list. The originating streams had simple, equatable types – one String, the other Int. The resulting collection, however, was a list of (String, Int) tuples.

To use XCTAssertEqual, the underlying types that you are validating need to conform to the Equatable protocol. In the case of checking a collection type, it drops down and relies on the Equatable conformance of the underlying element type. And that is where it breaks down – tuples in Swift aren’t allowed to conform to protocols – so they can’t declare (or implement conformance with) Equatable.
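As a minimal illustration of the problem (a hypothetical test, not one from the book’s test suite): a single tuple of Equatable elements can be compared with ==, because the standard library provides ad-hoc overloads for that, but a collection of tuples has no Equatable conformance to lean on, so XCTAssertEqual won’t compile.

import XCTest

final class TupleEqualityIllustration: XCTestCase {
    func testComparingTuples() {
        let results: [(String, Int)] = [("a", 1), ("b", 1)]

        // comparing one tuple works, thanks to the standard library's
        // ad-hoc == overloads for tuples of Equatable elements
        XCTAssertTrue(results[0] == ("a", 1))

        // comparing the collections does not compile, because [(String, Int)]
        // can't conform to Equatable:
        // XCTAssertEqual(results, [("a", 1), ("b", 1)])
    }
}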

I’m certainly not the first to hit this issue – there’s a StackOverflow question from 2016 that highlights it, and Michael Tsai highlights the same on his blog (also from 2016). A slightly later, but very useful, StackOverflow Q&A entitled “XCTest’ing a tuple” was super helpful, with nearly identical advice to Paul Hudson’s fantastic Swift tip in Hacking With Swift: how to compare equality on tuples. I don’t know that my solution is a good one – it really feels like a bit of a hack – but I’m at least confident that it’s correct.

The actual collection of results that I’m testing is based on Tristan’s Combine helper library: Entwine and EntwineTest. Entwine provides a virtual time scheduler, allowing me to validate the timing of results from operators as well as the values themselves. I ended up writing a one-off function in the test itself that did two things:

  • It leveraged an idea I saw in how to test equality of Errors (which is also a pain in the tuckus) – by converting them to Strings using debugDescription or localizedDescription. This let me take the tuple and consistently dump it into a string format, which was much easier to compare.
  • Secondarily, I also wrote the function so that the resulting tests were easy to read in how they described the timing and the results that were expected for a relatively complex operator like combineLatest.

If you’re hitting something similar and want to see how I tackled it, the code is public at UsingCombineTests/MergingPipelineTests.swift. No promises that this is the best way to solve the problem, but it’s getting the job done:

func testSequenceMatch(
    sequenceItem: (VirtualTime, Signal<(String, Int), Never>),
    time: VirtualTime,
    inputvalues: (String, Int)) -> Bool {

    // first, check that the signal arrived at the expected virtual time
    if sequenceItem.0 != time {
        return false
    }
    // then compare the values by way of their debugDescription strings,
    // since the tuples themselves can't conform to Equatable
    if sequenceItem.1.debugDescription !=
        Signal<(String, Int), Never>.input(inputvalues).debugDescription {
        return false
    }
    return true
}

XCTAssertTrue(
    testSequenceMatch(sequenceItem: outputSignals[0], 
                      time: 300, inputvalues: ("a", 1))
)

XCTAssertTrue(
    testSequenceMatch(sequenceItem: outputSignals[1], 
                      time: 400, inputvalues: ("b", 1))
)

Tristan, the author of Entwine and EntwineTest, provided me with some great insight into how this could be improved in the future. The heart of it is that while Swift tuples don’t (and can’t) conform to protocols like Hashable and Equatable, structs in Swift do. It’s probably not sane to make a struct for every possible combination in your code, but it’s perfectly reasonable to make an interim struct, map the tuples into it, and then use that struct to do the testing.

Tristan also pointed out that a different struct would be needed for each arity of tuple – for example, a tuple of (String, Int) and a tuple of (String, String, Int) would need different structs. The example implementation that Tristan showed:

struct Tuple2<T0, T1> {
  let t0: T0
  let t1: T1
  init(_ tuple: (T0, T1)) {
    self.t0 = tuple.0
    self.t1 = tuple.1
  }
  var raw: (T0, T1) {
    (t0, t1)
  }
}

extension Tuple2: Equatable where T0: Equatable, T1: Equatable {}
extension Tuple2: Hashable where T0: Hashable, T1: Hashable {}
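With a wrapper like that in place, a test can map the tuples into Tuple2 values – which do pick up Equatable when their elements are Equatable – and compare those directly. A minimal sketch of the usage (the values here are hypothetical):

let results: [(String, Int)] = [("a", 1), ("b", 1)]
let expected: [(String, Int)] = [("a", 1), ("b", 1)]

// Tuple2 inherits Equatable from its elements, so arrays of Tuple2 values
// can be handed straight to XCTAssertEqual
XCTAssertEqual(results.map { Tuple2($0) }, expected.map { Tuple2($0) })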

A huge thank you to Tristan for taking the time to explain the tuple <-> struct game in swift!

After having spent quite a few years with dynamic languages, this feels like jumping through a lot of hoops to get the desired result, but I’m pleased that at least it also makes sense to me, so maybe I’m not entirely lost to the dynamic languages.

Writing with an AsciiDoc toolchain

I was about to start off with “I’m not a technical writer by trade”, but I realized that what I should be saying is “In addition to many other skills, I have done quite a bit of technical writing”. I have several published books and articles in my past, one book even still “in print”. I’ve written documentation for parts of many open source projects, countless internal corporate documents, and created my own wiki structures, some of which weren’t even complete disasters. I’m still coming back to it again and again.

As it pertains to this post, I have written with ReStructuredText (rendered with Sphinx), Hugo, and Markdown with Jekyll. I had a strong preference for ReStructuredText for seriously-long-form writing (doc sites, books, etc.) because it pretty sanely represented inter-document linkages, which always felt like a terrible add-on hack and shim on Jekyll (with Markdown). Hugo, which the Kubernetes docs use, was better – but the CLI tooling on the Mac was damned awkward. I do favor Hugo over Jekyll, especially for its internationalization support.

my writin’ toolchain creds…

When I did my last book project (Kubernetes for Developers), I ended up working with Packt Publishing. They have their own wonky internal toolchain setup, more or less a hacked-up version of WordPress from what I could tell. While I was working up the content, I seriously explored self-publishing. One of the ideas I considered was writing with AsciiDoc.

I’ve since taken a more serious stab at it, wanting to do some writing on Apple’s Combine framework. I decided I wanted to embrace the constraints of writing to a book format for this effort, and set up the content and toolchain to render HTML, PDF, and ePub concurrently. The core of the solution I landed on revolves around a lovely project called AsciiDoctor.

A caveat for folks reading this: If you’re wanting to do some online writing, and only want to render it to HTML, then while you can likely make this AsciiDoctor-based toolchain work, it’s honestly not the best tool for the job. You will probably be a lot happier with Jekyll or Hugo. AsciiDoctor & AsciiDoc are missing a lot of the “add on” flexibility and additional rendering pieces that are easily found in other HTML-specific rendering solutions (Jekyll, Hugo, and Sphinx).

The biggest constraint in my toolchain is the rendering to ePub. AsciiDoctor has a nice extension (asciidoctor-epub3), but it comes with some fairly strict structural constraints. If you want to adopt this toolchain for yourself, start with reading its composition rules.

The toolchain I use leverages prebuilt asciidoctor containers with Docker (so I don’t have to mess with Ruby environments and maintain all of that myself). The container (asciidoctor/docker-asciidoctor) is kept up to date by the asciidoctor project and is pretty easy to use. I did experiment with adding some extensions into my own Docker image, but the baseline from the project is perfectly effective.

The key to using this setup is figuring out the correct invocation for the docker container so that it runs against your local directory and generates the relevant output there.

A sample of how I do this:

# HTML
docker run --rm -v $(pwd):/documents/ --name asciidoc-to-html asciidoctor/docker-asciidoctor asciidoctor -v -t -D /documents/output -r ./docs/lib/google-analytics-docinfoprocessor.rb docs/using-combine-book.adoc

# copy the images directory over for the rendered HTML
cp -r docs/images output/images

# PDF
docker run --rm -v $(pwd):/documents/ --name asciidoc-to-pdf asciidoctor/docker-asciidoctor asciidoctor-pdf -v -t -D /documents/output docs/using-combine-book.adoc

# ePub
docker run --rm -v $(pwd):/documents/ --name asciidoc-to-epub3 asciidoctor/docker-asciidoctor asciidoctor-epub3 -v -t -D /documents/output docs/using-combine-book.adoc

I also wrote a simple bash script to make a one-line “re-render it all”, and tweaked it up so that it could also open the resulting file. If you’re so inclined, feel free to grab, copy, and use it yourself: re.bash

The example above, and my associated re.bash script, are set up to build content from a “docs” directory, with a specific “core” to the asciidoctor-based publication. In my case, I’ve called that file using-combine-book.adoc. My example embeds a Google Analytics code for the rendered HTML (omitting it for PDF and ePub of course), enables syntax-highlighting for source code blocks, and has the structure specifically enabled for ePub generation.

I’m hosting the content using Github Pages. At the time of this writing, they don’t support generating anything other than Jekyll with Markdown content through their built in system. You can, however, set up a TravisCI job to render the content (using the same docker containers) and then publish that by having the TravisCI job push the content to a specific gh-pages branch on the github repository.

Feel free to also snag and re-use the TravisCI configuration (.travis.yml). You’ll need to add 3 environment variables to your Travis job to make this work:

  • GH_USER_NAME
  • GH_USER_EMAIL
  • GH_TOKEN

The last of these is a “personal access token”, which is what allows the travis job to push content to your repository as you. Do make sure you tweak the repository name and location to match your own if you copy this. It’s worth mentioning that I extended the basic example I found at http://mgreau.com/posts/2016/03/28/asciidoc-to-gh-pages-with-travis-ci-docker-asciidoctor.html, and the .travis.yml file I linked will only push updates when they’re merged with the master branch on the repository.

The end result is effective and I’m happy with the toolchain

It doesn’t come without its own constraints, primarily imposed by my desire to create an ePub format at the end of this process. I’ve also been happy with the effectiveness of AsciiDoc, although frankly I’m still just learning the basics. If you’re curious about the markup language itself, I’ve found the AsciiDoc Syntax Quick Reference and Asciidoctor User Manual to be invaluable references. I keep a tab open to each of those pages for reference while I’m writing.

If you’re so inclined to view the current work-in-progress, you can see the rendered HTML content at https://heckj.github.io/swiftui-notes/, which is derived from the asciidoc content within https://github.com/heckj/swiftui-notes/tree/master/docs. I also go ahead and publish the PDF and ePub in the same fashion, available at https://heckj.github.io/swiftui-notes/using-combine-book.pdf and https://heckj.github.io/swiftui-notes/using-combine-book.epub respectively.

Commodity and fashion with SwiftUI

I’m only just starting to dig into the new declarative UI framework that Apple announced at WWDC this year: SwiftUI, but already there are a few patterns emerging that will be fascinating to watch in the Mac & IOS development community in the coming months.

The huge benefit of SwiftUI is that, as a declarative framework, the process of creating and writing UI interfaces across Apple’s platforms should be far more consistent. I fully expect that best practices will get tried and shared, good ideas swiftly copied – and culturally the community will coalesce around some common forms.

This is in tension with what Apple (and its developers) have tended to celebrate in the past decade: independence and creative expression. UI elements and design have been following a fashion pattern, with the farthest-reaching elements of that design being nearly unusable as those experiments pushed the boundaries beyond what was intuitively understood in user experiences. Sometimes the limits were pushed so far as to not even be explorable.

Color, layout, and graphic design are all clearly customizable with SwiftUI. I also expect that some of the crazier innovations (such as the now-common “pull to refresh” gesture) will become significantly harder to enable from declarative structures. By its very nature, the declarative structure will make the common, well-established UI elements easy and accessible – so much so that I wouldn’t be surprised to see a lot of early SwiftUI apps “all look alike”. I expect the impact of “all looking alike” to drive a number of IOS dev teams a bit nuts.
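As a trivial (and entirely hypothetical) illustration of that ease: a handful of stock elements compose into a perfectly serviceable – and perfectly generic-looking – view with almost no code.

import SwiftUI

struct GreetingView: View {
    @State private var name: String = ""

    var body: some View {
        // stock controls compose declaratively with very little ceremony
        VStack(spacing: 12) {
            TextField("Name", text: $name)
            Text("Hello, \(name)!")
                .font(.headline)
        }
        .padding()
    }
}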

The “escape hatches” to do crazy things clearly do exist – and while I haven’t reached that level of learning with SwiftUI, it does seem to follow the “make simple things easy, and hard things possible” concept of progressive disclosure.

It will be interesting to see what happens this fall when the first SwiftUI apps become available as experiments and where that takes consistency and usability on the Apple platforms.

SceneKit interaction handling – Experiment439

A staple of science fiction movies has been 3D holographic visualizations and controls. Most efforts I’ve seen at taking real visualization data and putting them into a 3D context haven’t been terribly successful. At the same time, the advance of AR and VR makes me suspect that we should be able to take advantage of the additional dimension in displaying and visualizing data.

I started a project, named Experiment439, to go through the process of creating and building a few visualizations, seeing what I can do with them, and seeing what might be refined out into a library that can be re-used.

I wanted to take a shot at this leveraging Apple’s SceneKit 3D abstraction and see how far I could get.

The SceneKit abstraction and organization for scenes is a nice setup, although it’s weak in one area – delegating interaction controls.

The pattern I’m most familiar with is the view controller setup (and its many variants, depending on how you display data). Within SceneKit, an SCNNode can encapsulate other nodes (and controls overall placement in the view), so it makes a fairly close analogue to the embedding of views within each other that I’m familiar with from IOS and MacOS development. Coming up with something that encapsulates and controls an SCNNode (or set of SCNNodes) seems like a pretty doable (and useful) abstraction.

The part that gets complicated quickly is handling interaction. User-invoked events in SceneKit today are limited to projecting hit-tests from the perspective of the camera that’s rendering the scene. In the case of AR apps on IOS for example, the camera can be navigating the 3D space, but when you want to select, move, or otherwise interact you’re fairly constrained to mapping touch events projected through the camera.
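A rough sketch of what that looks like in practice (assuming a UIViewController with an SCNView property named sceneView – the names are mine): the touch location is projected through the rendering camera with a hit test, and whatever nodes it intersects are what you get to interact with.

import SceneKit
import UIKit

class SceneInteractionViewController: UIViewController {
    var sceneView: SCNView!

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let location = touch.location(in: sceneView)

        // project the touch through the camera and see which nodes it intersects
        let hits = sceneView.hitTest(location, options: nil)
        if let node = hits.first?.node {
            // interact with whatever node the camera-projected touch "hit"
            print("touched node: \(node.name ?? "unnamed")")
        }
    }
}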

I’ve seen a few IOS AR apps that use the camera’s positioning as a “control input” – painting or placing objects where the IOS camera is positioned as you move about an AR environment.

You can still navigate a 3D space and scene, and see projected data – both 2D and 3D very effectively, but coming up with equivalent interactions to what you get on Mac and IOS apps – control interactions – has been significantly trickier.

A single button that gets toggled on/off isn’t too bad, but as soon as you step into the world of trying to move a 3D object through the perspective of the camera – shifting a slider or indicating a range – it gets hellishly complex.

With Apple’s WWDC 2019 around the corner (tomorrow as I publish this) and the rumors of significant updates to AR libraries and technologies, I’m hoping that there may be something to advance this space and make this experiment a bit easier, and even more to expand on the capabilities of interacting with the displayed environment.

IOS AR apps today are a glorified window into a 3D space – amazing and wonderful, but heavily constrained. It allows me to navigate around visualization spaces more naturally than anything pinned to a desktop monitor, but at the cost of physically holding the device that you would also use to interact with the environment. I can’t help but feel a bit of jealousy for the VR controllers that track in space, most recently the glowing reviews of the Valve Index VR controllers.

Better interaction capabilities of some kind will be key to taking AR beyond nifty-to-see but-not-entirely-useful games and windows on data. I’m hoping to see hints of what might be available or coming with the Apple ecosystem in the next few days.

Meanwhile, there is still a tremendous amount to be done to make visualizations and display them usefully in 3D. A lot of the inspiration for the current structure of my experiment has come from Mike Bostock‘s amazing D3.js library, which has been so successful in helping to create effective data visualization and exploration tools.

IOS Dev Diary – using UIDocument

I have been working on an artist utility app, with the primary purpose of presenting an image with a super-thin grid overlay. The inspiration came from the cropping functionality in the Photos app – but that’s very ephemeral to the act of cropping an image, and isn’t easily viewable on a continued basis (such as on an iPad) when you want that grid to support your sketching or painting. Using a grid like this is done for a couple of purposes: one is the “process by Leonardo” for helping to capture and copy an image by hand; the other is to double-check the framing and composition against what’s called the Rule of Thirds.

I originally didn’t think of this as an application that would have or use a document format, but after trying it out a bit and getting some feedback on the current usage, it became abundantly clear that it would benefit tremendously by being able to save the image and the framing settings that shows the grid overlay. So naturally, I started digging into how to really enable this, which headed directly towards UIDocument.

Using UIDocument pretty quickly begged the question of supporting a viewer for the files, which led to researching UIDocumentBrowser, which turned out to be a rather surprisingly invasive design change. Not bad, mind you – just a lot of moving parts and new concepts:

  • UIDocument instances are asynchronous – loading and saving the contents is separate from instantiating the document.
  • UIDocument supports cloud-hosted services from the get-go – which means it also includes a concept of states that might be surprising, including inConflict and editingDisabled, in addition to reflecting loading, saving, and error conditions while doing these asynchronous actions.
  • UIDocument is built to be subclassed, but how you handle tracking the state changes & async is up to you.
  • UIDocumentBrowser is built to be controlled through a delegate/controller setup and UIDocumentBrowserViewController, which is subclassed and also demands to be the root of the view hierarchy.

Since my document data included UIImage and UIColor, both of which are annoying to persist using struct coding (Codable) in Swift, I ended up using NSKeyedArchiver, and then later NSSecureCoding, to save out the document.
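For context, a stripped-down sketch of that arrangement – the model type here is a hypothetical stand-in for my actual document model: the UIDocument subclass archives the model in contents(forType:) when saving, and unarchives it in load(fromContents:ofType:) when opening.

import UIKit

// hypothetical stand-in for the real model, which holds a UIImage and UIColor
final class GridModel: NSObject, NSSecureCoding {
    static var supportsSecureCoding: Bool { true }
    var title: String = ""

    override init() { super.init() }

    required init?(coder: NSCoder) {
        title = coder.decodeObject(of: NSString.self, forKey: "title") as String? ?? ""
    }

    func encode(with coder: NSCoder) {
        coder.encode(title as NSString, forKey: "title")
    }
}

class GridDocument: UIDocument {
    var model = GridModel()

    // called when the document is being saved
    override func contents(forType typeName: String) throws -> Any {
        return try NSKeyedArchiver.archivedData(withRootObject: model,
                                                requiringSecureCoding: true)
    }

    // called (asynchronously) when the document is being opened
    override func load(fromContents contents: Any, ofType typeName: String?) throws {
        guard let data = contents as? Data,
              let loaded = try NSKeyedUnarchiver.unarchivedObject(ofClass: GridModel.self,
                                                                  from: data)
        else {
            throw CocoaError(.fileReadCorruptFile)
        }
        model = loaded
    }
}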

One of the first lessons I barked my shin on here was when I went to make a thumbnail preview extension that loaded the document format and returned a thumbnail for the document icon. The first thing I hit was that NSKeyedUnarchiver was failing to decode the contents of my document when attempting to make the thumbnail, while the application was able to load and save the document just fine. It likely should have been more obvious to me, but the issue has to do with how keyed archiving works – it decodes by class name. In the plugin, the module name was different – so it was unable to load the class in question, which I found out when I went to the trouble of adding a delegate to the NSKeyedUnarchiver to see what on earth it was doing.

One solution might have been to add some translation on NSKeyedUnarchiver to map the archived class name to the module associated with the plugin, using setClass(_:forClassName:). I took a different path: I broke the code that represented my document model out into its own framework, embedded within the application – and then imported that framework into both the main application and the preview plugin.
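For reference, that translation approach would look roughly like this, using the hypothetical GridModel from the sketch above (“MyApp.GridModel” stands in for whatever class name was recorded in the archive):

// one possible shape of the workaround I didn't take: before decoding, tell
// the unarchiver how to map the archived class name onto the class as the
// plugin sees it
func decodeModel(from data: Data) throws -> GridModel? {
    let unarchiver = try NSKeyedUnarchiver(forReadingFrom: data)
    unarchiver.setClass(GridModel.self, forClassName: "MyApp.GridModel")
    return unarchiver.decodeObject(of: GridModel.self, forKey: NSKeyedArchiveRootObjectKey)
}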

UIDocument Lesson #1: it may be worth putting your model code into a framework so plugins and app extensions can use it.

The second big “huh, I didn’t think of that…” was in using UIDocument. Creating a UIDocument and loading its data are two very separate actions, and a UIDocument actually has quite a bit of state that it might be sharing. The DocumentBrowser sample code took the path of making an explicit delegate structure to call back as things loaded, which I ended up adopting. The other sample code that Apple provided (Particles) was a lot easier to start with and understand, but doesn’t really do anything with the more complex world of handling saving and loading, and the asynchronous calls to set all that up.

UIDocument Lesson #2: using a document includes async calls to save and load, and states that represent even potential conflicts when the same doc is being edited at the same time from different systems.

One particularly nice little feature of UIDocument is that it includes a Progress property that can be handed off and set on the UIDocumentBrowser’s transition controller when you’ve selected a document, so you get a nice bit of animation as the document is loaded (either locally or from iCloud).

UIDocumentBrowser Lesson #1: the browser subclass has a convenient (but not obvious) means of getting an animated transition controller for use when opening a document – and you can apply a UIDocument’s Progress to show the loading.
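Inside the browser subclass, that hookup looks something like the following sketch (the method and property names are recalled from the documentation and the model type is the hypothetical GridDocument from above, so treat this as an approximation rather than the app’s actual code):

import UIKit

class BrowserViewController: UIDocumentBrowserViewController {
    func animateOpening(of document: GridDocument, at documentURL: URL) {
        // ask the browser for the transition controller for this document and
        // hand it the document's Progress so the open animation reflects loading
        let controller = transitionController(forDocumentAt: documentURL)
        controller.loadingProgress = document.progress
    }
}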

The callbacks and completions were the trickiest to navigate, trying to isolate which view controller had responsibility for loading the document. I ended up making some of my own callbacks/completion handlers so that when I was setting up the “editor” view I could load the UIDocument and handle the success/failure, but also supplied the success/failure from that back to the UIDocumentBrowserViewController subclass I created to support the UIDocumentBrowser. I’m not entirely convinced I’ve done it the optimal way, but it seems to be working – including when I need to open the resulting document to create a Quicklook Thumbnail preview.
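The rough shape of that handoff (the names here are mine, not the app’s actual code): the editor opens the UIDocument asynchronously and reports success or failure back to whoever presented it through a completion handler.

// sketch: the "editor" side loads the document and passes the result back
func openForEditing(_ document: GridDocument,
                    completion: @escaping (Bool) -> Void) {
    document.open { success in
        // UIDocument.open is asynchronous; success indicates whether the
        // document's contents loaded without error
        completion(success)
    }
}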

The next step will be adding an IOS Action Extension, as that seems to be the only real way that you can interact with this code directly from Photos, which I really wanted to enable based on feedback. That will dovetail with also allowing the application to open image-based file URLs and create a document using that image file as its basis. The current workflow for this application is creating a new document and then choosing an image (from your photo library), so I think it could be significantly simpler to invoke and use.

IOS Dev Diary – accessibility quirk with “Bold Text”

I just worked around a surprisingly tricky bug. The “Bold Text” accessibility feature in IOS has some really surprising impacts – it changes the rendering of images, specifically toolbar images.

In the IOS app I’m working on, I have a toolbar icon that I made in vector format (PDF). In the back and forth of working on the app, I set the global tint of the storyboard to white. The toolbar button was set to only use the vector image, and its tint is set to black. When I ran the application, the toolbar button showed up just fine, and in the tint (black) I selected for the button.

When one of my testers ran the app, the toolbar button “disappeared”. It was still there, but rendering white on the white toolbar. It took a while to figure out the difference between our environments: Bold Text was enabled in accessibility. Then it took a while longer to find that the button wasn’t respecting the local tint, but using the global tint, when that setting was enabled.

That enabling “Bold Text” affected the image rendering came as a surprise to me. Some friends indicated they’d seen significant performance issues with Bold Text as well (in cells in a tableview), so they knew that it impacted image rendering – I guess it does something to try and make an image “bolder”, even though it’s not text. (I’m unable to perceive the visual difference in rendering the vector image.)

It turns out that Accessibility Inspector also doesn’t reflect this setting. To try it out in the simulator, you need to go to the settings in the simulator directly and enable it. Fortunately, it does reflect the changes once enabled. (radr://49752183) UPDATE: marked as a duplicate of radr://49301632

Once I found that it was using the global tint, it was easy to set that global tint to something more sane (black in my case), so the workaround was very easy once I found it. Fortunately, the sample code for the bug report was equally easy. (radr://49752053)

In the end, I came away with a new “launch ready” checklist item:

  • review all the tints in the storyboard and make sure they’re consistent.

Back to NetNewsWire

I started with RSS and NetNewsWire as an aggregator quite a while ago to keep up with the blogs and other various information sources I wanted to follow. It was the most effective way of keeping up with the developer communities I was interested in. Things progress, change, and generally move – and I moved with RSS to using Google’s Reader – which was really a lovely solution, in that I had a synced view of what I’d read regardless of the device I was using. Then in 2013, they shut it down.

I was disappointed, but not angry. I was getting a lot of connected news stories from Twitter, LinkedIn, some email newsletters, and even a touch through friends on Facebook. Fast-forward to 2019 and the state of social media has devolved so much that I can’t reliably find recent updates – the timelines aren’t timelines, instead having morphed into tuned and algorithmically calculated ad-feeders. I suppose it was inevitable – trusting those sources to find and gather information, it’s a natural place to monetize with advertising, so of course the providers will optimize that.

A month ago I started the “purge these assholes” pass through my social media feeds, which was mostly successful. After I stopped following a number of hyperbolic-tending sources, the streams were better. They still didn’t help me learn and find new information – they still weren’t what I wanted and once had.

I was at the Xcoders meetup a month ago, and am getting back into doing some IOS and Mac development projects. I knew that Brent had been quietly working on Evergreen, which was recently transformed and renamed to NetNewsWire – now open source and with a working build. It is a development build – so I fully expect things might break, not work, or otherwise have holes, but it was a no-brainer for me. Now it’s installed, in my dock, and getting daily use.

I’m relieved to have a news source that

  • is only filtering what I want, when I want
  • supports the open web
  • isn’t brutally promoting ads into my face.

I’m happy to sort and filter through all the various sources. In fact, I even went through all the blogs listed in IOSDevDirectory and made an IOS Dev OPML file for myself. If you’re so inclined in that direction, feel free to grab it and use it yourself.

IOS 12 DevNote: Embedded Swift Frameworks and bitcode

A side project for the baristas at my favorite haunt has been a fun “getting back into it” programming exercise for IOS 12. It’s a silly simple app that checks the status of the network and whether the local WIFI router is accessible, and provides some basic diagnostics and suggestions for the gang behind the counter.

It really boils down to two options:

  • Yep, probably a good idea to restart that WIFI router
  • Nope, you’re screwed – the internet problem is upstream and there’s nothing much you can do but wait (or call the Internet service provider)

It was a good excuse to try out the new Network.framework and specifically NWPathMonitor. In addition to the overall availability, I wanted to report on if a few specific sites were responding that the shop often uses, and on top of that I wanted to do some poking at the local WIFI router itself to make sure we could “get to it” and then made recommendations from there.
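For the availability piece, NWPathMonitor makes the basic check pleasantly small. A minimal sketch (restricting the monitor to WiFi and the queue name are my choices here, not necessarily what the app ships with):

import Foundation
import Network

// watch for network path changes, restricted to the WiFi interface
let monitor = NWPathMonitor(requiredInterfaceType: .wifi)
monitor.pathUpdateHandler = { path in
    if path.status == .satisfied {
        print("WiFi path is available")
    } else {
        print("WiFi path is not available")
    }
}
monitor.start(queue: DispatchQueue(label: "network.monitor"))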

As I dug into things, I ended up deciding to use a Swift framework, BlueSocket, with the idea that if I could open a socket to the wifi router, then I could reasonably assume it was accessible. I could have used Carthage or CocoaPods, but I wanted to specifically try using git submodules for the dependencies, just to see how it could work.

With Xcode 10, the general mechanism of dragging in a sub-project and binding it in works extremely easily and well, and the issues I had really didn’t hit until I tried to get something up to the IOS App Store for TestFlight.

The first thing I encountered was that the sub-projects had a variable for CFBundleVersion – $(CURRENT_PROJECT_VERSION) – that apparently wasn’t getting interpolated and set when it was built as a subproject. I ended up making a fork of the project and hard-coding the Info.plist with the specific version. Not ideal, but something that’s at least tractable. I’m really hoping that this coming WWDC shows some specific Xcode/IOS integration improvements when it comes to Swift Package Manager. Sometimes the Xcode build stuff can be very “black box”, and it would be really nice to have a clearer integration point for external dependencies.

The second issue was a real stumper – even though everything was validating locally for a locally built archive, the app store was denying it. The message that was coming back:

Invalid Bundle – One or more dynamic libraries that are referenced by your app are not present in the dylib search path.

Invalid Bundle – The app uses Swift, but one of the binaries could not link to it because it wasn’t found. Check that the app bundles correctly embed Swift standard libraries using the “Always Embed Swift Standard Libraries” build setting, and that each binary which uses Swift has correct search paths to the embedded Swift standard libraries using the “Runpath Search Paths” build setting.

I dug through all the linkages with otool, and everything was looking fine – and finally a Google trawl turned up a question on StackOverflow. Near the bottom there was a suggestion to disable bitcode (which is on by default when you upload an IOS archive). I gave that a shot, and it all flowed through brilliantly.

I can only guess that when you’re doing something with compiled-from-Swift dylibs, the bitcode process does something that the App Store really doesn’t like. There’s probably no problem without the frameworks (with all the code directly in the project), but with the frameworks in my project, bitcode needed to be turned off.

Made it through all that, and now it’s out being tested with TestFlight!

El Diablo Network Advisor