Thanksgiving 2020

This has been a right 🤬 of a year, and that probably applies to most anyone else on this globe. In the US, the stress of this presidential election was extreme, exacerbated by a COVID pandemic that we pretty much failed to get any sort of handle on. I’ve managed to stay tucked down and safe, but my day-to-day life from a year ago is completely different now. My favorite “office” coffeeshop, El Diablo, is now gone – 20 years after it started. The loss has an oversized impact on me because it was also my social hub, connecting me to friends in my community. Like a lot of others, my world is currently collapsed down to a pretty tiny bubble.

The downsides of this year are undeniable and clear, but there have been a number of silver linings, and since we’re heading into Thanksgiving, it seemed a good time to reflect and celebrate the positives for the year. It doesn’t remove the horror and pain, but to me – it helps offset it.

Right as the pandemic was sweeping in, I managed to get a longer-term contract gig that’s been really useful for me, working on technical writing. I’ve been programming, and managing programmers, devops, QA, and the whole software lifecycle kit, for multiple decades, and one of the skills that I’ve been trying to cultivate has been communication – specifically writing. Last year around this time, I self-published what was primarily a labor of love – Using Combine – a book/reference doc combination on Apple’s Combine framework. The year before, I’d published a different book through Packt, Kubernetes for Developers (a google search on that phrase now directs you more to offline training and certifications, but there’s a book in there too!). I’d been searching for editors to work with who really dug into the work to help me structure it – not just the fluffy top-level stuff that a number of publishers stick to.

It’s the combination of these things that tops my give-thanks list this year. In the past eight months, I’ve had a chance to work closely with a number of truly amazing editors, helping to refine and improve my writing skills. While I’m still crappy at spotting passive voice in my own writing, the feedback I’ve gotten from Chuck, Joni, Susan, Colleen, Mary Kate, and Paul has been amazing: everything from basic grammar (which I had just never really done well) to structure, narrative flow, and complexity. Top that off with some wonderful writers – Ben, Liz, Dave, and Joanna – willing to answer the odd question or just chat about the latest garden recipes on a video call, and really, these past months of contracting have been a terrific experience.

I was super fortunate to find a gig when everyone else seemed to be losing them. I’m sure the troubles aren’t even close to over – there’s so much to rebuild – but it’s a bit of light against the backdrop of this year.

Combine and Swift Concurrency

Just before last weekend, the folks on the Swift Core Team provided what I took to be a truly wonderful gift: a roadmap and a series of pitches and proposals that outline a future of embedding concurrency primitives deeper into the swift language itself. If you’re interested in the details of programming language concurrency, it’s a good read.

If you have questions, scan through each of the pitches and proposals (which all link into the forums), and you’ll see a great deal of conversation there – some of which may have your answer (other parts of which, at least if you’re like me, may just leave you more confused). Most importantly, the core team is clearly willing to answer questions and explore the options and choices.

When I first got involved with the swift programming language, I was already using some of these kinds of concurrency constructs in other languages – and it seemed a glaring lack that the language didn’t specify them, instead relying on the Objective-C runtime, and in particular the dispatch libraries in that runtime. The dispatch stuff, however, is darned solid – battle-honed, as it were. You can still abuse it into poor performance, but it works solidly, so while the gap kind of rankled, it made sense with the thinking of “do the things you need to do now, and pick your fights carefully” in order to make progress. Since then, time (and the language) has advanced significantly and refined things quite a bit, and I’m very pleased to see formal concurrency concepts getting added to the language.

A lot of folks using Combine have reached for it looking for the closest thing they can find to Futures and the pattern of linking futures together, explicitly managing the flow of asynchronous updates. While that is something Combine does, Combine is quite a bit more – and a much higher-level library – than the low-level concurrency constructs that are being proposed.

For those of you unfamiliar, Combine provides a Future publisher, and Using Combine details how to use it a bit; Donny Wals also has a nice article detailing it that’s far more tutorial-like.
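
As a quick illustration – a minimal sketch of my own, not code from either reference – Future wraps a single asynchronous callback into a publisher that delivers one value and then completes:

import Combine
import Foundation

// Future runs its closure once; the promise delivers a single value
// (or a failure) to any downstream subscribers.
func delayedAnswer() -> Future<Int, Never> {
    Future { promise in
        DispatchQueue.global().asyncAfter(deadline: .now() + 1) {
            promise(.success(42))
        }
    }
}

let cancellable = delayedAnswer()
    .sink { value in
        print("received \(value)")
    }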

What does this imply for Combine?

First up, pitches and proposals such as these are often made when there’s something at least partially working that an individual or three have been experimenting with. But they aren’t fully baked, nor are they going to magically appear in the language tomorrow. Whatever comes of the proposals, you should expect it’ll be six months minimum before they appear in any seriously usable form, and quite possibly longer. This is tricky, detailed stuff – and the team is excellent with it, but there are still a lot of moving parts to manage.

Second, while there’s some high-level conceptual overlap, they’re very different things. Combine, being a higher-level library and abstraction, I expect will take advantage of the lower-level constructs with updates – very likely ones that we’ll never see exposed in its API – to make the operators more efficient. The capabilities that are being pitched for language-level actors (don’t confuse those with a higher-level actor or distributed-actor library, such as Akka or Orleans) may offer some really interesting capabilities for Combine to deal with its queue/runloop-hopping mechanisms more securely and clearly.

Finally, when this does come into full existence, I hope the existing promise libraries that are used in Swift (PromiseKit, Google’s promises, or Khanlou’s promise) start to leverage the async constructs in their API structure – giving a clear path for people wanting a single result processed through a series of asynchronous functions. You can use Combine for that, but it is really aimed at being a library that deals with a whole series or stream of values transformed over time, rather than a single value.

tl;dr

The async and concurrency proposals are goodness for Combine, not a replacement – likely providing new layers that Combine can integrate and build upon to make itself more efficient and easier to use.

The range operator and SwiftUI’s layout engine

This post is specific to Swift the programming language and SwiftUI, Apple’s newest multi-platform UI framework. If you’re not interested in both, probably best to skip past this…

I was working on visualization code that leverages SwiftUI to see how that might work, and ran into a few interesting tidbits: playgrounds with SwiftUI work brilliantly, right up until they don’t – and then you’re in a deep, dark hole of WTF?! – and the SwiftUI layout engine, especially with nested and semi-complex container views, does some sort of iterative solver technique. The less interesting tidbit is a classic aphorism: it’s the things you think you know that aren’t true that bite you.

When you’re fiddling with adding your own constraints in SwiftUI, you might be tempted to use the range operator – at least my thinking was “Oh hey, this looks like a super convenient way to check that this value is within an expected range”. It works stunningly well for me, as long as I’m careful about creating the range. I started creating ranges on the fly from variable values, and that’s where playgrounds, and my bad assumptions, bit me.

If you create a range that’s ludicrous, Swift throws an exception at runtime. So if you’re working with ranges, passing in a blatantly incorrect value will give you a nonsensical range, and that will result in a crashing exception. When you stumble into this using Playgrounds, you get a crash that doesn’t really tell you much of anything. When you kick that same thing up in Xcode, it still crashes (of course), but at least the debugger will drop into place and show you what you did wrong. I love using SwiftUI with Playgrounds, but the lack of runtime feedback when I hit an exception – about what I screwed up – makes it significantly less useful to me.
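
To make that concrete, here’s a tiny sketch (not the project code) of the failure and a defensive alternative:

import CoreGraphics

let width: CGFloat = 0                 // an unexpected zero from a layout pass
// let bad = 0.0 ... (width - 1.0)     // traps: a ClosedRange can't have lowerBound > upperBound
let safe = 0.0 ... max(width - 1.0, 0.0)   // clamps instead, yielding 0.0 ... 0.0
print(safe.contains(3))                // false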

Debugging this in Xcode was where I learned that closures you provide within SwiftUI layout – such as alignmentGuide, or a method of your own creation working with a GeometryReader – don’t get called just once. Sometimes they’re called once, but other times they’re called repeatedly, and with pretty strange values for the view’s dimensions. I think underneath the covers there’s an iterative layout solver that’s trying out a variety of layout options for the view being created – in some cases invoking those closures repeatedly with the same values. Interestingly, sometimes those values included a ViewDimensions or GeometryProxy with a size.width of 0. The bad assumption I made was that it would be sized quite a bit larger, never zero. Because of that, I attempted to build an incorrect range – effectively ClosedRange(x ... x-1) – which caused the exception.
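
Here’s a hedged sketch of the kind of guard I ended up wanting – the alignment computation itself is illustrative, not the project’s code:

import SwiftUI

struct TickLabel: View {
    var body: some View {
        Text("tick")
            .alignmentGuide(.leading) { dimensions in
                // The layout solver can call this closure repeatedly,
                // sometimes handing in dimensions.width == 0 - so never
                // assume a "reasonable" size when deriving a range from it.
                let upper = max(dimensions.width - 1, 0)
                let allowed = 0 ... upper        // safe even when width is 0
                return allowed.upperBound / 2    // illustrative value
            }
    }
}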

Even with my own assumptions biting me, I like the use of range and I’m trying to use it in an experimental API surface. Lord knows what’ll come of the experiment, but the basics are bearing some fruit. I have a bit of code where I’ve been porting some of the concepts, such as scale and tick, from D3 to use within SwiftUI.

The current experimental code looks like:

// axis view w/ linear scale - simple/short
HStack {
    VerticalTickDisplayView(
        scale: LinearScale(
            domain: 0 ... 5.0,
            isClamped: false)
    )
    VerticalAxisView(
        scale: LinearScale(
            domain: 0 ... 5.0,
            isClamped: false)
    )
}
.frame(width: 60, height: 200, alignment: .center)
.padding()

// axis view w/ log scale variant - manual ticks
HStack {
    VerticalTickDisplayView(
        scale: LogScale(
            domain: 0.1 ... 100.0,
            isClamped: false),
        values: [0.1, 1.0, 10.0, 100.0]
    )
    VerticalAxisView(
        scale: LogScale(
            domain: 0.1 ... 100.0,
            isClamped: false)
    )
}
.frame(width: 60, height: 200, alignment: .center)
.padding()

This results in a fairly nice set of horizontal and vertical tick axes that I can use around a chart area:

NaNoWriMo Beat Sheet Scrivener Template

I participated in NaNoWriMo once before, in 2016 – having watched my partner get more deeply involved in previous years. I haven’t really been back, but this year with all the … yeah, I decided to give it a shot again. I’ve got my profile set up, at least the basics, and now I’m stumbling around trying to figure out what my story is going to encompass.

Last time through, I made a great start, but didn’t have much of a plan, so it fizzled out pretty hard about half-way through. So this time, I thought I’d try something a bit different and do a little planning. Last weekend, the local NaNoWriMo group hosted a session for planning and plotting, which I’d not tried previously. I’m going to try out using a beat sheet this time, to provide some constraints and structure for the overall story arc.

I grabbed a basic beat sheet from the NaNoWriMo writer’s prep resources, and since I have a copy of the amazing writing app Scrivener, I decided to go ahead and make a NaNoWriMo template based on it.

If you are so inclined, you’re absolutely welcome to use it – it’s based on 50,000 words, deadline of 11/30, and the beat sheet google doc linked from the resources.

Open apps with SwiftUI

Earlier this week, James Dempsey asked on twitter who else was actively trying to build macOS apps using SwiftUI. I’m super interested in SwiftUI. A year ago, it spawned my own side project of writing my own reference docs on Combine. Originally I had a vision of writing about Combine as well as SwiftUI. Combine alone was huge, so I stopped there with the notes, especially as SwiftUI is still maturing rapidly. Some pretty amazing updates came just earlier this year. While clearly not finished – likely not even close to finished – it’s now far enough along in its maturity that you can at least consider using it for full apps, or parts of your app if you like, on macOS, iOS, watchOS, or tvOS.

While I’ve been keeping track of the framework, I’ve also been keeping track of people who are using it, writing about it, struggling with it, and so on. There are two implementations of full applications, both open source (and hence completely visible), that I’ve been following as super interesting examples of using SwiftUI: NetNewsWire and ControlRoom.

NetNewsWire

I’ve contributed a bit to NetNewsWire – only a tiny amount, mostly around continuous integration – but I’ve been using it since its original inception, through multiple owners, to its current state as an open-source project that Brent Simmons is leading. The code is available online, ever evolving, at https://github.com/Ranchero-Software/NetNewsWire. The recent work to embrace SwiftUI is on its main branch, with a lot of the SwiftUI code under the directory multiplatform/shared. Take a deep dive and dig around – there are some gems and interesting questions, and you can see some really fascinating examples of integrating SwiftUI and UIKit or AppKit where SwiftUI isn’t quite up to some of the tasks desired by the project.

ControlRoom

The other app I’ve been watching is ControlRoom, an app that Paul Hudson referenced in a demo capture on twitter. ControlRoom’s code is on GitHub at https://github.com/twostraws/ControlRoom. It was released earlier in SwiftUI’s lifecycle, and shows an integration not of the new SwiftUI app architecture pieces, but more of a “classic” macOS AppKit integration. Like NetNewsWire, I found a number of really great gems within the code, often having “light-bulb” moments when I understood how the app accomplished some of its goals.

Others…

There are easily other apps out there that I’m unaware of – but not too many folks are openly sharing their development like the two projects above. I did find a list of open-source iOS apps on GitHub that includes a subset listing SwiftUI, which might be interesting.

I have a few of my own experiments, but nothing as polished and effective as these two, and I don’t think I solve any problems in novel ways that they haven’t. In a bit of test and benchmarking code, I was creating SwiftUI interfaces across macOS, iOS, and tvOS – which turns out to be a right pain in the butt, even for the simplest displays.

I hope, but don’t expect, more apps will become available – or more visible – down the road. The open sharing of how they solved problems is invaluable to me for learning, and even more so for sharing. Apple has their sample code – well, some of it anyway – but seeing folks outside of Apple use the framework “in the wild” really shows where it’s working (and where it isn’t).

Learning to Write, Writing to Learn

Writing, while I love it, doesn’t come naturally to me. I suspect it doesn’t come naturally to any writer. The process of getting something written, really tightly focused and right on target, is a heroic act of understanding, simplification and embracing constraints. It’s a skill for which I don’t have a good analogue in the other kinds of work I’ve done. Maybe the closest is the finish work (sanding and polishing) with jewelry making or metal-work, but that doesn’t quite fit. I suspect there are good analogues, I’m just not spotting them.

A few weeks ago I wrote about my challenges with the learning to write process, complicated by COVID and the overall lockdown. While I broke through that plateau, there are always more challenges to work through. This last week, I hit another slog-swamp – this time it’s more about mental framing than my actual writing and feedback loops.

Part of why I’m doing technical writing is that writing is an act of sharing that has an incredibly long reach. It’s a really big-damn-lever in the move the world kind of metaphor. That makes it, to me, a supremely worthy endeavor.

To really do it well, you need to know the subject you’re writing about: backwards and forwards, from a couple different angles, maybe even coming in from a different dimension or two. I embraced the idea of “If you can explain something simply, then you may understand it.” That’s the “writing to learn” part of this – what I’m doing with the writing is forcing myself to learn, to think about the subject matter from different angles.

The hardest part of that learning is figuring out the angles that don’t come naturally, or from which I don’t have a lot of background. I’m generally fairly empathetic, so I’ve got at least a slight leg up; I can often at least visualize things from another point of view, even if I don’t fully understand it.

The flip side of it, learning to write, happened this week. I completed a draft revision, and when I reviewed it with some folks, I realized it fell way off the mark. There were a few things that I thought were true (that weren’t) that knocked it awry. More than that, the feedback I got was about taking things to a more concrete example to help reinforce what I was trying to say. Really, it was super-positive feedback, but the kind of feedback that has me thinking I might need to gut the structure pretty heavily and rebuild it back up. As the weekend closes out, I suspect an act of creative destruction might be coming in order to reset this and get it closer to where it needs to be.

I’ve been noodling on that feedback most of this weekend – it has been percolating in my rear-brain for quite a while. Aside from the “things you thought were true that weren’t” (the evil bits that get you, no matter what the topic area), which I got straight pretty quickly, the key from this feedback is that while I was correct in what I was touching on in the writing, it was too abstract and too easily misunderstood. Especially in the world of technical writing and programming topics, it’s SUPER easy to get “too abstract”. And then there’s the death knell of what should be good technical writing – too abstract AND indirect.

Embracing the constraint of “making it more concrete” is some next-level thinking. It’s forcing me to be less abstract (a good thing) and it’s taking a while to really see how to make that work for the topic I’m writing about. I mean, really, I’m still working on the “nuke all the passive voice” thing. While I’m getting better, it takes me something like 3 or 4 passes, reading sentence by sentence, to spot them in my writing.

For what it’s worth, I’m loving the “nuke passive voice” constraint. I love that it forces the writing to be specific about what takes action and what the results are – so much hidden stuff becomes clear, and it’s a great forcing function to see if you also really understand how it’s working.

For now I’ll continue to noodle on how to make my example more concrete, and get back to my baking and kitchen cleaning for the afternoon. Come tomorrow, I’ll take another pass at that article and see what I can do for the concrete-ness aspect.

Feeling alone and outside of your comfort zone

Month five of the COVID lockdown, and when it started I picked up a new bit of work. It is something I’d wanted to do and from which I get enjoyment: technical writing. I am definitely stepping outside of my comfort zone. Although I’ve written extensively and am a published author with several titles, the skills of grammar, spelling, and word choice aren’t what I’d describe as my superpowers.

The past few weeks have been a bit harder than usual, as I’ve been pushing past the basics and breaking through the plateau where I felt comfortable and knew what I was doing. It is doubly hard with the COVID lockdown and remote-only work, as none of the avenues I’ve used in the past to vet ideas or check thinking and progress have been easily available to me. At its heart, writing is about communicating – and the technical writing I’m doing is aimed at being precise, accurate, concise, and easy to understand. Sometimes I’ve got some gems and it works well. Other times I stare at a single paragraph for the better part of 3 hours, tearing apart the sentences word by word and setting them into the form that I think may work – because I certainly don’t think or speak with that level of simple, direct, and accurate conciseness.

I’m confident I’ll get this, eventually. It will take longer than I’d like to get through the “Man, I suck at this” stage – as it always does when you’re learning something. It feels terrible at the moment. I often feel lost, sometimes confused, and – not surprisingly – frustrated. The constraints are a gift, but I rail at them just the same. It’s awkward and painful, and the assistance I have been able to find from coworkers or my fellow coffee-house peers in bouncing ideas around is gone or greatly reduced.

I’m determined to make something better in this whole mess. One of the few things I can control is what I’m working on, and I have that luxury – so I’m using the current time and constraints to improve myself.

If you’re doing the same, remember that it’s worth acknowledging that this shit ain’t easy, whatever your skill or task may be. If it’s worth improving, then it won’t be easy – almost by the very nature of it. Doesn’t matter if it’s hand-eye timing coordination and mastery for a video game, learning to paint in a new style, the strategic puzzle solving of board games, or learning to set a perfect weld. Keep at it and keep moving, even if it doesn’t always feel like forward motion.

post-WWDC – more device collaboration?

It’s been two weeks since WWDC as I’m writing this. I certainly haven’t caught all the content from this year’s event, or even fully processed what I have learned. I see several patterns evolving, hear and read the various rumors, and can’t help but wonder. Maybe it’s wishful thinking, or I’m reading tea leaves incorrectly, but a few ideas keep popping into my head. One of them is that I think we would benefit from — and have the technology seeds for — more and better device collaboration between Apple devices.

One of the features released this year was SwiftUI leveraging a pattern of pre-computed visualizations for use as complications on the Apple Watch and, more generally, as Widgets. The notion of a timeline is ingenious, and a great way to solve for the constrained compute capability of compact devices. This feature heavily leverages SwiftUI’s declarative views; that is, views derived from data.
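
The shape of that API is worth a quick sketch – this is a minimal WidgetKit TimelineProvider, with the entry type and refresh policy purely illustrative:

import Foundation
import WidgetKit

struct ClockEntry: TimelineEntry {
    let date: Date
}

struct ClockProvider: TimelineProvider {
    func placeholder(in context: Context) -> ClockEntry {
        ClockEntry(date: Date())
    }
    func getSnapshot(in context: Context, completion: @escaping (ClockEntry) -> Void) {
        completion(ClockEntry(date: Date()))
    }
    func getTimeline(in context: Context, completion: @escaping (Timeline<ClockEntry>) -> Void) {
        // Pre-compute entries for the next few hours; the system renders
        // each one at its date without waking the app to ask again.
        let entries = (0 ..< 4)
            .compactMap { Calendar.current.date(byAdding: .hour, value: $0, to: Date()) }
            .map { ClockEntry(date: $0) }
        completion(Timeline(entries: entries, policy: .atEnd))
    }
}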

Apple devices have frequently worked with each other – Phone and Watch pairing, the Sidecar mechanism that’s available with a Mac laptop and an iPad, and of course AirPlay. They offload data, and in some cases they offload computation. There’s creative caching involved, but even the new App Clips feature seems like a variation on this theme – with code that can be trusted and run for small, very focused purposes.

Apple has made some really extraordinary wearable computing devices – the watch and AirPods, leveraging Siri – a very different take from the smart speakers of Google Home and Amazon’s Alexa. This year’s update to Siri, for example, adds support for on-device translation as well as dictation.

Now extrapolate out just a bit farther…

My house has a lot of Apple devices in it – laptop, watch, AirPods, several iPads, and that’s just the stuff I use. My wife has a similar set. The wearable bits are far more constrained and with me all the time – but also not always able to do everything themselves. And sometimes, they just conflict with each other – especially when it comes to Siri. (Go ahead – say “Hey Siri” in my house, and hear the chorus of responses)

So what about collaboration and communication between these devices? It would be fantastic if they could share sufficient context to make the interactions even more seamless: a way to leverage the capabilities of a remote device (my phone, tablet, or even laptop) from the AirPods and a Siri request. They could potentially even hand off background tasks (like tracking a timer), or know which device has been used most recently to better infer context for a request. For example, while cooking I often want a timer on my watch, not my phone – but “hey Siri” is not at all guaranteed to get it there.

If they could also know about the various devices’ capabilities, and share those capabilities, the whole set would be even smarter and more effective. And depending on which rumors you’re excited by, they may be able to do some heavier computation for the devices that are more power constrained (wearables) on nearby, but not physically connected (and power efficient), microprocessors. That could be generating visuals like Widgets, or perhaps the inverse – running a machine learning model against transmitted Lidar updates to identify independent objects and their traits from a point cloud or computed 3D mesh.

It’ll be interesting to see where this goes – I hope that distributed coordination, a means of allowing it (as a user), and a means of developing for it are somewhere in the near future.

Exploring MultipeerConnectivity

A few weeks ago, I got curious about the MultipeerConnectivity framework available across Apple’s platforms. It’s a neat framework, and there are community-based libraries that layer over it to make it easier to use for some use cases: MultipeerKit (src) being the one that stood out to me.

The promise of what this framework does is compelling: seamlessly enabling peer-to-peer networking, layering over whatever local transport is available (bluetooth, ethernet if available, a local wifi connection, or a common wifi infrastructure). There’s a lot of “magic” in that capability, layering over underlying technologies and dealing with the advertise-and-connect mechanisms. Some of it uses Bonjour (aka zeroconf), and I suspect other mechanisms as well.

One of the “quirks” of this technology is that you don’t direct which transport is used, nor do you get information about the transport. You do get a nicely designed cascade of objects, all of which leverage the delegate/protocol structure to do their thing. Unfortunately, the documentation doesn’t make entirely clear how to use them and what to expect.

The structure starts with an advertiser, which is paired with an advertising browser. It wasn’t completely obvious to me at first, but you don’t need both sides of the peer-to-peer conversation advertising in order to make a connection. One side can advertise, the other browse, and you can establish a connection on that basis. It doesn’t need to be set up bi-directionally, although it can be.

When I first started in, having glanced through the developer docs, I thought you needed to have both sides actively advertising for this to work. Nope – bad assumption on my part. Both can advertise, and it makes for interesting viewing of “who’s out there” – but the heart of what enables data transfer is another layer down: MCSession.

You use the browser to “invite” a found peer to a session, and the corresponding advertiser has a delegate you use to accept invites.
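
A minimal sketch of that pairing (the service type and display name here are placeholder values, and the delegate wiring is shown in comments):

import MultipeerConnectivity

let peerID = MCPeerID(displayName: "my-device")
let session = MCSession(peer: peerID)

// One side browses, and invites any peer it finds into the session.
let browser = MCNearbyServiceBrowser(peer: peerID, serviceType: "mpcf-test")
browser.startBrowsingForPeers()
// In the MCNearbyServiceBrowserDelegate:
//   func browser(_ browser: MCNearbyServiceBrowser, foundPeer peerID: MCPeerID,
//                withDiscoveryInfo info: [String: String]?) {
//       browser.invitePeer(peerID, to: session, withContext: nil, timeout: 10)
//   }

// The other side advertises, and accepts invitations.
let advertiser = MCNearbyServiceAdvertiser(peer: peerID, discoveryInfo: nil,
                                           serviceType: "mpcf-test")
advertiser.startAdvertisingPeer()
// In the MCNearbyServiceAdvertiserDelegate:
//   func advertiser(_ advertiser: MCNearbyServiceAdvertiser,
//                   didReceiveInvitationFromPeer peerID: MCPeerID,
//                   withContext context: Data?,
//                   invitationHandler: @escaping (Bool, MCSession?) -> Void) {
//       invitationHandler(true, session)   // accept, joining our session
//   }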

The session (MCSession) is the heart of the communications from here. Session has an active side and a reactive side to its API – methods like send(_:toPeers:with:) pair with a session delegate responding via the method session(_:didReceive:fromPeer:). Before you get into sending data, however, you need to be connected.

While the browser allows you to invite, and the advertiser to accept, it is the session that gives you detail on what’s happening. Session is designed to do this through delegate callbacks to session(_:peer:didChange:), which is how you get state changes for connections, and information on to whom you are connected. The session state is a tri-state thing: notConnected, connecting, or connected. In my experience so far, you don’t spend very long in the connecting state, and state updates propagate pretty quickly (within a second or two) when the infrastructure changes – for example, when your iOS device goes to sleep, or you set the device into airplane mode. I haven’t measured exactly how fast, or how consistently, these updates propagate.
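
A sketch of that delegate shape – stubbing the required stream and resource callbacks, since MCSessionDelegate requires them all:

import MultipeerConnectivity

class SessionWatcher: NSObject, MCSessionDelegate {
    // Connection state changes arrive here: notConnected, connecting, or connected.
    func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {
        switch state {
        case .notConnected: print("\(peerID.displayName): not connected")
        case .connecting:   print("\(peerID.displayName): connecting")
        case .connected:    print("\(peerID.displayName): connected")
        @unknown default:   break
        }
    }
    // The reactive pair to send(_:toPeers:with:).
    func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {
        print("received \(data.count) bytes from \(peerID.displayName)")
    }
    // Required stream and resource-transfer callbacks, unused in this sketch.
    func session(_ session: MCSession, didReceive stream: InputStream,
                 withName streamName: String, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String,
                 fromPeer peerID: MCPeerID, with progress: Progress) {}
    func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String,
                 fromPeer peerID: MCPeerID, at localURL: URL?, withError error: Error?) {}
}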

Session is a bi-directional communications channel, and once you are connected, either side can send data for the other to receive. Session also has the concept of not just sending Data, but of explicitly transferring (presumably larger) resources that are URL-based. I haven’t experimented with this layer, but I’m guessing it’s a bit more optimized for reading a large file and streaming it out. The session delegate has callbacks for when a transfer starts and when it completes.

There’s a third transport mechanism, which uses open ended streams, that I haven’t yet touched. When I started looking, I did find some older github projects that tested using the streams capability – and how many simultaneous streams could be triggered and used effectively – but no published results. Alas those projects were written with swift 3, so while I poked at them out of curiosity, I mostly left them alone.

To explore MultipeerConnectivity, I created a project (available on github) called MPCF-TestBench. The project is open source and available for anyone to compile and use themselves – but no promises on it all working correctly or looking anything close to good. (contributions welcome, but certainly not expected).

Fair warning: when I think of “open source”, it’s the messy sausage-making process, not the completed and pretty, cleaned, ready-to-use-no-work-involved library or application that is the desired end goal. Feel free to dig in to the source, ask questions if you like, improve it and share the changes, or use it to your own explorations – but demand anything and you’ll just get a snicker.

The project is an excuse to do something “heavier” in SwiftUI and see how things work – like how to get updates from more of these delegate heavy structures into the UI, and to see how SwiftUI translates across platforms. In short, it’s an excuse to learn.

All of this started several weeks ago when I poked Guilherme Rambo about MultipeerKit to see how fast it actually worked. He hadn’t made any explicit measurements, so I thought it might be interesting to do just that. To that end, MPCF-TestBench has (crudely) cobbled together a reflector and a test-runner with multiple targets (iOS, tvOS, and mac). This is also an excuse to see how SwiftUI translates across the platforms, but more on that in another (later) post. If you go to use this, I’d recommend sticking with the iOS targets for now, as that’s where I’m actively doing my development.

MPCF-TestBench work in progress

I have yet to do the work of trying out the transmissions at various sizes, but you can get basic information for yourself with the code as it stands. The screenshot above was from a transmission from my iPhone (iPhone X) to a 10″ iPad Pro, leveraging my local wifi network, to which both were connected. The test sent 100 data packets that were about 1K in size, and a corresponding reflector on the iPad echoed the data back. No delay, just shoving them down a pipe as fast as I could – using the “reliable” transport mode.

I used a simple Codable structure that exports to JSON to dump out the data (although my export mechanism is only half-working, to be honest). Still, it’s far enough along to get a sample available if you’re curious. Feel free to dig apart the list of JSON objects for your own purposes; I’ll be adding more and making a more thorough result set over time.
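
For a sense of the shape, something like the structure below works – note that this type is a hypothetical stand-in, not the project’s actual result type:

import Foundation

// Hypothetical result record; the real MPCF-TestBench types differ.
struct TransmissionResult: Codable {
    let sequenceNumber: Int
    let payloadBytes: Int
    let roundTripSeconds: TimeInterval
}

let results = [TransmissionResult(sequenceNumber: 1, payloadBytes: 1024, roundTripSeconds: 0.034)]
let json = try JSONEncoder().encode(results)
print(String(data: json, encoding: .utf8)!)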

I haven’t yet been able to establish a bluetooth-only connection – the peering mechanism isn’t making a connection, but that could easily be something stupid I’ve done.

So there you have it – my initial answer to “how fast is it”: I’m seeing about 3 Kbytes/sec transferred using a “reliable” transport mode, over wifi, and using more recent iOS devices. The transmissions appear to be reasonably stable as well – not a terrible amount of standard deviation in my simple tests.

Continuous Integration with Github Actions for macOS and iOS projects

GitHub Actions released in August 2019 – I’ve been trying them out for nearly a full year, using the beta access available to the adventurous before it was generally available. It was a long time in coming, and I saw this feature as GitHub’s missing piece. Some great companies stepped into that early gap and provide excellent services: TravisCI, CircleCI, codeship, SemaphoreCI, Bitrise, and many others. I’ve used most of these, predominantly TravisCI because it was available before the rest and I got started with it. When GitHub finally did circle back and make actions available, I was there trying it out and seeing how it worked.

Setting up CI for macOS and iOS projects has always been a little odd, but doable. For many people who are making apps, the goal is to build the code, run any unit tests, maybe run some UI or integration tests, sign the resulting elements, and ship the whole thing out via TestFlight. Tools like fastlane do a spectacular job of helping to automate into these services where Apple hasn’t provided a lot of support or connected the dots.

I’m going to focus a bit more narrowly in this post – looking at how to leverage the swift package manager and xcodebuild, the command-line tools for building swift packages and mac/iOS applications, respectively. I’ll leave the whole “setting up fastlane”, dealing with the complexities of signing code, and submitting builds from CI systems to others.

Building swift packages with github actions

If you want to build a swift package, then reach for swiftpm. You can’t build macOS or iOS applications with swiftpm, but you can create command-line executables or compile swift libraries. Most interestingly, you can compile swift on other platforms – linux is supported, and other operating systems (Windows, Android, and more) are being worked on by the swift open source community. Swiftpm is also the go-to tooling if you want to use the burgeoning server-side swift capabilities.

While there are some complicated corners to using the swift package manager, especially when it comes to integrating existing C or C++ libraries, the basics for how to use it are gloriously simple:

swift test --enable-test-discovery

To use that tooling, we need to define how it’s invoked – and that whole accumulation of detail is what goes into a GitHub Action declarative workflow.

To set up an action, create a YAML file in the directory .github/workflows at the root of your repository. GitHub looks in this directory for YAML files, and they become the actions enabled for your repository. The documentation for github actions is available at https://help.github.com/en/actions, but it isn’t exactly easy to decipher unless you’re already familiar with CI and some github-specific terms.

One of the simplest CI definitions I’ve seen is the CI running on SwiftDocOrg’s DocTest repository, which builds its executables for both swift on macOS and swift on Linux:

name: CI
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
jobs:
  macos:
    runs-on: macOS-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Build and Test
        run: swift test
        env:
          DEVELOPER_DIR: /Applications/Xcode_11.4.app/Contents/Developer
  linux:
    runs-on: ubuntu-latest
    container:
      image: swift:5.2
      options: --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --security-opt apparmor=unconfined
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Build and Test
        run: swift test --enable-test-discovery

To explain the setup, let’s look at it in pieces. The very first piece is the name of the action: CI. For all practical purposes the name affects nothing, but knowing it can be helpful: GitHub indexes actions by name. What I find most useful is that GitHub provides an easy-to-use badge that you can drop into a README file, so that people viewing the rendered markdown will get a quick look at the current build status.

The badge uses the repository name and the workflow name together in a URL. The pattern for this URL is:

https://github.com/USERNAME/REPOSITORY_NAME/workflows/WORKFLOW_NAME/badge.svg

Make an image link in markdown to display this badge in your README. For example, a badge for DocTest’s repository could be:

[![Actions Status](https://github.com/SwiftDocOrg/DocTest/workflows/CI/badge.svg)](https://github.com/SwiftDocOrg/DocTest/actions)

The next segment is on, which defines when the action will be applied. DocTest’s repository has the action triggering when the master branch (but no other branches) changes via push, or when a pull request is opened against the master branch.

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

The next segment is jobs, which has two elements defined underneath it. Each job is run separately, and you may declare dependencies between jobs if you want or need. Each job defines where it runs – more specifically what operating system is used, and DocTest’s example has a job for building on macOS and another for building on Linux.

The first is the macos job, which runs within a macOS virtual machine:

jobs:
  macos:
    runs-on: macOS-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Build and Test
        run: swift test
        env:
          DEVELOPER_DIR: /Applications/Xcode_11.4.app/Contents/Developer

The steps are run linearly, each having to complete without a failure or error response before the next runs. This example shows the common practice of leveraging actions/checkout, a pre-defined action from the GitHub “Marketplace”.

Marketplace gets quotes because I think marketplace is a poor name choice – you’re not required to buy anything, which I originally thought was the intention. And to be clear, I’m glad it’s not. GitHub’s mechanism allows anyone to host their own actions, and the marketplace is the place to find them.

The second step simply invokes swift test, just like you might on your own laptop with macOS installed. The environment variable DEVELOPER_DIR is defined here, which Xcode uses as a means to indicate which version of Xcode to use when multiple are installed. The alternative way to do this is by explicitly selecting the version of Xcode with another command, xcode-select.
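
That alternative looks like the following as a workflow step – the path here is an assumption, matching whichever Xcode versions the runner image ships:

      - name: Select Xcode
        run: sudo xcode-select -s /Applications/Xcode_11.4.app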

The GitHub actions runners have been maintained impressively well over the past year, and even beta releases of Xcode are frequently available within weeks of when they are released. The VM image has an impressive array of commonly used tools, libraries, and languages pre-installed – and that’s maintained publicly in a list at https://github.com/actions/virtual-environments/blob/master/images/macos/macos-10.15-Readme.md.

By selecting the version of Xcode with the environment variable declaration, this also implies the version of swift that’s being used, swift version 5.2 in this case.

The last segment of this CI declaration is the version that builds the swift package on Linux.

  linux:
    runs-on: ubuntu-latest
    container:
      image: swift:5.2
      options: --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --security-opt apparmor=unconfined
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Build and Test
        run: swift test --enable-test-discovery

In this case, it’s using Ubuntu 18.04 (the latest supported by GitHub as I’m writing this post) – which has a corresponding README of everything it includes at https://github.com/actions/virtual-environments/blob/master/images/linux/Ubuntu1804-README.md.

The container declaration defines a docker container that’s used to run these steps on top of that base linux image – in this case, the swift:5.2 image. The additional options listed open up specific security mechanisms otherwise locked down within a container: here, it’s enabling the ptrace system call, which is critical to allowing swift to run an integrated REPL or use the LLDB debugger when run within a container.

The last bit that you might have noticed is the option --enable-test-discovery. This is an option from the swift package manager that was only recently released. Where Xcode leverages the Objective-C runtime to dynamically inspect and identify test classes to run, the same didn’t exist (and was a right pain in the butt) for swift on Linux until this option was made available in swift 5.2. The build system creates an index while it’s building the code on Linux, and then uses this index to identify functions that should be invoked based on their name (the ones prefixed with test that are within subclasses of XCTestCase). The end result is swift test “finding the tests” just as most other unit testing libraries do.
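
In other words, given a test like the one below (an illustrative example, not from DocTest), the index lets swift test on Linux find and run testAddition without any manually maintained list of tests:

import XCTest

final class ExampleTests: XCTestCase {
    // Discovered by name: a method prefixed with `test`
    // on an XCTestCase subclass.
    func testAddition() {
        XCTAssertEqual(1 + 1, 2)
    }
}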

Building macOS or iOS applications using xcodebuild with github actions

If you want to build a macOS, tvOS, iOS, or even watchOS application, use xcodebuild. xcodebuild is the command-line invocation that uses the build toolchain built into Xcode, leveraging all the built-in mechanisms for targets, schemes, and build settings, and overlaying interactions with the simulators. To use xcodebuild, you’ll need to have Xcode installed – and with github actions, that’s available through virtualized instances of macOS with Xcode (and a lot of other tools) pre-installed.

The example repository I’m using is one of my own (https://github.com/heckj/MPCF-TestBench/blob/master/.github/workflows/build.yml), although it was a hard choice – as I really like NetNewsWire’s CI setup as a good example. Definitely take a look at https://github.com/Ranchero-Software/NetNewsWire/blob/master/.github/workflows/build.yml and the corresponding CI Tech Note for some excellent detail and to see how another project enabled CI on GitHub.

The whole CI file, from https://github.com/heckj/MPCF-TestBench/blob/master/.github/workflows/build.yml:

name: CI
on: [push]
jobs:
  build:
    runs-on: macos-latest
    strategy:
      matrix:
        run-config:
          - { scheme: 'MPCF-Reflector-mac', destination: 'platform=macOS' }
          - { scheme: 'MPCF-TestRunner-mac', destination: 'platform=macOS' }
          - { scheme: 'MPCF-Reflector-ios', destination: 'platform=iOS Simulator,OS=13.4.1,name=iPhone 8' }
          - { scheme: 'MPCF-TestRunner-ios', destination: 'platform=iOS Simulator,OS=13.4.1,name=iPhone 8' }
          - { scheme: 'MPCF-Reflector-tvOS', destination: 'platform=tvOS Simulator,OS=13.4,name=Apple TV' }
    steps:
    - name: Checkout Project
      uses: actions/checkout@v1
    - name: Homebrew build helpers install
      run: brew bundle
    - name: Show the currently detailed version of Xcode for CLI
      run: xcode-select -p
    - name: Show Build Settings
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showBuildSettings
    - name: Show Build SDK
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showsdks
    - name: Show Available Destinations
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showdestinations
    - name: lint
      run: swift format lint --configuration .swift-format-config -r .
    - name: build and test
      run: xcodebuild clean test -scheme '${{ matrix.run-config['scheme'] }}' -destination '${{ matrix.run-config['destination'] }}' -showBuildTimingSummary

Both this example and NetNewsWire’s CI use the technique of a matrix build. This is immensely useful when you have multiple targets in the same Xcode project and want to verify that they’re all building and testing correctly. The matrix is defined right at the top of this file as a strategy:

jobs:
  build:
    runs-on: macos-latest
    strategy:
      matrix:
        run-config:
          - { scheme: 'MPCF-Reflector-mac', destination: 'platform=macOS' }
          - { scheme: 'MPCF-TestRunner-mac', destination: 'platform=macOS' }
          - { scheme: 'MPCF-Reflector-ios', destination: 'platform=iOS Simulator,OS=13.4.1,name=iPhone 8' }
          - { scheme: 'MPCF-TestRunner-ios', destination: 'platform=iOS Simulator,OS=13.4.1,name=iPhone 8' }
          - { scheme: 'MPCF-Reflector-tvOS', destination: 'platform=tvOS Simulator,OS=13.4,name=Apple TV' }

This is a single job – meaning a single operating system to run the build – but when you use a matrix, it replicates the job by the size of the matrix. In this case, there are 5 matrix definitions – 2 for macOS targets, 2 for iOS targets, and 1 for tvOS. When this runs, it runs 5 parallel instances, each with its own matrix definition, and applies those values to the later steps. This example defines two properties, scheme and destination, to be filled out with different values for each matrix run and used later in the steps. The scheme definition corresponds to the names of schemes in the Xcode workspace, and destination maps to the parameters used by xcodebuild’s destination argument, which is a combination of target platform, name, and version of the operating system to use.

Something to be aware of – the specific values that are used in the destinations of the matrix will vary with the version of Xcode, and the latest version of Xcode will always be used unless you explicitly override it. In this example, I am intentionally setting it to latest to keep tracking any updates, but if you’re building a CI system for verifying stability over time, that’s probably something you want to explicitly declare and lock down in your CI configuration.

Checkout is pretty self-explanatory, and right after that step you’ll see the step that installs helpers.

    - name: Homebrew build helpers install
      run: brew bundle

This uses a feature of Homebrew called bundle, which reads a file named Brewfile, giving you a single place to say “install all these additional tools for my later use”.
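
For a project like this one, the Brewfile can be a single line – a sketch, assuming swift-format is the only helper needed for the lint step below:

brew "swift-format"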

The next steps are purely informational, and aren’t actually needed for the build, but are handy to have for debugging:

    - name: Show the currently detailed version of Xcode for CLI
      run: xcode-select -p
    - name: Show Build Settings
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showBuildSettings
    - name: Show Build SDK
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showsdks
    - name: Show Available Destinations
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showdestinations

These all invoke xcodebuild with the various options to show the parameters available. These parameters vary with the version of Xcode that is running, and the very first command (xcode-select -p) prints the path of the Xcode version currently selected.

The next command, named lint, uses one of the helpers defined in that Brewfile: swift-format. In this example, I’m using Apple’s swift-format command (the other common example is Nick Lockwood’s swiftformat). In other projects I’ve used Nick’s swiftformat (and the often-paired tool swiftlint), both of which I like quite a lot. This project was an excuse to try out the tooling from Apple’s open source version to see how it worked, and how I liked working with it. My final opinion is still pending, but it mostly does the right thing.

    - name: lint
      run: swift format lint --configuration .swift-format-config -r .

The lint step is really applying a verification of formatting, so it arguably might not be relevant to include in a build. I rely on linters (in several languages) to yell at me, as otherwise I’m lousy about consistency in how I format code. In practice, I also try to use the same tools to auto-format the code, to keep it up to whatever standard seems predominantly acceptable.

The final step is where it compiles and runs the tests:

    - name: build and test
      run: xcodebuild clean test -scheme '${{ matrix.run-config['scheme'] }}' -destination '${{ matrix.run-config['destination'] }}' -showBuildTimingSummary

And you can see the references to the matrix values that are applied directly as parameters to xcodebuild. The text ${{ matrix.run-config['scheme'] }} is a replacement definition, indicating that the value of scheme for the running matrix build should be dropped into that position for the command line argument.

The NetNewsWire project uses the exact same technique, although the developers run xcodebuild from within a shell script so that they can arrange for signing keys and other parameters to be set up. It’s a thoughtful and ingenious system, but quite a bit harder to explain or show the matrix being used directly.

The downsides of GitHub Actions

While I am very pleased with how GitHub actions works, I am completely leveraging the “no cost to run” options. The cost of the pay-for GitHub Actions is notable, and in some cases, injuriously expensive.

If you’re running GitHub actions in a private repository (and paying for the privilege), you may find it just too expensive. The billing information for github actions shows that running macOS virtual images costs 10 times the price of running Linux images ($0.08 per minute vs. $0.008 per minute). If you build on every pull request, you’re going to rack up an impressive number of minutes very quickly. On top of that, techniques like the matrix build add an additional multiplier – 5x in my case. GitHub does offer the option of creating your own “GitHub Action runners”, and I’d recommend seriously looking at that option – using GitHub just as the coordinator – from the cost perspective alone. It’s more “stuff you have to maintain”, but in the end likely quite a bit cheaper than paying the GitHub Actions hosting fees.

If you are building something purely open source, then you’re allowed to take advantage of the free services. That’s pretty darned nice.

Where GitHub is not yet supporting the swift ecosystem

This isn’t directly related to GitHub Actions, but more a related note on how GitHub is (or isn’t) supporting the swift ecosystem. There are a tremendous number of support options on their site for some great features, but all of them are pretty anemic when it comes to supporting swift, either via Apple or through the open source ecosystem:

  • There’s no current availability for swift language syntax highlighting. GitHub does offer Objective-C, which is a pretty good fallback, but swift-specific highlighting would be really lovely to have when reviewing pull requests or just reading code.
  • GitHub’s security mechanisms – which host security advisories, track dependency flow, and the recently announced security code scanning – don’t support swift:
    • Dependencies through Package.swift, although parsable, aren’t tracked and aren’t reflected.
    • While you can draft a security advisory and host it at Github, it won’t propagate to any dependencies because it’s not tracked.
  • A year ago GitHub announced that it would be supporting “Swift Packages” – I’m not aware of much that came of that effort, if anything.
  • The same constraint of not parsing and tracking dependencies is highlighted in their Insights tab and the “Dependency graph”. On swift packages, there’s nothing. Same with Xcode projects. Nada, zilch, zip.
  • The code scanning, which uses a really great technology called CodeQL, doesn’t support Swift, or even Objective-C.

At a bare minimum, I would have hoped that GitHub would have enabled parsing and showing dependencies for Package.swift. I can understand not wanting to reverse engineer the .pbxproj plist structure to get to dependencies, but swift itself is open source and the package manifest is completely open.

In addition, SwiftSyntax – the AST parsing mechanism that swift enables through open source – is being used for swift language server support. These, I think, would be a perfect complement to leverage within CodeQL.

I do hope we’ll see more movement on these fronts from GitHub, but if not – well, I suppose it’s a good differentiating opportunity for competitive sites such as BitBucket or GitLab.