Open apps with SwiftUI

Earlier this week, James Dempsey asked on twitter who else was actively trying to build macOS apps using SwiftUI. I’m super interested in SwiftUI. A year ago, it spawned a side project of mine: writing my own reference docs on Combine. Originally I had a vision of writing about Combine as well as SwiftUI. Combine alone was huge, so I stopped there with the notes, especially as SwiftUI is still massively maturing. Some pretty amazing updates came just earlier this year. While clearly not finished, likely not even close to finished, it’s now far enough along in its maturity that you can at least consider using it for full apps. Or for parts of your app if you like – macOS, iOS, watchOS, or tvOS.

While I’ve been keeping track of the framework, I’ve also been keeping track of people who are using it, writing about it, struggling with it, etc. There are two implementations of full applications, both open source (and hence completely visible), that I’ve been following as super interesting examples of using SwiftUI: NetNewsWire and ControlRoom.

NetNewsWire

I’ve contributed a bit to NetNewsWire, only a tiny amount (and mostly around continuous integration), but I’ve been using it since the earliest days – from its original inception, through multiple owners, to its current state as an open-source project that Brent Simmons is leading. The code is available online, ever evolving, at https://github.com/Ranchero-Software/NetNewsWire. The recent work to embrace SwiftUI is on its main branch, with a lot of the SwiftUI code under the directory multiplatform/shared. Take a deep dive and dig around – there are some gems and interesting questions, and you can see some really fascinating examples of integrating SwiftUI with UIKit or AppKit where SwiftUI isn’t quite up to some of the tasks the project needs.

ControlRoom

The other app I’ve been watching is ControlRoom, an app that Paul Hudson referenced in a demo capture on twitter. ControlRoom’s code is on GitHub at https://github.com/twostraws/ControlRoom. It was released earlier in SwiftUI’s lifecycle, and shows an integration not of the new SwiftUI app architecture pieces, but of more “classic” macOS AppKit integration. Like NetNewsWire, I found a number of really great gems within the code, often having “light-bulb” moments when I understood how the app accomplished some of its goals.

Others…

There are easily other apps out there that I’m unaware of – but not too many folks are openly sharing their development like the two projects above. I did find a list of open-source iOS apps on GitHub that includes a subset listing SwiftUI, which might be interesting.

I have a few of my own experiments, but nothing as polished and effective as these two, and I don’t think I solve any problems in novel ways that they haven’t. In a bit of test and benchmarking code, I was creating SwiftUI interfaces across macOS, iOS, and tvOS – which turns out to be a right pain in the butt, even for the simplest displays.

I hope, but don’t expect, more apps to become available – or to be more visible down the road. Having the open sharing of how they solved problems is invaluable to me for learning, and even more so for sharing. Apple has their sample code, well – some of it anyway – but seeing folks outside of Apple use the framework “in the wild” really shows where it’s working (and where it isn’t).

Learning to Write, Writing to Learn

Writing, while I love it, doesn’t come naturally to me. I suspect it doesn’t come naturally to any writer. The process of getting something written, really tightly focused and right on target, is a heroic act of understanding, simplification and embracing constraints. It’s a skill for which I don’t have a good analogue in the other kinds of work I’ve done. Maybe the closest is the finish work (sanding and polishing) with jewelry making or metal-work, but that doesn’t quite fit. I suspect there are good analogues, I’m just not spotting them.

A few weeks ago I wrote about my challenges with the learning to write process, complicated by COVID and the overall lockdown. While I broke through that plateau, there are always more challenges to work through. This last week, I hit another slog-swamp – this time it’s more about mental framing than my actual writing and feedback loops.

Part of why I’m doing technical writing is that writing is an act of sharing that has an incredibly long reach. It’s a really big-damn-lever in the move the world kind of metaphor. That makes it, to me, a supremely worthy endeavor.

To really do it well, you need to know the subject you’re writing about: backwards and forwards, from a couple different angles, maybe even coming in from a different dimension or two. I embraced the idea of “If you can explain something simply, then you may understand it.” That’s the “writing to learn” part of this – what I’m doing with the writing is forcing myself to learn, to think about the subject matter from different angles.

The hardest part of that learning is figuring out the angles that don’t come naturally, or from which I don’t have a lot of background. I’m generally fairly empathetic, so I’ve got at least a slight leg up; I can often at least visualize things from another point of view, even if I don’t fully understand it.

The flip side of it, learning to write, happened this week. I completed a draft revision, and when I reviewed it with some folks, I realized it fell way off the mark. There were a few things I thought were true (that weren’t) that knocked it awry. More than that, the feedback I got was about taking things to a more concrete example to help reinforce what I was trying to say. Really, it was super-positive feedback, but the kind of feedback that has me thinking I might need to gut the structure pretty heavily and rebuild it back up. As the weekend closes out, I suspect an act of creative destruction might be in order to reset this and get it closer to where it needs to be.

I’ve been noodling on that feedback most of this weekend – it has been percolating in my rear-brain for quite a while. Aside from the “things you thought were true that weren’t” (the evil bits that get you, no matter what the topic area), which I got straight pretty quickly, the key from this feedback is that while I was correct in what I was touching on in the writing, it was too abstracted and too easily misunderstood. Especially in the world of technical writing and programming topics, it’s SUPER easy to get “too abstract”. And then there’s the death knell of what should be good technical writing – too abstract AND indirect.

Embracing the constraint of “making it more concrete” is some next-level thinking. It’s forcing me to be less abstract (a good thing) and it’s taking a while to really see how to make that work for the topic I’m writing about. I mean, really, I’m still working on the “nuke all the passive voice” thing. While I’m getting better, it takes me something like 3 or 4 passes, reading sentence by sentence, to spot them in my writing.

For what it’s worth, I’m loving the “nuke passive voice” constraint. I love that it forces the writing to be specific about what takes action and what the results are – so much hidden stuff becomes clear, and it’s a great forcing function to see if you also really understand how it’s working.

For now I’ll continue to noodle on how to make my example more concrete, and get back to my baking and kitchen cleaning for the afternoon. Come tomorrow, I’ll take another pass at that article and see what I can do on the concreteness front.

Feeling alone and outside of your comfort zone

Month five of the COVID lockdown, and when it started I picked up a new bit of work. It is something I’d wanted to do and from which I get enjoyment: technical writing. I am definitely stepping outside of my comfort zone. Although I’ve written extensively and am a published author with several titles, the skills of grammar, spelling, and word choice aren’t what I’d describe as my super powers.

The past few weeks have been a bit harder than usual, as I’ve been pushing past the basics and breaking through the plateau where I felt comfortable and knew what I was doing. It is doubly hard with the COVID lockdown and remote-only work, as none of the avenues I’ve used in the past to vet ideas or check thinking and progress have been easily available to me. At its heart, writing is about communicating – and the technical writing I’m doing is aimed at being precise, accurate, concise, and easy to understand. Sometimes I’ve got some gems and it works well. Other times I stare at a single paragraph for the better part of 3 hours, tearing apart the sentences word by word and setting them into the form that I think may work – because I certainly don’t think or speak with that level of simple, direct, and accurate conciseness.

I’m confident I’ll get this, eventually. It will take longer than I’d like to get through the “Man, I suck at this” stage – as it always does when you’re learning something. It feels terrible at the moment. I often feel lost, sometimes confused, and – not surprisingly – frustrated. The constraints are a gift, but I rail at them just the same. It’s awkward and painful, and the assistance I have been able to find from coworkers or my fellow coffee-house peers in bouncing ideas around is gone or greatly reduced.

I’m determined to make something better in this whole mess. One of the few things I can control is what I’m working on, and I have that luxury – so I’m using the current time and constraints to improve myself.

If you’re doing the same, remember that it’s worth acknowledging that this shit ain’t easy, whatever your skill or task may be. If it’s worth improving, then it won’t be easy – almost by the very nature of it. Doesn’t matter if it’s hand-eye timing coordination and mastery for a video game, learning to paint in a new style, strategic puzzle solving of board games, or learning to set a perfect weld. Keep at it and keep moving, even if it doesn’t always feel like forward motion.

post-WWDC – more device collaboration?

It’s been two weeks since WWDC as I’m writing this. I certainly haven’t caught all the content from this year’s event, or even fully processed what I have learned. I see several patterns evolving, hear and read the various rumors, and can’t help but wonder. Maybe it’s wishful thinking, or I’m reading tea leaves incorrectly, but a few ideas keep popping into my head. One of them is that I think we would benefit from — and have the technology seeds for — more and better device collaboration between Apple devices.

One of the features released alongside SwiftUI this year leverages a pattern of pre-computed visualizations for use as complications on the Apple Watch, and more generally as Widgets. The notion of a timeline is ingenious and a great way to work around the constrained compute capability of compact devices. This feature heavily leverages SwiftUI’s declarative views; that is, views derived from data.
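To make that a bit more concrete, here’s a rough sketch of what the timeline pattern looks like in WidgetKit as I understand it. The HourEntry and HourProvider names are mine, not Apple’s, and the details may still shift as the betas settle:

import WidgetKit
import SwiftUI

struct HourEntry: TimelineEntry {
    let date: Date
}

struct HourProvider: TimelineProvider {
    func placeholder(in context: Context) -> HourEntry {
        HourEntry(date: Date())
    }

    func getSnapshot(in context: Context, completion: @escaping (HourEntry) -> Void) {
        completion(HourEntry(date: Date()))
    }

    func getTimeline(in context: Context, completion: @escaping (Timeline<HourEntry>) -> Void) {
        // Pre-compute an entry for each of the next 12 hours so the system
        // can render the widget later without waking the app.
        let now = Date()
        let entries = (0..<12)
            .compactMap { Calendar.current.date(byAdding: .hour, value: $0, to: now) }
            .map { HourEntry(date: $0) }
        completion(Timeline(entries: entries, policy: .atEnd))
    }
}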

Apple devices have frequently worked with each other – Phone and Watch pairing, the Sidecar mechanism that’s available with a Mac laptop and an iPad, and of course AirPlay. They offload data, and in some cases they offload computation. There’s creative caching involved, but even the new App Clips feature seems like it’s a variation on this theme – with code that can be trusted and run for small, very focused purposes.

Apple has made some really extraordinary wearable computing devices – the watch and AirPods, leveraging Siri – a very different take from the smart speakers of Google Home and Amazon’s Alexa. This year’s update to Siri, for example, adds support for on-device translation as well as dictation.

Now extrapolate out just a bit farther…

My house has a lot of Apple devices in it – laptop, watch, AirPods, several iPads, and that’s just the stuff I use. My wife has a similar set. The wearable bits are far more constrained and with me all the time – but also not always able to do everything themselves. And sometimes, they just conflict with each other – especially when it comes to Siri. (Go ahead – say “Hey Siri” in my house, and hear the chorus of responses)

So what about collaboration and communication between these devices? It would be fantastic if they could share sufficient context to make the interactions even more seamless. A way to leverage the capabilities of a remote device (my phone, tablet, or even laptop) from the AirPods and a Siri request. They could potentially even hand off background tasks (like tracking a timer) or know which device has been used most recently to better infer context for a request. For example, when I’m cooking I often want a timer on my watch, not my phone – but “hey Siri” is not at all guaranteed to get it there.

If they could also know about each other’s capabilities and share them, the whole set would become even smarter and more effective. And depending on which rumors you are excited by, they may be able to offload some heavier computation from the more power-constrained devices (wearables) onto nearby, but not physically connected (and power efficient), microprocessors. That could mean generating visuals like Widgets, or perhaps the inverse – running a machine learning model against transmitted Lidar updates to identify independent objects and their traits from a point cloud or computed 3D mesh.

It’ll be interesting to see where this goes – I hope that distributed coordination, a means of allowing it (as a user), and a means of developing for it are somewhere in the near future.

Exploring MultipeerConnectivity

A few weeks ago, I got curious about the MultipeerConnectivity framework available across Apple’s platforms. It’s a neat framework, and there are community-based libraries that layer over it to make it easier to use for some use cases: MultipeerKit (src) being the one that stood out to me.

The promise of this framework is compelling: it seamlessly enables peer-to-peer networking, layering over whatever local transport is available (bluetooth, ethernet if available, a local wifi connection, or a common wifi infrastructure). There’s a lot of “magic” in that capability, layering over underlying technologies and dealing with the advertise and connect mechanisms. Some of it uses Bonjour (aka zeroconf), and I suspect other mechanisms as well.

One of the “quirks” of this technology is that you don’t direct what transport is used, nor do you get information about the transport. You do get a nicely designed cascade of objects, all of which leverage the delegate/protocol structure to do their thing. Unfortunately, the documentation doesn’t make it entirely clear how to use them or what to expect.

The structure starts with an advertiser, which is paired with an advertising browser. It wasn’t completely obvious to me at first, but you don’t need both sides of the peer to peer conversation doing advertising in order to make a connection. One side can advertise, the other browse, and you can establish a connection on that basis. It doesn’t need to be set up bi-directionally, although it can.

When I first started in, having glanced through the developer docs, I thought you needed to have both sides actively advertising for this to work. Nope – bad assumption on my part. Both can advertise, and it makes for interesting viewing of “who’s out there” – but the heart of what enables data transfer is another layer down: MCSession.

You use the browser to “invite” a found peer to a session, and the corresponding advertiser has a delegate you use to accept invites.
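To make that concrete, here’s a minimal sketch of that one-sided flow. The class name and the “mpcf-test” service type string are my own, not the project’s, but the MultipeerConnectivity calls are the ones involved:

import MultipeerConnectivity

class PeerConnector: NSObject, MCNearbyServiceAdvertiserDelegate, MCNearbyServiceBrowserDelegate {
    let myPeer = MCPeerID(displayName: "example-device")
    lazy var session = MCSession(peer: myPeer, securityIdentity: nil, encryptionPreference: .required)
    lazy var advertiser = MCNearbyServiceAdvertiser(peer: myPeer, discoveryInfo: nil, serviceType: "mpcf-test")
    lazy var browser = MCNearbyServiceBrowser(peer: myPeer, serviceType: "mpcf-test")

    // One device calls this...
    func startAdvertising() {
        advertiser.delegate = self
        advertiser.startAdvertisingPeer()
    }

    // ...and the other calls this. Neither needs to do both.
    func startBrowsing() {
        browser.delegate = self
        browser.startBrowsingForPeers()
    }

    // Browser side: invite any peer we find into our session.
    func browser(_ browser: MCNearbyServiceBrowser, foundPeer peerID: MCPeerID,
                 withDiscoveryInfo info: [String: String]?) {
        browser.invitePeer(peerID, to: session, withContext: nil, timeout: 10)
    }

    func browser(_ browser: MCNearbyServiceBrowser, lostPeer peerID: MCPeerID) {}

    // Advertiser side: accept the invitation by handing back a session to join.
    func advertiser(_ advertiser: MCNearbyServiceAdvertiser,
                    didReceiveInvitationFromPeer peerID: MCPeerID,
                    withContext context: Data?,
                    invitationHandler: @escaping (Bool, MCSession?) -> Void) {
        invitationHandler(true, session)
    }
}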

The session (MCSession) is the heart of the communications from here. Session has an active side and a reactive side to its API – methods like send(_:toPeers:with:) pair with a session delegate responding through the method session(_:didReceive:fromPeer:). Before you get into sending data, however, you need to be connected.

While the browser allows you to invite, and the advertiser to accept, it is the session that gives you detail on what’s happening. Session is designed to do this through delegate callbacks to session(_:peer:didChange:), which is how you get state changes for connections and learn who you are connected to. The session state is a tri-state thing: notConnected, connecting, or connected. In my experience so far, you don’t spend very long in the connecting state, and state updates propagate pretty quickly (within a second or two) when the infrastructure changes – for example, when your iOS device goes to sleep, or you set the device into airplane mode. I haven’t measured exactly how fast, or how consistently, these updates propagate.
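A compressed sketch of that delegate surface (names mine, error handling trimmed):

import MultipeerConnectivity

class SessionHandler: NSObject, MCSessionDelegate {
    // Once connected, push some bytes to every connected peer.
    func sendGreetings(over session: MCSession) {
        let payload = Data("hello".utf8)
        try? session.send(payload, toPeers: session.connectedPeers, with: .reliable)
    }

    // Tri-state connection updates: .notConnected, .connecting, .connected
    func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {
        print("\(peerID.displayName) changed state to \(state.rawValue)")
    }

    // The receiving side of send(_:toPeers:with:)
    func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {
        print("received \(data.count) bytes from \(peerID.displayName)")
    }

    // Remaining required delegate methods (streams and resource transfers)
    func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {}
    func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, at localURL: URL?, withError error: Error?) {}
}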

Session is a bi-directional communications channel, and once you are connected, either side can send data for the other to receive. Session also has the concept of not just sending Data, but of explicitly transferring (presumably larger) resources that are URL based. I haven’t experimented with this layer, but I’m guessing it’s a bit more optimized for reading a large file and streaming it out. The session delegate has callbacks for when a transfer starts and when it completes.
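I haven’t exercised it, but the sending side of that resource API looks roughly like this (the helper function and file name are hypothetical):

import MultipeerConnectivity

// Hypothetical helper: stream a (large) file to the first connected peer.
func sendResultsFile(_ fileURL: URL, over session: MCSession) {
    guard let peer = session.connectedPeers.first else { return }
    // Returns a Progress you can observe; the completion handler fires
    // when the transfer finishes (or fails).
    let progress = session.sendResource(at: fileURL, withName: "sample-results", toPeer: peer) { error in
        print("resource transfer complete, error: \(String(describing: error))")
    }
    print("fraction complete: \(progress?.fractionCompleted ?? 0)")
}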

There’s a third transport mechanism, which uses open-ended streams, that I haven’t yet touched. When I started looking, I did find some older github projects that tested the streams capability – and how many simultaneous streams could be triggered and used effectively – but no published results. Alas, those projects were written with swift 3, so while I poked at them out of curiosity, I mostly left them alone.

To explore MultipeerConnectivity, I created a project (available on github) called MPCF-TestBench. The project is open source and available for anyone to compile and use themselves – but no promises on it all working correctly or looking anything close to good. (contributions welcome, but certainly not expected).

Fair warning: when I think of “open source”, it’s the messy sausage-making process, not the completed and pretty, cleaned, ready-to-use-no-work-involved library or application that is the desired end goal. Feel free to dig in to the source, ask questions if you like, improve it and share the changes, or use it to your own explorations – but demand anything and you’ll just get a snicker.

The project is an excuse to do something “heavier” in SwiftUI and see how things work – like how to get updates from more of these delegate heavy structures into the UI, and to see how SwiftUI translates across platforms. In short, it’s an excuse to learn.

All of this started several weeks ago when I poked Guilherme Rambo about MultipeerKit to see how fast it actually worked. He hadn’t made any explicit measurements, so I thought it might be interesting to do just that. To that end, MPCF-TestBench has a (crudely) cobbled-together reflector and test runner with multiple targets (iOS, tvOS, and mac). This is also an excuse to see how SwiftUI translates across the platforms, but more on that in another (later) post. If you go to use this, I’d recommend sticking with the iOS targets for now, as that’s where I’m actively doing my development and using it.

MPCF-TestBench work in progress

I have yet to do the work of trying out the transmissions at various sizes, but you can get basic information for yourself with the code as it stands. The screenshot above shows a run transmitted from my iPhone (an iPhone X) to a 10″ iPad Pro, leveraging my local wifi network, to which both were connected. The test sent 100 data packets of about 1K in size, and a corresponding reflector on the iPad echoed the data back. No delay, just shoving them down a pipe as fast as I could – using the “reliable” transport mode.

I used a simple Codable structure that exports to JSON to dump out the data (although my export mechanism is only half-working, to be honest). Still, it’s far enough along to get a sample available if you’re curious. Feel free to dig apart the list of JSON objects for your own purposes; I’ll be adding more and making a more thorough result set over time.
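The export itself is nothing fancier than Codable plus JSONEncoder – something along these lines, with illustrative property names rather than the actual test-bench types:

import Foundation

// Illustrative stand-in for the recorded measurements, not the real types.
struct TransmissionSample: Codable {
    let sequenceNumber: Int
    let bytes: Int
    let roundTripTime: TimeInterval
}

func exportJSON(_ samples: [TransmissionSample]) throws -> Data {
    let encoder = JSONEncoder()
    encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
    return try encoder.encode(samples)
}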

I haven’t yet been able to establish a bluetooth only connection – the peering mechanism isn’t making a connection, but it could easily be something stupid I’ve done as well.

So there you have it – my initial answer to “how fast is it”: I’m seeing about 3 Kbytes/sec transferred using a “reliable” transport mode, over wifi, and using more recent iOS devices. The transmissions appear to be reasonably stable as well – not a terrible amount of standard deviation in my simple tests.

Continuous Integration with Github Actions for macOS and iOS projects

GitHub Actions released in August 2019 – I’ve been trying them out for nearly a full year, using the beta access available to the adventurous before it was generally available. It was a long time coming, and I saw this feature as GitHub’s missing piece. Some great companies stepped into that early gap and provide excellent services: TravisCI, CircleCI, codeship, SemaphoreCI, Bitrise, and many others. I’ve used most of these, predominantly TravisCI because it was available before the rest and I got started with it. When GitHub finally did circle back and make actions available, I was there trying it out and seeing how it worked.

Setting up CI for macOS and iOS projects has always been a little odd, but doable. For many people who are making apps, the goal is to build the code, run any unit tests, maybe run some UI or integration tests, sign the resulting elements, and ship the whole thing out via TestFlight. Tools like fastlane do a spectacular job of helping to automate into these services where Apple hasn’t provided a lot of support, or connected the dots.

I’m going to focus a bit more narrowly in this post – looking at how to leverage swift package manager and xcodebuild, the command-line tools for building swift packages and mac or iOS applications respectively. I’ll leave the whole “setting up fastlane”, dealing with the complexities of signing code, and submitting builds from CI systems to others.

Building swift packages with github actions

If you want to build a swift package, then reach for swiftpm. You can’t build macOS or iOS applications with swiftpm, but you can create command-line executables or compile swift libraries. Most interestingly, you can compile swift on other platforms – linux is supported, and other operating systems (Windows, Android, and more) are being worked on by the swift open source community. Swiftpm is also the go-to tooling if you want to use the burgeoning server-side swift capabilities.

While there are some complicated corners to using the swift package manager, especially when it comes to integrating existing C or C++ libraries, the basics for how to use it are gloriously simple:

swift test --enable-test-discovery
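For context, the manifest that command reads is just as compact. A minimal Package.swift sketch (the package and target names here are placeholders) looks like:

// swift-tools-version:5.2
import PackageDescription

let package = Package(
    name: "MyLibrary",
    products: [
        .library(name: "MyLibrary", targets: ["MyLibrary"]),
    ],
    targets: [
        .target(name: "MyLibrary"),
        .testTarget(name: "MyLibraryTests", dependencies: ["MyLibrary"]),
    ]
)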

To use that tooling, we need to define how it’s invoked – and that whole accumulation of detail is what goes into a GitHub Action declarative workflow.

To set up an action, create a YAML file in the directory .github/workflows at the root of your repository. GitHub looks in this directory for YAML files, and they become the actions enabled for your repository. The documentation for github actions is available at https://help.github.com/en/actions, but it isn’t exactly easy to decipher unless you’re already familiar with CI and some github-specific terms.

One of the simplest CI definitions I’ve seen is the CI running on SwiftDocOrg’s DocTest repository, which builds its executables for both swift on macOS and swift on Linux:

name: CI
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
jobs:
  macos:
    runs-on: macOS-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Build and Test
        run: swift test
        env:
          DEVELOPER_DIR: /Applications/Xcode_11.4.app/Contents/Developer
  linux:
    runs-on: ubuntu-latest
    container:
      image: swift:5.2
      options: --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --security-opt apparmor=unconfined
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Build and Test
        run: swift test --enable-test-discovery

To explain the setup, let’s look at it in pieces. The very first piece is the name of the action: CI. For all practical purposes the name affects nothing, but knowing it can be helpful, since GitHub indexes actions by name. What I find most useful is that GitHub provides an easy-to-use badge that you can drop into a README file, so that people viewing the rendered markdown get a quick look at the current build status.

The badge uses the repository name and the workflow name together in a URL. The pattern for this URL is:

https://github.com/USERNAME/REPOSITORY_NAME/workflows/WORKFLOW_NAME/badge.svg

Make an image link in markdown to display this badge in your README. For example, a badge for DocTest’s repository could be:

[![Actions Status](https://github.com/SwiftDocOrg/DocTest/workflows/CI/badge.svg)](https://github.com/SwiftDocOrg/DocTest/actions)

The next segment is on, which defines when the action will be applied. DocTest’s repository has the action triggering when the master branch (but no other branches) changes via push, or when a pull request is opened against the master branch.

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

The next segment is jobs, which has two elements defined underneath it. Each job is run separately, and you may declare dependencies between jobs if you want or need to. Each job defines where it runs – more specifically, what operating system is used – and DocTest’s example has a job for building on macOS and another for building on Linux.

The first is the macos job, which runs within a macOS virtual machine:

jobs:
  macos:
    runs-on: macOS-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Build and Test
        run: swift test
        env:
          DEVELOPER_DIR: /Applications/Xcode_11.4.app/Contents/Developer

The steps are run linearly, each having to complete without a failure or error response before the next. This example shows the common practice for leveraging actions/checkout, a pre-defined action in the GitHub “Marketplace“.

Marketplace gets quotes because I think marketplace is a poor name choice – you’re not required to buy anything, which I originally thought was the intention. And to be clear, I’m glad it’s not. GitHub’s mechanism allows anyone to host their own actions, and the marketplace is the place to find them.

The second step simply invokes swift test, just like you might on your own laptop with macOS installed. The environment variable DEVELOPER_DIR is defined here, which Xcode uses as a means to indicate which version of Xcode to use when multiple are installed. The alternative is to explicitly select the version of Xcode with another command, xcode-select.
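If you prefer the xcode-select route, the equivalent steps would look something like this (using the same Xcode path that’s pre-installed on the runner image):

      - name: Select Xcode
        run: sudo xcode-select -s /Applications/Xcode_11.4.app
      - name: Build and Test
        run: swift test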

The GitHub actions runners have been maintained impressively well over the past year, and even beta releases of Xcode are frequently available within weeks of when they are released. The VM image has an impressive array of commonly used tools, libraries, and languages pre-installed – and that’s maintained publicly in a list at https://github.com/actions/virtual-environments/blob/master/images/macos/macos-10.15-Readme.md.

By selecting the version of Xcode with the environment variable declaration, this also implies the version of swift that’s being used, swift version 5.2 in this case.

The last segment of this CI declaration is the version that builds the swift package on Linux.

  linux:
    runs-on: ubuntu-latest
    container:
      image: swift:5.2
      options: --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --security-opt apparmor=unconfined
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Build and Test
        run: swift test --enable-test-discovery

In this case, it’s using Ubuntu 18.04 (the latest supported by GitHub as I’m writing this post) – which has a corresponding README of everything it includes at https://github.com/actions/virtual-environments/blob/master/images/linux/Ubuntu1804-README.md.

The container declaration defines a docker container that’s used to run these steps on top of that base linux image – in this case, the swift:5.2 image. The additional options listed open up specific security mechanisms otherwise locked down within a container – here, enabling the ptrace system call, which is critical to allowing swift to run an integrated REPL or use the LLDB debugger when run within a container.

The last bit that you might have noticed is the option --enable-test-discovery. This is an option from the swift package manager that was only recently released. Where Xcode leverages the Objective-C runtime to dynamically inspect and identify test classes to run, the same didn’t exist (and was a right pain in the butt) for swift on Linux until this option was made available in swift 5.2. The build system creates an index while it’s building the code on Linux, and then uses this index to identify functions that should be invoked based on their name (the ones prefixed with test within subclasses of XCTestCase). The end result is swift test “finding the tests” as most other unit testing libraries do.
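In practice, that naming convention just means something like the following – a subclass of XCTestCase with methods whose names start with test:

import XCTest

final class ExampleTests: XCTestCase {
    // Discovered on Linux by --enable-test-discovery because the method
    // name starts with "test" and lives in an XCTestCase subclass.
    func testAddition() {
        XCTAssertEqual(1 + 1, 2)
    }
}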

Building macOS or iOS applications using xcodebuild with github actions

If you want to build a macOS, tvOS, iOS, or even watchOS application, use xcodebuild. xcodebuild is the command-line invocation that uses the build toolchain built into Xcode, leveraging all the built-in mechanisms with targets, schemes, and build settings, and overlaying interactions with the simulators. To use xcodebuild, you’ll need to have Xcode installed – and with github actions, that’s available through virtualized instances of macOS with Xcode (and a lot of other tools) pre-installed.

The example repository I’m using is one of my own (https://github.com/heckj/MPCF-TestBench/blob/master/.github/workflows/build.yml), although it was a hard choice – as I really like NetNewsWire’s CI setup as a good example. Definitely take a look at https://github.com/Ranchero-Software/NetNewsWire/blob/master/.github/workflows/build.yml and the corresponding CI Tech Note for some excellent detail and to see how another project enabled CI on GitHub.

The whole CI file, from https://github.com/heckj/MPCF-TestBench/blob/master/.github/workflows/build.yml:

name: CI
on: [push]
jobs:
  build:
    runs-on: macos-latest
    strategy:
      matrix:
        run-config:
          - { scheme: 'MPCF-Reflector-mac', destination: 'platform=macOS' }
          - { scheme: 'MPCF-TestRunner-mac', destination: 'platform=macOS' }
          - { scheme: 'MPCF-Reflector-ios', destination: 'platform=iOS Simulator,OS=13.4.1,name=iPhone 8' }
          - { scheme: 'MPCF-TestRunner-ios', destination: 'platform=iOS Simulator,OS=13.4.1,name=iPhone 8' }
          - { scheme: 'MPCF-Reflector-tvOS', destination: 'platform=tvOS Simulator,OS=13.4,name=Apple TV' }
    steps:
    - name: Checkout Project
      uses: actions/checkout@v1
    - name: Homebrew build helpers install
      run: brew bundle
    - name: Show the currently detailed version of Xcode for CLI
      run: xcode-select -p
    - name: Show Build Settings
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showBuildSettings
    - name: Show Build SDK
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showsdks
    - name: Show Available Destinations
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showdestinations
    - name: lint
      run: swift format lint --configuration .swift-format-config -r .
    - name: build and test
      run: xcodebuild clean test -scheme '${{ matrix.run-config['scheme'] }}' -destination '${{ matrix.run-config['destination'] }}' -showBuildTimingSummary

Both this example and NetNewsWire’s CI use the technique of a matrix build. This is immensely useful when you have multiple targets in the same Xcode project and want to verify that they’re all building and testing correctly. The matrix is defined right at the top of this file as a strategy:

jobs:
  build:
    runs-on: macos-latest
    strategy:
      matrix:
        run-config:
          - { scheme: 'MPCF-Reflector-mac', destination: 'platform=macOS' }
          - { scheme: 'MPCF-TestRunner-mac', destination: 'platform=macOS' }
          - { scheme: 'MPCF-Reflector-ios', destination: 'platform=iOS Simulator,OS=13.4.1,name=iPhone 8' }
          - { scheme: 'MPCF-TestRunner-ios', destination: 'platform=iOS Simulator,OS=13.4.1,name=iPhone 8' }
          - { scheme: 'MPCF-Reflector-tvOS', destination: 'platform=tvOS Simulator,OS=13.4,name=Apple TV' }

This is a single job – meaning a single operating system to run the build – but when you use a matrix, it replicates the job by the size of the matrix. In this case, there are 5 matrix definitions – 2 for macOS targets, 2 for iOS targets, and 1 for tvOS. When this runs, it runs 5 parallel instances, each with its own matrix definition, and applies those values to the later steps. This example defines two properties, scheme and destination, to be filled out with different values for each matrix run and used later in the steps. The scheme definition corresponds to the names of schemes in the Xcode workspace, and destination maps to xcodebuild’s destination argument, which is a combination of target platform, name, and version of the operating system to use.

Something to be aware of – the specific values that are used in the destinations of the matrix will vary with the version of Xcode, and the latest version of Xcode will always be used unless you explicitly override it. In this example, I am intentionally setting it to latest to keep tracking any updates, but if you’re building a CI system for verifying stability over time, that’s probably something you want to explicitly declare and lock down in your CI configuration.

Checkout is pretty self-explanatory, and right after it you’ll see the step that installs helpers.

    - name: Homebrew build helpers install
      run: brew bundle

This uses a feature of Homebrew called bundle, which reads a file named Brewfile, allowing you to define a single place to say “install all these additional tools for my later use”.
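As an illustration, a Brewfile for this kind of setup can be as small as the following (the actual contents of this project’s Brewfile may differ):

# Brewfile – tools installed by `brew bundle` before the build steps run
brew "swift-format"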

The next steps are purely informational, and aren’t actually needed for the build, but are handy to have for debugging:

    - name: Show the currently detailed version of Xcode for CLI
      run: xcode-select -p
    - name: Show Build Settings
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showBuildSettings
    - name: Show Build SDK
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showsdks
    - name: Show Available Destinations
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showdestinations

These all invoke xcodebuild with the various options to show the parameters available. These parameters vary with the version of Xcode that is running, and the very first command (xcode-select -p) prints that version.

The next command, named lint, uses one of the helpers defined in that Brewfile: swift-format. In this example, I’m using Apple’s swift-format command (the other common example is Nick Lockwood’s swiftformat). In other projects I’ve used Nick’s swiftformat (and the often-paired tool swiftlint), both of which I like quite a lot. This project was an excuse to try out the tooling from Apple’s open source version and see how it worked, and how I liked working with it. My final opinion is still pending, but it mostly does the right thing.

    - name: lint
      run: swift format lint --configuration .swift-format-config -r .

The lint step really applies a verification of formatting, so arguably it might not belong in a build. I rely on linters (in several languages) to yell at me, as otherwise I’m lousy about consistency in how I format code. In practice, I also try to use the same tools to auto-format the code, just to keep it up to whatever standard seems predominantly acceptable.

The final step is where it compiles and runs the tests:

    - name: build and test
      run: xcodebuild clean test -scheme '${{ matrix.run-config['scheme'] }}' -destination '${{ matrix.run-config['destination'] }}' -showBuildTimingSummary

And you can see the references to the matrix values that are applied directly as parameters to xcodebuild. The text ${{ matrix.run-config['scheme'] }} is a replacement definition, indicating that the value of scheme for the running matrix build should be dropped into that position for the command line argument.

The NetNewsWire project uses the exact same technique, although the developers run xcodebuild from within a shell script so that they can arrange for signing keys and other parameters to be set up. It’s a thoughtful and ingenious system, but quite a bit harder to explain or show the matrix being used directly.

The downsides of GitHub Actions

While I am very pleased with how GitHub actions works, I am completely leveraging the “no cost to run” options. The cost of the pay-for GitHub Actions is notable, and in some cases, injuriously expensive.

If you’re running GitHub actions in a private repository (and paying for the privilege), you may find it just too expensive. The billing information for github actions shows that running macOS virtual images is 10 times the price of running Linux images ($0.08 per minute vs. $0.008 per minute). If you build on every pull request, you’re going to rack up an impressive number of minutes very quickly. On top of that, techniques like the matrix build add an additional multiplier – 5x in my case. GitHub does offer the option of letting you create your own “GitHub Action runners”, and I’d recommend seriously looking at that option – just using GitHub as the coordinator – from the cost perspective alone. It’s more “stuff you have to maintain”, but in the end likely quite a bit cheaper than paying the GitHub Actions hosting fees.

If you are building something purely open source, then you’re allowed to take advantage of the free services. That’s pretty darned nice.

Where GitHub is not yet supporting the swift ecosystem

This isn’t directly related to GitHub Actions, but more a related note on how GitHub is (or isn’t) supporting the swift ecosystem. There are a tremendous number of support options on their site for some great features, but really all of them are pretty anemic when it comes to supporting swift, either via Apple or through the open source ecosystem:

  • There’s no current availability for swift language syntax highlighting. It does offer Objective-C, which is a pretty good fallback, but swift specific highlighting would be really lovely to have when reviewing pull requests or just reading code.
  • GitHub’s security mechanisms, which host security advisories, track dependency flow, and the recently announced security code scanning – don’t support swift:
    • Dependencies through Package.swift, although parsable, aren’t tracked and aren’t reflected.
    • While you can draft a security advisory and host it at Github, it won’t propagate to any dependencies because it’s not tracked.
  • A year ago GitHub announced that it would be supporting “Swift Packages” – I’m not aware of much that came of that effort, if anything.
  • The same constraint of not parsing and tracking dependencies is highlighted in their Insights tab and the “Dependency graph”. On swift packages, there’s nothing. Same with Xcode projects. Nada, zilch, zip.
  • The code scanning, which uses a really great technology called CodeQL, doesn’t support Swift, or even Objective-C.

At a bare minimum, I would have hoped that GitHub would have enabled parsing and showing dependencies for Package.swift. I can understand not wanting to reverse engineer the .pbxproj XML plist structure to get to dependencies, but swift itself is open source and the package manifest is completely open.
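The tooling to read that manifest already exists – swift itself will emit the parsed manifest and resolved dependencies as JSON, which seems straightforward to consume:

# Prints the parsed manifest (including declared dependencies) as JSON
swift package dump-package

# Resolves and prints the full dependency tree
swift package show-dependencies --format json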

In addition, SwiftSyntax – the AST parsing mechanism that swift enables through open source – is already being used for swift language server support. It would, I think, be a perfect complement to leverage within CodeQL.

I do hope we’ll see more movement on these fronts from GitHub, but if not – well, I suppose it’s a good differentiating opportunity for competing sites such as BitBucket or GitLab.

How to make a SwiftUI component that draws a Shape with light

While I was experimenting with SwiftUI, one of the effects I wanted to re-create was taking a shape and making “stroke effects” on it. The goal was to create a SwiftUI component that could take an arbitrary path, apply this effect to the shape, and render it within the context of a larger SwiftUI view.

The effect I wanted to create was a “laser light” like drawing. You see this a lot in science fiction film user interfaces or backgrounds, and it is just kind of fun and neat. And yeah, I want it to be decent in both light and dark modes, although “dark mode” is where it will shine the most.

And the code to represent this is:

VStack {
    LaserLightShape(color: Color.orange, lineWidth: 1) {
        Rectangle()
    }

    LaserLightShape(color: Color.red, lineWidth: 2) {
        Circle()
    }

    LaserLightShape(color: Color.blue, lineWidth: 0.5) {
        Path { path in
            path.move(to: CGPoint(x: 0, y: 0))
            path.addLine(to: CGPoint(x: 50, y: 50))
        }
    }
}

Solving this challenge underscored the change in mindset from imperative code to declarative code. When I was looking through the SwiftUI methods, I kept looking for a method with a means to “add on” to the existing view – or perhaps to replace an element within the view. In both cases, this highlighted my pattern of thinking – me telling the framework (or library) what to do. SwiftUI’s biggest win (and challenge) is inverting that thinking.

The methods to work with SwiftUI Views don’t “take a view, modify it, and hand it back”. Instead they take in some information, make a whole new View, and return it. There’s no “tweaking” or “changing” – the closest you get to that (imperative) paradigm is wholesale replacement. My natural instinct was to reach for something that had an explicit side effect, I suspect because that’s how a lot of languages I’ve used for years got something done. It’s a pattern I’m familiar with, and the first tool I reach towards.

This change in mindset is also why you’ll see a lot of the same people who “get” the new paradigm talking about how it overlaps with functional programming, using phrases like “pure functions”, and perhaps even “functors” and “monads”. I quickly get lost in many of these abstract concepts. For me, the closest parallel is understanding inversion of control – the mental leap in moving from using a library to writing for a framework. This feels very much akin to that ‘ah ha’ moment. And for the record, I’m not asserting that I fully understand it – only that I recognize I need to change how I’m framing the problem in my head in order to use these new tools.

To solve this particular challenge, I originally started looking at SwiftUI ViewModifiers. I’d been reading about them and thought maybe that’s what I want. Unfortunately, that didn’t work – ViewModifiers are great when you want to layer additional effects on a View – which means constraining what you do to the methods available and defined on the View protocol – but I wanted to work on a Shape, specifically leveraging the stroke method, which is a different critter.

The solution that I came up with uses a ViewBuilder. I wrote a bit about these before talking about making PreviewBackground. The mental framing that helped provide this solution was thinking about what I wanted to achieve as taking some information (a Shape) and returning some kind of View.

A ViewBuilder is a fairly generic-heavy function. So to make it accept something conforming to the Shape protocol, I specified that the generic type it accepts is constrained to that protocol. I am still stumbling through the specifics of how to effectively use generics and protocols with swift, and thought this was a pretty nice way to show some of their strength.

Without further ado, here’s the code that produces the effect:

struct LaserLightShape<Content>: View where Content: Shape {
    let content: () -> Content
    let color: Color
    let lineWidth: CGFloat

    @Environment(\.colorScheme) var colorSchemeMode
    var blendMode: BlendMode {
        if colorSchemeMode == .dark {
            // lightens content within a dark
            // color scheme
            return BlendMode.colorDodge
        } else {
            // darkens content within a light
            // color scheme
            return BlendMode.colorBurn
        }
    }

    init(color: Color, lineWidth: CGFloat, @ViewBuilder content: @escaping () -> Content) {
        self.content = content
        self.color = color
        self.lineWidth = lineWidth
    }

    var body: some View {
        ZStack {
            // top layer, intended only to reinforce the color
            // narrowest, and not blurred or blended
            if colorSchemeMode == .dark {
                content()
                    .stroke(Color.primary, lineWidth: lineWidth / 4)
            } else {
                content()
                    .stroke(color, lineWidth: lineWidth / 4)
            }

            if colorSchemeMode == .dark {
                // pushes in a bit of additional lightness 
                // when in dark mode
                content()
                    .stroke(Color.primary, lineWidth: lineWidth)
                    .blendMode(.softLight)
            }
            // middle layer, half-width of the stroke and blended
            // with reduced opacity. re-inforces the underlying
            // color - blended to impact the color, but not blurred
            content()
                .stroke(color, lineWidth: lineWidth / 2)
                .blendMode(blendMode)

            // bottom layer - broad, blurred out, semi-transparent
            // this is the "glow" around the shape
            if colorSchemeMode == .dark {
                content()
                    .stroke(color, lineWidth: lineWidth)
                    .blur(radius: lineWidth)
                    .opacity(0.9)
            } else {
                // knock back the blur/background effects on
                // light mode vs. dark mode
                content()
                    .stroke(color, lineWidth: lineWidth / 2)
                    .blur(radius: lineWidth / 1.5)
                    .opacity(0.8)
            }
        }
    }
}

The instructions for what to do are embedded in the body of the view we are returning. The pattern is one I looked up from online references for how to make this same effect in photoshop. The gist is:

  • You take the base shape, make a wide stroke of it, and blur it out a bit. This will end up being the “widest” portion of the effect.
  • Over that, you put another layer – roughly half the width of the bottom layer, and stroke it with the color you want to show.
  • And then you add a final top layer, narrowest of the set, intended to put the “highlight” or shine onto the result.

You’ll notice in the code that it’s also dynamic in regards to light and dark backgrounds – in a light background, I tried to reinforce the color without making the effect look like an unholy darkness shadow trying to swallow the result, and in the dark background I wanted to have a “laser light” light shine show through. I also found through trial-and-error experiments that it helped to have a fourth layer sandwiched in there in dark mode specifically to brighten up the stack, adding in more “white” to the end effect.

A lot of this ends up leveraging the blend modes from SwiftUI that composite layers. I’m far from a master of blend modes, and had to look up a number of primers to figure out what I wanted from the fairly large set of possibilities.

I suspect there is a lot more that can be accomplished by leveraging the ViewBuilder pattern.

Introducing and explaining the PreviewBackground package

While learning and experimenting with SwiftUI, I use the canvas assistant editor to preview SwiftUI views extensively. It is an amazing feature of Xcode 11 and I love it. There is a quirk that gets difficult for me though – the default behavior of the preview provider uses a gray background. I frequently use multiple previews while making SwiftUI elements, wanting to see my creation on a background supporting both light and dark modes.

The following little stanza is a lovely way to iterate through the modes and display them as previews:

#if DEBUG
struct ExampleView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ForEach(ColorScheme.allCases,
                    id: \.self) { scheme in

                Text("preview")
                    .environment(\.colorScheme, scheme)
                    .frame(width: 100,
                           height: 100,
                           alignment: .center)
                    .previewDisplayName("\(scheme)")
            }
        }
    }
}
#endif

Results in the following preview:

The gray background doesn’t help all that much here. It is perfect when you are viewing a fairly composed element set, as you are often working over an existing background. But when you are creating an element to stand alone, or moving an element around, it falls short. In those cases, I really want a background behind the element.

And this is exactly what PreviewBackground provides. I made PreviewBackground into a SwiftPM package. While I could have created this effect with a ViewModifier, I tried it out as a ViewBuilder instead, thinking it would be nice to wrap the elements I want to preview explicitly.

The same example, using PreviewBackground:

import PreviewBackground

#if DEBUG
struct ExampleView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ForEach(ColorScheme.allCases,
                    id: \.self) { scheme in
                PreviewBackground {
                    Text("preview")
                }
                .environment(\.colorScheme, scheme)
                .frame(width: 100,
                       height: 100,
                       alignment: .center)
                .previewDisplayName("\(scheme)")
            }
        }
    }
}
#endif

The code is available on Github, and you may include it within your own projects by adding a swift package with the URL: https://github.com/heckj/PreviewBackground

Remember to import PreviewBackground in the views where you want to use it, and work away!

Explaining the code

There are not many examples of using ViewBuilder to construct a view, and this is a simple use case. Here is how it works:

import SwiftUI

public struct PreviewBackground<Content>: View where Content: View {
    @Environment(\.colorScheme) public var colorSchemeMode

    public let content: () -> Content

    public init(@ViewBuilder content: @escaping () -> Content) {
        self.content = content
    }

    public var body: some View {
        ZStack {
            if colorSchemeMode == .dark {
                Color.black
            } else {
                Color.white
            }
            content()
        }
    }
}

The heart of using ViewBuilder is using it within a View initializer to return a (specific but) generic instance of View, and using the returned closure as a property that you execute when composing a view.

There is a lot of complexity in that statement. Allow me to try and explain it:

Normally when creating a SwiftUI view, you create a struct that conforms to the View protocol. This is written in code as struct SomeView: View. You may use the default initializer that swift creates for you, or you can write your own – often to set properties on your view. ViewBuilder allows you to take a function in that initializer that returns an arbitrary View. But since the kind of view is arbitrary, we need to make the struct generic – we can’t assert exactly what type it will be until the closure is compiled. To tell the compiler it’ll need to do the work to figure out the types, we label the struct as being generic, using the <SomeType> syntax:

struct SomeView<Content>: View where Content: View

This says there is a generic type that we’re calling Content, and that generic type is expected to conform to the View protocol. There is a more compact way to represent this that you may prefer:

struct SomeView<Content: View>: View

Within the view itself, we have a property – which we name content. The type of this content isn’t known up front – it is the arbitrary type that the compiler gets to infer from the closure that will be provided in the future. This declaration says the content property will be a closure – taking no parameters – that returns an arbitrary type we are calling Content.

public let content: () -> Content

Then in the initializer, we use ViewBuilder:

public init(@ViewBuilder content: @escaping () -> Content) {
    self.content = content
}

In case it wasn’t obvious, ViewBuilder is a function builder, the swift feature that enables this declarative structure with SwiftUI. This is what allows us to ultimately use it within that declarative syntax form.

The final bit of code to describe is using the @Environment property wrapper.

@Environment(\.colorScheme) public var colorSchemeMode

The property wrapper is not in common use, but it is perfect for this need: it exposes a specific part of the existing environment as a local property for this view. This is what enables PreviewBackground to choose the background color appropriate to the mode – by reading the environment, it chooses an appropriately colored background. It then uses that property to assemble a view by invoking the property named content (which was provided by the function builder) within a ZStack.

By using ViewBuilder, we can use the PreviewBackground struct like any other composed view within SwiftUI:

var body: some View {
    PreviewBackground {
        Text("Hello there!")
    }
}

If we had created this code as a ViewModifier, then using it would look different – instead of the curly-bracket syntax, we would be chaining on a method. The default set up for something like that looks like:

var body: some View {
    Text("Hello there!")
    .modifier(PreviewBackground())
}
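
For comparison, a hypothetical ViewModifier version of the same idea would look roughly like this:

import SwiftUI

struct PreviewBackground: ViewModifier {
    @Environment(\.colorScheme) var colorSchemeMode

    // Wraps the modified content in a ZStack over a mode-appropriate color.
    func body(content: Content) -> some View {
        ZStack {
            if colorSchemeMode == .dark {
                Color.black
            } else {
                Color.white
            }
            content
        }
    }
}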

I wanted to enable the curly-bracket syntax for this, hence the choice of using a ViewBuilder.

A side note about moving code into a Swift package

When I created this code, I did so within the context of another project. I wanted to use it across a second project, and the code was simple enough (a single file) to copy/paste – but instead I went ahead and made it a Swift package. Partially to make it easier for anyone else to use, but also just to get a bit more experience with what it takes to set up and use this kind of thing.

The mistake that I made immediately on moving the code was not explicitly making all the structs and properties public. It moved over, compiled fine, and everything was looking great as a package, but then when I went to use it – I got some really odd errors:

Cannot call value of non-function type 'module<PreviewBackground>'

In other instances (yes, I admit this wasn’t the first time I made this mistake – and it likely won’t be the last) the swift compiler would complain about the scope of a function, letting me know that it was using the default internal scope and was not available. But SwiftUI and this lovely function builder mechanism make the compiler work quite a bit harder, and it is not nearly as good at identifying why this mistake might have happened – only that it was failing.

If you hit the error Cannot call value of non-function type when moving code into a package, you may have forgotten to make the struct (and relevant properties) explicitly public.

Four strategies to use while developing SwiftUI components

Let’s start out with the (possibly) obvious: when I code, I frequently make mistakes (and fix them); but while I am going through that process, function builders are frequently kicking my butt. When you are creating SwiftUI views, you use function builders intensely – and the compiler is often at a loss to explain how I screwed up. And yeah, even with the amazing new updates to the Diagnostic Engine alongside Swift 5.2, which I am loving.

What is a function builder? It is the thing that looks like a normal “do some work” code closure in Swift that you use as the declarative structure when you are creating a SwiftUI view. When you see code such as:

import SwiftUI

struct ASimpleExampleView: View {
    var body: some View {
        Text("Hello, World!")
    }
}

The bit after some View is the function builder closure, which includes the single line Text("Hello, World!").

The first mistake I make is assuming all closures are normal “workin’ on the code” closures, and I immediately start trying to put everyday code inside of function builders. When I do, the compiler – often immediately, and somewhat understandably – freaks out. The error message that appears in Xcode:

Function declares an opaque return type, but has no return statements in its body from which to infer an underlying type

And sometimes there are other errors as well. It really depends on what I stacked together, how I grouped and composed the various underlying elements in that top-level view, and ultimately what I messed up deep inside all of that.

I want to do some calculations in some of what I am creating, but doing them inline in the function builder closures is definitely not happening, so my first recommended strategy:

Strategy #1: Move calculations into a function on the view

Most of the time I’m doing a calculation, it is because I want to determine a value to hand in to a SwiftUI view modifier – fiddling with the opacity, position, or perhaps a line width. If you are really careful, you can do some of that work – often the simple bits – inline. But when I do, I invariably screw it up: a mismatched type, a mishandled optional, or something similar. At those times, with the code inline in a function builder closure, the compiler has a hell of a hard time figuring out what to tell me about how I screwed it up. By putting the relevant calculation into a function that returns an explicit type, the compiler gets a far more constrained place to provide feedback about what went wrong.

As an example:

struct ASimpleExampleView: View {
    func determineOpacity() -> Double {
        1
    }

    var body: some View {
        ZStack {
            Text("Hello World").opacity(determineOpacity())
        }
    }
}

Sometimes you aren’t even doing calculations, and the compiler still gets into a tizzy about the inferred type being returned. I have barked my shins on that particular edge repeatedly while experimenting with all the various options, seeing what I like in a visualization. The canvas assistant editor that is available in Xcode is a god-send for fast visual feedback, but I get carried away assembling lots of blocks with ZStacks, HStacks, and VStacks to see what I can do. This leads directly to my second biggest win:

Strategy #2: Ruthlessly refactor your views into subcomponents

I am beginning to think that seeing repeated, multiple kinds of stacks together in a single view is possibly a code smell. But more than anything else, keeping the code within a single SwiftUI view as brutally simple as possible gives the compiler a better-than-even chance of being able to tell me what I screwed up, rather than throwing up its proverbial hands with an inference failure.

There are a number of lovely mechanisms with Binding that make it easy to compose and link to the relevant data that you want to use. When I am making a subcomponent that provides some visual information that I expect the enclosing view to be tracking, I have started using the @Binding property wrapper to pass it in, which works nicely in the enclosing view.
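
As a rough sketch of that shape – the names here are made up purely for illustration:

import SwiftUI

struct HighlightedLabel: View {
    // The enclosing view owns this value; the subcomponent just binds to it.
    @Binding var isHighlighted: Bool

    var body: some View {
        Text("Status")
            .foregroundColor(isHighlighted ? .red : .primary)
    }
}

struct EnclosingView: View {
    @State private var isHighlighted = false

    var body: some View {
        VStack {
            HighlightedLabel(isHighlighted: $isHighlighted)
            Toggle("Highlight", isOn: $isHighlighted)
        }
    }
}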

TIP:

When you’re using @Binding, remember that you can make a constant binding in the PreviewProvider in that same file:

YourView(someValue: .constant(5.0))

While I was writing this, John Sundell published a very in-depth look at exactly this topic. His article Avoiding Massive SwiftUI Views covers another angle of how and why to ruthlessly refactor your views.

On the topic of the mechanics of that refactoring: once we learn what to do, it leads to leveraging Xcode’s canvas assistant editor with PreviewProvider – and my next strategy:

Strategy #3: Use Group and multiple view instances to see common visual options quickly

This strategy is more or less obvious, and was highlighted in a number of the SwiftUI WWDC presentations that are online. The technique is immensely useful when you have a couple of variations of your view that you want to keep operational. It allows you to visually make sure they are working as desired while you continue development. In my growing example code, this looks like:

import SwiftUI

struct ASimpleExampleView: View {
    let opacity: Double
    @Binding var makeHeavy: Bool

    func determineOpacity() -> Double {
        // maybe do some calculation here
        // mixing the incoming data
        opacity
    }

    func determineFontWeight() -> Font.Weight {
        if makeHeavy {
            return .heavy
        }
        return .regular
    }

    var body: some View {
        ZStack {
            Text("Hello World")
                .fontWeight(determineFontWeight())
                .opacity(determineOpacity())
        }
    }
}

struct ASimpleExampleView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ASimpleExampleView(opacity: 0.8, 
                makeHeavy: .constant(true))

            ASimpleExampleView(opacity: 0.8, 
                makeHeavy: .constant(false))
        }
    }
}

And the resulting canvas assistant editor view:

This does not always help you experiment with every variation of what your views might look like, but for sets of pre-defined options, or for data that influences your view, it can make a huge difference. A good variation that I recommend everyone check is the accessibility environment settings, to make sure everything renders as you expect. Another that I hear is in relatively frequent use: verifying localization rendering.
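
For example, a couple of extra instances in the preview Group with the environment tweaked can cover those cases. A sketch along these lines – the specific size category and locale are just ones I picked for illustration:

struct ASimpleExampleView_EnvironmentPreviews: PreviewProvider {
    static var previews: some View {
        Group {
            // Check rendering at a large accessibility text size.
            ASimpleExampleView(opacity: 0.8, makeHeavy: .constant(true))
                .environment(\.sizeCategory, .accessibilityExtraExtraLarge)

            // Check rendering in another locale (assuming localized strings exist).
            ASimpleExampleView(opacity: 0.8, makeHeavy: .constant(false))
                .environment(\.locale, Locale(identifier: "fr"))
        }
    }
}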

The whole rapid experimentation and feedback capability is what is so compelling about using SwiftUI. Which leads pretty directly to my next strategy:

Strategy #4: Consider making throw-away control views to tweak your visualization effects

I am not fortunate enough to constantly work closely with a designer. Additionally, I often do not have the foggiest idea of how some variations will feel in terms of a final design. When the time comes, seeing the results on a device (or on multiple devices) makes a huge difference.

You do not want to do this for every possible variation. That is where mocks fit into the design and development process – take the time to make them and see what you think. Once you have narrowed down your options to a few, then this strategy can really kick in and be effective.

In the cases where I have a small number of variations to try out, I encapsulate those options into values that I can control. Then I make a throw-away view – one that will never be shown in the final code – that allows me to tweak the view within the context of a running application. Then the whole thing goes into whatever application I am working on – macOS, iOS, etc. – and I see how it looks.

When I am making a throw-away control view, I often also make (another throw-away) SwiftUI view that composes the control and controlled view together, as I intend to display it in the application. This is primarily to see the combined effect in a single Preview within Xcode. The live controls are not active in the Xcode canvas assistant editor, but it helps to see how having the controls influences the rest of the view structure.
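
Continuing the running example, a throw-away harness might look something like this – the name and layout are placeholders, and it never ships with the app:

struct ASimpleExampleHarness: View {
    // The harness owns the values being tweaked and feeds them to the view.
    @State private var opacity: Double = 0.8
    @State private var makeHeavy = false

    var body: some View {
        VStack {
            ASimpleExampleView(opacity: opacity, makeHeavy: $makeHeavy)
            Divider()
            // Live controls for tweaking the values on a running device.
            Slider(value: $opacity, in: 0...1)
            Toggle("Heavy font", isOn: $makeHeavy)
        }
        .padding()
    }
}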

A final note: Do not be too aggressive about moving code into a SwiftPM package

You may (like me) be tempted to move your code into a library, especially with the lovely SwiftPM capabilities that now exist within Xcode 11. This does work, and from my initial experiments it functions quite well. But there is a significant downside, at least with the current versions (including Xcode 11.4 beta 3), while you are still doing active development on the library:

If you open the package to edit or tweak it in Xcode, loading and building from only Package.swift without an associated Xcode project, the SwiftUI canvas assistant preview will not function. If you use an Xcode project file, it works fine – so if you do go down this route, just be cautious about removing the Xcode project file for now. I have filed feedback with Apple to report the issue – against both Xcode 11.3 (FB7619098) and Xcode 11.4 beta 3 (FB7615026).

I would not recommend moving anything into a library until you have it stable and settled in use. There are also still some awkward quirks about developing code and a dependent library at the same time with Xcode. It can be done, but it plays merry havoc with Xcode’s automatic build mechanisms and CI.

Using Combine v1.1 is available

After getting the major edits for the existing content done, I called the result the first release. As with any creative product, I wasn’t happy with some of the corners that still had rough edges. Over the past two weeks I fleshed those out, wrote a bunch of unit tests, figured out some of the darker corners that I’d previously ignored, and generally worked to improve the overall consistency.

The results have been flowing into the online version as I merged them. And now the version available on Gumroad in PDF and ePub formats has been updated as well. Anyone who’s previously purchased the content gets the updates for free – just log in and they are available for you.

The rough bits that were fleshed out include several areas of content:

  • Tests created and content written (and updated) for the multicast and share operators. The focus was primarily how they work and how to use them.
  • Worked through what the Record publisher offers (and doesn’t offer), including how to serialize & deserialize recorded streams (while this sounds cool, it’s not ultimately as useful as I hoped it might be).
  • Added the missing note that Swift’s Result type can also be used as a publisher, courtesy of a little extension added in Combine – there’s a short sketch of this after the list.
  • Updated some of the details of throttle and debounce with the specifics of the delays incurred in timing, after having validated the theories with some unit tests (spoiler: debounce always delays events by a short bit, but throttle doesn’t have much of an impact). I had previously written about throttle and debounce on this blog as well.
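
As a quick illustration of that Result extension, here is a minimal sketch – the error type, value, and printing are just stand-ins for the example:

import Combine

// A stand-in error type for the example.
enum ExampleError: Error {
    case failed
}

// Result's `publisher` property (added by Combine) emits the success value
// and finishes, or fails immediately with the error.
let result: Result<Int, ExampleError> = .success(42)
let cancellable = result.publisher
    .sink(receiveCompletion: { completion in
        print("completion: \(completion)")
    }, receiveValue: { value in
        print("value: \(value)")
    })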

The new version is 1.1, tagged in the underlying repository if you are so inclined to review/poke at the unit tests beyond the narrative details I shared in the book itself.