Exploring MultipeerConnectivity

A few weeks ago, I got curious about the MultipeerConnectivity framework available across Apple’s platforms. It’s a neat framework, and there are community-based libraries that layer over it to make it easier to use for some use cases: MultipeerKit (src) being the one that stood out to me.

The promise of this framework is compelling: seamlessly enabling peer-to-peer networking, layering over whatever local transport is available (Bluetooth, Ethernet if available, a local WiFi connection, or a common WiFi infrastructure). There’s a lot of “magic” in that capability, layering over the underlying technologies and dealing with the advertise and connect mechanisms. Some of it uses Bonjour (aka zeroconf), and I suspect other mechanisms as well.

One of the “quirks” of this technology is that you don’t direct what transport is used, nor do you get information about which transport is in use. You do get a nicely designed cascade of objects, all of which leverage the delegate/protocol structure to do their thing. Unfortunately, the documentation doesn’t make it entirely clear how to use them or what to expect.

The structure starts with an advertiser, which is paired with a browser. It wasn’t completely obvious to me at first, but you don’t need both sides of the peer-to-peer conversation advertising in order to make a connection. One side can advertise, the other browse, and you can establish a connection on that basis. It doesn’t need to be set up bi-directionally, although it can be.

When I first started in, having glanced through the developer docs, I thought you needed to have both sides actively advertising for this to work. Nope, bad assumption on my part. Both can advertise, and it makes for interesting viewing of “who’s out there” – but the heart that enables data transfer is another layer down: MCSession.

You use the browser to “invite” a found peer to a session, and the corresponding advertiser has a delegate you use to accept invites.
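In code, that handshake looks something like the following sketch. This is my own illustration (not code from MPCF-TestBench), and it assumes you’ve already created an MCSession and hold it in a session property:

import MultipeerConnectivity

class PeerConnector: NSObject, MCNearbyServiceBrowserDelegate, MCNearbyServiceAdvertiserDelegate {
    let session: MCSession

    init(session: MCSession) {
        self.session = session
        super.init()
    }

    // Browser side: invite any peer we discover into our session.
    func browser(_ browser: MCNearbyServiceBrowser, foundPeer peerID: MCPeerID,
                 withDiscoveryInfo info: [String: String]?) {
        browser.invitePeer(peerID, to: session, withContext: nil, timeout: 10)
    }

    func browser(_ browser: MCNearbyServiceBrowser, lostPeer peerID: MCPeerID) {}

    // Advertiser side: accept the invitation, handing back the session to join.
    func advertiser(_ advertiser: MCNearbyServiceAdvertiser,
                    didReceiveInvitationFromPeer peerID: MCPeerID,
                    withContext context: Data?,
                    invitationHandler: @escaping (Bool, MCSession?) -> Void) {
        invitationHandler(true, session)
    }
}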

The session (MCSession) is the heart of the communications from here. Session has an active side and a reactive side to its API – methods like send(_:toPeers:with:) pair to a session delegate responding with the method session(_:didReceive:fromPeer:). Before you get into sending data, however, you need to be connected.

While the browser allows you to invite, and the advertiser to accept, it is the session that gives you detail on what’s happening. Session is designed to do this through delegate callbacks to session(_:peer:didChange:), which is how you get state changes for connections and information on to whom you are connected. The session state is a tri-state thing: notConnected, connecting, or connected. In my experience so far, you don’t spend very long in the connecting state, and state updates propagate pretty quickly (within a second or two) when the infrastructure changes – for example, when your iOS device goes to sleep, or you set the device into airplane mode. I haven’t measured exactly how fast, or how consistently, these updates propagate.
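To make that concrete, here is a minimal sketch of a session delegate – again my own illustration rather than the testbench code – showing the state callback and the data callback, along with the remaining stubs that MCSessionDelegate requires:

import MultipeerConnectivity

class SessionHandler: NSObject, MCSessionDelegate {
    // Called as peers move through notConnected, connecting, and connected.
    func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {
        print("\(peerID.displayName) changed to state \(state.rawValue)")
    }

    // The receiving side of send(_:toPeers:with:).
    func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {
        print("received \(data.count) bytes from \(peerID.displayName)")
    }

    // Required stubs for the stream and resource-transfer callbacks.
    func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {}
    func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, at localURL: URL?, withError error: Error?) {}
}

Once the state callback reports connected, sending is a single call (payload here being any Data value): try session.send(payload, toPeers: session.connectedPeers, with: .reliable).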

Session is a bi-directional communications channel, and once you are connected, either side can send data for the other to receive. Session also has the concept of not just sending Data, but of explicitly transferring (presumably larger) resources that are URL-based. I haven’t experimented with this layer, but I’m guessing it’s a bit more optimized for reading a large file and streaming it out. The session delegate has callbacks for when a transfer starts and when it completes.

There’s a third transport mechanism, which uses open-ended streams, that I haven’t yet touched. When I started looking, I did find some older github projects that tested the streams capability – and how many simultaneous streams could be triggered and used effectively – but no published results. Alas, those projects were written with swift 3, so while I poked at them out of curiosity, I mostly left them alone.

To explore MultipeerConnectivity, I created a project (available on github) called MPCF-TestBench. The project is open source and available for anyone to compile and use themselves – but no promises on it all working correctly or looking anything close to good. (contributions welcome, but certainly not expected).

Fair warning: when I think of “open source”, it’s the messy sausage-making process, not the completed and pretty, cleaned, ready-to-use-no-work-involved library or application that is the desired end goal. Feel free to dig in to the source, ask questions if you like, improve it and share the changes, or use it for your own explorations – but demand anything and you’ll just get a snicker.

The project is an excuse to do something “heavier” in SwiftUI and see how things work – like how to get updates from more of these delegate heavy structures into the UI, and to see how SwiftUI translates across platforms. In short, it’s an excuse to learn.

All of this started several weeks ago when I poked Guilherme Rambo about MultipeerKit to see how fast it actually worked. He hadn’t made any explicit measurements, so I thought it might be interesting to do just that. To that end, MPCF-TestBench has a (crudely) cobbled-together reflector and a test-runner with multiple targets (iOS, tvOS, and mac). This is also an excuse to see how SwiftUI translates across the platforms, but more on that in another (later) post. If you go to use this, I’d recommend sticking with the iOS targets for now, as that’s where I’m actively doing my development and using it.

MPCF-TestBench work in progress

I have yet to do the work of trying out the transmissions at various sizes, but you can get basic information for yourself with the code as it stands. The screenshot above was transmitted from my iPhone X to a 10″ iPad Pro, leveraging my local WiFi network, to which both were connected. The test sent 100 data packets that were about 1K in size, and a corresponding reflector on the iPad echoed the data back. No delay, just shoving them down a pipe as fast as I could – using the “reliable” transport mode.

I used a simple Codable structure that exports to JSON to dump out the data (although my export mechanism is only half-working, to be honest). Still, it’s far enough along to get a sample available if you’re curious. Feel free to dig apart the list of JSON objects for your own purposes; I’ll be adding more and making a more thorough result set over time.

I haven’t yet been able to establish a Bluetooth-only connection – the peering mechanism isn’t making a connection, but it could easily be something stupid I’ve done as well.

So there you have it – my initial answer to “how fast is it”: I’m seeing about 3 Kbytes/sec transferred using a “reliable” transport mode, over wifi, and using more recent iOS devices. The transmissions appear to be reasonably stable as well – not a terrible amount of standard deviation in my simple tests.

Continuous Integration with Github Actions for macOS and iOS projects

GitHub Actions was released in August 2019 – I’ve been trying it out for nearly a full year, using the beta access available to the adventurous before it became generally available. It was a long time coming, and I saw this feature as GitHub’s missing piece. Some great companies stepped into that early gap and provide excellent services: TravisCI, CircleCI, Codeship, SemaphoreCI, Bitrise, and many others. I’ve used most of these – predominantly TravisCI, because it was available before the rest and I got started with it. When GitHub finally did circle back and make Actions available, I was there trying it out and seeing how it worked.

Setting up CI for macOS and iOS projects has always been a little odd, but doable. For many people who are making apps, the goal is to build the code, run any unit tests, maybe run some UI or integration tests, sign the resulting elements, and ship the whole thing out via TestFlight. Tools like fastlane do a spectacular job of helping to automate these processes where Apple hasn’t provided a lot of support or connected the dots.

I’m going to focus a bit more narrowly in this post – looking at how to leverage the swift package manager and xcodebuild, the command-line tools for building swift packages and macOS/iOS applications, respectively. I’ll leave the whole “setting up fastlane” process, dealing with the complexities of signing code, and submitting builds from CI systems to others.

Building swift packages with github actions

If you want to build a swift package, then reach for swiftpm. You can’t build macOS or iOS applications with swiftpm, but you can create command-line executables or compile swift libraries. Most interestingly, you can compile swift on other platforms – linux is supported, and other operating systems (Windows, Android, and more) are being worked on by the swift open source community. Swiftpm is also the go-to tooling if you want to use the burgeoning server-side swift capabilities.

While there are some complicated corners to using the swift package manager, especially when it comes to integrating existing C or C++ libraries, the basics for how to use it are gloriously simple:

swift test --enable-test-discovery

To use that tooling, we need to define how it’s invoked – and that whole accumulation of detail is what goes into a GitHub Action declarative workflow.

To set up an action, create a YAML file in the directory .github/workflows at the root of your repository. GitHub will look in this directory for YAML files, and they’ll become the actions enabled for your repository. The documentation for github actions is available at https://help.github.com/en/actions, but it isn’t exactly easy to decipher unless you’re already familiar with CI and some GitHub-specific terms.

One of the simplest CI definitions I’ve seen is the CI running on SwiftDocOrg’s DocTest repository, which builds its executables for both swift on macOS and swift on Linux:

name: CI
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
jobs:
  macos:
    runs-on: macOS-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Build and Test
        run: swift test
        env:
          DEVELOPER_DIR: /Applications/Xcode_11.4.app/Contents/Developer
  linux:
    runs-on: ubuntu-latest
    container:
      image: swift:5.2
      options: --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --security-opt apparmor=unconfined
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Build and Test
        run: swift test --enable-test-discovery

To explain the setup, let’s look at it in pieces. The very first piece is the name of the action: CI. For all practical purposes the name affects nothing, but knowing it can be helpful, since GitHub indexes actions by name. What I find most useful is that GitHub provides an easy-to-use badge that you can drop into a README file, so that people viewing the rendered markdown get a quick look at the current build status.

The badge uses the repository name and the workflow name together in a URL. The pattern for this URL is:

https://github.com/USERNAME/REPOSITORY_NAME/workflows/WORKFLOW_NAME/badge.svg

Make an image link in markdown to display this badge in your README. For example, a badge for DocTest’s repository could be:

[![Actions Status](https://github.com/SwiftDocOrg/DocTest/workflows/CI/badge.svg)](https://github.com/SwiftDocOrg/DocTest/actions)

The next segment is on, which defines when the action will be applied. DocTest’s repository has the action triggering when the master branch (but no other branch) changes via push, or when a pull request is opened against the master branch.

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

The next segment is jobs, which has two elements defined underneath it. Each job is run separately, and you may declare dependencies between jobs if you want or need to. Each job defines where it runs – more specifically, what operating system is used – and DocTest’s example has a job for building on macOS and another for building on Linux.

The first is the macos job, which runs within a macOS virtual machine:

jobs:
  macos:
    runs-on: macOS-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Build and Test
        run: swift test
        env:
          DEVELOPER_DIR: /Applications/Xcode_11.4.app/Contents/Developer

The steps are run linearly, each having to complete without a failure or error before the next starts. This example shows the common practice of leveraging actions/checkout, a pre-defined action in the GitHub “Marketplace”.

Marketplace gets quotes because I think marketplace is a poor name choice – you’re not required to buy anything, which I originally thought was the intention. And to be clear, I’m glad it’s not. GitHub’s mechanism allows anyone to host their own actions, and the marketplace is the place to find them.

The second step simply invokes swift test, just like you might on your own laptop with macOS installed. The environment variable DEVELOPER_DIR is defined here, which Xcode uses as a means to indicate which version of Xcode to use when multiple are installed. The alternative is to explicitly select the version of Xcode with the xcode-select command.
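For example, a workflow step could switch the active Xcode like so, assuming the same Xcode path used for DEVELOPER_DIR above:

sudo xcode-select -s /Applications/Xcode_11.4.app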

The GitHub Actions runners have been maintained impressively well over the past year, and even beta releases of Xcode are frequently available within weeks of Apple shipping them. The VM image has an impressive array of commonly used tools, libraries, and languages pre-installed – and that list is maintained publicly at https://github.com/actions/virtual-environments/blob/master/images/macos/macos-10.15-Readme.md.

Selecting the version of Xcode with the environment variable declaration also implies the version of swift being used – swift 5.2 in this case.

The last segment of this CI declaration is the job that builds the swift package on Linux.

  linux:
    runs-on: ubuntu-latest
    container:
      image: swift:5.2
      options: --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --security-opt apparmor=unconfined
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Build and Test
        run: swift test --enable-test-discovery

In this case, it’s using Ubuntu 18.04 (the latest supported by GitHub as I’m writing this post), which has a corresponding README of everything it includes at https://github.com/actions/virtual-environments/blob/master/images/linux/Ubuntu1804-README.md.

The container declaration defines a docker container that’s used to run these steps on top of that base linux image – in this case, the swift:5.2 image. The additional options listed open up specific security mechanisms otherwise locked down within a container: they enable the ptrace system call, which is critical to allowing swift to run an integrated REPL or use the LLDB debugger when run within a container.

The last bit that you might have noticed is the option --enable-test-discovery. This is an option for the swift package manager that was only recently released. Where Xcode leverages the Objective-C runtime to dynamically inspect and identify test classes to run, the same didn’t exist (and was a right pain in the butt) for swift on Linux until this option was made available in swift 5.2. The build system creates an index while it’s building the code on Linux, and then uses that index to identify functions that should be invoked based on their name (the ones prefixed with test within subclasses of XCTestCase). The end result is swift test “finding the tests” as most other unit testing libraries do.

Building macOS or iOS applications using xcodebuild with github actions

If you want to build a macOS, tvOS, iOS, or even watchOS application, use xcodebuild. xcodebuild is the command-line invocation of the build toolchain built into Xcode, leveraging all the built-in mechanisms for targets, schemes, and build settings, and overlaying interactions with the simulators. To use xcodebuild, you’ll need Xcode installed – and with github actions, that’s available through virtualized instances of macOS with Xcode (and a lot of other tools) pre-installed.

The example repository I’m using is one of my own (https://github.com/heckj/MPCF-TestBench/blob/master/.github/workflows/build.yml), although it was a hard choice – as I really like NetNewsWire’s CI setup as a good example. Definitely take a look at https://github.com/Ranchero-Software/NetNewsWire/blob/master/.github/workflows/build.yml and the corresponding CI Tech Note for some excellent detail and to see how another project enabled CI on GitHub.

The whole CI file, from https://github.com/heckj/MPCF-TestBench/blob/master/.github/workflows/build.yml:

name: CI
on: [push]
jobs:
  build:
    runs-on: macos-latest
    strategy:
      matrix:
        run-config:
          - { scheme: 'MPCF-Reflector-mac', destination: 'platform=macOS' }
          - { scheme: 'MPCF-TestRunner-mac', destination: 'platform=macOS' }
          - { scheme: 'MPCF-Reflector-ios', destination: 'platform=iOS Simulator,OS=13.4.1,name=iPhone 8' }
          - { scheme: 'MPCF-TestRunner-ios', destination: 'platform=iOS Simulator,OS=13.4.1,name=iPhone 8' }
          - { scheme: 'MPCF-Reflector-tvOS', destination: 'platform=tvOS Simulator,OS=13.4,name=Apple TV' }
    steps:
    - name: Checkout Project
      uses: actions/checkout@v1
    - name: Homebrew build helpers install
      run: brew bundle
    - name: Show the currently detailed version of Xcode for CLI
      run: xcode-select -p
    - name: Show Build Settings
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showBuildSettings
    - name: Show Build SDK
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showsdks
    - name: Show Available Destinations
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showdestinations
    - name: lint
      run: swift format lint --configuration .swift-format-config -r .
    - name: build and test
      run: xcodebuild clean test -scheme '${{ matrix.run-config['scheme'] }}' -destination '${{ matrix.run-config['destination'] }}' -showBuildTimingSummary

Both this example and NetNewsWire’s CI use the technique of a matrix build. This is immensely useful when you have multiple targets in the same Xcode project and want to verify that they’re all building and testing correctly. The matrix is defined right at the top of this file as a strategy:

jobs:
  build:
    runs-on: macos-latest
    strategy:
      matrix:
        run-config:
          - { scheme: 'MPCF-Reflector-mac', destination: 'platform=macOS' }
          - { scheme: 'MPCF-TestRunner-mac', destination: 'platform=macOS' }
          - { scheme: 'MPCF-Reflector-ios', destination: 'platform=iOS Simulator,OS=13.4.1,name=iPhone 8' }
          - { scheme: 'MPCF-TestRunner-ios', destination: 'platform=iOS Simulator,OS=13.4.1,name=iPhone 8' }
          - { scheme: 'MPCF-Reflector-tvOS', destination: 'platform=tvOS Simulator,OS=13.4,name=Apple TV' }

This is a single job – meaning a single operating system to run the build – but when you use a matrix, it replicates the job by the size of the matrix. In this case, there are 5 matrix definitions – 2 for macOS targets, 2 for iOS targets, and 1 for tvOS. When this runs, it runs 5 parallel instances, each with its own matrix definition, and applies those values to the later steps. This example defines two properties, scheme and destination, to be filled out with different values for each matrix run and used in later steps. The scheme definition corresponds to the names of schemes in the Xcode workspace, and destination maps to xcodebuild’s destination argument, which is a combination of target platform, device name, and version of the operating system to use.

Something to be aware of – the specific values used in the destinations of the matrix will vary with the version of Xcode, and the latest version of Xcode will always be used unless you explicitly override it. In this example, I am intentionally tracking the latest to pick up any updates, but if you’re building a CI system for verifying stability over time, that’s probably something you want to explicitly declare and lock down in your CI configuration.

Checkout is pretty self-explanatory, and right after that step you’ll see the step that installs helpers.

    - name: Homebrew build helpers install
      run: brew bundle

This uses a feature of Homebrew called bundle, which reads a file named Brewfile, giving you a single place to say “install all these additional tools for my later use”.
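I won’t reproduce the project’s Brewfile here, but a minimal one that installs the swift-format tool used in the lint step below would look something like:

brew "swift-format"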

The next steps are purely informational, and aren’t actually needed for the build, but are handy to have for debugging:

    - name: Show the currently detailed version of Xcode for CLI
      run: xcode-select -p
    - name: Show Build Settings
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showBuildSettings
    - name: Show Build SDK
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showsdks
    - name: Show Available Destinations
      run: xcodebuild -workspace MPCF-TestBench.xcworkspace -scheme '${{ matrix.run-config['scheme'] }}' -showdestinations

These all invoke xcodebuild with various options to show the parameters available. Those parameters vary with the version of Xcode that is running, and the very first command (xcode-select -p) prints the path of the selected Xcode, which tells you which version is active.

The next command, named lint, uses one of the helpers installed from that Brewfile: swift-format. In this example, I’m using Apple’s swift-format command (the other common example is Nick Lockwood’s swiftformat). In other projects I’ve used Nick’s swiftformat (and the often-paired tool swiftlint), both of which I like quite a lot. This project was an excuse to try out the tooling from Apple’s open source version and see how it worked, and how I liked working with it. My final opinion is still pending, but it mostly does the right thing.

    - name: lint
      run: swift format lint --configuration .swift-format-config -r .

The lint step really applies a verification of formatting, so arguably it might not belong in a build. I rely on linters (in several languages) to yell at me, as otherwise I’m lousy about consistency in how I format code. In practice, I also try to use the same tools to auto-format the code, to keep it up to whatever standard seems predominantly acceptable.

The final step is where it compiles and runs the tests:

    - name: build and test
      run: xcodebuild clean test -scheme '${{ matrix.run-config['scheme'] }}' -destination '${{ matrix.run-config['destination'] }}' -showBuildTimingSummary

And you can see the references to the matrix values applied directly as parameters to xcodebuild. The text ${{ matrix.run-config['scheme'] }} is a replacement definition, indicating that the value of scheme for the running matrix build should be dropped into that position in the command-line argument.

The NetNewsWire project uses the exact same technique, although the developers run xcodebuild from within a shell script so that they can arrange for signing keys and other parameters to be set up. It’s a thoughtful and ingenious system, but quite a bit harder to explain or show the matrix being used directly.

The downsides of GitHub Actions

While I am very pleased with how GitHub actions works, I am completely leveraging the “no cost to run” options. The cost of the pay-for GitHub Actions is notable, and in some cases, injuriously expensive.

If you’re running GitHub actions in a private repository (and paying for the privilege), you may find it just too expensive. The billing information for github actions shows that running macOS virtual images is 10 times the price of running Linux images ($0.08 per minute vs. $0.008 per minute). If you build on every pull request, you’re going to rack up an impressive number of minutes very quickly. On top of that, techniques like the matrix build add a further multiplier – 5x in my case. GitHub does offer the option of creating your own “GitHub Action runners”, and I’d recommend seriously looking at that option – using GitHub just as the coordinator – from the cost perspective alone. It’s more “stuff you have to maintain”, but in the end likely quite a bit cheaper than paying the GitHub Actions hosting fees.

If you are building something purely open source, then you’re allowed to take advantage of the free services. That’s pretty darned nice.

Where GitHub is not yet supporting the swift ecosystem

This isn’t directly related to GitHub Actions, but more a related note on how GitHub is (or isn’t) supporting the swift ecosystem. The site offers a tremendous number of great features, but nearly all of them are pretty anemic when it comes to supporting swift, either via Apple or through the open source ecosystem:

  • There’s no current availability for swift language syntax highlighting. It does offer Objective-C, which is a pretty good fallback, but swift-specific highlighting would be really lovely to have when reviewing pull requests or just reading code.
  • GitHub’s security mechanisms – which host security advisories, track dependency flow, and include the recently announced security code scanning – don’t support swift:
    • Dependencies through Package.swift, although parsable, aren’t tracked and aren’t reflected.
    • While you can draft a security advisory and host it at GitHub, it won’t propagate to any dependents, because dependencies aren’t tracked.
  • A year ago GitHub announced that it would be supporting “Swift Packages” – I’m not aware of much that came of that effort, if anything.
  • The same constraint of not parsing and tracking dependencies shows up in the Insights tab and its “Dependency graph”. On swift packages, there’s nothing. Same with Xcode projects. Nada, zilch, zip.
  • The code scanning, which uses a really great technology called CodeQL, doesn’t support Swift, or even Objective-C.

At a bare minimum, I would have hoped that GitHub would enable parsing and showing dependencies for Package.swift. I can understand not wanting to reverse-engineer the .pbxproj XML plist structure to get to dependencies, but swift itself is open source and the package manifest format is completely open.

In addition, SwiftSyntax – the AST parsing mechanism that swift exposes through open source – is already being used for swift language server support. It would, I think, be a perfect complement to leverage within CodeQL.

I do hope we’ll see more movement on these fronts from GitHub, but if not – well, I suppose it’s a good differentiating opportunity for competing sites such as BitBucket or GitLab.

How to make a SwiftUI component that draws a Shape with light

While I was experimenting with SwiftUI, one of the effects I wanted to re-create was taking a shape and making “stroke effects” on it. The goal was to create a SwiftUI component that could take an arbitrary path, apply this effect to the shape, and render it within the context of a larger SwiftUI view.

The effect I wanted to create was a “laser light” like drawing. You see this a lot in science fiction film user interfaces or backgrounds, and it is just kind of fun and neat. And yeah, I want it to be decent in both light and dark modes, although “dark mode” is where it will shine the most.

And the code to represent this is:

VStack {
    LaserLightShape(color: Color.orange, lineWidth: 1) {
        Rectangle()
    }

    LaserLightShape(color: Color.red, lineWidth: 2) {
        Circle()
    }

    LaserLightShape(color: Color.blue, lineWidth: 0.5) {
        Path { path in
            path.move(to: CGPoint(x: 0, y: 0))
            path.addLine(to: CGPoint(x: 50, y: 50))
        }
    }
}

Solving this challenge underscored the change in mindset from imperative code to declarative code. When I was looking through the SwiftUI methods, I kept looking for a method with a means to “add on” to the existing view – or perhaps to replace an element within the view. Both of those searches highlighted my pattern of thinking – me telling the framework (or library) what to do. SwiftUI’s biggest win (and challenge) is inverting that thinking.

The methods that work with SwiftUI Views don’t “take a view, modify it, and hand it back”. Instead they take in some information, make a whole new View, and return it. There’s no “tweaking” or “changing” – the closest you get to that (imperative) paradigm is wholesale replacement. My natural instinct was to reach for something with an explicit side effect, I suspect because that’s how a lot of the languages I’ve used for years got things done. It’s a pattern I’m familiar with, and the first tool I reach for.

This change in mindset is also why you’ll see a lot of the same people who “get” the new paradigm talking about how it overlaps with functional programming, using phrases like “pure functions”, and perhaps even “functors” and “monads”. I quickly get lost in many of these abstract concepts. For me, the biggest similarity is the inversion of control that was the mental leap in moving from using a library to writing for a framework. This feels very much akin to that “ah ha” moment. And for the record, I’m not asserting that I fully understand it – only that I recognize I need to change how I’m framing the problem in my head in order to use these new tools.

To solve this particular challenge, I originally started looking at SwiftUI ViewModifiers. I’d been reading about them and thought maybe they were what I wanted. Unfortunately, that didn’t work – ViewModifiers are great when you want to layer additional effects on a View – which means constraining what you do to the methods available on the View protocol – but I wanted to work on a Shape, specifically leveraging the stroke method, which is a different critter.

The solution that I came up with uses a ViewBuilder. I wrote a bit about these before, when talking about making PreviewBackground. The mental framing that helped provide this solution was thinking about what I wanted to achieve as taking some information (a Shape) and returning some kind of View.

A ViewBuilder is a fairly generics-heavy function. To make it accept something conforming to the Shape protocol, I constrained the generic type it accepts to that protocol. I am still stumbling through the specifics of how to effectively use generics and protocols with swift, and thought this was a pretty nice way to show some of their strength.

Without further ado, here’s the code that produces the effect:

struct LaserLightShape<Content>: View where Content: Shape {
    let content: () -> Content
    let color: Color
    let lineWidth: CGFloat

    @Environment(\.colorScheme) var colorSchemeMode
    var blendMode: BlendMode {
        if colorSchemeMode == .dark {
            // lightens content within a dark
            // color scheme
            return BlendMode.colorDodge
        } else {
            // darkens content within a light
            // color scheme
            return BlendMode.colorBurn
        }
    }

    init(color: Color, lineWidth: CGFloat, @ViewBuilder content: @escaping () -> Content) {
        self.content = content
        self.color = color
        self.lineWidth = lineWidth
    }

    var body: some View {
        ZStack {
            // top layer, intended only to reinforce the color
            // narrowest, and not blurred or blended
            if colorSchemeMode == .dark {
                content()
                    .stroke(Color.primary, lineWidth: lineWidth / 4)
            } else {
                content()
                    .stroke(color, lineWidth: lineWidth / 4)
            }

            if colorSchemeMode == .dark {
                // pushes in a bit of additional lightness 
                // when in dark mode
                content()
                    .stroke(Color.primary, lineWidth: lineWidth)
                    .blendMode(.softLight)
            }
            // middle layer, half-width of the stroke and blended
            // with reduced opacity. re-inforces the underlying
            // color - blended to impact the color, but not blurred
            content()
                .stroke(color, lineWidth: lineWidth / 2)
                .blendMode(blendMode)

            // bottom layer - broad, blurred out, semi-transparent
            // this is the "glow" around the shape
            if colorSchemeMode == .dark {
                content()
                    .stroke(color, lineWidth: lineWidth)
                    .blur(radius: lineWidth)
                    .opacity(0.9)
            } else {
                // knock back the blur/background effects on
                // light mode vs. dark mode
                content()
                    .stroke(color, lineWidth: lineWidth / 2)
                    .blur(radius: lineWidth / 1.5)
                    .opacity(0.8)
            }
        }
    }
}

The instructions for what to do are embedded in the body of the view we are returning. The pattern is one I looked up from online references describing how to make this same effect in Photoshop. The gist is:

  • You take the base shape, make a wide stroke of it, and blur it out a bit. This will end up being the “widest” portion of the effect.
  • Over that, you put another layer – roughly half the width of the bottom layer, and stroke it with the color you want to show.
  • And then you add a final top layer, narrowest of the set, intended to put the “highlight” or shine onto the result.

You’ll notice in the code that it’s also dynamic with regard to light and dark backgrounds – on a light background, I tried to reinforce the color without making the effect look like an unholy darkness shadow trying to swallow the result, and on a dark background I wanted to have the “laser light” shine through. I also found through trial-and-error experiments that it helped to have a fourth layer sandwiched in there in dark mode, specifically to brighten up the stack, adding more “white” to the end effect.

A lot of this ends up leveraging the blend modes from SwiftUI that composite layers. I’m far from a master of blend modes, and had to look up a number of primers to figure out what I wanted from the fairly large set of possibilities.

I suspect there is a lot more that can be accomplished by leveraging the ViewBuilder pattern.

Introducing and explaining the PreviewBackground package

While learning and experimenting with SwiftUI, I use the canvas assistant editor to preview SwiftUI views extensively. It is an amazing feature of Xcode 11 and I love it. There is a quirk that gets difficult for me though – the default behavior of the preview provider uses a gray background. I frequently use multiple previews while making SwiftUI elements, wanting to see my creation on a background supporting both light and dark modes.

The following little stanza is a lovely way to iterate through the modes and display them as previews:

#if DEBUG
struct ExampleView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ForEach(ColorScheme.allCases,
                    id: \.self) { scheme in

                Text("preview")
                    .environment(\.colorScheme, scheme)
                    .frame(width: 100,
                           height: 100,
                           alignment: .center)
                    .previewDisplayName("\(scheme)")
            }
        }
    }
}
#endif

Results in the following preview:

The gray background doesn’t help all that much here. It is perfect when you are viewing a fairly composed element set, as you are often working over an existing background. But when you are creating an element to stand alone, or moving an element, I really want a background behind the element.

And this is exactly what PreviewBackground provides. I made PreviewBackground into a SwiftPM package. While I could have created this effect with a ViewModifier, I tried it out as a ViewBuilder instead, thinking it would be nice to wrap the elements I want to preview explicitly.

The same example, using PreviewBackground:

import PreviewBackground

#if DEBUG
struct ExampleView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ForEach(ColorScheme.allCases,
                    id: \.self) { scheme in
                PreviewBackground {
                    Text("preview")
                }
                .environment(\.colorScheme, scheme)
                .frame(width: 100,
                       height: 100,
                       alignment: .center)
                .previewDisplayName("\(scheme)")
            }
        }
    }
}
#endif

The code is available on GitHub, and you may include it within your own projects by adding a swift package with the URL: https://github.com/heckj/PreviewBackground

Remember to import PreviewBackground in the views where you want to use it, and work away!

Explaining the code

There are not many examples of using ViewBuilder to construct a view, and this is a simple use case. Here is how it works:

import SwiftUI

public struct PreviewBackground<Content>: View where Content: View {
    @Environment(\.colorScheme) public var colorSchemeMode

    public let content: () -> Content

    public init(@ViewBuilder content: @escaping () -> Content) {
        self.content = content
    }

    public var body: some View {
        ZStack {
            if colorSchemeMode == .dark {
                Color.black
            } else {
                Color.white
            }
            content()
        }
    }
}

The heart of using ViewBuilder is accepting it within a View’s initializer to return a (specific but) generic instance of View, storing the provided closure as a property that you execute when composing the view.

There is a lot of complexity in that statement. Allow me to try and explain it:

Normally when creating a SwiftUI view, you create a struct that conforms to the View protocol. This is written in code as struct SomeView: View. You may use the default initializer that swift creates for you, or you can write your own – often to set properties on your view. ViewBuilder allows you to take a closure in that initializer that returns an arbitrary View. But since the kind of view is arbitrary, we need to make the struct generic – we can’t assert exactly what type it will be until the closure is compiled. To tell the compiler it’ll need to do the work of figuring out the types, we label the struct as being generic, using the <SomeType> syntax:

struct SomeView<Content>: View where Content: View

This says there is a generic type that we’re calling Content, and that generic type is expected to conform to the View protocol. There is a more compact way to write this that you may prefer:

struct SomeView<Content: View>: View

Within the view itself, we have a property – which we name content. The type of this content isn’t known up front – it is the arbitrary type that the compiler gets to infer from the closure that will be provided in the future. This declaration says the content property will be a closure – taking no parameters – that returns the arbitrary type we are calling Content.

public let content: () -> Content

Then in the initializer, we use ViewBuilder:

public init(@ViewBuilder content: @escaping () -> Content) {
    self.content = content
}

In case it wasn’t obvious, ViewBuilder is a function builder – the swift feature that enables this declarative structure in SwiftUI. This is what allows us to ultimately use PreviewBackground in that declarative syntax form.

The final bit of code to describe is the @Environment property wrapper.

@Environment(\.colorScheme) public var colorSchemeMode

This property wrapper is not in common use, but is perfect for this need: it exposes a specific part of the existing environment as a local property for this view. This is what enables PreviewBackground to choose a background color appropriate to the current color scheme. It then assembles the view by invoking the property named content (which was provided through the function builder) within a ZStack.

By using ViewBuilder, we can use the PreviewBackground struct like any other composed view within SwiftUI:

var body: some View {
    PreviewBackground {
        Text("Hello there!")
    }
}

If we had created this code as a ViewModifier, then using it would look different – instead of the curly-bracket syntax, we would be chaining on a method. The default setup for something like that looks like:

var body: some View {
    Text("Hello there!")
    .modify(PreviewBackground())
}

I wanted to enable the curly-bracket syntax for this, hence the choice of using a ViewBuilder.

A side note about moving code into a Swift package

When I created this code, I did so within the context of another project. I wanted to use it across a second project, and the code was simple enough (a single file) to copy/paste – but instead I went ahead and made it a Swift package. Partially to make it easier for anyone else to use, but also just to get a bit more experience with what it takes to set up and use this kind of thing.

The mistake I made immediately on moving the code was not explicitly making all the structs and properties public. It moved over, compiled fine, and everything looked great as a package – but then when I went to use it, I got some really odd errors:

Cannot call value of non-function type 'module<PreviewBackground>'

In other instances (yes, I admit this wasn’t the first time I made this mistake – and it likely won’t be the last), the swift compiler would complain about the scope of a function, letting me know that it was using the default internal scope and was not available. But SwiftUI and this lovely function builder mechanism make the compiler work quite a bit harder, and it is not nearly as good at identifying why this mistake might have happened – only that it was failing.

If you hit the error Cannot call value of non-function type when moving code into a package, you may have forgotten to make the struct (and relevant properties) explicitly public.

Four strategies to use while developing SwiftUI components

Let’s start out with the (possibly) obvious: when I code, I frequently make mistakes (and fix them); but while I am going through that process, function builders are frequently kicking my butt. When you are creating SwiftUI views, you use function builders intensely – and the compiler is often at a loss to explain how I screwed up. And yeah, that’s even with the amazing new updates to the Diagnostic Engine alongside Swift 5.2, which I am loving.

What is a function builder? It is the thing that looks like a normal “do some work” code closure in swift that you use as the declarative structure when you are creating a SwiftUI view. When you see code such as:

import SwiftUI

struct ASimpleExampleView: View {
    var body: some View {
        Text("Hello, World!")
    }
}

The bit after some View is the function builder closure, which includes the single line Text("Hello, World!").

The first mistake I make is assuming all closures are normal “workin’ on the code” closures, and I immediately start trying to put everyday code inside of function builders. When I do, the compiler – often immediately and somewhat understandably – freaks out. The error message that appears in Xcode:

Function declares an opaque return type, but has no return statements in its body from which to infer an underlying type

And sometimes there are other errors as well. It really depends on what I stacked together, how I grouped and composed the various underlying elements in that top-level view, and ultimately what I messed up deep inside all of that.

I want to do some calculations in some of what I am creating, but doing them inline in the function builder closures is definitely not happening, so my first recommended strategy:

Strategy #1: Move calculations into a function on the view

Most of the time, I’m doing a calculation to determine a value to hand to a SwiftUI view modifier – fiddling with the opacity, position, or perhaps line width. If you are really careful, you can do some of that work – often the simple bits – inline. But when I do that, I invariably screw it up: I make a mistake matching a type, dealing with an optional, or something similar. At those times, when the code is inline in a function builder closure, the compiler has a hell of a hard time figuring out what to tell me about how I screwed it up. By putting the relevant calculation into a function that returns an explicit type, the compiler gets a far more constrained place to provide feedback about what I got wrong.

As an example:

struct ASimpleExampleView: View {
    func determineOpacity() -> Double {
        1
    }

    var body: some View {
        ZStack {
            Text("Hello World").opacity(determineOpacity())
        }
    }
}

Sometimes you aren’t even doing calculations, and the compiler gets into a tizzy about the inferred type being returned. I have barked my shins on that particular edge repeatedly while experimenting with the various options, seeing what I like in a visualization. The canvas assistant editor available in Xcode is a godsend for fast visual feedback, but I get carried away assembling lots of blocks with ZStacks, HStacks, and VStacks to see what I can do. This directly leads to my second biggest win:

Strategy #2: Ruthlessly refactor your views into subcomponents.

I am beginning to think that seeing repeated, multiple kinds of stacks together in a single view is possibly a code smell. But more than anything else, keeping the code within a single SwiftUI view as brutally simple as possible gives the compiler a better-than-even chance of being able to tell me what I screwed up, rather than throwing up its proverbial hands with an inference failure.

There are a number of lovely mechanisms with Binding that make it easy to compose and link to the relevant data that you want to use. When I am making a subcomponent that provides some visual information that I expect the enclosing view to be tracking, I have started using the @Binding property wrapper to pass it in, which works nicely in the enclosing view.

TIP:

When you’re using @Binding, remember that you can make a constant binding in the PreviewProvider in that same file:

YourView(someValue: .constant(5.0))

While I was writing this, John Sundell published a very in-depth look at exactly this topic. His article Avoiding Massive SwiftUI Views covers another angle on how and why to ruthlessly refactor your views.

On the topic of the mechanics of that refactoring: as we learn what to do, it leads to leveraging Xcode’s canvas assistant editor with PreviewProvider – and my next strategy:

Strategy #3: use Group and multiple view instances to see common visual options quickly

This strategy is more or less obvious, and was highlighted in a number of the SwiftUI WWDC presentations that are online. The technique is immensely useful when you have a couple of variations of your view that you want to keep operational. It allows you to visually make sure they are working as desired while you continue development. In my growing example code, this looks like:

import SwiftUI

struct ASimpleExampleView: View {
    let opacity: Double
    @Binding var makeHeavy: Bool

    func determineOpacity() -> Double {
        // maybe do some calculation here
        // mixing the incoming data
        opacity
    }

    func determineFontWeight() -> Font.Weight {
        if makeHeavy {
            return .heavy
        }
        return .regular
    }

    var body: some View {
        ZStack {
            Text("Hello World")
                .fontWeight(determineFontWeight())
                .opacity(determineOpacity())
        }
    }
}

struct ASimpleExampleView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ASimpleExampleView(opacity: 0.8, 
                makeHeavy: .constant(true))

            ASimpleExampleView(opacity: 0.8, 
                makeHeavy: .constant(false))
        }
    }
}

And the resulting canvas assistant editor view:

This does not help you experiment with every possible variation of your views, but for sets of pre-defined options, or data that influences your view, it can make a huge difference. A good variation that I recommend anyone use is setting and checking the accessibility environment settings to make sure everything renders as you expect. Another that I hear is in relatively frequent use: verifying localization rendering.

The whole rapid experimentation and feedback capability is what is so compelling about using SwiftUI. Which leads pretty directly to my next strategy:

Strategy #4: Consider making throw-away control views to tweak your visualization effects

I am not fortunate enough to constantly work closely with a designer. Additionally, I often do not have the foggiest idea of how some variations will feel in terms of a final design. When the time comes, seeing the results on a device (or on multiple devices) makes a huge difference.

You do not want to do this for every possible variation. That is where mocks fit into the design and development process – take the time to make them and see what you think. Once you have narrowed down your options to a few, then this strategy can really kick in and be effective.

In the cases where I have a small number of variations to try out, I encapsulate those options into values that I can control. Then I make a throw-away view – one that will never be shown in the final code – that allows me to tweak the view within the context of a running application. The whole thing goes into whatever application I am working on – macOS, iOS, etc. – and I see how it looks.

When I am making a throw-away control view, I often also make (another throw-away) SwiftUI view that composes the control and the controlled view together, as I intend to display it in the application – primarily to see the combined effect in a single preview within Xcode. The live controls are not active in the Xcode canvas assistant editor, but it helps to see how having the controls influences the rest of the view structure.
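As an illustration – my own sketch, not the testbench code – a throw-away control view for the earlier ASimpleExampleView might look like this, with a Slider driving the value I want to tune:

import SwiftUI

struct OpacityTweaker: View {
    // local state that the throw-away control manipulates
    @State private var opacity: Double = 0.8

    var body: some View {
        VStack {
            // the view being tuned
            ASimpleExampleView(opacity: opacity, makeHeavy: .constant(true))
            // the throw-away control driving it
            Slider(value: $opacity, in: 0 ... 1)
                .padding()
            Text("opacity: \(opacity, specifier: "%.2f")")
        }
    }
}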

A final note: Do not be too aggressive about moving code into a SwiftPM package

You may (like me) be tempted to move your code into a library, especially with the lovely SwiftPM capabilities that now exist within Xcode 11. This does work, and it functions quite well, from my initial experiments. But there is a significant downside while you are still doing active development on the library, at least with the current versions (including Xcode 11.4 beta 3):

If you open the package to edit or tweak it with Xcode, loading and building from only Package.swift without an associated Xcode project, the SwiftUI canvas assistant preview will not function. If you use an Xcode project file, it works fine – so if you do go down this route, just be cautious about removing the Xcode project file for now. I have filed feedback with Apple to report the issue – both with Xcode 11.3 (FB7619098) and Xcode 11.4 beta 3 (FB7615026).

I would not recommend moving anything into a library until you have it stable in place. There are also still some awkward quirks in developing code and a dependent library at the same time with Xcode. It can be done, but it plays merry havoc with Xcode’s automatic build mechanisms and CI.

Using Combine v1.1 is available

After getting the major edits to the existing content done, I called the result the first release. As with any creative product, I wasn’t happy with some of the corners that still had rough edges. Over the past two weeks I fleshed those out, wrote a bunch of unit tests, figured out some of the darker corners that I’d previously ignored, and generally worked to improve the overall consistency.

The results have been flowing into the online version as I merged them. And now the updated version, available on Gumroad in PDF and ePub formats, is live as well. Anyone who’s previously purchased the content gets the updates for free – just log in and they are available for you.

The rough bits that were fleshed out include several areas of content:

  • Tests created and content written (and updated) for the multicast and share operators. The focus was primarily how they work and how to use them.
  • Worked through what the Record publisher offers (and doesn’t offer), including how to serialize & deserialize recorded streams (while this sounds cool, it’s not ultimately as useful as I hoped it might be).
  • Added the missing note that swift’s Result type could also be used as a publisher, courtesy of a little extension that was added in Combine (a quick example follows this list).
  • Updated some of the details of throttle and debounce with the specifics of the delays incurred in timing, after having validated the theories with some unit tests (spoiler: debounce always delays the events by a short bit, but throttle doesn’t have much of an impact). I had previously written about throttle and debounce on this blog as well.
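As a quick illustration of that Result extension – a minimal sketch, not an excerpt from the book:

import Combine

// Result.publisher emits the success value once and then finishes,
// or terminates immediately with the failure.
let result: Result<Int, Error> = .success(42)
let cancellable = result.publisher
    .sink(receiveCompletion: { completion in
        print("completion: \(completion)")
    }, receiveValue: { value in
        print("value: \(value)")
    })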

The new version is 1.1, tagged in the underlying repository if you are so inclined to review/poke at the unit tests beyond the narrative details I shared in the book itself.

Using Combine – first edition available

I just finished my first edit pass of the content of Using Combine, and am incredibly pleased. Sufficiently pleased, in fact, that I am going to call this version the “first edition”.

It is certainly not perfect, nor even as complete as I would like, but a significant enough improvement that I wanted to put a stake in the ground and get it out there.

I’m not planning on stopping the development work; there are more examples, more details, and useful tidbits still to be developed. I have continued to receive wonderful feedback, and plan to continue to accept updates, as all the content, example code, and sample project pieces are available in the github repository swiftui-notes.

The content will continue to remain available for free in a single-page HTML format, hosted at http://heckj.github.io/swiftui-notes/. The ePub and PDF version is available through gumroad.

SwiftUI and Combine – Binding, State, and notification of changes

When I started the project that became Using Combine, it was right after WWDC; I watched the streamed WWDC sessions online, captivated like so many others by SwiftUI. I picked up the idea that SwiftUI was “built using the new framework: Combine”. In my head, that meant Combine managed all the data – notifications and content – for SwiftUI. And well, that ain’t so. While the original “built using Combine” is accurate, it misses a lot of detail, and the truth is a bit more complex.

After I finished my first run through drafting the content for Using Combine, I took some time to dig back into SwiftUI. I originally intended to write (and learn) more about it; in fact, SwiftUI is what started the whole segue into Combine. I hadn’t really tried to use SwiftUI seriously, or get into the details, until just recently. I realized after all the work on examples for Combine and UIKit that I had completely shortchanged the SwiftUI examples.

Mirroring a common web technology pattern, SwiftUI works as a declarative structure of what gets shown, with the detail completely derived from some source of truth – state stored or managed somewhere. The introductory docs made it clear that @State was how this declarative mechanism could represent a bit of local state within a View, and with the benefit of Daniel and Paul’s writing (SwiftUI Kickstart and SwiftUI by Example), it was also quickly clear that @EnvironmentObject and @ObservedObject played a role there too.

The Combine link to SwiftUI, as it turns out, is really only about notifying the SwiftUI components that a model has changed, not at all about what changed. The key is the protocol from Combine: ObservableObject (Apple’s docs). This protocol, along with the @Published property wrapper, does the wonderful work of generating a combine publisher – the default type of which is represented by the class ObservableObjectPublisher. In the world of Combine, it has a defined output and failure type: <Void, Never>. The heart of that Void output type is that the data that is changing doesn’t matter – only that a change is happening.
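A minimal sketch of that pattern – a hypothetical model, not code from the book:

import Combine

// @Published synthesizes the change notification: any set on username
// triggers the synthesized objectWillChange publisher that SwiftUI watches.
class UserSettings: ObservableObject {
    @Published var username: String = ""
}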

So how does SwiftUI go and get the data it needs? Binding is the SwiftUI generic structure used to make this linkage. The documentation at Apple asserts:

Use a binding to create a two-way connection between a view and its underlying model. For example, you can create a binding between a Toggle and a Bool property of a State. Interacting with the toggle control changes the value of the Bool, and mutating the value of the Bool causes the toggle to update its presented state.

You can get a binding from a State by accessing its binding property. You can also use the $ prefix operator with any property of a State to create a binding.

https://developer.apple.com/documentation/swiftui/binding

Looking around a bit more while creating some examples, it becomes clear that some handy form elements (such as TextField) expect a parameter of type Binding when they are declared. Binding itself works by leveraging Swift’s property getters and setters. You can even manually create a Binding if you’re so inclined, defining the closures for get and set to do whatever you like. Property wrappers such as @State, @ObservedObject, and @EnvironmentObject either create and expose a Binding, or create a wrapper that in turn passes back a Binding.
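
For illustration, a hand-rolled Binding might look like this – a sketch, assuming some backing storage of your own:

    import SwiftUI

    // Backing storage the binding will read and write.
    var storage: String = "hello"

    // A manually constructed Binding – just a pair of get/set closures.
    let text = Binding<String>(
        get: { storage },
        set: { storage = $0 }
    )

    // Form elements declare Binding parameters, so this works directly:
    // TextField("Label", text: text)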

My takeaway is that the flow with Combine and SwiftUI has a generally expected pattern: a model represented by a reference object, which sends a notification when its data is about to change (by conforming to the ObservableObject protocol). SwiftUI then goes and gets the data it needs based on what was declared in the View, using Binding to get to the underlying data (and potentially allowing the SwiftUI view to update it in turn, if that’s relevant).

Given that SwiftUI views are also designed to be composed, I am leaning towards expecting a pattern where state will need to be defined for pretty much any variation of a view – and potentially externalized. The property wrappers for representing, and externalizing, state within SwiftUI are:

  • @State
  • @ObservedObject and @Published
  • @EnvironmentObject

@State is all about local representation, and is the simplest mechanism – it simply provides a link to a property and its Binding.

@ObservedObject (along with @Published) adds a notification mechanism on change, as well as a way to get a typed Binding to properties on the model. SwiftUI’s mechanism expects this always to be a reference type (aka a ‘class’), which ends up being pretty easy to define in code.

@EnvironmentObject takes that a step further and exposes a reference model not just to a single view, but allows it to be used by any number of views in their own hierarchy.
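
Pulling those three together, here is a rough (hypothetical) sketch of how the wrappers appear in a view – the names are mine, not from the sample project:

    import SwiftUI
    import Combine

    // An illustrative reference model – a class, as SwiftUI expects.
    class SettingsModel: ObservableObject {
        @Published var username: String = ""
    }

    struct SettingsView: View {
        @State private var isEditing = false      // local, view-private state
        @ObservedObject var model: SettingsModel  // externally owned reference model

        var body: some View {
            VStack {
                Toggle("Editing", isOn: $isEditing)           // $isEditing is a Binding<Bool>
                TextField("Username", text: $model.username)  // Binding into the model's property
            }
        }
    }

    // The @EnvironmentObject variation drops the explicit property and
    // instead declares `@EnvironmentObject var model: SettingsModel`,
    // relying on an ancestor view to inject it via .environmentObject(_:).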

The common thread across all three: drive most of the visual design choices entirely from the current state.

But that’s not the only mechanism available: SwiftUI is also set up to react to a Combine publisher, although not in a heavily predetermined fashion. An interesting aspect is that all SwiftUI views also support acting as a Combine subscriber through onReceive. So you can bring your own publisher, and then write code within a View (or View component) to react to what it sends.

The onReceive subscriber acts very similarly to Combine’s sink subscriber – the single-closure version of sink (implying a Combine pipeline failure type of Never). You define a closure within your SwiftUI view element that accepts the data and does whatever needs doing. This could be using the data, transforming and storing it into local @State, or just reacting to the fact that data was sent and updating the view based on that.
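
A small sketch of the pattern – the timer publisher here is just a stand-in for whatever publisher you bring along:

    import SwiftUI
    import Combine

    struct ClockView: View {
        @State private var now = Date()

        // Any publisher with a failure type of Never will do.
        private let ticks = Timer.publish(every: 1, on: .main, in: .common)
            .autoconnect()

        var body: some View {
            Text("\(now)")
                .onReceive(ticks) { date in
                    self.now = date   // stash the received value into local @State
                }
        }
    }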

From a “what is a best practice” point of view, it seems the more you represent what you want to display within a reference model, the easier it will be to use. While you can expose a publisher right into a SwiftUI view, doing so tightly couples the Combine publisher to the view and links all the relevant types together. You could (likely just as easily) have the model object encapsulate that detail – in which case the declaration of how you handle event changes over time is separated from how you present the view. This is likely a better separation of concerns.

The project (SwiftUI-Notes) linked to Using Combine now has two examples with Combine and SwiftUI. The first is a simple form validation (the view ReactiveForm.swift and model ReactiveFormModel.swift). This uses both patterns: encapsulating the state within the model, and exposing a publisher to the SwiftUI View. I’m not espousing that the publisher mechanism is a good way to solve that particular problem, but it illustrates nicely what can be done.

The second example is a view (HeadingView.swift) that uses a model and publisher I created over the built-in CoreLocation framework. The model (LocationModelProxy.swift) exposes the authorization as a published property, as well as the location updates through a publisher. Within the built-in Cocoa framework, both are normally exposed through delegate callbacks. A large number of the existing Cocoa frameworks are convertible into a publisher-based mechanism for working with Combine using this pattern. The interesting bit was linking this up to SwiftUI, which was fun – although this example only taps the barest possibility of what could be done.
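
The general shape of that delegate-to-publisher pattern looks something like the following – a sketch of the idea, not the actual LocationModelProxy code: a proxy object adopts the Cocoa delegate protocol and forwards the callbacks into Combine.

    import Combine
    import CoreLocation

    class LocationProxy: NSObject, CLLocationManagerDelegate, ObservableObject {
        private let manager = CLLocationManager()

        // Delegate callbacks surface here as Combine output.
        let locationPublisher = PassthroughSubject<CLLocation, Never>()
        @Published var authorizationStatus: CLAuthorizationStatus = .notDetermined

        override init() {
            super.init()
            manager.delegate = self
        }

        func enable() {
            manager.requestWhenInUseAuthorization()
            manager.startUpdatingLocation()
        }

        func locationManager(_ manager: CLLocationManager,
                             didUpdateLocations locations: [CLLocation]) {
            for location in locations {
                locationPublisher.send(location)
            }
        }

        func locationManager(_ manager: CLLocationManager,
                             didChangeAuthorization status: CLAuthorizationStatus) {
            authorizationStatus = status
        }
    }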

It will be interesting to see what Apple might provide in terms of adopting Combine as an alternative interface to its existing frameworks. CoreLocation is such a natural choice with its streaming updates, but there are a lot of others that could be used as well. And of course I’m looking forward to seeing how they expand on SwiftUI – and whether they bring more Combine-based mechanisms into it or not.

programming tools – start with sketching and build to fully specified

A couple of years ago I was managing a very distributed team – or really a set of teams: Shanghai, Minneapolis, Seattle, Santa Clara, and Boston. Everyone worked for the same company, but culturally the teams were hugely divergent.

In one region, the developers on the team preferred things to be as loosely defined as possible. Their ideal state was “tell me how you’re going to rate me” – and then you got out of their way as they min-maxed against whatever measurement rules you had just set in place. When they were in charge of a specific element of the solution, there was a great deal of confidence it would get done, and done effectively. But when they needed to be more of an active coordinator within a growing solution, they struggled quite a bit more – communication overhead climbed steeply.

The opposite end of the spectrum was a region where the developers preferred everything to be fully specified: every detail defined, every option laid out and determined – how things work, and how things fail. Lots of detailed definition, to a level that was extremely exhaustive – and that would frankly annoy nearly all the developers from the first region.

I would love to make a hasty generalization and tell you that this was “web developers” on one side and ex-firmware developers on the other, but these folks were *all* firmware-level developers, stepping into a new team we had merged together from a variety of pieces and places. Everyone was dealing with new development patterns, a new language quite different from what they were used to historically, and a different business reality with goals and intentions in a high degree of flux.

What is surprising to me is that while I started out sympathizing with the first region’s culture far more, it is the far side of the spectrum – the one that preferred specificity – that has stuck with me. I had previously left a lot more unspecified and open to interpretation, and not surprisingly that bit back on occasion. Now when programming, regardless of the language, I want the tools to help me support that far more detailed, specified version of what I expect.

In the last year, I’ve actively programmed in C, C++, JavaScript, TypeScript, Swift, Python, and Objective-C. When I programmed in the dynamic languages (JavaScript, Python) I just *expected* to need to write a lot of tests, including ones that a type system would otherwise provide overlapping validation for. A lot of the time it was tracking null values, or feeding explicitly invalid inputs to functions and systems to make sure the responses were as expected (errors and failures). As I followed this path, I gained a huge amount of respect for the optional concept embedded into Swift. It has been an amazing eye-opener for me, and I find I struggle to go back to languages without it now.
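
The shift is easiest to see in a trivial example: in Swift, the “might be missing” case is part of the type, so the compiler forces the check that a dynamic language leaves to your test suite. (The names here are made up for illustration.)

    // Dictionary lookups return an Optional – the nil case is explicit.
    func lookupPort(service: String, registry: [String: Int]) -> Int? {
        registry[service]
    }

    let registry = ["http": 80, "https": 443]
    if let port = lookupPort(service: "https", registry: registry) {
        print("port:", port)
    } else {
        print("unknown service")   // the failure path can't be forgotten
    }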

Last fall I diverted heavily into a C++ project, where we explicitly pulled in and used an optional extension through a library: type_safe. Back in December, a Visual Studio update exposed a fatal bug in the Microsoft compiler that simply exploded when attempting to use this library, much to my annoyance. But even with the type system of C++ and this library working with me, I can’t even *imagine* trying to write stable C++ code without doing a ton of tests to validate what’s happening, how it’s working – and how it’s failing.

I’m still tempted to delve into Rust, as I keep hearing and seeing references to it as a language with very similar safety and structural goals to Swift (quite possibly with stronger guarantees – I don’t have a great way to judge the specifics there, though). Bryan Cantrill keeps pimping it in the background of his conversations on Oxide Computer’s On The Metal podcast. If you’re into fascinating history and interviews about the software-hardware interface, it’s amazing!

The end result is that I’ll start out sketching in ideas, but I want my tools and languages to help me drive toward a point where the code is far more “fully specified” as I iterate through the development process. In my ideal world – which I don’t really expect – the language and software tooling help me have confidence in the correctness and functionality of the code I’ve written, while allowing me the expressiveness to explore ideas as quickly as possible.

Just like the cultures and regions I navigated with that global team, it is a matter of sliding around on the spectrum of possibilities to optimize for the software challenge at hand.

CRDTs and lockless data structures

A good five or so years ago, Shevek – a friend I met while building cloud appliances at the (now defunct) Nebula – spent an afternoon and evening describing the benefits of lock-free data structures to me. He went deep into the theoretical aspects, summarizing a lot of the research thinking at the time. While I tried to keep up and retain it all, I really didn’t manage it. The (predictable) end result was that I got the basics of what a lock-free structure did, why it was valuable, and some basic examples of how it was used, but I missed a lot of the “how can this be more directly useful to me”. A lot of that conversation was focused on multithreaded code and making it efficient, and what could be done (or not) to avoid contention blocks.

Jumping back to today: when I’m not writing away on Using Combine, I am helping some friends with a project that includes real-time collaborative editing. The goal is the same kind of thing you see in Google Docs when multiple people are editing a document at the same time – you see each other’s cursors, live updates, etc.

The earliest form of this kind of functionality that I used was a Mac program called SubEthaEdit, and a couple of years later a browser-based text editor called EtherPad, which I used extensively when working with the open source project OpenStack. (That was also around five years ago – I haven’t been actively contributing to OpenStack in quite a while.)

Google adopted this capability as a feature, and has done an outstanding job of making it work. Happily, they also talk about how they do such things. In the details I was able to find, they often used the term “operational transformation” to cover most of their recent efforts. That was also the key technology behind Google’s now-dead effort, Wave. Other systems have done the same thing: ShareJS, Atom’s Xray, and the editor Xi.

I spent some time researching how others had tackled the problem of enabling this kind of feature. The single best cohesive document I read was a blog post by Alexei Baboulevitch (aka “Archagon”) entitled Data Laced with History: Causal Trees and Operational CRDTs. The article describes his own code and implementation experiments, conveniently available on GitHub as crdt-playground, and includes a tremendously useful primer on the related topics, with links to research papers. It also helped (me) that all his code was written in Swift, and readily available to try out and play with.

Data Laced with History: Causal Trees and Operational CRDTs is absolutely amazing, but not an easy read. While I highly recommend it if you want to know how CRDTs work, expect to need to read through it several times before the details all sink in. Alexei writes clearly and very accurately, but it is extremely dense. I have read the whole article many times, and dug through all that code repeatedly, to gain an understanding of it.

While I was buried in my own learning of how this works, I had an epiphany one evening: the core of a CRDT is a lock-free data structure. Once I stepped back from the specifics, looking at the whole thing as a general algorithm, the pieces I picked up from Shevek clicked into place. It turns out that conversation (which I had mostly forgotten and thought I had lost) had a huge amount of value for me, years later. The background I gleaned after the fact gave me the ideas to understand how and why CRDTs work, and ultimately proved incredibly useful in understanding how to use them effectively.
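
To make that connection concrete, here is a minimal sketch of one of the simplest CRDTs, a grow-only counter (G-Counter) – not from Alexei’s playground, just an illustration. Each replica increments only its own slot, and merge is an element-wise max: commutative, associative, and idempotent, so replicas converge without any locking or coordination.

    // A grow-only counter CRDT: per-replica counts, merged by max.
    struct GCounter {
        private var counts: [String: Int] = [:]   // replicaID -> count
        let replicaID: String

        init(replicaID: String) {
            self.replicaID = replicaID
        }

        // Each replica only ever increments its own slot.
        mutating func increment() {
            counts[replicaID, default: 0] += 1
        }

        // The observed value is the sum across all replicas.
        var value: Int {
            counts.values.reduce(0, +)
        }

        // Merging never loses an increment, regardless of delivery order
        // or how many times the same state is merged.
        mutating func merge(_ other: GCounter) {
            for (id, count) in other.counts {
                counts[id] = max(counts[id, default: 0], count)
            }
        }
    }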