How to make a SwiftUI component that draws a Shape with light

While I was experimenting with SwiftUI, one of the effects I wanted to re-create was taking a shape and making “stroke effects” on it. The goal was to create a SwiftUI component that could take an arbitrary path, apply this effect to the shape, and render it within the context of a larger SwiftUI view.

The effect I wanted to create was a “laser light” like drawing. You see this a lot in science fiction film user interfaces or backgrounds, and it is just kind of fun and neat. And yeah, I want it to be decent in both light and dark modes, although “dark mode” is where it will shine the most.

And the code to represent this is:

VStack {
    LaserLightShape(color: Color.orange, lineWidth: 1) {
        Rectangle()
    }

    LaserLightShape(color: Color.red, lineWidth: 2) {
        Circle()
    }

    LaserLightShape(color: Color.blue, lineWidth: 0.5) {
        Path { path in
            path.move(to: CGPoint(x: 0, y: 0))
            path.addLine(to: CGPoint(x: 50, y: 50))
        }
    }
}

Solving this challenge underscored the change in mindset from imperative to declarative code. When I was looking through the SwiftUI methods, I kept searching for a method with a means to “add on” to the existing view – or perhaps to replace an element within it. Both cases highlighted my pattern of thinking – me telling the framework (or library) what to do. SwiftUI’s biggest win (and challenge) is inverting that thinking.

The methods to work with SwiftUI Views don’t “take a view, modify it, and hand it back”. Instead they take in some information, make a whole new View, and return it. There’s no “tweaking” or “changing” – the closest you get to that (imperative) paradigm is wholesale replacement. My natural instinct was to reach for something with an explicit side effect – I suspect because that’s how a lot of the languages I’ve used for years got things done. It’s a pattern I’m familiar with, and the first tool I reach for.
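
A tiny illustration of that wholesale-replacement style – each modifier call returns a brand-new view value rather than mutating the one it was called on:

import SwiftUI

// Modifiers don't mutate the view they're called on; each call
// wraps the input and returns a brand-new view value.
let base = Text("Hello")
let dimmed = base.opacity(0.5) // `base` is untouched; `dimmed` is a new view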

This change in mindset is also why you’ll see a lot of the same people who “get” the new paradigm talking about how it overlaps with functional programming, using phrases like “pure functions”, and perhaps even “functors” and “monads”. I quickly get lost in many of these abstract concepts. For me, the closest parallel is the inversion of control that was the mental leap in moving from using a library to writing for a framework – this feels very much akin to that ‘ah ha’ moment. And for the record, I’m not asserting that I fully understand it – only that I recognize I need to change how I’m framing the problem in my head in order to use these new tools.

To solve this particular challenge, I originally started looking at SwiftUI ViewModifiers. I’d been reading about them and thought maybe that was what I wanted. Unfortunately, that didn’t work – ViewModifiers are great when you want to layer additional effects on a View, which means constraining what you do to the methods available and defined on the View protocol – but I wanted to work on a Shape, specifically leveraging the stroke method, which is a different critter.

The solution that I came up with uses a ViewBuilder. I wrote a bit about these before talking about making PreviewBackground. The mental framing that helped provide this solution was thinking about what I wanted to achieve as taking some information (a Shape) and returning some kind of View.

A ViewBuilder is a significantly generic-heavy function. So to make it accept something conforming to the Shape protocol, I constrained the generic type it accepted to that protocol. I am still stumbling through the specifics of how to effectively use generics and protocols with Swift, and thought this was a pretty nice way to show some of their strength.

Without further ado, here’s the code that produces the effect:

struct LaserLightShape<Content>: View where Content: Shape {
    let content: () -> Content
    let color: Color
    let lineWidth: CGFloat

    @Environment(\.colorScheme) var colorSchemeMode
    var blendMode: BlendMode {
        if colorSchemeMode == .dark {
            // lightens content within a dark
            // color scheme
            return BlendMode.colorDodge
        } else {
            // darkens content within a light
            // color scheme
            return BlendMode.colorBurn
        }
    }

    init(color: Color, lineWidth: CGFloat, @ViewBuilder content: @escaping () -> Content) {
        self.content = content
        self.color = color
        self.lineWidth = lineWidth
    }

    var body: some View {
        ZStack {
            // top layer, intended only to reinforce the color
            // narrowest, and not blurred or blended
            if colorSchemeMode == .dark {
                content()
                    .stroke(Color.primary, lineWidth: lineWidth / 4)
            } else {
                content()
                    .stroke(color, lineWidth: lineWidth / 4)
            }

            if colorSchemeMode == .dark {
                // pushes in a bit of additional lightness 
                // when in dark mode
                content()
                    .stroke(Color.primary, lineWidth: lineWidth)
                    .blendMode(.softLight)
            }
            // middle layer: half the stroke width, blended to
            // reinforce the underlying color - blended to impact
            // the color, but not blurred
            content()
                .stroke(color, lineWidth: lineWidth / 2)
                .blendMode(blendMode)

            // bottom layer - broad, blurred out, semi-transparent
            // this is the "glow" around the shape
            if colorSchemeMode == .dark {
                content()
                    .stroke(color, lineWidth: lineWidth)
                    .blur(radius: lineWidth)
                    .opacity(0.9)
            } else {
                // knock back the blur/background effects on
                // light mode vs. dark mode
                content()
                    .stroke(color, lineWidth: lineWidth / 2)
                    .blur(radius: lineWidth / 1.5)
                    .opacity(0.8)
            }
        }
    }
}

The instructions for what to do are embedded in the body of the view we are returning. The pattern is one I looked up from online references describing how to create this same effect in Photoshop. The gist is:

  • You take the base shape, make a wide stroke of it, and blur it out a bit. This will end up being the “widest” portion of the effect.
  • Over that, you put another layer – roughly half the width of the bottom layer, and stroke it with the color you want to show.
  • And then you add a final top layer, narrowest of the set, intended to put the “highlight” or shine onto the result.

You’ll notice in the code that it’s also dynamic with regard to light and dark backgrounds – on a light background, I tried to reinforce the color without making the effect look like an unholy darkness shadow trying to swallow the result, and on a dark background I wanted the “laser light” to shine through. I also found through trial-and-error experiments that it helped to have a fourth layer sandwiched in there in dark mode specifically to brighten up the stack, adding more “white” to the end effect.

A lot of this ends up leveraging the blend modes from SwiftUI that composite layers. I’m far from a master of blend modes, and had to look up a number of primers to figure out what I wanted from the fairly large set of possibilities.

I suspect there is a lot more that can be accomplished by leveraging the ViewBuilder pattern.

Introducing and explaining the PreviewBackground package

While learning and experimenting with SwiftUI, I use the canvas assistant editor to preview SwiftUI views extensively. It is an amazing feature of Xcode 11 and I love it. There is a quirk that gets difficult for me though – the default behavior of the preview provider uses a gray background. I frequently use multiple previews while making SwiftUI elements, wanting to see my creation on a background supporting both light and dark modes.

The following little stanza is a lovely way to iterate through the modes and display each as a preview:

#if DEBUG
struct ExampleView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ForEach(ColorScheme.allCases,
                    id: \.self) { scheme in

                Text("preview")
                    .environment(\.colorScheme, scheme)
                    .frame(width: 100,
                           height: 100,
                           alignment: .center)
                    .previewDisplayName("\(scheme)")
            }
        }
    }
}
#endif

Results in the following preview:

The gray background doesn’t help all that much here. It is perfect when you are viewing a fairly composed element set, as you are often working over an existing background. But when you are creating an element to stand alone, or moving an element, I really want a background behind the element.

And this is exactly what PreviewBackground provides. I made PreviewBackground into a SwiftPM package. While I could have created this effect with a ViewModifier, I tried it out as a ViewBuilder instead, thinking it would be nice to wrap the elements I want to preview explicitly.

The same example, using PreviewBackground:

import PreviewBackground

#if DEBUG
struct ExampleView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ForEach(ColorScheme.allCases,
                    id: \.self) { scheme in
                PreviewBackground {
                    Text("preview")
                }
                .environment(\.colorScheme, scheme)
                .frame(width: 100,
                       height: 100,
                       alignment: .center)
                .previewDisplayName("\(scheme)")
            }
        }
    }
}
#endif

The code is available on GitHub, and you may include it within your own projects by adding a Swift package with the URL: https://github.com/heckj/PreviewBackground
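
If you are adding the dependency through a Package.swift manifest instead of through Xcode’s UI, the entry looks something like this (the package name and the version requirement here are illustrative):

// swift-tools-version:5.1
import PackageDescription

let package = Package(
    name: "MyApp", // a hypothetical package using PreviewBackground
    dependencies: [
        // the "from" version shown here is illustrative
        .package(url: "https://github.com/heckj/PreviewBackground", from: "1.0.0"),
    ],
    targets: [
        .target(name: "MyApp", dependencies: ["PreviewBackground"]),
    ]
)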

Remember to import PreviewBackground in the views where you want to use it, and work away!

Explaining the code

There are not many examples of using ViewBuilder to construct a view, and this is a simple use case. Here is how it works:

import SwiftUI

public struct PreviewBackground<Content>: View where Content: View {
    @Environment(\.colorScheme) public var colorSchemeMode

    public let content: () -> Content

    public init(@ViewBuilder content: @escaping () -> Content) {
        self.content = content
    }

    public var body: some View {
        ZStack {
            if colorSchemeMode == .dark {
                Color.black
            } else {
                Color.white
            }
            content()
        }
    }
}

The heart of using ViewBuilder is accepting it within a View’s initializer to capture a closure that returns a (specific but) generic instance of View, storing that closure as a property, and executing it when composing the view.

There is a lot of complexity in that statement. Allow me to try and explain it:

Normally when creating a SwiftUI view, you create a struct that conforms to the View protocol. This is written in code as struct SomeView: View. You may use the default initializer that Swift creates for you, or you can write your own – often to set properties on your view. ViewBuilder allows you to take a function in that initializer that returns an arbitrary View. But since the kind of view is arbitrary, we need to make the struct generic – we can’t assert exactly what type it will be until the closure is compiled. To tell the compiler it’ll need to do the work to figure out the types, we label the struct as being generic, using the <SomeType> syntax:

struct SomeView<Content>: View where Content: View

This says there is a generic type that we’re calling Content, and that generic type is expected to conform to the View protocol. There is a more compact way to represent this that you may prefer:

struct SomeView<Content: View>: View

Within the view itself, we have a property – which we name content. The type of this content isn’t known up front – it is the arbitrary type that the compiler gets to infer from the closure that will be provided in the future. This declaration says the content property will be a closure – taking no parameters – that returns an arbitrary type we are calling Content.

public let content: () -> Content

Then in the initializer, we use ViewBuilder:

public init(@ViewBuilder content: @escaping () -> Content) {
    self.content = content
}

In case it wasn’t obvious, ViewBuilder is a function builder, the Swift feature that enables this declarative structure with SwiftUI. This is what allows us to ultimately use it within that declarative syntax form.

The final bit of code to describe is using the @Environment property wrapper.

@Environment(\.colorScheme) public var colorSchemeMode

The property wrapper is not in common use, but it is perfect for this need. It exposes a specific part of the existing environment as a local property for this view, which is what enables PreviewBackground to choose a background color appropriate to the current color scheme. The view is then assembled by invoking the property named content (provided via the function builder) within a ZStack.

By using ViewBuilder, we can use the PreviewBackground struct like any other composed view within SwiftUI:

var body: some View {
    PreviewBackground {
        Text("Hello there!")
    }
}

If we had created this code as a ViewModifier, then using it would look different – instead of the curly-bracket syntax, we would be chaining on a method. The default setup for something like that looks like:

var body: some View {
    Text("Hello there!")
        .modifier(PreviewBackground())
}

I wanted to enable the curly-bracket syntax for this, hence the choice of using a ViewBuilder.
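
For comparison, here is a minimal sketch of what such a modifier could look like – PreviewBackgroundModifier is a hypothetical name (the package does not ship this), chosen to avoid clashing with the real PreviewBackground view:

import SwiftUI

// A hypothetical ViewModifier variant - not what the package ships.
struct PreviewBackgroundModifier: ViewModifier {
    @Environment(\.colorScheme) var colorScheme

    func body(content: Content) -> some View {
        ZStack {
            // pick a backing color appropriate to the color scheme
            colorScheme == .dark ? Color.black : Color.white
            content
        }
    }
}

With that in place, the chained call above would read .modifier(PreviewBackgroundModifier()).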

A side note about moving code into a Swift package

When I created this code, I did so within the context of another project. I wanted to use it across a second project, and the code was simple enough (a single file) to copy/paste – but instead I went ahead and made it a Swift package. Partially to make it easier for anyone else to use, but also just to get a bit more experience with what it takes to set up and use this kind of thing.

The mistake that I made immediately on moving the code was not explicitly making all the structs and properties public. It moved over, compiled fine, and everything was looking great as a package, but then when I went to use it – I got some really odd errors:

Cannot call value of non-function type 'module<PreviewBackground>'

In other instances (yes, I admit this wasn’t the first time I made this mistake – and it likely won’t be the last) the Swift compiler would complain about the scope of a function, letting me know that it was using the default internal scope and was not available. But SwiftUI and this lovely function builder mechanism make the compiler work quite a bit harder, and it is not nearly as good at identifying why this mistake might have happened – only that it failed.

If you hit the error Cannot call value of non-function type when moving code into a package, you may have forgotten to make the struct (and relevant properties) explicitly public.
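
In other words, the fix looks like this – a minimal sketch with hypothetical types, just to show the shape of the change:

// Default access is internal: usable inside the package,
// but invisible to any module that imports it.
struct HiddenFromClients {}

// Crossing the module boundary requires explicit public on the type,
// its properties, and its initializer - the memberwise initializer
// that Swift synthesizes is never more visible than internal.
public struct VisibleToClients {
    public let label: String

    public init(label: String) {
        self.label = label
    }
}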

Four strategies to use while developing SwiftUI components

Let’s start out with the (possibly) obvious: when I code, I frequently make mistakes (and fix them); but while I am going through that process, function builders are frequently kicking my butt. When you are creating SwiftUI views, you use function builders intensely – and the compiler is often at a loss to explain how I screwed up. And yeah, even with the amazing new updates to the Diagnostic Engine alongside Swift 5.2, which I am loving.

What is a function builder? It is the thing that looks like a normal “do some work” code closure in Swift, and that you use as the declarative structure when you are creating a SwiftUI view. When you see code such as:

import SwiftUI

struct ASimpleExampleView: View {
    var body: some View {
        Text("Hello, World!")
    }
}

The bit after some View is the function builder closure, which includes the single line Text("Hello, World!").

The first mistake I make is assuming all closures are normal “workin’ on the code” closures. I immediately start trying to put everyday code inside of function builders. When I do, the compiler – often immediately and somewhat understandably – freaks out. The error message that appears in Xcode:

Function declares an opaque return type, but has no return statements in its body from which to infer an underlying type

And sometimes there are other errors as well. It really depends on what I stacked together, how I grouped and composed the various underlying elements in that top-level view, and ultimately what I messed up deep inside all of it.

I want to do some calculations in some of what I am creating, but doing them inline in the function builder closures is definitely not happening, so my first recommended strategy:

Strategy #1: Move calculations into a function on the view

Most of the time that I’m doing a calculation, it is because I want to determine a value to hand to a SwiftUI view modifier – fiddling with the opacity, position, or perhaps line width. If you are really careful, you can do some of that work – often simple – inline. But when I do that work, I invariably screw it up – make a mistake in matching a type, dealing with an optional, or something. When that code is inline in a function builder closure, the compiler has a hell of a hard time figuring out what to tell me about how I screwed up. By putting the relevant calculation into a function that returns an explicit type, the compiler gets a far more constrained place to provide feedback about my mistake.

As an example:

struct ASimpleExampleView: View {
    func determineOpacity() -> Double {
        1
    }

    var body: some View {
        ZStack {
            Text("Hello World").opacity(determineOpacity())
        }
    }
}

Sometimes you aren’t even doing calculations, and the compiler still gets into a tizzy about the inferred type being returned. I have barked my shins on that particular edge repeatedly while experimenting with all the various options, seeing what I like in a visualization. The canvas assistant editor that is available in Xcode is a godsend for fast visual feedback, but I get carried away in assembling lots of blocks with ZStacks, HStacks, and VStacks to see what I can do. This directly leads to my second biggest win:

Strategy #2: Ruthlessly refactor your views into subcomponents

I am beginning to think that seeing repeated, multiple kinds of stacks together in a single view is possibly a code smell. But more than anything else, keeping the code within a single SwiftUI view as brutally simple as possible gives the compiler a better-than-even chance of being able to tell me what I screwed up, rather than throwing up its proverbial hands with an inference failure.

There are a number of lovely mechanisms with Binding that make it easy to compose and link to the relevant data that you want to use. When I am making a subcomponent that provides some visual information that I expect the enclosing view to be tracking, I have started using the @Binding property wrapper to pass it in, which works nicely in the enclosing view.

TIP:

When you’re using @Binding, remember that you can make a constant binding in the PreviewProvider in that same file:

YourView(someValue: .constant(5.0))
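
As a sketch of that subcomponent pattern (VolumeSlider and PlayerView are names I made up for illustration):

import SwiftUI

// A subcomponent that edits state owned by its enclosing view.
struct VolumeSlider: View {
    @Binding var volume: Double

    var body: some View {
        Slider(value: $volume, in: 0...1)
    }
}

// The enclosing view owns the source of truth and passes a binding down.
struct PlayerView: View {
    @State private var volume: Double = 0.5

    var body: some View {
        VolumeSlider(volume: $volume)
    }
}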

While I was writing this, John Sundell published a very in-depth look at exactly this topic. His article Avoiding Massive SwiftUI Views covers another angle of how and why to ruthlessly refactor your views.

On the topic of the mechanics of that refactoring: once we learn what to do, it leads to leveraging Xcode’s canvas assistant editor with PreviewProvider – and my next strategy:

Strategy #3: Use Group and multiple view instances to see common visual options quickly

This strategy is more or less obvious, and was highlighted in a number of the SwiftUI WWDC presentations that are online. The technique is immensely useful when you have a couple of variations of your view that you want to keep operational. It allows you to visually make sure they are working as desired while you continue development. In my growing example code, this looks like:

import SwiftUI

struct ASimpleExampleView: View {
    let opacity: Double
    @Binding var makeHeavy: Bool

    func determineOpacity() -> Double {
        // maybe do some calculation here
        // mixing the incoming data
        opacity
    }

    func determineFontWeight() -> Font.Weight {
        if makeHeavy {
            return .heavy
        }
        return .regular
    }

    var body: some View {
        ZStack {
            Text("Hello World")
                .fontWeight(determineFontWeight())
                .opacity(determineOpacity())
        }
    }
}

struct ASimpleExampleView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ASimpleExampleView(opacity: 0.8, 
                makeHeavy: .constant(true))

            ASimpleExampleView(opacity: 0.8, 
                makeHeavy: .constant(false))
        }
    }
}

And the resulting canvas assistant editor view:

This does not always help you experiment with what your views look like in all variations, but for sets of pre-defined options, or data that influences your view, it can make a huge difference. A good variation that I recommend anyone use is setting and checking the accessibility environment settings to make sure everything renders as you expect. Another that I hear is in relatively frequent use: verifying localization rendering.
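
As a sketch of that accessibility variation, reusing the example view from above (the size categories chosen here are arbitrary):

struct ASimpleExampleView_AccessibilityPreviews: PreviewProvider {
    static var previews: some View {
        Group {
            ASimpleExampleView(opacity: 0.8,
                makeHeavy: .constant(true))
                .environment(\.sizeCategory, .accessibilityExtraLarge)

            ASimpleExampleView(opacity: 0.8,
                makeHeavy: .constant(false))
                .environment(\.sizeCategory, .extraSmall)
        }
    }
}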

The whole rapid experimentation and feedback capability is what is so compelling about using SwiftUI. Which leads pretty directly to my next strategy:

Strategy #4: Consider making throw-away control views to tweak your visualization effects

I am not fortunate enough to constantly work closely with a designer. Additionally, I often do not have the foggiest idea of how some variations will feel in terms of a final design. When the time comes, seeing the results on a device (or on multiple devices) makes a huge difference.

You do not want to do this for every possible variation. That is where mocks fit into the design and development process – take the time to make them and see what you think. Once you have narrowed down your options to a few, then this strategy can really kick in and be effective.

In the cases where I have a small number of variations to try out, I encapsulate those options into values that I can control. Then I make a throw-away view – one that will never be shown in the final code – that allows me to tweak those values within the context of a running application. The whole thing goes into whatever application I am working on – macOS, iOS, etc. – and I see how it looks.

When I am making a throw-away control view, I often also make (another throw-away) SwiftUI view that composes the control and controlled view together, as I intend to display it in the application. This is primarily to see the combined effect in a single Preview within Xcode. The live controls are not active in the Xcode canvas assistant editor, but it helps to see how having the controls influences the rest of the view structure.
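
As a sketch, a throw-away harness for the laser-light component from earlier might look like this (the names and the parameter range are illustrative):

import SwiftUI

// A throw-away tuning view: a Slider drives the parameter being
// explored, and the controlled view reacts live in the running app.
struct LaserLightTuningView: View {
    @State private var lineWidth: CGFloat = 1.0

    var body: some View {
        VStack {
            LaserLightShape(color: .orange, lineWidth: lineWidth) {
                Circle()
            }
            Slider(value: $lineWidth, in: 0.5...8)
            Text("lineWidth: \(lineWidth, specifier: "%.1f")")
        }
    }
}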

A final note: Do not be too aggressive about moving code into a SwiftPM package

You may (like me) be tempted to move your code into a library, especially with the lovely SwiftPM capabilities that now exist within Xcode 11. From my initial experiments, this does work, and works quite well functionally. But there is a significant downside, at least with the current versions (including Xcode 11.4 beta 3), while you are still doing active development on the library:

If you open the package to edit or tweak it with Xcode, loading and building from only Package.swift without an associated Xcode project, the SwiftUI canvas assistant preview will not function. If you use an Xcode project file, it works fine – so if you do go down this route, just be cautious about removing the Xcode project file for now. I have filed feedback with Apple to report the issue – both with Xcode 11.3 (FB7619098) and Xcode 11.4 beta 3 (FB7615026).

I would not recommend moving anything into a library until you have it stable, in any case. There are also still some awkward quirks about developing code and a dependent library at the same time with Xcode. It can be done, but it plays merry havoc with Xcode’s automatic build mechanisms and CI.

Using Combine v1.1 is available

After getting the major edits for the existing content done, I called the result the first release. As with any creative product, I wasn’t happy with some of the corners that still had rough edges. Over the past two weeks I fleshed those out, wrote a bunch of unit tests, figured out some of the darker corners that I’d previously ignored, and generally worked to improve the overall consistency.

The results have been flowing into the online version as I merged them. And now the updated version, available on gumroad in PDF and ePub format, is updated as well. Anyone who’s previously purchased the content gets the updates for free – just log in and they are available for you.

The rough bits that were fleshed out include several areas of content:

  • Tests created and content written (and updated) for the multicast and share operators. The focus was primarily how they work and how to use them.
  • Worked through what the Record publisher offers (and doesn’t offer), including how to serialize & deserialize recorded streams (while this sounds cool, it’s ultimately not as useful as I hoped it might be).
  • Added the missing note that swift’s Result type could also be used as a publisher, courtesy of a little extension that was added in Combine.
  • Updated some of the details of throttle and debounce with the specifics of delays that are incurred in timing, after having validated the theories with some unit tests (spoiler: debounce always delays the events by a short bit, but throttle doesn’t have much of an impact). I had previously written about throttle and debounce on this blog as well.

The new version is 1.1, tagged in the underlying repository if you are so inclined to review/poke at the unit tests beyond the narrative details I shared in the book itself.

Using Combine – first edition available

I just finished my first edit pass of the content of Using Combine, and am incredibly pleased. Sufficiently pleased, in fact, that I am going to call this version the “first edition”.

It is certainly not perfect, nor even as complete as I would like, but a significant enough improvement that I wanted to put a stake in the ground and get it out there.

I’m not planning on stopping the development work; there are more examples, more details, and useful tidbits still to be developed. I have continued to receive wonderful feedback, and plan to continue to accept updates, as all the content, example code, and sample project pieces are available in the GitHub repository swiftui-notes.

The content will continue to remain available for free in a single-page HTML format, hosted at http://heckj.github.io/swiftui-notes/. The ePub and PDF versions are available through Gumroad.

SwiftUI and Combine – Binding, State, and notification of changes

When I started the project that became Using Combine, it was right after WWDC; I watched the streamed WWDC sessions online, captivated like so many others by SwiftUI. I picked up the idea that SwiftUI was “built using the new framework: Combine”. In my head, that meant Combine managed all the data – notifications and content – for SwiftUI. And well, that ain’t so. While the original “built using Combine” is accurate, it misses a lot of detail, and the truth is a bit more complex.

After I finished my first run through drafting the content for Using Combine, I took some time to dig back into SwiftUI. I originally intended to write (and learn) more about that – in fact, SwiftUI is what started the whole segue into Combine. I hadn’t really tried to use SwiftUI seriously, or get into the details, until just recently. I realized that after all the work on examples for Combine and UIKit, I had given the SwiftUI examples short shrift.

Mirroring a common web technology pattern, SwiftUI works as a declarative structure of what gets shown, with the detail being completely derived from some source of truth – derived from state stored or managed somewhere. The introductory docs made it clear that @State was how this declarative mechanism could represent a bit of local state within a View, and with the benefit of Daniel and Paul’s writing (SwiftUI Kickstart and SwiftUI by Example), it was also quickly clear that @EnvironmentObject and @ObservedObject played a role there too.

The Combine link to SwiftUI, as it turns out, is really only about notifying the SwiftUI components that a model has changed – not at all about what changed. The key is the protocol from Combine: ObservableObject (Apple’s docs). This protocol, along with the @Published property wrapper, does the wonderful work of generating a Combine publisher – the default type of which is represented by the class ObservableObjectPublisher. In the world of Combine, it has a defined output and failure type: <Void, Never>. The heart of that Void output type is that the data that is changing doesn’t matter – only that a change happened.
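
A minimal sketch of such a model (CounterModel is a hypothetical name):

import Combine

// Conforming to ObservableObject and marking properties @Published is
// all it takes: changes are announced through objectWillChange, a
// publisher with Void output and Never failure.
class CounterModel: ObservableObject {
    @Published var count: Int = 0
}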

So how does SwiftUI go and get the data it needs? Binding is the SwiftUI generic structure that is used to make this linkage. The documentation at Apple asserts:

Use a binding to create a two-way connection between a view and its underlying model. For example, you can create a binding between a Toggle and a Bool property of a State. Interacting with the toggle control changes the value of the Bool, and mutating the value of the Bool causes the toggle to update its presented state.

You can get a binding from a State by accessing its binding property. You can also use the $prefix operator with any property of a State to create a binding.

https://developer.apple.com/documentation/swiftui/binding

Looking around a bit more while creating some examples, it becomes clear that some handy form elements (such as TextField) expect a parameter of type Binding when they are declared. Binding itself works by leveraging Swift’s property getters and setters. You can even manually create a Binding if you’re so inclined, defining the closures for get and set to whatever you like. Property wrappers such as @State, @ObservedObject, and @EnvironmentObject either create and expose a Binding, or create a wrapper that in turn passes back a Binding.
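
For instance, a hand-built Binding over a plain reference type (the Person model here is purely illustrative):

import SwiftUI

// A plain reference type holding the actual value.
class Person {
    var name: String = "Joe"
}

let model = Person()

// A hand-built Binding: you supply the get and set closures yourself.
let nameBinding = Binding<String>(
    get: { model.name },
    set: { model.name = $0 }
)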

My takeaway is that the flow with Combine and SwiftUI has a generally expected pattern: a model represented by a reference object, which sends updates when its data is about to change (by conforming to the ObservableObject protocol). SwiftUI goes and gets the data it needs based on what was declared in the View, using Binding to get to the underlying data (and potentially allowing the SwiftUI view to update it in turn if that’s relevant).

Given that SwiftUI views are also designed to be composed, I am leaning towards expecting a pattern where state will need to be defined for pretty much any variation of a view – and potentially externalized. The property wrappers for representing, and externalizing, state within SwiftUI are:

  • @State
  • @ObservedObject and @Published
  • @EnvironmentObject

@State is all about local representation, and the simplest mechanism, simply providing a link to a property and the Binding.

@ObservedObject (along with @Published) adds a notification mechanism on change, as well as a way to get a typed Binding to properties on the model. SwiftUI’s mechanism expects this always to be a reference type (aka a ‘class’), which ends up being pretty easy to define in code.

@EnvironmentObject takes that a step further and exposes a reference model not just to a single view, but allows it to be used by any number of views in their own hierarchy.

The general pattern with all of these: drive most of the visual design choices entirely by the current state.
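
To make the @ObservedObject flavor concrete, here is a minimal sketch reusing the hypothetical CounterModel from earlier:

import SwiftUI

struct CounterView: View {
    // expects a reference type conforming to ObservableObject
    @ObservedObject var model: CounterModel

    var body: some View {
        // $model.count provides a typed Binding<Int> into the model
        Stepper("Count: \(model.count)", value: $model.count)
    }
}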

But that’s not the only mechanism that is available: SwiftUI is also set up to react to a Combine publisher – although not in a heavily predetermined fashion. An interesting aspect is that all of the SwiftUI views also support a Combine subscriber: onReceive. So you can bring the publisher, and then write code within a View (or View component) to react to what it sends.

The onReceive subscriber acts very similarly to Combine’s sink subscriber – the single-closure version of sink (implying a Combine pipeline failure type of Never). You define a closure within your SwiftUI view element that accepts the data and does whatever needs doing. This could be using the data directly, transforming and storing it into local @State, or just reacting to the fact that data was sent and updating the view based on that.
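
A sketch of onReceive in use, with a Foundation Timer publisher (whose failure type is Never):

import SwiftUI
import Combine

struct ClockView: View {
    @State private var now = Date()

    // a Combine publisher wrapping Foundation's Timer
    private let ticker = Timer.publish(every: 1, on: .main, in: .common)
        .autoconnect()

    var body: some View {
        Text("\(now)")
            .onReceive(ticker) { timestamp in
                // store the received value into local @State
                now = timestamp
            }
    }
}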

From a “what is a best practice” point of view, it seems the more you represent what you want to display within a reference model, the easier it will be to use. While you can expose a publisher right into a SwiftUI view, doing so tightly couples the Combine publisher to the view and links all those relevant types. You could (likely just as easily) have the model object encapsulate that detail – in which case the declaration of how you handle event changes over time is separated from how you present the view. This is likely a better separation of concerns.

The project (SwiftUI-Notes) linked to Using Combine now has two examples with Combine and SwiftUI. The first is a simple form validation (the view ReactiveForm.swift and model ReactiveFormModel.swift). This uses both the pattern of encapsulating the state within the model, and exposing a publisher to the SwiftUI View to show what can be done. I’m not espousing that the publisher mechanism is a good way to solve that particular problem, but it illustrates what can be done nicely.

The second example is a view (HeadingView.swift) that uses a model and publisher I created to wrap the built-in CoreLocation framework. The model (LocationModelProxy.swift) exposes the authorization as a published property, as well as the location updates through a publisher. Within the built-in Cocoa framework, those are normally exposed through delegate callbacks. A large number of the existing Cocoa frameworks can be converted into a publisher-based mechanism for working with Combine using this pattern. The interesting bit was linking this up to SwiftUI, which was fun – although this example only taps the barest possibility of what could be done.

It will be interesting to see what Apple might provide in terms of adopting Combine as alternative interfaces to its existing frameworks. CoreLocation is such a natural choice with its streaming updates, but there are a lot of others that could be used as well. And of course I’m looking forward to seeing how they expand on SwiftUI – and if they bring in more Combine based mechanisms into it or not.

programming tools – start with sketching and build to fully specified

A couple of years ago I was managing a very distributed team – or really, a set of teams: Shanghai, Minneapolis, Seattle, Santa Clara, and Boston. Everyone worked for the same company, but culturally the teams were hugely divergent.

In one region, the developers on the team preferred things to be as loosely defined as possible. Their ideal state was “tell me how you’re going to rate me”, and then you got out of their way as they min-max’d based on whatever measurement rules you had just set in place. When they were in charge of a specific element of the solution, there was a great deal of confidence it would get done, and done effectively. But when they needed to be more of an active coordinator within a growing solution, they struggled quite a bit more – the communication overhead took a steep toll.

The opposite end of the spectrum was a region where the developers preferred for everything to be fully specified. Every detail defined, every option laid out and determined – how things work, and how things fail. Lots of detailed definition, to a level that was extremely exhaustive – and would frankly annoy nearly all the developers from the first region.

I would love to make a hasty generalization and tell you that this was a “web developer” on one side and an ex-firmware developer on the other, but these folks were *all* firmware-level developers, stepping into a new team we had merged from a variety of pieces and places. Everyone was dealing with new development patterns, a new language – quite different from what they were used to historically, and a different business reality with goals and intentions that were in a high degree of flux.

What is surprising to me is that while I started out sympathizing with the first region’s culture far more, it is the other end of the spectrum – the side that preferred the specificity – that has stuck with me. I had previously left a lot more unspecified and open to interpretation, and not surprisingly that bit back on occasion. Now when programming, regardless of the language, I want the tools to help me support that far more detailed, specified version of what I expect.

In the last year, I’ve actively programmed in C, C++, JavaScript, TypeScript, Swift, Python, and Objective-C. When I programmed in dynamic languages (JavaScript, Python) I just *expected* to need to write a lot of tests, including ones that a type system would otherwise provide some overlapping validation for. A lot of the time it was tracking null values, or explicitly invalid inputs to functions and systems, to make sure the responses were as expected (errors and failures). As I followed this path, I gained a huge amount of respect for the optional concept that’s been embedded into Swift. It has been an amazing eye opener for me, and I find I struggle to go back to languages without it now.

Last fall I diverted heavily into a C++ project, where we explicitly pulled in and used an optional extension through a library: type_safe. Back in December, a Visual Studio compiler update from Microsoft exposed a fatal compiler bug that just exploded when attempting to use this library, much to my annoyance. But even with the type system of C++ and this library working with me, I can’t even *imagine* trying to write stable C++ code without doing a ton of tests to validate what’s happening, how it’s working – and how it’s failing.

I’m still tempted to delve into Rust, as I keep hearing and seeing references to it as a language with very similar safety and structural goals to Swift (quite possibly with stronger guarantees – though I don’t have a great way to judge the specifics there). Bryan Cantrill keeps pimping it in the background of his conversations on Oxide Computing’s On The Metal podcast. If you’re into fascinating history and interviews about the software-hardware interface, it’s amazing!

The end result is that I’ll start out sketching in ideas, but I want my tools and languages to help me drive to a point where I’m far more “fully specified” as I go through the development process and iterate on the code. In my ideal world – which I don’t really expect to get – the language and software tooling helps me have confidence in the correctness and functionality of the code I’ve written, while allowing me the expressiveness to explore ideas as quickly as possible.

Just like the cultures and regions I navigated with that global team, it is a matter of sliding around on the spectrum of possibilities to optimize for the software challenge at hand.

CRDTs and lockless data structures

A good five or so years ago, Shevek – a friend I met while building cloud-generating appliances at the (now defunct) Nebula – spent an afternoon and evening describing the benefits of lock-free data structures to me. He went deep into the theoretical aspects, summarizing a lot of the research thinking at the time. While I tried to keep up and retain it all for later, I really didn’t manage it. The (predictable) end result was that I got the basics of what it did, why it was valuable, and some basic examples of how it was used, but I missed a lot of how it could be more directly useful to me. A lot of that conversation was focused on multithreaded code and making it efficient, and what could be done (or not) to avoid contention blocks.

Jumping back to today: when I’m not writing away on Using Combine, I am helping some friends with a project that includes real-time collaborative editing. The goal is the same kind of thing you see in Google Docs, where multiple people are editing a document at the same time – you see each other’s cursors, live updates, and so on.

The earliest form of this kind of functionality that I used was originally a Mac program called SubEthaEdit, and a couple of years later a browser-based “text editor” called EtherPad, which I used extensively when working with the open source project OpenStack. (This was also around five years ago – I haven’t been actively contributing to OpenStack in quite a while.)

Google adopted this capability as a feature, and has done an outstanding job of making it work. Happily, they also talk about how they do such things. In the details I was able to find, they often used the term “operational transformation” to cover most of their recent efforts. That was also the key technology behind Google’s now-dead effort, Wave. Other systems have done the same thing: ShareJS, Atom’s Xray, and the editor Xi.

I spent some time researching how others had tackled the problem of enabling this kind of feature. The single best cohesive document I read was a blog post by Alexei Baboulevitch (aka “Archagon”) entitled Data Laced with History: Causal Trees and Operational CRDTs. The article describes his own code and implementation experiments, conveniently available on GitHub as crdt-playground, and includes a tremendously useful primer on the related topics, with links to research papers. It also helped (me) that all his code was done in Swift, readily available to try out and play with.

Data Laced with History: Causal Trees and Operational CRDTs is absolutely amazing, but not an easy read. While I highly recommend it if you want to know how CRDTs work, expect to need to read through it several times before the details all sink in. Alexei writes clearly and very accurately, and it is extremely dense. I have read the whole article many times, as well as dug through all that code repeatedly, to gain an understanding of it.

While I was buried in my own learning of how this works, I had an epiphany one evening: the core of a CRDT is a lock-free data structure. Once I stepped back from the specifics, looking at the whole thing as a general algorithm, the pieces I picked up from Shevek clicked into place. It turns out that conversation (which I had mostly forgotten and thought I had lost) had a huge amount of value for me, years later. The background I gleaned after the fact gave me the ideas to understand how and why CRDTs work, and ultimately proved incredibly useful in understanding how to effectively use them.

Human Voice

When I joined twitter, it was because my friends were talking about it. Conversations that I could normally only participate in during conferences or meetups became available to me. I tried to follow it slavishly at first, and then I had an epiphany that it was more like chatting with some friends at a restaurant or bar – people are coming and going, and you chat with whomever is around and available. Facebook was similar, but family and friend focused – keeping up with what my friends are doing after I moved 1000 miles away.

Fast forward a decade, and the human voice has been nearly extinguished in both mediums. I still have accounts in both systems, but it’s more like turning on a constant advertising stream. I ceased being able to rely on either for even slight recency of human voices, let alone the friends and conversations that I used to have. They have become the modern noise-on-the-TV that I grew up with – nothing good on. Perhaps worse, because so much of it is emotionally strident – “this one small trick”, “you’ll be shocked and amazed”, etc. So much bullshit.

Fortunately I’ve (re)found a place where I can get that human voice again:

I’ve gone back to checking my RSS feeds first for reading instead of hitting twitter or facebook, which makes a huge difference. That dopamine hit isn’t the same – RSS isn’t a never-ending stream of potentially interesting content that keeps you addicted like a manic crackhead, but the content that is there tends to be pretty darned good.

I’ve gone back to curating – looking for mentions and links, and following those back to sources. The “X Weekly” curated newsletters are equally good for finding new people to read, as are friends of friends. It takes some effort, but that is also what makes it more real. If someone goes too wonky, I can easily ignore them for a bit, or drop their feed from my set – no shaming or cancel notification, just stepping away toward more of what I’m interested in.

If you want to pipe up and join in the conversation, you can easily host your writing at Micro.blog, WordPress, or Medium. Micro.blog and WordPress run $5 and $8 per month respectively, and Medium has no direct cost.

Remember if you’re not paying for a service, you are likely the product…

I have used WordPress for years, so I stuck with it, but honestly the easiest way to get started is very much micro.blog. Write about whatever you want, as much as you want. A sentence, a paragraph, or longer – there’s no limit, no “right way/wrong way”, and you don’t need to torture your words into some small number of characters.

Using Combine – reference content complete!

I’m thrilled to be announcing that an updated version of Using Combine is now available!

It has taken me nearly 6 months to draft it all, reverse engineering and writing tests for all the various publishers, operators, and pieces in between – and documenting what I found. The end result is 182 pages (in US PDF format) of reference documentation the way I’d generally like to have it.

While the live site is updated automatically, updated PDF and ePub versions are now available on Gumroad. If you purchased a copy previously, you can go to Gumroad and get the updated, DRM-free content in either PDF or ePub format.

This update finishes the largest swath of the reference material, creating tests to verify the various operators and writing the corresponding documentation reference sections.

There was also an update for Xcode 11.3 (and the associated iOS 13.3 and macOS 10.15.2), which included some subtle changes to the throttle operator’s behavior that I recently wrote about in some detail.

With this update, the majority of the core content is now complete, but the work is by no means finished!

The next steps for the book are review and editing. On the to-do list are refining the descriptions of the reference sections, reviewing all the patterns now that we have had Combine for a few months, and seeing the updates as the API changes and refines. There are some diagrams now, but more are likely needed in some sections – both in the patterns and reference sections.

As before, this continues as a labor of love and for the community. Meaning that the content will continue to be free, available on the live site, with updates being made available as I make them. The work has been financially supported by 116 people as I’m writing this, as well as a number of people providing pull requests to fix typos and grammar flaws.

If you want a DRM-free digital copy in either PDF or ePub format, please consider supporting this work by purchasing a copy at Gumroad.