The range operator and SwiftUI’s layout engine

This post is specific to Swift the programming language and SwiftUI, Apple’s newest multi-platform UI framework. If you’re not interested in both, probably best to skip past this…

I was working on visualization code that leverages SwiftUI to see how that might work, and ran into a few interesting tidbits: playgrounds with SwiftUI work brilliantly, right up until they don’t, and then you’re in a deep, dark hole of WTF?!; and the SwiftUI layout engine, especially with nested and semi-complex container views, uses some sort of iterative solver technique. The less interesting tidbit is a classic aphorism: it’s the things you think you know that aren’t true that bite you.

When you’re fiddling with adding your own constraints in SwiftUI, you might be tempted to use the range operator – at least my thinking was “Oh hey, this looks like a super convenient way to check that a value is within an expected range”. It works stunningly well for me, as long as I’m careful about creating the range. I started creating ranges on the fly from variable values, and that’s where playgrounds, and my bad assumptions, bit me.

If you create a range that’s ludicrous, Swift traps at runtime with a fatal error. So if you’re working with ranges, passing in a blatantly incorrect value will give you a nonsensical range, and that will result in a crash. When you stumble into this using Playgrounds, you get a crash that doesn’t really tell you much of anything. When you kick the same thing up in Xcode, it still crashes (of course), but at least the debugger will drop into place and show you what you did wrong. I love using SwiftUI with Playgrounds, but the lack of runtime feedback when I hit that kind of failure – about what I screwed up – makes it significantly less useful to me.
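The failure mode is easy to demonstrate in isolation. A minimal sketch (the variable names are just for illustration):

import CoreGraphics

let width: CGFloat = 0 // the kind of value SwiftUI can hand you mid-layout
// Fatal error: Range requires lowerBound <= upperBound
let badRange = width ... (width - 1)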

And debugging this in Xcode was where I learned that closures you provide within SwiftUI layout – such as the one for alignmentGuide, or a method of your own creation working with a GeometryReader – don’t get called just once. Sometimes they’re called once, but other times they’re called repeatedly, and with pretty strange values for the view’s dimensions – in some cases repeatedly with the same values. I think underneath the covers there’s an iterative layout solver that’s trying out a variety of layout options for the view being created. Interestingly, sometimes those values included a ViewDimensions or GeometryProxy with a size.width of 0. The bad assumption I made was that the view would be sized quite a bit larger, never zero. Because of that, I attempted to build an invalid range – effectively ClosedRange(x ... x-1) – which caused the crash.
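A defensive version of one of those closures guards against the zero-width passes before constructing a range. This is a minimal sketch of the idea, not the code from my visualization project:

import SwiftUI

struct GuardedAlignmentView: View {
    var body: some View {
        Text("Hello")
            .alignmentGuide(HorizontalAlignment.center) { d in
                // d.width can legitimately be 0 on intermediate layout
                // passes, so clamp before building a range from it
                let upperBound = max(d.width, 1)
                let allowed: ClosedRange<CGFloat> = 0 ... upperBound
                // clamp the proposed offset into the safe range
                return min(max(d.width / 2, allowed.lowerBound), allowed.upperBound)
            }
    }
}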

Even with my own assumptions biting me, I like using ranges, and I’m trying to use them in an experimental API surface. Lord knows what’ll come of the experiment, but the basics are bearing some fruit. I have a bit of code where I’ve been porting some concepts, such as scale and tick, from D3 for use within SwiftUI.

The current experimental code looks like:

// axis view w/ linear scale - simple/short
HStack {
    VerticalTickDisplayView(
        scale: LinearScale(
            domain: 0 ... 5.0,
            isClamped: false)
    )
    VerticalAxisView(
        scale: LinearScale(
            domain: 0 ... 5.0,
            isClamped: false)
    )
}
.frame(width: 60, height: 200, alignment: .center)
.padding()

// axis view w/ log scale variant - manual ticks
HStack {
    VerticalTickDisplayView(
        scale: LogScale(
            domain: 0.1 ... 100.0,
            isClamped: false),
        values: [0.1, 1.0, 10.0, 100.0]
    )
    VerticalAxisView(
        scale: LogScale(
            domain: 0.1 ... 100.0,
            isClamped: false)
    )
}
.frame(width: 60, height: 200, alignment: .center)
.padding()

And it results in fairly nice horizontal and vertical tick axes that I can use around a chart area:

Open apps with SwiftUI

Earlier this week, James Dempsey asked on Twitter who else was actively trying to build macOS apps using SwiftUI. I’m super interested in SwiftUI. A year ago, it spawned my own side project of writing reference docs on Combine. Originally I had a vision of writing about Combine as well as SwiftUI. Combine alone was huge, so I stopped there with the notes, especially as SwiftUI is still maturing rapidly. Some pretty amazing updates came earlier this year. While clearly not finished, likely not even close to finished, it’s now far enough along in its maturity that you can at least consider using it for full apps – or parts of your app if you like – on macOS, iOS, watchOS, or tvOS.

While I’ve been keeping track of the framework, I’ve also been keeping track of people who are using it, writing about it, struggling with it, etc. There are two implementations of full applications, both open source (and hence completely visible), that I’ve been following as super interesting examples of using SwiftUI: NetNewsWire and ControlRoom.

NetNewsWire

I’ve contributed a bit to NetNewsWire – only a tiny amount, mostly around continuous integration – but I’ve been using it since its original inception, through multiple owners, to its current state as an open-source project that Brent Simmons is leading. The code is available online, ever evolving, at https://github.com/Ranchero-Software/NetNewsWire. The recent work to embrace SwiftUI is on its main branch, with a lot of the SwiftUI code under the directory multiplatform/shared. Take a deep dive and dig around – there are some gems and interesting questions, and you can see some really fascinating examples of integrating SwiftUI with UIKit or AppKit where SwiftUI isn’t quite up to some of the tasks the project needs.

ControlRoom

The other app I’ve been watching is ControlRoom, an app that Paul Hudson referenced in a demo capture on Twitter. ControlRoom’s code is on GitHub at https://github.com/twostraws/ControlRoom. It was released earlier in SwiftUI’s lifecycle, and it shows an integration not with the new SwiftUI app architecture pieces, but with the more “classic” macOS AppKit structure. Like NetNewsWire, I found a number of really great gems within the code, often having “light-bulb” moments when I understood how the app accomplished some of its goals.

Others…

There are easily other apps out there that I’m unaware of – but not too many folks are openly sharing their development like the two projects above. I did find a list of open-source iOS apps on GitHub that includes a subset using SwiftUI, which might be interesting.

I have a few of my own experiments, but nothing as polished and effective as these two, and I don’t think I’ve solved any problems in novel ways that they haven’t. In a bit of test and benchmarking code, I was creating SwiftUI interfaces across macOS, iOS, and tvOS – which turns out to be a right pain in the butt, even for the simplest displays.

I hope, but don’t expect, that more apps will become available – or more visible – down the road. The open sharing of how they solved problems is invaluable to me for learning, and even more so for sharing. Apple has its sample code – well, some of it anyway – but seeing folks outside of Apple use the framework “in the wild” really shows where it’s working (and where it isn’t).

post-WWDC – more device collaboration?

It’s been two weeks since WWDC as I’m writing this. I certainly haven’t caught all the content from this year’s event, or even fully processed what I have learned. I see several patterns evolving, hear and read the various rumors, and can’t help but wonder. Maybe it’s wishful thinking, or I’m reading tea leaves incorrectly, but a few ideas keep popping into my head. One of them is that I think we would benefit from — and have the technology seeds for — more and better device collaboration between Apple devices.

One of the features SwiftUI gained this year leverages a pattern of pre-computed visualizations for use as complications on the Apple Watch and, more generally, as Widgets. The notion of a timeline is ingenious and a great way to work around the constrained compute capability of compact devices. The feature leans heavily on SwiftUI’s declarative views; that is, views derived from data.

Apple devices have frequently worked with each other – Phone and Watch pairing, the Sidecar mechanism available with a Mac laptop and an iPad, and of course AirPlay. They offload data, and in some cases they offload computation. There’s creative caching involved, and even the new App Clips feature seems like a variation on this theme – code that can be trusted and run for small, very focused purposes.

Apple has made some really extraordinary wearable computing devices – the Watch and AirPods, leveraging Siri – a very different take than the smart speakers of Google Home and Amazon’s Alexa. This year’s update to Siri, for example, includes support for on-device translation as well as dictation.

Now extrapolate out just a bit farther…

My house has a lot of Apple devices in it – laptop, watch, AirPods, several iPads, and that’s just the stuff I use. My wife has a similar set. The wearable bits are far more constrained and with me all the time – but also not always able to do everything themselves. And sometimes, they just conflict with each other – especially when it comes to Siri. (Go ahead – say “Hey Siri” in my house, and hear the chorus of responses)

So what about collaboration and communication between these devices? It would be fantastic if they could share sufficient context to make the interactions even more seamless: a way to leverage the capabilities of a remote device (my phone, tablet, or even laptop) from the AirPods via a Siri request. They could potentially even hand off background tasks (like tracking a timer), or know which device has been used most recently to better infer context for a request. For example, when I’m cooking I often want a timer on my watch, not my phone – but “Hey Siri” is not at all guaranteed to get it there.

If they could also know about the various devices’ capabilities, and share those capabilities, the whole set would be even smarter and more effective. And depending on which rumors you’re excited by, they may be able to do some heavier computation for the more power-constrained devices (wearables) on nearby, but not physically connected (and power-efficient), microprocessors. That could be generating visuals like Widgets, or perhaps the inverse – running a machine learning model against transmitted LiDAR updates to identify independent objects and their traits from a point cloud or computed 3D mesh.

It’ll be interesting to see where this goes – I hope that distributed coordination, a means of allowing it (as a user), and a means of developing for it are somewhere in the near future.

Exploring MultipeerConnectivity

A few weeks ago, I got curious about the MultipeerConnectivity framework available across Apple’s platforms. It’s a neat framework, and there are community-based libraries that layer over it to make it easier to use for some use cases: MultipeerKit (src) being the one that stood out to me.

The promise of this framework is compelling: seamlessly enable peer-to-peer networking, layering over whatever local transport is available (Bluetooth, ethernet if available, a local wifi connection, or common wifi infrastructure). There’s a lot of “magic” in that capability, layering over the underlying technologies and dealing with the advertise-and-connect mechanisms. Some of it uses Bonjour (aka zeroconf), and I suspect other mechanisms as well.

One of the “quirks” of this technology is that you don’t direct which transport gets used, nor do you get information about the transport. You do get a nicely designed cascade of objects, all of which leverage the delegate/protocol structure to do their thing. Unfortunately, the documentation doesn’t make entirely clear how to use them and what to expect.

The structure starts with an advertiser, which is paired with an advertising browser. It wasn’t completely obvious to me at first, but you don’t need both sides of the peer-to-peer conversation advertising in order to make a connection. One side can advertise, the other browse, and you can establish a connection on that basis. It doesn’t need to be set up bi-directionally, although it can be.

When I first started in, having glanced through the developer docs, I thought you needed both sides actively advertising for this to work. Nope, bad assumption on my part. Both can advertise, and it makes for interesting viewing of “who’s out there” – but the heart of what enables data transfer is another layer down: MCSession.

You use the browser to “invite” a found peer to a session, and the corresponding advertiser has a delegate you use to accept invites.
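A minimal sketch of that arrangement, in one object that can play either role (the class name, display name, and service type here are placeholders of my own, not code from MPCF-TestBench):

import MultipeerConnectivity

final class PeerConnector: NSObject, MCNearbyServiceBrowserDelegate,
                           MCNearbyServiceAdvertiserDelegate {
    let peerID = MCPeerID(displayName: "sample-peer")
    lazy var session = MCSession(peer: peerID, securityIdentity: nil,
                                 encryptionPreference: .required)
    lazy var browser = MCNearbyServiceBrowser(peer: peerID, serviceType: "mpcf-test")
    lazy var advertiser = MCNearbyServiceAdvertiser(peer: peerID, discoveryInfo: nil,
                                                    serviceType: "mpcf-test")

    func start() {
        browser.delegate = self
        advertiser.delegate = self
        advertiser.startAdvertisingPeer()
        browser.startBrowsingForPeers()
    }

    // Browsing side: invite any peer the browser discovers into the session.
    func browser(_ browser: MCNearbyServiceBrowser, foundPeer peerID: MCPeerID,
                 withDiscoveryInfo info: [String: String]?) {
        browser.invitePeer(peerID, to: session, withContext: nil, timeout: 10)
    }

    func browser(_ browser: MCNearbyServiceBrowser, lostPeer peerID: MCPeerID) {}

    // Advertising side: accept the invitation, joining the shared session.
    func advertiser(_ advertiser: MCNearbyServiceAdvertiser,
                    didReceiveInvitationFromPeer peerID: MCPeerID,
                    withContext context: Data?,
                    invitationHandler: @escaping (Bool, MCSession?) -> Void) {
        invitationHandler(true, session)
    }
}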

The session (MCSession) is the heart of the communications from here. Session has an active side and a reactive side to its API – methods like send(_:toPeers:with:) pair with a session delegate responding via session(_:didReceive:fromPeer:). Before you get into sending data, however, you need to be connected.

While the browser allows you to invite, and the advertiser to accept, it is the session that gives you detail on what’s happening. Session is designed to do this through delegate callbacks to session(_:peer:didChange:), which is how you get state changes for connections, and information on whom you are connected to. The session state is a tri-state thing: notConnected, connecting, or connected. In my experience so far, you don’t spend very long in the connecting state, and state updates propagate pretty quickly (within a second or two) when the infrastructure changes – for example, when your iOS device goes to sleep, or you set the device into airplane mode. I haven’t measured exactly how fast, or how consistently, these updates propagate.
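Continuing the sketch from above, the session delegate is where those state changes arrive (the echo-the-data-back behavior mimics what the MPCF-TestBench reflector does):

extension PeerConnector: MCSessionDelegate {
    // (also set session.delegate = self alongside the other delegates in start())
    func session(_ session: MCSession, peer peerID: MCPeerID,
                 didChange state: MCSessionState) {
        switch state {
        case .notConnected: print("\(peerID.displayName): not connected")
        case .connecting:   print("\(peerID.displayName): connecting")
        case .connected:    print("\(peerID.displayName): connected")
        @unknown default:   break
        }
    }

    func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {
        // echo received data straight back to the sender
        try? session.send(data, toPeers: [peerID], with: .reliable)
    }

    // remaining required delegate methods, unused in this sketch
    func session(_ session: MCSession, didReceive stream: InputStream,
                 withName streamName: String, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String,
                 fromPeer peerID: MCPeerID, with progress: Progress) {}
    func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String,
                 fromPeer peerID: MCPeerID, at localURL: URL?, withError error: Error?) {}
}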

Session is a bi-directional communications channel, and once you are connected, either side can send data for the other to receive. Session also has the concept of not just sending Data, but of explicitly transferring (presumably larger) resources that are URL-based. I haven’t experimented with this layer, but I’m guessing it’s a bit more optimized for reading a large file and streaming it out. The session delegate has callbacks for when a transfer starts and when it completes.
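A resource transfer, by contrast, is a one-call affair. A sketch, assuming an established session and an already-connected peerID (the file URL and names are placeholders):

let fileURL = URL(fileURLWithPath: "/tmp/sample-payload.json") // placeholder
let progress = session.sendResource(at: fileURL,
                                    withName: "sample-payload",
                                    toPeer: peerID) { error in
    if let error = error {
        print("transfer failed: \(error)")
    } else {
        print("transfer complete")
    }
}
// sendResource returns a Foundation Progress you can observe for UI updates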

There’s a third transport mechanism, which uses open-ended streams, that I haven’t yet touched. When I started looking, I did find some older GitHub projects that tested the streams capability – how many simultaneous streams could be triggered and used effectively – but no published results. Alas, those projects were written with Swift 3, so while I poked at them out of curiosity, I mostly left them alone.

To explore MultipeerConnectivity, I created a project (available on GitHub) called MPCF-TestBench. The project is open source and available for anyone to compile and use themselves – but no promises on it all working correctly or looking anything close to good. (Contributions welcome, but certainly not expected.)

Fair warning: when I think of “open source”, it’s the messy sausage-making process, not the completed and pretty, cleaned, ready-to-use-no-work-involved library or application that is the desired end goal. Feel free to dig in to the source, ask questions if you like, improve it and share the changes, or use it to your own explorations – but demand anything and you’ll just get a snicker.

The project is an excuse to do something “heavier” in SwiftUI and see how things work – like how to get updates from more of these delegate heavy structures into the UI, and to see how SwiftUI translates across platforms. In short, it’s an excuse to learn.

All of this started several weeks ago when I poked Guilherme Rambo about MultipeerKit to see how fast it actually worked. He hadn’t made any explicit measurements, so I thought it might be interesting to do just that. To that end, MPCF-TestBench (crudely) cobbles together a reflector and a test runner with multiple targets (iOS, tvOS, and Mac). It’s also an excuse to see how SwiftUI translates across the platforms, but more on that in another (later) post. If you go to use this, I’d recommend sticking with the iOS targets for now, as that’s where I’m actively doing my development.

MPCF-TestBench work in progress

I have yet to do the work of trying out transmissions at various sizes, but you can get basic information for yourself with the code as it stands. The screenshot above was from a transmission from my iPhone X to a 10″ iPad Pro, leveraging my local wifi network, to which both were connected. The test sent 100 data packets of about 1K each, and a corresponding reflector on the iPad echoed the data back. No delay, just shoving them down a pipe as fast as I could – using the “reliable” transport mode.

I used a simple Codable structure that exports to JSON to dump out the data (although my export mechanism is only half-working, to be honest). Still, it’s far enough along to get a sample if you’re curious. Feel free to pick apart the list of JSON objects for your own purposes; I’ll be adding more and building a more thorough result set over time.

I haven’t yet been able to establish a Bluetooth-only connection – the peering mechanism isn’t making a connection, though it could easily be something stupid I’ve done.

So there you have it – my initial answer to “how fast is it”: I’m seeing about 3 Kbytes/sec transferred using the “reliable” transport mode, over wifi, between recent iOS devices. The transmissions appear reasonably stable as well – not a terrible amount of standard deviation in my simple tests.

How to make a SwiftUI component that draws a Shape with light

While I was experimenting with SwiftUI, one of the effects I wanted to re-create was taking a shape and making “stroke effects” on it. The goal was to create a SwiftUI component that could take an arbitrary path, apply this effect to the shape, and render it within the context of a larger SwiftUI view.

The effect I wanted to create was a “laser light” style of drawing. You see this a lot in science fiction film user interfaces and backgrounds, and it’s just kind of fun and neat. And yeah, I want it to be decent in both light and dark modes, although “dark mode” is where it will shine the most.

And the code to represent this is:

VStack {
    LaserLightShape(color: Color.orange, lineWidth: 1) {
        Rectangle()
    }

    LaserLightShape(color: Color.red, lineWidth: 2) {
        Circle()
    }

    LaserLightShape(color: Color.blue, lineWidth: 0.5) {
        Path { path in
            path.move(to: CGPoint(x: 0, y: 0))
            path.addLine(to: CGPoint(x: 50, y: 50))
        }
    }
}

Solving this challenge underscored the change in mindset from imperative code to declarative code. When I was looking through the SwiftUI methods, I kept looking for a method with a means to “add on” to the existing view – or perhaps to replace an element within the view. Both of those cases highlighted my pattern of thinking: me telling the framework (or library) what to do. SwiftUI’s biggest win (and challenge) is inverting that thinking.

The methods for working with SwiftUI Views don’t “take a view, modify it, and hand it back”. Instead, they take in some information, make a whole new View, and return it. There’s no “tweaking” or “changing” – the closest you get to that (imperative) paradigm is wholesale replacement. My natural instinct was to reach for something with an explicit side effect, I suspect because that’s how a lot of languages I’ve used for years got things done. It’s a pattern I’m familiar with, and the first tool I reach for.

This change in mindset is also why you’ll see a lot of the same people who “get” the new paradigm talking about how it overlaps with functional programming, using phrases like “pure functions”, and perhaps even “functors” and “monads”. I quickly get lost in many of those abstract concepts. For me, the biggest similarity is the inversion of control that was the mental leap in moving from using a library to writing for a framework. This feels very much akin to that ‘ah ha’ moment. And for the record, I’m not asserting that I fully understand it – only that I recognize I need to change how I frame the problem in my head in order to use these new tools.

To solve this particular challenge, I originally started looking at SwiftUI ViewModifiers. I’d been reading about them and thought maybe that’s what I wanted. Unfortunately, that didn’t work – ViewModifiers are great when you want to layer additional effects on a View, which means constraining what you do to the methods available and defined on the View protocol – but I wanted to work on a Shape, specifically leveraging the stroke method, which is a different critter.

The solution that I came up with uses a ViewBuilder. I wrote a bit about these before when talking about making PreviewBackground. The mental framing that led to this solution was thinking about what I wanted to achieve as taking some information (a Shape) and returning some kind of View.

A ViewBuilder is a significantly generic-heavy function. To make it accept something conforming to the Shape protocol, I constrained the generic type it accepted to that protocol. I’m still stumbling through the specifics of how to effectively use generics and protocols with Swift, but I thought this was a pretty nice way to show some of their strength.

Without further ado, here’s the code that produces the effect:

struct LaserLightShape<Content>: View where Content: Shape {
    let content: () -> Content
    let color: Color
    let lineWidth: CGFloat

    @Environment(\.colorScheme) var colorSchemeMode
    var blendMode: BlendMode {
        if colorSchemeMode == .dark {
            // lightens content within a dark
            // color scheme
            return BlendMode.colorDodge
        } else {
            // darkens content within a light
            // color scheme
            return BlendMode.colorBurn
        }
    }

    init(color: Color, lineWidth: CGFloat, @ViewBuilder content: @escaping () -> Content) {
        self.content = content
        self.color = color
        self.lineWidth = lineWidth
    }

    var body: some View {
        ZStack {
            // top layer, intended only to reinforce the color
            // narrowest, and not blurred or blended
            if colorSchemeMode == .dark {
                content()
                    .stroke(Color.primary, lineWidth: lineWidth / 4)
            } else {
                content()
                    .stroke(color, lineWidth: lineWidth / 4)
            }

            if colorSchemeMode == .dark {
                // pushes in a bit of additional lightness 
                // when in dark mode
                content()
                    .stroke(Color.primary, lineWidth: lineWidth)
                    .blendMode(.softLight)
            }
            // middle layer, half-width of the stroke and blended
            // with reduced opacity. re-inforces the underlying
            // color - blended to impact the color, but not blurred
            content()
                .stroke(color, lineWidth: lineWidth / 2)
                .blendMode(blendMode)

            // bottom layer - broad, blurred out, semi-transparent
            // this is the "glow" around the shape
            if colorSchemeMode == .dark {
                content()
                    .stroke(color, lineWidth: lineWidth)
                    .blur(radius: lineWidth)
                    .opacity(0.9)
            } else {
                // knock back the blur/background effects on
                // light mode vs. dark mode
                content()
                    .stroke(color, lineWidth: lineWidth / 2)
                    .blur(radius: lineWidth / 1.5)
                    .opacity(0.8)
            }
        }
    }
}

The instructions for what to do are embedded in the body of the view we return. The pattern is one I looked up from online references for how to make this same effect in Photoshop. The gist is:

  • You take the base shape, make a wide stroke of it, and blur it out a bit. This will end up being the “widest” portion of the effect.
  • Over that, you put another layer – roughly half the width of the bottom layer, and stroke it with the color you want to show.
  • And then you add a final top layer, narrowest of the set, intended to put the “highlight” or shine onto the result.

You’ll notice the code is also dynamic with regard to light and dark backgrounds – on a light background, I tried to reinforce the color without making the effect look like an unholy darkness shadow trying to swallow the result, and on a dark background I wanted a “laser light” shine to show through. I also found through trial-and-error experiments that it helped to have a fourth layer sandwiched in there in dark mode, specifically to brighten up the stack, adding more “white” to the end effect.

A lot of this ends up leveraging the blend modes from SwiftUI that composite layers. I’m far from a master of blend modes, and had to look up a number of primers to figure out what I wanted from the fairly large set of possibilities.

I suspect there is a lot more that can be accomplished by leveraging the ViewBuilder pattern.

Introducing and explaining the PreviewBackground package

While learning and experimenting with SwiftUI, I use the canvas assistant editor to preview SwiftUI views extensively. It is an amazing feature of Xcode 11 and I love it. There is a quirk that gets difficult for me though – the default behavior of the preview provider uses a gray background. I frequently use multiple previews while making SwiftUI elements, wanting to see my creation on a background supporting both light and dark modes.

The following little stanza is a lovely way to iterate through the color schemes and display them as previews:

#if DEBUG
struct ExampleView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ForEach(ColorScheme.allCases,
                    id: \.self) { scheme in

                Text("preview")
                    .environment(\.colorScheme, scheme)
                    .frame(width: 100,
                           height: 100,
                           alignment: .center)
                    .previewDisplayName("\(scheme)")
            }
        }
    }
}
#endif

Results in the following preview:

The gray background doesn’t help all that much here. It’s perfect when you’re viewing a fairly composed element set, since you’re often working over an existing background. But when you’re creating an element to stand alone, or moving an element, I really want a background behind the element.

And this is exactly what PreviewBackground provides. I made PreviewBackground into a SwiftPM package. While I could have created this effect with a ViewModifier, I tried it out as a ViewBuilder instead, thinking it would be nice to wrap the elements I want to preview explicitly.

The same example, using PreviewBackground:

import PreviewBackground

#if DEBUG
struct ExampleView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ForEach(ColorScheme.allCases,
                    id: \.self) { scheme in
                PreviewBackground {
                    Text("preview")
                }
                .environment(\.colorScheme, scheme)
                .frame(width: 100,
                       height: 100,
                       alignment: .center)
                .previewDisplayName("\(scheme)")
            }
        }
    }
}
#endif

The code is available on GitHub, and you may include it within your own projects by adding a Swift package with the URL: https://github.com/heckj/PreviewBackground

Remember to import PreviewBackground in the views where you want to use it, and work away!

Explaining the code

There are not many examples of using ViewBuilder to construct a view, and this is a simple use case. Here is how it works:

import SwiftUI

public struct PreviewBackground<Content>: View where Content: View {
    @Environment(\.colorScheme) public var colorSchemeMode

    public let content: () -> Content

    public init(@ViewBuilder content: @escaping () -> Content) {
        self.content = content
    }

    public var body: some View {
        ZStack {
            if colorSchemeMode == .dark {
                Color.black
            } else {
                Color.white
            }
            content()
        }
    }
}

The heart of using ViewBuilder is accepting it within a View’s initializer to return a (specific, but) generic instance of View, storing the returned closure as a property, and executing that closure when composing the view.

There is a lot of complexity in that statement. Allow me to try and explain it:

Normally when creating a SwiftUI view, you create a struct that conforms to the View protocol. This is written in code as struct SomeView: View. You may use the default initializer that Swift creates for you, or you can write your own – often to set properties on your view. ViewBuilder allows you to take a function in that initializer that returns an arbitrary View. But since the kind of view is arbitrary, we need to make the struct generic – we can’t assert exactly what type it will be until the closure is compiled. To tell the compiler it’ll need to do the work of figuring out the types, we label the struct as being generic, using the <SomeType> syntax:

struct SomeView<Content>: View where Content: View

This says there is a generic type that we’re calling Content, and that generic type is expected to conform to the View protocol. There is a more compact way to represent this that you may prefer:

struct SomeView<Content: View>: View

Within the view itself, we have a property, which we name content. The type of this content isn’t known up front – it’s the arbitrary type the compiler gets to infer from the closure that will be provided in the future. This declaration says the content property will be a closure – taking no parameters – that returns an arbitrary type we’re calling Content.

public let content: () -> Content

Then in the initializer, we use ViewBuilder:

public init(@ViewBuilder content: @escaping () -> Content) {
    self.content = content
}

In case it wasn’t obvious, ViewBuilder is a function builder, the Swift feature that enables this declarative structure with SwiftUI. This is what allows us to ultimately use it within that declarative syntax form.

The final bit of code to describe is using the @Environment property wrapper.

@Environment(\.colorScheme) public var colorSchemeMode

The property wrapper is not in common use, but it’s perfect for this need. It exposes a specific part of the existing environment as a local property for this view, which is what enables PreviewBackground to choose a background color appropriate to the mode. By reading the environment, it chooses an appropriately colored background. It then uses that property to assemble a view by invoking the property named content (which was provided by the function builder) within a ZStack.

By using ViewBuilder, we can use the PreviewBackground struct like any other composed view within SwiftUI:

var body: some View {
    PreviewBackground {
        Text("Hello there!")
    }
}

If we had created this code as a ViewModifier, then using it would look different – instead of the curly-bracket syntax, we would be chaining on a method. The default setup for something like that looks like:

var body: some View {
    Text("Hello there!")
        .modifier(PreviewBackground())
}

I wanted to enable the curly-bracket syntax for this, hence the choice of using a ViewBuilder.

A side note about moving code into a Swift package

When I created this code, I did so within the context of another project. I wanted to use it across a second project, and the code was simple enough (a single file) to copy/paste – but instead I went ahead and made it a Swift package. Partially to make it easier for anyone else to use, but also just to get a bit more experience with what it takes to set up and use this kind of thing.

The mistake I made immediately on moving the code was not explicitly making all the structs and properties public. It moved over, compiled fine, and everything looked great as a package – but when I went to use it, I got some really odd errors:

Cannot call value of non-function type 'module<PreviewBackground>'

In other instances (yes, I admit this wasn’t the first time I made this mistake – and it likely won’t be the last) the Swift compiler would complain about the scope of a function, letting me know that it was using the default internal scope and was not available. But SwiftUI and this lovely function builder mechanism make the compiler work quite a bit harder, and it is not nearly as good at identifying why this mistake might have happened – only that it is failing.

If you hit the error Cannot call value of non-function type when moving code into a package, you may have forgotten to make the struct (and relevant properties) explicitly public.

Four strategies to use while developing SwiftUI components

Let’s start out with the (possibly) obvious: when I code, I frequently make mistakes (and fix them); but while I am going through that process, function builders are frequently kicking my butt. When you are creating SwiftUI views, you use function builders intensely – and the compiler is often at a loss to explain how I screwed up. And yeah, that’s even with the amazing new updates to the Diagnostic Engine alongside Swift 5.2, which I am loving.

What is a function builder? It is the thing that looks like a normal “do some work” code closure in Swift, used as the declarative structure when you are creating a SwiftUI view. When you see code such as:

import SwiftUI

struct ASimpleExampleView: View {
    var body: some View {
        Text("Hello, World!")
    }
}

The bit after some View is the function builder closure, which includes the single line Text("Hello, World!").

The first mistake I make is assuming all closures are normal “workin’ on the code” closures, and I immediately start trying to put everyday code inside function builders. When I do, the compiler – often immediately and somewhat understandably – freaks out. The error message that appears in Xcode:

Function declares an opaque return type, but has no return statements in its body from which to infer an underlying type

And sometimes there are other errors as well – it really depends on what I stacked together, how I grouped and composed the various underlying elements in that top-level view, and ultimately what I messed up deep inside all that.

I want to do some calculations in some of what I am creating, but doing them inline in the function builder closures is definitely not happening, so my first recommended strategy:

Strategy #1: Move calculations into a function on the view

Most of the time I’m doing a calculation because I want to determine a value to hand to a SwiftUI view modifier – fiddling with the opacity, position, or perhaps line width. If you are really careful, you can do some of that (often simple) work inline. But when I do that work, I invariably screw it up – make a mistake matching a type, dealing with an optional, or something. And when that code is inline in a function builder closure, the compiler has a hell of a hard time figuring out what to tell me about how I screwed it up. By putting the relevant calculation/code into a function that returns an explicit type, the compiler gets a far more constrained place to provide feedback about what I got wrong.

As an example:

struct ASimpleExampleView: View {
    func determineOpacity() -> Double {
        1
    }

    var body: some View {
        ZStack {
            Text("Hello World").opacity(determineOpacity())
        }
    }
}

Sometimes you aren’t even doing calculations, and the compiler still gets into a tizzy about the inferred type being returned. I have barked my shins on that particular edge repeatedly while experimenting with the various options, seeing what I like in a visualization. The canvas assistant editor available in Xcode is a godsend for fast visual feedback, but I get carried away assembling lots of blocks with ZStacks, HStacks, and VStacks to see what I can do. This directly leads to my second biggest win:

Strategy #2: Ruthlessly refactor your views into subcomponents.

I am beginning to think that seeing multiple kinds of stacks repeated together in a single view is possibly a code smell. But more than anything else, keeping the code within a single SwiftUI view as brutally simple as possible gives the compiler a better-than-even chance of being able to tell me what I screwed up, rather than throwing up its proverbial hands with an inference failure.

There are a number of lovely mechanisms with Binding that make it easy to compose and link to the relevant data you want to use. When I am making a subcomponent that displays some information I expect the enclosing view to be tracking, I have started using the @Binding property wrapper to pass it in, which works nicely from the enclosing view.

TIP:

When you’re using @Binding, remember that you can make a constant binding in the PreviewProvider in that same file:

YourView(someValue: .constant(5.0))

While I was writing this, John Sundell published a very in-depth look at exactly this topic. His article Avoiding Massive SwiftUI Views covers another angle of how and why to ruthlessly refactor your views.

On the topic of the mechanics of that refactoring: once we learn what to do, it leads to leveraging Xcode’s canvas assistant editor with PreviewProvider – and my next strategy:

Strategy #3: Use Group and multiple view instances to see common visual options quickly

This strategy is more or less obvious, and was highlighted in a number of the SwiftUI WWDC presentations that are online. The technique is immensely useful when you have a couple of variations of your view that you want to keep operational. It allows you to visually make sure they are working as desired while you continue development. In my growing example code, this looks like:

import SwiftUI

struct ASimpleExampleView: View {
    let opacity: Double
    @Binding var makeHeavy: Bool

    func determineOpacity() -> Double {
        // maybe do some calculation here
        // mixing the incoming data
        opacity
    }

    func determineFontWeight() -> Font.Weight {
        if makeHeavy {
            return .heavy
        }
        return .regular
    }

    var body: some View {
        ZStack {
            Text("Hello World")
                .fontWeight(determineFontWeight())
                .opacity(determineOpacity())
        }
    }
}

struct ASimpleExampleView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ASimpleExampleView(opacity: 0.8, 
                makeHeavy: .constant(true))

            ASimpleExampleView(opacity: 0.8, 
                makeHeavy: .constant(false))
        }
    }
}

And the resulting canvas assistant editor view:

This does not always help you experiment with what your views look like in all variations, but for sets of pre-defined options, or data that influences your view, it can make a huge difference. A good variation I recommend anyone use: set and check the accessibility environment settings to make sure everything renders as you expect. Another that I have heard is in relatively frequent use: verifying localization rendering.

The whole rapid experimentation and feedback capability is what is so compelling about using SwiftUI. Which leads pretty directly to my next strategy:

Strategy #4: Consider making throw-away control views to tweak your visualization effects

I am not fortunate enough to constantly work closely with a designer. Additionally, I often do not have the foggiest idea of how some variations will feel in terms of a final design. When the time comes, seeing the results on a device (or on multiple devices) makes a huge difference.

You do not want to do this for every possible variation. That is where mocks fit into the design and development process – take the time to make them and see what you think. Once you have narrowed down your options to a few, then this strategy can really kick in and be effective.

In cases where I have a small number of variations to try out, I encapsulate those options into values that I can control. Then I make a throw-away view – one that will never be shown in the final code – that lets me tweak the view within the context of a running application. The whole thing goes into whatever application I am working on – macOS, iOS, etc. – and I see how it looks.

When I am making a throw-away control view, I often also make (another throw-away) SwiftUI view that composes the control and the controlled view together, as I intend to display it in the application. This is primarily to see the combined effect in a single preview within Xcode. The live controls are not active in the Xcode canvas assistant editor, but it helps to see how having the controls influences the rest of the view structure.

A final note: Do not be too aggressive about moving code into a SwiftPM package

You may (like me) be tempted to move your code into a library, especially with the lovely SwiftPM capabilities that now exist within Xcode 11. This does work, and from my initial experiments it functionally works quite well. But there is a significant downside while you are still doing active development on the library, at least with current versions (including Xcode 11.4 beta 3):

If you open the package to edit or tweak it with Xcode, and load and build from only Package.swift without an associated Xcode project, the SwiftUI canvas assistant preview will not be functioning. If you use an Xcode project file, it works fine – so if you do go down this route, just be cautious about removing the Xcode project file for now. I have filed feedback to Apple to report the issue – both with Xcode 11.3 (FB7619098) and Xcode 11.4 beta 3 (FB7615026).

I would not recommend moving anything into a library until you have it stable for your use case. There are also still some awkward quirks about developing code and a dependent library at the same time with Xcode. It can be done, but it plays merry havoc with Xcode’s automatic build mechanisms and CI.

SwiftUI and Combine – Binding, State, and notification of changes

When I started the project that became Using Combine, it was right after WWDC; I watched the streamed WWDC sessions online, captivated like so many others by SwiftUI. I picked up the idea that SwiftUI was “built using the new framework: Combine”. In my head, I thought that meant Combine managed all the data – notifications and content – for SwiftUI. And well, that ain’t so. While the original “built using Combine” is accurate, it misses a lot of detail, and the truth is a bit more complex.

After I finished my first run through drafting the content for Using Combine, I took some time to dig back into SwiftUI. I originally intended to write (and learn) more about it – in fact, SwiftUI is what started the whole segue into Combine. I hadn’t really tried to use SwiftUI seriously, or get into the details, until just recently. I realized that after all the work on examples for Combine and UIKit, I had completely shortchanged the SwiftUI examples.

Mirroring a common web technology pattern, SwiftUI works as a declarative structure of what gets shown, with the detail completely derived from some source of truth – state stored or managed somewhere. The introductory docs made it clear that @State was how this declarative mechanism could represent a bit of local state within a View, and with the benefit of Daniel and Paul’s writing (SwiftUI Kickstart and SwiftUI by Example), it was also quickly clear that @EnvironmentObject and @ObservedObject played a role there too.

The Combine link to SwiftUI, as it turns out, is really only about notifying the SwiftUI components that a model has changed, not at all about what changed. The key is the protocol from Combine: ObservableObject (Apple’s docs). This protocol, along with the @Published property wrapper, does the wonderful work of generating a Combine publisher – the default type of which is represented by the class ObservableObjectPublisher. In the world of Combine, it has a defined output and failure type: <Void, Never>. The heart of that Void output type is that the data that is changing doesn’t matter – only that a change is happening.
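A minimal model shows how little information actually flows through that publisher (the model here is a contrived example of my own):

import Combine

final class CounterModel: ObservableObject {
    @Published var count: Int = 0
}

let model = CounterModel()
// objectWillChange is an ObservableObjectPublisher: its Output is Void,
// so a subscriber learns only that a change is about to happen
let cancellable = model.objectWillChange.sink { _ in
    print("model is about to change")
}
model.count += 1 // fires objectWillChange before the property updates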

So how does SwiftUI go and get the data it needs? Binding is the SwiftUI generic structure used to create this linkage. The documentation at Apple asserts:

Use a binding to create a two-way connection between a view and its underlying model. For example, you can create a binding between a Toggle and a Bool property of a State. Interacting with the toggle control changes the value of the Bool, and mutating the value of the Bool causes the toggle to update its presented state.

You can get a binding from a State by accessing its binding property. You can also use the $prefix operator with any property of a State to create a binding.

https://developer.apple.com/documentation/swiftui/binding

Looking around a bit more while creating some examples, it becomes clear that some handy form elements (such as TextField) expect a parameter of type Binding when they are declared. Binding itself works by leveraging Swift’s property getters and setters. You can even manually create a Binding if you’re so inclined, defining the closures for get and set to whatever you like. Property wrappers such as @State, @ObservedObject, and @EnvironmentObject either create and expose a Binding, or create a wrapper that in turn passes back a Binding.
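Creating one of those manual Bindings looks like this – a sketch, with backingValue standing in for whatever storage you control:

import SwiftUI

var backingValue: Double = 0

let manualBinding = Binding<Double>(
    get: { backingValue },
    set: { newValue in backingValue = newValue }
)

manualBinding.wrappedValue = 5    // runs the set closure
print(manualBinding.wrappedValue) // runs the get closure: prints 5.0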

My takeaway is that the flow with Combine and SwiftUI has a generally expected pattern: a model represented by a reference object, which sends updates when the data is about to change (by conforming to the ObservableObject protocol). SwiftUI goes and gets the data it needs based on what was declared in the View, using Binding to get to the underlying data (and potentially allowing the SwiftUI view to update it in turn, if that’s relevant).

Given that SwiftUI views are also designed to be composed, I am leaning towards expecting a pattern where state will need to be defined for pretty much any variation of a view – and potentially externalized. The property wrappers for representing, and externalizing, state within SwiftUI are:

  • @State
  • @ObservedObject and @Published
  • @EnvironmentObject

@State is all about local representation, and is the simplest mechanism – simply providing a link to a property and the Binding.

@ObservedObject (along with @Published) adds a notification mechanism on change, as well as a way to get a typed Binding to properties on the model. SwiftUI’s mechanism expects this to always be a reference type (aka a ‘class’), which ends up being pretty easy to define in code.

@EnvironmentObject takes that a step further and exposes a reference model not just to a single view, but allows it to be used by any number of views in their own hierarchy.

The common thread across all three: drive most of the visual design choices entirely from the current state.

But that’s not the only mechanism available: SwiftUI is also set up to react to a Combine publisher – although not in a heavily predetermined fashion. An interesting aspect is that all SwiftUI views also support a Combine subscriber: onReceive. So you can bring your own publisher, and then write code within a View (or View component) to react to what it sends.

The onReceive subscriber acts very similarly to Combine’s sink subscriber – the single-closure version of sink (implying a Combine pipeline failure type of Never). You define a closure within your SwiftUI view element that accepts the data and does whatever needs doing. This could be using the data, transforming and storing it into local @State, or just reacting to the fact that data was sent and updating the view based on that.
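A small sketch of that shape – a contrived view of my own that stores each value a timer publisher sends into local @State:

import SwiftUI
import Combine

struct ClockView: View {
    @State private var now = Date()
    private let ticker = Timer.publish(every: 1, on: .main, in: .common)
        .autoconnect()

    var body: some View {
        Text("\(now)")
            .onReceive(ticker) { timestamp in
                // react to the published value by updating local state
                now = timestamp
            }
    }
}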

From a “what is a best practice” point of view, it seems the more you represent what you want to display within a reference model, the easier it will be to use. While you can expose a publisher right into a SwiftUI view, doing so tightly couples the Combine publisher to the view and links all the relevant types. You could (likely just as easily) have the model object encapsulate that detail – in which case the declaration of how you handle event changes over time is separated from how you present the view. This is likely a better separation of concerns.

The project (SwiftUI-Notes) linked to Using Combine now has two examples with Combine and SwiftUI. The first is a simple form validation (the view ReactiveForm.swift and model ReactiveFormModel.swift). It uses both the pattern of encapsulating the state within the model and exposing a publisher to the SwiftUI View to show what can be done. I’m not espousing that the publisher mechanism is a good way to solve that particular problem, but it illustrates nicely what can be done.

The second example is a view (HeadingView.swift) that uses a model and publisher I created around the built-in CoreLocation framework. The model (LocationModelProxy.swift) exposes the authorization as a published property, as well as the location updates through a publisher. Within the built-in Cocoa framework, those are normally exposed through delegate callbacks. A large number of the existing Cocoa frameworks are convertible into a publisher-based mechanism for working with Combine using this pattern. The interesting bit was linking it up to SwiftUI, which was fun – although this example only taps the barest possibility of what could be done.

It will be interesting to see what Apple might provide in terms of adopting Combine as alternative interfaces to its existing frameworks. CoreLocation is such a natural choice with its streaming updates, but there are a lot of others that could be used as well. And of course I’m looking forward to seeing how they expand on SwiftUI – and if they bring in more Combine based mechanisms into it or not.

Commodity and fashion with SwiftUI

I’m only just starting to dig into the new declarative UI framework that Apple announced at WWDC this year: SwiftUI. But already there are a few patterns emerging that will be fascinating to watch in the Mac & iOS development community in the coming months.

The huge benefit of SwiftUI is that, as a declarative framework, the process of creating and writing UI interfaces across Apple’s platforms should be far more consistent. I fully expect that best practices will get tried and shared, good ideas swiftly copied – and culturally, the community will coalesce around some common forms.

This is in tension with what Apple (and its developers) has tended to celebrate in the past decade: independence and creative expression. UI elements and design have been following a fashion pattern, with the farthest-reaching elements of that design being nearly unusable as those experiments pushed the boundaries beyond what was intuitively understood in user experiences. Sometimes the limits pushed so far as to not even be explorable.

Color, layout, and graphic design are all clearly customizable with SwiftUI. I also expect that some of the crazier innovations (such as the now-common “pull to refresh” gesture) will become significantly harder to enable from declarative structures. By its very nature, the declarative structure makes the common, well-established UI elements easy and accessible – so much so that I wouldn’t be surprised to see a lot of early SwiftUI apps “all look alike”. I expect the impact of “all looking alike” to drive a number of iOS dev teams a bit nuts.

The “escape hatches” to do crazy things clearly do exist – and while I haven’t reached that level of learning with SwiftUI, it does seem to follow the “make easy things simple, and hard things possible” concept of progressive disclosure.

It will be interesting to see what happens this fall when the first SwiftUI apps become available as experiments and where that takes consistency and usability on the Apple platforms.