Four strategies to use while developing SwiftUI components

Let's start with the (possibly) obvious: when I code, I frequently make mistakes (and fix them), but while I am going through that process, function builders are frequently kicking my butt. When you are creating SwiftUI views, you use function builders intensely, and the compiler is often at a loss to explain how I screwed up. And yeah, that is true even with the amazing new updates to the Diagnostic Engine that shipped alongside Swift 5.2, which I am loving.

What is a function builder? It is the thing that looks like a normal “do some work” closure in Swift, but acts as the declarative structure when you are creating a SwiftUI view. When you see code such as:

import SwiftUI

struct ASimpleExampleView: View {
    var body: some View {
        Text("Hello, World!")
    }
}

The bit after some View is the function builder closure, which in this case contains the single line Text("Hello, World!").

The first mistake I make is assuming all closures are normal “workin’ on the code” closures, and I immediately start trying to put everyday code inside function builders. When I do, the compiler – often immediately and somewhat understandably – freaks out. The error message that appears in Xcode:

Function declares an opaque return type, but has no return statements in its body from which to infer an underlying type

And sometimes there are other errors as well. It really depends on what I stacked together, how I grouped and composed the various underlying elements in that top-level view, and ultimately what I messed up deep inside all that.

I want to do some calculations in some of what I am creating, but doing them inline in the function builder closures is definitely not happening, so here is my first recommended strategy:

Strategy #1: Move calculations into a function on the view

Most of the time I am doing a calculation, it is because I want to determine a value to hand to a SwiftUI view modifier – fiddling with the opacity, position, or perhaps line width. If you are really careful, you can do some of that work – often the simple stuff – inline. But when I do that work, I invariably screw it up: make a mistake matching a type, dealing with an optional, or something. When that code is inline in a function builder closure, the compiler has a hell of a hard time figuring out what to tell me about how I screwed it up. By putting the relevant calculation into a function that returns an explicit type, the compiler gets a far more constrained place to provide feedback about what I got wrong.

As an example:

struct ASimpleExampleView: View {
    func determineOpacity() -> Double {
        1
    }

    var body: some View {
        ZStack {
            Text("Hello World").opacity(determineOpacity())
        }
    }
}

Sometimes you aren’t even doing calculations, and the compiler still gets into a tizzy about the inferred type being returned. I have barked my shins on that particular edge repeatedly while experimenting with all the various options, seeing what I like in a visualization. The canvas assistant editor that is available in Xcode is a godsend for fast visual feedback, but I get carried away assembling lots of blocks with ZStacks, HStacks, and VStacks to see what I can do. This leads directly to my second biggest win:

Strategy #2: Ruthlessly refactor your views into subcomponents.

I am beginning to think that seeing multiple kinds of stacks repeated together in a single view is possibly a code smell. But more than anything else, keeping the code within a single SwiftUI view as brutally simple as possible gives the compiler a better-than-even chance of being able to tell me what I screwed up, rather than throwing up its proverbial hands with an inference failure.

There are a number of lovely mechanisms with Binding that make it easy to compose and link to the relevant data that you want to use. When I am making a subcomponent that displays some state that I expect the enclosing view to be tracking, I have started using the @Binding property wrapper to pass it in, which works nicely from the enclosing view.
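
As a minimal sketch of that pattern (the TemperatureDial name and its value are hypothetical, just for illustration), the subcomponent declares an @Binding, and the enclosing view passes its state in with the $ prefix:

import SwiftUI

struct TemperatureDial: View {
    // The subcomponent does not own this value - it reads and
    // writes state that the enclosing view is tracking.
    @Binding var temperature: Double

    var body: some View {
        VStack {
            Text("\(temperature, specifier: "%.0f")°")
            Slider(value: $temperature, in: 0...100)
        }
    }
}

struct EnclosingView: View {
    @State private var temperature: Double = 72

    var body: some View {
        // The $ prefix projects the @State property as a Binding.
        TemperatureDial(temperature: $temperature)
    }
}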

TIP:

When you’re using @Binding, remember that you can make a constant binding in the PreviewProvider in that same file:

YourView(someValue: .constant(5.0))

While I was writing this, John Sundell published a very in-depth look at exactly this topic. His article Avoiding Massive SwiftUI Views covers another angle on how and why to ruthlessly refactor your views.

On the topic of the mechanics of that refactoring: once you learn what to do, it leads to leveraging Xcode’s canvas assistant editor with PreviewProvider – and my next strategy:

Strategy #3: Use Group and multiple view instances to see common visual options quickly

This strategy is more or less obvious, and was highlighted in a number of the SwiftUI WWDC presentations that are online. The technique is immensely useful when you have a couple of variations of your view that you want to keep operational. It allows you to visually make sure they are working as desired while you continue development. In my growing example code, this looks like:

import SwiftUI

struct ASimpleExampleView: View {
    let opacity: Double
    @Binding var makeHeavy: Bool

    func determineOpacity() -> Double {
        // maybe do some calculation here
        // mixing the incoming data
        opacity
    }

    func determineFontWeight() -> Font.Weight {
        if makeHeavy {
            return .heavy
        }
        return .regular
    }

    var body: some View {
        ZStack {
            Text("Hello World")
                .fontWeight(determineFontWeight())
                .opacity(determineOpacity())
        }
    }
}

struct ASimpleExampleView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ASimpleExampleView(opacity: 0.8, 
                makeHeavy: .constant(true))

            ASimpleExampleView(opacity: 0.8, 
                makeHeavy: .constant(false))
        }
    }
}

And the resulting canvas assistant editor view:

This does not always help you experiment with what your views look like in all variations, but for sets of pre-defined options, or data that influences your view, it can make a huge difference. One variation I recommend everyone try: setting the accessibility environment values to make sure everything renders as you expect. Another that I have heard is in relatively frequent use: verifying localization rendering.
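
As a sketch of what I mean, reusing the same ASimpleExampleView from above, you can stamp out preview instances with different environment overrides inside a Group:

struct ASimpleExampleView_VariationPreviews: PreviewProvider {
    static var previews: some View {
        Group {
            // Check an accessibility text-size variation
            ASimpleExampleView(opacity: 0.8, makeHeavy: .constant(true))
                .environment(\.sizeCategory, .accessibilityExtraLarge)

            // Check rendering under another locale
            ASimpleExampleView(opacity: 0.8, makeHeavy: .constant(false))
                .environment(\.locale, Locale(identifier: "de"))
        }
    }
}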

The whole rapid experimentation and feedback capability is what is so compelling about using SwiftUI, which leads pretty directly to my next strategy:

Strategy #4: Consider making throw-away control views to tweak your visualization effects

I am not fortunate enough to constantly work closely with a designer. Additionally, I often do not have the foggiest idea of how some variations will feel in terms of a final design. When the time comes, seeing the results on a device (or on multiple devices) makes a huge difference.

You do not want to do this for every possible variation. That is where mocks fit into the design and development process – take the time to make them and see what you think. Once you have narrowed down your options to a few, then this strategy can really kick in and be effective.

In the cases where I have a small number of variations to try out, I encapsulate those options into values that I can control. Then I make a throw-away view – one that will never be shown in the final code – that lets me tweak the view within the context of a running application. Then the whole thing goes into whatever application I am working on – macOS, iOS, etc. – and I see how it looks.

When I am making a throw-away control view, I often also make (another throw-away) SwiftUI view that composes the control and controlled view together, as I intend to display it in the application. This is primarily to see the combined effect in a single Preview within Xcode. The live controls are not active in the Xcode canvas assistant editor, but it helps to see how having the controls influences the rest of the view structure.
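
A rough sketch of what that harness can look like, again reusing the ASimpleExampleView from earlier – the throw-away controls and the controlled view composed into one container:

struct ThrowawayTweakingView: View {
    // State owned by the harness, fed into the view under development
    @State private var opacity: Double = 1.0
    @State private var makeHeavy = false

    var body: some View {
        VStack {
            ASimpleExampleView(opacity: opacity, makeHeavy: $makeHeavy)
            Divider()
            // Throw-away controls - never shipped, just for tuning
            Slider(value: $opacity, in: 0...1)
            Toggle("Heavy font", isOn: $makeHeavy)
        }
        .padding()
    }
}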

A final note: Do not be too aggressive about moving code into a SwiftPM package

You may (like me) be tempted to move your code into a library, especially with the lovely SwiftPM capabilities that now exist within Xcode 11. From my initial experiments this does work, and works quite well. But there is a significant downside, at least with the current versions (including Xcode 11.4 beta 3), while you are still doing active development on the library:

If you open the package to edit or tweak it with Xcode, and load and build from only Package.swift without an associated Xcode project, the SwiftUI canvas assistant preview will not function. If you use an Xcode project file, it works fine – so if you do go down this route, just be cautious about removing the Xcode project file for now. I have filed feedback with Apple to report the issue – with both Xcode 11.3 (FB7619098) and Xcode 11.4 beta 3 (FB7615026).

I would not recommend moving anything into a library until you have it stable, in any case. There are also still some awkward quirks about developing code and a dependent library at the same time with Xcode. It can be done, but it plays merry havoc with Xcode’s automatic build mechanisms and CI.

SceneKit interaction handling – Experiment439

A staple of science fiction movies has been 3D holographic visualizations and controls. Most efforts I’ve seen at taking real visualization data and putting it into a 3D context haven’t been terribly successful. At the same time, the advance of AR and VR makes me suspect that we should be able to take advantage of the additional dimension in displaying and visualizing data.

I started a project, named Experiment439, to go through the process of creating and building a few visualizations, seeing what I can do with them, and seeing what might be refined out into a library that can be re-used.

I wanted to take a shot at this by leveraging Apple’s SceneKit 3D abstraction and see how far I could get.

The SceneKit abstraction and organization for scenes is a nice setup, although it’s weak in one area – delegating interaction controls.

The pattern I’m most familiar with is the view controller setup (and its many variants, depending on how you display data). Within SceneKit, an SCNNode can encapsulate other nodes (and controls their overall placement in the view), so it makes a fairly close analogue to the embedding of views within each other that I’m familiar with from iOS and macOS development. Coming up with something that encapsulates and controls an SCNNode (or set of SCNNodes) seems like a pretty doable (and useful) abstraction.

The part that gets complicated quickly is handling interaction. User-invoked events in SceneKit today are limited to projecting hit-tests from the perspective of the camera that’s rendering the scene. In the case of AR apps on iOS, for example, the camera can be navigating the 3D space, but when you want to select, move, or otherwise interact, you’re fairly constrained to mapping touch events projected through the camera.
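
The basic pattern, sketched here from memory (the gesture wiring is assumed to be set up elsewhere): take a touch location in the SCNView, hit-test it, and get back the nodes under that screen point from the camera’s perspective.

import SceneKit
import UIKit

class InteractionHandler {
    // Assumes a UITapGestureRecognizer has been attached to the SCNView
    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        guard let scnView = gesture.view as? SCNView else { return }
        let location = gesture.location(in: scnView)
        // Projects a ray from the camera through the touch point
        let hits = scnView.hitTest(location, options: nil)
        if let first = hits.first {
            print("tapped node: \(first.node.name ?? "unnamed")")
        }
    }
}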

I’ve seen a few iOS AR apps that use the camera’s positioning as a “control input” – painting or placing objects where the iOS camera is positioned as you move about an AR environment.

You can still navigate a 3D space and scene, and see projected data – both 2D and 3D – very effectively, but coming up with equivalents to the control interactions you get in Mac and iOS apps has been significantly trickier.

A single button that gets toggled on/off isn’t too bad, but as soon as you step into the world of trying to move a 3D object through the perspective of the camera – shifting a slider or indicating a range – it gets hellishly complex.

With Apple’s WWDC 2019 around the corner (tomorrow, as I publish this) and the rumors of significant updates to AR libraries and technologies, I’m hoping that there may be something to advance this space and make this experiment a bit easier – and, even more, to expand the capabilities of interacting with the displayed environment.

iOS AR apps today are a glorified window into a 3D space – amazing and wonderful, but heavily constrained. That window lets me navigate around visualization spaces more naturally than anything pinned to a desktop monitor, but at the cost of physically holding the device that you would also use to interact with the environment. I can’t help but feel a bit of jealousy for the VR controllers that track in space, most recently with the glowing reviews of the Valve Index VR controllers.

Better interaction capabilities of some kind will be key to taking AR beyond nifty-to-see-but-not-entirely-useful games and windows on data. I’m hoping to see hints of what might be available or coming in the Apple ecosystem in the next few days.

Meanwhile, there is still a tremendous amount to be done to make visualizations and display them usefully in 3D. A lot of the inspiration for the current structure of my experiment has come from Mike Bostock‘s amazing D3.js library, which has been so successful in helping people create effective data visualization and exploration tools.

iOS 12 DevNote: Embedded Swift Frameworks and bitcode

A side project for the baristas at my favorite haunt has been a fun “getting back into it” programming exercise for iOS 12. It’s a silly, simple app that checks the status of the network and whether the local Wi-Fi router is accessible, and provides some basic diagnostics and suggestions for the gang behind the counter.

It really boils down to two options:

    Yep, probably a good idea to restart that Wi-Fi router
    Nope, you’re screwed – the internet problem is upstream and there’s nothing much you can do but wait (or call the Internet service provider)

It was a good excuse to try out the new Network.framework, and specifically NWPathMonitor. In addition to the overall availability, I wanted to report on whether a few specific sites the shop often uses were responding, and on top of that I wanted to poke at the local Wi-Fi router itself to make sure we could “get to it”, and then make recommendations from there.
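
The NWPathMonitor part is pleasantly small. A minimal sketch of the availability check (the queue label is arbitrary):

import Foundation
import Network

let monitor = NWPathMonitor()
monitor.pathUpdateHandler = { path in
    if path.status == .satisfied {
        // We have some route to the network
        if path.usesInterfaceType(.wifi) {
            print("network available over Wi-Fi")
        } else {
            print("network available")
        }
    } else {
        print("no network path available")
    }
}
monitor.start(queue: DispatchQueue(label: "netmonitor"))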

As I dug into things, I ended up deciding to use a Swift framework, BlueSocket, with the idea that if I could open a socket to the Wi-Fi router, then I could reasonably assume it was accessible. I could have used Carthage or CocoaPods, but I wanted to specifically try using git submodules for the dependencies, just to see how that could work.
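
The reachability check itself boils down to a few lines. A sketch of the idea, with a hypothetical router address and port standing in for the real ones:

import Socket  // BlueSocket

func routerIsReachable(address: String = "192.168.1.1", port: Int32 = 80) -> Bool {
    do {
        let socket = try Socket.create(family: .inet, type: .stream, proto: .tcp)
        defer { socket.close() }
        // If the TCP connect succeeds, assume the router is accessible
        try socket.connect(to: address, port: port)
        return true
    } catch {
        return false
    }
}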

With Xcode 10, the general mechanism of dragging in a sub-project and binding it in works extremely easily and well; the issues really didn’t hit until I tried to get something up to the iOS App Store for TestFlight.

The first thing I encountered was that the sub-projects had a variable for CFBundleVersion, $(CURRENT_PROJECT_VERSION), that apparently wasn’t getting interpolated and set when it was built as a subproject. I ended up making a fork of the project and hard-coding the Info.plist with the specific version. Not ideal, but at least tractable. I’m really hoping that this coming WWDC shows some specific Xcode/iOS integration improvements when it comes to Swift Package Manager. Sometimes the Xcode build process can be very “black box”, and it would be really nice to have a clearer integration point for external dependencies.
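
For reference, this is the shape of the plist entry in question – the variable that wasn’t getting interpolated, and the hard-coded workaround (the version number here is illustrative):

<!-- Before: relies on build-setting interpolation -->
<key>CFBundleVersion</key>
<string>$(CURRENT_PROJECT_VERSION)</string>

<!-- After: hard-coded in my fork -->
<key>CFBundleVersion</key>
<string>1.0.2</string>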

The second issue was a real stumper – even though everything was validating locally for a locally built archive, the App Store was denying it. The message that was coming back:

Invalid Bundle – One or more dynamic libraries that are referenced by your app are not present in the dylib search path.

Invalid Bundle – The app uses Swift, but one of the binaries could not link to it because it wasn’t found. Check that the app bundles correctly embed Swift standard libraries using the “Always Embed Swift Standard Libraries” build setting, and that each binary which uses Swift has correct search paths to the embedded Swift standard libraries using the “Runpath Search Paths” build setting.

I dug through all the linkages with otool, and everything was looking fine. Finally a Google trawl turned up a question on StackOverflow. Near the bottom there was a suggestion to disable bitcode (which is on by default when you upload an iOS archive). I gave that a shot, and it all flowed through brilliantly.

I can only guess that when you’re doing something with compiled-from-Swift dylibs, the bitcode process does something that the App Store really doesn’t like. It probably wouldn’t be a problem without the frameworks (with all the code directly in the project), but with the frameworks in my project, bitcode needed to be turned off.
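
If you would rather flip it in the target than in the upload dialog, the build setting involved (as I understand it) is:

ENABLE_BITCODE = NO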

Made it through all that, and now it’s out being tested with TestFlight!

El Diablo Network Advisor

Vapor 3 and a few random experiments

This past week I dug more deeply into server-side Swift, specifically with Vapor 3. Vapor was interesting because it was recently rebuilt over SwiftNIO, and initial reports of its performance were very positive. A highly performant HTTP application framework in a memory-safe language? Worth a look!

I have used dynamically typed languages (NodeJS/TypeScript/Javascript and Python) for quite a while, so the biggest shock was transferring back to the constraints of a strictly typed language. This cascades into how the software is represented at a lot of levels; re-acquainting myself with “classes, structs, and enums” was the hardest part of the transfer. The piece that feels the weakest (compared to other languages and frameworks) is testing: the dynamic languages’ test tooling exploits the full dynamism of those languages, and that is brutally missing from Swift. I have become immensely spoiled using testing frameworks like Jasmine, or Mocha and Chai with supertest over express, which make for super-easy-to-read tests that exercise the code directly.

Speaking of BDD, I took a day’s detour into trying Quick and Nimble, but in the end decided it was more pain than value, and that leveraging XCTest, even if writing tests with it felt stunted and somewhat awkward, was the more robust path. XCTest was particularly painful to work with in server-side Swift (it seems far more robust with iOS projects): without reflection, getting XCTest to identify what to run on Linux is atrocious. Then I found the SwiftPM command that collects the tests for Linux:

swift test --generate-linuxmain

that won the day, and it was back to XCTest.

Vapor 3 itself was very straightforward, although the docs are very rough, and in some cases downright useless. There are multiple points where extensions to Vapor (WebSocket, Auth, and so forth) are not clear on how you attach and use them in their templates and sample code. Fortunately the community (on Discord rather than Slack, and Vapor on StackOverflow) makes up the difference. The developers who are actively pushing Vapor forward, as well as community members, are very accessible and willing to answer questions.
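
“Straightforward” here means the core really is small. A minimal route, along the lines of the standard Vapor 3 template’s routes.swift, looks like this:

import Vapor

// Register the application's routes
public func routes(_ router: Router) throws {
    // GET /hello
    router.get("hello") { req in
        return "Hello, world!"
    }
}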

As I mentioned earlier, I’m finding the idioms of programming with Swift the hardest part to get my head around. It is a very different way of thinking about the problem and how to solve it, tending to be fully specified at every level. Structs, extensions, and enums make up most of the structures, often spread across lots of smaller files in the examples that I’ve been seeing so far.

While it’s very straightforward to read and understand, I find myself struggling to know where to look things up, and how to read the documentation to get what I need out of it. Even Apple’s documentation seems significantly weaker than in years past. There’s a new style to the documentation that I’m struggling to learn; the ability to read the docs and know which enumeration options should be used, how, and when is definitely a challenge. It is often Q&A and samples on StackOverflow that provide the closing hints, or show how to use the code in any holistic way.

On the good side, running Vapor from Xcode on my laptop was a gem, and I was immediately enthralled with the CPU and memory tracking details that you can see while running the code locally. I haven’t fully explored what you can do, but even just seeing the live CPU and memory gauges on the Vapor application while it’s running is wonderful. In other environments, there would be a lot of infrastructure setup to capture that same level of immediate detail; with Xcode it’s just built in.

CPU spikes when running “ab” load testing
memory usage over time with the same “ab” load testing

Vapor makes it easy to leverage Xcode, wrapping the SwiftPM tool commands so you can invoke something akin to:

vapor xcode -y

This will regenerate an Xcode project file and open it. Vapor projects also include a number of examples of wrapping the code into a container to run however you like, and the next version of Vapor (4, in development now) will have some “polite shutdown” signal handlers for SIGINT and SIGTERM, which will make it work better with orchestration systems like Kubernetes.

I have this perverse idea of wanting to run this same code that I can put into a container on an iOS device for a quick-shot “mobile server”. Yes, I know there are issues with iOS and activating the relevant devices through SwiftNIO, but the idea of having my own portable server as an iOS app is really appealing.

Vapor 3 is all based on Swift Package Manager, for which there’s no direct Xcode support (yet). It looks like it may be possible to use Xcode’s cross-project linking to have an SPM-based Xcode project working with a more classic iOS one, using the project as a dependency. There’s an article on how this can work called Bringing Swift NIO to the iPhone, and a similar reference, a walk-through how-to, in the Swift forums. I haven’t wrapped my sample Vapor 3 project into an iOS application yet, but I’ll be giving that a shot shortly.

Tweaking Xcode build configurations (aka Info.plist is missing)

When I start a new project in Xcode, it always sort of bugs me that so many things are tossed into the main project directory, and I suspect I’m not alone there. One of the first things I do is start organizing: two folders immediately get created, “resources” and “src”. I throw Info.plist and the English.lproj bits into resources, and main.m and the precompiled headers into src. No big deal, right? Well, sorta.

As soon as you do this, the Xcode project will whine a bit at you (whining in this case takes the form of the files in the project browser getting listed in red).

Xcode complaining about missing files

This part is very easy to resolve: just click on the file and choose Get Info. Once you’ve got that up, click on the “Choose…” button and you’ll get a standard file navigation window. Navigate into the directory you’ve stashed the file in and select it.

Choosing a new file location with Get Info

This all worked great for me, and I thought I had everything settled until I tried to build… and damn, it didn’t work. The message I got back (after a little quick digging in the build results window) was:

error: The file “Info.plist” does not exist.

Damned annoying message, because I know it exists – I just moved it.

The resolution really isn’t that bad, but it can be confusing. There is a build configuration setting, “Info.plist File”, that you can set – but where you really want to set it is on the target, not the project. It turns out you can set it in both places, and when I set it on just the project, it didn’t fix the problem. To resolve it, find your build target and choose “Get Info” there. Click on the “Build” tab and find the configuration setting (the built-in search mechanism is a joy here).

Setting the build configuration

You’ll see in the above example that Info.plist is sort of highlighted. Just double-click on the value, change it to (in my case) resources/Info.plist, and click “OK”. If you’ve moved your precompiled headers, you’ll want to make a similar adjustment to the “Prefix Header” setting.
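
For reference, the two underlying build settings, as they would appear in an xcconfig file with my directory layout, are:

// My layout; the .pch filename is a stand-in for whatever yours is called
INFOPLIST_FILE = resources/Info.plist
GCC_PREFIX_HEADER = src/MyProject_Prefix.pch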

Once you’ve got that completed, you’re back in action and can compile your code up without issue. And yeah, for the record – I fully expect that I’ll forget this and come looking for it again in another 9 months or something. I just hope I can find it then…