Hosting your Swift Library Docs on Github Pages

The beta for Xcode 13.3 dropped yesterday. With it came a release of Swift 5.6 and a bunch of neat additions that the 5.6 release enables. The feature I was watching closely was two-fold: the capability for plugins to extend the commands available within Swift's package manager, and a static hosting option added to DocC, the documentation compiler tool that Apple announced and open sourced this past year.

The two elements came together with the initial release of swift-docc-plugin, a Swift Package Manager plugin that adds the ability to generate documentation on the command line with the command swift package generate-documentation. Before I go any further, I should note that when the DocC team put this together, they knew a lot of folks wanted to host their content as static HTML – so they took the time to document HOW to do that, and best of all, they hosted _that_ documentation on GitHub Pages (I'm assuming using their own tools to do it). The articles, all of which are sourced from the content in the git repository, are hosted on GitHub Pages:

This works, and quite nicely – I pushed up documentation for the Lindenmayer library I’ve been working on using the following command:

swift package \
    --allow-writing-to-directory ./docs \
    generate-documentation \
    --target Lindenmayer \
    --output-path ./docs \
    --emit-digest \
    --disable-indexing \
    --transform-for-static-hosting \
    --hosting-base-path 'Lindenmayer'

BUT… there are some quirks you should be aware of.

First, the ability to statically host the content does NOT mean that it is static content. DocC is a different take on managing documentation from almost all of the similar, prior tools. Where Doxygen, JavaDoc, Sphinx, Hugo, AsciiDoc and others read through the source and generate HTML, that is not what DocC is doing.

While the content dumped out for static hosting ultimately includes HTML, DocC relies on two other critical components to do its work. It starts with the compiler generating what's called a symbol graph: a file that contains all the symbols in your source code (types, properties, type aliases, etc.) and the relationships between them. DocC then tweaks and adjusts that graph of symbols, and more specifically mixes it with additional (authored, not automatic) markdown files from a directory called the documentation catalog. If the markdown files in the documentation catalog, or the original source, don't provide content or structure for the relationships in the symbol graph, then DocC builds up a default set and attempts to provide a default structure. For what it's worth, this default content set is where the dreaded "No Overview Available" comes from.

The resulting combination is serialized into a bunch of JSON files, stored in the filesystem. Those JSON files contain the information needed to render the content – but the core of that rendering happens in another project, a Vue-based JavaScript single-page app: swift-docc-render.

The JSON files allow this documentation content to be more easily consumed by more than just browsers. I think it's a reasonable assumption that this is what drives and enables the Xcode quick-help summary information.

Back to DocC and static hosting – what this means is that the content that gets dumped into the "docs" directory isn't just plain HTML – it's the JavaScript and all the associated content as well. The effect is that while it LOOKS like you could just pop open the index.html in that directory and see your content, that's not going to work. Instead you'll just get a blank page, and if you happen to look at the JavaScript console, you'll see errors reporting that the requested URL wasn't found on this server.

It also means that the content isn't available at the root of the GitHub Pages site you pushed. Going to the root of my Lindenmayer project's pages directly doesn't show anything; instead you need to go a couple of directories down, to documentation/lindenmayer. Also note that the name of the library is lower-cased – the first thing I tried used the original casing, which didn't work either.

The key thing is to be aware that the URL you want to point people to has documentation/your_library_name_lowercased appended to it. Oh – and the path component before that is the GitHub repo name, in case you don't have the benefit of having them be the same. For example, for the Swift Automerge library, the repository is automerge-swift, while the package name is automerge, so the hosted pages on GitHub end up at a URL of the form username.github.io/automerge-swift/documentation/automerge.


The example generate-documentation command I provided above has extra bits in it that you probably don't need, in particular the --emit-digest option. This option generates an additional JSON file at the top level of the content (linkable-entities.json) which contains a list of all of the (public) symbols within the library. I've intentionally chosen to include this file in the content I'm hosting on GitHub Pages, although it's not (to my knowledge) used by the single-page app that displays the HTML content. The formal content definition (written as an OpenAPI spec) is defined in the DocC repository. The short form is that it provides a list of all the symbols that I can use to reference symbols within that content, which otherwise proved hard to get.

If you're curious what this looks like, try the following command (assuming you have curl and jq installed), substituting in the URL where your linkable-entities.json file is hosted:

curl -sL <url-to-linkable-entities.json> | jq -r '.[].referenceURL'

The output looks something like:


These are the full reference links used within DocC, and a portion of these reference links are what can be used as symbol references within the markdown files in the documentation catalog.


maps to the symbol:


In a few cases, those little hash extensions (the -5b948 bit in the example) are tricky to find. Xcode does a reasonable job of dropping them in using code completion, but I’ve found a few bugs where they were hard to ascertain. This JSON file with all the symbols is exactly what I needed to get a full list.
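For reference, a symbol link with one of those hash disambiguators, placed in a documentation catalog's markdown, looks something like this (the symbol path here is hypothetical; the -5b948 suffix is the example hash from above):

```markdown
## Topics

### Rendering

- ``Lindenmayer/Module/render3D-5b948``
```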

I haven’t (yet?) figured out a means to transform these doc:// resource URLs into web URLs, but I’ve got a notion there’s a means to enable that for the ability to cross-link documentation when libraries build, or depend, on other libraries. Maybe that’s ultimately what this digest is for, but if so – it’s not a feature that’s easily usable yet from DocC. I’m still exploring the internals of DocC, but there’s an idea of a resolver – which can be an external process or service – that provides this mapping for DocC to build the links (I think?) it needs to enable that functionality.

Looking for help to solve a specific 3D math/trig problem

I've been working on a Swift library that implements Lindenmayer systems, but the past week has me deeply stuck on a specific 3D math problem. There's a specific 3D rendering command described in The Algorithmic Beauty of Plants – the '$' character in that work – that has a special meaning that's taken me quite a while to sort out. The idea is that as you progress through rendering, this particular command rotates around the axis that you're "growing in" such that the "up" vector (which is 90° pitched up from the "forward" vector) is as close to vertical as possible.

It turns out this little critter is critical to getting tree representations that match the examples in the book. I haven't solved it – I thought I had earlier, but I managed to delude myself, so now I'm back to trying to sort out the problem. The course of solving this has led to some nice side effects – such as a nice 3D debugging view for my rendered trees – but I haven't yet finalized HOW to solve the problem. In years past, if I saw someone stuck this badly on a problem, I'd advise them to ask for help – so that's what I'm doing.

I’m using a 3D affine transform (a 4×4 matrix of floats) to represent a combination of translations and rotations in 3D space doing a sort of 3D turtle graphics kind of thing. From this state I know the heading that I’m going, and what I’d like to do is roll around this particular axis. The problem that I’m trying to solve is determining the angle (let’s call it θ) to roll that results in one of the heading vectors being as close to vertical (+Y) as possible while still on the plane of rotation that’s constrained to the “heading” axis.

My starting point for the "forward" heading is +1 on the Y axis, with a local "up" heading of +1 on the Z axis.

The way I was trying to tackle this problem was applying the built-up set of translations and rotations using the 3D affine transform and then figuring out if there was trigonometry I could use to solve for the angle. Since I know the axis around which I want to roll (or can compute it by applying the transform to the unit vector that represents it – the (0,0,1) vector), I was looking at Rodrigues' rotation formula, but my linear algebra (matrix math) skills and understanding are fairly weak, and I couldn't see a path to solving that equation for θ given known vectors for the heading (or vectors on the plane whose cross product is the heading), as well as knowing the world vector.

I'm heading towards trying to solve this by iteratively applying various values of θ and homing in on the solution based on which resulting value has the largest Y component. I can apply the roll as an affine transform that I multiply onto the current transform, and then test the result on a unit "up" vector – rinse and repeat to find the angle that gives me the best Y component value.
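To make the iterative approach concrete, here is a minimal, self-contained Swift sketch (the Vec3 type and bestRollAngle function are hypothetical helpers of mine, not the library's API) that uses Rodrigues' rotation formula to test candidate roll angles and keep the one whose rotated "up" vector ends up most vertical:

```swift
import Foundation

struct Vec3 {
    var x, y, z: Double
    static func dot(_ a: Vec3, _ b: Vec3) -> Double { a.x * b.x + a.y * b.y + a.z * b.z }
    static func cross(_ a: Vec3, _ b: Vec3) -> Vec3 {
        Vec3(x: a.y * b.z - a.z * b.y,
             y: a.z * b.x - a.x * b.z,
             z: a.x * b.y - a.y * b.x)
    }
}

// Rodrigues' rotation formula: rotate v around the unit axis k by angle theta.
func rotate(_ v: Vec3, around k: Vec3, by theta: Double) -> Vec3 {
    let c = cos(theta), s = sin(theta)
    let kxv = Vec3.cross(k, v)
    let kdv = Vec3.dot(k, v)
    return Vec3(x: v.x * c + kxv.x * s + k.x * kdv * (1 - c),
                y: v.y * c + kxv.y * s + k.y * kdv * (1 - c),
                z: v.z * c + kxv.z * s + k.z * kdv * (1 - c))
}

// Coarse brute-force search: try evenly spaced roll angles around the heading
// and keep the one that leaves the rotated "up" vector most vertical (+Y).
func bestRollAngle(up: Vec3, heading: Vec3, steps: Int = 3600) -> Double {
    var best = (angle: 0.0, y: -Double.infinity)
    for i in 0 ..< steps {
        let theta = Double(i) / Double(steps) * 2 * .pi
        let candidate = rotate(up, around: heading, by: theta)
        if candidate.y > best.y { best = (angle: theta, y: candidate.y) }
    }
    return best.angle
}
```

With a heading along +X and a local up of +Z, for instance, the search settles on a three-quarter turn, which leaves the rotated up vector pointing at +Y.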

I'd like to know if there's a way to solve this directly – to compute the value of θ that I can use to directly do the rotation, rather than numerically iterating toward the solution. I wasn't sure how active the GameDev Stack Exchange is (or if I'd get a response), but I tried asking:

If you're familiar with 3D graphics, rotations, transforms, or the mathematics of solving this sort of thing, I'd love to have any input or advice on how to solve this problem – even just some concrete knowledge of whether this problem is amenable to a direct solution, or if it's the kind of thing that requires an iterative numerical solution like I'm considering.

UPDATE: Solved!

I got an answer from a friend in Slack who saw this, and I've mostly implemented it. There are a few numerical instability points I need to sort out, but the gist is: yes, there's definitely a way to explicitly calculate the angle of rotation needed. The summary of the answer that put me onto the right path is: "Look at this problem as projecting the vector you want to roll towards onto the plane that's the base of the heading. Once it's projected, you can compute the angle between the two vectors, and that should be what you need."

The flow of the process is as follows:

  • start with a unit vector that represents the ‘up’ vector that I want to compare (+Z in my case)
  • pull the 3×3 rotation matrix out of the affine matrix, and use that to rotate the 'up' vector. We'll be comparing against this later to get the angle. Because it's explicitly set as a unit vector in the 'up' direction, we already know it's on the plane whose normal is our heading vector.
  • Use a similar technique to rotate the ‘heading’ vector (what starts as +Y for me) by the rotation matrix.

(a quick double check here I did in my testing was that these two remained 90° apart after rotation – primarily to verify that I didn’t screw up the rotation calculation)

  • Now that we have the heading, we use it as the normal vector for the plane onto which we want to project our two vectors, and from which we can get the desired angle. One vector (the rotated 'up' vector) is already on this plane. The other vector is the +Y world vector – the "as vertical as possible" component of this.

The formula for projecting a vector on a plane is:

vec_projected = vector - ( ( vector • plane_normal ) / plane_normal.length^2 ) * plane_normal

You can look at this conceptually as taking the vector you want to project and subtracting from it the portion of the vector that corresponds to the normal vector, which leaves you with just the component that’s aligned on the plane.
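That subtraction translates directly into code. A small, self-contained sketch (the Vec3 type and project function are my hypothetical helper names, not the library's API):

```swift
// Project a vector onto the plane defined by its normal, per the formula above.
struct Vec3 {
    var x, y, z: Double
    static func dot(_ a: Vec3, _ b: Vec3) -> Double { a.x * b.x + a.y * b.y + a.z * b.z }
}

func project(_ v: Vec3, ontoPlaneWithNormal n: Vec3) -> Vec3 {
    // (v • n) / |n|² is the component of v along the normal; subtract it out,
    // leaving only the part of v that lies in the plane.
    let scale = Vec3.dot(v, n) / Vec3.dot(n, n)
    return Vec3(x: v.x - scale * n.x,
                y: v.y - scale * n.y,
                z: v.z - scale * n.z)
}
```

A quick sanity check is that the projected vector always has a zero dot product with the plane normal.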

🎩-tip to Greg Titus, who referred me to ProjectionOfVectorOntoPlane at MapleSoft.

  • Once both vectors are projected onto that base plane, use the standard equation for the angle between two vectors:
    angle = acos( dot(rotated_up_vector, projected_vector) /
            ( length(projected_vector) * length(rotated_up_vector) ) )

There are some numerical instability points I need to work out where my current code is returning NaN from the final comparison, and the directional component isn't included in that – so I need to sort out a way to determine which direction to rotate – but the fundamentals are at least sorted.

Solved, even better

After I'd implemented most of the above details to prove to myself that it worked, I received a suggestion from DMGregory with a significantly better answer. His solution uses vector cross products to define the plane that's the base upon which we'll rotate, and then uses the two-argument arctangent function to compute the angle from the dot products of the current up vector with those two plane axes.

It’s a denser solution, but he provided a helpful way to think about it that made a lot of sense to me after I sketched it all out in a notebook so I could understand it:

planeRight = normalize(cross(worldUp, heading));
planeUp = cross(heading, planeRight);
angle = atan2(dot(currentUp, planeRight), dot(currentUp, planeUp));

You can think of the dot products as getting us the x and y coordinates of our current up vector in this plane, and from that the two-argument arctangent gets us the angle of the vector from the positive x-axis in that plane.
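That recipe translates almost directly into Swift. A self-contained sketch (again with hypothetical Vec3 and rollAngle helpers of mine, not the library's API):

```swift
import Foundation

// DMGregory's approach: build a 2D basis (planeRight, planeUp) in the plane
// perpendicular to the heading, then read off the signed angle with atan2.
// Degenerate case not handled here: if heading is parallel to worldUp, the
// cross product is zero and there is no well-defined roll.
struct Vec3 {
    var x, y, z: Double
    static func dot(_ a: Vec3, _ b: Vec3) -> Double { a.x * b.x + a.y * b.y + a.z * b.z }
    static func cross(_ a: Vec3, _ b: Vec3) -> Vec3 {
        Vec3(x: a.y * b.z - a.z * b.y,
             y: a.z * b.x - a.x * b.z,
             z: a.x * b.y - a.y * b.x)
    }
    func normalized() -> Vec3 {
        let length = Vec3.dot(self, self).squareRoot()
        return Vec3(x: x / length, y: y / length, z: z / length)
    }
}

func rollAngle(worldUp: Vec3, heading: Vec3, currentUp: Vec3) -> Double {
    let planeRight = Vec3.cross(worldUp, heading).normalized()
    let planeUp = Vec3.cross(heading, planeRight)
    // The two dot products are the (x, y) coordinates of currentUp in the
    // plane's basis; atan2 converts them into a signed angle.
    return atan2(Vec3.dot(currentUp, planeRight), Vec3.dot(currentUp, planeUp))
}
```

Because atan2 is signed, the result carries the direction of rotation as well as its magnitude.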


Since I'm using SceneKit, I did have to fiddle around with the cross products to make them legit for a right-handed coordinate scheme, but the resulting solution helpfully provides a proper direction of rotation as well as the angle, which is something the original solution didn't have.

API Design decisions behind Lindenmayer in Swift

Procedural generation of art is fascinating to me. The scope of efforts that fall into the bucket of procedural generation is huge. Quite a lot of what you find is either focused on art or video games. Within procedural generation, there is a topic that really caught my eye, I think primarily because it wasn’t just about art or games, but rather botany.

I learned of L-systems, also known as Lindenmayer systems, quite some time ago, and knew that you could use them to generate interesting fractal images. L-systems were devised by Aristid Lindenmayer as a formal means to describe and model plant growth. Much of the background was printed in 1990 in the book The Algorithmic Beauty of Plants. The book is currently available from the site Algorithmic Botany in PDF form (which I linked above). I find it fascinating, and after skimming through it a couple of times, I started to look for what code might be available that I could use to play with it. I pretty quickly found the Swift Playground ‘lindenmayer‘, created by Henri Normak. That was neat, but I wanted to go beyond what it could do to re-generate some of the more complex examples from the book.

Fast forward a number of months, and I've now published an early release of a Swift library that you can use to generate, and render, Lindenmayer systems. The library is available as a swift package – meaning it is intended to be used on Apple platforms (iOS, macOS, etc.), though quite a lot of it (the core) could be used with Swift on Linux without a lot of trouble. I'm hosting the project, and the source for it, on GitHub. The current release (0.7.0) is not at all finalized, but has a lot of the features that I wanted to use: contextual rules, parametric symbols, as well as image and 3D model representations of the L-systems.

To tease a bit of what it can do, let me share some examples from the book, and then my examples of the same L-systems, ported to using the library I created:

Rendered trees from The Algorithmic Beauty of Plants, page 59
The ported L-systems using the same parameters as the book, rendered in 3D

The rendered result is really close – but not quite there. Part of it stems from the representation of a specific symbol in the original ('$'), and my interpretation of what that does when rendering the state of an evolved L-system in three dimensions. It could just as easily be a mistake in how I interpreted and ported the original L-systems that were published in the book. You can really see the differences in the second set of trees that I tried to create from the book:

I debated how I wanted to create this library, and in the end landed on doing something that would push me as a developer, forcing me to more deeply understand the Swift programming language as well as how to design APIs with it. The result isn't an interpreted thing, but something that works closely with the Swift compiler and tries to leverage its type safety. If you want to make an L-system with this library, you're doing it in the Swift programming language.

The core of an L-system is a set of modules – an abstract representation – and a set of rules that you apply to these modules. You start with an initial set of modules – maybe just one – and on each 'iteration' of an L-system, you go through the entire list of existing modules and apply the set of rules. The rules are set up to match a module, or a specific sequence of modules, and if they do, you replace the current module, in its current location within the sequence, with whatever set of modules the rule returns. Typically only one rule applies, but in any case I set it up so that it tries all the rules in order and only uses the first rule that reports it matches. If no rules match, the module is ignored and left in place.
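The rewrite loop just described can be sketched in a few lines with plain characters standing in for modules (the library uses typed structs instead, but the mechanics are the same). Here it is with Lindenmayer's classic algae system, A → AB, B → A:

```swift
// One function capturing the core L-system loop: on each iteration, walk
// every symbol in order and replace it with the first matching rule's
// output, or leave it in place if no rule matches.
func evolve(_ start: String, rules: [Character: String], iterations: Int) -> String {
    var state = start
    for _ in 0 ..< iterations {
        state = state.map { symbol in rules[symbol] ?? String(symbol) }.joined()
    }
    return state
}
```

After four iterations, "A" evolves through "AB", "ABA", and "ABAAB" to "ABAABABA".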

A capability that I wanted to enable was implementing a parametric L-system. This means that the symbols aren't just one-character things that you interpret, but objects (called modules) that can have parameters. Those parameters can be read and used to determine if a rule should be chosen, or what the rule produces. I chose to use Swift closures for constructing the rules, the idea being that you define a module as a Swift struct (with or without properties) and evaluate whether a rule applies to a module (or set of modules) by its type, or by its type and properties. If a rule is chosen, the matched modules and their properties are also available to compute values and choose what modules should be the replacements. My thinking was that anything you could compute using Swift was more immediately available by using a closure, and had the benefit of being type-checked by the compiler.
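As an illustration of the idea, using simplified, hypothetical types (the library's actual Module protocol and rule machinery are richer), a parametric rule can pair a type-matching closure with a closure that reads the matched module's properties to produce replacements:

```swift
// Simplified stand-in for the library's module concept.
protocol Module {}

struct Stem: Module { let length: Double }
struct Leaf: Module {}

// A rule pairs a matching closure with a closure that reads the matched
// module's properties to compute its replacements.
struct Rule {
    let matches: (Module) -> Bool
    let produce: (Module) -> [Module]
}

// Example rule: any stem longer than 1.0 splits into two half-length stems
// with a leaf between them.
let splitLongStems = Rule(
    matches: { module in (module as? Stem).map { $0.length > 1.0 } ?? false },
    produce: { module in
        guard let stem = module as? Stem else { return [module] }
        return [Stem(length: stem.length / 2), Leaf(), Stem(length: stem.length / 2)]
    }
)

// One pass of the rewrite: the first matching rule wins, and unmatched
// modules are left in place.
func rewrite(_ modules: [Module], rules: [Rule]) -> [Module] {
    modules.flatMap { module -> [Module] in
        if let rule = rules.first(where: { $0.matches(module) }) {
            return rule.produce(module)
        }
        return [module]
    }
}
```

Because the rule bodies are ordinary Swift closures, the compiler type-checks both the matching predicate and the computed replacements.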

To make this idea more concrete, take a look through a variation of the system that created the tree images above:

    struct Trunk: Module {
        public var name = "A"

        let growthDistance: Double
        let diameter: Double
    }

    struct MainBranch: Module {
        public var name = "B"

        let growthDistance: Double
        let diameter: Double
    }

    struct SecondaryBranch: Module {
        public var name = "C"

        let growthDistance: Double
        let diameter: Double
    }

    struct StaticTrunk: Module {
        public var name = "A°"
        public var render3D: ThreeDRenderCmd {
            RenderCommand.Cylinder(
                length: growthDistance,
                radius: diameter / 2,
                color: ColorRepresentation(red: 0.7, green: 0.3, blue: 0.1, alpha: 0.95)
            )
        }

        let growthDistance: Double
        let diameter: Double
    }

    public struct Definitions: Codable, Equatable {
        var contractionRatioForTrunk: Double = 0.9 /* Contraction ratio for the trunk */
        var contractionRatioForBranch: Double = 0.6 /* Contraction ratio for branches */
        var branchAngle: Double = 45 /* Branching angle from the trunk */
        var lateralBranchAngle: Double = 45 /* Branching angle for lateral axes */
        var divergence: Double = 137.5 /* Divergence angle */
        var widthContraction: Double = 0.707 /* Width contraction ratio */
        var trunklength: Double = 10.0
        var trunkdiameter: Double = 1.0

        init(r1: Double = 0.9,
             r2: Double = 0.6,
             a0: Double = 45,
             a2: Double = 45)
        {
            contractionRatioForTrunk = r1
            contractionRatioForBranch = r2
            branchAngle = a0
            lateralBranchAngle = a2
        }
    }

    static let defines = Definitions()
    public static let figure2_6A = defines
    public static let figure2_6B = Definitions(r1: 0.9, r2: 0.9, a0: 45, a2: 45)
    public static let figure2_6C = Definitions(r1: 0.9, r2: 0.8, a0: 45, a2: 45)
    public static let figure2_6D = Definitions(r1: 0.9, r2: 0.7, a0: 30, a2: -30)

    public static var monopodialTree = Lindenmayer.withDefines(
        [Trunk(growthDistance: defines.trunklength, diameter: defines.trunkdiameter)],
        prng: PRNG(seed: 42),
        parameters: defines
    )
    .rewriteWithParams(directContext: Trunk.self) { trunk, params in
        // original: !(w) F(s) [ &(a0) B(s * r2, w * wr) ] /(d) A(s * r1, w * wr)
        // Conversion:
        // s -> trunk.growthDistance, w -> trunk.diameter
        // !(w) F(s) => reduce width of pen, then draw the line forward a distance of 's'
        //   this is covered by returning a StaticTrunk that doesn't continue to evolve
        // [ &(a0) B(s * r2, w * wr) ] /(d)
        //   => branch, pitch down by a0 degrees, then grow a B branch (s = s * r2, w = w * wr)
        //      then end the branch, and yaw around by d°
        [
            StaticTrunk(growthDistance: trunk.growthDistance, diameter: trunk.diameter),
            Modules.PitchDown(angle: params.branchAngle),
            MainBranch(growthDistance: trunk.growthDistance * params.contractionRatioForBranch,
                       diameter: trunk.diameter * params.widthContraction),
            Modules.TurnLeft(angle: params.divergence),
            Trunk(growthDistance: trunk.growthDistance * params.contractionRatioForTrunk,
                  diameter: trunk.diameter * params.widthContraction)
        ]
    }
    .rewriteWithParams(directContext: MainBranch.self) { branch, params in
        // Original P2: B(s, w) -> !(w) F(s) [ -(a2) @V C(s * r2, w * wr) ] C(s * r1, w * wr)
        // !(w) F(s) - Static Main Branch
        [
            StaticTrunk(growthDistance: branch.growthDistance, diameter: branch.diameter),

            Modules.RollLeft(angle: params.lateralBranchAngle),
            SecondaryBranch(growthDistance: branch.growthDistance * params.contractionRatioForBranch,
                            diameter: branch.diameter * params.widthContraction),

            SecondaryBranch(growthDistance: branch.growthDistance * params.contractionRatioForTrunk,
                            diameter: branch.diameter * params.widthContraction)
        ]
    }
    .rewriteWithParams(directContext: SecondaryBranch.self) { branch, params in
        // Original: P3: C(s, w) -> !(w) F(s) [ +(a2) @V B(s * r2, w * wr) ] B(s * r1, w * wr)
        [
            StaticTrunk(growthDistance: branch.growthDistance, diameter: branch.diameter),

            Modules.RollRight(angle: params.branchAngle),

            MainBranch(growthDistance: branch.growthDistance * params.contractionRatioForBranch,
                       diameter: branch.diameter * params.widthContraction),

            MainBranch(growthDistance: branch.growthDistance * params.contractionRatioForTrunk,
                       diameter: branch.diameter * params.widthContraction)
        ]
    }

As a side note, the library uses both protocols as types (also known as existential types) and generic types – both for generating the L-systems and the rules within an L-system. Enabling that made for some tedious development work, as there’s not a lot of “meta-programming” capabilities with the Swift language today. That said, I’m happy with the results as they stand right now. I’m still debating if I would get any benefit from leveraging the Swift DSL capabilities with result builders. I’ve watched and re-watched Becca Dax’s talk from WWDC 21: Write a DSL in Swift using result builders at least four times, but so far I haven’t convinced myself it’s a win over the factory methods I’ve currently enabled.

The 2D representation draws into SwiftUI's Canvas (which is pretty much a clone of the CoreGraphics-context drawing that Henri Normak shared in his playground), the 3D representation generates a SceneKit scene, and there's so much more to go!

I love the idea of being able to use this to generate images or 3D models, and explore the variety of things you can create with L-systems. A huge shout-out to Dr. Kate Compton, whose writing over many years has enabled me to explore a variety of things within procedural generation. I'm still working up to being able to "generate a 1000 bowls of oatmeal", at least easily. One of the recent additions I enabled was randomization within the library. I included a seedable pseudo-random number generator, so you can make things deterministic if you want.

This was the first real effort I've taken at generating 3D scenes, so I may need to step back and re-think the whole renderer that generates SceneKit scenes, and I haven't yet begun to explore how I might enable the same with RealityKit. The 2D version was relatively straightforward, but when you get into 3D there are all sorts of bizarre complications of rotations to deal with – and while I have something basically working, it's not intuitive to understand – or debug.

I have a number of ideas for how I might continue to grow this – but if you find it interesting, feel free to try it out. I've enabled discussions on the GitHub repo, or you can track me down on Twitter pretty easily, if you have questions or suggestions. I'm rather fixated on the issue that what I'm generating doesn't exactly match the results from the book, and on what I interpreted incorrectly, but I think the gist of what this is and does is reasonably solid. If you come to play, don't be surprised if the examples built into the library change as I iterate on getting the results to match the originally published work more closely.

I dearly wish that Swift Playgrounds version 4 (the update that came this winter) allowed Swift packages to be used within a playground. Alas, that doesn't appear to be the case – but you can still experiment with this library in Swift Playgrounds by including it in an App. I'll have to explore how to publish that as an example… another thing for the TO-DO list for the project!

Adding DocC to an existing swift package

During WWDC 21, Apple announced that they would be open sourcing the documentation tooling (DocC) that's used to build and provide documentation within Apple. At the tail end of October 2021, the initial version of DocC was released, available on GitHub across multiple repositories.

Apple hosts documentation about DocC (presumably written and published with DocC) at its Developer documentation site.

The initial release of DocC is primarily oriented around providing documentation for libraries and frameworks. It isn’t quite useful for documenting the internals of apps (yet!), but it’s getting closer. Some developers are waiting for features they see as critical before adopting DocC. I suspect the most sought-after feature is the ability to output something that can be dropped into a directory and directly hosted — such as rendering the documentation so that it can be published through Github Pages.

If you're curious about the specifics of how the DocC project is considering enabling "static hosting", Scotty has submitted a pull request for that feature, along with a parallel PR by Dobromir in the swift-docc-render repository.

With this feature, and some other recent improvements and plans, including some outside of DocC, I am looking forward to adding documentation to a few libraries I’m using, but which haven’t yet adopted DocC.

Unfortunately, Apple's documentation on adding documentation to an existing Swift library (say, one provided solely through Package.swift) has some trouble: it's so highly focused on adding documentation using Xcode, and on when you have an Xcode project, that it heavily neglects the details of setting things up for package-only Swift libraries. There's a good article by Keith Harrison from this summer that helps, but it also leans heavily on using Xcode.

If you want to build the documentation through just Package.swift (for instance, if the library has no xcodeproj file), then make sure the Package.swift tools version is at 5.5:

// swift-tools-version:5.5
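For reference, a minimal Package.swift sketch for a documentable library might look like the following (the names here are placeholders, not from any real project; the tools-version comment on the first line is the critical part):

```swift
// swift-tools-version:5.5
import PackageDescription

let package = Package(
    name: "MyLibrary", // placeholder name
    products: [
        .library(name: "MyLibrary", targets: ["MyLibrary"]),
    ],
    targets: [
        // A documentation catalog placed at Sources/MyLibrary/MyLibrary.docc
        // gets associated with this target when DocC builds documentation.
        .target(name: "MyLibrary"),
    ]
)
```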

Where you put the documentation is important as well. There's a one-liner in Apple's article:

For Swift packages, place the documentation catalog in the same folder as your library’s source files for DocC to associate the catalog with the library.

Documenting a Swift Framework or Package

When you're not working in Xcode, this means adding a directory with a .docc extension inside the sources directory of your package. For example, when I drafted a documentation stub pull request for automerge-swift, I used the package name automerge.docc, not the repository name automerge-swift.docc. The name of the directory doesn't appear to matter, but I like Keith's suggestion of having it match the package name. Within the .docc directory, include at least one markdown file that has, at the top, a special marker that lines up with the name of the package. In my documentation PR for automerge-swift, I ended up with the following structure and files:

  • Source/Automerge.docc/
  • Source/Automerge.docc/

You can get Apple’s templates by making a (throw-away) Xcode project, adding documentation files to it, and copying them out – but I thought I’d include a few examples for anyone else looking on the web for the quick detail.

The template for the overall catalog:

# ``YourPackageNameHere``
One line summary
## Overview
Short (a paragraph, maybe two) overview of your library.
## Topics
### Group
- <doc:AnArticle>

A few notes about this template:

  • The name at the top should match the name of your swift package, surrounded by the double-back ticks.
  • The ## headings are the major groupings. You can provide additional organization with ###, but don't bother with sub-sub-headings (#### or more). It's legit markdown, but ignored by DocC.
  • If DocC can't find a symbol reference that you have in your markdown, or a link to the relevant <doc:> article, it may just pass on displaying it. In some cases (links to symbols using the double-backtick syntax), a misspelled symbol may be presented as you provided it, in a fixed-width font, but without a link.
  • The doc link above references the name of the article’s markdown file – not the title within it. Although to keep yourself sane, I’d recommend keeping the title and header pretty much identical.

And to match that, here’s a quick and dirty template for an Article, loosely based off the one Apple provides within Xcode:

# Article
One sentence overview/abstract introducing the article
## Overview
Introductory paragraph(s)
## How to do something interesting
Your content
### section heading
Your content

Third, there's a template that Xcode calls an Extension, which you can use to add additional detail onto an existing symbol – it incorporates what you included with the symbol in source, and allows you to add more, external to the documentation in the source. There's a nice detail of this template within Apple's article Adding Structure to Your Documentation Pages.

While you can edit and set these up with any text editor, you get a notable win when you use Xcode to edit these files.

  1. The DocC compiler provides nice fix-its – notes about what’s wrong – when it can’t find or link to a symbol after you build the docs with Product > Build Documentation in Xcode.
  2. Xcode provides some very handy symbol code-completion capabilities that you’d otherwise have to guess at or learn through trial and error.

At some point, hopefully in the near future, you’ll be able to render out a bunch of static HTML with your documentation. For now, the easiest way to iterate and see the feedback is by building the docs through Xcode. The shortcut keybinding (Shift-Control-Command-D) is a finger-bender, but after Xcode builds the documentation, you immediately see the results in the documentation window of Xcode.

A time for change

For some reason, the fall seems to coincide with the inflection points in my life where I’ve made notable changes. This fall is no different, as today is the last day of a contract position that I started just before the COVID lockdown. The 18-month gig was wonderful – sometimes a bit “hard mode” due to the constraints that large companies put on contractors. I went into it with a couple of goals, and I’m finishing it out feeling like I very much achieved (or made huge progress against) those goals.

“What’s next?” is a common question; from others to me, and within my own brain. For a long time, I absorbed a tremendous amount of my self-identity from what I did. Or more specifically, from the communities that I was a part of – large influences including groups of coworkers – but also external groups. I’m fairly addicted to solving problems and puzzles, so I have a hard time stepping back to understand “what I want to do” in a way that’s not heavily influenced by the group or groups that I’m currently or recently involved with. In practical terms, this hits me as “I’ve been working in and around this problem space for the last X months, so all the interesting puzzles that immediately come to mind tend to be related to that problem space.”

Now that I’m wrapping up my contract gig, I’m taking some time off – away from all my home influences – to try and let my inner-self simmer down. While I give myself time to wind down, I’m making lists of things that “sound interesting”. Getting it out of my head into writing often lets me put it aside mentally. I learned long ago I’d obsess until I wrote it down – so I make lists and notes to free myself. I’m also reviewing my journals from the time period when I last had a break. I’m hoping that refreshing my thoughts will help ease things back out such that my own voice rides to the top. There were also some things I set aside when I took the contract that I may want to revisit and consider. I have the luxury of taking some time to figure out what’s next, so I’m trying to take it.

Kubernetes and Developers

Three years ago (April, 2018) Packt published my book Kubernetes for Developers. There weren’t many books related to Kubernetes on the market, and the implementation of Kubernetes was still early – solid, but early. Looking back, I’m pleased with the content I created. It’s still useful today, which for technical content is pretty darned amazing. I wrote it for an audience of NodeJS and Python developers, and while some of the examples and tasks are now a touch dated, the core concepts remain sound. I originally hoped for at least an 18-month lifespan for the content, and I think it has doubled that.

When I wrote that book, a lot of folks that I interacted with wondered if Kubernetes would stick around, or if it was just a flash-in-the-pan fad. Over the decade prior to writing it, I worked with a large variety of devops tools (Puppet, Chef, Ansible, Terraform, and a number of older variations), which gave me useful experience and understanding. When I saw how Kubernetes represented the infrastructure it manages, I thought it would be a win. The concepts were the “right” primitives: modular and composable, they felt natural, and they applied consistently. When I saw AWS “flinch” — responding to the competitive pressures of Google releasing and supporting Kubernetes with cloud services — I knew it was destined for success.

I saw, and still see, a place where this kind of infrastructure can be an incredible boon to a developer or small development team — but with Kubernetes there is a notable downside. While it has great flexibility and composes well, it’s brutally complex, has a very steep learning curve, and can be notably opaque. For a developer looking at the raw output of Kubernetes manifests, it can be a horror show. Values are copied and repeated all over the place, the loose coupling means you have to know the conventions to trace the constructs it makes, and error messages presume in-depth knowledge of how the infrastructure works. In addition, there are conventions that rule functional operation, and when you step in to edit something, it’s astonishingly easy to misconfigure things with an extra space because — well — YAML.

I recently came back to creating application manifests for Kubernetes, and wrote a (simple) Helm chart for an application. Coming back after a couple of years working on different technology reminded me how opaque and confusing the whole process is. The updated version of Helm (version 3) is an improvement over its predecessor for a number of technical reasons, but it’s no easier to use for developing charts. Creating even a simple chart demands a very deep knowledge of Kubernetes: the options it provides and how to specify them, the coupling that’s implicit within the manifests — where data is repeated — and the conventions that rule them. It was very eye-opening. The win that I hoped for three years ago when I published the book — that developers could see and have a guide to using the power of Kubernetes to support the development process — hasn’t entirely come to pass. It still could, but it needs work.

As a side note, I ran across the blog post 13 Best Practices for Using Helm, which I recommend if you find yourself stepping into the world of using Helm to help manage Kubernetes manifests for your applications.

For developers who are being asked to be responsible for their code running in production, running apps with Kubernetes provides useful levers and feedback loops. Taking advantage of Kubernetes in your development flow is not going to speed up that tight loop where you’re enabling a new HTTP endpoint, but it can make a notable difference when you get to the stage of integration, acceptance, and functional testing — the places where you verify correctness, analyze performance, and test resiliency. As a developer, if you know what Kubernetes is looking for, you can write code to communicate back to the cluster. This in turn lets the cluster manage your code – individually or at scale – far more effectively. Kubernetes has the concept of liveness and readiness probes for any application. By crafting extensions on your application, you can provide direct signals of when things are all good, when there’s trouble, and when your app needs to restart.
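As a sketch of what those probes look like in a manifest — the names, paths, and image here are hypothetical, and your application would need to serve the matching endpoints:

```yaml
# Hypothetical Deployment fragment showing liveness and readiness probes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example/app:latest
          ports:
            - containerPort: 8080
          livenessProbe:            # restart the container when this fails
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 10
          readinessProbe:           # withhold traffic until this passes
            httpGet:
              path: /readyz
              port: 8080
            initialDelaySeconds: 5
```

With this in place, Kubernetes restarts the container when the liveness check fails, and only routes traffic to it once the readiness check passes.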

The same pattern of interaction that Kubernetes uses to manage its resources is used by observability tools. Once you’re comfortable sending signals to Kubernetes, you can extend that and send metrics, logs, and even distributed tracing – about how your application is working with the details that let you debug, or forecast, how your code operates. Open source observability tools such as Prometheus, Grafana, and Jaeger all comfortably run within Kubernetes, enabling you to quickly provide observability to your apps. The same observability that you use in production can provide you with additional insights to experiment and explore during development. A post I wrote two years ago – Adding tracing with Jaeger to an express application – is still a popular article, and I used that setup to characterize an app by capturing and visualizing traces while running an integration test.

Having been periodically responsible for large and small “dev and test labs” over several decades of my career, it appeals to me that I can create a cluster, deploy the code, run functional and performance tests, and validate the results while also getting a full load of metrics, traces, and logging details. And because it’s both ephemeral and software defined, I can create, run, capture data, and destroy in whatever iterative loop makes the most sense for what I need. In the smallest scaled scenario, where I don’t need a lot of hardware, I can verify everything on a laptop. And when I need scale, I can use a cloud provider to host a cluster, and get the scale that I need, use it, and tear it down again at the end.

Getting back into these recent from-nothing deployments into a cluster, and writing new charts, reminds me that while the primitives are great, the tooling and user interface for developers working with this have a long, long way to go. My past experience is that developer tools can be among the last to get decent, let alone good, user interfaces. They’re often only slightly ahead of the dreaded “enterprise application” tools in terms of thoughtful user experience or visual design. With work, I think the complexities of Kubernetes could be encapsulated – visible if you need or want to really see them. That work should let you focus on how your app works within a cluster, and use the good stuff from Kubernetes and the surrounding community of tools to establish an effective information feedback loop. It could be so much better at supporting developers and allowing them to verify, optimize, and analyze their applications.

Public Service Announcement

If you’re a lone dev, or a small team, and want to take advantage of Kubernetes as I described above, don’t abuse yourself by spinning up a big cluster for a production environment that amounts to a single, or a few, containers.

It’s incredibly easy to spend a bunch of excess money from which you get essentially no value. If you want to use Kubernetes for smaller work, start with Minikube or Rancher’s k3s, and grow to using cloud services once you’ve exceeded what you can do on a single laptop.

Concurrency, Combine, and Swift 5.5

I started a post that brings together all the moving parts that have been discussed in the various concurrency proposals that are going into Swift 5.5. They’re all accessible through GitHub, and the discussions in the public forums. The combined view of all the moving parts is complex. I was aiming to post something this weekend, but in the end I trashed the draft after I read Paul Hudson’s What’s New in Swift 5.5. He has outdone himself with a beautiful job of describing, and pulling together, all the moving parts, and I think there’s little at this stage that I could add — anything I wrote would be a poor shadow, repeating the work he put into that overview. If you haven’t yet read it, do so. It’s a stellar overview and detailed description (with examples!) of a lot of the moving parts that we’ll see this year. And I’m sure it sets the foundation for quite a bit more to come!

I started closely following the updates to the Swift language, partially because there’s an obvious overlap with what has been proposed and how Combine operates. Some of what, and how, Combine does its processing is getting pushed down into the language, which is frankly amazing. I don’t know how Apple’s Combine framework will evolve, but I fully expect it to embrace the async/await foundation, as well as the iteration, sequence, and stream-like structures that are in flight. There’s a massive collection of concurrency proposals and work, and if that alone was what Apple announced and shared at WWDC, it would be huge. I’m guessing that it’s only the foundation for quite a lot of other elements, as yet to be seen. So yeah – I’m one of the people thinking “This year’s gunna be a big one!” when it comes to WWDC announcements.

For me, the portion of the concurrency proposals that’s been most intriguing, exciting, and perhaps a little scary, is the addition of Actors and Global Actors. Building on top of the concurrency features, they provide some amazing coordination and memory-safety constructs that make it far easier to write safe code for an increasingly multi-core system. And I suspect that Actors aren’t going to be limited to a single device, but that it’ll be extended to communicating and coordinating across devices. There’s nothing yet in the public proposals, but I think there’s still quite a bit more coming.

I’m looking forward to these hitting the streets, and (hopefully) API use of Global Actors to help provide compiler-enforced expectations around callbacks and functions that need to interact on the main thread. Yeah – mostly that’s UI updates. The existing class of bugs related to UI updates on macOS and iOS has been helped by warnings from the amazing Clang Thread Sanitizer, but now it can be enforced – in Swift at least – within the compiler. I also expect a surge of complexity due to the annotations and new keywords for concurrency. I suspect it may be overwhelming for a lot of developers, especially new developers getting involved in the platform. While you probably don’t need it to learn Swift, I’m guessing that it’ll be pretty front and center to a lot of core app development. And I wouldn’t be surprised to see a lot of discussion, and confusion, around the techniques of linking the earlier Cocoa delegate call-back style into async methods and functions. I think it may be challenging – at least at the start.

Like any new tool, I predict that Actors will get over-used in the near future, maybe for quite a while, before collapsing back to a more sane level of usage in constructing software. In any case, I think it’ll change a lot of conversations about how current software components work together. I hope that the concurrency elements come with equally excellent testing infrastructure and patterns. I’m still bummed that Combine-Schedulers wasn’t something that was included with the Combine framework directly, or at least with XCTest, while at the same time being immensely grateful to the Point•free crew for making that library available.

I’m not going to make any other WWDC predictions, and I’m clearing my mental decks for the conference updates and details to come. It’s getting about time for “sponge learning” mode. I’m looking forward to seeing what so many worked on, and tremendously excited about the potential this year’s Swift language updates are setting up.

Why I don’t want Xcode on the iPad — macOS and iPadOS

With the impressive announcement of the latest iPad Pros now being available with the M1 chip, it seems like a whole lot of people (in the communities I follow) are discussing the announcement with a general theme of “WTF are we going to do with that chip in there?” Often they’re Apple-platform developers saying “Boy, wouldn’t it be freakin’ awesome if we could use Xcode on the iPad?!”. I don’t want Xcode on the iPad. I used to think I did, or that it might be useful, but I’ve reconsidered. My reasoning stems from the different underlying constraints of iPadOS vs. macOS – some physical, some philosophical. I don’t want these constraints to go away. I value each in their own context. And I hope that these various operating systems aren’t ever fully merged, exclusively choosing one set of constraints over the other.

Personal vs Multi-user

iOS — and its younger, larger sibling iPadOS — are all about being a personal device. I love these kinds of devices, using them regularly. These devices are, very intentionally, single-user. My iPhone is MINE, and while I might hand it to someone else, everything on it is associated with me. Any accounts, all the related permissions – they’re all about me, or echoes of me. There’s no concept of a “multi-user” iOS device*.

(*) Yes, I know there’s unix deep under the covers which fully supports a multi-user Operating System. The way the device is exposed to me – a developer and consumer of it – is all about a single individual user.

macOS, on the other hand, is built from the ground up as a multi-user system. It supports, if you want to use it, multiple people using the same device at the same time. You “log in” to macOS, and “your account” has all the details about who you are and what permissions and constraints apply to what you can do. In practice, this ends up mostly being a single person working on a single macOS device at any given time. The underlying concepts that allow for multiple users are there, dating back to the earliest UNIX shared-system semantics. As a side effect of that fundamental concept, macOS supports multiple programs running in parallel as well, which segues into what I think is the major philosophical difference between these two kinds of devices: focus.

Focus

Somewhat related to the personal vs. multi-user difference is an element of “focus”. I also tend to think of this as overlapping a bit with a concept of “foreground” vs. “background” applications. iOS and iPadOS are fundamentally more focused devices. The expectation, and design, of the device and operating system is that you’re doing one thing – and all of your focus is on that one thing. Everything else is off in the background, or more likely put on a side-shelf out of view and quite possibly paused.

Yeah, there’s the iPadOS split screen setup that makes my previous statement a hasty generalization. I think it’s a weak-ass stab at doing more than one thing at once – a worthy attempt to see if it would be sufficient, but near useless in most of the scenarios I’ve tried. The iOS family (iOS, iPadOS, tvOS, and watchOS) all seem to share this concept of “doing a single kind of task”, and translating that into “focused on a single application”.

macOS, with its multi-user background, does a bunch of stuff in parallel. There is still a concept of foreground and background, and more usefully a concept of one application that’s focused and primary. Almost more important is how the operating system handles the parallel stuff that you might describe as being in the background. The key is that what runs in the background is (generally) under the control of the person using the device. If I want to run half a dozen programs at the same time – that’s cool – the system time-slices to try and give all of them relatively equal waiting and effort. With iOS, you’re either in the foreground – and king of the world – or in the background – extremely limited in what you can do, and often on a clock for the time remaining to do anything. The iOS-based operating systems can potentially kill a background app at any moment.

Multiple Windows

The other way the macOS “focus” and “parallel apps” concept shows itself is in windowing. On iOS, your app isn’t resizable, and for all practical purposes, you’re working in a single window — that takes over all the screen real-estate — to do whatever you’re working on.

In comparison, macOS has multiple windows, that you can resize and place however you choose. Well, mostly however you choose – there are limits. The ability to control the size of a window and place it allows you to prioritize the information contained with it. I end up setting window size and location, almost unconsciously, based on what I’m doing. You can have a front and center focus, but still have other information — updating and live — easily available with almost no context switching needed. You can choose what you’re mixing and when. The multiple window paradigm expanded years ago to include the acknowledgement of multiple kinds of spaces – even switching between them – as well as multiple physical displays.

Integrating Knowledge

The multiple windows — with different apps in various windows, all more-or-less resizable — are why I prefer macOS over iPadOS for almost all of my “producing” work. I use multiple programs at once, very frequently, because I want to bring together lots of information from disparate sources and use it together. It’s almost irrespective of what I’m doing – being creative, reviewing and analyzing, following up on operational tasks, etc. For me it all boils down to the fact that most of the work I do on a computer is about integrating knowledge.

Often that combination is a browser, editor, and something else such as a terminal window, spreadsheet, or image editor (such as the super-amazing Acorn). Sometimes it’s multiple separate browser windows and an email program. When I’m developing and working in Xcode, I frequently have multiple browser windows, a terminal, and Xcode itself all up and operational at the same time. I use the browsers for reading docs and searching Google (or StackOverflow). I often keep other apps running in parallel – Slack, email, and sometimes Discord. With these apps nearby, but not intrusively visible, I get updates, or I can context switch into them to ask a “WTF am I doing wrong” question of the cohorts available through those applications.

I can’t begin to do this on iPadOS without a huge amount of friction. Switching contexts and focus on iPadOS and iOS is time consuming, and you’ve got to think about it. It takes so much time and effort that you’re breaking out of the context of what you’re trying to do. On top of that, you can’t really control the size of the windows on the screen to match the priorities of what you’re working on – it’s more of an “all or nothing”.

I think it’s obvious, but when I’m doing something that’s more “consuming” focused – reading, watching, etc – then iOS-based devices can excel. No distractions, and I can focus on just what I’m trying to read, learn, understand, etc.

What I have found to be effective is using an iPad in addition to macOS. Using the features of Continuity and Universal Clipboard, an iPad becomes a separate dedicated screen for a single purpose. It’s not uncommon for me to have Slack, email, a PDF I’m marking up, or a scribble/sketching program (such as Notability or Linea Sketch) running on my iPad as almost another monitor for the same system. Then there’s the even-more personal use of watchOS to interact with the Mac: authenticating that I’m there and unlocking macOS, or using it to verify a system permission for doing something. I don’t know the name of that feature, but I use it all the time.

Not all of my tasks are about integrating knowledge, but almost all my technology or creative work certainly is. In the cases where I’m working on a single thing and want to exclude all distractions and other information, iOS and iPadOS are beautiful. But that’s comparatively rare; almost always when I’m doing something that’s purely solo-creative, such as drawing, writing, taking photos, or filming. Anything where I need, or want, to mix information together, especially any form of collaboration, begs for multiple windows, sized for their individual tasks, and background programs running on my behalf.

So thanks, but no: I don’t want Xcode on the iPad.

Second Guessing Yourself

I’m working through the book Crafting Interpreters. The author — Bob Nystrom — used Java for the first half of the book, a lovely choice, but I wanted to try it using the Swift programming language. Translating it on the fly was a means to exercise my programming and language knowledge muscles, and that does come with challenges. Some translation choices I made worked out well; others have helpfully provided a good learning experience. This post is about what happened while I was hunting down, and resolving, programming mistakes this past week.

Recursive functions have long been something that I found difficult. They are a staple of some computer science tasks, and the heart of the techniques used in scanning, parsing, and interpreting as shown in this book. It was because I found them difficult that I wanted to do this project; to build on my experience using these techniques.

It shouldn’t surprise you that I made mistakes while translating from Java examples into Swift. There wasn’t anything super-esoteric at the start, a little miss here or there. My first run through implementing a while loop didn’t actually loop (oops). The one that really frustrated me was a flaw in not properly isolating how I set up environments before invoking enclosing blocks or functions, which meant a little Fibonacci example blew up – values oscillating instead of growing. It was kind of my worst nightmare, interpreting recursive code within a bunch of compiled recursive code. I made similar errors in the parser, small subtle things, that meant some of the examples wouldn’t parse.

Testing and Debugging

So I backed up, and started writing tests to work through the pieces: scanner, parser, and interpreter. I probably should have built in testing from the start. More importantly — I did it when I needed it. The testing provided a framework to repeatedly exercise sections of code so that I could debug them and understand what was happening. I started out using the debugger in Xcode to work through the issues, but the ten-plus layers of recursively called functions exploded beyond my head’s capacity to track where I was and what was going on.

I reverted back to more direct tools. I annotated my parser code (and later the interpreter) with a flag variable named omgVerbose, printing out each function layer as I stepped into it. With that in place, I could run the test, view the (often quite long) trace, and walk through the example. I traced my code in an editor and compared what I thought it should do to what showed in the trace. I even added a little indentation mechanism, so that every layer deeper in the recursive parsing would show up as indented. It was what you might call “ghetto”, but it got the job done.
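The mechanism looks roughly like this — simplified, not the exact code from my parser, and using a stand-in recursive function to show the indentation at work:

```swift
// Sketch of the omgVerbose tracing: print each recursive entry,
// indented two spaces per level of recursion.
var omgVerbose = true
var depth = 0
var traceLog: [String] = []

func trace(_ message: String) {
    guard omgVerbose else { return }
    let line = String(repeating: "  ", count: depth) + message
    traceLog.append(line) // keep a copy for inspection
    print(line)
}

// A stand-in recursive function, playing the role of a parse function.
func countdown(_ n: Int) {
    trace("countdown(\(n))")
    depth += 1
    defer { depth -= 1 } // unwind the indent when this layer returns
    if n > 0 { countdown(n - 1) }
}

countdown(2)
// countdown(2)
//   countdown(1)
//     countdown(0)
```

Each deeper call indents further, so the shape of the recursion is visible at a glance in the output.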

With the two side by side, I traced what was happening much more easily — the whole point of doing this work. It took some of the stuff that was overwhelming my brain and set it outside, so I could focus on what was changing, and have an immediate reference to what had happened just before. I looked at what happened, what was happening, and made an assessment of what should happen next. Then I compared it to what did happen — and used that to find the spot where I made a programming mistake.

Do I really know what I know?

The second guessing hit me while I was debugging the interpreter. It was the worst kind of bug; the kind where the test sample seems to work — it doesn’t crash or throw an exception — but you get the wrong output. The code snippet was advanced, exercising the recursive nature of the language I was implementing:

fun fib(n) {
    if (n <= 1) return n;
    return fib(n - 2) + fib(n - 1);
}

for (var i = 0; i < 20; i = i + 1) {
    print fib(i);
}

I added my omgVerbose variable to show me the execution flow within the interpreter. The output from that short little function was incredibly overwhelming. That was the point at which the metaphorical light-bulb 💡 went off for me, making it clear and immediately obvious WHY people created debuggers in the first place. The printed execution trace was overwhelming, and a debugger could let me step through it, piece by piece, and inspect. I didn’t think I was up for making a debugger for my interpreter right then though, so I just fortified myself with a long walk to clear my head, some fresh coffee, and tucked in to trace it through.

I read through the trace, scrolling through my editor with the underlying code, seeing what was executed and what happened next, and trying to nail down where the failure was occurring. While going back to my implementation in Swift, I stared at it so long that the words lost meaning. I struggled so much with finding this error, with my own frustration for not understanding why it was failing, that I started to second guess how even the Swift language was working. For at least an hour, I’d managed to confuse myself with exceptions and optionals. At one point, I’d nearly convinced myself that try before a statement implied that the statement returned an optional value (which it very definitely does NOT). I did ultimately find the issue — it had nothing to do with either optional values or error handling. I had made a mistake in scoping when invoking a function within my interpreter – effects were bleeding between function invocations, but only showing when the same function and parameters were called from the same context. The recursive function I was testing happened to highlight this extremely well.
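To illustrate the class of mistake — with a simplified, hypothetical Environment, not my interpreter’s actual type — each function invocation needs a fresh child scope chained to its enclosing one; reusing the caller’s environment lets state bleed between calls:

```swift
// Simplified, hypothetical environment for an interpreter's scopes.
final class Environment {
    private var values: [String: Int] = [:]
    let parent: Environment?

    init(parent: Environment? = nil) {
        self.parent = parent
    }

    func define(_ name: String, _ value: Int) {
        values[name] = value
    }

    // Walk up the chain of enclosing scopes to resolve a name.
    func get(_ name: String) -> Int? {
        values[name] ?? parent?.get(name)
    }
}

// Correct behavior: a fresh environment per invocation, chained to globals.
func invoke(global: Environment, argument: Int) -> Int? {
    let local = Environment(parent: global) // new scope for this call only
    local.define("n", argument)
    return local.get("n")
}

let globals = Environment()
globals.define("answer", 42)

// The local binding resolves inside the call, but nothing bleeds
// into the enclosing scope afterward.
let result = invoke(global: globals, argument: 5)
```

My bug was the equivalent of skipping the fresh `local` and defining into a shared environment, so one invocation’s values leaked into the next.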

The point isn’t the flaw I made, but that anyone can get to second guessing themselves. I’ve been programming in one language or another for over 30 years, and programming in the Swift language for over five years. I am not the most amazing programmer I know, but I’m comfortably competent. Comfortable enough to ship apps written in it, or reach for the Swift language to explore a problem without worrying about having to learn it first. Even still, I second guessed myself.

Recovering From Brain Lock

I suspect it may be more common to encounter this situation when you’re trying to learn something new. I wasn’t using this to learn Swift, but to learn techniques about how to build a programming language and interpreter. It is puzzle solving – exploring a space, making some assumptions, and following the path of where that might go. When you find a mistake, backtrack a bit and take another path. You’re already in the mode where you’re questioning your assumptions, so it’s pretty easy to have that slip back to questioning broader or more baseline knowledge.

This is when it’s more important than ever to step back and “get out of your head”. The good ole ‘rubber ducky’ approach isn’t bad: explain what you think is happening to the rubber duck. Saying it aloud externalizes it enough to illuminate a poor, or mistaken, assumption or prediction. The same goes for explaining it to another person – typing, calling, or whatever – better still if they have a different perspective from yours and can ask questions you’re not asking.

Just as importantly, step away from the problem for a bit. It doesn’t get easier by just pushing forward when you’re frustrated and/or confused; it gets harder. Step outside and go for a walk, away from the keyboard and the problem. Listen to music, watch a show you enjoy, or go for the best recovery mechanism of them all: get a good night’s sleep before coming back to the problem, if you can.

Translating Java Into Swift

I’m working through the online book Crafting Interpreters, (which I highly recommend, if you’re curious about how such things are built). While going through it, I’m making a stab at translating the example code in the book (which is in Java) into Swift. This is not something I’m very familiar with, so I’m trying a couple different ways of tackling the development. I thought I’d write about what’s worked, and where I think I might have painted myself into a corner.

Replacing the Visitor Pattern with Protocols and Extensions

A lot of the example code takes advantage of classes and uses a fair number of recursive calls. The author tackles some of the awkwardness of that by aggressively using a visitor pattern over Java classes. A clear translation win is leveraging Swift’s protocols and extensions to accommodate the same kind of setup. It makes it extremely easy to define the interface that you’d otherwise wrap into a visitor pattern (using a protocol), and to use an extension to conform classes to the protocol. I think the end result is far easier to understand and read, which is huge for me – as this is not something I’ve tried before, and I find a lot of the elements difficult to understand to start with.

Recursive Enumerations Replacing Classes

I approached this one with the initial thought of “I’ve no idea if this’ll work”. There are a number of “representation” classes that make up the tree structure that you build while parsing and interpreting. The author used a bit of code to generate the classes from a basic grammar (and then re-uses it as the grammar evolves within the book while we’re implementing things). This may not have been the wisest choice, but I went for hand-coding these classes, and while I was in there I tried using Enumerations. I have to admit, the section in The Swift Programming Language on enumerations had an interior section on Recursive Enumerations that used arithmetic expressions as an example. That really pushed me to consider that kind of thing for what in Java were structural classes. I had already known about, and enjoyed using, enumerations with associated types. It seemed like it could match pretty darned closely.

For processing expressions, I think it’s been a complete win for me. A bit of the code — one of those structural classes — looks akin to this:

public indirect enum Expression {
    case literal(LiteralExpression)
    case unary(UnaryExpression, Expression)
    case binary(Expression, OperatorExpression, Expression)
    case grouping(Expression)
    case variable(Token)
    case assign(Token, Expression)
}

This lines up with the simple grammar rules I was implementing. Not all of the grammar you need works well with this basic structure, but I think it’s been great for representing expressions in particular.
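As a sketch of how the pieces fit together – with simplified stand-ins for the leaf types (LiteralExpression, OperatorExpression, Token, UnaryExpression are pared down here, not the real implementations) – building and walking a tree for `-(1 + 2)` looks something like this:

```swift
// Simplified stand-ins for the leaf types the enum references.
struct Token { let lexeme: String }
enum LiteralExpression { case number(Double) }
enum OperatorExpression { case plus, star }
enum UnaryExpression { case minus }

public indirect enum Expression {
    case literal(LiteralExpression)
    case unary(UnaryExpression, Expression)
    case binary(Expression, OperatorExpression, Expression)
    case grouping(Expression)
    case variable(Token)
    case assign(Token, Expression)
}

// A recursive function pattern-matches its way down the tree –
// the same shape the interpreter's evaluate takes.
func describe(_ e: Expression) -> String {
    switch e {
    case let .literal(.number(n)): return "\(n)"
    case let .unary(.minus, expr): return "(- \(describe(expr)))"
    case let .binary(l, op, r): return "(\(op) \(describe(l)) \(describe(r)))"
    case let .grouping(expr): return "(group \(describe(expr)))"
    case let .variable(tok): return tok.lexeme
    case let .assign(tok, expr): return "(= \(tok.lexeme) \(describe(expr)))"
    }
}

// The tree for -(1 + 2) builds up by nesting cases:
let sum = Expression.binary(.literal(.number(1)), .plus, .literal(.number(2)))
let tree = Expression.unary(.minus, .grouping(sum))
print(describe(tree)) // prints "(- (group (plus 1.0 2.0)))"
```

Because the enum is `indirect`, the compiler handles the boxing that recursive value types otherwise need, and exhaustive switches mean a new grammar case can’t be silently forgotten.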

Results with Explicit Failure Types Replacing Java Exceptions

Let me start off with: This one may have been a mistake.

UPDATE: Yep, that was definitely a mistake.

A lot of the author’s code uses Java’s exceptions as an explicit part of the control flow. You can almost do the same in Swift, but the typing and catching of errors is… a little stranger. The author was passing additional content back in the exceptions – using them to “pop all the way out of an evaluation” that could be many layers of recursion deep. So I thought, what the hell – let’s see how Swift’s Result type works for this.
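For contrast, the throwing version of that control flow looks something like this sketch in Swift – EvalError and countdown are toy stand-ins, not the interpreter’s actual types:

```swift
// A thrown error carries data out of deeply nested recursion in one hop,
// which is the "pop all the way out" behavior the book leans on.
struct EvalError: Error {
    let message: String
}

// Recursive evaluation that may be many frames deep when it fails.
func countdown(_ n: Int) throws -> Int {
    if n < 0 { throw EvalError(message: "went negative") }
    if n == 0 { return 0 }
    return try countdown(n - 1) // a throw below here unwinds every frame
}

do {
    _ = try countdown(5)  // succeeds
    _ = try countdown(-1) // throws
} catch {
    print(error)
}
```

The catch is that Swift’s `throws` is untyped (any `Error`), so recovering the extra payload means casting in the catch clause – part of what pushed me toward Result’s explicit failure type.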

I took a portion of the interpreter code (the bit that evaluates expressions) and decided to convert it to returning a Result instead of throwing an error. The protocol I set up for this started as the following:

public protocol Interpretable {
    func evaluate(_ env: Environment) throws -> RuntimeValue
}

And I updated it to return a Result:

public protocol Interpretable {
    func evaluate(_ env: Environment) -> Result<RuntimeValue, RuntimeError>
}

In terms of passing back additional data (as associated values within a RuntimeError – an enumeration), it worked beautifully. I could interrogate and pull out whatever I needed very easily.

The downside is that accessing the innards of a returned Result means switching on it and dealing with the internals as cases. In almost all cases, I found I wanted to simply propagate a failure up from internal functions, so what I really wanted was something more like a JavaScript Promise (or a Swift promise library) rather than a raw Result.
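For what it’s worth, Result does offer `flatMap`, which chains computations and short-circuits on the first failure – a bit like a Promise’s `then`. A sketch, with stand-in functions (parseNumber and halve are mine, not the interpreter’s):

```swift
struct RuntimeError: Error { let message: String }

func parseNumber(_ s: String) -> Result<Int, RuntimeError> {
    guard let n = Int(s) else {
        return .failure(RuntimeError(message: "not a number: \(s)"))
    }
    return .success(n)
}

func halve(_ n: Int) -> Result<Int, RuntimeError> {
    guard n % 2 == 0 else {
        return .failure(RuntimeError(message: "\(n) is odd"))
    }
    return .success(n / 2)
}

// Instead of nested switches, chain with flatMap; the first
// .failure flows through the rest of the chain untouched.
let good = parseNumber("42").flatMap(halve) // .success(21)
let bad = parseNumber("7").flatMap(halve)   // .failure: "7 is odd"
```

This only helps when the happy path is a straight pipeline, though – my evaluate code branches on the expression shape first, which is where the nesting came from.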

The Interpretable protocol applies to those recursive enumerations I showed a bit earlier, so the end result was indentations-from-hell, where I’m switching on this, that, and the other – making the code quite a bit more difficult to read than I think it needs to be. By itself, Result doesn’t add all that much, but when added to switching on the enumeration that made up the structural representation of the expressions, I had most of the “work” indented four to six levels in.

The upside is that everything is explicitly defined. Swift’s enforcement of exhaustive switch statements means the compiler “keeps me honest”. Just to share a taste of what I inflicted upon myself, here’s a portion of the evaluate function used within the interpreter:

extension Expression: Interpretable {
    public func evaluate(_ env: Environment) -> Result<RuntimeValue, RuntimeError> {
        switch self {
        case let .literal(litexpr):
            return litexpr.evaluate(env)

        case let .assign(tok, expr):
            switch expr.evaluate(env) {
            case let .success(value):
                do {
                    try env.assign(tok, value)
                    return .success(RuntimeValue.none)
                } catch {
                    return .failure(RuntimeError.undefinedVariable(tok, message: "\(error)"))
                }
            case let .failure(err):
                return .failure(err)
            }

        // … the remaining Expression cases follow the same pattern …
        }
    }
}
The layer that uses this, my Parser code, uses the throws-style logic, so I have a few places where I end up with an “impedance mismatch” (apologies – you get my way-old idiomatic Electrical Engineering references here sometimes). The Parser class uses thrown errors, and the Expressions it evaluates return Result types – so translation needs to happen between the two.
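Swift’s standard library does offer bridges in both directions – `Result(catching:)` wraps a throwing call into a Result, and `get()` returns a Result’s success value or rethrows its failure. A sketch of that translation layer, with stand-in names rather than my actual Parser API:

```swift
struct RuntimeError: Error { let message: String }

// A stand-in for a throwing evaluation in the Parser layer.
func throwingEvaluate() throws -> Int {
    throw RuntimeError(message: "boom")
}

// throws -> Result: Result(catching:) captures any thrown error
// as a .failure, giving Result<Int, Error>.
let wrapped: Result<Int, Error> = Result { try throwingEvaluate() }

// Result -> throws: get() unwraps a .success or rethrows the .failure,
// so Result-returning code can sit under a throwing caller.
func rethrowing(_ result: Result<Int, RuntimeError>) throws -> Int {
    try result.get()
}
```

Note that the throws-to-Result direction erases the concrete error type to `Error`, so the explicit RuntimeError typing gets lost at exactly the boundary where I wanted it – another reason the mismatch kept grating.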

I’m going to keep running with it to see it through, but the jury is definitely out on “Was this a good idea?” I’ve been half-tempted to just go all-in and make the Parser return a Result type as well, but honestly, code that ends up looking like:

switch result {
case let .success(value):
    // do something with the value
case let .failure(err):
    // often propagating the error:
    return .failure(err)
}

just felt so overwhelming that I stopped after converting expression evaluation.

Swift LOX

The language I’m implementing is called LOX, a creation of the author. The translation is in progress, but operational, and probably flawed in a number of ways. I’m not all the way through the book – next up is adding block syntax to the grammar, parser, and interpreter – but it’s coming along reasonably well.

If you want to look further at the translation atrocities I’ve committed, feel free – the code is available on GitHub. I’m going to carry it through at least another chapter or two to get the elements I wanted to learn for another project. Because of that, I expect it’ll evolve a bit from where it is today, but I may never fully implement the language. We’ll just have to wait and see.
