I’m starting, or more specifically re-starting, a project that I envisioned a couple of years ago. In some of the apps I’ve created, I’ve found it useful – sometimes critical to what I want – to provide small charts (visualizations) of data from the app. One example of this is a series of histograms that show the network latencies to each step in a route from my machine to a location on the internet. There are some great CLI tools that display the raw data in a terminal window (mtr, if you’re curious), but I think it’s easier to understand viewing the data as a series of histograms. In the case of mtr, you can see where network traffic slows down or doesn’t get through.

work in progress – histogram sequence from traceroute data

Looking at the example above, you can see a fairly consistent latency through to the end point, with the majority of the variability at that first step (in this example, it is my wifi router hub). The downside is that the code to display just that – not even getting to labels, axes, etc. – was pretty heavy.

A year ago, I tried drawing this kind of graph in SwiftUI, including displaying axes and labels. It turns out that with SwiftUI’s constraints, and the “ever expanding” side effect of using GeometryReader in a SwiftUI layout, it was darned tricky to get it working. I got something working, on which I could overlay SwiftUI shape objects with explicit offsets. The downside was that it wasn’t flexible, and small changes could break the layout dramatically. Last year, Apple updated SwiftUI with Canvas, which I used when generating the series of histograms above. That really opened the doors for me. Now I’d like to do the same using a SwiftUI-like syntax. It seems like it would be a natural fit, and it would make creating those little sequences of visualizations a hell of a lot easier.
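To give a feel for what Canvas enables, here’s a minimal sketch of the kind of drawing involved. The data and bucket layout are hypothetical stand-ins, not the traceroute data above:

```swift
import SwiftUI

/// A minimal histogram sketch using SwiftUI's Canvas.
/// `counts` is hypothetical sample data, one value per bucket.
struct HistogramSketch: View {
    let counts: [Int] = [1, 3, 7, 12, 8, 4, 2]

    var body: some View {
        Canvas { context, size in
            guard let maxCount = counts.max(), maxCount > 0 else { return }
            let barWidth = size.width / CGFloat(counts.count)
            for (index, count) in counts.enumerated() {
                // Scale each bar's height against the largest bucket.
                let barHeight = size.height * CGFloat(count) / CGFloat(maxCount)
                let rect = CGRect(
                    x: CGFloat(index) * barWidth,
                    y: size.height - barHeight,
                    width: max(barWidth - 1, 0), // leave a 1pt gap between bars
                    height: barHeight)
                context.fill(Path(rect), with: .color(.blue))
            }
        }
    }
}
```

Compared to composing shapes with GeometryReader and explicit offsets, the Canvas closure gets the size handed to it and draws imperatively, which sidesteps the layout feedback problems entirely.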

I know there’s other charting libraries out there – and good ones. The “ole standby” that many people use is Daniel Gindi’s Charts, but even a quick search on Swift Package Index shows other folks offering their own takes on some of these libraries, several of which are interesting for the number of stars they’ve collected. (Majid’s SwiftUICharts in particular looks interesting to me, fits in well with SwiftUI, and appears to be stable).

I’ve cobbled together previous charts and plots with everything from Excel to the D3.js library in JavaScript (which is a really amazing library). That background had me following a trend towards describing charts more declaratively, which fits in well with how SwiftUI works. I dug into Observable’s Plot and the Vega-Lite project, and the research papers behind them. Both have a very declarative style of describing a chart, which fits in wonderfully well with the kind of declarative syntax that SwiftUI uses. Seeing what I can do to enable that kind of functionality, but using Swift, is what I’m after, and what I’d really like to be able to use.

Part of my goal in setting this up is to learn the “ins and outs” of what it takes to enable this kind of thing using result builders. I don’t feel like I have a solid understanding right now, and the way I learn best is by doing – so I’m aiming to learn and do in public. I’d like to work out how to tell someone else how they could solve this kind of problem, and right now I’m far from it. I’m guessing the solution likely includes not only result builders, but careful use and design of generics and protocols as well. I’m planning on writing about my development process (and my learning!) as I tackle this. Rather than waiting for when it’s all “good and done” (assuming I get there), I have set this up to develop completely in the open. I do expect it’ll get messy, and it may never come to anything, but I figured it was worth a shot.

If you think tackling something like this might be fun, or just want to poke your head in to watch how I tackle it, you’re welcome to join in. I would love to have others to bounce ideas off of, or help with the coding if you’re so inclined. If you’re interested in following along, I’m starting with some of GitHub’s community tools on a repository. I set up an organization called SwiftViz, and made the repository Chart to house this project. I’ve enabled Discussions on the Chart repository – no idea how well (or badly) that will work out – as a starting point. I do expect it to evolve as the project goes. I thought about just writing about the efforts on this blog, but a blog doesn’t really end up being very interactive – you can’t have much of a discussion on it – and I’d like to at least entertain the possibility that someone else might want to collaborate with me on this.

It’s early days, and it will undoubtedly get messy, but if you’re interested, drop into the repository discussions and say hello, or reach out to me on Twitter if that’s easier.

RealityKit on macOS

Guessing which frameworks are going to be updated, and which aren’t, is — I think — a part of developing software on Apple platforms. Sometimes it’s clear based on what’s been updated over the past three or four years, as is the case with RealityKit. I started my experiments with SceneKit, another lovely high-level API to 3D graphics, but wanted to utilize more of RealityKit for some of my procedural graphics work (Lindenmayer trees).

RealityKit has a huge API surface, only some of it directly related to 3D rendering. Other parts include physics simulations (including rigid body collisions), an ECS, and — as of WWDC’21 — methods for procedural geometry/mesh creation. RealityKit is fundamentally an API that’s meant to mix 3D rendered content into live images from the real world. Because of that, the camera for rendering 3D content is tightly controlled to match the physical position and orientation of the actual camera on a device. So it isn’t surprising that you might think, “Hmmm… how’s that going to work on macOS – which only has a forward-facing camera, and few to none of the fancy motion and depth sensors an iOS device has?”

The good news is that RealityKit, as of macOS 10.15, added the capability to run on macOS. It includes an ARView for macOS (even though there’s no ARKit) inside RealityKit. It’s more constrained than the UIKit version, but it’s there. After a comment from James Thomson on a podcast last year about getting that view working on macOS, I wanted to do the same. It’s relatively easy to get an ARView showing up – but figuring out what to do with it from there is not as obvious. It turns out that the macOS-only ARView doesn’t respond to mouse, touch, or trackpad input – so user interactions don’t influence the camera position. You can’t move around or look at different things without code that explicitly moves the camera. If you use the UIKit ARView on macOS built with Mac Catalyst, it’s a different story. This bit is about mixing AppKit and RealityKit together without using Mac Catalyst and the UIKit APIs.

Once I’d convinced myself this wasn’t entirely a fool’s errand, I wrangled up the code to enable user interaction to move the camera. I made my own subclass of ARView and added keyboard, trackpad, and mouse support to move the camera. It isn’t identical to how SceneKit works, but it’s sufficient to display content and look around the environment. I wanted something where I could load up and view content using RealityKit, but running on macOS. In particular, I wanted to load up a USD file and get sufficient screen captures to render the set of images into an animated GIF. As a side note, there’s no easy and consistently clean way to describe a 3D object in documentation – but a lot of the HTML-based mechanisms (including DocC, which I’ve been writing about lately) have no trouble displaying an animated GIF.

The end result took a bit, but I got there (the fish in question is from WWDC’21 sample code):

While trying to get this working, I learned that the ARView implementation for macOS that comes with RealityKit appears to swallow up mouse events, which meant there wasn’t an easy way to control it from SwiftUI views using gesture recognizers. I switched to the older mouse and keyboard interaction APIs that AppKit supports, making a subclass to add the various overrides that provide mouse and keyboard event handling. After I got it all working, I extracted the SwiftUI wrapper. Finally, I put the subclass into its own Swift package, available for anyone to use:

CameraControlARView (API Docs).

If you’re interested in using RealityKit on macOS, you’re welcome to use it – or to pull it apart to see how it works and use that to do something you prefer. My goal in making this tiny open-source package is to help the next soul who’s stumbling through figuring out how to get started with RealityKit and wanting to experiment a bit on macOS to see how things work.

Design decisions and technical choices

I made a (perhaps) odd choice when I assembled the SwiftUI container for it – I set it up so that you create the instance of my subclass either in a SwiftUI view, or hold it outside and pass it through. The point was to have a reference to the ARView (and its properties) available from the declarative space of SwiftUI, so that you could interact with the 3D scene and the entities within it from the SwiftUI control declarations. I didn’t see another straightforward path to creating the scene (such as a SceneKit Scene) externally and passing it down in, and in order to do on-the-fly file loading (such as when you drop a USDZ file into the view), I needed access to that scene.
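The shape of that container looks roughly like the following sketch. The names here are illustrative, not the actual API of the CameraControlARView package:

```swift
import SwiftUI
import RealityKit

// Sketch only: a bare-bones NSViewRepresentable wrapper around ARView.
struct ARViewContainer: NSViewRepresentable {
    // The ARView is created outside this wrapper and passed through,
    // so declarative controls can reach the scene and its entities.
    let arView: ARView

    func makeNSView(context: Context) -> ARView { arView }
    func updateNSView(_ nsView: ARView, context: Context) {}
}

struct ContentView: View {
    // Owning the view instance here keeps a reference available
    // to buttons and gestures declared elsewhere in SwiftUI.
    private let arView = ARView(frame: .zero)

    var body: some View {
        ARViewContainer(arView: arView)
            .onTapGesture {
                // e.g. load a USDZ entity into arView.scene here
            }
    }
}
```

Because the SwiftUI view holds the reference, anything declared in `body` can reach `arView.scene` directly, which is what makes drop-to-load file handling workable.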

The other design choice I made was to have the camera motion support two different modes. The first is what’s called “arc-ball” – it orbits around, looking at a specific point in 3D space from a variety of rotations. You can imagine it as following arcs around a sphere, centered on a specific point. I also wanted a mode where I could move about freely, which I called “first person” mode. In first person mode, you can move the camera forward, back, and side to side, as well as turn to look elsewhere. There are a couple of other popular 3D movement modes out there (turn-table being one I considered, but decided not to attempt), but I stuck to these two.

The rest is all pretty straightforward. I’d fortunately become more comfortable with the gems in the Accelerate/simd frameworks, which provide lovely, quick tools for creating quaternions and matrices. I still ended up embedding a few “build this rotation matrix from this angle around that axis” kinds of methods – stuff that’s under the covers in SceneKit, but I’m not aware of a “central” implementation of it in the platform tooling otherwise. It’s not “hard” per se, just tricky to make sure you get correct.
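For illustration, here’s a sketch of the kind of helpers I mean, using simd. These are simplified stand-ins, not the package’s actual implementation:

```swift
import simd

// Build a 4x4 rotation matrix from an angle (radians) around an axis.
// simd_quatf does the heavy lifting; the conversion to a matrix is the
// sort of small helper that ends up written by hand.
func rotationMatrix(angle: Float, axis: SIMD3<Float>) -> simd_float4x4 {
    let q = simd_quatf(angle: angle, axis: simd_normalize(axis))
    return simd_float4x4(q)
}

// Arc-ball style: compute a camera position orbiting a target point
// at a fixed radius, given the current orbit rotation.
func arcballPosition(target: SIMD3<Float>,
                     radius: Float,
                     rotation: simd_quatf) -> SIMD3<Float> {
    // Start behind the target on +Z, then rotate that offset.
    let offset = rotation.act(SIMD3<Float>(0, 0, radius))
    return target + offset
}
```

The tricky part isn’t the math itself – it’s keeping handedness, rotation order, and normalization consistent everywhere, which is exactly what SceneKit quietly handles for you.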

Generating Animated GIF from 3D models

I went ahead and put the macOS app project up publicly on GitHub as well: Film3D. It’s far from a complete app – it doesn’t even have a placeholder icon (as I’m writing this), and the user interface is a ludicrous “just get it working” sort of thing – but it works to load a USDZ file, set the camera, and capture images into an animated GIF.

There’s more I’d like to do with it, but I’m not planning on selling it – I just wanted something that does what it does so that I can use it to create images for documentation.

DocC plugin PSA

In my last two posts about using DocC, I’ve been implicitly encouraging the use of the new swift-docc-plugin for the Swift package manager. I did so knowing that it is in beta and the underlying APIs are evolving – and in my example scripts I include how to do the same thing with commands that directly invoke the Swift compiler and DocC to generate documentation.

With the release of Xcode 13.3 beta 3 today, the API for package manager command plugins (of which swift-docc-plugin is a lovely example) has changed slightly – and as such, what’s currently on the main branch isn’t compatible. There’s a pending pull request which I believe will resolve the issue, but if you’re using the plugin, you should be aware of it.


The pull request was merged on March 1st, which resolves using the plugin with Xcode 13.3 beta 3, and Ethan posted some nice detail on the Swift Forums showing what’s changed when you use the plugin after the Swift Package Manager API updates. The big piece is that the --target option moved from in front of generate-documentation to after it, and there are additional options for specifying a specific product – useful if you have a Swift package that generates multiple products.

In the meantime, using docc on the command line directly does the job well, and hopefully soon we will have a fix added to the plugin for the updated API. I also hope that a release will be available before too long, but that will only happen when all the moving parts have stabilized. It is a lovely tool, and we’re in a beta period – so I’m not surprised that there are a few bumps along the road. I’m sure the end result will be better for the iteration.

In case you need the earlier steps laid out, here’s a script for generating documentation into a directory for static hosting:


set -e  # exit on a non-zero return code from a command
set -x  # print a trace of commands as they execute

rm -rf .build
mkdir -p .build/symbol-graphs

$(xcrun --find swift) build --target RSTree \
    -Xswiftc -emit-symbol-graph \
    -Xswiftc -emit-symbol-graph-dir -Xswiftc .build/symbol-graphs

# Enables deterministic output
# - useful when you're committing the results to host on github pages

$(xcrun --find docc) convert Sources/RSTree/Documentation.docc \
    --output-path ./docs \
    --fallback-display-name RSTree \
    --fallback-bundle-identifier com.github.heckj.RSTree \
    --fallback-bundle-version 0.1.0 \
    --additional-symbol-graph-dir .build/symbol-graphs \
    --emit-digest \
    --transform-for-static-hosting \
    --hosting-base-path 'RSTree'

Tips for getting the most out of DocC

1 – Start by adding doc comments to your types.

The starting point is adding a single short summary sentence as a comment (using ///) for each public type in your library or app. Feel free to add more: if you add a “blank line” (meaning another line with /// but nothing else on it) followed by additional content, that content appears as the “Discussion” or “Overview” for the type. This is a great place to drop in a code snippet showing how to use that type, alongside the discussion of what it is and how to use it.
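Put together, a doc comment following that pattern looks like this. The type and its members here are hypothetical, purely to show the structure:

```swift
/// A single network hop measured along a traceroute path.
///
/// Use `Hop` to record the latencies observed for one step in a route.
/// For example:
///
///     var hop = Hop(address: "192.168.1.1")
///     hop.record(latency: 0.023)
///
/// The blank `///` line above separates the one-line abstract from
/// this discussion section.
public struct Hop {
    public let address: String
    public private(set) var latencies: [Double] = []

    public init(address: String) {
        self.address = address
    }

    public mutating func record(latency: Double) {
        latencies.append(latency)
    }
}
```

The first line becomes the abstract shown in lists and quick-help; everything after the blank `///` line becomes the Overview.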

The reason I picked this as the starting point is that everything you add immediately becomes available through Xcode’s quick-help. You get maximum immediate benefit, as it provides content to the panel that opens when you option-click on a type.

The initial single line doc comment is referred to as the abstract. When a symbol (whether it’s a class, struct, protocol, property, or method) doesn’t have an abstract, DocC generates one for you. And that is where the much-commented-on No Overview Available comes from.

2 – Add doc comments to your public methods and properties.

The important bit here isn’t default initializers or functions that enable conformance to common standard library protocols. This is about the elements on your type (struct, class, enum, or whatever) that make it unique and special. Aim to get to the core of what the symbol does – and most importantly, why you’d use it and what you need to use it. Sometimes it’s simple – a single-line abstract describing what it represents is more than sufficient. For those times, a single-line summary (the first line with ///) does wonders. If it’s more complex, add detail after a break (a line with /// and nothing else as a doc comment), then talk through how to use it, or what to expect.

If you’re writing the doc comments using Xcode, it provides a handy “generate a doc comment” command (command-option-/ being the default key mapping to that Xcode gem) that creates a stub. If you do this for a function or method, it includes the parameters formatted correctly for DocC and placeholders for you to fill in.
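Filled in, that stub ends up looking something like the following. The function is a made-up example to show the parameter and return formatting DocC expects:

```swift
/// Returns the median latency, in seconds, from a set of samples.
///
/// - Parameters:
///   - samples: The raw latency measurements, in seconds.
/// - Returns: The median value, or `nil` if `samples` is empty.
public func medianLatency(samples: [Double]) -> Double? {
    guard !samples.isEmpty else { return nil }
    let sorted = samples.sorted()
    let mid = sorted.count / 2
    if sorted.count % 2 == 0 {
        // Even count: average the two middle values.
        return (sorted[mid - 1] + sorted[mid]) / 2
    }
    return sorted[mid]
}
```

Xcode’s stub gives you the `- Parameters:` and `- Returns:` skeleton with placeholders; your job is just to replace the placeholders with real descriptions.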

If you stop right after doing this, you have the basic, effective documentation that is available to anyone loading your framework or package. But there is still a lot you can do to make it easier to use your library.

3 – Add a documentation catalog and a library overview.

Adding a documentation catalog is the 3rd thing on this list – that’s not a mistake. Everything above provides immediate benefit while you (and any collaborators) work on, or with, the source code. I think of the Documentation Catalog as being akin to the idea that you should include a README file with a project. It provides an introduction and framing to your library, and is the heart of where to learn more.

The documentation catalog is a directory with a collection of goodies. It houses the top-level overview of your library, and any assets (images, articles, and more — should you go that far). When you add a catalog and then use Product > Build Documentation within Xcode, those docs appear in the Xcode documentation window. You can export an archive for others to use from that window. There are also terminal commands to generate documentation that’s suitable for static hosting. I created an earlier post (Hosting Your Swift Library Docs on Github Pages) that goes into detail on the steps of doing just that.

When you create a documentation catalog using Xcode, its template includes a markdown file with a loose structure for what to add. It is intentionally, and specifically, structured for DocC. I recommend writing the content under the Overview heading first, and then going back and writing the summary later. The bottom section of the template (the text that starts with ## Topics and has a placeholder 3rd-level heading with a generic symbol underneath) is the placeholder for you to organize the symbols of your project – the next tip.

4 – Add the top-level curation for your library.

In the world of DocC, curation is the term for creating organization for a set of symbols. When I suggest adding top-level curation, I’m referring to adding organization to the top page of your library – the one that includes the overview. This is the text beneath the overview that starts with ## Topics, followed by one or more 3rd-level headers (###) with various grouping names.
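For a hypothetical library, the bottom of that top-level page might look like the following. The module, article, and symbol names are all made up to show the shape:

```markdown
# ``MyLibrary``

A one-line abstract of the library.

## Overview

A few paragraphs introducing the library.

## Topics

### Essentials

- <doc:GettingStarted>
- ``MyLibrary/Chart``

### Supporting Types

- ``MyLibrary/Axis``
- ``MyLibrary/Label``
```

Each ### heading becomes a named group on the rendered page, listing the abstracts of the symbols and articles curated under it.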

DocC started with collating only public types, methods, protocols, and so on for libraries and frameworks. As it’s evolving, the project is starting to add in the capability for documenting apps as well. In the latest Xcode 13.3 beta, when you add a catalog and build docs for an app, it includes internal types as well as public.

How you organize the symbols is subtle, but important. We have a tendency to scan down from the top – so organize the most important, or most frequently used, elements near the top, and arrange symbols of less importance beneath them. DocC always builds a structure for you, even if you don’t provide one. When you provide some structure, it takes that into account first, and fills in any missing gaps. DocC places the symbols you don’t include in your structure beneath what you provide, using a generic organization grouped by the kind of symbol.

That generic structure isn’t too bad. It’s certainly understandable for many developers, so if you don’t get to this point, you still have useful docs due to the abstracts and overviews — that’s a win!

5 – Add a walk-through article.

Add an article that provides a walk-through of the most common, basic usage of the library. The Documentation article template from Xcode looks suspiciously like the top-level organization page, but the intention of an article is quite different. It still has the structural pattern of a one-line abstract at the top (the placeholder that reads “summary” in the template). Beneath that is a ## Overview heading, and then a placeholder for a summary of the article. When I write an article, I leave all that alone and jump down into the content.

A single ### heading after the overview section, with content underneath it, is where to start. Group what you’re showing with 3rd-level headings – frequently I’ve seen this organized by task or step – and include the detail underneath. Intermix text and code snippets – and images, if you can make the time and it makes sense. The point is to provide a quick overview of how to use the library, or at least one way of getting started with it. You may find there’s a lot you could say – too much for one article. That’s fine – in fact, that’s wonderful – just make more articles and add them in.

As you add an article, curate it into your documentation. By which I mean: go back to that top-level document and add a link to your article. Article links look like <doc:Your_Article's_File_Name>. When you build the documentation, the rendered content dropped in place there is the top-level heading of your article, which becomes a link over to your content. An article is only useful when you can easily get to it.

I debated whether this should come before the curation suggestion or not. It’s more work, certainly, but there is a ton of value. If you can’t, or don’t want to, spend a lot of time on this, focus on getting out something that shows someone wanting to use your library how to do it. There’s so much more that can be said about articles and organization, and opinions on what’s good and bad – but focus first on showing the key elements of your library, and you can build from there.

6 – Add strategic code snippets.

Hopefully your article had useful, direct code snippets in it. The next step to making that kind of directly usable content easier to find is to add code snippets to the overviews of some of your symbols. Take the time to pick the most important, or perhaps the most common, ways your library is used. I like to scribble that down on the side, and build my own little stack-rank of what I think would be most interesting to someone wanting to use my library. Then go through and add code snippets showing how to use that type, method, or function.

A pattern that I’m using at the moment is to create a unit test function that doesn’t actually test anything other than that the code I just typed compiles correctly. When I’m making code snippets – both for articles and for overviews in the reference section – I add a test case and put the code snippet in there. It keeps me honest with parameters and explicit formatting, and – most importantly, I’ve found – when the library evolves and something changes, you get an immediate notice in the tests that something in the documentation needs an update as well. I often put a comment in those “unit tests” indicating where I used the snippet, for those times when a test fails and I need to sort out what needs a tweak or update.
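A sketch of that pattern, with a trivial stand-in snippet and a hypothetical file path in the comment:

```swift
import XCTest

final class DocumentationSnippetTests: XCTestCase {
    // A comment noting where the snippet appears helps when a test
    // breaks and the docs need a matching update. (Hypothetical path:)
    // used in: Sources/MyLibrary/Documentation.docc/GettingStarted.md
    func testGettingStartedSnippet() throws {
        // -- the code snippet, exactly as it appears in the docs --
        let latencies = [0.023, 0.019, 0.048]
        let worst = latencies.max()
        // -- end snippet --

        // Not really "testing" anything beyond compiling and running.
        XCTAssertEqual(worst, 0.048)
    }
}
```

If the library’s API changes out from under the snippet, this test stops compiling, which is exactly the signal you want.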

7 – Add curation to the rest of your types.

That same organization you provided for the top level of your library? Yeah – this is doing it for the pieces below that top level. If you have a large class or struct, or types that include other types, this is hitting at the heart of making the list of symbols within them readily understandable.

While you could recursively do this through all your types – and a person with completionist tendencies (which very much describes me) might want to – focus on the top-level items and the biggest collections: the types in your library that contain the most other symbols. When I’m doing this, I work down the types in the same order I added them to the top-level structure. Assuming you ordered that most-important-first, as long as you got it pretty close, you’re getting the most valuable pieces done first.

For types with a small number of sub-symbols (properties, enumeration cases, etc), I’ll include the organization of that symbol directly in the source code. But for a symbol with more than 4 or 5 symbols beneath it, I find it easier to manage by making an extension page in the Documentation catalog and including the curation/organization content there.

The process of adding an extension page is straightforward: in your documentation catalog (the directory ending in .docc), add a directory named Extensions, and then add markdown files to it. Xcode includes an extension template file type you can use. I name each file for the symbol it organizes. Inside the extension file, include the curation. The content will look something like the following (example borrowed from in-the-works documentation updates for Nick Lockwood’s Euclid library):

# ``Euclid/Angle``

## Topics

### Creating Angles

- ``Angle/init(degrees:)``
- ``Angle/init(radians:)``
- ``Angle/degrees(_:)``
- ``Angle/radians(_:)``

### Inspecting Angles

- ``Angle/degrees``
- ``Angle/radians``

### Common Angles

- ``Angle/zero``
- ``Angle/halfPi``
- ``Angle/pi``

The reason I like using an extension file is that it doesn’t consume a lot of vertical space in the source code. I think it’s useful to have the abstract and overview as doc comments in the source, next to what they describe. The organization is less valuable to have there, though, and it has a cost to the readability of the source code. It also isn’t the kind of thing you need to update when you’re tweaking a method’s signature or internals.

What are the symbols for my project?

When I’m tackling this curation process while adding documentation, I find it super helpful to know the available symbols and their links. A symbol link is a piece of text representing a symbol that starts and ends with double back-ticks. For example, ``init(_:)`` is a symbol link that references an initializer. When you have two symbols with the same name but different types or signatures, DocC adds a little hash value onto the end to disambiguate them.

Unfortunately, getting a complete list is kind of a pain right now. Xcode does a pretty good job of auto-completing symbol links, so you can explore with auto-completion, but I recently ran into some cases where it missed a few.

One way you can find the values is by running xcrun docc preview on the command line, and navigating through the rendered docs to the symbol you’re trying to find. The URL of that page has the symbol details at the end. If the symbol has a disambiguating hash code, it’s also reflected in the URL. But that’s a lot of clicking around to get to a single value.

Another way to get the values is to generate the documentation with the --emit-digest option, and extract what you want from there. I learned this from the DocC team members on the Swift Forums. The --emit-digest option adds a file to the generated content named docs/linkable-entities.json. It’s a JSON file that includes a documentation reference for every symbol published in your docs as a documentation link, and a substring of each documentation link is what DocC uses as a symbol link.

I knocked together a shell script snippet that uses the command line tools jq and other shell commands to generate a sorted list of all the possible symbols:

cat docs/linkable-entities.json \
  | jq '.[].referenceURL' -r > all_identifiers.txt

sort all_identifiers.txt \
  | sed -e 's/doc:\/\/SceneKitDebugTools\/documentation\///g' \
  | sed -e 's/^/- ``/g' \
  | sed -e 's/$/``/g' > all_symbols.txt

The sed commands do regular-expression replacements that convert the referenceURLs into symbols. They strip the doc://YourModuleName/documentation/ prefix (the example above being pulled from a tiny library of mine) from the list of URLs, and wrap each line in double back-ticks.

I’ve started using the DocC plugin for generating documentation suitable for hosting on GitHub Pages, but you don’t need it to get this file generated. Running docc convert with --emit-digest generates the same file, placed in the output directory you specify for the conversion. If you’re lost on how to do all that, take a look at one of the scripts that I’m using: docbuild.bash. I started doing this before the DocC plugin became available, so it includes all the steps, from building the source to converting the symbol graphs using DocC.

Hosting your Swift Library Docs on Github Pages

The beta for Xcode 13.3 dropped yesterday. With it came a released version of Swift 5.6 and a bunch of neat additions that the 5.6 release enables. The feature I was watching closely was two-fold: the capability for plugins to extend the commands available within Swift’s package manager, and a static hosting option added to the documentation compiler (DocC) that Apple announced and open-sourced this past year.

The two elements came together with the initial release of swift-docc-plugin, a Swift package manager plugin that adds the ability to generate documentation on the command line with the command swift package generate-documentation. Before I go any further, I should note that when the DocC team put this together, they knew a lot of folks wanted to host their content as if it were static HTML – so they took the time to document HOW to do that, and best of all, they hosted _that_ documentation on GitHub Pages (I’m assuming using their own tools to do it). The articles, all of which are sourced from the content in the git repository, are hosted on GitHub Pages.

This works, and quite nicely – I pushed up documentation for the Lindenmayer library I’ve been working on using the following command:

swift package \
    --allow-writing-to-directory ./docs \
    --target Lindenmayer \
    generate-documentation \
    --output-path ./docs \
    --emit-digest \
    --disable-indexing \
    --transform-for-static-hosting \
    --hosting-base-path 'Lindenmayer'

BUT… there are some quirks you should be aware of.

First, the ability to statically host the content does NOT mean that it is static content. DocC is a different take on managing documentation from almost all of the similar, prior tools. Where Doxygen, JavaDoc, Sphinx, Hugo, AsciiDoc and others read through the source and generate HTML, that’s not what DocC does.

While the content dumped out for static hosting ultimately includes HTML, DocC relies on two other critical components to do its work. It starts with the compiler generating what’s called a symbol graph: a file that contains all the symbols – types, properties, type aliases, etc. – in your source code, and the relationships between them. DocC then tweaks and adjusts that graph of symbols, and more specifically mixes it with additional (authored, not automatic) markdown files from a directory – the documentation catalog. If the markdown files in the documentation catalog, or the original source, don’t provide content or structure for the relationships in the symbol graph, DocC builds up a default set and attempts to provide a default structure. For what it’s worth, this default content is where the dreaded “No Overview Available” comes from.

The resulting combination is serialized into a bunch of JSON files, stored in the filesystem. Those JSON files contain the information needed to render the content – but the rendering itself happens in another project, a Vue-based JavaScript single-page app: swift-docc-render.

The JSON files allow this documentation content to be consumed by more than just browsers. I think it’s a reasonable assumption that this is what drives the Xcode quick-help summary information.

Back to DocC and static hosting – what this means is that the content that gets dumped into the “docs” directory isn’t just plain HTML – it’s the JavaScript and all the associated content as well. The effect this has is that while it LOOKS like you could just pop open the index.html in that directory and see your content, that’s not going to work. Instead you’ll get a blank page, and if you happen to look at the JavaScript console, you’ll see errors reporting that the requested URL wasn’t found on the server.

It also means that the content isn’t available at the root of the GitHub pages you pushed. The root for my Lindenmayer project is – but going there directly doesn’t show anything. Instead you need to go a couple of directories down. Also note that the name of the library is lower-cased; the first thing I tried was, which didn’t work either.

The key thing is to be aware that the URL you want to point people to has that documentation/your_library_name_lowercased extended on it. Oh – and that first repetition of the name is the GitHub repo, in case you don’t have the benefit of having them be the same. For example, for the Swift Automerge library, the repository is automerge-swift, while the package name is automerge. The URL for the hosted pages on GitHub then becomes:


The example generate-documentation command I provided above has extra bits in it that you probably don’t need, in particular the --emit-digest option. This option generates an additional JSON file at the top level of the content (linkable-entities.json) which contains a list of all the (public) symbols within the library. I’ve intentionally chosen to include this file in the content I’m hosting on GitHub pages (at
, although it’s not (to my knowledge) used by the single-page app that displays the HTML content. The formal content definition (written as an OpenAPI spec) is defined in the DocC repository at The short form is that it provides a list of all the symbols that I can use to reference symbols within that content, which otherwise proved hard to get.
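For reference, the rough shape of a generate-documentation invocation like the one I’m describing, run through the Swift-DocC package plugin, looks like the following – treat the target name, hosting base path, and output directory as placeholders rather than my exact command:

```shell
# Sketch of a generate-documentation invocation including --emit-digest.
# "MyLibrary" and "MyRepositoryName" are placeholders.
swift package \
    --allow-writing-to-directory ./docs \
    generate-documentation \
    --target MyLibrary \
    --emit-digest \
    --disable-indexing \
    --output-path ./docs \
    --transform-for-static-hosting \
    --hosting-base-path MyRepositoryName
```

The --hosting-base-path value is what ties the generated links to that repository-name segment of the GitHub Pages URL.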

If you’re curious what this looks like, try the following command (assuming you have curl and jq installed):

curl -sL | jq -r '.[].referenceURL'

The output looks something like:


These are the full reference links used within DocC, and a portion of these reference links is what can be used as symbols within the markdown files in the documentation catalog.


maps to the symbol:


In a few cases, those little hash extensions (the -5b948 bit in the example) are tricky to find. Xcode does a reasonable job of dropping them in using code completion, but I’ve found a few bugs where they were hard to ascertain. This JSON file with all the symbols is exactly what I needed to get a full list.

I haven’t (yet?) figured out a means to transform these doc:// resource URLs into web URLs, but I’ve got a notion there’s a means to enable that for cross-linking documentation when libraries build on, or depend on, other libraries. Maybe that’s ultimately what this digest is for, but if so – it’s not a feature that’s easily usable yet from DocC. I’m still exploring the internals of DocC, but there’s an idea of a resolver – which can be an external process or service – that provides this mapping for DocC to build the links (I think?) it needs to enable that functionality.

Looking for help to solve a specific 3D math/trig problem

I’ve been working on a Swift library that implements Lindenmayer systems, but the past week has me deeply stuck on a specific 3D math problem. There’s a specific 3D rendering command described in The Algorithmic Beauty of Plants – the ‘$’ character in that work – that has a special meaning which has taken me quite a bit to sort out. The idea is that as you progress through rendering, this particular command rotates around the axis that you’re “growing in” such that the “up” vector (which is 90° pitched up from the “forward” vector) is as close to vertical as possible.

It turns out this little critter is critical to getting tree representations that match the examples in the book. I haven’t solved it – I thought I had earlier, but I managed to delude myself, so now I’m back to trying to sort out the problem. The course of solving this has led to some nice side effects – such that I now have a nice 3D debugging view for my rendered trees – but I haven’t yet finalized HOW to solve the problem. In years past, if I saw someone stuck this badly on a problem, I’d advise them to ask for help – so that’s what I’m doing.

I’m using a 3D affine transform (a 4×4 matrix of floats) to represent a combination of translations and rotations in 3D space – doing a sort of 3D turtle graphics kind of thing. From this state I know the heading I’m traveling in, and what I’d like to do is roll around that axis. The problem I’m trying to solve is determining the angle (let’s call it θ) to roll that results in one of the heading vectors being as close to vertical (+Y) as possible while still on the plane of rotation that’s constrained by the “heading” axis.

My starting point for the “forward” heading is +1 on the Y axis, with a local “up” heading of +1 on the Z axis.

The way I was trying to tackle this problem was applying the built-up set of translations and rotations using the 3D affine transform and then figuring out if there was trigonometry I could use to solve for the angle. Since I know the axis around which I want to roll (or can compute it by applying the transform to the unit vector that represents it – the (0,0,1) vector), I was looking at Rodrigues’ rotation formula, but my linear algebra (matrix math) skills and understanding are fairly weak – and I couldn’t see a path to solving that equation for θ given known vectors for the heading, or vectors on the plane that can be used to define a cross product that is the heading, as well as knowing the world vector.

I’m heading towards trying to solve this by iteratively applying various values of θ and homing in on the solution based on the Y component of the resulting vector. I can apply the roll as an affine transform that I multiply onto the current transform, then test the result on a unit “up” vector – rinse and repeat to find the angle that gives me the best Y component value.

I’d like to know if there’s a way to solve this directly – to compute the value of θ that I can use to do the rotation immediately, rather than numerically iterating toward the solution. I wasn’t sure how active the GameDev Stack Exchange is (or if I’d get a response), but I tried asking:

If you’re familiar with 3D graphics, rotations, transforms, or the mathematics of solving this sort of thing, I’d love any input or advice on how to solve this problem – even just some concrete knowledge of whether this problem is amenable to a direct solution, or if it’s the kind of thing that requires an iterative numerical solution like I’m considering.

UPDATE: Solved!

I got an answer from a friend on Slack who saw this, and I’ve mostly implemented it. There are a few numerical instability points I need to sort out, but the gist is: yes, there’s definitely a way to explicitly calculate the angle of rotation needed. The summary of the answer that put me onto the right path is: “Look at this problem as projecting the vector you want to roll towards onto the plane that’s the base of the heading. Once it’s projected, you can compute the angle between the two vectors, and that should be what you need.”

The flow of the process is as follows:

  • start with a unit vector that represents the ‘up’ vector that I want to compare (+Z in my case)
  • pull the 3×3 rotation matrix from the affine matrix, and use that to rotate the ‘up’ vector. We’ll be comparing against this later to get the angle. Because it’s explicitly set as a unit vector in the ‘up’ direction, we already know it lies on the plane whose normal is our heading vector.
  • use a similar technique to rotate the ‘heading’ vector (what starts as +Y for me) by the rotation matrix.

(a quick double-check I did here in my testing was that these two remained 90° apart after rotation – primarily to verify that I didn’t screw up the rotation calculation)

  • Now that we have the heading, we use it as the normal vector of the plane onto which we want to project our two vectors, and from which we can get the desired angle. One vector (the rotated ‘up’ vector) is already on this plane. The other vector is the +Y world vector – the “as vertical as possible” component of this.

The formula for projecting a vector on a plane is:

vec_projected = vector - ( ( vector • plane_normal ) / plane_normal.length^2 ) * plane_normal

You can look at this conceptually as taking the vector you want to project and subtracting from it the portion of the vector that corresponds to the normal vector, which leaves you with just the component that’s aligned on the plane.

🎩-tip to Greg Titus, who referred me to ProjectionOfVectorOntoPlan at MapleSoft.

  • Once both the vectors are projected onto that base plane, then use the equation to calculate the angle between two vectors:

    θ = acos( dot(rotated_up_vector, projected_vector) /
            ( length(projected_vector) * length(rotated_up_vector) ) )

There are some numerical instability points I need to work out where my current code is returning ‘NaN’ from the final comparison, and the directional component isn’t included in that – so I need to sort out a way to determine which direction to rotate – but the fundamentals are at least sorted.
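To make the flow concrete, here’s a minimal sketch of that project-then-measure approach in Swift. The Vec3 type and function names are illustrative – not my library’s API – and I’ve clamped the cosine, which is one way to dodge the NaN issue I mention above:

```swift
import Foundation

// Illustrative vector type – not the library's API.
struct Vec3 {
    var x, y, z: Double
    var length: Double { (x * x + y * y + z * z).squareRoot() }
    static func dot(_ a: Vec3, _ b: Vec3) -> Double { a.x * b.x + a.y * b.y + a.z * b.z }
    static func - (a: Vec3, b: Vec3) -> Vec3 { Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z) }
    static func * (a: Vec3, s: Double) -> Vec3 { Vec3(x: a.x * s, y: a.y * s, z: a.z * s) }
}

// vec_projected = vector - ((vector • n) / |n|^2) * n
func project(_ vector: Vec3, ontoPlaneWithNormal n: Vec3) -> Vec3 {
    vector - n * (Vec3.dot(vector, n) / (n.length * n.length))
}

// Unsigned angle between two vectors; clamping the cosine into [-1, 1]
// avoids NaN when floating-point drift pushes it just past 1.
func angleBetween(_ a: Vec3, _ b: Vec3) -> Double {
    let cosine = Vec3.dot(a, b) / (a.length * b.length)
    return acos(max(-1, min(1, cosine)))
}
```

Note that acos only yields an unsigned angle – determining which direction to roll still needs a separate check.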

Solved, even better

After I’d implemented most of the above details to prove to myself that it worked, I received a suggestion from DMGregory with a significantly better answer. His solution uses vector cross products to define the plane that’s the base upon which we’ll rotate, and then uses the two-argument arctangent function to compute the angle from the dot products with those two plane vectors.

It’s a denser solution, but he provided a helpful way to think about it that made a lot of sense to me after I sketched it all out in a notebook so I could understand it:

planeRight = normalize(cross(worldUp, heading));

planeUp = cross(heading, planeRight);

angle = atan2(dot(currentUp, planeRight), dot(currentUp, planeUp));

You can think of the dot products as getting us the x and y coordinates of our current up vector in this plane, and from that the two-argument arctangent gets us the angle of the vector from the positive x-axis in that plane.


Since I’m using SceneKit, I did have to fiddle around with the cross products to make them legit for a right-handed coordinate scheme, but the resulting solution helpfully provides a proper direction of rotation as well as the value, which is something the original solution didn’t have.
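Here’s that three-line recipe as a small Swift sketch – again with an illustrative vector type rather than my library’s API, and without the handedness fiddling:

```swift
import Foundation

// Illustrative vector type – not the library's API.
struct V3 {
    var x, y, z: Double
    static func dot(_ a: V3, _ b: V3) -> Double { a.x * b.x + a.y * b.y + a.z * b.z }
    static func cross(_ a: V3, _ b: V3) -> V3 {
        V3(x: a.y * b.z - a.z * b.y,
           y: a.z * b.x - a.x * b.z,
           z: a.x * b.y - a.y * b.x)
    }
    var normalized: V3 {
        let len = (x * x + y * y + z * z).squareRoot()
        return V3(x: x / len, y: y / len, z: z / len)
    }
}

// Signed roll angle that brings currentUp toward worldUp while
// rotating around the heading axis.
func rollAngle(heading: V3, currentUp: V3, worldUp: V3) -> Double {
    let planeRight = V3.cross(worldUp, heading).normalized
    let planeUp = V3.cross(heading, planeRight)
    // The dot products are the x/y coordinates of currentUp within the
    // plane; atan2 returns the signed angle.
    return atan2(V3.dot(currentUp, planeRight), V3.dot(currentUp, planeUp))
}
```

Because atan2 takes the sign of both arguments into account, the result tells you which way to roll as well as how far – the piece the projection approach left unresolved.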

API Design decisions behind Lindenmayer in Swift

Procedural generation of art is fascinating to me. The scope of efforts that fall into the bucket of procedural generation is huge, and quite a lot of what you find is focused on either art or video games. Within procedural generation, one topic really caught my eye – I think primarily because it wasn’t just about art or games, but botany.

I learned of L-systems, also known as Lindenmayer systems, quite some time ago, and knew that you could use them to generate interesting fractal images. L-systems were devised by Aristid Lindenmayer as a formal means to describe and model plant growth. Much of the background was printed in 1990 in the book The Algorithmic Beauty of Plants. The book is currently available from the site Algorithmic Botany in PDF form (which I linked above). I find it fascinating, and after skimming through it a couple of times, I started to look for what code might be available that I could use to play with it. I pretty quickly found the Swift Playground ‘lindenmayer‘, created by Henri Normak. That was neat, but I wanted to go beyond what it could do to re-generate some of the more complex examples from the book.

Fast forward a number of months, and I’ve now published an early release of a Swift library that you can use to generate, and render, Lindenmayer systems. The library is available as a Swift package – meaning it is intended to be used on Apple platforms (iOS, macOS, etc), although quite a lot of it (the core) could be used by Swift on Linux without a lot of trouble. I’m hosting the project, and the source for it, on GitHub at The current release (0.7.0) is not at all finalized, but has a lot of the features that I wanted to use: contextual rules, parametric symbols, as well as image and 3D model representations of the L-systems.

To tease a bit of what it can do, let me share some examples from the book, and then my renderings of the same L-systems, ported to the library I created:

Rendered trees from The Algorithmic Beauty of Plants, page 59
The ported L-systems using the original parameters from the book, rendered in 3D

The rendered result is really close – but not quite there. Part of it stems from the representation of a specific symbol in the original (‘$’), and my interpretation of what that does when rendering the state of an evolved L-system in three dimensions. It could just as easily be a mistake in how I interpreted and ported the original L-systems that were published in the book. You can really see the differences in the second set of trees that I tried to create from the book:

I debated how I wanted to create this library, and in the end landed on doing something that would push me as a developer, forcing me to more deeply understand the Swift programming language as well as how to design APIs with it. The result isn’t an interpreted thing, but something that leans on the Swift compiler and tries to leverage its type safety. If you want to make an L-system with this library, you’re doing it in the Swift programming language.

The core of an L-system is a set of modules – an abstract representation – and a set of rules that you apply to those modules. You start with an initial set of modules – maybe just one – and on each ‘iteration’ of the L-system, you go through the entire list of existing modules and apply the set of rules. The rules are set up to match a module, or a specific sequence of modules, and if they do, you replace the current module, in its current location within the sequence, with whatever set of modules the rule returns. Typically only one rule applies, but in any case I set it up so that it tries all the rules in order and only uses the first rule that reports a match. If no rules match, the module is ignored and left in place.
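A stripped-down sketch of that rewriting loop – using plain characters instead of the library’s typed modules – looks something like this:

```swift
// Minimal L-system rewriting: rules map a symbol to its replacement
// sequence; symbols with no matching rule are left in place.
func rewrite(_ state: [Character], rules: [Character: [Character]]) -> [Character] {
    state.flatMap { symbol in
        rules[symbol] ?? [symbol]  // first (and only) matching rule, else keep the symbol
    }
}

// Lindenmayer's original algae system: A -> AB, B -> A
let rules: [Character: [Character]] = ["A": ["A", "B"], "B": ["A"]]
var state: [Character] = ["A"]
for _ in 0..<4 {
    state = rewrite(state, rules: rules)
}
// After 4 iterations: A -> AB -> ABA -> ABAAB -> ABAABABA
```

The library’s typed modules and closure-based rules replace the character lookup here with type (and property) matching, but the iterate-and-replace loop is the same shape.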

A capability that I wanted to enable was implementing a parametric L-system. This means that the symbols aren’t just one-character things that you interpret, but objects (called modules) that can have parameters. Those parameters can be read and used to determine if a rule should be chosen, or what the rule produces. I chose to use Swift closures for constructing the rules, the idea being that you could define a module as a Swift struct (with or without properties) and evaluate whether a rule applies to a module (or set of modules) by either its type, or by its type and properties. If chosen, the matched modules, with any of their properties, are also available to compute values and choose what modules should be the replacements. My thinking was that anything you could compute using Swift was more immediately available by using a closure, with the benefit of being type-checked by the compiler.

To make this idea more concrete, take a look through a variation of the system that created the tree images above:

    struct Trunk: Module {
        public var name = "A"

        let growthDistance: Double
        let diameter: Double
    }

    struct MainBranch: Module {
        public var name = "B"

        let growthDistance: Double
        let diameter: Double
    }

    struct SecondaryBranch: Module {
        public var name = "C"

        let growthDistance: Double
        let diameter: Double
    }

    struct StaticTrunk: Module {
        public var name = "A°"
        public var render3D: ThreeDRenderCmd {
            // (the cylinder command was elided in the original listing)
            RenderCommand.Cylinder(
                length: growthDistance,
                radius: diameter / 2,
                color: ColorRepresentation(red: 0.7, green: 0.3, blue: 0.1, alpha: 0.95)
            )
        }

        let growthDistance: Double
        let diameter: Double
    }

    public struct Definitions: Codable, Equatable {
        var contractionRatioForTrunk: Double = 0.9 /* Contraction ratio for the trunk */
        var contractionRatioForBranch: Double = 0.6 /* Contraction ratio for branches */
        var branchAngle: Double = 45 /* Branching angle from the trunk */
        var lateralBranchAngle: Double = 45 /* Branching angle for lateral axes */
        var divergence: Double = 137.5 /* Divergence angle */
        var widthContraction: Double = 0.707 /* Width contraction ratio */
        var trunklength: Double = 10.0
        var trunkdiameter: Double = 1.0

        init(r1: Double = 0.9,
             r2: Double = 0.6,
             a0: Double = 45,
             a2: Double = 45)
        {
            contractionRatioForTrunk = r1
            contractionRatioForBranch = r2
            branchAngle = a0
            lateralBranchAngle = a2
        }
    }

    static let defines = Definitions()
    public static let figure2_6A = defines
    public static let figure2_6B = Definitions(r1: 0.9, r2: 0.9, a0: 45, a2: 45)
    public static let figure2_6C = Definitions(r1: 0.9, r2: 0.8, a0: 45, a2: 45)
    public static let figure2_6D = Definitions(r1: 0.9, r2: 0.7, a0: 30, a2: -30)

    public static var monopodialTree = Lindenmayer.withDefines(
        [Trunk(growthDistance: defines.trunklength, diameter: defines.trunkdiameter)],
        prng: PRNG(seed: 42),
        parameters: defines
    )
    .rewriteWithParams(directContext: Trunk.self) { trunk, params in
        // original: !(w) F(s) [ &(a0) B(s * r2, w * wr) ] /(d) A(s * r1, w * wr)
        // Conversion:
        // s -> trunk.growthDistance, w -> trunk.diameter
        // !(w) F(s) => reduce width of pen, then draw the line forward a distance of 's'
        //   this is covered by returning a StaticTrunk that doesn't continue to evolve
        // [ &(a0) B(s * r2, w * wr) ] /(d)
        //   => branch, pitch down by a0 degrees, then grow a B branch (s = s * r2, w = w * wr)
        //      then end the branch, and yaw around by d°
        [
            StaticTrunk(growthDistance: trunk.growthDistance, diameter: trunk.diameter),
            Modules.PitchDown(angle: params.branchAngle),
            MainBranch(growthDistance: trunk.growthDistance * params.contractionRatioForBranch,
                       diameter: trunk.diameter * params.widthContraction),
            Modules.TurnLeft(angle: params.divergence),
            Trunk(growthDistance: trunk.growthDistance * params.contractionRatioForTrunk,
                  diameter: trunk.diameter * params.widthContraction),
        ]
    }
    .rewriteWithParams(directContext: MainBranch.self) { branch, params in
        // Original P2: B(s, w) -> !(w) F(s) [ -(a2) @V C(s * r2, w * wr) ] C(s * r1, w * wr)
        // !(w) F(s) - Static Main Branch
        [
            StaticTrunk(growthDistance: branch.growthDistance, diameter: branch.diameter),

            Modules.RollLeft(angle: params.lateralBranchAngle),
            SecondaryBranch(growthDistance: branch.growthDistance * params.contractionRatioForBranch,
                            diameter: branch.diameter * params.widthContraction),

            SecondaryBranch(growthDistance: branch.growthDistance * params.contractionRatioForTrunk,
                            diameter: branch.diameter * params.widthContraction),
        ]
    }
    .rewriteWithParams(directContext: SecondaryBranch.self) { branch, params in
        // Original: P3: C(s, w) -> !(w) F(s) [ +(a2) @V B(s * r2, w * wr) ] B(s * r1, w * wr)
        [
            StaticTrunk(growthDistance: branch.growthDistance, diameter: branch.diameter),

            Modules.RollRight(angle: params.branchAngle),

            MainBranch(growthDistance: branch.growthDistance * params.contractionRatioForBranch,
                       diameter: branch.diameter * params.widthContraction),

            MainBranch(growthDistance: branch.growthDistance * params.contractionRatioForTrunk,
                       diameter: branch.diameter * params.widthContraction),
        ]
    }

As a side note, the library uses both protocols as types (also known as existential types) and generic types – both for generating the L-systems and for the rules within an L-system. Enabling that made for some tedious development work, as there aren’t a lot of “meta-programming” capabilities in the Swift language today. That said, I’m happy with the results as they stand right now. I’m still debating whether I would get any benefit from leveraging Swift’s DSL capabilities with result builders. I’ve watched and re-watched Becca Dax’s talk from WWDC 21: Write a DSL in Swift using result builders at least four times, but so far I haven’t convinced myself it’s a win over the factory methods I’ve currently enabled.

The 2D representation draws into SwiftUI’s Canvas (which is pretty much a clone of the code that draws into a CoreGraphics context that Henri Normak shared in his playground), and the 3D representation generates a SceneKit scene – and there’s so much more to go!

I love the idea of being able to use this to generate images or 3D models, and to explore the variety of things you can create with L-systems. A huge shout-out to Dr. Kate Compton, whose writing over many years has enabled me to explore a variety of things within procedural generation. I’m still working up to being able to “generate a 1000 bowls of oatmeal“, at least easily. One of the recent additions I enabled was randomization within the library. I included a seedable pseudo-random number generator, so you can make things deterministic if you want.
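The general idea behind a seedable generator is straightforward in Swift – conforming to the standard RandomNumberGenerator protocol makes the usual random APIs deterministic. This sketch uses the SplitMix64 algorithm in an illustrative type, not my library’s PRNG implementation:

```swift
// SplitMix64 in an illustrative type (not the library's PRNG type).
// Conforming to RandomNumberGenerator lets the same seed reproduce the
// same sequence through Swift's standard random APIs.
struct SeededGenerator: RandomNumberGenerator {
    private var state: UInt64
    init(seed: UInt64) { state = seed }
    mutating func next() -> UInt64 {
        state &+= 0x9E3779B97F4A7C15
        var z = state
        z = (z ^ (z >> 30)) &* 0xBF58476D1CE4E5B9
        z = (z ^ (z >> 27)) &* 0x94D049BB133111EB
        return z ^ (z >> 31)
    }
}

var first = SeededGenerator(seed: 42)
var second = SeededGenerator(seed: 42)
let a = (0..<5).map { _ in Int.random(in: 0..<100, using: &first) }
let b = (0..<5).map { _ in Int.random(in: 0..<100, using: &second) }
// a == b – identical seeds yield identical sequences
```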

This was the first real effort I’ve taken at generating 3D scenes, so I may need to step back and re-think the whole renderer that generates SceneKit scenes, and I haven’t yet even begun to explore how I might enable the same with RealityKit. The 2D version was relatively straightforward, but when you get into 3D there are all sorts of bizarre complications of rotations to deal with – and while I have something basically working, it’s not intuitive to understand – or debug.

I have a number of ideas for how I might continue to grow this – but if you find it interesting, feel free to try it out. I’ve enabled discussions on the GitHub repo, or you can track me down on Twitter pretty easily, if you have questions or suggestions. I’m rather fixated on the issue that what I’m generating doesn’t exactly match the results from the book, and on what I interpreted incorrectly, but I think the gist of what this is and does is reasonably solid. If you come to play, don’t be surprised if the examples built into the library change as I iterate on getting the results to match the originally published work more closely.

I dearly wish that Swift Playgrounds version 4 (the update that just came out this winter) allowed Swift packages to be used within a playground. Alas, that doesn’t appear to be the case – but you can still experiment with this library using Swift Playgrounds by including it in an App. I’ll have to explore how to publish that as an example… another thing for the TO-DO list for the project!

Adding DocC to an existing swift package

During WWDC 21, Apple announced that they would be open-sourcing the documentation tooling (DocC) that’s used to build and provide documentation within Apple. At the tail end of October 2021, the initial version of DocC was released — available on GitHub, scattered through multiple repositories:

Apple hosts documentation about DocC (presumably written and published with DocC) at its Developer documentation site.

The initial release of DocC is primarily oriented around providing documentation for libraries and frameworks. It isn’t quite useful for documenting the internals of apps (yet!), but it’s getting closer. Some developers are waiting for features they see as critical before adopting DocC. I suspect the most sought-after feature is the ability to output something that can be dropped into a directory and directly hosted — such as rendering the documentation so that it can be published through Github Pages.

If you’re curious about the specifics of how the DocC project is considering enabling “static hosting”, Scotty has submitted a pull request for that feature, along with a parallel PR by Dobromir in the swift-docc-render repository.

With this feature, and some other recent improvements and plans, including some outside of DocC, I am looking forward to adding documentation to a few libraries I’m using, but which haven’t yet adopted DocC.

Unfortunately, Apple’s documentation on adding documentation to an existing Swift library (say, one provided by Package.swift) has some trouble — it’s so highly focused on adding documentation using Xcode, and on when you have an Xcode project, that it heavily neglects the details of setting things up for package-only Swift libraries. There’s a good article by Keith Harrison from this summer (at that helps, but it also leans heavily on using Xcode.

If you want to build the documentation through just Package.swift (for instance, if the library has no xcodeproj file), then make sure the Package.swift tools version is at 5.5:

// swift-tools-version:5.5

Where you put the documentation is important as well. There’s a one-liner from Apple’s article:

For Swift packages, place the documentation catalog in the same folder as your library’s source files for DocC to associate the catalog with the library.

Documenting a Swift Framework or Package

When you’re not working in Xcode, this means adding a directory in the sources directory of your package with a .docc extension on it. For example, when I drafted a documentation stub pull request for automerge-swift, I used the package name automerge.docc, not the repository name automerge-swift.docc. The name of the directory doesn’t appear to matter, but I like Keith’s suggestion of having it match the package name. Within the .docc directory, include at least one markdown file that has, at the top, a special marker that lines up with the name of the package. In my documentation PR for automerge-swift, I ended up with the following structure and files:

  • Source/Automerge.docc/
  • Source/Automerge.docc/

You can get Apple’s templates by making a (throw-away) Xcode project, adding documentation files to it, and copying them out – but I thought I’d include a few examples for anyone else looking on the web for the quick detail.

The template for the overall catalog:

# ``YourPackageNameHere``
One line summary
## Overview
Short (a paragraph, maybe two) overview of your library.
## Topics
### Group
- <doc:AnArticle>

A few notes about this template:

  • The name at the top should match the name of your swift package, surrounded by double backticks.
  • The ## headings are the major groupings. You can provide additional organization with ###, but don’t bother with sub-sub-headings (#### or more). It’s legit markdown, but ignored by DocC.
  • If DocC can’t find a symbol reference that you have in your markdown, or a link to the relevant <doc:> article, then it may just pass on displaying it. In some cases (links to symbols using the double-backtick syntax), a misspelled symbol may be presented as you provided it, in a fixed-width font, but without a link.
  • The doc link above references the name of the article’s markdown file – not the title within it. Although to keep yourself sane, I’d recommend keeping the file name and title pretty much identical.

And to match that, here’s a quick and dirty template for an Article, loosely based off the one Apple provides within Xcode:

# Article
One sentence overview/abstract introducing the article
## Overview
Introductory paragraph(s)
## How to do something interesting
Your content
### section heading
Your content

Third, there’s a template that Xcode calls an Extension, which you can use to add additional detail onto an existing symbol – it incorporates what you included with the symbol, and allows you to add more – but external to the documentation in the source. There’s a nice detail of this template within Apple’s article Adding Structure to Your Documentation Pages.

While you can edit and set these up with any text editor, you get a notable win when you use Xcode to edit these files.

  1. The DocC compiler provides nice fixits – notes about what’s wrong – when it can’t find or link to a symbol after you build the docs with Product > Build Documentation in Xcode.
  2. Xcode provides some very handy symbol code-completion capabilities that you’d otherwise have to guess at or learn through trial and error.

At some point, hopefully in the near future, you’ll be able to render out a bunch of static HTML with your documentation. For now, the easiest way to iterate and see the feedback is by building the docs through Xcode. The shortcut keybinding (Shift-Control-Command-D) is a finger-bender, but after Xcode builds the documentation, you immediately see the results in the documentation window of Xcode.

A time for change

For some reason, the fall seems to match up with the inflection points in my life where I’ve made notable changes. This fall is no different, as today is the last day of a contract position that I started just before the COVID lockdown. The 18-month gig was wonderful – sometimes a bit “hard mode” due to the constraints that large companies put on contractors. I went into it with a couple of goals, and I’m finishing it out feeling like I very much achieved (or made huge progress against) those goals.

“What’s next?” is a common question; from others to me, and within my own brain. For a long time, I absorbed a tremendous amount of my self-identity from what I did. Or more specifically, from the communities that I was a part of – large influences including groups of coworkers – but also external groups. I’m fairly addicted to solving problems and puzzles, so I have a hard time stepping back to understand “what I want to do” in a way that’s not heavily influenced by the group or groups that I’m currently or recently involved with. In practical terms, this hits me as “I’ve been working in and around this problem space for the last X months, so all the interesting puzzles that immediately come to mind tend to be related to that problem space.”

Now that I’m wrapping up my contract gig, I’m taking some time off – away from all my home influences – to try and let my inner-self simmer down. While I give myself time to wind down, I’m making lists of things that “sound interesting”. Getting it out of my head into writing often lets me put it aside mentally. I learned long ago I’d obsess until I wrote it down – so I make lists and notes to free myself. I’m also reviewing my journals from the time period when I last had a break. I’m hoping that refreshing my thoughts will help ease things back out such that my own voice rides to the top. There were also some things I set aside when I took the contract that I may want to revisit and consider. I have the luxury of taking some time to figure out what’s next, so I’m trying to take it.

Kubernetes and Developers

Three years ago (April 2018), Packt published my book Kubernetes for Developers. There weren’t many books related to Kubernetes on the market, and the implementation of Kubernetes was still early – solid, but early. Looking back, I’m pleased with the content I created. It’s still useful today, which for technical content is pretty darned amazing. I wrote it for an audience of NodeJS and Python developers, and while some of the examples and tasks are now a touch dated, the core concepts remain sound. I originally hoped for at least an 18-month lifespan for the content, and I think it has doubled that.

When I wrote that book, a lot of folks that I interacted with wondered if Kubernetes would stick around, or if it was just a flash-in-the-pan fad. Over the decade prior to writing it, I worked with a large variety of devops tools (Puppet, Chef, Ansible, Terraform, and a number of older variations), which provided me with experience and understanding. When I saw how Kubernetes represented the infrastructure it manages, I thought it would be a win. The concepts were the “right” primitives – modular and composable – they felt natural, and they applied consistently. When I saw AWS “flinch” — responding to the competitive pressures of Google releasing and supporting Kubernetes with cloud services — I knew it was destined for success.

I saw, and still see, a place where this kind of infrastructure can be an incredible boon to a developer or small development team — but with Kubernetes there is a notable downside. While it has great flexibility and composes well, it’s brutally complex, has a very steep learning curve, and can be notably opaque. For a developer looking at the raw output of Kubernetes manifests, it can be a horror show. Values are copied and repeated all over the place, the loose coupling means you have to know the conventions to trace the constructs it makes, and error messages presume in-depth knowledge of how the infrastructure works. In addition, there are conventions that rule functional operation, and when you step in to edit something, it’s astonishingly easy to misconfigure things with an extra space because — well — YAML.

I recently came back to creating application manifests for Kubernetes, and wrote a (simple) Helm chart for an application. Coming back after a couple of years working on different technology reminded me how opaque and confusing the whole process was. The updated version of Helm (version 3) is an improvement over its predecessor for a number of technical reasons, but it’s no easier to use for developing charts. Creating even a simple chart requires very deep knowledge of Kubernetes — the options it provides and how to specify them, the coupling that’s implicit within the manifests where data is repeated, and the conventions that rule them. It was very eye-opening. The win that I hoped for three years ago when I published the book — that developers could see and have a guide to using the power of Kubernetes to support the development process — hasn’t entirely come to pass. It still could, but it needs work.

As a side note, I ran across the blog post 13 Best Practices for Using Helm, which I recommend if you find yourself stepping into the world of using Helm to help manage Kubernetes manifests for your applications.

For developers who are being asked to be responsible for their code running in production, running apps with Kubernetes provides useful levers and feedback loops. Taking advantage of Kubernetes in your development flow is not going to speed up that tight loop where you’re enabling a new HTTP endpoint, but it can make a notable difference when you get to the stage of integration, acceptance, and functional testing — the places where you verify correctness, analyze performance, and test resiliency. As a developer, if you know what Kubernetes is looking for, you can write code to communicate back to the cluster. This in turn lets the cluster manage your code – individually or at scale – far more effectively. Kubernetes has the concept of liveness and readiness probes for any application. By crafting small extensions to your application, you can provide direct signals of when things are all good, when there’s trouble, and when your app needs to restart.
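As a concrete sketch, those probe endpoints boil down to very little code. Here’s roughly what they can look like using only the Python standard library. The /healthz and /readyz paths are common conventions, not requirements; the real paths are whatever you point the pod spec’s livenessProbe and readinessProbe at.

```python
# A minimal sketch of liveness and readiness endpoints, using only the
# Python standard library. /healthz and /readyz are conventional path
# names; wire the real ones up in the pod's probe configuration.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Flip this to True once startup work (warming caches, connecting to
# databases) has finished, so the cluster starts routing traffic.
ready = True

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            # Liveness: the process is up; failing this restarts the pod.
            self._respond(200, b"ok")
        elif self.path == "/readyz":
            # Readiness: failing this removes the pod from load balancing
            # without restarting it.
            if ready:
                self._respond(200, b"ready")
            else:
                self._respond(503, b"not ready")
        else:
            self._respond(404, b"not found")

    def _respond(self, status, body):
        self.send_response(status)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        # Keep the cluster's frequent probe traffic out of the app logs.
        pass

def serve(port=8080):
    HTTPServer(("", port), HealthHandler).serve_forever()
```

In a real app these would hang off whatever HTTP framework you’re already using; the point is that the contract with the cluster is just a status code.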

The same pattern of interaction that Kubernetes uses to manage its resources is used by observability tools. Once you’re comfortable sending signals to Kubernetes, you can extend that and send metrics, logs, and even distributed traces about how your application is working – with the details that let you debug, or forecast, how your code operates. Open source observability tools such as Prometheus, Grafana, and Jaeger all run comfortably within Kubernetes, enabling you to quickly add observability to your apps. The same observability that you use in production can provide you with additional insights to experiment and explore during development. A post I wrote two years ago – Adding tracing with Jaeger to an express application – is still a popular article, and I used that setup to characterize an app by capturing and visualizing traces while running an integration test.

Having been periodically responsible for large and small “dev and test labs” over several decades of my career, it appeals to me that I can create a cluster, deploy the code, run functional and performance tests, and validate the results while also getting a full load of metrics, traces, and logging details. And because it’s both ephemeral and software defined, I can create, run, capture data, and destroy in whatever iterative loop makes the most sense for what I need. In the smallest scaled scenario, where I don’t need a lot of hardware, I can verify everything on a laptop. And when I need scale, I can use a cloud provider to host a cluster, and get the scale that I need, use it, and tear it down again at the end.

Getting back into these recent from-nothing deployments into a cluster, and writing new charts, reminds me that while the primitives are great, the tooling and user interface for developers working with all this has a long, long way to go. My past experience is that developer tools can be among the last to get decent, let alone good, user interfaces. It’s often only slightly ahead of the dreaded “enterprise application” tools in terms of thoughtful user experience or visual design. With work, I think the complexities of Kubernetes could be encapsulated – still visible if you need or want to really dig in – so that you can focus on how your app works within a cluster, and use the good stuff from Kubernetes and the surrounding community of tools to establish an effective information feedback loop. The tooling could do so much more to support developers and allow them to verify, optimize, and analyze their applications.

Public Service Announcement

If you’re a lone dev, or a small team, and want to take advantage of Kubernetes as I described above, don’t abuse yourself by spinning up a big cluster for a production environment that amounts to a single, or a few, containers.

It’s incredibly easy to spend a bunch of excess money from which you get essentially no value. If you want to use Kubernetes for smaller work, start with Minikube or Rancher’s k3s, and grow to using cloud services once you’ve exceeded what you can do on a single laptop.
