Kubernetes and Developers

Three years ago (April 2018) Packt published my book Kubernetes for Developers. There weren’t many books related to Kubernetes on the market, and the implementation of Kubernetes was still early – solid, but early. Looking back, I’m pleased with the content I created. It’s still useful today, and for technical content, that is pretty darned amazing. I wrote it for an audience of NodeJS and Python developers, and while some of the examples and tasks are now a touch dated, the core concepts remain sound. I originally hoped for at least an 18-month lifespan of the content, and I think it has doubled that.

When I wrote that book, a lot of folks that I interacted with wondered if Kubernetes would stick around, or if it was just a flash-in-the-pan fad. Over the decade prior to writing it, I worked with a large variety of devops tools (Puppet, Chef, Ansible, Terraform, and a number of older variations), which gave me experience and understanding to draw on. When I saw how Kubernetes represented the infrastructure it manages, I thought it would be a win. The concepts were the “right” primitives: modular and composable, they felt natural and were applied consistently. When I saw AWS “flinch” — responding to the competitive pressures of Google releasing and supporting Kubernetes with cloud services — I knew it was destined for success.

I saw, and still see, a place where this kind of infrastructure can be an incredible boon to a developer or small development team — but with Kubernetes there is a notable downside. While it has great flexibility and composes well, it’s brutally complex, has a very steep learning curve, and can be notably opaque. For a developer looking at the raw output of Kubernetes manifests, it can be a horror show. Values are copied and repeated all over the place, the loose coupling means you have to know the conventions to trace the constructs it makes, and error messages presume in-depth knowledge of how the infrastructure works. In addition, there are conventions that rule functional operation, and when you step in to edit something, it’s astonishingly easy to misconfigure things with an extra space because — well — YAML.

I recently came back to creating application manifests for Kubernetes, and wrote a (simple) Helm chart for an application. Coming back after a couple years working on different technology reminded me how opaque and confusing the whole process was. The updated version of Helm (version 3) is an improvement over its predecessor for a number of technical reasons, but it’s no easier to use to develop charts. Creating even a simple chart demands a very deep knowledge of Kubernetes, the options it provides and how to specify them, and knowledge of the coupling that’s implicit within the manifests — where data is repeated, and the conventions that rule them. It was very eye-opening. The win that I hoped for three years ago when I published the book — that developers could see and have a guide to using the power of Kubernetes to support the development process — hasn’t entirely come to pass. It still could, but needs work.

As a side note, I ran across the blog post 13 Best Practices for Using Helm, which I recommend if you find yourself stepping into the world of using Helm to help manage Kubernetes manifests for your applications.

For developers who are being asked to be responsible for their code running in production, running apps with Kubernetes provides useful levers and feedback loops. Taking advantage of Kubernetes in your development flow is not going to speed up that tight loop where you’re enabling a new HTTP endpoint, but it can make a notable difference when you get to the stage of integration, acceptance, and functional testing — the places where you verify correctness, analyze performance, and test resiliency. As a developer, if you know what Kubernetes is looking for, you can write code to communicate back to the cluster. This in turn lets the cluster manage your code – individually or at scale – far more effectively. Kubernetes has the concept of liveness and readiness probes for any application. By crafting extensions on your application, you can provide direct signals of when things are all good, when there’s trouble, and when your app needs to restart.
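
As a rough sketch of what those signals can look like in code – assuming a Swift service built with Vapor, where the route paths and the dependenciesReady flag are my own illustration, not from anything above – liveness and readiness endpoints might look like this, with the cluster’s probes pointed at /healthz and /readyz:

import Vapor

// stands in for "are my downstream dependencies reachable";
// a real service would check its database, caches, etc.
var dependenciesReady = false

var env = try Environment.detect()
let app = Application(env)
defer { app.shutdown() }

// liveness: the process is up and able to answer at all
app.get("healthz") { req -> String in
    "OK"
}

// readiness: only report ready once dependencies are available,
// so the cluster holds traffic until then
app.get("readyz") { req -> HTTPStatus in
    dependenciesReady ? .ok : .serviceUnavailable
}

try app.run()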

The same pattern of interaction that Kubernetes uses to manage its resources is used by observability tools. Once you’re comfortable sending signals to Kubernetes, you can extend that and send metrics, logs, and even distributed traces about how your application is working – with the details that let you debug, or forecast, how your code operates. Open source observability tools such as Prometheus, Grafana, and Jaeger all comfortably run within Kubernetes, enabling you to quickly provide observability to your apps. The same observability that you use in production can provide you with additional insights to experiment and explore during development. A post I wrote two years ago – Adding tracing with Jaeger to an express application – is still a popular article, and I used that setup to characterize an app by capturing and visualizing traces while running an integration test.

Having been periodically responsible for large and small “dev and test labs” over several decades of my career, it appeals to me that I can create a cluster, deploy the code, run functional and performance tests, and validate the results while also getting a full load of metrics, traces, and logging details. And because it’s both ephemeral and software defined, I can create, run, capture data, and destroy in whatever iterative loop makes the most sense for what I need. In the smallest scaled scenario, where I don’t need a lot of hardware, I can verify everything on a laptop. And when I need scale, I can use a cloud provider to host a cluster, and get the scale that I need, use it, and tear it down again at the end.

Getting back into these recent from-nothing deployments into a cluster, and writing new charts, reminds me that while the primitives are great, the tooling and user interface for developers working with this has a long, long way to go. My past experience is that developer tools can be among the last to get decent, let alone good, user interfaces. It’s often only slightly ahead of the dreaded “enterprise application” tools in terms of thoughtful user experience or visual design. With work, I think the complexities of Kubernetes could be encapsulated – visible if you need or want to really dig in. That work should let you focus on how your app works within a cluster, and use the good stuff from Kubernetes and the surrounding community of tools to establish an effective information feedback loop. It could be so much better at supporting developers and allowing them to verify, optimize, and analyze their applications.

Public Service Announcement

If you’re a lone dev, or a small team, and want to take advantage of Kubernetes as I described above, don’t abuse yourself by spinning up a big cluster for a production environment that amounts to a single, or a few, containers.

It’s incredibly easy to spend a bunch of excess money from which you get essentially no value. If you want to use Kubernetes for smaller work, start with Minikube or Rancher’s k3s, and grow to using cloud services once you’ve exceeded what you can do on a single laptop.

Concurrency, Combine, and Swift 5.5

I started a post that brings together all the moving parts that have been discussed in the various concurrency proposals that are going into Swift 5.5. They’re all accessible through GitHub, and the discussions in the public forums. The combined view of all the moving parts is complex. I was aiming to post something this weekend, but in the end I trashed the draft after I read Paul Hudson’s What’s New in Swift 5.5. He has outdone himself with a beautiful job of describing, and pulling together, all the moving parts, and I think there’s little at this stage that I could add — anything I wrote would be a poor shadow, repeating the work he put into that overview. If you haven’t yet read it, do so. It’s a stellar overview and detailed description (with examples!) of a lot of the moving parts that we’ll see this year. And I’m sure it sets the foundation for quite a bit more to come!

I started closely following the updates to the Swift language, partially because there’s an obvious overlap with what has been proposed and how Combine operates. Some of what Combine does, and how it does its processing, is getting pushed down into the language, which is frankly amazing. I don’t know how Apple’s Combine framework will evolve, but I fully expect it to embrace the async/await foundation, as well as the iteration, sequence, and stream-like structures that are in flight. There’s a massive collection of concurrency proposals and work, and if that alone was what Apple announced and shared at WWDC, it would be huge. I’m guessing that it’s only the foundation for quite a lot of other elements, as yet to be seen. So yeah – I’m one of the people thinking “This year’s gunna be a big one!” when it comes to WWDC announcements.

For me, the portion of the concurrency proposals that’s been most intriguing, exciting, and perhaps a little scary, is the addition of Actors and Global Actors. Building on top of the concurrency features, they provide some amazing coordination and memory-safety constructs that make it far easier to write safe code for an increasingly multi-core system. And I suspect that Actors aren’t going to be limited to a single device, but that it’ll be extended to communicating and coordinating across devices. There’s nothing yet in the public proposals, but I think there’s still quite a bit more coming.
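
To make that concrete, here’s a minimal sketch of the actor construct as pitched – the Counter type is my own toy example, not from the proposals. The actor serializes access to its state, so concurrent callers can’t race on the value:

actor Counter {
    private var value = 0

    func increment() -> Int {
        value += 1
        return value
    }
}

// callers outside the actor go through await, which is where
// the coordination (and potential suspension) happens
func bumpCounter(_ counter: Counter) async {
    let current = await counter.increment()
    print("counter is now \(current)")
}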

I’m looking forward to these hitting the streets, and (hopefully) API use of Global Actors to help provide compiler-enforced expectations around callbacks and functions that need to interact on the main thread. Yeah – mostly that’s UI updates. The existing class of bugs related to UI updates on macOS and iOS have been helped by warnings from the amazing Clang Thread Sanitizer, but now it can be enforced – in Swift at least – within the compiler. I also expect a surge of complexity due to the annotations and new keywords for concurrency. I suspect it may be overwhelming for a lot of developers, especially new developers getting involved in the platform. While you probably don’t need it to learn Swift, I’m guessing that it’ll be pretty front and center in a lot of core app development. And I wouldn’t be surprised to see a lot of discussion, and confusion, around the techniques of linking the earlier Cocoa delegate call-back style into async methods and functions. I think it may be challenging – at least at the start.
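
For reference, that bridging technique looks roughly like this – a sketch using a checked continuation from the proposals, where fetchLegacy is a stand-in for any callback-based API, not a real one:

import Foundation

// a stand-in for an old callback-style API
func fetchLegacy(completion: @escaping (Data?, Error?) -> Void) {
    completion(Data("hello".utf8), nil)
}

// wrapping it into async/await with a checked continuation
func fetch() async throws -> Data {
    try await withCheckedThrowingContinuation { continuation in
        fetchLegacy { data, error in
            if let data = data {
                continuation.resume(returning: data)
            } else {
                continuation.resume(throwing: error ?? URLError(.badServerResponse))
            }
        }
    }
}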

Like any new tool, I predict that Actors will get over-used in the near future, maybe for quite a while, before collapsing back to a more sane level of usage in constructing software. In any case, I think it’ll change a lot of conversations about how current software components work together. I hope that the concurrency elements come with equally excellent testing infrastructure and patterns. I’m still bummed that Combine-Schedulers wasn’t something that was included with the Combine framework directly, or at least with XCTest, while at the same time being immensely grateful to the Point•free crew for making that library available.

I’m not going to make any other WWDC predictions, and I’m clearing my mental decks for the conference updates and details to come. It’s getting about time for “sponge learning” mode. I’m looking forward to seeing what so many worked on, and tremendously excited about the potential this year’s Swift language updates are setting up.

Why I don’t want Xcode on the iPad — macOS and iPadOS

With the impressive announcement of the latest iPad Pros now being available with the M1 chip, it seems like a whole lot of people (in the communities I follow) are talking about the announcement with a general theme of “WTF are we going to do with that chip in there?” Often they’re Apple-platform developers saying “Boy, wouldn’t it be freakin’ awesome if we could use Xcode on the iPad?!”. I don’t want Xcode on the iPad. I used to think I did, or that it might be useful, but I’ve reconsidered. My reasoning stems from the different underlying constraints of iPadOS vs. macOS – some physical, some philosophical. I don’t want these constraints to go away. I value each in their own context. And I hope that these various different operating systems aren’t ever fully merged, exclusively choosing one set of constraints over the other.

Personal vs Multi-user

iOS — and its younger, larger sibling iPadOS — are all about being a personal device. I love these kinds of devices, using them regularly. These devices are, very intentionally, single-user. My iPhone is MINE, and while I might hand it to someone else, everything on it is associated with me. Any accounts, all the related permissions – they’re all about me or echoes of me. There’s no concept of a “multi-user” iOS device*.

(*) Yes, I know there’s unix deep under the covers which fully supports a multi-user operating system. The way the device is exposed to me – as a developer and consumer of it – is all about a single individual user.

macOS, on the other hand, is built from the ground up as a multi-user system. It supports, if you want to use it, multiple people using the same device at the same time. You “log in” to macOS, and “your account” has all the details about who you are and what permissions and constraints govern what you can do. In practice, this ends up mostly being a single person working on a single macOS device at any given time. The underlying concepts that allow for multiple users are there, dating back to the earliest UNIX shared-system semantics with multiple users. As a side effect of that fundamental concept, macOS supports multiple programs running in parallel as well, which segues into what I think is the major philosophical difference between these two kinds of devices: focus.

Focus

Somewhat related to the personal vs. multi-user difference is an element of “focus”. I also tend to think of this as overlapping a bit with a concept of “foreground” vs. “background” applications. iOS and iPadOS are fundamentally more focused devices. The expectation, and design, of the device and operating system is that you’re doing one thing – and all of your focus is on that one thing. Everything else is off in the background, or more likely put on a side-shelf out of view and quite possibly paused.

Yeah, there’s the iPadOS split screen setup that makes my previous statement a hasty generalization. I think it’s a weak-ass stab at doing more than one thing at once – a worthy attempt to see if it would be sufficient, but near useless in most of the scenarios I’ve tried. The iOS family (iOS, iPadOS, tvOS, and watchOS) all seem to share this concept of “doing a single kind of task”, and translating that into “focused on a single application”.

macOS, with its multi-user background, does a bunch of stuff in parallel. There is still a concept of foreground and background, and more usefully a concept of one application that’s focused and primary. Almost more important is how the operating system handles the parallel stuff that you might talk about being in the background. The key is that what runs in the background is (generally) under the control of the person using the device. If I want to run half a dozen programs at the same time – that’s cool – the system time-slices to try and give all of them relatively equal waiting and effort. With iOS, you’re either in the foreground – and king of the world – or in the background – and extremely limited in what you can do, and often on a clock for the time remaining to do anything. The iOS-based operating system can potentially kill a background app at any moment.

Multiple Windows

The other way the macOS “focus” and “parallel apps” concept shows itself is in windowing. On iOS, your app isn’t resizable, and for all practical purposes, you’re working in a single window — that takes over all the screen real-estate — to do whatever you’re working on.

In comparison, macOS has multiple windows that you can resize and place however you choose. Well, mostly however you choose – there are limits. The ability to control the size of a window and place it allows you to prioritize the information contained within it. I end up setting window size and location, almost unconsciously, based on what I’m doing. You can have a front and center focus, but still have other information — updating and live — easily available with almost no context switching needed. You can choose what you’re mixing and when. The multiple window paradigm expanded years ago to include the acknowledgement of multiple kinds of spaces – even switching between them – as well as multiple physical displays.

Integrating Knowledge

The multiple windows — with different apps in various windows, all more-or-less resizable — is why I prefer macOS over iPadOS for almost all of my “producing” work. I use multiple programs at once, very frequently, because I want to bring together lots of information from disparate sources and use it together. It’s almost irrespective of what I’m doing – being creative, reviewing and analyzing, following up on operational tasks, etc. For me it all boils down to the fact that most of the work I do on a computer is all about integrating knowledge.

Often that combination is a browser, editor, and something else such as a terminal window, spreadsheet, or image editor (such as the super-amazing Acorn). Sometimes it’s multiple separate browser windows and an email program. When I’m developing and working in Xcode, I frequently have multiple browser windows, a terminal, and Xcode itself all up and operational at the same time. I use the browsers for reading docs and searching Google (or StackOverflow). I often keep other apps running in parallel – Slack, email, and sometimes Discord. With these apps nearby, but not intrusively visible, I get updates, or I can context switch into them to ask a “WTF am I doing wrong” question of cohorts available through the nearby applications.

I can’t begin to do this on iPadOS without a huge amount of friction. Switching contexts and focus on iPadOS and iOS is time consuming, and you’ve got to think about it. It takes so much time and effort that you’re breaking out of the context of what you’re trying to do. On top of that, you can’t really control the size of the windows on the screen to match the priorities of what you’re working on – it’s more of an “all or nothing”.

I think it’s obvious that when I’m doing something that’s more “consuming” focused – reading, watching, etc – iOS-based devices can excel. No distractions, and I can focus on just what I’m trying to read, learn, understand, etc.

What I have found to be effective is using an iPad in addition to macOS. Using the features of Continuity and Universal Clipboard, an iPad becomes a separate dedicated screen for a single purpose. It’s not uncommon for me to have Slack, email, PDF markup, or a scribble/sketching program (such as Notability or Linea Sketch) running on my iPad as almost another monitor for the same system. Then there’s the even-more personal use of watchOS to interact with the Mac: authenticating that I’m there and unlocking macOS, or using it to verify a system permission for doing something. I don’t know the name of that feature, but I use it all the time.

Not all of my tasks are about integrating knowledge, but almost all my technology or creative work certainly is. In the cases where I’m working on a single thing and want to exclude all distractions and other information, iOS and iPadOS are beautiful. But that’s comparatively rare; almost always when I’m doing something that’s purely solo-creative such as drawing, writing, taking photos, or filming. Anything where I need, or want, to mix information together, especially any form of collaboration, begs for the multiple windows, sized for their individual tasks, and background programs running on my behalf.

So thanks, but no: I don’t want Xcode on the iPad.

Second Guessing Yourself

I’m working through the book Crafting Interpreters. The author — Bob Nystrom — used Java for the first half of the book, a lovely choice, but I wanted to try it using the Swift programming language. Translating it on the fly was a means to exercise my programming and language knowledge muscles, and that does come with challenges. Some translation choices I made worked out well, others have helpfully provided a good learning experience. This post is about what happened while I was hunting down, and resolving, programming mistakes this past week.

Recursive functions have long been something that I found difficult. They are a staple of some computer science tasks, and the heart of the techniques used in scanning, parsing, and interpreting as shown in this book. It was because I found them difficult that I wanted to do this project; to build on my experience using these techniques.

It shouldn’t surprise you that I made mistakes while translating from Java examples into Swift. There wasn’t anything super-esoteric at the start, a little miss here or there. My first run through implementing a while loop didn’t actually loop (oops). The one that really frustrated me was a flaw in not properly isolating how I set up environments before invoking enclosing blocks or functions, which meant a little Fibonacci example blew up – values oscillating instead of growing. It was kind of my worst nightmare, interpreting recursive code within a bunch of compiled recursive code. I made similar errors in the parser, small subtle things, that meant some of the examples wouldn’t parse.

Testing and Debugging

So I backed up, and started writing tests to work the pieces: scanner, parser, and interpreter. I probably should have built in testing from the start. I think more importantly — I did it when I needed it. The testing provided a framework to repeatedly work sections of code so that I could debug them and understand what was happening. I started out using the debugger in Xcode to work through the issues, but the ten-plus layers of recursively called functions exploded beyond my head’s capacity to track where I was and what was going on.

I reverted back to more direct tools. I annotated my parser code (and later the interpreter) with a flag variable named omgVerbose, printing out each function layer as I stepped into it. With that in place, I could run the test, view the (often quite long) trace, and walk it through for the example. I traced my code in an editor and compared what I thought it should do to what showed in the trace. I even added a little indentation mechanism, so that every layer deeper in the recursive parsing would show up as indented. It was what you might call “ghetto”, but it got the job done.
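
The mechanism itself is tiny – here’s a minimal sketch of the idea, using a stand-in recursive function rather than my actual parser:

var omgVerbose = true
var depth = 0

func trace(_ message: String) {
    guard omgVerbose else { return }
    // indent two spaces per recursion layer
    print(String(repeating: "  ", count: depth) + message)
}

func fib(_ n: Int) -> Int {
    trace("fib(\(n))")
    depth += 1
    defer { depth -= 1 }
    if n <= 1 { return n }
    return fib(n - 2) + fib(n - 1)
}

print(fib(4))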

With the two side by side, I traced what was happening much more easily — the whole point of doing this work. It took some of the stuff that was overwhelming my brain and set it outside, so I could focus on what was changing, and have an immediate reference to what had happened just before. I looked at what happened, what was happening, and made an assessment of what should happen next. Then I compared it to what did happen — and used that to find the spot where I made a programming mistake.

Do I really know what I know?

The second guessing hit me while I was debugging the interpreter. It was the worst kind of bug; the kind where the test sample seems to work — it doesn’t crash or throw an exception — but you get the wrong output. The code snippet was advanced, working the recursive nature of this language I was implementing:

fun fib(n) {
    if (n <= 1) return n;
    return fib(n - 2) + fib(n - 1);
}
for (var i = 0; i < 20; i = i + 1) {
    print fib(i);
}

I added my omgVerbose variable to show me the execution flow within the interpreter. The output from that short little function was incredibly overwhelming. That was the point at which the metaphorical light-bulb 💡 went off for me, making it clear and immediately obvious WHY people created debuggers in the first place. The printed execution trace was overwhelming, and a debugger could let me step through it, piece by piece, and inspect. I didn’t think I was up for making a debugger for my interpreter right then though, so I just fortified myself with a long walk to clear my head, some fresh coffee, and tucked in to trace it through.

I read through the trace, scrolling through my editor with the underlying code, seeing what was executed and what happened next, and trying to nail down where the failure was occurring. While going back to my implementation in Swift, I stared at it so long that the words lost meaning. I struggled so much with finding this error, with my own frustration for not understanding why it was failing, that I started to second guess how even the Swift language was working. For at least an hour, I’d managed to confuse myself with exceptions and optionals. At one point, I’d nearly convinced myself that try before a statement implied that the statement returned an optional value (which it very definitely does NOT). I did ultimately find the issue — it had nothing to do with either optional values or error handling. I had made a mistake in scoping when invoking a function within my interpreter – effects were bleeding between function invocations, but only showing when the same function and parameters were called from the same context. The recursive function I was testing happened to highlight this extremely well.
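
For the record, here’s a tiny example of the rule I had temporarily talked myself out of – plain try leaves the return type alone; only try? makes the result optional:

func risky() throws -> Int { 42 }

do {
    let a = try risky()   // a is Int, not Int?
    print(a)
} catch {
    print("risky() threw: \(error)")
}

let b = try? risky()      // b is Int? – nil if risky() throws
print(b as Any)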

The point isn’t the flaw I made, but that anyone can get to second guessing themselves. I’ve been programming in one language or another for over 30 years, and programming using the Swift language for over five years. I am not the most amazing programmer I know, but I’m comfortably competent. Comfortable enough to ship apps written in it, or reach for the Swift language to explore a problem without worrying about having to learn it first. Even still, I second guessed myself.

Recovering From Brain Lock

I suspect it may be more common to encounter this situation when you’re trying to learn something new. I wasn’t using this to learn Swift, but to learn techniques about how to build a programming language and interpreter. It is puzzle solving – exploring a space, making some assumptions, and following the path of where that might go. When you find a mistake, backtrack a bit and take another path. You’re already in the mode where you’re questioning your assumptions, so it’s pretty easy to have that slip back to questioning broader or more baseline knowledge.

This is when it’s more important than ever to step back and “get out of your head”. The good ole ‘rubber ducky’ approach isn’t bad: explain what you think is happening to the rubber duck. Saying it aloud externalizes it enough to illuminate a poor, or mistaken, assumption or prediction. The same thing goes when explaining it to another person – typing, calling, or whatever – better even if they have a different perspective from yours and can ask questions you’re not asking.

Just as importantly, step away from the problem for a bit. It doesn’t get easier by just pushing forward when you’re frustrated and/or confused, it gets harder. Step outside and go for a walk, away from the keyboard and problem. Listen to music, watch a show you enjoy, or go for the best recovery mechanism of them all: get a good night’s sleep before coming back to the problem if you can.

Translating Java Into Swift

I’m working through the online book Crafting Interpreters (which I highly recommend, if you’re curious about how such things are built). While going through it, I’m making a stab at translating the example code in the book (which is in Java) into Swift. This is not something I’m very familiar with, so I’m trying a couple different ways of tackling the development. I thought I’d write about what’s worked, and where I think I might have painted myself into a corner.

Replacing the Visitor Pattern with Protocols and Extensions

A lot of the example code takes advantage of classes and uses a fair number of recursive calls. The author tackles some of the awkwardness of that by aggressively using a visitor pattern over Java classes. A clear translation win is leveraging Swift’s protocols and extensions to accommodate the same kind of setup. It makes it extremely easy to define an interface that you’d otherwise wrap into a visitor pattern (using a protocol), and use an extension to conform classes to the protocol. I think the end result is far easier to understand and read, which is huge for me – as this is not something I’ve tried before and I find a lot of the elements difficult to understand to start with.
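
Here’s a minimal sketch of that shape, with toy types of my own rather than the book’s classes – a protocol stands in for the visitor interface, and extensions conform existing types to it after the fact:

struct Literal { let value: Int }
struct Addition { let lhs: Literal; let rhs: Literal }

protocol PrettyPrintable {
    func prettyPrint() -> String
}

// conforming the types via extensions, instead of
// threading accept(visitor:) methods through each class
extension Literal: PrettyPrintable {
    func prettyPrint() -> String { "\(value)" }
}

extension Addition: PrettyPrintable {
    func prettyPrint() -> String {
        "(\(lhs.prettyPrint()) + \(rhs.prettyPrint()))"
    }
}

print(Addition(lhs: Literal(value: 1), rhs: Literal(value: 2)).prettyPrint())
// prints: (1 + 2)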

Recursive Enumerations Replacing Classes

I approached this one with the initial thought of “I’ve no idea if this’ll work”. There are a number of “representation” classes that make up the tree structure that you build while parsing and interpreting. The author used a bit of code to generate the classes from a basic grammar (and then re-uses it as the grammar evolves within the book while we’re implementing things). This may not have been the wisest choice, but I went for hand-coding these classes, and while I was in there I tried using Enumerations. I have to admit, the section in The Swift Programming Language on enumerations had an interior section on Recursive Enumerations that used arithmetic expressions as an example. That really pushed me to consider that kind of thing for what in Java were structural classes. I had already known about, and enjoyed using, enumerations with associated types. It seemed like it could match pretty darned closely.

For processing expressions, I think it’s been a complete win for me. A bit of the code — one of those structural classes — looks akin to this:

public indirect enum Expression {
    case literal(LiteralExpression)
    case unary(UnaryExpression, Expression)
    case binary(Expression, OperatorExpression, Expression)
    case grouping(Expression)
    case variable(Token)
    case assign(Token, Expression)
}

This lines up with the simple grammar rules I was implementing. Not all of the grammar you need fits this basic structure well, but I think it’s been great for representing expressions in particular.

Results with Explicit Failure Types Replacing Java Exceptions

Let me start off with: This one may have been a mistake.

UPDATE: Yep, that was definitely a mistake.

A lot of the author’s code uses Java’s Exceptions as an explicit part of the control flow. You can almost do the same in Swift, but the typing and catching of errors is… a little stranger. The author was passing additional content back in the errors – using the exceptions to “pop all the way out of an evaluation”, which could be many layers of recursion deep. So I thought, what the hell – let’s see how Swift’s Result type works for this.

I took a portion of the interpreter code (the bit that evaluates expressions) and decided to convert it to returning a Result instead of throwing an exception. The protocol I set up for this started as the following:

public protocol Interpretable {
    func evaluate(_ env: Environment) throws -> RuntimeValue
}

And I updated it to return a Result:

public protocol Interpretable {
    func evaluate(_ env: Environment) -> Result<RuntimeValue, RuntimeError>
}

In terms of passing back additional data (as associated values within a RuntimeError – an enumeration), it worked beautifully. I could interrogate and pull out whatever I needed very easily.

The downside is that accessing the innards of a returned Result means switching on it, and then dealing with the internals as cases. In almost all cases, I found I wanted to simply propagate a failure up from internal functions, so what I really wanted was something more like a JavaScript Promise (or a Swift Promise library) rather than a raw Result.

The Interpretable protocol applies to those recursive enumerations I showed a bit earlier, so the end result was indentation-from-hell, where I’m switching on this, that, and the other – making the code quite a bit more difficult to read than I think it needs to be. By itself, Result doesn’t add all that much, but when added to switching based on the enumeration that made up the structural representation of the expressions, I had most of the “work” indented four to six layers in.

The upside is that everything is explicitly defined. Swift’s enforcement of exhaustive switch statements makes it so that the compiler “keeps me honest”. Just to share a taste of what I inflicted upon myself, here’s a portion of the evaluate function used within the interpreter:

extension Expression: Interpretable {
    public func evaluate(_ env: Environment) -> Result<RuntimeValue, RuntimeError> {
        switch self {
        case let .literal(litexpr):
            return litexpr.evaluate(env)

        case let .assign(tok, expr):
            switch expr.evaluate(env) {
            case let .success(value):
                do {
                    try env.assign(tok, value)
                    return .success(RuntimeValue.none)
                } catch {
                    return .failure(RuntimeError.undefinedVariable(tok, message: "\(error)"))
                }
            case let .failure(err):
                return .failure(err)
            }

        // ... the remaining cases follow the same pattern ...
        }
    }
}

The layer that uses this, my Parser code, uses throws-style logic, so I have a few places where I end up with an “impedance mismatch” (apologies – you get my way-old idiomatic Electrical Engineering references here sometimes). The Parser class uses exceptions, and the Expressions that it evaluates return Result types – so there’s translation that needs to happen between the two.

I’m going to keep running with it to see it through, but the jury is definitely out on “Was this a good idea?” I have been half-tempted to just go all-in and make the Parser return a result type, but honestly the code that ends up looking something like:

switch result {
case let .success(value):
    // do something with the value
case let .failure(err):
    // often propagating the error:
    return .failure(err)
}

just felt so overwhelming that I stopped at evaluating expressions.
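
(As an aside, and not something I used in this project: for the cases that only propagate the failure, Result’s flatMap can collapse that switch boilerplate – a minimal sketch with toy types of my own:)

enum ExampleError: Error { case notANumber }

func parseNumber(_ text: String) -> Result<Int, ExampleError> {
    guard let value = Int(text) else { return .failure(.notANumber) }
    return .success(value)
}

// flatMap forwards a .failure untouched, transforming only .success
func doubledNumber(_ text: String) -> Result<Int, ExampleError> {
    parseNumber(text).flatMap { .success($0 * 2) }
}

print(doubledNumber("21")) // -> success(42)
print(doubledNumber("x"))  // -> failure(.notANumber)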

Swift LOX

The language I’m implementing is called LOX, a creation of the author. The translation is in progress, but operational, and probably flawed in a number of ways. I’m not all the way through the book – next up is adding block syntax to the grammar, parser, and interpreter – but it’s coming along reasonably well.

If you want to look further at the translation-atrocities I’ve committed, feel free: the code is available on GitHub at https://github.com/heckj/slox. I’m going to carry it through at least another chapter or two to get the elements I wanted to learn for another project. Because of that, I expect it’ll evolve a bit from where it is today, but I may not ever fully implement the language. We’ll just have to wait and see.

Iterating through Strings in Swift

I recently decided to dive into a new bit of learning – creating my own software language interpreter. No, I’ve not gone stark raving mad due to COVID isolation; it is an interesting challenge that I wanted to understand better. Over a year ago, I remember Gus mentioning the process of creating an online book in his blog – that book was Crafting Interpreters, and I kept a reference to that site.

I’ve started working through the book – except that instead of doing the example code in Java, I’m converting the examples and content on the fly and implementing it in Swift. I just worked through implementing the scanner portion of the code. That code required me to read through a text file (or string) and get tokens from it. And as it turns out, this was super easy to do in Swift, but quite non-obvious for me.

Since its inception, the Swift language has changed, I think a couple of times, how it deals with strings. Because it supports UTF-8 strings, you can’t just iterate through them a byte at a time and get what you want. A lot of early (pre-Swift 3) examples did some of this, or variations on the theme, but that’s no longer valid. So a number of examples on StackOverflow tackling this kind of thing are dead code and very out-of-date.

The first, and most obvious way, to iterate through the characters in a string is to use it within a for loop. This is the pattern that you’ll see directly in The Swift Programming Language Guide on Strings and Characters:

for char in yourString {
    print(char)
}

Works great, super efficient… except you don’t have any reference with which to do some of the tricks that parsers and scanners want to do – namely, peek to see what the next character might be.

The way you can interact with strings in this fashion involves a specific type called String.Index. The details are in a section just a bit further down, Accessing and Modifying Strings, in that same guide.

Don’t make the mistake of thinking a String.Index is just a number that you can add and subtract to move around the index. Unicode makes it significantly more tricky than that, and the language represents String.Index as its own separate type, I think partially to keep me (and you) from making that mistake. Fortunately, the standard library offers a couple of properties on each string to give us reference points: startIndex and endIndex. You can step forward to the next index with a method on the string: yourString.index(after: someIndex). There’s also a way to step back – yourString.index(before: someIndex). These allow you to step forward (or back) through a string, character by character, or shift the index location forward a step (or two) – the kind of thing you need when you’re making a scanner for a language interpreter.
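
Put together, a scanner-style peek/advance over a string might look like this rough sketch (the function names are just my own conventions):

let source = "var x = 10"
var current = source.startIndex

// look at the next character without consuming it
func peek() -> Character? {
    current < source.endIndex ? source[current] : nil
}

// consume and return the next character, advancing the index
func advance() -> Character? {
    guard current < source.endIndex else { return nil }
    defer { current = source.index(after: current) }
    return source[current]
}

while let char = advance() {
    print(char)
}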

One last mechanism that’s helpful to know: you can retrieve a substring from the characters between two index locations, using a range expression made up of the two indices. A substring isn’t quite a string – it’s a different type in Swift. Swift plays some tricks with referencing the original string when you’re using this type, but you can easily make a full string to use as an argument for another function:

let newString = String(firstString[indexA...indexB])

As I mentioned earlier, this is all in the Swift language guide. I hate to admit that it hasn’t been the first place I looked for help and guidance, but it probably should have been. It’s all there, and more.

Nested Observable Objects in SwiftUI

This one often starts with the phrase:

Hey, why isn’t my view updating? It shows the initial data, but it doesn’t update when that data gets changed!

… more than one person, including me …

When you get into seeing the code for the view, how it’s formed, and what the models look like, you see a pattern appear: nested objects that conform to ObservableObject with a reference from one to another, a top level object passed into the view, and view elements that follow the dot-notation chain to create the display.

Let me show you some code, a very simplified representation of this pattern:

import Combine

class MainThing : ObservableObject {
    @Published var element : SomeElement
    init(element : SomeElement) {
        self.element = element
    }
}

class SomeElement : ObservableObject {
    @Published var value : String
    init(value : String) {
        self.value = value
    }
}

And a view that displays it:

import SwiftUI

struct MainThingView: View {
    @ObservedObject var model : MainThing
    var body: some View {
        HStack {
            Text("Detail:")
            Text(model.element.value)
        }
    }
}

At first blush, this looks fine – the view displays, and the property within the nested object is shown as you’d expect; so what’s the problem? The issue is that when you update that nested element’s property, even though it’s listed as @Published, the change doesn’t propagate to the view.

I’ve seen this pattern described as “nested observable objects”, and it’s a subtle quirk of SwiftUI and how the Combine ObservableObject protocol works that can be surprising. You can work around this, and get your view updating with some tweaks to the top level object, but I’m not sure that I’d suggest this as a good practice. When you hit this pattern, it’s a good time to step back and look at the bigger picture. What are you setting up with your views and models? Can you align them into a more direct representation? Let me explain what’s happening, how you can work around it, and you can judge for yourself.

This pattern is at the intersection of Combine and SwiftUI, and specific to classes. There are two parts to it, the first of which is the ObservableObject protocol, and the second part is one or more properties on that object that have the @Published property wrapper. An object that conforms to the observable object protocol has a publisher on it with a specific name: objectWillChange. You can dig around a bit, and you’ll find what it publishes may not be what you expect: it isn’t publishing the values changing, just that something will change. The type aliases for this publisher point to a type signature of Publisher<Void, Never>. Not what I expected when I first uncovered it, and it made me scratch my head.
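
You can see that for yourself with a quick sketch against the classes above – the sink closure receives no payload, only the signal:

let thing = MainThing(element: SomeElement(value: "initial"))
let cancellable = thing.objectWillChange.sink { _ in
    print("something will change - but no hint of what")
}
thing.element = SomeElement(value: "replaced") // prints the message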

The idea, as I understand it, is that the publisher is specifically meant to provide a signal that something has changed – but not the details of what changed. From there, the code within the SwiftUI framework (which uses it), invalidates any relevant view, and looks up what it needs from the referenced object to display a new view.

When most folks use this protocol, they’re not creating the publisher – they’re letting the swift compiler do the heavy lifting, which synthesizes the code to create it, and with the @Published property wrapper, to hook up and watch the properties that should trigger it.

So here’s the kicker to what’s happening: The @Published property wrapper watches for the properties to have changed. We’re dealing with a class here, so we’re in the world of reference semantics. When you update something in a class, you’re not updating the reference to the class – the reference stays the same. That’s the benefit (and trouble) with reference semantics – it’s not entirely obvious that something down below that reference was updated, but as a benefit – you’re not having to copy around the world of what’s in there. If you had replaced the element property with a new instance of SomeElement, then it would trigger the publisher.
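
Using the example classes again, the difference plays out like this:

let mainThing = MainThing(element: SomeElement(value: "start"))

// reassigning the reference itself is seen by @Published:
mainThing.element = SomeElement(value: "new instance") // view updates

// mutating through the existing reference is not:
mainThing.element.value = "mutated in place" // no view update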

So what can you do to work around this? If you’re really tied to this nested class object structure, then you change the objects a bit to “manually” support notifying that publisher chain when the property within a nested object has updated, and you want that reflected in a SwiftUI view. One way to tackle this is to add your own connections from the synthesized publishers on internal @Published property wrappers to the synthesized subject that ObservableObject provides.

The @Published property wrapper synthesizes a publisher for you, referenced with a $ preceding the property name. The compiler synthesizes a Subject for ObservableObject. The subject has a send() method on it that you can invoke. Invoking send() doesn’t require any arguments – it’s not sending any specific data – instead it’s a trigger to say “publish the fact that something is changing and views should be invalidated and redisplayed”. The code below explicitly connects the publisher synthesized within SomeElement to the subject in MainThing.

class MainThing : ObservableObject {
    @Published var element : SomeElement
    var cancellable : AnyCancellable?
    init(element : SomeElement) {
        self.element = element
        self.cancellable = self.element.$value.sink(
            receiveValue: { [weak self] _ in
                self?.objectWillChange.send()
            }
        )
    }
}

There’s a whole world of problems with this setup: from breaking the idea of encapsulation across objects to the fact that it’s incredibly fragile. If you change the element property within MainThing to a new instance, you also need to re-establish the publisher chain to the objectWillChange subject. Alternatively, you can add your own subject, such as a PassthroughSubject<Void, Never>, to your top-level class and manage the connections to invoke send() on it yourself.

This whole pattern seems to be a “code smell”. If you follow this primrose path, you’ll find yourself triggering the “invalidate and redisplay” at a high level, perhaps more often than you want. A large set of those additional connections, especially with a model changing regularly, pretty quickly leads to performance issues, as the effects invalidate larger swaths of view hierarchy for potentially small changes. One or two connections like this won’t hurt much, but more certainly can.

What seems to be better advice is to look closely at your views, and revise them to make more, and more targeted, views. Structure your views so that each view displays a single level of the object structure, matching views to the classes that conform to ObservableObject. In the case above, you could make a view for displaying SomeElement (or even several views) that displays the property from it that you want shown. Pass the property element to that view, and let it track the publisher chain for you.

struct FocusedView: View {
    @ObservedObject var element : SomeElement
    var body: some View {
        Text(element.value)
    }
}

struct MainThingView: View {
    @ObservedObject var model : MainThing
    var body: some View {
        HStack {
            Text("Detail:")
            FocusedView(element: model.element)
        }
    }
}

This pattern implies making more, smaller, and focused views, and lets the engine inside SwiftUI do the relevant tracking. Then you don’t have to deal with the bookkeeping, and your views potentially get quite a bit simpler as well.

Integrating SwiftUI Bindings and Combine

A misconception I had when first learning SwiftUI and Combine was that SwiftUI relied on Combine alone for updating data. There was a throw-away comment in one of the 2019 WWDC presentations (Data Flow through SwiftUI) relating the two, and I over-interpreted it to mean that SwiftUI solely used Combine. To be very clear – it doesn’t. SwiftUI nicely integrates with Combine, and the components you use to expose external reference models into SwiftUI (such as @ObservedObject, @EnvironmentObject, @StateObject, and @Published) use it. But quite a lot of the interaction with user interface elements, such as Text, Toggle, or the selection in List operate using a different tool: Bindings.

I think the most important distinction between the two is the direction of data flow. With bindings, the data travels in BOTH directions, and in Combine it travels in a SINGLE direction. You could describe Combine pipelines as being a “one way street for data”, and Bindings as a “two way street”.

Bindings are also not set up to manipulate the timing or transform the data; they operate on individual values – not streams of data or values over time. That is entirely by design. A core philosophy of the design of SwiftUI is having a single source of truth. A binding exposes that single source of truth, and allows user interface elements to both reflect it and change it. All the while, it keeps the source of truth clear and easily understood. It is designed to support high-speed, low-overhead updates of single values within a view.

So when you start to think something like “Hey, I’d really like to tweak the timing on this…”, then you’re in the realm of Combine. An example might be debouncing text field updates and using the result to trigger a more expensive computation, such as a network call to get information. Unfortunately, there’s not a direct path to create or get a publisher from an existing binding, such as a @State variable within a view.

So how do you get these things to work together? You can rearrange your code to pass in publishers, or otherwise externalize the data from your view with one of the ObservableObject types. If you want to work on data that the view should own, such as view state, then the best options are a couple of view modifiers: onReceive(_:perform:) and onChange(of:perform:).

Publishing to a Binding

Linking a publisher to a binding, such as a view’s state variable, is in my mind the easier path. How you arrange it depends on where you create the publisher. If the publisher exists outside the view, pass it into your view on initialization, and use it with the onReceive(_:perform:) view modifier.

The view to which it’s attached becomes the subscriber. Remember that in Combine, the subscriber “drives all the action”, so the view is now driving the publisher, for as long as it exists. When the onReceive modifier connects to the publisher, it requests unlimited demand. When SwiftUI invalidates the view and recreates it, the pipeline gets set up again, and requests additional demand from the publisher. If your publisher does heavier work, such as a network request, you might find this a bit surprising.

If you do run into this, it’s a good idea to move that publisher, and the work, into an external object that isn’t owned by the view, but used by it. You can use the publisher from multiple views, and it’s a perfect place for Combine’s share operator, which lets multiple subscribers use a single publisher without needing to replicate the network requests for each one.

Publishing from a Binding

Prior to the SwiftUI updates in iOS 14, publishing from a binding was pretty much squashed. Local state declarations, such as a private state variable, don’t support tacking on your own custom getters and setters, where you could put in a closure to trigger a side effect.

Fortunately, with iOS 14, SwiftUI added the onChange(of:perform:) view modifier, perfectly suited for invoking your own closure when a state variable changes. Specify the binding you want to watch, and do your work within a closure you provide. To get from a closure, an imperative style of code, to a Combine publisher, a declarative style of code, means you need to work with something that crosses that boundary. The most common way is using a Combine subject, designed for imperative code to publish values. I tend to use PassthroughSubject, especially within a view, since it’s lightweight and doesn’t require references to other values. This makes it easy to use in a declaration, outside of an initializer.

Again, remember that when using Combine, the subscriber is what drives the action. So be wary of thinking of this flow as pushing a value into a publisher chain; that can lead you to making some incorrect assumptions about how things will react. What’s typically happening is something, somewhere, is already subscribed to the publisher – and it did so with an unlimited demand – meaning any values added in get propagated almost immediately.

Sample Code

To illustrate how these work, I created a simple view, with both a binding sending a value into a publisher, and another getting updated from a publisher chain. The example includes two state variables: one is the immediate binding for a TextField that updates as you type; the second is a holding spot for values out of a publisher.

A PassthroughSubject provides the imperative to declarative connection by using send() to publish values, called from within an onChange view modifier on the TextField. To debounce the input – in this case delay updating until you’ve stopped typing for a second – the publisher pipeline in the initializer pulls from the subject, uses the debounce operator, and connects to the state variable delayed using the onReceive view modifier.

import SwiftUI
import Combine

struct PublisherBindingExampleView: View {
    
    @State private var filterText = ""
    @State private var delayed = ""
    
    private var relay = PassthroughSubject<String,Never>()
    private var debouncedPublisher: AnyPublisher<String, Never>
    
    init() {
        self.debouncedPublisher = relay
            .debounce(for: 1, scheduler: RunLoop.main)
            .eraseToAnyPublisher()
    }
    
    var body: some View {
        VStack {
            TextField("filter", text: $filterText)
                .onChange(of: filterText, perform: { value in
                    relay.send(value)
                })
            Text("Delayed result: \(delayed)")
                .onReceive(debouncedPublisher, perform: { value in
                    delayed = value
                })
        }
    }
}

You might be tempted to put the Combine pipeline directly on the subject, but if you chained any operators, then you would be changing the type of the subject. That would effectively hide it and make it impossible to get a reference to the send() method.

Likewise, you might want to move the publisher chain up to be a declaration, but there’s a quirk here: initialization order. Since you need to reference another local variable (relay in this case), you need to set up that chain from inside the initializer so that the other variables are already initialized and available to use. If you tried to just set it up in the declaration, the compiler would hand you an error such as:

Cannot use instance member 'relay' within property initializer; property initializers run before 'self' is available

You might say, “Hey, but what about that neat trick with lazy initialization?”. In this case, it doesn’t conveniently work. When you use a variable declared with lazy within the onReceive view modifier, you get the error:

Cannot use mutating getter on immutable value: 'self' is immutable

A lazy property mutates an object on its first invocation, so it’s always considered mutating, which doesn’t work with SwiftUI views, which are immutable.

As a final note, keep what you set up and process within a view’s initializer to an absolute minimum. SwiftUI views are intended to be light and ephemeral – and SwiftUI invalidates and recreates them, sometimes quite often, so minimize any work you do within them. If, for example, you find yourself reaching to set up a publisher that makes a network request to get some data, seriously consider refactoring your code to externalize that request into a helper object that provides a publisher, rather than have the view repeatedly set it up each time.

The evolution of “safe” and “unsafe” in the Swift programming language

There’s been a lot of motion in the last four months of the evolution of the Swift programming language that I’ve been wanting, waiting, and hoping for. The language maintainers are tackling concurrency as a first-class construct in the language. I’m following along with the language evolution proposals in the forums, and so far have mostly been able to keep up with the details – although I’m sure I’m missing a lot of the fine-grained implications. Reading the forum pitches where people are talking them through has been fascinating.

One of the interesting take-aways is that the terms “safe” and “unsafe”, or at least the specific implications of when they’re used in the Swift language, are broadening what they cover with the upcoming changes. You could start to see it as early as last October when the Swift Concurrency Roadmap was published, but the wording wasn’t fully in place – it was more conceptual framework than definition. The details of the broadening of the definition didn’t hit home for me until I caught up with the recent discussion on the pitch for task local values.

Prior to these language extensions, “safe” implied something pretty narrow and specific: memory safety. It started with some Swift-language-specific guarantees about variables being guaranteed to be initialized before use. There’s a Swift.org blog post from back in 2015 that calls this out, and far more detail in The Swift Programming Language book’s chapter on Memory Safety. There’s an even better explanation and detail in the presentation Unsafe Swift from WWDC 20. The heart of it is that the “safe” APIs have more preconditions and guarantees wrapped around them.
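
A tiny illustration of that boundary, using the standard library (my example, not from the blog post or the talk): the safe Array subscript bounds-checks every access, while the unsafe buffer variant hands you raw memory and leaves the checking to you.

let numbers = [1, 2, 3]

// safe: an out-of-range subscript like numbers[5] traps
// deterministically with "Index out of range"

let total = numbers.withUnsafeBufferPointer { buffer -> Int in
    // unsafe: in here, reading buffer[5] would be undefined
    // behavior - no bounds check stands in the way
    buffer.reduce(0, +)
}
print(total) // 6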

Across the recent pitches and proposals, some of the language terms that use safe are now being used to imply concurrency safety, somewhat independently of memory safety. The goal looks to be to provide APIs that have some guarantees about thread-safe access and updates. And along with the safe versions, there are some potential “unsafe” variants to use when you need the escape hatch and are willing to take on the thread safety guarantees yourself.

There is likely going to be more detail about what it means to be safe in the upcoming 3rd pitch for Structured Concurrency (both the task local values pitch and the 2nd structured concurrency pitch mention another round of updates and details for the structured concurrency proposal). As I’m writing this, it’s clear there’s still a lot of active and careful work going on there.

M1 arm64 native OpenSSL with vcpkg

This article isn’t a how-to so much as a debugging/dev diary entry for future-me, and any other soul who stumbles into the same (or similar) issues.

Let me provide the backdrop for this story:

I’m working on a private C++-based project, previously written to be cross-platform (Windows, Linux, and Mac). It has a number of C++ library dependencies, which it manages with vcpkg, a fairly nice package-manager solution for C++ projects. It happens to align well with this project, which uses CMake as its build system. One of the dependencies this project uses is grpc, which in turn has a transitive dependency on OpenSSL.

With the M1 series of laptops available from Apple, we wanted to compile and use this same code as an M1 arm64 native binary. Sure – makes sense, should be easy. The good news is, it mostly has been. Both vcpkg and OpenSSL recently had updates to resolve compilation issues with M1/arm-based Macs, most of which revolved around a (common) built-in assumption that macOS meant you were building for an x86_64 architecture, or maybe, just maybe, cross-compiling for iOS. The part that hasn’t been so smooth: there’s an odd complication between vcpkg, OpenSSL, and the M1 Macs that ends up with a linker error when the build system tries to integrate and link the binaries created and managed by vcpkg. It boils down to this:

ld: in /Users/heckj/bin/vcpkg/installed/arm64-osx/debug/lib/libcrypto.a(a_strex.o), building for macOS, but linking in object file built for iOS

For anyone else hitting this sort of thing, there are two macOS-specific command-line tools you should know about when investigating it: lipo and otool.

lipo

lipo does a number of things, mostly around merging various archives into fat libraries, but the key thing I’ve been using it for is determining what architecture a library was built to support. The command lipo -info /path/to/library.a tells you the architecture for that library. I stashed a copy of vcpkg in ~/bin on my laptop, so the command lipo -info ~/bin/vcpkg/installed/arm64-osx/lib/libcrypto.a reports the following:

Non-fat file: /Users/heckj/bin/vcpkg/installed/arm64-osx/lib/libcrypto.a is architecture: arm64

Prior to OpenSSL version 1.1.1i, the same command reported an x86_64 binary.

otool

As I’ve learned, architecture alone isn’t sufficient for C++ code when linking the library (at least on macOS). When the libraries are compiled, they are also marked with information about what platform they were built for. This is a little harder to dig out, and it’s where the command-line tool otool comes into play. I found some great detail on Apple’s Developer forum in the thread at https://developer.apple.com/forums/thread/662611, which describes using otool, but not quite all the detail. Here’s the quick summary:

You can view the platform that is embedded into the library code directly using otool -lv. Now this generates a lot of output, and you’re looking for some specific patterns. For example, the command

otool -lv ~/bin/vcpkg/installed/arm64-osx/debug/lib/libcrypto.a  | grep -A5 LC_

includes this stanza in the (copious) output:

      cmd LC_VERSION_MIN_IPHONEOS
  cmdsize 16
  version 5.0
      sdk n/a
Load command 2

As far as I’ve been able to discern, if you see LC_VERSION_MIN_IPHONEOS in the output, it means the library was built for the iOS platform, and you’ll get the linker error I listed above when you try to link it into code built for macOS.
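
If you want to scan a whole directory of static libraries for that marker, the same otool and grep combination works in a quick shell loop (the path here matches my vcpkg install; adjust to taste):

# report any static library carrying the iOS platform marker
for lib in ~/bin/vcpkg/installed/arm64-osx/debug/lib/*.a; do
  otool -lv "$lib" | grep -q LC_VERSION_MIN_IPHONEOS && echo "iOS-flagged: $lib"
done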

Another library that did get compiled “correctly” shows the following stanza in its output:

      cmd LC_BUILD_VERSION
  cmdsize 24
 platform MACOS
    minos 11.0
      sdk 11.1
   ntools 0

Spotting LC_BUILD_VERSION and the details following it shows the library can be linked against macOS code built for version 11.0 or later.

Debugging

After a number of false starts and deeper digging, I found that OpenSSL 1.1.1i included a patch that enabled arm64 macOS compilation. The patch https://github.com/openssl/openssl/pull/12369 specifically enables a new platform code: darwin64-arm64-cc.

The vcpkg codebase grabbed this update recently, with patch https://github.com/microsoft/vcpkg/pull/15298, but even with this patch in place, the build was failing with the error above.

I grabbed OpenSSL from its source and started poking around. A lot of that poking, and learning the innards of how OpenSSL does its builds, wasn’t entirely useful, so I won’t detail all the dead ends I attempted to follow. What I did learn, in the end, was that the terminal – whether native arm64 or Rosetta 2-emulated x86_64 – can make a huge difference. For convenience, I made a copy of iTerm and ran it under Rosetta so that I could easily install homebrew and use all those various tools, even though it wasn’t arm64 native.

What I found is that if I want to make a macOS native version of OpenSSL, I need to run the compilation from a native arm64 terminal session. Something is inferring the target platform – not sure what – from the shell in which it runs. I manually configured and compiled OpenSSL with an arm64 native terminal, and was able to get an arm64 library, with the internal markers for macOS.
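
If you’re retracing this, a quick way to check which flavor of shell session you’re in:

# a native terminal session prints "arm64" for both commands; under a
# Rosetta 2 session, arch prints "i386" and uname -m prints "x86_64"
arch
uname -m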

From there, I thought that perhaps this was an issue with how OpenSSL was configured and built. That’s something vcpkg controls, with little snippets of CMake under the covers. vcpkg does a nice job of keeping logs, so I went through the compilation for OpenSSL (using --triplet arm64-osx, in case you want to follow along) and grabbed all the configuration and compilation steps. I put those logs into their own text file and edited them so I could run the steps one by one in my own terminal window and watch the output as it went. I then adapted the steps to reference a clean git repository checkout of openssl at the tag OpenSSL_1_1_1i, and updated the prefix to a separate directory (/Users/heckj/openssl_arm64-osx/debug) so I could inspect things without stepping into the vcpkg spaces.

# [1/3] pin the checkout and point every build tool at the Xcode toolchain
cd /Users/heckj/src/openssl
git reset --hard OpenSSL_1_1_1i
export CC=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
export AR=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ar
export LD=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld
export RANLIB=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib
export MAKE=/usr/bin/make
export MAKEDEPPROG=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
export PATH=$PATH:/Users/heckj/bin/vcpkg/downloads/tools/ninja-1.10.1-osx
# configure for the darwin64-arm64-cc platform added in OpenSSL 1.1.1i
/usr/bin/perl Configure no-shared enable-static-engine no-zlib no-ssl2 no-idea no-bf no-cast no-seed no-md2 no-tests darwin64-arm64-cc --prefix=/Users/heckj/openssl_arm64-osx/debug --openssldir=/etc/ssl -fPIC "--sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk"
# post-configure cleanup script from vcpkg's openssl port
/usr/local/Cellar/cmake/3.19.3/bin/cmake -DDIR=/Users/heckj/src/openssl -P /Users/heckj/bin/vcpkg/ports/openssl/unix/remove-deps.cmake
# [2/3] build the static libraries
cd /Users/heckj/src/openssl
export PATH=$PATH:/Users/heckj/bin/vcpkg/downloads/tools/ninja-1.10.1-osx
/usr/local/Cellar/cmake/3.19.3/bin/cmake -E touch /Users/heckj/src/openssl/krb5.h
/usr/bin/make build_libs

What I found was that if I ran this code under a terminal running under Rosetta, I’d get results indicating the code was built against an iOS platform. And when I ran it under a native arm64 terminal, it would correctly report macOS as the platform.

This is a major insight, but I haven’t yet figured out how to apply it…

I originally installed and compiled vcpkg using the Rosetta terminal, and it was running as an x86_64 binary, so I thought perhaps that was the issue. Unfortunately, no. After I reinstalled vcpkg from the arm64-native terminal (and verified the binary was arm64 with the lipo -info command), I made another run at installing OpenSSL, but ended up with the same iOS-linked binary.

Prior to getting this far, I opened issue 13854 as a question on the OpenSSL repository, which details some of this story. However, I no longer think that’s an OpenSSL issue, as I was able to get an arm64 native binary when I manually compiled things. There might be something OpenSSL could do to make this easier or better, but its build setup is incredibly complex, and I get lost pretty darn quickly within it.

So to date, I’ve traced this back to some interaction that vcpkg, and x86_64-emulated binaries, are having on the build – but that’s it.

The story ends here, as I don’t have a solution. I have filed issue 15741 with the vcpkg project, with a summary of these details.

For anyone reading until the bitter end, I’d love any suggestions on how to fully resolve this. I hope someone with more knowledge than I have stumbles across this issue and finds a solution in the future. In the meantime, this blog post will hopefully record the error and how you can diagnose the architecture and platform a library was compiled for, even if it doesn’t get you to a final resolution.
