The evolving world of the server-side developer

Over the past couple of weeks, I’ve talked with a number of folks about Kubernetes, Marathon, Docker Compose, AWS CloudFormation, Terraform, and the bevy of innovation that’s been playing out over the past 12-18 months as Docker swims across the ecosystem. This is at the top of a lot of people’s minds: how to make applications easier to run, easier to manage, and how to do that consistently. I wrote a little about this whole set of tools a few weeks ago, and I thought it would be worthwhile to put down my view of the changing world of a server-side software developer.


All of these technologies are focused on making it easier to run sets of applications (with the in-vogue term being microservices) and keep them running. The significant change is containers evolving out of the world of virtual machines. The two are not mutually exclusive, as you can easily run containers in VMs, but fundamentally containers are doing to virtual machines what virtual machines did to physical x86 servers: they’re changing the management domain – encapsulating the specific thing you want to run, giving it a solid structure, and (mostly) making it easy.

AWS took virtual machines and accelerated the concept of dividing up physical hardware into smaller pieces. Spin up, tear down, API driven, measured and controlled the crap out of it – and made The Cloud. That change, that speed, has shifted our expectations of what it takes to get infrastructure in place so we can produce software. When we get down to basics, everything we’re doing is about running an application: making it easier, faster, cheaper, more secure, more reliable. Microsoft Azure, AWS, and Google Compute Engine are leading the race to drive that even further towards commoditization. And if you’re asking “what the hell does that mean” – it means cheaper and, for all practical purposes, consistent. The cost of running an application is more measurable than it has ever been before, and it shouldn’t surprise anyone that businesses, let alone technologists, would encourage us to optimize against cost and speed.

For software delivery, virtual machines mostly followed the pattern of physical machines: deploy an OS, patch and update it, and leverage OS packages as the means of deploying software. Lots of companies still dropped some compressed bundle of code for their own software deployment (a zip, war, jar, or whatever). A few bothered to learn the truly arcane craft of making OS-level packages, but a pretty minimal set of folks ever really integrated that into their build pipelines. The whole world of configuration management tools (CFEngine, Puppet, and Chef) erupted in this space to keep the explosive growth of “OS instances” – virtual servers – under control.

Now containers exist, and have formalized a way of layering up “just what we need” and, to some extent, “exactly what we want”. Whether it’s a compiled binary or an interpreter and code, containers let you pull it all together, lock it into a consistent structure, and run it easily. On top of that speed, you’re removing a lot of cruft with containers – it’s one of their most brilliant benefits: trimming down to just what you need to run the application. In a world where we can now consistently and clearly get numbers for “what it costs to run … on AWS”, it’s an efficiency we can measure, and we do. Virtual machines were heading this way (slowly) with efforts like JeOS (just enough OS), but Docker and friends got into that footrace and left the JeOS efforts standing still.
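That “just what we need” layering is easiest to see in a Dockerfile. This is a hypothetical sketch, assuming a statically compiled service – the base image tag and binary name are illustrative, not from anything above:

```dockerfile
# Hypothetical minimal image for a statically compiled service.
# Start from a deliberately tiny base layer...
FROM alpine:3.2

# ...and add only the compiled binary. No package manager runs, no
# interpreter gets installed; each instruction adds one filesystem layer.
COPY myservice /usr/local/bin/myservice

EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/myservice"]
```

The resulting image is a few megabytes instead of the hundreds a full OS install drags along.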

In all this speed, having consistent ways to describe sets of processes working together – their connections and dependencies – and being able to set all that up quickly, repeatedly, and reliably is a powerful tool. In the VM world, that is what AWS CloudFormation does – or Terraform, or Juju, or even BOSH or SaltStack, if you squint at them. Those all work with VMs – the basic unit being a virtual machine, with an OS and processes getting installed. In the container world, the same thing is happening with Kubernetes, Marathon, or Docker Compose. And in the container world, a developer can mostly do all that on their laptop…
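As a sketch of what that description looks like in practice, here’s a hypothetical Docker Compose file – all the service names and images are made up for illustration – wiring a web process, a background worker, and the Redis they share into one declared unit:

```yaml
# docker-compose.yml - three cooperating processes and their connections,
# declared together. `docker-compose up` starts the whole set.
web:
  image: example/web:1.0
  ports:
    - "80:8080"
  links:
    - redis
worker:
  image: example/worker:1.0
  links:
    - redis
redis:
  image: redis:2.8
```

The `links` entries are the connections and dependencies, written down in code rather than living as tribal knowledge.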


Docker scales down extremely well, and developers are reaping the benefit: you can “run the world” on your laptop. Where a developer might once have needed a whole virtual infrastructure (e.g. an ESXi cluster, or a cloud account), the slimness of containers means we can pare away all the cruft and get down to just the applications – and that slimming is often sufficient to run everything on a laptop. The punchline “your datacenter on a laptop” isn’t entirely insane. With everything at the developer’s fingertips, overall development productivity increases, quality can be more easily verified, and so on. The win is all about the cycle time from writing code to seeing it work.

Another benefit is the specificity of the connections between processes. I can’t tell you the number of times in the VM or physical world where I was part of a review chain, trying to drag out what processes connected to what, what firewall holes needed to be punched, etc. We drew massive Visio diagrams, reviewed them, and spent a lot of hours bitching about having to update the things. Docker Compose, Kubernetes, etc. include all this information up front: where we created those Visio diagrams, they’re now part of the code. Imagine a dynamic Visio-like diagram that shows a Docker Compose setup up and running, the state of each process, and the status/utilization of all the connections – like the fire-control stations on a ship, the whole thing diagrammed, sections lighting up for clarity about what’s working and what isn’t. We are steps away from that concept.


Upgrading is now a whole different beast. Containers are pretty locked down, and because of their layered filesystems, an update to an underlying layer generally means deploying a whole new container. The same path applies to a code update and to a security patch for your SSL libraries. How your tooling supports this is a big deal, and most of these systems have a pretty clear means of doing what’s typically called a blue/green deployment to solve this problem. It remains to be seen what we do for tracking security updates and CVEs that apply to underlying libraries, and how we patch and update our code as we find the issues. Up until a few months ago, there weren’t even initial solutions to help developers do this.
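As one sketch of how that blue/green cutover can work: in Kubernetes, a Service’s label selector can act as the switch. The names and labels here are hypothetical, purely for illustration:

```yaml
# The Service routes traffic to whichever pods match its selector.
# Deploy the rebuilt "green" containers alongside "blue", verify them,
# then flip this one field to cut traffic over.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    track: blue   # change to "green" to switch to the new build
  ports:
    - port: 80
      targetPort: 8080
```

Rolling back is the same edit in reverse, which is a big part of why the pattern is attractive.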

Containers also allow the environment to scale. Add more hardware, use more resources – the application not only scales down, it scales up. Apple, Facebook, Microsoft, Google, Netflix – and frankly a ton of companies you’re maybe not even aware of – all have applications that span many machines. With the connections between containers an integral part of the system, that scaling process is no longer unclear, and the world of scheduling and multi-container management is where some of the most interesting innovation is happening today. Applications can go from running on a laptop to scaling across multiple datacenters. The “how” of this – what tools are used and how the logistical problems get solved – remains open. It’s a hell of a challenge, and there are probably problems we’ve not even hit yet that we’ll find are critical to solving it.
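In that world, scaling up stops being a provisioning project and becomes a declared number. A hypothetical Kubernetes ReplicationController, for example – the names, image, and count are all illustrative:

```yaml
# Run twelve copies of the container; the scheduler spreads them across
# whatever machines are available and replaces any that die.
apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp
spec:
  replicas: 12    # scale up or down by editing this one field
  selector:
    app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: example/myapp:1.0
```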

The roads forward

These last two items – scaling and upgrading/updating – are the two places where the frameworks will distinguish themselves. How they help developers, what they do, what limits they place, and how well they work will drive the next generation of software development for “server side” developers. All of this is independent of the language you’re writing in. Java, Go, JavaScript, Python, Erlang, Ruby – it all applies the same.

There is a whole new set of problems that will emerge with distributed systems being the norm. Emergent behaviors and common patterns still have to be established and understood; there’s a whole world of “new to many” that we’ll be working in for quite a while – the same kind of crazy interesting that Rule 30 is to cellular automata.


