AI: Law and Policy

The White House announced a series of workshops on AI and its impact on law and policy near the beginning of this month, and the UW School of Law hosted one of those workshops. It was open to the public with registration – so I went. I was looking forward to hearing some voices and opinions that I otherwise wouldn’t.

Ed Felten kicked it off, but I think the highlights from Oren Etzioni were more targeted and insightful. The media hype and mob-think have turned this kind of conversation on its head: real issues and concerns need to be discussed, but a lot of the discourse in this space is misdirected and distracts from the nearer-term core issues. Through his presentation, Oren reiterated that while there are significant things to talk about in this space, the threat of the immediate rise of general artificial intelligence (machines that think like a person) is still a considerable ways off, and the issues at hand that need to be discussed relate specifically to how humans are using existing “narrow” AI to solve problems, capture information, and train systems.

My own personal highlight from this workshop was the second session: a panel which included Kate Crawford and Jack Balkin, folks I didn’t know much about.

The framing and examples that Jack cited during the conversation were new (to me?) and fascinating ideas about how to approach the problems and issues associated with use of this technology. This conversation focused on “the people and companies using AI systems”, and in particular the value and obligations that we should consider when using and/or providing an AI service.

One of his core points was that this model already exists within US law: the mechanism of professions and the implied fiduciary duties that come with them – duties of good faith, trustworthiness, and appropriate behavior. He put it succinctly and beautifully: “These guys shouldn’t screw you over”.

I’m sure I won’t do any summary here justice, but fortunately you can form your own opinions. The workshop presentations were recorded and are available on YouTube: the panel with Jack and Kate starts at 1:15 into that clip and runs about an hour.

If you’re interested in more traditional news coverage of the event, Geekwire covered it as well.


Shared (maybe sharded) Perspective

I started writing this on the flight back from OSCON 2016 in Austin, Texas.

The highlight of the conference was a lovely rambling evening catching up with Brian Dorsey. During one part of that ramble, we talked about our past jobs and the challenges they brought us. We both shared some anecdotes about a few particularly harrowing and/or challenging situations. After reminiscing through a few of those war stories, I mentioned that Karen and I had made a pact early in our relationship: either of us could tell the other to quit their job and we would – no arguments. Questions maybe, but it would be done, and done immediately. I have shared this story with folks periodically, being grateful that we had the insight to set this up, but not making much of it beyond that. Brian insisted that this was sufficiently insightful that I should share it much more broadly, hence this post.

On the surface it may not sound like much, but under it is a deep trust and partnership agreement. There are times when we all lose some perspective, sometimes without even realizing that we lost it or that we are heading down some strange road. As Brian said, “sometimes we’re the frogs, just not clear that the water is continuing to heat up.”

Karen invoked it once on me (although I think she came close to invoking it a couple of other times), and I’ve invoked it once on her. In fact, I think I called it first, when we were just starting out in Columbia, MO. When I didn’t see the water steaming around me, she often did – and vice versa. That’s a precious thing in a partnership – a bit of redundancy in perspective, but I think most importantly a commitment that our partnership was far, far more important than any given job.

At a quick glance, it could seem like a very sabotaging thing – where one partner might make it impossible to make a run at a career – but in practice it is just the opposite. I know I’ve got someone backing me up, and I can make a truly headlong (sometimes crazed) run into new ideas, jobs, efforts, etc. It is a sort of freedom: knowing she always has my back, including telling me when I screw up. The result is that our careers, hers and mine, have both been joint efforts, and I think all the greater for it on both sides.

Learn Fearlessly

I like to teach. I have for years, and I use that skill in nearly every position I’ve had for the past decade. Most of those positions have been in software development, often managing teams or organizations.

A weakness I consistently see in engineering organizations is around communication: sharing knowledge. I offset that by communicating (and teaching) a lot. Teaching in this context isn’t just “learn how to program …” kinds of things, it’s learning in general. How an existing system or tool works, what the value proposition is, what customers are looking for, even a simple retrospective: what went well, what didn’t, what do we want to change.

In teaching and learning, there are two sides: proactively sharing information, and taking information in and understanding it. The folks who have done the absolute best at taking in information and understanding it have a common trait:

They learn fearlessly.

It’s so easy to fall into the mindset of “I don’t want to look stupid, so I won’t ask”. That’s the fear I’m speaking about. It’s something I had to struggle with for quite a while, so I know it’s not easy. Many people have an innate desire to belong and to be respected. It’s hard to put yourself out there, asking what others may perceive as “a stupid question”. And for the requisite Herbert quote:

Fear is the mind-killer.

That fear isn’t always misplaced. I have seen (and been the recipient of) mockery for asking a question. DO NOT LET THAT STOP YOU. Here’s the secret: the person doing the mocking is just betraying their own insecurities.

Yeah, it’s going to sting when this happens. Maybe more than sting. If you’re looking for a piece of advice (and this whole blog post sort of is one), then ignore it. Don’t react to it, don’t acknowledge it – treat it as if that person didn’t say anything, or doesn’t even exist. The number one method for dealing with this kind of person is to not give them any attention. In short: don’t feed the energy monster.

When I’m on the teaching side of this experience, I redirect those people quickly – stamping out that behavior immediately. If they report up to me, you can bet there’s a long, private conversation about to happen after that occurs.

When I’m on the learning side, I view it as a responsibility to ask questions. There is an aphorism:

If you hear one person ask a question, probably more are thinking it, but aren’t stepping up to ask.

When you ask questions that aren’t clear to you – you’re not just helping yourself, you’re helping others. You’re also helping the person teaching. When I’m teaching I appreciate questions because it gives me a chance to try another path to convey the concepts I want to get across. If you don’t ask, I won’t know to try.

Two questions I use most commonly when I’m learning anything are “Where can I learn more about this?” and “What would you recommend as good resources to dig deeper?”. I also find it useful to walk up to a relevant person and start out with “Could you help me understand…”.

Pick your tactics, and keep at it.

Just don’t stop learning.

Engineering for Quality

I have been thinking a lot about the process of debugging: that moment when you’re aware that there’s a problem but don’t quite know where, and are trying to nail it down so it can be resolved. It’s something you can learn in general principles, though it tends to be very specific to your product once you’re down to the details. I am developing some internal teaching here, as this knowledge seems to be gathered in fits and starts – and it would benefit from some common understanding. How to think about the problem, how to break it down (isolation), and how to reason about software with which you may not be familiar seems to be a gap I’m working to close.

In doing this, I want to put the “why” in perspective – to take it from first principles – and share a holistic vision of what we’re after: Engineering for Quality.

I think we’re all on the same page now with the idea that any software will have defects, and that the purpose of a quality program is to reduce the cost of those defects. There’s a lovely grey area that often turns into religious wars about missing features vs. defects, which I’m going to leap over and utterly ignore.

The oft-cited research in Formal Methods for Executable Software Models, from the IBM Systems Sciences Institute, makes the point:

Relative Costs to Fix Software Defects (Source: IBM Systems Sciences Institute)

The takeaway is simple: fixing bugs earlier in the development process costs a crap-ton less.

The organization structures to achieve this are, and have been, changing. A decade and more ago there were dedicated, separate teams for testing vs. development. As with most things in the technology industry, that’s changed dramatically, with various hype cycles and organizational experiments working through variants more or less aggressively and effectively.

Organization structure impacts product development tremendously (the aphorism that software mirrors org structure is far more true than most people realize), but however individuals are collected into teams, there are common tasks regardless of the self-identity (tester vs. SDE vs. developer) of the individual doing the work at any given moment.

Trends in team-based software development over the last few years have dramatically impacted the engineering processes used for identifying defects and dealing with them. The most impactful of these trends include:

  • smaller teams with more focused product/service responsibilities. The proverbial two-pizza team that’s responsible for not only developing a service, but also running it.
  • faster iterations and smaller scopes in development cycles. The design – implement – deliver sequence (aka “agile” development)
  • the concept of test driven development, and driving some quality gates and assumption to the very front of the development process
  • the expansion of continuous integration tools driving automated testing via pull request annotations and quality gates.
The result of these trends is that “quality engineering” responsibilities have migrated from separate teams to embedded teams, or to overlapping responsibilities shared by all developers. What had been separate teams are now overlapped to some degree.

The effects of this overlap have set new expectations that are frequently “understood”, but not often written down – which means that level of understanding is not at all consistent. These new expectations for individuals responsible for quality include:

  • Understand the code architecture. Not just the general architecture, but being able to read and trace code paths to understand what the software is doing.
  • Design, reason about, write, and execute code – and keep growing those skills to leverage programming to do the job more efficiently.
  • There’s significantly less documentation written before and after the development process, with the code and its tests becoming the source of truth for what’s expected of the software, from internal structures to overall feature capabilities.
  • Knowing what the code does is no longer sufficient; knowing how the code is built, assembled, and configured is now expected too. Especially as we continue to see more distributed systems development, the whole develop/build/test cycle is now required as a common understanding across teams.

Software tooling is accelerating these trends. Cloud computing has made services easily replicated, software-defined, and disposable. Combined with continuous integration and continuous delivery tools such as TravisCI, Jenkins, Semaphore.IO, Concourse, etc., the art and science of testing now commonly spans the breadth of the development process. Where this started with SaaS-based services, projects delivered as products are now taking advantage of these tools as well. The end result is testing happening more frequently, generating more information, and getting used both earlier in, and in different ways throughout, the development process.

What had been quality reports are being cycled earlier into the development process as quality gates. Product requirement shifts are accelerating with the desire for “improved business agility”, so even internal release goals are constantly shifting and being re-evaluated.

Bringing this back a bit, I thought I’d write down what I look for and what I expect when I am discussing these responsibilities. To keep it simple, I’ll limit this to the responsibilities of someone focused entirely and only on product quality:

The primary job responsibility is creating information about the quality of the product.

In the course of pursuing this goal, I’d expect someone to be able to design and develop tests, integrate those tests into a CI system, generate and collect data and metrics, and refine that raw data into knowledge and information in multiple forms.
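As a minimal, purely illustrative sketch of that flow (every name and check here is hypothetical – a real suite would be pytest, JUnit, or similar), the work is: run checks, record raw results, and refine them into a summary someone can act on:

```shell
set -e
# Hypothetical sketch: run a couple of trivial stand-in checks, record raw
# results, then refine them into a human-readable quality summary.
cd "$(mktemp -d)"
pass=0; fail=0
# Stand-ins for real tests:
[ $((2 + 2)) -eq 4 ] && pass=$((pass + 1)) || fail=$((fail + 1))
[ $((5 - 3)) -eq 2 ] && pass=$((pass + 1)) || fail=$((fail + 1))
# The refined "information": a summary a CI system could publish as an artifact.
echo "quality summary: $pass passed, $fail failed" | tee quality_report.txt
```

The point isn’t the checks themselves – it’s that the raw pass/fail data gets turned into a durable, shareable report.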

Along with these duties, I would expect the individual to learn the architecture, responsibilities, and function of all the components making up the product, and to be able to consistently debug and isolate unexpected results down to the logical components.

The information to be shared includes analysis of the expected quality of the product and its components from multiple viewpoints, leveraging objective metrics including use-case, API, and code coverage. Summary analysis from continuous integration systems and quality gates should be included in repeated, aggregate form, coupled to the build and development process. When unexpected results are encountered, detailed analysis should identify objective, easily repeatable steps to reproduce the result, along with the debugging and isolation steps taken to reduce the scope of any further search for resolution.

Analysis provided must be based on objective and measurable mechanisms, must be technically accurate, and must be clearly and effectively communicated in written – and, where relevant, graphical and verbal – form.

How to maintain a git remote fork

This is a “how to” on a process for managing changes that you keep locally (and only locally) on a fork of a git repository.

I can’t believe I’m writing this as a “how to”, but I keep wanting to refer people to this kind of information, so I guess if I want it out here, I should do something about it.

To be very clear, 99% of the time, you should NOT DO THIS. If you’re intentionally going here, be very aware of the additional work you just signed up for – both now, and in the future.


In the world of open source software, you make a fork of software all the time. Github has made it super easy, and more importantly, it’s how they (and git) have arranged to collaborate on software. This “how to” is for when you decide that you want to maintain your own fork, with changes in addition to, or just divergent from, the original project. For most cases, you are going to be much better off submitting your changes back. Be damned sure you need to keep your changes to yourself.

If you want to keep your changes locally and just for yourself, then immediately recognize that you have just taken on “technical debt”. The interest rate for this debt could be high or low. Following the “debt” metaphor, the cost is based on how much activity and change happens in the repository from which you forked and want to take future changes.

I’m writing this article presuming you want or need to keep a fork with “a few changes added”, and you want to keep it otherwise up to date with the changes being made by others in the open source community. So let’s get to it – I’ll start with some terms and git basics:

First and foremost, realize that git repositories are sequences of diffs that make up a history (or streams of histories). This is all source control 101 stuff, but getting that you’re dealing with sequences of diffs, applied one after another, is a key insight.


If you’re new to git, do yourself a favor and read through some docs and guides. The official git site has a pretty good “getting started” reference.

When you make a “fork”- either on github or cloning locally, you’re making a copy of that history of diffs – in fact, of all the histories that are contained within the repository.




It’s the same thing that happens if you invoke “git clone” on the command line; the fork you’re making is just local to your machine and filesystem, as opposed to hosted on github.

Most commonly, you won’t have access to make changes to someone else’s repository on github. You generally “make a fork” because when you make a copy, you do have permissions to make changes to that instance. It’s what the whole github flow concept is about – make a fork, push your changes up to that repository, and then make a pull request to propose them back to someone else’s repository.

Setting up your own forks and their branches

Github makes it easy to make forks on github, but in reality you can fork and push changes in and out of any git repository – so it’s entirely possible to clone from github and keep the result in another git source control system – maybe GitHub Enterprise or Bitbucket, whatever you’re using – as long as it supports git.

What github doesn’t make easy is keeping your fork “up to date” with remote changes, whether or not it’s on the hosted github site. You do this work yourself.

So let’s talk some descriptive terms for the moving parts here:


“Upstream” is a common term in open source software for the project that you take changes from, and to which you might contribute back, but to which you probably don’t have commit access. “Upstream” carries an intentional connotation that it’s the source of truth you want to follow.

When you make a fork on github you don’t see the “links”, but they’re there. You do, however, see them when you make a fork using “git clone”. And by links, I mean what git calls a remote.

When you make a clone, the repository stores within it where that clone came from. You can also add your own “remotes” – so you can keep track of many different repositories. Git allows you to keep an insane number of remotes if you want to – for your sanity, I recommend keeping only a few, and keeping it simple.

By default, when you do a “git clone …” command, the name of the remote is set to “origin”. So the “origin” remote for your local clone points at the repository it was cloned from. If you look at the remotes for any local git repository (using the command “git remote -v”), you’ll see something like this:

$  git remote -v
origin  https://github.com/&lt;you&gt;/&lt;repo&gt;.git (fetch)
origin  https://github.com/&lt;you&gt;/&lt;repo&gt;.git (push)

Git lets you separate where you’re getting updates from and where you’re pushing updates to. This gives you all the rope you need to really hang yourself with complexity, so do yourself a favor and keep your process simple! I never split fetch and push on a remote, so I’m not going into that level of detail here.

What I recommend doing is adding a remote to your local clone to the upstream repository. And because I’m nice and predictable that way, I like to name them consistently – so I call it “upstream”. You can do this like so:

$ git remote add upstream https://github.com/&lt;original-owner&gt;/&lt;repo&gt;.git

and then “git remote -v” shows:

origin  https://github.com/&lt;you&gt;/&lt;repo&gt;.git (fetch)
origin  https://github.com/&lt;you&gt;/&lt;repo&gt;.git (push)
upstream  https://github.com/&lt;original-owner&gt;/&lt;repo&gt;.git (fetch)
upstream  https://github.com/&lt;original-owner&gt;/&lt;repo&gt;.git (push)

What you end up with is a sort of triangle of repositories all linked to each other.


Handling the Changes

Now it’s down to handling the changes. When you make changes, you add them as “commits” to a branch. If you’re going to be managing a long-running set of additions over something upstream and taking the upstream changes, I recommend doing this in a branch. It’ll just make some of the commands you need to run easier.
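To make the branch setup concrete, here’s a minimal self-contained sketch in a throwaway repository (the identity settings and the “master”/“mystuff” branch names are just for the demo, matching the naming used below):

```shell
set -e
# Throwaway demo repo; the -c identity settings are only so commits work anywhere.
G="git -c user.email=demo@example.com -c user.name=demo"
cd "$(mktemp -d)"
git init -q .
$G commit -q --allow-empty -m "initial commit"
git branch -M master          # this branch will keep tracking upstream
git checkout -q -b mystuff    # long-running branch for your own commits
git branch --show-current     # prints: mystuff
```

From here, master stays clean to follow upstream, and all your own commits land on mystuff.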

The three commands you need to know how to use are “fetch”, “merge”, and “rebase”. Fetch is about getting data from remote repositories, merge is about folding changes into your local clone, and rebase is about replaying sequences of commits in new orders, or moving where they’re attached. The very common “git pull” command is really just “git fetch” and “git merge” combined into a single command for convenience.
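You can see the fetch/merge split in action with a pair of throwaway repositories (all paths and identity settings here are hypothetical, just to make the demo self-contained):

```shell
set -e
# Build a scratch "upstream" repo and a clone of it.
G="git -c user.email=demo@example.com -c user.name=demo"
work=$(mktemp -d)
git init -q "$work/upstream"
$G -C "$work/upstream" commit -q --allow-empty -m "c1"
git -C "$work/upstream" branch -M master
git clone -q "$work/upstream" "$work/clone"
$G -C "$work/upstream" commit -q --allow-empty -m "c2"  # upstream moves ahead

cd "$work/clone"
git fetch origin                      # 1) update remote-tracking refs only
git merge --ff-only origin/master     # 2) fold the new commit into our branch
git log --oneline | wc -l             # both commits are now local (prints 2)
```

A plain “git pull origin master” from the clone would have done those two steps in one go.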

For the purposes of this example, I’m going to assume you made a branch (which I’m labelling as “mystuff”) and you’re leaving the branch “master” to track changes from upstream. In this example, there are a couple of commits upstream that you want to take into your local project, where you’re managing your own additional commits.


The first thing you want to do is get the changes from upstream. I’m going to do this with somewhat pedantic commands here to make it clear what’s happening:

git fetch --all --prune

This is my go-to command – it gets all the updates from all the remotes and removes any remote-tracking branches that are now gone on the remotes, but doesn’t change your working copy. It’s getting all the info without making any explicit changes to your local clone.

To take the changes in, switch to the master branch and “merge” them in. If you’re being safe, you’ll use the flag “--ff-only”, which means the changes are only added if your local branch can simply be moved forward along the upstream sequence. “ff-only” means “fast forward only”, and implies no attempt at merging.

git checkout master
git merge upstream/master --ff-only

When this is done, you’ll have the changes from “upstream” in your master branch, but your “mystuff” branch won’t reflect those:


If you plan on making some contributions back to the upstream project, then you’ll probably want to update your fork on github with those changes, although that’s not critical for the main challenge here. If you want to do that, the command to push those changes is:

git push origin master


The last thing you want to do is move your changes so they apply after the changes from the upstream repository. I’m specifically suggesting “after” because if you’re maintaining a branch for a long time, you’ll need to repeat this process every time you take changes from the “upstream” project. Remember, these are “diffs” applied in sequence – so every time you do this, if the diffs you’re managing conflict with diffs that were added, you’ll need to resolve those conflicts and update those commits. That’s the core of the technical debt: managing those conflicts with every update from the upstream project.

The way I like to do this is using the “rebase” command with the “-i” flag:

# switch to the 'mystuff' branch
git checkout mystuff
# start the rebase
git rebase -i master mystuff

The order of the rebase command’s options is important. The “-i” flag will list out the changes you’re about to apply in a text list, and generally I just take it as-is. Save and quit that list, and the rebase will start applying. If it goes smoothly, you’re all done. If there’s a problem with the changes, you need to make the relevant updates and invoke “git rebase --continue” until the rebase is complete. Atlassian has a nice little tutorial on the rebase process.
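Here’s what that conflict-and-continue loop looks like end to end, in a self-contained throwaway repo (identity settings, file names, and contents are all hypothetical; GIT_EDITOR=true just auto-accepts the commit message so the demo runs unattended):

```shell
set -e
G="git -c user.email=demo@example.com -c user.name=demo"
cd "$(mktemp -d)"
git init -q .
echo "line one" > file.txt
git add file.txt && $G commit -q -m "base"
git branch -M master
git checkout -q -b mystuff
echo "my version" > file.txt
git add file.txt && $G commit -q -m "my change"
git checkout -q master
echo "upstream version" > file.txt
git add file.txt && $G commit -q -m "upstream change"
git checkout -q mystuff
# Rebase my change on top of master; both branches edited the same line,
# so this will stop with a conflict.
if ! git rebase master; then
  echo "merged version" > file.txt   # resolve the conflict by hand
  git add file.txt                   # mark it resolved
  GIT_EDITOR=true $G rebase --continue
fi
git log --oneline | wc -l            # base, upstream change, my change (prints 3)
```

The resolve/add/continue cycle repeats for each conflicting commit until the rebase finishes – that repetition is exactly the maintenance cost described above.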


The quick version

Now that I’ve walked you through the long way, there’s a compressed version of this process you can use if you want to:

git checkout mystuff
git fetch upstream --prune
git rebase -i upstream/master mystuff

And you’ll achieve the same result for your mystuff branch (minus updating your local master and pushing that master branch up to your fork).
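If you end up doing this often, the three steps are easy to wrap in a small shell function (the function name and default branch are my own invention here, not a git feature):

```shell
# Hypothetical convenience wrapper around the three-step sync.
# Usage: sync_fork [branch]   (defaults to "mystuff")
sync_fork() {
  branch="${1:-mystuff}"
  git checkout "$branch" &&
  git fetch upstream --prune &&
  git rebase -i upstream/master "$branch"
}
```

It assumes a remote named “upstream” exists, per the setup earlier, and it will still stop for you to resolve any conflicts.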


Deep learning and AI research

Several years ago the O’Reilly website was my go-to site for technical news, information, and useful links. Enter the era of twitter, combined with the over-selling-of-stuff website design O’Reilly was experimenting with, and I fell away from the site. I was tired of being inundated with advertising to buy training, listen to XYZ podcast or video, etc. I wanted written, short-form articles. About the only thing I kept an eye out for was Nat Torkington’s “4 short links” curated bits of technical tastiness, which O’Reilly reorganized and tossed under the banner of Radar. I get it – sites change, adapt, and we fell apart; it’s all good. I used to read lots of sites, although the slow fall-away of RSS as a standard (with the shutdown of Google Reader as a banner moment) has really stalled my acquisition of interesting content from peers. Frankly, twitter kind of sucks as a replacement in that regard, more so lately than ever…

So it was with some excitement that I ran across the articles under O’Reilly Radar: Artificial Intelligence, and most specifically one article written by Beau Cronin in July of last year: In search of a model for modeling intelligence. The article was excellent, but what got me really excited was the reference to additional writing and research, in particular How to Grow a Mind: Statistics, Structure, and Abstraction. This paper is a true gem to me – it’s an opinionated review of some of the specific problem spaces in AI research in which I’m deeply interested: representation and the process of “learning” knowledge.

This paper, along with another I found today (Representation Learning: A Review and New Perspectives, updated in April 2014), has provided me with some deep reading that will keep me through the weekend, and almost more interestingly: a plethora of citations to try to track down and understand. I love review papers, even (especially?) opinionated ones, in that they are the connectors to all this great potential information. Concepts deep, shallow, and sometimes just seriously screwed-up wrong, but pushing out on the borders of our knowledge.

So I think at this point, if you’re interested in AI research, or picking up a little of the hype and rabid blather that is the media’s current font for AI, you could do a hell of a lot worse than O’Reilly’s AI topic. Maybe I’ll fall away from the site again as it tries to sell me screencasts and tutorials for how to run Spark machine learning programs on a Raspberry Pi. In the meantime, there are some good links there.

It’s a creek worth panning.


Planes don’t fly like a bird…

I was excitedly and anxiously following the matches between Google’s AlphaGo program and Lee Se-Dol: excited when AlphaGo won the first matches, and frankly even more ecstatic when Lee Se-Dol beat the program in a match. There is another match to go, and I’ll be anxiously waiting to see the results.

The specific value of this program and its success is fairly minimal, but what it represents is pretty enormous. AI has been all over the news; it seems to be the hot topic in the echo chamber of some VCs and Silicon Valley, and recent advances have cemented wins in the space that a decade or more ago were thought “a long ways off”.

The collection and processing of massive amounts of data have led us to more data-intensive mechanisms (like deep neural networks) and to new insights. The whole venture of AI is still transforming, and just as we saw a shift in the ’70s away from perceptrons and neural networks toward symbolic and calculus-based systems, I suspect we’ll see a swing in the opposite direction in AI research now as well – everything toward “deep learning”, and quite a bit less on symbolic algebra.

What’s so interesting to me is that AlphaGo is a fusion of the two mechanisms. It’s using deep learning and data-intensive mechanisms that mimic neurons, in combination with symbolic algebra and search functions. Planes don’t fly like birds, but we nonetheless derive incredible value from them. AlphaGo represents something like that – solving the incredibly complex problem space of the game of Go. The Wright brothers’ first plane was just as exciting: not at all like a bird, but it lofted into the air and powered itself. There is plenty of room for advancement and improvement, and this represents (to me) a breakthrough advance that starts that process.

I wish Iain Banks were still with us; I’d love to hear what he thought of this advancement.

Content is king – disrupting the academic publishing economy

On the weekends, I catch up on physics related and other news using sites/apps like Flipboard and Medium. One article really stood out to me:

In the article, the researcher is cited as saying “sharing this knowledge is legal”, but I wonder if that’s a mistranslation – the author perhaps not intending legal, but ethical or moral. What’s happening at Sci-Hub is the latter of the two, unfortunately not the former. Reading the article, I immediately sympathized with the researcher (Alexandra Elbakyan), and the underdog part of me rooted for her to charge on, while the part of me that’s been steeped in intellectual property law from my years in technology and open source knows she’s fighting a losing battle.

I’ve hit this very frustration myself – even as a member of the ACM, there are articles I frequently want to skim or read as I’m researching topics, and I hit a paywall – and for areas of curiosity with which I’m not affiliated, the barriers come faster and harder. Abstracts are supposed to help resolve some of that, but frankly I haven’t been able to use them to determine whether an article is “interesting” or not, where interesting means “even understandable given my limited knowledge in the specifics of the field”. Neurobiology and topics related to deep learning are where I’ve hit a lot of that lately, and $25 to $50 to read an article I may not even understand, on the hope that it would further my knowledge, isn’t a cost I’m willing to take on.

There seems to be a tremendous opportunity to disrupt the existing publishing environment – the barriers appear more social or environmental than technical. The Internet itself provides a solidly continuing commoditization of storing and sharing content (or a site like Sci-Hub likely couldn’t afford to operate) – but the current example is taking this on from the wrong end: after the paper has been published. It needs to be approached from the scientists and academics themselves – the various groups that are continuing the publish-or-perish meme in academia, and the source of all that content. The business goal is pretty simple: disintermediate the publishers. Shoot – that’s exactly what a site like Medium is heading towards doing, just for less academic content.

I haven’t done the research, but it seems possible that a small team running a site under an initial grant from something like the Bill & Melinda Gates Foundation could seed a process that shared information in an exceptionally low-cost way. A foundation like that would also have the contacts in academic communities to help spread knowledge of its purpose and existence. The “business” of the non-profit could leverage donations, or a combination of donations and lower-cost academic subscriptions, to pay for the infrastructure. Examples like Wikipedia and The Internet Archive show it can be done. The backing of a non-profit and a fairly large philanthropic organization may be just enough to kick it over the edge of acceptability – to take it from “just something posted on the internet” to “academic archive with peer review” that’s accepted and embraced by a multitude of scientific disciplines.

I’m sure there are other barriers that the existing publishers have cunningly erected to keep their hold on the legacy system of papers and books. I still think it’s worth jousting at that particular windmill – for the freedom of knowledge.





Six rules for setting up continuous integration systems

I’ve been using continuous integration and continuous deployment across multiple languages, companies, and teams to drive and verify the quality of development. The most frequently visited piece on this blog is a walk-through for setting up continuous integration for Python projects with Jenkins.

CI systems have evolved and expanded over time. From the earliest days of Buildbot and CruiseControl to modern systems leveraging Jenkins or hosted services like TravisCI and CloudBees, the capabilities have grown more diverse and more powerful. We have evolved from just compiling the code to doing static analysis, running unit and functional tests, composing full systems from multiple repositories, and running full integration tests.

Here are some of the most important lessons I’ve learned in using these systems. If you’re not doing these today, why not?

1) Keep all the logic for building your code in scripts in your code repositories/source control.

It’s super easy to fall into the trap of putting complex logic into tools like Jenkins. It’s one of the earliest, and easiest, anti-patterns to hit. The dependencies needed for your build grow and change over time, and the logic as you start having dependent builds (if you separate out core libraries) grows almost silently, until you end up with a build server that is a special snowflake and a source of fear. Fragile, easy to break, and hard to set up.

The single best remedy I’ve seen for this is to not let anything onto your build server that you can’t set up with a script, and to set up the build system itself entirely with software configuration management. Ansible, Puppet, or Chef – make sure that you can reproduce the whole thing at a moment’s notice from some git repository. If you’re just setting up your build system now, or even just refactoring or adding to it, do this now. Don’t put it off.
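As a minimal sketch of what “all the logic in the repository” looks like in practice – the step names and commands here are placeholders for your real lint/test/package commands – the CI job itself should do nothing more than invoke one script:

```python
"""build.py -- all build logic lives here, in the repository.

The CI job (Jenkins, TravisCI, etc.) should do nothing more than run
this script; the step commands below are placeholders for real
lint/test/package commands.
"""
import subprocess
import sys

# Ordered build steps; the build server knows nothing about these.
STEPS = [
    ("lint", [sys.executable, "-c", "print('lint ok')"]),
    ("unit tests", [sys.executable, "-c", "print('tests ok')"]),
]


def run_steps(steps):
    """Run each step in order, stopping at the first failure."""
    for name, cmd in steps:
        print(f"--- {name} ---")
        if subprocess.run(cmd).returncode != 0:
            print(f"step failed: {name}", file=sys.stderr)
            return False
    return True


if __name__ == "__main__":
    print("build passed" if run_steps(STEPS) else "build failed")
```

The Jenkins job then reduces to a single shell step – `python build.py` – and the build server stops accumulating snowflake logic.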

2) Gate merging code on the results of tests, and mercilessly hunt down and resolve unstable tests.

Our “unit test” frameworks and CI systems have evolved to the point that we’re as often running functional tests with unit test frameworks. The difference is that you’re as likely to be expecting dependent services to be available for your tests, and working the code down into and through those dependencies. Services like MongoDB, MySQL, and RabbitMQ are the most common.

Our testing infrastructure can also be where those systems get stressed the most – setup and teardown, many builds running in parallel and vying for resources, and the inevitable “scrounge computing” that often goes into setting up build infrastructure means that resources available for your builds may well be far, far below what a developer has available to them on their laptop while working.

Asynchronous tests often don’t have all the callbacks or hooks to know when setup is complete, so you get little hacks like “sleep 5” scattered in your code – and those values can end up being massively different between a developer’s laptop and the tiny VM or container that’s hosting your builds in a public or private cloud. The worst part is that it’s hard to distinguish a race-condition failure from a low-resource failure, and in many cases low resources will exacerbate a race condition.

Do everything you can to make setup and teardown of your dependent services consistent, and to know when they complete (this is something that TravisCI handles quite well). You will probably have some “sleep 5”s in there, but be ruthless about allowing them in or expanding the values, always questioning a change.
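Instead of a blind “sleep 5”, a small polling helper can wait until a dependent service actually accepts connections – a sketch in Python (the function name and defaults are my own):

```python
import socket
import time


def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Poll until a TCP service accepts connections; no blind sleeps.

    Returns True once the port is reachable, False when the timeout
    expires -- so the build fails fast with a clear reason instead of
    racing the service's startup.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connection means the service is listening.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Not up yet (refused or timed out) -- wait and retry.
            time.sleep(interval)
    return False
```

The same helper works on a fast laptop and a starved build VM, because it waits exactly as long as the service actually needs.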

3) Speed of your gates will directly impact developer velocity and productivity, keep it lean and be consistent with mocks or dummies in your testing code.

The dark side of really good quality gates in continuous integration is that developers start using them, relying on them like a crutch. The old “it’s taking 30 minutes to compile” is slowly being replaced by “I’m waiting for the CI system to do its run against my PR”. This will vary by development team culture more than anything, but ruthlessly drive down the time it takes to run tests; my rule of thumb is that good gating tests should never exceed roughly 15-20 minutes of total run time.

Developers relish being lazy, so why not just push up that branch and submit a pull request to see how it goes, rather than run the tests themselves? I’ve not found it effective to stop this – simply be aware that it happens. Treat the time to build and verify as a metric you watch, and deal with it as technical debt as it grows. Periodically refactoring the tests, or even the whole build process, will pay off in overall developer productivity, especially as the team contributing to the code grows.
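Treating build-and-verify time as a watched metric can start as small as a timing wrapper around each stage; this sketch is hypothetical (the budget constant just echoes the 15-20 minute rule of thumb above):

```python
import time
from contextlib import contextmanager

# Hypothetical budget echoing the rule of thumb above; tune per team.
GATE_BUDGET_SECONDS = 15 * 60


@contextmanager
def gate_timer(budget=GATE_BUDGET_SECONDS):
    """Time a CI stage, recording elapsed time and whether it blew the budget."""
    stats = {}
    start = time.monotonic()
    try:
        yield stats
    finally:
        # Filled in even if the stage fails, so failures still get timed.
        stats["elapsed"] = time.monotonic() - start
        stats["over_budget"] = stats["elapsed"] > budget
```

Wrapping each stage with `gate_timer` gives you a number to trend, and an early warning when the gate starts creeping past its budget.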

As a general rule, most developers hate to document anything, but relish learning new tricks – so I’ve found encouraging this socially during code reviews to be a tremendously effective pattern. Make it part of your development team’s culture, and it will reinforce itself as they see the wins and value.

4) Make sure all the developers can reproduce what gets run in the CI system by themselves.

The other side of that “good gates crutch” is that you’ll get some developers throwing up their hands or pointing fingers, saying they can’t reproduce the issue and it “works for them”. Don’t hide any scripts, and don’t restrict your developers from resources to reproduce the issues. Ideally any developer on your team can reproduce the entire build system in a few minutes, and be able to manually invoke any scripts that the automated systems run.

On top of this, encourage mocks and dummies, comment the hell out of them in the test code, and set up patterns in your tests that all can follow. A good example in your code repository is worth more than several pages of written documentation external to the code. Additionally, the less you depend on external resources for tests, the faster the tests can validate core logic.
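A heavily-commented mock in the test code itself can serve as that in-repo example. Here’s a sketch using Python’s `unittest.mock`, with a hypothetical function that would normally publish to RabbitMQ:

```python
from unittest import mock


def notify_build_finished(channel, build_id):
    """Hypothetical production code: publish a build event to a broker.

    `channel` would normally be a RabbitMQ channel; the test below
    never needs the real broker.
    """
    channel.publish(routing_key="builds", body=f"finished:{build_id}")
    return True


def test_notify_build_finished():
    # The mock stands in for the broker connection, so this test runs
    # identically on a laptop and on a resource-starved build VM.
    channel = mock.Mock()
    assert notify_build_finished(channel, 42) is True
    # Verify the interaction we actually care about: the publish call.
    channel.publish.assert_called_once_with(
        routing_key="builds", body="finished:42"
    )
```

The comments explaining *why* the mock exists are the pattern the rest of the team copies.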

Most importantly, you want to keep quality accountability all the way up front – with the developers writing the code. If the tests work on the developer’s laptop but not on the build server, have the developer run the tests on the build server, or on their own instance (which they can create, since you made the whole build system reproducible) – but don’t let that accountability slip or devolve into finger pointing.

5) Cascade your builds and tests, driving full integration tests with CI systems.

It’s critical to verifying overall system quality that you build, deploy, configure, and run your code as a whole system. As your code and teams grow, you’ll have multiple repositories to coordinate, and external dependencies that will be constantly growing with you. This setup, and these tests, will take the longest, are the trickiest to debug, consume the most resources, and frankly provide the most value.

I’ve found that it’s worth gating pull requests on an exceptionally stable, minimal set of full system integration tests – the proverbial “smoke test”, if you will. Choosing what to add to this set of tests, and what to remove, will be a constant balancing act between what’s proving stable and where the consistent pain points are in your development path.
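One way to keep the smoke-test set explicit is to tag tests and have the PR gate select only the “smoke” subset (pytest markers do the same job; this dependency-free sketch uses hypothetical names to show the idea):

```python
# Hypothetical tagging scheme: the PR gate runs only the "smoke"
# subset, while a nightly job runs the longer "regression" set.
SMOKE = "smoke"
REGRESSION = "regression"


def tag(*tags):
    """Decorator that attaches a set of tags to a test function."""
    def wrap(fn):
        fn.tags = set(tags)
        return fn
    return wrap


@tag(SMOKE)
def test_service_responds():
    assert True


@tag(REGRESSION)
def test_full_user_scenario():
    assert True


def select_tests(tests, wanted):
    """Return only the tests carrying the wanted tag."""
    return [t for t in tests if wanted in getattr(t, "tags", set())]
```

Promoting a test into (or demoting it out of) the gate then becomes a one-line, reviewable change.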

It’s worth also having longer-running regression tests: the tests you add as you find bugs, or automate from feature acceptance testing, covering full end-user scenarios. These are also the ideal tests to look for memory leaks, do fault injection on distributed systems, and in general make sure the system *fails* as well as works in the way you intend.

Adding this effort up front – validating end-user features and scenarios with acceptance testing – takes more time, but pays off in the end. If you don’t do this, you have implicit, ever growing technical debt with a fairly high interest rate: people time. It’s the most expensive debt you have, and you’ll spend it periodically dedicating teams to regression testing.

The side effect you won’t see until you have this operational is increased confidence and agility in your product development. When you have confidence in your systems working as designed, you can be faster and freer in changing underlying systems, knowing you can easily verify everything still works, or find the relevant issues fast, while you’re doing development. If you have acceptance tests and user scenarios fully automated, your whole product can move far more agilely and effectively, evolving with the business needs.

6) Develop, trend, and review benchmarks about your product from your CI systems.

Use a service like DataDog, or a local instance of Graphite, but plan for, develop, and then watch benchmark metrics in and from your code. I mentioned earlier that the time it takes to run your “merge gates” is a meaningful metric, and there are metrics in your applications that will be just as important in production: memory footprints, CPU and IO budgets, tracing of calls, and timing for user interactions and computationally intensive functions.
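In production you’d push these values to DataDog or Graphite, but the pattern is just “record a named value, then trend it”; a tiny in-process stand-in for such a client (all names here are my own) might look like:

```python
import time
from collections import defaultdict


class MetricsRecorder:
    """In-process stand-in for a statsd/Graphite/DataDog client.

    A real client would ship each value over the wire; here we just
    accumulate them so they can be trended and reviewed.
    """

    def __init__(self):
        self.timings = defaultdict(list)

    def timing(self, name, value):
        # Record a named value for later aggregation and trending.
        self.timings[name].append(value)

    def timed(self, name):
        """Context manager that records the elapsed time of a block."""
        recorder = self

        class _Timer:
            def __enter__(self):
                self.start = time.monotonic()
                return self

            def __exit__(self, *exc):
                recorder.timing(name, time.monotonic() - self.start)
                return False  # never swallow exceptions

        return _Timer()
```

Instrumenting the merge gate and a few hot code paths this way is enough to start judging optimizations with data rather than intuition.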

This enables you to really judge the efficacy of optimizations, as well as search for and find unexpected failures in complex systems. With cloud services, we are finally able to ascribe meaningful and useful costs to all of this. These aren’t just development measures but business measures, as we can balance the responsiveness of the system, its reliability, and the cost to run the services.

The Data Center API

When I was the Director of Cloud and Automation Services at Disney, I toured Microsoft’s Quincy datacenter. Somewhere in the conversations around that tour, a phrase was dropped that’s stuck with me ever since: the concept of a “DataCenter API”. This was long before “software defined data center” was a concept, and cloud services were finally getting a toe-hold on the enterprise mind. When we talked about the API, it was as if it existed, used and beloved; I later learned that Data Center API was just a concept, a pitch.

You know what, it was a really damned good pitch. It stuck with me, and I’m still thinking about it, looking at services in that context, 6 years later.

It is really past time for that pitch to become a reality for enterprise IT shops that act as service providers – at least if they don’t want to just move their services into the public cloud. It shouldn’t be a surprise that the capabilities available with cloud hosting have outstripped enterprise IT (black suit IT) and even internal webops teams (black shirt IT).

The Data Center API that we bandied about was a set of APIs for the long-term operational needs of running services. Something that developers could know and use, even during the development of their application. In many respects, it was a precursor to the idea of devops:

  • an API for alerting, notifying staff of issues – a plea for help to get a service back on track
  • an API for log capture, event processing, visualization, and searching
  • an API for metrics and setting up monitoring, aggregation of values, and trending
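To make the idea concrete, here’s a sketch of the surface such a Data Center API might expose – every class and method name here is hypothetical, an in-memory stand-in for real alerting, logging, and metrics endpoints:

```python
from dataclasses import dataclass, field


@dataclass
class DataCenterAPI:
    """Hypothetical, in-memory stand-in for the three endpoint families."""
    alerts: list = field(default_factory=list)
    logs: list = field(default_factory=list)
    metrics: dict = field(default_factory=dict)

    def alert(self, service, severity, message):
        # Alerting: a plea for help to get a service back on track.
        self.alerts.append((service, severity, message))

    def log(self, service, event):
        # Log capture, to be processed, visualized, and searched later.
        self.logs.append((service, event))

    def gauge(self, name, value):
        # Metric submission for aggregation and trending.
        self.metrics[name] = value
```

The point is that a developer could code against this surface during development, and operations would inherit the same hooks in production.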

Any one of those bullets has multiple SaaS providers supporting cloud services today, and those are just the basics. These days a team running a hosted service might also set up Hystrix monitoring and status consoles, traceback/exception logging and tracking, service tracing and analysis, and more.

The SaaS services enabling these today all started out with humans as their primary consumer, but many are now offering APIs as well: APIs to create and set up, as well as APIs to consume or query. It is still piecemeal, but hosted services now have “The Data Center API” available to them.

So why are these not available to developers in the enterprise? Mostly the services are unavailable due to cost or contract concerns; frequently they’re too expensive when set up by current enterprise IT vendors. Sometimes there’s a business fear about letting even metadata about a business service go outside the firewall. Sometimes it’s just a turf battle. Enterprise agreements are usually sold such that increased usage means increased cost.

Have you seen the bill for a large data feed in Splunk recently? Splunk has amazing functionality, but if I were “buying”, it’s gotten to the point that I would create an ELK stack for each application instead. The incremental cost of running an instance of the ELK stack is likely cheaper than the incremental cost of adding that data to a centralized instance of Splunk. I’m picking on Splunk here, but the same applies to centralized enterprise monitoring solutions, orchestration services, and so on. If there’s anything that a cost center (which is what most enterprise ‘service providers’ are) hates, it is unpredictable, variable expenses.

The pieces that make up a Data Center API that enterprises purchase are still mostly proprietary and “best of breed”. High price is still seen as somewhat acceptable, and I don’t see that lasting. Hosted services are driving the commoditization: the capability is becoming assumed, and with multiple hosted providers starting to differentiate from each other, they’re providing the competition in the space that will ultimately commoditize it.

The final implication is that more capabilities are easily and regularly available for cloud hosted services than for services in an enterprise data center. It will be fascinating to watch the next several years and how (and if) large “enterprise service providers” deal with this.