iPad Pro and the Apple Pencil

Last Tuesday I didn’t catch up with the Apple announcements until late afternoon. I dropped in on my local development team, and between conversations around the debugging topic of the day, the undercurrent was all about the Apple announcements. “Did you hear…”, “Have you seen…” – kind of what you might expect, actually. I got the two-minute rundown from the gang, happy to hear about the Apple TV update, pleased that the iPad Pro wasn’t a false rumor, but what really caught me – like many others, I think – was the Apple Pencil to go along with the iPad Pro.

When I got home, I fired up the (old) Apple TV, found the keynote, and played it through. Watching the video of people drawing, sketching, and writing on the iPad Pro was so immensely compelling to me that I backed it up and re-watched it three or four times. What I saw in that video is the concept I’ve wanted to use ever since I saw the first tablet-style computer.

In high school I learned draughting – the old-school, table-based kind. I loved it, and it shaped a lot of how I write even now. Block print, or a more personal, stylized variant of it, became my norm. As I continued in computing, I wanted to use that same style of interacting and capturing information with computers. With the stylus and Windows OS of the first tablet computers, I tried. OneNote was the only useful thing there, and the world was focused on handwriting recognition. I tried it, and it kind of sucked. But what really sucked was how useless it was for anything other than the one program that seemed halfway decent with it. It wasn’t a general computing device any more, because the software sucked for that style of interaction – the stylus and tablet concept really failed for me.

Fast forward seven years, and Apple announced the iPhone – most compellingly, a touch interface and software that worked pretty darn well. Yeah, there are things I wish they’d incorporated differently, but it was finally software that did the interface correctly. Microsoft technically could have done the same, seven years earlier, with their tablet, but they didn’t have the vision or gumption to make those changes. Forward through multiple iPhones, the iPad, and to now – another seven years later – and we’re looking at the iPad Pro and Apple Pencil. Like others, I secretly longed for the dead Newton and what it could do – it was such innovative technology that I thought it could have been advanced, but I get the need to kill it too: the business reality of it, even if it sucks. And through all that time, Steve Jobs famously raged against the crappy experience that was the tablet and stylus.

Two years ago, I invested in a Wacom Cintiq – the smallest, cheapest one. It’s a great device and worked beautifully, but it suffers from two major problems. First, the OS still isn’t aligned with using a tablet/stylus interface. Windows or Mac OS X – neither is a touch interface with the affordances for communicating with something other than a keyboard and mouse. That was mitigated by the applications I could use – Autodesk’s SketchBook, Gus’ amazing Acorn – but in the end it wasn’t generally useful. More painful for me was how absolutely unportable it was. Even the small Cintiq was a mess of cables and adapters to make it work, and the pile of confusion that was setting up the laptop and the Cintiq side by side was tedious and expansive in space.

Now I see the iPad Pro and the Apple Pencil, which I’m desperately hoping is the successful culmination of what I wanted to use 14 years ago. Generally useful, expressive sketching with pressure and tilt sensitivity, and supremely portable. I want to drop that onto a slightly inclined writing desk and see how it feels. I want to know if it’s the digital reincarnation of the draughting desk that I learned to love in high school. I really hope it is.

How corporate culture is killing private cloud

I have a hypothesis on why private cloud is such a struggle:

Corporations are culturally ingrained not to accept loss of control or unpredictability in expenses, and adopting private cloud flies right in the face of that cultural bias.

Let me explain:
A huge portion of the saleable use cases for private cloud are about dev/test, which is inherently unpredictable. The solution most enterprises use is to encyst that unpredictability down at the developer-team level. The typical corporate pattern is to purchase hardware for the team, depreciate it out over the requisite years, and repurpose the hell out of it (or slice it up into virtual environments).

When an IT organization tries to provide a cloud service, the first thing they come out with is “here’s the cost for hosting this” and “what’s the cost center you want to bill your account to?”. They’re a cost center too, so they sum up all the costs, maybe subsidize them a little, and pass them along. Even though a private cloud may be far cheaper for the overall corporation, the act of passing along that variable cost is the equivalent of tossing a hot potato into the budget cycle.

All of a sudden, the director of the development team using this service gets the hot potato of variable costs, and depending on how the IT organization manages the funny money, it can easily be quite a bit higher than the cost of those depreciated desktops behind Larry’s desk, or the smallish investment in a VMware cluster the dev team runs from a wiring closet.

The historical baggage of IT having to provide 100% reliable services just exacerbates this problem. It means that corporate private clouds are under a tremendous amount of scrutiny for uptime – in many cases far more so than any public cloud. Making anything more reliable costs more-than-incrementally more money, and that in turn gets passed forward, making the variable costs even more extreme in their potential volatility.

Developers are (slowly) becoming culturally ingrained to deal with failures in their programming and in the distributed systems they create, but corporate managers and directors certainly are not. Organizationally there’s little to no tolerance for failure – and few corporations have sufficient scale to run the same concept as actual availability zones internally.

But most of all, there’s an allergic reaction to the variability of the cost, and as often as not a director locks down control of the IT resources provided, even with a private cloud. And suddenly you’re back to the delays of sending emails, asking for permission, and waiting for someone to pull the trigger to get you even a small virtual machine.

Others have done a good job explaining why a wait of even a few hours for a virtual machine is untenable, but let me give it a try as well:

Developers coming in from small companies or startups, or from their own experience developing with cloud resources, are instantly frustrated that they can’t get the resources they need, when they need them. It’s gone beyond a matter of convenience; it’s a matter of efficiency.

Most enterprise applications, even simple ones, are distributed systems. They have been for ages; we just preferred to call them “client server”, “three tier”, or later “n-tier” in the 90’s and 2000’s, prior to really having cloud capabilities. The truth is that as you scale those applications up, they become more like what you’d expect to hear described as full-on distributed systems. And as you add capabilities integrating with other services (local corporate authentication, logging services, and application metrics collection, for example), you’re into full-bore distributed systems – perhaps even using modern SaaS for some components – with complex dependencies and all the fun that comes with them.

The implicit goal the developer has is to have an entire working copy of all the moving parts, with the capability of making changes to any one part while keeping the rest static. Being able to reset that environment – wipe it all away and build it up again – provides a tremendous amount of development efficiency, and many good developers do this as a hedge against the creep of making “just one more change” and missing the implications. The world of continuous integration and continuous deployment does exactly this: the build systems are being advanced to build up the scripts to destroy and rebuild the environment, and ideally can run multiple of these environments side by side. Now you’re at the point where you want to spin up, and destroy, these resources with every developer’s commit. Asking for even an hour’s “wait” for permission in these cases is just completely ludicrous.

The developer response to this has been to leverage the resources they always have at hand – the ones under THEIR control – to get what they need done. Vagrant and VirtualBox for virtual machines, and then on into the world of Docker with boot2docker, or vagrant-coreos and fleet, to create a world of a half-dozen to two dozen small services all interacting with each other. And it’s doable on their laptops, or a developer can cobble a pretty decent-sized container cluster from the depreciated desktops behind Larry’s desk. Best of all, the developers can have their own playground, setting it up and destroying it based on their own development needs.
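
To make that concrete, here’s a sketch of that developer-controlled cycle, assuming a Vagrantfile describing the services already exists (“web” is a made-up machine name):

vagrant up            # build the whole environment from the Vagrantfile
vagrant ssh web       # poke at one of the services
vagrant destroy -f    # wipe it all away...
vagrant up            # ...and build it back up, no permission slips required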

Creative Challenges

Beyond what I post to Twitter and Facebook directly, I have a blog because I want a place to host my longer-form writing. I want a creative outlet for my sometimes perverse need to document the things around me, to explain how to do something, or to just rant about some random topic. Most of the content on this blog is very technical in nature; someone who isn’t into technology and programming would probably be woefully disappointed with the standard fare here. This post – well – it’ll be a little different.

I’ve been thinking a lot about trying to pick up some new creative challenges.

Sketching and drawing is one, the main idea being to get back into the habit of sketching things around me. My mother does the most amazing pen-and-ink and silverpoint sketches – faces, life drawing, and sometimes landscapes or other fixed scenes. I struggle with proportion in my life drawing, and practice is about the only way it’ll get better. Fortunately, that’s easy to swag, as pen and paper are easy to come by in a day-to-day office environment.

Writing, which I’m doing some of here, is another creative outlet that I’ve enjoyed for a long time. I tend to write more “how to” and technical documentation these days. I enjoy reading a lot of fiction and have thought a lot about writing some of my own. That usually stumbles down around the ankles of “what the hell do I want to write about?”. Regardless of those stubbed toes, I’m thinking I might gear myself up, jump into the deep end, and even compete with my sweetie during NaNoWriMo this year. I’ve no idea what the story might be, but knuckling down and doing it seems like a really good creative challenge.

The most out-there thing I’m thinking of taking a swing at is machinima: making a video/film production using video game technology. I’m not aiming to go all special effects and such, just exploring the process of telling a story in a video format and learning the basic tools and techniques to make that happen. I think of it as sitting somewhere on the spectrum from cartooning to animation to film-making – just using tools I can leverage on a laptop today, as opposed to actual film. I was sort of inspired by Unity 5 and Unreal Engine 4 coming out basically for free this year – a little kick that the tooling is getting even easier to come by and use. Mix that in with MakeHuman, Blender, and iMovie, and quite a bit of the tooling – at least enough to learn the process and try it all out – is totally available at no real cost. I’ve been spending an hour here or there watching videos and learning the basics of 3D modeling, but without a particular end in mind. Making a story through machinima seems like a neat way to use it.

The tough part isn’t the tools, but the story: it’s the same problem I stumble on while writing – what story to tell? Again, no idea – but it sounds like a terrifically good creative challenge to work on, so I’m hereby putting that on my list too.

EDIT: Finally found a pic of one of mom’s silverpoints:


Ryan, Silverpoint by Caroline G. Heck

Code reviews – how

I posted previously about why to do code reviews, and this post is a follow up to walk through some mechanics for doing them.

When to do a code review

A huge number of developers are using git, and a similarly huge number are familiar with GitHub. With that as a basis, my opinion is that the best place to do a code review is at the point of sending (or receiving) a pull request.

If you’re not familiar with the term “pull request”, it’s the mechanism GitHub has enabled on their website for someone to request that specific changes be merged into a branch of code. Bitbucket, Atlassian’s Stash, and other tools have similar concepts. In OpenStack we used Gerrit for this purpose (in fact, all their reviews are completely public and viewable at review.openstack.org), and the earliest tooling I remember hearing about and setting up was Reviewboard (I even contributed a few pieces to it back in the day). Suffice it to say, there are lots of options out there to help set up and put some structure around the process of doing a code review.
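
If you haven’t worked this way before, here’s the rough shape of the flow with git and GitHub – the branch name and commit message below are made up for illustration:

git checkout -b add-retry-logic
git commit -am "Retry HTTP requests on connection timeout"
git push origin add-retry-logic

From there you open the pull request from that branch in the web UI, and the review happens against that proposed merge.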

How to set up your code to be reviewed

Tooling alone will not make a good code review; it just enables one. What you are reviewing, and what you share for review, can have a huge impact on how a code review works.

Some of the rules of thumb I follow for making code reviews easier are:

  • Keep commits small and focused

Each commit should have a message that describes the change, and it should be relatively orthogonal to other changes. It’s totally acceptable to have a pull request with just a single, small commit. Honestly, those are the easiest to review.

  • Don’t make a lot of different kinds of changes in the same pull request

If you’re doing a sweeping refactor that changes code throughout a lot of places, consider putting the follow-on changes that add functionality into a separate pull request. If you’re slightly refactoring and then adding some functionality, that can work – just be wary of how much you’re adding, since it can confuse the intent for a reader. Basically, try to keep to a “single topic” within all the changes in a single pull request.

  • Keep the overall code for review well bounded

Sometimes there is no way to avoid massive changes to prepare for, or implement, major architectural shifts. Nonetheless, try to keep the changes as simple and as easily understandable as possible. Avoid changes so large that the review takes more than 30 minutes to read and understand; I aim for most reviews to be readable and understandable within 5 to 15 minutes.

  • If other changes are required, explain what they are and why

In lots of larger projects, you may need to make changes in multiple repositories at once, or have multiple pull requests that depend on each other. Working on changes in OpenStack was exactly like that. In those cases, make sure you do something to “link” the set of pull requests together so that the reader/reviewer can reference what you’re expecting. A reference to the other relevant pull request(s) is often all that’s needed – for example, see the sketch below.
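
As a sketch (the repository name and number here are invented), the top of a pull request description might simply read:

      Depends on heckj/example-repo#12 (the config loader refactor); review and merge that one first.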

How to review someone else’s code

First and foremost, remember that a code review is about people communicating. Your culture WILL show through. In my teams, I lead and encourage folks to communicate objectively and positively. How you phrase and communicate will be as important as what you say.

First, read through the code (and changes) being suggested. Make sure you understand the intent of the change, and how it was solved/implemented. If there’s some arcane bit of Perl-ish anonymous-function code that you don’t understand, ask about it. Most of all, understand what was intended with the changes, and how it was done.

Second, look for architectural patterns:

  • Do the changes match up with how you’ve generally been structuring similar changes?
  • Does something fly in the face of the architecture that you’ve set for the surrounding code?
  • Does the implementation break a hole in an otherwise clean interface and make an API into a leaky abstraction?

Third, look for the stylistic details:

  • Do the variable names make sense?
  • Does it follow the coding conventions you’re using?
  • Are there any obvious misspellings?
  • Do the comments and internal code documentation annotations match the functions?

Last, review the continuous integration results:

  • Is it passing the tests? For that matter, does the new code HAVE tests (assuming that may be important to you)?
  • Did the overall code coverage of the project go down?

Very often, these pieces are answered by automated continuous integration tooling. If you don’t have continuous integration, look into it: maybe TravisCI and Coveralls, or Jenkins with the Violations plugin combined with the GitHub Pull Request Builder.
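
As an illustrative sketch rather than a drop-in config – the “coveralls” script is an assumed entry in your package.json that pipes coverage output up to Coveralls – hooking a NodeJS project into TravisCI might look something like this:

      language: node_js
      node_js:
        - "0.12"
      script:
        - npm test
      after_success:
        - npm run coveralls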

Communications Style

Most of the horror stories about bad code reviews that I’ve heard (or been subjected to) have been about bad communications, not bad code. Objective and honest is excellent; questioning and discussing is good; personal attacks, verboten. I’ve searched for the best way to describe what I mean here, but I think Ed Catmull did the best job in Creativity, Inc.: be candid.

While I’m on the subject, one of the best things I’ve found for writing in reviews was taking a leaf from one of Pixar’s books. Ask yourself, “How do I plus-one this?”. Even if the code looks fantastic to you, it’s worth taking the time to read it through thinking, “Yeah, it works, but how could we make it better?”. You don’t need to go overboard here – in many code reviews I honestly just say “Yep, looks good to me” – but it’s still worth the thought.

Code reviews – why

Code reviews are something I’ve been doing professionally for ages. It wasn’t until I was actively building development teams from the ground up that I took the time to really nail down the details of what I was trying to do and why. Nailed it down enough to explain it to someone else, anyway.

In this post, I’ll go over my perspective on the “why” of doing code reviews, and I’ll follow up with another post on the “how”.

I’m writing this with the idea that you’re not a solo developer, and that you’re probably working with a team of 2 to 8 people (or even more) to collaboratively develop something. Maybe you’re building a new dev team, or maybe you’re just looking for some details on how to get better at your craft.

What is a code review?

I hope it’s no longer the case, but at one point, when I heard “code review”, it gave me the impression of a bunch of grognards with neckbeards sitting around a table, ripping apart your code and mocking you personally for a weak attempt at development. Or maybe that just shows my earlier insecurities, I’m not sure. That’s not what a code review is, or should be, though.

A code review is having one or more people look over a bit of code – proposed changes, a project, or even a library you’re considering using – with the goal that at the end of the development process, your product will be better for it.

Why even do a code review?

The highest-level goal of a code review is to make your product better. There are a lot of ways that can happen, and I’ll detail some of them below. Knowing why you’re doing code reviews, and talking with your peers or team about why you would want to do them, is a good idea. It’s not just some random process to make things better; it has a goal, and the more everyone understands that goal, the better you and your team will be at achieving it.

The most obvious benefit is verifying correctness: having another person double-check that what’s been created does what you say it does. When I’m writing code, I get these notions about what the code is doing and what I’m intending it to do, and what I write usually matches that – but it’s easy to miss little things in the process. If you’re working with a stricter language, maybe you’ve got a compiler to help keep you on the straight and narrow. Hopefully you’re using continuous integration and have enabled some tooling to help with static analysis and style checking. But when it comes down to the nitty-gritty of the code, tooling is only so smart – like autocorrect on smartphones, it’s pretty dumb about context and will miss quite a bit.

If you’re code reviewing something from outside your team – maybe a library you’re about to include that you found on npm or GitHub, or that your buddy told you about over a beer – then the review is more about doing what I call code archaeology: figuring out what it’s meant to do (which hopefully you know), and how it’s really doing it. I read code all the time, even code that I don’t use; there are always lessons to be learned there. But I digress – this is mostly about why you’re doing code reviews within your team.

The second benefit is functional knowledge sharing, or what I call “improving the bus factor”. On any team it is natural to have folks who get good at specific topic areas – a functional component, a design, or something else. But you don’t want to be stuck if that person leaves, gets sick, or is just unavailable; then you’re in a jam, and it’s not easy to work through what’s happening. Yeah, you can always task someone to go read through the code after the fact, but then you’re already down a notch in that you can’t ask “why?”, or “what did you intend for this to be doing?”.

You can in a code review – and if you don’t understand what you’re reading, you should ask. The huge side benefit of this knowledge sharing is cross-training and learning from each other. “Hey, that’s a neat way to do that…” is something I love to hear (or read) when one developer is reviewing another’s code. Likewise, if you read through something that you think is a quick hack and would be better done another way, you can talk it through. Maybe you need the quick fix to meet a deadline, but at least it’s a known issue, not a hidden, ticking time bomb that would otherwise be missed.

On a more positive side, developers reviewing each other’s code is somewhat like jamming together musically. You get the idea of each other’s styles and riffs, and the more you do it, the easier it is to anticipate how to work together and seamlessly spin out some interesting melody.

Along with actively learning from each other, it’s a means to grow into how to work with one another, and to a large extent that’s the key benefit of a code review – it’s a point and means of actively spreading your development culture. Patterns of development, evolving architecture, coding conventions, and even social conventions are easily shared during a code review.

To summarize, the reasons you’re doing code reviews:

  • Verifying correctness
  • Functional knowledge sharing
  • Actively spreading your development culture

In my team’s post-mortems and reviews, we often talk about what we’re looking for in code reviews, and how we might want to change this or that aspect of it. Maybe it’s keeping an eye to encourage better unit testing, or making sure we keep the interfaces between two components strict so we can refactor one component or the other more easily.

Wading into the deep end with NodeJS

A year ago, I was thinking I wanted to know more about NodeJS; now we are actively developing a product based on it. I got a lot of great help and insight from the Seattle NodeJS meetup over the last year, so back in December I offered to pay back a bit and do a talk on our “lessons learned”.

The end result is a relatively short presentation on lessons learned diving into NodeJS. Much to the consternation of one of my friends, I’ve really enjoyed diving into the depths of NodeJS and learning how to really get things done with it.

There are frustrations with it, to be sure – honestly, most of that is it simply being a young ecosystem – but I’ve found it to be enormously effective too. I’ll have more posts up about NodeJS and things I’ve learned while using it later…


This blog has been quiet – well, dead frankly – for nearly two years. Not all of that is accountable to technical difficulties, but I was self-hosting WordPress for ages, and the last hosting service I used had frequent issues with the DB backend. I finally resolved that this past weekend – by moving to hosting on wordpress.com.

I’ve been spending most of this last year developing in NodeJS, and I’m starting to think it’s time to also kick in some self-learning with Swift. For most of the little one-liners and such, I’ve been doing what the rest of the world has been doing – posting them to Twitter or Facebook. But while they’re nice and “social”, those platforms are terrible for writing longer-form pieces. And that’s what I’m hoping and aiming to do over the coming months – start posting a bit more “how-to”, like I used to (and as remains in the archives).

I’ve got to say, I’ve actually been a bit inspired to do this by Khan Academy. Curious for ages, I’ve been taking some of their advanced math courses to refresh myself on topics that I thought would be fun to learn on the side. I’m in the process now of trucking through linear algebra, and that got me all excited about not only learning, but maybe getting a bit back into teaching.

We’ll see how it bears out in reality, but I’ve already got some topics in mind that have come up as I’ve been jumping back and forth between language ecosystems (Python, NodeJS, Objective-C, and now Swift).


Yeah, ages since I posted here. Not sure anyone is even reading here anymore, but if you are, well – you’ll be surprised to see a new entry in the RSS feed or however you’ve kept track of this otherwise dormant feed.

It’s the day after Thanksgiving, and Karen and I were talking about all the various decisions we’ve made leading us to today: living in Seattle, a great little house, doing things we love. Karen described it, stretching back two decades, as generally “erring on the side of adventure” – moving to Seattle (now 13 years ago), leaving Singingfish/Thompson/AOL, joining Docusign, leaving Docusign, joining up and working for Disney, in turn leaving Disney, etc. Nothing we’ve done has been a sure bet. Lots of them were “pretty out there” in terms of “can it, or will it, even work out?”.

Probably the strangest part to me is that I tend to think of myself as risk-averse. I’m sure there’s plenty of my family that would smack me upside the head for that. We’ve taken quite a number of flyers, and the sum total of the game has been pretty darned good. We definitely have a lot to be thankful for.

OpenStack docs and tooling in 20 minutes

I’ve gone through the routine several times now, so I decided to make it easy to replicate, to help some friends get started with all the tooling and setup needed to build, review, and contribute to OpenStack documentation.

I’m a huge fan of CloudEnvy, so I’ve created a public GitHub repository with the envy configuration and setup scripts to set up a VM and completely build out all the existing documentation in roughly 20-25 minutes.

First, we install cloudenvy. It’s a Python module, so it’s really easy to install with pip. My recommended installation process:

pip install -U cloudenvy

If you’re working on a Mac laptop (like I do), you may need to use

sudo pip install -U cloudenvy

Once cloudenvy is installed, you need to set up the credentials to your handy-dandy local OpenStack cloud (y’all have one of those, don’t you?). For cloudenvy, you create a file in your home directory named .cloudenvy akin to this:

cloudenvy:
  clouds:
    cloud01:
      os_username: username
      os_password: password
      os_tenant_name: tenant_name
      os_auth_url: http://keystone.example.com:5000/v2.0/

Obviously, put in the proper values for your cloud.

Now you just need to clone the doctools envyfile setup, switch to that directory, and kick off Envy!

git clone https://github.com/heckj/envyfile-openstack-docs.git
cd envyfile-openstack-docs
envy up

20-25 minutes later, you’ll have a virtual machine running with all the tooling installed and run through, and the output generated for all the documentation in the openstack-manuals repository. The envyfile puts all this into your virtual machine at


To get there, you can use the command envy ssh to connect to the machine and do what you need.

For more on the how-to with contributing to OpenStack documentation, check out the wiki page https://wiki.openstack.org/wiki/Documentation/HowTo.

Do photons have mass?

My grandmother in Burlington, IA had this massive house overlooking the Mississippi. In the windows, she had these ornaments – little glass bulbs with pinwheel-looking things in them that spun and spun and spun in the sunlight streaming through the huge windows overlooking the river.

Years later, in college, I learned that the window trinket was a Crookes radiometer – a classic science experiment regarding photons having mass. I saw one on ThinkGeek some time ago, and got one for my house: