How To: Debugging and Isolation

There are a number of skills that I think are critical to being a good engineer. Although my 20-year-old self would have scoffed at the idea, “clear, correct and concise communication” is probably the most important one. As a team gets larger, this gets harder – for everyone. Sharing ideas and knowledge, and communicating across groups that do things differently, or that just have a different focus, can be, and often is, a challenge.

One of the places I see this is at the boundary between folks who’ve written or created some code (and implicitly know how some or all of it works) and folks who are trying to use that same code – but without the knowledge. They’re learning, trying things out, getting confused, and often asking for help.

A conversation often solves that pain point, but someone needs to have the knowledge to have that conversation. Or maybe it’s not working the way you expected, and you’re now in the state where you need to figure out how someone else got a surprising result. Either way, you need a way to do this learning. That is where debugging and isolation come in – they are techniques for exactly that.

Here’s the dirty little secret: almost nobody, developers and creators of the systems included, keeps perfect knowledge of how a system works in their head. Everyone uses this technique, even if they don’t call it that. Knowing it, knowing about it, and using it intentionally is a skill every engineer should have.

In my head, I call this “The game of figuring out how this thing works”. It’s a kind of game I’ve loved since I was a kid, which probably goes a long way to explaining me. Truth is, we all do this as kids, although wrapping names and a formal process around it isn’t something that most folks do. Point is, you already know how to do this – it’s just a matter of practicing to get better, and applying it intentionally.

Playing this game is simple; here’s how it works (there’s a quick sketch of the loop in code just after the list):

  • describe the thing you’re trying to figure out
  • make a guess (create a hypothesis) about “if I do XXX, then YYY will happen”
  • try it and see!
    • if it didn’t work, make another guess
    • if it did work, add that to your knowledge
  • rinse and repeat, cook until done, etc

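To make the loop concrete, here is a minimal sketch in Python. The parse_price() helper is a hypothetical stand-in for whatever you’re actually poking at; the shape of the loop is the point, not the function:

```python
# A minimal sketch of the hypothesis-test loop, with a hypothetical
# parse_price() standing in for the system you're trying to figure out.

def parse_price(text):
    # the "thing" under investigation -- we don't fully know its behavior
    return float(text.replace("$", ""))

# Each hypothesis is a guess: "if I do XXX, then YYY will happen".
hypotheses = [
    ("$10", 10.0),       # plain dollar amount
    ("$10.50", 10.5),    # includes cents
    ("$1,000", 1000.0),  # thousands separator -- will it cope?
]

for given, expected in hypotheses:
    try:
        actual = parse_price(given)
        result = "confirmed" if actual == expected else f"surprise: got {actual}"
    except Exception as exc:
        result = f"surprise: raised {exc!r}"
    print(f"{given!r} -> expect {expected}: {result}")
```

Every “surprise” is a spot where your mental model and the actual behavior disagree – exactly where the next round of guessing should focus.
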
This first sequence is the portion called debugging. It works, but getting to a full description of everything you can do, and every reaction that could come out, can be a time-consuming affair. What you’re doing in your head is building up a model of how the thing works – in essence, you’re reverse engineering it. And yeah, if you clued in that this is exactly the same thing as the “scientific method”, you’re on the right track.

Note that I called it “simple”, not “easy”. This particular game can be very frustrating, just as it can be incredibly rewarding. There are doctoral theses and research papers on ways to do this better, specific techniques to use in specific situations, and so on. In the software engineering world, there’s also tooling to help get you the information you’re after in an easily consumable form. But that tooling isn’t always there, and it isn’t distributed evenly. Recognize that this can be genuinely difficult. It is worth calling that out, because it frequently seems like anything simple should also be easy.
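
In Python land, for example, even the standard library gets you a long way toward “the information you’re after” – a bit of logging, or a debugger breakpoint, can replace a lot of guessing. A tiny sketch:

```python
# Two of the simplest "get me the information" tools in Python's stdlib.
import logging
import pdb  # the built-in interactive debugger

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("experiment")

def transform(record):
    log.debug("input record: %r", record)   # capture what actually arrives
    result = {k.lower(): v for k, v in record.items()}
    log.debug("output record: %r", result)  # ...and what actually leaves
    return result

transform({"Name": "Ada", "Role": "engineer"})
# pdb.set_trace()  # uncomment to drop into the debugger right here
```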

In the debugging I described at the top of this article, you’re doing this as a precursor to figuring out how to make something work the way you expected it to work, instead of the way it is currently working. Or maybe you’re trying to reproduce something someone else saw, but which doesn’t happen for you. Whatever the specific goal you are trying to achieve, the basic process is the same.

So what’s this isolation? Isolation is a way to make debugging easier and faster. The whole idea behind isolation is that you can often take a problem and break it down into subproblems that can be solved independently. Said another way, most “things” are made up of components that are assembled together, and you can often use this structure to reduce the scope of what you’re guessing about.
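
One classic form of isolation, sketched below, is bisecting a pipeline of components: if you can check the result at each boundary, every test halves the territory you’re guessing about. The step names and the is_good() check here are hypothetical stand-ins:

```python
# A minimal sketch of isolation-by-bisection over an ordered pipeline.
# Assumes failures are "monotonic": every step before the culprit checks
# out, and every step from the culprit onward looks broken.

def first_bad(steps, is_good):
    """Binary-search the pipeline for the first step that misbehaves."""
    lo, hi = 0, len(steps) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_good(steps[mid]):
            lo = mid + 1  # this step checks out; the problem is later
        else:
            hi = mid      # broken here or earlier; narrow to this half
    return steps[lo]

steps = ["read input", "parse", "transform", "render", "write output"]
broken_from = steps.index("transform")  # pretend "transform" is the culprit
print(first_bad(steps, lambda s: steps.index(s) < broken_from))
# -> transform, found in two checks instead of up to five
```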

Most things we build are like this, and we tend to naturally think about things in hierarchies and layers. Sometimes just figuring out what something is made of is its own trick (that would be why we’re smashing atoms together: to break them apart and figure out what they’re made of), but when you’re figuring out a software problem you have the benefit of some nice clean boundaries on that problem. Well, hopefully clean boundaries. Often the structure is written down, or you can ask someone, or, if it’s open source, you can read the code.

I go back and forth between isolation and debugging techniques. I start off making sure that I understand how something is put together by coming up with my tests and verifying them, and when something doesn’t work as I expected, I break it down to the smallest piece of the puzzle that isn’t behaving the way I expected.

Keep a piece of scrap paper around while you’re doing this. It’s my cheap version of a scientist’s lab notebook. I personally love butcher paper and a few colored pens, or a good whiteboard – but I mostly make do with a legal notepad and whatever pen happens to be at hand. Draw diagrams of what you think the system breaks down into. Then draw how the pieces are interacting. Don’t try to keep it all in your head. Just the act of drawing it out and scribbling on a piece of paper can help organize your mental model of what is happening and how the parts are connected.

There is a side bonus to scribbling down notes and diagrams as well: when it comes time to communicate with someone else, you have something you can share. I’m a very visual person – I like to describe with diagrams, and sometimes a prodigious amount of hand-waving – showing movement, focus of conversation, etc. In general, sharing a visual representation of what you’re talking about can be a great way of helping explain what you’ve learned to someone else.

There are plenty of formal ways to do this drawing (UML diagrams such as layer diagrams, sequence diagrams, etc.). While you are doing this for yourself, draw in a way that is meaningful to you. The notes are for you first and foremost – if you want to share this information, you can clean it up after the fact. Bubbles with lines connecting them is one style I like to use – scrawling graphs if need be. Sometimes layers and boxes make more sense. Most of the time, it’s the interactions between components – their sequences and what happens next – that you’re diagramming and trying to figure out (this is where different colored pens come in handy – plus, who doesn’t like lots of colors in a diagram?).

At the end of this process, you are probably trying to share this detail with someone else. I would encourage you to use the adage that was drilled into me by physics professors in college: show your work. There is nothing wrong with taking a photo of the notebook page where you figured out where something wasn’t working as you expected, and including that along with a write-up describing what you found and what you’re after.

Most systems these days make it easy to include pictures as well as text – bug reports and pull requests in GitHub, for example. I have a set of employees who just absolutely adore putting this kind of thing in PowerPoint. Honestly, whatever works and can be shared with someone else is good. There is a whole set of skills around communicating clearly, but just getting the process started is more than half the battle.

Leadership skill: moderation

Coffee conversation threaded through a number of topics this morning, mostly flitting around diversity and how it is a continued challenge – in technology fields especially, but really it’s everywhere. A post that has stuck with me is Anne Halsall’s When Women Engineer on Medium. If you haven’t seen it, take a moment and give it a solid read-through. It is an excellent post looking at the crossover of sexism and how most companies have a very male-oriented bias in how they interpret, well, everything.

When I first read Anne’s post, one of my first thoughts was “Hey, some of what you’re seeing isn’t just male sexism, it’s corporate-standard stupidity in dealing with different personalities.” There is some of that in there, but Anne reiterated that what she described wasn’t just a failure to deal with different personalities.

One of the skills critical to leadership is being an effective moderator. The reasons apply to some of what Anne saw, but extend through different personalities and, as I’ve concretely learned over the past 18 months, moderation is an excellent tool to help overcome significant cultural differences. “Overcoming” here means communicating clearly – not forcing a change in the interaction styles that come from different cultures.

Moderating is not just about making sure everyone obeys some basic politeness rules. My best metaphor right now is that it’s like working a complex sound engineering board where each of the people in the conversation is one of the inputs. Some are darned quiet, some are loud – and leveling those out as best as possible is an obvious part of the effort. But there’s also the communication that may come across as garbled. What may be less apparent is the need to be not only willing, but proactive, in asking clarifying questions or otherwise changing the flow of the conversation to get to a full understanding. The cues for this can be really darned subtle – a raised eyebrow, furrowed brows, or other more whole-body expressions of surprise – or the tone of the conversation a few minutes later.

A moderator has implicit (sometimes explicit) control of the conversation, and establishing that up front – often as simple as saying “I’ll be moderating this meeting” – is important. You can’t be an effective moderator without it, and there are definitely times when it’s forgotten. You may need to wait out a particularly long-winded or excessively vocal individual. I personally try not to engage in the “shout them down” tactic, but I’ll admit to having used it too. Honestly, when I’ve had to do that, I figure I missed some better method – it’s just so… inelegant.

There is also a bit of a social contract that you are agreeing to when you’re moderating: that you won’t be injecting your own hidden agenda into the conversation. That is, you won’t be using the conversational control to your own “evil” ends. Hidden is the critical word here – all conversations have a purpose, so making it explicit what that purpose is up front – calling it out before any in-depth conversation has happened – is a good way of getting that into everyone’s heads. From there, it’s paying attention to the conversation flow and the individuals, and guiding it to try and achieve those ends. You may have to reiterate that purpose during the conversation – that’s OK. Plenty of good stuff is found in the eddies off the main flow – don’t stop it entirely, but have a sense of when it’s time to get back on track.

You might have read that good meetings all have agendas. I’m generally in agreement, and one of the formulas I try to use is starting off any conversation with the agenda and goals. That helps in immediately setting the stage for moderating the conversation: in doing so, you have implicitly asserted that you’re paying attention to that purpose and goal. In the “conversation as a flowing river” metaphor, you stepped up to the rudder and said you’d pilot the boat.

This applies all over the place in discussions – from a scrum daily standup meeting (which is nicely formulaic, even though I tend to repeat what I’m looking for in the status updates), to design brainstorming, to technical discussions of “the best way to thread that Möbius needle”.

One of the characteristics that a moderator has to have (or develop) is the willingness to engage in what may be perceived as conflict. That is, you have to be willing to step in, contradict, and stop or redirect a conversation. Growing up, I was a pretty conflict-averse person – so much so that I’d let people walk over me, and walk over a conversation. I had to really work on that: to be willing to step into a conversation, to signal with body language or conversational gambits that we needed to stop and/or re-route the flow of conversation. And yes, you’ll hit those times when all of those mechanisms fail – when emotion has gotten so heated that someone is just raving on – and the only thing you can do is stop the conversation entirely. It may even come across as terrifically rude, but the best thing you can do is get up and step away from the conversation. Sometimes that means physically walking out of the room. Another choice is to let the individual who’s really going on just exhaust themselves, but recognize that the conversation may be best set aside for the time being, or the problem/agenda may need to be reframed entirely.

As a side note, engineers – often male engineers – can be notoriously obtuse or just outright ignorant of body language cues in conversation. Most of the time I think it is non-malicious, but there will be people who intentionally ignore body or even verbal cues in order to continue their point or topic, or to ignore or override others trying to be involved.

A skill I have personally focused on developing (and which I recommend) is the ability to take notes while moderating a discussion. It may sound to you like “How on earth would you have TIME to take notes as well as moderate?!” The answer: make the time. I am willing to ask for a pause in the conversation while I’m taking notes when I get backed up a bit, and writing down the perspectives and summaries of what people are saying helps me externalize the content of what has been said. It actually makes things clearer to me, as the very process forces me to summarize and play back what I thought I heard. I’ve found more than a few instances of garbled communication by writing it down: when I heard it, I thought I had internalized it, but when I tried to write it down, I realized it wasn’t making sense. And then there is the side benefit of having written notes of the meeting, which I recommend saving – not everyone will remember the conversation, and some interested folks may not have been a part of it.

To reiterate, moderating is a skill that I view as critical to good leadership. If you’re leading a team, formally or informally, think about how you can apply it. Think about it, and DO it. It’s one of the ways a leader can “get roadblocks out of the way”, and if you’re aspiring to lead teams, it’s something you’d do well to invest your time in learning.

Neural network in your pocket

One of the most interesting announcements at WWDC has mostly flown under the radar: an extension to the already impressive Accelerate framework called BNNS. Accelerate, if you don’t know it, is a deeply (often hand) optimized low-level library for computation that’s set up to leverage the specifics of Apple’s platforms. It’s the go-to library for numerical computation, linear algebra, matrix manipulation, and the like, for speed and efficiency.

BNNS stands for “Basic Neural Network Subroutines” and includes optimized routines for the key pieces of setting up and using convolutional neural networks. The intent is to provide efficient routines for putting together enough of a neural network to classify or recognize. Notably, it doesn’t include the functions you would use for training a neural network, although I’m sure it’s possible to manually add the code to do exactly that.

When I heard about BNNS and watched WWDC Session 715, I immediately wondered what it would take to start with something in Keras, TensorFlow, or Theano: construct the network and do the training with those libraries, and then port the network and trained values into a neural network constructed with the BNNS routines. I suspect there’s a reasonably straightforward way to do it, but I haven’t dug it out yet.

Fortunately, recognition or inference is relatively “cheap” computationally – at least compared to training the networks. So I expect there is good potential to train externally, load the result as data into an iOS application, and then do the recognition on the device. If you find examples of anyone doing and showing that, I would appreciate a comment about it here on the blog. Considering that training a large, diverse network can eat up the equivalent of a couple of racks of high-end-gaming-machine-quality GPU systems, training isn’t something I’d look at trying to do on an iOS device.

While I think I understand most of the common pieces of building neural networks, I can’t really claim that yet, as I haven’t tried to teach it to someone else, or used it “in anger” – when I needed to get something specific done NOW. Either of those is what usually has me hitting the subtle roadblocks and potholes that lead to “Ah-ha!” moments of deep understanding. This may be a good side project: pick a couple of easier targets and see where I can get to. Maybe grab MNIST digit recognition or something, see what I can train with Keras or TensorFlow, and then figure out how to translate and port that into a classifier for iOS using BNNS.
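
As a starting point, here’s roughly what the training half might look like – a sketch assuming Keras on the TensorFlow backend. The BNNS side isn’t shown; the exported weight arrays are the hand-off point I’d expect to port from:

```python
# A rough sketch of "train externally": a tiny MNIST classifier in Keras
# (TensorFlow backend assumed). The exported weight arrays are what a
# BNNS-based classifier on iOS would need to load -- that porting step
# is the part I haven't worked out yet.
import numpy as np
from tensorflow import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
print("test loss/accuracy:", model.evaluate(x_test, y_test))

# Dump each layer's weights and biases as raw arrays -- the pieces an
# on-device classifier would reconstruct with the BNNS routines.
for i, array in enumerate(model.get_weights()):
    np.save(f"layer_{i}.npy", array)
```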


Marathon, Kubernetes, Docker Compose, Terraform, Puppet, Chef, Ansible, SaltStack, and Juju

Yesterday, I wrote about fully autonomic services. It’s been on my mind for months now – more accurately, years. To get to the end state of a self-managing service, there’s a ton of knowledge that needs to be encapsulated in some form. As we represent this knowledge, we tend to capture it in one of two forms – imperative and declarative – often built up in layers.

Programmers deal with this in their day to day jobs. They’re imparting knowledge, putting it into a form that systems (computer languages, libraries, and frameworks) can use. The expression of that knowledge, and how it relates to what we’re trying to do, is the art and science, the essential craftsmanship of programming.

Imperative and declarative weave back and forth in the expression of programming. While it’s really something of a “chicken and egg” argument, I’d say we often start with imperative forms of expression, especially while we’re exploring all the different ways we could solve the problems and learning from that. As an approach starts to become common – as we “commoditize” or standardize on ways of doing it – we create new language (sometimes new computer languages!) to represent that learning: we turn it into a declarative form. Sometimes this is in the form of libraries or frameworks, sometimes it’s in the form of layers of software, encapsulating and simplifying.

To create a fully autonomic system, we need to capture information about how to do the things we want the service to do. The challenge in this space is twofold: capturing that knowledge, and coming up with good ways to frame the problems. If you look at Docker Compose, Kubernetes, and Marathon, each presents a means of thinking about this problem. They define declarative mechanisms to describe the programs/services they’re managing. Marathon calls them ‘applications‘, Kubernetes calls a similar concept ‘deployments‘, and Compose uses the ‘docker-compose‘ file as that declaration, sort of avoiding a word like ‘application’ or ‘deployment’ altogether. Terraform calls the concept ‘configurations‘, and Juju calls it ‘charms‘. They’re all declarations of what to run and how the pieces relate to each other in order to provide some specific software service.

One of the most notable differences in these systems is that Compose, Marathon, and Terraform all stop at the level of declaring the structures, letting plugins or “something else” orchestrate the coordination for the interesting tasks like a red/green upgrade or a rolling upgrade, where Kubernetes is taking a stab at including a means of doing just that within its domain of responsibility. The implication in the case of Kubernetes is that developers deploying services with it will learn (or know) how the system works and develop code within those constraints – in programmers’ terms, use it like a framework. Marathon, in contrast, expects a developer to tell it what to do – to use it more like a library. Kubernetes is far more opinionated in this respect, and in large part betting on the knowledge its development community already has to prove that it got the right level of abstraction nailed down.

A notable difference separating Marathon and Kubernetes from Terraform and Compose is that the former two take on a responsibility to keep running and “keep an eye” on the ‘applications’/’deployments’ they’re responsible for.
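
To make that distinction concrete, here’s a toy sketch (plain Python, not any real API) of the reconciliation-loop pattern that Marathon and Kubernetes embody: continuously compare the declared state with the observed state and correct the drift, rather than setting things up once and walking away:

```python
# A toy reconciliation loop: the declarative spec says what *should* be
# running; the loop's job is to notice and repair drift. The names and
# counts here are made up for illustration.
import time

desired = {"web": 3, "worker": 2}   # the declaration: replica counts
observed = {"web": 3, "worker": 1}  # what's actually running right now

def reconcile(desired, observed):
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            print(f"{name}: {have}/{want} up, launching {want - have}")
            observed[name] = want   # stand-in for "start instances"
        elif have > want:
            print(f"{name}: {have}/{want} up, stopping {have - want}")
            observed[name] = want   # stand-in for "stop instances"

for _ in range(3):      # a real control loop runs forever
    reconcile(desired, observed)
    time.sleep(1)       # real systems also react to events, not just polls
```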

Puppet, Chef, Ansible, and SaltStack are all focused on the world of configuring services within a single virtual (or physical) machine – the “install and configure” and “start it running”. The concepts they were built on have a responsibility that stops at the boundary of getting the virtual (or physical) machine set up, and don’t include the concept of handling a failure of the machine itself. Keep in mind that this class of system was created before on-demand cloud computing was a reality, when the idea of asking for another machine wasn’t a few seconds of work behind an API, but days, weeks, or months of work. In a general sense, what they made declarative was the pieces within a machine, and that didn’t expand to the realm of stitching a lot of machines together.

For the container versus virtual machine divide: Yes, it’s possible to use Puppet, Chef, and Ansible to configure containers, but I’d easily argue that while you can also drive a screw into the wall with a hammer, that doesn’t make it a terrifically good idea or use of the relevant tools.

As a side note: SaltStack stands out in this space with the concept of a reactor, so it’s intentionally sticking around and listening to what’s happening from within the VM (or VMs) that it’s ‘responsible for’.

The stand-out challenge that I keep in mind for this kind of automation is “How can it be used to easily install, configure, run, and keep running an instance of OpenStack on 3 to 50 physical machines?” It doesn’t have to be OpenStack, of course – and I don’t mean “an IaaS service” so much as a significantly complex application with different rules and challenges for the different parts needed to run it. I spent two years at the now-defunct Nebula doing exactly this, and it’s a challenge that none of these systems can completely solve by themselves. It’s why “OpenStack Fuel” exists as a project, encapsulating the knowledge that’s otherwise represented in Puppet declarations and orchestrated externally using something they created called “Railgun“.

Another side note: this particular challenge has been the source of several companies (Nebula, Piston, and Mirantis) and the failure of many enterprise installations of OpenStack, as just getting the bloody thing installed is a right pain in the ass.

Kubernetes, Marathon, Compose, and Terraform wouldn’t stand a chance at the ‘OpenStack’ challenge, primarily because they all expect an IaaS API to exist which can give them a “server” when they need it, or they work at the level of containers, where a container API (Docker, rkt, GCE, etc.) can spin up a container on request. The concept of the challenge is still useful there – it probably just needs another form for a real example. Take a look at Netflix’s architecture and their adoption of microservices for another seriously complex use case that’s a relevant touchstone. Every company that’s seriously providing web-based services has these same kinds of complexity and scale issues, and each service’s architecture is different. Making those architectures a simpler reality is what these systems are after.

UPDATE:

That’ll teach me to write such a review the day before DockerCon. This morning, the Docker responsibilities and capabilities changed. :-)

As of Docker 1.12, it looks like Docker Compose has been folded into the core, and Docker is heading to take on some of the same space as Marathon and Kubernetes, calling that feature “Docker Stacks and Distributed Application Bundles“. It’ll take me a bit to review the new material to see where this really lands, but I’m not at all surprised that Docker is extending their essential product responsibilities into this area.


Fully Autonomic Services

Fifteen years ago, IBM introduced a term into the world of computing that’s stuck with me: Autonomic Computing. The concept fascinated me – first at the most basic level, simple recovery scenarios and how to write more robust services. That interest led to digging around in the technologies that enable seamless failover, and in more recent years into distributed systems and managing them – quorum and consensus protocols, how to develop both with and for them, and the technologies that are getting quite a bit of attention in some circles for managing sets of VMs or containers to provide comprehensive services.

On a walk this morning, I was reflecting on those interests and how they have all been on a path to fully autonomic computing – a goal of self-managing services: services with human interfaces that require far less deep technical knowledge to make their capabilities available. Often that deep knowledge was me, so in some respects I’ve been trying to program myself out of a job for the past 15 years.

Ten years ago, many of my peers were focused on CFEngine, Puppet, and later Chef: “software configuration management systems”. Data centers and NOCs were often looking for people “skilled with ITIL” and knowledgeable in “effective change management controls”, with the intrinsic goal of being the humans who provide the capabilities that autonomic services are aimed at providing. The technology has continued to advance – plenty of proprietary solutions I generally won’t detail, and quite a number of open source technologies that I will.

Juju first caught my attention, both with its horrifically stupid name and its powerful capabilities. It went beyond a single machine to represent the combinations of applications and back-ends that make up a holistic service, to set it up and run it. It was very Canonical, but also open source. More recently, SaltStack and Terraform continued this pattern with VMs as the base content – the unit of distribution – leveraging the rise of cloud computing. Many years before this, the atomic unit of delivery was an OS package, or maybe a tarball, JAR, or later WAR file – all super specific to the implementation of whatever OS or, in the case of JAR/WAR, language. Cloud services finally brought the compute server (VM) as a commodity, disposable resource into the common vernacular, and Docker popularized taking that “down a step” to containers as the unit of deployment.

Marathon and Kubernetes are now providing service orchestration for containers, and while I personally use VMs most commonly, I think containers may be the better path forward, simply because I expect them to be cheaper in the long run. The cloud providers have been in this arena for a while – Heat in OpenStack as the obvious clone of Amazon CloudFormation, a variety of startups and orchestration technologies that solve some of the point problems around the same space, and the whole hype-ball of “serverless”, leveraging small bits of computing responding to events for an even greater level of possible efficiency.

Moving this onto physical devices that install into a small office, or even a home, is a tricky game, and the generalized technology isn’t there, although there are some clear wins. This is what Nutanix excels at – small cloud-services-in-a-box. Limited options, easy to set up, seamless to use, commodity price point.

Five years ago I was looking at this problem through the lens of an enterprise “service provider” – what many IT/web-ops shops have become: small internal service providers to their companies, frankly competing on some level with Amazon, Google, and Azure. I was still looking at the space in terms of “What would be an amazing ‘datacenter’ API that developers could leverage to run their services?” “Where are the costs in running an enterprise data center, and how can we reduce them?” was another common question. I thought then, and still tend to believe, that the ultimate expression of that would be something like Heroku, or its open source clone/private enterprise variant: Pivotal Cloud Foundry. Couple that kind of base platform with various logging and other administrative capabilities supporting your services, and you remove a tremendous amount of cost from the space of managing a datacenter – at least when applications can move onto it, and therein lies a huge portion of the crux. Most classic enterprise services can’t move like that, and many may never.

In the past several years, I’ve come to think a lot more about small installations of autonomic services: the small local office with a local physical presence, running on bare metal, to be specific. In that kind of setting, something like Kubernetes or Marathon becomes really compelling not in the large – crossing an entire datacenter – but in the small, focusing on a single service. Both go beyond “setting up the structure” the way Terraform does, and like a distributed init.d script or systemd unit, they actively monitor what they’ve started, at least on some level. Neither open source platform has really stitched everything together to the point of reacting seamlessly to service levels, but it’s getting pretty damned close. With these tools, you’re nearly at the point where you can have a single mechanism that creates a service, keeps it running, upgrades it when you have updates, scales up (or down) to some extent, and recovers from some failures.

We’re slowly getting close to fully autonomic services.

Open Source is the key to industry collaboration

Saturday morning, and I’m at my usual haunt reading through science papers, news, and other aggregates of information. Most of what I’ve subscribed to is related to science – much of it computer science related – but also technology and business. One of the articles I ran across this morning caught my eye: Nicholas Negroponte says Apple is not helping the tech industry.

Given that WWDC is next week, and I’ve long been a very happy consumer of Apple’s technology innovations, I was curious what stones the “One Laptop Per Child” creator was throwing at Apple. (Raspberry Pi, by the way, kicked OLPC’s ass in low-cost – although not low-power – computing innovation.) I don’t know if this is just the way Jenni Ryall wrote the line, or if it really comes directly from Negroponte, but the line that immediately caught my eye was:

He claimed that in 20 years, the company has not written one research paper or attended any external research meetings, such as working groups, government-funded workshops, or held their own onsite research meetings with external scientists in the way Google, Microsoft and Facebook often do.

It’s a patently false assertion, given the company’s noted participation in web standards efforts – even while Steve Jobs was still running the company – including what I consider one of the most effective means of collaboration: open source, with WebKit. Apple is going even further these days, an example of which is the Swift programming language (https://swift.org). While the company still has secrets and is noted for maintaining that secrecy, it is also more open than it’s ever been.

There are other companies that are more open: Google immediately comes to mind, and Microsoft with Satya Nadella as CEO is making some damned impressive inroads in that direction as well. But the most notable part of what I see as effective, serious collaboration isn’t meetings with scientists and attending industry forums – it’s creating and actively participating in open source.

When I think of industry collaboration or research meetings, I usually think of standards committees and industry collaboration forums. It may be a limit of my own imagination and experience, but my viewpoint is that these groups make a small problem into a tangled rat’s nest of overcomplexity. The results of those collaborations tend to be a morass of group-thought, half-solved problems, and “business opportunities” cited by architecture astronauts for implementing the “defined” APIs or logical constructs. The good ones go all the way to operable demos. Almost none actually get to the level of vetting interoperability.

If you want to drive an open standard or collaboration, share the ideas, API definitions, AND an implementation. It is an intentional choice to commoditize, to agree on how to collaborate or interact. To really get there, you need to do it, not just talk about it. Get it out there with concrete code. The interoperating implementations highlight what *actually* works, and what doesn’t – and people finding it useful to solve problems is the true measure of value.

An Open Letter to King County Metro

This afternoon, we spotted these signs in the Queen Anne neighborhood on the King County Metro Route 4. We purchased our house with the benefit of this route being so close and convenient, and the King County Metro system has been proposing to remove this specific route for quite a while.

My letter to Jack Whisner follows:

Good afternoon Jack,

Per the posted notices on the Queen Anne route 4 stops, I want to protest the loss of this loop of service in the strongest possible terms. Reducing the Queen Anne Service routes to just a route 3 is in my mind a tremendous loss of service to the neighborhood and the services in this neighborhood. In addition to the John Hay Elementary School that this route serves, there’s several handicapped homes that take extensive advantage of the bus route service, and in my past 15 years of using this transit route, the ridership has been increasing consistently – right up until the point that King County Metro changed the layover and forced all the participants of the route to “get out and walk the rest of the way” at Queen Anne and Blaine St, earlier this year.
I was annoyed, but accepting (for the good of the drivers, as it was posted) of this change previously, but now I simply perceive it as yet another inconvenience put in place to make it appear as though King County Metro is “doing us a service”, when in fact the service is decreasing and getting consistently worse compared to years past.
I’ve been a strong and active advocate for the King County transit system, and specifically advocated in the neighborhood and generally for all measures supporting the Transit service and advances in funding. That will absolutely end, and in fact completely reverse, as I’ve no interest in supporting a service that doesn’t support the walking neighborhoods of Seattle that I know and love, instead focusing on the massive hops and packing the busses ever more full to have a metric of “increased ridership”, when in fact the service availability for neighborhood residents around the core of Seattle has decreased.
The route 4 is a staple of the Queen Anne neighborhood and should remain open, and King County should undo the layover change that moved the end-of-route from Nob Hill Ave and Galer St to Queen Anne Ave N and Blaine St.
Sincerely,
Joseph Heck

(Photo: the posted route 4 service-change notices.)

AI: Law and Policy

The White House announced a series of workshops on AI and its impact on law and policy near the beginning of this month, and the UW School of Law hosted one of those workshops. It was open to the public with registration – so I went. I was looking forward to hearing some voices and opinions that I otherwise wouldn’t.

Ed Felten kicked it off, but I think the highlights from Oren Etzioni were more targeted and insightful. The media hype/mob-think has this kind of conversation turned on its head: real issues and concerns need to be discussed, but a lot of the discourse in this space is misdirected and distracts from the nearer-term core issues. Through his presentation, Oren reiterated that while there are significant things to talk about in this space, the threat of an immediate rise of general artificial intelligence (machines that think like a person) is still a considerable way off, and the issues at hand that need to be discussed are specifically related to how humans are using existing “narrow” AI to solve problems, capture information, and train systems.

My own personal highlight from this workshop was the second session: a panel which included Kate Crawford and Jack Balkin, folks I didn’t know much about.

The framing and examples that Jack cited during the conversation were new (to me, at least) and fascinating ideas about how to approach the problems and issues associated with the use of this technology. The conversation focused on “the people and companies using AI systems”, and in particular the values and obligations that we should consider when using and/or providing an AI service.

One of his core points was that this model already exists within US law: it’s the mechanism for professions and the implied fiduciary duties that exist there – duties of good faith, trustworthiness, and appropriate behavior. He put it succinctly, in a beautiful way: “These guys shouldn’t screw you over.”

I’m sure I won’t do any summary here justice, but fortunately you can make your own opinions. The workshop/presentations were recorded and are available on YouTube: https://www.youtube.com/watch?v=A-99kMuWlXk. The panel with Jack and Kate starts at 1:15 into that clip and runs about an hour.

If you’re interested in more traditional coverage of this whole setup, GeekWire has some news coverage of the event as well.


Shared (maybe sharded) Perspective

I started writing this on the flight back from OSCON 2016 in Austin, Texas.

The highlight of the conference was a lovely, rambling evening catching up with Brian Dorsey. During one part of that ramble, we talked about our past jobs and the challenges they brought us, and shared some anecdotes about a few particularly harrowing and/or challenging situations. After reminiscing through a few of those war stories, I mentioned that Karen and I had made a pact early in our relationship: either of us could tell the other to quit their job, and we would – no arguments. Questions maybe, but it would be done, and done immediately. I have shared this story with folks periodically, being grateful that we had the insight to set this up, but not making much of it beyond that. Brian insisted that it was sufficiently insightful that I should share it much more broadly – hence this post.

On the surface it may not sound like much, but under it is a deep trust and partnership agreement. There are times when we all lose some perspective, sometimes without even realizing that we lost it or that we are heading down some strange road. As Brian said, “sometimes we’re the frogs, just not clear that the water is continuing to heat up.”

Karen has invoked it on me once (although I think she came close a couple of other times), and I’ve invoked it once on her. In fact, I think I called it first, when we were just starting out in Columbia, MO. When I didn’t see the water steaming around me, she often did – and vice versa. That’s a precious thing in a partnership – a bit of redundancy in perspective, but most importantly a commitment that our partnership was far, far more important than any given job.

At a quick glance, it could seem like a very sabotaging thing – one partner might make it impossible to make a run at a career – but in practice it is just the opposite. I know I’ve got someone backing me up, so I can make a truly headlong (sometimes crazed) run at new ideas, jobs, efforts, etc. It is a sort of freedom: knowing she always has my back, including telling me when I screw up. The result is that our careers, hers and mine, have both been joint efforts, and I think all the greater for it on both sides.

Learn Fearlessly

I like to teach. I have for years, and I use that skill in nearly every position I’ve had for the past decade. Most of those positions have been in software development, often managing teams or organizations.

A weakness I consistently see in engineering organizations is around communication: sharing knowledge. I offset that by communicating (and teaching) a lot. Teaching in this context isn’t just “learn how to program…” kinds of things; it’s learning in general: how an existing system or tool works, what the value proposition is, what customers are looking for, even a simple retrospective – what went well, what didn’t, what do we want to change.

In teaching and learning, there are two sides: proactively sharing information, and taking information in and understanding it. The folks who have done the absolute best at taking in information and understanding it have a common trait:

They learn fearlessly.

It’s so easy to fall into the mindset of “I don’t want to look stupid, so I won’t ask.” That’s the fear I’m speaking about. It’s something I had to struggle with for quite a while, so I know it’s not easy. Many people have an innate desire to belong and to be respected. It’s hard to put yourself out there, asking what others may perceive as “a stupid question”. And for the requisite Herbert quote:

Fear is the mind-killer.

That fear isn’t always misplaced. I have seen (and been the recipient of) mockery for asking a question. DO NOT LET THAT STOP YOU. Here’s the secret: the person doing the mocking is just betraying their own insecurities.

Yeah, it’s going to sting when this happens. Maybe more than sting. If you’re looking for a piece of advice (and this whole blog post sort of is one), then ignore it. Don’t react to it, don’t acknowledge it – treat it as if that person didn’t say anything, or doesn’t even exist. The number one method for dealing with this kind of person is to not give them any attention. In short: don’t feed the energy monster.

When I’m on the teaching side of this experience, I redirect those people quickly – stamping out that behavior immediately. If they report up to me, you can bet there’s a long, private conversation about to happen after that occurs.

When I’m on the learning side, I view it as a responsibility to ask questions. There is an aphorism:

If you hear one person ask a question, probably more are thinking it, but aren’t stepping up to ask.

When you ask about things that aren’t clear to you, you’re not just helping yourself – you’re helping others. You’re also helping the person teaching. When I’m teaching, I appreciate questions because they give me a chance to try another path to convey the concepts I want to get across. If you don’t ask, I won’t know to try.

Two questions I use most commonly when I’m learning anything are “Where can I learn more about this?” and “What would you recommend as good resources to dig deeper?”. I also find it useful to walk up to a relevant person and start out with “Could you help me understand…”.

Pick your tactics, and keep at it.

Just don’t stop learning.