How corporate culture is killing private cloud

I have a hypothesis on why private cloud is such a struggle:
Corporations are culturally ingrained to reject loss of control and unpredictability in expenses, and adopting a private cloud flies right in the face of that cultural bias.
Let me explain:
A huge portion of the saleable use cases for private cloud are about dev/test, which is inherently unpredictable. The solution most enterprises use is to encyst that unpredictability down at the developer-team level. The typical corporate pattern is for the team to purchase hardware for themselves, depreciate it out over the requisite years, and repurpose the hell out of it (or slice it up into virtual environments).
When an IT organization tries to provide a cloud service, the first things they come out with are “here are the costs for hosting this” and “what cost center do you want to sign up for your account?” They’re a cost center too, so they sum up all the costs, maybe subsidize them a little, and pass them along. Even though a private cloud may be far cheaper for the overall corporation, the act of passing along that variable cost is the equivalent of tossing a hot potato into the budget cycle.
All of a sudden, the director of the development team using this service gets the hot potato of variable costs, and depending on how the IT organization is managing the funny money, it could easily be quite a bit higher than the cost of those depreciated desktops behind Larry’s desk, or the smallish investment in a VMware cluster the dev team can use from a wiring closet.
The historical baggage that IT has to provide 100% reliable services just exacerbates this problem. It means that corporate private clouds are under a tremendous amount of scrutiny for uptime, in many cases far more than any public cloud. Making anything more reliable costs more-than-incrementally more money, and that in turn gets passed forward, making the variable costs even more volatile.
Developers are (slowly) becoming culturally ingrained to deal with failures in their programming and in the distributed systems they create, but corporate managers and directors certainly are not. Organizationally there’s little to no tolerance for failure – and few corporations have sufficient scale to run anything like actual availability zones internally.
But most of all, there’s an allergic reaction to the variability of the cost, and as often as not a director locks down control of the IT resources provided, even with a private cloud. And suddenly you’re back to the delays of sending emails, asking for permission, and waiting for someone to pull the trigger to get you even a small virtual machine.
Others have done a good job explaining why a wait of even a few hours for a virtual machine is untenable, but let me give it a try as well:
Developers coming in from small companies or startups, or from their own experience developing with cloud resources, are instantly frustrated that they can’t get the resources they need, when they need them. It’s gone beyond a matter of convenience to one of efficiency.
Most enterprise applications, even simple ones, are distributed systems. They have been for ages; we just preferred to call them “client server”, “three tier”, or later “n-tier” in the 90’s and 2000’s, before we really had cloud capabilities. The truth is that as you scale those applications up, they look more and more like full-on distributed systems, and as you add capabilities integrating with other services (local corporate authentication, logging services, and application metrics collection, for example) you’re into full-bore distributed systems, perhaps even using modern SaaS for some of the components, with complex dependencies and all the fun that comes with them.
The implicit goal the developer has is to have an entire working copy of all the moving parts, and the ability to make changes to any one part of it while keeping the rest static. Being able to reset that environment – wipe it all away and build it up again – provides a tremendous amount of development efficiency, and many good developers do this as a hedge against the creep of making “just one more change” and missing the implications. The world of continuous integration and continuous deployment does exactly this. Build systems are advancing toward scripting the destruction and rebuilding of the environment, and ideally can run several of these environments side by side. You’re now at the point where you want to spin up, and destroy, these resources with every developer’s commit. Asking for even an hour’s “wait” for permission in these cases is just completely ludicrous.
The developer response to this has been to leverage the resources they always have at hand, and that are under THEIR control, to get what they need done. Vagrant and VirtualBox for virtual machines, and then on into the world of Docker with boot2docker, or vagrant-coreos and fleet, to create a world of a half-dozen to two dozen small services all interacting with each other. It’s all doable on their laptops, or a developer can cobble a pretty decent sized container cluster from the depreciated desktops behind Larry’s desk, and best of all – the developers can have their own playground, setting it up and tearing it down based on their own development needs.

Creative Challenges

Beyond what I post to Twitter and Facebook directly, I have a blog because I want a place to host my longer-form writing. I want a creative outlet for my sometimes perverse need to document the things around me, to explain how to do something, or to just rant about some random topic. Most of the content on this blog is very technical in nature. Someone who isn’t into technology and programming would probably be woefully disappointed with the standard fare here. This post – well – it’ll be a little different.

I’ve been thinking a lot about trying to pick up some new creative challenges.

Sketching and drawing is one, with the main idea being to get back into the habit of sketching things around me. My mother does the most amazing pen and ink and silverpoint sketches. Faces, life drawing, and sometimes landscapes or other fixed scenes. I struggle with proportion in my life drawing, and practice is about the only way it’ll get better. Fortunately, that’s easy to swing – a pen and paper are easy to come by in a day-to-day office environment.

Writing, which I’m doing some of here, is another creative outlet that I’ve enjoyed for a long time. I tend to write more “how to” and technical documentation these days. I enjoy reading a lot of fiction and have thought a lot about writing some of my own fiction. That usually stumbles down around the ankles of “what the hell do I want to write about”. Regardless of those stubbed toes, I’m thinking that I might gear myself up and jump into the deep end and even compete with my sweetie during NaNoWriMo this year. I’ve no idea what the story might be, but knuckling down and doing it seems like a really good creative challenge.

The most out-there thing that I’m thinking of taking a swing at is machinima: making a video/film production using video game technology. I’m not aiming to go all special effects and such, just exploring the process of telling a story in a video format, and learning the basic tools and techniques to make that happen. I’m thinking of it as somewhere on the spectrum from cartooning to animation to film-making – just using tools that I can leverage on a laptop today, as opposed to actual film. I was sort of inspired by Unity 5 and Unreal Engine 4 coming out basically for free this year – a little kick that the tooling is getting even easier to come by and use. Mix that in with MakeHuman, Blender, and iMovie, and quite a bit of the tooling – at least enough to learn the process and try it all out – is totally available at no real cost. I’ve been spending an hour here or there watching videos and learning the basics of 3D modeling, but without a particular end in mind. Making a story through machinima seems like a neat way to use it.

The tough part isn’t the tools, but the story: it’s the same problem I stumble on while writing – what story to tell? Again, no idea – but it sounds like a terrifically good creative challenge to work on, so I’m hereby putting that on my list too.

EDIT: Finally found a pic of one of mom’s silverpoints:

https://www.facebook.com/photo.php?fbid=1134753062554&set=a.1134750822498.20349.1639145132&type=1

Ryan, Silverpoint by Caroline G. Heck

Code reviews – how

I posted previously about why to do code reviews, and this post is a follow up to walk through some mechanics for doing them.

When to do a code review

A huge number of developers are using git, and a similarly huge number are familiar with Github. So with that as a basis, my opinion is the best place to do a code review is at the point of sending (or receiving) a pull request.

If you’re not familiar with the term “pull request”, it’s the mechanism Github has enabled in their website for someone to request that specific changes be merged into a branch of code. Bitbucket, Atlassian’s Stash, and other tools have similar concepts. In OpenStack we used Gerrit for this purpose (in fact, all their reviews are completely public and viewable at review.openstack.org), and the earliest tooling I remember hearing about and setting up was Reviewboard (I even contributed a few pieces to it back in the day). Suffice it to say, there are lots of options out there to help set up and put some structure around the process of doing a code review.

How to set up your code to be reviewed

Tooling alone will not make a good code review; it just enables it. What you are reviewing, or what you share to review, can have a huge impact on how a code review works.

Some of the rules of thumb I follow for making code reviews easier are:

  • Keep commits small and focused

Each commit should have a message that describes the change, and it should be relatively orthogonal to other changes. It’s totally acceptable to have a pull request with just a single, small commit. Honestly, those are the easiest to review.

  • Don’t make a lot of different kinds of changes in the same pull request

If you’re doing a sweeping refactor that changes code throughout a lot of places, then consider putting the follow-on changes that add functionality into a separate pull request. If you’re slightly refactoring and then adding some functionality, that can work; just be aware that the more you add, the more you can confuse the intent for a reader. Basically, try to keep to a “single topic” within all the changes in a single pull request.

  • Keep the overall code for review well bounded

Sometimes there is no way to avoid massive changes to prepare for, or implement, major architectural shifts. Nonetheless, try to keep the changes as simple as possible and easily understandable. You want to avoid changes so large that it takes more than 30 minutes to read and understand the review; I aim for most reviews to be readable and understandable within 5 to 15 minutes.

  • If there are other changes that are required, explain what those are and why

In lots of larger projects, you may need to make changes in multiple different repositories at once, or have multiple pull requests that depend on each other. Working on changes in OpenStack was exactly like that. In those cases, make sure you do something to “link” together the set of pull requests so that the reader/reviewer can reference what you’re expecting. A reference to the other relevant pull request(s) is often all that’s needed.

How to review someone else’s code

First and foremost, remember that a code review is about people communicating. Your culture WILL show through. In my teams, I lead and encourage folks to communicate objectively and positively. How you phrase and communicate will be as important as what you say.

First, read through the code (and changes) that are being suggested. Make sure you understand the intent of the change, and how it was solved/implemented. If there’s some arcane bit of perl-ish anonymous function code that you don’t understand, ask about it. Most of all, understand what was intended with the changes, and how it was done.

Second, look for architectural patterns:

  • Do the changes match up with how you’ve generally been structuring similar changes?
  • Does something fly in the face of the architecture that you’ve set for the surrounding code?
  • Does the implementation break a hole in an otherwise clean interface and make an API into a leaky abstraction?

Third, look for the stylistic details:

  • Do the variable names make sense?
  • Does it follow the coding conventions you’re using?
  • Are there any obvious misspellings?
  • Do the comments and internal code documentation annotations match the functions?

Last, review the continuous integration results:

  • Is it passing the tests? For that matter, does the new code HAVE tests (assuming that may be important to you)?
  • Did the overall code coverage of the project go down?

Very often, these pieces are answered by automated continuous integration tooling. If you don’t have continuous integration, look into it: maybe TravisCI and Coveralls, or Jenkins with the Violations plugin combined with the Github Pull Request Builder.

Communications Style

Most of the horror stories about bad code reviews that I’ve heard (or been subjected to) have been about bad communications, not bad code. Objective and honest is excellent; questioning and discussing is good; personal attacks, verboten. I’ve searched for the best way to describe what I mean here, but I think Ed Catmull did the best job in Creativity, Inc.: be candid.

While I’m on the subject, one of the best things I’ve found for writing in reviews was taking a leaf from one of Pixar’s books. Ask yourself, “How do I plus-one this?” Even if the code looks fantastic to you, it’s worth taking the time to read it through thinking “Yeah, it works, but how could we make it better?” You don’t need to go overboard here – in many code reviews, I honestly just say “Yep, looks good to me” – but it’s still worth the thought.

Code reviews – why

Code reviews are something I’ve been doing professionally for ages. It wasn’t until I was actively building development teams from the ground up that I took the time to really nail down the details of what I was trying to do and why. Nailed it down enough to explain it to someone else, anyway.

In this post, I’ll go over my perspective on the “why” of doing code reviews, and I’ll follow with another post on the “how” of doing code reviews as well.

I’m writing this with the idea that you’re not a solo developer, and that you’re probably working with a team of 2 to 8 people (or even more) to collaboratively develop something. Maybe you’re building a new dev team, or maybe you’re just looking for some details on how to get better at your craft.

What is a code review?

I hope it’s not really the case any longer, but at one point when I heard code review, it gave me the impression of a bunch of grognards with neckbeards sitting around a table, ripping apart your code, and mocking you personally for a weak attempt at development. Or maybe that just shows my earlier insecurities, I’m not sure. That’s not what a code review is or should be, though.

A code review is having one or more people look over a bit of code – proposed changes, a project, or even a library you’re considering using – with the goal that at the end of the development process, your product will be better for it.

Why even do a code review?

The highest level goal of a code review is to make your product better. There are a lot of ways that can happen, and I’ll detail some of them below. Knowing why you’re doing code reviews, and talking with your peers or team about why you would want to do them, is a good idea. It’s not just some random process to make things better; it has a goal, and the more everyone understands that goal, the better you and your team will be at achieving it.

The most obvious benefit is verifying correctness: having another person double-check what’s been created to see if it does what you say it does. When I’m writing code, I get these notions about what the code is doing and what I’m intending it to do, and what I write usually matches that – but it’s easy to miss little things in the process. If you’re working with a stricter language, maybe you’ve got a compiler to help keep you on the straight and narrow. Hopefully you’re using some continuous integration and you’ve enabled some tooling to help with static analysis and style checking. But when it comes down to the nitty-gritty of the code, tooling is only so smart – and like autocorrect on smartphones, it’s pretty blind to context and will miss quite a bit.

If you’re code reviewing something from outside your team, maybe a library you’re about to include that you found on NPM, Github, or that your buddy told you about over a beer, then the review is more about doing what I call code archaeology. Figuring out what it’s meant to do (which hopefully you know), and how it’s really doing it. I read code all the time; even code that I don’t use. There’s always lessons to be learned there. But I digress, this is mostly about why you’re doing code reviews within your team.

The second benefit is functional knowledge sharing, or what I call “improving the bus factor”. On any team it is natural to have folks that get good at specific topic areas – a functional component, a design, or something else. But you don’t want to be stuck if that person leaves, gets sick, or is just unavailable. Then you’re in a jam, and it’s not easy to work through what’s happening there. Yeah, you can always task someone to go read through it after the fact, but then you’re already down a notch in that you can’t ask “why?” or “what did you intend for this to be doing?”

You can in a code review, and if you don’t understand what you’re reading, you should. The huge side benefit of this knowledge sharing is cross training and learning from each other. “Hey, that’s a neat way to do that…” is something I love to hear (or read) when one developer is reviewing another’s code. Likewise, if you read through something that you think is a quick hack and would be better done another way, you can talk it through. Maybe you need the quick fix to meet a deadline, but at least it’s a known issue, not a hidden, ticking time bomb that would otherwise be missed.

On a more positive side, developers reviewing each other’s code is somewhat like jamming together musically. You get the idea of each other’s styles and riffs, and the more you do it, the easier it is to anticipate how to work together and seamlessly spin out some interesting melody.

Along with actively learning from each other, it’s a means to grow into how to work with one another, and to a large extent that’s the key benefit of a code review – it’s a point and means of actively spreading your development culture. Patterns of development, evolving architecture, coding conventions, and even social conventions are easily shared during a code review.

To summarize, the reasons you’re doing code reviews:

  • Verifying correctness
  • Functional knowledge sharing
  • Actively spreading your development culture

In my team’s post-mortems and reviews, we often talk about what we’re looking for in code reviews, and how we might want to change this or that aspect of it. Maybe it’s keeping an eye to encourage better unit testing, or making sure we keep the interfaces between two components strict so we can refactor one component or the other more easily.

Wading into the deep end with NodeJS

A year ago, I was thinking I wanted to know more about NodeJS, and now we are actively developing a product based on NodeJS. I got a lot of great help and insight from the Seattle NodeJS meetup over the last year, so back in December I offered to pay back a bit and do a talk on our “lessons learned”.

The end result is a relatively short presentation on lessons learned diving into NodeJS. Much to the consternation of one of my friends, I’ve really enjoyed diving into the depths of NodeJS and learning how to really get things done with it.

There are frustrations with it, to be sure – honestly, most of that is it simply being a young ecosystem – but I’ve found it to be enormously effective too. I’ll have more posts up about NodeJS and things I’ve learned while using it later…

Reactivating

This blog has been quiet – well, dead frankly – for nearly two years. Not all of that is attributable to technical difficulties, but I was self-hosting WordPress for ages, and the last hosting service I used had frequent issues with the DB backend, which I finally resolved this past weekend – by moving to hosting on wordpress.com.

I’ve been spending most of this last year developing in NodeJS, and I’m starting to think it’s time to also kick in some self-learning with Swift as well. For most of the little one-liners and such, I’ve been doing what the rest of the world has been doing – posting them to Twitter or Facebook. But while they’re nice and “social”, those platforms are terrible for writing longer-form pieces. And that’s what I’m hoping and aiming to do over the coming months – start posting a bit more “how-to”, like I used to (and which are in the archives).

I’ve got to say, I’ve actually been a bit inspired to do this by Khan Academy – curious for ages, I’ve been taking some of their advanced math courses to refresh myself on topics that I thought would be fun to learn on the side. I’m in the process now of trucking through linear algebra, and that got me all excited about not only learning, but maybe getting a bit back into teaching.

We’ll see how it bears out in reality, but I’ve already got some topics in mind that have come up as I’ve been jumping back and forth between language ecosystems (Python, NodeJS, Objective-C, and now Swift).

Thankfuls…

Yeah, ages since I posted here. Not sure anyone is even reading here anymore, but if you are, well – you’ll be surprised to see a new entry in the RSS feed or however you’ve kept track of this otherwise dormant feed.

It’s the day after Thanksgiving, and Karen and I were talking about all the various decisions we’ve made leading us to today. Living in Seattle, a great little house, doing things we love. Karen described it, stretching back two decades, as generally “erring on the side of adventure”. Moving to Seattle – now 13 years ago – leaving Singingfish/Thompson/AOL, joining Docusign, leaving Docusign, joining up and working for Disney, in turn leaving Disney, etc. Nothing we’ve done has been a sure bet. Lots of them were “pretty out there” in terms of “can it, or will it, even work out”.

Probably the strangest thing to me is that I tend to think of myself as being risk averse. I’m sure there’s plenty of my family that would smack me upside the head for that. We’ve taken quite a number of flyers, and the sum total of the game has been pretty darned good. We definitely have a lot to be thankful for.

OpenStack docs and tooling in 20 minutes

I’ve gone through the routine several times now, so I decided to make it easy to replicate, to help some friends get started with all the tooling and setup needed to build, review, and contribute to OpenStack Documentation.

I’m a huge fan of CloudEnvy, so I’ve created a public github repository with the envy configuration and setup scripts to be able to set up a VM and completely build out all the existing documentation in roughly 20-25 minutes.

First, we install cloudenvy. It’s a python module, so it’s really easy to install with pip. My recommended installation process:

pip install -U cloudenvy

If you’re working on a mac laptop (like I do), you may need to use

sudo pip install -U cloudenvy

Once cloudenvy is installed, you need to set up the credentials to your handy-dandy local OpenStack cloud (y’all have one of those, don’t you?). For cloudenvy, you create a file in your home directory named .cloudenvy akin to this:

cloudenvy:
  clouds:
    cloud01:
      os_username: username
      os_password: password
      os_tenant_name: tenant_name
      os_auth_url: http://keystone.example.com:5000/v2.0/

Obviously, put in the proper values for your cloud.

Now you just need to clone the doctools envyfile setup, switch to that directory, and kick off Envy!

git clone https://github.com/heckj/envyfile-openstack-docs.git
cd envyfile-openstack-docs
envy up

20-25 minutes later, you’ll have a virtual machine running with all the tooling installed and run through, and the output generated for all the documentation in the openstack-manuals repository. The envyfile puts all this into your virtual machine at

~/src/openstack-manuals

To get there, you can use the command envy ssh to connect to the machine and do what you need.

For more on the how-to with contributing to OpenStack documentation, check out the wiki page https://wiki.openstack.org/wiki/Documentation/HowTo.

Do photons have mass?

My grandmother in Burlington, IA had this massive house overlooking the Mississippi there. In the windows, she had these ornaments – little glass bulbs with pinwheel-looking things in them that spun and spun and spun in the sunlight streaming through the huge windows overlooking the river.

Years later in college, I learned that the window trinket was a classic science experiment regarding photons having mass. I saw one on ThinkGeek some time ago, and got one for my house.

Making keystoneclient python library a little easier to work with

A few weeks prior to the Grizzly OpenStack Design Summit, I was digging around in the various python-*client libraries for OpenStack. Glanceclient had just started to use python-keystoneclient to take care of its auth needs, but everyone else was doing it themselves – inertia from having it in the base project from the early days and never refactoring things as the clients replicated and split in the Essex release.

Looking at what glanceclient did, and had to do, I got really annoyed and wanted the client to have a much easier to use interface. At the same time, I was also digging around trying to allow the keystoneclient CLI to accept and use an override for the endpoint from the command line. It turns out the various machinations to make the original client setup work with a system with two distinct URL endpoints were quite a mess under the covers, and that mess just propagated through to anyone trying to use the library.

We just landed some new code updates in keystoneclient to make it much easier to use. So this little article is intended to be a quick guide to using the python-keystoneclient library and some of its new features. While we’re getting v3 API support in place, we’re still very actively using the v2 APIs, so we’ll use v2 API examples throughout.

The first step is just getting a client object established.


>>> from keystoneclient.v2_0 import client
>>> help(client)

We’ve expanded the documentation extensively to make it easier to use the library. The base client is still working from httplib2 – I didn’t rage-change it into the requests library (although it was damned close).

There are a couple of common things that you’ll want to do when initializing the client. The first is to authorize the client with the bootstrapping pieces so you can use it to configure keystone. In general, I’m sort of expecting this to be done mostly from the CLI, but you can also do it from python code directly. To use this setup, you’ll need to initialize the client with two pieces of data:

  • token
  • endpoint

Token is what you’ll have configured in your keystone.conf file under admin_token, and endpoint is the URL to your keystone service. If you were using devstack, it would be http://localhost:35357/v2.0

A bit of example code (making up the admin_token)

from keystoneclient.v2_0 import client

adminclient = client.Client(token='9fc31e32f61e78f114a40999fbf594c2',
                            endpoint='http://localhost:35357/v2.0')

Now at this point, you’ll have an instance of the client, and can start interacting with all the internal structures in keystone. adminclient.tenants.list() for example.

You may have spotted the authenticate() method on the client. If you’re using the token/endpoint setup, you do not want to call this method. When you’re using the admin_token setup, you don’t have a full authorization token as retrieved from keystone; you’re short-cutting the system. This mode is really only intended to be used to bootstrap projects, users, etc. into keystone. Once you’ve done that, you’re better off using the username/password setup with the client.
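
To make that bootstrapping a bit more concrete, here’s a minimal sketch of the kind of setup calls I mean, reusing the made-up token and names from the examples in this post (check help() on the client’s managers for the exact signatures):

from keystoneclient.v2_0 import client

# Bootstrap mode: admin_token plus the admin endpoint (values are made up)
adminclient = client.Client(token='9fc31e32f61e78f114a40999fbf594c2',
                            endpoint='http://localhost:35357/v2.0')

# Create a project (a "tenant" in the v2 API), a user within it,
# and grant that user a role in the project
tenant = adminclient.tenants.create(tenant_name='heckj-project',
                                    description='example project',
                                    enabled=True)
user = adminclient.users.create(name='heckj',
                                password='e2112EFFd3ff',
                                tenant_id=tenant.id)
role = adminclient.roles.create(name='Member')
adminclient.roles.add_user_role(user, role, tenant)

With that in place, the username/password flow below has something to authenticate against.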

To do that, you minimally need to know the username, the password, and the “public” endpoint of Keystone. With the v2 API, the public and administrative endpoints are separate. With devstack, the example public API endpoint is http://localhost:5000/v2.0.

A bit of an example:

from keystoneclient.v2_0 import client
kc = client.Client(username='heckj', password='e2112EFFd3ff',
                   auth_url='http://localhost:5000/v2.0')

At this point, the client has been initialized, and by default it will immediately attempt to authenticate() to the endpoint, so it already has some authorization data. With the updated keystoneclient library, this authorization info is stashed in an attribute, “auth_ref”. You can check out the code in more detail – the class is keystoneclient.access.AccessInfo, and it represents the token we retrieved by calling authenticate() against keystone.

With only a username and password provided, the token is really only useful for about two things – getting a list of the tenants this user can authorize to, and then getting a ‘scoped’ token (where the token represents authorization to a project) and retrieving it.

>>> kc.username
'heckj'
>>> kc.auth_ref
{u'token': {u'expires': u'2012-11-12T23:28:58Z', u'id': u'97913f8839634946afab2897ac19908d'}, u'serviceCatalog': {}, u'user': {u'username': u'heckj', u'roles_links': [], u'id': u'c8d112a0932a454097dfba0f3b598bdc', u'roles': [], u'name': u'heckj'}}
>>> kc.auth_ref.scoped
False
>>> kc.tenants.list()
[]
>>> kc.authenticate(tenant_name='heckj-project')
True
>>> kc.auth_ref.scoped
True
>>> kc.auth_ref
{u'token': {u'expires': u'2012-11-12T23:37:10Z', u'id': u'6d811d7c39034813b6cab2ad083cdf3e', u'tenant': {u'id': u'7dbf826d086c4580a28cf860a6d13046', u'enabled': True, u'description': u'', u'name': u'heckj-project'}}, u'serviceCatalog': [{u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://localhost:8776/v1/7dbf826d086c4580a28cf860a6d13046', u'region': u'RegionOne', u'internalURL': u'http://localhost:8776/v1/7dbf826d086c4580a28cf860a6d13046', u'publicURL': u'http://localhost:8776/v1/7dbf826d086c4580a28cf860a6d13046'}], u'type': u'volume', u'name': u'Volume Service'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://localhost:9292/v1', u'region': u'RegionOne', u'internalURL': u'http://localhost:9292/v1', u'publicURL': u'http://localhost:9292/v1'}], u'type': u'image', u'name': u'Image Service'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://localhost:8774/v2/7dbf826d086c4580a28cf860a6d13046', u'region': u'RegionOne', u'internalURL': u'http://localhost:8774/v2/7dbf826d086c4580a28cf860a6d13046', u'publicURL': u'http://localhost:8774/v2/7dbf826d086c4580a28cf860a6d13046'}], u'type': u'compute', u'name': u'Compute Service'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://localhost:8773/services/Admin', u'region': u'RegionOne', u'internalURL': u'http://localhost:8773/services/Cloud', u'publicURL': u'http://localhost:8773/services/Cloud'}], u'type': u'ec2', u'name': u'EC2 Service'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://localhost:35357/v2.0', u'region': u'RegionOne', u'internalURL': u'http://localhost:5000/v2.0', u'publicURL': u'http://localhost:5000/v2.0'}], u'type': u'identity', u'name': u'Identity Service'}], u'user': {u'username': u'heckj', u'roles_links': [], u'id': u'c8d112a0932a454097dfba0f3b598bdc', u'roles': [{u'name': u'Member'}], u'name': u'heckj'}, u'metadata': {u'is_admin': 0, u'roles': [u'08ccc339c0074a548104b9050bdf9492']}}

You might have noticed that you can now call authenticate() on the client and just pass in values that were missing from previous authenticate() calls, or you can switch them out entirely. You can change the username, password, project, etc. – anything that you’d otherwise normally initialize the client with – to do what you need.
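
For example, here are a couple of minimal sketches of that (the alternate project, username, and password below are made up):

# Re-scope the existing credentials to a different project
kc.authenticate(tenant_name='another-project')

# Or swap the credentials out entirely and authenticate as a different user
kc.authenticate(username='alice', password='s3kr1t',
                tenant_name='alice-project')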