I work in an environment that has an existing CMDB. Over the past year, I’ve spent a fair number of hours of my team’s time, and at least as many of my own, thinking about what it is, what we want from it, and how so much of what’s available today just doesn’t cut it.
The thing that we’ll label as a “CMDB”, to me, isn’t one. It isn’t a configuration management database. For us, it’s an inventory of assets – digital and physical. It’s a metadata store that lets us assign ownership, and an ancillary data set that makes categorizing incidents, requests, and changes in the classic service management sense a bit easier across a very wide organization. If any small company came to me and said “yeah, gotta have a CMDB!” I’d be looking very closely at how potentially insane they were. Most small orgs and companies just don’t need one. It’s honestly only useful once you reach a certain scale.
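To make that concrete, here’s a minimal sketch of the kind of record I mean – the field names are my own invention for illustration, not any product’s schema:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """One entry in the inventory: a physical or digital asset."""
    asset_id: str       # unique identifier within the inventory
    kind: str           # e.g. "server", "vm", "application", "database"
    name: str           # human-readable name
    owner: str          # the team or person accountable for it
    location: str = ""  # rack, datacenter, or provider region, if physical

# A couple of purely illustrative entries:
inventory = [
    Asset("a-001", "server", "web01", "web-platform-team", "dc1/rack12"),
    Asset("a-002", "application", "storefront", "ecommerce-team"),
]
```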
The worst part is that the ITIL definition of what belongs in a CMDB has been effectively unachievable because of the costs associated with it. The classic ITIL world has this data repository being updated by process (typically manual) as changes are approved – it’s meant to represent the “desired state” of an operational world. Only it doesn’t. Really, it never has. And even with the highest priced tools on the market today, it never will. At best it’s a tool to audit against, where you can say “yeah, it matched or didn’t match when we ran that scan a few days ago”.
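That “audit against” mode is easy to picture: compare what the CMDB recorded against what a scan actually found. A toy sketch, where the record shapes are assumptions rather than any vendor’s format:

```python
def audit(recorded: dict, scanned: dict) -> list:
    """Compare the CMDB's recorded state against a scan of reality.

    Both arguments map an asset id to its attributes. Returns a list of
    human-readable drift findings; an empty list means the scan matched.
    """
    findings = []
    for asset_id, expected in recorded.items():
        actual = scanned.get(asset_id)
        if actual is None:
            findings.append(f"{asset_id}: in CMDB but not found by scan")
        elif actual != expected:
            findings.append(f"{asset_id}: drifted, expected {expected}, found {actual}")
    for asset_id in scanned.keys() - recorded.keys():
        findings.append(f"{asset_id}: found by scan but never recorded")
    return findings
```

And the best answer a report like that can ever give is “it matched a few days ago” – which is exactly the complaint.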
It doesn’t have to be this way though. What most of us want from a CMDB is what we already get implicitly, to some degree, from many of our monitoring solutions – a digital map of our environment. The monitoring pieces created their own version, typically configured by hand, or sometimes configured automatically with a hand to guide it (Zenoss and Hyperic both do a pretty good job at this). The monitoring systems then use that data model to know who to alert when something goes wrong, or, if they’re really good, to offer some analysis along the lines of “service X is down because component A, which it relies on, is down”.
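That kind of analysis is just a walk over a dependency graph. A minimal sketch, with a made-up topology:

```python
# Map each service to the components it depends on (hypothetical topology).
depends_on = {
    "storefront": ["app01", "db01"],
    "app01": ["db01", "memcache01"],
}

def explain_outage(service: str, down: set) -> str:
    """Attribute a service outage to a failed dependency, if one exists."""
    for component in depends_on.get(service, []):
        if component in down:
            return f"{service} is down because {component}, which it relies on, is down"
    return f"{service} is down; no failed dependency found"

print(explain_outage("storefront", down={"db01"}))
# -> storefront is down because db01, which it relies on, is down
```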
Virtualization is pushing this all right over a critical tipping point. The “old” CMDB is dead; let’s jump to the new. We need a model of our environment that maps our physical and digital assets. We need it to show us dependencies in a world of ever-increasing complexity, and we need it to help inform us – especially in a larger organization – who to contact if there’s an issue with a service. If we have to fill in all this data and information by hand, we’re lost. The rate of change is increasing, and we *want it* to increase. Look to the model of continuous deployment, the natural successor to continuous integration in the software development process. Now, in classic #devops style, let’s apply that right on through into operations and running our services.
What doesn’t exist today, in our collective musings about a DevOps toolchain, are the tools to feed the knowledge that the deployment tools already have into a digital asset model. Even the deployment tools don’t know the dependencies directly (i.e. what database is this Rails app using, which memcache server/port combo is in play, etc.) – but that information is there, just slightly under the covers.
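Slightly under the covers, though, the facts are sitting in the app’s own configuration. Here’s a sketch of pulling a Rails-style database.yml apart at deploy time – it assumes PyYAML is installed, the file layout is the standard Rails one, and the output shape is my own invention:

```python
import yaml  # PyYAML: pip install pyyaml

def database_dependency(config_path: str, environment: str = "production") -> dict:
    """Read a Rails database.yml and return the dependency it implies."""
    with open(config_path) as f:
        config = yaml.safe_load(f)
    db = config[environment]
    return {
        "depends_on": "database",
        "adapter": db.get("adapter"),
        "host": db.get("host", "localhost"),
        "database": db.get("database"),
    }

# e.g. database_dependency("config/database.yml") might return:
# {"depends_on": "database", "adapter": "mysql2",
#  "host": "db01.internal", "database": "storefront_production"}
```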
The other place where we want and need this knowledge stashed? Our monitoring systems – the continuous tests against our live services that assert they’re OK. Many open source systems include some level of a model implicitly in their configurations: Nagios, Munin, Zenoss, Hyperic, etc. I am still struggling to find monitoring systems that build dynamic configuration through an API into their core. Still – much of their configuration is exactly the sort of thing we might naturally want to store in a map of our services.
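You can see how much model is already latent there by scraping it back out. A rough sketch that pulls host definitions out of a Nagios object configuration file – just enough parsing to make the point, not a full Nagios parser:

```python
import re

def nagios_hosts(config_text: str) -> list:
    """Extract the implicit host model from Nagios 'define host' blocks."""
    hosts = []
    for block in re.findall(r"define\s+host\s*\{(.*?)\}", config_text, re.DOTALL):
        host = {}
        for line in block.strip().splitlines():
            parts = line.split(None, 1)
            if len(parts) == 2:
                key, value = parts
                host[key] = value.strip()
        hosts.append(host)
    return hosts

example = """
define host {
    host_name  web01
    alias      Storefront web node
    address    10.0.1.12
    parents    switch01
}
"""
print(nagios_hosts(example))
# -> [{'host_name': 'web01', 'alias': 'Storefront web node',
#      'address': '10.0.1.12', 'parents': 'switch01'}]
```

Note that even the `parents` directive is topology – a dependency edge, sitting in a flat text file.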
The way to get this data? Drive it from the tools that are implementing the changes. Make the map a service that can be updated and modified through simple APIs that Buildout, Func, Capistrano, Fabric, ControlTier, or whatever can access and inform. Use the manifests and details that the system configuration tiers (BCFG, cfengine, Chef, Puppet, SmartFrog) have been built with to populate this map as they deploy and invoke their services.
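A deploy hook doesn’t need to be fancy to do this. A stdlib-only sketch of what Fabric, Capistrano, or a Chef handler could call at the end of a run – the endpoint and payload are hypothetical, the shape of a service that doesn’t exist yet:

```python
import json
import urllib.request

def register_deploy(cmdb_url: str, host: str, service: str,
                    version: str, dependencies: list) -> None:
    """POST what the deploy tool already knows to the asset-map service."""
    payload = {
        "host": host,
        "service": service,
        "version": version,
        "dependencies": dependencies,  # e.g. from database_dependency() above
    }
    req = urllib.request.Request(
        cmdb_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # the sketch ignores the response body

# e.g., as the last step of a deploy:
# register_deploy("http://cmdb.internal/api/deployments", "web01",
#                 "storefront", "1.4.2", ["db01.internal:3306"])
```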
This is all a step toward moving our infrastructure, historically so very physical, into the digital world. There are tremendous efficiencies to be gained – both financially (using our physical assets more effectively, or just consuming what we need from an existing infrastructure provider) and from a service perspective (being able to reconfigure and redeploy our services to match market needs).
Much of devops is focused on deployment, because that’s where we spend most of our time today. That’s good – but we cannot forget that deployment is just one small part of the overall lifecycle of these services, from inception to retirement.
And before any of the classic CMDB folks find me and start shooting: yes, I’m very aware of the work going on at CMDBf around federating CMDBs. The idea there is good – at a 10,000 foot view, they’re heading in the right direction with their standard. But the foundation it is built on is, in my opinion, now outdated and needs to be revisited, and the implementation needs to be simplified. Look at the list of members coming together around the federated CMDB concept. Do you see anything in there that shouts “open, adaptable, flexible”? I don’t. I see the same kind of collaboration that led to the J2EE standards and the WS-* standards around “web services”. What is needed is something simpler, more open, and with publicly available implementations. I would never expect BMC, CA, HP, IBM, or Microsoft to provide that – it’s just not in their best interests when they have revenue tied to the services and software they sell in this space today.