the CMDB is dead, long live the CMDB

I work in an environment that has an existing CMDB. Over the past year, I’ve spent a fair number of hours from my team, and an equal number of my own, thinking about what it is, what we want from it, and how so much of what’s available today just doesn’t cut it.

The thing that we’ll label as a “CMDB”, to me, isn’t. It isn’t a configuration management database. For us, it’s an inventory of assets – digital and physical. It’s a metadata store that allows us to assign ownership, and an ancillary data set that makes categorizing incidents, requests, and changes in the classic service management sense a bit easier across a very wide organization. If any small company came to me and said “yeah, gotta have a CMDB!” I’d be looking very closely at how potentially insane they were. Most small orgs and companies just don’t need it. It’s honestly only useful once you reach a certain scale.

The worst part is that the ITIL definition of what should be in a CMDB has been effectively unachievable because of the costs associated with it. The classic ITIL world of CMDB has this data repository being updated by process (typically manual) as changes are approved – it’s meant to represent the “desired state” of an operational world. Only it doesn’t. Really, it never has. And even with the highest priced tools on the market today, it never will. At best it’s an audit-against tool that can tell you “yeah, it matched or didn’t match when we ran that scan a few days ago”.

It doesn’t have to be this way though. What most of us want from a CMDB is what we get implicitly, to some degree, from many of our monitoring solutions – a digital map of our environment. The monitoring pieces created their own version – typically configured by hand, or sometimes configured automatically with a hand to help guide (Zenoss and Hyperic both do a pretty good job at this). The monitoring systems then use that data model to know who to alert when something goes wrong, or – if they’re really good – to share some analysis along the lines of “service X is down because component A, which it relies on, is down”.
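To make that concrete: the “X is down because A is down” reasoning is really just a walk over a dependency graph. A minimal sketch in Python – the topology and service names here are entirely made up:

```python
# A made-up service topology. In a real system this is exactly the
# model the monitoring tool builds up from its configuration.
DEPENDS_ON = {
    "webapp": ["database", "memcached"],
    "database": ["san-volume"],
    "memcached": [],
    "san-volume": [],
}

def root_causes(service, down):
    """Walk the dependency graph below a failed service and return the
    deepest failed dependencies - the ones actually worth alerting on."""
    causes = set()
    for dep in DEPENDS_ON.get(service, []):
        if dep in down:
            # If the dependency has its own failed dependencies, blame
            # those; otherwise the dependency itself is the root cause.
            causes |= root_causes(dep, down) or {dep}
    return causes

down = {"webapp", "database", "san-volume"}
print(root_causes("webapp", down))  # -> {'san-volume'}
```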

Virtualization is pushing this all right over a critical tipping point. The “old” CMDB is dead; let’s jump to the new. We need a model of our environment that maps our physical and digital assets. We need it to show us dependencies in an ever-changing world, and we need it to tell us – especially in a larger organization – who to contact if there’s an issue with a service. If we have to fill out all this data and information by hand, we’re lost. The rate of change is increasing, and we *want it* to increase. Look to the model of continuous deployment, the natural successor to the software development process that is continuous integration. Now, in classic #devops style, let’s apply that right on through and into operations and running our services.

What doesn’t exist today, in our collective musings about a DevOps toolchain, are the tools to integrate the knowledge that the deployment tools have into updating a digital asset model. Even these tools don’t know the dependencies explicitly (e.g. what database this Rails app is using, which memcache server/port combo is being used) – but the information is there, just slightly under the covers.
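For example, a Rails app’s config/database.yml already names the database host the app depends on – a deploy tool could lift that into the asset model as it pushes the code. A rough sketch (the returned record format is invented for illustration; real database.yml files can embed ERB, which this ignores):

```python
import yaml  # PyYAML

def rails_db_dependency(app_root, rails_env="production"):
    """Read the database dependency a Rails app already declares in
    its config/database.yml (standard Rails convention)."""
    with open(app_root + "/config/database.yml") as f:
        config = yaml.safe_load(f)
    db = config[rails_env]
    return {
        "depends_on": "database",
        "adapter": db.get("adapter"),
        "host": db.get("host", "localhost"),
        "database": db.get("database"),
    }

# rails_db_dependency("/srv/myapp")
# -> {'depends_on': 'database', 'adapter': 'mysql', 'host': 'db01', ...}
```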

The other place where we want/need this knowledge stashed? Our monitoring systems – the continuous tests against our live services to assert they’re OK. Many open source systems (Nagios, Munin, Zenoss, Hyperic, etc.) include some level of a model implicitly in their configurations. I am still struggling to find monitoring systems that have the concept of dynamic configuration through API access built into their core. Still – much of their configuration is something that we might naturally want to store in a map of our services.
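And once the service map is the system of record, a chunk of that monitoring configuration can be generated from it rather than maintained by hand. A sketch that emits Nagios host definitions from a hypothetical list of host records (it assumes a generic-host template exists, as in Nagios’ sample configs):

```python
# Hypothetical host records pulled from the service map.
HOSTS = [
    {"name": "web01", "address": "10.0.1.10", "contact": "webops"},
    {"name": "db01", "address": "10.0.2.10", "contact": "dbas"},
]

def nagios_host(host):
    """Render one record as a standard Nagios host definition."""
    return (
        "define host {\n"
        "    use             generic-host\n"
        "    host_name       %(name)s\n"
        "    alias           %(name)s\n"
        "    address         %(address)s\n"
        "    contact_groups  %(contact)s\n"
        "}\n" % host
    )

# Write a config fragment Nagios can include at its next reload.
with open("hosts.cfg", "w") as cfg:
    cfg.writelines(nagios_host(h) for h in HOSTS)
```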

The way to get this data? Drive it from the tools that are implementing the changes. Make it a service that can be updated and modified through simple APIs that Buildout, Func, Capistrano, Fabric, ControlTier, or whatever can access and inform. Use the manifests and details that the system configuration tiers (Bcfg2, CFEngine, Chef, Puppet, SmartFrog) have been built with to populate this map as they deploy and invoke their services.
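The “inform” step doesn’t need to be anything fancy – whatever runs the deploy makes one HTTP call when it finishes. A sketch of the shape I mean; the endpoint URL and payload are hypothetical, but every value in them is something the deploy tool already knows:

```python
import json
import urllib.request

def register_deploy(service, host, version, depends_on,
                    cmdb="http://cmdb.example.com/api/services"):
    """Tell the (hypothetical) asset-map service what just got deployed
    where, and what it depends on."""
    record = {
        "service": service,
        "host": host,
        "version": version,
        "depends_on": depends_on,
    }
    req = urllib.request.Request(
        cmdb,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# e.g. called from a post-deploy hook (Capistrano, Fabric, whatever):
# register_deploy("storefront", "web01", "1.4.2",
#                 ["db01:5432", "cache01:11211"])
```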

This is all a step toward moving all of our infrastructure, historically so very physical, into the digital world. There are tremendous efficiencies to be gained – both financially (using our physical assets more effectively, or just using what you need from an existing infrastructure provider) and from a service perspective (being able to reconfigure and deploy your services to match market needs).

Much of devops is focused on deployment, because that’s where we spend most of our time today. That’s good – but we cannot forget that it is just one small part of a service’s overall lifecycle, from inception to retirement.

And before any of the classic CMDB folks find me and start shooting, yes – I’m very aware of the work that is going on at CMDBf around federating CMDBs. The idea there is good, and at a 10,000-foot view their standard is heading in the right direction. The foundation that it is built on is – in my opinion – now outdated and needs to be revisited, and the implementation needs to be simplified. Look at the list of members coming together around the federated CMDB concept. Do you see anything in there that shouts “open, adaptable, flexible”? I don’t. I see the same kind of collaboration that led to the J2EE standards and the WS-* standards around “web services”. What is needed is something simpler, more open, and with publicly available implementations. I would never expect BMC, CA, HP, IBM, or Microsoft to help provide that – it’s just not in their best interests when they have revenue tied to the services and software they provide in this space today.

Published by heckj

Developer, author, and life-long student. Writes online at https://rhonabwy.com/.

8 thoughts on “the CMDB is dead, long live the CMDB”

  1. Hi! I just wanted to drop a line and point out that Chef already does this to a large extent – we take all the data about your systems, feed it to a RESTful web service that stores and indexes it all, and then let you query that either from within Chef or in external systems. You can even store arbitrary data in there.


    1. Thanks Adam!

I knew Chef was doing some of this with the component/project Ohai, but I wasn’t aware it was exposed outside of the Chef infrastructure.
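(Digging in a bit since: the external query path Adam describes appears to be Chef’s search API – e.g. `knife search node "platform:ubuntu"` from the command line against the Chef server.)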


  2. Not sure I understand.

Classical CMDBs are used in highly controlled manufacturing environments (aeronautical engineering, medical device manufacturing, nuclear power plants, munitions factories, etc.). In fact, aeronautical engineering is where the tool set is derived from. If there is a key principle to understand here, it seems to be this: the level of control you put in place should be carefully assessed against the costs of the control. Where lives or millions of dollars are on the line, you want to maintain the highest level of control possible, and that often includes CMDBs / a “laser-accurate” model of the existing environment.

I think the problem here is that fresh technologists learn a little about CMDB and other methodologies (Six Sigma?) from a university or books and then preach what they have learned to businesses that trust them but don’t need the new standard. The result? Business trends emerge that are disconnected from real business needs. The lesson? Be very skeptical of “standard” solutions… study where they have come from and where, specifically, they have been successful before implementing them, and be sure to study and monitor them after implementation as well. The real world is often quite different from our ideas! 🙂


  3. Hi Joe,
    Your article is one that many ‘Enterprise scale’ organizations, IT managers/directors, and Software/Service providers should read and carefully digest.
    *
As an aside, I would concur that smaller organizations may well not benefit from even having a CMDB, although it’s best if they have the foresight to consider keeping track of their assets at a reasonably identifiable level.
    *
The core of your article identifies (correctly, I believe) the disjointed nature of the way we manage the physical/logical (virtual?) infrastructures we have in place, and consequently the business services and ultimate benefits we provide to our customers. Your article further goes on to suggest that the ‘physical’ world does not make best use of the digital capability we have, and I strongly agree with that. It will be a challenge for many software providers to overcome the natural (vested interest) resistance of the bigger industry players, and to provide tools which integrate ALL layers within an Enterprise Infrastructure and do so under recognised industry controls (for audit and compliance purposes). Somewhere out there, there will be a brave little company or two, who see the simplicity of your argument, who overcome that resistance, who find the resources, and who take it forward. It makes complete sense, and I’m sure it will happen soon.


A very interesting post, and not as provocative as some may think… from my experience it’s a good representation of what people are saying behind closed doors in many companies… great post!


Great article. I find myself in the sticky position of trying to come up with a CMDB solution. Currently we are using Puppet for configuration management and would like to populate the fact data into a CMDB, which we can then reference and manage via a web-based tool. Secondly, we would like to pull this data into JIRA, which we are using for incident and release management. Any advice/suggestions?


    1. Sorry for the very late response…

There’s not much in terms of good setups for what you’re after – at least that I’m aware of. My own bias would be to find or use something that provides that model for you – Chef maybe, or whatever Puppet has internal to it. If that model has an addressable component (i.e. a REST interface) for each node, then link that into JIRA to start to get some of that binding (rough sketch of that shape at the end of this reply).

      I’m guessing you’re after more history and correlation down to the code flaw/feature request level.

One project I just learned about is Vogeler (http://github.com/lusis/vogeler), which I’m just starting to look at. I might be sending you down a complete rabbit hole there, though.

What I’ve done in my own environment is wrap a REST interface around one of those crappy vendor CMDBs to at least let me link and update easily. That’s inherently linked to the incident/change/request system that vendor provides – so I’ve got a bit of a win there already, even if I’m damned grumpy about its other flaws.
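      For the Puppet side of the original question, the roll-your-own version is pretty small: shell out to facter for the node’s facts and push them at whatever REST front-end you’ve got. A rough sketch – the endpoint is hypothetical, and it assumes PyYAML is around to parse facter’s output:

      ```python
      import json
      import subprocess
      import urllib.request

      import yaml  # PyYAML

      # facter already knows everything Puppet knows about this node;
      # "facter --yaml" dumps all the facts in one go.
      facts = yaml.safe_load(subprocess.check_output(["facter", "--yaml"]))

      # Push the facts at a (hypothetical) REST front-end, keyed by fqdn -
      # the same records could then be linked to from JIRA issues.
      req = urllib.request.Request(
          "http://cmdb.example.com/api/nodes/%s" % facts["fqdn"],
          data=json.dumps(facts).encode("utf-8"),
          headers={"Content-Type": "application/json"},
          method="PUT",
      )
      urllib.request.urlopen(req)
      ```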


