When I was the Director of Cloud and Automation Services at Disney, I toured Microsoft’s Quincy datacenter. Somewhere in the conversations around that tour, a phrase was dropped that’s stuck with me ever since: the concept of a “DataCenter API”. This was long before “software defined data center” was a concept, back when cloud services were finally getting a toe-hold on the enterprise mind. When we talked about the API, it was as if it already existed, used and beloved; I later learned that the Data Center API was just a concept, a pitch.
You know what, it was a really damned good pitch. It stuck with me, and I’m still thinking about it, looking at services in that context, 6 years later.
It is really past time for that pitch to become a reality for enterprise IT shops that act as service providers, at least if they don’t want to just move their services into the public cloud. It shouldn’t be a surprise that capabilities based on cloud hosting have outstripped enterprise IT (black suit IT) and even internal webops teams (black shirt IT).
The Data Center API that we bandied about was a set of APIs for the long-term operational needs of running services. Something that developers could know and use, even during the development of their applications. In many respects, it was a precursor to the idea of devops:
- API for alerting, notifying staff of issues – a plea for help to get a service back on track
- API for log capture, event processing, visualization, and searching
- API for metrics and setting up monitoring, aggregation of values, and trending
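To make the idea concrete, here is a toy sketch in Python of what a unified client for those three APIs might feel like to a developer. Every name here (`DataCenterAPI`, `alert`, `log`, `gauge`) is made up for illustration; no real product exposes exactly this interface, and a real implementation would call remote endpoints rather than store events in memory.

```python
class DataCenterAPI:
    """Hypothetical in-memory stand-in for the three operational APIs above."""

    def __init__(self, service: str):
        self.service = service
        self.alerts = []    # a real client would page on-call staff
        self.logs = []      # a real client would ship to log capture/search
        self.metrics = {}   # a real client would feed monitoring/trending

    def alert(self, severity: str, message: str) -> None:
        # Alerting API: notify staff of an issue.
        self.alerts.append((severity, message))

    def log(self, event: dict) -> None:
        # Log API: capture an event for processing, visualization, search.
        self.logs.append({"service": self.service, **event})

    def gauge(self, name: str, value: float) -> None:
        # Metrics API: record a value for aggregation and trending.
        self.metrics.setdefault(name, []).append(value)


# Usage: the application wires in its operational needs as it is written,
# not as an afterthought bolted on by an operations team.
dc = DataCenterAPI("billing")
dc.log({"event": "startup"})
dc.gauge("queue_depth", 42)
if dc.metrics["queue_depth"][-1] > 40:
    dc.alert("warning", "queue_depth above threshold")
```

The point of the sketch is the shape, not the code: alerting, logging, and metrics become programmable services a developer targets during development.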
Any one of those bullets has multiple SaaS providers supporting cloud services today, and those are just the basics. These days a hosted service provider might also set up Hystrix monitoring and status consoles, traceback/exception logging and tracking, service tracing and analysis, and more.
The SaaS services enabling these today all started out with humans as their primary consumer, but many are now offering APIs as well: APIs to create and set up, as well as APIs to consume or query. It is still piecemeal, but hosted services now have “The Data Center API” available to them.
So why are these capabilities not available to developers in the Enterprise? Mostly the services are unavailable due to cost or contract concerns; frequently they’re too expensive when set up by current Enterprise IT vendors. Sometimes there’s a business fear about letting even metadata about a business service go outside the firewall. Sometimes it’s just a turf battle. And enterprise agreements are usually sold such that increased usage means increased cost.
Have you seen the bill for a large data feed into Splunk recently? Splunk has amazing functionality, but if I were “buying”, it’s gotten to the point where I would stand up an ELK stack for each application instead. The incremental cost of running an instance of the ELK stack is likely cheaper than the incremental cost of adding that data to a centralized Splunk instance. I’m picking on Splunk here, but the same applies to centralized enterprise monitoring solutions, orchestration services, and so on. If there’s anything that a cost center (which is what most enterprise ‘service providers’ are) hates, it is unpredictable, variable expenses.
The pieces of a Data Center API that enterprises purchase are still mostly proprietary and “best of breed”. A high price is still seen as somewhat acceptable, and I don’t see that lasting. Hosted services are driving the commoditization: the capability is becoming assumed, and with multiple hosted providers starting to differentiate from each other, they’re providing the competition in the space that will ultimately commoditize it.
The final implication is that more capabilities are easily and regularly available for cloud hosted services than for services in an enterprise data center. It will be fascinating to watch, over the next several years, how (and if) large “enterprise service providers” deal with this.