Tim O'Reilly posted an entry today entitled Operations: The New Secret Sauce, where he makes some interesting points. The post is really a relayed and summarized email transcript between him and Nat about how Microsoft may have this wicked mean advantage by being operationally effective.
My first reaction is not polite. Absolutely no offense intended to Debra Chrapaty, but everything I’ve ever heard about working with MS Operations – from MS staff – has been singularly less than complimentary. But Tim’s article is about how the business units of Microsoft can do some neat integration bits because they’ve got both ends of the platform – IE and the servers. Uhm, okay. Maybe. If Sharepoint is any example, I don’t have much faith – but grant them that this is a new era with Ray Ozzie at the wheel, and maybe they’ll do something.
But it’s really Nat’s quote in there that makes me wonder what the hell he was smoking when he replied. It reads “Deployment tools have never been open source’s strong point: OS has always been about the developer, rarely about the deployer. cf the hacker’s disdain for IT who get stuck with deployment and management…” He does go on to mention Nagios and Capistrano – but he completely misses the componentry that has been around for AGES in the open source world to make deployments significantly faster, easier, and more sensible. I’ll start by naming cfengine, systemimager, kickstart, and pxe. Rolling out a new box in the MS world? Until Ghost – and more recently VMware, with its tools – started coming along, it has almost always been a complete pain in the ass. Yeah, there’s sysprep – but fundamentally the rollout of systems (key to doing deployment in my mind) has always been significantly more difficult in the MS world than it ever has been in the open source world.
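For anyone who hasn’t seen that componentry up close: a PXE + kickstart rollout is mostly two small text files. Here’s a sketch that writes both out – the hostnames, URLs, and package list are invented for illustration, not from any real install server:

```shell
#!/bin/sh
# Sketch of the two text files behind an unattended PXE + kickstart rollout.
# All hostnames, paths, and packages here are hypothetical examples.
set -e
mkdir -p demo/pxelinux.cfg demo/ks

# 1. The PXE boot entry: tells a netbooting box which kernel to pull
#    and where its kickstart answer file lives.
cat > demo/pxelinux.cfg/default <<'EOF'
default linux
label linux
  kernel vmlinuz
  append initrd=initrd.img ks=http://install.example.com/ks/webserver.ks
EOF

# 2. The kickstart file: every answer the installer would normally
#    prompt a human for, pre-recorded.
cat > demo/ks/webserver.ks <<'EOF'
install
url --url http://install.example.com/centos
lang en_US.UTF-8
clearpart --all --initlabel
autopart
%packages
@base
httpd
EOF

echo "generated pxelinux.cfg/default and ks/webserver.ks"
```

Boot a bare machine off the network against files like these and it installs itself while you go get coffee. That’s the whole trick – and it’s all flat text you can version and template.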
And finally, I’d point out that the companies that are really doing it down to the hardware (Google, predominantly) are using a custom Linux kernel and have their systems down to velcro’d components to make them fast and efficient. Tell me Microsoft has even come close to this level of efficiency. A lot of people talk about 20 or 30 servers per system administrator. A few friends of mine are working where the ratio is closer to 400 to 800. And these are at a lot of your large API companies – Yahoo, Google, etc. Tell me anyone in Microsoft is that efficient. And how did it happen? Only because it was all available as open source and the operating systems were scriptable down to the bone. You can script Windows – reasonably well and in several languages, finally. But it still fundamentally sucks to install on Windows, with the registry tweaks, encapsulated data in IIS, and god forbid you start working with any Active Directory services… In the open source world, it’s typically a lot more of “it’s a file, copy it in place and get the hell out of the way”.
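That “it’s a file” line is worth spelling out, because it really is this dumb. A deliberately tiny sketch – the hostnames and paths are made up, and a real shop would drive the same idea through cfengine or rsync rather than a bare loop:

```shell
#!/bin/sh
# The whole "deployment": it's a file, copy it in place, get out of the way.
# Hostnames and paths are invented; against real machines you'd swap
# cp for scp or rsync and follow with a service reload over ssh.
set -e

deploy_conf() {
  # $1 = source file, remaining args = destinations (local or user@host:path)
  src=$1; shift
  for dest in "$@"; do
    cp "$src" "$dest"    # real fleet: scp "$src" "$dest"
  done
}

# Local stand-in for a three-machine push:
echo 'ServerName www.example.com' > httpd.conf
mkdir -p web01 web02 web03
deploy_conf httpd.conf web01/httpd.conf web02/httpd.conf web03/httpd.conf
echo "pushed httpd.conf to 3 hosts"
```

No registry, no COM registration, no AD schema – the config lands, the service reloads, done.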
That is, admittedly, oversimplifying things – but not by much. I’ve helped build installers for both MS and open source (that would be InstallShield and NSIS on Win32, and rpms and dpkgs on the OSS side). When it comes right down to it, the open source side was far, far easier.
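To make the “far easier” claim concrete: an rpm is basically one short spec file. Here’s a minimal, entirely hypothetical one (name, version, and file list invented), written out as a heredoc so the shape is visible:

```shell
#!/bin/sh
# A minimal, hypothetical RPM spec for a one-script package.
# Nothing here refers to a real project; it's just the shape of the thing.
cat > hello.spec <<'EOF'
Name:      hello-tool
Version:   1.0
Release:   1
Summary:   Example single-script package
License:   GPL
BuildArch: noarch

%description
A hypothetical one-file package: the entire "installer" is this spec.

%install
mkdir -p %{buildroot}/usr/bin
install -m 0755 hello.sh %{buildroot}/usr/bin/hello

%files
/usr/bin/hello
EOF
echo "wrote hello.spec"
```

Feed something like that (plus your source tarball, per the usual rpmbuild conventions) to rpmbuild and you’re done. Compare that with wiring up an InstallShield project and tell me which one you’d rather maintain.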
And the final point that Tim makes (buried in the comments) about Microsoft flowing their experience back into the open source pool from their need to install and deploy better/faster? Well, we’ll see. I disagree that we’ve seen anything there – in fact, I think the opposite is more the case. It was (and is) far easier to knock out Win2K images and replicate machines and services than it is to do the same with WinXP or Win2003. All the while, open source has been back and forth with a large number of strategies and attacks – feeding their successes and failures back into the pool. Again, cfengine, systemimager, kickstart, pxe, …
Sorry Tim, I think you’re full of shit on this one.