The new data centers rock – even the old hands admit that
I have seen quite a few data centers throughout my career. There were some cool ones among them, like the debis (now T-Systems) data center in Munich housed in a WWII bunker, or the Lufthansa data center in Kelsterbach with its two shells that move against each other in case of an earthquake and the airport fire fighters just across the road. The other cool thing in Kelsterbach is a raised floor you can actually stand in. Compared to many other data centers, where the raised floor is exactly what the term suggests – a floor raised by two to three feet at most – this makes a major difference.
So you can imagine that I was quite reluctant to do another data center tour with customers. It felt a little like “if you have seen one, you have seen them all”. But I would have missed something that really felt like the next step in data center computing. I have been to the Microsoft data centers in Chicago and Dublin, and they are fascinating. Container computing sounded strange at first but actually makes a lot of sense: public cloud computing is all about standardization and scalability, and these new data centers scale almost without limits.
The experience started with the trip to the data center. In Chicago the driver passed by it several times without noticing it. It simply does not announce itself as a data center – that is part of the security concept, as a nondescript building actually improves security. Inside, there are several layers of physical security, e.g. fences and segregated areas of authorization. The computing room itself looks different from what you would expect: much like a parking garage, only instead of cars or trucks they have parked containers in there. They still have classic areas of racks in these data centers, but the mass of the computing power comes in containers. At Microsoft there are even several generations of this approach already in use; the latest is deployed in Quincy, Washington and in Dublin. You’ll get a good overview in the video.
The key component is the container itself which is explained best in this short video:
One of the common customer questions is what kind of server hardware is used in these containers. I am not spilling any names here, but let me tell you that the decisive factor is neither price nor performance: it is the ability to ship the large numbers of servers needed for the rapid buildup of cloud computing power. It is fascinating to see how they connect a container to power, water and network and hand it over to operations within 24 hours. What workload a container will run is determined remotely; you cannot tell from the outside whether a container runs Windows Azure, Office 365, Bing or Xbox Live. Again, this is also part of the security concept. This video gives you an idea of the container delivery and hookup in Chicago:
So how is data center efficiency judged nowadays? The key measurement is energy efficiency. This starts with building the data center and its systems, runs through operations, and includes the retirement of components as well as of locations as a whole. Microsoft’s flagship is the data center in Quincy, WA, with its use of hydro energy. This creates a new level of efficiency, expressed through the power usage effectiveness (PUE) value. Microsoft’s latest generation is in the range of 1.15 to 1.25, where a classic data center would be at about 2.
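To make those numbers concrete, here is a minimal sketch of how PUE is calculated – total facility power divided by the power that actually reaches the IT equipment. The kilowatt figures below are illustrative assumptions for the comparison, not measurements from any of the facilities mentioned:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# A classic data center: 1000 kW of IT load plus another 1000 kW of
# overhead (cooling, power distribution losses, lighting) -> PUE 2.0
classic = pue(total_facility_kw=2000.0, it_equipment_kw=1000.0)

# A modern containerized facility: the same 1000 kW of IT load with
# only ~200 kW of overhead -> PUE 1.2, in the 1.15-1.25 range
modern = pue(total_facility_kw=1200.0, it_equipment_kw=1000.0)

print(classic)  # 2.0
print(modern)   # 1.2
```

The closer the value gets to 1.0, the less energy is spent on anything other than computing – which is why free cooling and clean on-site power sources matter so much.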
Colt / Verne Global
Colt has stepped up to take the rapid-deployment idea and the clean-energy approach to new heights. Verne Global wanted to build a data center in Iceland for its natural energy sources of hydro and geothermal power. It was Colt who came up with a modular design that made this 100% clean energy approach real and deployed a full data center in record time. A classic bespoke data center would have taken 12–18 months to build; Colt built this one within 4 months. And we are not talking about a small computing center in a basement around the corner, but a fully fledged large data center (500 square meters).
Verne had chosen the location quite cleverly for several reasons. First of all there is the energy advantage already mentioned; secondly, the location allowed them to tap into network cables to Europe and the U.S.A. alike; and finally, since the site is a retired NATO base, physical security starts with the lay of the land.
You can find more information on the Colt approach and the Verne Global story at http://tomorrowsdatacenter.com
Google and others
I have not had the chance to visit a Google, Salesforce.com or Amazon data center yet. Amazon publishes only very little information about its data centers. The same applies, more or less, to Salesforce.com: they speak a little more about the technology but do not share a detailed view of the data centers.
Google has created a video on their data center approach. That video, though, is from 2004, so it would be unfair to claim that their power usage effectiveness value (1.25 for Google) has been surpassed by Microsoft (1.15–1.25 with Gen4). I assume that by now, in 2011, Google has improved further as well. Nevertheless, here is the Google video:
UPDATE UPDATE UPDATE
Here is some news about the Google data center in Finland: http://www.wired.com/wiredenterprise/2012/01/google-finland/
Beyond these obvious major players there are more companies driving data centers to the next level. T-Systems, as an example, runs a project with Intel in which the power and the cooling for one of their data center cells are generated entirely by a hydrogen fuel cell.
The race is on and we all benefit from it
Even though much of this data center innovation happens out of general view, we all benefit from it. These optimized data centers deliver the same computing power at much lower energy usage than distributed computing would. On top of that, many of the technological innovations created along the way will also improve smaller data centers or even single-server systems.
If you get the chance to visit any of these data centers, go for it.