I was reading an interesting article arguing that storage virtualization is the next big thing, and for many it will be. I wonder, though, whether we should be looking further ahead, towards a follow-the-moon model for IT infrastructure: the concept of data center virtualization. It is an evolution of the colocation center, or the traditional in-house data center, towards an end-to-end global infrastructure platform, built around an application-centric solution that follows the sun or the moon, where the data center itself becomes a point-in-time service, virtualized if you will.
We need to think of virtualization as an evolving path, in which we continue to abstract the end user further from the infrastructure and the application, moving towards a service delivered down a wire, or online, rather than locking the user to a specific device with a client application and all the ancillary components needed to deliver that service. At the same time, from an IT cost and business empowerment angle, virtualizing the data center allows us to transform not only how we organize and support the IT services the business needs, but also how we host and power those services, towards a follow-the-moon approach.
In IT support a few years ago, colleagues and CIOs were talking about the concept of follow the sun. The idea was quite simple: unify the support. In a given enterprise, if a London server that Tokyo needed went down, regional politics would traditionally get in the way of fixing the problem. The operations team in Tokyo would have to call London and ask a London engineer to log on and see what the issue was. Say that adds a 30-minute delay; in most cases, not world-ending. But if the engineer couldn't access that system, or had to visit the site, Tokyo could be unable to access that application for the best part of the day. So we adopted, or at least looked at, follow-the-sun support models, where App Support looked after and had access to all their systems globally: if Tokyo were monitoring their batch and a London server went on its holidays, they could log on, take a look and try to fix it, and if they couldn't, they could call London IT and ask for a server guy. It required a degree of trust, of new thinking and organization; letting the Tokyo server guy look at the server himself was deemed a step too far for some.
Anyway, follow the sun brought us a more fluid support model, reducing response times and enabling overnight changes and upgrades to systems that would previously have been more challenging to action. If London had to reboot a network switch at 3am, that was fine; globally there was cover from App Support, on site, who could log on and check that the batch, the web site or the application was still working.
We introduced the world to server virtualization, where instead of having a server per application, we could buy a server with a bit more memory and a bit more storage and have it 'cut up' and shared amongst the business units. On the whole it worked very well, but IT was, and in many ways still is, a bit confused about how to charge for it and how to 'make a profit' for the cost center. You see, we can only absorb so much before someone has to pay for the underlying infrastructure: the 400TB of storage, the 32GB of RAM in each server. Compare that with the 1U special that might be good enough for a given application, and keeping the per-unit virtual machine cost competitive could be a challenge if we didn't rethink the way we billed for capacity, for delivering IT service.
At the same time we had application virtualization, which meant that instead of a server per application, we could have a shared pool of resources that I could ask my application to use; if my application only worked 9-5, I could buy 9-5 compute resources from that shared pool. However, this meant less perceived control for the application team, and again IT couldn't grasp that this concept is not about making a profit on the grid infrastructure, or merely covering your costs; it's about onboarding applications to spread the cost, spread the business benefits, and reduce your server count, which will save you money anyway.
We had the networks team talking about network virtualization, putting many LANs down one connection, which was fantastic, coupled with people asking about storage virtualization: "why do I care where my files are stored?", and they were right. But the challenges came when application teams and business sponsors didn't quite understand their actual requirements for performance and availability. "The cheapest," they might say, or "the fastest." But if no one is going through the data in the background and archiving it, asking whether we really need to keep it all online, and if the backups work but take forever to restore, then we simply keep eating more and more storage, regardless of how that storage is provided.
Moving back, then, to follow the moon: what is it that I feel we are trying to achieve?
Data center virtualization delivered through abstraction of the infrastructure and application delivery process. (in the IT world)
Having my IT services, my infrastructure and my applications operate wherever the power, the carbon footprint or the support costs are cheapest, giving the lowest operational cost at any point in time. (in the business world)
A statement that sounds more complex than it is, so let us take Martinsbank as an example: it has offices in Tokyo, New York, London and Paris. London is the hub; it connects all the regions. At the moment, if London goes down, the world ends, so it needs high availability and expensive data centers with all the bells and whistles, as this is where some of the core applications and services are hosted. Additionally, we cannot easily shut elements of London down and do 'maintenance' on the data center; it might take six months to get approval to power down the London BCP data center. Any upgrade work is therefore time-consuming, prohibitively disruptive and 'expensive to end users', so it gets put off for longer, which simply perpetuates the workload until it just has to get done, at which point end users are outraged at the disruption.
Think about this, though. If I had my data center virtualized, that is, my servers, my network and my storage virtualized (independent of how the application is set up), could I not quite easily say to Tokyo, "be London for the day"? Here are the London virtual machines, their storage and their network infrastructure configurations; be London, let me fail over London to Tokyo. Think what that could do for the data center, for IT. In this scenario, with the data center as, if you like, a set of config files, a few TB of storage and virtual machine configurations, I could pursue a number of concepts:
- Business continuity between regions – why have a London BCP site if New York can run it all, albeit a little slower, but run it nonetheless?
- Data centers hosted by power or carbon cost – why not have my London data center run in Mumbai, in China or in Iceland, wherever I can purchase power, cooling and people at the lowest operational cost? We can take this to the next level, where my London office runs overnight on servers and infrastructure in the region where the power is cheaper, but everything is supported by my staff in London, who are awake and working.
- Upgrades become possible – want to deploy 10Gb Ethernet? Fine, we can fail over the data center workloads to a range of offices or data centers while we do what we have always wanted to do, whether it's upgrading the cooling, redoing the cabling that has become a mess, or simply refreshing to the latest, most efficient server hardware, without the twelve-month projects, the debates, the costs and the disruption.
- We can move towards an exciting concept where the IT for the business is in effect a few files (granted, very large ones, but files nonetheless) which move around our own internal cloud, delivering the services we need at a price and availability we can afford, down to a granular level.
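The "data center as a set of config files" idea can be made concrete with a small sketch. This is purely illustrative: the `VMConfig`/`Site` classes, names and paths are invented for the example, and it assumes (as the article does) that storage is replicated and reachable from either site, so failing over is just handing the definitions to another region.

```python
# Hypothetical sketch: a data center reduced to a set of VM definitions,
# so "be London for the day" is just reassigning those definitions to Tokyo.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VMConfig:
    name: str
    vcpus: int
    ram_gb: int
    storage_path: str   # assumed replicated storage, reachable from any site
    vlan: str           # network configuration travels with the VM

@dataclass
class Site:
    name: str
    vms: List[VMConfig] = field(default_factory=list)

def failover(source: Site, target: Site) -> None:
    """Hand every VM definition from source to target - the target 'becomes' the source."""
    target.vms.extend(source.vms)
    source.vms = []

london = Site("London", [VMConfig("ldn-web01", 4, 16, "/san/ldn/web01.img", "vlan-100")])
tokyo = Site("Tokyo")
failover(london, tokyo)
print([vm.name for vm in tokyo.vms])  # Tokyo is now running London's machines
```

In a real deployment the definitions would be libvirt, VMware or similar VM configuration files rather than Python objects, but the shape of the operation is the same.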
A right-sized application hosting scenario: development we want wickedly fast and always on (our developers are expensive), fine, we host that in our Tier 3 data centers. Production also needs to be stable and available, and performance is important, so we put that in our Tier 1 data center. Tier 3 production is important, but if it goes down there are redundant systems or other ways around it, so can we put that in the cheaper data centers? And our UAT applications we put in a box somewhere; we only need them when we have to do application validation for a project go-live.
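That placement policy could be expressed as a simple lookup table. This is a sketch only: the environment names and the table itself are assumptions for illustration, and the tier labels follow the article's own usage rather than any industry standard.

```python
# Hypothetical right-size hosting policy: map each environment class to the
# hosting named in the text, plus whether it stays powered on.
PLACEMENT = {
    "development":      {"hosting": "Tier 3 data center", "always_on": True},   # developers are expensive
    "core-production":  {"hosting": "Tier 1 data center", "always_on": True},   # stable, available, performant
    "tier3-production": {"hosting": "cheaper data center", "always_on": True},  # redundant systems exist elsewhere
    "uat":              {"hosting": "a box somewhere", "always_on": False},     # only needed for go-live validation
}

def place(env: str) -> str:
    spec = PLACEMENT[env]
    mode = "always on" if spec["always_on"] else "powered up on demand"
    return f"{env}: {spec['hosting']}, {mode}"

for env in PLACEMENT:
    print(place(env))
```

The point of writing it down like this is that placement becomes policy data rather than a per-project negotiation.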
Taken a little further, we could extend the concept of follow the moon (my data center moves around the globe following power costs) and combine it with follow the sun, so that I have no need for on-call and can power down the sites and data centers that aren't needed. I have one data center, one application pool, which moves around the globe, rather than a data center per region running 24/7 at 99.995% availability, when I can have 8-hour slots in which the US is online, then EMEA, then the rest of the globe. I just need operations to coordinate the data center hand-over process, which might not necessarily be that difficult.

We can then evolve follow the moon, with follow the sun, to cloud: a position where instead of each region having its own set of services and applications, we have credit with regional components for each region, but one application pool providing credit for the whole organization, with one set of infrastructure – a credit cloud. Want to subscribe? Not a problem, it's pay on use. The same with email, with DNS, with accounts and HR. Oh, there will be data privacy and ownership issues to discuss and deal with, but they can be handled in the right format. With that we can change the nature of the organization from one of regions deviating from the master plan to one in which the organization adjusts itself to each region – no more of the "can I have a DL380 in Tokyo? please, we buy IBM" – resolving those regional and historical issues that always got explained away by that magical statement, "that is not how we do things". As we standardize, and customize on a per-region basis, we can further consolidate applications, reduce costs and be more dynamic in the enterprise. With this model, Tokyo needing more capacity for their credit system as the markets go wild is not an issue; we can turn on London and have them share resources. We can right-size:
- Regional requirements – how much of Tokyo's capacity is duplicated, and how much is actually used? How much could we consolidate and have London or New York use that capacity?
- Application requirements – right-size the infrastructure to the application; stop having everything on all day, every day, on the fastest systems hosted in the most expensive data centers. Could the London phonebook system not shut down when London systems are no longer in use?
- Infrastructure – examine what we have and how we can do things better; have the infrastructure aligned with the business need and able to cater to its requirements at the many operational levels that we have: UAT, development/test, production, and production that just can't go down.
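The 8-hour hand-over rota described above can be sketched in a few lines. The region names and slot boundaries here are assumptions for illustration; a real rota would presumably also track which region's data center currently holds the application pool and drive the hand-over itself.

```python
# Hypothetical follow-the-moon rota: one application pool, one active region
# at a time, handed over every eight hours on a UTC clock.
from datetime import datetime, timezone

SLOTS = [
    (0, 8, "APAC"),    # 00:00-08:00 UTC
    (8, 16, "EMEA"),   # 08:00-16:00 UTC
    (16, 24, "US"),    # 16:00-24:00 UTC
]

def active_region(now: datetime) -> str:
    """Return which region should be hosting the pool at the given moment."""
    hour = now.astimezone(timezone.utc).hour
    for start, end, region in SLOTS:
        if start <= hour < end:
            return region
    raise ValueError("hour out of range")  # unreachable: slots cover 0-24

print(active_region(datetime(2010, 6, 1, 9, 30, tzinfo=timezone.utc)))  # EMEA
```

Operations coordinating the hand-over then reduces to acting whenever `active_region` changes between two checks.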
Interestingly, I wonder how much of these possibilities will be constrained by the business, the application teams and IT. Are people ready for change, and are we ready to evolve the business to a point where it works on the basis of revenue combined with the greater good, rather than the regional cost center or center of excellence concept? Oh, New York does email/DNS/Active Directory, after all we're a US company – but why can't DNS, email and Active Directory be a shared business service that runs wherever it is cheapest and most operationally efficient? Does the infrastructure need to be in New York, or can my New York teams run the service wherever the services move around the globe, following the sun?
Whether this goes ahead is one thing, but with follow-the-moon concepts, and with the evolution of application and infrastructure virtualization, we can consolidate our cost base, transform our delivery and, interestingly, start the move towards taking the enterprise to Enterprise 2.0.