How should we p2v? Shouldn't p2v be linked to application & enterprise optimization?

March 18, 2013

I got a text from a colleague: "I liked your article about p2v benefits, but I had a question – how would you do it, then?"

I called him up to see what he was up to (we hadn't spoken for months, even though he has since moved on to an IT architecture role up the road), and that conversation produced the post below.

What the diagram below symbolizes, in summary, is that p2v should not be a one-off project. Rather than a cost of doing business, so to speak, it should be a continual service improvement process linked to usage, costs and future projects. The smartest example of this I heard was an organization that effectively ran a combined p2v and performance/utilization team, which spent all day reviewing deployed servers and identifying candidates for virtualization. Their goal: when a system went live, capture utilization metrics over a few months; then, with business sign-off and an understanding of future requirements, virtualize the servers so the physical hardware could be redeployed for the next project.
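The candidate-identification step that team ran can be sketched as a simple filter over utilization samples. This is a minimal, hypothetical Python sketch – in practice the team would have pulled these numbers from real monitoring tooling, and the thresholds, field names and `ServerStats` type below are illustrative assumptions, not part of the original process.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical utilization snapshot for one server, sampled over a few months.
@dataclass
class ServerStats:
    name: str
    cpu_pct: list[float]   # periodic CPU utilization samples (%)
    mem_pct: list[float]   # periodic memory utilization samples (%)

def p2v_candidates(servers, cpu_threshold=30.0, mem_threshold=40.0):
    """Flag servers whose average utilization stays below the thresholds.

    The thresholds are assumptions: a consistently underutilized physical
    box is a candidate for virtualization, freeing the hardware for
    redeployment on the next project.
    """
    return [
        s.name
        for s in servers
        if mean(s.cpu_pct) < cpu_threshold and mean(s.mem_pct) < mem_threshold
    ]

fleet = [
    ServerStats("web01", cpu_pct=[12, 18, 9, 15], mem_pct=[35, 30, 28, 33]),
    ServerStats("db01",  cpu_pct=[65, 70, 80, 72], mem_pct=[85, 88, 90, 87]),
    ServerStats("app02", cpu_pct=[22, 25, 19, 28], mem_pct=[38, 36, 39, 35]),
]

print(p2v_candidates(fleet))  # prints ['web01', 'app02'] - db01 is too busy
```

With business sign-off, the flagged servers would then be scheduled for p2v, and the freed hardware returned to the pool for the next project.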

The key was that the p2v strategy had multiple objectives:

  • Increase utilization of storage, compute and network resources – for reduced cost and improved efficiency
  • Replace aged commodity systems with virtual machines where possible – ending the cycle of keeping legacy servers older than 18–24 months, which was linked to ending gold-stock and support contracts for 'commodity hardware'
  • Ensure that where business, vendor or operational requirements forced a project onto a physical server, the deployment was re-analyzed after three months (sometimes a little longer, depending on business drivers) to review workloads, metrics and the possibilities for moving to a virtual solution
  • Eventually move the company away from buying servers per project, using physical servers only to provide platforms, or for the initial go-live of projects where requirements, metrics and performance deliverables were difficult to benchmark on a virtual offering

As with everything, our infrastructure and applications should be fluid, point-in-time concepts. Today I'm running Visual Studio 6 on Windows Server 2003; tomorrow, in line with the business requirements for the application and where our development and infrastructure strategies are headed, I should be porting the application code to Visual Studio .NET on Windows Server 2012, or to the cloud.

We should be leveraging p2v, technology metrics and business requirements to continually improve our offering, avoiding the trap of standing still and reducing cost, complexity and barriers to delivery. Analyzing the infrastructure and the application throughout the lifecycle, in partnership with the end users, lets us map out application and technical roadmaps that proactively adapt IT to business and end-user needs: linking infrastructure-level consolidation (12 servers to 4) to application and business consolidation (12 business lines to 4, 37 coding languages to 4, 8 platforms to 2 (Windows/Linux), 4 types of web server to 2 (IIS/IHS)), removing cost and complexity across the lines of business, the applications and the enterprise.

Category: Data Center, News, Virtualisation
