Data Center Testing: When It Comes to a Data Center Rollout, Kicking the Tires and then Driving It Off the Lot is Not Enough

It seems like there are a dozen ways to succeed and a thousand ways to fail. And it is disconcerting how easily a project that is on track can end up in the ditch. I’ve noticed that often success comes down to discerning between the merely urgent and the truly important. And when the tyranny of the urgent overshadows more important issues, you can end up with a wrecked project.

Nowhere is this more true than in the data center. I know how it is. Feature creep and schedule delays can edge performance and scalability testing out of the schedule. But that leaves you vulnerable to SLA violations and service outages. And that is when deferring the important for the urgent can really cost you.

Estimated Outage Cost per Minute

Application                Cost per minute
Supply chain management    $11,000
E-commerce                 $10,000
Customer service           $3,700
ATM/POS/EFT                $3,500
Financial management       $1,500

Source: Alinean

What it comes down to is this: you can't wait until the system is deployed to find out how it will perform. You have to know before deployment, when fixes don't cost five figures per minute. Failing to budget for testing (typically three to five percent of the data center budget) is like building a new house with top-of-the-line plumbing but never turning on a single faucet to check for leaks until the house is finished, decorated and moved into. Something you hope your home builder didn't do.

And you shouldn't do it, either. Testing is a strategic imperative: it heads off the almost certain loss of revenue and customers when things go wrong - performance issues, SLA violations or service outages. I've seen organizations gain a competitive advantage through testing, because it enables them to optimize performance and offer a level of reliability that separates them from the crowd.

In one recent OnDemand project, a major financial trading site flipped the switch on a new data center with zero problems in the first 48 hours, a first for them. They were one of only two trading sites in the US able to keep up with the pre-April 15 trading peak, avoiding the delays and outages other sites experienced.

All because of robust performance testing throughout the project lifecycle, backed by the best-practices test methodology expertise that comes with OnDemand. It is no surprise that those who come to us for testing keep coming back. The ROI is clear.

Some organizations rely on the team that designed and implemented the system to do the testing. However, the project team, whether an in-house team or outside consultants, will not see their own blind spots and therefore won't test for them. Testing, like accounting, requires the objectivity and accuracy that come with separation of duties. And, like a good auditor, Spirent follows best practices; in fact, we develop best practices as new technologies and protocols emerge, through participation in standards bodies and by partnering with manufacturers as implementations take shape. That is a level of expertise unlikely to be duplicated in-house or with a consultant.

It is that kind of expertise that enables us to develop test methodologies that anticipate future needs for capacity, services and traffic types.

For the sake of your project, your revenue and your customers, take a break from the deceptively urgent to consider the value of the truly important for your organization and avoid ending up in a ditch.
