
Ensuring the Accuracy of the TestCloud Application Tests

By Spirent on September 19, 2011
Networks
apps, test content, TestCloud

In a previous blog I discussed how we had started to build out the test content for different kinds of applications across categories like P2P, video, chat and social media in our Spirent (formerly Mu) TestCloud store. Fast-forward to today, and we’ve now got well over 2,000 tests, with coverage for hundreds of different apps. We’ve also got lots of customers who are actively using these ready-to-run tests for a wide range of use cases – everything from verifying application detection signatures to validating application policies, billing and charging.

But regardless of their domain, there are two questions that customers commonly ask:

  1. How do we select the applications in the first place?
  2. How do we ensure the accuracy of the tests?

So for this blog I’m going to give you a behind-the-scenes look at our test content creation process.

Decisions, decisions … which apps to focus on?

The Apple App Store now has over 425,000 apps, the Android Market has over 200,000 apps, and Facebook has over 600,000. So between these three leading stores, that’s over 1 million apps. Of course, by the time you read this, the numbers above are probably way out of date! How do you go about prioritizing which apps to build tests for?

We focus on 3 key sources of input:

  1. Customer requests
    Many of our networking vendor customers are building products slated for release in the near future. They have lists of applications from their Marketing teams that they need to detect. They share the list with us, we build and post the tests, and they start testing. There’s nothing like building stuff that customers want, and doing it really, really quickly.
  2. New versions of popular apps
    We all know which apps are consistently the most popular – Skype, Netflix, BitTorrent, etc. In fact, Skype releases a new version on average every 6 weeks on one platform or another. So we keep track of these apps, and whenever a new version is posted, we add it to the list and get going.
  3. Monitoring sites listing the hot new apps
    There are many sites out there that claim to know the top apps of the last minute/hour/day/week/month. There are also plenty of “Top 50 Apps of all time” and “Top 50 Apps of the year” lists of must-have apps. We try to monitor these too, to make sure we keep our pipeline strong.

But how do we make sure the tests are accurate?

Since we’re in the business of building hundreds of application tests each month, it became clear to us early on that we needed rigor and discipline – a scalable process that gives us a high degree of confidence in the quality of the resulting test content.

In fact, it’s very much like the approach to ensuring network security – there’s no single step that in and of itself is the most important. It’s the multiple layers and best practices that collectively provide a solid defense.

The following summarizes the key steps we take for each and every test we build:

  • Environment setup – We have iPhones, iPads, Android devices, PCs and Macs in the lab. We then install and configure the specific client (and sometimes servers too).
  • Capture operation – Next we activate the capture mechanism (producing either pcaps or HAR files) and perform the specific user operations – this could be typing an IM, downloading a file, starting a movie stream, making some moves in a game, and so on.
  • Content review – Next we review the captured assets, such as a pcap, and clean out any extraneous ‘noise’ like unrelated background traffic (the first sketch after this list illustrates this kind of clean-up).
  • Test transformation – At this point we import the asset into Spirent (formerly Mu) Studio which auto-transforms it into a MuSL scenario.
  • Bare wire test – Using Spirent (formerly Mu) Studio, we run the test once as-is on bare wires to verify the integrity of the test.
  • Network test – Now we run the test through a real networking device (such as a UTM or application firewall) with application intelligence to see if it detects the application.
  • Concurrency test – Next we run the test a third time, but now with high levels of concurrency (tens of thousands of connections) to verify system performance.
  • PCAP comparison – As a final check on fidelity, we compare the original pcap with a pcap of the replayed traffic (the second sketch after this list shows one coarse way to do this).
  • Documentation – We then document and store all the assets, reports and pertinent data in our GitHub instance, and capture the client, version, platform and operations as part of the test metadata (the last sketch below shows a sample record).
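
To make the content review step concrete, here’s a minimal sketch of the kind of pcap clean-up involved, written with Scapy. The device IP, port set and file names are hypothetical, and in practice our lab uses its own tooling rather than this exact script – it’s only meant to illustrate the idea of keeping just the flows that belong to the app under test.

```python
# Illustrative only: trim a raw capture down to the app's own flows before it
# is imported into Studio. DEVICE_IP, APP_PORTS and the file names are
# hypothetical placeholders, not values from our actual lab setup.
from scapy.all import rdpcap, wrpcap, IP, TCP, UDP

DEVICE_IP = "10.0.0.42"      # test handset running the app being captured
APP_PORTS = {443, 5228}      # ports observed for this app session

def is_app_traffic(pkt):
    """Keep only packets to/from the test device on the app's ports."""
    if IP not in pkt:
        return False
    if DEVICE_IP not in (pkt[IP].src, pkt[IP].dst):
        return False
    l4 = pkt.getlayer(TCP) or pkt.getlayer(UDP)
    if l4 is None:
        return False
    return l4.sport in APP_PORTS or l4.dport in APP_PORTS

packets = rdpcap("skype_login_raw.pcap")
cleaned = [p for p in packets if is_app_traffic(p)]
wrpcap("skype_login_clean.pcap", cleaned)
print(f"kept {len(cleaned)} of {len(packets)} packets")
```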
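
The final PCAP comparison step boils down to checking that the replayed traffic carries the same application payloads as the original capture. Here’s one coarse way to sanity-check that – again an illustrative sketch with hypothetical file names, not our production comparison tooling:

```python
# Illustrative only: compare application payloads between the original and the
# replayed capture. File names are hypothetical placeholders.
from scapy.all import rdpcap, Raw

def payload_sizes(path):
    """Return the ordered list of application-payload sizes in a capture."""
    return [len(p[Raw].load) for p in rdpcap(path) if Raw in p]

orig = payload_sizes("skype_login_clean.pcap")
replay = payload_sizes("skype_login_replayed.pcap")

matched = sum(1 for a, b in zip(orig, replay) if a == b)
print(f"payload-bearing packets: {len(orig)} original, {len(replay)} replayed")
print(f"{matched} payloads match in order and size; "
      f"total bytes: {sum(orig)} vs {sum(replay)}")
```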
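
And to show what the documentation step produces, here’s a hypothetical example of the kind of per-test metadata record we keep – the field names and values below are illustrative only, not our actual schema:

```python
# Hypothetical per-test metadata record; field names and values are
# illustrative, not our actual schema.
import json

test_record = {
    "application": "Skype",
    "version": "5.3",
    "platform": "iPhone (iOS 4.3)",
    "operations": ["sign in", "send IM", "voice call"],
    "capture_format": "pcap",
    "assets": ["skype_login_clean.pcap", "skype_login_replayed.pcap"],
}

with open("skype_5.3_iphone_metadata.json", "w") as f:
    json.dump(test_record, f, indent=2)
```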

Can I say that we have a truly bullet-proof system, with 100% accuracy across every test both now and in the future? Absolutely not, and for the same reason that no organization’s network can ever be 100% secure. But I can say that we continually refine and evolve our processes to eliminate more and more of the factors that undermine accuracy, to increase the level of automation, and ultimately to provide the highest possible level of confidence. And given where we are today, I’d say we’re doing pretty well.

 