
Testing for Economic Impact: Using Quality of Experience


When we test a device-under-test candidate for provisioning scale, what is the meaningful metric to measure? Traditional metrics such as bandwidth, connections per second, or open connections are narrowly focused metrics that measure specific engineering attributes of the device. For example, open connections and connections per second tell you about table scaling, and bandwidth tells you about forwarding efficiency, but neither metric puts it all together and directly measures how users perceive quality over time.

These metrics tend to be overly optimistic in what they imply about how one might provision the device in a production network. They carry measurement uncertainty because layers above the target metric affect performance in unpredictable ways, so the condition you actually observe may not be the one you intended to measure. The objective when picking a core metric for determining peak scale is to choose one that is not impacted by any upper layer. In addition, the metric should fold in all lower-layer metrics working as a system, so that any failure shows up in its impact. Finally, the metric should be economically meaningful.

This metric is QoE, or Quality of Experience. Consumers of the network use services such as Facebook or Twitter; the services in turn orchestrate protocols such as HTTP, SIP, and video, which drive transport-layer protocols such as TCP or UDP, which then load the network with bandwidth.

Quality of Experience then becomes the top-of-stack metric, which by definition measures the impact of all lower-layer events and interactions. Quality of Experience is also economically core to the network: if users are unhappy, they will not invest in the network. When we talk about measuring Quality of Experience, what we are really asking is, “What is my experience with the service right now, and how does my previous history with the service shape my perception?”

The modern network must deliver not only an instantaneously high Quality of Experience but also deep predictability. To put that another way, previous failures, as measured by the user, asymmetrically impact the user's perception right now. In Quality of Experience analysis, you could have 1,000 samples of high Quality of Experience and one sample of poor experience, and users may weight that single failure as heavily as all the good samples combined. In practical terms, this means that before you start testing you need to understand the traffic flowing across the network, and for each of the services you need to establish a minimum definition of acceptable quality.
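As a rough illustration of how such asymmetric weighting might be scored, the sketch below keeps a history of per-session quality samples and lets a single hard failure pull the aggregate score down far more than one more success pulls it up. This is not Spirent's model; the class name, the 0.0–1.0 score scale, and the penalty factor are assumptions made for the example.

```python
# Minimal sketch of an asymmetric QoE score. Samples are assumed to be
# normalized to 0.0 (unusable) .. 1.0 (perfect); the penalty factor is
# illustrative, not a standard value.
from dataclasses import dataclass, field

@dataclass
class QoeTracker:
    failure_penalty: float = 1000.0  # one bad sample can outweigh a thousand good ones (assumed)
    samples: list = field(default_factory=list)

    def add_sample(self, score: float, failed: bool = False) -> None:
        # Hard failures (broken page, dropped call) are recorded as zero quality.
        self.samples.append((0.0 if failed else score, failed))

    def perceived_quality(self) -> float:
        # Weighted mean where failures carry an outsized weight, modeling the
        # idea that users remember the one bad experience, not the 1,000 good ones.
        total_weight = 0.0
        weighted_sum = 0.0
        for score, failed in self.samples:
            weight = self.failure_penalty if failed else 1.0
            total_weight += weight
            weighted_sum += weight * score
        return weighted_sum / total_weight if total_weight else 1.0

tracker = QoeTracker()
for _ in range(1000):
    tracker.add_sample(0.95)          # 1,000 high-quality page loads
tracker.add_sample(0.0, failed=True)  # one hard failure
print(f"perceived quality: {tracker.perceived_quality():.3f}")  # ~0.475, dominated by the failure
```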

For example, a modern webpage will have around 200 URLs forming the single page. For the instantaneous quality of experience for this page to be considered high, all 200 URLs need to render in 1 to 2 seconds. Further, the variance when loading the same page over time needs to be very small. Lastly, any hard failure, such as a broken link or broken page, immediately reduces the quality of experience to unacceptable. With this definition of service Quality of Experience, we now have a yardstick to determine the maximum number of concurrent users and the user arrival rate that we can add to the device under test.
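A pass/fail gate built from that definition might look like the sketch below. It is a minimal illustration, not a Spirent tool; the function name, input format, and thresholds are assumptions chosen to match the acceptance criteria above.

```python
# Minimal sketch of a per-page QoE acceptance check: every resource renders
# within the time budget, repeated loads are consistent, and any hard failure
# is immediately unacceptable. Field names and thresholds are assumed.
from statistics import pstdev

def page_qoe_acceptable(load_samples, max_load_time=2.0, max_spread=0.2):
    """load_samples: one dict per page load, e.g.
    {"resource_times": [0.8, 1.1, ...], "hard_failures": 0}."""
    page_times = []
    for sample in load_samples:
        # Any hard failure (broken link, HTTP error) makes the page unacceptable.
        if sample["hard_failures"] > 0:
            return False
        # Every one of the ~200 resources must render within the time budget.
        page_time = max(sample["resource_times"])
        if page_time > max_load_time:
            return False
        page_times.append(page_time)
    # Repeated loads of the same page must show very little variance.
    return pstdev(page_times) <= max_spread if len(page_times) > 1 else True
```

A load test can then ramp concurrent users and user arrival rate until this check first fails; that point, rather than raw bandwidth, marks the provisioning limit of the device under test.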

Now that we have a reliable way of measuring, we can measure provisioning scale and provisioning rate accurately. This helps us understand the economics of deploying the device under test. If, because of measurement uncertainty, we try to solve the problem by throwing resources at the device under test, users will have a good quality of experience, but the cost per user will be high. With an over-provisioned device, the cost of owning it is not only high up front, because you are overpaying for performance you don't need, but also recurring, because you will typically purchase a service contract that generally represents 15 to 20% per year of the cost of the device.
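To make that recurring cost concrete, here is a small worked sketch. The device prices, amortization period, and user count are invented example values; only the 15 to 20% service-contract range comes from the text above.

```python
# Illustrative cost-per-user arithmetic; all inputs except the 15-20%
# service-contract range are assumed example values.
def annual_cost_per_user(device_price, contract_rate, users, years=3):
    """Amortize the device over `years` and add the yearly service contract."""
    yearly_cost = device_price / years + device_price * contract_rate
    return yearly_cost / users

right_sized = annual_cost_per_user(device_price=100_000, contract_rate=0.15, users=10_000)
over_sized  = annual_cost_per_user(device_price=250_000, contract_rate=0.20, users=10_000)
print(f"right-sized: ${right_sized:.2f}/user/yr, over-provisioned: ${over_sized:.2f}/user/yr")
```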

Because of this, you will have to pass that extra cost on to each user, and you risk losing users to high prices. On the other hand, if you rely on engineering metrics like bandwidth, you will most likely get an overly optimistic view of what you can provision in the network, which generally leads to unintentional under-provisioning of the device. The net effect is that user experience becomes unpredictable, and you lose customers due to the perception of poor service quality. Both reputation and profitability are therefore optimized when you measure correctly, using Quality of Experience as a yardstick, because Quality of Experience helps you navigate the tension between under- and over-provisioning in your network.

Quality of Experience (QoE) models must be designed with an eye on continuous improvement using objective feedback measures. For more information on how you can test for economic impacts using Quality of Experience, please visit: http://www.spirent.com/Solutions/Security-Applications.

 
