
With Gaming, Every Millisecond Matters

In the very near future, gaming is set to explode in innovation and user adoption, with use cases that will redefine networking requirements and challenge the promises of 5G even as it continues to roll out. Learn the test and validation strategies for the low-latency edge that must be in place to make it all possible.

Within the next two years, gaming platform companies are poised to take gaming to new heights.

Growth trajectories suggest playing time and user bases will grow exponentially. As we discussed in our previous post, this will have a significant impact on traffic volumes and low-latency edge requirements.

In gaming, data processing workloads are typically shared between the user’s console and the cloud. Image processing rates, in particular, have a massive impact on overall user experiences.

A fundamental metric in computer graphics, frame rate measures the frequency at which consecutive images (frames) are captured or displayed. It is now common for games to stream at 60 frames per second (fps), often with alpha-channel blending and at 4K resolution. It won’t be long, however, before we see rates of 120-400 fps.
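To put those numbers in perspective, here’s a quick back-of-the-envelope sketch (in Python) of the raw, uncompressed data rate a stream would carry at these resolutions and frame rates, assuming 4 bytes per pixel for color plus an alpha channel. Real streams are heavily compressed, so treat these figures as upper bounds rather than actual network load.

```python
# Raw (uncompressed) stream bandwidth: pixels x bytes-per-pixel x fps.
# Real streams are heavily compressed; these are upper bounds only.

RESOLUTIONS = {                 # width x height in pixels
    "1080p": 1920 * 1080,
    "4K":    3840 * 2160,
    "8K":    7680 * 4320,
}
BYTES_PER_PIXEL = 4             # RGBA: color plus an alpha channel

def raw_gbps(resolution, fps):
    """Uncompressed data rate in gigabits per second."""
    return RESOLUTIONS[resolution] * BYTES_PER_PIXEL * 8 * fps / 1e9

for res in ("4K", "8K"):
    for fps in (60, 120, 400):
        print(f"{res:5s} @ {fps:3d} fps: {raw_gbps(res, fps):7.1f} Gbps raw")
```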

At the same time, frame sizes themselves are growing as images move from standard definition to HD, 4K and, with rising AR/VR adoption, eventually 8K. Moreover, 8K adaptive bitrate (ABR) streaming channels will need to be synchronized with one another, while recovery mechanisms at the transport layer are generally not yet effective at these rates.

These trends present a dual challenge to user quality of experience (QoE). Let’s dive into this coming reality and the implications for the networks being designed to support these experiences.

Low latency is critical for gaming but difficult to achieve

What happens if a frame is late or lost in transit between the console and the cloud? The gamer experiences blockiness, blurred images and discontinuous motion. (Then they promptly tell the world via social media.)

The gold standard is to achieve 20ms latency or less from the render server in the cloud to the console. That includes the render time in the cloud, transit time through the cloud, latency from network elasticity events (e.g., expanding or contracting cloud resources), as well as the natural latency in the ground.

Latency in the ground? Yes, it takes a packet about 5 microseconds to traverse 1 km of fiber in the ground. Over the roughly 5,000 km fiber route from a data center in California to a user in New York, that’s 25ms one way, and even longer for international users. The 20ms latency budget is already blown before any processing has even taken place!
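The math is simple enough to sanity-check yourself. The sketch below applies the 5-microseconds-per-km rule of thumb (light travels at roughly two-thirds of c in glass) to a few route lengths; the distances are illustrative, not measured fiber paths.

```python
# One-way propagation delay through fiber, at ~5 microseconds per km.
# Route lengths below are illustrative examples.

US_PER_KM = 5
BUDGET_MS = 20                        # end-to-end target from above

def one_way_ms(route_km):
    return route_km * US_PER_KM / 1000

for label, km in [("metro edge", 100),
                  ("regional", 1000),
                  ("CA to NY", 5000)]:
    delay = one_way_ms(km)
    status = "within" if delay < BUDGET_MS else "blows"
    print(f"{label:10s} {km:5d} km -> {delay:5.1f} ms ({status} the 20ms budget)")
```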

The obvious solution is to move the rendering farm closer to the user, which means the cloud must know the user’s location. While gaming today is done over Wi-Fi or a wired connection, we’re starting to see next-gen mobile networks factor into the equation. As AR/VR experiences take hold, game services will need to know the user’s location and direct traffic to a data center within 500-1000 km to minimize latency.
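As a rough illustration of that steering decision, here’s a hypothetical sketch that picks the nearest render site by great-circle distance and rejects anything outside a 1000 km radius. The site names and coordinates are invented for illustration; a real placement engine would also weigh measured latency, load and capacity, not just geometry.

```python
import math

# Hypothetical placement sketch: site names and coordinates are
# invented for illustration, not real deployment data.
SITES = {
    "us-west":    (37.4, -122.1),
    "us-central": (41.9, -87.6),
    "us-east":    (40.7, -74.0),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2)
         * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def pick_site(user_loc, max_km=1000):
    """Nearest site, or None if every site is outside the radius."""
    name, dist = min(
        ((n, haversine_km(user_loc, p)) for n, p in SITES.items()),
        key=lambda t: t[1])
    return (name, dist) if dist <= max_km else (None, dist)

# A Los Angeles user lands on us-west, well inside the radius.
print(pick_site((34.0, -118.2)))
```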

Gaming services will need the intelligence and agility to expand or contract access points and resources on demand, accounting for scenarios ranging from a few players getting together at home to gaming conventions that draw dozens or hundreds of players to one location. The cloud must be ready to direct usage-spike traffic to a nearby cloud and dynamically scale its resources.

Jitter: even more critical than latency

Variance in latency, or jitter, is always bad. Low jitter (less than 5% of the median latency) means more predictable latency and is the goal for any network supporting gaming. Just as end-to-end latency accumulates across the network, cloud processing and image rendering, so too does jitter.
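To make that 5% rule of thumb concrete, the sketch below takes a series of one-way latency samples (the values are invented) and compares their jitter, simplified here to the mean absolute difference between consecutive samples, against 5% of the median.

```python
import statistics

# Minimal sketch of the "jitter under 5% of median latency" check.
# "Jitter" is simplified to the mean absolute difference between
# consecutive latency samples; sample values are invented.

def jitter_check(samples_ms, threshold=0.05):
    median = statistics.median(samples_ms)
    jitter = statistics.mean(
        abs(b - a) for a, b in zip(samples_ms, samples_ms[1:]))
    return jitter, median, jitter <= threshold * median

samples = [18.2, 18.9, 18.4, 19.1, 18.6, 18.8]   # one-way latency, ms
jitter, median, ok = jitter_check(samples)
print(f"median {median:.1f} ms, jitter {jitter:.2f} ms "
      f"-> {'PASS' if ok else 'FAIL'}")
```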

Image rendering in the cloud contributes to jitter because it relies on flexible software rather than dedicated ASICs and CPUs with predictable latency. In rendering farms consisting of 100K hardware GPUs, the cumulative impact on jitter can be significant.

From an AR/VR gamer’s perspective, jitter can have major repercussions. Dual 8K channels to every headset (8K per eye) will mean a 4X increase in data rate per channel, since an 8K frame carries four times the pixels of a 4K frame, and a doubling in the number of channels. But if one channel is delayed relative to the other, the right and left eyes won’t be synchronized. It will be headache central.

Gaming platform “Metaverses” must handle mixed reality at enhanced frame rates and resolutions, with ever-shrinking latency and jitter requirements.

As also discussed in our last blog, gaming platforms are the entry points to the metaverse world of immersive entertainment, social media and e-commerce, so quality of experience, especially latency and jitter, must be managed carefully to attract and hold customers.

Testing for the metaverse future

How can we ensure gaming platforms coming to market now will be able to handle increasing traffic needs throughout an average 7-year lifespan?

Based on Spirent’s customer engagements, we recognize that a key place to start is with these important test cases:

  • Ensure the platform can handle today’s traffic load

  • Measure future end-to-end performance with emulated next-generation traffic patterns

  • Determine end-to-end latency and jitter between users and data centers and between data centers

  • Measure micro-variances in latency and jitter (see the probe sketch after this list)

  • Test against concurrency, number of users, etc.

  • Identify the weakest link
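
As a rough illustration of the latency and jitter measurements above, here’s a hypothetical probe that fires small UDP packets at an echo responder and summarizes round-trip times. The host and port are placeholders, and a production test bed would use calibrated hardware timestamping, since software timers can mask exactly the micro-variances being measured.

```python
import socket
import statistics
import time

# Hypothetical UDP round-trip probe; host and port are placeholders.
def probe(host="echo.example.net", port=7, count=100):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for _ in range(count):
        t0 = time.perf_counter()
        sock.sendto(b"ping", (host, port))
        sock.recvfrom(64)                                # wait for echo
        rtts.append((time.perf_counter() - t0) * 1000)   # ms
    # Jitter over samples in arrival order, before sorting for percentiles.
    jitter = statistics.mean(
        abs(b - a) for a, b in zip(rtts, rtts[1:]))
    rtts.sort()
    return {
        "p50_ms": statistics.median(rtts),
        "p99_ms": rtts[int(0.99 * (len(rtts) - 1))],
        "jitter_ms": jitter,
    }
```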

Players take good quality of experience for granted, but remember bad quality for a long time, which too often results in a loss of customers. With multi-billion-dollar investments in gaming infrastructure in the balance, failure due to poor network quality is not an option. Luckily, it’s entirely avoidable with the right test strategy.

In our next post we’ll provide benchmarking latency data and its impact on gaming.

Learn more about Spirent’s cloud and virtualization test and assurance and 5G Network Benchmarking for Cloud Gaming solutions.


Chris Chapman

Senior Methodologist, Spirent

With over 20 years in telecommunications and 11+ years of network performance theory, Chris has extensive knowledge in testing and deployment of L1-7 network systems. His expertise includes performance analysis of QoS, QoE, TCP, IP (v4 and v6), UDP, HTTP(S), FTP, WAN acceleration, BGP, OSPF, IS-IS, MPLS, LDP, RSVP, VPLS, firewalls and load balancers.