The Secret Life of Modern RF Signals - Part 3
My last posting (weeks ago…sorry) pointed out that wireless channels are unpredictable and rarely repeatable. Yet we have channel models that purport to let us test how our receivers perform under real-world RF conditions. How?
As in all of communications engineering, channel modeling is based on statistics. What we need is a large amount of empirical fading data from which we can extract the statistical properties of the data set and replicate them in an RF “pipe”.
I should point out that recent advances in antenna techniques have driven a genuine need for some stochastic RF channel modeling, but at this point in the blog we are 1) still talking about relatively simple single-in, single-out (SISO) RF channels, and 2) intending to test “corner cases” where slight design differences can result in significant performance differences. This requires statistical models.
Step 1 is to gather data. This is usually done in academia… many a student has helped gather a massive amount of data to serve as inputs for statistical modeling. Step 2 is to map the data to some repeatable statistical function. Even a huge amount of randomly captured channel data pales in comparison to the amount we’d need to accurately replicate a world full of RF channels, so an appropriate statistical model has to be identified.
In most statistical disciplines a “first best guess” approximation involves assuming a normal or Gaussian distribution unless other available information invalidates the assumption. The object, then, is to treat the real and imaginary parts of the channel parameters as independent, identically distributed (i.i.d.) random variables.
By assuming that the real and imaginary fading parameters are each normally distributed, the phase comes out uniformly distributed and the magnitude follows a Rayleigh distribution. This baseline statistical model of RF fading is called Rayleigh fading, and it is where we really start talking about channel emulation.
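The construction above is easy to see in code. Here is a minimal sketch (the function name and the use of Python's standard `random` module are my own illustration, not anything from a real emulator): each complex fading coefficient gets i.i.d. zero-mean Gaussian real and imaginary parts, which makes its magnitude Rayleigh-distributed and its phase uniform.

```python
import math
import random


def rayleigh_fading_samples(n, sigma=1.0, seed=0):
    """Generate n complex fading coefficients whose real and imaginary
    parts are i.i.d. zero-mean Gaussians with std deviation sigma.
    The magnitude |h| is then Rayleigh, the phase uniform on [0, 2*pi)."""
    rng = random.Random(seed)
    return [complex(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
            for _ in range(n)]


# Sanity check: for this construction the mean power E[|h|^2]
# should approach 2 * sigma**2 (here, 2.0).
h = rayleigh_fading_samples(100_000, sigma=1.0)
mean_power = sum(abs(c) ** 2 for c in h) / len(h)
```

Note that nothing here is deterministic about any one sample; only the statistics are controlled, which is exactly the point of a statistical channel model.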
A Rayleigh distribution is based on zero-mean normal distributions of both real and imaginary parts of the fading components. This leads to a good fading model that does not include a line-of-sight (LOS) component. A similar set of models that does include the LOS component is based on the Rician (or Rice) distribution. The starting point for this model is similar to the Rayleigh model, but with non-zero means of the normally distributed real and imaginary parts of the fading parameters.
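The Rician case differs only in that non-zero mean, which is usually parameterized by the K-factor (the ratio of LOS power to scattered power). A hedged sketch, with the parameterization and function name being my own illustration: the LOS amplitude and the per-dimension Gaussian spread are derived from K and the total mean power, then samples are drawn exactly as in the Rayleigh case but offset by the LOS term.

```python
import math
import random


def rician_fading_samples(n, k_factor=4.0, omega=1.0, seed=0):
    """Rician fading coefficients: Gaussian real/imag parts with a
    non-zero mean (the LOS component).
    k_factor = LOS power / scattered power; omega = total mean power."""
    # Split omega between the LOS amplitude nu and the diffuse
    # per-dimension std sigma, so that nu**2 + 2*sigma**2 == omega.
    nu = math.sqrt(k_factor * omega / (k_factor + 1.0))
    sigma = math.sqrt(omega / (2.0 * (k_factor + 1.0)))
    rng = random.Random(seed)
    return [complex(nu + rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
            for _ in range(n)]


# Mean power should approach omega regardless of K; setting
# k_factor=0 would collapse this back to pure Rayleigh fading.
h = rician_fading_samples(100_000, k_factor=4.0, omega=1.0)
mean_power = sum(abs(c) ** 2 for c in h) / len(h)
```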
Other distributions attempt to use more realistic models… as an example, Nakagami models attempt to more accurately replicate the effects of larger delay-time spreading. Markov-based models attempt to be even “more random” than other models.
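For completeness, a Nakagami-m magnitude can also be sketched with the standard library, using the standard trick that the squared magnitude is Gamma-distributed (the function name and parameter defaults here are illustrative assumptions, not a reference implementation):

```python
import math
import random


def nakagami_magnitude_samples(n, m=1.5, omega=1.0, seed=0):
    """Nakagami-m fading magnitudes, generated via r**2 ~ Gamma(m, omega/m),
    which gives E[r**2] = omega. Setting m = 1 reduces to Rayleigh."""
    rng = random.Random(seed)
    # random.gammavariate(alpha, beta) uses beta as the scale parameter.
    return [math.sqrt(rng.gammavariate(m, omega / m)) for _ in range(n)]


# Mean power E[r**2] should approach omega (here, 1.0).
r = nakagami_magnitude_samples(100_000, m=1.5, omega=1.0)
mean_power = sum(x ** 2 for x in r) / len(r)
```

The extra shape parameter m is what gives Nakagami its flexibility in matching measured data that neither Rayleigh nor Rician fits well.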
As you can tell, we are only scratching the surface here. In practice, arguments for using a particular flavor of these fundamentally similar models are based on empirical observations captured in particular environmental scenarios. Every once in a while a paper will report, for example, that a set of gathered data closely matches a Nakagami distribution. However, when we’re talking about testing receivers in a mobile-based system, the variability in usage scenarios quickly makes the differences between models irrelevant. Our main concern is to ensure that our testing crosses the performance boundaries of the device under test, and does so in a “random enough” way to ensure performance no matter where or how the system is deployed.
Next time we’ll look into how standard-driven testing uses these statistical models to ensure specific levels of confidence in the testing.