Monday, November 5, 2018

linear systems - Help in understanding the formula of Signal-to-Noise-Ratio (SNR) - Part 1




  • Question 1:


    Consider an autoregressive model: \begin{align} y[n] &= y[n-1] + x[n]\\ z[n] &= y[n] + w[n] \end{align} where $y$ is the output observation, $x$ is a random input, and $w$ is Additive White Gaussian Noise (AWGN). I have seen in many research articles and tutorials that the SNR is defined as $\frac{E[y^2]}{\sigma^2_w}$.



    How does one arrive at this formula, and what is the meaning of the expectation $E[\cdot]$? Is it calculating the mean?
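
For concreteness, here is a minimal simulation of the system above. This is a sketch only: the question does not fix a distribution for $x$, so a zero-mean Gaussian input with variance $\sigma_x^2$ and a zero initial condition $y[-1]=0$ are assumed.

```python
# A minimal sketch of the system in the question. The question does not
# specify a distribution for x, so a zero-mean Gaussian input with
# variance sigma_x**2 and a zero initial condition y[-1] = 0 are assumed.
import numpy as np

rng = np.random.default_rng(0)
N, sigma_x, sigma_w = 200, 1.0, 0.5

x = rng.normal(0.0, sigma_x, N)   # random input x[n]
w = rng.normal(0.0, sigma_w, N)   # AWGN w[n] with variance sigma_w**2
y = np.cumsum(x)                  # y[n] = y[n-1] + x[n]  (running sum)
z = y + w                         # observation z[n] = y[n] + w[n]
```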





Answer



$\mathsf{SNR}$ (signal-to-noise ratio) is a generic term whose value can be defined in different ways by different people, and as long as one states clearly what is meant by $\mathsf{SNR}$ in a particular document, there is no confusion. Thus, there is no "arriving" at the formula $\mathsf{SNR} = \frac{E[y^2]}{\sigma_w^2}$ at all: it is the definition of the term $\mathsf{SNR}$ as it is used in that particular document. There is, of course, a general feeling among communications systems designers and analysts that the error probability $P_e$ should be a decreasing function of $\mathsf{SNR}$, but exactly which decreasing function it is depends on the definition of $\mathsf{SNR}$ in use. For example, someone could define $\mathsf{SNR}$ as $\frac{E[|y|]}{\sigma_w}$ or as $\frac{\sqrt{E[y^2]}}{\sigma_w}$ or as $\frac{E_b}{N_0}$ (a quantity that I like to call the BEND ratio, for "bit energy to noise density ratio"), because those definitions make more sense in a particular application, or lead to more memorable formulas. If $\mathsf{SNR}$ is defined to be the BEND ratio, then the bit error probability for coherent demodulation of binary orthogonal FSK would be $P_e = Q(\sqrt{\mathsf{SNR}}\,)$ (where $Q(\cdot)$ is the complementary standard Gaussian CDF), while for differentially coherent demodulation of differentially encoded PSK, the bit error probability would be $P_e =\frac{1}{2}e^{-\mathsf{SNR}}$, etc. Skipping over the fine details to the executive summary: large values of $\mathsf{SNR}$ are better than small values of $\mathsf{SNR}$.
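
As a quick numerical illustration of the two formulas just quoted, the following sketch evaluates both, with $\mathsf{SNR}$ taken to be the BEND ratio in linear (not dB) units; the `qfunc` helper is an assumption of this sketch, built from `scipy.special.erfc`.

```python
# A sketch of the two error-probability formulas quoted above, with
# SNR taken to be the BEND ratio E_b/N_0 in linear (not dB) units.
import numpy as np
from scipy.special import erfc

def qfunc(x):
    # Q(x) = 0.5 * erfc(x / sqrt(2)): complementary standard Gaussian CDF
    return 0.5 * erfc(x / np.sqrt(2.0))

snr_db = np.arange(0, 12, 2)
snr = 10.0 ** (snr_db / 10.0)

pe_fsk = qfunc(np.sqrt(snr))      # coherent binary orthogonal FSK
pe_dpsk = 0.5 * np.exp(-snr)      # differentially coherent DEPSK

for db, p1, p2 in zip(snr_db, pe_fsk, pe_dpsk):
    print(f"{db:2d} dB: FSK Pe = {p1:.3e}, DPSK Pe = {p2:.3e}")
```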




Since the $x[n]$'s are random variables, so are the $y[n]$'s, since each $y[n]$ is a sum of $x[k]$'s. Note that \begin{align} y[n] &= x[n]+y[n-1]\\ &= x[n] + \left(x[n-1]+y[n-2]\right)\\ &= x[n]+x[n-1]+\left(x[n-2]+y[n-3]\right)\\ &\ddots\\ &= \sum_{k\leq n} x[k] \end{align} Thus, $z[n] = y[n]+w[n]$ is the sum of two independent random variables and so \begin{align}\require{cancel}E[(z[n])^2] &= E[(y[n])^2] + E[(w[n])^2]+2E[y[n]]\cancelto{0}{E[w[n]]}\\ &= E[(y[n])^2] + \sigma_w^2\tag{1}\label{1}\end{align} where $E$ denotes expectation (as the OP suspected) and the expectation is indeed calculating the mean, not of the random variable $z[n]$ but of the random variable $(z[n])^2$, and $\sigma_w^2$ is the variance of the AWGN. So, the definition of $\mathsf{SNR}$ that the OP is reading about, viz. $\frac{E[y^2]}{\sigma_w^2}$, is the ratio of the two terms in $(1)$. The numerator is the (expected) instantaneous power of the output signal $y$ and the denominator is the power of the AWGN. This definition of $\mathsf{SNR}$ is deemed by the authors of the paper that the OP is reading to be the most convenient for their purposes. Others may choose to carry out similar analyses using different definitions of $\mathsf{SNR}$, and their formulas will appear to be different. But arguments about "My $\mathsf{SNR}$ is larger than your $\mathsf{SNR}$" are readily resolved by refusing to compare apples with oranges and looking at the important parameters: which system achieves smaller error probability, needs smaller (peak or average) transmitter power, uses less bandwidth, etc. The rest is just semantics.
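
Equation $(1)$ and the resulting $\mathsf{SNR}$ definition are easy to check by Monte Carlo at a fixed time index $n$. The sketch below assumes i.i.d. zero-mean Gaussian $x[k]$; all parameter values are illustrative.

```python
# A Monte Carlo check of equation (1) at a fixed time index n, assuming
# i.i.d. zero-mean Gaussian x[k]; all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
trials, n, sigma_x, sigma_w = 200_000, 20, 1.0, 0.5

x = rng.normal(0.0, sigma_x, (trials, n + 1))
y_n = x.sum(axis=1)                        # y[n] = sum_{k <= n} x[k]
z_n = y_n + rng.normal(0.0, sigma_w, trials)

Ey2 = np.mean(y_n**2)                      # ~ (n + 1) * sigma_x**2
Ez2 = np.mean(z_n**2)

print(Ez2, Ey2 + sigma_w**2)               # the two sides of (1) nearly agree
print("SNR =", Ey2 / sigma_w**2)           # the definition in the question
```

Note that under the i.i.d. assumption used in the sketch, $E[(y[n])^2] = (n+1)\sigma_x^2$ grows linearly in $n$ (the model is a random walk), so the value of $\frac{E[y^2]}{\sigma_w^2}$ depends on which time index is considered.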

