I am trying to simulate a model of an ADC and determine its performance.
One of the interesting properties is the ENOB (Effective Number Of Bits), which can be calculated from SINAD (SIgnal-to-Noise And Distortion ratio).
The SINAD Wikipedia page links to a PDF document which suggests that the second definition given on that page is the one to use, and which I interpreted as
$$ SINAD = 10 \log_{10} \left( \frac{p_f} {\sum_i{(p_i)} - p_0 - p_f} \right)dB $$
where $p_x$ is the power of the FFT bin at frequency $x$, $p_f$ is the power of the bin containing the signal frequency $f$, and $p_0$ is the DC component. I calculate the power of each bin by squaring its normalized amplitude. Also note that the sum in that equation runs over all bins from $0$ up to the Nyquist bandwidth $f_s/2$, with $f_s$ being the sampling frequency of the ADC.
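For concreteness, this is roughly how I compute it (a sketch, assuming coherent sampling so the signal lands exactly on a bin and no window is needed; the function name and normalization are my own):

```python
import numpy as np

def sinad_db(x, fs, f_signal):
    """Estimate SINAD (dB) from a time-domain record x sampled at fs.

    Assumes f_signal falls exactly on an FFT bin (coherent sampling),
    so no window is applied.
    """
    M = len(x)
    spec = np.fft.rfft(x) / M          # amplitude-normalized one-sided spectrum
    p = np.abs(spec) ** 2              # power per bin
    k = int(round(f_signal * M / fs))  # bin index of the signal
    p_sig = p[k]
    p_dc = p[0]
    # Noise + distortion: everything except the DC and signal bins
    p_nd = p.sum() - p_dc - p_sig
    return 10 * np.log10(p_sig / p_nd)
```
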
That same document also defines $$ ENOB = \frac{SINAD-1.76dB}{6.02} $$
Using my SINAD calculation and this ENOB definition, I evaluated an ideal $N$-bit ADC model; the resulting ENOB matched $N$ for my FFT depth.
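The sanity check I ran looks roughly like this (a self-contained sketch; the quantizer model, bin choice, and names are my own assumptions, not from the documents):

```python
import numpy as np

def enob(x, sig_bin):
    """ENOB from the SINAD of a coherently sampled record x."""
    M = len(x)
    p = np.abs(np.fft.rfft(x) / M) ** 2
    p_nd = p.sum() - p[0] - p[sig_bin]      # exclude DC and signal bins
    sinad = 10 * np.log10(p[sig_bin] / p_nd)
    return (sinad - 1.76) / 6.02

M, N, k = 8192, 10, 131                     # FFT depth, bits, prime signal bin
x = np.sin(2 * np.pi * k * np.arange(M) / M)
q = 2.0 / 2**N                              # LSB for a [-1, 1] input range
enob_est = enob(np.round(x / q) * q, k)     # ideal quantizer: expect ~N
```

With an ideal mid-tread quantizer and a full-scale coherent sine, `enob_est` comes out close to $N$, consistent with $SNR = 6.02N + 1.76\,dB$.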
Another (related) PDF document states that the FFT depth $M$ must be large enough to distinguish the FFT noise floor from the ADC noise. To my surprise, it also says that the FFT noise floor is $$ 10 \log_{10} \left( \frac{M}{2} \right)dB $$ below the theoretical signal-to-quantization-noise ratio $SNR = 6.02N + 1.76\,dB$, which makes me wonder how $M$ could then ever be "too low"?
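That processing-gain relation is easy to reproduce numerically (again a sketch under my own assumed setup, an ideal quantizer with a coherent full-scale sine): the average per-bin noise level relative to the signal should drop by $10\log_{10}(2) \approx 3\,dB$ each time $M$ is doubled, since the same total quantization noise power is spread over twice as many bins.

```python
import numpy as np

def noise_floor_db(M, N, k):
    """Average noise-bin power relative to the signal bin, in dB."""
    x = np.sin(2 * np.pi * k * np.arange(M) / M)
    q = 2.0 / 2**N                            # LSB of an ideal N-bit quantizer
    p = np.abs(np.fft.rfft(np.round(x / q) * q) / M) ** 2
    noise = np.delete(p, [0, k])              # drop DC and signal bins
    return 10 * np.log10(noise.mean() / p[k])

# Doubling M should lower the average noise bin by about 3 dB
d = noise_floor_db(8192, 8, 131) - noise_floor_db(4096, 8, 131)
```
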
While playing with different ADC models, I have seen that the ENOB does vary significantly when I set $M$ too low, so I am wondering if there is any guide on how to choose $M$ for a desired ADC bitwidth of $N$?