Sunday, November 25, 2018

filters - Is up-sampling prior to cross-correlation useless?



Consider a simple case where two signals from two different sensors are cross-correlated, and the time-delay-of-arrival is computed from the abscissa of the peak of their cross-correlation function.


Now let us further assume that, due to the dimensional constraints of both antennas and the constraint on the maximum possible sampling rate, the maximum attainable delay is $D$, corresponding to 10 samples.


The problem:


Because of those constraints, your computed delay may take any integer value between 0 and 10 samples, that is: $0 \le D \le 10$. This is problematic because what I really want is fractional-delay discrimination of the delay between the two signals impinging on my antennas, and changing the dimensions or the sampling rate is not an option.


Some thoughts:




  • Naturally, the first thing I think of for this case is upsampling the signals before performing a cross-correlation. However, I think this is 'cheating' somehow, because I am not really adding any new information into the system.





  • I do not understand how upsampling is not 'cheating' in a sense. Yes, we are reconstructing our signal based on its currently observed frequency information, but how does this give one knowledge of where a signal truly started between, say, $D=7$ and $D=8$? Where was this information contained in the original signal that determined that the true fractional-delay start of the signal was actually at $D=7.751$?




The question(s):




  • Is this truly 'cheating'?



    • If not, then where is this new 'information' coming from?

    • If yes, then what other options are available for estimating fractional-delay times?





  • I am aware of upsampling the result of the cross-correlation, in an attempt to garner sub-sample answers to the delay, but is this not also a form of 'cheating'? Why is it different from upsampling prior to the cross-correlation?




If indeed it is the case that the upsampling is not 'cheating', then why would we ever need to increase our sampling rate? (Isn't having a higher sampling rate always better in a sense than interpolating a low-sampled signal?)


It would seem then that we could just sample at a very low rate and interpolate as much as we want. Would this then not make increasing the sample rate 'useless' in light of simply interpolating a signal to our heart's desire? I realize that interpolation takes computational time and simply starting with a higher sample rate would not, but is that then the only reason?


Thanks.



Answer




It's not cheating, and it's also not adding any new information. What you are doing is the same thing that any upsampling LPF is doing: adding zeros and then reconstructing the waveform with the already known frequency information. Thus, there is no new information, but there is still finer time resolution.
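To make that concrete, here is a minimal sketch (Python/NumPy, with SciPy assumed available) of estimating a fractional delay by upsampling both signals before cross-correlating. The function name and the upsampling factor are illustrative choices, not something from the original question or answer.

    import numpy as np
    from scipy.signal import resample_poly

    def delay_by_upsampled_xcorr(x, y, up=16):
        """Estimate the delay of y relative to x, in original-sample units,
        by upsampling both signals before cross-correlating."""
        xu = resample_poly(x, up, 1)   # zero-insertion + low-pass reconstruction
        yu = resample_poly(y, up, 1)
        xc = np.correlate(yu, xu, mode="full")
        lag = np.argmax(xc) - (len(xu) - 1)   # peak lag in upsampled samples
        return lag / up                       # convert back to original samples

With up=16 the estimate can land on a grid sixteen times finer than the original sampling, which is exactly the finer time resolution described above.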


Upsampling the result is similar: no new information, but finer time resolution. You can do something very similar through quadratic interpolation.
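As a concrete sketch of that quadratic interpolation, here is the standard three-point parabolic fit around the integer peak (the helper name parabolic_peak is mine, not part of the original answer):

    import numpy as np

    def parabolic_peak(xc, k):
        """Refine an integer peak index k of a sequence xc by fitting a
        parabola through xc[k-1], xc[k], xc[k+1]; returns (location, value)."""
        ym1, y0, yp1 = xc[k - 1], xc[k], xc[k + 1]
        offset = 0.5 * (ym1 - yp1) / (ym1 - 2.0 * y0 + yp1)  # in (-0.5, 0.5)
        value = y0 - 0.25 * (ym1 - yp1) * offset
        return k + offset, value

Note that the fractional offset depends only on the peak sample and its two neighbors, which is the point made next.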


All of these methods (upsampling and polynomial interpolation) get their information on where the fractional peak is from both the peak itself and its neighbors. A quick pictorial example:

[Figure: balanced peak]


The blue line in the picture above is my simulated cross-correlation data (though it could be any result, not just a cross-correlation). It is what I call a "balanced" peak because the neighbors are symmetric. As you might expect, the resulting quadratic interpolation (red line) indicates that the true peak is at zero.


The image below, on the other hand, shows an unbalanced peak. Please note that nothing has changed in the result except the values of the two nearest neighbors. This, though, causes the interpolator to shift its estimate of the fractional peak.

[Figure: unbalanced peak]
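Using the parabolic_peak sketch from above, the balanced/unbalanced behaviour can be reproduced with illustrative numbers (these are not the exact values in the figures):

    balanced   = np.array([0.2, 0.7, 1.0, 0.7, 0.2])
    unbalanced = np.array([0.2, 0.6, 1.0, 0.8, 0.2])
    print(parabolic_peak(balanced, 2))    # offset 0.0: estimate stays at the integer peak
    print(parabolic_peak(unbalanced, 2))  # offset ~ +0.17: estimate shifts toward the larger neighbor

Only the two neighbor values changed, yet the fractional estimate moves.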


A nifty side benefit of these methods (polynomial interpolation and upsampling) is that they also give you an estimate of the true peak value, though we are usually more interested in the location.



If indeed it is the case that the upsampling is not 'cheating', then why would we ever need to increase our sampling rate?



To satisfy the Nyquist criterion.




Isn't having a higher sampling rate always better in a sense than interpolating a low-sampled signal?



No. From a theoretical standpoint, as long as the Nyquist criterion is satisfied it doesn't matter what the sample rate is. From a practical standpoint, you generally go with as low a sample rate as you can get away with, to reduce the storage requirements and computational load, which in turn reduces the required resources and power consumption.


