The information below is from http://www.gaussianwaves.com/2011/05/ebn0-vs-ber-for-bpsk-over-rayleigh-channel-and-awgn-channel-2/ . My questions stem from some misconceptions, and I would really appreciate help in clearing up the concepts.
AWGN channel:
If $y$ is the received data, $x$ is the transmitted data, and $n$ is additive white Gaussian noise, then the signal is received as $y = x + n$. This is an FIR model.
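For concreteness, here is a minimal MATLAB sketch of this model (the BPSK mapping, the symbol count, and the Eb/N0 value are my assumptions, not from the original post):

N      = 1e5;                        % number of symbols (assumption)
EbN0dB = 10;                         % Eb/N0 in dB (assumption)
x      = 2*(rand(1,N) > 0.5) - 1;    % random BPSK symbols, +1 or -1
sigma  = sqrt(1/(2*10^(EbN0dB/10))); % noise std dev for unit-energy symbols
n      = sigma*randn(1,N);           % real AWGN samples
y      = x + n;                      % received signal: y = x + n
ber    = mean(sign(y) ~= x)          % hard-decision BER estimate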
Rayleigh channel:
The received signal here is $y = hx + n$, where $h$ can be $h_1$ or $h_2$. This is also an FIR model.
- Case 1: When $h = h_1$:
h_1 = 1/sqrt(2)*(randn(1,N) + 1j*randn(1,N)); % flat Rayleigh channel
This creates the coefficients $h$ of length $N$ representing the channel, where $N$ is the number of data samples. This is a flat-fading Rayleigh channel because the number of taps = 1: each symbol sees one complex gain, not a filter spanning several symbols. Please correct me if I am wrong in saying that the number of taps = 1, so it is flat fading. In general, number of taps = number of delays. (A sketch of how $h_1$ is applied appears after this list.)
- Case 2: When $h = h_2$:
If $h_2 = [1, 0, 0.5, -0.2]$, we still call the channel Rayleigh, but now it is also called multipath. So, is the number of multipath components = length(h_2) = 4?
How is $h_2$ a Rayleigh random variable? What is the proper way to represent $h_2$ as a Rayleigh channel, and what is the output $y$? An explanation with the help of a code statement would be useful.
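For reference, here is a minimal sketch of how the flat channel $h_1$ from Case 1 is applied (the BPSK symbols and the noise level are assumptions):

N     = 1e5;                                        % number of symbols (assumption)
x     = 2*(rand(1,N) > 0.5) - 1;                    % BPSK symbols
h     = 1/sqrt(2)*(randn(1,N) + 1j*randn(1,N));     % one complex gain per symbol
sigma = 0.1;                                        % noise std dev (assumption)
n     = sigma/sqrt(2)*(randn(1,N) + 1j*randn(1,N)); % complex AWGN
y     = h.*x + n;                                   % flat fading: element-wise product, not a convolution
xhat  = sign(real(y./h));                           % equalize with the known h, then detect

Note the element-wise product h.*x: each symbol is multiplied by its own single gain, which is exactly what "one tap" means.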
Theory says that there are Clarke's and Young's models for channel representation. Clarke's model uses an FIR representation built from sums of sinusoids. Why do we need this model when we can simply apply the technique used to generate $h_1$?
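For reference, a minimal sum-of-sinusoids sketch in the spirit of Clarke's model (the number of sinusoids, the Doppler shift, and the time axis are all assumptions):

M     = 64;                  % number of sinusoids (assumption)
fD    = 100;                 % maximum Doppler shift in Hz (assumption)
t     = 0:1e-4:0.1;          % time axis sampled at 10 kHz (assumption)
alpha = 2*pi*rand(1,M);      % random angles of arrival
phi   = 2*pi*rand(1,M);      % random initial phases
h     = zeros(size(t));
for m = 1:M
    h = h + exp(1j*(2*pi*fD*cos(alpha(m))*t + phi(m))); % one Doppler-shifted component
end
h = h/sqrt(M);               % normalize so the average power is 1

The practical difference from $h_1$ is that these samples are correlated in time according to the Doppler spread, whereas randn produces independent samples from one instant to the next.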
Answer
The fundamental idea to keep in mind is that in a wireless channel with reflections, if you transmit $s(t)$, you'll receive $$r(t)=\sum_{i=1}^N a_i s(t-\tau_i).$$
Another important idea is that, for a given set of delays, whether the channel is flat or not depends on the signal $s(t)$ (in particular, on its symbol time).
For instance, let's say that the symbol time is $T_s$. Let's call the longest delay in the channel $\tau_{L}$. If $\tau_L > T_s$, then one symbol will appear at the receiver "on top" of other symbols. This is called "intersymbol interference" and the channel is called a "wideband" channel or a "frequency-selective" channel.
On the other hand, if $\tau_L \ll T_s$ (that is, if the longest delay is much shorter than the symbol time), then there is no intersymbol interference. However, each symbol is still "interfering with itself": many replicas of the same symbol, each with a different amplitude $a_i$, are received at essentially the same time. If $N$ is large, then you can invoke the central limit theorem and claim that the overall effect of all these replicas is equivalent to multiplication by a single complex Gaussian random number, whose magnitude is Rayleigh-distributed (hence the name "Rayleigh channel"). This channel is called "flat".
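A quick numerical check of this claim (the per-path amplitude and phase statistics below are assumptions chosen only for illustration):

Npaths  = 100;                       % number of replicas of the symbol (assumption)
Ntrials = 1e4;                       % number of independent channel realizations
a   = randn(Ntrials, Npaths);        % per-path amplitudes a_i (assumed statistics)
phi = 2*pi*rand(Ntrials, Npaths);    % phases induced by the delays, uniform on [0, 2*pi)
h   = sum(a.*exp(1j*phi), 2)/sqrt(Npaths); % aggregate gain of all replicas
hist(abs(h), 50)                     % the magnitude is close to Rayleigh-distributed

Plotting real(h) and imag(h) instead shows the two Gaussian components directly.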
Note: the same channel can behave as a "wideband" channel or as a "flat" channel to two signals with different symbol rates.
Now, to the question about how to simulate this. It all depends on the sampling frequency used to sample the channel impulse response. When you say that a channel model consists of a single random number, what you're saying is that you're sampling the channel impulse response at the symbol rate. In other words, the channel gain $h_1$ is the aggregate effect of all the $a_i$ and $\tau_i$.
Even for a flat channel, if you sample fast enough, you'll be able to resolve all the $a_i$ and $\tau_i$.
In your question, vector $h_2$ can only be interpreted properly if you know the sampling rate. If the sampling rate is the symbol rate, then the channel's longest delay is of the order of 4 symbol times, and you'll have quite severe intersymbol interference. If the sampling rate is much faster than the symbol rate, the channel is flat.
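As a sketch of the symbol-rate interpretation (the tap values are from the question; the symbol stream and the noise level are assumptions):

h2    = [1, 0, 0.5, -0.2];           % taps at symbol-rate spacing (from the question)
N     = 1e5;                         % number of symbols (assumption)
x     = 2*(rand(1,N) > 0.5) - 1;     % BPSK symbols
sigma = 0.1;                         % noise std dev (assumption)
y     = conv(x, h2);                 % each output mixes 4 consecutive symbols: intersymbol interference
y     = y(1:N) + sigma*randn(1,N);   % trim the convolution tail and add AWGN

To make such a channel Rayleigh-fading rather than fixed, each tap would itself be drawn as an independent complex Gaussian, e.g. h2 = sqrt(g/2).*(randn(1,4) + 1j*randn(1,4)) for some assumed power-delay profile g; the magnitude of each tap is then Rayleigh-distributed.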