Thursday, July 6, 2017

image processing - How do moving-part pixel intensity values of video frames become dominant compared to stationary-part intensities in reconstructed frames?



Hello everyone, I want to do dynamic texture video segmentation using the Fourier transform in MATLAB. I am applying a 3-D FFT to the dynamic texture video frames (using the MATLAB function 'fftn') and reconstructing the video frames from the phase spectrum only (using the MATLAB function 'ifftn').



  1. I have found that the moving-part pixel intensity values become dominant (i.e. their intensity values are greatly increased) compared to the stationary-part intensities in the reconstructed frames (e.g. in waterfall and road-traffic videos, the intensity values of the water and of the moving cars, respectively, are increased compared to the stationary background). How does this happen?

  2. Also, what is the relation between a time shift and a phase change if we take the Fourier transform of video frames?

  3. I want to know what changes occur in the magnitude and phase spectra from frame to frame, and whether we can derive any relation amongst the interframe magnitude or phase spectrum changes.



Answer



Q1. As shown by Oppenheim's experiment, the phase spectrum contains most of the structural information about the image. In 2D these are things like lines and edges. In 3D they are things like lines and edges, but also movement. Instead of thinking in terms of 2D frames plus time, imagine the video as a 3D solid where the z-axis is the frame number. If you take a slice along the z-axis (3rd dimension), movement appears as an edge in the signal.
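A quick way to see this in MATLAB, assuming a video stack V of size rows x cols x frames is already in memory (the variable name and the row index 100 are assumptions chosen for illustration):


slice_xt = squeeze(V(100, :, :));   % one image row taken from every frame (an x-t slice)
imagesc(slice_xt); colormap gray;   % moving objects show up as oriented streaks/edges

Static background gives streaks that run parallel to the time axis; anything that moves across that row traces an oblique edge.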


When you reconstruct using just the phase spectrum, you give all the component sinusoids the same magnitude. This basically normalises the brightness everywhere in the image. Try it in MATLAB:


I = double(imread('YOURIMAGE.TIF'));  % load the image as double
f = fft2(I);                          % 2-D Fourier transform
f = f ./ (abs(f) + eps);              % keep the phase only (unit magnitude everywhere)
I = real(ifft2(f));                   % phase-only reconstruction
imagesc(I); colormap gray;

In a 3D image, such as a video, this normalisation also acts along the time axis. Hence some smaller-amplitude moving parts become greater in amplitude (and the opposite for high-amplitude parts).
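A minimal sketch of the same phase-only reconstruction for a whole video, assuming the frames are stacked into a rows x cols x frames array V (the variable name, the eps guard and the display call are assumptions, not part of the original answer):


V = double(V);               % video stack, rows x cols x frames (assumed already loaded)
F = fftn(V);                 % 3-D Fourier transform over x, y and t
F = F ./ (abs(F) + eps);     % keep the phase only; eps avoids division by zero
R = real(ifftn(F));          % phase-only reconstruction of the video
implay(mat2gray(R));         % moving regions now stand out against the stationary background

Because the magnitude normalisation is applied to the temporal frequencies as well, the reconstruction boosts the moving parts relative to the static background, which is exactly the effect described in the question.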


Q2. One video frame in your case plays the same role as one pixel does in the x or y directions. The phase change depends on the movement.
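A worked form of this relation is the Fourier shift theorem: a pure translation in time changes only the phase, never the magnitude. In the notation below, $f$ is the video, $F$ its 3-D transform, and $t_0$ the delay in frames (the symbols are mine, not from the original answer):

$$ f(x, y, t - t_0) \;\longleftrightarrow\; F(u, v, w)\, e^{-i 2\pi w t_0} $$

so a delay of $t_0$ frames multiplies each temporal-frequency component $w$ by the linear phase ramp $e^{-i 2\pi w t_0}$ while leaving $|F(u, v, w)|$ unchanged.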


Q3. Again, that depends on the movement. Slow movement falls into the lower frequencies of the spectrum, fast movement into the higher frequencies. The spectrum is a global response, though: it can only tell you about the signal as a whole. If you want to do the analysis locally (i.e. restricted in location and time) and still get phase information, you have to use quadrature filters.
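As an illustration of the quadrature-filter idea, here is a minimal sketch of a 1-D complex Gabor filter applied along the time axis of a video stack V; the variable name, kernel support, bandwidth and centre frequency are all assumed values chosen for illustration:


sigma = 2;                                            % temporal envelope width (assumed)
w0    = 0.25;                                         % centre frequency in cycles/frame (assumed)
t     = -8:8;                                         % filter support in frames (assumed)
g     = exp(-t.^2/(2*sigma^2)) .* exp(1i*2*pi*w0*t);  % complex Gabor (quadrature) kernel
g     = reshape(g, 1, 1, []);                         % orient the kernel along the 3rd (time) axis
R     = convn(double(V), g, 'same');                  % complex response at every pixel and frame
localPhase  = angle(R);                               % local temporal phase
localEnergy = abs(R);                                 % local amplitude ("motion energy")

The phase and amplitude obtained this way are local in both position and time, unlike the global Fourier spectrum.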

