I'm currently trying to build a preamble detector for my OFDM
transceiver, which I have built in Simulink, for the purpose of frame
synchronization. This is a somewhat lengthy post, but I hope
those who can help will take some time to look at it.
I appreciate it!
Here is what I have done so far. This is the autocorrelation of a PN
sequence: http://geocities.com/antonio_magma/corr/corrawgn.gif
I used a 64-point IFFT to convert the binary bits into a real-valued
time-domain signal (with conjugate symmetry), passed it through an AWGN
channel, and then correlated. This is the resulting graph after
autocorrelation: http://geocities.com/antonio_magma/corr/corrawgngraph.gif
It looks fine, with the first sample having a larger magnitude than
the others.
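In case it helps, here is the same pipeline sketched in numpy (this is not my actual Simulink model; the BPSK mapping, the 10 dB SNR, and the variable names are just assumptions I made for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # IFFT size

# Random PN bits mapped to BPSK (+1/-1) on the data subcarriers
bits = rng.integers(0, 2, N // 2 - 1)
sym = 1.0 - 2.0 * bits

# Enforce conjugate symmetry so the IFFT output is real-valued
X = np.zeros(N, dtype=complex)
X[1:N // 2] = sym
X[N // 2 + 1:] = np.conj(sym[::-1])

x = np.fft.ifft(X).real          # real-valued time-domain preamble

# AWGN channel at an assumed 10 dB SNR
snr_db = 10.0
sig_pow = np.mean(x ** 2)
noise_pow = sig_pow / 10 ** (snr_db / 10)
r = x + rng.normal(0, np.sqrt(noise_pow), N)

# Cross-correlate the received signal with the known preamble;
# the peak should sit at lag 0 (index N-1 in 'full' mode)
corr = np.correlate(r, x, mode="full")
peak = np.argmax(np.abs(corr))
```

At a moderate SNR the lag-0 peak should clearly dominate the sidelobes, which is roughly the shape in my first graph.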
1st question:
Why is there such a huge difference between the results? In the 2nd
graph I'm getting an almost noise-like signal. Does this mean I should
use autocorr instead of xcorr in my model? Or am I building the model
wrong?
Next, because the preamble is typically repeated and transmitted
multiple times for synchronization, I've tried concatenating the
output of the IFFT to form 4 identical sequences to be transmitted and
correlated.
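For context, what I expected from the 4-repetition case: correlating the burst against a single copy of the preamble should give one peak per repetition, spaced N samples apart, rather than one bigger peak. A quick numpy check (here x is just a stand-in ±1 sequence, not my actual IFFT output, and the noise level is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
x = rng.choice([-1.0, 1.0], N)   # stand-in for the real-valued preamble

tx = np.tile(x, 4)               # 4 identical preamble copies back-to-back
rx = tx + 0.3 * rng.normal(size=tx.size)

# Slide one clean copy over the whole burst
corr = np.correlate(rx, x, mode="valid")

# The 4 strongest lags should be the 4 repetition starts: 0, 64, 128, 192
peaks = np.argsort(np.abs(corr))[-4:]
```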
2nd question:
With reference to the 4corrawgngraph, the 1st peak became even larger,
which I guess is due to the superposition of the 4 sequences. But
shouldn't the output look like this instead? http://geocities.com/antonio_magma/corr/corrop.gif
I understand the basic theory behind double correlation, but how do I
implement it to obtain an output like yunchiu-corr? Is it something
I've done wrong in my simulation (which I think is the case), or
should I have designed it differently? If you have any ideas, please
let me know, because I've been stuck on this for quite some time.
Thanks, guys!
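For reference, this is my current understanding of the delay-and-correlate ("double correlation") idea, sketched in numpy: instead of correlating against a stored preamble, you correlate the received signal with a copy of itself delayed by one repetition period, which gives a plateau/peak at the preamble. The timing_metric helper and all the constants here are hypothetical, just to show the shape:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64                           # assumed repetition period of the preamble
x = rng.choice([-1.0, 1.0], N)   # stand-in preamble
tx = np.concatenate([np.zeros(100), np.tile(x, 4), np.zeros(100)])
rx = tx + 0.2 * rng.normal(size=tx.size)

def timing_metric(r, N):
    """Delay-and-correlate metric: signal vs. itself delayed by N,
    normalised by the energy in the delayed window."""
    M = np.empty(len(r) - 2 * N)
    for d in range(len(M)):
        P = np.sum(r[d:d + N] * r[d + N:d + 2 * N])   # delayed correlation
        R = np.sum(r[d + N:d + 2 * N] ** 2)           # energy normaliser
        M[d] = np.abs(P) ** 2 / R ** 2
    return M

M = timing_metric(rx, N)
start = np.argmax(M)   # lands on the plateau covering the preamble start
```

The metric stays near 1 while both windows sit inside the repeated preamble and drops toward 0 outside, which is the kind of shape I see in yunchiu-corr.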