I think I have to make things clearer. I have been given two signals, say, x and r. x is the original signal, and r is the output of the channel, i.e. a corrupted version of x. r is the input to the equalizer, which will be driven by various algorithms (LMS, leaky LMS, RLS, etc.). Let the output of the equalizer (driven first by LMS) be y. I suppose the difference between y and a delayed version of x is the error, e.

Now, the channel is completely unknown. I also have no idea about the equalizer tap weights.
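To make the setup concrete, here is a rough sketch of what I mean for the LMS part (Python/NumPy; the function name lms_equalizer, the step size mu, and the zero-initialized tap weights are just my own assumptions, since nothing about the equalizer is given):

```python
import numpy as np

def lms_equalizer(x, r, M, D, mu=0.01):
    """Train an M-tap LMS equalizer on the received signal r,
    using x delayed by D samples as the desired signal.
    Returns the equalizer output y and the error e."""
    N = len(r)
    w = np.zeros(M)                            # tap weights (unknown, so start at zero)
    y = np.zeros(N)
    e = np.zeros(N)
    d = np.concatenate([np.zeros(D), x])[:N]   # delayed version of x
    for n in range(M, N):
        u = r[n - M + 1:n + 1][::-1]           # most recent M samples of r
        y[n] = np.dot(w, u)                    # equalizer output
        e[n] = d[n] - y[n]                     # error against delayed x
        w += mu * e[n] * u                     # LMS weight update
    return y, e
```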
The idea of the project is to use those two signals, x and r, together with the algorithm, to produce plots of (1) e versus delay and (2) e versus filter length. The purpose of these plots is to determine the optimal delay and the optimal filter length for the algorithm.
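So the sweep I'm picturing would look roughly like this, reusing the lms_equalizer sketch above. The test channel h, the noise level, and the choice to measure e as the mean squared error over the last samples are all placeholder assumptions just to make it runnable; in the actual project x and r are given:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder test signals -- in the real project x and r are given.
rng = np.random.default_rng(0)
x = np.sign(rng.standard_normal(2000))            # BPSK-like source
h = np.array([0.5, 1.0, -0.3])                    # stand-in channel
r = np.convolve(x, h, mode="full")[:len(x)]
r += 0.05 * rng.standard_normal(len(x))           # additive noise

def steady_state_mse(e, tail=500):
    """Average squared error over the last `tail` samples."""
    return np.mean(e[-tail:] ** 2)

# (1) e versus delay, at a fixed filter length
delays = range(0, 15)
M_fixed = 11
mse_vs_delay = [steady_state_mse(lms_equalizer(x, r, M_fixed, D)[1])
                for D in delays]

# (2) e versus filter length, at the best delay found above
lengths = range(2, 30)
D_fixed = min(delays, key=lambda D: mse_vs_delay[D])
mse_vs_length = [steady_state_mse(lms_equalizer(x, r, M, D_fixed)[1])
                 for M in lengths]

plt.figure()
plt.plot(list(delays), mse_vs_delay, "o-")
plt.xlabel("delay D")
plt.ylabel("steady-state MSE")

plt.figure()
plt.plot(list(lengths), mse_vs_length, "o-")
plt.xlabel("filter length M")
plt.ylabel("steady-state MSE")
plt.show()
```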
I hope it is a bit clearer now. I really need help because I'm completely new to adaptive filters. I hope to hear from you.
Regards