wtr
Full Member level 5
It's been a while since I've been in the weeds with DSP. I've mostly been working at more abstract levels of data processing on frames (networking), or doing crypto etc. in FPGA land, so I'm rusty and need guidance and a refresher on DSP techniques.
I've been assigned a task to characterize a transceiver.
Here is my thought process. Please help me iron out the correct steps.
- I have a signal generator blasting a signal in at fc=2440MHz.
- I've set up a constant bandwidth, gain, etc. on the transceiver's receiver.
- I then program the transceiver's center frequency to sweep across the so-called passband and roll-off region (I'm moving the Rx window, but could move Tx instead).
- For each step of the sweep I generate a binary data file.
- Each binary data file can be unpacked into int16 arrays iq0 and iq1, then combined and scaled: samples = np.stack((iq0, iq1), 1) * 2**-15
- These samples can then be viewed as a spectrum (FFT) or in raw I/Q form.
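The unpack-and-scale step above might look like this. A minimal sketch; the interleaved little-endian int16 (I, Q) file layout and the function name are assumptions about your capture format, so adjust for what your transceiver actually writes:

```python
import numpy as np

def load_iq(path):
    """Unpack a capture file of interleaved 16-bit I/Q pairs into
    complex baseband floats.

    Assumption: native-endian int16 samples laid out I0, Q0, I1, Q1, ...
    The 2**-15 factor scales full-scale int16 into roughly [-1, 1).
    """
    raw = np.fromfile(path, dtype=np.int16)
    iq = raw.reshape(-1, 2) * 2**-15   # columns: iq0 (I), iq1 (Q)
    return iq[:, 0] + 1j * iq[:, 1]    # one complex sample per pair
```

At a 10 ms capture per sweep step, the resulting array length is simply sample_rate * 0.01.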
My initial thought was to somehow compress the entire 10 ms window down to a single max value per frequency offset.
How do I do this? What steps do I take to turn the samples into a max-hold style graph?
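One common way to get a max-hold trace: chop the 10 ms record into FFT-sized frames, window and FFT each frame, and keep the per-bin maximum magnitude across all frames; that is the same reduction a spectrum analyser's max-hold mode performs. A sketch in NumPy (the frame size, Hann window, and dB floor are my assumptions, not anything from your setup):

```python
import numpy as np

def max_hold_spectrum(samples, fft_size=1024, fs=1.0):
    """Collapse a complex baseband capture into one max-hold spectrum.

    Splits `samples` into non-overlapping fft_size frames, applies a
    Hann window, FFTs each frame, and keeps the per-bin maximum
    magnitude over all frames. Returns (freq_offsets_hz, dB_values),
    with frequencies centered on 0 (i.e. offset from your tuned fc).
    """
    n_frames = len(samples) // fft_size
    frames = samples[:n_frames * fft_size].reshape(n_frames, fft_size)
    win = np.hanning(fft_size)
    spectra = np.abs(np.fft.fftshift(np.fft.fft(frames * win, axis=1), axes=1))
    max_hold = spectra.max(axis=0)                     # the max-hold reduction
    freqs = np.fft.fftshift(np.fft.fftfreq(fft_size, d=1 / fs))
    return freqs, 20 * np.log10(max_hold + 1e-12)      # small floor avoids log(0)
```

For your sweep, you would run this once per capture file and then take, say, the peak bin of each trace to get one point per programmed center frequency, which plotted together gives the passband/roll-off shape.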
Regards,
Wes