
Spread spectrum: noise density


Ghost Tweaker


Hi everyone,

let's consider a signal that is spread well below the noise (say, 50 Hz data modulated by a 1.023 MHz PRN) and then translated to an RF frequency by a sine carrier. The received signal power is only -130 dBm, and the noise density is taken as -174 dBm/Hz.
Thus:
-> The C/N0 (carrier-to-noise-density ratio) is -130 - (-174) = 44 dB-Hz.
-> The SNR over the spread bandwidth is -130 - (-174 + 10*log10(2.046 MHz)) = -130 dBm - (-111 dBm) = -19 dB.
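These two numbers are easy to check. A minimal sketch in plain Python (the figures are the ones assumed in this post, GPS-like):

```python
import math

# Illustrative numbers taken from the post above.
P_rx_dbm = -130.0          # received signal power
N0_dbm_hz = -174.0         # thermal noise density (kT at 290 K)
bw_hz = 2.046e6            # spread (null-to-null) bandwidth, 2 x 1.023 MHz

cn0_dbhz = P_rx_dbm - N0_dbm_hz                  # carrier-to-noise-density ratio
noise_dbm = N0_dbm_hz + 10 * math.log10(bw_hz)   # noise power in the 2.046 MHz band
snr_db = P_rx_dbm - noise_dbm                    # SNR before despreading

print(round(cn0_dbhz, 1))   # 44.0  (dB-Hz)
print(round(noise_dbm, 1))  # -110.9  (dBm)
print(round(snr_db, 1))     # -19.1  (dB)
```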

I understand the notion of a signal buried in the noise, since the signal is 19 dB below the noise in the 2 MHz bandwidth.
My question is about what my components (LNA, ...) really see: noise over the 2 MHz, or a sine wave corrupted by -174 dBm/Hz noise?
Perhaps I should re-read my communications course, sorry... but it seems tricky to me.
Thanks in advance for the help,
Ghost
 

Ghost Tweaker said:
Hi everyone,
My question is about what my components (LNA, ...) really see: noise over the 2 MHz, or a sine wave corrupted by -174 dBm/Hz noise?
Perhaps I should re-read my communications course, sorry... but it seems tricky to me.
Thanks in advance for the help,
Ghost

Just noise, but that noise is really something! In CDMA the signal is artificially drowned in noise, and in the receiver it jumps back out of the pool through a particular kind of filtering (the pseudo-random codes). That is the case here too: the LNA sees noise, but not white noise, and that noise carries a lot of information: your signal, maybe a lot of other CDMA signals, and other things!

marti
 

Thanks a lot, I think it's clear to me now. The SNR doesn't really make sense in a CDMA system before the signal has been despread. At least you have to take care not to degrade the SNR further along the front-end.
One (last) thing I don't understand well is the ADC at the end of the RF front-end. How can we make an n-bit decision before the despreading process if several signals are mixed together?

Ghost
 

Well, the signal is corrupted if it cannot be filtered well. That said, there are many methods to overcome this (I mean, where the signal is concentrated in a certain narrow band). For example, the channel noise can be estimated with a smart circuit (such as a neural network): the channel characteristics are estimated, consequently the noise at the output is estimated, and thus the wanted signal can be well reproduced. I have checked this method with a noise level more than three times bigger than the signal. (By the way, the most useful characteristic of the noise is that its average is zero.) Hope it helps.

marti
 

About your sentence "the signal is corrupted if it cannot be filtered": even if the signal is filtered, there is still noise (or other CDMA signals) in the passband...
 

Ghost Tweaker said:
About your sentence "the signal is corrupted if it cannot be filtered": even if the signal is filtered, there is still noise (or other CDMA signals) in the passband...

By "well filtered" I meant channel modeling, noise estimation, and other considerations, not a simple rectangular frequency-domain filter. This is how CDMA works, and other designs too, including yours :).

rgrds, marti
 

I'm working on the RF front-end and thus I'm limited to passband filters...
 

Ghost Tweaker said:
I'm working on the RF front-end and thus I'm limited to passband filters...
Well, since you cannot help amplifying both signal and noise, you have to separate them some other way. The best approach is not to use any narrow filtering at all: pass the whole-band signal on to the section that takes care of removing the noise.

marti
 

CDMA is not a magical way to transmit a signal below the noise floor. It is just another way to share a resource (the spectrum) among users. Although in your example the SNR is -19 dB, the energy-per-bit-to-noise-density ratio (Eb/N0) is +27 dB, which makes it comfortable to work with 50 b/s in a 2 MHz channel.
My advice is: if you want to get rid of the noise, filter it and reduce the amplified bandwidth to just the band where your useful spectrum is. Use a good LNA and a high-linearity A/D converter (if you intend to implement processing to improve the performance).
And remember, Shannon sees everything.
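For anyone who wants to check the +27 dB figure: it follows directly from C/N0 and the bit rate. A small Python sketch (the values are assumed from the example above):

```python
import math

cn0_dbhz = 44.0       # C/N0 from the link budget: -130 dBm - (-174 dBm/Hz)
bit_rate_hz = 50.0    # information rate

# Eb/N0 = C/N0 - 10*log10(bit rate): all the received energy collected over
# one bit duration, compared against the noise density.
ebn0_db = cn0_dbhz - 10 * math.log10(bit_rate_hz)
print(round(ebn0_db, 1))  # 27.0  (dB)
```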
 

Thank you all for the interest. Debeli, what basically disturbs me is that the 27 dB bit-to-noise ratio is post-correlation, and this correlation is performed after the RF front-end's ADC. Pre-correlation I have -19 dB, and this ADC conversion disturbs me...
 

Essentially, you can consider your digital correlating receiver as a matched filter for your 50 b/s modulated signal, where each bit is modulated with a KNOWN PN sequence (the filter response is matched to it). This lets you improve your SNR by 43 dB in your case (I had at first wrongly assumed a 2 MHz chip rate instead of 1 MHz), while the noise density remains (almost) the same. The trick is that the spreading sequence carries no information for the receiver; what matters is the amount of energy you have in one information-bit duration.
In the old days the correlation and despreading were done before (or without) the A/D conversion, and that worked well too. For you it is most important to use the dynamic range of the ADC as effectively as possible, in order to let the digital processing do the best job.
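The 43 dB figure is just the processing gain, i.e. the chip-rate-to-bit-rate ratio expressed in dB. A quick Python check (rates assumed from the thread):

```python
import math

chip_rate_hz = 1.023e6   # PN chip rate
bit_rate_hz = 50.0       # data rate

# Despreading gain of the correlating (matched-filter) receiver:
# the number of chips integrated per information bit, in dB.
pg_db = 10 * math.log10(chip_rate_hz / bit_rate_hz)
print(round(pg_db, 1))  # 43.1  (dB)
```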
You can find a lot of literature about spread spectrum with any search on Google or IEEE. If you want a traditional source on SS systems, try to find:
"Spread Spectrum Communications", Vol. 1-3, Simon, Omura, Scholtz, and Levitt.

Maybe it helps
D.
 

Thanks.

My problem is not the spread-spectrum technique; I understand how the energy is spread.

What disturbs me is that the signal is below the noise before the ADC, and thus the ADC converts noise!?
 

What disturbs me is that the signal is below the noise before the ADC, and thus the ADC converts noise

I would rather say that the signal is IN the noise. Noise makes up most of the converted signal, but the data signal still influences the values of the several least significant bits, so you can rely on the ADC.
Let us make the following experiment: you sample your signal and obtain it in L samples (L is the number of chips, 20,000 in your case), Pn = Nn + Sn, where the variance (power) of Nn (the noise samples) is much larger, in this case 100 times (or 20 dB) larger, than that of Sn, which is your PN sequence (+-1). The mean of both is 0.
When you correlate the PN sequence against Nn + Sn, the per-sample variance of Nn is not changed and remains 100, so the noise power stays the same, but Sn is now equal to 1 on every sample and no longer random with mean 0. The value you integrate is therefore equal to the number of chips, and you suddenly get a signal power like 20,000: an SNR of +23 dB, which is 43 dB more than the -20 dB you had before correlation. This example is not completely correct, but it can be used to visualize the concept.
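The experiment above can also be run numerically. A rough Python sketch under the same assumptions (+-1 PN chips, Gaussian noise with 100 times the chip power, L = 20,000), not a model of a real receiver:

```python
import math
import random

random.seed(1)

L = 20000              # chips integrated per bit (as in the example above)
sigma = 10.0           # noise std; noise power 100, chip power 1 -> SNR = -20 dB

pn = [random.choice((-1.0, 1.0)) for _ in range(L)]   # known PN sequence
rx = [c + random.gauss(0.0, sigma) for c in pn]       # chips buried in noise

# Correlate: multiply by the known PN and integrate over one bit.
corr = sum(r * c for r, c in zip(rx, pn))

# The signal sums coherently to ~L, while the noise only grows as sqrt(L),
# so the output SNR is corr^2 / (L * sigma^2), about L / sigma^2 = 200 (+23 dB).
snr_in_db = 10 * math.log10(1.0 / sigma**2)
snr_out_db = 10 * math.log10(corr**2 / (L * sigma**2))
print(round(snr_in_db))                  # -20
print(16000 < corr < 24000)              # True: the integral lands close to L
print(39 < snr_out_db - snr_in_db < 47)  # True: roughly the 43 dB gain
```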

Regards,
D
 
