James84
Hello,
According to Wikipedia (Quantization Error): "In the typical case, the original signal is ordinarily much larger than one LSB. When this is the case, the quantization error is not significantly correlated with the signal, and has an approximately uniform distribution."
My question is: under what conditions can I assume that the quantization error is uniformly distributed?
According to Widrow's quantization theory, the characteristic function (CF) of the input signal must be band-limited, i.e., zero outside some range. But my input signal is not a random variable, so it has no PDF.
Is there another theory or theorem for a deterministic input signal?
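To make the question concrete, here is a minimal numerical sketch (my own illustrative choice of an 8-bit mid-tread quantizer and a sine input, not something taken from Widrow's paper or the Wikipedia article): it quantizes a deterministic sine whose amplitude spans many LSBs and checks whether the error histogram looks uniform on [-LSB/2, +LSB/2].

```python
import numpy as np

# Assumed quantizer: 8-bit step over a unit range (illustrative choice)
lsb = 1.0 / 256
n = 100_000
t = np.arange(n)

# Deterministic input: sine with amplitude >> 1 LSB and a "generic"
# normalized frequency so samples spread over many quantizer cells
x = 0.45 * np.sin(2 * np.pi * 0.123456789 * t)

xq = lsb * np.round(x / lsb)   # mid-tread uniform quantizer
err = xq - x                   # quantization error, in [-lsb/2, +lsb/2]

# If the error were exactly uniform, its std would be lsb/sqrt(12)
hist, _ = np.histogram(err, bins=32, range=(-lsb / 2, lsb / 2), density=True)
print("error std            :", err.std())
print("uniform prediction   :", lsb / np.sqrt(12))
print("histogram max/min bin:", hist.max() / hist.min())
```

Empirically the error looks close to uniform for this kind of input, but I would like to know the theoretical condition that guarantees it for a deterministic signal.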