Sensitivity in filters and number of bits of a system


akbarza

Hi,
I read a PDF file that I think belongs to the book "Op Amps for Everyone". In chapter 16 of the book, the filter-design chapter, sensitivity is explained. For a given component tolerance in an 8th-order low-pass Butterworth filter, the difference between the ideal response and the real response is 0.35 dB. The book then says:

"If this filter is intended for a data acquisition application, it could be used at best in a 4-bit system. In comparison, if the maximum full-scale error of a 12-bit system is given with 1/2 LSB, then maximum pass-band deviation would be -0.001 dB, or 0.012%."

Can you explain these 4-bit and -0.001 dB figures? How does one get from the 0.35 dB error to 4 bits?
Thanks
 

Attachments

  • filterdesign.pdf
    292.8 KB

The plot discussed above (Figure 16-51) shows a 4.1% gain error: 10^(0.35/20) - 1 ≈ 0.041. For 4 bits, 1 LSB = 1/2^4 = 6.25%, which is the closest bit count whose 1-LSB error still covers the 4.1% error (at 5 bits, 1 LSB is only 3.125%).

For 12 bits accurate to 1/2 LSB, G = 1 + 1/2^13 = 1.000122, a 0.0122% gain error, which is about 0.001 dB.
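The same numbers as a quick Python check (the function names are mine; this just restates the conversions above):

```python
import math

def db_to_fraction(err_db):
    """Fractional gain error corresponding to a dB deviation."""
    return 10 ** (abs(err_db) / 20) - 1

def covering_bits(err_fraction):
    """Largest N such that 1 LSB = 1/2**N is still >= the error."""
    return math.floor(math.log2(1 / err_fraction))

err = db_to_fraction(0.35)            # ~0.041, i.e. the 4.1 % gain error
print(covering_bits(err))             # 4: 1/2**4 = 6.25 % covers 4.1 %,
                                      #    1/2**5 = 3.125 % does not

# Reverse direction: 12 bits accurate to 1/2 LSB
half_lsb = 1 / 2 ** 13                # 0.000122 -> 0.0122 % gain error
print(20 * math.log10(1 + half_lsb))  # ~0.00106 dB, the book's 0.001 dB
```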


Regards, Dana.
 
The calculation is simple, but in my opinion inappropriate for a typical data acquisition application. You would expect full resolution at DC or at low to mid frequencies, but not necessarily near the cut-off frequency.
 

I think of a true-RMS meter, the high-precision stuff HP/Agilent or Fluke make. They spec accuracy out to a bandwidth, so to FvM's point, that is probably not the actual 3 dB bandwidth but one that is much lower, to ensure that, in this case, no peaking errors occur.

But these days high precision is done through linearization and calibration; a typical system goes through that process at manufacture.
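For what that linearization-and-cal step can look like, here is a minimal sketch assuming a polynomial correction fitted against known references at the factory; the reference values, readings, and second-order fit are illustrative, not any particular instrument's procedure:

```python
import numpy as np

# Factory step: drive the channel with known references and record
# what the uncorrected channel actually reports (values illustrative).
applied  = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])        # reference volts
measured = np.array([0.01, 0.98, 1.97, 2.99, 4.03, 5.08])  # raw readings

# Fit measured -> applied so the polynomial undoes offset, gain error,
# and curvature; the coefficients would live in the instrument's cal memory.
coeffs = np.polyfit(measured, applied, deg=2)

def corrected(raw):
    """Runtime step: apply the stored calibration to a raw reading."""
    return np.polyval(coeffs, raw)

print(corrected(4.03))  # ~4.0 after correction
```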
Regards, Dana.
 
