ADC bit calculation method

dipnirvana
Hi, I have a very basic question to put forward.

Let's say you want to measure a battery voltage of 2.5 V with 1% error. The minimum voltage the battery can drop to is 0.1 V.

One way of calculating the number of bits required is 2.5/0.1, i.e. a dynamic range of 25, so the ADC requirement is 5 bits (32 steps). The other method says the smallest voltage to resolve is 1% of 0.1 V, i.e. 0.001 V. Then the dynamic range is 2.5/0.001 = 2500, which calls for 12 bits (4096 steps).
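As a quick sanity check, both calculations come down to taking the ceiling of the base-2 logarithm of the required step count. A minimal Python sketch (the helper name bits_required is just for illustration, not from any library):

```python
import math

def bits_required(steps):
    # Smallest number of ADC bits whose code range covers the given step count
    return math.ceil(math.log2(steps))

# Method 1: resolve steps of 0.1 V over a 2.5 V range
print(bits_required(2.5 / 0.1))    # 25 steps   -> 5 bits (32 codes)

# Method 2: resolve steps of 1 % of 0.1 V (1 mV) over the same range
print(bits_required(2.5 / 0.001))  # 2500 steps -> 12 bits (4096 codes)
```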

The two requirements are very different, so which one is correct?
 

12 bits is correct if you actually need to measure 0.1 V with 1% accuracy, which sounds unreasonable. I would guess that 1% of 2.5 V, which converts to ±25 mV absolute resolution, would be sufficient: 2.5 V / 25 mV = 100 steps, which means 7-bit resolution (128 steps).
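Applying the same ceiling-of-log2 calculation to these figures, a minimal sketch (the helper name bits_for_error is illustrative and assumes one LSB must be no larger than the allowed absolute error):

```python
import math

def bits_for_error(full_scale_v, allowed_error_v):
    # Bits needed so that one LSB is no larger than the allowed absolute error
    return math.ceil(math.log2(full_scale_v / allowed_error_v))

print(bits_for_error(2.5, 0.025))  # 100 steps -> 7 bits (128 codes)
```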
 

Don't take the values I put forward as real-world values; I only chose them to illustrate the confusion. The confusion is actually there because some application notes say you should only take the DC error into account (hence the first method), whereas other app notes describe the second method.

Thinking about it further: if an ADC input pin sits at 2.5 V, why would that ADC even need 7 bits to get to 1%, as long as the gain error plus the quantization error is less than 1%?
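To make that error-budget argument concrete, a rough sketch of summing the contributions against a 1% budget (the gain and offset error values below are assumed purely for illustration, not taken from the thread or any datasheet):

```python
# Rough ADC error budget check; all terms expressed as fractions of full scale.
n_bits = 7

quantization_error = 0.5 / (2 ** n_bits)   # +/- 0.5 LSB as a fraction of full scale
gain_error = 0.002                         # 0.2 % gain error (assumed value)
offset_error = 0.001                       # 0.1 % offset error (assumed value)

total_error = quantization_error + gain_error + offset_error
print(f"worst-case error: {total_error * 100:.2f} % of full scale")
print("meets 1 % budget" if total_error <= 0.01 else "exceeds 1 % budget")
```

With these assumed values the 7-bit quantization error (about 0.39% of full scale) plus the static errors still fits inside a 1% budget, which is the point of the question: the bit count alone doesn't decide the accuracy, the whole error budget does.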
 
