I am designing a temperature sensor based on a PTAT-to-frequency converter using a current-starved ring oscillator.
I would like to ask you some questions, please.
1) Since the PTAT current depends on the absolute value of a resistor, it changes from one process corner to another, which shifts the sensor frequency up or down, but the linearity is preserved.
Do I need to correct for this process variation with some trimming scheme?
2) How do I measure (simulate) the resolution of the temperature sensor?
3) How do I calculate the temperature sensor output error (in °C)?
So you are designing a temperature-to-frequency converter.
1) This frequency has to be measured and processed somehow. We don't have any information on how you want to do this.
So I guess it's a good idea to "calibrate" the sensor at the same place where the frequency gets processed.
2) The frequency here is an analog quantity, thus the resolution should in principle be infinite.
Yes, I designed a temperature-to-frequency converter.
The frequency measurement will be done using a microcontroller rather than an on-chip frequency-to-digital converter.
1- Now the question is how I can do the calibration. From a circuit perspective I can introduce tunable elements as variables for the calibration process, but what is the calibration procedure?
2- So if the MC is used to measure the frequency, then the resolution will be defined by the controller.
3- I think it is the same as "sensor accuracy", i.e. the maximum reading error.
I will describe my procedure (though I am not sure whether it is right):
From the sensor output (frequency vs. temperature), I take the derivative (the sensor sensitivity in Hz/°C).
Since the sensitivity is not ideally constant but changes with temperature, I take its average value; this gives me the slope of the ideal, approximated sensor output.
As the next step I plot a line with this slope that intersects the actual sensor curve at -40 °C (the minimum simulated temperature).
The result shows a difference between the actual curve and this line, so I take that difference.
Finally I divide the difference by the sensitivity value and get an error expressed in °C; as an example, my sensor has a maximum error of 1 °C.
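In code form, my procedure looks roughly like this (a minimal sketch; the data arrays are made-up placeholders, not my actual simulation results):

```c
#include <stdio.h>

#define N 6

int main(void)
{
    /* hypothetical simulated sensor output over -40..85 degC */
    const double temp_c[N]  = { -40.0, -15.0, 10.0, 35.0, 60.0, 85.0 };
    const double freq_hz[N] = { 653.1e3, 723.5e3, 793.2e3,
                                862.4e3, 931.0e3, 999.1e3 };

    /* 1) average sensitivity (Hz/degC): mean of the point-wise slopes */
    double sens = 0.0;
    for (int i = 1; i < N; i++)
        sens += (freq_hz[i] - freq_hz[i - 1]) / (temp_c[i] - temp_c[i - 1]);
    sens /= (N - 1);

    /* 2) line with that slope anchored at the -40 degC point,
       3) deviation from it, converted to degC via the sensitivity */
    double max_err_c = 0.0;
    for (int i = 0; i < N; i++) {
        double f_line = freq_hz[0] + sens * (temp_c[i] - temp_c[0]);
        double err_c  = (freq_hz[i] - f_line) / sens;
        if (err_c < 0.0) err_c = -err_c;
        if (err_c > max_err_c) max_err_c = err_c;
    }

    printf("sensitivity = %.1f Hz/degC, max error = %.2f degC\n",
           sens, max_err_c);
    return 0;
}
```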
From the chart it looks like 2.8 kHz/°C. Rather linear.
With 765 kHz at 0 °C.
So the formula should be: Temp_degree = (frequency - 765 kHz) / 2.8 kHz/°C
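For example, a hypothetical reading of 835 kHz would give (835 kHz - 765 kHz) / 2.8 kHz/°C = 25 °C.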
Calibration:
What errors of what magnitude do you expect?
And what do you want to calibrate?
It depends on your application.
The usual simple calibration is just
* offset (correction by "addition")
* gain (correction by "multiplication")
Two known measurement points are necessary...and simple math (see the sketch after point 2 below).
2) Resolution is determined by the software rather than by the hardware.
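Back to the calibration, a minimal sketch of that two-point math in C (all names and numbers below are placeholders I made up, assuming the correction is applied to the measured frequency before the temperature formula):

```c
#include <stdio.h>

int main(void)
{
    /* two known calibration temperatures; f_nom is what the nominal
       curve f = 765 kHz + 2.8 kHz/degC * T predicts there */
    const double t1 = 0.0, t2 = 85.0;
    const double f1_nom = 765.0e3 + 2.8e3 * t1;
    const double f2_nom = 765.0e3 + 2.8e3 * t2;

    /* hypothetical frequencies actually read from one fabricated chip */
    const double f1_read = 742.0e3;
    const double f2_read = 971.0e3;

    /* gain removes the slope error, offset the remaining shift */
    const double gain   = (f2_nom - f1_nom) / (f2_read - f1_read);
    const double offset = f1_nom - gain * f1_read;

    printf("calibration_gain = %.4f, calibration_offset = %.0f Hz\n",
           gain, offset);
    return 0;
}
```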
Let me settle the calibration issue before moving on to fix the other questions.
The calibration I mean concerns the sensor output change across process corners, as you can see from the figure below.
In order to use your formula Temp_degree = (frequency - 765 kHz) / 2.8 kHz/°C: as you see, 765 kHz is not applicable for every corner and has to be determined for every fabricated chip.
Alternatively, I can introduce an adjustable element in the circuit, for instance an adjustable current, that I can use to calibrate and tune the value back to 765 kHz for every corner.
But I am not sure whether this is mandatory or not.
With this, I guess, none of us can help.
If you don't know, we surely don't either.
And you don't give any information about the application or about the expected accuracy and precision, so we are unable to give a recommendation.
****
Calibration:
For sure you are free to do the calibration on the chip. Only you know what is easier or what makes more sense.
As already said: I'd do it at the microcontroller with simple math:
frequency_corrected = frequency_read * calibration_gain + calibration_offset.
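On the MC, the per-reading step could then look like this (a minimal sketch; the function name is mine, the gain and offset come from a two-point calibration as sketched earlier, and 765 kHz / 2.8 kHz/°C are the nominal constants from the chart):

```c
/* Apply the stored calibration, then convert to temperature. */
double frequency_to_celsius(double frequency_read,
                            double calibration_gain,
                            double calibration_offset)
{
    double frequency_corrected =
        frequency_read * calibration_gain + calibration_offset;
    return (frequency_corrected - 765.0e3) / 2.8e3; /* degC */
}
```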
I am using the temperature sensor for on-chip heat monitoring.
The maximum error should be <= ±1 °C over the temperature range from -40 to 85 °C.
You have cleared up most of my questions for me.
Since I will use the MC anyway, I would like to follow your procedure on the microcontroller, as you said, with simple math:
frequency_corrected = frequency_read * calibration_gain + calibration_offset.
This would solve my whole problem if you could explain it to me a little: what is the calibration procedure?
I am imagining it this way:
I connect the sensor to the MC, and I need to read at least two points to determine the actual gain; I also need to read the sensor frequency at 0 °C. This procedure means I need a reference temperature sensor.
I can't imagine a calibration without a reference.
So yes. Two known temperatures. Not necessarily 0 °C; any other temperature will do.
The rest is math. Since this is a standard calibration method for linear measurements, I expect it to be explained many thousands of times.
I expect even video tutorials ... and code.
Please try an internet search if you need a deeper explanation.
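To make the math concrete with made-up numbers: suppose in a temperature chamber the chip reads 760 kHz at 0 °C and 830 kHz at 25 °C, while the nominal curve predicts 765 kHz and 835 kHz. Then calibration_gain = (835 - 765) / (830 - 760) = 1.0 and calibration_offset = 765 kHz - 1.0 * 760 kHz = 5 kHz, and every later reading is corrected with frequency_corrected = frequency_read * 1.0 + 5 kHz before applying the temperature formula.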