I have been given the task of documenting some embedded C code written by a contractor who has since moved on, and I am having trouble understanding the scaling used in an expression that computes the PWM value required to maintain a given output voltage for a buck converter operating in discontinuous mode.
As a sanity check I have derived the expression given in the Wikipedia article (en.wikipedia.org/wiki/Buck_converter) myself, and I agree with it.
I have rearranged it to get D on the left and noted that D = PWM value / MAX PWM value.
From this I have an expression for PWM in terms of the other variables, as implemented in the code extract below.
The circuit is a basic FET switch with a series inductor, an infrared LED load (typically no more than a few hundred milliamps) with a parallel capacitor,
and a diode -- everything is standard, just like the figure in the Wikipedia article.
The implementation I have received is a C application running on an Atmel ATxmega128A1.
The system clock is running at 32MHz from the PLL.
The specific configuration for the PWM is:
the timer is TCC1 with prescaler DIV1 (TCC1_CTRLA = 1) and a period of 1600 cycles (TCC1_PER = 1599), giving a 20kHz PWM frequency from the 32MHz clock, and the PWM is set to single slope (TCC1_CTRLB = 0x33).
The code is:
Code:
f = ( (((unsigned long)PWM_TOP * (unsigned long)PWM_TOP) >> 8) * (unsigned long)NVIS_INDUCTOR_UH ) / 1000; // (PWM_TOP^2 / 256) * L_uH / 1000
f = (f * (unsigned long)F_PWM) >> 8; // * switching frequency, then / 256
f = f >> 4; // / 16
//====================
f = (f * (unsigned long)intensity) / 1000; // user intensity multiplier
f = f * (unsigned long)mean_current; // Maximum current is 1,300mA before exceeding long size
g = (unsigned long)power_in_mv * (unsigned long)power_in_mv; // Vin^2, in mV^2
g = g / (unsigned long)current_scale; // g = Vin^2 / Vout
g = g - (unsigned long)power_in_mv; // g = Vin * (Vin - Vout) / Vout
g = g >> 5; // / 32
pwm_buffer[index] = (unsigned short)sqrt((unsigned long)((unsigned long)f / (unsigned long)g));
Here,
PWM_TOP is #defined as 1599 (the maximum PWM value),
intensity is a unitless multiplier that lets the user set the LED intensity through a configuration file,
mean_current is the load current in mA,
current_scale is (despite its name) the required load voltage in mV,
and power_in_mv is the supply voltage in mV, nominally 12V from a lead-acid battery.
I understand the two divisions by 1000, which ensure the multipliers are balanced on top and bottom, and I understand the divisions by 16 and 32 (>>4 and >>5), which give the multiplier of 2 on the top.
What I don't understand is the two divisions by 256 (the two >>8 shifts). It is as if the PWM value is scaled by 256, so that when it is squared it needs to be reduced by 256 * 256, but I don't see where this scaling comes from.
I can see no other scaling factors used anywhere else in the code.
Does anyone have any idea?
I have pored over the PWM chapter of the Atmel data sheet looking for clues but have found nothing. I am sure it is something very simple that I have overlooked.
Thank you for taking the time to read all this.