Understanding Voltage Rails in High-End Processors

FreshmanNewbie

I have observed that high-end processors typically operate at relatively low voltage levels, often around 1.2V or even lower. I am curious to understand how such low voltage levels are sufficient for the operation of these devices.

Through my research on this forum, I have learned that CPUs generally have multiple power rails, such as those for the core and I/O, each operating at different voltages.

I would like to gain a deeper understanding of the reasoning behind using separate voltage rails for the core and I/O. Additionally, I am interested in knowing which of these rails typically requires higher current and the underlying reason for this higher current demand. Any insights would be really helpful.
 

I assume you understand how the Vgs/Vt ratio governs Vds in the on state, how process shrinks lower the switched charge Q = CV, and how Vt is reduced along with the lithographic feature size of the wafer process.
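
To make the Q = CV point concrete, here is a minimal sketch (C_node and f_clk are assumed round numbers, not figures for any real processor) showing how the charge moved per transition and the dynamic switching power scale with supply voltage:

```python
# Illustrative only: C_node and f_clk are assumed round numbers,
# not figures for any real processor.
C_node = 1e-15    # ~1 fF of switched capacitance per node (assumption)
f_clk = 3e9       # 3 GHz clock (assumption)

for vdd in (5.0, 3.3, 1.2, 0.8):
    q = C_node * vdd                  # charge moved per transition: Q = C*V
    p_dyn = C_node * vdd**2 * f_clk   # dynamic switching power: P = C*V^2*f
    print(f"Vdd={vdd:4.1f} V  Q={q:.2e} C  Pdyn={p_dyn:.2e} W")
```

The V^2 term is why a 1.2 V core rail burns roughly (5/1.2)^2 ≈ 17 times less switching power per node than a 5 V one would, and it is the main reason such low rails are not just sufficient but necessary at modern transistor counts.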

For the fastest CPUs with super-low-voltage capability, a Vt of 0.2 V to 0.3 V is a reasonable estimate based on current trends.
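
As a quick sanity check (a sketch assuming the midpoint Vt of 0.25 V), a 1.2 V core rail still leaves generous overdrive relative to the Vgs/Vt = 2 RoT discussed next:

```python
vdd, vt = 1.2, 0.25                 # assumed core rail and midpoint Vt
print(f"Vgs/Vt = {vdd / vt:.1f}")   # ~4.8, well above the nominal ratio of 2
```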

My Rule of Thumb (RoT) is that Vgs/Vt = 2 gives the nominal Ron; reducing this ratio increases Rds × Cout = τ and thus lowers the usable clock rate, because Ron = Vdd/Id rises as Vgs (= Vds) falls. Increasing Vdd shortens the turn-on and transition times, so faster clock rates can be selected, but it also increases the current during each transition, which is a linear shoot-through by design to maintain impedance. This limits the ratio of Vdd max/min in all CMOS, since that dynamic power grows with some exponent of Vdd (< ^4? I forget the exact figure). The original CD4xxx high-voltage types, which had Vt = 1.5 V and beta ≈ 0.05, had a very high ratio of Vdd max/min.
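
A rough numeric sketch of that RoT, using the long-channel square-law approximation Ron ≈ 1/(beta × (Vgs − Vt)); beta, Cout, and Vt here are illustrative assumptions, and short-channel devices deviate from this simple model:

```python
# Square-law approximation: Ron ~ 1 / (beta * (Vgs - Vt)).
# beta, c_out and vt are assumed illustrative values, not device data.
beta = 2e-3     # transconductance parameter, A/V^2 (assumption)
c_out = 5e-15   # output/load capacitance, F (assumption)
vt = 0.25       # threshold voltage, V (assumption)

for ratio in (3.0, 2.0, 1.5, 1.2):      # Vgs/Vt
    vgs = ratio * vt
    ron = 1.0 / (beta * (vgs - vt))     # Ron rises as overdrive shrinks
    tau = ron * c_out                   # tau = Rds*Cout limits clock rate
    print(f"Vgs/Vt={ratio:3.1f}  Ron={ron:7.0f} ohm  tau={tau * 1e12:5.2f} ps")
```

In this model Ron roughly doubles at each step as the ratio falls from 3 toward 1.5, then climbs steeply below that, which is why the RoT pins Vgs/Vt = 2 as the nominal operating point.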

Traditional power FETs have Vgs(th) = 2 to 4 V; my RoT there is to drive the gate at 2.5 × Vgs(th) to reach the rated RdsOn, and lower for the "subthreshold types."
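
A minimal sketch of that gate-drive RoT (the threshold values, and the reduced multiplier for the logic-level/subthreshold parts, are assumed examples, not taken from any datasheet):

```python
def gate_drive(vgs_th, subthreshold=False):
    """RoT: drive a standard power FET at ~2.5 x Vgs(th) to reach its
    rated RdsOn; 'subthreshold' (logic-level) parts need less drive.
    The 1.5x multiplier used for those parts is an assumption."""
    k = 1.5 if subthreshold else 2.5
    return k * vgs_th

print(gate_drive(4.0))                      # 10.0 V drive for a classic 4 V threshold FET
print(gate_drive(1.0, subthreshold=True))   # 1.5 V for a logic-level part
```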