Generally, when you go to lower nodes the gate oxide also gets thinner, and if it is SiO2 then, because of its small dielectric constant and reduced thickness, the gate leakage current increases.
Hence nowadays, at lower nodes, a high-k metal gate process is preferred.
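To see why high-k helps, here is a minimal sketch of the equivalent-oxide-thickness (EOT) relation, assuming textbook permittivities (SiO2 k = 3.9, HfO2 k ~ 20; actual values vary by process). A high-k film can be physically thicker at the same gate capacitance, and direct tunneling falls off exponentially with physical thickness:

def physical_thickness_for_eot(eot_nm: float, k_highk: float, k_sio2: float = 3.9) -> float:
    """Physical high-k thickness giving the same capacitance/area as eot_nm of SiO2."""
    return eot_nm * k_highk / k_sio2

eot = 1.0  # nm, an illustrative 1 nm EOT target
print(f"SiO2 thickness: {eot:.1f} nm")
print(f"HfO2 thickness for same capacitance: {physical_thickness_for_eot(eot, 20.0):.1f} nm")  # ~5.1 nm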
Can anybody let me know how the sub-threshold leakage current actually increases in scaled-down technologies? I need both a quantitative and a qualitative answer. Kindly post the answer with the relevant mathematical equations.
Increasing leakage is a deliberate tradeoff made in pursuit of speed. An Ileak/W that would once have been unacceptable is now tolerated in process development targets. You also have plain physics, like tunneling currents in ultrathin dielectrics, that will not respond to anything but exotic gate compositions, which don't come for free (in terms of manufacturability and reliability) either.
When foundries push VT lower, as they do in pursuit of speed at low Vdd, this puts you further into (or more firmly into) subthreshold conduction. You can take your worst-case minimum VT, the current per width at VT, and your worst-case subthreshold slope (in mV/decade), and estimate your leakage as something like

Ileak ~= (IperW@VT) * Wtotal / 10^(VT*1000/subSlope_mV_per_dec)

For example, if W = 1 um gives you 1 uA at VTsat = 0.5 V, your subthreshold slope is 100 mV/decade, and you have 1M transistors of this geometry in the "off" state exposed to the same Vds as your VTsat test condition:

(1E-6)*(1E6)/10^(0.5*1000/100) = 10 uA

But if VT drops to 0.4 V and everything else stays equal, you would see 100 uA, just because of the subthreshold-slope vs. VT foot-race.
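A quick sketch of that back-of-the-envelope estimate in Python (the function and variable names are mine, not a standard model; it assumes every off device sees the same Vds as the VT test condition):

def subthreshold_leakage(i_per_w_at_vt, w_total_um, vt_volts, ss_mv_per_dec):
    """Estimate total off-state leakage from the current/width measured at VT.

    Each subthreshold-slope's worth of gate drive below VT knocks
    one decade off the drain current.
    """
    decades_below_vt = vt_volts * 1000.0 / ss_mv_per_dec
    return i_per_w_at_vt * w_total_um / 10.0 ** decades_below_vt

# 1 uA/um at VTsat, 1M transistors of W = 1 um, SS = 100 mV/dec
print(subthreshold_leakage(1e-6, 1e6, 0.5, 100))  # 1e-05 A -> 10 uA
print(subthreshold_leakage(1e-6, 1e6, 0.4, 100))  # 1e-04 A -> 100 uA, 10x worse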
I can give a quick qualitative explanation, and can probably add a quantitative one if there is a request and I have time. Qualitatively, you can think of an energy barrier between the source and drain regions when the gate is turned off, which prevents carriers from flowing across the channel. Applying a drain potential reduces the energy level at the drain, but the barrier still exists because the gate is off. When you turn on VG, the channel barrier is lowered and carriers flow, giving you current.
As the technology node is scaled, the distance between the source and drain shrinks, and the drain now influences the channel through the drain/channel depletion region when VDS > 0 is applied, even with the gate turned off. This lowers the barrier even though VG = 0, allowing some carriers from the source to travel to the drain. That is your leakage current. It increases with node scaling because Lg scaling brings S and D closer together and the gate no longer has sufficient control over the channel. This is one of the reasons you see a worse subthreshold slope and DIBL (drain-induced barrier lowering) at scaled nodes. And it is the primary reason the industry is moving towards FinFETs, where the gate wraps the channel on three sides (two sides + top; Intel calls this Tri-Gate, but it is essentially the FinFET first demonstrated in the 1990s at UC Berkeley). With more gate control over the channel, gate length can keep scaling without leakage getting worse.
Hope this helps; it's better illustrated with band diagrams or energy-level charts, and any device physics textbook will give you the equations to support this. I explained source-to-drain leakage; there are other leakage components, such as leakage through the body, gate leakage (mitigated by high-k, as explained in an earlier post), and GIDL/band-to-band tunneling.
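To put rough numbers on the DIBL effect described above, a minimal sketch using a first-order linear DIBL model and the same one-decade-per-SS relation as the estimate earlier in the thread (the 100 mV/V DIBL coefficient and the other values are illustrative assumptions, not data for any particular node):

def vt_with_dibl(vt0, vds, dibl_v_per_v):
    """Effective threshold under drain bias: VT(VDS) ~= VT0 - dibl * VDS."""
    return vt0 - dibl_v_per_v * vds

def off_current(i_at_vt, vt, ss_mv_per_dec):
    """Off-state (VG = 0) current: one decade per SS of threshold below zero gate drive."""
    return i_at_vt / 10.0 ** (vt * 1000.0 / ss_mv_per_dec)

VT0, SS, I_AT_VT = 0.5, 100.0, 1e-6  # 0.5 V nominal VT, 100 mV/dec, 1 uA at VT

for vds in (0.05, 1.0):  # low vs. high drain bias, 100 mV/V DIBL
    vt = vt_with_dibl(VT0, vds, 0.1)
    print(f"VDS={vds} V -> VT={vt:.3f} V, Ioff={off_current(I_AT_VT, vt, SS):.2e} A")

Running this shows the off current rising by nearly an order of magnitude between low and high drain bias, purely from the barrier lowering the post describes.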