Welcome to EDAboard.com

How to define supply voltage level regarding to IR drop?

Status
Not open for further replies.

yw21century

Newbie level 3
Joined Apr 16, 2017
Hi, guys

We are designing an ASIC chip in TSMC 40 nm technology. The nominal supply voltage is 1.1 V (VDD) and the maximum clock frequency is 500 MHz. We define the static IR-drop criterion as no more than VDD*5% (including VDD and VSS) and the dynamic IR-drop criterion as no more than VDD*15% (including VDD and VSS).

Now we are defining the supply voltage level of our chip. Which criterion should we use to calculate the minimum supply voltage level so that the junction voltage is not less than the foundry requirement, say, VDD*(1-10%)? I mean, should I use VDD*(1-10%)+VDD*5% or VDD*(1-10%)+VDD*15% as the minimum supply voltage of our chip?
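For reference, the two options in the question work out numerically as follows. This sketch just encodes the arithmetic as stated; which budget is the right one to use depends on what your timing models assume, as the replies below discuss.

```python
# Numbers taken from the question: VDD = 1.1 V, foundry floor of
# VDD*(1-10%), static budget 5% of VDD, dynamic budget 15% of VDD.
VDD = 1.1                   # nominal core supply (V)
foundry_min = 0.90 * VDD    # junction voltage floor: VDD*(1-10%)

# Option A: add back only the static IR-drop allowance (5% of VDD)
vmin_static = foundry_min + 0.05 * VDD
# Option B: add back the dynamic IR-drop allowance (15% of VDD)
vmin_dynamic = foundry_min + 0.15 * VDD

print(f"static-budget  minimum supply: {vmin_static:.3f} V")   # 1.045 V
print(f"dynamic-budget minimum supply: {vmin_dynamic:.3f} V")  # 1.155 V
```

Note that the dynamic-budget answer (1.155 V) lands above the 1.1 V nominal, which is exactly why the budget choice matters.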

I would appreciate it if you could give some elaboration!

Bruce
 

The minimum supply voltage can easily be as low as ~0.5 V if you can tolerate the slowdown. It is not the IR drop that will kill you; it is the transistors that will stop switching reliably once you get too close to Vt.
 

There's "layers" to this "onion".

As noted, supply droop is not going to be a
device-reliability impactor, but it might be a
problem for reliable operation due to a "timing miss".

If you are on the hook for timing closure then
you must meet the "assumptions" built into the
timing models. Supply tolerance is certainly in
there, and newer technologies tend to declare
tighter voltage tolerances. "Back in the day",
5 V +/-10% on HCS, ACS logic gave way to +/-5%
tolerances as people started trying to get 0.6 um
CMOS to stand up (and many failed to get that
for extended temp range, or succeeded only by
adding process complexity: LDD, halo, etc.).
A -15% tolerance at your end probably puts
you afoul of foundry-blessed timing models.
At the least you ought to recharacterize and
rerun timing analysis for what you claim your
core droop will be.

Core supply droop is going to be logic-pattern-
variable. Transient droop from a best-case-idle
to worst-case-thrash would have both inductive
and resistive components to the supply deflection,
including off-chip elements like bond wire and
package (if any) inductance. You do not want to
depend on anything "statistical" or "averaged"
from a test pattern's DIDD; you need to know what
is the worst, and that the logic still hangs together.
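A first-order sketch of the two droop components described above, the resistive IR term plus the inductive L*di/dt term from an idle-to-thrash current step. All component values here are illustrative assumptions, not numbers from this thread or from TSMC 40 nm.

```python
# Illustrative-only values (assumed): supply-path R, bond wire +
# package L, and an idle-to-thrash current step over a short ramp.
R_supply = 0.05      # total supply-path resistance, ohms (assumed)
L_supply = 1e-9      # bond wire + package inductance, H (assumed)
I_idle   = 0.5       # best-case-idle current, A (assumed)
I_thrash = 1.0       # worst-case-thrash current, A (assumed)
t_step   = 2e-9      # current ramp time, s (assumed)

dI = I_thrash - I_idle
v_resistive = I_thrash * R_supply        # steady IR component
v_inductive = L_supply * dI / t_step     # L*di/dt transient component

print(f"resistive droop: {v_resistive*1e3:.0f} mV")   # 50 mV
print(f"inductive droop: {v_inductive*1e3:.0f} mV")   # 250 mV
```

Even with these mild assumptions the inductive term dominates, which is why the worst-case pattern (not an averaged DIDD) is what matters, and why on-chip decoupling and package choice show up in the budget.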
 

If you have authority over placement, you might make
an effort to locate the more delay-critical blocks near
the chip periphery where R is lowest.

Enhancing the bussing is another thing you could do,
sometimes sacrificing routing lanes for supply robustness
will be worthwhile. If the chip has only periphery pads
you could improve it for not much cost by a post-passivation
copper redistribution layer and put some pads in the core
for it to hit from above. A chip with area array pads ought
to have no real I*R problems if the vdd, vss pads are
distributed sensibly.

I expect there are power tools that would try to show
you bus voltage drop and current density, but I do not
use any - still sizing busses in Excel, since I deal in amps
and hand lay out power stages.
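The spreadsheet-style bus sizing mentioned above boils down to sheet resistance times squares, plus an electromigration current-density check. A minimal sketch, with all values (sheet resistance, EM limit, currents, geometry) assumed for illustration rather than taken from any real process:

```python
# Rough hand sizing of one supply-bus segment, the kind of arithmetic
# otherwise done in a spreadsheet. All numbers are assumptions.
rho_sheet = 0.07     # metal sheet resistance, ohms/square (assumed)
j_max     = 1.0e-3   # EM current-density limit, A per um width (assumed)
I_bus     = 0.2      # current carried by this segment, A (assumed)
length_um = 500.0    # segment length, um (assumed)
width_um  = 250.0    # chosen bus width, um (assumed)

squares = length_um / width_um           # resistance scales with squares
r_bus   = rho_sheet * squares            # segment resistance
v_drop  = I_bus * r_bus                  # IR drop across the segment
ok_em   = I_bus <= j_max * width_um      # electromigration check

print(f"R = {r_bus*1e3:.0f} mohm, drop = {v_drop*1e3:.1f} mV, EM ok: {ok_em}")
# R = 140 mohm, drop = 28.0 mV, EM ok: True
```

Widening the bus helps twice here: fewer squares (less IR drop) and more width for the same current (more EM margin), which is the trade against routing lanes the post describes.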
 
