Confidence level in Monte Carlo simulation


Junus2012

Advanced Member level 5
Advanced Member level 5
Joined
Jan 9, 2012
Messages
1,552
Helped
47
Reputation
98
Reaction score
53
Trophy points
1,328
Location
Italy
Activity points
15,235
Hello,

What does the confidence level mean in Monte Carlo simulation?
Is it better to set it to some value?

Thanks
 

Confidence level is a statistical term: it is the probability that the interval your model reports around an estimate actually contains the true value. Generally, the confidence level you select, together with the precision you want, determines the sample size required in your model.
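
To make that relationship concrete, here is a minimal sketch in Python (my illustration, not from this thread; the function name and the +/-2% margin are my own choices) of how the chosen confidence level drives the required sample count when estimating a yield fraction, assuming the usual normal approximation to the binomial:

Code:
from scipy.stats import norm

def required_samples(confidence, margin, p=0.5):
    """Samples needed to estimate a yield p to within +/-margin
    at the given confidence level; p=0.5 is the worst (widest) case."""
    z = norm.ppf(0.5 + confidence / 2.0)   # two-sided z-score
    return int(round(z**2 * p * (1 - p) / margin**2))

for conf in (0.90, 0.95, 0.99):
    print(f"{conf:.0%} confidence, +/-2% margin: "
          f"{required_samples(conf, 0.02)} samples")

# Higher confidence -> larger z -> more samples for the same margin:
# roughly 1700 samples at 90% vs. 4100 at 99%.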
 

Based on my experience, confidence in "confidence" is low. I always
saw the modeling weasels sandbag (on somebody's orders) the stats
pessimistically. Fab gets to ship 3 sigma, so make the model stats'
"3 sigma" really 4. Like that.

If you are not doing volume production then you only care about
the outliers and what they may show about design centering and
sensitivity. But any outlier (again in my experience) needs to be
"challenged" if you don't want to get into gross overdesign /
underspec.

I once had to go as far as, for every failing iteration, referring
back to a simulation of WAT devices & test conditions and explaining
that "this particular failure was for an NPN beta that would fail
WAT" or whatever. Because the models were statistical trash for
sigma, and stacking up multiple detail-param corner cases sometimes
just puts you into unfitted unrealism.
 

Dear friends,

Thanks for the reply and explanation,

So how does the confidence level that you select determine the sample size required in the model?
Can you please explain that in steps? It would be really helpful.

I have tried setting the confidence level to 99% with the initial sample size. I found that the difference between the minimum and maximum standard deviation was large, resulting in a notable difference between the predicted minimum and maximum yield. As I increased the number of samples, the difference became smaller and smaller. So where should I stop and say that the number of samples is enough?
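
The narrowing you describe can be sketched numerically. Below is a small Python illustration (my own sketch, not tool output; the 90% underlying yield, the 99% confidence level, and the Wilson score interval are assumptions) of how the predicted minimum and maximum yield converge as samples are added. It suggests one possible stopping rule: stop when the interval width falls below the margin you need for a decision.

Code:
import math
from scipy.stats import norm

def wilson_interval(passes, n, confidence=0.99):
    """Wilson score interval for a pass/fail yield estimate."""
    z = norm.ppf(0.5 + confidence / 2.0)
    p = passes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

p_true = 0.90                        # illustrative underlying yield
for n in (50, 200, 1000, 5000):
    lo, hi = wilson_interval(int(p_true * n), n)
    print(f"N={n:5d}: yield in [{lo:.3f}, {hi:.3f}], width {hi - lo:.3f}")

# The interval width shrinks roughly as 1/sqrt(N); stop once it is
# smaller than the yield margin that actually matters to you.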
 

"Enough" is a judgment call and involves a lot of externalities like how
much yielded die cost you can stand in the speculative marketing plan.
You might inquire of others closer to the action, what kind of probe
yield they have found to be "OK" or "Ouch!". Of course everybody
wants to see 99.9% (because 100% we all know is unreasonable).

Nowadays people want to see super high probe yield so they can
ship wafers based on a couple of touchdowns and wave their hands
at the statistics. Add to this any challenges about probe-level test
coverage / accuracy / development cost that make a "low touch" or
"no touch" wafer acceptance look real appealing.

But I work on stuff whose customers aren't so forgiving of test
escapes or amenable to hand-waving arguments about
"confidence" when they wanted observability into -that die,
right there- being for sure good or not.

So you may have a built-in conflict between "commercial best
practices" and "doing it right" (best practices being quite related to
conditions and requirements).
 
