Confidence level is a statistical term that defines the probability that the outcome your model predicts is actually true. Generally, the confidence level that you select will determine the sample size required for your model.
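For example, here is a minimal Python sketch of the standard normal-approximation sample-size formula for estimating a proportion such as yield (the 2% margin of error and the confidence levels below are illustrative assumptions, not values from any particular tool or design):

from math import ceil
from statistics import NormalDist

def required_samples(confidence: float, margin: float, p: float = 0.5) -> int:
    # n = z^2 * p * (1 - p) / E^2 : samples needed to estimate a proportion
    # (e.g. yield) to within +/- margin at the given two-sided confidence level.
    z = NormalDist().inv_cdf(1.0 - (1.0 - confidence) / 2.0)
    return ceil(z * z * p * (1.0 - p) / (margin * margin))

# A tighter confidence level gives a larger z-score, hence more samples.
for conf in (0.90, 0.95, 0.99):
    print(conf, required_samples(conf, margin=0.02))

At 95% confidence this works out to roughly 2,400 samples for a +/-2% margin, and about 4,150 at 99%, which is the basic sense in which the confidence level you pick drives the sample size you need.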
Based on my experience, confidence in "confidence" is low. I always
saw the modeling weasels sandbag (on somebody's orders) the stats
pessimistically. Fab gets to ship 3 sigma so make the model stats
"3 sigma" really 4. Like that.
If you are not doing volume production then you only care about
the outliers and what they may show about design centering and
sensitivity. But any outlier (again in my experience) needs to be
"challenged" if you don't want to get into gross overdesign /
underspec.
I once had to go as far as, for every failing iteration, refer back to
a simulation of WAT devices & test conditions, and explain that
"this particular failure was for a NPN beta that would fail WAT"
or whatever. Because the models were statistical trash for sigma,
and sometimes stacking up multiple detail-param corner cases
just puts you into unfitted unrealism.
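To put a rough number on that last point (generic normal-distribution figures, not anything taken from the post above), the chance that several independent parameters all sit at their 3-sigma corners at the same time collapses very quickly:

from statistics import NormalDist

p_single = 2.0 * (1.0 - NormalDist().cdf(3.0))   # P(|x| > 3 sigma), about 0.27%
for n_params in (1, 2, 3, 5):
    # probability that all n independent parameters are beyond 3 sigma at once
    print(n_params, p_single ** n_params)

With five independent parameters the joint probability is on the order of 1e-13, which is why a stacked worst-case across many detailed parameters can land far outside anything the fitted statistics actually support.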
So how does the confidence level that you select determine the sample size required in your model?
Can you please explain that in steps? It would be really helpful.
I have tried setting the confidence level to 99% with the initial sample size. I found that the difference between the minimum and maximum standard deviation is large, resulting in a notable difference between the predicted minimum and maximum yield. I increased the number of samples and the difference gets smaller and smaller as I increase it. So where should I stop and say that the number of samples is enough?
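One common way to make that stopping point concrete (a sketch of an assumed approach, not a feature of any particular simulator) is to keep adding samples until the confidence interval on the estimated yield is narrower than a tolerance you can live with:

import random
from math import sqrt
from statistics import NormalDist

random.seed(0)
CONF, TOL = 0.99, 0.01                     # 99% confidence, +/-1% yield half-width
z = NormalDist().inv_cdf(1.0 - (1.0 - CONF) / 2.0)

def sample_passes() -> bool:
    # stand-in for one Monte Carlo run of the circuit; assume a true 97% pass rate
    return random.random() < 0.97

n = passes = 0
half_width = 1.0
while half_width > TOL:
    passes += sample_passes()
    n += 1
    p = passes / n
    half_width = z * sqrt(p * (1.0 - p) / n) if n > 30 else 1.0

print(f"stopped at n={n}, yield ~ {p:.3f} +/- {half_width:.3f} at {CONF:.0%} confidence")

When the half-width stops mattering for the decision you have to make (say, a 1% swing in predicted yield no longer changes the go/no-go call), that is a defensible place to stop adding samples.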
"Enough" is a judgment call and involves a lot of externalities like how
much yielded die cost you can stand in the speculative marketing plan.
You might inquire of others closer to the action, what kind of probe
yield they have found to be "OK" or "Ouch!". Of course everybody
wants to see 99.9% (because 100% we all know is unreasonable).
Nowadays people want to see super high probe yield so they can
ship wafers based on a couple of touchdowns and wave their hands
at the statistics. Add to this any challenges about probe-level test
coverage / accuracy / development cost that make a "no touch"
wafer acceptance look real appealing.
But I work on stuff whose customers aren't so forgiving of test
escapes or amenable to hand-waving arguments about
"confidence" when they wanted observability into -that die,
right there- being for sure good or not.
So you may have a built-in conflict between "commercial best
practices" and "doing it right" (best practices being quite related to
conditions and requirements).