Based on my experience, confidence in "confidence" is low. I've always seen the modeling teams sandbag the statistics pessimistically (on somebody's orders): the fab only commits to shipping at 3 sigma, so the "3 sigma" in the model deck is really more like 4 true sigma. That sort of thing.
If you are not doing volume production, then you only care about the outliers and what they may show about design centering and sensitivity. But any outlier (again, in my experience) needs to be challenged if you don't want to end up in gross overdesign / underspec.
I once had to go as far as referring every failing iteration back to a simulation of the WAT devices and test conditions, and explaining that "this particular failure was for an NPN beta that would itself fail WAT" or whatever. Because the models were statistical trash on sigma, and sometimes stacking up corner cases on multiple detailed parameters just puts you into unfitted unrealism, a parameter combination the fab would never actually ship.
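The bookkeeping I'm describing can be sketched as below (the parameter names and limits are my own hypothetical placeholders, not any real fab's WAT spec): for each failing Monte Carlo iteration, check whether the sampled device parameters would themselves violate WAT limits. If they would, that material never ships, so the "failure" is model pessimism rather than real yield loss.

```python
# Hypothetical WAT limits: parameter -> (low, high). Placeholder values.
WAT_LIMITS = {
    "npn_beta": (80.0, 250.0),
    "poly_rsheet": (180.0, 260.0),  # ohm/sq
}

def wat_violations(params: dict) -> list:
    """Return the parameters in this MC sample that would fail WAT."""
    out = []
    for name, (lo, hi) in WAT_LIMITS.items():
        v = params.get(name)
        if v is not None and not (lo <= v <= hi):
            out.append(name)
    return out

def triage(failing_iterations: list) -> dict:
    """Split circuit failures into WAT-screened vs. genuinely concerning."""
    screened, real = [], []
    for it in failing_iterations:
        (screened if wat_violations(it["params"]) else real).append(it)
    return {"screened_by_wat": screened, "real": real}

# Example: iteration 17 fails on an NPN beta that would fail WAT anyway;
# iteration 40 fails with everything inside WAT limits, so it's worth a look.
fails = [
    {"iter": 17, "params": {"npn_beta": 62.0, "poly_rsheet": 210.0}},
    {"iter": 40, "params": {"npn_beta": 130.0, "poly_rsheet": 240.0}},
]
result = triage(fails)
print(len(result["screened_by_wat"]), "screened,", len(result["real"]), "real")
```

It's crude, but it turns "the Monte Carlo says we fail" into an argument about which failures the fab could actually deliver to you.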