Monte Carlo runs
I labored for years under a Monte-Carlo-centric regime
and I have a low opinion of the approach. For starters,
95% of the runs are worthless (in the sense that they are
not going to produce an outlier in any parameter of interest).
Yet your management will believe in this religion because
they want to believe, and you're stuck waiting out all those
worthless runs.
If you have the broader view, the time to spend, and a
setup which is deterministic pseudorandom (as many are), you
can build a body of knowledge in short order, telling which
seeds are repeatably "bad actors". Then you can run those
first, a much shorter loop which is kind of like a "corners"
approach, but not restricted to the limited cases a digital
foundry might propose.
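The seed-triage idea above can be sketched in a few lines. Everything here is hypothetical: `simulate` stands in for your deterministic pseudorandom simulator, and the spec limit is made up. The point is just that with repeatable seeds you can learn which ones are bad actors once, then front-load them.

```python
import random

def simulate(seed):
    """Stand-in for a deterministic pseudorandom MC run: same seed,
    same result every time (hypothetical; replace with your simulator)."""
    rng = random.Random(seed)
    return rng.gauss(0.0, 1.0)  # pretend this is the parameter of interest

def triage_seeds(seeds, spec_limit=2.5):
    """One broad pass to learn which seeds are repeatable 'bad actors',
    i.e. the ones that push the parameter past the spec limit."""
    bad, dull = [], []
    for s in seeds:
        (bad if abs(simulate(s)) > spec_limit else dull).append(s)
    return bad, dull

def prioritized_order(seeds, known_bad):
    """Run the known bad actors first (the corners-like short loop),
    then the rest only if there is wall-time left over."""
    bad_set = set(known_bad)
    return known_bad + [s for s in seeds if s not in bad_set]
```

Because the runs are deterministic, the triage pass only has to be paid for once per design revision; after that the short loop covers the cases most likely to fail.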
I have also found many times that MC statistics are sandbagged
to the point of being garbage, and you can get hit with sets
of params which would fail WAT (wafer acceptance test). When
I had the say-so, I developed my own scripts which would skip
iterations that were shown to be WAT-rejectable.
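A minimal sketch of that filtering script, with everything hypothetical: the parameter names, the WAT-style windows, and the sampler are all made up for illustration. The idea is simply to reject a drawn parameter set before any simulator time is spent on it, since a real wafer outside those windows would never ship.

```python
import random

# Hypothetical WAT-style limits on sampled process parameters
# (names and numbers invented for illustration).
WAT_LIMITS = {
    "vth_mv": (350.0, 550.0),   # threshold voltage window, mV
    "rsheet": (80.0, 120.0),    # sheet resistance window, ohm/sq
}

def sample_params(rng):
    """Stand-in for the MC tool's per-iteration parameter draw."""
    return {
        "vth_mv": rng.gauss(450.0, 60.0),
        "rsheet": rng.gauss(100.0, 12.0),
    }

def passes_wat(params):
    """True only if every parameter sits inside its WAT window."""
    return all(lo <= params[k] <= hi for k, (lo, hi) in WAT_LIMITS.items())

def filtered_iterations(n, seed=0):
    """Collect n WAT-plausible samples, skipping the rejectable draws
    before any simulation is run on them."""
    rng = random.Random(seed)
    kept = []
    while len(kept) < n:
        p = sample_params(rng)
        if passes_wat(p):
            kept.append(p)
    return kept
```

With sandbagged distributions the skip rate can be substantial, which is exactly the wall-time this complaint is about.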
Garbage in, waste 95% of your wall-time, garbage out. Yeah.
Of course somebody's CAD / modeling group -could- produce
a distribution- and detail-accurate statistics set, and your
design management -could- adopt more efficient approaches
to design criticism (such as I describe, or other). But MC
tools and approaches are sold specifically on the idea that
nobody has to think or be diligent - just push the button and
look at the histogram (much) later. It's thought, I guess,
to be a substitute for having designers who understand their
devices and their process.
I prefer not to work at places where this is thought to be a
good thing.