During PVT (process, voltage, temperature) analysis we commonly treat VDD as a corner: for example, if our nominal VDD = 3.3 V, we also test, say, VDD = 3.0 V and VDD = 3.6 V.
I want to ask about the purpose of this test: is it to emulate supply voltage variation due to coupled noise, or to emulate supply voltage variation due to imperfections of the supply itself?
This can be seen in a couple of ways.
Take, for example, an LDO design.
We would sweep the supply from its maximum to its minimum during testing to ensure that the LDO regulates throughout its specified range.
This sweep can be a slow one where we check only the DC effects; for example, the supply comes from a battery that is draining.
This would be like a DC line regulation test.
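To make the post-processing concrete, here is a minimal Python sketch of how you might extract line regulation from such a slow sweep; the sweep data and numbers below are made up, not from any real simulation:

```python
# Hypothetical post-processing of a slow DC supply sweep (line regulation).
# vin/vout could come from any simulator's sweep output.
import numpy as np

vin = np.linspace(3.0, 3.6, 61)            # swept supply, 3.0 V to 3.6 V
vout = 1.8 + 0.0005 * (vin - 3.3)          # made-up LDO output data

# DC line regulation: change in Vout per unit change in Vin over the sweep.
line_reg = (vout[-1] - vout[0]) / (vin[-1] - vin[0])
print(f"Line regulation: {line_reg * 1e3:.3f} mV/V")   # 0.500 mV/V here
```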
But it can also be an AC signal; for example, the supply comes from a DC-DC converter or something similar with a very specific frequency spectrum.
This would be like a PSR/PSRR simulation.
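As a sketch of how that result is usually expressed: PSRR at a given frequency is the dB ratio of the ripple injected on the supply to the ripple that survives at the output. The amplitudes below are assumptions, purely for illustration:

```python
# Hypothetical PSRR calculation from AC ripple amplitudes at one frequency.
import math

v_ripple_supply = 100e-3   # 100 mV tone injected on the supply (assumed)
v_ripple_output = 1e-3     # 1 mV of that tone seen at the output (assumed)

# PSRR in dB: how strongly the supply ripple is attenuated at the output.
psrr_db = 20 * math.log10(v_ripple_supply / v_ripple_output)
print(f"PSRR at this frequency: {psrr_db:.1f} dB")   # 40.0 dB here
```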
Another thing we could consider is a supply step, i.e. the response at the output to a step in the supply (a step up from 3.3 V to 3.6 V, or a step down from 3.3 V to 3.0 V).
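If it helps, here is a toy first-order picture of that step test; the coupling fraction and recovery time constant are assumptions, not measurements of any real LDO:

```python
# Toy first-order model of an LDO output's response to a supply step
# (illustrative only; not a substitute for a transient simulation).
import numpy as np

t = np.linspace(0, 50e-6, 501)   # 50 us observation window
v_step = 0.3                     # supply step: 3.3 V -> 3.6 V
coupling = 0.01                  # assume 1% of the step couples to the output
tau = 5e-6                       # assumed loop recovery time constant

# Output deviation: an initial kick that the regulation loop settles out.
dvout = coupling * v_step * np.exp(-t / tau)
print(f"Peak output disturbance: {dvout.max() * 1e3:.1f} mV")   # 3.0 mV here
```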
The specified range would be different for each of these, and the circuit's response would be different for each of these supply tests.
So, in effect, we might be testing for both: supply voltage variation due to coupled noise and supply voltage variation due to imperfections of the supply itself.
Your product will advertise a supply range within which specs are guaranteed (or at least asserted) to hold.
For confidence against all kinds of field returns and field finger-pointing, your test regime should encompass that range, with margin.
Customers can source supplies that deliver just the 5% over-life-and-whatever tolerance (they feed off each other). You don't want to test right to the edge; you need enough guardband on your test limits to cover all of that uncertainty.
10% might be excessive as padding, but it is a safe starting point. It will be harder to claw back test margin later, against any yield impact.
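To put rough numbers on that guardband idea, a small sketch (all percentages here are illustrative assumptions, not a standard):

```python
# Illustrative guardband arithmetic for supply test limits (numbers assumed).
v_nom = 3.3            # nominal supply
supply_tol = 0.05      # customer's supply tolerance over life: +/-5%
guardband = 0.10       # test padding beyond that: +/-10% as a starting point

spec_lo, spec_hi = v_nom * (1 - supply_tol), v_nom * (1 + supply_tol)
test_lo, test_hi = v_nom * (1 - guardband), v_nom * (1 + guardband)

print(f"Customer supply window: {spec_lo:.3f} V to {spec_hi:.3f} V")  # 3.135-3.465 V
print(f"Guardbanded test range: {test_lo:.3f} V to {test_hi:.3f} V")  # 2.970-3.630 V
```

The test range deliberately brackets the advertised window, so anything that passes on the tester still has headroom against supply uncertainty in the field.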