ATPG patterns are applied in the following order:
test_setup -> shift -> capture -> shift -> capture ...
test_setup is for chip-level initialization, not for stabilizing scan shift values. Furthermore, test_setup is executed only once; it is not repeated before subsequent shifts (to which the first scan-in data still applies).
This thread is about setting input/output delay constraints, but it has drifted off topic.
Assume the following waveforms for a 50MHz clock (Tper = 20ns):
Scan input: 0/1 { '0ns' D; }
Scan clock: P { '0ns' D; '5ns' U; '15ns' D; }
Let the internal clock latency be 1ns.
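As a minimal SDC sketch of the clock described above (the port name scan_clk is my assumption, not from the post): a 20ns period with the rising edge at 5ns and falling edge at 15ns, matching the waveform, plus 1ns of network latency:

create_clock -name scan_clk -period 20 -waveform {5 15} [get_ports scan_clk]
set_clock_latency 1 [get_clocks scan_clk]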
Case #1:
Data delay from scan input pad to first scan FF is 2ns
Total data delay = 2ns
Total clock delay = 5ns (first clock edge) + 1ns = 6ns
Total data delay < total clock delay, so setup is met (Tsu omitted for simplicity)
Case #2:
Data delay from scan input pad to first scan FF is 7ns
Total data delay = 7ns
Total clock delay = 5ns (first clock edge) + 1ns = 6ns
Total data delay > total clock delay, so setup is violated (Tsu omitted for simplicity)
Coming back to the SID/SOD setup: the PT STA process uses them in the following way:
SID + data delay < Tper + clock delay
To reflect the actual scenario, set SID = Tper - first clock edge = 20 - 5 = 15ns
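In SDC terms that translates to something like the following (again, the port name scan_in and clock name scan_clk are assumptions for illustration):

set_input_delay 15 -clock scan_clk [get_ports scan_in]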
Revisiting the case study:
Case #1:
Data delay from scan input pad to first scan FF is 2ns
SID (15ns) + data delay (2ns) = 17ns < Tper (20ns) + clock delay (1ns) = 21ns, so setup is met
Case #2:
Data delay from scan input pad to first scan FF is 7ns
SID (15ns) + data delay (7ns) = 22ns > Tper (20ns) + clock delay (1ns) = 21ns, so setup is violated
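For completeness, here is a small tclsh sketch (plain Tcl, not PT commands) that encodes the same check, using only the numbers from this post:

proc setup_met {sid data_delay tper clock_delay} {
    # setup is met when SID + data delay < Tper + clock delay
    expr {$sid + $data_delay < $tper + $clock_delay}
}
puts [setup_met 15 2 20 1]  ;# Case #1: prints 1 (met)
puts [setup_met 15 7 20 1]  ;# Case #2: prints 0 (violated)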
I believe that is the right way, rather than hoping for test_setup or simulation. However, I do agree that timing simulation is important for catching setup/hold issues.