Pardon if this is getting away from the main question, but I'm curious if anyone has any experience with this ...
Some area-critical design requirements discourage resettable flops everywhere, since most of those flops don't truly need to be reset in *principle* (and in reality).
(or SEE requirements may discourage async-resets, but many libs don't provide a true sync-reset flop)
However, in gate-level simulation *practice*, aggressive synthesis optimizations, especially around complementary-output flops, can produce logic clouds that will never resolve when the flop outputs are X, and not only on reset paths.
(DC-Ultra seems to be very aggressive about this, even with hdlin_ff_always_sync_set_reset set to true).
The result is an inability to get parts of a design out of X during reset in gate-level sims.
Built-in time-zero random-initialization simulator features often won't help if, e.g., the clock-network leaves are still X at time zero (say, due to a PLL model or SDF-annotated delays).
In that case, some kind of simulator-specific script run after time zero (once the clocks have settled), or a custom PLI-function approach, may be the only practical solution.
(we resorted to the latter, even re-calling the PLI in the middle of a sim when the PLL got restarted)
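For flavor, here is a rough sketch of what a PLI/VPI task along those lines can look like. This is NOT our actual code: the task name ($init_x_regs), the flat walk over vpiReg objects, and the use of rand() are all illustrative assumptions, and a real version would need to handle memories, generate scopes, and reproducible seeding. It compiles only against a simulator's vpi_user.h and must be registered via the usual vlog_startup_routines[] hook, then invoked from the testbench after clock settling (and again after a PLL restart, as we had to do).

```c
/* Sketch of a VPI task that deposits random values on any reg still at X.
 * Hypothetical example -- names and scope-walking strategy are simplified. */
#include <stdlib.h>
#include <string.h>
#include "vpi_user.h"   /* IEEE 1364/1800 VPI header shipped with the simulator */

static void randomize_x_regs(vpiHandle scope)
{
    vpiHandle iter, h;
    s_vpi_value val;

    /* Replace x/z bits of each reg in this scope with random 0/1 bits. */
    iter = vpi_iterate(vpiReg, scope);
    while (iter && (h = vpi_scan(iter)) != NULL) {
        val.format = vpiBinStrValue;
        vpi_get_value(h, &val);
        if (strpbrk(val.value.str, "xXzZ")) {
            char *bits = strdup(val.value.str); /* don't scribble on the sim's buffer */
            for (char *p = bits; *p; p++)
                if (*p == 'x' || *p == 'X' || *p == 'z' || *p == 'Z')
                    *p = (rand() & 1) ? '1' : '0';
            val.value.str = bits;
            vpi_put_value(h, &val, NULL, vpiNoDelay); /* behaves like a $deposit */
            free(bits);
        }
    }

    /* Recurse into child module instances (a real version would also walk
     * named blocks / generate scopes via vpiInternalScope). */
    iter = vpi_iterate(vpiModule, scope);
    while (iter && (h = vpi_scan(iter)) != NULL)
        randomize_x_regs(h);
}

static PLI_INT32 init_x_calltf(PLI_BYTE8 *user_data)
{
    (void)user_data;
    vpiHandle iter = vpi_iterate(vpiModule, NULL); /* NULL => top-level modules */
    vpiHandle top;
    while (iter && (top = vpi_scan(iter)) != NULL)
        randomize_x_regs(top);
    return 0;
}

/* Register the task as $init_x_regs; list this function in
 * vlog_startup_routines[] so the simulator calls it at load time. */
void init_x_register(void)
{
    s_vpi_systf_data tf;
    memset(&tf, 0, sizeof(tf));
    tf.type   = vpiSysTask;
    tf.tfname = "$init_x_regs";
    tf.calltf = init_x_calltf;
    vpi_register_systf(&tf);
}
```

The vpiNoDelay put is a deposit rather than a force, so normal simulation activity can immediately overwrite the value, which is what you want after the clocks are live.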
And even then, be careful not to be fooled into thinking you've been helped: the random init values can coincidentally hide a true bug!
Two-value (0/1) logic simulation also seems dangerous, since it can hide useful timing-violation behaviors.
I think I once read a discussion somewhere about enhancing simulators to support a complementary "not-X" value in addition to X, to get around this problem.
(maybe it was at the end of item 5 of this ESNUG:
https://www.deepchip.com/posts/0246.html )
I'd also love a pointer to a tech article that gives a thorough, practical treatment of all of this.