I am trying to simulate MOSFET capacitance in Cadence using transient analysis. The simplest form of the circuit and waveform is shown in the image.
To extract the capacitance value, I am using the following formulas:
1) C = integ(I) / (V2 - V1)
2) C = avg(I) / { (V2 - V1) * freq }
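As a sanity check, formula (1) can be sketched numerically for the ideal-capacitor case: an assumed 1 pF capacitor driven by a linear voltage ramp, where I = C·dV/dt, so integrating the current and dividing by the voltage swing should recover C exactly. All values here (C_true, V1, V2, the ramp duration) are illustrative, not from the actual testbench:

```python
import numpy as np

# Hypothetical setup: ideal 1 pF capacitor, linear voltage ramp V1 -> V2
C_true = 1e-12          # 1 pF (assumed value, for illustration)
V1, V2 = 0.0, 1.0       # ramp endpoints in volts
T = 10e-9               # ramp duration, ~10 ns as in the testbench

t = np.linspace(0.0, T, 1001)
v = V1 + (V2 - V1) * t / T          # linear voltage ramp
i = C_true * np.gradient(v, t)      # capacitor current I = C * dV/dt

# Formula (1): C = integ(I) / (V2 - V1)
C_extracted = np.trapz(i, t) / (V2 - V1)
print(C_extracted)  # recovers ~1e-12 for a linear (voltage-independent) C
```

For a linear capacitor this extraction is exact regardless of the size of V2 - V1, which matches the analogLib result.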
Everything works fine with the ideal capacitor (from analogLib): I get accurate values for the plugged-in capacitance. However, as soon as I connect a MOSCAP, the extracted capacitance is low (around 40% lower). This inaccuracy also depends on the difference between V1 and V2: a difference of 10 mV to 50 mV gives me a capacitance quite close to the expected value, but the larger the voltage difference, the lower the extracted capacitance.
Can someone please explain where I am going wrong? In principle, the extracted capacitance should not depend on the voltage difference, since the formulas already account for it, should it?
Maybe my Cadence simulation settings are wrong? I am using the default Spectre simulation settings. The period is about 10 ns, and I make sure that it is long enough to fully charge the capacitance under test.
Thanks in advance