
Plotting noise figure (NF) versus nFET current density in Cadence (current on log scale)

giorgi3092

Hi, I am cross-posting this from Reddit because I did not get a full answer there.

I am starting to design a multistage differential LNA in a CMOS process. Textbooks (e.g. Voinigescu) suggest that the first stage must always have the lowest NF, based on Friis' equation for cascaded amplifier blocks. I get that.
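For reference, the cascade relation in question (Friis' formula, written in linear noise factors F and available power gains G) is:

```latex
F_{\text{total}} = F_1 + \frac{F_2 - 1}{G_1} + \frac{F_3 - 1}{G_1 G_2} + \cdots
```

so with enough first-stage gain G1, the contributions of the later stages are suppressed and F1 dominates.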

He suggests starting by selecting the current density (mA/um) that minimizes the noise figure; if the current density is held at that value, the noise figure stays nearly constant (in CMOS, almost irrespective of frequency). He also states that for CMOS processes the optimum value is nearly constant at about 0.15 mA/um, down to 65 nm. Mine is a 22 nm process, where the optimum is expected to be somewhat higher, up to 0.3 mA/um.

The issue I have is: how do I generate those nice plots of NF versus current density in Cadence? I see these figures in almost all LNA papers, but nobody shows the test fixture for this simulation. I am referring to plots like this:
[example plot: NF vs. current density]



My idea is to place a single transistor on the schematic, hook up bias tees at the gate and drain with the proper drain voltage (0.9 V maximum in my case), and connect 50-ohm ports at the input and output. I fix the total gate width to, say, 10 um, and vary the gate voltage from 0.2 to 0.7 V.

Then, I do S-parameter simulation at my design frequency (with noise enabled), choosing gate voltage as the sweep variable from 0.2 to 0.7 volts.

This way, I get NF versus Vgs, instead of NF versus current density. How do I plot NF versus current density? Should I change my test fixture?

Also, since I want the drain current to be varied logarithmically, it seems the sweep variable must be the drain current instead of the gate voltage; but if I set the drain current with a current source, how do I establish the appropriate gate voltage?

Has anyone done this before and can enlighten me?

Thank you.



EDIT:
I will also include my setup:
[screenshot: testbench schematic]

S-parameter simulation setup:
[screenshot: S-parameter analysis settings]


Does this look okay?
 

Bias the MOS transistor with an appropriate VGS voltage (it must be higher than VGS(th)), drive the transistor through the drain with a constant current source, then step this current and plot NFmin vs. IDS.
 
You will need to add a DC operating point (DC OP) analysis to your simulation. This will let you extract the current flowing through the drain of your transistor at each Vgs step.

With the DC OP analysis, you can create a parametric sweep of Vgs and at each step extract both the NF (from S-parameter analysis) and Id (from DC OP analysis).

Then post-process the data to calculate and plot NF versus current density (Id/W) over the sweep.

Here's a more detailed step-by-step for your simulation setup:

A. Adjust your setup to include both S-parameter analysis with noise and a DC operating point analysis.
B. Perform the sweep over Vgs and record both NF and IDS for each Vgs.
C. Post-process the data to calculate Id/W for each step, where W is the total gate width of your transistor.
D. Plot NF versus Id/W using a data visualization tool of your choice (e.g., MATLAB, Python with Matplotlib, or even the built-in plotting tools in Cadence if available).
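As a sketch of steps C and D, assuming the sweep results have been exported from ADE as plain arrays (the numbers below are made-up placeholders, not real simulation data):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Hypothetical results of the combined S-parameter + DC OP sweep over Vgs.
W_um = 10.0                                         # total gate width (um)
vgs   = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7])    # swept gate voltage (V)
id_mA = np.array([0.05, 0.3, 1.0, 2.2, 3.8, 5.5])   # drain current from DC OP (mA)
nf_dB = np.array([1.9, 0.9, 0.7, 0.8, 1.0, 1.3])    # NF from S-param noise analysis (dB)

# Step C: current density J = Id / W in mA/um.
J = id_mA / W_um

# Step D: NF vs. J, with current density on a log axis as in the papers.
fig, ax = plt.subplots()
ax.semilogx(J, nf_dB, marker="o")
ax.set_xlabel("Current density $I_D/W$ (mA/um)")
ax.set_ylabel("NF (dB)")
ax.grid(True, which="both")
fig.savefig("nf_vs_current_density.png")
```

The optimum current density is simply the J value at the minimum of the NF curve.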

As for the concern about varying the drain current logarithmically:

1. You can indeed sweep Id directly by using a current source at the drain and stepping its value.
2. For each Id value, you would adjust Vgs to maintain the desired drain current. This could be done by employing a feedback control mechanism in your test setup or via an iterative simulation process where you manually tune Vgs until the desired Id is achieved for each step.
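The "adjust Vgs to hit each target Id" step is one-dimensional root finding on a monotonic curve. A minimal sketch, using a toy square-law model as a stand-in for the simulator (in practice each call to `id_of_vgs` would be one DC OP run, and the parameters here are invented for illustration):

```python
import numpy as np

def id_of_vgs(vgs, k=20.0, vth=0.35):
    """Toy square-law device model (returns Id in mA); stands in for a DC OP simulation."""
    vov = vgs - vth
    return k * vov**2 if vov > 0 else 0.0

def find_vgs_for_id(id_target_mA, lo=0.0, hi=1.0, tol=1e-9):
    """Bisection: find Vgs such that Id(Vgs) equals the target drain current."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if id_of_vgs(mid) < id_target_mA:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Log-spaced target drain currents, e.g. 0.01 mA to 1 mA in 7 points.
targets_mA = np.logspace(-2, 0, 7)
vgs_points = [find_vgs_for_id(t) for t in targets_mA]
```

Each resulting Vgs point then becomes one bias setting for the S-parameter/noise run.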

Remember, the goal is to capture the relationship between NF and the bias conditions of your transistor, and there can be more than one way to do it.
If your simulator allows it, you might find a more automated way to do the sweep and data extraction.
--- Updated ---

A few more things to check:
- Make sure the transistor isn't slipping into the triode region at any point in the sweep.
- Verify that the biasing network does not contribute significantly to the noise figure.
- Check the frequency dependence of NF and its variation with temperature.
- Verify input and output matching, which affect NF.
- Verify overall gain and stability, which can influence the perceived noise figure through mismatch losses.
- Simulate across process corners (slow, typical, fast).
- Optimize the trade-off between NF and power efficiency via bias current and supply voltage.
- Validate your simulation results against a simple analytical model or empirical results.
- For pretty plots, use a script to automate data extraction and plotting, e.g. Python with the Pandas and Matplotlib libraries.
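For the corner-sweep bookkeeping, a small Pandas script can pull the optimum current density out of each corner automatically. A sketch, with an inline CSV standing in for whatever your ADE export actually produces (the column names and values here are illustrative, not a Cadence format):

```python
import io
import pandas as pd

# Stand-in for a CSV exported from the simulator (made-up columns and data).
csv_text = """corner,vgs,id_mA,nf_dB
tt,0.3,0.5,0.95
tt,0.4,1.5,0.70
tt,0.5,3.0,0.80
ss,0.3,0.3,1.10
ss,0.4,1.0,0.85
ss,0.5,2.2,0.90
"""

W_UM = 10.0  # total gate width used in the testbench (um)

df = pd.read_csv(io.StringIO(csv_text))
df["j_mA_per_um"] = df["id_mA"] / W_UM  # current density per sweep point

# Optimum bias per process corner: the row with the minimum NF.
best = df.loc[df.groupby("corner")["nf_dB"].idxmin()]
print(best[["corner", "j_mA_per_um", "nf_dB"]])
```

The same dataframe feeds straight into Matplotlib for the overlaid NF-vs-density curves per corner.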
 
