minimum channel length


Junus2012
Hello,

I use a 0.35 µm technology, but in analog design I usually fix the channel length to 1 µm for better performance.

Now we want to move to 90 nm, and I see many people fix the length to 200 nm.

Now if 200 nm is OK for a 90 nm technology, why then do we use 0.7 µm or 1 µm in the 0.35 µm technology? Why not the minimum of 0.35 µm, which is still bigger than 200 nm?

Thanks

Regards
 

Other people can make up whatever rules of thumb they want;
your L should be determined by "what the circuit wants" (meeting
the constraints of the datasheet electrical table, and the other non-spec
"care-abouts" like noise, yield, settling time, etc.).

You might be facing a situation in shorter nodes where they eliminate
(say) 10 um max-L devices from the model pull once Lmin goes below
100 nm - because who, besides those whiny analog guys, can ever
use a 100:1 geometry range on a FET finger?

But it seems you're pretty far from that, and should just concern
yourself with setting L to get outcomes (do you care more, in
a specific instance, about Rout@bias, about gm@bias, a little about
Cgs and Cgd, ...?). Starting from a uniform "private rule" is a way to
start without much thinking. Not a way to finish.
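For illustration only, here is a back-of-envelope sketch of that trade using the square-law hand model; the mobility, channel-length-modulation and bias numbers below are assumed, made-up values, not from any real PDK. At a fixed bias current and fixed W/L, a longer L leaves gm roughly where it was but raises ro, and with it the Rout / intrinsic gain you can get.

```python
# Back-of-envelope sketch using the square-law hand model.
# un_cox, lambda_prime and the bias are assumed, illustrative numbers,
# not data for any real 90 nm or 0.35 um process.

un_cox = 200e-6        # A/V^2, assumed NMOS process transconductance parameter
lambda_prime = 0.1e-6  # V^-1 * m, assumed so that lambda ~ lambda_prime / L
i_d = 100e-6           # A, assumed drain bias current
w_over_l = 50          # fixed aspect ratio, so W grows together with L

for length in (0.1e-6, 0.2e-6, 0.35e-6, 1.0e-6):      # channel length in metres
    gm = (2 * un_cox * w_over_l * i_d) ** 0.5          # gm at this bias (L-independent here)
    lam = lambda_prime / length                        # channel-length modulation
    ro = 1.0 / (lam * i_d)                             # small-signal output resistance
    print(f"L = {length * 1e6:4.2f} um   gm = {gm * 1e3:4.2f} mS   "
          f"ro = {ro / 1e3:6.1f} kOhm   gm*ro = {gm * ro:5.1f}")
```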
 
Thanks freebird for your reply,

Suppose my target specification requires a minimum L = 1 µm, for example, to meet the design metrics. Also suppose the design needs a high aspect ratio, with a transistor width of, say, 250 µm. Such a design can be implemented in 0.35 µm.

Now what is the point of being proud of moving to a smaller-node technology when I mostly need to use the same dimensions again? I am speaking only from the analog-circuit point of view.


Secondly, in analog circuits it is always advised to work with at least twice the minimum channel length of the technology node. Is that related to the uncertainty of the fabrication resolution, i.e. to reduce the amount of error?


Thanks
 

"It's not about you" (Mr. Analog with your handful of op amps and ADCs).
It's about them and wanting their 100Mgates to fit in board / module space.
You're just along for the ride and you're lucky they let you sit in the back seat
instead of the trunk.

You only think I'm kidding.
 
Thanks freebird for your reply, I understood that.

But what about my second question:

in analog circuits it is always advised to work with at least twice the minimum channel length of the technology node; is that related to the uncertainty of the fabrication resolution, i.e. to reduce the amount of error?
 

@OP ... I'll share a story.

I once presented a design to some fellows at a very large semiconductor company (by fellow I mean the title). They asked me what channel lengths I used for the DAC current mirrors, to which I replied: minimum. There was some consternation, as they said the process guidance was 2-3x min L for matching. I had a way around that because of a patent I had developed, but something like that can be embarrassing if you are not aware of it. This was a real design used in many products, not just an academic exercise.

Sometimes you need larger devices for things like matching. Other times, the smaller devices will buy you much better bandwidth and speed for switching considerations. Point being that each process scale has some benefits and some drawbacks, and it's up to you to design around them.
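To make the 2-3x min L guidance concrete, here is a minimal sketch based on the Pelgrom relation sigma(dVT) = A_VT / sqrt(W*L) and a square-law current mirror; the A_VT coefficient and the overdrive voltage are assumed, illustrative values, not data for any particular process. Doubling both W and L roughly halves the 1-sigma current error.

```python
# Minimal sketch of Pelgrom-style VT mismatch in a simple current mirror.
# A_VT and the overdrive are assumed, illustrative values, not foundry data.

import math

a_vt = 3.5e-9   # V*m (about 3.5 mV*um), assumed Pelgrom coefficient
v_ov = 0.2      # V, assumed overdrive voltage of the mirror devices

def mirror_error_sigma(w, l):
    """1-sigma relative current error of a square-law mirror, in percent."""
    sigma_dvt = a_vt / math.sqrt(w * l)   # sigma(delta VT) = A_VT / sqrt(W*L)
    return 100 * 2 * sigma_dvt / v_ov     # dI/I ~= 2 * dVT / Vov (square law)

for w, l in ((2e-6, 0.1e-6), (4e-6, 0.2e-6), (8e-6, 0.4e-6)):
    print(f"W/L = {w * 1e6:3.1f}/{l * 1e6:4.2f} um : "
          f"sigma(dI/I) ~ {mirror_error_sigma(w, l):4.2f} %")
```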
 

The minimum L will be set (in concert w/ Vdd, temp range, device
application, reliable lifetime) by what can make a releasable
digital logic gate. This doesn't make it good for analog.

- barely capable lithography at Lmin certainly means poor
analog matching (Wmin too). Digital delay mismatch is just
allocated to some portion of the timing model and a digital
designer will just believe that model covers all (it should, and
you cannot know without a deep dive, you will not get that
deep dive until there's a panic, so just turn the crank).

- Digital gates impose a peak hot-carrier-injection bias only for a
fraction of the gate transition time, times the switching frequency. Many
things you'd like to do for analog look a lot like a hot-carrier
stress bias (Vgs = Vdd/2, Vds ~ Vdd looks kinda like a current
mirror application, and it's static, so VT / leakage / gate
kink creep is much faster). Such effects can ruin current mirror fidelity, and
if the signal chain cares about the matching of currents at
different stages (like, say, every single-stage CMOS op amp
ever) you could end up with a rather embarrassing field-returns
problem.

Meanwhile the digital gate's shifts are less of a threat to its
simpler and fewer care-abouts (supply current, prop delay),
while also accruing much more slowly (L for L). And if there
-was- a digital reliability problem, the likelier outcome would
be a reduction in rated supply voltage. Which also brings
Good Things for analog, yeah boy.
 

Thank you very much, friends, for your replies.

Now I got it.

The length and width are simply what you have to pay to reach the specs, regardless of the technology used.

However, the rule of thumb of using at least 2 x the minimum L of the technology is not about reaching some absolute value of length or width; it is about the lithography resolution, as freebird mentioned. Working at minimum L in an analog circuit, even if it meets the design goal on paper, will suffer from lithography uncertainty during fabrication, so the actual transistor width and length will differ from the drawn values, with all the consequences that brings on top of the mismatch.
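Just to put rough numbers on that point (the edge uncertainty below is an assumed, illustrative value, not foundry data): the same absolute error is a much smaller fraction of the drawn length once L is two or three times the minimum.

```python
# Rough sketch: the same absolute edge uncertainty dL hurts much less,
# relatively, once the drawn L is a multiple of the minimum.
# dL is an assumed, illustrative lithography/etch tolerance, not real data.

dl = 10e-9       # m, assumed 1-sigma edge uncertainty
l_min = 90e-9    # m, nominal minimum length of the node

for mult in (1, 2, 3):
    length = mult * l_min
    print(f"L = {mult} x Lmin = {length * 1e9:5.0f} nm : "
          f"dL/L ~ {100 * dl / length:4.1f} %")
```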

Thank you once again
Best Regards
 
