Do I still need a good cable when applying calibration on a VNA?


SIQ:
Hi,

SOLT calibration can correct for mismatch in the test cable, so my question is: do I still need a good test cable? More precisely, is there any measurement difference for the DUT between an ideal cable (50 ohm) and a non-ideal one (say 50 ohm +/- 10%)?
Thanks.
 

To calibrate out cable imperfections, they must remain constant.
One important thing about the expensive VNA cables is that phase and insertion loss stay very stable when you bend the cable.
 

In addition, cable loss and impedance deviation will reduce the measurement accuracy, particularly with an unmatched DUT.
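
To put rough numbers on the +/-10% from the original question (a sketch, not part of this reply; it uses the standard worst-case mismatch-ripple bound, and the DUT reflection magnitudes are assumed for illustration):

```python
import math

def gamma_mag(z, z0=50.0):
    """Magnitude of the reflection coefficient of impedance z against z0."""
    return abs((z - z0) / (z + z0))

# A 50 ohm +/- 10% cable, as in the original question
for z in (45.0, 55.0):
    g = gamma_mag(z)
    rl_db = -20 * math.log10(g)               # return loss of the cable mismatch
    ripple_good = 20 * math.log10(1 + g * g)  # against a DUT about as well matched
    ripple_bad = 20 * math.log10(1 + g)       # against a fully reflective DUT
    print(f"Z = {z:.0f} ohm: RL = {rl_db:.1f} dB, "
          f"ripple ~ +/-{ripple_good:.3f} dB (matched DUT), "
          f"up to +/-{ripple_bad:.2f} dB (unmatched DUT)")
```

A 45 or 55 ohm cable only manages about 26 dB return loss: harmless against a well-matched DUT, but worth a few tenths of a dB of ripple against a badly matched one.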
 

That is all true. Nevertheless, you do not have to buy a $2500 cable to make a good measurement. You can find pretty good used cables on eBay for $50 to $100 that work almost as well. Try to find one rated to 26 GHz; that way, at lower frequencies it should still hold up with good repeatability. For example:

**broken link removed**
 
Hello,

In addition to the good advice in the previous postings:
If you have an unknown cable set, calibrate with it and see what happens when you bend the cables. If you have to measure devices whose impedances give S11 or S22 close to 1, check how the measured impedance changes when you connect such a load (and bend the cable).

If the variations are acceptable, I see no reason not to use the cables (except for cosmetic reasons).

If you have to use very long cables (many wavelengths long), I would also do a 1- or 2-port sweep of the cable itself (to know its return loss and insertion loss). Large ripple in the parameters versus frequency can be a sign of bad cable construction (or a damaged cable).
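
If your VNA can export Touchstone files, that cable sweep is easy to sanity-check offline. A minimal sketch using the scikit-rf library (the file name is hypothetical, and the smoothing window is an arbitrary choice):

```python
import numpy as np
import skrf as rf

# 2-port sweep of the cable alone, exported from the VNA (hypothetical file)
cable = rf.Network("cable_sweep.s2p")

s11_db = cable.s_db[:, 0, 0]   # reflection at port 1, in dB
s21_db = cable.s_db[:, 1, 0]   # transmission, in dB

print(f"Worst-case |S11|: {s11_db.max():.1f} dB")
print(f"Worst-case |S21|: {s21_db.min():.1f} dB")

# Fast ripple in |S21| versus frequency hints at bad construction or damage:
# compare the raw trace against a smoothed version of itself.
window = 21
trend = np.convolve(s21_db, np.ones(window) / window, mode="same")
ripple = np.abs(s21_db - trend)[window:-window]  # skip the filter edge effects
print(f"Peak ripple about the trend: {ripple.max():.2f} dB")
```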

Cable screening can also be important: interference will make the measurements inaccurate.
 

If you just want to make a basic insertion loss measurement, the best way to do it is to fit precision 10 dB or 20 dB attenuators at the test end of your cables and use the THRU cal function.
Provided the attenuators are very good quality, the cable becomes much less critical. Your device under test will be presented with something very close to a perfect 50R source and load, because it is connected directly to the precision attenuators.

This basic method will outperform a 'typical' VNA SOLT calibration (i.e. SOLT without the attenuators) for a simple thru loss measurement, especially when measuring very low insertion loss at microwave frequencies with fairly long cables.
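
Why the pads help (a sketch of the usual two-way-loss reasoning; the numbers below are assumed, not from the post): any reflection behind an attenuator passes through it twice, so the raw match of the cable and VNA port improves by roughly twice the pad value, down to the floor set by the pad's own match.

```python
import math

def rl_through_pad(raw_rl_db, pad_db, pad_rl_db):
    """Worst-case return loss seen looking into a pad that is backed by
    a source (VNA port + cable) with return loss raw_rl_db."""
    g_source = 10 ** (-(raw_rl_db + 2 * pad_db) / 20)  # attenuated twice
    g_pad = 10 ** (-pad_rl_db / 20)                    # pad's own reflection
    return -20 * math.log10(g_source + g_pad)          # reflections phase-aligned

# Assumed numbers: a mediocre 15 dB raw match, pads with a 32 dB match of their own
for pad in (10, 20):
    print(f"{pad} dB pad: DUT sees about {rl_through_pad(15, pad, 32):.1f} dB return loss")
```

With a 20 dB pad the match is essentially limited by the pad itself, which is why the quality of the attenuators matters more than the cable.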
 
Hello G0HZU,

You are right about the attenuators. I used two 4 dB attenuators (8 dB total) at the input of an amplifier with bad input VSWR that had to work with an antenna with bad VSWR (reception measurement only). Without the attenuation there was significant ripple in S21 (about 5 dB) due to reflections between the amplifier's input and the antenna's output.

I have also had this internal reflection problem with some foam-dielectric coaxial cables (just above 2 GHz, with fresh cables). Therefore I don't trust a cable whose S21 or S11 doesn't show monotonic behavior with increasing frequency.
 

This basic method will outperform a 'typical' VNA SOLT calibration (i.e. SOLT without the attenuators) for a simple thru loss measurement, especially when measuring very low insertion loss at microwave frequencies with fairly long cables.

I do not agree. The attenuator method is a useful workaround for simple scalar network analyzers that cannot be calibrated.

For vector network analyzers, calibration with a good cal kit is the most accurate method.
 

I do not agree. The attenuator method is a useful workaround for simple scalar network analyzers that cannot be calibrated.

For vector network analyzers, calibration with a good cal kit is the most accurate method.

Put yourself in the shoes of the analyser. It usually defaults to a few hundred sweep points.

Then look at what is being presented to it at maybe 6 GHz: a transmission line that is maybe 1 metre long and rotates the Smith chart maybe 60-100 times over a sweep up to 6 GHz.

How can the analyser calibrate this out with so few measurement points without making interpolation errors once the device is connected? The electrical length will also change slightly between calibration and measurement, causing further small errors.
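
A rough check of those numbers (a sketch; the 1 m length, 0.7 velocity factor and 201 sweep points are assumed for illustration):

```python
C = 299_792_458.0    # speed of light, m/s
length_m = 1.0       # assumed cable length
vf = 0.7             # assumed velocity factor
f_stop = 6e9         # sweep stop frequency
points = 201         # a common VNA default

# Round-trip delay of a reflection travelling down the cable and back
tau = 2 * length_m / (vf * C)
# The reflection coefficient completes one Smith chart rotation per 1/tau of span
rotations = tau * f_stop
print(f"Rotations across the sweep: {rotations:.0f}")
print(f"Sweep points per rotation:  {points / rotations:.1f}")
```

That comes out near 57 rotations, about 3.5 sweep points per rotation, so the error terms are only just sampled and any interpolation between calibration points sits on shaky ground.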

I've had this argument so many times in the past, and I've seen so many poor measurements (poor relative to using attenuators) made using SOLT on long cables. Usually the measurement shows tiny, fast ripple on the plot due to the errors in correcting for the transmission line. The rate of the error ripple is a function of the cable length.

If the attenuators present >30 dB return loss then I'll leave it to you to work out the possible measurement error due to VSWR mismatch. It will be a LOT less than the error from a SOLT calibration on a long cable with a few hundred sweep points :)
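
Working that out with the standard worst-case mismatch bound (a sketch; the 30 dB return loss is the figure quoted above):

```python
import math

rl_db = 30.0              # return loss at each attenuator face
g = 10 ** (-rl_db / 20)   # |reflection coefficient|, about 0.032

# Worst-case thru-loss error when the two residual reflections phase-align
err_hi = 20 * math.log10(1 + g * g)
err_lo = 20 * math.log10(1 - g * g)
print(f"Mismatch error bounds: {err_lo:+.4f} / {err_hi:+.4f} dB")
```

Roughly +/-0.009 dB worst case, the same order as the 0.006 dB figure quoted from memory later in the thread.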

To put this in perspective, I'm referring to small errors on low-loss broadband measurements, e.g. the insertion loss of a filter or switch that spans to maybe 6 GHz or higher.

The precision attenuators with a thru cal will give a better thru loss measurement than a 'typical' SOLT cal with long cables without the attenuators.
 

Put yourself in the shoes of the analyser.

I come from the Bochum University measurement group that invented many of the modern calibration methods implemented in R&S and Agilent VNAs, so I have a certain bias towards calibration rather than a pure hardware approach.

Of course, you can break calibration if you combine long bad cables with few sweep points, and maybe add some user error. Yes, you can break it.

But your solution with the attenuators has some major weaknesses, too. It suffers from the unknown return loss of the attenuator (how good is that?) and will decrease the SNR and dynamic range of the instrument by 20 or 40 dB. The increased noise in the sampled signal will increase the measurement error.

Finally, if you insist on using an attenuator, you can always combine the attenuator method with a proper SOLT calibration. That will be more precise than attenuators with simple THRU normalization.
 

I come from the Bochum University measurement group that invented many of the modern calibration methods implemented in R&S and Agilent VNAs, so I have a certain bias towards calibration rather than a pure hardware approach.

Of course, you can break calibration if you combine long bad cables with few sweep points, and maybe add some user error. Yes, you can break it.

But your solution with the attenuators has some major weaknesses, too. It suffers from the unknown return loss of the attenuator (how good is that?) and will decrease the SNR and dynamic range of the instrument by 20 or 40 dB. The increased noise in the sampled signal will increase the measurement error.

Finally, if you insist on using an attenuator, you can always combine the attenuator method with a proper SOLT calibration. That will be more precise than attenuators with simple THRU normalization.

But the attenuator return loss can be checked and measured with good confidence on the VNA.

The increased noise in the sampled signal will increase the measurement error.

I specifically referred to a low insertion loss measurement where accuracy is key. Most modern VNAs can cope quite well with the extra loss of the attenuators without introducing too much noise. I'd argue that the SOLT error on a long cable will be much more significant.

Of course, you can break calibration if you combine long bad cables with few sweep points, and maybe add some user error. Yes, you can break it.
That's my point. I've seen so many engineers get errors on a SOLT using the default analyser sweep settings on measurements at several GHz using long cables.

Finally, if you insist on using an attenuator, you can always combine the attenuator method with a proper SOLT calibration. That will be more precise than attenuators with simple THRU normalization.
As long as the attenuators have >30 dB return loss, I would still argue that simpler is better. The mismatch error with >30 dB return loss is so small that the error introduced by the physical act of performing SOLT (as opposed to the theory) risks making the measurement worse. But I doubt you would be able to 'see' the error or know it was an error :)

Also note:
If I preset the Agilent 8753ES by my desk it defaults to 201 sweep points. If I get time I'll make some measurements comparing 1-metre cables on a default SOLT using an 85033E cal kit against a thru loss measurement using some nice 18 GHz attenuators on something like a filter or switch.
 

But the attenuator return loss can be checked and measured with good confidence on the VNA.

How? You have described that you do not trust calibration, so you start with an uncalibrated, inaccurate VNA. To measure the return loss of your attenuator, you must first calibrate your VNA. How do you calibrate it if you have no reference that you can trust? If you just use another attenuator as the reference, your results might look nice, but all the impedance error of that attenuator is now in your data.

In SOLT, the standards are very well defined and measured against trusted standards. With your attenuator method, you have no trusted reference to start with.


Also note:
If I preset the Agilent 8753ES by my desk it defaults to 201 sweep points. If I get time I'll make some measurements comparing 1-metre cables on a default SOLT using an 85033E cal kit against a thru loss measurement using some nice 18 GHz attenuators on something like a filter or switch.

I am sure that you can create poor data by using calibration in the wrong way. However, it would be easy to just use more data points if you have long cables.

If you do the test, do not forget to measure and compare the return loss of a high-precision 50 ohm load, a short and an open.
 

How? You have described that you do not trust calibration, so you start with an uncalibrated, inaccurate VNA. To measure the return loss of your attenuator, you must first calibrate your VNA. How do you calibrate it if you have no reference that you can trust? If you just use another attenuator as the reference, your results might look nice, but all the impedance error of that attenuator is now in your data.

In SOLT, the standards are very well defined and measured against trusted standards. With your attenuator method, you have no trusted reference to start with.

How?
By simply calibrating the VNA directly at the instrument port.

You have described that you do not trust calibration
I have described the real-world limits of trying to calibrate out a very long (imperfect) cable at microwave frequencies.
That isn't the same as calibrating and then measuring an attenuator directly at the VNA port. I'm quite happy to trust a calibration that doesn't involve a very long, imperfect transmission line in series with the cal kit.

Do you see the difference?
 

Do you see the difference?

Yes, I see your point.

Maybe we can agree that your method of improving return loss in hardware, and more advanced calibration methods that can remove imperfections of the hardware, can be combined.
 

Yes, I see your point.

Maybe we can agree that your method of improving return loss in hardware, and more advanced calibration methods that can remove imperfections of the hardware, can be combined.

Yes, they can be combined :)

I have always preferred the attenuator method because it is the most reliable, and it is quick! As long as the return loss seen into the attenuator + cable + VNA is ballpark 30 dB, the mismatch error is something like 0.006 dB (from memory).

It also presents the device under test with a source and load that is close to a resistive 50 ohms and that is a good place to be IMO :)

Note:
Obviously, if the attenuators are used at a frequency where they can't present a decent return loss then you have to resort to a SOLT cal, but I'm lucky in that I have attenuators here that are pretty good over the frequency range I usually design over.
 
