> A switching amp will have more phase variation than a linear one.

Really? Why is that? If I use drain modulation, then I assume the only phase shift would be due to the changing output impedance as Vd changes. But if I use a class D amp with a low-Q output tuning network, I assumed the changes would be pretty much negligible. And since the input is hard-switched, I shouldn't have to worry about input capacitance variations.
> Usually if the linear PA is designed for a single frequency, and if it uses a well-designed bias circuit (filtering, bias point, temperature compensation, etc.), it should have no problem meeting the AM-PM requirements.

I know it's not temperature that's causing it. It must be something with the biasing. Would a fixed gate bias voltage work, or would it be necessary to regulate the Iq with feedback?
> This is a very common problem in communications systems. It is called AM-to-PM conversion, and amplifiers are specified to minimize it.

Now that you mention it, my bias structure impedance may be high (what would you define as a suitable impedance at DC for biasing?). I'm biasing my gate through about 300 Ω and a 2.2 µH inductor. But I have trouble seeing how that impedance can hurt me. The only DC path to my gate should be from my bias circuit (I'm using an MRF173 FET). So how could the operation of the FET change my gate bias?
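For a rough feel for the numbers in this post, here is a sketch of the impedance looking back into the stated bias feed (300 Ω in series with 2.2 µH). This is only an illustration of why the earlier reply calls for a "stiff" bias; the frequencies evaluated are arbitrary.

```python
import math

# Impedance looking back into the gate bias feed: a 300 ohm resistor
# in series with a 2.2 uH inductor (values quoted in the post above).
R_BIAS, L_BIAS = 300.0, 2.2e-6

def bias_feed_impedance(f_hz):
    """|Z| of the series R-L bias feed at frequency f_hz."""
    return math.hypot(R_BIAS, 2.0 * math.pi * f_hz * L_BIAS)

# At baseband frequencies the inductor contributes almost nothing, so
# any rectified gate current works against the full 300 ohms -- far
# from the low-impedance ("stiff") bias the earlier reply recommends.
for f in (1e3, 1e6):
    print(f"|Z| at {f:g} Hz: {bias_feed_impedance(f):.0f} ohm")
```

The point is that RF rectification at the gate produces a low-frequency current, and with ~300 Ω in the DC path that current shifts the bias point, which in turn shifts the phase.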
I would suggest that you use a very stiff (low impedance) bias structure to keep any RF rectification to a minimum. A DC bias point change will cause a phase shift.
> Another reason for a phase shift would be the generation of harmonics and/or clipping, which, when summed in the time domain, can look like a phase shift.

I measured my phase shift by putting a tuned coil on the output of the amp and looking at the field from the coil with a pickup loop. The coil has a Q of about 150, so it filters off pretty much all harmonics; I doubt this is the case.
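As a sanity check on the "filters off pretty much all harmonics" claim, a rough estimate is possible, assuming the coil behaves like an ideal parallel RLC resonator (a real coil with parasitics will deviate somewhat):

```python
import math

def rlc_rejection_db(q, n):
    """Relative response of a parallel RLC resonator at the n-th
    harmonic, compared to its response at resonance f0:
    |Z(f)/Z(f0)| = 1 / sqrt(1 + Q^2 * (f/f0 - f0/f)^2)."""
    detune = n - 1.0 / n          # f/f0 - f0/f evaluated at f = n*f0
    ratio = 1.0 / math.sqrt(1.0 + (q * detune) ** 2)
    return 20.0 * math.log10(ratio)

q = 150  # the Q quoted for the tuned coil
for n in (2, 3):
    print(f"harmonic {n}: {rlc_rejection_db(q, n):.1f} dB")
```

Under that idealization, a Q of 150 knocks the second harmonic down by roughly 47 dB relative to the fundamental, which supports the "pretty much all harmonics" statement.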
> mtwieg: In the case of a real class A amp (a good heat-generating device) and modern LDMOS devices, 30 degrees seems like a lot to me. I assume the supply current doesn't change much during the pulse.

I only turn on the bias when doing a pulse (which is only about 50 µs long), so I wouldn't notice current draw changes in that case.
> Regarding the class D, I think the problem is in the drive signal feedthrough. When you reduce the output by reducing the drain voltage, the feedthrough signal (via the reverse capacitance) will no longer be small w.r.t. the output, hence affecting phase and amplitude.

Yes, this is a very good point. I was planning on using devices with as small a Cdg as possible (which I assume is the primary mechanism for feedthrough), but I would definitely have to watch out for this effect.
> Theoretically class D (or other switching classes) is fine, but in reality no amplifier can be a true class D: non-zero switch resistances and capacitive as well as inductive parasitics limit the ideal performance, whereas a linear (class A) PA stays (luckily) in the same state from beginning to end.

Right, everything is non-ideal, but it's just a matter of which is closer to ideal. Probably no use in trying to generalize between two such drastically different topologies...
> Filtering and impedances at low frequencies (baseband) are very important for the bias circuit to get good AM-PM.

Again, this may be my issue. But what is the actual mechanism here that causes poor AM-PM? And what impedance should I be aiming for?
> I don't know if this is the case, but there is a type of thermal memory effect that can generate poor AM-PM performance. This is related to fast changes in the junction temperature (with modulation), which are hard to detect with standard temperature measurements.

Right, thermal memory is one thing I probably won't be able to model or compensate for... which is why I thought class D might be a better choice, since it should dissipate less heat.
> I measured my phase shift by putting a tuned coil on the output of the amp and looking at the field from the coil with a pickup loop. The coil has a Q of about 150, so it filters off pretty much all harmonics, so I doubt this is the case.

To get considerable phase modulation, as you reported, there must be some high-Q circuit part that slightly changes its resonance frequency depending on the signal level. When I hear about your test setup, I suspect that the tuned coil added for the test is the cause of the phase modulation. If you don't have other high-Q resonant circuits in the PA, it's the most likely explanation, I think. The resonance frequency shift can be caused, e.g., by signal-dependent output impedance variations.
> Regarding bias, how did you bias the drain? When you turn on the gate bias, it takes time for the drain bias to settle. Are you sure everything has settled before applying the RF input power?

The drain is biased as shown. The drain voltage is a fixed 30 VDC (this amp is input modulated, not gate or drain modulated). I allow the gate bias 20 µs of rise time, then switch the TR switch on, wait 4 µs more, and then apply the RF input. That should allow plenty of time for both stages to reach a steady state. I check this by looking at the source current (using the source resistors), the drain voltage, and the voltage at the anode of D1. And I measure the phase about 20 µs into the RF pulse. Everything should be steady by then...
> Did you match the output for maximum power at a certain drive level (with the risk of operation close to voltage saturation and the effect of nonlinear MOSFET capacitances), or did you accept some reduction in gain to ensure you stay away from saturation/clipping?

As shown in the schematic, there isn't any impedance matching done anywhere. Previously I did have impedance matching, but I ran into a problem where the gain of the transmitter was drifting with temperature (I had a thread on that a while back...) and I found that removing the matching networks helped a lot. Now the gain deviates by about 0.1 dB. But I don't think I need it that good, so if adding a matching network back in would help, I'd definitely consider it.
> What is the DC input power when generating 4 W? I hope it is huge, so that you in fact have a "small signal" amplifier.

Currently I'm biasing the first stage at 0.35 A and the second stage at 0.81 A. Strangely, I still start to see significant distortion on the output, despite the huge bias I'm applying.
> The impedance at low frequencies should be low, but ideally you would tune the filter circuit for the best performance you are looking for.

In the MRF173 datasheet, they show the gate bias being applied through a 10 kΩ resistor... I guess they don't care about stiff biasing or something.
> To get considerable phase modulation, as you reported, there must be some high-Q circuit part that slightly changes its resonance frequency depending on the signal level. When I hear about your test setup, I suspect that the tuned coil added for the test is the cause of the phase modulation. If you don't have other high-Q resonant circuits in the PA, it's the most likely explanation, I think. The resonance frequency shift can be caused, e.g., by signal-dependent output impedance variations.

This is a reasonable explanation, but without a high-Q filter on the output I don't see how I could measure the phase of the fundamental signal accurately at high power levels. I have a nice VNA, which has a power sweep option on its menu, but I haven't been able to get it to work for some reason... that would be so useful for this...
As a first try, get rid of the additional filter.
> but without a high Q filter on the output I don't see how I could measure the phase of the fundamental signal accurately at high power levels

The filter should have sufficient harmonic suppression, and preferably no phase or amplitude dispersion near the carrier frequency. That seems feasible. In addition, you should check that you don't have unwanted resonances, with the respective phase dispersion, created by the chokes. This would be visible in the amplifier S21.
> The filter should have sufficient harmonic suppression, and preferably no phase or amplitude dispersion near the carrier frequency. That seems feasible.

Okay, I could probably make a 3-pole Butterworth LPF. That should be good, right?
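For reference, a sketch of the element values for such a filter, assuming the standard pi topology (shunt C, series L, shunt C) and equal 50 Ω terminations. The 40 MHz cutoff below is only a placeholder, since the thread never states the operating frequency:

```python
import math

def butterworth3_pi(fc_hz, r0=50.0):
    """Element values for a 3rd-order Butterworth low-pass filter
    (pi topology: shunt C, series L, shunt C), equal terminations.
    Normalized lowpass prototype values are g1 = 1, g2 = 2, g3 = 1."""
    wc = 2.0 * math.pi * fc_hz
    c_shunt = 1.0 / (r0 * wc)   # denormalized g1 = g3 = 1
    l_series = 2.0 * r0 / wc    # denormalized g2 = 2
    return c_shunt, l_series, c_shunt

# Example with a placeholder cutoff; put yours above the carrier but
# well below the second harmonic.
c1, l2, c3 = butterworth3_pi(40e6)
print(f"C1 = C3 = {c1 * 1e12:.0f} pF, L2 = {l2 * 1e9:.0f} nH")
```

A 3-pole Butterworth rolls off at 18 dB/octave above cutoff, so the second harmonic would see roughly 18 dB of extra rejection even with the cutoff only one octave below it.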
> In addition, you should check that you don't have unwanted resonances, with the respective phase dispersion, created by the chokes. This would be visible in the amplifier S21.

The chokes are air core and should be self-resonant in the hundreds of MHz. Also, what is phase dispersion?
> I also wonder about the purpose of D1/D2. Do you operate it as an attenuator? In any case, it will make the low-pass filter characteristic strongly amplitude-dependent, related to both the attenuator setpoint and the amplifier output power.

Oh yeah, I forgot to mention that's my TR switch. The 50 Ω matched coil connects to the cathode of D1. I thought I'd throw it in just in case it's part of the problem.
> With phase dispersion, I mean dφ/dω, phase versus frequency variation. With high choke self-resonance frequencies, it should be no problem.

Okay, makes sense.
It's basically the same with the output filter. If the S21 is almost constant around the frequency of interest, the amplitude sensitivity can be expected to be low as well. A Butterworth filter shouldn't be bad.
> I don't exactly understand the TR switch configuration, particularly D2. Did you replace it with a short/open for the measurement?

It's a very common SPDT switch configuration using PIN diodes and a lumped-element lambda/4 transmission line. Figure 2 in this doc: https://www.skyworksinc.com/downloads/press_room/published_articles/Elektronica_072009_English.pdf
> There can be a current clipping problem: 4 W into 50 Ω is about 0.4 A, but this device has 105 pF output capacitance and 10 pF reverse capacitance (115 pF total). At the operating frequency this is about 22 Ω, resulting in a reactive current of about 20/22 = 0.92 A. The total AC current as seen by the Gm current source will be about 1 A peak. If you don't compensate this capacitance with an inductance, the amplifier will leave class A operation (as you have 0.81 A bias current). You may see a slight current increase from steady state during the RF pulse.

Good observation... will a conjugate match from my drain to my 50 Ω load provide nulling of this current, or do I need to hang an inductor in parallel with my Cds (with DC blocking caps, of course) somehow?
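The arithmetic in the quoted reply can be reproduced as follows. The ~63 MHz frequency is an assumption chosen here only because it makes Xc come out near the quoted 22 Ω; the thread never states the actual operating frequency:

```python
import math

# Reproducing the quoted numbers: 4 W into 50 ohm with the MRF173's
# ~115 pF total drain capacitance left uncompensated. The 63 MHz
# frequency is an assumption (it reproduces the quoted 22 ohm Xc).
P, R, C, f = 4.0, 50.0, 115e-12, 63e6

v_pk = math.sqrt(2.0 * P * R)       # peak RF drain voltage: 20 V
i_res = v_pk / R                    # resistive (load) current: 0.4 A pk
xc = 1.0 / (2.0 * math.pi * f * C)  # capacitive reactance: ~22 ohm
i_cap = v_pk / xc                   # reactive current: ~0.9 A pk
i_total = math.hypot(i_res, i_cap)  # the two currents are in quadrature

print(f"Xc = {xc:.1f} ohm, I_cap = {i_cap:.2f} A pk, I_total = {i_total:.2f} A pk")
```

Since the total peak AC current (~1 A) exceeds the 0.81 A bias current, the device current would indeed be driven to zero on part of the cycle, i.e., out of class A, which is the clipping mechanism the reply describes.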
> The output capacitance is also voltage-dependent (as shown in the datasheet), so this will introduce a phase shift between the actual current through Gm and the drain voltage at increasing RF drain voltage. It is as if the effective capacitance increases with increasing RF drain voltage. A smaller or UHF device has less output (and input) capacitance.

I thought capacitance would decrease with increasing voltage? Anyway, yes, a different device may help a lot, but I'd like to make this a learning experience rather than just playing trial and error with different parts.
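To illustrate how a change in effective output capacitance maps into degrees of phase shift, here is a minimal sketch that models the drain load as 50 Ω in parallel with C. Both the 63 MHz frequency and the 105 → 125 pF capacitance swing are hypothetical numbers, not values from the thread:

```python
import math

def drain_phase_deg(c_farads, r=50.0, f=63e6):
    """Phase of the drain voltage relative to the Gm current, modeling
    the drain load as R in parallel with C: angle(Z) = -atan(w*R*C)."""
    return -math.degrees(math.atan(2.0 * math.pi * f * r * c_farads))

# Hypothetical example: effective Coss swinging from its small-signal
# 105 pF to 125 pF at large RF drain voltage.
shift = drain_phase_deg(125e-12) - drain_phase_deg(105e-12)
print(f"phase change: {shift:.1f} degrees")
```

Even a modest capacitance swing produces a phase shift of a few degrees under these assumptions, which is why the voltage dependence of Coss matters for AM-PM.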
> You may also check the first stage for voltage or current clipping. If you have a fast oscilloscope that can display sufficient harmonics, you may check the waveforms (you probably already did, as you mentioned output distortion).

Yes, I'll check that too. I do have a 500 MHz scope, so I should be able to.
> The 30 degrees phase shift, is this for the complete amplifier, or for one of the stages?

The whole thing.
> Regarding the actual amplifier load (the 50 Ω tuned coil), this may suppress harmonics towards the output, but this doesn't guarantee that there are no harmonics at the drain. As you mentioned distortion, they may be present.

Well, I should emphasize that the coil is the intended load for the amplifier, so the phase with that load is ultimately what I'm interested in. However, for diagnostic purposes, yes, I should also be probing the drain voltages directly. Also, on a future design I may throw in a small current sense resistor on the source (in series with the source capacitor). Might that be useful for telling whether my distortion is due to lack of bias voltage or current?
I assume you're saying that distortion makes it look like phase shift? Sure, that's why I was intending to do measurements with a filter on the output.
> It's a very common SPDT switch configuration using PIN diodes and a lumped-element lambda/4 transmission line.

Unfortunately, all the relevant components except the diodes are missing from your schematic. From the paper, I get an idea of how the T/R circuit could be connected. To understand possible phase shift issues, we would need to see the real circuit used in the test. Presently, it looks like a shorted filter output with no antenna feed at all.