Hello guys,
The question comes from evaluating two rubidium clock sources: one was bought used at a nice price, the other was pulled from an old base station. I don't know the detailed specs of either clock. After reading some material on time-source measurement, I found that the methods on the table (a commercial time-interval counter (TIC) or the dual-mixer time-difference method) require a more stable clock to measure the other one against. Because all of these methods are relative measurements, the confidence rests on prior knowledge of the reference standard's performance.
So my curiosity drives me to ask: how do you compare two clocks of unknown performance (short-term jitter and long-term stability)? If my understanding is correct, we need to find a way to evaluate a clock WITHOUT the influence of another clock.
I think you need to compare your clocks to a standard such as the WWV or CHU time ticks broadcast on shortwave radio. (I did this with my Commodore-64 for 24 hours and discovered it lost just 3 seconds.)
Or see if you can interpret GPS signals from the satellites orbiting our planet. The system is so precise that my iPhone places my location on a map to within a few feet.
You can't, all time measurements are relative.
Look at http://leapsecond.com/. There you will find plenty of information on how to measure the performance of clocks at home.
Beware you are going down a rabbit hole here.
As you probably know, there IS no such thing as a rubidium "clock".
There is a rubidium physics package that resonates at 6 834 682 608 Hz. To use that feature, you have three parts: a rubidium lamp, a resonance cell where microwave energy goes in, bounces around with the rubidium atoms, and can make the rubidium dump some electrons, and then a simple photodetector for the light.
When you input the exact frequency to the absorption cell, the rubidium makes the light intensity passing through it dip. If you dither the frequency +/- a little, you can figure out where the exact light null is in frequency.
So you typically have some sort of crystal oscillator, multiplied up to the magic frequency, that is slowly frequency-dithered to figure out how to center it on the magic frequency.
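To make the dither-and-demodulate idea concrete, here is a minimal sketch in Python. It is purely illustrative: the Lorentzian dip model, linewidth, dither amplitude, and servo gain are all made-up assumptions, not values from any real unit.

```python
# Minimal sketch of the dither-and-demodulate servo described above.
# All constants here are assumptions for illustration only.
import numpy as np

F_RB = 6_834_682_608.0   # target hyperfine frequency quoted above, Hz
LINEWIDTH = 300.0        # assumed absorption linewidth, Hz
DITHER = 60.0            # assumed dither amplitude, Hz
GAIN = 400.0             # assumed servo gain, Hz per unit of error

def light_intensity(f):
    """Photodetector signal: intensity dips (Lorentzian) at the Rb resonance."""
    return 1.0 - 0.2 / (1.0 + ((f - F_RB) / LINEWIDTH) ** 2)

f_center = F_RB + 500.0  # start the multiplied crystal 500 Hz off resonance
for step in range(200):
    # Dither: sample the light on both sides of the current center frequency.
    hi = light_intensity(f_center + DITHER)
    lo = light_intensity(f_center - DITHER)
    # Demodulate: if the dip is centered, both sides read the same intensity.
    error = hi - lo
    # Steer the crystal (via its multiplier chain) toward the null.
    f_center -= GAIN * error

print(f"settled {f_center - F_RB:+.3f} Hz from the Rb line")
```

The sign of the demodulated error tells the servo which side of the dip it is on, which is exactly why the deliberate dither shows up on the 10 MHz output.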
So what output are you trying to measure? There is typically NO 6.8 GHz output provided. There might be a ~10 MHz crystal frequency, but like I said, it is deliberately dithered, so you must take that into account as you measure the stability. Or sometimes there is a 1-second pulse derived from the crystal oscillator.
One thing you can do is compare clock 1's crystal oscillator frequency to clock 2's. You can beat the two in a simple DC-coupled mixer used as a phase detector and see whether the phase detector's output is stable. If not, you will see the output slowly ramping from -0.2 V to +0.2 V, and every time it does that twice you have slipped one complete cycle. Integrate that up over time, and you know the frequency drift. A minimal sketch of that bookkeeping is below.
But you will not know which of the rubidium standards is bad. Hopefully they are both pretty good and track each other precisely without many cycle slips.
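Here is what "integrate that up over time" can look like, sketched in Python. It simulates the low-passed mixer output rather than reading a real ADC; the +/-0.2 V ramp amplitude comes from the description above, while the 1.5e-10 fractional offset, sample rate, and run length are assumptions for the demo.

```python
# Minimal sketch: estimate frequency offset by counting cycle slips in
# the DC-coupled mixer (phase detector) output. Simulated data, not a
# real measurement; constants are assumptions for illustration.
import numpy as np

F_NOM = 10e6            # nominal 10 MHz crystal output from each unit
FRAC_OFFSET = 1.5e-10   # assumed fractional frequency difference
FS = 10.0               # phase-detector samples per second
T = 3600.0              # one hour of logging

t = np.arange(0.0, T, 1.0 / FS)
beat_phase = 2 * np.pi * F_NOM * FRAC_OFFSET * t  # accumulated phase, rad
v = 0.2 * np.sin(beat_phase)                      # low-passed mixer output, V

# Every two zero crossings of v is one full cycle slipped at 10 MHz.
crossings = np.count_nonzero(np.diff(np.signbit(v)))
cycles_slipped = crossings / 2.0
delta_f = cycles_slipped / T                      # Hz of offset at 10 MHz
print(f"slips: {cycles_slipped:.1f} -> fractional offset ~ {delta_f / F_NOM:.1e}")
```

With these assumed numbers the beat period is about 11 minutes, so even a slow data logger on the mixer output is plenty fast enough.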
Well, regarding the GPS suggestion: the GPS-derived 1PPS is the cheapest way to get a timing signal with good long-term stability. But my concern is that the 1PPS output jitter will be badly affected by weather and other factors that cannot be well controlled.
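The per-pulse jitter does average down over long runs, though. A minimal sketch, assuming a time-interval counter that logs GPS 1PPS against the rubidium's divided-down 1PPS once per second with 10 ns rms white jitter (real GPS jitter also has correlated components, as you say): fitting a straight line to a day of readings pulls the frequency offset out from well under the per-pulse jitter floor.

```python
# Minimal sketch: average GPS 1PPS jitter away by fitting a line to
# time-interval-counter readings. Simulated data; the 10 ns rms white
# jitter and 5e-12 offset are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
T = 24 * 3600                       # one day of once-per-second readings
frac_offset = 5e-12                 # assumed rubidium frequency offset

t = np.arange(T, dtype=float)       # seconds
drift = frac_offset * t             # rubidium 1PPS walks off linearly, s
jitter = rng.normal(0, 10e-9, T)    # assumed 10 ns rms GPS 1PPS jitter
tic = drift + jitter                # what the counter logs, s

slope, _ = np.polyfit(t, tic, 1)    # seconds per second = fractional offset
print(f"estimated fractional offset: {slope:.2e} (true {frac_offset:.0e})")
```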
Very nice explanation! Thanks a lot!
The thing I want to do is exactly what you pointed out: I want to compare the XTAL frequency outputs, i.e., the 10 MHz outputs from those two rubidium module "clocks", because I want to figure out which one is more suitable as the reference standard clock in my home lab.
The beat-frequency method seems to be the same method mentioned in the paper from NIST (section 12.3.1).
Even though the time measurement is relative, my curiosity leads me to wonder how the NIST folks evaluate their clocks. As you can see from how a rubidium clock is built, there are many engineering factors involved, and they degrade the actual performance. When the technique evolves enough to build a higher-performance clock, how is that proved in practice?
All the existing models on the table should be worse than the new one. That's the thing I want to figure out.
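For whichever comparison method you use, the standard way to report the result (in the NIST literature and all over leapsecond.com) is the Allan deviation. Here is a minimal sketch with simulated data, assuming fractional-frequency samples taken once per second:

```python
# Minimal sketch of the (non-overlapping) Allan deviation, the standard
# stability metric. The input data here is simulated white-FM noise,
# purely an assumption for the demo.
import numpy as np

def adev(y, m):
    """Allan deviation of fractional-frequency data y at averaging time
    m * tau0, where tau0 is the spacing of the samples in y."""
    # Average the data in adjacent blocks of m samples each.
    n = len(y) // m
    yb = y[: n * m].reshape(n, m).mean(axis=1)
    # sigma_y^2(tau) = <(ybar_{k+1} - ybar_k)^2> / 2
    return np.sqrt(0.5 * np.mean(np.diff(yb) ** 2))

rng = np.random.default_rng(1)
y = rng.normal(0, 1e-11, 100_000)        # fake white-FM data, tau0 = 1 s
for m in (1, 10, 100, 1000):
    print(f"tau = {m:5d} s   sigma_y = {adev(y, m):.2e}")
```

Plotting sigma_y against tau on log-log axes is how you would tell the two units apart: the short-term slope shows the crystal and noise behavior, and where the curve flattens or turns up shows the long-term drift.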