In this post we will look at using a counter's high timing measurement resolution to make Jitter measurements on a clock signal. Jitter is defined as the deviation of a signal edge transition from its ideal time. The most common instrument used for measuring Jitter is a high-performance scope, which begs the question: why use a counter? High-performance scopes are very expensive, ranging from tens of thousands of dollars to hundreds of thousands of dollars. The counter offers a much lower cost alternative, with measurement resolution specs that match or come close to those found on a high-performance scope. There are various ways to quantify Jitter; the three most common are Period Jitter, Cycle to Cycle Period Jitter, and Time Interval Error (TIE). A high-performance scope can quantify all three types, while a counter typically can only quantify Period Jitter. For more on Jitter fundamentals and Jitter types click here.

Period Jitter is an RMS calculation of the difference of each period from the waveform's average period. It can be calculated by making a large number of period measurements on a signal and computing the standard deviation of those measurements. To measure Period Jitter on a counter we cannot simply use the "Period" or "Frequency" measurement function, because these are integration measurements that average multiple period or frequency measurements together. Instead, we need single-shot period measurements. For this we use the "Time Interval" measurement menu, where you will find a "Single Period" measurement that is ideal for calculating Period Jitter.
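As a sketch of the calculation itself, the short Python snippet below computes RMS Period Jitter as the sample standard deviation of single-shot period readings. The period values are made up for illustration, not measured data:

```python
import statistics

def rms_period_jitter(periods_s):
    """RMS Period Jitter: the sample standard deviation of
    single-shot period measurements (in seconds)."""
    return statistics.stdev(periods_s)

# Hypothetical single-shot period readings of a nominal 10 MHz clock
# (ideal period 100 ns), each deviating slightly from the mean.
periods = [100.002e-9, 99.997e-9, 100.001e-9,
           99.999e-9, 100.003e-9, 99.998e-9]
print(f"RMS Period Jitter: {rms_period_jitter(periods) * 1e12:.1f} ps")
```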

When measuring Period Jitter with a counter, the main counter spec you need to be aware of is its maximum resolution. Resolution can be spec'd in digits or as a time value, which is often referred to as the time interval resolution. The time value resolution is the spec we are interested in for calculating the counter's Jitter measurement floor. This time represents the standard deviation of the random measurement error associated with a time interval measurement. Since the standard deviation of an edge-to-edge timing measurement is the same thing as RMS Period Jitter, a counter's time interval resolution spec is its Jitter measurement floor. Another way to look at it: the counter's time interval resolution is the RMS jitter error internal to its measurement. For example, Agilent's 53230A counter has a time interval resolution spec of 20 ps, so its Period Jitter measurement floor is 20 ps.
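One practical consequence, a common metrology rule of thumb rather than anything from the 53230A's documentation: if the instrument's measurement error is random and uncorrelated with the signal's own jitter, the two combine as a root-sum-square, so the floor can be removed from a measured value in quadrature. A minimal sketch under that assumption:

```python
import math

def remove_jitter_floor(measured_rms_s, floor_rms_s):
    """Estimate the signal's true RMS jitter by removing the counter's
    jitter floor in quadrature. Assumes the instrument error is random
    and uncorrelated with the signal's jitter."""
    if measured_rms_s <= floor_rms_s:
        return 0.0  # signal jitter is at or below the measurement floor
    return math.sqrt(measured_rms_s**2 - floor_rms_s**2)

# Example: 50 ps measured on a counter with a 20 ps floor (53230A spec).
print(remove_jitter_floor(50e-12, 20e-12))  # about 4.58e-11 s (~45.8 ps)
```

Note that when the measured value is close to the floor, the result is dominated by the instrument rather than the signal, so readings near 20 ps on the 53230A should be treated as an upper bound.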

Let's look at a Period Jitter measurement example using the 53230A counter. The 53230A is set up to make single-shot period measurements on a 10 MHz digital clock. Using the statistics function, the counter computes the standard deviation after each successive measurement. As mentioned earlier, the standard deviation of the period measurements is the RMS Jitter of the signal. The screen shot below, circled in red, shows the measured RMS Jitter of the digital clock. From the statistics we can also see the peak-to-peak Jitter, and we can see that the mean value of the period corresponds to a frequency right around 10 MHz.
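The counter's running statistics behavior can be mimicked in software. The sketch below uses Welford's online algorithm to update the mean and standard deviation after each successive reading, just as the statistics function updates after each measurement (the period values are illustrative):

```python
class RunningStats:
    """Welford's online algorithm: updates the mean and standard
    deviation after each successive reading, the way a counter's
    statistics mode does."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0

    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (x - self.mean)

    @property
    def stdev(self):
        # Sample standard deviation; 0.0 until at least two readings.
        return (self._m2 / (self.n - 1)) ** 0.5 if self.n > 1 else 0.0

stats = RunningStats()
for period in (100.002e-9, 99.997e-9, 100.001e-9, 99.999e-9):
    stats.add(period)          # one single-shot period reading
    rms_jitter = stats.stdev   # RMS Jitter of the readings so far
print(f"mean period: {stats.mean:.4e} s, RMS Jitter: {rms_jitter:.2e} s")
```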

In this post we looked at measuring and analyzing Jitter with a frequency counter. Because of their high resolution and statistical features, counters make a good low-cost tool for measuring Period Jitter compared to a high-performance scope. Modern counters offer basic plotting features, like trend charts and histograms, which allow you to identify Jitter patterns and deterministic Jitter for deeper analysis. If you have any questions on this post feel free to email me. If you have any personal experience or insight to add to this post please use the comments section below.

For more on the 53230A counter click here


Modern counters like the 53230A also provide basic plotting features, which can be a helpful tool for analyzing Jitter. The screen shot below shows a histogram of the measurements made on the digital clock signal. In the histogram we can see two separate Gaussian distributions, which means that besides random Jitter we also have deterministic Jitter in the signal. The deterministic Jitter jumps between 10.01 MHz and 9.99 MHz. Even though our mean came out to be about 10 MHz, we can see from the histogram that the signal is typically 10 kHz off of the ideal 10 MHz frequency.
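To see how such a bimodal histogram arises, the simulation below (illustrative values, not the measured data from the screen shot) generates single-shot periods that alternate between the two frequencies, adds a little random Jitter on top, and prints a coarse text histogram showing the two distinct distributions:

```python
import random
from collections import Counter

random.seed(1)

# Simulate single-shot period readings of a clock whose frequency jumps
# between 9.99 MHz and 10.01 MHz (deterministic Jitter), plus 20 ps of
# random Jitter. These values are illustrative, not measured data.
periods = []
for i in range(10000):
    center = 1 / 9.99e6 if i % 2 else 1 / 10.01e6  # 100.1 ns or 99.9 ns
    periods.append(random.gauss(center, 20e-12))

# Coarse text histogram: 0.1 ns bins around the nominal 100 ns period.
bins = Counter(round(p * 1e9, 1) for p in periods)
for period_ns in sorted(bins):
    print(f"{period_ns:6.1f} ns | {'#' * (bins[period_ns] // 100)}")
```

The mean period still works out to roughly 100 ns (about 10 MHz), even though almost no individual reading sits at the mean; that is exactly the situation the histogram exposes and a single averaged number hides.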

As a comparison, the same signal was measured with a high-performance scope. A screen shot of the measurement can be seen below. The scope is zoomed in on the signal's rising edge with its persistence setting on. To measure the Period Jitter of the signal, the scope's histogram feature is on (histogram shown at the bottom of the screen shot in light blue). Notice that the scope's measured RMS Jitter value (circled in red) matches that of the counter, and that the histograms match as well.


Do you have any good rules of thumb on how many acquisitions should be used to represent valid data for this type of measurement? I'm doing this type of characterization on high-speed clocks with an older HP counter. I'm using the standard deviation statistic to understand my system's jitter, but I'm concerned that I might not be taking enough data to capture a Gaussian distribution.

Hey Andy, sorry for the slow reply. Are you plotting your readings, or are you just capturing the data and calculating the standard deviation? I would recommend plotting your readings; you should see a Gaussian distribution take form. If you have systematic or periodic jitter mixed in with your random jitter, then you should start seeing multiple distributions clustered together like a mountain range (like the above screen captures). If you can clearly make out Gaussian distributions in your readings, then you are probably taking enough readings. 1,000 to 10,000 readings tends to be a safe range, but it depends on the interval you are measuring at. Please note that in theory clock and oscillator noise is non-convergent, meaning that if you measure the standard deviation forever it will not settle on a value but will continue to increase. This is due to drift in clocks and oscillators. For more information I would recommend the Handbook of Frequency Stability Analysis http://tf.nist.gov/general/pdf/2220.pdf
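The convergence question in this exchange can be explored with a quick simulation. For purely random (Gaussian) jitter, the standard-deviation estimate settles toward the true value as the reading count grows; real oscillators also drift, so in practice the estimate eventually stops converging. A sketch of the random-jitter-only case:

```python
import random
import statistics

random.seed(42)
true_sigma = 50e-12  # simulate 50 ps of purely random (Gaussian) jitter

# Draw a pool of simulated period readings around a 100 ns nominal period,
# then watch how the standard-deviation estimate behaves with more data.
readings = [random.gauss(100e-9, true_sigma) for _ in range(20000)]
for n in (100, 1000, 10000, 20000):
    est = statistics.stdev(readings[:n])
    print(f"n={n:6d}  stdev = {est * 1e12:5.1f} ps "
          f"({abs(est - true_sigma) / true_sigma:.1%} from true value)")
```

With only random jitter the error shrinks roughly as 1/sqrt(n), which is why a few thousand readings is usually enough; adding a slow drift term to the simulation would show the estimate growing instead of settling.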
