Monday, November 28, 2011

Using Offset Compensation when Making Resistance Measurements

Here is a short, straightforward video showing how the offset compensation (OCOMP) feature on DMMs is used. This feature is great when making resistance measurements on active circuits or any DUT that has stray voltage. Enjoy!

Click here to check out more test and measurement video tutorials on YouTube

Click here for more info on Agilent DMMs

Friday, November 18, 2011

Sharing Reference and Timing Signals Over Long Distances

In this post we look at an easy way to share reference, timing, and trigger signals between test equipment separated by long distances. This is useful if you want to share a 10 MHz precision timebase signal with multiple instruments in various laboratories across your company's site, or if you need to route a trigger signal among multiple instruments in a distributed test setup. Common short-distance signal media like coax cable are often not viable over long distances because of issues like high signal attenuation, group delay, and outside electromagnetic interference. A good solution for routing signals over long distances is to employ fiber optic equipment.
In the following example a pulsed timing signal was sent over 10 km. This was accomplished using a fiber optic transmitter and receiver link along with 10 km of fiber optic cable. Below is a scope screen capture of the signal before modulation and transmission (in yellow) and after it was received and demodulated (in green).

For this example the optical transmitter / receiver pair that I used was the DiLink 4 GHz microwave link modules by Linear Photonics (link below). The conversion and transmission process of the optical link is not perfect, but it is close, with performance specs that are orders of magnitude better than coax or any metal-based transmission method. Fiber optic cables have low signal attenuation and they do not suffer from external signal degradation factors such as electromagnetic interference and crosstalk.
In a vacuum, light takes about 3.3 us to travel one kilometer (1000 m / 299,792,458 m/s). Because the index of refraction of most fiber optic cables is about 1.47, light travels roughly 1.5 times as fast in a vacuum as it does in the cable. This works out to about 4.9 us of latency for every kilometer, so for our example with 10 km of fiber optic cabling we would expect the delay to be about 49 us. If we zoom in a bit on our input signal and the output signal 10 km later, we can see the expected signal path delay in the scope screen capture below.
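To double-check the numbers above, here is a quick back-of-the-envelope sketch in Python. The index of refraction of 1.47 is a typical value for single-mode fiber, assumed here rather than taken from a spec for this particular cable:

```python
# Estimate one-way propagation delay through fiber optic cable.
C = 299_792_458      # speed of light in a vacuum, m/s
N_FIBER = 1.47       # typical fiber index of refraction (assumed, not measured)

def fiber_delay_us(length_km, index=N_FIBER):
    """One-way delay in microseconds for a given cable length in km."""
    speed_in_fiber = C / index                      # m/s
    return length_km * 1000 / speed_in_fiber * 1e6  # seconds -> microseconds

print(round(fiber_delay_us(1), 1))   # ~4.9 us per km
print(round(fiber_delay_us(10), 1))  # ~49.0 us for the 10 km run
```

This matches the ~49 us delay seen on the scope capture within the precision of the assumed index.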

When you need to share a reference, timing, or trigger signal across long distances fiber optic links are a great solution for signal transmission. Compared to using a metal based cabling, like coax, fiber optic links provide better signal integrity, less attenuation, and are less susceptible to outside interference. If you have used fiber optic links for test and measurement applications please share your experiences in a comment below.

Click here to check out the Linear Photonics webpage

Friday, November 11, 2011

Comprehensive Look at Sources of Error in DC Voltage Measurements Part 2

This is part 2 of a 2 part post that takes a comprehensive look at all of the factors that can lead to errors in a DC voltage measurement with a DMM and how to eliminate them so you can achieve the highest accuracy possible in your measurement. In part 2 we will cover the following topics: loading errors,  power-line noise, injected current noise, and ground loop errors. If you are a seasoned DMM measurement veteran and you feel I missed something in the following sections please add it as a comment.

Loading Errors Due to Input Resistance — Measurement loading errors occur when the resistance of the DUT is an appreciable percentage of the DMM’s own input resistance. The figure below shows this error source. To reduce the effects of loading errors, and to minimize noise pickup, see if your DMM allows you to set its input resistance to a higher value. For instance, the Agilent 34401A’s input resistance can be set from 10 MΩ to > 10 GΩ for the 100 mVdc, 1 Vdc, and 10 Vdc ranges.

Ri should be much larger than Rs or loading error will be a factor in the measurement
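To get a feel for the magnitude of this error, here is a small sketch of the standard voltage divider math. The 100 kΩ source resistance is a made-up example value:

```python
def loading_error_pct(r_source, r_input):
    """Percent error from the DMM input resistance loading the source.

    The DMM sees Vs * Ri / (Rs + Ri), so the reading is low by Rs / (Rs + Ri).
    """
    return 100.0 * r_source / (r_source + r_input)

# Hypothetical 100 kOhm source measured with the two 34401A input settings:
print(loading_error_pct(100e3, 10e6))  # ~0.99 % low with the 10 M input
print(loading_error_pct(100e3, 10e9))  # ~0.001 % low with the >10 G input
```

Switching to the high input resistance setting turns a roughly 1 % error into a negligible one for this source.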
Power-Line Noise — This type of noise is caused by the power-line voltage signal (50 Hz or 60 Hz) being coupled onto the measurement setup from the DUT, the DMM, or both. It appears as an AC ripple summed on top of the DC level you are measuring. To eliminate this common noise source, DMM designers use integrating or averaging measurement time settings that are integer multiples of the power-line period. Remember that integrating a sine wave over an integer number of its periods gives zero. This is typically called normal mode rejection, or NMR. If you set the integration time to an integer number of power-line cycles (PLCs) of the spurious input, these errors (and their harmonics) will average out to approximately zero. For instance, the Agilent 34401A provides three integration times to reject power-line frequency noise (and power-line frequency harmonics). When you apply power to the DMM, it measures the power-line frequency (50 Hz or 60 Hz) and then determines the proper integration time. The table below shows the noise rejection achieved with various configurations. For better resolution and increased noise rejection, select a longer integration time.
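The integrate-over-whole-cycles idea is easy to demonstrate numerically. The sketch below simulates a 5 V DC level with 100 mV of 60 Hz ripple (made-up example values) and averages it over an integer and a non-integer number of power-line cycles:

```python
import math

def average_reading(dc, ripple_pk, f_line, t_int, fs=1e6):
    """Simulate averaging dc + power-line ripple over integration time t_int.

    fs is the simulated sample rate; the result is the mean of all samples.
    """
    n = int(t_int * fs)
    total = sum(dc + ripple_pk * math.sin(2 * math.pi * f_line * i / fs)
                for i in range(n))
    return total / n

plc = 1 / 60  # one power-line cycle at 60 Hz, in seconds
print(average_reading(5.0, 0.1, 60, 1 * plc))    # integer PLC: ripple cancels
print(average_reading(5.0, 0.1, 60, 1.5 * plc))  # non-integer: residual error
```

Averaging over one full PLC recovers essentially 5.000 V, while averaging over 1.5 cycles leaves tens of millivolts of ripple-induced error.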

Noise Caused by Injected Current — Residual capacitances in the DMM’s power transformer cause small currents to flow from the LO terminal to earth ground. The frequency of the injected current is the power line frequency or possibly harmonics of the power line frequency. The injected current is dependent upon the power line configuration and frequency. With Connection A (see figure below), the injected current flows from the earth connection provided by the circuit to the LO terminal of the DMM, adding no noise to the measurement. However, with Connection B, the injected current flows through the resistor R, thereby adding noise to the measurement. With Connection B, larger values of R will worsen the problem.

The measurement noise caused by injected current can be significantly reduced by setting the integration time of the DMM to 1 power line cycle (PLC) or greater.

Ground Loop Error — I have done two posts on this topic and the links for them are below, the second being the more thorough one.

Understanding Ground Loop Error in Voltage Measurements

If you think I missed anything or if you have a question please leave it in a comment

Monday, November 7, 2011

Comprehensive Look at Sources of Error in DC Voltage Measurements Part 1

In this two part post we will (or at least attempt to) take a comprehensive look at all of the factors that can lead to errors in a DC voltage measurement with a DMM and how to eliminate them so you can achieve the highest accuracy possible in your measurement. In part one we will cover radio frequency interference, thermal EMF errors, noise caused by magnetic fields, and common mode rejection. If you are a seasoned DMM measurement veteran and you feel I missed something in the following sections please add it as a comment.

Radio Frequency Interference -- Most voltage-measuring instruments can generate false readings in the presence of large, high-frequency signal sources such as nearby radio and television transmitters, computer monitors, and cellular telephones, especially when the high-frequency energy is coupled onto the multimeter through the system cabling. This effect can be severe when the cable length is 1/4 or 1/2 of the interfering signal's wavelength, or an integer multiple of either. You have probably experienced this type of effect firsthand if you have ever placed a mobile phone near speaker wiring and heard bursts of noise from the speaker that were certainly not part of the intended audio experience. To reduce interference, try to minimize the exposure of the system cabling to high-frequency RF sources. You can add shielding to the cabling or use shielded cabling. If the measurement is extremely sensitive to RFI radiating from the DMM or your DUT, use a common mode choke in the system cabling, as shown in the figure below, to attenuate DMM emissions. You can often see this same EMI-reducing method used on the data cable for a computer monitor.

Thermal EMF Errors -- Thermoelectric voltages, the most common source of error in low level voltage measurements, are generated when circuit connections are made with dissimilar metals at different temperatures. Each metal-to-metal junction forms a thermocouple, which generates a voltage proportional to the junction temperature. It is a good idea to take the necessary precautions to minimize thermocouple voltages and temperature variations in low level voltage measurements. The best connections are formed using copper-to-copper crimped connections. The figure below shows common thermoelectric voltages for connections between dissimilar metals.

Agilent benchtop DMMs use copper alloy for their input connectors
Noise Caused by Magnetic Fields -- When you make measurements near magnetic fields, take precautionary steps to avoid inducing voltages in the measurement connections. Voltage can be induced by either movement of the input connection wiring in a fixed magnetic field, or by a varying magnetic field. An unshielded, poorly dressed input wire moving in the earth’s magnetic field can generate several millivolts. The varying magnetic field around the ac power line can also induce voltages up to several hundred millivolts. Be especially careful when working near conductors carrying large currents. Where possible, route cabling away from magnetic fields, which are commonly present around electric motors, generators, televisions and computer monitors. In addition, when you are operating near magnetic fields, be certain that the input wiring has proper strain relief and is tied down securely. Use twisted-pair connections to the multimeter to reduce the noise pickup loop area, or dress the wires as closely together as possible.

For more on magnetic coupling and other spurious coupling issues in measurements check out the post Ground Loops and Other Spurious Coupling Mechanisms and How to Prevent Them

Common Mode Rejection (CMR) -- Ideally, a DMM is completely isolated from earth-referenced circuits. However, there is finite resistance between the DMM’s input LO terminal and earth ground. This can cause errors when measuring low voltages that are floating relative to earth ground. Check out the post Understanding Common Mode DMM Specifications for more information on CMR.

Stay tuned for part 2 next week!

Friday, November 4, 2011

How to do an Accuracy Calculation on Agilent's 53200 Series Counters

Calculating and understanding the accuracy of Agilent’s 53200A series of universal counters is not easy and can be a source of frustration. In this post I will walk you through an example frequency accuracy calculation using Agilent’s 53230A universal counter. Each step will include an explanation of the various terms so in the future the reader can perform an accuracy calculation on any of the 53200A series of counters with any input signal. I will also refer to data sheet pages where calculations, specs, and notes can be obtained. Let’s start by setting the initial conditions of the signal we are measuring and the counter we are using.

Signal being measured: 1 MHz sine wave with 1 Vpp amplitude and 1 mV of RMS noise

Counter hardware and settings: 53230A with TCXO timebase set for frequency measurements in ‘Auto’ mode (default) with a 1 s gate time. The input amplitude range is 5 V with a trigger level of 0 V on a positive edge event. The counter was calibrated at the factory upon purchase and it has been ~90 days since the initial 30 day warm-up period (page 15 note 1 of data sheet).

The basic accuracy calculation is as follows (page 16 datasheet):
Accuracy = +/- [(k*Random Uncertainty) + Systematic Uncertainty + Timebase Uncertainty]

For a definition of random, systematic, and timebase uncertainty refer to page 16 of the data sheet.
Let’s start by calculating the random uncertainty (RU) error which is the hardest of the three. The variable ‘k’ is the standard deviation or sigma multiplier for establishing the confidence interval of a Gaussian or normal distribution, RU has a Gaussian distribution. For our calculation we will choose a ‘k’ of 3 which gives us a confidence interval of 99.7%. The equation for calculating the RU is (data sheet page 16):
RU = [1.4*(TSS^2 + TE^2)^1/2] / (RE * Gate)

TSS --> Single-shot time resolution spec found on page 19 of data sheet. For the 53230A this spec is 20 ps.

TE --> Threshold error (page 19) is the error caused by amplitude noise on the signal being measured at the trigger level point on the signal edge. The equation to calculate TE for the 5 V input range is: TE = [(500 μV)^2 + EN^2 + VX^2]^1/2 / SR
  •  The ‘500μV’ term is amplitude noise added by the counter
  • The ‘EN’ term is the amplitude noise on the signal being measured (page 20). In our initial conditions we said this was 1 mV. EN is sometimes hard to calculate so if you are working with a clean signal it is safe to assume EN is zero.
  • The ‘SR’ term is the slew rate of the signal at the counter’s set trigger point. The slew rate, or rate of change at any point on a given waveform, is found by taking the derivative of the waveform function and evaluating it at the time the set trigger amplitude occurs. Fortunately, on page 20 the data sheet provides an SR formula for a sine wave and for a square wave. These formulas are the result of taking the derivative of each waveform and using the point of maximum SR as the trigger point. Choosing a trigger point at the waveform's highest slew rate leads to better measurement accuracy. In our example we have a sine wave, which has its maximum SR at the 50% amplitude point. A sine wave with no DC offset therefore has its max slew rate at the 0 V crossing, which is why we chose a 0 V trigger point in our initial conditions. So we can use the data sheet calculation for our example: SR = 2*pi*F*V0-to-pk
    • ‘F’ is the frequency of the signal being measured.
    • ‘V0 to pk’ is the delta amplitude from 0 to peak. For our signal this is 0.5 V (half of 1 Vpp).

RE --> When the 53230A and 53220A are in ‘Recip’ mode they return the average of all the measurements within the set gate time as the measurement result. When they are in ‘Auto’ mode, which is the default mode, they use a proprietary algorithm to achieve better resolution from a set of measurements in a specific gate time compared to just simply averaging the measurements. ‘RE’ stands for Resolution Enhancement. When in ‘Auto’ mode the RE factor can be more than 1 because the resolution enhancement algorithm is being implemented. When using the 53210A, 53220A, and the 53230A in ‘Recip’ or ‘TST’ modes RE = 1.   
Let’s calculate SR, TE, and RE
SR = 2*pi*F*V0-to-pk = 2*3.1416*1e6*0.5 = 3.1416e6
TE = [(500 μV)^2 + EN^2 + VX^2]^1/2 / SR = ((500e-6^2 + 1e-3^2)^1/2) / 3.1416e6 = 3.5588e-10 (with VX taken as zero)

The data sheet on page 19 gives clear instructions on what to do for RE when Tss >> TE: we simply use the equation RE = √(FIN * Gate/16) and check it against the max RE value table shown below:
Gate time >= 1 s, RE max of 6
Gate time 100 ms, RE max of 4
Gate time 10 ms, RE max of 2
Gate time =< 1 ms, RE = 1

If your gate time falls between the values in the table, use an RE value between the table values; for instance, if your gate time is 600 ms, use an RE value of 5. Now here is where things get confusing: the data sheet does not really provide guidance for the cases Tss > TE, Tss < TE, and Tss << TE (the last being the case in our example). The safe thing to do is to use the same procedure that was given for Tss >> TE. Since the resolution enhancement algorithm yields more resolution the higher TE is, using this calculation method ensures a safe result every time. Hopefully a future version of the data sheet will provide better guidance for calculating RE. Let’s calculate the RE value for our example measurement:

 RE = √(FIN * Gate/16) = √(1e6 * 1/16) = 250, since the result of the equation is higher than the table value we use the table value so RE = 6
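The RE lookup can be condensed into a few lines. This sketch assumes the table caps and the √(FIN·Gate/16) formula quoted above; note that this simple version does not interpolate for in-between gate times the way the post suggests, it just applies the nearest tabulated cap:

```python
import math

# Data sheet RE caps: (minimum gate time in seconds, maximum RE)
RE_CAPS = [(1.0, 6), (0.1, 4), (0.01, 2)]

def resolution_enhancement(f_in, gate):
    """RE = sqrt(Fin * gate / 16), limited by the data sheet table of maximums."""
    if gate <= 1e-3:
        return 1  # gate <= 1 ms: RE = 1
    re = math.sqrt(f_in * gate / 16)
    for min_gate, cap in RE_CAPS:
        if gate >= min_gate:
            return min(re, cap)
    return min(re, 2)  # between 1 ms and 10 ms: use the smallest nontrivial cap

print(resolution_enhancement(1e6, 1.0))  # formula gives 250, capped at 6
```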

Now we have the variables we need to determine the random uncertainty in our accuracy calculation.
RU = [1.4*(TSS^2 + TE^2)^1/2] / (RE * Gate) = [1.4*((20e-12)^2 + (3.5588e-10)^2)^1/2] / (6 * 1)
RU = 8.3170e-011

Notice above that since TSS and TE are random error components, we combine them using the root sum of squares (RSS) method. Also, because they are random, the longer the gate time the lower the random uncertainty, due to more averaging.
From page 16 of the data sheet the systematic uncertainty (SU) is determined as follows:
If RE ≥ 2: 10 ps / gate (max), 2 ps / gate (typ)
If RE < 2 or REC mode (RE = 1): 100 ps / gate

We can see SU is based on the gate time and the calculated RE value. For each range of RE values there are two different specs for SU: ‘max’ (maximum) and ‘typ’ (typical). The typical spec reflects the performance of most of the units tested; the maximum is the warranted spec of the counter. For our calculation we will use the maximum:
SU = 10 ps / gate = 10e-12 / 1 = 10e-12

The last term we need to calculate is the Timebase Uncertainty (TU). On page 15 you will find the TU equation, the various time base specs, and four important notes. Understanding time base specs can be a little tricky. It is important to note that crystal oscillators are as much mechanical devices as they are electrical devices. That is why factors such as temperature, loss of power, and movement have such a dramatic effect on time base uncertainty. The TU calculation is:
TU = ( Aging + Temperature + Calibration Uncertainty )

The following is an overview of each of the three parts that make up the TU calculation:
Aging --> The 53200A series counters must be powered on for 30 days after you receive one from the factory before the time base aging specs take effect. This settling time is required by the physics of the crystal oscillator. The aging specs apply from the date of the counter’s last calibration. Since the 53230A in this example has been running about 90 days past the initial settling time and we have the TCXO timebase, our aging spec is +/- 0.6 ppm, calculated by multiplying the 30-day aging spec by 3 (for 90 days). Use the 30-day aging spec times the number of months since calibration out to about 5 months, at which point you will want to switch to the 1-year aging spec since it will be the lesser of the two. Notice the second half of note 1 on page 15: after the first year you use half of the 1-year and 30-day aging specs.

Temperature --> The first spec, “0 °C to 55 °C relative to 25 °C,” is used if the counter was operated at temperatures more than +/- 5 °C from the temperature at which it was calibrated (ideally 25 °C). Since we are not sure what temperatures our example counter experienced during shipping, we will use this spec in our calculation. The TCAL spec can be ignored for our accuracy calculation since it is included in the aging spec.

Calibration Uncertainty --> This term is only used if the counter still has its factory calibration. It was added because after the counter is shipped from the factory we do not know what kind of handling it may experience during shipping; for instance, the package may be dropped, which can affect the calibration. Once the counter has been calibrated again this term can be ignored. Since in our example the factory was the last place the 53230A was calibrated, we will use this spec. The 53200A series counters should be calibrated onsite for best performance; if you send a 53200A counter off site for calibration, you should add the “Initial factory calibration” spec into any accuracy calculation.

We can now calculate the TU:

TU = (Aging + Temp + CU) = 0.6 ppm + 1 ppm + 0.5 ppm = 6e-7 + 1e-6 + 5e-7 = 2.1e-6

Finally, we now have all the information and data needed to calculate the basic accuracy of our example 1 MHz signal:

Accuracy = +/- [(k*RU) + SU + TU] = (3 * 8.317e-011) + 10e-12 + 2.1e-6 = 2.10025951e-6 or in parts and rounded 2.1003 ppm

Since the error calculation is in parts, the result 2.1003 ppm means that if our signal were exactly 1 MHz, the counter would output a reading between 999,997.89974 Hz and 1,000,002.10026 Hz as the measured frequency.
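Pulling the whole walkthrough together, here is the complete example calculation as one Python sketch. All spec values are the ones quoted above from the data sheet, and VX is taken as zero as in the worked numbers:

```python
import math

# Example: 1 MHz sine, 1 Vpp, 1 mV RMS noise, 53230A with TCXO,
# 1 s gate, 'Auto' mode, factory calibration ~90 days ago.
k      = 3        # sigma multiplier for a 99.7 % confidence interval
gate   = 1.0      # s
f_in   = 1e6      # Hz
v0_pk  = 0.5      # V, zero-to-peak amplitude (half of 1 Vpp)
e_n    = 1e-3     # V RMS noise on the input signal
t_ss   = 20e-12   # s, single-shot time resolution spec
re     = 6        # resolution enhancement, capped by the data sheet table

sr = 2 * math.pi * f_in * v0_pk                      # slew rate at the 0 V crossing
te = math.sqrt(500e-6**2 + e_n**2) / sr              # threshold error (VX = 0)
ru = 1.4 * math.sqrt(t_ss**2 + te**2) / (re * gate)  # random uncertainty
su = 10e-12 / gate                                   # systematic uncertainty (max spec)
tu = 0.6e-6 + 1.0e-6 + 0.5e-6                        # aging + temperature + cal uncertainty

accuracy = k * ru + su + tu
print(f"accuracy = {accuracy:.8e}")  # ~2.1003 ppm
print(f"reading window: {f_in * (1 - accuracy):.5f} to {f_in * (1 + accuracy):.5f} Hz")
```

As the notes below point out, the timebase term dominates: dropping RU and SU entirely changes the result only in the fourth decimal place of the ppm figure.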

Important Accuracy Notes:
Notice in our example that the timebase error is multiple orders of magnitude larger than the other error components. Even with the OCXO option this will still be true. That means in the future, if you are not using a highly stable external time base reference, you can use the TU alone as a very close approximation of the counter’s measurement accuracy.

The accuracy of any of the 53200A counters can be increased by multiple orders of magnitude by using a rubidium or GPS based external frequency standard.

Better measurement accuracy is achieved by calibrating the 53200A series counter onsite after the initial 30-day warm-up / settling period, and by carrying out all future calibrations onsite.

Triggering at the amplitude point of maximum slew rate on the signal you are measuring provides better measurement accuracy.