Friday, December 16, 2011

Simulating Power Transients for Testing ECUs

Electronic control units (ECUs) used in automotive and aerospace/defense applications need to be immune to the harsh power systems in which they operate. Power system surges and drop-outs are common, so you need to thoroughly validate your ECU to assure proper operation. To assist ECU designers, standard ISO test specifications have been developed that replicate the power transients seen in automotive applications. These test specifications are rigorous, and the test equipment required to generate these transients is specialized and expensive. As a result, this equipment typically remains in the quality control (QC) lab, and it may not be accessible to the design engineers who need it most.

In this post we will look at how modern high-performance power supplies and their arbitrary waveform (arb) generation capability provide an easy-to-use and capable platform for power transient testing of ECUs. To do this we will generate two example waveforms common in the automotive industry. The focus of each example will be ease of use (no code) and the power supply's arb performance. To generate the waveforms we will use the N6705B DC Power Analyzer. The N6705B is a modular high-performance power supply with up to 4 supply outputs and over 20 modules to choose from. Outputs can be put in series or parallel for higher voltage and current needs. For creating power arbs of 50 W and above, the N675xA series of modules is the best choice for ECU power transient testing. The N675xA series can generate arbs with edge rates of ~33 V/ms into a full resistive load or into capacitive loads up to 680 uF.

The first figure below shows a power supply reset test pulse train that is commonly used for ECU test. In this case, the N6752A module is used to create a simple pulse-train using a sequence of pulses (created right from the front panel). The device under test (DUT) was a load of 100 Ω in parallel with 10 μF. A close up of the final pulse shows a rise time of 553 μs (second figure below). Fall time (not shown) was measured to be 206 μs. Rise and fall times of approximately 1 ms are commonly called for in these types of tests. 



Next let’s generate a transient waveform for engine crank immunity testing using a high performance power supply like the N6705B. We will use the Starting Profile waveform in the ISO 16750-2 specification, which is pictured below.


At first glance this waveform seems complex, but it can really be divided into four common waveforms: three ramps and one repeating sine wave. Using the N6705B and its built-in waveforms, along with its waveform sequencing capability, we can easily build the engine crank waveform. Below are some screen shots (click to enlarge) that show building the four waveforms and sequencing them together, all from the N6705B's front panel.


And finally below we get to the resulting engine crank output waveform into a load of 100 Ω in parallel with 10 μF.


If you need to capture and recreate, or simply create, complex custom waveforms, this can be done fairly easily too. Typically you capture a waveform on a scope or generate it using software like Matlab and then transfer it to the power supply via a CSV file, using either a remote connection or a USB memory stick. In some cases the power supply may have its own waveform editing software. For instance, the N6705B has accompanying software (model number 14585A) that provides waveform editing as well as other measurement features.
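If you do want to script the waveform creation yourself, a minimal Matlab sketch along these lines can stitch a crank-style profile together from ramp and sine segments and write it out as a CSV file. The voltage levels, durations, ripple frequency, and two-column time/voltage file layout below are illustrative placeholders only, not values taken from ISO 16750-2 or the format expected by the 14585A software, so check them against your own requirements.

fs   = 10e3;                              % arb point rate (placeholder)
v0   = 12;  vCrank = 6;                   % nominal and crank voltages (placeholders)
down = linspace(v0, vCrank, round(0.005*fs));   % 5 ms ramp down to the crank level
tOsc = (0:round(1*fs)-1)/fs;                    % 1 s of cranking
osc  = vCrank + 1*sin(2*pi*2*tOsc);             % 2 Hz, 1 V ripple during cranking (placeholder)
up   = linspace(vCrank, v0, round(0.1*fs));     % 100 ms recovery ramp back to nominal
v    = [down osc up];                           % complete profile
t    = (0:numel(v)-1)'/fs;
csvwrite('crank_profile.csv', [t v']);          % time/voltage pairs for later upload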

In this post we looked at how modern power supplies like the N6705B can be used to generate complex power transients for ECU testing. They provide an easy-to-use power transient testing alternative for design engineers who can't easily access expensive test setups in the quality control lab or who want to avoid expensive back-and-forth trips to compliance testing labs. If you have any personal insights or comments you want to add, please use the comments section below.

Friday, December 9, 2011

Digitizing with a DMM

The following video demonstrates how to use one of Agilent's high performance DMMs as a low frequency digitizer and plot the result without writing any code. This can be useful in applications like DC-to-AC inverter distortion analysis, where you want to capture the resulting AC voltage or current waveform from the inverter output and analyze its distortion in the time or frequency domain.


The video demonstrates collecting the measurement data from memory using the DMM's web interface and then transferring it to Excel by cutting and pasting. Another easy way to do this without any code is by using Agilent's free Command Expert software. Command Expert allows you to create an Excel spreadsheet that will connect to the DMM, configure the measurement, execute it, collect the measurement data, and plot it. Click here to check out my post on Command Expert.

Click here to go to the 34411A DMM product page

Monday, November 28, 2011

Using Offset Compensation when Making Resistance Measurements

Here is a short, straightforward video showing how the offset compensation (OCOMP) feature on DMMs is used. This feature is great when making resistance measurements on active circuits or any DUT that has stray voltage. Enjoy!



Click here to check out more test and measurement video tutorials on YouTube

Click here for more info on Agilent DMMs

Friday, November 18, 2011

Sharing Reference and Timing Signals Over Long Distances

In this post we look at an easy way to share reference, timing, and trigger signals between test equipment separated by long distances. This is useful if you want to share a 10 MHz precision timebase signal with multiple instruments in various laboratories across your company's site, or if you need to route a trigger signal among multiple instruments in a distributed test setup. Common short-distance signal media like coax cable are often not viable for long distances because of issues like high signal attenuation, group delay, and outside electromagnetic interference. A good solution for routing signals over long distances is to employ fiber optic equipment.
In the following example a pulsed timing signal was sent over 10 km. This was accomplished using a fiber optic transmitter and receiver link along with 10 km of fiber optic cable. Below is a scope screen capture of the signal before modulation and transmission (in yellow) and after it was received and demodulated (in green).


For this example the optic transmitter / receiver pair I used was a set of DiLink 4 GHz microwave link modules from Linear Photonics (link below). The conversion and transmission process of the optic link is not perfect, but it's close, with performance specs that are orders of magnitude better than using coax or any metal-based transmission method. Fiber optic cables have low signal attenuation and they do not suffer from external signal degradation factors such as electromagnetic interference and crosstalk.
In a vacuum, light takes about 3.3 μs to travel one kilometer (1000 m / 299,792,458 m/s). Because the index of refraction of most fiber optic cables is about 1.5, light travels about 1.5 times as fast in a vacuum as it does in the cable. This works out to about 4.9 μs of latency for every kilometer, and for our example with 10 km of fiber optic cabling we would expect the delay to be about 49 μs. If we zoom in a bit on our input signal and, 10 km later, the output signal, we can see the expected signal path delay in the scope screen capture below.
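As a quick sanity check, the expected delay is easy to compute directly. A minimal Matlab sketch, where the group index of 1.47 is an assumed typical value for silica fiber rather than a measured number:

c      = 299792458;            % speed of light in a vacuum (m/s)
n      = 1.47;                 % assumed group index of typical silica fiber
len_km = 10;                   % cable length in km
delay  = len_km*1000 * n / c   % ~49 us for 10 km of fiber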


When you need to share a reference, timing, or trigger signal across long distances, fiber optic links are a great solution for signal transmission. Compared to metal-based cabling like coax, fiber optic links provide better signal integrity, less attenuation, and less susceptibility to outside interference. If you have used fiber optic links for test and measurement applications, please share your experiences in a comment below.

Click here to check out the Linear Photonics webpage

Friday, November 11, 2011

Comprehensive Look at Sources of Error in DC Voltage Measurements Part 2

This is part 2 of a 2 part post that takes a comprehensive look at all of the factors that can lead to errors in a DC voltage measurement with a DMM and how to eliminate them so you can achieve the highest accuracy possible in your measurement. In part 2 we will cover the following topics: loading errors,  power-line noise, injected current noise, and ground loop errors. If you are a seasoned DMM measurement veteran and you feel I missed something in the following sections please add it as a comment.

Loading Errors Due to Input Resistance — Measurement loading errors occur when the resistance of the DUT is an appreciable percentage of the DMM’s own input resistance. The figure below shows this error source. To reduce the effects of loading errors, and to minimize noise pickup, see if your DMM allows you to set its input resistance to a higher value. For instance, the Agilent 34401A’s input resistance can be set from 10 MΩ to > 10 GΩ for the 100 mVdc, 1 Vdc, and 10 Vdc ranges.

Ri should be much larger than Rs or loading error will be a factor in the measurement
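To get a feel for the size of this error, a quick calculation using the figure's notation looks like the sketch below. The resistor values are only examples, not values from any particular DUT or DMM.

Rs = 10e3;                     % DUT (source) resistance, example value
Ri = 10e6;                     % DMM input resistance, example value
errPct = 100 * Rs/(Rs + Ri)    % loading error, ~0.1% of reading
% Raising Ri to 10 GOhm drops the same error to roughly 0.0001%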
Power-Line Noise — This type of noise is caused by the power line voltage signal (50 Hz or 60 Hz) being coupled onto the measurement setup from the DUT, the DMM, or both. This noise appears as an AC ripple summed on top of the DC level you are measuring. To eliminate this common noise source, DMM designers use integrating or averaging measurement time settings that are integer multiples of the power line noise's period. Remember, if you integrate a sine wave over a whole number of periods you get zero. This is typically called normal mode rejection or NMR. If you set the integration time to an integer number of power line cycles (PLCs) of the spurious input, these errors (and their harmonics) will average out to approximately zero. For instance, the Agilent 34401A provides three integration times to reject power-line frequency noise (and power-line frequency harmonics). When you apply power to the DMM, it measures the power-line frequency (50 Hz or 60 Hz) and then determines the proper integration time. The table below shows the noise rejection achieved with various configurations. For better resolution and increased noise rejection, select a longer integration time.
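The cancellation is easy to see numerically. The sketch below averages a DC level plus 60 Hz ripple over exactly one power line cycle and then over a non-integer number of cycles; all of the values are illustrative, not tied to any particular DMM.

fs   = 1e6;                          % simulation sample rate
f    = 60;                           % power line frequency
dc   = 1;  ripple = 0.1;             % 1 V DC level with 100 mV of line ripple
v    = @(t) dc + ripple*sin(2*pi*f*t);
t1   = (0:round(fs/f)-1)/fs;         % exactly 1 PLC worth of samples
t2   = (0:round(0.6*fs/f)-1)/fs;     % 0.6 PLC worth of samples
avg1 = mean(v(t1))                   % ~1.000, the ripple averages out
avg2 = mean(v(t2))                   % noticeably offset from 1.000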



Noise Caused by Injected Current — Residual capacitances in the DMM’s power transformer cause small currents to flow from the LO terminal to earth ground. The frequency of the injected current is the power line frequency or possibly harmonics of the power line frequency. The injected current is dependent upon the power line configuration and frequency. With Connection A (see figure below), the injected current flows from the earth connection provided by the circuit to the LO terminal of the DMM, adding no noise to the measurement. However, with Connection B, the injected current flows through the resistor R, thereby adding noise to the measurement. With Connection B, larger values of R will worsen the problem.


The measurement noise caused by injected current can be significantly reduced by setting the integration time of the DMM to 1 power line cycle (PLC) or greater.

Ground Loop Error — I have done two posts on this topic and the links for them are below, the second being the more thorough one.

Understanding Ground Loop Error in Voltage Measurements


If you think I missed anything or if you have a question please leave it in a comment


Monday, November 7, 2011

Comprehensive Look at Sources of Error in DC Voltage Measurements Part 1

In this two part post we will (or at least attempt to) take a comprehensive look at all of the factors that can lead to errors in a DC voltage measurement with a DMM and how to eliminate them so you can achieve the highest accuracy possible in your measurement. In part one we will cover radio frequency interference, thermal EMF errors, noise caused by magnetic fields, and common mode rejection. If you are a seasoned DMM measurement veteran and you feel I missed something in the following sections please add it as a comment.


Radio Frequency Interference -- Most voltage-measuring instruments can generate false readings in the presence of large, high-frequency signal sources such as nearby radio and television transmitters, computer monitors, and cellular telephones, especially when the high-frequency energy is coupled to the multimeter through the system cabling. This effect can be severe when the cable length is 1/4, 1/2, or any integer multiple of the wavelength of the interfering signal. You have probably experienced this type of effect first hand if you have ever placed a mobile phone near speaker wiring and heard bursts of noise from the speaker that were certainly not part of the intended audio experience. To reduce interference, try to minimize the exposure of the system cabling to high-frequency RF sources. You can add shielding to the cabling or use shielded cabling. If the measurement is extremely sensitive to RFI radiating from the DMM or your DUT, use a common mode choke in the system cabling, as shown in the figure below, to attenuate DMM emissions. You can often see this same EMI-reducing method being used on the data cable for your computer monitor.


Thermal EMF Errors -- Thermoelectric voltages, the most common source of error in low level voltage measurements, are generated when circuit connections are made with dissimilar metals at different temperatures. Each metal-to-metal junction forms a thermocouple, which generates a voltage proportional to the junction temperature. It is a good idea to take the necessary precautions to minimize thermocouple voltages and temperature variations in low level voltage measurements. The best connections are formed using copper-to-copper crimped connections. The figure below shows common thermoelectric voltages for connections between dissimilar metals.

Agilent benchtop DMMs use copper alloy for their input connectors
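As a rough feel for the magnitude, the error from a single dissimilar-metal junction is simply its thermoelectric coefficient times the temperature difference across it. The coefficient and temperature below are assumed ballpark values for illustration only, not numbers taken from the figure above.

k_emf  = 3e-6;          % assumed ~3 uV/degC for a copper-to-solder junction (ballpark)
deltaT = 5;             % assumed 5 degC temperature difference at the junction
v_err  = k_emf*deltaT   % ~15 uV of thermoelectric error added to the reading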
Noise Caused by Magnetic Fields -- When you make measurements near magnetic fields, take precautionary steps to avoid inducing voltages in the measurement connections. Voltage can be induced by either movement of the input connection wiring in a fixed magnetic field, or by a varying magnetic field. An unshielded, poorly dressed input wire moving in the earth’s magnetic field can generate several millivolts. The varying magnetic field around the ac power line can also induce voltages up to several hundred millivolts. Be especially careful when working near conductors carrying large currents. Where possible, route cabling away from magnetic fields, which are commonly present around electric motors, generators, televisions and computer monitors. In addition, when you are operating near magnetic fields, be certain that the input wiring has proper strain relief and is tied down securely. Use twisted-pair connections to the multimeter to reduce the noise pickup loop area, or dress the wires as closely together as possible.

For more on magnetic coupling and other spurious coupling issues in measurements check out the post Ground Loops and Other Spurious Coupling Mechanisms and How to Prevent Them

Common Mode Rejection (CMR) -- Ideally, a DMM is completely isolated from earth-referenced circuits. However, there is finite resistance between the DMM’s input LO terminal and earth ground. This can cause errors when measuring low voltages that are floating relative to earth ground. Check out the post Understanding Common Mode DMM Specifications for more information on CMR.

Stay tuned for part 2 next week!


Friday, November 4, 2011

How to do an Accuracy Calculation on Agilent's 53200 Series Counters


Calculating and understanding the accuracy of Agilent’s 53200A series of universal counters is not easy and can be a source of frustration. In this post I will walk you through an example frequency accuracy calculation using Agilent’s 53230A universal counter. Each step will include an explanation of the various terms so in the future the reader can perform an accuracy calculation on any of the 53200A series of counters with any input signal. I will also refer to data sheet pages where calculations, specs, and notes can be obtained. Let’s start by setting the initial conditions of the signal we are measuring and the counter we are using.

Signal being measured: 1 MHz sine wave with 1 Vpp amplitude and 1 mV of RMS noise

Counter hardware and settings: 53230A with TCXO timebase set for frequency measurements in ‘Auto’ mode (default) with a 1 s gate time. The input amplitude range is 5 V with a trigger level of 0 V on a positive edge event. The counter was calibrated at the factory upon purchase and it has been ~90 days since the initial 30 day warm-up period (page 15 note 1 of data sheet).


The basic accuracy calculation is as follows (page 16 datasheet):
Accuracy = +/- [(k*Random Uncertainty) + Systematic Uncertainty + Timebase Uncertainty]

For a definition of random, systematic, and timebase uncertainty refer to page 16 of the data sheet.
Let’s start by calculating the random uncertainty (RU) error, which is the hardest of the three. The variable ‘k’ is the standard deviation or sigma multiplier for establishing the confidence interval of a Gaussian or normal distribution; RU has a Gaussian distribution. For our calculation we will choose a ‘k’ of 3, which gives us a confidence interval of 99.7%. The equation for calculating the RU is (data sheet page 16):
RU = [1.4*(TSS^2 + TE^2)^(1/2)] / (RE*gate)

TSS --> Single-shot time resolution spec found on page 19 of data sheet. For the 53230A this spec is 20 ps.

TE --> Threshold error (page 19) is the amplitude noise on the signal being measured that causes error at the trigger level point on the signal edge. The equation to calculate TE for the 5 V input range is: TE = [(500 μV)^2 + EN^2 + VX^2]^(1/2) / SR
  •  The ‘500μV’ term is amplitude noise added by the counter
  • The ‘EN’ term is the amplitude noise on the signal being measured (page 20). In our initial conditions we said this was 1 mV. EN is sometimes hard to calculate so if you are working with a clean signal it is safe to assume EN is zero.
  • The ‘SR’ term is the slew rate of the signal at the counter’s set trigger point. The slew rate, or rate of change at any point on a given waveform, is found by taking the derivative of the waveform function and then solving it at the time when the set trigger amplitude occurs. Fortunately, on page 20 the data sheet provides an SR formula for a sine wave and a square wave. The formulas are the result of taking the derivative of each waveform and using the point of max SR as the trigger point. Choosing a trigger point on the waveform that has the highest slew rate leads to better measurement accuracy. In our example we have a sine wave, which has its max SR at the 50% amplitude point. A sine wave with no DC offset therefore has its max slew rate at the 0 V crossing, and that is why we chose a 0 V trigger point in our initial conditions. So we can use the data sheet calculation for our example: SR = 2*pi*F*V0 to pk
    • ‘F’ is the frequency of the signal being measured.
    • ‘V0 to pk’ is the delta amplitude from 0 to peak. For our signal this is 0.5 V (half of 1 Vpp).

RE --> When the 53230A and 53220A are in ‘Recip’ mode they return the average of all the measurements within the set gate time as the measurement result. When they are in ‘Auto’ mode, which is the default mode, they use a proprietary algorithm to achieve better resolution from a set of measurements in a specific gate time compared to just simply averaging the measurements. ‘RE’ stands for Resolution Enhancement. When in ‘Auto’ mode the RE factor can be more than 1 because the resolution enhancement algorithm is being implemented. When using the 53210A, 53220A, and the 53230A in ‘Recip’ or ‘TST’ modes RE = 1.   
Let’s calculate SR, TE, and RE
SR = 2*pi*F*V0 to pk = 2*3.1416*1e6*0.5 = 3.1416e6
TE = [(500 μV)^2 + EN^2 + VX^2]^(1/2) / SR = ((500e-6^2 + 1e-3^2)^(1/2)) / 3.1416e6 = 3.5588e-10 (VX is taken as zero here)

The data sheet on page 19 gives clear instructions on what to do for RE when Tss >> TE: we simply use the equation RE = √(FIN * Gate/16) and check it against the max RE value table shown below:
Gate time >= 1 s, RE max of 6
Gate time 100 ms, RE max of 4
Gate time 10 ms, RE max of 2
Gate time =< 1 ms, RE = 1

If your gate time falls somewhere in between the gate time values in the table, just use an RE value that falls between the table values; for instance, if your gate time is 600 ms use an RE value of 5. Now here is where things get confusing, since the data sheet does not really provide much guidance on what to do for the cases Tss > TE, Tss < TE, and Tss << TE (which is the case we have in our example). The safe thing to do here is to just use the same procedure that was used for Tss >> TE. Since the resolution enhancement algorithm yields increased resolution the larger TE is, using the above calculation method ensures a safe result every time. Hopefully a future version of the data sheet will provide better guidance for calculating RE. Let’s calculate the RE value for our example measurement:

 RE = √(FIN * Gate/16) = √(1e6 * 1/16) = 250. Since the result of the equation is higher than the table value, we use the table value, so RE = 6

Now we have the variables we need to determine the random uncertainty in our accuracy calculation.
RU = [1.4*(TSS^2 + TE^2)^(1/2)] / (RE*gate) = [1.4*(20e-12^2 + 3.5588e-10^2)^(1/2)] / (6*1)
RU = 8.3170e-11

Notice above that, since TSS and TE are random error components, we apply the root sum of squares (RSS) method to them. Also, since they are random, the longer the gate time the lower the random uncertainty, due to more averaging.
From page 16 of the data sheet the SU is determined as follows:
If RE ≥ 2: 10 ps / gate (max), 2 ps / gate (typ)
If RE < 2 or REC mode (RE = 1): 100 ps / gate

We can see SU is based on the gate time and the calculated RE value. For each range of RE values there are two different ways to calculate SU, ‘max’ or maximum and ‘typ’ or typical. The typical spec reflects the performance of most of the units tested. The maximum is the warranted spec of the counter. For our calculation we will use the maximum:
SU = 10 ps / gate = 10e-12 / 1 = 10e-12

The last term we need to calculate is the Timebase Uncertainty (TU). On page 15 you will find the TU equation, the various time base specs, and four important notes. Understanding time base specs can be a little tricky. It is important to note that crystal oscillators are as much a mechanical device as they are an electrical device. That is why factors such as temperature, loss of power, and movement have such a dramatic effect on time base uncertainty. The TU calculation is:
TU = ( Aging + Temperature + Calibration Uncertainty )

The following is an overview of each of the three parts that make up the TU calculation:
Aging --> The 53200A series of counters must be on for 30 days after you receive it from the factory before the time base aging specs take effect. This is a settling time required by the physics of the crystal oscillator. The aging specs apply from the date of the counter’s last calibration. Since the 53230A in this example has been running about 90 days past the initial settling time and we have the TCXO timebase, our aging spec is +/- 0.6 ppm. This was calculated by multiplying the 30-day aging spec by 3 (90 days). Use the 30-day aging spec times the number of months since calibration for up to about 5 months, at which point you will want to switch to the 1-year aging spec since it will be the lesser of the two. Notice the second half of note one on page 15: after the first year you use half of the 1-year and 30-day aging specs.

Temperature --> The first spec, “0 °C to 55 °C relative to 25 °C”, is used if the counter was operated at temperatures more than +/- 5 °C from the temperature it was calibrated at (25 °C ideally). Since we are not sure what temperatures our example counter experienced during shipping, we will use this in our calculation. The TCAL spec can be ignored for our accuracy calculation since it is included in the aging spec.

Calibration Uncertainty --> This term is only used if the counter still has the factory calibration. It was added because after the counter is shipped from the factory we do not know what kind of handling the counter may have experienced during shipping; for instance, the package may have been dropped, which can affect the calibration. Once the counter has been calibrated again this term can be ignored. Since in our example the factory was the last place the 53230A was calibrated, we will use this spec. The 53200A series of counters should be calibrated onsite for best performance. If you send the 53200A counter off site for calibration, you should add the “Initial factory calibration” spec into any accuracy calculation.
      
We can now calculate the TU:

TU = (Aging + Temp + CU) = 0.6 ppm + 1 ppm + 0.5 ppm = 6e-7 + 1e-6 + 5e-7 = 2.1e-6
      
Finally, we now have all the information and data needed to calculate the basic accuracy of our example 1 MHz signal:

Accuracy = +/- [(k*RU) + SU + TU] = +/- [(3 * 8.317e-11) + 10e-12 + 2.1e-6] = +/- 2.10025951e-6, or in parts and rounded, 2.1003 ppm

Since the error calculation is in parts, the result 2.10026 ppm means that if our signal were exactly 1 MHz the counter could output a reading anywhere between 1,000,002.10026 Hz and 999,997.89974 Hz as the measured frequency.
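For convenience, the whole calculation above can be captured in a short Matlab script. This is just a sketch of the example worked in this post; the spec values are the ones quoted in the steps above and should be re-checked against the data sheet for your particular configuration.

% 53230A frequency accuracy example: 1 MHz sine, 1 Vpp, 1 mV RMS noise, 1 s gate
F     = 1e6;        Vpk  = 0.5;        % signal frequency and 0-to-peak amplitude
EN    = 1e-3;       gate = 1;          % external noise (RMS) and gate time
TSS   = 20e-12;     k    = 3;          % single-shot resolution spec, 99.7% confidence
SR    = 2*pi*F*Vpk;                    % slew rate at the 0 V crossing
TE    = sqrt((500e-6)^2 + EN^2)/SR;    % threshold error (VX taken as zero)
RE    = min(sqrt(F*gate/16), 6);       % resolution enhancement, capped at the table max of 6 for a 1 s gate
RU    = 1.4*sqrt(TSS^2 + TE^2)/(RE*gate);
SU    = 10e-12/gate;                   % systematic uncertainty, max spec (RE >= 2)
TU    = 0.6e-6 + 1e-6 + 0.5e-6;        % aging + temperature + factory cal uncertainty
acc   = k*RU + SU + TU                 % ~2.1e-6, i.e. about 2.1 ppm
f_err = F*acc                          % ~2.1 Hz of possible error at 1 MHz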

Important Accuracy Notes:
Notice that for our example the timebase error is multiple orders of magnitude higher than the other error components. Even with the OCXO option this will still be true. That means in the future, if you are not using a highly stable external time base reference, you can just use the TU as a very close approximation of the counter’s measurement accuracy.

The accuracy of any of the 53200A counters can be increased by multiple orders of magnitude by using a rubidium or GPS based external frequency standard.

Better measurement accuracy is achieved by calibrating the 53200A series counter onsite after the initial 30-day warm-up / settling period and by carrying out all future calibrations onsite.

Triggering at the amplitude point of maximum slew rate on the signal you are measuring provides better measurement accuracy.

Monday, October 31, 2011

Agilent Releases "Command Expert" Software

Agilent recently released a free software package called "Command Expert." If you write software that controls test and measurement instruments, Command Expert will make your life easier. For non-software writers who need to create remote instrument control sequences, it will also make your life easier. One of the best things about Command Expert is that it's free. The bullets below outline Command Expert's top four headlines.
  • Enables fast prototyping and development of instrument command sequences
  • Provides seamless integration with Excel, LabVIEW, Visual Studio, VEE and SystemVue
  • Makes it easy to find, use, and view full documentation for SCPI and IVI-COM commands
  • Simplifies the process of retrieving measurement data from instruments
Command Expert allows you to easily build instrument routines or sequences using SCPI and IVI commands. You can then test the sequences and read measurement data back right into the Command Expert interface. Once you have your finished sequence you can save it to run again later, or you can export it to your current instrument control programming project. Command Expert will wrap the sequence up into a function that can be directly exported into your programming project (it generates the code for you). Supported development environments include Visual Studio, VEE, LabVIEW, Excel, and SystemVue. You start out by adding your connected instruments to the instrument pane. From there you can select instruments to access their SCPI or IVI documentation. The documentation UI in Command Expert makes it really easy to search, find, and get info on the selected instrument's command set. Once you begin adding a command to your sequence, Command Expert has command auto-complete capabilities to make it easy. It can also quickly guide you through adding parameters to commands. Below is a screen shot of Command Expert's UI. It shows a user adding a command to their sequence for a scope that they named "InfiniiVision" (click to enlarge).


You can also insert wait statements into sequences. Once your sequence is built you can test and fine tune it using Command Expert. When you are done you can save your sequence or export it to a programming environment. 

One of my favorite features of Command Expert is how it integrates with Excel. It automatically plugs into Excel, allowing you to create stand-alone Excel sequences. This is an awesome feature for non-programmers or for programmers who just want to set up a test routine quickly. Once you create a sequence using Command Expert you can export it to an Excel spreadsheet where that sequence can run on its own with no coding required (Command Expert must be installed on the PC). In an Excel-based sequence you can set up cells on the spreadsheet as input parameters for defining instrument ranges or settings, and you can use plotting features for displaying measured data. Below is an example using a scope:


Once again, Command Expert is totally free, so I encourage you to check it out if you are an instrument programmer or if you are looking for a way to run instrument routines without buying expensive software packages.


Monday, October 24, 2011

Video Demo Using Agilent iOS Programming Tools with Xcode to Control LXI Instruments

In the video below I demonstrate how to use the Agilent iOS IO programming tools with Apple's Xcode to connect to and query an Ethernet-connected instrument. The tools abstract low level network sockets, data buffering, and error handling to make the creation of instrument control apps faster and easier. The Agilent iOS IO programming tools are free to download along with the example app used in the video. If the video screen is too small to view in the blog, just click on it to view it directly on YouTube, where more sizing options are available.


The link below will take you to the webpage where you can download the programming tools demonstrated in the video. On the website you can also find programming tools for Android.

Smart device programming tools website

Monday, October 17, 2011

Matlab Program for Simulating ECG Waveforms with an Arbitrary Waveform Generator

Because of the popularity of my 2/21/11 post Simulating Complex ECG Patterns with an Arbitrary Waveform Generator, I decided to create a Matlab program that allows you to create and customize electrocardiogram (ECG) waveforms that can be easily transferred to an arbitrary waveform generator (AWG). The program is called "ECG Waveform Simulator" and can be downloaded free of charge from Matlab Central (see link below). The program allows you to customize the "typical" ECG waveform by letting you modify the amplitude, duration, and in some cases interval of the standard ECG waveform parts, including the P wave, Q wave, R wave, S wave, T wave, and U wave. The program allows you to directly transfer the ECG waveform you created to a 33521A or 33522A AWG via an Ethernet connection or to store it in a CSV file that can later be uploaded to an AWG. This program, combined with an AWG, provides engineers involved with designing and testing ECG monitoring and measurement equipment a simple and flexible test solution.
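If you want a feel for how such a waveform can be built programmatically, a heavily simplified Matlab sketch is shown below. It approximates the P, QRS, and T features as Gaussian bumps and writes the result to a CSV file; the amplitudes, widths, and timing are arbitrary illustrative values and are not taken from the ECG Waveform Simulator program itself.

% Very simplified single-beat ECG approximation built from Gaussian bumps
fs = 1e3;  t = (0:1/fs:1)';                         % 1 s beat at 1 kSa/s
g  = @(a,mu,sig) a*exp(-((t-mu).^2)/(2*sig^2));     % Gaussian bump helper
ecg = g(0.15,0.2,0.02) ...                          % P wave
    - g(0.10,0.36,0.01) + g(1.0,0.40,0.01) - g(0.20,0.44,0.01) ... % Q, R, S
    + g(0.30,0.65,0.04);                            % T wave
csvwrite('ecg_beat.csv', [t ecg]);                  % for later upload to an AWG
plot(t, ecg)                                        % quick visual check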

One huge benefit of using modern AWGs, like the 33521A and 33522A, for simulating ECG waveforms is a feature typically referred to as arbitrary waveform (arb) sequencing. Arb sequencing allows the user to seamlessly create a complex waveform pattern by combining multiple arbs stored on the AWG. It is analogous to creating a playlist on your MP3 player using various songs stored in the MP3 player's memory. Sequencing gives you the ability to create complex ECG patterns by combining multiple ECG waveforms. Below is a screen shot from a scope showing an example of three different ECG waveforms that I created using the ECG Waveform Simulator. The three waveforms were output using the sequencing capability on the 33522A AWG (notice the second waveform is played twice).


In the above sequence the first ECG waveform is played once, the second is played twice, and the third is played once. Of course we could have just as easily played the first waveform 50 times, the second 10 times, and the third 101 times. For more information on creating arb sequences with the 33521A or 33522A check out my post Creating Arbitrary Waveform Sequences.

Click here to download ECG Waveform Simulator from Matlab Central

Click here to check Agilent's AWGs

Monday, October 10, 2011

Determining How Much Oscilloscope Bandwidth is Needed to Accurately Capture a Signal

If you input a 100 MHz sine wave with a 1 Vpp amplitude into an oscilloscope with a max frequency of 100 MHz, what will you see on the display? You will still see a 100 MHz sine wave, but it will no longer be 1 Vpp. Instead the measured amplitude will be about 700 mVpp. That is because the max frequency rating of a scope is its 3 dB roll-off point, just like how a low pass filter is rated. That means any frequency components at the scope's max frequency will be attenuated by 3 dB, or about 30%. For non-sine-wave signals near a scope's upper frequency limit the result is even worse, because entire frequency components can be eliminated. The figures below show a 100 MHz scope and a 500 MHz scope both measuring the same 100 MHz digital clock signal.
100 MHz Scope Measuring 100 MHz Digital Clock
500 MHz Scope Measuring 100 MHz Digital Clock

In the 100 MHz scope screen shot you can see that all frequency components that make up the digital square wave have been attenuated except for the center frequency. In the bottom 500 MHz scope screen shot we get a much better picture of what the clock signal really looks like. The following are good rules of thumb when determining how much scope bandwidth you need to accurately capture a signal:

  • For analog signals, choose a scope bandwidth that is at least 3 times larger than the center frequency of the signal.
  • For square or pulse type waveforms, choose a scope bandwidth that is at least 5 times larger than the center frequency of the signal. This will ensure you capture up to the 5th harmonic of the signal.
The two most common types of responses that scopes have at their max frequency are the Gaussian response and the Maximally-Flat response, both shown below.
Scopes with bandwidths of 1 GHz or below typically have the Gaussian response, and higher-bandwidth scopes typically have the Maximally-Flat response. With knowledge of the response of your scope there is a much more accurate calculation you can perform to determine the scope bandwidth needed to measure a digital signal. The first step is to determine the maximum practical frequency component within the signal under test. We refer to this frequency component as fknee. Dr. Howard W. Johnson has written a book on this topic titled “High-Speed Digital Design: A Handbook of Black Magic”. All fast rising edges have an infinite spectrum of frequency components. However, there is an inflection (or “knee”) in the frequency spectrum of fast edges where frequency components higher than fknee are insignificant in determining the shape of the signal. For digital signals with rise time characteristics based on 10% to 90% thresholds, fknee is equal to 0.5 divided by the rise time of the signal:
fKnee = 0.5/RT (10% - 90%)

The next step is to determine the required bandwidth of the oscilloscope to measure this signal. The table below shows multiplying factors for various degrees of accuracy for scopes with a Gaussian or a Maximally-Flat frequency response.

This calculation has nothing to do with the frequency or clock rate of your signal, just the rise time. Let's walk through an example with a Gaussian-response scope measuring a signal with a 1 ns rise time, where we want 3% accuracy or better. Using the fknee calculation above, the highest significant frequency component (fknee) would be 500 MHz. From the table above, to achieve 3% accuracy or better we need a scope with a bandwidth of at least 950 MHz. For the example we just did, the clock rate of the digital signal could have been 100 MHz or 500 MHz; it doesn't matter, because the rise time is what determines the bandwidth needed.
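The rule is easy to wrap in a couple of Matlab lines. This sketch just reproduces the example above; the 1.9 multiplier is the Gaussian-response, ~3% accuracy factor implied by the 950 MHz result, and other accuracy targets or a Maximally-Flat response would use different factors from the table.

RT     = 1e-9;              % 10%-90% rise time of the signal being measured
fknee  = 0.5/RT             % highest significant frequency content: 500 MHz
factor = 1.9;               % Gaussian response, ~3% accuracy (from the table above)
bw     = factor*fknee       % minimum scope bandwidth: ~950 MHz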

One last note: don't forget to consider the bandwidth of the cabling or probe you are using, along with the connection method to the signal!


Monday, October 3, 2011

Plotting Tools Now Available for Displaying Measurement Data on Android Devices

Back in early September I blogged about how Agilent launched a webpage entitled "Smart Device Programming Tools and Examples for Instrument Control" that features IO programming tools for controlling LXI instruments with Android and iOS smart devices. The IO programming tools and accompanying example apps that demonstrate the use of the tools are all free to download (for more info on the IO tools click here). In this post I am happy to announce that last week programming tools for plotting and analyzing measurement data on Android devices were added to the webpage. The free plotting tools are called Agilent Android Plots or AAPlots for short. Below is a sample plot that was created using AAPlots on a Motorola Xoom:

AAPlots includes features like markers, pinch zoom, and touch panning. AAPlots includes the following chart types: XY line chart, scatter chart, stripline chart, histogram chart, area chart, and bar chart. 

Besides AAPlots itself, the source code for four more Android example apps was added to the webpage; they are as follows:
  • AAPlotsDemo: This  app demonstrates the use of the AAPlots programming tools in detail
  • Example 34972A: This app demonstrates the use of AAIo and AAPlots on the 34972A DAQ / Switch Unit.
  • Example MSOX3054A: This app demonstrates the use of AAIo and AAPlots on the MSO-X 3054A Oscilloscope.
  • Example 53230A: This app demonstrates the use of AAIo and AAPlots on the 53230A Universal Counter.
The webpage includes an email link for sending comments, suggestions, and feedback on the iOS and Android instrument control programming tools.






Tuesday, September 27, 2011

Remote Long Distance Control of LXI Instruments

Below are two videos I did featuring two flexible low cost wireless methods for remote long distance control of LXI instruments. This is for test / datalogging / data acquisition applications where there is no Ethernet or other network available, for instance when doing outdoor test measurements. The first video features using an RF Ethernet Bridge and the second video features using a cellular router. In each video a particular instrument is featured, but the remote long distance control method discussed could be used with any LXI instrument.

RF Ethernet Bridge:

Cellular Router:


One aspect not covered in the videos when using these methods for remote long distance control of LXI instruments is network latency. Because of the distance between the computer and the instrument, and because of the various network layers, network latency can be much longer, so any software created for these types of long distance instrument control applications should be robust enough to handle it. For more info on using a cellular router for instrument control, check out an earlier post I did on the topic, click here
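If you are scripting the instrument control yourself, one simple way to allow for the extra latency is to lengthen the I/O timeout. The Matlab sketch below opens a raw SCPI socket (port 5025 on most LXI instruments) with a generous timeout; it assumes the Instrument Control Toolbox is available, the IP address is only a placeholder, and *IDN? is used simply because it is a query every SCPI instrument supports.

% Generous timeout for a high-latency cellular or RF-bridged link
inst = tcpip('192.168.8.100', 5025);   % placeholder address; 5025 is the usual LXI SCPI socket port
inst.Timeout = 30;                     % allow up to 30 s instead of the few-second default
fopen(inst);
fprintf(inst, '*IDN?');                % any simple SCPI query works as a link test
idn = fscanf(inst)                     % may take seconds to return over a slow link
fclose(inst); delete(inst);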



Monday, September 19, 2011

DMM Resistance Measurement Considerations

The digital multimeter, or DMM, offers two methods for measuring resistance: 2-wire and 4-wire ohms. For both methods, the test current flows from the input HI terminal and then through the resistor being measured. For 2-wire ohms, the voltage drop across the resistor being measured is sensed internal to the multimeter, so test lead resistance is also measured. For 4-wire ohms, separate "sense" connections are required. Since no current flows in the sense leads, the resistance in these leads does not introduce a measurement error. In this blog post I will discuss some general considerations and tips for making DMM resistance measurements.

4–Wire Ohms Measurements
4-wire ohms measurements use the HI and LO DMM leads as well as the HI and LO sense leads (that is why they are called "4-wire"). The setup for a 4-wire ohms measurement is shown below.

The sense leads essentially extend the DMM measurement to the DUT junctions instead of the HI and LO terminals. This eliminates the voltage drop across the HI and LO leads caused by the test current. Since the sense inputs are high impedance, there is essentially no current flow into them. The 4-wire ohms method provides the most accurate way to measure small resistances. Test lead resistances and contact resistances are automatically reduced using this method. Four-wire ohms is often used in automated test applications where resistive and/or long cable lengths, numerous connections, or switches exist between the DMM and the DUT.

Removing Test Lead Resistance Errors
Modern DMMs offer a built-in function, often labeled "Null" or "Math", for eliminating test lead error. To use the Math function on a DMM you short the test leads together. The Math function will then make a resistance measurement of the test leads and store it. The DMM will then mathematically subtract the measured lead resistance from subsequent resistance measurements to cancel out the lead resistance error.

Minimizing Power Dissipation Effects
When measuring resistors designed for temperature measurements (or other resistive devices with large temperature coefficients), be aware that the DMM will dissipate some power in the device under test. If power dissipation is a problem, you should select the DMM's next higher measurement range to reduce the error to an acceptable level. The following table shows examples of the Agilent 34410A and 34411A DMMs' source current for various measurement ranges.


Errors in High Resistance Measurements
When you are measuring large resistances, significant errors can occur due to insulation resistance and surface cleanliness. You should take the necessary precautions to maintain a "clean" high–resistance system. Test leads and fixtures are susceptible to leakage due to moisture absorption in insulating materials and "dirty" surface films. Nylon and PVC are relatively poor insulators (10^9 Ω) when compared to PTFE (Teflon) insulators (10^13 Ω). Leakage from nylon or PVC insulators can easily contribute a 0.1% error when measuring a 1 MΩ resistance in humid conditions.
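The 0.1% figure is easy to verify, since the insulation leakage path appears in parallel with the resistor being measured. A quick sketch using the resistances quoted above:

Rx     = 1e6;                       % resistance being measured
Rleak  = 1e9;                       % nylon/PVC insulation leakage path
Rmeas  = Rx*Rleak/(Rx + Rleak);     % what the DMM actually sees
errPct = 100*(Rx - Rmeas)/Rx        % ~0.1% low reading
% With a PTFE (1e13 ohm) insulator the same error drops to roughly 0.00001%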

Click here to check out the DMMs that Agilent offers



Friday, September 16, 2011

Agilent Introduces a Low-cost 6.5 Digit PXI DMM

On September 13, Agilent introduced the low-cost M9181A 6.5 digit PXI digital multimeter. The M9181A provides the most common measurement functions such as DC voltage, DC current, AC voltage, AC current, 2- and 4-wire resistance.

Features:
• 6½ digit resolution
• Up to 150 readings per second at 4½ digits
• Basic 1 year DCV accuracy of 90 ppm
• DCV, ACV, DCI, ACI, 2- and 4-wire resistance
• Floating isolation (CAT II) to 240 V (floating measurement to 240 V maximum)
• Software drivers to support most common programming environments
• PXI form factor
• Chassis connector compatibility: PXI-1 (J-1 only)

Other Agilent PXI family members include the M9182A and M9183A which provide higher throughput and better accuracy as well as additional measurement functions such as capacitance and temperature.

Thursday, September 8, 2011

New M9036A PXIe Embedded Controller from Agilent

Agilent just released the M9036A, a new embedded PXIe controller that enables a compact platform solution. With its 2-link, 2x8 Gen 2 backplane configuration, it is an ideal match for the Agilent M9018A PXIe chassis. This three-slot module easily integrates into hybrid test systems using GPIB, USB, and LAN via the built-in front panel interfaces. Built upon a mid-performance Intel Core i5 dual-core processor with Hyper-Threading Technology, the M9036A is designed for applications in multi-tasking environments.

Features and Specs:
  • Gen 2 PCIe backplane switches
  • Intel Core i5 dual-core 2.4 GHz processor
  • 160 GB solid-state drive
  • 2x8 or 4x4 PXIe PCIe link configuration
  • PXIe PCIe data bandwidth: 2 GB/s to/from processor and 4 GB/s max between PCIe backplane links

Tuesday, September 6, 2011

Smart Device Programming Tools and Examples for Instrument Control

I am excited to announce that Agilent just launched a website where you can download, free of charge, smart device programming tools and examples for instrument control. What is a smart device, you ask? Smart device refers to the smart phones and tablet PCs that continue to become more and more a part of our personal and professional computing and connected lifestyle. One of the big benefits of smart devices for test and measurement is they offer the ultimate in ubiquitous access to test data and instrument control. They are a natural fit with LXI instrumentation since they can communicate with an Ethernet network, either using Wi-Fi or through a cellular provider’s network via the Internet.

One of the challenges today of using smart devices for Ethernet-based instrument control is that there is not much smart device programming experience out there, and there is a lack of smart device programming tools specific to the test and measurement industry. Agilent is working to change that by providing programming tools and example code for Apple’s iOS and Google’s Android specifically designed for LAN / LXI instrument control.


The first programming tools and examples we are rolling out are for instrument IO. When you create instrument control software on a Windows-based PC, many IO tools are available, such as VISA and IVI-COM drivers, that make instrument programming easier by doing all the low-level connection management and data handling for you. Until now, these tools and drivers did not exist for smart device programming environments. The IO programming tools can be downloaded below for both iOS and Android. Also below you will find other links for content related to LXI instrumentation and using smart devices for instrument control.

Also on the webpage you will find an email link. Please feel free to use it for any questions you have on using the smart device programming tools and for suggestions on future smart device programming tools you would like to see. The link for the new webpage can be found below, enjoy!



Monday, August 29, 2011

Simulating Jitter with an Arbitrary Waveform Generator Part 2

This is part 2 of a two part post on simulating jitter with an AWG; to check out part 1 click here. In part 1 we discussed what jitter on a digital signal is, namely mistimed edge crossings. We discussed how modern AWGs, with their larger arbitrary waveform memory capacity and superb signal integrity, make a great low-cost solution for simulating complex jitter patterns on digital communication and clock signals for noise immunity and BER testing purposes. Finally, we were introduced to the error function (ERF), which we can use to build digital pulses with edge events that we can easily and quantitatively vary to simulate jitter. In this post we will build on the ERF concept and put an algorithm together for adding complex jitter patterns to our digital pulses.

In the following examples we will use Matlab to build the arbitrary waveform, the Agilent 33522A dual channel AWG to output it, and the Agilent 54833D oscilloscope to view the waveform. Initially, we will build a 500 kbps clock signal as an arbitrary waveform using Matlab and its built-in ERF function. We will be using a 250 MSa/s sample rate, so the waveform points will be spaced 4 ns apart. Let's first start out by building a waveform consisting of two pulses and adding a small value to the edge crossing time of the second pulse's falling edge to simulate jitter. The inner loop is where the pulse is created using a positive and negative ERF function. The outer loop consists of two iterations to create two pulses. Notice the r*150E-12 term in the negative ERF function; this term is what causes the edge shift in the second pulse. The term will be 0 for the first iteration and 150e-12 for the second, causing the edge crossing to shift 150 ps.

pulse = [];
for r = 0:1:1                        % two iterations --> two pulses
    for i = 5.04E-7:4E-9:2.5E-6      % 4 ns point spacing (250 MSa/s)
        % rising edge at 1 us; falling edge at 2 us plus r*150 ps of shift
        y = erf((i-1E-6)/10E-9) + erf(-(i-2E-6+r*150E-12)/10E-9) - 1;
        pulse = [pulse y];           % append the next waveform point
    end
end

The waveform was saved to a CSV and then loaded onto the 33522A AWG. The output waveform from the 33522A was then captured using the scope. The image below is a zoomed in view from the scope showing the falling edge of the two pulses. Notice that the shift between the pulses is ~ 150 ps (click image to enlarge).

We can use this method to produce pulses with any variety of edge crossing times and add them to our arbitrary waveform. In this way we can generate a clock signal with a precise amount of jitter on each pulse. We can add as many of these precisely defined pulses as we want, up to the limit of the waveform memory. In this example, I am using the Agilent 33522A with a waveform memory of 1 MSa, and each pulse is defined by 500 samples, allowing up to 2000 pulses per waveform. With the 33522A’s optional waveform memory of 16 MSa we could get up to 32k pulses per waveform.

Let’s use this capability to simulate the jitter we might expect from a power supply coupling to the clock line. For this example I will inject 200 ps of periodic jitter at 5 kHz.

pulse = [];
for r = 2*pi/100:2*pi/100:2*pi       % 100 pulses spanning one jitter cycle (5 kHz)
    for i = 5.04E-7:4E-9:2.5E-6      % 4 ns point spacing (250 MSa/s)
        % falling edge shifted by sin(r)*100 ps --> 200 ps peak-to-peak periodic jitter
        y = erf((i-1E-6)/10E-9) + erf(-(i-2E-6+sin(r)*100E-12)/10E-9) - 1;
        pulse = [pulse y];           % append the next waveform point
    end
end

Here is the result.

Just by looking at the color-graded display of the oscilloscope capture you can see the periodic nature of the jitter. The sine wave spends more time near its maximum amplitude than it does near its zero crossing; this shows up on the oscilloscope as pink (the greatest density of samples), followed by blue, with green being the lowest density near the center. In this example we used a simple sine wave at a given frequency to define our jitter pattern. We could make the jitter pattern more complex by using a random number generator with a normal distribution to represent random Gaussian jitter, or we could add together multiple sine waves at various frequencies to represent multiple spurious coupled jitter sources; a sketch of the random-jitter variation is shown below.
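For instance, simulating random Gaussian jitter only requires swapping the sine term for a call to randn. This is just a sketch built on the same loop as above; the 50 ps RMS value and the 2000-pulse count are arbitrary choices for illustration.

pulse = [];
for r = 1:2000                           % 2000 pulses fills ~1 MSa at 500 samples per pulse
    tj = randn*50E-12;                   % random Gaussian jitter, 50 ps RMS (arbitrary)
    for i = 5.04E-7:4E-9:2.5E-6          % 4 ns point spacing (250 MSa/s)
        y = erf((i-1E-6)/10E-9) + erf(-(i-2E-6+tj)/10E-9) - 1;
        pulse = [pulse y];               % append the next waveform point
    end
end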

So let's review and explicitly write out the algorithm we just went over for generating an arbitrary waveform that consists of a long series of digital pulses with a jitter pattern added to the edge crossings:
  1. Determine your signal rate, how many points per pulse, and how many pulses are in the waveform. These settings are highly dependent on the AWG you are working with and the jitter characteristics you want to simulate.
  2. Create a loop that builds your ideal pulse with a positive and negative ERF using the example above. Remember that the rise / fall times of the pulse are limited by the AWG's sample rate.
  3. Add the jitter pattern function and jitter magnitude you want to simulate to the desired edge or edges of the signal. This is done by adding the jitter pattern and magnitude inside the ERF function.
  4. Create the outer loop which controls the number of digital pulses created and if necessary steps the jitter pattern variable through the appropriate values.
The examples and the algorithm discussed so far in this post apply mainly to clock signals. Of course the same concepts could be applied to simulate jitter on digital communication signals like SPI and CAN for BER and noise immunity testing. There is just the added complexity of plugging in 1s and 0s at the right spot in the waveform to simulate meaningful data.

I posted the code for a Matlab function that creates a user-specified digital clock signal with jitter on it to Matlab Central. Click here to download

To check out Agilent's AWGs click here

Monday, August 22, 2011

Simulating Jitter with an Arbitrary Waveform Generator Part 1

The arbitrary waveform generator (AWG) is an instrument you will probably find on the bench of most electrical engineers. It allows you to produce a variety of waveforms, from built-in functions like square and sine to arbitrary user-defined waveforms. As AWG technology progresses, it opens the door for new applications. In this two part blog post we will look at how to use an AWG to simulate jitter on digital clock and communication signals. This is extremely useful when doing noise immunity and BER testing on digital circuits.

AWGs have always been a great way to create serial data or clock signals, as you have the ability to produce accurate signals with very precise edge placement. Usually you can place an edge crossing to better than 1/100th of the sample interval of the AWG. With this accuracy you are able to add sub-nanosecond timing error to clock or data signals to test your system's susceptibility to jitter. With basic AWG waveform memories now in the millions of points, you are able to add jitter to longer pulse patterns in more interesting ways.

Jitter is defined as the deviation in or displacement of some aspect of the pulses in a digital signal. What is most often characterized is the Time Interval Error (TIE) which is the timing deviation of the edge crossing of the serial data signal relative to a clock. This can also be the timing deviation of a clock relative to an ideal clock.

Jitter and TIE can end up on a signal through a variety of mechanisms. For example, jitter can be the result of spurious coupling from a switching power supply to the digital system's clock signal. A designer would be prudent to test the system's vulnerability to such an occurrence. While there are many more expensive solutions for injecting jitter onto a clock signal, most AWGs are perfectly capable of simulating this kind of jitter.

You can change the edge crossing position of an arbitrary waveform very precisely, in steps on the order of the sample period of the AWG (1/sample rate) divided by its vertical resolution in bits. You also want to pay close attention to the AWG's jitter spec, which specifies the amount of jitter error that the AWG will add to your "ideal" digital signal. Modern AWGs will have jitter specs < 100 ps; for instance, Agilent's 33521/22A AWGs have a jitter spec of < 40 ps. Now we need a mathematical algorithm that allows us to easily create our digital pulses and easily manipulate the pulse edges to simulate jitter. Let's first start out with creating the pulses using the error function (ERF), which is the integral of the Gaussian or normal distribution. The ERF is defined and plotted as (click to enlarge):

The ERF gives a positive step from -1 to 1 with the zero crossing at t0. In the Gaussian distribution, σ is the standard deviation, a measure of the width of the distribution. The correlation of σ to the width of the rising edge in the error function gives a 10-90 rise time of about 2σ. The negative step or falling edge is defined as the ERF function multiplied by -1. The only limitation is that σ needs to be greater than 2 AWG sample periods in order to ensure adequate oversampling.
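You can check the 2σ rule of thumb numerically. In the form used in part 2's examples, erf((t-t0)/σ), the 10-90 width works out to about 1.81σ; a quick Matlab sketch, with the 10 ns value chosen only to match the code in the next post:

sigma = 10e-9;                          % edge width parameter used in the part 2 code
t     = (-50e-9:0.01e-9:50e-9);         % fine time grid around the edge
edge  = erf(t/sigma);                   % rising edge from -1 to +1
t10   = t(find(edge >= -0.8, 1));       % 10% point of the -1 to +1 swing
t90   = t(find(edge >=  0.8, 1));       % 90% point
rise  = t90 - t10                       % ~1.8*sigma, i.e. roughly 2*sigma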

That will do it for part 1. In part 2 we will see how we can use the ERF to simulate jitter on a digital signal. We will go over some examples using Matlab code and Agilent's 33522A.