Friday, February 27, 2009

Counting Type ADCs - 2

Analog to Digital Converters
By Ibrahim Kamal

The Electronic circuit
Any experienced reader may have noticed that the hardware for such a device is very simple (disregarding the microcontroller). Indeed, you only need to slightly change the design of the DAC explained in this earlier tutorial. The components on the left part of the schematic are standard in most projects: the capacitors (C1 and C2), the crystal oscillator (X1), the reset switch (SW1) with its debouncing capacitor (C3) and resistor (R12), and the connector (J1) for ISP programming. The resistors R13 to R32 are responsible for converting digital signals to analog signals with 10-bit resolution; adding the LM358 comparator and a microcontroller to that, we get an ADC.



As mentioned before, the purpose of the microcontroller in this application is to generate, through the DAC, the analog signal to be compared with the measured input voltage (Vin). The LM358 comparator will produce a change of logic level (from 0 to 1) indicating that the generated analog voltage has reached the measured voltage, thus indicating the end of conversion.

References:
  1. Counting Type ADC, Analog to Digital Converter, http://www.ikalogic.com/
  2. http://en.wikipedia.org

Counting Type ADCs - 1

Analog to Digital Converters
By Ibrahim Kamal

Overview
In this article, we will discuss a very common type of analog to digital converter called the counting type ADC. Based on our previous simple tutorial about DACs (digital to analog converters), this article presents a technique for building ADCs that you can use to learn and master the process of analog to digital conversion, but also use in many of your projects, adding an incredible feature to basic microcontrollers like the 8051 that don't have integrated ADCs.

The principle of operation
While there are many ways of building and implementing a counting type ADC, they all rely on the same basic idea. I am going to use a microcontroller by default to perform all the required logical operations, because in most cases where an ADC is found, a microcontroller is also present. As you can see in figure 1A, the counting type ADC is based on a digital to analog converter (DAC). An R/2R DAC is very suitable for this task, and it can offer more precision by adding more branches to the R/2R network, consequently adding more bit depth to the converter, or you may call it "more resolution". To understand how an analog input is converted to digital data, you have to think in the reverse direction, because that's what really happens: the microcontroller tries to mimic the analog input by producing the closest possible analog voltage through the DAC, as detailed below.



Fig. 1A: Principle of operation of a counting type ADC


The microcontroller generates a digital ramp (a signal increasing from 0 to full scale) on the input of the DAC. In the case of an 8-bit DAC with a 5 V maximum output voltage, this would be a count from 0 to 255, generating an increasing analog voltage (0 V to 5 V) at the output of the DAC, which is then fed to a comparator to be compared with the analog input signal being converted.



Fig. 1B: Principle of operation of a counting type ADC


While the analog output voltage from the DAC is smaller than the measured input, the comparator will output logic 0. Once the DAC outputs a voltage that is slightly bigger than the analog input, the comparator will output logic 1, indicating the end of the conversion (referred to as EOC). The job of the microcontroller is to monitor the output of the comparator and record the value of the data that was sent to the DAC just before, or just after, the comparator's output flipped from 0 to 1. Figure 1B may help you to picture the relation between the three major signals: the analog input being measured, the analog output from the DAC, and the EOC signal. The resolution - which is the smallest change in the input that can be detected - depends on the number of data lines 'n'. That means that you can build your own ADC with any precision you may need, which becomes interesting when you need very low or very high precision ADCs.
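
To make the procedure concrete, here is a minimal C sketch of that conversion loop for an assumed 8-bit setup. dac_write() and comparator_high() are only software stand-ins for whatever port write and pin read your microcontroller would use; nothing here is a real library call.

#include <stdint.h>
#include <stdio.h>

#define VREF 5.0
static double vin = 3.2;     /* simulated input voltage (arbitrary)          */
static double dac_out = 0.0; /* simulated R/2R DAC output                    */

/* On real hardware these would be a port write and a pin read; here they
 * are software stand-ins so the sketch runs on a PC.                        */
static void dac_write(uint8_t code)  { dac_out = VREF * code / 256.0; }
static int  comparator_high(void)    { return dac_out > vin; }

/* Counting-type conversion: ramp the DAC code upward from 0 and stop as
 * soon as the comparator flips, i.e. at the end of conversion (EOC).        */
static uint8_t counting_adc_read(void)
{
    for (uint16_t code = 0; code < 256; code++) {
        dac_write((uint8_t)code);          /* next step of the digital ramp  */
        if (comparator_high())             /* DAC output just exceeded Vin?  */
            return (uint8_t)code;          /* this code approximates Vin     */
    }
    return 255;                            /* input at or above full scale   */
}

int main(void)
{
    printf("Vin = %.2f V -> code %u\n", vin, (unsigned)counting_adc_read());
    return 0;
}

Note that, unlike a successive approximation converter, this scheme may need up to 2^n DAC steps per conversion, so the conversion time depends on the input level.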


References:
  1. Counting Type ADC, Analog to Digital Converter, http://www.ikalogic.com/
  2. http://en.wikipedia.org

Thursday, February 26, 2009

Understanding ADC Specifications 10


By Len Staller
Embedded Systems Design
(02/24/05, 05:24:00 PM EST)

Reading ADC Specification Numbers
The ADC specifications posted in data sheets serve to define the performance of an ADC in different types of applications. The engineer uses these specifications to decide whether, how, and in what way the ADC should be used in an application. Performance specifications can also be a guarantee that an ADC will perform in a certain way. If a specification is labeled as a maximum or minimum, this is implied. For example, in the ADC specification shown in Table 1, the data sheet excerpt gives an INL error maximum of 1 LSB. This should mean the manufacturer has tested the ADC and is stating that the INL error will not be greater than 1 LSB. Besides minimum and maximum values, specifications listed as typical are also given. A typical value is not a guarantee but simply represents typical performance for that ADC. For example, if a data sheet specifies 2 LSB INL in the "Typical" column, there's no implied guarantee that the engineer won't find the ADC with higher INL error.

Though a typical number is not a guarantee, it should give the designer an idea of how the ADC will perform, since these numbers are generally derived from the manufacturer's characterization data or are expected by design. Typical numbers are more helpful when the manufacturer gives the standard deviation from the mean of the tested specification. This gives the engineer more information on how the ADC's performance can be expected to deviate from the numbers posted as typical. Keep this in mind when comparing ADC data sheets, especially if the specification is critical to your design. An ADC with a typical 2 LSB INL may yield higher INL error than expected, making a 12-bit ADC effectively a 10-bit ADC—caveat emptor!

Len Staller serves as an applications engineer for Silicon Laboratories' microcontroller products. Previously, he was an applications engineer for Cygnal Integrated Products, which was acquired by Silicon Laboratories in 2003. Staller has a bachelor's degree in electrical engineering from The University of Texas at Austin. He can be reached at Len.Staller@silabs.com.

Reference:
  1. http://www.embended.com/
  2. http://en.wikipedia.org


Wednesday, February 25, 2009

Understanding ADC Specifications 9

By Len Staller
Embedded Systems Design
(02/24/05, 05:24:00 PM EST)

Signal-to-noise and distortion
Signal-to-noise and distortion (SiNAD) offers a more complete picture by including the noise and harmonic distortion in one specification. SiNAD gives a description of how the measured signal will compare to the noise and distortion. You can calculate the SiNAD ratio using Equation 6.

SiNAD = 20 log | V1 / √(V2² + V3² + … + Vn² + Vnoise²) |   (Equation 6)
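
As a small worked illustration of Equation 6, the C sketch below computes SiNAD from the fundamental, a few harmonics, and the noise; the RMS amplitudes are invented for the example and do not come from any data sheet.

#include <math.h>
#include <stdio.h>

/* SiNAD per Equation 6: the fundamental over the root-sum-square of the
 * harmonics and the noise, expressed in dB.                              */
static double sinad_db(double v1, const double *harm, int n_harm, double v_noise)
{
    double sum = v_noise * v_noise;
    for (int i = 0; i < n_harm; i++)
        sum += harm[i] * harm[i];
    return 20.0 * log10(v1 / sqrt(sum));
}

int main(void)
{
    /* Illustrative RMS values only (volts). */
    double harmonics[] = { 0.001, 0.0005, 0.0002 };   /* V2, V3, V4 */
    printf("SiNAD = %.1f dB\n", sinad_db(1.0, harmonics, 3, 0.002));
    return 0;
}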

Spurious-free dynamic range
Finally, spurious-free dynamic range (SFDR) is the difference between the magnitude of the measured signal and its highest spur peak. This spur is typically a harmonic of the measured signal but doesn't have to be. SFDR is shown in Figure 11.



Figure 11: Spurious-free dynamic range (SFDR)


Reference:
  1. http://www.embended.com/
  2. http://en.wikipedia.org

Monday, February 23, 2009

Understanding ADC Specifications 8

By Len Staller
Embedded Systems Design
(02/24/05, 05:24:00 PM EST)

Harmonic distortion
Nonlinearity in the data converter results in harmonic distortion when analyzed in the frequency domain. Such distortion is observed as "spurs" in the FFT at harmonics of the measured signal as illustrated in Figure 10. This distortion is referred to as total harmonic distortion (THD), and its power is calculated in Equation 5.


Figure 10: FFT showing harmonic distortion

THD = 20 log | √(V2² + V3² + … + Vn²) / V1 |   (Equation 5)

The magnitude of harmonic distortion diminishes at high frequencies to the point that its magnitude is less than the noise floor or is beyond the bandwidth of interest. Data sheets typically specify to what order the harmonic distortion has been calculated. Manufacturers will specify which harmonic is used in calculating THD; for example, up to the fifth harmonic is common (see the example ADC specification in Table 1).


Table 1: Example: Silicon Labs C8051F060 16-bit ADC electrical characteristics


Reference:
  1. http://www.embended.com/
  2. http://en.wikipedia.org

Understanding ADC Specifications 7

By Len Staller
Embedded Systems Design

(02/24/05, 05:24:00 PM EST)


Signal-to-noise ratio
The signal-to-noise ratio (SNR) is the ratio of the root mean square (RMS) power of the input signal to the RMS noise power (excluding harmonic distortion), expressed in decibels (dB), as shown in Equation 3.

SNR(dB) = 20 log ( Vsignal(rms) / Vnoise(rms) )   (Equation 3)

SNR is a comparison of the noise to be expected with respect to the measured signal. The noise measured in an SNR calculation doesn't include harmonic distortion but does include quantization noise (an artifact of quantization error) and all other sources of noise (for example, thermal noise). This noise floor is depicted in the FFT plot in Figure 9. For a given ADC resolution, the quantization noise is what limits an ADC to its theoretical best SNR because quantization error is the only error in an ideal ADC. The theoretical best SNR is calculated in Equation 4.



Figure 9: SNR— A measure of the signal compared to the noise floor

SNR(dB) = 6.02·N + 1.76   (Equation 4)
where N is the ADC resolution in bits

Quantization noise can only be reduced by making a higher-resolution measurement (in other words, a higher-resolution ADC or oversampling). Other sources of noise include thermal noise, 1/ƒ noise, and sample clock jitter.
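
For quick reference, Equation 4 is easy to tabulate; this short C snippet simply prints the theoretical best SNR for a few common resolutions.

#include <stdio.h>

int main(void)
{
    /* Theoretical best SNR of an ideal ADC, limited only by quantization
     * noise: SNR(dB) = 6.02*N + 1.76 (Equation 4).                        */
    for (int n = 8; n <= 24; n += 4)
        printf("%2d bits -> %.2f dB\n", n, 6.02 * n + 1.76);
    return 0;
}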


Reference:
  1. http://www.embended.com/
  2. http://en.wikipedia.org

Sunday, February 22, 2009

Understanding ADC Specifications 6

By Len Staller
Embedded Systems Design
(02/24/05, 05:24:00 PM EST)

Absolute error
The absolute error is the total DC measurement error and is characterized by the offset, full-scale, INL, and DNL errors. Quantization error also affects accuracy, but it's inherent in the analog-to-digital conversion process (and so does not vary from one ADC to another of equal resolution). When designing with an ADC, the engineer uses the performance specifications posted in the data sheet to calculate the maximum absolute error that can be expected in the measurement, if it's important. Offset and full-scale errors can be reduced by calibration at the expense of dynamic range and the cost of the calibration process itself. Adding or subtracting a constant number to or from the ADC output codes can minimize offset error. Multiplying the ADC output codes by a correction factor can minimize full-scale error. Absolute error is less important in some applications, such as closed-loop control, where DNL is most important.
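
A minimal C sketch of the calibration arithmetic described above - subtracting a constant for offset error and multiplying by a correction factor for full-scale error - assuming the two correction terms were determined in a prior calibration step. The constants and the 12-bit range are placeholders, not values from any device.

#include <stdio.h>
#include <stdint.h>

/* Calibration constants determined beforehand, e.g. by measuring known
 * zero-scale and full-scale inputs. Placeholder values for illustration.  */
#define OFFSET_CODES    3        /* measured offset error, in codes         */
#define GAIN_CORRECTION 0.9987   /* ideal full scale / measured full scale  */

/* Apply offset and gain correction to a raw 12-bit ADC code. */
static int32_t calibrate_code(int32_t raw)
{
    double corrected = (raw - OFFSET_CODES) * GAIN_CORRECTION;
    if (corrected < 0.0)    corrected = 0.0;      /* clamping here is where  */
    if (corrected > 4095.0) corrected = 4095.0;   /* dynamic range is lost   */
    return (int32_t)(corrected + 0.5);
}

int main(void)
{
    printf("raw 0    -> %d\n", calibrate_code(0));
    printf("raw 2048 -> %d\n", calibrate_code(2048));
    printf("raw 4095 -> %d\n", calibrate_code(4095));
    return 0;
}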

Dynamic performance
An ADC's dynamic performance is specified using parameters obtained via frequency-domain analysis and is typically measured by performing a fast Fourier transform (FFT) on the output codes of the ADC. In Figure 8, the fundamental frequency is the input signal frequency. This is the signal measured with the ADC. Everything else is noise—the unwanted signals—to be characterized with respect to the desired signal. This includes harmonic distortion, thermal noise, 1/ƒ noise, and quantization noise. (The figure is exaggerated for ease of observation.) Some sources of noise may not derive from the ADC itself. For example, distortion and thermal noise originate from the external circuit at the input to the ADC. Engineers minimize outside sources of error when assessing the performance of an ADC and in their system design.



Figure 8: An FFT of ADC output codes


Reference:
  1. http://www.embended.com/
  2. http://en.wikipedia.org

Saturday, February 21, 2009

Understanding ADC Specifications 5


By Len Staller

Embedded Systems Design
(02/24/05, 05:24:00 PM EST)

Non-linearity
Ideally, each code width (LSB) on an ADC's transfer function should be uniform in size. For example, all codes in Figure 2 should represent exactly 1/8th of the ADC's full-scale voltage reference. The difference in code widths from one code to the next is differential nonlinearity (DNL). The code width (or LSB) of an ADC is shown in Equation 1.

LSB = Vref / 2^N   (Equation 1)

The voltage difference between each code transition should be equal to one LSB, as defined in Equation 1. Deviation of each code from an LSB is measured as DNL. This can be observed as uneven spacing of the code "steps" or transition boundaries on the ADC's transfer-function plot. In Figure 6, a selected digital output code width is shown as larger than the previous code's step size. This difference is DNL error. DNL is calculated as shown in Equation 2.



Figure 6: Differential nonlinearity

DNL = (Vn+1 – Vn) / VLSB (Equation 2)

The integral nonlinearity (INL) is the deviation of an ADC's transfer function from a straight line. This line is often a best-fit line among the points in the plot but can also be a line that connects the highest and lowest data points, or endpoints. INL is determined by measuring the voltage at which all code transitions occur and comparing them to the ideal. The difference between the ideal voltage levels at which code transitions occur and the actual voltage is the INL error, expressed in LSBs. INL error at any given point in an ADC's transfer function is the accumulation of all DNL errors of all previous (or lower) ADC codes, hence it's called integral nonlinearity. This is observed as the deviation from a straight-line transfer function, as shown in Figure 7.


Figure 7: Integral nonlinearity error

Because nonlinearity in measurement will cause distortion, INL will also affect the dynamic performance of an ADC.
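
If the code-transition voltages of a converter have been measured (for example with a slow ramp test), the DNL and INL defined above can be tallied as in the hedged C sketch below. It reports each code width relative to 1 LSB per Equation 2 and accumulates the deviations to obtain INL; the transition voltages are invented for the example.

#include <stdio.h>

int main(void)
{
    /* Measured code-transition voltages for a 3-bit, 1.0 V reference ADC
     * (7 transitions for 8 codes). Values invented for illustration.      */
    double v[7]  = { 0.124, 0.250, 0.378, 0.500, 0.622, 0.750, 0.877 };
    double v_lsb = 1.0 / 8.0;             /* ideal LSB = Vref / 2^N         */

    double inl = 0.0;
    for (int k = 1; k < 7; k++) {
        double width_lsb = (v[k] - v[k - 1]) / v_lsb;  /* Equation 2        */
        double dnl = width_lsb - 1.0;     /* deviation from the ideal 1 LSB */
        inl += dnl;                       /* INL accumulates the DNL        */
        printf("code %d: DNL = %+.3f LSB, INL = %+.3f LSB\n", k, dnl, inl);
    }
    return 0;
}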

Reference:
  1. http://www.embended.com/
  2. http://en.wikipedia.org

Thursday, February 19, 2009

Understanding ADC Specifications 4


By Len Staller
Embedded Systems Design
(02/24/05, 05:24:00 PM EST)

Offset error, full-scale error
The ideal transfer function line will intersect the origin of the plot. The first code boundary will occur at 1 LSB as shown in Figure 1. You can observe offset error as a shifting of the entire transfer function left or right along the input voltage axis, as shown in Figure 3.



Figure 3: Offset error

An offset of -1/2 LSB is intentionally introduced into some ADCs but is still included in the specification in the data sheet. Thus, the offset-error specification posted in the data sheet includes 1/2 LSB of offset by design. This is done to shift the potential quantization error in a measurement from the range 0 to 1 LSB to the range -1/2 to +1/2 LSB. In this way, the magnitude of the quantization error is intended to be no more than 1/2 LSB, as illustrated in Figure 4.



Figure 4: Quantization error vs. output code




Figure 5: Full-scale error

Full-scale error is the difference between the ideal code transition to the highest output code and the actual transition to the output code when the offset error is zero. This is observed as a change in slope of the transfer function line as shown in Figure 5. A similar specification, gain error, also describes the non-ideal slope of the transfer function as well as what the highest code transition would be without the offset error. Full-scale error accounts for both gain and offset deviation from the ideal transfer function. Both full-scale and gain errors are commonly used by ADC manufacturers.

Reference:
  1. http://www.embended.com/
  2. http://en.wikipedia.org

Wednesday, February 18, 2009

Understanding Analog to Digital Converter Specifications 3

By Len Staller
Embedded Systems Design

(02/24/05, 05:24:00 PM EST)


The ideal transfer function
The transfer function of an ADC is a plot of the voltage input to the ADC versus the codes output by the ADC. Such a plot is not continuous but is a plot of 2^N codes, where N is the ADC's resolution in bits. If you were to connect the codes by lines (usually at code-transition boundaries), the ideal transfer function would plot a straight line. A line drawn through the points at each code boundary would begin at the origin of the plot, and the slope of the plot for each supplied ADC would be the same, as shown in Figure 1.



Figure 1: Ideal transfer function of a 3-bit ADC

Figure 1 depicts an ideal transfer function for a 3-bit ADC with reference points at code transition boundaries. The output code will be its lowest (000) at less than 1/8 of the full-scale (the size of this ADC's code width). Also, note that the ADC reaches its full-scale output code (111) at 7/8 of full scale, not at the full-scale value. Thus, the transition to the maximum digital output does not occur at full-scale input voltage. The transition occurs at one code width—or least significant bit (LSB)—less than full-scale input voltage (in other words, voltage reference voltage).



Figure 2: 3-bit ADC transfer function with - 1/2 LSB offset

The transfer function can be implemented with an offset of - 1/2 LSB, as shown in Figure 2. This shift of the transfer function to the left shifts the quantization error from a range of (- 1 to 0 LSB) to (- 1/2 to +1/2 LSB). Although this offset is intentional, it's often included in a data sheet as part of offset error (see section on offset error). Limitations in the materials used in fabrication mean that real-world ADCs won't have this perfect transfer function. It's these deviations from the perfect transfer function that define the DC accuracy and are characterized by the specifications in a data sheet. The DC performance specifications described have accompanying figures that depict two transfer function segments: the ideal transfer function (solid, blue lines) and a transfer function that deviates from the ideal with the applicable error described (dashed, yellow line). This is done to better illustrate the meaning of the performance specifications.
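
As a small numeric companion to Figures 1 and 2, the sketch below maps an input voltage to the ideal output code of a 3-bit ADC, with and without the intentional -1/2 LSB offset. It is a toy model of the ideal transfer function only, not of any particular device.

#include <math.h>
#include <stdio.h>
#include <stdint.h>

#define N_BITS 3
#define VREF   1.0

/* Ideal code: first transition at 1 LSB (Figure 1); with the -1/2 LSB
 * offset, every transition shifts left by half a code (Figure 2).         */
static uint8_t ideal_code(double vin, int half_lsb_offset)
{
    double lsb  = VREF / (1 << N_BITS);
    double code = floor(vin / lsb + (half_lsb_offset ? 0.5 : 0.0));
    if (code < 0)                  code = 0;
    if (code > (1 << N_BITS) - 1)  code = (1 << N_BITS) - 1;
    return (uint8_t)code;
}

int main(void)
{
    for (double vin = 0.0; vin <= VREF; vin += VREF / 16.0)
        printf("Vin = %.4f V -> code %u (no offset), %u (-1/2 LSB offset)\n",
               vin, (unsigned)ideal_code(vin, 0), (unsigned)ideal_code(vin, 1));
    return 0;
}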

Reference:
  1. http://www.embended.com/
  2. http://en.wikipedia.org

Understanding Analog to Digital Converter Specifications 2


By Len Staller
Embedded Systems Design
(02/24/05, 05:24:00 PM EST)

DC accuracy
Many signals remain relatively static, such as those from temperature sensors or pressure transducers. In such applications, the measured voltage is related to some physical measurement, and the absolute accuracy of the voltage measurement is important. The ADC specifications that describe this type of accuracy are offset error, full-scale error, differential nonlinearity (DNL), and integral nonlinearity (INL). These four specifications build a complete description of an ADC's absolute accuracy.

Although not a specification, one of the fundamental errors in ADC measurement is a result of the data-conversion process itself: quantization error. This error cannot be avoided in ADC measurements. DC accuracy, and the resulting absolute error, are determined by four specs: offset, full-scale/gain error, INL, and DNL. Quantization error is an artifact of representing an analog signal with a digital number (in other words, an artifact of analog-to-digital conversion). Maximum quantization error is determined by the resolution of the measurement (the resolution of the ADC, or of the measurement if the signal is oversampled). Further, quantization error will appear as noise, referred to as quantization noise, in the dynamic analysis. For example, quantization error will appear as the noise floor in an FFT plot of a measured signal input to an ADC, which I'll discuss later in the dynamic performance section.

Reference:
  1. http://www.embended.com/
  2. http://en.wikipedia.org

Monday, February 16, 2009

Understanding Analog to Digital Converter Specifications 1

By Len Staller
Embedded Systems Design
(02/24/05, 05:24:00 PM EST)

Confused by analog-to-digital converter specifications? Here's a primer to help you decipher them and make the right decisions for your project.

Although manufacturers use common terms to describe analog-to-digital converters (ADCs), the way ADC makers specify the performance of ADCs in data sheets can be confusing, especially for newcomers. But to select the correct ADC for an application, it's essential to understand the specifications. This guide will help engineers to better understand the specifications commonly posted in manufacturers' data sheets that describe the performance of successive approximation register (SAR) ADCs.

ABCs of ADCs
ADCs convert an analog signal input to a digital output code. ADC measurements deviate from the ideal due to variations in the manufacturing process common to all integrated circuits (ICs) and through various sources of inaccuracy in the analog-to-digital conversion process. The ADC performance specifications will quantify the errors that are caused by the ADC itself.

ADC performance specifications are generally categorized in two ways: DC accuracy and dynamic performance. Most applications use ADCs to measure a relatively static, DC-like signal (for example, a temperature sensor or strain-gauge voltage) or a dynamic signal (such as processing of a voice signal or tone detection). The application determines which specifications the designer will consider the most important.

For example, a DTMF decoder samples a telephone signal to determine which button is depressed on a touchtone keypad. Here, the concern is the measurement of a signal's power (at a given set of frequencies) among other tones and noise generated by ADC measurement errors. In this design, the engineer will be most concerned with dynamic performance specifications such as signal-to-noise ratio and harmonic distortion. In another example, a system may measure a sensor output to determine the temperature of a fluid. In this case, the DC accuracy of the measurement is what matters, so the offset, gain, and non-linearity specifications will be most important.

Reference:
  1. http://www.embended.com/
  2. http://en.wikipedia.org

Sunday, February 15, 2009

Time Stretch Analog to Digital Converter (TS-ADC)

The time-stretch analog-to-digital converter (TS-ADC) is an analog-to-digital converter (ADC) system that has the capability of digitizing very high bandwidth signals that cannot be captured by conventional electronic ADCs. It is also known as the photonic time-stretch (PTS) digitizer, since it uses an optical frontend. It relies on the process of time-stretch, which effectively slows down the analog signal in time (or compresses its bandwidth) before a slow electronic ADC digitizes it.

Background
There is a huge demand for very high-speed analog-to-digital converters (ADCs), as they are needed for test and measurement equipment in laboratories and in high-speed data communications systems. Most ADCs are based purely on electronic circuits, which have limited speeds and add many impairments, limiting the bandwidth of the signals that can be digitized and the achievable signal-to-noise ratio. In the TS-ADC, this limitation is overcome by time-stretching the analog signal, which effectively slows down the signal in time prior to digitization. By doing so, the bandwidth (and carrier frequency) of the signal is compressed. Electronic ADCs that would have been too slow to digitize the original signal can now be used to capture this slowed-down signal.

How it works



Fig. 1 A time-stretch analog-to-digital converter (with a stretch factor of 4) is shown. The original analog signal is time-stretched and segmented with the help of a time-stretch preprocessor (generally an optical frontend). The slowed-down segments are captured by conventional electronic ADCs. The digitized samples are rearranged to obtain the digital representation of the original signal.


Fig. 2 The optical frontend for a time-stretch analog-to-digital converter is shown. The original analog signal is modulated onto a chirped optical pulse (obtained by dispersing an ultra-short supercontinuum pulse). A second dispersive medium stretches the optical pulse further. At the photodetector (PD) output, a stretched replica of the original signal is obtained.

The basic operating principle of the TS-ADC is shown in Fig. 1. The time-stretch processor, which is generally an optical frontend, stretches the signal in time. It also divides the signal into multiple segments using a filter, for example a wavelength division multiplexing (WDM) filter, to ensure that the stretched replica of the original analog signal segments do not overlap each other in time after stretching. The time-stretched and slowed down signal segments are then converted into digital samples by slow electronic ADCs. Finally, these samples are collected by a digital signal processor (DSP) and rearranged in a manner such that output data is the digital representation of the original analog signal. Any distortion added to the signal by the time-stretch preprocessor is also removed by the DSP.

An optical frontend is commonly used to accomplish this process of time-stretch, as shown in Fig. 2. An ultrashort optical pulse (typically 100 to 200 femtoseconds long), also called a supercontinuum pulse, which has a broad optical bandwidth, is time-stretched by dispersing it in a highly dispersive medium (such as a dispersion compensating fiber). This process results in (an almost) linear time-to-wavelength mapping in the stretched pulse, because different wavelengths travel at different speeds in the dispersive medium. The obtained pulse is called a chirped pulse as its frequency is changing with time, and it is typically a few nanoseconds long. The analog signal is modulated onto this chirped pulse using an electro-optic intensity modulator. Subsequently, the modulated pulse is stretched further in the second dispersive medium which has much higher dispersion value. Finally, this obtained optical pulse is converted to electrical domain by a photodetector, giving the stretched replica of the original analog signal.

For continuous operation, a train of supercontinuum pulses is used. The chirped pulses arriving at the electro-optic modulator should be wide enough (in time) such that the trailing edge of one pulse overlaps the leading edge of the next pulse. For segmentation, optical filters separate the signal into multiple wavelength channels at the output of the second dispersive medium. For each channel, a separate photodetector and backend electronic ADC is used. Finally the outputs of these ADCs are passed on to the DSP, which generates the desired digital output.
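
Purely to illustrate the bookkeeping performed by the DSP (not the optics), here is a toy C sketch in which each backend ADC delivers the samples of one stretched segment and the DSP concatenates them in order while rescaling the time axis by the stretch factor M. The channel count, segment length, stretch factor, and backend sample rate are all made-up parameters.

#include <stdio.h>

#define M          4        /* stretch factor (assumed)                     */
#define N_CH       2        /* number of WDM channels / backend ADCs        */
#define SEG_LEN    8        /* samples captured per segment (assumed)       */
#define FS_BACKEND 1.0e9    /* backend ADC sample rate in Sa/s (assumed)    */

int main(void)
{
    /* seg[c][i]: i-th sample of the stretched segment seen by channel c.
     * Filled by the backend ADCs in a real system; zeros here.             */
    double seg[N_CH][SEG_LEN] = { { 0 } };

    /* Rearrangement: concatenate the segments in their original time order.
     * Because each segment was stretched by M, one backend sample spacing
     * corresponds to 1/(M*FS_BACKEND) of original signal time, i.e. an
     * effective sample rate of M * FS_BACKEND.                              */
    double dt_effective = 1.0 / (M * FS_BACKEND);
    int n = 0;
    for (int c = 0; c < N_CH; c++)
        for (int i = 0; i < SEG_LEN; i++, n++)
            printf("t = %.3e s, sample = %f\n", n * dt_effective, seg[c][i]);

    printf("effective sample rate = %.3e Sa/s\n", M * FS_BACKEND);
    return 0;
}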

Impulse response of the photonic time-stretch (PTS) system



Fig. 3 Capture of a 95-GHz RF tone using the photonic time-stretch digitizer. The signal is captured at an effective sample rate of 10-Terasamples-per-second.

The PTS processor is based on specialized analog optical (or microwave photonic) fiber links such as those used in cable TV distribution. While the dispersion of fiber is a nuisance in conventional analog optical links, time-stretch technique exploits it to slow down the electrical waveform in the optical domain. In the cable TV link, the light source is a continuous-wave (CW) laser. In PTS, the source is a chirped pulse laser.

In a conventional analog optical link, dispersion causes the upper and lower modulation sidebands, f_optical ± f_electrical, to slip in relative phase. At certain frequencies, their beats with the optical carrier interfere destructively, creating nulls in the frequency response of the system. For practical systems the first null is at tens of GHz, which is sufficient for handling most electrical signals of interest. Although it may seem that the dispersion penalty places a fundamental limit on the impulse response (or the bandwidth) of the time-stretch system, it can be eliminated. The dispersion penalty vanishes with single-sideband modulation. Alternatively, one can use the modulator's secondary (inverse) output port to eliminate the dispersion penalty, in much the same way as two antennas can eliminate spatial nulls in wireless communication (hence the two antennas on top of a WiFi access point). Thus, the impulse response (bandwidth) of a time-stretch system is limited only by the bandwidth of the electro-optic modulator, which is about 120 GHz, a value that is adequate for capturing most electrical waveforms of interest.

Extremely large stretch factors can be obtained using long lengths of fiber, but at the cost of larger loss—a problem that has been overcome by employing Raman amplification within the dispersive fiber itself, leading to the world’s fastest real-time digitizer, as shown in Fig. 3. Also, using PTS, capture of very high frequency signals with a world record resolution in 10-GHz bandwidth range has been achieved.

Comparison with time lens imaging
Another technique, that involves a time lens, can also be used to slow down (mostly optical) signals in time. The time-lens concept relies on the mathematical equivalence between spatial diffraction and temporal dispersion, the so-called space-time duality. A lens held a fixed distance from an object produces a magnified image visible to the eye. The lens imparts a quadratic phase shift to the spatial frequency components of the optical waves; in conjunction with the free space propagation (object to lens, lens to eye), this generates a magnified image. Owing to the mathematical equivalence between paraxial diffraction and temporal dispersion, an optical waveform can be temporally imaged by a three-step process of dispersing it in time, subjecting it to a phase shift that is quadratic in time (the time lens itself), and dispersing it again. Theoretically, a focused aberration-free image is obtained under a specific condition when the two dispersive elements and the phase shift satisfy the temporal equivalent of the classic lens equation. Alternatively, the time lens can be used without the second dispersive element to transfer the waveform’s temporal profile to the spectral domain, analogous to the property that an ordinary lens produces the spatial Fourier transform of an object at its focal points.

In contrast to the time-lens approach, PTS is not based on the space-time duality – there is no lens equation that needs to be satisfied in order to obtain an error-free slowed-down version of the input waveform. Time-stretch technique also offers continuous-time acquisition performance, a feature needed for mainstream applications of oscilloscopes.
Another important difference between the two techniques is that the time lens requires the input signal to be subjected to a large amount of dispersion before further processing. For electrical waveforms, electronic devices with the required characteristics - (1) a high dispersion-to-loss ratio, (2) uniform dispersion, and (3) broad bandwidth - do not exist. This renders the time lens unsuitable for slowing down wideband electrical waveforms. In contrast, PTS does not have such a requirement; it was developed specifically to slow down electrical waveforms and enable high-speed digitizers.

References:
  1. A. S. Bhushan, F. Coppinger, and B. Jalali, “Time-stretched analogue-to-digital conversion," Electronics Letters vol. 34, no. 9, pp. 839-841, April 1998.
  2. Y. Han and B. Jalali, “Photonic Time-Stretched Analog-to-Digital Converter: Fundamental Concepts and Practical Considerations," Journal of Lightwave Technology, Vol. 21, Issue 12, pp. 3085-3103, Dec. 2003.
  3. J. Capmany and D. Novak, “Microwave photonics combines two worlds," Nature Photonics 1, 319-330 (2007).
  4. J. Chou, O. Boyraz, D. Solli, and B. Jalali, “Femtosecond real-time single-shot digitizer," Applied Physics Letters 91, 161105 (2007).
  5. S. Gupta and B. Jalali, “Time-warp correction and calibration in photonic time-stretch analog-to-digital converter," Optics Letters 33, 2674-2676 (2008).
  6. B. H. Kolner and M. Nazarathy, “Temporal imaging with a time lens," Optics Letters 14, 630-632 (1989).
  7. J. W. Goodman, “Introduction to Fourier Optics," McGraw-Hill (1968).
  8. http://en.wikipedia.org

Saturday, February 14, 2009

Successive Approximation Analog to Digital Converter (ADC)

A successive approximation ADC is a type of analog-to-digital converter that converts a continuous analog waveform into a discrete digital representation via a binary search through all possible quantization levels before finally converging upon a digital output for each conversion.

Successive Approximation ADC Block Diagram

Algorithm
The successive approximation analog-to-digital converter circuit typically consists of four chief subcircuits:
  1. A sample and hold circuit to acquire the input voltage (Vin).
  2. An analog voltage comparator that compares Vin to the output of the internal DAC and outputs the result of the comparison to the successive approximation register (SAR).
  3. A successive approximation register subcircuit designed to supply an approximate digital code of Vin to the internal DAC.
  4. An internal reference DAC that supplies the comparator with an analog voltage equivalent of the digital code output of the SAR for comparison with Vin.

The successive approximation register is initialized so that the most significant bit (MSB) is equal to a digital 1. This code is fed into the DAC which then supplies the analog equivalent of this digital code (Vref/2) into the comparator circuit for comparison with the sampled input voltage. If this analog voltage exceeds Vin the comparator causes the SAR to reset this bit and set the next bit to a digital 1. If it is lower, then the bit is left a 1 and the next bit is set to 1. This binary search continues until every bit in the SAR has been tested. The resulting code is the digital approximation of the sampled input voltage and is finally output by the ADC at the end of the conversion (EOC).

Mathematically, let Vin = x·Vref, so x in [-1, 1] is the normalized input voltage. The objective is to approximately digitize x to an accuracy of 1/2^n. The algorithm proceeds as follows:
  1. Initial approximation x0 = 0.
  2. ith approximation: x_i = x_(i-1) - s(x_(i-1) - x) / 2^i.

where s(x) is the signum function sgn(x) (+1 for x ≥ 0, -1 for x < 0). Implementing this algorithm requires (a code sketch of the resulting binary search follows the list):
  1. An input voltage source Vin.
  2. A reference voltage source Vref to normalize the input.
  3. A DAC to convert the ith approximation xi to a voltage.
  4. A Comparator to perform the function s(xi - x) by comparing the DAC's voltage with the input voltage.
  5. A register to store the output of the comparator and apply x_(i-1) - s(x_(i-1) - x) / 2^i.
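
That binary search can be modeled in a few lines of C. The real SAR logic is hardware; this sketch only mirrors the algorithm, and dac_compare() is a software stand-in that returns 1 when the internal DAC output for a given code exceeds the sampled input.

#include <stdint.h>
#include <stdio.h>

#define N_BITS 10
#define VREF   5.0

static double vin = 3.217;   /* sampled input voltage under test (arbitrary) */

/* Software stand-in for the comparator + internal DAC. */
static int dac_compare(uint16_t code)
{
    double dac_out = VREF * code / (1 << N_BITS);
    return dac_out > vin;
}

static uint16_t sar_convert(void)
{
    uint16_t code = 0;
    /* Test one bit per clock, MSB first: tentatively set the bit, and clear
     * it again if the DAC output overshoots the input (binary search).      */
    for (int bit = N_BITS - 1; bit >= 0; bit--) {
        code |= (uint16_t)(1u << bit);
        if (dac_compare(code))
            code &= (uint16_t)~(1u << bit);
    }
    return code;                            /* digital approximation of Vin  */
}

int main(void)
{
    printf("Vin = %.3f V -> code %u of %u\n",
           vin, (unsigned)sar_convert(), (1u << N_BITS) - 1);
    return 0;
}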

Charge-Redistribution Successive Approximation ADC
One of the most common implementations of the successive approximation ADC, the charge-redistribution successive approximation ADC, uses a charge scaling DAC. The charge scaling DAC simply consists of an array of individually switched binary-weighted capacitors. The amount of charge upon each capacitor in the array is used to perform the aforementioned binary search in conjunction with a comparator internal to the DAC and the successive approximation register.

The DAC conversion is performed in four basic steps.
  1. First, the capacitor array is completely discharged to the offset voltage of the comparator, VOS. This step provides automatic offset cancellation (i.e., the offset voltage represents nothing but dead charge which can't be juggled by the capacitors).
  2. Next, all of the capacitors within the array are switched to the input signal, vIN. The capacitors now have a charge equal to their respective capacitance times the input voltage minus the offset voltage upon each of them.
  3. In the third step, the capacitors are then switched so that this charge is applied across the comparator's input, creating a comparator input voltage equal to -vIN.
  4. Finally, the actual conversion process proceeds. First, the MSB capacitor is switched to VREF, which corresponds to the full-scale range of the ADC. Due to the binary weighting of the array, the MSB capacitor forms a 1:1 charge divider between it and the rest of the array. Thus, the input voltage to the comparator is now -vIN plus VREF/2. Subsequently, if vIN is greater than VREF/2 then the comparator outputs a digital 1 as the MSB, otherwise it outputs a digital 0 as the MSB. Each capacitor is tested in the same manner until the comparator input voltage converges to the offset voltage, or at least as close as possible given the resolution of the DAC.

Charge Scaling DAC


Reference:

  1. http://en.wikipedia.org

Friday, February 13, 2009

A Flash Analog to Digital Converter (ADC)


A Flash ADC (also known as a direct conversion ADC) is a type of analog-to-digital converter that uses a linear voltage ladder with a comparator at each "rung" of the ladder to compare the input voltage to successive reference voltages. Often these reference ladders are constructed of many resistors; however, modern implementations show that capacitive voltage division is also possible. The output of these comparators is generally fed into a digital encoder, which converts the inputs into a binary value (the collected outputs from the comparators can be thought of as a unary value).

Benefits and drawbacks
Flash converters are extremely fast compared to many other types of ADCs, which usually narrow in on the "correct" answer over a series of stages. Compared to these, a Flash converter is also quite simple and, apart from the analog comparators, only requires logic for the final conversion to binary.

A Flash converter requires a huge number of comparators compared to other ADCs, especially as the precision increases. A Flash converter requires 2^n - 1 comparators for an n-bit conversion. The size and cost of all those comparators makes Flash converters generally impractical for precisions much greater than 8 bits (255 comparators). In place of these comparators, most other ADCs substitute more complex logic which can be scaled more easily for increased precision.
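
The comparator count and the encoding step can be modeled in a few lines of C: 2^n - 1 comparators produce a thermometer code, and the encoder simply counts how many ladder taps lie below the input. The 3-bit resolution and 1.0 V reference below are illustrative only.

#include <stdio.h>

#define N_BITS 3                          /* 3-bit flash: 7 comparators     */
#define VREF   1.0

/* Toy model of a flash ADC: comparator k fires when Vin is above tap k of
 * the resistor ladder; the binary output is the count of firing comparators. */
static int flash_convert(double vin)
{
    int n_comp = (1 << N_BITS) - 1;       /* 2^n - 1 comparators            */
    int ones = 0;
    for (int k = 1; k <= n_comp; k++) {
        double tap = VREF * k / (1 << N_BITS);   /* ladder reference k      */
        if (vin > tap)
            ones++;                       /* thermometer code '1'           */
    }
    return ones;                          /* encoder: number of ones        */
}

int main(void)
{
    printf("Vin = 0.60 V -> code %d\n", flash_convert(0.60));
    printf("comparators needed for 8 bits: %d\n", (1 << 8) - 1);
    return 0;
}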

Implementation
Flash ADCs have been implemented in many technologies, ranging from silicon-based bipolar (BJT) and complementary metal-oxide-semiconductor (CMOS) technologies to the rarely used III-V technologies. Often this type of ADC is used as a first medium-sized analog circuit for verification.

The earliest implementations consisted of a reference ladder of well-matched resistors connected to a reference voltage. Each tap of the resistor ladder is used for one comparator, possibly preceded by an amplification stage, and thus generates a logical '0' or '1' depending on whether the measured voltage is above or below the reference voltage of the resistor tap. The reason to add an amplifier is twofold: it amplifies the voltage difference and therefore suppresses the offset of the comparator, and the kick-back noise of the comparator towards the reference ladder is also strongly suppressed. Typically, designs from 4 bits up to 6 bits, and sometimes 7 bits, are produced.

To save some power, designs with capacitive reference ladders have been demonstrated. Instead of only clocking the comparator(s), the input stage also samples its reference value. As the sampling is done at a very high rate, the leakage of the capacitors is negligible.

Recently, offset calibration has been introduced into flash ADC designs. Instead of conservatively designing the analog circuit (which in practice means increasing the component sizes to suppress variation), the offset is removed during use. A test signal is applied and the offset of each comparator is calibrated to below the LSB size of the ADC. Due to the heavy calibration effort, such designs have so far been limited to 4 bits.

Folding ADC
The number of comparators can be reduced somewhat by adding a folding circuit in front, making a so-called folding ADC. Instead of using the comparators in a Flash ADC only once, during a ramp input signal, the folding ADC re-uses the comparators multiple times. If an m-times folding circuit is used in an n-bit ADC, the actual number of comparators can be reduced from 2^n - 1 to 2^n/m (there is always one needed to detect the range crossover). Typical folding circuits are, for example, the Gilbert multiplier or analog wired-OR circuits.

Application
The very high sample rates of this type of ADC enable Gigahertz applications like radar detection, wide band radio receivers and optical communication links. More often the flash ADC is embedded in a large IC containing many digital decoding functions. Also a small flash ADC circuit may be present inside a Delta-sigma modulation loop.

Reference:
  1. http://en.wikipedia.org

Thursday, February 12, 2009

Applications Analog to Digital Converter


Application to music recording
ADCs are integral to current music reproduction technology. Since much music production is done on computers, when an analog recording is used, an ADC is needed to create the PCM data stream that goes onto a compact disc.

The current crop of AD converters used in music can sample at rates up to 192 kilohertz. Many people in the business consider this overkill and pure marketing hype, citing the Nyquist-Shannon sampling theorem. Simply put, they say the analog waveform does not have enough information in it to necessitate such high sampling rates, and typical recording techniques for high-fidelity audio are usually sampled at either 44.1 kHz (the standard for CD) or 48 kHz (commonly used for radio/TV broadcast applications). However, this kind of bandwidth headroom allows the use of cheaper or faster anti-aliasing filters with less severe filtering slopes. The proponents of oversampling assert that such shallower anti-aliasing filters produce less deleterious effects on sound quality, exactly because of their gentler slopes. Others prefer entirely filterless AD conversion, arguing that aliasing is less detrimental to sound perception than pre-conversion brickwall filtering. Considerable literature exists on these matters, but commercial considerations often play a significant role. Most high-profile recording studios record in 24-bit/192-176.4 kHz PCM or in DSD formats, and then downsample or decimate the signal for Red-Book CD production.

Other applications
AD converters are used virtually everywhere an analog signal has to be processed, stored, or transported in digital form. Fast video ADCs are used, for example, in TV tuner cards. Slow on-chip 8-, 10-, 12-, or 16-bit ADCs are common in microcontrollers. Very fast ADCs are needed in digital oscilloscopes, and are crucial for new applications like software-defined radio.

Reference:
  1. http://en.wikipedia.org

Wednesday, February 11, 2009

Commercial Analog to Digital Converters


These are usually integrated circuits. Most converters sample with 6 to 24 bits of resolution and produce fewer than 1 megasample per second. Thermal noise generated by passive components such as resistors masks the measurement when higher resolution is desired. For audio applications and at room temperature, such noise is usually a little less than 1 μV (microvolt) of white noise. If the most significant bit corresponds to a standard 2 volts of output signal, this translates to a noise-limited performance that is less than 20 to 21 bits, and obviates the need for any dithering. Mega- and gigasample-per-second converters are available, though (Feb 2002). Megasample converters are required in digital video cameras, video capture cards, and TV tuner cards to convert full-speed analog video to digital video files. Commercial converters usually have ±0.5 to ±1.5 LSB error in their output.

In many cases the most expensive part of an integrated circuit is the pins, because they make the package larger, and each pin has to be connected to the integrated circuit's silicon. To save pins, it's common for slow ADCs to send their data one bit at a time over a serial interface to the computer, with the next bit coming out when a clock signal changes state, say from zero to 5V. This saves quite a few pins on the ADC package, and in many cases, does not make the overall design any more complex. (Even microprocessors which use memory-mapped IO only need a few bits of a port to implement a serial bus to an ADC.)
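
As a rough sketch of the serial scheme just described, the following C fragment clocks data out of a generic serial ADC one bit at a time. SCLK_HIGH, SCLK_LOW, and MISO_READ are software stand-ins for whatever pin operations a given microcontroller provides, and the 12-bit, MSB-first framing is only an assumption, not a specific part's protocol.

#include <stdint.h>
#include <stdio.h>

/* Software stand-ins for the pin operations; on a real microcontroller
 * these would toggle the clock line and read the ADC's data pin.           */
static uint16_t shift_reg = 0x5A3;        /* pretend conversion result       */
static int      bits_left = 12;

static void SCLK_HIGH(void) { /* clock edge: ADC presents the next bit */ }
static void SCLK_LOW(void)  { bits_left--; }
static int  MISO_READ(void) { return (shift_reg >> (bits_left - 1)) & 1; }

/* Read one 12-bit result, MSB first, one bit per clock cycle. */
static uint16_t serial_adc_read(void)
{
    uint16_t result = 0;
    for (int i = 0; i < 12; i++) {
        SCLK_HIGH();
        result = (uint16_t)((result << 1) | (MISO_READ() & 1));
        SCLK_LOW();
    }
    return result;
}

int main(void)
{
    printf("received code: 0x%03X\n", (unsigned)serial_adc_read());
    return 0;
}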

Commercial ADCs often have several inputs that feed the same converter, usually through an analog multiplexer. Different models of ADC may include sample and hold circuits, instrumentation amplifiers or differential inputs, where the quantity measured is the difference between two voltages.

Reference:
  1. http://en.wikipedia.org

Analog to Digital Converter (ADC) Structures



These are the most common ways of implementing an electronic ADC:
  1. A direct conversion ADC or flash ADC has a bank of comparators, each firing for their decoded voltage range. The comparator bank feeds a logic circuit that generates a code for each voltage range. Direct conversion is very fast, but usually has only 8 bits of resolution (255 comparators, since the number of comparators required is 2^n - 1) or fewer, as it needs a large, expensive circuit. ADCs of this type have a large die size, a high input capacitance, and are prone to produce glitches on the output (by outputting an out-of-sequence code). Scaling to newer submicron technologies does not help, as device mismatch is the dominant design limitation. They are often used for video, wideband communications or other fast signals in optical storage.
  2. A successive approximation ADC uses a comparator to reject ranges of voltages, eventually settling on a final voltage range. Successive approximation works by constantly comparing the input voltage to the output of an internal digital to analog converter (DAC, fed by the current value of the approximation) until the best approximation is achieved. At each step in this process, a binary value of the approximation is stored in a successive approximation register (SAR). The SAR uses a reference voltage (which is the largest signal the ADC is to convert) for comparisons. For example, if the input voltage is 60 V and the reference voltage is 100 V, then in the 1st clock cycle 60 V is compared to 50 V (the reference divided by two; this is the voltage at the output of the internal DAC when the input is a '1' followed by zeros), and the voltage from the comparator is positive (or '1') because 60 V is greater than 50 V. At this point the first binary digit (MSB) is set to a '1'. In the 2nd clock cycle the input voltage is compared to 75 V (halfway between 100 V and 50 V; this is the output of the internal DAC when its input is '11' followed by zeros). Because 60 V is less than 75 V, the comparator output is now negative (or '0'), so the second binary digit is set to a '0'. In the 3rd clock cycle, the input voltage is compared with 62.5 V (halfway between 50 V and 75 V; this is the output of the internal DAC when its input is '101' followed by zeros). The output of the comparator is negative (or '0') because 60 V is less than 62.5 V, so the third binary digit is set to a '0'. The fourth clock cycle similarly results in the fourth digit being a '1' (60 V is greater than 56.25 V, the DAC output for '1001' followed by zeros). The result of this would be the binary form 1001. This is also called bit-weighting conversion, and is similar to a binary search. The analogue value is rounded to the nearest binary value below, meaning this converter type is mid-rise (see above). Because the approximations are successive (not simultaneous), the conversion takes one clock cycle for each bit of resolution desired. The clock frequency must be equal to the sampling frequency multiplied by the number of bits of resolution desired. For example, to sample audio at 44.1 kHz with 32-bit resolution, a clock frequency of over 1.4 MHz would be required. ADCs of this type have good resolutions and quite wide ranges. They are more complex than some other designs.
  3. A ramp-compare ADC (also called integrating, dual-slope or multi-slope ADC) produces a saw-tooth signal that ramps up, then quickly falls to zero. When the ramp starts, a timer starts counting. When the ramp voltage matches the input, a comparator fires, and the timer's value is recorded. Timed ramp converters require the least number of transistors. The ramp time is sensitive to temperature because the circuit generating the ramp is often just some simple oscillator. There are two solutions: use a clocked counter driving a DAC and then use the comparator to preserve the counter's value, or calibrate the timed ramp. A special advantage of the ramp-compare system is that comparing a second signal just requires another comparator, and another register to store the voltage value. A very simple (non-linear) ramp-converter can be implemented with a microcontroller and one resistor and capacitor. Vice versa a filled capacitor can be taken from an integrator, time-to-amplitude converter, phase detector, sample and hold circuit, or peak and hold circuit and discharged. This has the advantage that a slow comparator cannot be disturbed by fast input changes.
  4. A delta-encoded ADC has an up-down counter that feeds a digital to analog converter (DAC). The input signal and the DAC both go to a comparator. The comparator controls the counter. The circuit uses negative feedback from the comparator to adjust the counter until the DAC's output is close enough to the input signal. The number is read from the counter. Delta converters have very wide ranges, and high resolution, but the conversion time is dependent on the input signal level, though it will always have a guaranteed worst-case. Delta converters are often very good choices to read real-world signals. Most signals from physical systems do not change abruptly. Some converters combine the delta and successive approximation approaches; this works especially well when high frequencies are known to be small in magnitude.
  5. A pipeline ADC (also called subranging quantizer) uses two or more steps of subranging. First, a coarse conversion is done. In a second step, the difference to the input signal is determined with a digital to analog converter (DAC). This difference is then converted finer, and the results are combined in a last step. This can be considered a refinement of the successive approximation ADC wherein the feedback reference signal consists of the interim conversion of a whole range of bits (for example, four bits) rather than just the next-most-significant bit. By combining the merits of the successive approximation and flash ADCs this type is fast, has a high resolution, and only requires a small die size.
  6. A Sigma-Delta ADC (also known as a Delta-Sigma ADC) oversamples the desired signal by a large factor and filters the desired signal band. Generally a smaller number of bits than required are converted using a Flash ADC after the filter. The resulting signal, along with the error generated by the discrete levels of the Flash, is fed back and subtracted from the input to the filter. This negative feedback has the effect of noise shaping the error due to the Flash so that it does not appear in the desired signal frequencies. A digital filter (decimation filter) follows the ADC; it reduces the sampling rate, filters off unwanted noise, and increases the resolution of the output (sigma-delta modulation, also called delta-sigma modulation). A minimal code sketch of this idea follows the list.
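
As promised above, here is a toy first-order sigma-delta modulator with a crude averaging decimator, written only to illustrate the oversample-and-feedback idea from the last list item. The oversampling ratio and the DC input are arbitrary, and real converters use higher-order loops and proper decimation filters rather than a plain average.

#include <stdio.h>

#define OSR 64                              /* oversampling ratio (arbitrary) */

int main(void)
{
    double input = 0.3;                     /* DC input, normalized to [-1,1] */
    double integrator = 0.0;                /* loop filter state              */
    double feedback = 0.0;                  /* previous 1-bit DAC output      */

    /* First-order modulator: integrate the error between the input and the
     * fed-back 1-bit DAC value, then quantize with a comparator.             */
    int ones = 0;
    for (int i = 0; i < OSR; i++) {
        integrator += input - feedback;     /* integrate the error            */
        int bit = (integrator >= 0.0);      /* comparator (1-bit quantizer)   */
        feedback = bit ? 1.0 : -1.0;        /* 1-bit DAC fed back             */
        ones += bit;
    }

    /* Crude decimation: the average of the bitstream approximates the input. */
    double estimate = 2.0 * ones / OSR - 1.0;
    printf("input = %.3f, decimated estimate = %.3f\n", input, estimate);
    return 0;
}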

There can be other ADCs that use a combination of electronics and other technologies:
  1. A Time-stretch analog-to-digital converter (TS-ADC) digitizes a very wide bandwidth analog signal, which cannot be digitized by a conventional electronic ADC, by time-stretching the signal prior to digitization. It commonly uses a photonic preprocessor frontend to time-stretch the signal, which effectively slows the signal down in time and compresses its bandwidth. As a result, an electronic backend ADC, that would have been too slow to capture the original signal, can now capture this slowed down signal. For continuous capture of the signal, the frontend also divides the signal into multiple segments in addition to time stretching. A separate electronic ADC individually digitizes each segment. Finally, a digital signal processor rearranges the samples and removes any distortions added by the frontend to yield the binary data that is the digital representation of the original analog signal.

Reference:
  1. http://en.wikipedia.org

Monday, February 9, 2009

Response Type of Analog to Digital Converter (ADC)


Linear ADCs
Most ADCs are of a type known as linear, although analog-to-digital conversion is an inherently non-linear process (since the mapping of a continuous space to a discrete space is a piecewise-constant and therefore non-linear operation). The term linear as used here means that the range of the input values that map to each output value has a linear relationship with the output value, i.e., that the output value k is used for the range of input values from m(k + b) to m(k + 1 + b), where m and b are constants. Here b is typically 0 or −0.5. When b = 0, the ADC is referred to as mid-rise, and when b = −0.5 it is referred to as mid-tread.
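
The mapping described above can be written out directly; in the toy C function below, b = 0 gives the mid-rise behaviour and b = -0.5 the mid-tread behaviour, and the step size m is arbitrary.

#include <math.h>
#include <stdio.h>

/* Output value k such that the input lies in [m*(k + b), m*(k + 1 + b)). */
static long quantize(double input, double m, double b)
{
    return (long)floor(input / m - b);
}

int main(void)
{
    double m = 0.25;                        /* arbitrary step size            */
    for (double x = 0.0; x <= 1.0; x += 0.125)
        printf("x = %.3f -> mid-rise k = %ld, mid-tread k = %ld\n",
               x, quantize(x, m, 0.0), quantize(x, m, -0.5));
    return 0;
}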

Non-linear ADCs
If the probability density function of a signal being digitized is uniform, then the signal-to-noise ratio relative to the quantization noise is the best possible. Because of this, it's usual to pass the signal through its cumulative distribution function (CDF) before the quantization. This is good because the regions that are more important get quantized with a better resolution. In the dequantization process, the inverse CDF is needed. This is the same principle behind the companders used in some tape-recorders and other communication systems, and is related to entropy maximization. (Never confuse companders with compressors!)

For example, a voice signal has a Laplacian distribution. This means that the region around the lowest levels, near 0, carries more information than the regions with higher amplitudes. Because of this, logarithmic ADCs are very common in voice communication systems to increase the dynamic range of the representable values while retaining fine-granular fidelity in the low-amplitude region. An eight-bit A-law or μ-law logarithmic ADC covers a wide dynamic range and has high resolution in the critical low-amplitude region, which would otherwise require a 12-bit linear ADC.
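
A hedged sketch of this idea in C: compress a normalized sample with the standard μ-law curve (μ = 255) before a uniform 8-bit quantizer, so that small amplitudes receive finer effective steps. This uses the textbook companding formula, not the exact encoding of any particular telephony codec.

#include <math.h>
#include <stdio.h>

#define MU 255.0

/* mu-law compression of a normalized sample x in [-1, 1]. */
static double mulaw_compress(double x)
{
    double sign = (x < 0.0) ? -1.0 : 1.0;
    return sign * log(1.0 + MU * fabs(x)) / log(1.0 + MU);
}

int main(void)
{
    /* Quantize the compressed value with a uniform 8-bit quantizer.         */
    double samples[] = { 0.001, 0.01, 0.1, 1.0 };
    for (int i = 0; i < 4; i++) {
        double y = mulaw_compress(samples[i]);
        int code = (int)lround((y + 1.0) / 2.0 * 255.0);   /* map to 0..255  */
        printf("x = %.3f -> compressed %.4f -> code %d\n", samples[i], y, code);
    }
    return 0;
}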

Accuracy
An ADC has several sources of errors. Quantization error and (assuming the ADC is intended to be linear) non-linearity are intrinsic to any analog-to-digital conversion. There is also a so-called aperture error, which is due to clock jitter and is revealed when digitizing a time-variant signal (not a constant value). These errors are measured in a unit called the LSB, which is an abbreviation for least significant bit. In the above example of an eight-bit ADC, an error of one LSB is 1/256 of the full signal range, or about 0.4%.

Quantization error
Quantization error is due to the finite resolution of the ADC, and is an unavoidable imperfection in all types of ADC. The magnitude of the quantization error at the sampling instant is between zero and half of one LSB. In the general case, the original signal is much larger than one LSB. When this happens, the quantization error is not correlated with the signal, and has a uniform distribution. Its RMS value is the standard deviation of this distribution, given by

(1/√12)LSB ≈ 0.289 LSB

In the eight-bit ADC example, this represents 0.113% of the full signal range. At lower levels the quantizing error becomes dependent on the input signal, resulting in distortion. This distortion is created after the anti-aliasing filter, and if these distortion products are above 1/2 the sample rate they will alias back into the audio band. In order to make the quantizing error independent of the input signal, noise with an amplitude of one quantization step is added to the signal. This slightly reduces the signal-to-noise ratio, but completely eliminates the distortion. It is known as dither.

Non-linearity
All ADCs suffer from non-linearity errors caused by their physical imperfections, causing their output to deviate from a linear function (or some other function, in the case of a deliberately non-linear ADC) of their input. These errors can sometimes be mitigated by calibration, or prevented by testing. Important parameters for linearity are integral non-linearity (INL) and differential non-linearity (DNL). These non-linearities reduce the dynamic range of the signals that can be digitized by the ADC, also reducing the effective resolution of the ADC.

Aperture error
Imagine that we are digitizing a sine wave x(t) = A·sin(2π·f0·t). Provided that the actual sampling time uncertainty due to clock jitter is Δt, the error caused by this phenomenon can be estimated as

Eap ≤ | x1 (t) Δt | ≤ 2A π ∫0 Δt

One can see that the error is relatively small at low frequencies, but can become significant at high frequencies. The effect can be ignored if it is small compared with the quantization error. The jitter requirement can be calculated from the following formula:

Δt < 1 / (2^q · π · f₀)

where q is the number of ADC bits and f₀ is the input frequency.

Maximum allowable clock jitter Δt versus ADC resolution and input frequency:

Resolution | 1 Hz     | 44.1 kHz | 192 kHz  | 1 MHz    | 10 MHz   | 100 MHz  | 1 GHz
 8 bit     | 1243 µs  | 28.2 ns  | 6.48 ns  | 1.24 ns  | 124 ps   | 12.4 ps  | 1.24 ps
10 bit     | 311 µs   | 7.05 ns  | 1.62 ns  | 311 ps   | 31.1 ps  | 3.11 ps  | 0.31 ps
12 bit     | 77.7 µs  | 1.76 ns  | 405 ps   | 77.7 ps  | 7.77 ps  | 0.78 ps  | 0.08 ps
14 bit     | 19.4 µs  | 441 ps   | 101 ps   | 19.4 ps  | 1.94 ps  | 0.19 ps  | 0.02 ps
16 bit     | 4.86 µs  | 110 ps   | 25.3 ps  | 4.86 ps  | 0.49 ps  | 0.05 ps  | -
18 bit     | 1.21 µs  | 27.5 ps  | 6.32 ps  | 1.21 ps  | 0.12 ps  | -        | -
20 bit     | 304 ns   | 6.88 ps  | 1.58 ps  | 0.30 ps  | -        | -        | -
24 bit     | 19.0 ns  | 0.43 ps  | 0.10 ps  | -        | -        | -        | -
32 bit     | 74.1 ps  | -        | -        | -        | -        | -        | -

This table shows, for example, that it is not worth using a precise 24-bit ADC for sound recording unless an ultra-low-jitter clock is available. This phenomenon should be taken into account before choosing an ADC.
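As a quick sanity check of the table, the jitter bound above can be evaluated directly. The following is a minimal Python sketch; the function name is mine, not from any library.

import math

def max_jitter(bits, f_in_hz):
    """Aperture-jitter bound dt < 1 / (2**q * pi * f) from the formula above."""
    return 1.0 / (2**bits * math.pi * f_in_hz)

# Reproduce a couple of table entries:
print(max_jitter(16, 44_100))   # ~1.10e-10 s, i.e. about 110 ps
print(max_jitter(24, 192_000))  # ~9.9e-14 s, i.e. about 0.10 ps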

Sampling rate
The analog signal is continuous in time and it is necessary to convert this to a flow of digital values. It is therefore required to define the rate at which new digital values are sampled from the analog signal. The rate of new values is called the sampling rate or sampling frequency of the converter.

A continuously varying bandlimited signal can be sampled (that is, the signal values at intervals of time T, the sampling time, are measured and stored) and then the original signal can be exactly reproduced from the discrete-time values by an interpolation formula. The accuracy is limited by quantization error. However, this faithful reproduction is only possible if the sampling rate is higher than twice the highest frequency of the signal. This is essentially what is embodied in the Shannon-Nyquist sampling theorem.

Since a practical ADC cannot make an instantaneous conversion, the input value must necessarily be held constant during the time that the converter performs a conversion (called the conversion time). An input circuit called a sample and hold performs this task—in most cases by using a capacitor to store the analogue voltage at the input, and using an electronic switch or gate to disconnect the capacitor from the input. Many ADC integrated circuits include the sample and hold subsystem internally.

Aliasing
All ADCs work by sampling their input at discrete intervals of time. Their output is therefore an incomplete picture of the behavior of the input. There is no way of knowing, by looking at the output, what the input was doing between one sampling instant and the next. If the input is known to be changing slowly compared to the sampling rate, then it can be assumed that the value of the signal between two sample instants was somewhere between the two sampled values. If, however, the input signal is changing fast compared to the sample rate, then this assumption is not valid.

If the digital values produced by the ADC are, at some later stage in the system, converted back to analog values by a digital-to-analog converter (DAC), it is desirable that the output of the DAC be a faithful representation of the original signal. If the input signal is changing much faster than the sample rate, then this will not be the case, and spurious signals called aliases will be produced at the output of the DAC. The frequency of the aliased signal is the difference between the signal frequency and the nearest multiple of the sampling rate. For example, a 2 kHz sine wave sampled at 1.5 kHz would be reconstructed as a 500 Hz sine wave. This problem is called aliasing.
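The 2 kHz / 1.5 kHz example can be verified numerically. The sketch below (Python with NumPy; the parameters are simply those of the example) shows that the samples of the 2 kHz tone are indistinguishable from those of a 500 Hz tone taken at the same instants.

import numpy as np

fs = 1_500        # sampling rate in Hz
f_signal = 2_000  # input tone in Hz
f_alias = abs(f_signal - fs)  # expected alias: 500 Hz

n = np.arange(16)             # a few sample instants
t = n / fs
samples = np.cos(2 * np.pi * f_signal * t)
alias = np.cos(2 * np.pi * f_alias * t)

# The two sets of samples are indistinguishable:
print(np.allclose(samples, alias))  # True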

To avoid aliasing, the input to an ADC must be low-pass filtered to remove frequencies above half the sampling rate. This filter is called an anti-aliasing filter, and is essential for a practical ADC system that is applied to analog signals with higher frequency content. Although aliasing in most systems is unwanted, it should also be noted that it can be exploited to provide simultaneous down-mixing of a band-limited high frequency signal (see frequency mixer).

Dither
In A-to-D converters, performance can usually be improved using dither. This is a very small amount of random noise (e.g. white noise) added to the input before conversion, with an amplitude set to about half of the least significant bit. Its effect is to make the state of the LSB oscillate randomly between 0 and 1 in the presence of very low input levels, rather than sticking at a fixed value. Instead of the signal being cut off altogether at such low levels (where it would effectively be quantized to a resolution of only 1 bit), dither extends the effective range of signals the A-to-D converter can convert, at the expense of a slight increase in noise: the quantization error is diffused across a series of noise values, which is far less objectionable than a hard cutoff. The result is an accurate representation of the signal over time, and a suitable filter at the output of the system can recover this small signal variation.

An audio signal of very low level (with respect to the bit depth of the ADC) sampled without dither sounds extremely distorted and unpleasant: without dither, such a signal simply causes the least significant bit to stick at 0 or 1. With dithering, the true level of the audio is still recorded as a series of values over time, rather than being lost in a single stuck code.
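The effect is easy to reproduce numerically. In the rough Python sketch below (NumPy assumed; the amplitudes and the crude averaging filter are arbitrary choices of mine), a sine of only 0.3 LSB amplitude disappears completely when quantized directly, but survives low-pass averaging when roughly half an LSB of dither is added first.

import numpy as np

rng = np.random.default_rng(1)
lsb = 1.0                      # work in units of 1 LSB
t = np.linspace(0, 1, 10_000, endpoint=False)
signal = 0.3 * lsb * np.sin(2 * np.pi * 5 * t)   # well below 1 LSB

def quantize(x):
    return np.round(x / lsb) * lsb

hard = quantize(signal)        # every sample rounds to 0: the signal is lost
dithered = quantize(signal + rng.uniform(-0.5, 0.5, t.size) * lsb)

# Low-pass filtering (here: a crude moving average) recovers the sine from
# the dithered stream, while the undithered stream stays at zero.
kernel = np.ones(200) / 200
print(np.ptp(np.convolve(hard, kernel, mode='same')))      # ~0
print(np.ptp(np.convolve(dithered, kernel, mode='same')))  # ~0.6, the sine's swing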

A virtually identical process, also called dither or dithering, is often used when quantizing photographic images to fewer bits per pixel: the image becomes noisier but to the eye looks far more realistic than the directly quantized image, which otherwise becomes banded. This analogous process may help to visualize the effect of dither on an analogue audio signal that is converted to digital.

Dithering is also used in integrating systems such as electricity meters. Since the values are added together, the dithering produces results that are more exact than the LSB of the analog-to-digital converter. Note that dither can only increase the resolution of a sampler; it cannot improve the linearity, so accuracy does not necessarily improve.

Oversampling
Usually, signals are sampled at the minimum rate required, for economy, with the result that the quantization noise introduced is white noise spread over the whole pass band of the converter. If a signal is instead sampled at a rate much higher than the Nyquist rate and then digitally filtered down to the signal bandwidth, there are three main advantages:
  1. Digital filters can have better properties (sharper roll-off, phase) than analogue filters, so a sharper anti-aliasing filter can be realized and the signal can then be downsampled, giving a better result.
  2. A 20-bit ADC can be made to act as a 24-bit ADC with 256x oversampling (see the sketch after this list).
  3. The signal-to-noise ratio due to quantization noise will be higher than if the whole available band had been used; with this technique, it is possible to obtain an effective resolution larger than that provided by the converter alone.
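The 20-bit/256x figure in item 2 follows from the usual rule of thumb that, without noise shaping, quantization-noise SNR improves by about 3 dB (half a bit) for every doubling of the sample rate. The rule itself is standard rather than taken from this article; a minimal sketch:

import math

def extra_bits(oversampling_ratio):
    """Rule of thumb: 0.5 extra effective bit per octave of oversampling
    (plain oversampling plus decimation, no noise shaping)."""
    return 0.5 * math.log2(oversampling_ratio)

print(extra_bits(256))  # 4.0 -> a 20-bit converter behaves like a 24-bit one
print(extra_bits(4))    # 1.0 -> one extra effective bit per 4x oversampling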

Reference:
  1. http://en.wikipedia.org

Sunday, February 8, 2009

Concepts of Analog to Digital Converter (ADC)


Resolution
The resolution of the converter indicates the number of discrete values it can produce over the range of analog values. The values are usually stored electronically in binary form, so the resolution is usually expressed in bits. In consequence, the number of discrete values available, or "levels", is usually a power of two. For example, an ADC with a resolution of 8 bits can encode an analog input to one of 256 different levels, since 2^8 = 256. The values can represent the range from 0 to 255 (i.e. unsigned integer) or from −128 to 127 (i.e. signed integer), depending on the application.

Resolution can also be defined electrically, and expressed in volts. The voltage resolution of an ADC is equal to its overall voltage measurement range divided by the number of discrete intervals as in the formula:

Q = E_FSR / 2^M = E_FSR / N

Where:
Q is the resolution in volts per step (volts per output code),
E_FSR is the full-scale voltage range = V_RefHi − V_RefLo,
M is the ADC's resolution in bits, and
N is the number of intervals, given by the number of available levels (output codes): N = 2^M

Some examples may help:

Example 1
Full-scale measurement range = 0 to 10 volts
ADC resolution is 12 bits: 2^12 = 4096 quantization levels (codes)
ADC voltage resolution: (10 V − 0 V) / 4096 codes ≈ 0.00244 volts/code ≈ 2.44 mV/code

Example 2
Full-scale measurement range = −10 to +10 volts
ADC resolution is 14 bits: 2^14 = 16384 quantization levels (codes)
ADC voltage resolution: (10 V − (−10 V)) / 16384 codes = 20 V / 16384 codes ≈ 0.00122 volts/code ≈ 1.22 mV/code

Example 3
Full-scale measurement range = 0 to 8 volts
ADC resolution is 3 bits: 2^3 = 8 quantization levels (codes)
ADC voltage resolution: (8 V − 0 V) / 8 codes = 1 volt/code = 1000 mV/code
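The three examples can be reproduced with a one-line helper (a Python sketch of mine, not part of any library):

def voltage_resolution(v_ref_lo, v_ref_hi, bits):
    """Q = E_FSR / 2**M, in volts per code."""
    return (v_ref_hi - v_ref_lo) / 2**bits

print(voltage_resolution(0.0, 10.0, 12))    # ~0.00244 V/code (Example 1)
print(voltage_resolution(-10.0, 10.0, 14))  # ~0.00122 V/code (Example 2)
print(voltage_resolution(0.0, 8.0, 3))      # 1.0 V/code      (Example 3)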

In practice, the smallest output code ("0" in an unsigned system) represents a voltage range that is 0.5× the ADC voltage resolution Q (i.e. half as wide), while the largest output code represents a range that is 1.5× Q (50% wider). The other N − 2 codes are all equal in width and represent the ADC voltage resolution Q calculated above. Doing this centers each code on the input voltage that corresponds to its division of the input range.

For example, in Example 3, with the 3-bit ADC spanning an 8 V range, each of the N divisions would represent 1 V, except the first ("0" code), which is 0.5 V wide, and the last ("7" code), which is 1.5 V wide. The "1" code then spans the range from 0.5 to 1.5 V, the "2" code spans 1.5 to 2.5 V, and so on. Thus, if the input signal is at 3/8 of the full-scale voltage, the ADC outputs the "3" code, and will do so as long as the voltage stays between 2.5/8 and 3.5/8 of full scale. This practice is called "Mid-Tread" operation. This type of ADC can be modeled mathematically as:

ADC Code = ROUND( (V_In − V_RefLo) × 2^M / (V_RefHi − V_RefLo) )

The exception to this convention appears to be the Microchip PIC processor, where all 2^M steps are of equal width. This practice is called "Mid-Rise with Offset" operation:

ADC Code = FLOOR( (V_In − V_RefLo) × 2^M / (V_RefHi − V_RefLo) )
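A rough Python model of the two conventions, using the formulas above (the function names and the clamping to the available codes are my additions):

def adc_mid_tread(v_in, v_ref_lo, v_ref_hi, bits):
    """ROUND form from the text: code boundaries fall midway between code centers."""
    n = 2**bits
    code = round((v_in - v_ref_lo) * n / (v_ref_hi - v_ref_lo))
    return min(max(code, 0), n - 1)   # clamp to the available codes

def adc_mid_rise(v_in, v_ref_lo, v_ref_hi, bits):
    """FLOOR form from the text: all steps of equal width ("mid-rise with offset")."""
    n = 2**bits
    code = int((v_in - v_ref_lo) * n / (v_ref_hi - v_ref_lo))
    return min(max(code, 0), n - 1)

# 3-bit, 0..8 V example: 3.0 V sits in the middle of the "3" code for mid-tread.
print(adc_mid_tread(3.0, 0.0, 8.0, 3))  # 3
print(adc_mid_tread(2.4, 0.0, 8.0, 3))  # 2  (below the 2.5 V boundary)
print(adc_mid_rise(2.4, 0.0, 8.0, 3))   # 2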

In practice, the useful resolution of a converter is limited by the best signal-to-noise ratio that can be achieved for a digitized signal. An ADC can resolve a signal to only a certain number of bits of resolution, called the "effective number of bits" (ENOB). One effective bit of resolution changes the signal-to-noise ratio of the digitized signal by 6 dB, if the ADC limits the resolution. If a preamplifier has been used prior to A/D conversion, the noise introduced by the amplifier can be an important contributing factor towards the overall SNR.
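For reference, the relation behind the "6 dB per bit" statement is the standard SNR ≈ 6.02·N + 1.76 dB for an ideal N-bit converter driven by a full-scale sine wave; this formula is not given in the article, so treat the sketch below as an illustration under that assumption.

def snr_ideal_db(bits):
    """Theoretical SNR of an ideal N-bit ADC with a full-scale sine-wave input."""
    return 6.02 * bits + 1.76

def enob(sinad_db):
    """Effective number of bits recovered from a measured SINAD figure."""
    return (sinad_db - 1.76) / 6.02

print(snr_ideal_db(8))  # ~49.9 dB
print(enob(74.0))       # ~12.0 effective bits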

Reference:
  1. http://en.wikipedia.org

Saturday, February 7, 2009

Analog to Digital Converter (ADC)


An analog-to-digital converter (abbreviated ADC, A/D or A to D) is a device which converts continuous signals to discrete digital numbers. The reverse operation is performed by a digital-to-analog converter (DAC). Typically, an ADC is an electronic device that converts an input analog voltage (or current) to a digital number. However, some non-electronic or only partially electronic devices, such as rotary encoders, can also be considered ADCs. The digital output may use different coding schemes, such as binary, Gray code or two's complement binary.

An analog-to-digital converter is a device mostly used in data acquisition systems. Computers work with discrete data, while physical quantities are analog. A physical quantity is first converted to an electrical quantity (voltage or current) by a sensor/transducer. For a microcontroller to be able to read the analog signal from the sensor, an analog-to-digital converter is needed to change the analog signal into a digital one.

The conversion can be performed with several methods. Several parameters determine the quality of an analog-to-digital converter:
  1. Quantization error
  2. Non-linearity
  3. Missing codes
  4. Conversion time

References:
  1. http://en.wikipedia.org
  2. Wolfgang Link, Pengukuran, Pengaturan, dan Pengontrolan dengan PC (translated to English)

Friday, February 6, 2009

Symbol to Digital Conversion


Since symbols (e.g., alphanumeric characters) are not continuous, converting symbols to digital form is rather simpler and less prone to data loss than analog-to-digital conversion. Instead of sampling and quantization, as in A/D (analog-to-digital) conversion, such techniques as polling and encoding are used.

A symbol input device usually consists of a number of switches that are polled at regular intervals to see which switches are pressed. Data will be lost if, within a single polling interval, two switches are pressed, or a switch is pressed, released, and pressed again. This polling can be done by a specialized processor in the device to prevent burdening the main CPU. When a new symbol has been entered, the device typically sends an interrupt to alert the CPU to read it.

For devices with only a few switches (such as the buttons on a joystick), the status of each can be encoded as bits (usually 0 for released and 1 for pressed) in a single word. This is useful when combinations of key presses are meaningful, and is sometimes used for passing the status of modifier keys on a keyboard (such as shift and control). But it does not scale to support more keys than the number of bits in a single byte or word.
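A toy Python sketch of this bit-packing idea (the button names and bit positions are hypothetical, not taken from any real device):

# Hypothetical bit positions for a joystick with four buttons.
BTN_UP, BTN_DOWN, BTN_LEFT, BTN_RIGHT = (1 << 0), (1 << 1), (1 << 2), (1 << 3)

def encode_buttons(up, down, left, right):
    """Pack pressed (True) / released (False) states into a single status word."""
    word = 0
    if up:    word |= BTN_UP
    if down:  word |= BTN_DOWN
    if left:  word |= BTN_LEFT
    if right: word |= BTN_RIGHT
    return word

status = encode_buttons(up=True, down=False, left=False, right=True)
print(bin(status))            # 0b1001
print(bool(status & BTN_UP))  # combinations of presses are easy to test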

Devices with many switches (such as a computer keyboard) usually arrange these switches in a scan matrix, with the individual switches on the intersections of x and y lines. When a switch is pressed, it connects the corresponding x and y lines together. Polling (often called scanning in this case) is done by activating each x line in sequence and detecting which y lines then have a signal, thus which keys are pressed. When the keyboard processor detects that a key has changed state, it sends a signal to the CPU indicating the scan code of the key and its new state. The symbol is then encoded, or converted into a number, based on the status of modifier keys and the desired character encoding.
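The polling loop itself is simple. The sketch below simulates a 4x4 matrix scan in Python, with the hardware access stubbed out (drive_column and read_rows are hypothetical placeholders for the real port operations):

# A toy 4x4 scan-matrix poll. In real hardware, drive_column() would activate one
# column line and read_rows() would sample the row inputs; here they are stubbed
# with a set of currently pressed keys.

pressed = {(1, 2), (3, 0)}   # (column, row) pairs currently closed

def drive_column(col):
    pass                     # placeholder for activating the column line

def read_rows(col):
    return [1 if (col, row) in pressed else 0 for row in range(4)]

def scan_matrix():
    """Activate each column in turn and report which (column, row) keys are down."""
    keys = []
    for col in range(4):
        drive_column(col)
        for row, level in enumerate(read_rows(col)):
            if level:
                keys.append((col, row))
    return keys

print(scan_matrix())   # [(1, 2), (3, 0)]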

A custom encoding can be used for a specific application with no loss of data. However, using a standard encoding such as ASCII is problematic if a symbol such as 'ß' needs to be converted but is not in the standard.

Reference:
  1. http://en.wikipedia.org

Thursday, February 5, 2009

The Differences Between Digital Native Learners And Digital Immigrant Teachers.


The disconnect between how students learn and how teachers teach is easy to understand when one considers that the current school system was designed for an agrarian and manufacturing world. However, the world has changed and continues to change in a fast-paced manner.

Today’s multitasking students are better equipped for this change than many adults. In fact, researchers Ian Jukes and Anita Dosaj attribute this disconnect to poor communication between “digital natives” (today’s students) and “digital immigrants” (many adults). These parents and educators, the digital immigrants, speak DSL: digital as a second language. Consider the problems caused by the differences between how digital students learn and how non-digital teachers teach.

The differences between digital native learners and digital immigrant teachers.

Digital Native Learners
  1. Prefer receiving information quickly from multiple multimedia sources.
  2. Prefer parallel processing and multitasking.
  3. Prefer processing pictures, sounds and video before text.
  4. Prefer random access to hyper linked multimedia information.
  5. Prefer to interact/network simultaneously with many others.
  6. Prefer to learn “just-in-time.”
  7. Prefer instant gratification and instant rewards.
  8. Prefer learning that is relevant, instantly useful and fun.

Digital Immigrant Teachers
  1. Prefer slow and controlled release of information from limited sources.
  2. Prefer singular processing and single or limited tasking.
  3. Prefer to provide text before pictures, sounds and video.
  4. Prefer to provide information linearly, logically and sequentially.
  5. Prefer students to work independently rather than network and interact.
  6. Prefer to teach, “just-in-case” (it’s on the exam).
  7. Prefer deferred gratification and deferred rewards.
  8. Prefer to teach to the curriculum guide and standardized tests.

References:
  1. Ian Jukes and Anita Dosaj, The InfoSavvy Group, February 2003
  2. http://www.aple.com/au/education/digitalkids/disconnect/landscape.html
  3. http://en.wikipedia.org

Wednesday, February 4, 2009

Digital Native


A digital native is a person who has grown up with digital technology such as computers, the Internet, mobile phones and MP3 players.

Marc Prensky claims to have coined the term digital native as it pertains to a new breed of student entering educational establishments. The term draws an analogy to a country's natives, for whom the local religion, language, and folkways are natural and indigenous, in contrast to immigrants, who are often expected to adapt and assimilate to their newly adopted home. Prensky refers to the "accents" retained by digital immigrants, such as printing documents rather than commenting on screen, or printing out emails to save in hard copy. A digital immigrant is said to have a "thick accent" when operating in the digital world in distinctly pre-digital ways, for instance "dialing" someone on the telephone to ask whether an e-mail was received.

A Digital Native research project is being run jointly by the Berkman Center for Internet & Society at Harvard Law School and the Research Center for Information Law at the University of St. Gallen in Switzerland. Gartner presented on the term at their May 2007 IT Expo (Emerging Trends) Symposium in Barcelona. More recently, Gartner referenced Prensky's work, specifically the 18 areas of change comprising the Work Style of Digital Natives, in their "IT-Based Collaboration and Social Networks Accelerate R&D" research paper published on January 22, 2008.

Discourse
Not everyone agrees with the language and underlying assumptions of the digital native, particularly as it pertains to the concept of their differentiation. There are several reasonable arguments against this differentiation. It suggests a fluency with technology that not all children and young adults have, and a corresponding awkwardness with technology that not all older adults have. It also ignores the fact that the digital universe was conceived of and created by digital immigrants. This could be defended using the "Eloi and Morlocks" argument: society can be split into two groups, the "leisurely" and the "technicians." Most people are "Eloi," and that majority is what the theory of digital natives describes, so it matters little that a previous generation of "Morlocks" initiated the digital revolution. In its application, however, the concept of the digital native should acknowledge the significant differences between technology users and technology creators.

Crucially, there is debate over whether there is any adequate evidence for claims made about digital natives and their implications for education. Bennett, Maton & Kervin (2008), for example, critically review the research evidence and describe some accounts of digital natives as an academic form of a moral panic.

Reference:
  1. http://en.wikipedia.org