This page is intended to study the effect of device mismatch and noise on the INL,
DNL and signal-to-noise ratio of the spectrum of a unit element (R-string) or binary (R2R) DAC.
All calculations are done with JavaScript at the client side. JavaScript source code
is available in the HTML document and can be modified as needed.
To Do: How many DAC simulations should be done for INL, DNL?
The noise amplitude is six times the standard deviation.
8 bits, unit element, random LSB 0.004: abs(INL, DNL) < 0.1; TNM = -58.81
8 bits, unit element, random LSB 0.04: abs(INL, DNL) < 1.5; TNM = -52.89..-56.23
8 bits, unit element, random LSB 0.4: abs(INL, DNL) < 10; TNM = -38.85..-46.54
DAC experiments
Select Number of Bits 8, DAC converter 0, Number of periods 17, and leave the Tolerance and Error section as it is (0.0).
This is the default setting.
R-calculation
Is a limit necessary?
Comparing unit element and binary DAC INL, DNL
Example: 6 bits, 160 runs, Tolerance random 0.01, systematic 0.00.
Ideally, the simulation shows SNR = -9.19 to -46.75 dB.
With random tolerance, the simulation shows SNR = -9.19 to -46.58 dB.
The unit element DAC shows 50 m DNL max, 300 m INL max and SNR = -9.2 to -46.76 dB.
The binary element DAC initially shows 2 DNL max, 1 INL max and SNR = -9.21 to -45.01 dB.
Adding a systematic error of 0.15 gives 300 m DNL max, 250 m INL max and SNR = -9.21 to -46.2 dB.
Adding a systematic error of 0.3 gives 300 m DNL max, 250 m INL max and SNR = -9.2 to -46.48 dB.
A binary element DAC can easily be improved into a unit element DAC.
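For these comparisons, INL and DNL can be computed from a full ramp of DAC output values. A minimal sketch, assuming outputs holds one measured output value per code and using an endpoint-fitted LSB:

```javascript
// Compute INL and DNL (in LSB) from a ramp of DAC outputs.
// outputs[k] is the DAC output for code k.
function inlDnl(outputs) {
  var n = outputs.length;
  // Endpoint fit: the ideal step from the first to the last code.
  var lsb = (outputs[n - 1] - outputs[0]) / (n - 1);
  var inl = [], dnl = [];
  for (var k = 0; k < n; k++) {
    // INL: deviation of code k from the straight endpoint line.
    inl[k] = (outputs[k] - outputs[0]) / lsb - k;
    // DNL: deviation of each actual step from one ideal LSB.
    if (k > 0) { dnl[k - 1] = (outputs[k] - outputs[k - 1]) / lsb - 1; }
  }
  return { inl: inl, dnl: dnl };
}
```

An ideal ramp gives zero INL and DNL everywhere; a step of 2 LSB somewhere in the ramp shows up as DNL = +1 at that code.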
for (var i = 0; i < 2 * nBits; i++) {
    rReal[i] = 1 + randomNormal(nonlinearA) + i * nonlinearB; // R with random and systematic error
    if (rReal[i] < 0.5) { rReal[i] = 0.5; } // limit resistor to half its nominal value
    // if (rReal[i] > 1.5) { rReal[i] = 1.5; }
}
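The helper randomNormal used above is not shown in this excerpt; a possible implementation is the Box-Muller transform, clamped to ±3 sigma so that the total noise amplitude is six times the standard deviation, as noted above:

```javascript
// Possible implementation (assumption: the page's own randomNormal may differ).
// Gaussian random value with standard deviation sigma, via Box-Muller,
// clamped to +-3 sigma (total amplitude 6 sigma).
function randomNormal(sigma) {
  var u = 1 - Math.random(); // in (0, 1], avoids log(0)
  var v = Math.random();
  var z = Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
  return sigma * Math.max(-3, Math.min(3, z)); // clamp to +-3 sigma
}
```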
Voltage calculation
function r2rDAC(nB, xN, rReal) {
    var result = 0;
    var rX = 2;   // equivalent resistance of the ladder processed so far
    var bitX = 1; // mask of the current bit
    // Bit 0: divider between rReal[0] (termination leg) and rReal[1] (bit-0 leg)
    result = (xN & bitX) * rReal[0] / (rReal[0] + rReal[1]);
    bitX = 2 * bitX; // next higher bit
    rX = 2 * rReal[1] * rReal[0] / (rReal[0] + rReal[1]);
    for (var i = 1; i < nB; i++) { // all remaining bits
        result = (xN & bitX) / bitX * (rReal[2 * i] + rX) / (rReal[2 * i] + rX + 2 * rReal[2 * i + 1])
               + result * 2 * rReal[2 * i + 1] / (rReal[2 * i] + rX + 2 * rReal[2 * i + 1]);
        rX = (rX + rReal[2 * i]) * 2 * rReal[2 * i + 1] / (rReal[2 * i] + rX + 2 * rReal[2 * i + 1]);
        bitX = 2 * bitX; // next higher bit
    }
    return result;
}
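The unit element (R-string) counterpart is not shown in this excerpt. A minimal sketch under the same resistor model (rReal holds one value per unit resistor; the function name rStringDAC and the output as a fraction of the reference are assumptions):

```javascript
// Unit element (R-string) DAC: the output for code xN is the tap voltage
// of a string of nCodes mismatched unit resistors, as a fraction of Vref.
function rStringDAC(nCodes, xN, rReal) {
  var below = 0, total = 0;
  for (var i = 0; i < nCodes; i++) {
    total += rReal[i];                 // full string resistance
    if (i < xN) { below += rReal[i]; } // resistance below the tap
  }
  return below / total;
}
```

Since every step adds one positive unit resistor, the output is monotonic for any mismatch.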
Error correction
DNL above 0.5 means the smallest step size is too big; resolution is lost.
DNL below -0.5 means the step size is too small; this code cannot be applied.
A code is missing.
An absolute INL bigger than 0.5 is not tolerable.
Omitting one bit halves the INL and DNL error.
Error correction can be:
Reducing the number of bits:
    Plain power of 2 (Done)
    Using a lookup table
    Using a calculation scheme
Reducing the number of codes:
    Plain power of 2
    Using a lookup table
    Using a calculation scheme
Binary element correction
Discussion using a non power of 2 number of codes
Typically measured values are displayed in a decimal system.
Typical voltage ranges are ±1 V, 2.5 V, 3 V, 5 V, 12 V, 220 V, 280 V.
From this list it looks like a non power of 2 number of codes can be used.
An ideal lookup table can get very big.
DAC Resolution
Accuracy of individual components is important.
Calibration leads to lost resolution, or an additional subranging DAC is needed to compensate for the error.
Non power of 2 DAC Calibration
Estimate a new step size so that the new DNLmax becomes 0.45 (10% guard band below the 0.5 limit).
A unit element DAC is inherently monotonic and its output is already sorted.
A lookup table is generated by finding the nearest neighbor to the ideal value using the new step size.
It can happen that DNL < 0.5 is not achieved and the step size has to be changed (iteration).
A binary element DAC first needs a sorting of the output values and a calculation of the DNL.
Then the same steps as above can be done:
new DNLmax = 0.45 (guard band)
newStep = oldStep * DNLmax / 0.45
Algorithm: object array (code, output); sort by output; DNLmax; newStep; newNrCode;
build lookup array; ramp/sine INL, DNL, FFT calculation.
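The calibration steps above can be sketched as follows (assumptions: measured holds the output value per code, already sorted ascending; dnlMax is the largest measured DNL; the names buildLookup and newNrCode are illustrative):

```javascript
// Build a nearest-neighbor lookup table for non power of 2 calibration.
// Returns lookup[newCode] = old code whose output is closest to the
// ideal value newCode * newStep.
function buildLookup(measured, oldStep, dnlMax) {
  var newStep = oldStep * dnlMax / 0.45; // guard band: new DNLmax = 0.45
  var newNrCode = Math.floor((measured[measured.length - 1] - measured[0]) / newStep) + 1;
  var lookup = [];
  var j = 0; // measured is sorted, so the search index only moves forward
  for (var k = 0; k < newNrCode; k++) {
    var ideal = measured[0] + k * newStep;
    // advance while the next output is closer to the ideal value
    while (j + 1 < measured.length &&
           Math.abs(measured[j + 1] - ideal) < Math.abs(measured[j] - ideal)) { j++; }
    lookup[k] = j;
  }
  return lookup;
}
```

After building the table, the ramp/sine INL, DNL and FFT calculations are repeated on the remapped codes; if DNL < 0.5 is still not reached, the step size is re-estimated (iteration).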
Calibration with power of 2:
The only change is to calculate the new step size differently: DNLnew = 0.5 DNLold
Binary non power of 2 algorithmic calibration
Due to the architecture and the algorithm, negative DNL at bit switching can be compensated.
If DNLmax > 0.5 for algorithmic correction, the number of bits has to be reduced.
A lookup table can be done according to the previous non power of 2 scheme.
Binary element error correction:
Binary element DACs have only a limited number (NBit) of components where error correction is needed.
There should be only NBit error correction codes needed.
For each bit a correction gives a reduced number of codes.
To prevent big steps, 2R has to be bigger than R by the error.
Codereal = Codeideal + sum(i = 0..n) Bit(i) * error(i)
error(0) = transition(0,1): cannot be corrected
error(1) = transition(1,2): could be 0, +1
error(2) = transition(3,4) - error(1): could be 0, 1, ..., 3
error(3) = transition(7,8) - error(2) - error(1): could be 0, 1, ..., 7
...
The sum of the errors is the number of lost codes.
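A sketch of this correction scheme (assumption: transition(a, b) is a hypothetical helper returning the measured code error, in codes, at the transition from code a to code b, taken from a calibration run):

```javascript
// Measure the per-bit errors at the major transitions 2^i - 1 -> 2^i,
// subtracting the already-known lower-bit errors, as in the scheme above.
function correctionTable(nBits, transition) {
  var error = [0]; // error(0): the 0 -> 1 transition cannot be corrected
  var acc = 0;     // sum of the errors found so far
  for (var i = 1; i < nBits; i++) {
    error[i] = transition(Math.pow(2, i) - 1, Math.pow(2, i)) - acc;
    acc += error[i];
  }
  return error;
}

// Codereal = Codeideal + sum Bit(i) * error(i)
function correctCode(codeIdeal, error) {
  var codeReal = codeIdeal;
  for (var i = 0; i < error.length; i++) {
    if (codeIdeal & (1 << i)) { codeReal += error[i]; }
  }
  return codeReal;
}
```

Only NBit entries are needed in the table; the sum of the errors is the number of codes lost to the correction.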