DAC Error Simulation

DAC property

Number of Bits (10 or more can take quite some time)
DAC converter (Unit element: 0, Binary: 1)

FFT signal

Number of Periods (Default: 17)
Ratio FFT points and number of codes (Default: 16)

Zoom transfer:

Start code
Number of codes
Windowing:

Number of INL, DNL runs:

Number

Element Tolerance:

Random LSB
Systematic LSB

Noise Error:

Noise amplitude (LSB):
Error correction:



Transfer function:

Zoomed transfer function:

DA Characterization INL and DNL error:

DA Characterization Spectral test:

Sine time signal:

SNR = 1.76 dB + 6.02 * B dB + 10 * log10(N / 2) dB
B: number of bits
N: number of samples = 16 * 2^B
N = 2048 -> 10 * log10(N / 2) = 30 dB
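These numbers can be reproduced with a small helper (a sketch; the function names are illustrative):

```javascript
// Quantization-limited SQNR and the FFT processing gain from the
// formula above (all values in dB).
function sqnrQuant(B) {            // 1.76 dB + 6.02 * B dB
  return 1.76 + 6.02 * B;
}
function fftGain(N) {              // 10 * log10(N / 2)
  return 10 * Math.log10(N / 2);
}

const B = 8;
const N = 16 * Math.pow(2, B);     // NFFT used by the simulator
console.log(sqnrQuant(B).toFixed(2)); // 49.92
console.log(fftGain(2048).toFixed(1)); // 30.1
```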

Error correction



DAC Theory


INL, DNL equation for unit and binary element DAC.

Binary weighted topology


R2R:


\( \sigma_{INLmax} \approx \frac{1}{2} \cdot 2^{\frac{B}{2}} \cdot \sigma_{\epsilon}\)


DNL:
\( \sigma_{DNLmax} \approx 2^{\frac{B}{2}} \cdot \sigma_{\epsilon} \)
\( \sigma_{DNLmax} \approx 2 \sigma_{INLmax} \)

B, 2B, 3B elements

Unit element topology


R-string:


\( \sigma_{INLmax} \approx \frac{1}{2} \cdot 2^{\frac{B}{2}} \cdot \sigma_{\epsilon}\)


DNL:
\( \sigma_{DNLmax} = \sigma_{\epsilon} \)

2^B elements
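The DNL relation can be checked numerically. A minimal Monte-Carlo sketch (the function name is illustrative), assuming normally distributed element errors:

```javascript
// Each code step of a unit element DAC switches in exactly one element,
// so the DNL of a step is that element's relative error: sigma_DNL = sigma_eps.
function unitElementDnlStd(B, sigmaEps) {
  // Box-Muller normal generator (1 - random() avoids log(0))
  const randn = () => Math.cos(2 * Math.PI * Math.random())
                    * Math.sqrt(-2 * Math.log(1 - Math.random()));
  const n = Math.pow(2, B);
  const el = [];
  let sum = 0;
  for (let i = 0; i < n; i++) {
    el.push(1 + sigmaEps * randn());   // unit elements with tolerance
    sum += el[i];
  }
  const lsb = sum / n;                 // actual average step size
  let s2 = 0;
  for (let i = 0; i < n; i++) {
    const dnl = el[i] / lsb - 1;       // per-step DNL in LSB
    s2 += dnl * dnl;
  }
  return Math.sqrt(s2 / n);            // sample std of the DNL
}
// unitElementDnlStd(12, 0.01) comes out close to 0.01
```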
Bits                      2      4      8      12     16      20      24
σINLmax / σe              1      2      8      32     128     512     2048
Binary weighted σDNLmax   2      4      16     64     256     1024    4096
SQNR / dB                 13.8   25.84  49.92  74     98.08   122.16  146.24
Simulated SQNR / dB (NFFT = 16 * 2^B):
Bits 2:  -13.45 - (-24)    = 10.55
Bits 4:  -9.75 - (-34.77)  = 25.02
Bits 8:  -9.07 - (-58.85)  = 49.78
Bits 12: -9.03 - (-83.01)  = 73.98
Bits 16: -9.03 - (-107.11) = 98.08 (35 s)
Bits 20: -9.03 - (-131.2)  = 122.17 (9.5 min)
Bits 24: -9.03 - (-155.29) = 146.26 (21 min)
Error simulation:
Random error                   0.01    0.01 / 0.0025
Binary weighted σINLmax / σe   3       60 / 20
Binary weighted σDNLmax        5       85 / 30
Binary weighted SQNR / dB      42.34   41.66 / 56.05


Noise amplitude is six times the standard deviation.
8 bits, unit element, random LSB 0.004: abs(INL, DNL) < 0.1; TNM = -58.81
8 bits, unit element, random LSB 0.04: abs(INL, DNL) < 1.5; TNM = -52.89..-56.23
8 bits, unit element, random LSB 0.4: abs(INL, DNL) < 10; TNM = -38.85..-46.54

DAC experiments


Select Number of Bits as 8, DAC converter as 0, Number of Periods as 17, and leave the Tolerance and Error sections as they are (0.0). This is the default setting.

R-calculation

Is a limit necessary?

Comparing unit element and binary DAC INL, DNL

Example: 6 Bits, 160 runs, Tolerance random 0.01, systematic 0.00

Ideally the simulation shows SNR = -9.19 - (-46.75).
With random tolerance the simulation shows SNR = -9.19 - (-46.58).
The unit element DAC shows 50m DNL max, 300m INL max and SNR = (-9.2) - (-46.76) dB.
The binary element DAC initially shows 2 DNL max, 1 INL max and SNR = (-9.21) - (-45.01) dB.
Adding a systematic error of 0.15 gives 300m DNL max, 250m INL max and SNR = (-9.21) - (-46.2) dB.
Adding a systematic error of 0.3 gives 300m DNL max, 250m INL max and SNR = (-9.2) - (-46.48) dB.
A binary element DAC can easily be improved to a unit element DAC.

Check functionality

Implementation

Random value generation
	  // Box-Muller transform (one branch): cos(2*pi*U1) * sqrt(-2*ln(U2))
	  // has standard deviation 1; the trailing factor 2 doubles the requested std.
	  function randomNormal(std) {
         return Math.cos(2 * Math.PI * Math.random()) * Math.sqrt(-2 * Math.log(Math.random())) * std * 2;
      }
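A quick sanity check of the generator (a sketch; `1 - Math.random()` is used here to avoid a possible `log(0)`): the empirical standard deviation for std = 1 comes out near 2 because of the trailing factor.

```javascript
// Estimate the standard deviation of randomNormal(1) from many samples.
function randomNormal(std) {
  return Math.cos(2 * Math.PI * Math.random())
       * Math.sqrt(-2 * Math.log(1 - Math.random())) * std * 2;
}
let s2 = 0;
const n = 100000;
for (let i = 0; i < n; i++) {
  const v = randomNormal(1);
  s2 += v * v;                    // mean is 0, so this estimates the variance
}
const sEmp = Math.sqrt(s2 / n);
console.log(sEmp);                // close to 2
```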
R-string
Resistance values rReal[i]
		   for (var i = 0; i < nWerte; i++) { // n-1 resistance values to end at 1
		     rReal[i] = 1 + randomNormal(nonlinearA) + i * nonlinearB;
			 if (rReal[i] < 0.5) { rReal[i] = 0.5; } 
			 // if (rReal[i] > 1.5) { rReal[i] = 1.5; }  
			 rSum = rSum + rReal[i];
		   }
Voltage calculation
    x[i + nWerte] = x[i - 1 + nWerte] + rReal[i-1]/rSum + noise/nWerte;   
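Combining the two R-string fragments, an ideal string (equal resistors, no tolerance, no noise) yields the ideal ramp; a self-contained sketch (the function name is illustrative):

```javascript
// Ideal R-string transfer: cumulative tap voltages along the string.
// With equal resistors the k-th tap sits at k / nWerte of full scale.
function rStringTransfer(nWerte) {
  const rReal = [];
  let rSum = 0;
  for (let i = 0; i < nWerte; i++) {   // ideal resistors, no tolerance
    rReal.push(1);
    rSum += rReal[i];
  }
  const x = [0];                       // bottom tap at 0
  for (let i = 1; i <= nWerte; i++) {
    x[i] = x[i - 1] + rReal[i - 1] / rSum;
  }
  return x;
}
// rStringTransfer(8) -> [0, 0.125, 0.25, ..., 1]
```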
R2R
Resistance values rReal[i]
		   for (var i = 0; i < 2 * nBits; i++) {
		     rReal[i] = 1 + randomNormal(nonlinearA) + i * nonlinearB;  // sets R
			 if (rReal[i] < 0.5) { rReal[i] = 0.5; } 
			 // if (rReal[i] > 1.5) { rReal[i] = 1.5; } 
		   }
Voltage calculation
      function r2rDAC(nB, xN,rReal) {
	    var result = 0;
		var rX = 2;
		var bitX = 1;
		result = (xN & bitX) * rReal[0] / (rReal[0] + rReal[1]) ;
  	    bitX = 2 * bitX; // next higher bit			
		rX = 2 * rReal[1] * rReal[0] / ( rReal[0] + rReal[1]) ;
		for (var i = 1; i < nB; i++) {  // All bits
           result = (xN & bitX)/bitX * (rReal[2 * i] + rX) / ( rReal[2 * i] + rX + 2 * rReal[2 * i + 1])
                    + result * 2 * rReal[2 * i + 1] / ( rReal[2 * i] + rX + 2 * rReal[2 * i + 1]);		   
           rX = (rX + rReal[2 * i]) * 2 * rReal[2 * i + 1] / ( rReal[2 * i] + rX + 2 * rReal[2 * i + 1]);
		   bitX = 2 * bitX; // next higher bit			
		}		
		return result;
	  }
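With ideal resistors (all equal) the ladder reduction above collapses to the ideal transfer xN / 2^nB; a self-contained check (the function is repeated so the snippet runs on its own):

```javascript
// R2R ladder DAC: reduce the ladder from LSB to MSB; rX is the Thevenin
// resistance of the already-processed part of the ladder.
function r2rDAC(nB, xN, rReal) {
  var bitX = 1;
  var result = (xN & bitX) * rReal[0] / (rReal[0] + rReal[1]);
  bitX = 2 * bitX;
  var rX = 2 * rReal[1] * rReal[0] / (rReal[0] + rReal[1]);
  for (var i = 1; i < nB; i++) {                   // all remaining bits
    var d = rReal[2 * i] + rX + 2 * rReal[2 * i + 1];
    result = (xN & bitX) / bitX * (rReal[2 * i] + rX) / d
           + result * 2 * rReal[2 * i + 1] / d;
    rX = (rX + rReal[2 * i]) * 2 * rReal[2 * i + 1] / d;
    bitX = 2 * bitX;                               // next higher bit
  }
  return result;
}

const nB = 4;
const ideal = new Array(2 * nB).fill(1);           // all R equal -> ideal ladder
// every code k maps exactly to k / 2^nB, e.g. r2rDAC(4, 5, ideal) = 0.3125
```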

Error correction


DNL above 0.5 means the smallest step size is too big; resolution is lost.
DNL below -0.5 means the step size is too small and this code cannot be used; a code is missing.
An absolute INL bigger than 0.5 is not tolerable.
Omitting one bit halves the INL and DNL error.

Error correction can be:
  1. Reducing the number of bits.
  2. Reducing the number of codes.
  3. Binary element correction

Discussion using a non power of 2 number of codes


Typically, measured values are displayed in a decimal system. Typical voltage ranges are +-1 V, 2.5 V, 3 V, 5 V, 12 V, 220 V, 280 V.

Resolutions are 8, 10, 12, 16, 20, 24 bits, i.e. 256, 1 024, 4 096, 65 536, 1 048 576, 16 777 216 codes.
Range     8 bits   10 bits   12 bits   16 bits   20 bits     24 bits
+-1 V     200      1 000     4 000     50 000    1 000 000   10 000 000
+-2.5 V   250      1 000     4 000     50 000    1 000 000   12 500 000
+-3 V     150      900       3 000     60 000    900 000     15 000 000

From this table it looks like a non power of 2 number of codes can be used.

An ideal lookup table can get very big.

DAC Resolution


Accuracy of individual components is important.
Calibration leads to lost resolution, or an additional subranging DAC is needed to compensate for the error.

Non power of 2 DAC Calibration


Estimate a new step size with DNLmax = 0.45 DNLnew (10 % guard band).
A unit element DAC is inherently monotonic and its outputs are already sorted. A lookup table is generated by finding the nearest neighbor to the ideal value using the new step size.
It can happen that DNL < 0.5 is still not achieved; then the step size has to be changed again (iteration).

A binary element DAC first needs a sorting of the output values and a calculation of the DNL.
Then the same steps as above can be done:
largest DNLmax = 0.45 DNLnew
newStep = oldStep * DNLmax / 0.45
Algorithm: object array (code, output); sort by output; DNLmax; newStep; newNrCode;
build lookup array; ramp/sine INL, DNL, FFT calculation.
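The lookup-table construction can be sketched as follows (buildLookup and its details are illustrative, not taken from the simulator source; the outputs are assumed already sorted):

```javascript
// Non power of 2 lookup-table calibration: measure DNLmax, widen the step
// so the worst DNL becomes 0.45 (10 % guard band), then map each new code
// to the nearest measured output.
function buildLookup(outputs, oldStep) {
  let dnlMax = 0;
  for (let i = 1; i < outputs.length; i++) {
    const dnl = Math.abs((outputs[i] - outputs[i - 1]) / oldStep - 1);
    if (dnl > dnlMax) dnlMax = dnl;
  }
  const newStep = dnlMax > 0.45 ? oldStep * dnlMax / 0.45 : oldStep;
  const nNew = Math.floor((outputs[outputs.length - 1] - outputs[0]) / newStep);
  const lookup = [];
  for (let k = 0; k <= nNew; k++) {
    const ideal = outputs[0] + k * newStep;
    let best = 0;                        // nearest neighbour in the outputs
    for (let i = 1; i < outputs.length; i++) {
      if (Math.abs(outputs[i] - ideal) < Math.abs(outputs[best] - ideal)) best = i;
    }
    lookup.push(best);
  }
  return { lookup, newStep };
}
```

After building the table, INL, DNL and FFT can be recomputed on the reduced code set; if DNL < 0.5 is still not met, the step estimate is iterated.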

Calibration with power of 2:


The only change is to calculate the new step size differently: DNLnew = 0.5 DNLold

Binary non power of 2 algorithmic calibration


Due to the architecture and the algorithm, negative DNL at bit switching can be compensated.

Bi[iBit] = 2, 4, 8, 16, 32, 64, ..., 256. Negative DNL -> integer correction c[iBit] = (Vout(2^iBit) - Vout(2^iBit - 1)) / newStep.
Correct c[n] with sum(c[i], n).
Take the requested code: newCode = code + sum(trunc(code / Bi[i]) * c[i], iBit).
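Taken literally, the code mapping above can be sketched as (correctedCode is an illustrative name; c[i] is the integer correction for bit weight Bi[i] = 2^i):

```javascript
// newCode = code + sum( trunc(code / Bi[i]) * c[i] ), as stated above.
function correctedCode(code, c) {
  let newCode = code;
  for (let i = 1; i < c.length; i++) {
    const Bi = Math.pow(2, i);           // bit weight 2, 4, 8, ...
    newCode += Math.trunc(code / Bi) * c[i];
  }
  return newCode;
}
// Example: one extra code consumed at the 3 -> 4 transition (c[2] = 1):
// correctedCode(5, [0, 0, 1]) -> 5 + trunc(5 / 4) * 1 = 6
```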

If DNLmax > 0.5, for algorithmic correction the number of bits has to be reduced.
A lookup table can be built according to the previous non power of 2 scheme.

Binary element error correction:


Binary element DACs have only a limited number (NBit) of components for which error correction is needed.
Only NBit error correction codes should be needed.

For each bit a correction gives a reduced number of codes.
To prevent big steps, 2R has to be bigger than R plus the error.
Code_real = Code_ideal + sum(i = 0..n) Bit(i) * error(i);

error(0) = transition(0,1) cannot be corrected
error(1) = transition(1,2) could be 0, +1
error(2) = transition(3,4) - error(1) could be 0, 1, .. 3
error(3) = transition(7,8) - error(2) - error(1) could be 0, 1, .. 7
...
The sum of the errors is the number of lost codes.
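The recursion above can be computed from a measured transfer. A sketch, assuming transition(a, b) means the step from code a to code b in LSB minus the ideal 1 LSB (transitionErrors is an illustrative name):

```javascript
// error(i) at the major carry transition of bit i, with the lower-bit
// corrections subtracted as in the recursion above; error(0) stays 0.
function transitionErrors(vout, nBits, lsb) {
  const err = new Array(nBits).fill(0);
  for (let i = 1; i < nBits; i++) {
    const hi = Math.pow(2, i);               // transition (2^i - 1, 2^i)
    let e = Math.round((vout[hi] - vout[hi - 1]) / lsb - 1);
    for (let j = 1; j < i; j++) e -= err[j]; // remove lower-bit errors
    err[i] = e;
  }
  return err;
}

// 3-bit example: bit weights [1, 2, 5], i.e. bit 2 is one LSB too heavy.
const w = [1, 2, 5];
const vout = [];
for (let k = 0; k < 8; k++) {
  vout.push((k & 1) * w[0] + ((k >> 1) & 1) * w[1] + ((k >> 2) & 1) * w[2]);
}
const e = transitionErrors(vout, 3, 1);      // [0, 0, 1]
```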

Influence of Noise

Next steps


Summary


The 20 bit FFT calculation takes about 10 minutes.

To Do:



The FFT with calibration tool can be used for further investigation.