AI/ML: Use ML techniques for CTLE AMI modeling

Note:

  • The dataset and IPython notebook for this post are available at SPISim's github page: [HERE]
  • This notebook may render better via nbviewer and can be viewed [HERE].

Motivation:

One of SPISim's main services is IBIS-AMI modeling and consulting. In most cases, this happens when an IC company is ready to release simulation models for its customers to do system design. We then require data from circuit simulation, lab measurements, or the data sheet spec in order to create the associated IBIS-AMI model. Occasionally, we also receive requests to provide an AMI model for architecture-planning purposes. In this situation, there is no data or spec. Instead, the client asks to input performance parameters dynamically (not as presets) so that they can evaluate performance at the architecture level before committing to a certain spec and designing accordingly. In such cases, we may need to generate data dynamically based on the user's input before it is fed into existing IBIS-AMI models of the same kind. A continuous-time linear equalizer (CTLE) is often used in modern SERDES and even DDR5 designs. It is basically a filter in the frequency domain (FD) with various peaking and gain properties. As IBIS-AMI models are simulated in the time domain (TD), the core implementation in the model is an iFFT that converts the FD response into a TD impulse response. CtleFDTD
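The FD-to-TD conversion at the heart of such a model can be sketched with NumPy's inverse real FFT. This is a simplified illustration on a made-up single-pole response; a production AMI model also handles causality enforcement, windowing, and sample-rate matching:

```python
import numpy as np

# Illustrative single-pole low-pass response on a uniform frequency grid
df = 1e7                        # frequency step (Hz)
nF = 4096                       # number of positive-frequency points
f  = np.arange(nF) * df
fp = 5e9                        # pole frequency (illustrative value)
H  = 1.0 / (1.0 + 1j * f / fp)  # frequency-domain response H(f)

# The inverse real FFT turns the one-sided FD response into a TD impulse response
h  = np.fft.irfft(H)            # real-valued impulse response samples
dt = 1.0 / (len(h) * df)        # time step of the resulting TD waveform
```

The sum of the impulse-response samples recovers the DC gain H(0), a quick sanity check that the conversion is consistent.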

To create such a CTLE AMI model from a user-provided spec on the fly, we would like to avoid a time-consuming parameter sweep (to match the performance) during the runtime of the initial set-up call. Machine learning techniques can thus be applied to help us build a prediction model that maps input attributes to the associated CTLE design parameters, so that the FD and TD responses can be generated directly. After that, we can feed the data into our existing CTLE C/C++ code blocks for channel simulation.

Problem Statement:

We would like to build a prediction model such that when the user provides desired CTLE performance parameters, such as DC gain, peaking frequency, and peaking value, the model maps them to the corresponding CTLE design parameters, i.e. pole and zero locations. Once this mapping is done, the CTLE frequency response is generated, followed by the time-domain impulse response. The resulting IBIS-AMI CTLE model can then be used in channel simulation to evaluate the CTLE block's impact on the system... before the actual silicon design has been done.

Generate Data:

Overview

The model to be built performs numerical prediction of roughly three to six attributes, depending on the CTLE structure, from four input parameters: DC gain, peak frequency, peak value, and bandwidth. We will sample the input space randomly (a full combinatorial sweep is impractical) and then perform the measurements programmatically in order to generate a dataset large enough for modeling.
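Random sampling of the design space takes only a few lines of NumPy. The sketch below uses ranges consistent with the dataset summarized later; the actual sampler also quantizes each range per the CTLE spec:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
nSamples = 100000

# Parameter ranges matching the COM CTLE dataset (Gdc linear, poles/zero in Hz)
Gdc = rng.uniform(0.2, 0.95, nSamples)       # DC gain
P1  = rng.uniform(10e9, 24.5e9, nSamples)    # first pole
P2  = rng.uniform(1e9, 9.5e9, nSamples)      # second pole
Z1  = rng.uniform(1e9, 9.5e9, nSamples)      # zero

designs = np.column_stack([Gdc, P1, P2, Z1])  # one row per CTLE configuration
```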

Defining CTLE equations

We will target the following two most common CTLE structures:

IEEE 802.3bs 200/400Gbe Chip-to-module (C2M) CTLE:

C2M_CTLE

IEEE 802.3bj Channel Operating Margin (COM) CTLE:

COM_CTLE
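As a sketch, the COM-style CTLE frequency response can be synthesized directly from the pole/zero parameters. The form below uses a linear DC gain to match the dataset columns (Gdc, P1, P2, Z1); note the 802.3bj text expresses g_DC in dB, and the example parameter values here are illustrative:

```python
import numpy as np

def com_ctle(f, gdc, fp1, fp2, fz):
    """COM-style CTLE transfer function H(f), linear-DC-gain form."""
    jf = 1j * f
    return (gdc + jf / fz) / ((1.0 + jf / fp1) * (1.0 + jf / fp2))

# One illustrative configuration swept over frequency
f = np.linspace(1e6, 40e9, 4001)
H = com_ctle(f, gdc=0.5, fp1=18e9, fp2=6e9, fz=3e9)
mag_db = 20.0 * np.log10(np.abs(H))   # magnitude in dB for measurement
```

With the zero below both poles, the response rises from the DC gain toward a peak before rolling off, which is the behavior the attributes below are extracted from.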

Define sampling points and attributes

All these pole and zero values are continuous (numerical), so we sub-sample the full solution space. Once the frequency response for each configuration is generated, we measure its DC gain, peak frequency and value, bandwidth (3dB down from the peak), and the frequencies at which 10% and 50% of the gain between the DC and peak values has occurred. The last two attributes increase the number of attributes available when creating the prediction model.

Attributes to be extracted:

CTLEAttr
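Given a sampled magnitude response, these attributes reduce to straightforward array operations. The helper below is a simplified sketch (not the actual measurement script); `mag_db` is magnitude in dB on the frequency grid `f`, and -1.0 marks an attribute that could not be found, matching the sentinel seen in the dataset:

```python
import numpy as np

def measure_ctle(f, mag_db):
    """Extract (dc_gain, peak_f, peak_val, bandwidth, freq10, freq50) from a dB response."""
    dc_gain  = mag_db[0]                       # gain at the lowest frequency
    pk_idx   = int(np.argmax(mag_db))
    peak_f   = f[pk_idx]
    peak_val = mag_db[pk_idx]
    # bandwidth: first frequency past the peak that drops 3 dB below it
    past  = np.nonzero(mag_db[pk_idx:] <= peak_val - 3.0)[0]
    bandw = f[pk_idx + past[0]] if past.size else f[-1]
    # frequency where a given fraction of the DC-to-peak rise has occurred
    def rise_freq(frac):
        tgt = dc_gain + frac * (peak_val - dc_gain)
        idx = np.nonzero(mag_db[:pk_idx + 1] >= tgt)[0]
        return f[idx[0]] if idx.size else -1.0
    return dc_gain, peak_f, peak_val, bandw, rise_freq(0.1), rise_freq(0.5)
```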

Synthesize and measure data

A Python script (chosen so it can be reused with subsequent Q-learning w/ OpenAI later) has been written to synthesize these frequency responses and perform the measurements at the same time.

Quantization and sampling:

Sampling

Synthesize:

Synthesize

Measurement:

Measurement

The end result of this data generation phase is a 100,000-point dataset for each of the two CTLE structures. We can now proceed to the prediction modeling.

Prepare Data:

In [42]:
## Using COM CTLE as an example below:

# Environment Setup:
import os
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import numpy as np

prjHome = 'C:/Temp/WinProj/CTLEMdl'
workDir = prjHome + '/wsp/'
srcFile = prjHome + '/dat/COM_CTLEData.csv'

def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
    path = os.path.join(workDir, fig_id + "." + fig_extension)
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format=fig_extension, dpi=resolution)
In [43]:
# Read Data
srcData = pd.read_csv(srcFile)
srcData.head()

# Info about the data
srcData.info()
srcData.describe()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 100000 entries, 0 to 99999
Data columns (total 11 columns):
ID         100000 non-null int64
Gdc        100000 non-null float64
P1         100000 non-null float64
P2         100000 non-null float64
Z1         100000 non-null float64
Gain       100000 non-null float64
PeakF      100000 non-null float64
PeakVal    100000 non-null float64
BandW      100000 non-null float64
Freq10     100000 non-null float64
Freq50     100000 non-null float64
dtypes: float64(10), int64(1)
memory usage: 8.4 MB
Out[43]:
ID Gdc P1 P2 Z1 Gain PeakF PeakVal BandW Freq10 Freq50
count 100000.000000 100000.000000 1.000000e+05 1.000000e+05 1.000000e+05 100000.000000 1.000000e+05 100000.000000 1.000000e+05 1.000000e+05 1.000000e+05
mean 49999.500000 0.574911 1.724742e+10 5.235940e+09 5.241625e+09 0.574911 3.428616e+09 2.374620 1.496343e+10 4.145855e+08 1.136585e+09
std 28867.657797 0.230302 4.323718e+09 2.593138e+09 2.589769e+09 0.230302 4.468393e+09 3.346094 1.048535e+10 5.330043e+08 1.446972e+09
min 0.000000 0.200000 1.000000e+10 1.000000e+09 1.000000e+09 0.200000 0.000000e+00 0.200000 9.965473e+08 0.000000e+00 -1.000000e+00
25% 24999.750000 0.350000 1.350000e+10 3.000000e+09 3.000000e+09 0.350000 0.000000e+00 0.500000 4.770445e+09 0.000000e+00 -1.000000e+00
50% 49999.500000 0.600000 1.700000e+10 5.000000e+09 5.000000e+09 0.600000 0.000000e+00 0.800000 1.410597e+10 0.000000e+00 -1.000000e+00
75% 74999.250000 0.750000 2.100000e+10 7.500000e+09 7.500000e+09 0.750000 7.557558e+09 2.710536 2.386211e+10 8.974728e+08 2.510339e+09
max 99999.000000 0.950000 2.450000e+10 9.500000e+09 9.500000e+09 0.950000 1.516517e+10 16.731528 3.965752e+10 1.768803e+09 4.678211e+09

The dataset seems fully populated! Let's plot some distributions...

In [44]:
# Drop the ID column
mdlData = srcData.drop(columns=['ID'])

# plot distribution
mdlData.hist(bins=50, figsize=(20,15))
save_fig("attribute_histogram_plots")
plt.show()
Saving figure attribute_histogram_plots

There are abnormally high peaks for Freq10, Freq50, and PeakF. We need to plot the FD data to see what's going on...

Error checking:

NoPeaking

Apparently, this is caused by CTLE configurations without peaking. We can safely remove these data points as they will not be used in an actual design.

In [45]:
# Drop those freq peak at the beginning (i.e. no peak)
mdlTemp = mdlData[(mdlData['PeakF'] > 100)]
mdlTemp.info()


# plot distribution again
mdlTemp.hist(bins=50, figsize=(20,15))
save_fig("attribute_histogram_plots2")
plt.show()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 41588 entries, 1 to 99997
Data columns (total 10 columns):
Gdc        41588 non-null float64
P1         41588 non-null float64
P2         41588 non-null float64
Z1         41588 non-null float64
Gain       41588 non-null float64
PeakF      41588 non-null float64
PeakVal    41588 non-null float64
BandW      41588 non-null float64
Freq10     41588 non-null float64
Freq50     41588 non-null float64
dtypes: float64(10)
memory usage: 3.5 MB
Saving figure attribute_histogram_plots2

Now the distributions look good. We can proceed to separate the variables (i.e. attributes) and the targets.

In [46]:
# take this as modeling data from this point
mdlData = mdlTemp

varList = ['Gdc', 'P1', 'P2', 'Z1']
tarList = ['Gain', 'PeakF', 'PeakVal']

varData = mdlData[varList]
tarData = mdlData[tarList]

Choose a Model:

We will use Keras as the modeling framework. While it calls TensorFlow on our machine, the GPU is only used for training. We use a (shallow) neural network for modeling because we want to implement the resulting model in our IBIS-AMI model's C++ code.

In [47]:
from keras.models import Sequential
from keras.layers import Dense, Dropout

numVars = len(varList)  # independent variables
numTars = len(tarList)  # output targets
nnetMdl = Sequential()
# input layer
nnetMdl.add(Dense(units=64, activation='relu', input_dim=numVars))

# hidden layers
nnetMdl.add(Dropout(0.3, noise_shape=None, seed=None))
nnetMdl.add(Dense(64, activation = "relu"))
nnetMdl.add(Dropout(0.2, noise_shape=None, seed=None))
          
# output layer
nnetMdl.add(Dense(units=numTars, activation='sigmoid'))
nnetMdl.compile(loss='mean_squared_error', optimizer='adam')

# Provide some info
#from keras.utils import plot_model
#plot_model(nnetMdl, to_file= workDir + 'model.png')
nnetMdl.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_10 (Dense)             (None, 64)                320       
_________________________________________________________________
dropout_7 (Dropout)          (None, 64)                0         
_________________________________________________________________
dense_11 (Dense)             (None, 64)                4160      
_________________________________________________________________
dropout_8 (Dropout)          (None, 64)                0         
_________________________________________________________________
dense_12 (Dense)             (None, 3)                 195       
=================================================================
Total params: 4,675
Trainable params: 4,675
Non-trainable params: 0
_________________________________________________________________
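Since this network must eventually be re-implemented in C++, it is worth confirming that inference reduces to plain matrix math (dropout is a no-op at inference time). The sketch below rebuilds the forward pass with NumPy; the random weights are stand-ins just to exercise the 4 → 64 → 64 → 3 shapes:

```python
import numpy as np

def nn_forward(x, weights):
    """Inference for a Dense/ReLU ... Dense/sigmoid stack from (W, b) pairs."""
    h = x
    for i, (W, b) in enumerate(weights):
        h = h @ W + b
        if i < len(weights) - 1:
            h = np.maximum(h, 0.0)           # ReLU on hidden layers
        else:
            h = 1.0 / (1.0 + np.exp(-h))     # sigmoid on the output layer
    return h

# In the notebook, the (W, b) pairs would come from the trained model, e.g.
#   weights = [lyr.get_weights() for lyr in nnetMdl.layers if lyr.get_weights()]
# Here, random weights merely exercise the shapes:
rng = np.random.default_rng(1)
weights = [(0.1 * rng.standard_normal((4, 64)), np.zeros(64)),
           (0.1 * rng.standard_normal((64, 64)), np.zeros(64)),
           (0.1 * rng.standard_normal((64, 3)), np.zeros(3))]
y = nn_forward(rng.standard_normal((5, 4)), weights)   # shape (5, 3), values in (0, 1)
```

Porting to C++ is then just these matrix multiplies plus the two activation functions.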

Training:

We will hold out 20% of the data for testing. Note that we need to scale the input attributes to between 0 and 1 so that the neurons' activation functions can differentiate between samples and the weights can be calculated. These scalers will be applied inversely when we predict the actual performance later on.
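The min-max scaling and its inverse are simple enough to restate with plain NumPy, which is also what the C++ port ultimately needs (this mirrors what `MinMaxScaler` stores internally as `data_min_` / `data_max_`; the sample rows are illustrative):

```python
import numpy as np

X = np.array([[0.20, 1.00e10],
              [0.95, 2.45e10],
              [0.50, 1.70e10]])   # illustrative (Gdc, P1) rows

lo, hi = X.min(axis=0), X.max(axis=0)
Xs = (X - lo) / (hi - lo)         # forward: each column mapped into [0, 1]
Xr = Xs * (hi - lo) + lo          # inverse: back to physical units
```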

In [48]:
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Prepare Training (tran) and Validation (test) dataset
varTran, varTest, tarTran, tarTest = train_test_split(varData, tarData, test_size=0.2)

# scale the data
from sklearn import preprocessing
varScal = preprocessing.MinMaxScaler()
varTran = varScal.fit_transform(varTran)
varTest = varScal.transform(varTest)

tarScal = preprocessing.MinMaxScaler()
tarTran = tarScal.fit_transform(tarTran)

Now we can fit the model:

In [49]:
# model fit
hist = nnetMdl.fit(varTran, tarTran, epochs=100, batch_size=1000, validation_split=0.1)
tarTemp = nnetMdl.predict(varTest, batch_size=1000)
#predict = tarScal.inverse_transform(tarTemp)
#resRMSE = np.sqrt(mean_squared_error(tarTest, predict))
resRMSE = np.sqrt(mean_squared_error(tarScal.transform(tarTest), tarTemp))
resRMSE
Train on 29943 samples, validate on 3327 samples
Epoch 1/100
29943/29943 [==============================] - 0s 12us/step - loss: 0.0632 - val_loss: 0.0462
Epoch 2/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0394 - val_loss: 0.0218
Epoch 3/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0218 - val_loss: 0.0090
Epoch 4/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0134 - val_loss: 0.0046
Epoch 5/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0102 - val_loss: 0.0039
Epoch 6/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0090 - val_loss: 0.0036
Epoch 7/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0082 - val_loss: 0.0032
Epoch 8/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0075 - val_loss: 0.0030
Epoch 9/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0071 - val_loss: 0.0027
Epoch 10/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0067 - val_loss: 0.0025
Epoch 11/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0063 - val_loss: 0.0022
Epoch 12/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0059 - val_loss: 0.0020
Epoch 13/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0056 - val_loss: 0.0018
Epoch 14/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0053 - val_loss: 0.0017
Epoch 15/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0050 - val_loss: 0.0015
Epoch 16/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0048 - val_loss: 0.0014
Epoch 17/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0046 - val_loss: 0.0013
Epoch 18/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0045 - val_loss: 0.0012
Epoch 19/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0043 - val_loss: 0.0011
Epoch 20/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0042 - val_loss: 0.0011
Epoch 21/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0041 - val_loss: 9.9891e-04
Epoch 22/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0040 - val_loss: 9.5673e-04
Epoch 23/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0039 - val_loss: 9.1935e-04
Epoch 24/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0038 - val_loss: 8.7424e-04
Epoch 25/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0037 - val_loss: 8.3335e-04
Epoch 26/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0036 - val_loss: 8.0617e-04
Epoch 27/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0035 - val_loss: 7.7511e-04
Epoch 28/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0035 - val_loss: 7.6336e-04
Epoch 29/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0034 - val_loss: 7.4145e-04
Epoch 30/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0034 - val_loss: 7.1555e-04
Epoch 31/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0033 - val_loss: 6.8232e-04
Epoch 32/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0033 - val_loss: 6.8118e-04
Epoch 33/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0033 - val_loss: 6.5987e-04
Epoch 34/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0032 - val_loss: 6.5535e-04
Epoch 35/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0032 - val_loss: 6.4880e-04
Epoch 36/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0032 - val_loss: 6.2126e-04
Epoch 37/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0031 - val_loss: 6.1235e-04
Epoch 38/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0031 - val_loss: 6.0875e-04
Epoch 39/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0030 - val_loss: 5.8204e-04
Epoch 40/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0030 - val_loss: 5.8521e-04
Epoch 41/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0030 - val_loss: 5.8456e-04
Epoch 42/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0030 - val_loss: 5.5742e-04
Epoch 43/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0029 - val_loss: 5.5412e-04
Epoch 44/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0029 - val_loss: 5.5415e-04
Epoch 45/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0029 - val_loss: 5.3159e-04
Epoch 46/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0029 - val_loss: 5.2046e-04
Epoch 47/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0028 - val_loss: 5.1748e-04
Epoch 48/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0028 - val_loss: 5.1205e-04
Epoch 49/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0027 - val_loss: 5.0424e-04
Epoch 50/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0028 - val_loss: 4.9067e-04
Epoch 51/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0027 - val_loss: 4.7902e-04
Epoch 52/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0027 - val_loss: 4.7667e-04
Epoch 53/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0026 - val_loss: 4.6521e-04
Epoch 54/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0026 - val_loss: 4.6684e-04
Epoch 55/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0026 - val_loss: 4.7006e-04
Epoch 56/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0026 - val_loss: 4.5770e-04
Epoch 57/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0025 - val_loss: 4.3075e-04
Epoch 58/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0025 - val_loss: 4.3796e-04
Epoch 59/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0025 - val_loss: 4.3114e-04
Epoch 60/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0025 - val_loss: 4.1051e-04
Epoch 61/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0025 - val_loss: 4.0642e-04
Epoch 62/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0025 - val_loss: 4.1214e-04
Epoch 63/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0025 - val_loss: 3.9472e-04
Epoch 64/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0024 - val_loss: 3.9697e-04
Epoch 65/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0024 - val_loss: 3.8548e-04
Epoch 66/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0024 - val_loss: 3.9030e-04
Epoch 67/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0023 - val_loss: 3.7588e-04
Epoch 68/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0024 - val_loss: 3.6643e-04
Epoch 69/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0023 - val_loss: 3.6973e-04
Epoch 70/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0023 - val_loss: 3.6345e-04
Epoch 71/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0023 - val_loss: 3.5743e-04
Epoch 72/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0023 - val_loss: 3.5294e-04
Epoch 73/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0023 - val_loss: 3.6533e-04
Epoch 74/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0022 - val_loss: 3.5859e-04
Epoch 75/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0022 - val_loss: 3.3832e-04
Epoch 76/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0022 - val_loss: 3.5197e-04
Epoch 77/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0022 - val_loss: 3.4445e-04
Epoch 78/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0022 - val_loss: 3.3888e-04
Epoch 79/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0022 - val_loss: 3.3597e-04
Epoch 80/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0022 - val_loss: 3.2317e-04
Epoch 81/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0021 - val_loss: 3.2205e-04
Epoch 82/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0021 - val_loss: 3.4191e-04
Epoch 83/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0021 - val_loss: 3.2288e-04
Epoch 84/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0021 - val_loss: 3.1419e-04
Epoch 85/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0021 - val_loss: 3.1307e-04
Epoch 86/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0021 - val_loss: 3.1795e-04
Epoch 87/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0021 - val_loss: 3.1200e-04
Epoch 88/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0021 - val_loss: 3.0641e-04
Epoch 89/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0021 - val_loss: 3.2401e-04
Epoch 90/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0021 - val_loss: 3.0903e-04
Epoch 91/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0020 - val_loss: 3.1448e-04
Epoch 92/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0020 - val_loss: 3.0788e-04
Epoch 93/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0020 - val_loss: 3.0349e-04
Epoch 94/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0020 - val_loss: 3.0098e-04
Epoch 95/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0020 - val_loss: 3.1119e-04
Epoch 96/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0020 - val_loss: 3.0249e-04
Epoch 97/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0020 - val_loss: 2.8934e-04
Epoch 98/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0020 - val_loss: 2.9429e-04
Epoch 99/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0020 - val_loss: 2.8466e-04
Epoch 100/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0019 - val_loss: 3.0773e-04
Out[49]:
0.01786895428237113

Let's see how this neural network learns over the epochs:

In [50]:
# plot history (both training and validation loss, to match the legend)
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper right')
plt.show()

Looks quite reasonable. We can now save the Keras model together with the scalers for later evaluation.

In [51]:
# save model and architecture to single file
nnetMdl.save(workDir + "COM_nnetMdl.h5")

# also save scaler
from sklearn.externals import joblib
joblib.dump(varScal, workDir + 'VarScaler.save') 
joblib.dump(tarScal, workDir + 'TarScaler.save') 
print("Saved model to disk")
Saved model to disk

Prediction:

Now let's use this model to make some predictions.

In [52]:
# generate prediction
predict = tarScal.inverse_transform(tarTemp)
allData = np.concatenate([varTest, tarTest, predict], axis = 1)
allData.shape
headLst = varList + tarList + tarList
headStr = ','.join(headLst)
np.savetxt(workDir + 'COMCtleIOP.csv', allData, delimiter=',', header=headStr)

Let's take 50 points and see how the predictions work:

In [53]:
# plot some data
begIndx = 100
endIndx = 150
indxAry = np.arange(0, len(varTest), 1)

plt.scatter(indxAry[begIndx:endIndx], tarTest.iloc[:,0][begIndx:endIndx])
plt.scatter(indxAry[begIndx:endIndx], predict[:,0][begIndx:endIndx])
Out[53]:
<matplotlib.collections.PathCollection at 0x242d059d390>
In [54]:
# Plot Peak Freq.
plt.scatter(indxAry[begIndx:endIndx], tarTest.iloc[:,1][begIndx:endIndx])
plt.scatter(indxAry[begIndx:endIndx], predict[:,1][begIndx:endIndx])
Out[54]:
<matplotlib.collections.PathCollection at 0x242d72df2e8>
In [55]:
# Plot Peak Value
plt.scatter(indxAry[begIndx:endIndx], tarTest.iloc[:,2][begIndx:endIndx])
plt.scatter(indxAry[begIndx:endIndx], predict[:,2][begIndx:endIndx])
Out[55]:
<matplotlib.collections.PathCollection at 0x242d5ea39e8>

Reverse Direction:

The goal of this modeling is to map performance to CTLE pole and zero locations. What we just did is the other way around (to verify that such a neural network structure meets our needs). Now we need to reverse the direction for the actual modeling. To provide more attributes for better predictions, we will also use the frequencies where 10% and 50% of the gain occur as part of the input attributes.

In [56]:
tarList = ['Gdc', 'P1', 'P2', 'Z1']
varList = ['Gain', 'PeakF', 'PeakVal', 'Freq10', 'Freq50']

varData = mdlData[varList]
tarData = mdlData[tarList]
In [57]:
from keras.models import Sequential
from keras.layers import Dense, Dropout

numVars = len(varList)  # independent variables
numTars = len(tarList)  # output targets
nnetMdl = Sequential()
# input layer
nnetMdl.add(Dense(units=64, activation='relu', input_dim=numVars))

# hidden layers
nnetMdl.add(Dropout(0.3, noise_shape=None, seed=None))
nnetMdl.add(Dense(64, activation = "relu"))
nnetMdl.add(Dropout(0.2, noise_shape=None, seed=None))
          
# output layer
nnetMdl.add(Dense(units=numTars, activation='sigmoid'))
nnetMdl.compile(loss='mean_squared_error', optimizer='adam')

# Provide some info
#from keras.utils import plot_model
#plot_model(nnetMdl, to_file= workDir + 'model.png')
nnetMdl.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_13 (Dense)             (None, 64)                384       
_________________________________________________________________
dropout_9 (Dropout)          (None, 64)                0         
_________________________________________________________________
dense_14 (Dense)             (None, 64)                4160      
_________________________________________________________________
dropout_10 (Dropout)         (None, 64)                0         
_________________________________________________________________
dense_15 (Dense)             (None, 4)                 260       
=================================================================
Total params: 4,804
Trainable params: 4,804
Non-trainable params: 0
_________________________________________________________________
In [58]:
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Prepare Training (tran) and Validation (test) dataset
varTran, varTest, tarTran, tarTest = train_test_split(varData, tarData, test_size=0.2)

# scale the data
from sklearn import preprocessing
varScal = preprocessing.MinMaxScaler()
varTran = varScal.fit_transform(varTran)
varTest = varScal.transform(varTest)

tarScal = preprocessing.MinMaxScaler()
tarTran = tarScal.fit_transform(tarTran)
In [59]:
# model fit
hist = nnetMdl.fit(varTran, tarTran, epochs=100, batch_size=1000, validation_split=0.1)
tarTemp = nnetMdl.predict(varTest, batch_size=1000)
#predict = tarScal.inverse_transform(tarTemp)
#resRMSE = np.sqrt(mean_squared_error(tarTest, predict))
resRMSE = np.sqrt(mean_squared_error(tarScal.transform(tarTest), tarTemp))
resRMSE
Train on 29943 samples, validate on 3327 samples
Epoch 1/100
29943/29943 [==============================] - 0s 15us/step - loss: 0.0800 - val_loss: 0.0638
Epoch 2/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0578 - val_loss: 0.0457
Epoch 3/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0458 - val_loss: 0.0380
Epoch 4/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0408 - val_loss: 0.0344
Epoch 5/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0378 - val_loss: 0.0317
Epoch 6/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0354 - val_loss: 0.0299
Epoch 7/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0340 - val_loss: 0.0287
Epoch 8/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0327 - val_loss: 0.0276
Epoch 9/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0315 - val_loss: 0.0265
Epoch 10/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0303 - val_loss: 0.0254
Epoch 11/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0293 - val_loss: 0.0244
Epoch 12/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0284 - val_loss: 0.0235
Epoch 13/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0274 - val_loss: 0.0225
Epoch 14/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0266 - val_loss: 0.0215
Epoch 15/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0257 - val_loss: 0.0202
Epoch 16/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0245 - val_loss: 0.0189
Epoch 17/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0235 - val_loss: 0.0175
Epoch 18/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0223 - val_loss: 0.0160
Epoch 19/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0213 - val_loss: 0.0146
Epoch 20/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0202 - val_loss: 0.0135
Epoch 21/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0193 - val_loss: 0.0125
Epoch 22/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0186 - val_loss: 0.0117
Epoch 23/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0179 - val_loss: 0.0109
Epoch 24/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0171 - val_loss: 0.0104
Epoch 25/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0167 - val_loss: 0.0099
Epoch 26/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0163 - val_loss: 0.0094
...
Epoch 99/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0085 - val_loss: 0.0035
Epoch 100/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0084 - val_loss: 0.0034
Out[59]:
0.0589564154176633
In [60]:
# plot training history (both train and validation loss, to match the legend)
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper right')
plt.show()
In [61]:
# Save Keras architecture and synapse weights separately for later C++ conversion
from keras.models import model_from_json
# serialize model to JSON
nnetMdl_json = nnetMdl.to_json()
with open("COM_nnetMdl_Rev.json", "w") as json_file:
    json_file.write(nnetMdl_json)
# serialize weights to HDF5
nnetMdl.save_weights("COM_nnetMdl_W_Rev.h5")

# save model and architecture to single file
nnetMdl.save(workDir + "COM_nnetMdl_Rev.h5")
print("Saved model to disk")

# also save scalers (note: in newer scikit-learn, use "import joblib" directly)
from sklearn.externals import joblib
joblib.dump(varScal, workDir + 'Rev_VarScaler.save') 
joblib.dump(tarScal, workDir + 'Rev_TarScaler.save') 
Saved model to disk
Out[61]:
['C:/Temp/WinProj/CTLEMdl/wsp/Rev_TarScaler.save']
In [62]:
# generate predictions and export test data for comparison
predict = tarScal.inverse_transform(tarTemp)
allData = np.concatenate([varTest, tarTest, predict], axis = 1)
allData.shape
headLst = list(varList) + list(tarList) + list(tarList)  # flatten header names
headStr = ''.join(str(e) + ',' for e in headLst)
np.savetxt(workDir + 'COMCtleIOP_Rev.csv', allData, delimiter=',', header=headStr)
In [63]:
# plot Gdc
begIndx = 100
endIndx = 150
indxAry = np.arange(0, len(varTest), 1)
plt.scatter(indxAry[begIndx:endIndx], tarTest.iloc[:,0][begIndx:endIndx])
plt.scatter(indxAry[begIndx:endIndx], predict[:,0][begIndx:endIndx])
Out[63]:
<matplotlib.collections.PathCollection at 0x242ccefe1d0>
In [64]:
# Plot P1
plt.scatter(indxAry[begIndx:endIndx], tarTest.iloc[:,1][begIndx:endIndx])
plt.scatter(indxAry[begIndx:endIndx], predict[:,1][begIndx:endIndx])
Out[64]:
<matplotlib.collections.PathCollection at 0x242ccc7c470>
In [65]:
# Plot P2
plt.scatter(indxAry[begIndx:endIndx], tarTest.iloc[:,2][begIndx:endIndx])
plt.scatter(indxAry[begIndx:endIndx], predict[:,2][begIndx:endIndx])
Out[65]:
<matplotlib.collections.PathCollection at 0x242d6f15dd8>
In [66]:
# Plot Z1
plt.scatter(indxAry[begIndx:endIndx], tarTest.iloc[:,3][begIndx:endIndx])
plt.scatter(indxAry[begIndx:endIndx], predict[:,3][begIndx:endIndx])
Out[66]:
<matplotlib.collections.PathCollection at 0x242ccbaa978>

It seems this "reversed" neural network also works reasonably well. We will fine-tune it further later on.

Deployment:

Now that we have the trained model in Keras' .h5 format, we can translate it into corresponding C++ code using Keras2Cpp:

Keras2Cpp

Its github repository is here: Keras2Cpp

The resulting file can be compiled together with keras_model.cc and keras_model.h in our AMI library.

Conclusion:

In this post/notebook, we explored the flow of creating a neural-network-based model for CTLE parameter prediction, using common data science techniques along the way. The resulting Keras model is then converted into C++ code for implementation in our IBIS-AMI library. With this performance-based CTLE model, our users can run channel simulations before committing to an actual silicon design.

IBIS-AMI: Study of DDR Asymmetric Rt/Ft in Existing IBIS-AMI Flow

[This blog post is written in preparation for the presentation of the same title to be given at the 2019 DesignCon IBIS Summit. Presentation slides and audio recording are linked at the bottom of this post.]

This paper is written by both Wei-hsing Huang (principal consultant at SPISim USA) and Wei-kai Shih, who is based in Tokyo.

Motivation:

Here in the US, one of the IBIS committee's working groups, IBIS-ATM (Advanced Technology Modeling), holds regular meetings on Tuesdays. I try to call in whenever possible to gain insight into upcoming modeling trends. In mid-2018, DDR5-related topics were brought up: the existing AMI reference flow described in the spec. focuses on differential SERDES designs. For example, the stimulus waveform goes from -0.5 to 0.5 and/or a single impulse response is used for analysis, thus mostly assuming symmetric rise time (Rt) and fall time (Ft). Whether this reference flow can be applied to DDR, which may have asymmetric Rt/Ft and single-ended signals like DQ, was the center of discussion. Different EDA companies in this working group have different opinions. Some think the flow can be used directly with minimal change, while others think it has fundamental shortcomings for DDR. The thing about IBIS spec. changes is that whoever thinks the current version has deficiencies needs to write a "buffer issue resolution document" (BIRD). Doing so will inevitably disclose some trade secrets or expose shortcomings of the tool. As a result, while there are companies which think a change may be needed, no flow change has been proposed at this point. As a model maker, I then wondered how the existing flow could be applied to DDR without major change. This study demonstrates "one" possible implementation. Existing EDA companies may have more sophisticated algorithms/implementations to support this asymmetric condition, but the existence of "one" such flow may convince model makers that it's time to think about how DDR AMI models can be implemented rather than waiting for an unlikely spec. change.

AMI_Init:

There are both "statistical" and "bit-by-bit" flows in channel analysis. In either case, the first step an EDA tool performs before calling the AMI model is "channel calibration". According to the spec., the impulse response of the channel, which includes the analog buffer, is obtained here. For a SERDES design, which has no asymmetric Rt/Ft issue, this impulse response is sent to the TX AMI model followed by the RX AMI model; from the resulting impulse response, a probability density function (PDF) is calculated and integrated into a cumulative density function (CDF), from which bathtub plots etc. are obtained.
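The PDF-to-CDF bookkeeping of the statistical flow can be sketched in a few lines of Python. Note this is a toy illustration: the sample values below are random stand-ins, whereas a real flow derives the PDF from the equalized impulse response and bit probabilities.

```python
import numpy as np

# Stand-in cursor-voltage samples (hypothetical); a real statistical flow
# builds this distribution from the equalized impulse response, not random draws.
rng = np.random.default_rng(0)
samples = rng.normal(0.4, 0.05, 10000)

pdf, edges = np.histogram(samples, bins=100, density=True)
cdf = np.cumsum(pdf * np.diff(edges))   # integrate PDF -> CDF
print(cdf[-1])                          # total probability, ~1.0
```

Bathtub curves then come from reading the CDF tails against the sampling threshold.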

The textbook definition of an impulse response is the response to a "delta" input applied over an infinitesimally small time step. In a real situation, there is no such thing as an "infinitesimally small time step". The minimal step used by a simulator is a "time step", usually 1 ps or more. A buffer will not toggle from low to high and back to low within a single time step. So in practice, a simulator often uses a step response and takes its derivative to get the impulse response. Now the problem comes: for an analog channel with asymmetric Rt/Ft, these two step responses (ignoring the sign) are different. That means we have two different impulse responses, so which one should be sent to the AMI models? Note up front that it's the EDA tool which sets up the calibration, so it has access to any nodal information it needs, such as the pads of the Tx and Rx analog buffers.

Asymmetric Rt/Ft:

One may think that there is no limitation saying an AMI model can only be called once. So theoretically, a simulator could run the analysis flow twice: the impulse calculated from the rising step response is used the first time and the one from the falling step response the second time. However, not only is this inefficient, a model may not be implemented properly, such that calling AMI_Init again right after AMI_Close may cause a crash if it happens in the same process and the model pointer was not released completely. Thus doing so may hamper a simulator's robustness.

As depicted in the picture above, if a simulator uses a long-UI pulse to calibrate the channel, then both rising and falling step responses are included in one simulation. Now let the data captured at the Tx analog pad be X1 and X2 for the rising and falling portions respectively; the data captured at the Rx analog pad, Y1 and Y2, will be X1 and X2 convolved with the interconnect's transfer function, which is LTI. If we derive a transform Xform(t) between X1 and X2, then the same Xform(t) should also transform Y1 into Y2. That means if a simulator can calculate Xform(t) itself, then regardless of whether the impulse response it sends to the AMI models is calculated from the rising or the falling step response, it can always "reconstruct" the result for the other type of impulse response using this Xform(t) function.

To prove this concept, we wrote a simple Matlab script taking step inputs of different slew rates, say inp1 and inp2. It calculates the Xform(t) function from both inputs and then reconstructs the response out2′ from out1. When overlaying the nominal output out2 and the reconstructed out2′ together, we can see that they match very well, which proves the concept.
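A numpy version of that experiment looks like the following. The exponential steps and the RC-like interconnect are made-up stand-ins; circular convolution via FFT is used so the math stays exactly invertible:

```python
import numpy as np

def circ_conv(a, b):
    """Circular convolution via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

t = np.arange(0, 10e-9, 1e-12)          # 10 ns window, 1 ps time step
inp1 = 1.0 - np.exp(-t / 50e-12)        # fast rising step at the Tx pad (X1)
inp2 = 1.0 - np.exp(-t / 120e-12)       # slow rising step at the Tx pad (X2)

h = np.exp(-t / 200e-12)                # stand-in LTI interconnect response
h /= h.sum()
out1 = circ_conv(inp1, h)               # Y1 at the Rx pad
out2 = circ_conv(inp2, h)               # Y2 at the Rx pad

# Xform: frequency-domain transfer function mapping X1 -> X2
Xform = np.fft.fft(inp2) / np.fft.fft(inp1)

# Because the interconnect is LTI, the same Xform also maps Y1 -> Y2:
out2r = np.real(np.fft.ifft(np.fft.fft(out1) * Xform))
print(np.max(np.abs(out2r - out2)))     # reconstruction error is tiny
```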

Once we have responses for both slew rates, we can construct their respective eyes, then use the different portions of each to construct a synthesized eye. Such an eye will not be symmetric like one calculated for SERDES.

When calculating the PDF for the asymmetric case, one also needs to consider the preceding bits' values and use a tree-like structure to keep track of possible bit sequences. For example, for a typical SERDES bit sequence, if encoding is not considered, each bit is 50% one and 50% zero, and the PDF is constructed based on that assumption. But in the asymmetric case, if the data used at the cursor is from the rising response, then the cursor bit must be 1 while bit (cursor - 1) must be 0. If bit (cursor - 2) is 1 again, then the tail of the falling response at (cursor - 1) will be superimposed onto the cursor data. That is, we can't treat each bit as having an independent 50% probability when constructing the PDF; it's not a binomial distribution, as the occurrences are not independent. A simulator may need to first determine the maximum bit length to keep track of, then based on that depth form the tree-like sequences which lead to a rising or falling step at the cursor location, and finally use superposition to construct the overall response.
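As a toy illustration of this bookkeeping (the edge shapes, time constants and samples-per-UI below are all made up), one can enumerate only the bit sequences consistent with a rising edge at the cursor, superimpose the corresponding step responses, and observe that the earlier bits shift the cursor sample, so the independent-50/50 assumption no longer applies:

```python
import itertools
import numpy as np

spb = 16                                     # samples per UI (assumption)
tt = np.arange(0, 6 * spb, dtype=float)
rise = 1 - np.exp(-tt / 4.0)                 # hypothetical fast rising step
fall = np.exp(-tt / 10.0)                    # hypothetical slower falling step

def synth(bits):
    """Superimpose an edge response at every transition of the bit sequence."""
    n = len(bits) * spb
    w = np.full(n, float(bits[0]))
    for k in range(1, len(bits)):
        if bits[k] == bits[k - 1]:
            continue
        edge = rise if bits[k] == 1 else (fall - 1.0)
        seg = np.full(n - k * spb, edge[-1])  # hold the edge's settled value
        m = min(len(edge), len(seg))
        seg[:m] = edge[:m]
        w[k * spb:] += seg
    return w

# Keep only sequences whose cursor bit is 1 preceded by 0 (a rising edge at
# the cursor) and watch how the three earlier bits spread the cursor sample:
cursor = []
for head in itertools.product([0, 1], repeat=3):
    bits = list(head) + [0, 1]
    w = synth(bits)
    cursor.append(w[4 * spb + spb // 2])
print(min(cursor), max(cursor))              # earlier bits clearly matter
```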

AMI_GetWave:

According to the reference flow for the bit-by-bit case, the equalized Tx output from the digital bit sequence is convolved with the channel's impulse response. The resulting waveform is then sent to the Rx EQ before getting the final results. The Tx EQ, the Rx EQ, or both may not be LTI, so the aforementioned Xform(t) is not applicable.

As food for thought: the spec. only mentions that in bit-by-bit mode, the output of the Tx AMI model is an equalized digital sequence, while the input to the Rx EQ must be the channel response to that sequence. Are there other ways to get such a response to the Rx with different Rt/Ft taken into account?

One example is shown in the top half of the picture above. If a simulator takes that equalized digital input and "simulates" it to get the final response, then this simulation process will have taken the different Rt/Ft into account and will produce valid results. However, this process will be slow, and I don't think any simulator does it this way. Furthermore, the spec. specifically says the output needs to be "convolved" with an impulse response. First of all, this impulse response can be from a rising or a falling edge. Secondly, even if we decide to deconvolve the input first (thus obtaining a sequence of delta responses) and then convolve with the pulse response (i.e. one simulated UI), will there be any issue?

From the plot above, we can see that when a pulse has different rising and falling slew rates, using superposition to construct "011…" produces "glitches" in the trailing high-state portion. The severity of these glitches depends on how different Rt and Ft are. So using a pulse response here will still not work.

A simple Matlab script has also been written to demonstrate the occurrence of such glitches. It shows that not only is convolving an impulse response with the Tx EQ's output problematic; even convolving a full simulated pulse (which includes the asymmetric Rt/Ft effect) with the delta sequence (the original TX EQ output deconvolved with one digital bit) is still problematic. Glitches occur for consecutive ones or zeros due to the mismatch of Rt and Ft. Thus one must use the rising-step and falling-step responses instead when doing this kind of convolution.
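The glitch can be reproduced numerically in Python as well: superimposing shifted copies of a single-UI pulse whose rising edge is much faster than its falling edge produces an overshoot during the consecutive-ones region. The edge shapes and samples-per-UI below are made up for illustration:

```python
import numpy as np

spb = 32                                     # samples per UI (assumption)
tt = np.arange(0, 3 * spb, dtype=float)
rise = 1 - np.exp(-tt / 3.0)                 # fast rising edge
fall = np.exp(-tt / 15.0)                    # much slower falling edge

# One simulated UI pulse: rises for one UI, then decays from the level reached.
pulse = np.concatenate([rise[:spb], rise[spb - 1] * fall])

# Superimpose shifted copies of this pulse to synthesize the "0111" pattern.
n = 6 * spb
w = np.zeros(n)
for k in range(1, 4):                        # ones occupy UI slots 1, 2 and 3
    m = min(len(pulse), n - k * spb)
    w[k * spb : k * spb + m] += pulse[:m]

# The slow tail of each pulse overlaps the fast rise of the next one, so the
# waveform "glitches" well above the nominal high level during the ones run.
print(w.max())
```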

Summary:

In this presentation, we discussed how the existing AMI flow may be applied to the asymmetric Rt/Ft often seen in the DDR case. A "smarter" EDA tool should be able to handle this situation without changes to the spec.'s reference flow. When channel analysis is performed in the "statistical" flow, an EDA tool can obtain waveform data at both the Tx and Rx analog buffers' pads during the calibration process. Such data can be used to construct a transform function, Xform(t). With this function, the impulse response through the EQ can be reconstructed and an asymmetric eye built. A tree structure may be needed to keep track of possible bit combinations. In the "bit-by-bit" flow, the current spec. may be too specific, as it forces convolution of the TX EQ's output with the channel's impulse response before sending it to the RX EQ. Such direct convolution is problematic. A "smarter" simulator may calculate the result using a different method without changing the data output from the TX EQ or input to the RX EQ. Step responses should be used, as different Rt/Ft will cause "glitches" for consecutive ones/zeros if the convolution method is used.

Links:

Presentation: [HERE] (http://www.spisim.com/support/paperetc/20180202_DesignConSummit_SPISim.pdf)

Audio recording (English): [HERE]

IBIS-AMI: An end-to-end AMI modeling flow

In a previous post, I mentioned the "IBIS cookbook" as a good reference for the analog portion of buffer modeling. Unfortunately, when it comes to the equalization part, i.e. AMI, there is no similar counterpart, as far as I know. For AMI modeling, the EQ algorithms/procedures need to be implemented as spec.-compliant APIs written in the C language. These functions then need to be compiled as dynamic libraries: dynamic-link libraries (.dll) on Windows or shared objects (.so) on Linux-like systems. Different compilers and build tools have different ways of creating such files. So it's fair to say that many of these aspects are actually in the computer science/programming domain, outside the electrical or modeling scope. It is unlikely that a single document could detail all these processes step by step.

In this post, instead of writing about those "programming" details, I would like to give a high-level overview of the different steps of the AMI modeling process, from end to end. Briefly, they can be arranged in the following steps based on execution order:

  1. Analog modeling
  2. Prepare collateral
  3. Define architecture
  4. Create models
  5. Model validation
  6. Channel correlation
  7. Documentation

The following sections will describe each part in details.

Analog modeling:

Believe it or not, the first step of AMI modeling is to create a proper IBIS model, i.e. its analog portion. This is particularly true if the circuit being modeled belongs to the TX. A TX AMI model equalizes signals which include its own analog buffer's effect measured at the TX pad. So if there is no channel (pass-through) and the buffer is under a nominal loading condition, the analog response of the TX is exactly the signal to be equalized. That is to say, without knowing what will be equalized (i.e. what the model's analog behavior is), one can't calculate the TX AMI model's EQ parameters.

Take the plot above as an example. This is an FFE EQ circuit. The flat lines indicated by the two yellow arrows are different de-emphasis settings, thus controlled by AMI. However, the rising/falling slew rates, wave shapes and DC levels etc., as circled in red, are all analog behaviors. Thus an accurate IBIS model must be created first to establish the baselines for equalization. Recently, BIRD 194 has been proposed to use a Touchstone file in lieu of an IBIS model; still, the analog model must be there.
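The de-emphasis behavior controlled by the AMI part can be sketched as a one-post-tap FFE acting on a ±0.5 bit stream. The tap values here are arbitrary illustrations, not from any real design:

```python
import numpy as np

taps = np.array([0.75, -0.25])                # main cursor and one post-tap
bits = np.array([0, 0, 1, 1, 1, 0, 0]) - 0.5  # -0.5/+0.5 digital sequence
out = np.convolve(bits, taps)[: len(bits)]
print(out)
# Transition bits swing the full 0.5; the repeated bits that follow settle
# at the de-emphasized 0.25 level (the flat lines the yellow arrows mark).
```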

For an RX circuit, it may be easier, as an input buffer is usually just an ESD clamp or terminator. Thus it doesn't take much effort to create the IBIS model. Interested readers may see my previous posts regarding various IBIS modeling topics.

Prepare collateral:

AMI’s data can be obtained from different sources: circuit simulation, lab/silicon measurement or data sheet. For simulation case, simulation must be done and the resulting waveform’s performance needs to be extracted. These values will serve as a “design targets” based on whitch AMI model’s parameters are being tuned.

For example, this is a typical TX waveform and measured data:

Various curves have been "lined up" for easy post-processing. Using our VPro, we batch-measured the values at 5.3 ns for the different curves and created a table. Similarly, data collected from measurement needs to be quantified. This may be done manually and can be labor intensive, as noise is usually present:

Some circuits may have responses in the frequency domain. In this case, various points (DC, the fundamental frequency, 2X the fundamental etc.) need to be measured, as above.

If the data comes from a data sheet, then the values are already there, yet there may be different ways to realize such performance. For example, equations with different zero and pole locations may all produce the same DC gain or the same gain at particular frequencies, so which one to pick may depend on other factors.

Define architecture:

Based on the collateral and the data sheet, the modeler needs to determine how the AMI models will be built. Usually the models should reflect the IC's design functions, so there is not much ambiguity here. For example, if the Rx circuit has DFE/CDR functions, then the AMI model must also contain such modules. On the other hand, some data may be represented in different ways, and proper judgment needs to be made. Take this waveform as an example:

It's already very obvious that it has an FFE with one post-tap. However, since the analog behavior needs to be represented by an IBIS model, one needs to decide how these different behaviors, boxed in different colors, should be modeled. They can be constructed with several different IBIS models, or with a single IBIS model plus a "scaling" block so that IBIS curves of similar wave shape can be squeezed or stretched. For a repeater, oftentimes people only care about what goes into and what comes out of the AMI model. The ability to "probe" signals between a repeater's RX and TX may be limited by the capabilities of the simulator used. As a result, a modeler may have freedom in determining which functions go into the Rx model and which go into the Tx. In some cases, the same model with different architectures needs to be created to meet different usage scenarios. An example has been discussed in our previous post [HERE]

Create models:

Once the architecture is defined, the next step is the actual C/C++ implementation. This is where the programming part starts. Ideally, building blocks from previous projects are already there, or will be created as modules so that they can be reused in the future. Multiple instances of the same model may be loaded together in some cases, so the usage of "static" variables or functions needs to be considered very carefully. Good programming practice comes into play here. I have seen models that only work at a certain bit rate with 32 samples per UI. That indicates the model is "hard-coded": it does not have code to up-sample or down-sample the data based on the sampling interval passed in through the API function. Accompanying the model's C code are unit testing, source revision control, compilation and dependency checks etc. The last one is particularly important on Linux: if your model relies on an external library that is not linked statically, the same model running fine on the developer's machine will not even pass the golden checker at the user's end, because the library is not available there. Typically one needs to prepare several machines, virtual or not, which are "fresh" from OS installation and run the oldest "distros" one is willing to support. All these are typical software development processes applied to the AMI modeling scope.

After the binary .dll/.so files are generated, the next step is to assemble a proper .ami file. Depending on the parameter types (integers, values, corners etc.), different flavors of syntax are available to create such a file. In addition, different EDA simulators have different ways of presenting the parameter selections to the end user. So one may need to choose the syntax that ensures parameter values will always be selected properly in the targeted simulators. For example, if one has already selected the TYP/MIN/MAX corner for the IBIS model, he/she should not have to do so again for the AMI part. It doesn't make sense at all for a MIN AMI model to be used with a MAX-corner IBIS model; the corners should be "synchronized".

Once the model is ready, the next step is to tune the parameters so that each performance target is matched. Some interfaces, such as PCIe, have pre-defined FFE tap weights, so there is no ambiguity. In most cases, though, one needs to find the parameter values that match the measured or simulated performance. Such a task is very tedious and error prone if done manually, and a process like our "AutoTune" comes in very handy:

Basically, our tool lets the user specify a matching target, then uses a bisection algorithm to find the tap values. Hundreds of cases can be "tuned" in a matter of minutes. In some other cases, a grid search may be needed.

Model validation:

Just like traditional IBIS, the first step of model validation is to run it through golden checker. However, one needs to do so on different platforms:

The golden checker didn't start checking the included AMI binary models until quite recently. Basically, it loads the .ibs file, identifies models with AMI functions, then checks the .ami file syntax. Finally, the checker loads the associated .dll/.so files. Because different OS platforms load binary files differently, certain models (e.g. a .dll) can only be checked on the associated platform (e.g. Windows). That's why one needs to perform the same check on each platform to make sure they all succeed. Library dependencies or platform issues can be identified quickly here. However, the golden checker will not drive the binary file, so the functional checks described in the next paragraph are the next step.

Typically, an AMI model has several parameters. To validate a model thoroughly, all combinations of these parameter values need to be exercised. We can "parameterize" settings in a .ami file like below:

Here, a pattern like %VARIABLE_NAME% is used to create a .ami template. Then our SPIMPro can be used to generate all combinations of possible parameter values as a table. There can easily be hundreds or even thousands of cases. Similar to the systematic approach mentioned in my previous post, we can then generate the corresponding .ami files for all these cases, so there will be hundreds or thousands of them! The next step is to "drive" them and obtain the single model's performance. Most EDA tools either do not have the automation capability to do this in batch mode or require further programming. In our case, SPIMPro and SPIVPro have built-in functions to support this sweeping flow in batch mode, all in the same environment. The SPISimAMI model driver is used extensively here! Once each case's simulation is done, one again extracts the performance, then compares against the values obtained from the raw data and computes the deltas.
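A minimal sketch of that templating step is below. The .ami snippet and parameter names are hypothetical, and a real flow would write each generated case out to its own .ami file:

```python
import itertools

# Hypothetical .ami template using %NAME% placeholders, as described above.
template = """(ctle_model
  (Model_Specific
    (dc_gain   (Usage In)(Type Float)(Value %DC_GAIN%))
    (peak_freq (Usage In)(Type Float)(Value %PEAK_FREQ%))))"""

sweeps = {"DC_GAIN": ["-6", "-3", "0"], "PEAK_FREQ": ["4e9", "8e9"]}

cases = {}
for i, combo in enumerate(itertools.product(*sweeps.values())):
    text = template
    for name, value in zip(sweeps, combo):
        text = text.replace("%" + name + "%", value)
    cases["case_%04d.ami" % i] = text        # write to disk in a real flow

print(len(cases))                            # 3 x 2 = 6 parameter combinations
```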

A scatter plot like the one below will quickly indicate which AMI parameter combinations may not work properly in the newly created AMI model. In that case, one needs to go back to the modeling stage to check the code, then run this sweep validation all over again.

Channel correlation:

The model validation mentioned in the previous section covers only a single model, not the full channel. So one still needs to pick several full-channel setups to fully qualify the models. A caveat of channel analysis is that it only shows time-domain data, regardless of whether the flow is "statistical" or "bit-by-bit"; that means it is often not easy to qualify a frequency-domain component such as a CTLE. In this case, a corresponding S-parameter whose Sdd12 (differential input to differential output) matches the CTLE AMI settings can be used for an apples-to-apples comparison, as in the schematic shown below:

Another required step here is to test with different EDA vendors' tools. This presents another challenge, because channel simulators are usually pricey and it's rarely the case that one company has all of them (e.g. ADS, HyperLynx, SystemSI, QCD and HSpice). Different EDA tools do invoke AMI models differently; for example, some simulators pass an absolute path in the DLL_Path reserved parameter while others send only a relative path. Without going through this step, it's difficult to predict how a model will behave in different tools.

Documentation:

Once all these are done, the final step is of course to create an AMI model usage guide together with some sample setups. Usually it starts with the IBIS model's pin/model associations and some performance charts, followed by descriptions of the different AMI parameters' meanings and their mapping to the data sheet. One may also add extra information, such as alternatives if the user's EDA tool does not support newer keywords such as Dll_Path, Dll_ID or Supporting_Files. Waveform comparisons between the original data (silicon measurement) and the AMI results should also be included. Finally, it is beneficial to provide instructions on how an example channel using this model can be set up in popular EDA tools such as ADS, HyperLynx or HSpice.

Summary:

There you have it: the end-to-end AMI modeling process without touching the programming details! Both the AMI API and the programming languages are moving targets, as they both evolve with time. Thus one must keep honing the skills and techniques involved to be able to deliver good-quality models efficiently and quickly. This is a task which requires discipline and experience across different domains. After sharing all this with you readers, do you still want to do it yourself? 🙂 Happy modeling!

IBIS-AMI: Something about CTLE

Overview:

A continuous-time linear equalizer, or CTLE for short, is commonly used in modern communication channels. In a system where lossy channels are present, a CTLE can often recover signal quality for the receiver or for downstream continued signaling. There have been many articles online discussing how a CTLE works theoretically; more thorough technical details are certainly also available in college/graduate-level communication and IC design textbooks. In this blog post, I would like to focus more on its IBIS-AMI modeling aspects from a practical point of view. While not all the secret sauce will be revealed here :-), hopefully the points mentioned will give readers a good starting point for implementing or determining their own CTLE/AMI modeling methodologies.

[Credit:] Some of the pictures used in this post are from Professor Sam Palermo's course webpage linked below. He was also previously my colleague at Intel. (We didn't know each other, though…)

ECEN 720: High-Speed Links Circuits and Systems

 

What and why CTLE:

The picture above shows two common SERDES channel setups. While the one at the top has a direct connection between Tx and Rx, the bottom one has a "repeater" to cascade the upstream and downstream channels together. This "cascading" can be repeated more than once, so there may be more than two channels involved. A CTLE may sit inside the Rx of both set-ups, or inside the middle "ReDriver" in the bottom one. In either case, the S-parameter block represents a generalized channel. It may contain passive elements such as the package, transmission lines, vias or connectors etc. A characteristic of such a channel is that it presents different losses across the spectrum, i.e. dispersion.

For example, if we plot these channels' differential-input-to-differential-output responses, we may see their frequency domain (FD) loss as shown above.

Digital signals being transmitted are more or less like a sequence of bit/square pulses. We know that very rich frequency components are generated during their sharp rising/falling transition edges. Ideally, a communication channel propagating these signals should behave like a (unity-gain) all-pass filter. That is, the various frequency components of the signal should all be treated equally; otherwise, distortion will occur. Such an ideal response is indicated as the green box below:

In reality, such an all-pass filter does not come often. In order to compensate our lossy channels (as indicated by the red box) to be more like the ideal case (green box) as an end result, we need to apply equalization (indicated by the blue box). This is why an equalizer is often used… basically, it provides a transfer function/frequency response that compensates the lossy channel in order to recover the signal quality. A point worth noting here is that the blue box and red box are "tied" together, so using the same equalizer for channels of different losses may cause under- or over-compensation. That is, an equalizer is related to the channel being compensated. Another point is that a CTLE is just one kind of such linear equalizer.

CTLE is a subset of linear equalizer:

A linear equalizer can be implemented in many different ways. For example, a feed-forward equalizer (FFE) is often used on the Tx side and within a DFE:

An FFE's behavior is relatively easy to predict, and its AMI implementation is also quite straightforward. For example, the response of an FFE with a single pre-tap or post-tap can be easily visualized and predicted:
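To make this concrete, an FFE is just an FIR filter with taps spaced one unit interval apart, so its frequency response follows directly from the tap weights. The Python sketch below uses hypothetical tap weights and bit rate purely for illustration:

```python
import numpy as np

def ffe_response(taps, bit_rate, freqs):
    """Frequency response of an FFE modeled as an FIR filter.

    Taps are spaced one unit interval (1/bit_rate) apart, so
    H(f) = sum_k w_k * exp(-j*2*pi*f*k*UI).
    """
    ui = 1.0 / bit_rate
    h = np.zeros(len(freqs), dtype=complex)
    for k, w in enumerate(taps):
        h += w * np.exp(-2j * np.pi * freqs * k * ui)
    return h

# Hypothetical example: 0.8 main cursor with a single -0.2 post-tap at 10 Gbps
freqs = np.linspace(0, 10e9, 201)
h = ffe_response([0.8, -0.2], bit_rate=10e9, freqs=freqs)
# DC gain is the sum of the taps (de-emphasis), while the gain rises
# toward the Nyquist frequency (bit_rate/2), boosting the high band
```

A single post-tap thus gives the familiar high-pass-like shape: attenuated DC, full gain at Nyquist.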

Now, a CTLE is a more “generalized” linear equalizer, so its behavior is usually represented in terms of frequency responses. Thus, to accommodate/compensate channels of different losses, we will have different FD responses for CTLE:

Now that IBIS-AMI modeling of the CTLE is our concern, how do we obtain such modeling data, and how should it be modeled?

Different types of CTLE modeling data:

While a CTLE's behavior can be easily understood in the frequency domain, for IBIS-AMI or channel analysis it eventually needs to come back to the time domain (TD) to convolve with inputs. This is because both the statistical and bit-by-bit modes of link analysis operate in the time domain. Thus we have several choices: provide the model FD data and have it converted to TD inside the implemented AMI model, or simply provide the TD response directly to the model. The benefit of the first approach is that the model can perform the iFFT based on the analysis' bit-rate and sampling-rate settings. The advantage of the latter is that the provided TD data can be checked for good quality beforehand, and the model does not need to repeat a similar iFFT every time simulation starts. Of course, the best implementation, i.e. SPISim's approach, is to support both modes for best flexibility and expandability 🙂

  • Frequency domain data:

Depending on the availability of the original EQ design, there are several possibilities for FD data: synthesis with poles and zeros, extraction from S-parameters, or AC simulation to extract the response.

  1. Poles and zeros: Given different numbers of poles and zeros, their locations, and the DC boost level, one can synthesize FD response curves. So, say we are given a data sheet which specifies the EQ level at some key frequencies like below: then one can sweep different numbers and locations of poles and zeros to obtain matching curves that meet the spec. Such synthesized curves are well behaved in terms of passivity and causality etc., and can be extended to cover the desired frequency bandwidth.
  2. Extract from S-parameters: Another way to obtain the frequency response is from the EQ circuit's existing S-parameters. This provides the best correlation scenario for the generated AMI model because the raw data can serve as a design target. However, there are many intermediate steps one has to perform first. For example, the given S-parameter may be single-ended and cover only a limited frequency range (due to the limitation of the VNA being used), so if a tool like our SPISim's SPro is used, one needs to: reorder the ports (from even-odd ordering, i.e. 1-3, 2-4 etc., to sequential ordering, i.e. 1, 2 -> 3, 4), convert to differential/mixed mode, then extrapolate toward DC and high frequencies (many algorithms can be used, and such extrapolation must also abide by physics), and finally extract only the relevant differential-input -> differential-output portion of the data.
  3. AC simulation: This assumes the original design is available for AC simulation. Such raw data still needs to be sanity checked in terms of loss and phase change. For example, if the gain is not flat toward the DC and high-frequency ranges, extra fixing may be needed, otherwise the iFFT results will be spurious.
  • Time domain data: The time domain response can be obtained from the aforementioned FD data by doing an iFFT directly, as shown below. It may also be obtained by simulating the original EQ circuit in the time domain. However, there are still several considerations:
  1. How to do the iFFT: Padding with zeros or conjugates is usually needed for a real-data iFFT. If the original FD data is not "clean" in terms of causality, passivity or asymptotic behavior, it needs to be fixed first.
  2. TD simulation: Is simulating the impulse response directly possible? If not, maybe a step response should be performed instead. Then what time step or ramp speed should be used to excite the input stimulus? Note that during IBIS-AMI link analysis, the time step being used may be different from the one used here, so how the data will be scaled accordingly must also be considered. Once a step response is available, successive differentiation will produce the impulse response with proper scaling.
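As a minimal sketch of the pole/zero synthesis and the subsequent real-data iFFT, the Python snippet below builds a one-zero/two-pole CTLE response and converts it to a TD impulse response. The DC gain, zero and pole frequencies are hypothetical values, not taken from any particular data sheet; `numpy.fft.irfft` is used because it supplies the conjugate-symmetric negative-frequency half implicitly:

```python
import numpy as np

def ctle_freq_response(freqs, zeros, poles, dc_gain=1.0):
    """Synthesize a CTLE transfer function from zero/pole frequencies (Hz):
    H(f) = dc_gain * prod(1 + jf/fz) / prod(1 + jf/fp)."""
    h = np.full(len(freqs), dc_gain, dtype=complex)
    for fz in zeros:
        h *= 1.0 + 1j * freqs / fz
    for fp in poles:
        h /= 1.0 + 1j * freqs / fp
    return h

def impulse_from_fd(h_half, df):
    """Real TD impulse response from FD samples covering DC..Nyquist.
    irfft enforces the conjugate-symmetric (real-signal) spectrum, i.e.
    the 'padding with conjugate' mentioned above."""
    n = 2 * (len(h_half) - 1)   # full (even) FFT size
    dt = 1.0 / (n * df)         # resulting TD sample interval
    return np.fft.irfft(h_half) / dt, dt

# Hypothetical CTLE: ~-10.5 dB DC gain, zero at 1 GHz, poles at 4 and 8 GHz
freqs = np.arange(1025) * 10e6            # DC to 10.24 GHz in 10 MHz steps
h = ctle_freq_response(freqs, zeros=[1e9], poles=[4e9, 8e9], dc_gain=0.3)
imp, dt = impulse_from_fd(h, df=10e6)
# Peaking appears between the zero and the poles; the impulse response
# integrates back to the DC gain, i.e. sum(imp) * dt equals H(0)
```

Note the 1025-point FD grid yields a 2048-point (2^N) iFFT, which ties into the re-sampling consideration discussed later.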

How to implement CTLE AMI model:

Now that we have data to model, how they will be implemented in C/C++ code to support the AMI API for link analysis is another level of consideration.

  • Decision mechanism: As mentioned previously, a CTLE FD response targets a channel of a certain loss, thus the decision to use appropriate CTLE settings for the particular channel at hand must be made either by the user or by the model itself. While the former (user decision) does not need further explanation, the latter case (model decision, i.e. being adaptive) is a whole different topic and often vendor specific.

Typically, such an adaptive mechanism has CTLE curves pre-sorted in terms of strength or EQ level, and a figure of merit (FOM) is extracted from the equalized signal. That is, apply a tentative CTLE to the received data, then calculate such a FOM. Then increase or decrease the EQ level by using adjacent CTLE curves and see whether the FOM improves. Continue doing so until either the selected CTLE "ID" settles or it reaches the range bounds. This process may be performed across many cycles until it "stabilizes" or is "locked". Thus, the model may need to go through a training period first to determine the best CTLE to use during subsequent link analysis.
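The adaptation loop just described can be sketched in a few lines. This is only a schematic illustration of the greedy walk over a pre-sorted CTLE bank, not any vendor's actual algorithm; `rx_wave_fn`, `fom_fn` and the CTLE bank are all placeholders that a real model would supply:

```python
def adapt_ctle(rx_wave_fn, ctle_bank, fom_fn, start_id=0):
    """Greedy adaptation sketch: walk the pre-sorted CTLE bank up or down
    as long as the figure of merit (FOM) of the equalized signal improves.

    rx_wave_fn(ctle) -> equalized waveform for a candidate CTLE setting
    fom_fn(waveform) -> scalar FOM (higher is better), e.g. inner eye
                        height at the sampling point
    """
    cur = start_id
    cur_fom = fom_fn(rx_wave_fn(ctle_bank[cur]))
    improved = True
    while improved:
        improved = False
        for cand in (cur - 1, cur + 1):          # try adjacent EQ levels
            if 0 <= cand < len(ctle_bank):
                fom = fom_fn(rx_wave_fn(ctle_bank[cand]))
                if fom > cur_fom:                # keep the better setting
                    cur, cur_fom, improved = cand, fom, True
                    break
    return cur                                   # "locked" CTLE ID

# Toy check: with a FOM that peaks at EQ level 3, the walk settles there
best = adapt_ctle(lambda c: c, list(range(8)), lambda w: -(w - 3) ** 2)
```

In a real model this loop would be spread across many simulation chunks, with the FOM averaged over enough bits before each up/down decision.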

  • EQ configurations:

So now you have a bunch of settings or data like below. How should you architect the model properly such that it can be extended in the future with revised CTLE responses, or allow the user to perform corner selections (which essentially adds another dimension)?

This is now more in the software architecture domain and needs some trade-off considerations. For example, you may want to provide fine-grid, full-spectrum FD/TD responses, but the data may become too big, so internal re-sampling may be needed. For FD data, the model may need to resample to 2^N points for an efficient iFFT. Different corner/parameter selections should not be hard coded in the model because a future revised model's parameters may be different. For external source data, encryption is usually needed to protect the modeling IP. With proper planning, one may reuse the same CTLE module in many different designs without customization on a case-by-case basis.
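One possible way to do such internal re-sampling, assuming the tabulated FD data is dense enough, is to interpolate magnitude and unwrapped phase onto the uniform grid of 2^N/2 + 1 points that a 2^N-point real iFFT expects. A minimal Python sketch (the delay and grid sizes below are arbitrary illustrations):

```python
import numpy as np

def resample_fd(freqs, h, n_fft):
    """Resample tabulated FD data onto the uniform DC..Fmax grid of
    n_fft//2 + 1 points needed for a radix-2 real iFFT of size n_fft.

    Magnitude and unwrapped phase are interpolated separately, which
    avoids the artifacts of interpolating real/imag parts across
    phase wraps.
    """
    grid = np.linspace(0.0, freqs[-1], n_fft // 2 + 1)
    mag = np.interp(grid, freqs, np.abs(h))
    ph = np.interp(grid, freqs, np.unwrap(np.angle(h)))
    return grid, mag * np.exp(1j * ph)

# Example: a pure 50 ps delay tabulated on 300 points, resampled so a
# 256-point iFFT can be used
f = np.linspace(0.0, 20e9, 300)
h = np.exp(-2j * np.pi * f * 50e-12)
grid, h2 = resample_fd(f, h, 256)
```

Phase unwrapping only works when the original grid is fine enough that adjacent phase steps stay well below pi, which is another reason to sanity check the source data first.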

  • Correlations:

Finally, it's time to correlate the created CTLE AMI model against the original EQ design or its behavioral model. Done properly, you should see signals being "recovered" from left to right below:

However, getting results like this on the first try may be wishful thinking. In particular, the IBIS-AMI model does not work alone… it needs to work together with the associated IBIS model (the analog front end) in most link simulators, so the IBIS model's parasitics and loading etc. will all affect the result. Besides, the high-impedance assumption of the AMI model also means proper termination matching is needed before one can drop it in as a direct replacement of the existing EQ circuit or behavioral model for correlation.

Summary:

At this point, you may realize that while a CTLE can be easily understood from its theoretical behavior, implementing it to meet IBIS-AMI demands is a different story. We have seen CTLE models made by another vendor that were not expandable at all, such that the client needed to scrap the existing ones for a minor revision of the CTLE behavior/settings (also because this particular model maker charges too much, of course). It's our hope that the learning experiences mentioned in this post will provide some guidance or considerations, whether you decide to dive deep into developing your own CTLE IBIS-AMI model, or just delegate such tasks to professional model makers like us 🙂

IBIS Model: IBIS AMI modeling flow

In several previous posts, we talked about IBIS modeling of the analog buffer front end. In today's post, we are going to give an overview of the modeling of the algorithmic portion, which provides equalization in both TX and RX. These algorithmic blocks are usually modeled with IBIS AMI, the "Algorithmic Modeling Interface" portion of the IBIS spec.

IBIS AMI’s scope:

IBISAMI_Block

 

The figure above represents the channel from end to end. The passive channel is composed of various passive elements, such as PCB traces modeled with transmission lines, and vias and connectors modeled with S-parameters. The pink blocks are the analog buffers which act as front ends interfacing to the channel directly. They are usually modeled in IBIS. The TX portion before the front end, and the RX portion after it, are equalization circuits. They can be modeled in IBIS AMI. So the AMI model actually works with IBIS to complete both the TX and RX paths from latch to latch, instead of pad to pad. In an IBIS model which uses AMI, one will find a statement like the one below, which points to the AMI's parameter file with the .ami extension, the compiled portion in either .dll (dynamic link library on Windows) or .so (shared object on Linux), and the IDE with which these .dll/.so files were produced.

IBISAMI_Include

Why IBIS AMI:

The main quantitative measures of signal integrity are the eye height and eye width of an eye plot. From the eye plot, the bit-error rate (BER) or other plots like the bath-tub curve can be derived. The eye plot is formed by folding many bits in the time domain, with the waveform representing the response of these bit sequences. When doing end-to-end channel analysis with a transistor buffer or traditional IBIS model, one has to go through actual simulation to obtain such a waveform. This is very time consuming and can only acquire a very limited number of bits. In order to speed up the process, waveform synthesis is desired, which brings the need for a new spec. and the invention of AMI. With these performance and technical considerations included, the following lists why IBIS AMI is required:

  • Industry standard: There are several high-level modeling languages, like MATLAB or SystemVue, which can achieve the same aforementioned purpose within their own environments. However, this limits portability and the choice of simulators. AMI bridges the gap by defining an open interface with which all IC and EDA vendors can interact to exchange data for system analysis.
  • Performance: As explained in the beginning of this section, a very low BER requires data from millions of bits. It's desired to obtain this data in seconds, not days or weeks.
  • Flexibility: Before IBIS AMI, the IBIS committee had attempted to address equalization needs via now somewhat outdated keywords like driver schedule and bus hold. The time it takes to revise these keywords in the IBIS committee is just too long to match the speed of technology's evolution. Even in the IBIS V4.0 era, other languages like Verilog-A were introduced, yet when millions of bits and sensitive IP designs (as explained below) are under consideration, compiled libraries in binary form (rather than interpreted, like Verilog-A) can meet the demands much better.
  • IP Protection: EQ design is usually considered very sensitive IP by an IC vendor. Thus a model which can't be reverse engineered and gives better control over which design parameters to expose, like the AMI spec, is desired.

What is in IBIS AMI:

Now that we know the scope and applications of AMI, let's take a look at its components in more detail. Depending on your perspective:

  • For an IBIS AMI model developer: IBIS AMI is an interface realized in three functions for which you must provide the implementations, in whatever language, compiled into a .dll or .so. These three functions are Init, GetWave and Close: IBISAMI_Header
    • AMI_Init: This function must be implemented. Since a lengthy bit sequence will be broken into small chunks and analyzed accordingly, some data structures may be reused many times. In such a usage scenario, the common "initialization" should be done in this function. It is somewhat like the "constructor" of an object-oriented language. When a direct pulse is used to synthesize the BER based on the LTI (linear, time-invariant) assumption, the computation will be done in this function and an implementation of the GetWave function is not needed.
    • AMI_Close: This function must be implemented. It acts as a "destructor" to clean up and release the allocated memory back to the OS.
    • AMI_GetWave: This function is optional to implement. If the channel is non-LTI, direct synthesis to get the BER is not possible. In that case, the waveform of a lengthy bit sequence is needed. The GetWave function's implementation provides such a mechanism to compute and convert the input bit sequence into its corresponding response.
  • For an IBIS AMI model user: three different files are included in an AMI model release:
    • IBIS file: This is the analog front end of the buffer. An "[Algorithmic Model]" keyword must be used to point to the next two files, .ami and .so/.dll, for the algorithmic part of the model.
    • .ami file: This is a plain text file which lists the parameters the model exposes, their variable types and ranges. For example, a 4-tap EQ with weights for one pre-cursor and three post-cursors is defined below. Also specified in the .ami file are the implementation of the GetWave functions in the compiled .dll/.so and the usage modes. When a simulator reads the .ami file through the pointer from the .ibs file, it will know how to interact with the .dll/.so files for system analysis.

IBISAMI_AMIParam

    • .dll/.so: These are the compiled portions of the model. Note that both .dll and .so files also depend on whether your OS is 32-bit or 64-bit, so to run such AMI models on a 64-bit machine, one must have 64-bit .dll or .so files as well.

 
IBIS AMI usage scenarios:

An AMI model developer can't foresee how the developed models will be used. However, the model's implementation itself will impose such limitations. There are two modes of AMI operation: statistical and empirical. If the "GetWave" function is not implemented, the model will only be able to run in the "statistical flow", meaning the passive channel must be LTI. On the other hand, if the "GetWave" function is implemented, then the model may also run in the "empirical flow", which allows a non-LTI channel. The figure below gives an overview of the two modes of AMI operation:

IBISAMI_Work

      • Statistical flow: In this flow, the channel is LTI. That means waveforms for different bit sequences may be constructed from a single bit's impulse response using superposition. So, provided the TD impulse response of the passive channel, the TX and RX models can perform convolution on this single pulse. Once the simulator receives the result from the RX model, it can perform peak distortion analysis or similar superposition to get the BER or eye directly.
      • Empirical flow: In this flow, the channel is non-LTI and no waveform superposition should be done. Thus a digital bit sequence must be formed. This sequence may or may not be broken into smaller chunks, then convolved with the passive channel portion's impulse response. The TX and RX GetWave functions are then called on the results to form the actual TD waveform of the full channel. The simulator will then fold the waveform to compute the BER and other parameters.
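The key difference between the two flows can be shown with a toy Python sketch: for an LTI channel, superposing shifted copies of a single bit's pulse response (the statistical-flow idea) reproduces exactly what a direct bit-by-bit convolution (empirical-flow style) gives. The channel coefficients and bit pattern below are arbitrary illustrations:

```python
import numpy as np

def pulse_response(channel_h, samples_per_bit):
    """Response of an LTI channel to a single '1' bit (a square pulse)."""
    return np.convolve(np.ones(samples_per_bit), channel_h)

def synthesize(bits, pr, samples_per_bit):
    """Statistical-flow idea: superpose shifted copies of the single-bit
    pulse response instead of simulating the whole sequence."""
    wave = np.zeros(len(bits) * samples_per_bit + len(pr) - 1)
    for i, b in enumerate(bits):
        if b:
            start = i * samples_per_bit
            wave[start:start + len(pr)] += pr
    return wave

channel_h = np.array([0.1, 0.5, 0.3, 0.1])  # hypothetical channel impulse response
spb = 4                                      # samples per bit
bits = [1, 0, 1, 1, 0]

pr = pulse_response(channel_h, spb)
superposed = synthesize(bits, pr, spb)                               # statistical style
direct = np.convolve(np.repeat(bits, spb).astype(float), channel_h)  # bit-by-bit style
# For an LTI channel the two agree sample-for-sample; once the channel
# is non-LTI, only the direct (GetWave-style) computation remains valid
```

This also explains why a GetWave-less model is restricted to the statistical flow: without it, superposition is the only waveform the simulator can construct.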

In reality, the TX and RX AMI models may come from different vendors; thus their combination also sets limits on how the models can be used. Interested users may see section 10 of the IBIS spec. for detailed operational explanations.

This post gives a brief overview of IBIS AMI. The AMI modeling flow imposes challenges on model developers. They usually need to know more inside details about the EQ design rather than taking a black-box modeling approach. Besides, the extracted EQ algorithms, along with their parameters, must be coded in C/C++ (at least) in order to compile and generate the required .dll/.so files. Lastly, the flow is more or less EQ-implementation dependent (depending on which high-level language the EQ was designed in), and the model validation also requires deeper knowledge of signal integrity. It's common to have combined EDA and SI expertise, like what we have here at SPISim, working closely with IC vendors to deliver such models with good quality. In future posts, we may come back to cover all these steps and topics in more detail.