AI/ML: Use ML techniques for CTLE AMI modeling

Note:

  • The dataset and IPython notebook for this post are available at SPISim's GitHub page: [HERE]
  • This notebook may be better rendered by nbviewer and can be viewed [HERE].

Use ML techniques for SERDES CTLE modeling for IBIS-AMI simulation

Table of contents:

1. Motivation

2. Problem Statement

3. Generate Data

4. Prepare Data

5. Choose a Model

6. Training

7. Prediction

8. Reverse Direction

9. Deployment

10. Conclusion

Motivation:

One of SPISim's main services is IBIS-AMI modeling and consulting. In most cases, this happens when an IC company is ready to release simulation models for their customers to do system design. We then require data from circuit simulation, lab measurement or data-sheet specs in order to create the associated IBIS-AMI model. Occasionally, we also receive requests to provide AMI models for architecture-planning purposes. In this situation, there is no data or spec. Instead, the client asks to input performance parameters dynamically (not as presets) so that they can evaluate performance at the architecture level before committing to a certain spec and designing accordingly. In such a case, we may need to generate data dynamically based on the user's input before it is fed into existing IBIS-AMI models of the same kind.

A continuous-time linear equalizer (CTLE) is often used in modern SERDES and even DDR5 designs. It is basically a filter in the frequency domain (FD) with various peaking and gain properties. As IBIS-AMI is simulated in the time domain (TD), the core implementation in the model is an iFFT to convert the FD response into a TD impulse response.

CtleFDTD

In order to create such a CTLE AMI model from user-provided specs on the fly, we would like to avoid a time-consuming parameter sweep (to match the performance) during the runtime of the initial set-up call. Thus machine learning techniques may be applied to help us build a prediction model that maps input attributes to the associated CTLE design parameters, so that the FD and TD responses can be generated directly. After that, we can feed the data into our existing CTLE C/C++ code blocks for channel simulation.
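For reference, the FD-to-TD conversion at the heart of such a model can be sketched in a few lines. This is a minimal illustration only; it assumes a single-sided response sampled on a uniform frequency grid starting at DC, and the function and variable names are ours, not our production AMI code:

import numpy as np

def fd_to_impulse(freqAry, respAry):
    # freqAry: uniformly spaced frequencies starting at DC (Hz)
    # respAry: complex response H(f) sampled at freqAry
    impulse = np.fft.irfft(respAry)          # enforces conjugate symmetry -> real TD result
    deltaF  = freqAry[1] - freqAry[0]
    deltaT  = 1.0 / (deltaF * len(impulse))  # resulting TD sample interval
    timeAry = np.arange(len(impulse)) * deltaT
    return timeAry, impulse / deltaT         # scale the discrete iFFT to an impulse response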

Problem Statement:

We would like to build a prediction model such that when a user provides desired CTLE performance parameters, such as DC gain, peaking frequency and peaking value, the model maps them to the corresponding CTLE design parameters, i.e. the pole and zero locations. Once this mapping is done, the CTLE frequency response is generated, followed by the time-domain impulse response. The resulting IBIS-AMI CTLE model can then be used in channel simulation to evaluate such a CTLE block's impact on a system... before the actual silicon design has been done.

Generate Data:

Overview

The model to be built performs numerical (rather than categorical) prediction of about three to six attributes, depending on the CTLE structure, from four input parameters: dc gain, peak frequency, peak value and bandwidth. We will sample the input space randomly (as a full combinatorial sweep is impractical) and then perform measurements programmatically in order to generate a large enough dataset for modeling purposes.

Defining CTLE equations

We will target the following two most common CTLE structures:

IEEE 802.3bs 200/400Gbe Chip-to-module (C2M) CTLE:

C2M_CTLE

IEEE 802.3bj Channel Operating Margin (COM) CTLE:

COM_CTLE
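As a runnable sketch of the COM CTLE shown above: it is commonly written as one zero over two poles with a linear dc gain. The exact form and the variable names below (gDc, pole1, pole2, zero1) are our assumptions for illustration, not a verbatim copy of the spec's equation:

import numpy as np

def com_ctle_response(freqAry, gDc, pole1, pole2, zero1):
    # one-zero/two-pole CTLE: H(0) = gDc, peaking set by the zero/pole placement
    jf = 1j * freqAry
    return (gDc + jf / zero1) / ((1.0 + jf / pole1) * (1.0 + jf / pole2))

# example: sweep 10 MHz ~ 40 GHz and get the magnitude in dB
freqAry = np.linspace(1e7, 4e10, 4000)
respAry = com_ctle_response(freqAry, gDc=0.6, pole1=1.7e10, pole2=5.0e9, zero1=5.0e9)
magDb   = 20 * np.log10(np.abs(respAry))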

Define sampling points and attributes

All these pole and zero values are continuous (numerical), so sub-sampling of the full solution space will be performed. Once the frequency response corresponding to each configuration is generated, we will measure its dc gain, peak frequency and value, bandwidth (3dB loss from the peak), and the frequencies at which 10% and 50% of the gain rise between the dc and peak values occurs. The last two attributes increase the number of attributes available when creating the prediction model.

Attributes to be extracted:

CTLEAttr
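A hedged sketch of how these attributes might be extracted from a magnitude response (linear scale; the function and variable names are ours):

import numpy as np

def measure_ctle(freqAry, magAry):
    dcGain = magAry[0]
    pkIndx = np.argmax(magAry)
    peakF, peakVal = freqAry[pkIndx], magAry[pkIndx]
    # bandwidth: first frequency past the peak where gain drops 3dB below the peak
    bwMask = magAry[pkIndx:] <= peakVal * 10 ** (-3.0 / 20.0)
    bandW  = freqAry[pkIndx:][bwMask][0] if bwMask.any() else freqAry[-1]
    # frequencies where 10% / 50% of the dc-to-peak gain rise has occurred
    def riseFreq(ratio):
        level = dcGain + ratio * (peakVal - dcGain)
        mask  = magAry[:pkIndx + 1] >= level
        return freqAry[:pkIndx + 1][mask][0] if mask.any() else -1.0
    return dcGain, peakF, peakVal, bandW, riseFreq(0.1), riseFreq(0.5)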

Synthesize and measure data

A python script (so that it can also be used with subsequent Q-learning w/ OpenAI later) has been written to synthesize these frequency responses and perform the measurements at the same time.

Quantize and Sampling:

Sampling

Synthesize:

Synthesize

Measurement:

Measurement

The end result of this data-generation phase is a 100,000-point dataset for each of the two CTLE structures. We can now proceed to the prediction modeling.
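Putting the pieces together, the generation loop might look like the sketch below, which reuses com_ctle_response and measure_ctle from the earlier sketches. The quantization grids here are illustrative (chosen to resemble the dataset's ranges), not the exact steps we used:

import numpy as np
import pandas as pd

rng  = np.random.default_rng(0)
rows = []
for caseId in range(100000):
    # quantized random draws over the design space (step sizes are illustrative)
    gDc   = rng.choice(np.arange(0.20, 0.96, 0.05))
    pole1 = rng.choice(np.arange(1.0e10, 2.46e10, 0.05e10))
    pole2 = rng.choice(np.arange(1.0e9, 9.6e9, 0.5e9))
    zero1 = rng.choice(np.arange(1.0e9, 9.6e9, 0.5e9))
    freqAry = np.linspace(1e6, 4e10, 2000)
    magAry  = np.abs(com_ctle_response(freqAry, gDc, pole1, pole2, zero1))
    rows.append((caseId, gDc, pole1, pole2, zero1) + measure_ctle(freqAry, magAry))

colNames = ['ID', 'Gdc', 'P1', 'P2', 'Z1',
            'Gain', 'PeakF', 'PeakVal', 'BandW', 'Freq10', 'Freq50']
pd.DataFrame(rows, columns=colNames).to_csv('COM_CTLEData.csv', index=False)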

Prepare Data:

In [42]:
## Using COM CTLE as an example below:

# Environment Setup:
import os
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import numpy as np

prjHome = 'C:/Temp/WinProj/CTLEMdl'
workDir = prjHome + '/wsp/'
srcFile = prjHome + '/dat/COM_CTLEData.csv'

def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
    path = os.path.join(workDir, fig_id + "." + fig_extension)
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format=fig_extension, dpi=resolution)
In [43]:
# Read Data
srcData = pd.read_csv(srcFile)
srcData.head()

# Info about the data
srcData.head()
srcData.info()
srcData.describe()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 100000 entries, 0 to 99999
Data columns (total 11 columns):
ID         100000 non-null int64
Gdc        100000 non-null float64
P1         100000 non-null float64
P2         100000 non-null float64
Z1         100000 non-null float64
Gain       100000 non-null float64
PeakF      100000 non-null float64
PeakVal    100000 non-null float64
BandW      100000 non-null float64
Freq10     100000 non-null float64
Freq50     100000 non-null float64
dtypes: float64(10), int64(1)
memory usage: 8.4 MB
Out[43]:
ID Gdc P1 P2 Z1 Gain PeakF PeakVal BandW Freq10 Freq50
count 100000.000000 100000.000000 1.000000e+05 1.000000e+05 1.000000e+05 100000.000000 1.000000e+05 100000.000000 1.000000e+05 1.000000e+05 1.000000e+05
mean 49999.500000 0.574911 1.724742e+10 5.235940e+09 5.241625e+09 0.574911 3.428616e+09 2.374620 1.496343e+10 4.145855e+08 1.136585e+09
std 28867.657797 0.230302 4.323718e+09 2.593138e+09 2.589769e+09 0.230302 4.468393e+09 3.346094 1.048535e+10 5.330043e+08 1.446972e+09
min 0.000000 0.200000 1.000000e+10 1.000000e+09 1.000000e+09 0.200000 0.000000e+00 0.200000 9.965473e+08 0.000000e+00 -1.000000e+00
25% 24999.750000 0.350000 1.350000e+10 3.000000e+09 3.000000e+09 0.350000 0.000000e+00 0.500000 4.770445e+09 0.000000e+00 -1.000000e+00
50% 49999.500000 0.600000 1.700000e+10 5.000000e+09 5.000000e+09 0.600000 0.000000e+00 0.800000 1.410597e+10 0.000000e+00 -1.000000e+00
75% 74999.250000 0.750000 2.100000e+10 7.500000e+09 7.500000e+09 0.750000 7.557558e+09 2.710536 2.386211e+10 8.974728e+08 2.510339e+09
max 99999.000000 0.950000 2.450000e+10 9.500000e+09 9.500000e+09 0.950000 1.516517e+10 16.731528 3.965752e+10 1.768803e+09 4.678211e+09

All columns are fully populated. Let's plot some distributions...

In [44]:
# Drop the ID column
mdlData = srcData.drop(columns=['ID'])

# plot distribution
mdlData.hist(bins=50, figsize=(20,15))
save_fig("attribute_histogram_plots")
plt.show()
Saving figure attribute_histogram_plots

There are abnormally high peaks for Freq10, Freq50 and PeakF. We need to plot the FD data to see what's going on...

Error checking:

NoPeaking

Apparently, this is caused by CTLE settings without peaking. We can safely remove these data points as they will not be used in an actual design.

In [45]:
# Drop those freq peak at the beginning (i.e. no peak)
mdlTemp = mdlData[(mdlData['PeakF'] > 100)]
mdlTemp.info()


# plot distribution again
mdlTemp.hist(bins=50, figsize=(20,15))
save_fig("attribute_histogram_plots2")
plt.show()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 41588 entries, 1 to 99997
Data columns (total 10 columns):
Gdc        41588 non-null float64
P1         41588 non-null float64
P2         41588 non-null float64
Z1         41588 non-null float64
Gain       41588 non-null float64
PeakF      41588 non-null float64
PeakVal    41588 non-null float64
BandW      41588 non-null float64
Freq10     41588 non-null float64
Freq50     41588 non-null float64
dtypes: float64(10)
memory usage: 3.5 MB
Saving figure attribute_histogram_plots2

Now the distribution looks good. We can proceed to separate the variables (i.e. attributes) and the targets.

In [46]:
# take this as modeling data from this point
mdlData = mdlTemp

varList = ['Gdc', 'P1', 'P2', 'Z1']
tarList = ['Gain', 'PeakF', 'PeakVal']

varData = mdlData[varList]
tarData = mdlData[tarList]

Choose a Model:

We will use Keras as the modeling framework. While it calls TensorFlow on our machine in this case, the GPU is only used for training. We will use a (shallow) neural network for modeling, as we want to implement the resulting model in our IBIS-AMI model's C++ code.

In [47]:
from keras.models import Sequential
from keras.layers import Dense, Dropout

numVars = len(varList)  # independent variables
numTars = len(tarList)  # output targets
nnetMdl = Sequential()
# input layer
nnetMdl.add(Dense(units=64, activation='relu', input_dim=numVars))

# hidden layers
nnetMdl.add(Dropout(0.3, noise_shape=None, seed=None))
nnetMdl.add(Dense(64, activation = "relu"))
nnetMdl.add(Dropout(0.2, noise_shape=None, seed=None))
          
# output layer
nnetMdl.add(Dense(units=numTars, activation='sigmoid'))
nnetMdl.compile(loss='mean_squared_error', optimizer='adam')

# Provide some info
#from keras.utils import plot_model
#plot_model(nnetMdl, to_file= workDir + 'model.png')
nnetMdl.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_10 (Dense)             (None, 64)                320       
_________________________________________________________________
dropout_7 (Dropout)          (None, 64)                0         
_________________________________________________________________
dense_11 (Dense)             (None, 64)                4160      
_________________________________________________________________
dropout_8 (Dropout)          (None, 64)                0         
_________________________________________________________________
dense_12 (Dense)             (None, 3)                 195       
=================================================================
Total params: 4,675
Trainable params: 4,675
Non-trainable params: 0
_________________________________________________________________

Training:

We will hold out 20% of the data for testing. Note that we need to scale the input attributes to between 0 and 1 so that the neurons' activation functions operate in their useful range when the weights are calculated. These scalers will be applied "inversely" when we predict the actual performance later on.

In [48]:
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Prepare Training (tran) and Validation (test) dataset
varTran, varTest, tarTran, tarTest = train_test_split(varData, tarData, test_size=0.2)

# scale the data
from sklearn import preprocessing
varScal = preprocessing.MinMaxScaler()
varTran = varScal.fit_transform(varTran)
varTest = varScal.transform(varTest)

tarScal = preprocessing.MinMaxScaler()
tarTran = tarScal.fit_transform(tarTran)

Now we can do the model fit:

In [49]:
# model fit
hist = nnetMdl.fit(varTran, tarTran, epochs=100, batch_size=1000, validation_split=0.1)
tarTemp = nnetMdl.predict(varTest, batch_size=1000)
#predict = tarScal.inverse_transform(tarTemp)
#resRMSE = np.sqrt(mean_squared_error(tarTest, predict))
resRMSE = np.sqrt(mean_squared_error(tarScal.transform(tarTest), tarTemp))
resRMSE
Train on 29943 samples, validate on 3327 samples
Epoch 1/100
29943/29943 [==============================] - 0s 12us/step - loss: 0.0632 - val_loss: 0.0462
Epoch 2/100
29943/29943 [==============================] - 0s 4us/step - loss: 0.0394 - val_loss: 0.0218
... (epochs 3-99 elided; loss decreases steadily from 0.0218 to 0.0020, val_loss from 0.0090 to 2.8466e-04) ...
Epoch 100/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0019 - val_loss: 3.0773e-04
Out[49]:
0.01786895428237113

Let's see how this neural network learns over the epochs:

In [50]:
# plot history
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper right')
plt.show()

Looks quite reasonable. We can now save the Keras model together with the scalers for later evaluation.

In [51]:
# save model and architecture to single file
nnetMdl.save(workDir + "COM_nnetMdl.h5")

# also save scaler
from sklearn.externals import joblib
joblib.dump(varScal, workDir + 'VarScaler.save') 
joblib.dump(tarScal, workDir + 'TarScaler.save') 
print("Saved model to disk")
Saved model to disk

Prediction:

Now let's use this model to make some predictions.

In [52]:
# generate prediction
predict = tarScal.inverse_transform(tarTemp)
allData = np.concatenate([varTest, tarTest, predict], axis = 1)
allData.shape
# flatten the header so that each column gets its own name ('_prd' marks predictions)
headLst = varList + tarList + [t + '_prd' for t in tarList]
headStr = ','.join(headLst)
np.savetxt(workDir + 'COMCtleIOP.csv', allData, delimiter=',', header=headStr)

Let's take 50 points and see how the predictions work:

In [53]:
# Plot Gain (the index range below is shared with the next two plots)
begIndx = 100
endIndx = 150
indxAry = np.arange(0, len(varTest), 1)

plt.scatter(indxAry[begIndx:endIndx], tarTest.iloc[:,0][begIndx:endIndx])
plt.scatter(indxAry[begIndx:endIndx], predict[:,0][begIndx:endIndx])
Out[53]:
<matplotlib.collections.PathCollection at 0x242d059d390>
In [54]:
# Plot Peak Freq.
plt.scatter(indxAry[begIndx:endIndx], tarTest.iloc[:,1][begIndx:endIndx])
plt.scatter(indxAry[begIndx:endIndx], predict[:,1][begIndx:endIndx])
Out[54]:
<matplotlib.collections.PathCollection at 0x242d72df2e8>
In [55]:
# Plot Peak Value
plt.scatter(indxAry[begIndx:endIndx], tarTest.iloc[:,2][begIndx:endIndx])
plt.scatter(indxAry[begIndx:endIndx], predict[:,2][begIndx:endIndx])
Out[55]:
<matplotlib.collections.PathCollection at 0x242d5ea39e8>

Reverse Direction:

The goal of this modeling is to map performance to the CTLE pole and zero locations. What we just did is the other way around (to make sure such a neural network structure meets our needs). Now we need to reverse the direction for the actual modeling. To provide more attributes for better predictions, we will also use the frequencies where 10% and 50% of the gain rise happens as part of the input attributes.

In [56]:
tarList = ['Gdc', 'P1', 'P2', 'Z1']
varList = ['Gain', 'PeakF', 'PeakVal', 'Freq10', 'Freq50']

varData = mdlData[varList]
tarData = mdlData[tarList]
In [57]:
from keras.models import Sequential
from keras.layers import Dense, Dropout

numVars = len(varList)  # independent variables
numTars = len(tarList)  # output targets
nnetMdl = Sequential()
# input layer
nnetMdl.add(Dense(units=64, activation='relu', input_dim=numVars))

# hidden layers
nnetMdl.add(Dropout(0.3, noise_shape=None, seed=None))
nnetMdl.add(Dense(64, activation = "relu"))
nnetMdl.add(Dropout(0.2, noise_shape=None, seed=None))
          
# output layer
nnetMdl.add(Dense(units=numTars, activation='sigmoid'))
nnetMdl.compile(loss='mean_squared_error', optimizer='adam')

# Provide some info
#from keras.utils import plot_model
#plot_model(nnetMdl, to_file= workDir + 'model.png')
nnetMdl.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_13 (Dense)             (None, 64)                384       
_________________________________________________________________
dropout_9 (Dropout)          (None, 64)                0         
_________________________________________________________________
dense_14 (Dense)             (None, 64)                4160      
_________________________________________________________________
dropout_10 (Dropout)         (None, 64)                0         
_________________________________________________________________
dense_15 (Dense)             (None, 4)                 260       
=================================================================
Total params: 4,804
Trainable params: 4,804
Non-trainable params: 0
_________________________________________________________________
In [58]:
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Prepare Training (tran) and Validation (test) dataset
varTran, varTest, tarTran, tarTest = train_test_split(varData, tarData, test_size=0.2)

# scale the data
from sklearn import preprocessing
varScal = preprocessing.MinMaxScaler()
varTran = varScal.fit_transform(varTran)
varTest = varScal.transform(varTest)

tarScal = preprocessing.MinMaxScaler()
tarTran = tarScal.fit_transform(tarTran)
In [59]:
# model fit
hist = nnetMdl.fit(varTran, tarTran, epochs=100, batch_size=1000, validation_split=0.1)
tarTemp = nnetMdl.predict(varTest, batch_size=1000)
#predict = tarScal.inverse_transform(tarTemp)
#resRMSE = np.sqrt(mean_squared_error(tarTest, predict))
resRMSE = np.sqrt(mean_squared_error(tarScal.transform(tarTest), tarTemp))
resRMSE
Train on 29943 samples, validate on 3327 samples
Epoch 1/100
29943/29943 [==============================] - 0s 15us/step - loss: 0.0800 - val_loss: 0.0638
Epoch 2/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0578 - val_loss: 0.0457
... (epochs 3-99 elided; loss decreases steadily from 0.0458 to 0.0085, val_loss from 0.0380 to 0.0035) ...
Epoch 100/100
29943/29943 [==============================] - 0s 3us/step - loss: 0.0084 - val_loss: 0.0034
Out[59]:
0.0589564154176633
In [60]:
# plot history
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper right')
plt.show()
In [61]:
# Separate Keras' architecture and synapse weights for later C++ conversion
from keras.models import model_from_json
# serialize model to JSON
nnetMdl_json = nnetMdl.to_json()
with open("COM_nnetMdl_Rev.json", "w") as json_file:
    json_file.write(nnetMdl_json)
# serialize weights to HDF5
nnetMdl.save_weights("COM_nnetMdl_W_Rev.h5")

# save model and architecture to single file
nnetMdl.save(workDir + "COM_nnetMdl_Rev.h5")
print("Saved model to disk")

# also save scaler
from sklearn.externals import joblib
joblib.dump(varScal, workDir + 'Rev_VarScaler.save') 
joblib.dump(tarScal, workDir + 'Rev_TarScaler.save') 
Saved model to disk
Out[61]:
['C:/Temp/WinProj/CTLEMdl/wsp/Rev_TarScaler.save']
In [62]:
# generate prediction
predict = tarScal.inverse_transform(tarTemp)
allData = np.concatenate([varTest, tarTest, predict], axis = 1)
allData.shape
# flatten the header so that each column gets its own name ('_prd' marks predictions)
headLst = varList + tarList + [t + '_prd' for t in tarList]
headStr = ','.join(headLst)
np.savetxt(workDir + 'COMCtleIOP_Rev.csv', allData, delimiter=',', header=headStr)
In [63]:
# plot Gdc
begIndx = 100
endIndx = 150
indxAry = np.arange(0, len(varTest), 1)
plt.scatter(indxAry[begIndx:endIndx], tarTest.iloc[:,0][begIndx:endIndx])
plt.scatter(indxAry[begIndx:endIndx], predict[:,0][begIndx:endIndx])
Out[63]:
<matplotlib.collections.PathCollection at 0x242ccefe1d0>
In [64]:
# Plot P1
plt.scatter(indxAry[begIndx:endIndx], tarTest.iloc[:,1][begIndx:endIndx])
plt.scatter(indxAry[begIndx:endIndx], predict[:,1][begIndx:endIndx])
Out[64]:
<matplotlib.collections.PathCollection at 0x242ccc7c470>
In [65]:
# Plot P2
plt.scatter(indxAry[begIndx:endIndx], tarTest.iloc[:,2][begIndx:endIndx])
plt.scatter(indxAry[begIndx:endIndx], predict[:,2][begIndx:endIndx])
Out[65]:
<matplotlib.collections.PathCollection at 0x242d6f15dd8>
In [66]:
# Plot Z1
plt.scatter(indxAry[begIndx:endIndx], tarTest.iloc[:,3][begIndx:endIndx])
plt.scatter(indxAry[begIndx:endIndx], predict[:,3][begIndx:endIndx])
Out[66]:
<matplotlib.collections.PathCollection at 0x242ccbaa978>

It seems this "reversed" neural network also works reasonably well. We will fine-tune it further later on.

Deployment:

Now that we have the trained model in Keras' .h5 format, we can translate it into corresponding C++ code using Keras2Cpp:

Keras2Cpp

Its github repository is here: Keras2Cpp

The resulting file can be compiled together with keras_model.cc and keras_model.h into our AMI library.
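Conceptually, such a converter walks the saved model's layers and dumps their weight tensors into a plain format that C++ code can parse. The sketch below shows that idea with the stock Keras API; the output file name is ours and this is not Keras2Cpp's actual dump script:

from keras.models import load_model

nnetMdl = load_model('C:/Temp/WinProj/CTLEMdl/wsp/COM_nnetMdl_Rev.h5')
with open('COM_nnetMdl_Rev.txt', 'w') as fout:
    for layer in nnetMdl.layers:
        for wgtAry in layer.get_weights():   # [kernel, bias] for Dense layers
            fout.write('%s %s\n' % (layer.name, 'x'.join(map(str, wgtAry.shape))))
            fout.write(' '.join('%.9e' % v for v in wgtAry.flatten()) + '\n')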

Conclusion:

In this post/notebook, we explored a flow for creating a neural-network-based model for CTLE parameter prediction, using standard data science techniques. The resulting Keras model is then converted into C++ code for implementation in our IBIS-AMI library. With this performance-based CTLE model, our users can run channel simulations before committing to an actual silicon design.

IBIS-AMI: An end-to-end AMI modeling flow

In a previous post, I mentioned the "IBIS cook-book" as a good reference for the analog portion of buffer modeling. Unfortunately, when it comes to the equalization part, i.e. AMI, there is no similar counterpart AFAIK. For AMI modeling, the EQ algorithms need to be realized as spec-compliant APIs written in the C language. These functions then need to be compiled into a dynamic library: either a dynamic-link library (.dll on Windows) or a shared object (.so on Linux-like systems). Different compilers and build tools have different ways of creating such files. So it's fair to say that many of these aspects are actually in the computer science/programming domain, outside the electrical or modeling scope. It is unlikely that a single document could detail all these processes step-by-step.

In this post, instead of writing about those "programming" details, I would like to give a high-level overview of the different steps of the AMI modeling process... from end to end. Briefly, they can be arranged in the following order of execution:

  1. Analog modeling
  2. Prepare collateral
  3. Define architecture
  4. Create models
  5. Model validation
  6. Channel correlation
  7. Documentation

The following sections describe each part in detail.

Analog modeling:

Believe it or not, the first step of AMI modeling is to create a proper IBIS model... i.e. its analog portion. This is particularly true if the circuit being modeled is a TX. A TX AMI model equalizes signals that include its own analog buffer's effect as measured at the TX pad. So if there is no channel (pass-through) and the buffer is under nominal loading conditions, the analog response of the TX is the signal to be equalized. That is to say, without knowing what will be equalized (i.e. what the model's analog behavior is), one can't calculate the TX AMI model's EQ parameters.

Take the plot above as an example. This is an FFE EQ circuit. The flat lines indicated by the two yellow arrows are different de-emphasis settings and are thus controlled by AMI. However, the rising/falling slew rates, wave shape and dc levels, etc., as circled in red, are all analog behaviors. Thus an accurate IBIS model must be created first to establish the baselines for equalization. Recently, BIRD 194 has been proposed to use a Touchstone file in lieu of an IBIS model... still, the analog model must be there.

For an RX circuit, it may be easier, as an input buffer is usually just an ESD clamp or terminator. Thus it doesn't take much effort to create the IBIS model. Interested readers may see my previous posts on various IBIS modeling topics.

Prepare collateral:

AMI’s data can be obtained from different sources: circuit simulation, lab/silicon measurement or data sheet. For simulation case, simulation must be done and the resulting waveform’s performance needs to be extracted. These values will serve as a “design targets” based on whitch AMI model’s parameters are being tuned.

For example, this is a typical TX waveform and measured data:

Various curves have been "lined up" for easy post-processing. Using our VPro, we batch-measured the values at 5.3 ns for the different curves and created a table:

Similarly, data collected from lab measurement needs to be quantified. This may have to be done manually and can be labor-intensive, as noise is usually present:

Some circuits' responses are in the frequency domain. In this case, various points (DC, the fundamental frequency, 2X the fundamental, etc.) need to be measured, as shown above.

If it’s from data sheet, then the values are already there yet there may be different ways to realize such performance. For example, equations of different zeros and poles locations may all have same DC gain or gain at particular frequencies, so which one to pick may depending on other factors.

Define architecture:

Based on the collateral and the data sheet, the modeler needs to determine how the AMI model will be built. Usually it should reflect the IC's design functions, so there is not much ambiguity here. For example, if the RX circuit has DFE/CDR functions, then the AMI model must also contain such modules. On the other hand, some data may be represented in different ways, and proper judgment needs to be made. Take this waveform as an example:

It’s already very obvious that it has a FFE with one post-tap. However, since the analog behavior needs to be represented by an IBIS model, then one needs to decide how these different behaviors, boxed in different colors, should be modeled. They can be constructed with several different IBIS models or a single IBIS model yet with some “scaling” block included so that IBIS of similar wave shapes can be squeezed or stretched. For a repeater, oftentimes people only care about what goes into and what comes out of this AMI model. The abilities to “probe” signals between a repeater’s RX and TX may be limited by the capabilities of simulator used. As a result, a modeler may have freedom determining which functions go into Rx and which go to Tx. In some cases, same model yet with different architecture needs to be created to meet different usage scenarios. An example has been discussed in our previous post [HERE]

Create models:

Once the architecture is defined, the next step is the actual C/C++ implementation. This is where the programming part starts. Ideally, building blocks from previous projects are already there, or will be created as modules so that they can be reused in the future. Multiple instances of the same model may be loaded together in some cases, so the usage of "static" variables or functions needs to be considered very carefully. Good programming practice comes into play here. I have seen models that only work with a certain bit-rate and 32 samples per UI. That indicates the model is "hard-coded"... it has no code to up-sample or down-sample the data based on the sampling interval passed in through the API function (a sketch of such resampling follows below). Accompanying the model's C code are unit testing, source revision control, compilation and dependency checks, etc. The last one is particularly important on Linux: if your model relies on external libraries that are not linked statically, the same model that runs fine on the developer's machine will not even pass the golden checker at the user's end... because the library is not available there. Typically one will need to prepare several machines, virtual or not, which are "fresh" from OS installation and run the oldest "distros" one is willing to support. All of these are typical software development processes applied to the AMI modeling scope.
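As an illustration of the resampling point above: a model can re-grid the incoming waveform onto its own fixed samples-per-UI grid using the sample interval and bit time passed in through the API. A minimal Python equivalent of that C-side logic (function and variable names are ours):

import numpy as np

def resample_to_ui_grid(waveAry, sampleInterval, bitTime, samplesPerUI=32):
    # re-grid the waveform so the algorithm can run at its native rate,
    # regardless of the sampling interval the simulator passed in
    timeIn  = np.arange(len(waveAry)) * sampleInterval
    stepOut = bitTime / samplesPerUI
    timeOut = np.arange(0.0, timeIn[-1], stepOut)
    return np.interp(timeOut, timeIn, waveAry)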

After the binary .dll/.so files are generated, the next step is to assemble a proper .ami file. Depending on the parameter types (integers, values, corners, etc.), different flavors of syntax are available for creating such a file. In addition, different EDA simulators present the parameter selections to their end users differently. So one may need to choose the syntax carefully so that parameter values will always be selected properly in the targeted simulators. For example, if one has already selected the TYP/MIN/MAX corner for the IBIS model, he/she should not have to do so again for the AMI part. It doesn't make sense at all for a MIN AMI model to be used with a MAX-corner IBIS model... the corners should be "synchronized".

Once the model is ready, the next step is to tune the parameters so that each of the performance targets is matched. Some interfaces, such as PCIe, have pre-defined FFE tap weights, so there is no ambiguity. In most cases, though, one needs to find the parameter values that match measured or simulated performance. Such a task is very tedious and error-prone when done manually, and a process like our "AutoTune" comes in very handy:

Basically, our tool lets the user specify a matching target, and it uses a bisection algorithm to find the tap values. Hundreds of cases can be "tuned" in a matter of minutes. In other cases, a grid search may be needed.
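To illustrate the bisection idea (this is not AutoTune's actual code): for a simple one-post-tap FFE with normalized taps, the de-emphasis level is a monotonic function of the tap magnitude, so the tap matching a measured de-emphasis target can be bisected as below.

import math

def deemphasis_db(postTap):
    # 2-tap FFE with c0 = 1 - postTap, c1 = -postTap (normalized so the peak is 1):
    # de-emphasis is the steady-state level (1 - 2*postTap) relative to the peak
    return 20.0 * math.log10(1.0 - 2.0 * postTap)

def solve_post_tap(targetDb, tol=1e-6):
    lo, hi = 0.0, 0.499                      # keep 1 - 2*postTap positive
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if deemphasis_db(mid) > targetDb:    # not enough de-emphasis yet
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(solve_post_tap(-3.5))                  # tap for -3.5 dB de-emphasis, ~0.166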

Model validation:

Just like traditional IBIS, the first step of model validation is to run it through golden checker. However, one needs to do so on different platforms:

The golden checker didn’t start checking the included AMI binary models until quite recently. Basically it loads the .ibs file, identifies models with AMI functions, then check the .ami file syntax. Finally, the checker will load the associated .dll/.so files. Due to the fact that different OS platform loads binary files differently, that means certain models (e.g. .dll) can only be checked on associated platform (e.g. Windows). That’s why one needs to perform the same check on different platforms to make sure they are all successful. Library dependencies or platform issues can be identified quickly here. However, the golden checker will not drive the binary file. So the functional checks described in next paragraph will be next step.

Typically, an AMI model has several parameters. To validate a model thoroughly, all combinations of these parameter values need to be exercised. We can "parameterize" the settings in a .ami file like below:

Here, patterns like %VARIABLE_NAME% are used to create a .ami template. Our SPIMPro can then generate all combinations of possible parameter values as a table; there can easily be hundreds or even thousands of cases. Similar to the systematic approach mentioned in my previous post, we then generate the corresponding .ami files for all these cases, so there will be hundreds or thousands of them! The next step is to "drive" them and obtain each single model's performance. Most EDA tools either have no automation capability to do this in batch mode or require further programming. In our case, our SPIMPro and SPIVPro have built-in functions to support this sweeping flow in batch mode, all in the same environment. Our SPISimAMI model driver is used extensively here! Once each case's simulation is done, one again needs to extract the performance, compare it with the values obtained from the raw data, and make a delta comparison.
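A sketch of that template-substitution step (the sweep values and file names below are hypothetical; SPIMPro performs this internally):

import itertools

amiTemplate = open('COM_CTLE_template.ami').read()   # contains %Gdc%, %Z1%, ... tokens
sweepVals = {
    'Gdc': [0.25, 0.50, 0.75],
    'Z1':  [2.0e9, 5.0e9, 8.0e9],
}
names = list(sweepVals)
for caseId, combo in enumerate(itertools.product(*(sweepVals[n] for n in names))):
    amiText = amiTemplate
    for name, value in zip(names, combo):
        amiText = amiText.replace('%' + name + '%', repr(value))
    with open('case_%04d.ami' % caseId, 'w') as fout:
        fout.write(amiText)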

A scatter plot like the one below will quickly indicate which AMI parameter combinations do not work properly in the newly created AMI model. In that case, one needs to go back to the modeling stage, check the code, then run this sweep validation all over again.

Channel correlation:

The model validation mentioned in the previous section covers only a single model, not the full channel. So one still needs to pick several full-channel set-ups to fully qualify the models. A caveat of channel analysis is that it only shows time-domain data, regardless of whether the flow is "statistical" or "bit-by-bit"; that means it is often not easy to qualify a frequency-domain component such as a CTLE. In this case, a corresponding s-parameter whose Sdd12 (differential input to differential output) matches the CTLE AMI settings can be used for an apples-to-apples comparison, like the schematic shown below:

Another required step here is to test with different EDA vendors' tools. This presents another challenge because channel simulators are usually pricey, and it's rarely the case that one company has all of them (e.g. ADS, HyperLynx, SystemSI, QCD and HSpice, etc.). Different EDA tools do invoke AMI models differently... for example, some simulators pass an absolute path for the DLL_Path reserved parameter while others send only a relative path. Without going through this step, it's difficult to predict how a model will behave in different tools.

Documentation:

Once all these are done, the final step is, of course, to create an AMI model usage guide together with some sample set-ups. Usually it starts with the IBIS model's pin-model associations and some performance charts, followed by descriptions of the different AMI parameters' meanings and their mapping to the data sheet. One may also add extra info, such as alternatives if the user's EDA tool does not support newer keywords such as Dll_Path, Dll_ID or Supporting_Files. Waveform comparisons between the original data (silicon measurement vs. AMI results) should also be included. Finally, it is beneficial to provide instructions on how an example channel using this model can be set up in popular EDA tools such as ADS, HyperLynx or HSpice.

Summary:

There you have it... the end-to-end AMI modeling process without touching the programming details! Both the AMI API and the programming languages are moving targets, as they both evolve with time. Thus one must keep honing the skills and techniques involved to be able to deliver good-quality models efficiently and quickly. This is a task which requires discipline and experience in several different domains. After sharing all this with you, do you still want to do it yourself? 🙂 Happy modeling!

IBIS Model: IBIS AMI modeling flow

In the previous several posts, we talked about IBIS modeling of the analog buffer front end. In today's post, we are going to give an overview of the modeling of the algorithmic portion which provides equalization in both TX and RX. These algorithmic blocks are usually modeled with IBIS-AMI, the "Algorithmic Modeling Interface" portion of the IBIS spec.

IBIS AMI’s scope:

IBISAMI_Block


The figure above represents the channel from end to end. The passive channel is composed of various passive elements, such as PCB traces modeled with transmission lines, and vias and connectors modeled with s-parameters. The pink blocks are the analog buffers which act as front ends interfacing with the channel directly; they are usually modeled in IBIS. The TX portion before the front end and the RX portion after it are equalization circuits, and these can be modeled with IBIS-AMI. So the AMI model actually works with IBIS to complete both the TX and RX paths from latch to latch, instead of pad to pad. In an IBIS model which uses AMI, one will find a statement like the one below, which points to the AMI parameter file (with .ami extension), the compiled portion in either .dll (dynamic-link library, on Windows) or .so (shared object, on Linux), and the IDE with which these .dll/.so files were produced.

IBISAMI_Include

Why IBIS AMI:

The main quantitative measures of signal integrity are the eye height and eye width of an eye plot. From the eye plot, the bit-error rate (BER) or other plots like the bathtub curve can be derived. The eye plot is formed by folding many bits of the time-domain waveform representing the responses of the bit sequences. When doing end-to-end channel analysis with a transistor buffer or a traditional IBIS model, one has to go through an actual simulation to obtain such a waveform. This is very time-consuming and can only acquire a very limited number of bits. In order to speed up the process, waveform synthesis is desired, which brings the need for a new spec and the invention of AMI. With this performance requirement among the technical considerations, the following list explains why IBIS-AMI is needed:

  • Industry standard: There are several high-level modeling languages, like MATLAB or SystemVue, which can achieve the same purpose within their own environments. However, this limits portability and the choice of simulators. AMI bridges the gap by defining an open interface with which all IC and EDA vendors can interact to exchange data for system analysis.
  • Performance: As explained at the beginning of this section, a very low BER requires data from millions of bits. It's desirable to obtain this data in seconds, not days or weeks.
  • Flexibility: Before IBIS-AMI, the IBIS committee attempted to address equalization needs via now somewhat outdated keywords like driver schedule and bus hold. The time it takes to revise these keywords in the IBIS committee is just too long to match the pace of technology. Even in the IBIS V4.0 era, other languages like Verilog-A were introduced, yet when millions of bits and sensitive IP designs (as explained below) are under consideration, compiled libraries in binary form (rather than interpreted, like Verilog-A) meet the demands much better.
  • IP protection: EQ design is usually considered very sensitive IP by an IC vendor. Thus a model which can't be reverse-engineered and gives better control over which design parameters to expose, like one built to the AMI spec, is desired.

What is in IBIS AMI:

Now that we know the scope and application of AMI, let's take a look at its components in more detail. It depends on your perspective:

  • For an IBIS AMI model developer: IBIS-AMI is an interface realized in three functions, for which you must provide implementations in whatever language, compiled into a .dll or .so. These three functions are Init, GetWave and Close:

IBISAMI_Header

    • AMI_Init: This function must be implemented. Since a lengthy bit sequence will be broken into small chunks and analyzed accordingly, some data structures may be reused many times. In such a usage scenario, the common "initialization" should be done in this function; it is somewhat like the "constructor" of an object-oriented language. When a direct pulse is used to synthesize the BER under the LTI (linear, time-invariant) assumption, the computation is done in this function and an implementation of the GetWave function is not needed.
    • AMI_Close: This function must be implemented. It acts as a "destructor" to clean up and release the allocated memory back to the OS.
    • AMI_GetWave: This function is optional. If the channel is non-LTI, direct synthesis of the BER is not possible. In that case, the waveform of a lengthy bit sequence is needed, and the GetWave function provides the mechanism to compute and convert the input bit sequence into its corresponding response.
  • For an IBIS AMI model user: three different files are included in an AMI model release:
    • IBIS file: This is the analog front end of the buffer. An "[Algorithmic Model]" keyword must be used to point to the next two files, .ami and .so/.dll, for the algorithmic part of the model.
    • .ami file: This is a plain-text file which lists the parameters the model exposes, their variable types and their ranges. For example, a 4-tap EQ with weights for one pre-cursor and three post-cursors is defined below. Also specified in the .ami file are whether the GetWave function is implemented in the compiled .dll/.so and the usage modes. When a simulator reads the .ami file through the pointer from the .ibs file, it knows how to interact with the .dll/.so files for system analysis.

IBISAMI_AMIParam

.dll/.so:

      These are the compiled portion of the model. Note that .dll and .so files also depend on whether your OS is 32-bit or 64-bit: to run such AMI models on a 64-bit machine, one must have a 64-bit .dll or .so as well.
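A quick platform sanity check of the kind described later (the golden checker's binary loading) can be sketched with Python's ctypes: load the compiled model and confirm that the three API entry points are exported. The file name below is hypothetical; the actual C prototypes are defined in the IBIS spec:

import ctypes

amiLib = ctypes.CDLL('./tx_model.so')        # use ctypes.WinDLL(...) for a .dll on Windows
for funcName in ('AMI_Init', 'AMI_GetWave', 'AMI_Close'):
    try:
        getattr(amiLib, funcName)            # raises AttributeError if not exported
        print(funcName, 'exported')
    except AttributeError:
        print(funcName, 'missing (AMI_GetWave is optional)')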

IBIS AMI usage scenarios:

An AMI model developer can't foresee how the developed models will be used. However, the model's implementation itself imposes such limitations. There are two modes of AMI operation: statistical and empirical. If the "GetWave" function is not implemented, the model will only be able to run in the "statistical flow", meaning the passive channel must be LTI. On the other hand, if "GetWave" is implemented, then the model may also run in the "empirical flow", which allows a non-LTI channel. The figure below gives an overview of the two modes of AMI operation:

IBISAMI_Work

  • Statistical flow: In this flow, the channel is LTI. That means waveforms for different bit sequences may be constructed from a single bit's impulse response using superposition (see the sketch after this list). So, given the TD impulse response of the passive channel, the TX and RX models can perform convolution on this single pulse. Once the simulator receives the result from the RX model, it can perform peak-distortion-analysis-style superposition to get the BER or eye directly.
  • Empirical flow: In this flow, the channel is non-LTI and no waveform superposition should be done. Thus a digital bit sequence must be formed. This sequence may or may not be broken into smaller chunks, then convolved with the passive channel portion's impulse response. The results are then passed through the TX's and RX's GetWave functions to form the actual TD waveform of the full channel. The simulator then folds the waveform to compute the BER and other parameters.
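The superposition behind the statistical flow can be sketched in a few lines: given a channel's single-bit pulse response, the waveform for any bit pattern is a sum of shifted copies (illustrative names; NRZ bits mapped to ±1):

import numpy as np

def synthesize_waveform(pulseResp, bits, samplesPerUI):
    # LTI superposition: shift the single-bit pulse response by one UI per bit and sum
    waveAry = np.zeros(len(pulseResp) + samplesPerUI * len(bits))
    for i, b in enumerate(bits):
        offset = i * samplesPerUI
        waveAry[offset:offset + len(pulseResp)] += (2 * b - 1) * pulseResp
    return waveAry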

In reality, the TX and RX AMI models may come from different vendors, so their combination also sets limits on how the models can be used. Interested users may see Section 10 of the IBIS spec for detailed operational explanations.

This post gives a brief overview of IBIS-AMI. The AMI modeling flow imposes a challenge on model developers: they usually need to know the inside details of the EQ design rather than taking a black-box modeling approach. Besides, the extracted EQ algorithms, along with their parameters, must be coded in at least C/C++ in order to compile and generate the required .dll/.so files. Lastly, the flow is more or less EQ-implementation dependent (depending on the high-level language in which the EQ was designed), and the model validation also requires deeper knowledge of signal integrity. It's common for EDA and SI expertise like ours here at SPISim to work closely with IC vendors to deliver such models with good quality. In future posts, we may come back and cover these steps and topics in more detail.