IBIS-AMI: The black box magic

In the AMI modeling domain, several "black box" magics, or techniques, are very useful. They exist to make the modeling process more streamlined yet flexible. In this blog post, I will share some experiences and thoughts we gathered during the development of our SPISimAMI flow. Modeling examples from some major companies will also be reviewed.

Why the black box magic?

In contrast to other system models, such as a T-line's RLGC data, S-parameters or even a traditional IBIS model, the AMI model itself is a black box. AMI models are both platform and OS dependent. They are compiled binaries in the form of shared libraries: ".dll" (dynamic-link library) on windows and ".so" (shared object) on linux-like systems. The other part of the AMI model, the .ami file, is indeed plain text holding model settings. However, it needs to work closely with the associated .dll/.so AMI model and can't be changed arbitrarily.

For a model developer, even one with C/C++ coding and .dll/.so compilation skills, the whole process is still very tedious and error prone. For example, the same compilation and testing need to be done twice on windows due to the 32/64-bit difference. It's even worse on linux. Different machines or OSes need to be tested separately, because not only are there different linux "distros", but different environments or IT set-ups may also cause discrepancies. As an example, "boost" and some other libraries like GSL are very common and widely used in C++ engineering/scientific computing development. However, they may not be included in every IT-built, production-oriented environment. As a result, AMI models running perfectly fine on the developer's system may fail to load on others. To be on the safe side, one usually needs to link these "non-essential" libraries statically into the .so file, strip symbols to reduce size, and then test on different machines (virtual or not) to assure maximum compatibility. These details may explain why AMI modeling fees have been very high in the past and the field somewhat monopolized by very few companies.

In previous posts, we mentioned the similarities we see between a circuit simulator and an AMI model. While a purpose-built circuit simulator (like those for cache or memory circuits etc.) may run very fast and efficiently, a general-purpose (e.g. MNA-based) simulator is usually sufficient for most design work. In these simulators, primitive device models such as R/L/C/V/I are built in, and even more complicated ones such as MOSFET, T-line and IBIS are also included and configurable via external files (.snp, .ibs and .tab files). Such a simulator, compiled to native format to maximize performance, is also platform and OS dependent. However, it does not need to be design specific. The configurable aspects are left in the netlist file... similar to the plain-text .ami file. When more sensitive information needs to be provided to the simulator, such as a process file or design IP, it can be encrypted, as "encrypted HSPICE" does.

From this point of view, we can see that "magic", or modeling techniques, can happen in both places, i.e. the .dll and .ami files.

Black box magic in binary .dll/.so files:

While there are five AMI API functions defined in the spec, the most important ones are AMI_Init and AMI_GetWave. From their signatures, shown below, we can see that the same pointer (impulse_matrix for AMI_Init and wave for AMI_GetWave) is used for both input and output. That is, data are passed in, the AMI function computes the response based on this black box's behavior, then fills the output response back into the same pointer to return to the simulation platform on top. Schematically, this looks more or less like the picture below.
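For reference, these are the two signatures as defined in the IBIS spec (argument names follow the spec; the closing note is ours):

    long AMI_Init(double *impulse_matrix, long row_size, long aggressors,
                  double sample_interval, double bit_time,
                  char *AMI_parameters_in, char **AMI_parameters_out,
                  void **AMI_memory_handle, char **msg);

    long AMI_GetWave(double *wave, long wave_size, double *clock_times,
                     char **AMI_parameters_out, void *AMI_memory);

Note how impulse_matrix and wave are plain (non-const) pointers: the model overwrites their contents in place.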

The implicit assumption in AMI channel simulation is that the input impedance of the AMI model (the black box above, e.g. an Rx model) is infinitely high, and the same holds for the output stage of an AMI Tx circuit. So we can imagine two voltage-controlled voltage sources (VCVS) with gain = 1 placed at the input and output ends, with the AMI model sitting in the middle to transfer the circuit block's response between stages.

The content inside the "black box" is not visible to outsiders, so a model developer can write spaghetti C/C++ code that mangles everything together and produces a module used only once. Or he/she can do a better job by architecting it in a modular way like the one below:

If we imagine this as a spice netlist, it will be something like a .subckt:

Parameters to the different modules are apparent and omitted here. Constructing an AMI model this way enables modules to be reused more easily. With that said, the example above still carries two assumptions. The first is that only three stages (i.e. CTLE, DFE, CDR) are used and their order is fixed. What if a user wants an AGC up front or an LPF in between? Do we need to rewrite the code and compile/test again? The second assumption is that these modules are cascaded stage by stage (mostly true for SERDES designs) and each block has only one input and one output (not necessarily true, even for SERDES). These two assumptions do not necessarily hold for other designs, so we believe they should not be "hard coded" and compiled into a rigid AMI model. A configurable cascade is sketched below.
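To make this concrete, here is a minimal C++ sketch of such a configurable cascade (all class and function names are hypothetical, for illustration only, and each stage's actual processing is elided):

    #include <memory>
    #include <string>
    #include <vector>

    // Common interface: every stage transforms a waveform in place.
    class Stage {
    public:
        virtual ~Stage() = default;
        virtual void process(std::vector<double> &wave) = 0;
    };

    class CTLE : public Stage {
    public:
        void process(std::vector<double> &wave) override { /* apply CTLE curve */ }
    };
    class DFE : public Stage {
    public:
        void process(std::vector<double> &wave) override { /* slice and feed back */ }
    };
    class CDR : public Stage {
    public:
        void process(std::vector<double> &wave) override { /* recover clock */ }
    };

    // Build the cascade from stage names listed in the .ami file, e.g.
    // "CTLE DFE CDR". Order and count are data, not code, so adding an AGC
    // up front or an LPF in between needs no recompilation of the model.
    std::vector<std::unique_ptr<Stage>> buildCascade(const std::vector<std::string> &names) {
        std::vector<std::unique_ptr<Stage>> stages;
        for (const auto &n : names) {
            if      (n == "CTLE") stages.push_back(std::make_unique<CTLE>());
            else if (n == "DFE")  stages.push_back(std::make_unique<DFE>());
            else if (n == "CDR")  stages.push_back(std::make_unique<CDR>());
        }
        return stages;
    }

    // Called from AMI_GetWave: run every configured stage over the samples.
    void runCascade(std::vector<std::unique_ptr<Stage>> &stages,
                    std::vector<double> &wave) {
        for (auto &s : stages) s->process(wave);
    }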

In a normal modeling process, one usually needs to translate a circuit or module's response into describable behavior before its C/C++ code can be written. For example, if we know a module is an LPF (low-pass filter), then we only need its frequency response in order to do an IFFT and perform convolution in the time domain. However, some blocks may not have a closed-form formula or an easily describable behavior. Is it possible to embed a "mini circuit simulator" in a block so that its output will be calculated from its input automatically? This is certainly possible. In fact, one can even implement an S-parameter or T-line block to serve as the analog front end of the AMI model (for example, to represent a package model). In SPISim's case, because we have developed an in-house circuit simulator and the associated device models, we simply construct a time-domain PWL source representing the input "impulse_matrix" or "wave" data, run a mini circuit simulation, retrieve the output from the simulator's API (or even the waveform on disk), and hand it back to the channel simulator on top.

As one can see, many techniques or magics can be applied to this .dll/.so black box. Because compiling platform/OS-dependent binaries is no fun, it's our belief and practice that we should compile one comprehensive AMI model and reuse it as much as we can. The configurable part should be put in the .ami file.

Black box magic in plain text .ami file:

An .ami file has settings for both the channel simulator and, more importantly, the .dll/.so models being called. It contains two parts:

The "Reserved_Parameters" section (boxed in green) is for the channel simulator to consume. The keywords used here are defined in the spec and need to be used accordingly. Every spec-compliant channel simulator needs to be able to process the info in this part (in reality, we do see individual companies defining their own keywords due to the aforementioned monopoly situation).

The "Model_Specific" portion (boxed in red), on the other hand, is only for the model to consume. It's not the channel simulator's business to judge or interfere with how these parameters are defined and used. As a result, we can perform many creative techniques here.
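As a minimal, hypothetical illustration (the model name and the Model_Specific parameter below are made up; the reserved keywords are from the spec), an .ami file has roughly this shape:

    (example_model
       (Reserved_Parameters
          (AMI_Version (Usage Info) (Type String) (Value "6.1"))
          (Init_Returns_Impulse (Usage Info) (Type Boolean) (Value True))
          (GetWave_Exists (Usage Info) (Type Boolean) (Value True))
       )
       (Model_Specific
          (TapWeight (Usage In) (Type Float) (Range 0.0 -1.0 1.0))
       )
    )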

We first proposed some of these techniques late last year in previous posts. It's to our pleasant surprise that other companies, such as IBM/GlobalFoundries, have similar thoughts. Please see their presentation at DesignCon this year, linked below (topic time is 10:15 AM):

The AMI_Resolve: A Case Study for 56G PAM4   (http://ibis.org/summits/feb17/)

Simply speaking, one can embed lots of information in the .ami file to convey to the underlying .dll/.so models. Common encryption can be used to protect sensitive information. For example, the reason SPISim can release our AMI models for free to generate and use is that our AMI .dll/.so files require a license setting, and a time stamp and/or data-dependent values are part of the license info. Thus, a model will expire after a certain time, the parameters can be locked against change, and the license info can't be tampered with.

The paper presented by IBM demonstrates the intercommunication between an unresolved .ami file and the AMI_Resolve API function. The "netlist" or "equation" is encrypted and embedded as part of the .ami file. In the IBIS spec, a table format is also supported in the .ami Model_Specific syntax, yet it may be a hassle to produce and use. In practice, we prefer to point to an external file directly, like below:
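For instance, a Model_Specific entry can carry nothing but a file name (the parameter name below is hypothetical):

    (Model_Specific
       (ConfigFile (Usage In) (Type String) (Value "MyDesign_Settings.enc")
          (Description "Encrypted settings decoded by the model at run time"))
    )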

This external file can be encrypted up front and decrypted by the .dll/.so file on the fly. It can be a settings table, a frequency response, an S-parameter or a netlist supporting the "mini-simulator" block within the AMI model. From this point of view, while the .ami file is plain text, it can still act like a black box, and many of the aforementioned techniques to minimize re-coding/re-compilation of the .dll/.so file can be used.

Real world examples & considerations:

While this "black box" magic is intriguing and wonderful, at the end of the day we need to see how well it works in the real world. Let's see two examples:

The first model is from Altera. Not only do they provide AMI models and documentation, they also provide a GUI tool to let users check which combinations of parameters are valid. In this case, they provide just one .ami file with all possible values inside.

The second example is also from Altera but takes a different approach. There are many different .ami files, each supporting a different usage scenario.

In our approach, we let the user assemble a csv table first, with the needed columns providing settings for the different modules. Each row is a pre-defined configuration, so no further check of valid combinations is needed.

With this approach, the customer only needs to change the row index rather than individual settings of different taps or modules. Moreover, the table is encrypted, and the model publisher can provide different sets of configurations to different customers, all with the same set of data and .dll/.so models.

With our approach, the user uses our GUI front end to perform the settings, and the associated .ami file and pre-built .dll/.so files are generated instantly. The .ami file contains minimized settings to make sweeping AMI parameters easier. These minimized .ami settings also make configuration within other EDA tools more convenient. As a result, with the help of the front-end GUI, the "black box" magic resides in both the .ami file and the .dll/.so models.

Check AMI models before use:

It's worth mentioning that the latest golden checker published by IBIS is now both OS and platform dependent. This is because it now also supports embedded .dll/.so file checking at the most basic level. Regardless of whether you are a model developer or a model user, three steps should be performed to validate given or modified AMI models:

  1. Use the IBIS golden checker (IbisChk6) to check the .ibs file. If the ibis file contains an algorithmic model section, the associated .ami and .dll/.so files will also be checked based on the platform. This identifies compatibility issues up front.
  2. Use a test driver to drive the .dll/.so and .ami files. This step is like connecting an ibis model to a test load and running a simulation. A model may pass the golden checker, yet if the test-load waveform is not meaningful, there is no need to put it into a full channel simulation. This step makes sure the model not only loads (has all required API functions defined) but also functions properly. In HSpice and Ansys, this test driver is called "amicheck.exe" and requires the respective license to run. We at SPISim also provide a free checking/driver tool called SPISimAMI.exe for this purpose.
  3. The last step is to put the model into a channel simulator such as HSpice, Cadence's SystemSI, Mentor's HyperLynx or the sort. They provide simulation measurements such as eye, BER and bath-tub plots for further validation. These features will also be in our SPIPro later this year (2017).

IBIS-AMI: An economical yet efficient modeling flow

Preface:

It has often been believed that IBIS-AMI modeling imposes comparably high cost and technical barriers to get started. An AMI modeling engineer certainly has the due diligence to be familiar with and understand the basics of link analysis and the AMI flow. However, the requirements of implementing the models in C/C++, compiling them into .dll/.so libraries, and being able to run on third-party (often expensive) EDA tools with consistent results are often too much to ask for... or at least will lengthen the design cycle. To meet these challenges, several big EDA companies provide top-down flows that generate AMI models directly from architecture codes. In exchange for the "click-button" convenience, the SERDES needs to be designed in that environment first, and the tool's up-front expense also needs to be considered. Furthermore, the long-term costs of the generated AMI models (in terms of model support, maintenance and extensibility) are often ignored. It's also true that even with these top-down flows, compilation for the different platforms (win32, win64, linux etc.) is still inevitable.

We published several free tools recently and presented a paper at the recent IBIS summit regarding AMI modeling. Together with other open-source/free link analysis tools mentioned at the end of this post, we think it's now a good time to consider an alternative AMI modeling flow. The methodology proposed in this post is economical (no front-end tool cost) and efficient (gives the SERDES/AMI developer the most control).

[Note that in this post, we use link tool/simulator interchangeably to represent the application loading IBIS-AMI models and calling their functions.]

Concepts:

Engineers familiar with Spice-like simulators know that we usually only need one platform-specific simulator binary. A schematic netlist is in plain text format and can be used across different platforms (assuming good practices, such as relative model paths and consistent file line endings, are followed). A general-purpose spice simulator is not design specific, so there is no need for different simulators for different circuits or IC designs. This compatibility is achieved by building on simple rules, i.e. KCL, KVL and linear algebra. This way a simulator is decoupled from the implementation details of design specifics.

In the "AMI-modeling" world, are there such simple rules to be found, so that we don't need to compile a new binary just because the design is different? Understood, there are always trade-offs, e.g. the simulation speed of a design-specific binary model versus slower yet convenient iteration during the modeling cycle. But at least we should find a way for modeling engineers to focus more on the basics of the algorithmic blocks and, hopefully, still be able to generate a speedy model at a later stage with minimal extra tasks (C/C++, dll etc.).

An AMI model, in its current scope, is mainly for SERDES applications. These are mostly point-to-point systems, or systems that can be processed stage by stage. We may also think of it this way by looking at the defined AMI-API function prototypes:

AMI_Init and AMI_GetWave are the two main processing routines defined in the API spec. Various arguments are passed in, and the AMI model is responsible for performing the designed algorithmic processing within the model and returning the values in place. By "in place", we mean the impulse_matrix's and wave's contents are modified at the same memory addresses before returning to the calling application... which is usually the simulation platform or circuit simulator.
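A minimal pass-through sketch illustrates this in-place convention (this is not a production model; it simply returns the input unchanged):

    extern "C" long AMI_Init(double *impulse_matrix, long row_size, long aggressors,
                             double sample_interval, double bit_time,
                             char *AMI_parameters_in, char **AMI_parameters_out,
                             void **AMI_memory_handle, char **msg)
    {
        // Pass-through: the "equalized" response equals the input, so
        // impulse_matrix is left untouched at the same memory address.
        static char out[] = "(pass_through_model)";
        *AMI_parameters_out = out;   // parameters echoed back to the link tool
        *AMI_memory_handle  = 0;     // no persistent state in this sketch
        return 1;                    // non-zero return indicates success
    }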

Based on these observations, we can break the couplings between 1) the link tool and the models, and 2) the models and the underlying algorithmic processing. Via this decoupling, an economical yet efficient flow becomes possible.

Decoupling within the model:

First, we want to observe how data is passed from the link tool to the model. In the schematic above, the input to the Tx stage is either the channel's response (LTI mode) or a bit pattern (NLTV). If the Tx is a simple pass-through, the Rx will receive similar information without being affected by the Tx:

Taking AMI_Init as an example, we can achieve this decoupling using SPISim's free SPISimProxy model. SPISimProxy writes the arguments received from the link tool to a plain text file. After the AMI model's processing, it again writes out the processed data to another text file before finally giving control back to the link tool. This way, the data exchanged between the link tool and the model is exposed... even though both are compiled binaries. The two blue boxes above represent such generated text data. The main function of AMI_Init, the purple block in the middle that exists in the model's .dll/.so file as a C function, is to transform the input to the output response.

With this, we can now replace the AMI_Init function with our own... written in matlab, python, perl, java or another language instead of the more demanding C/C++ form. It only needs to interface with the two text files just exposed, with the following operation sequence:

  1. SPISimProxy exposes the calling arguments in a text file, the top blue box;
  2. the user's script reads the text file and performs the necessary processing;
  3. the user's script writes out the data in a similar text format;
  4. SPISimProxy reads the generated text data, forms the arguments and passes them back to the simulator.

Because each of these steps can be customized via the Proxy's .ami settings, a configuration file or even environment variables, AMI developers are now free to use whatever language they prefer, without dealing with any C/C++ if they like. There is also no need for .dll/.so compilation, as SPISimProxy has been pre-compiled to support most platforms.

The example above uses AMI_Init; other API calls, such as AMI_GetWave or AMI_Resolve, can be handled in a similar fashion. Clean-up calls such as AMI_Close or AMI_Resolve_Close are also supported in the SPISimProxy model, so if needed, the AMI modeler can clean up all these file traces at the end.

Part of the arguments passed from the simulator is the model pointer. This model pointer (void*) is supposed to be persistent during the AMI process. The script author may use a file-based persistence mechanism across AMI_Init/AMI_GetWave calls to store constructed data structures, settings etc. By avoiding C/C++-specific pointers or data structures, the process stays language neutral and can support many different languages.

Decoupling from the EDA tool:

The aforementioned process requires an application to drive the proxy model, and thus the modeling scripts. This can be achieved with our free SPISimAMI.exe, again pre-compiled to support most platforms. Its built-in pulse response enables users to model an LTI-based AMI process directly. For bit-by-bit input data, users may use our SPILite or another free simulator, such as our SSolver or NGSpice, to generate the bit sequence, then feed this input in .csv format to the application. SPISimAMI then takes the user's input, forms the proper arguments and sends them to the underlying AMI models... which can be the modeling script currently under development, or an existing IBIS-AMI model for testing or validation purposes. SPISimAMI can also be used to drive an existing AMI model via SPISimProxy so that users may gain insight into this process. In the demo video posted on the SPISimProxy page, a matlab script demonstrates the AMI_Init process.

Test drive with open-source link tool:

Once the given responses work properly with the prototype implemented in modeling scripts, the next step is to run them in a link tool for BER-like analysis. For this purpose, PyBERT may be used, as it also loads IBIS-AMI models, including our SPISimProxy.

Up to this point, there is no front-end cost involved in the AMI modeling process, and a developer only needs to use his/her own favorite language to deal with plain-text input/output. In addition, the debugging and testing of the model prototype can be done with direct command calls instead of multi-step GUI operations/invocations. This not only avoids the license a 3rd-party tool would require, but also ensures an efficient workflow with easily repeatable, consistent results.

Optimization and model release:

There are several possibilities for releasing models from this process:

  • The model publisher may encrypt the scripts if needed, then distribute them as they are. This produces the most accurate results, as they have been validated by the author during the modeling process. The disadvantages include: 1) it's less efficient, as the data exchanged between SPISimProxy and the modeling scripts is file based; 2) the model recipient may need to install run-time interpreters such as perl, python etc. in order to run the encrypted/compiled scripts; and 3) the client also needs to download SPISimProxy from our site, as unlicensed redistribution is strictly prohibited.
  • We can work with the model publisher to provide a specific API to the prototype model such that the IP and accuracy are maintained, yet the performance improves dramatically. We can also remove the unlicensed terms and rebrand the proxy class with your company's name so that you can distribute SPISimProxy together with your model.
  • We can also create the corresponding AMI model in pure C/C++ code so that there is only one model to be released, with the best performance and convenience for the clients.

The modeling flow suggested above is not proprietary and can also be implemented within a corporation or a modeling team. We believe that by liberating modeling engineers from these unrelated AMI modeling chores, they will be able to focus more on the core business logic, i.e. the algorithmic part, and deliver the best quality models for the industry's progress. Those not-directly-related tasks can be left to other EDA professionals (* cough *) if needed.

IBIS-AMI: Analysis using a proxy class

Preface:

A hot topic in recent years regarding IBIS-AMI is co-optimization through a back channel. During this process, the Tx, the Rx or both participate in selecting an optimized set of parameters in order to have the best link performance. The IBIS spec as of today does not support this type of communication or data exchange between models, so new protocols have been proposed. In addition, there are also occasional needs to support 1. simulators or AMI models from other vendors, 2. models implemented against different AMI-API spec versions, or 3. models written in different languages such as matlab. In most of these cases, only binaries rather than source code are available. To overcome these obstacles and bridge the gap, a proxy-class-based approach is proposed.

This post was written in preparation for the presentation of the same topic at the IBIS Summit during DesignCon 2017 on Feb. 3rd. The audio recording of the presentation and the slides are available at the bottom of the post.

Roles in an AMI flow:

Readers of the latest IBIS spec will find that Chapter 10, Algorithmic Modeling, is all about defining two roles in an AMI flow and how they communicate through the API. These two roles are 1) the EDA tool (usually a circuit simulator) and 2) one or more AMI models being simulated. The AMI API is defined such that the EDA tool knows where the .ami file is and which functions are available in the model .dll and can be loaded. In the current spec, no interaction between different models is possible.

Take the .ami file above as an example: the EDA tool is responsible for parsing the ami file and doing type and range checks first. It then needs to figure out that an "AMI_GetWave" function is implemented in the model, since the parameter "GetWave_Exists" is set to true. In addition, the waveform processing should happen in the "AMI_GetWave" call rather than AMI_Init, because "Use_Init_Output" is set to false. Beyond this, the only thing the EDA tool needs to do regarding the "Model_Specific" keyword section is to parse the data and form key-value pairs before passing them into the model. The simulator itself does not need to know what they are or how they will be used.

When a breakpoint is set at the function call inside the model's .dll/.so, one finds that the parameters already form pairs, and the model can now proceed to consume these settings. Note that this breakpoint will only be hit if the APIs are implemented correctly (and thus can be recognized by the simulator and loaded according to the spec).

With the introduction of the back-channel interface (BCI), new AMI APIs are being defined to support communications between models themselves:

Interested readers can find BIRD147.1_Draft on the IBIS website for more details. Simply speaking, models can pass data back and forth through this protocol via files. Since these are reserved parameters, the BCI parameters should also be supported by the EDA tool.

Readers keeping track of this topic probably know that for the past several years, several EDA companies have been arguing about how these back-channel communications should be implemented. Some of the main considerations include:

  • In cases where legacy models can't be re-compiled, how are they going to participate in the optimization process?
  • Should a simulator perform solution exploration actively, or should that be left to the models themselves?
  • Should the communication protocol be only between the models involved, or should it also be part of the defined spec?

On top of these, we would also argue that:

  • As a simulator/EDA tool developer, how can I test the AMI models at hand for co-optimization?
  • As a model developer, my EDA tool may not be up to date. How can I test co-optimization if I want to add such support to my model?
  • As a model/EDA tool user, can I inspect what's going on between the simulator and the models? I only get post-processed results from the EDA tool, and they are not correct!

For these purposes, we think a proxy class can be used in the AMI analysis flow.

A proxy class:

The proxy pattern's UML is shown above. The dotted line at the top represents a "has a" relationship, while the solid vertical arrow represents an "is a" relationship. This UML diagram can be read as: a client has one or more subjects which implement an interface function called "DoAction". When the proxy class's DoAction is called, it delegates to the RealSubject's "DoAction" function.

Put in the IBIS-AMI context, the statement above can be read as: an EDA tool can load one or more .dll(s)/.so(s) which implement the AMI-API interface, such as AMI_GetWave etc. When the EDA tool calls a proxy's AMI-APIs, the proxy object delegates the calls to the RealSubject's (i.e. the real model's) AMI-APIs for data processing.

So this proxy class is a "wrapper" around the real model. It's loaded by the EDA tool, yet it itself also loads the real model's .dll(s)/.so(s). It's a "man in the middle" and can be used to intercept and modify input/output data, or even perform customized analysis flows beyond what a simulator can do.

In the example above, the proxy's AMI_Init function loads the model's dll, locates the model's AMI_Init function and then delegates the call.
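A minimal sketch of such a delegating AMI_Init (POSIX loading calls shown; on windows, LoadLibrary/GetProcAddress play the same roles; the real model's path here is hypothetical):

    #include <dlfcn.h>

    typedef long (*AmiInitFn)(double*, long, long, double, double,
                              char*, char**, void**, char**);

    static void     *realModel = 0;
    static AmiInitFn realInit  = 0;

    extern "C" long AMI_Init(double *impulse_matrix, long row_size, long aggressors,
                             double sample_interval, double bit_time,
                             char *AMI_parameters_in, char **AMI_parameters_out,
                             void **AMI_memory_handle, char **msg)
    {
        // Load the real model once; the path would normally come from the
        // proxy's own .ami settings rather than being hard coded like this.
        if (!realModel) {
            realModel = dlopen("./real_model.so", RTLD_NOW);
            if (!realModel) return 0;
            realInit = (AmiInitFn)dlsym(realModel, "AMI_Init");
            if (!realInit) return 0;
        }
        // ...intercept point: dump the incoming impulse_matrix to a file...
        long status = realInit(impulse_matrix, row_size, aggressors,
                               sample_interval, bit_time, AMI_parameters_in,
                               AMI_parameters_out, AMI_memory_handle, msg);
        // ...intercept point: dump the modified impulse_matrix here...
        return status;
    }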

With a proxy class, several experiments/tests can be done:

Consistency/Stress test:

In this example, a model’s functions are called many times. The purpose is to test:

  • Consistency: when given the same data, will the model return the same result? This should be the case for an LTI model.
  • Stress: will there be resource leaks, such as memory usage that keeps increasing?

As mentioned in the IBIS spec, an EDA tool may break a lengthy bit stream into several chunks and call AMI_GetWave many times, so a well-behaved model should pass both tests.
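A sketch of such a driver loop, assuming the model's AMI_GetWave has already been resolved as in the proxy sketch above (the helper and variable names are ours):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Signature of the model's AMI_GetWave (see the proxy sketch above).
    typedef long (*AmiGetWaveFn)(double*, long, double*, char**, void*);

    // Call the model repeatedly with identical stimulus; an LTI model must
    // return identical output each time, and memory usage should stay flat.
    bool consistencyTest(AmiGetWaveFn getWave, void *amiMemory,
                         const std::vector<double> &stimulus, int repeats)
    {
        std::vector<double> reference;
        for (int i = 0; i < repeats; ++i) {
            std::vector<double> wave(stimulus);             // fresh copy per call
            std::vector<double> clocks(wave.size(), 0.0);   // clock ticks out
            char *paramsOut = 0;
            getWave(wave.data(), (long)wave.size(), clocks.data(),
                    &paramsOut, amiMemory);
            if (i == 0) { reference = wave; continue; }
            for (size_t k = 0; k < wave.size(); ++k)        // compare to 1st run
                if (std::fabs(wave[k] - reference[k]) > 1e-12) {
                    std::printf("Mismatch at call %d, sample %zu\n", i, k);
                    return false;
                }
        }
        return true;
    }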

 

Co-optimization (internal):

The second experiment is to bundle the Tx and Rx responses together within one function call and try to do something with it (such as iterating to find the combination with the best performance, i.e. optimization).

First, from the IBIS spec, or the diagram shown above, the Tx's response is computed first. Most of the Tx's processing is LTI, and its function is mainly to convolve signals with 1 or 4. After this step, the EDA tool convolves again with signal 6 before passing the result on to 7.

Now convolution is a commutative process, meaning the order can be exchanged:
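In impulse-response terms, with * denoting convolution and x the input:

    x * h_Tx * h_ch * h_Rx = (x * h_ch) * (h_Tx * h_Rx)

so the Tx processing can be deferred and merged into the Rx stage without changing the end result.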

So if we use a proxy class in the Tx's place and implement a delta response (i.e. simply return the data passed in), then the EDA tool will convolve steps 4 and 6 as before, while we combine the actual Tx model's function with the Rx afterwards and get the same result:

However, the two models are now put together within the Rx proxy's AMI_GetWave function call, so we gain great control by manipulating the models' pointers directly. This way, settings such as the CTLE's curve index can be changed, and/or the number of Tx taps and their weights can be swept. This example also demonstrates how a proxy class can intercept calls from the EDA tool and perform a customized flow.

Co-optimization (external):

In the two examples above, execution happens within the same process where the proxy and/or model's .dll(s)/.so(s) are loaded. However, it's also possible to implement these in a separate process, or on a different machine over the network. The diagram below shows an example:

In this case, we want to use the EDA tool's post-processing capability to obtain performance metrics such as eye-width or BER. However, we would also like to adjust the model's parameters continuously in order to obtain the best channel performance. The EDA tool is called repeatedly; each time AMI_Init or AMI_GetWave is called by the simulator, a proxy class intercepts and delegates the call to a remote process via file-based IO, sockets or shared memory. The remote process is persistent and can be considered a server. It returns the data to the proxy class upon completing the processing, waits until the EDA tool post-processes and updates the results, then reads and adjusts the settings for the next call from another simulator run. On the "server" side, a simple AMI test driver loading a similar proxy class can serve as the stand-alone, persistent process. This way all the .ami processing can be done up front by the model driver, and no simulator license is needed.

The shaded block in the middle executes socket communication to get the response from a remote process.
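A minimal sketch of that block (plain POSIX sockets; the framing used here, raw doubles with a known count, is a simplification of what a real protocol would need):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // Hypothetical helper: ship the wave buffer to a persistent "server"
    // process and overwrite it in place with the processed response.
    static bool delegateToRemote(double *wave, long wave_size,
                                 const char *host, int port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) return false;

        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(port);
        inet_pton(AF_INET, host, &addr.sin_addr);
        if (connect(fd, (sockaddr*)&addr, sizeof(addr)) < 0) { close(fd); return false; }

        size_t nbytes = (size_t)wave_size * sizeof(double);
        send(fd, wave, nbytes, 0);                // ship the input samples out

        size_t got = 0;                           // read processed samples back
        while (got < nbytes) {
            ssize_t n = recv(fd, (char*)wave + got, nbytes - got, 0);
            if (n <= 0) { close(fd); return false; }
            got += (size_t)n;
        }
        close(fd);
        return true;
    }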

Other possible uses of the proxy class:

In addition to the examples discussed above, an AMI proxy class may also serve:

  • As a wrapper class to support models implemented in different languages, such as matlab, octave, perl etc., in either AMI-API compatible library form or as executables;
  • As a server to centrally manage SERDES models, providing extra IP protection since a model can now sit on a server, always up to date, without being distributed;
  • As a wrapper to support legacy models, implementing support for new spec parameters at the proxy level without re-compiling the old models.

We have used such a proxy class in our AMI analysis and found it very useful. We also expect it to be even more widely used as AMI models become prevalent and the IBIS spec is updated to support new protocols/parameters.

Materials:

Click [HERE] for the audio recording of the presentation.

Click [HERE] or visit ibis.org for the pdf of the presentation.

 

IBIS-AMI: Modeling architecture study

Preface:

In a previous post, we discussed several possible roles involving IBIS-AMI and their points of consideration. In this post, we would like to focus on the AMI model developer's role and explore several modeling flows along with their pros and cons. The materials and examples discussed here are from publicly available sources and articles, which are listed at the end of this post.

IBIS-AMI models:

Regardless of how sophisticated a SERDES design is, at the end of the day, the work of generating a corresponding AMI model is to create the 3 ~ 5 (depending on the IBIS spec version) AMI API functions in the C language and compile them as .dll(s)/.so(s) across different platforms and OSes. Note that the actual implementations (how the SERDES or EQ operates) are not bound to the C/C++ language. For example, one may use matlab, octave, perl or another language to implement the core functions, only they have to be wrapped with the C AMI-API functions. Regardless, the main objective stays the same: translate the design to corresponding code. On top of that, one also needs to consider how to debug, maintain and extend the generated models for model users and future design revisions.

IBIS-AMI API functions

A very popular, and relatively easy, approach to achieve this goal is to use a tool/program to generate code directly from design collateral.

Top-down AMI modeling flow:

A typical design flow is usually top-down: a floor plan is made and functional blocks are defined. At first, only abstract-level behaviors, budgets and/or specs are given. Each designer or team then dives into the detailed implementation of these functional blocks and finally assembles them for full design simulation or verification. Using this approach, a schematic is usually used for design entry and for connecting the different blocks first; each block may have several hierarchies. Architecture codes/schematics are then translated to C/C++ by machine.

The user can also usually right-click on each block to specify parameters for customization or exporting. When that's done, an add-on module translates this design to the corresponding C/C++ form. So far as the AMI API is concerned, a special AMI kit may also be needed so that the generated code will be AMI-API compatible.

With this approach, the user focuses on the SERDES design rather than the coding or API details. Since these architecture blocks/codes are pre-generated and verified, the generated code is considered well tested and should produce good correlation.

Machine generated codes:

While machine/program-generated code is pre-tested and usually very structured, it may not be easy to debug or extend. While it's certainly possible to rewrite parts of the code for fine-tuning or customization, one should not forget that the next time the designer clicks the "generate c/c++" button, all those changes will be overwritten. Updates made at the bottom (i.e. in the generated code) will not back-propagate or back-annotate to the original design. One almost always has to change things from the top.

My observations on this top-down approach are:

  • Suitable for designers who have the original collateral and don't know how, or don't want, to code.
  • The result is mostly machine-generated code:
    • Should run correctly, as it has been tested;
    • May not be efficient (true in most cases, certainly the one above);
    • Not easy to maintain... poor readability and no way to enforce coding guidelines;
    • One direction only: code changes can't back-propagate.

Bottom-up AMI modeling flow:

If a software developer were to tackle the AMI modeling challenge, his/her approach would probably differ from that used by the SERDES/IC designer. If it were me, a bottom-up approach would probably be used, with a code sketch following the list below:

  • 1st, one will identify several common blocks to be used, such as:
    • FFE: feed-forward equalizer; LTI, time or frequency domain
    • LPF: low-pass filter; LTI, frequency domain
      • CTLE, Bessel, filter-based IIR/FIR etc.
    • DFE: decision feedback equalizer; NLTV/digital, time domain only
    • CDR: clock data recovery; NLTV/digital, time domain only
    • Coder: various coding schemes
      • 64b/66b, 8b/10b etc.
    • PRBS: pseudo-random bit stream; PRBS7, 10, 15 etc.
    • AFE: analog front end, to shape the pulse with Rt/Ft/Swing etc.
  • 2nd, one may define common interfaces, data members etc. between these blocks and use OO principles to construct base and derived classes.
  • 3rd, one should then be able to assemble the different stages elegantly and form an AMI model out of them.
  • As a bonus, one can then easily add extra mechanisms such as security, encryption and/or a selector over different CTLE responses.
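As a rough illustration of the 2nd and 3rd steps (hypothetical names, not any vendor's actual code), a base class plus one concrete FFE stage might look like:

    #include <vector>

    // A common base class; parameters come from the .ami file, never
    // hard-coded numbers, so they can be tuned or swept later.
    class EqStage {
    public:
        virtual ~EqStage() = default;
        virtual void process(std::vector<double> &wave, double dt) = 0;
    };

    // A concrete FFE: FIR filtering with tap weights spaced one UI apart.
    class FFE : public EqStage {
    public:
        FFE(std::vector<double> taps, double uiSeconds)
            : m_taps(std::move(taps)), m_ui(uiSeconds) {}

        void process(std::vector<double> &wave, double dt) override {
            const size_t spacing = (size_t)(m_ui / dt + 0.5);  // samples per UI
            std::vector<double> out(wave.size(), 0.0);
            for (size_t i = 0; i < wave.size(); ++i)
                for (size_t t = 0; t < m_taps.size(); ++t)
                    if (i >= t * spacing)
                        out[i] += m_taps[t] * wave[i - t * spacing];
            wave.swap(out);                                    // in-place result
        }
    private:
        std::vector<double> m_taps;  // e.g. {-0.1, 0.8, -0.1} from the .ami file
        double m_ui;                 // unit interval in seconds
    };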

Since this code is hand-written, the developer should feel very comfortable supporting, debugging and extending the functions. Accompanied by good documentation, it can be used for a very long time. In order to make sure the model's performance matches that of the original design:

  • The classes should not be hard-coded with numbers. Instead, settings should be parameterized and tunable either programmatically or from the .ami file;
  • A "sweep", a least-squared-error fit, or another optimization methodology can be used to find the set of parameters which match the original design's performance best, as sketched below;
    • In other words, good correlation is not a "given". It takes effort.
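A brute-force version of that fit, reusing the hypothetical FFE class from the sketch above (the stimulus and "golden" waveforms are assumed to be captured from the original design), could look like:

    #include <limits>
    #include <vector>

    // Sum of squared differences between two waveforms of equal length.
    double squaredError(const std::vector<double> &a, const std::vector<double> &b) {
        double err = 0.0;
        for (size_t i = 0; i < a.size() && i < b.size(); ++i)
            err += (a[i] - b[i]) * (a[i] - b[i]);
        return err;
    }

    // Sweep candidate tap sets; keep the one whose response is closest
    // (least-squared error) to the golden waveform of the original design.
    std::vector<double> fitTaps(const std::vector<std::vector<double>> &candidates,
                                const std::vector<double> &stimulus,
                                const std::vector<double> &golden,
                                double dt, double ui) {
        std::vector<double> best;
        double bestErr = std::numeric_limits<double>::max();
        for (const auto &taps : candidates) {
            std::vector<double> wave(stimulus);   // fresh copy per candidate
            FFE(taps, ui).process(wave, dt);      // FFE class from the sketch above
            double err = squaredError(wave, golden);
            if (err < bestErr) { bestErr = err; best = taps; }
        }
        return best;                              // becomes the .ami default
    }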

My observations on this bottom-up approach are:

  • Suitable for engineers who are good at OO design and comfortable coding.
  • Hand-written code:
    • Needs effort to make sure the code runs correctly;
    • When done right, runs very efficiently;
    • When done right, is easy to maintain and extend;
    • Code reusability is very high, saving effort/resources in the long run.

Spec/Datasheet based AMI modeling:

From the discussion above, it seems the top-down and bottom-up approaches each have their own merits. Is it possible to have the best of both worlds?

It's not easy... each engineering discipline is different, and that's also why IBIS-AMI modeling is technically challenging: it demands cross-domain knowledge and experience, at the very least. We haven't even discussed algorithms such as FFT, IFFT, DSP, BER etc. above yet...

Having said that, I believe this is still possible if we can eliminate the shortcomings of either side. For example, one can look at other EDA vendors' AMI library parts and find that they are actually well organized and architected, much like the bottom-up thinking above. So it must be the process of translating schematics to code that compromises the results. This is most likely due to its general-purpose nature: "design anything and the tool can translate it to c/c++".

Since it's not easy to change a big EDA company's product and application scope, if one instead finds a suitable DSP/filter design library to use from the start of the bottom-up flow, then a well-behaved, efficient, maintainable and extensible model is not out of reach. Fortunately, there are plenty of such open-source libraries available. Furthermore, commonly used functional blocks can be assembled together based on settings or parameters. They can even be pre-built/pre-compiled so that compilation is avoided entirely. The end result is a template/spec/datasheet-based AMI modeling approach.

SPISimAMI: Spec/Datasheet based AMI modeling

Many recent publications, along with the adoption of this approach by smaller EDA companies like us, support this as a better modeling flow so far as IBIS-AMI is concerned.

Reference:

2015 DesignCon paper: [HERE]
2016 IBIS Summit paper: [HERE]
2016 DesignCon paper: [HERE]

IBIS-AMI: Scenarios

IBIS-AMI: roles and scenarios

SERDES-based links, such as USB, PCIe and SATA, are now ubiquitous. Their SI analysis poses special challenges, as the low bit error rate required by their specs means millions of bits need to be simulated. Instead of using transistor-level design details or proprietary behavioral blocks (e.g. a Verilog-A or matlab model), an industry-standard, exchangeable modeling format is usually required. This is because the Tx and Rx IPs may not be from the same vendor, nor are the simulators or analysis tools used by the IC vendor and the system integrator. IBIS-AMI is currently the industry standard for this purpose. It's just like traditional IBIS, only more demanding in cross-domain knowledge and more technically challenging. It requires certain flows to be able to test, generate, validate and release IBIS-AMI models. In addition, the considerations related to IBIS-AMI also depend on these different roles and scenarios. We believe that when developing or adopting an AMI workflow, three usage situations, namely 1. the end user, 2. the model developer and 3. the model publisher, should all be considered thoroughly.

As an AMI model user:

This is the most common scenario. As a model user, one often wants to validate and test a received model before running full simulations. Take an IBIS model as an example: the validation tasks usually include first checking with the golden parser, then running a time-domain simulation driving the IBIS model into a simple test load and checking the response. More diligent users will also want to extract performance metrics such as impedance and Rtt etc. to make sure they match what's claimed by the model name.

The example below shows pretty good matches in terms of impedance at different corners (34 and 40 ohms).

When it comes to an AMI model, things are not that easy. First of all, AMI binary model files (.dll(s)/.so(s)) are not only OS (e.g. windows vs linux vs OSX) but also platform (e.g. 32-bit vs 64-bit) dependent. This means that unless a model's file name contains the correct info (e.g. TX_Win32.dll), one usually can't tell by just looking at the content of the binary file. Secondly, depending on the spec version, the APIs implemented in the model also differ. Again, these implementations are compiled, and one can't easily check. Lastly, even the latest golden parser will not exercise these API calls, let alone check their performance. And exercising an AMI model usually involves third-party (often expensive) EDA tool usage and the required license.

EDA tools like HSpice do provide a simple utility, called AMICheck, for this purpose. Similarly, companies like SiSoft and Cadence provide development kits which enable AMI model checking. However, in HSpice's case, a license is required to run the utility. For the latter two, the development kit needs to be compiled for the respective OS/platform first, and the input and output to the models are not flexible. All these pose obstacles preventing a model user from quickly checking a received model.

We believe a simpler yet more elegant solution should exist. That's why we developed SPISimAMI and released it as a free tool. It has been compiled for different platforms and can be downloaded and used directly. With this utility, a user can drive a given AMI dll file with the accompanying .ami settings. The tool first loads the dll to make sure it matches the OS version of the executing SPISimAMI utility. It then loads the model and checks mandatory API functions such as AMI_Init and AMI_Close. The user can then use the embedded rising/falling step and pulse responses to drive AMI_Init and, if available, AMI_GetWave (the tool checks whether GetWave_Exists is set to true). Alternatively, a user can pass an input response to the model in .csv, .tr0 or .raw format. The output waveform from the model is then saved in the non-proprietary .csv and .raw formats, to be inspected with a tool like excel or our free SPILite and SPIPro. The screenshot below shows its usage:

An example of a quick AMI check using this feature is to feed an input bit sequence and check the equalized output from the AMI model:

Regardless of which approach or tool one takes, we believe that as an AMI model user, these checking processes and capabilities should be considered due diligence and must be exercised before performing full link analysis with the AMI models.

As an AMI model developer:

AMI model development usually requires cross-domain expertise. For example, a model developer needs to know how AMI and the associated IBIS models are used. Computer science skills, such as programming in C/C++, implementing the APIs, and using compilers like Visual Studio and g++/gcc to produce the .dll/.so, are also required. Finally, domain knowledge such as DSP, equalization and link analysis comes into play. In terms of implementation, a model developer not only needs to comprehend these technical details but also needs to architect the code so that the model is modular and can be easily extended. This is often important, as designs of different generations within the same company may differ only slightly; a proper modeling architecture will allow a new model to be created by deriving from a common base class (think object-oriented design). From the discussion above, one can easily see that this modeling effort may require either dedicated engineering resources or expensive contracting services to get the job done.

CS skills are required in AMI modeling

To address these hurdles, EDA companies have provided several top-down solutions (albeit expensive). In my opinion, these flows are very capable. I think their advantages and strength come from the fact that many building blocks (mainly built for RF purposes originally) are already available in their libraries; translating these proprietary structures into C/C++ and making them compatible with the AMI API is the obvious solution. However, it's also my belief that an AMI model developer should not fully rely on these flows. At the end of the day, just like any other software product, the developed model should be maintainable, extensible and efficient. An owner of these tools can take a look at the machine-generated template and find that these flows' results usually are not:

SystemVue's code, using a buffer of size 1?

After looking at this code, as a developer, you should ask yourself how you are going to debug this machine-generated code when your users report bugs or issues. And will you be able to extend its functionality with ease?

Fortunately, for the SERDES applications where AMI is used most often, the functional blocks usually have common behaviors. For example, a feed-forward equalizer, a low-pass filter and a pulse shaper are often required as part of the modeling functions. So for long-term considerations, a company or a developer should really bite the bullet and build the AMI infrastructure from the ground up like it should be, taking factors such as modeling architecture, code performance and maintainability into account. After this "exercise", one will often find that a company's SERDES designs do not change dramatically between generations, and many of the developed codes/blocks can actually be reused just like their silicon IP. On top of this, one can also add functions not available from those tools, such as encryption and other parameter-locking mechanisms.

A noteworthy point is that the aforementioned SPISimAMI utility can serve as the driver during the development stage. This way, one does not need a 3rd-party (expensive?) tool or license to drive and test the non-executable .dll/.so under development.

As an AMI model publisher:

A model publisher should be prepared to support the AMI models he/she has released. The most common support scenario is that the model does not behave properly or can't achieve the desired link performance. The complication here is that the user's link analysis tool may be different from the one the publisher is using. Without common ground, it's often difficult to root-cause the issue without it becoming a blame game between the models and the analysis tools.

We believe a free tool like SPISimAMI can again resolve this issue. By providing a freely available tool supporting data capture, both input to and output from the model, in a non-proprietary format, data exchange becomes easy. An end user simply needs to probe the input signals to the model, save them as customized input and send them back to the publisher. The model publisher can then use the same AMI driver tool to feed in this input and observe the model's response. All this is done in the most basic and simplest manner, directly with the models, without other analysis functions. Together with more advanced programming techniques, such as the proxy-based modeling pattern, data become even more transparent and not bound to a particular link simulator. This way, both the model publisher and the end user can focus on the AMI model's proper usage and performance rather than on discrepancies between different EDA vendors' tools.