
Quick Overview of OSVVM, VHDL’s #1 Verification Methodology

Open Source VHDL Verification Methodology (OSVVM) provides VHDL with buzzword verification capabilities including Transaction Level Modeling, Constrained Random, Functional Coverage, Scoreboards, FIFOs, Memory Models, Error and Message Handling, and Test Reporting that are simple to use and feel like built-in language features. OSVVM has grown rapidly during the COVID years, giving us better capability, better test reporting (HTML and JUnit), and scripting that is simple to use (and works with most VHDL simulators). This presentation shows how these advances fit into the overall OSVVM Methodology.

Welcome to the OSVVM 15-Minute Waltz. Let's go ahead and get started.

What is OSVVM? Well, first up, it's a verification framework. It's a verification utility library that implements the things that make VHDL a full verification language. It's a verification component library with a growing set of verification components. It's a script library that allows us to write simulator-independent scripts. It's a co-simulation library that lets us run software on top of our hardware in the simulator. It generates test reports: HTML for humans and JUnit XML for continuous integration tools. OSVVM is free, open source, available on GitHub and on osvvm.org, and it's developed by the same VHDL experts who have helped develop the VHDL standards.

As a framework, it looks very similar to the SystemVerilog framework. We have verification components that implement interface signaling, and we have a test sequencer that has sequences of transactions that implement our test case. Then each test case is a separate architecture of test control. Our framework is simply structural code, and it's simple, just like RTL code. So we have an instance of the DUT, we have instances of our verification components, and we have an instance of our test sequencer.
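As a rough sketch, here's what that structural code might look like for a UART loopback harness. This is a minimal illustration, not the actual OSVVM UART testbench: the entity and port names ('UartTxVC', 'SerialOut', and so on) are hypothetical, and in a real harness the DUT instance would sit between the verification components.

```vhdl
library ieee ;
  use ieee.std_logic_1164.all ;
library osvvm_common ;
  context osvvm_common.OsvvmCommonContext ;  -- StreamRecType and transaction API

architecture TestHarness of TbUart is
  -- One transaction interface record per verification component
  signal TxRec, RxRec : StreamRecType(
      DataToModel   (7 downto 0), ParamToModel  (2 downto 0),
      DataFromModel (7 downto 0), ParamFromModel(2 downto 0) ) ;
  signal SerialData : std_logic ;
begin
  -- Verification components implement the interface signaling
  TxVC : entity work.UartTxVC port map (TransRec => TxRec, SerialOut => SerialData) ;
  RxVC : entity work.UartRxVC port map (TransRec => RxRec, SerialIn  => SerialData) ;

  -- Test sequencer: each test case is a separate architecture of TestCtrl
  TestCtrl_1 : entity work.TestCtrl port map (TxRec => TxRec, RxRec => RxRec) ;
end TestHarness ;
```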

Elements of our framework: We have transaction interfaces, such as 'ManagerRec'. We have the transaction API, such as 'Write()' and 'Send()'. We have verification components, and we have the test sequencer.

We have our model-independent transactions. This comes from the observation that many interfaces do similar things. So OSVVM has codified the transaction interface and transaction API for stream interfaces, such as AxiStream and UART, and in there you'll find our transaction interface implemented as a record and our transaction API implemented as procedure calls. These procedures here are a subset of what's in the OSVVM library. And then we have another set for address bus interfaces, or what's also called memory-mapped interfaces, such as Axi4 or Avalon; again we have our record and our transactions, and again this is a very small subset of what's in the library itself. The benefit of this: we simplify verification component development, and we simplify the reuse of common test cases and sequences.
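To make the record-plus-procedures idea concrete, here's roughly what a few of those transaction declarations look like. This is a simplified sketch: the actual declarations in OSVVM-Common carry additional optional parameters.

```vhdl
-- Stream interface transactions (simplified; see StreamTransactionPkg)
procedure Send (
  signal   TransactionRec : inout StreamRecType ;
           Data           : in    std_logic_vector ) ;

procedure Get (
  signal   TransactionRec : inout StreamRecType ;
  variable Data           : out   std_logic_vector ) ;

procedure Check (
  signal   TransactionRec : inout StreamRecType ;
           Data           : in    std_logic_vector ) ;

-- Address bus (memory-mapped) transactions (simplified;
-- see AddressBusTransactionPkg)
procedure Write (
  signal   TransactionRec : inout AddressBusRecType ;
           Addr           : in    std_logic_vector ;
           Data           : in    std_logic_vector ) ;

procedure Read (
  signal   TransactionRec : inout AddressBusRecType ;
           Addr           : in    std_logic_vector ;
  variable Data           : out   std_logic_vector ) ;
```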

Our verification components have a DUT interface, such as the record shown here for the 'AxiBus', or it could also be individual signals. And then it has a transaction record on it that uses one of the types from the previous slide; here we're using the 'AddressBusRecType'. Inside a verification component -- if it's a very simple one -- we're calling 'WaitForTransaction', which waits until a transaction has been called. At that point the record has something in it, so we decode the record and do the operation. Now, we don't often put subprograms in here; more often the code is written inline. Benefits: verification component developers just focus on the model functionality and don't have to implement the plumbing that's provided by the OSVVM model-independent transactions.
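A minimal transaction handler inside a simple stream verification component might look like the sketch below. 'TransRec' is assumed to be the component's transaction record port, the interface driving is stubbed out, and a real component also handles bursts, options, and the rest of the operation set.

```vhdl
TransactionHandler : process
  variable TxData : std_logic_vector(7 downto 0) ;
begin
  loop
    -- Block until the sequencer initiates a transaction
    WaitForTransaction (
      Clk => Clk,
      Rdy => TransRec.Rdy,
      Ack => TransRec.Ack
    ) ;
    -- The record has something in it: decode it and do the operation
    case TransRec.Operation is
      when SEND =>
        TxData := SafeResize(TransRec.DataToModel, TxData'length) ;
        -- ... drive the DUT interface signaling here ...
      when WAIT_FOR_CLOCK =>
        WaitForClock(Clk, TransRec.IntToModel) ;
      when others =>
        Alert("Unimplemented transaction", FAILURE) ;
    end case ;
  end loop ;
end process TransactionHandler ;
```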

Our test sequencer has transactions on the interface and maybe also Reset. Inside is where our test case is, and our test case is in a single file. We have a Control Process that initializes and finalizes the test, and then we have one process per interface, so we're concurrent just like the design is concurrent. Our tests are simply calls to the transactions, and it's easy to mix directed tests with constrained random tests, scoreboards, and functional coverage. We also have synchronization utilities that help us synchronize these independent processes, such as here at the beginning or here as the test is done. As the test runs, we're checking for errors and recording them in a data structure, and at the end of the test the control process calls the 'EndOfTestReports()' procedure, which reports all of the errors for the test and creates YAML files that the scripts convert into the HTML reports.
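Putting that together, a test case architecture might be structured like this sketch: one control process, one process per interface, barrier synchronization at the end, and 'EndOfTestReports()' to finalize. The test name and data values are made up for illustration.

```vhdl
architecture SendGet1 of TestCtrl is
  signal TestDone : integer_barrier := 1 ;  -- OSVVM synchronization utility
begin
  ControlProc : process
  begin
    SetTestName("TbUart_SendGet1") ;   -- names the test in logs and reports
    WaitForBarrier(TestDone, 10 ms) ;  -- wait for all processes, with a watchdog
    EndOfTestReports ;                 -- report errors, write YAML for HTML reports
    std.env.stop ;
  end process ControlProc ;

  TxProc : process                     -- one process per interface
  begin
    Send(TxRec, X"4A") ;
    WaitForBarrier(TestDone) ;
    wait ;
  end process TxProc ;

  RxProc : process
    variable RxData : std_logic_vector(7 downto 0) ;
  begin
    Get(RxRec, RxData) ;
    AffirmIfEqual(RxData, X"4A", "UART Data") ;
    WaitForBarrier(TestDone) ;
    wait ;
  end process RxProc ;
end SendGet1 ;
```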

Writing a directed test is easy. We simply call the transactions, such as 'Send()' on the TX side or 'Get()' here on the RX side. We can do some checking with 'AffirmIfEqual()' from the AlertLog package, or we can do the checking by instead calling the 'Check()' transaction from the transaction interface. The test output of 'AffirmIfEqual()', if it passes, produces a 'Log PASSED'. If it fails, it produces an 'Alert ERROR'. The benefit here: we've greatly simplified writing self-checking tests and we've improved readability.
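The two checking styles look like this side by side (a sketch; the data value is illustrative):

```vhdl
-- Style 1: Get the data, then check it with the AlertLog package
Get(RxRec, RxData) ;
AffirmIfEqual(RxData, X"4A", "UART Data") ;

-- Style 2: Let the verification component do the check via the transaction API
Check(RxRec, X"4A") ;
```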

Our constrained random tests are simply a call to something from our randomization library, such as 'DistInt()' here. This one, 70% of the time, is going to generate a zero for us, and in that case we're going to generate no errors and pick a data value between 0 and 255. Other possibilities: one for 'Parity Error' or two for 'Stop Error'. Note that we set up the operation to be stop error, but we also randomize a different set of values; this is the nature of constrained random. And then we do our transactions. Here we're setting up our transaction and calling it at the end, but we could also be calling it within those 'case' branches and could be doing more than one transaction per branch. So our constrained random approach in OSVVM is randomization plus code plus transaction calls.
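Here's a sketch of that pattern. The 70/20/10 weights match the discussion above; the 3-bit error-injection encoding passed as the transaction parameter is a made-up convention for illustration.

```vhdl
CrTxProc : process
  variable RV     : RandomPType ;       -- from osvvm.RandomPkg
  variable TxData : std_logic_vector(7 downto 0) ;
  variable Error  : std_logic_vector(2 downto 0) ;
begin
  for I in 1 to 1000 loop
    case RV.DistInt((70, 20, 10)) is    -- returns 0, 1, or 2 with these weights
      when 0 =>                         -- 70%: no error, any data value
        Error  := "000" ;
        TxData := RV.RandSlv(0, 255, 8) ;
      when 1 =>                         -- 20%: parity error
        Error  := "001" ;
        TxData := RV.RandSlv(0, 255, 8) ;
      when 2 =>                         -- 10%: stop error, different data constraint
        Error  := "010" ;
        TxData := RV.RandSlv(0, 63, 8) ;
      when others =>
        Alert("Unreachable", FAILURE) ;
    end case ;
    Send(TxRec, TxData, Error) ;        -- transaction call at the end
  end loop ;
  WaitForBarrier(TestDone) ;
  wait ;
end process CrTxProc ;
```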

Now, we could do the checking the same way we did previously and repeat the sequence on the 'Receive' side, but we really don't want to do that because it's tedious and error-prone. Instead, we can use a scoreboard. A scoreboard is a data structure used for checking data when it's minimally transformed, such as sending something across a UART and receiving it somewhere else, like we're doing here. Our scoreboard has a FIFO and a checker inside. It uses package generics so that we can support different types, it handles small data transformations, it handles out-of-order execution, and it handles dropped values.

Using a scoreboard is pretty easy: we declare an object of the 'ID' type and then we construct the data structure. We're building this in the package, and it's actually a singleton data structure that we have sitting out there. Then we call 'Push()' with the handle for the scoreboard, 'SB' here, and then we do a 'Send()' transaction. On the 'Receive' side we do a 'Get()' transaction to receive the value, and then we just pass the values that we receive up to the scoreboard for checking. So we have a big benefit here in that the 'Checking' side is relatively generic and stays the same even if the 'Stimulus Generation' side changes; if we switch from a directed to a randomized test, it's still the same thing on the 'Checker' side.
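In code, the pattern looks something like this sketch, using the std_logic_vector scoreboard instance 'ScoreboardPkg_slv'. The scoreboard name and loop bounds are illustrative.

```vhdl
library osvvm ;
  context osvvm.OsvvmContext ;
  use osvvm.ScoreboardPkg_slv.all ;  -- scoreboard instance for std_logic_vector

architecture Scoreboard1 of TestCtrl is
  signal SB : ScoreboardIDType ;
begin
  TxProc : process
    variable RV     : RandomPType ;
    variable TxData : std_logic_vector(7 downto 0) ;
  begin
    SB <= NewID("UartSB") ;        -- construct the singleton data structure
    wait for 0 ns ;                -- let the signal assignment take effect
    for I in 1 to 100 loop
      TxData := RV.RandSlv(0, 255, 8) ;
      Push(SB, TxData) ;           -- record the expected value
      Send(TxRec, TxData) ;
    end loop ;
    wait ;
  end process TxProc ;

  RxProc : process
    variable RxData : std_logic_vector(7 downto 0) ;
  begin
    for I in 1 to 100 loop
      Get(RxRec, RxData) ;         -- blocking, so SB is valid by the time we check
      Check(SB, RxData) ;          -- compare against the FIFO of expected values
    end loop ;
    wait ;
  end process RxProc ;
end Scoreboard1 ;
```

The key point: switching 'TxProc' from directed to randomized stimulus leaves 'RxProc' untouched.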

The next thing we need is functional coverage. Functional coverage is code that tracks items in your test plan, such as requirements, features, and boundary conditions. Why do we do this? Well, with randomization, how do you know what the test did? So we're looking for 100% functional coverage and 100% code coverage; that is what indicates the test is done. Now, why not just use code coverage? Code coverage tracks code execution, but it misses anything that's not directly in the code, such as binning values from a register, or things that are independent that we need to correlate.

Okay, so here we're building out our coverage model, again using an 'ID' type because the coverage models are also in a singleton data structure. We then construct the data structure, define the coverage model by defining the bins of values that we want to see, and then call 'ICover()' to collect the coverage. It's that simple. In fact, functional coverage with OSVVM is as simple and concise as language syntax.
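Here's what that looks like as a sketch, with made-up bin choices (the two corner values plus eight bins across the middle of the range):

```vhdl
-- Assumes: signal Cov : CoverageIDType ;  and  use osvvm.CoveragePkg.all ;
CovProc : process
  variable RV     : RandomPType ;
  variable TxData : integer ;
begin
  Cov <= NewID("TxDataCov") ;        -- construct the singleton coverage model
  wait for 0 ns ;
  -- Define the bins of values we want to see
  AddBins(Cov, GenBin(0)) ;          -- corner: minimum
  AddBins(Cov, GenBin(1, 254, 8)) ;  -- 8 bins across the middle of the range
  AddBins(Cov, GenBin(255)) ;        -- corner: maximum
  for I in 1 to 1000 loop
    TxData := RV.RandInt(0, 255) ;
    ICover(Cov, TxData) ;            -- collect coverage
  end loop ;
  WriteBin(Cov) ;                    -- print the coverage model
  wait ;
end process CovProc ;
```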

Now we also go further. We can do introspection of the coverage model and create what is considered to be runtime coverage-driven randomization. We call this intelligent coverage randomization; what we're doing internally is randomizing across the coverage holes. So we start out the same way: we create our coverage object, we call our constructor, and then we build out our bins. But now we add coverage goals to our bins. These become randomization weights. Then we call 'GetRandPoint()' with the coverage model to randomize a value within that coverage model, decode that value in much the same fashion as we did with the constrained random approach, dispatch our transaction, and record the fact that we did that transaction with 'ICover()'.
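A sketch of the intelligent coverage loop, continuing the 70/20/10 error-mode example; the transaction parameter encoding and fixed data value are again made-up conventions for illustration.

```vhdl
-- Assumes: signal Cov : CoverageIDType ;  and ieee.numeric_std for to_unsigned
IntCovProc : process
  variable ErrorMode : integer ;
begin
  Cov <= NewID("ErrorModeCov") ;       -- same singleton construction as before
  wait for 0 ns ;
  -- Coverage goals double as randomization weights
  AddBins(Cov, 70, GenBin(0)) ;        -- no error: goal of 70
  AddBins(Cov, 20, GenBin(1)) ;        -- parity error: goal of 20
  AddBins(Cov, 10, GenBin(2)) ;        -- stop error: goal of 10
  while not IsCovered(Cov) loop
    ErrorMode := GetRandPoint(Cov) ;   -- randomize across the coverage holes
    case ErrorMode is                  -- decode, as in the constrained random test
      when 0 =>
        Send(TxRec, X"4A") ;
      when others =>
        Send(TxRec, X"4A", std_logic_vector(to_unsigned(ErrorMode, 3))) ;
    end case ;
    ICover(Cov, ErrorMode) ;           -- record what we actually did
  end loop ;
  WaitForBarrier(TestDone) ;
  wait ;
end process IntCovProc ;
```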

When we finish our test, we're going to generate our reports. The first report you're going to see is the PASS/FAIL status for a given test, but 'EndOfTestReports()' is also essential for the other reports generated by OSVVM.

The next thing we have is our scripting. Our scripting started out as a list of files and then evolved to having these TCL procedures that we call to set up our tests. So we have our library, and we activate our library, and then we analyze to compile things, and then we simulate. And note, when we activated the library, we use that same library for the rest of the commands that follow it; the library is set and remembered. We work on basically all of the popular VHDL simulators, with the exception of Xilinx's XSim; we're waiting on them to get good VHDL-2008 support.
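A small script using those procedures might look like this; the library and file names are hypothetical, while 'library', 'analyze', and 'simulate' are the OSVVM script procedures just described.

```tcl
# Hypothetical TbUart.pro
library  osvvm_tbuart              ;# create/activate the library; it stays active
analyze  ./src/TbUart.vhd          ;# compile into osvvm_tbuart
analyze  ./TestCases/TbUart_SendGet1.vhd
simulate TbUart_SendGet1           ;# run the test
```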

We call our scripts using 'build' and 'include' rather than TCL's 'source' and the EDA vendors' 'do', because this is what allows us, when we specify a path, to make the path relative to the script's directory rather than relative to the directory the simulator is running in. That's important because you want to be able to relocate things on a project-by-project basis. So we use 'build' to start things off, to call things from the command line, or to call the top-level scripts from a continuous integration run. 'build' plus 'EndOfTestReports()' is what generates our reports. And then 'include' is for calling one '.pro' script from another.
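Typical usage from a simulator's TCL prompt or a CI job might look like this sketch; the project script names are hypothetical, while 'StartUp.tcl' and 'OsvvmLibraries.pro' ship with OsvvmLibraries.

```tcl
source ./OsvvmLibraries/Scripts/StartUp.tcl   ;# load the OSVVM script layer
build  ./OsvvmLibraries/OsvvmLibraries.pro    ;# compile the OSVVM libraries
build  ./MyProject/RunAllTests.pro            ;# top level: runs tests, generates reports

# Inside RunAllTests.pro, pull in other scripts with include:
#   include ./TbUart/TbUart.pro
```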

Our reports: we're going to show you just one of the reports. Our build summary report starts out with status for the entire build. Did it pass? Did it fail? We give you links to the log file and an HTML version of the log file. If you ran with coverage, we have a link to the merged code coverage. We have a test suite summary; we break our test cases out into test suites that focus on testing one thing, like the 'AlertLog' package or the 'AxiStream' verification component. That's the summary for the suites. And then we have the test case summaries, which give us the details of how each test case within a given suite ran.

So all you need for your VHDL verification is OSVVM. We have a powerful, concise capability that rivals other verification languages. We have unmatched reuse through the entire verification process. We have unmatched reporting capability, with HTML for humans and JUnit XML for continuous integration tools. We have tests that are readable and reviewable by all: not just verification engineers, but also hardware designers, and also software and system engineers. If you can read the transactions, you can read the tests. OSVVM is set up to be adopted incrementally, and you can find us on GitHub and on osvvm.org. Thank you for attending my presentation.