Verification Documentation

The sub-directories below this point contain all of the verification documentation for the CORE-V family of RISC-V cores.

  • Common: CORE-V verification documentation not specific to any one core. The most important document here is the Verification Strategy.
  • CV32E40P: verification documentation specific to the CV32E40P core
  • CV32E40: verification documentation specific to the CV32E40 core

How to Write a Verification Plan (Testplan)

The CORE-V projects use spreadsheets to capture Testplans. I know, I know, we all hate spreadsheets, but they really are the best format for this type of data. The template for the spreadsheet is simple enough that you can use either Microsoft Office Excel or LibreOffice Calc. The Verification Plan template for the CV32E40P is located at the root of the VerificationPlan directory.

Verification Planning

A key activity of any verification effort is to capture a Verification Plan (aka Test Plan or just Testplan). The purpose of a verification plan is to identify what features need to be verified, the success criteria for each feature, and the coverage metrics for testing it. Testplans also allow us to reason about the capabilities of the verification environment.

A Verification Plan should focus on the what, and not the how, of verification. When capturing a testplan we are mostly interested in creating a laundry list of things to verify. At this stage we are not (yet) concerned with how to verify them.

The “how” part is captured in the Verification Strategy document. That document exists to support the Verification Plan. For example, the CV32E40P testplan specifies that all RV32I instructions be generated and their results checked. Obviously, the testbench needs to have these capabilities, and it's a goal of the Verification Strategy document to explain how that is done.

A Trivial Example: the RV32I ADDI Instruction

Let's assume your task is to verify the CV32E40P's implementation of the RV32I ADDI instruction. Simple, right? Create a simple assembler program with a few addi instructions, check the results, and we're done. Of course, checking for the correct result (rd = rs1 + imm) is insufficient. We also need to check:

  • Overflow is detected and flagged correctly
  • Underflow is detected and flagged correctly
  • No instruction execution side-effects (e.g. unexpected GPR changes, unexpected condition codes)
  • Program counter updates appropriately
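As a concrete illustration of the result check, the expected value of rd can be computed from rs1 and the sign-extended 12-bit immediate. The sketch below is a minimal SystemVerilog reference function; its name and usage are assumptions for illustration, not part of the CORE-V verification environment.

    // Hypothetical reference calculation for the addi result check.
    // The 12-bit immediate is sign-extended to 32 bits, then added to
    // rs1 with ordinary wrapping 32-bit arithmetic.
    function automatic logic [31:0] addi_expected(logic [31:0] rs1,
                                                  logic [11:0] imm);
      return rs1 + {{20{imm[11]}}, imm};  // rd = rs1 + sext(imm)
    endfunction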

It's also important that the instruction is fully exercised, so we also need to cover the following cases (sketched as a covergroup after the next paragraph):

  • Use x0..x31 as rs1
  • Use x0..x31 as rd (Note: the result of this operation will always be 0x00000000 when rd is x0)
  • Set/Clear all bits of immediate
  • Set/Clear all bits of rs1
  • Set/Clear all bits of rd

Note the simplifying assumptions made here. With one 32-bit and one 12-bit operand there are 2^44 unique sums that can be calculated. Including the cross-products of source and destination registers (32 x 32 = 1,024) yields O(10^16) unique instruction calls. The RV32I ISA specifies 40 instructions, so this gives us O(10^17) instruction executions simply to fully verify the most basic instructions in a CORE-V design. Obviously this is impractical, and one of the things that makes verification an art is determining the minimal amount of coverage needed to have confidence that a feature is sufficiently tested. It is the opinion of the author that the above coverage is sufficient for the addi instruction. You may see it as overkill or underkill depending on your understanding of the micro-architecture or your level of risk aversion.
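To illustrate, the reduced set of cover items above might be expressed as a SystemVerilog covergroup along these lines. This is a sketch under stated assumptions: the sampled fields are hypothetical, and the all-bits-clear/all-bits-set bins stand in for per-bit set/clear coverage; it is not the actual CORE-V functional coverage code.

    // Hypothetical covergroup for the addi cover items; the sampled
    // field names are illustrative, not from the CORE-V environment.
    covergroup cg_addi with function sample(logic [4:0]  rs1_addr,
                                            logic [4:0]  rd_addr,
                                            logic [11:0] imm,
                                            logic [31:0] rs1_val,
                                            logic [31:0] rd_val);
      cp_rs1 : coverpoint rs1_addr { bins gpr[32] = {[0:31]}; }  // x0..x31 as rs1
      cp_rd  : coverpoint rd_addr  { bins gpr[32] = {[0:31]}; }  // x0..x31 as rd
      // Corner-value bins standing in for per-bit set/clear coverage
      cp_imm : coverpoint imm     { bins clr = {12'h000}; bins set = {12'hFFF}; }
      cp_rs1v: coverpoint rs1_val { bins clr = {32'h0}; bins set = {32'hFFFF_FFFF}; }
      cp_rdv : coverpoint rd_val  { bins clr = {32'h0}; bins set = {32'hFFFF_FFFF}; }
    endgroup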

So, specifying the Testplan for the addi instruction forces us to think about what the feature-under-test does, what we need to check to ensure it's done properly, and what stimulus and configuration needs to be covered to ensure the feature is tested under all pertinent conditions.

The template used for this project attempts to provide an easy-to-use format to capture and review this information for every feature in the design.

HOWTO: The CORE-V Verification Plan Template

The following sub-sections explain each of the columns in the template spreadsheet.

Requirement Location

This is a pointer to the source Requirements document of the Features in question. It can be a standards document, such as the RISC-V ISA, or a micro-architecture specification. The CV32E40P introduction lists sources of documentation relevant to the CV32E40P. Every item in a Verification Plan must be attributed to one or more of these sources. Please also include a chapter or section number. Note that if you are using the CV32E40P User Manual as a reference, you must provide a release/version number as well since this document is currently in active development.

Feature

The high-level feature you are trying to verify. For example, RV32I Register-Immediate Instructions. In some cases, it may be natural to use the section header name of the reference document.

Sub-Feature

This is an optional, but often used, column. Using our previous example, ADDI is a sub-feature of RV32I Register-Immediate Instructions. If it makes sense to decompose the Feature into two or more sub-features, use this column for that. If required, add a column for sub-sub-features.

Feature Description

A summary of what the feature does. It should be a summary, not a verbatim copy-and-paste from the Requirements Document.

Verification Goals

A summary of what stimulus and/or configuration needs to be generated/checked/covered to ensure sufficient testing of the Feature. Recall the example of the addi instruction. The verification goals of that feature are:

  • Overflow is detected and flagged correctly
  • Underflow is detected and flagged correctly
  • No instruction execution side-effects (e.g. unexpected GPR changes, unexpected condition codes)
  • Program counter updates appropriately

Pass/Fail Criteria

Here we attempt to answer the question, "how will the testbench know the test passed?". There are several methods that are typically used in CORE-V projects, and it is common to use more than one for a given item in a Verification Plan.

  • Self Checking: A self-checking test encodes the correct result directly into the testcase and compares what the DUT does against this "known good" outcome. See the RISCY Testcases section of the Verification Strategy for an example of this. This strategy is used extensively by the RISC-V Foundation Compliance tests.
  • Signature Check: This is a more sophisticated form of a self-checking test. The results of the test are used to calculate a signature, which is compared against a "known good" signature. This strategy is also used by the RISC-V Foundation Compliance tests.
  • Check against ISS: Here, the testcase does not "know" the correct outcome of the test; it merely provides stimulus to the DUT. The pass/fail criteria is determined by a verification environment (testbench) component, in this case the Instruction Set Simulator (ISS), and the verification environment must compare the actual results from the DUT against the expected results from the ISS (or other reference model); a sketch follows this list. When practical, this is the preferred approach because it makes testcase maintenance simpler.
  • Check against RM: The pass/fail criteria is determined by a Reference Model (RM). An RM is a verification environment (testbench) component which models some or all of the DUT behavior. In this context RM is a more generic term for ISS. Use this criteria when you suspect that the ISS will not model the specific behavior needed.
  • Assertion Check: Failure is detected by an assertion, typically coded in SVA.
  • Other: If one of the above Pass/Fail Criteria does not fit your needs, specify it here.
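To make the "Check against ISS" criteria concrete, here is a minimal step-and-compare sketch. All names here are assumptions for illustration (the retirement struct, its fields, and the scoreboard class); they are not the actual core-v-verif components.

    // Hypothetical step-and-compare check against an ISS prediction.
    typedef struct packed {
      logic [31:0] pc;        // retired program counter
      logic [4:0]  rd_addr;   // destination register written
      logic [31:0] rd_wdata;  // value written to the register file
    } retire_item_t;

    class iss_scoreboard;
      // Compare one retired DUT instruction against the ISS prediction
      function void check(retire_item_t dut, retire_item_t iss);
        if (dut.pc != iss.pc)
          $error("PC mismatch: dut=%h iss=%h", dut.pc, iss.pc);
        else if (dut.rd_addr != iss.rd_addr || dut.rd_wdata != iss.rd_wdata)
          $error("GPR writeback mismatch at pc=%h", dut.pc);
      endfunction
    endclass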

Test Type

Choose one or more of the following:

  • RISC-V Compliance: a self-checking ISA compliance testcase from the RISC-V Foundation.
  • OpenHW Compliance: OpenHW Compliance is compliance testing of the custom XPULP instructions supported by CV32E40P. For these we need to generate our own compliance test-suite. It is not yet known if these can be randomly generated or will require a self-checking ISA compliance testcase from the OpenHW Group. Note that if they are to be randomly generated then the ISS will need to be able to process XPULP instructions.
  • Directed Self-Checking: a directed (non-random) self-checking testcase from the OpenHW Group that is not specifically targeting ISA compliance.
  • Directed Non-Self-Checking: a directed (non-random) non-self-checking testcase from the OpenHW Group that is not specifically targeting ISA compliance. Note that these tests assume that the pass/fail criteria will be "Check against ISS" (or other reference model).
  • Constrained-Random: a constrained-random testcase. Typically the stimulus for these will come from the Google random instruction stream generator. Note that by definition these tests cannot be self-checking.
  • Other: If one of the above Test Types does not fit your needs, specify it here.

Coverage Method

How will we know that the Feature is verified (covered)? There are several choices here:

  • Testcase: if the testcase was run, the Feature was tested.
  • Functional Coverage: the testbench supports SystemVerilog covergroups that measure stimulus/configuration/response conditions to show that the Feature was tested. This is the preferred method of coverage.
  • Assertion Coverage: an alternate form of functional coverage, implemented as SVA cover properties (see the sketch after this list).
  • Code Coverage: the Feature is deemed to be tested when the specific conditions in the RTL have been exercised.
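As an illustration of the assertion-based methods, the sketch below pairs an Assertion Check with an Assertion Coverage property around a behavior noted earlier: reads of x0 always return zero. The module and signal names are hypothetical, not taken from the CV32E40P RTL.

    // Hypothetical SVA sketch; signal names are illustrative only.
    module gpr_sva (
      input logic        clk_i,
      input logic        rst_ni,
      input logic        rs1_read,   // register-file read strobe
      input logic [4:0]  rs1_addr,
      input logic [31:0] rs1_rdata
    );
      // Assertion Check: reading x0 must always return zero
      a_x0_zero: assert property (@(posedge clk_i) disable iff (!rst_ni)
        (rs1_read && rs1_addr == 5'd0) |-> (rs1_rdata == 32'h0));

      // Assertion Coverage: record that a read of x0 was exercised
      c_x0_read: cover property (@(posedge clk_i) disable iff (!rst_ni)
        rs1_read && rs1_addr == 5'd0);
    endmodule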

Link to Coverage

This field is used to link the Feature to coverage data generated in Regression. Leave this blank for now as this information is tool dependent.