“Between the dawn of civilization and 2003, we only created five exabytes; now
we’re creating that amount every two days. By 2020, that figure is predicted to
sit at 53 zettabytes (53 trillion gigabytes) — an increase of 50 times.” – Hal Varian, Chief Economist at Google.
We are in the middle of a data revolution: we are generating data faster than we are building the storage to hold it. We are embedding intelligence in every possible place and every possible industry, and then looking for intelligent algorithms to make sense of this data and support meaningful decisions.
The most common devices for storing data have been Hard Disk Drives (HDDs), but their popularity is on the decline, and Solid State Drives (SSDs) are nudging HDDs out. HDDs used the Serial ATA (SATA) host interface to connect to the Advanced Host Controller Interface (AHCI), implemented as the Host Bus Adapter (HBA) on the host. Moving from some three decades of electromechanical HDDs to semiconductor-based SSDs is revolutionary, but a few key elements are needed to realize the full potential of SSDs.
To improve the throughput of SSDs, it makes architectural sense to move the SSD closer to the host: removing the layers between the storage device and the host, and attaching the device to PCIe® directly rather than to SATA via the South Bridge or Platform Controller Hub (PCH). PCIe provides the needed throughput and enables scalability, as it supports up to 1 GB/s per lane with software-transparent multi-lane support.
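The scalability claim is easy to see with back-of-the-envelope arithmetic. The sketch below (not from the article) assumes roughly 1 GB/s of usable bandwidth per lane, as stated above, and shows how wider links multiply aggregate throughput:

```python
# Back-of-the-envelope sketch of PCIe link scaling, assuming roughly
# 1 GB/s of usable bandwidth per lane (as the text above states).
PER_LANE_GBPS = 1.0  # assumed usable GB/s per lane

def aggregate_throughput(lanes):
    """Usable throughput of an xN PCIe link, in GB/s."""
    return lanes * PER_LANE_GBPS

for lanes in (1, 2, 4, 8):
    print(f"x{lanes} link: {aggregate_throughput(lanes):.0f} GB/s")
```

An x4 link, common for NVMe SSDs, thus offers several times the bandwidth of a single SATA connection, and widening the link requires no software changes.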
We now have PCIe-based SSDs using AHCI on the host side. But AHCI was developed for HDDs, and its feature set does not scale for PCIe-based SSDs. So the industry formed a consortium and developed Non-Volatile Memory Express (NVMe), a high-performance, efficient, and scalable host interface for enterprise and client PCIe-based SSDs.
There are plenty of resources on http://www.nvmexpress.org/ that summarize and elaborate the key features of NVMe. Implementing those features on both the host and the device side involves considerable complexity. We also know that one of the key technology trends is the gap between design productivity and verification productivity, with verification productivity trying to catch up. So how do we enable faster verification closure of NVMe-based SSDs?
Knowing some of the potential pitfalls in verifying NVMe SSDs helps us create an exhaustive verification plan. Some common pitfalls highlighted in an article by Saurabh Sharma from Mentor, A Siemens Business, are:
• Lack of comprehensive verification of register properties
• Circular queues and the various relationships between Submission and Completion Queues
• Support for PCIe SR-IOV, Virtual Functions, and their Namespaces
• Interrupt and mask handling complexity
• Understanding/misunderstanding of Metadata concepts
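The circular-queue pitfall is worth a closer look, since head/tail wraparound and the Completion Queue phase tag are frequent sources of bugs. The minimal model below is an illustrative sketch, not code from the NVMe specification or any verification IP: it captures the NVMe convention that a queue is full when one slot remains, and that the phase tag the host expects flips on every pass around the Completion Queue.

```python
class CircularQueue:
    """Toy model of an NVMe-style circular queue (names are illustrative)."""

    def __init__(self, size):
        self.size = size  # number of entries in the ring
        self.head = 0     # consumer index
        self.tail = 0     # producer index

    def is_empty(self):
        # Empty when the consumer has caught up with the producer.
        return self.head == self.tail

    def is_full(self):
        # NVMe convention: Full when advancing the tail would equal the
        # head, so one slot always stays unused to disambiguate from Empty.
        return (self.tail + 1) % self.size == self.head

    def push(self):
        if self.is_full():
            raise RuntimeError("queue full")
        self.tail = (self.tail + 1) % self.size  # producer rings doorbell

    def pop(self):
        if self.is_empty():
            raise RuntimeError("queue empty")
        self.head = (self.head + 1) % self.size  # consumer updates head


def expected_phase(entries_consumed, cq_size):
    """Phase tag the host should expect in the next Completion Queue entry:
    it starts at 1 and inverts each time the controller wraps the queue."""
    return 1 - (entries_consumed // cq_size) % 2
```

A verification plan should hit exactly these corners: pushing into a full queue, popping from an empty one, wraparound of both indices, and the phase inversion after each full pass, since a host that checks the wrong phase value will miss or re-consume completions.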
This article also discusses what is needed in an NVMe verification IP to address these pitfalls.
Saurabh has also written an article in Verification Horizons, a quarterly verification publication, which discusses some key features to look for when selecting an NVMe Verification IP.
Having the right capabilities in a verification IP and knowing some of the common pitfalls of verifying an NVMe SSD should help you kick-start your effort and reduce the verification gap while verifying an NVMe SSD.