
A REPORT ON

VERIFICATION AND FIRMWARE DEVELOPMENT OF THE CORE PARTITION IN THE FLASH COMPONENTS DIVISION
BY

Name(s) of the Student(s)        ID. No.(s)

Aditya Naik                      2009B5A3468G
Tulika Garg                      2009B5A3390P

LSI INDIA RESEARCH AND DEVELOPMENT LTD (PUNE)

A Practice School-II station of

BIRLA INSTITUTE OF TECHNOLOGY & SCIENCE, PILANI

(November, 2013)


BIRLA INSTITUTE OF TECHNOLOGY AND SCIENCE, PILANI (RAJASTHAN)
Practice School Division

Station: LSI India Research and Development        Centre: Pune
Duration: 5 months        Date of Start: 11th July 2013
Date of Submission: 26th November 2013

Title of the Project: VERIFICATION AND FIRMWARE DEVELOPMENT OF THE CORE PARTITION IN THE FLASH COMPONENTS DIVISION

ID No./Name(s)/Discipline(s):

Aditya Naik (2009B5A3468G): B.E. (Hons.) Electrical and Electronics & M.Sc. Physics
Tulika Garg (2009B5A3390P): M.Sc. Physics & B.E. (Hons.) Electrical and Electronics

Name(s) and designation(s) of the mentors: 1) Mr. Pratik Vasavda (Manager, Firmware Verification, FCD)

2) Mr. Ninad Pachpute (Verification Engineer)

Name(s) of the PS Faculty: Ms. A. Vijayalakshmi

Key Words: Verification, test cases.

Project Areas: Firmware Verification

Abstract: Flash storage has received a major boost in recent years from advances in digital signal processing and compression techniques, which have greatly increased the durability and longevity of flash devices. This report first introduces LSI Corporation, a leader in developing and providing storage and networking solutions. It then gives a brief introduction to flash storage technology and the solid state drives in current markets, followed by the verification techniques used by verification engineers. The file repository system used to check out and modify data is explained in detail. We also cover testing methodologies such as UVM and give a succinct but thorough reference to the various granularities involved in SSDs, along with the software tools and programming languages used across the research departments. Lastly, we describe the three test cases on which we worked. These test cases were challenging because the block assigned to us was new and had no previous test case documentation for reference. Accordingly, we also explain aspects such as interrupt signaling and direct memory access in that section.

Signature(s) of Student(s)

Signature of PS Faculty

Date

Date

Acknowledgements

Foremost, we would like to express our sincere gratitude to our instructor Ms. A. Vijayalakshmi for her continuous support of our practice school study and research, and for her patience, motivation and enthusiasm. Her guidance helped us throughout the research for and writing of this report. We could not have imagined having a better advisor and mentor for our PS2 study. Besides our instructor, we would like to thank our PS project manager, Mr. Pratik Vasavda, for his encouragement, insightful comments, and total support during our transition to the Pune branch. Our sincere thanks also go to Mr. Ninad Pachpute for offering to mentor us for this internship in his group, teaching us whatever we know about verification and C programming, and leading us towards working on this exciting project. We thank our fellow PS mates at LSI, Trivid Singh, Harsh Vardhan and Shubham Rao, for the stimulating discussions, for the days we worked together before deadlines, and for all the fun we have had in the last two months. We also thank our friends in Pune: Dushyant Varshney, Sharvari Pathak and Aafreen Hasnain. In particular, we are grateful to Aviral Agarwal for enlightening us with a first glance of research.

Contents
1. Introduction
2. LSI Company Overview
3. History of Flash
4. ASIC Flow
   4.1. Partitioning
   4.2. Placement
   4.3. Clock tree synthesis
   4.4. Routing
   4.5. Physical Verification
5. SSDs
6. Functional Verification
7. Revision Control System
8. UVM
9. Software and Languages
10. Test case for Interrupt signal
11. Test case for Halt signal
12. Test case for Direct Memory Access data transfer
13. Conclusion
14. References

1. Introduction
Verification flow starts with understanding the specification of the chip/block under verification. Once the specification is understood, a test cases document is prepared, which documents all possible test cases. Once this document covers 70-80 percent of the functionality, a test bench architecture document is prepared. In the past, the architecture document was prepared first and the test cases document next. There is a drawback with this style: the test cases document may call for a particular functionality to be verified that the test bench does not support, because the architecture document was prepared before the test cases document. If we have a test cases document to refer to, then writing an architecture document becomes much easier, as we know for sure what is expected from the test bench.

Figure 1: The verification flow

Verification planning is a very important part of verification, irrespective of the size of the design. Since about 70% of the design cycle time is spent on verification, proper verification planning allows some of the issues faced during the later stages of the design cycle to be avoided early. A verification plan is described as a set of goals that need to be verified and would consist of:

1. Functional requirements
2. Design requirements
3. Coverage goals
4. Embedded firmware requirements

In electronic systems and computing, firmware is the combination of persistent memory and the program code and data stored in it. Typical examples of devices containing firmware are embedded systems (such as traffic lights, consumer appliances, and digital watches), computers, computer peripherals, mobile phones, and digital cameras. The firmware contained in these devices provides the control program for the device. Firmware is held in non-volatile memory devices such as ROM, EPROM, or flash memory. Changing the firmware of a device may rarely or never be done during its economic lifetime; some firmware memory devices are permanently installed and cannot be changed after manufacture. Common reasons for updating firmware include fixing bugs or adding features to the device. This may require physically changing ROM integrated circuits, or reprogramming flash memory with a special procedure. Firmware such as the ROM BIOS of a personal computer may contain only the elementary basic functions of a device and may only provide services to higher-level software. Firmware such as the program of an embedded system may be the only program that will run on the system and provide all of its functions. Before integrated circuits, other firmware devices included a discrete semiconductor diode matrix. The Apollo guidance computer had firmware consisting of a specially manufactured core memory plane, called "core rope memory", where data was stored by physically threading wires through (1) or around (0) the core storing each data bit.

Ascher Opler coined the term "firmware" in a 1967 Datamation article. Originally, it meant the contents of a writable control store (a small specialized high speed memory), containing microcode that defined and implemented the computer's instruction set, and that could be reloaded to specialize or modify the instructions that the central processing unit (CPU) could execute. As originally used, firmware contrasted with hardware (the CPU itself) and software (normal instructions executing on a CPU). It was not composed of CPU machine instructions, but of lower-level microcode involved in the implementation of machine instructions. It existed on the boundary between hardware and software; thus the name "firmware". Still later, popular usage extended the word "firmware" to denote anything ROM-resident, including processor machine-instructions for BIOS, bootstrap loaders, or specialized applications. Until the mid-1990s, updating firmware typically involved replacing a storage medium containing firmware; usually a socketed ROM. Flash memory allows firmware to be updated without physically removing an integrated circuit from the system. An error during the upgrade process may make the device non-functional, or "bricked".

2. LSI Company Overview


LSI Corporation was founded under the name LSI Logic in 1981 in Milpitas, CA by Wilfred Corrigan as a semiconductor ASIC company after he left as CEO of Fairchild Semiconductor in 1979. The other three founders were Bill O'Meara (marketing and sales), Rob Walker (engineering) and Mitchell "Mick" Bohn (finance). The firm was initially funded with $6 million from noted venture capitalists including Sequoia Capital. A second round of financing for an additional $16M was completed in March 1982. The firm went public as LSI on NASDAQ on Friday, May 13, 1983, netting $153M, the largest tech IPO up to that date.

LSI built its own wafer fabrication, packaging and testing facilities in Milpitas as well as utilizing excess capacity at Toshiba for manufacturing, an early example of the fabless semiconductor manufacturing model. LSI Logic expanded worldwide by establishing stand-alone affiliate companies in Japan, Europe and Canada. Nihon LSI Logic, based in Tokyo, Japan, was financed in April 1984 through a $20M private offering. LSI Logic Ltd, based in Bracknell, UK, was financed in June 1984 by an additional $20M private placement, and LSI Logic Canada, based in Calgary, Alberta, went public on the Toronto Stock Exchange. Each affiliate sought to develop independent manufacturing facilities through alliances, purchases or independent development. In 1985, the firm entered into a joint venture with Kawasaki Steel, Japan's third largest steel manufacturer, to build a $100M wafer fabrication plant in Tsukuba, Japan.

The firm developed the industry's first line of ASIC products, which let customers create custom 'gate array' chips using leading-edge proprietary CAD tools (called LDS, for 'Logic Design System'). The initial product lines were based on high-speed emitter-coupled logic technology but soon switched over to high-speed complementary metal-oxide-semiconductor (CMOS) technology, which offered much lower cost and lower power requirements to system designers. Over time, LSI Logic increased its product offerings and IP library through pioneering efforts in the areas of standard cells, structured arrays, digital signal processors and microprocessors (MIPS and SPARC) as it moved toward the complete design and development of "System on a Chip" solutions. As the ASIC market matured, third-party design tools became preeminent, and with the very high cost of fab development the foundry fab model gained momentum; LSI returned in 2005 to a fabless semiconductor business.

During its ASIC years, LSI Logic invested in core technologies such as microprocessors, communication devices, and video compression devices such as MPEG. These core technologies have been used together with acquisitions to better place the firm as an intellectual property owner. In 1998 it bought Symbios Logic from Hyundai. In March 2001 LSI acquired C-Cube for $878M in stock. In 2006, the firm celebrated its 25th year of business. In 2005, Abhi Talwalkar joined the company as President and CEO, and was also appointed to the Board of Directors. Talwalkar was an executive at Intel Corporation before joining LSI. Since joining the company, Abhi Talwalkar has aligned the company's products, through M&A, divestitures and internal development, to address the "data deluge", the massive amount of data created in society. LSI offers products for datacenters, mobile networks and client computing. In 2006, LSI Logic sold the Gresham, Oregon Design & Manufacturing Facility to ON Semiconductor.

3. History of Flash
Flash is an extension of the floating gate method of manufacturing nonvolatile memory. The first sort of floating gate memory was the Erasable Programmable Read Only Memory (EPROM), invented in the 1960s but not developed until the 1970s. In EPROM, as in Dynamic Random Access Memory (DRAM) and Read Only Memory (ROM), each memory bit was represented by a transistor.

It helps to understand how transistors work if you want to understand these circuits. In a Field-Effect Transistor (FET), the current flows from the source to the drain. The gate controls how much current flows through the channel (the area between the source and drain). If the gate is unbiased, the current flows through the channel relatively freely. If a bias is applied to the gate, then the channel depletes, that is, the carriers are moved out of part of the channel, making it seem narrower and limiting the current flow. This principle is key to all FET-based technologies: Metal-Oxide Semiconductor (MOS), Complementary Metal-Oxide Semiconductor (CMOS), Bipolar Complementary Metal-Oxide Semiconductor (BiCMOS) and so on. The ability to turn a circuit's current flow on and off allows individual bits to be routed to a data bus.

In a DRAM, each memory bit's transistor uses a capacitor to store the bit. The case is different with a ROM, where each bit's transistor uses a short or open circuit to represent a bit (either programmed at manufacture by a mask, or afterward by a fuse that can be blown). In EPROM, the transistor itself looks as if it contains something like a capacitor, but it is actually a second gate complementing the control gate of each bit's transistor. The bit's transistor actually has two gates: one that is connected to the bit line, and one that is connected to nothing, so it floats. The technology is therefore known as floating gate. This concept is used by four technologies: EPROM, Electrically Erasable Programmable Read Only Memory (EEPROM), NOR, and NAND.


4. ASIC Flow

High level design

High level design comprises three main steps:

1) Specification capture: The system design process starts with the specification model written by the user to specify the desired system functionality. It forms the input to the series of exploration and refinement steps in the SoC design methodology. Moreover, the specification model defines the granularity for exploration through the size of the leaf behaviors. It exposes all available parallelism and uses hierarchy to group related functionality and manage complexity.

2) Design capture: After the circuit design is captured in a schematic, most EDA tools allow the design to be simulated. Schematic capture not only involves entering the circuits into the CAD system, but also generally calls for decisions that may seem more appropriate for later in the design, such as package choice. Although you may be able to change the package later, many PCB CAD systems ask you to choose both the part and the package when placing it into the schematic capture program.


3) HW/SW partitioning and IP selection: One of the most crucial design steps in high level design is partitioning, i.e. deciding which components of the system should be realized in hardware and which ones in software. Clearly, this is the first step in which the optimal tradeoff between cost and performance of the whole system is determined. Traditionally, partitioning was carried out manually. However, as the systems to be designed have grown, manual partitioning has become infeasible, and many research efforts have been undertaken to automate partitioning as much as possible.

RTL design

RTL is used in the logic design phase of the integrated circuit design cycle. An RTL description is usually converted to a gate-level description of the circuit by a logic synthesis tool. The synthesis results are then used by placement and routing tools to create a physical layout. Logic simulation tools may use a design's RTL description to verify its correctness.

System, Timing and Logic verification

Verification includes generating test benches and running simulations to verify functionality. Assertion-based verification is used at this stage. Automated test bench generation is a new field, and there are many scripts in Perl, Tcl and Python that aid the generation of these test benches. Automated generation of a generic gate description from the RTL description is something that we receive here in Pune from the San Jose office. Logic optimization for speed and area, and timing analysis, are done in the LSI India offices. More about this is described below in the Physical Design section.

Verification is intended to check that a product, service, or system (or portion thereof, or set thereof) meets a set of design specifications. In the development phase, verification procedures involve performing special tests to model or simulate a portion, or the entirety, of a product, service or system, then performing a review or analysis of the modeling results. In the post-development phase, verification procedures involve regularly repeating tests devised specifically to ensure that the product, service, or system continues to meet the initial design requirements, specifications, and regulations as time progresses. It is a process that is used to evaluate whether a product, service, or system complies with regulations, specifications, or conditions imposed at the start of a development phase. Verification can be in development, scale-up, or production. This is often an internal process.


Physical Design

The main steps in the ASIC physical design flow are:

1. Design netlist (after synthesis)
2. Floorplanning
3. Partitioning
4. Placement
5. Clock-tree synthesis (CTS)
6. Routing
7. Physical verification
8. GDSII generation

These steps are just the basics; there are detailed flows that are used depending on the methodology/technology. Some of the tools/software used in back-end design are:

- Cadence (SoC Encounter, VoltageStorm, NanoRoute)
- Synopsys (Design Compiler, IC Compiler)
- Magma (Blast Fusion, etc.)
- Mentor Graphics (Olympus-SoC, IC Station, Calibre)

The ASIC physical design flow uses the technology libraries that are provided by the fabrication houses. Technologies are commonly classified according to minimal feature size. Standard sizes, in order of miniaturization, are 2 µm, 1 µm, 0.5 µm, 0.35 µm, 0.25 µm, 180 nm, 130 nm, 90 nm, 65 nm, 45 nm, 28 nm, 22 nm, 18 nm, 14 nm, etc. They may also be classified according to major manufacturing approaches: n-well process, twin-well process, SOI process, etc.

Figure 2: ASIC flow

Design Netlist

Physical design is based on a netlist, which is the end result of the synthesis process. Synthesis converts the RTL design, usually coded in VHDL or Verilog HDL, to gate-level descriptions which the next set of tools can read and understand. This netlist contains information on the cells used, their interconnections, area used, and other details. Typical synthesis tools are:


- Cadence RTL Compiler / BuildGates / Physically Knowledgeable Synthesis (PKS)
- Synopsys Design Compiler

During the synthesis process, constraints are applied to ensure that the design meets the required functionality and speed (specifications). Only after the netlist is verified for functionality and timing is it sent into the physical design flow.

Physical Design Steps


Floorplanning
Floorplanning is an essential design step for a hierarchical, building-module design methodology. Floorplanning provides early feedback: it evaluates architectural decisions, estimates chip area, and estimates the delay and congestion caused by wiring. As technology advances, design complexity is increasing and circuit sizes are getting larger. To cope with the increasing design complexity, hierarchical design and intellectual property (IP) modules are widely used. This trend makes floorplanning much more critical to the quality of a very large scale integration (VLSI) design than ever. Based on the area of the design and the hierarchy, a suitable floorplan is decided upon. Floorplanning takes into account the macros used in the design, memory, other IP cores and their placement needs, the routing possibilities, and also the area of the entire design. Floorplanning also decides the IO structure and aspect ratio of the design. A bad floorplan will lead to wastage of die area and routing congestion.

In many design methodologies, area and speed are considered to be things that should be traded off against each other. The reason this is so is probably that there are limited routing resources, and the more routing resources that are used, the slower the design will operate. Optimizing for minimum area allows the design to use fewer resources and also allows the sections of the design to be closer together. This leads to shorter interconnect distances, fewer routing resources used, faster end-to-end signal paths, and even faster and more consistent place and route times. Done correctly, there are no negatives to floorplanning.

As a general rule, data-path sections benefit most from floorplanning, while random logic, state machines and other non-structured logic can safely be left to the placer section of the place and route software. Data paths are typically the areas of your design where multiple bits are processed in parallel, with each bit being modified the same way, with perhaps some influence from adjacent bits. Example structures that make up data paths are adders, subtractors, counters, registers, and muxes.

Partitioning
Partitioning is the process of dividing the chip into small blocks. This is done mainly to separate different functional blocks and also to make placement and routing easier. Partitioning can be done in the RTL design phase, when the design engineer partitions the entire design into sub-blocks and then proceeds to design each module. These modules are linked together in the main module, called the TOP LEVEL module. This kind of partitioning is commonly referred to as logical partitioning.

Placement
Before the start of placement optimization all Wire Load Models (WLM) are removed. Placement uses RC values from Virtual Route (VR) to calculate timing. VR is the shortest Manhattan distance between two pins. VR RCs are more accurate than WLM RCs. Placement is performed in four optimization phases:

1. Pre-placement optimization
2. In-placement optimization
3. Post-placement optimization (PPO) before clock tree synthesis (CTS)
4. PPO after CTS

Pre-placement optimization optimizes the netlist before placement; high-fanout nets (HFNs) are collapsed. It can also downsize cells. In-placement optimization re-optimizes the logic based on VR. This can perform cell sizing, cell moving, cell bypassing, net splitting, gate duplication, buffer insertion and area recovery. Optimization performs iterations of setup fixing, incremental timing analysis and congestion-driven placement. Post-placement optimization before CTS performs netlist optimization with ideal clocks. It can fix setup, hold, and max trans/cap violations. It can do placement optimization based on global routing. It redoes HFN synthesis. Post-placement optimization after CTS optimizes timing with the propagated clock. It tries to preserve clock skew.

Clock tree synthesis


The goal of clock tree synthesis (CTS) is to minimize skew and insertion delay. The clock is not propagated before CTS, as shown in the picture. After CTS, the hold slack should improve. The clock tree begins at the clock source defined in the .sdc file and ends at the stop pins of the flops. There are two types of stop pins, known as ignore pins and sync pins. Don't-touch circuits and pins in the front end (logic synthesis) are treated as ignore circuits or pins in the back end (physical synthesis). Ignore pins are ignored for timing analysis. If the clock is divided, then separate skew analysis is necessary.

Routing
There are two types of routing in the physical design process, global routing and detailed routing. Global routing allocates routing resources that are used for connections. Detailed routing assigns routes to specific metal layers and routing tracks within the global routing resources.


Physical Verification
Physical verification checks the correctness of the generated layout design. This includes verifying that the layout:

- Complies with all technology requirements: Design Rule Checking (DRC)
- Is consistent with the original netlist: Layout vs. Schematic (LVS)
- Has no antenna effects: Antenna Rule Checking
- Complies with all electrical requirements: Electrical Rule Checking (ERC)

5. SSDs
A solid-state drive (SSD) (also known as a solid-state disk or electronic disk,[4] though it contains no actual "disk" of any kind, nor motors to "drive" the disks) is a data storage device that uses integrated circuit assemblies as memory to store data persistently. SSD technology uses electronic interfaces compatible with traditional block input/output (I/O) hard disk drives. SSDs have no moving mechanical components. This distinguishes them from traditional electromechanical magnetic disks such as hard disk drives (HDDs) or floppy disks, which contain spinning disks and movable read/write heads. Compared with electromechanical disks, SSDs are typically more resistant to physical shock, run more quietly, and have lower access time and latency. However, while the price of SSDs continued to decline in 2012, SSDs are still about 7 to 8 times more expensive per unit of storage than HDDs.

Many SSDs use I/O interfaces developed for hard disk drives, thus permitting simple replacement in common applications. As of 2010, most SSDs use NAND-based flash memory, which retains data without power. For applications requiring fast access, but not necessarily data persistence after power loss, SSDs may be constructed from random-access memory (RAM). Such devices may employ separate power sources, such as batteries, to maintain data after power loss. Hybrid drives or solid-state hybrid drives (SSHDs) combine the features of SSDs and HDDs in the same unit, containing a large hard disk drive and an SSD cache to improve the performance of frequently accessed data. These devices may offer near-SSD performance for many applications.


Figure 3: PCI-attached IO accelerator SSD

6. Functional Verification
Functional verification, in electronic design automation, is the task of verifying that the logic design conforms to specification. In everyday terms, functional verification attempts to answer the question "Does this proposed design do what is intended?" This is a complex task, and takes the majority of time and effort in most large electronic system design projects. Functional verification is a part of the more encompassing design verification, which, besides functional verification, considers non-functional aspects like timing, layout and power.

Functional verification is very difficult because of the sheer volume of possible test cases that exist in even a simple design. Frequently there are more than 10^80 possible tests to comprehensively verify a design, a number that is impossible to cover in a lifetime. This effort is equivalent to program verification and is NP-hard or even worse, and no solution has been found that works well in all cases. However, it can be attacked by many methods. None of them are perfect, but each can be helpful in certain circumstances:

- Logic simulation simulates the logic before it is built.
- Simulation acceleration applies special-purpose hardware to the logic simulation problem.
- Emulation builds a version of the system using programmable logic. This is expensive, and still much slower than the real hardware, but orders of magnitude faster than simulation. It can be used, for example, to boot the operating system on a processor.
- Formal verification attempts to prove mathematically that certain requirements (also expressed formally) are met, or that certain undesired behaviors (such as deadlock) cannot occur.
- Intelligent verification uses automation to adapt the testbench to changes in the register transfer level code.

HDL-specific versions of lint, and other heuristics, are used to find common problems. Simulation-based verification (also called 'dynamic verification') is widely used to "simulate" the design, since this method scales up very easily. Stimulus is provided to exercise each line in the HDL code. A test bench is built to functionally verify the design by providing meaningful scenarios to check that, given certain inputs, the design performs to specification. A simulation environment is typically composed of several types of components:

The generator generates input vectors that are used to search for anomalies that exist between the intent (specifications) and the implementation (HDL code). This type of generator utilizes an NP-complete type of SAT solver that can be computationally expensive. Other types of generators include manually created vectors, graph-based generators (GBMs) and proprietary generators. Modern generators create directed-random and random stimuli that are statistically driven to verify random parts of the design. The randomness is important to achieve a high distribution over the huge space of available input stimuli. To this end, users of these generators intentionally under-specify the requirements for the generated tests. It is the role of the generator to randomly fill this gap. This mechanism allows the generator to create inputs that reveal bugs not being searched for directly by the user. Generators also bias the stimuli toward design corner cases to further stress the logic. Biasing and randomness serve different goals and there are tradeoffs between them, hence different generators have a different mix of these characteristics. Since the input for the design must be valid (legal) and many targets (such as biasing) should be maintained, many generators use the Constraint Satisfaction Problem (CSP) technique to solve the complex testing requirements. The legality of the design inputs and the biasing arsenal are modeled. The model-based generators use this model to produce the correct stimuli for the target design.

The drivers translate the stimuli produced by the generator into the actual inputs for the design under verification. Generators create inputs at a high level of abstraction, namely as transactions or assembly language. The drivers convert this input into actual design inputs as defined in the specification of the design's interface.

The simulator produces the outputs of the design, based on the design's current state (the state of the flip-flops) and the injected inputs. The simulator has a description of the design netlist. This description is created by synthesizing the HDL to a low-level gate netlist.

The monitor converts the state of the design and its outputs to a transaction abstraction level so they can be stored in a 'scoreboard' database to be checked later on.

The checker validates that the contents of the 'scoreboard' are legal. There are cases where the generator creates expected results in addition to the inputs. In these cases, the checker must validate that the actual results match the expected ones.

The arbitration manager manages all the above components together.

Different coverage metrics are defined to assess that the design has been adequately exercised. These include functional coverage (has every functionality of the design been exercised?), statement coverage (has each line of HDL been exercised?), and branch coverage (has each direction of every branch been exercised?).
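To make the monitor/scoreboard/checker roles concrete, the following is a minimal, simplified sketch in C. It is illustrative only: the transaction_t type, the scoreboard depth and the function names are hypothetical, and a real environment at this level would normally be written in SystemVerilog rather than C.

#include <stdio.h>

/* Hypothetical transaction type: what the monitor captures from the DUT
 * and what the generator/reference model predicts as the expected result. */
typedef struct {
    unsigned int addr;
    unsigned int data;
} transaction_t;

#define SB_DEPTH 64

/* A minimal scoreboard: a FIFO of expected transactions. */
typedef struct {
    transaction_t expected[SB_DEPTH];
    int head, tail;
} scoreboard_t;

static void sb_push_expected(scoreboard_t *sb, transaction_t t)
{
    sb->expected[sb->tail++ % SB_DEPTH] = t;   /* reference-model side */
}

/* Checker: compare the transaction observed by the monitor against the
 * oldest expected transaction. Returns 0 on match, non-zero on mismatch. */
static int sb_check_actual(scoreboard_t *sb, transaction_t actual)
{
    transaction_t exp = sb->expected[sb->head++ % SB_DEPTH];
    if (exp.addr != actual.addr || exp.data != actual.data) {
        printf("MISMATCH at addr 0x%x: expected 0x%x, got 0x%x\n",
               actual.addr, exp.data, actual.data);
        return 1;
    }
    return 0;
}

int main(void)
{
    scoreboard_t sb = { .head = 0, .tail = 0 };

    /* Generator predicts a write of 0xCAFE to address 0x10. */
    sb_push_expected(&sb, (transaction_t){ .addr = 0x10, .data = 0xCAFE });

    /* Monitor later observes what the simulated design actually did. */
    transaction_t observed = { .addr = 0x10, .data = 0xCAFE };

    return sb_check_actual(&sb, observed) ? 1 : 0;
}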


7. Revision Control System


The Revision Control System (RCS) is a software implementation of revision control that automates the storing, retrieval, logging, identification, and merging of revisions. RCS is useful for text that is revised frequently, for example programs, documentation, procedural graphics, papers, and form letters. RCS is also capable of handling binary files, though with reduced efficiency. Revisions are stored with the aid of the diff utility.

Development

RCS was first released in 1982 by Walter F. Tichy while he was at Purdue University, as a free and more evolved alternative to the then-popular Source Code Control System (SCCS). It is now part of the GNU Project, which still maintains it.
Mode of Operation

RCS operates only on single files; it has no way of working with an entire project. Although it provides branching for individual files, the version syntax is cumbersome. Instead of using branches, many teams just use the built-in locking mechanism and work on a single head branch.

Advantages

In single-user scenarios, such as server configuration files or automation scripts, RCS may still be the preferred revision control tool, as it is simple and no central repository needs to be accessible for it to save revisions. This makes it a more reliable tool when the system is in dire maintenance conditions. Additionally, the saved backup files are easily visible to the administrator, so the operation is straightforward. However, there are no built-in tamper protection mechanisms (that is, users who can use the RCS tools to version a file are also, by design, able to directly manipulate the corresponding version control file), and this leads some security-conscious administrators to consider client/server version control systems that restrict users' ability to alter the version control files.

8. UVM
The Universal Verification Methodology (UVM) is a standardized methodology for verifying integrated circuit designs. UVM is derived mainly from the OVM (Open Verification Methodology), which was, to a large part, based on the eRM (e Reuse Methodology) for the e verification language developed by Verisity Design in 2001. The UVM class library brings much automation to the SystemVerilog language, such as sequences and data automation features (packing, copy, compare), and, unlike the previous methodologies developed independently by the simulator vendors, is an Accellera standard with support from multiple vendors: Aldec, Cadence, Mentor, and Synopsys.

The unified verification methodology addresses the most critical verification challenges while maximizing overall speed and efficiency. The methodology is centered on the creation and use of a transaction-level golden representation of the design and verification environment called the functional virtual prototype (FVP). The methodology encompasses all phases of the verification process and crosses all design domains. Utilizing the unified verification methodology will enable development teams to attain their verification goals on time. While focusing on the verification of systems-on-a-chip (SoCs), the methodology also encompasses verification of individual subsystems. Large application-specific digital or analog designs are often first developed as standalone components and later used as SoC subsystems. The unified verification methodology can be applied in whole to an SoC design, or in parts for more application-specific designs.

Different designs and design teams will emphasize different aspects of a methodology. The unified verification methodology will produce the greatest gains in speed and efficiency when used in a complete top-down integrated manner. It is understood that a complete top-down flow may not always be feasible for a number of different reasons. Thus the methodology is flexible in providing for both top-down and bottom-up approaches to developing subsystems, while still providing an efficient top-down verification methodology. While it may be impractical for verification teams to move directly to a top-down unified methodology from their existing methodology, this methodology provides a standard for verification teams to work towards. Readers are encouraged to read the entire document, but if the reader's focus is solely the verification of a subsystem, they can skip to those individual sections of the methodology. Readers should also note that the unified verification methodology as described in this paper is targeted at custom IC and standard ASIC development. Many of the issues described and practices presented are applicable to the verification of programmable devices such as FPGAs, but some areas differ. Verification of programmable devices will be addressed in a future revision of this paper.

The unified verification methodology was first presented in the Cadence whitepaper "It's About Time: Requirements for the Functional Verification of Nanometer-scale SoCs". The paper presents the drivers of functional verification today and identifies the fragmentation that exists throughout functional verification. The unified verification methodology directly addresses this fragmentation. This paper discusses the unified verification methodology in detail. The first sections of the paper provide an overview of the methodology, describe several key concepts, and present the requirements for a verification platform to support the methodology. The middle sections detail the methodology for verification of an SoC along with the individual subsystems. The final section details a migration path to the unified verification methodology.

9. Software and Languages


During the course of the initial training period at LSI, we learnt a few programming languages as part of the training for the actual verification and firmware development. First, we learnt the basics of C and then moved on to its more challenging aspects, such as functions, loops, data structures, etc. After getting to know the fundamentals of programming, we moved on to Verilog. This language is used by designers to design simple as well as complex circuitry. It is similar to C in that it is a top-down language in its execution. It does not, however, support object-oriented programming. For this purpose we were required to learn the extended version of Verilog called SystemVerilog. This is a superset of Verilog and supports many of the object-oriented features provided by C++, like classes, encapsulation and inheritance. As for the software, we used the gsim simulator to run simulations. We use the UNIX operating system for all the technical work. Text editors like gedit, vim and gvim were used to write the code.

10. Test case for Interrupt signal

Writing a test case involves aiming to attain a 100 percent coverage rate, so as to test all aspects of the functionality of the design under test (DUT). The first test case we worked on was one in which we had to test whether an interrupt signal was travelling from the block assigned to us to the CPU in the partition.


The process followed for this test case is similar to that followed for all other test cases by engineers around the world. Mentioned below are the rough steps taken to write any test case that will enable us to verify the functionality of a design:

- Have a strategy which clearly states the objective of the test case as well as the elements required for the test bench and the environment. Plan the course of action and remember to include all the required header files in your program.
- Refer to the previous-generation model for similar test cases, but do not blindly copy-paste it into the new design, since the new revision may contain changes.
- Do not have any function or macro in your program which exceeds roughly 150 lines. This ensures good readability and allows readers to understand the code easily.
- Check your code for missing or overly long variable names. Keep the nomenclature easy to understand.
- There should not be any warnings while compiling the code.

In this particular test case we create a scenario in which we generate an error, which in real life would be generated if there were some problem in the compression or decompression of data. This error is detected by our debug block, which then generates an interrupt signal. This signal has to travel from our block to the main CPU in the partition and trigger a halt and a non-maskable interrupt signal of its own, which instructs all the other blocks in the partition to halt. The steps followed were as follows (a simplified sketch in C is given after the list):

1. Enable the interrupt signals
2. Create an error scenario
3. Generate the interrupt signal
4. Check whether the signal is reaching the desired location
5. Verify whether the signal has propagated to the CPU and triggered the halt and the non-maskable interrupt
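As an illustration of how such a directed test might look, the following is a rough sketch in C. All register addresses, bit masks and helper routines (DBG_INT_ENABLE, write_reg, poll_for and so on) are hypothetical placeholders, not the actual FCD register map or environment API.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical register addresses; the real test environment provides
 * its own macros for register access and polling. */
#define DBG_INT_ENABLE   0x40001000u  /* enable interrupt generation in debug block */
#define DBG_ERR_INJECT   0x40001004u  /* force a compression/decompression error    */
#define DBG_INT_STATUS   0x40001008u  /* interrupt pending status                   */
#define CPU_HALT_STATUS  0x40002000u  /* partition CPU halt / NMI status            */

/* Stubs: in the real environment these perform bus accesses through the test bench. */
static void write_reg(uint32_t addr, uint32_t val) { (void)addr; (void)val; }
static uint32_t read_reg(uint32_t addr)            { (void)addr; return 0; }

/* Poll a register until (value & mask) != 0 or the timeout expires. */
static int poll_for(uint32_t addr, uint32_t mask, int timeout)
{
    while (timeout-- > 0)
        if (read_reg(addr) & mask)
            return 0;
    return -1;
}

int main(void)
{
    write_reg(DBG_INT_ENABLE, 0x1);            /* 1. enable the interrupt signal     */
    write_reg(DBG_ERR_INJECT, 0x1);            /* 2. create the error scenario       */

    if (poll_for(DBG_INT_STATUS, 0x1, 1000)) { /* 3./4. interrupt generated and seen */
        printf("TEST FAILED: interrupt never asserted\n");
        return 1;
    }
    if (poll_for(CPU_HALT_STATUS, 0x3, 1000)) {/* 5. CPU raised halt + NMI           */
        printf("TEST FAILED: halt/NMI not triggered at CPU\n");
        return 1;
    }
    printf("TEST PASSED\n");
    return 0;
}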

11. Test case for Halt signal

This test case was written by us to verify whether a halt is generated when an error is encountered in any of the blocks of the partition. In case of an error, each block within the partition can trigger a halt signal going to the central error collecting hardware. On receiving an error from any of the blocks, the central error collecting hardware triggers a partition halt to all of its sub-blocks.

The test case flow is:

1. Enable reception of partition halt
2. Enable halt generation
3. Create an error scenario
4. Confirm that the halt is generated
5. Confirm that the partition halt is received

The procedure involved in writing a test case and getting it to pass is explained below (a simplified sketch of the flow in C is given after this list):

1. The text editor used by us is the gvim editor, in which we write the required C code. The command to open a new file is gvim filename.c.
2. The test case should include the proper header files and should be written with proper indentation.
3. After writing the code, we compile and run it using the gsim simulator.


4. The log of the test case can be inspected after the run, and from it we can figure out the status of the code if the test case is not passing.
5. The waveforms can be viewed in Verdi, which helps us figure out whether the test case is giving the required result.
6. Once the test case passes, we check it in, and it can then be accessed by all the employees.

This procedure is used for all the test cases that we write in FCD. While writing this test case we used many macros that were predefined at the full chip level, which is why this halt test case can be reused at the full chip level too.
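A minimal sketch of this flow in C is given below. The register names and helper functions (HALT_RX_ENABLE, HALT_GEN_ENABLE, expect_set, etc.) are hypothetical stand-ins for the predefined full-chip macros mentioned above, so the sketch shows only the shape of the test, not the real code.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical control/status registers of the block and of the central
 * error collecting hardware; the real names come from full-chip macros. */
#define HALT_RX_ENABLE    0x40003000u  /* allow the block to receive partition halt */
#define HALT_GEN_ENABLE   0x40003004u  /* allow the block to generate a halt        */
#define ERR_INJECT        0x40003008u  /* create the error scenario                 */
#define HALT_GEN_STATUS   0x4000300Cu  /* block asserted its halt output            */
#define PART_HALT_STATUS  0x40003010u  /* partition halt seen back at the block     */

/* Stubs: in the real environment these go through the test bench bus model. */
static void write_reg(uint32_t addr, uint32_t val) { (void)addr; (void)val; }
static uint32_t read_reg(uint32_t addr) { (void)addr; return 0; }

static int expect_set(uint32_t addr, uint32_t mask, const char *what)
{
    if (read_reg(addr) & mask)
        return 0;
    printf("TEST FAILED: %s\n", what);
    return -1;
}

int main(void)
{
    write_reg(HALT_RX_ENABLE, 0x1);   /* 1. enable reception of partition halt */
    write_reg(HALT_GEN_ENABLE, 0x1);  /* 2. enable halt generation             */
    write_reg(ERR_INJECT, 0x1);       /* 3. create the error scenario          */

    if (expect_set(HALT_GEN_STATUS, 0x1, "halt was not generated"))        /* 4 */
        return 1;
    if (expect_set(PART_HALT_STATUS, 0x1, "partition halt not received"))  /* 5 */
        return 1;

    printf("TEST PASSED\n");
    return 0;
}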

12. Test case to perform a Direct Memory Access transfer from the CPU to the Firmware

This is the final test case for this semester, which was allotted before the RTL freeze. This test case exercises the new functions of the new block, such as a DMA transfer from the CPU to the firmware. Before this test case there were other test cases transferring data from memory to memory, memory to CPU, CPU to memory, and so on. But this is the first time that a test case has been written for a data transfer from the CPU to the firmware.


For this purpose a special PrimeCell UART was purchased from ARM. This is a convenient interface for transmitting data from the different blocks to the firmware. Although this UART infrastructure and interface is already used in the front end and back end processes, this is the first instance of a UART in the core partition. Hence we had to start from scratch while writing the test case. We had to configure all the registers in the UART manually by entering the individual values in the various fields bit by bit. Once the registers are configured, the FIFOs are enabled, which allows for the free flow of data.

As verification engineers, it is our responsibility to test different scenarios of data flow, so we checked the design for two cases, less than half the FIFO capacity and more than half the FIFO capacity, in both the read and write operations. This is where we found our first bug in the design. The bug was reported to the design team, which sent us the bug report after fixing it. Following are the steps taken while initializing and then checking the data flow (a simplified sketch in C is given after this list):

- Generate the data to be sent from various sources like SRAM, DRAM, IRAM, etc. (at the chip level: front end processor and back end processor)
- Initialize the control and status register ring to allow the transmission of data from our debug block
- Enable the UART registers
- Enable the done/failed flag signals
- Start the data transmission

- After the data is transmitted, check for the done/failed interrupt signals
- Check the integrity of the data by using a cyclic redundancy checksum or other methods
- Use a bus-aligned method of dividing the data to be sent into chunks of varying sizes (here 56 and 60)
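The following sketch gives a rough idea of how such a test might drive the UART-based DMA transfer in C. The register map (UART_BAUD_DIV, DMA_SRC_ADDR, etc.), the bit fields and the helper functions are all hypothetical assumptions for illustration; the actual ARM PrimeCell UART programming sequence and the block's DMA registers differ in detail.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical UART / DMA register addresses; real offsets come from the
 * PrimeCell UART documentation and the block's register specification. */
#define UART_BAUD_DIV   0x40004000u  /* baud rate divisor                    */
#define UART_LINE_CTRL  0x40004004u  /* word length, parity, FIFO enable bit */
#define UART_CTRL       0x40004008u  /* UART enable, TX/RX enable            */
#define DMA_SRC_ADDR    0x40005000u  /* source buffer address (CPU side)     */
#define DMA_LEN         0x40005004u  /* transfer length in bytes             */
#define DMA_START       0x40005008u  /* kick off the transfer                */
#define DMA_STATUS      0x4000500Cu  /* bit0 = done, bit1 = failed           */

/* Stubs: the real environment performs these accesses via the test bench. */
static void write_reg(uint32_t addr, uint32_t val) { (void)addr; (void)val; }
static uint32_t read_reg(uint32_t addr) { (void)addr; return 0; }

/* Simple software CRC-32 used for the integrity check. */
static uint32_t crc32_sw(const uint8_t *buf, uint32_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (uint32_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

int main(void)
{
    static uint8_t payload[60];                    /* test data: one 60-unit chunk */
    for (int i = 0; i < 60; i++) payload[i] = (uint8_t)i;

    /* Configure the UART registers field by field, then enable the FIFOs. */
    write_reg(UART_BAUD_DIV, 0x10);
    write_reg(UART_LINE_CTRL, (0x3u << 5) | (1u << 4));  /* 8-bit words, FIFO enable */
    write_reg(UART_CTRL, 0x7);                            /* UART, TX and RX enable  */

    /* Program and start the DMA transfer from the CPU-side buffer
     * (illustrative; assumes a 32-bit address space). */
    write_reg(DMA_SRC_ADDR, (uint32_t)(uintptr_t)payload);
    write_reg(DMA_LEN, sizeof payload);
    write_reg(DMA_START, 0x1);

    uint32_t status = read_reg(DMA_STATUS);
    if (status & 0x2) { printf("TEST FAILED: DMA reported failure\n"); return 1; }
    if (!(status & 0x1)) { printf("TEST FAILED: DMA never completed\n"); return 1; }

    /* Integrity check: compare the CRC of what was sent with what the firmware saw. */
    printf("payload CRC32 = 0x%08X\n", crc32_sw(payload, sizeof payload));
    printf("TEST PASSED\n");
    return 0;
}

In the actual test, the same sequence was run with payloads below and above half the FIFO depth, for both the read and write directions.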

While writing this test case we learnt a lot about the core partition and the functions of its blocks. We also learnt the process of filing a bug and then making the necessary changes once it is fixed.


13. Conclusion

The project allotted to us gave us hands-on experience with UNIX and the C language. We got to know how a team works in a company. The test cases written by us helped to verify the design under test (DUT). As we were in the storage department, we also learnt about the SSDs designed at LSI.

14. References

1. N. Sherwani, Algorithms for VLSI Physical Design Automation, Kluwer (1998), ISBN 9780792383932.
2. Semi-Custom Design Flow.
3. A. Mehrotra, L. P. P. P. van Ginneken, Y. Trivedi, "Design flow and methodology for 50M gate ASIC", IEEE Conference Publications, ISBN 0-7803-7659-5.
4. A. Kahng, J. Lienig, I. Markov, J. Hu, VLSI Physical Design: From Graph Partitioning to Timing Closure, Springer (2011), ISBN 978-90-481-9590-9, p. 27.
5. http://www.collinsdictionary.com/dictionary/english/flash?showCookiePolicy=true
6. www.asic-world.com
7. www.opencores.org
8. Yashavant Kanetkar, Let Us C.

