
http://www.vlsi-design.net/digital-design/physical-design/floor-planningplacement/
http://www.scribd.com/doc/208620/Thesis-on-Floorplanning-VLSI-by-RenishLadani

Floor Planning

Floorplanning determines the size of the design cell (or die), creates the boundary and core area, and creates wire tracks for placement of standard cells [1]. It is also the process of positioning blocks or macros on the die.

Floorplanning control parameters such as aspect ratio and core utilization are defined as follows:

Aspect Ratio = Horizontal Routing Resources / Vertical Routing Resources
Core Utilization = Standard Cell Area / (Row Area + Channel Area)

A total of 4 metal layers are available for routing in the version of Astro used: M1 and M3 are horizontal layers, and M2 and M4 are vertical layers. Hence the aspect ratio for SAMM is 1. The total number of cells is 1645, the total number of nets is 1837, and the number of ports (excluding 16 power pads) is 60. The figure depicting the floorplan and die size (in µm) of SAMM is shown beside.
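The two control parameters above can be sketched as simple ratios. All function names and numeric values in this sketch are illustrative placeholders, not actual SAMM data:

```python
def aspect_ratio(horizontal_tracks: int, vertical_tracks: int) -> float:
    """Aspect ratio = horizontal routing resources / vertical routing resources."""
    return horizontal_tracks / vertical_tracks

def core_utilization(std_cell_area: float, row_area: float, channel_area: float) -> float:
    """Core utilization = standard-cell area / (row area + channel area)."""
    return std_cell_area / (row_area + channel_area)

# With two horizontal layers and two vertical layers offering equal track
# counts, the aspect ratio is 1, as reported for SAMM.
print(aspect_ratio(1000, 1000))                  # 1.0
print(core_utilization(70_000, 90_000, 10_000))  # 0.7
```

A utilization around 0.7 is a common starting point, leaving headroom for buffers and routing; the real target is set per design.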

Top Design Format (TDF) files provide Astro with special instructions for planning, placing, and routing the design. TDF files generally include pin and port information; Astro uses the I/O definitions from the TDF file in particular during the starting phase of the design flow [1]. Corner cells are simply dummy cells that contain the ground and power layers. The TDF file used for SAMM is given below. The SAMM IC has a total of 80 I/O pads, of which 4 are dummy pads. Each side of the chip has 20 pads, including 2 sets of power pads. The number of power pads required for SAMM is calculated in the power planning section. The design is pad-limited (the pad area is larger than the cell area), and inline bonding (same I/O pad height) is used.

Physical design (electronics)
From Wikipedia, the free encyclopedia

Physical design steps within the IC design flow

In integrated circuit design, physical design is a step in the standard design cycle which follows circuit design. At this step, circuit representations of the components (devices and interconnects) of the design are converted into geometric representations of shapes which, when manufactured in the corresponding layers of materials, will ensure the required functioning of the components. This geometric representation is called integrated circuit layout. This step is usually split into several sub-steps, which include both design and verification and validation of the layout.[1]

Modern-day Integrated Circuit (IC) design is split up into front-end design using HDLs, verification, and back-end design or physical design. The next step after physical design is the manufacturing or fabrication process, which is done in wafer fabrication houses. Fab houses fabricate designs onto silicon dies, which are then packaged into ICs. Each of the phases mentioned above has a design flow associated with it. These design flows lay down the process and guidelines/framework for that phase. The physical design flow uses the technology libraries that are provided by the fabrication houses. These technology files provide information regarding the type of silicon wafer used, the standard cells used, the layout rules, etc.

Technologies are commonly classified according to minimal feature size. Standard sizes, in order of miniaturization, are 2 µm, 1 µm, 0.5 µm, 0.35 µm, 0.25 µm, 180 nm, 130 nm, 90 nm, 65 nm, 45 nm, 28 nm, 22 nm, 18 nm, and so on. They may also be classified according to major manufacturing approaches: n-well process, twin-well process, SOI process, etc.

Contents

1 Physical Design Flow
2 Design Netlist
3 Floorplanning
4 Partitioning
5 Placement
6 Clock tree synthesis
7 Routing
8 Physical Verification
9 GDSII Generation
10 References

Physical Design Flow

A typical back-end flow is shown below.

The main steps in the flow are:


Design Netlist (after synthesis)
Floorplanning
Partitioning
Placement
Clock-tree Synthesis (CTS)
Routing
Physical Verification
GDSII Generation

These steps are just the basics. There are detailed PD flows that are used depending on the tools and the methodology/technology. Some of the tools/software used in back-end design are:

Cadence (SOC Encounter, VoltageStorm, NanoRoute)
Synopsys (Design Compiler)
Magma (BlastFusion, etc.)
Mentor Graphics (Olympus SoC, IC-Station, Calibre)

A more detailed Physical Design Flow is shown below. Here you can see the exact steps and the tools used in each step outlined.

Design Netlist

A netlist (gate-level netlist) is the end result of the synthesis process. Synthesis converts the RTL design, usually coded in VHDL or Verilog HDL, to gate-level descriptions which the next set of tools can read and understand. This netlist contains information on the cells used, their interconnections, area used, and other details. Typical synthesis tools are:

Cadence RTL Compiler / BuildGates / Physically Knowledgeable Synthesis (PKS)
Synopsys Design Compiler

During the synthesis process, constraints are applied to ensure that the design meets the required functionality and speed (specifications). Only after the netlist is verified for functionality and timing is it sent through the physical design flow.

Floorplanning

The first step in the physical design flow is floorplanning. Floorplanning is the process of identifying structures that should be placed close together, and allocating space for them in such a manner as to meet the sometimes conflicting goals of available space (cost of the chip), required performance, and the desire to have everything close to everything else. Based on the area of the design and the hierarchy, a suitable floorplan is decided upon. Floorplanning takes into account the macros used in the design, memory, other IP cores and their placement needs, the routing possibilities, and also the area of the entire design. Floorplanning also decides the I/O structure and aspect ratio of the design. A bad floorplan will lead to wastage of die area and routing congestion.

In many design methodologies, area and speed are considered to be things that should be traded off against each other. The reason is probably that routing resources are limited, and the more routing resources are used, the slower the design will operate. Optimizing for minimum area allows the design to use fewer resources and also allows the sections of the design to be closer together. This leads to shorter interconnect distances, fewer routing resources used, faster end-to-end signal paths, and even faster and more consistent place-and-route times. Done correctly, there are no negatives to floorplanning.

As a general rule, data-path sections benefit most from floorplanning, while random logic, state machines, and other non-structured logic can safely be left to the placer section of the place-and-route software. Data paths are typically the areas of your design where multiple bits are processed in parallel, with each bit being modified the same way, with maybe some influence from adjacent bits. Example structures that make up data paths are adders, subtractors, counters, registers, and muxes.

Partitioning

Partitioning is the process of dividing the chip into small blocks. This is done mainly to separate different functional blocks and also to make placement and routing easier. Partitioning can be done in the RTL design phase, when the design engineer partitions the entire design into sub-blocks and then proceeds to design each module. These modules are linked together in the main module, called the TOP LEVEL module. This kind of partitioning is commonly referred to as logical partitioning.

Placement

Before the start of placement optimization, all Wire Load Models (WLMs) are removed. Placement uses RC values from Virtual Route (VR) to calculate timing. VR is the shortest Manhattan distance between two pins. VR RCs are more accurate than WLM RCs. Placement is performed in four optimization phases:

1. Pre-placement optimization
2. In-placement optimization
3. Post-placement optimization (PPO) before clock tree synthesis (CTS)
4. PPO after CTS

Pre-placement optimization optimizes the netlist before placement; high-fanout nets (HFNs) are collapsed. It can also downsize cells. In-placement optimization re-optimizes the logic based on the VR. It can perform cell sizing, cell moving, cell bypassing, net splitting, gate duplication, buffer insertion, and area recovery. Optimization performs iterations of setup fixing and incremental timing- and congestion-driven placement.

Post-placement optimization before CTS performs netlist optimization with ideal clocks. It can fix setup, hold, and max transition/capacitance violations; perform placement optimization based on global routing; and redo HFN synthesis. Post-placement optimization after CTS optimizes timing with the propagated clock and tries to preserve clock skew.
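The setup and hold checks that these phases fix reduce to simple slack arithmetic. The sketch below is illustrative only; the arrival and required times are made-up numbers, not output from any real tool:

```python
def setup_slack(data_arrival: float, data_required: float) -> float:
    """Setup slack = required time - arrival time; positive means the check passes."""
    return data_required - data_arrival

def hold_slack(data_arrival: float, hold_requirement: float) -> float:
    """Hold slack = arrival time - hold requirement; negative means a violation."""
    return data_arrival - hold_requirement

# Hypothetical path timings in nanoseconds.
print(round(setup_slack(4.2, 5.0), 1))  # 0.8: setup met
print(round(hold_slack(0.3, 0.5), 1))   # -0.2: hold violation to fix
```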

Clock tree synthesis

Ideal clock before CTS

The goal of clock tree synthesis (CTS) is to minimize skew and insertion delay. The clock is not propagated before CTS, as shown in the picture. After CTS, hold slack should improve. The clock tree begins at the .sdc-defined clock source and ends at the stop pins of the flops. There are two types of stop pins, known as ignore pins and sync pins. Don't-touch circuits and pins in the front end (logic synthesis) are treated as ignore circuits or pins in the back end (physical synthesis). Ignore pins are ignored for timing analysis. If the clock is divided, then separate skew analysis is necessary.

Global skew targets zero skew between any two synchronous pins without considering their logic relationship. Local skew targets zero skew between two synchronous pins while considering their logic relationship.

If the clock is skewed intentionally to improve setup slack, it is known as useful skew.
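The quantities above can be made concrete with a toy calculation (this is not Astro output; the arrival times are invented): insertion delay is the longest source-to-stop-pin latency, and global skew is the spread of arrival times across all sync pins.

```python
# Clock arrival times (ns from the clock source) at each flop's clock pin.
arrival = {"ff1/CK": 0.42, "ff2/CK": 0.45, "ff3/CK": 0.40}

insertion_delay = max(arrival.values())                      # longest latency
global_skew = max(arrival.values()) - min(arrival.values())  # spread of arrivals

print(round(insertion_delay, 2))  # 0.45
print(round(global_skew, 2))      # 0.05
```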

Rigidity is a term coined in Astro to indicate how tightly constraints are enforced: the higher the rigidity, the tighter the constraints.

Clock after CTS

In clock tree optimization (CTO), the clock can be shielded so that noise is not coupled to other signals, but shielding increases area by 12 to 15%. Since the clock signal is global in nature, the same metal layer used for power routing is also used for the clock. CTO is achieved by buffer sizing, gate sizing, buffer relocation, level adjustment, and HFN synthesis. We try to improve setup slack in the pre-placement, in-placement, and post-placement optimization (before CTS) stages while neglecting hold slack; in post-placement optimization after CTS, hold slack is improved. As a result of CTS, a lot of buffers are added; generally, for 100k gates, around 650 buffers are added.

Routing

There are two types of routing in the physical design process: global routing and detailed routing. Global routing allocates routing resources that are used for connections. Detailed routing assigns routes to specific metal layers and routing tracks within the global routing resources.

Physical Verification

Physical verification checks the correctness of the layout design. This includes verifying that the layout:

complies with all technology requirements: Design Rule Checking (DRC)
is consistent with the original netlist: Layout vs. Schematic (LVS)
has no antenna effects: Antenna Rule Checking
complies with all electrical requirements: Electrical Rule Checking (ERC).[2]
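A DRC check can be illustrated mechanically with a toy minimum-spacing rule over axis-aligned rectangles. Everything here (the rule value, the shapes, the function names) is invented for illustration; real DRC decks encode hundreds of per-layer rules:

```python
def spacing(a, b):
    """Edge-to-edge distance between rectangles (x1, y1, x2, y2); 0 if they touch or overlap."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx * dx + dy * dy) ** 0.5

MIN_SPACING = 2.0  # hypothetical minimum metal spacing in layout units

shapes = [(0, 0, 4, 4), (5, 0, 9, 4), (12, 0, 16, 4)]
violations = [
    (i, j)
    for i in range(len(shapes))
    for j in range(i + 1, len(shapes))
    if spacing(shapes[i], shapes[j]) < MIN_SPACING
]
print(violations)  # [(0, 1)]: the first two shapes are only 1 unit apart
```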

GDSII Generation

Once the design has been physically verified, optical-lithography masks are generated for manufacturing. The layout is represented in the GDSII stream format, which is sent to a semiconductor fabrication plant (fab).

References

1. N. Sherwani, "Algorithms for VLSI Physical Design Automation", Kluwer (1998), ISBN 978-079238393
2. A. Kahng, J. Lienig, I. Markov, J. Hu: "VLSI Physical Design: From Graph Partitioning to Timing Closure", Springer (2011), ISBN 978-90-481-9590-9, p. 27.

2.7 Case Study: Floorplan Optimization

Our second case study is an example of a highly irregular, symbolic problem. The solution that we develop incorporates a task-scheduling algorithm.

2.7.1 Floorplan Background

VLSI is a process used to build electronic components such as microprocessors and memory chips comprising millions of transistors. The design of VLSI components is a computationally demanding process. Computers are used extensively to verify the correctness of a circuit design, to lay out a circuit in a two-dimensional area, and to generate the patterns used to test circuits once they have been fabricated. Many of these problems involve either an exhaustive or a heuristically guided search of a large space of possible solutions. Here, we consider a layout problem. The first stage of the VLSI design process typically produces a set of indivisible rectangular blocks called cells. In a second stage, interconnection information is used to determine the relative placements of these cells. In a third stage, implementations are selected for the various cells with the goal of optimizing the total area. It is the third stage, floorplan optimization, for which we shall develop a parallel algorithm. This is an important part of the design process, since the cost of a chip is usually dominated by its area.

VLSI floorplan optimization can be explained by analogy with the problem of designing a kitchen. Assume that we have decided on the components the kitchen is to contain (this action is stage 1 of the VLSI design process) and how these components are to be arranged (stage 2). For example, we may wish to have a stove, refrigerator, table, and sink, and may require that the stove be next to the refrigerator and the table next to the sink. Assume also that we can choose among several possible models for each of these components, with different models having different shapes but occupying the same floor area. In the floorplan optimization phase of our kitchen design, we select models so as to make the best use of available floor space.

In VLSI, a floorplan is represented as a pair of polar graphs, conventionally called the vertical and horizontal graphs. (A polar graph is a directed acyclic graph with a single source and a single sink. The term directed means that edges have a direction, and acyclic means that there are no cycles.) These graphs specify which cells are adjacent in the vertical and horizontal directions, respectively. Each arc denotes a cell, and nodes (other than the source and sink) link cells that must have touching edges. Although a cell has a fixed area, it may have several possible implementations with different aspect ratios.
If we have N cells, and if cell i has I(i) implementations, then the total number of possible floorplan configurations is

    total configurations = I(1) x I(2) x ... x I(N)

For example, Figure 2.27 shows a floorplan optimization problem with three cells and six possible configurations.
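The count for Figure 2.27 can be checked directly: the product of the per-cell implementation counts (1, 3, and 2) gives the six configurations.

```python
from math import prod

# Implementation counts from Figure 2.27: cell A has 1, B has 3, C has 2.
implementations = [1, 3, 2]
print(prod(implementations))  # 6 possible floorplan configurations
```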

Figure 2.27: A floorplan optimization problem. The three cells A, B, and C have 1, 3, and 2 implementations, respectively. In (a) are the alternative implementations. In (b) are the vertical and horizontal graphs, which state that B must be above C and that A must be to the left of B and C, respectively. In (c) are the alternative floorplans that satisfy the constraints; each is labeled with its area. The lowest-area floorplan is constructed from A, B0, and C1 and has an area of 130.

Figure 2.28: Solving a floorplan optimization problem. This is the search tree corresponding to the problem illustrated in Figure 2.27. Level 0 is the root. At level 1, an implementation has been chosen for A; the three level-2 subtrees represent the choices for B, and the level-3 leaves the choices for C. The number in each tree node represents the area of the associated (partial) solution. The optimal configuration is (A, B0, C1) and has area 130.

The problem, then, is to identify the configuration with the lowest area, where area is defined as the product of the maximum horizontal and vertical extents. This identification can be achieved by using a search algorithm to explore a search tree representing all possible configurations. As shown in Figure 2.28, level i of this tree corresponds to the situation in which implementations have been chosen for i cells. We can explore this search tree by using Algorithm 1.1. An initial call search(root) causes the entire tree to be visited, with the path used to get to each leaf node reported as a solution.

Algorithm 1.1 implements an exhaustive search that visits all nodes of the search tree. Unfortunately, this strategy is computationally infeasible for any but the smallest problems. For example, a problem with just 20 cells and 6 implementations per cell has a search space of 6^20 (about 3.7 x 10^15) nodes. Fortunately, the number of nodes explored can be reduced considerably by using a technique called branch-and-bound search. The basic idea is to keep track of the best (lowest-area) solution found so far. Before "expanding" a node (that is, looking at its subtrees), we check whether the area of the partial configuration represented by that node is already greater than that of the best known solution. If so, we know that this node cannot yield a better solution, and the subtree rooted at that node can be abandoned, or pruned (Figure 2.29). This approach is specified as Algorithm 2.2, with the global variable A used to maintain a record of the best solution.

Figure 2.29: Branch-and-bound search. This figure shows the nodes actually explored in the example problem, assuming a depth-first and left-to-right search strategy. The subtree rooted at the second node on level 2 is pruned because the cost of this node (170) is greater than that of the cheapest solution already found (130).
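The pruning idea can be sketched compactly. The cost model below is deliberately simplified (cells stacked in a single column, so area = total height times maximum width); the real problem derives area from the polar graphs instead. Any partial cost that never decreases as cells are added supports the same pruning argument.

```python
def area(partial):
    """Area of a partial configuration, given as a tuple of (width, height) choices."""
    return sum(h for _, h in partial) * max(w for w, _ in partial)

def search(cells, partial, best):
    """Branch-and-bound; best = [lowest area so far, corresponding configuration]."""
    if partial and area(partial) >= best[0]:
        return                      # prune: this subtree cannot beat the best solution
    if len(partial) == len(cells):  # leaf: record a new best solution
        best[0], best[1] = area(partial), partial
        return
    for impl in cells[len(partial)]:  # expand the choices for the next cell
        search(cells, partial + (impl,), best)

# Hypothetical per-cell implementations as (width, height) options.
cells = [[(2, 3)], [(1, 4), (4, 1)], [(3, 2), (2, 3)]]
best = [float("inf"), None]
search(cells, (), best)
print(best[0])  # 20
```

Running this explores the tree depth-first and left-to-right, exactly the sequential strategy the next paragraph describes.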

On a sequential computer, the foreach in Algorithm 2.2 can examine each subtree in turn, thereby giving a depth-first search algorithm that explores the tree depth-first and left-to-right. In this case, pruning can reduce the number of nodes explored enormously. In one experiment reported in the literature, the number of nodes explored in a typical 20-cell problem was reduced by several orders of magnitude. As we shall see, efficient pruning is a difficult problem in a parallel environment and, to a large extent, determines the structure of our parallel algorithm.

In summary, the fundamental operation to be performed in the floorplan optimization problem is branch-and-bound search. This is an interesting algorithm from a parallel computing perspective because of its irregular computational structure: the size and shape of the search tree that must be explored are not known ahead of time. Also, the need for pruning introduces a need both to manage the order in which the tree is explored and to acquire and propagate global knowledge of computation state. In these respects this problem is typical of many algorithms in symbolic (nonnumeric) computing.

2.7.2 Floorplan Algorithm Design

Partition. Algorithm 2.2, like Algorithm 1.1, has no obvious data structure to which we can apply domain decomposition techniques. Hence, we use a fine-grained functional decomposition in which each search tree node is explored by a separate task. As noted earlier, this means that new tasks will be created in a wavefront as the search progresses down the search tree, which will tend to be explored in a breadth-first fashion. Notice that only tasks on the wavefront can execute concurrently. We also need to address the issue of how to manage the A value, which must be accessed by all tasks. For now, we assume that it is encapsulated in a single task with which other tasks will communicate. A quick review using the design checklist of Section 2.2.3 reveals one deficiency in this design. The breadth-first exploration strategy is likely to decrease performance dramatically by delaying discovery of solution nodes and hence reducing the amount of pruning that occurs, thereby leading to considerable redundant computation. We must bear this issue in mind in subsequent design phases.

Communication. In a parallel implementation of simple search (Algorithm 1.1), tasks can execute independently and need communicate only to report solutions. In contrast, branch-and-bound search requires communication during execution in order to obtain and update the search bound A. In designing a communication structure to achieve this goal, we need to trade off the benefits of frequent accesses to a centralized A value (which tends to reduce the amount of the search tree that must be explored) against communication costs. One approach is to encapsulate responsibility for maintaining A in a centralized task, with which each task communicates when a solution is produced or a bound is required.
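A minimal sketch of this centralized scheme: a manager owns the bound A behind a lock, and workers report solutions and read the bound through it. The class and method names are invented for illustration, and a shared-memory lock stands in for what would be a manager task in a message-passing setting.

```python
import threading

class BoundManager:
    """Owns the search bound A; workers query and update it under a lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self.bound = float("inf")  # best (lowest) area found so far

    def get(self):
        with self._lock:
            return self.bound

    def report(self, area):
        """Record a solution, keeping the bound monotonically decreasing."""
        with self._lock:
            if area < self.bound:
                self.bound = area

manager = BoundManager()
# Three "workers" concurrently report solutions they have found.
threads = [threading.Thread(target=manager.report, args=(a,)) for a in (170, 130, 150)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(manager.get())  # 130: the lowest reported area
```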
This approach is simple and may even be efficient if communication is cheap, evaluating a node is expensive, and the number of processors is not too large. However, the centralized approach is inherently nonscalable: since the manager must take a certain amount of time to process a request, the maximum rate at which it can service requests, and hence the maximum number of tasks that can execute concurrently, is bounded.

Various refinements to this centralized scheme can be imagined. We can modify Algorithm 2.2 to check A only periodically, for example when a depth counter incremented on each recursive call is an integer multiple of a specified frequency parameter. Or, we can partition the tree into subtrees, each with its own A submanager, and organize periodic exchanges of information between these submanagers. For example, submanagers can perform broadcast operations when they discover significantly better solutions.

Agglomeration. In the agglomeration phase of the design process we start to address practical issues relating to performance on target computers. In the floorplan optimization problem, this means we must address two potential deficiencies of the fine-grained algorithm that we have developed. The first will be familiar from earlier problems: the cost of creating a large number of fine-grained tasks. This can be addressed using agglomeration, unless we believe that node evaluation is sufficiently expensive and task creation sufficiently cheap for the fine-grained algorithm to be efficient. For example, we can create one task for each search call in the foreach statement of Algorithm 2.2 until we reach a specified depth in the tree, and then switch to a depth-first strategy, thereby creating a single task that evaluates search calls in sequence (Figure 2.30). If the switch to depth-first search is performed at depth D and cell i has I(i) implementations, then in the absence of pruning this technique creates I(1) x I(2) x ... x I(D) tasks.

Figure 2.30: Increasing granularity in a search problem. In this figure, we agglomerate by switching to a sequential search at level two in the search tree. A task is created for each subtree rooted at level two.
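The task count for a given switch depth follows directly from the product above. The implementation counts below are hypothetical, chosen to match the 20-cell, 6-implementation example used earlier:

```python
from math import prod

impl_counts = [6] * 20  # 20 cells, 6 implementations each (hypothetical)

def tasks_at_depth(counts, depth):
    """Tasks created by parallel expansion to the given depth, with no pruning."""
    return prod(counts[:depth])

print(tasks_at_depth(impl_counts, 2))  # 36
print(tasks_at_depth(impl_counts, 3))  # 216
```

Choosing D trades task-creation overhead against available concurrency: a larger D yields more, smaller tasks.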

The second potential deficiency is more subtle and relates to the scheduling of tasks rather than to their creation. In the absence of explicit programmer control, we can assume that the tasks created to evaluate search tree nodes will execute either in the order that they are created or perhaps in a random order. In either case, the search tree tends to be explored in a breadth-first fashion. This is undesirable because it tends to reduce the effectiveness of pruning and hence cause redundant computation. The solution to this problem is to control the order in which search tree nodes are explored. That is, we must implement a task-scheduling algorithm. Because this is really a mapping issue, we discuss it under "Mapping."

Mapping. Recall that when we use a task-scheduling strategy, tasks (search tree nodes) become "problems" to be executed by one of a smaller number of "worker" tasks, typically one per processor. Workers generate new search problems as they expand nodes, and request new search problems each time they complete previously assigned problems. Requests can be handled using a centralized or decentralized strategy. We can imagine a variety of alternative task-scheduling schemes for the floorplan optimization problem. One approach works in conjunction with the agglomeration scheme of Figure 2.30. A central manager first constructs a number of coarse-grained tasks by exploring the search tree to depth D. These tasks are then assigned to idle workers in a demand-driven manner. Because each task can be represented by a short vector representing the path taken to its position in the tree, the data movement costs associated with this scheme are not high. Furthermore, because each processor executes one subtree at a time in a depth-first fashion, pruning is effective.

An interesting variant of this approach combines elements of both redundant work and cyclic mapping to avoid the need for a central manager. Every worker expands the tree to depth D.
Then, each worker takes responsibility for a disjoint subset of the tasks generated. (This subset could be identified using a cyclic allocation strategy, for example.) Only if a worker becomes idle does it ask other workers for tasks. A third strategy, more complex but also more general, is initially to allocate the root node to a single worker. Load balancing is then achieved by causing workers with empty queues to request problems from other workers. Each worker can then enforce a local depth-first search strategy, and hence increase the amount of pruning, by ordering its queue of search problems according to their depth in the tree. This method allows the worker to select problems far from the root for local execution and problems nearer to the root to hand to other workers.

Our choice of task-scheduling strategy will depend on characteristics of our problem and target computer and can be determined by analysis and experiment. Notice that the communication structures used for task scheduling can be integrated with those proposed earlier for maintaining A. For example, a central manager used for task scheduling can also maintain and distribute an up-to-date search bound with each task. In decentralized schemes, the worker tasks that execute search problems can broadcast improved search bound values to other workers.

2.7.3 Floorplan Summary

The parallel algorithm designed in this case study is certainly more complex, and perhaps less obvious, than that developed for the atmosphere model. It is clear from the start that functional decomposition techniques should be used to define tasks, that responsibility for maintaining A should be isolated from the rest of the computation, and that we can increase task granularity by switching from a parallel to a sequential evaluation strategy at a specified depth in the search tree. If we were concerned with parallelizing simple search, the design might be complete at this stage. However, the need to support pruning requires that we proceed with further refinements. In particular, we introduce a task-scheduling algorithm so that we can pursue depth-first search on each processor while exposing higher-level search tree nodes for idle workers.

Physical design (electronics)


From Wikipedia, the free encyclopedia Jump to: navigation, search

Physical design steps within the IC design flow

In integrated circuit design, physical design is a step in the standard design cycle which follows after the circuit design. At this step, circuit representations of the components (devices and interconnects) of the design are converted into geometric representations of shapes which, when manufactured in the corresponding layers of materials, will ensure the required functioning of the components. This geometric representation is called integrated circuit layout. This step is usually split into several sub-steps, which include both design and verification and validation of the layout.[1] Modern day Integrated Circuit (IC) design is split up into Front-end design using HDL's, Verification and Back-end Design or Physical Design. The next step after Physical Design is the Manufacturing process or Fabrication Process that is done in the Wafer Fabrication Houses. Fabhouses fabricate designs onto silicon dies which are then packaged into ICs. Each of the phases mentioned above have Design Flows associated with them. These Design Flows lay down the process and guide-lines/framework for that phase. Physical Design flow uses the technology libraries that are provided by the fabrication houses. These technology files provide information regarding the type of Silicon wafer used, the standard-cells used, the layout rules, etc.

Technologies are commonly classified according to minimal feature size. Standard sizes, in the order of miniaturization, are 2m, 1m , 0.5m , 0.35m, 0.25m, 180nm, 130nm, 90nm, 65nm, 45nm, 28nm, 22nm, 18nm... They may be also classified according to major manufacturing approaches: n-Well process, twin-well process, SOI process, etc.

Contents
[hide]

1 Physical Design Flow 2 Design Netlist 3 Floorplanning 4 Partitioning 5 Placement 6 Clock tree synthesis 7 Routing 8 Physical Verification 9 GDSII Generation 10 References

[edit] Physical Design Flow


A typical Back-end Flow is shown below

The main steps in the flow are:


Design Netlist (after synthesis) Floorplanning Partitioning

Placement Clock-tree Synthesis (CTS) Routing Physical Verification GDS II Generation

These steps are just the basic. There are detailed PD Flows that are used depending on the Tools used and the methodology/technology. Some of the tools/software used in the back-end design are :

Cadence (SOC Encounter, VoltageStorm, NanoRoute) Synopsys (Design Compiler) Magma (BlastFusion, etc) Mentor Graphics (Olympus SoC, IC-Station, Calibre)

A more detailed Physical Design Flow is shown below. Here you can see the exact steps and the tools used in each step outlined.

[edit] Design Netlist


A Netlist/Gate-level netlist is the end result of the Synthesis process. Synthesis converts the RTL design usually coded in VHDL or Verilog HDL to gate-level descriptions which the next set of tools can read/understand. This netlist contains information on the cells used, their interconnections, area used, and other details. Typical synthesis tools are:

Cadence RTL Compiler/Build Gates/Physically Knowledgeable Synthesis (PKS) Synopsys Design Compiler

During the synthesis process, constraints are applied to ensure that the design meets the required functionality and speed (specifications). Only after the netlist is verified for functionality and timing is it sent for the Physical Design flow.

[edit] Floorplanning
The first step in the Physical Design flow is Floorplanning. Floorplanning is the process of identifying structures that should be placed close together, and allocating space for them in such a manner as to meet the sometimes conflicting goals of available space (cost of the chip), required performance, and the desire to have everything close to everything else. Based on the area of the design and the hierarchy, a suitable floorplan is decided upon. Floorplanning takes into account the macro's used in the design, memory, other IP cores and their placement needs, the routing possibilities and also the area of the entire design. Floorplanning also decides the IO structure, aspect ratio of the design. A bad floorplan will lead to waste-age of die area and routing congestion. In many design methodologies, Area and Speed are considered to be things that should be traded off against each other. The reason this is so is probably because there are limited routing resources, and the more routing resources that are used, the slower the design will operate. Optimizing for minimum area allows the design to use fewer resources, but also allows the sections of the design to be closer together. This leads to shorter interconnect distances, less routing resources to be used, faster end-to-end signal paths, and even faster and more consistent place and route times. Done correctly , there are no negatives to floorplanning. As a general rule, data-path sections benefit most from floorplanning, and random logic, state machines, and other non-structured logic can safely be left to the placer section of the place and route software. Data paths are typically the areas of your design where multiple bits are processed in parallel with each bit being modified the same way with maybe some influence from adjacent bits. Example structures that make up data paths are Adders, Subtractors, Counters, Registers, and Muxes.

Partitioning
Partitioning is the process of dividing the chip into small blocks. This is done mainly to separate different functional blocks and to make placement and routing easier. Partitioning can be done in the RTL design phase, when the design engineer partitions the entire design into sub-blocks and then proceeds to design each module. These modules are linked together in the main module, called the top-level module. This kind of partitioning is commonly referred to as logical partitioning.
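A standard quality metric for a partition is its cut size: the number of nets that cross between blocks, which a partitioner tries to minimize. The following sketch computes it for a made-up netlist; the net list and block assignment are invented for illustration.

```python
# Cut-size computation for a two-way (or multi-way) partition.
# A net is "cut" if its cells are not all assigned to the same block.

def cut_size(nets, block_of):
    """Count nets whose cells span more than one block."""
    cut = 0
    for net in nets:
        blocks = {block_of[cell] for cell in net}
        if len(blocks) > 1:
            cut += 1
    return cut

# Invented example: four cells, two blocks, three two-pin nets.
nets = [("u1", "u2"), ("u2", "u3"), ("u3", "u4")]
block_of = {"u1": "A", "u2": "A", "u3": "B", "u4": "B"}
print(cut_size(nets, block_of))  # only the u2-u3 net crosses the cut: 1
```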

Placement
Before placement optimization starts, all wire load models (WLMs) are removed. Placement uses RC values from the virtual route (VR) to calculate timing; the VR is the shortest Manhattan distance between two pins, and VR RCs are more accurate than WLM RCs. Placement is performed in four optimization phases:

1. Pre-placement optimization
2. In-placement optimization
3. Post-placement optimization (PPO) before clock tree synthesis (CTS)
4. PPO after CTS

Pre-placement optimization optimizes the netlist before placement; high-fanout nets (HFNs) are collapsed, and cells can be downsized. In-placement optimization re-optimizes the logic based on the VR. It can perform cell sizing, cell moving, cell bypassing, net splitting, gate duplication, buffer insertion, and area recovery, iterating setup fixing, incremental timing analysis, and congestion-driven placement. Post-placement optimization before CTS performs netlist optimization with ideal clocks: it can fix setup, hold, and maximum transition/capacitance violations, can optimize placement based on global routing, and redoes HFN synthesis. Post-placement optimization after CTS optimizes timing with the propagated clock and tries to preserve clock skew.
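The virtual-route metric above is just Manhattan distance between pins; a closely related estimate used during placement is the half-perimeter wirelength (HPWL) of a net's bounding box. A small sketch, with pin coordinates invented for illustration:

```python
# Virtual-route length: shortest Manhattan distance between two pins.
# HPWL: half the perimeter of the bounding box of all pins on a net,
# a common lower-bound estimate of routed wirelength during placement.

def manhattan(p, q):
    """Manhattan distance between two (x, y) pin locations."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def hpwl(pins):
    """Half-perimeter wirelength of a net's pin bounding box."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

print(manhattan((0, 0), (3, 4)))       # 3 + 4 = 7
print(hpwl([(0, 0), (3, 4), (1, 6)]))  # 3 + 6 = 9
```

For a two-pin net, HPWL and the virtual-route length coincide; for multi-pin nets, HPWL remains cheap to evaluate inside the placer's inner loop.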

Clock tree synthesis

[Figure: ideal clock before CTS]

The goal of clock tree synthesis (CTS) is to minimize skew and insertion delay. The clock is not propagated before CTS, as shown in the figure; after CTS, hold slack should improve. The clock tree begins at the clock source defined in the .sdc file and ends at the stop pins of the flops. There are two types of stop pins: ignore pins and sync pins. Don't-touch circuits and pins in the front end (logic synthesis) are treated as ignore circuits or pins in the back end (physical synthesis); ignore pins are ignored for timing analysis. If the clock is divided, separate skew analysis is necessary.

Global skew achieves zero skew between two synchronous pins without considering the logic relationship between them. Local skew achieves zero skew between two synchronous pins while considering the logic relationship. If the clock is skewed intentionally to improve setup slack, this is known as useful skew.
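Global skew reduces to a simple computation over clock arrival (insertion) times at the sink flops: the difference between the latest and earliest arrival. The arrival values below are invented for illustration.

```python
# Global skew from per-flop clock arrival times: latest minus earliest.
# Arrival times (in ns) are made-up example values.

def global_skew(arrival_ns):
    """Max minus min clock arrival time over all sink pins."""
    return max(arrival_ns.values()) - min(arrival_ns.values())

arrival_ns = {"ff1": 1.20, "ff2": 1.35, "ff3": 1.28}
print(f"global skew: {global_skew(arrival_ns):.2f} ns")  # 1.35 - 1.20
```

Local skew would apply the same subtraction, but only over pairs of flops that actually launch and capture data between each other.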

Rigidity is the term coined in Astro to indicate how tightly constraints are enforced: the higher the rigidity, the tighter the constraints.

[Figure: clock after CTS]

In clock tree optimization (CTO), the clock can be shielded so that noise does not couple into other signals, but shielding increases area by 12 to 15%. Since the clock signal is global in nature, the same metal layer used for power routing is also used for the clock. CTO is achieved by buffer sizing, gate sizing, buffer relocation, level adjustment, and HFN synthesis. Setup slack is improved in the pre-placement, in-placement, and pre-CTS post-placement optimization stages while hold slack is neglected; hold slack is then improved in post-placement optimization after CTS. CTS adds a large number of buffers: generally, around 650 buffers are added for 100k gates.

Routing
There are two types of routing in the physical design process, global routing and detailed routing. Global routing allocates routing resources that are used for connections. Detailed routing assigns routes to specific metal layers and routing tracks within the global routing resources.
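A classic building block behind both routing stages is maze routing, e.g. Lee's algorithm: a breadth-first wave expansion over a grid that finds a shortest path around blockages. The sketch below is a minimal single-layer version; the grid size and blockage set are invented for illustration.

```python
# Lee's maze-routing algorithm (single layer, unit grid): BFS from source
# to target, expanding to unblocked neighbors, returning the shortest
# path length or None if the connection is unroutable.
from collections import deque

def lee_route(grid_w, grid_h, blocked, src, dst):
    """Shortest unblocked Manhattan path length from src to dst, or None."""
    dist = {src: 0}
    q = deque([src])
    while q:
        x, y = q.popleft()
        if (x, y) == dst:
            return dist[(x, y)]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < grid_w and 0 <= ny < grid_h
                    and (nx, ny) not in blocked and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                q.append((nx, ny))
    return None  # target unreachable: routing failure

blocked = {(1, 0), (1, 1)}  # a small obstacle forcing a detour
print(lee_route(4, 3, blocked, (0, 0), (3, 0)))  # detour length: 7
```

Real routers work on multiple metal layers with preferred directions and track assignment, but the wave-expansion idea is the same.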

Physical Verification


Physical verification checks the correctness of the layout design. This includes verifying that the layout:

- Complies with all technology requirements: Design Rule Checking (DRC)
- Is consistent with the original netlist: Layout vs. Schematic (LVS)
- Has no antenna effects: Antenna Rule Checking
- Complies with all electrical requirements: Electrical Rule Checking (ERC).[2]
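The flavor of a DRC spacing rule can be shown with a minimal sketch: flag pairs of same-layer rectangles that sit closer together than a minimum spacing. This is illustrative only, not a real DRC engine; the simplified edge-to-edge distance and all coordinates are assumptions.

```python
# Minimal DRC-style minimum-spacing check over axis-aligned rectangles
# (x1, y1, x2, y2) on one layer. Touching/overlapping shapes are treated
# as merged (spacing 0) and not flagged. Coordinates are invented.

def spacing(a, b):
    """Simplified edge-to-edge separation; 0 if shapes touch or overlap."""
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return max(dx, dy)

def drc_spacing_violations(rects, min_space):
    """Return index pairs of rectangles closer than min_space apart."""
    violations = []
    for i in range(len(rects)):
        for j in range(i + 1, len(rects)):
            if 0 < spacing(rects[i], rects[j]) < min_space:
                violations.append((i, j))
    return violations

rects = [(0, 0, 2, 2), (2.1, 0, 4, 2), (10, 0, 12, 2)]
print(drc_spacing_violations(rects, min_space=0.5))  # rects 0 and 1 are 0.1 apart
```

Production DRC decks contain hundreds of such rules (width, spacing, enclosure, density) evaluated by dedicated tools, but each reduces to geometric checks of this kind.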

GDSII Generation


Once the design has been physically verified, optical-lithography masks are generated for manufacturing. The layout is represented in the GDSII stream format that is sent to a semiconductor fabrication plant (fab).
