
Routers: Evolution and Technologies

Course by C. Pham
Slides prepared by Nick McKeown and Pankaj Gupta
nickm@stanford.edu  www.stanford.edu/~nickm

The Internet is a mesh of routers

[Figure: the Internet core, with IP core routers inside the core and IP edge routers at its boundary]

The Internet is a mesh of IP routers, ATM switches, frame relay, TDM, ...

[Figure: many access networks attached around the core]

The Internet is a mesh of routers mostly interconnected by (ATM and) SONET

[Figure: routers interconnected through TDM equipment: circuit-switched crossconnects, DWDM, etc.]

Where high performance packet switches are used

In the Internet core:
- Carrier-class core routers
- ATM switches
- Frame relay switches

At the edge:
- Edge routers
- Enterprise WAN access and enterprise campus switches

Example: Points of Presence (POPs)

[Figure: eight POPs (POP1-POP8) interconnected in a mesh, with endpoints A-D attached to different POPs]

What a Router Looks Like

Cisco GSR 12416:  capacity 160 Gb/s, power 4.2 kW, 6 ft tall, 2 ft deep, 19 in wide
Juniper M160:     capacity 80 Gb/s, power 2.6 kW, 3 ft tall, 2.5 ft deep, 19 in wide

Basic Architectural Components

Control plane:
- Routing protocols, maintaining the routing table
- Admission control
- Reservation

Datapath (per-packet processing):
- Packet classification
- Policing & access control
- Forwarding / switching table
- Output scheduling

The datapath has three stages: 1. Ingress, 2. Interconnect, 3. Egress.

Basic Architectural Components

Datapath: per-packet processing, in three stages:
1. Ingress: forwarding decision against the forwarding table
2. Interconnect
3. Egress

(Each line card holds its own copy of the forwarding table and makes its own forwarding decision.)

Generic Router Architecture

For each arriving packet (Data + Hdr), every line card performs the same steps (a sketch in C follows below):
- Header processing: look up the IP destination address in the address table, then update the header (e.g. decrement the TTL and recompute the checksum)
- Buffer manager: hold the packet in buffer memory until the output line is free

[Figure: three parallel line cards, each with its own address-table lookup, header-processing and buffer-manager/buffer-memory blocks]
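A minimal sketch of this datapath in C. The structure layout and the helper names (lookup_route, enqueue) are illustrative assumptions, not from the original slides:

    #include <stdint.h>
    #include <stddef.h>

    #define NUM_PORTS 16

    struct packet {
        uint32_t dst_addr;        /* IPv4 destination address */
        uint8_t  ttl;
        size_t   len;
        uint8_t  data[1500];
    };

    /* Toy address-table lookup: hash the destination to an egress port.
     * A real router performs a longest-prefix match here. */
    static int lookup_route(uint32_t dst_addr)
    {
        return (int)(dst_addr % NUM_PORTS);
    }

    /* Toy buffer manager: a real one allocates from buffer memory and
     * queues the packet on the chosen output port. */
    static void enqueue(int port, struct packet *p)
    {
        (void)port;
        (void)p;
    }

    /* One pass through the generic datapath: lookup, header update, buffering. */
    void forward_packet(struct packet *p)
    {
        int port = lookup_route(p->dst_addr);  /* "Lookup IP Address" */
        p->ttl--;                              /* "Update Header" (a real router
                                                  also recomputes the checksum) */
        enqueue(port, p);                      /* "Buffer Manager" */
    }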

RFC 1812: Requirements for IPv4 Routers

- Must make an IP datagram forwarding decision (called forwarding); a minimal sketch follows below
- Must send the datagram out the appropriate interface (called switching)
- Optionally, a router MAY choose to perform special processing on incoming packets
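The forwarding decision is a longest-prefix match on the destination address. A minimal sketch in C, using a linear scan over a tiny hypothetical table (real routers use tries, hashing or TCAMs; the table entries are invented for illustration):

    #include <stdint.h>

    struct route {
        uint32_t prefix;     /* network prefix bits */
        uint32_t mask;       /* e.g. 0xFFFF0000 for a /16 */
        int      out_iface;  /* interface to switch the datagram onto */
    };

    /* Hypothetical three-entry forwarding table. */
    static const struct route table[] = {
        { 0x0A000000, 0xFF000000, 1 },  /* 10.0.0.0/8    -> iface 1 */
        { 0x0A010000, 0xFFFF0000, 2 },  /* 10.1.0.0/16   -> iface 2 */
        { 0x00000000, 0x00000000, 0 },  /* default route -> iface 0 */
    };

    /* Longest-prefix match: among matching entries, prefer the longest mask. */
    int forwarding_decision(uint32_t dst)
    {
        int best = -1;
        uint32_t best_mask = 0;
        for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++) {
            if ((dst & table[i].mask) == table[i].prefix &&
                (best < 0 || table[i].mask > best_mask)) {
                best = table[i].out_iface;
                best_mask = table[i].mask;
            }
        }
        return best;  /* -1 only if no entry (not even a default) matches */
    }

For example, a packet to 10.1.2.3 matches both the /8 and the /16 entries; the /16's longer mask wins, so the datagram is switched onto interface 2.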

Examples of special processing

- Filtering packets for security reasons
- Delivering packets according to a pre-agreed delay guarantee
- Treating high-priority packets preferentially
- Maintaining statistics on the number of packets sent by various routers

Special Processing Requires Identification of Flows

- All packets of a flow obey a pre-defined rule and are processed similarly by the router
- E.g. a flow = (src-IP-address, dst-IP-address), or a flow = (dst-IP-prefix, protocol), etc.
- The router needs to identify the flow of every incoming packet and then perform the appropriate special processing (see the sketch below)
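A minimal sketch of flow identification in C. The two-field key, the hash constants and the table size are illustrative assumptions, and hash collisions are ignored for brevity:

    #include <stdint.h>

    #define FLOW_TABLE_SIZE 4096

    /* Example flow definition: (src-IP-address, dst-IP-address). A different
     * definition, e.g. (dst-IP-prefix, protocol), changes only this struct. */
    struct flow_key {
        uint32_t src_addr;
        uint32_t dst_addr;
    };

    struct flow_state {
        struct flow_key key;
        uint64_t packets;   /* per-flow statistics */
        int      priority;  /* drives the special processing */
    };

    static struct flow_state flow_table[FLOW_TABLE_SIZE];

    /* Map an incoming packet to its flow's state (toy direct-indexed hash;
     * a real classifier handles collisions, or uses a TCAM). */
    struct flow_state *identify_flow(uint32_t src, uint32_t dst)
    {
        uint32_t h = (src * 2654435761u) ^ (dst * 40503u);
        struct flow_state *f = &flow_table[h % FLOW_TABLE_SIZE];
        f->key.src_addr = src;
        f->key.dst_addr = dst;
        f->packets++;
        return f;
    }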

Flow-aware vs Flow-unaware Routers

- Flow-aware router: keeps track of flows and performs similar processing on all packets in a flow
- Flow-unaware router (packet-by-packet router): treats each incoming packet individually

Why do we Need Faster Routers?

1. To prevent routers from becoming the bottleneck in the Internet.
2. To increase POP capacity, and to reduce cost, size and power.

Why we Need Faster Routers
1: To prevent routers from being the bottleneck

[Figure: 1985-2000, log scale. Packet processing power (Spec95Int CPU results) doubles every 18 months, while link speed (fiber capacity, Gbit/s) doubles every 7 months as TDM gives way to DWDM. Source: SPEC95Int & David Miller, Stanford.]

Why we Need Faster Routers
2: To reduce cost, power & complexity of POPs

[Figure: a POP built from many small routers vs. a POP built from a few large routers]

- Ports: price >$100k, power >400W.
- It is common for 50-60% of ports to be used just for interconnection within the POP.

Why are Fast Routers Difficult to Make?

1. It's hard to keep up with Moore's Law: the bottleneck is memory speed, and memory speed is not keeping up with Moore's Law.

Memory Bandwidth

1. It's hard to keep up with Moore's Law: the bottleneck is memory speed, and memory speed is not keeping up with Moore's Law.

[Figure: access time (ns) of commercial DRAM, 1980-2001, log scale. DRAM access time improves only about 1.1x every 18 months, while Moore's Law gives 2x every 18 months, router capacity grows 2.2x every 18 months, and line capacity grows 2x every 7 months.]
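A back-of-the-envelope illustration of why memory speed is the bottleneck (the line rate and packet size are assumed for illustration, not taken from the slides): at 10 Gb/s, a minimum-size 40-byte packet occupies only 40 x 8 bits / 10 Gb/s = 32 ns on the wire, so a new packet can arrive every 32 ns, while a single commercial DRAM access takes on the order of 50 ns or more. The memory cannot sustain even one access per packet, let alone the several accesses needed for lookup and buffering.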

Why are Fast Routers Difficult to Make?

1. It's hard to keep up with Moore's Law: the bottleneck is memory speed, and memory speed is not keeping up with Moore's Law.
2. Moore's Law is too slow: routers need to improve faster than Moore's Law.

Router Performance Exceeds Moore's Law

Growth in capacity of commercial routers:
- 1992: ~2 Gb/s
- 1995: ~10 Gb/s
- 1998: ~40 Gb/s
- 2001: ~160 Gb/s
- 2003: ~640 Gb/s

Average growth rate: 2.2x / 18 months.
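A quick sanity check on that rate: capacity grew from 2 Gb/s in 1992 to 640 Gb/s in 2003, a factor of 320 over 132 months, i.e. about 7.3 periods of 18 months, and 320^(1/7.3) ≈ 2.2.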

Things that slow routers down

- 250 ms of buffering: requires off-chip memory, more board space, pins and power.
- Multicast: affects everything! Complicates design, slows deployment.
- Latency bounds: limit pipelining.
- Packet sequence: limits parallelism.
- Small internal cell size: complicates arbitration.
- DiffServ, IntServ, priorities, WFQ, etc.
- Others: IPv6, drop policies, VPNs, ACLs, DoS traceback, measurement, statistics, ...

First Generation Routers

Typically <0.5 Gb/s aggregate capacity.

- A central CPU, route table and buffer memory sit on a shared backplane with the line interfaces (MACs)
- Packets cross the backplane as fixed-length DMA blocks or cells (reassembled on egress), or as variable-length packets
- Most Ethernet switches and cheap packet routers are built this way
- The bottleneck can be the CPU, the host adaptor or the I/O bus

First Generation Routers
Queueing Structure: Shared Memory

A large, single, dynamically allocated memory buffer is shared by all N inputs and N outputs:
- N writes per cell time and N reads per cell time
- Limited by memory bandwidth (see the calculation below)

A large body of work has proven, and this structure makes possible:
- Fairness
- Delay guarantees
- Delay variation control
- Loss guarantees
- Statistical guarantees
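The memory-bandwidth limit follows directly: with N ports at line rate R, the shared buffer must support N writes and N reads per cell time, a total bandwidth of 2NR. As an illustrative example (numbers assumed, not from the slides), 16 ports at 10 Gb/s would already require 2 x 16 x 10 Gb/s = 320 Gb/s of memory bandwidth, which is why shared-memory designs are confined to low aggregate capacities.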

Limitations

First generation router built with a 133 MHz Pentium:
- Mean packet size: 500 bytes
- An interrupt takes 10 µs; a word access takes 50 ns
- Per-packet processing time is 200 instructions = 1.504 µs

Copy loop:
    register <- memory[read_ptr]
    memory[write_ptr] <- register
    read_ptr <- read_ptr + 4
    write_ptr <- write_ptr + 4
    counter <- counter - 1
    if (counter not 0) branch to top of loop

4 instructions + 2 memory accesses = 130.08 ns per iteration.
Copying a 500-byte packet takes (500/4) x 130.08 ns = 16.26 µs; the interrupt adds 10 µs.
Total time = 27.764 µs => speed is 144.1 Mb/s.
The amortized interrupt cost is balanced by the routing-protocol cost.
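The same copy loop rendered as compilable C, as a sketch only (a production driver would use DMA rather than a CPU copy loop):

    #include <stdint.h>
    #include <stddef.h>

    /* Word-by-word packet copy, 4 bytes per iteration, matching the
     * pseudocode above: one load and one store (2 memory accesses at
     * 50 ns each) plus 4 bookkeeping instructions (~7.52 ns each at
     * 133 MHz) per iteration, i.e. about 130 ns per 4-byte word. */
    void copy_packet(uint32_t *dst, const uint32_t *src, size_t len_bytes)
    {
        size_t counter = len_bytes / 4;  /* 500-byte packet: 125 iterations */
        while (counter != 0) {
            *dst++ = *src++;
            counter--;
        }
    }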

Second Generation Routers

Typically <5 Gb/s aggregate capacity.

- The central CPU, route table and buffer memory remain, but handle only the slow path
- Each line card gets its own buffer memory and a local forwarding cache, along with a drop policy (or backpressure) and output link scheduling
- Port-mapping intelligence moves into the line cards, which is better suited to connection-oriented modes
- Performance depends on a high hit rate in the local lookup cache

Second Generation Routers

As caching became ineffective, the forwarding cache on each line card was replaced by a full local forwarding table, and the central CPU (holding the route table) was relegated to an exception processor.

Second Generation Routers
Queueing Structure: Combined Input and Output Queueing

- 1 write per cell time and 1 read per cell time
- The rate of writes/reads is determined by the speed of the shared bus
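A rough sketch of the arithmetic, with illustrative numbers: each cell crosses the shared bus once on its way from input card to output card, so the bus must carry the aggregate traffic of roughly N x R (e.g. 16 ports at 1 Gb/s already need a 16 Gb/s bus), while each line card's own buffer memory sustains only about one write and one read per cell time, i.e. roughly twice its line rate rather than the 2NR of the shared-memory design.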

Third Generation Routers

Typically <50 Gb/s aggregate capacity.

A third-generation switch provides parallel paths through a switched backplane (fabric):
- Each line card has local buffer memory, its own forwarding table and a MAC
- A separate CPU card holds the routing table

Third Generation Routers
Queueing Structure

- 1 write per cell time and 1 read per cell time
- The rate of writes/reads is determined by the switch fabric speedup
- A central arbiter schedules the switch

Third Generation Routers
Queueing Structure (continued)

- 1 write per cell time and 1 read per cell time; the rate is determined by the switch fabric speedup
- At the inputs: per-flow/class or per-output queues (VOQs)
- At the outputs: per-flow/class or per-input queues
- Flow-control backpressure between the stages

Fourth Generation Routers/Switches

[Figure: linecards connected to a separate switch core over optical links hundreds of feet long]

Physically Separating Switch Core and Linecards

- Distributes power over multiple racks
- Enables multistage fabrics and clustering
- Allows all buffering to be placed on the linecard:
  - Reduces power
  - Places the complex scheduling, buffer management, drop policy etc. on the linecard

Do optics belong in routers?

- They are already there, connecting linecards to switches.
- Optical processing doesn't belong on the linecard:
  - You can't buffer light
  - Minimal processing capability
- Optical switching can reduce power.

Optics in routers

[Figure: optical links between the linecards and the switch core]

Complex linecards

[Figure: a typical IP router linecard: optics, physical layer, framing & maintenance, packet processing with lookup tables, and buffer management & scheduling with buffer & state memory on both the ingress and egress paths, connected to the switch fabric and its arbitration]

A 10 Gb/s linecard:
- Number of gates: 30M
- Amount of memory: 2 Gbits
- Cost: >$20k
- Power: 300W

Replacing the switch fabric with optics

[Figure: two views of a pair of typical IP router linecards. In the first, the linecards (optics, physical layer, framing & maintenance, packet processing with lookup tables, buffer management & scheduling with buffer & state memory) connect through an electrical switch fabric with central arbitration. In the second, the electrical fabric is replaced by an optical switch: the linecards keep all buffering and scheduling, and exchange request/grant signals with the arbitration logic.]

Candidate technologies for the optical fabric:
1. MEMS
2. Fast tunable lasers + passive optical couplers
3. Diffraction waveguides
4. Electroholographic materials

Evolution to circuit switching

- Optics enables simple, low-power, very-high-capacity circuit switches.
- The Internet was packet switched for two reasons:
  - Expensive links: statistical multiplexing
  - Resilience: soft-state routing
- Neither reason holds today.

Fast Links, Slow Routers

[Figure: 1985-2000, log scale. Processing power (Spec95Int CPU results) doubles every 2 years, while link speed (fiber capacity, Gbit/s) doubles every 7 months with fiber optics and DWDM. Source: SPEC95Int; Prof. Miller, Stanford Univ.]

Fewer Instructions

[Figure: instructions available per packet since 1996, log scale from 1 to 1000, declining steadily from 1996 to 2001]

Some of McKeown's predictions about core Internet routers

The need for more capacity within a given power and volume budget will mean:
- Fewer functions in routers:
  - Little or no optimization for multicast
  - Continued overprovisioning will lead to little or no support for QoS, DiffServ, ...
- Fewer unnecessary requirements:
  - Mis-sequencing will be tolerated
  - Latency requirements will be relaxed
- Less programmability in routers, and hence no network processors
- Greater use of optics to reduce power in the switch

What McKeown believes is most likely

The need for capacity and reliability will mean widespread replacement of core routers with transport switching based on circuits:
- Circuit switches have proved simpler, more reliable, lower power, higher capacity and lower cost per Gb/s; eventually this is going to matter.
- The Internet will evolve to become edge routers interconnected by a rich mesh of DWDM circuit switches.
