
VMware NSX-T: Install, Configure, Manage

NSX-T 2.4
CONTENTS

Module 1 Course Introduction 1


1-2 Course Introduction ................................................................................................2
1-3 Importance ..............................................................................................................3
1-4 Learner Objectives .................................................................................................4
1-5 Course Outline ........................................................................................................5
1-6 Typographical Conventions ....................................................................................6
1-7 References .............................................................................................................7
1-8 VMware Online Resources .....................................................................................8
1-9 VMware Education Overview .................................................................................9
1-10 VMware Certification Overview ............................................................................10
1-11 VMware Digital Badge Overview ..........................................................................11

Module 2 VMware Virtual Cloud Network and NSX-T Data Center 13


2-2 Importance ............................................................................................................14
2-3 Module Lessons....................................................................................................15
2-4 VMware Virtual Cloud Network.............................................................................16
2-5 Learner Objectives ...............................................................................................17
2-6 Virtual Cloud Network Framework ........................................................................18
2-7 VMware NSX Portfolio (1) ....................................................................................20
2-8 VMware NSX Portfolio (2) ....................................................................................21
2-9 VMware NSX Portfolio (3) ....................................................................................22
2-10 NSX-T Data Center ..............................................................................................23
2-11 NSX-T Data Center Use Cases ............................................................................24
2-12 NSX-T Data Center Features (1) ..........................................................................25
2-13 NSX-T Data Center Features (2) ..........................................................................26
2-14 NSX-T Data Center Ecosystem ............................................................................27
2-15 Review of Learner Objectives...............................................................................29
2-16 NSX-T Data Center Architecture and Components..............................................30
2-17 Learner Objectives ...............................................................................................31
2-18 High-Level Architecture of NSX-T Data Center ....................................................32
2-19 Management and Control Planes .........................................................................34
2-20 About the Data Plane ...........................................................................................35

Contents i
2-21 About NSX Management Cluster .........................................................................36
2-22 Benefits of NSX Management Cluster ..................................................................38
2-23 NSX Management Cluster with Virtual IP Address ..............................................39
2-24 NSX Management Cluster with Load Balancer ....................................................40
2-25 About NSX Policy .................................................................................................41
2-26 NSX Policy Characteristics ...................................................................................42
2-27 Centralized Policy Management ...........................................................................43
2-28 NSX Manager Functions ......................................................................................44
2-29 NSX Policy and NSX Manager Workflow .............................................................45
2-30 About NSX Controller ...........................................................................................46
2-31 Control Plane Components (1) .............................................................................47
2-32 Control Plane Components (2) .............................................................................48
2-33 Control Plane Change Propagation ......................................................................49
2-34 Control Plane Sharding Function..........................................................................50
2-35 Handling Controller Failure ...................................................................................51
2-36 Data Plane Functions ...........................................................................................52
2-37 Data Plane Components ......................................................................................53
2-38 Review of Learner Objectives...............................................................................54
2-39 Key Points.............................................................................................................55

Module 3 Preparing the NSX-T Data Center Infrastructure 57


3-2 Importance ............................................................................................................58
3-3 Module Lessons....................................................................................................59
3-4 Deploying NSX Management Cluster ...................................................................60
3-5 Learner Objectives ...............................................................................................61
3-6 Preparing the Infrastructure for NSX-T Data Center ............................................62
3-7 NSX Manager Deployment Considerations ..........................................................63
3-8 NSX Manager Node Sizing...................................................................................64
3-9 Accessing the NSX Manager UI ...........................................................................65
3-10 Registering Compute Managers to NSX-T Data Center ......................................66
3-11 Verifying the Registration of Compute Manager to NSX-T Data Center ..............67
3-12 Deploying Additional NSX Manager Instances (1) ...............................................68
3-13 Deploying Additional NSX Manager Instances (2) ...............................................69
3-14 Management Cluster Status: GUI (1) ...................................................................70
3-15 Management Cluster Status: GUI (2) ....................................................................71
3-16 Configuring the Virtual IP Address .......................................................................72

3-17 Management Cluster Status: CLI (1) ....................................................................73
3-18 Management Cluster Status: CLI (2) ....................................................................74
3-19 NSX Manager Deployment on KVM Hosts ...........................................................75
3-20 Review of Learner Objectives...............................................................................76
3-21 Navigating the NSX Manager UI ..........................................................................77
3-22 Learner Objectives ...............................................................................................78
3-23 NSX Manager Simplified and Advanced User Interfaces (1) ...............................79
3-24 NSX Manager Simplified and Advanced User Interfaces (2) ...............................80
3-25 Networking View ...................................................................................................81
3-26 Security View ........................................................................................................82
3-27 Inventory View ......................................................................................................83
3-28 Tools View ............................................................................................................84
3-29 System View .........................................................................................................85
3-30 Labs ......................................................................................................................86
3-31 Lab: Labs Introduction ..........................................................................................87
3-32 Lab: Reviewing the Configuration of the Predeployed NSX Manager Instance .....88
3-33 Lab Simulation: Deploying a 3-Node NSX Management Cluster .........................89
3-34 Review of Learner Objectives...............................................................................90
3-35 Preparing the Data Plane .....................................................................................91
3-36 Learner Objectives ...............................................................................................92
3-37 Data Plane Components and Functions ...............................................................93
3-38 Transport Node Overview.....................................................................................94
3-39 Transport Node Components and Architecture ....................................................95
3-40 Transport Node Physical Connectivity .................................................................96
3-41 About IP Address Pools ........................................................................................97
3-42 About Transport Zones (1) ...................................................................................98
3-43 About Transport Zones (2) ...................................................................................99
3-44 About N-VDS ......................................................................................................100
3-45 N-VDS on ESXi Transport Nodes.......................................................................102
3-46 N-VDS on KVM Transport Nodes .......................................................................103
3-47 Transport Zone and N-VDS Mapping .................................................................104
3-48 Creating Transport Zones ...................................................................................105
3-49 N-VDS Operational Modes .................................................................................107
3-50 Enhanced Datapath Mode ..................................................................................109
3-51 Reviewing the Transport Zone Configuration .....................................................110

3-52 Physical NICs, LAGs, and Uplinks .....................................................................111
3-53 About Uplink Profiles ..........................................................................................112
3-54 Default Uplink Profiles ........................................................................................113
3-55 Types of Teaming Policies .................................................................................114
3-56 Teaming Policies Supported by ESXi and KVM Hosts.......................................115
3-57 Teaming Policy ...................................................................................................116
3-58 About LLDP ........................................................................................................117
3-59 Enabling LLDP Profiles .......................................................................................118
3-60 About Network I/O Control Profiles.....................................................................119
3-61 Creating Network I/O Control Profiles (1) ...........................................................120
3-62 Creating Network I/O Control Profiles (2) ...........................................................121
3-63 About Transport Node Profiles (1) ......................................................................122
3-64 About Transport Node Profiles (2) ......................................................................123
3-65 Benefits of Transport Node Profiles....................................................................124
3-66 Transport Node Profile Considerations ..............................................................125
3-67 Transport Node Profile Prerequisites .................................................................126
3-68 Attaching a Transport Node Profile to the ESXi Cluster .....................................127
3-69 Managed ESXi: Host Preparation (1) .................................................................128
3-70 Managed ESXi: Host Preparation (2) .................................................................129
3-71 Reviewing ESXi Transport Node Status .............................................................130
3-72 Verifying ESXi Transport Node by CLI ...............................................................131
3-73 Transport Node Preparation: KVM .....................................................................133
3-74 Configuring KVM Hosts as Transport Nodes (1) ................................................134
3-75 Configuring KVM Hosts as Transport Nodes (2) ................................................135
3-76 Reviewing KVM Transport Node Status .............................................................136
3-77 Verifying the KVM Transport Node by CLI .........................................................137
3-78 Lab: Preparing the NSX-T Data Center Infrastructure .......................................138
3-79 Review of Learner Objectives.............................................................................139
3-80 Key Points...........................................................................................................140

Module 4 NSX-T Data Center Logical Switching 141


4-2 Importance ..........................................................................................................142
4-3 Module Lessons..................................................................................................143
4-4 Logical Switching Overview ................................................................................144
4-5 Learner Objectives .............................................................................................145
4-6 Logical Switching Use Cases .............................................................................146

4-7 Prerequisites for Logical Switching.....................................................................147
4-8 Logical Switching Terminology ...........................................................................148
4-9 About Segments (1) ............................................................................................150
4-10 About Segments (2) ............................................................................................151
4-11 About Tunneling..................................................................................................152
4-12 About GENEVE ..................................................................................................154
4-13 GENEVE Header Format ...................................................................................156
4-14 Logical Switching: End-to-End Communication .................................................158
4-15 Review of Learner Objectives.............................................................................160
4-16 Logical Switching Architecture............................................................................161
4-17 Learner Objectives .............................................................................................162
4-18 Management Plane and Central Control Plane Agents......................................163
4-19 Creating Segments on ESXi Hosts (1) ...............................................................164
4-20 Creating Segments on ESXi Hosts (2) ...............................................................165
4-21 Creating Segments on KVM Hosts (1) ...............................................................166
4-22 Creating Segments on KVM Hosts (2) ...............................................................167
4-23 NSX-T Data Center Communication Channels ..................................................168
4-24 Review of Learner Objectives.............................................................................169
4-25 Configuring Segments ........................................................................................170
4-26 Learner Objectives .............................................................................................171
4-27 Segment Configuration Tasks ............................................................................172
4-28 Creating Segments .............................................................................................173
4-29 Viewing Configured Segments ...........................................................................174
4-30 Attaching VMs to a Segment ..............................................................................175
4-31 Workflow: Attaching a vSphere VM to a Segment (1) ........................................176
4-32 Workflow: Attaching a vSphere VM to a Segment (2) ........................................177
4-33 Attaching a KVM VM to a Segment ....................................................................178
4-34 Workflow: Attaching a KVM VM to a Segment (1)..............................................180
4-35 Workflow: Attaching a KVM VM to a Segment (2)..............................................181
4-36 Viewing the Switching Configuration in the Advanced and Simplified UIs .........182
4-37 Verifying L2 End-to-End Connectivity .................................................................183
4-38 Lab: Configuring Segments ................................................................................184
4-39 Review of Learner Objectives.............................................................................185
4-40 Configuring Segment Profiles .............................................................................186
4-41 Learner Objectives .............................................................................................187
4-42 About Segment Profiles (1) ................................................................................188

4-43 About Segment Profiles (2) ................................................................................189
4-44 Default Segment Profiles ....................................................................................190
4-45 Applying Segment Profiles to Segments ............................................................191
4-46 Applying Segment Profiles to L2 Ports ...............................................................192
4-47 IP Discovery Segment Profile .............................................................................193
4-48 Creating an IP Discovery Segment Profile (1)....................................................194
4-49 Creating an IP Discovery Segment Profile (2) ....................................................196
4-50 MAC Discovery Segment Profile ........................................................................197
4-51 QoS Segment Profile ..........................................................................................199
4-52 Segment Security Profile ....................................................................................201
4-53 SpoofGuard Segment Profile..............................................................................203
4-54 Creating a SpoofGuard Segment Profile ............................................................204
4-55 Review of Learner Objectives.............................................................................206
4-56 Logical Switching Packet Forwarding .................................................................207
4-57 Learner Objectives .............................................................................................208
4-58 NSX-T Data Center Controller Tables ................................................................209
4-59 TEP Table Update (1) .........................................................................................210
4-60 TEP Table Update (2) .........................................................................................211
4-61 TEP Table Update (3) .........................................................................................212
4-62 TEP Table Update (4) .........................................................................................213
4-63 MAC Table Update (1) ........................................................................................214
4-64 MAC Table Update (2) ........................................................................................215
4-65 MAC Table Update (3) ........................................................................................216
4-66 MAC Table Update (4) ........................................................................................217
4-67 ARP Table Update (1) ........................................................................................218
4-68 ARP Table Update (2) ........................................................................................219
4-69 ARP Table Update (3) ........................................................................................220
4-70 ARP Table Update (4) ........................................................................................221
4-71 Unicast Packet Forwarding Across Hosts (1) .....................................................222
4-72 Unicast Packet Forwarding Across Hosts (2) .....................................................223
4-73 Unicast Packet Forwarding Across Hosts (3) .....................................................224
4-74 Unicast Packet Forwarding Across Hosts (4) .....................................................225
4-75 BUM Traffic Overview .........................................................................................226
4-76 Handling BUM Traffic: Head Replication ............................................................228
4-77 Handling BUM Traffic: Hierarchical Two-Tier Replication ..................................229
4-78 Review of Learner Objectives.............................................................................230

4-79 Key Points...........................................................................................................231

Module 5 NSX-T Data Center Logical Routing 233


5-2 Importance ..........................................................................................................234
5-3 Module Lessons..................................................................................................235
5-4 Logical Routing Overview ...................................................................................236
5-5 Learner Objectives .............................................................................................237
5-6 Logical Routing Use Cases ................................................................................238
5-7 Prerequisites for Logical Routing........................................................................240
5-8 Logical Routing in NSX-T Data Center ...............................................................241
5-9 Gateway Components: Distributed Router and Service Router .........................243
5-10 Gateway: Distributed Router (1) .........................................................................244
5-11 Gateway: Distributed Router (2) .........................................................................245
5-12 Gateway: Service Router....................................................................................246
5-13 Interaction between Distributed and Service Routers ........................................247
5-14 About Edge Nodes .............................................................................................248
5-15 Logical Routing: Multitier Topology ....................................................................249
5-16 Tier-0 and Tier-1 Gateways ................................................................................250
5-17 Logical Router Interfaces ....................................................................................251
5-18 Centralized Service Port .....................................................................................253
5-19 Single-Tier Deployment Example .......................................................................254
5-20 Multitier Topology Examples ..............................................................................255
5-21 Tier-0 Gateway Uplink Connections ...................................................................256
5-22 Review of Learner Objectives.............................................................................257
5-23 NSX Edge and Edge Clusters ............................................................................258
5-24 Learner Objectives .............................................................................................259
5-25 NSX Edge Functions ..........................................................................................260
5-26 NSX Edge VM Form Factor and Sizing Options ................................................261
5-27 NSX Edge Bare Metal Hardware Requirements ................................................262
5-28 Logical Routing Topology (1)..............................................................................263
5-29 Logical Routing Topology (2)..............................................................................264
5-30 Logical Routing Topology (3)..............................................................................265
5-31 NSX Edge Cluster Guidelines ............................................................................266
5-32 NSX Edge Node Deployment Prerequisites .......................................................267
5-33 Deploying NSX Edge Nodes from the Simplified UI ...........................................268
5-34 Using vCenter Server to Deploy NSX Edge Nodes............................................269

5-35 Using the OVF Tool to Deploy NSX Edge Nodes ..............................................270
5-36 Installing NSX Edge on Bare Metal ....................................................................272
5-37 Using PXE to Deploy NSX Edge Nodes from an ISO File .................................273
5-38 Joining NSX Edge with the Management Plane.................................................274
5-39 Verifying the Edge Transport Node Status .........................................................275
5-40 Enabling Edge Node SSH Service .....................................................................276
5-41 Postdeployment Verification Checklist ...............................................................277
5-42 Creating an Edge Cluster ...................................................................................278
5-43 Mapping NSX Edge Node Interfaces (1) ............................................................279
5-44 Mapping NSX Edge Node Interfaces (2) ............................................................280
5-45 Verifying NSX Edge Node Interfaces Mapping ..................................................281
5-46 Edge Node VM Deployment Options..................................................................282
5-47 Lab: Deploying and Configuring NSX Edge Nodes ............................................284
5-48 Review of Learner Objectives.............................................................................285
5-49 Configuring Tier-0 and Tier-1 Gateways ............................................................286
5-50 Learner Objectives .............................................................................................287
5-51 Gateway Configuration Tasks ............................................................................288
5-52 Configuring a Tier-0 Gateway: Step 1 ................................................................289
5-53 Configuring a Tier-0 Gateway: Step 2 ................................................................290
5-54 Configuring a Tier-0 Gateway: Step 3 ................................................................291
5-55 Configuring a Tier-0 Gateway: Step 4 ................................................................292
5-56 Configuring a Tier-0 Gateway: Step 5 ................................................................293
5-57 Reviewing the Tier-0 Gateway Configuration .....................................................294
5-58 Configuring a Tier-1 Gateway: Step 1 ................................................................295
5-59 Configuring a Tier-1 Gateway: Step 2 ................................................................296
5-60 Testing East-West Connectivity..........................................................................297
5-61 Configuring a Tier-1 Gateway: Step 3 ................................................................298
5-62 Configuring a Tier-1 Gateway: Step 4 ................................................................299
5-63 Testing North-South Connectivity .......................................................................300
5-64 Routing Topologies .............................................................................................301
5-65 Single-Tier Topology ..........................................................................................302
5-66 Single-Tier Routing: Egress to Physical Network (1) .........................................303
5-67 Single-Tier Routing: Egress to Physical Network (2) .........................................304
5-68 Single-Tier Routing: Egress to Physical Network (3) .........................................305
5-69 Single-Tier Routing: Egress to Physical Network (4) .........................................306
5-70 Single-Tier Routing: Egress to Physical Network (5) .........................................307

5-71 Single-Tier Routing: Egress to Physical Network (6) .........................................308
5-72 Single-Tier Routing: Ingress from Physical Network (7).....................................309
5-73 Single-Tier Routing: Ingress from Physical Network (8).....................................310
5-74 Single-Tier Routing: Ingress from Physical Network (9).....................................311
5-75 Single-Tier Routing: Ingress from Physical Network (10)...................................312
5-76 Single-Tier Routing: Ingress from Physical Network (11)...................................313
5-77 Single-Tier Routing: Ingress from Physical Network (12)...................................314
5-78 Single-Tier Routing: Ingress from Physical Network (13)...................................315
5-79 Multitier Topology (1) ..........................................................................................316
5-80 Multitier Topology (2) ..........................................................................................317
5-81 Multitier Topology (3) ..........................................................................................318
5-82 Multitier Routing: Egress to Physical Network Example ....................................319
5-83 Multitier Routing: Egress to Physical Network (1) ..............................................320
5-84 Multitier Routing: Egress to Physical Network (2) ..............................................321
5-85 Multitier Routing: Egress to Physical Network (3) ..............................................322
5-86 Multitier Routing: Egress to Physical Network (4) ..............................................323
5-87 Multitier Routing: Egress to Physical Network (5) ..............................................324
5-88 Multitier Routing: Egress to Physical Network (6) ..............................................325
5-89 Multitier Routing: Egress to Physical Network (7) ..............................................326
5-90 Multitier Routing: Egress to Physical Network (8) ..............................................327
5-91 Multitier Routing: Egress to Physical Network (9) ..............................................328
5-92 Multitier Routing: Egress to Physical Network (10) ............................................329
5-93 Multitier Routing: Egress to Physical Network (11) ............................................330
5-94 Multitier Routing: Egress to Physical Network (12) ............................................331
5-95 Multitier Routing: Egress to Physical Network (13) ............................................332
5-96 Multitier Routing: Egress to Physical Network (14) ............................................333
5-97 Multitier Routing: Egress to Physical Network (15) ............................................334
5-98 Multitier Routing: Egress to Physical Network (16) ............................................335
5-99 Multitier Routing: Egress to Physical Network (17) ............................................336
5-100 Lab: Configuring the Tier-1 Gateway .................................................................337
5-101 Review of Learner Objectives.............................................................................338
5-102 Configuring Static and Dynamic Routing ............................................................339
5-103 Learner Objectives .............................................................................................340
5-104 Static and Dynamic Routing ...............................................................................341
5-105 Tier-0 Gateway Capabilities ...............................................................................342
5-106 Configuring Static Routes on a Tier-0 Gateway (1)............................................343
5-107 Configuring Static Routes on a Tier-0 Gateway (2)............................................344
5-108 Viewing the Static Route Configuration ..............................................................345
5-109 BGP on Tier-0 .....................................................................................................346
5-110 Routing Features Supported by the Tier-0 Gateway ..........................................347
5-111 Configuring Dynamic Routing on Tier-0 Gateways: Step 1 ................................348
5-112 Configuring Dynamic Routing on Tier-0 Gateways: Step 2 ................................349
5-113 Configuring Dynamic Routing on Tier-0 Gateways: Step 3 ................................350
5-114 Verifying BGP Configuration of Tier-0 Gateway on Edge Nodes .......................351
5-115 BFD on a Tier-0 Gateway ...................................................................................352
5-116 Enabling BFD on a Tier-0 Gateway ....................................................................353
5-117 About IP Prefix Lists ...........................................................................................354
5-118 Configuring an IP Prefix List ...............................................................................355
5-119 About Route Maps (1) ........................................................................................356
5-120 About Route Maps (2) ........................................................................................357
5-121 Using Route Maps in BGP Route Advertisements .............................................358
5-122 BGP Feature: Allowas-In ....................................................................................359
5-123 BGP Feature: Multipath Relax ............................................................................360
5-124 Internal BGP Support .........................................................................................362
5-125 About Inter-SR Routing ......................................................................................363
5-126 Inter-SR Routing Characteristics ........................................................................364
5-127 Inter-SR Routing Example (1) ............................................................................365
5-128 Inter-SR Routing Example (2) ............................................................................366
5-129 Inter-SR Routing Example (3) ............................................................................367
5-130 Lab: Configuring the Tier-0 Gateway .................................................................368
5-131 Review of Learner Objectives.............................................................................369
5-132 ECMP and High Availability ................................................................................370
5-133 Learner Objectives .............................................................................................371
5-134 About Equal-Cost Multipath Routing ..................................................................372
5-135 Enabling ECMP ..................................................................................................373
5-136 Edge Node High Availability ...............................................................................374
5-137 Tier-0 Gateway Active-Active Mode ...................................................................375
5-138 Tier-0 Gateway Active-Standby Mode ................................................................376
5-139 Failure Conditions and Failover Process (1) ......................................................377
5-140 Failure Conditions and Failover Process (2) ......................................................378
5-141 Failure Conditions and Failover Process (3) ......................................................379
5-142 Edge Node Failback Modes ...............................................................................380
5-143 Lab: Verifying Equal Cost Multipathing Configurations ......................................381
5-144 Review of Learner Objectives.............................................................................382
5-145 Key Points (1) .....................................................................................................383
5-146 Key Points (2) .....................................................................................................384

Module 6 NSX-T Data Center Logical Bridging 385


6-2 Importance ..........................................................................................................386
6-3 Learner Objectives .............................................................................................387
6-4 Logical Bridging Use Cases ...............................................................................388
6-5 Routing and Bridging for Physical-to-Virtual Communication ............................389
6-6 Virtual-to-Physical Routing Example ..................................................................390
6-7 Virtual-to-Physical Bridging Example .................................................................391
6-8 Logical Bridging Overview ..................................................................................392
6-9 Creating a Bridge Cluster ...................................................................................394
6-10 Logical Bridging on NSX Edge Nodes ................................................................395
6-11 Benefits of Configuring Logical Bridging on NSX Edge Nodes ..........................396
6-12 Bridge Profiles on NSX Edge Nodes ..................................................................397
6-13 Using Multiple Bridge Profiles on NSX Edge Nodes ..........................................398
6-14 Creating an Edge Bridge Profile .........................................................................399
6-15 Configuring a Layer 2 Bridge-Backed Logical Switch ........................................400
6-16 Monitoring the Bridged Traffic Statistics .............................................................402
6-17 Review of Learner Objectives.............................................................................403
6-18 Key Points...........................................................................................................404

Module 7 NSX-T Data Center Services 405


7-2 Importance ..........................................................................................................406
7-3 Module Lessons..................................................................................................407
7-4 Configuring NAT .................................................................................................408
7-5 Learner Objectives .............................................................................................409
7-6 About NAT ..........................................................................................................410
7-7 About SNAT ........................................................................................................412
7-8 About DNAT........................................................................................................413
7-9 Reflexive NAT (Stateless NAT) ..........................................................................414
7-10 Configuring SNAT and DNAT .............................................................................415
7-11 Configuring the No SNAT Rule...........................................................................416
7-12 Configuring the No DNAT Rule ..........................................................................417
7-13 Configuring Reflexive NAT .................................................................................419
7-14 NAT Packet Flow Logical Topology....................................................................420
7-15 NAT Packet Flow (1) ..........................................................................................421
7-16 NAT Packet Flow (2) ..........................................................................................422
7-17 NAT Packet Flow (3) ..........................................................................................423
7-18 NAT Packet Flow (4) ..........................................................................................424
7-19 NAT Packet Flow (5) ..........................................................................................425
7-20 NAT Packet Flow (6) ..........................................................................................426
7-21 NAT Packet Flow (7) ..........................................................................................427
7-22 NAT Packet Flow (8) ..........................................................................................428
7-23 NAT Packet Flow (9) ..........................................................................................429
7-24 NAT Packet Flow (10) ........................................................................................430
7-25 NAT Packet Flow (11) ........................................................................................431
7-26 Lab: Configuring Network Address Translation ..................................................432
7-27 Review of Learner Objectives.............................................................................433
7-28 Configuring DHCP and DNS Services ...............................................................434
7-29 Learner Objectives .............................................................................................435
7-30 About DHCP Services ........................................................................................436
7-31 DHCP Architecture .............................................................................................437
7-32 DHCP Use Cases ...............................................................................................438
7-33 DHCP Workflow ..................................................................................................439
7-34 Creating the DHCP Server .................................................................................440
7-35 Configuring the DHCP Server on the Tier-1 Gateway........................................441
7-36 Configuring the Subnet on the Segment ............................................................442
7-37 Editing Segments................................................................................................443
7-38 Viewing the DHCP Server Status .......................................................................444
7-39 DHCP Configuration Details: Advanced UI ........................................................445
7-40 DHCP Server Router Ports on Tier-1 Gateways ................................................446
7-41 DHCP Server and IP Pool Information in the Advanced UI ...............................447
7-42 DHCP Relay .......................................................................................................448
7-43 Configuring the DHCP Relay Server on Tier-1 Gateways..................................449
7-44 Configuring Segments with Gateway and DHCP IP Address Ranges ...............450
7-45 Local and Remote DHCP Server Configuration .................................................451
7-46 About DNS Services ...........................................................................................452
7-47 About DNS Forwarder ........................................................................................453
7-48 DNS Forwarder Benefits .....................................................................................454
7-49 Configuring DNS Services and DNS Zones (1)..................................................455
7-50 Configuring DNS Services and DNS Zones (2)..................................................457
7-51 Verifying the DNS Forwarder..............................................................................458
7-52 Lab: Configuring the DHCP Server on the NSX Edge Node ..............................459
7-53 Review of Learner Objectives.............................................................................460
7-54 Configuring Load Balancing ...............................................................................461
7-55 Learner Objectives .............................................................................................462
7-56 Load Balancing Use Cases ................................................................................463
7-57 Layer 4 Load Balancing ......................................................................................464
7-58 Layer 7 Load Balancing ......................................................................................465
7-59 Load Balancer Architecture ................................................................................466
7-60 Connecting to Tier-1 Gateways ..........................................................................467
7-61 Virtual Servers ....................................................................................................468
7-62 About Profiles .....................................................................................................469
7-63 About Server Pools .............................................................................................470
7-64 About Monitors....................................................................................................471
7-65 Relationships Among Load Balancer Components ............................................472
7-66 Load Balancer Scalability (1) ..............................................................................473
7-67 Load Balancer Scalability (2) ..............................................................................474
7-68 Load Balancing Deployment Modes ...................................................................475
7-69 Inline Topology ...................................................................................................476
7-70 One-Arm Topology (1) ........................................................................................477
7-71 One-Arm Topology (2) ........................................................................................478
7-72 Load Balancing Configuration Steps ..................................................................479
7-73 Creating Load Balancers ....................................................................................480
7-74 Creating Virtual Servers .....................................................................................481
7-75 Configuring Layer 4 Virtual Servers....................................................................482
7-76 Configuring Layer 7 Virtual Servers....................................................................483
7-77 Configuring Application Profiles..........................................................................484
7-78 Configuring Persistence Profiles ........................................................................485
7-79 Layer 7 Load Balancer SSL Modes ....................................................................486
7-80 Configuring Layer 7 SSL Profiles .......................................................................487
7-81 Configuring Layer 7 Load Balancer Rules ..........................................................488
7-82 Creating Server Pools ........................................................................................489
7-83 Configuring Load Balancing Algorithms .............................................................490
7-84 Configuring SNAT Translation Modes ................................................................491
7-85 Configuring Active Monitors................................................................................492
7-86 Configuring Passive Monitors .............................................................................493
7-87 Lab: Configuring Load Balancing .......................................................................494
7-88 Review of Learner Objectives.............................................................................495
7-89 IPSec VPN ..........................................................................................................496
7-90 Learner Objectives .............................................................................................497
7-91 NSX-T Data Center VPN Services .....................................................................498
7-92 IPSec VPN Use Cases .......................................................................................499
7-93 IPSec VPN Methods ...........................................................................................500
7-94 IPSec VPN Modes ..............................................................................................501
7-95 IPSec VPN Protocols and Algorithms.................................................................502
7-96 IPSec VPN Certificate-Based Authentication .....................................................503
7-97 IPSec VPN Dead Peer Detection .......................................................................504
7-98 IPSec VPN Types ...............................................................................................505
7-99 IPSec VPN Deployment Considerations ............................................................506
7-100 IPSec VPN High Availability ...............................................................................507
7-101 IPSec VPN Scalability ........................................................................................508
7-102 IPSec VPN Configuration Steps .........................................................................509
7-103 Configuring an IPSec VPN Service ....................................................................510
7-104 Configuring DPD Profiles....................................................................................511
7-105 Configuring IKE Profiles .....................................................................................512
7-106 Configuring IPSec Profiles ..................................................................................513
7-107 Configuring Local Endpoints...............................................................................515
7-108 Configuring IPSec VPN Sessions (1) .................................................................516
7-109 Configuring IPSec VPN Sessions (2) .................................................................517
7-110 Configuring IPSec VPN Sessions (3) .................................................................519
7-111 Configuring IPSec VPN Sessions (4) .................................................................520
7-112 Review of Learner Objectives.............................................................................522
7-113 L2 VPN ...............................................................................................................523
7-114 Learner Objectives .............................................................................................524
7-115 L2 VPN Use Cases .............................................................................................525
7-116 L2 VPN in NSX-T Data Center ...........................................................................526
7-117 L2 VPN Deployment Considerations ..................................................................527
7-118 L2 VPN Hub-and-Spoke Topology .....................................................................528
7-119 L2 VPN Packet Format .......................................................................................529
7-120 L2 VPN Edge Packet Flow .................................................................................530
7-121 L2 VPN Scalability ..............................................................................................531
7-122 L2 VPN Server Configuration Steps ...................................................................532
7-123 Configuring the L2 VPN Server (1) .....................................................................533
7-124 Configuring the L2 VPN Server (2) .....................................................................534
7-125 Configuring the L2 VPN Server (3) .....................................................................535
7-126 Configuring the L2 VPN Server (4) .....................................................................536
7-127 Supported L2 VPN Clients..................................................................................537
7-128 L2 VPN Peer Compatibility Matrix ......................................................................538
7-129 About Standalone Edge......................................................................................539
7-130 About NSX-Managed Edge (NSX Data Center for vSphere) .............................540
7-131 About NSX-Managed Edge (NSX-T Data Center)..............................................541
7-132 Configuring the L2 VPN Managed Client (1) ......................................................542
7-133 Configuring the L2 VPN Managed Client (2) ......................................................543
7-134 Configuring the L2 VPN Managed Client (3) ......................................................544
7-135 Configuring the L2 VPN Managed Client (4) ......................................................545
7-136 Lab: Deploying Virtual Private Networks ............................................................546
7-137 Review of Learner Objectives.............................................................................547
7-138 Key Points (1) .....................................................................................................548
7-139 Key Points (2) .....................................................................................................549

Module 8 NSX-T Data Center Security 551


8-2 Importance ..........................................................................................................552
8-3 Module Lessons..................................................................................................553
8-4 NSX-T Data Center Micro-Segmentation ...........................................................554
8-5 Learner Objectives .............................................................................................555
8-6 Traditional Data Center Security ........................................................................556
8-7 Data Center Security Requirements ...................................................................558
8-8 Micro-Segmentation in NSX-T Data Center .......................................................560
8-9 Enforcing the Zero-Trust Security Model of Micro-Segmentation (1) .................562
8-10 Enforcing the Zero-Trust Security Model of Micro-Segmentation (2) .................563
8-11 Enforcing the Zero-Trust Security Model of Micro-Segmentation (3) .................564
8-12 Micro-Segmentation Use Cases .........................................................................565
8-13 Micro-Segmentation Benefits .............................................................................566
8-14 Review of Learner Objectives.............................................................................567
8-15 NSX-T Data Center Distributed Firewall .............................................................568
8-16 Learner Objectives .............................................................................................569
8-17 NSX-T Data Center Firewalls (1) ........................................................................570
8-18 NSX-T Data Center Firewalls (2) ........................................................................571
8-19 Features of the Distributed Firewall ....................................................................572
8-20 Distributed Firewall: Key Concepts (1) ...............................................................574
8-21 Distributed Firewall: Key Concepts (2) ...............................................................575
8-22 Creating a Domain ..............................................................................................576
8-23 Security Policy Overview ....................................................................................577
8-24 Distributed Firewall Policy ..................................................................................578
8-25 Configuring Distributed Firewall Policies (1) .......................................................580
8-26 Configuring Distributed Firewall Policies (2) .......................................................581
8-27 Configuring Distributed Firewall Policy Settings .................................................583
8-28 Creating Distributed Firewall Rules ....................................................................584
8-29 Configuring Distributed Firewall Rule Parameters .............................................585
8-30 Specifying Sources and Destinations for a Rule ................................................586
8-31 Creating Groups .................................................................................................587
8-32 Adding Members and Member Criteria for a Group ...........................................588
8-33 Viewing the Configured Groups..........................................................................589
8-34 Specifying Services for a Rule............................................................................590
8-35 Predefined and User-Created Services ..............................................................591
8-36 Adding a Context Profile to a Rule .....................................................................592
8-37 Predefined and User-Created Context Profiles ..................................................593
8-38 Configuring Context Profile Attributes ................................................................594
8-39 Setting the Scope of Rule Enforcement .............................................................595
8-40 Specifying Distributed Firewall Settings .............................................................596
8-41 Filtering the Display of Firewall Rules ................................................................597
8-42 Determining the Default Firewall Behavior .........................................................598
8-43 Viewing the Default Firewall Rules .....................................................................599
8-44 Distributed Firewall Architecture .........................................................................600
8-45 Distributed Firewall Architecture: ESXi ...............................................................601
8-46 Distributed Firewall Architecture: KVM ...............................................................602
8-47 Lab: Configuring the NSX Distributed Firewall ...................................................603
8-48 Review of Learner Objectives.............................................................................604
8-49 NSX-T Data Center Gateway Firewall ................................................................605
8-50 Learner Objectives .............................................................................................606
8-51 About NSX-T Data Center Gateway Firewall .....................................................607
8-52 Gateway Firewall on Tier-0 Gateway for Perimeter Protection ..........................609
8-53 Gateway Firewall Policy .....................................................................................610
8-54 Predefined Gateway Firewall Categories ...........................................................611
8-55 Configuring the Gateway Firewall Policy Settings ..............................................612
8-56 Configuring Firewall Rules ..................................................................................614
8-57 Configuring Gateway Firewall Rules Settings ....................................................615
8-58 Gateway Firewall Architecture ............................................................................616
8-59 Lab: Configuring the NSX Gateway Firewall ......................................................617
8-60 Review of Learner Objectives.............................................................................618
8-61 NSX-T Data Center Service Insertion ................................................................619
8-62 Learner Objectives .............................................................................................620
8-63 About Service Insertion ......................................................................................621
8-64 About Network Introspection ..............................................................................622
8-65 North-South Network Introspection Overview ....................................................623
8-66 Configuring North-South Network Introspection .................................................624
8-67 Registering a Partner Service.............................................................................625
8-68 Deploying a Partner Service Instance ................................................................626
8-69 Configuring Traffic Redirection to Partners ........................................................627
8-70 East-West Network Introspection Overview .......................................................628
8-71 Configuring East-West Network Introspection....................................................629
8-72 Registering Partner Services ..............................................................................631
8-73 Deploying an Instance of a Registered Service .................................................632
8-74 Creating a Service Profile for East-West Network Introspection ........................633
8-75 Creating Service Chains .....................................................................................634
8-76 Configuring Redirection Rules ............................................................................635
8-77 Endpoint Protection Overview and Use Cases ..................................................636
8-78 Endpoint Protection Process ..............................................................................637
8-79 Automatic Policy Enforcement for New VMs ......................................................638
8-80 Automated Virus or Malware Quarantine with Tags Example ............................639
8-81 Creating a Service Profile for Endpoint Protection .............................................640
8-82 Configuring Endpoint Protection Rules ..............................................................641
8-83 Review of Learner Objectives.............................................................................642
8-84 Key Points (1) .....................................................................................................643
8-85 Key Points (2) .....................................................................................................644

Module 9 NSX-T Data Center User and Role Management 647


9-2 Importance ..........................................................................................................648

9-3 Module Lessons..................................................................................................649
9-4 Integrating NSX-T Data Center and VMware Identity Manager .........................650
9-5 Learner Objectives .............................................................................................651
9-6 About VMware Identity Manager ........................................................................652
9-7 Benefits of Integrating VMware Identity Manager and NSX-T Data Center .......653
9-8 VMware Identity Manager Integration Pre-Requisites ........................................654
9-9 Configuring VMware Identity Manager ...............................................................655
9-10 VMware Identity Manager and NSX-T Data Center Integration Overview .........657
9-11 Creating a New OAuth Client .............................................................................658
9-12 Getting the SHA-256 Certificate Thumbprint ......................................................660
9-13 Configuring VMware Identity Manager Details in NSX-T Data Center ...............661
9-14 Verifying VMware Identity Manager Integration .................................................662
9-15 Default UI Login ..................................................................................................663
9-16 UI Login with VMware Identity Manager .............................................................664
9-17 Local Login with VMware Identity Manager ........................................................665
9-18 Review of Learner Objectives.............................................................................666
9-19 Managing Users and Configuring RBAC ............................................................667
9-20 Learner Objectives .............................................................................................668
9-21 NSX-T Data Center Users ..................................................................................669
9-22 User Access and Authentication Policy Management ........................................670
9-23 Local Users .........................................................................................................671
9-24 Changing the Password for Local Users ............................................................672
9-25 Configuring Authentication Policy Settings for Local Users ...............................673
9-26 Configuring Authentication Policy Settings for VMware Identity Manager
Users ..................................................................................................................674
9-27 Using Role-Based Access Control .....................................................................675
9-28 Permissions Hierarchy ........................................................................................676
9-29 Built-in Roles (1) .................................................................................................677
9-30 Built-in Roles (2) .................................................................................................678
9-31 Role Assignment for Local Users .......................................................................679
9-32 Role Assignment for VMware Identity Manager Users.......................................680
9-33 Lab: Managing Users and Roles with VMware Identity Manager ......................681
9-34 Review of Learner Objectives.............................................................................682
9-35 Key Points...........................................................................................................683

Module 10 NSX-T Data Center Tools and Basic Troubleshooting 685
10-2 Importance ..........................................................................................................686
10-3 Module Lessons..................................................................................................687
10-4 Troubleshooting Overview and Log Collection ...................................................688
10-5 Learner Objectives .............................................................................................689
10-6 About the Troubleshooting Process ...................................................................690
10-7 Differentiating Between Symptoms and Causes ................................................691
10-8 Local Logging on NSX-T Data Center Components ..........................................692
10-9 Viewing NSX Policy Manager Logs ....................................................................693
10-10 Viewing the NSX Manager Syslog......................................................................694
10-11 Viewing the NSX Controller Log .........................................................................695
10-12 Viewing the ESXi Host Log.................................................................................696
10-13 Viewing the KVM Host Log .................................................................................697
10-14 Syslog Overview .................................................................................................698
10-15 Configuring Syslog Exporters (1)........................................................................699
10-16 Configuring Syslog Exporters (2)........................................................................701
10-17 Configuring and Displaying Syslog .....................................................................702
10-18 Generating Technical Support Bundles ..............................................................703
10-19 Monitoring the Support Bundle Status ................................................................704
10-20 Downloading Support Bundles ...........................................................................705
10-21 Labs ....................................................................................................................706
10-22 Lab: Configuring Syslog .....................................................................................707
10-23 Lab: Generating Technical Support Bundles......................................................708
10-24 Review of Learner Objectives.............................................................................709
10-25 Monitoring and Troubleshooting Tools ...............................................................710
10-26 Learner Objectives .............................................................................................711
10-27 Monitoring Components from the NSX Manager Simplified UI ..........................712
10-28 Monitoring Component Status ............................................................................713
10-29 Port Mirroring Overview ......................................................................................714
10-30 Port Mirroring Method: Remote L3 SPAN ..........................................................715
10-31 Port Mirroring Method: Logical SPAN.................................................................716
10-32 Configuring Logical SPAN ..................................................................................717
10-33 Viewing the Logical SPAN Configuration and Mirrored Packets ........................718
10-34 IPFIX Overview ...................................................................................................719
10-35 Configuring IPFIX to Export Traffic Flows ..........................................................720

10-36 Configuring an IPFIX Firewall Profile..................................................................721
10-37 Configuring an IPFIX Switch Profile ...................................................................722
10-38 Configuring IPFIX Collectors ..............................................................................724
10-39 Traceflow Overview (1) .......................................................................................725
10-40 Traceflow Overview (2) .......................................................................................726
10-41 Traceflow Configuration Settings........................................................................727
10-42 Traceflow Operations .........................................................................................728
10-43 Using Traceflow for Troubleshooting ..................................................................729
10-44 About the Port Connection Tool .........................................................................730
10-45 Viewing the Graphical Output of the Port Connection Tool ................................731
10-46 Packet Capture ...................................................................................................732
10-47 Lab: Using Traceflow to Inspect the Path of a Packet........................................733
10-48 Review of Learner Objectives.............................................................................734
10-49 Troubleshooting Basic NSX-T Data Center Problems .......................................735
10-50 Learner Objectives .............................................................................................736
10-51 Common NSX Manager Installation Problems ...................................................737
10-52 Using Logs to Troubleshoot NSX Manager Installation Problems .....................738
10-53 Using CLI Commands to Troubleshoot NSX Manager Installation
Problems.............................................................................................................739
10-54 Viewing the NSX Manager Node Configuration .................................................740
10-55 Verifying Services and States Running on NSX Manager Nodes ......................741
10-56 Verifying NSX Management Cluster Status .......................................................742
10-57 Verifying Communication from Hosts to the NSX Management Cluster ............743
10-58 Troubleshooting Logical Switching Problems.....................................................744
10-59 Verifying the N-VDS Configuration .....................................................................745
10-60 Verifying Overlay Tunnel Reachability (1) ..........................................................746
10-61 Verifying Overlay Tunnel Reachability (2) ..........................................................747
10-62 Troubleshooting Logical Routing Problems ........................................................748
10-63 Retrieving Gateway Information .........................................................................749
10-64 Viewing the Routing Table..................................................................................750
10-65 Viewing the Forwarding Table of the Tier-1 Gateway ........................................751
10-66 Verifying BGP Neighbor Status ..........................................................................752
10-67 Viewing the BGP Route Table ............................................................................753
10-68 Troubleshooting Firewall Problems ....................................................................754
10-69 Verifying Firewall Configuration and Status (1) ..................................................755
10-70 Verifying Firewall Configuration and Status (2) ..................................................756

10-71 Verifying the Firewall Configuration from the KVM Host ....................................757
10-72 Verifying the Firewall Configuration from the ESXi Host ....................................758
10-73 Verifying the Firewall Configuration from the NSX Edge Node ..........................759
10-74 Review of Learner Objectives.............................................................................760
10-75 Key Points...........................................................................................................761

Module 1
Course Introduction



1-2 Course Introduction



1-3 Importance



1-4 Learner Objectives



1-5 Course Outline



1-6 Typographical Conventions



1-7 References



1-8 VMware Online Resources



1-9 VMware Education Overview



1-10 VMware Certification Overview



1-11 VMware Digital Badge Overview

Digital badges contain metadata with skill tags and accomplishments, and are based on Mozilla's
Open Badges standard.



Module 2
VMware Virtual Cloud Network and NSX-T Data Center



2-2 Importance



2-3 Module Lessons



2-4 VMware Virtual Cloud Network



2-5 Learner Objectives



2-6 Virtual Cloud Network Framework

Virtual Cloud Network empowers customers to connect and protect applications and data,
regardless of their physical locations. The purpose of Virtual Cloud Network is to connect and
protect any workload running across any environment. Workloads might be running on-premises
in a customer data center, in a branch, or in a public cloud such as AWS or Azure.
Virtual Cloud Network enables organizations to embrace cloud networking as the software-
defined architecture for connecting everything in a distributed world.
Virtual Cloud Network is a ubiquitous software layer that provides maximum visibility into, and
context for, the interaction among various users, applications, and data. To realize this vision,
VMware designed NSX to support various types of endpoints.
VMware’s software-based approach delivers a networking and security platform that enables
customers to connect, secure, and operate an end-to-end architecture to deliver services to
applications.



VMware’s software-based approach provides the following benefits:

• Enables you to design and build the next generation policy-driven data center that connects,
secures, and automates traditional hypervisors, as well as new microservices-based
(container) applications across a range of deployment targets, such as the data center, cloud,
and so on

• Embeds security into the platform, compartmentalizing the network through micro-segmentation,
encrypting in-flight data, and automatically detecting and responding to security threats

• Delivers a WAN solution that provides full visibility, metrics, control, and automation of all
endpoints



2-7 VMware NSX Portfolio (1)

VMware’s Virtual Cloud Network enables you to run your applications everywhere.
NSX-T Data Center takes what you built from the private data center into the public cloud. NSX-T
Data Center also supports modern applications and technologies, such as containers and the
Internet of Things (IoT).
You can bring key capabilities from one central control point out to wherever your applications
run.



2-8 VMware NSX Portfolio (2)

Network and security virtualization is made up of the following solutions:

• NSX-T Data Center, formerly known as NSX-T, is an end-to-end platform for data center
networking.

• VMware SD-WAN is used for branch and cloud connectivity.

• NSX Cloud extends the data center network into the public cloud, such as VMware Cloud on
AWS or Microsoft Azure Cloud. NSX Cloud also provides container or Kubernetes support
with VMware PKS.

• NSX Hybrid Connect delivers application and network hybridity and mobility.

• AppDefense provides application-centric security based on the intent and behavior of each
application.



2-9 VMware NSX Portfolio (3)

The NSX portfolio supports management and automation tools:

• VMware Network Insight (SaaS) and vRealize Network Insight provide full visibility,
troubleshooting, and optimization across physical, virtual, and cloud environments.

• vRealize Automation is the cloud automation tool for the software-defined data center.



2-10 NSX-T Data Center

NSX-T Data Center offers consistent networking and security services across multiple endpoints,
such as ESXi, kernel-based virtual machines (KVMs), and bare metal workloads. These
workloads can run on the on-premises data center or on public clouds running native workloads,
or they can be powered by VMware Cloud destinations, such as VMware Cloud on AWS, IBM,
OVH public cloud, and the VMware Cloud Provider Program (VCPP).



2-11 NSX-T Data Center Use Cases

You can use NSX-T Data Center for the following purposes:

• Security: Delivers application-centric security at the workload level to prevent the lateral
spread of threats

• Multicloud networking: Brings networking and security consistency across varied sites and
streamlines multicloud operations

• Automation: Enables faster deployment through automation by reducing manual, error-prone
tasks

• Cloud-native applications: Enables native networking and security for containerized workloads
across application frameworks



2-12 NSX-T Data Center Features (1)



2-13 NSX-T Data Center Features (2)



2-14 NSX-T Data Center Ecosystem

Services from ecosystem partners are integrated with the NSX-T Data Center platform in the
management, control, and data planes. The NSX-T Data Center platform creates a unified user
experience and seamless integration with any cloud management platform (CMP), and also
enables separation of roles and duties.
NSX-T Data Center provides a platform for solutions from ecosystem partners to help customers
optimize software-defined data center deployments. The VMware Ready for Networking and
Security NSX Partner Program provides support and certification for partners' integration with
NSX-T Data Center.
The following partner solutions are available:

• Agentless endpoint protection, such as antivirus, antimalware, and so on

• Intrusion prevention system (IPS) and intrusion detection system (IDS)



• Firewall extension to NSX-T Data Center distributed and gateway firewalls, such as guest
introspection and network introspection with service insertion

• Network monitoring



2-15 Review of Learner Objectives



2-16 NSX-T Data Center Architecture and Components



2-17 Learner Objectives



2-18 High-Level Architecture of NSX-T Data Center

Each plane has its own components:

• Management plane: The management plane is designed with advanced clustering technology,
which allows the platform to process large-scale concurrent API requests. NSX Manager
provides the REST API and a web-based UI as the entry points for all user configurations.

• Control plane: The control plane includes a three-node controller cluster, which is responsible
for computing and distributing the runtime virtual networking and security state of the NSX-T
Data Center environment. The control plane is separated into a central control plane and a
local control plane. This separation significantly simplifies the work of the central control
plane and enables the platform to extend and scale for various endpoints. With NSX-T Data
Center 2.4, the management plane and control plane are converged. Each manager node in
NSX-T Data Center is an appliance with converged functions, including management,
control, and policy.

• Data plane: The data plane includes a group of ESXi or KVM hosts, as well as NSX Edge
nodes. The group of servers and edge nodes prepared for NSX-T Data Center are called
transport nodes. Transport nodes are responsible for the distributed forwarding of network
traffic. Rather than relying on the distributed virtual switch, the data plane includes a host
switch called the NSX-managed virtual distributed switch (N-VDS), which decouples the data
plane from the compute manager, such as vCenter Server, and normalizes networking
connectivity.

• Consumption plane: Although the consumption plane is not part of NSX-T Data Center, it
provides integration into virtually any CMP through the REST API and integration with
VMware cloud management planes such as vRealize Automation:

• The consumption of NSX-T Data Center can be driven directly through the NSX
Manager user interface (UI).

• Typically, end users tie network virtualization to their cloud management plane for
deploying applications.

• Integration is also available through OpenStack, Kubernetes, and Pivotal Cloud Foundry.
All operations are performed from the management plane. These operations include create, read,
update, and delete (CRUD).
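As an illustration of the REST entry point mentioned above, the sketch below constructs (but does not send) a declarative Policy API request that would create or update a segment. The host name, credentials, and segment name are placeholder assumptions; the endpoint follows the Policy API pattern introduced with NSX-T Data Center 2.4:

```python
import base64
import json
import urllib.request

# Build (but do not send) a Policy API request that would create or update
# a segment. Host, credentials, and segment name are placeholder values.
host = "nsx-mgr.example.com"  # cluster virtual IP or any manager node
body = json.dumps({"display_name": "web-segment"}).encode()
token = base64.b64encode(b"admin:VMware1!").decode()  # basic auth, placeholder

req = urllib.request.Request(
    url=f"https://{host}/policy/api/v1/infra/segments/web-segment",
    data=body,
    method="PATCH",  # the Policy API uses PATCH for declarative create/update
    headers={"Content-Type": "application/json",
             "Authorization": f"Basic {token}"},
)
print(req.get_method(), req.full_url)
```

Sending the same request through the cluster virtual IP address (covered later in this module) makes the call independent of any individual manager node.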



2-19 Management and Control Planes

Network and security virtualization is made up of several key solutions, providing security,
integration, extensibility, automation, and elasticity.



2-20 About the Data Plane



2-21 About NSX Management Cluster

NSX Manager is a standalone appliance. It includes the management plane, control plane, and
policies. As a result of this integrated approach, users do not need to install the manager,
controller, and policy roles as separate VMs.
The diagram shows that the manager and controller instances run on all three nodes, providing
resiliency. Requests from users through the API or UI can be handled by any of the three manager nodes,
resulting in shared workloads and efficiency.
Although the three services are merged on each node in the cluster, separate resources (CPU,
memory, and so on) are allocated for each of the services.
The distributed persistent database runs across all three nodes, providing the same configuration
view to each node. This way, a manager or controller running on one node has the same view of
the configuration topology as those running on the other two nodes.
The management plane is designed to process large-scale concurrent API calls from CMPs. The
system can be integrated into any CMP and ships with a fully supported OpenStack Neutron
plug-in. As the system scales, the management plane scales out, using advanced clustering technology.



Another feature of the NSX-T Data Center management plane is that it is decoupled from vCenter
Server (compute manager). A compute manager is an application that manages resources such as
hosts and VMs. You can use NSX-T Data Center to manage the networking and security on other
compute platforms, providing options for users. Although not tightly coupled, vCenter Server still
provides greater value because of its ecosystem and functionality integration.
NSX Manager is available in different sizes for different deployment scenarios:

• A small appliance for lab or proof-of-concept deployments

• A medium appliance for deployments of up to 64 hosts

• A large appliance for customers who deploy to a large-scale environment

For more information, see the VMware configuration maximums tool at
https://configmax.vmware.com.



2-22 Benefits of NSX Management Cluster



2-23 NSX Management Cluster with Virtual IP Address

The API and GUI are available on all three manager nodes in the cluster. When a user request is
sent to the virtual IP address, the active manager (the leader that has the virtual IP address
attached) responds to the request. If the leader fails, the two remaining managers elect a new
leader. The new leader responds to the requests sent to that virtual IP address.
The diagram shows that, from the administrator's perspective, a single IP address (the virtual IP
address) is always used to access the NSX Management Cluster.
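The failover behavior described above can be sketched as a toy model. This is purely illustrative: the lowest-node-ID election rule and the node names are assumptions for the sketch, not the actual NSX election algorithm.

```python
# Illustrative sketch of cluster virtual IP (VIP) ownership, NOT the real
# NSX election algorithm. One node "holds" the VIP; if it fails, the
# surviving nodes elect a new owner (here: lowest node ID, an assumption).

class ManagerCluster:
    def __init__(self, node_ids):
        self.alive = set(node_ids)
        self.vip_owner = min(self.alive)  # initial leader holds the VIP

    def fail(self, node_id):
        self.alive.discard(node_id)
        if node_id == self.vip_owner and self.alive:
            # Remaining nodes elect a new leader; the VIP moves with it.
            self.vip_owner = min(self.alive)

    def handle_request(self):
        # Clients always target the single VIP; the current owner answers.
        return f"node-{self.vip_owner}"

cluster = ManagerCluster([1, 2, 3])
print(cluster.handle_request())  # node-1
cluster.fail(1)                  # leader fails, VIP moves
print(cluster.handle_request())  # node-2
```

The key property the model captures is that clients never change the address they use: only the node answering behind the VIP changes.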



2-24 NSX Management Cluster with Load Balancer

The diagram shows how a traditional load balancer can balance traffic across multiple manager
nodes.



2-25 About NSX Policy



2-26 NSX Policy Characteristics

In NSX-T Data Center 2.4, NSX Policy Manager, manager, and controller are still three different
roles, but they are automatically deployed in the same appliance.
A single policy and central management across multiple sites, multiple NSX-T Data Center
instances, and both on-premises and VMware Cloud on AWS deployments will be supported in the
future.



2-27 Centralized Policy Management

The motivation for centralized policy management is the customer's need for a consistent network
and security policy management platform across all workloads.



2-28 NSX Manager Functions



2-29 NSX Policy and NSX Manager Workflow

The reverse proxy provides authentication and authorization capabilities.

NSX Policy Manager and Proton are internal web applications that communicate with each other
through HTTP.
CorfuDB is a persistent in-memory object store. Persistence is achieved by writing each
transaction to a shared log on disk. Queries are still served from memory, providing better
performance and scalability.
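A minimal sketch of this write-ahead pattern, assuming nothing about CorfuDB internals beyond what is described above (the class and key names are illustrative only):

```python
import json

# Toy sketch of a log-backed in-memory object store (illustrative only,
# not CorfuDB). Writes are appended to a durable log; reads never touch
# the log; replaying the log rebuilds in-memory state after a restart.

class LogBackedStore:
    def __init__(self):
        self.log = []    # stands in for the shared log on disk
        self.state = {}  # in-memory view serving all queries

    def write(self, key, value):
        self.log.append(json.dumps({"key": key, "value": value}))  # durable first
        self.state[key] = value                                    # then in memory

    def read(self, key):
        return self.state[key]  # served from memory, no disk access

    @classmethod
    def replay(cls, log):
        # Restart path: rebuild in-memory state by replaying the log in order.
        store = cls()
        for entry in log:
            rec = json.loads(entry)
            store.write(rec["key"], rec["value"])
        return store

store = LogBackedStore()
store.write("segment-web", {"vlan": 0})
recovered = LogBackedStore.replay(store.log)
print(recovered.read("segment-web"))  # {'vlan': 0}
```

Because every transaction is in the log before it is acknowledged, a node can crash at any point and still reconstruct the exact same configuration view.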



2-30 About NSX Controller



2-31 Control Plane Components (1)



2-32 Control Plane Components (2)

The central control plane (CCP) computes and disseminates the ephemeral runtime state based on
the configuration from the management plane and topology information reported by the data plane
elements.
The local control plane (LCP) runs on the compute endpoints. It computes the local ephemeral
runtime state for the endpoint based on updates from the CCP and local data plane information.
The LCP pushes stateless configurations to forwarding engines in the data plane and reports the
information back to the CCP. This process simplifies the work of the CCP significantly and
enables the platform to scale to thousands of endpoints of different types (hypervisor, container
host, bare metal, or public cloud).
RabbitMQ is an open-source message broker.
Remote procedure call (RPC) is a protocol that one program can use to request a service from
another program located in another computer without having to understand the network's details.



2-33 Control Plane Change Propagation

The LCP on the transport node reports local runtime changes to the master CCP node. The master
CCP node receives the changes and propagates them to the other controllers in the cluster. All
controllers then push the changes to the transport nodes that they are responsible for.
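This propagation flow can be modeled in a few lines. The class, attribute, and node names below are illustrative, not NSX internals:

```python
# Illustrative model of the propagation flow above (not NSX code): an LCP
# change goes to the master CCP node, fans out to the other controllers,
# and each controller pushes it to the transport nodes it is responsible for.

class Controller:
    def __init__(self, name):
        self.name = name
        self.transport_nodes = {}  # transport node name -> last pushed state

    def push(self, change):
        for node in self.transport_nodes:
            self.transport_nodes[node] = change

class CCPCluster:
    def __init__(self, controllers, master):
        self.controllers = controllers
        self.master = master

    def report_change(self, change):
        # The master node receives the LCP report first; the change is then
        # fanned out so every controller pushes it to its own transport nodes.
        assert self.master in self.controllers
        for ctrl in self.controllers:
            ctrl.push(change)

c1, c2 = Controller("ccp-1"), Controller("ccp-2")
c1.transport_nodes = {"esxi-01": None}
c2.transport_nodes = {"kvm-01": None}
cluster = CCPCluster([c1, c2], master=c1)
cluster.report_change("vtep-table-v2")
print(c2.transport_nodes["kvm-01"])  # vtep-table-v2
```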



2-34 Control Plane Sharding Function



2-35 Handling Controller Failure

In the diagram, controller 3 is assigned to two transport nodes. When controller 3 fails, the nodes
are moved to controllers 1 and 2.
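The reassignment behavior in the diagram can be sketched as a small model. This is purely illustrative: the function, the node names, and the round-robin redistribution of orphaned nodes are assumptions, not the actual NSX sharding algorithm.

```python
# Illustrative sketch (not the real NSX algorithm): when a controller fails,
# only its transport nodes are redistributed across the survivors; every
# other transport node keeps its existing controller assignment.

def reassign(assignment, failed, survivors):
    survivors = sorted(survivors)
    out = dict(assignment)
    orphans = sorted(n for n, c in assignment.items() if c == failed)
    for i, node in enumerate(orphans):
        out[node] = survivors[i % len(survivors)]  # round-robin over survivors
    return out

before = {"tn-1": "ccp-1", "tn-2": "ccp-2", "tn-3": "ccp-3", "tn-4": "ccp-3"}
after = reassign(before, "ccp-3", ["ccp-1", "ccp-2"])
print(after["tn-3"], after["tn-4"])  # ccp-1 ccp-2
```

Moving only the orphaned nodes, as the diagram shows for controller 3, avoids churning the state already held by the surviving controllers.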



2-36 Data Plane Functions



2-37 Data Plane Components

For packet forwarding, ESXi uses the NSX-managed virtual distributed switch (N-VDS), and
KVM uses Open vSwitch.



2-38 Review of Learner Objectives



2-39 Key Points



Module 3
Preparing the NSX-T Data Center Infrastructure



3-2 Importance



3-3 Module Lessons



3-4 Deploying NSX Management Cluster



3-5 Learner Objectives



3-6 Preparing the Infrastructure for NSX-T Data Center



3-7 NSX Manager Deployment Considerations

NSX Manager combines the functions of the management plane, control plane, and policy
management in a single node (virtual appliance).
NSX Manager nodes can be installed on supported hypervisors (vSphere, ESXi, RHEL KVM, and
Ubuntu KVM) for on-premises deployment.
For supported hypervisor versions, see VMware Product Interoperability Matrices at
https://www.vmware.com/resources/compatibility/sim/interop_matrix.php.



3-8 NSX Manager Node Sizing

The NSX Manager extra-small VM resource requirements apply to the Cloud Service Manager
(CSM) only.



3-9 Accessing the NSX Manager UI



3-10 Registering Compute Managers to NSX-T Data Center

You add the configuration details to register the compute manager to NSX-T Data Center. The
compute manager in the example is vCenter Server.



3-11 Verifying the Registration of Compute Manager to
NSX-T Data Center



3-12 Deploying Additional NSX Manager Instances (1)

The numbers on the image show the process for automatically deploying NSX Manager instances
from the NSX Manager simplified UI.



3-13 Deploying Additional NSX Manager Instances (2)



3-14 Management Cluster Status: GUI (1)

You can check the status of nodes by selecting Home > Dashboard > System in the NSX
Manager simplified UI.
Different colors convey different statuses. For example, yellow indicates degraded performance,
such as NSX Manager memory usage that has stayed above 80% for the past 5 minutes.
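The yellow-status rule above can be expressed as a small check. This is illustrative only; the actual dashboard thresholds are internal to NSX Manager:

```python
def node_health_color(mem_samples_pct):
    """Return 'yellow' (degraded) when every memory sample in the
    observation window (e.g., the past 5 minutes) exceeds 80%."""
    if mem_samples_pct and all(m > 80 for m in mem_samples_pct):
        return "yellow"
    return "green"

assert node_health_color([85, 90, 88]) == "yellow"   # sustained high usage
assert node_health_color([85, 60, 88]) == "green"    # usage dipped below 80%
```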



3-15 Management Cluster Status: GUI (2)

For information about manually joining the NSX Manager nodes to form a cluster, see NSX-T
Data Center Installation Guide at https://docs.vmware.com/en/VMware-NSX-T-Data-
Center/2.4/nsxt_24_install.pdf.



3-16 Configuring the Virtual IP Address

You can configure a virtual IP address for the Management Cluster to provide load balancing and
availability among the management nodes:

• The virtual IP address is not set by default.

• You can configure the address for the management nodes to share.

• You might need to wait a few minutes for the newly configured address to take effect.

To change the virtual IP address, click EDIT. To remove the virtual IP address, click RESET.



3-17 Management Cluster Status: CLI (1)

You connect to an appliance in the cluster and enter the command get cluster status. The
number and status of the nodes in the cluster appear.
The example output lists the manager, policy, and controller groups. It also shows each group’s
status, along with its members and member status.



3-18 Management Cluster Status: CLI (2)

Watch out for the following common misconfigurations:

• ESXi host with insufficient resources (CPUs, memory, or hard disk)

• Incorrect network details, such as the gateway address, network mask, DNS, and so on

• Use of the same IP address when deploying multiple appliances through a single request
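The last misconfiguration is easy to catch before submitting a multi-appliance deployment request. A minimal pre-flight check, shown here as an illustrative sketch:

```python
from collections import Counter

def validate_appliance_ips(requested_ips):
    """Reject a deployment request that assigns the same management IP
    to more than one appliance."""
    duplicates = sorted(ip for ip, n in Counter(requested_ips).items() if n > 1)
    if duplicates:
        raise ValueError(f"duplicate management IPs: {duplicates}")
    return True

# three NSX Manager nodes, each with a unique address
assert validate_appliance_ips(["172.20.10.41", "172.20.10.42", "172.20.10.43"])
```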



3-19 NSX Manager Deployment on KVM Hosts

For more information about deploying NSX Manager on a KVM host, see "Install NSX Manager
on KVM" at https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.4/installation/GUID-
5229A83D-1B97-4203-BA30-F52716F68F7F.html.



3-20 Review of Learner Objectives



3-21 Navigating the NSX Manager UI



3-22 Learner Objectives



3-23 NSX Manager Simplified and Advanced User
Interfaces (1)

Starting with NSX-T Data Center 2.4, the NSX Manager UI is divided into simplified and
advanced sections. You use the Advanced Networking & Security tab when the configuration is
not supported by the simplified UI, for example, when you need to configure a bridge firewall or
use the traceflow tool.



3-24 NSX Manager Simplified and Advanced User
Interfaces (2)

You can use the simplified or advanced UI to configure objects, but VMware recommends that
you use the simplified UI if possible.
Some terms used in the simplified UI tabs are different from those in previous versions of NSX-T
Data Center. For example, logical switch is now called segment. Tier-0 or Tier-1 logical router is
now called Tier-0 or Tier-1 Gateway.
However, previous naming conventions are maintained in the advanced UI:

• A logical segment in the simplified UI is called a logical switch in the advanced UI.

• A Tier-0 or Tier-1 Gateway in the simplified UI is called a Tier-0 or Tier-1 logical router in
the advanced UI.



3-25 Networking View



3-26 Security View



3-27 Inventory View



3-28 Tools View



3-29 System View

The Overview page shows the status and details of the management nodes and the cluster.



3-30 Labs



3-31 Lab: Labs Introduction



3-32 Lab: Reviewing the Configuration of the Predeployed
NSX Manager Instance



3-33 Lab Simulation: Deploying a 3-Node NSX
Management Cluster



3-34 Review of Learner Objectives



3-35 Preparing the Data Plane



3-36 Learner Objectives



3-37 Data Plane Components and Functions

The data plane includes the following types of traffic:

• NSX-managed virtual distributed switch (N-VDS)-based switching, distributed routing, and distributed firewall traffic

• Workload data



3-38 Transport Node Overview

NSX-T Data Center logical topology is decoupled from the hypervisor-type transport nodes.
ESXi and KVM transport nodes can work together. Networks and topologies can extend to both
ESXi and KVM environments, regardless of the hypervisor type.



3-39 Transport Node Components and Architecture

Each transport node has a management plane agent (MPA). NSX Manager polls rule statistics and
status from the transport node using the MPA.
Each transport node is configured with a host switch, which is the primary component in the data
plane.
Data plane forwarding functions include switching, overlay encapsulation and decapsulation,
routing, and distributed firewall enforcement.
The local control plane (LCP) is composed of several agents and modules that perform the LCP
function on the data plane.



3-40 Transport Node Physical Connectivity



3-41 About IP Address Pools

An IP pool is a container created for assigning IP addresses to tunnel endpoints (TEPs).


You can manually configure IP address pools. If you use both ESXi and KVM hosts, one option is
to use two different subnets for the ESXi tunnel endpoint IP pool and the KVM tunnel endpoint IP
pool. In that case, you must create a static route with a dedicated default gateway on the KVM
hosts.
Each host transport node has a TEP, and each TEP has an IP address. These IP addresses can be in
the same subnet or in different subnets, depending on the IP pools or DHCP configured for the
transport nodes.
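The allocation behavior of an IP pool can be sketched in a few lines. The class and subnet below are illustrative, not the NSX-T implementation:

```python
import ipaddress

class TepIpPool:
    """Minimal sketch of an IP pool handing out TEP addresses from a subnet."""

    def __init__(self, cidr, gateway):
        self.gateway = ipaddress.ip_address(gateway)
        # all usable host addresses in the subnet, minus the gateway
        self.free = [ip for ip in ipaddress.ip_network(cidr).hosts()
                     if ip != self.gateway]
        self.assigned = {}

    def allocate(self, transport_node):
        ip = str(self.free.pop(0))       # hand out the lowest free address
        self.assigned[transport_node] = ip
        return ip

pool = TepIpPool("172.20.11.0/28", gateway="172.20.11.1")
assert pool.allocate("sa-esxi-04") == "172.20.11.2"
assert pool.allocate("sa-esxi-05") == "172.20.11.3"
```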



3-42 About Transport Zones (1)

A transport zone defines the span of a logical network over the physical infrastructure. It defines
the potential reach of transport nodes.
A transport zone can accommodate either overlay or VLAN traffic.
VLAN transport zones are used to connect between NSX Edge uplinks and upstream physical
routers to establish north-south connectivity.
Transport nodes are hypervisor hosts and NSX Edge nodes that participate in an NSX-T Data
Center overlay. As a result, a hypervisor host can host VMs that communicate over logical
switches. An NSX Edge node can have logical router uplinks and downlinks configured.
A hypervisor transport node can belong to multiple transport zones. A segment can belong to only
one transport zone.
NSX Edge nodes can belong to multiple transport zones: one overlay transport zone and multiple
VLAN transport zones.



3-43 About Transport Zones (2)

NSX-T Data Center supports the following types of transport nodes:

• Host (ESXi or KVM host)

• NSX Edge

• Bare metal



3-44 About N-VDS

The N-VDS is the software that operates in hypervisors to form a software abstraction layer
between servers and the physical network. The N-VDS is based on vSphere Distributed Switch,
which provides uplinks for host connectivity to physical switches.
When an ESXi host is prepared for NSX-T Data Center, an N-VDS is created. An N-VDS is
similar in function to a KVM Open vSwitch on a KVM host.
The N-VDS performs the switching functionality on a transport node:

• The N-VDS typically owns several physical NICs of the transport node.

• The N-VDS instances are created on host or edge transport nodes.

• The N-VDS instances configured on different transport nodes are independent.

• The N-VDS has a name assigned for grouping and management. For example, the diagram
shows two N-VDS instances that are configured on the transport nodes: An N-VDS named
Lab and an N-VDS named Prod (production).



The networks configured by NSX Manager appear as opaque networks to compute managers, such
as vCenter Server: vCenter Server has visibility into the networks, and from vSphere Web Client
a user can see the network components and select them for usage, but the user cannot edit the
settings of the network components.
The control plane and data plane are optimized for logical switching.



3-45 N-VDS on ESXi Transport Nodes

NSX-T Data Center does not require vCenter Server to operate. NSX Manager is responsible for
creating the N-VDS, completely independently of vCenter Server.
N-VDS can coexist with vSphere distributed and standard switches.
vCenter Server sees N-VDS as an opaque network. In other words, vCenter Server is aware of its
existence but cannot configure it. N-VDS is configured by the management plane and host agents
(nsxa and netcpa).
N-VDS performs layer 2 forwarding and supports VLAN, port mirroring, and NIC teaming. The
teaming configuration is applied switch-wide. Link aggregation groups are implemented as ports.



3-46 N-VDS on KVM Transport Nodes



3-47 Transport Zone and N-VDS Mapping



3-48 Creating Transport Zones

Transport zones dictate which hosts (and thus which VMs) can participate in a particular network:

• The overlay transport zone is used by both host transport nodes and NSX Edge nodes.

• The VLAN transport zone is used by NSX Edge nodes for their VLAN uplinks.

An NSX-T Data Center environment can contain one or more transport zones, depending on your
requirements. A host can belong to multiple transport zones. A logical switch can belong to only
one transport zone.
NSX-T Data Center does not allow VMs in different transport zones in the layer 2 network to
connect. The span of a logical switch is limited to a transport zone, so virtual machines in different
transport zones cannot be on the same layer 2 network.
When you create a transport zone, you must provide a name for the N-VDS that will be installed
on the transport nodes when the nodes are added to this transport zone. The N-VDS name can be
whatever you want it to be.



You must select the traffic type and the N-VDS mode. The N-VDS is installed on the transport
nodes that are added to the newly created transport zone.
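The membership rules above (a node can join several transport zones; a segment belongs to exactly one) can be modeled in a few lines. The class names are illustrative:

```python
class TransportNode:
    """Sketch of transport zone membership rules."""

    def __init__(self, name, transport_zones):
        self.name = name
        # a transport node can participate in multiple transport zones
        self.transport_zones = set(transport_zones)

def segment_reachable(node, segment_transport_zone):
    # a segment lives in exactly one transport zone; a host can attach
    # VMs to it only if the host participates in that zone
    return segment_transport_zone in node.transport_zones

esxi = TransportNode("sa-esxi-04", ["Prod-Overlay-TZ"])
edge = TransportNode("edge-01", ["Prod-Overlay-TZ", "Prod-VLAN-TZ"])
assert segment_reachable(edge, "Prod-VLAN-TZ")        # edge joins both zones
assert not segment_reachable(esxi, "Prod-VLAN-TZ")    # host is overlay-only
```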



3-49 N-VDS Operational Modes

N-VDS supports the following modes:

• Standard mode provides switching functionality comparable to vSphere standard switch or vSphere distributed switch.

• Enhanced Datapath mode is based on the underlying N-VDS and supports the base switch
features, such as vSphere vMotion, vSphere HA, vSphere DRS, and so on.

– It brings the advantages of the Data Plane Development Kit (DPDK)-style packet-
processing performance to the east-west flows within the data center.
– This switch mode is designed to support Network Functions Virtualization (NFV) type
applications.
– It is not suitable for generic data center applications or deployments where traditional
VM-based or bare metal NSX Edge nodes must be used.
DPDK is a set of data plane libraries and network interface controller drivers for fast packet
processing:



• DPDK uses several optimizations around CPU usage and memory management to help
improve the packet-processing speed.

• Compared to the standard way of packet processing, DPDK helps decrease CPU cost and yet
increase the number of packets processed per second.

• The DPDK library can be used for a variety of use cases, and many software vendors use
DPDK. It can be tuned to match desired performance for generalized or specific use cases.

With NFV, the focus shifts from raw throughput to packet-processing speed. These workloads do
not send a few large packets; rather, they send many small packets, often as small as 128 bytes.
TCP optimizations do not help with these workloads. Enhanced Datapath mode leverages DPDK to
deliver performance for these packet-processing workloads.



3-50 Enhanced Datapath Mode

With Poll Mode Driver (PMD), instead of the NIC sending an interrupt to the CPU when a packet
arrives, a core is assigned to poll the NIC to check for any packets. This process eliminates CPU
context switching, which is unavoidable in the traditional interrupt mode of packet processing.
Flow cache is an optimization that reduces the CPU cycles spent on known flows. Flow cache
tables are populated when a new flow starts, so the full forwarding decision can be skipped for
the remaining packets of a flow that already exists in the flow table. If packets from the same
flow arrive consecutively, the fast-path decision is kept in memory and applied directly to the
rest of that burst of packets. If the packets belong to different flows, the per-flow decision is
saved to a hash table and used to determine the next hop for each packet of those flows.
DPDK features, including PMD, flow cache, and optimized packet copy, provide better
performance for small and large packet sizes pertinent for NFV style workloads.



3-51 Reviewing the Transport Zone Configuration

In the example, two transport zones are created. The transport zone named Prod-Overlay-TZ is
mapped to the N-VDS named Prod-Overlay-NVDS to carry the GENEVE-encapsulated overlay
traffic. The transport zone named Prod-VLAN-TZ is mapped to the N-VDS named
Prod-VLAN-NVDS to carry the 802.1Q VLAN traffic.



3-52 Physical NICs, LAGs, and Uplinks

The N-VDS allows for virtual-to-physical packet flow by binding logical router uplinks and
downlinks to physical NICs.
Link Aggregation Groups (LAGs) use Link Aggregation Control Protocol (LACP) for the
transport network.
Uplinks of an N-VDS are assigned physical NICs or LAGs.
In the example, logical uplink 1 is mapped to a physical LAG (comprising physical ports p1 and
p2). Logical uplink 2 is mapped to physical port p3.



3-53 About Uplink Profiles



3-54 Default Uplink Profiles



3-55 Types of Teaming Policies

You can select from the following teaming policy modes:

• Failover Order: An active uplink is specified along with an optional list of standby uplinks.
If the active uplink fails, the next uplink in the standby list replaces the active uplink. No
actual load balancing is performed with this option.

• Load Balanced Source: A list of active uplinks is specified, and each interface on the
transport node is pinned to one active uplink based on the Source Port ID. This configuration
allows use of several active uplinks at the same time.

• Load Balanced Source Mac: This option determines the uplink based on the source VM’s
MAC address.

The image shows that you can specify a type of teaming policy for the uplink profile.
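The uplink selection behavior of the three policies can be sketched as follows. The hashing details are illustrative and not the exact N-VDS algorithm:

```python
import zlib

def select_uplink(policy, active, standby=(), src_port_id=0, src_mac=""):
    """Pick an uplink for a vNIC under the three teaming policies (sketch)."""
    if policy == "FAILOVER_ORDER":
        # first uplink in the ordered list wins; standbys only replace
        # a failed active uplink, so no load balancing occurs
        return (list(active) + list(standby))[0]
    if policy == "LOADBALANCE_SRCID":
        # each interface is pinned to one active uplink by source port ID
        return active[src_port_id % len(active)]
    if policy == "LOADBALANCE_SRC_MAC":
        # uplink chosen by hashing the source VM's MAC address
        return active[zlib.crc32(src_mac.encode()) % len(active)]
    raise ValueError(f"unknown policy: {policy}")

assert select_uplink("FAILOVER_ORDER", ["uplink-1"], ["uplink-2"]) == "uplink-1"
assert select_uplink("LOADBALANCE_SRCID", ["uplink-1", "uplink-2"],
                     src_port_id=3) == "uplink-2"
```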



3-56 Teaming Policies Supported by ESXi and KVM Hosts

The Load Balanced Source and Load Balanced Source Mac teaming policies do not allow the
configuration of standby uplinks.
Load Balanced Source and Load Balanced Source Mac teaming policies are not supported on
KVM transport nodes.
KVM hosts are limited to the Failover Order teaming policy and support only a single LAG;
multiple LAGs with LACP are not supported on KVM hosts.



3-57 Teaming Policy



3-58 About LLDP



3-59 Enabling LLDP Profiles



3-60 About Network I/O Control Profiles



3-61 Creating Network I/O Control Profiles (1)



3-62 Creating Network I/O Control Profiles (2)



3-63 About Transport Node Profiles (1)



3-64 About Transport Node Profiles (2)



3-65 Benefits of Transport Node Profiles



3-66 Transport Node Profile Considerations



3-67 Transport Node Profile Prerequisites



3-68 Attaching a Transport Node Profile to the ESXi Cluster

Attaching a transport node profile is required only when configuring vCenter-managed ESXi hosts
at the cluster level.
This step is not required for standalone ESXi host preparation.



3-69 Managed ESXi: Host Preparation (1)

The diagram shows how you can prepare a host or a host cluster managed by a compute manager,
such as vCenter Server.



3-70 Managed ESXi: Host Preparation (2)

The slide shows that the ESXi hosts prepared for NSX-T Data Center, sa-esxi-04.vclass.local and
sa-esxi-05.vclass.local, are automatically listed as transport nodes in the NSX Manager simplified
UI.



3-71 Reviewing ESXi Transport Node Status

You can check the status of host transport nodes in the System view of the dashboard. Point to the
circle, and messages appear that provide details about the nodes. For example, in the screenshot,
out of seven nodes, four are configured as transport nodes (green) and three are not configured
for NSX-T Data Center (gray).



3-72 Verifying ESXi Transport Node by CLI

After an ESXi host is prepared for NSX-T Data Center, VIBs are installed for the host to
participate in networking and security operations.
The functions of the VIBs are defined as follows:

• nsx-aggservice: NSX-T Data Center aggregation service runs in the management plane nodes
and fetches the runtime state of NSX-T Data Center components.

• nsx-da: Collects discovery agent data about the hypervisor OS version, VMs, and network
interfaces.

• nsx-esx-datapath: Provides NSX-T Data Center data plane packet-processing functionality.

• nsx-exporter: Provides host agents that report runtime state to the aggregation service.

• nsx-host: Provides metadata for the VIB bundle that is installed on the host.

• nsx-lldp: Provides support for the Link Layer Discovery Protocol (LLDP).

• nsx-mpa: Provides communication between NSX Manager and hypervisor hosts.



• nsx-netcpa: Provides communication between the central control plane and hypervisor hosts.

• nsx-python-protobuf: Provides Python bindings for protocol buffers.

• nsx-sfhc: Service fabric host components (SFHC) provides a host agent for managing the life
cycle of the hypervisor as a fabric host.

• nsxa: Performs host-level configurations, such as N-VDS creation.

• nsxcli: Provides the NSX-T CLI on hypervisor hosts.

• nsx-support-bundle-client: Provides the ability to collect support bundles.



3-73 Transport Node Preparation: KVM

The KVM host preparation workflow includes the following steps:


1. Install the Deb/RPM packages on the KVM host.
2. Add the KVM host to the management plane.
3. Install and configure the N-VDS.
4. Attach an uplink profile to the N-VDS.
5. Map physical NICs to the N-VDS.
6. Add the N-VDS to the transport zone.
7. Allocate the IP address to the TEP interface.
8. Promote the host to transport node.
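Most of the steps above are captured in a single NSX-T REST call that creates the transport node. The sketch below only assembles a request body in the general shape of POST /api/v1/transport-nodes for NSX-T 2.4; the field names and values here are illustrative and should be checked against the NSX-T API reference before use:

```python
import json

def build_transport_node_body(node_id, nvds_name, transport_zone_id,
                              uplink_profile_id, pnic, uplink_name, ip_pool_id):
    """Assemble an illustrative transport-node creation payload."""
    return {
        "node_id": node_id,
        "host_switch_spec": {
            "resource_type": "StandardHostSwitchSpec",
            "host_switches": [{
                "host_switch_name": nvds_name,                    # the N-VDS
                "host_switch_profile_ids": [{
                    "key": "UplinkHostSwitchProfile",
                    "value": uplink_profile_id,                   # step 4
                }],
                "pnics": [{"device_name": pnic,                   # step 5
                           "uplink_name": uplink_name}],
                "ip_assignment_spec": {                           # step 7
                    "resource_type": "StaticIpPoolSpec",
                    "ip_pool_id": ip_pool_id,
                },
            }],
        },
        "transport_zone_endpoints": [                             # step 6
            {"transport_zone_id": transport_zone_id}],
    }

body = build_transport_node_body("kvm-node-uuid", "Prod-Overlay-NVDS",
                                 "tz-uuid", "uplink-profile-uuid",
                                 "eth1", "uplink-1", "tep-pool-uuid")
assert json.loads(json.dumps(body))["node_id"] == "kvm-node-uuid"
```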



3-74 Configuring KVM Hosts as Transport Nodes (1)



3-75 Configuring KVM Hosts as Transport Nodes (2)



3-76 Reviewing KVM Transport Node Status



3-77 Verifying the KVM Transport Node by CLI



3-78 Lab: Preparing the NSX-T Data Center Infrastructure



3-79 Review of Learner Objectives



3-80 Key Points



Module 4
NSX-T Data Center Logical Switching



4-2 Importance



4-3 Module Lessons



4-4 Logical Switching Overview



4-5 Learner Objectives



4-6 Logical Switching Use Cases



4-7 Prerequisites for Logical Switching

Transport nodes are hypervisor hosts, bare metal servers, and NSX Edge instances participating in
NSX-T Data Center.



4-8 Logical Switching Terminology

A segment, formerly known as a logical switch, reproduces switching functionality in an NSX-T
Data Center virtual environment, completely decoupled from the underlying hardware. Segments
are similar to VLANs, in that they provide network connections to which you can attach VMs.
The VMs can then communicate with each other over tunnels between hypervisors if the VMs are
connected to the same segment. Each segment has a virtual network identifier (VNI), similar to a
VLAN ID. However, VNIs scale well beyond the limits of VLAN IDs.
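The scale difference between VLAN IDs and VNIs comes straight from the field widths:

```python
VLAN_ID_BITS = 12   # 802.1Q VLAN ID field
VNI_BITS = 24       # GENEVE virtual network identifier

usable_vlans = 2**VLAN_ID_BITS - 2   # IDs 0 and 4095 are reserved
possible_vnis = 2**VNI_BITS          # roughly 16 million overlay networks

assert usable_vlans == 4094
assert possible_vnis == 16_777_216
```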
A segment contains multiple segment ports. Entities such as routers, VMs, or containers can
connect to a segment through the segment ports.
Segment profiles include layer 2 networking configuration details for logical switches and logical
ports. NSX Manager supports several types of switching profiles and maintains one or more
system-defined default switching profiles for each profile type.
The NSX-managed virtual distributed switch (N-VDS) is configured on each transport node to
provide layer 2 functionality. Although each N-VDS instance is local to its host, an N-VDS with
the same configuration spans all transport nodes on which it is instantiated.



Segment profiles contain different configurations of the logical port. These profiles can be applied
at a port level or at a segment level. Profiles applied on a segment are applicable on all ports of the
segment unless they are explicitly overwritten at the port level. Multiple segment profiles are
supported, including QoS, port mirroring, IP Discovery, SpoofGuard, segment security, MAC
management, and Network I/O Control.
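The precedence rule above (port-level profile overrides the segment-level profile of the same type) can be sketched as a simple merge. Profile names are illustrative:

```python
def effective_profiles(segment_profiles, port_overrides):
    """Port-level profiles win over segment-level profiles of the same type."""
    merged = dict(segment_profiles)   # start from the segment's profiles
    merged.update(port_overrides)     # explicit port overrides replace them
    return merged

segment = {"qos": "default-qos", "segment-security": "default-security"}
port = {"qos": "gold-qos"}            # this port overrides only QoS
assert effective_profiles(segment, port) == {
    "qos": "gold-qos",                        # port-level wins
    "segment-security": "default-security",   # inherited from the segment
}
```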



4-9 About Segments (1)

One or more VMs can be attached to a segment. The VMs connected to a segment can
communicate with each other through tunnels between hosts.
Segments are similar to VLANs, in that they provide network connections to which you can attach
VMs. Each segment has a virtual network identifier (VNI), similar to a VLAN ID.



4-10 About Segments (2)



4-11 About Tunneling

Tunneling is the basis for implementing NSX-T Data Center overlay networks. It provides
isolation between the underlay network (physical network) and an overlay network (virtual
network). This isolation is achieved by encapsulating the overlay packet within an underlay
packet.
Overlay logical networking, or tunneling, deploys a layer 2 network on top of an existing layer 3
network by encapsulating frames inside of packets and transferring the packets over an underlying
transport network. The underlying transport network can be another layer 2 network or it can cross
layer 3 boundaries.



The transport node endpoints in an NSX-T Data Center overlay network are known as the tunnel
endpoints (TEPs):

• TEPs are the source and destination IP addresses used in the external IP header to uniquely
identify the hypervisor hosts originating and terminating the NSX-T Data Center
encapsulation of overlay frames.

• TEPs typically carry two types of traffic: VM traffic and control (health check) traffic.



4-12 About GENEVE

NSX-T Data Center uses a tunneling encapsulation mechanism called Generic Network
Virtualization Encapsulation (GENEVE).
GENEVE was developed by VMware, Microsoft, Red Hat, and Intel. It is a standard under
development (draft-ietf-nvo3-geneve-07). The GENEVE protocol builds on earlier tunneling
protocols (such as VXLAN, NVGRE, and STT) and is considered to be more flexible.
The GENEVE protocol encapsulates only data plane packets. GENEVE-encapsulated packets are
designed to travel over standard backplanes, switches, and routers:

• Packets are sent from one tunnel endpoint to one or more tunnel endpoints using either unicast
or multicast addressing.

• The end-user application and the VMs in which the application is executing are not modified
in any way by the GENEVE protocol.

• The tunnel endpoint encapsulates the end-user IP packet in the GENEVE header.



• The header consists of fields specifying that it is a GENEVE packet, the overall length of the
options if any, the tunnel identifier, and the series of options.

• The completed GENEVE packet is transmitted to the destination endpoint in a standard User
Datagram Protocol (UDP) packet. Both IPv4 and IPv6 are supported.

• The receiving tunnel endpoint strips off the GENEVE header, interprets any included options,
and directs the end-user packet to its destination in the virtual network indicated by the tunnel
identifier.

• The GENEVE specification offers recommendations on ways to achieve efficient operation by avoiding fragmentation and taking advantage of equal-cost multipath (ECMP) routing and NIC hardware offload facilities.



4-13 GENEVE Header Format

To support the needs of network virtualization, the tunneling protocol draws on the evolving
capabilities of each type of device in both the underlay and overlay networks.
This process imposes a few requirements on the data plane tunneling protocol:

• The data plane is generic and extensible enough to support current and future control planes.

• Tunnel components are efficiently implemented in both hardware and software without
restricting capabilities to the lowest common denominator.

• High performance over existing IP networks is required.

The GENEVE packet format consists of a compact tunnel header encapsulated in UDP over either
IPv4 or IPv6. A small fixed tunnel header provides control information, as well as a base level of
functionality and interoperability with a focus on simplicity. This header is then followed by a set
of variable options to allow for future development. The payload consists of a protocol data unit of
the indicated type, such as an Ethernet frame.



The following fields are in a GENEVE header:

• Version (2 bits): The current version number is 0.

• Options Length (6 bits): This field, expressed in 4-byte multiples, results in a minimum total
GENEVE header size of 8 bytes and a maximum of 260 bytes.

• O (1 bit): Operations, Administration and Maintenance (OAM) packet. This packet contains a
control message instead of a data payload.

• C (1 bit): This field indicates that critical options are present.

• Rsvd. (6 bits): The Reserved field must be zero on transmission and ignored on receipt.

• Protocol Type (16 bits): The field indicates the type of protocol data unit appearing after the
GENEVE header.

• Reserved (8 bits): The Reserved field must be zero on transmission and ignored on receipt.

• Virtual Network Identifier: Each logical network is identified by a unique VNI. The VNI
uniquely identifies the segment that the inner Ethernet frame belongs to. It is a 24-bit number
carried in the GENEVE header, allowing a theoretical limit of 16 million separate
networks. The NSX-T VNI range is 5000 to 16777216.

The base GENEVE header is followed by zero or more options in type-length-value format. Each
option consists of a 4-byte option header and a variable amount of option data interpreted
according to the type. GENEVE provides NSX-T Data Center with the complete flexibility of
inserting metadata in the type, length, and value fields that can be used for new features. One
example of such metadata is the VNI. VMware recommends an MTU of at least 1600 bytes to
account for the encapsulation header.
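The fixed header layout described above can be decoded in a few lines of Python. This is an illustrative sketch based on the public GENEVE field layout, not NSX-T code:

```python
import struct

def parse_geneve_header(data: bytes) -> dict:
    # Fixed 8-byte GENEVE header:
    # Ver(2) OptLen(6) | O(1) C(1) Rsvd(6) | Protocol Type(16) | VNI(24) | Rsvd(8)
    if len(data) < 8:
        raise ValueError("GENEVE header is at least 8 bytes")
    b0, b1, proto, vni_hi, vni_mid, vni_lo, _rsvd = struct.unpack("!BBHBBBB", data[:8])
    return {
        "version": b0 >> 6,
        "options_length": (b0 & 0x3F) * 4,  # options length is carried in 4-byte multiples
        "oam": bool(b1 & 0x80),
        "critical": bool(b1 & 0x40),
        "protocol_type": proto,             # 0x6558 = Transparent Ethernet Bridging
        "vni": (vni_hi << 16) | (vni_mid << 8) | vni_lo,
    }

# A header for VNI 5000 carrying an inner Ethernet frame, with no options
header = bytes([0x00, 0x00, 0x65, 0x58, 0x00, 0x13, 0x88, 0x00])
print(parse_geneve_header(header)["vni"])  # 5000
```

Note that the 6-bit options length is multiplied by four, which is how a 6-bit field can describe up to 252 bytes of options on top of the 8-byte base header.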
The GENEVE protocol offers the following benefits:

• Supports proprietary type, length, and value fields

• Can add new metadata to the encapsulation without revising the GENEVE standard

• Allows VMware to develop software-based features without being held back by hardware
dependencies

• Provides the same kind of NIC offloads as VXLAN (check compatibility list)

• Is open so third-party tools, such as Wireshark, can decode it



4-14 Logical Switching: End-to-End Communication

The example diagram shows the following details:

• The ESXi host is configured as a transport node with TEP IP: 172.20.11.51, and PROD-
NVDS is installed on the hypervisor during the transport node creation. The VMkernel
interface VMK10 is created on the ESXi host.

• The KVM host is configured as a transport node with TEP IP: 172.20.11.52, and PROD-
NVDS is installed on the hypervisor during the transport node creation. The nsx-vtep0.0
interface is created on the KVM host.

• The ESXi and KVM transport nodes are configured in the transport zone named PROD-
OVERLAY-TZ.

• Transport node A is running VM-1 with IP Address 10.1.10.11 and MAC address ABC.

• Transport node B is running VM-2 with IP Address 10.1.10.12 and MAC address DEF.



• VM-1 and VM-2 are connected to the segment ports on Web-Segment 69632. This web
segment is an overlay-based segment configured in the transport zone named PROD-
OVERLAY-TZ.

• When VM-1 communicates with VM-2, the source hypervisor encapsulates the packet with
the GENEVE header and sends it to the destination transport node, which decapsulates the
packet and forwards it to the destination VM.

During VM-1 to VM-2 Communication:


1. VM-1 sends the traffic to the Web-LS segment.
2. The source hypervisor encapsulates the packet with the GENEVE header.
3. The source transport node forwards the packet to the physical network.
4. The destination transport node receives the packet and performs the decapsulation.
5. The destination TEP forwards the L2 frame to the destination VM.
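The encapsulation in step 2 adds a fixed per-packet overhead, which is why a larger underlay MTU is recommended. A back-of-the-envelope calculation, assuming an IPv4 underlay and no GENEVE options:

```python
# Per-packet overhead of GENEVE encapsulation (IPv4 underlay, no options;
# options would add between 4 and 260 bytes more).
INNER_ETH = 14     # inner Ethernet header, carried inside the tunnel
OUTER_IPV4 = 20    # outer IPv4 header
OUTER_UDP = 8      # outer UDP header (GENEVE runs over UDP)
GENEVE_FIXED = 8   # fixed GENEVE header

# Underlay IP MTU needed to carry a 1500-byte inner payload without fragmentation
inner_frame = 1500 + INNER_ETH
needed_mtu = inner_frame + OUTER_IPV4 + OUTER_UDP + GENEVE_FIXED
print(needed_mtu)  # 1550 -- the recommended 1600 leaves headroom for options
```

This is why the recommended underlay MTU of 1600 bytes comfortably carries a standard 1500-byte inner payload plus the encapsulation headers.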

4-15 Review of Learner Objectives


4-16 Logical Switching Architecture


4-17 Learner Objectives


4-18 Management Plane and Central Control Plane Agents

Although the management plane and central control plane (CCP) run on the same virtual
appliance, they perform different functions.
The NSX cluster can scale to a maximum of three NSX Manager nodes running on the
management and central control planes.


4-19 Creating Segments on ESXi Hosts (1)


4-20 Creating Segments on ESXi Hosts (2)

Each component performs a function:

• The nsx-proxy agent is the local control plane agent running on each ESXi transport node.

• The CCP sends the information to the nsx-proxy agent running on the ESXi hypervisor, and
the nsx-proxy agent updates NestDB.

• The cfgAgent running on the ESXi host uses the nsxt-vdl2 module to create and configure
layer 2 segments.

• The configuration changes are performed through the cfgAgent and are written into the in-
memory database called NestDB.

4-21 Creating Segments on KVM Hosts (1)


4-22 Creating Segments on KVM Hosts (2)


4-23 NSX-T Data Center Communication Channels

The directional arrows represent the ports used between the various components of NSX-T Data
Center.


4-24 Review of Learner Objectives


4-25 Configuring Segments


4-26 Learner Objectives


4-27 Segment Configuration Tasks

If your VM is on a KVM host, you need to manually create a logical port to attach the VM:

• A segment contains multiple switch ports. Routers, VMs, containers, and so on can connect to
a segment through the segment ports.

• After attaching a VM to a segment, you can add segment ports to the segment.



4-28 Creating Segments

When creating a segment, you select an uplink in the Uplink & Type drop-down menu:

• You can select an existing Tier-0 or Tier-1 Gateway.

• You can also select None, which means that the segment is a logical switch that is not
connected to any gateway.

• If the uplink connects to a Tier-1 gateway, you must select a type: Flexible or Fixed. A
flexible segment can be unlinked from a gateway, whereas a fixed segment can be deleted but
not unlinked from a gateway.



4-29 Viewing Configured Segments

The segments from NSX Manager appear in vCenter Server as opaque networks. These segments
are not port groups in vSphere.



4-30 Attaching VMs to a Segment

A segment might have multiple switching ports. Entities such as routers, VMs, or containers can
connect to a segment through the segment ports. After attaching a VM to a segment, you can add
logical ports to the segment.
Depending on your host, the configuration for connecting a VM to a segment can vary.
If your ESXi host is managed by vCenter Server, you can access a hosted VM through the
vSphere Web Client UI. By editing the VM settings, you attach the VM to a desired segment.
If the ESXi host on which your VM resides is a standalone host, see the NSX-T Data Center
Administration Guide at https://docs.vmware.com/en/VMware-NSX-T-Data-
Center/2.4/administration/GUID-FBFD577B-745C-4658-B713-A3016D18CB9A.html.

4-31 Workflow: Attaching a vSphere VM to a Segment (1)


4-32 Workflow: Attaching a vSphere VM to a Segment (2)


4-33 Attaching a KVM VM to a Segment

If your VM resides on a KVM host, you must manually create a logical port and attach the VM:
1. From the KVM CLI, run the virsh dumpxml <VM_name> | grep interfaceid
command and record the UUID information.
2. In the NSX Manager simplified UI, add a segment port by configuring the UUID, attachment
type, and other settings.

For more information about the creation of the UUID, see VMware knowledge base article
2150850 at https://kb.vmware.com/s/article/2150850.
When adding segment ports, you select a type for the port:

• In the Type drop-down menu, select Parent, Child, or Independent.

• Leave this field blank except for use cases such as containers or VMware HCX.

• If this port is for a container in a VM, select Child.

• If this port is for a container host VM, select Parent.



• If this port is for a bare metal container or server, select Independent.

• If the type is set to Child, enter the parent virtual interface (VIF) ID in the Context ID text
box.

• If the type is set to Independent, enter the transport node ID in the Context ID text box.
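These attachment settings map onto a small JSON body sent to the NSX-T Policy REST API when the segment port is created. The following helper is a hypothetical sketch: the API path in the comment and the field names (`attachment`, `type`, `context_id`) are assumptions drawn from the NSX-T 2.4 Policy API and should be verified against the REST API guide before use:

```python
import json

def segment_port_payload(vif_uuid: str, port_type: str = "", context_id: str = "") -> dict:
    # Hypothetical builder for the body of
    # PUT /policy/api/v1/infra/segments/<segment-id>/ports/<port-id>
    attachment = {"id": vif_uuid}
    if port_type:       # PARENT, CHILD, or INDEPENDENT; blank for a plain VM vNIC
        attachment["type"] = port_type
    if context_id:      # parent VIF ID (CHILD) or transport node ID (INDEPENDENT)
        attachment["context_id"] = context_id
    return {"attachment": attachment}

# UUID as recorded from "virsh dumpxml <VM_name> | grep interfaceid" (placeholder value)
print(json.dumps(segment_port_payload("11111111-2222-3333-4444-555555555555")))
```

For the common KVM VM case, only the interface UUID is supplied and the type is left blank, matching the UI workflow described above.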

4-34 Workflow: Attaching a KVM VM to a Segment (1)


4-35 Workflow: Attaching a KVM VM to a Segment (2)


4-36 Viewing the Switching Configuration in the Advanced
and Simplified UIs


4-37 Verifying L2 End-to-End Connectivity

After you successfully set up the segment and attach VMs to it, you can test the connectivity
between VMs on the same segment. In the example, you can test the connectivity in the following
way:
1. Using SSH or the VM console, log in to VM T1-Web-01 (172.16.10.11), which is attached to
the segment Web-LS.
2. Ping VM T1-Web-03 (172.16.10.13), which resides on another KVM host. This VM is also
attached to the segment Web-LS.

4-38 Lab: Configuring Segments


4-39 Review of Learner Objectives


4-40 Configuring Segment Profiles


4-41 Learner Objectives


4-42 About Segment Profiles (1)


4-43 About Segment Profiles (2)

NSX-T Data Center supports several types of segment profiles and maintains one or more system-
defined default segment profiles:

• The IP Discovery profile uses DHCP snooping, Address Resolution Protocol (ARP)
snooping, or VMware Tools to learn the VM MAC and IP addresses.

• The MAC Discovery profile supports two functionalities: MAC learning and MAC address
change.

• SpoofGuard prevents traffic with incorrect source IP and MAC addresses from being
transmitted.

• Segment Security provides stateless layer 2 and layer 3 security by checking the ingress
traffic to the segment and matching the IP address, MAC address, and protocols to a set of
allowed addresses and protocols. Unauthorized packets are dropped.

• QoS (Quality of Service) provides high-quality and dedicated network performance for
preferred traffic.



4-44 Default Segment Profiles

You cannot edit or delete the default segment profiles, but you can create custom segment profiles.

4-45 Applying Segment Profiles to Segments


4-46 Applying Segment Profiles to L2 Ports

Only one segment profile of each type can be associated with a segment or segment port at a time.
For example, two QoS segment profiles cannot be associated with the same segment or segment port.
When the segment profile is associated or disassociated from a segment, the segment profile for
the child segment ports is applied based on the following criteria:

• If the parent segment has a profile associated with it, the child segment port inherits the
segment profile from the parent.

• If the parent segment does not have a segment profile associated with it, a default segment
profile is assigned to the segment, and the segment port inherits that default segment profile.

• If you explicitly associate a custom profile with a segment port, this custom profile overrides
the existing segment profile.

If you associate a custom segment profile with a segment, but want to retain the default segment
profile for one of the child segment ports, you must make a copy of the default segment profile
and associate it with the specific segment port.



4-47 IP Discovery Segment Profile

ARP suppression minimizes ARP traffic flooding within VMs connected to the same segment.
The VMware Tools IP Discovery method can also provide the VM's configuration information
and is available for ESXi-hosted VMs only.
The IP Discovery profile might be used in the following scenario: The distributed firewall depends
on the IP-to-port mapping to create firewall rules. Without IP Discovery, the distributed firewall
must find the IP of a logical port through SpoofGuard and manual address bindings, which is a
cumbersome and error-prone process.



4-48 Creating an IP Discovery Segment Profile (1)

In the screenshot on the slide, a custom IP Discovery profile named Lab-IP-Discovery-Profile is
created. After the profile is created, it can be applied to various segments.
The function of IP Discovery is to learn MAC and IP addresses:

• The discovered MAC and IP addresses are used to achieve ARP and ND (Neighbor
Discovery) suppression, which minimizes traffic between VMs connected to the same logical
switch.

• The addresses are also used by the SpoofGuard and distributed firewall components. The
distributed firewall uses the address bindings to determine the IP address of objects in firewall
rules.



IP Discovery uses various discovery methods to learn MAC and IP addresses:

• DHCP and DHCPv6 Snooping: Inspects DHCP packets exchanged between the DHCP
client and server to learn the IP and MAC addresses.

• ARP Snooping inspects the outgoing ARP and GARP (gratuitous ARP) packets of a VM to
learn the IP and MAC addresses.

• VM Tools is software that runs on an ESXi-hosted VM and can provide the VM's
configuration information, including MAC and IP or IPv6 addresses. This IP discovery
method is available for VMs running on ESXi hosts only.

• ND Snooping is the IPv6 equivalent of ARP Snooping. It inspects neighbor solicitation (NS)
and neighbor advertisement (NA) messages to learn the IP and MAC addresses.

• Duplicate IP Detection checks whether a newly discovered IP address is already present on
the binding list for a different port. This check is performed for ports on the same logical
switch. If a duplicate address is detected, the newly discovered address is not added to the
binding list but is added to the discovered list. All duplicate IPs have an associated discovery
timestamp. If the IP that is on the binding list is removed, either by adding it to the ignore
binding list or by disabling snooping, the duplicate IP with the oldest timestamp is moved to
the binding list. The duplicate address information is available through an API call.

Different address discovery methods operate in different modes:

• By default, ARP Snooping and ND Snooping operate in the Trust-on-First-Use (TOFU)
mode. The first address discovered is bound to the port for the lifetime of that port.

• By contrast, DHCP Snooping and VM Tools always operate in the Trust-on-Every-Use
(TOEU) mode. When an address is discovered, it is added to the bindings list. When an
address is deleted, it is removed from the bindings list. You can disable TOFU for ARP
Snooping or ND Snooping. In that case, they operate in the TOEU mode.

For each port, NSX Manager maintains an ignore bindings list, which contains IP addresses that
cannot be bound to the port. You can only update this list using the API. You can also use this
method to delete a previously discovered IP for a given port.
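The difference between the two trust modes can be illustrated with a toy binding-list model (illustrative only, not NSX-T code):

```python
def update_bindings(bindings: dict, port: str, addr: str, mode: str) -> dict:
    # Toy model of the discovery trust modes described above.
    entry = bindings.setdefault(port, [])
    if mode == "TOFU":
        if not entry:          # Trust-on-First-Use: only the first address sticks
            entry.append(addr)
    elif mode == "TOEU":
        if addr not in entry:  # Trust-on-Every-Use: every learned address is added
            entry.append(addr)
    return bindings

b = {}
update_bindings(b, "port-1", "10.0.0.5", "TOFU")
update_bindings(b, "port-1", "10.0.0.99", "TOFU")  # ignored: first use is trusted
print(b["port-1"])  # ['10.0.0.5']
```

Under TOFU a later (possibly spoofed) address cannot displace the first binding, while TOEU keeps the binding list synchronized with whatever the discovery method currently reports.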



4-49 Creating an IP Discovery Segment Profile (2)

You can enable ARP Snooping, DHCP Snooping, or VM Tools to create a custom IP Discovery
segment profile that learns the IP and MAC addresses to ensure the IP integrity of a segment.



4-50 MAC Discovery Segment Profile

The MAC Discovery profile supports source MAC address learning:

• Source MAC address-based learning is a common feature in the physical world for learning
the MAC address of a machine. The MAC Learning feature provides network connectivity to
deployments where multiple MAC addresses are configured behind one vNIC, for example, in
a nested hypervisor deployment where an ESXi VM runs on an ESXi host and multiple VMs
run inside the ESXi VM.

• Without MAC Learning, when the ESXi VM’s vNIC connects to a segment port, its MAC
address is static. VMs running inside the ESXi VM do not have network connectivity because
their packets have different source MAC addresses. With MAC Learning, the source MAC
address of every packet coming from the vNIC is inspected, the MAC address is learned, and
the packet is allowed to go through. If a learned MAC address is not used for 10 minutes, it is
removed. This aging property is not configurable.

• MAC Learning also supports Unknown Unicast Flooding. When a unicast packet with an
unknown destination MAC address is received on a port, the packet is flooded out on all
segment ports that have MAC Learning and Unknown Unicast Flooding enabled. This
property is enabled by default, but only if MAC Learning is enabled.
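The learning-and-aging behavior described above can be sketched as a toy MAC table (illustrative only, not NSX-T code; the 600-second age-out mirrors the 10-minute limit):

```python
MAC_AGE_SECONDS = 600  # learned MACs age out after 10 minutes (not configurable)

class MacTable:
    # Toy model of source-MAC learning with aging.
    def __init__(self):
        self._entries = {}  # MAC -> (port, last-seen timestamp)

    def learn(self, mac: str, port: str, now: float) -> None:
        self._entries[mac] = (port, now)  # refresh the timestamp on every packet

    def lookup(self, mac: str, now: float):
        entry = self._entries.get(mac)
        if entry is None:
            return None  # unknown MAC: candidate for unknown unicast flooding
        port, seen = entry
        if now - seen > MAC_AGE_SECONDS:
            del self._entries[mac]  # aged out
            return None
        return port

table = MacTable()
table.learn("00:50:56:01:02:03", "vnic-1", now=0.0)
print(table.lookup("00:50:56:01:02:03", now=30.0))   # vnic-1
print(table.lookup("00:50:56:01:02:03", now=700.0))  # None (aged out)
```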

The MAC Discovery profile also supports the ability of a VM to change its MAC address:

• A VM connected to a port with MAC Change enabled can run an administrative command to
change the MAC address of its vNIC and still send and receive traffic on that vNIC.

• This feature (disabled by default) is used when a VM needs the ability to change its MAC
address and yet not lose network connectivity.

If you enable MAC Learning or MAC Change, you should also enable SpoofGuard to improve
security.
For more information about creating a MAC Discovery profile and associating the profile with a
segment or a port, see the NSX-T Data Center Administration Guide at
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.4/nsxt_24_admin.pdf.



4-51 QoS Segment Profile

QoS provides high-quality and dedicated network performance for preferred traffic that requires
high bandwidth. The QoS mechanism achieves this performance by providing sufficient
bandwidth, controlling latency and jitter, and reducing data loss for preferred packets even with
network congestion. This level of network service is provided by using the existing network
resources efficiently.
The QoS profile supports two methods:

• Class of Service (CoS): Marks the packet’s layer 2 header to specify its priority

• Differentiated Services Code Point (DSCP): Inserts a code value into the packet’s layer 3
header for prioritization.

The layer 2 CoS allows you to specify priority for data packets when traffic is buffered in the
segment due to congestion. The layer 3 DSCP detects packets based on their DSCP values. CoS is
always applied to the data packet regardless of the trusted mode.



NSX-T Data Center trusts the DSCP setting applied by a VM or modifies and sets the DSCP value
at the segment level. In each case, the DSCP value is propagated to the outer IP header of
encapsulated frames. In this way, the external physical network can prioritize the traffic based on
the DSCP setting on the external header. When DSCP is in the trusted mode, the DSCP value is
copied from the inner header. When in the untrusted mode, the DSCP value is not preserved for
the inner header. DSCP settings work only on tunneled traffic. These settings do not apply to
traffic inside the same hypervisor.
You can use the QoS segment profile to configure the average ingress and egress bandwidth
values to set the transmit limit rate. To prevent congestion on the northbound network links, you
can use the peak bandwidth rate to specify the upper limit that traffic on a segment is allowed to
burst. The settings in a QoS segment profile do not guarantee the bandwidth but help limit the use
of network bandwidth. The actual bandwidth you observe is determined by the link speed of the
port or the values in the segment profile, whichever is lower.
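As a small illustration of the marking described above, the DSCP value occupies the upper six bits of the IPv4 TOS (or IPv6 Traffic Class) byte that the physical network inspects:

```python
def tos_byte(dscp: int, ecn: int = 0) -> int:
    # DSCP occupies the upper 6 bits of the IPv4 TOS / IPv6 Traffic Class byte;
    # the lower 2 bits carry ECN.
    if not (0 <= dscp <= 63 and 0 <= ecn <= 3):
        raise ValueError("DSCP is 6 bits, ECN is 2 bits")
    return (dscp << 2) | ecn

# Expedited Forwarding (DSCP 46) as it would appear in the outer IP header
# of an encapsulated frame, where the underlay can prioritize it
print(hex(tos_byte(46)))  # 0xb8
```

In trusted mode, this is the value copied from the inner header to the outer header; in untrusted mode, the segment-level setting determines what the underlay sees.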
For more information about the QoS segment profile, see NSX-T Data Center Administration
Guide at https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html.



4-52 Segment Security Profile

The Segment Security profile provides stateless layer 2 and layer 3 security by checking the
ingress traffic to the segment and dropping unauthorized packets sent from VMs. The profile
matches the IP address, MAC address, and protocols to a set of allowed addresses and protocols.
You can configure the Bridge Protocol Data Unit (BPDU) filter, DHCP snooping, DHCP server
block, and rate limiting options:

• BPDU Filter: Clicking the BPDU Filter toggle button to on enables BPDU filtering. When
the BPDU filter is enabled, all of the traffic to the BPDU destination MAC address is blocked.
When enabled, the BPDU filter also disables the Spanning Tree Protocol (STP) on the logical
segment ports because these ports are not expected to take part in STP.

• BPDU Filter Allow List: You click the destination MAC address from the BPDU destination
MAC addresses list to allow traffic to the permitted destination.



• To enable DHCP filtering, you click the Server Block and Client Block toggle buttons to on.
DHCP Server Block blocks traffic from a DHCP server to a DHCP client. It does not block
traffic from a DHCP server to a DHCP relay agent.

• Clicking the Non-IP Traffic Block toggle button to on allows only IPv4, IPv6, ARP, GARP,
and BPDU traffic. The rest of the non-IP traffic is blocked. The permitted IPv4, IPv6, ARP,
GARP, and BPDU traffic is based on other policies set in address binding and SpoofGuard
configurations. By default, this option is disabled to allow non-IP traffic to be handled as
regular traffic.

• You can configure rate limits for the ingress or egress broadcast and multicast traffic. Rate
limits are configured to protect the segment or the VM from threats such as broadcast storms.
To avoid any connectivity problems, the minimum rate limit value must be >= 10 pps.



4-53 SpoofGuard Segment Profile

SpoofGuard provides protection against spoofing with MAC+IP+VLAN bindings. If a VM’s IP
address does not match the IP address on the corresponding logical port and switch address
binding in SpoofGuard, the VM’s vNIC is prevented from accessing the network entirely.
SpoofGuard can be configured at the port or switch level.
SpoofGuard might be used in your environment for the following reasons:

• Preventing a rogue VM from assuming the IP address of an existing VM.

• Ensuring that the IP addresses of VMs cannot be altered without intervention. If you do not
want VMs to alter their IP addresses without proper change control review, you can use
SpoofGuard to ensure that the VM owner cannot simply alter the IP address and continue
working unimpeded.

• Ensuring that the distributed firewall rules are not inadvertently (or deliberately) bypassed.
For distributed firewall rules created using IP sets as sources or destinations, a VM could have
its IP address forged in the packet header, thereby bypassing the rules in question.



4-54 Creating a SpoofGuard Segment Profile

A SpoofGuard profile applied to a segment or a port blocks traffic determined to be spoofed.


When SpoofGuard is configured, if the IP address of a VM changes, traffic from the VM might be
blocked until the corresponding configured port or segment address bindings are updated with the
new IP address.
You can enable SpoofGuard for the port groups containing the guests. When enabled for each
network adapter, SpoofGuard inspects packets for the prescribed MAC and its corresponding IP
address.
A SpoofGuard profile can be applied to a segment or a port:

• At the port level, the allowed MAC, VLAN, or IP whitelist is provided through the Address
Bindings property of the port. When the VM sends traffic, it is dropped if its MAC, VLAN, or
IP address does not match the MAC, VLAN, or IP properties of the port. The port-level
SpoofGuard deals with traffic authentication, that is, the traffic consistent with the VIF
configuration.

• At the segment level, the allowed MAC, VLAN, or IP whitelist is provided through the
Address Bindings property of the segment. This property is typically an allowed IP range or
subnet for the segment, and the segment-level SpoofGuard deals with traffic authorization.

Traffic must be permitted by the port and the segment levels by SpoofGuard before it is allowed
into a segment. Enabling or disabling port- and segment-level SpoofGuard can be controlled using
the SpoofGuard segment profile.
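Conceptually, the SpoofGuard check is a membership test of a packet's source against the configured address bindings. A toy model (illustrative only, not NSX-T code):

```python
def spoofguard_allows(src: tuple, address_bindings: set) -> bool:
    # Toy model of the port-level check: the packet's (MAC, IP, VLAN) source
    # tuple must match one of the port's configured address bindings.
    return src in address_bindings

bindings = {("00:50:56:aa:bb:cc", "172.16.10.11", 0)}
print(spoofguard_allows(("00:50:56:aa:bb:cc", "172.16.10.11", 0), bindings))  # True
print(spoofguard_allows(("00:50:56:aa:bb:cc", "172.16.10.99", 0), bindings))  # False: spoofed IP, dropped
```

This also shows why a VM whose IP changes legitimately loses connectivity until the bindings are updated: the new source tuple simply no longer matches.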

4-55 Review of Learner Objectives


4-56 Logical Switching Packet Forwarding


4-57 Learner Objectives


4-58 NSX-T Data Center Controller Tables


4-59 TEP Table Update (1)


4-60 TEP Table Update (2)


4-61 TEP Table Update (3)


4-62 TEP Table Update (4)


4-63 MAC Table Update (1)


4-64 MAC Table Update (2)


4-65 MAC Table Update (3)


4-66 MAC Table Update (4)


4-67 ARP Table Update (1)

The host N-VDS learns the MAC-to-IP association by snooping the ARP and DHCP traffic. The
learned information is pushed from each host to the control plane.


4-68 ARP Table Update (2)


4-69 ARP Table Update (3)


4-70 ARP Table Update (4)


4-71 Unicast Packet Forwarding Across Hosts (1)


4-72 Unicast Packet Forwarding Across Hosts (2)


4-73 Unicast Packet Forwarding Across Hosts (3)


4-74 Unicast Packet Forwarding Across Hosts (4)


4-75 BUM Traffic Overview

All broadcast, unknown unicast, and multicast (BUM) traffic is treated the same: it is flooded to
all participating hypervisors in the segment. The replication is performed in software.
Each host transport node is a tunnel endpoint. Each TEP has an IP address. These IP addresses can
be in the same subnet or in different subnets, depending on your configuration of IP pools or
DHCP for your transport nodes.
When two VMs on different hosts communicate directly and ARP is resolved, unicast-
encapsulated traffic is exchanged between the two TEP IP addresses without any need for
flooding. However, as with any layer 2 network, sometimes traffic that is originated by a VM,
such as an ARP request, needs to be flooded, which means that the packet needs to be sent to all of
the other VMs belonging to the same segment. This is the case with layer 2 BUM traffic.
In the diagram, VM2, residing on transport node 2 (TN2), needs to send traffic to VM9, residing
on TN9. VM9’s MAC address is unknown to TN2 or the control plane. Therefore, VM2 sends out an
ARP request (broadcast frame) seeking VM9’s MAC address. TN2 floods this ARP request frame
out to all other transport nodes within VNI 5000. VM9 on TN9 receives the ARP request and
responds with an ARP reply. ARP tables on hosts are then updated to reduce future flooding.



To enable flooding, an NSX-T Data Center segment supports two types of replication modes:

• Head Replication mode: This mode is also known as Source Mode or Headend Replication.
The source host simply duplicates each BUM frame and sends a copy to each TEP (on a
particular VNI) that it knows of.

• Hierarchical Two-Tier Replication (default mode): This mode is also known as the MTEP
mode. It involves a host in another L2 domain that performs replication of BUM traffic to
other hosts within the same VNI.



4-76 Handling BUM Traffic: Head Replication

In this example, a BUM packet arrives on TN1:

• TN1 replicates (because the control plane does not have the desired information) to TN2 and
TN3 because they are in the same L2 domain.

• Meanwhile, TN1 also needs to replicate the packet to the remote transport nodes (TN4 and
TN5 in a L2 domain and TN7, TN8, TN9 in another L2 domain).

• Because TN6 does not participate in VNI 5000, the packet is not replicated to TN6.



4-77 Handling BUM Traffic: Hierarchical Two-Tier
Replication

Hierarchical two-tier mode is also known as the MTEP replication mode.


In the diagram, a BUM packet arrives on TN1:

• TN1 replicates the BUM traffic locally to TN2 and TN3.

• An MTEP is elected for each L2 domain.

• TN1 also sends a copy of the BUM packet to each remote MTEP.

The role of the MTEP is to replicate the received BUM packet locally and forward it to other TNs
within the same L2 domain:

• MTEP TN7 forwards the BUM packet to TN8 and TN9.

• MTEP TN5 forwards the packet to TN4.

• Because TN6 does not participate in VNI 5000, the packet is not sent to TN6.
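The difference between the two replication modes can be quantified with a toy fan-out count, using the transport-node groups from the diagram (illustrative only, not NSX-T code):

```python
def head_replication_copies(tep_groups):
    # Head/source replication: the source TEP unicasts one copy of the BUM
    # frame to every other TEP participating in the VNI.
    return sum(len(group) for group in tep_groups) - 1

def two_tier_copies_from_source(tep_groups):
    # Two-tier (MTEP) mode: the source replicates only within its own underlay
    # L2 segment and sends a single copy to one MTEP per remote segment;
    # the remote MTEPs then fan out locally.
    local, *remote = tep_groups  # the source resides in the first group
    return (len(local) - 1) + len(remote)

# TEPs grouped by underlay L2 segment, as in the diagram (TN6 is not on VNI 5000)
groups = [["TN1", "TN2", "TN3"], ["TN4", "TN5"], ["TN7", "TN8", "TN9"]]
print(head_replication_copies(groups))      # 7 copies sent by TN1
print(two_tier_copies_from_source(groups))  # 4 copies sent by TN1
```

Two-tier replication shifts most of the replication work off the source host and keeps inter-segment traffic to one copy per remote L2 domain, which is why it is the default.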



4-78 Review of Learner Objectives


4-79 Key Points

Module 5
NSX-T Data Center Logical Routing

Module 5: NSX-T Data Center Logical Routing 233


5-2 Importance


5-3 Module Lessons


5-4 Logical Routing Overview


5-5 Learner Objectives


5-6 Logical Routing Use Cases

NSX-T Data Center logical routing has many use cases:

• NSX-T Data Center is designed to meet the demands of containerized workload, multi-
hypervisor, and multicloud environments.

• The logical routing functionality focuses on multitenant environments. Gateways can support multiple instances where complete separation of tenants and networks is required.

• Logical routing is optimized for cloud environments. It is suited for containerized workloads, multi-hypervisor, and multicloud data centers.

• The distributed routing architecture provides optimal routing paths. Routing is done closest to
the source. For example, traffic from two VMs on different subnets residing on the same host
can be routed in the kernel. The traffic does not need to leave the host to get routed, thereby
avoiding hairpinning.

• NSX Edge transport nodes that host gateways provide network services that cannot be
distributed to hosts.



• Gateways exist where east-west routing, north-south routing, and centralized services (such as
NAT or load balancing) are required.

• No dynamic routing protocol is needed between the two-tiered gateways, simplifying data
center routing.

• Logical routing makes it easy to extend logical networks to physical environments.



5-7 Prerequisites for Logical Routing



5-8 Logical Routing in NSX-T Data Center

An NSX-T Data Center gateway reproduces routing functionality in a virtual environment:

• Logical routing is distributed and completely decoupled from the underlying hardware. Basic forwarding decisions are made locally on the prepared transport nodes.

• Gateways provide centralized services. Layer 3 functionalities, such as NAT, are provided
through the services running on NSX Edge nodes.

• Multiple gateway instances can be installed to provide multitenancy and network separation. Logical routing is enhanced for most cloud use cases that involve multiple service providers and tenants.



NSX-T Data Center gateways provide north-south and east-west connectivity:

• North-south routing enables tenants to access public networks. North-south refers to traffic leaving or entering a tenant administrative domain. Connections to and from entities outside the tenant's premises are considered north-south connectivity.

• East-west traffic flows between various networks within the same tenant. In other words,
traffic is sent between logical networks (between logical switches) under the same
administrative domain.

Tier-1 Gateways have downlink ports to connect to NSX-T Data Center logical switches and uplink ports to connect to NSX-T Data Center Tier-0 Gateways.
The Tier-0 Gateway:

• Peers with the upstream physical infrastructure for north-south routing.

• Provides the logical interface that serves as the default gateway for the connected network.

• Can provide network services such as NAT, load balancing, edge firewall, VPN, and more.



5-9 Gateway Components: Distributed Router and Service
Router



5-10 Gateway: Distributed Router (1)



5-11 Gateway: Distributed Router (2)



5-12 Gateway: Service Router

The distributed router (DR) modules, which are placed on the transport nodes, can forward packets
based on the given setup and routing decisions. However, these modules cannot perform certain
required functions. In those cases, application traffic is diverted through a service router (SR) in a
given edge node to perform these functions.
Some services in NSX-T Data Center are not distributed, including physical infrastructure
connectivity, NAT, DHCP server, NSX Edge firewall, logical load balancer, different VPN
services, metadata proxy for OpenStack, and more.



5-13 Interaction between Distributed and Service Routers

A DR instance also runs on the NSX Edge node to support connectivity to the service router on that same NSX Edge node. When the service and distributed routers are created, they are automatically interconnected through a router-link port.



5-14 About Edge Nodes

NSX Edge nodes are not the same as the Edge Services Gateway in NSX for vSphere.



5-15 Logical Routing: Multitier Topology



5-16 Tier-0 and Tier-1 Gateways

Gateways are distributed across the kernel of each host. A gateway can be deployed as either a
Tier-0 or a Tier-1 Gateway:

• Tier-0 Gateways provide north-south connectivity.

• Tier-1 Gateways provide east-west connectivity.

The Tier-1 Gateway must connect to the Tier-0 Gateway, with the exception of a single-tier
topology in which Tier-0 is directly connected to upstream physical gateways.
The Tier-1 Gateway does not require an edge node if no services are used. It has preprogrammed
(by the management plane) connections toward its upstream Tier-0 Gateway.
Both Tier-0 and Tier-1 Gateways support stateful services, such as NAT. Stateful services are centralized on gateway nodes.
The stateless function supported by Tier-0 and Tier-1 Gateways is routing. Unlike stateful services, no state must be maintained for routing.



5-17 Logical Router Interfaces

In logical router deployment in NSX-T Data Center, different types of connections require
different types of interfaces:

• The uplink interface provides connections to the external physical infrastructure. VLAN and
overlay interface types are supported, depending on the use case. The uplink interface is
where the external BGP peering can be established. External service connections, such as
IPSec VPN, can also be used through the uplink interface.

• The downlink interface connects workload networks (where endpoint VMs are running) to the
routing infrastructure. A downlink interface is configured to connect to a logical switch (local
subnet). It is the interface that provides the default gateway for the VMs in that subnet.

• RouterLink is a type of interface that connects Tier-0 and Tier-1 Gateways. The interface is
created automatically when Tier-0 and Tier-1 Gateways are connected. It uses a subnet
assigned from the 100.64.0.0/10 IPv4 address space.



• The intra-tier transit link connection is also automatically created when a service router is created. It is an internal link between the distributed and service routers on a gateway. By default, the intra-tier transit link has an IP address from the 169.254.0.0/28 subnet range.

• The centralized service port (CSP) is a special-purpose port that enables centralized services, mainly for VLAN-based networks. North-south service insertion is another use case that requires a centralized service port to connect a partner appliance and redirect north-south traffic for partner services. Centralized service ports are supported on both active-standby Tier-0 logical routers and Tier-1 routers. Firewall, NAT, and VPN services are supported on this port.



5-18 Centralized Service Port

Support for VLAN-backed downlinks (centralized service port) to Tier-0 and Tier-1 Gateways
was introduced in NSX-T Data Center 2.2. You can extend NSX Edge services to customer
environments with only VLAN-based networks. Downlink interfaces can also be used for VLAN-
based connections.
Gateway firewall rules and NAT configuration can be applied directly to CSP.



5-19 Single-Tier Deployment Example

In a single-tier deployment, only Tier-0 Gateways are used (no Tier-1). The segments are directly connected to the Tier-0 layer. Upstream connectivity is provided by the service provider, and southbound connectivity is managed by the tenant.



5-20 Multitier Topology Examples

In the diagrams, Segments A, B, C, and D are connected to Tier-1 Gateways. The Tier-0 Gateway
is also known as the provider gateway. Tier-1 Gateways can be owned and configured by the
tenants, depending on the business requirements.
The two-tier routing topology is not mandatory. If the provider and the tenant do not need to be
separated, a single-tier topology can be used.
The Tier-0 Gateway is owned and configured by the provider. The Tier-1 Gateway is owned and
configured by the tenants and is typically provisioned by cloud management platforms (CMPs).



5-21 Tier-0 Gateway Uplink Connections



5-22 Review of Learner Objectives



5-23 NSX Edge and Edge Clusters



5-24 Learner Objectives



5-25 NSX Edge Functions

The purpose of NSX Edge is to provide computational power to deliver IP routing and services.
NSX Edge is an important part of the NSX-T Data Center transport zone.
NSX Edge nodes provide the administrative background and computational power for dynamic
routing and services. Edge nodes are appliances with pools of capacity that can host distributed
routing and nondistributed services. NSX Edge nodes provide high availability, using active-active
and active-standby models for resiliency.
NSX Edge is commonly deployed in DMZs and multitenant cloud environments, where it creates
virtual boundaries for each tenant.



5-26 NSX Edge VM Form Factor and Sizing Options

NSX Edge deployed as a VM runs on ESXi host hypervisors.


For NSX Edge node VM deployment, the small appliance is for proof-of-concept deployments.
The medium size is suitable for a typical production environment and can support up to 64
hypervisors. The large size is for large-scale deployments with more than 64 hypervisors.
You can only deploy small and large VM form factors from the vSphere OVF deployment user
interface.



5-27 NSX Edge Bare Metal Hardware Requirements

For NSX Edge bare metal NIC requirements, see the supported adapters listed on the NSX Edge
Bare Metal Requirements page in the NSX-T Data Center Installation Guide at
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.4/nsxt_24_install.pdf.
If the hardware is not listed, the storage, video adapter, or motherboard components might not
work on the NSX Edge node.



5-28 Logical Routing Topology (1)

The slide shows the general topology for logical routing.


The spine-leaf physical topology ensures the communication between different resource clusters.
Spine switches are the core network switches, the backbones of the communication paths.
The leaf switches are the top-of-rack (ToR) switches, the immediate communicating devices
connected to the hypervisors.
In this scenario, the workload or compute clusters are separated from the management clusters.
Edge VMs are the endpoints that communicate with external entities. The edge VMs are deployed together with management resources.



5-29 Logical Routing Topology (2)

All transport nodes include the DR instance to provide localized and distributed routing functions.
The SR instances run on the edge transport nodes.



5-30 Logical Routing Topology (3)

When VMs communicate with external entities, they pass through the edge nodes in their own
resource cluster. This process is true in reverse: VMs get traffic from external entities through the
edge nodes.



5-31 NSX Edge Cluster Guidelines

NSX-T Data Center edge cluster scaling and maximums define the maximum number of nodes supported in an edge cluster, how many edge clusters and edge nodes can be configured, and what combination of node form factors (VM and bare metal) can be used.



5-32 NSX Edge Node Deployment Prerequisites

When you deploy NSX Edge nodes, different types of requirements apply depending on the deployment method. In general, all the requirements listed on the slide apply to both manual and automated deployments, except for the requirements related to installation media and OVF or OVA templates.



5-33 Deploying NSX Edge Nodes from the Simplified UI

On the Name and Description page of the Add Edge VM wizard, you configure the following
settings:

• Host name/FQDN

• Description

• Form Factor



5-34 Using vCenter Server to Deploy NSX Edge Nodes

NSX-T Data Center Edge nodes can be installed or deployed using various methods. If you prefer
an interactive edge installation, you can use a UI-based VM management tool, such as vSphere
Web Client connected to vCenter Server.
The image shows the option to deploy through vCenter Server or the vSphere Client. A wizard guides you through the steps so that you can provide the required details.
This process does not register the NSX-T Data Center edge node with the management plane. Additional command-line operations are required. When finished, you can verify the connectivity of the edge node in various ways.



5-35 Using the OVF Tool to Deploy NSX Edge Nodes

You can use the VMware OVF tool to install NSX Edge nodes. This tool can be downloaded from
the VMware portal and supports many different types of deployment, including deployment
operations in vCenter Server or vCloud Director.
The following extra options (defined by the --prop command-line switch) are available:

• nsx_passwd_0: Setting for the appliance root user password

• nsx_cli_passwd_0: Setting for the admin CLI user password

• nsx_cli_audit_passwd_0: Setting for the password of the audit user

• nsx_hostname: Setting for the host name of the appliance

• nsx_isSSHEnabled: Setting for the SSH service enablement (True/False)


• nsx_gateway_0: Setting for the default gateway IP address



• nsx_ip_0: Setting for the management IP address of the appliance

• nsx_netmask_0: Setting for the subnet mask of the management IP address

• nsx_dns1_0: Setting for the DNS server IP address

• nsx_ntp_0: Setting for the NTP server IP address for the appliance management
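As a sketch, the properties listed above might be assembled into an ovftool command line as follows. All values, the OVA file name, and the vi:// target locator are placeholders; only the property names come from the list above:

```python
# Assemble the ovftool --prop switches listed above into a command line.
# Every value here (passwords, IPs, host name, OVA path, vi:// target) is a
# placeholder for illustration; only the property names are taken from the
# documented list.
props = {
    "nsx_passwd_0": "root_password",
    "nsx_cli_passwd_0": "admin_password",
    "nsx_cli_audit_passwd_0": "audit_password",
    "nsx_hostname": "edge-01",
    "nsx_isSSHEnabled": "True",
    "nsx_gateway_0": "10.0.0.1",
    "nsx_ip_0": "10.0.0.10",
    "nsx_netmask_0": "255.255.255.0",
    "nsx_dns1_0": "10.0.0.2",
    "nsx_ntp_0": "10.0.0.3",
}

cmd = ["ovftool"]
cmd += [f"--prop:{name}={value}" for name, value in props.items()]
cmd += ["nsx-edge.ova",
        "vi://administrator@vcenter.example.com/DC/host/Cluster"]
print(" ".join(cmd))
```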



5-36 Installing NSX Edge on Bare Metal

Manual installation is also available when you install NSX Edge nodes on a bare metal server.
After the listed requirements are verified, the installation process should start automatically from
the installation media. Once the boot-up and power-on processes are complete, the system requests
an IP address through DHCP or requires manual entry.
By default, the root login password is vmware, and the admin login password is default.
Further setup procedures include enabling the interfaces and joining the edge node to the
management plane.



5-37 Using PXE to Deploy NSX Edge Nodes from an ISO
File

The preboot execution environment (PXE) boot can also be used to install NSX Edge nodes on a
bare metal platform.
This operation automates the installation process. You can preconfigure the deployment with all
the required network settings for the appliance.
The PXE method supports the NSX Edge node deployment only. It does not support NSX
Manager or NSX Controller deployments.



5-38 Joining NSX Edge with the Management Plane

The manual installation of NSX Edge nodes does not include an automated procedure to ensure
that the management plane sees edge nodes as available resources. You must join NSX Edge with
the management plane so that they can communicate with each other. Joining NSX Edge nodes to
the management plane ensures that the edge nodes are available from the management plane as
managed nodes.
First, you must verify that you have the administration privileges to access NSX Edge nodes and
the NSX Manager simplified UI. Then you can join NSX Edge nodes to the management plane
using the CLI.



5-39 Verifying the Edge Transport Node Status

In the simplified UI, select System > Fabric > Nodes and select the Edge Transport Nodes tab to view the status of the edge nodes known by NSX Manager or the management plane.
The Edge Transport Nodes tab lists the following categories:

• Configuration State

• Node Status

• Transport Zones Attached

• Node Version

Clicking the information icon next to the node status provides additional information about the
reasons for a given status.



5-40 Enabling Edge Node SSH Service

You can enable SSH by taking the following steps:


1. In vSphere Web Client, open the console to the newly deployed edge node.
2. Enter the login name and password.
3. Enter the command start service ssh.

You can also enter the command set service ssh start-on-boot to set the SSH
service to autostart when the VM is powered on.
4. Use the command get service ssh to check the result.



5-41 Postdeployment Verification Checklist

When the edge node deployment is complete, you can verify connectivity in several ways.



5-42 Creating an Edge Cluster

Clicking +ADD starts the process for creating an edge cluster. You must create an edge cluster profile, either separately or through the Add Edge Cluster wizard, and specify the edge node members of the planned cluster.



5-43 Mapping NSX Edge Node Interfaces (1)

Edge Node deployment requires specific interface assignments, particularly for the VM form
factor:

• The first interface of the deployment must be assigned to a management interface.

• Other interfaces must be assigned to the data path process that creates the overlay or VLAN-
based N-VDS.



5-44 Mapping NSX Edge Node Interfaces (2)

All non-management links on the edge node are used for uplinks and tunnels. For example, one might be used for a tunnel endpoint, and another might be used for an NSX Edge-to-external physical uplink.
During N-VDS creation, the uplinks can be individually assigned per N-VDS. The number of uplink interfaces is determined by the uplink profiles they use.



5-45 Verifying NSX Edge Node Interfaces Mapping



5-46 Edge Node VM Deployment Options

When deploying NSX Edge nodes on a hypervisor or host that is already a transport node, you can take different approaches to the deployment:

• If the host transport node (in this case, an ESXi host) has multiple virtual switches running,
for example, one switch is from vSphere (either a vSphere distributed switch or a vSphere
standard switch) and an NSX-managed virtual distributed switch (N-VDS) from NSX,
separate uplink interfaces must be used for each switch.
– The NSX Edge vNICs are attached to the standard switch or the distributed switch.
– Edge nodes can use a virtual distributed switch or virtual standard switch, which can use
different uplinks per the above requirement.
– This installation does not require the edge node TEP IP range to be different from the
host transport node TEP IP address range.

• If the transport node uses only the N-VDS, you must deploy the edge node using separate
VLAN-backed logical switches for uplink connectivity.



With this method, the subnet for the edge node TEP must be different from the transport node
TEP IP range.



5-47 Lab: Deploying and Configuring NSX Edge Nodes



5-48 Review of Learner Objectives



5-49 Configuring Tier-0 and Tier-1 Gateways



5-50 Learner Objectives



5-51 Gateway Configuration Tasks

Depending on the environment, the order of the configuration tasks can vary.
Before configuring the Tier-0 Gateway, you should verify that your NSX Controller cluster is
stable, that at least one NSX Edge node is installed, and that an NSX Edge cluster is configured.
After you create the Tier-0 and Tier-1 Gateways, you must manually connect the gateways.
The gateways are not automatically connected to each other during the creation process. The
management plane does not know automatically which Tier-1 instance should connect to which
Tier-0 instance.
After you manually connect these instances, the management plane programs the routes in these
instances to establish connectivity between the tiers.



5-52 Configuring a Tier-0 Gateway: Step 1

Each Tier-0 Gateway can have multiple uplink connections, depending on the requirements and
the actual configuration.
In the example, two different segments are configured on two active edge nodes in the cluster.



5-53 Configuring a Tier-0 Gateway: Step 2



5-54 Configuring a Tier-0 Gateway: Step 3



5-55 Configuring a Tier-0 Gateway: Step 4



5-56 Configuring a Tier-0 Gateway: Step 5



5-57 Reviewing the Tier-0 Gateway Configuration



5-58 Configuring a Tier-1 Gateway: Step 1

In the first step, you create a Tier-1 Gateway. Later, you can select whether the configuration should be continued.
Tier-1 Gateways have downlink ports for connecting to NSX logical switches, and gateway-link
ports for connecting to NSX Tier-0 Gateways.



5-59 Configuring a Tier-1 Gateway: Step 2

When connecting a segment to a gateway, the subnet or gateway IP address must be configured.



5-60 Testing East-West Connectivity

The Tier-1 Gateway is created and the interfaces for various logical networks are configured. Now
you can verify the east-west connectivity within the tenant environment.



5-61 Configuring a Tier-1 Gateway: Step 3



5-62 Configuring a Tier-1 Gateway: Step 4

Using route advertisement ensures that the networks defined for tenant segments are available for
the connected Tier-0 Gateway, which, in turn, can advertise them accordingly.



5-63 Testing North-South Connectivity

In the diagram and the command output, the VM web-sv-01a (172.16.10.11) can ping the Tier-0
Gateway (192.168.100.2) and the upstream physical router (192.168.100.1), assuming routing is
configured on the physical router.
Web-sv-01a can also ping a remote VM 172.20.10.80.
Complete north-south connectivity is now established.



5-64 Routing Topologies



5-65 Single-Tier Topology



5-66 Single-Tier Routing: Egress to Physical Network (1)



5-67 Single-Tier Routing: Egress to Physical Network (2)



5-68 Single-Tier Routing: Egress to Physical Network (3)



5-69 Single-Tier Routing: Egress to Physical Network (4)



5-70 Single-Tier Routing: Egress to Physical Network (5)



5-71 Single-Tier Routing: Egress to Physical Network (6)



5-72 Single-Tier Routing: Ingress from Physical Network (7)



5-73 Single-Tier Routing: Ingress from Physical Network (8)



5-74 Single-Tier Routing: Ingress from Physical Network (9)



5-75 Single-Tier Routing: Ingress from Physical Network
(10)



5-76 Single-Tier Routing: Ingress from Physical Network
(11)



5-77 Single-Tier Routing: Ingress from Physical Network
(12)



5-78 Single-Tier Routing: Ingress from Physical Network
(13)



5-79 Multitier Topology (1)



5-80 Multitier Topology (2)



5-81 Multitier Topology (3)



5-82 Multitier Routing: Egress to Physical Network Example



5-83 Multitier Routing: Egress to Physical Network (1)



5-84 Multitier Routing: Egress to Physical Network (2)



5-85 Multitier Routing: Egress to Physical Network (3)



5-86 Multitier Routing: Egress to Physical Network (4)



5-87 Multitier Routing: Egress to Physical Network (5)

The slide highlights the segment that is used to transmit traffic between T0_DR and T0_SR. This
image shows only a logical view.



5-88 Multitier Routing: Egress to Physical Network (6)



5-89 Multitier Routing: Egress to Physical Network (7)



5-90 Multitier Routing: Egress to Physical Network (8)



5-91 Multitier Routing: Egress to Physical Network (9)



5-92 Multitier Routing: Egress to Physical Network (10)



5-93 Multitier Routing: Egress to Physical Network (11)



5-94 Multitier Routing: Egress to Physical Network (12)



5-95 Multitier Routing: Egress to Physical Network (13)



5-96 Multitier Routing: Egress to Physical Network (14)



5-97 Multitier Routing: Egress to Physical Network (15)



5-98 Multitier Routing: Egress to Physical Network (16)



5-99 Multitier Routing: Egress to Physical Network (17)



5-100 Lab: Configuring the Tier-1 Gateway



5-101 Review of Learner Objectives



5-102 Configuring Static and Dynamic Routing



5-103 Learner Objectives



5-104 Static and Dynamic Routing



5-105 Tier-0 Gateway Capabilities



5-106 Configuring Static Routes on a Tier-0 Gateway (1)



5-107 Configuring Static Routes on a Tier-0 Gateway (2)



5-108 Viewing the Static Route Configuration



5-109 BGP on Tier-0

BGP is the only dynamic routing protocol supported on the Tier-0 Gateway in NSX-T Data Center. The Tier-0 Gateway BGP topology should be configured with redundancy and symmetry between the Tier-0 Gateways and the external peers.



5-110 Routing Features Supported by the Tier-0 Gateway



5-111 Configuring Dynamic Routing on Tier-0 Gateways:
Step 1

BGP is enabled by default on Tier-0 Gateways.



5-112 Configuring Dynamic Routing on Tier-0 Gateways:
Step 2



5-113 Configuring Dynamic Routing on Tier-0 Gateways:
Step 3



5-114 Verifying BGP Configuration of Tier-0 Gateway on
Edge Nodes

To use the edge node CLI to verify NSX Edge BGP connections, you follow these steps:
1. Log in to the edge node CLI.
2. Run the get logical-routers command to acquire the Virtual Routing and
Forwarding (VRF) number of the Tier-0 service router, which is SR-T0-LR-01 in the
example.
3. Enter the vrf <vrf_number> command to enter the Tier-0 service gateway context. This
command restricts the scope of the output of the commands to the configured VRF.
4. Enter the get bgp neighbor or get bgp neighbor summary commands to verify
that the BGP neighbor state is established.



5-115 BFD on a Tier-0 Gateway

The Bidirectional Forwarding Detection (BFD) protocol is supported by Tier-0 Gateways to protect the connection with the routing peers. BFD protects both static routes and BGP dynamic routes.



5-116 Enabling BFD on a Tier-0 Gateway

You can enable BFD per BGP neighbor and globally per gateway. The protocol timer can be fine-tuned to meet environmental needs. The BFD timer range is 300 milliseconds to 10,000 milliseconds. The default is 1,000 milliseconds with a multiplier of 3.



5-117 About IP Prefix Lists

IP prefix lists are a way to define subsets or lists of IP addresses. This way of grouping is not the
same as defining a subnet. The IP prefix list defines a group of different subnets and individual IP
addresses as well as an action to either allow or deny those IP addresses. Additionally, using le or
ge prefix modifications, you can limit or extend the subnet or IP range.
In NSX-T Data Center, you can use IP prefix lists for various purposes, such as BGP filtering.
For example, you can add the IP address 192.168.100.3/24 to the IP prefix list and deny it so that the route is not redistributed to the northbound gateway. As a result, all IP addresses except 192.168.100.3/24 are shared with the gateway.
You can also append an IP address with less-than-or-equal-to (le) and greater-than-or-equal-to (ge)
modifiers to grant or limit route redistribution.



5-118 Configuring an IP Prefix List

In the example, the IP prefix list permits the network prefix 10.0.0.0/8, so that it can be advertised by the Tier-0 Gateway to its upstream BGP neighbors. The prefix list also denies 192.168.0.0/24 network prefixes with masks greater than or equal to 26 bits and less than or equal to 30 bits in length. This configuration means that the Tier-0 Gateway cannot redistribute these prefixes to its upstream BGP neighbors.
Prefixes that are not specifically permitted in a prefix list are denied implicitly. If you need to
reverse this behavior (permit all other routes that are not specifically denied), you can change the
action of the default rule prefixlist-out-default to Permit.
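
The evaluation order described above can be sketched in code. This is a hypothetical illustration, not NSX code: each entry pairs a network with optional ge/le prefix-length bounds and an action, and any route matching no entry is denied implicitly.

```python
import ipaddress

# Entries mirror the configuration described above:
# permit 10.0.0.0/8 exactly; deny 192.168.0.0/24 with mask length 26-30.
PREFIX_LIST = [
    # (network, ge, le, action)
    ("10.0.0.0/8", None, None, "permit"),
    ("192.168.0.0/24", 26, 30, "deny"),
]

def evaluate(route):
    """Scan entries in order and return the action of the first match."""
    net = ipaddress.ip_network(route)
    for listed, ge, le, action in PREFIX_LIST:
        entry = ipaddress.ip_network(listed)
        if ge is None and le is None:
            # No modifiers: the route must match the entry exactly.
            if net == entry:
                return action
        elif net.subnet_of(entry) and ge <= net.prefixlen <= le:
            return action
    return "deny"  # implicit deny for anything not explicitly permitted
```

For example, `evaluate("192.168.0.64/26")` is denied by the second entry, while `evaluate("172.16.0.0/16")` falls through to the implicit deny.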



5-119 About Route Maps (1)



5-120 About Route Maps (2)

A route map consists of a sequence of IP prefix lists, BGP path attributes, and an associated
action. The gateway scans the sequence for an IP address match. When a match occurs, the
gateway performs the action and stops scanning.
Route maps can be referenced at the BGP neighbor level for route redistribution. When IP prefix
lists are referenced in route maps, and the route map action of permitting or denying is applied, the
action specified in the route map sequence overrides the specification in the IP prefix list.
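
The scan-and-stop behavior can be sketched as follows; the sequence numbers and prefixes are illustrative, not from the course lab.

```python
# Illustrative sketch of route-map evaluation: the gateway scans the ordered
# sequence for a match and performs the first matching entry's action, which
# overrides any action set in the referenced IP prefix list.
ROUTE_MAP = [
    # "prefixes" stands in for a referenced IP prefix list.
    {"seq": 10, "prefixes": {"10.1.0.0/16"}, "action": "deny"},
    {"seq": 20, "prefixes": {"10.0.0.0/8", "10.1.0.0/16"}, "action": "permit"},
]

def apply_route_map(prefix):
    for entry in ROUTE_MAP:
        if prefix in entry["prefixes"]:
            return entry["action"], entry["seq"]  # first match wins; stop scanning
    return "deny", None  # routes matching no entry are denied
```

Note that 10.1.0.0/16 is denied by sequence 10 even though sequence 20 would permit it, because scanning stops at the first match.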



5-121 Using Route Maps in BGP Route Advertisements



5-122 BGP Feature: Allow AS-In

By default, the BGP process denies routes received from a peer if the route has its own
autonomous system number (ASN) in the route update. The receiving gateway considers this route
an internal route and denies it to avoid routing loops.
However, this scenario can be valid when a single customer has two locations interconnected to
the same service provider. In this case, the BGP neighbors are configured with the allowas-in
option to receive routes with the same ASN.
In the example, Company X has two locations: Site-A and Site-B. Both sites belong to AS 64511.
Both sites are connected to an ISP.
Router 1 (RT1) advertises network prefix x.x.x.x to the ISP. The ISP gateway, in turn, advertises
x.x.x.x to RT2.
The ASNs (AS path) are recorded in the advertised prefix as it traverses each AS. By default, RT2
does not accept the BGP-advertised prefix x.x.x.x because RT2 sees its own AS number 64511 in
it. However, x.x.x.x is a legitimate network residing at another site (Site-A). In this scenario, the
network administrator can enable the allowas-in option so that RT2 accepts the route
advertisement for x.x.x.x even though its own ASN 64511 is in the AS path.
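
The loop check described above can be sketched as follows. The ISP ASN 64496 is an assumption for illustration (both 64496 and 64511 fall in the range reserved for documentation).

```python
# Sketch (not NSX code) of BGP AS_PATH loop prevention: a speaker rejects an
# update whose AS_PATH already contains its own ASN unless allowas-in is set.
def accept_update(as_path, local_asn, allowas_in=False):
    if local_asn in as_path and not allowas_in:
        return False  # own ASN seen in the path: assume a loop and deny
    return True

# RT2 (AS 64511) receives prefix x.x.x.x with AS_PATH [64496, 64511],
# recorded as the route traversed the ISP and Site-A.
```

Without allowas-in, `accept_update([64496, 64511], 64511)` returns False; with the option enabled, the same update is accepted.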



5-123 BGP Feature: Multipath Relax

The multipath relax feature is available beginning with NSX-T Data Center 2.4.
For application load-balancing purposes, the same prefix should be advertised from multiple BGP
gateways. From the perspective of other devices, this prefix includes BGP paths with different
AS_PATH attribute values but the same AS_PATH attribute lengths.
BGP implementations support load-sharing over the above-mentioned paths. This feature is
sometimes known as multipath relax or multipath multiple-AS and enables equal-cost multipath
(ECMP) routing across different neighboring ASNs, if all other attributes (weight, local
preference, and so on) are equal.
In the diagram, the network prefix 200.1.1.0/24 in AS 100 is advertised by RT1 to its peers: RT2
in AS 200 and RT3 in AS 300.
RT2 and RT3 both advertise the prefix 200.1.1.0/24 to RT4 in AS 400.
RT4 can reach the 200.1.1.0/24 network through two paths: one through AS 200 and the other
through AS 300. Both of these paths have equal cost and path length.



Without multipath relax, RT4’s BGP process chooses only one path to reach the remote 200.1.1.0/24
network. With ECMP and multipath relax enabled, RT4 uses both paths, resulting in a more
balanced traffic load.
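
The grouping that multipath relax enables can be sketched as follows; all other attributes (weight, local preference, and so on) are assumed equal, so only the AS_PATH is compared.

```python
# Sketch of BGP multipath selection. Classic ECMP requires identical
# AS_PATHs; multipath relax (multipath multiple-AS) only requires equal
# AS_PATH length.
def ecmp_paths(paths, multipath_relax=False):
    shortest = min(len(p["as_path"]) for p in paths)
    candidates = [p for p in paths if len(p["as_path"]) == shortest]
    if multipath_relax:
        return candidates  # equal AS_PATH length is enough
    # Without relax, only paths sharing the exact AS_PATH can be combined.
    return [p for p in candidates if p["as_path"] == candidates[0]["as_path"]]

# RT4's two candidate paths to 200.1.1.0/24 from the diagram:
rt4_paths = [
    {"via": "RT2", "as_path": [200, 100]},
    {"via": "RT3", "as_path": [300, 100]},
]
```

With the default behavior only one of RT4's paths is installed; with multipath relax, both equal-length paths are used for load sharing.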



5-124 Internal BGP Support

iBGP and two related options, set local-preference and set next-hop-self, are
supported. These settings offer flexibility in exchanging routing information between logical space
and the fabric.



5-125 About Inter-SR Routing

Inter-SR routing is a new feature available with the NSX-T Data Center 2.4 release.



5-126 Inter-SR Routing Characteristics



5-127 Inter-SR Routing Example (1)



5-128 Inter-SR Routing Example (2)



5-129 Inter-SR Routing Example (3)



5-130 Lab: Configuring the Tier-0 Gateway



5-131 Review of Learner Objectives



5-132 ECMP and High Availability



5-133 Learner Objectives



5-134 About Equal-Cost Multipath Routing



5-135 Enabling ECMP



5-136 Edge Node High Availability

Grouping edge nodes offers the benefits of high availability for edge node services. The service
router runs on an edge node and has two modes of operation: active-active or active-standby.
Active-active mode is offered on NSX Edge:

• Logical routing is active on more than one NSX Edge node at a time.

• This mode is supported only on Tier-0 Gateways.

Active-standby mode is also offered on NSX Edge:

• Logical routing is active on only one NSX Edge node at a time.

• This mode is supported on Tier-0 and Tier-1 Gateways.

A maximum of 10 edge nodes can be grouped in a cluster.



5-137 Tier-0 Gateway Active-Active Mode

Active-active is a high availability mode where a gateway is hosted on more than one edge node at
a time:

• When one node fails, traffic is not disrupted, but bandwidth is constrained.

• High availability can be optionally enabled during the creation of a Tier-0 Gateway.

• By default, the active-active mode is used. In the active-active mode, traffic is load-balanced
across all members.

Stateful services such as NAT and firewall cannot be used in this mode.



5-138 Tier-0 Gateway Active-Standby Mode

Active-standby is a high availability mode where a gateway is operational on only a single edge
node at a time.
This mode is required when stateful services are enabled. Stateful services typically require
tracking of connection state, for example, sequence number check. As a result, traffic for a given
session needs to go through the same edge node.
Active-standby mode is supported on both Tier-1 and Tier-0 service routers (SRs).
With active-standby mode, the standby Tier-0 SR acts as a hot standby: its state is synchronized,
but it does not actively forward traffic. Both SRs maintain BGP peering with the physical
gateway.
In active-standby mode, all traffic is processed by an elected active member. If the active member
fails, a new member is elected to be active.
For Tier-1, active-standby SRs have the same IP addresses northbound.
For Tier-0, active-standby SRs have different IP addresses northbound and have eBGP sessions
established on both links.



5-139 Failure Conditions and Failover Process (1)

BFD is a network protocol used to detect faults between two forwarding engines connected by a
link. Failures are detected on a per-logical router basis. The conditions used to declare an edge
node down are the same in active-active and active-standby high availability modes.
To ensure uninterrupted routing of network traffic, the NSX Edge nodes exchange keepalive
messages, which are BFD sessions running between the nodes. Edge nodes in an edge cluster
exchange BFD keepalives on the management and tunnel interfaces. When the standby Tier-0
Gateway fails to receive keepalives on both management and tunnel interfaces, it announces itself
as active.



5-140 Failure Conditions and Failover Process (2)

The BFD protocol provides fast detection of failure for forwarding paths or forwarding engines,
improving convergence. Edge VMs support BFD with a minimum BFD timer of one second with
three retries, providing a three-second failure detection time. Bare-metal edges support BFD with a
minimum BFD Tx/Rx timer of 300 milliseconds with three retries, which implies a 900-millisecond
failure detection time.
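
The detection times quoted above follow from a simple product, which can be checked as follows:

```python
# BFD failure detection time = minimum Tx/Rx interval x number of retries.
def bfd_detection_ms(interval_ms, retries=3):
    return interval_ms * retries

edge_vm_ms = bfd_detection_ms(1000)    # Edge VM: 1-second timer, 3 retries
bare_metal_ms = bfd_detection_ms(300)  # bare-metal edge: 300 ms timer, 3 retries
```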



5-141 Failure Conditions and Failover Process (3)

If an active gateway loses all its BGP neighbor sessions and a standby gateway is configured,
failover occurs. An active SR on an edge node is declared down when all eBGP sessions on the
peer SR are down.
This scenario is only applicable on Tier-0 with dynamic routing.
eBGP is configured on the uplink between each NSX Edge node and the exterior physical
gateways.
eBGP status is also monitored during the keepalive exchanges.
The default keepalive interval is 60 seconds. The minimum time between advertisements is 30
seconds.
If all overlay tunnels to the compute hypervisors are down, the active edge node is not receiving
any tunnel traffic from compute hypervisors. Then the standby edge node takes over.



5-142 Edge Node Failback Modes



5-143 Lab: Verifying Equal Cost Multipathing Configurations



5-144 Review of Learner Objectives



5-145 Key Points (1)



5-146 Key Points (2)



Module 6
NSX-T Data Center Logical Bridging



6-2 Importance



6-3 Learner Objectives



6-4 Logical Bridging Use Cases

When an NSX-T Data Center segment requires a layer 2 connection to a VLAN-backed port group
or needs to reach another device, such as a gateway that resides outside of the NSX-T Data
Center deployment, you can use an NSX-T Data Center layer 2 bridge.
The main benefits of logical bridging are as follows:

• It provides high throughput using Data Plane Development Kit (DPDK)-based physical-to-virtual bridging on edge nodes.

• It introduces firewall capability on the edge bridge.

• In a KVM-only environment, layer 2 bridging can only be configured using edge nodes.



6-5 Routing and Bridging for Physical-to-Virtual
Communication

When connecting your physical workloads on traditional physical networks to a virtualized
environment, you can use routers running standard routing protocols to route traffic between
workloads in the two environments.
If routing is not an option, and you have to place your physical and virtual devices on a single
layer 2 subnet, you can enable bridging.



6-6 Virtual-to-Physical Routing Example



6-7 Virtual-to-Physical Bridging Example

This diagram demonstrates how the physical server and the App1 server (virtual) can exist on the
same subnet.



6-8 Logical Bridging Overview

The NSX-T Data Center components that are key to layer 2 bridging are bridge clusters, bridge
endpoints, and bridge nodes:

• A bridge cluster is a group of bridge nodes that can provide redundancy.

• A bridge node is a transport node that does bridging.

• A bridge endpoint identifies the physical attributes of the bridge, such as the bridge cluster ID
and the associated VLAN ID. Each segment that is used for bridging a virtual and physical
deployment has an associated VLAN ID.



The diagram provides an example:

• The NSX bridge node is a transport node that belongs to a bridge cluster. The segment is
attached to a bridge cluster and is called a bridge-backed segment. To be eligible for bridge
backing, a segment must be in an overlay transport zone, not in a VLAN transport zone.

• The NSX bridge node (on the left) and the NSX transport node (in the middle) are part of the
same overlay transport zone. As such, their N-VDS are attached to the same bridge-backed
segment.

• The bridge-backed segment has a VNI 10500, which is mapped to VLAN 150. The NSX
bridge node has VLAN 150 and VNI 10500 configured.

• The NSX transport node is not a part of the bridge cluster. It is a normal transport node, which
can be a KVM or ESXi host. In the diagram, VM1 residing on this node is attached to the
bridge-backed segment.

• The other node on the right is not a part of the NSX-T Data Center overlay. It might be any
hypervisor with VM2 or it might be a physical network node. If the node that is not part of
NSX-T Data Center is an ESXi host, you can use a standard virtual switch or a vSphere
distributed switch for the port attachment. But the VLAN ID associated with the port
attachment must match the VLAN ID on the bridge-backed segment. Also, communication
occurs over layer 2, so the two end devices must have IP addresses in the same subnet
(172.16.30.0 network).

To use ESXi host transport nodes for bridging, you create a bridge cluster. To use NSX Edge
transport nodes for bridging, you create a bridge profile.



6-9 Creating a Bridge Cluster

A bridge cluster is a collection of ESXi host transport nodes that can provide layer 2 bridging to a
segment. A bridge cluster can have a maximum of two ESXi host transport nodes as bridge nodes.
With two bridge nodes, a bridge cluster provides high availability in active-standby mode. Even if
you have only one bridge node, you must create a bridge cluster. After creating the bridge cluster,
you can add an additional bridge node.
VMware recommends that bridge nodes do not include hosted VMs.
You can add a transport node to only one bridge cluster. You cannot add the same transport node
to multiple bridge clusters.



6-10 Logical Bridging on NSX Edge Nodes

You can configure a bridge-backed logical switch to provide layer 2 connectivity between VMs in
an NSX-T Data Center overlay and devices that are outside of NSX-T Data Center.



6-11 Benefits of Configuring Logical Bridging on NSX Edge
Nodes

In a KVM-only environment, layer 2 bridging can only be configured using NSX Edge nodes.



6-12 Bridge Profiles on NSX Edge Nodes



6-13 Using Multiple Bridge Profiles on NSX Edge Nodes

Preemption is an action taken by the preferred node. If the preferred node fails and recovers, it
demotes its peer and becomes the active node. The peer changes its state to standby.



6-14 Creating an Edge Bridge Profile



6-15 Configuring a Layer 2 Bridge-Backed Logical Switch

Before configuring a bridge-backed logical switch, you should verify the following components:

• A bridge cluster or a bridge profile.

• At least one ESXi or KVM host to serve as a regular transport node. This node hosts VMs that
require connectivity with devices outside of a NSX-T Data Center deployment.

• A VM or another end device outside of the NSX-T Data Center deployment. This end device
must be attached to a VLAN port matching the VLAN ID of the bridge-backed logical switch.

• One logical switch in an overlay transport zone to serve as the bridge-backed logical switch.

You can attach the logical switch to a bridge cluster or a bridge profile.
After completing the configuration of the bridge-backed logical switch, you can connect VMs to
the switch, if they are not already connected. The VMs must be on transport nodes in the same
transport zone as the bridge cluster or bridge profile.



You can test the functionality of the bridge by sending a ping from the NSX-T Data Center
internal VM to a node that is external to NSX-T Data Center.



6-16 Monitoring the Bridged Traffic Statistics



6-17 Review of Learner Objectives



6-18 Key Points



Module 7
NSX-T Data Center Services



7-2 Importance



7-3 Module Lessons



7-4 Configuring NAT



7-5 Learner Objectives



7-6 About NAT

Network address translation (NAT) was designed originally to conserve public internet address
space. During the 1990s, Internet providers quickly depleted the available IPv4 address supply.
NAT became the primary method for IPv4 address conservation. NAT performs one-to-one
mapping (one public IP address is mapped to one private IP address) or one-to-many mapping
(one public IP address is mapped to multiple private IP addresses).
You can create different NAT rules:

• Source NAT (SNAT) translates the source IP of the outbound packets to a known public IP
address so that the application can communicate with the outside world without using its
private IP address. SNAT also keeps track of the reply.

• Destination NAT (DNAT) enables access to internal private IP addresses from the outside
world by translating the destination IP address when inbound communication is initiated.
DNAT also takes care of the reply. For both SNAT and DNAT, users can apply NAT rules
based on 5-tuple match criteria.

• Reflexive NAT rules are stateless access control lists (ACLs) that must be defined in both
directions. These rules do not keep track of the connection. Reflexive NAT rules are applied
when stateful NAT cannot be used. For example, when a Tier-0 logical router is running in
active-active equal-cost multipath (ECMP) mode, you cannot configure stateful NAT because
asymmetrical paths might cause issues.

Whenever NAT is enabled, a service router (SR) component must be instantiated on an edge
cluster.
To configure NAT, specify the edge cluster where the service should run. You can also configure
the NAT service on a specific edge node pair.
If no specific edge node is identified, the platform performs auto-placement of the services
component on an edge node in the cluster using a weighted round-robin algorithm.
Using the No SNAT rule disables source NAT. This rule applies to matching traffic in the
outbound direction.
Using the No DNAT rule disables destination NAT. This rule applies to matching traffic in the
inbound direction.
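
The stateful SNAT behavior described above can be sketched as follows. This is a minimal illustration, not NSX code; the addresses, ports, and port-allocation scheme are assumptions for the example.

```python
# Stateful SNAT: the source IP of outbound packets is rewritten to a public
# address, and the translation is recorded so reply traffic can be mapped back.
class StatefulSnat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.sessions = {}      # (public ip, public port) -> (private ip, port)
        self.next_port = 30000  # hypothetical port-allocation scheme

    def outbound(self, private_ip, private_port):
        public = (self.public_ip, self.next_port)
        self.sessions[public] = (private_ip, private_port)  # track for the reply
        self.next_port += 1
        return public           # translated source seen by the outside world

    def inbound(self, public_ip, public_port):
        # Reply traffic: restore the original private source from recorded state.
        return self.sessions.get((public_ip, public_port))

snat = StatefulSnat("80.80.80.1")
translated = snat.outbound("172.16.10.11", 54321)
```

The recorded session is exactly what "SNAT also keeps track of the reply" refers to: without it, inbound reply packets could not be mapped back to the private address.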



7-7 About SNAT



7-8 About DNAT



7-9 Reflexive NAT (Stateless NAT)

When a Tier-0 or Tier-1 logical router (LR) is running in active-active mode, you cannot configure
stateful NAT because asymmetrical paths might cause issues. For active-active routers, you can use
reflexive NAT, which is sometimes called stateless NAT.
For reflexive NAT, you can configure a single source address to be translated or a range of
addresses. If you configure a range of source addresses, you must also configure a range of
translated addresses. The size of the two ranges must be the same. The address translation is
deterministic, meaning that the first address in the source address range is translated to the first
address in the translated address range, the second address in the source range is translated to the
second address in the translated range, and so on.
In the diagram, the source VM (172.16.10.11) on the inside network sends a packet to an outside
client (x.x.x.x) on the Internet. The packet is routed to the Tier-0 Gateway hosted on NSX Edge
Node 1, which creates a reflexive NAT entry: source IP 172.16.10.11 and translated IP 80.80.80.1.
When the return traffic arrives (with the destination 80.80.80.1), the same reflexive NAT entry is
used to translate 80.80.80.1 back to 172.16.10.11.
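
The deterministic range mapping described above can be sketched as follows; the ranges are illustrative, not the course lab configuration.

```python
import ipaddress

# Reflexive (stateless) NAT range translation: the n-th address of the source
# range always maps to the n-th address of the translated range, so no
# per-session state is needed and the reverse mapping works the same way.
def reflexive_translate(addr, from_net, to_net):
    src = ipaddress.ip_network(from_net)
    dst = ipaddress.ip_network(to_net)
    offset = int(ipaddress.ip_address(addr)) - int(src.network_address)
    return str(ipaddress.ip_address(int(dst.network_address) + offset))
```

Because the mapping is purely positional, either edge node in an active-active pair computes the same translation for a given address, which is why asymmetric return paths are harmless here.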



7-10 Configuring SNAT and DNAT



7-11 Configuring the No SNAT Rule



7-12 Configuring the No DNAT Rule

You can configure the following parameters:

• Name: Provide a name for the NAT rule.

• Action: Specify the action of the NAT rule if a match occurs. The No DNAT action disables
the DNAT rule, in which case the original destination address in the IP packet is not
translated.

• Source IP: Specify a source IP address or an IP address range in CIDR format. If you leave
this field blank, the NAT rule applies to all sources outside of the local subnet.

• Destination IP: Specify a destination IP address or an IP address range in CIDR format.

• Translated IP: The new IP address as the result of network address translation.

• Service: Single service entry on which the NAT rule is applied.



• Firewall: Includes the options Match External Address, Match Internal Address, and
Bypass.

• Applied To: Select objects that this NAT rule applies to. The available objects are Tier-0
Gateways, interfaces, labels, service instance endpoints, and virtual endpoints.



7-13 Configuring Reflexive NAT



7-14 NAT Packet Flow Logical Topology



7-15 NAT Packet Flow (1)



7-16 NAT Packet Flow (2)



7-17 NAT Packet Flow (3)



7-18 NAT Packet Flow (4)



7-19 NAT Packet Flow (5)



7-20 NAT Packet Flow (6)



7-21 NAT Packet Flow (7)



7-22 NAT Packet Flow (8)



7-23 NAT Packet Flow (9)



7-24 NAT Packet Flow (10)



7-25 NAT Packet Flow (11)



7-26 Lab: Configuring Network Address Translation



7-27 Review of Learner Objectives



7-28 Configuring DHCP and DNS Services



7-29 Learner Objectives



7-30 About DHCP Services



7-31 DHCP Architecture



7-32 DHCP Use Cases



7-33 DHCP Workflow



7-34 Creating the DHCP Server



7-35 Configuring the DHCP Server on the Tier-1 Gateway



7-36 Configuring the Subnet on the Segment



7-37 Editing Segments



7-38 Viewing the DHCP Server Status



7-39 DHCP Configuration Details: Advanced UI

A DHCP server profile specifies an NSX Edge cluster or members of an NSX Edge cluster. A
DHCP server with this profile services DHCP requests from VMs on logical switches that are
connected to the NSX Edge nodes specified in the profile.



7-40 DHCP Server Router Ports on Tier-1 Gateways



7-41 DHCP Server and IP Pool Information in the Advanced
UI



7-42 DHCP Relay

On the DHCP configuration page of the simplified UI, you can create DHCP servers to handle
DHCP requests and create DHCP relay services to relay DHCP traffic to external DHCP servers.



7-43 Configuring the DHCP Relay Server on Tier-1
Gateways

To edit or create a new Tier-1 Gateway with a DHCP relay server, you click IP Address
Management to set the server type to DHCP Relay Server.
The DHCP relay server forwards the DHCP IP requests to the external DHCP server.
The DHCP relay server can be configured on Tier-1 or Tier-0 Gateways.



7-44 Configuring Segments with Gateway and DHCP IP
Address Ranges



7-45 Local and Remote DHCP Server Configuration

The slide shows that the Tier-1 Gateway, T1-LR-1, is attached to the local DHCP server, whereas
the Tier-1 Gateway, T2-LR-2, is configured to relay DHCP requests to the external DHCP server.



7-46 About DNS Services



7-47 About DNS Forwarder



7-48 DNS Forwarder Benefits



7-49 Configuring DNS Services and DNS Zones (1)

The slide shows how to create a DNS forwarder service in NSX-T Data Center from the simplified
UI.
To create a DNS forwarder, you perform the following steps:

• Select Networking > IP Address Management > DNS.

• Select the DNS Services tab and enter details for the following fields:

– Name
– Tier0/Tier1 Gateway
– DNS Service IP: This IP address is the listen IP address for DNS requests.
– Default Zone: DNS client requests are forwarded to the default zone unless configured to
relay the request to the conditional zone. The default zone contains the following
parameters.

• Zone Name: Enter the name for the DNS forwarder.



• Domain: The default value is ANY, which makes this zone the default DNS forwarder
for all domains except those specified in an FQDN zone.

• DNS Servers: Enter the upstream DNS server IP address.

• Source IP: Enter the IP address that the DNS forwarder uses as the source IP
address to send the DNS query to external DNS servers and receive the DNS reply
from external DNS servers.

• Description: Enter an optional description for the DNS forwarder.

• Tags: Enter optional tags for the DNS forwarder.


– Select the menu icon next to the FQDN Zones box. Configure the FQDN zones by
entering the following details:

• Name: Enter the name for the DNS forwarder.

• Domain Name: Enter the Fully Qualified Domain Name (FQDN).

• DNS Servers: Enter the upstream DNS server IP address.

• Source IP: Enter the IP address that the DNS forwarder uses as the source IP address
to send the DNS query to external DNS servers and receive the DNS reply from
external DNS servers.

• Description: Enter an optional description for the DNS forwarder.

• Tags: Enter optional tags for the DNS forwarder.


If the FQDN zone is configured with the domain name vclass.local, any client request for the
vclass.local domain is forwarded to the DNS server specified in the FQDN zone. All other DNS
requests are serviced by the DNS server specified in the default zone.
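
The zone selection described above can be sketched as follows; the upstream server IPs are placeholders, not the course lab values.

```python
# A query whose name falls under a configured FQDN zone is relayed to that
# zone's upstream servers; all other queries use the default zone.
FQDN_ZONES = {"vclass.local": ["192.168.110.10"]}
DEFAULT_ZONE_SERVERS = ["8.8.8.8"]

def upstream_servers(query_name):
    for domain, servers in FQDN_ZONES.items():
        # Match the zone apex itself or any name beneath it.
        if query_name == domain or query_name.endswith("." + domain):
            return servers  # conditional forwarding to the FQDN zone
    return DEFAULT_ZONE_SERVERS
```

For example, a query for app.vclass.local is relayed to the FQDN zone's server, while a query for www.vmware.com goes to the default zone.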



7-50 Configuring DNS Services and DNS Zones (2)

When you configure DNS services, you can use the simplified UI to create DNS zones.
You create DNS zones on the DNS Zones tab. You can use these zones when you configure the
DNS forwarder.



7-51 Verifying the DNS Forwarder

Run the get dns-forwarder status command to query the status of the DNS forwarder
service running on the NSX Edge node.
Run the get dns-forwarder config command to query the DNS forwarder configuration
on the NSX Edge node.
High availability for the DNS forwarder is active-standby, provided by the NSX Edge cluster.



7-52 Lab: Configuring the DHCP Server on the NSX Edge
Node



7-53 Review of Learner Objectives



7-54 Configuring Load Balancing



7-55 Learner Objectives



7-56 Load Balancing Use Cases



7-57 Layer 4 Load Balancing



7-58 Layer 7 Load Balancing



7-59 Load Balancer Architecture



7-60 Connecting to Tier-1 Gateways



7-61 Virtual Servers



7-62 About Profiles



7-63 About Server Pools



7-64 About Monitors



7-65 Relationships Among Load Balancer Components



7-66 Load Balancer Scalability (1)



7-67 Load Balancer Scalability (2)



7-68 Load Balancing Deployment Modes



7-69 Inline Topology



7-70 One-Arm Topology (1)



7-71 One-Arm Topology (2)



7-72 Load Balancing Configuration Steps



7-73 Creating Load Balancers

In the ADD LOAD BALANCER wizard, you provide the name of the load balancer, specify the
deployment size, and provide the Tier-1 Gateway to attach your load balancer to.
From this wizard, you can also select the Set Virtual Servers link to configure the virtual servers
for the load balancer you just created.



7-74 Creating Virtual Servers



7-75 Configuring Layer 4 Virtual Servers

When configuring a layer 4 virtual server, you provide values for the following parameters:

• Name

• Virtual IP address

• Ports: Port ranges are supported.

• Server Pool: This setting can be created from the wizard.

• Application Profile: This setting is populated by default based on the protocol type specified
when you created the virtual server.

• Persistence profile: Layer 4 virtual servers only support the Source IP option.
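
The Source IP persistence option mentioned above can be sketched as follows; the pool member names are placeholders.

```python
import hashlib

# Source IP persistence: the pool member is chosen by hashing the client's
# source IP, so repeat connections from the same client reach the same member
# for as long as the pool membership is unchanged.
POOL = ["web-01", "web-02", "web-03"]

def pick_member(source_ip, pool=POOL):
    digest = hashlib.sha256(source_ip.encode()).digest()
    # Use the first four bytes of the hash as a stable selector.
    return pool[int.from_bytes(digest[:4], "big") % len(pool)]
```

Because the selection depends only on the source IP and the pool, no per-session table is needed for this form of persistence.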



7-76 Configuring Layer 7 Virtual Servers

When configuring a layer 7 virtual server, you provide values for the following parameters:

• Name

• IP address

• Ports: Port ranges are not supported when configuring a layer 7 virtual server.

• Server Pool: This setting can be created from the wizard.

• Application Profile: This setting is populated by default based on the protocol type specified
when you create the virtual server.

• Persistence profile: Layer 7 virtual servers support both Source IP and Cookie persistence
options.

• SSL Configuration: You can configure SSL parameters on both the server and client side.

• Load Balancer Rules: Layer 7 virtual servers support the configuration of layer 7 rules.



7-77 Configuring Application Profiles



7-78 Configuring Persistence Profiles

Additional persistence profiles can be created based on source IP or cookies to suit your
application needs.



7-79 Layer 7 Load Balancer SSL Modes



7-80 Configuring Layer 7 SSL Profiles



7-81 Configuring Layer 7 Load Balancer Rules



7-82 Creating Server Pools

When configuring a server pool, you provide values for the following parameters:

• Name

• Load Balancing Algorithm

• Pool Members/Group: These members can be static or dynamic if configured as a group.

• SNAT Translation Mode

• Active Monitor
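The load balancing algorithm decides which pool member receives each new connection. The sketch below illustrates two common choices in simplified form; real pools also account for member weights and health state.

```python
# Sketch of two pool-selection algorithms a load balancer can use.
# Member structure is simplified; real pools also track weights and health.
from itertools import count

class RoundRobin:
    """Cycle through pool members in order."""
    def __init__(self, members):
        self.members = members
        self._n = count()

    def pick(self):
        return self.members[next(self._n) % len(self.members)]

def least_connections(members):
    """Pick the member currently serving the fewest connections."""
    return min(members, key=lambda m: m["connections"])

rr = RoundRobin(["web-01", "web-02"])
first, second, third = rr.pick(), rr.pick(), rr.pick()

lc = least_connections([{"name": "web-01", "connections": 5},
                        {"name": "web-02", "connections": 2}])
```

Round robin spreads connections evenly regardless of load, while least connections favors the member that is currently least busy.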



7-83 Configuring Load Balancing Algorithms



7-84 Configuring SNAT Translation Modes



7-85 Configuring Active Monitors



7-86 Configuring Passive Monitors



7-87 Lab: Configuring Load Balancing



7-88 Review of Learner Objectives



7-89 IPSec VPN



7-90 Learner Objectives



7-91 NSX-T Data Center VPN Services



7-92 IPSec VPN Use Cases

The IPSec VPN secures traffic flowing between two networks connected over a public network
through IPSec gateways called endpoints. NSX Edge supports site-to-site IPSec VPN between an
NSX Edge instance and remote IPSec-capable gateways.



7-93 IPSec VPN Methods

IPSec VPN tunnel packets can use the following headers:

• Authentication header (AH)

• Encapsulating security payload header (ESP)

The difference between the two headers is that the authentication header provides integrity and
origin authentication but no encryption, whereas the encapsulating security payload header can
also encrypt the protected payload.



7-94 IPSec VPN Modes



7-95 IPSec VPN Protocols and Algorithms



7-96 IPSec VPN Certificate-Based Authentication



7-97 IPSec VPN Dead Peer Detection



7-98 IPSec VPN Types



7-99 IPSec VPN Deployment Considerations

IPSec VPN is not supported in the NSX-T Data Center limited export release.



7-100 IPSec VPN High Availability

The video demonstrates how IPSec VPN high availability works.



7-101 IPSec VPN Scalability

The table shows the various sizes of NSX Edge nodes and their supported VPN sessions.



7-102 IPSec VPN Configuration Steps



7-103 Configuring an IPSec VPN Service

You configure the following settings for an IPSec service:

• Name: You enter a name for the IPSec service.

• Tier-0 Gateway: From the Tier-0 Gateway drop-down menu, you can select a Tier-0
Gateway to associate with this IPSec VPN service.

• IKE Log Level: This setting is for VPN service logging. The Internet Key Exchange (IKE)
logging level determines the amount of information you want collected for the IPSec VPN
traffic. The default is set to the Info level.

• Admin Status: This setting enables or disables the IPSec VPN service. By default, the value
is set to Enabled, which means the service is enabled on the Tier-0 Gateway.

• Tags: You enter a value for tags if you want to include this service in a tag group.



7-104 Configuring DPD Profiles

To configure a DPD profile, you select values for the following options:

• Name: You use the name to identify the service.

• DPD Probe Interval (sec): You provide a value in seconds that defines how often a DPD
probe packet is sent to verify that the peer is still reachable.

• Admin Status: This setting enables or disables the profile.

• Tags: For cloud-based installations, almost every entity can hold a tag.



7-105 Configuring IKE Profiles

You specify the following settings to configure an IKE profile:

• Name: You use the name to identify the service.

• IKE Version: The options are IKE V1, IKE V2, or IKE FLEX. The selection depends on
your business requirements.

• Encryption Algorithm: This setting specifies the level of encryption to secure the
communication.

• Digest Algorithm: You select the available digest algorithm.

• SA Lifetime (sec): The lifetime (in seconds) of the security associations (individual
communicating peer identifiers) after which a renewal is required.

• Tags: For cloud-based installations, almost every entity can hold a tag.
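An IKE profile is essentially a small bundle of negotiation parameters that must be valid and mutually agreed on by both peers. The sketch below models it as a validated configuration object; the allowed values and field names are a representative illustration, not the authoritative lists from the NSX-T documentation.

```python
# Sketch of an IKE profile as a validated configuration object. Allowed
# values below are illustrative, not the complete set NSX-T supports.

IKE_VERSIONS = {"IKE_V1", "IKE_V2", "IKE_FLEX"}

def make_ike_profile(name, version, encryption, digest, sa_lifetime_sec):
    """Build an IKE profile dictionary, rejecting obviously bad input."""
    if version not in IKE_VERSIONS:
        raise ValueError(f"unsupported IKE version: {version}")
    # The SA lifetime is in seconds; when it expires, the security
    # association must be renegotiated.
    if sa_lifetime_sec <= 0:
        raise ValueError("SA lifetime must be a positive number of seconds")
    return {
        "display_name": name,
        "ike_version": version,
        "encryption_algorithms": [encryption],
        "digest_algorithms": [digest],
        "sa_life_time": sa_lifetime_sec,
    }

profile = make_ike_profile("ike-prof", "IKE_V2", "AES_128", "SHA2_256", 86400)
```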



7-106 Configuring IPSec Profiles

You provide values for the following settings to configure the IPSec profile:

• Name: You use the name to identify the service.

• Encryption Algorithm: This setting specifies the level of encryption to secure the
communication.

• Digest Algorithm: You select the available digest algorithm.

• PFS Group: This setting specifies the Perfect Forward Secrecy (PFS) group, which adds
protection to the keys used for building secure channels. You can enable or disable this
option.

• Diffie-Hellman: This setting is an additional security algorithm for establishing a secret key-
exchange channel.

• SA Lifetime (sec): The setting specifies the lifetime (in seconds) of the security associations
(individual communicating peer identifiers) after which a renewal is required.



• DF Bit: This setting defines whether the DF (don't fragment) bit is copied from the inner
payload to the outer encrypted packet.

• Tags: For cloud-based installations, almost every entity can hold a tag.



7-107 Configuring Local Endpoints

To configure the local endpoints, you provide values for the following options:

• Name: You use this name to identify the service.

• VPN Service: This setting specifies which IPSec VPN service to use with the endpoint.

• IP Address: This setting is for the local IP address.

• Site Certificate: You use this setting with certificate-based authentication to specify which
certificate to use with this endpoint.

• Local ID: This setting specifies the IPsec ID of the local side. The local ID is usually the
same as the local IP address.

• Tags: For cloud-based installations, almost every entity can hold a tag.



7-108 Configuring IPSec VPN Sessions (1)



7-109 Configuring IPSec VPN Sessions (2)

To configure the policy-based IPSec session, you specify the following settings:

• Name: You use the name to identify the service.

• Type: This setting is already defined by the previous selection.

• VPN Service: This setting is the predefined service to use with this session.

• Local Endpoint: This setting is the earlier configured local endpoint for use with this
configuration session.

• Remote IP: The setting specifies the IP address of the remote IPSec-capable gateway for
building the secure connection.

• Authentication Mode: This setting defines whether to use the preshared key (PSK) or
certificate-based connection authentication.

• Local Networks and Remote Networks: These settings define the interesting traffic that
should be encrypted through this VPN session.



• Pre-shared Key: This setting specifies the string to define the key if the authentication mode
is PSK.

• Remote ID: This setting defines the identifier of the remote peer for verifying the authenticity
of the peering.

• Tags: For cloud-based installations, almost every entity can hold a tag.
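A policy-based session ties together the service, local endpoint, peer address, and the "interesting traffic" subnets. The sketch below assembles those settings into one object; field names are illustrative approximations of the UI labels, not the exact Policy API schema.

```python
# Sketch of a policy-based IPSec session definition combining the settings
# listed above. Field names are illustrative, not the real API schema.

def make_policy_based_session(name, service, local_endpoint, peer_ip,
                              local_nets, remote_nets,
                              auth_mode="PSK", psk=None):
    """Build a policy-based IPSec VPN session dictionary."""
    if auth_mode == "PSK" and not psk:
        raise ValueError("a pre-shared key is required when auth mode is PSK")
    return {
        "display_name": name,
        "vpn_service": service,
        "local_endpoint": local_endpoint,
        "peer_address": peer_ip,
        # Interesting traffic: only flows between these subnets are
        # encrypted through this session.
        "local_networks": local_nets,
        "remote_networks": remote_nets,
        "authentication_mode": auth_mode,
        "psk": psk,
    }

session = make_policy_based_session(
    "site-a-to-b", "ipsec-svc", "le-1", "203.0.113.10",
    ["172.16.10.0/24"], ["192.168.20.0/24"], psk="VMware1!")
```

A route-based session would look similar but replace the local/remote network lists with a virtual tunnel interface (VTI) address, with routing deciding what enters the tunnel.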



7-110 Configuring IPSec VPN Sessions (3)



7-111 Configuring IPSec VPN Sessions (4)

To configure a route-based IPSec session, you define the following settings:

• Name: You use the name to identify the service for later use.

• Type: This setting is already defined by the previous selection.

• VPN Service: This setting is the predefined service to use with this session.

• Local Endpoint: This setting defines an earlier configured local endpoint to use with this
session configuration.

• Remote IP: This setting defines the IP address of the remote IPSec-capable gateway for
building the secure connection.

• Authentication Mode: This setting defines whether to use a preshared key (PSK) or
certificate-based connection authentication.

• Admin Status: This setting enables and disables the service.



• Tunnel Interface: This setting defines the IP address of the local virtual tunnel interface
(VTI) that is created to use with this session.

• Pre-shared Key: This setting provides the string for defining the key if the authentication
mode is PSK.

• Remote ID: This setting specifies the identifier of the remote peer for verifying the
authenticity of the peering.

• Tags: For cloud-based installations, almost every entity can hold a tag.



7-112 Review of Learner Objectives



7-113 L2 VPN



7-114 Learner Objectives



7-115 L2 VPN Use Cases



7-116 L2 VPN in NSX-T Data Center

In previous releases, L2 VPN services were supported only between an NSX-T Data Center L2
VPN server and an NSX Data Center for vSphere managed edge or a standalone edge.
NSX-T Data Center 2.4 adds support for an NSX-T managed edge acting as the L2 VPN client.



7-117 L2 VPN Deployment Considerations

The L2 VPN function is not supported in the NSX-T Data Center limited export release.



7-118 L2 VPN Hub-and-Spoke Topology



7-119 L2 VPN Packet Format



7-120 L2 VPN Edge Packet Flow

For outbound L2 VPN traffic (traffic from the internal network behind the edge node) that is
destined for a remote L2 network, the first step is to decapsulate the GENEVE frames. The
destination address of the inner frame determines whether the traffic is handled locally or goes
through the local bridge port toward the remote site. The edge then inserts the proper VLAN ID
and sends the traffic to the local VTI interface, where it is encapsulated in GRE, protected by
IPSec, and forwarded to its destination.
In the inbound direction, L2 VPN traffic identified as such by the IPSec engine is first decrypted
and then GRE-decapsulated. After passing through the bridge interface, the traffic is sent to the
local networks. The required GENEVE encapsulation parameters are based on the actual tunnel
IDs for the traffic.



7-121 L2 VPN Scalability



7-122 L2 VPN Server Configuration Steps



7-123 Configuring the L2 VPN Server (1)



7-124 Configuring the L2 VPN Server (2)

To configure the L2 VPN sessions, you specify the following settings:

• Name: You select a name to identify the session later.

• Mode: This option is already selected (Server).

• VPN Service: You select the L2 VPN service to use.

• Pre-shared Key: This setting is the key or password for this session.

• Tunnel Interface: You select the IP address of the VTI to use with this session.

• Remote ID: This setting designates the IPsec identifier of the remote IPSec gateway.

• Admin Status: This option is for enabling or disabling the profile.

• Tags: For cloud-based installations, almost every entity can hold a tag.



7-125 Configuring the L2 VPN Server (3)

When you configure the segments, the following settings are key:

• L2 VPN: This setting defines the previously configured L2 VPN session. The segment
defined is used through that session.

• VPN Tunnel ID: This number is used to identify the communicating local and remote L2
networks. The same ID on both sides means that they are on the same L2 broadcast domain.



7-126 Configuring the L2 VPN Server (4)



7-127 Supported L2 VPN Clients



7-128 L2 VPN Peer Compatibility Matrix



7-129 About Standalone Edge



7-130 About NSX-Managed Edge (NSX Data Center for vSphere)

The information presented on the slide is related to what is required on NSX Data Center for
vSphere to act as a peer for the NSX-T Data Center L2 VPN. For more detailed configuration
steps, see the NSX API Guide at https://docs.vmware.com/en/VMware-NSX-Data-Center-for-vSphere/6.4/nsx_64_api.pdf.



7-131 About NSX-Managed Edge (NSX-T Data Center)



7-132 Configuring the L2 VPN Managed Client (1)

The L2 VPN client configuration is similar to the server configuration.



7-133 Configuring the L2 VPN Managed Client (2)

Peer code is a Base64-encoded configuration string that is available from the L2 VPN server
through the DOWNLOAD CONFIG option or through a REST API call.
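Because the peer code is plain Base64, it can be produced and inspected with standard tooling. The sketch below round-trips a made-up configuration string; the content and structure are invented for illustration and do not reflect real NSX-generated peer code.

```python
# The peer code is Base64-encoded text, so standard tooling can inspect it.
# The configuration content below is made up, not real NSX output.
import base64
import json

sample_config = {"peer_ip": "203.0.113.10", "psk": "VMware1!"}
peer_code = base64.b64encode(json.dumps(sample_config).encode()).decode()

# An operator debugging a session can decode the string back:
decoded = json.loads(base64.b64decode(peer_code))
```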



7-134 Configuring the L2 VPN Managed Client (3)



7-135 Configuring the L2 VPN Managed Client (4)



7-136 Lab: Deploying Virtual Private Networks



7-137 Review of Learner Objectives



7-138 Key Points (1)



7-139 Key Points (2)



Module 8
NSX-T Data Center Security

Module 8: NSX-T Data Center Security 551


8-2 Importance



8-3 Module Lessons



8-4 NSX-T Data Center Micro-Segmentation



8-5 Learner Objectives



8-6 Traditional Data Center Security

Typically, in a traditional data center, certain high-level segmentation security policies are built to
prevent various types of workloads from communicating with other types of workloads. However,
this high-level segmentation does not prevent lateral communication between workloads within a
tier. When threats breach the perimeter, their lateral spread is very hard to stop. Shared services
can traverse boundaries unchecked. Most important, this security model is not aligned to
applications, which need to be protected.
The vast majority of attacks breach perimeter security systems by compromising just one machine.
More often than not, attackers exploit human beings, not technology, to breach the system.
Phishing emails and social engineering techniques are extremely effective in getting legitimate
credentials to a machine. Attackers go after low-priority systems first. After they are inside, they
move laterally through the data center, from machine to machine, to find the information they
want.



Most traditional data centers experience the following main challenges:

• Increased cost and complexity

• Lack of internal security controls

• Traditional security controls devoid of context

• Proliferation of devices without consistent security systems

Application-centric security policies and control are needed to address these challenges.



8-7 Data Center Security Requirements

The distributed firewall allows you to define and enforce network security policies for every
individual workload in the environment, whether a VM, a container, or a bare-metal server.
However, achieving this level of segmentation with physical or virtual firewall appliances is
expensive and operationally unrealistic: the number of rules you would have to manage alone
makes micro-segmentation of this kind impractical.
Conversely, NSX-T Data Center allows you to design and manage all of these policies from a
central location.
NSX-T Data Center micro-segmentation policies are also software-defined, making them agile and
capable of being automated.
A key advantage of the distributed firewall is context. You can use security groups and tags to
orchestrate policy. These security groups can be based on VM attributes such as name or operating
system, traditional network attributes such as IP address or port, and even higher-order application
attributes.



For instance, you can create a security group for all applications that must comply with the
Payment Card Industry Data Security Standard (PCI DSS). Newly created workloads that also
need to comply with PCI DSS can automatically inherit the appropriate security policies, making
policy management easier while maximizing security.
You might invest in third-party security services, such as next-generation firewalls (NGFW) with
a built-in intrusion protection system (IPS) and intrusion detection system (IDS), next-generation
AV systems, and so on. You can maximize your return on these solutions by giving them the same
level of control and visibility that NSX-T Data Center allows.
With network service insertion in NSX-T Data Center, network traffic can be dynamically routed
to your IPS or IDS systems and next-generation firewalls, effectively inserting these services into
your micro-segments. This way, you can be selective about the traffic that you route through these
services, increasing network efficiency and maximizing security.
With Guest Introspection, NSX-T Data Center can also help maximize efficiency on your
workloads. Using Guest Introspection, AV solutions can be offloaded to a service VM and
hypervisor, removing the need for AV agents on every workload. When an AV solution finds a
threat, it can use its integration with NSX-T Data Center to respond: NSX-T Data Center firewall
policies can be generated to quarantine a compromised VM, or NSX-T Data Center can insert a
third-party service to respond, using its service insertion capabilities.



8-8 Micro-Segmentation in NSX-T Data Center

Micro-segmentation enables an organization to logically divide a data center into distinct security
segments down to the individual workload level, and to define distinct security controls for, and
deliver services to, each unique segment.
A central benefit of micro-segmentation is its ability to deny attackers the opportunity to pivot
laterally within the internal network, even after the perimeter is breached. NSX-T Data Center
micro-segmentation prevents the lateral spread of threats across an environment.
NSX-T Data Center supports micro-segmentation because it allows for a centrally controlled,
operationally distributed firewall to be attached directly to workloads within an organization’s
network. The distribution of the firewall for applying security policies that protect individual
workloads is highly efficient.
You can apply rules that are specific to the requirements of each workload. Of additional value is
that these capabilities are not limited to homogeneous vSphere environments. NSX-T Data Center
supports a variety of platforms and infrastructure.



Micro-segmentation provided by NSX-T Data Center supports a zero-trust architecture for IT
security. This architecture establishes a security perimeter around each VM or container workload
with a dynamically defined policy.



8-9 Enforcing the Zero-Trust Security Model of Micro-Segmentation (1)

Conventional security models assume that everything on the inside of an organization's network
can be trusted. Zero trust assumes the opposite: trust nothing and verify everything. This
architecture addresses the increased sophistication of network attacks and insider threats that
frequently exploit the conventional perimeter-controlled approach. For each system in an
organization's network, trust of the underlying network is removed. A perimeter is defined per
system within the network to limit the possibility of lateral (east-west) movement of an attacker.
To build the zero-trust security data center, first determine which VMs contain an application and
what network traffic is necessary for the application to function.



8-10 Enforcing the Zero-Trust Security Model of Micro-Segmentation (2)

When you understand an application’s composition and necessary network traffic, you can create
micro-segmentation policies to restrict superfluous network traffic.
This step immediately reduces the attack surface of the application by restricting what the
application can communicate with to only the resources that it absolutely needs.
But what about legitimate, necessary network traffic? For example, an application needs to
communicate with shared services such as Active Directory (AD), users, and, potentially, other
applications.
How do we account for these communication paths and direct attacks on the VMs?



8-11 Enforcing the Zero-Trust Security Model of Micro-Segmentation (3)

This step, securing through context, establishes and enforces the intended state and behavior of the
workload VM, including the processes that should be running, how the OS should be configured,
and so on.



8-12 Micro-Segmentation Use Cases

The NSX-T Data Center security platform is designed to handle the firewall challenges faced by
IT administrators. One of the platform's main use cases is to address the need for context-aware
micro-segmentation of applications.
The NSX-T Data Center distributed firewall is delivered as part of a distributed platform that
offers ubiquitous enforcement, scalability, line-rate performance, multi-hypervisor support, and
API-driven orchestration. These fundamental pillars of the distributed firewall enable it to address
many different use cases for production deployment.



8-13 Micro-Segmentation Benefits



8-14 Review of Learner Objectives



8-15 NSX-T Data Center Distributed Firewall



8-16 Learner Objectives



8-17 NSX-T Data Center Firewalls (1)

NSX-T Data Center implements a centralized policy and configuration capability and distributes
the policies to the firewalls.



8-18 NSX-T Data Center Firewalls (2)

NSX-T Data Center implements a centralized policy and configuration capability and distributes
the policies to the firewalls.



8-19 Features of the Distributed Firewall

The NSX-T Data Center distributed firewall includes many features:

• Centralized configuration through NSX Manager Simplified UI

• Distributed layer 2-4 firewall

• Layer 7 context-aware firewall

• Residing in the kernel and implemented at the vNIC level

• Line-rate firewall throughput

• Multiple hypervisor support

• Multiple workload (VM and container) support

• On-premises and public cloud support

• Static and dynamic grouping based on compute objects and tags



• Firewall rule enforcement regardless of the network transport type (overlay or VLAN)

• vMotion support: Firewall policies move with VMs.



8-20 Distributed Firewall: Key Concepts (1)



8-21 Distributed Firewall: Key Concepts (2)



8-22 Creating a Domain

Rules in a domain must have at least one group in the source or destination that is a member of the
same domain.



8-23 Security Policy Overview

The NSX Manager simplified UI enables you to configure several types of policies:

• Gateway policies: Use for configuring gateway firewall rules to control north-south traffic

• Network Introspection policies: Use for configuring north-south and east-west traffic
redirection rules

• Distributed Firewall policies: Use for configuring distributed firewall rules to control east-
west traffic

• Endpoint policies: Use for configuring Guest Introspection services and rules



8-24 Distributed Firewall Policy

The categories for distributed firewall rules are available for both distributed and gateway
firewalls:

• Ethernet: All layer 2 policies. Layer 2 firewall rules are always evaluated before layer 3 rules.

• Emergency: Temporary firewall policies needed in emergency situations, such as blocking an
attacker from reaching a web server.

• Infrastructure: Nonapplication policies specific to infrastructure components such as vCenter
Server, ESXi hosts, and so on.

• Environment: High-level policy groupings, for example, the production group cannot
communicate with the testing group, or the testing group cannot communicate with the
development group.

• Application: Specific and granular application policy rules, such as rules between applications
or application tiers, or rules between micro-services.



Each of these categories can have its own rules and policies. Firewall rules are enforced left to
right, top to bottom.
You can reorder policies and rules within a specific category. However, you cannot move policies
or rules across different categories.
You can configure rules under relevant categories.



8-25 Configuring Distributed Firewall Policies (1)

In a firewall policy, each firewall rule contains instructions that determine whether a packet should
be allowed or blocked, which protocols it is allowed to use, which ports it is allowed to use, and so
forth. Policies are used for multitenancy, such as creating specific rules for sales and engineering
departments in separate policies.
A policy can be defined as enforcing stateful or stateless rules. Stateless rules are treated as
traditional stateless access-control lists (ACLs). Reflexive ACLs are not supported for stateless
policies. A mix of stateless and stateful rules on a single logical switch port is not recommended
and might cause undefined behavior.



8-26 Configuring Distributed Firewall Policies (2)

In the example, three distributed firewall policies are created: Web, MySQL, and Drop. Each
policy has one or more firewall rules.
Firewall rules are enforced in the following ways:

• Like firewall policies, firewall rules are processed in the top-to-bottom order.

• Each packet is checked against the top rule in the rule table before moving down the
subsequent rules in the table.

• The first rule in the table that matches the traffic parameters is enforced. No subsequent rules
can be enforced because the search is then terminated for that packet.

Because of this behavior, VMware recommends that you place the most granular policies at the
top of the rule table.
For any traffic attempting to pass through the firewall, the packet information is subjected to the
rules in the order shown in the policy, beginning at the top and proceeding to the default rule at the
bottom. The first rule that matches the packet has its configured action applied, and any processing



specified in the rule's configured options is performed. All subsequent rules are ignored (even if a
later rule is a better match). As a result, you should place specific rules above more general rules
to ensure those specific rules are not ignored.
The default rule, located at the bottom of the rule table, is a catch-all rule. Packets not matching
any other rule are enforced by the default rule. After the host preparation operation, the default
rule is set to the Allow action. This rule ensures that VM-to-VM communication is not broken
during staging or migration phases. You should change this default rule to the Block action and
enforce access control through a positive control model, in which only traffic explicitly defined in
firewall rules is allowed onto the network.
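The first-match, top-to-bottom evaluation described above can be sketched in a few lines. The rule and packet structures below are simplified stand-ins for illustration, not the actual distributed firewall data model.

```python
# Sketch of first-match firewall evaluation: the first rule whose
# parameters match a packet decides its action, and the default rule at
# the bottom of the table catches everything else.

def evaluate(rules, packet, default_action="ALLOW"):
    """Return the action of the first rule matching the packet."""
    for rule in rules:
        if (rule["src"] in (packet["src"], "any")
                and rule["dst"] in (packet["dst"], "any")
                and rule["port"] in (packet["port"], "any")):
            return rule["action"]   # search terminates at the first match
    return default_action           # catch-all default rule

rules = [
    {"src": "any", "dst": "web", "port": 443, "action": "ALLOW"},
    {"src": "any", "dst": "any", "port": "any", "action": "DROP"},
]

a1 = evaluate(rules, {"src": "client", "dst": "web", "port": 443})
a2 = evaluate(rules, {"src": "client", "dst": "db", "port": 3306})
```

Note how the broad DROP rule at the bottom only takes effect because the granular ALLOW rule sits above it; reversing the order would drop everything, which is why specific rules belong above general ones.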



8-27 Configuring Distributed Firewall Policy Settings



8-28 Creating Distributed Firewall Rules



8-29 Configuring Distributed Firewall Rule Parameters

Several parameters can be defined when configuring a distributed firewall rule:

• Sources: You can use previously defined groups.

• Destinations: You can use previously defined groups.

• Services: You can specify a port and protocol combination.

• Profiles: You use context profiles to define context-aware or layer 7 rules.

• Action: You can select from the firewall rule actions Allow, Drop, and Reject.

In the example, a firewall policy named HR-APP-DFW-Policy is created. In this policy, a rule
named To-Web is configured. This rule permits HTTPS traffic from any source to reach the
destinations specified in Group-1.
The order of firewall rules is important in determining the handling of traffic. You can drag and
drop rules in the simplified UI to change the order.
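The rule parameters listed above can be modeled as a small validated object, as in the sketch below. Field names mirror the UI labels for readability; they are not the exact Policy API schema.

```python
# Sketch of a distributed firewall rule as a validated object. Field
# names mirror the UI labels, not the exact NSX-T Policy API schema.

VALID_ACTIONS = {"ALLOW", "DROP", "REJECT"}

def make_dfw_rule(name, sources, destinations, services,
                  action, profiles=None):
    """Build a firewall rule dictionary, validating the action."""
    if action.upper() not in VALID_ACTIONS:
        raise ValueError(f"action must be one of {sorted(VALID_ACTIONS)}")
    return {
        "display_name": name,
        "sources": sources or ["ANY"],       # empty source means any
        "destinations": destinations,
        "services": services,
        "profiles": profiles or [],          # context profiles for L7 rules
        "action": action.upper(),
    }

# The To-Web rule from the example: HTTPS from any source to Group-1.
rule = make_dfw_rule("To-Web", None, ["Group-1"], ["HTTPS"], "Allow")
```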



8-30 Specifying Sources and Destinations for a Rule

Both IPv4 and IPv6 addresses are supported for the Sources and Destinations options of a
firewall rule.



8-31 Creating Groups

Before creating a group including AD users, you must add an AD domain to NSX Manager. You
add this domain through the NSX Manager simplified UI by navigating to System > Active
Directory > ADD ACTIVE DIRECTORY.
The main use case for creating a group that includes AD users is to configure identity-based
firewall rules. In NSX-T Data Center 2.4, Identity Firewall is only supported for virtual desktops
and virtual user sessions.



8-32 Adding Members and Member Criteria for a Group



8-33 Viewing the Configured Groups



8-34 Specifying Services for a Rule



8-35 Predefined and User-Created Services

Both predefined services and user-created services can be used in firewall rules to classify traffic.
NSX-T Data Center 2.4 includes additional services to support layer 2 and layer 7 rules.



8-36 Adding a Context Profile to a Rule



8-37 Predefined and User-Created Context Profiles

Layer 7 firewall rules can be defined only in a stateful firewall policy.



8-38 Configuring Context Profile Attributes

A context profile defines context-aware attributes, including application ID and domain name, as
well as subattributes such as application version or cipher suite.
Context profiles include the following main attributes:

• APP_ID: You can choose from a list of preconfigured applications. You cannot add any
additional applications. Examples include FTP, SSH, and SSL. Certain applications allow
you to specify subattributes. For example, when choosing SSL, administrators can
specify the TLS_VERSION and the TLS_CIPHER_SUITE. For CIFS, you can specify the
SMB_VERSION.

• DOMAIN_NAME: You can choose from a static list of Fully Qualified Domain Names
(FQDNs).
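As an illustration of how these attributes nest, the following sketch builds a Policy API request body for a hypothetical context profile restricting SSL to TLS 1.2. The endpoint path in the comment and the exact attribute keys are assumptions to verify against your NSX-T API reference:

```python
import json

# Hypothetical body for PUT /policy/api/v1/infra/context-profiles/ssl-tls12
# (verify the path and attribute keys against your NSX-T Policy API version).
profile = {
    "display_name": "ssl-tls12-only",
    "attributes": [
        {
            "key": "APP_ID",               # main attribute: preconfigured app
            "value": ["SSL"],
            "sub_attributes": [
                {"key": "TLS_VERSION", "value": ["TLS_V1_2"]},  # subattribute
            ],
        }
    ],
}
body = json.dumps(profile, indent=2)
print(body)
```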



8-39 Setting the Scope of Rule Enforcement



8-40 Specifying Distributed Firewall Settings

In the simplified UI, you can configure several settings:

• Logging: You can turn logging off or on. Logs are stored in the
/var/log/dfwpktlogs.log file on ESXi and KVM hosts.

• Direction: This setting matches the direction of a packet as it moves across the network. A
direction of IN is for traffic ingressing through the firewall. A direction of OUT is for traffic
egressing through the firewall. The option IN_OUT is also available.

• Tag: Tags are a way to group VMs and other objects in a category, such as accounting,
payroll, web servers, and so on. Tags can also be used to identify quarantined VMs. Support
for rule tagging is introduced in NSX-T Data Center 2.4.

• Rule Path: Distributed firewall policies and rules are identified using their absolute path.
Knowing how to retrieve the object identifiers for policies (policy path) and rules (rule path)
from the UI is very important for troubleshooting purposes.



8-41 Filtering the Display of Firewall Rules



8-42 Determining the Default Firewall Behavior

Logging can be enabled with the blacklist or whitelist.


You can configure the blacklist and whitelist in the simplified UI.



8-43 Viewing the Default Firewall Rules

To view the default rules (whitelist and blacklist) that you create, select the Advanced
Networking & Security tab.



8-44 Distributed Firewall Architecture



8-45 Distributed Firewall Architecture: ESXi

The following data path modules are responsible for distributed firewall rule processing:

• VSIP (VMware Internetworking Service Insertion Platform): The main part of the distributed
firewall kernel module, which receives the firewall rules and downloads them to each VM's
vNIC.

• VDPI (VMware Deep Packet Inspection): A deep packet inspection daemon in the user space
that is responsible for L7 packet inspection. VDPI can identify application IDs and extract
context for a traffic flow.



8-46 Distributed Firewall Architecture: KVM

This slide shows the distributed firewall architecture on a KVM. The same architecture also
applies to bare metal servers.
The following data path modules are responsible for distributed firewall rule processing on a
KVM:

• OVS: Core data path component for L2, L3, and distributed firewall. It provides ingress and
egress filtering for stateless rules.

• Conntrack: Module responsible for tracking established connections for stateful firewall rules.

• VDPI: A deep packet inspection daemon in the user space that is responsible for L7 packet
inspection. VDPI can identify application IDs and extract context for a traffic flow.



8-47 Lab: Configuring the NSX Distributed Firewall



8-48 Review of Learner Objectives



8-49 NSX-T Data Center Gateway Firewall



8-50 Learner Objectives



8-51 About NSX-T Data Center Gateway Firewall

The NSX-T Data Center gateway firewall provides essential perimeter firewall protection that can
be used in addition to a physical perimeter firewall. The gateway firewall service is part of the
NSX-T Edge node for both bare metal and VM form factors. The gateway firewall is useful in
developing PCI zones, multi-tenant environments, or DevOps style connectivity without forcing
the inter-tenant or inter-zone traffic onto the physical network. The gateway firewall data path
uses the Data Plane Development Kit (DPDK) framework supported on NSX Edge to provide
better throughput.
The NSX-T Data Center gateway firewall is instantiated per logical router and supported at both
Tier-0 and Tier-1.
The gateway firewall works independently of the distributed firewall from a policy configuration
and enforcement perspective. A user can consume the gateway firewall using either the UI or the
REST API framework provided by NSX Manager. In the NSX Manager UI, the gateway firewall can
be configured from the Gateway Firewall page. The gateway firewall configuration is similar to
the Distributed Firewall policy in that it is defined as a set of individual rules within a policy. Like
the distributed firewall, the gateway firewall rules can use logical objects, tagging, and groups to
build policies.



The gateway firewall is an optional centralized firewall implemented on NSX-T Data Center
Tier-0 Gateway uplinks and Tier-1 Gateway links. The firewall is implemented on a Tier-0 or
Tier-1 service router component that is hosted on NSX Edge.
The Tier-0 Gateway firewall supports stateful firewall filtering only in active-standby high
availability mode. The firewall can also be enabled in active-active mode, but it then operates
only in stateless mode. The gateway firewall uses a model similar to the distributed firewall for defining
policy. You can also use grouping constructs such as NSGroups, IPSets, and so on. Gateway
firewall policy rules are defined in the dedicated policy in the firewall table for each Tier-0 and
Tier-1 Gateway.
The Tier-0 Gateway firewall is used as a perimeter firewall. The Tier-0 firewall is mainly used for
north-south traffic from the virtualized environment to the physical world. The Tier-1 service
router component resides on the NSX Edge node to enforce the firewall policy before traffic
leaves or enters the NSX-T Data Center virtual environment. East-west traffic continues to use
distributed routing and firewall filtering.



8-52 Gateway Firewall on Tier-0 Gateway for Perimeter
Protection

The diagrams show that the Tier-0 Gateway firewall is used as a perimeter firewall between the
physical and virtual domains. This gateway is mainly used for north-south traffic from the
virtualized environment to the physical world. In this case, the Tier-0 service router component
residing on the NSX Edge node enforces the firewall policy before traffic leaves or enters the
NSX-T Data Center virtual environment. East-west traffic continues to use the distributed routing
and firewall filtering capability that NSX-T Data Center natively provides in the hypervisor.



8-53 Gateway Firewall Policy



8-54 Predefined Gateway Firewall Categories

The gateway firewall includes several predefined categories for rules:

• Emergency: Used for quarantine and can also be used for Allow rules

• System Rules: Automatically generated by NSX-T Data Center and specific to internal
control plane traffic, such as BFD rules, VPN rules, and so on

• Shared Prerules: Globally applied across NSX gateway nodes

• Local Gateway: Rules specific to a particular NSX gateway node

• Auto Service Rules: Auto-plumbed rules applied to the data plane

• Default: Rules that define the default gateway firewall behavior.



8-55 Configuring the Gateway Firewall Policy Settings

To create a Gateway Firewall policy, you assign a policy name and specify the domain.
You can configure the following settings when creating a new Gateway Firewall policy:

• TCP Strict: This setting strengthens the security of the gateway firewall by dropping packets
that are not preceded by a complete three-way TCP handshake.

• Stateful: When this option is enabled, the gateway firewall performs stateful packet
inspection and tracks the state of network connections. Packets matching a known active
connection are allowed by the firewall, and packets that do not match are inspected against
the gateway firewall rules.

• Locked: This setting allows you to lock a policy while making configuration changes so that
others cannot make modifications at the same time. To lock or unlock a policy, you must
provide a comment.
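The TCP Strict check can be pictured as follows: a mid-flow packet for a connection whose handshake was never observed is dropped. This toy tracker only illustrates the idea and is not the edge data path implementation:

```python
# TCP Strict: only flows that began with a SYN (start of the three-way
# handshake) are accepted; a mid-flow packet for an unknown flow is dropped.
class StrictTracker:
    def __init__(self):
        self.known = set()

    def check(self, flow, syn=False):
        if syn:
            self.known.add(flow)    # handshake observed, remember the flow
            return "ALLOW"
        return "ALLOW" if flow in self.known else "DROP"

fw = StrictTracker()
print(fw.check(("10.0.0.1", "10.0.0.2", 443)))            # DROP (no handshake)
fw.check(("10.0.0.1", "10.0.0.2", 443), syn=True)
print(fw.check(("10.0.0.1", "10.0.0.2", 443)))            # ALLOW
```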



Gateway policies enable you to define gateway firewall rules across domains. In the case of
multiple domains, gateway firewall rules for a particular gateway are processed as follows:
1. Gather all gateway firewall policies from all domains for the logical router.
2. Order gateway firewall policies by category and priority within a given category.
3. Order rules within a policy based on their sequence number.
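The three ordering steps above amount to a nested sort, sketched here with illustrative category ranks and field names:

```python
# Order: category rank first, then policy priority, then rule sequence number.
CATEGORY_RANK = {"Emergency": 0, "SystemRules": 1, "SharedPreRules": 2,
                 "LocalGateway": 3, "AutoServiceRules": 4, "Default": 5}

def ordered_rules(policies):
    """Flatten gateway policies into the order the data path evaluates them."""
    result = []
    for policy in sorted(policies,
                         key=lambda p: (CATEGORY_RANK[p["category"]],
                                        p["priority"])):
        result.extend(sorted(policy["rules"], key=lambda r: r["sequence"]))
    return [r["name"] for r in result]

policies = [
    {"category": "LocalGateway", "priority": 10,
     "rules": [{"name": "allow-web", "sequence": 20},
               {"name": "deny-ssh", "sequence": 10}]},
    {"category": "Emergency", "priority": 1,
     "rules": [{"name": "quarantine", "sequence": 1}]},
]
print(ordered_rules(policies))  # ['quarantine', 'deny-ssh', 'allow-web']
```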

Each Gateway Firewall policy defined in the NSX Manager simplified UI has its own policy in the
logical router.



8-56 Configuring Firewall Rules

In the example, a firewall policy named Block Traffic Policy was created for the default domain.
Within this policy, a firewall rule was created to block SSH traffic from any source going to
Group-1, Group-2, and the test group. This rule is applied to the entire Tier-0 Gateway.



8-57 Configuring Gateway Firewall Rules Settings



8-58 Gateway Firewall Architecture

The slide provides a high-level summary of the gateway firewall architecture:


1. You configure gateway policies through the simplified UI.
2. Gateway policies are processed by the policy role.
3. Gateway policies are pushed to NSX Manager, which validates and forwards them to the
central control plane.
4. The CCP calculates the span of the rules and distributes the firewall configuration to the
relevant edge nodes.
5. NSX-proxy receives the firewall configuration from the CCP and configures the edge data
path.
6. The Stats Exporter collects flow records from the data path and generates rule statistics.
7. The MPA reports the firewall rules statistics and status to the management plane.



8-59 Lab: Configuring the NSX Gateway Firewall



8-60 Review of Learner Objectives



8-61 NSX-T Data Center Service Insertion



8-62 Learner Objectives



8-63 About Service Insertion

NSX-T Data Center supports Network Introspection and Endpoint Protection.


Endpoint Protection examines activity inside guest VMs, whereas Network Introspection examines
traffic outside the guest VMs. Together, Endpoint Protection and Network Introspection provide both
internal (endpoint) and external (network) perspectives on the activities performed inside virtual machines.
The information about these activities is leveraged by services provided by either VMware or third
parties, such as SpoofGuard, Identity-Aware functions, antivirus solutions, or intrusion detection
or protection systems.
Network Introspection deals with data in motion across the network. You can define detailed
redirection rules, which define which traffic should be inspected by the partner services.
Endpoint Protection deals with security on the workload. It enables use cases such as agentless
antivirus, where an agent does not need to be installed on each workload but instead NSX-T Data
Center can intercept file events and pass them to a partner virtual appliance. This functionality
significantly reduces overhead: no additional agent is required, and the processing cost of
running scanning operations on every workload is avoided.



8-64 About Network Introspection

Service insertion for Network Introspection can be applied at Tier-0 and Tier-1 Gateways to check
north-south and east-west traffic. Partner services typically provide advanced security features
such as IDS, IPS, L4-L7 firewall, URL filtering, and so on.



8-65 North-South Network Introspection Overview

The L2 north-south insertion mode (also known as the Bump in the Wire mode) of the partner
service is supported. The L3 north-south service insertion is being developed.
Service Manager is a third-party entity that mediates the communication between the NSX-T Data
Center service insertion platform and the partner's service virtual machines (SVMs).
An SVM runs the OVA or OVF specified by a service and is connected over the service plane to
receive redirected traffic.



8-66 Configuring North-South Network Introspection

By default, traffic on the router’s uplink (where the partner’s service is inserted) is not redirected.
You can add granular redirection rules to send interesting traffic to the partner’s security service.
Traffic steering uses policy-based routing (PBR).



8-67 Registering a Partner Service

A partner registers the service with NSX-T Data Center by making an API call or using the partner
management console (CLI). Partners can automate service registration from their management
console.
In this registration process, several parameters must be specified, such as the location (URL) of
the OVF, to which router (Tier-1 or Tier-0) the partner service is attached, the operational mode
(L2 bridged mode), and so on.
Users need to do separate registrations for each service type per attachment point (Tier-0 and
Tier-1).



8-68 Deploying a Partner Service Instance

After a service is registered and appears in the catalog, you deploy an instance of the service so
that it can start processing network traffic. Each partner releases its own partner service OVF for
NSX-T Data Center integration.
You select a logical router (Tier-0 or Tier-1) and a host where the partner service is deployed. The
instance must be deployed on an ESXi transport node because logical switching is needed to
bridge traffic to the interface where the partner service is attached.
The partner service instance is typically deployed on the same host as the edge node to avoid
hairpinning situations.
Each SVM can only be applied to one logical router. NSX Manager creates and attaches segments
to the gateway and the partner’s SVM.
The deployment process might take some time, depending on the vendor's implementation.
Deployment and operational status is monitored. You can view the status in the NSX Manager
simplified UI. The status should appear as Deployment Successful.



8-69 Configuring Traffic Redirection to Partners

After you deploy a service instance, you can configure the type of traffic that the gateway redirects
to the partner service. Configuring traffic redirection is similar to configuring a firewall. You can
define detailed redirection rules at the distributed firewall, which define which traffic should be
inspected by the partner services. By default, the No-Redirect All rule is applied. You can create
selective redirection rules using groups.
Redirection rules are always stateless. Reflexive redirection rules are automatically created and
sent to the control plane so that the return traffic is also sent to the partner service.
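Conceptually, a reflexive rule is the forward redirection rule with source and destination swapped, so return traffic follows the same service chain. A simplified sketch (the field names are illustrative, not the control plane schema):

```python
# A redirect rule for forward traffic implies a mirror rule for return traffic,
# so both directions of a flow reach the partner service.
def reflexive(rule):
    return {"src": rule["dst"], "dst": rule["src"],
            "action": rule["action"], "chain": rule["chain"]}

forward = {"src": "web-group", "dst": "any", "action": "REDIRECT", "chain": "SC1"}
back = reflexive(forward)
print(back["src"], "->", back["dst"])  # any -> web-group
```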



8-70 East-West Network Introspection Overview

When SVMs are deployed on compute hosts, an SVM does not need to be installed on every host.
However, some customers prefer to deploy the partner SVM on each host to minimize traffic
hairpinning.
When the partner SVM is deployed in a service cluster, traffic is sent from the compute hosts
across the overlay to the hosts in the service cluster.
For north-south service insertion, the insertion points are at the uplinks of Tier-0 or Tier-1
Gateways. With east-west service insertion, the insertion points are at each guest VM’s vNIC. In
other words, traffic is intercepted at the vNIC of each guest VM.
With east-west service insertion, the security groups configured in the NSX Manager simplified
UI can be shared with the management consoles of the partners.
Partner appliances of different sizes can be integrated with NSX-T Data Center.



8-71 Configuring East-West Network Introspection

The east-west service insertion for the Network Introspection configuration is very similar to the
steps that you used in the north-south configuration and includes the following steps:
1. Service Registration
2. Service Deployment
3. Service Consumption

Service profile: A specific instantiation of a vendor template. For example, if a vendor template
defines an IPSec tunneling operation, a service profile specifies details such as IPSec tunnel
endpoints, algorithms, and so on. Service profiles can be created by NSX-T Data Center
administrators or third-party vendors.
Service chain: A sequence of service profiles defined by the network administrator. A service
defines the logical sequence of operations to be applied to network traffic, for example, firewall,
then monitor, and then IPSec VPN. Service chains can specify different sequences of service
profiles for different directions of traffic (egress or ingress).
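A service chain can be modeled as an ordered list of profiles per direction. In this sketch, the reverse-order default for ingress is an assumption made for illustration; in practice you can specify each direction's sequence explicitly:

```python
# A service chain is an ordered sequence of service profiles; egress and
# ingress directions can specify different sequences.
class ServiceChain:
    def __init__(self, egress, ingress=None):
        self.egress = egress
        # Illustrative default: return traffic traverses the profiles in reverse.
        self.ingress = ingress if ingress is not None else list(reversed(egress))

    def profiles_for(self, direction):
        return self.egress if direction == "egress" else self.ingress

sc1 = ServiceChain(egress=["firewall", "monitor", "ipsec-vpn"])
print(sc1.profiles_for("ingress"))  # ['ipsec-vpn', 'monitor', 'firewall']
```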



Redirection policy: A construct that specifies traffic matching patterns (based on NSX-T Data
Center security groups) and a service chain. All traffic matching the pattern is redirected along the
service chain.



8-72 Registering Partner Services



8-73 Deploying an Instance of a Registered Service



8-74 Creating a Service Profile for East-West Network
Introspection

A service profile is a specific instantiation or customization of a vendor template. For example, if
a vendor template defines an IPSec tunneling operation, a service profile specifies details such as
IPSec tunnel endpoints, algorithms, and so on. You can create multiple service profiles, for
example, one for IPS and one for a next-generation firewall.



8-75 Creating Service Chains

You can create one or more service profiles in a service chain.


In the example, during the creation of the service chain called SC1, two service profiles (SP1 and
SP2) are specified in the traffic-forwarding path. As a result, traffic must be examined by both
SP1 and SP2.



8-76 Configuring Redirection Rules

All traffic matching a redirection rule is redirected along the specified service chain.



8-77 Endpoint Protection Overview and Use Cases

Linux workload support is currently being worked on.



8-78 Endpoint Protection Process

As part of host preparation, NSX-T Data Center distributes Endpoint Protection modules to all
hosts in a cluster.
The integrated service is deployed as a virtual appliance running partner services, also known as a
service virtual machine (SVM).
The administrator defines Endpoint Protection policies for the VMs.
The integrated service uses the Endpoint Protection API library (formerly known as EPSec API
library) to introspect and protect guest VMs from malware.
Because Endpoint Protection enables SVMs to read and write specific files on guest VMs, it
provides an efficient way to optimize memory use and avoid resource bottlenecks.



8-79 Automatic Policy Enforcement for New VMs



8-80 Automated Virus or Malware Quarantine with Tags
Example

The example shows two security groups (Standard, Quarantine Zone), two security policies
(Standard, Quarantined), and a defined security tag (ANTI_VIRUS.VirusFound).

1. The antivirus SVM monitors activities on the guest VMs in the Standard group. If malicious
activities are detected, the security tag ANTI_VIRUS.VirusFound is set for that VM.

2. A security tag, ANTI_VIRUS.VirusFound, places the VMs that are virus-infected into the
Quarantine Zone security group, where the Quarantined VM security policy is enforced. In
this example, the Quarantined VM security policy blocks all inbound and outbound traffic
with the exception of the necessary security tools.
3. After the virus is removed and the VMs are scanned, the ANTI_VIRUS.VirusFound tag is
removed. The VMs are removed from the Quarantine Zone group, eliminating the
Quarantined VM security policy. Traffic flow to and from the VMs resumes. This process
prevents the spread of the virus to other VMs. This entire process is automated and does not
require manual intervention.
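The automation hinges on tag-driven group membership: adding or removing the tag is all that changes a VM's effective policy. A toy model using the tag from the example:

```python
# Group membership is evaluated from tags: tagging a VM with
# ANTI_VIRUS.VirusFound moves it into the Quarantine Zone group, and removing
# the tag moves it back out -- no manual rule changes are needed.
QUARANTINE_TAG = "ANTI_VIRUS.VirusFound"

def security_group(vm_tags):
    return "Quarantine Zone" if QUARANTINE_TAG in vm_tags else "Standard"

vm = {"name": "web-01", "tags": set()}
vm["tags"].add(QUARANTINE_TAG)          # SVM detects malware
print(security_group(vm["tags"]))       # Quarantine Zone
vm["tags"].discard(QUARANTINE_TAG)      # virus removed, rescan is clean
print(security_group(vm["tags"]))       # Standard
```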



8-81 Creating a Service Profile for Endpoint Protection

Service profiles are a way for administrators to choose protection levels for a VM by selecting the
templates provided by the vendor. For example, a vendor can provide silver, gold, and platinum
policy levels.
Each profile created might serve a different type of workload. A gold service profile provides
complete antimalware protection to a PCI-type workload, whereas a silver service profile provides
only basic antimalware protection to a regular workload.



8-82 Configuring Endpoint Protection Rules



8-83 Review of Learner Objectives



8-84 Key Points (1)



8-85 Key Points (2)



Module 9
NSX-T Data Center User and Role Management

Module 9: NSX-T Data Center User and Role Management 647


9-2 Importance



9-3 Module Lessons



9-4 Integrating NSX-T Data Center and VMware Identity
Manager



9-5 Learner Objectives



9-6 About VMware Identity Manager

You can verify the version compatibility between NSX-T Data Center and VMware Identity
Manager, using the VMware Product Interoperability Matrix at
https://www.vmware.com/resources/compatibility/sim/interop_matrix.php#interop&175=&140=.
NSX-T Data Center 2.4 requires at least version 3.2 of VMware Identity Manager.



9-7 Benefits of Integrating VMware Identity Manager and
NSX-T Data Center



9-8 VMware Identity Manager Integration Prerequisites

The following steps must be completed before integrating VMware Identity Manager with
NSX-T:
1. From the vSphere Web Client, deploy the VMware Identity Manager appliance from an OVF
template.
2. The VMware Identity Manager virtual machine must be configured to synchronize its time
with the ESXi host where it is running. To configure synchronization, right-click the VM and
select Edit Settings > VM Options. Scroll down to the Time section and select the check box
for Synchronize guest time with host.
3. After deploying the VMware Identity Manager appliance, you use the Setup wizard available
at https://<VMware_Identity_Manager_FQDN> to set passwords for the admin, root, and
remote SSH user and to select a database. You can use the external Microsoft SQL or Oracle
database, or the internal PostgreSQL database.



9-9 Configuring VMware Identity Manager

You can access the VMware Identity Manager Administration console at
https://<VMware_Identity_Manager_FQDN>:443/SAAS/admin.

Active Directory (AD) over LDAP or AD with integrated Windows authentication, LDAP, and
local directory are all supported identity sources in VMware Identity Manager. To configure
identity sources from the VMware Identity Manager administration console, you take the
following steps:
1. Click the Identity & Access Management tab.
2. On the Directories page, click Add Directory and select the type of directory for integration.
3. Select Domains.
4. Map user attributes.
5. Select Groups and Users to Sync.
6. Click Sync Directory to start the directory synchronization.



To configure authentication methods, you select Identity & Access Management >
Authentication Methods.
Administrators can configure a single authentication method or can set up chained, two-factor
authentication.
To define access policies, you select Identity & Access Management > Policies.
Administrators can configure rules that specify the network ranges and types of devices that users
can use to sign in.



9-10 VMware Identity Manager and NSX-T Data Center
Integration Overview



9-11 Creating a New OAuth Client

VMware Identity Manager uses the OAuth 2.0 authorization framework to enable third-party
applications, such as NSX-T Data Center, and their users to access specific data and services. In
the process, VMware Identity Manager protects the account credentials of the third party.
Before enabling integration between VMware Identity Manager and NSX-T Data Center, you
must register NSX-T Data Center as a trusted OAuth client in VMware Identity Manager.
When configuring NSX-T Data Center details, you select Service Client Token from the Access
Type drop-down menu. This selection indicates that the application, NSX-T Data Center in this
example, accesses the APIs on its own behalf, not on behalf of a particular user.
You must specify a Client ID to uniquely identify NSX. You should record this value because you
need it to enable VMware Identity Manager integration.
You must also click Generate Shared Secret and record the generated value, which you need to
enable VMware Identity Manager integration.
Leave the default settings for all other options.



On the Create Client page, you can optionally set the token time-to-live values by specifying the
access, refresh, and idle timers.



9-12 Getting the SHA-256 Certificate Thumbprint

You should record the SHA-256 certificate thumbprint because you need this value when you
enable VMware Identity Manager integration.



9-13 Configuring VMware Identity Manager Details in NSX-T Data Center

The values entered for OAuth Client ID and OAuth Client Secret are the values you recorded
when creating a new OAuth client for NSX-T Data Center in VMware Identity Manager.
The value entered for SSL Thumbprint is the value you recorded from the VMware Identity
Manager appliance command line.
The value entered for NSX Appliance must be used to access NSX Manager after the integration.
If you enter the FQDN of NSX Manager but then try to access the appliance through its IP
address, the authentication fails.
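The same details can also be supplied programmatically. The sketch below builds a request body resembling the NSX-T /api/v1/node/aaa/providers/vidm call; the endpoint and field names should be verified against your version's API reference, and all values are placeholders:

```python
import json

# Hypothetical body for PUT /api/v1/node/aaa/providers/vidm -- verify the
# endpoint and field names against the NSX-T API reference for your version.
vidm_config = {
    "vidm_enable": True,
    "host_name": "vidm-01.corp.local",          # VMware Identity Manager FQDN
    "client_id": "nsx-client",                  # OAuth client ID recorded earlier
    "client_secret": "<shared-secret>",         # generated shared secret
    "thumbprint": "<sha-256-thumbprint>",       # vIDM certificate thumbprint
    "node_host_name": "nsxmgr-01.corp.local",   # must match how you access NSX
}
print(json.dumps(vidm_config, indent=2))
```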



9-14 Verifying VMware Identity Manager Integration

If the integration is successful, the VMware Identity Manager Connection appears as Up, and the
VMware Identity Manager Integration as Enabled.



9-15 Default UI Login

The default login page also appears if integration with VMware Identity Manager is configured,
but VMware Identity Manager is down or not reachable at the time of the login.



9-16 UI Login with VMware Identity Manager



9-17 Local Login with VMware Identity Manager



9-18 Review of Learner Objectives



9-19 Managing Users and Configuring RBAC



9-20 Learner Objectives



9-21 NSX-T Data Center Users



9-22 User Access and Authentication Policy Management



9-23 Local Users

Each NSX node has two users, admin and audit, which can be used for local authentication. You
cannot delete or add local users.
A system user called nsx_policy is used by the policy role to realize configuration changes in the
NSX-T Data Center environment.



9-24 Changing the Password for Local Users



9-25 Configuring Authentication Policy Settings for Local
Users



9-26 Configuring Authentication Policy Settings for VMware
Identity Manager Users



9-27 Using Role-Based Access Control



9-28 Permissions Hierarchy

Full access gives the user all permissions.


The execute permission includes the read permission.
For a user with multiple roles, the combined permissions of all the roles are assigned.
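The permission rules above (execute includes read; multiple roles combine) can be sketched as a set union with an implication step. The role names here are illustrative, not the NSX built-in roles:

```python
# Each role grants a set of permissions; a user with multiple roles gets the
# union of all grants, full access implies everything, and execute implies read.
ROLE_PERMS = {                      # illustrative roles, not the NSX built-ins
    "auditor": {"read"},
    "operator": {"execute"},
    "admin": {"full"},
}

def effective_permissions(roles):
    perms = set()
    for role in roles:
        perms |= ROLE_PERMS[role]   # combine permissions of all roles
    if "full" in perms:
        perms |= {"read", "execute"}
    if "execute" in perms:
        perms.add("read")           # the execute permission includes read
    return perms

print(sorted(effective_permissions(["auditor", "operator"])))  # ['execute', 'read']
```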



9-29 Built-in Roles (1)



9-30 Built-in Roles (2)

Additional built-in roles available for VMware Cloud deployments are Cloud Service
Administrator and Cloud Service Auditor. The Cloud Service Administrator role is designed for
public cloud administrators and container administrators to configure services on NSX Manager.
For more information about the permissions for each role in different operations, see "Role-Based Access Control" at https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.4/administration/GUID-26C44DE8-1854-4B06-B6DA-A2FD426CDF44.html.



9-31 Role Assignment for Local Users



9-32 Role Assignment for VMware Identity Manager Users



9-33 Lab: Managing Users and Roles with VMware Identity
Manager



9-34 Review of Learner Objectives



9-35 Key Points



Module 10
NSX-T Data Center Tools and Basic Troubleshooting

Module 10: NSX-T Data Center Tools and Basic Troubleshooting 685
10-2 Importance

10-3 Module Lessons

10-4 Troubleshooting Overview and Log Collection

10-5 Learner Objectives

10-6 About the Troubleshooting Process

10-7 Differentiating Between Symptoms and Causes

For more information about how to resolve issues in your environment, see the NSX-T Data Center Troubleshooting Guide at https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/nsxt_23_troubleshoot.pdf.

10-8 Local Logging on NSX-T Data Center Components

10-9 Viewing NSX Policy Manager Logs

10-10 Viewing the NSX Manager Syslog

10-11 Viewing the NSX Controller Log

10-12 Viewing the ESXi Host Log

10-13 Viewing the KVM Host Log

10-14 Syslog Overview

10-15 Configuring Syslog Exporters (1)

NSX-T Data Center component logging is RFC 5424-compliant, except for logging on ESXi
hosts.
RFC 5424 defines a specific format for log messages. Any number of transport protocols can be
used for transmission of Syslog messages.
RFC 5424 also provides a message format that enables vendor-specific extensions:

• To provide a means by which to convey information in a clear, easily consumed and
interpreted data format, RFC 5424 specifies the use of structured data.

• The structured data format is version UTC-TZ hostname APP-NAME procid MSGID
[structured-data] msg.

On management or edge nodes, to configure a Syslog server, you enter the command set
logging-server <hostname-or-ip[:port]> proto <proto> level <level> [facility <facility>]
[messageid <messageid>] [certificate <filename>] [structured-data <structured-data>].

You can filter log entries by severity, facility, and message ID. The message ID field identifies the
type of message. Message IDs can be used to specify which messages are transferred by the set
logging-server command.
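To make the RFC 5424 fields described above concrete, the following Python sketch parses a sample log line into its header fields and derives the facility and severity from the PRI value. The sample message and its field values are illustrative only, not taken from an actual NSX-T log.

```python
import re

# Illustrative RFC 5424 message (not an actual NSX-T log line).
SAMPLE = ('<182>1 2019-05-04T12:00:00.000Z nsxmgr NSX 1234 FABRIC '
          '[nsx@6876 comp="nsx-manager" subcomp="manager"] Node status changed')

# PRI VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [SD] MSG
PATTERN = re.compile(
    r'<(?P<pri>\d+)>(?P<version>\d+) (?P<timestamp>\S+) (?P<hostname>\S+) '
    r'(?P<appname>\S+) (?P<procid>\S+) (?P<msgid>\S+) '
    r'(?P<sd>\[.*?\]|-) (?P<msg>.*)')

def parse_rfc5424(line):
    m = PATTERN.match(line)
    if m is None:
        raise ValueError('not an RFC 5424 message')
    fields = m.groupdict()
    # PRI encodes facility and severity: PRI = facility * 8 + severity.
    fields['facility'], fields['severity'] = divmod(int(fields['pri']), 8)
    return fields

fields = parse_rfc5424(SAMPLE)
print(fields['msgid'], fields['severity'])  # FABRIC 6 (info)
```

Filtering by severity, facility, or message ID, as the set logging-server command does, amounts to comparing these parsed fields against the configured thresholds.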

10-16 Configuring Syslog Exporters (2)

For example, you can configure an ESXi host as a Syslog exporter to export log messages to the
remote Syslog server (172.20.10.94):
[root@sa-esxi-01:~] esxcli network firewall ruleset set -r syslog -e true
[root@sa-esxi-01:~] esxcli system syslog config set --loghost=172.20.10.94
[root@sa-esxi-01:~] esxcli system syslog reload

10-17 Configuring and Displaying Syslog

In this example, the NSX Manager node sa-nsxmgr-01 is configured as a Syslog exporter. NSX
Manager sends the info-level log messages to the Syslog server student-a-01.vclass.local through
TCP.
The Syslog server application Kiwi Syslog Service Manager running on student-a-01 receives the
info-level messages from host 172.20.10.41 (sa-nsxmgr-01) as configured.

10-18 Generating Technical Support Bundles

10-19 Monitoring the Support Bundle Status

Support bundles can be downloaded to your machine or uploaded to a file server:

• If you download the bundles to your machine, you get a single archive file consisting of a
manifest file and support bundles for each node.

• If you upload the bundles to a file server, the manifest file and the individual bundles are
uploaded to the file server separately.
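The layout of a downloaded bundle can be illustrated with a short Python sketch that builds and inspects a mock archive: a manifest file plus one sub-bundle per node. The file names here are invented for illustration and do not match actual NSX-T bundle naming.

```python
import io
import json
import tarfile

# Build a mock combined support bundle in memory: a manifest file
# plus one per-node bundle (names are illustrative only).
nodes = ['sa-nsxmgr-01', 'sa-esxi-01', 'sa-kvm-01']
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w:gz') as tar:
    manifest = json.dumps({'nodes': nodes}).encode()
    info = tarfile.TarInfo('manifest.json')
    info.size = len(manifest)
    tar.addfile(info, io.BytesIO(manifest))
    for node in nodes:
        data = b'logs for ' + node.encode()
        info = tarfile.TarInfo(node + '_support.tgz')
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Inspecting the archive shows the manifest and the per-node bundles.
buf.seek(0)
with tarfile.open(fileobj=buf, mode='r:gz') as tar:
    names = tar.getnames()
print(names)
```

In the upload-to-file-server case, the same pieces exist, but the manifest and each per-node bundle arrive as separate files rather than inside one archive.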

10-20 Downloading Support Bundles

10-21 Labs

10-22 Lab: Configuring Syslog

10-23 Lab: Generating Technical Support Bundles

10-24 Review of Learner Objectives

10-25 Monitoring and Troubleshooting Tools

10-26 Learner Objectives

10-27 Monitoring Components from the NSX Manager
Simplified UI

10-28 Monitoring Component Status

NSX Manager dashboards provide a predefined status panel of the NSX-T Data Center
environment. A Custom tab can be modified by using the API to display content that you define.

10-29 Port Mirroring Overview

Port mirroring is used on a switch to send a copy of packets seen on one switch port (or an entire
VLAN) to a monitoring connection on another switch port. Port mirroring is used to analyze and
debug data or diagnose errors on a network.

10-30 Port Mirroring Method: Remote L3 SPAN

10-31 Port Mirroring Method: Logical SPAN

The logical SPAN method has the following advantages:

• You can mirror source ports to a destination port on the same logical overlay switch but on
different transport nodes.

• This method uses the overlay for tunneling traffic to its destination if needed.

• Monitoring sessions are not broken by vSphere vMotion migration.

10-32 Configuring Logical SPAN

Configuring logical SPAN involves the following steps:


1. Create a logical SPAN profile.
2. Specify where the packets should be copied.
3. Specify the mirrored source.

10-33 Viewing the Logical SPAN Configuration and Mirrored
Packets

10-34 IPFIX Overview

Internet Protocol Flow Information Export (IPFIX) is a standard for the format and export of
network flow information.

10-35 Configuring IPFIX to Export Traffic Flows

When you enable IPFIX, all configured host transport nodes send IPFIX messages to the IPFIX
collectors using port 4739. In the case of the ESXi host, NSX-T Data Center automatically opens
port 4739.
For a KVM host, if the firewall is not enabled, port 4739 is open. If the firewall is enabled, you
must ensure that the port is open because NSX-T Data Center does not automatically open it.
IPFIX on ESXi and KVM hosts sample tunnel packets in different ways. For details, see the NSX-
T Data Center Administration Guide at https://docs.vmware.com/en/VMware-NSX-T-Data-
Center/2.4/nsxt_24_admin.pdf.
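Because NSX-T Data Center does not open port 4739 on a firewalled KVM host for you, it can help to confirm that datagrams actually reach the collector. The sketch below sends a test UDP datagram; to keep the example self-contained it uses a local stand-in collector on an ephemeral port, whereas a real check would target the collector address on port 4739.

```python
import socket

IPFIX_PORT = 4739  # standard IPFIX collector port

def send_test_datagram(host, port, payload=b'ipfix-reachability-test'):
    """Send a single UDP datagram toward the collector address."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (host, port))
        return payload

# Stand-in collector: a local UDP socket bound to an ephemeral port so the
# example runs anywhere (a real check would use the collector IP and 4739).
collector = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
collector.bind(('127.0.0.1', 0))
host, port = collector.getsockname()

sent = send_test_datagram(host, port)
received, _ = collector.recvfrom(1024)
collector.close()
print(received == sent)  # True if the datagram reached the "collector"
```

If the datagram never arrives in a real deployment, check the host firewall rules on the path to the collector before suspecting the IPFIX configuration itself.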

10-36 Configuring an IPFIX Firewall Profile

Configuring the IPFIX firewall profile is similar to configuring the IPFIX switch profile.
You configure the following settings for the IPFIX firewall profile:

• Name and Description: Enter a name and optionally a description.

• Collector Configuration: Select a collector that you configured. The collector is the device
to which the IPFIX flows are sent.

• Active Flow Export Timeout (sec): Enter the length of time after which a flow times out,
even if more packets associated with the flow are received.

• Priority: Enter a priority value. This parameter is used to resolve conflicts when multiple
profiles apply. The IPFIX exporter uses the profile with the highest priority only. A lower
value means a higher priority.

• Observation Domain Id: Enter the observation domain that the network flows originate
from. The default is 0 and indicates no specific observation domain.

10-37 Configuring an IPFIX Switch Profile

You configure the following settings for the IPFIX switch profile:

• Name and Description: Enter a name and optionally a description.

• Active Timeout (sec): Enter the length of time after which a flow times out, even if more
packets associated with the flow are received. The default is 300.

• Idle Timeout (sec): Enter the length of time after which a flow times out, if no more packets
associated with the flow are received. This option is for ESXi only. KVM times out all flows
based on active timeout. The default is 300.

• Max Flows: Enter the maximum flows to be cached on a bridge. This option is for KVM
only. It is not configurable on ESXi hosts. The default is 16,384.

• Packet Sample Probability (%): The percentage of packets that are sampled
(approximately). Increasing this setting might have a performance impact on the hypervisors
and collectors. If all hypervisors send more IPFIX packets to the collector, the collector might
not be able to collect all packets. Setting the probability at the default value of 0.1% keeps the
performance impact low.

• Observation Domain Id: Enter the observation domain that the network flows originate
from. Enter 0 to indicate no specific observation domain.

• Collector Configuration: Select a switch IPFIX collector that you configured in the previous
step.

• Priority: Enter a priority value. This parameter is used to resolve conflicts when multiple
profiles apply. The IPFIX exporter uses the profile with the highest priority only. A lower
value means a higher priority.

• Applied To: Apply the IPFIX switch profile to one or more objects.
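Two of these settings lend themselves to quick arithmetic: the sample probability determines roughly how many packets reach the collector, and the priority value selects a single profile when several apply. The sketch below illustrates both rules; the profile names are invented for the example.

```python
def expected_samples(total_packets, probability_pct):
    """Approximate number of packets exported at a given sample probability."""
    return total_packets * probability_pct / 100.0

def select_profile(profiles):
    """The IPFIX exporter uses only the profile with the highest priority;
    a lower numeric value means a higher priority."""
    return min(profiles, key=lambda p: p['priority'])

# At the default 0.1% probability, 1,000,000 packets yield about 1,000 samples.
print(expected_samples(1_000_000, 0.1))  # 1000.0

# With two applicable profiles, the lower priority value wins.
profiles = [{'name': 'default-profile', 'priority': 10},
            {'name': 'web-tier-profile', 'priority': 2}]
print(select_profile(profiles)['name'])  # web-tier-profile
```

The arithmetic shows why raising the probability is costly: moving from 0.1% to 1% multiplies the collector load tenfold.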

10-38 Configuring IPFIX Collectors

You must install one or more IPFIX collectors. The installed collectors must have full network
connectivity to other devices in the NSX-T Data Center.
You should also verify that any relevant firewalls, including the ESXi firewall, allow traffic on the
IPFIX collector ports.

10-39 Traceflow Overview (1)

Traceflow enables users to test layer 2 and layer 3 connectivity between two objects, typically two
VM ports, by tracing all actions that take place in the data path during network communication.
These actions include switching, routing, firewall, NAT, and so on.

10-40 Traceflow Overview (2)

10-41 Traceflow Configuration Settings

You inject a packet and observe where that packet is seen as it passes through the physical and
logical networks.
The trace packet travels the logical switch overlay, but it is not visible to interfaces attached to the
logical switch. In other words, no packet is actually delivered to the test packet’s intended
recipients.

10-42 Traceflow Operations

Information about the connections, components, and layers is displayed in the UI.
If you select unicast and logical switch as a destination, the output includes a graphical map of the
topology.
A table lists information under the following categories:

• Observation Type: Delivered, Dropped, Received, or Forwarded

• Transport Node

• Component

You can filter the displayed observations with the options ALL, DELIVERED, and DROPPED.
If dropped observations occur, the DROPPED filter is applied by default. Otherwise, the ALL
filter is applied. The graphical map shows the back plane and router links.
Unicast traceflow traffic observations are layered similarly to the port connection tool. For
multicast and broadcast traceflows, observations are reported in tabular format.
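The default-filter behavior described above (DROPPED preselected when any drop is observed, otherwise ALL) can be sketched as a small helper. The observation records here are simplified stand-ins for the UI output, with the observation types taken from the slide.

```python
def default_filter(observations):
    """Return the filter the UI preselects: DROPPED if any observation
    was dropped, otherwise ALL."""
    types = {o['type'] for o in observations}
    return 'DROPPED' if 'Dropped' in types else 'ALL'

def apply_filter(observations, chosen):
    """Apply one of the ALL, DELIVERED, or DROPPED filters."""
    if chosen == 'ALL':
        return observations
    return [o for o in observations if o['type'].upper() == chosen]

# Simplified observation records (types: Delivered, Dropped, Received, Forwarded).
obs = [{'type': 'Received', 'transport_node': 'sa-esxi-01', 'component': 'N-VDS'},
       {'type': 'Forwarded', 'transport_node': 'sa-esxi-01', 'component': 'T1-DR'},
       {'type': 'Dropped', 'transport_node': 'sa-esxi-02', 'component': 'DFW'}]

print(default_filter(obs))                           # DROPPED
print(len(apply_filter(obs, default_filter(obs))))   # 1
```

Because the trace contains a dropped observation, the UI would surface only the drop by default, pointing straight at the component responsible.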

10-43 Using Traceflow for Troubleshooting

The slide shows an example of troubleshooting a connectivity problem between VM T1-Web-01
and VM T1-Web-02. Using traceflow, you can see that the traceflow packet from VM T1-Web-01
to VM T1-Web-02 is dropped due to the configured firewall rule.

10-44 About the Port Connection Tool

NSX-T Data Center includes a port connection tool to help you visualize the connectivity status
between two virtual machine interfaces. This tool takes both port IDs as an input and provides
information about the data path connectivity, including GENEVE tunneling.

10-45 Viewing the Graphical Output of the Port Connection
Tool

10-46 Packet Capture

10-47 Lab: Using Traceflow to Inspect the Path of a Packet

10-48 Review of Learner Objectives

10-49 Troubleshooting Basic NSX-T Data Center Problems

10-50 Learner Objectives

10-51 Common NSX Manager Installation Problems

10-52 Using Logs to Troubleshoot NSX Manager Installation
Problems

10-53 Using CLI Commands to Troubleshoot NSX Manager
Installation Problems

10-54 Viewing the NSX Manager Node Configuration

10-55 Verifying Services and States Running on NSX
Manager Nodes

10-56 Verifying NSX Management Cluster Status


10-57 Verifying Communication from Hosts to the NSX
Management Cluster

10-58 Troubleshooting Logical Switching Problems

10-59 Verifying the N-VDS Configuration

10-60 Verifying Overlay Tunnel Reachability (1)

10-61 Verifying Overlay Tunnel Reachability (2)

10-62 Troubleshooting Logical Routing Problems

10-63 Retrieving Gateway Information

10-64 Viewing the Routing Table

At the service router command prompt, you can also run the get interfaces command to
retrieve the Tier-0 or Tier-1 router’s interface information.

10-65 Viewing the Forwarding Table of the Tier-1 Gateway

10-66 Verifying BGP Neighbor Status

10-67 Viewing the BGP Route Table

10-68 Troubleshooting Firewall Problems

You can verify the firewall configuration from the NSX Manager simplified UI:

• Security > East West Security > Distributed Firewall

• Security > North South Security > Gateway Firewall

10-69 Verifying Firewall Configuration and Status (1)

10-70 Verifying Firewall Configuration and Status (2)

10-71 Verifying the Firewall Configuration from the KVM Host

You can verify the firewall rules from the KVM host by running the following commands:

• ovs-appctl -t /var/run/openvswitch/nsxa-ctl dfw/vif

• ovs-appctl -t /var/run/openvswitch/nsxa-ctl dfw/rules <vif ID>

10-72 Verifying the Firewall Configuration from the ESXi Host

10-73 Verifying the Firewall Configuration from the NSX Edge
Node

10-74 Review of Learner Objectives

10-75 Key Points

Lab Manual
VMware NSX-T Data Center
CONTENTS

Lab 1 Labs Introduction ..................................................................... 1


Lab 2 Reviewing the Configuration of the Predeployed NSX
Manager Instance ............................................................................... 3
Task 1: Access Your Lab Environment ........................................................................ 4
Task 2: Prepare for the Lab .......................................................................................... 4
Task 3: Verify the vCenter Server System and the ESXi Hosts Licensing................... 7
Task 4: Verify the NSX Manager Configuration and Licensing .................................... 9
Task 5: Review the NSX Management Cluster Information from the NSX CLI.......... 10
Task 6: Set the Management Cluster Virtual IP Address and Verify Its Operation .... 12
Task 7: Register the vCenter Server System to NSX Manager ................................. 13
Lab 3 Deploying a 3-Node NSX Management Cluster ................... 17
Lab 4 Preparing the NSX Infrastructure.......................................... 19
Task 1: Prepare for the Lab ........................................................................................ 20
Task 2: Create Transport Zones................................................................................. 21
Task 3: Create IP Pools .............................................................................................. 23
Task 4: Prepare the ESXi Hosts ................................................................................. 25
Task 5: Prepare the KVM Hosts ................................................................................. 30
Lab 5 Configuring Segments ........................................................... 33
Task 1: Prepare for the Lab ........................................................................................ 34
Task 2: Create Segments ........................................................................................... 35
Task 3: Attach VMs to Segments ............................................................................... 38
Task 4: Test Layer 2 Connectivity and Verify the Segments Configuration ............... 42
Lab 6 Deploying and Configuring NSX Edge Nodes ..................... 47
Task 1: Prepare for the Lab ........................................................................................ 48
Task 2: Deploy Two Edge Nodes from the NSX Manager Simplified UI.................... 49
Task 3: Enable SSH on the Edge Nodes ................................................................... 56
Task 4: Configure an Edge Cluster ............................................................................ 57
Lab 7 Configuring the Tier-1 Gateway ............................................ 59
Task 1: Prepare for the Lab ........................................................................................ 60
Task 2: Create a Tier-1 Gateway ............................................................................... 61
Task 3: Create Gateway Ports on Segments ............................................................. 62
Task 4: Test East-West L3 Connectivity..................................................................... 64

Lab 8 Configuring the Tier-0 Gateway ............................................ 65
Task 1: Prepare for the Lab ........................................................................................ 66
Task 2: Create Uplink Segments ................................................................................ 67
Task 3: Create a Tier-0 Gateway ............................................................................... 68
Task 4: Connect the Tier-0 and Tier-1 Gateways ...................................................... 72
Task 5: Test the End-to-End Connectivity .................................................................. 74
Lab 9 Verifying Equal-Cost Multipathing Configurations ............. 75
Task 1: Prepare for the Lab ........................................................................................ 75
Task 2: Verify the BGP Configuration......................................................................... 76
Task 3: Verify That Equal-Cost Multipathing Is Enabled ............................................ 78
Task 4: Verify the Result of the ECMP Configuration ................................................ 78
Lab 10 Configuring Network Address Translation ........................ 83
Task 1: Prepare for the Lab ........................................................................................ 84
Task 2: Create a Tier-1 Gateway for Network Address Translation........................... 85
Task 3: Create a Segment .......................................................................................... 86
Task 4: Attach a VM to the NAT-LS Segment ............................................................ 87
Task 5: Configure NAT ............................................................................................... 88
Task 6: Configure Route Advertisement and Route Redistribution ............................ 90
Task 7: Verify the IP Connectivity............................................................................... 93
Lab 11 Configuring the DHCP Server on the NSX Edge Node ...... 95
Task 1: Prepare for the Lab ........................................................................................ 96
Task 2: Configure a DHCP Server ............................................................................. 97
Task 3: Verify the DHCP Server Operation .............................................................. 100
Task 4: Prepare for the Next Lab ............................................................................. 105
Lab 12 Configuring Load Balancing ............................................. 107
Task 1: Prepare for the Lab ...................................................................................... 108
Task 2: Test the Connectivity to Web Servers ......................................................... 109
Task 3: Create a Tier-1 Gateway Named T1-LR-LB and Connect it to T0-LR-01 ... 110
Task 4: Create a Load Balancer ............................................................................... 111
Task 5: Configure Route Advertisement and Route Redistribution for the Virtual IP ...... 115
Task 6: Use the CLI to Verify the Load Balancer Configuration ............................... 120
Task 7: Verify the Operation of the Backup Server .................................................. 122
Task 8: Prepare for the Next Lab ............................................................................. 123
Lab 13 Deploying Virtual Private Networks .................................. 125
Task 1: Prepare for the Lab ...................................................................................... 126
Task 2: Deploy Two New NSX Edge Nodes to Support the VPN Deployment ........ 127
Task 3: Enable SSH on the Edge Nodes ................................................................. 131
Task 4: Configure a New Edge Cluster .................................................................... 132

Task 5: Deploy and Configure a New Tier-0 Gateway and Segments for
VPN Support ................................................................................................ 133
Task 6: Create an IPSec VPN Service ..................................................................... 136
Task 7: Create an L2 VPN Server and Session ....................................................... 137
Task 8: Deploy the L2 VPN Client ............................................................................ 139
Task 9: Verify the Operation of the VPN Setup ........................................................ 142
Lab 14 Configuring the NSX Distributed Firewall ........................ 147
Task 1: Prepare for the Lab ...................................................................................... 148
Task 2: Test the IP Connectivity ............................................................................... 149
Task 3: Create IP Set Objects .................................................................................. 151
Task 4: Create Firewall Rules .................................................................................. 154
Task 5: Create an Intratier Firewall Rule to Allow SSH Traffic ................................. 157
Task 6: Create an Intratier Firewall Rule to Allow MySQL Traffic ............................ 158
Task 7: Prepare for the Next Lab ............................................................................. 160
Lab 15 Configuring the NSX Gateway Firewall ............................ 163
Task 1: Prepare for the Lab ...................................................................................... 164
Task 2: Test SSH Connectivity ................................................................................. 165
Task 3: Configure a Gateway Firewall Rule to Block External SSH Requests ........ 166
Task 4: Test the Effect of the Configured Gateway Firewall Rule ............................ 169
Task 5: Prepare for the Next Lab ............................................................................. 170
Lab 16 Managing Users and Roles with
VMware Identity Manager............................................................... 171
Task 1: Prepare for the Lab ...................................................................................... 172
Task 2: Add an Active Directory Domain to VMware Identity Manager ................... 173
Task 3: Create the OAuth Client for NSX Manager in VMware Identity Manager ... 180
Task 4: Gather the VMware Identity Manager Appliance Fingerprint ...................... 182
Task 5: Enable VMware Identity Manager Integration with NSX Manager .............. 184
Task 6: Assign NSX Roles to Domain Users and Test Permissions ........................ 185
Task 7: Prepare for the Next Lab ............................................................................. 187
Lab 17 Configuring Syslog ............................................................ 191
Task 1: Prepare for the Lab ...................................................................................... 192
Task 2: Configure Syslog on NSX Manager and Review the Collected Logs .......... 193
Task 3: Configure Syslog on an NSX Edge Node and Review the Collected Logs . 194
Lab 18 Generating Technical Support Bundles ........................... 195
Task 1: Prepare for the Lab ...................................................................................... 196
Task 2: Generate a Technical Support Bundle for NSX Manager ........................... 197
Task 3: Download the Technical Support Bundle .................................................... 199

Lab 19 Using Traceflow to Inspect the Path of a Packet ............. 201
Task 1: Prepare for the Lab ...................................................................................... 202
Task 2: Configure a Traceflow Session .................................................................... 203
Task 3: Examine the Traceflow Output .................................................................... 204

Lab 1 Labs Introduction

Lab Environment Key Knowledge Points

The lab environment in which you work is highlighted by the Lab Environment Topology Map.
You need to know and use the following important items when you work with the NSX-T 2.4
ICM labs, because they impact lab performance:
• In these labs, you enter the environment by using MSTSC (Remote Desktop Protocol -
RDP) initially to the student desktop. The student desktop resides on the Management
Network (SA-Management) and you can start deploying the various NSX-T fabric items
from here.
• You find a vCenter Server and NSX Manager predeployed with two clusters populated
with various virtual machines.
• At various points within the labs you are directed to copy and paste information for later
use.

When you initially access the student desktop, right-click the Start button, select Run, enter
notepad, and note the following useful items:

• Password used on many occasions: VMware1!VMware1!


• User for the vSphere Web Client: administrator@vsphere.local
• Save it to your desktop and name it Lab-notes.

Lab Environment Topology Map

Refer to this topology map periodically; you will find it useful.



Lab 2 Reviewing the Configuration
of the Predeployed NSX Manager
Instance

Objective: Verify the NSX Manager appliance settings

In this lab, you perform the following tasks:


1. Access Your Lab Environment
2. Prepare for the Lab
3. Verify the vCenter Server System and the ESXi Hosts Licensing
4. Verify the NSX Manager Configuration and Licensing
5. Review the NSX Management Cluster Information from the NSX CLI
6. Set the Management Cluster Virtual IP Address and Verify Its Operation
7. Register the vCenter Server System to NSX Manager

For this lab environment, you use a single-node NSX cluster. In a production environment, a
three-node cluster must be deployed to provide redundancy and high availability.

Task 1: Access Your Lab Environment
You use Remote Desktop Connection to connect to your lab environment.
1. Use the information provided by your instructor to log in to your lab environment.
2. If the message Kiwi Syslog free version supports up to 5 message
sources. Please define them under Inputs in Setup. appears, click OK to
close the Kiwi Syslog Service Manager window.
The Kiwi Syslog application is a free Syslog collector preinstalled as a service on your
student desktop to be used in a future lab.

Task 2: Prepare for the Lab


You log in to the vSphere Web Client UI and the NSX Manager Simplified UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
Use Chrome as your primary browser, unless you are instructed to use a different
browser.

NOTE
On first opening Chrome, you might see a message indicating that the VMware
Enhanced Authentication Plugin has updated its SSL certificate. Click OK to
close the message.

b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.



2. Log in to the NSX Simplified UI.
a. Open another tab in the Chrome web browser.
b. Click the NSX-T Data Center > NSX Manager bookmark.
c. If you see the Your connection is not private message, click ADVANCED
and click the Proceed to sa-nsxmgr-01.vclass.local (unsafe) link.
d. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.
3. On login, you are prompted to accept the End User License Agreement.
a. Scroll to the bottom of the window and select the I understand and accept the terms of
the license agreement check box and click CONTINUE.



4. After accepting the EULA, you are prompted to join the VMware Customer Experience Improvement Program.
a. For the purposes of the labs, deselect the Join the VMware Customer Experience Improvement Program check box.

A Welcome to NSX-T prompt appears with a Get Started option for a guided workflow experience.



5. Click FABRIC NODES to bypass the guided tour and proceed.

Task 3: Verify the vCenter Server System and the ESXi Hosts Licensing
You verify the licenses of the vCenter Server system and ESXi hosts. Your instructor provides
the necessary licenses.
1. From the vSphere Web Client UI, point to the Home icon at the top and select
Administration.
2. In the Navigator pane, click Licenses.
3. Verify that the vCenter Server license is valid.
a. In the middle pane, click the Assets tab.
b. Click the vCenter Server Systems tab and verify the license expiration date.
4. If the license is not valid, assign a vCenter Server license key to the vCenter Server
instance by following the substeps below. Otherwise, proceed with the next step.
a. With your vCenter Server instance selected, click All Actions and select Assign
License.
b. In the Assign License Key panel, click the plus sign.
c. In the License key text box, enter or paste the vCenter Server license key provided by
the instructor and click Next.



d. Review the expiration date and license capacity.
e. Click Next.
f. Click Finish.
g. In the Assign License panel, select the license key that you added and click OK.
5. Verify that the ESXi hosts licenses are valid.
a. In the center pane, click the Assets tab and then the Hosts tab, and verify the license expiration dates.
6. If the licenses are not valid, assign a license key to each ESXi host by following the
substeps below.
a. Select the first ESXi host in the list.
b. Right-click the ESXi hosts.
c. Select the Assign License Key link.
d. In the Assign License Key panel, click the plus sign.
e. In the License key text box, enter or paste the license key provided by the instructor
and click Next.
f. Review the expiration date and license capacity.
g. Click Next.
h. Click Finish.
i. In the Assign License panel, select the license key that you added and click OK.



Task 4: Verify the NSX Manager Configuration and
Licensing
You examine the configuration and licensing information of the predeployed NSX Manager
appliance.
1. On the NSX Simplified UI Home page, click System.
2. Under Overview, view the information of the predeployed NSX Manager (172.20.10.41),
including the IP address, NSX Version, Cluster Connectivity, System Load, Repository
Status, and Disk Utilization.

Information for only one NSX Manager node appears because in this lab you are using a
single-node cluster.



3. Verify the License of NSX Manager by clicking System > Licenses.
The license should show Valid.

Task 5: Review the NSX Management Cluster Information from the NSX CLI
You review the configuration and status information of the NSX Cluster from the NSX CLI.
1. On your student desktop, open the MTPuTTY application from the system tray.

2. Double-click sa-nsxmgr-01 to open a console connection.


3. If a PuTTY Security Alert appears, click Yes to proceed.
4. Disable the command-line timeout.

set cli-timeout 0



5. View the status of the NSX cluster.
get cluster status

This command returns the status for each of the roles in the NSX Cluster including Policy,
Manager, and Controller. You can see that the cluster for each of these components is
STABLE. Note that in the lab you use a single-node NSX cluster.
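The same health information is also exposed over the NSX-T REST API at GET /api/v1/cluster/status. The sketch below only shows how such a response could be checked; the JSON shape is a simplified, hypothetical excerpt, and a live call would need authenticated HTTPS access to the manager:

```python
import json

# Hypothetical, simplified excerpt of a GET /api/v1/cluster/status response;
# a live call would use e.g. urllib.request with basic auth against
# https://sa-nsxmgr-01.vclass.local/api/v1/cluster/status.
sample = json.loads("""
{
  "mgmt_cluster_status": {"status": "STABLE"},
  "control_cluster_status": {"status": "STABLE"}
}
""")

def unstable_roles(status):
    """Return the names of any cluster roles that are not STABLE."""
    return [role for role, info in status.items()
            if info.get("status") != "STABLE"]

print(unstable_roles(sample))  # an empty list means every role is STABLE
```

A monitoring script could alert whenever this list is nonempty instead of requiring a manual `get cluster status` check.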



Task 6: Set the Management Cluster Virtual IP Address
and Verify Its Operation
1. If not already opened, open Chrome and click the NSX-T Data Center > NSX Manager
bookmark.
2. On the NSX Manager System > Overview page, next to Virtual IP: Not Set, click Edit.

3. On the Change Virtual IP page, enter 172.20.10.48 and click SAVE.


4. A message indicates that the new virtual IP for the management cluster has been assigned.
a. Click REFRESH.



5. Test the new VIP by opening a new browser tab and entering https://172.20.10.48.

If you see the Your connection is not private message, click ADVANCED and click the
Proceed to 172.20.10.48 (unsafe) link.

A new Management Cluster login page opens.
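Before assigning a cluster VIP it is worth confirming that the address sits in the same subnet as the manager nodes, because the VIP must be reachable on the management network. A minimal check with this lab's values; the /24 prefix is an assumption inferred from the lab addressing, not stated in the guide:

```python
import ipaddress

# Lab values; the /24 prefix is an assumption based on the node addressing
# (manager 172.20.10.41, default gateway 172.20.10.10).
mgmt_net = ipaddress.ip_network("172.20.10.0/24")
vip = ipaddress.ip_address("172.20.10.48")
manager = ipaddress.ip_address("172.20.10.41")

# The VIP and every manager node must share the management subnet.
assert vip in mgmt_net and manager in mgmt_net
print(f"VIP {vip} is valid for {mgmt_net}")
```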

Task 7: Register the vCenter Server System to NSX Manager
You register the vCenter Server system to NSX Manager to establish communication between
them.
1. Open a new tab on your browser and click the NSX-T Data Center > NSX Manager
bookmark.
2. From the NSX Simplified UI Home page, click System > Fabric > Compute Managers >
+ADD.



3. On the New Compute Manager page, provide the configuration details.
• Name: Enter sa-vcsa-01.vclass.local.
• Domain Name/IP Address: Enter 172.20.10.94.
• Type: vCenter (default).
• Username: Enter administrator@vsphere.local.
• Password: Enter VMware1!.
• SHA-256 Thumbprint: Leave empty.

4. Click ADD.



5. When the Thumbprint is Missing message appears, click ADD to use the server's default
thumbprint.

6. Wait until the Registration Status shows Registered and the Connection Status shows Up.
Click Refresh at the bottom of the display to update the contents.

7. Verify that the version of vCenter Server is 6.7.0.
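Compute manager registration can also be automated against the NSX-T REST API (POST /api/v1/fabric/compute-managers). The sketch below only builds the request body from the values used in this task; the field names follow the NSX-T API but should be verified against your release's API guide before use:

```python
import json

# Request body for POST https://<nsx-mgr>/api/v1/fabric/compute-managers.
# Values are this lab's; the credential shape (UsernamePasswordLoginCredential)
# follows the NSX-T API -- verify against your version's API guide.
body = {
    "display_name": "sa-vcsa-01.vclass.local",
    "server": "172.20.10.94",
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "VMware1!",
        # Omitting "thumbprint" mirrors leaving SHA-256 Thumbprint empty
        # in the UI; the API then requires the thumbprint to be supplied
        # or explicitly accepted, as the UI prompt does.
    },
}
print(json.dumps(body, indent=2))
```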



Lab 3 Deploying a 3-Node NSX
Management Cluster

Objective: Deploy a 3-Node NSX Management Cluster from the NSX Manager Simplified UI

In this lab simulation, you perform the following tasks:


1. Prepare for the Lab
2. Deploy the Second NSX Manager
3. Deploy the Third NSX Manager
4. Review the NSX Management Cluster Information from the NSX Manager Simplified UI
5. Review the NSX Management Cluster Information from the NSX CLI

Go to https://vmware.bravais.com/s/kLHVhYZCzUZPeBYx2N4G to open the simulation.

IMPORTANT
Do not refresh, navigate away from, or minimize the browser tab hosting the
simulation. These actions might pause the simulation and the simulation might not
progress.



Lab 4 Preparing the NSX
Infrastructure

Objective: Deploy transport zones, create IP pools, and prepare hosts for NSX usage

In this lab, you perform the following tasks:


1. Prepare for the Lab
2. Create Transport Zones
3. Create IP Pools
4. Prepare the ESXi Hosts
5. Prepare the KVM Hosts

Task 1: Prepare for the Lab

You log in to the NSX Simplified UI.


1. From your student desktop, open the Chrome web browser.
2. Click the NSX-T Data Center > NSX Manager bookmark.
3. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.



Task 2: Create Transport Zones
You create an overlay transport zone and a VLAN transport zone.
1. Create a global overlay transport zone.
a. On the NSX Simplified UI System page, click Fabric > Transport Zones > +ADD.
b. Provide the configuration details in the New Transport Zone window.
• Name: Enter Global-Overlay-TZ.
• N-VDS Name: Enter PROD-Overlay-NVDS.
• N-VDS Mode: Standard (default).
• Traffic Type: Overlay (default)
• Uplink Teaming Policy Name: Leave empty (default).

c. Click ADD.



A new transport zone appears.

2. Create a global VLAN-based transport zone to communicate with the nonoverlay networks that are external to NSX-T Data Center.
a. On the NSX Simplified UI System page, select Fabric > Transport Zones and click +ADD.
b. Provide the configuration details in the New Transport Zone window.
• Name: Enter Global-VLAN-TZ.
• N-VDS Name: Enter PROD-VLAN-NVDS.
• N-VDS Mode: Standard (default).
• Traffic Type: Select VLAN.
• Uplink Teaming Policy Name: Leave empty (default).
c. Click ADD.
A new transport zone appears.
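Transport zones can likewise be created through the REST API (POST /api/v1/transport-zones). As a sketch, the bodies below carry this task's values; the field names follow the NSX-T API but should be confirmed against your release's API guide:

```python
import json

# Request bodies for POST https://<nsx-mgr>/api/v1/transport-zones,
# one per zone created in this task.
zones = [
    {"display_name": "Global-Overlay-TZ",
     "host_switch_name": "PROD-Overlay-NVDS",
     "transport_type": "OVERLAY"},
    {"display_name": "Global-VLAN-TZ",
     "host_switch_name": "PROD-VLAN-NVDS",
     "transport_type": "VLAN"},
]
for zone in zones:
    print(json.dumps(zone))
```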



Task 3: Create IP Pools
You create an IP pool for assigning IP addresses to the NSX transport nodes.
1. On the NSX Simplified UI Home page, click Networking > IP Address Management >
IP Address Pools > ADD IP ADDRESS POOL.
2. Provide the configuration details in the ADD IP ADDRESS POOL window.
• Name: VTEP-IP-Pool.
• Description: IP Pool for ESXi, KVM, and Edge.
• Click Set under Subnets, Select ADD SUBNET > IP Ranges, and provide the
configuration details.
• IP Ranges: Enter 172.20.11.151-172.20.11.170 and click Add item(s).
• CIDR: Enter 172.20.11.0/24.
• Gateway IP: Enter 172.20.11.10.



a. Click ADD on the ADD SUBNETS page.
3. On the Set Subnets page, click APPLY.
4. Click SAVE.
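A quick sanity check on the pool values: the range must fall inside the CIDR and must not include the gateway. A standalone sketch using this task's numbers:

```python
import ipaddress

cidr = ipaddress.ip_network("172.20.11.0/24")
gateway = ipaddress.ip_address("172.20.11.10")
start = ipaddress.ip_address("172.20.11.151")
end = ipaddress.ip_address("172.20.11.170")

# Both ends of the range must sit inside the CIDR, and the gateway
# must never be handed out as a TEP address.
assert start in cidr and end in cidr
assert not (start <= gateway <= end)

# .151 through .170 inclusive = 20 TEP addresses for hosts and edges.
pool_size = int(end) - int(start) + 1
print(f"{pool_size} addresses available in VTEP-IP-Pool")
```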



Task 4: Prepare the ESXi Hosts
You prepare the ESXi hosts to participate in the virtual networking and security functions
offered by NSX-T Data Center.
1. On the NSX Simplified UI Home page, click System > Fabric > Nodes > Host Transport Nodes.
2. From the Managed by drop-down menu, select sa-vcsa-01.
Two clusters appear: SA-Management-Edge and SA-Compute-01.
3. Expand the SA-Compute-01 cluster view.
The NSX Configuration status of the hosts appears as Not Configured and Node Status
is Not Available.

4. Select the SA-Compute-01 check box and click CONFIGURE NSX.



5. In the Configure NSX dialog box, click Create New Transport Node Profile.
a. Provide the required details in the Add Transport Node Profile - General window.
• Select Deployment Profile: Click Create New Transport Node Profile.
• Name: Enter ESXi_TN_Profile.
• Transport Zones: In the Available (2) pane, select Global-Overlay-TZ and Global-VLAN-TZ, and click the right arrow to move them to Selected.



b. Provide the required details in the Add Transport Node Profile - N-VDS window.
• N-VDS Name: Select PROD-Overlay-NVDS.
• NIOC Profile: Select nsx-default-nioc-hostswitch-profile.
• Uplink Profile: Select nsx-default-uplink-hostswitch-profile.
• LLDP Profile: Select LLDP [Send packets disabled].
• IP Assignment: Select Use IP Pool.
• IP Pool: Select VTEP-IP-Pool.
• Physical NICs: Enter vmnic4 and select uplink-1 from the drop-down menu.



6. Scroll back to the top of the page and click +ADD N-VDS.
In the Add Transport Node Profile - N-VDS window, provide the details.
• N-VDS Name: Select PROD-VLAN-NVDS.
• NIOC Profile: Select nsx-default-nioc-hostswitch-profile.
• Uplink Profile: Select nsx-default-uplink-hostswitch-profile.
• LLDP Profile: Select LLDP (Send Packets Disabled).
• IP Assignment: [Disabled].
• Physical NICs: Enter vmnic5, and select uplink-2 from the drop-down menu.



7. Click ADD.
a. In the Configure NSX window, click SAVE.
The autoinstall process starts.
The process might take approximately 5 minutes to complete.
b. Click REFRESH at the bottom of the page.
8. When the installation completes, verify that NSX is installed on the hosts and that the Node Status for the SA-Compute-01 cluster shows Up.

You might need to click REFRESH at the bottom of the screen to refresh the page.

NOTE
When you next look at the vCenter Inventory, ESXi hosts sa-esxi-04.vclass.local
and sa-esxi-05.vclass.local show a red alarm for their loss of network redundancy.
Click Reset to Green to resolve the host alarm.



Task 5: Prepare the KVM Hosts
You prepare the kernel-based virtual machine (KVM) hosts to participate in the NSX virtual
networking and security functions.
1. Add the sa-kvm-01 KVM host to NSX.
a. From the Managed by drop-down menu, select None: Standalone Hosts.
b. Click +ADD.
c. Provide the configuration details in the Add Transport Node window.
• Name: Enter sa-kvm-01.vclass.local.
• IP Addresses: Enter 172.20.10.151.
• Operating System: Select Ubuntu KVM.
• Username: Enter vmware.
• Password: Enter VMware1!.
• SHA-256 Thumbprint: Leave empty (default)

d. Click Next.
e. When the Thumbprint is missing message appears, click ADD. When the Add
Transport Node returns, click Next.



f. On the Configure NSX window, provide the configuration details:
• Transport Zone: Select Global-Overlay-TZ.
• N-VDS Name: Select PROD-Overlay-NVDS.
• Uplink Profile: Select nsx-default-uplink-hostswitch-profile.
• LLDP Profile: Select LLDP (Send Packets Disabled).
• IP Assignment: Select Use IP Pool.
• IP Pool: Select VTEP-IP-Pool.
• Physical NICs: Enter eth1 and select uplink-1.
g. Click SAVE. The NSX installation process starts.



2. Repeat step 1 to add the sa-kvm-02 KVM host to NSX.
On the Add Transport Node window provide the configuration details:
• Name: Enter sa-kvm-02.vclass.local.
• IP Addresses: Enter 172.20.10.152.
• Operating System: Select Ubuntu KVM.
• Username: Enter vmware.
• Password: Enter VMware1!.
• SHA-256 Thumbprint: Leave empty (default).
On the Configure NSX window, provide the configuration details:
• Transport Zone: Select Global-Overlay-TZ.
• N-VDS Name: Select PROD-Overlay-NVDS.
• Uplink-Profile: Select nsx-default-uplink-hostswitch-profile.
• LLDP Profile: Select LLDP (Send Packets disabled).
• IP Assignment: Select Use IP Pool.
• IP Pool: Select VTEP-IP-Pool.
• Physical NICs: Enter eth1 and select uplink-1.
This process might take approximately 5 minutes to complete.
3. Verify that the Configuration State shows Success and the Status shows Up for the two KVM hosts.
You might need to refresh the page to update the status of the installation.



Lab 5 Configuring Segments

Objective: Create segments for VMs residing on the ESXi and KVM hosts

In this lab, you perform the following tasks:


1. Prepare for the Lab
2. Create Segments
3. Attach VMs to Segments
4. Test Layer 2 Connectivity and Verify the Segments Configuration

Task 1: Prepare for the Lab

You log in to the vSphere Web Client UI and the NSX Manager Simplified UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.

2. Log in to the NSX Simplified UI.


a. Open another tab in the Chrome web browser.
b. Click the NSX-T Data Center > NSX Manager bookmark.
c. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.



Task 2: Create Segments
You create three segments: Web-LS, App-LS, and DB-LS.
1. Create a segment named Web-LS.
a. On the NSX Simplified UI Home page, click Networking > Segments.
b. Click ADD SEGMENT and provide the configuration details.
• Segment Name: Enter Web-LS.
• Uplink & Type: Leave blank.
• Transport Zone: Select Global-Overlay-TZ.
• Leave all the other options as default.
2. Click SAVE.
a. When the message asking whether to continue configuring the segment appears, click NO.



3. Add a Segment named App-LS.
a. On the NSX Simplified UI Home page, click Networking > Segments.
b. Click ADD SEGMENT and provide the configuration details.
• Segment Name: Enter App-LS.
• Transport Zone: Select Global-Overlay-TZ (default).
• Uplink & Type: Leave blank.
• Leave all the other options as default.
c. Click SAVE.
d. When the message to continue segment configuration appears, click NO.
4. Add a Segment named DB-LS.
a. On the NSX Simplified UI Home page, click Networking > Segments.
b. Click ADD SEGMENT and provide the configuration details.
• Segment Name: Enter DB-LS.
• Transport Zone: Select Global-Overlay-TZ (default).
• Uplink & Type: None.
• Leave all the other options as default.
c. Click SAVE.
d. When the message to continue segment configuration appears, click NO.
5. Verify that the three segments are created successfully, and the Status is Up.

6. On the vSphere Web Client home page, click Networking.



7. Expand the Navigator view and verify that the three newly created segments are listed
under SA-Datacenter.
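Because the Simplified UI drives the declarative Policy API, the same segments can be expressed as PATCH calls to /policy/api/v1/infra/segments/&lt;name&gt;. A sketch of the request bodies follows; the transport-zone path format is the Policy API convention, and the UUID shown is hypothetical:

```python
import json

# Hypothetical transport zone UUID; on a live system it comes from
# GET /api/v1/transport-zones (the Global-Overlay-TZ entry).
TZ_PATH = ("/infra/sites/default/enforcement-points/default/"
           "transport-zones/11111111-2222-3333-4444-555555555555")

# One PATCH body per segment created in this task.
segments = {name: {"display_name": name, "transport_zone_path": TZ_PATH}
            for name in ("Web-LS", "App-LS", "DB-LS")}

for name, body in segments.items():
    print(f"PATCH /policy/api/v1/infra/segments/{name}")
    print(json.dumps(body))
```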



Task 3: Attach VMs to Segments
You attach VMs running on the ESXi hosts and KVM hosts to their corresponding segments.
1. In the navigator pane of vSphere Web Client, click the Hosts and Clusters tab and expand
the view of SA-Datacenter > SA-Compute-01.
2. Add T1-Web-01 to the Web-LS segment.
a. Right-click T1-Web-01 and select Edit Settings.
b. From the Network adapter 1 drop-down menu, select Web-LS (nsx.LogicalSwitch).
c. Verify that the Connected check box is selected.
d. Click OK.

3. Add T1-Web-02 to the Web-LS segment.


a. Right-click T1-Web-02 and select Edit Settings.
b. From the Network adapter 1 drop-down menu, select Web-LS (nsx.LogicalSwitch).
c. Verify that the Connected check box is selected.
d. Click OK.



4. Add T1-App-01 to the App-LS segment.
a. Right-click T1-App-01 and select Edit Settings.
b. From the Network adapter 1 drop-down menu, select App-LS (nsx.LogicalSwitch).
c. Verify that the Connected check box is selected.
d. Click OK.
5. Verify the status of the logical ports.
a. On the NSX Simplified UI Home page, click Advanced Networking & Security >
Switching > Ports.
b. Verify that the two logical ports for the Web-LS and one logical port for the App-LS
VMs are listed with Admin Status as Up and Operational Status as Up.

6. Power on T1-DB-01 on host sa-kvm-01.


a. Open MTPuTTY and double-click the SA-KVM-01 connection.
b. Switch the user to root.
sudo -s

c. Check the status of the VMs running on the SA-KVM-01 host.


virsh list --all

Your T1-DB-01 VM is in the shut down state.



d. Power on the VM.
virsh start T1-DB-01

e. Verify that T1-DB-01 is powered on.


virsh list --all

7. Power on T1-Web-03 on host sa-kvm-02.


a. Open MTPuTTY and double-click the SA-KVM-02 connection.
b. Switch the user to root.
sudo -s

c. Check the status of the VMs running on the SA-KVM-02 host.


virsh list --all

d. Power on the VM.


virsh start T1-Web-03

8. Attach T1-DB-01 to the DB-LS Segment.


a. At the SA-KVM-01 command prompt, view the UUID (shown as interfaceid)
associated with T1-DB-01.
virsh dumpxml T1-DB-01 | grep interfaceid

b. Copy and paste the UUID to a notepad so it can be used in a future step.
In this example, the UUID associated with T1-DB-01 is 57601300-2e82-48c4-8c27-1e961ac70e81.

c. On the NSX Simplified UI Home page, click Networking > Segments and click the
three vertical ellipses icon next to DB-LS and select Edit.
d. Click Ports, then click Set, and then click ADD SEGMENT PORT.
The Set Segment Ports window appears.



e. Provide the details in the Set Segment Ports window.
• Name: Enter DB01-LS-Port.
• Type: Select Independent.
• ID: Paste the UUID (the value between the single quotes) that you recorded in the notepad.
f. Click SAVE.
g. Click CLOSE.
h. Click CLOSE EDITING.
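The interfaceid lookup performed with virsh dumpxml can be scripted instead of copied by hand. The XML fragment below is a hypothetical, trimmed example of what virsh dumpxml emits for a VM interface; the parsing logic is the point:

```python
import re

# Trimmed, hypothetical excerpt of `virsh dumpxml T1-DB-01` output.
dumpxml = """
<interface type='bridge'>
  <virtualport type='openvswitch'>
    <parameters interfaceid='57601300-2e82-48c4-8c27-1e961ac70e81'/>
  </virtualport>
</interface>
"""

def interface_ids(xml_text):
    """Extract every interfaceid value, i.e. the UUID pasted into the
    segment port ID field in the NSX UI."""
    return re.findall(r"interfaceid='([0-9a-f-]+)'", xml_text)

print(interface_ids(dumpxml))  # ['57601300-2e82-48c4-8c27-1e961ac70e81']
```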
9. Attach T1-Web-03 to the Web-LS Segment.
a. At the SA-KVM-02 command prompt, obtain the UUID associated with T1-Web-03
and record it in a notepad.
virsh dumpxml T1-Web-03 | grep interfaceid

The UUID associated with T1-Web-03 is 57601300-2e82-48c4-8c27-1e961ac70e79.


10. Create a logical port.
a. On the NSX Simplified UI Home page, click Segments and click the three vertical
ellipses icon next to Web-LS and select Edit.
b. Click the expand > icon next to Ports and click the number 2.
The Set Segment Ports window appears.
c. In the Add Segment Port window, enter the details.
• Name: Enter Web03-LS-Port.
• Type: Select Independent.
• ID: Paste the UUID (the value between the single quotes) that you recorded in the notepad.
d. Click SAVE.
e. Click CLOSE .
f. Click CLOSE EDITING.



11. Navigate to Advanced Networking & Security > Switching > Ports and verify that the two logical ports DB01-LS-Port and Web03-LS-Port are created with Admin Status and Operational Status as Up.
You might need to refresh the page.

Task 4: Test Layer 2 Connectivity and Verify the Segments Configuration
You verify the information about segments from the control plane, data plane, and management
plane.
1. Open a console connection to T1-Web-01.
a. From the vSphere Web Client Home page, click Hosts and Clusters.
b. In the Navigator pane, right-click T1-Web-01 and select Open Console.
c. When the remote console window opens, click in the window and press Enter to activate the screen.
d. Enter root as the user name and VMware1! as the password.
2. Ping the T1-Web-02 (172.16.10.12) VM which resides on an ESXi host.
ping -c 3 172.16.10.12
Your ping should be successful.



3. Ping the T1-Web-03 (172.16.10.13) VM which resides on a KVM host.
ping -c 3 172.16.10.13
Your ping should be successful.

NOTE
You can press Ctrl+Alt to escape from the console window.

4. Retrieve the VNI and UUID information for each segment.


a. From MTPuTTY, connect to sa-nsxmgr-01.
b. Retrieve information for the segments.
get logical-switches

c. Record the VNI and UUID values for Web-LS in a notepad.

The VNIs and UUIDs in your lab environment might be different from the screenshot.
5. Retrieve the Tunnel Endpoint (TEP) information for the Web-LS Segment.
get logical-switch Web-LS_VNI_number vtep

The above sample output shows the TEPs connected to the VNI 73728 (Web-LS) control
plane.



6. Retrieve the MAC table information for Web-LS.
get logical-switch Web-LS_VNI_number mac

7. Retrieve the ARP table information for Web-LS.


get logical-switch Web-LS_VNI_number arp

If your Address Resolution Protocol (ARP) table is empty, initiate a ping between the Web-Tier VMs.
8. Retrieve information about the established host connections on Web-LS.
get logical-switch Web-LS_UUID ports

9. From MTPuTTY, connect to the sa-esxi-04 host.


10. Go to the nsxcli mode.
nsxcli



11. Retrieve the segment information from the sa-esxi-04 host.
get logical-switches



Lab 6 Deploying and Configuring
NSX Edge Nodes

Objective: Deploy NSX Edge nodes and configure them as transport nodes

In this lab, you perform the following tasks:


1. Prepare for the Lab
2. Deploy Two Edge Nodes from the NSX Simplified UI
3. Enable SSH on the Edge Nodes
4. Configure an Edge Cluster

Task 1: Prepare for the Lab

You log in to the vSphere Web Client UI and the NSX Manager Simplified UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.

2. Log in to the NSX Simplified UI.


a. Open another tab in the Chrome web browser.
b. Click the NSX-T Data Center > NSX Manager bookmark.
c. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.



Task 2: Deploy Two Edge Nodes from the NSX Manager
Simplified UI
You deploy NSX Edge nodes on ESXi hosts to perform routing and other Layer 3 networking
functionality.
1. On the NSX Simplified UI Home page, click System > Fabric > Nodes > Edge
Transport Nodes.
2. Click +ADD EDGE VM.
3. Provide the configuration details in the Add Edge VM window.
• Name: Enter sa-nsxedge-01.
• Host name/FQDN: Enter sa-nsxedge-01.vclass.local.
• Form Factor: Medium (default).

4. Click NEXT.



5. On the Credentials page, enter VMware1!VMware1! as the CLI password and the system
root password.

6. Click NEXT.



7. On the Configure Deployment page, provide the configuration details.
• Compute Manager: Select sa-vcsa-01.vclass.local (begin by typing sa and the full
name should appear).
• Cluster: Select SA-Management-Edge from the drop-down menu.
• Resource Pool: Leave empty.
• Host: Leave empty.
• Datastore: Select SA-Shared-02-Remote from the drop-down menu.

8. Click NEXT.



9. On the Configure Ports page, provide the configuration details.
• IP Assignment: Select Static.
• Management IP: Enter 172.20.10.61/24.
• Default Gateway: Enter 172.20.10.10.
• Management Interface: Select pg-SA-Management from the drop-down menu.



10. On the Configure NSX page, provide the configuration details.
• Transport Zone: Select Global-Overlay-TZ and Global-VLAN-TZ.
• Edge Switch Name: Select PROD-Overlay-NVDS.
• Uplink Profile: Select nsx-edge-single-nic-uplink-profile from the drop-down menu.
• IP Assignment: Select Use IP Pool from the drop-down menu.
• IP Pool: Select VTEP-IP-Pool from the drop-down menu.
• DPDK Fastpath Interfaces: Select uplink-1 and select pg-SA-Edge-Overlay from the
drop-down menu.

11. On the Configure NSX page, click +ADD N-VDS.


Provide the configuration details.
• Edge Switch Name: Select Prod-VLAN-NVDS from the drop-down menu.
• Uplink Profile: Select nsx-edge-single-nic-uplink-profile from the drop-down menu.
• IP Assignment: [Disabled].
• DPDK Fastpath Interfaces: Select uplink-1 and select pg-SA-Edge-Uplinks from the
drop-down menu.



12. Click FINISH.

NOTE
The Edge deployment might take several minutes to complete. The deployment
status displays various values, for example, Node Not Ready, which is only
temporary.

NOTE
Wait until the Configuration Status displays Success and the Status is Up. You
might need to click REFRESH occasionally.

13. On the NSX Simplified UI Home page, click System > Fabric > Nodes > Edge Transport Nodes and click +ADD EDGE VM.
Provide the configuration details to deploy the second edge node.
a. On the Name and Description window, enter the following details.
• Name: Enter sa-nsxedge-02.
• Host name/FQDN: Enter sa-nsxedge-02.vclass.local.
• Form Factor: Medium (default).
b. On the Credentials window, enter the following details.
• Enter VMware1!VMware1! as the CLI password and the system root password.
c. On the Configure Deployment window, enter the following details.
• Compute Manager: Select sa-vcsa-01.vclass.local (begin by typing sa and the
full name should appear).
• Cluster: Select SA-Management-Edge from the drop-down menu.
• Resource Pool: Leave empty.
• Host: Leave empty.
• Datastore: Select SA-Shared-02-Remote from the drop-down menu.
d. On the Configure Ports window, enter the following details.
• IP Assignment: Click Static.



• Management IP: Enter 172.20.10.62/24.
• Default Gateway: Enter 172.20.10.10.
• Management Interface: Select pg-SA-Management from the drop-down menu.
e. On the Configure NSX window, enter the following details.
• Transport Zone: Select Global-Overlay-TZ and Global-VLAN-TZ.
• Edge Switch Name: Select PROD-Overlay-NVDS.
• Uplink Profile: Select nsx-edge-single-nic-uplink-profile from the drop-down
menu.
• IP Assignment: Select Use IP Pool from the drop-down menu.
• IP Pool: Select VTEP-IP-Pool from the drop-down menu.
• DPDK Fastpath Interfaces: uplink-1 is populated. Select pg-SA-Edge-Overlay from the drop-down menu to connect it.
f. On the Add N-VDS window, enter the following details.
• Click ADD N-VDS.
• Edge Switch Name: Select Prod-VLAN-NVDS from the drop-down menu.
• Uplink Profile: Select nsx-edge-single-nic-uplink-profile from the drop-down
menu.
• IP Assignment: [Disabled].
• DPDK Fastpath Interfaces: uplink-1 is populated. Select pg-SA-Edge-Uplinks from the drop-down menu to connect it.
g. Click FINISH.

NOTE
The Edge deployment might take several minutes to complete. The deployment
status displays various values, for example, Node Not Ready, which is only
temporary.

NOTE
Wait until the Configuration Status displays Success and the Status is Up. You
might need to click REFRESH occasionally.



14. Verify that the two edge nodes are deployed and listed on the Edge VM list.

Configuration Status shows Success and Node Status is UP.

Task 3: Enable SSH on the Edge Nodes


You enable the SSH service on each edge node that you created.
1. From the vSphere Web Client Home page, click Hosts and Clusters.
2. In the navigator pane, right-click sa-nsxedge-01 and select Open Console.
3. Enter admin as the user name and VMware1!VMware1! as the password.
4. Verify that the SSH service is stopped.
get service ssh

5. Start the SSH service.


start service ssh

6. Set the SSH service to autostart when the VM is powered on.


set service ssh start-on-boot

7. Verify that the SSH service is running and Start on boot is set to True.
get service ssh



8. Configure SSH on sa-nsxedge-02.
a. From the vSphere Web Client Home page, click Hosts and Clusters.
b. In the navigator pane, right-click sa-nsxedge-02 and select Open Console.
c. Enter admin as the user name and VMware1!VMware1! as the password.
d. Verify that the SSH service is stopped.
get service ssh

e. Start the SSH service.


start service ssh

f. Set the SSH service to autostart when the VM is powered on.


set service ssh start-on-boot

g. Verify that the SSH service is running and Start on boot is set to True.
get service ssh

Task 4: Configure an Edge Cluster


You create an edge cluster and add the two edge nodes to the cluster.
1. On the NSX Simplified UI Home page, click System > Fabric > Nodes > Edge Clusters.
2. Click +ADD.
3. Provide the configuration details in the Add Edge Cluster window.
• Name: Enter Edge-Cluster-01.
• Edge Cluster Profile: Select nsx-default-edge-high-availability-profile (default).
• Member Type: Edge Node (default).
4. In the Available (2) pane, select both sa-nsxedge-01 and sa-nsxedge-02 and click the right
arrow to move them to the Selected (0) pane.
5. Click ADD.
6. Click REFRESH after the edge cluster is created.
7. Verify that Edge-Cluster-01 appears in the Edge Cluster list.



8. Click 2 in the Transport Nodes column and verify that sa-nsxedge-01 and sa-nsxedge-02
appear in the list.



Lab 7 Configuring the Tier-1
Gateway

Objective: Create a Tier-1 gateway and configure gateway ports

In this lab, you perform the following tasks:


1. Prepare for the Lab
2. Create a Tier-1 Gateway
3. Create Gateway Ports on Segments
4. Test East-West L3 Connectivity

Task 1: Prepare for the Lab

You log in to the vSphere Web Client UI and the NSX Manager Simplified UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.

2. Log in to the NSX Simplified UI.


a. Open another tab in the Chrome web browser.
b. Click the NSX-T Data Center > NSX Manager bookmark.
c. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.



Task 2: Create a Tier-1 Gateway
You create a Tier-1 gateway to provide east-west connectivity.
1. On the NSX Simplified UI Home page, click Networking > Tier-1 Gateways.
2. Click ADD Tier-1 GATEWAY.
3. Provide the configuration details in the ADD TIER-1 GATEWAY window.
• Name: Enter T1-LR-01.
• Linked Tier-0 Gateway: Leave empty because the Tier-0 gateway is not yet created.
• Failover: Leave the default (Non Preemptive).
• Edge Cluster: Leave empty.

4. Click SAVE.
A message appears asking whether you want to continue editing the Tier-1 gateway. Click YES.

5. Scroll to the lower portion of the T1-LR-01 gateway configuration, expand Route Advertisement
by clicking the > icon next to it, and select the following options.
• Select All Static Routes.
• Select All Connected Segments & Service Ports.

6. Click SAVE followed by CLOSE EDITING.
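The same Tier-1 gateway and route advertisement settings can also be pushed through the NSX-T Policy REST API instead of the Simplified UI. The sketch below only builds and prints the JSON body that a PATCH to /policy/api/v1/infra/tier-1s/T1-LR-01 would carry; the field names and enum values follow the NSX-T 2.4 Policy API, but verify them against the API guide before scripting against a live NSX Manager.

```python
import json

# Sketch only: JSON body for creating or patching the Tier-1 gateway
# via the NSX-T Policy API (PATCH /policy/api/v1/infra/tier-1s/T1-LR-01).
# Treat the exact field names as assumptions to be checked in the API guide.
tier1_body = {
    "display_name": "T1-LR-01",
    "failover_mode": "NON_PREEMPTIVE",  # lab default: nonpreemptive failover
    "route_advertisement_types": [
        "TIER1_STATIC_ROUTES",          # All Static Routes
        "TIER1_CONNECTED",              # All Connected Segments & Service Ports
    ],
}

payload = json.dumps(tier1_body, indent=2)
print(payload)
```

In a real script, this payload would be sent with an HTTPS PATCH request authenticated as the admin user; here it is only serialized so the structure can be inspected.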

Task 3: Create Gateway Ports on Segments


You create gateway ports to associate the gateway with segments.
1. On the NSX Simplified UI Home page, click Networking > Segments.
2. Click the vertical ellipsis icon next to App-LS and select Edit.
a. Select T1-LR-01 from the Uplink & Type drop-down menu.
b. Click Set Subnets > ADD SUBNET.
c. Enter 172.16.20.1/24 in the Gateway field for App-LS on the Set Subnets page.
d. Click ADD followed by APPLY and SAVE.
e. Click CLOSE EDITING.

3. Use the following configuration details to add ports for DB-LS.
a. Click the vertical ellipsis icon next to DB-LS and select Edit.
b. Select T1-LR-01 from the Uplink & Type drop-down menu.
c. Click Set Subnets > ADD SUBNET.
d. Enter 172.16.30.1/24 in the Gateway field for DB-LS on the Set Subnets page.
e. Click ADD followed by APPLY and SAVE.
f. Click CLOSE EDITING.
4. Use the following configuration details to add ports for Web-LS.
a. Click the vertical ellipsis icon next to Web-LS and select Edit.
b. Select T1-LR-01 from the Uplink & Type drop-down menu.
c. Click Set Subnets > ADD SUBNET.
d. Enter 172.16.10.1/24 in the Gateway field for Web-LS on the Set Subnets page.
e. Click ADD followed by APPLY and SAVE.
f. Click CLOSE EDITING.

Task 4: Test East-West L3 Connectivity
You verify east-west connectivity among the tenant networks.
1. From vSphere Web Client, open a console to T1-Web-01 and enter root as the user name
and VMware1! as the password.
2. From T1-Web-01, verify that you can reach the tenants in the App-Tier and DB-Tier
networks.
ping -c 3 172.16.20.11 (T1-App-01)
ping -c 3 172.16.30.11 (T1-DB-01)
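The two console pings can be scripted as a small sweep. This is a hypothetical helper, not part of the lab kit: the VM names and addresses come from the lab topology, and the probes are only meaningful when run from a tenant VM console such as T1-Web-01.

```python
# Hypothetical east-west connectivity sweep (illustration only).
# Targets reachable through T1-LR-01, from the lab topology.
targets = {
    "T1-App-01": "172.16.20.11",
    "T1-DB-01": "172.16.30.11",
}

def ping_cmd(ip, count=3):
    # Mirrors the console command used above: ping -c 3 <ip>
    return ["ping", "-c", str(count), ip]

for name, ip in targets.items():
    # On a tenant VM, pass this list to subprocess.run() to send the probes.
    print(name, " ".join(ping_cmd(ip)))
```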

Lab 8 Configuring the Tier-0 Gateway

Objective: Create a Tier-0 gateway and configure north-south end-to-end connectivity


In this lab, you perform the following tasks:


1. Prepare for the Lab
2. Create Uplink Segments
3. Create a Tier-0 Gateway
4. Connect the Tier-0 and Tier-1 Gateways
5. Test the End-to-End Connectivity

Task 1: Prepare for the Lab

You log in to the NSX Manager Simplified UI.


1. From your student desktop, open the Chrome web browser.
2. Click the NSX-T Data Center > NSX Manager bookmark.
3. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.

66 Lab 8 Configuring the Tier-0 Gateway


Task 2: Create Uplink Segments
You create segments for the two uplinks used by the Tier-0 gateway to connect to the upstream
gateway.
1. On the NSX Simplified UI Home page, click Networking > Segments > ADD
SEGMENT.
2. Provide the configuration details in the window.
On the General tab:
• Name: Enter Uplink-1.
• Uplink & Type: None.
• Transport Zone: Select Global-VLAN-TZ.
• VLAN: Enter 0 and click Add Item(s).

3. Click SAVE.
a. When the message appears asking whether you want to continue configuring the
segment, click NO.
4. Repeat steps 1-3 to create another logical segment named Uplink-2 for the second uplink.
• Name: Enter Uplink-2.
• Uplink & Type: None.
• Transport Zone: Select Global-VLAN-TZ.
• VLAN: Enter 0 and click Add Item(s).
a. Click SAVE.
b. When the message appears asking whether you want to continue configuring the
segment, click NO.

5. Verify that the two uplink segments appear in the Segments list.

Task 3: Create a Tier-0 Gateway


You create a Tier-0 gateway.
1. On the NSX Simplified UI Home page, click Networking > Tier-0 Gateways.
2. Click ADD TIER-0 GATEWAY.
a. Provide the configuration details in the ADD TIER-0 GATEWAY window.
• Name: Enter T0-LR-01.
• HA Mode: Select Active-Active from the drop-down menu.
• Edge Cluster: Select Edge-Cluster-01.

b. Click SAVE.
3. When the message appears asking whether you want to continue editing this Tier-0 gateway,
click YES.

4. Select ROUTE RE-DISTRIBUTION and click SET.
a. Provide the configuration details in the Set Route Redistribution page.
Tier-0 Subnets:
• Select Static Routes.
• Select Connected Interfaces & Segments and all the suboptions.
Advertise Tier-1 Subnets:
• Select Connected Subnets.
• Select Static Routes.

b. Click APPLY followed by SAVE.


5. Click the expand > icon next to Interfaces and click Set.

6. In the Set Interfaces page, click ADD INTERFACE.
a. Provide the configuration information for the interfaces.
• Name: Enter Uplink-1-Intf.
• Type: External (default).
• IP Address / Mask: Enter 192.168.100.2/24 and click Add Item(s).
• Connected To(Segments): Select Uplink-1.
• Edge Node: Select sa-nsxedge-01.
b. Click SAVE.
7. In the Set Interfaces page, click ADD INTERFACE.
a. Enter the configuration information for the interfaces.
• Name: Enter Uplink-2-Intf.
• IP Address / Mask: Enter 192.168.110.2/24 and click Add Item(s).
• Connected To(Segments): Uplink-2.
• Edge Node: Select sa-nsxedge-02.
b. Click SAVE followed by CLOSE.
8. Click the expand > icon next to BGP and provide the configuration details.
• Local AS: 100
• BGP: On
• Inter SR iBGP: OFF
• ECMP: On
• Multipath Relax: On
Leave all the other options as default.
a. Click SAVE.
b. Click Set next to BGP Neighbors.
Click ADD BGP NEIGHBOR and enter the configuration information.
• IP Address : 192.168.100.1
• Remote AS: 200
c. Click SAVE.

d. Click ADD BGP NEIGHBOR and enter the configuration information.
• IP Address : 192.168.110.1
• Remote AS: 200
e. Click SAVE.
f. Click CLOSE followed by CLOSE EDITING.

9. Verify that the Tier-0 gateway appears in the window with a status of UP.

Task 4: Connect the Tier-0 and Tier-1 Gateways
You connect the two gateways because the connection between Tier-0 and Tier-1 gateways is
not established automatically.
1. On the NSX Simplified UI Home page, click Networking > Tier-1 Gateways.
2. Click the vertical ellipsis icon next to the T1-LR-01 entry and select Edit from the menu.

3. On the T1-LR-01 edit page, click the down arrow in the Linked Tier-0 Gateway field and
select T0-LR-01.
4. Click SAVE followed by CLOSE EDITING.

5. Verify that the link is created on both gateways.
a. Check T1-LR-01 to verify that its Linked Tier-0 Gateway is T0-LR-01.

b. Select the Tier-0 Gateways link in the navigation menu.


In the Tier-0 gateway list, verify that T0-LR-01 is linked to T1-LR-01 by clicking 1
in the Linked Tier-1 Gateways column.

Task 5: Test the End-to-End Connectivity
You test the connectivity from your student desktop to tenant VMs to verify that end-to-end
routing is working.
In the lab environment, routing has been preconfigured on your student desktop, the RRAS
server, and the VyOS router.
1. To verify connectivity, ping from the console of any tenant VM (T1-Web-01, T1-App-01,
T1-DB-01, and so on) to the gateway 192.168.100.1.

ping -c 3 192.168.100.1
ping -c 3 192.168.110.1

Your pings should be successful.


2. From the command prompt of your student desktop, verify that you can reach all the tenant
VMs.
ping 172.16.10.11
ping 172.16.20.11
ping 172.16.30.11

You should be able to ping from your student desktop to any of the tenant networks, which
verifies that the north-south routing is working properly.

Lab 9 Verifying Equal-Cost Multipathing Configurations

Objective: Enable equal-cost multipathing on gateways

In this lab, you perform the following tasks:


1. Prepare for the Lab
2. Verify the BGP Configuration
3. Verify That Equal-Cost Multipathing Is Enabled
4. Verify the Result of the ECMP Configuration

Task 1: Prepare for the Lab

You log in to the NSX Manager Simplified UI.
1. From your student desktop, open the Chrome web browser.
2. Click the NSX-T Data Center > NSX Manager bookmark.
3. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.

Task 2: Verify the BGP Configuration


You verify the BGP neighbor relationship between the edge nodes and the upstream VyOS
router.
1. Verify that the BGP neighbor relationship is established between the VyOS and the sa-
nsxedge-01 gateway.
a. From MTPuTTY, connect to sa-nsxedge-01.
b. When the PuTTY Security Alert appears, click Yes to proceed.
c. Disable the command-line timeout.
set cli-timeout 0
d. Obtain information for the gateways.
get logical-routers
e. Verify that the SR-T0-LR-01 service gateway appears with an associated VRF ID.

In the command output, VRF 6 is associated with SR-T0-LR-01. The VRF ID in your lab
might be different.

76 Lab 9 Verifying Equal-Cost Multipathing Configurations


f. Access the Tier-0 service gateway mode.
vrf vrf_ID

g. Verify the BGP state.


get bgp neighbor
The BGP state should show Established, up. Press q to quit the BGP neighbor
output.

h. Exit the Tier-0 VRF service gateway mode.


exit

NOTE
On sa-nsxedge-01, the BGP state for neighbor 192.168.100.1 is established and up.

2. From MTPuTTY, connect to sa-nsxedge-02 and repeat step 1 to verify that the BGP
neighbor relationship is established between the VyOS router and the sa-nsxedge-02
gateway.

NOTE
On sa-nsxedge-02, the BGP neighbor state for neighbor 192.168.100.1 is active.

Task 3: Verify That Equal-Cost Multipathing Is Enabled
You verify that equal-cost multipathing (ECMP) is enabled between the Tier-0 gateway and the
VyOS router so that both links can be used.
1. On the NSX Simplified UI Home page, click Networking > Tier-0 Gateways > T0-LR-
01.
a. Click the > icon next to BGP.
2. Verify that BGP, ECMP, and Multipath Relax all appear as On.

Task 4: Verify the Result of the ECMP Configuration


You perform a packet capture on both edge nodes to verify that the traffic is sent across
both uplinks.
1. From MTPuTTY, connect to SA-VYOS-01.
2. Verify that each tenant network (Web-Tier 172.16.10.0/24, App-Tier 172.16.20.0/24, and
DB-Tier 172.16.30.0/24) has two next-hops through the 192.168.100.2 and 192.168.110.2
interfaces on the T0-LR-01 gateway.
show ip route
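With ECMP, the router installs both next hops for each prefix and picks one per flow, commonly by hashing the flow 5-tuple and indexing into the next-hop list, so a given flow sticks to one edge node while different flows spread across both. The sketch below illustrates the idea only; the hash function NSX actually uses is internal and is an assumption here.

```python
import hashlib

# Next hops installed by ECMP for the tenant prefixes (from the lab topology).
next_hops = ["192.168.100.2", "192.168.110.2"]

def pick_next_hop(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Illustrative per-flow hashing: the same 5-tuple always maps
    to the same next hop, so packets of one flow stay on one uplink."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return next_hops[digest[0] % len(next_hops)]

# A single flow deterministically maps to one uplink:
a = pick_next_hop("172.20.10.10", "172.16.10.11", 50000, 80)
b = pick_next_hop("172.20.10.10", "172.16.10.11", 50000, 80)
print(a == b)  # True: deterministic per flow
```

This per-flow stickiness is why, later in this lab, each web server's traffic is seen on only one edge node at a time rather than being sprayed packet-by-packet across both.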

3. In MTPuTTY, connect to sa-nsxedge-01.
4. Capture packets on sa-nsxedge-01.

set capture session 1 interface fp-eth1 direction in


set capture session 1 expression src net 172.20.10.0/24

5. In MTPuTTY, connect to sa-nsxedge-02.


6. Capture packets on sa-nsxedge-02.
set capture session 1 interface fp-eth1 direction in
set capture session 1 expression src net 172.20.10.0/24

7. On the student desktop, double-click the httpdata11.bat and httpdata12.bat scripts,
which start a large number of HTTP requests to the web VMs.

8. Verify that the traffic is going through both sa-nsxedge-01 and sa-nsxedge-02, as a result
of your ECMP configuration.

NOTE
Your results might show that the traffic for 172.16.10.11 flows to sa-nsxedge-01
and the traffic for 172.16.10.12 flows to sa-nsxedge-02, or vice versa.

9. Terminate the packet capture in the sa-nsxedge-01 console.
a. Press Ctrl+C.
b. Delete the capture.
del capture session 1

10. Terminate the packet capture in the sa-nsxedge-02 console.


a. Press Ctrl+C.
b. Delete the capture.
del capture session 1

11. If the .bat scripts do not automatically terminate, stop them manually.
a. In the httpdata11.bat window, press Ctrl+C to stop the script, and enter Y to terminate
the batch job.
b. In the httpdata12.bat window, press Ctrl+C to stop the script, and enter Y to terminate
the batch job.

Lab 10 Configuring Network Address Translation

Objective: Configure source and destination network address translation rules on the Tier-1 gateway


In this lab, you perform the following tasks:


1. Prepare for the Lab
2. Create a New Tier-1 Gateway for Network Address Translation
3. Create a Segment
4. Attach a VM to the NAT-LS Segment
5. Configure NAT
6. Configure Route Advertisement and Route Redistribution
7. Verify the IP Connectivity

Task 1: Prepare for the Lab

You log in to the vSphere Web Client UI and the NSX Manager UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.

2. Log in to the NSX Simplified UI.


a. Open another tab in the Chrome web browser.
b. Click the NSX-T Data Center > NSX Manager bookmark.
c. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.

84 Lab 10 Configuring Network Address Translation


Task 2: Create a Tier-1 Gateway for Network Address Translation
You create another Tier-1 gateway to support Network Address Translation (NAT).
1. On the NSX Simplified UI Home page, click Networking > Tier-1 Gateways > ADD
TIER-1 GATEWAY.
2. Provide the configuration details in the ADD TIER-1 GATEWAY window.
• Name: Enter T1-LR-2-NAT.
• Tier-0 Router: Select T0-LR-01.
• Failover Mode: Non Preemptive (default).
• Edge Cluster: Select Edge-Cluster-01.
• Route Advertisement: Select All Static Routes, All Connected Segments & Service
Ports, and All NAT IPs.
Leave all the other options as default.

3. Click SAVE.
A message appears asking whether you want to continue editing the Tier-1
gateway. Click NO.

4. Verify that the NAT gateway appears in the Tier-1 Gateway list and the Status is UP.

Task 3: Create a Segment


You create a logical segment that connects to the NAT network.
1. On the NSX Simplified UI Home page, click Networking > Segments > ADD
SEGMENT.
2. Provide the configuration details in the ADD SEGMENT window.
• Name: Enter NAT-LS.
• Uplink & Type: Select T1-LR-2-NAT.
• Transport Zone: Select Global-Overlay-TZ.
a. Click Set Subnets followed by ADD SUBNET.
• Set Subnets: Enter 172.16.101.1/24.
Leave all the other options as default.
b. Click ADD.
c. Click APPLY followed by SAVE.
3. When the message appears asking whether you want to continue configuring this segment, click NO.

4. Verify that the NAT-LS logical segment is successfully created.

Task 4: Attach a VM to the NAT-LS Segment


You attach the VM T2-NAT-01 to the newly created NAT-LS segment.
1. From the vSphere Web Client UI, go to Hosts and Clusters.
2. Right-click the T2-NAT-01 VM and select Edit Settings.
3. From the Network adapter 1 drop-down menu, select NAT-LS (nsx.LogicalSwitch).

4. Verify that the Connected check box is selected.


5. Click OK.

Task 5: Configure NAT
You configure the source and destination NAT rules on the Tier-1 NAT gateway.
1. From the home of the NSX Simplified UI, click Networking > NAT.
2. Click Gateway and select T1-LR-2-NAT from the drop-down menu.
a. Click ADD NAT RULE.
3. Provide the configuration details in the ADD NAT RULE window.
• Name: Enter NAT-Rule-1.
• Action: Select SNAT.
• Source IP: Enter 172.16.101.11.
• Destination IP: Leave blank.
• Translated IP: Enter 80.80.80.1.
• Firewall: Select Bypass.
• Priority: Enter 1024.
Leave all the other options as default.

4. Click SAVE.

5. Verify that the SNAT rule appears in the list.

6. Click ADD NAT RULE again and check that T1-LR-2-NAT is still the value in the
Gateway field.
7. Provide the configuration details in the ADD NAT RULE window.
• Name: Enter NAT-Rule-2.
• Action: Select DNAT.
• Source IP: Leave blank.
• Destination IP: Enter 80.80.80.1.
• Translated IP: Enter 172.16.101.11.
• Firewall: Select Bypass.
• Priority: Enter 1024.
Leave all the other options as default.

8. Click SAVE.

9. Verify that the DNAT rule appears in the list.
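The two rules are mirror images of each other: the SNAT rule rewrites the source of outbound packets from 172.16.101.11 to 80.80.80.1, and the DNAT rule rewrites the destination of inbound packets from 80.80.80.1 back to 172.16.101.11. A minimal model of that rewriting, for illustration only (not NSX's data path):

```python
# Minimal model of the two NAT rules configured above (illustration only).
SNAT = {"match_src": "172.16.101.11", "translated": "80.80.80.1"}
DNAT = {"match_dst": "80.80.80.1", "translated": "172.16.101.11"}

def apply_nat(packet, direction):
    """Rewrite a packet dict {'src': ..., 'dst': ...} at the T1-LR-2-NAT gateway."""
    pkt = dict(packet)
    if direction == "outbound" and pkt["src"] == SNAT["match_src"]:
        pkt["src"] = SNAT["translated"]   # SNAT: hide the internal source address
    elif direction == "inbound" and pkt["dst"] == DNAT["match_dst"]:
        pkt["dst"] = DNAT["translated"]   # DNAT: steer traffic to the internal server
    return pkt

out_pkt = apply_nat({"src": "172.16.101.11", "dst": "192.168.100.1"}, "outbound")
in_pkt = apply_nat({"src": "192.168.100.1", "dst": "80.80.80.1"}, "inbound")
print(out_pkt["src"], in_pkt["dst"])  # 80.80.80.1 172.16.101.11
```

Because only the translated address 80.80.80.1 is visible upstream, the Tier-0 gateway must redistribute that NAT route, which is exactly what Task 6 configures.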

Task 6: Configure Route Advertisement and Route Redistribution
You verify that the NAT network route is advertised to the upstream VyOS router.
1. Using MTPuTTY, connect to sa-vyos-01 and verify that the 172.16.101.0/24 route is
advertised by entering show ip route.

2. On the Tier-0 Gateways, redistribute the NAT route (80.80.80.1/32) so that the upstream
gateway learns about it.
a. On the NSX Simplified UI Home page, click Networking > Tier-0 Gateways and select
T0-LR-01.
b. Click the vertical ellipsis icon and select Edit from the menu.
c. Expand the ROUTE RE-DISTRIBUTION option and click the current count value
7.
d. Select the Advertised Tier-1 Subnets > NAT IP check box.
e. Click APPLY.
f. When the TIER-0 Gateway window appears, click SAVE.
You see that the ROUTE RE-DISTRIBUTION count is 8.

3. Click SAVE followed by CLOSE EDITING.

4. Click the current count value 8.

a. Click CLOSE.
The T0-LR-01 Gateway Status shows Down until the configuration is realized on the NSX
Manager, which might take a few seconds.
5. Switch back to the MTPuTTY connection for sa-vyos-01 and enter show ip route
again to verify that 80.80.80.1/32 is displayed.

Task 7: Verify the IP Connectivity
You test the connectivity to the NAT network.
1. From MTPuTTY, connect to sa-nsxedge-01.
2. Retrieve gateway instances and identify the virtual routing and forwarding (VRF) instance
context for SR-T0-LR-01.
get logical-routers

In the command output, the VRF ID for SR-T0-LR-01 is 9. The VRF ID in your lab
might be different.
3. Access the VRF for SR-T0-LR-01 and view the routing table of the Tier-0 SR.
vrf 9
get route

4. From your student desktop, open a browser window, and enter http://80.80.80.1
(NAT web server).
A test page appears indicating that your NAT is successful.

Lab 11 Configuring the DHCP Server on the NSX Edge Node

Objective: Configure the DHCP server on the NSX Edge node


In this lab, you perform the following tasks:


1. Prepare for the Lab
2. Configure a DHCP Server
3. Verify the DHCP Server Operation
4. Prepare for the Next Lab

Task 1: Prepare for the Lab

You log in to the vSphere Web Client UI and the NSX Manager UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.

2. Log in to the NSX Simplified UI.


a. Open another tab in the Chrome web browser.
b. Click the NSX-T Data Center > NSX Manager bookmark.
c. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.

96 Lab 11 Configuring the DHCP Server on the NSX Edge Node


Task 2: Configure a DHCP Server
You log in to the NSX Manager UI and configure a DHCP server, a Tier-1 gateway, and a
segment.
1. Navigate to NSX Simplified UI > Networking > IP Address Management > DHCP.
a. Click ADD SERVER.
b. Select DHCP Server from the drop-down menu for Server Type.
Enter the configuration for the DHCP Server.
• Name: Enter DHCP-Server.
• Server IP Address: Enter 192.168.100.18/24.
• Lease Time (seconds): 86400 (default).
• Edge Cluster: Select Edge-Cluster-01.
c. Click SAVE.

2. Navigate to NSX Simplified UI > Networking > Tier-1 Gateways.
a. Click the vertical ellipsis icon next to T1-LR-01 and select Edit.
b. Click No IP Allocation Set next to the configuration option IP Address
Management.
c. From the Type drop-down menu, select DHCP Local Server.
d. From the DHCP Server drop-down menu, select DHCP-Server.

e. Click SAVE and SAVE again followed by CLOSE EDITING.


3. Navigate to NSX Simplified UI > Networking > Segments.
a. Click ADD SEGMENT.
b. Enter the configuration for the DHCP Segment.
• Name: Enter DHCP-LS.
• Uplink & Type: Select T1-LR-01 from the drop-down menu.
• Uplink Type: Select Flexible (default).

• Transport Zone: Select Global-Overlay-TZ | Overlay.
c. Click Set Subnets and click ADD SUBNET.
• Gateway: Enter 172.16.40.1/24.
• DHCP Ranges: Enter 172.16.40.25-172.16.40.35 and click Add item(s).
d. Click ADD then APPLY followed by SAVE.
e. When the message appears asking whether you want to continue configuring this segment,
click YES.
f. Continue to enter the configuration for the DHCP Segment.
• Domain Name: Enter vclass.local.
g. Click SAVE followed by CLOSE EDITING.
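The range 172.16.40.25-172.16.40.35 gives the server 11 leases to hand out. A first-free allocation strategy, which matches the observation later in this lab that the first requesting client receives .25, can be sketched as follows; this is an illustration of the concept, not NSX's DHCP implementation:

```python
import ipaddress

# DHCP range configured on the DHCP-LS segment.
pool_start = ipaddress.IPv4Address("172.16.40.25")
pool_end = ipaddress.IPv4Address("172.16.40.35")

leases = {}  # MAC address -> allocated IPv4Address

def allocate(mac):
    """Hand out the first free address in the pool.
    A real DHCP server also tracks lease expiry (86400 s in this lab)."""
    if mac in leases:                     # a renewing client keeps its address
        return leases[mac]
    ip = pool_start
    while ip <= pool_end:
        if ip not in leases.values():
            leases[mac] = ip
            return ip
        ip += 1
    raise RuntimeError("DHCP pool exhausted")

first = allocate("00:50:56:aa:bb:01")     # hypothetical client MAC
print(first)  # 172.16.40.25, the first address in the pool
```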

Task 3: Verify the DHCP Server Operation
You log in to the vSphere Web Client and attach two virtual machines to the DHCP segment.
Next, you use MTPuTTY to validate the DHCP server configuration. Finally, you configure one
of the virtual machines to acquire an IP address from DHCP.
1. Switch to the vSphere Web Client and navigate to Hosts and Clusters.
a. Right-click Ubuntu-01a and select Edit Settings.
b. Change the Network Adapter 1 to connect to DHCP-LS, make sure Connected is
selected, and click OK.

2. Connect Ubuntu-02a to the DHCP-LS.
a. Right-click Ubuntu-02a and select Edit Settings.
b. Change the Network Adapter 1 to connect to DHCP-LS, make sure Connected is
selected, and click OK.
3. Verify that the two virtual machines can communicate on the newly attached segment.
a. In the vSphere Web Client, select Hosts and Clusters, right-click Ubuntu-01a
in the inventory, and select Open Console.
b. Log in to Ubuntu-01a using vmware as the user name and VMware1! as the password.
c. Ping Ubuntu-02a.
ping -c 3 172.16.40.12

Your ping should be successful.

4. Verify the DHCP server configurations using the command line.
a. Switch to MTPuTTY and connect to sa-nsxedge-01.
b. Log in with admin as the user name and VMware1!VMware1! as the password.
c. Get the DHCP servers.
get dhcp servers

5. Verify the configurations of the DHCP IP pools.
get dhcp ip-pools

6. Verify that the DHCP server operates as expected.
a. Switch to the vSphere Web Client and open a console for Ubuntu-02a.
b. Log in using the user name vmware and password VMware1!.
c. Gain root access by entering sudo -s and enter VMware1! when prompted for the
password.
d. Clear the IP address assignment and request a new one from DHCP.
ifconfig ens160 0.0.0.0 0.0.0.0 && dhclient

NOTE
Note the space between the two zero groupings for the IP address and netmask.

e. View the newly assigned IP address (172.16.40.25) from the DHCP pool with the
ifconfig command.
You see that the new inet addr: is now 172.16.40.25, which is the first address in
the DHCP IP pool.

7. Switch back to MTPuTTY to verify the DHCP lease.
a. Get the DHCP lease.
get dhcp leases

Task 4: Prepare for the Next Lab


In preparation for the next lab, you use the vSphere Web Client and MTPuTTY to reconfigure
Ubuntu-01a and Ubuntu-02a to their original IP addresses and networks.
1. Switch back to the Ubuntu-02a virtual machine console in the vSphere Web Client and
restore its original static IP address.
a. Enter the command killall dhclient && ifconfig ens160 172.16.40.12
netmask 255.255.255.0.

2. Switch to the vSphere Web Client and return the virtual machines to their original
network.
a. Right-click Ubuntu-01a and select Edit Settings.
b. Change Network adapter 1 to connect to VM Network and click OK.

3. While still in the vSphere Web Client, open a console to Ubuntu-01a.


a. Log in with the user name vmware and the password VMware1!.
b. Enter ifconfig at the command line and verify that the IP address for Ubuntu-01a is
172.16.40.11.
c. Switch back to the vSphere Web Client, right-click on Ubuntu-01a, and select Edit
Settings.
d. Verify that Ubuntu-01a is attached to the VM Network.
Otherwise, edit the network configuration.

Lab 12 Configuring Load Balancing

Objective: Configure load balancing on the Tier-1 gateway to distribute web traffic


In this lab, you perform the following tasks:


1. Prepare for the Lab
2. Test the Connectivity to Web Servers
3. Create a Tier-1 Gateway Named T1-LR-LB and Connect It to T0-LR-01
4. Create a Load Balancer
5. Configure Route Advertisement and Route Redistribution for the Virtual IP
6. Use the CLI to Verify the Load Balancer Configuration
7. Verify the Operation of the Backup Server
8. Prepare for the Next Lab

Task 1: Prepare for the Lab

You log in to the vSphere Web Client UI and the NSX Manager UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.

108 Lab 12 Configuring Load Balancing


2. Log in to the NSX Simplified UI.
a. Open another tab in the Chrome web browser.
b. Click the NSX-T Data Center > NSX Manager bookmark.
c. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.

Task 2: Test the Connectivity to Web Servers


You verify the end-to-end connectivity from your student desktop to the web servers on the
Web-Tier network.
1. On your student desktop, open a command prompt window.
2. Ping the three web servers and verify that the pings are successful.

ping 172.16.10.11
ping 172.16.10.12
ping 172.16.10.13

3. On your student desktop, open a browser tab and verify that you can access the three web
servers.
http://172.16.10.11
http://172.16.10.12
http://172.16.10.13

Do not proceed to the next task if you cannot access the three web servers.

Task 3: Create a Tier-1 Gateway Named T1-LR-LB and Connect It to T0-LR-01
1. Create T1-LR-LB and attach it to T0-LR-01.
a. From the Simplified UI, click the Networking tab.
b. Click Tier-1 Gateways.
c. Click ADD TIER-1 GATEWAY and enter the following details to create T1-LR-LB.
• Tier-1 Gateway Name: Enter T1-LR-LB.
• Linked Tier-0 Gateway: Select T0-LR-01.
• Fail Over: Select Non Preemptive.
• Edge Cluster: Select Edge-Cluster-01.
d. Click SAVE.
e. When the message appears asking whether you want to continue configuring the Tier-
1 gateway, click NO.

2. Attach Web-LS to T1-LR-LB.
a. Click Segments.
b. Click the vertical ellipsis icon next to Web-LS and select Edit.
• Uplink & Type: Select T1-LR-LB from the drop-down menu.
c. Click SAVE followed by CLOSE EDITING.

Task 4: Create a Load Balancer


You create a load balancer and attach it to the Tier-1 gateway.
1. Create a load balancer by navigating to NSX-T UI Home > Networking > Load
Balancing > ADD LOAD BALANCER.
a. Provide the configuration details on the ADD LOAD BALANCER page.
• Name: Enter Web-LB.
• Size: Select Small.
• Tier-1 Gateway: Select T1-LR-LB.
• Leave all other options blank.
b. Click SAVE.
c. When the message appears asking whether you want to continue configuring this load balancer, click YES.
d. On the Load Balancer options page, expand VIRTUAL SERVERS and click Set
Virtual Servers.
2. Create a new virtual server.
a. Click ADD VIRTUAL SERVER > L4 TCP.
• Name: Enter Web-IP-VIP.
• IP Address: Enter 192.168.100.7.
• Ports: Enter 80 and click add item.
• Server Pool: Click the vertical ellipsis icon next to the field and select Create
New.

3. Create a server pool for the web servers.
a. Provide the configuration details on the General Properties page in the Add New
Server Pool window.
• Name: Enter Web-IP-Pool.
• Description: Enter Server pool for web servers.
• Load Balancing Algorithm: Select ROUND_ROBIN (default).
• Select Members: Click Select Members.
• Leave all the other settings as default.

b. On the Configure Server Pool Members page, click ADD MEMBER under Enter
individual members to add three web server nodes (T1-Web-01, T1-Web-02, and
T1-Web-03, the last acting as a backup) to the pool member list.
• Name: Enter Node-1.
• IP: Enter 172.16.10.11.
• Port: Enter 80.
• Weight: 1 (default).
• State: ENABLED (default).
• Backup Member: Disabled.
Click ADD.

c. Click ADD MEMBER.


Enter the configuration information for the next member.
• Name: Enter Node-2.
• IP: Enter 172.16.10.12.
• Port: Enter 80.
• Weight: 1 (default).
• State: ENABLED (default).
• Backup Member: Disabled.
d. Click ADD MEMBER.

Enter the configuration information for the last member.
• Name: Enter Node-3.
• IP: Enter 172.16.10.13.
• Port: Enter 80.
• Weight: 1 (default).
• State: ENABLED (default).
• Backup Member: Enabled.
e. Click ADD followed by APPLY.
f. On the Create Server Pool page, click SAVE.
g. On the Set Virtual Servers page, click SAVE and CLOSE.
h. On the ADD LOAD BALANCER page, click SAVE.
4. Select the SERVER POOLS tab and verify that the newly created Web-IP-Pool appears
in the server pool list.

5. Select the VIRTUAL SERVERS tab and verify that the newly created Web-IP-VIP
appears in the virtual server list.

6. Navigate to NSX-T UI Home > Networking > Load Balancing > LOAD BALANCERS.

a. Verify that the Web-LB load balancer is attached to the T1-LR-LB gateway and the
load balancer’s operational status is Up.
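The ROUND_ROBIN algorithm cycles through the enabled primary members, and because Node-3 is flagged as a backup, it receives traffic only when every primary is down. The sketch below illustrates that selection logic; it is a conceptual model, not NSX's load balancer source.

```python
from itertools import count

# Pool members as configured above: (name, ip, is_backup).
members = [
    ("Node-1", "172.16.10.11", False),
    ("Node-2", "172.16.10.12", False),
    ("Node-3", "172.16.10.13", True),   # backup member
]
healthy = {"Node-1", "Node-2", "Node-3"}   # updated by health monitoring
_counter = count()

def pick_member():
    """Round-robin over healthy primaries; fall back to backups only
    when no healthy primary remains."""
    primaries = [m for m in members if not m[2] and m[0] in healthy]
    pool = primaries or [m for m in members if m[2] and m[0] in healthy]
    if not pool:
        raise RuntimeError("no healthy members")
    return pool[next(_counter) % len(pool)]

picks = [pick_member()[0] for _ in range(4)]
print(picks)  # ['Node-1', 'Node-2', 'Node-1', 'Node-2']
```

Removing both primaries from the `healthy` set makes subsequent picks return Node-3, which mirrors the backup-server behavior verified later in this lab.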

Task 5: Configure Route Advertisement and Route Redistribution for the Virtual IP
You advertise the load balancer's virtual IP (VIP) and verify that the HTTP traffic is being
handled by both web servers in a round-robin method.
1. Use the Chrome browser to access the load balancer VIP.
a. From your student desktop, open a Chrome browser window and try to access the load
balancer’s VIP address http://192.168.100.7.
b. Verify that the website cannot be reached.
The website cannot be reached because the load balancer’s VIP is not advertised and
is unknown to the outside clients.

2. Use curl to verify access to the load balancer VIP.
a. From your student desktop, open the command prompt window and access the load
balancer’s VIP address.
curl -i http://192.168.100.7

b. Verify that the website cannot be reached.


The website cannot be reached because the load balancer’s VIP is not advertised and
is unknown to the outside clients.

3. Configure the T1-LR-LB gateway to advertise the VIP route.


a. On the NSX Simplified UI Home page, click Networking > Tier-1 Gateways > T1-
LR-LB.
b. Click the vertical ellipsis icon and select Edit.
c. Expand the option by clicking the > icon next to Route Advertisement.
d. In the Edit Route Advertisement Configuration window, click Advertise All LB VIP
Routes.

4. Click SAVE followed by CLOSE EDITING.

5. Configure the T0-LR-01 gateway to redistribute the VIP route to its upstream VyOS router.
a. Select Networking > Tier-0 Gateways > T0-LR-01.
b. Click the three vertical ellipses icon next to TO-LR-01 and select Edit.
c. Expand the ROUTE RE-DISTRIBUTION option and click the Route Re-
distribution number.
d. In the Edit Redistribution Criteria window, select the LB VIP check box.

6. Click APPLY.
a. Click SAVE followed by CLOSE EDITING.
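The two settings you just enabled act as successive filters on which routes leave the NSX domain: the Tier-1 gateway advertises the LB VIP to the Tier-0, and the Tier-0 redistributes it to the upstream router. A toy sketch of this filtering (route types and names below are illustrative, not NSX API values):

```python
# Toy model of route advertisement (Tier-1 -> Tier-0) and route
# redistribution (Tier-0 -> upstream router) as filters on route types.

def advertise(t1_routes, advertised_types):
    """Tier-1 hands only routes of enabled types to the Tier-0."""
    return [r for r in t1_routes if r["type"] in advertised_types]

def redistribute(t0_routes, redistributed_types):
    """Tier-0 passes only routes of enabled types to its upstream peer."""
    return [r for r in t0_routes if r["type"] in redistributed_types]

t1_routes = [
    {"prefix": "192.168.100.7/32", "type": "LB_VIP"},
    {"prefix": "172.16.10.0/24", "type": "CONNECTED"},
]

# Before enabling "Advertise All LB VIP Routes": the VIP never leaves the T1.
print(advertise(t1_routes, {"CONNECTED"}))

# After enabling both knobs: the VIP reaches the upstream VyOS router.
learned = advertise(t1_routes, {"CONNECTED", "LB_VIP"})
print(redistribute(learned, {"LB_VIP"}))
```

Both filters must be enabled for outside clients to reach the VIP, which is why the earlier curl attempts failed.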



7. Use Firefox to verify the access to the load balancer VIP.
a. From student desktop, open a Firefox browser and access the VIP address using
http://192.168.100.7.
The webpage should appear.
b. Refresh the browser display to verify that both back-end web servers are being used
(as a result of the configured round-robin method).
The client’s HTTP requests alternate between T1-Web-01 and T1-Web-02.
Due to the browser cache behavior, you might need to press Ctrl+F5 (force refresh) to see
the traffic being load balanced between the two web servers.



8. Use curl to verify access to the load balancer VIP.
a. From student desktop, open Windows command prompt and access the load
balancer’s VIP address.
curl -i http://192.168.100.7

The webpage should appear.


b. Run the same curl command again to verify that both back-end web servers are being
used in a round-robin method.
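The alternating responses you observe come from round-robin selection: the load balancer hands each new request to the next pool member in rotation. A minimal sketch of that behavior:

```python
from itertools import cycle

# Minimal sketch of round-robin selection: successive requests are handed
# to pool members in rotation, which is why refreshing the page alternates
# between T1-Web-01 and T1-Web-02.
pool = ["T1-Web-01", "T1-Web-02"]
picker = cycle(pool)

served = [next(picker) for _ in range(4)]
print(served)  # ['T1-Web-01', 'T1-Web-02', 'T1-Web-01', 'T1-Web-02']
```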



Task 6: Use the CLI to Verify the Load Balancer
Configuration
You verify the configuration of the load balancer using the NSX Edge CLI.
1. Verify the load balancer configuration.
a. In MTPuTTY, open a connection to either sa-nsxedge-01 or sa-nsxedge-02 and
retrieve the load balancer information.
get load-balancer

The output shows the general load balancer configuration, including UUID and
Virtual Server ID.
b. Copy the UUID and the Virtual Server ID values and paste them into a notepad.
2. Verify the virtual server configuration.
get load-balancer UUID virtual-server Virtual_Server_ID



3. Verify the server pool configuration.
get load-balancer UUID pools

UUID is the value that you recorded for the load balancer.



Task 7: Verify the Operation of the Backup Server
You verify the operation of the backup server configured in the server pool.
1. Verify the operation of the backup server.
a. From the vSphere Web Client UI, Shut Down Guest OS for the two web servers (T1-
Web-01 and T1-Web-02) which belong to the default server pool for load balancing.
b. From your student desktop, open the Firefox browser window and connect to the load
balancer’s VIP address http://192.168.100.7.
T1-Web-03 in the backup pool is now in use because the servers in the default pool
are down.

NOTE
You might need to wait a few minutes before trying to access the backup server.
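The fallback you just observed can be sketched as a simple selection rule: members of the default pool are preferred, and the backup member is used only when every default member is down. Health states below are illustrative:

```python
# Sketch of default-pool-first selection with a backup pool, as exercised
# in this task. Returns the server that handles the next request.

def pick_server(default_pool, backup_pool, health):
    up = [s for s in default_pool if health.get(s) == "UP"]
    if up:
        return up[0]                       # default pool preferred
    backup_up = [s for s in backup_pool if health.get(s) == "UP"]
    return backup_up[0] if backup_up else None

default_pool = ["T1-Web-01", "T1-Web-02"]
backup_pool = ["T1-Web-03"]

# Both default members shut down -> the backup member answers.
print(pick_server(default_pool, backup_pool,
                  {"T1-Web-01": "DOWN", "T1-Web-02": "DOWN", "T1-Web-03": "UP"}))
```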



Task 8: Prepare for the Next Lab
You restart the web server VMs and disable the load balancer.
1. From the vSphere Web Client UI, power on the T1-Web-01 and T1-Web-02 VMs.
a. When the Power On Recommendations page displays, click OK.

2. Disable the load balancer.


a. From NSX Manager web UI, click Networking > Load Balancing > LOAD
BALANCERS.
b. Click the vertical ellipsis icon next to Web-LB and select Edit.
c. Toggle the Admin State to Disabled.
3. Detach the Web-LB load balancer from the T1-LR-LB gateway.
a. Clear the Tier-1 Gateway box by clicking the X beside the value in the box and
clicking outside the box.
b. Click SAVE.



4. From the NSX Simplified UI, click Networking > Segments, click the vertical ellipsis
icon next to Web-LS, and select Edit.
a. Select T1-LR-01 from Uplink & Type.
b. Click Save.
c. Click CLOSE EDITING.



Lab 13 Deploying Virtual Private
Networks

Objective: Configure the VPN tunnel and verify the operation

In this lab, you perform the following tasks:


1 Prepare for the Lab
2 Deploy Two New NSX Edge Nodes to Support the VPN Deployment
3 Enable SSH on the Edge Nodes
4 Configure a New Edge Cluster
5 Deploy and Configure a New Tier-0 Gateway and Segments for VPN Support
6 Create an IPSec VPN Service
7 Create an L2 VPN Server and Session
8 Deploy the L2 VPN Client
9 Verify the Operation of the VPN Setup

Task 1: Prepare for the Lab

You log in to the vSphere Web Client UI and the NSX Manager UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.

2. Log in to the NSX Simplified UI.


a. Open another tab in the Chrome web browser.
b. Click the NSX-T Data Center > NSX Manager bookmark.
c. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.

126 Lab 13 Deploying Virtual Private Networks


Task 2: Deploy Two New NSX Edge Nodes to Support the
VPN Deployment
1. Deploy the first new edge transport node, which you use later in this lab.
a. On the NSX Simplified UI home page, click System > Fabric > Nodes > Edge
Transport Nodes.
b. Click +ADD EDGE VM.
c. Provide the configuration details in the Name and Description window.
• Name: Enter sa-nsxedge-03.
• Host name/FQDN: Enter sa-nsxedge-03.vclass.local.
• Form Factor: Select Medium (default).
d. Click NEXT.
2. On the Credentials page, enter VMware1!VMware1! as the CLI password and the system
root password.
a. Click NEXT.
3. On the Configure Deployment page, provide the configuration details.
• Compute Manager: Select sa-vcsa-01.vclass.local (begin by typing sa and the full
name should appear).
• Cluster: Select SA-Management-Edge from the drop-down menu.
• Resource Pool: Leave empty.
• Host: Leave empty.
• Datastore: Select SA-Shared-02-Remote from the drop-down menu.
a. Click NEXT.
4. On the Configure Ports page, provide the configuration details.
• IP Assignment: Select Static.
• Management IP: Enter 172.20.10.63/24.
• Default Gateway: Enter 172.20.10.10.
• Management Interface: Select pg-SA-Management from the drop-down menu.
a. Click NEXT.



5. On the Configure NSX page, provide the configuration details.
• Transport Zone: Select Global-Overlay-TZ and Global-VLAN-TZ.
• Edge Switch Name: Select PROD-Overlay-NVDS.
• Uplink Profile: Select nsx-edge-single-nic-uplink-profile from the drop-down menu.
• IP Assignment: Select Use IP Pool from the drop-down menu.
• IP Pool: Select VTEP-IP-Pool from the drop-down menu.
• DPDK Fastpath Interfaces: Select uplink-1 and select pg-SA-Edge-Overlay from the
drop-down menu.
6. Continuing on the Configure NSX page, click + Add N-VDS.
• Edge Switch Name: Select PROD-VLAN-NVDS from the drop-down menu.
• Uplink Profile: Select nsx-edge-single-nic-uplink-profile from the drop-down menu.
• IP Assignment: [Disabled].
• DPDK Fastpath Interfaces: Select uplink-1 and select pg-SA-Edge-Uplinks from the
drop-down menu.
a. Click FINISH.

NOTE
The edge deployment might take several minutes to complete. The deployment
status displays various values, for example, Node Not Ready, which is only
temporary.

NOTE
Please wait until the Configuration status displays Success and Status is Up. You
might click REFRESH occasionally.
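The deployment parameters you entered above can be captured as plain structured data, which is handy for reviewing the second (nearly identical) node or for scripting. The keys below are illustrative and do not match the NSX-T API schema:

```python
# The sa-nsxedge-03 deployment parameters from this task as plain data.
# Field names are illustrative, not the NSX-T transport-node API schema.
edge_node = {
    "name": "sa-nsxedge-03",
    "fqdn": "sa-nsxedge-03.vclass.local",
    "form_factor": "MEDIUM",
    "management_ip": "172.20.10.63/24",
    "default_gateway": "172.20.10.10",
    "transport_zones": ["Global-Overlay-TZ", "Global-VLAN-TZ"],
    "switches": [
        {"name": "PROD-Overlay-NVDS", "uplink_portgroup": "pg-SA-Edge-Overlay",
         "ip_assignment": "VTEP-IP-Pool"},
        {"name": "PROD-VLAN-NVDS", "uplink_portgroup": "pg-SA-Edge-Uplinks",
         "ip_assignment": None},   # the VLAN N-VDS needs no VTEP IP
    ],
}
print(edge_node["management_ip"])
```

The sa-nsxedge-04 node deployed next differs only in name, FQDN, and management IP.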



7. You need to deploy another NSX Edge node.
a. On the NSX Simplified UI home page, click System > Fabric > Nodes > Edge
Transport Nodes.
b. Click +ADD EDGE VM.
c. Provide the configuration details in the Name and Description window.
• Name: Enter sa-nsxedge-04.
• Host name/FQDN: Enter sa-nsxedge-04.vclass.local.
• Form Factor: Select Medium (default).
d. Click NEXT.
8. On the Credentials page, enter VMware1!VMware1! as the CLI password and the system
root password.
a. Click NEXT.
9. On the Configure Deployment page, provide the configuration details.
• Compute Manager: Select sa-vcsa-01.vclass.local (begin by typing sa and the full
name should appear).
• Cluster: Select SA-Management-Edge from the drop-down menu.
• Resource Pool: Leave empty.
• Host: Leave empty.
• Datastore: Select SA-Shared-02-Remote from the drop-down menu.
a. Click NEXT.
10. On the Configure Ports page, provide the configuration details.
• IP Assignment: Select Static.
• Management IP: Enter 172.20.10.64/24.
• Default Gateway: Enter 172.20.10.10.
• Management Interface: Select pg-SA-Management from the drop-down menu.
a. Click NEXT.
11. On the Configure NSX page, provide the configuration details.
• Transport Zone: Select Global-Overlay-TZ and Global-VLAN-TZ.
• Edge Switch Name: Select PROD-Overlay-NVDS.
• Uplink Profile: Select nsx-edge-single-nic-uplink-profile from the drop-down menu.



• IP Assignment: Select Use IP Pool from the drop-down menu.
• IP Pool: Select VTEP-IP-Pool from the drop-down menu.
• DPDK Fastpath Interfaces: Select uplink-1 and select pg-SA-Edge-Overlay from the
drop-down menu.
12. Continuing on the Configure NSX page, click + Add N-VDS.
• Edge Switch Name: Select PROD-VLAN-NVDS from the drop-down menu.
• Uplink Profile: Select nsx-edge-single-nic-uplink-profile from the drop-down menu.
• IP Assignment: [Disabled].
• DPDK Fastpath Interfaces: Select uplink-1 and select pg-SA-Edge-Uplinks from the
drop-down menus.
a. Click FINISH.

NOTE
The edge deployment might take several minutes to complete. The deployment
status displays various values, for example, Node Not Ready, which is only
temporary.

NOTE
Please wait until the Configuration status displays Success and Status is Up. You
might click REFRESH occasionally.



Task 3: Enable SSH on the Edge Nodes
You enable the SSH service on each edge node that you created.
1. From the vSphere Web Client Home page, click Hosts and Clusters.
2. In the navigator pane, right-click sa-nsxedge-03 and select Open Console.
3. Enter admin as the user name and VMware1!VMware1! as the password.
4. Verify that the SSH service is stopped.
get service ssh

5. Start the SSH service.


start service ssh

6. Set the SSH service to autostart when the VM is powered on.


set service ssh start-on-boot

7. Verify that the SSH service is running and Start on boot is set to True.
get service ssh

8. Configure SSH on sa-nsxedge-04.


a. From the vSphere Web Client Home page, click Hosts and Clusters.
b. In the navigator pane, right-click sa-nsxedge-04 and select Open Console.
c. Enter admin as the user name and VMware1!VMware1! as the password.
d. Verify that the SSH service is stopped.
get service ssh

e. Start the SSH service.


start service ssh

f. Set the SSH service to autostart when the VM is powered on.


set service ssh start-on-boot

g. Verify that the SSH service is running and Start on boot is set to True.
get service ssh



Task 4: Configure a New Edge Cluster
You log in to the NSX Simplified UI and configure a VPN service to a remote network.
1. Create a new Edge Cluster that contains two previously deployed NSX Edge Nodes.
a. Navigate to System > Fabric > Nodes > Edge Clusters.
b. Click +ADD.
• Name: Enter Edge-Cluster-02.
• Transport Nodes: Select the checkbox next to Available (2) to select both sa-
nsxedge-03 and sa-nsxedge-04.
c. Click the right arrow icon to move the Edge Nodes to Selected.
d. Click ADD.



Task 5: Deploy and Configure a New Tier-0 Gateway and
Segments for VPN Support
1. You deploy and configure a new Tier-0 gateway for VPN support.
a. Navigate the NSX Simplified UI to Networking > Tier-0 Gateways.
b. Click ADD TIER-0 GATEWAY.
c. Enter the configuration information for the new Tier-0 gateway.
• Name: Enter T0-VPN-Gateway.
• HA Mode: Select Active Standby from the drop-down menu.
• Fail Over: Select Preemptive.
• Edge Cluster: Select Edge-Cluster-02 from the drop-down menu.

NOTE
The Edge Cluster might not initially populate. You might need to click on the field
multiple times to eventually have it available.

• Preferred Edge: Select sa-nsxedge-03 from the drop-down menu.
d. Click Save.
2. When the message asking whether you want to continue Configuring this Tier-0 Gateway
appears, click YES.
3. Expand ROUTE RE-DISTRIBUTION by clicking the > icon next to it and click Set.
a. Select the check boxes for the configuration.
• Select Static Routes.
• Select IPSec Local IP.
• Select Connected Interfaces & Segments and all subobjects.
• Advertised Tier-1 Subnets: Leave off.
b. Click APPLY.
c. Click SAVE.



4. Click CLOSE EDITING.
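The Preemptive failover mode you selected means the preferred edge (sa-nsxedge-03) reclaims the active role as soon as it recovers; in non-preemptive mode the standby would stay active. A toy model of the difference:

```python
# Toy model of Active-Standby HA with and without preemptive failover.
# events: True = preferred node up, False = preferred node down.

def failover(preferred, peer, events, preemptive):
    """Replay up/down events for the preferred node; return the active node."""
    active = preferred
    for up in events:
        if not up:
            active = peer           # preferred fails -> peer takes over
        elif preemptive:
            active = preferred      # preferred recovers and reclaims the role
        # non-preemptive: peer stays active even after the preferred recovers
    return active

# Preferred node fails, then recovers:
print(failover("sa-nsxedge-03", "sa-nsxedge-04", [True, False, True], True))
print(failover("sa-nsxedge-03", "sa-nsxedge-04", [True, False, True], False))
```

Preemptive mode keeps traffic pinned to the preferred edge whenever it is healthy, at the cost of one extra switchover on recovery.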

5. Navigate in the NSX Simplified UI to Networking > Segments.


a. Click ADD SEGMENT.
b. Enter the configuration information for the new segment.
• Name: Enter T0-VPN-GW-Uplink.
• Uplink & Type: Leave blank.
• Subnets: Leave blank.
• Transport Zone: select Global-VLAN-TZ | VLAN from the drop-down menu.
• VLAN: Enter 0 and click Add Item(s).
c. Click SAVE.
6. When prompted to continue editing the segment, click NO.



7. Click ADD SEGMENT again to create another segment.
a. Enter the configuration information for the new Segment.
• Name: Enter L2VPN-Segment.
• Uplink & Type: Leave blank.
• Subnets: Leave blank.
• Transport Zone: Select Global-Overlay-TZ from the drop-down menu.
b. Click SAVE.
c. When prompted to continue editing the segment, click NO.
8. Return to Networking > Tier-0 Gateways.
a. Click the vertical ellipsis icon next to T0-VPN-Gateway and select Edit.
b. Expand INTERFACES by clicking the > icon next to it and click Set.
c. Click ADD INTERFACE.
d. Enter the configuration information for the new Interface.
• Name: Enter Uplink.
• Type: External (default).
• IP Address / Mask: Enter 192.168.201.2/24 and click Add item(s).
• Connected To (Segment): Select T0-VPN-GW-Uplink from the drop-down menu.
• Edge Node: Select sa-nsxedge-03 from the drop-down menu.
9. Click SAVE, CLOSE followed by CLOSE EDITING.
10. Wait until the new Tier-0 Gateway status displays UP.
You might click REFRESH periodically while waiting.



Task 6: Create an IPSec VPN Service
1. You create and configure a new IPSec VPN Service.
a. Navigate in the NSX Simplified UI to Networking > VPN.
b. Click ADD SERVICE > IPSec.

c. Enter the configuration information for the new VPN Service.


• Name: Enter IPSec-for-L2VPN.
• Tier-0 Gateway: Select T0-VPN-Gateway (use the drop-down menu to select).
d. Click SAVE.
2. When the message asking whether you want to continue Configuring this Tier-0 Gateway
appears, click NO.



Task 7: Create an L2 VPN Server and Session
1. You create an L2 VPN server for your VPN network.
a. While in the VPN SERVICES tab, click ADD SERVICE > L2 VPN Server.
b. Enter the configuration information for N-VDS.
• Name: Enter L2-VPN-Server.
• Tier-0 Gateway: Select T0-VPN-Gateway (use the drop-down menu to select).
c. Click SAVE.
d. When the message asking whether you want to continue Configuring this VPN
Service appears, click YES.



2. Expand SESSIONS by clicking the > icon next to it and click ADD Sessions followed by
ADD L2 VPN SESSION.
a. Enter the session configuration information.
• Name: Enter L2-VPN-Session-01.
• Local Endpoint/IP: Click the vertical ellipsis icon and add an endpoint.
• Name: Enter L2VPN-Endpoint.
• VPN Service: Select IPSec-for-L2VPN.
• IP Address: Enter 192.168.201.3.
• Local ID: Enter 192.168.201.3.
• Click SAVE.
• On the ADD L2 VPN SESSION screen, provide the configurations.
• Remote IP: Enter 192.168.201.4.
• Pre-shared Key: Enter VMware1!.
• Tunnel Interface: Enter 169.1.1.1/24.
• Remote ID: Enter 192.168.201.4.
b. Click SAVE.
c. When the message asking whether you want to continue Configuring this L2-VPN
Session appears, click NO.
3. Click CLOSE followed by CLOSE EDITING.
a. Click on the L2 VPN SESSIONS tab and confirm the sessions were created.

NOTE
The L2 VPN Session appears as either Down or In Progress until you have
deployed the L2 VPN Client and have an active session running.



4. Return to Networking > Segments and add the newly created VPN session information to
L2VPN-Segment.
a. Click the vertical ellipsis icon next to L2VPN-Segment and select Edit from the
menu.
b. Click the L2 VPN field and select L2-VPN-Session-01.
c. Enter the value 100 in the VPN Tunnel ID field.
d. Click SAVE followed by CLOSE EDITING.

Task 8: Deploy the L2 VPN Client


1. Before deploying the L2VPN-Client, you acquire Peer Code from the L2 VPN Session.
a. Navigate to Networking > VPN > L2 VPN Sessions.
b. From the L2 VPN SESSIONS tab click the > icon next to L2-VPN-Session-01.
c. Click on DOWNLOAD CONFIG.
The Download Config has PSK information in it warning appears.
d. Click YES.
e. Save the configuration file to your desktop.



2. You deploy the L2 VPN Client onto sa-esxi-01.vclass.local from the vSphere Web Client.
a. Switch to the vSphere Web Client and select sa-esxi-01.vclass.local.
You might need to log in again using user name administrator@vsphere.local and
password VMware1!.
b. Right-click on the host and select Deploy OVF Template.
c. Enter the configurations for the deployment.
• Select Template: Click Local file and click Browse.
• Locate the nsx-l2vpn-client-ovf-11197779 folder and click to open the folder.
• Select all the files in the folder using Ctrl+A and click Open.
3. Click Next.
4. In the Select Name and location window, delete -XLarge from the name and click Next.
5. In the Select a resource window, sa-esxi-01.vclass.local should be highlighted.
If it is not highlighted, select it and click Next.
6. In the Review details window, click Next.
7. In the Select storage window, select Thin provision from the Select virtual disk format:
drop-down menu, select SA-Shared-02-Remote from the storage list, and click Next.
8. In the Select networks window, select the following fields.
• Trunk: Select Trunk (use the drop-down menu to select).
• Public: Select pg-SA-Edge-Uplinks (use the drop-down menu to select)
• HA: Select HA (you might need to click Browse to locate the value).
9. Click Next.
10. On the Customize template window, enter the user passwords.
• For admin, enable, and root users: enter VMware1!VMware1! in Enter Password and
Confirm Password.
11. Expand Uplink Interface using the > icon next to it.
• IP Address: Enter 192.168.201.4.
• Prefix Length: Enter 24.
• Default Gateway: Enter 192.168.201.1.
• DNS IP Address: Enter 172.20.10.10.
12. Expand L2T using the > icon next to it.



a. Minimize all open windows and access your desktop.
b. Double-click the L2VPNSession_L2VPN-Session-01_config.txt file.
c. In the open Notepad screen, select Format > Word Wrap.
d. Beginning after the text peer_code, highlight the text between the quotes and copy
the content.

Enter the following information in the L2T configurations.

• Egress Optimized IP Address: Leave blank.


• Peer Address: Enter 192.168.201.3.
• Peer Code: Paste the content using Ctrl+V from your Notepad screen.
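Instead of copying the peer code by hand, it can be extracted programmatically from the downloaded session configuration. The sample content below is a made-up stand-in for the downloaded config file; the real file's layout may differ:

```python
import re

# Sketch of extracting the peer code from the downloaded session config.
# The sample string is a hypothetical stand-in, not real NSX output.
sample = '{"transport_tunnels": [], "peer_code": "AbCdEf123=="}'

match = re.search(r'"peer_code"\s*:\s*"([^"]+)"', sample)
peer_code = match.group(1) if match else None
print(peer_code)  # AbCdEf123==
```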

13. Expand Sub Interface using the > icon next to it.
• Enter 10(100) in the Sub Interface VLAN (Tunnel ID) field.
14. Click Next and then Finish.

NOTE
You might encounter the Failed to Deploy OVF package...missing
descriptor error. If you do, you must start the deployment over: power off
the NSX-l2t-client VM and select Delete from Disk before reattempting the
deployment. If the second attempt does not succeed, ask your instructor for
assistance.

15. Watch the progress of the deployment until complete.

NOTE
Even after Recent Tasks shows that the deployment is complete, you might
have to wait a few minutes before the Power On option becomes accessible.

• Power on the NSX-l2t-client by right-clicking the newly deployed VM in the
inventory and selecting Power > Power On.



NOTE
You might need to wait for about 3 minutes for the startup to complete.

16. To ensure that the startup is complete, switch back to the vSphere Web Client, select
NSX-l2t-client in the inventory, click the gear icon in the console image, and select
Launch Web Console.
a. Wait for the login prompt to appear and log in using the user name admin and the
password VMware1!VMware1!.

b. Verify that the client information is displayed.

c. Close the console by clicking the X in the browser tab.

Task 9: Verify the Operation of the VPN Setup


You verify the operation of the deployed VPN tunnel by opening consoles to the two
L2VPN VMs and using ping to reach across the tunnel.
1. In the vSphere Web Client inventory, right-click T1-L2VPN-01 and select Edit Settings.
a. Change Network adapter 1 by clicking the drop-down menu and select L2VPN-
Segment (nsx LogicalSwitch).
b. Make sure Connected is selected and click OK.

NOTE
Ensure that both NSX-l2t-client and T1-L2VPN-02 reside on the same host by
selecting each of them and viewing the Summary tab for the Host: value.
Otherwise, use vMotion to migrate T1-L2VPN-02 to the same host as the NSX-l2t-
client. Both should reside on sa-esxi-01.vclass.local.

2. Verify that T1-L2VPN-02 is connected to Remote_Network.


a. In the vSphere Web Client inventory, right-click T1-L2VPN-02 and select Edit
Settings.



b. Verify the network connection by ensuring Network adapter 1 has the value
Remote_Network.
Alternately, click the drop-down menu and select Remote_Network to verify the network
connection.
3. In the vSphere Web Client, open a console to T1-L2VPN-01.
a. In the vCenter Hosts and Clusters inventory pane, select T1-L2VPN-01, click the
Summary tab, and click the gear icon in the console image to select Launch Web
Console.
4. Log in to the T1-L2VPN-01 VM using the username vmware and the password
VMware1!.

a. Verify connectivity with T1-L2VPN-02.


ping -c 3 172.16.50.12

5. Return to the vCenter Hosts and Clusters inventory pane, select T1-L2VPN-02, click
the Summary tab, and click the gear icon in the console image to select Launch Web
Console.
6. Log in to T1-L2VPN-02 VM using the username vmware and the password VMware1!.
a. Verify bidirectional connectivity from T1-L2VPN-02 to T1-L2VPN-01.



ping -c 3 172.16.50.11
You have verified bidirectional communication between the two VMs at the end of the
VPN tunnel.
7. Close both the consoles by clicking the X on their respective web tabs.
8. Open MTPuTTY and connect to sa-nsxedge-03.
a. Log in with the user name admin and password VMware1!VMware1!.
b. Verify that the L2VPN session is active, identify the peers, and ensure that the tunnel
status is up.
get ipsecvpn session active

9. Verify that the sessions are up.


get ipsecvpn session status

10. Check whether the ipsecvpn session is up between the local and remote peers.



get ipsecvpn session summary

11. Get the l2vpn session, tunnel, and IPSEC session numbers, and check that the status is UP.
get l2vpn sessions

12. Get statistics for the local and remote peers: whether the status is UP, the count of
packets and bytes received (RX), packets transmitted (TX), and packets dropped,
malformed, or looped.



get l2vpn session stats

13. Get the session configuration information.


get l2vpn session config



Lab 14 Configuring the NSX
Distributed Firewall

Objective: Create NSX distributed firewall rules to allow or deny application traffic

In this lab, you perform the following tasks:


1. Prepare for the Lab
2. Test the IP Connectivity
3. Create IP Set Objects
4. Create Firewall Rules
5. Create an Intertier Firewall Rule to Allow SSH Traffic
6. Create an Intertier Firewall Rule to Allow MySQL Traffic
7. Prepare for the Next Lab

Task 1: Prepare for the Lab

You log in to the vSphere Web Client UI and the NSX Manager UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.

2. Log in to the NSX Simplified UI.


a. Open another tab in the Chrome web browser.
b. Click the NSX-T Data Center > NSX Manager bookmark.
c. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.

148 Lab 14 Configuring the NSX Distributed Firewall


Task 2: Test the IP Connectivity
You test various types of connections, including ICMP, SSH, SQL, HTTP, and HTTPS. You
should have full access because the default firewall rule is Allow.
1. From the vSphere Web Client Home page, click Hosts and Clusters and open a console to
T1-Web-01.
2. Log in to T1-Web-01 with root as the user name and VMware1! as the password.
3. Test the ICMP reachability.
ping -c 2 172.16.10.1 (default gateway)

ping -c 2 172.16.10.12 (T1-Web-02)

ping -c 2 172.16.10.13 (T1-Web-03)

ping -c 2 172.16.20.11 (T1-App-01)

ping -c 2 172.16.30.11 (T1-DB-01)

All pings should be successful.



4. Test the SSH connections.
a. From the T1-Web-01 console, establish an SSH connection to T1-App-01.
• Establish an SSH connection.
ssh 172.16.20.11

• If the Are you sure you want to continue connecting? message appears,
enter yes.

• Enter VMware1! as the password when prompted.


You should be able to enter T1-App-01’s command prompt through SSH.

• Terminate the SSH connection.


exit

b. From the console of T1-Web-01, establish an SSH connection to T1-DB-01.


• Establish an SSH connection.
ssh 172.16.30.11

• If the Are you sure you want to continue connecting? message appears,
enter yes.
• Enter VMware1! as the password when prompted.
You should be able to enter T1-DB-01’s command prompt through SSH.

• Terminate the SSH connection.


exit

5. Test the HTTP access.


a. From the T1-Web-01 console, request an HTTP webpage from T1-Web-02.
curl http://172.16.10.12

b. Verify that a HTTP response is returned from the T1-Web-02 server.



6. Test the SQL access.
a. From vSphere Web Client, open a console to T1-App-01 and enter root as the user
name and VMware1! as the password.
b. Connect to the SQL database and enter VMware1! when prompted for the password.
mysql -u root -h 172.16.30.11 -p

c. Verify that the mysql prompt is available to query the database.

d. Press Ctrl+C to exit.

Task 3: Create IP Set Objects


You create three IP Sets for Web-Tier, App-Tier, and DB-Tier for future definition of firewall
rules.
1. On the NSX Simplified UI Home page, click Inventory > Domains.
2. Click ADD DOMAIN.
3. Provide the configuration details in the ADD DOMAIN window.
• Name: Production
a. Click SAVE.
b. When the message Please continue configuring groups in this
Production is displayed, click YES.



4. Click ADD GROUP.
• Group Name: Web-VMs.
• Compute Members: Click Set Members followed by +ADD CRITERIA.
a. Expand the Criteria 1 by clicking the expand > symbol and enter the following
configuration values.
• First entry: Virtual Machine
• Second entry: Name
• Third entry: contains
• Fourth entry: Web

b. Click APPLY and then click SAVE.

5. On the Add Groups - Production window, click View Members.


Verify that all the three web VMs are listed.

6. Click CLOSE followed by ADD GROUP.



7. Repeat step 4 for App-VMs and DB-VMs.
• For the App-VMs group:
Expand the Criteria 1 by clicking the expand > symbol and enter the following
configuration values.

• First entry: Virtual Machine


• Second entry: Name
• Third entry: contains
• Fourth entry: app
• For the DB-VMs group:
Expand the Criteria 1 by clicking the expand > symbol and enter the following
configuration values.

• First entry: Virtual Machine


• Second entry: Name
• Third entry: contains
• Fourth entry: db
8. Click SAVE and CLOSE.
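The criteria you configured make group membership dynamic: a VM belongs to a group whenever its name contains the given substring. A sketch of that matching, assuming the comparison is case-insensitive (which the lab's lowercase "app" and "db" criteria suggest):

```python
# Sketch of NSX dynamic group membership by name criteria:
# a VM is a member when its name contains the criterion substring.
# Case-insensitive matching is an assumption here.

def members(vms, substring):
    return [vm for vm in vms if substring.lower() in vm.lower()]

vms = ["T1-Web-01", "T1-Web-02", "T1-Web-03", "T1-App-01", "T1-DB-01"]
print(members(vms, "Web"))  # the three web VMs
print(members(vms, "app"))  # ['T1-App-01']
```

Because membership is evaluated dynamically, a new VM named T1-Web-04 would join the Web-VMs group automatically.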



Task 4: Create Firewall Rules
You create an infrastructure firewall rule to block all the external web traffic to the web tier and
allow intratier web to web traffic.
1. On the NSX Simplified UI Home page, click Security > East West Security >
Distributed Firewall > APPLICATION.
2. Click + ADD POLICY.

a. Click on the Name in the new section and enter Control-Intratier-Traffic.

b. Click default in Domain and select Production from the list and click SAVE.



3. Click the vertical ellipsis icon and select Add Rule twice.

4. Enter the following rule configurations:


Top Rule
• Name: Allow-Web-to-Web.
• Source: Web-VMs and click APPLY.
• Destination: Web-VMs and click APPLY.
• Services: ICMP Echo Request, ICMP Echo Reply, HTTP, and HTTPS, and click
SAVE.
• Profiles: Any.
• Applied To: Select DFW.
• Action: Select Allow.
Second Rule
• Name : Block-to-Tiers-External.
• Source: Any.
• Destination: Web-VMs, App-VMs, and DB-VMs, and click APPLY.
• Services: Any.



• Profiles: Any.
• Applied To: Select DFW.
• Action: Select Drop.

5. Click PUBLISH.
6. Verify the connectivity from your student desktop to the Web-Tier VMs.
a. From your student desktop, open a browser tab and enter http://172.16.10.11.
The HTTP request should time out as a result of the firewall rule.
b. From your student desktop, open a browser tab and enter
http://172.16.10.12.
The HTTP request should time out as a result of the firewall rule.


c. In the vSphere Web Client open a console into T1-Web-01.
ping -c 3 172.16.10.12
ping -c 3 172.16.10.13
curl http://172.16.10.12
curl http://172.16.10.13

The ping and curl requests should succeed.
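These results follow from top-down, first-match rule evaluation: Allow-Web-to-Web is checked before Block-to-Tiers-External, so intra-web traffic is allowed while other traffic to the tiers is dropped. A sketch with matching simplified to source and destination group names:

```python
# Sketch of top-down, first-match distributed firewall evaluation,
# using the two rules configured in this task. Service matching is
# omitted for brevity.

rules = [
    {"name": "Allow-Web-to-Web", "src": {"Web-VMs"}, "dst": {"Web-VMs"},
     "action": "ALLOW"},
    {"name": "Block-to-Tiers-External", "src": {"ANY"},
     "dst": {"Web-VMs", "App-VMs", "DB-VMs"}, "action": "DROP"},
]

def evaluate(src_group, dst_group):
    for rule in rules:
        src_ok = "ANY" in rule["src"] or src_group in rule["src"]
        dst_ok = "ANY" in rule["dst"] or dst_group in rule["dst"]
        if src_ok and dst_ok:
            return rule["action"]   # first matching rule wins
    return "ALLOW"                  # the lab's default rule allows

print(evaluate("Web-VMs", "Web-VMs"))   # first rule matches -> ALLOW
print(evaluate("External", "Web-VMs"))  # second rule matches -> DROP
```

Rule order matters: if the block rule were first, the web-to-web curl tests would also fail.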



Task 5: Create an Intertier Firewall Rule to Allow SSH
Traffic
You create a firewall rule to allow SSH traffic from Web-Tier VMs to App-Tier VMs.
1. From the T1-Web-01 console, test the SSH access to T1-App-01.
ssh 172.16.20.11

You should not be able to connect.


2. Press Ctrl+C to exit.
If your connection already timed out, you do not need to press Ctrl+C.
3. On the NSX Simplified UI Home page, click Security > East West Security >
Distributed Firewall > CATEGORY SPECIFIC RULES.
a. Click the vertical ellipsis icon next to the Control-Intratier-Traffic policy and
select Add Rule.
4. Enter the following rule configurations:
Top Rule
• Name: Enter Allow-SSH-Intratier.
• Source: Select Web-VMs and click APPLY.
• Destination: Select Web-VMs and App-VMs, and click APPLY.
• Service: Enter SSH in the lookup bar and select SSH from list of services, and click
SAVE.
• Profiles: Select Any.
• Applied To: Select DFW.
• Action: Select Allow.



5. Click PUBLISH.
6. From the T1-Web-01 console, test the SSH access to T1-App-01.
ssh 172.16.20.11

7. Enter VMware1! when prompted for the password.


Your prompt should be changed to the App VM’s prompt, which verifies that your Web-
to-App (Allow) rule is working properly.

8. Close the SSH session.


exit

Task 6: Create an Intertier Firewall Rule to Allow MySQL
Traffic
You create a firewall rule to allow MySQL traffic from App-Tier VMs to DB-Tier VMs.
1. Test the SQL access.
a. From vSphere Web Client, open a console connection to T1-App-01.
b. Connect to T1-DB-01.
mysql -u root -h 172.16.30.11 -p

You should not be able to connect.


c. Press Ctrl+C to close the mysql connection attempt.
2. On the NSX Simplified UI Home page, go to Security > East West Security >
Distributed Firewall > CATEGORY SPECIFIC RULES.
3. Click the vertical ellipsis icon next to the Control-Intratier-Traffic section and
select Add Rule.



4. Provide the configuration details for the new rule.
• Rule Name: Enter Allow-MySQL.
• Source: Select App-VMs.
• Destination: Select DB-VMs.
• Service: Enter MySQL in the service list and select MySQL, and click SAVE.
• Profiles: Select Any.
• Applied To: Select DFW.
• Action: Select Allow.

5. Click PUBLISH.
6. Switch to the T1-App-01 console prompt and test the SQL access again.
a. Test the SQL connectivity.
mysql -u root -h 172.16.30.11 -p

b. Enter VMware1! when prompted for the password.


c. Verify that the mysql prompt appears.

The mysql prompt verifies that the App-to-DB rule is working properly.



d. Close the SQL connection.
exit

Task 7: Prepare for the Next Lab


You disable the user-created distributed firewall sections and reset the default section back to
its default settings.
1. On the NSX Simplified UI Home page, go to Security > East West Security >
Distributed Firewall > CATEGORY SPECIFIC RULES.
2. Disable the Control-Intratier-Traffic section.
a. Click the vertical ellipsis icon next to Control-Intratier-Traffic and select
Disable all rules.



3. Click PUBLISH.

Lab 15 Configuring the NSX Gateway Firewall

Objective: Configure and test the NSX gateway firewall rules to control north-south traffic

In this lab, you perform the following tasks:


1. Prepare for the Lab
2. Test SSH Connectivity
3. Configure a Gateway Firewall Rule to Block External SSH Requests
4. Test the Effect of the Configured Gateway Firewall Rule
5. Prepare for the Next Lab

Task 1: Prepare for the Lab

You log in to the vSphere Web Client UI and the VMware NSX Manager UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.

164 Lab 15 Configuring the NSX Gateway Firewall


2. Log in to the NSX Simplified UI.
a. Open another tab in the Chrome web browser.
b. Click the NSX-T Data Center > NSX Manager bookmark.
c. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.

Task 2: Test SSH Connectivity


You verify that the SSH connections are successful.
1. From MTPuTTY on your student desktop, open the preconfigured SSH connections to T1-App-01, T1-Web-01, T1-Web-02, and T1-Web-03.
2. From T1-Web-01’s MTPuTTY connection, SSH to T1-App-01.
a. Establish an SSH connection.
ssh 172.16.20.11

b. Log in with the password VMware1!.


c. Terminate the SSH connection.
exit



Task 3: Configure a Gateway Firewall Rule to Block External SSH Requests
You configure a Gateway Firewall Rule to block SSH requests from external networks.
1. On the NSX Simplified UI Home page, click Security > North South Security >
Gateway Firewall.
2. From the Gateway drop-down menu, select T0-LR-01.

3. Click + ADD POLICY to add a new policy.



4. Click default in the Domain field of the new policy and select Production.

a. Click SAVE.
5. Edit the New Policy name.
• Name: Enter Block-SSH-Policy.



6. Click the vertical ellipsis icon next to the new policy and select Add Rule.

7. Configure the rule with the following configuration values:


• Name: Enter Block-SSH-from-Outside.
• Source: Any (default).
• Destination: Select App-Tier-VMs, DB-VMs, and Web-Tier-VMs, and click
APPLY.
• Services: Select SSH from the Set Service page and click SAVE.
• Applied to: Select Uplink-1-Intf and Uplink-2-Intf and click SAVE.
• Action: Select DROP.

8. Click PUBLISH.



Task 4: Test the Effect of the Configured Gateway Firewall Rule
You verify that the Gateway Firewall Rule successfully blocks the SSH traffic.
1. Open MTPuTTY from the student desktop and attempt to connect to T1-Web-01, T1-App-01, and T1-DB-01.
Your connections should fail.

a. Close the PuTTY connection attempts by clicking OK and Close.


2. From T1-Web-01, open an SSH connection to T1-App-01.
a. From the vSphere Web Client UI, open a console to T1-Web-01.
b. Establish an SSH connection.
ssh 172.16.20.11



c. Log in with the password VMware1!.
The connection should be successful, because the Gateway Firewall Rule that you
configured does not affect the internal traffic between tenant networks.
d. Terminate the SSH connection.
exit
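When verifying firewall behavior, it helps to distinguish a dropped connection (the attempt times out, as with the DROP action above) from a port that is simply closed (the attempt is refused immediately). This Python sketch classifies a TCP port accordingly; the address in the usage comment is an assumption for T1-Web-01.

```python
import socket

def probe(host, port, timeout=3):
    """Classify host:port as 'open', 'closed' (connection refused),
    'filtered' (no reply before the timeout, typical of a DROP rule),
    or 'error' for any other network failure."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect((host, port))
        return "open"
    except socket.timeout:
        return "filtered"
    except ConnectionRefusedError:
        return "closed"
    except OSError:
        return "error"
    finally:
        sock.close()

# With Block-SSH-from-Outside published, SSH probed from the student
# desktop should look filtered, e.g.:
# print(probe("172.16.10.11", 22))   # address is an assumption
```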

Task 5: Prepare for the Next Lab


You disable the Gateway Firewall Rule.
1. On the NSX Simplified UI Home page, click Security > North South Security >
Gateway Firewall.
2. Click the Enable/Disable toggle to disable the rule.

3. Click PUBLISH.
4. Verify that SSH is allowed from external sources.
5. Open MTPuTTY from the desktop and connect to T1-Web-01, T1-App-01, and T1-DB-01.
Your connections should work.



Lab 16 Managing Users and Roles with VMware Identity Manager

Objective: Integrate NSX Manager with a predeployed VMware Identity Manager appliance

In this lab, you perform the following tasks:


1. Prepare for the Lab
2. Add an Active Directory Domain to VMware Identity Manager
3. Create the OAuth Client for NSX Manager from VMware Identity Manager
4. Gather the VMware Identity Manager Appliance Thumbprint
5. Enable VMware Identity Manager Integration with NSX Manager
6. Assign NSX Roles to Domain Users and Test Permissions
7. Prepare for the Next Lab

Task 1: Prepare for the Lab

You log in to the NSX Manager UI and the Identity Manager Administration Console.
1. From your student desktop, log in to the NSX Simplified UI.
a. Open the Chrome web browser.
b. Click the NSX-T Data Center > NSX Manager bookmark.
c. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.

172 Lab 16 Managing Users and Roles with VMware Identity Manager
2. Log in to the VMware Identity Manager Administration Console.
a. Open another tab in the Chrome web browser.
b. Click the NSX-T Data Center > VMware Workspace ONE - VIDM bookmark.
c. If you see the Your connection is not private message, click ADVANCED
and click Proceed to sa-nsxvidm-01.vclass.local (unsafe).
d. Enter admin as the user name and VMware1! as the password.
e. On your first entry to the VMware Identity Manager, you are greeted by a message
that asks you to join the VMware Customer Experience Improvement Program
(CEIP). For lab purposes, deselect the check box and click OK.

Task 2: Add an Active Directory Domain to VMware Identity Manager
You add a Windows Active Directory Domain to VMware Identity Manager.
1. From the VMware Identity Manager Administration Console, click Identity & Access
Management > Directories.
2. Click Add Directory and select Add Active Directory over LDAP/IWA from the drop-down menu.

3. Provide the configuration details on the Add Directory page.


a. Directory Name:
• Directory Name: Enter vclass.local.
• Select the Active Directory (Integrated Windows Authentication) check box
and scroll down.

b. Directory Sync and Authentication:
• Sync Connector: Leave as sa-nsxvidm-01.vclass.local (default).
• Authentication: Click Yes (default).
• Directory Search Attribute: Select sAMAccountName (default) and scroll down.

c. Certificates:
• Leave the check box deselected (default) and scroll down.

d. Join Domain Details:
• Domain Name: Enter vclass.local.
• Domain Admin Username: Enter administrator.
• Domain Admin Password: Enter VMware1! and scroll down.

e. Bind User Details:


• Bind User Name: Enter administrator@vclass.local.
• Bind User Password: Enter VMware1! and scroll down.

f. Click Save & Next.


The process of adding the domain takes a few minutes and displays various tasks that are
completed.

4. On the Select the Domains page, ensure that Domain and vclass.local (VCLASS) are
selected and click Next.

5. On the Map User Attributes page, leave the default settings, and click Next.

6. On the Select the groups that you want to sync page, provide the necessary
specifications.
a. Leave the Sync nested group members check box selected (default).
b. In the Specify the group DNs row, click the green plus sign.
• When the Specify the group DNs text box appears, specify the group DNs.
CN=NSX-Users,CN=Users,DC=vclass,DC=local

• Click Find Groups.


• Select the Select All check box.
The number of Groups to sync should be 1 of 1.

c. Click Next.

7. On the Select the Users you would like to sync page, provide the necessary
specifications.
a. In the Specify the user DNs row, click the green plus sign.
• When the Specify the user DNs text box appears, enter the values.
CN=John Doe,CN=Users,DC=vclass,DC=local

b. Click Next.
8. On the Review page, verify that there is one user and one group ready to synchronize, and
click Sync Directory.
The Import Status: Sync started message appears.

9. Click the Refresh Page link.

10. Once the synchronization process completes, verify that there is one user and one group
listed in the vclass.local directory.
The green check mark indicates that the synchronization process is successful.
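The group and user DNs entered above follow standard LDAP distinguished-name syntax: the most specific component first, then the container CNs, then the DNS domain split into DC components. A small helper illustrates how they are assembled:

```python
def build_dn(common_name, containers, domain):
    """Assemble an LDAP distinguished name: CN of the object, CNs of its
    containers, then one DC component per label of the DNS domain."""
    parts = ["CN=" + common_name]
    parts += ["CN=" + c for c in containers]
    parts += ["DC=" + label for label in domain.split(".")]
    return ",".join(parts)

# The DNs used in this task:
print(build_dn("NSX-Users", ["Users"], "vclass.local"))
# CN=NSX-Users,CN=Users,DC=vclass,DC=local
print(build_dn("John Doe", ["Users"], "vclass.local"))
# CN=John Doe,CN=Users,DC=vclass,DC=local
```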

Task 3: Create the OAuth Client for NSX Manager in VMware Identity Manager
You create a new OAuth client for NSX Manager from the VMware Identity Manager
Administration Console.
1. From VMware Identity Manager Administration Console, click the down arrow next to the
Catalog tab and select Settings from the drop-down menu.
2. In the left pane, select Remote App Access.

3. On the Clients tab, click Create Client.

4. Provide the configuration details in the Create Client window.
• Access Type: Select Service Client Token.
• Client ID: Enter sa-nsxmgr-01-OAuthClient.
• Click the triangle to expand the Advanced option.
• Click the Generate Shared Secret link to populate the Shared Secret text box.
Copy and paste the shared secret in a notepad.

• Leave all the other values as default.

5. Click Add.

6. Verify the OAuthClient addition.

Task 4: Gather the VMware Identity Manager Appliance Fingerprint
You gather the SHA-256 fingerprint information for the VMware Identity Manager appliance.
1. On your student desktop, open the MTPuTTY application from the system tray and
double-click SA-NSX-vIDM-01 to open a console connection.
2. When the PuTTY Security Alert appears, click Yes to proceed.
3. Gain root access by entering sudo -s and VMware1! as the password.
4. Navigate to the VMware Identity Manager appliance configuration directory.
cd /usr/local/horizon/conf/

5. Collect the SHA-256 fingerprint of the VMware Identity Manager and record it in a
notepad.
openssl x509 -in sa-nsxvidm-01.vclass.local_cert.pem -noout -sha256
-fingerprint

6. Copy and paste the fingerprint to notepad.
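The value that openssl prints is simply the SHA-256 digest of the DER-encoded certificate, rendered as colon-separated uppercase hex. A Python sketch of the same computation follows; the file path in the usage comment is the lab path from the step above.

```python
import base64
import hashlib
import re

def sha256_fingerprint(pem_text):
    """Return an openssl-style SHA-256 fingerprint (uppercase hex pairs
    separated by colons) for the first certificate in a PEM string."""
    body = re.search(
        r"-----BEGIN CERTIFICATE-----(.*?)-----END CERTIFICATE-----",
        pem_text, re.S).group(1)
    der = base64.b64decode("".join(body.split()))
    digest = hashlib.sha256(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Usage against the lab file:
# with open("/usr/local/horizon/conf/"
#           "sa-nsxvidm-01.vclass.local_cert.pem") as f:
#     print(sha256_fingerprint(f.read()))
```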

Task 5: Enable VMware Identity Manager Integration with NSX Manager
You integrate VMware Identity Manager with NSX Manager.
1. On the NSX Simplified UI Home page, click System > Users and click the
Configuration tab.
2. Click the EDIT link.
3. Provide the configuration details in the Edit VMware Identity Manager Parameters
window.
• External Load Balancer Integration: Select Enabled.
• VMware Identity Manager Integration: Select Enabled.
• VMware Identity Manager Appliance: Enter sa-nsxvidm-01.vclass.local.
• OAuth Client ID: Enter sa-nsxmgr-01-OAuthClient, which is the Client ID that
you created in task 3.
• OAuth Client Secret: Enter the shared secret that you collected in task 3.
• SSL Thumbprint: Paste the SHA-256 fingerprint that you collected in task 4 with
MTPuTTY.
• NSX Appliance: Enter 172.20.10.48.

4. Click SAVE.

5. Verify that the VMware Identity Manager Connection status is Up and the VMware
Identity Manager Integration status is Enabled.

NOTE
You might need to wait approximately 5 minutes and refresh the browser before
proceeding.
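The form fields above map onto a single configuration object. As a sketch, they can be assembled as follows; the field names, and the idea that they are submitted in one body (for example, to the NSX Manager node AAA provider API), are assumptions to verify against the NSX-T API guide, and the placeholders stand for the values you recorded in tasks 3 and 4.

```python
def build_vidm_config(vidm_host, client_id, client_secret,
                      thumbprint, node_host, lb_enable=True):
    """Assemble the VMware Identity Manager integration settings as one
    JSON-style body. Field names mirror the UI form and are assumptions."""
    return {
        "vidm_enable": True,
        "lb_enable": lb_enable,          # External Load Balancer Integration
        "host_name": vidm_host,          # VMware Identity Manager Appliance
        "client_id": client_id,          # OAuth Client ID
        "client_secret": client_secret,  # OAuth Client Secret
        "thumbprint": thumbprint,        # SSL Thumbprint (SHA-256)
        "node_host_name": node_host,     # NSX Appliance
    }

cfg = build_vidm_config(
    "sa-nsxvidm-01.vclass.local",
    "sa-nsxmgr-01-OAuthClient",
    "<shared-secret-from-task-3>",
    "<sha256-fingerprint-from-task-4>",
    "172.20.10.48",
)
```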

Task 6: Assign NSX Roles to Domain Users and Test Permissions
You assign an NSX user role to an Active Directory domain user and verify the user's
permissions.
1. On the NSX Simplified UI home page, click System > Users and click on the Role
Assignments tab.
2. Click ADD > Role Assignment.
3. When the ADD ROLE window appears, search for jdoe in the Search Users/Users
Groups section and select the user jdoe@vclass.local.
4. In the Roles pane, select Security Engineer from the drop-down menu.
5. Click SAVE.
6. In the upper-right corner of the NSX Simplified UI, click the User icon and select Log out
to log out as admin.
7. Switch back to the VMware Identity Manager Administration Console and click Local Admin >
Logout to log out as admin.

8. Log in to the NSX Simplified UI at the Virtual IP address (https://172.20.10.48) as the
new user jdoe.
The VMware Identity Manager login page appears.
a. Verify that the vclass.local domain is selected. Otherwise, click Change to a
different domain to select it.
b. Click Next.
c. Enter jdoe as the user name, VMware1! as the password, and click Sign in.

9. In the upper-right corner of the NSX Simplified UI, click the User icon to verify that you are
logged in as jdoe@vclass.local.

10. Click Networking > Segments and verify that the ADD SEGMENT option is grayed
out.
The grayed out option indicates that users with the Security Engineer role do not have
permissions to configure segments.
11. Click System > Fabric > Nodes > Edge Transport Nodes and verify that the +ADD
Edge VM option is grayed out.
The grayed out option indicates that users with the Security Engineer role do not have
permission to configure routing.
12. In the upper-right corner of the NSX Simplified UI, click the User icon and select Log out to
log out as jdoe@vclass.local.

Task 7: Prepare for the Next Lab


You disable the integration between VMware Identity Manager and NSX Manager.
1. Open a new tab in your browser and enter
https://172.20.10.48/login.jsp?local=true (NSX Manager Virtual IP address
and local login enabled to bypass VMware Identity Manager).
2. From the NSX Manager login page, enter admin as the user name and
VMware1!VMware1! as the password, and click LOG IN.

3. On the NSX Simplified UI Home page, click System > Users and click the
Configuration tab.
4. Click the EDIT link.
5. When the Edit VMware Identity Manager Parameters menu appears, change the VMware
Identity Manager Integration and External Load Balancer options to Disabled and
click SAVE.

6. Log out of the NSX Simplified UI and log in again to https://172.20.10.48/login.jsp?local=true as
user admin with password VMware1!VMware1! to validate that VMware Identity
Manager is properly disabled.
Your login should be successful.
7. Bookmark the new NSX Simplified UI URL and log in using it.
Ensure that you perform this step.
a. To enable you to use the correct URL, right-click the NSX Data Center favorites tab
and select Add page.
b. In the Name field, enter NSX After vIDM.
c. In the URL field, enter https://172.20.10.48/login.jsp?local=true.

d. Click the link to test it. You should be able to log in as user admin with password
VMware1!VMware1!.

Lab 17 Configuring Syslog

Objective: Configure Syslog to collect log messages

In this lab, you perform the following tasks:


1. Prepare for the Lab
2. Configure Syslog on NSX Manager and Review the Collected Logs
3. Configure Syslog on an NSX Edge Node and Review the Collected Logs

Task 1: Prepare for the Lab

You log in to the NSX Manager UI.


1. From your student desktop, open the Chrome web browser.
2. Click the NSX-T Data Center > NSX After vIDM bookmark.
3. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.

192 Lab 17 Configuring Syslog


Task 2: Configure Syslog on NSX Manager and Review the Collected Logs
You configure a Syslog server address on NSX Manager and review the collected logs from the
remote Syslog collector.
1. From MTPuTTY, double-click sa-nsxmgr-01.
2. Configure NSX Manager to send TCP info level log messages to the Syslog server on
student-a-01.vclass.local.
set logging-server student-a-01.vclass.local:1468 proto tcp level
info

You can use the DNS name or the IP address of the Syslog server in your configuration.
3. Verify your logging configuration.
get logging-server

4. Start the Kiwi Syslog Server Console.


a. Expand the System Tray of your student desktop, right-click the Kiwi icon and select
Restore.

5. Verify that the log messages from NSX Manager with the IP address of 172.20.10.41
appear in Kiwi Syslog Server Console.



6. Return to the sa-nsxmgr-01 MTPuTTY session and remove the Syslog server
configuration.
del logging-server student-a-01.vclass.local:1468 proto tcp level
info

a. Verify that the logging server is removed.


get logging-server

Only a blank system prompt should be returned.
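Kiwi listens for TCP syslog on port 1468, so the logging path can also be confirmed independently of NSX by hand-sending a message. This sketch uses minimal RFC 3164-style framing (a priority tag and a trailing newline); it is not the exact format NSX emits.

```python
import socket

def send_syslog_tcp(host, port, message, facility=1, severity=6):
    """Send one newline-terminated syslog message over TCP.
    Priority = facility * 8 + severity (severity 6 = informational)."""
    pri = facility * 8 + severity
    frame = "<%d>%s\n" % (pri, message)
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(frame.encode())

# Lab collector from the steps above:
# send_syslog_tcp("student-a-01.vclass.local", 1468, "lab17 test message")
```

If the test message appears in the Kiwi console, the collector and network path are good, and any remaining problem is in the NSX logging configuration itself.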

Task 3: Configure Syslog on an NSX Edge Node and Review the Collected Logs
You configure a Syslog server address on NSX Edge and review the collected logs from the
remote Syslog collector.
1. From MTPuTTY, double-click sa-nsxedge-01.
2. Configure the NSX Edge Node with a DNS server.
set name-servers 172.20.10.10

3. Configure NSX Edge Node to send TCP info level log messages to the Syslog server.
set logging-server student-a-01.vclass.local:1468 proto tcp level
info

4. Verify your logging configuration.


get logging-servers

5. Go back to Kiwi Syslog Server Console and verify that the log messages from NSX Edge
Node with the IP address of 172.20.10.61 appear.

6. Return to the sa-nsxedge-01 MTPuTTY session and remove the Syslog server
configuration.
del logging-server student-a-01.vclass.local:1468 proto tcp level
info

a. Verify the logging server removal.


get logging-server

Only a blank system prompt should be returned.


7. Close all MTPuTTY sessions and Kiwi Syslog Server Console.



Lab 18 Generating Technical Support Bundles

Objective: Generate and download a technical support bundle for NSX Manager

In this lab, you perform the following tasks:


1. Prepare for the Lab
2. Generate a Technical Support Bundle for NSX Manager
3. Download the Technical Support Bundle

Task 1: Prepare for the Lab

You log in to the NSX Manager UI.


1. From your student desktop, open the Chrome web browser.
2. Click the NSX-T Data Center > NSX After vIDM bookmark.
3. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.

196 Lab 18 Generating Technical Support Bundles


Task 2: Generate a Technical Support Bundle for NSX Manager
You generate a technical support bundle to gather log and configuration information for NSX
Manager.
1. On the NSX Simplified UI Home page, click System > Support Bundle.
2. At the Request Bundle step, verify that Management Nodes is selected from the Type
drop-down menu.

3. From the Available pane, select the sa-nsxmgr-01 check box and click the right arrow to
move it to the Selected pane.



4. Set Log age (days) to 1 by clicking the down arrow.

5. Click Include core files and audit logs to change it to Yes.

6. Click START BUNDLE COLLECTION.


7. In the Status step, monitor the collection progress, which takes approximately 10 minutes
to complete.



Task 3: Download the Technical Support Bundle
You download the NSX Manager technical support bundle to your student desktop.
1. In the Support Bundle Status window, click DOWNLOAD.
2. Select Desktop in the left pane and click Save to save the nsx_support_archive file
to your student desktop.

3. Verify that the nsx_support_archive_########-######.tar file exists on the
student desktop, where the # symbols represent the date and file number.



Lab 19 Using Traceflow to Inspect the Path of a Packet

Objective: Use Traceflow to inspect the path of a packet as it travels from source to destination

In this lab, you perform the following tasks:


1. Prepare for the Lab
2. Configure a Traceflow Session
3. Examine the Traceflow Output

Task 1: Prepare for the Lab

You log in to the vSphere Web Client UI and the NSX Manager UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.

2. Log in to the NSX Simplified UI.


a. Open another tab in the Chrome web browser.
b. Click the NSX-T Data Center > NSX After vIDM bookmark.
c. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.

202 Lab 19 Using Traceflow to Inspect the Path of a Packet


Task 2: Configure a Traceflow Session
You specify the source VM and the destination VM of a Traceflow session.
1. On the vSphere Web Client UI Home page, click Hosts and Clusters.
2. Ensure that T1-Web-01 and T1-App-01 reside on different hosts.
Otherwise, use vSphere vMotion to migrate the VMs as needed.
3. On the NSX Manager Simplified UI Home page, click Advanced Networking & Security
> Tools > Traceflow.
4. On the Traceflow tab, configure the source and destination VM details.
• IP Address: Select IPv4.
• Traffic Type: Select Unicast (default).
• Source:
• Type: Select Virtual Machine (default).
• VM Name: Select T1-App-01.
• Virtual Interface: Select Network adapter 1 (default).
• Destination:
• Type: Select Virtual Machine (default).
• VM Name: Select T1-Web-01.
• Virtual Interface: Select Network adapter 1 (default).

5. Click TRACE.
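A Traceflow session can also be created programmatically through the NSX-T Manager API. The sketch below only assembles a request body in the general shape of that API; the endpoint (POST /api/v1/traceflows), the field names, and the IP address used for T1-Web-01 are assumptions to check against the API guide, and the port ID is a placeholder.

```python
def build_traceflow_request(source_port_id, src_ip, dst_ip):
    """Assemble a Traceflow request: inject a unicast IPv4 packet on the
    source VM's logical port, addressed to the destination VM."""
    return {
        "lport_id": source_port_id,  # logical port of T1-App-01's vNIC
        "packet": {
            "resource_type": "FieldsPacketData",
            "transport_type": "UNICAST",
            "ip_header": {"src_ip": src_ip, "dst_ip": dst_ip},
        },
    }

req = build_traceflow_request(
    "<source-logical-port-uuid>",  # placeholder: look up in the UI or API
    src_ip="172.16.20.11",         # T1-App-01 (from the earlier labs)
    dst_ip="172.16.10.11",         # assumption for T1-Web-01
)
```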



Task 3: Examine the Traceflow Output
You examine the Traceflow output, including how the packet is injected in the data path, which
components are involved, and how the packet is delivered.
1. If you see a trace observation warning message, ignore it and close the window because
your lab runs in a nested ESXi environment.

2. Verify that the Traceflow output appears, including a diagram on the left and the steps of
the packet walk on the right.

3. In the first row of the packet walk, verify that a packet is injected through the Transport
Node.
4. In the second and third rows, verify that the distributed firewall receives the packet, applies
firewall rules, and forwards the packet to the App-LS logical switch.
5. From the fourth to the seventh rows, verify that App-LS is attached to the gateway T1-LR-1,
which receives the packet and forwards it to the attached logical segment Web-LS.
6. In the eighth and ninth rows, verify that the source VTEP and destination VTEP IP
addresses appear, because the source and the destination VMs reside on two different
hosts.
7. In the tenth and eleventh rows, verify that the distributed firewall receives the packet and
applies firewall rules, if any, at the destination host.
