
Network Automation Using

Contrail Cloud
2.a

Student Guide
Volume 2 of 2

Worldwide Education Services

1133 Innovation Way


Sunnyvale, CA 94089
USA
408-745-2000
www.juniper.net

Course Number: EDU-JUN-NACC


This document is produced by Juniper Networks, Inc.
This document or any part thereof may not be reproduced or transmitted in any form under penalty of law, without the prior written permission of Juniper Networks Education
Services.
Juniper Networks, Junos, Steel-Belted Radius, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. The
Juniper Networks Logo, the Junos logo, and JunosE are trademarks of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service
marks are the property of their respective owners.
Network Automation Using Contrail Cloud Student Guide, Revision 2.a
Copyright © 2016 Juniper Networks, Inc. All rights reserved.
Printed in USA.
Revision History:
Revision 2.a—March 2016.
The information in this document is current as of the date listed above.
The information in this document has been carefully verified and is believed to be accurate for software Release 2.21. Juniper Networks assumes no responsibility for any
inaccuracies that may appear in this document. In no event will Juniper Networks be liable for direct, indirect, special, exemplary, incidental, or consequential damages
resulting from any defect or omission in this document, even if advised of the possibility of such damages.

Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
YEAR 2000 NOTICE
Juniper Networks hardware and software products do not suffer from Year 2000 problems and hence are Year 2000 compliant. The Junos operating system has no known
time-related limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
SOFTWARE LICENSE
The terms and conditions for using Juniper Networks software are described in the software license provided with the software, or to the extent applicable, in an agreement
executed between you and Juniper Networks, or Juniper Networks agent. By using Juniper Networks software, you indicate that you understand and agree to be bound by its
license terms and conditions. Generally speaking, the software license restricts the manner in which you are permitted to use the Juniper Networks software, may contain
prohibitions against certain uses, and may state conditions under which the license is automatically terminated. You should consult the software license for further details.
Contents

Chapter 5: Service Chaining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1


Service Chaining Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
In-Network Service Chain and Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-12
Transparent Service Chain and Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-31
Using Heat Templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-53
Lab: Service Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-69

Chapter 6: Contrail Analytics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1


Monitor > Infrastructure Workspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
Monitor > Networking Workspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-28
Analyzing Live Traffic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-38
Flow Queries and Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-59
Underlay Overlay Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-71
Analytics API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-81

Chapter 7: Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1


Contrail CLI Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Fabric Utility Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-12
OpenStack CLI Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-15
vRouter Commands and Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-22
Using Contrail Introspect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-35
Lab: Performing Analysis and Troubleshooting in Contrail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-44

Appendix A: Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1


Pre-Installation and Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .A-3
Installation Using Fabric Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .A-9
Additional Settings and Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-22
High Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-36
Configuring Simple Virtual Gateway . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-45
Installation Using Server Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-51
Lab: Installation of the Contrail Cloud. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-65

Acronym List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ACR-1

Course Overview

This two-day course is designed to provide students with the knowledge required to work with the Juniper Contrail
software-defined networking (SDN) solution. Students will gain in-depth knowledge of how to use the OpenStack and
Contrail Web UIs and APIs to perform the required tasks. Through demonstrations and hands-on labs, students will gain
experience with the features of Contrail. This course is based on Contrail Release 2.21.
Course Level
Network Automation Using Contrail Cloud is an intermediate-level course.
Intended Audience
This course benefits individuals responsible for working with software-defined networking solutions in data center,
service provider, and enterprise network environments.
Prerequisites
The prerequisites for this course are as follows:
• Basic TCP/IP skills;
• General understanding of data center virtualization;
• Basic understanding of the Junos operating system;
• Attendance of the Introduction to the Junos Operating System (IJOS) and Juniper Networks SDN
Fundamentals (JSDNF) courses prior to attending this class; and
• Basic knowledge of object-oriented programming and Python scripting is recommended.
Objectives
After successfully completing this course, you should be able to:
• Define basic SDN principles and functionality.
• Define basic OpenStack principles and functionality.
• Define basic Contrail principles and how they relate to OpenStack.
• List and define the components that make up the Contrail solution.
• Explain where Contrail fits into NFV and SDN.
• Describe the functionality of the Contrail control and data planes.
• Describe Nova Docker support in Contrail.
• Describe extending a Contrail cluster with physical routers.
• Describe support for TOR switches and OVSDB.
• Describe the OpenStack and Contrail WebUIs.
• Create a tenant project.
• Create and manage virtual networks.
• Create and manage policies.
• Create and assign floating IP addresses.
• Add an image and launch an instance from it.
• Describe how a tenant is created internally.
• Use Contrail's API to configure OpenStack and Contrail.
• Describe service chaining within Contrail.
• Set up a service chain.
• Explain the use of Heat Templates with Contrail.

• Manipulate the WebUI monitoring section.
• Extract key information regarding the Contrail infrastructure.
• Extract key information regarding traffic flows and packet analysis.
• Create various types of filters and queries to generate data.
• Describe Ceilometer support in a Contrail Cloud.
• Perform TTL Configuration for Analytics Data.
• Use various troubleshooting tools for debugging Contrail.

Course Agenda

Day 1
Chapter 1: Course Introduction
Chapter 2: Contrail Overview
Chapter 3: Architecture
Chapter 4: Basic Configuration
Lab: Tenant Implementation and Management
Day 2
Chapter 5: Service Chaining
Lab: Service Chains
Chapter 6: Contrail Analytics
Chapter 7: Troubleshooting
Lab: Performing Analysis and Troubleshooting in Contrail
Appendix A: Installation
Lab: Installation of the Contrail Cloud (Optional)

Document Conventions

CLI and GUI Text


Frequently throughout this course, we refer to text that appears in a command-line interface (CLI) or a graphical user
interface (GUI). To make the language of these documents easier to read, we distinguish GUI and CLI text from standard
text according to the following table.

Style            Description                    Usage Example

Franklin Gothic  Normal text.                   Most of what you read in the Lab
                                                Guide and Student Guide.

Courier New      Console text:                  commit complete
                   • Screen captures            Exiting configuration mode
                   • Noncommand-related syntax

                 GUI text elements:             Select File > Open, and then click
                   • Menu names                 Configuration.conf in the Filename
                   • Text field entry           text box.

Input Text Versus Output Text


You will also frequently see cases where you must enter input text yourself. Often these instances will be shown in the
context of where you must enter them. We use bold style to distinguish text that is input versus text that is simply
displayed.

Style       Description                 Usage Example

Normal CLI  No distinguishing variant.  Physical interface:fxp0, Enabled

Normal GUI                              View configuration history by clicking
                                        Configuration > History.

CLI Input   Text that you must enter.   lab@San_Jose> show route

GUI Input                               Select File > Save, and type config.ini
                                        in the Filename field.

Defined and Undefined Syntax Variables


Finally, this course distinguishes between regular text and syntax variables, and it also distinguishes between syntax
variables where the value is already assigned (defined variables) and syntax variables where you must assign the value
(undefined variables). Note that these styles can be combined with the input style as well.

Style          Description                             Usage Example

CLI Variable   Text where the variable's value is      policy my-peers
GUI Variable   already assigned.                       Click my-peers in the dialog.

CLI Undefined  Text where the variable's value is at   Type set policy policy-name.
GUI Undefined  the user's discretion, or where the     ping 10.0.x.y
               variable's value as shown in the lab    Select File > Save, and type filename
               guide might differ from the value the   in the Filename field.
               user must input according to the lab
               topology.

Additional Information

Education Services Offerings


You can obtain information on the latest Education Services offerings, course dates, and class locations from the World
Wide Web by pointing your Web browser to: http://www.juniper.net/training/education/.
About This Publication
The Network Automation Using Contrail Cloud Student Guide was developed and tested using software Release 2.21.
Previous and later versions of software might behave differently so you should always consult the documentation and
release notes for the version of code you are running before reporting errors.
This document is written and maintained by the Juniper Networks Education Services development team. Please send
questions and suggestions for improvement to training@juniper.net.
Technical Publications
You can print technical manuals and release notes directly from the Internet in a variety of formats:
• Go to http://www.juniper.net/techpubs/.
• Locate the specific software or hardware release and title you need, and choose the format in which you
want to view or print the document.
Documentation sets and CDs are available through your local Juniper Networks sales office or account representative.
Juniper Networks Support
For technical support, contact Juniper Networks at http://www.juniper.net/customers/support/, or at 1-888-314-JTAC
(within the United States) or 408-745-2121 (outside the United States).


Chapter 5: Service Chaining



We Will Discuss:
• Service chaining within Contrail;
• Configuring service chains;
• Configuring Source NAT; and
• Using Heat templates.


Service Chaining Overview


The slide lists the topics we will discuss. We discuss the highlighted topic first.


Data Center Evolution


Traditionally, network services have been delivered on standalone appliances that are inflexible, difficult to scale, and add
complexity to the network. Data centers built this way suffer from drawbacks such as vertical resource allocation, manual
configuration, and static service chains. In practice, this makes it next to impossible to move workloads around, because the
network services cannot easily move with them. In contrast, a completely virtualized environment offers much more agility:
resources are dynamically allocated, configuration can be abstracted and automated, and service chains can be dynamic.
Dynamically chained virtual services can be applied in many different ways. Network operators can use these services to
replace network functions currently hosted on physical appliances, thereby improving the efficiency and operations of their
own internal networks. Dynamically chained network services can also serve as the foundation for new offerings.


What Is Service Chaining?


The concept of service chaining is nothing new. Chances are you already do some manner of service chaining in your data
center, where traffic between two endpoints is routed or cabled through external service devices such as firewalls or load
balancers. Dynamic service chaining takes this a step further: it extracts those services from network and security devices,
runs them as VMs, and automatically chains them together into a logical flow.
The slide shows an extremely simple, highly abstracted service chain schema. On the left, we have VM-A1 and, on the right,
VM-A2. In between those two virtual machines (VMs) is a service VM. Service VMs can be firewalls, deep packet inspection
(DPI) devices, load balancers, and so on. Under the hood, Contrail takes the building block approach discussed previously
and leverages it to build these types of service chains. We explore more complex examples on the following pages.
Note that typically, when considering service chaining, the left interface points to the internal end customer, who uses the
service, and the right interface points to the external network or Internet.


Deployment Locations
Services can be deployed in multiple locations, including the hypervisor, VMs, physical devices, access control lists (ACLs) on
the physical access switch, and a service card within a router or a switch.
The most typical option for SDN/NFV solutions is to deploy services inside VMs.


Deployment on Network Boundaries


Services can be associated with one or more VMs, for example, as a result of attaching a security policy to one or more VMs.
Alternatively, services may be associated with network boundaries, for example, by attaching a security policy to a network
boundary or by inserting a load balancer at a network boundary. As shown on the slide, these network boundaries might be
between:
• A tenant network and an external network (the Internet, or a virtual private network [VPN] to an enterprise
network);
• The network of one tenant and the network of another tenant; or
• Multiple networks of the same tenant.


Service Chain Examples


The slide shows several examples of service chains as follows:
• The first example is a cloud data center connection between the Internet and a web server. In this example, the
Stateful Firewall service protects the application and the Application Delivery Controller (ADC) provides load
balancing of network traffic across multiple instances of the web server. Service chaining allows each service
within the chain to elastically scale based on need. That is, the service chain dynamically adjusts the links
within the chain as instances of the services come and go.
• The second example is between two components of a cloud application; in this case between Web server and
middle-tier application VMs. The traffic between these VMs must be isolated from other traffic within the cloud
data center and the load needs to be balanced across application instances with an ADC service. With service
chaining, all of this is done in software. The chain forms a virtual network where the endpoints are the virtual
switches within the hypervisors of the servers that run the application VMs. Additionally, the service chain
dynamically adjusts the links in the chain when the data center orchestration system moves a VM from one
physical server to another. Furthermore, the underlay network requires no changes during these moves.
• While the first two service chain examples apply to the cloud data center, the third example is in a completely
different domain—the mobile service provider edge. In this case, the network traffic comes from a cell phone
tower; it moves through an edge router and then a set of processing steps are performed in series. The Evolved
Packet Core extracts the IP sessions from the network tunnels connected to the cell tower base stations.
Immediately, this traffic is analyzed and processed by a stateful firewall. DPI is used to determine traffic
patterns and generate analytics information. The subscriber policy enforcement function applies subscriber
policies, such as enhancing the quality of service for premium subscribers. Finally, as the traffic heads out to the
Internet, carrier-grade Network Address Translation (NAT) provides the traffic with a public IP address.


Service Chain Packet Flow


When you create a service chain, the Contrail software creates tunnels across the underlay network that span all services
in the chain. The slide shows two endpoints and two compute nodes, each with one service instance, and traffic flowing
between the endpoints. In the example, from a high-level, abstracted point of view, you are simply connecting Tenant 1 and
Tenant 2 with two service VMs in between. You build a service template that defines the left, right, and (potentially)
management interfaces, along with the image that runs as the service VM. Contrail then takes that schema and transforms it
into the low-level schema necessary for service chaining to work. All VRFs, route leaking, and next-hop changes necessary
for the packets to flow from VM to VM are configured under the hood, so to speak.
The system creates additional routing instances for service virtual machines, in addition to the routing instances for tenant
virtual machines. Traffic is steered in the following ways:
• The route targets on routes are manipulated to control how routes are imported and exported from one routing
instance to another.
• The next hops, labels, or both of the routes are manipulated as the routes are leaked from routing instance to
routing instance, forcing the traffic through the right sequence of routing instances and the corresponding
sequence of virtual machines.


Service Chain Modes


The modes of services that can be configured are as follows:
• Transparent or bridge mode: Used for services that do not modify the packet. This mode is also known as
bump-in-the-wire or Layer 2 mode. Examples include a Layer 2 firewall, Intrusion Detection and Prevention (IDP),
and so on.
• In-network or routed mode: Provides a gateway service where packets are routed between the service instance
interfaces. Examples include NAT, a Layer 3 firewall, a load balancer, an HTTP proxy, and so on.
• In-network-NAT mode: Similar to in-network mode; however, return traffic from the right (public) network does
not require routes back to the left (private, or source) network. In-network-NAT mode is particularly useful for
NAT services.


Configuration Elements
Setting up a service chain requires the configuration of several elements. You can think of these elements as building blocks
as they fit together to allow you to easily create varied and dynamic service chains. All service chains require the following
elements:
• Virtual network(s): Defined VNs to use.
• Service template: Includes parameters such as service mode, service type, and VM image to use.
• Service instance: Includes parameters such as the service template to use, number of instances to launch, and
which VNs to assign to the service VM interfaces.
• Service policy: Includes policy rules (source and destination VNs) and other policy match conditions (such as
source and destination ports).
In the next section, we go through some use cases and discuss these elements in further detail.


In-Network Service Chain and Configuration


The slide highlights the topic we discuss next.


Use Case: In-Network Service Chain


For our first use case, we are going to set up a very simple in-network service chain comprised of the following:
• Left VN: 10.10.10.0/24;
• Right VN: 2.2.2.0/24;
• Customer VM-Left is assigned to the left VN;
• Customer VM-Right is assigned to the right VN; and
• The service VM is a vSRX image preconfigured for NAT between its ge-0/0/1 (left) and ge-0/0/2 (right)
interfaces; a sketch of a possible configuration follows.
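For orientation, here is a minimal sketch of the kind of source NAT configuration such a vSRX might carry. The zone names (trust and untrust) and the rule-set naming are assumptions; only the interfaces and the 10.10.10.0/24 source subnet come from the use case, so treat this as an outline rather than the actual preconfigured image.

set security zones security-zone trust interfaces ge-0/0/1.0
set security zones security-zone untrust interfaces ge-0/0/2.0
set security nat source rule-set left-to-right from zone trust
set security nat source rule-set left-to-right to zone untrust
set security nat source rule-set left-to-right rule nat-all match source-address 10.10.10.0/24
set security nat source rule-set left-to-right rule nat-all then source-nat interface

With interface-based source NAT like this, traffic leaving ge-0/0/2 is rewritten to that interface's own address, which matches the translated address we observe later in the flow session output.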


Creating Left and Right Virtual Networks


Creation and management of virtual networks were covered in-depth in a previous chapter so we don’t go into a large
amount of detail here. To configure the left and right networks for this service chain, perform the following steps:
1. Click the Configure button at the top left.
2. Navigate to the Networking > Networks workspace.
3. Ensure the correct project is chosen from the appropriate drop-down list. Not choosing the correct project is a
common mistake here.
4. Click the + button at the upper right, which brings up the Create Network dialog.
a. In the Create Network dialog, enter the left VN name in the Name field. In the example, we named it
acme-left-vn.
b. In the Subnets CIDR field, enter the left VN subnet. In the example, we have used
10.10.10.0/24 for the left VN. Do not forget to click the plus sign to actually add your network to the
table. Again, this is a very common mistake.
c. Click the Save button when finished to add the left VN to the list of available networks for this project.
d. To add the right VN, simply repeat the steps and use the appropriate data. In our example, we have
named the right VN acme-right-vn and used 2.2.2.0/24 for the network subnet.


Service Templates
Service templates are always configured within the scope of a domain, and the templates can be used on all projects within
a domain. Furthermore, a template can be used to launch multiple service instances in different projects within a domain.
The following are parameters configured for a service template:
• Service template name: The name of the template.
• Mode of service: Select from Transparent, In-Network, or In-Network-NAT.
• Type of service: Select either Firewall or Analyzer. Use Firewall for both firewall and NAT services.
• Image name: Select the appropriate image from a list of available images (added through the OpenStack
Web UI).
• Interface types: Select the interface type or types for the service.
– For firewall or NAT services, both Left and Right interfaces are required.
– For an analyzer service, only a Left interface is required.
– For Juniper Networks virtual images, a Management interface is required, in addition to any left or right
requirement.
Advanced service template options include:
• Service Scaling: If you plan on using multiple virtual machines for a single service instance to scale out the
service, select the Service Scaling check box. When scaling is selected, you can choose to use the same
IP address for a particular interface on each virtual machine or to allocate a new address for each
virtual machine. For a NAT service, the left (inner) interface should have the same IP address, and the right
(outer) interface should have a different IP address.
• Image flavor: Optionally select the flavor you want to use for the service VM.


Creating Service Templates


To create the service template for this use case, perform the following steps:
1. Click the Configure button at the top left.
2. Navigate to the Services > Service Templates workspace.
3. Click the + button at the upper right, which brings up the Add Service Template dialog.
a. In the Add Service Template dialog, enter the service template name in the Name field. In the
example, we named it acme-in-network-service-template.
b. Change the Service Mode to In-Network.
c. Leave the Service Type as Firewall.
d. Choose the correct image from the Image Name field. In the example, the image name is
nat-service.
e. Click the + sign three times to add entries for the Management, Left, and Right interfaces.
f. Expand the Advanced options section and enable the Service Scaling check box. This
automatically enables the Shared IP option for the Left interface.
g. If necessary for the VM image, choose the Instance Flavor you prefer to use from the dropdown list.
h. Click Save to add the template to the list.


Service Instance
A service instance is always maintained within the scope of a project. A service instance is launched using a specified
service template from the domain to which the project belongs. The following parameters are configured for a service instance:
• Service Instance name: Enter a name for the service instance.
• Service Template name: Select the appropriate template from a list of available service templates.
• Management VN: Select the management VN from a list of available VNs. If you are using the Management
interface, select Auto Configured; the software will use an internally created virtual network.
• Left VN: Select the left VN from a list of available VNs.
• Right VN: Select the right VN from a list of available VNs.


Creating Service Instances: Part 1


To create the service instance for this use case, perform the following steps:
1. Click the Configure button at the top left.
2. Navigate to the Services > Service Instance workspace.
3. Click the + button at the upper right, which brings up the Create Service Instance dialog.
a. In the Create Service Instance dialog, enter the service instance name in the Name field. In the
example, we named it acme-in-network-service-instance.
b. From the Service Template dropdown, choose the name of the service template we created
previously. In the example, we selected acme-in-network-service-template.
c. Choose the number of instances to launch. In the example, we left it at the default of 1.
d. From the Interface 1 (Management) dropdown, select Auto Configured.
e. From the Interface 2 (Left) dropdown, select the appropriate VN. In the example, we selected
acme-left-vn.
f. From the Interface 3 (Right) dropdown, select the appropriate VN. In the example, we selected
acme-right-vn.
g. Click Save to add and spawn the instance.
Note that, immediately after clicking Save, Contrail attempts to spawn the service instance. Depending on the type of
service instance configured, this can take some time.


Creating Service Instances: Part 2


After the service instance spawns completely, it changes to an Active state. You can click the expansion arrow to view
more details about the service instance. Also in the detailed view is the View Console link. This link launches a new
browser window and gives you console access to the service VM.


Creating Service Instances: Part 3


The service instance is also visible within the OpenStack Web UI. In the OpenStack Web UI, ensure you have (1) selected the
correct project name and then (2) select the Instances tab. Though it is possible to manipulate the service instance
within the OpenStack Web UI, service instance VMs should only be deleted from the Contrail Web UI. That is, do not use
OpenStack Horizon or Nova to delete service VMs or instances.


Verifying the Service VM


After launching the service instance, click the View Console link and verify the interfaces have been assigned as
expected. In the example, ge-0/0/1 and ge-0/0/2 have been assigned as the left and right interfaces, respectively. Interface
ge-0/0/0 has been assigned as the management interface.


Service Policy
The final piece of a service chain is the service policy. The following are the parameters to be configured for a service policy:
• Policy name: Provide a name for the service policy.
• Source network name: Select the appropriate network name for the source network.
• Destination network name: Select the appropriate network name for the destination network.
• Enable the Services check box and then choose the appropriate service instance name from the drop-down
list.


Creating a Service Policy: Part 1


To configure a service policy for the service chain, perform the following steps:
1. Click the Configure button at the top left.
2. Navigate to the Networking > Policies workspace.
3. Ensure the correct project is chosen from the appropriate dropdown list. Not choosing the correct project is a
common mistake here.
4. Click the + button at the upper right, which brings up the Create Policy dialog. In the Create Policy
dialog, perform the following steps:
a. Enter a name in the Name field. In the example, we have named it
acme-in-network-service-policy.
b. Select the appropriate network name for the Source. In the example, we have selected
acme-left-vn.
c. Select the appropriate network name for the Destination. In the example, we have selected
acme-right-vn.
d. Enable the Services check box and then choose the appropriate service instance name from the
dropdown list that appears. In the example, we have selected
acme-in-network-service-instance.
e. Click the Save button to add the policy to the list.


Creating a Service Policy: Part 2


The final step to setting up the service chain is to apply the policy to the networks involved. This is a commonly overlooked
step and the service chain will not work without it. To apply the policy to the networks, perform the following steps:
1. Click the Configure button at the top left.
2. Navigate to the Networking > Networks workspace.
3. Ensure the correct project is chosen from the appropriate dropdown list. Not choosing the correct project is a
common mistake here.
4. Locate the correct Left network in the main table. In the example, it is acme-left-vn. Click the gear wheel
icon at the far right of the row and select Edit from the popup menu. This brings up the Edit Network
dialog.
5. In the Edit Network dialog, click within the Network Policy(s) field and select the correct policy. In the
example, it is acme-in-network-service-policy.
6. Click Save when finished.
7. Repeat these steps and attach the policy to the correct Right network.


Testing the Service Chain: Part 1


At this point, all that is left is to launch the actual customer VM instances and test the service chain. Customer instances are
launched from the OpenStack Web UI and have been covered in a previous chapter so we cover just the essentials here. To
launch the customer instances for this use case, perform the following steps:
1. Return to the OpenStack Web UI and navigate to the Project tab. Ensure you have selected the correct
project. In the example, we have selected ACME Corp.
2. Click the Instances link to open the Instances workspace.
3. Click the Launch Instance button to bring up the Launch Instance dialog.


Testing the Service Chain: Part 2


We start with the left-side customer instance first. From the Launch Instance dialog, perform the following steps:
1. In the Instance Name field, enter the name of the instance. In the example, we have entered
acme-customer-left.
2. If necessary for your environment, select an appropriate flavor from the Flavor drop-down list. In the example,
we have selected m1.tiny.
3. Select the appropriate image from the Image Name drop-down list. In the example, we have selected Core
Linux.
4. Switch to the Networking tab. In the Available Networks section, click the plus button next to the
acme-left-vn network entry. This adds that network to the Selected Networks section.
5. Click the Launch button to spawn this customer instance.
Repeat these steps for the right-side instance using the appropriate data. For example, we used acme-customer-right
for the name and selected the acme-right-vn network.


Testing the Service Chain: Part 3


As shown in the example, both customer instances should be spawned. In the example, the left customer instance has been
assigned 10.10.10.4 and the right customer instance has been assigned 2.2.2.4. Click the button with an arrow at the end
of the acme-customer-left instance and choose Console from the menu. From the Instance Console window,
click the gray bar above the console window. You must do this to interact with the console. After clicking the gray bar, initiate
a ping to the 2.2.2.4 IP address using the ping 2.2.2.4 command.


Testing the Service Chain: Part 4


Finally! We can check to see if NAT is working properly. First, launch a console instance to the service VM. In our case, it is a
vSRX image running the Junos OS. In the example, we have issued the show security flow session command. Note
how NAT has changed the source address from 10.10.10.4 to 2.2.2.3.


Testing the Service Chain: Part 5


You can also use the Contrail Web UI to view the flows. From the Contrail Web UI, navigate to the Monitor >
Infrastructure > Virtual Routers > compute-1 > Flows workspace. From here, you can view the flow data
from the ping test in a graphical format.


Transparent Service Chain and Configuration


The slide highlights the topic we discuss next.


Transparent Service Chain


A transparent service chain shares much of the same configuration as the in-network service chain, with a few key differences:
• Left VN: 3.3.3.0/24.
• Right VN: 4.4.4.0/24.
• Customer VM-Left is assigned to the left VN.
• Customer VM-Right is assigned to the right VN.
The service VM is a vSRX image preconfigured for bridging between its ge-0/0/1 (left) and ge-0/0/2 (right) interfaces. The
same service image is spawned twice in this service chain, and Contrail takes care of assigning networks and addresses in
order to chain the two instances together.


Creating Left and Right Virtual Networks


The creation of the networks is the same as with an In-Network service chain. In the Contrail Web UI, navigate to the
Configure > Networking > Networks workspace. Again, ensure the proper project is selected and create left and
right networks using the 3.3.3.0/24 and 4.4.4.0/24 subnets, respectively. In the example, we have used the names
acme-left and acme-right.


Creating a Service Template


Creating a template for a transparent service is the same except that the Transparent option from the Service Mode
dropdown list is used (instead of In-Network). Note that we only need one template even though we plan to launch two
instances from it. Navigate to the Configure > Services > Service Templates workspace and create the
template.


Creating Service Instances: Part 1


Because we intend to chain two of these service VMs together, we need to launch two instances of them. Note that we
use unique names for each instance but use the same template. Also, select Auto Configured for all three interfaces
(Management, Left, and Right). This differs from the In-Network service chain where we selected the left and right VNs
for the left and right interfaces. Navigate to the Configure > Services > Service Instances workspace and
create both service instances as shown in the example.


Creating Service Instances: Part 2


Allow both instances to spawn completely. The example shows both the Contrail and OpenStack Web UI views of the service
instances.


Creating the Customer Instances


As with the In-Network service chain, create two customer instances, one in the acme-left VN and one in the
acme-right VN.


Verifying the Service VMs


Click the View Console link and verify the interfaces have been assigned as expected. In the example, ge-0/0/1 and
ge-0/0/2 have been assigned as the left and right interfaces, respectively. Interface ge-0/0/0 has been assigned as the
management interface. Note how, unlike the In-Network service chain, no IP addresses are assigned to the interfaces. Recall
that transparent mode is for Layer 2 services.


Create the Service Policy: Part 1


Creating the service policy is the same as before except that you add both service instances this time. Navigate to the
Configure > Networking > Policies workspace and create a new policy as before. After enabling the Services
check box, you have to click the bar that appears below the rules table, select the first instance, click the bar again, and
select the second instance. Note that order is important here.


Create the Service Policy: Part 2


Navigate back to the Configure > Networking > Networks workspace and apply the policy to the left and right VNs.
This is an easy step to overlook but the service chain won’t work without it.


Testing the Service Chain: Part 1


From the acme-customer-left instance console, attempt to ping the IP address of the acme-customer-right
instance. In the example, we successfully ping the 4.4.4.3 IP address.


Testing the Service Chain: Part 2


You can verify that packets are transiting both service VMs by checking the flows on their respective consoles. In the example, the
show security flow session command shows output on both service VMs.


Testing the Service Chain: Part 3


You can further verify the service chain flows using the Monitor > Infrastructure > Virtual Routers >
Compute Node > Flows workspace.


Deleting a Service Chain


Deleting a service chain is a simple procedure. Basically, it is the reverse of building a service chain. However, you must do
this from the Contrail Web UI only. That is, Juniper does not recommend using the OpenStack Web UI to delete or modify
service instances. Perform the following steps to safely delete a service chain:
1. Navigate to the Configure > Networking > Policies workspace and delete the policy used for the
service instance. Optionally, if you want to reuse the policy elsewhere, you can choose to edit the policy and
simply remove the networks from it.
2. Navigate to the Configure > Services > Service Instances workspace. Locate the appropriate
service in the table, enable its check box, and click the Delete button in the upper right.
3. Navigate to the Configure > Services > Service Templates workspace. Locate the appropriate
template, enable its check box, and click the Delete button in the upper right. Of course, this step is optional
if you intend to reuse the template.


Configuring Source NAT


The slide highlights the topic we discuss next.


Source Network Address Translation


Source Network Address Translation (source-nat or SNAT) allows traffic from a private network to go out to the Internet.
Virtual machines launched on a private network can get to the Internet by going through a gateway capable of performing
SNAT. The gateway has one arm on the public network and as part of SNAT, it replaces the source IP of the originating packet
with its own public side IP. As part of SNAT, the source port is also updated so that multiple VMs can reach the public network
through a single gateway public IP.
OpenStack supports SNAT gateway implementation through its Neutron APIs for routers. The OpenContrail plugin supports
the Neutron APIs for routers and creates the relevant service-template and service-instance objects in the API server. The
service scheduler in OpenContrail instantiates the gateway on a randomly-selected virtual router.
The SNAT gateway created by Contrail is a service instance of a special type (it uses the service type source-nat). It uses a
predefined service template, netns-snat-template, which does not include a corresponding VM image. Instead, for this
service type a Linux network namespace (in effect, a separate routing instance) is created on a vRouter, with interfaces in the
left and right networks and with Netfilter (iptables) rules to SNAT the traffic.
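Conceptually, what gets set up resembles the following hand-built approximation. The namespace and interface names here are illustrative assumptions; the real names are generated by the vRouter agent.

# Create a namespace that acts as the SNAT routing instance
ip netns add snat-demo

# Inside the namespace, masquerade traffic leaving via the public-side interface
ip netns exec snat-demo iptables -t nat -A POSTROUTING -o eth-right -j MASQUERADE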


Source NAT Case Study: Part 1


The network diagram shows a PRIVATE virtual network with the 192.168.0.0/24 subnet. The default route for
the virtual network points to the SNAT gateway. The gateway replaces source IPs from 192.168.0.0/24 with its own public
address, 1.1.1.3. The following slides show how to configure SNAT in this scenario.


Source NAT Case Study: Part 2


We first create a PRIVATE virtual network. This is done the usual way.


Source NAT Case Study: Part 3


We now create a PUBLIC virtual network. The critical difference compared to creating the PRIVATE network is that you check
the External option; everything else is done the same way.


Source NAT Case Study: Part 4


We now need to create a router. This is done in the Configure > Networking > Routers workspace. Press the Create
Router button (with a plus [+] sign on it), then enter a router name (we use ROUTER), select an external gateway (we use
PUBLIC), and select the connected networks (we select PRIVATE). The SNAT check box is enabled by default. Remember to
press the Save button.
Note that you will not be able to select PUBLIC as an External Gateway unless you check the External option for that network
(as we did on the previous slide).


Source NAT Case Study: Part 5


After you complete the previous steps, both the router and a service instance are created. The service instance should be in
the Active state. It uses the predefined service template named netns-snat-template. Also note that a VN whose
name starts with snat-si-left_si_ has been created automatically.
It is important to mention that SNAT service instances do not appear in the OpenStack GUI's list of VM instances.


Source NAT Case Study: Part 6


To test connectivity and SNAT, we instantiated two VMs, one in the PRIVATE network and one in the PUBLIC network, as per
the case study diagram. By sending a ping from the private VM to the server VM and looking at the tcpdump output on the
server VM, we see that packets come in with the SNAT gateway's public-side address as their source (1.1.1.3 in our case
study); that is, SNAT is indeed performed.
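To reproduce this check yourself on the server VM, a command like the following works (the interface name eth0 is an assumption about the guest image):

# Watch incoming ICMP; the source should be the gateway's public IP,
# not the private VM's 192.168.0.x address
tcpdump -ni eth0 icmp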


Using Heat Templates


The slide highlights the topic we discuss next.


Contrail and Heat


Heat is the main project in the OpenStack Orchestration program. It implements an orchestration engine that launches
multiple composite cloud applications based on templates in the form of text files that can be treated like code.
A Heat template describes the infrastructure for a cloud application, such as networks, servers, floating IP addresses, and
the like, and can be used to manage the entire life cycle of that application.
When the application infrastructure changes, the Heat templates can be modified to automatically reflect those changes.
Heat can also delete all of an application's resources when the application is finished.
Heat templates also record the relationships between resources, for example, which networks are connected by means
of policies, and consequently call the OpenStack REST APIs that create the necessary infrastructure, in the correct
order, to launch the application that the Heat template manages.
The Heat Engine supports Resource Plug-in modules, which allow vendors and operators of OpenStack clouds to provide
custom Resource handlers to their users. The Juniper Heat plug-in enables the use of some resources not currently included
in the OpenStack Heat orchestration engine, including network policy, service template, and service instances.
Resources are the specific objects that Heat creates or modifies as part of its operation. The Heat plug-in resources reside in
the /usr/lib/heat/resources directory and are loaded by the heat-engine service as it starts up. The names of the resource types
in the Juniper Heat plug-in include:
• OS::Contrail::NetworkPolicy
• OS::Contrail::ServiceTemplate
• OS::Contrail::AttachPolicy
• OS::Contrail::ServiceInstance


Heat Template: Create VN


We start with an example template that creates a VN with a given name, route target, IP subnet, and gateway. Other
parameters, such as a list of DNS servers, could also be passed to resources, but this example does not use them.
Heat templates are defined in YAML, a popular human-readable data serialization format, and their syntax is mostly
self-explanatory. The key with the value 2013-05-23 indicates that the YAML document is a HOT (Heat Orchestration
Template) and that it may contain features implemented up to the OpenStack Icehouse release. This version supports the
following functions: get_attr, get_file, get_param, get_resource, list_join, resource_facade, str_replace, Fn::Base64,
Fn::GetAZs, Fn::Join, Fn::MemberListToMap, Fn::Replace, Fn::ResourceFacade, Fn::Select, Fn::Split, and Ref. Some of these
functions are used below.
The parameters section describes the parameters that the template receives from outside (typically, they are placed in a
separate file, as we will see on the following slides).
Resources are the actual objects created by the template. In our example, we create a virtual network and a subnet inside it.
Note the depends_on keyword: resources must be created in the correct sequence, otherwise the template may fail. In our
example, the virtual network must exist before a subnet can be added to it. If a resource depends on more than one other
resource, the value of the depends_on attribute is specified as a list of resource IDs.
Heat templates can also contain an outputs section, which specifies output parameters made available to users once the
template has been instantiated. This section is optional and can be omitted when no output values are required (as in our
examples).
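The template itself is shown on the slide. As a rough illustration of the shape such a template takes, here is a minimal HOT sketch built from the standard OS::Neutron::Net and OS::Neutron::Subnet resource types. The resource and parameter names are our own, and the slide's template may set additional Contrail-specific properties (such as the route target) that are omitted here.

heat_template_version: 2013-05-23

description: Minimal sketch that creates a virtual network and a subnet inside it

parameters:
  net_name:
    type: string
    description: Name of the virtual network
  subnet_cidr:
    type: string
    description: IP subnet, for example 10.10.10.0/24
  gateway_ip:
    type: string
    description: Default gateway for the subnet

resources:
  # The virtual network itself
  heat_net:
    type: OS::Neutron::Net
    properties:
      name: { get_param: net_name }

  # The subnet; depends_on ensures the network exists first
  heat_subnet:
    type: OS::Neutron::Subnet
    depends_on: heat_net
    properties:
      network_id: { get_resource: heat_net }
      cidr: { get_param: subnet_cidr }
      gateway_ip: { get_param: gateway_ip }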


Running Heat Template


In our examples, we run templates from the local OpenStack machine. For a description of how to get Heat to work with a
remote OpenStack instance, see the following URL:
http://docs.openstack.org/developer/heat/getting_started/standalone.html.
Although parameters can be specified on the command line, it is more convenient to put them in another YAML file (in our
example, the file is named vn.env).
We use the command source /etc/contrail/openstackrc as the simplest way to authenticate a CLI session.
Templates enable creation of most OpenStack resource types, such as instances, floating IP addresses, volumes,
security groups, and users. The resources, once created, are referred to as stacks. A stack is created with the heat
stack-create command, as we demonstrate on the slide, and consists of a collection of resources. The second command,
heat stack-list, shows the status of the created stacks.
Other useful commands are heat stack-show stack-name (shows status and some details about the stack) and
heat resource-list stack-name (shows the associated resources).
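Put together, a session like the one described here might look as follows. The template filename vn.yaml and the stack name HEAT_NET_STACK are assumptions; vn.env comes from the example.

# Authenticate the CLI session with the local credentials file
source /etc/contrail/openstackrc

# Create a stack from the template and its environment (parameters) file
heat stack-create -f vn.yaml -e vn.env HEAT_NET_STACK

# Check the status of existing stacks
heat stack-list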


The Result
The result of running the Heat template is shown on the slide. The virtual network HEAT_NET is created with the required
subnet, gateway, and route targets.


Updating the Stack


Now we demonstrate how the life cycle of a cloud application can be managed. In this example, we modified the list of route
targets, adding 555:555 to the end. Then we ran the heat stack-update command. Finally, we see the result: a new
route target has been added to HEAT_NET.
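The update step itself is a single command (same assumed filenames and stack name as before):

# After appending 555:555 to the route target list in vn.env,
# push the change to the running stack
heat stack-update -f vn.yaml -e vn.env HEAT_NET_STACK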


Deleting the Stack


The first command on the slide is the useful heat stack-show command, which outputs the status of the stack and any
errors (output abbreviated).
The second command deletes the stack. The GUI confirms that HEAT_NET no longer exists.
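For reference, the two commands look like this (the stack name is our assumption, as before):

# Show stack status and any error details
heat stack-show HEAT_NET_STACK

# Delete the stack and every resource it created
heat stack-delete HEAT_NET_STACK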


Creating a Service Template: Part 1


We now show an example Heat template that creates a Contrail service template. Note the use of the
OS::Contrail::ServiceTemplate resource type.
The Fn::Split function is used to split a comma-separated list of parameters into parts. The first argument is the separator
("," in our example); the second argument is the string we want to split.
Note that the following two expressions produce the same result:
"Fn::Split" : [ ",", Ref: service_interface_type_list ]

? "Fn::Split"
:
- ","
- Ref: service_interface_type_list
This example shows different ways lists and dictionaries can be presented in YAML.


Creating a Service Template: Part 2


The slide shows the content of the service_template.env file with the parameters that we use for the
service_template.yaml template. We then run the template using Heat and see the result in the GUI.


Implementing a Service Chain: Part 1


We now look at a somewhat more complicated example: implementing a service chain in Contrail. However, no new elements
are used compared to the previous examples.
The template creates a service instance and a network policy, and attaches the network policy to two virtual networks. It is
assumed that both VNs and a service template have already been created; their names are passed in as parameters.
This slide shows the beginning of the template (the description and parameters sections).


Implementing a Service Chain: Part 2


The template excerpts shown instantiate a service instance using the OS::Contrail::ServiceInstance resource,
and then a policy using the OS::Contrail::NetworkPolicy resource. Note that the policy cannot be created before
the service instance is available, so the depends_on field is used.
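As a hedged outline of how those two resources relate in the template, consider the following fragment. The resource type names come from the slide; the property names, however, are illustrative assumptions, and the actual plug-in properties are those shown in the slide's template.

resources:
  service_instance_A_B:
    type: OS::Contrail::ServiceInstance
    properties:
      # Property names below are assumptions for illustration only
      name: { get_param: service_instance_name }
      service_template: { get_param: service_template_name }
      left_virtual_network: { get_param: left_vn }
      right_virtual_network: { get_param: right_vn }

  policy_A_B:
    type: OS::Contrail::NetworkPolicy
    # The policy references the service instance, so it must be created second
    depends_on: service_instance_A_B
    properties:
      name: { get_param: policy_name }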


Implementing a Service Chain: Part 3


Here, still in service_chain.yaml, we use the OS::Contrail::AttachPolicy resource to attach the policy created above to the two VNs.
The service_chain.env file contains the parameters that we pass to the template in our example. Note that VN-A and VN-B, as well as SERVICE_TEMPLATE, already exist and are just passed as parameters. At the same time, service_instance_A_B and policy_A_B will be created by the template.
Then, we create a stack using the heat stack-create command.
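The stack is created by passing both files to Heat; the stack name service_chain_stack is a placeholder:
heat stack-create -f service_chain.yaml -e service_chain.env service_chain_stack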


The Result
The results of running the template, as seen in the Contrail GUI, are shown on the slide. You can see that service_instance_A_B and policy_A_B have been created, the correct service template is used, and the policy is applied to the VNs.
Many more examples of using Heat with Contrail can be found in the contrail-heat repository at the following URL: 
https://github.com/Juniper/contrail-heat.


We Discussed:
• Service chaining within Contrail;
• Configuration of service chains;
• Configuration of Source NAT; and
• Using Heat templates.


Review Questions
1.

2.

3.


Lab: Service Chains


The slide lists the objectives for this lab.

Answers to Review Questions
1.
Three types of network boundaries where service chains can be implemented are between a tenant and an external network, between
different tenant networks, and between the same tenant’s networks.
2.
Three types of service chain modes are transparent, in-network, and in-network-nat.
3.
Four elements that make up a service chain are virtual networks, service templates, service instances, and service policy.


Chapter 6: Contrail Analytics



We Will Discuss:
• How to work with the Monitor workspace;
• How to analyze live traffic using service instances and the Debug workspace;
• How to run flow queries and examine system logs;
• How to use Underlay Overlay mapping; and
• How to work with Contrail Analytics API.


Monitor > Infrastructure Workspace


The slide lists the topics we will discuss. We discuss the highlighted topic first.


Analyzing Traffic in a Typical Network


How does somebody go about analyzing network traffic in a typical network? Usually, you must first provide connectivity between an intermediate device, through which the live traffic flows, and the flow collector. This process is time consuming and
requires you to first determine where in your network the live traffic will travel. Then, you have to select an intermediate
device that has the capabilities to mirror, or sample, the traffic, and finally send the mirrored packets to the flow collector.
Also, it happens quite often that a flow collector is not physically in the correct place to receive the mirrored traffic. This
situation requires you to manually run cables and possibly move servers around to provide the necessary connectivity. That
is a task that is not welcomed by most engineers in a data center environment.
A virtualized data center adds to this problem. The virtual nature of virtual machines (VMs) can make it difficult to determine which server a VM resides on; from there, determining where to place the flow collector, running the necessary cables, and configuring an intermediate device to mirror the traffic can seem an almost impossible task.
Collecting live traffic is not the only analytics activity that occurs within a network; collecting statistics to determine the health of your network is of utmost importance. Collecting route information, packet statistics, and flow statistics is important, too.
The goal of this chapter is to help you understand how Contrail can provide these analytics in a quick, easy, and efficient way.


Analyzing Statistics with Contrail


Analyzing statistics with Contrail in a data center is much simpler than the method we discussed on the previous page. Records of
all packets, all bytes, and all flows are sent to the analytics node. You can employ packet capture functionality by deploying a
special VM instance which has Wireshark software already installed on it. From there, traffic is mirrored to this special VM
analyzer instance for collection.
To determine the health of the nodes in the Contrail fabric, infrastructure statistics are gathered that display the state,
memory usage, CPU usage, and other items. Also, these statistics include the number and connectivity of virtual networks
(VNs) for a project, the location of VMs and VNs in regards to the compute nodes.


Analytics Node: Review


The Analytics Nodes are responsible for the collection of system state information, usage statistics, and debug information
from all of the software modules across all of the nodes of the system. The Analytics Nodes store the data gathered across
the system in a database that is based on the Apache Cassandra open source distributed database management system.
The database is queried by means of an SQL-like language and REST APIs. The Sandesh protocol, which is XML over TCP, is
used by all the nodes to deposit data in the NoSQL database. For example, all vRouters dump their data into the database in
this manner. Every packet, every byte, and every flow is collected, so there is 100% visibility of all traffic going through the
vRouters. At any point, it is possible to view all live flows traversing a vRouter. Furthermore, once the flow data has been
dumped into the Cassandra database, you can use the aforementioned query interfaces to mine the data for historical or
debugging purposes, capacity management, billing, heuristics, etc. System state information collected by the analytics
nodes is aggregated across all of the nodes, and comprehensive graphical views allow the user to get up-to-date system
usage information easily.
Debug information collected by the analytics nodes includes the following types:
• System log (syslog) messages: Informational and debug messages generated by system software components.
• Object log messages: Records of changes made to system objects such as virtual machines, virtual networks,
service instances, virtual routers, BGP peers, routing instances, and the like.
• Trace messages: Records of activities collected locally by software components and sent to analytics nodes
only on demand.
• Statistics information related to flows, CPU and memory usage, and the like is also collected by the analytics nodes and can be queried at the user interface to provide historical analytics and time-series information. The queries are performed using REST APIs, as illustrated in the example below.
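As a simple illustration of the REST interface, UVE (User-Visible Entity) state can be read directly from the analytics API, which listens on port 8081 by default; the node address below is a placeholder:
curl http://<analytics-node-ip>:8081/analytics/uves/vrouters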


The Monitor > Infrastructure > Dashboard Workspace


You can use the Monitor > Infrastructure > Dashboard workspace to get an at-a-glance view of the system
infrastructure components. These components include the numbers of virtual routers, control nodes, analytics nodes,
database nodes, and config nodes currently operational.
Across the top of the Dashboard screen are summary boxes representing the components of the system that are shown in
the statistics. Any of the control nodes, virtual routers, analytics nodes, database nodes, and config nodes can be monitored
individually and in detail from the Dashboard by clicking an associated box, and drilling down for more detail.
Bubble charts show the CPU and memory utilization of components contributing to the current analytics display, including
vRouters, control nodes, config nodes, and the like. You can hover over any bubble to get summary information about the
component it represents. Then, you can click through the summary information to get more details about the component.
Bubble charts use the following color-coding scheme:
• Config and Analytics Nodes
– Green—working as configured.
– Red—error, possible resource issue.
• Control Nodes
– Green—working as configured.
– Red—error, at least one configured peer is down.
• vRouters
– Blue—working, but no instance is launched.
– Green—working with at least one instance launched.
– Red—error, there is a problem with connectivity or a vRouter is in a failed state.


The Monitor > Infrastructure > Control Nodes Workspace


You can use the Monitor > Infrastructure > Control Nodes workspace to see a graphical chart of average
memory usage versus average CPU percentage usage for all control nodes in the system. Also on this screen is a list of all
control nodes in the system. Remember that with the bubble chart, a green bubble represents a control node that is in the working state, and a red bubble represents a control node that is down or unreachable.
An explanation of the control node fields follows.
• Host name: The name of the control node.
• IP Address: The IP address of the control node.
• Version: The software version number that is installed on the control node.
• Status: The current operational status of the control node — Up or Down.
• CPU (%): The CPU percentage currently in use by the selected control node.
• Memory: The memory in MB currently in use and the total memory available for this control node.
• BGP Peers: The total number of BGP peers in sync for this control node.
• vRouters: The total number of vRouters in sync for this control node.


Monitor Individual Control Node Details


You can click the host name of any control nodes listed under the Control Nodes title to view an array of graphical
reports of usage and numerous details about that node. There are several tabs available to help you probe into more details
about the selected control node. The first tab is the Details tab which displays information such as CPU and memory
utilization, version number, IP address, host name, and many other items.
An explanation of some of the fields shown in the Details tab follows.
• Hostname: The host name defined for this control node.
• IP Address: The IP address of the selected node.
• Version: The software version number that is installed on the control node.
• Overall Node Status: The operational status of the control node.
• Analytics Node: The IP address of the node from which analytics (monitor) information is derived.
• Analytics Messages: The total number of analytics messages in and out from this node.
• Peers: The total number of peers established for this control node and how many are in sync and of what type.
• CPU: The average percent of CPU load incurred by this control node.
• Memory: The average memory usage incurred by this control node.
• Last Log: The date and time of the last log message issued about this control node.
• Control Node CPU/Memory Utilization: A graphic display x, y chart of the average CPU load and
memory usage incurred by this control node over time.


Monitoring Individual Control Node Peers


The Peers tab displays the peers for an individual control node and their peering state. You can click the expansion arrow
next to the address of any peer to reveal more details. Note how the navigation bar shows that this set of peers is from the perspective of control node control-1. Also, note how there are different peer types for this control node. Remember that routes are exchanged through BGP between control nodes and gateway routers, whereas routes are exchanged through the Extensible Messaging and Presence Protocol (XMPP) between control nodes and compute nodes.
An explanation of the fields shown in the Peers tab follows.
• Peer: The hostname of the peer.
• Peer Type: The type of peer.
• Peer ASN: The autonomous system number of the peer.
• Status: The current status of the peer.
• Last flap: The last flap detected for this peer.
• Messages (Recv/Sent): The number of messages sent and received from this peer.


Monitoring Individual Control Node Routes


The Routes tab displays active routes for this control node and lets you query the results. You can narrow the query results
by selecting different options in fields on the screen. You can also click the expansion icon next to a route to reveal more
details about the selected route. Remember that a control node is basically a route reflector; all routes are reflected through the control nodes.
An explanation of the selectable fields in the Routes tab follows.
• Routing Instance: You can select a single routing instance from a list of all instances for which to display
the active routes.
• Address Family: Select an address family for which to display the active routes:
– All (default)
– enet
– erm-vpn
– inet
– inetvpn
– inet6
– l3vpn
– rtarget
• Limit: Select to limit the number of active routes displayed.
• Peer Source: Select from a list of available peers the peer for which to display the active routes, or select All.
• Prefix: Enter a route prefix to limit the display of active routes to only those with the designated prefix.
• Protocol: Select a protocol for which to display the active routes:
– All (default)
– XMPP
– BGP
– ServiceChain
– Static
• Display Routes: Click this button to refresh the display of routes after selecting different display criteria.
• Reset: Click this button to clear any selected criteria and return the display to default values.
An explanation of the fields shown in Routes table of the Routes tab follows.
• Routing Table: The name of the routing table that stores this route.
• Prefix: The route prefix for each active route displayed.
• Protocol: The protocol used by the route.
• Source: The host source for each active route displayed.
• Next hop: The IP address of the next hop for each active route displayed.
• Label: The label for each active route displayed.
• Security Group: The security group for each active route displayed.
• Origin VN: The virtual network from which the route originates.


Monitoring vRouters
By navigating to the Monitor > Infrastructure > Virtual Routers workspace, you are taken to the vRouters
summary screen. On the vRouters summary screen you will see the familiar bubble chart. Remember that with the vRouters
bubbles, blue means the vRouter is working, but it has no associated instances. A green bubble means the vRouter is
working and has associated instances. A red bubble means that the vRouter is down and unreachable.
An explanation of the fields shown in Virtual Routers table follows.
• Host name: The name of the vRouter. Click the name of any vRouter to reveal more details.
• IP Address: The IP address of the vRouter.
• Version: The version of software installed on the system.
• Status: The current operational status of the vRouter — Up or Down.
• CPU (%): The CPU percentage currently in use by the selected vRouter.
• Memory (MB): The memory currently in use and the total memory available for this vRouter.
• Networks: The total number of networks for this vRouter.
• Instances: The total number of instances for this vRouter.
• Interfaces: The total number and status of interfaces for this vRouter.


Virtual Router Details Tab


The Details tab provides a summary of the status and activity on the selected compute node, and presents graphical
displays of CPU and memory usage. Note that although this is in the vRouters workspace, these statistics are for the compute node that houses the vRouter.
An explanation for some of the fields shown on this page follows.
• Hostname: The hostname of the vRouter.
• IP Address: The IP address of the selected vRouter.
• Overall Node Status: The operational status of the vRouter.
• Analytics Node: The IP address of the node from which analytics (monitor) information is derived.
• Control Nodes: The IP address of the control node associated with this vRouter.
• Analytics Messages: The total number of analytics messages in and out from this node.
• XMPP Messages: The total number of XMPP messages that have gone in and out of this vRouter.
• Flow Count: The number of active flows and the total flows for this vRouter.
• Networks: The number of networks associated with this vRouter.
• Interfaces: The number of interfaces associated with this vRouter.
• Instances: The number of instances associated with this vRouter.
• CPU (%): The CPU percentage currently in use by the selected vRouter.
• Memory (MB): The memory currently in use and the total memory available for this vRouter.
• Last Log: The date and time of the last log message issued about this vRouter.
• vRouter CPU/Memory Utilization: Graphs (x, y) displaying CPU and memory utilization averages over
time for this vRouter, in comparison to system utilization averages.


vRouter Interfaces Tab


The Interfaces tab displays details about the interfaces associated with an individual vRouter. You can click the
expansion arrow next to any interface name to reveal more details. Also, to troubleshoot an issue, you can start a packet capture on a given interface, as shown on the slide. Note that this action launches an analyzer VM instance. We discuss the analyzer VM instance later in this chapter.
An explanation of the fields shown on the page follows.
• Name: The name of the interface.
• Label: The label for the interface. This number is the MPLS label that is pushed on any traffic coming from the associated VM.
• Status: The current status of the interface.
• Network: The VN associated with the interface and the associated project name.
• IP Address: The IP address of the interface.
• Floating IP: Displays any floating IP addresses associated with the interface, if applicable.
• Instance: The name of an instance, and its associated universal unique identifier (UUID), associated with the
interface.


Virtual Routers Networks Tab


The Networks tab displays details about the networks associated with an individual vRouter. As with other workspaces, you
can click the expansion arrow next to the name of any network to reveal more details. You can view a wealth of details about a VN
by examining this page. Note how the name field contains the domain, project, and VN name. Then, you can view the
associated access control list (ACL), that is created from a policy. You can even see the associated virtual routing and
forwarding table (VRF), and if you click on the associated VRF you are taken to the Routes workspace. Finally, from this
screen you have the ability to jump to the Configure > Networking > Networks workspace, the Monitor >
Networking > Networks workspace, or run an object log job by clicking the Configure icon that is associated with
each VN.


Virtual Router ACL Tab


The ACL tab displays details about the ACLs associated with an individual vRouter. As with other workspaces, you can click
the expansion arrow next to the name of any ACL to reveal more details. Although it might not be readily apparent, an ACL is
created whenever you create a policy. The information in the policy is passed on to the ACL in question.
An explanation of the fields shown on the page follows.
• UUID: The UUID associated with the listed ACL.
• Flows: The number of flows associated with the listed ACL.
• Action: The traffic action defined by the listed ACL.
• Protocol: The protocol associated with the listed ACL.
• Source Network or Prefix: The name or prefix of the source network associated with the listed ACL.
• Source Port: The source port associated with the listed ACL.
• Destination Network or Prefix: The name or prefix of the destination network associated with the
listed ACL.
• Destination Port: The destination port associated with the listed ACL.
• ACE Id: The ACE ID associated with the listed ACL.


Virtual Router Flows Tab


The Flows tab displays details about the flows associated with an individual vRouter. As with other workspaces, you can
click the expansion arrow next to any flow entry to reveal more details. The slide calls out an initiating and return flow
for a session. Note that the source and destination VNs line up for the two flows and the IP addresses associated with those
flows.
An explanation of the fields shown on the page follows.
• ACL UUID: The default is to show all flows; however, you can select any single flow from a drop-down list to view its details.
• ACL / SG UUID: The UUID associated with the listed ACL or SG.
• Protocol: The protocol associated with the listed flow.
• Src Network: The name of the source network associated with the listed flow.
• Src IP: The source IP address associated with the listed flow.
• Src Port: The source port of the listed flow.
• Dest Network: The name of the destination network associated with the listed flow.
• Dest IP: The destination IP address associated with the listed flow.
• Dest Port: The destination port associated with the listed flow.
• Bytes/Pkts: The number of bytes and packets associated with the listed flow.
• Setup Time: The setup time associated with the listed flow.


Virtual Router Routes Tab


The Routes tab displays details about unicast, multicast, and L2 routes in specific VRFs for an individual vRouter. As with
other workspaces, you can click the expansion arrow next to any route to reveal more details. First, you might notice the VRF field, which allows you to select the necessary VRF. If you are looking for a specific route and that route is not showing up in the output, make sure that you have the correct VRF selected; there is no selection that displays the routes of
all VRFs. Also, note the next hop type, which tells you which type of next hop is employed for the route. The slide shows three
different types of next hops—tunnel, discard and interface. The tunnel next hop type is used for routes that require a tunnel,
such as Multi-protocol Label Switching over Generic Routing Encapsulation (MPLSoGRE), to reach the end point. We
discussed these types of tunnels in detail in a previous chapter as they are used to connect VM instances together.
Take a close look at the next hop details of the default route. The source IP for the default route is the compute node and the
destination IP is the gateway router. This means that the default route actually originated from the gateway router and the
compute node can use it to send VM traffic to unknown destinations. Also, note how there are two versions for the route, but
there is actually only one gateway router advertising this default route. Why are two default routes appearing in the VRF from
the one gateway router? To answer this question you must remember how the control nodes act as route reflectors. In this
case there are two control nodes that are receiving the one default route from the gateway router and they both are
reflecting the route to the vRouter in the compute node.


The Monitor > Infrastructure > Analytics Nodes Workspace


Navigating to the Monitor > Infrastructure > Analytics Nodes workspace allows you to view a summary of activities for
the analytics nodes. The analytics node function can run on the config nodes or on the control nodes. In the example on the
slide, the analytics node is running on the same node as the config node. This concept can be plainly seen by looking at the
Analytics Nodes section under the Host name field. Within the Host name field you can see the same host name
that is used for the config node. Like most other workspaces, Contrail provides a bubble chart that tells you at a glance if the
analytics node is up or down by displaying blue or red, respectively.
An explanation of the fields shown on the page follows:
• Host name: The name of this node.
• IP address: The IP address of this node.
• Version: The version of software installed on the system.
• Status: The current operational status of the node — Up or Down — and the length of time it is in that state.
• CPU (%): The average CPU percentage usage for this node.
• Memory: The average memory usage for this node.
• Generators: The total number of generators for this node.


Analytics Node Tabs


Clicking the host name for an individual analytics node lands you on the Details tab for that node. The same information that was present on the overview screen is there, with some additional detail. In this tab you are given information on the uptime of the node and the various processes that are running. Also, graphs are present that show the CPU and memory utilization to give you more information on the state of the node.
The other tabs provide additional information about the generators, query engine (QE) queries, and console logs. The
Generators tab displays information about the generators for an individual analytics node, such as the name, status,
messages, and bytes produced by a generator. The QE Queries tab displays the number of QE messages that are in the
queue for this analytics node. The Console tab displays system logging information for a defined time period for an
individual analytics node.


The Monitor > Infrastructure > Config Nodes Workspace


The config node is the node that handles all the inputs from the Web UIs—Contrail and OpenStack. To that end, it is very important that the config node stays in good working condition. The Config Nodes workspace is dedicated to making sure you are able to examine the health of the node. In the workspace overview you are presented with a bubble chart; as with the other bubble charts, blue means that the node is up and functioning, and red means that it is unreachable and having problems. The Config Nodes section presents you with parameters that you should be very familiar with by now. Note
that the host name is the same as the host name of the analytics node. A few pages ago we discussed how the config node
and the analytics node are running on the same node in this example.


Config Node Tabs


Clicking the host name of a config node provides additional details. The system presents you with two tabs—Details and Console. The Details tab, shown on the slide, provides similar information to what was seen in the overview, but additional details are given on the uptime of the processes and transformer. Also, graph data is given regarding the CPU and memory utilization. The Console tab provides you with the ability to view system logs that are specific to this config node.
The main takeaway from the Config Nodes workspace is that it provides a means to ensure that the config node is healthy and not running out of resources.


Monitor > Networking Workspace


The slide highlights the topic we discuss next.


The Monitor > Networking > Dashboard Workspace


The first option in the Monitor > Networking part of the menu is the Dashboard. It is project specific and shows the network topology in the upper part of the screen, and the Networks, Instances, and Port Distribution tabs at the bottom.


The Monitor > Networking > Projects Workspace


The Monitor > Networking > Projects workspace provides a summary of statistics for current projects in the domain—VNs,
VMs, traffic in and out, and throughput in and out. To examine more details about a specific project, click the name of the
project.


Projects Summary Tab


Clicking any of the projects listed on the Projects Summary tab gives details about connectivity, source and destination port distribution, and the project's networks and instances. Clicking an individual project opens a view that shows the details of how
your VNs are connected. The slide shows that the VN-A and VN-B VNs are connected, but the External network does not have
connectivity to either the VN-A or VN-B VN. These connections are formed by configuring a policy and associating that policy
with the necessary networks. Note that although the External VN does not have connectivity to the other VNs, this lack of
connectivity is by design. The External VN in this example is used as a floating IP pool for NAT purposes.
The Port Distribution chart section gives you a quick view of how ports are distributed for active traffic, and the bandwidth that traffic is using.


Project Instances Tab


The Instances tab provides you with detailed information about the current VM instances associated with the project.
Note that the information contained under this tab is the exact information that is contained under the Monitor >
Networking > Instances Summary tab. The VM instance name, VN and project, number of interfaces, IP address,
and traffic statistics information are given in the output. Note that, as the slide shows, many of these items are shortcuts to
other places within the Contrail Web UI.


The Monitor > Networking > Networks Workspace


The Networks workspace consists of a bubble chart at the top and a Network Summary section at the bottom. Click a particular network to get more details about it.


Network Details: Part 1


After you select a network in the Monitor > Networking > Networks workspace, you can see its details on a new page. Its upper part is a topology graph, and there are several tabs at the bottom.
The Port Map tab for an individual network displays the relative distribution of traffic for this network by protocol and by
port. The Port Distribution tab displays the relative distribution of traffic in and out by source port and destination
port. The Instances tab displays details for each instance associated with this network, including the number of
interfaces, the associated vRouter, the instance IP address, and the volume of traffic in and out. Additionally, you can click
the arrow near the instance name to reveal even more details about the instance—the interfaces and their addresses, UUID,
CPU (usage), and memory used of the total amount available.


Network Details: Part 2


The upper part of the slide shows the Traffic Statistics tab. You can move the sliders to see the graph for the time interval that is needed.
The bottom part of the slide shows the Port Distribution tab. It is a bubble chart that gives more details when you hover over the bubbles.


Instances Workspace
The Monitor > Networking > Instances workspace provides information on the VM instances that are active and
managed by Contrail.
You can click a particular instance to see more details about it.


Instance Details
The slide shows the Instance Details workspace. It includes the network topology graph, the Details tab (which shows the instance UUID, used and total memory, and CPU utilization), the Interfaces tab, the Traffic Statistics tab, and the Port Map tab.


Analyzing Live Traffic


The slide highlights the topic we discuss next.


Traffic Analyzers in Contrail


Before using the Contrail Web UI to configure traffic analyzers and packet capture for mirroring, make sure that the following
analyzer images are available in the VM image list in the OpenStack Web UI. The traffic analyzer images are enhanced for
viewing details of captured packets in Wireshark. The two analyzer image names, which are listed below, can be downloaded
from the Juniper website:
• analyzer-vm-console.qcow2 — Standard traffic analyzer; must be named analyzer in the image list.
• analyzer-vm-console-two-if.qcow2 — Traffic analyzer to use in a service chain with two interfaces
(left and right); must be named analyzer two interfaces in the image list.

There are two main ways to launch the analyzer VM instance: through the Monitor > Debug > Packet Capture and the Configure > Services workspaces. Note that if you use the method under the Monitor > Debug > Packet Capture workspace, a VM instance that uses the predefined m1.medium instance flavor, which uses 4 GB of RAM and 40 GB of disk space, is launched. However, if you use the method under the Configure > Services workspace, you can select which instance flavor to use. This instance flavor selection can result in a much smaller VM instance. Alternatively, you can manually adjust the resource usage for the m1.medium instance flavor to use fewer resources through the OpenStack Web UI.
Note that if you delete the m1.medium instance flavor in the OpenStack Web UI, you will not be able to launch an analyzer VM
through the Monitor > Debug > Packet Capture workspace.
A third way to launch the analyzer VM is via the Query > Flows > Flow Records workspace (discussed later in this chapter). In the Query Results tab, press the gear icon to the right of the flow of interest, and select Start Packet Capture. The analyzer VM is launched automatically and uses the m1.medium flavor in this case. It is seen in the OpenStack Web UI; however, it is not seen in either the Configure > Services > Service Instances or the Monitor > Debug > Packet Capture workspace in the Contrail Web UI.
Finally, you can also run the analyzer from the Monitor > Infrastructure > Virtual Routers > Compute-Node-Name > Interfaces workspace, by pressing the gear icon to the right of the interface and clicking the Start Packet Capture link. Similar to the previous option, the analyzer VM launches automatically, uses the m1.medium flavor, and is only seen in the OpenStack Web UI.


Uploading the Analyzer Image


To begin, you must first upload the analyzer image through the OpenStack Web UI. The process of uploading the analyzer
image is similar to uploading any other image. The slide shows user admin logged in under the Customer-A project. From
there, you must click the Create Image button to begin the process.


Analyzer Image Parameters


Begin specifying the analyzer parameters by first giving the image the proper name. Remember that we discussed the proper
name for the analyzer image depends on which image you are using. If you are using the image with the file name of
analyzer-vm-console.qcow2, you must name the image analyzer. If you are using the image with the file name of
analyzer-vm-console-two-if.qcow2 you must use the image name of analyzer two interfaces. In the
example on the slide we are using the analyzer-vm-console.qcow2 file, which requires us to use the name of
analyzer. After you give the image a name you must select the image file by selecting either the Image Location
option or the Image File option in the Image Source field. In the example on the slide, we simply used the Choose
File button to find the file on the local machine. Finally, you must specify the format of QCOW2—QEMU Emulator.
Although it is not necessary, the slide shows that the Public check box is selected. Selecting this check box permits other
projects to use the image.
Also, note that the image file is large—over 3 GB in size. To this end, you must make sure that there is enough disk space on
the config node to accommodate the large analyzer image. Also, depending on your connection speed, it might take a while
to upload the file.


Checking the Image Status


As with any image being uploaded through the OpenStack Web UI, you must make sure that the image is present and active.
The slide shows three different images that have been uploaded and are in the active state.


Case Study: Packet Capture 1


To complete this case study you must mirror traffic that is sent between the VN-A and VN-B networks to an analyzer VM.
There are strict requirements in which you must use the Monitor > Debug > Packet Capture workspace and you
must not capture any traffic sent to or from the physical network. Note that the analyzer image has already been uploaded
through the OpenStack Web UI.
Over the next slides we walk through the exact steps you must take to complete this case study.


Create a New Packet Capture


To begin, navigate to the Monitor > Debug > Packet Capture workspace and click the Create button. However,
before you do click the Create button, make sure that you are within the correct project.


Configuring the Analyzer Parameters


To configure the analyzer parameters you must first name the analyzer, select the VN that the analyzer VM will reside in,
associate the VNs, and configure the analyzer rules. When setting the VN that you want the analyzer VM to reside in, you can
select any existing VN, or you can use the Automatic option, which creates a new VN for the analyzer VM to reside in.
Selecting the Automatic option for this case study works well because it is not necessary for it to be in the same VNs as
the other VMs. Next, the Associated Networks field is where you must specify the VNs that you want to mirror traffic
from. In this case study, we want to mirror traffic from the VN-A and VN-B networks. Finally, you must create a rule that
specifies the VN-A and VN-B networks as the source and destination VNs, respectively. Technically, VN-B can be the source
network and VN-A can be the destination network because selecting the bi-directional (<>) option for the direction allows you
to match traffic that either VN initiates. Keep in mind that if you left the Source Network and Destination Network
fields at their default of Any, then traffic sent between the physical network and the VNs might receive the port mirroring
treatment if the VN that is in use for a floating IP pool is included in the Associated Networks field. If port mirroring of
the traffic sent to, and received from, the physical network occurs, the requirements of the case study are broken.
Note that associating the analyzer with the VNs at this point applies a special policy that includes the information from the
rules specified here to those VNs. However, you must have a different policy in place that allows the traffic between VN-A and
VN-B. This special policy only permits port mirroring to occur.


View the Mirrored Traffic


To view the mirrored traffic you must first wait for the state of the analyzer to change to Active; if it does not reach the active state, you might need to examine the resource usage on the compute node on which Contrail is attempting to place the analyzer VM. If there are not enough resources available on the compute node, such as RAM or disk space, the deployment
of the analyzer VM instance fails. Then, you can click the Configure icon and select the View Analyzer option. Note
that the graphic on the slide shows all the analyzer parameters that we configured on the previous page. Most of these
parameters can be changed by clicking the Edit button.


Examining the Analyzer VM


The slide shows the console view of the analyzer VM. The window shows the active packet capture of the Wireshark
software that is running on the VM. From the graphic on the slide you can determine that traffic that is being initiated from
both VN-A and VN-B is being mirrored to the analyzer VM.
If necessary, you can stop the running packet capture, and save it on the analyzer VM. From there you can use the Secure
Copy Protocol (SCP) to access the analyzer VM and retrieve the packet capture for later analysis. Note that to log into the
analyzer VM with SCP, you can use the default credentials of user name contrail and password contrail123.
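For example, assuming the capture was saved on the analyzer VM (the file path shown is hypothetical):
scp contrail@<analyzer-vm-ip>:/home/contrail/capture.pcap .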
However, one very important item to note is that to view the virtual network information in the packet capture on a device
other than the analyzer VM, you must use special Wireshark packet capture definitions. Without these special definitions,
you will only see the underlay packet information in the packet capture. You can obtain these special Wireshark definitions
from the Juniper support website.


Case Study: Packet Capture 2


The next case study is very similar to the first one: you must capture traffic that travels between VN-A and VN-B, and you must not capture traffic that travels between the VNs and the physical network. However, you must use a service instance and ensure that the analyzer VM only uses 512 MB of RAM on the compute node. You can quickly accomplish
these requirements by deploying an analyzer VM instance using a service instance and service template. We will go into
detail of how you can accomplish this over the next slides.
Alternatively, you could adjust the m1.medium instance flavor through the OpenStack Web UI and use the Monitor >
Debug workspace to meet the requirement of only using 512 MB of RAM on the compute node. However, the case study
still requires you to use a service instance, so you still cannot use the Monitor > Debug workspace to complete this case
study.


Service Templates
Begin creating a service template by navigating to the Services > Service Templates workspace and clicking the
Create button.


Service Template Parameters


Configure the service template by specifying a name, setting the service mode to Transparent and the service type to Analyzer, and selecting the analyzer image. Remember that the one-interface analyzer image must be named analyzer. In our scenario we only need an analyzer with one interface. If you were using the analyzer in a service chain, you would have to use the analyzer image that contains two interfaces. Next, add one Left interface and expand the Advanced option section. Clicking the Instance Flavor drop-down box presents you with the different flavor options. Remember that you can use the OpenStack Web UI to create new instance flavors, or adjust the predefined flavors. In our scenario, the m1.tiny flavor works well because it only uses 512 MB of RAM on the compute node.


Creating the Service Instance


Now that you have created the service template, you can begin creating the service instance. To begin, navigate to the
Services > Service Instance workspace. Once you arrive at that workspace, click the Create button to begin
creating the service instance.


Configuring the Service Instance Parameters


Creating the service instance is relatively easy in comparison to the service template. Most of the work for a service instance
is complete when you create the service template. All you must do now is give the service instance a name, select the
service template that we created earlier, and specify which VN the interface should belong to. You can select a VN that you
have already configured, or you can use the default option of Auto Configured. Using the Auto Configured
parameter here creates a new VN and associates the analyzer VM instance with that VN. For the purposes of this case study,
using the Auto Configured option works well because we do not need the analyzer VM in the same network as the other
VMs.
Once you click Save, you are taken to the Service Instances workspace and Contrail begins spawning the analyzer
service instance. At this point, you must wait for the analyzer service instance to reach the Active state, as shown on the
slide.


Editing the VN-A_to_VN-B Policy


Now you must edit the policy that permits traffic from VN-A to VN-B. To begin, navigate to the Configure > Networking
> Policies workspace. Once there you can click the Configure button and select the Edit option to begin editing the
policy.
Note that this policy has previously been associated with the VN-A network. If you forget to associate the correct policy with
the intended VN, the port mirroring will not occur.


Configuring Policy Parameters


Now that you are editing the VN-A_to_VN-B policy you must select the Mirror option. Selecting this option adds an
empty field that allows you to select the analyzer service instance by name. The result is that any traffic that matches the
policy is port mirrored to the analyzer VM instance.


Editing the VN-B_to_VN-A Policy


Remember that the case study requires you to port mirror traffic that is initiated from either VN. To this end, you must also
edit the VN-B_to_VN-A policy and configure port mirroring, just like you did for the VN-A_to_VN-B policy.


Configuring Policy Parameters


Just as with the VN-A_to_VN-B policy, you must configure the VN-B_to_VN-A policy to mirror traffic by selecting the
Mirror check box and selecting the analyzer instance by name.


Examining the Analyzer VM


The slide shows the console view of the analyzer VM. The window shows the running packet capture of the Wireshark
software that is running on the VM. From the graphic on the slide you can determine that traffic that is being initiated from
both VN-A and VN-B is being mirrored to the analyzer VM. As you can see, there is no difference in the packet capture from
using the two different methods of configuring and deploying the analyzer VM.
Remember, that you can stop the running packet capture, and save it on the analyzer VM. From there you can use SCP to
access the analyzer VM and retrieve the packet capture for later analysis. Note that to log into the analyzer VM with SCP, you
can use the default credentials of user name contrail and password contrail123.
Remember, to view the packet capture on a device other than the VM analyzer, you must have the special Wireshark packet
capture definitions. You can obtain these definitions through the Juniper support website.


Flow Queries and Logs


The slide highlights the topic we discuss next.


Querying Flows
Use the Query > Flows workspace to perform rich and complex SQL-like queries on flows in the Contrail Controller. You
can use the query results for such things as gaining insight into the operation of applications in a virtual network, performing
historical analysis of flow issues, and pinpointing problem areas with flows. We discuss this workspace over the next slides.


Query > Flows > Flow Series Workspace


The Query > Flows > Flow Series workspace allows you to enter query data into the fields to create a SQL-like query to display and analyze flows.
Use this workspace to create queries on the flow series table, with results in the form of time series data for flows. A time series is simply data (such as flow records) obtained over a period of time, usually at regular intervals.
The query fields available on the slide are described below.
• Time Range: Select a range of time for which to see the flow series.
• Select: Click the edit button (pencil icon) to open a window, where you can click one or more boxes to select
the fields to display from the flow series, such as source VN, destination VN, bytes, packets, and more.
• Where: Click the edit button (pencil icon) to open a query-writing window, where you can specify query values
for variables such as source VN, source IP, destination VN, destination IP, protocol, source port, and destination
port.
• Direction: Select the desired flow direction — INGRESS or EGRESS.
• Filter: Click the edit button (pencil icon) to open a Filter window, where you can select filter items by
which to sort, sort order, and limits to the number of results returned.


Query > Flows > Flow Series


The slide shows an example of generating a flow query and its result. Because we have selected 5-minute time granularity, every record corresponds to a 5-minute interval.
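The same kind of query can also be issued programmatically against the analytics API. The following sketch assumes the default analytics API port of 8081 and an illustrative virtual network name; the table and field names follow the documented FlowSeriesTable schema, and T=300 requests 5-minute (300-second) bins:
curl -X POST -H "Content-Type: application/json" http://<analytics-node-ip>:8081/analytics/query -d '{
    "table": "FlowSeriesTable",
    "start_time": "now-10m",
    "end_time": "now",
    "select_fields": ["T=300", "sourcevn", "destvn", "sum(bytes)", "sum(packets)"],
    "where": [[{"name": "sourcevn", "value": "default-domain:Customer-A:VN-A", "op": 1}]]
}'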


Query > Flows > Flow Series


When the Time Granularity option is selected (as in our example), the results can also be displayed as graphical charts.
Click the graph button on the right side of the tabular results. The results are now displayed in a graphical flow chart.


Query > Flows > Flow Records Workspace


The Query > Flows > Flow Records workspace allows you to create queries on individual flow records for detailed
debugging of connectivity issues between applications and virtual machines. Queries at this level return records of the active
flows within a given time period.
The query fields available on the slide are described below.
• Time Range: Select a range of time for which to see the flow records.
• Select: Click the edit button (pencil icon) to open a window, where you can click one or more boxes to select
attributes to display for the flow records, including Setup Time, Teardown Time, Aggregate Bytes, and Aggregate
Packets.
• Where: Click the edit button (pencil icon) to open a query-writing window, where you can specify query values
for variables such as source VN, source IP, destination VN, destination IP, protocol, source port, and destination
port.
• Direction: Select the desired flow direction: INGRESS or EGRESS.


Query > Flows > Flow Records


An example result for a Flow Records query is shown. The results can be exported to a CSV file for later analysis by an external tool.


Query > Logs > System Logs Workspace


The Query > Logs > System Logs workspace allows you to access the Query System Logs menu, where you can
view system logs according to criteria that you determine.
The query fields available on the slide are described below.
• Time Range: Select a range of time for which to see the system logs.
• Keywords: Enter an arbitrary value to query for.
• Where: Click the edit button (pencil icon) to open a query-writing window, where you can specify query values
for variables such as Source, Module, MessageType, and the like, in order to retrieve specific information.
• Filter: Click the edit button (pencil icon) to open a Filter window, where you can select filter items by
which to sort, sort order, and limits to the number of results returned.
• Level: Select the message severity level to view:
– SYS_NOTICE
– SYS_EMERG
– SYS_ALERT
– SYS_CRIT
– SYS_ERR
– SYS_WARN
– SYS_INFO
– SYS_DEBUG
• Run Query: Click this button to retrieve the system logs that match the query. The logs are listed in a box with
columns showing the Time, Source, Module Id, Category, Log Type, and Log message.


Query > Logs > Object Logs Workspace


Object logs allow you to search for logs associated with a particular object, for example, all logs for a specified VN. Object
logs record information related to modifications made to objects, including creation, deletion, and other modifications.
The query fields available on the slide are described below.
• Time Range: Select a range of time for which to see the logs.
• Object Type: Select the object type for which to show logs: Virtual Network, Virtual Machine,
Virtual Router, BGP Peer, and others.
• Object Id: Select from a list of available identifiers the name of the object you wish to use.
• Select: Click the edit button (pencil icon) to open a window where you can select searchable types by clicking
a check box:
– ObjectLog
– SystemLog
• Where: Click the edit button (pencil icon) to open the query-writing window, where you can specify query values
for variables such as Source, ModuleId, and MessageType, in order to retrieve information as specific as you
wish.
• Filter: Click the edit button (pencil icon) to open a Filter window, where you can select filter items by
which to sort, sort order, and limits to the number of results returned.


TTL Configuration for Analytics Data


The Cassandra database used by the Contrail analytics nodes allows a TTL (time to live) to be specified for all data written
to it. After the TTL expires, the data is no longer available for queries. The default TTL for all data types is 48 hours (2 days).
To provide flexibility in how long each type of analytics data remains available, Contrail provides a configuration mechanism
to set a different TTL for each type of data. The four TTL parameters that can be specified in
/etc/contrail/contrail-collector.conf are:
• analytics_config_audit_ttl - TTL for configuration audit data coming into the collector;
• analytics_statistics_ttl - TTL for statistics data;
• analytics_flow_ttl - TTL for flow data;
• analytics_data_ttl - TTL for messages and object logs.
TTLs are specified in hours. A typical configuration in /etc/contrail/contrail-collector.conf could be:
analytics_data_ttl=48
analytics_config_audit_ttl=168
analytics_statistics_ttl=24
analytics_flow_ttl=2
In this case, we save the config audit logs for 7 days, messages and object logs for 2 days, statistics data for 1 day, and flow
data for 2 hours.
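A change to these TTL values takes effect only after the collector process is restarted. A minimal sketch, assuming the
collector runs as the standard contrail-collector service on the analytics node:

service contrail-collector restart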
If you use fab scripts for Contrail provisioning, the following parameters in testbed.py translate to the
contrail-collector.conf parameters shown above:
database_ttl = 48
analytics_config_audit_ttl = 168
analytics_statistics_ttl = 24
analytics_flow_ttl = 2
More details on provisioning Contrail are contained in the Installation (Appendix A) chapter.


Underlay Overlay Mapping


The slide highlights the topic we discuss next.


Underlay Overlay Mapping


Virtualization presents a new paradigm for visibility and debugging of network elements, both physical and virtual.
Applications live in the overlay network, and they sometimes face connectivity, latency, and other performance problems. Is it
a network problem? How do we isolate and solve it?
To answer these questions, we need to map what is happening in the overlay network to the networking entities operating in
the physical network. Underlay Overlay Mapping in Contrail allows you to look at a traffic flow in the overlay and examine
which physical routers and network interfaces it traverses in the underlay. This information can be correlated with interface
statistics and other health indicators in the underlay network.
Starting with Contrail Release 2.20, you can view a variety of analytics related to the underlay and overlay traffic in the
Contrail Web UI. The following are some of the analytics that Contrail provides for statistics and visualization of the overlay
underlay traffic:
• View the topology of the underlay network.
• View the details of any element in the topology.
• View the underlay path of an overlay flow.


Architecture and Data Collection


Accumulation of the data to map an overlay flow to its underlay path is performed in several steps across Contrail modules.
The following outlines the essential steps.
1. The SNMP collector module polls physical routers.
The SNMP collector module receives the authorizations and configurations of the physical routers from the
contrail-config module, polls all of the physical routers using SNMP, and then uploads the data to the Contrail
analytics collectors. The SNMP information is stored in the pRouter UVEs (physical router user visible
entities).
2. IPFIX and sFlow protocols are used to collect the flow statistics.
The physical router is configured to send flow statistics to the collector, using one of the collection protocols,
Internet Protocol Flow Information Export (IPFIX) or sFlow (an industry standard for sampled flow of packet
export at Layer 2).
3. The topology module reads the SNMP information.
The Contrail topology module reads SNMP information from the pRouter UVEs from the analytics API, computes
the neighbor list, and writes the neighbor information into the pRouter UVEs. This neighbor list is used by the
Contrail WebUI to display the physical topology.
4. The Contrail user interface reads and displays the topology and statistics.
The Contrail user interface module reads the topology information from the Contrail analytics and displays the
physical topology. It also uses information stored in the analytics to display graphs for link statistics, and to
show the map of the overlay flows on the underlay network.


Enable Physical Topology Workspace


By default, the Infrastructure > Physical Topology workspace is disabled and is not visible in the Contrail Web UI. To
enable it, first edit the /etc/contrail/config.global.js file to add the following lines (you can add them at the end
of the file):
config.features = {};
config.features.disabled = [];
Then, restart the Web UI with the command service supervisor-webui restart.
It is possible that you will also need to perform the following restart commands:
service contrail-snmp-collector restart
service contrail-topology restart
The contrail-snmp-collector and contrail-topology daemons are both new additions to the
contrail-analytics node. The contrail-analytics package contains these new features and their associated files. The
contrail-status command (discussed in more detail in the Troubleshooting chapter) displays the new services.
If you experience any problems with the Underlay Overlay Mapping functionality, you can enable debug logging in the
corresponding processes. To do that, edit the following files:
/etc/contrail/contrail-snmp-collector.conf
/etc/contrail/contrail-topology.conf
set the logging level to DEBUG, and restart the processes. The log files to monitor for errors are:
/var/log/contrail/contrail-snmp-collector.log
/var/log/contrail/contrail-snmp-collector-stdout.log
/var/log/contrail/contrail-topology.log


Steps to Perform on Physical Devices


The required steps to perform on physical devices before you can start using underlay overlay mapping are shown in the
slide. You need to enable SNMP, LLDP, LLDP-MED (if supported on the device), and IPFIX or sFlow.

Configure in Contrail Web UI


You also need to add each physical device in the Configure > Physical Devices > Physical Routers workspace.
We worked with this workspace in one of the previous chapters. Use the Add Physical Router button and add the
Name (make sure the configured name matches the device's hostname), the Management IP, and the SNMP parameters
for the device. Other fields are optional. As always, you can use API scripting instead of the GUI.


Physical Topology Workspace


The slide shows the Physical Topology workspace that appears in the GUI after you perform the operations described on
the previous slides (remember, it is not present in the Contrail 2.21 GUI by default). Click a physical device to get device
details (the Details tab) and interface statistics collected via SNMP (the Interfaces tab).


Traffic Statistics
Click any physical link in the Physical Topology workspace to enable the Traffic Statistics tab, which graphs
the traffic for that link as a function of time. Use the sliders in the lower graph to zoom in to an interval of interest; the graph
for that time interval is shown in the upper panel at a different scale.


Search Flows
Contrail can list historical overlay flow records according to the flow parameters used for Flow Record queries, that is,
Source-VN/Source-IP, Dest-VN/Dest-IP, vRouter, Protocol/Source-Port, and Protocol/Dest-Port.
Then it can map these flows onto the Physical Nodes to see what path they took. The underlay flow parameters used for a
given overlay flow depend on the type of encapsulation being used. MPLS-over-UDP and VXLAN encapsulations try to add
entropy (for better load-balancing between paths) by varying the UDP source-port based on overlay flow parameters. Multiple
overlay flows are expected to hash onto the same underlay flow. When looking at sFlow or IPFIX information of underlay flows
to infer the path of a given overlay flow, Contrail can exploit all samples of the underlay flow, even if they were actually due to
other overlay flows.


Trace Flows
Click the Trace Flows tab to see a list of active flows. To see the path of a flow, click a flow in the active flows list, then
click the Trace Flow button. The path taken in the underlay by the selected flow displays.
Because the Trace Flows feature uses IP traceroute to determine the path between the two vRouters involved in the flow,
it has the same limitations as IP traceroute, including that Layer 2 devices in the path are not listed and therefore do not
appear in the topology.


Analytics API
The slide highlights the topic we discuss next.


Contrail Analytics API


Contrail provides two different REST APIs: the Configuration API described in Chapter 4 (HTTP port 8082 by default), and
the Analytics (also known as Opserver) API (HTTP port 8081 by default). In this chapter, we focus on the Analytics
API.


User Visible Entities


Opserver provides an API to get the current state of the User Visible Entities in the Contrail system. A User Visible Entity
(UVE) is defined as an object that may span multiple components and may require aggregation before the UVE information
is presented. Examples include virtual networks, virtual machines, and so on. Operational information for a virtual network
may span multiple vRouters, config nodes, and control nodes. Opserver aggregates all of this information and presents it
through the REST API.
If there is a need to get partial information from the UVE (before aggregation), then relevant flags can be passed with the API
request to get only the filtered information. Supported filtering flags are:
• sfilt: For Source (usually hostname of the generator) filtering,
• mfilt: For Module (module name of the generator) filtering,
• cfilt: For UVE struct type filtering - useful when UVE is a composition of multiple structs; and
• kfilt: Filter by UVE keys, useful to get multiple, but not all, UVEs of a particular type.
For example, /analytics/uves/virtual-network/vn1?sfilt=src1 gives Virtual Network vn1’s information
provided by all components on Source src1.
If you need information for multiple UVEs at the same time, you can issue a wildcard query. For example,
/analytics/uves/virtual-network/VN-* returns all UVEs for virtual networks whose names start with VN- (such
as VN-A or VN-B).
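These UVE URLs can also be exercised directly with a tool such as curl. The following is a minimal sketch; the analytics API
address 10.10.10.230 is a hypothetical value used for illustration:

# Full aggregated UVE for Virtual Network vn1
curl -s http://10.10.10.230:8081/analytics/uves/virtual-network/vn1
# Only the contribution from generators on Source src1
curl -s 'http://10.10.10.230:8081/analytics/uves/virtual-network/vn1?sfilt=src1'
# Wildcard query: all Virtual Network UVEs whose names start with VN-
curl -s 'http://10.10.10.230:8081/analytics/uves/virtual-network/VN-*'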


Browsing the API: Part 1


The slide shows how the Analytics API can be accessed from a web browser. Because the results are returned in JSON
format, it is convenient to install a JSON browser plug-in that formats the output hierarchically and highlights the URLs.


Browsing the API: Part 2


The slide shows how a list of vRouters can be accessed via the Analytics API. Clicking one of the two vRouter links returns a
lengthy JSON document with many details about that vRouter. This is the same data that can be accessed in the
Monitor > Infrastructure > Virtual Routers workspace.


Log and Flow Information


Analytics query APIs are available as REST APIs and are conceptualized as queries on logical tables. Currently, the Contrail
Web UI uses the analytics query API, but any non-Contrail client can leverage these REST APIs as well.
The Contrail analytics query API is modeled on SQL. As a reminder, a typical SQL query on a table looks like:
SELECT col1, col2, … coln FROM tablename WHERE index1=value1
Contrail does not currently support SQL syntax itself, but the parameters of the analytics query API and of an SQL query
are similar.
Both synchronous and asynchronous queries to the API are supported. In a synchronous query, the API server sends the
result inline with the query processing. In an asynchronous query, the client first sends a request, then checks the status
until query processing is complete, and then retrieves the results. Asynchronous API queries are slightly more difficult to
implement.


Flow Record Query


The slide shows an example where a synchronous API request is used to query the flow record table. In this case, we poll
FlowRecordTable, selecting the vrouter, sourceip, destvn, and destip fields from it. We do some filtering using
the where parameter, and we also filter on the flow direction. The start time and the end time are mandatory; they define
the time period of the query data.
Note that for start_time we use the value "now-43200s", which means 43200 seconds (12 hours) before the current
time. It is important not to use spaces before or after the minus (-) sign in this expression.
The request is sent using the curl tool. In a production setup, such requests will likely be sent using programming languages
such as Python, Java, and the like.
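A minimal sketch of such a synchronous request follows. The analytics node address (10.10.10.230) and the source VN
name (default-domain:demo:VN-A) are hypothetical values used for illustration; in the where term, the operator
code 1 means EQUAL, and dir set to 1 selects ingress flows:

curl -s -X POST -H 'Content-Type: application/json' \
  http://10.10.10.230:8081/analytics/query -d '{
    "table": "FlowRecordTable",
    "start_time": "now-43200s",
    "end_time": "now",
    "select_fields": ["vrouter", "sourceip", "destvn", "destip"],
    "where": [[{"name": "sourcevn", "value": "default-domain:demo:VN-A", "op": 1}]],
    "dir": 1
  }'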
At the bottom of the slide, we see the result of a similar request obtained in the Contrail Web UI. Remember, the Web UI
uses the same API to get data from the Contrail system.


OpenStack Ceilometer Support


As we have seen in this chapter, Contrail Analytics exposes a rich variety of comprehensive and in-depth statistics related to
the operational state of virtual machines, virtual networks, floating IPs and other objects. Cloud operators using Contrail as
the networking solution in an OpenStack deployment benefit from having these statistics exposed to them. The Contrail
Analytics API server provides a very fast and scalable REST API implementation to query these statistics.
However, for cloud operators who prefer to query the Ceilometer APIs (Ceilometer is a data collection and telemetry
service for OpenStack clouds) to extract these statistics, Contrail provides a Ceilometer driver and plug-in that exposes
them.
The Contrail Ceilometer plug-in adds the capability to meter the traffic statistics of floating IP addresses in Ceilometer. The
following meters for each floating IP resource are added by the plug-in in Ceilometer:
ip.floating.receive.bytes
ip.floating.receive.packets
ip.floating.transmit.bytes
ip.floating.transmit.packets
Contrail Release 2.20 and later supports the OpenStack Ceilometer services. Ceilometer services are installed only on the
first OpenStack controller node and do not support high availability in the current Contrail release. See the Contrail
documentation on the www.juniper.net website for details on Ceilometer architecture and installation.


We Discussed:
• How to work with the Monitor workspace;
• How to analyze live traffic using service instances and the Debug workspace;
• How to run flow queries and examine system logs;
• How to use Underlay Overlay mapping; and
• How to work with Contrail Analytics API.


Review Questions
1.

2.

3.

Answers to Review Questions
1. You can deploy an analyzer VM from the Monitor > Debug > Packet Capture or the Configure >
Services workspace. Additionally, the Start Packet Capture option is available in the Query > Flows >
Flow Record and Monitor > Infrastructure > Virtual Routers > Compute-Node-Name
> Interfaces workspaces.

2. Records of all packets, bytes, and flows are sent to the analytics node. Also, infrastructure statistics are sent to the
analytics nodes.

3. A User Visible Entity is defined as an object that may span multiple controller components and may require aggregation
before its information is presented.

Chapter 7: Troubleshooting

We Will Discuss:
• The use of Contrail command-line interface (CLI) commands;
• The use of Fabric scripts;
• The use of OpenStack CLI commands;
• The use of vRouter commands; and
• The use of Contrail Introspect.


Contrail CLI Commands


The slide lists the topics we will discuss. We discuss the highlighted topic first.


Troubleshooting Checklist
The slide presents a list of the most important Contrail system health checks to perform before doing any more
detailed troubleshooting: checking available disk space on all nodes and partitions, checking that NTP is
correctly configured and working, and running the contrail-status utility. The contrail-status command is described on
the next slide.
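A quick pass over these checks from the shell of any node might look like the following sketch, using standard Linux and
Contrail utilities:

df -h               # available disk space on all mounted partitions
ntpq -p             # NTP peer list and synchronization status
contrail-status     # status of the Contrail components on this node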


Contrail CLI Commands: contrail-status


To display a list of component statuses on a given node, issue the contrail-status command. The output of the command
depends on which roles are assigned to the node on which the command is issued. Several examples from various
nodes are shown on the slide.


Contrail CLI Commands: contrail-logs


During most troubleshooting sessions, log files are a good place to start, and Contrail is no different. The contrail-logs
command, by itself, shows the system log messages from all nodes for the last ten minutes. To view this information, issue
the contrail-logs CLI command on the node defined as the collector node in testbed.py.

[root@openstack testbeds]# more testbed.py


[...]
#Role definition of the hosts.
env.roledefs = {
'all': [host1, host2, host3, host4, host5],
'cfgm': [host1],
'openstack': [host1],
'control': [host2, host3],
'compute': [host4, host5],
'collector': [host1],
'webui': [host1],
'database': [host1],
'build': [host_build],
}
[...]
Further options to the contrail-logs command are discussed on subsequent pages.


Contrail CLI Commands: contrail-logs


Several options exist for the contrail-logs command and can be viewed with the --help option as shown below:
root@openstack:~# contrail-logs --help
usage: contrail-logs [-h] [--analytics-api-ip ANALYTICS_API_IP]
[--analytics-api-port ANALYTICS_API_PORT]
[--start-time START_TIME] [--end-time END_TIME]
[--last LAST] [--source SOURCE]
[--node-type
{Invalid,Config,Control,Analytics,Compute,WebUI,Database,OpenStack,ServerMgr}]
[--module
{contrail-control,contrail-vrouter-agent,contrail-api,contrail-schema,contrail-a
nalytics-api,contrail-collector,contrail-query-engine,contrail-svc-monitor,Devic
eManager,contrail-dns,contrail-discovery,IfmapServer,XmppServer,contrail-analyti
cs-nodemgr,contrail-control-nodemgr,contrail-config-nodemgr,contrail-database-no
demgr,Contrail-WebUI-Nodemgr,contrail-vrouter-nodemgr,Storage-Stats-mgr,Ipmi-Sta
ts-mgr,contrail-snmp-collector,contrail-topology,InventoryAgent,contrail-alarm-g
en,contrail-tor-agent}]
[--instance-id INSTANCE_ID] [--category CATEGORY]
[--level LEVEL] [--message-type MESSAGE_TYPE] [--reverse]
[--verbose] [--all] [--raw]
[--object-type
{service-chain,database-node,routing-instance,xmpp-connection,analytics-query,virtua
l-machine-interface,config-user,analytics-query-id,None,logical-interface,xmpp-p
eer,generator,virtual-network,analytics-node,prouter,bgp-peer,config,dns-node,co
ntrol-node,physical-interface,None,virtual-machine,vrouter,None,None,service-ins
tance,config-node}]
[--object-values] [--object-id OBJECT_ID]
[--object-select-field {ObjectLog,SystemLog}]
[--trace TRACE] [--limit LIMIT] [--send-syslog]
[--syslog-server SYSLOG_SERVER]
[--syslog-port SYSLOG_PORT] [--f] [--keywords KEYWORDS]
[--message-types]

optional arguments:
-h, --help show this help message and exit
--analytics-api-ip ANALYTICS_API_IP
IP address of Analytics API Server (default:
127.0.0.1)
--analytics-api-port ANALYTICS_API_PORT
Port of Analytics API Server (default: 8081)
--start-time START_TIME
Logs start time (format now-10m, now-1h) (default: None)
--end-time END_TIME Logs end time (default: None)
--last LAST Logs from last time period (format 10m, 1d) (default: None)
--source SOURCE Logs from source address (default: None)
--node-type
{Invalid,Config,Control,Analytics,Compute,WebUI,Database,OpenStack,ServerMgr}
Logs from node type (default: None)
--module
{contrail-control,contrail-vrouter-agent,contrail-api,contrail-schema,contrail-a
nalytics-api,contrail-collector,contrail-query-engine,contrail-svc-monitor,Devic
eManager,contrail-dns,contrail-discovery,IfmapServer,XmppServer,contrail-analyti
cs-nodemgr,contrail-control-nodemgr,contrail-config-nodemgr,contrail-database-no
demgr,Contrail-WebUI-Nodemgr,contrail-vrouter-nodemgr,Storage-Stats-mgr,Ipmi-Sta
ts-mgr,contrail-snmp-collector,contrail-topology,InventoryAgent,contrail-alarm-g
en,contrail-tor-agent}
Logs from module (default: None)
--instance-id INSTANCE_ID
Logs from module instance (default: None)
--category CATEGORY Logs of category (default: None)
--level LEVEL Logs of level (default: None)
--message-type MESSAGE_TYPE
Logs of message type (default: None)
--reverse Show logs in reverse chronological order (default: False)
--verbose Show internal information (default: False)
--all Show all logs (default: False)
--raw Show raw XML messages (default: False)
--object-type
{service-chain,database-node,routing-instance,xmpp-connection,analytics-query,vi
rtual-machine-interface,config-user,analytics-query-id,None,logical-interface,xm
pp-peer,generator,virtual-network,analytics-node,prouter,bgp-peer,config,dns-nod
e,control-node,physical-interface,None,virtual-machine,vrouter,None,None,service
-instance,config-node}
Logs of object type (default: None)
--object-values Display list of object names (default: False)
--object-id OBJECT_ID
Logs of object name (default: None)
--object-select-field {ObjectLog,SystemLog}
Select field to filter the log (default: None)
--trace TRACE Dump trace buffer (default: None)
--limit LIMIT Limit the number of messages (default: None)
--send-syslog Send syslog to specified server and port (default: False)
--syslog-server SYSLOG_SERVER
IP address of syslog server (default: localhost)
--syslog-port SYSLOG_PORT
Port to send syslog to (default: 514)
--f Tail logs from now (default: False)
--keywords KEYWORDS comma separated list of keywords (default: None)
--message-types Display list of message type (default: False)
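A few typical invocations built from the options above; the node and module names are placeholders:

contrail-logs --last 1h                                      # system logs from the last hour
contrail-logs --last 10m --source compute-1 --module contrail-vrouter-agent
contrail-logs --object-type virtual-network --object-values  # list known VN object names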


Contrail CLI Commands: contrail-version


The contrail-version command displays a list of all installed components with their version and build numbers. As
with the contrail-status command, the contrail-version command is node-dependent; that is, the output
differs from node to node depending on which roles have been assigned to the node. Issue the contrail-version |
grep contrail command to display only Contrail-related components.


Managing Services
All Contrail services are managed by the process supervisord, which is open source software written in Python. Each
Contrail node type, such as compute, control, and so on, has an instance of supervisord that, when running, launches
Contrail services as child processes. All supervisord instances display in contrail-status output with the prefix
supervisor. If the supervisord instance of a particular node type is not running, none of the services for that node
type are running. For more details about the open source supervisord process, see http://www.supervisord.org.
The standard Linux command, service, is used to start, stop, reset, or view the status of a given service running in
Contrail. There are several options for the service command as shown below:
• start: Start a named service
• stop: Stop a named service
• restart: Restart a named service
• status: Display the status of a named service
The example demonstrates how to restart the supervisor-webui service.
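For example, a sketch of checking and then restarting the Web UI services:

service supervisor-webui status
service supervisor-webui restart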


Fabric Utility Scripts


The slide highlights the topic we discuss next.


Fabric Utility Scripts: Part 1


Fabric scripts are used as part of Contrail install procedure, as described in the Installation content. In addition to those,
many more Fabric utility scripts are available. Issue the fab --list command from the /opt/contrail/utils
directory to view a complete listing.


Fabric Utility Scripts: Part 2


This slide shows several examples of useful Fabric scripts.


OpenStack CLI Usage


The slide highlights the topic we discuss next.


OpenStack CLI Commands


A full discussion of OpenStack CLI commands is outside the scope of this course. However, it is worth your time to
investigate the extensive documentation found at http://docs.openstack.org/.
When issuing OpenStack commands, keep in mind that they must be authenticated. The easiest method to accomplish this
is by issuing the source /etc/contrail/openstackrc command at the beginning of the CLI session. An example of
this is shown on the slide.
The example lists all images that are available for deployment as virtual machines. These images are stored in OpenStack
Glance. Some of the Glance CLI examples are discussed later.
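A minimal sketch of an authenticated CLI session using commands covered in this section:

source /etc/contrail/openstackrc   # load credentials into the environment
glance image-list                  # images available for deployment
nova list --all-tenants            # VMs deployed across all tenants
keystone tenant-list               # configured tenants/projects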


OpenStack: Nova Command Line


This slide demonstrates some OpenStack Nova CLI commands.
The first example lists all servers (VMs) currently deployed for all tenants in the cloud orchestration system.
The second example on the slide lists all available flavors. In OpenStack terms, a flavor is simply a template of settings that
can be applied to a deployed virtual machine. A flavor defines aspects such as the amount of RAM and virtual CPUs to
allocate to a virtual machine.
The final example lists the hypervisors (compute nodes) that are known by Nova.


OpenStack: Neutron Command Line


This slide demonstrates some OpenStack Neutron CLI commands.
The first example shows the creation of a network simply named network172.
The second example shows the creation and application of a 172.16.0.0/12 subnet to the network172 network. Note
that the gateway IP and DHCP scope are generated automatically.
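The two operations might be typed as follows; the network name and prefix are the values from the example on the slide:

neutron net-create network172
neutron subnet-create network172 172.16.0.0/12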


OpenStack: Keystone Command Line


This slide demonstrates some OpenStack Keystone CLI commands.
The first example provides a list of tenants or projects currently configured in the cloud orchestration system.
The second example creates a user named contrail that has access to the demo project. This user will have a password
of contrail.


OpenStack: Glance Command Line


This slide demonstrates some OpenStack Glance CLI commands.
The first CLI example on the slide uses the glance CLI command to display a list of images stored in OpenStack. Notice that
one of the images is in qcow2 format, while the other is in vmdk format.
The second example shows greater detail about the analyzer image.


OpenStack Log Files


All OpenStack components have corresponding log files. Their locations are summarized in the table presented in the slide.


vRouter Commands and Troubleshooting


The slide highlights the topic we discuss next.


Compute Node Interfaces


The slide reviews Compute node architecture, including interface names.
pkt0 is the tap interface between the vRouter and the vnswad daemon (also called the vRouter agent). The vRouter traps
packets that need control processing, such as flow setup, onto this interface. The agent also uses the pkt0 interface to
transmit control packets (such as ARP).
vhost0 is a tap interface between the host OS and the vRouter; it is the Layer 3 interface used by the host operating
system. As part of setup, the Contrail software migrates the IP configuration from the physical server Ethernet port to
vhost0. When the networking stack of the host OS sends packets on the vhost0 interface, they are received by the
vRouter module, which in turn routes them. When the vRouter needs to send packets to the networking stack of the host
OS, it transmits them through the vhost0 interface.
ethX shown on the slide is the physical Ethernet interface of the server. It provides the vRouter with connectivity to the
underlay network. The interface name can differ depending on the system.
On a standard Linux installation there is no guarantee that a physical interface will come up with the same name after a
system reboot. Linux NetworkManager tries to accommodate this behavior by linking the interface configurations to the
hardware addresses of the physical ports. However, Contrail avoids using hardware-based configuration files because this
type of solution cannot scale when using remote provisioning and management techniques.
The Contrail alternative is a threefold interface-naming scheme based on <bus, device, port (or function)>. As an example,
on a server operating system that typically gives interface names such as p4p0 and p4p1 for onboard interfaces, the
Contrail system generates those names as p4p0p0 and p4p0p1, when using the optional package
contrail-interface-name.
When the contrail-interface-name package is installed, it uses the threefold naming scheme to provide consistent
interface naming after reboots. The contrail-interface-name package is installed by default when a Contrail ISO
image is installed. If you are using an RPM-based installation, you should install the contrail-interface-name
package before doing any network configuration. If your system already has another mechanism for getting consistent
interface names after a reboot, it is not necessary to install the contrail-interface-name package.


vRouter Command Line Tools: Part 1


The most useful commands for inspecting the Contrail vRouter module are summarized on this and the following slides:
• vif - Inspect the vRouter interfaces associated with the vRouter module. The vRouter requires vRouter
interfaces (vif) to forward traffic. Use the vif command to see the interfaces that are known by the vRouter.
Note that having interfaces only in the OS (Linux) is not sufficient for forwarding; the relevant interfaces must be
added to the vRouter. Typically, the setup of interfaces is handled by components like nova-compute or the
vRouter agent.
• flow - Display active flows in a system. Use -l option to list everything in the flow table.
• vrfstats - Display next hop statistics for a particular VRF. It is typically used to determine if packets are
hitting the expected next hop.
• rt - Display routes in a VRF.
• dropstats - Inspect packet drop counters in the vRouter.
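A quick tour of these tools on a compute node might look like this sketch; VRF 0 is used for illustration, and some vRouter
versions also require a --family option for rt:

vif --list         # all interfaces known to the vRouter
flow -l            # list the active flow table
vrfstats --get 0   # next-hop statistics for VRF 0
rt --dump 0        # routes in VRF 0
dropstats          # packet drop counters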


vRouter Command Line Tools: Part 2


Useful vRouter commands, continued:
• mpls - Display the input label map programmed into the vRouter.
• mirror - Display the mirror table entries. The mirror table can be dumped with the --dump option.
• vxlan - Dump the vxlan table. The vxlan table maps a network ID to a next hop, similar to an MPLS table. If
a packet comes with a vxlan header and if the VNID is one of those in the table, the vRouter will use the next
hop identified to forward the packet.
• nh - Display the next hops that the vRouter knows. Next hops tell the vRouter the next location to send a
packet in the path to its final destination.
• --help - Display all command options available for the current command.


vRouter Command Line Tools: Part 3


Use the lsmod and modinfo Linux commands to obtain information about the vRouter kernel module. Here are example
outputs:
root@compute-1:~# lsmod | grep vrouter
vrouter 228312 1
root@compute-1:~# modinfo vrouter
filename: /lib/modules/3.13.0-40-generic/extra/net/vrouter/vrouter.ko
version: 1.0
license: GPL
srcversion: AD30CDA2B59E70818DFEFC4
depends:
vermagic: 3.13.0-40-generic SMP mod_unload modversions
parm: vr_flow_entries:uint
parm: vr_oflow_entries:uint
parm: vr_bridge_entries:uint
parm: vr_bridge_oentries:uint
parm: vr_mpls_labels:uint
parm: vr_nexthops:uint
parm: vr_vrfs:uint
parm: vrouter_dbg:Set 1 for pkt dumping and 0 to disable, default value is
0 (int)
Linux network namespaces are analogous to Junos routing instances. When a Source NAT service instance is created, a new
namespace is instantiated on the compute node. You can then see it and work with it using ip netns commands; for
example, to list the namespaces:
root@compute-1:~# ip netns
vrouter-6f3cc973-7bcc-4481-9072-7aca5dac798b
You can now run commands in relation to this namespace, for example to see the list of interfaces, issue:
root@compute-1:~# ip netns exec vrouter-6f3cc973-7bcc-4481-9072-7aca5dac798b
ifconfig
gw-fa6a328c-c6 Link encap:Ethernet HWaddr 02:12:fe:60:be:70
inet addr:172.31.0.4 Bcast:172.31.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:18 errors:0 dropped:0 overruns:0 frame:0
TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1068 (1.0 KB) TX bytes:690 (690.0 B)

int-567bd9dd-4 Link encap:Ethernet HWaddr 02:45:d0:2e:11:69
inet addr:100.64.0.4 Bcast:100.64.0.7 Mask:255.255.255.248
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:18 errors:0 dropped:0 overruns:0 frame:0
TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1068 (1.0 KB) TX bytes:690 (690.0 B)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Network namespaces are also created if Load-Balancing-as-a-Service (LBaaS) is configured, or Docker containers are used
in the Contrail system.


Monitoring Traffic Flow


In this example, we send traffic from the VN-A_VM-1 virtual machine to VN-B_VM-1 and trace it using the commands
considered previously. First, we use ifconfig to get a list of interfaces on the compute-1 node, on which VN-A_VM-1 resides.
As shown, it has one tap interface. Then we monitor traffic on this interface using tcpdump. The ping traffic sent
between the VMs is seen on this interface, without any tunnel encapsulation.
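A sketch of the two capture commands; the tap interface name is a placeholder that will differ per VM and can be taken
from the ifconfig output:

tcpdump -ni tapXXXXXXXX-XX     # VM-side traffic, no tunnel encapsulation
tcpdump -ni eth0 proto gre     # fabric side: GRE-encapsulated traffic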


Traffic on compute-1
Now you dump traffic on the physical interface eth0 of the compute-1 node, using tcpdump. The encapsulated
(MPLS-over-GRE) packets are seen in the output.
The flow -l command is used to show the flows seen by the vRouter. The ping traffic has produced two flows,
corresponding to the two session directions.


Traffic on compute-2
You issue the same commands on the other compute node, compute-2, which has VN-B_VM-1 instantiated on it. The
commands' output is similar. Note the MPLS label for the incoming packets, which is 17 in our case.


Determining Next Hop and Interface


Now, based on the label value (which is 17), you determine the next hop that packets take. According to the mpls --get
command, the next-hop index is 22.
To get interface and VRF information for next-hop index 22, you issue the nh --get command. The resulting VRF index is 1,
and the interface index is 3.
Next, you use the vif --get command to get information on the interface with index 3. It turns out that this index
corresponds to interface vif0/3 (on the vRouter side). As seen from the output, on the OS side this interface is named
tapc9c18afd-71.
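The same walk, typed as a sketch with the index values from this example:

mpls --get 17   # label 17 resolves to next-hop index 22
nh --get 22     # next hop 22 points at VRF 1, interface index 3
vif --get 3     # interface index 3 is vif0/3, OS name tapc9c18afd-71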


Final Checks on compute-2


You now monitor traffic on the tapc9c18afd-71 interface to make sure that packets are seen on this interface (it connects to
VN-B_VM-1). Although not shown, a tcpdump on the guest OS could be performed as well.
Two more commands are shown that you can use to get information on a particular VRF: vrfstats and rt.


Troubleshooting Case Study


This animated slide presents a short case study.
The task says, “VN-A_VM-1 can't ping VN-B_VM-1”. In such a situation, you need to know whether the ping worked before
and, if it did, whether anything has changed in the system since then. In this particular case, you can consider this a
new setup.
The first step is to dump the traffic on the tap interface. You can see that pings go out, but no replies are
received.
Next, it is a good idea to view packet drop statistics using the dropstats command. You see that the Flow Action Drop
counter is increasing, which typically means that no policy exists to permit the traffic.
Indeed, no policy between VN-A and VN-B has been configured, so the packet drops are expected behavior. Add a policy
permitting traffic between VN-A and VN-B, and apply it to these VNs, to make the ping work.


Using Contrail Introspect


The slide highlights the topic we discuss next.


Using the Contrail Introspect


Introspect is a mechanism for taking a program object and querying information about it.
Sandesh is the name of a unified infrastructure in the Contrail Virtual Networking solution. At the same time, Sandesh is a
way for the Contrail daemons to provide a request-response mechanism. Requests and responses are defined in Sandesh
format and the Sandesh compiler generates code to process the requests and send responses.
Sandesh also provides a way to use a Web browser to send Sandesh requests to a Contrail daemon and get the Sandesh
responses. This feature is used to debug processes by looking into the operational status of the daemons.
Each Contrail daemon starts an HTTP server, with the following page types:
• The main index.html listing all Sandesh modules and the links to them.
• Sandesh module pages that present HTML forms for each Sandesh request.
• XML-based dynamically-generated pages that display Sandesh responses.
• An automatically generated page that shows all code needed for rendering and all HTTP server-client
interactions.

You can display the HTTP introspect of a Contrail daemon directly by accessing the Introspect ports, as listed in the slide.
Another way to launch the Introspect page is by browsing to a particular node page using the Contrail Web user interface (the
link is at the bottom of the page, when you navigate to a particular node in the Monitor > Infrastructure workspaces).
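Because each introspect page is served over plain HTTP, the same Sandesh requests can be scripted. The following is a
sketch, assuming a control node at the hypothetical address 10.10.10.231 and the Snh_ URL form used by the Sandesh
HTTP server:

curl -s http://10.10.10.231:8083/Snh_BgpNeighborReq | less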


Control Node Introspect


The slide shows how to browse to the Control Node Introspect page from the Contrail Web UI. Alternatively, the Introspect
page can be accessed directly using the control node IP address and port 8083.


View Control Node’s BGP Peers


After you click the bgp_peer.xml link on the Control Node Introspect page, you are presented with an options menu that
allows you to send different Sandesh requests. Select BgpNeighborReq and click the Send button (the
search_string parameter can be left empty). The resulting page, named BgpNeighborListResp, shows many
details about the BGP and XMPP peers of the control node.
Note that many Introspect pages require you to use vertical and horizontal scrolling to see all information.


vRouter Introspect
The Contrail vRouter Agent Introspect on a compute node can be accessed from the Infrastructure > Virtual
Routers workspace or directly via HTTP on port 8085. Various links accessible from here can be used to see the current
operational data in the vRouter. In particular:
• agent.xml shows agent operational data. In this, we can see the list of interfaces, VMs, VNs, VRFs, Next
hops, MPLS labels, Security groups, ACLs, Mirror configurations.
• controller.xml shows the current XMPP connections with the control node (XMPP server), the status and
stats on these connections.
• cpuinfo.xml gives the CPU load and memory usage on the compute node.
• ifmap_agent.xml shows the current configuration data received from IF-MAP.
• kstate.xml gives data configured in the vRouter (kernel) module.
• pkt.xml gives the current flow data in the agent.
• sandesh_trace.xml gives the module traces.
• services.xml provides stats and packet dumps for control packets like DHCP, DNS, ARP, ICMP.


View vRouter’s XMPP Connections


After you click the controller.xml link on the vRouter Introspect page, you are presented with a set of options to get
more detailed information. Click AgentXmppConnectionStatusReq to see a list of the vRouter's XMPP connections
and their details, such as last flap time, packet counters, and more.


Support URLs
The slide shows some helpful URLs to visit if you experience trouble with your Contrail setup.


We Discussed:
• The use of Contrail command-line interface (CLI) commands;
• The use of Fabric scripts;
• The use of OpenStack CLI commands;
• The use of vRouter commands; and
• The use of Contrail Introspect.


Review Questions
1.

2.

3.


Lab: Performing Analysis and Troubleshooting in Contrail


The slide lists the objectives of the lab.

Answers to Review Questions
1.
The contrail-logs --last 1h command displays Contrail system messages from the last hour.
2.
The service command allows you to stop, start, restart, or view the status of a given service.
3.
Before issuing OpenStack CLI commands, you must first authenticate using the source /etc/contrail/openstackrc
command.


Resources to Help You Learn More


The slide lists online resources available to learn more about Juniper Networks and technology. These resources include the
following sites:
• Pathfinder: An information experience hub that provides centralized product details.
• Content Explorer: Junos OS and ScreenOS software feature information to find the right software release and
hardware platform for your network.
• Feature Explorer: Technical documentation for Junos OS-based products by product, task, and software
release, and downloadable documentation PDFs.
• Learning Bytes: Concise tips and instructions on specific features and functions of Juniper technologies.
• Installation and configuration courses: Over 60 free Web-based training courses on product installation and
configuration (just choose eLearning under Delivery Modality).
• J-Net Forum: Training, certification, and career topics to discuss with your peers.
• Juniper Networks Certification Program: Complete details on the certification program, including tracks, exam
details, promotions, and how to get started.
• Technical courses: A complete list of instructor-led, hands-on courses and self-paced, eLearning courses.
• Translation tools: Several online translation tools to help simplify migration tasks.

Appendix A: Installation

We Will Discuss:
• Pre-installation tasks and roles;
• Contrail installation using Fabric scripts;
• Additional Contrail settings and operations;
• Contrail high availability solutions;
• Configuring simple virtual gateway; and
• Installation process using Server Manager.


Pre-Installation and Roles


The slide lists the topics we will discuss. We discuss the highlighted topic first.


Supported Platforms
Contrail Release 2.21 is supported on the OpenStack Juno and Icehouse releases. Juno is supported on Ubuntu 14.04.2
and Centos 7.1.
Contrail networking is supported on Red Hat RHOSP 5.0, which is supported only on OpenStack Icehouse.
Contrail Release 2.21 supports VMware vCenter 5.5. vCenter is limited to Ubuntu 14.04.2 (Linux kernel version:
3.13.0-40-generic).
Other supported platforms include:
• CentOS 6.5 (Linux kernel version: 2.6.32-358.el6.x86_64)
• CentOS 7.1 (Linux kernel version: 3.10.0-229.el7)
• Redhat 7/RHOSP 5.0 (Linux kernel version: 3.10.0-123.el7.x86_64)
• Ubuntu 12.04.04 (Ubuntu kernel version: 3.13.0-34-generic)
• Ubuntu 14.04.2 (Linux kernel version: 3.13.0-40-generic)
The current list of OpenStack releases is available at https://wiki.openstack.org/wiki/Releases
At the time of this writing, the Icehouse release is considered end-of-life (EOL).


Roles Within Contrail


The slide lists the Contrail node roles that are assigned during the installation process. In an operating installation, the
roles run on multiple servers. A single node can have multiple roles, and all roles can also run on a single server for testing
or demonstration purposes.
Additional roles that are used for Contrail Storage are storage-master and storage-compute.


Server Requirements
The minimum requirement for a proof-of-concept (POC) system is 3 servers, either physical or virtual machines. All
non-compute roles can be configured in each controller node. For scalability and availability reasons, it is highly
recommended to use physical servers. Each server must have a minimum of:
• 64 GB memory
• 300 GB hard drive
• 4 CPU cores
• At least one Ethernet port
For production environment, each server must have a minimum of:
• 256 GB memory
• 500 GB hard drive
• 16 CPU cores
The following are additional minimum hardware specifications needed for implementing the Contrail storage solution:
• Two 500 GB, 7200 RPM drives in the server 4 and server 5 cluster positions (those with the compute storage
role) in the Contrail installation. This configuration provides 1 TB of clustered, replicated storage.
Recommended compute storage configuration:
• For every 4-5 HDD devices on one compute storage node, use one SSD device to provide the OSD journals for
that set of HDD devices.


Installation Software
All components necessary for installing the Contrail Controller are available as:
• An RPM file (contrail-install-packages-1.xx-xxx.el6.noarch.rpm) that can be used to install
the Contrail system on an appropriate CentOS operating system.
• A Debian file (contrail-install-packages-1.xx-xxx~xxxxxx_all.deb) that can be used to install
the Contrail system on an appropriate Ubuntu operating system.
Versions are available for each Contrail release, for the supported Linux operating systems and versions, and for the
supported versions of OpenStack.
All installation images can be downloaded from http://www.juniper.net/support/downloads/?p=contrail#sw.
The Contrail image includes the following software:
• All dependent software packages needed to support installation and operation of OpenStack and Contrail
• Contrail Controller software – all components
• OpenStack release currently in use for Contrail
As noted on the slide, Server Manager is a separate package file. Contrail Storage is a separate package as well, and the
NFS VM file must also be downloaded from the same URL to support the Storage functionality.


Installation Using Fabric Scripts


The slide highlights the topic we discuss next.


Installation Stages
Install the stock CentOS or Ubuntu operating system image appropriate for your version of Contrail onto the server, then
install Contrail packages separately.
The following are general guidelines for installing the operating system and preparing to install Contrail.
1. Install a CentOS or Ubuntu minimal distribution as desired on all servers. Follow the published operating system
installation procedure for the selected operating system; refer to the website for the operating system.
2. After rebooting all of the servers after installation, verify that you can log in to each of them using the root
password defined during installation.
3. After the initial installations on all servers, configure some items specific to your systems, then begin the first
part of the Contrail installation. These steps are discussed on the following slides.
Root login over SSH using a password is disabled by default in modern Ubuntu distributions. To enable it, edit the
/etc/ssh/sshd_config file and replace the line
PermitRootLogin without-password with PermitRootLogin yes
This task can be accomplished with a text editor such as vi or nano, or by using the sed utility as follows:
sudo sed -i.bak 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
Finally, restart SSH daemon using sudo service ssh restart command.


Configuring System Settings: Part 1


After installing the base image on all servers used in the installation, and before running the role provisioning scripts,
perform the following steps to configure items specific to your environment:
Update /etc/resolv.conf with DNS name server information specific to your system.
Configure any required static hostname-to-IP mappings in the /etc/hosts file. If you are not using DNS, this step is required.
Update the configuration files with the hostname and domain information specific to your system, if these were not
configured during the initial installation (edit /etc/sysconfig/network for CentOS or /etc/hostname for Ubuntu).
As discussed in the Troubleshooting chapter, NTP configuration is highly recommended.


Configuring System Settings: Part 2


Next, configure the LAN port with network information specific to your system. Use ifconfig -a to determine which LAN
port you are using, because this might not be obvious on some systems due to the ways interfaces can be named.
The example interface configuration for Ubuntu is shown in the slide.
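As a sketch, with values mirroring the CentOS example that follows, the corresponding Ubuntu stanza in
/etc/network/interfaces would look similar to:

auto eth0
iface eth0 inet static
    address 10.10.10.230
    netmask 255.255.255.0
    gateway 10.10.10.1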
The example interface configuration for CentOS:
[root@openstack ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:0C:29:C3:49:2A
TYPE=Ethernet
UUID=9a248c10-397e-4b8d-a560-b73734dd431f
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.10.10.230
NETMASK=255.255.255.0
GATEWAY=10.10.10.1


Testbed Definitions File: Part 1


The next step is to populate the testbed definitions file located at
/opt/contrail/utils/fabfile/testbeds/testbed.py with parameters specific to your environment. The
/opt/contrail/utils/fabfile/testbeds directory contains example testbed files that you can use to build your
own testbed.py file. The two to look at are the testbed_singlebox_example.py and
testbed_multibox_example.py files. Depending on the type of environment you are setting up, simply make a copy of
one of those files and modify it.


Testbed Definitions File: Part 2


Use the following guidelines to set the parameters within the testbed.py file:
1. For a lab installation only, if you have less than 250 GB of disk space available, use the following string to allow
installation with a smaller disk space (this example sets 10 GB):
minimum_diskGB = 10

2. Provide host strings for the nodes in the cluster. Replace the addresses shown in the example with the actual IP
addresses of the hosts in your system:
host1 = 'root@10.10.10.230'
host2 = 'root@10.10.10.231'
host3 = 'root@10.10.10.233'
host4 = 'root@10.10.10.234'

3. Define external routers (such as Juniper MX series routers) to which the Control Nodes will be peered. For
example,
ext_routers = [(‘mx1’, ’10.10.10.240’), (‘mx2’, ’10.10.10.242’)]
If there are no external routers, you need to define:
ext_routers = []
4. Provide the BGP autonomous system number:
router_asn = 64512

5. Define the host on which the Fabric tasks will be invoked:
host_build = 'root@10.10.10.230'


Testbed Definitions File: Part 3


Continue using the following guidelines to define parameters within the testbed.py file:
6. In the env.roledefs section, define which hosts operate in which roles. The example shows our sample
cluster, with the collector, web UI, and database roles on a single node:
env.roledefs = {
'all': [host1, host2, host3, host4],
'cfgm': [host1],
'openstack': [host1],
'control': [host2],
'compute': [host3, host4],
'collector': [host1],
'webui': [host1],
'database': [host1],
'build': [host_build],
'storage-master': [host1],
'storage-compute': [host3, host4],
}
We are not using Contrail Storage here, so the storage-master and storage-compute roles can be omitted.
7. In the env.hostnames section, correlate the cluster's hostnames with the host definitions created previously:
env.hostnames = {
'host1': ['openstack'],
'host2': ['control'],
'host3': ['compute-1'],
'host4': ['compute-2'],
}


Testbed Definitions File: Part 4


Continue using the following guidelines to define parameters within the testbed.py file:
8. In the env.openstack_admin_password and env.passwords sections, define the password
credentials for each host. Again, the example shows our sample cluster:
env.openstack_admin_password = 'contrail'
env.passwords = {
host1: 'contrail',
host2: 'contrail',
host3: 'contrail',
host4: 'contrail',
host_build: 'contrail',
}

9. Provide OS types for each host in the env.ostypes section:
env.ostypes = {
host1: 'ubuntu',
host2: 'ubuntu',
host3: 'ubuntu',
host4: 'ubuntu',
}


Installing the Contrail Packages: Part 1


The Contrail installation steps are as follows.
1. Copy the downloaded Contrail install packages file to /tmp/ on the first server of your system installation. This
can be accomplished with a command issued on a remote server, similar to:
scp contrail-install-packages_2.21-102-ubuntu-14-04juno_all.deb root@10.10.10.230:/tmp
2. Install the Contrail packages. For CentOS:
yum localinstall /tmp/contrail-install-packages-1.xx-xxx~openstack_version.el6.noarch.rpm
For Ubuntu:
dpkg -i /tmp/contrail-install-packages-1.xx-xxx~openstack_version_all.deb

3. Run the setup.sh script. This step will create the Contrail packages repository as well as the Fabric utilities
(located in /opt/contrail/utils) needed for provisioning:
cd /opt/contrail/contrail_packages
./setup.sh

4. Create the testbed.py file as described previously and put it into the
/opt/contrail/utils/fabfile/testbeds directory.
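Putting steps 2 and 3 together on Ubuntu, with the concrete package name used earlier in this appendix, the sequence
would look like this:
dpkg -i /tmp/contrail-install-packages_2.21-102-ubuntu-14-04juno_all.deb
cd /opt/contrail/contrail_packages
./setup.sh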


Installing the Contrail Packages: Part 2


At this point, it is assumed that:
• All of the servers are time synchronized.
• All servers can ping one another, both on the management network and on the control and data network, if that
network is part of the system.
• All servers can ssh and scp to one another.
• All host names are resolvable.
• If using CentOS or RHEL, SELinux has been disabled (/etc/sysconfig/selinux).
Now you need to install the Contrail packages on the remaining nodes and set everything up. It is important to remember
that fab commands are always run from the /opt/contrail/utils/ directory.
Perform the following steps:
1. Install packages on all nodes using the following commands (the example commands assume Ubuntu and
OpenStack Juno; use a different package name if that does not match your environment):
cd /opt/contrail/utils/
fab install_pkg_all:/tmp/contrail-install-packages_2.21-102-ubuntu-14-04juno_all.deb

2. The recommended kernel version for an Ubuntu-based system is 3.13.0-40. Nodes can be upgraded to kernel
version 3.13.0-40 using the following fab command:
fab upgrade_kernel_all
This step upgrades the kernel on all nodes and reboots them. Reconnect before performing the remaining tasks.

3. Install the required Contrail packages on each node of the cluster:
fab install_contrail
Before the next step, and only if your installation has multiple interfaces, run setup_interface:
fab setup_interface
See the Contrail documentation for more details on multiple interface support and testbed.py example
configurations.
4. Provision the entire cluster:
fab setup_all
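After setup_all completes, a quick way to verify the result is to run the contrail-status command on each node.
This verification step is our suggestion rather than part of the original procedure; the exact list of services shown depends
on the roles assigned to the node and on the release:
contrail-status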


Additional Settings and Operations


The slide highlights the topic we discuss next.


Adding a Compute Node


Use the following procedure to add one or more new compute nodes to an existing Contrail cluster.
1. Install Linux on the new node.
2. Perform the initial system settings (hostname, and so on, as described in the previous slides).
3. Add the information about the new compute node(s) to your existing testbed.py file.
4. Copy the contrail-install-packages file for CentOS or Ubuntu to the /tmp directory of the cfgm node where the
fab commands are triggered.
5. For an Ubuntu 12.04.4 or 12.04.3 server with a kernel version older than 3.13.0-34, upgrade the kernel by using
the following fab command:
cd /opt/contrail/utils; fab upgrade_kernel_node:root@x.x.x.x
where x.x.x.x should be replaced with the server's actual IP address.
6. Install the contrail-install-packages on the new compute node (or nodes):
CentOS: fab install_pkg_node:/tmp/contrail-install-packages_x.xx-xx.xxx.noarch.rpm,root@x.x.x.x
Ubuntu: fab install_pkg_node:/tmp/contrail-install-packages_x.xx-xx~havana_all.deb,root@x.x.x.x
7. Use fab commands to add the new compute node (or nodes):
fab add_vrouter_node:root@x.x.x.x
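For example, adding a single Ubuntu compute node at 10.10.10.235 with the package name used earlier in this appendix
(the new node's address is an assumption for illustration):
cd /opt/contrail/utils
fab install_pkg_node:/tmp/contrail-install-packages_2.21-102-ubuntu-14-04juno_all.deb,root@10.10.10.235
fab add_vrouter_node:root@10.10.10.235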


Removing a Compute Node


Use the following procedure to remove one or more compute nodes from an existing Contrail cluster.
1. Use the following fab command to detach the compute node:
fab detach_vrouter_node:root@x.x.x.x
Replace x.x.x.x with the correct IP address for the node or nodes that you are removing.
2. Remove the information about the detached compute node from the existing testbed.py file.


Installing Contrail on VMware ESXi


Typically, the servers running Contrail components use the Linux operating system and a KVM hypervisor.
As of Contrail Release 2.0 and greater, limited capability is provided for extending the Contrail compute node functionality to
servers running the VMware ESXi virtualization platform. To run Contrail on ESXi, a virtual machine is spawned on a physical
ESXi server, and a compute node is configured on the virtual machine. For Contrail on ESXi, only compute node functionality
is provided at this time.
There are two methods for configuring and provisioning nodes in a Contrail cluster: using the Contrail Server Manager to
automate provisioning or using fab (Fabric) commands. Both methods can also be used to configure Contrail compute nodes
on VMware ESXi.


Install Procedure
The slide presents details on the initial installation and configuration of an ESXi host as a compute node in a Contrail
system using Fabric scripts. On the right, an example testbed.py configuration (only the part relevant to ESXi) is shown.
More details on setup and performance of the solution can be found in the Contrail technical documentation at www.juniper.net
and in the OpenContrail blog at http://www.opencontrail.org/integrating-vmware-esxi-with-openstack-opencontrail/.
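Because the slide itself is not reproduced here, the following sketch shows the general shape of the ESXi-related stanza in
testbed.py. It is based on the format documented for Contrail 2.2x; the host names, addresses, and field values are
illustrative assumptions and should be checked against the documentation for your release:
esxi_hosts = {
    'esxi-1': {
        'ip': '10.10.10.50',              # ESXi host management IP (assumed)
        'username': 'root',
        'password': '<esxi-password>',
        'fabric_vswitch': 'vSwitch0',
        'fabric_port_group': 'contrail-fab-pg',
        'uplink_nic': 'vmnic0',
        'contrail_vm': {                  # the VM that runs the compute node
            'mac': '00:50:56:aa:bb:cc',
            'host': 'root@10.10.10.51',   # must match a host entry in testbed.py
            'vmdk': '/tmp/ContrailVM-disk1.vmdk',
        },
    },
}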


Adding a Control Node


The process of adding a control node is similar to adding a compute node; however, the fab commands to run are different.
The whole process is described in the slide.
The example commands, with a particular package name and the IP address of the new node, are:
fab install_pkg_node:/tmp/contrail-install-packages_2.21-102-ubuntu-14-04juno_all.deb,root@10.10.10.232
fab create_install_repo_node:root@10.10.10.232
fab install_control_node:root@10.10.10.232
fab setup_control_node:root@10.10.10.232
The kernel on the new node can also be upgraded, if needed:
fab upgrade_kernel_node:root@10.10.10.232
Finally, to make the new node operational, you need to add the new control node in the Configure > Infrastructure >
BGP Routers workspace of the Contrail Web UI.
To uninstall a control node, perform the following steps:
Remove the control node in the Configure > Infrastructure > BGP Routers workspace of the Contrail Web UI.
Issue the fab uninstall_control_node command, for example:
fab uninstall_control_node:root@10.10.10.232
Remove the detached control node from the testbed.py file.


Configuring BGP MD5 Authentication


Contrail implements MD5 authentication for BGP peering based on RFC 2385. The primary motivation for this option is to
allow BGP to protect itself against the introduction of spoofed TCP segments into the connection stream. Both of the BGP
peers must be configured with the same MD5 key. Once configured, each BGP peer adds a 16-byte MD5 digest to the TCP
header of every segment that it sends. This digest is produced by applying the MD5 algorithm on various parts of the TCP
segment. Upon receiving a signed segment, the receiver validates it by calculating its own digest from the same data (using
its own key) and compares the two digests. For valid segments, the comparison is successful since both sides know the key.
There are three ways to enable BGP MD5 authentication and set the keys on the Contrail node.
1. Configure MD5 authentication for a BGP peer by using an environment dictionary. Before provisioning the node,
include an environment dictionary in the testbed.py file as shown. In this example, juniper is the MD5 key
that is configured on the host1 and host2 nodes.
env.md5 = {
host1: 'juniper',
host2: 'juniper',
}

2. Alternatively, if the MD5 key is not included in the testbed.py file and the node is already provisioned, you can
run the following script with an argument for md5:
root@<your_node>:/opt/contrail/utils# python provision_control.py --host_name
<host_name> --host_ip <host_ip> --router_asn <asn> --api_server_ip <api_ip>
--api_server_port <api_port> --oper add --md5 "juniper" --admin_user admin
--admin_password contrail123 --admin_tenant_name admin

3. Another alternative is to use the web user interface. Connect to the node’s IP address at port 8080 and select
Configure > Infrastructure > BGP Routers. The list of BGP peers is displayed. For a BGP peer,
click on the gear icon on the right hand side of the peer entry. Then click Edit. This displays the Edit BGP
Router dialog box. Scroll down the window and select Advanced Options. Configure the MD5
authentication by selecting MD5 Authentication Mode and entering the Authentication Key value.


Upgrading Contrail Software: Part 1


Use the following procedure to upgrade an installation of Contrail software from one release to a more recent release. This
procedure is valid for upgrades starting from Contrail Release 2.00 and greater.
Instructions are given for both CentOS and Ubuntu versions. The only Ubuntu versions supported for upgrading are Ubuntu
12.04 and 14.04.2.
To upgrade Contrail software from Contrail Release 2.00 or greater:
1. Download the file contrail-install-packages-x.xx-xxx.xxx.noarch.rpm (for CentOS) or
contrail-install-packages_x.xx-xxx~_all.deb (for Ubuntu) from
http://www.juniper.net/support/downloads/?p=contrail#sw
and copy it to the /tmp directory on the config node.
2. Install the contrail-install-packages, using the correct command for your operating system:
CentOS: yum localinstall /tmp/contrail-install-packages-x.xx-xxx.xxx.noarch.rpm
Ubuntu: dpkg -i /tmp/contrail-install-packages_x.xx-xxx~_all.deb
3. Set up the local repository by running the setup.sh:
cd /opt/contrail/contrail_packages
./setup.sh

4. Ensure that the testbed.py file that was used to set up the cluster with Contrail is intact at
/opt/contrail/utils/fabfile/testbeds/. Ensure that testbed.py has been set up with a combined
control_data section (required as of Contrail Release 1.10). Ensure that the do_parallel flag is set to
True in the testbed.py file; see bug 1426522 at
https://bugs.launchpad.net/juniperopenstack/+bug/1426522.


Upgrading Contrail Software: Part 2


Upgrade Procedure (continued):
5. Change directory to the utils folder:
cd /opt/contrail/utils
Select the correct upgrade procedure from the following to match your operating system and vRouter. In the
following, <from> refers to the currently installed release number, such as 2.0, 2.01, 2.1:
CentOS Upgrade Procedure:
fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx.xxx.noarch.rpm
Ubuntu 12.04 Upgrade Procedure:
fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx~icehouse_all.deb
Ubuntu 14.04 Upgrade, Two Procedures: There are two different procedures for upgrading an Ubuntu 14.04 system
to Contrail Release 2.20, depending on which vRouter (contrail-vrouter-3.13.0-35-generic or
contrail-vrouter-dkms) is installed in your current setup. As of Contrail Release 2.20, the recommended kernel
version for an Ubuntu 14.04-based system is 3.13.0-40. Both procedures can use the command fab
upgrade_kernel_all to upgrade the kernel.
Ubuntu 14.04 Upgrade Procedure For a System with contrail-vrouter-3.13.0-35-generic:
The command sequence upgrades the kernel version and also reboots the compute nodes when finished.
fab install_pkg_all:/tmp/contrail-install-packages-x.xx-xxx_all.deb;
fab migrate_compute_kernel;
fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx_all.deb;
fab upgrade_kernel_all;
fab restart_openstack_compute;
Ubuntu 14.04 Upgrade Procedure For a System with contrail-vrouter-dkms:
The command sequence upgrades the kernel version and also reboots the compute nodes when finished.
fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx_all.deb;
All nodes in the cluster can be upgraded to kernel version 3.13.0-40 by using the following fab command:
fab upgrade_kernel_all


Upgrading Contrail Software: Part 3


Upgrade Procedure (continued):
6. On the OpenStack node, soft reboot all of the virtual machines. You can do this in the OpenStack dashboard, or
log in to the node that has the openstack role and issue the following commands:
source /etc/contrail/openstackrc ; nova reboot <vm-name>
You can also use the following fab command to reboot all virtual machines:
fab reboot_vm

7. Check to ensure that the nova-novncproxy service is still running:
service nova-novncproxy status
If necessary, restart the service:
service nova-novncproxy restart

8. (For the Contrail Storage option only.) Contrail Storage has its own packages. To upgrade Contrail Storage,
download the file contrail-storage-packages_x.x-xx*.deb from
http://www.juniper.net/support/downloads/?p=contrail#sw and copy it to the /tmp directory on the config node.
Use the following statements to upgrade the software:
cd /opt/contrail/utils;
fab upgrade_storage:<from>,/tmp/contrail-storage-packages_2.0-22~icehouse_all.deb;

When upgrading to Contrail Release 2.10, add the following steps if you have live migration configured.
Upgrades to Release 2.0 do not require these steps. Select the command that matches your live migration
configuration:
fab setup_nfs_livem
or
fab setup_nfs_livem_global


High Availability
The slide highlights the topic we discuss next.


Contrail and OpenStack High Availability


In Ubuntu setups, OpenStack high availability and Contrail high availability are both supported, for Contrail Release 1.10 and
greater. In CentOS setups, only Contrail high availability is supported, and only for Contrail Release 1.20 and greater.
Contrail has high availability built into various components, including support for the active-active model of high
availability, which works by deploying each Contrail node component with the required level of redundancy.
The Contrail control node runs BGP and maintains adjacency with the vRouter module in the compute nodes. Additionally,
every vRouter maintains a connection with all available control nodes.
Contrail uses Cassandra as the database. Cassandra inherently supports fault tolerance and replicates data across the
nodes participating in the cluster. A highly available deployment of Contrail requires at least two control nodes, three config
nodes (including analytics and webui) and three database nodes.
High availability of OpenStack is supported by deploying the OpenStack controller nodes in a redundant manner on multiple
nodes. As of Contrail Release 1.10, the existing Active-Active high availability solution has been extended to include the
OpenStack controller as well.
A minimum of three servers (physical or virtual machines) is required to deploy a highly available Juniper OpenStack controller.
In active-active mode, the controller cluster uses quorum-based consistency management to guarantee transaction
integrity across its distributed nodes. This translates to a requirement of deploying 2n+1 nodes to tolerate n failures.


Supported Cluster Topologies for High Availability


The following configurations for the cluster topologies are supported:
• OpenStack and Contrail on the same highly available nodes
• OpenStack and Contrail on different highly available nodes
• Contrail only on highly available nodes
The following lists some known limitations and configuration guidelines that apply when you add new nodes or roles to an
existing cluster with high availability enabled:
• Adding a new node to an existing high availability cluster is only supported in Contrail Release 2.21 and later.
• Converting a single node or a cluster that does not have high availability enabled to a cluster that does have high
availability enabled is not supported.
• New nodes must be appended at the end of the existing lists of nodes in testbed.py.
• We recommend that you maintain a cluster with an odd number of controllers, because high availability is based
on a quorum and, to support n failures, (2n + 1) nodes are required.
• A new node must run the same release as the other nodes in the cluster.
• You need to use the nodetool cleanup command after a new node joins the Cassandra cluster. You can
safely schedule this for low-usage hours to prevent disruption of cluster operation.
• When deleting a node from an existing cluster, the remaining cluster must stay operational and meet the high
availability cluster requirements; if it does not, purging the node is not allowed.


High Availability Solution Components


Additional high availability support is brought into the Contrail OpenStack deployment with new components, which are
packaged in a new package called contrail-openstack-ha. It primarily contains HAProxy, Keepalived, Galera, and their
requisite dependencies.
HAProxy is run on all nodes in order to load balance the connections across multiple instances of the services. To provide a
Virtual IP (VIP), Keepalived (open source health check framework and hot standby protocol) runs and elects a master based
on Virtual Router Redundancy Protocol (VRRP). The VRRP master owns the VIP. If the master node fails, the VIP moves to a
new master elected by VRRP.
When an instance of a service fails, HAProxy detects the failure and load-balances any subsequent requests across other
active instances of the service. The supervisord process monitors for service failures and brings up the failed instances. As
long as there is one instance of a service operational, the Juniper OpenStack controller continues to operate. This is true for
both stateful and stateless services across Contrail and OpenStack.
The Juniper OpenStack controller supports single node failures involving both graceful shutdown or reboots and ungraceful
power failures. When a node that is the VIP master fails, the VIP moves to the next active node as it is elected to be the VRRP
master. HAProxy on the new VIP master distributes the connections to the active service instances as before, while the failed
node is brought back online. Stateful services (MySQL/Galera, Zookeeper, and so on) require a quorum to be maintained
when a node fails. As long as a quorum is maintained, the controller cluster continues to work without problems. Data
integrity is also inherently preserved by Galera, Rabbit, and other stateful components in use.
A connectivity break in the control/data network causes the controller cluster to partition into two. As long as the caveat
around the minimum number of nodes is maintained for one of the partitions, the controller cluster continues to work. Stateful
services, including MySQL Galera and RabbitMQ, detect the partitioning and reorganize their clusters around the reachable
nodes. Existing workloads continue to function and pass traffic, and new workloads can be provisioned. When the
connectivity is restored, the joining node becomes part of the working cluster and the system is restored to its original
state.
Installation is supported through Fabric (fab) scripts. Externally facing, there is very little change, mostly to incorporate multiple
OpenStack roles and VIP configuration. The testbed.py file has new sections to incorporate external and internal VIPs. If
OpenStack and Contrail roles are co-located on the nodes, only one set of external and internal VIPs is required.
The installation also supports separating OpenStack and Contrail roles onto physically different servers. In this case, the
external and internal VIPs specified are used for the OpenStack controller, and a separate set of VIPs, contrail_external_vip
and contrail_internal_vip, is used for the Contrail controller nodes. It is also possible to specify separate RabbitMQ setups
for the OpenStack and Contrail controllers.
When multiple OpenStack roles are specified along with VIPs, the install-contrail target treats the installation as a
high availability install and additionally adds the new contrail-openstack-ha package.
The following are options available to configure high availability within the Contrail configuration file (testbed.py).
• internal_vip: the virtual IP of the OpenStack high availability nodes in the control data network. In a single
interface setup, the internal_vip will be in the management data control network.
• external_vip: the virtual IP of the OpenStack high availability nodes in the management network. In a
single interface setup, the external_vip is not required.
• contrail_internal_vip: The virtual IP of the Contrail high availability nodes in the control data network.
In a single interface setup, the contrail_internal_vip will be in the management data control network.
• contrail_external_vip: The virtual IP of the Contrail high availability nodes in the management network.
In a single interface setup, the contrail_external_vip is not required.
• nfs_server: The IP address of the NFS server that will be mounted to /var/lib/glance/images on the
OpenStack node. The default is env.roledefs['compute'][0].
• nfs_glance_path: The NFS server path used to save images. The default is /var/tmp/glance-images/.
• manage_amqp: A flag that tells the setup_all task to provision separate RabbitMQ setups for OpenStack
services on the OpenStack nodes.


HA Deployment Example
This slide presents an example of how OpenStack and Contrail can be deployed on the same set of HA servers.
OpenStack and Contrail services can be deployed in the same set of highly available nodes by setting the internal_vip
parameter in the env.ha dictionary of the testbed.py. Because the highly available nodes are shared by both
OpenStack and Contrail services, it is sufficient to specify only internal_vip. However, if the nodes have multiple
interfaces with management and data control traffic separated by provisioning multiple interfaces, then the
external_vip also needs to be set in the testbed.py. For example:
env.ha = {
    'internal_vip' : 'an-ip-in-control-data-network',
    'external_vip' : 'an-ip-in-management-network',
}

Below are some details on the software modules used in the setup:
• Apache ZooKeeper is a centralized service for maintaining configuration information, naming, providing
distributed synchronization, and providing group services.
• RabbitMQ is an open source message broker software (sometimes called message-oriented middleware) that
implements the Advanced Message Queuing Protocol (AMQP).
• Galera Cluster for MySQL is a Multimaster Cluster based on synchronous replication.


Using Headless vRouter to Improve Redundancy


The Contrail vRouter agent downloads routes, configurations, and the multicast tree from the control node. For redundancy,
the vRouter agent connects to two control nodes. However, in some circumstances the vRouter agent can lose its
connections to both control nodes; in this case, the information provided by the control nodes can get flushed. If the
vRouter agent loses its connections to the control nodes, the multicast and unicast routes are flushed immediately;
however, the configuration agent waits a while for a control node to come up, and after that time the configuration
information gets flushed.
When the headless vRouter feature is enabled, if the agent's connection to the control nodes is lost, the last known
information about routes, configurations, and the multicast tree is retained and marked as stale entries. In the meantime,
the system can remain in a working state. When the control node comes up again, the agent waits a while to flush out the
stale entries, and the newly-connected control node sends information about the routes, configurations, and multicast tree
to the agent. If the control node is in an unstable state, coming up and going down again, the agent retains the stale
information until the control node becomes stable again.
By default, the vRouter agent runs in non-headless mode. Add the following line to the DEFAULT section of
contrail-vrouter-agent.conf to enable the headless vRouter:
headless_mode=true
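In context, with the section header included, the relevant portion of /etc/contrail/contrail-vrouter-agent.conf looks like
the following sketch (restarting the vRouter agent afterward, so that the setting takes effect, is an assumption based on
usual practice rather than a step from the original text):
[DEFAULT]
headless_mode=true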


HA Solution for Contrail ToR Agent


Starting with Contrail Release 2.20, high availability can be configured for the Contrail ToR agent. When a top-of-rack (ToR)
switch is managed through the Open vSwitch Database Management (OVSDB) Protocol by using a ToR agent on Contrail, a
high availability configuration is necessary to maintain ToR agent redundancy. With ToR agent redundancy, if the ToR agent
responsible for a ToR switch is unable to act as the vRouter agent for the ToR switch, due to any failure condition in the
network or the node, then another ToR agent takes over and manages the ToR switch.
ToR agent redundancy (high availability) for Contrail Release 2.20 and greater is achieved using HAProxy. HAProxy is an open
source, reliable solution that offers high availability and proxy service for TCP applications. The solution uses HAProxy to
initiate an SSL connection from the ToR switch to the ToR agent. This configuration ensures that the ToR switch is connected
to exactly one active ToR agent at any given point in time.
The ToR switch connects to the HAProxy that is configured to use one of the ToR agents on the two ToR services nodes
(TSNs). An SSL connection is established from the ToR switch to the ToR agent, making that agent the active ToR agent. The
active ToR agent is responsible for managing the OVSDB on the ToR switch. It configures the OVSDB tables based on the
configuration. It advertises the MAC routes learnt on the ToR switch as Ethernet VPN (EVPN) routes to the Contrail controller.
It also programs any routes learned by means of EVPN over XMPP, southbound into OVSDB on the ToR switch.
Both the ToR agents, active and standby, receive the same configuration from the control node, and all routes are
synchronized. The active ToR agent also advertises the multicast route (ff:ff:ff:ff:ff:ff) to the ToR switch, ensuring that there is
only one multicast route in OVSDB pointing to the active TSN.
After the SSL connection is established, keepalive messages are exchanged between the ToR switch and the ToR agent. The
messages can be sent from either end and are responded to from the other end. When any message exchange is seen on
the connection, the keepalive message is skipped for that interval. When the ToR switch sees that keepalive has failed, it
closes the current SSL session and attempts to reconnect. When the ToR agent side sees that keepalive has failed, it closes
the SSL session and retracts the routes it exported to the control node.
In a high availability configuration, multiple HAProxy nodes are configured, with Virtual Router Redundancy Protocol (VRRP)
running between them. The ToR agents are configured to use the virtual IP address of the HAProxy nodes to make the SSL
connection to the controller. The active TCP connections go to the virtual IP master node, which proxies them to the chosen
ToR agent. A ToR agent is chosen based on the number of connections from HAProxy to that node (the node with the lower
number of connections gets the new connection), and this can be controlled through the HAProxy configuration.
If the HAProxy node fails, a standby node becomes the virtual IP master and sets up the connections to the ToR agents. The
SSL connections are reestablished following the same methods discussed earlier.
To get the required configuration downloaded from the control node to the TSN agent and to the ToR agent, the physical
router node must be linked to the virtual router nodes that represent the two ToR agents and the two TSNs. The Contrail Web
user interface can be used to configure this. Go to Configure > Physical Devices > Physical Routers and
create an entry for the ToR switch, providing the ToR switch IP address and the virtual tunnel endpoint (VTEP) address. The
router name should match the hostname of the ToR switch. Both ToR agents and their respective TSN nodes can be
configured here.
The same testbed configuration used for provisioning the TSN and ToR agents is used to provision high availability. The
redundant ToR agents must have the same tor_name and tor_ovs_port in their respective stanzas for them to be
considered a pair.


Configuring Simple Virtual Gateway


The slide highlights the topic we discuss next.


Simple Virtual Gateway


Every virtual network has a routing instance associated with it. The routing instance defines the network connectivity for the
virtual machines in the virtual network. By default, the routing instance contains routes only for virtual machines spawned
within the virtual network. Connectivity between virtual networks is controlled by defining network policies.
The public network is the IP fabric or the external networks across the IP fabric. The virtual networks do not have access to
the public network, and a gateway is used to provide connectivity to the public network from a virtual network. In traditional
deployments, a routing device such as a Juniper Networks MX Series router can act as a gateway.
The simple virtual gateway for Contrail is a restricted implementation of a gateway that can be used for experimental
purposes. The simple gateway provides the Contrail virtual networks with access to the public network, and is represented as
vgw.
The following are restrictions of the simple gateway.
• A single compute node can have the simple gateway configured for multiple virtual networks; however, the
subnets must not overlap. The host OS does not support routing instances, so all gateway interfaces in the
host OS are in the same routing instance; consequently, the subnets in the virtual networks must not overlap.
• Each virtual network can have a single simple gateway interface. ECMP is not supported.


Compute Node with Simple Gateway


The slide describes routing between a VM and a physical server in a virtual gateway scenario. The packets are routed through
the host OS (Linux) networking stack using the vgw and vhost0 interfaces.
An IP address is not configured on the gateway interface vgw. The actual interface naming is vgw0, vgw1, vgw2, and so on.
A single compute node can have several vgw interfaces configured.


Simple Virtual Gateway Configuration


There are four ways to perform the simple virtual gateway configuration:
• You can edit the vRouter configuration file, /etc/contrail/contrail-vrouter-agent.conf. This method requires a
restart of the vRouter to apply the configuration.
• You can provision the simple virtual gateway (vgw) during system provisioning with fab commands by enabling
the vgw knob in the Contrail testbed.py file (see the sketch after this list). Select some or all of the compute
nodes to be configured as vgw by identifying vgw roles in the env.roledefs section, along with the other role definitions.
• Another way to configure the simple gateway is to dynamically send create and delete thrift messages to the
vRouter agent. To dynamically create a simple virtual gateway, you run a script on the compute node where the
virtual gateway will be created. The location of the script is shown in the slide.
• Yet another way to configure the simple gateway is to set configuration parameters in the DevStack localrc file.
DevStack is a tool to quickly create an OpenStack development environment. The following parameters are
available: CONTRAIL_VGW_PUBLIC_NETWORK, CONTRAIL_VGW_PUBLIC_SUBNET,
CONTRAIL_VGW_INTERFACE. This method can only add the default route 0.0.0.0/0 into the routing instance
specified in CONTRAIL_VGW_PUBLIC_NETWORK.
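As an illustration of the second method, the vgw-related stanzas in testbed.py might look like the following sketch. It is
based on the format documented for Contrail 2.x fab provisioning; the host, virtual network name, and subnet are
illustrative assumptions:
env.roledefs = {
    ...
    'vgw': [host1],   # compute nodes that also act as simple gateways
}
env.vgw = {
    host1: {
        'vgw0': {
            'vn': 'default-domain:demo:public:public',   # assumed virtual network
            'ipam-subnets': ['10.10.20.0/24'],           # assumed subnet
        },
    },
}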


Implementing Virtual Gateway


In this example, we use the provision_vgw_interface.py script to implement a virtual gateway. This method has an
advantage over the other methods because it is dynamic: vgw interfaces can be created and deleted at any time.
The following command creates the vgw1 interface in our example:
python /opt/contrail/utils/provision_vgw_interface.py --oper create --interface vgw1
--subnets 10.0.0.0/24 --routes 10.10.10.0/24 --vrf default-domain:demo:VN-A:VN-A
Here, you use the subnets option to specify the subnets defined for virtual network VN-A (which has the corresponding VRF
default-domain:demo:VN-A:VN-A). You use the routes option to specify the routes in the public network that are
injected into VN-A. If you need to specify several subnets or routes, separate them with spaces.
The following command removes the vgw1 interface:
python /opt/contrail/utils/provision_vgw_interface.py --oper delete --interface vgw1
--subnets 10.0.0.0/24 --routes 10.10.10.0/24


Testing Virtual Gateway


The slide highlights the importance of correct routing between the fabric network and the VRF when using the virtual gateway.
To reach VMs in private networks, servers in the fabric network need the correct routes; the next hop of such a route is the IP
address of the compute node that hosts the gateway.
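For example, continuing the addressing from the previous slide, a server in the fabric network could reach the 10.0.0.0/24
virtual network through a compute node at 10.10.10.230 with a static route like the following (the compute node address
is an assumption for illustration):
ip route add 10.0.0.0/24 via 10.10.10.230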
If all routes are in place and traffic still does not pass, check whether a firewall (such as iptables) is dropping it.


Installation Using Server Manager


The slide highlights the topic we discuss next.


Contrail Server Manager


Starting with Contrail Release 1.10, the Contrail Server Manager can be used to provision, configure, and reimage a Contrail
virtual network system of servers, clusters, and nodes. Server Manager is an alternative to using Fabric commands to
provision a Contrail system.
The server manager provides a simple, centralized way for users to manage and configure components of a virtual network
system running across multiple physical and virtual servers in a cloud infrastructure. You can use the server manager to
configure, provision, and reimage servers with the correct software version and packages for the nodes that are running on
each server in multiple virtual network system clusters.
The following requirements are assumed for the server manager:
• The server manager runs on a Linux server (bare metal or virtual machine) and assumes availability of several
software products with which it interacts to provide the functionality of managing servers.
• The server manager has network connectivity to the servers it is trying to manage.
• The server manager has access to a remote power management tool to power cycle the servers that it
manages.
• The server manager uses Cobbler software for Linux provisioning to configure and download software to
physical servers. Cobbler resides on the same server that is running the server manager daemon.
• The server manager uses Puppet software, an open source configuration management tool, to accomplish the
configuration management of target servers, including the installation and configuration of different software
packages and the launching of various services.
• SQLite3 database management software is used to maintain and manage server configurations and it runs on
the same machine where the server manager daemon is running.
Server Manager can be installed on the following platform operating systems:
• Ubuntu 12.04.3
• Ubuntu 14.04
• Ubuntu 14.04.1
Server Manager can be used to reimage and provision the following target platform operating systems:
• Ubuntu 12.04.3
• Ubuntu 14.04
• Ubuntu 14.04.1
• VMware ESXi 5.5


Server Manager Installation


Contrail Server Manager can only be installed on Ubuntu Linux. The steps to perform are the following:
• Install the supported Ubuntu Linux version (minimal distribution).
• Perform the initial system settings (such as IP address, hostname, and DNS).
• Check that the domain name is correctly configured. Puppet Master requires the fully qualified domain
name (FQDN) of the server manager for key generation. The domain name is taken from /etc/hosts. If the
server is part of multiple domains, specify the domain name by using the --domain option during the
installation.
Here is an example of the domain configuration in the /etc/hosts file:
127.0.1.1 server-manager.englab.juniper.net server-manager
Here, the configured domain is englab.juniper.net and the hostname is server-manager. You can check
the host and domain name with the hostname -f command.
• Copy the Server Manager package to the local host.
• Ensure that Internet access is available during installation; it is required to fetch dependent packages.
• Install the package and perform the setup using the following commands:
dpkg -i contrail-server-manager-installer_2.21-102-ubuntu-14-04_all.deb
cd /opt/contrail/contrail_server_manager
./setup.sh --all

• The Server Manager service does not start automatically upon successful installation. The user must finish the
provisioning by modifying the following templates to match the actual environment. Refer to the Contrail
documentation for details on configuring these files:
/etc/cobbler/dhcp.template
/etc/cobbler/named.template
/etc/bind/named.conf.options
/etc/cobbler/settings
/etc/cobbler/modules.conf
/etc/sendmail.cf

• Finally, start Server Manager using the service contrail-server-manager start command.


Clusters Workspace
The server manager user interface can be accessed using:
http://<server-manager-user-interface-ip>:8080
From the Contrail user interface, select Setting > Server Manager to access the Server Manager home page. From
this page you can manage server manager settings for clusters, servers, images, and packages.
On the Clusters page that appears when you select Setting > Server Manager > Clusters, click the plus (+) icon in the
upper right of the Clusters page. The Add Cluster window appears, where you can add a new cluster ID and the domain
email address of the cluster.


OS Images Workspace
To add a new OS image, on the OS Images Workspace page, click the plus (+) icon in the upper right header to open the
Add OS Image dialog box. Enter the information for the new image and click Save to add the new item to the list of
configured items.


Packages Workspace
To add a new package, on the Packages Workspace page, click the plus (+) icon in the upper right header to open the Add
Package dialog box. Enter the information for the new package and click Save to add the new item to the list of configured
items.


Servers Workspace
Click on the Servers link in the left sidebar at Setting > Server Manager to view a list of all servers.
To add a new server, from Setting > Server Manager > Servers, click the plus (+) icon at the upper right side in
the header line. The Add Server window pops up.


Add Server Window


The Add Server Window is shown in the slide. Enter details for a new server, such as Server ID, Hostname, root password,
etc. When finished, click Save to add the new server configuration to the list of servers at Setting > Server Manager
> Servers.
You can change details of the new server by clicking the cog icon on the right side to get a list of available actions, including
Edit Config, Edit Tags, Reimage, Provision, and Delete.


Assign Roles Window


When servers are added to Server Manager, the required step before provisioning is to assign them roles in the Contrail system.
Navigate to Setting > Server Manager > Clusters, click the right-side cog icon of the cluster, then select Assign
Roles from the action list. Select the roles for the server(s) and click Save.
The slide shows an example where all roles are assigned to server1 (all-in-one setup).


Reimage and Provision


Once all aspects of a cluster are configured, you can select actions for the server manager to perform on the cluster, such as
Reimage or Provision.
To reimage the servers, from the Clusters page (Setting > Server Manager > Clusters), click the right side
cog icon of the cluster to be reimaged, then select Reimage from the action list. The Reimage dialog box pops up. Ensure
that the correct image for reimaging is selected in the Configured OS Image field, then click Save to initiate the
reimage action.
The process to provision a cluster is similar to the process to reimage a cluster. From the Clusters page (Setting >
Server Manager > Clusters), click the right side cog icon of the cluster to be provisioned, then select Provision
from the action list. The Provision Cluster dialog box pops up. Ensure that the correct package for provisioning is
selected in the Configured Package field, then click Save to initiate the provision action.


We Discussed:
• Pre-installation tasks and roles;
• Contrail installation using Fabric scripts;
• Additional settings and operations;
• Contrail high availability solutions;
• Configuring simple virtual gateway; and
• Installation process using Server Manager.


Review Questions
1. How many servers are required for a highly available installation of Contrail and OpenStack?

2. Which features are not supported by the simple virtual gateway?

3. Which software tools does Server Manager use to provision servers?


Lab: Installation of the Contrail Cloud


The slide provides the objective for this lab.

Answers to Review Questions
1.
Three nodes are required for a highly available installation of Contrail and OpenStack.
2.
The simple virtual gateway does not support overlapping subnets or ECMP.
3.
Cobbler and Puppet are used by Server Manager for server provisioning.

Acronym List
ACL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .access control list
ADC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Application Delivery Controller
API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .application programming interface
ARP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Address Resolution Protocol
AS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .autonomous system
ASIC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .application-specific integrated circuit
CE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . customer edge
CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .command-line interface
DHCP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dynamic Host Configuration Protocol
DNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Domain Name System
DPI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . deep packet inspection
EOL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . end-of-life
EVPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ethernet virtual private network
GRE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .generic routing encapsulation
GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .graphical user interface
HA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .high availability
IaaS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Infrastructure as a Service
IBGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . internal BGP
IDE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integrated development environment
IDP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Intrusion Detection and Prevention
IDS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . intrusion detection service
IDS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . intrusion detection system
IETF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Internet Engineering Task Force
IF-MAP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Interface to Metadata Access Point
IPAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IP address management
IPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . intrusion prevention system
IPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . intrusion protection system
IPsec . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IP Security
IPv4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IP version 4
ISO. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . International Organization for Standardization
JNCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Juniper Networks Certification Program
KVM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kernel-based Virtual Machine
L3VPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Layer 3 virtual private network
LBaaS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Load Balancing as a Service
MAC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . media access control
MANO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . management and orchestration
MPLSoGRE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Multiprotocol Label Switching over generic routing encapsulation
NAT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Address Translation
NETCONF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Configuration
NFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Network File System
NFV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . network functions virtualization
NFVI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . NFV infrastructure
NTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Network Time Protocol
NVO3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Network Virtualization Overlays
OVS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Open vSwitch
OVSDB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Open vSwitch Database Management
P . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . provider
PE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . provider edge
QE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . query expansion
QoS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . quality of service
RD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . route distinguisher
REST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . representational state transfer
RT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . route target
SCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Secure Copy Protocol
SDK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . software development kit
SDN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .software-defined networking
SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Structured Query Language
SSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Secure Sockets Layer
STP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Spanning Tree Protocol
TCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Transmission Control Protocol
ToR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Top of Rack
TSN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ToR services node
UDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .User Datagram Protocol
UI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . user interface
UUID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . universally unique identifier
UVE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . User-Visible Entity
VAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Value Added Services
vif. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vRouter interface
VLAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .virtual LAN
VM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .virtual machine
VN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual network
vNIC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual network interface card
VPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual path connection
VPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Virtual Private Cloud
VPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual private network
VPNaaS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . VPN as a Service
VRF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual routing and forwarding table
XMPP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Extensible Messaging and Presence Protocol