DAY ONE: INSIDE JUNOS® NODE SLICING

Understand, verify, deploy.

With the introduction of Junos node slicing, Juniper Networks offers the best virtualization solution for both control and data planes. Indeed, it's quite well known that X86 CPUs are the best for routing protocol implementations, but they are very limited in terms of network packet processing input/output performance. With Junos node slicing, network engineers can take advantage of X86 capabilities for control plane applications, while at the same time continuing to enjoy TRIO programmable network processor benefits in terms of packet processing performance.

"Change begins at the end of the comfort zone, and the Junos node slicing engineering team stepped out of their comfort zone and had no fear taking MX technology to the next level. Massimo did the same in this book. He stepped out of the comfort zone and had no fear in writing this excellent book that will help customers understand and deploy Junos node slicing technology. A great example of leadership and hard work."
- Javier Antich Romaguera, Product Line Manager, Automation Software Team, Juniper Networks

"You true believers in the no-compromise, brute performance, and power efficiency built on network ASICs may have felt a disturbance in the Force with 'network slicing,' a wavering in your faith. Worry not! This book shows you how to surgically carve a physical router into 'node slices,' each running its own version of the OS, administered by its own group, running its own protocols – advanced 'CPU virtualization' for networking devices. Massimo Magnani makes the process easy with this step-by-step Day One book!"
- Dr. Kireeti Kompella, SVP and CTO Engineering, Juniper Networks

IT'S DAY ONE AND YOU HAVE A JOB TO DO, SO LEARN ABOUT:

- Junos node slicing architecture and how it works
- How to install and set up a new Junos node slicing solution
- Advanced topics such as Abstracted Fabric Interfaces and inter-GNF connectivity
- How to correctly set up the MX chassis and servers to deploy Junos node slicing in a lab or a production environment
- How to migrate an existing chassis to a Junos node slicing solution
- How to design, deploy, and maintain Junos node slicing

By Massimo Magnani

ISBN 978-1941441923
- A basic understanding of routing and switching concepts such as next hops and VLANs
- A basic knowledge of subscriber management and the BGP protocol, which may be helpful to better grasp the use cases illustrated in Chapter 3
- Familiarity with the technical documentation about the MX Series, Junos node slicing, and routing in general, available at https://www.juniper.net/documentation

MORE? The Day One library has dozens of books on working with Junos: http://www.juniper.net/books.
Before starting, it’s important to understand why Junos node slicing is so remark-
able and what benefits it can bring to a modern network infrastructure.
Of course, building physically separated networks for each service, or even for groups of services with homogeneous bandwidth and latency requirements, is neither physically, practically, nor economically sustainable.
So how can a service provider segment its physical infrastructure to provide strict resource allocation and protection, the availability demanded by sensitive services, and economic sustainability? The 3GPP committee explicitly created the concept of "network slicing" to solve these problems. Network slicing, according to 3GPP, is a set of technologies that allows an operator to partition its physical network into multiple virtual networks, providing each service the resource protection, availability, and reliability it needs to behave as if it had a dedicated network all to itself.

To reinforce this same concept at the most granular level, the single node, Juniper Networks introduced Junos node slicing on its flagship MX Series line of products.
To solve these two problems, back in 2007 Juniper Networks released a solution, available on T Series routers only, named the "Juniper Control System" (JCS). The JCS was a custom chassis, connected to the T Series router through redundant Ethernet links, hosting up to twelve X86 physical boards that could be configured as standalone or master/backup Routing Engines to control up to eight partitions inside the same T Series router (up to six if Routing Engine redundancy was a strict requirement). In this way, the challenges of scaling out, resource reservation and protection, and single Routing Engine fate sharing were perfectly addressed. The major drawbacks of the solution were that the hardware was completely proprietary and very different from standard X86 servers, the blades were quite expensive, and, once all twelve slots were populated, the overall TCO was quite high due to the high total power consumption of the external chassis. Moreover, to interconnect different partitions (named Protected System Domains, or PSDs, just for the record), the customer had to use either physical ports or logical tunnel interfaces that needed dedicated tunnel cards, which wasted slots on the Flexible PIC Concentrators (FPCs) and had some performance limitations in terms of available throughput. JCS provisioning was another weak spot because it had to be performed using a sort of offline, BIOS-like software that was not integrated in any way with Junos. Bottom line: the solution was not fully optimized.
JCS, on paper, was already quite impressive at that time, but the real problem was
very simple: the technology needed for the underlay wasn’t ready. With the intro-
duction of multi-core X86 CPUs, memory price drops, increased storage sizes with
very low cost per stored bit, the adoption of modern virtualization technologies on
the control plane side, and the advancements in programmability of network pro-
cessors in the data plane side, all the technological shortcomings JCS had to fight
are now solved, and the time has finally come to implement Junos node slicing.
With Junos node slicing, all the challenges addressed by the JCS are solved, with none of the downsides that afflicted it. Indeed, Junos node slicing:

- Provides control plane virtualization using modern virtualization machinery running on commercial off-the-shelf (COTS) X86 servers running standard Linux distributions (Ubuntu and Red Hat Enterprise Linux at the moment);
- Runs the Routing Engines as virtual machines (VMs) on top of the Linux OS, not as dedicated hardware blades;
- Provides resource reservation and protection using full-fledged VMs;
- Achieves scaling out, because each slice inherits the scale and performance of a single chassis;
- Guarantees that a failure in any part of a slice can't affect the others in any way; the perceived fault on a remote slice will be the same as if a remote node went down;
- Provides a fully integrated VM management orchestrator named Junos Device Manager (JDM), with northbound APIs and a Junos-like CLI to easily handle all the operations needed during the whole lifecycle of a VM;
- Allows slices to be interconnected using Abstracted Fabric (AF) Interfaces, which are logical interfaces created by leveraging the MX chassis fabric; no revenue ports are lost, and performance when different partitions are connected is that of the underlying crossbar.
With the introduction of Junos node slicing, Juniper Networks offers the best virtualization solution for both control and data planes. Indeed, it's quite well known that X86 CPUs are the best for routing protocol implementations, but they are very limited in terms of network packet processing input/output performance. With Junos node slicing, network engineers can take advantage of X86 capabilities for control plane applications, while at the same time continuing to enjoy TRIO programmable network processor benefits in terms of packet processing performance.
Figure 1.2 shows each X86 server connected to both MX SCBe2s to provide the needed redundancy to the most important underlay control plane component. In fact, these 10GbE links have the responsibility to deliver all the control plane signaling (for example, the routing protocol PDUs), configuration commands, and syslog messages between the base system and the bare metal servers running the virtual Routing Engine instances. A single point of failure is just not an option!
It's very important to underline that, from a purely logical standpoint, there is no difference between how an internal Routing Engine and the external X86 servers are connected to the data plane component of a router chassis. In fact, one of the architectural pillars of Juniper Networks devices is that control plane and data plane components are physically separated from each other. Routing Engines handle control traffic, while Packet Forwarding Engines forward transit traffic.
Nevertheless, among many other duties, the control plane must handle routing information to calculate the routing information base (RIB, also known as the routing table) and then download it into the forwarding information base (FIB, also known as the forwarding table) to provide the Packet Forwarding Engines with all the correct next hops used to forward transit traffic.

But if there is no physical connection between the Packet Forwarding Engines and the Routing Engines inside a chassis, how can this goal be achieved? The answer is quite simple: inside every Juniper Networks device chassis, a fully dedicated infrastructure takes care of all the control plane traffic and the communications between the Routing Engines and the line cards. The term line cards is used because this information exchange doesn't happen between the Routing Engines and the various Packet Forwarding Engines, but between the Routing Engines and a dedicated control CPU installed on each line card; in fact, this CPU is in charge of exchanging control plane information with the Routing Engines and of programming the local Packet Forwarding Engines for traffic forwarding.
It’s understood that in order for this information exchange to happen, Routing
Engines and line card control CPUs need some kind of network infrastructure; in-
deed, a port-less switching component is installed in the chassis to provide the in-
terconnection between all the control plane CPUs, either in the line cards or on the
Routing Engines. In all MX chassis, this switching chip is physically installed on
the Control Board (SCBe2 for MX240/480/960 or CB-RE on MX2000 series), so
it is pretty straightforward to provide the two physical Ethernet ports on the CBs
themselves to extend this infrastructure outside the chassis.
To better understand how this layer works, you can check how all the internal components are connected through this management switch by issuing the show chassis ethernet-switch command:
magno@MX960-4-RE0> show chassis ethernet-switch
Displaying summary for switch 0
---- SNIP ----
Link is good on GE port 1 connected to device: FPC1
Speed is 1000Mb
Duplex is full
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
---- SNIP ----
Link is good on GE port 6 connected to device: FPC6
Speed is 1000Mb
Duplex is full
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
<SNIP>
Link is good on GE port 12 connected to device: Other RE
Speed is 1000Mb
Duplex is full
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
Link is good on GE port 13 connected to device: RE-GigE
Speed is 1000Mb
Duplex is full
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
<SNIP>
Link is good on XE port 24 connected to device: External-Ethernet
Speed is 10000Mb
Duplex is full
Autonegotiate is Disabled
Flow Control TX is Disabled
Flow Control RX is Disabled
Link is good on XE port 26 connected to device: External-Ethernet
Speed is 10000Mb
Duplex is full
Autonegotiate is Disabled
Flow Control TX is Disabled
Flow Control RX is Disabled
NOTE This output comes from an MX960 B-SYS chassis; for the sake of brevity, only the connected ports are shown.
You can see in the output that in this particular MX960 chassis:

- Ports 1 and 6 are connected to FPC1 and FPC6, respectively;
- Port 12 is connected to the other Routing Engine, while port 13 is connected to the local Routing Engine (RE-GigE);
- XE ports 24 and 26, running at 10Gbps, are connected to External-Ethernet, that is, the ports used to reach the external X86 servers.
NOTE In the lab setup used to write this Day One book, RE-S-1800x4-32G Routing Engines were installed, so the connection speed on the management switch is still 1Gbps. Newer Routing Engines, such as the RE-S-X6 (MX480/960) or the RE-S-X8 (MX2020/2010/2008), have 10Gbps links instead.
Traditionally, it is through this control plane component that all the needed information is exchanged between the Routing Engines and the line cards installed in the chassis. These communications have been happening over our well-known and beloved IP protocol since Junos 10.2R1, so it was straightforward to simply connect a few more ports to this very same switch and 'extend' the control plane connectivity outside the physical chassis.
Choose the Right X86 Servers
IMPORTANT Even though there are no real technical constraints when deploying 10Gbps Ethernet links using copper or fiber cables, Juniper Networks strongly suggests deploying the links using optical media. Juniper Networks systems engineers themselves use fiber connections to perform node slicing quality assurance tests. Whichever media is chosen, it's mandatory that all the connections be delivered using the same media: mixed fiber and copper setups are not supported.
When it comes to choosing the X86 bare metal servers, however, the options are almost endless, so it may be beneficial to clearly understand all the characteristics a server must provide to host the Junos node slicing control plane. The two main criteria are:

- Hardware characteristics
- Scaling requirements
NOTE The two external servers should have similar or, even better, identical technical specifications.
WARNING When CPU cores are accounted for, only physical cores are consid-
ered. Indeed, hyperthreading must be deactivated in the X86 server BIOS because,
at the moment, JDM is hyperthreading unaware, hence vCPUs belonging to the
same physical core might be pinned to different virtual Routing Engine instances,
eventually causing suboptimal performance.
You can get many useful insights from this command:

- The CPU model is a Xeon E5-2683 v4 @ 2.10GHz, based on the Broadwell microarchitecture, which is a successor to Haswell.
- This CPU model has 16 cores per socket.
- Just a single thread per core is reported to the OS, so hyperthreading is disabled.
- The server has two NUMA nodes; in other words, two physical CPUs are installed on the server motherboard.
- As the server has two NUMA nodes and each of them provides 16 cores, a total of 32 cores is available.
- Intel VT-x virtualization extensions are enabled.
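These hardware checks lend themselves to scripting. The helpers below are a hypothetical sketch, not from the book: they parse lscpu-style text to confirm hyperthreading is off and to count physical cores. The function names and the embedded sample are illustrative only; on a real server you would pipe the output of lscpu into them instead.

```shell
#!/bin/sh
# Hypothetical helpers (not from the book): parse lscpu-style output.

# Prints "ht-off" when one thread per core is reported, "ht-on" otherwise.
check_ht() {
    awk -F: '/^Thread\(s\) per core/ {
        gsub(/ /, "", $2)
        if ($2 == "1") print "ht-off"; else print "ht-on"
    }'
}

# Prints total physical cores = sockets * cores per socket.
physical_cores() {
    awk -F: '/^Socket\(s\)/          { gsub(/ /, "", $2); s = $2 }
             /^Core\(s\) per socket/ { gsub(/ /, "", $2); c = $2 }
             END                     { print s * c }'
}

# Sample mimicking the lab server above (2 sockets x 16 cores, HT disabled):
sample='Thread(s) per core:    1
Core(s) per socket:    16
Socket(s):             2'

echo "$sample" | check_ht          # ht-off
echo "$sample" | physical_cores    # 32
```

On a live machine, `lscpu | check_ht` should print "ht-off" before the server is considered node slicing ready.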
2. To ensure the power management features are not impairing CPU performance, check that all the cores run at (around) the expected nominal frequency, in this case 2.10GHz:
administrator@server-6d:~$ cat /proc/cpuinfo | grep MHz
cpu MHz : 2100.432
cpu MHz : 2100.432
cpu MHz : 2099.808
cpu MHz : 2100.427
cpu MHz : 2101.213
cpu MHz : 2100.391
cpu MHz : 2100.058
cpu MHz : 2100.693
cpu MHz : 2098.522
cpu MHz : 2101.095
cpu MHz : 2099.541
cpu MHz : 2101.460
cpu MHz : 2103.776
cpu MHz : 2100.058
cpu MHz : 2098.889
cpu MHz : 2104.051
cpu MHz : 2100.680
cpu MHz : 2101.299
cpu MHz : 2100.111
cpu MHz : 2100.059
cpu MHz : 2100.458
cpu MHz : 2106.318
cpu MHz : 2109.576
cpu MHz : 2105.722
cpu MHz : 2101.276
cpu MHz : 2100.212
cpu MHz : 2100.083
cpu MHz : 2100.277
cpu MHz : 2101.365
cpu MHz : 2101.076
cpu MHz : 2098.523
cpu MHz : 2103.841
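Eyeballing thirty-two lines is error prone; a small hypothetical check (not from the book, names are illustrative) can flag any core reporting a frequency more than 1% away from the nominal one:

```shell
#!/bin/sh
# Hypothetical check (not from the book): read /proc/cpuinfo-style text on
# stdin and report whether every core sits within 1% of the nominal MHz.
freq_ok() {  # usage: freq_ok NOMINAL_MHZ
    awk -v nom="$1" -F: '
        /^cpu MHz/ {
            mhz = $2 + 0
            if (mhz < nom * 0.99 || mhz > nom * 1.01) bad++
        }
        END { if (bad > 0) print "throttled"; else print "ok" }'
}

# Usage on a live server: freq_ok 2100 < /proc/cpuinfo
printf 'cpu MHz\t: 2100.432\ncpu MHz\t: 2099.808\n' | freq_ok 2100   # ok
```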
/vm-primary – This is the mount point where all the virtual Routing Engine images and files will be stored; it must have at least 350GB of space allocated.
The server has a single disk whose attribute ROTA (which means rotational de-
vice) is 0. So, it’s an SSD.
WARNING With older disk controllers, and if any virtualized storage is in place, this Linux command might report wrong information. Nevertheless, with modern disk controllers and direct-attached storage, it should be reliable. It's also possible to check the kernel boot messages using dmesg to find the exact disk model and manufacturer, and then search the web for it. For example:
administrator@server-6d:~$ dmesg | grep -i -e scsi | grep ATA
[ 10.489298] scsi 5:0:0:0: Direct-Access ATA Micron_1100_MTFD U020 PQ: 0 ANSI: 5
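The ROTA attribute mentioned above is reported by lsblk. The sketch below is hypothetical (not from the book): it classifies disks from `lsblk -d -o NAME,ROTA` output, with a hard-coded sample for illustration:

```shell
#!/bin/sh
# Hypothetical helper (not from the book): classify disks from
# `lsblk -d -o NAME,ROTA` output; ROTA=0 means non-rotational, i.e. an SSD.
classify_disks() {
    awk 'NR > 1 { kind = ($2 == 0) ? "SSD" : "HDD"; print $1, kind }'
}

# Usage on a live server: lsblk -d -o NAME,ROTA | classify_disks
printf 'NAME ROTA\nsda 0\n' | classify_disks   # sda SSD
```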
4. Check the disk size using familiar Linux commands such as df:
administrator@server-6d:~$ df -H
Filesystem Size Used Avail Use% Mounted on
udev 169G 0 169G 0% /dev
tmpfs 34G 27M 34G 1% /run
/dev/mapper/server1--vg-root 503G 36G 442G 8% /
tmpfs 170G 0 170G 0% /dev/shm
tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs 170G 0 170G 0% /sys/fs/cgroup
/dev/sda1 495M 153M 317M 33% /boot
cgmfs 103k 0 103k 0% /run/cgmanager/fs
tmpfs 34G 0 34G 0% /run/user/1000
servers and the B-SYS, hence SR-IOV might be implemented instead. Juniper Networks therefore suggests equipping the X86 external servers with NICs based on the Intel X710 chipset, because these will be the only ones officially supported with SR-IOV. Of course, if Junos node slicing is deployed today using VIRTIO, it will be perfectly supported to replace the existing network cards with Intel X710-based ones in the future, in order to use the SR-IOV-based solution.
6. RAM Memory

There is no specific requirement for RAM besides its size. Indeed, as we will see in the next section, RAM is one of the fundamental factors in properly dimensioning the X86 servers.
Each virtual Routing Engine running on the X86 external server is built according
to a template dictating which hardware resources will be required by the virtual
instances. At the time of this writing, four virtual Routing Engine resource tem-
plates exist and they mimic real 64-bit MX Series Routing Engines already re-
leased by Juniper Networks. The four virtual Routing Engine templates are
summarized in Table 1.2.
NOTE Each virtual Routing Engine, regardless of its resource template, will also
need about 55GB of storage space, so do account for this as well.
A Practical Example
Now that all the pieces of the puzzle have come together, it's time to apply what we have just learned to a real-world example. Assume that customer ACME has three aging devices and wants to consolidate all of them into a single MX Series chassis using Junos node slicing. The three devices provide:

- BNG services
- Business Edge services
- Internet Peering GW

First of all, let's choose the resource template that best fits each use case:

- 8core-64G for BNG, as it is the most control plane intensive service;
- 4core-32G for the Business Edge, chosen after an assessment of the expected scale of the service;
- 4core-32G for the Internet Peering GW, as the number of routes and peerings suggests this template should just fit the bill.
Let's now calculate the minimum requirements for our server:

Minimum Requirements = GNF dedicated resources + Shared hardware resources

GNF dedicated resources:

- GNF vRE CPUs = GNF BNG (8) + GNF BE (4) + GNF GW (4) = 16 cores
- GNF vRE RAM = GNF BNG (64) + GNF BE (32) + GNF GW (32) = 128GB
- Total storage needed by the vREs = 55GB * 3 = 165GB

Shared hardware resources:

- Linux OS & JDM cores = 4
- Linux OS & JDM RAM = 32GB
- Linux OS & JDM storage = 64GB
- B-SYS to vRE connectivity = 2 x 10GE ports
- Management (server + JDM + GNFs) = 3 x 1GE ports

Total minimum X86 server resources:

- CPU cores: 16 + 4 = 20
- RAM: 128GB + 32GB = 160GB
- Storage: 165GB + 64GB = 229GB
- Networking: 2 x 10GE ports + 3 x 1GE ports
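The arithmetic above can be wrapped in a small helper so that other template mixes can be sized the same way. This is a sketch with hypothetical function and argument names; the per-template figures and the Linux/JDM overheads are the ones used in the text.

```shell
#!/bin/sh
# Sketch: minimum X86 server sizing for a list of GNF vRE templates.
# Baseline (Linux OS + JDM): 4 cores, 32GB RAM, 64GB storage; each vRE
# additionally needs ~55GB of storage, as noted earlier.
size_server() {  # usage: size_server TEMPLATE [TEMPLATE ...]
    cores=4; ram=32; disk=64
    for t in "$@"; do
        case $t in
            8core-64g) cores=$((cores + 8)); ram=$((ram + 64)) ;;
            4core-32g) cores=$((cores + 4)); ram=$((ram + 32)) ;;
            *) echo "unknown template: $t" >&2; return 1 ;;
        esac
        disk=$((disk + 55))
    done
    echo "cores=$cores ram=${ram}GB disk=${disk}GB"
}

# ACME example: one 8core-64g (BNG) plus two 4core-32g GNFs.
size_server 8core-64g 4core-32g 4core-32g   # cores=20 ram=160GB disk=229GB
```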
Software Requirements

Now let's discuss the software side. There are four main software components required to deploy Junos node slicing:

- Bare metal server host operating system

We will also discuss the multi-version feature, which provides the capability to deploy different Junos versions between the B-SYS and the Junos node slicing GNFs (and between different GNFs as well).
The servers should have Internet connectivity to download Linux distribution updates and, if needed, to install additional packages after the initial server setup.

NOTE Despite there being no technical constraint preventing the server from operating correctly without an Internet connection, it's mandatory to have one so that the host OS is always kept updated, especially with the important updates that may fix security flaws.
For instance, in our lab, we are going to use JDM version 18.3-R1.9 on Ubuntu,
hence the JDM file will be:
jns-jdm-18.3-R1.9.x86_64.deb
NOTE It's important to state that starting with Junos 19.2R1, all the TRIO MPCs, including the Multiservice MPC, will be made AF Interface capable, so the aforementioned limitation will go away.
NOTE The image file is only needed for the first virtual Routing Engine spin up.
Once the Routing Engine is running, the Junos OS file and the software upgrade
procedure will be exactly the same as on a traditional MX Series router.
When working with the Junos image for the B-SYS and the Junos image for the virtual Routing Engines, there's an important point to note regarding naming conventions. The same version of the B-SYS and virtual Routing Engine Junos files contains the very same software, just packaged in two different ways. In fact, whereas the B-SYS Junos is delivered as a package that can be installed on top of an already running operating system, the node slicing Junos contains a full disk image that will be used by the KVM infrastructure to spin up a new virtual machine. Because of that, a different naming convention is adopted to distinguish one version from the other.
NOTE A full explanation of Junos file naming conventions is outside the scope of this book, which will only cover what is relevant to distinguish a Junos for B-SYS file from a Junos for virtual Routing Engine file.
So, by looking at the ns token, which of course stands for node slicing, it's possible to distinguish which kind of image is contained in the package.
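As a quick illustration, a filename test along these lines can tell the two image kinds apart. This is a hypothetical helper; it assumes the ns token appears as its own hyphen-delimited component of the filename, which is an assumption, not a statement of the official naming scheme:

```shell
#!/bin/sh
# Hypothetical helper: classify a Junos package by the "ns" (node slicing)
# token in its filename. Assumes the token is hyphen-delimited.
image_kind() {
    case $1 in
        *-ns-*) echo "virtual Routing Engine image" ;;
        *)      echo "B-SYS package" ;;
    esac
}

image_kind junos-install-mx-x86-64-18.3R1.9.tgz   # B-SYS package
```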
A Practical Example
In our Day One book lab we used Junos 18.3R1; therefore we will install the following packages:

B-SYS: junos-install-mx-x86-64-18.3R1.9.tgz
NOTE For the sake of completeness, it must be noted that the B-SYS filename changes if the Routing Engines installed in the MX chassis are the newer RE-X6 (MX960/480/240) or RE-X8 (MX2020/2010/2008). In this case, the Junos OS runs as a VM over a Linux host OS, and the file name becomes junos-vmhost-install-mx-x86-64-$JUNOS_VERSION.tgz. As our node slicing setup has two RE-S-1800X4s installed in the MX chassis, we are using the bare metal Junos version; therefore it's the only one explicitly mentioned.
WARNING Both servers must run the same JDM version. Nevertheless, running different JDM versions during upgrade activities is supported, as long as the commit synchronization feature is disabled until both servers run the updated software. Once both are upgraded, commit sync can be re-enabled. Be aware that even if synchronization is not turned off, the software checks whether the JDM version of the incoming change request matches the one running on the local server and, if it does not, the request is refused.
Multi-Version Consideration
One of the most value-added features that Junos node slicing brings to network administrators is the possibility to run different Junos versions between the B-SYS and the GNFs, and among different GNFs. Nevertheless, when Junos node slicing was first introduced, all the solution components had to run the very same Junos OS version. Starting with Junos 17.4R1, a new feature called multi-version was added to allow different Junos software versions to run between the B-SYS and the GNFs, and between different GNFs.

Despite the introduction of the multi-version capability, not all version combinations are supported; in fact, for a certain combination to be supported, it needs to satisfy some rules implemented by the multi-version feature itself, which will be explained in a minute.
WARNING The aforementioned rules are enforced: the software performs a series of checks that can point out compatibility issues, which will prevent the new GNFs from being successfully activated.
"-" (Minus) Support: if the GNF runs a Junos version lower than the B-SYS;

NOTE It's implied that there are no limitations whatsoever if the B-SYS and the GNFs run the same Junos OS version!
NOTE At the time of this writing, the 2018 Junos version chosen to allow the "+ / - 2" support is Junos 18.2.

Junos virtual Routing Engine versions supported = 17.4 + [0-2], that is 17.4, 18.1, or 18.2
Extended "+ / - 2" Support

Junos B-SYS version = 18.2

Junos virtual Routing Engine versions supported = 18.2 +/- [0-2], that is 17.4, 18.1, 18.2, 18.3, or 18.4
The multi-version rules only account for major version numbers; any 'R' release combination can be supported. For instance:

B-SYS Junos 17.4R1 -> virtual Routing Engine Junos 17.4R2 or 18.1R2 are allowed combinations.
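Because consecutive Junos majors don't differ by a fixed decimal step (17.4 is followed by 18.1), the "+/- 2" distance is easiest to compute over an ordered release list. The sketch below is hypothetical, not from the book; it implements only the extended "+/- 2" rule described above, and the release list is illustrative:

```shell
#!/bin/sh
# Hypothetical sketch: extended "+/- 2" multi-version rule, computed over an
# ordered (and illustrative) list of Junos major releases.
releases="17.3 17.4 18.1 18.2 18.3 18.4 19.1"

ordinal() {  # prints the position of a major release in the list, or -1
    i=0
    for r in $releases; do
        if [ "$r" = "$1" ]; then echo $i; return; fi
        i=$((i + 1))
    done
    echo -1
}

compatible() {  # usage: compatible BSYS_MAJOR GNF_MAJOR
    d=$(( $(ordinal "$2") - $(ordinal "$1") ))
    if [ $d -ge -2 ] && [ $d -le 2 ]; then echo yes; else echo no; fi
}

compatible 18.2 17.4   # yes: 17.4 is two releases behind 18.2
compatible 18.2 19.1   # no: 19.1 is three releases ahead
```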
NOTE Different GNFs can run different Junos versions as long as they satisfy the multi-version rules.

Multi-version is not just a set of 'nice to have' rules to fulfill to successfully deploy Junos node slicing; it is enforced during GNF deployments. These B-SYS to GNF Junos version compatibility checks are performed:

- When the user configures a new GNF or upgrades a running GNF;
- When the user upgrades the B-SYS Junos version;
- When JDM launches GNFs, as the compatibility checks run during the GNF bring-up process.
It's important to underline that there are no special constraints related to the JDM software version. This component implements the Junos node slicing management/orchestration plane only, which is completely orthogonal to the Junos OS running on both the B-SYS and the GNFs. Indeed, on one hand JDM has no relationship with the B-SYS at all and, on the other, it only uses the GNF Junos OS as a vehicle to spin up the virtual Routing Engines.

NOTE There is no limit on the deviation of the JDM version from the B-SYS one, and JDM can be upgraded completely independently without affecting the operation of the GNFs and B-SYS in any way.
login: root
Last login: Wed Nov 21 22:25:36 from 190.0.4.5
--- JUNOS 18.1R1.9 Kernel 64-bit JNPR-11.0-20180308.0604c57_buil
root@:~ # cli
root> show chassis alarms
bsys-re0:
--------------------------------------------------------------------------
1 alarm currently active
Alarm time Class Description
2018-11-21 21:19:53 UTC Minor GNF 4 Not Online
gnf4-re0:
--------------------------------------------------------------------------
1 alarms currently active
Alarm time Class Description
2018-11-21 21:19:49 UTC Major System Incompatibility with BSYS
root> show version bsys | grep Junos:
Junos: 18.3R1.9
This example shows a typical multi-version problem, a Junos version incompatibility between the GNF and the B-SYS: the B-SYS is running Junos 18.3R1 while the GNF is running 18.1R1. As Junos 18.3 only allows "+ 2" support, the GNFs must run Junos 18.3 or up to two releases newer; otherwise the most important multi-version rule is violated, and the GNF can't transition to the online state.
NOTE The careful reader may have noticed that the show chassis alarms command output is composed of two sections, namely "bsys-re0" and "gnf4-re0". The first section shows the same output as if the command had been typed on the B-SYS itself, while the second is the local output produced by the GNF. In Chapter 2 you'll see that there are indeed GNF CLI commands that also display B-SYS information, and how this is made possible.
Chapter 2: Junos Node Slicing, Hands-On

Now that everything about what is needed to deploy Junos node slicing is (hopefully) clear, it's finally time to touch the real thing and start setting it up in our lab. Our objective is to create the first two slices, namely EDGE-GNF and CORE-GNF, and connect them using AF Interfaces. And, as we set up, you can look more closely at how Junos node slicing works under the hood.

Let's start the lab!
The lab is composed of one MX960 with a hostname of MX960-4. Its hardware
components are:
Two SCBe2s
NOTE The MX960 needs three SCBe2s to run MPC7s at full rate; nevertheless, to deploy a fully redundant node slicing solution, only two are strictly required. As this is a lab and we're not performance testing, this hardware configuration is suitable for our purposes.
There are two X86 servers with the hostnames of jns-x86-0 and jns-x86-1. The
jns-x86-0 will become the JDM Server0 and the jns-x86-1 will become the JDM
Server1. The main hardware components of the servers are:
Two Xeon E5-2683 v4 CPUs @ 2.10GHz (16 cores each)
Each device in the lab has two sets of connections (see Figure 2.1):
Junos node slicing infrastructure links
Management links
3. Communications between JDM Server0 and Server1 are needed for configuration synchronization, for file transfers during GNF instantiation and, generally speaking, to collect information from the remote JDM server every time commands are executed with the all server or server # keywords.
WARNING The correct connection scheme for the server to MX chassis 10GE links hinges on the server number and the SCBe2 10GE port number, which must match. So, Server0's 10GE ports must be connected to Port 0 on both SCBe2s, while Server1's must be connected to Port 1 on both SCBe2s.
Management Links
Some of the Junos node slicing management infrastructure components should be
familiar to the reader, as they are present in traditional environments, too.
NOTE In this book's lab, the management for all the components of the solution
will be out-of-band. This term, in Juniper Networks jargon, refers to a configu-
ration where all the management interfaces are connected to ports that have no
access to the Packet Forwarding Engine transit path. An example of an out-of-
band port is the fxp0 interface on the Routing Engines.
The control and data plane of the two new GNFs will be modeled as:
EDGE-GNF:
CORE-GNF:
The lab is quite simple; there are just some important things to keep in mind:
When a line card becomes a member of a GNF, it will not change its numbering
scheme. This property is particularly useful when an MX router already running
in the field as a single chassis is reconfigured as a node slicing B-SYS.
Indeed, by maintaining the same line card numbering scheme, no modifications
to the original configuration will be needed.
When a line card joins a GNF, it will need to reboot to attach to the new
virtualized control plane running on the external servers. Be aware of this
behavior so you can correctly calculate the expected downtime during the
conversion process.
Last but not least, the AF Interface behaves like a plain core-facing Ethernet
interface and will be configured accordingly, as you will see in the next
sections of this chapter.
35 Node Slicing Lab Deployment
NOTE MX960 Chassis and X86 servers are already running the base software
needed to deploy Junos node slicing. In particular:
MX960 is running Junos 18.3R1.9 and the redundancy mechanisms are already con-
figured (graceful-restart, nonstop-routing, nonstop-bridging, commit
synchronization);
jns-x86-0 and jns-x86-1 are running Ubuntu Linux 16.04.5 (updated with the
latest patches available at the time of this writing) with the "Virtual Machine
Host" software package installed.
Once both checks are passed, the Junos node slicing installation can begin.
36 Chapter 2: Junos Node Slicing, Hands-On
The first step is enabling Junos node slicing on the MX960 B-SYS with the set
chassis network-slices guest-network-functions command. It does not affect
service and can be committed without any impact on existing traffic.
{master}
magno@MX960-4-RE0> edit
Entering configuration mode
{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions
{master}[edit]
magno@MX960-4-RE0# commit
re0:
configuration check succeeds
re1:
configuration check succeeds
commit complete
re0:
commit complete
{master}[edit]
magno@MX960-4-RE0#
Easy, isn't it? But how can you check whether the command worked? Let's examine
what's happening under the hood. As explained previously, Junos node slicing
leverages the internal management switch installed on each MX SCBe2 (or MX2K
SFB2) to extend the links outside the chassis. So, the management switch is the
right place to look for clues about the effects triggered by the previous commit.
NOTE To make for faster and easier reading on the one hand, and to reduce the
quantity of logging on the other, all the CLI snippets are taken only from the
master Routing Engine and trimmed to show just the relevant information.
{master}
magno@MX960-4-RE0> show chassis ethernet-switch
Displaying summary for switch 0
--- SNIP ---
Link is good on GE port 1 connected to device: FPC1
Speed is 1000Mb
Duplex is full
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
--- SNIP ---
Link is good on GE port 6 connected to device: FPC6
Speed is 1000Mb
Duplex is full
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
--- SNIP ---
Link is good on GE port 12 connected to device: Other RE
Speed is 1000Mb
Duplex is full
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
Link is good on GE port 13 connected to device: RE-GigE
Speed is 1000Mb
Duplex is full
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
--- SNIP ---
Link is down on XE port 24 connected to device: External-Ethernet
Link is down on XE port 25 connected to device: External-Ethernet
Link is down on XE port 26 connected to device: External-Ethernet
Link is down on XE port 27 connected to device: External-Ethernet
Before the command is applied, the only ports that are up are the ones connected
to the existing internal components, such as the two Flexible PIC Concentrators
(FPCs) in slots 1 and 6 and the two Routing Engines. All the other ports are not
connected and are in the down state. Pay attention to XE ports 24 and 26, which
should be the ones connected to the external SFP+ plugs on the SCBe2: despite
the fact that their physical cabling is in place, they are still in the
link-down state.
Now, let’s check the internal VLAN configured on the management switch:
{master}
magno@MX960-4-RE0> test chassis ethernet-switch shell-cmd "vlan show"
As expected, all the ports are assigned to VLAN 1, the native and untagged VLAN
that provides the internal messaging channel carrying all the communications
among chassis components.
Now, let's run the same port summary again, this time after the commit:

{master}
magno@MX960-4-RE0> show chassis ethernet-switch
Displaying summary for switch 0
--- SNIP ---
Link is good on GE port 1 connected to device: FPC1
Speed is 1000Mb
Duplex is full
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
--- SNIP ---
Link is good on GE port 6 connected to device: FPC6
Speed is 1000Mb
Duplex is full
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
--- SNIP ---
Link is good on GE port 12 connected to device: Other RE
Speed is 1000Mb
Duplex is full
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
Link is good on GE port 13 connected to device: RE-GigE
Speed is 1000Mb
Duplex is full
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
--- SNIP ---
Link is good on XE port 24 connected to device: External-Ethernet
Speed is 10000Mb
Duplex is full
Autonegotiate is Disabled
Flow Control TX is Disabled
Flow Control RX is Disabled
Link is down on XE port 25 connected to device: External-Ethernet
Link is good on XE port 26 connected to device: External-Ethernet
Speed is 10000Mb
Duplex is full
Autonegotiate is Disabled
Flow Control TX is Disabled
Flow Control RX is Disabled
Link is down on XE port 27 connected to device: External-Ethernet
The command clearly had the effect of enabling the SCBe2 10GE physical ports.
Indeed, as the fiber cables were already connected, once the commit took place,
the management switch brought the 10GE links up, and they are now in the UP state.
But now, let’s also check what happened to the internal VLAN scheme:
{master}
magno@MX960-4-RE0> test chassis ethernet-switch shell-cmd "vlan show"
Another interesting thing happened: VLAN 4001 was created and all the ports
were added to it. This is known as the B-SYS Master VLAN. It provides the
medium for GNF-to-B-SYS and B-SYS-to-GNF communications. Hence, all the
internal components (the B-SYS Routing Engines, the FPCs, and the Switch
Processor Mezzanine Board (SPMB) on the MX2020/2010/2008) and all the GNFs are
members of this VLAN.
NOTE While all the traffic originated and received by the internal Routing
Engines is untagged, the external GNF Routing Engines send tagged traffic. On
the B-SYS side, the tagging operations are performed by the Ethernet management
switch, while on the external virtual Routing Engines they are done by the
Junos network stack.
VLAN 4001 is needed because there are circumstances where the B-SYS may need to
communicate with all the GNFs. There are four use cases here:
1. SNMP and trap forwarding;
2. Syslog message forwarding;
3. Chassis alarms;
4. CLI command forwarding.
For the first three use cases, if a common chassis component breaks, for
instance a power supply, SNMP traps and syslog messages are sent to all the
GNFs because this kind of failure affects the whole solution. At the same time,
a chassis alarm is raised by the B-SYS chassis daemon and forwarded to all the
chassis daemons (chassisd) running on the different GNF virtual Routing
Engines. The principle is always the same: the alarm is related to the chassis,
which is the only resource shared among all the GNFs, therefore it is mandatory
to display it on all of them.
NOTE At the moment, the chassis alarms are not filtered. This means that all
the alarms forwarded by the B-SYS will be visible to all the GNF users,
regardless of whether the resource is shared (for example, a power supply, fan,
or SCBe2) or dedicated (a line card or PIC assigned to a specific GNF).
For the fourth use case, think about a command such as show chassis hardware:
this command would be issued using the GNF CLI but then forwarded to the mgd
daemon running on the B-SYS, which would return the output to the GNF mgd.
To clarify this important topic as much as possible, let's depict the software
architecture behind the communications between the base system and the external
servers, as shown in Figure 2.3.
As explained, some daemons, such as chassisd, mgd, and syslogd, still run on
the B-SYS as well as on the external virtual Routing Engines. The communication
between the B-SYS daemons and all the GNFs happens on VLAN 4001, while the
command forwarding between the external and the internal Routing Engine mgd
daemons uses a per-GNF dedicated VLAN. As mentioned, the control board internal
management switch provides the Layer 2 infrastructure to all of them.
NOTE The VLAN numbering scheme is valid at the time of this writing, but it
may change, even drastically, during the development cycle of the Junos node
slicing technology.
Now that the basic communication channel between the B-SYS and the external
servers is ready, let’s start the JDM installation!
The servers are already running a plain vanilla Ubuntu Server 16.04.5 Linux
(updated with the latest patches available). You now need to perform all the
activities that make these servers suitable to deploy JDM and the virtual
Routing Engines.
NOTE This section can't provide all the in-depth information that is already
available in the Junos Node Slicing Feature Guide: https://www.juniper.net/
documentation/en_US/junos/information-products/pathway-pages/junos-node-
slicing/junos-node-slicing.pdf, but it is a quick setup guide with some tricks
learned during various Junos node slicing implementations. So read the
excellent TechLibrary Junos node slicing docs, too!
Before installing the JDM package, you need to perform some activities on both
Linux servers to properly configure them to work correctly with the JDM itself.
The process is quite straightforward, and it is organized in steps that could
be executed in different orders. Nevertheless, executing them in the order
shown here optimizes the whole set of activities and takes the shortest amount
of time possible.
NOTE All the steps are documented below, but the outputs are taken from a
single server to avoid unnecessary duplication. Keep in mind that all these
activities must be performed on both X86 servers.
Okay, let's start the process by checking that the distribution version is up to date:
Check Ubuntu Version:
administrator@jns-x86-0:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.5 LTS
Release: 16.04
Codename: xenial
Check if any packages need updates:
administrator@jns-x86-0:~$ sudo apt update
Hit:1 http://nl.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://nl.archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]
Hit:3 http://security.ubuntu.com/ubuntu xenial-security InRelease
Get:4 http://nl.archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB]
Fetched 216 kB in 0s (729 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
Check. Now, it’s time to double-check that hyperthreading is not active in the
BIOS. To do that, input the following:
administrator@jns-x86-0:/etc/default$ lscpu | grep Thread
Thread(s) per core: 1
Because just a single thread per core is present, it’s all set.
NOTE If the output reads 2, it means the server BIOS (or UEFI) must be config-
ured to disable hyperthreading. Refer to your X86 server documentation about
how to do that.
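If you need to verify this on many servers, the check can be scripted; here is a minimal sketch (the check_smt helper is mine, not part of the Junos node slicing tooling) that gates on the lscpu output:

```shell
# check_smt: succeed (exit 0) only when the lscpu text on stdin reports a
# single thread per core, i.e. hyperthreading/SMT is disabled.
check_smt() {
  threads=$(grep '^Thread(s) per core:' | awk '{print $NF}')
  [ "$threads" = "1" ]
}

# Typical use on a server:
#   lscpu | check_smt || echo "disable hyperthreading in the BIOS"
```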
As mentioned before, you need to reserve some cores, namely core 0 and core 1,
for the host Linux OS. To do this, you must configure the isolcpus kernel boot
parameter: the cores listed there are isolated from the OS scheduler, which
will not allocate user-space threads on them, leaving them free to be dedicated
to the virtual machines. To apply it, add the parameter to the grub boot-loader
configuration file, install the new grub parameters, and reboot the server.
First of all, check whether the isolcpus boot parameter is present or not:
administrator@jns-x86-0:~$ cat /sys/devices/system/cpu/isolated
administrator@jns-x86-0:~$
If the command returns nothing, then no CPU cores are removed from the OS
scheduling activities, which means the isolcpus parameter was not passed to the
kernel during the boot process. To add it properly, edit the file
/etc/default/grub with a text editor of your choice and add the following
statement:
GRUB_CMDLINE_LINUX="isolcpus=2-31"
NOTE The X86 servers used in this lab have dual-socket motherboards with two
Xeon CPUs installed, each providing 16 cores, for a total of 32 cores numbered
from 0 to 31. As cores 0 and 1 must be reserved for the host OS, all the
remaining cores are removed from the normal user-space scheduling activities by
configuring the isolcpus parameter to start from core number 2. Therefore, in
this lab setup, the correct parameter to pass to the kernel is 2-31. Please
tune yours according to your setup.
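The edit itself can be scripted. Here's a minimal sketch (the set_isolcpus helper and the scratch-file approach are assumptions of mine, not the book's procedure) that rewrites the GRUB_CMDLINE_LINUX line for a given core range:

```shell
# set_isolcpus FILE RANGE: rewrite the GRUB_CMDLINE_LINUX line in FILE so the
# kernel boots with isolcpus=RANGE. On a real server FILE is /etc/default/grub,
# and update-grub must still be run afterwards.
set_isolcpus() {
  file=$1
  range=$2
  sed -i "s/^GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX=\"isolcpus=${range}\"/" "$file"
}

# Example: set_isolcpus /etc/default/grub 2-31
```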
Once the change is saved, run the following command to reflect the modification
to the actual Grub boot configuration:
administrator@jns-x86-0:/etc/default$ sudo update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.4.0-141-generic
Found initrd image: /boot/initrd.img-4.4.0-141-generic
Found linux image: /boot/vmlinuz-4.4.0-131-generic
Found initrd image: /boot/initrd.img-4.4.0-131-generic
Done
NOTE Your output might be different. These are the most recent kernels released
by Ubuntu at the time of this writing.
At the next reboot, the isolcpus parameter will be applied. But it's not yet
time to reboot the server: you can optimize the preparation process by
rebooting just once, after the AppArmor service has been disabled, so let's
move to the next step.
Next, the AppArmor service must be deactivated and disabled. First, check its
current status:
administrator@jns-x86-0:~$ systemctl is-active apparmor
active
By default, apparmor is enabled in Ubuntu 16.04, so let’s stop it and check its new
status:
administrator@jns-x86-0:~$ sudo systemctl stop apparmor
administrator@jns-x86-0:~$ sudo systemctl is-active apparmor
inactive
Now that it is stopped, let’s disable apparmor so that Linux will not start it during
the next reboot and check the new state:
administrator@jns-x86-0:~$ sudo systemctl disable apparmor
apparmor.service is not a native service, redirecting to systemd-sysv-install
Executing /lib/systemd/systemd-sysv-install disable apparmor
insserv: warning: current start runlevel(s) (empty) of script `apparmor’ overrides LSB defaults (S).
insserv: warning: current stop runlevel(s) (S) of script `apparmor’ overrides LSB defaults (empty).
administrator@jns-x86-0:~$ sudo systemctl is-enabled apparmor
apparmor.service is not a native service, redirecting to systemd-sysv-install
Executing /lib/systemd/systemd-sysv-install is-enabled apparmor
Disabled
Now that AppArmor is deactivated and disabled, you can reboot the server. Once
it has come online again, check both the AppArmor and the isolcpus status:
remember, this single reboot activates both configurations.
WARNING It is absolutely paramount to follow these steps and then reboot the
server. Indeed, if the installation is performed after merely disabling
AppArmor, without a server reboot, nasty things may happen. Some of the most
common symptoms that the installation was not preceded by a full reboot are:
the JDM CLI can't be run, the JDM initial configuration DB can't be opened, and
any configuration manipulation (show, display set, compare, or commit) fails.
So, if you see the following behavior:
root@jdm:~# cli
error: could not open database: /var/run/db/juniper.data: No such file or directory
or any command on the JDM trying to manipulate the configuration ends with an
error complaining about /config/juniper.conf opening problems:
error: Cannot open configuration file: /config/juniper.conf
or any commit operation fails with the following:
root@jdm# commit
jdmd: jdmd_node_virt.c:2742: jdmd_nv_init: Assertion `available_mem > 0’ failed.
error: Check-out pass for Juniper Device Manager service process (/usr/sbin/jdmd) dumped core (0x86)
error: configuration check-out failed
then the installation was probably performed without a full server reboot after
the AppArmor deactivation. To solve all these problems, the JDM package must be
uninstalled, the server rebooted, and then the JDM package reinstalled.
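The recovery steps above can be sketched as a short command sequence (the jns-jdm package name and file path are inferred from this lab's setup; adjust them to your JDM version):

```shell
# Recover from a JDM install performed without the post-AppArmor reboot:
sudo dpkg -r jns-jdm                                   # 1. uninstall the JDM package
sudo reboot                                            # 2. full server reboot
# ...once the server is back online:
sudo dpkg -i ~/Software/jns-jdm-18.3-R1.9.x86_64.deb   # 3. reinstall JDM
```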
So, let’s reboot the server now:
administrator@jns-x86-0:~$ sudo reboot
Great, the first part of the process is now complete. Have a cup of coffee, or
whatever you'd like, and wait for the server to boot up…
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-141-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
0 packages can be updated.
0 updates are security updates.
New release ‘18.04.1 LTS’ available.
Run ‘do-release-upgrade’ to upgrade to it.
Last login: Mon Jan 7 12:03:18 2019 from 172.29.87.8
administrator@jns-x86-0:~$
Let's check whether our configurations took effect, starting with the isolcpus
parameter. By issuing the same command used before the reboot, you can verify
that the parameter is now set to the 2-31 value:
administrator@jns-x86-0:~$ cat /sys/devices/system/cpu/isolated
2-31
Perfect! As expected, the Linux OS will no longer schedule user-space tasks on
cores 2 through 31! Now let's check on the AppArmor service using the familiar
systemctl command:
administrator@jns-x86-0:~$ sudo systemctl is-active apparmor
inactive
administrator@jns-x86-0:~$ sudo systemctl is-enabled apparmor
disabled
Great. Everything is fine. From a kernel and operating system standpoint, the
server is correctly configured to host both JDM and the virtual Routing Engines
that will power the node slicing solution. Before starting the JDM
installation, there are some other activities to perform, related to storage
and to distribution package installation and configuration.
NOTE Despite the 2TB storage size, only 1TB is used for the /vm-primary path,
as this amount of storage is more than enough to host up to 10 GNFs, the
maximum number supported at the time of this writing.
The storage is configured as logical volumes using the native Linux/Ubuntu LVM2
storage management suite. How this configuration is performed is beyond the
scope of this book, but there are very easy-to-follow tutorials on the Internet
that show you how to perform a smooth installation. Moreover, Ubuntu Server
16.04 is already LVM2-capable, therefore no extra packages are required and the
bootup scripts are already configured to automatically detect, activate, and
mount LVM-based block devices.
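For reference, a layout like the one shown below could be produced with a handful of LVM2 commands. This is only a sketch under assumed device names (/dev/sdb is hypothetical), not the exact procedure used on these servers:

```shell
# Carve a 1TB logical volume for /vm-primary out of a spare disk:
sudo pvcreate /dev/sdb                               # register the physical volume
sudo vgcreate vm-primary-vg /dev/sdb                 # volume group name as in this lab
sudo lvcreate -L 1T -n vm-primary-lv vm-primary-vg   # 1TB logical volume
sudo mkfs.ext4 /dev/vm-primary-vg/vm-primary-lv      # ext4 filesystem
sudo mkdir -p /vm-primary
echo '/dev/vm-primary-vg/vm-primary-lv /vm-primary ext4 defaults 0 2' | sudo tee -a /etc/fstab
sudo mount /vm-primary
```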
Here is the X86 server storage configuration:
administrator@jns-x86-0:~$ df -h -t ext4 -t ext2
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/jatp700--3--vg-root 491G 6.7G 459G 2% /
/dev/sda1 720M 108M 575M 16% /boot
/dev/mapper/vm--primary--vg-vm--primary--lv 1008G 72M 957G 1% /vm-primary
The configuration reflects what's expected, so let's move on and install the
Junos Device Manager!
In this lab, as already mentioned, the JDM version used is 18.3R1. Since the
underlying Linux distribution is Ubuntu Server, the package is a .deb file,
stored in this setup under /home/administrator/Software:
administrator@jns-x86-0:~$ ls Software/jns-jdm*
jns-jdm-18.3-R1.9.x86_64.deb
The installation process is exactly the same as for any other Ubuntu .deb
package, and it's done using the dpkg command.
NOTE The jns-jdm package has two mandatory package dependencies: qemu-kvm and
libvirt-bin. These packages may or may not be part of the Ubuntu Server
installation; indeed, if the "Virtual Machine Host" box is not explicitly
checked during the initial installation, these two packages will not be
installed. How can you check whether they are already present on Ubuntu Server?
The dpkg command is your savior:
administrator@jns-x86-0:~/Software$ dpkg -l | grep qemu-kvm
ii qemu-kvm 1:2.5+dfsg-5ubuntu10.33 amd64 QEMU Full
virtualization
administrator@jns-x86-0:~/Software$ dpkg -l | grep libvirt-bin
ii libvirt-bin 1.3.1-1ubuntu10.24 amd64 programs for the
libvirt library
If the output returns the information about these packages and shows "ii" in
the desired/status columns, they are correctly installed. If the status is
different from "ii", you will have to troubleshoot the issue and possibly
reinstall the packages, while if the output is empty, they must be added before
attempting the JDM installation, using the following commands:
administrator@jns-x86-0:~/Software$ sudo apt install libvirt-bin
Reading package lists... Done
Building dependency tree
Reading state information... Done
libvirt-bin is already the newest version (1.3.1-1ubuntu10.24).
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
administrator@jns-x86-0:~/Software$ sudo apt install qemu-kvm
Reading package lists... Done
Building dependency tree
Reading state information... Done
qemu-kvm is already the newest version (1:2.5+dfsg-5ubuntu10.33).
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
The outputs show they are already present in the system. If they are not, a
prompt will ask you to confirm the installation of the packages and all the
needed dependencies.
WARNING It is very important to check for the presence of the SNMP tools on the
Linux servers. They are not always installed by default on Ubuntu 16.04, so the
following packages should be installed:
The last two are not mandatory; they implement the SNMP client side and are
useful in case you want to perform any SNMP testing.
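With the prerequisites in place, the installation itself (whose full output is omitted here) boils down to a single dpkg invocation on each server:

```shell
cd ~/Software
# Install the JDM package (file name from this lab; use your version):
sudo dpkg -i jns-jdm-18.3-R1.9.x86_64.deb
```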
Perfect! The installation went through flawlessly. If you’re interested in more detail
about the installation process, a full log is stored in /var/log/jns-jdm-setup.log file.
So, before proceeding, let’s illustrate the JDM software architecture in Figure 2.4
and describe its main components.
Surprise, surprise, it's not! Why? Is that correct? Is there any step or
activity we missed? No worries, it's all working exactly as expected!
JDM is still inactive because, during its first boot, the end user must
configure the server identity on both servers. Indeed, the two X86 devices must
be identified as Server0 and Server1, and to do that, the user must assign this
identifier by specifying it in the first JDM start command.
In this lab, server “jns-x86-0” will act as Server0, while “jns-x86-1” will act as
Server1. This step is only required during the first JDM run because the identity
configuration will be stored and used at every subsequent JDM restart operation.
As this is a fundamental step, both servers' CLI outputs will be shown to
highlight the different startup IDs used. Let's start JDM, assigning the
correct identities to both servers by appending the server=[0|1] statement to
the jdm start command:
JNS-X86-0
administrator@jns-x86-0:~$ sudo jdm start server=0
Starting JDM
administrator@jns-x86-0:~$
JNS-X86-1
administrator@jns-x86-1:~$ sudo jdm start server=1
Starting JDM
administrator@jns-x86-1:~$
As explained, both servers started JDM with different server IDs, as depicted
in the CLI output. JDM runs as a container, so it's possible to access its
console with sudo jdm console. To exit from the console prompt, use the
CTRL + ] key combination. Here's a JDM console sample output:
administrator@jns-x86-0:~$ sudo jdm console
Connected to domain jdm
Escape character is ^]
---- SNIP ----
* Check if mgd has started (pid:1960) [ OK ]
* Check if jdmd has started (pid:1972) [ OK ]
* Check if jinventoryd has started (pid:2021) [ OK ]
* Check if jdmmon has started (pid:2037) [ OK ]
Device “jmgmt0” does not exist.
Creating new ebtables rule
Done setting up new ebtables rule
* Stopping System V runlevel compatibility [ OK ]
Ubuntu 14.04.1 LTS jdm tty1
jdm login:
As is clearly visible, JDM is running inside an Ubuntu 14.04.1 container, using
a single core and up to 2GB of RAM:
administrator@jns-x86-0:~$ sudo virsh --connect lxc:/// dominfo jdm
Id: 8440
Name: jdm
UUID: c57b23c8-6286-4c87-a6f0-c358d2c07a53
OS Type: exe
State: running
CPU(s): 1
CPU time: 22.4s
Max memory: 2097152 KiB
Used memory: 150856 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0
The start-up process has also automatically set up the network connectivity on
the Linux Ubuntu host side. Let's check it out:
administrator@jns-x86-0:~$ sudo brctl show
bridge name bridge id STP enabled interfaces
bridge_jdm_vm 8000.761a02971f95 no vnet2
virbr0 8000.5254005ac9e3 yes virbr0-nic
vnet0
51 JDM First Run and Initial Configuration
The virbr0 bridge provides IP connectivity, acting like an integrated routing
and bridging (IRB) interface and using the IP address 192.168.2.254/30, as
shown here:
administrator@jns-x86-0:~$ sudo ip addr list virbr0
11: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:5a:c9:e3 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.254/30 brd 192.168.2.255 scope global virbr0
valid_lft forever preferred_lft forever
As the netmask is /30, the other side of the link can't be anything other than
192.168.2.253. Let's try to ping it:
administrator@jns-x86-0:~$ ping 192.168.2.253 -c 1
PING 192.168.2.253 (192.168.2.253) 56(84) bytes of data.
64 bytes from 192.168.2.253: icmp_seq=1 ttl=64 time=0.060 ms
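The /30 arithmetic used here can be sketched as a tiny helper (illustrative only, not part of the solution): a /30 holds four addresses, so given one host address the peer is fully determined.

```shell
# peer_in_slash30 ADDR: print the other usable host address in ADDR's /30.
# A /30 contains the network address, two hosts, and the broadcast address.
peer_in_slash30() {
  prefix=${1%.*}               # first three octets
  last=${1##*.}                # last octet
  net=$(( last & ~3 ))         # network address of the /30
  if [ "$last" -eq $(( net + 1 )) ]; then
    echo "${prefix}.$(( net + 2 ))"
  else
    echo "${prefix}.$(( net + 1 ))"
  fi
}

# peer_in_slash30 192.168.2.254 prints 192.168.2.253, the JDM side of the link.
```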
To see who the owner of this IP address is, first of all, check the MAC address:
administrator@jns-x86-0:~$ arp -n 192.168.2.253
Address HWtype HWaddress Flags Mask Iface
192.168.2.253 ether 52:54:00:ec:ff:a1 C virbr0
It looks familiar, doesn't it? It's indeed the JDM vnet0 MAC address, as seen
in the output above. To confirm it, there are two more clues. The first: the
file /etc/hosts was modified and now contains an entry named 'jdm' with IP
address 192.168.2.253:
administrator@jns-x86-0:~$ sudo cat /etc/hosts | grep 192.168.2.253
192.168.2.253 jdm
The second clue: if you SSH as root to this IP address (or better, using the
jdm hostname), you reach the JDM container as expected:
administrator@jns-x86-0:~$ sudo ssh jdm
****************************************************************************
* The Juniper Device Manager (JDM) must only be used for orchestrating the *
* Virtual Machines for Junos Node Slicing *
****************************************************************************
Last login: Tue Jan 8 23:16:58 2019 from 192.168.2.254
root@jdm:~# ifconfig bme1
bme1 Link encap:Ethernet HWaddr 52:54:00:ec:ff:a1
inet addr:192.168.2.253 Bcast:192.168.2.255 Mask:255.255.255.252
---- SNIP ---
NOTE You may have noticed that the network interface on the container is named
bme1 and not vnet0. This is indeed an interesting thing to explain: the
container is running a dedicated NIC driver (namely veth in this case) that
exposes the network interface as 'bme1' to the Linux kernel. The vnet0
interface acts as a tunnel interface that receives the raw network frames and
delivers them to the host network stack.
To better grasp the whole picture, let’s illustrate what has been observed using the
CLI:
NOTE All the information provided is accurate as of today's Junos node slicing
external control plane productization. Juniper Networks may change the
underlying implementation during the solution's lifecycle without any notice.
Of course, the fundamental pillars of the solution will not change: providing a
very smart and flexible way to partition the MX Series routers while
maintaining the same user experience offered by a standalone system.
You'll see in a moment that all this technology is used to implement Junos node
slicing; therefore, exploring in detail the simplest use case, the host to JDM
container connectivity, will help you better understand the more complicated
ones. Let's move on and perform the first JDM configuration.
Once logged in to the JDM container, a Junos-like CLI is available to perform the
initial setup to create the management plane between the JDM servers and the B-
SYS. Once this task is completed, the Junos node slicing solution will be up and
running and ready to start hosting our GNFs.
To access the JDM Junos CLI, simply type “cli” at the login prompt:
administrator@jns-x86-0:~$ sudo su -
root@jns-x86-0:~# ssh jdm
****************************************************************************
* The Juniper Device Manager (JDM) must only be used for orchestrating the *
* Virtual Machines for Junos Node Slicing *
****************************************************************************
Last login: Tue Jan 8 23:34:53 2019 from 192.168.2.254
root@jdm:~# cli
root@jdm>
The prompt looks familiar, eh? The first configurations must be performed on
both JDM Server0 and Server1 because the replication machinery is not working
yet. But after the first commit, which creates the IP management infrastructure
between the two JDM servers, it will be possible to configure everything from a
single server: by committing the configuration, it will sync automatically
between the JDMs.
Let’s examine what the first configuration looks like and how all the different com-
mands come together to provide all the aforementioned capabilities.
NOTE As a gentle reminder, the information about which interfaces perform the
different roles in the configuration is recapped below. Remember, these are
exactly the same on both Server0 and Server1 JDMs:
NOTE Remember the X86 to CB link rule: # = 0 for Server0, 1 for Server1.
NOTE No JDM interface is assigned to host eno3. Indeed, eno3 will attach to a
bridge with each of the virtual Routing Engine fxp0 interfaces to provide manage-
ment connectivity to all of them.
Let's examine the different blocks in detail. The 'server0' and 'server1' names
are two reserved group names that identify on which server the configuration
should be applied. This allows you to keep command separation between the two
servers when needed. It is exactly the same machinery used when two Routing
Engines are present in the same chassis; in that case the reserved group names
are 're0' and 're1', but the principle doesn't change.
So, when the configuration is committed on server0, only the commands listed
under the 'group server0' stanza are executed and, of course, the same applies
to server1, too. All the commands outside of these two stanzas will be applied
to both JDM servers at the same time.
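As a hedged illustration of the group machinery (the host names below are invented, and, as in regular Junos, the groups take effect through apply-groups):

```
set groups server0 system host-name jdm-server0
set groups server1 system host-name jdm-server1
set apply-groups [ server0 server1 ]
```

Committed on Server0, only the server0 stanza is expanded; the same configuration committed on Server1 expands the server1 stanza instead.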
NOTE Because of the 'server0' and 'server1' group machinery, the initial
configurations will be exactly the same on both servers, as the commands are
selectively applied based on the server identity configured when JDM was first
started.
Now let's examine the relevant configuration under the server0 group stanza
(all the considerations apply to the server1 group as well):
set groups server0 server interfaces cb0 enp4s0f0
set groups server0 server interfaces cb1 enp4s0f1
set groups server0 server interfaces jdm-management eno2
set groups server0 server interfaces vnf-management eno3
With these commands, Linux physical ports are mapped to JDM interfaces according
to what was summarized in Table 2.
The cb0 and cb1 interfaces, as per their names, identify the links to the MX B-SYS
control boards. These interfaces will carry all the control plane and management
traffic between the virtual Routing Engines and the B-SYS, as well as JDM Server0
to JDM Server1 traffic.
With the jdm-management command, the Linux physical server interface named
“eno2” becomes the JDM management interface. Indeed, a new interface, “jmgmt0”,
appears in the JDM container, used as the out-of-band management port to
connect to JDM directly. It can be compared to the fxp0 port found on Juniper
Networks devices.
In the same way, with the vnf-management statement, the Linux server physical
interface eno3 becomes the out-of-band management port shared amongst all the
GNFs. All the traffic sourced by the fxp0 interfaces on each virtual Routing Engine
will be transmitted using the eno3 port.
Although the CLI configuration is homogeneous between the service interfaces (cb0
and cb1) and the management interfaces, it triggers very different settings in the
JDM container configuration. Indeed, the former are configured in virtual Ethernet
port aggregator (VEPA) mode, while the latter are configured in bridge mode.
This means that all traffic carried by the cb0 and cb1 interfaces, regardless of its
actual destination, is forced to pass through the external switch. This doesn’t hold
true for the bridged ones, which can leverage the virtual switch if the destination
container is on the same host compute node. Long story short, all the control traffic
between the JDMs and, above all, between the virtual Routing Engines and the line
cards on the B-SYS, will transit through the management switch.
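On the Linux host this difference is typically visible with ip -d link show, which reports the macvlan mode (for example, macvlan mode vepa). Here is a minimal sketch that classifies interfaces by mode from such output; the sample text, interface names, and the assumption that both kinds of interfaces are macvlan devices are illustrative, not captured from a real JDM host:

```python
import re

# Hypothetical 'ip -d link show' excerpt; real names and modes depend on the host
sample = """\
2: cb0@enp4s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    macvlan mode vepa
3: jmgmt0@eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    macvlan mode bridge
"""

def classify_modes(text):
    """Map each macvlan interface to its mode: 'vepa' hairpins all traffic
    through the external switch, 'bridge' allows local switching."""
    modes, current = {}, None
    for line in text.splitlines():
        m = re.match(r"\d+: (\S+?)@\S+:", line)
        if m:
            current = m.group(1)
        m = re.search(r"macvlan mode (\w+)", line)
        if m and current:
            modes[current] = m.group(1)
    return modes

print(classify_modes(sample))  # {'cb0': 'vepa', 'jmgmt0': 'bridge'}
```

This makes the behavioral difference concrete: anything learned as VEPA must hairpin through the external switch even to reach a neighbor on the same host.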
NOTE The X86 servers deployed in this lab all use the very same hardware, hence
all the interfaces share the same names. Therefore, these commands might have
been configured outside the server group stanzas in this particular scenario.
Nevertheless, best practice is to keep these statements within each server group
stanza to achieve the same configuration style regardless of the kind of servers
used. Indeed, having two twin servers is recommended but not strictly mandatory
to deploy Junos node slicing, as long as both of them can provide the required
hardware technical specifications.
The next configuration block sets up the Server0 JDM hostname, the management
interface IP address, and the default route to the IP gateway:
set groups server0 system host-name JDM-SERVER0
set groups server0 interfaces jmgmt0 unit 0 family inet address 172.30.181.173/24
set groups server0 routing-options static route 0.0.0.0/0 next-hop 172.30.181.1
NOTE You may have noticed the IP gateway address is the same for both JDM
Server0 and Server1. Therefore, the routing-options static route 0.0.0.0/0 next-hop
172.30.181.1 command might have been configured outside the group configuration
stanza. Nevertheless, placing it inside the server0 and server1 group stanzas
provides a more ordered and linear style where all the management reachability
statements are grouped per server, at the very low cost of one additional line of text.
Then the groups are applied to the candidate configuration by using the following
commands:
set apply-groups server0
set apply-groups server1
NOTE Although the JDM root login happens through the use of SSH keys (more
on that soon), any initial Junos configuration must include a root password,
otherwise the commit will fail with an error complaining about this missing
statement.
set system commit synchronize
Just as on the Junos OS we all know, this command activates the synchronization
features between the JDMs running on both servers. It is mandatory to configure it
for JDM configurations to be synchronized automatically.

The next commands create a user account and configure the root password:
set system login user magno class super-user
set system login user magno authentication encrypted-password “$6$AYC3.$qcf0.cDc9GcU4FuO.VeU9NGrBBje
j70NPyWE2C.03rzgimtW8sctZLxbGU8zpub62LB7Q/8DU28zLzYEngBBa/”
set system root-authentication encrypted-password “$6$Cw.E1$rWs6rnBn0vxrSHgvHT.YFpXUltRnpEnJ9V.
vDKwcbV7l11vA0VCWCDoKpae3.Lu72mHQ1Ra4oD4732T0MT5Lc/”
And, last but not least, the next commands enable some system services needed by
the JDM to work properly:
set system services ssh root-login allow
set system services netconf ssh
set system services netconf rfc-compliant
set system services rest http
The first three statements enable SSH and NETCONF access (over SSH, in its
RFC 4741-compliant version: https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/rfc-compliant-edit-system-services-netconf.html)
to the JDM servers. NETCONF is widely used in JDM to apply
configurations between the JDM servers (synchronization and remote commands)
and between JDM and B-SYS (command forwarding). The last command enables
the HTTP RESTful API, which offers a very simple and straightforward northbound
interface to automate JDM operations. This command is not strictly mandatory
to successfully configure the JDM.
Now that the initial configuration is loaded on both servers, it’s time to commit:
[edit]
root@jdm# commit
commit complete
root@jdm#
The commit process went through, but apparently nothing happened. So let’s take
a closer look and check the JDM interfaces:
root@jdm# run show interfaces | match Physical
Physical interface: lo , Enabled, Physical link is Up
Physical interface: cb0.4002, Enabled, Physical link is Up
Physical interface: cb1.4002, Enabled, Physical link is Up
Physical interface: bme1, Enabled, Physical link is Up
Physical interface: bme2, Enabled, Physical link is Up
Physical interface: cb0, Enabled, Physical link is Up
Physical interface: cb1, Enabled, Physical link is Up
Physical interface: jmgmt0, Enabled, Physical link is Up
root@jdm#
Wow, great! It looks like something has definitely happened during the commit!
NOTE You may have noticed the CLI prompt didn’t change despite the hostname
configuration. That’s because the user must disconnect, and connect back, for this
to happen. Let’s do that!
root@jdm# exit
root@jdm> exit
root@jdm:~# cli
root@JDM-SERVER0>
As expected, once you disconnect from the JDM CLI and connect back, the host-
name changes!
The configuration commit produced the desired effects, but you still need to perform
deeper checks to verify that everything is working properly. Before doing this,
one of the most important tasks to perform is the mutual authentication of the
JDM servers. The configuration synchronization and remote command machinery
(that is, the possibility to run commands from one JDM server on the other) uses
NETCONF over SSH as its communication channel. To allow these communications
to happen, the servers must learn each other’s SSH public key and store it in
the authorized keys file. To automatically perform this task, the operational
request server authenticate-peer-server command must be executed on both JDM
servers.
NOTE During the automated procedure, the user will be prompted for a root
password; the root password configured earlier with the system root-authentication
statement must be used:
SERVER0:
root@JDM-SERVER0> request server authenticate-peer-server
The authenticity of host ‘192.168.2.245 (192.168.2.245)’ can’t be established.
ECDSA key fingerprint is d0:06:39:fa:02:1b:c3:b2:1a:e9:ed:0b:9e:02:4e:1d.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-
id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.245’s password:
Number of key(s) added: 1
Now try logging in to the machine with ’ssh root@192.168.2.245’ and check to
make sure that only the key(s) you wanted were added:
The authenticity of host ‘192.168.2.249 (192.168.2.249)’ can’t be established.
ECDSA key fingerprint is d0:06:39:fa:02:1b:c3:b2:1a:e9:ed:0b:9e:02:4e:1d.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
root@JDM-SERVER0>
SERVER1:
root@JDM-SERVER1> request server authenticate-peer-server
The authenticity of host ‘192.168.2.246 (192.168.2.246)’ can’t be established.
ECDSA key fingerprint is 08:db:90:22:3e:65:88:3e:9a:95:c4:e5:78:36:d9:1b.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-
id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.246’s password:
Number of key(s) added: 1
Now try logging in to the machine with ssh root@192.168.2.246 and check to make
sure that only the key(s) you wanted were added:
The authenticity of host ‘192.168.2.250 (192.168.2.250)’ can’t be established.
ECDSA key fingerprint is 08:db:90:22:3e:65:88:3e:9a:95:c4:e5:78:36:d9:1b.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
root@JDM-SERVER1>
The outputs are quite self-explanatory: the two JDM servers exchange their SSH
public keys with each other. From now on, all communication between the JDMs
that uses SSH as its channel can be authenticated using SSH keys; this is particularly
relevant for NETCONF as it is carried over the secure shell.
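Incidentally, the key fingerprints shown in those prompts are simply an MD5 hash of the peer’s public key blob, so they can be recomputed independently before answering “yes”. A small sketch (the sample key below is invented for the demo):

```python
import base64
import hashlib

def md5_fingerprint(pubkey_line):
    """Colon-separated MD5 fingerprint of an OpenSSH public key line
    ('<type> <base64-blob> [comment]')."""
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.md5(blob).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Invented sample key; any valid base64 blob works for the demo
sample = "ssh-rsa " + base64.b64encode(b"demo-key-blob").decode() + " jdm"
print(md5_fingerprint(sample))
```

Comparing a fingerprint computed this way against the one displayed in the prompt is a quick defense against accepting the wrong host key.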
To double-check that the key exchange happened correctly, it’s sufficient to inspect
the /root/.ssh/authorized_keys file:
root@JDM-SERVER0> file show /root/.ssh/authorized_keys | match jdm
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDxBxKEco9LoVjk/
j8DY9maeIXJ96Vwvzm6Ye6OEmKgpVYEfmJBbcsrpmw3ye6x7ncyo1z+TaSlgoEQ9BdG1enhBsKSVZJx/
f+5IKrxJOJl3tg+huV50CUNwlm5M3wjdUJV7/1pAR/8Ki3IKtrHHtudM7RzLskcheMPoI4ZS2Gd2UwPHiDwB2Ap7aWS/
ZIYJpPvQazfyHOiy/l9vTtRwKtY6lUScehfq97XHBEftJblCdenyr2KJ6ucf6RqzgKm51FEghmrbJTORRG4BRZYr+vaQiof3DCLE/
ZfvWqH6b38XNOzttLxDIxPG7456eZ/exOthTjjDAr2QMLsD+lDHzzz jdm
Now it’s time to perform the last sanity check to verify environment health. A very
useful command for this task is show server connections.
Let’s kill two birds with one stone and use this activity to also experiment, for the
first time, with the command forwarding machinery between the two JDM servers.
Indeed, from the JDM Junos CLI, it’s possible to invoke commands not only
locally, but also on the remote server or on both servers at the same time. To
forward a command to a specific server, the server <0|1> option must be appended.
When a command is executed without the server option, it runs on the local server.

If you want a command to be executed automatically on both servers, append the
all-servers keyword to the original command. Let’s try the all-servers suffix, so
that with a single command you can check both that the JDM environment is
working properly on both servers and that NETCONF over SSH is working as
expected. From Server0:
root@JDM-SERVER0> show server connections all-servers
server0:
--------------------------------------------------------------------------
Component Interface Status Comments
Host to JDM port virbr0 up
Physical CB0 port enp4s0f0 up
Physical CB1 port enp4s0f1 up
Physical JDM mgmt port eno2 up
Physical VNF mgmt port eno3 up
JDM-GNF bridge bridge_jdm_vm up
CB0 cb0 up
CB1 cb1 up
JDM mgmt port jmgmt0 up
JDM to HOST port bme1 up
JDM to GNF port bme2 up
JDM to JDM link0* cb0.4002 up StrictKey peer SSH - OK
JDM to JDM link1 cb1.4002 up StrictKey peer SSH - OK
server1:
--------------------------------------------------------------------------
Component Interface Status Comments
Host to JDM port virbr0 up
Physical CB0 port enp4s0f0 up
Physical CB1 port enp4s0f1 up
Physical JDM mgmt port eno2 up
Physical VNF mgmt port eno3 up
JDM-GNF bridge bridge_jdm_vm up
CB0 cb0 up
CB1 cb1 up
JDM mgmt port jmgmt0 up
JDM to HOST port bme1 up
JDM to GNF port bme2 up
JDM to JDM link0* cb0.4002 up StrictKey peer SSH - OK
JDM to JDM link1 cb1.4002 up StrictKey peer SSH - OK
That’s exactly the output we were looking for! Everything is up on both servers.
Pay particular attention to the last two lines of the output: these lines certify
that the two JDM servers can exchange messages between themselves using SSH!
As is clearly visible from the output, this command is quite informative. All the
physical and logical interfaces are explicitly shown, providing a comprehensive
picture of the JDM connectivity machinery.
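Because the table format is stable, output like this is easy to post-process for monitoring. A minimal sketch (with a shortened sample pasted in; the column spacing is an assumption based on the output shown) that flags any component not in the up state:

```python
import re

# Shortened sample of 'show server connections' output (spacing assumed)
sample = """\
Component               Interface      Status   Comments
Host to JDM port        virbr0         up
Physical CB0 port       enp4s0f0       up
JDM to JDM link0*       cb0.4002       up       StrictKey peer SSH - OK
JDM to JDM link1        cb1.4002       down     StrictKey peer SSH - Failed
"""

def down_components(text):
    """Return the names of components whose Status column is not 'up'.
    Component names may contain spaces, so the split between the name and
    the interface is a run of two or more spaces."""
    down = []
    for line in text.splitlines()[1:]:                  # skip header row
        m = re.match(r"(.+?)\s{2,}(\S+)\s+(\S+)", line)
        if m and m.group(3) != "up":
            down.append(m.group(1).strip())
    return down

print(down_components(sample))  # ['JDM to JDM link1']
```

A check like this could be wired into a cron job or an automation pipeline over the NETCONF/REST interfaces enabled earlier.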
To better understand what each component provides to the solution, let’s examine
each line of the command output.

NOTE As usual, only one server’s output, namely Server0’s, will be analyzed to
avoid unnecessary redundant outputs. Of course, everything applies to both
servers:
Host to JDM port virbr0 up
You should already be familiar with virbr0: that’s the bridge locally connecting
the host OS (Linux Ubuntu) to the JDM container. Its link is up and working as
expected, and it is monitored by pings sent by both servers every second.

NOTE No big deal here, as we used exactly this bridge to connect to the JDM CLI;
being able to start the SSH session already means this bridge is working properly:
Physical CB0 port enp4s0f0 up
Physical CB1 port enp4s0f1 up
Physical JDM mgmt port eno2 up
Physical VNF mgmt port eno3 up
As explained during the configuration stage, the physical Linux Ubuntu 10GE
interfaces, namely enp4s0f0 and enp4s0f1, are configured as links to the MX B-SYS
control boards 0 and 1, respectively. The software automatically creates two new
interfaces named cb0 and cb1, which represent these connections inside JDM.
The same concept applies to the eno2 and eno3 Linux interfaces, which are mapped,
as per our configuration, to the JDM (jmgmt0) and GNF management interfaces
inside JDM.
NOTE Please keep in mind these entries only display the link’s physical status:
JDM-GNF bridge bridge_jdm_vm up
The JDM-GNF bridge, as you will see in the next few pages, provides the Layer 2
connectivity between all the virtual Routing Engines and JDM. This bridge is in
charge of carrying all the messaging between these two components, such as
virtual machine API calls (create, reboot, shutdown, etc.) and liveness detection
traffic. This bridge’s status must be up, otherwise no communication between the
JDM and the virtual Routing Engines can happen. Nevertheless, it’s very important
to consider that if the virtual Routing Engines are already running and the status
is down, only communication between the JDM and the virtual Routing Engines is
affected; production services provided by any GNF (which is a virtual Routing
Engine plus line cards inside the B-SYS) are not affected at all:
JDM to HOST port bme1 up
JDM to GNF port bme2 up
These lines show the JDM to host and JDM to GNF ports’ physical status. They
are working as expected.
JDM to JDM link0* cb0.4002 up StrictKey peer SSH - OK
JDM to JDM link1 cb1.4002 up StrictKey peer SSH - OK
This output displays the JDM to JDM link logical status between the two containers
running on Server0 and Server1. As clearly displayed, two connections between
the local JDM (remember, these are the outputs from Server0) and the remote
JDM (in this case Server1) are configured. Their status is up, which means there is
IP connectivity between the two JDMs on both links. The connectivity is monitored
by the JDM monitoring daemon (jdmmon), which maintains an open TCP
connection sending empty TCP keepalives every second.
NOTE The “*” (asterisk) displayed with the JDM to JDM link0 line means this is
the active link. Should link0 fail, link1 will take over automatically. The behavior
is preemptive, so when link0 comes up again, it will assume the active role again.
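The preemptive selection described in the note can be modeled in a few lines (a toy illustration of the behavior, not the actual jdmmon implementation):

```python
def active_jdm_link(link0_up, link1_up):
    """Preemptive link selection: link0 (cb0.4002) is preferred whenever
    it is up; link1 (cb1.4002) only covers a link0 failure."""
    if link0_up:
        return "cb0.4002"
    if link1_up:
        return "cb1.4002"
    return None

print(active_jdm_link(True, True))   # cb0.4002 (link0 preferred)
print(active_jdm_link(False, True))  # cb1.4002 (failover to link1)
print(active_jdm_link(True, False))  # cb0.4002 (link0 preempts on recovery)
```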
root@JDM-SERVER0> show server connections extensive server 1 | match “JDM to JDM link”
JDM to JDM link0* cb0.4002 up 192.168.2.245 StrictKey peer SSH - OK
JDM to JDM link1 cb1.4002 up 192.168.2.249 StrictKey peer SSH - OK
Note that this command was also executed on Server0, but by specifying the
server keyword, the output was collected from Server1. This is a good example
of how convenient the command forwarding feature can be. As shown, Server0
and Server1 share two IP point-to-point connections, one for each control board.
These are the communication channels used by the two JDMs to connect to each
other. The .4002 suffix indicates the packets are tagged with this VLAN ID.
But let’s check using some of our Linux Kung Fu:
root@JDM-SERVER0:~# ip addr list | grep 'cb[0-1].4002'
root@JDM-SERVER0:~#
Hmmm, an empty output was not expected. Let’s check what’s wrong; cb0 must
be there after all. The mystery is soon unveiled: to further isolate the JDM
connectivity, a dedicated network namespace holds the two CB interfaces!
NOTE A full explanation of Linux namespaces is outside the scope of this
book. At a very high conceptual level, they can be thought of as machinery
to create separate contexts that better isolate container resources from the
host. There are seven types of namespaces in the current Linux kernel
implementation, each one providing separation for a different entity. We will
only deal with the one relevant to us, the network namespace (netns).
So, two network namespaces are present on the JDM server: host is the default,
while jdm_nv_ns is the one dedicated to JDM. Let’s try the command again to
inspect the cb0 and cb1 interfaces, but this time inside the appropriate namespace:
root@JDM-SERVER0:~# ip netns exec jdm_nv_ns ip addr list | grep cb[0-1].4002
2: cb0.4002@cb0: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_
UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
inet 192.168.2.246/30 scope global cb0.4002
3: cb1.4002@cb1: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_
UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
inet 192.168.2.250/30 scope global cb1.4002
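The /30 masks show that these are true point-to-point subnets: each side’s peer is the only other usable host address. A quick sanity check with Python’s standard ipaddress module:

```python
import ipaddress

def p2p_peer(addr_with_prefix):
    """Return the other usable host on a /30 point-to-point subnet."""
    iface = ipaddress.ip_interface(addr_with_prefix)
    hosts = list(iface.network.hosts())   # a /30 has exactly two usable hosts
    assert len(hosts) == 2, "not a /30 point-to-point subnet"
    return str(next(h for h in hosts if h != iface.ip))

# Server0's cb0.4002 and cb1.4002 addresses, from the output above
print(p2p_peer("192.168.2.246/30"))  # 192.168.2.245 -> Server1's cb0.4002
print(p2p_peer("192.168.2.250/30"))  # 192.168.2.249 -> Server1's cb1.4002
```

Note that the computed peers match the addresses reported earlier by show server connections extensive on Server1.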
Examine the namespace more closely, and you’ll find that the bme2 interface also
belongs to jdm_nv_ns:
root@JDM-SERVER0:~# ip netns exec jdm_nv_ns ip addr list | grep '@if' -A2 | grep bme
14: bme2@if15: <BROADCAST,MULTICAST,UP,LOWER_
UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
inet 192.168.2.14/28 scope global bme2
This interface connects JDM to all the GNFs, and you’ll soon see which features it
provides.
Let’s go back and examine that StrictKey peer SSH - OK comment! As already
explained, JDM configuration synchronization and remote commands leverage
NETCONF over an SSH communication channel. That’s why we had to exchange
the public RSA keys between the two JDMs. It’s mandatory for this status to be OK,
otherwise neither of the aforementioned features will work.

If for any reason the SSH RSA key changes, this communication channel will
break. Let’s look at an example and, above all, at how to fix it.
Let’s suppose JDM Server0 has its RSA key changed after the two servers have
already exchanged their SSH keys. Let’s check the status now:
root@JDM-SERVER0> show server connections
Component Interface Status Comments
Host to JDM port virbr0 up
Physical CB0 port enp4s0f0 up
Physical CB1 port enp4s0f1 up
Physical JDM mgmt port eno2 up
Physical VNF mgmt port eno3 up
JDM-GNF bridge bridge_jdm_vm up
CB0 cb0 up
CB1 cb1 up
JDM mgmt port jmgmt0 up
JDM to HOST port bme1 up
JDM to GNF port bme2 up
JDM to JDM link0* cb0.4002 up StrictKey peer SSH - Failed
JDM to JDM link1 cb1.4002 up StrictKey peer SSH - Failed
If the status is Failed, the NETCONF over SSH communication channel is gone
and must be fixed at once! So, let’s re-run the request server authenticate-peer-server
command:
root@JDM-SERVER0> request server authenticate-peer-server
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-
id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.245’s password:
Number of key(s) added: 1
Now try logging in to the machine with SSH as root@192.168.2.245 and check to
make sure that only the key(s) you want were added:
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
Pay attention to the WARNING here: as a key already exists for Server0 on the remote
Server1, the new one is skipped. So first you need to get rid of the old key, and then
you’ll be able to run the server authenticate command to restore the situation. Let’s
delete the old key from Server1, then execute the command again on Server0.

NOTE To remove the Server0 key from Server1, you need to delete it from the
/root/.ssh/authorized_keys file using a text editor. Look for the line starting with
“ssh-rsa” and ending with “jdm”, something like the following example:
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQCk7AAnIeHGZEh2EGc33Bjb82MrM9nrB6O/y1CXPA77dVn9wAUoJdQGTNFDZh1gjddf
r69PlJY4FQvgVH2dLMzRrnoBwXl9kPX2avvOPBBgeIS2WSphdufuQGV4rQnq62FLu94Z8BLevTjyYMfXEKnh3aVVpJUhajsd
vrrWyz4Xnb9xQmiWF+x7JiR8Ab3JPmq00dmUaaKvGELB07soGF8+GyIsJdTC9uCxvmhiQn7+UCDwHeJNLFSNhEoX48K7jfMt
VxlGDYPPW+TH4jYjR2s8S8eRLspKhy3FHZHVLGIBt
jdm
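Rather than hunting through the file by hand, the stale entry can also be stripped programmatically. A small sketch, demonstrated on a scratch file (on the real server the path is /root/.ssh/authorized_keys, and the filter assumes the JDM-exchanged key is the only entry whose trailing comment is exactly “jdm”):

```python
import os
import tempfile

def drop_jdm_key(path):
    """Remove every authorized_keys line whose comment field is 'jdm'."""
    with open(path) as f:
        kept = [line for line in f
                if line.rstrip("\n").split()[-1:] != ["jdm"]]
    with open(path, "w") as f:
        f.writelines(kept)

# Demo on a scratch copy (invented sample keys)
fd, scratch = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("ssh-rsa AAAA...operator laptop\nssh-rsa AAAA...jdmkey jdm\n")
drop_jdm_key(scratch)
print(open(scratch).read())  # only the 'laptop' key remains
```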
Once the line is removed, get back to Server0 and re-execute the request server
authenticate-peer-server command:
root@JDM-SERVER0> request server authenticate-peer-server
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-
id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.245’s password:
Number of key(s) added: 1
Log back in to the machine with ’ssh root@192.168.2.245’ and check to make sure
that only the key(s) you wanted were added:
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
root@JDM-SERVER0>
This time, the old key was already deleted and the new key was correctly added.
Indeed, by re-checking the connection status, it’s clearly shown we are back in the
game:
root@JDM-SERVER0> show server connections
Component Interface Status Comments
Host to JDM port virbr0 up
Physical CB0 port enp4s0f0 up
Physical CB1 port enp4s0f1 up
Physical JDM mgmt port eno2 up
Physical VNF mgmt port eno3 up
JDM-GNF bridge bridge_jdm_vm up
CB0 cb0 up
CB1 cb1 up
JDM mgmt port jmgmt0 up
JDM to HOST port bme1 up
JDM to GNF port bme2 up
JDM to JDM link0* cb0.4002 up StrictKey peer SSH - OK
JDM to JDM link1 cb1.4002 up StrictKey peer SSH - OK
Great, the JDM to JDM communication channel has been successfully restored!
Figure 2.6 illustrates all the detailed connections so you can understand the
environment before starting to create your first GNFs.
WARNING On some installations, not all of the server interfaces may come up
automatically. This can cause problems with B-SYS to X86 server connectivity,
so it’s highly advisable to configure the bootup scripts to explicitly bring all used
interfaces up. On Ubuntu, this is accomplished by adding the following lines (for
each interface) to the “/etc/network/interfaces” configuration file:
auto $IF_NAME
iface $IF_NAME inet manual
up ifconfig $IF_NAME up
For reference, here is the complete /etc/network/interfaces file used on the lab servers:
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eno1
iface eno1 inet static
address 172.30.181.171
netmask 255.255.255.0
network 172.30.181.0
broadcast 172.30.181.255
gateway 172.30.181.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 172.30.181.2
dns-search poc-nl.jnpr.net
### Management Interfaces
auto eno2
iface eno2 inet manual
up ifconfig eno2 up
auto eno3
iface eno3 inet manual
up ifconfig eno3 up
### 10GE Interfaces
auto enp4s0f0
iface enp4s0f0 inet manual
up ifconfig enp4s0f0 up
auto enp4s0f1
iface enp4s0f1 inet manual
up ifconfig enp4s0f1 up
So far the focus has been on the connectivity aspects of the solution. The next pages
go inside the storage configuration to extend our deep dive!
Chapter 3: GNF Creation, Bootup, and Configuration
Time to create the first guest network functions (GNFs)! The purpose of this Day
One book on Junos node slicing is to create, configure, interconnect, and test one
Edge and one Core GNF. As previously explained, a GNF is a logically separated
partition of a single MX chassis composed of two main components:
A dedicated virtual control plane (aka virtual Routing Engine) running on two
external X86 servers;
One or more dedicated data plane components, that is, line cards running inside
the MX chassis.
Each of these components must be configured and ‘stitched’ together to create a
working GNF. As a rule of thumb, the control plane component is configured
using JDM, while the data plane element is configured on the B-SYS. Although the
order in which these two elements are configured is not governed by any strict
technical guideline, it is best practice to start with the virtual Routing Engine setup
on JDM and then proceed with the data plane component on the B-SYS.
So, following our own advice, let’s start creating our first GNF, namely EDGE-GNF,
starting from the control plane component. As explained, JDM is the orchestrator
managing the full lifecycle of our virtual Routing Engine, therefore it’s a pretty
straightforward choice as the starting point to create our first GNF control plane.
But before logging in and starting the virtual Routing Engine spin-up process,
let’s examine how it is architected.
The virtual Routing Engines can be thought of as KVM virtual machines running
Junos to provide control plane features to the GNF data plane component, so
they obviously need a boot image to start. Adding a suitable image to our virtual
network function is the first step in creating the new virtual Routing Engine.
If we try to configure all the desired parameters before adding an image, the JDM
commit will not complete and it will return the following error:
[edit]
root@JDM-SERVER0# commit check
[edit]
‘virtual-network-functions EDGE-GNF’
Adding Image is mandatory for EDGE-GNF
error: configuration check-out failed
[edit]
root@JDM-SERVER0#
Once the boot image is correctly installed, the real VM configuration takes place
under the JDM CLI ‘virtual-network-functions’ stanza.
Now let’s perform the two-step creation process to start our first virtual Routing
Engine. Log in to JDM (using the JDM Server0 management IP address) and start
the CLI:
****************************************************************************
* The Juniper Device Manager (JDM) must only be used for orchestrating the *
* Virtual Machines for Junos Node Slicing *
****************************************************************************
Last login: Thu Jan 24 16:43:20 2019 from 192.168.2.254
root@JDM-SERVER0:~# cli
root@JDM-SERVER0>
Once the Junos-like JDM CLI prompt appears, it’s time to add the image for the
new GNF. Because JDM runs in a Linux container, the image file must be made
available to it. The file was copied using the Secure Copy Protocol (SCP) from its
original location (the server0 /home/administrator/Software/ directory) to the JDM
/var/tmp directory:
root@JDM-SERVER0:~# scp administrator@172.30.181.171:~/Software/junos*-ns-* /var/tmp
The authenticity of host ‘172.30.181.171 (172.30.181.171)’ can’t be established.
ECDSA key fingerprint is 6a:29:60:18:35:9f:0e:ba:cf:30:98:3e:c2:b4:10:ba.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘172.30.181.171’ (ECDSA) to the list of known hosts.
junos-install-ns-mx-x86-64-18.3R1.9.tgz
100% 2135MB 112.4MB/s 00:19
root@JDM-SERVER0:~#
Create the EDGE-GNF Virtual Routing Engine and Add the Boot Image
The mandatory step to assign a valid software image to a virtual network function
achieves the following goals:
1. Provides a valid software image to boot from;
2. Assigns the desired name to the new VNF;
3. Creates the directory structure under the ‘vm-primary’ mount point to store all
the files related to a specific VNF.
As you will see, the chosen name will be used in the JDM CLI to perform configuration
or operational tasks related to the VM itself. So let’s go to the JDM CLI and
assign the desired boot image to our soon-to-be-created EDGE-GNF virtual
Routing Engine:
root@JDM-SERVER0> request virtual-network-functions EDGE-GNF add-image /var/tmp/junos-install-ns-mx-
x86-64-18.3R1.9.tgz all-servers
server0:
--------------------------------------------------------------------------
Added image: /vm-primary/EDGE-GNF/EDGE-GNF.img
server1:
--------------------------------------------------------------------------
Added image: /vm-primary/EDGE-GNF/EDGE-GNF.img
root@JDM-SERVER0>
For reference, the request virtual-network-functions command supports the
following statements:

Statement      Function
add-image      Add a boot image to a VNF
console        Provide console access to a VNF
start          Start a VNF boot process
stop           Stop a running VNF
restart        Perform a stop-start cycle of a running VNF
delete-image   Delete the entire directory structure of a stopped VNF
As usual, let’s also take a look at what has happened under the hood! You know
that all the VNF-related files are stored under the /vm-primary path, so we should
expect to find something interesting in that place:
root@JDM-SERVER0> file list /vm-primary/ detail
/vm-primary/:
total blocks: 56
drwxr-xr-x 2 root root 4096 Feb 6 19:25 EDGE-GNF/
drwx------ 2 root root 16384 Jan 7 13:11 lost+found/
total files: 0
root@JDM-SERVER0>
root@JDM-SERVER0> file list /vm-primary/EDGE-GNF/ detail
/vm-primary/EDGE-GNF/:
total blocks: 9145016
-rw-r--r-- 1 root root 18253676544 Feb 6 19:25 EDGE-GNF.img
-rw-r--r-- 1 930 930 2350645248 Sep 21 03:09 EDGE-GNF.qcow2
-rw-r--r-- 1 930 930 3 Sep 21 03:11 smbios_version.txt
total files: 3
root@JDM-SERVER0>
There are three files present as the result of the boot-image installation process.
Let’s see what each one does:
EDGE-GNF.img represents the virtual SSD drive of the virtual Routing Engine;
EDGE-GNF.qcow2 contains the image the virtual Routing Engine will use to
boot up;
smbios_version.txt holds a parameter passed to the VM BIOS (at the time of this
writing, the version is v1) to correctly identify the booting Junos VM.
NOTE At the time of this writing, only v1 is supported. Should new capabilities
that rely on the underlying virtual hardware be added to the Junos VM, this
version may change.
It’s now time to configure and start our new GNF! As usual, the JDM CLI provides
the interface needed to perform the required configuration. The EDGE-GNF VM
will be modeled as follows:
Eight cores, 64 GB RAM Routing Engine
Chassis type mx960
ID = 1
No starting configuration
No autostart
[edit]
root@JDM-SERVER0# set virtual-network-functions EDGE-GNF id 1 resource-template 8core-64g chassis-
type mx960 no-autostart
[edit]
root@JDM-SERVER0# show virtual-network-functions
EDGE-GNF {
no-autostart;
id 1;
chassis-type mx960;
resource-template 8core-64g;
}
[edit]
root@JDM-SERVER0#
[edit]
root@JDM-SERVER0# commit
commit complete
[edit]
root@JDM-SERVER0#
Isn't that easy? Just a single command and a commit! Et voilà, the new GNF is ready to go! Before starting it, let's examine each command and what it achieves:
no-autostart: By default, the virtual Routing Engines spawn on commit; with this command, the VM will not boot up until the request virtual-network-functions $GNF_NAME start operational command is submitted;
id: This identifies the virtual Routing Engine with a numeric value; today, the ID can range from 1 to 10, as the maximum number of supported GNFs is 10. This value is very important, as it also identifies on the B-SYS the line cards assigned to a given pair of virtual Routing Engines; and not only that: we'll see in the next pages how this ID value is also involved in other parts of the Junos node slicing implementation;
chassis-type: This identifies the B-SYS chassis type. The value must match the real B-SYS MX model, otherwise the line cards will not be correctly attached to the virtual Routing Engines.
resource-template: This defines which kind of virtual Routing Engine is required
from a resource reservation standpoint.
There are some other notable commands not used in this configuration:
base-config: This allows the user to provide a custom Junos startup configuration from which the VNF will get its initial settings.
physical-cores: This provides the capability to statically define which CPU cores are bound to the virtual-network-function; commit checks prevent the user from pinning a lower or higher number of cores than specified in the resource-template, and from reserving cores that are already in use by other VNFs.
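To make those commit checks concrete, here is a minimal Python sketch of the two rules just described; the template table and the helper are illustrative assumptions, not the actual JDM implementation:

```python
# Hypothetical template table: name -> required core count (assumed from the
# '8core-64g' template used in this chapter).
TEMPLATES = {"8core-64g": 8, "4core-32g": 4}

def check_physical_cores(template, cores, cores_in_use):
    """Reject a pinning request that doesn't match the template or clashes
    with cores already reserved by another VNF."""
    required = TEMPLATES[template]
    if len(cores) != required:
        raise ValueError(f"{template} requires exactly {required} cores, got {len(cores)}")
    clash = sorted(set(cores) & set(cores_in_use))
    if clash:
        raise ValueError(f"cores already reserved by another VNF: {clash}")
    return True

# EDGE-GNF pinned to cores 4-11 while another VNF holds 12-15: passes.
print(check_physical_cores("8core-64g", list(range(4, 12)), {12, 13, 14, 15}))
```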
Let's now go to JDM CLI operational mode and check whether our VNFs are actually ready to start on both servers:
[edit]
root@JDM-SERVER0# exit
Exiting configuration mode
root@JDM-SERVER0> show virtual-network-functions all-servers
server0:
--------------------------------------------------------------------------
ID Name State Liveness
--------------------------------------------------------------------------------
1 EDGE-GNF Shutdown down
server1:
--------------------------------------------------------------------------
ID Name State Liveness
--------------------------------------------------------------------------------
1 EDGE-GNF Shutdown down
root@JDM-SERVER0>
The command output looks promising: indeed, on both JDM servers you can see the VNFs are created and, as expected, they have not booted yet, so the liveness status is down. It's time to start our virtual Routing Engines using our beloved JDM CLI:
root@JDM-SERVER0> request virtual-network-functions EDGE-GNF start all-servers
server0:
--------------------------------------------------------------------------
EDGE-GNF started
server1:
--------------------------------------------------------------------------
EDGE-GNF started
root@JDM-SERVER0>
Once again, the all-servers knob eases the sysadmin's life by booting both virtual Routing Engines at the same time. Brilliant!
NOTE Because we are spawning two Routing Engines, they will run the well-known master/backup election process, just as their physical counterparts do every time an MX chassis starts up. The JDM server number matches the Routing Engine number, so RE0 runs on Server0 while RE1 runs on Server1. By default, RE0 has the higher priority to become the master. The set chassis redundancy routing-engine command is supported in the GNF's Junos to change this default behavior if needed. All the well-known chassis redundancy machinery available on a standalone dual Routing Engine MX chassis is also implemented in a Junos node slicing dual virtual Routing Engine setup. Graceful Routing Engine Switchover (GRES) can be configured, and the periodic keepalives handled by chassisd can be activated to detect master reachability so that, in case of loss of connectivity, the backup Routing Engine takes over automatically.
While the Routing Engine is booting, it's possible to use the request virtual-network-functions $VNF-NAME console command to connect to the QEMU emulated serial port. Let's try it:
root@JDM-SERVER0> request virtual-network-functions EDGE-GNF console
Connected to domain EDGE-GNF
Escape character is ^]
@ 1549558470 [2019-02-07 16:54:30 UTC] verify pending ...
Verified jfirmware-x86-32-18.9 signed by PackageProductionEc_2018 method ECDSA256+SHA256
Verified jdocs-x86-32-20180920 signed by PackageProductionEc_2018 method ECDSA256+SHA256
Verified jinsight-x86-32-18.9 signed by PackageProductionEc_2018 method ECDSA256+SHA256
Verified jsdn-x86-32-18.9 signed by PackageProductionEc_2018 method ECDSA256+SHA256
Verified jpfe-common-x86-32-20180920 signed by PackageProductionEc_2018 method ECDSA256+SHA256
---- SNIP ----
FreeBSD/amd64 (Amnesiac) (ttyu0)
login:
Perfect! Our virtual Routing Engine has finally booted and is ready to receive our configuration. As is clearly visible, the look and feel is exactly the same as a Junos-powered hardware Routing Engine.
TIP To exit from the QEMU console use the “CTRL + ]” key combination.
Checking the VNF state again from the JDM CLI confirms the boot completed (server1 output shown):
server1:
--------------------------------------------------------------------------
ID Name State Liveness
-----------------------------------------------------------------------------
1 EDGE-GNF Running up
root@JDM-SERVER0>
74 Chapter 3: GNF Creation, Bootup, and Configuration
We can conclude the VNFs are running properly! But let's keep up with good habits and take a look at what happens under the hood. First of all, let's check how the new VMs are connected to the B-SYS and to the JDM by examining the network interfaces on the new RE0 instance, using JDM console access:
root@JDM-SERVER0> request virtual-network-functions EDGE-GNF console
Connected to domain EDGE-GNF
Escape character is ^]
FreeBSD/amd64 (Amnesiac) (ttyu0)
login: root
--- JUNOS 18.3R1.9 Kernel 64-bit JNPR-11.0-20180816.8630ec5_buil
root@:~ # cli
root> show interfaces terse
Interface Admin Link Proto Local Remote
dsc up up
fxp0 up up
--- SNIP ---
vtnet0 up up
vtnet0.32763 up up inet 190.0.1.1/2
190.0.1.4/2
tnp 0x3e000104
vtnet0.32764 up up inet 190.0.1.1/2
190.0.1.4/2
tnp 0x3e000104
vtnet1 up up
vtnet1.32763 up up inet 190.0.1.1/2
190.0.1.4/2
tnp 0x3e000104
vtnet1.32764 up up inet 190.0.1.1/2
190.0.1.4/2
tnp 0x3e000104
vtnet2 up up
vtnet2.0 up up inet 192.168.2.1/2
The interfaces relevant to our investigation are fxp0, vtnet0, vtnet1, and vtnet2: the first should be quite familiar, while the other three may look new, but they really are not:
fxp0: Most readers will have already guessed that this is the out-of-band management interface of the virtual Routing Engine.
vtnet0 / vtnet1: It's useful to first clarify that these interfaces are created by the FreeBSD VirtIO kernel driver; as the examined instance is virtual, that shouldn't be a surprise. Recall that inside a single MX960 chassis there are two interfaces connecting each Routing Engine to the SCB management switch. Those interfaces are named em0 and em1 (the em prefix comes from the underlying FreeBSD Intel NIC driver), and the ending numbers might ring a bell: vtnet0 and vtnet1 are, respectively, the replacements for em0 and em1, and they achieve exactly the same goals as their physical counterparts, that is, to interconnect the virtual Routing Engines to the Junos node slicing forwarding plane and to attach the virtual instances to the master VLAN shared by all the B-SYS control components, such as the Routing Engine CPUs and the line card control CPUs. Moreover, we'll see that vtnet0 uses the cb0 10GE ports, while vtnet1 uses the cb1 ones.
Okay, you may have already noticed that each vtnet0/1 interface has two logical
units, namely 32763 and 32764, sharing the same IP address. Let’s take a closer
look:
root> show interfaces vtnet0.32763
Logical interface vtnet0.32763 (Index 3) (SNMP ifIndex 504)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.4001 ] Encapsulation: ENET2
----- SNIP -----
Destination: 128/2, Local: 190.0.1.1, Broadcast: 191.255.255.255
Addresses
Destination: 128/2, Local: 190.0.1.4, Broadcast: 191.255.255.255
----- SNIP -----
root> show interfaces vtnet1.32763
Logical interface vtnet1.32763 (Index 5) (SNMP ifIndex 507)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.4001 ] Encapsulation: ENET2
----- SNIP ------
Destination: 128/2, Local: 190.0.1.1, Broadcast: 191.255.255.255
Addresses
Destination: 128/2, Local: 190.0.1.4, Broadcast: 191.255.255.255
------ SNIP ------
root> show interfaces vtnet0.32764
Logical interface vtnet0.32764 (Index 4) (SNMP ifIndex 505)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.4011 ] Encapsulation: ENET2
-----SNIP-----
Destination: 128/2, Local: 190.0.1.1, Broadcast: 191.255.255.255
Addresses
Destination: 128/2, Local: 190.0.1.4, Broadcast: 191.255.255.255
----- SNIP ----
root> show interfaces vtnet1.32764
Logical interface vtnet1.32764 (Index 6) (SNMP ifIndex 508)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.4011 ] Encapsulation: ENET2
----- SNIP -----
Destination: 128/2, Local: 190.0.1.1, Broadcast: 191.255.255.255
Addresses
Destination: 128/2, Local: 190.0.1.4, Broadcast: 191.255.255.255
---- SNIP ----
Despite the fact that both logical interfaces have the same IP address, they are connected to different VLANs. As previously explained, VLAN 4001 is the B-SYS master VLAN, whereas VLAN 4011 is a VLAN dedicated to the VNF itself.
Indeed, despite the undeniable fact that the Junos node slicing architecture is close to that of a single-chassis MX router, there is one fundamental difference: while a standalone router has a one-to-one relationship between a pair of physical Routing Engines and the chassis they control, with Junos node slicing this is no longer true. At the time of this writing, up to twenty virtual Routing Engines can control up to ten physical chassis partitions, each composed of one or more MPCs. There must therefore be a mechanism providing the right isolation, so that a line card belonging to a certain GNF can join only the correct pair of virtual Routing Engines. As always, the simpler the better: a dedicated VLAN is created for each pair of VNFs.
NOTE On the B-SYS side, all the VLAN tagging and untagging operations are performed by the control board management switch; there is no VLAN awareness in any chassis component involved (B-SYS RE CPUs, line card control CPUs). On the external servers, instead, they are performed by the Junos network stack of the virtual Routing Engines.
There is a very simple algorithm behind the VLAN ID of each pair of GNF Routing Engines, where the GNF-ID comes into play again:
vRE VLAN = 4010 + GNF-ID
Indeed, in our case, as the GNF-ID is 1, the dedicated VLAN is 4011, as displayed by the outputs shown above.
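The formula is trivial to express in code; this little Python helper is only illustrative (the 1-10 range reflects the ten-GNF limit at the time of this writing):

```python
def gnf_vlan_id(gnf_id: int) -> int:
    """Dedicated VLAN for a GNF's pair of virtual Routing Engines: 4010 + GNF-ID."""
    if not 1 <= gnf_id <= 10:  # up to ten GNFs are supported today
        raise ValueError("GNF-ID must be between 1 and 10")
    return 4010 + gnf_id

print(gnf_vlan_id(1))  # 4011, matching the VLAN tag seen on vtnet0.32764 above
```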
NOTE These implementation details show how close the nature of the Junos node slicing feature is to what has already been implemented for two decades inside every Juniper Networks device, with changes only where absolutely necessary. It's important to understand that, despite its flexibility and how innovative it may look, it is still that beloved Juniper Networks router, just on steroids!
The addressing scheme always uses the 128/2 subnet. The easy way to distinguish a physical component from a virtual one is that physical components are numbered starting with 128, while virtual ones start with 190.
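The convention can be sketched as a small classifier; this is only an illustration of the observed scheme, which, as noted later, may change without warning:

```python
import ipaddress

def internal_component(addr: str) -> str:
    """Classify an address from the internal 128/2 scheme: physical B-SYS
    components are numbered from 128, virtual ones from 190 (observed
    convention only)."""
    ip = ipaddress.ip_address(addr)
    if ip not in ipaddress.ip_network("128.0.0.0/2"):
        return "not internal"
    return "virtual" if int(addr.split(".")[0]) == 190 else "physical"

print(internal_component("128.0.0.4"), internal_component("190.0.1.5"))
# physical virtual
```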
For instance, let’s examine the Address Resolution Protocol (ARP) cache on our
virtual RE0:
root> show arp vpn __juniper_private1__
MAC Address Address Name Interface Flags
02:00:00:00:00:04 128.0.0.1 bsys-master vtnet0.32763 none
02:00:00:00:00:04 128.0.0.4 bsys-re0 vtnet0.32763 none
02:01:00:00:00:05 128.0.0.5 bsys-re1 vtnet1.32763 none
02:01:00:00:00:05 128.0.0.6 bsys-backup vtnet1.32763 none
02:01:00:3e:01:05 190.0.1.5 190.0.1.5 vtnet1.32764 none
02:01:00:3e:01:05 190.0.1.6 190.0.1.6 vtnet1.32764 none
You can see that all the B-SYS REs are visible on the Master VLAN in the vRE0 ARP cache, while only the remote virtual RE1 is present on the dedicated VLAN. Let's use the same command but disable name resolution:
root> show arp vpn __juniper_private1__ no-resolve
MAC Address Address Interface Flags
02:00:00:00:00:04 128.0.0.1 vtnet0.32763 none
02:00:00:00:00:04 128.0.0.4 vtnet0.32763 none
02:01:00:00:00:05 128.0.0.5 vtnet1.32763 none
02:01:00:00:00:05 128.0.0.6 vtnet1.32763 none
02:01:00:3e:01:05 190.0.1.5 vtnet1.32764 none
02:01:00:3e:01:05 190.0.1.6 vtnet1.32764 none
Total entries: 6
NOTE It's perfectly normal that each Routing Engine inside an MX chassis has two IP addresses: they identify RE0 (.4) / RE1 (.5) and RE-MASTER (.1) / RE-BACKUP (.6). Indeed, the remote virtual RE1 carries exactly the same kind of addresses. This is more proof that all the underlying Junos machinery works exactly the same on an external virtual Routing Engine in a Junos node slicing environment.
vtnet2: This connects to the JDM bme2 interface in 'jdm_nv_ns' and is used to perform GNF liveness detection. Every time the show virtual-network-functions command is invoked on the JDM CLI, the status of the VM is checked through the libvirt APIs; if the returned value is "isActive", five ICMP echoes are triggered from the JDM to the GNF over this connection. If the GNF replies to the probes, the liveness status reported will be up; otherwise it will be down, and a more in-depth investigation will be needed to understand why.
WARNING The VLAN and addressing schemes explained here can change
without warning at any time during the node slicing feature development, as these
details are completely transparent and (almost) invisible to the end users. More-
over, there are also algorithms to create the MAC addresses used by each interface
of the VNF, as well as a kernel parameter passed during the virtual Routing Engine
boot sequence to correctly identify the nature of the VNF and other details, but
these are outside of the scope of this book, hence not covered.
Now that it's pretty clear how connectivity works on the virtual Routing Engine side, it's also important to double-check the other end of the connection. We know the virtual Routing Engines are KVM-powered VMs, so the correct place to investigate where each interface is plugged (virtually speaking, of course!) is the host operating system.
Taking a look at the Ubuntu Linux interfaces, you can see that some macvtap interfaces are now up and running in our setup.
Now, let’s check which macvtap interface is connected to each of them on the
Linux Host:
VTNET0:
root@jns-x86-0:~# ifconfig | grep 02:00:00:3e:01:04
macvtap0 Link encap:Ethernet HWaddr 02:00:00:3e:01:04
VTNET1:
root@jns-x86-0:~# ifconfig | grep 02:00:01:3e:01:04
macvtap1 Link encap:Ethernet HWaddr 02:00:01:3e:01:04
FXP0:
root@jns-x86-0:~# ifconfig | grep 02:ad:ec:d0:83:0a
macvtap2 Link encap:Ethernet HWaddr 02:ad:ec:d0:83:0a
Great findings! So our vtnet0, vtnet1, and fxp0 are connected to macvtap0, macvtap1, and macvtap2, respectively.
The last piece of the connectivity puzzle is finding which physical interface is connected to which macvtap. Again using the Linux bash shell, let's examine the active links in the kernel:
root@jns-x86-0:~# ip link list | grep macvtap
19: macvtap0@enp4s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_
fast state UNKNOWN mode DEFAULT group default qlen 500
20: macvtap1@enp4s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_
fast state UNKNOWN mode DEFAULT group default qlen 500
22: macvtap2@eno3: <BROADCAST,MULTICAST,UP,LOWER_
UP> mtu 1500 qdisc htb state UNKNOWN mode DEFAULT group default qlen 500
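If you want to automate this mapping check, the macvtap@parent pairs can be pulled out of the ip link output with a short script; the sample text is taken from the output above, while the parsing itself is a generic sketch:

```python
import re

# Sample lines as produced by 'ip link list' on the host (abbreviated).
sample = """19: macvtap0@enp4s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
20: macvtap1@enp4s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
22: macvtap2@eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500"""

# Each macvtap is stacked on a parent device, shown as macvtapN@parent.
parents = dict(re.findall(r"(macvtap\d+)@(\S+):", sample))
print(parents)  # {'macvtap0': 'enp4s0f0', 'macvtap1': 'enp4s0f1', 'macvtap2': 'eno3'}
```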
NOTE Both the enp4s0f0 and enp4s0f1 interfaces are configured in VEPA mode, hence all the traffic is forwarded to the physical ports and bridged by the control board management switch; no hairpinning through the virtual switching layer is performed.
A quick ping from the JDM to the GNF vtnet2 address (192.168.2.1) confirms this liveness path works:
--- 192.168.2.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms
As this is self-contained connectivity between the local virtual Routing Engine and
JDM, there is no need to create a macvtap interface to extend it outside of the local
device.
Now that everything should be quite clear, let’s illustrate the connectivity big pic-
ture as shown in Figure 3.1.
Let’s continue on our first GNF creation by adding its second building block, the
forwarding plane. The configuration takes place on the B-SYS this time. Our
EDGE-GNF will be composed of a single line card, namely an MX MPC5e-Q
(2x100GE + 4x10GE version) inserted in slot number 6.
NOTE At the time of this writing, the B-SYS partitioning granularity is at the slot
level. This means a whole line card can belong to one, and only one, GNF. There
are enhancements in the feature roadmap that may introduce, at least, a per-pic
(aka per-PFE) granularity in a future release.
chassis {
network-slices {
guest-network-functions {
gnf 1 {
fpcs 6;
}
}
}
}
{master}[edit]
magno@MX960-4-RE0#
gnf (1..10): This identifies the GNF number and must match the JDM virtual-network-functions ID, so that the data plane and the control plane components are configured in the same partition (aka GNF).
fpcs 6: This number identifies the slot (or slots, as the command accepts multiple values) where the GNF MPC is installed.
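The pairing rule between the B-SYS gnf number and the JDM VNF ID, plus the one-owner-per-slot constraint, can be sketched like this (illustrative code, not Juniper's implementation):

```python
def pair_slices(bsys_gnfs, jdm_vnfs):
    """bsys_gnfs: {gnf_id: [fpc slots]}; jdm_vnfs: {vnf_name: id}.
    Each FPC slot can belong to one, and only one, GNF."""
    owner = {}
    for gnf_id, slots in bsys_gnfs.items():
        for slot in slots:
            if slot in owner:
                raise ValueError(f"slot {slot} already belongs to GNF {owner[slot]}")
            owner[slot] = gnf_id
    # Matching IDs tie each VNF name to its data plane slots.
    return {name: bsys_gnfs.get(vnf_id, []) for name, vnf_id in jdm_vnfs.items()}

# EDGE-GNF (id 1) owns slot 6; a second GNF (id 2) could own slot 1.
print(pair_slices({1: [6], 2: [1]}, {"EDGE-GNF": 1, "CORE-GNF": 2}))
```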
Time to commit the configuration, but before doing this, let's activate CLI timestamping to observe what happens to the MPC.
{master}[edit]
magno@MX960-4-RE0#
Feb 14 15:48:39
{master}[edit]
magno@MX960-4-RE0# commit
Feb 14 15:49:25
re0:
configuration check succeeds
re1:
configuration check succeeds
commit complete
re0:
commit complete
{master}[edit]
magno@MX960-4-RE0#
{master}
magno@MX960-4-RE0# run show chassis fpc 6
Feb 14 15:49:30
Temp CPU Utilization (%) CPU Utilization (%) Memory Utilization (%)
Slot State (C) Total Interrupt 1min 5min 15min DRAM (MB) Heap Buffer GNF
6 Offline ---GNF initiated Restart--- 1
{master}
magno@MX960-4-RE0>
You can see that the line card was rebooted after the commit, the reason being "GNF initiated Restart". Perfect, everything behaved exactly as expected. The line card then comes online again, but this time it is, logically speaking, no longer part of the B-SYS.
To check the FPC status, let's use the EDGE-GNF CLI exactly as we would have done on a single-chassis MX Series router:
root> show chassis hardware
bsys-re0:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis JN122E1E0AFA MX960
Midplane REV 04 750-047853 ACRB9287 Enhanced MX960 Backplane
Fan Extender REV 02 710-018051 CABM7223 Extended Cable Manager
FPM Board REV 03 710-014974 JZ6991 Front Panel Display
PDM Rev 03 740-013110 QCS1743501N Power Distribution Module
PEM 0 Rev 04 740-034724 QCS171302048 PS 4.1kW; 200-240V AC in
PEM 1 Rev 07 740-027760 QCS1602N00R PS 4.1kW; 200-240V AC in
PEM 2 Rev 10 740-027760 QCS1710N0BB PS 4.1kW; 200-240V AC in
PEM 3 Rev 10 740-027760 QCS1710N0BJ PS 4.1kW; 200-240V AC in
Routing Engine 0 REV 01 740-051822 9009170093 RE-S-1800x4
Routing Engine 1 REV 01 740-051822 9009176340 RE-S-1800x4
CB 0 REV 01 750-055976 CACM2281 Enhanced MX SCB 2
Xcvr 0 REV 01 740-031980 163363A04142 SFP+-10G-SR
Xcvr 1 REV 01 740-021308 AS90PGH SFP+-10G-SR
CB 1 REV 02 750-055976 CADJ1802 Enhanced MX SCB 2
Xcvr 0 REV 01 740-031980 AHJ09HD SFP+-10G-SR
Xcvr 1 REV 01 740-021308 09T511103665 SFP+-10G-SR
FPC 6 REV 42 750-046005 CADM2676 MPC5E 3D Q 2CGE+4XGE
CPU REV 11 711-045719 CADK9910 RMPC PMB
PIC 0 BUILTIN BUILTIN 2X10GE SFPP OTN
Xcvr 0 REV 01 740-031980 B11B02985 SFP+-10G-SR
PIC 1 BUILTIN BUILTIN 1X100GE CFP2 OTN
PIC 2 BUILTIN BUILTIN 2X10GE SFPP OTN
PIC 3 BUILTIN BUILTIN 1X100GE CFP2 OTN
Fan Tray 0 REV 04 740-031521 ACAC1075 Enhanced Fan Tray
Fan Tray 1 REV 04 740-031521 ACAC0974 Enhanced Fan Tray
gnf1-re0:
--------------------------------------------------------------------------
Chassis GN5C5C634895 MX960-GNF
Routing Engine 0 RE-GNF-2100x8
Routing Engine 1 RE-GNF-2100x8
root>
There are some interesting things going on here – some obvious, some not:
The FPC6 is now connected to the virtual Routing Engines of EDGE-GNF; exactly as it happens on a standalone chassis, the line card receives all the needed information from the GNF Routing Engine.
The FPC has kept its slot number. This is a very important characteristic, because the Junos interface naming convention dictates that the first digit always represents the chassis slot number. Keeping this information consistent between the standalone and the Junos node slicing configuration makes the migration of existing devices a lot easier, as the current configuration can be reused just as it is; no renumbering is needed.
Even though the MPC in slot 6 is now logically part of the EDGE-GNF, it is still physically installed inside the MX chassis; therefore, the B-SYS keeps an active role in the line card life cycle, as all the physically-related functionalities are still handled by the B-SYS itself.
On the other end, EDGE-GNF administrators must be able to fully manage their own slice, so a new feature called command forwarding was implemented: some Junos commands executed on the GNF Routing Engines are forwarded to the B-SYS to retrieve the relevant outputs. The show chassis hardware command is a good example of this feature. Indeed, taking a closer look at the output, it is clearly divided into two main sections, the first labeled "bsys-re0:" and the second "gnf1-re0:". The former was retrieved from the B-SYS, while the latter came directly from the GNF Routing Engine.
Note that, besides all the common components, only the MPC in slot 6 is shown: the output is filtered so that only the MPCs belonging to the enquiring GNF appear.
There is the concept of a GNF chassis: it has a dedicated serial number and its own product description, which is useful for licensing purposes.
The Routing Engines are identified as RE-GNF: the virtual Routing Engines have a dedicated personality in Junos, passed on as a boot string during the boot phase, so they are correctly handled by the operating system.
The CPU speed and the number of cores are also reported in the Routing Engine model number, mimicking the same Juniper Networks naming convention used for the physical Routing Engines.
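The slot-number convention mentioned above (the first digit of a Junos interface name is the FPC slot) is easy to sketch; the helper below is illustrative only:

```python
def fpc_slot(ifname: str) -> int:
    """Extract the FPC slot from a Junos interface name of the form
    type-fpc/pic/port, e.g. 'xe-6/0/0' -> 6."""
    return int(ifname.split("-", 1)[1].split("/")[0])

print(fpc_slot("xe-6/0/0"))  # 6: a 10GE port on the MPC in slot 6
```

This is why a configuration written for the standalone chassis keeps working unchanged once the line card belongs to a GNF.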
Now that you know how the MPC is attached to its virtual control plane instances, as usual, let's check what happened on the B-SYS when the new configuration was committed. Once again, the most interesting things happen on the control board management switch. This is how it looks in terms of VLANs and ports before and after the commit:
Before configuring the GNF:
{master}
magno@MX960-4-RE0> test chassis ethernet-switch shell-cmd "vlan show"
VLAN 4011 is now configured on the management switch! Indeed, the same formula used on the control plane side applies to the data plane side as well. If you examine which ports were added to VLAN 4011, they are exactly the ones connecting the B-SYS Routing Engines (ge12-ge13), the external servers (xe), and the FPC belonging to this particular GNF, that is, the MPC in slot 6 (ge6). Moreover, the only ports where traffic is tagged are the 10GE ports connected to the external servers.
Sitting on VLAN 4011, on reboot the MPC in slot 6 will request its boot image from the virtual master Routing Engine running on the external server, which is the only one sitting on the same broadcast domain as the line card. And because on the B-SYS the VLAN operations happen only inside the control board management switch, no modifications are required on the line card ukernel/embedded OS at all. Once more, you can appreciate how non-invasive the Junos node slicing implementation is.
Congratulations! If you’ve been following along, the GNF is all set, great job!
85 Manage Node Slicing on JDM and B-SYS
A second GNF, CORE-GNF, has been created with GNF ID 2 and a single MPC (in this lab, an MPC7e in slot 1):
server1:
--------------------------------------------------------------------------
ID Name State Liveness
--------------------------------------------------------------------------------
2 CORE-GNF Running up
1 EDGE-GNF Running up
Now that two GNFs are running concurrently on the same servers, the outputs become more relevant, so let's look at useful commands to manage the Junos node slicing solution on both JDM and B-SYS.
NOTE Although it's a good habit to use the all-servers keyword to retrieve outputs from both JDMs at the same time, to avoid almost never-ending CLI outputs only the local ones are collected here.
VNF Uptime: 21:41.62
VNF CPU Utilization and Allocation Information
--------------------------------------------------------------------------------
GNF CPU-Id(s) Usage Qemu Pid
---------------------------------------- ----------------------- ----- --------
CORE-GNF 12,13,14,15 14.3% 18153
VNF Memory Information
----------------------------------------------------------------
Name Actual Resident
------------------------------------------------ ------ --------
CORE-GNF 32.0G 18.0G
VNF Storage Information
---------------------------------------------------------
Directory Size Used
------------------------------------------- ------ ------
/vm-primary/CORE-GNF 52.7G 5.6G
root@JDM-SERVER0>
All this information should be quite self-explanatory. As expected, from the connectivity standpoint the second GNF is exactly the same as the first one. In this case, the footprint is smaller compared to the EDGE-GNF, as only four cores and 32 GB of RAM are dedicated to this instance. There is a dedicated section that highlights which cores are assigned to the VM and their total occupation.
Chassis mx960
BSYS master RE 0
GNF uptime 2 hours, 57 minutes, 7 seconds
Slot 0:
Current state Master
Model NA
GNF host name NA
Slot 1:
Current state Backup
Model NA
GNF host name NA
{master}
magno@MX960-4-RE0>
Even if this seems straightforward and easy to understand, let's add some extra clarification:
FPCs assigned / FPCs online: These outputs refer to the slots where the FPCs assigned to the specific network slice are installed; they could be misread as the number of FPCs assigned and online, but that's not the case. For instance, EDGE-GNF has one MPC installed in slot 6, which is also online, and that's why "6" appears on both lines.
Routing Engine mastership status: All the Routing Engines involved in a Junos node slicing setup, both the physical B-SYS ones and the pair of virtual ones in each GNF, run their own mastership election. There are no restrictions on which one should be the master: it's perfectly supported for one GNF to have its master Routing Engine on JDM Server0 while another GNF has its master on JDM Server1. There are no mastership dependencies on the B-SYS, either.
root@EDGE-GNF-re0>
{master}
root@EDGE-GNF-re0>
Perfect! As expected, because no default election parameter was modified, re0 was elected master Routing Engine and re1 is the backup. Let's double-check to be sure:
root@JDM-SERVER1> request virtual-network-functions EDGE-GNF console
Connected to domain EDGE-GNF
Escape character is ^]
FreeBSD/amd64 (EDGE-GNF-re1) (ttyu0)
login: root
Password:
Last login: Wed Feb 13 14:47:43 on ttyu0
--- JUNOS 18.3R1.9 Kernel 64-bit JNPR-11.0-20180816.8630ec5_buil
root@EDGE-GNF-re1:~ #
root@EDGE-GNF-re1:~ # cli
{backup}
root@EDGE-GNF-re1>
That’s great. As an exercise, you should repeat the same configuration on the
CORE-GNF and check the end result.
NOTE From now on, it is possible to reach the GNFs directly using their fxp0 IP addresses, as shown here:
EDGE-GNF:
mmagnani-mbp:.ssh mmagnani$ ssh root@172.30.181.175
The authenticity of host ‘172.30.181.175 (172.30.181.175)’ can’t be established.
ECDSA key fingerprint is SHA256:jkbl3XiXbgsgrjGH0augTOAQDeoTvCmag0rM5wQUVms.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘172.30.181.175’ (ECDSA) to the list of known hosts.
Password:
Last login: Wed Feb 20 17:44:45 2019
--- JUNOS 18.3R1.9 Kernel 64-bit JNPR-11.0-20180816.8630ec5_buil
root@EDGE-GNF-re0:~ #
CORE-GNF:
mmagnani-mbp:.ssh mmagnani$ ssh root@172.30.181.178
The authenticity of host ‘172.30.181.178 (172.30.181.178)’ can’t be established.
ECDSA key fingerprint is SHA256:NEddDs9zKcKjaG3pRw3eyVjCjYoMAeZ0JJCTbKwEgSk.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘172.30.181.178’ (ECDSA) to the list of known hosts.
Password:
Last login: Wed Feb 20 17:43:17 2019
--- JUNOS 18.3R1.9 Kernel 64-bit JNPR-11.0-20180816.8630ec5_buil
root@CORE-GNF-re0:~ #
Among the most useful JDM commands are the ones under the show system stanza. Indeed, they provide very useful, well-presented information at a glance and help to double-check how the X86 server resources are doing.
NOTE Providing a full troubleshooting guide is outside the scope of this Day One book, so the purpose of this section is to show only the commands that provide a synthetic view of the resources used by Junos node slicing. For full troubleshooting purposes, it is recommended to use the show system visibility hierarchy, as it offers much more detailed output. And if you need to retrieve specific information about the X86 inventory, the show system inventory [hardware | software] hierarchy is the right place to look.
Physical Interfaces
----------------------------------------------------------------------------------------------------
Name Index MTU Hardware-address Rcvd Bytes Rcvd Packets Rcvd Error Rcvd Drop Trxd Bytes Trxd
Packets Trxd Error Trxd Drop Flags
-------- ----- ------- ----------------- ------------ ------------ ---------- --------- ------------
eno2 3 1500 ac:1f:6b:90:50:21 59969926653 245718527 0 626593 15060741 163447
0 0 0
enp4s0f1 7 1500 ac:1f:6b:8a:42:b7 1660347940 22419111 0 15356393 41695631056
38780720 0 0 0
enp4s0f0 6 1500 ac:1f:6b:8a:42:b6 4097335868 50600711 0 15356413 6077222086
21521066 0 0 0
eno3 4 1500 ac:1f:6b:90:50:22 11442711222 111847706 0 20070 1944 24
0 0 0
root@JDM-SERVER0>
VNF CPU Utilization and Allocation Information
---------------------------------------------------------------------------------------------
VNF CPU-Id(s) Usage Qemu Pid State
---------------------------------------- ----------------------- ------ -------- -----------
CORE-GNF 12,13,14,15 13.6% 20225 Running
EDGE-GNF 4,5,6,7,8,9,10,11 13.9% 21073 Running
Free CPUs : 16,17,18,19
Host Isolcpu(s): 2-19
Emulator Pins : 2-3
root@JDM-SERVER0>
Memory Usage Information
---------------------------
Total Used Free
------ ------ ------
Host: 125.9G 35.6G 81.4G
JDM : 0K 0K 0K
VNF Memory Information
----------------------------------------------------------------
Name Actual Resident
------------------------------------------------ ------ --------
CORE-GNF 32.0G 17.2G
EDGE-GNF 64.0G 17.5G
root@JDM-SERVER0>
Host Storage Information
--------------------------------------------------------------------------------
Device Size Used Available Use Mount Point
---------------------------------- ------ ------ --------- ---- ----------------
/dev/mapper/jatp700--3--vg-root 491G 8.2G 458G 2% /
/dev/sda1 720M 158M 525M 24% /boot
/dev/mapper/vm--primary--vg-vm--pr 1008G 12G 946G 2% /vm-primary
JDM Storage Information
--------------------------------------------------
Directories Used
------------------------------------------- ------
/vm-primary/ 12G
/var/third-party/ 76M
/var/jdm-usr/ 12K
/juniper 1.1G
VNF Storage Information
---------------------------------------------------------
Directories Size Used
94 Chapter 3: GNF Creation, Bootup, and Configuration
------------------------------------------- ------ ------
/vm-primary/CORE-GNF 52.7G 5.6G
/vm-primary/EDGE-GNF 52.7G 5.6G
root@JDM-SERVER0>
To retrieve the XML output from the CLI, the | display xml modifier is available as well:
root@JDM-SERVER0> show virtual-network-functions | display xml
<rpc-reply xmlns:junos="http://xml.juniper.net/junos/18.3R1/junos">
    <vnf-information xmlns="http://xml.juniper.net/junos/18.3R1/junos-jdmd" junos:style="brief">
        <vnf-instance>
            <id>2</id>
            <name>CORE-GNF</name>
            <state>Running</state>
            <liveliness>up</liveliness>
        </vnf-instance>
        <vnf-instance>
            <id>1</id>
            <name>EDGE-GNF</name>
            <state>Running</state>
            <liveliness>up</liveliness>
        </vnf-instance>
    </vnf-information>
95 About JDM Automation
    <cli>
        <banner></banner>
    </cli>
</rpc-reply>
root@JDM-SERVER0>
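This XML output is straightforward to consume programmatically. As a minimal sketch, here the sample reply above is parsed with Python's standard library; the namespace string is taken from the xmlns attribute in the output:

```python
import xml.etree.ElementTree as ET

# Sample reply, as returned by "show virtual-network-functions | display xml".
reply = """<rpc-reply xmlns:junos="http://xml.juniper.net/junos/18.3R1/junos">
  <vnf-information xmlns="http://xml.juniper.net/junos/18.3R1/junos-jdmd" junos:style="brief">
    <vnf-instance><id>2</id><name>CORE-GNF</name><state>Running</state><liveliness>up</liveliness></vnf-instance>
    <vnf-instance><id>1</id><name>EDGE-GNF</name><state>Running</state><liveliness>up</liveliness></vnf-instance>
  </vnf-information>
</rpc-reply>"""

# The vnf-information subtree lives in the junos-jdmd namespace.
NS = {"jdmd": "http://xml.juniper.net/junos/18.3R1/junos-jdmd"}

root = ET.fromstring(reply)
states = {vnf.findtext("jdmd:name", namespaces=NS): vnf.findtext("jdmd:state", namespaces=NS)
          for vnf in root.findall(".//jdmd:vnf-instance", NS)}
print(states)  # {'CORE-GNF': 'Running', 'EDGE-GNF': 'Running'}
```

The same approach works for any | display xml output, which is exactly what makes the XML view preferable to screen-scraping the tabular output shown earlier.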
Last but not least, another useful feature offered by the JDM CLI is the operational show system schema command, which retrieves the YANG models so they can be used for automation purposes. For instance, the YANG schema for jdm-rpc-virtual-network-functions is:
root@JDM-SERVER0> show system schema module jdm-rpc-virtual-network-functions
/*
* Copyright (c) 2019 Juniper Networks, Inc.
* All rights reserved.
*/
module jdm-rpc-virtual-network-functions {
    namespace "http://yang.juniper.net/jdm/rpc/virtual-network-functions";
    prefix virtual-network-functions;

    import junos-common-types {
        prefix jt;
    }

    organization "Juniper Networks, Inc.";
    contact "yang-support@juniper.net";
    description "Junos RPC YANG module for virtual-network-functions command(s)";

    revision 2018-01-01 {
        description "Junos: 18.3R1.9";
    }

    rpc get-virtual-network-functions {
        description "Show virtual network functions information";
        input {
            uses command-forwarding;
            leaf vnf-name {
                description "VNF name";
                type string {
                    length "1 .. 256";
                }
            }
-----SNIP ------
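Reading the module, the request payload a NETCONF client would emit for this RPC can be sketched with Python's standard library. The namespace is copied from the module's namespace statement above; the EDGE-GNF value is just this lab's GNF name used as an example:

```python
import xml.etree.ElementTree as ET

# Namespace copied from the YANG module's "namespace" statement.
NS = "http://yang.juniper.net/jdm/rpc/virtual-network-functions"

# Build <get-virtual-network-functions> with the optional vnf-name input leaf.
rpc = ET.Element("{%s}get-virtual-network-functions" % NS)
ET.SubElement(rpc, "{%s}vnf-name" % NS).text = "EDGE-GNF"

payload = ET.tostring(rpc).decode()
print(payload)
```

A NETCONF client library would wrap this payload in the usual <rpc> envelope and send it over the NETCONF session enabled in the next configuration snippet.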
And, exactly as on standard Junos, NETCONF, the natural-born transport companion for YANG, can be enabled under the system services stanza on JDM as well:
system {
    services {
        ssh {
            root-login allow;
        }
        netconf {
            ssh;
            rfc-compliant;
        }
    }
}
MORE? Please refer to the Junos Node Slicing Feature Guide available at https://www.juniper.net/documentation/en_US/junos/information-products/pathway-pages/junos-node-slicing/junos-node-slicing.pdf, which contains all the instructions to set up YANG-based Junos node slicing orchestration using an external SDN controller. To learn how to exploit the NETCONF/YANG tools that Junos OS offers, a great place to start is http://yang.juniper.net.
Chapter 4
GNF AF Interfaces
Two GNF instances are now running inside the same MX physical chassis. They
are completely separated and behave as single chassis routers, each of them
equipped with its own routing engines, line cards, and physical interfaces.
The next step is to lay the foundation of every network, that is... to interconnect different nodes! Of course, as the two partitions can be considered separate devices, the most obvious way to achieve this goal is to use a physical cross-connection between ports installed on MPCs belonging to different GNFs.
But this approach has major drawbacks:
If the connection must offer redundancy, more than one port is needed;
Interconnecting different partitions wastes revenue ports, increasing the economic impact of Junos node slicing;
The topology to interconnect different GNFs running inside the same MX chassis is a direct function of the maximum density achievable and the economics. If one more connection is needed for any reason, one more port per GNF, two optics, and a new cable will be needed;
The number of necessary connections will have to be engineered based on the total expected throughput needed by the solution, becoming an additional dimensioning factor of the solution;
And the internal chassis crossbar, which can interconnect line cards installed in different slots, goes completely unused.
The solution to all the aforementioned drawbacks is provided by the AF Interfaces. As the name implies, these new interfaces are a logical abstraction of the MX chassis internal fabric. Indeed, Junos OS has no way to handle the crossbar directly, but it can easily manage it if the fabric is modeled as a physical Junos Ethernet interface. This was the easiest way to create a very elegant and effective interconnection solution in a Junos node slicing installation.
The AF Interfaces are configured on the B-SYS as a point-to-point connection be-
tween two different GNFs. From a design perspective, they are the Junos node slic-
ing WAN or, in other words, core-facing interfaces. From a high-level logical view,
AF Interfaces can be depicted as shown in Figure 4.1.
AF Interfaces are numbered with a single digit in Junos, hence af0 to af9 interfaces can be configured. In this Day One book's lab, one interface, namely AF0, will be configured on each GNF to interconnect the EDGE-GNF and CORE-GNF instances.
NOTE The PFEs installed on the same line card are part of the same GNF. Therefore, they communicate through the fabric as in a standalone chassis, without the need for an AF Interface.
Let's configure the AF Interfaces and explore some more details about them once they are in action. There are two main phases to correctly set up the connectivity between two GNFs using AF Interfaces:
99 AF Interface Creation on the B-SYS
Phase 1: Create the AF Interfaces on the B-SYS so that they are available to the desired GNFs.
Phase 2: Configure each end of the AF Interfaces on both GNFs as if they were plain-vanilla Junos interfaces.
Under the local af name stanza (af0 for EDGE-GNF), the peer-gnf command identifies the remote end of the AF Interface using the remote GNF ID; for instance, the peer-gnf id 2 command means the remote end of the GNF-ID 1 AF0 interface is located on GNF ID 2; in our case, ID 1 = EDGE-GNF, and ID 2 = CORE-GNF;
As the last parameter, the remote AF Interface name must be explicitly provided;
Bottom line, the set chassis network-slices guest-network-functions gnf 1 af0 peer-gnf id 2 af0 command means, in readable human language, "GNF 1 AF0 interface connects to GNF 2 AF0 interface";
The configuration must be mirrored on the other end's GNF, so the set chassis network-slices guest-network-functions gnf 2 af0 peer-gnf id 1 af0 command implements the AF Interface reverse direction from GNF 2 to GNF 1;
The optional description statement allows the B-SYS administrator to attach a meaningful description to the AF Interface.
{master}
magno@EDGE-GNF-re0> show interfaces terse | match af0
Feb 21 00:28:13
{master}
magno@EDGE-GNF-re0>
CORE-GNF:
{master}
magno@CORE-GNF-re0> set cli timestamp
Feb 21 00:27:23
CLI timestamp set to: %b %d %T
{master}
magno@CORE-GNF-re0> show interfaces terse | match af0
Feb 21 00:28:20
{master}
magno@CORE-GNF-re0>
As expected, no AF0 interface is present on either GNF:
MX960-4:
{master}[edit]
magno@MX960-4-RE0# run set cli timestamp
Feb 21 00:28:28
CLI timestamp set to: %b %d %T
{master}[edit]
magno@MX960-4-RE0#
{master}[edit]
magno@MX960-4-RE0#
{master}
magno@EDGE-GNF-re0>
CORE-GNF:
{master}
magno@CORE-GNF-re0> show interfaces terse | match af0
Feb 21 00:29:16
af0 up up
{master}
magno@CORE-GNF-re0>
Amazing! Now the AF0 interface appears on both GNFs and it’s in up / up state so
our Phase 1 can be considered completed!
NOTE In Junos 18.3, AF Interfaces have feature parity with Junos 17.4.
Before starting to fiddle with AF Interfaces, it's very important to underline that they are designed around the core-facing use case; therefore, they do not have complete feature parity with a physical Ethernet interface. Let's examine the major caveats:
H-QOS is not supported;
Only two traffic priorities, low and high, are available on AF interfaces;
NOTE This last point deserves a more elaborate explanation, as it may be seen as a major flaw, but for the sake of brevity it's important to recall that AF Interfaces were introduced to act as simple and fast core-facing interfaces. Implementing unnecessary service termination functionality on an interface that is designed to forward traffic as fast as possible is a bad engineering decision and would go against the fundamental principle of the whole Junos node slicing design: simplicity.
This book's AF Interfaces will be configured with the following main features:
IFD encapsulation will be flexible-ethernet-services;
Unit 72 with VLAN-ID 72 is the core-facing IFL between the two GNFs;
Inet, inet6, iso, and MPLS families will be activated under unit 72, even though MPLS is not used;
ISIS will be the IGP of choice: a single Level 2 domain with point-to-point interface, 100Gb reference-bw, and wide-metrics;
Loopback interfaces are configured to demonstrate routing is working properly.
The IP addressing scheme used for the book’s lab is listed next in Table 4.1.
Okay, now let’s configure both ends and see if everything works as expected.
EDGE-GNF configuration:
set interfaces af0 flexible-vlan-tagging
set interfaces af0 mtu 9216
set interfaces af0 encapsulation flexible-ethernet-services
set interfaces af0 unit 72 vlan-id 72
set interfaces af0 unit 72 family inet address 72.0.0.1/30
set interfaces af0 unit 72 family iso
set interfaces af0 unit 72 family inet6 address fec0::71.0.0.1/126
set interfaces af0 unit 72 family mpls
set interfaces lo0 unit 0 family inet address 72.255.255.1/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0001.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.1/128
set protocols isis reference-bandwidth 100g
set protocols isis level 1 disable
set protocols isis level 2 wide-metrics-only
set protocols isis interface af0.72 point-to-point
set protocols isis interface lo0.0 passive
CORE-GNF configuration:
set interfaces af0 flexible-vlan-tagging
set interfaces af0 mtu 9216
set interfaces af0 encapsulation flexible-ethernet-services
set interfaces af0 unit 72 vlan-id 72
set interfaces af0 unit 72 family inet address 72.0.0.2/30
set interfaces af0 unit 72 family iso
set interfaces af0 unit 72 family inet6 address fec0::71.0.0.2/126
set interfaces af0 unit 72 family mpls
set interfaces lo0 unit 0 family inet address 72.255.255.2/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0002.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.2/128
set protocols isis reference-bandwidth 100g
set protocols isis level 1 disable
set protocols isis level 2 wide-metrics-only
set protocols isis interface af0.72 point-to-point
set protocols isis interface lo0.0 passive
set protocols layer2-control nonstop-bridging
After committing the configurations, let’s see if routing is properly set up:
{master}
magno@EDGE-GNF-re0> show isis adjacency
Interface System L State Hold (secs) SNPA
af0.72 CORE-GNF-re0 2 Up 20
{master}
magno@EDGE-GNF-re0>
Looks promising. The ISIS adjacency is Up. Let's take a quick look at the inet and inet6 routing tables to confirm the remote loopback addresses are correctly learned through ISIS and placed into the inet.0 and inet6.0 tables:
{master}
magno@EDGE-GNF-re0> show route protocol isis
inet.0: 8 destinations, 9 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
72.255.255.2/32 *[IS-IS/18] 00:37:26, metric 1
> to 72.0.0.2 via af0.72
iso.0: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
inet6.0: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
fec0::48ff:ff02/128*[IS-IS/18] 00:37:26, metric 1
> to fe80::22a:9900:48ce:a142 via af0.72
{master}
magno@EDGE-GNF-re0>
--- 72.255.255.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.682/1.761/1.888/0.076 ms
{master}
magno@EDGE-GNF-re0> ping inet6 fec0::72.255.255.2 source fec0::72.255.255.1 count 5
PING6(56=40+8+8 bytes) fec0::48ff:ff01 --> fec0::48ff:ff02
16 bytes from fec0::48ff:ff02, icmp_seq=0 hlim=64 time=2.495 ms
16 bytes from fec0::48ff:ff02, icmp_seq=1 hlim=64 time=12.118 ms
16 bytes from fec0::48ff:ff02, icmp_seq=2 hlim=64 time=1.842 ms
16 bytes from fec0::48ff:ff02, icmp_seq=3 hlim=64 time=1.747 ms
16 bytes from fec0::48ff:ff02, icmp_seq=4 hlim=64 time=1.748 ms
--- fec0::72.255.255.2 ping6 statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/std-dev = 1.747/3.990/12.118/4.074 ms
{master}
magno@EDGE-GNF-re0>
Now that connectivity is achieved, let's take a closer look at the AF Interface. Let's examine it on the EDGE-GNF side and highlight what is different from its physical Ethernet counterpart:
{master}
magno@EDGE-GNF-re0> show interfaces af0
Physical interface: af0, Enabled, Physical link is Up
Interface index: 156, SNMP ifIndex: 544
Type: Ethernet, Link-level type: Flexible-Ethernet, MTU: 9216, Speed: 480000mbps
Device flags : Present Running
Interface flags: Internal: 0x4000
Link type : Full-Duplex
Current address: 00:90:69:a4:14:1a, Hardware address: 00:90:69:a4:14:1a
Last flapped : 2019-02-21 00:28:28 UTC (14:18:53 ago)
Input rate : 384 bps (0 pps)
Output rate : 408 bps (0 pps)
Bandwidth : 480 Gbps
Peer GNF id : 2
Peer GNF Forwarding element(FE) view :
FPC slot:FE num FE Bandwidth(Gbps) Status Transmit Packets Transmit Bytes
1:0 240 Up 0 0
1:1 240 Up 0 0
Residual Transmit Statistics :
Packets : 0 Bytes : 0
Fabric Queue Statistics :
FPC slot:FE num High priority(pkts) Low priority(pkts)
1:0 0 0
1:1 0 0
FPC slot:FE num High priority(bytes) Low priority(bytes)
1:0 0 0
1:1 0 0
Residual Queue Statistics :
High priority(pkts) Low priority(pkts)
0 0
High priority(bytes) Low priority(bytes)
0 0
{master}
magno@EDGE-GNF-re0>
As you can see, most of the information retrieved by the show interfaces af0 command is exactly the same as what can be found on a real interface. There are differences, though, that are worth further explanation, so let's deep dive into them:
Type: Ethernet, Link-level type: Flexible-Ethernet, MTU: 9216, Speed: 480000mbps
As explained, the AF Interface behaves like an Ethernet interface, and the flexible-ethernet-services encapsulation is configured, as well as an MTU of 9216 bytes. So let's focus on the most relevant information of the pack: Speed: 480000mbps. The AF Interface speed is reported to Junos (kernel and rpd) as a 480Gbps Ethernet interface!
But where is this number coming from? Let’s dig deeper by examining some other
lines:
Bandwidth : 480 Gbps
Peer GNF id : 2
Peer GNF Forwarding element(FE) view :
FPC slot:FE num FE Bandwidth(Gbps) Status Transmit Packets Transmit Bytes
1:0 240 Up 0 0
1:1 240 Up 0 0
As shown by this output, the local AF Interface knows from the B-SYS configuration that the remote GNF is the one with ID = 2 and that it is composed of a line card hosted in slot number 1 with two PFEs, each of them capable of pushing up to 240Gbps of fabric bandwidth. Indeed, GNF ID = 2 is the CORE-GNF, and the slot 1 line card is an MPC7e, which has two EA chips capable of pushing 240Gbps each. By summing up each PFE's bandwidth capacity, the total AF available bandwidth towards GNF 2 is 480 Gbps.
Hey, let's stop for a moment. We know that EDGE-GNF is also composed of a single line card, but it is an MPC5eQ, different from an MPC7e. Indeed, it has two PFEs based on the previous generation of TRIO chipset, capable of supporting 120Gbps towards the fabric. So, from the CORE-GNF AF Interface's perspective, AF0 should have 120 + 120 = 240 Gbps forwarding capacity. Let's check if this understanding is correct!
From CORE-GNF, execute the show interfaces af0 CLI command:
{master}
magno@CORE-GNF-re0> show interfaces af0
Physical interface: af0, Enabled, Physical link is Up
Interface index: 190, SNMP ifIndex: 578
Type: Ethernet, Link-level type: Flexible-Ethernet, MTU: 9216, Speed: 240000mbps
Device flags : Present Running
Interface flags: Internal: 0x4000
Link type : Full-Duplex
Current address: 00:90:69:39:cc:1a, Hardware address: 00:90:69:39:cc:1a
Last flapped : 2019-02-21 00:28:28 UTC (13:28:55 ago)
Input rate : 344 bps (0 pps)
Output rate : 0 bps (0 pps)
Bandwidth : 240 Gbps
Peer GNF id : 1
Peer GNF Forwarding element(FE) view :
FPC slot:FE num FE Bandwidth(Gbps) Status Transmit Packets Transmit Bytes
6:0 120 Up 0 0
6:1 120 Up 0 0
Residual Transmit Statistics :
Packets : 0 Bytes : 0
Fabric Queue Statistics :
FPC slot:FE num High priority(pkts) Low priority(pkts)
6:0 0 0
6:1 0 0
FPC slot:FE num High priority(bytes) Low priority(bytes)
6:0 0 0
6:1 0 0
Residual Queue Statistics :
High priority(pkts) Low priority(pkts)
0 0
High priority(bytes) Low priority(bytes)
0 0
{master}
magno@CORE-GNF-re0>
Bingo! GNF ID 1, that is, the EDGE-GNF, has one line card installed in slot 6, made of two PFEs, each of them capable of 120 Gbps towards the fabric. Bottom line: the local AF Interface available bandwidth is simply the sum of the bandwidth available on all the line cards belonging to the peer GNF!
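The rule just stated can be sketched in a few lines of Python, using the per-PFE figures from the two FE views above:

```python
# Per-PFE fabric bandwidth (Gbps), as reported in each "Peer GNF
# Forwarding element(FE) view" output above.
core_gnf_pfes = {"1:0": 240, "1:1": 240}   # MPC7e in slot 1 (CORE-GNF)
edge_gnf_pfes = {"6:0": 120, "6:1": 120}   # MPC5eQ in slot 6 (EDGE-GNF)

# The local AF bandwidth is the sum of the *peer* GNF's PFE capacities.
edge_af0_bw = sum(core_gnf_pfes.values())  # as seen from EDGE-GNF
core_af0_bw = sum(edge_gnf_pfes.values())  # as seen from CORE-GNF

print(edge_af0_bw, core_af0_bw)  # 480 240
```

These two sums match the Speed: 480000mbps and Speed: 240000mbps values reported by show interfaces af0 on the two GNFs.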
To better explain how AF Interface BW works, assume we have a setup where one
GNF is composed of two MPC5e line cards, while the other is using a single
MPC7e. The sample setup is shown in Figure 4.2.
Now let’s assume that GNF 1 MPC5e in slot 1 is not available for any reason, as in
Figure 4.3.
Figure 4.3 GNF 2 AF Interface Bandwidth Availability Towards GNF 1 is Halved by MPC5e Slot 1 Failure
NOTE As you may notice, it is not unusual for the AF Interface bandwidth to be asymmetric between two GNFs. Indeed, as it depends on the sum of the PFE bandwidth installed on the peer's line cards, asymmetry is perfectly normal if the cards differ between the two peer GNFs. On the other hand, this is no different from what happens in a single chassis installation, because the cards and the fabric are exactly the same from a hardware perspective; the fact is just more evident because of the intrinsic nature of the AF Interface.
Bandwidth is also a dynamic parameter that can change during the GNF's life. If a remote GNF is composed of two MPC5e line cards, for instance, the local AF Interface will account for 480Gbps of bandwidth (2 MPC5e = 4 PFEs, each with 120Gbps fabric capacity) during normal operations. But what happens if one MPC5e disappears from the GNF for any reason? Until it comes back into service, the very same AF Interface will account for half the bandwidth, as only two PFEs out of four are available at that moment.
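As a quick numeric check of this behavior:

```python
# Remote GNF with two MPC5e line cards: four PFEs at 120 Gbps each.
remote_pfes = [120, 120, 120, 120]
print(sum(remote_pfes))   # 480 Gbps during normal operations

# One MPC5e goes offline: only two PFEs remain reachable.
remote_pfes = [120, 120]
print(sum(remote_pfes))   # 240 Gbps until the card comes back
```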
Before starting the lab, let's see how AF Interfaces can also provide a suitable transport infrastructure to terminate services. Let's configure a simple example to show how service termination can be achieved on AF Interfaces through the use of the pseudowire subscriber (PS) interfaces.
In this lab setup, three services must be delivered through AF Interfaces from EDGE-GNF to CORE-GNF. To demonstrate the flexibility of the configuration, each of them will have a different nature, ranging from a plain Layer 2 bridge domain to more sophisticated VPLS and EVPN services.
These services will be collected on EDGE-GNF on the line card interface connected to the IXIA Test Center and then, by using the pseudowire subscriber interface machinery, the overlay (service layer) and the underlay (transport layer) will be stitched to deliver the traffic to the other end of the AF Interface sitting on the CORE-GNF. The setup schematics are illustrated in Figure 4.4.
On the IXIA Test Center twenty end clients are emulated for each service.
The configuration works, in principle, by using the Pseudowire Subscriber Service (PS) interfaces to collect the traffic directly from the originating service instantiation and tunnel it through a transparent local cross-connect with the local AF Interface, delivering it to the other end of the AF Interface located on the remote GNF.
NOTE For the sake of brevity, the bridge domain PS interface configuration is not
shown in Figure 4.5, but it is exactly like the other ones only with VLAN-ID =
200.
For instance, the EVPN service needs all the interfaces to be configured as either ethernet-bridge or vlan-bridge, otherwise the Junos commit will fail. On the other hand, the VPLS instance mandates the access interface encapsulation to be configured as ethernet-vpls or vlan-vpls; therefore, all these encapsulations must be available on the pseudowire subscriber service.
Let's examine just the EVPN configuration, as all of the interfaces are configured in the same way concerning the pseudowire subscriber interface, and any other differences are just related to service-specific configuration statements.
The EVPN instance contains two interfaces: the physical access interface connected to the IXIA Test Center (interface xe-6/0/0.50) and the pseudowire subscriber service interface (ps2 unit 50). By configuring both into the same EVPN routing instance, the communication between them is automatically achieved using the EVPN machinery. As visible from the PS2 configuration snippet, this interface has just two units: the transport and the service. They are stitched together simply because they belong to the same underlying physical PS interface, so no further configuration is needed.
The final missing piece is the cross-connect between the pseudowire subscriber transport unit and the AF Interface on the EDGE-GNF. It's achieved using a locally switched pseudowire configured by leveraging the Junos L2Circuit functionality.
Once the local cross-connect is up and running, all the frames coming from the xe-6/0/0.50 access interface will be forwarded through the PS service logical interface and, in turn, to the PS transport unit, and then locally switched to the local end of the AF Interface.
NOTE The AF Interface configuration was not shown in the previous diagram
because of lack of space, so it is added here:
{master}[edit]
magno@EDGE-GNF-re0# show interfaces af0 unit 50
encapsulation vlan-ccc;
vlan-id 50;
{master}[edit]
magno@EDGE-GNF-re0#
As you can see, there's nothing exciting here, just a plain-vanilla VLAN-CCC encapsulation with vlan-id 50, as a normal L2circuit configuration requires.
Let's examine the EVPN service configuration, starting from the EDGE-GNF access interface xe-6/0/0 all the way to the last hop, that is, the CORE-GNF interface xe-1/0/0. Just one single service is explained, as all of the principles can be simply applied to all of the services.
Business as usual here, with the physical interface (IFD in Junos jargon) configured to provide the most flexible feature set available on the MX Series routers through the use of encapsulation flexible-ethernet-services (which allows you to mix Layer 2 and Layer 3 services on the same IFD), and flexible-vlan-tagging, which provides the ability to use single and dual VLAN tags on different units belonging to the same underlying IFD. Then, a single unit 50 with vlan-id 50 and encapsulation vlan-bridge. Remember, EVPN access interfaces must be configured as plain bridging interfaces.
Okay, now let’s move towards the service configuration stanza:
{master}[edit]
magno@EDGE-GNF-re0# show routing-instances EVPN-VLAN-50
instance-type evpn;
vlan-id 50;
interface xe-6/0/0.50;
interface ps2.50;
route-distinguisher 72.255.255.1:150;
vrf-target target:65203:50;
protocols {
    evpn;
}
{master}[edit]
magno@EDGE-GNF-re0#
The EVPN-VLAN-50 instance is, surprise surprise, an EVPN-type instance and contains two interfaces: the unit xe-6/0/0.50 just examined above, and a pseudowire service unit, namely ps2.50. Despite the local-only nature of this EVPN context (we are just stitching interfaces on a single node; no other EVPN PEs are present in the setup), the route-distinguisher and the route target must be configured, otherwise the Junos commit will fail. The vlan-id command enables the optional VLAN normalization feature, which is not needed in this setup since all the interfaces are configured with the same vlan-id, and protocols evpn enables the EVPN machinery. Very simple and straightforward so far, right?
Now to the tricky part: the pseudowire service interface and the locally switched cross-connection. First of all, to create the PS interfaces, the pseudowire-service command must be configured under the chassis stanza. Moreover, as these interfaces are anchored to logical tunnels, they must be configured using the tunnel-services statement. The resulting configuration:
{master}[edit]
magno@EDGE-GNF-re0# show chassis
--- SNIP ---
pseudowire-service {
    device-count 10;
}
fpc 6 {
    pic 0 {
        tunnel-services {
            bandwidth 40g;
        }
    }
    pic 1 {
        tunnel-services {
            bandwidth 40g;
        }
    }
}
--- SNIP ---
{master}[edit]
magno@EDGE-GNF-re0#
With these settings, up to ten PS interfaces, from ps0 to ps9, and two LT interfaces, namely lt-6/0/0 and lt-6/1/0, are created. The pic number means one logical tunnel is created for each PFE installed on the MPC5e-Q line card, each providing up to 40Gbps of bandwidth.
Now that the PS interfaces are available, let’s configure them. For the EVPN ser-
vice use the PS2 interface:
{master}[edit]
magno@EDGE-GNF-re0# show interfaces ps2
anchor-point {
    lt-6/0/0;
}
flexible-vlan-tagging;
mtu 9216;
encapsulation flexible-ethernet-services;
unit 0 {
    encapsulation ethernet-ccc;
}
unit 50 {
    encapsulation vlan-bridge;
    vlan-id 50;
}
{master}[edit]
magno@EDGE-GNF-re0#
Again, the first thing to notice is how simple the configuration is: business-as-usual encapsulation and tagging, and a single unit configured with a vlan-id and the right encapsulation to be suitable for a Layer 2 circuit service. As you may have already noticed, the MTU value for this interface is slightly higher than the one observed on the PS side, 9224 bytes versus 9216. We'll come back to this in a moment, as it's time to examine the last piece of the configuration, the locally switched l2circuit:
{master}[edit]
magno@EDGE-GNF-re0# show protocols l2circuit
local-switching {
    interface af0.50 {
        end-interface {
            interface ps2.0;
        }
        ignore-encapsulation-mismatch;
    }
}
{master}[edit]
magno@EDGE-GNF-re0#
This is maybe the trickiest piece of the setup: the local-switching statement defines the l2circuit as a self-contained cross-connect service inside the EDGE-GNF; bottom line, no remote end points are involved. With this configuration, the AF0.50 and the PS2.0 interfaces are stitched together in a point-to-point connection acting as a very simple pseudowire. Nevertheless, as usual, the devil hides in the details, so it's paramount to consider two mandatory conditions to successfully set up a so-called "Martini" circuit:
Encapsulation on both ends must be the same;
{master}[edit]
magno@CORE-GNF-re1#
There is not too much to explain here that you haven't already seen before. This time, unit 50 is an encapsulation vlan-bridge interface because it must be configured inside a bridge domain. The xe-1/0/0 access interface configuration is very similar to the one just examined:
{master}[edit]
magno@CORE-GNF-re1# show interfaces xe-1/0/0
flexible-vlan-tagging;
encapsulation flexible-ethernet-services;
unit 50 {
    encapsulation vlan-bridge;
    vlan-id 50;
}
{master}[edit]
magno@CORE-GNF-re1#
The most noticeable difference is the MTU value, set to the default 1518-byte value (Layer 2), because it's a customer-facing interface.
Both interfaces are then inserted into a bridge-domain:
{master}[edit]
magno@CORE-GNF-re1# show bridge-domains
VLAN-50 {
    vlan-id 50;
    interface xe-1/0/0.50;
    interface af0.50;
    routing-interface irb.50;
}
{master}[edit]
magno@CORE-GNF-re1#
Verification
Now that all the pieces of the service-termination-over-AF-Interfaces puzzle are in place, let's quickly test the services to see if they work as expected.
Ten hosts on each side of the setup are configured for each service (10 hosts x 2 sides x 3 services = 60 hosts) and will act as end users. They will generate three bidirectional 10,000 pps traffic streams (one for each service) running for 60 seconds. The expected result is to send 3 x 10,000 x 60 seconds = 1,800,000 packets in each direction (a total of 3,600,000 packets) that must be correctly received on the remote end of the stream.
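The expected counters can be double-checked with simple arithmetic:

```python
services = 3            # bridge domain, VPLS, EVPN
pps_per_stream = 10_000  # per-service rate
duration_s = 60          # test run length

per_direction = services * pps_per_stream * duration_s
total = per_direction * 2   # the streams are bidirectional
print(per_direction, total)  # 1800000 3600000
```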
The test plan looks like Figure 4.8.
Before starting the real test, let's see if the control plane is working by first looking at the l2circuit connections. There are three, one for each service, and all of them must be in the Up state, otherwise there would be traffic blackholing:
{master}
magno@EDGE-GNF-re0> show l2circuit connections
--- SNIP ---
Local Switch af0.100
Interface Type St Time last up # Up trans
af0.100(vc 0) loc Up Mar 5 15:59:16 2019 1
Local interface: af0.100, Status: Up, Encapsulation: VLAN
Local interface: ps1.0, Status: Up, Encapsulation: ETHERNET
Local Switch af0.200
Interface Type St Time last up # Up trans
af0.200(vc 0) loc Up Mar 5 16:01:49 2019 1
Local interface: af0.200, Status: Up, Encapsulation: VLAN
Local interface: ps0.0, Status: Up, Encapsulation: VLAN
Local Switch af0.50
Interface Type St Time last up # Up trans
af0.50(vc 0) loc Up Mar 6 00:41:16 2019 1
Local interface: af0.50, Status: Up, Encapsulation: VLAN
Local interface: ps2.0, Status: Up, Encapsulation: ETHERNET
{master}
magno@EDGE-GNF-re0>
All three l2circuit local connections are Up and ready to receive traffic.
Before starting the traffic, the Address Resolution Protocol (ARP) resolution process was triggered on the IXIA traffic generator, so let's take a look to see if all the MAC addresses are correctly learned in the different service instances:
{master}
magno@EDGE-GNF-re0> show vpls mac-table count
20 MAC address learned in routing instance VPLS-VLAN100 bridge domain __VPLS-VLAN100__
MAC address count per interface within routing instance:
Logical interface MAC count
ps1.100:100 10
xe-6/0/0.100:100 10
MAC address count per learn VLAN within routing instance:
Learn VLAN ID MAC count
100 20
0 MAC address learned in routing instance __juniper_private1__ bridge domain ____juniper_
private1____
{master}
magno@EDGE-GNF-re0> show evpn mac-table count
21 MAC address learned in routing instance EVPN-VLAN-50 bridge domain __EVPN-VLAN-50__
MAC address count per interface within routing instance:
Logical interface MAC count
ps2.50:50 11
xe-6/0/0.50:50 10
MAC address count per learn VLAN within routing instance:
Learn VLAN ID MAC count
50 21
{master}
magno@EDGE-GNF-re0> show bridge mac-table bridge-domain VLAN-200 count
20 MAC address learned in routing instance default-switch bridge domain VLAN-200
MAC address count per interface within routing instance:
Logical interface MAC count
ps0.200:200 10
xe-6/0/0.200:200 10
MAC address count per learn VLAN within routing instance:
Learn VLAN ID MAC count
200 20
{master}
magno@EDGE-GNF-re0>
Again, everything is fine, and all the MAC addresses are present in the relevant MAC tables: there are 20 in all, as 10 hosts are simulated, per service, on each access interface. But wait a second! The EVPN MAC table count output reads 21 MAC addresses! It's easily explained: raise your hand if you can remember the unused IRB interface configured inside the VLAN-50 bridge domain on the CORE-GNF! Indeed, 11 MAC addresses are learned through the pseudowire subscriber interface pointing to the remote GNF. So, now that there's confidence that everything in the lab should work, it's time to start the traffic.
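The MAC accounting above can be reproduced with a few lines of arithmetic. A quick sanity-check sketch (not from the book; the service names and the extra IRB MAC come from the outputs above):

```python
# Expected MAC counts per service: 10 simulated hosts per access interface,
# plus one extra MAC for the IRB interface configured in the EVPN VLAN-50
# bridge domain on the CORE-GNF.
HOSTS_PER_INTERFACE = 10
SERVICES = {
    "VPLS-VLAN100": {"extra_macs": 0},  # hosts on ps1.100 + xe-6/0/0.100
    "EVPN-VLAN-50": {"extra_macs": 1},  # +1 MAC learned from the remote IRB
    "BD VLAN-200":  {"extra_macs": 0},  # hosts on ps0.200 + xe-6/0/0.200
}

for name, svc in SERVICES.items():
    expected = 2 * HOSTS_PER_INTERFACE + svc["extra_macs"]
    print(f"{name}: expecting {expected} MAC addresses")
```

The computed values, 20, 21, and 20, match the three MAC table counts shown above.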
During the traffic run, we'll examine all the involved interfaces, from the xe-6/0/0 access interface on the EDGE-GNF to the CORE-GNF xe-1/0/0. As the traffic is symmetrical, only one direction is shown (the counters should show the same values in both input and output directions). The traffic is transmitted at 30,000 pps on all the involved interfaces.
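As a rough cross-check of the rate outputs that follow, the bps and pps counters are related by the frame size on the wire. A minimal sketch, assuming the roughly 508-byte frames implied by 121,920,120 bps at 30,000 pps:

```python
# Hypothetical helper relating the pps and bps values printed by
# "show interfaces ... | match rate"; 508 bytes is the approximate
# frame size implied by the observed counters.
FRAME_BYTES = 508

def pps_to_bps(pps: int, frame_bytes: int = FRAME_BYTES) -> int:
    return pps * frame_bytes * 8

print(pps_to_bps(30_000))  # aggregate rate, close to the observed 121,920,120 bps
print(pps_to_bps(10_000))  # per-service rate, close to the observed 40,640,056 bps
```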
EDGE-GNF:
{master}
magno@EDGE-GNF-re0> show interfaces xe-6/0/0 | match rate
Mar 06 13:52:39
Input rate : 121920120 bps (30000 pps)
Output rate : 121916040 bps (29998 pps)
{master}
magno@EDGE-GNF-re0> show interfaces ps0 | match rate
Input rate : 40640056 bps (10000 pps)
Output rate : 40640056 bps (10000 pps)
{master}
magno@EDGE-GNF-re0> show interfaces ps1 | match rate
Input rate : 40640120 bps (10000 pps)
Output rate : 40640120 bps (10000 pps)
{master}
magno@EDGE-GNF-re0> show interfaces ps2 | match rate
Input rate : 40638000 bps (9999 pps)
Output rate : 40640040 bps (10000 pps)
{master}
magno@EDGE-GNF-re0> show interfaces af0 | match rate
Input rate : 121918016 bps (29999 pps)
Output rate : 121920048 bps (30000 pps)
{master}
magno@EDGE-GNF-re0>
You can see here that each PS interface carries 10,000 pps in both directions, and the aggregated value of 30,000 pps is seen in input and output on both the xe-6/0/0 and af0 interfaces. To confirm that the traffic is flowing as expected, let's also check the CORE-GNF:
{master}
magno@CORE-GNF-re1> show interfaces af0 | match rate
Mar 06 13:53:12
Input rate : 121933480 bps (30000 pps)
Output rate : 121929312 bps (30000 pps)
{master}
magno@CORE-GNF-re1> show interfaces xe-1/0/0 | match rate
Input rate : 121925768 bps (30000 pps)
Output rate : 121923736 bps (30000 pps)
{master}
magno@CORE-GNF-re1>
These results look pretty promising. Indeed, on both the af0 and xe-1/0/0 interfaces you can see that 30,000 packets per second are forwarded in both input and output directions. Let's wait until the traffic stops and check the aggregate counters to confirm that all 1,800,000 packets (in each direction) completed their end-to-end journey. From the EDGE-GNF:
{master}
magno@EDGE-GNF-re0> show interfaces xe-6/0/0
Physical interface: xe-6/0/0, Enabled, Physical link is Up
Interface index: 171, SNMP ifIndex: 538
Link-level type: Flexible-Ethernet, MTU: 1522, MRU: 1530, LAN-
PHY mode, Speed: 10Gbps, BPDU Error: None, Loop Detect PDU Error: None, MAC-
REWRITE Error: None, Loopback: None,
--- SNIP ---
Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.50 ] Encapsulation: VLAN-Bridge
Input packets : 600000
Output packets: 600000
Protocol bridge, MTU: 1522
Flags: Is-Primary
Logical interface xe-6/0/0.100 (Index 359) (SNMP ifIndex 548)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.100 ] Encapsulation: VLAN-VPLS
Input packets : 600000
Output packets: 600000
Protocol vpls, MTU: 1522
Logical interface xe-6/0/0.200 (Index 349) (SNMP ifIndex 551)
Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.200 ] Encapsulation: VLAN-Bridge
Tenant Name: (null)
Input packets : 600000
Output packets: 600000
Protocol bridge, MTU: 1522
Flags: Is-Primary
--- SNIP ---
{master}
magno@EDGE-GNF-re0>
Each unit configured on the xe-6/0/0 access interface has sent and received 600,000 packets in each direction, which sums to the expected 1,800,000.
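The per-unit totals are consistent with the configured rates. A quick arithmetic sketch (the 60-second run duration is an assumption, derived from 1,800,000 total packets at 30,000 pps):

```python
# Assumed test parameters derived from the counters above.
PPS_PER_SERVICE = 10_000  # rate observed on each PS interface
NUM_SERVICES = 3          # VLANs 50, 100, and 200
DURATION_S = 60           # implied by 1,800,000 total packets at 30,000 pps

per_unit_packets = PPS_PER_SERVICE * DURATION_S
total_packets = per_unit_packets * NUM_SERVICES
print(per_unit_packets, total_packets)  # 600000 1800000
```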
And now the pseudowire subscriber PS interfaces:
{master}
magno@EDGE-GNF-re0> show interfaces ps0
Physical interface: ps0, Enabled, Physical link is Up
--- SNIP ---
Logical interface ps0.0 (Index 338) (SNMP ifIndex 564)
Flags: Up Point-To-Point 0x4004000 VLAN-Tag [ 0x8100.200 ] Encapsulation: VLAN-CCC
Input packets : 600000
Output packets: 600000
Protocol ccc, MTU: 9216
Logical interface ps0.200 (Index 330) (SNMP ifIndex 593)
Flags: Up 0x20004000 VLAN-Tag [ 0x8100.200 ] Encapsulation: VLAN-Bridge
Tenant Name: (null)
Input packets : 600000
Output packets: 600000
Protocol bridge, MTU: 9216
---- SNIP ---
{master}
magno@EDGE-GNF-re0> show interfaces ps1
Physical interface: ps1, Enabled, Physical link is Up
--- SNIP ---
Logical interface ps1.0 (Index 341) (SNMP ifIndex 578)
Flags: Up Point-To-Point 0x4004000 Encapsulation: Ethernet-CCC
Input packets : 600000
Output packets: 600000
Protocol ccc, MTU: 9216
Logical interface ps1.100 (Index 352) (SNMP ifIndex 594)
Flags: Up 0x4000 VLAN-Tag [ 0x8100.100 ] Encapsulation: VLAN-VPLS
Input packets : 600000
Output packets: 600000
Protocol vpls, MTU: 9216
Flags: Is-Primary
--- SNIP ---
{master}
magno@EDGE-GNF-re0> show interfaces ps2
Physical interface: ps2, Enabled, Physical link is Up
--- SNIP ---
Logical interface ps2.0 (Index 355) (SNMP ifIndex 587)
Flags: Up Point-To-Point 0x4004000 Encapsulation: Ethernet-CCC
Input packets : 600000
Output packets: 600000
Protocol ccc, MTU: 9216
Logical interface ps2.50 (Index 331) (SNMP ifIndex 590)
Flags: Up 0x20004000 VLAN-Tag [ 0x8100.50 ] Encapsulation: VLAN-Bridge
Input packets : 600000
Output packets: 600000
Protocol bridge, MTU: 9216
--- SNIP ---
{master}
magno@EDGE-GNF-re0>
The PS interface counters look good, too! The transport and service units on each pseudowire subscriber interface accounted for the magic number of 600,000 packets in each direction. So, the last step is a double-check of the last leg, the AF Interface:
{master}
magno@EDGE-GNF-re0> show interfaces af0
Physical interface: af0, Enabled, Physical link is Up
---- SNIP ----
FPC slot:FE num FE Bandwidth(Gbps) Status Transmit Packets Transmit Bytes
1:0 240 Up 901256 457838048
1:1 240 Up 898744 456561952
Residual Transmit Statistics :
Packets : 0 Bytes : 0
Fabric Queue Statistics :
FPC slot:FE num High priority(pkts) Low priority(pkts)
1:0 0 901256
1:1 0 898744
FPC slot:FE num High priority(bytes) Low priority(bytes)
1:0 0 457838048
1:1 0 456561952
Residual Queue Statistics :
High priority(pkts) Low priority(pkts)
0 0
High priority(bytes) Low priority(bytes)
0 0
Logical interface af0.50 (Index 356) (SNMP ifIndex 592)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.50 ] Encapsulation: VLAN-CCC
Input packets : 600000
Output packets: 600000
Protocol ccc, MTU: 9224
Logical interface af0.100 (Index 333) (SNMP ifIndex 549)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.100 ] Encapsulation: VLAN-CCC
Input packets : 600000
Output packets: 600000
Protocol ccc, MTU: 9224
Flags: Is-Primary
Logical interface af0.200 (Index 351) (SNMP ifIndex 550)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.200 ] Encapsulation: VLAN-CCC
Input packets : 600000
Output packets: 600000
Protocol ccc, MTU: 9216
Flags: User-MTU
--- SNIP ---
{master}
magno@EDGE-GNF-re0>
Nothing strange on this interface, either. The 600,000 magic number shows up again on all the AF units, so everything is absolutely on track.
You may have spotted some interesting output from the show interfaces af0 command, in the Fabric Queue Statistics section. As explained, the AF Interface is a logical abstraction of the underlying fabric, hence packets should be sprayed evenly among the different PFEs. At the same time, it is also a Junos interface, therefore the load balancing algorithm is applied to share packets among the different PFEs. It is worth noting how well the load balancing algorithm works: the total traffic is shared with a 50.0698% / 49.9302% ratio between PFE 0 and PFE 1, which is an amazing result, especially considering the relatively low number of flows used during the test.
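The split can be reproduced directly from the Transmit Packets counters in the Forwarding Element view. A minimal sketch using the EDGE-GNF af0 figures above:

```python
# Per-PFE transmit counters from the af0 output on the EDGE-GNF.
pfe_tx_packets = {"1:0": 901_256, "1:1": 898_744}

total = sum(pfe_tx_packets.values())  # 1,800,000 packets in all
for fe, pkts in pfe_tx_packets.items():
    print(f"PFE {fe}: {100 * pkts / total:.4f}%")  # 50.0698% / 49.9302%
```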
A quick check on the CORE-GNF interfaces confirms the good results already
observed:
{master}
magno@CORE-GNF-re1> show interfaces xe-1/0/0
Physical interface: xe-1/0/0, Enabled, Physical link is Up
--- SNIP ---
Logical interface xe-1/0/0.50 (Index 346) (SNMP ifIndex 931)
Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.50 ] Encapsulation: VLAN-Bridge
Tenant Name: (null)
Input packets : 600000
Output packets: 600000
Protocol bridge, MTU: 1522
Logical interface xe-1/0/0.100 (Index 389) (SNMP ifIndex 613)
Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.100 ] Encapsulation: VLAN-Bridge
Tenant Name: (null)
Input packets : 600000
Output packets: 600000
Protocol bridge, MTU: 1522
Logical interface xe-1/0/0.200 (Index 390) (SNMP ifIndex 615)
Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.200 ] Encapsulation: VLAN-Bridge
Tenant Name: (null)
Input packets : 600000
Output packets: 600000
Protocol bridge, MTU: 1522
---- SNIP ----
{master}
magno@CORE-GNF-re1> show interfaces af0
Physical interface: af0, Enabled, Physical link is Up
---- SNIP ----
Peer GNF Forwarding element(FE) view :
FPC slot:FE num FE Bandwidth(Gbps) Status Transmit Packets Transmit Bytes
6:0 120 Up 900714 457561802
6:1 120 Up 899288 456838304
Residual Transmit Statistics :
Packets : 0 Bytes : 0
Fabric Queue Statistics :
FPC slot:FE num High priority(pkts) Low priority(pkts)
6:0 0 900714
6:1 0 899288
FPC slot:FE num High priority(bytes) Low priority(bytes)
6:0 0 457561802
6:1 0 456838304
Residual Queue Statistics :
High priority(pkts) Low priority(pkts)
0 0
High priority(bytes) Low priority(bytes)
0 0
Logical interface af0.50 (Index 345) (SNMP ifIndex 930)
Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.50 ] Encapsulation: VLAN-Bridge
Tenant Name: (null)
Input packets : 600000
Output packets: 600000
Protocol bridge, MTU: 9216
Logical interface af0.100 (Index 341) (SNMP ifIndex 608)
Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.100 ] Encapsulation: VLAN-Bridge
Tenant Name: (null)
Input packets : 600000
Output packets: 600000
Protocol bridge, MTU: 9216
Flags: Is-Primary
Logical interface af0.200 (Index 342) (SNMP ifIndex 607)
Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.200 ] Encapsulation: VLAN-Bridge
Tenant Name: (null)
Input packets : 600000
Output packets: 600000
Protocol bridge, MTU: 9216
---- SNIP ----
{master}
magno@CORE-GNF-re1>
These CORE-GNF xe-1/0/0 and af0 interfaces look good, too. All 600,000 packets were forwarded in both directions over all three configured units. Also in this case, it's interesting to notice how well the load balancing algorithm worked on the AF Interface, where the split ratio between the two PFEs was 50.0396% / 49.9604%. Impressive!
So far, all the traffic generated by the IXIA Test Center was correctly received and forwarded, leaving only the counters on the traffic generator itself to be verified, as shown in Figure 4.9.
And bingo! The IXIA Test Center certifies that all the traffic was sent and received correctly, confirming the test was successful!
This use case is a perfect fit for scenarios where the best service delivery point is located over the abstract fabric interface. Even though AF Interfaces are considered simple and fast core-facing interfaces, it is still possible to leverage them as underlay transport for traffic originated by services terminated on the dedicated subscriber service infrastructure. In this way, it is perfectly possible to use the bandwidth offered by the AF Interfaces without wasting revenue ports. At the same time, these high-speed fabric-based interfaces stay fast, reliable, and simple, fulfilling their main purpose: to provide core connections between different GNFs in the most flexible and efficient way possible.
Chapter 5
Lab It! EDGE and CORE Testing
Lab Setup
First of all, let's take a look at Figure 5.1 for a logical view of the lab so far, where basic configurations, such as the AF Interface point-to-point connection, loopback addresses, and ISIS routing, are already up and running on both GNFs.
The two GNFs have the same features, scaling, look, and feel as a standalone MX Series router, therefore anything that can run on a single device can run on this setup! But for our purposes, one EDGE and one CORE application will be configured on the two GNFs, in particular:
The EDGE-GNF will provide very basic broadband edge functionality for 64,000 subscribers.
The CORE-GNF will act as a BGP peering router to provide Internet access to the EDGE BNG.
Leveraging the IXIA Test Center, the lab configuration will showcase the BBE and BGP peering scenarios shown in Figure 5.2.
Figure 5.2 Lab Setup Running BBE and BGP Peering Scenarios
NOTE Full configurations for both GNFs, BASE-SYS, and IXIA Test Center are
provided in the Appendix.
Some screenshots are provided inline within the text to document the results achieved. Let's check the routing infrastructure first, starting by inspecting the ISIS adjacency between the EDGE and CORE GNFs and verifying that the loopback addresses are correctly learned:
{master}
magno@EDGE-GNF-re0> show isis adjacency
Interface System L State Hold (secs) SNPA
af0.72 CORE-GNF-re1 2 Up 19
{master}
magno@EDGE-GNF-re0> show route protocol isis table inet.0
inet.0: 13 destinations, 14 routes (13 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
72.255.255.2/32 *[IS-IS/18] 2d 15:57:52, metric 1
> to 72.0.0.2 via af0.72
{master}
magno@EDGE-GNF-re0> ping 72.255.255.2 source 72.255.255.1 count 1
PING 72.255.255.2 (72.255.255.2): 56 data bytes
64 bytes from 72.255.255.2: icmp_seq=0 ttl=64 time=1.964 ms
--- 72.255.255.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.964/1.964/1.964/0.000 ms
{master}
magno@EDGE-GNF-re0>
{master}
magno@EDGE-GNF-re0> show route receive-protocol bgp 72.255.255.2 table inet.0
inet.0: 13 destinations, 14 routes (13 active, 0 holddown, 0 hidden)
Prefix Nexthop MED Lclpref AS path
* 0.0.0.0/0 72.255.255.2 100 65400 I
{master}
magno@EDGE-GNF-re0> show route advertising-protocol bgp 72.255.255.2 table inet.0
inet.0: 13 destinations, 14 routes (13 active, 0 holddown, 0 hidden)
Prefix Nexthop MED Lclpref AS path
* 100.100.0.0/16 Self 100 I
{master}
magno@EDGE-GNF-re0> show route protocol bgp table inet.0
inet.0: 13 destinations, 14 routes (13 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
0.0.0.0/0 *[BGP/170] 00:22:04, localpref 100, from 72.255.255.2
AS path: 65400 I, validation-state: unverified
> to 72.0.0.2 via af0.72
{master}
magno@EDGE-GNF-re0>
{master}
magno@CORE-GNF-re1> show route protocol bgp table inet.0 100.100.0.0/16 exact
inet.0: 100217 destinations, 100218 routes (100217 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
100.100.0.0/16 *[BGP/170] 00:25:55, localpref 100, from 72.255.255.1
AS path: I, validation-state: unverified
> to 72.0.0.1 via af0.72
{master}
magno@CORE-GNF-re1>
Everything is just as expected. The iBGP session is established between the two GNFs; the default route is advertised by the CORE-GNF and active in the EDGE-GNF, and the broadband edge subscriber address pool is advertised the other way around and is active in the CORE-GNF routing table. IP reachability between the simulated Internet and the edge is thus achieved. The AF Interface is providing the underlay connection between the two separated functions, as desired.
NOTE You may have noticed that RE0 holds the mastership on the EDGE-GNF while on the CORE-GNF it is held by RE1. This was done on purpose, to further underline that there are no technical issues or constraints related to the Routing Engine mastership status.
Now let's focus our attention on the core side and check that all 100 eBGP peering sessions are established and all 100,000 expected routes are active in the CORE-GNF routing table:
{master}
magno@CORE-GNF-re1> show bgp summary | match Establ | except 65203 | count
Count: 100 lines
{master}
magno@CORE-GNF-re1> show route summary | match BGP:
BGP: 100001 routes, 100001 active
{master}
magno@CORE-GNF-re1> show route advertising-protocol bgp 99.99.99.2 extensive
inet.0: 100216 destinations, 100217 routes (100216 active, 0 holddown, 0 hidden)
* 100.100.0.0/16 (1 entry, 1 announced)
BGP group eBGP type External
Nexthop: Self
AS path: [65203] I
{master}
magno@CORE-GNF-re1> show route advertising-protocol bgp 99.99.99.6 extensive
inet.0: 100216 destinations, 100217 routes (100216 active, 0 holddown, 0 hidden)
* 100.100.0.0/16 (1 entry, 1 announced)
BGP group eBGP type External
Nexthop: Self
AS path: [65203] I
{master}
magno@CORE-GNF-re1>
Perfect! It's just as desired! One hundred established BGP sessions show up, and all 100,001 BGP routes are learned and active in the CORE-GNF RIB.
NOTE Don't forget that the CORE-GNF is also receiving one iBGP route from the EDGE-GNF, hence the total of BGP learned routes is 100,001, as the command output doesn't discriminate between external (100,000) and internal (1) BGP routes!
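The route accounting can be summarized in a couple of lines. A quick sketch based only on the totals shown in the outputs above:

```python
EBGP_PEERS = 100
EBGP_ROUTES_TOTAL = 100_000  # advertised by the 100 simulated peers
IBGP_ROUTES = 1              # the 100.100.0.0/16 pool prefix from the EDGE-GNF

total_bgp_routes = EBGP_ROUTES_TOTAL + IBGP_ROUTES
print(total_bgp_routes)  # 100001, matching "show route summary | match BGP:"
```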
Subscribers by State
Active: 128000
Total: 128000
Subscribers by Client Type
DHCP: 64000
VLAN: 64000
Total: 128000
{master}
magno@EDGE-GNF-re0> show dhcp relay binding | match BOUND | count
Count: 64000 lines
{master}
magno@EDGE-GNF-re0> show route protocol access-internal | match Access-internal | count
Count: 64000 lines
{master}
magno@EDGE-GNF-re0> show route 100.100.34.33
inet.0: 64013 destinations, 64014 routes (64013 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
100.100.34.33/32 *[Access-internal/12] 00:04:37
Private unicast
{master}
magno@EDGE-GNF-re0> ping 100.100.34.33 count 1
PING 100.100.34.33 (100.100.34.33): 56 data bytes
64 bytes from 100.100.34.33: icmp_seq=0 ttl=64 time=1.769 ms
--- 100.100.34.33 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.769/1.769/1.769/0.000 ms
{master}
magno@EDGE-GNF-re0>
NOTE With the C-VLAN access model with auto-sensed VLANs, one subscriber is accounted for the dynamic Layer 2 interface and one for the actual DHCP subscriber.
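In other words, each broadband subscriber contributes two entries to the subscriber summary. A one-line sketch of the accounting:

```python
DHCP_SUBSCRIBERS = 64_000
SESSIONS_PER_SUBSCRIBER = 2  # one auto-sensed VLAN entry + one DHCP entry

print(DHCP_SUBSCRIBERS * SESSIONS_PER_SUBSCRIBER)  # 128000 active subscribers
```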
It must be the lab's lucky day, because everything looks just fine: all the BBE subscribers are connected to the EDGE-GNF, the DHCP relay bindings are all in the "BOUND" state, all the access-internal routes are active in the RIB, and, checking a random subscriber, BNG-to-subscriber connectivity is up and running.
So, as a final test, let's configure some traffic streams on the IXIA Test Center and double-check that the connectivity between the subscribers and the emulated Internet is working properly.
The IXIA Test Center is configured to send 512-byte packets at 100,000 pps for 60 seconds in both directions; let's check the final output in Figure 5.3.
The tester has correctly sent and received 6,000,000 frames (100,000 pps for 60 seconds), so the end-to-end connectivity is just fine. Indeed, double-checking on the GNFs themselves, other confirmations are easily spotted.
CORE-GNF:
{master}
magno@CORE-GNF-re1> show interfaces xe-1/0/0 | match rate
Mar 04 15:56:04
Input rate : 392011008 bps (99994 pps)
Output rate : 388810904 bps (99997 pps)
{master}
magno@CORE-GNF-re1> show interfaces af0 |match rate
Mar 04 15:56:07
Input rate : 388817224 bps (100004 pps)
Output rate : 392013128 bps (100003 pps)
{master}
magno@CORE-GNF-re1>
As shown by this output, the 100,000 pps are received from the simulated Internet hosts on the IXIA and forwarded through the AF Interface. The fact that 100,000 packets per second are also present in the reverse direction already provides a good indication that everything is working just fine.
Let's continue our packet tracking activity on the EDGE-GNF:
{master}
magno@EDGE-GNF-re0> show interfaces af0 | match rate
Mar 04 15:56:11
Input rate : 392000776 bps (100000 pps)
Output rate : 388800792 bps (100000 pps)
{master}
magno@EDGE-GNF-re0> show interfaces xe-6/0/0 | match rate
Mar 04 15:56:13
Input rate : 388800384 bps (100000 pps)
Output rate : 392002352 bps (100000 pps)
{master}
magno@EDGE-GNF-re0>
Again, what we see is exactly what we expect. The end-to-end connectivity looks to be there, as already suspected, but best practice means we need to collect conclusive proof. Finally, let's check the counters on the IXIA-facing interfaces after the test has ended, on both GNFs.
CORE-GNF:
{master}
magno@CORE-GNF-re1> show interfaces xe-1/0/0 media | match bytes:
Mar 04 16:06:43
Input bytes: 3072376256, Input packets: 6004536, Output bytes: 3048377456, Output packets: 6004523
EDGE-GNF:
{master}
magno@EDGE-GNF-re0> show interfaces xe-6/0/0 media | match bytes
Mar 04 16:08:12
Input bytes: 3072000000, Input packets: 6000000, Output bytes: 3096000000, Output packets: 6000000
{master}
magno@EDGE-GNF-re0>
Even in this case, the counters confirm our observations. You may have noticed that they are slightly different between the EDGE and CORE GNF interfaces. Indeed, while on the former only IXIA transit traffic travels on the subscriber-facing interface, on the latter there is also light control plane traffic, such as BGP keepalive and update packets, using the CORE-GNF xe-1/0/0 interface. This explains the delta between the two interface counters.
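The delta is small and easy to quantify from the counters above. A quick sketch:

```python
IXIA_FRAMES = 100_000 * 60      # 6,000,000 test frames per direction
core_input_packets = 6_004_536  # CORE-GNF xe-1/0/0 "media" counter
edge_input_packets = 6_000_000  # EDGE-GNF xe-6/0/0 "media" counter

print(core_input_packets - IXIA_FRAMES)  # 4536 extra control-plane packets
print(edge_input_packets - IXIA_FRAMES)  # 0, pure transit traffic
```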
Well, it was a pretty long journey, but in the end all the goals of the lab were achieved thanks to Junos node slicing. Indeed, it's important to recall that, thanks to unique node slicing characteristics such as resource protection, full logical partitioning, and flexible interconnection options through AF Interfaces, it's perfectly secure, feasible, and practical to deploy the very same physical node to perform external and internal functions without jeopardizing scaling, reliability, or security.
And now let's turn a single chassis MX router running production services into a Junos node slicing solution, minimizing downtime during the process!
Chapter 6
From Single Chassis to Junos Node Slicing
In Chapter 5, the Junos node slicing setup was activated in an MX Series chassis that wasn't running any real service, hence there was no particular focus on optimizing the conversion procedure. But in the real world, the most frequent scenario is a production router, already providing end user services, being converted to a Junos node slicing solution.
As already explained throughout this book, as soon as some prerequisites are fulfilled (we will talk about them in a moment), only one step of the conversion process impacts traffic and service: the GNF configuration commit on the B-SYS. Indeed, as soon as the commit happens, the line cards involved in the new GNF deployment need to reboot to attach themselves to the external virtual Routing Engines.
In this chapter, the goal is to take a single chassis MX running some services and convert it into a Junos node slicing solution while minimizing service and traffic impacts. The look and feel of the procedure should be as close as possible to a plain-vanilla Junos release upgrade. Let's start!
1 x MPC7e 40XGE Line Card in slot 1 – xe-1/0/0 connected to IXIA Test Center Card 7 – Port 8
1 x MPC5eQ 2CGE+4XGE Line Card in slot 6 – xe-6/0/0 connected to IXIA Test Center Card 7 – Port 7
The lab schematic is depicted in Figure 6.1:
It looks pretty similar to the initial lab setup in Figure 5.1, doesn't it? Logically speaking, nothing changes. The differences are only physical, as the control plane of the solution now runs on the external servers instead of the internal Routing Engines!
The setup is exactly the same as used in Chapter 5, with the difference that in this case our DUT is a single chassis MX960-4. Before describing all the prerequisites and the steps needed to achieve our goal of migrating a single MX to a one-GNF Junos node slicing solution, it's time to perform some sanity checks to be sure everything is working as expected.
Single MX960-4 Chassis Running BGP Peering and BNG Services
{master}
magno@MX960-4-RE0> show route table inet.0 protocol bgp | match BGP | count
Count: 100000 lines
The BGP side looks perfect, and 100 peers are advertising 100,000 routes as
expected:
{master}
magno@MX960-4-RE0> show subscribers summary
Subscribers by State
Active: 128000
Total: 128000
Subscribers by Client Type
DHCP: 64000
VLAN: 64000
Total: 128000
{master}
magno@MX960-4-RE0> show route table inet.0 protocol access-internal | match Access- | count
Count: 64000 lines
{master}
magno@MX960-4-RE0> show dhcp relay binding | match BOUND | count
Count: 64000 lines
{master}
magno@MX960-4-RE0>
The BNG side looks as expected: 64,000 subscribers are connected, 64,000 access-internal routes are installed, and 64,000 DHCP relay bindings are in the "BOUND" state. We'll check the same KPIs again after the migration to single-GNF Junos node slicing is completed.
NOTE In this lab exercise, we are going to leverage the same two servers already used, hence we can assume they are already correctly configured (so no particular outputs are provided). The relevant details are available in Chapter 5, anyway.
Once the JDM servers are properly set up, some prerequisites must be checked on the production MX device: none of them impact service, therefore they can be performed during regular working hours.
WARNING The single chassis MX must run a suitable Junos version before being configured as a Junos node slicing B-SYS. For this reason, all of the assumptions about performing activities that do not impact service outside of any maintenance window hold true only if the desired Junos version is already running on the router. In case it's not, a Junos upgrade activity must be performed according to the normal, well-known procedures. Of course, using the In-Service Software Upgrade (ISSU) machinery may minimize service impacts, but the mileage may vary according to the hardware components installed and the features active on the router. Therefore, if the device administrator wants to perform an in-service upgrade, it is strongly advised to double-check that all of the hardware components and features active on the router are supported by the starting Junos release, and that the upgrade path to the new operating system version is one of the ISSU upgrade combinations officially supported by Juniper Networks.
{master}
magno@MX960-4-RE0>
Status: Passed – 18.3 is the correct Junos version.
{master}
magno@MX960-4-RE0>
Status: Passed; the network-services mode is correctly configured as "Enhanced-IP".
Save the MX router configuration into a file and store it in an easily accessible location, because you will need it during the external Routing Engine provisioning process; in this case, we're saving it on the jns-x86-0 server:
{master}
magno@MX960-4-RE0> show configuration | save scp://administrator@172.30.181.171:/home/administrator/
configs/MX960-4-CONFIG.txt
administrator@172.30.181.171’s password:
tempfile 100% 31KB
30.8KB/s 00:00
Wrote 1253 lines of output to ‘scp://administrator@172.30.181.171:/home/administrator/configs/
MX960-4-CONFIG.txt’
{master}
magno@MX960-4-RE0>
Status: Passed. The configuration is now safely stored on the external server.
Check that all the physical cabling between the SCBe2 10GE ports and the external x86 servers is working properly. The most effective way to perform this task is to configure set chassis network-slices on the MX router and check the status of the four ports on the servers:
{master}[edit]
magno@MX960-4-RE0# set chassis network-slices
{master}[edit]
magno@MX960-4-RE0# show chassis
--- SNIP ---
network-slices;
{master}[edit]
magno@MX960-4-RE0# commit
re0:
configuration check succeeds
re1:
configuration check succeeds
commit complete
re0:
commit complete
{master}[edit]
magno@MX960-4-RE0#
NOTE With this command the MX router is now ready to act as a B-SYS, but as no guest-network-function configurations are present, it keeps behaving as a single chassis. The command does not affect service.
At this point, using the Linux ethtool command on the external servers, check the state of the four 10GE connections.
Server JNS-X86-0:
root@jns-x86-0:/home/administrator# ethtool enp4s0f0 | grep Speed: -A1
Speed: 10000Mb/s
Duplex: Full
root@jns-x86-0:/home/administrator# ethtool enp4s0f1 | grep Speed: -A1
Speed: 10000Mb/s
Duplex: Full
root@jns-x86-0:/home/administrator#
Server JNS-X86-1:
root@jns-x86-1:/home/administrator# ethtool enp4s0f0 | grep Speed: -A1
Speed: 10000Mb/s
Duplex: Full
root@jns-x86-1:/home/administrator# ethtool enp4s0f1 | grep Speed: -A1
Speed: 10000Mb/s
Duplex: Full
root@jns-x86-1:/home/administrator#
Status: Passed. All the 10GE interfaces are connected and ready for the Junos node slicing deployment.
NOTE Naturally, besides the physical port state, check for proper cabling according to the schematics; as a reminder, Server0/1 port 0 should be connected to SCB0/1 port 0, and Server0/1 port 1 to SCB0/1 port 1, respectively.
MX960-4-GNF-RE1: 172.30.181.177
NOTE In this particular case, the external servers and the B-SYS management networks are not on the same subnet, therefore the IP gateway address was also changed.
All the changes can be easily made by modifying the previously saved configuration file using a text editor of your choice. The changes are highlighted here:
groups {
re0 {
system {
host-name MX960-4-GNF-RE0;
backup-router 172.30.181.1 destination 172.16.0.0/12;
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 172.30.181.176/24;
address 172.30.181.175/24 {
master-only;
}
}
}
}
}
}
re1 {
system {
host-name MX960-4-GNF-RE1;
backup-router 172.30.177.1 destination 172.16.0.0/12;
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 172.30.181.177/24;
address 172.30.181.175/24 {
master-only;
}
}
}
}
}
}
}
server1:
--------------------------------------------------------------------------
Added image: /vm-primary/MX960-4-GNF/MX960-4-GNF.img
root@JDM-SERVER0> edit
Entering configuration mode
[edit]
root@JDM-SERVER0# set virtual-network-functions MX960-4-GNF id 3 no-autostart chassis-type mx960
resource-template 4core-32g
[edit]
root@JDM-SERVER0#
Copy the already tuned configuration to the GNF storage path, /vm-primary/MX960-4-GNF:
root@jns-x86-0:~# cp /home/administrator/configs/MX960-4-GNF-CONFIG.txt /vm-primary/MX960-4-GNF/
The configuration file is now accessible to the JDM, so let's configure the new VNF to use it as a startup config and commit the configuration:
[edit]
root@JDM-SERVER0# set virtual-network-functions MX960-4-GNF base-config /vm-primary/MX960-4-GNF/
MX960-4-GNF-CONFIG.txt
[edit]
root@JDM-SERVER0# commit
server0:
configuration check succeeds
server1:
commit complete
server0:
commit complete
[edit]
root@JDM-SERVER0#
NOTE It is not necessary to manually copy the configuration files to the remote JDM server. The configuration synchronization machinery will take care of this task. Indeed, take a look on JDM server1:
Last login: Fri Mar 8 10:46:13 2019 from 172.29.81.183
administrator@jns-x86-1:~$ ls /vm-primary/MX960-4-GNF/
----- SNIP -----
/vm-primary/MX960-4-GNF/MX960-4-GNF-CONFIG.txt
---- SNIP ----
administrator@jns-x86-1:~$
The file is already there! Great job; now the VNFs are ready to be started. Remember that we configured the no-autostart command a while back? Take a look for yourself:
[edit]
root@JDM-SERVER0# exit
Exiting configuration mode
root@JDM-SERVER0> show virtual-network-functions all-servers
server0:
--------------------------------------------------------------------------
ID Name State Liveness
--------------------------------------------------------------------------------
3 MX960-4-GNF Shutdown down
server1:
--------------------------------------------------------------------------
ID Name State Liveness
--------------------------------------------------------------------------------
3 MX960-4-GNF Shutdown down
root@JDM-SERVER0>
Per the configuration, the two VNFs are provisioned but shut down. To power
them on:
root@JDM-SERVER0> request virtual-network-functions MX960-4-GNF start all-servers
server0:
--------------------------------------------------------------------------
MX960-4-GNF started
server1:
--------------------------------------------------------------------------
MX960-4-GNF started
root@JDM-SERVER0>
The two virtual Routing Engines are now booting up. Of course, it’s possible to
verify the boot process over the console with the request virtual-network-
functions MX960-4-GNF console command.
For the sake of curiosity, we measured the boot time of the VNF. We activated the
CLI timestamps, connected to the virtual Routing Engine console, and then
returned to the JDM CLI as soon as the configuration file had loaded and all the
processes had started:
root@JDM-SERVER0> request virtual-network-functions MX960-4-GNF start
Mar 09 10:42:49
MX960-4-GNF started
root@JDM-SERVER0> request virtual-network-functions MX960-4-GNF console
Mar 09 10:43:16
Connected to domain MX960-4-GNF
Escape character is ^]
Mounting junos-platform-x86-32-20180920.185504_builder_junos_183_r1
----- SNIP ------
148 Chapter 6: From Single Chassis to Junos Node Slicing
Mar 9 10:44:39 jlaunchd: general-authentication-
service (PID 6210) sending signal USR1: due to “proto-mastership”: 0x1
Mar 9 10:44:39 jlaunchd: Registered PID 6525(mpls-traceroute): exec_command
FreeBSD/amd64 (MX960-4-GNF-RE0) (ttyu0)
login:
root@JDM-SERVER0>
Mar 09 10:44:44
Wow, impressive: the VNF took less than two minutes to become fully operational.
This is a huge improvement compared to its physical counterparts, and it’s
mostly due to the POST stage, which is nearly instantaneous on a VM.
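For the record, you can verify that figure from the CLI timestamps captured above (10:42:49 for the start request, 10:44:44 back at the JDM prompt after the login banner). A minimal sketch, using only timestamps copied from this session:

```python
from datetime import datetime

def elapsed_seconds(start: str, end: str) -> int:
    """Seconds between two HH:MM:SS timestamps taken on the same day."""
    fmt = "%H:%M:%S"
    return int((datetime.strptime(end, fmt)
                - datetime.strptime(start, fmt)).total_seconds())

# JDM "start" command vs. FreeBSD login prompt, from the session above.
boot_seconds = elapsed_seconds("10:42:49", "10:44:44")
print(boot_seconds)  # 115: just under two minutes
```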
From the observed prompt, you may have already noticed the hostname
“MX960-4-GNF-RE0” that we set in the startup configuration file. This is a very
good indication that the new virtual RE0 correctly used the configuration file
provided through the JDM CLI.
WARNING It’s important to remark that the Junos OS running on the virtual
Routing Engines does not have full CLI command parity with its counterpart
running on the B-SYS. For this reason, the configuration file (retrieved from
the chassis Routing Engines) used to boot the virtual Routing Engines can
contain statements that are not actually available in the GNF Junos, and this
can cause trouble, especially with commit operations. For instance, let’s
examine a case where the statement ‘system ports console log-out-on-disconnect’ is
present in the original MX960-4 chassis configuration but doesn’t exist in the
GNF Junos CLI. When the user tries to commit any configuration change, the
operation fails as shown here:
{master}[edit]
magno@MX960-4-GNF-RE0# commit
re0:
{master}[edit]
magno@MX960-4-GNF-RE0#
Something strange is happening: the commit process did not run on the backup
Routing Engine although it was supposed to. This is a strong indication that the
loaded GNF configuration contains one or more nonexistent statements. To
discover which ones are preventing the configuration from committing, it’s
sufficient to run the show | compare command:
{master}[edit]
magno@MX960-4-GNF-RE0# show | compare
/config/juniper.conf:93:(37) syntax error: log-out-on-disconnect
[edit system ports console]
'console log-out-on-disconnect;'
syntax error
[edit system ports]
'console'
warning: statement has no contents; ignored
{master}[edit]
magno@MX960-4-GNF-RE0#
It’s clear the log-out-on-disconnect statement is not supported on the GNF Junos.
To fix the problem, the offending statement must be removed before attempting to
commit the configuration. There are two ways to tackle this kind of problem:
Perform a line-by-line comparison between the B-SYS and the GNF Junos
CLIs;
Iterate a quick trial-and-error process until all the unknown statements
are removed.
The second approach is strongly advised because it is a lot quicker and more
efficient to test the configuration and get rid of any offending statement,
because:
The number of unsupported commands is limited;
No more than three unsupported statements have ever been found in the
experience of this book's author;
And ultimately, the GNF is not yet in production, so you have time to tune the
configuration.
Let’s fix the problem using one of these two main methods:
1) Fix the configuration file on the JDM and restart the virtual Routing Engines.
2) Fix the configuration directly inside the virtual Routing Engines and commit
the new, sanitized configuration.
The first approach consists of directly editing the file configured as ‘base-config’ in
the JDM GNF configuration, in our case “/vm-primary/MX960-4-GNF/MX960-
4-GNF-CONFIG.txt”. By logging on to both Linux servers and using the text editor of
your choice, it’s possible to edit the file and remove the undesired statements. At
this point, it’s sufficient to go back to the JDM CLI and type the well-known
request virtual-network-functions MX960-4-GNF restart all-servers operational
command to first destroy and then spin up the two VMs from scratch.
Of course, if more than one statement must be removed, this trial-and-error
process is a lot slower, as the GNF must be restarted every time.
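If the list of offending statements is known, the file edit itself can be scripted before the restart. A minimal sketch, assuming the base-config is a plain-text file and that the offending statement occupies a line of its own (a naive filter like this is only safe for single-line leaf statements; anything spanning a brace block needs manual editing):

```python
def strip_statement(config_text: str, needle: str) -> str:
    """Return the configuration with every line containing `needle` dropped."""
    kept = [line for line in config_text.splitlines() if needle not in line]
    return "\n".join(kept) + "\n"

# Hypothetical fragment mimicking the statement rejected by the GNF Junos.
sample = """system {
    ports {
        console log-out-on-disconnect;
    }
}
"""
print(strip_statement(sample, "log-out-on-disconnect"))
```

Note that the filter leaves an empty ports { } block behind, which the loader merely warns about, as seen earlier with “statement has no contents; ignored”.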
The second approach is faster because it takes place directly inside the GNF itself
and doesn’t require any reboot. Once the offending command is clearly identified,
as explained above, the Junos CLI can be used to remove it and then commit the
configuration, as shown here:
{master}
magno@MX960-4-GNF-RE0> show configuration system ports
console log-out-on-disconnect;
{master}
magno@MX960-4-GNF-RE0> edit
Entering configuration mode
{master}[edit]
magno@MX960-4-GNF-RE0# delete system ports <--- DELETE THE FIRST OFFENDING COMMAND
{master}[edit]
magno@MX960-4-GNF-RE0# show chassis redundancy
failover {
on-loss-of-keepalives;
on-re-to-fpc-stale;
not-on-disk-underperform;
}
graceful-switchover;
{master}[edit]
magno@MX960-4-GNF-RE0# commit and-quit
re0:
{master}[edit]
magno@MX960-4-GNF-RE0#
Wait a moment! The problem is still here: the commit process on the backup
Routing Engine didn’t go through! This is easily explained: the offending
command is still present on the backup Routing Engine, so we need to remove it
there before committing (and synchronizing) the configuration on the master!
{master}
magno@MX960-4-GNF-RE0> request routing-engine login other-routing-engine
{backup}
magno@MX960-4-GNF-RE1> show configuration system ports
console log-out-on-disconnect;
{backup}
magno@MX960-4-GNF-RE1> edit
Entering configuration mode
{backup}[edit]
magno@MX960-4-GNF-RE1# delete system ports
{backup}[edit]
magno@MX960-4-GNF-RE1# show chassis redundancy
failover {
on-loss-of-keepalives;
on-re-to-fpc-stale;
not-on-disk-underperform;
}
graceful-switchover;
{backup}[edit]
magno@MX960-4-GNF-RE1# commit
re1:
configuration check succeeds
re0:
configuration check succeeds
commit complete
re1:
commit complete
{backup}[edit]
magno@MX960-4-GNF-RE1#
Now that the configuration on the backup Routing Engine is fixed, we can commit
the configuration on the master as well:
{backup}[edit]
magno@MX960-4-GNF-RE1# exit
rlogin: connection closed
{master}
magno@MX960-4-GNF-RE0> edit
{master}
magno@MX960-4-GNF-RE0# commit
re0:
configuration check succeeds
re1:
configuration check succeeds
commit complete
re0:
commit complete
{master}[edit]
magno@MX960-4-GNF-RE0#
As expected, the commit process finally goes through, and the problem is now fixed.
Okay, it’s now time to verify that the new control plane is in good shape despite
having no line cards attached to it yet. Let’s do some basic connectivity tests
using the management network and, if they succeed, proceed with some sanity
checks on hardware, mastership status, task replication, network services, and
the running configuration.
First, from one of the X86 servers, let’s simply ping the three management
addresses assigned to the new virtual Routing Engines:
administrator@jns-x86-0:~$ ping 172.30.181.175 -c 1
PING 172.30.181.175 (172.30.181.175) 56(84) bytes of data.
64 bytes from 172.30.181.175: icmp_seq=1 ttl=64 time=0.382 ms
--- 172.30.181.175 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms
administrator@jns-x86-0:~$ ping 172.30.181.176 -c 1
PING 172.30.181.176 (172.30.181.176) 56(84) bytes of data.
64 bytes from 172.30.181.176: icmp_seq=1 ttl=64 time=0.652 ms
--- 172.30.181.176 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms
administrator@jns-x86-0:~$ ping 172.30.181.177 -c 1
PING 172.30.181.177 (172.30.181.177) 56(84) bytes of data.
64 bytes from 172.30.181.177: icmp_seq=1 ttl=64 time=0.592 ms
--- 172.30.181.177 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms
administrator@jns-x86-0:~$
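Checks like these are worth scripting, so they can be re-run before and after each migration step. A small sketch (the addresses are the ones used in this lab; the flags assume a Linux iputils ping, where -c is the echo count and -W the reply timeout in seconds):

```python
import subprocess

# Management addresses from this lab: master VIP (.175), RE0 (.176), RE1 (.177).
ADDRESSES = ["172.30.181.175", "172.30.181.176", "172.30.181.177"]

def ping_cmd(address: str) -> list:
    """Build a single-echo ping command with a two-second timeout."""
    return ["ping", "-c", "1", "-W", "2", address]

def all_reachable(addresses) -> bool:
    """True only if every address answers one echo request."""
    return all(subprocess.run(ping_cmd(a), stdout=subprocess.DEVNULL).returncode == 0
               for a in addresses)

# Usage on one of the X86 servers:
#   if all_reachable(ADDRESSES): proceed with the SSH sanity checks
```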
That’s very promising: both the RE0 and RE1 addresses are reachable, as well as
the master IP, .175! Let’s connect to the master Routing Engine using SSH:
administrator@jns-x86-0:~$ ssh magno@172.30.181.175
The authenticity of host ‘172.30.181.175 (172.30.181.175)’ can’t be established.
ECDSA key fingerprint is SHA256:QIW9uleS7Xm9hZLgTjiQjRw61JAl8smqJKGG+a98n1s.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘172.30.181.175’ (ECDSA) to the list of known hosts.
Password:
Last login: Sat Mar 9 10:05:00 2019 from 172.29.83.81
--- Junos 18.3R1.9 Kernel 64-bit JNPR-11.0-20180816.8630ec5_buil
{master}
magno@MX960-4-GNF-RE0>
Perfect: the CLI prompt makes it clear the configuration was correctly applied,
and from the {master} prompt it’s fair to assume the mastership election also
completed correctly.
Let’s perform some sanity checks to be fully confident the system is actually work-
ing properly:
{master}
magno@MX960-4-GNF-RE0> show chassis hardware
Chassis GN5C8388BF4A MX960-GNF
Routing Engine 0 RE-GNF-2100x4
Routing Engine 1 RE-GNF-2100x4
{master}
magno@MX960-4-GNF-RE0>
The two virtual Routing Engines are correctly onboarded. This is the expected
output: there is no GNF configuration on the B-SYS at all yet, hence the
forwarded command output, which is filtered per GNF, is empty.
Let’s check routing engine mastership and task replication status:
{master}
magno@MX960-4-GNF-RE0> show chassis routing-engine | match “Current state”
Current state Master
Current state Backup
{master}
magno@MX960-4-GNF-RE0> show task replication
Stateful Replication: Enabled
RE mode: Master
Protocol Synchronization Status
BGP Complete
{master}
magno@MX960-4-GNF-RE0>
These outputs are good as well. The two virtual Routing Engines have correctly
negotiated their roles, and the only configured routing protocol is ready to be
synchronized.
Last check, chassis network-services:
{master}
magno@MX960-4-GNF-RE0> show chassis network-services
Network Services Mode: Enhanced-IP
magno@MX960-4-GNF-RE0>
Enhanced-IP it is! As all the checks look the way they should, we can be quite
confident our new control plane can take ownership of the services. It’s now time
to attach the line cards to the new virtual Routing Engines!
On the BGP side, the hold timer expires at some point, as the old session is no
longer available on the MX side, and both BGP speakers then keep trying to
re-establish the peering. Once the BGP session is up, additional time must be
allowed for both peers to receive all the routes, mark them as active in the
routing table, and download them to the forwarding plane.
Even though neither of the services is quick to restore, the BGP-based one looks
a little less complicated, as it relies entirely on the protocol itself, while in
the BNG case the access devices and the DHCP server also play a fundamental
role in the service restoration time.
After this rightful digression, let’s get our hands back on the CLI and start the real
work! We left our MX960-4 single chassis running its services; they are still in
good shape, as the following CLI output testifies:
{master}
magno@MX960-4-RE0> show route summary table inet.0
Autonomous system number: 65203
Router ID: 72.255.255.1
inet.0: 164210 destinations, 164210 routes (164210 active, 0 holddown, 0 hidden)
Direct: 105 routes, 105 active
Local: 103 routes, 103 active
BGP: 100000 routes, 100000 active
Static: 2 routes, 2 active
Access-internal: 64000 routes, 64000 active
{master}
magno@MX960-4-RE0>
Now let’s configure a new GNF with ID = 3 (remember, the ID must match on both
the B-SYS and JDM configurations):
{master}
magno@MX960-4-RE0> edit
Entering configuration mode
{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions gnf 3 fpcs 1
{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions gnf 3 fpcs 6
{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions gnf 3 description “Single-
GNF Setup”
{master}[edit]
magno@MX960-4-RE0# show chassis network-slices
guest-network-functions {
gnf 3 {
description “Single-GNF Setup”;
fpcs [ 1 6 ];
}
}
{master}[edit]
magno@MX960-4-RE0#
Perfect. We are now ready to start the migration of both MPCs (installed in slots 1
and 6) to the new GNF! As soon as the commit command runs, both line cards will
reboot:
{master}[edit]
magno@MX960-4-RE0# run set cli timestamp
Mar 09 15:58:43
CLI timestamp set to: %b %d %T
{master}[edit]
magno@MX960-4-RE0# run show chassis hardware | no-more
Mar 09 15:58:52
Hardware inventory:
Item Version Part number Serial number Description
Chassis JN122E1E0AFA MX960
--- SNIP ---
FPC 1 REV 42 750-053323 CAGF3038 MPC7E 3D 40XGE
CPU REV 19 750-057177 CAGF7762 SMPC PMB
PIC 0 BUILTIN BUILTIN 20x10GE SFPP
Xcvr 0 REV 01 740-030658 B10L02628 SFP+-10G-USR
PIC 1 BUILTIN BUILTIN 20x10GE SFPP
FPC 6 REV 42 750-046005 CADM2676 MPC5E 3D Q 2CGE+4XGE
CPU REV 11 711-045719 CADK9910 RMPC PMB
PIC 0 BUILTIN BUILTIN 2X10GE SFPP OTN
Xcvr 0 REV 01 740-031980 B11B02985 SFP+-10G-SR
PIC 1 BUILTIN BUILTIN 1X100GE CFP2 OTN
PIC 2 BUILTIN BUILTIN 2X10GE SFPP OTN
PIC 3 BUILTIN BUILTIN 1X100GE CFP2 OTN
Fan Tray 0 REV 04 740-031521 ACAC1075 Enhanced Fan Tray
Fan Tray 1 REV 04 740-031521 ACAC0974 Enhanced Fan Tray
{master}[edit]
magno@MX960-4-RE0# commit
Mar 09 15:58:56
re0:
configuration check succeeds
re1:
configuration check succeeds
commit complete
re0:
commit complete
{master}[edit]
magno@MX960-4-RE0# run show chassis hardware | no-more
Mar 09 15:59:01
Hardware inventory:
Item Version Part number Serial number Description
Chassis JN122E1E0AFA MX960
--- SNIP ---
FPC 1 REV 42 750-053323 CAGF3038 MPC7E 3D 40XGE
CPU REV 19 750-057177 CAGF7762 SMPC PMB
FPC 6 REV 42 750-046005 CADM2676 MPC5E 3D Q 2CGE+4XGE
CPU REV 11 711-045719 CADK9910 RMPC PMB
Fan Tray 0 REV 04 740-031521 ACAC1075 Enhanced Fan Tray
Fan Tray 1 REV 04 740-031521 ACAC0974 Enhanced Fan Tray
{master}[edit]
magno@MX960-4-RE0#
It’s clearly visible from the time stamps: as soon as the commit command is
executed, both line cards on the MX960 chassis reboot.
Meanwhile, on the IXIA Test Center, the physical ports have just gone down,
hence the icons’ color turned red:
Figure 6.4 IXIA Test Generator Detected the Interface Down State
As expected, the BGP sessions are still established, as the BGP hold timer (set to
90 seconds) hasn’t expired yet, and the subscriber sessions are all in the UP
state because none of the simulated CPEs has tried to renew its DHCP binding
(lease time set to 3600 seconds).
Even though the physical lasers of the interfaces are now turned off and the IXIA
port is in the down state, see Figure 6.5, the simulated entities change status
based on protocol timers, as if a Layer 2 switch were between the tester and the
MX Series.
Indeed, after some time the BGP hold timer expired and the sessions went down,
while the CPEs’ DHCP leases did not, being 40 times longer.
In the meantime, on the new virtual Routing Engines, the line card onboarding
process started as soon as the commit was executed on the B-SYS:
{master}
magno@MX960-4-GNF-RE0> show chassis hardware
bsys-re0:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis JN122E1E0AFA MX960
Midplane REV 04 750-047853 ACRB9287 Enhanced MX960 Backplane
Fan Extender REV 02 710-018051 CABM7223 Extended Cable Manager
FPM Board REV 03 710-014974 JZ6991 Front Panel Display
PDM Rev 03 740-013110 QCS1743501N Power Distribution Module
PEM 0 Rev 04 740-034724 QCS171302048 PS 4.1kW; 200-240V AC in
PEM 1 Rev 07 740-027760 QCS1602N00R PS 4.1kW; 200-240V AC in
PEM 2 Rev 10 740-027760 QCS1710N0BB PS 4.1kW; 200-240V AC in
PEM 3 Rev 10 740-027760 QCS1710N0BJ PS 4.1kW; 200-240V AC in
Routing Engine 0 REV 01 740-051822 9009170093 RE-S-1800x4
Routing Engine 1 REV 01 740-051822 9009176340 RE-S-1800x4
CB 0 REV 01 750-055976 CACM2281 Enhanced MX SCB 2
Xcvr 0 REV 01 740-031980 163363A04142 SFP+-10G-SR
Xcvr 1 REV 01 740-021308 AS90PGH SFP+-10G-SR
CB 1 REV 02 750-055976 CADJ1802 Enhanced MX SCB 2
Xcvr 0 REV 01 740-031980 AHJ09HD SFP+-10G-SR
Xcvr 1 REV 01 740-021308 09T511103665 SFP+-10G-SR
FPC 1 REV 42 750-053323 CAGF3038 MPC7E 3D 40XGE
CPU REV 19 750-057177 CAGF7762 SMPC PMB
FPC 6 REV 42 750-046005 CADM2676 MPC5E 3D Q 2CGE+4XGE
CPU REV 11 711-045719 CADK9910 RMPC PMB
Fan Tray 0 REV 04 740-031521 ACAC1075 Enhanced Fan Tray
Fan Tray 1 REV 04 740-031521 ACAC0974 Enhanced Fan Tray
gnf3-re0:
--------------------------------------------------------------------------
Chassis GN5C8388BF4A MX960-GNF
Routing Engine 0 RE-GNF-2100x4
Routing Engine 1 RE-GNF-2100x4
{master}
magno@MX960-4-GNF-RE0> show chassis fpc
bsys-re0:
--------------------------------------------------------------------------
Temp CPU Utilization (%) CPU Utilization (%) Memory Utilization (%)
Slot State (C) Total Interrupt 1min 5min 15min DRAM (MB) Heap Buffer GNF
1 Present 46 3
6 Present 43 3
{master}
magno@MX960-4-GNF-RE0>
The first thing to notice here is that, this time, the show chassis hardware output has
a “bsys-re#” section. This means that a configuration for this GNF ID now exists
on the base system, and therefore the output returned by its chassisd daemon is
filtered and displayed on the GNF CLI.
The other important thing to notice is that the line cards are present but still
booting. Indeed, no xe- interfaces are present in the show interfaces terse
output:
{master}
magno@MX960-4-GNF-RE0> set cli timestamp
Mar 09 16:00:03
CLI timestamp set to: %b %d %T
{master}
magno@MX960-4-GNF-RE0> show interfaces terse | match xe-
Mar 09 16:00:05
{master}
magno@MX960-4-GNF-RE0>
NOTE All the devices are NTP-synchronized, so the timestamps can be consid-
ered current and relevant.
The commit operation took place on the B-SYS at 15:58:56. Let’s check when the
xe- interfaces appear in the new Routing Engine CLI and compare the timestamps:
{master}
magno@MX960-4-GNF-RE0> show interfaces terse | match xe-6/0/0 | refresh 2
---(refreshed at 2019-03-09 16:00:49 CET)---
---(refreshed at 2019-03-09 16:00:51 CET)---
--- SNIP ---
---(refreshed at 2019-03-09 16:03:47 CET)---
xe-6/0/0 up up
xe-6/0/0.32767 up up multiservice
^C[abort]
---(refreshed at 2019-03-09 16:03:49 CET)---
{master}
magno@MX960-4-GNF-RE0>
Great! At around 16:03:47 the xe- interface shows up, and this is the worst case,
as the slot 1 line card, whose logs are not shown, booted a little faster, at 16:03:18.
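Putting the timestamps together makes the timer behavior described earlier obvious: the data plane was unusable for roughly 4 minutes and 51 seconds (commit at 15:58:56, xe-6/0/0 back at 16:03:47), so only the timers shorter than that window expired. A quick sketch with this lab’s values:

```python
# Timestamps and timer values from this lab session.
OUTAGE_SECONDS = 4 * 60 + 51   # 15:58:56 -> 16:03:47
BGP_HOLD_TIME = 90             # seconds, as configured on the peering
DHCP_LEASE_TIME = 3600         # seconds, as configured on the simulated CPEs

print(BGP_HOLD_TIME < OUTAGE_SECONDS)    # True: the BGP sessions dropped
print(DHCP_LEASE_TIME < OUTAGE_SECONDS)  # False: the DHCP bindings survived
```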
As expected, because BGP keeps trying to connect from both sides (IXIA is also
configured as an initiator), it converged very quickly. Indeed:
{master}
magno@MX960-4-GNF-RE0> show route summary table inet.0
Mar 09 16:04:27
Autonomous system number: 65203
Router ID: 72.255.255.1
inet.0: 100209 destinations, 100210 routes (100209 active, 0 holddown, 0 hidden)
Direct: 105 routes, 104 active
Local: 103 routes, 103 active
BGP: 100000 routes, 100000 active
Static: 2 routes, 2 active
{master}
magno@MX960-4-GNF-RE0>
This output provides two useful bits of information. First, the BGP routes are
already learned and marked as active in inet.0, which is great news. On the
other hand, unfortunately, and as expected, it shows no ‘access-internal’ routes,
a clear symptom that the subscribers’ states have not been rebuilt yet. The
reason is pretty straightforward: without any kind of keepalive machinery, the
control plane of the IXIA-simulated end-user CPEs is not (yet) aware that a
catastrophic event happened. The only way for a CPE to reconnect is to wait for
its DHCP lease to expire, which in turn triggers a DHCP REQUEST to renew the
lease with the DHCP server. The BNG can then use this DHCP packet to trigger
the auto-configuration machinery, which recreates the control plane states and
the data plane flows needed to forward subscriber traffic.
NOTE You can also use a PPP-based access model (which can leverage the Link
Control Protocol (LCP) to periodically check the client/server connection) to
demonstrate a faster service recovery scenario, but the DHCP example is more
relevant, as one of the goals of this exercise is to highlight the differences, in
both meaning and recovery time, between a simple one-GNF Junos node slicing
migration and an end-to-end service restoration.
The current situation is easily shown by comparing the BNG and IXIA view of the
same service:
{master}
magno@MX960-4-GNF-RE0> show subscribers summary
Mar 09 16:05:45
Subscribers by State
Total: 0
Subscribers by Client Type
Total: 0
{master}
magno@MX960-4-GNF-RE0>
On the BNG, no subscriber states at all… while on the IXIA side, everything looks
like it is working perfectly in Figure 6.6.
In the real world the only way to fix this situation is to wait for the subscribers’
CPEs to renew their DHCP leases. Indeed, operators often change the DHCP
server lease time to a much smaller value some time before a maintenance
window, forcing the installed base to shorten its lease times and thus minimizing
the service interruption during the window; they then set the lease time back to
its default value once the activity is complete.
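The benefit of that trick is easy to quantify: per RFC 2131, a DHCP client normally attempts its first renewal at T1, which defaults to half the lease time, so the configured lease directly bounds the worst-case reconnection wait. A sketch (the 300-second value is just an example of a shortened lease):

```python
def worst_case_renewal_wait(lease_seconds: int) -> float:
    """A client that renewed just before the outage will not try again
    until T1 = 0.5 * lease (the RFC 2131 default)."""
    return 0.5 * lease_seconds

print(worst_case_renewal_wait(3600))  # 1800.0: up to 30 minutes with this lab's lease
print(worst_case_renewal_wait(300))   # 150.0: a shortened lease bounds it to 2.5 minutes
```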
In our case we are much luckier: since we control the simulated CPEs, we can
trigger a DHCP binding renewal on all of them. See Figure 6.7. Let’s do that, and
then check what happens.
Once IXIA starts sending the DHCP renewals, the subscribers’ states are rebuilt
as expected:
{master}
magno@MX960-4-GNF-RE0> show subscribers summary | refresh 5
Mar 09 16:07:10
---(refreshed at 2019-03-09 16:07:10 CET)---
Subscribers by State
Total: 0
--- SNIP ----
---(refreshed at 2019-03-09 16:07:25 CET)---
Subscribers by State
Init: 32
Total: 32
Subscribers by Client Type
VLAN: 32
Total: 32
---(refreshed at 2019-03-09 16:07:30 CET)---
Subscribers by State
Init: 78
Configured: 50
Active: 3928
Total: 4056
Subscribers by Client Type
DHCP: 2016
VLAN: 2040
Total: 4056
---- SNIP ----
---(refreshed at 2019-03-09 16:10:10 CET)---
Subscribers by State
Init: 114
Configured: 11
Active: 127859
Total: 127984
Subscribers by Client Type
DHCP: 63984
VLAN: 64000
Total: 127984
---(refreshed at 2019-03-09 16:10:15 CET)---
Subscribers by State
Active: 128000
Total: 128000
Subscribers by Client Type
DHCP: 64000
VLAN: 64000
Total: 128000
---(refreshed at 2019-03-09 16:10:20 CET)---
Subscribers by State
Active: 128000
Total: 128000
Subscribers by Client Type
DHCP: 64000
VLAN: 64000
Total: 128000
---(*more 100%)---[abort]
{master}
magno@MX960-4-GNF-RE0> show subscribers summary
Mar 09 16:10:42
Subscribers by State
Active: 128000
Total: 128000
Subscribers by Client Type
DHCP: 64000
VLAN: 64000
Total: 128000
{master}
magno@MX960-4-GNF-RE0> show route summary table inet.0
Mar 09 16:11:02
Autonomous system number: 65203
Router ID: 72.255.255.1
inet.0: 164209 destinations, 164210 routes (164209 active, 0 holddown, 0 hidden)
Direct: 105 routes, 104 active
Local: 103 routes, 103 active
BGP: 100000 routes, 100000 active
Static: 2 routes, 2 active
Access-internal: 64000 routes, 64000 active
{master}
magno@MX960-4-GNF-RE0>
Perfect, that’s much better. It now looks like the end-to-end services are completely
restored. In the end, the disruptive GNF configuration commit was performed at
15:58:56 and end-to-end service restoration was achieved at 16:10:42, which
means the service disruption lasted about 11 minutes and 46 seconds. That’s not
bad at all considering the nature of the services involved!
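The arithmetic can be double-checked from the two timestamps recorded in the transcripts:

```python
from datetime import datetime

fmt = "%H:%M:%S"
commit = datetime.strptime("15:58:56", fmt)    # disruptive commit on the B-SYS
restored = datetime.strptime("16:10:42", fmt)  # all 128000 subscribers active again

total = int((restored - commit).total_seconds())
minutes, seconds = divmod(total, 60)
print(f"{minutes}m {seconds}s")  # 11m 46s of end-to-end service disruption
```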
It’s now possible to remove all the service- and protocol-related configuration
from the MX chassis, as it is no longer used. Indeed, the line cards are now
attached to the new Routing Engines, which means no data plane resources are
available to the internal Routing Engines, making the old configuration
completely ineffective.
Summary
This has been an extraordinary Junos node slicing day. Ease of setup and flexibility
are the undeniable advantages this new technology brings to networking. Thank
you for labbing Junos node slicing, and remember that more information can be
found in the Junos Node Slicing Feature Guide, which is constantly updated as
new features are released. Look for it in the Juniper TechLibrary:
https://www.juniper.net/documentation/en_US/junos/information-products/pathway-pages/junos-node-slicing/junos-node-slicing.pdf.
Appendix
Please find here the final configurations used for this book. Each configuration is
identified by the name of the element it belongs to and by the use cases covered.
Moreover, to save space, the BGP protocol and interface configurations used to
establish the 100 peering sessions are shown explicitly just once.
MX960-4-BSYS-SET
The MX960 B-SYS configuration for Chapters 3 and 4 cases:
set version 18.3R1.9
set groups re0 system host-name MX960-4-RE0
set groups re0 system backup-router 172.30.177.1
set groups re0 system backup-router destination 172.30.176.0/20
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.178.71/24
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.177.196/24 master-only
set groups re1 system host-name MX960-4-RE1
set groups re1 system backup-router 172.30.177.1
set groups re1 system backup-router destination 172.30.176.0/20
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.178.72/24
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.177.196/24 master-only
set apply-groups re0
set apply-groups re1
set system commit fast-synchronize
set system commit synchronize
set system commit persist-groups-inheritance
set system login user magno uid 2001
set system login user magno class super-user
set system login user magno authentication encrypted-password “-- SNIP --”
set system login user magno authentication ssh-rsa “ssh-rsa -- SNIP --”
set system login user remote uid 2000
set system login user remote class super-user
set system root-authentication encrypted-password “-- SNIP --”
set system domain-name poc-nl.jnpr.net
164 Appendix: Node Slicing Lab Configurations
set system backup-router 172.30.177.1
set system backup-router destination 172.30.176.0/20
set system time-zone Europe/Amsterdam
set system authentication-order password
set system authentication-order radius
set system name-server 172.30.207.10
set system name-server 172.30.207.13
set system radius-server 172.30.176.9 secret “$9$DMHPTz36CtOqmBEclLXik.mfT6/t1Eyn/”
set system radius-server 172.30.176.9 retry 3
set system radius-server 172.30.177.4 secret “$9$CgY9p1EcylvWx0B7VwgUDtuOBIEleWNVYre”
set system radius-server 172.30.177.4 retry 3
set system services ftp
set system services ssh root-login allow
set system services ssh client-alive-interval 120
set system services telnet
set system services xnm-clear-text
set system services netconf ssh
set system services rest http
set system services rest enable-explorer
set system services web-management http
set system syslog user * any emergency
set system syslog host 172.30.189.13 any notice
set system syslog host 172.30.189.13 authorization info
set system syslog host 172.30.189.13 interactive-commands info
set system syslog host 172.30.189.14 any notice
set system syslog host 172.30.189.14 authorization info
set system syslog host 172.30.189.14 interactive-commands info
set system syslog file messages any notice
set system syslog file messages authorization info
set system syslog file messages match-strings “!*0x44b*”
set system compress-configuration-files
set system ntp boot-server 172.30.207.10
set system ntp server 172.30.207.10
set chassis redundancy failover on-loss-of-keepalives
set chassis redundancy failover not-on-disk-underperform
set chassis redundancy graceful-switchover
set chassis fpc 0 pic 0 pic-mode 100G
set chassis fpc 0 pic 1 pic-mode 100G
set chassis network-services enhanced-ip
set chassis network-slices guest-network-functions gnf 1 fpcs 6
set chassis network-slices guest-network-functions gnf 1 af0 description “AF0 to CORE-GNF AF0”
set chassis network-slices guest-network-functions gnf 1 af0 peer-gnf id 2
set chassis network-slices guest-network-functions gnf 1 af0 peer-gnf af0
set chassis network-slices guest-network-functions gnf 2 fpcs 1
set chassis network-slices guest-network-functions gnf 2 af0 description “AF0 to EDGE-GNF AF0”
set chassis network-slices guest-network-functions gnf 2 af0 peer-gnf id 1
set chassis network-slices guest-network-functions gnf 2 af0 peer-gnf af0
set interfaces lo0 unit 0 family inet address 192.177.0.196/32 preferred
set interfaces lo0 unit 0 family inet address 127.0.0.1/32
set interfaces lo0 unit 0 family iso address 49.0177.0000.0000.0196.00
set snmp location “AMS, EPOC location=3.09”
set snmp contact “emea-poc@juniper.net”
set snmp community public authorization read-only
set snmp community public clients 172.30.0.0/16
set snmp community public clients 0.0.0.0/0 restrict
set snmp community private authorization read-write
set snmp community private clients 172.30.0.0/16
set snmp community private clients 0.0.0.0/0 restrict
set snmp trap-options source-address 172.30.177.196
set routing-options nonstop-routing
set routing-options static route 172.16.0.0/12 next-hop 172.30.177.1
set routing-options static route 172.16.0.0/12 no-readvertise
set routing-options router-id 192.177.0.196
set routing-options autonomous-system 100
set protocols layer2-control nonstop-bridging
EDGE-GNF-AF INTERFACE-ADV
For the advanced AF Interface use cases:
set version 18.3R1.9
set groups re0 system host-name EDGE-GNF-re0
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.175/24 master-only
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.176/24
set groups re1 system host-name EDGE-GNF-re1
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.175/24 master-only
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.177/24
set apply-groups re0
set apply-groups re1
set system commit fast-synchronize
set system commit synchronize
set system configuration-database max-db-size 629145600
set system login user magno uid 2000
set system login user magno class super-user
set system login user magno authentication ssh-rsa "ssh-rsa -- SNIP --"
set system root-authentication encrypted-password "-- SNIP --"
set system time-zone Europe/Amsterdam
set system use-imported-time-zones
set system dynamic-profile-options versioning
set system services ftp
set system services ssh root-login allow
set system services netconf ssh
set system services rest http
set system services subscriber-management enable
deactivate system services subscriber-management
set system syslog user * any emergency
set system syslog file messages any notice
set system syslog file messages authorization info
set system syslog file interactive-commands interactive-commands any
set system ntp boot-server 172.30.207.10
set system ntp server 172.30.207.10
set chassis redundancy failover on-loss-of-keepalives
set chassis redundancy failover not-on-disk-underperform
set chassis redundancy graceful-switchover
set chassis aggregated-devices maximum-links 64
set chassis pseudowire-service device-count 10
set chassis redundancy-group interface-type redundant-logical-tunnel device-count 1
set chassis fpc 6 pic 0 tunnel-services bandwidth 40g
set chassis fpc 6 pic 1 tunnel-services bandwidth 40g
set chassis network-services enhanced-ip
set interfaces xe-6/0/0 flexible-vlan-tagging
set interfaces xe-6/0/0 encapsulation flexible-ethernet-services
set interfaces xe-6/0/0 unit 50 encapsulation vlan-bridge
166 Appendix: Node Slicing Lab Configurations
set interfaces xe-6/0/0 unit 50 vlan-id 50
set interfaces xe-6/0/0 unit 100 encapsulation vlan-vpls
set interfaces xe-6/0/0 unit 100 vlan-id 100
set interfaces xe-6/0/0 unit 200 encapsulation vlan-bridge
set interfaces xe-6/0/0 unit 200 vlan-id 200
set interfaces af0 flexible-vlan-tagging
set interfaces af0 mtu 9224
set interfaces af0 encapsulation flexible-ethernet-services
set interfaces af0 unit 50 encapsulation vlan-ccc
set interfaces af0 unit 50 vlan-id 50
set interfaces af0 unit 100 encapsulation vlan-ccc
set interfaces af0 unit 100 vlan-id 100
set interfaces af0 unit 200 encapsulation vlan-ccc
set interfaces af0 unit 200 vlan-id 200
set interfaces af0 unit 200 family ccc mtu 9216
set interfaces lo0 unit 0 family inet address 72.255.255.1/32
set interfaces lo0 unit 0 family inet address 100.100.255.254/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0001.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.1/128
set interfaces ps0 anchor-point lt-6/0/0
set interfaces ps0 flexible-vlan-tagging
set interfaces ps0 mtu 9216
set interfaces ps0 encapsulation flexible-ethernet-services
set interfaces ps0 unit 0 encapsulation vlan-ccc
set interfaces ps0 unit 0 vlan-id 200
set interfaces ps0 unit 200 encapsulation vlan-bridge
set interfaces ps0 unit 200 vlan-id 200
set interfaces ps1 anchor-point lt-6/1/0
set interfaces ps1 flexible-vlan-tagging
set interfaces ps1 mtu 9216
set interfaces ps1 encapsulation flexible-ethernet-services
set interfaces ps1 unit 0 encapsulation ethernet-ccc
set interfaces ps1 unit 100 encapsulation vlan-vpls
set interfaces ps1 unit 100 vlan-id 100
set interfaces ps2 anchor-point lt-6/0/0
set interfaces ps2 flexible-vlan-tagging
set interfaces ps2 mtu 9216
set interfaces ps2 encapsulation flexible-ethernet-services
set interfaces ps2 unit 0 encapsulation ethernet-ccc
set interfaces ps2 unit 50 encapsulation vlan-bridge
set interfaces ps2 unit 50 vlan-id 50
set routing-options nonstop-routing
set routing-options static route 100.100.0.0/16 discard
set routing-options static route 192.168.0.0/16 next-hop 172.30.181.1
set routing-options static route 172.16.0.0/12 next-hop 172.30.181.1
set routing-options static route 10.0.0.0/8 next-hop 172.30.181.1
set routing-options autonomous-system 65203
set routing-options forwarding-table chained-composite-next-hop ingress l2ckt
set routing-options forwarding-table chained-composite-next-hop ingress fec129-vpws
set routing-options forwarding-table chained-composite-next-hop ingress no-evpn
set routing-options forwarding-table chained-composite-next-hop ingress labeled-bgp inet6
set routing-options forwarding-table chained-composite-next-hop ingress l3vpn
set protocols mpls interface af0.72
set protocols l2circuit local-switching interface af0.200 end-interface interface ps0.0
set protocols l2circuit local-switching interface af0.100 end-interface interface ps1.0
set protocols l2circuit local-switching interface af0.100 ignore-encapsulation-mismatch
set protocols l2circuit local-switching interface af0.50 end-interface interface ps2.0
set protocols l2circuit local-switching interface af0.50 ignore-encapsulation-mismatch
set protocols layer2-control nonstop-bridging
set routing-instances EVPN-VLAN-50 instance-type evpn
set routing-instances EVPN-VLAN-50 vlan-id 50
set routing-instances EVPN-VLAN-50 interface xe-6/0/0.50
set routing-instances EVPN-VLAN-50 interface ps2.50
set routing-instances EVPN-VLAN-50 route-distinguisher 72.255.255.1:150
set routing-instances EVPN-VLAN-50 vrf-target target:65203:50
set routing-instances EVPN-VLAN-50 protocols evpn
set routing-instances VPLS-VLAN100 instance-type vpls
set routing-instances VPLS-VLAN100 vlan-id 100
set routing-instances VPLS-VLAN100 interface xe-6/0/0.100
set routing-instances VPLS-VLAN100 interface ps1.100
set routing-instances VPLS-VLAN100 protocols vpls vpls-id 100
set bridge-domains VLAN-200 vlan-id 200
set bridge-domains VLAN-200 interface ps0.200
set bridge-domains VLAN-200 interface xe-6/0/0.200
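Once this configuration is committed, the cross-connects and Layer 2 services can be sanity-checked from operational mode. These verification commands are not part of the lab configuration itself, just a minimal checklist using standard Junos show commands (output omitted):

show l2circuit connections
show bridge domain
show vpls connections
show evpn instance EVPN-VLAN-50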
CORE-GNF-AF INTERFACE-ADV
For the advanced AF Interface use cases:
set version 18.3R1.9
set groups re0 system host-name CORE-GNF-re0
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.178/24 master-only
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.179/24
set groups re1 system host-name CORE-GNF-re1
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.178/24 master-only
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.180/24
set apply-groups re0
set apply-groups re1
set system commit fast-synchronize
set system commit synchronize
set system login user magno uid 2000
set system login user magno class super-user
set system login user magno authentication ssh-rsa "ssh-rsa -- SNIP --"
set system root-authentication encrypted-password "-- SNIP --"
set system services ftp
set system services ssh root-login allow
set system services netconf ssh
set system services rest http
set system syslog user * any emergency
set system syslog file messages any notice
set system syslog file messages authorization info
set system syslog file interactive-commands interactive-commands any
set system ntp boot-server 172.30.207.10
set system ntp server 172.30.207.10
set chassis redundancy failover on-loss-of-keepalives
set chassis redundancy failover not-on-disk-underperform
set chassis redundancy graceful-switchover
set chassis redundancy-group interface-type redundant-logical-tunnel device-count 1
deactivate chassis redundancy-group interface-type redundant-logical-tunnel
set interfaces xe-1/0/0 flexible-vlan-tagging
set interfaces xe-1/0/0 encapsulation flexible-ethernet-services
set interfaces xe-1/0/0 unit 50 encapsulation vlan-bridge
set interfaces xe-1/0/0 unit 50 vlan-id 50
set interfaces xe-1/0/0 unit 100 encapsulation vlan-bridge
set interfaces xe-1/0/0 unit 100 vlan-id 100
set interfaces xe-1/0/0 unit 200 encapsulation vlan-bridge
set interfaces xe-1/0/0 unit 200 vlan-id 200
set interfaces af0 flexible-vlan-tagging
set interfaces af0 mtu 9216
set interfaces af0 encapsulation flexible-ethernet-services
set interfaces af0 unit 50 encapsulation vlan-bridge
set interfaces af0 unit 50 vlan-id 50
set interfaces af0 unit 100 encapsulation vlan-bridge
set interfaces af0 unit 100 vlan-id 100
set interfaces af0 unit 200 encapsulation vlan-bridge
set interfaces af0 unit 200 vlan-id 200
set interfaces irb unit 50 family inet address 50.50.50.99/24
set interfaces lo0 unit 0 family inet address 72.255.255.2/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0002.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.2/128
set routing-options nonstop-routing
set routing-options static route 10.0.0.0/8 next-hop 172.30.181.1
set routing-options static route 10.0.0.0/8 no-readvertise
set routing-options static route 172.16.0.0/12 next-hop 172.30.181.1
set routing-options static route 172.16.0.0/12 no-readvertise
set routing-options static route 192.168.0.0/16 next-hop 172.30.181.1
set routing-options static route 192.168.0.0/16 no-readvertise
set routing-options router-id 72.255.255.2
set routing-options autonomous-system 65203
set protocols layer2-control nonstop-bridging
set bridge-domains VLAN-100 vlan-id 100
set bridge-domains VLAN-100 interface af0.100
set bridge-domains VLAN-100 interface xe-1/0/0.100
set bridge-domains VLAN-200 vlan-id 200
set bridge-domains VLAN-200 interface xe-1/0/0.200
set bridge-domains VLAN-200 interface af0.200
set bridge-domains VLAN-50 vlan-id 50
set bridge-domains VLAN-50 interface xe-1/0/0.50
set bridge-domains VLAN-50 interface af0.50
set bridge-domains VLAN-50 routing-interface irb.50
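On the CORE GNF side, bridging and IRB reachability over the AF interface can be verified in a similar way. This is an added sketch using standard operational commands, not part of the listing (output omitted):

show bridge domain
show bridge mac-table
show interfaces irb.50 terse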
EDGE-GNF-BBE
The GNF configuration for the Chapter 4 use cases:
set version 18.3R1.9
set groups re0 system host-name EDGE-GNF-re0
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.175/24 master-only
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.176/24
set groups re1 system host-name EDGE-GNF-re1
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.175/24 master-only
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.177/24
set apply-groups re0
set apply-groups re1
set system commit fast-synchronize
set system commit synchronize
set system configuration-database max-db-size 629145600
set system login user magno uid 2000
set system login user magno class super-user
set system login user magno authentication ssh-rsa "ssh-rsa -- SNIP --"
set system root-authentication encrypted-password "-- SNIP --"
set system time-zone Europe/Amsterdam
set system use-imported-time-zones
set system dynamic-profile-options versioning
set system services ftp
set system services ssh root-login allow
set system services netconf ssh
set system services rest http
set system services subscriber-management enable
set system syslog user * any emergency
set system syslog file messages any notice
set system syslog file messages authorization info
set system syslog file interactive-commands interactive-commands any
set system ntp boot-server 172.30.207.10
set system ntp server 172.30.207.10
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" no-traps
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" proxy-arp
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" vlan-tags outer "$junos-stacked-vlan-id"
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" vlan-tags inner "$junos-vlan-id"
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" demux-options underlying-interface "$junos-underlying-interface"
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address lo0.0
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address preferred-source-address 100.100.255.254
set dynamic-profiles DP-IP-DEMUX interfaces demux0 unit "$junos-interface-unit" demux-options underlying-interface "$junos-underlying-interface"
set dynamic-profiles DP-IP-DEMUX interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address lo0.0
set chassis redundancy failover on-loss-of-keepalives
set chassis redundancy failover not-on-disk-underperform
set chassis redundancy graceful-switchover
set chassis aggregated-devices maximum-links 64
set chassis network-services enhanced-ip
set access-profile NOAUTH
set interfaces xe-6/0/0 flexible-vlan-tagging
set interfaces xe-6/0/0 auto-configure stacked-vlan-ranges dynamic-profile DP-AUTO-VLAN accept dhcp-v4
set interfaces xe-6/0/0 auto-configure stacked-vlan-ranges dynamic-profile DP-AUTO-VLAN ranges 1000-2000,any
set interfaces xe-6/0/0 auto-configure remove-when-no-subscribers
set interfaces xe-6/0/0 encapsulation flexible-ethernet-services
set interfaces af0 flexible-vlan-tagging
set interfaces af0 mtu 9216
set interfaces af0 encapsulation flexible-ethernet-services
set interfaces af0 unit 72 vlan-id 72
set interfaces af0 unit 72 family inet address 72.0.0.1/30
set interfaces af0 unit 72 family iso
set interfaces af0 unit 72 family inet6 address fec0::71.0.0.1/126
set interfaces af0 unit 72 family mpls
set interfaces lo0 unit 0 family inet address 72.255.255.1/32
set interfaces lo0 unit 0 family inet address 100.100.255.254/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0001.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.1/128
set forwarding-options dhcp-relay server-group DHCPv4 99.99.10.10
set forwarding-options dhcp-relay group DHCPv4-ACTIVE active-server-group DHCPv4
set forwarding-options dhcp-relay group DHCPv4-ACTIVE interface xe-6/0/0.0
set forwarding-options dhcp-relay no-snoop
set accounting-options periodic-refresh disable
set routing-options nonstop-routing
set routing-options static route 100.100.0.0/16 discard
set routing-options static route 192.168.0.0/16 next-hop 172.30.181.1
set routing-options static route 172.16.0.0/12 next-hop 172.30.181.1
set routing-options static route 10.0.0.0/8 next-hop 172.30.181.1
set routing-options autonomous-system 65203
set protocols bgp group iBGP type internal
set protocols bgp group iBGP local-address 72.255.255.1
set protocols bgp group iBGP family inet unicast
set protocols bgp group iBGP family inet6 unicast
set protocols bgp group iBGP export BBE-POOL
set protocols bgp group iBGP neighbor 72.255.255.2
set protocols isis reference-bandwidth 100g
set protocols isis level 1 disable
set protocols isis level 2 wide-metrics-only
set protocols isis interface af0.72 point-to-point
set protocols isis interface lo0.0 passive
set protocols layer2-control nonstop-bridging
set policy-options policy-statement BBE-POOL term OK from protocol static
set policy-options policy-statement BBE-POOL term OK from route-filter 100.100.0.0/16 exact
set policy-options policy-statement BBE-POOL term OK then accept
set access profile NOAUTH authentication-order none
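With subscribers coming up over xe-6/0/0, the BBE setup can be spot-checked from operational mode. These commands are an added suggestion, not part of the lab listing (output omitted):

show subscribers summary
show dhcp relay binding
show bgp summary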
CORE-GNF-BGP
The GNF configuration for the Chapter 4 use cases:
set version 18.3R1.9
set groups re0 system host-name CORE-GNF-re0
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.178/24 master-only
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.179/24
set groups re1 system host-name CORE-GNF-re1
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.178/24 master-only
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.180/24
set apply-groups re0
set apply-groups re1
set system commit fast-synchronize
set system commit synchronize
set system login user magno uid 2000
set system login user magno class super-user
set system login user magno authentication ssh-rsa "ssh-rsa -- SNIP --"
set system root-authentication encrypted-password "-- SNIP --"
set system services ftp
set system services ssh root-login allow
set system services netconf ssh
set system services rest http
set system syslog user * any emergency
set system syslog file messages any notice
set system syslog file messages authorization info
set system syslog file interactive-commands interactive-commands any
set system ntp boot-server 172.30.207.10
set system ntp server 172.30.207.10
set chassis redundancy failover on-loss-of-keepalives
set chassis redundancy failover not-on-disk-underperform
set chassis redundancy graceful-switchover
set interfaces xe-1/0/0 description "Link to IXIA LC 7 Port 8"
set interfaces xe-1/0/0 flexible-vlan-tagging
set interfaces xe-1/0/0 encapsulation flexible-ethernet-services
set interfaces xe-1/0/0 unit 10 vlan-id 10
set interfaces xe-1/0/0 unit 10 family inet address 99.99.10.1/24
set interfaces xe-1/0/0 unit 300 vlan-id 300
set interfaces xe-1/0/0 unit 300 family inet address 99.99.99.1/30
set interfaces xe-1/0/0 unit 300 family inet6 address 2002::99/126
set interfaces xe-1/0/0 unit 301 vlan-id 301
set interfaces xe-1/0/0 unit 301 family inet address 99.99.99.5/30
set interfaces xe-1/0/0 unit 302 vlan-id 302
set interfaces xe-1/0/0 unit 302 family inet address 99.99.99.9/30
set interfaces xe-1/0/0 unit 303 vlan-id 303
set interfaces xe-1/0/0 unit 303 family inet address 99.99.99.13/30
set interfaces xe-1/0/0 unit 304 vlan-id 304
set interfaces xe-1/0/0 unit 304 family inet address 99.99.99.17/30
set interfaces xe-1/0/0 unit 305 vlan-id 305
set interfaces xe-1/0/0 unit 305 family inet address 99.99.99.21/30
set interfaces xe-1/0/0 unit 306 vlan-id 306
set interfaces xe-1/0/0 unit 306 family inet address 99.99.99.25/30
set interfaces xe-1/0/0 unit 307 vlan-id 307
set interfaces xe-1/0/0 unit 307 family inet address 99.99.99.29/30
set interfaces xe-1/0/0 unit 308 vlan-id 308
set interfaces xe-1/0/0 unit 308 family inet address 99.99.99.33/30
set interfaces xe-1/0/0 unit 309 vlan-id 309
set interfaces xe-1/0/0 unit 309 family inet address 99.99.99.37/30
set interfaces xe-1/0/0 unit 310 vlan-id 310
set interfaces xe-1/0/0 unit 310 family inet address 99.99.99.41/30
set interfaces xe-1/0/0 unit 311 vlan-id 311
set interfaces xe-1/0/0 unit 311 family inet address 99.99.99.45/30
set interfaces xe-1/0/0 unit 312 vlan-id 312
set interfaces xe-1/0/0 unit 312 family inet address 99.99.99.49/30
set interfaces xe-1/0/0 unit 313 vlan-id 313
set interfaces xe-1/0/0 unit 313 family inet address 99.99.99.53/30
set interfaces xe-1/0/0 unit 314 vlan-id 314
set interfaces xe-1/0/0 unit 314 family inet address 99.99.99.57/30
set interfaces xe-1/0/0 unit 315 vlan-id 315
set interfaces xe-1/0/0 unit 315 family inet address 99.99.99.61/30
set interfaces xe-1/0/0 unit 316 vlan-id 316
set interfaces xe-1/0/0 unit 316 family inet address 99.99.99.65/30
set interfaces xe-1/0/0 unit 317 vlan-id 317
set interfaces xe-1/0/0 unit 317 family inet address 99.99.99.69/30
set interfaces xe-1/0/0 unit 318 vlan-id 318
set interfaces xe-1/0/0 unit 318 family inet address 99.99.99.73/30
set interfaces xe-1/0/0 unit 319 vlan-id 319
set interfaces xe-1/0/0 unit 319 family inet address 99.99.99.77/30
set interfaces xe-1/0/0 unit 320 vlan-id 320
set interfaces xe-1/0/0 unit 320 family inet address 99.99.99.81/30
set interfaces xe-1/0/0 unit 321 vlan-id 321
set interfaces xe-1/0/0 unit 321 family inet address 99.99.99.85/30
set interfaces xe-1/0/0 unit 322 vlan-id 322
set interfaces xe-1/0/0 unit 322 family inet address 99.99.99.89/30
set interfaces xe-1/0/0 unit 323 vlan-id 323
set interfaces xe-1/0/0 unit 323 family inet address 99.99.99.93/30
set interfaces xe-1/0/0 unit 324 vlan-id 324
set interfaces xe-1/0/0 unit 324 family inet address 99.99.99.97/30
set interfaces xe-1/0/0 unit 325 vlan-id 325
set interfaces xe-1/0/0 unit 325 family inet address 99.99.99.101/30
set interfaces xe-1/0/0 unit 326 vlan-id 326
set interfaces xe-1/0/0 unit 326 family inet address 99.99.99.105/30
set interfaces xe-1/0/0 unit 327 vlan-id 327
set interfaces xe-1/0/0 unit 327 family inet address 99.99.99.109/30
set interfaces xe-1/0/0 unit 328 vlan-id 328
set interfaces xe-1/0/0 unit 328 family inet address 99.99.99.113/30
set interfaces xe-1/0/0 unit 329 vlan-id 329
set interfaces xe-1/0/0 unit 329 family inet address 99.99.99.117/30
set interfaces xe-1/0/0 unit 330 vlan-id 330
set interfaces xe-1/0/0 unit 330 family inet address 99.99.99.121/30
set interfaces xe-1/0/0 unit 331 vlan-id 331
set interfaces xe-1/0/0 unit 331 family inet address 99.99.99.125/30
set interfaces xe-1/0/0 unit 332 vlan-id 332
set interfaces xe-1/0/0 unit 332 family inet address 99.99.99.129/30
set interfaces xe-1/0/0 unit 333 vlan-id 333
set interfaces xe-1/0/0 unit 333 family inet address 99.99.99.133/30
set interfaces xe-1/0/0 unit 334 vlan-id 334
set interfaces xe-1/0/0 unit 334 family inet address 99.99.99.137/30
set interfaces xe-1/0/0 unit 335 vlan-id 335
set interfaces xe-1/0/0 unit 335 family inet address 99.99.99.141/30
set interfaces xe-1/0/0 unit 336 vlan-id 336
set interfaces xe-1/0/0 unit 336 family inet address 99.99.99.145/30
set interfaces xe-1/0/0 unit 337 vlan-id 337
set interfaces xe-1/0/0 unit 337 family inet address 99.99.99.149/30
set interfaces xe-1/0/0 unit 338 vlan-id 338
set interfaces xe-1/0/0 unit 338 family inet address 99.99.99.153/30
set interfaces xe-1/0/0 unit 339 vlan-id 339
set interfaces xe-1/0/0 unit 339 family inet address 99.99.99.157/30
set interfaces xe-1/0/0 unit 340 vlan-id 340
set interfaces xe-1/0/0 unit 340 family inet address 99.99.99.161/30
set interfaces xe-1/0/0 unit 341 vlan-id 341
set interfaces xe-1/0/0 unit 341 family inet address 99.99.99.165/30
set interfaces xe-1/0/0 unit 342 vlan-id 342
set interfaces xe-1/0/0 unit 342 family inet address 99.99.99.169/30
set interfaces xe-1/0/0 unit 343 vlan-id 343
set interfaces xe-1/0/0 unit 343 family inet address 99.99.99.173/30
set interfaces xe-1/0/0 unit 344 vlan-id 344
set interfaces xe-1/0/0 unit 344 family inet address 99.99.99.177/30
set interfaces xe-1/0/0 unit 345 vlan-id 345
set interfaces xe-1/0/0 unit 345 family inet address 99.99.99.181/30
set interfaces xe-1/0/0 unit 346 vlan-id 346
set interfaces xe-1/0/0 unit 346 family inet address 99.99.99.185/30
set interfaces xe-1/0/0 unit 347 vlan-id 347
set interfaces xe-1/0/0 unit 347 family inet address 99.99.99.189/30
set interfaces xe-1/0/0 unit 348 vlan-id 348
set interfaces xe-1/0/0 unit 348 family inet address 99.99.99.193/30
set interfaces xe-1/0/0 unit 349 vlan-id 349
set interfaces xe-1/0/0 unit 349 family inet address 99.99.99.197/30
set interfaces xe-1/0/0 unit 350 vlan-id 350
set interfaces xe-1/0/0 unit 350 family inet address 99.99.99.201/30
set interfaces xe-1/0/0 unit 351 vlan-id 351
set interfaces xe-1/0/0 unit 351 family inet address 99.99.99.205/30
set interfaces xe-1/0/0 unit 352 vlan-id 352
set interfaces xe-1/0/0 unit 352 family inet address 99.99.99.209/30
set interfaces xe-1/0/0 unit 353 vlan-id 353
set interfaces xe-1/0/0 unit 353 family inet address 99.99.99.213/30
set interfaces xe-1/0/0 unit 354 vlan-id 354
set interfaces xe-1/0/0 unit 354 family inet address 99.99.99.217/30
set interfaces xe-1/0/0 unit 355 vlan-id 355
set interfaces xe-1/0/0 unit 355 family inet address 99.99.99.221/30
set interfaces xe-1/0/0 unit 356 vlan-id 356
set interfaces xe-1/0/0 unit 356 family inet address 99.99.99.225/30
set interfaces xe-1/0/0 unit 357 vlan-id 357
set interfaces xe-1/0/0 unit 357 family inet address 99.99.99.229/30
set interfaces xe-1/0/0 unit 358 vlan-id 358
set interfaces xe-1/0/0 unit 358 family inet address 99.99.99.233/30
set interfaces xe-1/0/0 unit 359 vlan-id 359
set interfaces xe-1/0/0 unit 359 family inet address 99.99.99.237/30
set interfaces xe-1/0/0 unit 360 vlan-id 360
set interfaces xe-1/0/0 unit 360 family inet address 99.99.99.241/30
set interfaces xe-1/0/0 unit 361 vlan-id 361
set interfaces xe-1/0/0 unit 361 family inet address 99.99.99.245/30
set interfaces xe-1/0/0 unit 362 vlan-id 362
set interfaces xe-1/0/0 unit 362 family inet address 99.99.99.249/30
set interfaces xe-1/0/0 unit 363 vlan-id 363
set interfaces xe-1/0/0 unit 363 family inet address 99.99.99.253/30
set interfaces xe-1/0/0 unit 364 vlan-id 364
set interfaces xe-1/0/0 unit 364 family inet address 99.99.100.1/30
set interfaces xe-1/0/0 unit 365 vlan-id 365
set interfaces xe-1/0/0 unit 365 family inet address 99.99.100.5/30
set interfaces xe-1/0/0 unit 366 vlan-id 366
set interfaces xe-1/0/0 unit 366 family inet address 99.99.100.9/30
set interfaces xe-1/0/0 unit 367 vlan-id 367
set interfaces xe-1/0/0 unit 367 family inet address 99.99.100.13/30
set interfaces xe-1/0/0 unit 368 vlan-id 368
set interfaces xe-1/0/0 unit 368 family inet address 99.99.100.17/30
set interfaces xe-1/0/0 unit 369 vlan-id 369
set interfaces xe-1/0/0 unit 369 family inet address 99.99.100.21/30
set interfaces xe-1/0/0 unit 370 vlan-id 370
set interfaces xe-1/0/0 unit 370 family inet address 99.99.100.25/30
set interfaces xe-1/0/0 unit 371 vlan-id 371
set interfaces xe-1/0/0 unit 371 family inet address 99.99.100.29/30
set interfaces xe-1/0/0 unit 372 vlan-id 372
set interfaces xe-1/0/0 unit 372 family inet address 99.99.100.33/30
set interfaces xe-1/0/0 unit 373 vlan-id 373
set interfaces xe-1/0/0 unit 373 family inet address 99.99.100.37/30
set interfaces xe-1/0/0 unit 374 vlan-id 374
set interfaces xe-1/0/0 unit 374 family inet address 99.99.100.41/30
set interfaces xe-1/0/0 unit 375 vlan-id 375
set interfaces xe-1/0/0 unit 375 family inet address 99.99.100.45/30
set interfaces xe-1/0/0 unit 376 vlan-id 376
set interfaces xe-1/0/0 unit 376 family inet address 99.99.100.49/30
set interfaces xe-1/0/0 unit 377 vlan-id 377
set interfaces xe-1/0/0 unit 377 family inet address 99.99.100.53/30
set interfaces xe-1/0/0 unit 378 vlan-id 378
set interfaces xe-1/0/0 unit 378 family inet address 99.99.100.57/30
set interfaces xe-1/0/0 unit 379 vlan-id 379
set interfaces xe-1/0/0 unit 379 family inet address 99.99.100.61/30
set interfaces xe-1/0/0 unit 380 vlan-id 380
set interfaces xe-1/0/0 unit 380 family inet address 99.99.100.65/30
set interfaces xe-1/0/0 unit 381 vlan-id 381
set interfaces xe-1/0/0 unit 381 family inet address 99.99.100.69/30
set interfaces xe-1/0/0 unit 382 vlan-id 382
set interfaces xe-1/0/0 unit 382 family inet address 99.99.100.73/30
set interfaces xe-1/0/0 unit 383 vlan-id 383
set interfaces xe-1/0/0 unit 383 family inet address 99.99.100.77/30
set interfaces xe-1/0/0 unit 384 vlan-id 384
set interfaces xe-1/0/0 unit 384 family inet address 99.99.100.81/30
set interfaces xe-1/0/0 unit 385 vlan-id 385
set interfaces xe-1/0/0 unit 385 family inet address 99.99.100.85/30
set interfaces xe-1/0/0 unit 386 vlan-id 386
set interfaces xe-1/0/0 unit 386 family inet address 99.99.100.89/30
set interfaces xe-1/0/0 unit 387 vlan-id 387
set interfaces xe-1/0/0 unit 387 family inet address 99.99.100.93/30
set interfaces xe-1/0/0 unit 388 vlan-id 388
set interfaces xe-1/0/0 unit 388 family inet address 99.99.100.97/30
set interfaces xe-1/0/0 unit 389 vlan-id 389
set interfaces xe-1/0/0 unit 389 family inet address 99.99.100.101/30
set interfaces xe-1/0/0 unit 390 vlan-id 390
set interfaces xe-1/0/0 unit 390 family inet address 99.99.100.105/30
set interfaces xe-1/0/0 unit 391 vlan-id 391
set interfaces xe-1/0/0 unit 391 family inet address 99.99.100.109/30
set interfaces xe-1/0/0 unit 392 vlan-id 392
set interfaces xe-1/0/0 unit 392 family inet address 99.99.100.113/30
set interfaces xe-1/0/0 unit 393 vlan-id 393
set interfaces xe-1/0/0 unit 393 family inet address 99.99.100.117/30
set interfaces xe-1/0/0 unit 394 vlan-id 394
set interfaces xe-1/0/0 unit 394 family inet address 99.99.100.121/30
set interfaces xe-1/0/0 unit 395 vlan-id 395
set interfaces xe-1/0/0 unit 395 family inet address 99.99.100.125/30
set interfaces xe-1/0/0 unit 396 vlan-id 396
set interfaces xe-1/0/0 unit 396 family inet address 99.99.100.129/30
set interfaces xe-1/0/0 unit 397 vlan-id 397
set interfaces xe-1/0/0 unit 397 family inet address 99.99.100.133/30
set interfaces xe-1/0/0 unit 398 vlan-id 398
set interfaces xe-1/0/0 unit 398 family inet address 99.99.100.137/30
set interfaces xe-1/0/0 unit 399 vlan-id 399
set interfaces xe-1/0/0 unit 399 family inet address 99.99.100.141/30
set interfaces af0 flexible-vlan-tagging
set interfaces af0 mtu 9224
set interfaces af0 encapsulation flexible-ethernet-services
set interfaces af0 unit 72 vlan-id 72
set interfaces af0 unit 72 family inet address 72.0.0.2/30
set interfaces af0 unit 72 family iso
set interfaces af0 unit 72 family inet6 address fec0::71.0.0.2/126
set interfaces af0 unit 72 family mpls
set interfaces lo0 unit 0 family inet address 72.255.255.2/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0002.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.2/128
set routing-options nonstop-routing
set routing-options static route 10.0.0.0/8 next-hop 172.30.181.1
set routing-options static route 10.0.0.0/8 no-readvertise
set routing-options static route 172.16.0.0/12 next-hop 172.30.181.1
set routing-options static route 172.16.0.0/12 no-readvertise
set routing-options static route 192.168.0.0/16 next-hop 172.30.181.1
set routing-options static route 192.168.0.0/16 no-readvertise
set routing-options aggregate route 0.0.0.0/0 policy AGGR
set routing-options aggregate route 0.0.0.0/0 as-path origin igp
set routing-options router-id 72.255.255.2
set routing-options autonomous-system 65203
set protocols bgp precision-timers
set protocols bgp group eBGP family inet unicast
set protocols bgp group eBGP family inet6 unicast
set protocols bgp group eBGP export ADV-MINE
set protocols bgp group eBGP neighbor 99.99.99.2 peer-as 65400
set protocols bgp group eBGP neighbor 99.99.99.6 peer-as 65401
set protocols bgp group eBGP neighbor 99.99.99.10 peer-as 65402
set protocols bgp group eBGP neighbor 99.99.99.14 peer-as 65403
set protocols bgp group eBGP neighbor 99.99.99.18 peer-as 65404
set protocols bgp group eBGP neighbor 99.99.99.22 peer-as 65405
set protocols bgp group eBGP neighbor 99.99.99.26 peer-as 65406
set protocols bgp group eBGP neighbor 99.99.99.30 peer-as 65407
set protocols bgp group eBGP neighbor 99.99.99.34 peer-as 65408
set protocols bgp group eBGP neighbor 99.99.99.38 peer-as 65409
set protocols bgp group eBGP neighbor 99.99.99.42 peer-as 65410
set protocols bgp group eBGP neighbor 99.99.99.46 peer-as 65411
set protocols bgp group eBGP neighbor 99.99.99.50 peer-as 65412
set protocols bgp group eBGP neighbor 99.99.99.54 peer-as 65413
set protocols bgp group eBGP neighbor 99.99.99.58 peer-as 65414
set protocols bgp group eBGP neighbor 99.99.99.62 peer-as 65415
set protocols bgp group eBGP neighbor 99.99.99.66 peer-as 65416
set protocols bgp group eBGP neighbor 99.99.99.70 peer-as 65417
set protocols bgp group eBGP neighbor 99.99.99.74 peer-as 65418
set protocols bgp group eBGP neighbor 99.99.99.78 peer-as 65419
set protocols bgp group eBGP neighbor 99.99.99.82 peer-as 65420
set protocols bgp group eBGP neighbor 99.99.99.86 peer-as 65421
set protocols bgp group eBGP neighbor 99.99.99.90 peer-as 65422
set protocols bgp group eBGP neighbor 99.99.99.94 peer-as 65423
set protocols bgp group eBGP neighbor 99.99.99.98 peer-as 65424
set protocols bgp group eBGP neighbor 99.99.99.102 peer-as 65425
set protocols bgp group eBGP neighbor 99.99.99.106 peer-as 65426
set protocols bgp group eBGP neighbor 99.99.99.110 peer-as 65427
set protocols bgp group eBGP neighbor 99.99.99.114 peer-as 65428
set protocols bgp group eBGP neighbor 99.99.99.118 peer-as 65429
set protocols bgp group eBGP neighbor 99.99.99.122 peer-as 65430
set protocols bgp group eBGP neighbor 99.99.99.126 peer-as 65431
set protocols bgp group eBGP neighbor 99.99.99.130 peer-as 65432
set protocols bgp group eBGP neighbor 99.99.99.134 peer-as 65433
set protocols bgp group eBGP neighbor 99.99.99.138 peer-as 65434
set protocols bgp group eBGP neighbor 99.99.99.142 peer-as 65435
set protocols bgp group eBGP neighbor 99.99.99.146 peer-as 65436
set protocols bgp group eBGP neighbor 99.99.99.150 peer-as 65437
set protocols bgp group eBGP neighbor 99.99.99.154 peer-as 65438
set protocols bgp group eBGP neighbor 99.99.99.158 peer-as 65439
set protocols bgp group eBGP neighbor 99.99.99.162 peer-as 65440
set protocols bgp group eBGP neighbor 99.99.99.166 peer-as 65441
set protocols bgp group eBGP neighbor 99.99.99.170 peer-as 65442
set protocols bgp group eBGP neighbor 99.99.99.174 peer-as 65443
set protocols bgp group eBGP neighbor 99.99.99.178 peer-as 65444
set protocols bgp group eBGP neighbor 99.99.99.182 peer-as 65445
set protocols bgp group eBGP neighbor 99.99.99.186 peer-as 65446
set protocols bgp group eBGP neighbor 99.99.99.190 peer-as 65447
set protocols bgp group eBGP neighbor 99.99.99.194 peer-as 65448
set protocols bgp group eBGP neighbor 99.99.99.198 peer-as 65449
set protocols bgp group eBGP neighbor 99.99.99.202 peer-as 65450
set protocols bgp group eBGP neighbor 99.99.99.206 peer-as 65451
set protocols bgp group eBGP neighbor 99.99.99.210 peer-as 65452
set protocols bgp group eBGP neighbor 99.99.99.214 peer-as 65453
set protocols bgp group eBGP neighbor 99.99.99.218 peer-as 65454
set protocols bgp group eBGP neighbor 99.99.99.222 peer-as 65455
set protocols bgp group eBGP neighbor 99.99.99.226 peer-as 65456
set protocols bgp group eBGP neighbor 99.99.99.230 peer-as 65457
set protocols bgp group eBGP neighbor 99.99.99.234 peer-as 65458
set protocols bgp group eBGP neighbor 99.99.99.238 peer-as 65459
set protocols bgp group eBGP neighbor 99.99.99.242 peer-as 65460
set protocols bgp group eBGP neighbor 99.99.99.246 peer-as 65461
set protocols bgp group eBGP neighbor 99.99.99.250 peer-as 65462
set protocols bgp group eBGP neighbor 99.99.99.254 peer-as 65463
set protocols bgp group eBGP neighbor 99.99.100.2 peer-as 65464
set protocols bgp group eBGP neighbor 99.99.100.6 peer-as 65465
set protocols bgp group eBGP neighbor 99.99.100.10 peer-as 65466
set protocols bgp group eBGP neighbor 99.99.100.14 peer-as 65467
set protocols bgp group eBGP neighbor 99.99.100.18 peer-as 65468
set protocols bgp group eBGP neighbor 99.99.100.22 peer-as 65469
set protocols bgp group eBGP neighbor 99.99.100.26 peer-as 65470
set protocols bgp group eBGP neighbor 99.99.100.30 peer-as 65471
set protocols bgp group eBGP neighbor 99.99.100.34 peer-as 65472
set protocols bgp group eBGP neighbor 99.99.100.38 peer-as 65473
set protocols bgp group eBGP neighbor 99.99.100.42 peer-as 65474
set protocols bgp group eBGP neighbor 99.99.100.46 peer-as 65475
set protocols bgp group eBGP neighbor 99.99.100.50 peer-as 65476
set protocols bgp group eBGP neighbor 99.99.100.54 peer-as 65477
set protocols bgp group eBGP neighbor 99.99.100.58 peer-as 65478
set protocols bgp group eBGP neighbor 99.99.100.62 peer-as 65479
set protocols bgp group eBGP neighbor 99.99.100.66 peer-as 65480
set protocols bgp group eBGP neighbor 99.99.100.70 peer-as 65481
set protocols bgp group eBGP neighbor 99.99.100.74 peer-as 65482
set protocols bgp group eBGP neighbor 99.99.100.78 peer-as 65483
set protocols bgp group eBGP neighbor 99.99.100.82 peer-as 65484
set protocols bgp group eBGP neighbor 99.99.100.86 peer-as 65485
set protocols bgp group eBGP neighbor 99.99.100.90 peer-as 65486
set protocols bgp group eBGP neighbor 99.99.100.94 peer-as 65487
set protocols bgp group eBGP neighbor 99.99.100.98 peer-as 65488
set protocols bgp group eBGP neighbor 99.99.100.102 peer-as 65489
set protocols bgp group eBGP neighbor 99.99.100.106 peer-as 65490
set protocols bgp group eBGP neighbor 99.99.100.110 peer-as 65491
set protocols bgp group eBGP neighbor 99.99.100.114 peer-as 65492
set protocols bgp group eBGP neighbor 99.99.100.118 peer-as 65493
set protocols bgp group eBGP neighbor 99.99.100.122 peer-as 65494
set protocols bgp group eBGP neighbor 99.99.100.126 peer-as 65495
set protocols bgp group eBGP neighbor 99.99.100.130 peer-as 65496
set protocols bgp group eBGP neighbor 99.99.100.134 peer-as 65497
set protocols bgp group eBGP neighbor 99.99.100.138 peer-as 65498
set protocols bgp group eBGP neighbor 99.99.100.142 peer-as 65499
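The eBGP neighbor list above follows a strictly regular pattern: each peer sits on a consecutive /30 (neighbor addresses 4 apart, starting at 99.99.99.2) and the peer AS increments by one from 65400. As a sketch only (this helper is not part of the lab itself), the whole block of statements can be regenerated in Python instead of typed by hand:

```python
import ipaddress

def ebgp_neighbor_lines(first_subnet="99.99.99.0/30", first_as=65400, count=100):
    """Emit one 'set protocols bgp ...' statement per consecutive /30 peering."""
    base = int(ipaddress.ip_network(first_subnet).network_address)
    lines = []
    for i in range(count):
        # The neighbor is host .2 of the i-th /30 (network + 2), AS increments by 1.
        peer = ipaddress.ip_address(base + 4 * i + 2)
        lines.append(f"set protocols bgp group eBGP neighbor {peer} peer-as {first_as + i}")
    return lines

for line in ebgp_neighbor_lines():
    print(line)
```

Note how the address arithmetic carries across the octet boundary automatically: the peer after 99.99.99.254 (AS 65463) is 99.99.100.2 (AS 65464), matching the listing.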
set protocols bgp group iBGP type internal
set protocols bgp group iBGP local-address 72.255.255.2
set protocols bgp group iBGP import iBGP-TAG
set protocols bgp group iBGP family inet unicast
set protocols bgp group iBGP family inet6 unicast
set protocols bgp group iBGP export EXPORT-DEFAULT
set protocols bgp group iBGP neighbor 72.255.255.1
set protocols isis reference-bandwidth 100g
set protocols isis level 1 disable
set protocols isis level 2 wide-metrics-only
set protocols isis interface xe-1/0/0.10 passive
deactivate protocols isis interface xe-1/0/0.10
set protocols isis interface af0.72 point-to-point
set protocols isis interface lo0.0 passive
set protocols layer2-control nonstop-bridging
set policy-options policy-statement ADV-MINE term OK from protocol bgp
set policy-options policy-statement ADV-MINE term OK from community INTERNAL
set policy-options policy-statement ADV-MINE term OK then community delete ALL
set policy-options policy-statement ADV-MINE term OK then accept
set policy-options policy-statement ADV-MINE term KO then reject
set policy-options policy-statement AGGR term OK from protocol bgp
set policy-options policy-statement AGGR term OK then accept
set policy-options policy-statement AGGR term KO then reject
set policy-options policy-statement EXPORT-DEFAULT term OK from route-filter 0.0.0.0/0 exact
set policy-options policy-statement EXPORT-DEFAULT term OK then accept
set policy-options policy-statement EXPORT-DEFAULT term KO then reject
set policy-options policy-statement iBGP-TAG term OK from protocol bgp
set policy-options policy-statement iBGP-TAG term OK then community add INTERNAL
set policy-options policy-statement iBGP-TAG term OK then accept
set policy-options community ALL members .*:.*
set policy-options community INTERNAL members 65203:100
CONFIG-960-4-STANDALONE
The starting configuration for Chapter 5:
set version 18.3R1.9
set groups re0 system host-name MX960-4-RE0
set groups re0 system backup-router 172.30.177.1
set groups re0 system backup-router destination 172.30.176.0/20
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.178.71/24
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.177.196/24 master-only
set groups re1 system host-name MX960-4-RE1
set groups re1 system backup-router 172.30.177.1
set groups re1 system backup-router destination 172.30.176.0/20
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.178.72/24
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.177.196/24 master-only
set groups isis-mpls interfaces <*-*> unit <*> family iso
set groups isis-mpls interfaces <*-*> unit <*> family mpls
set apply-groups re0
set apply-groups re1
set system commit fast-synchronize
set system commit synchronize
set system commit persist-groups-inheritance
set system configuration-database max-db-size 629145600
set system login user magno uid 2001
set system login user magno class super-user
set system login user magno authentication encrypted-password " -- SNIP -- "
set system login user magno authentication ssh-rsa "ssh-rsa -- SNIP -- "
set system login user remote uid 2000
set system login user remote class super-user
set system root-authentication encrypted-password " -- SNIP -- "
set system domain-name poc-nl.jnpr.net
set system backup-router 172.30.177.1
set system backup-router destination 172.30.176.0/20
set system time-zone Europe/Amsterdam
set system authentication-order password
set system authentication-order radius
set system name-server 172.30.207.10
set system name-server 172.30.207.13
set system radius-server 172.30.176.9 secret "$9$DMHPTz36CtOqmBEclLXik.mfT6/t1Eyn/"
set system radius-server 172.30.176.9 retry 3
set system radius-server 172.30.177.4 secret "$9$CgY9p1EcylvWx0B7VwgUDtuOBIEleWNVYre"
set system radius-server 172.30.177.4 retry 3
set system dynamic-profile-options versioning
set system services ftp
set system services ssh root-login allow
set system services ssh max-sessions-per-connection 32
set system services ssh client-alive-interval 120
set system services telnet
set system services xnm-clear-text
set system services netconf ssh
set system services netconf yang-modules device-specific
set system services netconf yang-modules emit-extensions
set system services rest http
set system services rest enable-explorer
set system services web-management http
set system services subscriber-management enable
set system syslog user * any emergency
set system syslog host 172.30.189.13 any notice
set system syslog host 172.30.189.13 authorization info
set system syslog host 172.30.189.13 interactive-commands info
set system syslog host 172.30.189.14 any notice
set system syslog host 172.30.189.14 authorization info
set system syslog host 172.30.189.14 interactive-commands info
set system syslog file messages any notice
set system syslog file messages authorization info
set system compress-configuration-files
set system ntp boot-server 172.30.207.10
set system ntp server 172.30.207.10
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" no-traps
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" proxy-arp
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" vlan-tags outer "$junos-stacked-vlan-id"
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" vlan-tags inner "$junos-vlan-id"
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" demux-options underlying-interface "$junos-underlying-interface"
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address lo0.0
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address preferred-source-address 100.100.255.254
set dynamic-profiles DP-IP-DEMUX interfaces demux0 unit "$junos-interface-unit" demux-options underlying-interface "$junos-underlying-interface"
set dynamic-profiles DP-IP-DEMUX interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address lo0.0
set chassis redundancy failover on-loss-of-keepalives
set chassis redundancy failover not-on-disk-underperform
set chassis redundancy graceful-switchover
set chassis fabric redundancy-mode increased-bandwidth
set chassis fpc 0 pic 0 pic-mode 100G
set chassis fpc 0 pic 1 pic-mode 100G
set chassis network-services enhanced-ip
set chassis network-slices guest-network-functions gnf 3 description "Single-GNF Setup"
set chassis network-slices guest-network-functions gnf 3 fpcs 1
set chassis network-slices guest-network-functions gnf 3 fpcs 6
set interfaces xe-1/0/0 description "Link to IXIA LC 7 Port 8"
set interfaces xe-1/0/0 flexible-vlan-tagging
set interfaces xe-1/0/0 encapsulation flexible-ethernet-services
set interfaces xe-1/0/0 unit 10 vlan-id 10
set interfaces xe-1/0/0 unit 10 family inet address 99.99.10.1/24
set interfaces xe-1/0/0 unit 300 vlan-id 300
--- SNIP --- // See previous for BGP peering interface configuration //
set interfaces xe-6/0/0 flexible-vlan-tagging
set interfaces xe-6/0/0 auto-configure stacked-vlan-ranges dynamic-profile DP-AUTO-VLAN accept dhcp-v4
set interfaces xe-6/0/0 auto-configure stacked-vlan-ranges dynamic-profile DP-AUTO-VLAN ranges 1000-2000,any
set interfaces xe-6/0/0 auto-configure remove-when-no-subscribers
set interfaces xe-6/0/0 encapsulation flexible-ethernet-services
set interfaces lo0 unit 0 family inet address 72.255.255.1/32
set interfaces lo0 unit 0 family inet address 100.100.255.254/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0001.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.1/128
set snmp location "AMS, EPOC location=3.09"
set snmp contact "emea-poc@juniper.net"
set snmp community public authorization read-only
set snmp community public clients 172.30.0.0/16
set snmp community public clients 0.0.0.0/0 restrict
set snmp community private authorization read-write
set snmp community private clients 172.30.0.0/16
set snmp community private clients 0.0.0.0/0 restrict
set snmp trap-options source-address 172.30.177.196
set snmp trap-group space targets 172.30.176.140
set forwarding-options dhcp-relay server-group DHCPv4 99.99.10.10
set forwarding-options dhcp-relay group DHCPv4-ACTIVE active-server-group DHCPv4
set forwarding-options dhcp-relay group DHCPv4-ACTIVE interface xe-6/0/0.0
set forwarding-options dhcp-relay no-snoop
set accounting-options periodic-refresh disable
set routing-options nonstop-routing
set routing-options static route 172.16.0.0/12 next-hop 172.30.177.1
set routing-options static route 172.16.0.0/12 no-readvertise
set routing-options static route 100.100.0.0/16 discard
set routing-options router-id 72.255.255.1
set routing-options autonomous-system 65203
set protocols bgp precision-timers
set protocols bgp group eBGP family inet unicast
set protocols bgp group eBGP family inet6 unicast
set protocols bgp group eBGP export BBE-POOL
set protocols bgp group eBGP neighbor 99.99.99.2 peer-as 65400
--- SNIP --- // See previous for eBGP peering configurations //
set protocols layer2-control nonstop-bridging
set policy-options policy-statement BBE-POOL term OK from protocol static
set policy-options policy-statement BBE-POOL term OK from route-filter 100.100.0.0/16 exact
set policy-options policy-statement BBE-POOL term OK then accept
set policy-options community ALL members .*:.*
set policy-options community INTERNAL members 65203:100
set access profile NOAUTH authentication-order none
CONFIG-960-4-GNF-FINAL
The configuration once the MX960 was turned into a single GNF (Chapter 5):
set version 18.3R1.9
set groups re0 system host-name MX960-4-GNF-RE0
set groups re0 system backup-router 172.30.181.1
set groups re0 system backup-router destination 172.16.0.0/12
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.176/24
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.175/24 master-only
set groups re1 system host-name MX960-4-GNF-RE1
set groups re1 system backup-router 172.30.177.1
set groups re1 system backup-router destination 172.16.0.0/12
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.177/24
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.175/24 master-only
set apply-groups re0
set apply-groups re1
set system commit fast-synchronize
set system commit synchronize
set system commit persist-groups-inheritance
set system configuration-database max-db-size 629145600
set system login user magno uid 2001
set system login user magno class super-user
set system login user magno authentication encrypted-password "$6$ENdnoKrZ$qLSWO5899HXEDuRYFYL0alWe1U0dmSXW0mWkMWTxOsrNQmL940pvLgUVoaCBU7.FQxzLxBiI3y271FS4cAAVr0"
set system login user magno authentication ssh-rsa "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDI21uVGR6oQGBUjU5yB+MkyDDOBbegLzGleGAlQnuNVe7RrhJ7XsEl/XG75xy/ifv/R8ck21/6iQvaJC1pydOJFXqfY/YvD8L1jbGRtkEv5F0HnaOhiOeYer4C2aqgu0I38YSdQftigFm0Gx8R0qTXZYvmkykgEHvCkzDvFUd6NHC2sITMFysZdsah9US/Av6uokPMfG1z+cdoE2SdfKHfb2W6LJzl9EmhQPVE7nWySmKVnCMizG8YmNjw2RmCScVbmUzLz8/DmoT2EL1qT0fsP9teyK0+6oKRHQGMPFA76/J1RfmPsugswbAI04fdpyQCZ2WFaA26Bn5lgxgxXm/N mmagnani@mmagnani-mbp15"
set system login user remote uid 2000
set system login user remote class super-user
set system root-authentication encrypted-password "$6$WFIYvOs8$RbSrJggMYcgpEMDjHe0FHHTvmMElAWMUhkFvlaw.BH1SIvhC5nfAZDSbDBKQlOwW.nORYh3VHU8TIExC9t3JC/"
set system domain-name poc-nl.jnpr.net
set system backup-router 172.30.177.1
set system backup-router destination 172.30.176.0/20
set system time-zone Europe/Amsterdam
set system authentication-order password
set system authentication-order radius
set system name-server 172.30.207.10
set system name-server 172.30.207.13
set system radius-server 172.30.176.9 secret "$9$DMHPTz36CtOqmBEclLXik.mfT6/t1Eyn/"
set system radius-server 172.30.176.9 retry 3
set system radius-server 172.30.177.4 secret "$9$CgY9p1EcylvWx0B7VwgUDtuOBIEleWNVYre"
set system radius-server 172.30.177.4 retry 3
set system dynamic-profile-options versioning
set system services ftp
set system services ssh root-login allow
set system services ssh max-sessions-per-connection 32
set system services ssh client-alive-interval 120
set system services telnet
set system services xnm-clear-text
set system services netconf ssh
set system services rest http
set system services rest enable-explorer
set system services web-management http
set system services subscriber-management enable
set system syslog user * any emergency
set system syslog host 172.30.189.13 any notice
set system syslog host 172.30.189.13 authorization info
set system syslog host 172.30.189.13 interactive-commands info
set system syslog host 172.30.189.14 any notice
set system syslog host 172.30.189.14 authorization info
set system syslog host 172.30.189.14 interactive-commands info
set system syslog file messages any notice
set system syslog file messages authorization info
set system syslog file messages match-strings "!*0x44b*"
set system syslog file default-log-messages any info
set system syslog file default-log-messages match "(requested 'commit' operation)|(requested 'commit synchronize' operation)|(copying configuration to juniper.save)|(commit complete)|ifAdminStatus|(FRU power)|(FRU removal)|(FRU insertion)|(link UP)|transitioned|Transferred|transfer-file|(license add)|(license delete)|(package -X update)|(package -X delete)|(FRU Online)|(FRU Offline)|(plugged in)|(unplugged)|CFMD_CCM_DEFECT| LFMD_3AH | RPD_MPLS_PATH_BFD|(Master Unchanged, Members Changed)|(Master Changed, Members Changed)|(Master Detected, Members Changed)|(vc add)|(vc delete)|(Master detected)|(Master changed)|(Backup detected)|(Backup changed)|(interface vcp-)|BR_INFRA_DEVICE"
set system syslog file default-log-messages structured-data
set system compress-configuration-files
set system ntp boot-server 172.30.207.10
set system ntp server 172.30.207.10
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" no-traps
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" proxy-arp
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" vlan-tags outer "$junos-stacked-vlan-id"
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" vlan-tags inner "$junos-vlan-id"
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" demux-options underlying-interface "$junos-underlying-interface"
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address lo0.0
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address preferred-source-address 100.100.255.254
set dynamic-profiles DP-IP-DEMUX interfaces demux0 unit "$junos-interface-unit" demux-options underlying-interface "$junos-underlying-interface"
set dynamic-profiles DP-IP-DEMUX interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address lo0.0
set chassis redundancy failover on-loss-of-keepalives
set chassis redundancy failover not-on-disk-underperform
set chassis redundancy graceful-switchover
set chassis fpc 0 pic 0 pic-mode 100G
set chassis fpc 0 pic 1 pic-mode 100G
set chassis network-services enhanced-ip
set chassis network-slices
set interfaces xe-1/0/0 description "Link to IXIA LC 7 Port 8"
set interfaces xe-1/0/0 flexible-vlan-tagging
set interfaces xe-1/0/0 encapsulation flexible-ethernet-services
set interfaces xe-1/0/0 unit 10 vlan-id 10
set interfaces xe-1/0/0 unit 10 family inet address 99.99.10.1/24
set interfaces xe-1/0/0 unit 300 vlan-id 300
--- SNIP --- // See previous for BGP peering interface configuration //
set interfaces xe-6/0/0 flexible-vlan-tagging
set interfaces xe-6/0/0 auto-configure stacked-vlan-ranges dynamic-profile DP-AUTO-VLAN accept dhcp-v4
set interfaces xe-6/0/0 auto-configure stacked-vlan-ranges dynamic-profile DP-AUTO-VLAN ranges 1000-2000,any
set interfaces xe-6/0/0 auto-configure remove-when-no-subscribers
set interfaces xe-6/0/0 encapsulation flexible-ethernet-services
set interfaces lo0 unit 0 family inet address 72.255.255.1/32
set interfaces lo0 unit 0 family inet address 100.100.255.254/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0001.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.1/128
set snmp location "AMS, EPOC location=3.09"
set snmp contact "emea-poc@juniper.net"
set snmp community public authorization read-only
set snmp community public clients 172.30.0.0/16
set snmp community public clients 0.0.0.0/0 restrict
set snmp community private authorization read-write
set snmp community private clients 172.30.0.0/16
set snmp community private clients 0.0.0.0/0 restrict
set snmp trap-options source-address 172.30.177.196
set snmp trap-group space targets 172.30.176.140
set forwarding-options dhcp-relay server-group DHCPv4 99.99.10.10
set forwarding-options dhcp-relay group DHCPv4-ACTIVE active-server-group DHCPv4
set forwarding-options dhcp-relay group DHCPv4-ACTIVE interface xe-6/0/0.0
set forwarding-options dhcp-relay no-snoop
set accounting-options periodic-refresh disable
set routing-options nonstop-routing
set routing-options static route 172.16.0.0/12 next-hop 172.30.181.1
set routing-options static route 172.16.0.0/12 no-readvertise
set routing-options static route 100.100.0.0/16 discard
set routing-options router-id 72.255.255.1
set routing-options autonomous-system 65203
set protocols bgp precision-timers
set protocols bgp group eBGP family inet unicast
set protocols bgp group eBGP family inet6 unicast
set protocols bgp group eBGP export BBE-POOL
set protocols bgp group eBGP neighbor 99.99.99.2 peer-as 65400
--- SNIP --- // See previous for eBGP peering configurations //
set protocols layer2-control nonstop-bridging
set policy-options policy-statement BBE-POOL term OK from protocol static
set policy-options policy-statement BBE-POOL term OK from route-filter 100.100.0.0/16 exact
set policy-options policy-statement BBE-POOL term OK then accept
set policy-options community ALL members .*:.*
set policy-options community INTERNAL members 65203:100
set access profile NOAUTH authentication-order none