
VSP Release Notes

Release 5.4.1

3HE14935AAAA

February 20, 2019


CONTENTS

1 About this Document 2


1.1 Validity of this Document . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Technical Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 List of Technical Publications for Current VSP Release . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Nuage VSP Release 5.4.1 Software Archives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5.1 VSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5.2 VSC/VSG/VSA/VSA-8/WBX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5.3 VRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5.4 Integrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
SCVMM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5.5 VSPK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2 Requirements 6
2.1 Nuage VSP Platform Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.1 VSD Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Further Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
VSD Stats VM Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
VSD Statistics Storage / Elasticsearch VM Requirements . . . . . . . . . . . . . . . . . . . . 8
VSD Architect Supported Browsers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.2 VSC Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.3 VCIN Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1.4 VRS Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
KVM Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
ESXi Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Docker Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Hyper-V Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
AVRS (Accelerated VRS) Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
VRS-B Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.5 CMS Integrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Hypervisor/CMS Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Hypervisor/CMS Compatibility Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Requirements for VRS VM on VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

3 New Features 14
3.1 New Features/Enhancements in Release 5.3.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.1.1 VSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
VSD ProxySQL and VSD Installation script . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3.1.2 VCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Laddered MC-LAG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.1.3 Integrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Hyper-V . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.1.4 VSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Top Talkers Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
ACL Analytics Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Virtual Firewall Rule Generation Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . 23
Contextual Flow Visibility Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2 New Features/Enhancements in Release 5.3.2 U2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2.1 SR-IOV Support on OSPD13 and OpenStack Queens . . . . . . . . . . . . . . . . . . . . . 24
3.3 New Features/Enhancements in Release 5.3.2 U1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3.1 VCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
BFD Static Routes IPv4 and IPv6 on AVRS . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3.2 Integrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.4 New Features/Enhancements in Release 5.3.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.4.1 VSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Resource Utilization Statistics Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Decoupled Key Server Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
IPv6 Support for VSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
VSD and ES Operating Systems Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Statistics Deployment on Separate Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.4.2 VCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
BGP PE-CE v6 (WBX/VSG) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
BFD BGP PE-CE IPv6 (WBX/VSG) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
ECMP64 Overlay on WBX for BGP IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
BGP PE-CE IPv6 on VRS/AVRS/VRS-G . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
BFD BGP PE-CE IPv6 on VRS/AVRS/VRS-G . . . . . . . . . . . . . . . . . . . . . . . . . 26
BFD Static Routes IPv4 and IPv6 on VRS/AVRS/VRS-G (BETA) . . . . . . . . . . . . . . . 26
7750 Hardware VTEP from VSD using NETCONF . . . . . . . . . . . . . . . . . . . . . . . 26
SSH Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
FIP Route Precedence (ignore default route) . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
VSG/WBX Deterministic Hold Timers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
IPv6 Connectivity for Management Traffic . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.4.3 Integrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.4.4 Security/VSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
VSS IPSec Group-Key Encryption for Linux Container Workloads . . . . . . . . . . . . . . . 29
VSS for Non-overlay Datacenter Network Environments (BETA) . . . . . . . . . . . . . . . 29
VSS Flow Collection Scalability: Enhancements . . . . . . . . . . . . . . . . . . . . . . . . 30
Policy Group Category . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Virtual Firewall Rule and Service Enhancements . . . . . . . . . . . . . . . . . . . . . . . . 30
3.5 New Features/Enhancements in Release 5.3.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.5.1 VCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
BGP PE-CE IPv4 Enhancements on VRS/AVRS/VRS-G . . . . . . . . . . . . . . . . . . . . 30
ECMP64 Underlay on WBX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
VPRN Underlay Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.5.2 Integrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.5.3 Security/VSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Contextual Flow Visibility Without ACL Dependency . . . . . . . . . . . . . . . . . . . . . 32
Support for Redirected Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Policy Group Assignment Based on Flow Search . . . . . . . . . . . . . . . . . . . . . . . . 32
Virtual Firewall Rule Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.5.4 BGP PE-CE IPv4 Enhancements on VRS/AVRS/VRS-G . . . . . . . . . . . . . . . . . . . 32
3.5.5 BFD BGP PE-CE IPv4 on VRS/AVRS/VRS-G . . . . . . . . . . . . . . . . . . . . . . . . 32
3.5.6 BGP PE-CE IPv6 on VRS/AVRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.5.7 BFD BGP PE-CE IPv6 on VRS/AVRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.5.8 BGP PE-CE IPv6 on VSG/WBX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.5.9 BFD BGP PE-CE IPv6 on VSG/WBX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.5.10 ECMP64 for BGP PE-CE IPv6 on VSG/WBX . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.5.11 OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Multiple Fixed IPs (IPv4 and IPv6) From Same Subnet . . . . . . . . . . . . . . . . . . . . . 33
Allowed Address Pair (IPv4 and IPv6) Support for SR-IOV . . . . . . . . . . . . . . . . . . 33
Spoofing Support for Ports in VSD-managed Subnets . . . . . . . . . . . . . . . . . . . . . . 33
Multiple VSD-managed IPv4 Subnets per Network . . . . . . . . . . . . . . . . . . . . . . . 33
3.6 New Features/Enhancements in Release 5.2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.6.1 VCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
V6 VIP on VRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.6.2 No-global-prepend-AS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
BFD Underlay for VSG/WBX IPv6 Underlay . . . . . . . . . . . . . . . . . . . . . . . . . . 34
BGP PE-CE VRS/VRS-G/AVRS - IPv4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.6.3 Integrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Hyper-V . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.6.4 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Mitigation for Kernel Side-Channel Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.6.5 VSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Per Enterprise/Domain Flow Collection Setting . . . . . . . . . . . . . . . . . . . . . . . . . 36
Support for Underlay Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Flow Analytics UI Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Virtual Firewall Rule Generation Improvements . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.7 New Features/Enhancements in Release 5.2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.7.1 VCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
VSG/WBX: Manual EVPN Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
VSG/WBX: BFD Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
WBX: ECMP 64 in the Overlay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
VSG/WBX: VIP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Dual VTEP Uplink Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.7.2 Integrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Hyper-V . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.7.3 Security/VSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Flow Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Policy Generation and Virtual Firewall Rule Management . . . . . . . . . . . . . . . . . . . 40
Security Administrator Role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.8 New Features/Enhancements in Release 5.1.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

3.8.1 VCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
IPv6 OOB Management on WBX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Underlay MAC-Move CLI Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Enhanced Active/Standby VSD Cluster - GA . . . . . . . . . . . . . . . . . . . . . . . . . . 41
VRS on RHEL 7.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
LDAP Group Name Support Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.8.2 VSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Policy Group Expressions Enhancements (Supported for VCS/VRS only) . . . . . . . . . . . 41
Layer 4 Services Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Layer 7 Security (Supported for VNS/NSG only) . . . . . . . . . . . . . . . . . . . . . . . . 41
3.8.3 Integrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Kubernetes & OpenShift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.9 New Features/Enhancements in Release 5.1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.10 VCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.10.1 Active/Standby VSD Cluster (BETA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.10.2 210 WBX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.10.3 Expose Shared Network Enterprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.10.4 EVPN Loop Prevention and MAC Move Control . . . . . . . . . . . . . . . . . . . . . . . 44
3.10.5 VIP on HW (BETA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.10.6 Support for VRS on SUSE Linux (BETA) . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.11 VSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.11.1 VSD Platform Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
TLS 1.2 For JMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
VSD Default Password Change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
LDAP and AD Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.11.2 VSD Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Transparent Proxy/NAT for JMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
AMQP as a JMS alternative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.11.3 Alarms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.11.4 VSD Platform Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.11.5 Elasticsearch Platform Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.12 Integrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.12.1 VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
VRS Agent - Logging Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
VRS Agent - Custom Hostname and VM Name . . . . . . . . . . . . . . . . . . . . . . . . . 46
VRS Agent - Monitor and Redeployment Policy for Disk Usage . . . . . . . . . . . . . . . . 46
VMware Integration - Metadata Changes Handling Without Cold Boot . . . . . . . . . . . . 46
3.12.2 Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Kubernetes & OpenShift Installation using DaemonSets . . . . . . . . . . . . . . . . . . . . 46
iptables kube-proxy for OpenShift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.12.3 Microsoft Hyper-V . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Microsoft WHQL Certification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
VSS support (BETA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
VPort and ACL Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
NIC Teaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.12.4 OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
OpenStack Ocata Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Ironic Alignment with Upstream Ocata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Nuage AVRS Integration with Red Hat OSP10/RHEL 7.3 and Ubuntu OpenStack Ocata/16.04 47
OpenStack-Managed Dual Stack IPv4/IPv6 Support . . . . . . . . . . . . . . . . . . . . . . . 47
SR-IOV with VLAN Support and Automated VSG/WBX Orchestration . . . . . . . . . . . . 47
VMware ESXi Integration with OpenStack Newton and Ocata . . . . . . . . . . . . . . . . . 47
SUSE OpenStack Cloud 7 - Newton (BETA) . . . . . . . . . . . . . . . . . . . . . . . . . . 47

DHCPv6 support for VSD-Managed Virtio Ports (BETA) . . . . . . . . . . . . . . . . . . . . 48
OpenStack SFC Support (BETA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Nuage OpenStack Monolithic Plugin Discontinued . . . . . . . . . . . . . . . . . . . . . . . 48
3.13 Security/VSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.13.1 Policy Group Expressions in ACL (VCS only) . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.13.2 Services and Service Groups (VCS only) . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.13.3 L7 Security for NSG (VNS only) (BETA) . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.13.4 Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.14 New Features/Enhancements in Release 5.0.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.14.1 Core . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
VSP upgrade to 5.0.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.14.2 Kubernetes and OpenShift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Customizable Subnet Size for a Namespace . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.15 New Features/Enhancements in Release 5.0.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.15.1 VSD Infrastructure Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.15.2 Security Policy Scale Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.15.3 VSD Platform Security Hardening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.15.4 VCS: Expose VLANs to VMs on VRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.15.5 IPv6 Overlay for VRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.15.6 OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
OpenStack Newton Support with Full ML2 Mechanism Driver . . . . . . . . . . . . . . . . . 51
OpenStack Ocata Support with Full ML2 Mechanism Driver (BETA) . . . . . . . . . . . . . 52
Dual Stack IPv6 Overlay Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
VLAN Transparency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Trunk and Sub-VPort Support for VLAN-aware VMs . . . . . . . . . . . . . . . . . . . . . . 52
SR-IOV with VLAN Support and Automated VSG Orchestration (BETA) . . . . . . . . . . . 52
Enable Flow Logging and Flow Stats Collection . . . . . . . . . . . . . . . . . . . . . . . . 53
AVRS (Accelerated VRS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.15.7 VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.15.8 Kubernetes & OpenShift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
CNI Plugin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Certificate-based Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.15.9 Microsoft Hyper-V . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Support for OpenStack Newton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
New Workflow for Nuage Add-In for SCVMM . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.15.10 VSP Upgrade to 5.0.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.16 Deprecated Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

4 Upgrade 55
4.1 Supported Upgrade Paths for 5.3.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.1.1 VCS Deployments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Major Upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Minor Upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.2 What’s New in 5.2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.2.1 VSD default CSP user permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.2.2 Procedure to apply the VSD patch on a 5.2.2 VSD . . . . . . . . . . . . . . . . . . . . . . . 56
4.3 What’s New in 5.2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.3.1 Upgrade of Elasticsearch From 2.2 to 5.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.3.2 VSD Certificate Renewal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.4 Resolved Upgrade-related Issues in 5.3.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.5 Resolved Upgrade-related Issues in 5.3.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.6 Resolved Upgrade-related Issues in 5.3.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.7 Resolved Upgrade-related Issues in 5.2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

4.8 Upgrade-related Known Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.9 Upgrade-related Known Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

5 Resolved Issues 60
5.1 Resolved in Release 5.3.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.2 Resolved in Release 5.3.2 U2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.3 Resolved in Release 5.3.2 U1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.4 Resolved in Release 5.3.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.5 Resolved in Release 5.3.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.6 Resolved in Release 5.2.2 U1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.7 Resolved in Release 5.2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.8 Resolved in Release 5.2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.9 Resolved in Release 5.1.2 U4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.10 Resolved in Release 5.1.2 U2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.11 Resolved in Release 5.1.2 U1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.12 Resolved in Release 5.1.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.13 Resolved in Release 5.1.1 U2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.14 Resolved in Release 5.1.1 U1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.15 Resolved in Release 5.0.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.16 Resolved in Release 5.0.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

6 Known Issues 80
6.1 Known Issues First Reported in Release 5.3.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6.2 VSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.2.1 MC-LAG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.3 Miscellaneous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
6.4 VSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.5 VSC and 7850 VSG/VSA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
6.5.1 BGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
6.5.2 CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.5.3 IS-IS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.5.4 Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.5.5 Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.5.6 Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.5.7 System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.6 VRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.7 VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
6.8 OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
6.9 CloudStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.10 OVSDB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.11 Hyper-V . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.12 SCVMM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
6.13 Container Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

7 Known Limitations 101


7.1 Known Limitations in Release 5.3.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
7.2 Known Limitations in Release 5.3.2 U1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
7.3 Known Limitations in Release 5.3.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
7.4 7850 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.5 CMS Integration Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.6 Static Routes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
7.7 VSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
7.8 VRS/VRS-G Data Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
7.9 7850 VSG/VSA Data Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

7.10 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
7.11 RADIUS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
7.12 TACACS+ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.13 VSC and 7850 VSG/VSA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.13.1 CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.13.2 Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7.13.3 Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7.14 SCVMM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
7.15 TCP Authentication Extension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.16 IS-IS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.17 OSPF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.18 BFD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.19 BGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.20 VPRN/2547 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
7.21 OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
7.21.1 Multiple VSD-managed IPv4 Subnets on a Network . . . . . . . . . . . . . . . . . . . . . . 117
7.22 CloudStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
7.23 OpenShift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
7.24 VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
7.25 Hyper-V . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
7.26 210 WBX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
7.27 End-to-End QoS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
7.28 VSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

VSP Release Notes, Release 5.4.1

Release: 5.4.1
Issue: 1
Issue Date: February 20, 2019
Document Number: 3HE14935AAAA

NUAGE NETWORKS – PROPRIETARY & CONFIDENTIAL

This document contains proprietary/trade secret information which is the property of Nokia Corporation. Not to be
made available to, or copied or used by anyone who is not an employee of Nokia Corporation except when there is a
valid non-disclosure agreement in place which covers such information and contains appropriate non-disclosure and
limited use obligations.
This document is protected by copyright. Except as specifically permitted herein, no portion of the provided infor-
mation can be reproduced in any form, or by any means, without prior written permission from Nokia Corporation /
Nuage Networks.
Nuage Networks and the Nuage Networks logo are trademarks of the Nokia group of companies. Nokia is a registered
trademark of Nokia Corporation. Other product and company names mentioned herein may be trademarks or trade
names of their respective owners.
The information presented is subject to change without notice.
Nokia Corporation / Nuage Networks assumes no responsibility for inaccuracies contained herein.

Copyright © 2018 Nokia Corporation / Nuage Networks. All rights reserved.

Build Number: 9

CHAPTER

ONE

ABOUT THIS DOCUMENT

• Validity of this Document (page 3)


• Audience (page 3)
• Technical Support (page 3)
• List of Technical Publications for Current VSP Release (page 3)
• Nuage VSP Release 5.4.1 Software Archives (page 4)
– VSD (page 4)
– VSC/VSG/VSA/VSA-8/WBX (page 4)
– VRS (page 4)
– Integrations (page 5)

* Openstack (page 5)
* Containers (page 5)
* SCVMM (page 5)
– VSPK (page 5)


For a complete list of applicable user documentation, see the Technical Publications section of the Release Notes for
your Nuage Networks software version.

1.1 Validity of this Document

Printed versions of this document may not be up to date. Only the Web version of this document is current.

1.2 Audience

This manual is intended for enterprise system administrators responsible for enterprise network configuration and for administrators of the Nuage VSP/VNS software. It is assumed that the reader is familiar with virtualization and networking technologies. Other assumptions are explicitly called out in the relevant chapters.

1.3 Technical Support

If you purchased a service agreement for your Nuage Networks VSP/VNS solution and related products from a dis-
tributor or authorized reseller, contact the technical support staff for that distributor or reseller for assistance. If you
purchased an Alcatel-Lucent or Nokia service agreement, contact your welcome center:
https://networks.nokia.com/support
Nokia Online Services (NOLCS) provides registered customers with access to technical support, software downloads, training, documentation, literature, and other related assets for our products and solutions. For assistance with NOLCS, including problems accessing the portal, contact us as follows:
• Inside the U.S. and Canada: 1-866-582-3688, prompt 7.
• Outside the U.S.: 1-630-224-9000
• Via email: NOLS.support@nokia.com

1.4 List of Technical Publications for Current VSP Release

• 3HE14936AAAA Nuage VSP Install Guide


• 3HE14954AAAA Nuage VSP OpenStack Queens Neutron ML2 Driver Guide
• 3HE14947AAAA Nuage VSP Kubernetes Integration Guide
• 3HE14944AAAA Nuage 7850 VSA-VSG Installation Guide
• 3HE14949AAAA Nuage VSP and Microsoft Hyper-V Integration Guide
• 3HE14935AAAA Nuage VSP Release Notes (current document)
• 3HE14943AAAA Nuage 7850 VSA-8 Installation Guide
• 3HE14946AAAA Nuage VSP Docker Integration Guide
• 3HE14945AAAA Nuage VSP API Programming Guide
• 3HE14955AAAA Nuage VSP OpenStack Rocky Neutron ML2 Driver Guide
• 3HE14941AAAA Nuage VSS User Guide


• 3HE14956AAAA Nuage VSP VMware Integration User Guide


• 3HE14957AAAA Nuage Networks Glossary
• 3HE14948AAAA Nuage VSP OpenShift Integration Guide
• 3HE14951AAAA Nuage VSP OpenStack Newton Neutron ML2 Driver Guide
• 3HE14937AAAA Nuage VSP User Guide
• 3HE14952AAAA Nuage VSP OpenStack Ocata Neutron ML2 Driver Guide
• 3HE14942AAAA Nuage 210 WBX Software Installation Guide
In addition, see the Nuage 210 WBX User Guides Zipped Collection for related 210 WBX documents.

1.5 Nuage VSP Release 5.4.1 Software Archives

1.5.1 VSD

Filename MD5 Hash Value


Nuage-VSD-5.3.3-99-OVA.tar.gz 46308f8f51e0dcc1d5d4b0bad3e52d65
Nuage-VSD-5.3.3-99-QCOW.tar.gz 8491874efb6d2bd75c7b4ea0c3c6cfc8
Nuage-VSD-migration-scripts-5.3.3-99-ISO.tar.gz 7f157922a7684533c1f1c1a12a45b220
Nuage-elastic-backup-5.3.3-95.tar.gz 1d1514c5190250a0a1fecb150f7233ed
Nuage-elastic-5.3.3-95.tar.gz 77d37adb06b13c4ff85e78f1a1622a2c
Nuage-Netconf-manager-5.3.2-95.tar.gz 45c39771e610fed2d4b0df6a10fab65d

1.5.2 VSC/VSG/VSA/VSA-8/WBX

Filename MD5 Hash Value


Nuage-VSC-5.3-3-100.tar.gz 7078d34aa781fb554cce49f5064f5031
Nuage-VSG-VSA-5.3-3-100.tar.gz b12078d44e748be928dbdd68d0c03a6c
Nuage-VSA-8-5.3-3-100.tar.gz b25f4831a46397857a442746b3458f34
Nuage-MIBS-5.3.3-100.tar.gz 8d747c698521326e08a48a9dbd713ee0
Nuage-210-WBX-5.3-3-100.tar.gz a2845969238747ff8429069d64770d45

1.5.3 VRS

Filename MD5 Hash Value


Nuage-VRS-5.3.3-100-el7.tar.gz b7e279ae01e916a78f1cf0a49710a964
Nuage-VRS-5.3.3-100-hyperV.tar.gz 40deecccfed50de7249d8a59fa36c2e3
Nuage-VRS-5.3.3-100-ubuntu.16.04.tar.gz 67e4dc31a5939aebd03c121c7b4a4012
Nuage-AVRS-E-5.3.3-100-vmware.tar.gz 57ebeded8ab3560202cec2a362f064be
Nuage-VRS-5.3.3-100-vmware.tar.gz c633cceaf716ebc3c3c3cae890f701e0
Nuage-AVRS-selinux-5.3.3-95.tar.gz 2b323c7c9eebb48a933c1fdfacfc5958
Nuage-selinux-5.3.3-95.tar.gz cbf43d344b3e0d98d5e78d2142dfe056
Nuage-AVRS-5.3.3-100-el7.tar.gz 9c1eec91567a73f668ac40e31e5f7aa9
Nuage-AVRS-5.3.3-100-ubuntu-1604.tar.gz e59a7d07e49173b57d353fddec46dce2
Nuage-AVRS-E-5.3.3-100-vmware.tar.gz 57ebeded8ab3560202cec2a362f064be
Nuage-VRS-5.3.3-100-sles12.tar.gz f6de0b9ffb373ee362ef948095f88f66
Nuage-package-signing-keys-5.3.R3.tar.gz a68bd68e0ed5f7b27a8653f36903feb6


1.5.4 Integrations

Openstack

Filename MD5 Hash Value


Nuage-openstack-5.3.3_99.tar.gz b996e16459b5b309cad1bb54320bc45e

Containers

Filename MD5 Hash Value


Nuage-kubernetes-5.3.3-95.tar.gz d262de0e0bfba057c5ec4e0e4f7abdae
Nuage-Mesos-CNI-5.3.3-95-el7.tar.gz 9508870e6482ada5cf3c98d6f2a226e6
Nuage-openshift-5.3.3-95.tar.gz 8bd2134b65c24b9e8a2cd219070ce4e7
Nuage-libnetwork-plugin_5.3.3-95.tar.gz 2a4f8159e3f3d57909d83e80715f1ead

SCVMM

Filename MD5 Hash Value


Nuage-SCVMM-5.3.3-95.zip 69972d40f91cc4032eac90f781561515

1.5.5 VSPK

Filename MD5 Hash Value


Nuage-VSPK-5.3.3.99.tar.gz ac482476b0f47fde16161d0bcf89cc99
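The archives listed in this section can be checked against their published MD5 hashes after download. A minimal sketch (not a Nuage tool; the commented filename and hash are copied from the VSPK table above):

```python
# Compute a file's MD5 digest and compare it against the published value.
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Return the hex MD5 digest of a file, read in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example, using the VSPK entry from the table above:
# assert md5_of("Nuage-VSPK-5.3.3.99.tar.gz") == "ac482476b0f47fde16161d0bcf89cc99"
```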



CHAPTER

TWO

REQUIREMENTS

This section contains the following subsections:

• Nuage VSP Platform Requirements (page 6)


– VSD Requirements (page 7)

* Further Details (page 8)


* VSD Stats VM Specifications (page 8)
* VSD Statistics Storage / Elasticsearch VM Requirements (page 8)
* VSD Architect Supported Browsers (page 9)
– VSC Requirements (page 9)
– VCIN Requirements (page 11)
– VRS Requirements (page 11)

* KVM Support (page 11)


* ESXi Support (page 11)
* Docker Support (page 11)
* Hyper-V Support (page 11)
* AVRS (Accelerated VRS) Support (page 12)
* VRS-B Support (page 12)
– CMS Integrations (page 12)

* Hypervisor/CMS Support (page 12)


* Hypervisor/CMS Compatibility Matrix (page 13)
* Requirements for VRS VM on VMware (page 13)
• Licensing (page 13)

2.1 Nuage VSP Platform Requirements


• VSD Requirements (page 7)


– Further Details (page 8)
– VSD Stats VM Specifications (page 8)
– VSD Statistics Storage / Elasticsearch VM Requirements (page 8)
– VSD Architect Supported Browsers (page 9)
• VSC Requirements (page 9)
• VCIN Requirements (page 11)
• VRS Requirements (page 11)
– KVM Support (page 11)
– ESXi Support (page 11)
– Docker Support (page 11)
– Hyper-V Support (page 11)
– AVRS (Accelerated VRS) Support (page 12)
– VRS-B Support (page 12)
• CMS Integrations (page 12)
– Hypervisor/CMS Support (page 12)
– Hypervisor/CMS Compatibility Matrix (page 13)
– Requirements for VRS VM on VMware (page 13)

2.1.1 VSD Requirements

For high availability, the VSD is deployed as a three-node cluster. Each node is a VM running a Red Hat-based OS.
The VSD VM disk built by Nuage can be deployed on any of the following hypervisor types:
• ESXi 6.0 Virtual Hardware Version 10
• ESXi 6.5 Virtual Hardware Version 10
• ESXi 6.7 Virtual Hardware Version 10
• CentOS 7.x/KVM
• RHEL 7.x/KVM
In a production environment, each VSD VM must be installed on its own hypervisor.
Hypervisor CPU Any AMD Opteron or Intel E5/E7 series Xeon processor or better,
with:
• Required: 6 or more physical cores, each with a speed of 2.6 GHz or higher
• Recommended: Processors with more cores and/or higher processing speeds
• Intel Extended Page Tables (EPT) may be enabled
• HyperThreading may be enabled
VM CPU


• 6 vCPUs
• All CPUs must be reserved in case of CPU oversubscription on the hypervisor hosting the VSD
VM
VM RAM
• 24 GB RAM
• Full memory must be reserved in case of memory oversubscription on the hypervisor hosting
the VSD VM
VM Disk
• 285 GB provisioned
• The VSD VM disk and partitions can be increased as needed pre- and post-installation. Contact
the support service to receive assistance.

Further Details

Newer CPUs deliver more performance per clock cycle, so lower clock speeds are generally acceptable with newer CPU generations. In some cases, dropping to 2.0 GHz is permissible, provided that the newest CPU series is used. Contact Nuage support for confirmation.
For large-scale deployments, additional vCPUs and memory may be required. The 6 vCPU requirement is the minimum for small-scale environments. Regardless of the physical CPU clock speed, larger-scale environments may require up to 12 vCPUs and 64 GB of memory, depending on the customer use case. Contact Nuage support for guidance regarding your particular use case.
For active/standby VSD deployments and large-scale deployments, SSD storage, or storage with equivalent I/O throughput (input/output operations per second, or IOPS) and latency, is required. The current IOPS recommendation is 23,000, because until full scale is reached, IOPS is typically the bottleneck for VSD performance. A test tool (sysbench) is available in the VSD image and should be used to test the IOPS available to the VSD. Contact Nuage support for further assistance.
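As a rough illustration of evaluating a sysbench run against the 23,000 IOPS recommendation, the helper below sums the read and write rates from sysbench fileio output. It assumes sysbench 1.0-style "reads/s:" and "writes/s:" summary lines; older sysbench versions format results differently, so adjust the parsing to the version shipped in your VSD image.

```python
# Parse sysbench fileio output and compare total IOPS to the
# 23,000 recommendation from the text above.
import re

def total_iops(sysbench_output: str) -> float:
    """Sum the read and write IOPS reported by a sysbench fileio run."""
    rates = re.findall(r"(?:reads|writes)/s:\s+([\d.]+)", sysbench_output)
    return sum(float(r) for r in rates)

def meets_vsd_recommendation(sysbench_output: str, minimum: float = 23000.0) -> bool:
    """True if the measured IOPS meet the documented 23,000 recommendation."""
    return total_iops(sysbench_output) >= minimum
```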
For deployments with large-scale statistics and/or high rates of flow collection, additional VSD nodes may be
required. Contact Nuage support for counsel.

VSD Stats VM Specifications

OS CentOS Linux release 7.4
Disk 100 GB
CPU Six or more logical cores
Memory 16 GB

VSD Statistics Storage / Elasticsearch VM Requirements

Each Statistics Storage VM can be run on any of the following hypervisors:


• ESXi 6.0 Virtual Hardware Version 10
• ESXi 6.5 Virtual Hardware Version 10
• ESXi 6.7 Virtual Hardware Version 10
• CentOS 7.x/KVM
• RHEL 7.x/KVM


When deploying the Statistics Storage VM in a production environment as a cluster, each Statistics Storage VM must
be installed on its own hypervisor to ensure redundancy.
Hypervisor CPU Any AMD Opteron or Intel E5/E7 series Xeon processor or better,
with:
• Required: 6 or more physical cores, each with a speed of 2.6 GHz or higher
• Recommended: Processors with more cores and/or higher processing speeds
• Intel Extended Page Tables (EPT) may be enabled
• HyperThreading may be enabled
VM CPU
• 6 vCPUs
• All CPUs must be reserved in case of CPU oversubscription on the hypervisor hosting the
Statistics Storage VM
VM RAM
• 16 GB RAM
• Full memory must be reserved in case of memory oversubscription on the hypervisor hosting
the Statistics Storage VM
VM Disk 100 GB provisioned
The Statistics Storage VM disk and partitions can be increased as needed before and/or after instal-
lation. Please contact the support service to receive assistance.

Note: For larger scale deployments, additional Statistics Storage nodes may be required. Contact support for assis-
tance.

VSD Architect Supported Browsers

The following browser versions support the VSD Architect:


• Firefox 47 and higher
• Chrome 57 and higher

2.1.2 VSC Requirements

The VSC VM can be run on any of the following hypervisors:


• ESXi 6.0 Virtual Hardware Version 8
• ESXi 6.5 Virtual Hardware Version 8
• ESXi 6.7 Virtual Hardware Version 8
• CentOS 7.x/KVM
• RHEL 7.x/KVM
Hypervisor CPU Any AMD Opteron or Intel E5/E7 series Xeon processor or better,
with:
• Required: 4 or more physical cores


• Recommended: Processors with more cores and/or higher processing speeds


• Important: Hyperthreading must be disabled to achieve the best use of the physical cores
• Best performance is achieved with versions that have a larger L3 cache and higher clock speeds

Note:
• EPT can be enabled or disabled.
• If HyperThreading is enabled, no workloads should use other hyperthreads on the cores used
by VSC.
• If multiple VSCs run on the same hypervisor:
– The VSCs cannot manage the same VRSs or NSG uplinks
– The VSCs' vCPUs must use different physical CPU cores

Hypervisor Ethernet Chipset


• Any 1 Gb/s or faster Ethernet NIC supported by the hypervisor (two physical NICs recommended).
• Two emulated E1000 NICs to be provided by the hypervisor.
Hypervisor Physical Memory 8 GB of ECC memory; higher-speed RAM (DDR3 1333/1600) recommended.
VM CPU
• 4 vCPUs
• CPU pinning to distinct physical cores (CPU Affinity)
• CPU pinning to the same NUMA node
• No CPU oversubscription on the hypervisor hosting VSC VM
VM Memory
• 4 GB RAM – Any memory in excess of 4 GB will not be used by the VSC.
• Full memory reservation
• No memory oversubscription on the hypervisor hosting VSC VM
VM Disk 2 GB of available mass storage (SSD or hard drive) for use by the VSC VM as emulated disks.
NTP The hypervisor must run NTP and expose an NTP-synchronized CPU clock to ensure the event
notifications passed between the VSP components have the proper timestamps.
VMware Specifics
• Enable CPU Scheduling Affinity to 4 different physical CPU cores on the same NUMA node
• Enable 100% CPU Reservation
• Enable 100% Memory Reservation
• Set Latency Sensitivity to High
• Disable DRS for the VSC
• Live vMotion of VSC is not supported
• Fault Tolerance is not supported
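On KVM hosts, the CPU pinning and reservation requirements above can be expressed in the libvirt domain XML. The fragment below is illustrative only, not a complete or mandated domain definition; core and NUMA node numbers are examples to be mapped onto the actual host topology.

```xml
<!-- Fragment of a libvirt domain definition (not a complete domain XML).
     Pins the 4 VSC vCPUs to distinct physical cores on one NUMA node
     and locks guest memory so it cannot be oversubscribed. -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='5'/>
</cputune>
<memory unit='GiB'>4</memory>
<memoryBacking>
  <locked/>
</memoryBacking>
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
```

On VMware, the equivalent settings are the CPU Scheduling Affinity, reservation, and Latency Sensitivity options listed above.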


2.1.3 VCIN Requirements

The Nuage vCenter Integration Node has the same requirements as a standalone Nuage VSD node and is supported
on the same platforms.

2.1.4 VRS Requirements

Nuage VSP ships with an optional Open vSwitch kernel module that enables MPLS over GRE. Native VXLAN-only
Open vSwitch kernel modules in the supported distributions are compatible.

KVM Support

Table 2.1: KVM Support Matrix


Hypervisor Supported Native MPLSoGRE DKMS module Kernel Version
RHEL/CentOS 7.3 Yes Yes No 3.10.0-514.36.5
RHEL/CentOS 7.4 Yes Yes No 3.10.0-693
RHEL/CentOS 7.5 Yes Yes No 3.10.0-862

ESXi Support

• ESXi 6.0 Virtual Hardware Version 10


• ESXi 6.5 Virtual Hardware Version 10
• ESXi 6.7 Virtual Hardware Version 10

Docker Support

Table 2.2: Docker Support Matrix


Hypervisor Native DKMS (MPLSoGRE)
RHEL 7.1/Docker 1.12.6 Yes No
RHEL 7.2/Docker 1.12.6 Yes No
RHEL 7.3/Docker 1.12.6 Yes No
RHEL 7.4/Docker 1.12.6 Yes No
Atomic 7.3/Docker 1.12.6 Yes No
Atomic 7.4/Docker 1.12.6 Yes No

Hyper-V Support

Table 2.3: Hyper-V Support Matrix


Hypervisor Native DKMS (MPLSoGRE)
2012 R2 Yes N/A
2016 Yes N/A


AVRS (Accelerated VRS) Support

Table 2.4: AVRS Support Matrix


Hypervisor Supported Native DKMS (MPLSoGRE)
RHEL 7.4, 7.5 Yes Yes No
ESXi 6.0 Yes Yes No
ESXi 6.5 Yes Yes No
ESXi 6.7 Yes Yes No
AVRS/AVRS-G is supported on the following NICs:
• Intel 10G/40G XL710
• Intel 10G 82598, 82599, X540
• Mellanox 10G/25G/50G/100G ConnectX-4 EN and ConnectX-4 Lx EN series
AVRS/AVRS-G requires a CPU supporting the SSSE3 instruction set.
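On Linux hosts, the SSSE3 requirement can be verified from /proc/cpuinfo. A small sketch (the parsing assumes the usual x86 "flags :" line format):

```python
# Check whether the host CPU advertises SSSE3 (required by AVRS/AVRS-G).
def cpu_has_ssse3(cpuinfo_text: str) -> bool:
    """Return True if a 'flags' line in /proc/cpuinfo content lists ssse3."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and ":" in line:
            if "ssse3" in line.split(":", 1)[1].split():
                return True
    return False

# Usage on a host:
# with open("/proc/cpuinfo") as fh:
#     print(cpu_has_ssse3(fh.read()))
```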

VRS-B Support

VRS-B is only supported on RHEL/CentOS 7.2 and 7.3.

2.1.5 CMS Integrations

Table 2.5: CMS Support Matrix

OpenStack Newton: Red Hat OSP 10, Suse Cloud 7 (beta)
OpenStack Ocata: Red Hat OSP 11, Ubuntu Cloud Archive 16.04
OpenStack Pike: Red Hat OSP 12
VMware: vSphere vCenter 6.0, vSphere vCenter 6.5, vSphere vCenter 6.7
OpenShift: OSE 3.5, OSE 3.6, OSE 3.7
Kubernetes: 1.7.4, 1.8.1, 1.9.3

Hypervisor/CMS Support

• OpenStack: KVM, ESXi

Note: CloudStack is not supported.


Hypervisor/CMS Compatibility Matrix

Table 2.6: Hypervisor CMS Compatibility Matrix

Columns (left to right): RH OSP 10 (Newton), RH OSP 11 (Ocata), RH OSP 12 (Pike), Ubuntu OpenStack Newton, Ubuntu OpenStack Ocata, Ubuntu OpenStack Pike, Suse Cloud 7 (Newton, BETA), CBIS 18 (Newton), vCenter/vSphere 6.0, 6.5, 6.7

KVM: CentOS/RHEL 7.4 VRS | Yes, Yes, Yes, No, No, No, No, Yes, No, No, No
KVM: CentOS/RHEL 7.5 VRS | Yes, Yes, Yes, No, No, No, No, No, No, No, No
KVM: Ubuntu 16.04 VRS | No, No, No, No, No, Yes, No, No, No, No, No
RHEL 7.4 AVRS | Yes, Yes, No, No, No, No, No, No, No, No, No
SLES 12 VRS | No, No, No, No, No, No, No, No, No, No, No
ESXi 6.0 VRS | No, No, No, No, No, No, No, No, Yes, No, No
ESXi 6.5 VRS | No, No, No, No, No, No, No, No, No, Yes, No
ESXi 6.7 VRS | No, No, No, No, No, No, No, No, No, No, Yes
ESXi 6.0, 6.5, 6.7 AVRS | No, No, No, No, No, No, No, No, Yes, Yes, Yes

Requirements for VRS VM on VMware

On VMware, the VRS VM uses:


• 2 virtual central processing units (vCPUs)
• 4 GB memory
• 40 GB free disk space

2.2 Licensing

The VSP licensing model uses two license types, the cluster add-on license and the entitlement license. The cluster
add-on license determines whether VSD can be operated as a cluster installation and the entitlement license determines
the number of VSP entities that can be operated. Both licenses are necessary for a cluster installation; only the
entitlement license is necessary for a standalone VSD installation. In both cases, the entitlement license must be
installed first (and will appear in the Standalone section). The Cluster license will show in the Cluster section. For a
detailed description and instructions, see VSP Licenses in the VSP User Guide.
In VSP 4.0 and 5.0, a license works across minor versions of the same major version. When upgrading to a new major version (from 3.2 to 4.0, or from 4.0 to 5.0), check the "additionalSupportedVersions" attribute of all active licenses (readable via the VSD API for the License). If this value is "0", the license will be invalid after an upgrade to a different major version and a new license must be requested. If this value is >=1, the same license can be re-used across major versions, up to the number of additional major versions indicated by "additionalSupportedVersions".
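The upgrade rule above can be reduced to a small check. The helper below is illustrative only (the attribute name follows the VSD License API as described in the text; the function itself is not part of any Nuage tool):

```python
# Illustrative helper: decide whether an installed license survives a
# major-version upgrade, based on its additionalSupportedVersions value
# (0 means invalid after any major-version upgrade; N >= 1 means the
# license can be reused across up to N additional major versions).
def license_valid_after_upgrade(additional_supported_versions: int,
                                major_versions_crossed: int = 1) -> bool:
    """True if the license remains usable after crossing the given
    number of major versions (e.g. 4.0 -> 5.0 crosses one)."""
    return additional_supported_versions >= major_versions_crossed

# Examples:
# license_valid_after_upgrade(0)  -> False: request a new license
# license_valid_after_upgrade(1)  -> True: reusable for one more major version
```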

CHAPTER

THREE

NEW FEATURES

This section contains the following subsections:

• New Features/Enhancements in Release 5.3.3 (page 22)


– VSP (page 22)

* VSD ProxySQL and VSD Installation script (page 22)


– VCS (page 22)

* Laddered MC-LAG (page 22)


– Integrations (page 22)

* OpenStack (page 22)


* VMware (page 22)
· Full vSphere 6.7 Support (page 22)
· vCenter 6.7 HTML5 Client Metadata Plugin (page 23)
· Integration Improvements (page 23)

* Hyper-V (page 23)


· System Monitoring Support (page 23)
– VSS (page 23)

* Top Talkers Report (page 23)


* ACL Analytics Enhancements (page 23)
* Virtual Firewall Rule Generation Enhancements (page 23)
* Contextual Flow Visibility Enhancements (page 24)
• New Features/Enhancements in Release 5.3.2 U2 (page 24)
– SRIOV support on OSPD13 and OpenStack Queens (page 24)
• New Features/Enhancements in Release 5.3.2 U1 (page 24)
– VCS (page 24)

* BFD Static Routes IPv4 and IPv6 on AVRS (page 24)


– Integrations (page 24)

* OpenStack (page 24)


· OSPD10 Support with AVRS and RHEL 7.5 (page 24)


· OSPD13 upgrade (virtio) (page 24)
• New Features/Enhancements in Release 5.3.2 (page 25)
– VSP (page 25)

* Resource Utilization Statistics Changes (page 25)


* Decoupled Key Server Process (page 25)
* IPv6 Support for VSD (page 25)
* VSD and ES Operating Systems Upgrade (page 25)
* Statistics deployment on separate hosts (page 25)
– VCS (page 25)

* BGP PE-CE v6 (WBX/VSG) (page 25)


* BFD BGP PE-CE IPv6 (WBX/VSG) (page 26)
* ECMP64 Overlay on WBX for BGP IPv6 (page 26)
* BGP PE-CE IPv6 on VRS/AVRS/VRS-G (page 26)
* BFD BGP PE-CE IPv6 on VRS/AVRS/VRS-G (page 26)
* BETA BFD Static Routes IPv4 and IPv6 on VRS/AVRS/VRS-G (page 26)
* 7750 Hardware VTEP from VSD using NETCONF (page 26)
* SSH Enhancements (page 26)
* FIP Route Precedence (ignore default route) (page 26)
* VSG/WBX Deterministic Hold Timers (page 27)
* IPv6 connectivity for management traffic (page 27)
– Integrations (page 27)

* OpenStack (page 27)


· RedHat OSPD13 with RHEL 7.5 Support (page 27)
· VLAN-unaware SRIOV L2 Duplex (OpenStack-managed) (page 28)
· Multiple VSD-managed Dual Stack Subnets on a Network (page 28)
· RedHat OSP10 with RHEL 7.5 Support (page 28)
· RedHat OSP11 with RHEL 7.5 Support (page 28)
· RedHat OSP12 with RHEL 7.5 Support (page 28)

* VMware (page 28)


· vSphere 6.7 Support (BETA) (page 28)
· BGP PE-CE Support (page 28)
· VSS and Security without Overlay Networking (BETA) (page 28)

* Containers (page 29)


· OpenShift 3.7 Atomic Masters Support (page 29)


· Encryption Support (page 29)


· Container Platform Improvements (page 29)
· BGP PE-CE Support on Container vPorts (page 29)
– Security/VSS (page 29)

* VSS IPSec Group-Key Encryption for Linux Container Workloads (page 29)
* VSS for Non-overlay Datacenter Network Environments (BETA) (page 29)
* VSS Flow Collection Scalability: Enhancements (page 30)
* Policy Group Category (page 30)
* Virtual Firewall Rule and Service Enhancements (page 30)
• New Features/Enhancements in Release 5.3.1 (page 30)
– VCS (page 30)

* BGP PE-CE IPv4 Enhancements on VRS/AVRS/VRS-G (page 30)


* ECMP64 Underlay on WBX (page 30)
* VPRN Underlay Enhancements (page 30)
– Integrations (page 30)

* OpenStack (page 30)


· Multiple Fixed IPs (IPv4 and IPv6) From Same Subnet (page 30)
· Allowed Address Pair (IPv4 and IPv6) Support for SRIOV (page 31)
· Spoofing Support for Ports in VSD-managed Subnets (page 31)
· Ironic Support in Pike (page 31)
· Multiple VSD-managed IPv4 Subnets per Network (page 31)
· Red Hat OpenStack Platform 12 (page 31)

* VMware (page 31)


· VRS Agent Availability Management (page 31)
· VRS Agent BGP PE-CE IPv4 BETA Support (page 31)

* Containers (page 31)


· Source IP Preservation (page 31)
– Security/VSS (page 32)

* Contextual Flow Visibility Without ACL Dependency (page 32)


* Support for Redirected Flows (page 32)
* Policy Group Assignment Based on Flow Search (page 32)
* Virtual Firewall Rule Enhancements (page 32)
– BGP PE-CE IPv4 Enhancements on VRS/AVRS/VRS-G (page 32)
– BFD BGP PE-CE IPv4 on VRS/AVRS/VRS-G (page 32)
– BGP PE-CE IPv6 on VRS/AVRS (page 32)


– BFD BGP PE-CE IPv6 on VRS/AVRS (page 32)


– BGP PE-CE IPv6 on VSG/WBX (page 32)
– BFD BGP PE-CE IPv6 on VSG/WBX (page 33)
– ECMP64 for BGP PE-CE IPv6 on VSG/WBX (page 33)
– OpenStack (page 33)

* Multiple Fixed IPs (IPv4 and IPv6) From Same Subnet (page 33)
* Allowed Address Pair (IPv4 and IPv6) Support for SRIOV (page 33)
* Spoofing Support for Ports in VSD-managed Subnets (page 33)
* Multiple VSD-managed IPv4 Subnets per Network (page 33)
• New Features/Enhancements in Release 5.2.2 (page 33)
– VCS (page 33)

* V6 VIP on VRS (page 33)


– No-global-prepend-AS (page 33)

* BFD Underlay for VSG/WBX IPv6 Underlay (page 34)


* BGP PE-CE VRS/VRS-G/AVRS - IPv4 (page 34)
– Integrations (page 34)

* OpenStack (page 34)


· Fine-grained Route-to-underlay and PAT-to-underlay Setting (page 34)
· SR-IOV L2 Duplex (page 34)
· Neutron Trunk HEAT Resource (page 34)
· Allowed Address Pair with VIP Support for IPv6 (page 34)
· Stateless Security Groups (page 34)
· OpenStack Project Name in VSD User Group Description (page 35)
· OpenStack Pike Support (page 35)
· OSPd12 (BETA) (page 35)

* VMware (page 35)


· vCenter Integration Node Active/Standby (page 35)
· Metadata Plugin Clean-up Management (page 35)
· vCenter Integration Node Reduced Footprint (page 35)

* Containers (page 35)


· OpenShift 3.7 with RHEL 7.4 and Atomic 7.4 Support (page 35)
· OpenShift 3.7 on Azure with RHEL 7.4 Support (page 35)

* Hyper-V (page 36)


· SCVMM Plugin Support for L2 Domains (page 36)
· SCVMM Plugin vSwitch Filter Support (page 36)


· Persistent VLAN Configuration for the Hyper-V VRS (page 36)


– Security (page 36)

* Mitigation for Kernel Side-Channel Attacks (page 36)


– VSS (page 36)

* Per Enterprise/Domain Flow Collection Setting (page 36)


* Support for Underlay Flows (page 36)
* Flow Analytics UI Improvements (page 37)
* Virtual Firewall Rule Generation Improvements (page 37)
• New Features/Enhancements in Release 5.2.1 (page 37)
– VCS (page 37)

* VSG/WBX: Manual EVPN Configuration (page 37)


* VSG/WBX: BFD Support (page 37)
* WBX: ECMP 64 in the Overlay (page 37)
* VSG/WBX: VIP (page 37)
* Dual VTEP Uplink Support (page 37)
– Integrations (page 38)

* OpenStack (page 38)


· OpenStack Pike Support (BETA) (page 38)
· Octavia Load Balancer (BETA) (page 38)
· DHCPv6 Support for VSD-Managed Virtio Ports (page 38)
· DHCPv4 for VM SRIOV Ports (page 38)
· Ironic Support for Newton and Ocata (page 38)
· Computes with AppArmor Default Profiles (page 38)
· OpenStack Newton with ESXi 6.5 (page 38)
· VSD Certificate Verification by OpenStack Plugin (page 39)

* VMware (page 39)


· VRS Agent Dual VTEP Interface Support (page 39)
· Accelerated VRS Agent for ESXi (page 39)
· VRS Agent Resource Management (page 39)
· VRS Agent Offload Behaviour Management (page 39)
· vCenter Integration Node Scalability Improvements (page 39)

* Containers (page 39)


· Kubernetes 1.9.0 Support (page 39)
· OpenShift 3.6 Support (page 40)

* Hyper-V (page 40)


· Hyper-V 2016 Support with SCVMM and OpenStack (page 40)


· MTU Management (page 40)
– Security/VSS (page 40)

* Flow Search (page 40)


* Policy Generation and Virtual Firewall Rule Management (page 40)
* Security Administrator Role (page 40)
• New Features/Enhancements in Release 5.1.2 (page 40)
– VCS (page 40)

* IPv6 OOB Management on WBX (page 40)


* Underlay Mac-move CLI Configuration (page 41)
* Enhanced Active/Standby VSD cluster - GA (page 41)
* VRS on RHEL 7.4 (page 41)
* LDAP Group Name Support Enhancement (page 41)
– VSS (page 41)

* Policy Group Expressions Enhancements (Supported for VCS/VRS only) (page 41)
* Layer 4 Services Enhancements (page 41)
* Layer 7 Security (Supported for VNS/NSG only) (page 41)
– Integrations (page 42)

* Kubernetes & OpenShift (page 42)


· Subnet Autoscaling for Kubernetes & OpenShift 3.5 (page 42)
· Support for OpenShift 3.5 on Atomic Hosts (page 42)
· NodePort Services for Kubernetes and OpenShift 3.5 (page 42)
· Kubernetes & OpenShift Services Accessible through Service IP from the Underlay Nodes
(page 42)

* OpenStack (page 42)


· Openstack Controller with Ubuntu 16.04 Ocata and AppArmor (page 42)
· Ocata Upgrade Support (page 42)
· Ironic - Port Groups, VLAN Trunking and VLAN Transparency (page 42)
· DHCPv6 (BETA) (page 43)
· Horizon Support for IPv6 (BETA) (page 43)
· Allow non-IP Packets by Default (page 43)
· Red Hat OSP 10 Support with RHEL 7.4 (page 43)
· Red Hat OSP 11 Support with RHEL 7.4 (page 43)
• New Features/Enhancements in Release 5.1.1 (page 43)
• VCS (page 43)
– Active/Standby VSD Cluster (BETA) (page 43)


– 210 WBX (page 43)


– Expose Shared Network Enterprise (page 44)
– EVPN Loop Prevention and MAC Move Control (page 44)
– VIP on HW (BETA) (page 44)
– Support for VRS on SUSE Linux (BETA) (page 44)
• VSP (page 44)
– VSD Platform Security (page 44)

* TLS 1.2 For JMS (page 44)


* VSD Default Password Change (page 44)
* LDAP and AD Support (page 45)
– VSD Operations (page 45)

* Transparent Proxy/NAT for JMS (page 45)


* AMQP as a JMS alternative (page 45)
– Alarms (page 45)
– VSD Platform Upgrade (page 45)
– Elasticsearch Platform Upgrade (page 45)
• Integrations (page 45)
– VMware (page 45)

* VRS Agent - Logging Improvements (page 45)


* VRS Agent - Custom Hostname and VM Name (page 46)
* VRS Agent - Monitor and Redeployment Policy for Disk Usage (page 46)
* VMware Integration - Metadata Changes Handling Without Cold Boot (page 46)
– Containers (page 46)

* Kubernetes & OpenShift Installation using DaemonSets (page 46)


* IPtables kube-proxy for OpenShift (page 46)
– Microsoft Hyper-V (page 46)

* Microsoft WHQL Certification (page 46)


* VSS support (BETA) (page 46)
* VPort and ACL Statistics (page 46)
* NIC Teaming (page 47)
– OpenStack (page 47)

* OpenStack Ocata Support (page 47)


* Ironic Alignment with Upstream Ocata (page 47)
* Nuage AVRS Integration with RedHat OSP10/RHEL 7.3 and Ubuntu OpenStack Ocata/16.04
(page 47)

* Openstack-Managed Dual Stack IPv4/IPv6 Support (page 47)


* SR-IOV with VLAN Support and Automated VSG/WBX Orchestration (page 47)
* VMWare ESXi integration with OpenStack Newton and Ocata (page 47)
* SUSE OpenStack Cloud 7 - Newton (BETA) (page 47)
* DHCPv6 support for VSD-Managed Virtio Ports (BETA) (page 48)
* OpenStack SFC Support (BETA) (page 48)
* Nuage Openstack Monolithic Plugin Discontinued (page 48)
• Security/VSS (page 48)
– Policy Group Expressions in ACL (VCS only) (page 48)
– Services and Service Groups (VCS only) (page 48)
– L7 Security for NSG (VNS only) (BETA) (page 48)
– Platforms (page 48)
• New Features/Enhancements in Release 5.0.2 (page 48)
– Core (page 49)

* VSP upgrade to 5.0.2 (page 49)


– Kubernetes and OpenShift (page 49)

* Customizable Subnet Size for a Namespace (page 49)


* Security (page 49)
• New Features/Enhancements in Release 5.0.1 (page 49)
– VSD Infrastructure Improvements (page 50)
– Security Policy Scale Improvements (page 50)
– VSD Platform Security Hardening (page 50)
– VCS: Expose VLANs to VMs on VRS (page 51)
– IPv6 Overlay for VRS (page 51)
– OpenStack (page 51)

* Openstack Newton support with full ML2 Mechanism Driver (page 51)
* Openstack Ocata Support with Full ML2 Mechanism Driver (BETA) (page 52)
* Dual Stack IPv6 Overlay Support (page 52)
* VLAN Transparency (page 52)
* Trunk and Sub-VPort Support for VLAN-aware VMs (page 52)
* SRIOV with VLAN Support and Automated VSG Orchestration (BETA) (page 52)
* Enable Flow Logging and Flow Stats Collection (page 53)
* AVRS (Accelerated VRS) (page 53)
– VMware (page 53)
– Kubernetes & OpenShift (page 53)

* CNI Plugin (page 53)


* Certificate-based Authentication (page 53)


– Microsoft Hyper-V (page 53)

* Support for OpenStack Newton (page 53)


* New Workflow for Nuage Add-In for SCVMM (page 53)
– VSP Upgrade to 5.0.1 (page 54)
• Deprecated Features (page 54)

3.1 New Features/Enhancements in Release 5.3.3

3.1.1 VSP

VSD ProxySQL and VSD Installation script

ProxySQL is a new service on VSD that provides additional flexibility and resiliency in VSD deployments. This new
VSD service can be monitored using Monit. The service configuration varies for the different VSD installation modes
(standalone, cluster, or geo-distributed). A new configuration item has been added to the VSD installation script to
support configuration of ProxySQL for a geo-distributed cluster.

3.1.2 VCS

Laddered MC-LAG

Laddered MC-LAG is a two-layer MC-LAG topology that involves two leaf and two spine nodes. The leaf node pairs
provide MC-LAG on the access ports toward the servers and a LAG on the uplink ports to the spines. The spines have
an MC-LAG toward the leafs. L3 starts at the spine layer with an RVPLS; the leaf nodes are L2 only. This topology is
deployed as an underlay.

3.1.3 Integrations

OpenStack

• OpenStack security group scale increases to 30 security groups per vPort.


• Support has been added for multiple floating IP subnets associated with the same external network, which allows
additional FIP subnets to be used by the same router or tenant.
• Support has been introduced for binding a router to a subnet, and detaching a router from a subnet, when that
subnet has Bare Metal (Ironic) ports attached. The OpenStack commands “neutron router-attach” and
“neutron router-detach” now perform the appropriate bind or detach action within Nuage VSP.

VMware

Full vSphere 6.7 Support

With the release of Nuage Networks VSP 5.3.3, vSphere 6.7 is now fully supported as a platform for deployment and
integration. This includes deploying the Nuage components on vSphere 6.7, deploying and integrating the vCenter
Integration Node with vCenter 6.7, and using VCIN to deploy VRSs on ESXi 6.7 hypervisors.

vCenter 6.7 HTML5 Client Metadata Plugin

Included in the support for vSphere 6.7 is a new vCenter 6.7 HTML5 Client plugin to manage the metadata of virtual
machines in vCenter 6.7. This plugin provides full feature parity with the existing vCenter Web Client Metadata
plugin.

Integration Improvements

The Nuage Networks VSP integration for VMware has also been improved in two areas:
• Multi-language support: VCIN and VRS can now integrate with vCenter servers whose default API language
is not set to English.
• VRS Agent boot speed improvements: the VRS Agent boot process is now faster, especially in large VMware
environments with many hosts and VMs.

Hyper-V

System Monitoring Support

In the system monitoring console of the VSD UI, it is now possible to find the following additional information for
Hyper-V VRSs:
• Hostname and UUID
• VRS Version
• Hypervisor IP
• Uptime
• Number of VM interfaces, with information on Name, Status and Type

3.1.4 VSS

Top Talkers Report

VSS analytics supports new top talker reports that provide the top sources/destinations by IP address, subnet, zone,
and policy group, based on the number of packets sent or received.
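The kind of aggregation behind such a report can be sketched in a few lines (an illustration only — the flow-record fields and function are invented for this example, not a VSS API):

```python
from collections import Counter

def top_talkers(flows, key="src", n=3):
    """Sum packet counts per source (or destination) and return the top n."""
    totals = Counter()
    for flow in flows:
        totals[flow[key]] += flow["packets"]
    return totals.most_common(n)

# Simplified stand-in for VSS flow records.
flows = [
    {"src": "10.0.0.1", "dst": "10.0.1.5", "packets": 400},
    {"src": "10.0.0.2", "dst": "10.0.1.5", "packets": 900},
    {"src": "10.0.0.1", "dst": "10.0.1.9", "packets": 300},
]
print(top_talkers(flows))             # [('10.0.0.2', 900), ('10.0.0.1', 700)]
print(top_talkers(flows, key="dst"))  # [('10.0.1.5', 1300), ('10.0.1.9', 300)]
```

The same grouping applies per subnet, zone, or policy group by substituting the grouping key.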

ACL Analytics Enhancements

ACL analytics reports are enhanced to display ACL statistics for virtual firewall rules, and for ingress and egress ACLs.

Virtual Firewall Rule Generation Enhancements

Virtual firewall rule generation, based on the flow visibility provided by the VSS flow explorer, is extended to also
generate rules for virtual firewall policies that are not in draft mode.


Contextual Flow Visibility Enhancements

The VSS Flow Visualization GUI supports contextual visibility of flows based on a new construct called Policy Group
Categories. Policy Group Categories can be used to categorize Policy Groups of similar type (e.g., Application, App-
Tier, Location) to provide additional context for flow visualization and analytics.

3.2 New Features/Enhancements in Release 5.3.2 U2

3.2.1 SRIOV support on OSPD13 and OpenStack Queens

With this release, SRIOV automation for 210 WBX and VSG is supported on OSPD13 on RHEL 7.5. This does not
include support for the External DHCP agent providing DHCPv4 and DHCPv6 to SRIOV ports.

Note: Although all OpenStack packages are updated to new versions in this release, the only differences from 5.3.1
U1 packages are in the nuage-topology-collector packages.

3.3 New Features/Enhancements in Release 5.3.2 U1

3.3.1 VCS

BFD Static Routes IPv4 and IPv6 on AVRS

Release 5.3.2 U1 adds support of BFD for static routes (IPv4 and IPv6) on AVRS. Enabling the BGP and BFD features
requires the installation of a dedicated RPM, which is included in the Nuage software packages.
Important: BFD is not supported on VRS/VRS-G; previous release notes stating BFD support for
static routes/BGP IPv4/IPv6 on VRS/VRS-G stand corrected with this release note in 5.3.2 Update
1.

3.3.2 Integrations

OpenStack

OSPD10 Support with AVRS and RHEL 7.5

With this release, OSPD10 is supported with AVRS (VA 1.7.5m1) on RHEL 7.5 with SELinux.

OSPD13 upgrade (virtio)

OSPD13 upgrade from OSPD12 (virtio only) is now supported with this release.


3.4 New Features/Enhancements in Release 5.3.2

3.4.1 VSP

Resource Utilization Statistics Changes

Starting in release 5.3.2, the collection of resource utilization statistics for VSC, NSG, and VRS (CPU, memory,
and disk usage) requires deploying Elasticsearch and enabling Nuage VSP statistics. This change comes with a
limitation specific to VRS deployed on KVM (VRS-K): its resource utilization is no longer reported and must instead
be retrieved from hypervisor monitoring tools. The following improvements are introduced:
• The statistics collection interval is now 30 seconds for NSG and VRS, and 1 minute for VSC, instead of 10
minutes. The CPU/memory average and peak are computed based on the data samples from the last 4 hours.
• The NSG port up/down alarm is now raised within 1 minute, instead of within 10 minutes.
• New VSD and Elasticsearch APIs are available to consume the statistics; refer to the VSP API Programming
Guide. The existing VSD APIs (/vscs/id and /vrss/id) are still available, but their back-end has been updated so
that data is retrieved from the Elasticsearch database instead of the MySQL database.
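The average/peak computation over the last 4 hours of samples can be sketched as follows (a minimal illustration of a rolling window of 30-second samples; the class and names are invented for this example, not the VSD implementation):

```python
from collections import deque

SAMPLE_INTERVAL_S = 30       # NSG/VRS collection interval per the notes
WINDOW_S = 4 * 3600          # averages and peaks span the last 4 hours
MAX_SAMPLES = WINDOW_S // SAMPLE_INTERVAL_S  # 480 samples

class UtilizationWindow:
    """Keeps the most recent 4 hours of samples and derives average/peak."""

    def __init__(self):
        # deque with maxlen silently drops the oldest sample once full
        self.samples = deque(maxlen=MAX_SAMPLES)

    def add(self, cpu_percent):
        self.samples.append(cpu_percent)

    def average(self):
        return sum(self.samples) / len(self.samples)

    def peak(self):
        return max(self.samples)

w = UtilizationWindow()
for sample in (10.0, 20.0, 90.0, 20.0):
    w.add(sample)
print(w.average())  # 35.0
print(w.peak())     # 90.0
```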

Decoupled Key Server Process

The Key Server process is now decoupled from the JBoss process on VSD. The new ‘keyserver’ process is monitored
by the Monit daemon and grouped under the vsd-core group, with a dependency on NTP and DNS status. The Monit
‘keyserver-status’ check has also been updated to include a dependency on the Infinispan cluster status. For example,
if the Infinispan process is stopped on all VSD nodes, then both the ‘infinispan-cluster-status’ and the ‘keyserver-status’
checks will show as ‘Status Failed’.

IPv6 Support for VSD

VSD can be deployed using an IPv6-only address (as a cluster or standalone). New installation instructions are available
in the VSP Installation Guide. Note that deploying VSD using IPv6 is not currently supported for the VSD “stats-out”
deployment.
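One practical consequence for REST clients of an IPv6-deployed VSD: the IPv6 literal must be bracketed in the URL, per RFC 3986. A small sketch (the base-path layout shown is illustrative and the helper is not a Nuage API):

```python
import ipaddress

def vsd_base_url(host, port=8443, version="v5_0"):
    """Build a VSD REST base URL; IPv6 literals need brackets (RFC 3986)."""
    try:
        if isinstance(ipaddress.ip_address(host), ipaddress.IPv6Address):
            host = f"[{host}]"
    except ValueError:
        pass  # not an IP literal, assume a hostname
    return f"https://{host}:{port}/nuage/api/{version}"

print(vsd_base_url("2001:db8::10"))     # https://[2001:db8::10]:8443/nuage/api/v5_0
print(vsd_base_url("vsd.example.com"))  # https://vsd.example.com:8443/nuage/api/v5_0
```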

VSD and ES Operating Systems Upgrade

VSD is now shipped with RHEL 7.4 and ES is now shipped with CentOS 7.4.

Statistics deployment on separate hosts

A new way of deploying the VSD statistics collector, called the “stats-out” deployment, is introduced to support
higher flow collection rates. Currently this new model is only supported for new VCS/VSS deployments or existing
VCS/VSS deployments that have been upgraded to 5.3.2. It is not supported for VNS deployments, nor with an IPv6
VSP infrastructure.

3.4.2 VCS

BGP PE-CE v6 (WBX/VSG)

Release 5.3.2 adds the support of BGP PE-CE for IPv6 on WBX/VSG.


BFD BGP PE-CE IPv6 (WBX/VSG)

Release 5.3.2 adds the support of BFD BGP PE-CE for IPv6 on WBX/VSG.

ECMP64 Overlay on WBX for BGP IPv6

Release 5.3.2 adds the support of ECMP64 overlay for BGP IPv6 on WBX.

BGP PE-CE IPv6 on VRS/AVRS/VRS-G

Release 5.3.2 adds the support of BGP PE-CE IPv6 on VRS/AVRS/VRS-G. Enabling the BGP and BFD features
requires the installation of a dedicated RPM, which is included in the Nuage software packages.

BFD BGP PE-CE IPv6 on VRS/AVRS/VRS-G

Release 5.3.2 adds the support of BFD for BGP PE-CE IPv6 on VRS/AVRS/VRS-G. Enabling the BGP and BFD
features requires the installation of a dedicated RPM, which is included in the Nuage software packages.

BETA BFD Static Routes IPv4 and IPv6 on VRS/AVRS/VRS-G

Release 5.3.2 adds BETA support of BFD for static routes IPv4 and IPv6. Enabling the BGP and BFD features requires
the installation of a dedicated RPM, which is included in the Nuage software packages.

7750 Hardware VTEP from VSD using NETCONF

Release 5.3.2 adds support for VSD provisioning of a new hardware VTEP named 7750 NETCONF. The feature
enables the integration of a 7750 as part of the regular VSD configuration of domains. Only L2 domains are supported
in 5.3.2. The feature requires the deployment of a NETCONF manager, which is a VM external to the VSD. The
feature has no interaction with the previous Service WAN extensions.

SSH Enhancements

Release 5.3.2 adds several enhancements:
• Support for SFTP on WBX/VSG.
• 7X50 SROS supports a number of ciphers/MACs as part of the SSH negotiation between client and server. With
release 5.3.2, VSC, VSG, and WBX support the same ciphers/MACs and enable individual configuration of
them.
• The OpenSSH version has also been updated to a proprietary Nuage OpenSSH version.

FIP Route Precedence (ignore default route)

Release 5.3.2 gives FIPs the flexibility to ignore the default route that may exist in the domain and always break out
to the underlay. This is configurable per domain and/or per VPort.


VSG/WBX Deterministic Hold Timers

Release 5.3.2 adds support for new timers that provide deterministic behavior when a system with MC-LAG configured
reboots. The new feature obviates the need for individually configured port hold timers, which should be removed once
the system is running 5.3.2.

IPv6 connectivity for management traffic

Release 5.3.2 introduces IPv6 support for management connectivity on VSD, VSC, 210 WBX, and 7850 VSG/VSA.
This includes:
• IPv6 REST API on VSD
• IPv6-only connectivity within VSD clusters
• IPv6-only connectivity between VSD active/standby
• IPv6 XMPP connectivity to VSC, 210 WBX, 7850 VSG/VSA
• IPv6 connectivity between VSD and ElasticSearch
Some features are not yet qualified with an IPv6 management network (Note: these may work, but they have not
completed testing, so are currently unsupported):
• Deployment Architectures:
– VRS-G as software VTEP
– VSG/WBX as hardware VTEP
– Dual Uplinks on VRS
– Floating IP to VSG/WBX/VRS-G as Gateway
• Overlay features:
– Multicast send/receive via Underlay
– OpenStack VLAN Trunking / Multi-Network VMs using VLANs
Statistics delivery from VRS/AVRS must traverse an IPv4-to-IPv6 proxy, since VRS/AVRS do not have IPv6
connectivity in this release.

3.4.3 Integrations

OpenStack

RedHat OSPD13 with RHEL 7.5 Support

With release 5.3.2, Nuage VCS supports RedHat OSPD13 (based on OpenStack Queens) with RHEL 7.5. As OSPD13
is based on containerized OpenStack services, the Nuage plugin(s) run as part of the relevant OpenStack service
containers. The VRS and metadata agent continue to run as bare-metal processes, as before. SRIOV automation,
Ironic, AVRS, SFC, and LBaaSv2 with the HAProxy namespace agent (deprecated in OSPD13) are not supported with
OSPD13 in this release.


VLAN-unaware SRIOV L2 Duplex (OpenStack-managed)

Two OpenStack-managed subnets can be mapped to a common back-end VSD subnet. This capability is added in
support of the SR-IOV L2 duplex use case where two VLAN-unaware SR-IOV ports from a VM can attach to the
same L2 domain (or subnet) using the same CIDR (but non-overlapping address allocation pool) through two different
physnets for redundancy.
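The constraint of a shared CIDR with non-overlapping allocation pools can be sketched as a simple check (illustrative values only; this is not part of the plugin):

```python
import ipaddress

def pools_overlap(pool_a, pool_b):
    """Each pool is a (first_ip, last_ip) pair; True if the ranges intersect."""
    a1, a2 = (int(ipaddress.ip_address(ip)) for ip in pool_a)
    b1, b2 = (int(ipaddress.ip_address(ip)) for ip in pool_b)
    return a1 <= b2 and b1 <= a2

# Two subnets sharing one CIDR through different physnets, per the use case.
cidr = ipaddress.ip_network("10.20.0.0/24")
pool1 = ("10.20.0.10", "10.20.0.99")    # allocation pool, physnet-1 subnet
pool2 = ("10.20.0.100", "10.20.0.200")  # allocation pool, physnet-2 subnet

# Both pools lie inside the shared CIDR but must not intersect.
assert all(ipaddress.ip_address(ip) in cidr for p in (pool1, pool2) for ip in p)
print(pools_overlap(pool1, pool2))  # False
```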

Multiple VSD-managed Dual Stack Subnets on a Network

Multiple VSD-managed dual-stack subnets can be configured per network containing virtio ports only. A neutron port
on such a network can have one or multiple IPv4 or IPv6 addresses, provided they belong to a common IPv4 or IPv6
subnet. The external DHCP agent is not supported with this feature.

RedHat OSP10 with RHEL 7.5 Support

RedHat OSP10 is now supported with RHEL 7.5.

RedHat OSP11 with RHEL 7.5 Support

RedHat OSP11 is now supported with RHEL 7.5.

RedHat OSP12 with RHEL 7.5 Support

RedHat OSP12 is now supported with RHEL 7.5.

VMware

vSphere 6.7 Support (BETA)

With this release, the VMware integration now supports vSphere 6.7 as beta. This will allow customers to start testing
the Nuage integration in their vSphere 6.7 labs.

BGP PE-CE Support

The VRS BGP feature, which allows VMs to use BGP with the VRS to advertise routes into the overlay, is now fully
supported in the VMware VRS Agent.

VSS and Security without Overlay Networking (BETA)

VSS and security without overlay is now released as beta in a VMware deployment using the VRS Agent with a VDF
personality.


Containers

OpenShift 3.7 Atomic Masters Support

Until now, Atomic was only supported on worker nodes and not on the master nodes. Nuage 5.3.2 delivers support for
deploying the Nuage integration on Atomic master nodes in OpenShift 3.7.

Encryption Support

With this release, the Nuage Kubernetes and OpenShift integrations support full end-to-end encryption between
the different container nodes, resulting in fully IPSec-encrypted tunnels between the nodes whenever pods or
services need to communicate. This added security allows security-conscious companies to run their workloads in
public clouds without concern over their traffic being visible to the cloud operators.

Container Platform Improvements

Several container platform improvements have been added in this release:
• vPorts created on a VRS by external systems are not removed as part of the CNI plugin audit.
• Kubernetes/OpenShift ACL statistics collection can be enabled per cluster.
• Kubernetes/OpenShift ACLs are enabled as stateful by default.

BGP PE-CE Support on Container vPorts

BGP PE-CE is now supported on container vPorts.

3.4.4 Security/VSS

VSS IPSec Group-Key Encryption for Linux Container Workloads

The VSS IPSec Group-Key Encryption for Linux Workloads feature provides a way to encrypt communication between
OpenShift container nodes in public cloud environments. It makes use of the Nuage key server component in VSD
and adds support for IPSec group-key encryption between workloads, based on a new Nuage datapath component for
Linux called the Encryption-enabled Virtual Distributed Firewall (eVDF).
The Nuage eVDF is a Nuage workload agent deployed as part of a Linux container host in the public cloud that
supports the following:
• L3-4 security policy enforcement
• Contextual visibility and analytics
• IPSec group-key encryption between eVDFs as well as between eVDF and NSG
• Virtual overlay networking support

VSS for Non-overlay Datacenter Network Environments (BETA)

VSS for non-overlay datacenter environments supports VSS micro-segmentation capabilities (L4 distributed policy
enforcement, east-west visibility and analytics) for workloads that are not connected to an SDN overlay.
This is based on two new Nuage data plane components that provide layer 4 security policy enforcement and traffic
visibility: the Virtual Distributed Firewall (VDF) for ESXi/KVM workloads and the Virtual Distributed Firewall
Gateway (VDF-G) for securing bare-metal workloads.


VSS Flow Collection Scalability: Enhancements

The VSD Stats Collector process can be deployed as a separate VSD stats node in a cluster to enable scaling of flow
collection for large VSS deployments while minimizing the impact on the VSD server.

Policy Group Category

Policy Group Categories are used to categorize Policy Groups of similar type (e.g., Application, App-Tier, Location,
etc.) to provide additional context for contextual flow visibility use cases.

Virtual Firewall Rule and Service Enhancements

The VSD Service construct has been enhanced to support IP protocols beyond TCP/UDP including ICMP. VSS Virtual
firewall rules can be used to specify security policies for protocols beyond TCP/UDP based on the service construct.

3.5 New Features/Enhancements in Release 5.3.1

3.5.1 VCS

BGP PE-CE IPv4 Enhancements on VRS/AVRS/VRS-G

Release 5.3.1 adds support for the no-prepend-global-as configuration option, sticky ECMP, and peering to a loopback
(where the next-hop is the VPort IP).

ECMP64 Underlay on WBX

Release 5.3.1 adds support for ECMP64 in the base router context, configurable up to 64 paths.

VPRN Underlay Enhancements

Release 5.3.1 adds the following enhancements for underlay VPRNs:


• MCLAG IPv4
• OSPFv3
• BFD static routes, BGP and OSPF IPv4
• BFD static routes, BGP and OSPFv3 IPv6

3.5.2 Integrations

OpenStack

Multiple Fixed IPs (IPv4 and IPv6) From Same Subnet

Multiple fixed IPs (IPv4 and IPv6) from the same subnet are supported on a Neutron port. The highest IPv4/IPv6
address is applied as the fixed IPv4/IPv6 address in Nuage VSP and the rest are treated as allowed address pairs in the
Nuage internal implementation. The highest IP address (the one that goes to VSP as the fixed IP) cannot be used as a
virtual IP (or allowed address pair) on another port.
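The highest-address selection can be sketched as follows (a simplified, single-address-family illustration of the behavior above; the helper is invented for this example and is not a Nuage API):

```python
import ipaddress

def split_fixed_and_aaps(addresses):
    """Return (fixed_ip, allowed_address_pairs) for one address family.

    Mirrors the described behavior: the numerically highest address becomes
    the fixed IP in VSP; the remaining addresses become allowed address pairs.
    """
    # ipaddress objects compare numerically, so sorting is address-order safe
    ordered = sorted(addresses, key=ipaddress.ip_address)
    return ordered[-1], ordered[:-1]

fixed, aaps = split_fixed_and_aaps(["10.0.0.4", "10.0.0.12", "10.0.0.7"])
print(fixed, aaps)  # 10.0.0.12 ['10.0.0.4', '10.0.0.7']
```

Note that a plain string sort would rank "10.0.0.7" above "10.0.0.12"; sorting on parsed addresses avoids that pitfall.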


Allowed Address Pair (IPv4 and IPv6) Support for SRIOV

Allowed Address Pairs (IPv4 and IPv6) are now supported on SRIOV ports.

Spoofing Support for Ports in VSD-managed Subnets

Spoofing can now be enabled on ports in VSD-managed subnets by disabling port security. Toggling port security
on a port will only toggle spoofing, without making any other changes. Upgrade support for this feature will be
introduced in a subsequent release.

Ironic Support in Pike

Ironic is now supported with OpenStack Pike release with manual introspection.

Multiple VSD-managed IPv4 Subnets per Network

Multiple VSD-managed IPv4 subnets can be configured per network containing virtio ports only. Multiple IPv4/IPv6
dual-stack subnets are not supported. A neutron port on such a network can have one or multiple IP addresses as long
as they all belong to a single subnet. The external DHCP agent is not supported with this feature.
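The single-subnet constraint on a port's addresses can be sketched as a validation helper (illustrative only; not part of the plugin):

```python
import ipaddress

def addresses_in_single_subnet(port_ips, network_subnets):
    """True if every IP of the port falls inside one of the network's
    subnets, and it is the same subnet for all of them."""
    matches = set()
    for ip in map(ipaddress.ip_address, port_ips):
        owners = {s for s in network_subnets if ip in ipaddress.ip_network(s)}
        if not owners:
            return False      # address outside every subnet of the network
        matches |= owners
    return len(matches) == 1  # all addresses drawn from one subnet

subnets = ["192.168.1.0/24", "192.168.2.0/24"]  # two subnets on one network
print(addresses_in_single_subnet(["192.168.1.10", "192.168.1.11"], subnets))  # True
print(addresses_in_single_subnet(["192.168.1.10", "192.168.2.10"], subnets))  # False
```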

Red Hat OpenStack Platform 12

OSP Director 12 (Pike) is now supported with RHEL 7.4.

VMware

VRS Agent Availability Management

The vCenter Integration Node (VCIN) can now be configured to manage the availability status of the VRS Agent on
the ESXi host where it is running. When this feature is enabled, VCIN only allows VMs to be booted and/or migrated
to an ESXi host once the VRS Agent, including all its required services, is fully up and running.

VRS Agent BGP PE-CE IPv4 BETA Support

The VRS Agent on ESXi supports the BGP PE-CE IPv4 feature as BETA in this release.

Containers

Source IP Preservation

The Nuage Kubernetes integration now preserves the source IP of the sending pod when the packet is delivered to the
destination pod. This is a change from the previous implementation, where the source hypervisor IP was used as the
source IP. The only case where the source IP is not preserved (and the gateway IP is used instead) is when a pod
communicates with its own service IP and kube-proxy selects the same pod to fulfill the service request.


3.5.3 Security/VSS

Contextual Flow Visibility Without ACL Dependency

VSS flow visualization is enhanced to visualize flows grouped by context (zone, subnet, policy group) without
requiring the context to be derived from ACL entries. In addition, Flow Explorer includes the policy group(s) and L4
service for each flow record without relying on ACLs for this information.

Support for Redirected Flows

VSS Flow Explorer now includes redirected flow type information as part of the flow record.

Policy Group Assignment Based on Flow Search

VSS Flow Explorer enables assignment of multiple VPorts to a policy group based on a flow search and selection of
multiple flow records.

Virtual Firewall Rule Enhancements

A virtual firewall rule can be defined with the stateful flag enabled or disabled.

3.5.4 BGP PE-CE IPv4 Enhancements on VRS/AVRS/VRS-G

Release 5.3.1 adds support for the no-prepend-global-as configuration option, sticky ECMP, and peering to a loopback
(where the next-hop is the VPort IP) in the overlay.

3.5.5 BFD BGP PE-CE IPv4 on VRS/AVRS/VRS-G

Release 5.3.1 adds the support of BFD BGP IPv4 on VRS/AVRS in the overlay.

3.5.6 BGP PE-CE IPv6 on VRS/AVRS

Release 5.3.1 adds the support of BGP IPv6 PE-CE to VPort and peering to a loopback (where next-hop is the VPort
IP) in the overlay.

3.5.7 BFD BGP PE-CE IPv6 on VRS/AVRS

Release 5.3.1 adds the support of BFD BGP IPv6 on VRS/AVRS in the overlay.

3.5.8 BGP PE-CE IPv6 on VSG/WBX

Release 5.3.1 adds BETA support of BGP IPv6 PE-CE to VPort and peering to a loopback (where next-hop is the
VPort IP) in the overlay.


3.5.9 BFD BGP PE-CE IPv6 on VSG/WBX

Release 5.3.1 adds BETA support of BFD BGP IPv6 on VSG/WBX in the overlay.

3.5.10 ECMP64 for BGP PE-CE IPv6 on VSG/WBX

Release 5.3.1 adds BETA support of ECMP64 BGP IPv6 on VSG/WBX in the overlay.


3.6 New Features/Enhancements in Release 5.2.2

3.6.1 VCS

V6 VIP on VRS

VIP (Virtual IP Address) has been enhanced and now it is supported for IPv6 on VRS.

3.6.2 No-global-prepend-AS

The BGP AS is defined globally per enterprise and is always included in the BGP AS path. As of 5.2.2, it is possible
to configure whether the global AS is included or not. This is configurable per neighbor using the blob, and is
supported on VSG and WBX.
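The effect of the option on the AS path a neighbor receives can be illustrated with a toy sketch (names and semantics are simplified for illustration and are not Nuage CLI/API identifiers):

```python
def as_path(advertised_path, local_as, global_as, prepend_global_as=True):
    """Sketch of the AS_PATH a CE peer would see: the enterprise-wide
    (global) AS is prepended unless no-prepend-global-as is set for the
    neighbor."""
    path = [local_as] + advertised_path
    if prepend_global_as:
        path = [global_as] + path
    return path

# Default behavior: global AS 65000 is included in the path.
print(as_path([65010], local_as=65001, global_as=65000))
# -> [65000, 65001, 65010]

# With no-prepend-global-as set on the neighbor, the global AS is omitted.
print(as_path([65010], local_as=65001, global_as=65000, prepend_global_as=False))
# -> [65001, 65010]
```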


BFD Underlay for VSG/WBX IPv6 Underlay

BFD underlay IPv4 for routing protocols was introduced in 5.2.1 as GA, whereas BFD underlay IPv6 was BETA in
5.2.1. Release 5.2.2 adds BFD IPv6 underlay for routing protocols on VSG/WBX as GA.

BGP PE-CE VRS/VRS-G/AVRS - IPv4

Release 5.2.2 adds support for BGP IPv4 as a routing protocol for VRS/AVRS/VRS-G VPorts. The BGP PE-CE
feature is supported on CentOS/RHEL 7.3/7.4 only (not available on Ubuntu 16.04, SLES 12, or VMware). On
CentOS/RHEL 7.3/7.4, this functionality requires the installation of an additional optional RPM, nuage-bgp, and its
dependencies. This package is located in the Nuage-VRS-el7 archive. For installation instructions, see the VSP Install
Guide.

3.6.3 Integrations

OpenStack

Fine-grained Route-to-underlay and PAT-to-underlay Setting

Nuage OpenStack integration now supports provisioning of route-to-underlay and PAT-to-underlay on a per subnet
and per router level. It is possible to provision a combination of route-to-underlay and pat-to-underlay on individual
subnets within an L3 domain or a router. The provisioning API/CLI has been remodeled while maintaining backward
compatibility.

SR-IOV L2 Duplex

Two VSD-managed subnets in OpenStack can now be mapped to a common backend VSD subnet. This capability is
added in support of the SR-IOV L2 duplex use case where two VLAN-unaware SR-IOV ports from a VM can attach
to the same L2 domain (or subnet) using the same CIDR (but non-overlapping address allocation pool) through two
different physnets for redundancy.

Neutron Trunk HEAT Resource

The Neutron Trunk HEAT resource has been backported from Pike to Newton as Nuage Trunk HEAT resource in
support of the SRIOV VLAN-aware-VM use case. The Trunk HEAT resource is not supported in OpenStack Ocata at
this time.

Allowed Address Pair with VIP Support for IPv6

Allowed address pairs are now supported with VIP for IPv6 in L3 Domains.

Stateless Security Groups

Support for stateless security groups (SGs) has been added in 5.2.2. All rules within an SG are stateless if the SG
is created as stateless. A Nuage Security Group HEAT resource has been added to support stateless SG orchestration
through HEAT.


OpenStack Project Name in VSD User Group Description

The OpenStack Project or Tenant name is now reflected in the description of VSD user groups for all user groups
created by the Nuage OpenStack Plugin.

OpenStack Pike Support

OpenStack Pike support has reached GA in this release. It is supported on Ubuntu 16.04 OpenStack Pike with all
OpenStack services running as bare-metal processes, and on RedHat OSP 12 with all OpenStack services except
Neutron and VRS running as containers, in line with the RedHat OSP strategy. Nuage HEAT and Horizon extensions
are layered on top of the respective base service containers. SR-IOV automation, Ironic, AVRS, and ESXi integration
are not yet supported on Pike but will be supported in a subsequent release.

OSPd12 (BETA)

Support for OSPd12 has been introduced as BETA in 5.2.2. OSPd12 deploys all OpenStack services except Neutron,
Manila and Cinder as containers.

VMware

vCenter Integration Node Active/Standby

With the introduction of vCenter Integration Node (VCIN) Active/Standby support, customers can now deploy a
standby VCIN node that can be activated easily in case of an active VCIN failure.

Metadata Plugin Clean-up Management

It is now possible to configure the time when VMs that have been activated using metadata are cleaned up from the
VSD once they are powered off. This is achieved by providing a new metadata field that identifies the timer.

vCenter Integration Node Reduced Footprint

The vCenter Integration Node now has its own installation option when running the VSD installation script. In order
to reduce the size of the VCIN footprint, this option disables several of the services that run on a standard standalone
VSD install.

Containers

OpenShift 3.7 with RHEL 7.4 and Atomic 7.4 Support

The Nuage OpenShift integration now supports OpenShift 3.7 with Red Hat Enterprise Linux 7.4 and Red Hat Atomic
7.4.

OpenShift 3.7 on Azure with RHEL 7.4 Support

The Nuage OpenShift integration now supports OpenShift 3.7 with Red Hat Enterprise Linux 7.4 on Azure.


Hyper-V

SCVMM Plugin Support for L2 Domains

The SCVMM plugin now supports the configuration of L2 domains on a VM network interface in Hyper-V.

SCVMM Plugin vSwitch Filter Support

The SCVMM plugin now supports filtering the VM network interfaces by the vSwitch to which they are connected.
This enables the user to display only the interfaces connected to the Nuage-enabled vSwitch.

Persistent VLAN Configuration for the Hyper-V VRS

With the 5.2.2 release, the VLAN configuration that could previously be applied manually can now be configured
during the installation of the Hyper-V VRS, allowing this configuration to be persistent.

3.6.4 Security

Mitigation for Kernel Side-Channel Attacks

There have been recent CVEs regarding three potential attacks (CVE-2017-5754, CVE-2017-5753, and
CVE-2017-5715), collectively known as Meltdown and Spectre. The available Operating System patches for mitigating these
attacks have been applied to VSD and Elasticsearch VM images. This includes an upgrade of the Elasticsearch VM
from CentOS 7.3 to 7.4. Performance impact of up to 17% on VSD has been observed under high API or activation
load. No significant impact to Statistics storage or retrieval has been observed on Elasticsearch.
It is possible to control the performance impact of these patches following configuration instructions for the appropriate
Operating System. For Red Hat Enterprise Linux and CentOS (used on VSD, VCIN and Elasticsearch Statistics DB
VMs), customers can use the Red Hat instructions.
VSC, VSG, VSA are not affected by these vulnerabilities since they do not have any facility to run user-provided
software or code. VSC can run on affected hypervisors, and is supported in a patched environment.
For hypervisors running VRS or AVRS directly on the hypervisor (KVM on RHEL/Ubuntu/SLES, Hyper-V), Nuage
does not supply the Operating System, but has validated functionality and performance with the available Operating
System patches.
No significant impact to steady state traffic is observed, but up to 15% degradation in flow setup rate can occur.
For ESXi hypervisors running VRS or AVRS as a VM, Nuage has validated functionality and performance with
available ESXi patches. No significant impact to steady state traffic is observed, but up to 15% degradation in flow
setup rate can occur.

3.6.5 VSS

Per Enterprise/Domain Flow Collection Setting

VSS flow collection can be selectively enabled/disabled at both enterprise and domain level.

Support for Underlay Flows

VSS Flow Explorer shows flows that are sent by vPorts to underlay as well as overlay flows.


Flow Analytics UI Improvements

VSS flow analytics includes a top-talkers report per domain, as well as a breakdown of flow statistics matching explicit versus implicit and allow versus deny ACLs.

Virtual Firewall Rule Generation Improvements

VSS Virtual firewall rule generation based on flows now supports L2 domains as well as L3 domains. In addition,
the virtual firewall rule generation GUI is enhanced to automatically display matching source and destination policy
groups based on the selected flow.

3.7 New Features/Enhancements in Release 5.2.1

3.7.1 VCS

VSG/WBX: Manual EVPN Configuration

This release allows manual provisioning of EVPN services without the need for VSD. Support includes defining L2 domains, VPLS services, and EVPN within the vSwitch controller. VLANs used in SAPs must be outside the range defined in the dynamic services port profile, which is reserved for VSD.

VSG/WBX: BFD Support

Bidirectional Forwarding Detection (BFD) is a network protocol that detects faults between two forwarding engines connected by a link; i.e., failures in the path between two systems. Nuage is adding BFD to the VSG/WBX routing protocols in both the underlay and the overlay, for both IPv4 and IPv6. BFD support in 5.2.1 includes:
• Underlay routing protocols: static routes, BGP, ISIS, OSPF (note that IPv6 support is BETA in this release).
• Overlay: static routes (IPv4 and IPv6) and BGP PE-CE IPv4.
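As background on how BFD failure detection behaves (this is standard protocol behaviour per RFC 5880, not specific to this release), each end declares the session down after a configured number of consecutive missed control packets. A minimal sketch of the detection-time calculation:

```python
def bfd_detection_time_ms(remote_detect_mult: int,
                          local_required_min_rx_ms: int,
                          remote_desired_min_tx_ms: int) -> int:
    """Detection time at the local system, per RFC 5880 section 6.8.4.

    The remote system transmits no faster than the larger of its own
    desired TX interval and the local required RX interval; the session
    is declared down after remote_detect_mult such intervals pass
    without a received control packet.
    """
    negotiated_interval_ms = max(local_required_min_rx_ms,
                                 remote_desired_min_tx_ms)
    return remote_detect_mult * negotiated_interval_ms

# With typical values (multiplier 3, 300 ms intervals), a path failure
# is detected within roughly one second:
print(bfd_detection_time_ms(3, 300, 300))  # 900
```

The example values (multiplier 3, 300 ms) are illustrative defaults, not values mandated by this release.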

WBX: ECMP 64 in the Overlay

This feature enables support of ECMP up to 64. This feature is available only on the WBX, and it is configured via
CLI (not from the VSD).

VSG/WBX: VIP

Release 5.1.1 introduced BETA support for virtual IP addresses (VIP) and, optionally, virtual MACs (vMAC) on the VSG/WBX. A VIP is shared by a set of servers, typically for external server redundancy or load balancing, and was developed to support resilient (active/standby) virtual appliances with multiple VIPs. Release 5.2.1 officially supports VIP as GA.

Dual VTEP Uplink Support

For VRS-ESXi and KVM only, release 5.2.1 adds the capability of defining two underlay VTEP interfaces in the VRS
that are connected to two different underlay fabrics. This feature enables creating a domain and associating it to a
particular underlay interface VTEP. This is useful when domains need to be physically separated in different underlay
networks.

3.7.2 Integrations

OpenStack

OpenStack Pike Support (BETA)

OpenStack Pike is supported with this release. This is supported on Ubuntu 16.04 OpenStack Pike and RedHat OSP 12
with all OpenStack services running as processes. Support with OSP Director and containerized services will follow
in a subsequent release.

Octavia Load Balancer (BETA)

Support for the Octavia Load Balancer using native Octavia APIs (v2.0) has been introduced in this release. Standalone as well as active/standby modes are supported. This feature has been tested only on OSP 12 with all OpenStack services running as processes (not containers).
Distribution support in Pike is limited:
• Octavia is tech preview in Red Hat OSP 12
• Octavia is not packaged for Canonical OpenStack Pike

DHCPv6 Support for VSD-Managed Virtio Ports

The support for DHCPv6 on VM virtio and SRIOV ports for both OpenStack-managed subnets and VSD-managed
subnets has reached general availability in this release. This support is based on the Neutron DHCP agent (DNSMasq).

DHCPv4 for VM SRIOV Ports

DHCPv4 for VM SRIOV ports for both OpenStack-managed subnets and VSD-managed subnets has been introduced
in this release. This support is based on the Neutron DHCP agent (DNSMasq).

Ironic Support for Newton and Ocata

Ironic support for OSP 10, Ubuntu 16.04 OpenStack Newton and OSP 11 has been introduced in this release. Ironic for OSP 11 has feature parity with Ironic for Ubuntu 16.04 OpenStack Ocata that was released in 5.1.1/5.1.2. Ironic for Newton supports all Ironic features supported in Ocata except Port Groups and Security Groups for Provisioning/Cleaning networks, due to lack of support for these features in the base OpenStack Newton distribution. Security Groups are supported for tenant networks in Newton.

Computes with AppArmor Default Profiles

Ubuntu 16.04 OpenStack computes are now supported with AppArmor running with default profiles.

OpenStack Newton with ESXi 6.5

ESXi 6.5 hypervisors are now supported with OpenStack Newton.

VSD Certificate Verification by OpenStack Plugin

The Nuage OpenStack plugin verifies the VSD certificate during connection establishment. Root-authority-signed certificates as well as self-signed certificates are supported.

VMware

VRS Agent Dual VTEP Interface Support

With this release, we are introducing the support for two VTEP interfaces on the ESXi VRS Agent. This feature
allows customers to send traffic over multiple underlays in case they want to physically separate the traffic over
multiple interfaces on their ESXi hypervisors. This is useful if certain domain traffic needs to be physically separated
from other traffic, even in a pure overlay scenario.

Accelerated VRS Agent for ESXi

We now support the deployment of an Accelerated VRS (AVRS) Agent. By using DPDK specifically designed to work in the VMware environment, the AVRS Agent provides increased performance compared to the regular VRS. The AVRS Agent is provided as a separate image and can be deployed using the vCenter Integration Node. In addition to the current release notes, check the VMware Integration Guide for more details on requirements and limitations.

VRS Agent Resource Management

We now allow the customer to configure the number of vCPUs and the amount of memory assigned to the VRS Agent. As part of this change, we have also changed the default VRS image to require 4 GB of memory as the Nuage product capabilities grow.
We also introduce resource reservation capabilities for the VRS Agent. When this feature is enabled, we will attempt to reserve 100% of the assigned vCPUs and memory of the VRS Agent.

VRS Agent Offload Behaviour Management

By default, the VRS Agent enables LRO and GRO on the interfaces involved in data path traffic (the VM-facing interface and the VTEP interfaces) for better performance. Because this is not a desired configuration in certain environments, we have made this behaviour configurable through the vCenter Integration Node.

vCenter Integration Node Scalability Improvements

To support larger scale deployments, it is now possible to configure the rate at which the VRS communicates its
metrics and statistics towards the VCIN.

Containers

Kubernetes 1.9.0 Support

The Kubernetes integration of Nuage now supports Kubernetes 1.9.0.

OpenShift 3.6 Support

The OpenShift integration of Nuage now supports OpenShift 3.6 with RHEL 7.3 and Atomic 7.3.

Hyper-V

Hyper-V 2016 Support with SCVMM and OpenStack

With this release, Nuage brings support for Hyper-V 2016. This support is available for both the SCVMM environ-
ments and the OpenStack environments.

MTU Management

With VXLAN, a 50-byte header is added on top of an existing packet. This header can cause the packet size to exceed the MTU of the physical interface of the Hyper-V host if the physical interface is configured with an MTU of 1500 bytes. To address this problem, we are introducing a feature which, when enabled, automatically reduces the VM’s MTU to accommodate the VXLAN header. This feature targets environments where the physical infrastructure is configured with an MTU of 1500 bytes.
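The arithmetic behind the adjustment is straightforward; a minimal sketch, assuming VXLAN over IPv4 with an untagged outer Ethernet header (the usual breakdown of the 50 bytes):

```python
# Per-header byte counts for VXLAN encapsulation over IPv4:
# 14 outer Ethernet + 20 IPv4 + 8 UDP + 8 VXLAN = 50 bytes.
VXLAN_OVERHEAD = 14 + 20 + 8 + 8

def adjusted_vm_mtu(physical_mtu: int) -> int:
    """Largest VM MTU whose VXLAN-encapsulated frames still fit
    within the physical interface MTU."""
    return physical_mtu - VXLAN_OVERHEAD

print(adjusted_vm_mtu(1500))  # 1450
```

With a jumbo-frame underlay (e.g., physical MTU 9000) the VM MTU would correspondingly be 8950, which is why the feature targets 1500-byte infrastructures specifically.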

3.7.3 Security/VSS

Flow Search

VSS supports the ability to search flows based on various criteria such as src/dst IP, src/dst ports, proto, and L4
services, as well as Nuage metadata such as Zone/Subnet.
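The kinds of criteria listed above can be pictured as a simple conjunctive filter over flow records. This is a hypothetical in-memory illustration only (the field names are invented, not the VSS API):

```python
def match_flows(flows, **criteria):
    """Return flows whose fields equal every given criterion.

    Each flow is a dict with hypothetical keys such as 'src_ip',
    'dst_ip', 'dst_port', 'proto', and 'zone'; a flow matches only
    if all supplied criteria are equal (logical AND).
    """
    return [f for f in flows
            if all(f.get(k) == v for k, v in criteria.items())]

flows = [
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.1.9",
     "dst_port": 443, "proto": "tcp", "zone": "web"},
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.2.7",
     "dst_port": 53, "proto": "udp", "zone": "dns"},
]
print(match_flows(flows, proto="tcp", dst_port=443))
```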

Policy Generation and Virtual Firewall Rule Management

VSS simplifies firewall rule management by providing an intent-based, enforcement-direction-independent virtual firewall rule abstraction for security administrators that is automatically translated to Ingress/Egress ACLs.
VSS supports generation of virtual firewall rules for microsegmentation, based on contextual visibility, that are enforced in the Nuage VRS.

Security Administrator Role

VSP includes support for a new enterprise security administrator role that provides restricted privileges to manage
security policies, view security analytics reports and audit configuration without the ability to configure networking.

3.8 New Features/Enhancements in Release 5.1.2

3.8.1 VCS

IPv6 OOB Management on WBX

This release introduces IPv6 support for the out-of-band management connection configured in the BOF that applies
to the virtual machine running SROS.

Underlay Mac-move CLI Configuration

Release 5.1.1 introduced EVPN MAC-move support in the overlay, with BETA support for the underlay. MAC move is supported in the underlay from this release.

Enhanced Active/Standby VSD cluster - GA

This release introduces generally available support for deploying two VSD clusters in different locations with one
active VSD cluster in one location and one warm standby VSD cluster in a different location. The warm standby VSD
cluster is configured with automatic asynchronous replication enabled (from the active VSD cluster to the standby
VSD cluster) to ensure minimal loss of data during a failover from the active to the standby VSD cluster. For more
information on this VSD deployment model, refer to the VSP Install Guide.

VRS on RHEL 7.4

For KVM hypervisors, Red Hat Enterprise Linux 7.4 is now supported.

LDAP Group Name Support Enhancement

The LDAP integration has been improved to also allow underscores instead of spaces in the names of the mandatory groups. This adds support for LDAP integration with Red Hat Identity Management.

3.8.2 VSS

Policy Group Expressions Enhancements (Supported for VCS/VRS only)

Policy Group Expressions are supported for advanced forwarding ACLs in addition to Ingress/Egress ACLs. Policy Group Expressions in ACLs are currently supported for VCS/VRS only and are not currently supported for VNS/NSG.

Layer 4 Services Enhancements

L4 service constructs are supported for advanced forwarding ACLs in addition to Ingress/Egress ACLs. New VSS L4 service analytics reports provide visibility into L4 services and associated flow details. In addition, default service definitions can be customized on a per-Enterprise basis.

Layer 7 Security (Supported for VNS/NSG only)

Layer 7 / application signatures are supported as a part of Ingress/Egress ACL matching criteria to allow/deny access
to specific applications. In addition, VSS Flow visualization supports Layer 7 / application visibility. Layer 7 security
policies and flow visibility are currently supported for VNS/NSG only and not currently supported for VCS/VRS. New
VSS L7 application analytics reports provide visibility into L7 applications and associated flow details.

3.8.3 Integrations

Kubernetes & OpenShift

Subnet Autoscaling for Kubernetes & OpenShift 3.5

The Nuage Monitor running on the Masters automatically adds more subnets for a Kubernetes namespace if the
initially allocated subnet is running out of IP addresses.
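The decision the Monitor makes can be pictured as follows. This is a hypothetical sketch using plain subnetting; the threshold, CIDRs, and function name are invented for illustration and are not the actual implementation:

```python
import ipaddress

# Invented example values: a cluster CIDR from which per-namespace
# subnets are carved, and the subnets already allocated.
cluster_cidr = ipaddress.ip_network("10.10.0.0/16")
allocated = [ipaddress.ip_network("10.10.0.0/24")]

def maybe_add_subnet(used: int, threshold: int = 8):
    """If free addresses in the newest subnet fall below the
    threshold, carve the next non-overlapping subnet of the same
    size out of the cluster CIDR; otherwise return None."""
    current = allocated[-1]
    free = current.num_addresses - 2 - used  # minus network/broadcast
    if free >= threshold:
        return None
    for candidate in cluster_cidr.subnets(new_prefix=current.prefixlen):
        if all(not candidate.overlaps(a) for a in allocated):
            allocated.append(candidate)
            return candidate
    raise RuntimeError("cluster CIDR exhausted")

print(maybe_add_subnet(used=250))  # 10.10.1.0/24
```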

Support for OpenShift 3.5 on Atomic Hosts

This release supports running the Nuage CNI Plugin and VRS on RHEL Atomic 7.3 Hosts. The Nuage Monitor on
the Masters is only supported on RHEL server hosts.

NodePort Services for Kubernetes and OpenShift 3.5

The Nuage integration for Kubernetes now supports exposing Kubernetes Services on each node’s IP at a static port (the NodePort). The NodePort service can then be contacted from outside the cluster by requesting <NodeIP>:<NodePort>.
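As a minimal sketch, a standard Kubernetes NodePort Service manifest reachable at <NodeIP>:30080 looks like the following (expressed here as a Python dict; the service name, selector, and port numbers are hypothetical examples):

```python
# Standard Kubernetes Service manifest of type NodePort, expressed as
# a Python dict (names and port numbers are invented for the example).
nodeport_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {
        "type": "NodePort",
        "selector": {"app": "web"},
        "ports": [{
            "port": 80,          # cluster-internal Service port
            "targetPort": 8080,  # container port on the backing Pods
            "nodePort": 30080,   # static port opened on every node
        }],
    },
}

# External clients then request <NodeIP>:30080 on any cluster node.
print(nodeport_service["spec"]["ports"][0]["nodePort"])  # 30080
```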

Kubernetes & OpenShift Services Accessible through Service IP from the Underlay Nodes

The Kubernetes and OpenShift nodes can now access Kubernetes and OpenShift Service IPs directly from the underlay.
This is achieved using the nuage-infra daemonset.

OpenStack

Openstack Controller with Ubuntu 16.04 Ocata and AppArmor

Ubuntu 16.04 Ocata OpenStack Controller has been validated with AppArmor using default profiles.

Ocata Upgrade Support

OpenStack Newton with the Nuage plugin can be upgraded to OpenStack Ocata. The RedHat OSP 10 to OSP 11
upgrade has been validated with RHEL 7.4.

Ironic - Port Groups, VLAN Trunking and VLAN Transparency

This release introduces support for Ironic port groups, as well as VLAN trunking and VLAN transparency on Ironic ports. Both LAG and MC-LAG are supported with Ironic port groups. The switch configuration for LAG and MC-LAG is not automated through the OpenStack plugin; it must be pre-configured. VLAN trunking and VLAN transparency can be configured individually on Ironic ports or on port groups. This feature is supported only on Ubuntu 16.04 OpenStack Ocata.

DHCPv6 (BETA)

DHCPv6 for VM virtio and SRIOV ports for both OpenStack-managed subnets and VSD-managed subnets is sup-
ported in this release. This support is based on the Neutron DHCP agent (DNSMasq).

Horizon Support for IPv6 (BETA)

Horizon support for IPv6 covering both OpenStack-managed subnets and VSD-managed subnets has been introduced
in this release.

Allow non-IP Packets by Default

All new subnets and routers (L2 and L3 domains) created in OpenStack will allow non-IP packets by default once the config knob for this feature is enabled. Pre-existing subnets/routers will continue to disallow non-IP packets when this feature is turned on.

Red Hat OSP 10 Support with RHEL 7.4

Red Hat OSP 10 is now supported with RHEL 7.4.

Red Hat OSP 11 Support with RHEL 7.4

Red Hat OSP 11 is now supported with RHEL 7.4.

3.9 New Features/Enhancements in Release 5.1.1

3.10 VCS

3.10.1 Active/Standby VSD Cluster (BETA)

This release introduces support for deploying two VSD clusters in different locations, with one Active VSD cluster in one location and one “warm” Standby VSD cluster in a different location. The “warm” Standby VSD cluster is configured with automatic asynchronous replication enabled (from the Active VSD Cluster to the Standby VSD Cluster) to ensure minimal loss of data during a failover from the Active to the Standby VSD cluster. For more information on this VSD deployment model, refer to the VSP Install Guide.

3.10.2 210 WBX

Release 5.1.1 introduces the support of the new 210 WBX platforms running Nuage Networks software. Two models
of 210 WBX using Nuage Networks software are available:
• 210 WBX 32QSFP28: A compact 1 RU platform with capacity for 32 small form-factor pluggable QSFP28 or
QSFP+ ports, two redundant AC/DC power supplies, and five fan trays
• 210 WBX 48SFP28 6QSFP28: A compact 1 RU platform featuring 48 SFP ports (SFP28, SFP+ or SFP) and 6
QSFP28 or QSFP+ ports, two redundant AC/DC power supplies, and four fan trays

The 210 WBX uses a powerful six-core x86 architecture CPU, extending the capabilities of Nuage SROS.
Feature set support is equivalent to 7850 VSG (exceptions apply).

3.10.3 Expose Shared Network Enterprise

The Shared Infrastructure feature exposes the VSD Shared Network construct directly to csproot users through the VSD Architect and the VSD API. In addition, the Shared Infrastructure now supports the following advanced configuration:
• Static routes
• ACL rules
• Dual shared resource uplink (previously, the API only supported single uplink)
• PE-CE configuration. PE-CE is necessary to dynamically inject multiple routes, route-out via nearest exit, avoid
statics, etc.

3.10.4 EVPN Loop Prevention and MAC Move Control

Release 5.1.1 adds support for VPLS loop detection capabilities on VSG/WBX. MAC move allows monitoring the MAC relearn rate in a VPLS context and blocking VPLS ports (SAPs or VPorts) where the MAC relearn rate exceeds a predefined rate. This mechanism is used to detect and remove loops in the VPLS network without using any control protocol. Support includes EVPN VSD-created overlay services. Support of MAC move for manually created VPLS (underlay) is BETA.

3.10.5 VIP on HW (BETA)

Currently supported on VRS in host vPorts, release 5.1.1 introduces support for virtual IP addresses (VIP) and virtual MACs (vMAC) on VSG/WBX. A VIP is shared by a set of servers, typically for external server redundancy or load balancing, and is developed to support resilient (active/standby) virtual appliances with multiple VIPs.

3.10.6 Support for VRS on SUSE Linux (BETA)

Support for VRS on SUSE Linux - SLES 12 SP2 (kernel @4.4.21-81) has been added.

3.11 VSP

3.11.1 VSD Platform Security

TLS 1.2 For JMS

External JMS clients can now connect to the VSD JMS Server over TLS v1.2.

VSD Default Password Change

During the VSD installation, the root user can decide to update all the internal passwords to be used by the VSD
services.


LDAP and AD Support

Nuage supports LDAP and AD using the API and the GUI (the VSD Architect). LDAP and AD can be used both as a
single trusted source for user authentication and authorization and to secure application and infrastructure resources.
When integrating with LDAP on the CSP and/or Organization/Enterprise level, you can:
• Configure the LDAP User property used for the username in VSD.
• Configure a prefix and suffix for matching the VSD group names to the group names in LDAP.

3.11.2 VSD Operations

Transparent Proxy/NAT for JMS

The JMS client-to-server connection has been validated through a front-end load balancer, and a configuration example with HAProxy is provided.

AMQP as a JMS alternative

AMQP can now be used as an alternative to JMS for listening to events and alarms from VSD.

3.11.3 Alarms

Validation and documentation have been added for VNS- and VCS-related alarms.

3.11.4 VSD Platform Upgrade

• JRE 1.8.0_141
• MySQL percona 5.6.32-25-17 + Galera 3.17
• Ejabberd 3.2.16_3
• RHEL 7.3 (3.10.0-514.26.2.el7.x86_64)

3.11.5 Elasticsearch Platform Upgrade

• CentOS 7.3 (3.10.0-514.16.1.el7.x86_64)

3.12 Integrations

3.12.1 VMware

VRS Agent - Logging Improvements

The VRS Agent has been configured to use syslog for all Nuage processes and store all log files in a single folder.
Through the vCenter Integration Node, the VRS Agent can also be configured to send its log messages to a remote
syslog server over TCP or UDP.


VRS Agent - Custom Hostname and VM Name

Through the vCenter Integration Node, the hostname of the VRS Agent can be configured for each individual VRS
Agent. This hostname will be used if the management interface uses DHCP. In the case of vSphere 6.5, the VRS Agent
hostname will also be used to rename the VRS Agent in vCenter, facilitating easy identification.

VRS Agent - Monitor and Redeployment Policy for Disk Usage

The disk usage of both the root partition and the log partition has been added to the VRS Agent monitoring and statistics in the vCenter Integration Node. These new metrics can also be used in a redeployment policy.

VMware Integration - Metadata Changes Handling Without Cold Boot

Previously, a VM had to be cold booted before a change in metadata was applied. With this improvement, changes
made to metadata on a VM while it is running will be applied immediately.

3.12.2 Containers

Kubernetes & OpenShift Installation using DaemonSets

The install procedure has been updated to use DaemonSets to automatically deploy the Nuage VRS and CNI plugin on all the cluster nodes, and the kubemon/openshift-monitor on the cluster Masters. The Nuage VRS, CNI plugin, audit daemon, and kubemon/openshift-monitor all run as Pods now. The old installation method using RPMs is still available for this release but will be deprecated.

IPtables kube-proxy for OpenShift

The OpenShift CNI plugin now supports the iptables-based kube-proxy. This is more efficient than moving packets from the kernel to the user-space kube-proxy and back to the kernel, and results in higher throughput and lower latency. The user-space kube-proxy is still supported.

3.12.3 Microsoft Hyper-V

Microsoft WHQL Certification

VRS for Hyper-V 2012 R2 is now officially certified by Microsoft under the Windows Hardware Quality Labs
(WHQL) certification program. This means that users can rest assured that the Open vSwitch (OVS) kernel com-
ponent of VRS has been validated to work satisfactorily with Windows.

VSS support (BETA)

This release introduces VSS Security Monitoring and Visualization as a BETA feature for Hyper-V 2012 R2 hosts.
This allows users to realize the full potential of the VSP platform by being able to prevent, detect and respond to
security threats.

VPort and ACL Statistics

VPort and ACL statistics are now exported from the VRS on Hyper-V hosts to the VSD.


NIC Teaming

Hyper-V 2012 R2 native NIC Teaming has been validated to work in conjunction with VRS.

3.12.4 OpenStack

OpenStack Ocata Support

Nuage integration support for OpenStack Ocata is now GA for greenfield deployments. Upgrades from Newton to Ocata are not yet supported.

Ironic Alignment with Upstream Ocata

Nuage Ironic support has been updated and aligned with the latest upstream Ocata design/APIs. Additionally, this release brings in new features such as LLDP-based introspection, Port Security, Security Groups (for both provisioning and tenant networks) and VLAN transparency, in addition to multi-tenancy support for bare metal. This feature is supported only on Ocata. Note that upgrading to Ocata Ironic, or upgrading from Ocata Ironic to a future Nuage or OpenStack release, is not supported in Release 5.1.1. Contact your Nuage Representative if there is a need to deploy 5.1.1 Ocata Ironic in production or any other environment that will require an upgrade path to a future release.

Nuage AVRS Integration with RedHat OSP10/RHEL 7.3 and Ubuntu OpenStack Ocata/16.04

Nuage Accelerated VRS (DPDK) is now integrated and supported with RedHat OSP10/RHEL 7.3 and Ubuntu Open-
Stack Ocata/16.04. The vPorts of the VMs spun up with OpenStack will be accelerated using AVRS. AVRS with
OSP11/RHEL 7.3 and Ubuntu OpenStack Newton/16.04 is not supported yet.

OpenStack-Managed Dual Stack IPv4/IPv6 Support

OpenStack-managed dual-stack IPv4/IPv6 support has been introduced in this release. Features such as Security Groups with IPv6 rules, IPv6 extra routes, and IPv6 Allowed Address Pairs are supported. IPv6 address allocation through DHCPv6/SLAAC is not supported at this time for OpenStack-managed subnets.

SR-IOV with VLAN Support and Automated VSG/WBX Orchestration

SR-IOV support with dynamic VSG/WBX orchestration of VLANs and mapping to HW-VTEP and automated
Compute-to-ToR topology discovery using LLDP is supported in this release. The Nuage ML2 Mechanism Driver
supports both VLAN-aware-VM (VLAN tagging done by the VM) and VLAN-unaware-VM (VLAN tagging done by
the SR-IOV driver) modes with SR-IOV. The topology collector is now available as an installable package and also
supports automated population of the discovered host-to-switch topology in the Neutron database.

VMware ESXi Integration with OpenStack Newton and Ocata

VMware ESXi 5.5 integration is now supported with the Nuage OpenStack Newton and Ocata Neutron plugins.

SUSE OpenStack Cloud 7 - Newton (BETA)

Support for Nuage integration with SUSE OpenStack Cloud 7 (Newton) and SLES 12 SP2 has been introduced in this
release.


DHCPv6 support for VSD-Managed Virtio Ports (BETA)

DHCPv6 support for VM virtio ports has been introduced for VSD-managed subnets. This is based on the Neutron
DHCP agent (DNSMasq). This feature is supported only on Newton.

OpenStack SFC Support (BETA)

VLAN-based Service Chaining aligned with the Neutron networking-sfc model has been introduced in this release. The VLAN ID is used as an SFC ID to direct the packet through a specific set of Value Added Services (VAS). This BETA support is only for a single-instance VAS per hop. For more information, go to https://docs.openstack.org/newton/networking-guide/config-sfc.html

Nuage Openstack Monolithic Plugin Discontinued

As stated in the 5.0.1 Release Notes, support for the Nuage OpenStack Monolithic/Core Plugin has been discontinued for all OpenStack releases from this VCS release. The Nuage plugin now uses ML2 drivers.

3.13 Security/VSS

3.13.1 Policy Group Expressions in ACL (VCS only)

Policy group expressions can be defined as boolean expressions that match select VPorts using AND, OR, and NOT operations, and can be used as part of an ACL definition. Supported on VRS in this release.
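A minimal sketch of how such a boolean expression selects vPorts by policy-group membership. The nested-tuple expression syntax here is a hypothetical illustration, not the VSD API:

```python
# Each vPort carries a set of policy-group names; an expression is a
# nested tuple of AND/OR/NOT over group names (invented syntax).
def matches(expr, groups: set) -> bool:
    if isinstance(expr, str):          # leaf: a policy-group name
        return expr in groups
    op, *args = expr
    if op == "AND":
        return all(matches(a, groups) for a in args)
    if op == "OR":
        return any(matches(a, groups) for a in args)
    if op == "NOT":
        return not matches(args[0], groups)
    raise ValueError(f"unknown operator: {op}")

# Select vPorts that are in 'web' or 'app' but not in 'quarantine'.
expr = ("AND", ("OR", "web", "app"), ("NOT", "quarantine"))
print(matches(expr, {"web"}))                # True
print(matches(expr, {"app", "quarantine"}))  # False
```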

3.13.2 Services and Service Groups (VCS only)

Ingress/egress ACL rules can be defined based on a higher-level service / service group abstraction rather than on protocols and ports. In addition, VSS analytics supports visualization of traffic flows with additional service context. Supported on VRS in this release.

3.13.3 L7 Security for NSG (VNS only) (BETA)

Ingress/Egress Security rules can be defined and enforced on NSG based on L7 / Application signatures to restrict
traffic to/from branch to datacenter or Internet to select applications (e.g., allow Skype for Business). In addition, VSS
flow analytics includes L7 / application context for matching traffic flows at NSG.

3.13.4 Platforms

Support for Hyper-V environments (BETA)

3.14 New Features/Enhancements in Release 5.0.2

• Core (page 49)


– VSP upgrade to 5.0.2 (page 49)


• Kubernetes and OpenShift (page 49)


– Customizable Subnet Size for a Namespace (page 49)
– Security (page 49)

3.14.1 Core

VSP upgrade to 5.0.2

• VCS incremental upgrade support from 4.0.R6.1/R7/R8/R9
• VNS incremental upgrade support from 4.0.R7/R8/R9
• VCS/VNS incremental upgrade support from 5.0.1
For VNS incremental upgrades, VSC software patches must be applied prior to the upgrades on the 4.0 releases (R6.1, R7, R8, R9) to support incremental upgrades to 5.0.2.
The Elasticsearch nodes do not need to be upgraded when upgrading VSP 4.0R6.1/R7/R8/R9 or VSP 5.0.1 to VSP 5.0.2.

3.14.2 Kubernetes and OpenShift

Customizable Subnet Size for a Namespace

Users can now set the size of the subnet that is automatically created for a Kubernetes Namespace or OpenShift Project.
This enables users to size their subnets based on the size of their Namespace/Project.
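The trade-off behind choosing a subnet size is plain subnetting arithmetic; a sketch (the helper name and the reserved-address count are assumptions for illustration, not documented Nuage values):

```python
import math

def prefix_for_pods(pod_count: int, reserved: int = 3) -> int:
    """Smallest IPv4 prefix length whose subnet can hold pod_count
    pods plus a few reserved addresses (e.g., network, broadcast,
    gateway -- the count of 3 is an assumption for the example)."""
    needed = pod_count + reserved
    return 32 - max(2, math.ceil(math.log2(needed)))

print(prefix_for_pods(50))   # 26  (a /26 provides 64 addresses)
print(prefix_for_pods(200))  # 24  (a /24 provides 256 addresses)
```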

Security

Scalability improvements significantly increase the number of policy groups per domain, as well as the number of policy groups per vPort, for large enterprise micro-segmentation use cases.

3.15 New Features/Enhancements in Release 5.0.1

• VSD Infrastructure Improvements (page 50)


• Security Policy Scale Improvements (page 50)
• VSD Platform Security Hardening (page 50)
• VCS: Expose VLANs to VMs on VRS (page 51)
• IPv6 Overlay for VRS (page 51)
• OpenStack (page 51)
– Openstack Newton support with full ML2 Mechanism Driver (page 51)
– Openstack Ocata Support with Full ML2 Mechanism Driver (BETA) (page 52)
– Dual Stack IPv6 Overlay Support (page 52)

– VLAN Transparency (page 52)


– Trunk and Sub-VPort Support for VLAN-aware VMs (page 52)
– SRIOV with VLAN Support and Automated VSG Orchestration (BETA) (page 52)
– Enable Flow Logging and Flow Stats Collection (page 53)
– AVRS (Accelerated VRS) (page 53)
• VMware (page 53)
• Kubernetes & OpenShift (page 53)
– CNI Plugin (page 53)
– Certificate-based Authentication (page 53)
• Microsoft Hyper-V (page 53)
– Support for OpenStack Newton (page 53)
– New Workflow for Nuage Add-In for SCVMM (page 53)
• VSP Upgrade to 5.0.1 (page 54)

3.15.1 VSD Infrastructure Improvements

• ActiveMQ replaces HornetQ as the JMS backend
• New client sample available on Nuage GitHub
• Updated JMS integration documentation available in VSP API Programming Guide
• New monit services for Infinispan and ActiveMQ service status monitoring
• JRE version updated to 1.8
• JBoss version updated to AS 7.2

3.15.2 Security Policy Scale Improvements

In previous releases, Security Policy rule computation was performed in part on VSD and in part on VSC. VSD would compute the Security Policy rules for each VPort as it was instantiated and send them to VSC. VSC would then perform more detailed computation of the rules (e.g., Policy Group membership lists) and download them to VRS/NSG.
In 5.0.1 and above, Security Policy is computed at the domain level by VSD, and sent to VSC once per domain rather
than per VPort. This improves messaging efficiency between VSD and VSC, especially when VPorts in the domain are
rapidly instantiated, and it also increases the maximum number of policy rules that can be instantiated on the VPorts
of the domain.
In 5.0.1, configuring security policy on shared resources (L3 Shared Resource, L2 Shared Resource) is no longer
permitted. In previous releases it was possible to configure security policy on the corresponding object within an
organization (e.g., subnet in public zone, linked L2 domain). This is blocked in 5.0.1. In a future release, it will be
possible to configure security policy on the shared resource directly, rather than on the object within the organization.

3.15.3 VSD Platform Security Hardening

• Upgraded JRE version

• API calls logged on VSD with additional header fields
• Restricted cron access to root
• Removed root SSH access requirements between VSD nodes during VSD installation and upgrade
• Provided VSD API Calls and JMS events correlation mechanism

3.15.4 VCS: Expose VLANs to VMs on VRS

VLANs can now be used on VMs hosted on KVM hypervisors with VRS. The primary use cases are with OpenStack,
but the features are available from VSD. They can be used in two ways:
• VLAN transparency on L2 domains and L3 domain subnets
• Trunk and sub-VPort support for VLAN-aware guest VMs
VLAN transparency allows VMs on the same L2 domain, or on the same subnet of an L3 domain, to send VLAN-tagged traffic to one another. The VLAN tag is forwarded transparently from the source VM to the destination VM. Security Policy rules are ignored on VLAN-tagged traffic. VLAN traffic is treated as unknown by L3 functions (routing, DHCP, FIP, etc.) and dropped.
Trunk and sub-VPort support allows a single guest VM vNIC to have connectivity to multiple L2 domains and L3 domain subnets. The vNIC is connected to a parent VPort and trunk. Multiple sub-VPorts can be connected to the trunk, each with a specified VLAN. When traffic is sent on the vNIC with the VLAN specified for a sub-VPort, the VLAN tag is stripped and the traffic is forwarded on that sub-VPort.
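The demultiplexing described above can be pictured as a simple table lookup. This is a hypothetical illustration (invented names and frame representation), not the VRS data path:

```python
# The trunk maps each VLAN ID to a sub-VPort (invented identifiers);
# a matching frame has its tag stripped and is forwarded on that
# sub-VPort, while unmatched VLANs have no sub-VPort to go to.
trunk = {100: "subvport-a", 200: "subvport-b"}

def demux(frame: dict):
    """Return (sub-VPort, untagged frame), or None if the frame's
    VLAN has no sub-VPort on this trunk."""
    subvport = trunk.get(frame.get("vlan"))
    if subvport is None:
        return None
    untagged = {k: v for k, v in frame.items() if k != "vlan"}
    return subvport, untagged

print(demux({"vlan": 100, "payload": "hello"}))
# ('subvport-a', {'payload': 'hello'})
```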

3.15.5 IPv6 Overlay for VRS

Complementing IPv6 overlay support on VSG that was introduced in an earlier release, IPv6 overlay on VRS intro-
duces support for dual-stack IPv4/IPv6 subnets in L3 domains and L2 domains.
• Dual-stack only is supported
• There is no DHCPv6/SLAAC support and therefore VMs’ IPv6 addresses must be manually configured
• ACLs are supported
• Static IPv6 routes are supported

3.15.6 OpenStack

OpenStack Newton Support with Full ML2 Mechanism Driver

OpenStack Newton (Red Hat OSP 10 and Ubuntu/UCA 16.04) is now supported with the full ML2 mechanism driver starting with this release. The full ML2 mechanism driver has full feature parity with the monolithic plugin and supports all new features from 5.0.1. Upgrade to full ML2 is supported from both the monolithic plugin and the partial ML2 plugin (VSD-managed only) in the case of Mitaka, and only from the monolithic plugin in the case of Newton.
The Nuage monolithic plugin is supported at the feature level of 4.0R8, and therefore none of the new features from 5.0.1 are supported. The Nuage monolithic plugin is deprecated in 5.0.1. The last release supporting this plugin will be 5.0.2. Subsequent releases will support the ML2 driver only.

3.15. New Features/Enhancements in Release 5.0.1 51


VSP Release Notes, Release 5.4.1

OpenStack Ocata Support with Full ML2 Mechanism Driver (BETA)

OpenStack Ocata is supported (BETA) with the full ML2 mechanism driver in this release. The new features in 5.0.1
and a few legacy features are not yet supported on Ocata and will be added in a future release. Integration with
VMWare ESXi is not supported at this time. The Nuage monolithic plugin has been replaced with the full ML2
mechanism driver. This monolithic plugin is no longer supported starting with the Ocata release.
The VSP Ocata ML2 Guide will be released shortly.

Dual Stack IPv6 Overlay Support

Dual stack IPv6 overlay support is introduced with VSD-managed workflow and is aligned with VSP core capabilities.
OpenStack-managed workflow and IPv6 address allocation through DHCP/SLAAC are not supported at this time. This
feature is supported only on Newton with the full ML2 mechanism driver.

VLAN Transparency

VLAN transparent networks allow for communication between guest VMs over tagged VLANs without any knowledge of the VLANs by OpenStack Neutron. VLAN transparency is configured in OpenStack using the standard Neutron API attribute vlan-transparent on a network.
https://specs.openstack.org/openstack/neutron-specs/specs/kilo/nfv-vlan-trunks.html
Since VLAN transparency is not configurable on Nuage VSP, the OpenStack plugin will accept any valid value for the
vlan-transparent attribute (true, false, absent). In all cases VLANs will be passed transparently between the
VMs. This feature is supported only on Newton with the full ML2 mechanism driver.
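As an illustration of the attribute in use, a network create call with the legacy neutron client might look as follows (the network name is an example); the command is printed rather than executed:

```shell
# Build and print the neutron call that sets the standard vlan-transparent
# attribute on a new network (network name is illustrative).
cmd="neutron net-create transparent-net --vlan-transparent True"
printf '%s\n' "$cmd"
```

Per the behavior described above, the Nuage plugin accepts any valid value here and passes VLANs transparently in all cases.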

Trunk and Sub-VPort Support for VLAN-aware VMs

This feature introduces support for multi-network VMs using VLANs, compatible with the trunk port and subport
implementation of OpenStack VLAN-aware VMs:
• https://specs.openstack.org/openstack/neutron-specs/specs/newton/vlan-aware-vms.html
• https://docs.openstack.org/draft/networking-guide/config-trunking.html
This feature is supported only on Newton with the full ML2 mechanism driver.
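A sketch of the corresponding trunk/subport workflow with the OpenStack client follows; all resource names and the VLAN ID are illustrative, and the commands are printed (via "show") rather than executed so they can be adapted first:

```shell
# Dry-run sketch of the Newton trunk/subport workflow; resource names and the
# VLAN ID are illustrative. Replace `show` with direct execution once the
# values match your environment.
show() { printf '+ %s\n' "$*"; }

show openstack port create --network parent-net parent-port
show openstack network trunk create --parent-port parent-port trunk0
show openstack port create --network tagged-net subport-vlan100
show openstack network trunk set trunk0 \
  --subport port=subport-vlan100,segmentation-type=vlan,segmentation-id=100
```

The guest then attaches its vNIC to the parent port and tags traffic with VLAN 100 to reach the subport's network, matching the sub-VPort behavior described in the VCS section above.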

SRIOV with VLAN Support and Automated VSG Orchestration (BETA)

SRIOV support with dynamic VSG orchestration of VLAN mapping to HW-VTEP and automated compute-to-ToR
topology discovery using LLDP has been added in this release. The Nuage ML2 Mechanism Driver supports both
VLAN-aware-VM (VLAN tagging done by VM) and VLAN-unaware-VM (VLAN tagging done by SRIOV driver)
modes with SRIOV. The former mode supports multiple VLANs per SRIOV NIC Virtual Function (VF) while the
latter mode supports one VLAN per VF. In both cases, each of the individual VLANs can be mapped to a separate
overlay subnet (VxLAN VNID). VLAN-aware-VM with SRIOV is compatible with the trunk port and subport imple-
mentation of OpenStack VLAN-aware VLANs: https://specs.openstack.org/openstack/neutron-specs/specs/newton/
vlan-aware-vms.html
This feature is supported only on Newton with the full ML2 mechanism driver.


Enable Flow Logging and Flow Stats Collection

Flow logging and flow statistics collection can be enabled by default when security group rules are created or modified
in OpenStack. When these operations are enabled, all flows created by the security group rules are logged, and
statistics are collected for them. Flow logging and flow statistics collection are disabled by default. This feature is
supported only on Newton with the full ML2 mechanism driver.

AVRS (Accelerated VRS)

Nuage AVRS (DPDK) support is not available for OpenStack Newton or Ocata in this release. It will be re-introduced
in a future release. AVRS and AVRS-G can continue to be used for non-OpenStack use cases.

3.15.7 VMware

• 5.0.1 brings new security improvements to the vCenter Integration Node including support for LDAP-based
authentication and authorization as well as support for changing the csproot password.
• The communication between the VRS Agents and the vCenter Integration Node will now use certificate-based
authentication for extra security.
• The vSphere Web Client metadata plugin now supports vCenters in linked mode.

3.15.8 Kubernetes & OpenShift

CNI Plugin

New network plugins for Kubernetes and OpenShift based on the Container Network Interface (CNI) are introduced. CNI is a proposed standard for configuring network interfaces for Linux containers and is also supported by Mesos. The older exec network plugins for Kubernetes and OpenShift are not supported in the Nuage 5.x releases.

Certificate-based Authentication

The Nuage Kubernetes/OpenShift monitor, which includes the NetworkPolicy API plugin and runs on the Kubernetes/OpenShift masters, now uses certificate-based authentication for all REST-based communication to the VSD. This is more secure than password-based authentication.

3.15.9 Microsoft Hyper-V

Support for OpenStack Newton

VRS for Hyper-V has been certified for OpenStack Newton deployments.

New Workflow for Nuage Add-In for SCVMM

Users can now update the IP address of a running VM without having to restart the VM.


3.15.10 VSP Upgrade to 5.0.1

• VSD DB migration is supported from 4.0R6.1/R7/R8 to 5.0.1; there is no incremental upgrade/backward compatibility support.
• The end-to-end upgrade procedure is available in the VSP Install Guide.

3.16 Deprecated Features

• Support of EL6 for VRS was removed as of 4.0.R7.
• Support for Ubuntu 12.04 hypervisors has been deprecated since 4.0R2.
• App Designer was deprecated in VSP 4.0.
• Support for VSG Virtual Chassis was removed in VSP 4.0.
• Support for OpenStack Icehouse (RHEL OSP 5) and Juno (OSP 6) was removed in VSP 4.0. Support is main-
tained only in the 3.2 release and earlier.
• Support for RHEL 7.0 and CentOS 7.0 as a KVM hypervisor operating system (OS) was removed in 3.2R6.
This OS version is no longer supported by Red Hat.
• Support of EL6 as hypervisor for VSC and VSD is deprecated and will be removed in a future release.



CHAPTER FOUR: UPGRADE

This section contains the following subsections:

• Supported Upgrade Paths for 5.3.3
  – VCS Deployments
    * Major Upgrades
    * Minor Upgrades
• What’s New in 5.2.2
  – VSD default CSP user permissions
  – Procedure to apply the VSD patch on a 5.2.2 VSD
• What’s New in 5.2.1
  – Upgrade of Elasticsearch From 2.2 to 5.5
  – VSD Certificate Renewal
• Resolved Upgrade-related Issues in 5.3.3
• Resolved Upgrade-related Issues in 5.3.2
• Resolved Upgrade-related Issues in 5.3.1
• Resolved Upgrade-related Issues in 5.2.2
• Upgrade-related Known Limitations
• Upgrade-related Known Issues

See the VNS Release Notes for VNS-specific upgrade notes.

4.1 Supported Upgrade Paths for 5.3.3

4.1.1 VCS Deployments

Major Upgrades

• 4.0R11 to 5.3.3: Full Backward Compatibility - VCS only


Minor Upgrades

• 5.1.2 Ux to 5.3.3: Full Backward Compatibility - VNS/VCS
• 5.2.2/3 Ux to 5.3.3: Full Backward Compatibility - VNS/VCS
• 5.3.2 Ux to 5.3.3: Full Backward Compatibility - VNS/VCS

Note: VNS/VCS 5.2.1 to 5.3.3 minor upgrade is not supported.

4.2 What’s New in 5.2.2

4.2.1 VSD default CSP user permissions

In prior releases, any newly created CSP user was added to the “Operator Group” group by default. This group allows read access to all system data, including data within all organizations. Starting in 5.2.2, the default group for new CSP users is the “Everybody” group, which allows only read access to the CSP enterprise information, users and groups in the enterprise, and modification of the user’s own settings. When upgrading to 5.2.2, all CSP users are automatically added to the “Everybody” group. To create a CSP Operator user, the user must be explicitly added to the “Operator Group” group.

4.2.2 Procedure to apply the VSD patch on a 5.2.2 VSD

On the VSD cluster, follow these steps to apply the patch:
1. Shut down vsd-core on VSD1 using “monit -g vsd-core stop”
2. Ensure that jboss is no longer running
3. On the first VSD, navigate to /opt/vsd/jboss/standalone/deployments and move the current UI WAR file to /tmp
4. Copy the new UI WAR file, “ui-5.2.2_*.war”, to /opt/vsd/jboss/standalone/deployments
5. Start vsd-core on VSD1 using “monit -g vsd-core start”
6. Verify that jboss comes up properly
7. Open the UI from VSD1 to ensure it loads correctly
8. Perform the same steps on VSD2 and VSD3
9. Inform users to restart their UIs, if desired, to pick up the new UI version
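For reference, steps 1 through 7 on a single node can be sketched as follows. The ui-*.war glob for the current WAR file is an assumption (the procedure only says "the current UI WAR file"), and the commands are echoed rather than executed so each step can be verified manually:

```shell
# Echo the per-node patch steps (do not execute blindly).
# Assumption: the deployed UI WAR matches the ui-*.war glob; ui-5.2.2_*.war
# is the patch file name pattern from the procedure above.
step() { printf '+ %s\n' "$*"; }

step monit -g vsd-core stop
step pgrep -f jboss                 # expect no output once jboss is down
step mv /opt/vsd/jboss/standalone/deployments/ui-*.war /tmp
step cp ui-5.2.2_*.war /opt/vsd/jboss/standalone/deployments/
step monit -g vsd-core start
step monit summary                  # confirm jboss comes back up
```

Repeat per node (VSD2, VSD3) only after confirming the UI loads from the previous node.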

4.3 What’s New in 5.2.1

4.3.1 Upgrade of Elasticsearch From 2.2 to 5.5

The Elasticsearch component of VSD statistics has been updated from release 2.2 to 5.5. This requires an upgrade of
any existing Elasticsearch deployed nodes. This procedure is documented as part of the VSP upgrade procedure.
When upgrading from a release older than 5.2.1, due to the Elasticsearch version upgrade (to 5.5 from 2.2), any external client using the Elasticsearch APIs directly could break when reading from or writing to Elasticsearch. Refer to the following document provided by Elastic for the remedy in that case: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/breaking-changes-5.0.html


4.3.2 VSD Certificate Renewal

After a VSP upgrade to 5.2.x, as the last step of the platform upgrade, the VSD certificates used for TLS connections
can be renewed using the scripts provided. This procedure is documented as part of the VSP upgrade procedure.

4.4 Resolved Upgrade-related Issues in 5.3.3

• [VSD-24968] When upgrading the vCenter Integration Node from a 4.0 release to the latest release, the Syslog
Server IP field on the vCenter, Datacenter, Cluster and Hypervisor level might be configured with a ‘null’ string
value. This breaks the VRS Agent bootstrap configuration. Workaround: Verify the Syslog Server IP field
before deploying the new VRS Agent.

4.5 Resolved Upgrade-related Issues in 5.3.2

• [VSD-23860] When a package-based upgrade was attempted with a package that had already been installed, the
services on the VRS Agent were still restarted.
• [VSD-23899] Routing policies created in 5.1.x were not migrated properly during the upgrade to 5.2.1/5.2.2.
This resulted in previously created routing policies not being pushed down to the VSG (policies created in
5.2.1/5.2.2 were not impacted). In cases where deleting and recreating the policy after upgrading was not
possible, Support was to be contacted before upgrading.
• [VSD-23903] The TLS certificate of the OCSP service (the “ocspsigner” entry in the EJBCA list of certificates)
did not get correctly renewed when executing “vsd-renew-cert.sh”; the old certificate was still used. Upon
expiration of this certificate, the VSD is not able to verify client certificates presented to the VSD REST
API and refuses any new TLS certificate-based connections on the REST API. It may also impact connections from
the Nuage VSCs to the VSD if EjabberdMode is set to “allow” or “require”. Please contact Nuage
support for instructions on how to renew the OCSP service certificate.
• [VSD-25167] When upgrading a VSD cluster from 5.1.x, 5.2.x, or 5.3.1 to any version, if the MySQL database
passwords (cnaPwd, ejbcaPwd) were changed using /opt/vsd/install/vsd_password.ini during the
initial VSD installation, the decouple.sh script will fail to decouple the VSD node. Workaround:
Before executing decouple.sh, execute the following command to prevent this issue from occurring:

[root@vsd1 ~]# eval $(python /opt/vsd/password/passwordParser.py)

If you have triggered decouple.sh without executing the above command and it has failed,
execute the following commands:

[root@vsd1 ~]# eval $(python /opt/vsd/password/passwordParser.py)
[root@vsd1 ~]# /media/vsdmigration/decouple.sh -z

4.6 Resolved Upgrade-related Issues in 5.3.1

• [VSD-23878] When performing the TLS certificate renewal for the VSD nodes using vsd-renew-cert.sh,
as described in the last step of a platform upgrade, the script would fail when there were more than 500 entries re-
turned by the EJBCA command /opt/vsd/ejbca/bin/ejbca.sh ra listendentities -S 00
| wc -l. This was generally the case when numerous NSGs were deployed, or when numerous VRSs were
deployed using TLS certificates generated by VSD.
• [VSD-23921] If a VSD upgrade is executed twice or more within 3 hours, the second and later executions
wrongly determine the starting version for the upgrade.


• [VSD-24000] After upgrading VSD to 5.2.2, the event tab on the VSD GUI did not function properly. Workaround:
Execute the following commands from the VSD CLI:

[root@vsd ~]# mysql

mysql> use vsddb
mysql> UPDATE PERMISSION set permission = REPLACE(permission, ':RE', ':READ') where permission like "%:RE";
mysql> exit

4.7 Resolved Upgrade-related Issues in 5.2.2

• [VSD-23904] This issue impacted the VSD build 5.2.2-24. It is resolved with the VSD build 5.2.2-33. If
a VSD was upgraded to 5.2.1 from a previous release (any 4.x/5.x releases older than 5.2.1), a subsequent
VSD upgrade to 5.2.2-24 would fail. This issue affected only a specific upgrade path: 5.x/4.x to 5.2.1 to
5.2.2. When running /opt/vsd/install.sh to migrate 5.2.1 data into a new 5.2.2-24 VSD, the vsddb
upgrade script would fail, and the installation was halted. The VSD install.log would contain the error message:
“java.sql.SQLSyntaxErrorException: Table ‘nuageDbUpgrade.DOMAIN’ doesn’t exist”. This upgrade issue
has been fixed in VSD 5.2.2-33. The VSD 5.2.2-24 may still be used for fresh installation or for upgrades as
long as it is not a multi-hop upgrade via VSD release 5.2.1.
• [VSD-23829] After upgrading the VSD to VSP 5.1.x or to VSP 5.2.1, the Policy Group Expressions used in
ACLs can cause traffic forwarding issues. Moreover, an error occurs when accessing the PG Expression via
API/UI. This has been fixed in VSD upgrade to 5.2.2.

4.8 Upgrade-related Known Limitations

• When upgrading from a release older than 5.2.1, due to the Elasticsearch version upgrade, there is a downtime
on the statistics collection until both the VSD and ES are upgraded to the same release.
• [VSD-19151] When using the upgrade functionality of the vCenter Integration Node to upgrade the VRS Agents
in a cluster, an error can occur in the vCenter events whereby the renaming of a VRS Agent fails. Workaround:
In the vCenter Integration Node, click the Reconnect button on the vCenter.
• [VSD-23408] When executing a VRS Agent image-based upgrade through vCenter Integration Node, changing
the time limit during the upgrade has no impact on deployments that have already started.
• [VSD-24516] Only alphanumerical characters can be used for the internal VSD passwords provisioned via the
vsd_password.ini file.

4.9 Upgrade-related Known Issues

• [VSD-23300-1] VSP product version is shown as 4.0 in the Monitoring console VSD UI when upgrading from
4.x to 5.x release(s).
• [VSD-23361] VSS UI is not accessible after upgrading from any 4.0 release to 5.2.x until turn-on-api is
executed on the VSD node.
• [VSD-23582] When there is a large number of shards, restarting the Elasticsearch service sometimes takes more
than 3 minutes. This can cause a failure of the vsd-es-cluster-config.sh or vsd-es-standalone.sh script executed
on the VSD during the Elasticsearch upgrade phase. If it fails, wait until the cluster health
status becomes green, and then execute the following script: vsd-es-cluster-nuage-setting.sh (for an ES cluster) or
vsd-es-standalone-setting.sh (for a standalone ES). This is described in the Elasticsearch upgrade section.
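A small helper along these lines can be used to wait for the cluster to report green before re-running the setting script (a sketch; it assumes the standard Elasticsearch _cluster/health endpoint on localhost:9200):

```shell
# Retry the given health-check command until the cluster reports "green".
# "$@" is the command to run, e.g.:
#   wait_for_green curl -s localhost:9200/_cluster/health
wait_for_green() {
  until "$@" | grep -q '"status" *: *"green"'; do
    sleep 10
  done
}
```

The grep pattern tolerates both the compact and the ?pretty JSON spacing of the health response.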


• [VSD-24748] The Elasticsearch schema for VSS flows (i.e., “nuage_flow” index) was changed between Nuage
Release 4.0 and Nuage release 5.3.1. Consequently, after upgrading the VSD from release 4.0 to 5.3.1 or 5.3.2,
the VSD stats collector will not be able to write new VSS statistics to the Elasticsearch node for that day because
of the presence of old VSS stats for the same day. Workaround: Remove the nuage_flow stats of the day, or
wait for the next day. To find and remove today’s nuage_flow index:

[root@es-1 ~]# curl -XGET localhost:9200/_cat/indices?pretty | grep flow
yellow open nuage_flow-2018-08-05 hwJmO7khRkiUIBMBdgtrCg 5 1 4586060 0 2.6gb 2.6gb
yellow open nuage_flow-2018-08-04 -WxnB4meQyyLPueRWODkbw 5 1 860174 0 501.7mb 501.7mb
yellow open nuage_flow-2018-08-06 _eMB9uEdTDK4J-q5itTKtw 5 1 4976462 0 3.1gb 3.1gb

Delete the index:

[root@es-1 ~]# curl -XDELETE localhost:9200/nuage_flow-2018-08-06
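The same cleanup can be scripted with the day's index name derived from the current date (this assumes the nuage_flow-YYYY-MM-DD naming shown in the listing); the delete command is printed for review rather than executed:

```shell
# Compute today's flow index name and print the matching delete command.
# Assumes the nuage_flow-YYYY-MM-DD index naming shown above.
TODAY_INDEX="nuage_flow-$(date +%F)"
printf 'curl -XDELETE localhost:9200/%s\n' "$TODAY_INDEX"   # review, then run
```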

• [VSD-25749] During a VSD cluster upgrade, when executing “decouple.sh” on a system that does not have
VSD statistics enabled, the Monit files for VSD statistics are wrongly loaded, which results in the Monit command
output showing entries for statistics components (statscollector, tcadaemon, elasticsearch). Execute “monit stop
-g vsd-stats” on the decoupled VSD to stop the statistics processes before resuming the upgrade process.



CHAPTER FIVE: RESOLVED ISSUES

The following sections list known issues that have been resolved in the stated release. The workaround references
have been retained in the event that you are running an earlier version and encounter the issue.
Please be aware that upgrade-related resolved issues are listed in the Upgrade section of these release notes.
This section contains the following subsections:

• Resolved in Release 5.3.3
• Resolved in Release 5.3.2 U2
• Resolved in Release 5.3.2 U1
• Resolved in Release 5.3.2
• Resolved in Release 5.3.1
• Resolved in Release 5.2.2 U1
• Resolved in Release 5.2.2
• Resolved in Release 5.2.1
• Resolved in Release 5.1.2 U4
• Resolved in Release 5.1.2 U2
• Resolved in Release 5.1.2 U1
• Resolved in Release 5.1.2
• Resolved in Release 5.1.1 U2
• Resolved in Release 5.1.1 U1
• Resolved in Release 5.0.2
• Resolved in Release 5.0.1

5.1 Resolved in Release 5.3.3

• [OpenStack-2320] If an OpenStack Firewall in admin-state down was deleted, the corresponding rules on VSD
were not removed.
• [OpenStack-2308] In OpenStack, if the same router is linked to a firewall using the firewall update command
twice, the firewall status is reported as PENDING_UPDATE rather than ACTIVE.


• [OpenStack-2289] When the number of FWaaS rules retrieved by the OpenStack Neutron plugin from VSD exceeds the maximum page limit of VSD (default 500), it results in a policy rules inconsistency between OpenStack
and VSD.
• [OpenStack-2105] In OpenStack, if a subnet was attached to two routers, the attach was permitted, but some
functions did not work correctly. The current Nuage VSP does not support a subnet attached to two routers. This
is now blocked, so the OpenStack user receives an error message when it is attempted.
• [SROS-18086] Interface type is incorrectly set for system loopback LAG-97 member ports after the reboot of a
WBX node. As a result, packet drops are observed for VXLAN traffic egressing on system loopback LAG-97
if the rate exceeds 10G. Workaround: Manually remove and add the ports on system loopback LAG-97 after
reboot of WBX node.
• [SROS-18074] When BGP comes up before the hold timer starts, all MC-LAG ports with a dynamic service
profile and LAG-98 are held down for 900 seconds after a reboot.
• [SROS-17988] On WBX, in some circumstances, a race condition on SD card mount during an upgrade can
cause the SD card to not be correctly mounted, so the cloud-init.cfg is not accessible and is therefore not
executed.
• [SROS-17934] Deterministic hold-down timers do not work when oper-groups are configured.
• [SROS-17868] On a VSG, VSA, or WBX using DC power, when the power cable is unplugged, the
output of “show chassis power-supply” is incorrect and a wrong SNMP trap is sent.
• [SROS-17862] A BFD static route session is reset for VIP VPorts if there is a controller VSC switchover.
• [SROS-17760] Under rare circumstances, when associating a Floating IP to a Virtual IP, the VSC could enter a
state where it is unable to send additional Floating IP configurations to the VRS hosting the Virtual IP.
• [SROS-17509] On VSG/WBX, after modifying a host VPort IPv6 address, a new IPv6 neighbor entry is installed
while the existing IPv6 neighbor entry is maintained.
• [VRS-16330] In extremely rare cases, when jESXmon changes the VLAN of a port on the Nuage dvSwitch
as a VM is activated (either live vMotion or power on), vCenter confirms the VLAN change to be successful
but is unable to push that change to the ESXi host on which the VM is running. This results in jESXmon
and OVS expecting the VM traffic to arrive on the configured VLAN (as vCenter confirmed the VLAN change),
while ESXi does not apply that VLAN to the traffic from that VM. As a result, the VM does not have connectivity.
Resolution: jESXmon is now more robust and only applies the VLAN change when it is confirmed that ESXi can
accept the change.
• [VRS-16298] In rare cases, the VRS Agent bootstrap procedure does not write the Secondary uplink underlay
ID to the proper location in the OpenvSwitch configuration file, resulting in a misconfiguration of the Dual
VTEP functionality.
• [VRS-16080] In rare scenarios when VMs are starting up, an error can occur that leaves the nuage-metadata-agent
unable to process metadata requests. Workaround: Restart the nuage-metadata-agent.
• [VRS-15808] When the VRS Agent bootstraps, it has to find itself in vCenter; this process can take longer when
more VMs are running in vCenter.
• [VRS-15389] In some cases, openvswitch on VRS/VRS-G was seen to remove short-lived flows while traffic
was still active. This could cause intermittent traffic drops and latency, especially when stateful ACLs are used.
• [VRS-15149] When the VM metadata is changed for Subnet, Zone, policy group, and/or redirection target
in quick succession, the VM gets deleted from the subnet and does not get resolved in any other subnet.
Workaround: Move the VM to another subnet and then back to the desired subnet.
• [VSD-26116] While a VRS Agent is deploying or being upgraded, an exception might be visible in the vsd-
server.log file on the vCenter Integration Node about a VRS Updating Metrics Task. This error can be safely
ignored.


• [VSD-25813] In the vCenter Integration Node UI, when enabling the secondary data path feature at the
datacenter or cluster level, the primary controller field for the secondary data path asks for a netmask value
(which is not needed). Workaround: Set this value to 32.
• [VSD-25788] In the vCenter Integration Node, when disabling the Secondary Data Uplink fields and trying to
save the configuration, an error might be shown that the interface value is invalid. Workaround: Disable the
feature without emptying the fields, or use the API to unset the fields.
• [VSD-25642] When the VRS Agent metrics have not been received by the vCenter Integration Node, a null
pointer exception might be observed in the vCenter Integration Node logs. This exception can be safely ignored.
In the vCenter Integration Node UI, the icon for receiving metrics also remains green in this case.
• [VSD-25724] In the zfbinit.py.log file of the eVDF bootstrap agent, incorrect padding and new lines are present
in single log messages.
• [VSD-26318] The eVDF License status field is empty in the VSD Sysmon interface. This does not impact
functionality.
• [VSD-26219] The eVDF Gateway template in VSD shows the personality as VRS-G instead of EVDF.
• [VSD-26214] An eVDF node uses both a VRS and a VDF license, while it should only use a VDF license.
• [VRS-17580] When the auto-scale feature is enabled for Kubernetes or OpenShift, the automatic scale down
will occasionally not remove a subnet from VSD after all pods have been deleted because of a missed stale entry
in etcd.
• [VRS-16887] In rare occasions, when a deployment in Kubernetes or OpenShift is rapidly being scaled in and
scaled out, a duplicate IP might get assigned to two Pods.
• [VRS-16853] The subnet of an NSG bridge port is advertised as a route on the underlay of the eVDF node, with
the next-hop the OpenvSwitch svc-pat-tap interface. This causes underlay traffic of the eVDF node to be routed
into OpenvSwitch, which is not desirable.
• [VRS-16828] When a VRS pod with eVDF enabled is restarted on OpenShift, the IPSEC policies are deleted
and do not properly recover, causing traffic between eVDF nodes to fail.
• [VRS-16823] When an eVDF node loses its connection to a VSC, the BGP configuration is removed from the
eVDF node.
• [VRS-16723] An error message indicating that an eVDF node is unable to retrieve system-timer information
from DB can be seen in the nuage-Sysmon.log. This message can safely be ignored.
• [VRS-16665] After several automatic scale up and scale down of subnets in Kubernetes or OpenShift, an error
message can be observed in the nuage-openshift-monitor.INFO log stating “Cannot find subnet with
ID {ID}”.
• [VRS-16563] When OpenShift is installed in multi-master mode, OpenShift-Ansible does not configure
HAProxy to also load-balance the Nuage CNI Monitor on port 9443. Workaround: Manually configure
HAProxy to load-balance port 9443 to each master node.
• [VRS-16304] The eVDF node always uses eth0 as its uplink interface and disregards the nw_uplink_intf setting
in the OpenShift nodes file.
• [VRS-16150] When deploying OpenShift with eVDF, connectivity between an NSG and an OpenShift node
fails because of a misconfiguration of the encryption keys.
• [VRS-16144] The BGP PE-CE feature does not function properly for OpenShift integrations because of a missing
interface in the BGP namespace.
• [VRS-16140] When OpenShift or Kubernetes is deployed in a multi-master mode, the multiple masters can
cause a transaction lock, causing some ACLs not to be applied on the domain.


• [VRS-16082] In an OpenShift and Kubernetes deployment, the resolution of external DNS hostnames fails
because of a missing ACL. Workaround: Implement an egress ACL allowing traffic from the Internet Policy
Group with a source Port 53 to any.
• [VRS-16054] When deploying OpenShift in combination with eVDF, Node Port functionality is not available.
This specifically impacts the OpenShift Registry Pod’s functionality.
• [SROS-18348] When the BGP PE-CE Autonomous System value is changed from the default 65500 value, it
disconnects the MP-BGP connection between VSCs and VRSs and does not recover, as the wrong AS is
used.
• [SROS-18238] When a vPort is deleted in VSD which has BGP configured, the BGP peer entry in VSC does
not get deleted automatically.
• [SROS-18160] VSC will continuously send the gateway configuration for each eVDF node to VSD.
• [SROS-18135] WBX crashes after creating 64 SAPs on different ports within a single VPLS service. This has
been resolved.
• [SROS-17865] When a workload behind an NSG tries to communicate with a vPort on OpenShift running
eVDF, the traffic does not arrive on the correct OpenShift node because of missing /32 routes.
• [SROS-17854] The VSC CLI output of the command “show vswitch-controller vswitches” listed the
Vswitch-instances with prefix “va-va-” instead of prefix “va-”. This could also cause the Vswitch-instance
to be truncated.
• [SROS-17724] If one of the access LAG ports is down when the VPort is created, the MC-LAG active/active host
VPort might point to lag-98 on one of the MC-LAG nodes instead of the SAP. Workaround: Run vswitch
shutdown/no shutdown.
• [SROS-17689] On WBX/VSG, ARP reply handling has been improved by adding a dedicated queue for such
packets.
• [SROS-16432] If the VSC receives a large number of BGP messages, it buffers the messages for processing.
If the buffer exceeds 20,000 messages, BGP-related messages/events are dropped.
• [OPENSTACK-2309] If the OpenStack Neutron Plugin was unable to remove a FWaaS rule from a policy in
VSD after multiple retries, it would erroneously delete the rule from OpenStack, leaving a dangling rule on
VSD. This has been corrected.
• [SROS-18241] In some cases, the BFDv6 endpoint can use a stale interface index even if the EVPN interface
index has changed, resulting in erroneous neighbor solicitations being triggered and the BFD session flapping.
• [SROS-18229] On VSG/WBX, after LACP failure the MACs are not synced when LACP fallback kicks in.
• [SROS-17894] Syslog originated packets from WBX use the SROS VM IP 169.254.1.2. Workaround: Con-
figure log syslog log-prefix “YOUR-ROUTER-ID-HERE”.
• [SROS-15695] If a ping is run on the NSG from the VSC, through the tools command, that ping process would
never get killed on the NSG. This has been corrected.
• [OPENSTACK-2327] If an OpenStack FWaaS firewall is in admin down state and a new router is associated to
the firewall, then a deny-all rule corresponding to that new router is not added in the VSD.
• [SROS-17787] The output of “show vswitch-controller vswitches detail” showed an incorrect IP address as the
connection IP of a VRS/NSG.
• [SROS-18550] VSG/WBX might not prefer a local next-hop for a static route if the next-hop is an indirect
next-hop of another static route.
• [VRS-17808] If a Virtual IP is created on a vport with the same MAC as the vport, and the VIP is subsequently
deleted, the MAC route for the vport is erroneously removed. This has been corrected. Workaround:
From the VSC, use “clear vswitch-controller vport” on the affected vport.


5.2 Resolved in Release 5.3.2 U2

• [SROS-17554] When doing an admin reboot now from the SROS VM on WBX, the console showed some
errors.
• [SROS-17687] On VSG/WBX, a failure in the underlay when enable-peer-tracking was configured caused the
iBGP EVPN overlay session to bounce, even if there were additional routes due to ECMP.
• [SROS-17885] When there were no VRSs/NSGs connected, all connected VSCs were reported down in the
VSD GUI monitoring console. This was only a UI issue; VSC functionality was not impacted. Workaround:
Restart the XMPP connection to bring it back to the normal state in the UI.
• [SROS-17889] IPv6 BFD endpoint creation will fail if the interface IP is using the unique local fd00::/8 range,
resulting in the BFD session staying down for that interface.

5.3 Resolved in Release 5.3.2 U1

• [SROS-16725] On WBX, ping did not work when do-not-fragment was set for packets with a size larger than
1956 bytes.
• [SROS-17715] On a VSC switchover to the standby VSC, BFD sessions for static routes were re-initialized.
• [VRS-13331] VMs hosted on a VRS could not ping the gateway for a remote subnet if that subnet and VM were
located on an AVRS.
• [VRS-14114] On NSGs that have dual PHY (i.e., “combo port” with 1000BASE-T or optical SFP) for the uplink
connection, the ethtool command would always report the connection as “Twisted Pair” even when an optical
connection was being used.
• [VRS-15223] On VRS, the CLI nuage-bfd-show session-detail <vrf id> incorrectly displayed
the admin state of the BFD session as <NULL> instead of Up. Operational state was correctly displayed.
• [VRS-15385] On VRS using BFD static routes, when a host on a VPort goes away and a host with the configured
VIP on that VPort comes up with the same MAC address as the earlier host, there are occasions when table
entries for the earlier host are not cleaned up properly on the VRS/VRS-Gs and on the VSC.
• [VRS-15387] On VRS with BFD enabled on static routes having the VIP as the next-hop, when the VIP becomes
active on a different VPort, the BFD session does not move to the new VPort as expected.
• [VSD-25675] The VSD Architect UI displays the word ‘Debug’ in the version number. This does not reflect the
VSD Version itself and does not cause extra logging or extra information to be stored. This also does not impact
the UI functionality or behavior. It can be safely ignored.

5.4 Resolved in Release 5.3.2

• [OpenStack-1807] Nuage-metadata-agent was not supported with cloudbase-init.


• [OpenStack-2200] The OpenStack plugin supports communication with VSD using IPv6 addresses. However,
the VSD IPv6 address could not be mentioned directly in the plugin configuration file (plugin.ini) due to parse
errors. The workaround was to use the hostname in the plugin.ini file.
• [SROS-16224] Manual L2 EVPN configuration allows configuring qinq SAPs, even though they are not sup-
ported. User must not configure qinq SAPs.

• [SROS-16449] On WBX/VSG, in scaled-up setups, high volumes of BGP updates and withdrawals in rapid
succession might have caused an assertion in the IOM, which would have caused the device to hang. The
workaround was to restart the system.
• [SROS-16479] BGP PE-CE on VRS is now supported when using Dual VRS VTEPs.
• [SROS-16690] WBX SROS could have become unreachable after performing an admin reboot on the
SROS-VM. The workaround was to reboot the WBX.
• [SROS-16812] To troubleshoot correlation and synchronization issues between Router Table Manager and BGP
on VSC, fast traces logs between RTM callbacks and BGP logs that generate rib in changes are available by
CLI: tools dump bgp fast-trace messages.
• [SROS-16876] BGP PE-CE VRS IPv6 peers do not come up with stateful ACLs. Workaround: Do not use
stateful ACLs.
• [SROS-17015] Sometimes the SD card was not detected on the WBX (no OS installed), causing it to enter a con-
tinuous boot loop. The workaround was to extract the SD card, power-cycle the WBX, and insert the SD card
again.
• [SROS-17030] When an NSG was configured with an access-side BGP peer, the NSG would incorrectly share
BGP IPv4 family routes with the VSC. The only routes that should be shared between NSG and VSC are BGP
EVPN routes.
• [SROS-17244] When the role (primary/secondary) of an uplink is changed while the OpenFlow connection is
down, ingress traffic on this uplink could be dropped after the OpenFlow connection is restored, if the NSG has
controllerless mode enabled with remote forwarding. Workaround: Reboot the NSG.
• [SROS-17274 / VSP-2769] Due to a poor optical signal on a WBX port using a fiber connection, the link
could continuously flap, causing the port to become operationally down/up and affecting control plane functionality.
An internal detection method has been implemented to re-initialize the port if low optic readings are detected,
until a good measurement is obtained.
• [SROS-17322] A VSC will no longer have inactive MP-BGP sessions towards NSGs without OSPF or BGP
configuration on the access side. When an NSG with OSPF or BGP configuration on the access side already
has two active MP-BGP sessions with other VSCs, there will still be an inactive MP-BGP session on other VSCs.
• [SROS-17373] Management Access Filters (MAF) were not working for out-of-band connections to WBX.
• [SROS-17417] When BGP peers were using the system IP and there were several paths available for ECMP, the
BFD session did not come up.
• [SROS-17453] When the WBX VM is in the boot sequence and loading both.tim, pressing any random key can
cause the VM to hang.
• [SROS-17479] In an eBGP EVPN overlay setup with two route reflectors (RR), both advertised the MAC/IP
routes of the VPorts with next-hop as self. The receiving peers (VSG/WBX) might have installed mac-route
from one of the RRs and ip-route from other RR causing a mismatch and traffic destined to this VPort was
dropped.
• [SROS-17536] The status of a VSG or WBX in the VSD monitoring console might have been displayed as down
after running commands to reboot or restart certain components on the VSG or WBX.
• [SROS-17662] Removing the last port from LAG-16 on the WBX used to cause an assertion in the IOM which
would cause the device to hang. The workaround was to issue a systemctl restart vsgvm.
• [VRS-11145] When redeploying the nuage-node daemonset, the VRS might lose connectivity to the VSC.
Workaround: Execute the following commands inside the VRS pod: ovs-vsctl add-controller
alubr0 ctrl1 tcp:<controller-1-IP>:6633 and ovs-vsctl add-controller alubr0
ctrl2 tcp:<controller-2-IP>:6633.
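The two re-add commands above can be wrapped in a small helper. This is a minimal sketch of the VRS-11145 workaround, assuming placeholder VSC addresses (10.0.0.11 and 10.0.0.12) and the OpenFlow port 6633 from the note; it prints the commands as a dry run rather than executing them inside the pod.

```shell
# Sketch of the VRS-11145 workaround: re-add both VSC controllers to the
# alubr0 bridge inside the VRS pod. The controller IPs are placeholders
# for the VSCs in your deployment.
build_controller_cmds() {
    ctrl1_ip="$1"
    ctrl2_ip="$2"
    # Emit the two ovs-vsctl commands from the release note (dry run).
    echo "ovs-vsctl add-controller alubr0 ctrl1 tcp:${ctrl1_ip}:6633"
    echo "ovs-vsctl add-controller alubr0 ctrl2 tcp:${ctrl2_ip}:6633"
}

# Print the commands to run inside the VRS pod; pipe to sh there to apply.
build_controller_cmds 10.0.0.11 10.0.0.12
```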
• [VRS-11924] The sysmon information of Hyper-V VRS nodes lacks several items of information.

• [VRS-12116] Access to Docker registry and router pods would fail unless underlay support was enabled on the
domain.
• [VRS-12322] The infra pod version number was not dynamically updated based on the release. The
Workaround was to update nuage-infra-pod-config-daemonset.yaml to reflect the correct ver-
sion.
• [VRS-12602] The vsp-openshift.yaml file might have been missing from an OSE node. Workaround: If
pods failed to resolve and you noticed missing vsp-k8s.yaml errors in /var/log/nuage-cni.log
on the nodes, delete and redeploy the node daemonset (oc delete -f
/etc/nuage-node-config-daemonset.yaml followed by oc create -f
/etc/nuage-node-config-daemonset.yaml).
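The delete-then-recreate step for VRS-12602 can be expressed as a small helper. This is a sketch, assuming the manifest path given in the note; it prints the oc commands as a dry run so they can be reviewed before being executed on the cluster.

```shell
# Sketch of the VRS-12602 workaround: delete and recreate the Nuage node
# daemonset when vsp-openshift.yaml / vsp-k8s.yaml is missing on a node.
# The manifest path defaults to the one from the release note.
redeploy_node_daemonset() {
    manifest="${1:-/etc/nuage-node-config-daemonset.yaml}"
    # Emit the two oc commands (dry run); pipe to sh to apply for real.
    echo "oc delete -f ${manifest}"
    echo "oc create -f ${manifest}"
}

redeploy_node_daemonset
```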
• [VRS-12964] Automatically generated ACLs in the Kubernetes integration for each namespace were not state-
ful.
• [VRS-13000] When a VRS Hostname is changed and a reload of the VRS Agent configuration is attempted, the
hostname change is not applied. Workaround: Reboot the VRS instead.
• [VRS-13421] When the first attempt to bootstrap the VRS Agent failed, the subsequent retry attempts might
have failed because of a dormant DHCP client. The Workaround was to reboot the VRS Agent manually.
• [VRS-14172] When split activation was used and a VM in vCenter had a UUID in its name, VRS would
erroneously use that UUID instead of the proper vm.config.uuid for split activation. The Workaround was
to avoid creating a VM in vCenter with a UUID in its name.
• [VRS-14569] For underlay breakout use case, VRS used a raw socket on the hypervisor to perform GARP upon
initial configuration or on VM migration. This was done for every FIP that used the underlay. These raw sockets
were not cleaned up, leading to memory leaks and socket exhaustion.
• [VRS-14626] When a VM was live-migrated away from a hypervisor, the policy group ACL and Ingress QoS
configurations were not cleaned up correctly. This caused a minor memory leak in the ovs-vswitchd process.
• [VRS-14703] With VSS enabled, when an RST packet was sent in response to a SYN packet, the direction of
the first RST packet in the VSS flow statistics in Elasticsearch was classified incorrectly.
• [VRS-14722] After a non-Nuage managed VM was vMotioned between hypervisors, renamed and vMotioned
again, the JESXmonitor could stop processing VM events.
• [VRS-14906] When the ESXi Monitor on the VRS Agent skipped processing an irrelevant event, it could have
triggered the “Skip processing of an event” message multiple times.
• [VSD-19794] When using the API to instantiate from the L3 domain template, the encryption setting was not
inherited.
• [VSD-23221] The Receiving metrics button in VCIN did not automatically change colour on events unless the
tab was refreshed.
• [VSD-23469] When the VRS Agent was not yet deployed, the vCenter Integration Node would show invalid
values in the Current Version and VRS Agent Name fields.
• [VSD-23482] The advanced search functionality of the VSD GUI did not work for date-based filters like the
Creation Date or Last Update Date. Workaround: Enter the full search string in the text box, for example:
creationDate < “01/17/2018 10:50:11 GMT-08:00 (PST)”.
• [VSD-23947] Sometimes system monitoring update messages do not get processed correctly by VSD due to
database deadlocks. A possible impact of this issue is VSC status incorrectly changing to “DOWN” in the
Monitoring Dashboard.
• [VSD-23975] In VCIN, if at cluster level ‘Require metadata’ was selected and the ‘Use Portgroup Metadata’
or ‘Multi VM Support’ checkboxes were changed, the Update button did not become selectable in the UI. The
Workaround was to use the API to update these settings.

• [VSD-23977] In the cluster view, in the vCenter Integration Node UI, the ‘Use Portgroup Metadata’ and ‘Mul-
tiVM support’ might have incorrectly shown as selected.
• [VSD-24010] When a VRS Agent was marked as available by VCIN, the UI might not have automatically
updated. The Workaround was to manually refresh the UI.
• [VSD-24094] When upgrading from 4.0, the Syslog server field gets configured as “null” at the vCenter level,
causing an issue during VRS deployment. Workaround: Remove the “null” value from the field on the vCenter
level before deploying or upgrading the VRS.
• [VSD-24304] When flow-logging for an ACL/Forwarding rule is enabled, the performance of the VRS or NSG
can be reduced. A warning has been added to UI and API responses to indicate this when flow-logging is
enabled.
• [VSD-24437] Fixed an inconsistency when reporting the DNS timeout value while using the following VSD
installation script /opt/vsd/sysmon/dnsStatus.py.
• [VSD-24574] Through the VSD REST API, it was possible to create a VLAN on an individual device of a
redundant VSG port. This VLAN could not be updated or deleted as it should have been created on the redundant
port and not on the individual port.
• [VSD-24846] In previous releases, changing the VSD passwords “keyStorePwd” and “trustStorePwd” to cus-
tom passwords during the VSD installation (using the “vsd_password.ini” file) would break the VSD alarm
generation by the TCA statistics service. The workaround for previous releases is to leave those passwords to
the default ones. This issue has been fixed in VSD 5.2.3 and VSD 5.3.2.
• [VSD-24922] The VSD memory usage could have increased because of a large number of TCP connections to
port 7443 that were left in the CLOSE_WAIT state.
• [VSD-24955] When the collectLog.sh script is run on VSD, it might fill the /tmp partition if default
arguments are used. The script now returns an error message if there is not enough space available.
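Since collectLog.sh now errors out when /tmp is too small, an explicit pre-check can avoid a failed run partway through. This sketch is illustrative only: the 2 GB threshold is an assumption, not a documented requirement of the script.

```shell
# Illustrative pre-check for VSD-24955: verify a partition has enough free
# space before running collectLog.sh on the VSD. The 2 GB threshold is an
# assumption for this sketch, not a documented requirement.
tmp_has_space() {
    need_kb=$((2 * 1024 * 1024))                     # 2 GB expressed in KiB
    # Column 4 of POSIX df -Pk output is available space in KiB.
    free_kb=$(df -Pk "${1:-/tmp}" | awk 'NR==2 {print $4}')
    [ "${free_kb}" -ge "${need_kb}" ]
}

if tmp_has_space /tmp; then
    echo "enough space in /tmp for collectLog.sh"
else
    echo "free up /tmp before running collectLog.sh"
fi
```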
• [VSD-25135] Using the import functionality of the VSD UI to import an L2 domain would result in the error
“DTO list to be created is set to empty.” This prevented the import of an L2 Domain.

5.5 Resolved in Release 5.3.1

• [OpenStack-1809] When creating a VSD-managed subnet in OpenStack and when the gateway specified in
OpenStack does not correspond to the gateway pre-set up in VSD, the specified allocation pool(s) in OpenStack
are lost and restored to the entire CIDR, and the gateway in OpenStack is overwritten with what was set in VSD.
Note that in the case of L2Domains, the gateway set in VSD is set by DHCP option 3 (“Router”) (not by what is
called ‘gateway’ in the API), while in the case of an L3 Subnet it is set as the gateway set in the API. Specifically
in the case of the SRIOV Duplex L2 feature, this has to be taken into careful consideration when provisioning
the deployment.
• [OpenStack-2021] When creating a VSD-managed subnet in OpenStack and when the gateway specified in
OpenStack does not correspond to the gateway pre-set up in VSD, the OpenStack request is not rejected and the
behavior is described as part of OpenStack-1809.
• [OpenStack-2158] In an SR-IOV deployment, the default ACL rules in an endpoint domain should use ANY as
the source and destination option.
• [SCVMM-51] When the SCVMM plugin window was left open for longer than 15 minutes without interaction,
it might have crashed.
• [SCVMM-54] After installing SCVMM Nuage addin, login to VSD might have failed the first time.
• [SROS-16081] During creation of manual L2 EVPN services, while creating default SAP both 4095 and * are
allowed, but while deleting only * is allowed.

• [SROS-16623] When using metadata-based activation for VMs in a VMware environment, when a VM was
shut down for a longer period of time, the VM and its VPort would be deleted from the VSD, resulting in the
loss of custom configurations done through the VSD. Workaround: Use the nuage.delete-mode metadata field
and configure its value as “TIMER” and set the nuage.delete-expiry metadata field to a value appropriate for the
environment. This expiry time is in seconds and controls how long a shutdown VM remains in the VSD before
it is cleaned up.
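The metadata workaround for SROS-16623 can also be applied from the command line. The govc invocation below is an assumption (one common way to set VM ExtraConfig in vCenter), not the documented procedure; the VM name and the 86400-second (24-hour) expiry are placeholders, and the commands are printed as a dry run.

```shell
# Sketch of the SROS-16623 workaround: set nuage.delete-mode and
# nuage.delete-expiry metadata on a VM. The govc calls are an assumed way
# to set VM ExtraConfig; VM name and expiry value are placeholders.
print_delete_timer_cmds() {
    vm="$1"
    expiry_sec="$2"    # seconds a shut-down VM remains in the VSD
    echo "govc vm.change -vm ${vm} -e nuage.delete-mode=TIMER"
    echo "govc vm.change -vm ${vm} -e nuage.delete-expiry=${expiry_sec}"
}

# Keep a powered-off VM in the VSD for 24 hours before cleanup.
print_delete_timer_cmds app-vm-01 86400
```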
• [SROS-16685] Ports 1/1/1-1/1/36 on the 210 WBX 48s may have shown incorrect port LED status.
• [SROS-17038] In some cases flapping BFD sessions could cause a crash on the WBX.
• [SROS-17077] After an XMPP connectivity flap on a VSC, alarms containing “Bad MAC format” would be
raised on VSD. These alarms were erroneous and could be safely ignored.
• [VRS-12270] When VMs are being migrated to a host before the VRS Agent has been fully booted, some of the
VRSs might not get proper connectivity. Workaround: vMotion the VMs without connectivity to a host with a
fully running VRS.
• [VRS-12446] When booting or restarting a VRS Agent, the boot sequence might have hung for up to 1.5 minutes
when restarting OpenvSwitch.
• [VRS-12578] When the hostd service on an ESXi hypervisor was restarted, the Nuage VRS Agent did not
capture all events required to activate VMs.
• [VRS-12606] Open vSwitch on Hyper-V listened to port 6640 on all IPs allowing OVSDB connections from
outside the Hyper-V host. Workaround: Use Windows firewall to block outside connections to port 6640.
• [VRS-12744] The Hyper-V VRS could not be uninstalled when it was installed in a location other than the
default location.
• [VRS-12745] When changing the vCenter username and password in the vCenter Integration Node, a redeploy-
ment of the VRS was required to recognise the new username and password.
• [VRS-12798] When the VRS Container was deployed in OpenShift or Kubernetes, the stats forwarder was not
automatically started.
• [VRS-12800] In a VMware VRS Agent, the DKMS module LRO-MOD improvement could generate over-sized
TCP segments towards MPLSoGRE.
• [VRS-12858] During the bootstrap of the VRS Agent, the inner VRS monitor service was restarted twice, which
could cause the restart of other services before they were properly configured.
• [VRS-12924] When a specific VRS Hostname is not configured for a VRS Agent, the VRS Agent name is not
updated in the vCenter Integration Node.
• [VRS-12978] In some cases, the VRS Agent bootstrap process might have hung while hot-adding CPU or
memory. Workaround: Reboot the VRS Agent
• [VRS-13131] An advanced forwarding policy rule which also included forwarding class override would not be
correctly applied to VRS. Traffic would be forwarded as if this rule did not exist.
• [VRS-13506] The ESXi monitor on the VRS Agent could stop logging to its own log file and start logging in
the /var/log/messages file instead.
• [VRS-13546] When a guest VM on ESXi was powered off, the ESXi monitor would wrongfully send a delete
event to Open vSwitch before the power off, potentially causing the VM to be removed from VSD too early in
the case of metadata-based activation.
• [VSD-20293] When an interface that has been configured with Nuage metadata is removed from a VM in
vCenter and metadata is applied to the remaining interfaces using the Web Client metadata plugin, stale metadata
might remain for the removed interface. This does not impact VRS functionality. Workaround: Remove the
metadata manually.

• [VSD-23453] When the Secondary Data Uplink fields are configured in the vCenter Integration Node and the
Secondary Data Uplink is disabled, the fields are emptied of all content. Workaround: After you enable the
Secondary Data Uplink, enter the information again.
• [VSD-23620] No alarms would be generated for BGP_NEIGHBOR-related events on VRS/VRSG and thus no
alarms were sent to VSD.
• [VSD-23723] When an upgrade of the VRS Agent was attempted without changing the OVF URL first, the status
of the VRS Agent in the vCenter Integration Node would get stuck in the ‘UPGRADING’ state. Workaround:
Provide a new OVF URL before attempting an upgrade.
• [VSD-23808] Under Platform Configuration > Settings > Infrastructure BGP PE-CE, the AS should have been
within the range 1 - 4294967294. However, a value of 0 could be set.
• [VSD-23837] When a package-based upgrade is taking longer than expected, the upgrade status is shown as
timed out in VCIN. This status might disappear after 10 seconds while the upgrade is still continuing.
• [VSD-23958] The restriction for a hardware gateway whereby a VLAN was mapped strictly to a VNID when
selecting “Per Domain VLAN enable” (i.e., one-to-one mapping) has been relaxed, but only for untagged bridge
VPorts.
• [VSD-24149] In VCIN, the Syslog server type, CPU count, and Memory size fields do not properly inherit their
value from the vCenter level.
• [VSD-24269] The receiving metrics indication on a VRS Agent in the vCenter Integration Node did not change
to red after reaching the configured threshold.

5.6 Resolved in Release 5.2.2 U1

• [OpenStack-2119] When two baremetal servers are booted back to back and are configured with VLAN aware
ports (VLAN trunking) with parent ports and sub-ports using independent security groups, the second baremetal
boot up fails.
• [OpenStack-2120] Access to horizon dashboard fails with an internal server error.
• [VSD-23960] When viewing a list of items, the list is limited to 50 items and the search functionality is not
available. Workaround: Use the API to get the full list of the items.

5.7 Resolved in Release 5.2.2

• [OpenStack-2113] Topology collector failed to recognize ports 1/1/64 and 1/2/64 on a 210-WBX gateway.
• [SROS-15012] Static route configured on a single homed bridge VPort in one VSG MCS peer was not being
installed on the other VSG MCS peer.
• [SROS-16122] Per VPort mirroring ingress AND egress in manual EVPN services is now supported. In 5.2.1
support was ingress OR egress.
• [SROS-16298] WBX nodes (leaf and/or spine) could have crashed and rebooted when the number of EVPN
host routes exceeded 40K. Workaround: Keep the total number of routes below 40K.
• [SROS-16308] When an egress mirror is configured for an LACP-enabled LAG, the VSG-sourced LACP mes-
sages were forwarded without any egress encapsulation, which could cause the link to flap if it were forwarded
out another LACP-enabled LAG.
• [SROS-16459] Fan speed algorithm has been improved to allow better handling of overall system temperature
under more extreme conditions.

• [DOC-1810] When metadata is used to activate a VM and the VM was shut down, the VM was automatically
removed from VSD after a timeout. This timeout is now configurable. See the VMware Integration guide for
more details on the new metadata field to manage this behaviour.
• [VRS-6534] If the ovs-vswitchd process is restarted by the monitoring service, VMs already on the hypervisor
do not resolve.
• [VRS-7599] The Hyper-V VRS MSI installer and uninstaller do not support the Change or Repair options.
• [VRS-8406] The Nuage Hyper-V VRS does not support VMs with the same name on a single Hyper-V host.
Workaround: Rename VMs on a single Hyper-V host so that their names are unique.
• [VRS-8928] On Hyper-V, large packets are dropped when jumbo frames are not enabled. Workaround: To
allow large-sized packets, enable jumbo frames.
• [VRS-10608] On Hyper-V VM, the first ping packet might get dropped when VLANs are enabled.
• [VRS-11593] When attempting to change the configuration of the Hyper-V VRS installation by selecting repair,
an error is thrown.
• [VRS-11600] In a high load VMware environment, when a VM was live vMotioned to a different hypervisor,
the origin VRS might have sent out a delayed VM modified event a long time after the vMotion had completed,
causing the VM to be unresolved and lose connectivity.
• [VRS-11608] When stateful ACLs are used in combination with AVRS, the traffic matching the stateful ACL
will not work for the initial 20-30 seconds. Workaround: Do not use Stateful ACLs when using AVRS.
• [VRS-11711] The Hyper-V NuageSvc service did not restart automatically.
• [VRS-11785] Once the Dual VTEP feature is enabled on a VRS, it can not be disabled without a redeployment.
• [VRS-11998] The VRS Agent might stop reporting metrics. Restarting the inner monitor resolves the issue.
• [VRS-12048] When metadata was used to activate a VM and the VM was shut down, the VM was automatically
removed from VSD after a timeout. This timeout is now configurable. See the VMware Integration guide for
more details on the new metadata field to manage this behaviour.
• [VRS-12073] The NodePort on a master node of Kubernetes did not work.
• [VRS-12224] Hyper-V VMs with checkpoints could not be resolved properly.
• [VRS-12315] OpenvSwitch was listening for OVSDB connections on external-facing interfaces.
• [VRS-12462] The VRS Agent Inner monitor might stop reporting metrics to the vCenter Integration Node for a
short period of time while it is updating redeployment information.
• [VSD-16778] No validation is done on the vCenter IP/FQDN when deploying the Nuage vCenter Web-Client
Metadata plugin.
• [VSD-21020] If a VRS Agent was deployed and removed from an ESXi host, the current version field in VCIN
still showed the old version of the VRS instead of showing nothing.
• [VSD-22769] If remote syslog functionality was configured for a VRS Agent and the Syslog Server Type was
changed to ‘NONE’, the remote syslog functionality would still be configured using type TCP. Workaround:
Empty the Remote Syslog Server IP and Port fields as well.
• [VSD-22772] In the vCenter Integration Node, validation was missing for the Syslog Server IP and Syslog
Server Port.
• [VSD-23359] The vCenter Integration Node wrongly showed the fields for Secondary Data Uplink IP and
Netmask on the vCenter, Datacenter, and Cluster level.
• [VSD-23383] When the Metrics Push Interval was configured on the vCenter level, it might not have been
inherited by the Datacenter level. Workaround: Configure the Metrics Push Interval on the Datacenter level.

• [VSD-23417] When a hypervisor was removed from vCenter and readded, the vCenter Integration Node would
lose its link to it as it changed MOID. This is fixed and the link can be re-established by executing a resync
between VCIN and vCenter.
• [VSD-23431] It was possible to assign an Underlay ID on a domain of tunnel type GRE, which is not supported.
• [VSD-23443] Clients of the REST push channel got disconnected before the “Inactive Timeout” expired, which
sometimes caused VSD Architect to disconnect.
• [VSD-23623] When configuring csproot authentication to use LDAP, the validation of the LDAP login using
the csproot user would ignore failures, allowing csproot to always log in, even with the wrong password.
• [VSD-23631] The NSG had a hard-coded limit of 32 DHCP address ranges, and therefore crashed when attempts
were made to create more.
• [VSD-23829] Policy Group expressions configured in 5.1.1 might not have passed traffic correctly after VSP
upgrade. A VSD error when accessing the PG Expression via API/UI would also occur.

5.8 Resolved in Release 5.2.1

• [OpenStack-2027] On RHEL 7 systems with SELinux enabled, spurious log messages “ovs-
vswitchd ovs|#####|netlink_socket|ERR|fcntl: Permission denied” and audit logs “avc: denied {
create } for pid=#### comm=”ovs-vswitchd” scontext=system_u:system_r:openvswitch_t:s0 tcon-
text=system_u:system_r:openvswitch_t:s0 tclass=netlink_xfrm_socket” appeared.
• [SROS-14863] With an active/standby VSD cluster, there might have been a delay before VSC connected to
VSD.
• [SROS-14892] US-CERT vulnerability VU#793496 “Open Shortest Path First (OSPF) protocol implementa-
tions may improperly determine LSA recency” (https://www.kb.cert.org/vuls/id/793496) has been resolved for
VSG/WBX/VSC/NSG.
• [SROS-15062] When OpenFlow connection with controller went down and NSG BGP detected this change,
NSG BGP removed the controller config to bring down the BGP session with controller. Sometimes, if the
controller OpenFlow connection came back up, NSG BGP did not initiate BGP session with controller. This
happened because the previous controller session delete operation was not completed when we tried to create
the new session again.
• [SROS-15419] The VSG was incorrectly advertising /32 routes.
• [SROS-15906] Mac-move was not triggered during a broadcast storm in an L2 Domain.
• [SROS-15921] A timing condition existed where an IP could become unreachable in scaled setups with high
volumes of BGP updates and withdraws in rapid succession. This would result in a “poll_stale_arp_entry” error
in the logs on the VSG.
• [SROS-16039] After a reboot of the 7850 VSA a port may have been operationally down until the port was
shutdown and brought back up.
• [SROS-16187] When an MC-LAG peer was rebooted there was a timing window where mac-move could be
triggered on the MC-LAG vPort while the lag-98 interconnect was still down due to hold-timers.
• [SROS-16428] When rebooting the WBX or reseating a 100G QSFP the QSFP may not have been detected.
• [SROS-16464] When graceful disconnect between VSC and VSD occurred (XMPP shutdown) the VSD was
taking an extended period of time before marking the VSC as ‘Down’ in SYSMON.
• [VRS-11119] Log rotate failed in an OpenStack environment when nuage-metadata-agent was installed.
• [VRS-11138] The reported memory usage in the monitoring console now uses the available memory on the
system instead of free memory.

• [VRS-11574] In an ESXi environment, if VMs were moved from a source VRS to a destination VRS while the
source VRS was shut down for any reason, when the VRS was powered up it would send an update for the VMs
that were moved. This caused the VRS to send a delete for the VPort to the VSD and unresolve the vPorts for
the VMs on the destination VRS. This could have been triggered by vSphere HA events or loss of power to the
VRS.
• [VRS-11758] Stats forwarder on VRS was not connecting to an alternate stats collector on failure.
• [VRS-11872] In OpenShift, the CNI deleted the certificates folder, causing the Nuage Monitor to crash.
• [VRS-11876] In OpenShift, the REST server used by the Nuage integration in OpenShift did not get started
correctly.
• [VSD-19530] Logging in on the Nuage Metadata plugin for the vCenter Web Client might have taken a long
time which might have triggered a browser warning of a hanging page or tab. This could be ignored as the login
would eventually succeed.
• [VSD-22643] When using VSD active/standby, the monit UI was not accessible on the standby nodes.
• [VSD-22704] When a network macro is updated with a new IP address or mask, the new value was not pushed
down to the ACLs associated with that macro. Workarounds: Either of the following options could be applied:
(1) After editing a network macro, edit the ingress security policy entry for each of the associated ACLs by
clicking the Update button; (2) Re-attach the network macro to the network macro group, then re-attach the
group to the ACL.
• [VSD-22751] When running vsd-install.sh to install a VSD if the user performs an abort of the install
the script may continue to run.
• [VSD-22971] In some cases, JBOSS will fail on the VSD if the Elasticsearch node is isolated for a certain period
of time.
• [VSD-23060] An API call to retrieve domain information now shows the Backhaul Service ID as well.
• [VSD-23209] The maximum VNID value has been reduced from 16777215 to 16777165 to allow for reserved
VNIDs.

5.9 Resolved in Release 5.1.2 U4

• [SROS-16374] GATEWAY-CONFIG request was erroneously scheduled when GATEWAY-SECURITY push
was received even though gateway config already existed.
• [VSD-22909] XMPP connection recovery code error resulted in XMPP messages not being pushed to key server.
• [VSD-23280] When enterprises were deleted and recreated multiple times, the keyserver cache could go out of
sync. This resulted in an exception on the VSD server log and missing sek/seed on VSC.
• [VSD-23489] Key server now uses haproxy/load-balancer for the vsd.host parameter, so that push to the VSC
and request to the VSD will always be from the same VSD node and thus avoid any potential database sync
issues.

5.10 Resolved in Release 5.1.2 U2

• [SROS-15659] For WBX, under certain conditions, permanent port egress packet drop was experienced because
of an incorrect default hardware interface configuration.
• [SROS-15871] Provisioning of static routes for IPv6 prefixes from the VSD will lead to continuous reboots of
VSC/VSG/WBX. This has been resolved in 5.1.2 Update 2.

5.11 Resolved in Release 5.1.2 U1

• [SROS-14493] There are no create, delete or modify (if any) traps for the DL and UBR MIB tables, and some
of the traps needed for Domain Linking and NAT/PAT functionality are also missing. This has been resolved.
• [SROS-15659] For WBX, under certain conditions, permanent port egress packet drop was experienced because
of an incorrect default hardware interface configuration.

5.12 Resolved in Release 5.1.2

• [SROS-14905] Under certain circumstances, crashes were seen on VSG @fdbMcsUpdateMacDb. The issue
that caused them has been resolved.
• [SROS-14281] Multiple creation and deletion of domain linking instances may result on a hardware node to run
out of memory.
• [SROS-15077] Static route updates will fail for fully overlapping prefixes with different mask lengths, resulting
in stale routes in the routing table. For example, 0.0.0.0/4 -> 0.0.0.0/6 - fails, while 10.0.0.0/8 -> 10.1.0.0/16 -
works. Workaround: Delete existing prefix and add it back with updated mask length.
• [VRS-4850] 0/0 Static Route with Underlay-enabled vPort as Next-hop: When OpenFlow interface is uplink,
adding a 0/0 static route with underlay-enabled vPort as next-hop is a destructive operation because OpenFlow
session traffic will also be redirected to the vPort. Then the setup goes into a damaged state and recovery is
impossible, as all the traffic coming from the uplink will be redirected to the next-hop vPort.
• [VRS-6887] In VMware, to vMotion a VM across multiple vCenters, multiple restarts of Open vSwitch are
required.
• [VRS-9425] For the Nuage integration with Hyper-V to work properly, the Virtual LAN should not be config-
ured. Untagged traffic must be used.
• [VRS-9758, SROS-15132] Mirrored traffic reaching an Overlay Mirror destination can hit a low priority ACL
configured as a part of a Policy Group. This can incorrectly drop traffic. Workaround: Create an Any-to-Any
“Allow” rule in the Mirror destination domain.
• [VRS-10165] With underlay routing between domains, incorrect forwarding can be observed when an overlay
default route and stateful ACLs are used with exit domain routes.
• [VRS-10439] When PAT to Underlay is enabled on the VRS and a VIP is assigned to the VPort with an associ-
ated FIP, the VPort IP will no longer perform PAT on VRS.
• [VSD-17834] The vCenter password must be unique in the entire metadataplugin.properties configuration
file in the Nuage metadata plugin for the vSphere desktop client installation. If the vCenter password is the same
as any other word in this configuration file, that other word too will be replaced by the encrypted value after
jBoss is restarted. Ensure that the password does not match any other word in the metadataplugin.properties
configuration file.
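Because any token matching the password gets encrypted on restart, a quick pre-check against the file content can catch collisions before jBoss is restarted. The sketch below is illustrative only (the sample file content is hypothetical):

```python
def password_collides(properties_text: str, password: str) -> bool:
    """Return True if the vCenter password appears anywhere else in the
    metadataplugin.properties content besides its own value, which would
    trigger the unwanted replacement described above."""
    # More than one raw occurrence means some other token also matches.
    return properties_text.count(password) > 1

# Hypothetical file contents for illustration only.
print(password_collides("vcenter.password=tiger\nvcenter.user=tiger\n", "tiger"))    # True
print(password_collides("vcenter.password=s3cret\nvcenter.user=admin\n", "s3cret"))  # False
```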
• [VSD-18045] The Nuage metadata plugin for the vSphere Web Client connected only to port 8443 on the VSD
IP or FQDN that was entered. When deploying the plugin, you can now configure the VSD port to use.
• [VSD-19679] The Policy Search Filter is unable to filter by Zone/Subnet/Policy Group. Supported filters: Any, All Endpoints, and Non-IP.
• [VSD-20492] If you use the revoke and delete function (certificates cannot really be deleted from EJBCA), a trace of the certificate remains in the database. Because of this, EJBCA certificate generation may fail for certain users.


• [VSD-20645] When using the vSphere Web Client metadata plugin to change the metadata of a VM with more
than 2 NICs, removing a NIC with metadata from the VM might result in the information of the removed NIC
being written instead of the metadata of the remaining NIC. Workaround: Manually update the metadata on
the VM.
• [VSD-20686] In the Ingress/Egress forwarding policy popovers, no proper error message is displayed when the user tries to create forwarding policy entries while leaving Redirection Target blank under the ‘Actions’ tab. In addition, on the Ingress Forwarding Security Policy entry popover, the ‘VLAN Range’ field does not accept ranges.
• [VSD-21433] In the Metadata plugin for the vCenter Web Client, a disabled user might still be selected in the
user field on a VM. This might cause the VM to not gain connectivity as the user is blocked from VSD.
• [VSD-21592] When updating a subnet via the API, the routeTarget, routeDistinguisher, and vnId were being
deliberately ignored. This behaviour was removed in 5.1.2 to align L3 subnets with L2 domains.
• [VSD-21876] The search bar for the network security policies does not work unless you use the advanced search
function.
• [VSD-21890] When re-installing a VSD node that is part of a VSD cluster, following the “Re-install a VSD
Node” in the “VSP Install Guide”, the re-installation procedure will fail if it is applied on the “first” node (the
node that was first installed in the cluster). This issue exists in 5.1.1, but does not impact 4.0.Rx or 5.0. It is
fixed in 5.1.2.
• [VSD-21942] Updates to L4 service constructs are not handled correctly in the analytics portion: associated flows do not receive the updated L4 service or service group information, resulting in inaccurate data in visualizations.
• [VSD-21960] When enabling FIP to Underlay using the Shared Infrastructure Organization, it is possible to
change the Domain FIPUnderlay state even after a Subnet of type FLOATING is created within that domain.
This should not be permissible, and therefore if this change is made, the results will be unpredictable.
• [VSD-21962] When enabling FIP to Underlay using the Shared Infrastructure Organization, it is possible to
enable FIPUnderlay for more than one domain, although only one domain with FIPUnderlay enabled should be
permitted. Enabling this for multiple domains will have unpredictable results.
• [VSD-22462] After VSD upgrade, the ejabberd-status would show as failed, with a missing p1db node entry.

5.13 Resolved in Release 5.1.1 U2

• [SROS-15458, VSP-2166, VSP-2183] The VSC, 7850 VSG, and WBX 210 might crash during an SNMP walk
due to the addition of two new MIB tables for DSCP and COS that mark QoS profiles for NSG egress. If
these MIBs are polled when there are no NSG uplinks in the MIB, the device(s) will crash. The added MIBs
are: tmnxDCvSGwVlanDscpRemarkTable and tmnxDCvSGwVlanCosRemarkTable. Although these
MIBs are used for the VSC, they exist on the 7850 VSG and WBX as well. This issue impacts all TiMOS-based
devices (VSC, 7850 VSG, WBX 210) on 5.1.1 U1.

5.14 Resolved in Release 5.1.1 U1

• [SROS-13457] When an MC-LAG overlay host VPort is deleted, its ARP entry is not removed and remains active until it expires.
• [SROS-14297] There is traffic loss between hosts (using bridge VPorts) after one VSG (part of an MC) recovers
from a reboot.
• [SROS-14487] When modifying BGP policies and neighbor attributes, the BGP instance flaps if the VSG has a
VPort in multiple enterprises.


• [SROS-14875] With domain linking enabled, deleting the redirection target causes traffic to be sent to incorrect
interface.
• [VRS-6316] On Hyper-V, connection setup can be delayed when stateful ACLs are used.
• [VRS-7924] On a Hyper-V VM, when pinging the gateway IP of the subnet, a packet loss of up to 100% might
be seen.
• [VRS-8576] When a Kubernetes Network Policy is created with a malformed network policy file, nuage-
kubernetes-monitor may crash.
• [VRS-10154] When ACL Flow logging is enabled, a loop inside the logging system is causing a rapid growth
of the log files. This will cause the log partition to fill up quickly.
• [VRS-10237] Both the VRS Agent inner monitoring tool and the ESXi monitor are causing duplicate log mes-
sages in syslog.
• [VSD-15576] A user was not able to change the VLAN range on a port belonging to a VSG RG.
• [VSD-20356] Users with objectClass inetOrgPerson were not synced from LDAP to VSD. VSD now allows both the ‘person’ and ‘inetOrgPerson’ objectClass for users.
• [VSD-20468] Selecting “Hourly Precision” on statistics graphs and then increasing the “maxHour” and “minHour” may result in the intervals reverting to their original setting; for example, increasing the maximum time by 1 hour may cause the maximum time to be reduced by 1 hour.
• [VSD-20601] When retrieving statistics from Elasticsearch for the current day, the statistics interval spans from 00:00 to the current system time, but the VSD creates a graph with an X-axis from 00:00 to 24:00 and plots the retrieved data across it. This causes current real-time data to be plotted at 24:00 instead of at the correct interval.
• [VSD-20985] In the Nuage metadata plugin for the vCenter Web Client, if you change the domain field to
a Layer 2 domain where previously a Layer 3 domain was selected, the network (subnet) field might not be
emptied.
• [VSD-20100] In the VSD GUI, a user is unable to create a DHCP option with an FQDN that starts with a number.
• [VSD-21309] Events received from the push channel now include the sourceEnterpriseID parameter, which identifies the associated enterprise. This is available in the JMS messages.
• [VSD-21221] VSD GUI inspect button does not show attributes of API 5.0R1.
• [VSD-21901] From the GUI it is not possible to properly change the Seed Payload Authentication Algorithm to
a value other than HMAC_SHA1. This is possible when making the change via the API manually and will take
effect for new enterprises.
• [SROS-13597] With traffic flowing, if one of the ports in the access LAG is shut down and the VSG node of the MC-LAG pair is rebooted, then when the VSG comes back online, traffic does not flow through the interconnect LAG and is dropped.
• [SROS-13712] VSD may incorrectly show alarms indicating issues related to resiliency mismatch with a VSG
Redundancy group. The alarms have no functional impact. It is possible to delete them using the UI or the API.
• [SROS-14003] Some ARPs might not be learnt/synced on a dual homed MC-LAG VPort.
• [VRS-4346] VPort stats are not collected.
• [VRS-7254] When the VRS receives a fragmented packet from a VM, it forwards the first segment, but drops
the subsequent segments instead of forwarding them.
• [VRS-7521] VRS does not forward MPLS packets received from a VPort.


• [VRS-9361] While installing the Nuage VRS components on Hyper-V, you must select “Install this driver software anyway” when the Windows Security popup window appears. This applies to both GUI-based and CLI-based installs.
• [VRS-9653] VRS does not parse the 0.0.0.0 prefix of a static route.
• [VSD-18049] When using LDAP with Nuage, Windows Server 2012 R2 conflicts with the built-in Nuage ad-
ministrators group.
• [VSD-18221] If a new NIC is added to a VM after metadata has been applied using the vSphere Web Client
metadata plugin, this new NIC might not show up in the vSphere Web Client metadata plugin immediately.
Switching to another VM and back will reveal the new NIC.
• [VSD-18772] During a minor migration, if you executed the command /opt/vsd/bin/vsd-upgrade-complete to unfreeze the VM as part of the migration workflow, you must do one of the following so that system configuration changes can take effect: in the GUI, from the System Configuration tab, change a value and update (you can change it back afterward); or restart jboss from the VSD command line by executing monit restart jboss. In an HA deployment, execute this on one node at a time and wait for jboss to start up again before proceeding to the next node.
• [VSD-18919] In the GUI for the QoS queue policy, when certain resolutions are used, some buttons at the
bottom of the popup windows do not work.
• [VSD-20305] API Create/Update/Delete operations on a trunk object may appear on the VSD GUI only after
the tree view has been refreshed.
• [VSD-20316] In the vCenter Integration Node, only the user that creates an entity (for instance a vCenter
connection) is able to edit or delete it.
• [VSD-20688] When the vSphere Web Client metadata plugin is used to apply a static IP to an interface, and that
IP does not match the subnet selected, the wrong IP is written in the metadata information.
• [VSD-20862] If a security rule uses a network macro containing an IPv6 address, that rule does not get applied
on VRS.
• [VSD-21216] For a major upgrade, policy group changes can be made only before setting the vsd_complete_flag or after calling the turn-on-API script. For a minor upgrade, do not create any new policy group until the full upgrade is complete; otherwise, the policy group is sent in the new format and the VSC cannot recognize it.
• [VSD-20992] When the vSphere Web Client metadata plugin is used to reapply metadata on a VM, where the
metadata contains an already deleted subnet, it fails to apply the metadata.
• [VSD-20965] When the vSphere Web Client metadata plugin is used and the Enterprise field is emptied, not all
the dependent fields are automatically emptied.
• [VSD-21419] When the vSphere Web Client metadata plugin is used to apply empty metadata, an HTTPClientException error is thrown.
• [VSD-21425] When the vSphere Web Client metadata plugin is used and the form is refreshed after a Zone has
been deleted from the VSD, the Subnets dropdown still shows the subnets of the deleted zone, while the zone
field is emptied.
• [VSD-21182] The configuration file of the vSphere Desktop Client metadata plugin uses the v4_0 API endpoint.
• [VSD-21533] On the Metadata plugin for the vCenter Web Client, when removing the metadata information,
the user information might remain on the VM.
• [VSD-21357] The API specifications and VSPKs do not have the information about the ingressQOSPolicy API
entity.
• [VSD-20925] The API specifications and VSPKs for the uplinkConnection API entity are missing the ‘Any’
option for the mode.


5.15 Resolved in Release 5.0.2

• [SROS-12303] When leveraging hub and spoke domains there was the potential for a crash after a high number
of unique Subnet/L3 Domain spoke pairs were linked to a hub domain.
• [VRS-7521] VRS does not forward MPLS packets received from a VPort.
• [VRS-7724] If connectivity from the VRS Agent to the ESXi host is lost for a longer period of time, the ESXi
monitor in the VRS will stop working and needs to be restarted.
• [VRS-7749] Sometimes a live Storage vMotion of a VM on ESXi might cause it to lose network connectivity.
• [VRS-8188] For the Nuage integration with Hyper-V to work properly, the interface name created by the Hyper-
V virtual switch cannot be changed from its default value. This default value is in the form of “vEthernet
(<vswitch name>)”. Renaming this interface causes the functionality to break.
• [VRS-8328] When using stateful ACLs and live migrating VMs, connections from the underlay to the overlay
VM might time out.
• [VRS-8510] In the VRS Agent for ESXi, the Open vSwitch bridge MTU value does not match the MTU setting
configured in VCIN.
• [VSD-18533] When a subnet is modified to dual_stack, the IPv6 address and the IPv6 gateway are not displayed
in the VSD Architect.
• [VSD-18904] When creating IPv6 ICMP ACLs (etherType 0x86DD and Protocol 58), specific ICMP Type and
ICMP Code cannot be provided because those fields cannot be configured.
• [VSD-18977] The VSD Architect (UI) does not enforce relationships between IP type (IPv4/IPv6) and IP ad-
dress information.
• [VSD-19078] When creating network macro groups, the following combinations are not supported: (1) IPv4
network macro and IPv6 network macro; (2) IPv6 ACL entry with an IPv4 network macro or network macro
group; (3) IPv4 ACL entry with an IPv6 network macro or network macro group.
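The unsupported combinations above reduce to a single rule: all network macros in a group (and the ACL entry referencing them) must share one address family. A minimal validation sketch using Python's standard ipaddress module (illustrative only, not a product tool):

```python
import ipaddress

def macros_single_family(macros):
    """Return True if all network macros in a group share one address
    family (all IPv4 or all IPv6), as required by the note above."""
    families = {ipaddress.ip_network(m, strict=False).version for m in macros}
    return len(families) == 1

print(macros_single_family(["10.0.0.0/8", "192.168.0.0/16"]))  # True
print(macros_single_family(["10.0.0.0/8", "2001:db8::/32"]))   # False: mixed IPv4/IPv6
```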
• [VSD-19487] In the vCenter Integration Node, the Configuration Time Limit field is erroneously displayed on
the hypervisor level. This field can be safely ignored.
• [VSD-19488] Update of virtual IP for a redirection target is not supported.
• [VSD-19740] When the VRS Agent is manually deleted from vCenter, the last successful package upgrade will
not be reapplied.
• [VSD-20006] rsyslog messages larger than 2038 bytes were truncated at the VSD. This issue has been fixed by sending the rsyslog messages over TCP. The rsyslog server must accept TCP messages.
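Since the fix relies on TCP transport, the receiving rsyslog server must accept TCP connections. A minimal rsyslog configuration fragment that enables a TCP listener is sketched below (the port number is an assumption; match it to your deployment):

```
# /etc/rsyslog.conf — load the TCP input module and listen for syslog over TCP.
# Port 514 is an assumed example; use the port configured in your deployment.
module(load="imtcp")
input(type="imtcp" port="514")
```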
• [VSD-20027] The Policy Computation Decisions for ACLs are not displayed on the UI. If Policy Computation
Decisions are to be fetched, use APIs to get the computed policies.
• [VSD-20396] When the VRS loses connection to the VCIN for longer than 30 minutes, the VRS is reported in
TIMEDOUT state and does not recover after the connection is reestablished.
• [VSD-20696] When using the vCenter Integration Node to move a standalone hypervisor in scope, the UI
sometimes displays an error while the VRS is being deployed and the in scope button on the UI is not updated.
• [VSD-20824] VSD does not send messages to syslog, which causes the remote syslog to become unavailable.
• [VSD-20858] A steady memory increase is observed in vCenter while VRS Agents are deployed.
• [SROS-13914] When one of the nodes of an MC-LAG pair is rebooted, traffic loss should be less than one
second. However, traffic loss of ~3 seconds has been observed under these circumstances.
• [VSD-19239] Destination Network value is erroneously deleted when clicking the unlink icon in the Origin
Location in the New Ingress Security Policy Entry popover.


• [VSD-19533] When clicking on the refresh button in the Nuage Data form of the Nuage Metadata plugin for
the vCenter Web Client, the enterprise field is emptied. Selecting an enterprise will reset other fields, which will
require the user to specify each field again.
• [VSD-19669] The search function in VSD is not working for NSG and DC Gateways.
• [VSD-20959] After upgrade to 5.0 in pre BRS mode, enterprise deletion fails because of reference to entity
IngressExtServiceTemplate, which is deprecated.

5.16 Resolved in Release 5.0.1

• [VRS-7842] When installing the Hyper-V VRS as a Domain Administrator, UAC must be disabled on the
Windows server for the installation to succeed.
• [VRS-8231] x.x.x.0/31 was not allowed to be configured as a static IP address on the uplink connection, even though it is a valid IP for a /31 mask. This issue is now resolved.
• [VRS-8360] When there is no separate data interface on the ESXi VRS Agent, the MTU settings and queue
optimizations are not configured correctly on the management interface.
• [VRS-8654] When using a redeployment policy for the VRS Agent on vSphere, the intervals between attempts
to fix the issue by restarting the services might be longer than configured, causing delayed redeployment.
• [VSD-17402] In the VSD UI, when opening an ACL entry that has been created with a remote destination, the
UI mistakenly shows the ACL type as being local.
• [VSD-19015] When a VRS loses connection to the secondary VSC, no alarm is raised in the VSD.
• [VSD-19342] When the vCenter Web Client metadata plugin is used to remove metadata from an interface, the
ID fields are not emptied. This does not impact functionality.
• [VSD-19521] In the Nuage Data form of the Nuage Metadata plugin for the vCenter Web Client, if the Enterprise
field is emptied, the user and Site ID field are automatically repopulated with the old information.
• [VSD-20224] When the Uplink Connection Type is not selected, the Create button is grayed out and no further
options are available. Now, when the connection type is selected, the button is active, and the corresponding
options are available so that configuration can be continued.
• [VSD-20273] In VSD HA deployments, whenever mysql-status shows “Status failed”, activemq-status should
also show “Status failed”; however, there are some situations where monit incorrectly reports activemq-status
as “Status ok” on the slave activemq node.
• [VSD-20295] When the ESXi VRS Agent is manually deleted after a successful package-based upgrade, during
the redeployment of the VRS Agent, the package upgrade might be applied twice. This does not impact the
functionality.
• [INF-1030] The Elasticsearch VM for VSP 4.0.R7 was created with 7GB disk space instead of 250GB. As a
result, the disk fills up quickly, and when it is full, statistics collection stops.
• [SROS-13399] When one of the nodes of an MC-LAG pair is rebooted, there is an approximate traffic loss of
6-8 seconds.
• [SROS-13692] When bringing up a VRS-G RG pair, both VRS-G nodes sometimes get stuck in slave mode. This is more likely to happen if the VRS runs Ubuntu 16.04.
• [VRS-8002] On a Hyper-V VM, when the Description field is populated, VMs might not be resolved.
• [VSD-17492] If a cluster or a standalone host is moved out of scope in the vCenter Integration node, in vCenter
an upgrade task might appear with a warning “The task was canceled by a user” before the VRS Agents get
powered off and deleted. This warning can be safely ignored.


• [VSD-17994] The Nuage metadata plugin for the vSphere Web Client does not currently support linked vCenter deployments.
• [VSD-19102] The VSD protocol version is displayed in JMS entityVersion and VSC “VSD Release Version”.
This version often matches the deployed VSD version, but in some cases the VSD protocol version may not be
updated in VSP minor releases. This can cause confusion when the user sees a “VSD Release Version” or JMS
entityVersion which does not match the installed VSD version. In 4.0R8 these versions do match, so the issue
is not visible.
• [VRS-16759] Sometimes, when running the ‘ovs-ofctl dump-flows’ command, the process hangs and consumes 100% CPU until manually aborted. Do not use this command; use ‘ovs-appctl bridge/dump-flows’ instead.
• [VRS-16069] When stateful ACLs that allow ICMP traffic are enabled, VRS/NSG may not handle ICMP type 3 (destination unreachable) messages correctly, causing traffic with all other ICMP codes to drop.
• [VRS-16062] In prior 5.3 releases, VMs in different domains on the same VRS and hypervisor could not communicate via route-to-underlay. This has been corrected. This does not apply to static routes pointing to those VMs. For addresses matching the static routes behind such VMs to communicate, the user is expected to manually add the static routes in the main Linux routing table of the hypervisor with the respective VM as the next-hop.
• [SROS-18202] BGP PE-CE learnt routes are advertised to the VSC with EVPN VNID label 0, which can cause
interoperability issues with third party EVPN implementations.
• [SROS-17195] BGP PE-CE session advertises infrastructure AS and any other AS used in the underlay as part
of the AS path announced to a CE for overlay routes.
• [OPENSTACK-2245] Creating VSD managed subnets through the OpenStack Horizon dashboard resulted in an
error and would fail.
• [VSD-25711] Adding the same IPv6 VIP with different formats (long vs short format) would raise an exception
and prevent the VIP from being added.
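The fix addresses duplicate detection across equivalent IPv6 spellings. A client-side way to avoid the mismatch is to normalize addresses to their canonical compressed form before submitting them, for example with Python's standard ipaddress module (an illustrative sketch, not part of the product):

```python
import ipaddress

def normalize_ipv6(addr: str) -> str:
    """Return the canonical compressed form so that long and short
    spellings of the same IPv6 VIP compare equal."""
    return ipaddress.IPv6Address(addr).compressed

print(normalize_ipv6("2001:0db8:0000:0000:0000:0000:0000:0001"))  # 2001:db8::1
print(normalize_ipv6("2001:db8::1"))                              # 2001:db8::1
```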
• [VSD-23847] The entity type for the PAT address map was inconsistent between the VSD REST API and the
VSD JMS API: the “entityType” value was “adressmap” in the REST API and “natmapentry” in the related JMS
events. It is now set to “adressmap” in both REST API and JMS API.
• [SROS-18084] When BGP PE-CE peering to a loopback, access side BGP peer learned routes with a non-vPort
IP BGP next-hop will fail to resolve to a vPort IP (recursive lookup).
• [SROS-17925] During upgrade from 4.0R10 to 5.2R2, at turn-on-api, flapping the OVSDB session with a 3rd
party gateway switch was necessary.
• [SROS-17861] The 210 WBX can occasionally display erroneous log messages in the form “soc_tomahawk_ser_process_mmu_err”. These are spurious log messages and have been disabled.
• [SROS-17649] On VSG/WBX, when an Ingress QoS Policy is applied to a port that is part of a LAG, the LAG QoS policy is ignored and the per-port QoS policy is applied.



CHAPTER SIX

KNOWN ISSUES

Please be aware that upgrade-related known issues are listed in the Upgrade section of these release notes.
Please consult Known Limitations too, because some known issues may have been moved to that section.
The known issues are organized based on the VSP component:

• Known Issues First Reported in Release 5.3.3 (page 81)


• VSS (page 83)
– MC-LAG (page 83)
• Miscellaneous (page 85)
• VSD (page 87)
• VSC and 7850 VSG/VSA (page 90)
– BGP (page 90)
– CLI (page 91)
– IS-IS (page 91)
– Management (page 92)
– Routing (page 92)
– Services (page 93)
– System (page 94)
• VRS (page 94)
• VMware (page 97)
• OpenStack (page 98)
• CloudStack (page 99)
• OVSDB (page 99)
• Hyper-V (page 99)
• SCVMM (page 100)
• Container Integration (page 100)


6.1 Known Issues First Reported in Release 5.3.3

• [VSD-26435] The JSON-RPC connection status is wrongly displayed as a red exclamation mark rather than a green circle whenever a sysmon status probe is triggered by clicking “Probe object for up to date information” on the “Monitoring console” of the VSD GUI. Select another view and then re-select the VRS view to get the correct connection status.
• [VSD-26358] VSD users that are part of the “Operator” group should have read access to the “NSG uplink” and “VSC profile” in VSD Architect, but they do not.
• [VSD-25939] When deploying VCIN, the Jboss process runs as the root user instead of the vsd user.
• [VRS-17652] When upgrading from a version that supports BGP, the yum local install of upgraded BGP RPMs fails if an older BGP version is installed and running. Workaround: Uninstall and reinstall the BGP RPM.
• [VRS-17617] Running the vm-monitor script when vm-monitor is already up triggers a loop that fills the /var directory within seconds.
• [VRS-16639] A warning message can be observed in the nuage-openshift-monitor.WARNING log,
stating “etcdserver: mvcc: required revision has been compacted”. This message can safely be ignored.
• [VRS-16637] The CNI audit will unnecessarily fetch pods across all nodes while it should only fetch pods for
the local node. This causes extra log messages in /var/log/messages. These messages can be safely
ignored and do not impact functionality or connectivity of pods.
• [VRS-16561] The eVDF bootstrap agent install will succeed, even when the install log complains about missing
packages. This can occur in case the eVDF node does not have access to the general Red Hat repositories and
certain required packages are not available for installation.
• [VRS-16555] The eVDF bootstrap agent transforms the UUID of the VM it retrieves through dmidecode even when not deployed on Azure; this can cause the VSD eVDF Instance ZFB UUID match criteria to fail to match an open request. Workaround: Option 1: Bootstrap the eVDF node with the ZFB info specific to the eVDF instance, instead of the ZFB info of the eVDF template. Option 2: Manually accept the eVDF request. Option 3: Use the IP or hostname as the ZFB match criteria.
• [VRS-16553] The eVDF bootstrap agent replaces the default systemd service for ntpd with a wrapper that points ntpd to the VSCs as the NTP servers. This can interfere with the OpenShift installation in some cases. Workaround: Back up the original /etc/systemd/system/multi-user.target.wants/ntpd.service before bootstrapping an eVDF node and restore it before installing OpenShift. Make sure to point the NTP configuration to the VSCs as NTP servers.
• [VRS-16291] IPsec traffic between an eVDF node and an NSG with dual uplink fails. Only single-uplink NSGs are currently supported.
• [VRS-15461] When multiple packets are sent with only the RST flag set, the statsd process on the NSG and VRS reports incorrect stats to Elastic.
• [VRS-15015] Sysmon on eVDF nodes will send minimal information about an extra port eth2. If this port
does not exist on the eVDF Gateway Profile in VSD, this can cause alarms in the VSD, syslog and JMS. These
alarms can safely be ignored.
• [VRS-13361] The ovs-appctl list-commands output mistakenly refers to the ipsec/list-dr-seeds sub-command as accepting a bridge name, whereas it only accepts a customer-id. This is only a cosmetic issue.
• [VSD-24322] When performing a VSD and Statistics installation on separate hosts, if the user issues the “monit” command on the Statistics hosts before the script to enable statistics is executed, Monit starts the statistics processes with an incomplete configuration. An example of an incomplete configuration is when ‘monit status elasticsearch-status’ on the stats VMs shows “No value of statscollector.elasticsearch.host in the configuration file /opt/vsd/stats_collector/conf/stats.conf”. To recover from this situation, disable stats using ‘/opt/vsd/install/change_credential.sh -j <csproot_password>; /opt/vsd/vsd-stats.sh -d’ and try enabling statistics again using ‘/opt/vsd/install/change_credential.sh -j <csproot_password>; /opt/vsd/vsd-stats.sh -e <es-names> -s <HAproxy-name>’.
• [VRS-16565] When an already bootstrapped eVDF node is rebooted, the eVDF bootstrap agent will attempt a
new activation. This does not impact functionality. In some cases a new pending bootstrap request can be seen
in the VSD, these can safely be ignored.
• [SROS-18450] In some cases when configuring BGP and BFD on vports, the configuration is pushed to the VRS and hence the peer-received routes are not learnt on the VSC. Workaround: Restart bgpd.
• [SROS-18440] MP-BGP peering between VSC and VRS sometimes takes more than 1 minute to come up.
• [SROS-18407] On VSG/WBX, adding/removing ports to LAGs might have a traffic impact if flows are moved
as a result of the new calculated hash.
• [SROS-18226] If the port that is in LACP-Fallback mode is brought down, the other port of the MC-LAG, which was previously operationally down, does not come up automatically. Workaround: Manually shutdown/no shutdown this port.
• [SROS-18048] On shutting down the MCLAG MCS protocol on a node, MACs learnt through MCS in the underlay VPLS services are not aged out.
• [SROS-18046] If a single-homed VPLS service with single-homed SAPs is made non-MCLAG-MCS (by deleting the dummy LAG), the MACs on the peer node pointing to LAG-98 only get deleted after the MAC ages out.
• [SROS-18005] If MACs learnt from a single-homed MCLAG node are cleared with “clear service id <service-id> fdb all”, the MACs are not relearnt. To make the peer re-learn all single-homed MACs, execute “clear service id <service-id> fdb sap <sap-id>” on the single-homed node.
• [SROS-17999] In case of an ACL failure (i.e., resources exhausted) on one peer of the MCLAG, the SAP is brought down on that peer and flows to that node are blackholed, as the CE ignores this situation.
• [SROS-17930] After clearing the FDB on the peer MCLAG MCS node for a single-homed SAP where MACs are learnt through LAG-98, MAC re-sync does not happen.
• [SROS-16651] If the first 10 characters of the prefix list name are the same in multiple routing policies, the policies will overwrite each other.
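A small pre-check can catch the collision described above before policies are configured. The sketch below (illustrative only, not a product tool; the sample names are hypothetical) flags prefix list names that share the same first 10 characters:

```python
from collections import defaultdict

def truncation_collisions(names, limit=10):
    """Group prefix list names by their first `limit` characters and
    return the groups with more than one member; per the note above,
    such policies would overwrite each other."""
    groups = defaultdict(list)
    for name in names:
        groups[name[:limit]].append(name)
    return {key: members for key, members in groups.items() if len(members) > 1}

print(truncation_collisions(["CUSTOMER-A-IN", "CUSTOMER-A-OUT", "MGMT-IN"]))
# {'CUSTOMER-A': ['CUSTOMER-A-IN', 'CUSTOMER-A-OUT']}
```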
• [SROS-18504] On VSG/WBX, BGP PE-CE to a loopback using dual stack may create an exception when adding the IPv6 peer, forcing the system to reboot.
• [OPENSTACK-2300] If a subport is attached to trunk port via OpenStack Neutron CLI/API, and the subport is
created on a VLAN-only network, the attachment fails with an error from the nuage-sriov mechanism driver.
Workaround: Disable the nuage-sriov mechanism driver if VLAN-only networks are desired. VLAN-only
network creation should only be used when nuage-sriov is disabled. If nuage-sriov is enabled, multi-segment
networks with VLAN and VXLAN segments should be used instead.
• [VRS-17112] When multiple packets are sent with only the RST flag set, the statsd process on the NSG and VRS reports incorrect stats to Elastic.
• [SROS-18076] When BFD is enabled on LAG-98 and the LAG-98 is flapped, BFD error messages are seen on
the console.
• [VSD-26843] Flow Explorer does not support search by PG name or PG category. Flow Explorer also does not support “contain” in the search query: it displays “Please wait while loading” if a user puts “contain” in a search query, and subsequent queries to Flow Explorer display the failure message “Oops unable to connect to datastore.”


• [VSD-25950] When the plugin deployment tool on the vCenter Integration Node is used to deploy the HTML5 plugin, a status message states “Started Web plugin deployment.” This may be confusing because the same message is used when the Web Client plugin is deployed. It can be safely ignored, as the proper HTML5 plugin is deployed.
• [VSD-25671] Changing the Elasticsearch log rotation scheme from date-based to size-based causes error messages to be shown in /var/log/messages. This is caused by an erroneous configuration in /etc/elasticsearch/log4j2.properties. Workaround: Do not use size-based rotation, or contact support for details of the workaround.
• [VRS-15735] The VRS Inner monitor on the VRS Agent opens port 8080 for a monitoring service. This does
not cause any security risk or extra load.
• [SROS-17853] If a WBX/VSG has an XMPP channel connected without any vports, the hold-timer will expire after the init-hold-timer value.

6.2 VSS

• [VRS-15359] When VSS is enabled for a domain, a memory leak is observed in Open vSwitch for VRSs and VRS-Gs with a vPort in that domain. Workaround: Restart Open vSwitch or disable VSS.
• [SROS-14944] When VSS flow and Event Collection is enabled on the VSD, in some cases VSS flow collection
is not enabled on a few VRSs. Workaround: Enable Flow Collection on the VSD again.
• [VRS-14695] If an L2 VM used as an Overlay mirror destination resides on the same OVS as the L3 source
VM, VSS flows for the L3 VM are not reported by OVS to the stats collector.
• [VRS-15379] When VSS is enabled, the collected flow statistics in Elasticsearch for the traffic between two
Virtual Machines or Containers do not contain the source and destination MAC addresses.
• [VSD-17936] If two PGs and an ACL using these PGs are configured, and then the name of one of the PGs is
changed, the new flows still use the old name in Elasticsearch. Workaround: Do not change the names of PGs
that are in use.
• [VSD-18170] The VSS display does not automatically refresh, regardless of the refresh interval selected.
Workaround: Refresh manually.
• [VSD-18173] When the VSS Security Analytics page is refreshed while Elasticsearch is down, it gets stuck
at “Please Wait While loading” even after Elasticsearch has recovered and is once more in a healthy state.
Workaround: Refresh manually.
• [VSD-25635] ICMP Service is reported incorrectly in the VSS Analytics Flow Explorer GUI as well as in the
Elasticsearch VSS Flow Index for certain ICMP flows. It is only reported properly for flows that match an
ACL Entry with ICMP service referenced in the ACL definition.
• [VSD-25701] In the VSS Security Analytics Visualization, the VSS Events graph is not populated with ACL
Deny events for L2 domains even though there is data in Elasticsearch corresponding to the events.
• [VSD-25731] For traffic matching an ACL having a Service Group containing more than one service with ICMP
Code/Type, the service is reported incorrectly on Elasticsearch by the stats collector. It is reported correctly only
if there is a single service with protocol ICMP in the Service Group.

6.2.1 MC-LAG

• [SROS-17887] With manual L2 EVPN services, if the SAPs receive the same MAC address from a host,
MAC-move can bring both SAPs down. Workaround: Configure MC-LAG on the nodes, or configure Active/Standby
on the host.


• [SROS-12546] The trace message below appears when lag-98 goes down on the MCS node, which has a single
homed VPort synced on it from the other MCS node:

1:iomMsg-1:IOM:is_valid_mac_entry Stale entry with same cpmtag=2. TLS 1592604809 (TlsId 4) Entry - MAC 68:54:ed:00:cc:11 idx 26
[016 m 02/04/17 06:33:41.259] 1:iomMsg-1:IOM:bind_mac_entry Attempt to bind over existing valid MAC entry! TLS 1592604809 MAC 68:54:ed:00:cc:11 Idx=29 cpmtag=2
[016 m 02/04/17 06:33:41.260] 1:iomMsg-1:IOM:add_or_update_mac_entry TLS 1592604809 (TlsId 4) cannot create new MAC entry - MAC 68:54:ed:00:cc:11 idx 29
[016 m 02/04/17 06:33:41.262] 1:iomMsg-1:IOM:process_one_icc_socket Rejecting ICC transaction 3795 socket 19

• [SROS-13593] When one node of an MC-LAG pair is upgraded to any release after 4.0R6.1 and the other node
is still 4.0R5 or earlier, traffic outage is seen if traffic is ingressing on the node that is still in the older release.
Workaround:
1. Shut down the interconnect LAG on the VSG node that is to be upgraded first (Node-1). Run: /configure
lag <interconnect-lag-id> shutdown
2. Save the configuration. Run: /admin save
3. Upgrade Node-1.
4. Upgrade Node-2.
5. Re-enable (no shutdown) the interconnect LAG on Node-1. Run: /configure lag <interconnect-lag-id> no
shutdown
6. Save the configuration on Node-1. Run: /admin save
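Summarized, the workaround runs the following command sequence on Node-1 (a sketch only; <interconnect-lag-id> is a placeholder, and each node is upgraded between the two command groups):

```
# On Node-1, before the upgrades:
/configure lag <interconnect-lag-id> shutdown
/admin save

# ... upgrade Node-1, then Node-2 ...

# On Node-1, after both upgrades:
/configure lag <interconnect-lag-id> no shutdown
/admin save
```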
• [SROS-15142] For active-standby MC-LAG, if a clear ARP is issued on the standby node, all learned ARP
entries will be cleared and not recovered. Workaround: Issue a clear router ARP on the active node and
it will restore ARP entries.
• [SROS-15146] ARP and MAC entries will remain out of sync on the MC-LAG peers if the Interconnect
LAG(lag-98) is down/shutdown.
• [SROS-15221] Continuous local ARP moves between dual-homed bridge VPorts (updated with a received GARP)
on MC-LAG peers can create ARP entry inconsistencies in which the MAC is not updated properly. Workaround:
Clear the offending ARP entry on both MC nodes.
• [SROS-15244] In a scaled setup with 8000 hosts connected behind the bridge VPorts of an MC-LAG, failover
times for access LAG shutdown or operational state down can be in the range of 0-30 seconds.
• [SROS-15247] When one of the MC-LAG nodes is rebooted, sometimes the fdb-mac entry for host behind a
bridge VPort on MC-LAG points to Interconnect LAG even though the access LAG is up. This issue is not
observed in every reboot. Workaround: Clear that particular FDB MAC from the node where FDB entry is
correct.
• [SROS-15810] When hundreds of endpoints exist on a single VPort configured for MC-LAG, it is possible
that a small number of ARP entries will be removed on the nodes of the MC-LAG when the access LAG
and interconnect LAG flap in the following sequence: a) Interconnect LAG shut; b) Access LAG shut; c)
Interconnect LAG no shut; d) Access LAG no shut. If this happens, the traffic destined for the endpoints with
missing ARP entries will be dropped. Workaround: Stop and restart the BGP session using bgp shut/no
shut.
• [SROS-15816] When hundreds of endpoints exist on a single VPort configured for MC-LAG, on the node of the
MC-LAG where the interconnect and access LAGs are shut down, it is possible that some of the fdb-mac entries
will be deleted. Traffic will be dropped if it is destined for the endpoint where the ARP entry is present and
the MAC entry is deleted. Workaround: On the MC-LAG peer node, run clear service id <svcid>
fdb mac <mac>.


• [SROS-16289] Changing the System IP on VSG/WBX with manual EVPN services configured will result in
the configuration being lost. If it is necessary to change the System IP, save the configuration, and restore the
configuration after the change has been applied.
• [SROS-16299] Traffic from dual-homed VPort to a single-homed VPort on its MC-LAG peer may be dropped
after the reboot of one of the MC-LAG peers. Workaround: shut/no shut of Interconnect LAG.
• [SROS-16305] After the reboot of one of the nodes of VSG MC-LAG, some of the FDB table (MAC table)
entries for dual-homed VPorts may point incorrectly to Interconnect LAG SAP even though the Access LAG
is UP on one of the MC-LAG nodes. This results in traffic being dropped. Workaround: Issue the command
clear service id <svcid> fdb mac <mac-entry>.
• [SROS-16491] In an Active/Standby MC-LAG scenario, EVPN loop across single-homed hosts connected to
both nodes will not be detected if the number of MACs is scaled to approximately 100.
• [SROS-16493] Stale MAC entries pointing to interconnect LAG may be present in the FDB MAC table
when a SAP on the active node of Active/Standby MC-LAG is shut. Workaround: clear service id
<evpnid> fdb <mac-entry>.
• [SROS-16619] An IPv6 static route configured behind a dual-homed bridge VPort will not be installed on one
of the MC-LAG peers.
• [SROS-17591] When inter-chassis ports LAG-98 are shut down, SAPs of LAG-98 can appear as admin Up; this
is a cosmetic issue.

6.3 Miscellaneous

• [SROS-17972] If the Anycast MAC address on WBX/VSG is changed, the Anycast Gateway IP inside the VPRN
interface gets reversed, as does the corresponding ARP table entry. Workaround: Change the Anycast Gateway
MAC address twice. For example, if the MAC address has to change to Y, first change it to X and then to Y.
• [SROS-17965] Deleting a large number of manual EVPN services on a VSG/WBX with bulk procedures in a
short span of time may lead to a reset of the node. Workaround: Remove one EVPN service at a time.
• [SROS-16924] The SdpId is not updated properly in a WBX L2 domain dual-homed to a pair of DC-GWs after a
VRRP switchover. Workaround: Configure a static MAC under the VRRP on both DC-GWs.
• [SROS-17438] Static routes with a dynamically learned next-hop (bridge vPort) will flap in the RTM every few
minutes. Within the Nuage datacenter this does not cause a forwarding impact, but if the reconvergence further
upstream is slow, this could cause a service impact.
• [SROS-17501] On VSG/WBX, performing no shutdown on the SSH server might take more than 30 seconds before
the server reaches the enabled operational state.
• [SROS-17685] On a VSC controller switchover, BFD sessions for static routes will be re-initialized.
• [VRS-13825] The CNI log might contain messages about the SiteId not being set. These messages can safely
be ignored.
• [VRS-15120] On a VRS with BFD static routes over bridge VPorts, if the VPort has not yet been resolved,
BFD init packets are sent from all bridge VPorts of the subnet.
• [VRS-15310] When a BFD session for static routes is up between VRS-G and the next-hop on a bridge VPort
and the MAC address of the VPort next-hop changes, BFD flaps on both endpoints, potentially causing data
path impact.
• [VRS-15335] In a rare case, due to an error in messaging between the VRS/NSG and the VSD, the VSD may
receive a stats message that puts the stats thread on the VSD communicating with the VRS into a bad state.
This affects only the stats thread communicating with that particular VRS/NSG and not the stats-collector
process as a whole. Workaround: Temporarily fix this either by restarting the stats-forwarder process on
the VRS or by restarting the stats-collector process on the VSD.
• [VRS-15419] When TCP traffic stats are sent to Elasticsearch on a stateful ACL hit, only server-to-client
stats information is sent by statsd.
• [VRS-15731] eVDF nodes will use both VDF and VRS licenses in VSD. Please make sure you have enough
VRS and VDF licenses available.
• [VSD-20114] When an Access Port is added to the VSG/WBX redundant group, the audit report of redundant
group on VSD Architect incorrectly shows the Access port as Network and Missing.
• [VSD-23962] In a deployment where the Stats-out feature is used, monit on the statistics nodes will not report
a failure status if the VSD or the load-balancer/proxy is unreachable.
• [VSD-24289] Using the VSD GUI to create multiple static route entries through the DHCP option 121 will fail.
Workaround: Manage the DHCP option through the REST API by using hex values for the type and value
fields.
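The hex values required by the REST workaround above follow the DHCP option 121 (classless static route,
RFC 3442) wire format: one byte of prefix length, the significant octets of the destination prefix, then the
four octets of the next hop, repeated per route. A minimal illustrative encoder (the helper name and sample
routes are ours, not part of the VSD API):

```python
def encode_option_121(routes):
    """Encode (prefix, prefix_len, gateway) tuples as a DHCP option 121
    hex value string, per the RFC 3442 classless static route format."""
    out = bytearray()
    for prefix, plen, gw in routes:
        out.append(plen)  # 1 byte: destination prefix length
        # Only the significant octets of the destination: ceil(plen / 8)
        octets = [int(o) for o in prefix.split(".")]
        out += bytes(octets[: (plen + 7) // 8])
        out += bytes(int(o) for o in gw.split("."))  # 4 bytes: next hop
    return out.hex()

# Two static routes packed into one option value.
print(encode_option_121([("10.20.0.0", 16, "192.168.1.1"),
                         ("172.16.5.0", 24, "192.168.1.1")]))
# → 100a14c0a8010118ac1005c0a80101
```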
• [VSD-24611] If the /opt/vsd/vsd-stats.sh script exited with a failure on the VSD or on the Statistics VM
because the wrong parameters were used, first disable statistics by running /opt/vsd/vsd-stats.sh -d before
enabling VSD statistics again with the correct parameters.
If you had changed your csproot password, execute the following command on all VSD nodes:

[root@vsd-1 ~]# /opt/vsd/install/change_credential.sh -j '<csproot_password>'

For example, if the proxy IP was incorrect for the initial execution, the script fails:

[root@vsd-1 ~]# /opt/vsd/vsd-stats.sh -e elastic-1.example.com -s 10.10.10.101
ElasticSearch status: PASS
Fail rest call check to https://10.10.10.101:7443/nuage

Disable statistics:

[root@vsd-1 ~]# /opt/vsd/vsd-stats.sh -d

Then enable statistics again with the correct proxy IP:

[root@vsd-1 ~]# /opt/vsd/vsd-stats.sh -e elastic-1.example.com -s 10.10.10.100
ElasticSearch status: PASS
Reinitializing monit daemon

• [VSD-24795] The 7x50 gateway NETCONF only supports bridge VPorts, but VSD does not block the
configuration of host VPorts.
• [VSD-24798] When creating a VLAN in a redundancy group of a hardware VPort, deleting the redundancy
group makes the VLAN unusable.
• [VSD-24932] When creating redundancy groups on 7X50 NETCONF gateways, RG VPorts’ SAPs are not
deleted if the redundancy group is deleted from VSD.
• [VSD-25071] When a 7x50 NETCONF gateway ingress/egress profile is deleted and there are VPort provisioned
with the profile, the associated per VPort SAP configuration is not removed on the 7X50.


• [VSD-25256] In an Elasticsearch cluster deployment, especially one with 6 Elasticsearch nodes,
Elasticsearch creates a number of controller_log files in the /tmp folder. These files can be removed
without any impact on the system.
• [VSD-25451] In the VSD UI and API it is possible to create two Services with the same protocol and ports in
one Enterprise but with a different name. The statistics will reference only the Service created most recently.
Workaround: Avoid creating multiple Services with the same protocol and ports.
• [VSD-25507] The Top 5 Source Policy Group and Top 5 Destination Policy Group tabs show the same data as
ACL Hits by Source PG and ACL Hits by Dest PG instead of correctly classifying by Policy Group.
• [VSD-25529] When there are more than 450 subnets in a zone, the Tree view in the VSD Architect becomes
very slow and icons disappear; if there are two zones and the first one already has 500 objects, the second
zone will not display any subnets at all. Workaround: Use the List view.
• [VSD-25535] Enterprise permissions can fail when using Redundancy Groups with 7X50 NETCONF gateways.
Workaround: Use CSP user.
• [VSD-25552] With 7750 NETCONF gateways, it is not possible to change L2 domain (service) type from routed
VPLS to non-routed VPLS and vice versa if VPorts have already been created in the service.
• [VSD-25555] For a VSD HA deployment, when events that disrupt the Infinispan cluster occur (events such
as network partitioning), causing the Infinispan status to fail and consequently the keyserver as well: to
recover, stop the Infinispan process and start it again using monit -g vsd-common start on each node. If
the keyserver status does not recover after the Infinispan cluster is restored, stop the keyserver process
using monit stop keyserver, then restart it with monit start keyserver and monit start keyserver-status.
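The recovery steps above, in order (a sketch using the exact commands from the workaround; run on the nodes
indicated):

```
# On each VSD node: stop infinispan, then restart the vsd-common group
monit stop infinispan
monit -g vsd-common start

# If keyserver status has not recovered once the infinispan cluster is back:
monit stop keyserver
monit start keyserver
monit start keyserver-status
```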
• [VSD-25562] 7x50 NETCONF gateway icon might remain red after the gateway is instantiated and connected.
Workaround: Move to a different tab, then go back to gateway tab on VSD and it will be shown as green.
• [VSD-25637] The Service details for reverse traffic such as HTTP is not reported on Elasticsearch by the Stats
Collector. It is only reported for traffic in the forward direction.

6.4 VSD

• [VSD-1281] An unpredictable sequence of creation events renders Network Designer unable to display the
updated VM/VM interface. Workaround: from the Domain Data view, refresh the topology by first clicking on
another domain then clicking on the original domain.
• [VSD-6668] The Neutron TCP/UDP port range is 0-64K, but Nuage supports only 1-64K.
• [VSD-6874] DNS server changes after a VSD reboot are not reflected. Workaround: If the DNS server is
changed, VSD services must be restarted for the changes to be reflected.
• [VSD-11900] VRS deployment using EAM gets stuck if vCenter tries to clone a VRS from a hypervisor that
becomes unreachable.
• [VSD-12565] Updating a redirection target (L3 redundancy disabled) to point to a new VPort does not update
the virtual IP address associated with it. Workaround: Delete the virtual IP address and recreate it.
• [VSD-12610] When a container created using the docker run command on the VRS is deleted, sometimes
the VSD continues to display the VPort for several hours even after it has been successfully deleted from the
VSC and VRS.
• [VSD-13028] The ECMPCount value push in the Shared Network Resources (SNR) domain fails. It is not
possible to update the ECMPCount value to anything between 1 and 8 (valid values).
• [VSD-13544] When a redundant port in the VSG redundancy group is deleted, it does not send push messages
to the non-authoritative VSG in the redundancy group, and the vPort is not removed from it. The correct
behaviour is to send push messages to both VSGs, and the vPort should be removed from the non-authoritative
VSG. When the audit time expires, the push happens correctly, and the vPort is removed from the
non-authoritative VSG.
• [VSD-14683] When the Elasticsearch VM is up and stats is enabled on VSD, trying to fetch the stats
sometimes produces the error “None of the configured nodes are available.” Workaround: First, stop the
Elasticsearch process and reassign the IP address to the Elasticsearch VM. Then restart the Elasticsearch
process using service elasticsearch restart.
• [VSD-14758] Once VSD is configured and statistics is enabled, the following parameters in System
Configuration should not be changed, even though they can be updated on the fly using REST or the UI.
– Elastic Cluster Name
– Collector Address
– Collector Port
– Collector Protobuf Port
– Max Data Points
– Min Duration
– Number of Data Points
– Elastic Server Address
• [VSD-15789] After upgrading from 4.0R1 to 4.0R3, the AntiSpoof packet drop data from 4.0R1 cannot be
viewed. However, read/write capability for new data is not affected.
• [VSD-15810] If a VSD cluster points to a Stats node (Elastic VM) and the configuration later needs to
change to a new Stats node (Elastic VM, standalone or cluster), bring down the old Stats node before
enabling statistics on the VSD pointing to the new Stats node.
• [VSD-17059] The system does not support accessing or using a domain PGID from a different domain in the
ACL entry of the current domain.
• [VSD-17269] Established TCP (stateful) traffic between 2 VMs goes down after router-interface-add. The
TCP connection has to be re-established for traffic to resume.
• [VSD-17335] In the Security Policy Entry popover, when the ACL Type is toggled more than once between
Local and Remote while an organization is selected for the domain, the unlink button erroneously appears.
Workaround: Close the popover and start again. Do not toggle the ACL Type more than once.
• [VSD-17493] In the VSD UI, if the “use” permission on the zone under a domain is deleted, the permissions
are deleted from the entire domain. This causes removal of access to the domain for a user group.
• [VSD-18321] When stats are enabled on the VSD and the connection between the VSD and the Elasticsearch
stats server is down, the VSD GUI displays the error message “None of the configured Elasticsearch nodes are
available.”
• [VSD-18536] After creating more than 500 TCAs in VSD, Elasticsearch will have the same (correct) number of
active TCAs. However, once the tca_daemon maintenance process starts, there will be only 500 active TCAs in
Elasticsearch.
• [VSD-18581] When NTP is slow to synchronize on VSD, the ZooKeeper process might fail to start. Monit
recovers the ZooKeeper process by restarting it automatically after 10 minutes.
• [VSD-18698] When a dual IPv6 host VPort is resolved, the default name for the host interface is “missing
information”, and the system does not allow modifying it. Workaround: First modify the IPv6 address and
then the host interface name.


• [VSD-18819] Creation of Service Chaining Domain Link type after VSD upgrade is not pushed down to the
VSD. This issue is resolved by the audit.
• [VSD-18858] When all three VSD nodes are rebooted and the MySQL cluster requires bootstrapping, the Monit
output on the bootstrapped node might erroneously display the jboss-status as Execution failed. If
this occurs, run monit reload to find out if this was a false status.
• [VSD-19394] When Monit shows EJBCA status as failed, JBOSS status also shows as failed. However, your
JBOSS might still be healthy. Workaround: Log in to the VSD GUI to verify the JBOSS status.
• [VSD-19513] “DHCP Behavior” setting changes on the domain are not supported for existing subnets. Only
new subnets created after changing the DHCP Behavior will use the new behavior.
• [VSD-19518] The VSD does not enforce minimum TCA duration. Workaround: Set a TCA duration of at
least 1 minute.
• [VSD-19840] VSD HA Deployment: If keyserver-status shows “Status ok” while infinispan-cluster-status
shows “Not monitored” or “Status failed”, keyserver-status is reported incorrectly. Workaround: Run monit
stop infinispan on each VSD node, fix the Infinispan issues, and bring up Infinispan on all the VSD nodes
by running monit start -g vsd-common on each node. When infinispan-cluster-status shows “Status ok”,
keyserver-status will also be OK.
• [VSD-20123] For VSD HA deployment, if the Infinispan process is stopped or fails on two nodes, for
the Infinispan cluster to join properly, Infinispan on the third node should be stopped using monit stop
infinispan. Restart the Infinispan cluster using monit -g vsd-common start on all three nodes.
• [VSD-20209] After updating the password, the VSD GUI remains at the Oops page instead of returning the user
to the login page. Workaround: Click the “Logout” button to bring up the login page.
• [VSD-20237] Under very rare circumstances some ingress or egress ACLs may be missing on the VSC even
when they are present on the VSD. Workaround: Toggle lock/unlock on the VSD Architect (the GUI).
• [VSD-20297] When two Infinispan processes are stopped ungracefully, the command monit stop
infinispan must be run on all nodes. Then confirm that processes are stopped by running ps -ef |
grep infinispan. Finally, run monit start infinispan-status on all nodes.
• [VSD-20300] During a reboot of the VSD, NTPD can take 15 minutes or more to sync on its own. Workaround:
Restart the NTPD service with systemctl restart ntpd to reduce the sync delay.
• [VSD-20310] Because Elasticsearch log rotations are date-based and not size-based, it is necessary to check
/var/log/elasticsearch/ for large log files and remove them periodically to prevent the node from running out of
disk space.
• [VSD-20353] In a VSD cluster deployment, if one VSD node goes down or gets disconnected from the cluster, it
is expected that the Infinispan service on the other cluster nodes stays up and that its Monit status shows “Status
ok”. However, a brief Infinispan outage is sometimes observed on the remaining cluster nodes: Monit status
might show “Status failed” for a small number of Monit cycles.
• [VSD-20956] Under the local csproot user, when LDAP is enabled and “Synchronize Users and Groups from
LDAP” is disabled, groups and users can be added or removed, but groups cannot be configured.
• [VSD-21487] If the standby slave is replicating and lagging behind, any event on the other two nodes
(apart from the standby slave node) may cause a mysql-status failure on them. The status returns to OK as
soon as the slave catches up on replication.
• [VSD-21602] Spinning icon does not maintain the same position as the reload button; however it does not impact
backend function.
• [VSD-22759] When a list of over 100 objects is presented in the VSD Architect (e.g., gateway selection
for vPorts), only 100 are displayed. Filtering for the desired object via the search field in the pop-up
window will retrieve the object.


• [VSD-23341] When performing a fresh install of standalone Elasticsearch, the vsd-es-standalone.sh script gives
an incorrect error message “index_not_found_exception”. This error message should be ignored.
• [VSD-23373] If it is not possible to delete TCA on the VSD UI, it might be because of large numbers of alarms
associated with it. Contact support for assistance in deleting the excess alarms. Then go to VSD UI and delete
that TCA.
• [VSD-24366] In some scenarios when large numbers of BGP IPv6 peers are created in a very short time,
approximately 40 peers will come up immediately, and the rest of them will come up after the audit
interval. Workaround: Either create peers in a paced manner, or perform a controller shutdown/no shutdown
to bring all peers up.
• [VSD-24472] When a dual stack subnet is created on the VSD, and the default IPv6 subnet address is changed
to a custom value, VSD auto allocates an invalid IPv6 gateway address. The gateway address has to be assigned
manually to a valid IP.
• [VSD-24504] When passing multiple -e options to vsd-es-cluster-config.sh, only the last one is taken into
consideration.
• [VSD-24680] Users will not be able to use the VSS Flow Explorer to add ICMP flows to a Virtual Firewall
Rule.
• [VSD-24694] Setting stateful entry option as a part of “Add to Virtual Firewall Rule” workflow in VSS Flow
Explorer does not update the virtual firewall rule configuration accordingly.
• [VSD-24700] VSS Visualization for L2 domains is not supported.
• [VSD-25646] After enabling Statistics on VSD and before a first statistics query is performed by the VSD,
the VSD might raise an alarm “ElasticSearchHealthCheckFailed” as well as a “NoNodeAvailableException”
error in vsdserver.log. Workaround: Opening any statistics view from the VSD GUI should clear the alarm
automatically.

6.5 VSC and 7850 VSG/VSA

6.5.1 BGP

• [112610-MI] TCP sessions may flap in a scaled setup when the show system connections command is issued
with environment no more and there are more than 5k BGP peer sessions.
• [121246-MI] Changing the BGP router-id value in a base or VPRN configuration will immediately cause a flap
of all BGP neighbors that are part of that instance.
• [140913-MI] The route preference is not always correctly compared between a BGP and a BGP-VPN route if
the same BGP-VPN route is imported by another VPRN on the same PE router with a modified route preference.
• [143041-MI] If there are routes in the BGP RIB-IN whose BGP next hop could be resolved through either
another BGP route or a less specific IGP route, when the bgp>next-hop-resolution>use-bgp-route command is
enabled or disabled, these routes' next hops may not be re-evaluated correctly.
• [SROS-10375] In certain scenarios, static routes being advertised through BGP-EVPN from the VSC might not
be withdrawn after the end-point hosting the next hop is removed from the system. Depending on the particular
user’s configuration and if the route is part of an ECMP set, partial traffic blackholing could occur.
• [SROS-13140] IPv6 traffic from VSG to 7750 SR fails when IPv6 in SR VPRN RVPLS interface (backhaul)
is toggled (ipv6, no ipv6, ipv6). Workaround: Clear BGP neighbor session (configure router bgp
group neighbor shutdown, then no shutdown) in 7750 and VSG.
• [SROS-13183] When a dynamic-service-profile is shutdown, any affected VPort with BGP PE-CE instance
configured cannot be internally deleted.


• [SROS-14861] When scaling BGP PE-CE across domains with multiple BGP CE peers in each domain using API
scripting, some BGP instances of the domains might not be activated, and the BGP neighbors go into the IDLE
state. Workaround: Add a sleep during provisioning of BGP CE peers, or reduce the scale of BGP peer
sessions.
• [SROS-x] The MP-BGP code for 3.0R4 has been updated to match the final values adopted in RFC5512. This
change affects the “tunnel encap extended community” which changes value from 0x0003 to 0x030c. The way
the customer’s VNI is advertised in the BGP route advertisements has changed to match the RFC. These changes
mean that from VSP 3.0.R4 onward, the Nuage VSP solution can interoperate - when using VXLAN as transport
- with Alcatel-Lucent’s 7750SR running release 12.0R8 or newer. However, an upgrade from 3.0.R3 to 3.0.R4
will have an impact because the two versions do not interoperate at the BGP level.
• [SROS-16348] On manually created EVPN services, changing the mirror configuration on a SAP from ingress
to egress or egress to ingress is not reflected in the mirroring functionality. Workaround: Remove configuration
first and add the change.
• [SROS-16621] When configuring manual L2 EVPN services, a static host MAC is not installed in the FDB if
it is configured before the BGP route-distinguisher and route-target. Workaround: shutdown/no shutdown the
affected SAP (VPort).
• [VRS-13340] When a BGP PE CE IPv6 neighbor is provisioned, the following harmless console error might be
displayed “openvswitch: netlink: Unexpected mask (mask=11108, allowed=3d9d05c)”.
• [VSD-23840] Although per Domain BGP AS is configurable, it is not supported. Global AS per Domain is
inherited from the provisioned Enterprise BGP AS.

6.5.2 CLI

• [SROS-1435] Issuing a debug>show>tools command that directs a large amount of output to the Serial Console
of the VSC VM can cause instability.
• [079185-MI] The system incorrectly allows an “admin save” operation initiated by a user to be aborted if another
user initiates another “admin save” from another session.
• [100089-MI] Special characters (“\s”, “\d”, “\w”) do not work with pipe/match functions.
• [126371-MI] When using the “file vi” command to edit files, there is a 1024 character limit on the amount of
text to be pasted correctly. Exceeding that limit will cause the pasted content to be overwritten.
• [SROS-13083] Prefix hunt option for IPv6: show router bgp routes evpn ipv6-prefix hunt prefix
<2001:2b6c:1fa3:53e9::/64>
• [SROS-13155] Executing show router <SvcId> arp displays the endpoints entry twice.
• [SROS-13272] The output of show qos ingress "Auto Sap Ing Qos Policy lag-10:2002"
does not display the updated DSCP to FC mapping as expected. Workaround: In VSD, first delete the DSCP
to FC mapping entry in the table, then add it again. The newly added entry displays the updated information.
• [SROS-13379] The output of show router bgp routes evpn bgp-mac does not show the IPv6
neighbour entries.

6.5.3 IS-IS

• [130612-MI] When IID TLV is enabled on an IS-IS instance, the router can form an adjacency with a router that
does not send IID TLV. This could lead to routing issues if another interface belonging to that instance forms
adjacencies on other instances. IID TLV should not be configured if non multi-instance-capable routers are part
of the same routing domain.


6.5.4 Management

• [064537-MI] The system may not correctly count the number of failed SNMPv3 authentication attempts in the
event-control log.
• [069819-MI] SNMP replay events may not function properly for replay functionality with multiple trap-targets
pointing to the same address (even if they belong to different trap-groups/logs). This issue does not affect replay
functionality with only one trap-target per trap-receiver address.
• [080594-MI] The system may not return a lexicographically higher OID than the requested OID in an SNMP
GET-NEXT operation when incorrect values are used. This behavior is seen in the tcpConnectionTable table.
• [083801-MI] After 497 days, any “Last Change” counter on the system will wrap around due to a 32-bit time-
stamp limitation. The “Last Oper Chg” value in the output of the “show router interface” command is one
example of such counter, but there are numerous other cases where this limitation applies.
• [97589-MI] Using an SNMP walk or GET-NEXT for a newly created SNMP view may cause a High-Availability
switchover. The workaround is to configure the default excluded OID trees for the new SNMP view, similar to
view “iso” when executing “info detail”.
• [124839-MI] When management connectivity is lost, the system may not log the SNMP trap-relay notification
associated with an IPv6 trap-target server and used to report the number of the first unsuccessfully trapped
event. This issue only affects the first IPv6 trap-target notification and only when the system loses management
connectivity.
• [SROS-16626] The same VXLAN tenant-id should not be configured in the underlay and in manually configured
EVPN/Overlay service. To avoid overlapping, a Virtual Network ID (VNID) range can be provisioned in VSD
settings.
• [VSD-23688] VRS crashes when the Syslog Destination Host field contains spaces or special characters other than comma-separated IP addresses. Workaround: Make sure there are no spaces or special characters; use only IP address(es) and comma(s).
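As an illustration of the expected field format, a valid value contains only comma-separated IP addresses. The check below is a sketch (the regex and addresses are ours, not from the product):

```shell
# Hypothetical pre-flight check for a Syslog Destination Host value:
# accept only comma-separated IPv4 addresses, no spaces or other characters.
value="192.0.2.10,192.0.2.11"
if echo "$value" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}(,([0-9]{1,3}\.){3}[0-9]{1,3})*$'; then
  echo "valid"
else
  echo "invalid"
fi
```

A value such as "192.0.2.10, 192.0.2.11" (note the space after the comma) fails this check and would trigger the crash described above.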

6.5.5 Routing

• [089371-MI] Policy-statement entry “from interface name” currently does not match any local interface. A
workaround is to combine a prefix-list with “from protocol direct”.
• [SROS-7173] For traffic arriving on ECMP interfaces at a VXLAN transit node, an outage of close to one second is seen after the standby reconciles.
• [SROS-12766] The VSC may lose its static-peer configuration after repeated openflow connection flaps.
Workaround: Reconfigure the static peer.
• [SROS-14684] Service SAPs are not removed from the VSG when a VPort which is down due to MAC move
loop detection is deleted from the VSD.
• [SROS-14871] Creating a host VPort with a MAC address that has earlier been dynamically recognized in a
bridge VPort does not update the L2 FDB entry.
• [SROS-15821] A static route configured from VSD cannot resolve a next-hop that has been learned via BGP
PE-CE.
• [SROS-15977] ECMP values of 32 and 64 are not supported. If the next-hop count for a BGP prefix or static route is 32/64, hashing will only work for 31/63 of the available next-hops.
• [SROS-16180] Domain Leak (Hub and Spoke) is not supported for Domains configured with underlay Id.
• [SROS-16366] Domain AS configuration is not supported on VSG/WBX. Global AS is taken from the Enter-
prise AS.

6.5.6 Services

• [163229] Although the ingress MAC table implies 512 IP interfaces, only 400 are allowed for IES VPRN
interfaces.
• [163881] Stats are not supported on q.* SAP.
• [SROS-7784] VSG with 100 VPorts on a LAG port: when the vswitch controller is flapped and a switchover is done, the following traces are seen:

    *B:nuage1-3# [036 m 04/13/14 19:49:44.328] 2:iomMsg-2:IOM:allocate_index Trying to allocate already allocated index 148 (nbits 4080) vxTaskEntry->_Z15iom_icc_rx_taskP9semaphore->_ZN8IOM_INFO16icc_rx_task_loopEP9semaphore->_ZN8IOM_INFO22process_one_icc_socketEj->_ZN7IOM_SVC12sap_listenerEP15tIccTransaction->_ZN11IOM_INDEXER14allocate_indexEjbb->tracePrintVRtr
    [036 m 04/13/14 19:49:44.330] 2:iomMsg-2:IOM:sap_listener Can't allocate index 148 for sap lag-20:5.0 already in use
    [036 m 04/13/14 19:49:44.331] 2:iomMsg-2:IOM:process_one_icc_socket Rejecting ICC transaction 10228 socket 17

• [SROS-12546] The trace message below appears when lag-98 goes down on the MCS node, which has a single-homed VPort synced onto it from the other MCS node:

    1:iomMsg-1:IOM:is_valid_mac_entry Stale entry with same cpmtag=2. TLS 1592604809 (TlsId 4) Entry - MAC 68:54:ed:00:cc:11 idx 26
    [016 m 02/04/17 06:33:41.259] 1:iomMsg-1:IOM:bind_mac_entry Attempt to bind over existing valid MAC entry! TLS 1592604809 MAC 68:54:ed:00:cc:11 Idx=29 cpmtag=2
    [016 m 02/04/17 06:33:41.260] 1:iomMsg-1:IOM:add_or_update_mac_entry TLS 1592604809 (TlsId 4) cannot create new MAC entry - MAC 68:54:ed:00:cc:11 idx 29
    [016 m 02/04/17 06:33:41.262] 1:iomMsg-1:IOM:process_one_icc_socket Rejecting ICC transaction 3795 socket 19

• [SROS-13157] For ARP routes learned on a multi-chassis bridge VPort, MCS synchronization of the ARP route will not take place between the MCS nodes when they go down and come up multiple times. Workaround: Reboot the MCS node on which the ARP route is learned.
• [SROS-13202] BGP PE-CE on VSG, peer-tracking and rapid withdrawal are not supported (internally equivalent
to configure service vprn bgp enable-peer-tracking and configure service vprn
bgp rapid-withdrawal).
• [SROS-13325, SROS-13325] In a multi-chassis setup, when the LAG-98 IC link between the MCS nodes goes
down, single homed host devices on the MCS node fail to install the fdb-MAC entry.
• [SROS-14425] On rare occasions, the SD card is not detected when inserted on a WBX to install the software.
Workaround: Power cycle the WBX (unplug the power cables from the PS and plug them in again) without the
SD card and select ONIE rescue mode. Wait 5 minutes, then insert the SD card. Type reboot from the ONIE
CLI.
• [SROS-16101] The following trace is sometimes seen when trying to modify/delete a SAP: “CON-
SOLE:CLI:cliFindExtraPeriod Cli message has extra period at end: SAP 1/1/12:1 does not exist.” This has
no operational impact.
• [SROS-16125] While configuring a customer within the vSwitch context used for manual EVPN services, the
fields contact, description and phone cannot be added.
• [SROS-16131] Dynamic QoS policy from a dynamically created VPort can erroneously be used for manually
created EVPN SAP.

6.5.7 System

• [064581-MI] When the password-aging option is enabled, the reference time is the time of the last boot and not
the current time. Password expiry will also be reset on every reboot.
• [098479-MI] A system that does not have a system IP address or a management IP address configured may not
be able to generate SNMP traps.
• [120649-MI] Copying a file to a TFTP destination sometimes prompts for a confirmation to overwrite the desti-
nation file on the TFTP server, even if that file does not exist.
• [135570-MI] When negative threshold values are configured for alarms and the last value sampled is negative,
the values are not properly displayed in “show system thresholds”.
• [VSD-23439] The Virtual Firewall Policies icon shows up for some enterprises when the virtual firewall rules
option is disabled under System Config.

6.6 VRS

• [156121] The DHCP Decline and Release message types are treated as DHCP request packets. ACK is sent
back with the resolved IP addresses. The DHCP Inform message type is also treated as DHCP request packet,
and instead of ACK being sent back with the resolved IP address, ACK is sent back with the IP address 0.0.0.0.
• [SROS-12079] When upgrading from 3.2Rx to 4.0Rx, the OpenFlow connection between VRS running 4.0Rx
and VSC still running 3.2Rx flaps continuously. Workaround: Finish upgrading without delay.
• [SROS-6995] When installing the VRS DKMS package on CentOS, the following messages are displayed. They do not impact the final installation of the module or its behavior:

    Building module:
    cleaning build area...(bad exit status: 2)

• [VRS-484] The following error message is sometimes seen when the Open vSwitch service is restarted:

    Killing vm-monitor (27834) with SIGKILL
    /usr/share/openvswitch/scripts/ovs-lib: line 571: kill: (27834) - No such process [FAILED]

  The message is wrong. There is no impact on functionality.
• [VRS-2625] FIP-based rate limiting should not be configured for any VPort that belongs to a domain linked to
a leakable domain, as this causes undesirable forwarding behavior on VRS.
• [VRS-2776] With some very specific types of configuration - network policies involving policy groups, virtual
IPs, and VIP to FIP associations - the traffic between two VMs connected to the same OVS is dropped instead
of being forwarded.
• [VRS-3660] The VRS Agent creates endpoint specific static routes for the vCenter API endpoint and for the
DNS Server. The next hop for those host static routes is the Default Gateway. Those static routes are created
even if the vCenter and DNS server are in the same subnet as the VRS-VM. As such, if connectivity to the
Default Gateway is lost, connectivity will be lost to the vCenter and DNS server even if they are in the same
subnet.
• [VRS-3795] SELinux profiles are not part of the OpenStack-Metadata Agent package. Temporary workaround
is to create them manually.
• [VRS-4585] Stateful ACLs - ICMP fragmented packets do not get matched on ICMP Stateful ACL.
• [VRS-4757] When vPort mirroring (VPM) and policy based mirroring (PBM) are both enabled for the same
mirror destination and traffic hits both, traffic stats for VPM should increase to the number of packets mirrored.
Instead, stats for both VPM and PBM are incremented.

• [VRS-4792] If an advanced forwarding ACL is configured with action “mirror” and the vPort which the traffic
is sourced from is also configured for vPort-level mirroring and both mirror actions point to the same mirror
destination, two copies of the mirrored packets might be received.
• [VRS-4819] In cases when VRS Revertive Behavior is not enabled on the VRS node, and connection between
the primary VSC and VSD goes down, Sysmon shows the Connection Status of the Primary VSC as RED on
VSD as expected and VRS chooses the Secondary VSC as Active, also as expected. However, if the VSD
connection between the Primary VSC and VSD comes back up again, the VRS still stays connected to the
Secondary VSC as Active, but the Primary VSC Connection Status on Sysmon on VSD keeps showing RED.
• [VRS-4892] OpenShift integration is currently not supported with the tunnel type GRE. Ensure that the tunnel
type is set to VXLAN before proceeding with Nuage & OpenShift installation.
• [VRS-4906] In some cases it has been observed that disabling stats on VSD nodes does not clean up the open threads to the statistics nodes (Elasticsearch VMs).
• [VRS-5685] For VRS, the configuration file /etc/default/openvswitch has an incomplete description for DHCP_RELAY_ADDRESS. When this field is specified in the config file, its IP address is used as the source IP address when relaying DHCP.
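A minimal sketch of the setting (the address shown is illustrative, not a documented default):

```shell
# /etc/default/openvswitch (excerpt)
# When set, this address is used as the source IP when relaying DHCP.
DHCP_RELAY_ADDRESS=192.0.2.1
```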
• [VRS-5985] When you update the IP address for the VRS-B interface, the VPort is sometimes resolved on the VSC with the new IP address but remains unresolved on the VRS. This issue only happens with VRS installations using the Nuage Open vSwitch kernel module (DKMS).
• [VRS-6015] When using routing to underlay for the VRS, each time you restart nuage-openvswitch, a
new IP rule will be added. The problem is that during the cleanup task, the IP rule does not get deleted.
• [VRS-6423] If a node has the VLAN name type set to VLAN_PLUS_VID_NO_PAD (so that a created VLAN is named vlan<vlan-id>; the command to set this is sudo vconfig set_name_type VLAN_PLUS_VID_NO_PAD), the script nuage-sw-gwcli.pl will fail.
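Before running nuage-sw-gwcli.pl, the name type can be checked and, if needed, reverted (a sketch; the legacy vconfig tool is assumed to be installed, and DEV_PLUS_VID, which yields names like eth0.5, is assumed here as a safe type to revert to):

```shell
# Show the current VLAN naming scheme (first lines of the 8021q config)
head -n 2 /proc/net/vlan/config

# Revert from the problematic VLAN_PLUS_VID_NO_PAD setting
sudo vconfig set_name_type DEV_PLUS_VID
```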
• [VRS-6510] VSS Flowstats collection does not account for the very first packet in the flow.
• [VRS-6909] By default, RHEL 7.2 (3.10.0.329 kernel) has different kernel modules for each of the transport drivers (i.e., vxlan-vport.ko, gre-vport.ko, geneve-vport.ko, etc.), all of which have dependencies on the base Open vSwitch kernel module (openvswitch.ko). The DKMS module shipped by Nuage (nuage-openvswitch-dkms.ko) cannot expose the symbols required by the transport kernel modules and therefore emits warning messages. These messages have no impact on functionality and will not cause any traffic or functional issues.
• [VRS-7150] When using FIP to underlay, TCP traffic to some VM FIPs may be disconnected after service
openvswitch restart.
• [VRS-7798] If an NSG is in controllerless mode with remote forwarding, and an NSG uplink, with NAT-T
probes enabled, restores at least one OF control session to the VSC, without restoring the corresponding DTLS
session, the NSG will not be able to forward traffic on this uplink. If the DTLS session is not established within
30 seconds after the OF control session is established, the NSG will revert to controllerless mode.
• [VRS-7818] For Kubernetes, in order to curl to a Service IP from a Pod IP, PAT to Underlay must be enabled on the domain from the VSD Architect UI.
• [VRS-8166] VRS can use large amounts of memory when ACL logging and statistics are enabled on a large number (thousands) of ACLs per VPort over a period of time. Workaround: Restart the ovs-vswitchd process to recover the memory.
• [VRS-8350] After some events (mostly seen after clearing VPorts and flapping XMPP connections), the ACL is not programmed on the VRS and stays in that state. Workaround: Use clear vswitch to fix this issue.
• [VRS-8403] VSC does not show IPv6 link local addresses of VMs; however, traffic originated by or destined
for IPv6 link local addresses of VMs within a subnet will be transmitted with no issue.

• [VRS-9142] In some scenarios, the egress ACL entries might have the wrong priority on the VRS (example: priority=4294967254). These ACLs are hit and traffic will be dropped. Workaround: Flap the OpenFlow session, after which the ACLs should have the correct priority.
• [VRS-9849] The VRS process can potentially use a large amount of memory when VSS is enabled and the VRS has a very high number of kernel flows, to the tune of 16K or higher. Workaround: Restart the ovs-vswitchd process.
• [VRS-10029] In Kubernetes or OpenShift, when a pod belonging to a Network Policy is deleted, the pod may
fail to restart. Workaround: (1) Delete the network policy; (2) Re-deploy the pod; (3) Re-create the network
policy.
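The three-step workaround above can be sketched with kubectl (the resource names and manifest paths are placeholders, not from these notes):

```shell
# 1. Delete the network policy the pod belongs to
kubectl delete networkpolicy my-policy -n my-namespace

# 2. Re-deploy the pod (delete it and apply its manifest again)
kubectl delete pod my-pod -n my-namespace
kubectl apply -f my-pod.yaml

# 3. Re-create the network policy
kubectl apply -f my-policy.yaml
```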
• [VRS-10053] If the nuage-openshift-monitor pod complains about PodFitsHostPorts, then delete that pod and
let daemonset redeploy the pod again.
• [VRS-10101] With small packet sizes at close to line rate on the AVRS-G, the control traffic (OpenFlow) to the VSC can be impacted, which can toggle the connection.
• [VRS-10611] When the AVRS virtual-accelerator service is restarted, the interfaces managed by the virtual-
accelerator lose their configuration and need to be reconfigured.
• [VRS-10667] When using AVRS/AVRS-G, traffic will not be forwarded correctly if Stateful/Reflexive ACLs
are used. Workaround: With AVRS, do not use TCP Reflexive/Stateful ACLs.
• [VRS-10843] There are two independent circumstances in which traffic may drop: (1) active TCP session on PG
expression ACL drops when the ACL table is modified with new ACL addition/deletion; (2) active TCP session
on PG expression ACL drops after vSwitch controller shutdown or XMPP goes down. Workaround: Restart
the TCP session after events such as any ACL deletion/addition/update or vSwitch shutdown/no shutdown.
• [VRS-10949] On VSP upgrade to 5.2 R1, if revertive behavior is enabled on VRS and when VRS switches back
to primary VSC (after primary VSC upgrade), egress ACLs are cleaned up and installed back again on VRS.
• [VRS-10953] If the OpenFlow connection to VSC bounces twice or more within 180 s of a VIP being resolved,
and the interval between the first and last bounces is 120-180 s, the VIP route is lost. Workaround: Delete the
VIP and add it again.
• [VRS-10993] When issuing the clear vswitch-controller vports command on the VSC with tunnel type GRE, the VPort is sometimes unresolved on the VRS but still shows as resolved on the VSC. Workaround: Issue the command again to restore the setup (a retry might be needed).
• [VRS-11096] VRS is not able to filter ICMPv6 traffic based on ICMP type and code. A new default ACL was introduced to fix DHCP for dual-stack VMs, but it allows all ICMPv6 traffic. It is not possible to disable the ICMPv6 allow rule.
• [VRS-11744] Using VPort mirroring with egress ACL match mirroring (or stateful ingress ACLs) results in
only VPort ingress traffic reaching the VPort mirror destination.
• [VRS-11766] After upgrade, sometimes alubr0 bridge goes missing on VRS. To recover, issue service
openvswitch restart.
• [VRS-11790] In cases when DNS resolution takes 21 seconds or more, the command ovs-appctl -t
nuage_perfd httping/dump-probe-stats hangs until a valid response is received for the probe.
• [VRS-12593] BFD session on VRS incorrectly shows transmit 1000 ms when 100 ms is configured. This issue
is only seen when the BFD session is down.
• [VRS-13331] VMs hosted on a VRS cannot ping the gateway for a remote subnet if that subnet and VM are located on an AVRS.
• [VRS-13716] If mirror rules are created on VPort (bidirectional), ingress and egress policy-based mirroring on
VRS and apply to the same packet on the same VPort, a packet which should be mirrored to all destinations
might be mirrored to only a subset of destinations.

• [VRS-13832] When a guest VM sends slightly oversized packets compared to the MTU, Open vSwitch drops
the last fragment.

6.7 VMware

• [VMware-211] When a VRS profile is configured in the VCIN, all the port groups used should have a unique
name in the vCenter.
• [VRS-3009] It is recommended that you have unique portgroup names for dvSwitch portgroups even if the
dvSwitch belongs to different data centers on the vCenter.
• [VRS-3397] If the MTU field under the General section in the Deployment Toolbox GUI is modified and the Reload Config button is clicked, the MTU will be modified only for eth1 on the VRS.
• [VRS-3440] VMware: If a user restarts nuage-openvswitch-switch on the VRS, the esxMonit service must be restarted manually.
• [VRS-11885] The AVRS for ESXi wrongly consumes regular VRS licenses instead of the AVRS license that it should be consuming.
• [VRS-11914] When AVRS is enabled, the hostname on the VRS Agent might still mention “VRS” instead of
“AVRS.”
• [VSD-11900] EAM: VMwareSDKSupport VRS deployment using EAM gets stuck if vCenter tries to clone an existing VRS from an existing hypervisor and that hypervisor becomes unreachable.
• [VSD-16151] The system checks every 5 minutes whether the vCenter connection is alive, but sometimes the connection goes down in between. Workaround: Connect manually.
• [VSD-18736] If the vCenter Integration Node is connecting to a vCenter because of a restart or an automatic
reconnection, and the connect button is clicked by a user in the UI, the user might get a ‘Job not found error’.
This has no impact on functionality.
• [VSD-19845] When the API is used to create a vCenter hypervisor in VCIN, the response to the POST request
might contain a null value for fields that are inherited. Workaround: After a POST request, do a GET request
for full, up-to-date information on a vCenter hypervisor in VCIN.
• [VSD-19541] In Internet Explorer, the Nuage Data form of the Nuage Metadata plugin for the vCenter Web
Client might only show a single entry in one or more of the dropdown fields, even when there are multiple
values available in VSD. Workaround: Use Google Chrome.
• [VSD-20891] When a vCenter Cluster is moved into scope in the vCenter Integration Node, an exception is
shown in the vsdserver.log. This exception can be safely ignored.
• [VSD-21022] In case a VRS Agent has been deployed and removed from an ESXi host, the Receiving Metrics
field in VCIN is not reset and can still show green in UI and as true in API.
• [VSD-21413] When a vCenter object is deleted in the vCenter Integration Node, an exception is shown in the
vsdserver.log. It is safe to ignore this exception.
• [VSD-21530] When metadata has been applied using the Metadata plugin for the vCenter Web Client and the Enterprise field of the VM is changed, the corresponding Domain field is cleared, but the Zone and Network fields are not. Workaround: Clear the fields manually.
• [VSD-21536] When metadata has been applied using the Metadata plugin for the vCenter Web Client and the
Zone of an interface is changed in the dropdown and then changed back to the original, the Network field is not
restored to its original value. Workaround: Reselect the original Network from the dropdown.
• [VSD-21537] After a period of time, a VM that is shut down in vCenter might appear as running in the VSD.

• [VSD-21650] When using the Metadata plugin for the vCenter Web Client, emptying the Domain field for an interface might not empty the Network field. Workaround: Manually empty the fields when emptying the metadata of an interface.
• [VSD-21949] When using the vCenter Web Client Metadata plugin and quickly logging in and out successively
multiple times, the plugin pages might be blank. Workaround: Reload the vCenter Web Client.
• [VSD-22545] When an interface which has metadata configured is removed from a VM and a new interface is
added, the vCenter Web Client Metadata plugin will show the old metadata for the new interface.
• [VSD-22547] When more than five interfaces are present on a VM, the vCenter Web Client Metadata plugin
might not show all the available policy groups or redirection targets in some of the drop downs. Workaround:
Refresh the Metadata plugin form.
• [VSD-23179] When an ESXi host is disconnected and connected again in vCenter, vCenter might indicate
the VRS no longer running under the ESXi Agent resource pool. This can cause the VRS monitoring and
redeployment policies to fail for that VRS or cluster. Workaround: Do not disconnect/connect hosts from a
cluster directly. Before disconnecting the host, move it out of the cluster.
• [VSD-23528] When a cluster is moved into scope on the vCenter Integration Node, an exception might be
observed in the VCIN log files. This exception can be safely ignored.
• [VRS-11953] If the Virtual Accelerator service is restarted on the AVRS Agent, the data interface will lose its
configuration. Workaround: After restarting the Virtual Accelerator, reconfigure the interface manually.
• [VSD-25260] In the vSphere Web Client Metadata plugin, the Policy Group and Redirection target drop downs
are sometimes empty. Workaround: Right click in the form and select ‘Refresh this frame’.
• [VSD-25600] When a URL is entered for the OVF location in the vCenter Integration Node that is reachable but
points to an invalid OVF, the vCenter Integration Node does not provide a proper error message and will instead
return an Internal Server error.

6.8 OpenStack

• [OpenStack-1721] The attributes “–router:external” and “–shared” cannot be used together in the neutron net-
update CLI command due to a bug in the upstream OpenStack Neutron code.
• [OpenStack-1754] Security Groups (SGs) cannot be associated with ports created in VSD-managed subnets.
An explicit check has been introduced to reject this SG association, which was silently ignored in the previous
releases.
• [OpenStack-1827] It is not possible to delete an SR-IOV switch port mapping on a physical port if a VM is bound to any of the SR-IOV VFs on the same port as the one being deleted.
• [OpenStack-2048] When deleting a VM from an Openstack node, the tap interface may not get removed from
AVRS.
• [OpenStack-2110] VIP addresses must not be used for spinning up VMs. If a VIP is used for this purpose, the VM fails to spin up. Workaround: Keep track of the VIPs used on L3 domains so as to avoid using them for VM deployment.
• [OpenStack-2126] An SRIOV port cannot be deleted from a running VM.
• [OpenStack-2145] If the X.254 address is used by a port in a subnet (with CIDR X/24) that is attached to a router, the router interface delete fails.
• [OPENSTACK-2153] On a system using Cavium NICs, the FP interface becomes inoperable if AVRS is
restarted.

• [OpenStack-2181] With Cavium NICs and VA 1.7, to send 1500-byte packets, setting the MTU of the NIC to 1500 plus the VXLAN overhead (~1600) causes packets to be dropped on the NIC. Workaround: An MTU of 9000 has been tested; the setup can also be made to work by setting the MTU to 1600.
• [OpenStack-2190] The Nuage metadata agent process cannot be stopped or restarted using the nuage-metadata-agent script or by stopping or restarting the Open vSwitch service alone. To restart the metadata agent, stop the Open vSwitch service, kill the metadata agent process, and start the Open vSwitch service. To stop the metadata agent, kill the metadata agent process. These steps apply to both RHEL 7.4 and Ubuntu 16.04.
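The restart sequence described above can be sketched as follows (the systemd unit name and pkill pattern are assumptions; adapt them to your installation):

```shell
# Restart the Nuage metadata agent (RHEL 7.4 shown; use the equivalent
# service commands on Ubuntu 16.04)
systemctl stop openvswitch        # stop the Open vSwitch service first
pkill -f nuage-metadata-agent     # kill the metadata agent process
systemctl start openvswitch       # start Open vSwitch again

# To stop the metadata agent only, kill its process:
pkill -f nuage-metadata-agent
```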
• [OpenStack-2241] In RHEL 7.5 the fast path interface used by the AVRS cannot be set to an MTU size greater
than 1500.
• [OPENSTACK-2254] (1) On an AVRS that uses a Cavium NIC, the datapath interface will fail when it receives
a payload with 2300 bytes MTU or greater. This is due to an issue with Cavium NICs. (2) On an AVRS that
uses a Mellanox NIC, AVRS cannot pass MTU traffic of 9190 bytes or greater.
• [OPENSTACK-2268] In some instances, jumbo frames will be dropped at the tap interface when VMs are
brought up through OpenStack Nova with the tap interface MTU greater than 1500 bytes. Workaround: Man-
ually set the MTU size of the tap interface: ip link set dev <tap-device> mtu <mtu>.

6.9 CloudStack

• [CLOUD-159] The Reset VPC function only reboots the Virtual Router and does not reset the ACLs, Load
Balancers, FIPs, etc., which would be the expected behavior.

6.10 OVSDB

• [SROS-9573] clear vswitch-controller vswitches might not bring the 3rd-party gateways back into the connected state.
• [VSD-11268] The search option in the VSD Architect for searching ports on a 3rd-party gateway will not work as expected. Workaround: Use the advanced search option.

6.11 Hyper-V

• [VRS-5007] When live migrating a Virtual Machine on Hyper-V from one host to another, there might be a stale port remaining in Open vSwitch on the source host. Additionally, a live migration on Hyper-V might cause a Hyper-V crash and reboot of the hypervisor. Workaround: Use cold migration.
• [VRS-6119] The Pause or Saved state of a VM is not recognised in Hyper-V.
• [VRS-6390] VM does not resolve when DefaultBridgeName is set to anything other than alubr0.
• [VRS-8338] When the Nuage VRS loses connection with the VSC and a new VM is booted, even after the connection to the VSC is restored, the VM is not recognised when doing a VM reset or VM rename. Workaround: Shut down the VM and start it again.
• [VRS-8735] For the Nuage integration with Hyper-V to work properly, the interface driver properties cannot be changed from their default values. Changing these interface driver properties causes the functionality to break. Workaround: Restart the NuageSVC service to fix it.
• [VRS-9188] If the Nuage vSwitch Extension is disabled and re-enabled, the VRS connection to VSC is not
re-established. Workaround: Restart NuageSVC service to fix it.

• [VRS-10326] In the ovs-vswitchd log files on Hyper-V, fatal errors related to the driver can be seen. These
errors are caused by the kernel stats because it cannot get the correct information. This does not impact func-
tionality.
• [VRS-10736] In the ovs-vsctl output of the Hyper-V VRS, some parameters are missing. This does not impact
functionality.
• [VRS-11347] When the NuageSvc service in Hyper-V is restarted, a small number of initial pings might be
dropped after the restart.
• [VRS-15810] When the NuageSvc service in Windows is restarted, it might fail because Open vSwitch remains in the starting state. Workaround: Kill the ovs-vswitchd.exe and NuageSvc.exe processes using the Task Manager or the PowerShell CLI, then restart the NuageSvc service using the Service Console. Another workaround is to reboot the Windows server.
• [VRS-15902] When live migrating a Virtual Machine on Hyper-V from one host to another, the VM might lose connectivity for 10 seconds.

6.12 SCVMM

• [SCVMM-72] Resync with SCVMM may not work with different domains.
• [SCVMM-73] Configuring FIP with SCVMM may not work.

6.13 Container Integration

• [VRS-11967] When the auto-scaling feature of the Nuage Kubernetes integration creates new subnets, these
subnets will not be removed when the number of containers is scaled down.
• [VRS-12324] When you change the nuageMon rest server port in the nodes file, the port in the daemonset yaml
file will not be updated. Workaround: Update the rest server port manually in the daemonset yaml file.
• [VRS-12254] The secondary VSC IP is mandatory in the deployment with OpenShift.
• [VRS-12556] The NUAGE_K8S_SERVICE_IPV4_SUBNET is hard-coded in an OpenShift deployment. Workaround: Change the value in /etc/nuage-node-config-daemonset.yaml and redeploy the node daemonset.
• [VRS-12612] The NUAGE_NETWORK_UPLINK_INTF is hard-coded in an OpenShift deployment. Workaround: Update /etc/nuage-node-config-daemonset.yaml and redeploy the node daemonset.
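For both hard-coded values above, the edit-and-redeploy workaround can be sketched as follows (oc commands assumed for an OpenShift cluster; use the kubectl equivalents on Kubernetes):

```shell
# Edit the hard-coded value (e.g. NUAGE_NETWORK_UPLINK_INTF or
# NUAGE_K8S_SERVICE_IPV4_SUBNET) in the daemonset definition
vi /etc/nuage-node-config-daemonset.yaml

# Remove and re-create the daemonset so the change takes effect
oc delete -f /etc/nuage-node-config-daemonset.yaml
oc create -f /etc/nuage-node-config-daemonset.yaml
```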
• [VRS-14843] When first creating a default deny ingress network policy for a namespace in Kubernetes, changing
that to a default allow ingress network policy will not delete the deny rule.
• [VRS-15904] When deploying a Kubernetes or OpenShift deployment with Nuage, the kube-dns pod is un-
reachable for any pod in a namespace other than the kube-system namespace because of a missing Egress Policy
rule. Workaround: Create an Egress Policy with priority 0 and an Egress Policy rule to allow all traffic from
the Internet Policy Group to any destination for any protocol.



CHAPTER

SEVEN

KNOWN LIMITATIONS

Please be aware that upgrade-related known limitations are listed in the Upgrade section of these release notes.
This section contains the following subsections:

• Known Limitations in Release 5.3.3 (page 102)


• Known Limitations in Release 5.3.2 U1 (page 103)
• Known Limitations in Release 5.3.2 (page 103)
• 7850 (page 104)
• CMS Integration Status (page 104)
• Static Routes (page 105)
• VSP (page 105)
• VRS/VRS-G Data Path (page 106)
• 7850 VSG/VSA Data Path (page 107)
• Hardware (page 110)
• RADIUS (page 110)
• TACACS+ (page 111)
• VSC and 7850 VSG/VSA (page 111)
– CLI (page 111)
– Management (page 112)
– Routing (page 112)
• SCVMM (page 113)
• TCP Authentication Extension (page 114)
• IS-IS (page 114)
• OSPF (page 114)
• BFD (page 114)
• BGP (page 114)
• VPRN/2547 (page 115)
• OpenStack (page 115)


– Multiple VSD-managed IPv4 Subnets on a Network (page 117)


• CloudStack (page 117)
• OpenShift (page 117)
• VMware (page 117)
• Hyper-V (page 118)
• 210 WBX (page 118)
• End-to-End QoS (page 118)
• VSS (page 119)

7.1 Known Limitations in Release 5.3.3

• [SROS-17473] In cases where an IPv6 address is configured on the interface of a VPRN, when the VPRN is shut down and brought back up (shutdown / no shutdown), IPv6 ND will only take place on that interface if BFD for IPv6 static routes is configured.
• [SROS-17563] On a pair of VSG/WBX nodes connected to a server (no MC-LAG), when a VIP is configured and resolved on node 1, the ARP table on node 2 is not updated with the new MAC address when node 1 reboots. Workaround: In the reboot scenario, the best solution is enabling “enable-peer-tracking” so that the BGP session goes down immediately.
• [SROS-17744] Toggling the BFD overlay flag on VSD for static-route next-hops may result in intermittent disruption of traffic and control-plane sessions, which resume immediately.
• [SROS-18017] If the MC-LAG MCS sync is broken for any reason, the access LAG on both nodes stays up, which might result in traffic black-holing because newly learned MAC/ARP entries are not synchronized between the nodes.
• [SROS-18080] If BGP is globally shut down after a reboot on VSG/WBX, the deterministic global hold timer still waits for the BGP route-sync timer, which cannot be started; hence, MC-LAG ports will be down until the global hold timer expires.
• [SROS-17932] In cases where the MC-LAG MCS connection is through LAG-98, if LAG-98 is shut down and brought back up on a device with access LAGs down, MACs will not be re-synced from the peer.
• [VSD-25000] When a VSD host internally fails over to a different MySQL read/write target, all users connected to that VSD GUI will be logged out. MySQL target failover is managed by the proxysql service in VSD; the current MySQL target can be read from the proxysql Monit status.
• [VSD-27025] The Nuage Data tab of the vCenter Web Client Metadata plugin does not allow the form to scroll on some occasions, which might prevent managing the metadata of VMs with multiple NICs. Workaround: If possible, make sure your screen resolution is large enough and give the plugin tab the maximum amount of space. Otherwise, remove the 5.3.3 plugin using the procedure documented in the VMware Integration guide, then deploy the 5.3.2 vSphere Web Client Metadata plugin. This can be done from a temporary 5.3.2 vCenter Integration Node which is only used to deploy the plugin. Once the plugin is deployed and confirmed working, the temporary 5.3.2 VCIN can be safely shut down and deleted. The vSphere Web Client Metadata plugin does not use VCIN after deployment.
• [VSD-25052] The vSphere Web Client metadata plugin does not function properly in a vSphere 6.7 environment
(vSphere 6.7 support is BETA in this release).
• [VSD-17882] In the Nuage metadata plugin for the vSphere Desktop client, if the static IP is the last field edited, the Apply button must be clicked twice (or click anywhere on the Nuage Data tab and then click Apply).
• [VRS-16630] On rare occasions, the eVDF Gateway in VSD will remain in the Device certificate is signed state, even though OpenShift has been successfully installed and VRS is deployed on the node. This
happens because Kubernetes or OpenShift sometimes deploys two pods for the same DaemonSet on a single node. As a result, a corrupted OVS DB file can be created, preventing the VRS DaemonSet from starting properly, which in turn prevents the eVDF Gateway from completing its full bootstrap. Workaround: Remove the stale OVS DB file by running rm /var/run/openvswitch/conf.db/conf.db on the affected node and delete the VRS pod on that node, allowing the DaemonSet to be redeployed.
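The cleanup order above can be sketched as a small helper. This is a hypothetical illustration, not part of the product: the function name is invented, the scratch path stands in for /var/run/openvswitch/conf.db/conf.db on the affected node, and the VRS pod deletion must still be done separately via the cluster CLI.

```python
from pathlib import Path
import tempfile

def remove_stale_ovs_db(db_path):
    """Remove the stale OVS DB file if it exists; return True if removed.

    After removing the file on the affected node, the VRS pod on that node
    must still be deleted so the DaemonSet redeploys a fresh instance.
    """
    p = Path(db_path)
    if p.exists():
        p.unlink()
        return True
    return False

# Exercise the helper against a scratch file instead of the real node path.
scratch = Path(tempfile.mkdtemp()) / "conf.db"
scratch.write_text("corrupted")
print(remove_stale_ovs_db(scratch))  # True: stale file removed
print(remove_stale_ovs_db(scratch))  # False: nothing left to remove
```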
• [VRS-16559] When changing the IP of an eVDF node, a reboot is required.
• [VRS-16557] If the eVDF instance does not contain any ports, the bootstrap agent puts the node in an unusable
state until the profile is updated and the bootstrap agent is restarted with a proper ZFB file containing ports.
Make sure to always add ports to the eVDF Instance or parent template.
• [VRS-16781] In TCP or UDP flows matching ACL Policy Rules containing L4 criteria, any fragmented packets
are dropped.
• [SCVMM-87] The Nuage plugin for SCVMM can sometimes use the wrong VM UUID when configuring the information for a VM with multiple network interfaces. Workaround: Delete the VM from VSD using the API and reapply the data using the SCVMM plugin.
• [SCVMM-86] When upgrading the Nuage plugin for SCVMM, SCVMM configures the wrong permissions on the folder containing the plugin in Windows. This can cause a failure to use the plugin in SCVMM. Workaround: Manually change the permissions of the nuageaddin folder, typically located in Virtual Machine manager\bin\addinpipeline\addin\scvmm_administrator. More information: https://support.microsoft.com/en-us/help/2785682/description-of-update-rollup-1-for-system-center-2012-service-pack-1

7.2 Known Limitations in Release 5.3.2 U1

• [SROS-17811] In some cases, when Open vSwitch is restarted on VRS, ARPs are not resolved on the bridge VPort until traffic initiated from another VPort resumes. This limitation does not cause a traffic outage.
• [VRS-15876] BFD is not supported on VRS/VRS-G. BFD is only supported on AVRS.
• [VRS-16044] Underlay-to-overlay traffic is dropped because the overlay static-route entry is missing in the nuage_routes table for underlay-supported domains.

7.3 Known Limitations in Release 5.3.2

• [SCVMM-7] Deletion of VMs from SCVMM sometimes results in stale objects in VSD. Workaround: Remove
the VMs from the add-in before deleting them from SCVMM.
• [SROS-17195] BGP PE-CE session advertises infrastructure AS and any other AS used in the underlay as part
of the AS path announced to a CE for overlay routes.
• [SROS-17697] On VSG/WBX, the vswitch init hold-timer and route-sync-timer do not show operational output.
• [SROS-17736] From VSC, show router ID bgp summary or show router ID bfd session
does not display BGP and BFD information provisioned on VRS. The VSC command tools vswitch ID
command OVS-command does not support all VRS commands. Workaround: Issue all BGP- and BFD-
related VRS OVS commands directly from VRS.
• [SROS-17613] On VSG/WBX with MC-LAG A/A, a shutdown of the vSwitch controller does not bring down the MC-LAG access ports, creating a potential blackhole because the access node is not informed.
• [VRS-8020] When a VM is created from a template in SCVMM and then started, it may not get activated initially. Workaround: Restart the VM.
• [VRS-13250] Traffic across local eBGP bridge VPort peers via a VRS-G does not work. Workaround: Enable
routing between hosts so that traffic does not transit the VRS-G.
• [VRS-13758] When % is used in the name of a VM in a VMware environment, the VM is advertised in the
platform with %25 in its name. Workaround: Do not use special characters in VM names.
• [VRS-14699] When a Hyper-V host is rebooted, sometimes it fails to sync with SCVMM, which causes VM
resolution to fail. Workaround: Refresh the VM in SCVMM or remove and re-add the metadata on the VM
using the plugin and stop and start the VM.
• [VRS-15348] On VRS, when BFD is enabled for the first time on an existing static route and the session comes up, the flow entries corresponding to that static route have their duration reset, without data plane impact.
• [VSD-24702 / SROS-17429] When the BGP flag is disabled and enabled on a VRS VPort, the peers are not
recreated on VSC.
• [VSD-24732] For Elasticsearch cluster extensions, all Elasticsearch nodes belonging to the cluster need to
be the same version. If the nodes are running different versions, they will not be able to form a cluster
and there is a possibility of data corruption. The Elasticsearch node versions can be confirmed by execut-
ing curl localhost:9200 or running rpm -qi elasticsearch | grep -i version on each
Elasticsearch node.
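The pre-extension version check above can be automated. The following sketch is illustrative only: node names and response bodies are invented, and in practice each body would come from running curl localhost:9200 on the node.

```python
import json

def versions_consistent(node_responses):
    """Given {node: JSON body of `curl localhost:9200` from each node},
    return (all_same, {node: version}) as a pre-extension sanity check."""
    versions = {
        node: json.loads(body)["version"]["number"]
        for node, body in node_responses.items()
    }
    return len(set(versions.values())) == 1, versions

# Illustrative responses; real bodies contain many more fields.
ok, versions = versions_consistent({
    "es-node-1": '{"version": {"number": "6.8.0"}}',
    "es-node-2": '{"version": {"number": "6.8.0"}}',
    "es-node-3": '{"version": {"number": "6.7.1"}}',
})
print(ok)  # False: es-node-3 must be aligned before the cluster can form
```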
• [VSD-24750] In a VSD "stats-out" deployment, the tca-daemon-status on the Statistics Nodes reports "Status ok" even when it fails to connect to the ActiveMQ cluster (e.g., the ActiveMQ cluster is down, or JBoss is down) and the TCA service is in a failing state. To capture TCA connection errors to ActiveMQ, check for error messages in tca_daemon.log (e.g., "Could not connect to broker URL"), located under the "/opt/vsd/tca_daemon/logs" directory.
• [VSD-24893] In a VSD cluster, "acquireLock" exceptions may be seen with high frequency in VSD log files (for example, in '/opt/vsd/jboss/standalone/log/vsdserver.log'). This type of exception, although raised as an error in the logs, does not affect VSD cluster system health and can be safely ignored.
• [VSD-25177] A VSD IPv4 installation will fail if the DNS server resolves the VSD hostname to both IPv4 and IPv6 addresses. Workaround: Configure only IPv4 addresses in the DNS server for the VSD hostname when doing an IPv4-only VSD installation.
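The dual-stack condition above can be checked before installation. This is a generic sketch, not a VSD tool: it classifies the address families a name resolves to, exercised here with loopback literals instead of a real VSD hostname.

```python
import socket

def resolved_families(hostname):
    """Return the set of address families the name resolves to."""
    return {info[0] for info in socket.getaddrinfo(hostname, None)}

def ipv4_only(hostname):
    """True when the answer contains IPv4 addresses only, which is what an
    IPv4-only VSD installation expects for its own hostname."""
    return resolved_families(hostname) == {socket.AF_INET}

print(ipv4_only("127.0.0.1"))  # True: IPv4-only answer
print(ipv4_only("::1"))        # False: IPv6 present in the answer
```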
• [VSD-25453] When a new 7x50 Netconf gateway is added to VSD, it synchronizes with the 7x50 configuration only if Netconf Manager is running. Workaround: Manually invoke Netconf synchronization.
The following known limitations were introduced in past releases.

7.4 7850

[DOC-159] While the 7850 VSG supports the creation of multiple SAPs (port+VLAN bindings) on the same service
(L2 domain), this behavior is not supported when the SAPs are being bound to the same subnet in an L3 domain.
Multiple port+VLAN combinations are supported on the same port as long as each VLAN on the port maps to a
different subnet in the same or different L3 domains.

7.5 CMS Integration Status

• No support for VMware vCloud Director


• No support for XEN hypervisor

7.6 Static Routes

[DOC-487] Static routes will not be matched against their next hop's underlying policy groups or zone. Specific macros need to be defined for the static route's destination CIDR to be used in ACLs. Also, static routes will not be matched against a particular zone's subnets or the subnets themselves, so zone-to-zone or subnet-to-subnet ACLs (or variations of these) will NOT cover a static route destination that falls within the zone's subnets. Workaround: Define a specific macro and refer to it explicitly in the ACL.

7.7 VSP

• IPv6 is not qualified for Active/Standby MC-LAG.


• IP resolution via DHCP is supported on VPorts of type VM and Host, not on Bridge VPorts.
• [VSD-1260] The password for the MySQL root user is not set during installation. To remedy this, immediately
after verifying successful installation of all components on all VMs, set the root password on every node.
• [VSD-4540] The RD/RT for a subnet in a public zone can be incorrectly set to values other than those that have
been allocated to that public zone.
• [VSD-6356] When one of the nodes hosting the ejabberd component of VSD experiences network connectivity issues, the node's ejabberd process might be disconnected from the XMPP cluster and unable to rejoin it automatically. The VSD service monitoring will detect and report this issue, indicating an ejabberd failure. The error message is shown in the details of the Monit UI: EJabberd: 2 cna users connected, expecting 3. To identify the node that has split off the cluster, find the node where the error message indicates the following: "EJabberd: 1 cna users connected, expecting 3." Restart the ejabberd or VSD service stack by running service vsd restart on the affected node to recover the XMPP cluster.
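The split-node identification above can be scripted against the Monit status lines. This is a hypothetical helper: node names and status strings are examples, and in practice the lines would be collected from each node's Monit UI.

```python
import re

MONIT_PATTERN = re.compile(
    r"EJabberd:\s*(\d+) cna users connected,\s*expecting\s*(\d+)")

def split_node(monit_details):
    """Given {node: ejabberd status line from the Monit UI}, return the node
    reporting the fewest connected cna users (the one split off the cluster)."""
    counts = {}
    for node, line in monit_details.items():
        m = MONIT_PATTERN.search(line)
        if m:
            counts[node] = int(m.group(1))
    return min(counts, key=counts.get)

affected = split_node({
    "vsd1": "EJabberd: 2 cna users connected, expecting 3",
    "vsd2": "EJabberd: 2 cna users connected, expecting 3",
    "vsd3": "EJabberd: 1 cna users connected, expecting 3",
})
print(affected)  # vsd3 -> restart ejabberd (or the VSD stack) on this node
```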
• [VSD-6892] The backhaul EVPN VNID, RT, and RD can be updated to any valid value regardless of the "allowed RT/RD VNID range" present in the System Configuration panel on VSD.
• [VSD-12075] The monit restart command is not supported as a CLI or as a GUI action. To restart any
processes or groups, use the commands specified in the VSP User Guide.
• [VSD-12136] The monit stop all or monit start all commands are not supported. To stop or start
any processes or groups, use the commands specified in the VSP User Guide.
• [VSD-12154] Under certain circumstances, Monit can give up on the process status query, so the status program does not reflect the actual process status. For example: the mediator process shows "Execution failed" while the mediator-status program shows "Status ok". Workaround: Perform a monit start on the status program to update the status, for example: monit start mediator-status.
• [VSD-12269] service vsd commands are no longer supported. To start or stop VSD processes, use the
monit commands specified in the VSP User Guide.
• [VSD-12457] No monit CLI command will reveal degraded status for any program. To see degraded status, use the Monit GUI.
• [VSD-12499] When the ejabberd process is stopped using monit stop ejabberd, another process called
epmd will continue to run. You can safely ignore this process.
• [VSD-12561] The VSD servers are no longer displayed in the monitoring console as in previous VSP releases.
As of Nuage VSP release 3.2.R5, Monit is used to give much more detailed information on VSD than was
previously available. For a full description of this feature and instructions on using it, see VSD: Monitoring
with Monit Service Manager.
• [VSD-12697] After setting the ejmode to allow, it cannot be changed back to clear. The supported changes
in the ejabberd server mode are fully described in the section Secure XMPP and OpenFlow Channels.
• [VSD-12892] The Monit log rotation configuration is now part of the VSD default installation. No additional
configuration is required.
• [VSD-13128] The monit command ignores parameters provided on the command line once it has parsed a valid
set.
• [VSD-13940] If a vPort mirror is not saved with the default option, "Mirror both", and the user then moves to other options and subsequently returns to the first, "Mirror both", the changes cannot be saved. Workaround: Reselect a mirror destination.
• [VSD-14488] Stateful ACLs: using both 'reflexive' and 'stateful' flags in the same API call, repeatedly with contrasting values, may lead to incorrect ACL computation.
• [VSD-16347] The VSG cannot match traffic based on an IPv6 prefix in an egress ACL; ingress ACLs work correctly. Therefore, whenever a user configures an egress ACL with an IPv6 prefix on a VSG VPort, the VSG will not match the traffic on that prefix.
• [VSD-18501] When DPI is not applicable, the Domain Self editor will be blank.
• [VSD-18606] Clicking on a subnet should load and display VPort and interface details. However, when a subnet
has only a single VPort with interface, the information is not displayed immediately. Workaround: Click the
expand button in the subnet node to view its VPorts and interfaces.
• [VSD-19011] When the subnets of an underlay-type domain are updated to dual stack, VSD does not allow
changing the underlay and address translation flags for domain and subnet.
• [VSD-19076] IP reservations are supported only for IPv4 addresses, not for IPv6 addresses.
• [VSD-21290] vsd-prepare-replication-master.sh is re-entrant only on the same host as it does
not re-create the certificates every time.
• [VSP-1117] KVM host for VRS-G: LRO should be turned OFF in NICs which are part of bond interfaces.
• [VSD-1050] LDAP Certificates must be imported on all cluster nodes manually for LDAP authentication to
work in a clustered environment. Although the LDAP certificate imported on the first node shows up on the UI
of second node, the certificate is not stored in the second node. Workaround: (1) Launch VSD Architect on
the second node. (2) With the organization selected, select LDAP from the Dashboard menu. (3) Scroll down,
select Accept All Certificates, and click Save. (4) Deselect Accept All Certificates and click Save again.
• [VSD-12283] Monit allows running multiple commands back-to-back but we strongly recommend using monit
summary to confirm the system status after each command, before running the next command.
• [VSD-12616] When a nonexistent site-ID is specified, the container is not spawned and no VSD alarm is trig-
gered.
• [VSD-19347] When using ACL commit rollback, the “policy not found” error might be seen upon hitting the
Apply button. However, the ACL change has actually been applied. No further action is needed.
• [VSD-19769] In the VSD Architect GUI, in the Networks > Design view, the connecting lines associating the
parent and child objects are not displayed correctly: they are superimposed on the objects themselves.
• [VSD-21660] After an Elasticsearch switchover, the watcher can take up to 1 hour to update its count. You can speed this up by executing "monit stop tca-daemon ; monit start tca-daemon-status" on your current active VSD cluster after the VSD cluster is fully up, as confirmed by the output of "monit summary".
• [VSD-23141] DHCP range addition and deletion are not supported for VCS subnets and should not be used.
These ranges are only usable on VNS subnets (with NSGs).

7.8 VRS/VRS-G Data Path

• [SROS-16241] If Dual VTEP on VRS-G is configured, data path does not interoperate with a domain stretched
out to a VSG/WBX.
• [VRS-216] The ARP generated by the OVS is flooded on all ports.
• [VRS-2863] Creating a static route whose next hop is a floating IP is not supported.
• [VRS-3023] If logging is enabled for egress ACLs and an ingressing packet is flooded, the log will report only
one hit (to the first port in the switch implementing the particular ACL entry) and subsequent hits will not be
reflected for other copies of the same ACL entry in other VPorts of the same domain in the same hypervisor.
• [VRS-4674] When port mirroring is enabled in BOTH directions with policy-based mirroring in the egress
direction on an ACL that is also active on the port, two packets are sent to the mirror destination instead of one.
Related issue: VRS-4675.
• [VRS-4675] When vPort mirroring (VPM) and policy based mirroring (PBM) are both enabled in the egress
direction, two packets will be sent to the mirror destination instead of one. Related issue: VRS-4674.
• [VRS-4685] When the datapath kernel timer is set to a value higher than the TCP timeout, the conntrack entries
are recreated after traffic is stopped and the TCP timeout kicks in. Workaround: configure a datapath timer value
lower than or the same as the TCP timeout value.
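The relation in this workaround can be captured as a trivial pre-deployment check. The helper name and the example values are invented for illustration.

```python
def datapath_timer_ok(datapath_timer_s, tcp_timeout_s):
    """Per VRS-4685, the datapath kernel timer must be lower than or equal
    to the TCP timeout, otherwise conntrack entries are recreated after
    traffic stops; returns True when the pair of values is safe."""
    return datapath_timer_s <= tcp_timeout_s

print(datapath_timer_ok(300, 432000))     # True: timer below TCP timeout
print(datapath_timer_ok(600000, 432000))  # False: conntrack entries would be recreated
```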
• [VRS-4762] An egress mirror action configured at a vPort level will not capture packets being sent out of that
vPort if those packets are caused by flooding (due to them being Broadcast, Unknown, or Multicast) and the
ingress port over which those packets were received also has an ingress mirroring action enabled.
• [VRS-4773] User Configurable Burst Value is not supported for FIP Rate Limiting.
• [VRS-5943] There is no forward compatibility support between VRS 4.0.R4 and VSC 4.0.R3 or 4.0.R2.
• [VRS-5602] (1) eth0 and eth1 interfaces should not be used as Uplink Underlay Interface. (2) When
moving the Uplink underlay interface from ethx to eth2, remove the namespace ‘fip’, i.e., the field NET-
WORK_NAMESPACE= should be blank in the /etc/default/openvswitch-switch file.
• [VRS-5999] The Linux kernel can crash if an unknown host responds to an ICMP echo request message from the said node. The issue appears to be related to the Open vSwitch bug http://openvswitch.org/pipermail/dev/2014-March/038062.html: Open vSwitch registers a gre_cisco_protocol but does not supply an err_handler with it. gre_cisco_err() calls the err_handler without an existence check, causing the kernel to crash.
• [VSD-14924] Stateful ACL: although modifying vPortInitStatefulTimer is allowed on VSD, it does not get programmed on VSC and VRS. The default value of 300 seconds takes effect.
• [VRS-4900] On VRS, when both controllers lose connectivity to the VSD, both of their roles become 'slave.'
• [VRS-5700] A non-DHCP address change on an uplink interface requires an openvswitch restart for the new
uplink IP address to take effect for the datapath.
• [VRS-11951] HTTP/HTTPS probing does not support redirected URLs; for probes that return a 3xx failure reason, enter the correct (final) URL for probing. For URLs that return 404 Not Found but are reachable through wget/curl: the request format of HTTP/HTTPS probing is based on the absolute path and does not account for web servers that expect a relative path.
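The probe rules above can be summarized in a small classifier. This is a hypothetical helper illustrating the documented behavior, not code from the product.

```python
def probe_verdict(status_code):
    """Classify an HTTP/HTTPS probe response per VRS-11951: redirects are
    not followed, so 3xx codes count as failures just like 4xx/5xx; only
    2xx means the probed target is up."""
    if 200 <= status_code < 300:
        return "up"
    if 300 <= status_code < 400:
        return "down: redirect not followed, probe the final URL instead"
    return "down"

print(probe_verdict(200))  # up
print(probe_verdict(301))  # down: redirect not followed, ...
print(probe_verdict(404))  # down (e.g., server expects a relative-path request)
```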

7.9 7850 VSG/VSA Data Path

• [SROS-4893] The hardware scheduler prioritizes out-of-profile high priority traffic over in-profile low priority
traffic.
• [SROS-5257] DSCP remarking of VXLAN traffic via network policy fails: (1) Default Egress policy + Default
Network Policy - VXLAN DSCP from network policy, No dot1p remarking for Transit or VXLAN, Transit IP,
No remarking (Since remarking is disabled in default egress) (2) Non-Default Egress policy with no remarking
+ Default Network Policy - Same as (1). (3) Non-Default Egress policy with remarking + Default Network
Policy - VXLAN DSCP from egress policy (overwrite of network value), dot1p from egress policy for VXLAN
and transit, DSCP from Egress from Transit IP. When dot1p remarking option is introduced, (3) will change
to: VXLAN DSCP from network policy, dot1p from egress policy for VXLAN and Transit, DSCP from Egress
from Transit IP.
• [SROS-5633] When an SFP [/ SFP+ / QSFP+] is inserted into a port, the transceiver information in show port
detail is not refreshed until the link comes up on the transceiver.
• [SROS-5775] QoS policy match criteria and IP security policy cannot be enforced against the IP fields of the
outer IP header of VXLAN-terminated traffic.
• [SROS-5851] Due to a hardware limitation, egress mirroring of traffic configured on an EVPN SAP will only
mirror the egress forwarded traffic; egress traffic that is flooded out of the SAP will not be mirrored.
• [SROS-6113] Due to a hardware limitation, egress SAP traffic is not counted for a routed VPLS service when
the traffic ingresses the service on another slot on the 7850 VSG/VSA.
• [SROS-6139] The dot1p from the original packet is seen on the VXLAN header when the egress port is tied to a non-default egress QoS policy with dot1p remarking disabled (176706). The dot1p could always be set to 0, but without re-marking, copying is an accepted behavior.
• [SROS-6227] During a Virtual Chassis switchover, collection of security policy statistics is suspended until the
switchover is complete. Statistics collection is suspended for up to 10 seconds.
• [SROS-6643] If both IP and MAC filters are configured on a VPLS SAP, any traffic that matches an entry in both
the IP and MAC filters is counted only in the IP filter statistics. It is NOT counted in the MAC filter statistics.
• [SROS-6729] Modified DSCP-to-FC values from QoS ingress policy 2 are not downloaded into the automatically created QoS ingress policy with trusted FC enabled: the default trusted values are the same as the default DSCP entries of ingress policy 2. This is for reference only: if ingress policy 2 changes its default values, the change is not reflected in the automatically created SAP policy.
• [SROS-6735] Mirrored traffic is always sent across a single path even if multiple ECMP paths exist. ERSPAN
traffic is not shared across multiple ECMP links.
• [SROS-6742] In a virtual chassis configuration it is possible to configure a different number of Virtual Fabric
Links (virtual chassis interconnect links) on the two peers in a pair. This is incorrect: both peers in a pair
should have the same number of VFLs. Incorrect configuration is possible because each chassis is configured
independently and has no knowledge of its virtual chassis peer.
• [SROS-6834] Frames with the following specific Ethertypes are not forwarded via the VPLS service: 8808
Ethernet flow control, and 888E EAP over LAN (IEEE 802.1X).
• [SROS-6953] Traffic that ingresses a port may (severely) affect the amount of traffic that is able to egress a port.
Due to the Network Processor Design, when any flooding or multicast replication occurs for a SAP or SDP, the
packet goes to the MMU before the split horizon check determines that the packet should not actually get sent
out. Therefore, the MMU bandwidth is being used up unnecessarily by packets that wind up getting dropped.
• [SROS-7060] If BUM traffic is received on SAP of a VPLS service with Virtual Chassis enabled, traffic is
replicated across the VFL links.
• [SROS-7089] If traffic falls below the Committed Information Rate (CIR), it is distributed across the queues
in a strict round-robin fashion. It is only when traffic rises above CIR that it is distributed across queues in a
Weighted Round-Robin (WRR) fashion. This is by design in the forwarding hardware.
• [SROS-7175] For traffic passing through a 7850 VSG/VSA node through a VPLS service where an ingress
LAG and egress LAG have unequal numbers of members, the hash for load-sharing for the egress LAG may be
unequal. The LAG hash to determine the egress LAG member for a given slot for the traffic is calculated on
ingress.
• [SROS-7257] After a Virtual Chassis master changeover, the EVPN MAC age time is not preserved and is reset
to 0.
• [SROS-7727] A MAC learned by a local EVPN SAP where the SAP is a LAG ages out based on the remote age
timer (default is 900 seconds) instead of the local age time for the EVPN (default is 300 seconds).
• [SROS-7853] For traffic egressing an SDP over a network IES LAG interface, learned L2 or L3 traffic will be
load-shared across the available links, but unknown/unlearned L2 or L3 traffic will only be sent along a single
member of the LAG.
• [SROS-8329] Loopback SAPs are a means to enable functionality that allows traffic leaving a VXLAN tunnel to be routed and allows routed traffic to enter a VXLAN tunnel. They do not necessarily serve as "regular" SAPs while providing this functionality. Therefore, the correctness of the statistics associated with these SAPs cannot be guaranteed. For the most part, the presence of loopback SAPs should be completely ignored.
• [SROS-10421] DSCP re-marking for VXLAN traffic egressing VSG VPorts is not supported.
• [SROS-11362] MC-LAG and virtual chassis are mutually exclusive. Do not attempt to configure virtual chassis
on a node running MC-LAGs as the configuration will fail to parse upon reboot.
• [SROS-11411] When using LACP fallback with a LAG that is already in LACP fallback mode, removing the UP port from the LAG will not result in the next-priority port being enabled. Only after the LAG exits and re-enters LACP fallback is the next port selected. Shutting down the whole LAG and then restoring it will also select the new port.
• [SROS-12069] GRT ECMP Failure with 3.2 - 4.0 BGP Interoperability: In 4.0R4 pre-Big Red Switch (in
backward-compatibility mode), we do not support ECMP for static-routes leaked into the hub-domain. We only
allow 1 static-route leaked into the hub-domain.
• [SROS-13342] When an MC-LAG node learns a MAC address on a dual-homed bridge VPort, the address is synced to the other MC-LAG node with MCS. Both have a cumulative aging timer. If the other MC-LAG MCS node goes down and comes back up again, the expiry timer of the learned MAC address is reset to zero.
• [SROS-13377] Neighbour discovery (ND) of an IPv6 host over a VXLAN tunnel does not work because there is a limitation on VSD whereby multicast ND packets received over a VXLAN tunnel are dropped. For example, when a VSG owns the destination EVPN in different VSGs, and that VSG attempts to send an ND to the destination through the VXLAN tunnel, the ND is dropped on the destination VSG. Workaround: Ensure the IPv6 hosts on the remote VSG already have the neighbour entry for the host (both link-local and global unicast).
• [SROS-13514] Provider DSCP in the outer IP header (VXLAN) cannot be used to classify traffic on Network
Ingress on the VSG. As a result, traffic is classified as BE and is queued accordingly. Dot1p in the provider
frame can be used for classification and will be queued correctly.
• [SROS-14119] Two separate VTEP IP addresses are not supported for third-party OVSDB hardware switches
configured as an MLAG.
• [SROS-14596] When there is a VIP external appliance owner change and VIP resolution changes to a different
hardware node, convergence is 4 minutes and 15 seconds. When VIP is in the same node convergence is in 15
seconds.
• [SROS-16195] Ingress MAC filter is applied on bridged SAPs in manually created EVPN VPLSs, however
dropped frames’ MACs are learnt in the VPLS database.
• [157631] Egress filters cannot be mirror sources.
• [161579] When the mirror source is LAG/port egress, an extra VLAN tag is present on the mirrored traffic.
• [161896 | SROS-2225] The 7850 VSG/VSA does not support fragmentation at an intermediate node.
• [163194] Null encapsulated SAPs in a VPLS service accept only untagged traffic.
• [163281] The egress packet count for SAP increases even though the traffic is actually ingressing through that
SAP.
• [164991] CPU-generated IP packets are not counted in the transmit statistics of the IP interface.
• [165720] Load-sharing across LAG and ECMP members is most evenly distributed when the number of mem-
bers for LAG and ECMP are a power of 2. If the number of members is not a power of 2, it is possible for traffic
to normalize to an uneven distribution.
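The power-of-2 effect above can be illustrated with simple arithmetic, assuming (as a sketch, not as the documented hardware design) a fixed power-of-2 set of hash buckets mapped round-robin onto the members.

```python
def member_load(num_buckets, num_members):
    """Distribute a fixed power-of-2 set of hash buckets round-robin over
    LAG/ECMP members and return the bucket count per member."""
    counts = [0] * num_members
    for bucket in range(num_buckets):
        counts[bucket % num_members] += 1
    return counts

print(member_load(8, 4))  # [2, 2, 2, 2] -> even with a power-of-2 member count
print(member_load(8, 3))  # [3, 3, 2]    -> uneven with 3 members
```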
• [166895] When the system attempts to learn more than 14000 ARP entries at a time, the exact number of entries
no longer appears in the ARP table. Instead the number of entries displayed is 3-5% less than expected.
• [167320] The system cannot qualify presence in ingress QoS policy on a VLAN tag.
• [168534] VSG does not support auto-negotiation for its ports operating in GigE mode. The speed of all con-
nected ports must be manually configured.
• [168771] If BUM traffic with uniformly varying L2 fields is being hashed over a LAG, sub-optimal hashing behavior might result. This is because the VSG data path's handling of BUM traffic hashing can point to the same outgoing link if the L2 fields do not vary significantly enough.
• [168898] ECMP does not work when there are multiple interfaces on the same port.
• [169041] Sending unknown unicast traffic causes the SAP stats on i/c SAP to increment for both ingress and
egress.
• [169044] Rate limiting applied to the egress rate limits the ingress traffic for the multicast traffic type.
• [169606] With multiple EVPN services on one SDP, traffic sent into one EVPN service is sent out to all of the EVPN services. The system only maintains stats per SDP, not per SDP binding.
• [169745] The egress packet count is incremented even if a packet does not egress an SAP or SDP.
• [170304] ISIS should shut down when the IOM rejects its routes. However, when the IOM has more than 7500
routes, ISIS remains up, and the show commands still display the routes although they are not written in the
IOM.
• [173324] Egress mirroring of VPLS SAPs does not work.

7.10 Hardware

• [SROS-5943] 1000BASE-TX copper small form-factor pluggable (SFP) transceivers that do not provide a loss
of signal (LOS) indication will be reported as link up when there is no cable plugged into the SFP.
• [156462] GigE operation is not supported on the 7850 VSG and VSA management Ethernet port. The manage-
ment Ethernet port supports 10BASE-T and 100BASE-TX operation.

7.11 RADIUS

• If the system IP address is not configured, RADIUS user authentication will not be attempted for in-band RA-
DIUS servers unless a source-address entry for RADIUS exists.
• The NAS IP address selected is that of the management interface for out-of-band RADIUS servers. For in-band
RADIUS servers if a source-address entry is configured, the source-address IP address is used as the NAS IP
address, otherwise the IP address of the system interface is used.
• SNMP access cannot be authorized for users by the RADIUS server. RADIUS can be used to authorize access
to a user by FTP, console or both.
• If the first server in the list cannot find a user, the server will reject the authentication attempt. In this case, the
router does not query the next server in the RADIUS server list and denies access. If multiple RADIUS servers
are used, the software assumes they all have the same user database.
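The server-list behavior above can be modeled to show why all RADIUS servers must share one user database. This is a hypothetical simulation, not product code; server entries are invented.

```python
def radius_authenticate(servers, user):
    """Model of the behavior above: only unreachable servers are skipped;
    the first reachable server that does not know the user rejects the
    attempt, and no further server is queried."""
    for srv in servers:
        if not srv["reachable"]:
            continue  # unreachable servers are the only ones skipped
        return user in srv["users"]  # a reject here ends the search
    return False  # no reachable server at all

servers = [
    {"reachable": True, "users": {"alice"}},
    {"reachable": True, "users": {"bob"}},  # never consulted for "bob"
]
print(radius_authenticate(servers, "alice"))  # True
print(radius_authenticate(servers, "bob"))    # False: denied by the first server
```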
• [013449] In defining RADIUS Vendor Specific Attributes (VSAs), the TiMetra-Default-Action parameter is
required even if the TiMetra-Cmd VSA is not used.

7.12 TACACS+

• If the TACACS+ start-stop option is enabled for accounting, every command will result in two commands in the
accounting log.
• [039392] If TACACS+ is first in the authentication order and a TACACS+ server is reachable, the user will be
authenticated for access. If the user is authenticated, the user can access the console and any rights assigned to
the default TACACS+ authenticated user template (“config>system>security>user-template tacplus_default”).
Unlike RADIUS, TACACS+ does not have fine granularity for authorization to define if the user has just console
or FTP access, but a default template is supported for all TACACS+ authenticated users. If TACACS+ is first
in the authentication order and the TACACS+ server is NOT reachable, authorization for console access for the
user is checked against the user’s local or RADIUS profile if configured. If the user is not authorized in the
local/RADIUS profile, the user is not allowed to access the box. Note that inconsistencies can arise depending
upon combinations of the local, RADIUS and TACACS+ configuration. For example, if the local profile restricts
the user to only FTP access, the authentication order is TACACS+ before local, the TACACS+ server is UP and
the TACACS+ default user template allows console access, an authenticated TACACS+ user will be able to log
into the console using the default user template because TACACS+ does NOT provide granularity in terms of
granting FTP or console access. If the TACACS+ server is DOWN, the user will be denied access to the console
as the local profile only authorizes FTP access.
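The console-access outcomes described for [039392] can be condensed into a small decision sketch. This is an illustrative model of the documented behavior, with invented function and parameter names.

```python
def console_access(tacacs_reachable, default_template_allows_console,
                   local_profile_allows_console):
    """Console-access outcome for authentication order tacplus-then-local,
    per the [039392] description: with the TACACS+ server up, the coarse
    default user template decides; with it down, the local/RADIUS profile
    decides."""
    if tacacs_reachable:
        # TACACS+ has no per-service granularity: the default user
        # template decides console access for any authenticated user.
        return default_template_allows_console
    # TACACS+ down: fall back to the local (or RADIUS) profile.
    return local_profile_allows_console

# The example from the text: local profile grants FTP only, while the
# default TACACS+ user template grants console access.
print(console_access(True, True, False))   # True: console via default template
print(console_access(False, True, False))  # False: local profile is FTP-only
```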

7.13 VSC and 7850 VSG/VSA

7.13.1 CLI

• [VSP-1085] When traceroute or tracepath commands are executed, the VRS will not be listed as a hop because it does not respond to ICMP TTL expired messages.
• [157860] The FQDN is limited to approximately 100 characters for the XMPP server name.
• [168916] There is no statistic information for traffic dropped in dynamic SAP.
• [018554] The CLI allows the user to specify a TFTP location as the destination for the “admin save” and “admin debug-save” commands, which will overwrite any existing file of the specified name.
• [032747] There is currently no ‘show’ command to show the current values of the password hash settings.
• [057198] The system does not prevent the user from configuring the IP address of its BGP peer on one of its router interfaces.
• [093998] Non-printable 7-bit ASCII characters (for example, French letters with accents) are not allowed inside
the various description fields. Configurations that do not comply may result in failed config “exec” in CLI and/or
during system bootup.
• Output modifiers (“| match” and “>”) are not supported in configuration files executed using the “exec” command
(scripts).
• Although the “http-download” CLI command is referenced in the Systems Basics Guide, it is not currently
supported.
• [SROS-10611] Static routes provisioned through VSD are advertised to BGP peers of the VSC with the same preference value (169) regardless of whether the VSC is the master or stand-by controller for the VRS hosting the next hop. This is different from the way VM/VPort routes are advertised, i.e., with different priorities for the active and stand-by controllers.

7.13.2 Management

• [SROS-8280] Packets sent to the CPU for MAC learning are also counted in the network interface egress statistics.
• [SROS-8899] A generated OVA can be deployed only via vCenter, because vApp parameters are not supported on standalone ESXi.
• [SROS-16355] Mirroring of egress SAPs only works for known L2-forwarded traffic, not for L2 BUM traffic or
L3-forwarded traffic.
• [SROS-16464] When the TCP session between the VSC and VSD is not gracefully shut down (e.g., in the case of a hard reboot of the VSC), the VSD will not show the VSC as Down in Sysmon until the TCP timeout occurs.
• Source address configuration applies only to the Base routing instance, and where applicable, to VPRN services.
As such, source address configuration does not apply to unsolicited packets sent out the management interface.
• [047122] The SSHv2 implementation does not support the RC5 cryptographic algorithm.
• [051129] After 497 days, system up-time will wrap around due to the standard RFC 1213 MIB-II 32-bit limit.
• [VSD-1585] On slower machines, on the VSD login page, Firefox frequently displays a warning about an
unresponsive script.
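The 497-day wrap in [051129] follows directly from the MIB-II TimeTicks definition, a 32-bit counter of hundredths of a second; the arithmetic can be checked in a few lines:

```python
# sysUpTime (RFC 1213 MIB-II) is a 32-bit TimeTicks counter that
# increments every hundredth of a second, so it wraps after:
wrap_seconds = 2**32 / 100       # about 42.9 million seconds
wrap_days = wrap_seconds / 86400  # about 497 days
```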

7.13.3 Routing

• [SROS-7262] After a switchover, the MAC age time suddenly increases, and differently for MACs learnt from a unicast stream versus a multicast stream.
• [SROS-9086] If traffic is sent from a VM or host to an endpoint on a bridge interface, an ARP request is generated since the IP-to-MAC resolution is not known. If the only rule that allows traffic is the one for the VPort group of the bridge interface, then until the ARP route is generated from the VRS, it is not known that the endpoint belongs to the VPort group. As a result, the ARP will not pass the ACL rule and the packet will be dropped. If the ARP entry is created first, then the ACL rule will allow the traffic.
• [SROS-16654] Prefixes learnt from the CE of a single-homed BGP PE-CE session are not installed on its MC-LAG peer.
• [017488] Setting a metric of zero in OSPF or IS-IS is not supported and causes the interface to fall back to the
“reference-bandwidth” computed value instead of setting the value to zero.
• [040147] Routes exported from one protocol to another are redistributed with only the first ECMP next-hop.
Therefore, if BGP routes having multiple next-hops are exported to a VPRN client, only one next-hop for the
route will be exported. The one chosen is the lowest IP address of the next-hop address list.
• [062663] A static route with a CPE connectivity target IP address which is part of the subnet of the static route
itself will not come up if there is no alternate route available in the routing table which resolves the target IP
address. This is because a static route can only be activated if the linked CPE session is up, and in this case the
CPE session can only come up if the static route itself is activated.
• [090244] When the applied export policy is changed in conjunction with an export-limit, it may not take effect
immediately without clearing the policy (no export/export), or in very few cases, toggling the administrative
state of the protocol.
• [090274] There is no warning trap sent after a clear export policy is issued when the export-limit is increased a
few times and clear export is performed.


• [079495] When export limit is reduced via the “export-limit” command, toggling the administrative state of the
protocol is required to remove all previously exported routes.
• [SROS-13387] Learned IPv6 neighbor entries on dual-homed VPorts are not advertised to the remote VSG. The issue is specific to dual-homed neighbor entries. Due to this, other VSGs that also own the EVPN of the dual-homed host cannot learn the neighbor entry for that dual-homed host.
• [SROS-8934] Do not use a VSG as a controller if it is used to attach a gateway for a Floating IP domain. If
a VSG is used as a gateway for a Floating IP domain, and acts as a controller for VMs which use Floating IP
addresses from the domain, traffic from the gateway to the Floating IP addresses will be dropped.
• [SROS-9529] ICMP packets destined to any of the router’s interface IPs will be sent to the CPM for processing regardless of any ingress ACLs blocking such ICMP packets. Packets are extracted for processing before the ACL is applied, although the ACL match counters will still be incremented for the extracted packets. Management Access Filters can be used to protect the CPM’s CPU if needed.
• [SROS-16411] For manual EVPN: (1) Even if there is a filter with a valid map attached under the SAP, the DSCP Map Valid field will always be 0. (2) ACL filter numbers and QoS filters are not reflected in the output of show vswitch-controller vports type bridge detail. (3) A summary with one bridge on EVPN 5 is as follows:

A:Dut-C>show>vswitch-controller# summary

===============================================================================
Virtual Switch Controller Summary
===============================================================================
Number of vswitches : 1 - (VSG - 1, VRS - 0, VRS_G - 0, NSG - 0,
: HW-VTEP - 0, NSG-BR - 0, VRS_B - 0)
Number of VMs : 0
Number of vports : 1 - (VM - 0, Host - 0, Bridge - 1)
Number of resolved vports : 1 - (VM - 0, Host - 0, Bridge - 1)
Number of l3 domains : 1
Number of subnets : 0
Number of l2 domains : 65503
Number of IP routes : 0 - (Subnet - 0, Host - 0, Static - 0)
Number of MAC routes : 0
Number of ARP routes : 0
Number of PBR routes : 0
Number of Containers : 0
===============================================================================
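The next-hop selection behavior described in [040147] above (only the numerically lowest ECMP next-hop is exported to the other protocol) can be sketched as follows; the helper name is ours, for illustration, not a product API:

```python
import ipaddress

def exported_next_hop(next_hops):
    """Per [040147], a route with multiple ECMP next-hops is exported
    to another protocol with only one next-hop: the numerically
    lowest address in the next-hop list."""
    return min(next_hops, key=ipaddress.ip_address)
```

Note that the comparison is numeric, so "10.0.0.2" is chosen over "10.0.0.30" even though a lexicographic sort would order them differently.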

7.14 SCVMM

• [SCVMM-14] Currently there is no way to remove a single NIC. Also, a VM reset is required to resolve IP
changes from Nuage. Since a VM reset is involved, removing a single NIC from Nuage VSD will affect traffic
on other connected NICs as well. Workaround: All the NICs must be removed together.
• [SCVMM-34] The plugin for SCVMM does not support VMs with Dynamic MAC allocation.
• [SCVMM-77] The plugin for SCVMM is unable to activate VMs with long names as this is a limitation of the
VSD.
• [SCVMM-78] The plugin for SCVMM can not activate VMs with names that contain special characters as these
are not supported by VSD.


7.15 TCP Authentication Extension

[057277] It is not possible to delete an authentication keychain if that keychain was recently removed from a BGP
neighbor while BGP was operationally down. BGP has to become operationally active before the keychain can be
deleted.

7.16 IS-IS

• [056527] A change in any IS-IS Multi-topology and/or level will cause the SPF to be run in all levels and/or
topologies.
• [085326] ECMP across multiple-instances is not supported. ECMP is per instance only. Only one route, the one
with the lowest instance ID, is installed.
• [085463] In a multi-instance IS-IS configuration, the same IS-IS prefix is not leaked to all instances via the
traditional Layer-1 and Layer-2 leaking.
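Issue [085326] reduces cross-instance route selection to a lowest-instance-ID pick; a toy model of that behavior (illustrative only, not product code):

```python
def installed_route(candidates):
    """Among copies of the same prefix learnt by multiple IS-IS
    instances, only the copy from the instance with the lowest
    instance ID is installed ([085326]). candidates is a list of
    (instance_id, next_hop) tuples."""
    return min(candidates, key=lambda route: route[0])
```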

7.17 OSPF

[091520] A router with more than one point-to-point adjacency to another router over links of equal metric may
compute the shortest-path tree over the incorrect link in the case of unidirectional link failures on the far-end router.

7.18 BFD

• [SROS-16264] BFD overlay is not supported for static routes where the next-hop resolves to MC-LAG VPorts.
To avoid issues, do not configure BFD in the CPE/compute attached to the MC-LAG VPort.

7.19 BGP

• [012074] If BGP transitions to the operationally disabled state, the “clear router bgp protocol” command will not
clear this state. The BGP protocol administrative state must be shutdown/no shutdown to clear this condition.
• [085198, 132818] If the BGP neighbor address is configured prior to configuring that same IP address on a
router interface, the configuration can be saved and loads properly with a warning message displayed. Also, the
peering shows up as idle. Workaround: Do not use the same IP address for a local router interface and a BGP
neighbor.
• [85601] After a CPM or CFM failover, BGP graceful restart will not work initially. It will start working after
the neighbor session is flapped and capability messages are exchanged.
• BGP Graceful Restart (GR) helper supports the IPv4 address family but the VPN IPv4 address family is not
supported.
• [SROS-13317] For the uplink subnet use case, since a FIP is assigned individually to the VMs in the domain,
there is no subnet route for the FIP subnet present in the uplink subnet’s VPRN. Only the VM routes (/32) are
present. For BGP PE-CE, advertising all the /32 routes by default to the CE is suppressed. Therefore, in this case
none of the VM routes (which are /32 from the FIP domain) are advertised to the CE. Since there is no subnet
route, the CE has no way to get the VM-specific routes. Although the /32 route suppression can be disabled via
the neighbor blob, that allows advertising other /32 routes as well. Furthermore, configuring a route policy to advertise /32 routes via prefix-lists does not work unless the ‘host-advertisement’ field in the neighbor blob is
set to ‘true’ to advertise /32 routes.
Workarounds:
1. Enable the host route (/32) advertisements by setting “<host-advertisement>true</host-advertisement>” in the neighbor blob.
2. Advertise a default route from the PE to the CE.
• [SROS-13333] Routing policy names with spaces are not supported on the VSD for the VSG BGP PE-CE.
Workaround: Use CamelCase or hyphens in routing policy names containing more than one word.
• [SROS-17105] When single-session peering over IPv6 BGP is enabled and both IPv4 and IPv6 address families are used, IPv4 routes that become unusable after BGP policy changes to the next hop are not withdrawn. This causes remote IPv4 traffic to be blackholed.
• [VRS-12287] BGP on VRS is not yet supported on Ubuntu platforms.
• [VRS-12522] Sticky ECMP is not supported for BGP PE-CE on VRS/VRS-G/AVRS.
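Workaround 1 for [SROS-13317] above amounts to setting one element in the neighbor blob. The following sketch uses only the Python standard library; the blob layout shown is a simplified assumption for illustration, not the full schema:

```python
import xml.etree.ElementTree as ET

def enable_host_advertisement(blob_xml):
    """Set <host-advertisement> to 'true' in a neighbor blob so that
    /32 host routes are advertised to the CE. The surrounding
    element structure here is illustrative only."""
    root = ET.fromstring(blob_xml)
    elem = root.find("host-advertisement")
    if elem is None:
        # Add the element if the blob does not carry it yet.
        elem = ET.SubElement(root, "host-advertisement")
    elem.text = "true"
    return ET.tostring(root, encoding="unicode")

# Simplified sample blob (assumed shape, for illustration).
blob = "<neighbor><host-advertisement>false</host-advertisement></neighbor>"
updated = enable_host_advertisement(blob)
```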

7.20 VPRN/2547

• [031055] The service operational state of a VPRN might be displayed incorrectly as Up during its configuration
while some mandatory parameters to bring it up have yet to be set.
• [034205] Each MP-BGP route has only one copy in the MP-BGP RIB, even if that route is used by multiple VRFs. Each MP-BGP route has system-wide BGP attributes, and these attributes (e.g., preference) cannot be set to different values in different VRFs by means of vrf-import policies.
• [055343] Executing a ping from a VPRN without a configured loopback address may fail with a “no route to
destination” error message despite there being a valid route in the routing table. The error message is misleading.
It should state that the reason for the failure is that there is no source address configured.

7.21 OpenStack

• [OpenStack-188] When a neutron net-update changes the network type to or from “shared”, the update will be
accepted, but the permissions will not be changed. To change the network type to/from shared, the network
must be recreated.
• [OpenStack-332] Although Nuage VSP does not permit the creation of VMs on a Floating IP subnet, this is not blocked by the Nuage OpenStack plugin. Thus, a VM can be attached to a network where external=True, but it will never resolve.
• [OpenStack-467] Deleting an Enterprise (netpartition) from OpenStack will not succeed if VSD-managed subnets exist on that Enterprise in VSD. The VSD user must delete the resources on the VSD.
• [OpenStack-512] If a tenant does not exist in Keystone, an invalid UUID can be passed as the tenant-id when creating Neutron objects. This matches OpenStack reference implementation behavior.
• [OpenStack-677] If a floating IP address is used before the external network is associated with a router, then
OpenStack attempts to use the .1 address for the floating IP. This is not permitted by VSD. Attach the external
network to the router before using the floating IP addresses.
• [OpenStack-1277] Port security can be disabled at the network or port level. When port-security-enabled is set to False at the network level, VM boot in such networks fails due to the upstream issue https://bugs.launchpad.net/nova/+bug/1554728. VMs with ports having reduced security can be launched by creating the network with port-security-enabled=true and then creating ports with port-security-enabled=false in that network.
• [OpenStack-1557, OpenStack-1613] Attachment of DHCP-disabled subnet to a router is not supported.
• [OpenStack-1597, VSD-17794] Redirect Target Support: The following scenarios are not permitted and the
operation is therefore blocked:
– An L2 domain (subnet) consisting of redirect target(s) cannot be attached to a router irrespective of whether
any VPort references the redirect target(s) or not.
– A subnet in an L3 domain (router) consisting of a redirect target cannot be detached from the L3 Domain
(router) if it has VPorts referencing the redirect target(s).
• [OpenStack-1776] A known issue in upstream LBaaSv2 https://bugs.launchpad.net/octavia/+bug/1495430 can
result in qlbaas namespaces not being correctly cleaned up in Newton and Mitaka. Contact your OpenStack
vendor for assistance resolving this issue.
• [OpenStack-1701] DHCP is not supported for IPv6 addresses. In the VSD-managed workflow, when a managed
L2 domain is created in VSD, an IP address from the same subnet must be specified for the DHCP server. By
default, the DHCP server IP address is the same as the gateway IP address. OpenStack, being unaware of this,
might allocate the VSD-reserved gateway/DHCP server IP address to a VM port. To avoid the conflict that
would result and prevent OpenStack from creating ports with the reserved IP address, the following steps must
be followed when an IPv6 subnet is created in OpenStack:
1. Define an allocation pool that excludes the reserved VSD DHCP Server address and
2. Specify no gateway in the subnet. The gateway must be explicitly disabled by any of the following means:
– Using CLI with the --no-gateway option
– Using Neutron API Python code with the gateway_ip=None
– Using Heat template with gateway_ip:null

Note: In the VSP REST API, the VSD DHCP IP address is labeled as “IPv6Gateway”.

Example:
For a VSD-managed L2 domain with:

IPv6 cidr = 2001:5f74:c4a5:b82e::/64
IPv6 gateway IP = 2001:5f74:c4a5:b82e::1

The OpenStack IPv6 subnet is created with the following:

cidr = 2001:5f74:c4a5:b82e::/64
NO gateway
allocation_pool
    start "2001:5f74:c4a5:b82e::2"
    end "2001:5f74:c4a5:b82e:ffff:ffff:ffff:ffff"

• [OpenStack-1721] The attributes --router:external and --shared cannot be used together in the neutron net-update CLI command due to a bug in upstream OpenStack Neutron code.
• [OpenStack-1807] Nuage-metadata-agent is not supported with cloudbase-init.
• [VRS-6050] The VPNaaS Reference Implementation does not work with VSP 4.0.R4.
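The allocation pool shown in the [OpenStack-1701] example above can be derived mechanically from the CIDR and the VSD-reserved gateway/DHCP address. A minimal sketch using Python's ipaddress module; the helper name, and the assumption that the pool starts immediately after the reserved address, are ours:

```python
import ipaddress

def allocation_pool_excluding_reserved(cidr, reserved):
    """Return (start, end) for an OpenStack allocation pool that
    skips the VSD-reserved gateway/DHCP address. Assumes the pool
    should start immediately after the reserved address and run to
    the last address of the subnet."""
    net = ipaddress.ip_network(cidr)
    gw = ipaddress.ip_address(reserved)
    if gw not in net:
        raise ValueError("reserved address is not inside the subnet")
    # Pool starts right after the reserved address; ends at the
    # last address of the subnet.
    return str(gw + 1), str(net.broadcast_address)

start, end = allocation_pool_excluding_reserved(
    "2001:5f74:c4a5:b82e::/64", "2001:5f74:c4a5:b82e::1")
```

For the documented example this reproduces the pool boundaries shown above.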


7.21.1 Multiple VSD-managed IPv4 Subnets on a Network

• No SRIOV and SRIOV duplex support
• No support for external DHCP agent
• No support for dual stack IPv4/IPv6 subnets
• Single or multiple IPs on a Neutron port can be assigned only from a single subnet.

7.22 CloudStack

• [Cloud-956] When the networkOffering of a network with ConfigDrive is changed to a networkOffering with VR, all the VMs in that network must be restarted so that they can get user data and metadata. Also, if the configIso is mounted on the VM, the user data cannot be updated.

7.23 OpenShift

• [VRS-8644] In the case of an OpenShift HA install, Ansible removes the nuage-monitor-server HA proxy configuration from the /etc/haproxy/haproxy.cfg file on the load balancer node. Workaround: Add the configuration as follows at the end of the haproxy.cfg file on the load balancer node:

frontend nuage-monitor-server
bind *:9443
default_backend nuage-monitor-server
mode tcp
option tcplog

backend nuage-monitor-server
balance source
mode tcp
server master0 <master0 IP>:9443 check
server master1 <master1 IP>:9443 check

Then restart the service using service haproxy restart and check its status using service
haproxy status -l.

7.24 VMware

• [SROS-8921] Changes in VMware vApp options are taken into account only if “Update Nuage VSC configurations and reboot” is chosen as a boot option. If the options are not applied as expected, verify that they were updated correctly; VMware sometimes fails to update them (see the VMware logs).
• [VMware-13] In VRS for VMware, if both L2 domain metadata and L3 domain metadata are present, the VM
will be resolved by VSD based on the L2 domain metadata. The L3 domain metadata will be ignored. In the
vCenter setup, the extra config fields cannot be deleted. The workaround is to put null entries in the unneeded
fields.
• [VMware-16] The vApp metadata file is not accessible after a hypervisor reboot. The workaround is to shut
down the OVS VM and then power it on again.
• [VMware-68] Virtual machines under ESXi managed directly from vCenter using VM metadata as the attachment mechanism (such as using the Nuage vCenter Plugin) must use the first <N> VNICs as managed by Nuage. Any non-Nuage VNIC must be after this range. That is, all Nuage-managed VNICs must be contiguous in numbering and must be the first virtual network cards exposed to the VM.
• [VRS-7611] When changing the configuration of the NFS mount through the vCenter Integration Node for a
deployed VRS, a reboot is required for the change to be applied.
• [VRS-11710] When hot adding memory or CPUs to the VRS Agent, a few kernel messages can be observed
surrounding crash recovery and intel_rapl. These messages can be safely ignored.
• [VRS-12445] When a VRS is deployed on an ESXi host without using the vCenter Integration Node, the VRS might not recover properly when it loses its connection to the ESXi host. Workaround: It is strongly advised to use VCIN to deploy VRS Agents.
• [VSD-17339] When a VM is powered off from the Nuage Login or Nuage Data tab of the web client, the popup to confirm powering off the VM is hidden behind the tabs and thus becomes inaccessible (when Webmetadata is deployed on vCenter 6.5, however, this does not happen). Workaround: Navigate to a tab other than Nuage Login or Nuage Data before powering off a VM.
• [VSD-17610] Due to a bug in the vCenter Flex container, with a first-time login, the redirection to the VM
Nuage data page does not work, and no data is sent to the back end. Therefore, to apply metadata after logging
in for the first time, select another VM before returning to the first VM. When metadata is successfully applied,
there is a popup message saying “Metadata applied successfully.”
• [VSD-23181] When a host is disconnected or removed from vCenter while the VRS Agent is deployed, the
VRS Agent will be orphaned and will not be automatically removed from the host. A manual deletion of the
VRS will be required.
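The contiguity rule in [VMware-68] above is easy to check mechanically; a small illustrative helper (ours, not part of any Nuage tooling):

```python
def nuage_vnics_contiguous(vnic_is_nuage):
    """Validate the [VMware-68] ordering rule: all Nuage-managed
    VNICs must be a contiguous prefix of the VM's VNIC list, i.e.,
    every Nuage VNIC comes before any non-Nuage VNIC. Input is a
    list of booleans in VNIC order (True = Nuage-managed)."""
    seen_unmanaged = False
    for managed in vnic_is_nuage:
        if managed and seen_unmanaged:
            return False  # a Nuage VNIC appears after a non-Nuage one
        if not managed:
            seen_unmanaged = True
    return True
```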

7.25 Hyper-V

• [VRS-4356] vNIC hot plug is not supported in Hyper-V 2012 R2. Therefore, Nuage VSP does not support it either.
• [VRS-8743] If the uninstallation of the Hyper-V VRS fails, the OVSExt.sys driver will not be properly removed.
To remove the driver, follow the steps in the Troubleshooting section of the Hyper-V Integration guide.

7.26 210 WBX

• [SROS-16273] ACLs configured to match BFD packets on L2 SAPs are not supported.

7.27 End-to-End QoS

Due to a hardware limitation, VSGs that terminate VXLAN tunnels (acting as a VTEP) are unable to classify traffic on network ingress using the DSCP markings in the provider header. To preserve end-to-end QoS, we recommend using VLAN tags along with dot1p marking and classification on network egress and ingress respectively. Note that a VLAN tag of 0 results in no VLAN header being added, which loses the dot1p marking. A non-zero VLAN tag is required to leverage the dot1p field.
Example QoS Configuration Using dot1p:

description "QoS policy using dot1p"
ingress
    dot1p 1 fc be profile out
exit
egress
    fc be
        dot1p-out-profile 6
    exit
exit

7.28 VSS

• [VSD-24315] Flow Explorer shows flows with packets = 0. These are TCP control packets, i.e., packets with TCP handshake connection state information (SYN, SYN-ACK, FIN).

