RELEASE 2
JULY 2018
Table of Contents
Purpose of This Guide
Audience
Related Documentation
Introduction
Overview
Design Models
Summary
Purpose of This Guide
This guide:
• Provides architectural guidance for solution architects and engineers who are familiar with the next-generation
firewall but not ACI. It links the technical aspects of the ACI and Palo Alto Networks solutions together before
exploring the technical design models and preparing you for the deployment guide. Use this guide as a roadmap
for architectural discussions between Palo Alto Networks and your organization.
• Must be read prior to using the Deployment Guide for Cisco ACI. This reference architecture guide helps you to
understand ACI configuration concepts and to properly evaluate decision criteria in the deployment guide.
AUDIENCE
This guide is written for technical readers, including system architects and design engineers, who want to deploy the
Palo Alto Networks Security Operating Platform within a public or private cloud datacenter infrastructure. It assumes
the reader is familiar with the basic concepts of applications, networking, virtualization, security, and high availability
and that the reader possesses a basic understanding of network and data center architectures.
RELATED DOCUMENTATION
The following documents support this guide:
• Palo Alto Networks Security Operating Platform Overview—Introduces the various components of the Security
Operating Platform and describes the roles they can serve in various designs.
• Deployment Guide for Cisco ACI—Details deployment scenarios and step-by-step guidance for Cisco ACI-based
private data center environments. If you are unable to access the URL for the deployment guide, ask your ac-
count team to assist you.
Introduction
When architecting a data center, you must integrate the security policies for application data access and control with
the major building blocks standard in all data center environments: compute, storage, and network. These elements are
combined in various quantities and topologies in order to form a data center infrastructure that provides applications
and data processing to meet business requirements. This guide focuses on integrating Palo Alto Networks security
platforms with the Cisco Application Centric Infrastructure.
The configuration and administration of the elements in most data center environments require careful coordination
between different groups in order to provide network connectivity and storage resources before application deploy-
ment. Network security policy configuration (changes to align with business requirements) is no longer an optional part
of the data center operation, but it can create friction between teams. The coordination of duties can delay applica-
tion delivery and business agility. One approach organizations have taken to minimize coordination bottlenecks is to
present all network connectivity to all compute resources, often by trunking all available VLANs to every server, and
associating application network connectivity at the time of application creation. This model trusts that applications are
deployed on the correct network segment and provides no visibility or separation within network segments. Applying
security policy in this highly dynamic environment can be challenging when virtualized servers move around the data
center. The ideal goal is to permit network, storage, and security teams to define independent policies that dynamically
link to applications, and these characteristics remain associated with applications throughout their lifecycle. This model
permits rapid provisioning and management of applications without prior coordination of network, storage, and security
teams. High-level policies are maintained when existing servers move throughout the data center or when new servers
are created. In this model, web servers can communicate with applications servers, and application servers can commu-
nicate with database servers.
Software-defined data center solutions address many of the challenges of the dynamic compute, storage, and network
environment; however, the security capabilities are limited to Layer 2 through Layer 4 traffic control. The Palo Alto
Networks next-generation firewalls can be easily integrated into a dynamic data center environment to secure traffic at
the application layer. The firewalls and the Panorama™ management system have a rich application programming interface (API)
cation layer. The firewalls and the Panorama™ management system have a rich application programming interface (API)
that allows them to participate in the overall data center orchestration and automation. Furthermore, the firewalls and
Panorama can then link into the Security Operating Platform to deliver advanced security to the data center and to the
overall enterprise network.
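As a concrete illustration of that API, the sketch below builds PAN-OS XML API request URLs for generating an API key and running an operational command. This is a minimal, hedged example: the hostname, username, and password are hypothetical placeholders, and no request is actually sent.

```python
from urllib.parse import urlencode

def keygen_url(host, user, password):
    # Request an API key: GET https://<host>/api/?type=keygen&user=...&password=...
    return f"https://{host}/api/?" + urlencode(
        {"type": "keygen", "user": user, "password": password})

def op_url(host, api_key, cmd_xml):
    # Run an operational command, e.g. "<show><system><info/></system></show>"
    return f"https://{host}/api/?" + urlencode(
        {"type": "op", "cmd": cmd_xml, "key": api_key})
```

An orchestration system would issue the keygen request once, store the returned key, and reuse it for subsequent configuration and operational calls.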
OVERVIEW
ACI is a software-defined networking data center architecture that uses an application-centric policy model to enable
rapid application deployment. ACI data center infrastructure is deployed in a spine-leaf topology and implemented on
Cisco Nexus 9000 series switches.
The ACI architecture is different in many ways from a traditional network design. One example of this difference is
where endpoints connect to the fabric. In the ACI’s spine-leaf topology, spine switches are used only for traffic travers-
ing between leaf switches. All endpoints (servers, storage, firewalls, routers, etc.) connect to the ACI fabric at a leaf
switch. Because all endpoints (compute, firewalls, routers, etc.) connect at the same layer, leaf switches are commonly
described based on the endpoints that attach to them.
• Border leaf—The border-leaf switches provide Layer 2 or Layer 3 external connectivity to outside networks. The
border leaf supports routing protocols to exchange routes with external routers, and it also applies and enforces
policies for traffic between internal and external endpoints.
• Service leaf—The service-leaf switches connect to Layer 4–7 service appliances, such as firewalls and load
balancers. The connectivity between the service leaf and the service appliance can be Layer 2 or Layer 3, de-
pending on design scenarios.
• Compute leaf—The compute leaf switches connect to compute systems. The compute leaf supports individual
port, port channel, and vPC interfaces, based on the nature and requirements of the application or the system. It
also applies and enforces policies for traffic to and from local endpoints.
• IP storage leaf—The storage-leaf switches connect to IP storage systems. They support individual port, port
channel, and vPC interfaces based on the nature and requirements of the application and the system. They also
apply and enforce policies for traffic to and from local endpoints.
Note
Unlike traditional data center architectures, ACI separates application requirements from the underlying details of the
network. This abstraction of application requirements and policy offers several advantages over traditional data center
architectures, including simpler automation and deployment of data center security.
The Cisco Application Policy Infrastructure Controller (APIC) is the central point of management for ACI data center
policy and infrastructure. The APIC provides a variety of management interface options including GUI, CLI, and RESTful
API. Regardless of the interface through which you define policy, after you define the policy on the APIC, it dynamically
implements and distributes those policies to the ACI fabric. Although the APIC does not directly control data-plane
forwarding, its resiliency is critical to the data center, so it is implemented as a distributed cluster of controllers.
The Cisco APIC is also the point of integration to security and virtualization infrastructure services. ACI interacts with
security and virtualization services within the data center to coordinate information across the services in support of
the application policy. APIC can pull information from these services to define policy elements (for example, virtual ma-
chine information), as well as push configuration back into the services in order to implement a required configuration
in support of the policy, such as firewall rulesets or virtual distributed switch port-groups.
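The RESTful API mentioned above accepts JSON (or XML) bodies describing managed objects. The sketch below shows the shape of an APIC login request and a managed-object read URL; the controller address and credentials are hypothetical, and nothing here contacts a real APIC.

```python
APIC = "https://apic.example.net"  # hypothetical controller address

def login_body(user, pwd):
    # Body for POST {APIC}/api/aaaLogin.json; the response carries a session token
    return {"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}

def mo_url(dn):
    # Read any managed object by its distinguished name, e.g. uni/tn-common
    return f"{APIC}/api/mo/{dn}.json"
```

The same URL scheme serves both reads and writes: a GET on a distinguished name returns the object, and a POST with a JSON body creates or updates it.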
ACI supports multitenancy, and multiple communities can share a Cisco APIC and ACI fabric. Traffic, connectivity, and
policies from different tenants can share the same infrastructure without leakage of information across tenants, while
still being allowed access to shared resources if required.
Firewalls provide security to east-west traffic within the data center by inspecting and securing traffic between security
groups. In a traditional data center, typically the endpoint’s subnet defines its security group. In ACI, the policy defines
the security groups (called endpoint groups in ACI) regardless of their network connectivity or whether they are physical
or virtual. Separating the creation of security groups from the infrastructure into policy allows for a more flexible and
dynamic data center. ACI uses a zero-trust security model between endpoint groups. Connectivity between groups is
disallowed unless policy explicitly defines port/protocol rules to allow communication through the ACI built-in firewall
rules. Alternatively, ACI enables the insertion of security services into the traffic flow between endpoint groups.
Not only does ACI enable the insertion of security service, but as applications in the data center change, ACI enables
the security policies to change along with them. Automation of security policies is needed to support on-demand provi-
sioning, dynamic scaling of applications, and the handling of application mobility.
Networking
ACI is a fabric networking architecture that provides Layer 2 and Layer 3 forwarding by using non-traditional control-
plane and data-plane protocols. Although ACI has traditional networking constructs like VRFs, subnets, and default
router IP addresses, how ACI uses them can vary significantly from a traditional data center architecture.
In ACI, tenants, while not a network themselves, contain the network instances. Tenants can have private segmented
networks, or networking can be shared between tenants while still maintaining separate policies and services. Shared
networking typically is deployed in a special tenant called Tenant Common.
VRF
In ACI, logically segmented groups of Layer 3 subnets are implemented through virtual routing and forwarding (VRF)
instances. By default, the subnets within a VRF are reachable only from within the VRF. A tenant is not required to
contain a VRF, but in most deployments, each tenant has at least one. The most common use of multiple VRFs within a
tenant is for traffic segmentation or to manage IP address overlap. IP addresses cannot overlap within a VRF because,
when allowed by policy, all the endpoints within a VRF can communicate directly.
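In the APIC object model, a VRF is an fvCtx object nested under its fvTenant. The sketch below builds that JSON payload; the tenant and VRF names are hypothetical, and the body would be POSTed to /api/mo/uni.json on the controller.

```python
def tenant_vrf_body(tenant, vrf):
    # fvTenant containing an fvCtx (VRF); POST the body to /api/mo/uni.json
    return {"fvTenant": {
        "attributes": {"name": tenant},
        "children": [{"fvCtx": {"attributes": {"name": vrf}}}],
    }}
```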
VRFs contain bridge domains. A bridge domain provides Layer 2 forwarding and defines a Layer 2 boundary. ACI uses
endpoint learning along with the bridge domains to build a distributed mapping database for the locations of endpoint
MAC addresses. When receiving traffic, ACI uses the mapping database to determine the leaf switch for the traffic
destination without flooding broadcast traffic or relying on traditional networking loop-avoidance protocols such as
spanning-tree. Each leaf switch caches the information about local endpoints and remote endpoints that are part of
traffic flow with a local endpoint.
Although the VRF defines the Layer 3 domain, bridge domains contain the IP subnets. IP routing for a subnet is con-
figurable. When configured for IP routing, ACI configures the subnet’s IP address as an anycast gateway on every leaf
switch that has endpoints in the subnet. Having the default gateway on each leaf switch allows the ACI infrastructure
to efficiently forward traffic across the fabric. In addition, configuring IP routing enables the mapping database to learn
an endpoint's IP address in addition to its MAC address. When available, ACI can share the learned endpoint IP
addressing with service devices for use in their policies.
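A bridge domain with a routed subnet is expressed as an fvBD object that references its VRF and carries an fvSubnet child. The sketch below assembles that payload; the object names are hypothetical, and setting unicastRoute to "yes" reflects the anycast-gateway behavior described above.

```python
def bridge_domain_body(bd, vrf, gateway):
    # fvBD bound to its VRF (fvRsCtx) with a routed subnet (fvSubnet);
    # unicastRoute="yes" enables the anycast gateway on the relevant leaves
    return {"fvBD": {
        "attributes": {"name": bd, "unicastRoute": "yes"},
        "children": [
            {"fvRsCtx": {"attributes": {"tnFvCtxName": vrf}}},
            {"fvSubnet": {"attributes": {"ip": gateway}}},
        ],
    }}
```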
Segmentation Policy Components
Controlling and securing applications in the data center relies on the ability to segment the application tiers (web serv-
ers, database servers, etc.) and insert policy and security controls on the connectivity between them. ACI uses a zero-
trust security model, permitting only traffic between segments that have been explicitly allowed in the policy. In ACI,
you group endpoints that have shared policy requirements through endpoint groups (EPGs). EPGs typically represent
a tier or an application, but you can organize EPGs in any way that accurately groups the endpoints of an application
based on their traffic and security policy requirements.
Because most applications have more than one group of endpoints with unique policy requirements, defining an ap-
plication policy requires an additional level of policy above an EPG. Application profiles contain all the EPGs necessary to
describe the policy requirements of an application, as well as the relationships between them.
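An application profile with its EPGs maps to an fvAp object containing one fvAEPg per tier, each bound to a bridge domain. The sketch below builds that payload; the profile, tier, and bridge-domain names are hypothetical.

```python
def app_profile_body(ap_name, tiers, bd):
    # fvAp with one fvAEPg per tier; each EPG links to a bridge domain (fvRsBd)
    return {"fvAp": {
        "attributes": {"name": ap_name},
        "children": [
            {"fvAEPg": {
                "attributes": {"name": tier},
                "children": [{"fvRsBd": {"attributes": {"tnFvBDName": bd}}}],
            }}
            for tier in tiers
        ],
    }}
```

Grouping the tiers under one profile keeps the whole application's policy in a single object that can be created, cloned, or deleted as a unit.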
For an endpoint, whether bare metal or virtual, to communicate with anything else in the ACI data center, it must be
part of an EPG. ACI uses EPGs to define policy on which endpoints can communicate, but EPGs do not handle the
forwarding of traffic. For forwarding, each EPG relies on its associated bridge domain. An EPG can associate with only
one bridge domain, but a bridge domain can have multiple EPGs linked to it.
ACI has two default policies that control traffic within and between EPGs:
• Endpoints that are within the same EPG can communicate directly with each other.
• As part of its zero-trust model, ACI blocks traffic between EPGs. Endpoints that are in separate EPGs cannot
communicate with each other.
Allowing traffic between EPGs requires a contract. Filters are reusable rules that describe the traffic protocol along with
source and destination ports. Filters are similar to access control lists, but because ACI separates policy and forwarding,
they do not need IP address information. Contracts define the allowed traffic through a list of filters.
You apply contracts in a specific direction. Unlike traditional ACLs, ACI uses the consumer and provider EPGs to
describe directionality instead of IP source and destination. ACI abstracts the forwarding and infrastructure from the
policy by using EPGs instead of IP information. Although the terminology can be confusing, a few points make con-
tracts easier to understand:
• The consumer EPG is the source of the traffic, and the provider EPG is the destination.
• You do not configure contracts with an EPG. Instead, you apply the contract to the EPG. Keeping the EPG out
of the contract allows you to reuse the contract in multiple applications.
• Communication can only be initiated by the consumer unless the contract is set to apply in both directions.
• When a contract applies in both directions, the source and destination ports are the same for both directions
unless the reverse-filter-ports option is enabled.
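Applying the contract to the EPG rather than configuring it inside the contract can be sketched as two small relation objects posted to each EPG's distinguished name. The APIC address, EPG DNs, and contract name below are hypothetical.

```python
def consume_contract(apic, epg_dn, contract):
    # fvRsCons marks the EPG as the consumer (traffic source) of the contract
    return (apic + "/api/mo/" + epg_dn + ".json",
            {"fvRsCons": {"attributes": {"tnVzBrCPName": contract}}})

def provide_contract(apic, epg_dn, contract):
    # fvRsProv marks the EPG as the provider (traffic destination)
    return (apic + "/api/mo/" + epg_dn + ".json",
            {"fvRsProv": {"attributes": {"tnVzBrCPName": contract}}})
```

Because only the relation objects name the contract, the same contract can be attached to any number of EPG pairs.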
Global fabric access configuration defines how endpoints connect to EPGs. There are two fabric access configuration
components: VLAN pools and domains.
VLANs are used locally within a leaf switch to connect endpoints to the correct EPG. ACI uses VLANs only for this
limited functionality and does not use them to send traffic across the infrastructure. Instead, ACI uses VXLAN to carry
the traffic across the fabric.
Because VLANs are locally significant to a switch only, the APIC controls the VLAN to EPG mapping. VLAN pools define
a range of VLAN numbers that the APIC uses to associate an endpoint with an EPG. VLAN pools can be static or
dynamic. Generally, you use static pools for manually configured (bare-metal) endpoints. Use dynamic pools when
you want the APIC to dynamically allocate VLANs, such as when integrating to a hypervisor or performing managed
services insertion.
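A VLAN pool maps to an fvnsVlanInstP object with one or more encap blocks, and the static-versus-dynamic choice above is its allocMode attribute. The sketch below builds that payload with hypothetical names and ranges.

```python
def vlan_pool_body(name, start, end, alloc_mode="static"):
    # fvnsVlanInstP with one encap block; allocMode is "static" or "dynamic"
    return {"fvnsVlanInstP": {
        "attributes": {"name": name, "allocMode": alloc_mode},
        "children": [{"fvnsEncapBlk": {"attributes": {
            "from": f"vlan-{start}", "to": f"vlan-{end}"}}}],
    }}
```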
Defining which EPG an endpoint is a member of can be done either statically or dynamically and depends somewhat
on whether the endpoints are bare metal hosts or virtual machines. A domain defines how the APIC uses VLAN pools
to attach endpoints to an EPG. Domains act as the glue between the fabric access and endpoint group configuration.
The fabric operator creates the domains and the tenant administrators associate domains to endpoint groups. Domain
types include:
• Physical—Physical domains are used for bare metal endpoints or endpoints without a hypervisor.
• Virtual—ACI provides support for multiple-hypervisor deployments. It is possible to integrate ACI with multiple
virtual machine managers (VMMs) such as VMware vSphere, Microsoft SCVMM, and OpenStack.
• External routed—External routed domains are used for Layer 3 connections. For example, an external routed
domain could be used to connect a WAN router to the leaf switch.
When a physical domain is associated with an EPG, you can use that domain to statically assign a VLAN from the VLAN
pool and map that VLAN on a leaf-switch interface to an EPG. VLANs can be reused between switches for different
EPGs because VLANs are locally significant to the switch.
When you associate a VMM domain to an EPG, the APIC dynamically provisions a port group on the hypervisor's
distributed switch that maps to the EPG. The APIC dynamically chooses a VLAN from the VLAN pool and configures it on
the infrastructure and virtual switch port group. To assign a virtual machine to an EPG, all that the virtualization admin-
istrator must do is configure the virtual machine with the port group in the hypervisor.
Switch-specific fabric profiles and policies define the configuration of the connections between the leaf switch and the
endpoints. The switch specific profiles and policies include:
• Switch profile and vPC domain—Specifies which leaf switches connect to the endpoint.
• Interface profile—Specifies the leaf switch interfaces. Interfaces can be a single interface, port-channel, or vir-
tual port channel when you select multiple switches as part of the switch profile.
• Interface policy group—Defines the configuration of interface-level parameters, such as enabling or disabling
LLDP, LACP, and setting the port speed.
Note
LLDP configuration is not just for operational convenience; it is necessary for forward-
ing to work correctly. Subnet router interfaces are not deployed on a leaf switch unless
ACI determines that there is an endpoint on the switch that requires it. The resolution
that determines whether these resources are required on a given leaf is based on dis-
covery protocols such as LLDP and CDP.
• Attachable access entity profile (AEP)—Associates a physical, virtual, or external domain with one or more in-
terfaces on a leaf switch. When an AEP associates a domain to a leaf switch, the APIC deploys the VLAN pool on the
switch. However, the VLANs are not configured on the switch interfaces until you associate an endpoint to an
EPG. You associate an interface to the AEP through the interface policy group.
Interface types used to connect interfaces on a border-leaf switch to an external router include:
• Switched virtual interface (SVI)—Supports connections to vPC and aggregate Ethernet interfaces.
You have several routing options to statically or dynamically define the external routing, including:
• Static routing
You can configure dynamic routing protocol peering over a vPC for an L3Out connection by specifying the same SVI
VLAN on both vPC peers, which deploys a bridge domain for the L3out connection. Although the connection to the
switch pair is a vPC connection and looks like a single link to the external router, each switch has a separate control
plane and the external router peers with each leaf device separately. Also, the SVIs on the two leaf devices peer with
each other.
If static routing to the fabric is required, you must specify the same secondary IP address on both vPC peer devices’
SVIs.
In the Layer 3 external EPG configurations, you can map external networks and endpoints to this EPG by adding IP
prefixes and network masks. The network prefix and mask don’t need to be the same as the ones in the routing table.
To avoid complex configuration in the EPG, you can configure a network of 0.0.0.0/0 to assign all external networks to
this EPG.
The external EPG can use vzAny to automate the mapping of EPGs to the external EPG. The vzAny managed object
function provides an automated way of associating all endpoint groups in a VRF instance to one or more contracts, in-
stead of creating a separate contract relation for each EPG. By dynamically applying contract rules to all EPGs in a VRF,
vzAny automates the process of configuring EPG contract relationships.
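The vzAny association can be sketched as small relation objects posted under the VRF's "any" object (for example, uni/tn-&lt;tenant&gt;/ctx-&lt;vrf&gt;/any). The contract name below is hypothetical.

```python
def vzany_consume_body(contract):
    # vzRsAnyToCons attaches the contract to vzAny, making every EPG in the
    # VRF a consumer of it with one object instead of one relation per EPG
    return {"vzRsAnyToCons": {"attributes": {"tnVzBrCPName": contract}}}

def vzany_provide_body(contract):
    # vzRsAnyToProv is the provider-side equivalent
    return {"vzRsAnyToProv": {"attributes": {"tnVzBrCPName": contract}}}
```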
When specifying subnets under a bridge domain or an EPG for a given tenant, you can specify the scope of the subnet:
• Advertised externally—This subnet is advertised to the external router by the border leaf.
• Private to VRF—This subnet is not advertised to external routers by the border leaf.
For ACI to announce subnets to an external router, the following conditions must be met:
• The bridge domain must have a relationship with the L3Out connection (in addition to its association with the
VRF instance).
• A contract must exist between the Layer 3 external EPG (external subnets for the external EPG) and the EPG
associated with the bridge domain.
Firewall throughput: 200 Gbps, 120 Gbps, 72.2 Gbps, 35.9 Gbps, 18.5 Gbps
Threat prevention throughput: 100 Gbps, 60 Gbps, 30 Gbps, 20.3 Gbps, 9.2 Gbps
10GBaseT interfaces: —, —, 4, 4, 4
You can also integrate VM-Series firewalls into ACI. The majority of the design works equally well with VM-Series
firewalls as it does with physical appliances. However, some design aspects pose more challenges to VM-Series
firewalls than to physical appliances. Specifically, limitations in first-generation leaf switches require that the firewalls
(physical and virtual) be attached to dedicated service-leaf switches. Additionally, VM-Series firewalls do not support
multi-tenancy; instead, you must deploy additional VM-Series firewalls. If you exclusively have current-generation
Nexus leaf switches, you can use the following VM-Series capacity information to identify the appropriate model. The
following table summarizes the performance and capacity of VM-Series firewalls on VMware when using a distributed
virtual switch (which is required to integrate VMware with ACI). For the latest detailed information, see the VM-Series
for VMware document. Many factors affect performance, and Palo Alto Networks recommends you do additional test-
ing in your environment to ensure the deployment meets your performance and capacity requirements.
Table 2 VM-Series firewall platforms and capacities on VMware with distributed virtual switch
Configure the interfaces as an aggregate Ethernet interface on the firewall. Traffic between EPGs will ingress and
egress the firewall on the same interface and in some cases will be assigned the same zone. Because the default
intrazone rule allows all traffic within a zone, you should limit traffic between trusted subnets by modifying the
default intrazone security policy rule. If you do not modify the default intrazone rule, you must create a rule above it
that denies traffic within the zone. When additional bandwidth is required, add interface pairs (up to eight total)
to the aggregate and divide them between the two leaf switches.
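The intrazone-default override described above can be scripted through the PAN-OS XML API. The sketch below builds such a "set" request; the xpath shown (default-security-rules under the vsys1 rulebase) and the hostname are assumptions to verify against your PAN-OS version, and no request is actually sent.

```python
from urllib.parse import urlencode

# Assumed xpath of the predefined intrazone-default rule on vsys1
XPATH = ("/config/devices/entry[@name='localhost.localdomain']"
         "/vsys/entry[@name='vsys1']/rulebase/default-security-rules"
         "/rules/entry[@name='intrazone-default']")

def deny_intrazone_url(host, api_key):
    # XML API "set" request overriding the default intrazone action to deny
    return f"https://{host}/api/?" + urlencode(
        {"type": "config", "action": "set", "xpath": XPATH,
         "element": "<action>deny</action>", "key": api_key})
```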
Enable LACP and LLDP on both the firewall and the ACI interface profiles. Set LACP to a fast transmission rate
because the Cisco Nexus switches support the additional signaling, and identifying and resolving interface failures as
quickly as possible is critical for a firewall in the data center.
Management interfaces should plug into a leaf switch and attach to the management EPG. However, remember that
the management interface does require internet connectivity, and if the management EPG does not provide it, you
must provision another network that can reach the internet as well as the IP address of the APIC.
Service graph templates define a firewall device cluster that you want to be in the middle of inter-EPG traffic flow, the
firewall traffic integration method, and the logical interfaces that map to the consumer and provider EPGs. Templates
do not define the EPGs or the contracts that will use the service graph. Instead, you apply service graph templates
to EPGs and contracts after creation. Abstracting EPGs in the template allows a single service graph to be reused to
secure traffic between multiple EPGs or even multiple applications. When you apply a service graph template to a
contract, the APIC deploys the service graph template by mapping the firewalls and connecting them to the bridging
domain(s).
Although it may seem that the primary difference between the integrations is whether the firewall is bridging or routing
the traffic, there is a more important difference. In GoTo and GoThrough integrations, the service graph doesn't steer traffic
to the L4-L7 device. Instead, it creates contracts that allow inter-EPG traffic only when it traverses the L4-L7 device.
GoTo and GoThrough integrations closely resemble integrating a firewall into a traditional data center architecture. With
policy-based redirect, the service graph redirects traffic directly to the L4-L7 device and can be considered a more
software-defined approach to firewall integration. Policy-based redirect has fewer requirements on the ways the EPGs
are designed, simplifying the integration of the firewall.
GoTo Integration
To apply a service graph to EPGs by using a GoTo integration for east-west traffic flows, the consumer and provider
EPGs must be in different bridge domains. This requires the firewall to have two interfaces or sub-interfaces: one
dedicated to the consumer EPG and another to the provider. In a GoTo integration, traffic destined to another EPG is
sent to the firewall directly from the host. Typically, you accomplish this by setting the IP address of the firewall inter-
face that services the host’s EPG as the next-hop router to the destination EPG. The firewall must act as the next-hop
router for the endpoints because, as discussed previously, in GoThrough and GoTo integrations, ACI does not forward
traffic to the firewall but instead ensures that traffic through the firewall is the only traffic allowed. So the endpoints
must be configured to route through the firewall's IP address when using a GoTo integration.
When you apply the service graph, the APIC configures a new EPG in each of the bridge domains. The firewall interfac-
es are members of these EPGs and not of the EPGs that consume or provide the contract. The contract that controls
the traffic between the consumer and provider EPGs is duplicated and modified by the APIC to apply to traffic between
those EPGs and the new EPGs dedicated to the firewall, also known as shadow EPGs.
Because the GoTo integration routes traffic between EPGs, the two firewall interfaces must be configured as Layer 3
interfaces. The two interfaces can be discrete physical interfaces, or more commonly, sub-interfaces of an aggregate
Ethernet interface. The domain configured for the firewall in the APIC determines the VLANs allocated to the connec-
tion between the firewall and the bridge domains/shadow EPGs.
Contract rules define the traffic that can flow to the firewall. You can define these rules as specifically or broadly as
you want. You can use a broad catch-all rule, such as one that matches all IP traffic, to forward all traffic to the firewall,
which then determines which traffic to permit or deny. Alternatively, specific rules can be used to pre-filter the traffic
going to the firewall when you have a good understanding of the inter-EPG traffic profile and then use the firewall to
protect that traffic and ensure the applications that are using the ports and protocols are as expected, as well as filter
and monitor for malware.
When defining the firewall security policy in PAN-OS in a GoTo integration, you can control traffic by using IP address-
ing or zone information. Because the consumer and provider must be on separate networks in a GoTo integration, the
security policy configuration looks very similar to one in a traditional data center. Using zones to identify the traffic
source and destination gives you flexibility in the policy and allows the IP addressing of the endpoints to change with-
out affecting the security policy.
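The zone-based policy described above can be sketched as the security-rule XML element an automation tool would push through the PAN-OS XML API. The rule, zone, and application names below are hypothetical; the point is that the rule is keyed on zones, not IP addresses.

```python
def zone_rule_xml(name, src_zone, dst_zone, apps):
    # Security rule keyed on zones, not IP addresses, so endpoint
    # readdressing does not require policy changes
    members = "".join(f"<member>{a}</member>" for a in apps)
    return (
        f"<entry name='{name}'>"
        f"<from><member>{src_zone}</member></from>"
        f"<to><member>{dst_zone}</member></to>"
        "<source><member>any</member></source>"
        "<destination><member>any</member></destination>"
        f"<application>{members}</application>"
        "<service><member>application-default</member></service>"
        "<action>allow</action></entry>"
    )
```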
GoThrough Integration
A GoThrough integration differs little from a GoTo integration. The exception is the IP routing of the endpoint traffic. In a
GoThrough integration, the firewall does not provide the default gateway for the endpoints in the EPG. Instead, the
firewall bridges inter-EPG traffic at Layer 2. The endpoints in both EPGs share a common subnet, while their default
gateway is attached to only one of the EPGs. In fact, in a GoThrough integration, ACI enforces a limit of one bridging
domain enabled for IP routing.
The default gateway can be either the infrastructure subnet IP address or an external router. Although endpoint IP
address learning occurs on only the bridge domain enabled for IP routing, endpoints from both bridge domains are
learned as traffic traverses the firewall. Endpoints in the bridge domain on the far side of the firewall are learned with
the firewall's interface MAC address.
In a GoThrough integration, the firewall interfaces are Layer 2 interfaces. Just like in a GoTo integration, the two inter-
faces can be discrete physical interfaces or sub-interfaces of an aggregate Ethernet interface. Also, like with a GoTo
integration, you can define the firewall’s security policy through IP addressing or zone information. In a GoThrough in-
tegration, defining the policy by using zones makes even more sense than it does in a GoTo integration because there is
no IP addressing configured on the firewall. A GoThrough integration where you use zones in the firewall security policy
allows you to migrate IP addressing on the endpoints without any changes to the firewall.
Policy-Based Redirect
Although GoTo and GoThrough integration look very similar to a traditional firewall deployment in the data center, the
simplest way to integrate a next-generation firewall into ACI is with policy-based redirect (PBR). Service graphs using
PBR insert the firewall into the traffic flow by redirecting traffic that matches the contract to the firewall. This contrasts
with GoTo and GoThrough integrations, in which the contract merely permits the traffic and endpoint routing dictates the flow through the firewall. PBR simplifies the integration of the firewall by removing several barriers:
• You can insert the firewall into an existing application flow without reconfiguration of the endpoints or topology.
This significantly reduces the amount of effort required to integrate security into a network-centric ACI deploy-
ment.
• You can insert the firewall for all or only specific types of traffic between EPGs.
• The firewall does not have to be a member of the EPG bridge domains.
• The consumer and provider EPGs can be in the same bridge domain.
• A single integration method supports both Layer 3 and Layer 2 EPG separation.
PBR forwards traffic to the firewall based on a policy that defines the firewall's IP and MAC address. Even though PBR supports Layer 2 inter-EPG traffic, the firewall interfaces are always in Layer 3 mode, and traffic is received and then routed back to the infrastructure. When using PBR, you can configure the firewall with separate interfaces for the consumer and provider connections, or you can use one-arm mode, because the next-generation firewall supports ingress and egress traffic on the same interface. Although a one-arm mode deployment still has two interfaces configured in the service graph template, a common firewall interface is configured on both. A single interface simplifies the integration of the firewall by reducing the number of interfaces, IP addresses, and VLANs you need to configure. The only caveats of a one-arm PBR integration are that you cannot use zone information to define the security policy and that you must modify the default intra-zone policy on the firewall to deny traffic.
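One way to make that intra-zone change is through the PAN-OS XML API. The sketch below only builds the request; the firewall hostname, API key, and the exact xpath to the intrazone-default rule are illustrative assumptions and should be checked against your PAN-OS version.

```python
# Sketch: building the PAN-OS XML API call that overrides the
# intrazone-default rule action to deny. The hostname, API key, and
# xpath below are illustrative assumptions, not verified values.
from urllib.parse import urlencode

FIREWALL = "https://fw.example.net"  # hypothetical management address
API_KEY = "REDACTED"                 # generate with a type=keygen request

def intrazone_deny_url() -> str:
    """Return the config API URL that sets intrazone-default to deny."""
    xpath = (
        "/config/devices/entry[@name='localhost.localdomain']"
        "/vsys/entry[@name='vsys1']/rulebase/default-security-rules"
        "/rules/entry[@name='intrazone-default']"
    )
    params = {
        "type": "config",
        "action": "set",
        "xpath": xpath,
        "element": "<action>deny</action>",
        "key": API_KEY,
    }
    return f"{FIREWALL}/api/?{urlencode(params)}"
```

A commit (type=commit) is still required afterward; in a one-arm design, this deny override is what prevents unmatched intra-zone traffic from passing.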
The firewall interfaces should be in a dedicated bridge domain. Before ACI 3.1, the firewall interfaces could not be in
the same bridge domain as the endpoints. Although this restriction has been removed in ACI 3.1, dedicating a bridge
domain to the firewall is still a good practice. IP routing must be configured on all the bridge domains, and the IP de-
fault gateway for the endpoints and firewall must point to their local ACI infrastructure subnet IP address.
When traffic matches a contract with redirect policy configured, the infrastructure rewrites the destination MAC ad-
dress to the one defined in the policy. After the firewall receives the traffic, it processes it through the security policies
and determines the egress interface based on its routing table. The simplest method of ensuring proper routing is to
configure a static default route in the firewall that points to the ACI subnet IP address of the firewall’s bridge domain.
After the infrastructure receives the egress packet, ACI rewrites the original destination MAC address back into the packet and forwards it
normally.
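The redirect-and-restore sequence just described can be modeled in a few lines. This is purely a conceptual sketch of the MAC rewrites; all addresses are hypothetical:

```python
# Conceptual model of a PBR traffic flow: the fabric rewrites the destination
# MAC to the firewall's PBR MAC, the firewall inspects and routes the packet
# back to its default gateway, and the fabric restores the real endpoint MAC.
from dataclasses import dataclass, replace
from typing import Optional

FW_PBR_MAC = "00:00:5e:00:53:ff"      # MAC defined in the redirect policy
BD_GATEWAY_MAC = "00:00:5e:00:53:01"  # ACI subnet MAC in the firewall's bridge domain

@dataclass(frozen=True)
class Packet:
    src_ip: str
    dst_ip: str
    dst_mac: str

def fabric_redirect(pkt: Packet, contract_hit: bool) -> Packet:
    """Leaf switch: on a redirect contract hit, point the frame at the firewall."""
    return replace(pkt, dst_mac=FW_PBR_MAC) if contract_hit else pkt

def firewall_forward(pkt: Packet, allowed: bool) -> Optional[Packet]:
    """Firewall: enforce security policy, then follow the static default route back."""
    return replace(pkt, dst_mac=BD_GATEWAY_MAC) if allowed else None

def fabric_restore(pkt: Packet, endpoint_mac: str) -> Packet:
    """Fabric: rewrite the original destination and forward normally."""
    return replace(pkt, dst_mac=endpoint_mac)
```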
Because the firewall routes the egress traffic back to the ACI infrastructure, you must disable data-plane learning on
the firewall’s bridge domain. If you don’t disable it, the infrastructure mapping database will incorrectly be updated to
point to the firewall for the source endpoint address. The entry corrects itself the next time the source endpoint communicates, but it flaps back to the firewall each time the firewall egresses a packet to the infrastructure after protecting the inter-EPG traffic.
Alternatively, the firewall can function as the external router for the L3Out connection and integrate into the traffic
flow purely based on the routing between it and the ACI infrastructure. Contracts are still required to allow traffic from
the external network EPG to an internal EPG, but the contract does not have to insert a firewall into the traffic flow.
Integrating the firewall through routing works only on L3Out connections, because you cannot configure the routing
on a bridge domain.
Peering the firewall through an L3Out connection simplifies the configuration of security policies that use NAT and
decryption. Although these features are not often used for east-west traffic flows, they are commonly used for traffic
entering and exiting the data center. Additionally, placing the firewall at the edge of the data center allows it to prevent
malicious traffic from ever touching the data center infrastructure.
When connecting the firewall to the ACI border-leaf switches, use an aggregate Ethernet interface and divide the
member interfaces between border-leaf switches that are operating as a vPC pair. The border-leaf switches do not
have a shared control plane, so for resiliency, each border-leaf switch must have an IP routing relationship with the
firewall. You can use static routing or dynamic routing between the border-leaf switches and the firewall, but dynamic routing is recommended because it reduces the administrative overhead of adjusting routing tables and avoids sending traffic into the ACI data center that ACI will simply drop. When using dynamic routing, each border leaf runs a unique instance of the routing protocol and forms a neighbor relationship not only with the firewall but also with the other border-leaf switch in the pair. To accomplish this, the border-leaf switches and firewall must all be attached to the same VLAN and configured with unique IP addressing in a common subnet.
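The addressing requirement in the last sentence is easy to check programmatically. A minimal sketch, with addresses chosen to match the 10.5.254.0/24 example used in this guide:

```python
# Validate that the firewall and both border-leaf switches each have a
# unique host address inside the shared peering subnet, as required for
# independent routing adjacencies. Addresses are illustrative.
import ipaddress

def validate_peering_plan(subnet: str, firewall_ip: str, leaf_ips: list[str]) -> bool:
    """True if every device has a unique host address inside the shared subnet."""
    net = ipaddress.ip_network(subnet)
    addrs = [ipaddress.ip_address(ip) for ip in [firewall_ip, *leaf_ips]]
    in_subnet = all(a in net for a in addrs)
    unique = len(set(addrs)) == len(addrs)
    return in_subnet and unique
```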
Firewall Resiliency
Firewall resiliency in ACI is achieved through a normal high-availability (HA) configuration on the next-generation
firewalls. Deploy firewalls in HA pairs, which minimizes downtime by making sure that an alternate firewall is available
if one of them fails. The firewalls in an HA pair use dedicated HA ports on the firewall to synchronize data—network,
object, and policy configurations—and to maintain state information. Firewall-specific configuration such as manage-
ment interface IP address or administrator profiles, HA-specific configuration, log data, and the Application Command
Center information is not shared among peers.
When a failure occurs on a firewall in an HA pair and the peer firewall takes over the task of securing traffic, the event
is called failover. The conditions that trigger a failover are:
• A critical chip or software component fails, known as packet path health monitoring.
Active/passive high availability is recommended in an ACI integration because there should not be any asymmetric traffic flows in this design. Configure LACP pre-negotiation with the infrastructure to reduce failover time.
Because both the IP addresses and the MAC addresses fail over from the active firewall to the passive when a failover
event occurs, always use the information from the active firewall when configuring the device and PBR information in
ACI.
Note
Configure policy-based redirect after enabling HA on the firewalls, because the MAC addressing changes after HA is enabled.
• Service Manager Mode (Managed Mode)—Because the APIC has visibility into the network infrastructure, the
endpoints, and the rules that define how endpoints can communicate, the APIC can dynamically configure net-
working on the firewall in addition to the infrastructure. The APIC does not do the initial deployment of the fire-
wall but dynamically updates the configuration as the data center changes. The firewall configuration is limited
to the knowledge that is available to the APIC—specifically, interfaces, VLANs, and endpoint IP addressing.
• Network Policy Mode (Unmanaged Mode)—Security administrators configure the firewall’s virtual routers, inter-
faces, and VLANs, and the ACI administrator mirrors those configurations in the APIC.
Note
Regardless of the management mode with which the firewall is deployed, the security
administrators always configure the security policy in Panorama or directly on the fire-
wall. ACI’s understanding of security is limited to the endpoint group membership and
port and protocol rules that govern communication between them. The next-generation
firewall’s security policy provides significantly more control over the traffic, and ACI is
unable to properly describe it.
The following Palo Alto Networks components are used to integrate the next-generation firewall with ACI:
• Device package—A device package is a software update to the APIC that manages communication between
the APIC and firewalls. It allows the firewall’s high availability, network, and interface configuration to be done
within the APIC, which then pushes the configuration to Panorama and the firewalls.
• Palo Alto Networks firewall—The device package supports physical next-generation firewall appliances as well
as VM-Series firewalls. The device package supports dividing physical firewalls into virtual systems, which you
can manage as individual firewalls in the APIC.
• Panorama—Panorama is required for you to be able to associate a pre-configured security policy and register
endpoint IP addresses to the firewall from the APIC. However, you can integrate the firewall with ACI without
Panorama and still deploy virtual systems, high availability, and network interface configuration to the firewall
from within the APIC.
To define the capabilities and functions that ACI can configure on a firewall, Palo Alto Networks provides a device
package that a fabric administrator must install on the APIC. A device package contains the following:
• Device specification—An XML file that defines the device properties, functionality, and configuration param-
eters
• Device scripts—One or more Python scripts that run on the APIC and allow it to interact with the firewall or
Panorama through the PAN-OS API as events occur on the APIC
• Function profile—Default values that the APIC applies when deploying the device
• Device-level configuration parameters—A configuration file that specifies parameters that are required by a
device
After it is installed on the APIC, the device package is available to all the tenant administrators when configuring an
L4-L7 device. Using the device package, the APIC can deploy firewall high-availability, networking, zone, and interface
configuration parameters. The APIC configures the firewall in two stages. First, when you initially configure the firewall
as an L4-L7 device, you define the system, high availability, and aggregate interface configuration parameters. System
parameters include DNS, NTP, and hostname. High availability parameters include all the necessary information re-
quired to configure two firewalls (that are pre-configured enough to be reachable by the APIC) as a high availability pair.
The interface parameters in the first stage configure the firewall interfaces as an aggregate Ethernet interface but do
not configure any IP addressing or VLANs. When the parameters are committed to the APIC, the APIC configures the
information on the firewalls directly through the REST API.
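The exchange described above can be sketched with the APIC's documented REST endpoints: aaaLogin for session authentication and a class query for vnsLDevVip, the object class that represents an L4-L7 device. The APIC address and credentials below are placeholders:

```python
# Sketch: building the APIC REST calls for session login and for listing
# L4-L7 devices. The APIC URL and credentials are hypothetical; aaaLogin
# and class-level queries are standard APIC REST API paths.
import json
from urllib.request import Request

APIC = "https://apic.example.net"  # hypothetical APIC address

def login_request(user: str, password: str) -> Request:
    """Build the aaaLogin POST that returns a session token."""
    body = json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": password}}})
    return Request(f"{APIC}/api/aaaLogin.json", data=body.encode(), method="POST")

def l4l7_device_query() -> str:
    """URL that lists configured L4-L7 devices (vnsLDevVip objects)."""
    return f"{APIC}/api/class/vnsLDevVip.json"
```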
Note
Not all of the first stage parameters are required to be configured within the APIC. For
example, you can configure an aggregate Ethernet interface directly on the firewall and
still reference the interface in the APIC policy.
The second stage occurs when you apply a service graph template that uses the device to a set of EPGs. In the sec-
ond stage, you can configure the parameters that are specific to that inter-EPG traffic flow. These parameters include
firewall interface IP addressing, zones, and Panorama device group. Because service manager mode supports only
GoThrough and GoTo integration, the APIC will always configure the firewall with two interfaces. The APIC deploys the
firewall interfaces as sub-interfaces on the aggregate interface you configured in the first stage as a device parameter.
VLAN numbering is dynamically allocated and configured by the APIC from the domain’s VLAN pool to match the
VLAN configuration on the ACI leaf switch. The zone with which a firewall interface is associated can be defined from within the APIC as one of the second-stage parameters or pre-defined in the firewall configuration.
(Figure: example second-stage interface deployment—AE1.150, 10.5.1.1/24 on VLAN 150, zone WebEPG; AE1.151, 10.5.3.1/24 on VLAN 151, zone DBEPG.)
When configuring second-stage parameters on the next-generation firewall, the APIC can configure the firewall directly
or through Panorama. Integrating Panorama into the APIC increases the information that you can configure from within
the APIC. Specifically, registering endpoint IP addresses and tags dynamically to the firewall, as well as associating a
firewall with preconfigured security policies from within the APIC, requires Panorama integration. The APIC classifies
Panorama as an L4-L7 device manager, which it uses to proxy configuration to the firewalls. Even though the APIC
doesn’t use the device manager in the first stage configuration, the device manager is assigned to the firewall when you
initially configure the device in the APIC. The Panorama device group second stage parameter allows the APIC to as-
sociate a pre-configured Panorama policy with the firewalls as it dynamically deploys them. This association allows you
to dynamically enable a security policy from within the APIC, which allows capabilities such as App-ID™, User-ID™, and
Content-ID™ that the APIC can’t characterize natively.
Finally, when you have enabled attachment notifications in the APIC, integrated Panorama as the device manager,
and enabled the security configuration binding parameter, the APIC can populate dynamic address groups within your
security policy. The APIC registers endpoint IP addresses to the firewall along with tags that describe their associated
tenant, application, and EPG. The registrations are updated when the ACI infrastructure learns endpoint IP address
information. However, the APIC does not configure a dynamic address group; you must create one that matches the
appropriate tags as part of your policy in Panorama.
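The registrations described above arrive at the firewall in User-ID XML API format. A minimal sketch of such a payload follows; the tag strings here are invented examples of tenant/application/EPG descriptors, not the exact format the APIC emits:

```python
# Sketch: building a User-ID XML API register message that tags one
# endpoint IP. Tag names are illustrative placeholders for the
# tenant/application/EPG descriptors mentioned in the text.
import xml.etree.ElementTree as ET

def register_payload(ip: str, tags: list[str]) -> str:
    """Build a uid-message that registers `ip` with the given tags."""
    msg = ET.Element("uid-message")
    ET.SubElement(msg, "type").text = "update"
    ET.SubElement(msg, "version").text = "1.0"
    payload = ET.SubElement(msg, "payload")
    register = ET.SubElement(payload, "register")
    entry = ET.SubElement(register, "entry", ip=ip)
    tag_el = ET.SubElement(entry, "tag")
    for t in tags:
        ET.SubElement(tag_el, "member").text = t
    return ET.tostring(msg, encoding="unicode")
```

A dynamic address group in Panorama then matches on these tags; sending the payload to the firewall is done with a type=user-id API request.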
When choosing between service manager mode and network policy mode, consider the following factors:
• Level of coordination—How timely must coordination be between the ACI tenant administrators and the security administrators? Service manager mode requires less coordination than network policy mode, because the firewall configuration that must stay consistent with the ACI infrastructure can be configured centrally in the APIC.
• Service graph integration—Which service graph integration method do you want to deploy? Policy-based redirect (PBR) provides the most flexibility and is supported only in network policy mode.
• Operational synchronization—How fast do you expect the security infrastructure to mirror the data center
state? Service manager mode provides the ability for the APIC to dynamically register the endpoint IP informa-
tion and policy tags (tenant, application, EPG) to the firewall for use within dynamic address groups. Dynamic
address groups can significantly decrease manual policy configuration. However, because service manager mode supports only GoThrough and GoTo integration, dynamic address groups should rarely be needed; you can use the zone information to define the security policy instead.
Design Models
There are many ways to use the concepts discussed in the previous sections in order to achieve an architecture that
secures applications deployed in a Cisco ACI data center. The design models in this section offer example architectures
that secure inbound access to an application within the ACI data center, the communication between groups of end-
points within the data center, and the connection from within the data center to external networks.
The design models primarily differ in how they integrate into the traffic flows and how much configuration occurs in
the Cisco Application Policy Infrastructure Controller (APIC) versus on the APIC and Panorama™. Consider which model
best fits your needs and use it as a starting point for your design. The design models in this reference design are:
• Network Policy Mode—In this model, a high-availability (HA) pair of firewalls integrates into the east-west traffic
flows by using a policy-based redirect to a single logical HA interface. This design fully separates the configura-
tion of the ACI infrastructure from the configuration of the next-generation firewall. This design uses address
objects on the firewall to map to endpoint groups (EPGs) in ACI, which is more suitable for automation and
orchestration in a virtual machine data center environment. For north-south traffic flows, implement a second HA pair of firewalls by using network policy mode with a GoTo service graph.
• Service Manager Mode—In this model, an HA pair of firewalls integrates into east-west traffic flows by using a
GoTo service graph. This model centralizes and automates the configuration of the firewall networking, inter-
faces, and association with a security policy through the APIC. The security policy rules are configured independently using the firewall UI. For north-south traffic flows, implement a second HA pair of firewalls by using network policy mode with a GoTo service graph.
• Multi-tenant Model—This model extends the next-generation firewalls used in the service manager and net-
work policy design models across multiple tenants. Virtual systems and virtual routers are used on the firewall to
create unique policy management and network forwarding for each tenant.
(Figure: network policy mode design model—management network 192.168.50.0/24 with firewalls at 192.168.50.12 and 192.168.50.13; NGFW EPG 10.5.0.0/24 with firewall interface 10.5.0.4 (ae1.199); Business EPG 10.5.2.0/24; DB EPG 10.5.3.0/24; external connections 172.16.1.2 (ae3) and 10.5.254.4 (ae2.198).)
East-West Traffic
Contracts between EPGs primarily describe east-west traffic within a tenant. In this design model, the contracts
between EPGs direct traffic to the firewall through policy-based redirect. Maximize resiliency by using a pair of fire-
walls (or a virtual system on a shared firewall pair) configured for active-passive high availability and connected to the
service-leaf switches by using an aggregate Ethernet interface/vPC.
For each VRF, define a bridge domain and subnet in the infrastructure for the firewall. Configure the firewall for a
single sub-interface on the aggregate Ethernet interface and with a dedicated virtual router. A static default route in
the virtual router directs all traffic to the ACI infrastructure. The firewall does not translate the IP addressing for traffic
between EPGs.
All east-west traffic within the VRF, both ingress and egress, flows through this single firewall interface. A positive control security policy should allow only appropriate application traffic between endpoint groups. Because each firewall interface has only one zone, you must override the default intra-zone security policy rules and modify them to deny traffic. Security profiles should also be enabled to prevent known malware and vulnerabilities from moving laterally in the trusted network through traffic allowed in the security policy.
To support multiple contracts all sending traffic to a single firewall interface, you should use address groups in the defi-
nition of the security policy to separate EPGs. Use the same methods when you have multiple applications in a VRF.
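That address-group mapping can be sketched as follows, using the example subnets from this design model; the group names are illustrative:

```python
# Sketch: with one zone on the single PBR interface, the security policy
# distinguishes EPGs by address group instead. Group names are illustrative;
# subnets match the example topology in this design model.
import ipaddress

ADDRESS_GROUPS = {
    "web-epg": ipaddress.ip_network("10.5.1.0/24"),
    "business-epg": ipaddress.ip_network("10.5.2.0/24"),
    "db-epg": ipaddress.ip_network("10.5.3.0/24"),
}

def classify(ip: str) -> str:
    """Return the EPG address group an endpoint IP falls into."""
    addr = ipaddress.ip_address(ip)
    for name, net in ADDRESS_GROUPS.items():
        if addr in net:
            return name
    return "unknown"
```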
A contract between the external networks EPG and the web EPG allows TCP/80 and TCP/443 traffic. The external
network EPG consumes (source) the contract, and the web EPG provides it (destination). Because the firewall security
policy is applied to the traffic before the traffic enters the data center, you do not need to apply a service graph to
the contract. The firewall security policy allows appropriate application traffic to the resources in the web EPG while
security profiles prevent known malware and vulnerabilities from entering the network in traffic allowed in the security
policy.
A contract between the vzAny managed object and the external network EPG allows traffic from any EPG within the
VRF to reach the external networks through the firewall. The vzAny managed object provides a convenient way of
associating all endpoint groups (EPGs) in a Virtual Routing and Forwarding (VRF) instance to one or more contracts,
instead of creating a separate contract relation for each EPG. The EPG collection consumes the contract, and the ex-
ternal networks EPG provides it. You can configure specific traffic profiles in the contract or send everything to the fire-
wall and allow it to control the traffic as it leaves the data center. Common ports required for outbound traffic include
TCP and UDP/53 (DNS), UDP/123 (NTP), TCP/80 (HTTP), and TCP/443 (HTTPS).
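In the APIC object model, those ports map naturally onto a vzFilter with one vzEntry per port. A hedged sketch of that payload follows; the object and attribute names follow APIC REST conventions, but the filter and entry names are illustrative:

```python
# Sketch: a vzFilter JSON body with one vzEntry per common outbound port.
# Filter and entry names are illustrative; attribute names (etherT, prot,
# dFromPort, dToPort) follow the APIC REST object model.
import json

OUTBOUND_PORTS = [
    ("dns-tcp", "tcp", "53"),
    ("dns-udp", "udp", "53"),
    ("ntp", "udp", "123"),
    ("http", "tcp", "80"),
    ("https", "tcp", "443"),
]

def outbound_filter(name: str = "outbound-common") -> str:
    """Build a vzFilter JSON body with one vzEntry per allowed port."""
    children = [
        {"vzEntry": {"attributes": {
            "name": entry_name, "etherT": "ip", "prot": proto,
            "dFromPort": port, "dToPort": port,
        }}}
        for entry_name, proto, port in OUTBOUND_PORTS
    ]
    return json.dumps({"vzFilter": {"attributes": {"name": name},
                                    "children": children}})
```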
The firewall can apply source NAT to outbound traffic, but it isn’t required. When the outbound traffic originates from
an endpoint that has an address translation for inbound traffic, source NAT translates outbound traffic to the appropri-
ate address object. For endpoints not associated with an inbound address translation, the firewall translates the source
address to its untrusted interface.
The firewall security policy allows appropriate application traffic from the endpoints in the data center to the external
networks. You should implement the security policy by using positive security policies (whitelisting). Security profiles
prevent known malware and vulnerabilities from entering the network in return traffic allowed in the security policy.
URL filtering, file blocking, and data filtering protect against data exfiltration.
Management
In this design model, you manage the ACI infrastructure and the firewalls separately. The connection of the firewalls
to the infrastructure must be coordinated between the fabric and security administrators. Also, tenant administrators
must work with the security administrator to integrate the firewalls into the application traffic flow.
(Figure: service manager mode design model—external networks EPG; Web EPG 10.5.1.0/24 with firewall interface 10.5.1.1 (ae1.195); Business bridge domain (ae1.196); external connections 172.16.1.2 (ae3) and 10.5.254.4 (ae2.198).)
East-West Traffic
Contracts between EPGs primarily describe east-west traffic within a tenant. In this design model, the contracts use a
GoTo integration to the firewall to ensure that the only path between EPGs is through the firewall. Maximize resiliency
by using a pair of firewalls (or a virtual system on a shared firewall pair) configured for active-passive high availability
and connected to the service-leaf switches using an aggregate Ethernet interface/vPC.
Bridge domains have only one subnet and EPG associated with them. The firewall has sub-interfaces configured on
the aggregate Ethernet interface for each bridge domain and is the default gateway configured on the endpoints in the
EPGs. Configure all firewall interfaces with a virtual router that corresponds to the VRF in the tenant. The firewall does
not translate the IP addressing for traffic between EPGs.
Contracts ensure all east-west traffic within the VRF flows through the firewall. A positive control security policy should
allow only appropriate application traffic between endpoint groups, and because there is a firewall interface connected
to each EPG, you use zones to define the security policy. Security profiles should also be enabled to prevent known
malware and vulnerabilities from moving laterally in the trusted network through traffic allowed in the security policy.
When you have multiple applications, use the interface zone to define what endpoints can communicate through the
firewall. As long as a bridge domain only contains one subnet and EPG, the security policy should be relatively straight-
forward. This is because, no matter the number of contracts to which you associate an EPG, the firewall will have only
one interface in a bridge domain at a time.
A contract between the external networks EPG and the web EPG allows TCP/80 and TCP/443 traffic. The external
network EPG consumes (source) the contract, and the web EPG provides it (destination). Because the firewall security
policy is applied to the traffic before the traffic enters the data center, you do not need to configure a service graph on
the contract. The firewall security policy allows appropriate application traffic to the endpoints in the web EPG while
security profiles prevent known malware and vulnerabilities from entering the network in traffic allowed in the security
policy.
A contract between the vzAny managed object and the external network EPG allows traffic from any EPG within the
VRF to reach the external networks through the firewall. The vzAny managed object provides a convenient way of as-
sociating all endpoint groups in a virtual routing and forwarding (VRF) instance to one or more contracts, instead of cre-
ating a separate contract relation for each EPG. The EPG collection consumes the contract, and the external networks
EPG provides it. You can configure specific traffic profiles in the contract or send everything to the firewall and allow it
to control the traffic as it leaves the data center. Common ports required for outbound traffic include TCP and UDP/53
(DNS), UDP/123 (NTP), TCP/80 (HTTP), and TCP/443 (HTTPS).
The firewall can apply source NAT to outbound traffic, but it isn’t required. When the outbound traffic originates from
an endpoint that has an address translation for inbound traffic, source NAT translates outbound traffic to the appropri-
ate address object. For endpoints not associated with an inbound address translation, the firewall translates the source
address to its untrusted interface.
The firewall security policy allows appropriate application traffic from the endpoints in the data center to the external
networks. You should implement the security policy by using positive security policies (whitelisting). Security profiles
prevent known malware and vulnerabilities from entering the network in return traffic allowed in the security policy.
URL filtering, file blocking, and data filtering protect against data exfiltration.
Management
In this design model, the ACI infrastructure and the firewalls coordinate network, HA, and interface configuration. The
coordination does not significantly reduce the amount of configuration required but instead centralizes the configura-
tion in the APIC.
The multi-tenant design model uses the virtual system (vsys) functionality in the next-generation firewall to provide dis-
tinct management instances within a single next-generation firewall. This allows you to use a single HA pair of firewalls
across many tenants instead of deploying dedicated firewalls for each. Each vsys is an independent firewall instance
with a unique security policy that each tenant manages separately and cannot be accessed or viewed by other tenants.
Interfaces, zones, VLANs, and virtual routers are all assigned to a vsys. By default, all next-generation firewalls have a
single vsys. If you want to support multiple tenants on a single firewall pair, you can create additional virtual systems (up to the limit of the platform) and assign interfaces, zones, VLANs, and virtual routers to each.
Although it is possible to route traffic between tenants through the firewall, in this design, all traffic between tenants
must leave the data center and come back through the L3out connections, which are not shared between tenants. This
design choice simplifies the routing and security policy on the firewall when multiple virtual systems are present.
Summary
The Cisco ACI software-defined data center provides a flexible infrastructure that addresses dynamic compute, storage, and network requirements. The ACI data center's high-performance security model is very flexible; however, it is
limited to Layer 4 enforcement. The Palo Alto Networks integration with Cisco ACI allows you to insert next-generation
firewalls between EPGs as a Layer 4 to Layer 7 service. Next-generation firewalls integrated into ACI complement
ACI’s contracts and filters by providing complete application visibility and control, resulting in a reduction in the threat
footprint and the ability to prevent known and unknown threats. The next-generation firewalls and Panorama are part
of the Palo Alto Networks Security Operating Platform, which prevents successful cyberattacks by harnessing analytics
to automate routine tasks and enforcement. Tight integration across the platform, and with partners, simplifies security
operations to secure users, applications, and data.
Related Guide
The Deployment Guide for Cisco ACI for this reference architecture reviews detailed decision criteria for deployment
scenarios, then provides step-by-step procedures for the data center solution architect or engineer to program features
of the Cisco ACI fabric and the Palo Alto Networks firewall networking and policy in order to achieve an integrated
design.
• Split the reference architecture guide into two guides: reference architecture and deployment
• Updated drawings and procedures to more closely align IP addressing examples in procedures to the drawings
© 2018 Palo Alto Networks, Inc. Palo Alto Networks is a registered trademark of Palo Alto Networks. A list of our trade-
marks can be found at http://www.paloaltonetworks.com/company/trademarks.html. All other marks mentioned herein may
be trademarks of their respective companies. Palo Alto Networks reserves the right to change, modify, transfer, or other-
wise revise this publication without notice.
B00105P-18a-2