
Question #1

Which recommended practice is applicable?


A) If no core layer is deployed, the design will be easier to scale.
B) A dedicated campus core layer should be deployed for connecting three or more buildings.
C) If no core layer is deployed, the distribution switches should not be fully meshed.
D) A dedicated campus core layer is not needed for connecting fewer than five buildings.

Answer: B
Explanation:

Question #2:
When a router has to make a rate transition from LAN to WAN, what type of congestion needs to be considered in the network design?
A) RX-queue deferred
B) TX-queue deferred
C) RX-queue saturation
D) TX-queue saturation
E) RX-queue starvation
F) TX-queue starvation

Answer: F

Question #3:
To which switch or switches should you provide redundant links in order to achieve high availability with reliable fast convergence in
the enterprise campus?
A) to a core switch running Cisco NSF and SSO from redundant distribution switches connected with a Layer 2 link
B) to a core switch running Cisco NSF and SSO from redundant distribution switches connected with a Layer 3 link
C) to two core switches from redundant distribution switches connected with a Layer 2 link
D) to two core switches from redundant distribution switches connected with a Layer 3 link
E) to two core switches running Cisco NSF and SSO from two redundant distribution switches running Cisco NSF and SSO

Answer: D

Explanation:
A) Incorrect: Single core is always less desirable than redundant core.
B) Incorrect: Single core is always less desirable than redundant core.
C) Incorrect: Redundant distribution switches should be connected at Layer 3, not Layer 2. You would only connect redundant
distribution switches at Layer 2 if you were spanning VLANs across access switches, which is not recommended.
D) Correct: Redundant distribution switches should be connected at Layer 3. The EtherChannel connection between the
redundant distribution switches should use L3 and L4 (UDP/TCP port) information as input to hashing algorithms.
E) Incorrect: Redundant core with NSF and SSO is not recommended.
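The point about hashing inputs can be made concrete with a short sketch. This is a hypothetical per-flow selection model in Python, not Cisco's actual hardware hash: it simply shows that feeding Layer 3 and Layer 4 fields into the hash key allows flows between the same pair of hosts to be spread across different EtherChannel member links.

```python
import hashlib

def select_member_link(src_ip, dst_ip, src_port, dst_port, num_links):
    """Pick an EtherChannel member link for a flow.

    Hypothetical per-flow hash (not Cisco's hardware algorithm): the
    point is that including L3 and L4 fields in the key allows flows
    between the same two hosts to be spread across member links."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_links

# Same host pair, different L4 ports: the flows can hash to different
# member links, which an L2-only (MAC-based) hash could never do.
print(select_member_link("10.1.1.10", "10.2.2.20", 50001, 443, 4))
print(select_member_link("10.1.1.10", "10.2.2.20", 50002, 443, 4))
```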

Question #4:
Which of these statements is correct regarding Stateful Switchover and Cisco Nonstop Forwarding?
A) Utilizing Cisco NSF in Layer 2 environments can reduce outages to one to three seconds.
B) Utilizing SSO in Layer 3 environments can reduce outages to one to three seconds.
C) Distribution switches are single points of failure causing outages for the end devices.
D) Utilizing Cisco NSF and SSO in a Layer 2 environment can reduce outages to less than one second.
E) NSF and SSO with redundant supervisors have the most impact on outages at the access layer.

Answer: E
Explanation:
A) Incorrect: You can reduce the outage to one to three seconds in this access layer, as shown in Figure 2-8, by using SSO in a
Layer 2 environment or Cisco NSF with SSO in a Layer 3 environment. (Answers A and B are reversed)
B) Incorrect: You can reduce the outage to one to three seconds in this access layer, as shown in Figure 2-8, by using SSO in a
Layer 2 environment or Cisco NSF with SSO in a Layer 3 environment. (Answers A and B are reversed)
C) Incorrect: An access switch failure is a single point of failure that causes outage for the end devices connected to it.
D) Incorrect: NSF is Layer 3. SSO can be employed at Layer 2, but not NSF.
E) Correct

Question #5:
When is a first-hop redundancy protocol needed in the distribution layer?
A) when the design implements Layer 2 between the access and distribution blocks
B) when multiple vendor devices need to be supported
C) when preempt tuning of the default gateway is needed
D) when a robust method of backing up the default gateway is needed
E) when the design implements Layer 2 between the access switch and the distribution blocks

Answer: A
Explanation:

Question #6:
Which of these is a recommended practice with trunks?
A) use ISL encapsulation
B) use 802.1q encapsulation
C) set ISL to desirable and auto with encapsulation negotiate to support ISL protocol negotiation
D) use VTP server mode to support dynamic propagation of VLAN information across the network

Answer: B
Explanation:

Question #7:
Which of the following is a recommended practice of a data center core?
A) Server-to-server traffic always remains in the core layer.
B) The core infrastructure should be in Layer 3.
C) Core layer should run BGP along with an IGP because iBGP has a lower administrative distance than any IGP.
D) The Cisco Express Forwarding hashing algorithm is the default, based on the IP address and Layer 4 port.

Answer: B
Explanation:

Question #8:
Which statement about data center access layer design modes is correct?
A) The access layer is the first oversubscription point in a data center design.
B) The data center access layer provides the physical-level connections to the server resources and only operates at Layer 3.
C) When using a Layer 2 looped design, VLANs are not extended into the aggregation layer.
D) When using a Layer 3 design, stateful services requiring Layer 2 connectivity are provisioned from the aggregation layer.

Answer: A
Explanation:

Question #9:
Which of these Layer 2 access designs does not support VLAN extensions?
A)
B)
C)
D)
E)

FlexLinks
loop-free U
looped square
looped triangle
loop-free inverted U

Answer: B
Explanation:


Question #10:
Which statement about Fibre Channel communications is correct?
A) N_Port to N_Port connections use logical node connection points.
B) Flow control is only provided by QoS.
C) It must be implemented in an arbitrated loop.
D) Communication methods are similar to those of an Ethernet bus.

Answer: A
Explanation:

Question #11:
In base e-Commerce module designs, where should firewall perimeters be placed?
A) core layer
B) Internet boundary
C) aggregation layer
D) aggregation and core layers
E) access and aggregation layers

Answer: A
Explanation:

Question #12:
The Cisco Nexus 1000V is intended to address which disadvantage of the VMware vSphere solution?
A) Inability to deploy new functional servers without requiring physical changes on the network
B) Complexity added by the requirement for an ESX host for each virtual machine
C) Network administrators lack control of the access layer of the network
D) To increase the number of physical infrastructure and the virtual machines that can be managed

Answer: C
Explanation:
Answers A, B, and D are just wrong, as none of these statements accurately describe VMware.
However, I am unable to document much support for the correct answer. It's simply the only choice that isn't completely false.

Question #13:
Which of the following facts must be considered when designing for IP telephony within an Enterprise Campus network?

A) Because the IP phone is a three-port switch, IP telephony extends the network edge, impacting the Distribution layer.
B) Video and voice are alike in being bursty and bandwidth intensive, and thus impose requirements to be lossless, and have
minimized delay and jitter.
C) IP phones have no voice and data VLAN separation, so security policies must be based on upper layer traffic characteristics.
D) Though multi-VLAN access ports are set to dot1q and carry more than two VLANs, they are not trunk ports.

Answer: D

Explanation:
See comment at bottom of page 89.

Question #14:
Addressing QoS design in the Enterprise Campus network for IP Telephony applications means what?

A) It is critical to identify aggregation and rate transition points in the network, where preferred traffic and congestion QoS
policies should be enforced
B) Suspect traffic should be dropped closest to the source, to minimize wasting network resources
C) An Edge traffic classification scheme should be mapped to the downstream queue configuration
D) Applications and Traffic flows should be classified, marked and policed within the Enterprise Edge of the Enterprise Campus
network

Answer: A
Explanation:

Question #15:
With respect to address summarization, which of the following statements concerning IPv4 and IPv6 is true?

A) The potential size of the IPv6 address blocks suggests that address summarization favors IPv6 over IPv4.
B) Role based addressing using wildcard masks to match multiple subnets is suitable for IPv4, but unsuitable for IPv6.
C) In order to summarize, the number of subnets in the IPv4 address block should be a power of 2 while the number of
subnets in the IPv6 address block should be a power of 64.
D) WAN link addressing best supports summarization with a /126 subnet for IPv4 and a /31 for IPv6.

Answer: B
Explanation:

Question #16:
There are 3 steps to confirm whether a range of IP addresses can be summarized. Which of the
following is used in each of these 3 steps?

A) The first number in the contiguous block of addresses
B) The last number in the contiguous block of addresses
C) The size of the contiguous block of addresses
D) The subnet mask of the original network address

Answer: C
Explanation:
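No explanation was provided, so here is a minimal sketch of the three-step summarization check as it is usually taught (assumed steps: the block size is a power of 2, the first subnet number is a multiple of that size, and the summary mask is derived from that size). Each step relies on the size of the contiguous block, which is why answer C fits.

```python
import ipaddress

def can_summarize(first_subnet: str, count: int):
    """Check whether `count` contiguous subnets starting at `first_subnet`
    can be replaced by one summary route. All three steps use the size
    of the contiguous block (here, the number of subnets)."""
    net = ipaddress.ip_network(first_subnet)
    # Step 1: the block size must be a power of 2.
    if count < 1 or (count & (count - 1)) != 0:
        return None
    # Step 2: the first subnet number must be a multiple of the block size.
    index = int(net.network_address) // net.num_addresses
    if index % count != 0:
        return None
    # Step 3: shorten the prefix by log2(block size) to get the summary.
    bits = count.bit_length() - 1
    return net.supernet(prefixlen_diff=bits)

# Example: 10.1.4.0/24 through 10.1.7.0/24 (4 subnets) summarize to 10.1.4.0/22.
print(can_summarize("10.1.4.0/24", 4))   # 10.1.4.0/22
print(can_summarize("10.1.5.0/24", 4))   # None (first subnet not aligned)
```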

Question #17:
A well-designed IP addressing scheme supporting role-based functions within the subnet will result in the most efficient use of which
technology?

A) Layer 3 switching in the core
B) Network Admission Control (NAC)
C) IP telephony (voice and video) services
D) ACLs

Answer: D
Explanation:
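No explanation was provided. As an illustration of why role-based addressing pays off in ACL efficiency, the sketch below uses a hypothetical plan (the prefix, role name, and host range are assumptions) in which the same host range in every subnet is reserved for one role; a single wildcard-mask entry then matches that role network-wide instead of one ACL line per subnet.

```python
import ipaddress

# Hypothetical role-based plan: in every 10.128.x.0/24 user subnet, hosts
# .64-.127 are reserved for IP phones. One wildcard entry such as
# "permit ip 10.128.0.64 0.0.255.63 ..." then matches the phone role in
# all 256 subnets instead of needing one ACL line per subnet.
ACL_ADDRESS  = int(ipaddress.IPv4Address("10.128.0.64"))
ACL_WILDCARD = int(ipaddress.IPv4Address("0.0.255.63"))
CARE_BITS    = ACL_WILDCARD ^ 0xFFFFFFFF  # wildcard bits set = "don't care"

def matches_role(host: str) -> bool:
    """Return True if the host falls inside the role's wildcard match."""
    h = int(ipaddress.IPv4Address(host))
    return (h & CARE_BITS) == (ACL_ADDRESS & CARE_BITS)

print(matches_role("10.128.17.70"))    # True  - phone range in subnet 17
print(matches_role("10.128.200.100"))  # True  - phone range in subnet 200
print(matches_role("10.128.17.10"))    # False - outside the phone role range
```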

Question #18:
Which of the following is true regarding the effect of EIGRP queries on the network design?

A) EIGRP queries will be the most significant issue with respect to stability and convergence
B) EIGRP queries are not a consideration as long as EIGRP has a feasible successor with a next hop AD that is greater than the
FD of the current successor route
C) EIGRP queries will only increase the convergence time when there are no EIGRP stubs designed in the network

Answer: A
Explanation:

Question #19:
Which of the following is a result when designing multiple EIGRP autonomous systems within the Enterprise Campus network?

A) Improves scalability by dividing the network using summary routes at AS boundaries
B) Decreases complexity since EIGRP redistribution is automatically handled in the background
C) Reduces the volume of EIGRP queries by limiting them to one EIGRP AS
D) Scaling is improved when a unique AS is run at the Access, Distribution, and Core layers of the network

Answer: A
Explanation:

Question #20:
When designing the routing for an Enterprise Campus network it is important to keep which of the following route filtering aspects
in mind?

A) Filtering is only useful when combined with route summarization


B) It is best to filter (allow) the default and summary prefixes only in the Enterprise Edge to remote sites or site-to-site IPsec
VPN networks
C) IGPs (for example EIGRP or OSPF) are superior to route filtering in avoiding inappropriate transit traffic through remote
nodes or inaccurate or inappropriate routing updates
D) The primary limitation of router filtering is that it can only be applied on outbound updates

Answer: B
Explanation:
This answer is difficult to pin down, but all sources agree that "It is best to filter (allow) the default and summary prefixes only in the
Enterprise Edge to remote sites or site-to-site IPsec VPN networks" is the correct answer.
Filtering down to the default and summary prefixes toward remote sites, VPN sites, and stub sites is the recommended design for several
reasons (query suppression, security, CPU load, transit route suppression), even though an explicit statement of this recommendation is
hard to find in the references.
Also, the other three choices are pretty bad.
See http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Security/SAFE_RG/SAFE_rg/chap5.html
Also, see below sections, especially pages 117 & 118.

Question #21:
Which statement is the most accurate regarding IPsec VPN design for an Enterprise Campus environment?

A) VPN device IP addressing must align with the existing Campus addressing scheme.
B) The choice of a hub-and-spoke or meshed topology ultimately depends on the number of remotes.
C) Sizing and selection of the IPsec VPN headend devices is most affected by the throughput bandwidth requirements for the
remote offices and home worker
D) Scaling considerations such as headend configuration, routing protocol choice, and topology have the broadest impact on
the design.

Answer: D
Explanation:
All sources agree that the correct answer is "Scaling considerations such as headend configuration, routing protocol choice, and
topology have the broadest impact on the design." The answer, however, is never directly stated.

______________________________________________________________________
Also, see http://www.cisco.com/application/pdf/en/us/guest/netsol/ns171/c649/ccmigration_09186a008074f22f.pdf
______________________________________________________________________

Question #22:
Which unique characteristics of the Data Center Aggregation layer must be considered by an Enterprise Campus designer?

A) Layer 3 routing between the Access and Aggregation layers facilitates the ability to span VLANs across multiple access
switches, which is a requirement for many server virtualization and clustering technologies.
B) "East-west" server-to-server traffic can travel between aggregation modules by way of the core, but backup and replication
traffic typically remains within an aggregation module.
C) Load balancing, firewall services, and other network services are commonly integrated by the use of service modules that
are inserted in the aggregation switches.
D) Virtualization tools allow a cost effective approach for redundancy in the network design by using two or four VDCs from
the same physical switch.

Answer: C
Explanation:
A) "Layer 3 routing between the Access and Aggregation layers facilitates the ability to span VLANs across multiple access switches,
which is a requirement for many server virtualization and clustering technologies.
False: Layer 3 between Access and Distribution isolates VLANs to the local access switch. See page 244: "Another approach is to
use Layer 3 routing between the access and distribution layer, because routing protocols can use full bandwidth between the
layers through use of Equal Cost Multipath (ECMP). However, Layer 3 routing between the access and aggregation layer restricts
the ability to span VLANs across multiple access switches."
B) "East-west" server-to-server traffic can travel between aggregation modules by way of the core, but backup and replication traffic
typically remains within an aggregation module.
False: High volumes of "east-west" traffic typically need to be supported within the Aggregation Blocks. See page 244: In
modern data center environments, traditional oversubscription rules do not apply. In most campus environments, traffic tends to
be north-south in nature. Traffic flows from the clients in the access layer to the campus core to access services in the data
center or on the Internet. The data center environment is different; in addition to the north-south traffic, there is also a need to
support high volumes of east-west traffic.
Servers do not only communicate with hosts outside the data center, but there is also a lot of traffic between servers inside the
data center, such as database replication, vMotion traffic, and intercluster communication.
To accommodate these traffic patterns, a data center network design must be able to support high volumes of bandwidth in the
access and distribution layers in the data center aggregation blocks. In traditional spanning-tree-based topologies, half of the
links between the access and distribution layers are not used, because they are blocked by the spanning-tree loop-prevention
mechanism. vPC solves this problem by allowing MECs between the access and distribution layer, which eliminate blocked ports.
By definition, however, a vPC domain consists of a single pair of switches. It is not possible to expand a vPC domain to three or
more switches to achieve better scalability and availability.
D) Virtualization tools allow a cost effective approach for redundancy in the network design by using two or four VDCs from the
same physical switch.
False: VDCs are not used for redundancy, due to the potential of hardware failure. See pages 238-240, particularly the following
section: "It is not recommended to use two VDCs from the same physical switch to construct any single layer of a hierarchical
network design. For example, if you use two different VDCs inside the same physical switch as the two aggregation switches in an
aggregation block, the whole aggregation block will fail when the physical switch fails. Distinct, physical box redundancy within a
network layer is a key characteristic that
contributes to the high availability of the hierarchical network design reference model."
C) See below for support for the correct choice:

Question #23:
Refer to the exhibit.

The Cisco Nexus 1000V in the VMware vSphere solution effectively creates an additional access layer in the virtualized data center
network; which of the following 1000V characteristics can the designer take advantage of?
A) Offloads the STP requirement from the external Access layer switches
B) If upstream access switches do not support vPC or VSS, the dual-homed ESX host traffic can still be distributed using virtual
port channel host mode using subgroups automatically discovered through CDP
C) Allows transit traffic to be forwarded through the ESX host between VMNICs
D) Can be divided into multiple virtual device contexts for service integration, enhanced security, administrative boundaries,
and flexibility of deployment

Answer: B
Explanation:
A) "Offloads the STP requirement from the external Access layer switches".
False: The VEM does not participate in STP. See page 272: "The Cisco Nexus 1000V VEM is not a switch in the traditional sense. It
does not participate in the STP and uses different frame forwarding rules than traditional Ethernet switches. It is better
characterized as an Ethernet host virtualizer (EHV). It forwards traffic between the connected VMs and the physical access
switches but will not allow transit traffic to be forwarded through the ESX host. The VEM never forwards traffic between VMNICs,
but only between VMNICs and virtual network interface cards (vNIC), or between the vNICs within the ESX hosts.
Therefore, most of the considerations that are associated with the virtualized access layer design revolve around channeling and
trunking to provide network connectivity for the VMs and the ESX host management and control. STP or other control plane
protocols do not need to be considered."
C) "Allows transit traffic to be forwarded through the ESX host between VMNICs"
False: The opposite is true. See the same text on page 272: "It forwards traffic between the connected VMs and the physical
access switches but will not allow transit traffic to be forwarded through the ESX host. The VEM never forwards traffic between
VMNICs, but only between VMNICs and virtual network interface cards (vNIC), or between the vNICs within the ESX hosts.
D) "Can be divided into multiple virtual device contexts for service integration, enhanced security, administrative boundaries, and
flexibility of deployment"

False: This describes VDCs. See page 237.


B) See notes below in support of correct choice.

Question #24:
Support of vPC on the Cisco Nexus 5000 access switch enables various new design options for the data center Access layer, including
which of the following?

A) The vPC peer link is not required for Access layer control traffic, and can instead be used to span VLANs across the vPC
access switches
B) A single switch can associate per-interface with more than one vPC domain
C) vPC can be used on both sides of the MEC, allowing a unique 16-link EtherChannel to be built between the access and
aggregation switches
D) Allows an EtherChannel between a server and an access switch while still maintaining the level of availability that is
associated with dual-homing a server to two different access switches

Answer: C
Explanation:
C) "vPC can be used on both sides of the MEC, allowing a unique 16-link EtherChannel to be built between the access and
aggregation switches"
Correct: See notes.

Incorrect choices:
A) "The vPC peer link is not required for Access layer control traffic, and can instead be used to span VLANs across the vPC access
switches"
Incorrect: The Peer Link is required for Access layer control traffic. See page 241: "vPC peer link: This is the link between the vPC
peer switches, used to exchange vPC control traffic. The peer link can also be used to forward vPC data if one of the links in a vPC
fails. The availability of this link is vital to the operation of vPC, so it is recommended to configure it as a port channel with
members spread across different line cards."

B) "A single switch can associate per-interface with more than one vPC domain"
Incorrect: See page 241: vPC domain: "A vPC domain is group of two vPC peer switches using vPC. The vPC domain must have a
unique identifier. A single vPC domain cannot consist of more than 2 switches. A single switch cannot be part of more than 1 vPC
domain."

D) "Allows an EtherChannel between a server and an access switch while still maintaining the level of availability that is associated
with dual-homing a server to two different access switches"
Incorrect: For this choice to be correct, it would need to describe an EtherChannel between a REDUNDANT PAIR of access
switches and a server. See page 269: "Another application of vPC on the Cisco Nexus 5000 is that it allows an EtherChannel to be

built between a server and a redundant pair of Cisco Nexus 5000 switches. Normally, EtherChannels can be built only from a
server that is dual-homed to a single switch. Using vPC allows the most efficient form of load balancing to be used on the server,
while still maintaining the level of availability that is associated with dual-homing a server to two different access switches."


Question #25:
The requirement for high availability within the Data Center network may cause the designer to consider which one of the following
solutions?

A) Construct a hierarchical network design using EtherChannel between a server and two VDCs from the same physical switch
B) Utilize Cisco NSF with SSO to provide intrachassis SSO at Layers 2 to 4
C) Define the Data Center as an OSPF NSSA area, advertising a default route into the DC and summarizing the routes out of the
NSSA to the Campus Core
D) Implement network services for the Data Center as a separate services layer using an active/active model that is more
predictable in failure conditions

Answer: B
Explanation:

Incorrect choices:
A) "Construct a hierarchical network design using EtherChannel between a server and two VDCs from the same physical switch"
False: Never do this, as the hardware might fail. See page 240: "It is not recommended to use two VDCs from the same physical
switch to construct any single layer of a hierarchical network design. For example, if you use two different VDCs inside the same
physical switch as the two aggregation switches in an aggregation block, the whole aggregation block will fail when the physical
switch fails. Distinct, physical box redundancy within a network layer is a key characteristic that contributes to the high
availability of the hierarchical network design reference model.
C) "Define the Data Center as an OSPF NSSA area, advertising a default route into the DC and summarizing the routes out of the
NSSA to the Campus Core"
False: See page 220: "Use a not-so-stubby area (NSSA) from the core down. It limits link-state advertisement (LSA) propagation
but permits route redistribution. You can advertise the default route into the aggregation layer and summarize the routes coming
out of the NSSA.
D) "Implement network services for the Data Center as a separate services layer using an active/active model that is more
predictable in failure conditions"
False: Active/Standby is more predictable. See page 229: "The active/standby model is simpler to deploy and more predictable
in failure conditions, because the aggregate load can never exceed the capacity of a single service chain. The active/active model

allows all available hardware resources to be used. However, the active/active model is more complex. Also, it is important to
keep different active contexts that are combined into a service chain in the same physical service chassis or chain of appliances. If
a single service chain consists of active contexts that are spread across multiple service chassis, it can result in unnecessary
additional load on the link between the aggregation switches or the links between the aggregation and services layer.

Question #26:
When designing remote access to the Enterprise Campus network for teleworkers and mobile workers, which of the following
should the designer consider?

A) It is recommended to place the VPN termination device in line with the Enterprise Edge firewall, with ingress traffic limited
to SSL only
B) Maintaining access rules, based on the source IP of the client, on an internal firewall drawn from a headend RADIUS server
is the most secure deployment
C) VPN Headend routing using Reverse Route Injection (RRI) with distribution is recommended when the remote user
community is small and dedicated DHCP scopes are in place
D) Clientless SSL VPNs provide more granular access control than SSL VPN clients (thin or thick), including at Layer 7

Answer: D
Explanation:
D) "Clientless SSL VPNs provide more granular access control than SSL VPN clients (thin or thick), including at Layer7"
Correct: See below notes.

Incorrect choices:
A) "It is recommended to place the VPN termination device in line with the Enterprise Edge firewall, with ingress traffic limited to SSL
only"
Incorrect: ingress traffic limited to SSL and IPsec. See page 465: "VPN Termination Device and Firewall Placement The VPN
termination device can be deployed in parallel with a firewall, inline with a firewall, or in a demilitarized zone (DMZ). For best
security, a recommended practice is to place the public side of the VPN termination device in a DMZ behind a firewall.
Note: The firewall could be the VPN termination device.
The firewall policies should limit traffic coming in to the VPN termination device to IPsec and SSL. Any IPsec tunnels should
terminate on the VPN appliance. For extra security, send traffic through another firewall for additional inspection after it passes
through the VPN appliance.

You should also enforce endpoint security compliance on the remote system.

B) "Maintaining access rules, based on the source IP of the client, on an internal firewall drawn from a headend RADIUS server is the
most secure deployment"
Incorrect: See the included section from page 466:

C) "VPN Headend routing using Reverse Route Injection (RRI) with distribution is recommended when the remote user community is
small and dedicated DHCP scopes are in place"
Incorrect: RRI is appropriate for larger organizations, not smaller organizations. See page 465: "Note: Smaller organizations
typically configure a few static routes to point to the VPN device and do not need RRI. The RRI function is usually of more benefit
to larger organizations that have more complex requirements (for example, organizations that do not have a dedicated scope of
Dynamic Host Configuration Protocol [DHCP] addresses that are associated to a specific VPN headend).

Question #27:
Which of the following is most accurate with respect to designing high availability within the Enterprise Campus network?
A) High availability at and between the Distribution and Access layers is as simple as redundant switches and redundant Layer
3 connections
B) Non-deterministic traffic patterns require a highly available modular topology design
C) Distribution layer high availability design includes redundant switches and Layer 3 equal-cost load sharing connections to
the switched Access and routed Core layers, with a Layer 3 link between the Distribution switches to support
summarization of routing information from the Distribution to the Core
D) Default gateway redundancy allows for the failure of a redundant Distribution switch without affecting endpoint
connectivity

Answer: D
Explanation:
See bottom of page 26

Incorrect choices:
A) "High availability at and between the Distribution and Access layers is as simple as redundant switches and redundant Layer 3
connections"
Incorrect: It's never "as simple as" anything. This is obviously wrong.

B) "Non-deterministic traffic patterns require a highly available modular topology design"


Incorrect: This choice is difficult to document, as it mixes a couple of concepts.
One) "Non-deterministic traffic patterns" are an issue for QoS. See page 647.
Two) "modular topology design" helps promote deterministic traffic patterns, not non-deterministic patterns. See
http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Campus/HA_campus_DG/hacampusdg.html, under the
Hierarchical Network Design Model section

C) "Distribution layer high availability design includes redundant switches and Layer 3 equal-cost load sharing connections to the
switched Access and routed Core layers, with a Layer 3 link between the Distribution switches to support summarization of routing
information from the Distribution to the Core"
Incorrect: They're being tricky and vague. This answer isn't bad, but it's not the best choice. It is difficult to support the idea that
a Layer 3 link between Distribution Switches has any impact on route summarization to the core.
This is the correct description, from page 26: "High availability is typically provided through dual paths from the distribution layer
to the core and from the access layer to the distribution layer. Layer 3 equal-cost load sharing allows both uplinks from the
distribution to the core layer to be used."

Question #28:
Which of the following should the Enterprise Campus network designer consider with respect to Video traffic?

A) While it is expected that the sum of all forms of video traffic will grow to over 90% by 2013, the Enterprise will be spared
this rapid adoption of video by consumers through a traditional top-down approach
B) Avoid bandwidth starvation due to video traffic by preventing and controlling the wide adoption of unsupported video
applications
C) Which traffic model is in use, the flow direction for the traffic streams between the application components, and the traffic
trends for each video application
D) Streaming video applications are sensitive to delay while interactive video applications, using TCP as the underlying
transport, are fairly tolerant of delay and jitter

Answer: C
Explanation:
All sources agree that the correct answer is "Which traffic model is in use, the flow direction for the traffic streams between the
application components, and the traffic trends for each video application," but I can find very little to support this answer.
This appears to be one of those Cisco questions where one must choose the only answer that isn't wrong.
The choices:
A) While it is expected that the sum of all forms of video traffic will grow to over 90% by 2013, the Enterprise will be spared this
rapid adoption of video by consumers through a traditional top-down approach
It can be argued that this answer does not make much sense, as there is no good reason why the Enterprise would not be subject
to these conditions (if you accept that this prediction is even valid).
B) Avoid bandwidth starvation due to video traffic by preventing and controlling the wide adoption of unsupported video
applications
It can be argued that this answer, also, does not make much sense, in that most enterprises would not have a problem with wide
adoption of unsupported video applications. An enterprise might need to control the widespread use of YouTube, or possibly
BitTorrent. However, the nature of either of those problems is not related to bandwidth starvation; they are more problems of
basic security and group policy at the desktop level. Maybe I'm misinterpreting the answer.
C) Which traffic model is in use, the flow direction for the traffic streams between the application components, and the traffic
trends for each video application
I can't see anything wrong with this.
D) Streaming video applications are sensitive to delay while interactive video applications, using TCP as the underlying transport,
are fairly tolerant of delay and jitter
We know that this is incorrect:
From Designing Cisco Network Service Architectures (ARCH), Third Edition by John Tiso, Page 9
Cisco Unified Communications: Includes voice, video, and web conferencing solutions.
Desktop video conferencing solutions that are integrated into multimedia collaboration tools can enable higher productivity
through more effective meetings. This type of application is interactive in nature, and as such, it is sensitive to network delay,
jitter, and packet loss.
Can anyone offer any further information that either 1) supports answer C, or 2) refutes answers A and B?
Also, this page is very good:
http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Video/IPVS/IPVS_DG/IPVS-DesignGuide/IPVSchap4.html

Question #29:
Which technology is an example of the need for a designer to clearly define features and desired performance when designing
advanced WAN services with a service provider?
A) FHRP to remote branches
B) Layer 3 MPLS VPNs secure routing
C) Control protocols (for example Spanning Tree Protocol) for a Layer 3 MPLS service
D) Intrusion prevention, QoS, and stateful firewall support network wide

Answer: B
Explanation:
Let's think about the three incorrect choices:
A) "FHRP to remote branches"
C) "Control protocols (for example Spanning Tree Protocol) for a Layer 3 MPLS service"
Neither of these makes sense.
D) "Intrusion prevention, QoS, and stateful firewall support network wide"
This sounds OK, but it's not quite right. QoS is important for voice and video, even network-wide, but you don't need QoS for
everything, network-wide. I'm not sure what "stateful firewall support" is.

Question #30:
Which of the following is true concerning best design practices at the switched Access layer of the traditional Layer 2 Enterprise
Campus Network?
A) Cisco NSF with SSO and redundant supervisors has the most impact on the campus in the Access layer
B) Provide host-level redundancy by connecting each end device to 2 separate Access switches
C) Offer default gateway redundancy by using dual connections from Access switches to redundant Distribution layer switches
using a FHRP
D) Include a link between two Access switches to support summarization of routing information from the Access to the
Distribution layer

Answer: A
Explanation:
Correct: "Cisco NSF with SSO and redundant supervisors has the most impact on the campus in the
Access layer"
Incorrect: "Provide host-level redundancy by connecting each end device to 2 separate Access switches"
Ridiculous. Are you going to dual-NIC every desktop? What about IP phones?
Incorrect: "Offer default gateway redundancy by using dual connections from Access switches to redundant Distribution layer
switches using a FHRP".
This looks OK, but NSF with SSO is a better solution.
Incorrect: "Include a link between two Access switches to support summarization of routing information from the Access to the
Distribution layer"
What? Does not improve/enable route summarization; does not support "summarization from Access to Distribution".

Question #31:
Which protocol will not adhere to the design requirement of the control plane being either separated or combined within a
virtualization technology?
A) FHRP
B) STP
C) CEF
D) NSF with SSO

Answer: B
Explanation:

Question #32:
Which of the following features might be used by the Enterprise Campus network designer as a means of route filtering?
A) IPv4 static routes
B) Route tagging using a route map in an ACL
C) Tagging routes using the BGP MED
D) EIGRP stub networks

Answer: D
Explanation:

Question #33:
The network designer needs to consider the number of multicast applications and sources in the
network to provide the most robust network possible. Which of the following is a consideration the designer must also address?
A) The IGPs should utilize authentication to avoid being the most vulnerable component
B) With SSM source or receiver attacks are not possible
C) With Shared Trees access control is always applied at the RP
D) Limit the rate of Register messages to the RP to prevent specific hosts from being attacked on a PIM-SM network

Answer: B
Explanation:

Question #34:
When considering the design of the E-Commerce topology which of the following are true?
A) One-armed SLB design with multiple security contexts removes the need for a separate firewall in the core layer
B) Two-firewall-layer SLB design considers the aggregation and access layers to be trusted zones, requiring no security
between the web, application, and database zones
C) One-armed SLB design with two firewall layers ensures that non load-balanced traffic still traverses the ACE so that the
health and performance of the servers is still being monitored
D) In all cases there will be configuration requirements for direct access to any servers or for nonload-balanced sessions
initiated by the servers

Answer: A
Explanation:
Correct Answer:

Incorrect: "Two-firewall-layer SLB design considers the aggregation and access layers to be trusted zones, requiring no security
between the web, application, and database zones."
"considers the aggregation and access layers to be trusted zones" describes the base firewall design, not the two-firewall-layer
design.

Incorrect: "One-armed SLB design with two firewall layers ensures that non load-balanced traffic still traverses the ACE so that the
health and performance of the servers is still being monitored."
From Designing Cisco Network Service Architectures (ARCH), Third Edition by John Tiso, page 396:

Incorrect: "In all cases there will be configuration requirements for direct access to any servers or for nonload-balanced sessions
initiated by the servers"

Question #35:
Distinct, physical redundancy within a network layer is a key characteristic that contributes to the high availability of the hierarchical
network design. Which of the following is not an example of this model?
A) SAN extension with dual fabrics such as a yellow VSAN and a blue VSAN utilized via multipath software
B) Redundant power supplies and hot-swappable fan trays in Aggregate switches
C) A single SAN fabric with redundant uplinks and switches
D) Servers using network adapter teaming software connected to dual-attached access switches

Answer: C
Explanation:
This is pretty obvious, right? The selected answer is the only choice that highlights an isolated component. Every other choice
explicitly states redundant components.

Question #36:
Which four Cisco proprietary Spanning Tree Protocol enhancements are supported with rapid per-VLAN Spanning-Tree plus?
(Choose four.)
A) PortFast
B) UplinkFast
C) loop guard
D) root guard
E) BPDU guard
F) BackboneFast

Answer: A, C, D, E
Explanation:
This answer is a little tricky. From page 40, we have the answer that satisfies the question:

So, in this list we see that the four items with an asterisk are the correct answers to this question.

However, on pages 71 and 224, we have the following:

Finally, see these notes from https://supportforums.cisco.com/discussion/11152846/rapid-pvst-uplink-fast-backbone-fast


There seems to be an ongoing misunderstanding about the relation of BackboneFast and UplinkFast to RSTP.
The RSTP alone, by itself, has built-in mechanisms that provide a similar functionality to Cisco's proprietary BackboneFast and
UplinkFast STP extensions. Therefore, activating the RSTP (or the RPVST/RPVST+) immediately gives you the advantages of
BackboneFast and UplinkFast. However, the BackboneFast and UplinkFast themselves are distinct extensions and they are not
activated along with RSTP. In fact, even if you configured them together with running RSTP, they would not be active because RSTP
already provides their functionality, although by slightly different mechanisms. That fact is confirmed by your output of show span
sum
To sum it up, the RSTP already incorporates the functionality of UplinkFast and BackboneFast (although not implemented exactly in
the way the UplinkFast and BackboneFast implement it), and when you activate RSTP, you get UplinkFast-like and BackboneFast-like
functionality. The UplinkFast and BackboneFast alone, however, are not and will not be activated.

So, we can conclude that the four correct answers are PortFast, loop guard, root guard, and BPDU guard.

Question #37:
Which two of these are correct regarding the recommended practice for distribution layer design? (Choose two.)
A) use a redundant link to the core
B) use a Layer 2 link between distribution switches
C) never use a redundant link to the core because of convergence issues
D) use a Layer 3 link between distribution switches with route summarization
E) use a Layer 3 link between distribution switches without route summarization

Answer: A, D
Explanation:
These answers are in two groupings:
"use a redundant link to the core"
"never use a redundant link to the core because of convergence issues"
and
"use a Layer 2 link between distribution switches"
"use a Layer 3 link between distribution switches with route summarization"
"use a Layer 3 link between distribution switches without route summarization"
The answer for the first grouping is obvious: always use properly designed redundancy, where budget allows. To "never use a
redundant link to the core because of convergence issues" is not correct.
For the second grouping:

Further support can be found at:


http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Campus/HA_campus_DG/hacampusdg.html#wp1107746

Question #38:
Which three of these Metro service types map to E-Line (versus E-LAN) services that are defined by the Metro Ethernet Forum
(MEF)? (Choose three.)
A) Ethernet Private Line
B) Ethernet Wire Service
C) Ethernet Relay Service
D) Ethernet Multipoint Service
E) Ethernet Relay Multipoint Service

Answer: A, B, C
Explanation:
Try to remember that any answer with "Multipoint" is incorrect.

Question #39:
Which two design concerns must be addressed when designing a multicast implementation? (Choose two.)
A) only the low-order 23 bits of the MAC address are used to map IP addresses
B) only the low-order 24 bits of the MAC address are used to map IP addresses
C) only the high-order 23 bits of the MAC address are used to map IP addresses
D) only the low-order 23 bits of the IP address are used to map MAC addresses
E) the 0x01004f MAC address prefix is used for mapping IP addresses to MAC addresses
F) the 0x01005e MAC address prefix is used for mapping IP addresses to MAC addresses

Answer: A, F
Explanation:
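No explanation was provided, so here is a minimal sketch of the mapping behind the two correct answers: the low-order 23 bits of the multicast group IP address are copied into a MAC address that begins with the 01:00:5e prefix, which also means that 32 different group addresses overlap onto each MAC address (a design concern in its own right).

```python
import ipaddress

def multicast_mac(group_ip: str) -> str:
    """Map an IPv4 multicast group address to its Ethernet MAC address.

    Only the low-order 23 bits of the IP address are carried into the
    MAC; the remaining bits come from the fixed 01:00:5e prefix."""
    ip = int(ipaddress.IPv4Address(group_ip))
    low23 = ip & 0x7FFFFF                      # keep the low-order 23 bits
    mac = 0x01005E000000 | low23               # 01:00:5e prefix + 23 bits
    return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -1, -8))

# 32:1 overlap: these two groups map to the same MAC address because they
# differ only in IP bits that are not copied into the MAC.
print(multicast_mac("224.1.1.1"))  # 01:00:5e:01:01:01
print(multicast_mac("225.1.1.1"))  # 01:00:5e:01:01:01
```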

Question #40:
Which two of these are characteristics of multicast routing? (Choose two.)
A) multicast routing uses RPF.
B) multicast routing is connectionless.
C) In multicast routing, the source of a packet is known.
D) When network topologies change, multicast distribution trees are not rebuilt, but use the original path
E) Multicast routing is much like unicast routing, with the only difference being that it has a group of receivers rather than just one destination

Answer: A, C
Explanation:
Incorrect: B: "multicast routing is connectionless"
Multicast is neither connection-oriented, nor connectionless. TCP is connection-oriented, UDP is connectionless. Multicast uses
UDP, but this does not make Multicast connectionless by definition.
Incorrect: D: "When network topologies change, multicast distribution trees are not rebuilt, but use the original path"
PIM Terminology
When a router is forwarding a unicast packet, it looks up the destination address in its routing table and forwards the packet out of
the appropriate interface. However, when forwarding a multicast packet, the router might have to forward the packet out of
multiple interfaces, toward all the receiving hosts.
Multicast routing is connection oriented: Multicast traffic does not flow to the destinations until connection messages are sent
toward the source to set up the flow paths for the traffic.
Multicast-enabled routers use PIM to dynamically create distribution trees that control the path that IP multicast traffic takes
through the network to deliver traffic to all receivers. Building multicast distribution trees via connection messages is a dynamic
process; when network topology changes occur, the distribution trees are rebuilt around failed links.
Incorrect: E: "Multicast routing is much like unicast routing, with the only difference being that it has a group of receivers rather
than just one destination"
With unicast transmission, multiple packets must be sent from a source to reach multiple receivers. In contrast, an IP multicast
source sends a single packet; downstream routers replicate the packets only on links where receiving hosts exist.
An IP multicast group address is the destination address to which packets for the group are sent. A device must be a member of a
group to receive the group's traffic.
Multicast applications can use a variety of models, including one to many or many to many. Using multicast provides advantages
including enhanced efficiency and performance and support for distributed applications. However, because multicast applications
are UDP based, reliability, congestion control, duplicate packets, out-of-sequence packets, and security may become issues.
Correct: "multicast routing uses RPF"
Correct: "In multicast routing, the source of a packet is known"

Question #41:
Which two design recommendations are most appropriate when OSPF is the data center core routing protocol? (Choose two.)
A) Never use passive interfaces.
B) Use NSSA areas from the core down.
C) Use totally stub areas to stop type 3 LSAs.
D) Use the lowest Ethernet interface IP address as the router ID.
E) Manipulate the reference bandwidth.

Answer: B, E
Explanation:
Incorrect choices:
A) "Never use passive interfaces"
Never? Under any circumstances? Should be obviously wrong.
C) "Use totally stub areas to stop type 3 LSAs"
Well, TSAs certainly stop Type 3 LSAs (apart from a default route), but this choice does not specify where to use the TSAs
D) "Use the lowest Ethernet interface IP address as the router ID"
What is an "Ethernet interface IP"? Ethernet is Layer 2. This is intentionally confusing. Best policy is an explicitly configured
Router ID.
Support for correct choices:

Question #42:
Which two design recommendations are most appropriate when EIGRP is the data center core routing protocol? (Choose two.)
A) Summarize data center subnets.
B) Advertise a default route into the data center core from the aggregation layer.
C) Tune the EIGRP timers to enable EIGRP to achieve quicker convergence.
D) Adjust the default bandwidth value to ensure proper bandwidth on all links.

Answer: A, B
Explanation:

Question #43:
Which three Layer 2 access designs have all of their uplinks in an active state? (Choose three.)
A) Flex Links
B) loop-free U
C) looped square
D) looped triangle
E) loop-free inverted U

Answer: B, C, E
Explanation:

Question #44:
Which three statements about Network Attached Storage are correct? (Choose three.)
A) Data is accessed using NFS or CIFS.
B) Data is accessed at the block level.
C) NAS is referred to as captive storage.
D) Storage devices can be shared between servers.
E) A NAS implementation is not as fast as a DAS implementation.

Answer: A, D, E
Explanation:
Correct: "Data is accessed using NFS or CIFS"
Correct: "Storage devices can be shared between servers"
Correct: "A NAS implementation is not as fast as a DAS implementation"
(All three correct answers referenced on page 319)

Incorrect: "Data is accessed at the block level"


SANs access data at the block level.

Incorrect: "NAS is referred to as captive storage"


DAS is referred to as captive storage.

Question #45:
In a collapsed core design, which three benefits are provided by a second-generation Cisco MDS director? (Choose three.)
A) a higher fan-out ratio
B) fully redundant switches
C) 100 percent port efficiency
D) all ISLs contained within a single chassis
E) higher latency and throughput than a core-edge design switch

Answer: B, C, D
Explanation:

Incorrect answers:
A) "a higher fan-out ratio"
Collapsed-Core provides a lower fan-out ratio.
See page 333:

E) "higher latency and throughput than a core-edge design switch"


Incorrect: Collapsed-Core is one chassis, which moves everything onto the backplane, not the wire. This will improve throughput
and latency, not degrade it.

Question #46:
Which two statements about both FCIP and iSCSI are correct? (Choose two.)
A) They support file-level storage for remote devices.
B) They require high throughput with low latency and low jitter.
C) Their purpose is to provide connectivity between host and storage.
D) They support block-level storage for remote devices.
E) Their purpose is to provide connectivity between separate wide-area SANs.

Answer: B, D
Explanation:

Question #47:
Which three statements about zoning are correct? (Choose three.)
A) Zoning increases security.
B) DNS queries are used for software zoning.
C) Software zoning is more secure than hardware zoning.
D) When using zones and VSANs together, the zone is created first.
E) Zoning requires that VSANs be established before it becomes operational.

Answer: A, B, E
Explanation:

Question #48:
What are two characteristics of Server Load Balancing router mode? (Choose two.)
A) The design supports multiple server subnets.
B) An end-user sees the IP address of the real server.
C) SLB routes between the outside and inside subnets.
D) The source or destination MAC address is rewritten, but the IP addresses left alone.
E) SLB acts as a "bump in the wire" between servers and upstream firewall or Layer 3 devices.

Answer: A, C
Explanation:
Correct: "The design supports multiple server subnets"
Incorrect: "An end-user sees the IP address of the real server"
Correct: "SLB routes between the outside and inside subnets"
Incorrect: "The source or destination MAC address is rewritten, but the IP addresses left alone"
Incorrect: "SLB acts as a "bump in the wire" between servers and upstream firewall or Layer 3 devices"
This statement is accurate for bridging mode.

Question #49:
What are two characteristics of Cisco Global Site Selector? (Choose two.)
A) It helps verify end-to-end path availability.
B) It provides traffic rerouting in case of disaster.
C) HSRP, GLBP, and VRRP can be clients of GSS.
D) BGP must be the routing protocol between the distributed data centers.
E) DNS responsiveness is improved by providing centralized domain management.

Answer: B, E
Explanation:

See also:
http://www.cisco.com/c/en/us/td/docs/app_ntwk_services/data_center_app_services/gss4400series/v13/configuration/cli/gslb/guide/cli_gslb/Intro.html#wp1097368
GSS Overview
Server load-balancing devices, such as the Cisco Content Services Switch (CSS), Cisco Content Switching Module (CSM), and Cisco
Application Control Engine (ACE) that are connected to a corporate LAN or the Internet, can balance content requests among two or
more servers containing the same content. Server load-balancing devices ensure that the content consumer is directed to the host
that is best suited to handle that consumer's request.
Organizations with a global reach or businesses that provide web and application hosting services require network devices that can
perform complex request routing to two or more redundant, geographically dispersed data centers. These network devices need to
provide fast response times and disaster recovery and failover protection through global server load balancing, or GSLB.
The Cisco Global Site Selector (GSS) platform allows you to leverage global content deployment across multiple distributed and
mirrored data locations, optimizing site selection, improving Domain Name System (DNS) responsiveness, and ensuring data center
availability.
The GSS is inserted into the traditional DNS routing hierarchy and is closely integrated with the Cisco CSS, Cisco CSM, Cisco ACE, or
third-party server load balancers (SLBs) to monitor the health and load of the SLBs in your data centers. The GSS uses this
information and user-specified routing algorithms to select the best-suited and least-loaded data center in real time.

The GSS can detect site outages, ensuring that web-based applications are always online and that customer requests to data
centers that suddenly go offline are quickly rerouted to available resources.
The GSS offloads tasks from traditional DNS servers by taking control of the domain resolution process for parts of your domain
name space, responding to requests at a rate of thousands of requests per second.

Question #50:
Which three statements about firewall modes are correct? (Choose three.)
A) A firewall in routed mode has one IP address.
B) A firewall in transparent mode has one IP address.
C) In routed mode, the firewall is considered to be a Layer 2 device.
D) In routed mode, the firewall is considered to be a Layer 3 device.
E) In transparent mode, the firewall is considered to be a Layer 2 device.
F) In transparent mode, the firewall is considered to be a Layer 3 device.

Answer: B, D, E
Explanation:

Question #51:
Which two of these correctly describe asymmetric routing and firewalls? (Choose two.)
A) only operational in routed mode
B) only operational in transparent mode
C) only eight interfaces can belong to an asymmetric routing group
D) operational in both failover and non-failover configurations
E) only operational when the firewall has been configured for failover

Answer: C, D
Explanation:

Question #52:
Which of the following two statements about Cisco NSF and SSO are the most relevant to the network designer? (Choose two.)
A) You can reduce outages to 1 to 3 seconds by using SSO in a Layer 2 environment or Cisco NSF with SSO in a Layer 3
environment.
B) SSO and NSF each require the devices to either be graceful restart-capable or graceful-aware.
C) In a fully redundant topology adding redundant supervisors with NSF and SSO may cause longer convergence times than
single supervisors with tuned IGP timers
D) The primary deployment scenario for Cisco NSF with SSO is in the Distribution and Core layers.
E) Cisco NSF-aware neighbor relationships are independent of any tuned IGP timers

Answer: A, C
Explanation:
A) Correct: "You can reduce outages to 1 to 3 seconds by using SSO in a Layer 2 environment or Cisco NSF with SSO in a Layer 3
environment"
http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Campus/HA_campus_DG/hacampusdg.html
See near the end of this section
Network and In-the-Box Redundancy
When designing a campus network, the network engineer needs to plan the optimal use of the highly redundant devices. Careful
consideration should be given as to when and where to make an investment in redundancy to create a resilient and highly available
network.
As shown in Figure 6, the hierarchical network model consists of two actively forwarding core nodes, with sufficient bandwidth and
capacity to service the entire network in the event of a failure of one of the nodes. This model also requires a redundant distribution
pair supporting each distribution building block. Similarly to the core, the distribution layer is engineered with sufficient bandwidth
and capacity so that the complete failure of one of the distribution nodes does not impact the performance of the network from a
bandwidth or switching capacity perspective.

Campus network devices can currently provide a high level of availability within the individual nodes. The Cisco Catalyst 6500 and
4500 switches can support redundant supervisor engines and provide L2 Stateful Switchover (SSO), which ensures that the standby
supervisor engine is synchronized from an L2 perspective and can quickly assume L2 forwarding responsibilities in the event of a
supervisor failure.
The Catalyst 6500 also provides L3 Non-Stop Forwarding (NSF), which allows the redundant supervisor to assume L3 forwarding
responsibilities without resetting or re-establishing neighbor relationships with the surrounding L3 peers in the event of the failure
of the primary supervisor.
When designing a network for optimum high availability, it is tempting to add redundant supervisors to the redundant topology
in an attempt to achieve even higher availability. However, adding redundant supervisors to redundant core and distribution
layers of the network can increase the convergence time in the event of a supervisor failure.
In the hierarchical model, the core and distribution nodes are connected by point-to-point L3 routed fiber optic links. This means
that the primary method of convergence for core or distribution node failure is loss of link. If a supervisor fails on a non-redundant
node, the links fail and the network converges around the outage through the second core or distribution node. This allows the
network to converge in 60-200 milliseconds for EIGRP and OSPF.
When redundant supervisors are introduced, the links are not dropped during an SSO or NSF convergence event if a supervisor fails.
Traffic is lost while SSO completes, or indirect detection of the failure occurs. SSO recovers in 1-3 seconds, depending on the physical
configuration of the device in question. L3 recovery using NSF happens after the SSO convergence event, minimizing L3 disruption and
convergence. For the same events, where 60-200 milliseconds of packet loss occurred without redundant supervisors when dual
supervisor nodes were used in the core or distribution, 1.8 seconds of loss was measured.
The access layer of the network is typically a single point of failure, as shown in Figure 7.

While the access nodes are dual connected to the distribution layer, it is not typical for endpoints on the network to be dual
connected to redundant access layer switches (except in the data center). For this reason, SSO provides increased availability when
redundant supervisors are used in the access layer and the L2/L3 boundary is in the distribution layer of the network. In this
topology, SSO provides for protection against supervisor hardware or software failure with 1-3 seconds of packet loss and no
network convergence. Without SSO and a single supervisor, devices serviced by this access switch would experience a total network
outage until the supervisor was physically replaced or, in the case of a software failure, until the unit reloaded.
If the L2/L3 boundary is in the access layer of the network, a design in which a routing protocol is running in the access layer, then
NSF with SSO provides an increased level of availability. Similarly to the L2/L3 distribution layer topology, NSF with SSO provides 1-3
seconds of packet loss without network convergence, compared to a total outage until a failed supervisor is physically replaced, for
the routed access topology.
Campus topologies with redundant network paths can converge faster than topologies that depend on redundant supervisors for
convergence. NSF/SSO provide the most benefit in environments where single points of failure exist. In the campus topology, that is
the access layer. If you have an L2 access layer design, redundant supervisors with SSO provide the most benefit. If you have a
routed access layer design, redundant supervisors with NSF with SSO provide the most benefit.

C) Correct: "In a fully redundant topology adding redundant supervisors with NSF and SSO may cause longer convergence times
than single supervisors with tuned IGP timers"

Question #53:
Refer to the exhibit.

Which of the following two are advantages of Server virtualization using VMware vSphere? (Choose two)
A) Retains the one-to-one relationship between switch ports and functional servers
B) Enables the live migration of a virtual server from one physical server to another without disruption to users or loss of
services
C) The access layer of the network moves into the vSphere ESX servers, providing streamlined vSphere management
D) Provides management functions including the ability to run scripts and to install third-party agents for hardware
monitoring, backup, or systems management
E) New functional servers can be deployed with minimal physical changes on the network

Answer: B, D

Explanation:
http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/dc-partner-vmware/c22-59961701_vSphere_sOview.pdf
Improved Application Availability
The combined solution provides exceptional application availability by facilitating planned downtime and reducing the effects of
unplanned downtime.
Planned Downtime: VMware VMotion eliminates the need to schedule application downtime for planned physical server
maintenance, instead providing live migration of virtual machines to other servers with no disruption to users or loss of service,
and with Cisco VN-Link technology moving network security and QoS attributes and policies with the virtual machines. VMware
Storage VMotion performs live migration of virtual machine disks with no disruption to users or loss of service. The low latency and
high bandwidth of the Cisco Unified Computing System unified fabric contributes to the performance of VMware Storage VMotion.
From http://www.vmware.com/files/pdf/techpaper/vSphere-5-ESXi-Operations-Guide.pdf
Architecture
In the original ESX architecture, the virtualization kernel (VMkernel) is augmented by a management partition known as the console
operating system (COS) or service console. The primary purpose of the COS is to provide a management interface with the host.
Various VMware management agents are deployed in the COS, along with other infrastructure service agents (for example, name
service, time service, logging, and so on). In this architecture, many customers deploy other agents from third parties to provide a
particular functionality, such as hardware monitoring and systems management. Furthermore, individual administrative users log
in to the COS to run configuration and diagnostic commands and scripts.

Question #54:
Which of the following two are effective and simple means of employing route summarization within the Enterprise Campus
network? (Choose two)
A) A default route (0.0.0.0 /0) advertised dynamically into the rest of the network
B) Route filtering to manage traffic flows in the network, avoid inappropriate transit traffic through remote nodes, and provide
a defense against inaccurate or inappropriate routing updates
C) Use manual split horizon
D) Use a structured hierarchical topology to control the propagation of EIGRP queries
E) Open Shortest Path First (OSPF) stub areas

Answer: A, E
Explanation: OK, this one is obvious.
Correct: "A default route (0.0.0.0 /0) advertised dynamically into the rest of the network"
Simple and effective
Incorrect: "Route filtering to manage traffic flows in the network, avoid inappropriate transit traffic through remote nodes, and
provide a defense against inaccurate or inappropriate routing updates"
Not simple
Incorrect: "Use manual split horizon"
What? Since there is no such thing, this is neither simple, nor effective.
Incorrect: "Use a structured hierarchical topology to control the propagation of EIGRP queries"
What? This sounds like properly designed subnetting, but that's not how you "control the propagation of EIGRP queries". That is
done with stub routing and default routes.
Correct: "Open Shortest Path First (OSPF) stub areas"
Simple and effective

Question #55:
From a design perspective which two of the following OSPF statements are most relevant? (Choose two)
A) OSPF stub areas can be thought of as a simple form of summarization
B) OSPF cannot filter intra-area routes
C) An ABR can only exist in two areas - the backbone and one adjacent area
D) Performance issues in the Backbone area can be offset by allowing some traffic to transit a non-backbone area
E) The size of an area (the LSDB) will be constrained by the size of the IP MTU

Answer: A, B
Explanation:
OK, this one is fairly obvious.
Correct: "OSPF stub areas can be thought of as a simple form of summarization"
Correct: "OSPF cannot filter intra-area routes"

Incorrect: "An ABR can only exist in two areas - the backbone and one adjacent area"
False: ABRs can support more than one non-backbone area. Also: Transit Areas

Incorrect: "Performance issues in the Backbone area can be offset by allowing some traffic to transit a non-backbone area"
False: Ugh, that's just stupid. When traversing a transit area, all affected traffic suffers, and the ABRs have to work harder.
Transit Areas should never be more than a temporary solution.

Incorrect: "The size of an area (the LSDB) will be constrained by the size of the IP MTU"
False: How in the hell would Maximum Transmission Unit size affect the size of the Link State Data Base? They're just screwing
with us now. (Actually, the idea that they're trying to confuse us with is that Stub Areas and Totally Stubby Areas constrain the
size of the LSDB).

Question #56:
OSPF stub areas are an important tool for the Network designer; which of the following two should be considered when utilizing
OSPF stub areas? (Choose two)
A) OSPF stub areas increase the size of the LSDB with the addition of Type 3 and 5 LSAs
B) OSPF not so stubby areas are particularly useful as a simpler form of summarization
C) OSPF stub areas are always insulated from external changes
D) OSPF totally stubby areas cannot distinguish among ABRs for the best route to destinations outside the area
E) OSPF stub areas can distinguish among ASBRs for destinations that are external to the OSPF domain

Answer: C, D
Explanation:

Question #57:
Which two protocol characteristics should be most considered when designing a single unified fabric for the Data Center? (Choose
two.)
A) FCIP or FCoE allow for easier integration by using the Fibre Channel Protocol (FCP) and Fibre Channel framing
B) iSCSI uses a special EtherType and an additional header containing additional control information
C) FCIP and iSCSI have higher overhead than FCoE owing to TCP/IP
D) FCoE was initially developed to be used as a switch-to-switch protocol, while FCIP is primarily meant to be used as an access
layer protocol to connect hosts and storage to a Fibre Channel SAN
E) FCoE requires gateway functionality to integrate into an existing Fibre Channel network

Answer: A, C
Explanation:
Correct: "FCIP or FCoE allow for easier integration by using the Fibre Channel Protocol (FCP) and Fibre Channel framing"

Incorrect: "iSCSI uses a special EtherType and an additional header containing additional control information"
False: This describes FCoE.

Correct: "FCIP and iSCSI has higher overhead than FCoE owing to TCP/IP"

Incorrect: "FCoE was initially developed to be used as a switch-to-switch protocol, while FCIP is primarily meant to be used as an
access layer protocol to connect hosts and storage to a Fibre Channel SAN"
These descriptions are inverted.

Incorrect: "FCoE requires gateway functionality to integrate into an existing Fibre Channel network"
False: this describes iSCSI.

Question #58:

Answer:

Question #59:

Answer:

Question #60:

Answer:

Question #61:

Answer:

Question #62:

Answer:

Question #63:
Which option describes the effect of using softphones instead of VoIP handsets on QoS implementation for the voice traffic?
A) It provides a Layer 2 CoS marking in the frames that can be used for QoS implementation.
B) Using softphones means that 802.1Q tagging must be configured between the PC and the switch.
C) The voice traffic of softphones is mixed with data traffic of PC on the access VLAN.
D) By using softphones, the implementation of a QoS depends only on trusting DSCP markings set by the PC.

Answer: C
Explanation:

Question #64:
A) source port
B) ToS
C) protocol type
D) source IP

Answer: B
Explanation:

Question #65:
Which three options are features of IP SLAs? (Choose three.)
A) enables verification of service guarantees
B) dynamically adjusts QoS parameters
C) validates network performance and metrics
D) initiates failover for saturated links
E) proactively identifies network related issues

Answer: A, C, E
Explanation:

Question #66:
Which three options are benefits of using VRFs within an enterprise network? (Choose three.)
A) simplifies configuration and troubleshooting
B) enables partitioning of network resources
C) enhances security between user groups
D) provides additional redundancy at Layer 3
E) allows for routing and forwarding separation
F) improves routing protocol convergence

Answer: B, C, E
Explanation:
Incorrect: "simplifies configuration and troubleshooting"
VRFs are not simple.
Correct: "enables partitioning of network resources"

Correct: "enhances security between user groups"


This goes without saying, right? User groups can now be isolated at Layer 3, as well as layer 2.

Incorrect: "provides additional redundancy at Layer 3"


VRFs isolate networks, not make them redundant.

Correct: "allows for routing and forwarding separation"

Incorrect: "improves routing protocol convergence"

VRFs improve security, and allow virtualization of distinct routing domains on a common platform. There are costs associated
with these benefits, namely complexity and higher CPU usage. Throughput and convergence will suffer, not improve.
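As a rough mental model of "routing and forwarding separation", think of one routing table per VRF rather than one per device. The toy Python sketch below is my own illustration, not from the text; the VRF names and prefixes are made up. It just shows how a lookup is scoped to a VRF, so the same prefix can live in two VRFs without conflict:

# Toy model: each VRF has its own routing table, so identical or even
# overlapping prefixes can coexist without interfering with each other.
routing_tables = {
    "GUEST": {"0.0.0.0/0": "10.1.1.1", "10.10.0.0/16": "10.1.1.2"},
    "CORP":  {"0.0.0.0/0": "10.2.2.1", "10.10.0.0/16": "10.2.2.2"},
}

def lookup(vrf: str, prefix: str) -> str:
    # Exact-prefix lookup only; real longest-match logic is omitted here.
    return routing_tables[vrf].get(prefix, "no route")

print(lookup("GUEST", "10.10.0.0/16"))  # 10.1.1.2
print(lookup("CORP", "10.10.0.0/16"))   # 10.2.2.2 - same prefix, separate table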

Question #67:
Which option is a common cause of congestion found in a campus network?
A) input serialization delay
B) output serialization delay
C) Rx-queue starvation
D) Tx-queue starvation

Answer: D
Explanation:

Question #68:
Which two protocols are used for high availability in enterprise networks? (Choose two.)
A) BGP
B) GLBP
C) RSTP
D) VRRP
E) OSPF

Answer: B, D
Explanation:
BGP = Exterior Gateway Routing Protocol
GLBP = First Hop Redundancy Protocol
RSTP = Layer 2 Loop Prevention Protocol
VRRP = First Hop Redundancy Protocol
OSPF = Interior Gateway Routing Protocol
While BGP, RSTP, and OSPF may well be found in highly available networks, the FHRP protocols are specifically related to High
Availability, as they provide gateway redundancy.

Question #69:
Which three major points are important for network virtualization to separate logical networks on a shared physical infrastructure?
(Choose three.)
A) VLANs
B) data plane
C) control plane
D) VPNs
E) VSANs
F) management plane

Answer: B, C, F
Explanation:
The incorrect answers describe logical entities that are created by virtualization. The correct answers describe entities that need to
be virtualized in order to achieve Network Virtualization.

Question #70:
Which VRF component ensures control plane separation between the different Layer 3 VPNs?
A) FIB
B) routing protocol instance
C) RIB
D) a subset of the router interfaces

Answer: B
Explanation:

Question #71:
Which option is the Cisco recommendation for data oversubscription for access ports on the access-to-distribution uplink?
A) 4 to 1
B) 20 to 1
C) 16 to 1
D) 10 to 1

Answer: B
Explanation:
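The 20:1 figure is a ratio of potential access-port bandwidth to uplink bandwidth on the access-to-distribution link. A quick illustrative calculation (the port counts and speeds below are an invented example, not from the text):

# Invented example: five 48-port gigabit access switches sharing a total
# of 12 Gbps of uplink bandwidth toward the distribution layer.
access_ports = 5 * 48        # 240 access ports
access_port_gbps = 1.0
uplink_gbps = 12.0

oversubscription = (access_ports * access_port_gbps) / uplink_gbps
print(f"{oversubscription:.0f} to 1")  # prints "20 to 1"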

Question #72:
Which two modes does LACP support? (Choose two.)
A) On
B) Passive
C) Associated
D) Link

Answer: A, B
Explanation:
LACP: On, Active, Passive, Off
PAgP: Auto, Desirable, On
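A compact way to remember which LACP mode combinations actually bring up a bundle is sketched below. This is a generic summary of LACP negotiation behavior written as my own illustration, not something quoted from the text:

# Whether an LACP port channel comes up, given the mode on each side.
# A bundle needs at least one side in "active"; "on" is static bundling
# with no negotiation, so it only works against another "on" side.
def lacp_bundle_forms(side_a: str, side_b: str) -> bool:
    if "on" in (side_a, side_b):
        return side_a == side_b == "on"
    return "active" in (side_a, side_b)

assert lacp_bundle_forms("active", "passive")
assert lacp_bundle_forms("active", "active")
assert not lacp_bundle_forms("passive", "passive")   # nobody initiates
assert lacp_bundle_forms("on", "on")
assert not lacp_bundle_forms("on", "active")         # static vs. negotiated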

Question #73:
Which option describes why duplicate IP addresses reside on the same network in Cisco network design?
A) HSRP designed network
B) using Cisco ACE in active/passive mode
C) VRRP designed network
D) running multiple routing protocols

Answer: B
Explanation:
I cannot find any documentation on this.
All of the answers point toward redundancy being the object of the question.
The correct answer is "using Cisco ACE in active/passive mode", which suggests that there are more than one ACE module, and that
they are in a redundant configuration.
The two incorrect FHRP answers also suggest redundancy.
Perhaps the answer is that if the primary (active) ACE module fails, then the standby (passive) module will take over on the same IP
address.
If anyone can find some documentation to support this answer, please respond.

Question #74:
When an enterprise network is designed, which protocol provides redundancy for edge devices in the event of a first-hop failure?
A) ICMP
B) HSRP
C) STP
D) HTTP

Answer: B
Explanation:
ICMP is a messaging protocol
HSRP is a First Hop Redundancy Protocol, providing gateway redundancy
STP prevents bridging loops
HTTP is used for web browsing

Question #75:
Which two ways to support secure transport of multicast traffic are true? (Choose two.)
A) Use spoke-to-spoke design.
B) Use IPsec over GRE tunnel.
C) Use GET VPN.
D) Use NBMA instead of broadcast.
E) Disable encryption for multicast traffic.

Answer: B, C
Explanation:

Question #76:
Which two ways to manage scalability issues inside an IBGP domain with 20 or more routers are recommended? (Choose two.)
A) Configure route reflectors.
B) Use OSPF instead of EIGRP as an underlying routing protocol.
C) Create a full mesh at Layer 1.
D) Configure confederations.
E) Configure static routes between all routers.

Answer: A, D
Explanation:
The text (Designing Cisco Network Service Architectures (ARCH) Foundation Learning Guide, 3rd Ed) never specifically says that iBGP
scaling should begin at 20 routers. However, the section "Designing Scalable BGP Designs" (pages 146-155) describes how to scale
iBGP with route reflectors and confederation.
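To see why scaling becomes a concern around that size, compare the number of iBGP sessions needed for a full mesh against a route-reflector design. This short Python sketch is my own illustration of the standard n(n-1)/2 arithmetic, not something from the text:

# iBGP full mesh needs a session between every pair of routers: n(n-1)/2.
# With a single route reflector, each client peers only with the RR: n-1.
def full_mesh_sessions(n: int) -> int:
    return n * (n - 1) // 2

def route_reflector_sessions(n: int) -> int:
    return n - 1   # redundant RRs would roughly double this

for n in (10, 20, 50, 100):
    print(n, full_mesh_sessions(n), route_reflector_sessions(n))
# At 20 routers: 190 full-mesh sessions versus 19 with a route reflector.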

Question #77:
Which multicast implementation strategy provides load sharing and redundancy by configuring intradomain RPs as MSDP peers?
A) anycast RP
B) auto-RP
C) bootstrap router
D) static RP

Answer: A
Explanation:

Question #78:
Which option maximizes EIGRP scalability?
A) route redistribution
B) route redundancy
C) route filtering
D) route summarization

Answer: D
Explanation:
If you've already passed ROUTE, then this should be obvious: route summarization reduces the number of routes, queries, and updates that EIGRP has to process, which is what allows an EIGRP network to scale.

Question #79:
Which practice is recommended when designing scalable OSPF networks?
A) Maximize the number of routers in an area.
B) Minimize the number of ABRs.
C) Minimize the number of areas supported by an ABR.
D) Maximize the number of router adjacencies.

Answer: C
Explanation:
http://docwiki.cisco.com/wiki/Internetwork_Design_Guide_--_Designing_Large-Scale_IP_Internetworks
The number of areas supported by any one router: A router must run the link-state algorithm for each link-state change that occurs
for every area in which the router resides. Every area border router is in at least two areas (the backbone and one area). In general,
to maximize stability, one router should not be in more than three areas.

Question #80:
Which two options improve BGP scalability in a large autonomous system? (Choose two.)
A) route reflectors
B) route redistribution
C) confederations
D) communities

Answer: A, C
Explanation:
The text (Designing Cisco Network Service Architectures (ARCH) Foundation Learning Guide, 3rd Ed) addresses this in the section
"Designing Scalable BGP Designs" (pages 146-155), and describes how to scale iBGP with route reflectors and confederation.

Question #81:
Which option lists the EIGRP minimum timer settings for hello and dead timers in seconds?
A) 4 and 6
B) 2 and 4
C) 2 and 6
D) both 6

Answer: C
Explanation:
First of all, we should know this from ROUTE.

Question #82:
Which option is the Cisco preferred, most versatile, and highest-performance way to deploy IPv6 in existing IPv4 environments?
A) dual stack
B) hybrid
C) service block
D) dual service

Answer: A
Explanation:

Question #83:
Which option is the preferred and most versatile model to deploy IPv6 in existing IPv4 environments?
A) Hybrid
B) service block
C) dual stack
D) processes

Answer: C
Explanation:

Question #84:
Which router type injects external LSAs into the OSPF database using either other routing protocols or static routes?
A) backbone router
B) ABR
C) internal router
D) designated router
E) ASBR

Answer: E
Explanation:

Question #85:
Given the addresses 10.10.16.0/24 and 10.10.23.0/24, which option is the best summary?
A) 10.10.0.0/16
B) 10.10.8.0/23
C) 10.10.16.0/23
D) 10.10.16.0/21

Answer: D
Explanation:
Basic subnetting. The third-octet values 16 through 23 span exactly eight consecutive /24s starting on an eight-network boundary, and a /21 covers eight /24s, so the tightest summary is 10.10.16.0/21. If you need help, go pull out a CCNA book.
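If you want to sanity-check a summary like this, a short Python sketch using the standard ipaddress module does the arithmetic for you (illustrative only; it only feeds in the two addresses from the question):

import ipaddress

# The two /24s from the question; the full range is 10.10.16.0/24 - 10.10.23.0/24.
first = ipaddress.ip_network("10.10.16.0/24")
last = ipaddress.ip_network("10.10.23.0/24")

# Widen the prefix until one supernet contains both ends of the range.
summary = first
while not (summary.supernet_of(first) and summary.supernet_of(last)):
    summary = summary.supernet()

print(summary)  # 10.10.16.0/21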

Question #86:
Refer to the exhibit.

The network engineer wants to ensure that receiver A does not receive traffic from the video conference.
For multicast traffic, where must the filtering be placed to fulfill that requirement?
A) R1
B) Video Conference
C) A
D) S1
E) R2

Answer: D
Explanation:

Question #87:
Which two VPN solutions extend the routing capabilities of basic IPsec VPNs? (Choose two.)
A) GRE
B) NVI
C) DES
D) VTI
E) AES

Answer: A, D
Explanation:

Question #88:
Which option is an advanced WAN services design consideration for a multipoint architecture that connects two or more customer
devices using Ethernet bridging techniques over an MPLS network?
A) VPLS
B) Metro Ethernet
C) MPLS
D) SONET/SDH

Answer: A
Explanation:

Incorrect choices:
The question asks for a WAN service that runs OVER an MPLS network.
So, MPLS itself is wrong, because you are choosing a service that runs over MPLS.
Metro Ethernet and SONET/SDH are transport technologies that sit lower in the stack; they do not run over an MPLS network.
This leaves VPLS.
This means that you should either learn how VPLS works, or understand that the other three choices are wrong.

Question #89:
Which three options are basic technical metrics that a good SLA includes? (Choose three.)
A) packet loss
B) devices
C) latency
D) clients
E) IP availability
F) distance

Answer: A, C, E
Explanation:
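Packet loss, latency, and IP availability are the kinds of metrics you can actually measure and write into an SLA, which is why they are the correct choices. As a purely illustrative sketch (the probe results and downtime figure below are invented), here is how those three might be computed from raw measurements:

# Invented probe results: round-trip times in ms, None means a lost probe.
rtts = [12.0, 14.5, None, 13.2, 15.0, None, 12.8, 13.9, 14.1, 13.0]
downtime_minutes_per_month = 20

received = [r for r in rtts if r is not None]
packet_loss_pct = 100.0 * (len(rtts) - len(received)) / len(rtts)
avg_latency_ms = sum(received) / len(received)
minutes_per_month = 30 * 24 * 60
availability_pct = 100.0 * (1 - downtime_minutes_per_month / minutes_per_month)

print(f"loss {packet_loss_pct:.1f}%, latency {avg_latency_ms:.1f} ms, "
      f"availability {availability_pct:.3f}%")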

Question #90:
Which option is a benefit of site-to-site VPNs?
A) less configuration required than a WAN circuit
B) more secure than a dedicated WAN circuit
C) less expensive than a dedicated WAN circuit
D) more reliable than a dedicated WAN circuit

Answer: C
Explanation:

Question #91:
Which three options are basic design principles of the Cisco Nexus 7000 Series for data center virtualization? (Choose three.)
A) easy management
B) infrastructure scalability
C) cost minimization
D) upgrade of technology
E) transport flexibility
F) operational continuity

Answer: B, E, F
Explanation:
I cannot see where this Question is directly addressed in the text (Designing Cisco Network Service Architectures (ARCH) Foundation
Learning Guide, 3rd Ed.) However, we can work through the choices.
See Designing Cisco Network Service Architectures (ARCH) Foundation Learning Guide, 3rd Ed, pages 236 through 244. The text
briefly describes features of the Nexus 7000 series switches, and then goes into great detail describing VDCs, and vPCs, their
benefits, design options, and best practices.
A) easy management
Incorrect: This should be seen as obviously wrong.
B) infrastructure scalability
Correct: On page 237, the section on High Bandwidth addresses scalability to future bandwidth thresholds.
High bandwidth:
The Cisco Nexus family of switches was designed for line rate 10 Gigabit Ethernet speeds at present and 40 and 100 Gigabit Ethernet
speeds in the future to support the high-bandwidth requirements of the data center. Although the Cisco Nexus 7000 switches can be
deployed as high-density, highly available access switches in an EOR design, they have primarily been designed for the data center
aggregation and core layers. The Cisco Nexus 7000 has been built for high-density 10 Gigabit Ethernet and is ready to support 40 and
100 Gigabit Ethernet in the future.
C) cost minimization
Incorrect: Again, this should be seen as obviously wrong.
D) upgrade of technology
Incorrect: I'm not sure how to respond to this one. I suppose that "upgrading technology", just for the sake of upgrading, is not a
good design principle. Since only three choices can be selected, this is not one of the three best answers for this question.
E) transport flexibility
Correct: On page 237, the following section addresses Transport Flexibility, as well as Infrastructure Scalability
VDCs: Cisco Nexus 7000 switches that are running Cisco NX-OS Software have introduced the capability to divide a single physical
switch into up to four virtual switches, referred to as virtual device contexts or VDCs. Each VDC operates like a standalone switch
with a distinct configuration file, a complement of physical ports, and separate instances of necessary control plane protocols such
as routing protocols and spanning tree. This feature provides the potential option to use a single physical switch pair to serve
multiple roles within a data center topology. Different VDC design options can use this feature for service integration, enhanced
security, administrative boundaries, or flexibility of hardware deployment during changing business needs. One common design
replaces a core and an aggregation 6500 with a single Nexus 7010 using one VDC in each of the core and aggregation roles.

F) operational continuity

Correct: On page 236, the section on High Availability aligns with Operational Continuity.
High availability:
All switches have redundant power supplies and hot-swappable fan trays. All Cisco Nexus switches use Cisco NX-OS Software, an
operating system that is designed specifically for the data center and engineered for high availability, scalability, and flexibility. The
Cisco NX-OS Software is modular in nature and has capabilities that improve the overall availability of the system. One of these
capabilities is stateful process restart, which allows a network process to be restarted without having to relearn adjacencies, MAC
addresses, or other state information.
Page 243
The biggest advantage of vPC is that it enables loop-free topologies where STP is no longer actively involved in maintaining the
network topology and where no links between the access and aggregation layers are blocked. This increases the stability and
efficiency of the aggregation blocks.
Page 237
This section detailing vPCs addresses both Infrastructure Scalability, as well as Operational Continuity.
vPCs: Two Cisco Nexus 7000 switches can be combined into a vPC domain, allowing multichassis Link Aggregation Control Protocol
(LACP) port-channel connections across the pair. vPCs (also known as virtual channel ports) can be built between the vPC switch pair
and other neighboring devices. Even though the vPCs are terminated on two different physical switches, the vPC switch pair
represents itself as a single switch to neighboring devices that are connected on the vPCs. This allows the traditional triangles
between the access and aggregation layers to be removed from the logical design. Physically, the access switches still connect to two
different aggregation switches, but logically, the pair of Cisco Nexus 7000 switches acts as a single switch. The links between the
access switch and aggregation switch pair are combined into a vPC, and STP treats the connection as a single link. As a result, STP
does not block any of the links, and the complete bandwidth between the access and aggregation layers can be used. The concept of
VPCs is similar to the Catalyst 6500 VSS (Virtual Switching System) technology. With vPCs, however, it is an active/active backplane
model, whereas the Catalyst 6500 only has one supervisor active between VSS pair switches. This is discussed in more detail later.
With VPC, the switches combine to provide FHRP services, and therefore both switches forward packets sent to a HSRP, VRRP, or
GLBP virtual gateway MAC addresses, to avoid the routing polarization previously common to FHRPs.

Question #92:
Which two options are advantages of having a modular design instead of an EOR design in a data center? (Choose two.)
A) cooling constraints
B) cable bulk
C) decreased STP processing
D) redundancy options
E) cost minimization
F) low-skilled manager

Answer: C, D
Explanation:
This Question is bad. The Question takes the perspective that "EoR" and "Modular" are two different Data Center Switching Design
strategies. However, modular switches (like a 6509) will be found in an EoR (or MoR) design, not a ToR design.
Therefore, the correct answers, "decreased STP processing", and "redundancy options", are advantages of using modular switching,
regardless of EoR, MoR, or ToR design.
"cooling constraints" and "cable bulk" are disadvantages to modular switches, again, regardless of EoR, MoR, or ToR design.

Question #93:
Which statement about NIC teaming configurations is true?
A) With ALB, all ports use one IP address and multiple MAC addresses.
B) With AFT, two NICs connect to the different switches.
C) With SFT, all ports are active.
D) With AFT, all ports use one IP address and multiple MAC addresses.

Answer: A
Explanation:

Question #94:
Which two services are provided at the aggregation layer in a data center design? (Choose two.)
A) service module integration
B) default gateway redundancy
C) high-speed packet switching backplane
D) network interface card teaming
E) Layer 3 domain definitions

Answer: A, B
Explanation:

Question #95:
Which two options are two benefits of a Layer 2 looped model? (Choose two.)
A) extends VLANs between switches that are connected to a common aggregation module
B) prevents uplink ports from entering the spanning-tree blocking state
C) provides quick convergence with Rapid Spanning Tree Protocol
D) increases performance to end hosts using directly connected, bonded Layer 2 links

Answer: A, C
Explanation:

Question #96:
ACME corporation owns a single MDS. Which two SAN tools can be used to optimize the use and cost of the switching hardware?
(Choose two.)
A) zoning
B) IVR
C) VSAN
D) iSCSI

Answer: A, C
Explanation:

On page 359, we see that the two incorrect choices are, in fact, used for building SAN extensions (which is to say building out and
expanding, not optimizing and economizing).

Question #97:
Source traffic is sent to a VIP on an SLB device, which in turn is routed to the destination server. Return traffic is policy-based routed
back to the SLB.
Which SLB design has been implemented?
A) router mode
B) inline bridge mode
C) one-armed mode
D) two-armed mode

Answer: D
Explanation:
The Question states that the traffic, upon leaving the SLB device, is routed to the destination server. This implies that the
destination server is in a different subnet. In the one-armed approach, traffic would be forwarded (within the same VLAN) to the
destination server.

Question #98:
Which four options are network virtualization technologies that are employed in the data center?
(Choose four.)
A) VLAN
B) VSAN
C) VRF
D) VRP
E) VLC
F) VPC

Answer: A, B, C, F
Explanation:
VRP stands for Versatile Routing Platform, or perhaps Vortex Ring Parachute.
VLC is a well known media player.

Question #99:
Which three options are the three layers of the Cisco design in the data center architecture? (Choose three.)
A) core layer
B) distribution layer
C) service layer
D) aggregation layer
E) Layer 2 domain sizing
F) access layer

Answer: A, D, F
Explanation:
Do not be confused! In the Data Center, Cisco uses the term aggregation layer, not distribution layer.

Question #100:
Which three virtualization categories are in campus networks? (Choose three.)
A) Layer 2 virtualization
B) Layer 3 clustering
C) network virtualization
D) device virtualization
E) network clustering
F) device clustering

Answer: C, D, F
Explanation:

Question #101:
Which two key components are related to one firewall per ISP design option for e-commerce? (Choose two.)
A) It is a common approach to single-homing.
B) This approach is commonly used in large sites.
C) Any failure on an edge router results in a loss of session.
D) It has one NAT to two ISP-assigned blocks.
E) It is difficult to set up and administer.

Answer: C, D
Explanation:
A) It is a common approach to single-homing.
Incorrect: "per ISP" means more than one ISP.
B) This approach is commonly used in large sites.
Incorrect: See the comment on page 382: "This approach is commonly used in small sites because it is relatively easy to set up and administer."
C) Correct: Any failure on an edge router results in a loss of session.
D) Correct: It has one NAT to two ISP-assigned blocks.
E) It is difficult to set up and administer.
Incorrect: See the comment on page 382: "This approach is commonly used in small sites because it is relatively easy to set up and administer."

Question #102:
What is the latest Cisco high-availability solution?
A) VRRP
B) HSRP
C) VSS
D) GLBP

Answer: C
Explanation:
This question is outdated: it asks about the "latest" HA solution as of roughly 2011. The text states that Cisco has
introduced VSS, presenting it as the new technology at that time.

Question #103:
Which two options are VRF components? (Choose two.)
A) RIB
B) VSS
C) FIB
D) HSRP

Answer: A, C
Explanation:

Question #104:
Which two options are storage topologies? (Choose two.)
A) WAS
B) DAS
C) CAS
D) NAS

Answer: B, D
Explanation:
WAS = no such thing. WAAS = Wide Area Application Services
DAS = Direct Attached Storage
CAS = Client Access Server, or Central Authentication Service; not a storage topology
NAS = Network Attached Storage

Question #105:
Refer to the exhibit.

Which statement about the ASA is true?


A) The management interface is reachable only from VLAN 30.
B) The management interface is reachable only from VLAN 40.
C) It is running in transparent mode.
D) It is running in routed mode.

Answer: C
Explanation:
Note that all devices are in the same subnet.

Question #106:
Which statement about IPS and IDS solutions is true?
A) IDS and IPS read traffic only in inline mode.
B) IDS and IPS read traffic only in promiscuous mode.
C) An IDS reads traffic in inline mode, and an IPS reads traffic in promiscuous mode.
D) An IDS reads traffic in promiscuous mode, and an IPS reads traffic in inline mode.

Answer: D
Explanation:

Question #107:
Which NAC design model matches the following definitions?
- NAS is deployed centrally in the core or distribution layer.
- Users are multiple hops away from the Cisco NAS.
- After authentication and posture assessment, the client traffic no longer passes through the Cisco NAS.
- PBR is needed to direct the user traffic appropriately.
A) Layer 3 in-band virtual gateway
B) Layer 3 out-of-band with addressing
C) Layer 2 in-band virtual gateway
D) Layer 2 out-of-band virtual gateway

Answer: B
Explanation:

Question #108:
Which option is a recommended firewall topology?
A) using two firewalls with stateful failover switched mode.
B) using one firewall with NAT enabled in transparent mode.
C) using two firewalls in active/active mode.
D) using one firewall with stateful failover enabled in routed mode.

Answer: C
Explanation:
A) using two firewalls with stateful failover switched mode.
Not sure if there is such a thing as "stateful failover switched mode".
Firewalls can be configured in a stateful failover configuration, but this is rarely called "stateful failover mode".
"Stateful failover mode" usually refers to physical failover connectivity.
This choice is questionable, at best, and clearly not the best choice.
B) using one firewall with NAT enabled in transparent mode.
non-redundant is never a recommended configuration
C) using two firewalls in active/active mode.
Correct: this is the only option listed that provides a redundant, supportable topology.
D) using one firewall with stateful failover enabled in routed mode.
What? One firewall failing over to, what, exactly? The extra firewall that you wish you had?

Question #109:
Which three options are recommended practices when configuring VTP? (Choose three.)
A) Set the switch to transparent mode.
B) Set the switch to server mode.
C) Enable VLAN pruning.
D) Disable VLAN pruning.
E) Specify a domain name.
F) Clear the domain name.

Answer: A, D, E
Explanation:
If you've passed SWITCH, you should know this.

Question #110:
Which four primary attributes define a WAN service? (Choose four.)
A) Bandwidth
B) bursting capacity
C) memory
D) CPU
E) QoS classes and policies
F) latency
G) multicast support

Answer: A, B, E, G
Explanation:
Memory and CPU are obviously wrong. Latency (which could be related to QoS) is a good choice. One could argue that
there are five correct choices. However, Cisco is asking for four choices, so do not choose latency.

Question #111:
Which option does the FabricPath technology use to create loop-free Layer 2 networks?
A) STP
B) TTL
C) fabric tags
D) FSTP

Answer: B
Explanation:
From:
http://www.cisco.com/c/en/us/products/collateral/switches/nexus-7000-series-switches/white_paper_c11-687554.html
4.2.10 TTL
The Time to Live (TTL) field serves the same purpose in FabricPath as it does in traditional IP forwarding - each switch
hop decrements the TTL by 1, and frames with an expired TTL are discarded. The TTL in FabricPath prevents Layer 2
bridged frames from looping endlessly in the event that a transitory loop occurs (such as during a
reconvergence event). As of NX-OS release 5.2(1), ingress FabricPath edge switches set the TTL to 32 for all frames.
Beginning in NX-OS release 6.2(2), you can change the default TTL independently for unicast and multicast frames using
the fabricpath ttl command:
- Broadcast and unknown unicast frames use the unicast TTL setting
- IPv4/IPv6 and non-IP multicast frames use the multicast TTL setting
From:
http://www.cisco.com/c/en/us/products/collateral/switches/nexus-7000-series-switches/white_paper_c11-605488.html
Scalability based on proven technology
- Cisco FabricPath uses a control protocol built on top of the powerful Intermediate System-to-Intermediate System (IS-IS)
routing protocol, an industry standard that provides fast convergence and that has been proven to scale up to the largest
service provider environments. Nevertheless, no specific knowledge of IS-IS is required in order to operate a Cisco
FabricPath network.
- Loop prevention and mitigation is available in the data plane, helping ensure safe forwarding that cannot be matched by
any transparent bridging technology. The Cisco FabricPath frames include a time-to-live (TTL) field similar to the
one used in IP, and a Reverse Path Forwarding (RPF) check is also applied.
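The TTL mechanism quoted above is easy to picture with a toy simulation. The sketch below is not FabricPath code, just my own illustration of why a hop-decremented TTL keeps a transient Layer 2 loop from circulating frames forever; the starting value of 32 matches the NX-OS default mentioned above:

# Toy illustration: a frame caught in a transient Layer 2 loop.
# Each switch hop decrements the TTL; at 0 the frame is discarded,
# so the loop cannot circulate the frame forever.
def hops_until_dropped(initial_ttl: int = 32) -> int:
    ttl = initial_ttl
    hops = 0
    while ttl > 0:
        ttl -= 1      # one FabricPath switch hop
        hops += 1
    return hops

print(hops_until_dropped())  # 32 hops with the default TTL, then the frame is dropped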

Question #112:
Which Cisco NAC Appliance component is optional?
A) NAC Appliance Manager
B) NAC Appliance Server
C) NAC Appliance Agent
D) NAC Appliance Policy Updates

Answer: C
Explanation:

Question #113:
Which one of the following is related to utilization, response time, availability and error rates?
A) Fault management
B) Configuration management
C) Accounting management
D) Performance management

Answer: D
Explanation:

Question #114:
Which option does VSS reduce Layer 3 to Layer 2 boundary to?
A) Three-tier logic devices
B) Multiple logic devices
C) Single logic device
D) Virtual devices

Answer: C
Explanation:
There are several sections in the text describing how VSS combines two physical switches into one logical entity, thus
removing the need for an FHRP. However, the below section answers the question with the exact language.
