Cisco 642-845 Study Material, Free Cisco 642-845 Exam Practice On Store


QUESTION 65
Certkiller has decided to use IntServ in parts of their network as opposed to DiffServ. Which two Integrated Services (IntServ) functions are required on a router? (Select two)
A. DSCP classification
B. Monitoring
C. Scheduling
D. Admission control
E. Marking

Correct Answer: CD Section: (none) Explanation
Explanation/Reference:
Explanation: The Integrated Services or IntServ architecture is a multiple service model that can accommodate multiple QoS requirements. In this model the application requests a specific kind of service from the network before it sends data. The request is made by explicit signaling: the application informs the network of its traffic profile and requests a particular kind of service that can encompass its bandwidth and delay requirements. The application is expected to send data only after it gets a confirmation from the network, and to send only data that lies within its described traffic profile.

The network performs admission control based on information from the application and on available network resources. It also commits to meeting the QoS requirements of the application as long as the traffic remains within the profile specifications. The network fulfills its commitment by maintaining a per-flow state and then performing packet classification, policing, and intelligent queuing based on that state. The Cisco IOS IntServ model allows applications to use the IETF Resource Reservation Protocol (RSVP) to signal their QoS requirements to the router.
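As a rough illustration of the admission-control piece, RSVP might be enabled on a router interface like this (the interface name and bandwidth figures are illustrative, not from the exam material):

```
! Enable RSVP on the WAN link: reserve at most 128 kbps in total,
! with no single reserved flow exceeding 64 kbps
interface Serial0/0
 ip rsvp bandwidth 128 64
```

With this in place, the router can admit or reject RSVP reservation requests against the configured limit.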
QUESTION 66
When comparing the two QoS models (DiffServ versus IntServ), which three statements are true about these QoS models? (Select three)
A. The DiffServ model can be used to deliver QoS based upon IP precedence, or source and destination addresses.
B. The best effort model is suitable for applications such as file transfer and e-mail.
C. The DiffServ model requires applications to signal the network with QoS requirements.
D. The DiffServ model requires RSVP.
E. The IntServ model attempts to deliver a level of service based on the QoS specified by each packet.
F. The IntServ model requires applications to signal the network with QoS requirements.

Correct Answer: ABF Section: (none) Explanation
Explanation/Reference:
Explanation:
1. DiffServ Model: The Differentiated Services or DiffServ architecture is an emerging standard from the IETF. This architecture specifies that each packet is classified upon entry into the network. The classification is carried in the IP packet header, using either the IP precedence or the preferred Differentiated Services Code Point (DSCP), represented using the first three or six bits of the Type of Service (ToS) field. Classification can also be carried in the Layer 2 frame in the form of the Class of Service (CoS) field embodied in ISL and 802.1Q frames. Once packets are classified at the edge by access layer switches or by border routers, the network uses the classification to determine how the traffic should be queued, shaped, and policed.

2. IntServ Model: The Integrated Services or IntServ architecture is a multiple service model that can accommodate multiple QoS requirements. In this model the application requests a specific kind of service from the network before it sends data. The request is made by explicit signaling: the application informs the network of its traffic profile and requests a particular kind of service that can encompass its bandwidth and delay requirements. The application is expected to send data only after it gets a confirmation from the network, and to send only data that lies within its described traffic profile.

The network performs admission control based on information from the application and on available network resources. It also commits to meeting the QoS requirements of the application as long as the traffic remains within the profile specifications. The network fulfills its commitment by maintaining a per-flow state and then performing packet classification, policing, and intelligent queuing based on that state. The Cisco IOS IntServ model allows applications to use the IETF Resource Reservation Protocol (RSVP) to signal their QoS requirements to the router.
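To make the DiffServ edge-classification step concrete, a minimal MQC sketch might look like this (the class name, RTP port range, and interface are illustrative assumptions):

```
! Classify RTP voice traffic at the edge and mark it with DSCP EF
class-map match-all VOIP
 match ip rtp 16384 16383
policy-map EDGE-MARKING
 class VOIP
  set ip dscp ef
interface FastEthernet0/0
 service-policy input EDGE-MARKING
```

Downstream routers can then queue and police on the DSCP value alone, without re-inspecting the traffic.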
QUESTION 67
You need to determine the best QoS strategy for use within the Certkiller network. What are three considerations when choosing the QoS model to deploy in a network? (Select three)
A. The applications utilizing the network
B. The routing protocols being utilized in the network
C. Network addressing scheme
D. Cost of implementation
E. The amount of control needed over network resources
F. The traffic destinations

Correct Answer: ADE Section: (none) Explanation
QUESTION 68
RSVP is already being used in the Certkiller WAN, and you want to implement a QoS method that will take advantage of this. Which QoS model makes use of the Resource Reservation Protocol (RSVP)?
A. Best Effort
B. DSCP
C. NBAR
D. DiffServ
E. IntServ
F. None of the above.

Correct Answer: E Section: (none) Explanation
Explanation/Reference:
Explanation: Integrated Services (IntServ): IntServ can provide very high QoS to IP packets. Essentially, applications signal to the network that they will require special QoS for a period of time and that bandwidth should be reserved. With IntServ, packet delivery is guaranteed. However, the use of IntServ can severely limit the scalability of a network.

IntServ uses the Resource Reservation Protocol (RSVP) to explicitly signal the QoS needs of an application's traffic to the devices along the end-to-end path through the network. If the network devices along the path can reserve the necessary bandwidth, the originating application can begin transmitting. If the requested reservation fails anywhere along the path, the originating application does not send any data.
QUESTION 69
The Certkiller network needs to mark packets on the LAN. Which two statements about packet marking at the data link layer are true? (Select two)
A. In an 802.1q frame, the 3-bit 802.1p priority field is used to identify the class of service (CoS) priority.
B. Frames maintain their class of service (CoS) markings when transiting a non-802.1p link.
C. Through the use of DE markings, Frame Relay QoS supports up to 10 classes of service.
D. The 802.1p CoS markings are preserved through the LAN, but are not maintained end to end.
E. IEEE 802.1p supports up to 10 class of service (CoS) markings.

Correct Answer: AD Section: (none) Explanation
Explanation/Reference:
Explanation:
The 802.1Q standard is an IEEE specification for implementing VLANs in Layer 2 switched networks. The 802.1Q specification defines two 2-byte fields, the tag protocol identifier (TPID) and the tag control information (TCI), that are inserted within an Ethernet frame following the source address field. The TPID field is currently fixed and assigned the value 0x8100.

The TCI field is composed of three subfields; the first is the user priority bits (PRI) field (3 bits), whose specifications are defined by the IEEE 802.1p standard. These bits can be used to mark packets as belonging to a specific CoS. The CoS marking uses the three 802.1p user priority bits and allows a Layer 2 Ethernet frame to be marked with eight levels of priority (values 0-7). Three bits allow for eight levels of classification, allowing a direct correspondence with IP version 4 (IPv4) IP precedence type of service (ToS) values. The table lists the standard definitions the IEEE 802.1p specification defines for each CoS.
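On many Catalyst switches, a trunk port can be told to trust the received 802.1p CoS bits; the exact commands vary by platform and software version, so treat this as a sketch:

```
! Enable QoS globally, then trust incoming CoS markings on a trunk port
mls qos
interface GigabitEthernet0/1
 switchport mode trunk
 mls qos trust cos
```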
QUESTION 70
802.1p allows QoS parameters to be used at the MAC layer on a LAN. The IEEE 802.1p user priority field consists of how many bits?
A. 4
B. 1
C. 8
D. 3
E. 6
F. 2
G. None of the above

Correct Answer: D Section: (none) Explanation
Explanation/Reference:
Explanation: User priority bits (PRI) (3 bits): The specifications of this 3-bit field are defined by the IEEE 802.1p standard. These bits can be used to mark packets as belonging to a specific CoS. The CoS marking uses the three 802.1p user priority bits and allows a Layer 2 Ethernet frame to be marked with eight levels of priority (values 0-7). Three bits allow for eight levels of classification, allowing a direct correspondence with IP version 4 (IPv4) (IP precedence) type of service (ToS) values. The table lists the standard definitions the IEEE 802.1p specification defines for each CoS.
QUESTION 71
You need to classify different packets within the Certkiller network so that they can be marked. What are three traffic descriptors typically used to categorize traffic into different classes? (Select three)
A. DSCP
B. DLCI
C. Media type
D. IP precedence
E. Incoming interface
F. Outgoing interface

Correct Answer: ADE Section: (none) Explanation
Explanation/Reference:
Explanation:
Classification is the process of identifying traffic and categorizing that traffic into classes. Classification uses a traffic descriptor to categorize a packet within a specific group to define that packet. Typically used traffic descriptors include these:
1. Incoming interface
2. IP precedence
3. Differentiated services code point (DSCP)
4. Source or destination address
5. Application

After the packet has been classified or identified, the packet is then accessible for quality of service (QoS) handling on the network. Using classification, network administrators can partition network traffic into multiple classes of service (CoSs). When traffic descriptors are used to classify traffic, the source implicitly agrees to adhere to the contracted terms and the network promises QoS. Various QoS mechanisms, such as traffic policing, traffic shaping, and queuing techniques, use the traffic descriptor of the packet (that is, the classification of the packet) to ensure adherence to that agreement. Classification should take place at the network edge, typically in the wiring closet, within IP phones, or at network endpoints. It is recommended that classification occur as close to the source of the traffic as possible.
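Several of these descriptors map directly onto class-map match statements; a sketch (class names and the interface are illustrative):

```
! Match on IP precedence or DSCP markings
class-map match-any MGMT-TRAFFIC
 match ip precedence 6
 match ip dscp cs6
! Match on the incoming interface
class-map match-all FROM-LAN
 match input-interface FastEthernet0/0
```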
QUESTION 72
Traffic on the Certkiller LAN needs to be classified and marked at the data link layer. Which three statements about classification marking of traffic at Layer 2 are true? (Select all that apply)
A. A Frame Relay header includes a 1-bit discard eligible (DE) bit to provide the class of service (CoS).
B. The CoS field only exists inside Ethernet frames when 802.1Q or Inter-Switch Link (ISL) trunking is used.
C. An ATM header includes a 1-bit DE field to provide the CoS.
D. An MPLS EXP field is inserted in the Layer 3 IP precedence field to identify the CoS.
E. In the IEEE 802.1p standard, three bits are used to identify the user priority bits for the CoS.
F. In the IEEE 802.1q standard, six bits are used to identify the user priority bits for the CoS.

Correct Answer: ABDE Section: (none) Explanation
Explanation/Reference:
Explanation: LAN Class of Service (CoS): Many LAN switches today can mark and react to a Layer 2 3-bit field called the Class of Service (CoS) located inside an Ethernet header. The CoS field only exists inside Ethernet frames when 802.1Q or Inter-Switch Link (ISL) trunking is used. You can use the field to set eight different binary values, which can be used by the classification features of other QoS tools, just like IP precedence and DSCP.

Other Marking Fields: You can use single-bit fields in Frame Relay and ATM networks to mark a frame or cell for Layer 2 QoS. Unlike IP precedence, IP DSCP, and 802.1p/ISL CoS, however, these two fields are not intended for general, flexible use. Each of these single-bit fields, when set, implies that the frame or cell is a better candidate to be dropped, as compared with frames or cells that do not have the bit set. In other words, you can mark the bit, but the only expected action by another QoS tool is for the tool to drop the frame or cell. Frame Relay defines the discard eligibility (DE) bit, and ATM defines the cell loss priority (CLP) bit. The general idea is that when a device, typically a WAN switch, experiences congestion, it needs to discard some frames or cells. If a frame or cell has the DE or CLP bit set, respectively, the switch may choose to discard those frames or cells and not discard others. If the DE or CLP bit is set, there is no requirement that the Frame Relay and ATM switches react to it, just as there is no guarantee that an IP packet with DSCP EF will get special treatment by another router. It is up to the owner of the Frame Relay or ATM switch to decide whether it will consider the DE and CLP bits, and how to react differently.

You can use two other QoS marking fields in specialized cases. The MPLS Experimental bits comprise a 3-bit field that you can use to map IP precedence into an MPLS label. This allows MPLS routers to perform QoS features indirectly based on the original IP Precedence field inside the IP packets encapsulated by MPLS, without the need to spend resources to open the IP packet header and examine the IP Precedence field. Reference: http://www.ciscopress.com/articles/article.asp?p=101170&seqNum=2
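As a sketch, setting the MPLS EXP bits at the edge might look like this (the policy name, EXP value, and interface are illustrative; some IOS versions use the set mpls experimental imposition form):

```
! Mark the MPLS EXP bits so core LSRs can apply QoS
! without examining the encapsulated IP header
policy-map MPLS-EDGE
 class class-default
  set mpls experimental 5
interface Serial0/0
 service-policy output MPLS-EDGE
```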
QUESTION 73
You want to implement QoS on your enterprise network using best practices. Where is the least likely place for classification to be performed?
A. Access layer
B. Core layer
C. End system
D. Distribution layer
E. None of the above

Correct Answer: B Section: (none) Explanation
Explanation/Reference:
Explanation: The model provides a modular framework that allows flexibility in network design and facilitates ease of implementation and troubleshooting. The hierarchical model divides networks or their modular blocks into the access, distribution, and core layers, with these features:

The access layer is used to grant user access to network devices. In a campus network, the access layer generally incorporates switched LAN devices with ports that provide connectivity to workstations and servers. In the WAN environment, the access layer at remote sites or at a teleworker location may provide access to the corporate network across WAN technology.

The distribution layer aggregates the wiring closets, and uses switches to segment workgroups and isolate network problems in a campus environment. Similarly, the distribution layer aggregates WAN connections at the edge of the campus and provides policy-based connectivity.

The core layer (also referred to as the backbone) is a high-speed backbone and is designed to switch packets as fast as possible. Because the core is critical for connectivity, it must provide a high level of availability and adapt to changes very quickly. This makes the core the layer where classification is least likely to be placed.
QUESTION 74
Some of the Incoming packets seen on a Certkiller router are marked with the DSCP value 101110. Which PHB is identified in this DSCP value?
A. Class selector PHB
B. Expedited Forwarding (EF) PHB
C. Default PHB
D. Assured Forwarding (AF) PHB
E. None of the above

Correct Answer: B Section: (none) Explanation
Explanation/Reference:
Explanation:
The EF PHB is identified based on the following:
1. The EF PHB ensures a minimum departure rate: the EF PHB provides the lowest possible delay to delay-sensitive applications.
2. The EF PHB guarantees bandwidth: the EF PHB prevents starvation of the application if there are multiple applications using the EF PHB.
3. The EF PHB polices bandwidth when congestion occurs: the EF PHB prevents starvation of other applications or classes that are not using this PHB.

Packets requiring EF should be marked with DSCP binary value 101110 (46 or 0x2E). Non-DiffServ-compliant devices regard EF DSCP value 101110 as IP precedence 5 (101). This precedence is the highest user-definable IP precedence and is typically used for delay-sensitive traffic (such as VoIP). Bits 5 to 7 of the EF DSCP value are 101, which matches IP precedence 5 and allows backward compatibility.
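Marking a class with the EF PHB is typically a one-line MQC action; this sketch assumes a VOICE class defined elsewhere:

```
policy-map MARK-VOICE
 class VOICE
  set ip dscp ef   ! ef = binary 101110 = decimal 46 = 0x2E
```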
QUESTION 75
Which two QoS fields should be used by R1 and R2 to classify the traffic sent from PC1 to Server1? (Select two)

A. Priority bits
B. DE bit
C. IP DSCP
D. CoS
E. IP precedence

Correct Answer: CE Section: (none) Explanation
Explanation/Reference:
Explanation: The introduction of DSCP replaces IP precedence, a 3-bit field in the ToS byte of the IP header originally used to classify and prioritize types of traffic. However, DiffServ maintains interoperability with non-DiffServ-compliant devices (those that still use IP precedence). Because of this backward compatibility, DiffServ can be deployed gradually in large networks.

The meaning of the 8 bits in the DiffServ field of the IP packet has changed over time to meet the expanding requirements of IP networks. Originally, the field was referred to as the ToS field, and the first three bits of the field (bits 7 to 5) defined a packet's IP precedence value. A packet could be assigned one of six priorities based on its IP precedence value (eight total values minus two reserved ones). IP precedence 5 (101) was the highest priority that could be assigned (RFC 791). RFC 2474 replaced the ToS field with the DiffServ field, in which a range of eight values (the class selector) is used for backward compatibility with IP precedence. There is no compatibility with other bits used by the ToS field. The class selector PHB was defined to provide backward compatibility for DSCP with ToS-based IP precedence. RFC 1812 simply prioritizes packets according to the precedence value. The PHB is defined as the probability of timely forwarding: packets with higher IP precedence should be (on average) forwarded in less time than packets with lower IP precedence.
QUESTION 76
NBAR is being used to recognize the traffic traversing the Certkiller network. What are the steps for configuring stateful NBAR for dynamic protocols?
A. Configure a traffic class. Configure a traffic policy. Attach the traffic policy to an interface
B. Use the command ip nbar protocol-discovery to allow identification of stateful protocols. Use the command ip nbar port-map to attach the protocols to an interface.
C. Use the command match protocol to allow identification of stateful protocols. Use the command ip nbar port-map to attach the protocols to an interface.
D. Configure video streaming. Configure audio streaming. Attach the codec to an interface.
E. Use the command match protocol rtp to allow identification of real-time audio and video traffic. Use the command ip nbar port-map to extend the NBAR functionality for well-known protocols to new port numbers.
F. None of the above.

Correct Answer: A Section: (none) Explanation
Explanation/Reference:
Explanation: Network-Based Application Recognition (NBAR), a feature in Cisco IOS software, provides intelligent network classification for the infrastructure. NBAR is a classification engine that can recognize a wide variety of applications, including web-based applications and client and server applications that dynamically assign TCP or User Datagram Protocol (UDP) port numbers. After the application is recognized, the network can invoke specific services for that particular application. NBAR currently works with quality of service (QoS) features to ensure that the network bandwidth is best used to fulfill company objectives. These features include the ability to guarantee bandwidth to critical applications, limit bandwidth to other applications, drop selected packets to avoid congestion, and mark packets appropriately so that the network and the service provider network can provide QoS from end to end.

Complete the following steps to implement QoS:
Step 1: Configure traffic classification by using the class-map command.
Step 2: Configure the traffic policy by associating the traffic class with one or more QoS features, using the policy-map command.
Step 3: Attach the traffic policy to inbound or outbound traffic on interfaces, subinterfaces, or virtual circuits by using the service-policy command.
QUESTION 77
You need to implement NBAR into the Certkiller network. Which three configuration tasks are required to successfully deploy NBAR to recognize TCP and UDP stateful protocols? (Select three)
A. Use the “ip rsvp bandwidth” command to set a strict upper limit on the bandwidth NBAR uses, and to guarantee admission of any flows.
B. Use the “service-policy” command to attach a traffic flow to an interface on the router.
C. Use the “class-map” command to define one or more traffic classes by specifying the criteria by which traffic is classified.
D. Use the “policy-map” command to define one or more QoS policies (such as shaping, policing, and so on) to apply to traffic defined by a class map.
E. Use the “random-detect dscp” command to modify the default minimum and maximum thresholds for the DSCP value.
F. Over leased lines, use the “multilink ppp” command to reduce latency and jitter, and to create Distributed Link Fragmentation and interleaving.

Correct Answer: BCD Section: (none) Explanation
Explanation/Reference:
Explanation:
Complete the following steps to implement QoS:
Step 1: Configure traffic classification by using the class-map command.
Step 2: Configure the traffic policy by associating the traffic class with one or more QoS features, using the policy-map command.
Step 3: Attach the traffic policy to inbound or outbound traffic on interfaces, subinterfaces, or virtual circuits by using the service-policy command.
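Put together, the three steps might look like this for NBAR-based HTTP classification (the class and policy names, bandwidth figure, and interface are illustrative):

```
! Step 1: classify HTTP using NBAR
class-map match-all WEB
 match protocol http
! Step 2: build the traffic policy
policy-map WAN-POLICY
 class WEB
  bandwidth 256
! Step 3: attach the policy to an interface
interface Serial0/0
 service-policy output WAN-POLICY
```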

QUESTION 78
The Certkiller network is using NBAR to classify applications that use well known static TCP and UDP ports. The company has recently added several applications that are not currently recognized by their NBAR implementation. A PDLM file has been downloaded to the routers to be used by NBAR for protocol matching. What action should be taken so that NBAR can use the data in the PDLM file?
A. Reboot the router so that NBAR will read the file into memory.
B. Configure the routers with the global “ip nbar port-map” command and reboot.
C. Configure the routers with the global “ip nbar pdlm” command.
D. Do nothing. NBAR automatically uses the data in the PDLM file once download is complete.
E. Stop and restart CEF so that NBAR will read the file into memory.
F. None of the above
Correct Answer: C Section: (none) Explanation

Explanation/Reference:
Explanation:
NBAR is the first mechanism that supports dynamic upgrades without having to change the Cisco IOS version or restart a router. PDLMs contain the rules that are used by NBAR to recognize an application by matching text patterns in data packets, and they can be used to bring new or changed functionality to NBAR. An external PDLM can be loaded at run time to extend the NBAR list of recognized protocols. PDLMs can be used to enhance an existing protocol-recognition capability, and they allow NBAR to recognize new protocols without requiring a new Cisco IOS image or a router reload.

Router(config)# ip nbar pdlm pdlm_name : Used to enhance the list of protocols recognized by NBAR through a PDLM.
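A sketch of loading and then verifying a PDLM (the filename is illustrative):

```
Router(config)# ip nbar pdlm flash://citrix.pdlm
Router(config)# exit
Router# show ip nbar pdlm
```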

QUESTION 79
You need to classify the specific traffic traversing the Certkiller network. Which classification tool can be used to classify traffic based on the HTTP URL?
A. Class-based policing
B. Committed access rate (CAR)
C. Network-based application recognition (NBAR)
D. Dial peers
E. Policy-based routing (PBR)
F. None of the above

Correct Answer: C Section: (none) Explanation
Explanation/Reference:
Explanation: Network-Based Application Recognition (NBAR), a feature in Cisco IOS software, provides intelligent network classification for the infrastructure. NBAR is a classification engine that can recognize a wide variety of applications, including web-based applications and client and server applications that dynamically assign TCP or User Datagram Protocol (UDP) port numbers. After the application is recognized, the network can invoke specific services for that particular application. NBAR currently works with quality of service (QoS) features to ensure that the network bandwidth is best used to fulfill company objectives. These features include the ability to guarantee bandwidth to critical applications, limit bandwidth to other applications, drop selected packets to avoid congestion, and mark packets appropriately so that the network and the service provider network can provide QoS from end to end.
QUESTION 80
NBAR has been configured on router CK1 . What is supported by the network-based application recognition (NBAR) feature?
A. Matching beyond the first 400 bytes in a packet payload
B. Multicast and switching modes other than Cisco Express Forwarding (CEF)
C. Subport classification
D. More than 24 concurrent URLs, hosts, or MIME-type matches
E. Fragmented packets
F. None of the above

Correct Answer: C Section: (none) Explanation
Explanation/Reference:
Explanation: NBAR is a classification and protocol discovery feature. NBAR can determine the mix of traffic on the network, which is important in isolating congestion problems. NBAR can classify application traffic by subport classification, that is, by looking beyond the TCP or UDP port numbers of a packet. NBAR looks into the TCP or UDP payload itself and classifies packets based on the content within the payload, such as transaction identifier, message type, or other similar data. Classification of HTTP by URL or by Multipurpose Internet Mail Extensions (MIME) type is an example of subport classification. NBAR classifies HTTP traffic by text within the URL, using regular expression matching. NBAR uses the UNIX filename specification as the basis for the URL specification format; the NBAR engine then converts the specification format into a regular expression.

The NBAR Protocol Discovery feature provides an easy way to discover application protocols that are transiting an interface. The feature discovers any protocol traffic supported by NBAR. NBAR Protocol Discovery can be applied to interfaces and can be used to monitor both input and output traffic. It maintains the following per-protocol statistics for enabled interfaces:
1. Total number of input and output packets and bytes
2. Input and output bit rates

An external Packet Description Language Module (PDLM) can be loaded at run time to extend the NBAR list of recognized protocols. PDLMs can also be used to enhance an existing protocol-recognition capability. PDLMs allow NBAR to recognize new protocols without requiring a new Cisco IOS image or a router reload.
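A sketch of URL-based subport classification (the class name and URL pattern are illustrative):

```
! Classify HTTP requests whose URL contains "video"
class-map match-all VIDEO-WEB
 match protocol http url "*video*"
```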
QUESTION 81
Network Based Application Discovery is being used within the Certkiller network. What is the purpose of the NBAR discovery protocol?
A. To build a database of all application data that passes through the router and queue the data accordingly
B. To build a Packet Description Language Module (PDLM) file to be used in protocol matching
C. To look into the TCP or UDP payload and classify packets based on the content
D. To discover applications and build class maps for data classification
E. None of the above

Correct Answer: C Section: (none) Explanation
Explanation/Reference:
Explanation:
The NBAR Protocol Discovery feature provides an easy way to discover application protocols that are transiting an interface. The feature discovers any protocol traffic supported by NBAR. NBAR Protocol Discovery can be applied to interfaces and can be used to monitor both input and output traffic. It maintains the following per-protocol statistics for enabled interfaces:
1. Total number of input and output packets and bytes
2. Input and output bit rates

An external Packet Description Language Module (PDLM) can be loaded at run time to extend the NBAR list of recognized protocols. PDLMs can also be used to enhance an existing protocol-recognition capability. PDLMs allow NBAR to recognize new protocols without requiring a new Cisco IOS image or a router reload.
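Protocol discovery is enabled per interface; a sketch (the interface is illustrative):

```
interface FastEthernet0/0
 ip nbar protocol-discovery
!
! Then inspect the per-protocol packet, byte, and bit-rate counters:
! Router# show ip nbar protocol-discovery interface FastEthernet0/0
```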
QUESTION 82
NBAR is being used in the Certkiller network for application identification. Which three statements about the NBAR protocol are true? (Select three)
A. NBAR classifies HTTP traffic by text within the URL.
B. NBAR is used by IntServ as a classification and protocol discovery feature.
C. NBAR performs identification of Layer 4 through Layer 7 applications and protocols.
D. NBAR can be used to classify output traffic on a WAN link where tunneling or encryption is used.
E. Packet Description Language Modules (PDLMs) allow NBAR to recognize new protocols without requiring a new Cisco IOS image or a router reload.
F. NBAR is supported on logical interfaces such as Fast EtherChannel.

Correct Answer: ACE Section: (none) Explanation
Explanation/Reference:
Explanation: NBAR is a classification and protocol discovery feature. NBAR can determine the mix of traffic on the network, which is important in isolating congestion problems. NBAR can classify application traffic by subport classification, that is, by looking beyond the TCP or UDP port numbers of a packet. NBAR looks into the TCP or UDP payload itself and classifies packets based on the content within the payload, such as transaction identifier, message type, or other similar data. Classification of HTTP by URL or by Multipurpose Internet Mail Extensions (MIME) type is an example of subport classification. NBAR classifies HTTP traffic by text within the URL, using regular expression matching. NBAR uses the UNIX filename specification as the basis for the URL specification format; the NBAR engine then converts the specification format into a regular expression.

The NBAR Protocol Discovery feature provides an easy way to discover application protocols that are transiting an interface. The feature discovers any protocol traffic supported by NBAR. NBAR Protocol Discovery can be applied to interfaces and can be used to monitor both input and output traffic. It maintains the following per-protocol statistics for enabled interfaces:
1. Total number of input and output packets and bytes
2. Input and output bit rates
QUESTION 83
A new PDLM file has been downloaded from Cisco and needs to be used on a Certkiller router. Which command would add a new Packet Description Language Module (PDLM) called citrix.pdlm to the list of protocols that would be recognized by network-based application recognition (NBAR)?
A. RTA(config)# ip nbar pdlm flash://citrix.pdlm
B. RTA(config)# ip nbar pdlm
C. RTA(config-if)# ip nbar pdlm flash://citrix.pdlm
D. RTA# ip nbar pdlm
E. RTA(config-if)# ip nbar pdlm
F. RTA# ip nbar pdlm flash://citrix.pdlm
G. None of the above

Correct Answer: A Section: (none) Explanation
Explanation/Reference:
Explanation: NBAR is the first mechanism that supports dynamic upgrades without having to change the Cisco IOS version or restart a router. PDLMs contain the rules that are used by NBAR to recognize an application by matching text patterns in data packets, and they can be used to bring new or changed functionality to NBAR. An external PDLM can be loaded at run time to extend the NBAR list of recognized protocols. PDLMs can be used to enhance an existing protocol-recognition capability, and they allow NBAR to recognize new protocols without requiring a new Cisco IOS image or a router reload.

Router(config)# ip nbar pdlm pdlm_name : Used to enhance the list of protocols recognized by NBAR through a PDLM.
QUESTION 84
RED has been configured on many of the Certkiller routers. What are three random early detection (RED) dropping modes? (Select three)
A. Center drop
B. Head drop
C. Random drop
D. No drop
E. Tail drop

Correct Answer: CDE Section: (none) Explanation
Explanation/Reference:
Explanation: Random early detection (RED) is a dropping mechanism that randomly drops packets before a queue is full. The dropping strategy is based primarily on the average queue length-that is, when the average size of the queue increases, RED is more likely to drop an incoming packet than when the average queue length is shorter. Because RED drops packets randomly, it has no per-flow intelligence. The rationale is that an aggressive flow will represent most of the arriving traffic, and it is likely that RED will drop a packet of an aggressive session. RED therefore punishes more aggressive sessions with a higher statistical probability and is able to somewhat selectively slow the most significant cause of congestion. Directing one TCP session at a time to slow down allows for full utilization of the bandwidth rather than utilization that manifests itself as crests and troughs of traffic. As a result of implementing RED, TCP global synchronization is much less likely to occur, and TCP can utilize link bandwidth more efficiently. In RED implementations, the average queue size also decreases significantly, because the possibility of the queue filling up is reduced. This is because of very aggressive dropping in the event of traffic bursts, when the queue is already quite full. RED distributes losses over time and normally maintains a low queue depth while absorbing traffic spikes. RED can also utilize IP precedence or differentiated services code point (DSCP) bits in packets to establish different drop profiles for different classes of traffic. RED is useful only when the bulk of the traffic is TCP traffic. With TCP, dropped packets indicate congestion, so the packet source reduces its transmission rate. With other protocols, packet sources might not respond or might re-send dropped packets at the same rate, and so dropping packets might not decrease congestion. RED has three modes:
* No drop: when the average queue size is between 0 and the minimum threshold
* Random drop: when the average queue size is between the minimum and the maximum threshold
* Full drop (tail drop): when the average queue size is above the maximum threshold

Random drop should prevent congestion (prevent tail drops).
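On Cisco IOS, the minimum threshold, maximum threshold, and drop probability map directly onto the random-detect parameters. A minimal sketch (the interface and threshold values are illustrative, not taken from the question):

```
interface Serial0/0
 random-detect
 ! precedence 0: min threshold 20, max threshold 40,
 ! mark-probability denominator 10 (drop up to 1 in 10 packets at the max threshold)
 random-detect precedence 0 20 40 10
```

Below the minimum threshold no packets are dropped; between the two thresholds packets are dropped randomly; above the maximum threshold the queue tail-drops.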
QUESTION 85
You need to implement QoS on the Certkiller network. Which two queuing methods will allow a percentage of the available bandwidth to be allocated to each queue? (Select two)
A. Weighted Fair Queuing (WFQ)
B. Priority Queuing (PQ)
C. Class-based WFQ (CBWFQ)
D. Custom Queuing (CQ)
E. Low Latency Queuing (LLQ)
F. First-In, First-Out Queuing (FIFO)

Correct Answer: CE Section: (none) Explanation
Explanation/Reference:
Explanation:
1. CBWFQ: Class-based weighted fair queuing (CBWFQ) extends the standard WFQ functionality to provide support for user-defined traffic classes. By using CBWFQ, network managers can define traffic classes based on several match criteria, including protocols, access control lists (ACLs), and input interfaces. A FIFO queue is reserved for each class, and traffic belonging to a class is directed to the queue for that class. More than one IP flow, or “conversation”, can belong to a class. Once a class has been defined according to its match criteria, the characteristics can be assigned to the class. To characterize a class, assign the bandwidth and maximum packet limit. The bandwidth assigned to a class is the guaranteed bandwidth given to the class during congestion. CBWFQ assigns a weight to each configured class instead of each flow. This weight is proportional to the bandwidth configured for each class. Weight is equal to the interface bandwidth divided by the class bandwidth. Therefore, a class with a higher bandwidth value will have a lower weight. By default, the total amount of bandwidth allocated for all classes must not exceed 75 percent of the available bandwidth on the interface. The other 25 percent is used for control and routing traffic. The queue limit must also be specified for the class. The specification is the maximum number of packets allowed to accumulate in the queue for the class. Packets belonging to a class are subject to the bandwidth and queue limits that are configured for the class.

2. LLQ: The Low Latency Queuing (LLQ) feature provides strict priority queuing for class-based weighted fair queuing (CBWFQ), reducing jitter in voice conversations. Configured by the priority command, strict priority queuing gives delay-sensitive data, such as voice, preferential treatment over other traffic. With this feature, delay-sensitive data is sent first, before packets in other queues are treated. LLQ is also referred to as priority queuing/class-based weighted fair queuing (PQ/CBWFQ) because it is a combination of the two techniques. For CBWFQ, the weight for a packet belonging to a specific class is derived from the bandwidth assigned to the class during configuration. Therefore, the bandwidth assigned to the packets of a class determines the order in which packets are sent. All packets are serviced equally, based on weight. No class of packets may be granted strict priority. This scheme poses problems for voice and video traffic that is largely intolerant of delay, especially variation in delay. For voice traffic, variations in delay introduce irregularities of transmission, which manifest as jitter in the conversation. To enqueue a class of traffic to the strict priority queue, configure the priority command for the class after specifying the class within a policy map. Classes to which the priority command is applied are considered priority classes. Within a policy map, give one or more classes priority status. When multiple classes within a single policy map are configured as priority classes, all traffic from these classes is enqueued to the same, single, strict priority queue and they will contend with each other for bandwidth.
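A sketch of how CBWFQ and LLQ allocate bandwidth percentages under the Modular QoS CLI (the class names and percentages are hypothetical, not from the question):

```
policy-map WAN-EDGE
 class VOICE
  priority percent 20       ! LLQ: strict-priority queue, policed to 20% during congestion
 class CRITICAL-DATA
  bandwidth percent 30      ! CBWFQ: guaranteed 30% of interface bandwidth
 class class-default
  fair-queue                ! remaining traffic shares the leftover bandwidth via WFQ
```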
QUESTION 86
DRAG DROP Using the fewest commands possible, drag the commands on the left to the blanks on the right to configure and apply a QoS policy that guarantees that voice packets receive 20 percent of the bandwidth on the S0/1/0 interface.

Correct Answer: Section: (none) Explanation
Explanation/Reference:
Explanation: Complete the following steps to implement the QoS policy:
Step 1: Configure traffic classification by using the class-map command. A class map is created using the class-map global configuration command. Class maps are identified by case-sensitive names. Each class map contains one or more conditions that determine whether the packet belongs to the class. There are two ways of processing conditions when there is more than one condition in a class map:
Match all: All conditions have to be met to bind a packet to the class.
Match any: At least one condition has to be met to bind the packet to the class.
The default match strategy of class maps is match all.
Step 2: Configure the traffic policy by associating the traffic class with one or more QoS features using the policy-map command. The name of a traffic policy is specified in the policy-map command (for example, issuing the policy-map class1 command would create a traffic policy named class1). After you issue the policy-map command, you enter policy-map configuration mode. You can then enter the name of a traffic class. Here is where you enter QoS features to apply to the traffic that matches this class.
Step 3: Attach the traffic policy to inbound or outbound traffic on interfaces, subinterfaces, or virtual circuits by using the service-policy command. Using the service-policy command, you can assign a single policy map to multiple interfaces or assign multiple policy maps to a single interface (a maximum of one in each direction, inbound and outbound). A service policy can be applied for inbound or outbound packets.
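Putting the three steps together for this scenario, the configuration could look like the following sketch (the match criterion for voice is an assumption; the question supplies only the interface and the 20 percent guarantee):

```
class-map match-all VOICE          ! Step 1: classify voice traffic
 match ip dscp ef
!
policy-map QOS-POLICY              ! Step 2: build the traffic policy
 class VOICE
  bandwidth percent 20             ! guarantee 20% of the bandwidth
!
interface Serial0/1/0              ! Step 3: attach the policy to the interface
 service-policy output QOS-POLICY
```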
QUESTION 87
You need to determine the best queuing method for use on a new Certkiller router. Which two statements about queuing mechanisms are true? (Select two)
A. FIFO queuing is only appropriate for slower serial interfaces.
B. Only one queuing mechanism type can be applied to an interface.
C. Weighted fair queuing does not require the configuration of access lists to classify traffic.
D. Flow-based weighted fair queuing provides for queues to be serviced in a round-robin fashion.
E. Weighted fair queuing is the default queuing mechanism used for all but slower than E1 rate interfaces.

Correct Answer: BC Section: (none) Explanation
Explanation/Reference:
Explanation:
The weighted fair queuing algorithm arranges traffic into conversations, or flows. The sorting of traffic into
flows is based on packet header addressing. Common conversation discriminators are as follows:

1. Source/destination network address
2. Source/destination Media Access Control (MAC) address
3. Source/destination port or socket numbers
4. Frame Relay data-link connection identifier (DLCI) value
5. Quality of service/type of service (QoS/ToS) value

The flow-based weighted fair queuing algorithm places packets of the various conversations in the fair queue before transmission. The order of removal from the fair queue is determined by the virtual delivery time of the last bit of each arriving packet. WFQ assigns a weight to each flow, which determines the transmit order for queued packets. In this scheme, lower weights are served first. Small, low-volume packets are given priority over large, high-volume conversation packets. Weighted fair queuing is configured on a particular interface using the command fair-queue congestive-discard-threshold. The congestive discard policy applies only to high-volume conversations that have more than one message in the queue. The discard policy tries to control conversations that would monopolize the link. If an individual conversation queue contains more messages than the congestive discard threshold, no new messages will be queued until the number of messages drops below one-fourth of the threshold value.
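As a sketch, enabling WFQ with an explicit congestive discard threshold looks like this (the interface and value are illustrative):

```
interface Serial0/0
 fair-queue 64    ! congestive discard threshold: stop queuing a conversation at 64 messages
```

With a threshold of 64, new messages for a congested conversation are queued again only after its queue drains below one-fourth of the threshold (16 messages).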
QUESTION 88
Congestion management is a QoS mechanism for dealing with periodic bursts of congestion. What are the three elements of configuring congestion management? (Select three)
A. FIFO configuration
B. Determining packet drop thresholds
C. Determining the random early detection method
D. Queue scheduling
E. Queue creation
F. Traffic classification

Correct Answer: DEF Section: (none) Explanation
Explanation/Reference:
Explanation: Congestion-management features control congestion when it occurs. One way that network elements handle an overflow of arriving traffic is to use a queuing algorithm to sort the traffic and then determine some method of prioritizing it onto an output link. Each queuing algorithm was designed to solve a specific network traffic problem and has a particular effect on network performance. Many algorithms have been designed to serve different needs. A well-designed queuing algorithm provides some bandwidth and delay guarantees to priority traffic.
QUESTION 89
Queuing mechanisms have been put in place to support converged Certkiller network. Which two statements about queuing mechanisms are true? (Select two)
A. When no other queuing strategies are configured, all interfaces except serial interfaces at E1 speed
(2.048 Mbps) and below use FIFO by default.
B. Serial interfaces at E1 speed (2.048 Mbps) and below use weighted fair queuing (WFQ) by default.
C. Weighted fair queuing (WFQ) is the simplest of queuing method.
D. An advantage of the round-robin queuing algorithm is its ability to prioritize traffic.
E. Priority queuing (PQ) uses a dynamic configuration and quickly adapts to changing network conditions.
F. Custom queuing (CQ) uses a dynamic configuration and quickly adapts to changing network conditions.

Correct Answer: AB Section: (none) Explanation
Explanation/Reference:
Explanation:
WFQ is one of the premier Cisco queuing techniques. It is a flow-based queuing

algorithm that does two things simultaneously: It schedules interactive traffic to the front of the queue to
reduce response time, and it fairly shares the remaining bandwidth among the various flows to prevent
high-volume flows from monopolizing the outgoing interface.
The idea of WFQ is to have a dedicated queue for each flow without starvation, delay, or jitter within the
queue. Furthermore, WFQ allows fair and accurate bandwidth allocation among all flows with minimum
scheduling delay. WFQ makes use of the IP precedence bits as a weight when allocating bandwidth.
WFQ was introduced as a solution to the problems of the following queuing mechanisms:

1. FIFO queuing causes starvation, delay, and jitter.
2. Priority queuing (PQ) causes starvation of lower-priority classes and suffers from the FIFO problems within each of the four queues that it uses for prioritization.

The WFQ method is used as the default queuing mode on serial interfaces configured to run at or below E1 speeds (2.048 Mbps).
QUESTION 90
You need to ensure that all critical application traffic traverses the Certkiller network in a timely fashion. Which three methods would help prevent critical network-traffic packet loss on high speed interfaces? (Select three)
A. Policy routing
B. CBWFQ
C. WRED
D. LFI
E. Increase link capacity
F. LLQ

Correct Answer: BCF Section: (none) Explanation
Explanation/Reference:
Explanation:
1. CBWFQ: Class-based weighted fair queuing (CBWFQ) extends the standard WFQ functionality to provide support for user-defined traffic classes. By using CBWFQ, network managers can define traffic classes based on several match criteria, including protocols, access control lists (ACLs), and input interfaces. A FIFO queue is reserved for each class, and traffic belonging to a class is directed to the queue for that class. More than one IP flow, or “conversation”, can belong to a class. Once a class has been defined according to its match criteria, the characteristics can be assigned to the class. To characterize a class, assign the bandwidth and maximum packet limit. The bandwidth assigned to a class is the guaranteed bandwidth given to the class during congestion.

2. WRED: Weighted random early detection (WRED) combines RED with IP precedence or DSCP and performs packet dropping based on IP precedence or DSCP markings. As with RED, WRED monitors the average queue length in the router and determines when to begin discarding packets based on the length of the interface queue. When the average queue length exceeds the user-specified minimum threshold, WRED begins to randomly drop packets with a certain probability. If the average length of the queue continues to increase so that it becomes larger than the user-specified maximum threshold, WRED reverts to a tail-drop packet-discard strategy, in which all incoming packets are dropped. The idea behind using WRED is to maintain the queue length at a level somewhere below the maximum threshold and to implement different drop policies for different classes of traffic. WRED can selectively discard lower-priority traffic when the interface becomes congested and can provide differentiated performance characteristics for different classes of service. WRED can also be configured to produce nonweighted RED behavior.

3. WFQ: When FIFO queuing is in effect, traffic is transmitted in the order received without regard for bandwidth consumption or the associated delays. File transfers and other high-volume network applications often generate series of packets of associated data known as packet trains. Packet trains are groups of packets that tend to move together through the network. These packet trains can consume all available bandwidth, and other traffic flows back up behind them. Weighted fair queuing overcomes an important limitation of FIFO queuing. Weighted fair queuing is an automated method that provides fair bandwidth allocation to all network traffic. Weighted fair queuing provides traffic priority management that dynamically sorts traffic into conversations, or flows. Weighted fair queuing then breaks up a stream of packets within each conversation to ensure that bandwidth is shared fairly between individual conversations. There are four types of weighted fair queuing: flow-based, distributed, class-based, and distributed class-based. Weighted fair queuing (WFQ) is a flow-based algorithm that schedules delay-sensitive traffic to the front of a queue to reduce response time, and also shares the remaining bandwidth fairly among high-bandwidth flows. By breaking up packet trains, WFQ assures that low-volume traffic is transferred in a timely fashion. Weighted fair queuing gives low-volume traffic, such as Telnet sessions, priority over high-volume traffic, such as File Transfer Protocol (FTP) sessions. Weighted fair queuing gives concurrent file transfers balanced use of link capacity. Weighted fair queuing automatically adapts to changing network traffic conditions.
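These three mechanisms can be combined in a single MQC policy: LLQ protects voice, CBWFQ guarantees bandwidth to data classes, and WRED manages drops inside a class queue. A hypothetical sketch (the class names and numbers are not from the question):

```
policy-map CONVERGED
 class VOICE
  priority percent 20          ! LLQ: strict priority for delay-sensitive traffic
 class BULK-DATA
  bandwidth percent 40         ! CBWFQ: guaranteed bandwidth during congestion
  random-detect dscp-based     ! WRED: selective early drops within this class queue
```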
QUESTION 91
LLQ is being used throughout the Certkiller converged network. What is a feature of low latency queuing?
A. LLQ consists of a class-based weighted fair queuing with a priority queue for real-time traffic such as voice
B. LLQ consists of multiple priority queues with FIFO queuing for voice
C. LLQ consists of multiple FIFO priority queues with round-robin queuing for data
D. LLQ consists of a priority queue with multiple weighted fair queues for data
E. LLQ consists of multiple priority queues with weighted round-robin queues for data
F. None of the above

Correct Answer: A Section: (none) Explanation
Explanation/Reference:
Explanation: The Low Latency Queuing (LLQ) feature provides strict priority queuing for class-based weighted fair queuing (CBWFQ), reducing jitter in voice conversations. Configured by the priority command, strict priority queuing gives delay-sensitive data, such as voice, preferential treatment over other traffic. With this feature, delay-sensitive data is sent first, before packets in other queues are treated. LLQ is also referred to as priority queuing/class-based weighted fair queuing (PQ/CBWFQ) because it is a combination of the two techniques.

For CBWFQ, the weight for a packet belonging to a specific class is derived from the bandwidth assigned to the class during configuration. Therefore, the bandwidth assigned to the packets of a class determines the order in which packets are sent. All packets are serviced equally, based on weight. No class of packets may be granted strict priority. This scheme poses problems for voice and video traffic that is largely intolerant of delay, especially variation in delay. For voice traffic, variations in delay introduce irregularities of transmission, which manifest as jitter in the conversation.

To enqueue a class of traffic to the strict priority queue, configure the priority command for the class after specifying the class within a policy map. Classes to which the priority command is applied are considered priority classes. Within a policy map, give one or more classes priority status. When multiple classes within a single policy map are configured as priority classes, all traffic from these classes is enqueued to the same, single, strict priority queue and they will contend with each other for bandwidth.
QUESTION 92
You want to implement a congestion avoidance mechanism within the Certkiller network. Which QoS tool is used to reduce the level of congestion in the queues by selectively dropping packets?
A. Weighted Random Early Detection (WRED)
B. Low Latency Queuing (LLQ)
C. Class-based Weighted Fair Queuing (CBWFQ)
D. Modified Deficit Round Robin (MDRR)
E. None of the above

Correct Answer: A Section: (none) Explanation
Explanation/Reference:
Explanation:
Weighted random early detection (WRED) combines RED with IP precedence or DSCP and performs
packet dropping based on IP precedence or DSCP markings. As with RED,

WRED monitors the average queue length in the router and determines when to begin discarding packets
based on the length of the interface queue. When the average queue length exceeds the user-specified
minimum threshold, WRED begins to randomly drop packets with a certain probability. If the average
length of the queue continues to increase so that it becomes larger than the user-specified maximum
threshold, WRED reverts to a tail-drop packet-discard strategy, in which all incoming packets are dropped.
The idea behind using WRED is to maintain the queue length at a level somewhere below the maximum
threshold and to implement different drop policies for different classes of traffic.
WRED can selectively discard lower-priority traffic when the interface becomes congested and can provide
differentiated performance characteristics for different classes of service. WRED can also be configured to
produce nonweighted RED behavior.

QUESTION 93
Interface congestion on a Certkiller link is causing drops in voice (UDP) and TCP packets. The drops result in jerky speech quality and slower FTP traffic flows. Which two technologies would proactively address the TCP transfer rate and the voice problems in this network? (Select two)
A. CBWFQ
B. WRED
C. Traffic shaping
D. LLQ

Correct Answer: BD Section: (none) Explanation
Explanation/Reference:
Explanation:
1. WRED: Weighted random early detection (WRED) combines RED with IP precedence or DSCP and performs packet dropping based on IP precedence or DSCP markings. As with RED, WRED monitors the average queue length in the router and determines when to begin discarding packets based on the length of the interface queue. When the average queue length exceeds the user-specified minimum threshold, WRED begins to randomly drop packets with a certain probability. If the average length of the queue continues to increase so that it becomes larger than the user-specified maximum threshold, WRED reverts to a tail-drop packet-discard strategy, in which all incoming packets are dropped. The idea behind using WRED is to maintain the queue length at a level somewhere below the maximum threshold and to implement different drop policies for different classes of traffic. WRED can selectively discard lower-priority traffic when the interface becomes congested and can provide differentiated performance characteristics for different classes of service. WRED can also be configured to produce nonweighted RED behavior.

2. LLQ: The Low Latency Queuing (LLQ) feature provides strict priority queuing for class-based weighted fair queuing (CBWFQ), reducing jitter in voice conversations. Configured by the priority command, strict priority queuing gives delay-sensitive data, such as voice, preferential treatment over other traffic. With this feature, delay-sensitive data is sent first, before packets in other queues are treated. LLQ is also referred to as priority queuing/class-based weighted fair queuing (PQ/CBWFQ) because it is a combination of the two techniques. For CBWFQ, the weight for a packet belonging to a specific class is derived from the bandwidth assigned to the class during configuration. Therefore, the bandwidth assigned to the packets of a class determines the order in which packets are sent. All packets are serviced equally, based on weight. No class of packets may be granted strict priority. This scheme poses problems for voice and video traffic that is largely intolerant of delay, especially variation in delay. For voice traffic, variations in delay introduce irregularities of transmission, which manifest as jitter in the conversation. To enqueue a class of traffic to the strict priority queue, configure the priority command for the class after specifying the class within a policy map. Classes to which the priority command is applied are considered priority classes. Within a policy map, give one or more classes priority status. When multiple classes within a single policy map are configured as priority classes, all traffic from these classes is enqueued to the same, single, strict priority queue and they will contend with each other for bandwidth.
QUESTION 94
Four distinct packet queues in a Certkiller router are displayed below: Study the exhibit carefully. Packet-based WRR (not byte-count WRR) is being used to control the output on an interface with four queues (A, B, C, D) configured. Each queue has an assigned weight of A=4, B=2, C=1, and D=1. If the queuing algorithm begins with Queue A and with packets placed into the four queues as shown in the exhibit, in which order will packets be selected from the queues for transmission?

A. 1, 3, 8, 2, 9, 4, 5, 10, 6, 7
B. 1, 2, 4, 5, 3, 9, 6, 7, 8, 10
C. 1, 3, 8, 2, 4, 5, 9, 6, 7, 10
D. 1, 3, 8, 2, 9, 10, 4, 6, 5, 7
E. 1, 3, 2, 9, 4, 6, 5, 7, 8, 10
F. None of the above

Correct Answer: A Section: (none) Explanation
Explanation/Reference:
Explanation:
In WRR, packets are accessed round-robin style, but queues can be given priorities called “weights.” For
example, in a single round, four packets from a high-priority class might be dispatched, followed by two
from a middle-priority class, and then one from a low-priority class.
Some implementations of the WRR algorithm provide prioritization by dispatching a configurable number of bytes each round rather than a number of packets. The Cisco custom queuing (CQ) mechanism is an example of this implementation.
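The byte-count variant is what Cisco custom queuing configures. A hypothetical sketch (the queue numbers, port, and byte counts are illustrative, not from the question):

```
queue-list 1 protocol ip 1 tcp 23        ! classify: Telnet (TCP 23) into queue 1
queue-list 1 default 2                   ! everything else into queue 2
queue-list 1 queue 1 byte-count 3000     ! dispatch up to 3000 bytes from queue 1 per round
queue-list 1 queue 2 byte-count 1500     ! then up to 1500 bytes from queue 2
!
interface Serial0/0
 custom-queue-list 1                     ! apply the custom queue list to the interface
```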
QUESTION 95
Weighted random early detection (WRED) has been configured on a Certkiller router. Out of every 512 packets, how many packets will be dropped if the mark probability denominator has been configured with a value of 512?
A. 4
B. 2
C. 1
D. 8
E. 512
F. None of the above

Correct Answer: C Section: (none) Explanation
Explanation/Reference:
Explanation: The idea behind using WRED is to maintain the queue length at a level somewhere below the maximum threshold and to implement different drop policies for different classes of traffic. WRED can selectively discard lower-priority traffic when the interface becomes congested and can provide differentiated performance characteristics for different classes of service. WRED can also be configured to produce nonweighted RED behavior.

For interfaces configured to use Resource Reservation Protocol (RSVP), WRED chooses packets from other flows to drop rather than the RSVP flows. Also, IP precedence or DSCP helps determine which packets are dropped, because traffic at a lower priority has a higher drop rate than traffic at a higher priority (and, therefore, lower-priority traffic is more likely to be throttled back). In addition, WRED statistically drops more packets from large users than from small users. The traffic sources that generate the most traffic are more likely to be slowed down than traffic sources that generate little traffic.

WRED reduces the chances of tail drop by selectively dropping packets when the output interface begins to show signs of congestion. By dropping some packets early rather than waiting until the queue is full, WRED avoids dropping large numbers of packets at once and minimizes the chances of global synchronization. As a result, WRED helps maximize the utilization of transmission lines.
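The mark probability denominator is the last argument of the random-detect threshold command; with a value of 512, at most one packet in 512 is dropped when the average queue sits at the maximum threshold, which is why answer C is correct. A sketch (the interface and thresholds are illustrative):

```
interface Serial0/0
 random-detect
 ! min threshold 20, max threshold 40, drop up to 1 in 512 packets at the max threshold
 random-detect precedence 0 20 40 512
```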
QUESTION 96
A portion of the configuration for router CK1 is displayed below:

class-map match-any GOLD
 match ip precedence ef
class-map match-any SILVER
 match ip dscp af31
!
policy-map Branch
 class GOLD
  priority percent 20
 class SILVER
  bandwidth percent 15
  random-detect dscp-based
!
interface Serial0/1
 description PPP link to BRANCH
 bandwidth 1536
 ip address 10.200.40.1 255.255.255.252
 encapsulation ppp
This configuration has been used to prioritize voice traffic on the Certkiller network. After issuing several show commands, the administrator realizes the configuration is not working. What could be the problem?
A. Voice traffic should be mapped to a different DSCP value.
B. WRED is not configured for the voice traffic.
C. The policy map needs to be mapped to an interface.
D. The given LLQ configuration is not designed for voice traffic.
E. Custom queuing should be used on converged voice and data networks.
F. None of the above

Correct Answer: C Section: (none) Explanation
Explanation/Reference:
Explanation: Similar to access lists, policy maps must be applied to an interface. Although the policy-map portion of the configuration for prioritizing traffic is complete, the router must be told which interface the policy applies to. To do this, use the service-policy interface configuration command to attach a traffic policy to an interface and to specify the direction in which the policy should be applied (either to packets coming into the interface or to packets leaving the interface). In this example, this would be done with the “service-policy input Branch” or “service-policy output Branch” command.
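The missing piece of the configuration in the question would therefore be:

```
interface Serial0/1
 service-policy output Branch   ! attach the Branch policy to traffic leaving S0/1
```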
QUESTION 97
You want to limit the amount of throughput on one of the Certkiller WAN links. Which QoS mechanism will control the maximum rate of traffic that is sent or received on an interface?
A. Class-based shaping
B. LFI
C. Traffic shaping
D. Traffic policing
E. None of the above

Correct Answer: D Section: (none) Explanation
Explanation/Reference:
Explanation: Traffic policing drops excess traffic to control traffic flow within specified rate limits. Traffic policing does not introduce any delay to traffic that conforms to traffic policies. Traffic policing can cause more TCP retransmissions, because traffic in excess of specified limits is dropped. Traffic-policing mechanisms such as class-based policing or committed access rate (CAR) also have marking capabilities in addition to rate-limiting capabilities. Instead of dropping the excess traffic, traffic policing can mark and then
send the excess traffic. This feature allows the excess traffic to be re-marked with a lower priority before the excess traffic is sent.
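Under MQC, class-based policing is configured with the police command. A sketch (the rate and actions are illustrative, not from the question):

```
policy-map POLICE-WAN
 class class-default
  police cir 1536000 conform-action transmit exceed-action drop
!
interface Serial0/0
 service-policy input POLICE-WAN   ! policing can be applied inbound or outbound
```

Instead of dropping, the exceed action could re-mark and forward the excess traffic (for example, with a set-and-transmit action), matching the marking capability described in the explanation.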
QUESTION 98
Traffic shaping has been enabled on a Certkiller frame relay router. Of the choices below, which Cisco IOS traffic-shaping mechanism statement is true?
A. Class-based policing is configured using the Modular QoS command-line (MQC) interface.
B. Both Frame Relay traffic shaping (FRTS) and virtual IP (VIP)-based Distributed Traffic Shaping (DTS) have the ability to mark traffic.
C. Distributed Traffic Shaping (DTS) is configured with the police command under the policy map configuration.
D. Only the Frame Relay traffic-shaping (FRTS) mechanism can interact with a Frame Relay network, adapting to indications of Layer 2 congestion in the WAN links.
E. None of the above.

Correct Answer: A Section: (none) Explanation
Explanation/Reference:
Explanation: Traffic shaping is an attempt to control traffic in ATM, Frame Relay, or Metro Ethernet networks to optimize or guarantee performance, low latency, or bandwidth. Traffic shaping deals with concepts of classification, queue disciplines, enforcing policies, congestion management, quality of service (QoS), and fairness. Traffic shaping provides a mechanism to control the volume of traffic being sent into a network (bandwidth throttling) by not allowing the traffic to burst above the subscribed (committed) rate. For this reason, traffic-shaping schemes need to be implemented at the edges of networks such as ATM, Frame Relay, or Metro Ethernet to control the traffic entering the network. It also may be necessary to identify traffic with a granularity that allows the traffic-shaping control mechanism to separate traffic into individual flows and shape them differently.

Class-based policing is also available on some Cisco Catalyst switches. Class-based policing supports a single or dual token bucket, single-rate or dual-rate metering, and multiaction policing. Multiaction policing allows more than one action to be applied; for example, marking the Frame Relay DE bit and also the DSCP value before sending the exceeding traffic. Class-based policing is configured using the Cisco Modular QoS CLI (MQC), using the police command under the policy map configuration.
QUESTION 99
You need to consider the advantages and disadvantages of using traffic shaping versus traffic policing within your network. Which statement about traffic policing and which statement about traffic shaping are true? (Select two)
A. Traffic policing drops excess traffic in order to control traffic flow within specified rate limits.
B. Traffic shaping buffers excess traffic so that the traffic stays within the desired rate.
C. Traffic policing can cause UDP retransmissions when traffic in excess of specified limits is dropped.
D. A need for traffic shaping occurs when a service provider must rate-limit the customer traffic to T1 speed on an OC-3 connection.
E. Traffic policing and traffic conditioning are mechanisms that are used in an edge network to guarantee QoS.

Correct Answer: AB Section: (none) Explanation
Explanation/Reference:
Explanation: Policing can be applied to either the inbound or outbound direction, while shaping can be applied only in the outbound direction. Policing drops nonconforming traffic instead of queuing the traffic like shaping. Policing also supports marking of traffic. Traffic policing is more efficient in terms of memory utilization than traffic shaping because no additional queuing of packets is needed.

Both traffic policing and shaping ensure that traffic does not exceed a bandwidth limit, but each mechanism has a different impact on the traffic: policing drops packets more often, generally causing more retransmissions of connection-oriented protocols, such as TCP, while shaping adds variable delay to traffic, possibly causing jitter. Shaping queues excess traffic by holding packets in a shaping queue.

Traffic shaping is used to shape the outbound traffic flow when the outbound traffic rate is higher than a configured rate. Traffic shaping smoothes traffic by storing traffic above the configured rate in a shaping queue. Therefore, shaping increases buffer utilization on a router and causes unpredictable packet delays. Traffic shaping can also interact with a Frame Relay network, adapting to indications of Layer 2 congestion in the WAN. For example, if the backward explicit congestion notification (BECN) bit is received, the router can lower the rate limit to help reduce congestion in the Frame Relay network.
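For contrast with policing, a shaping sketch under MQC (the rate and names are illustrative); note that the service policy must be attached outbound:

```
policy-map SHAPE-WAN
 class class-default
  shape average 1536000        ! buffer and smooth traffic to 1.536 Mbps instead of dropping
!
interface Serial0/0
 service-policy output SHAPE-WAN   ! shaping applies in the outbound direction only
```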
QUESTION 100
Traffic policing has been implemented on router CK1 using CAR. In a network where traffic policing is implemented, what happens when a packet enters the router and exceeds the set parameters?
A. Traffic is dropped or sent with an increased priority.
B. Traffic is sent with a bit set for discard-eligible.
C. Traffic is dropped or sent with the priority unchanged.
D. Traffic is dropped or sent with a different priority.
E. None of the above.

Correct Answer: D Section: (none) Explanation
Explanation/Reference:
Explanation: Traffic policing drops excess traffic to control traffic flow within specified rate limits. Traffic policing does not introduce any delay to traffic that conforms to traffic policies, but it can cause more TCP retransmissions, because traffic in excess of the specified limits is dropped. Traffic-policing mechanisms such as class-based policing or committed access rate (CAR) also have marking capabilities in addition to rate-limiting capabilities: instead of dropping excess traffic, the policer can re-mark it with a lower priority and then send it.
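The mark-and-transmit behavior described above can be sketched the same way. This is an illustrative Python sketch, not IOS code: the function name `car_mark` and the precedence values 5 and 0 are assumptions standing in for CAR's configurable conform/exceed actions.

```python
def car_mark(packets, rate, bucket_size, conform_prec=5, exceed_prec=0):
    """CAR-style rate limit where both conform and exceed actions transmit:
    conforming traffic keeps its priority, while exceeding traffic is
    re-marked with a lower IP precedence and still sent, not dropped.
    """
    tokens, last = float(bucket_size), 0.0
    out = []
    for t, size in packets:
        tokens = min(bucket_size, tokens + (t - last) * rate)  # refill bucket
        last = t
        if size <= tokens:
            tokens -= size
            out.append((t, size, conform_prec))  # conform: priority unchanged
        else:
            out.append((t, size, exceed_prec))   # exceed: lower priority, sent
    return out


burst = [(0.0, 1000), (0.0, 1000), (0.0, 1000)]
print([prec for _, _, prec in car_mark(burst, 1000, 1000)])  # [5, 0, 0]
```

All three packets are transmitted; only the priority differs, which matches answer D ("dropped or sent with a different priority") when the exceed action is set to mark rather than drop.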
QUESTION 101
You need to consider the pros and cons of using traffic shaping and traffic policing within your network. Which statement about traffic policing and which statement about traffic shaping are true? (Select two)
A. Traffic shaping can be applied only in the outbound direction.
B. Traffic shaping can be applied only in the inbound direction.
C. Traffic shaping can be applied in both the inbound and outbound direction.
D. Traffic policing can be applied in both the inbound and outbound direction.
E. Traffic policing can be applied only in the inbound direction.
F. Traffic policing can be applied only in the outbound direction.

Correct Answer: AD Section: (none) Explanation
Explanation/Reference:
Explanation:
Policing can be applied in either the inbound or outbound direction, while shaping can be applied only in the outbound direction. Policing drops nonconforming traffic instead of queuing it as shaping does. Policing also supports marking of traffic. Traffic policing is more efficient in terms of memory utilization than traffic shaping because no additional queuing of packets is needed.

Both traffic policing and shaping ensure that traffic does not exceed a bandwidth limit, but each mechanism has a different impact on the traffic: policing drops packets more often, generally causing more retransmissions of connection-oriented protocols such as TCP, while shaping adds variable delay to traffic, possibly causing jitter.

Shaping queues excess traffic by holding packets in a shaping queue, and is used when the outbound traffic rate is higher than a configured rate. Traffic shaping smooths traffic by storing traffic above the configured rate in a shaping queue; as a result, shaping increases buffer utilization on the router and can cause unpredictable packet delays. Traffic shaping can also interact with a Frame Relay network, adapting to indications of Layer 2 congestion in the WAN. For example, if the backward explicit congestion notification (BECN) bit is received, the router can lower the rate limit to help reduce congestion in the Frame Relay network.
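The BECN interaction can be sketched as a per-interval rate-adjustment step. This is an approximation for illustration only: the 25% back-off and 1/16-of-CIR recovery figures mirror common descriptions of IOS adaptive Frame Relay shaping, but the exact algorithm and the `mincir` floor behavior are assumptions here.

```python
def adapt_shaping_rate(rate, cir, mincir, becn_seen):
    """One adjustment interval of Frame Relay adaptive traffic shaping:
    on BECN, back the shaped rate off by 25% (never below mincir);
    otherwise recover toward CIR by 1/16 of CIR per interval.
    """
    if becn_seen:
        return max(mincir, rate * 0.75)  # congestion signaled: slow down
    return min(cir, rate + cir / 16)     # no congestion: step back up


rate = 64000  # start at CIR (bits/sec), mincir floor of 32000
rate = adapt_shaping_rate(rate, 64000, 32000, becn_seen=True)
print(rate)   # 48000.0: one BECN interval cuts the rate by a quarter
rate = adapt_shaping_rate(rate, 64000, 32000, becn_seen=False)
print(rate)   # 52000.0: recovery adds CIR/16 = 4000
```

Repeated BECN intervals keep lowering the shaped rate until it hits the configured floor, which is how the shaper relieves congestion inside the Frame Relay cloud without dropping the customer's traffic.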

Flydumps.com never believes in second chances and hence brings you the best Cisco 642-845 exam preparation materials, which will help you pass on the first attempt. Flydumps.com experts have compiled the full Cisco 642-845 exam content to help you pass your Cisco 642-845 certification exam on the first attempt and score the highest possible grades.

