
What is QoS? Why is QoS important?


Quality of Service (QoS) is a set of technologies that work on a network to guarantee its ability to dependably run high-priority applications and traffic under limited network capacity. QoS technologies accomplish this by providing differentiated handling and capacity allocation to specific flows in network traffic. This enables the network administrator to assign the order in which packets are handled, and the amount of bandwidth afforded to that application or traffic flow.

Quality of Service (QoS) across IP networks is becoming an increasingly important aspect of today's enterprise IT infrastructure. Not only is QoS necessary for voice and video streaming over the network, it's also an important factor in supporting the growing Internet of Things (IoT). In this article, I'll explain why QoS is important, how it works, and describe some use-case scenarios to show how it can benefit your end users' experience.


Measurements of concern to QoS are bandwidth (throughput), latency (delay), jitter (variance in latency), and error rate. This makes QoS particularly important for high-bandwidth, real-time traffic such as voice over IP (VoIP), video conferencing, and video-on-demand, which is highly sensitive to latency and jitter. Applications with minimum bandwidth requirements and maximum latency limits like these are called "inelastic."
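Of these metrics, jitter is the least intuitive. As a rough sketch, here is how a jitter estimate can be computed from per-packet send and arrival timestamps, using the smoothed interarrival-jitter formula from RFC 3550 (the RTP specification); the timestamps below are made-up example values:

```python
# Interarrival jitter estimator in the style of RFC 3550 (RTP):
# J = J + (|D| - J) / 16, where D is the change in transit time
# between consecutive packets.

def interarrival_jitter(send_times, arrival_times):
    """Return the smoothed jitter (same time units as the inputs)."""
    jitter = 0.0
    for i in range(1, len(send_times)):
        transit_prev = arrival_times[i - 1] - send_times[i - 1]
        transit_curr = arrival_times[i] - send_times[i]
        d = abs(transit_curr - transit_prev)
        jitter += (d - jitter) / 16.0
    return jitter

# Packets sent every 20 ms; arrivals drift by a few ms under load.
send = [0, 20, 40, 60, 80]
arrive = [5, 26, 44, 69, 85]
print(round(interarrival_jitter(send, arrive), 3))
```

A perfectly paced stream would report zero jitter; the more the arrival spacing wanders from the send spacing, the higher the estimate climbs.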


The QoS mechanisms for ordering packets and allotting bandwidth are queuing and bandwidth management, respectively. Before they can be implemented, however, traffic must be differentiated using classification tools. Classifying traffic according to policy allows organizations to ensure consistency and adequate availability of resources for their most important applications.


Traffic can be classified crudely by port or IP, or using a more sophisticated approach such as by application or user. The latter parameters allow for more meaningful identification, and consequently, classification of the data.
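As a minimal sketch of that idea, the following classifier checks an ordered rule list, preferring an application signature where one is known and falling back to the cruder port match. The rule names, ports, and flow fields are illustrative assumptions, not any vendor's actual policy format:

```python
# Hypothetical classification rules, evaluated in order; first match wins.
RULES = [
    ("voip",  lambda f: f.get("app") == "sip" or f["dst_port"] in (5060, 5061)),
    ("video", lambda f: f.get("app") == "rtp-video"),
    ("bulk",  lambda f: f["dst_port"] in (20, 21, 873)),   # FTP, rsync
]

def classify(flow):
    """Return the traffic class for a flow (a dict of header/app fields)."""
    for name, match in RULES:
        if match(flow):
            return name
    return "best-effort"

print(classify({"dst_port": 5060}))                     # crude port match
print(classify({"dst_port": 443, "app": "rtp-video"}))  # app-aware match
print(classify({"dst_port": 443}))                      # falls through
```

Note how the app-aware rule catches video even on port 443, where a port-only classifier would mislabel it as generic web traffic.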


Next, queuing and bandwidth management tools are assigned rules to handle traffic flows specific to the classification they received upon entering the network.


The queuing mechanism allows packets within traffic flows to be stored until the network is ready to process them. Priority Queuing (PQ) is designed to ensure the necessary availability and minimal latency of network performance for the most important batches of applications and traffic, by assigning them a priority and specific bandwidth based on their classification. This ensures the most important activities on a network are not starved of bandwidth by activities of lower priority. Applications, users, and traffic can be batched into up to 8 differentiated queues.
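The behavior described above can be sketched with a strict-priority scheduler over 8 classes (0 = highest priority). This is a toy model, not any vendor's implementation; a tie-breaking sequence number keeps FIFO order within a class:

```python
import heapq
from itertools import count

class PriorityQueuing:
    """Toy strict priority queuing with 8 traffic classes (0 = highest)."""
    NUM_CLASSES = 8

    def __init__(self):
        self._heap = []
        self._seq = count()  # preserves FIFO order within a class

    def enqueue(self, packet, traffic_class):
        assert 0 <= traffic_class < self.NUM_CLASSES
        heapq.heappush(self._heap, (traffic_class, next(self._seq), packet))

    def dequeue(self):
        """Always serve the highest-priority (lowest-numbered) class first."""
        return heapq.heappop(self._heap)[2]

pq = PriorityQueuing()
pq.enqueue("backup-chunk", 7)    # low priority
pq.enqueue("voip-frame-1", 0)    # highest priority
pq.enqueue("voip-frame-2", 0)
print(pq.dequeue(), pq.dequeue(), pq.dequeue())
```

The backup chunk, despite arriving first, is only served once both voice frames have drained; in real devices, strict priority is usually combined with a bandwidth cap so low classes are not starved entirely.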


Bandwidth management mechanisms measure and control traffic flows on the network to avoid exceeding its capacity and the network congestion that results. Mechanisms for bandwidth management include traffic shaping, a rate-limiting technique used to optimize or guarantee performance and increase usable bandwidth where necessary, and scheduling algorithms, which offer varied methods for providing bandwidth to specific traffic flows.
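A common building block for traffic shaping is the token bucket, which admits traffic at a sustained rate while allowing short bursts. Here is a minimal sketch (the rate and burst figures are arbitrary example values):

```python
class TokenBucket:
    """Token-bucket shaper: 'rate' tokens/sec, bursts up to 'burst' bytes."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, nbytes, now):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True      # conforming: forward the packet
        return False         # exceeds the profile: drop or delay it

tb = TokenBucket(rate=1000, burst=1500)   # ~1 KB/s, one MTU of burst
print(tb.allow(1500, now=0.0))   # burst allowance covers it
print(tb.allow(500,  now=0.0))   # bucket empty, packet non-conforming
print(tb.allow(500,  now=0.5))   # half a second refills 500 tokens
```

A shaper typically delays non-conforming packets until tokens accumulate, whereas a policer drops them outright; the conformance test itself is the same.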


Depending on the provider, the above services and controls can be managed and consolidated down to a single box, as is the case for QoS via Palo Alto Networks firewalls. To communicate QoS measures and classification beyond that box to downstream network infrastructure, Differentiated Services Code Point (DSCP) marking is used. DSCP marks each packet based on its classification and communicates this to each box the packet travels through, ensuring a consistent implementation of QoS policy.
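Concretely, the DSCP value occupies the upper 6 bits of the IP header's TOS/Traffic Class byte, and applications can request a marking on their own sockets. A small sketch, assuming a Linux host and the well-known code points EF (voice) and AF41 (video); actual marking policy is site-specific and often rewritten by the network edge:

```python
import socket

# Well-known DSCP code points (the mapping to traffic types is an assumption
# of typical practice, not a mandate).
DSCP = {"EF": 46, "AF41": 34, "CS0": 0}   # voice, video, best-effort

def dscp_to_tos(dscp):
    """DSCP occupies the upper 6 bits of the IP TOS/Traffic Class byte."""
    return dscp << 2

# Marking outbound packets on a UDP socket (Linux).
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(DSCP["EF"]))
print(dscp_to_tos(DSCP["EF"]))   # EF (46) becomes TOS byte 184 (0xB8)
s.close()
```

Whether downstream devices honor end-host markings is a trust decision; many networks re-classify and re-mark at the boundary instead.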

Why is QoS important?


Some applications running on your network are sensitive to delay. These applications commonly use the UDP protocol as opposed to the TCP protocol. The key difference between TCP and UDP as it relates to time sensitivity is that TCP will retransmit packets that are lost in transit while UDP does not. For a file transfer from one PC to the next, TCP should be used because if any packets are lost, malformed or arrive out of order, the TCP protocol can retransmit and reorder the packets to recreate the file on the destination PC.
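The file-transfer case can be illustrated with a toy model of TCP-style reassembly: because segments carry sequence numbers, the receiver can put out-of-order arrivals back in order and spot exactly which segment to re-request. The segment format here is invented for the sketch:

```python
# Toy illustration of why TCP suits file transfer: segments carry sequence
# numbers, so the receiver can reorder them and detect gaps to re-request.

def reassemble(segments):
    """segments: list of (seq, data). Returns (data so far, missing seqs)."""
    by_seq = dict(segments)
    expected = range(max(by_seq) + 1)
    missing = [s for s in expected if s not in by_seq]
    data = "".join(by_seq[s] for s in expected if s in by_seq)
    return data, missing

# Segments arrive out of order and one (seq 2) was lost in transit.
arrived = [(1, "lo "), (0, "hel"), (3, "rld")]
data, missing = reassemble(arrived)
print(data, missing)   # TCP would now retransmit seq 2 and fill the gap
```

For a file, waiting on that retransmission is fine; for a live voice stream, as the next paragraph explains, the retransmitted packet would arrive after its playback moment has passed.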


But for UDP applications such as an IP phone call, a lost packet cannot usefully be retransmitted: the voice packets arrive as an ordered, real-time stream, so a retransmitted packet would show up after its moment to be played back has already passed. Because of this, lost or delayed packets are a real problem for applications running over UDP. In our voice call example, losing even a few packets will make the voice quality choppy and unintelligible. Additionally, these packets are sensitive to what's known as jitter. Jitter is the variation in delay of a streaming application.


If your network has plenty of bandwidth and no traffic that bursts above what it can handle, you won't have a problem with packet loss, delay, or jitter. But in many enterprise networks, there will be times when links become so congested that routers and switches start dropping packets, because traffic is arriving faster than it can be processed. If that's the case, your streaming applications are going to suffer. This is where QoS comes in.
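That failure mode is easy to see in a toy tail-drop simulation: offer a link more packets per tick than it can serve, give it a finite buffer, and count what gets through versus what gets dropped. All the numbers below are arbitrary example parameters:

```python
from collections import deque

def simulate(arrivals_per_tick, service_per_tick, queue_limit, ticks):
    """Toy tail-drop queue: returns (packets sent, packets dropped)."""
    q, sent, dropped = deque(), 0, 0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if len(q) < queue_limit:
                q.append(1)
            else:
                dropped += 1          # buffer full: tail drop
        for _ in range(service_per_tick):
            if q:
                q.popleft()
                sent += 1
    return sent, dropped

# Offered load (5/tick) exceeds capacity (3/tick) with a 10-packet buffer.
print(simulate(arrivals_per_tick=5, service_per_tick=3, queue_limit=10, ticks=20))
```

Once the buffer fills, every tick sheds the excess two packets, and without QoS those drops hit voice and bulk traffic indiscriminately; classification plus priority queuing is what lets the drops land on the traffic that can tolerate them.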
