The term QoS (Quality of Service) has been used to describe many different ways of providing better service to some types of traffic in a network environment. These include priority queuing, application-specific routing, bandwidth management, traffic shaping, and many others. We will be discussing priority queuing, since it is the most widely implemented QoS approach. Priority queuing is not the only approach that will work: any approach that reliably delivers packets on time will support voice or video conferencing. But since priority queuing is the most widely available form of QoS, we describe here how it works and how to configure it to best support voice and video conferencing.
Enabling QoS in the network is only part of the problem. We will review here the four steps to ensuring QoS works correctly in your network, as follows:
Network QoS Implementation
Bandwidth Demand and Bandwidth Availability
Queues are the primary contributors to packet loss and delay in a packet network. There are queues at each output port in each router and switch in the network, and a packet must enter and leave an output queue on every device it traverses along its route. If queues are empty or nearly empty, the packet enters and is quickly forwarded onto the output link.
If momentary traffic is heavy, queues fill up, and each packet is delayed while all earlier packets in the queue are forwarded ahead of it. If momentary traffic is too high, the queue fills completely, and further arriving packets are discarded, or lost.
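This tail-drop behavior can be illustrated with a minimal sketch (in Python, using a fixed-capacity in-memory queue; the class and capacity are illustrative, not any device's actual implementation):

```python
from collections import deque

class TailDropQueue:
    """Fixed-capacity FIFO output queue: packets that arrive
    while the queue is full are simply discarded (tail drop)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.packets = deque()
        self.dropped = 0

    def enqueue(self, pkt):
        if len(self.packets) >= self.capacity:
            self.dropped += 1      # queue full: packet is lost
            return False
        self.packets.append(pkt)   # packet waits behind earlier arrivals
        return True

    def dequeue(self):
        return self.packets.popleft() if self.packets else None

# A burst of 10 packets into a queue that holds 8: the last 2 are lost.
q = TailDropQueue(capacity=8)
for i in range(10):
    q.enqueue(i)
```

Note that the packets which are not dropped still experience delay proportional to the queue depth ahead of them, which is the source of jitter for voice and video traffic.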
A priority queuing mechanism provides additional queues at each switch or router output port, dedicated to high-priority traffic. A simple priority queue, sometimes called a low latency queue, is always emptied before any lower-priority queue is serviced. If additional high-priority packets arrive while a lower-priority queue is being emptied, service immediately switches back to the high-priority queue.
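As a sketch (in Python, with two in-memory queues; the class names are illustrative, not a vendor implementation), strict-priority servicing looks like this:

```python
from collections import deque

class StrictPriorityScheduler:
    """Low latency queuing: the high-priority queue is always
    drained before the low-priority queue is serviced."""
    def __init__(self):
        self.high = deque()
        self.low = deque()

    def enqueue(self, pkt, high_priority=False):
        (self.high if high_priority else self.low).append(pkt)

    def dequeue(self):
        # Service switches back to the high-priority queue the
        # moment anything is waiting there.
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None

s = StrictPriorityScheduler()
s.enqueue("data-1")
s.enqueue("voice-1", high_priority=True)
s.enqueue("data-2")
s.enqueue("voice-2", high_priority=True)
order = [s.dequeue() for _ in range(4)]
# Voice packets leave first, regardless of arrival order.
```

The tradeoff of this scheme is that sustained high-priority traffic can starve the lower queue, which is why the high-priority class is normally policed to a bandwidth limit.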
A rate-based queue behaves slightly differently. A rate-based policy empties queues based on the bandwidth that has been allocated to each queue. If queue 1 is allocated 40% of the available bandwidth, and queue 2 is allocated 60% of the available bandwidth, then queue 2 is serviced 1.5 times (6/4) as often as queue 1. Under a rate-based policy, each queue is serviced often enough to keep its allocated flow moving rapidly through the queue, but if excess traffic arrives, it backs up in that queue while service continues for the other queue.
Layer 2 and Layer 3 Quality of Service
QoS is implemented at both layer 3 and layer 2 in the protocol stack. Layer 3 QoS will be recognized and handled by routers in the network. However, congestion also occurs in switch output queues, and so Layer 2 QoS is also required. This is less true in the WAN, where the structure tends to be long links connecting one router to the next. In the enterprise network, however, it is often necessary to have both a layer 2 and a layer 3 QoS deployment.
Layer 3 QoS
There are three methodologies available in most networks for implementing QoS at layer 3: IntServ (RSVP), DiffServ, and IP Precedence.
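With DiffServ, for example, an application marks its own traffic by setting the DSCP value, which occupies the upper six bits of the IP TOS/Traffic Class byte. A minimal sketch in Python (assuming Linux, where the `IP_TOS` socket option is honored; other platforms may ignore or restrict it, and the DSCP names used here are the standard IETF code points):

```python
import socket

def dscp_to_tos(dscp):
    # DSCP is the upper six bits of the TOS byte;
    # the lower two bits are reserved for ECN.
    return dscp << 2

EF = 46    # Expedited Forwarding: the usual marking for voice
AF41 = 34  # Assured Forwarding 41: commonly used for video

# Mark all outgoing packets on a UDP socket as EF.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(EF))
```

Marking by the endpoint is only a request: routers and switches along the path must be configured to trust the marking and map it into the appropriate priority or rate-based queue.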
An incorrect QoS configuration can result in unwanted tradeoffs; for example, packet loss may improve while jitter gets worse.