COMPUTER NETWORK AND SECURITY

CONGESTION CONTROL

An important issue in a packet-switched network is congestion. Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets the network can handle). 

Congestion control refers to the mechanisms and techniques to control the congestion and keep the load below the capacity.

 

Congestion is a state occurring in the network layer when the message traffic is so heavy that it slows down network response time.

Effects of Congestion

• As delay increases, performance decreases.

• If delay increases, retransmission occurs, making the situation worse.

 

There are two traffic shaping algorithms:

  1. Leaky Bucket

 

If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate as long as there is water in the bucket. The rate at which the water leaks does not depend on the rate at which the water is input to the bucket unless the bucket is empty. The input rate can vary, but the output rate remains constant. Similarly, in networking, a technique called leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the bucket and sent out at an average rate. 

In Figure 24.19, we assume that the network has committed a bandwidth of 3 Mbps for a host. The use of the leaky bucket shapes the input traffic to make it conform to this commitment. The host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits of data. The host is silent for 5 s and then sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits of data. In all, the host has sent 30 Mbits of data in 10 s. The leaky bucket smooths the traffic by sending out data at a rate of 3 Mbps during the same 10 s. Without the leaky bucket, the beginning burst may have hurt the network by consuming more bandwidth than is set aside for this host. We can also see that the leaky bucket may prevent congestion. As an analogy, consider the freeway during rush hour (bursty traffic). If, instead, commuters could stagger their working hours, congestion on our freeways could be avoided.
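The arithmetic in this example can be checked directly (values taken from the text; the 10 s window is the total of the two bursts plus the silent period):

```python
# Leaky bucket example: a host's bursty input vs. the shaped output rate.
burst1 = 12 * 2          # 12 Mbps for 2 s  -> 24 Mbits
burst2 = 2 * 3           # 2 Mbps for 3 s   ->  6 Mbits
total = burst1 + burst2  # 30 Mbits sent over the 10 s window
shaped_rate = total / 10 # leaky bucket drains at a constant 3 Mbps
print(total, shaped_rate)  # 30 Mbits, 3.0 Mbps
```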

 

The following is an algorithm for variable-length packets:

1. Initialize a counter to n at the tick of the clock.

2. If n is greater than the size of the packet at the front of the queue, send the packet and decrement the counter by the packet size.

3. Repeat step 2 until n is smaller than the size of the packet at the front of the queue.

4. Reset the counter and go to step 1.
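The steps above can be sketched as a single clock tick of the bucket. This is a minimal illustration, not a production shaper; the queue of packet sizes and the helper name `leaky_bucket_tick` are assumptions for the example:

```python
from collections import deque

def leaky_bucket_tick(queue: deque, n: int) -> list:
    """One clock tick of the variable-length leaky bucket.

    `queue` holds packet sizes (in bytes); `n` is the byte credit
    granted per tick. Returns the list of packet sizes sent this tick.
    """
    counter = n                          # step 1: initialize counter to n
    sent = []
    # step 2: while the credit covers the packet at the front, send it
    while queue and counter >= queue[0]:
        size = queue.popleft()
        sent.append(size)
        counter -= size                  # decrement counter by packet size
    # steps 3-4: stop when the head packet exceeds the remaining credit;
    # the counter is reset to n on the next tick
    return sent

queue = deque([200, 500, 450])
print(leaky_bucket_tick(queue, 1000))   # sends 200 and 500; 450 must wait
```

With a credit of 1000 bytes, the first two packets (700 bytes in total) go out, but the 450-byte packet exceeds the 300 bytes of remaining credit and is held for the next tick.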

 

2. Token Bucket

 

The leaky bucket is very restrictive. It does not credit an idle host. For example, if a host is not sending for a while, its bucket becomes empty. Now if the host has bursty data, the leaky bucket allows only an average rate. The time when the host was idle is not taken into account. 

On the other hand, the token bucket algorithm allows idle hosts to accumulate credit for the future in the form of tokens. For each tick of the clock, the system sends n tokens to the bucket. The system removes one token for every cell (or byte) of data sent. 

For example, if n is 100 and the host is idle for 100 ticks, the bucket collects 10,000 tokens. Now the host can consume all these tokens in one tick with 10,000 cells, or the host can take 1000 ticks with 10 cells per tick. In other words, the host can send bursty data as long as the bucket is not empty. Figure 24.21 shows the idea. The token bucket can easily be implemented with a counter. The counter is initialized to zero. Each time a token is added, the counter is incremented by 1. Each time a unit of data is sent, the counter is decremented by 1. When the counter is zero, the host cannot send data.
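The counter-based implementation described above can be sketched as follows. The class name and the optional `capacity` cap are assumptions added for the example (real token buckets are usually bounded, though the text's version is not):

```python
class TokenBucket:
    """Counter-based token bucket: n tokens added per clock tick,
    one token removed per cell of data sent."""

    def __init__(self, n, capacity=None):
        self.n = n
        self.capacity = capacity
        self.tokens = 0                  # counter initialized to zero

    def tick(self):
        self.tokens += self.n            # add n tokens per clock tick
        if self.capacity is not None:    # optional bound (an assumption)
            self.tokens = min(self.tokens, self.capacity)

    def send(self, cells):
        """Try to send `cells` cells; return how many actually go out."""
        sent = min(cells, self.tokens)
        self.tokens -= sent              # one token spent per cell
        return sent                      # 0 when the bucket is empty

bucket = TokenBucket(n=100)
for _ in range(100):                     # host idle for 100 ticks
    bucket.tick()
print(bucket.tokens)                     # 10000 tokens accumulated
print(bucket.send(10000))                # the whole burst goes out at once
```

This reproduces the text's numbers: after 100 idle ticks at n = 100, the host holds 10,000 tokens and may spend them in one burst.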

 

 

CONGESTION CONTROL METHODS 

 

Congestion control refers to the techniques used to control or prevent congestion. Congestion control techniques can be broadly classified into two categories: 

Open Loop Congestion Control

Open loop congestion control policies are applied to prevent congestion before it happens. The congestion control is handled either by the source or the destination. 

Retransmission Policy : 

This policy addresses the retransmission of packets in the event of loss or corruption. When the sender detects that a packet has been lost or corrupted, it initiates a retransmission process. However, this additional transmission activity can potentially exacerbate network congestion. To mitigate this risk, retransmission timers must be carefully designed to balance the need to prevent congestion while optimizing network efficiency.

Window Policy : 

The choice of window type on the sender's side can impact congestion levels. In the Go-back-N window, multiple packets are retransmitted, even if some have been successfully received by the receiver. This redundancy can potentially worsen network congestion. Therefore, adopting a Selective Repeat window is preferable as it retransmits only the specific packet that may have been lost, reducing unnecessary retransmissions and mitigating congestion.

Discarding Policy : 

An effective discarding policy implemented by routers involves managing congestion while selectively discarding corrupted or less critical packets to maintain message integrity. For instance, during audio file transmission, routers can prioritize the preservation of audio quality by discarding less critical packets, thus preventing congestion while ensuring overall message quality remains intact.

Acknowledgment Policy : 

Acknowledgments, being a part of network traffic, can contribute to congestion. The acknowledgment policy implemented by the receiver can influence congestion levels. Various strategies exist to mitigate congestion associated with acknowledgments. For example, the receiver can optimize acknowledgment traffic by sending acknowledgments for multiple packets at once, rather than individually. Additionally, acknowledgments should be sent only when necessary, such as when a packet is received or a timer expires, to minimize unnecessary traffic and congestion.

Admission Policy : 

In an admission policy, it's essential to employ mechanisms to preempt congestion. Before forwarding network flows, switches should assess the resource needs of each flow. If there's a likelihood of congestion or congestion already exists within the network, routers should reject the establishment of virtual network connections to avert exacerbating congestion issues further.

Closed Loop Congestion Control

Closed loop congestion control techniques are used to treat or alleviate congestion after it happens. Several techniques are used by different protocols; some of them are: 

Backpressure : 

Backpressure is a method where a congested node ceases to accept packets from its upstream nodes. This action can potentially lead to congestion in the upstream nodes, resulting in the rejection of data from higher-level nodes. Backpressure operates as a congestion control technique between nodes, propagating in the opposite direction of data flow. This technique is applicable only to virtual circuits where each node possesses information about its upstream nodes.

Choke Packet Technique : 

The choke packet technique is applicable to both virtual networks and datagram subnets. A choke packet, sent by a node to the source, serves to notify it of congestion. Each router monitors the utilization of its resources and output lines. If the resource utilization exceeds a threshold value set by the administrator, the router sends a choke packet directly to the source, prompting it to reduce traffic. Intermediate nodes through which the packets have traveled are not alerted about congestion.

Implicit Signaling : 

Implicit signaling involves the absence of direct communication between congested nodes and the source. Instead, the source infers the presence of congestion in the network based on certain indicators. For instance, if the sender transmits multiple packets without receiving acknowledgments for a period, it may deduce that congestion has occurred.

Explicit Signaling : 

In explicit signaling, a congested node has the capability to directly transmit a packet to either the source or destination to communicate congestion. Unlike the choke packet technique, where a separate packet is generated to convey congestion, in explicit signaling, congestion signals are embedded within the data-carrying packets themselves. This distinction underscores the methodological difference between the two approaches.

TCP CONGESTION CONTROL

TCP (Transmission Control Protocol) congestion control is a set of algorithms and mechanisms implemented in the TCP protocol to manage and avoid congestion in computer networks. Congestion occurs when the demand for network resources exceeds its capacity, leading to degraded performance and potential packet loss. The primary goals of TCP congestion control are to ensure efficient and fair resource utilization while preventing network congestion. 

  1. Slow Start:
  • TCP typically starts with a slow increase in the sending rate to avoid overwhelming the network when a connection is established. This is known as slow start.
  • The sender exponentially increases its sending rate until it reaches a certain threshold (the slow-start threshold, or ssthresh) or until congestion is detected.
  2. Congestion Avoidance:
  • After slow start, TCP enters the congestion avoidance phase. In this phase, the sender gradually increases its transmission rate until it detects congestion.
  • The congestion avoidance algorithm adjusts the sending rate more conservatively compared to slow start, helping to maintain a stable and efficient network.
  3. Congestion Detection:

The congestion detection phase is a critical aspect of the protocol's operation. The goal is to identify whether the network is experiencing congestion so that TCP can react appropriately to prevent further deterioration of performance. Congestion detection is typically inferred from observed events such as packet loss, increased round-trip times, or explicit signals from network devices. Here we consider two cases:

  • Case 1: Timeouts:
    • TCP relies on timeout events to detect packet loss. If a sender does not receive an acknowledgment (ACK) for a transmitted packet within a certain time (timeout period), it assumes that the packet is lost and considers it an indication of congestion.
    • However, relying solely on timeouts can be less responsive than other methods, as it may take longer to detect congestion.
  • Case 2: Triple Duplicate ACKs:
    • Instead of waiting for a timeout, TCP can use the occurrence of three consecutive duplicate acknowledgments as an indication of packet loss.
    • When the sender receives three duplicate ACKs, it assumes that the corresponding packet was lost and initiates Fast Retransmit and Fast Recovery to resend the lost packet.
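The three phases and the two detection cases can be tied together in a simplified state-update function, loosely modeled on TCP Reno. This is a sketch under stated assumptions (window measured in whole segments, `react` is a hypothetical helper name, fast recovery reduced to its end state), not a full TCP implementation:

```python
def react(cwnd, ssthresh, event):
    """One step of a simplified TCP Reno congestion window.

    cwnd and ssthresh are in segments. `event` is one of:
      "ack"     - a full window was acknowledged
      "3dup"    - three duplicate ACKs received (fast retransmit)
      "timeout" - retransmission timer expired
    Returns the updated (cwnd, ssthresh).
    """
    if event == "ack":
        if cwnd < ssthresh:
            cwnd *= 2                    # slow start: exponential growth
        else:
            cwnd += 1                    # congestion avoidance: additive increase
    elif event == "3dup":
        ssthresh = max(cwnd // 2, 2)     # halve the threshold
        cwnd = ssthresh                  # fast recovery: skip slow start
    elif event == "timeout":
        ssthresh = max(cwnd // 2, 2)
        cwnd = 1                         # severe signal: restart slow start
    return cwnd, ssthresh

cwnd, ssthresh = 1, 8
for _ in range(4):                       # four rounds of ACKs
    cwnd, ssthresh = react(cwnd, ssthresh, "ack")
print(cwnd)                              # grows 1 -> 2 -> 4 -> 8, then 8 -> 9
```

Note the asymmetry between the two detection cases: triple duplicate ACKs imply the network is still delivering packets, so the window only halves, while a timeout suggests heavier congestion and drops the window back to one segment.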