Wednesday, September 10, 2008

Fair Queueing Algorithms

Alan Demers, Srinivasan Keshav, Scott Shenker, "Analysis and Simulation of a Fair Queuing Algorithm"
Ion Stoica, Scott Shenker, Hui Zhang, "Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks"

FQ Algorithm

There are two "points of implementation" for congestion control: the source and the gateway. The paper looks at the gateway and suggests a queueing algorithm. While a queueing algorithm does not change the total traffic, and so does not directly avoid congestion, it "affects the collective behavior of flow control algorithms".

The algorithm has a queue per flow (per-flow state).

The basic algorithm is best explained in a sequence of drawings I made and forgot to scan (a small code sketch below has to stand in for them). I found that the notation obfuscated the basic idea.
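Since the drawings are missing, here is a minimal Python sketch of the finish-number bookkeeping as I understood it. The class and variable names are mine, and the round number R(t) of the bit-by-bit round robin is approximated by the finish number of the packet most recently sent, which is cruder than the paper's definition but shows the idea: each packet gets F = max(previous finish number of its flow, R(t)) + packet length, and the packet with the smallest F is sent next.

```python
import heapq
from collections import defaultdict

class FairQueue:
    """Sketch of the FQ finish-number bookkeeping (approximation of Demers et al.).

    The paper's round number R(t) is approximated here by the finish
    number of the packet last sent, to keep the sketch short.
    """

    def __init__(self):
        self.last_finish = defaultdict(float)  # previous finish number per flow
        self.virtual_time = 0.0                # stand-in for the round number R(t)
        self.heap = []                         # (finish_number, seq, flow, length)
        self.seq = 0                           # tie-breaker for equal finish numbers

    def enqueue(self, flow, length):
        # F = max(previous finish number of this flow, R(t)) + packet length
        start = max(self.last_finish[flow], self.virtual_time)
        finish = start + length
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow, length))
        self.seq += 1

    def dequeue(self):
        # Send the packet with the smallest finish number next.
        if not self.heap:
            return None
        finish, _, flow, length = heapq.heappop(self.heap)
        self.virtual_time = finish
        return flow, length

# Example: a greedy FTP-like flow vs. a sparse telnet-like flow.
fq = FairQueue()
for _ in range(3):
    fq.enqueue("ftp", length=1000)
fq.enqueue("telnet", length=40)
print([fq.dequeue()[0] for _ in range(4)])  # telnet's small packet jumps ahead of the ftp backlog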

I found the quantities used when evaluating the algorithm interesting: "bandwidth", "promptness" and "buffer space" are supposed to be nearly independent.
The design requirements were:
- has to work in a heterogeneous environment with different flow control algorithms
- fairness despite misbehaved sources (using the max-min definition: a common limit is imposed on every flow so that the sum of all capped bandwidth allocations equals the full bandwidth; see the sketch after this list)
- good delay-throughput curve for each flow
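As a rough illustration of the max-min definition referenced above, here is a small sketch (flow names and numbers are invented) of the "common limit" idea: flows that demand less than the limit keep their demand, and the remaining capacity is split evenly among the rest.

```python
def max_min_fair_share(demands, capacity):
    """Max-min fair allocation: flows demanding less than the common
    limit keep their demand; the rest share what is left evenly."""
    alloc = {}
    remaining = capacity
    unsatisfied = dict(demands)
    while unsatisfied:
        share = remaining / len(unsatisfied)
        # Flows demanding no more than the current share are fully satisfied.
        satisfied = {f: d for f, d in unsatisfied.items() if d <= share}
        if not satisfied:
            # Everybody left wants more than the common limit: split evenly.
            for f in unsatisfied:
                alloc[f] = share
            return alloc
        for f, d in satisfied.items():
            alloc[f] = d
            remaining -= d
            del unsatisfied[f]
    return alloc

# A well-behaved telnet-like flow keeps its small demand; the greedy
# flows split the rest: {'telnet': 0.5, 'ftp': 4.75, 'rogue': 4.75}
print(max_min_fair_share({"ftp": 10, "telnet": 0.5, "rogue": 100}, capacity=10))
```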

The performance was evaluated through computation and simulation. The basic setup had a couple of FTP-like and telnet-like flows, and simulations were run on different network topologies. Flows that use less than their share of bandwidth are rewarded with low delay.

The paper was great: it contained a lot of ideas about the environment the algorithm has to work in, theoretical conditions an algorithm has to fulfill to provide stable flow control (e.g. continuity), a definition of fairness and other desired criteria, metrics for performance, and an analysis of performance through both computation and simulation. I don't feel I can write a summary doing the paper justice.

Core-stateless FQ algorithm

In the previous paper, a gateway routing traffic has to maintain per-flow state. For a gateway routing thousands or millions of different flows, this might be prohibitively expensive. The paper therefore divides routers into two categories: core and edge. Edge routers maintain per-flow state and write the rate of the respective flow into the packet headers. Core routers keep only aggregate statistics and drop packets probabilistically to achieve fair bandwidth allocation. If a core router sees a packet whose header indicates that the flow's arrival rate exceeds its fair share of bandwidth, the packet is dropped with just the probability needed to bring the flow back within its fair share. The fair share of bandwidth is the same for all flows and is estimated by raising it until the link is fully utilized and lowering it in case of congestion.
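A minimal sketch of the per-packet decision a core router makes, using the drop probability max(0, 1 - fair_share/rate) from the paper; the estimation loop for the fair share itself (raise while the link is underused, lower under congestion) is left out, and the function name is mine.

```python
import random

def core_forward(packet_label_rate, fair_share):
    """CSFQ core-router drop decision (sketch).

    packet_label_rate: the flow's arrival rate written into the packet
    header by the edge router (assumed positive); fair_share: the core
    router's current estimate of the fair share rate (its estimation
    loop is omitted here). Returns the label to carry downstream, or
    None if the packet is dropped.
    """
    # Drop with probability max(0, 1 - fair_share / rate): a flow sending
    # at twice the fair share loses roughly half of its packets.
    drop_prob = max(0.0, 1.0 - fair_share / packet_label_rate)
    if random.random() < drop_prob:
        return None
    # Relabel so that downstream routers see at most the fair share.
    return min(packet_label_rate, fair_share)
```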

I imagine the network being an ISP providing Internet access to PCs at home: an edge router is the router connecting such a PC to the ISP and is responsible for calculating that customer's flow rate. The core routers are the ISP's internal routers and are responsible for maintaining fair use of bandwidth.
