I was amused by the beginning of the article: 3,000,000 hosts, and astonishment that a newspaper has an online column...
The goal of the paper is to build a framework for analyzing future techniques deployed on the Internet to solve the problems raised by "real-time" applications.
Current (as of 1995):
- best-effort, no admission control, no assurances about delay or delivery
- killer apps: traditional data uses (telnet, FTP, HTTP, DNS, SMTP)
- apps are elastic, i.e. tolerant of delay and loss; they degrade gracefully in these cases
- apps can change transmission rates in case of congestion
- "Real-time" apps: video & audio
- apps are not elastic; performance degrades badly under delay variation (jitter) or loss
- apps without congestion control
- "Realt-time" apps interfere badly with traditional data uses, no fairness
Possible approaches:
- changing the router implementation, e.g. fair queuing (FQ); see the first sketch after this list
- changing the application implementation, e.g. delay-adaptive techniques; see the second sketch after this list
- introduce "Quality of Service": implicit (the router analyzes traffic patterns) or explicit (signaled in the packet header)
- Admission Control
- Overprovisioning
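To make the FQ bullet concrete, here is a minimal sketch (my illustration, not the paper's algorithm) of fair queuing in the Demers/Keshav/Shenker style: each arriving packet is stamped with the virtual finish time it would have under bit-by-bit round robin, and packets leave in stamp order, so a bursty flow cannot crowd out a polite one. The virtual-time update is deliberately crude.

```python
import heapq
from collections import namedtuple

Packet = namedtuple("Packet", "flow size")

class FairQueue:
    """Toy fair queuing: send packets in order of their virtual finish
    time under emulated bit-by-bit round robin (crude vtime update)."""

    def __init__(self):
        self.finish = {}  # last virtual finish time per flow
        self.vtime = 0.0  # virtual clock, advanced on dequeue
        self.heap = []    # (finish_time, seq, packet)
        self.seq = 0      # tie-breaker for the heap

    def enqueue(self, pkt):
        # a flow's next packet starts where its previous one finished,
        # or at the current virtual time if the flow was idle
        start = max(self.vtime, self.finish.get(pkt.flow, 0.0))
        self.finish[pkt.flow] = start + pkt.size
        heapq.heappush(self.heap, (self.finish[pkt.flow], self.seq, pkt))
        self.seq += 1

    def dequeue(self):
        fin, _, pkt = heapq.heappop(self.heap)
        self.vtime = fin
        return pkt

# a greedy flow dumps five packets at once, a polite flow sends two;
# FQ interleaves them instead of serving the whole burst first
q = FairQueue()
for _ in range(5):
    q.enqueue(Packet("greedy", 100))
for _ in range(2):
    q.enqueue(Packet("polite", 100))
while q.heap:
    print(q.dequeue().flow)  # greedy, polite, greedy, polite, greedy...
```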
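And the application-side counterpart: a sketch of a delay-adaptive receiver that tracks smoothed delay and jitter with exponential averages and schedules playout late enough to absorb most of the variation. The constants are hypothetical (chosen in the spirit of RTP's jitter estimate), not taken from the paper.

```python
class AdaptivePlayout:
    """Toy delay-adaptive receiver: estimate delay and jitter with
    exponentially weighted moving averages, then delay playout by a
    safety margin of a few jitter units."""

    def __init__(self, alpha=0.125, beta=0.25, margin=4.0):
        self.delay = None  # smoothed one-way delay estimate
        self.var = 0.0     # smoothed delay variation (jitter)
        self.alpha, self.beta, self.margin = alpha, beta, margin

    def playout_time(self, send_ts, recv_ts):
        d = recv_ts - send_ts
        if self.delay is None:
            self.delay = d
        else:
            self.var = (1 - self.beta) * self.var + self.beta * abs(d - self.delay)
            self.delay = (1 - self.alpha) * self.delay + self.alpha * d
        # play at send time + smoothed delay + margin for jitter
        return send_ts + self.delay + self.margin * self.var

rx = AdaptivePlayout()
for send, recv in [(0, 50), (20, 75), (40, 88), (60, 130)]:
    print("sent %3d ms, received %3d ms -> play at %.1f ms"
          % (send, recv, rx.playout_time(send, recv)))
```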
The framework:
- the service delivered to application i is encoded in a vector s_i
- U_i(s_i) gives the performance of application i; U_i for FTP, say, depends smoothly and roughly linearly on bandwidth, while for a video conference it drops off sharply when there is not enough bandwidth
- the total performance V = sum_i U_i(s_i) is to be maximized (toy example below)
- V does not account for fairness!
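A toy instance of the framework (my numbers, not the paper's): one elastic app with a concave utility and one "real-time" app that is useless below a hard rate requirement share a 10 Mb/s link; V is maximized by brute force over the bandwidth split.

```python
import math

def u_elastic(bw):
    # elastic app (e.g. FTP): concave, diminishing returns
    return math.log(1 + bw)

def u_realtime(bw, need=6.0):
    # rigid "real-time" app (e.g. video): worthless below its rate
    return 1.0 if bw >= need else 0.0

capacity = 10.0  # Mb/s, hypothetical link
v_star, ftp_bw = max(
    (u_elastic(x) + u_realtime(capacity - x), x)
    for x in (i / 10 for i in range(101))
)
print("V* = %.2f with %.1f Mb/s to FTP, %.1f Mb/s to video"
      % (v_star, ftp_bw, capacity - ftp_bw))
```

Here the optimum gives the video exactly its 6 Mb/s and FTP the rest; but if need were larger than the link can spare, maximizing V would simply starve the video. Nothing in V itself rewards a fair split, which is the point of the last bullet.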