The second part of the tutorial will discuss the
implications of the wireless channel for Internet protocols,
including routing, TCP performance, and mobility functions,
with an emphasis on the inherent tradeoffs in placing
functionality at the various layers. Many practical,
real-world examples will be given throughout the presentation.
This tutorial is intended for those with some
understanding of the Internet architecture and who desire a
better understanding of the mobile wireless channel and its
implications for the Internet architecture.
Phil Karn has a BSEE from Cornell University and an MSEE
from Carnegie Mellon University. He is currently a
Principal Engineer at Qualcomm in San Diego. Before
Qualcomm, Phil was a member of the technical staff at Bell
Laboratories in Naperville, IL and Murray Hill, NJ, and at
Bell Communications Research in Morristown, NJ.
Before becoming involved professionally in digital radio
communications, Phil was active in the early development of
amateur (ham) packet radio. He wrote the KA9Q NOS TCP/IP
package (KA9Q is his amateur radio call sign) and invented
the MACA multiple access scheme now part of IEEE 802.11. At
Qualcomm, Phil conceived the IP-based CDMA packet data
architecture and designed the standard CDMA radio link
protocol (RLP). He is also active in the IETF, where he has
most recently edited the upcoming "Advice for Internet
Subnetwork Designers" document.
M2 Traffic Measurement for IP Operations
Monday, August 27, 2001 9:00-5:00
Matt Grossglauser and Jennifer Rexford
AT&T, NJ, USA.
You manage a large IP network. You notice that your
connectivity to several peers is degrading rapidly. Is it
due to an underprovisioned peering link? A distributed
denial-of-service attack? A new peer-to-peer network? A
bad routing advertisement? A flash crowd? What are you
going to do about it?
Traffic measurement is an essential tool to guide operators of large
IP networks in key engineering decisions. This tutorial focuses on
measurement techniques and traffic models that provide a comprehensive
view of large IP networks, over which the operator has full
administrative control.
The first part of the tutorial describes the basic tasks involved in
operating a large IP network and derives requirements for network
measurement. We argue that the very properties responsible for the
Internet's success also make it difficult to control and manage. We
provide a variety of "real world" anecdotes that illustrate the role
of measurements in network operations.
In the second part, we give a comprehensive survey of measurement data
currently available in IP networks. We present an overview of
SNMP/RMON, flow-level measurement, packet monitoring, active
measurement, and techniques for collecting routing, configuration, and
topology data. We classify these measurements according to their
temporal and spatial granularity, their means of collection, and their
overhead, and present case studies of how to exploit the measurement
data in important operational tasks.
The third part of the tutorial discusses complex operational tasks
that require combining multiple types of measurement data. First, we
describe network tomography, a technique for inferring a traffic
matrix from link utilization statistics. Second, we describe how to
compute point-to-multipoint traffic demands by combining flow-level
measurements with routing table data. Third, we describe a hash-based
packet sampling technique for direct observation of the path matrix.
We discuss the pros and cons of each technique in detail.
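The hash-based packet sampling idea can be sketched as follows: every router applies the same hash function to the invariant content of each packet and samples the packet when the hash falls below a threshold, so the sampling decision is identical at every router the packet traverses and a packet's path can be read off directly. This is a minimal illustrative sketch; the router names, packet contents, and sampling rate are all hypothetical.

```python
import hashlib

SAMPLING_THRESHOLD = 2**31          # ~1/2 of the 32-bit hash space, for illustration

def sample(packet_bytes):
    """Deterministic decision based only on invariant packet content, so
    every router along the path makes the same choice for the same packet."""
    h = int.from_bytes(hashlib.sha1(packet_bytes).digest()[:4], "big")
    return h < SAMPLING_THRESHOLD

def router_observe(router_id, packets, trajectory_log):
    # Each router independently records the sampled packets it forwards.
    for p in packets:
        if sample(p):
            trajectory_log.setdefault(p, []).append(router_id)

log = {}
path = ["A", "B", "C"]                               # hypothetical forwarding path
packets = [f"pkt-{i}".encode() for i in range(1000)]
for router in path:
    router_observe(router, packets, log)

# Every sampled packet was seen at all three routers, so its trajectory
# (one row of the path matrix) is recovered from the per-router logs.
assert all(seen == path for seen in log.values())
```

No coordination between routers is needed; the hash of the packet itself serves as the shared sampling decision.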
Researchers, network architects, and protocol implementors from
academia and industry looking for an applied introduction to IP
traffic measurement from the viewpoint of a network operator.
Matt Grossglauser received his diploma from the Swiss Federal
Institute of Technology (EPFL) and his M.Sc. degree from the Georgia
Institute of Technology, both in 1994, and his Ph.D. from the
University of Paris 6, in 1998. He did most of his thesis work at
INRIA Sophia Antipolis, France. He is currently a member of the IP
Network Management and Performance Department at AT&T Labs -- Research
in Florham Park, New Jersey. His research interests are in network
traffic modeling and measurement, resource allocation, network
management, and mobile communications.
Jennifer Rexford received her B.S.E. degree in electrical engineering
at Princeton University in 1991 and her M.S.E. and Ph.D. degrees in
electrical engineering and computer science at the University of
Michigan in 1993 and 1996, respectively. She is currently a member of
technical staff in the IP Network Management and Performance
Department at AT&T Labs -- Research in Florham Park, New Jersey. Her
research focuses on routing protocols, traffic engineering, and
network measurement. Jennifer is co-author (with Balachander
Krishnamurthy) of the book "Web Protocols and Practice: HTTP/1.1,
Network Protocols, Caching, and Traffic Measurement", published by
Addison-Wesley in May 2001.
T1A Interdomain Routing and BGP
Tuesday, August 28, 2001 9:00-12:30
Timothy G. Griffin
AT&T, NJ, USA.
How is IP connectivity maintained on the global Internet?
How do Internet Service Providers (ISPs) exchange routing
information? How well is the current routing system
working? Can the routing infrastructure continue to scale
as the Internet grows?
The tutorial will survey the basics of interdomain
routing. It will cover what an autonomous system is, how IP
addresses are assigned and aggregated, and why metric-based
routing protocols, such as RIP and OSPF, do not meet the
demands of scale and policy flexibility required for
interdomain routing. Today, interdomain routing is
accomplished with the Border Gateway Protocol (BGP). The
core of the tutorial will be an in-depth look at what BGP
is, how it works, and how it is configured by ISPs. The
tutorial will also survey some of the significant
challenges currently arising in interdomain routing. These
include rapid growth in BGP routing information, delay in
BGP convergence times, and complexity of analyzing the
interaction of autonomously defined routing policies.
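To give a flavor of why BGP supports policy in a way metric-based protocols cannot, here is a minimal sketch of a simplified route-selection ranking (a subset of the real BGP decision process: local preference, then AS-path length, then origin and MED). The prefixes and AS numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    as_path: tuple          # e.g. (64502, 64510)
    local_pref: int = 100   # set by local policy; higher wins
    origin: int = 0         # 0=IGP, 1=EGP, 2=INCOMPLETE; lower wins
    med: int = 0            # lower wins (simplified: always compared)

def best_route(routes):
    """Simplified BGP decision: rank candidate routes to one prefix."""
    return min(routes, key=lambda r: (-r.local_pref, len(r.as_path), r.origin, r.med))

candidates = [
    Route("192.0.2.0/24", as_path=(64500, 64501)),
    Route("192.0.2.0/24", as_path=(64502,)),                   # shorter AS path
    Route("192.0.2.0/24", as_path=(64503, 64504, 64505), local_pref=200),
]
# Policy (local preference) overrides path length: the three-hop route wins.
assert best_route(candidates).local_pref == 200
```

Because local preference is assigned by each AS's own configuration, route choice is a policy decision rather than a shortest-path computation, which is exactly what makes the interaction of autonomously defined policies hard to analyze.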
Anyone who wants to know how connectivity is maintained in
the global Internet. Attendees are expected to have some
familiarity with basic IP addressing and forwarding. Some
understanding of routing with interior gateway protocols,
such as RIP or OSPF, will be helpful but not required.
Tim Griffin is a member of the IP Network Management and
Performance Department at AT&T Labs in Florham Park, New
Jersey. He received his undergraduate degree in
mathematics from the University of Wisconsin, Madison, and
his M.S. and Ph.D. degrees in Computer Science from Cornell
University. Before joining AT&T Labs he was a member of
technical staff at Bell Laboratories. His current
research interests include interdomain routing and the
analysis and modeling of BGP.
T1B Equilibrium and Dynamics of TCP
Tuesday, August 28, 2001 1:30-5:00
Prof. Steven H. Low
California Institute of Technology, CA, USA.
Congestion control is a set of distributed algorithms
to share network resources among competing users. They
adapt to fluctuations in the capacity of, and the
demand for, these resources. Congestion control on the
Internet consists of two subalgorithms: a queue management
algorithm (e.g., DropTail, RED, REM) that provides
congestion information to sources, and a TCP
algorithm (e.g., Tahoe, Reno, NewReno, SACK, Vegas)
that adjusts the source window in response.
Extensive research has recently established the heavy-tailed
nature of Web file sizes. This implies that, while
most TCP connections are short ("mice"), most packets
belong to long connections ("elephants"). The goal of
congestion control is to adapt the elephants to
maximally utilize the network in a way that leaves the
network queues small, so that mice can fly through
the network with small delay and loss. This tutorial
will provide a fundamental understanding of how the
current protocols attempt to achieve this goal.
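The mice-and-elephants claim can be illustrated with a small simulation of heavy-tailed flow sizes. The Pareto tail index used here (alpha = 1.2) is an assumed value for illustration, not a measured one.

```python
import random

random.seed(7)
alpha = 1.2  # assumed heavy-tail index, for illustration only
sizes = sorted((random.paretovariate(alpha) for _ in range(100_000)), reverse=True)

total = sum(sizes)
top10 = sum(sizes[: len(sizes) // 10])   # the "elephants": largest 10% of flows
print(f"top 10% of flows carry {top10 / total:.0%} of the bytes")
```

Even though 90% of the flows are mice, the elephants carry the bulk of the bytes, which is why congestion control focuses on adapting the long flows.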
We will explain several recently
developed optimization and control-theoretic models
of TCP congestion control. We will use them to
provide the mathematical basis for many well-known
empirical observations and intuitions; to predict
network performance and clarify equilibrium
properties of TCP and AQM protocols such as
optimality, fairness, and friendliness; and finally,
to uncover new, subtle stability problems and dynamics of
these protocols that suggest how stability
problems might arise as the network scales
up in size, capacity, and load, and how they might be
alleviated.
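The equilibrium and fairness properties these models make precise can be glimpsed in a toy discrete-time simulation of two synchronized AIMD (Reno-style) flows sharing a DropTail link. This is an illustrative sketch, not a model from the tutorial itself; the capacity and window values are hypothetical.

```python
def aimd_share(w1, w2, capacity, steps):
    """Synchronized AIMD: additive increase of 1 per round; both flows halve
    their windows when offered load exceeds link capacity (DropTail loss)."""
    for _ in range(steps):
        w1 += 1.0
        w2 += 1.0
        if w1 + w2 > capacity:   # congestion: synchronized multiplicative decrease
            w1 /= 2.0
            w2 /= 2.0
    return w1, w2

w1, w2 = aimd_share(100.0, 10.0, capacity=200.0, steps=5000)
# Additive increase preserves the gap between the windows, while each
# synchronized halving shrinks it, so the flows converge to the fair share
# regardless of where they start.
assert abs(w1 - w2) < 1.0
```

The optimization and control-theoretic models go further: they characterize such equilibria analytically and predict when the dynamics become unstable.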
Steven Low received his PhD in EE from Berkeley.
He was with AT&T Bell Labs, Murray Hill, NJ from
1992-1996, with the University of Melbourne, Australia
from 1996 to 2000, where he remains a Senior Fellow of
the University. He is now an Associate Professor at
the California Institute of Technology. He is a
co-recipient of the IEEE Bennett Prize Paper Award in
1997 and a US R&D 100 Award in 1996. He is on
the editorial board of IEEE/ACM Transactions on
Networking and has been a guest editor of IEEE
JSAC. His research interests are in the control
and optimization of networks and protocols.
T2 Algorithms for Networks: Some Techniques for
Design and Analysis
Tuesday, August 28, 2001 9:00-5:00
Ashish Goel, Nick McKeown, and Balaji Prabhakar
University of Southern California, CA, USA, and
Stanford University, CA, USA.
Algorithms are used everywhere in networks: for example, at the endhost
(TCP/IP's congestion avoidance), in switches (for arbitration), in routers
(for lookup, classification and link scheduling), in caches (for replacement),
at web servers (for load-balancing), in optical networks (for wavelength assignment),
and in wireless networks (for routing and power control). While the study of
algorithms, both design and performance evaluation, is a classical subject, there
are some unique constraints placed on algorithms designed for networking applications.
For example, the speed of operation, the size of data structures, the overhead of
communication, the cost of hardware implementation, etc., impose important constraints
that often render the classical solutions unsuitable and require a new approach to
their design and analysis.
This tutorial is aimed at both the implementor and the designer of network algorithms.
We consider specific examples of algorithm design, drawn from high-speed switch scheduling,
router queue management, web server and cache design, and routing in wireline and ad hoc
wireless networks. In each case we discuss relevant performance metrics and implementation
constraints. A variety of interesting and representative algorithms are presented and
compared against their stated metrics.
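As one concrete instance of the lookup problem mentioned above, here is a minimal sketch of longest-prefix match on a one-bit-per-level binary trie. Real routers use compressed or multibit variants to bound the number of memory accesses; the prefixes and next hops below are hypothetical.

```python
def ip_to_bits(addr):
    """Dotted-quad IPv4 address to a 32-character bit string."""
    return "".join(f"{int(octet):08b}" for octet in addr.split("."))

class PrefixTrie:
    """Binary trie for longest-prefix match, one bit per level."""
    def __init__(self):
        self.root = {}

    def insert(self, prefix, length, next_hop):
        node = self.root
        for bit in ip_to_bits(prefix)[:length]:
            node = node.setdefault(bit, {})
        node["nh"] = next_hop

    def lookup(self, addr):
        node, best = self.root, None
        for bit in ip_to_bits(addr):
            if "nh" in node:
                best = node["nh"]     # remember the longest match so far
            if bit not in node:
                break                 # no deeper prefix covers this address
            node = node[bit]
        else:
            best = node.get("nh", best)
        return best

t = PrefixTrie()
t.insert("10.0.0.0", 8, "hop-A")
t.insert("10.1.0.0", 16, "hop-B")
assert t.lookup("10.1.2.3") == "hop-B"   # the /16 beats the covering /8
assert t.lookup("10.2.0.0") == "hop-A"
assert t.lookup("192.168.0.1") is None
```

The per-lookup cost here is up to 32 memory accesses, which is exactly the kind of constraint that drives the specialized lookup algorithms the tutorial covers.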
Ashish Goel is an Assistant Professor of Computer Science at the University of Southern
California. His research interests lie mainly in algorithms and networking. He obtained
his PhD in 1999 from the department of Computer Science at Stanford University.
Nick McKeown is an Assistant Professor of Electrical Engineering and Computer Science
at Stanford University. He received his PhD from the University of California at Berkeley
in 1995. From 1986-1989 he worked for Hewlett-Packard Labs, in their network and communications
research group in Bristol, England. Nick's research interests include the architecture,
analysis and design of high performance switches and Internet routers, IP lookup and
classification algorithms, scheduling algorithms, Internet traffic analysis, traffic
modeling and network processors.
Balaji Prabhakar is an Assistant Professor of Electrical Engineering and Computer Science
at Stanford University. He received his PhD from the University of California, Los Angeles
in Fall 1994, was a post-doctoral fellow at the Basic Research Institute in the Mathematical
Sciences (BRIMS) during 1995-1997, and visited the EECS Department at MIT during 1997-1998.
He has been at Stanford since 1998. Balaji is interested in network algorithms - especially
for switching, routing and quality-of-service, wireless networks, web caching, network pricing,
information theory and stochastic network theory.