Modeling Topology of Large Internetworks
The explosive growth of internetworking, and particularly of the Internet,
has been accompanied by a wide range of internetworking problems
related to routing, resource reservation, and administration.
The study of algorithms and policies to address such problems often
involves simulation or analysis using an abstraction or model
of the actual network structure and applications.
The reason is clear: networks that are large enough to be interesting
are also expensive and difficult to control; therefore they
are rarely available for experimental purposes.
Moreover, it is generally more efficient to assess solutions using
analysis or simulation --- provided the model is a
"good" abstraction of the real network and application.
It is therefore rather remarkable that studies based on randomly-generated
or trivial network models are so common,
while rigorous analyses of how the results scale or
how they can be applied to actual networks are extremely rare.
Over the next few years, important decisions will be made
regarding the adoption of algorithms and placement of
facilities in the Internet to support scaling to
tens of thousands of administrative domains.
The inputs to these decisions will include simulations
and analyses based on models of networks and applications.
Unfortunately, with the current state of the art
it is very difficult to draw quantitative conclusions
based upon such models; indeed, there is presently no theoretical
basis for assessment of the accuracy of conclusions drawn from models.
A primary objective of our work is therefore to support the study
of large internetworks through scalable, realistic
models of internetwork structure and applications.
An additional objective is to apply and demonstrate the utility
of our models in the development of novel multicast routing algorithms.
Multicast routing is a critical and difficult problem within
large scale internetworking, and serves as a driver for the
rest of our work.
Our approach combines theoretical and experimental techniques.
The first step is formulation of a rigorous definition of model fidelity.
The second step is application of that definition in developing
a set of modeling components, including:
- models of network geography, i.e.,
structure that goes beyond simple topology to include
policy and other considerations, such as known scaling properties;
- compositional techniques for abstracting large internets
as aggregates of smaller geographical components;
- models of the session structure of typical applications,
especially multicast applications;
- models of traffic within sessions, derived from
the work of other research groups.
The third step is calibration and refinement of the models:
measurements from real networks and applications are used to
validate the scalability and fidelity of the models, and
additional levels of detail are added to the entire framework.
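As a purely illustrative sketch of geography-aware topology generation (not this project's actual generators), the classic Waxman construction places nodes in a plane and connects pairs with a probability that decays with distance; the function name and parameter defaults below are our own assumptions:

```python
import math
import random

def waxman_topology(n, alpha=0.2, beta=0.15, size=100.0, seed=1):
    """Waxman-style random topology sketch: n nodes are placed
    uniformly in a size x size plane, and each pair (u, v) is
    connected with probability alpha * exp(-d(u, v) / (beta * L)),
    where L is the largest possible node-to-node distance.
    Parameter names follow Waxman's model, not this project's code."""
    rng = random.Random(seed)
    pos = [(rng.uniform(0, size), rng.uniform(0, size)) for _ in range(n)]
    L = size * math.sqrt(2)  # plane diagonal: maximum distance
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            d = math.dist(pos[u], pos[v])
            if rng.random() < alpha * math.exp(-d / (beta * L)):
                edges.append((u, v))
    return pos, edges
```

Larger alpha raises overall edge density, while smaller beta penalizes long edges more sharply, which is one simple way to encode "geography" beyond bare topology.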
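Similarly, a session-structure model can be sketched as a stochastic process that emits sessions with arrival times, durations, and receiver-group sizes. The distributional choices below (Poisson arrivals, exponential durations and group sizes) are illustrative assumptions only, not measured results from the project:

```python
import random

def generate_sessions(horizon, rate, mean_duration, mean_group_size, seed=1):
    """Session-level workload sketch: multicast sessions arrive as a
    Poisson process of the given rate over [0, horizon); each session
    gets an exponentially distributed duration and a receiver-group
    size of at least one. All distributions are assumptions."""
    rng = random.Random(seed)
    sessions, t = [], 0.0
    while True:
        t += rng.expovariate(rate)  # Poisson inter-arrival gap
        if t >= horizon:
            break
        duration = rng.expovariate(1.0 / mean_duration)
        group_size = 1 + int(rng.expovariate(1.0 / mean_group_size))
        sessions.append((t, duration, group_size))
    return sessions
```

A session list like this can then drive traffic models within each session, mirroring the layering described above.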
Project personnel:
- Megan Thomas (CRA summer intern; now at UC-Berkeley)
- Elizabeth Edwards (CRA summer intern; now at Georgia Tech)
- Samrat Bhattacharjee
College of Computing, 801 Atlantic Drive
Georgia Institute of Technology
Atlanta, Georgia 30332-0280
Telephone: +1 404 894 1403
Fax: +1 404 894 0272
Last updated 2000/7/26 (EWZ)
This material is based upon work supported by the National Science
Foundation under Grant No. MIP-9502669. Any opinions, findings, and
conclusions or recommendations expressed in this material are those of
the author(s) and do not necessarily reflect the views of the National
Science Foundation.