Internet QoS: Architectures and Mechanisms for Quality of Service / Edition 1

by Zheng Wang
ISBN-10: 1558606084
ISBN-13: 9781558606081
Pub. Date: 03/05/2001
Publisher: Elsevier Science

Hardcover

$86.95

Overview

Guaranteeing performance and prioritizing data across the Internet may seem nearly impossible because of an increasing number of variables that can affect and undermine service. But if you're involved in developing and implementing streaming video or voice, or other time-sensitive Internet applications, you understand exactly what's at stake in establishing Quality of Service (QoS) and recognize the benefits it will bring to your company.

What you need is a reliable guide to the latest QoS techniques that addresses the Internet's special challenges. Internet QoS is that guide: the first book to dig deep into the issues that affect your ability to provide performance and prioritization guarantees to your customers and users! This book gives a comprehensive view of key technologies and discusses various analytical techniques to help you get the most out of network resources as you strive to make, and adhere to, meaningful QoS guarantees.


Product Details

ISBN-13: 9781558606081
Publisher: Elsevier Science
Publication date: 03/05/2001
Series: The Morgan Kaufmann Series in Networking
Edition description: New Edition
Pages: 240
Product dimensions: 7.38(w) x 9.25(h) x (d)


Read an Excerpt

Chapter 1: The Big Picture

The current Internet has its roots in the ARPANET, an experimental data network funded by the U.S. Defense Advanced Research Projects Agency (DARPA) in the late 1960s. An important goal was to build a robust network that could survive active military attacks such as bombing. To achieve this, the ARPANET was built on the datagram model, where each individual packet is forwarded independently to its destination. The datagram network has the strengths of simplicity and the ability to adapt automatically to changes in network topology.

For many years the Internet was primarily used by scientists for networking research and for exchanging information among themselves. Remote access, file transfer, and email were among the most popular applications, and for these applications the datagram model works well. The World Wide Web, however, has fundamentally changed the Internet. It is now the world's largest public network. New applications, such as video conferencing, Web searching, electronic media, discussion boards, and Internet telephony, are coming out at an unprecedented speed. E-commerce is revolutionizing the way we do business. As we enter the twenty-first century, the Internet is destined to become the ubiquitous global communication infrastructure.

The phenomenal success of the Internet has brought us fresh new challenges. Many of the new applications have very different requirements from those for which the Internet was originally designed. One issue is performance assurance. The datagram model, on which the Internet is based, has few resource management capabilities inside the network and so cannot provide any resource guarantees to users: you get what you get! When you try to reach a Web site or to make an Internet phone call, some parts of the network may be so busy that your packets cannot get through at all. Most real-time applications, such as video conferencing, also require some minimal level of resources to operate effectively. As the Internet becomes indispensable in our lives and work, the lack of predictable performance is certainly an issue we have to address.

Another issue is service differentiation. Because the Internet treats all packets the same way, it can offer only a single level of service. The applications, however, have diverse requirements. Interactive applications such as Internet telephony are sensitive to latency and packet losses. When the latency or the loss rate exceeds certain levels, these applications become literally unusable. In contrast, a file transfer can tolerate a fair amount of delay and losses without much degradation of perceived performance. Customer requirements also vary, depending on what the Internet is used for. For example, organizations that use the Internet for bank transactions or for control of industrial equipment are probably willing to pay more to receive preferential treatment for their traffic. For many service providers, providing multiple levels of services to meet different customer requirements is vital for the success of their business.

The capability to provide resource assurance and service differentiation in a network is often referred to as quality of service (QoS). Resource assurance is critical for many new Internet applications to flourish and prosper. The Internet will become a truly multiservice network only when service differentiation can be supported. Implementing these QoS capabilities in the Internet has been one of the toughest challenges in its evolution, touching on almost all aspects of Internet technologies and requiring changes to the basic architecture of the Internet. For more than a decade the Internet community has made continuous efforts to address the issue and developed a number of new technologies for enhancing the Internet with QoS capabilities.

This book focuses on four technologies that have emerged in the last few years as the core building blocks for enabling QoS in the Internet. The architectures and mechanisms developed in these technologies address two key QoS issues in the Internet: resource allocation and performance optimization. Integrated Services and Differentiated Services are two resource allocation architectures for the Internet. The new service models proposed in them make possible resource assurances and service differentiation for traffic flows and users. Multiprotocol Label Switching (MPLS) and traffic engineering, on the other hand, give service providers a set of management tools for bandwidth provisioning and performance optimization; without them, it would be difficult to support QoS on a large scale and at reasonable cost.

The four technologies will be discussed in depth in the next four chapters. Before we get down to the details, however, it is useful to look at the big picture. In this first chapter of the book we present a high-level description of the problems in the current Internet, the rationales behind these new technologies, and the approaches used in them to address QoS issues.

1.1 Resource Allocation

Fundamentally, many of the problems we see in the Internet come down to the issue of resource allocation: packets get dropped or delayed because the resources in the network cannot meet all the traffic demands. A network, in its simplest form, consists of shared resources, such as bandwidth and buffers, serving traffic from competing users. A network that supports QoS needs to take an active role in the resource allocation process and decide who should get the resources and how much.

The current Internet does not support any form of active resource allocation. The network treats all individual packets exactly the same way and serves them on a first-come, first-served (FCFS) basis. There is no admission control either: users can inject packets into the network as fast as they like.

The Internet currently relies on the TCP protocol in the hosts to detect congestion in the network and reduce the transmission rates accordingly. TCP uses a window-based scheme for congestion control. The window corresponds to the amount of data in transit between the sender and the receiver. If a TCP source detects a lost packet, it slows the transmission rate by reducing the window size by half and then increasing it gradually in case more bandwidth is available in the network.
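
The scheme described here is additive-increase/multiplicative-decrease (AIMD). The sketch below illustrates just that window update rule; it is a simplification (real TCP also includes slow start, retransmission timeouts, and fast recovery), and the class and method names are our own, not TCP terminology:

    class AimdWindow:
        """Illustrative AIMD congestion window, measured in segments."""

        def __init__(self, initial: float = 1.0):
            self.cwnd = initial

        def on_ack(self):
            # Additive increase: about one segment per round trip,
            # i.e., 1/cwnd per acknowledged segment.
            self.cwnd += 1.0 / self.cwnd

        def on_loss(self):
            # Multiplicative decrease: halve the window when a loss
            # signals congestion, never dropping below one segment.
            self.cwnd = max(1.0, self.cwnd / 2)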

TCP-based resource allocation requires all applications to use the same congestion control scheme. Although such cooperation is achievable within a small group, in a network as large as the Internet, it can be easily abused. For example, some people have tried to gain more than their fair share of the bandwidth by modifying the TCP stack or by opening multiple TCP connections between the sender and receiver. Furthermore, many UDP-based applications do not support TCP-like congestion control, and real-time applications typically cannot cope with large fluctuations in the transmission rate.

The service that the current Internet provides is often referred to as best effort. Best-effort service represents the simplest type of service that a network can offer; it does not provide any form of resource assurance to traffic flows. When a link is congested, packets are simply dropped as the queue overflows. Since the network treats all packets equally, any flow can be hit by the congestion.
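
In other words, a best-effort router amounts to a single FIFO queue with tail drop. A minimal sketch of that behavior, with an arbitrary buffer capacity, might look like this:

    from collections import deque

    class DropTailQueue:
        """Illustrative FIFO (drop-tail) queue: all packets are equal."""

        def __init__(self, capacity: int = 100):
            self.capacity = capacity
            self.buffer = deque()

        def enqueue(self, packet) -> bool:
            if len(self.buffer) >= self.capacity:
                return False      # queue overflow: the packet is dropped
            self.buffer.append(packet)
            return True

        def dequeue(self):
            # First-come, first-served: packets leave in arrival order.
            return self.buffer.popleft() if self.buffer else None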

Although best-effort service is adequate for some applications that can tolerate large delay variation and packet losses, such as file transfer and email, it clearly does not satisfy the needs of many new applications and their users. New architectures for resource allocation that support resource assurance and different levels of services are essential for the Internet to evolve into a multiservice network.

Over the last decade the Internet community came up with Integrated Services and Differentiated Services, two new architectures for resource allocation in the Internet. The two architectures introduced a number of new concepts and primitives that are important to QoS support in the Internet:

  • Frameworks for resource allocation that support resource assurance and service differentiation

  • New service models for the Internet in addition to the existing best-effort service

  • Language for describing resource assurance and resource requirements

  • Mechanisms for enforcing resource allocation

Integrated Services and Differentiated Services represent two different solutions. Integrated Services provide resource assurance through resource reservation for individual application flows, whereas Differentiated Services use a combination of edge policing, provisioning, and traffic prioritization...
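
Edge policing of the kind mentioned here is commonly built on a token bucket, which checks arriving traffic against a contracted rate and burst size. The following sketch is illustrative only; the parameters and the conform/exceed handling are assumptions, not the book's specification:

    import time

    class TokenBucket:
        """Illustrative token-bucket policer for a rate/burst profile."""

        def __init__(self, rate: float, burst: float):
            self.rate = rate              # token refill rate, bytes/second
            self.burst = burst            # bucket depth (max burst), bytes
            self.tokens = burst
            self.last = time.monotonic()

        def conforms(self, size: int) -> bool:
            # Refill tokens for the elapsed time, capped at the burst size.
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if size <= self.tokens:
                self.tokens -= size
                return True               # in profile: forward normally
            return False                  # out of profile: drop or remark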

Table of Contents

1 The Big Picture
2 Integrated Services
3 Differentiated Services
4 Multiprotocol Label Switching
5 Internet Traffic Engineering