Special issue: Transport protocols for Next Generation Networks

Vol. 61, n° 1-2, January-February 2006
Content available on SpringerLink

Guest editors
Congduc Pham, University of Pau, France
Guy Leduc, University of Liège, Belgium

Foreword

Congduc Pham, Guy Leduc

Characterization and evaluation of TCP and UDP-based transport on real networks

R. Les COTTRELL (1), Saad ANSARI (2), Parakram KHANDPUR (1), Ruchi GUPTA (1), Richard HUGHES-JONES (3), Michael CHEN (4), Larry McINTOSH (5), Frank LEERS (5)

(1) Stanford Linear Accelerator Center (SLAC), 2575 Sand Hill Road, Menlo Park, CA 94025, USA
(2) Saad Ansari was with SLAC. He is now with Microsoft.
(3) Department of Physics and Astronomy, The University of Manchester, Oxford Road, Manchester M13 9PL, England
(4) Chelsio, 370 San Aleso Avenue, Sunnyvale, CA 94085, USA
(5) Sun Microsystems Inc., 9515 Town Center Drive, San Diego CA 92121, USA

Abstract Standard TCP (Reno TCP) does not perform well on fast long-distance networks, due to its AIMD congestion control algorithm. In this paper we consider the effectiveness of various alternatives, in particular with respect to their applicability to a production environment. We then characterize and evaluate the achievable throughput, stability and intra-protocol fairness of different TCP stacks (Scalable, HSTCP, HTCP, Fast TCP, Reno, BICTCP, HSTCP-LP and LTCP) and a UDP-based application-level transport protocol (UDTv2) on both production and testbed networks. The characterization is made with respect to both the transient traffic (entry and exit of different streams) and the steady-state traffic on production Academic and Research networks, using paths with RTTs differing by a factor of 10. We also report on measurements made with 10 Gbit/s NICs, with and without TCP Offload Engines, on 10 Gbit/s dedicated paths set up for SC2004.
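
As a rough illustration of the AIMD limitation this abstract starts from, the sketch below shows how large Reno's congestion window must be to fill a 10 Gbit/s transcontinental path, how long additive increase needs to recover from a single loss, and the loss rate the standard Mathis response function would require. The MTU, RTT and rate values are illustrative assumptions, not figures from the paper.

```python
# Illustrative back-of-the-envelope numbers (not from the paper) showing why
# Reno's AIMD struggles on fast long-distance paths.
# Assumptions: 1500-byte segments, 180 ms RTT, 10 Gbit/s target rate.

mss_bits = 1500 * 8          # segment size in bits
rtt = 0.180                  # round-trip time in seconds
target_bps = 10e9            # 10 Gbit/s link

# Congestion window (in segments) needed to fill the pipe: cwnd = rate * RTT / MSS
cwnd_segments = target_bps * rtt / mss_bits
print(f"cwnd to fill 10 Gbit/s at 180 ms RTT: {cwnd_segments:.0f} segments")

# After a loss, Reno halves cwnd and regains one segment per RTT (additive
# increase), so recovering half the window takes cwnd/2 RTTs.
recovery_s = (cwnd_segments / 2) * rtt
print(f"time to recover from a single loss: {recovery_s / 60:.0f} minutes")

# Mathis et al. response function: rate ~ (MSS / RTT) * 1.22 / sqrt(p), so
# sustaining 10 Gbit/s requires an extremely low loss probability p.
p_required = (1.22 * mss_bits / (rtt * target_bps)) ** 2
print(f"loss probability needed to sustain 10 Gbit/s: {p_required:.2e}")
```

Numbers of this order (hours of recovery time, loss rates around 1e-10) are what motivate the alternative stacks evaluated in the paper.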

Keywords TCP/IP, Internet protocol, Transmission protocol, Throughput, Performance evaluation, High rate, Long distance transmission.

High-speed dedicated channels and experimental results with Hurricane protocol

Nageswara S. V. RAO, Qishi WU, Steven M. CARTER, William R. WING

Computer Science and Mathematics Division, Oak Ridge National Laboratory – Oak Ridge, TN 37831, USA

Abstract Networks are currently being deployed to provide dedicated channels to support the large data transfers and stable control flow needed in large-scale scientific applications. We present experimental results on the application-level throughputs achievable on such channels using a range of hosts and dedicated connections. These results highlight the throughput limitations in several cases due to host issues, including disk and file system speeds, processor scheduling and loads, and the complexity of internal data paths. We characterize such effects using the notion of host bandwidth, which must be considered together with the connection bandwidth in designing and optimizing transport protocols for dedicated channels. We propose a new transport protocol implementation, named Hurricane, to achieve high utilization of dedicated channels. While the overall protocol is quite similar to existing UDP-based protocols, new parameters, such as the group size of NACKs, are identified and carefully optimized to achieve high channel utilization. Our end hosts consist of workstations, a cluster, and a Cray X1 supercomputer. Between two workstations, we consider: (A) a 1 Gbps layer 3 connection of several hundred miles, and (B) a 10 Gbps layer 2 connection of several thousand miles. Between the Cray X1 and the cluster, we consider: (C) a 450 Mbps layer 3 channel provisioned by policy, and (D) a 1 Gbps layer 2 connection provisioned over an MPLS tunnel.
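
To make the "group size of NACKs" parameter concrete, here is a hypothetical receiver-side sketch of NACK grouping in a UDP-based transfer. The function name, structure and example values are assumptions for illustration only, not Hurricane's actual implementation.

```python
# Hypothetical sketch of receiver-side NACK grouping in a UDP-based transfer,
# illustrating the kind of "NACK group size" parameter the abstract mentions.

def build_nacks(received, highest_seq, group_size=64):
    """Collect missing sequence numbers up to highest_seq and batch them
    into NACK messages of at most group_size entries each."""
    missing = [seq for seq in range(highest_seq + 1) if seq not in received]
    nacks = [missing[i:i + group_size] for i in range(0, len(missing), group_size)]
    return nacks  # each sub-list would be reported to the sender in one NACK packet

# Example: packets 3, 4 and 10 were lost out of 0..12
received = set(range(13)) - {3, 4, 10}
print(build_nacks(received, 12, group_size=2))   # [[3, 4], [10]]
```

Larger groups mean fewer control packets per loss burst but coarser, later retransmission requests; tuning this trade-off is the kind of optimization the paper targets.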

Keywords Dedicated line, Transmission protocol, High rate, Experimental result, Computer network, Throughput, Scientific application, Long distance transmission.

Performance evaluation of DCCP: A focus on smoothness and TCP-friendliness

Xiaoyuan GU*, Pengfei DI**, Lars WOLF*

*Department of Computer Science, Technische Universität Braunschweig – Muehlenpfordtstr. 23, 38106 Braunschweig, Germany
** IBDS Systemarchitektur, Universität Karlsruhe (TH) – Am Fasanengarten 5, 76131 Karlsruhe, Germany

This work was done during Pengfei DI’s study at the Technische Universität Braunschweig.

Abstract Recent years have seen a dramatic increase in the use of multimedia applications on the Internet, which typically either lack congestion control or use proprietary congestion control mechanisms. This can easily cause congestion collapse or compatibility problems. The Datagram Congestion Control Protocol (DCCP) fills the gap between UDP and TCP, featuring congestion control rather than reliability for packet-switched rich-content delivery with a high degree of flexibility. We present a DCCP model designed and implemented with OPNET Modeler, together with experiments and an evaluation focused largely on the smoothness of the data rates and on the fairness between concurrent DCCP flows and TCP flows. We found that DCCP-CCID3 demonstrates stable data rates under different scenarios, while fairness between DCCP and TCP is only achieved under certain conditions. We also validated that the throughput of DCCP-CCID3 is proportional to the average packet size, and that a relatively fixed packet size is critical for the optimal operation of DCCP. Problems in the slow-start phase and with insufficient receiver buffer size were identified, and we propose solutions to them.
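
The observation that CCID3 throughput scales with packet size follows directly from the TFRC throughput equation (RFC 3448) on which CCID3 is based. The sketch below simply evaluates that standard equation for a few packet sizes; the RTT, loss event rate and parameter b are assumed values, not the paper's experimental settings.

```python
# TFRC allowed sending rate (RFC 3448), evaluated for several packet sizes.
from math import sqrt

def tfrc_rate(s, rtt, p, b=1, t_rto=None):
    """Allowed sending rate X in bytes/s for packet size s (bytes),
    round-trip time rtt (s) and loss event rate p."""
    t_rto = t_rto or 4 * rtt
    denom = (rtt * sqrt(2 * b * p / 3)
             + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return s / denom

for s in (250, 500, 1000, 1460):          # bytes per packet
    x = tfrc_rate(s, rtt=0.1, p=0.01)     # assumed 100 ms RTT, 1% loss event rate
    print(f"s = {s:4d} B  ->  X = {x * 8 / 1e3:.0f} kbit/s")
```

Since s appears only in the numerator, halving the packet size halves the allowed rate, which is why a relatively fixed packet size matters for CCID3.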

Keywords: Transmission protocol, Internet Multimedia, Congestion control, TCP/IP, Packet transmission, Modeling, Simulation, Performance evaluation.

A study of a simple preventive transport protocol

Fabien CHATTÉ*,**, Bertrand DUCOURTHIAL*, Silviu-Iulian NICULESCU*

* Lab. HEUDIASYC (UMR CNRS 6599) – Université de Technologie de Compiègne, Centre de Recherche de Royallieu – BP 20529, 60205 Compiègne cedex, France.
** Fabien CHATTÉ is now at Neopost Industry

Abstract The main qualities of a protocol for transporting multimedia flows are related to the way congestion is handled. This paper addresses the problem of end-to-end congestion control performed in the Internet transport layer. We present a simple protocol called Primo, which determines the appropriate sending rate in order to maximize the use of network resources and minimize packet loss. Comparisons with existing transport protocols (TCP Reno, Sack, Vegas and TFRC) are made with regard to various efficiency criteria such as the stability of sending and reception rates, loss rate, resource occupancy rate and fairness.

Keywords: Transmission protocol, Multimedia, Transport layer, Congestion control, Internet, Comparative study.

Rethinking end-to-end failover with transport layer multihoming

Armando L. CARO, Jr.*, Paul D. AMER**, Randall R. STEWART***

* BBN Technologies – 10 Moulton St., Cambridge, MA 02138, USA
** University of Delaware – Computer and Information Sciences Department, 103 Smith Hall, Newark, DE 19716, USA
*** Cisco Systems – 4875 Forest Drive, Suite 200, Columbia, SC 29206, USA

Abstract Using the application of bulk data transfer, we investigate end-to-end failover mechanisms and thresholds for transport protocols that support multihoming (e.g., SCTP). First, we evaluate temporary failovers and measure the tradeoff between aggressive (i.e., lower) thresholds and spurious failovers. Surprisingly, we find that spurious failovers do not degrade performance, and often actually improve goodput regardless of the paths’ characteristics (bandwidth, delay, and loss rate). A permanent failover mechanism tries to avoid throttling the sending rate by not returning to the primary path when it recovers. We demonstrate that such a mechanism can be beneficial if the sender can estimate each path’s RTT and loss rate. We advocate a new approach to end-to-end failover that temporarily redirects traffic to an alternate path on the first sign of a potential failure (i.e., a timeout) on the primary path, but conservatively proceeds with failure detection of the primary path in the background.
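
A minimal sketch of the threshold-based failover logic discussed above: consecutive timeouts on the primary path are counted against a failover threshold (SCTP's Path.Max.Retrans plays this role), while the advocated scheme moves new data to an alternate path after the very first timeout and keeps probing the primary in the background. Class and method names are illustrative assumptions, not SCTP implementation code.

```python
# Schematic failover state (illustration only, not an SCTP stack).

class FailoverState:
    def __init__(self, threshold):
        self.threshold = threshold      # e.g. SCTP's Path.Max.Retrans
        self.timeouts = 0
        self.sending_path = "primary"

    def on_timeout(self):
        self.timeouts += 1
        if self.timeouts == 1:
            # advocated behaviour: redirect new transmissions immediately,
            # but continue failure detection of the primary in the background
            self.sending_path = "alternate"
        if self.timeouts >= self.threshold:
            print("primary path declared failed")

    def on_primary_ack(self):
        # the primary recovered before the threshold was reached
        self.timeouts = 0
        self.sending_path = "primary"
```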

Keywords Internet, Transport layer, Fault tolerant system, Alternative routing, Transmission protocol, Finite automaton, Performance evaluation.

TICP: Transport Information Collection Protocol

Chadi BARAKAT*, Mohammad MALLI*, Naomichi NONAKA**

* INRIA, Planète team – 2004, route des Lucioles, 06902 Sophia Antipolis, France
** Hitachi Ltd, Systems Development Laboratory, 292 Yoshida-cho, Totsuka-ku, Yokohama, Kanagawa 244-0817, Japan

Abstract We present and validate TICP, a TCP-friendly reliable transport protocol to collect information from a large number of sources spread over the Internet. TICP is a stand-alone protocol that can be used by any application requiring the reliable collection of information. It ensures two main functions: (i) the information arrives entirely and correctly at the collector, and (ii) implosion at the collector and congestion of the network are avoided. Congestion control in TICP is done by having the collector probe the sources at a rate that is a function of network conditions. The probing rate increases and decreases in a way similar to how TCP adapts its congestion window. We implement TICP in ns-2 and validate its performance. In particular, we show how efficient TICP is in quickly and reliably collecting information from a large number of sources, while avoiding network congestion and being fair to competing traffic.
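
The following sketch illustrates the kind of TCP-like adaptation of the probing rate the abstract describes: a window counted in outstanding probes, grown when reports arrive in time and halved when probes time out. It is an illustration of the general idea under assumed names and constants, not the TICP ns-2 code.

```python
# Illustrative collector-side window adaptation (not the actual TICP implementation).

class TicpCollector:
    def __init__(self):
        self.cwnd = 1.0        # probes the collector may have outstanding
        self.ssthresh = 32.0   # assumed initial slow-start threshold

    def on_report_received(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0                 # slow-start-like growth
        else:
            self.cwnd += 1.0 / self.cwnd     # additive increase per window

    def on_probe_timeout(self):
        # a missing report is treated as a congestion signal
        self.ssthresh = max(self.cwnd / 2, 1.0)
        self.cwnd = max(self.cwnd / 2, 1.0)  # multiplicative decrease
```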

Keywords: Internet, Transmission protocol, TCP/IP, Congestion control, Transport layer, Data collection.

Power control for ad-hoc networks

Alain ABINAKHOUL*, Loutfi NUAYMI**

* FT R&D/MAPS/EIS – 38-40, rue du Général Leclerc, 92794 Issy-Les-Moulineaux cedex, France
** GET/ENST Bretagne (site de Rennes), Département RSM (Réseaux et Services Multimédia) – 2 rue de la châtaigneraie, 35576 Cesson-Sévigné, France

Abstract In an ad-hoc network, mobile stations communicate with each other using multi-hop wireless links. There is no stationary infrastructure such as base stations. Each node in the network also acts as a router, forwarding data packets for other nodes. In this architecture, mobile stations use a multi-hop path, via other mobile stations acting as intermediaries or relays, to indirectly forward packets from source to destination. Adjusting the transmitted power is extremely important in ad-hoc networks for at least the following reasons. The transmitted power of the radio terminals determines the network topology. The network topology in turn has a considerable impact on the throughput performance of the network (the fraction of packets sent by a source that are successfully received at the receiver). The need for power efficiency must be balanced against the lifetime of each individual node and the overall life of the network. The power control problem can be classified into one of three categories. The first class comprises strategies that find an optimal transmitted power to control the connectivity properties of the network. The second class of approaches could be called power-aware routing; most schemes use a shortest-path algorithm with a power-based metric rather than a hop-count-based metric. The third class of approaches aims at modifying the MAC layer. We use distributed power control algorithms initially proposed for cellular networks. We establish a classification of power control algorithms for wireless ad-hoc networks and evaluate these algorithms in an IEEE 802.11b multi-hop wireless ad-hoc LAN environment. Results show the advantage of power control in maximizing the signal-to-interference ratio and minimizing the transmitted power.
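
One classic distributed power control iteration originally proposed for cellular networks scales each node's power by the ratio of its SIR target to its measured SIR, using only local measurements. The sketch below shows this family of updates in general form; the target, power cap and example values are assumptions for illustration, not the paper's exact algorithms or parameters.

```python
# Distributed SIR-balancing power update (illustrative sketch).

def power_update(p, sir, sir_target, p_max):
    """One update step for every node i, using only its own measured SIR;
    no global knowledge is required."""
    return [min(p_max, p_i * sir_target / sir_i) for p_i, sir_i in zip(p, sir)]

# Example step with assumed values: nodes below the target raise their power,
# nodes above it back off, which both improves SIR and saves energy.
p = [0.10, 0.05, 0.20]                 # transmit powers (W)
sir = [5.0, 12.0, 8.0]                 # measured SIR (linear scale)
print(power_update(p, sir, sir_target=10.0, p_max=0.25))
```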

Keywords Ad hoc network, Mobile radiocommunication, Power control, Network routing, Signal interference, Distributed system, State of the art, Minimization, Topology, Graph connectivity, Link layer, Simulation, Comparative study, Wireless LAN.

Transfer functions attached to linear systems with time varying parameters

Valeriu B. MUNTEANU, Daniela G. TARNICERIU

Technical University “Gh. Asachi” Iasi, Faculty of Electronics and Telecommunications, Bd. Carol I no. 11, 700506, Romania

Abstract To obtain transfer functions attached to linear time-varying (LTV) systems, a new method for obtaining the poles and residues of linear time-invariant (LTI) continuous or discrete systems is proposed. The proposed method is superior to other known methods because it can be extended to systems with time-varying parameters. With the poles and residues so obtained, the transfer functions attached to LTV systems, both continuous and discrete, are easily derived.
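
For reference, the pole-residue (partial-fraction) form of an LTI transfer function that such a method starts from is the standard expression below; the paper's contribution is a way of obtaining the poles and residues that extends to time-varying parameters, and its LTV construction is not reproduced here.

```latex
H(s) \;=\; \sum_{k=1}^{N} \frac{r_k}{s - p_k} \quad \text{(continuous-time)},
\qquad
H(z) \;=\; \sum_{k=1}^{N} \frac{r_k\, z}{z - p_k} \quad \text{(discrete-time)}.
```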

Keywords Transfer function, Linear system, Time-invariant system, Time variable system, Continuous system, Discrete system, Pole, Residue.

Anti-correlation as a criterion to select appropriate counter-measures in an intrusion detection framework

Frédéric CUPPENS*, Fabien AUTREL*, Yacine BOUZIDA*, Joaquin GARCIA*,**, Sylvain GOMBAULT*, Thierry SANS*

* GET-ENST Bretagne, 2, rue de la Châtaigneraie, CS 17607, 35576, Cesson Sévigné Cedex, France
** UAB-DEIC, Edifici Q, 08193 Bellaterra, Spain

Abstract Since current computer infrastructures are increasingly vulnerable to malicious activities, intrusion detection is necessary but unfortunately not sufficient. We need to design effective response techniques to circumvent intrusions when they are detected. Our approach is based on a library that implements different types of counter-measures. The idea is to design a decision support tool that helps the administrator choose, from this library, the appropriate counter-measure when a given intrusion occurs. For this purpose, we formally define the notion of anti-correlation, which is used to determine the counter-measures that are effective in stopping the intrusion. Finally, we present an intrusion detection platform that implements the response mechanisms presented in this paper.
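
A schematic illustration of the anti-correlation idea (not the authors' formal definition): a counter-measure is a candidate response if one of its post-conditions negates a pre-condition that the ongoing attack still needs to hold. The predicate names and encoding below are assumptions made purely for illustration.

```python
# Schematic anti-correlation check between a counter-measure and an attack step.

def anti_correlated(counter_measure_post, attack_pre):
    """Both arguments are sets of literals; 'not x' denotes the negation of x."""
    negated = {f"not {lit}" if not lit.startswith("not ") else lit[4:]
               for lit in counter_measure_post}
    return bool(negated & attack_pre)

attack_pre = {"remote_access(attacker, host)", "service_running(host, rpc)"}
cm_post = {"not service_running(host, rpc)"}     # e.g. shut down the RPC service
print(anti_correlated(cm_post, attack_pre))      # True: effective against this step
```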

Keywords Computer security, Intruder detector, Information protection, Correlation, Modelling, Logic model.

An application of principal component analysis to the detection and visualization of computer network attacks

Khaled LABIB, V. Rao VEMURI

Department of Applied Science, University of California – Davis, USA

Abstract Network traffic data collected for intrusion analysis is typically high-dimensional, making it difficult to both analyze and visualize. Principal Component Analysis is used to reduce the dimensionality of the feature vectors extracted from the data, enabling simpler analysis and visualization of the traffic. Principal Component Analysis is applied to selected network attacks from the DARPA 1998 intrusion detection data sets, namely Denial-of-Service and Network Probe attacks. A method for identifying an attack based on the generated statistics is proposed. Visualization of network activity and possible intrusions is achieved using biplots, which provide a summary of the statistics.
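
The sketch below shows the general technique in its simplest form: standardize the traffic feature vectors and project them onto the first two principal components for a 2-D biplot-style view. It is not the authors' code, and the toy data stands in for the DARPA-derived features.

```python
# Minimal PCA projection of traffic feature vectors (illustrative sketch).
import numpy as np

def pca_2d(X):
    """X: (n_samples, n_features) matrix of traffic features."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize each feature
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    top2 = eigvecs[:, ::-1][:, :2]                  # two largest principal axes
    return Xc @ top2                                # 2-D scores for plotting

# Assumed toy data: rows are connection records, columns are numeric features
X = np.random.default_rng(0).normal(size=(200, 8))
scores = pca_2d(X)
print(scores.shape)                                 # (200, 2)
```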

Keywords Computer security, Intruder detector, Principal component analysis, Statistical analysis, Teletraffic.