LEVERAGE
LEarn from Video Extensive Real Atm Gigabit Experiment

LEVERAGE News No 5, February 1999



Technophile

Fritz-Lorenz Born, Solution Architect at ASCOM in Bern, Switzerland, gives a detailed and frank account of the challenges facing the LEVERAGE team when setting up the network for the interconnection of the three sites during the 3rd LEVERAGE trial.

Using ISDN technology for interconnection
The major technical challenge for the third trial of the LEVERAGE system was the integration of the new third site, the Universidad Politécnica de Madrid (UPM), into the LEVERAGE network alongside the two sites which had taken part in the second trial: Cambridge University Language Centre (CULC) and the Institut National de Télécommunications (INT). As all sites already used adequate multimedia equipment on a Local Area Network (LAN), the main emphasis was on finding a new solution for the interconnection of all three sites. During the second trial an ATM-based interconnection between CULC and INT was used successfully. For the third trial an installation of ATM links between the three sites was beyond our scope, as no telecommunications company could offer a suitable solution following the end of the European ATM Pilot JAMES at the time of the network design. This led to the decision to use ISDN (Integrated Services Digital Network) as the technology for the interconnection of the three sites.

Multisite scenarios and trial organisation
As indicated above, the available technology to interconnect these three ATM LANs, the overall cost, as well as the organisation of the trial, influenced the network interconnection possibilities available. The overall cost limited our interconnection scenario choices as only minimal changes to the onsite hardware and software were possible. The organisation of the trial was driven by the pedagogical, end-user part of the project not the interconnecting network itself. At the network level, the final solution includes three ATM LANs interconnected by using ISDN dial-up access routers and the Internet as a fallback path (see Figure 1).

Figure 1: final interconnection scenario for 3rd trial LEVERAGE network

In the previous trial the interconnection between the two ATM LANs was necessarily pairwise. In the new three-site interconnection scheme for the third trial, we could have either:

  • pairwise interconnections: i.e. each site is distinctly connected to one other remote site at one given time; or
  • three-site simultaneous interconnection: i.e. each site is simultaneously interconnected to the two other remote sites.
Due to the budget limitations, the pedagogical approach developed for the third trial was based mainly on bilateral sessions. Moreover, the French-Spanish, Spanish-English and English-French sessions did not need to take place at the same time - the main point being collaborative work between any two sites. This led us to choose the first option.

Interconnection
The selection of the ISDN dial-up access router for INT and CULC had to take into consideration that UPM already owned a Cisco 7000 ISDN dial-up access router, which routes packets coming from ISDN or the Internet directly to the internal ATM LAN. Since compatibility between routers of different brands can be very poor, we decided to choose Cisco ISDN dial-up access routers for INT and CULC as well. The local networks at these two sites were somewhat different from that at UPM, since both ATM LANs were only accessible by passing through a Sun workstation. The Sun workstations at INT and CULC served both as the video server for broadcasting and as a router between the ATM and Ethernet networks. The selected Cisco 3620 ISDN dial-up access router was equipped with one Ethernet and one ISDN module to meet the demands at INT and CULC.

With a Cisco 3620 it was possible to route packets coming from the ISDN port out over the Ethernet port in the direction of the Sun workstation's Ethernet NIC, which for its part forwarded the packet out over the ATM NIC towards a client on the ATM LAN. Any packet from a client in the ATM LAN had to take the same path in the opposite direction.

As the attentive reader can imagine, the path a packet has to take through all these devices is one problem; another major issue was that each packet also had to pass through different layers of the protocol stack. Moreover, within the ATM LAN a packet passes through several devices, since the ATM LAN itself consists of routers and switches. As shown in Figure 2, in the case of CULC a packet arriving at the ISDN port travels through different layers on different devices, e.g. the ATM LightRing nodes and the Virata ATM switch, until it reaches the target client. Real-time, interactive applications such as desktop conferencing are sensitive to accumulated delay, which is known as latency. Telephone networks are engineered to provide less than 400 milliseconds (ms) of round-trip latency, and multimedia networks that support desktop audio and videoconferencing must be engineered to the same budget of less than 400 ms per round trip. From an engineering point of view nothing should be used without having been tested in advance. What would happen if all this routing proved too time-consuming? Would the final performance fulfil the quality of service requirements? To answer these questions I decided to run some tests in our lab.

Figure 2: packet routing through the protocol network

Technical requirements
Before starting testing I needed to know what I was testing. The first task was to define the requirements for the ISDN dial-up routers. The most critical point was the bandwidth required by the desktop audio and videoconferencing software running on all workstations at the three sites. The H.261 [1] real-time video encoding and decoding hardware employed in the LEVERAGE system theoretically allows the generated bitrate to be reduced to a minimum of 64 kbps. Tests made at INT showed that an acceptable video quality was possible with a bitrate of 200 kbps; with this bitrate, lip-synchronisation of the video image to the audio is guaranteed. The G.711 [2] audio coding was fixed at 64 kbps to ensure good voice quality. Therefore, including the IP and ATM overhead, a session between two sites could be run with 320 kbps of available bandwidth. As an additional safety precaution we defined a bitrate of 512 kbps for the ISDN internetwork. Since a standard ISDN BRI (Basic Rate Interface) provides only two B channels for sending and receiving data at 64 kbps each, plus one D channel for signalling to the ISDN switch at 16 kbps, it does not offer the required bandwidth. We therefore chose the more powerful ISDN PRI (Primary Rate Interface), which provides 30 B channels and one D channel (all at 64 kbps) to access the ISDN network. To keep the solution cost-effective we used only a portion of the available channels: eight channels deliver the needed bitrate, since 8 * 64 kbps = 512 kbps. The PRI module on the selected Cisco 3620 ISDN dial-up access routers had to be configured for so-called 'eight-channel bundling'. This configuration gave us the bandwidth and performance we needed for the audio and videoconferencing interconnections between the sites.
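The bandwidth figures above can be checked with a few lines of shell arithmetic (the numbers are those from the text; the 320 kbps session figure includes the IP/ATM overhead allowance mentioned above):

```shell
# Bandwidth budget for one LEVERAGE session (figures from the article).
video_kbps=200                             # H.261 video at acceptable quality
audio_kbps=64                              # G.711 audio
payload_kbps=$((video_kbps + audio_kbps))  # raw media payload
session_kbps=320                           # per-session budget incl. IP/ATM overhead
link_kbps=$((8 * 64))                      # eight bundled 64 kbps B channels
echo "media payload: ${payload_kbps} kbps"
echo "ISDN bundle:   ${link_kbps} kbps"
```

The 512 kbps bundle leaves headroom above the 320 kbps session budget, which is the safety margin the article refers to.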

In addition, a lot of routing definitions had to be made, e.g. the routing tables used at CULC, INT and DIT-UPM. With this basic set of routing information a Cisco 3620 ISDN dial-up access router knows behind which port each part of the interconnected networks is located: for example, packets arriving at the ISDN port with the destination address of a workstation in the ATM network are routed out through the Ethernet port towards the Sun workstation, which does the final routing of the packet to the workstation. Some default routes had to be defined for packets whose destination addresses have no explicit routing table entry. Obviously, at this first, very analytical stage it was not possible to define all the routes which could potentially be necessary.
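For illustration only, routing entries of the kind described might be installed on the Sun workstation roughly as follows. The gateway address is a hypothetical placeholder, and route(1M) syntax varies between Unix versions; this is not the project's actual configuration:

```shell
#!/bin/sh
# Illustrative sketch only - not the project's actual configuration.
# 192.168.1.1 stands in for the Cisco 3620 on the local Ethernet
# segment; it is a hypothetical placeholder address.

# Traffic for the remote INT LAN goes out towards the ISDN router.
route add -net 157.159.174.0 -netmask 255.255.255.0 192.168.1.1

# Default route for destinations with no explicit table entry.
route add default 192.168.1.1
```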

Tests at Ascom Labs
With all this information I went into the lab. After the initial hardware installation, I had to configure the routers and the Sun workstation to simulate the final configuration. I set up the two routers, one for CULC and one for INT, and connected each device to a PRI line. Since we have a fully functional digital switch at Ascom (an Ericsson AXE10 PBX, as used by telecom operators), I could simulate the international ISDN interconnection within the boundaries of our lab (see Figure 3). The routers had to be configured to run with 8 bundled channels. Each router was assigned its own call number, so I could test the dial-up feature. The data throughput over ISDN could be measured by sending large files from one PC over the ISDN link to another PC with FTP (File Transfer Protocol). The fallback solution via the Internet (marked as 'emulated Internet' in Figure 3) was also emulated, with a small router between the networks with the addresses 193.60.95.0 and 157.159.174.0, representing the simulated CULC and INT LANs. At the end of the tests the routers were basically configured and I had learnt a lot, which was one of my personal aims! The tests showed that our solution was able to fulfil the technical requirements for the third trial.
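As a rough cross-check on such FTP measurements, the ideal transfer time over the 512 kbps bundle is easy to estimate. The 10 MB file size here is an arbitrary example, not a figure from the trial:

```shell
# Ideal (overhead-free) FTP transfer time over the 8-channel bundle.
# The 10 MB file size is an arbitrary example, not from the trial.
file_bytes=$((10 * 1024 * 1024))
link_bps=$((8 * 64 * 1000))              # 512,000 bit/s
seconds=$((file_bytes * 8 / link_bps))   # integer floor
echo "ideal transfer time: ${seconds} s (real FTP adds protocol overhead)"
```

A measured throughput well below this ideal would point to a configuration problem rather than a capacity limit.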

Figure 3: test configuration for ISDN access routers and Sun workstation

Integration on the trial sites
The final integration of the ISDN access routers turned up some new issues. The local telecom operators, i.e. British Telecom and France Telecom, use different standards for the signalling on the D channel of their ISDN PRI. This is because different types of ISDN switches are used in the United Kingdom, France and Spain. The specification of the framing type was one of the local adjustments to be made, e.g. the router at INT used no CRC4 framing, in contrast to those at CULC and DIT-UPM. After this basic set-up was done we had to test the communication between the routers themselves, without any additional data traffic from the LEVERAGE multimedia applications. This was done by setting up telnet sessions between the routers. Since the routers at INT, CULC and UPM were accessible from within the campus Intranet, and in the case of CULC even over the public Internet, the basic router tests and any further configuration could be done remotely, e.g. from my PC in Switzerland. With the help of local LEVERAGE project members from all three sites, Jan Wong and Agnès Fauverge at CULC, Tijani Chahed at INT and David Fernández at DIT-UPM, we installed a working ISDN internetworking environment. We still had a lot of problems with connections between the sites, because routing had to be done not only by the Cisco routers but also (in the case of CULC and INT) by the Sun workstation.

Problems and Solutions
We then tested the connection between CULC and INT with actual multimedia video and audio applications. The results were not satisfactory: the video and audio quality was not as good as it should have been. Multimedia applications require reliable and fast transmission; otherwise recipients may get 'chopped' and delayed packets, because of the way lower-level protocols such as Ethernet or ATM hand packets up to IP. The packets are forwarded hop by hop to their destination and are therefore prone to delays or 'bursts' in delivery. This can happen in any routed IP network, because every router in the data path examines each packet of information. For most data applications this 'bursty' delivery is acceptable, as it lends itself to high performance and high availability.

For multimedia applications involving both video and audio, the traffic must be 'streamed', i.e. transmitted continuously, not in bursts. Because the ISDN interconnection between two sites offered a very efficient channel, the problem we had could only have arisen from bad routing information, either in the ISDN router itself or in the routing Sun workstation. As a next step we reduced the routes in the ISDN router to the absolute minimum. Moreover, the routing tables on the Sun workstation, which had been defined statically with all possible routes, were changed to a dynamic routing table set-up. The idea behind this dynamic routing set-up was to add only those IP routes actually necessary for a given connection. As an example, for CULC three modes were defined: a) local sessions; b) remote sessions with INT; and c) remote sessions with UPM. The team from CAP GEMINI TELECOM FRANCE wrote some Unix shell scripts which allowed a specific site to initiate a session, e.g. a script called Start-levmad starts a session from CULC to UPM and another script, Start-levint, starts a session from CULC to INT. In each of these scripts the necessary routes are added to the routing table of the Sun workstation. In the same way as a script adds routes dynamically when starting a session, another script deletes the unused routes after a session is finished. With these scripts the routing from the ATM subnet over the Sun workstation towards the ISDN router worked perfectly.
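A sketch of what such a session script might have looked like. The script name Start-levint and the INT network address come from the text, but the commands, the start/stop structure and the gateway address are assumptions:

```shell
#!/bin/sh
# Sketch in the spirit of the Start-levint script (session from CULC
# to INT). Only the INT network address (157.159.174.0) is from the
# article; the gateway address of the Cisco 3620 is hypothetical.
ISDN_ROUTER=192.168.1.1

case "$1" in
  start)
    # Add only the routes needed for this remote session.
    route add -net 157.159.174.0 "$ISDN_ROUTER"
    ;;
  stop)
    # Delete the session routes again so no stale entries remain.
    route delete -net 157.159.174.0 "$ISDN_ROUTER"
    ;;
  *)
    echo "usage: $0 start|stop" >&2
    exit 1
    ;;
esac
```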

Another hurdle
Another problem arose with the set-up of a Multilink PPP ISDN connection, due to the implementation of the Cisco router software. Although the 8 B channels (time slots) and one D channel were defined, the router did not set up all channels at once. Cisco implements Bandwidth on Demand, which obtains additional bandwidth by placing additional calls to a single destination when the load on an already established channel exceeds a specified weighted value; parallel communication channels are thus established based on traffic load. Even though we had defined a maximum of 8 channels, the router started every session with a single channel and placed additional calls to establish more channels only when the last activated channel reached a given traffic load. This set-up phase was very time-consuming and resulted in bad audio and video quality for the first minute after a connection was established between two sites. As mentioned above, multimedia relies heavily on continuous data streams with sufficient bandwidth. What we needed was a stable ISDN link with all 8 channels established. But how could we achieve this? After browsing through all the available documentation and seeking advice from specialists, we knew that we had to implement a workaround. The solution was simple but effective.
David Fernández, principal lecturer at UPM, proposed using a small utility called spray, which generated so much traffic on the IP path leading to the ISDN port of the router that, after a while, all 8 channels were mounted. Once all channels were mounted, the actual data traffic generated by the LEVERAGE multimedia conference applications could take over the established path. To automate everything, a shell script performed the tasks necessary to establish and terminate a connection. To be absolutely sure that no ISDN traffic was generated after terminating a session, the ISDN link (actually the Primary Rate Interface) was closed on the Cisco router by sending a shutdown interface configuration command from the same script to the command line interpreter of the Cisco router. In the end we needed a couple of scripts to automate all the necessary network functions: enable the ISDN PRI; add all the necessary routes on the Sun workstation; establish a session to a specific site; finish a session to a site; delete all unused routes from the Sun workstation; and shut down the ISDN PRI at the end.
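The channel-priming part of this workaround might be automated along these lines. The host name, packet count and timing below are assumptions, and the router CLI interaction is only indicated in a comment:

```shell
#!/bin/sh
# Sketch of the channel-priming workaround. 'spray' floods the remote
# host with packets so that the Cisco 3620's bandwidth-on-demand logic
# places calls for all 8 B channels before the conference starts.
# The host name, count and sleep duration are assumptions.
REMOTE=leverage-int

# Generate enough traffic to force all eight channels up.
spray -c 10000 "$REMOTE"

# Give the router time to place the additional calls.
sleep 30
echo "ISDN bundle primed; conference traffic can now take over."

# At the end of a session the project's scripts drove the router's
# command line interpreter (e.g. over telnet) to issue a 'shutdown'
# on the PRI, guaranteeing that no further ISDN traffic was generated.
```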

What we learned
I think we all gained considerable experience through the integration of ISDN internetworking technologies for the third LEVERAGE trial. I knew from personal experience that a nice network layout plan and a fully functional network are often worlds apart! The integration of the LAN at DIT-UPM was more complex than we had imagined. In contrast to the ATM LANs at CULC and INT, where a Cisco 3620 router was dedicated to the ISDN interconnection for the LEVERAGE project, the Cisco 7000 access router at DIT-UPM was used not only by the LEVERAGE project but also by other people on site. This reduced the possible configurations during the test phase, as we had to take the other users into account; in the end it seemed that, due to the load on it, the router was running at its absolute limit. The most demanding problem was that of the reduced bandwidth when the ISDN path started up. The workaround we developed solved the problem but not its cause; I was surprised that the Cisco IOS (Internetwork Operating System) software does not offer any command to mount a number of channels simultaneously. We also discovered that the combination of routing on the Sun workstation and on the Cisco 3620 at INT and CULC caused a lot of problems. Since we had a limited budget and therefore could not change the equipment at these sites, we had no other choice. Had we had the luxury of designing a new network from scratch, internetworking equipment like the ISDN access router used at DIT-UPM, connecting ISDN directly to the ATM LAN, would have been the best solution, as it would relieve the Sun workstation of any routing at all.

Possible future prospects
We can easily imagine a future solution in which the Internet substitutes for an interconnection over ISDN - if Quality of Service (QoS) on the Internet can be guaranteed. A decisive step towards this future has already been made. The Internet Engineering Task Force (IETF), a large open international community of network designers, operators, vendors and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet, has developed extensions to the IP architecture and its best-effort service model, so that applications or end users can request a specific quality (or level) of service from an internetwork in addition to the current IP best-effort service. The proposed standard, defined in RFC 2205, is called the Resource ReSerVation Protocol (RSVP) and is the reservation protocol of choice on the Internet. RSVP is an end-to-end protocol compatible with current TCP/IP-based networks; it operates on top of IPv4 or IPv6 and provides the means to support a special QoS for multimedia applications. The RSVP protocol is used by a host to request specific qualities of service from the network for particular application data streams or flows. It is also used by routers to deliver QoS requests to all nodes along the path(s) of the flows and to establish and maintain the provision of the requested service; RSVP requests will generally result in resources being reserved in each node along the data path. At the time when the LEVERAGE internetwork was being designed, RSVP was not yet available, nor did the Internet have an efficient enough backbone network. In the future the Internet itself, supported by a protocol like RSVP, could be the best choice for interconnecting sites across international borders. We will see . . .

[1] ITU-T recommendation H.261, Video codec for audiovisual services at p * 64 kbit/s. The video codec encodes the video from the video source (i.e. a camera) for transmission and decodes the received video code, which is output to a computer display. Recommendation H.261 is an integral part of recommendation H.323, which defines the infrastructure of audiovisual services. Nowadays H.323 is a very popular standard for PC-based conferencing systems such as Microsoft NetMeeting (Trademark) or White Pine CU-SeeMe (Trademark).

[2] CCITT recommendation G.711, Pulse code modulation (PCM) of voice frequencies.


Last updated 1st June 1999
E-mail: leverage@cilt.org.uk