## A.3.1 Networking Fundamentals

The goal of this module is to provide you with knowledge of networking, as it relates to the performance of distributed computing applications.

The goal is not to teach you details of network technologies and protocols, which are fascinating topics you can learn about in networking courses and textbooks.

Go through the tabs below in sequence…

#### Learning objectives

• Understand the concepts of latency and bandwidth
• Be able to estimate data transfer time through a network link

A network is built from network links (in the case of wired networks, these links are network cables). Each network link has two important characteristics:

1. Latency: the time it takes for one bit of data to travel along the length of the link (measured in seconds)
2. Bandwidth: the maximum number of bits that can be transferred by the link per unit of time (measured in bits/second)

One can think of data flowing through a link as water flowing through a pipe.

A popular analogy is to think of a link as a vertical physical pipe that connects a cistern (on top) to a pool (on the bottom). The latency is the time for one drop of water to travel from one end of the pipe to the other. The bandwidth is how many liters of water can flow out of the end of the pipe per second. In this analogy, the latency corresponds to the length of the pipe, and the bandwidth to its width.

We assume that links are bidirectional, meaning that data can flow in both directions at the same time (which is unlike water in pipes). This model of a network link is not completely accurate as it abstracts away many of the details of actual network technologies and protocols. But it is sufficient for our purpose.

#### Data Transfer Time

Given a network link with latency $\alpha$ and bandwidth $\beta$, the time $T$ to transfer an amount of data $s$ over the link can be estimated as a first approximation as follows:

$T = \alpha + \frac{s}{\beta} .$

For instance, consider a link with latency 100 microseconds and effective bandwidth 120 MB/sec (“120 MegaBytes per second”). Transferring 100 KiB (“100 KibiBytes”) of data takes time:

$T = 100 \times 10^{-6} + \frac{100 \times 2^{10}}{120 \times 10^{6}} \simeq .000953 \; \text{sec}.$

Make sure you know your units and use them in a consistent manner, knowing when units are powers of 10 or powers of 2. In these pedagogic modules, we typically use power-of-10 units (e.g., KB rather than KiB).
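The formula above translates directly into a small helper function. Here is a minimal Python sketch (the function name and unit choices are ours):

```python
def transfer_time(size_bytes, latency_sec, bandwidth_bytes_per_sec):
    """First-approximation transfer time: T = alpha + s / beta."""
    return latency_sec + size_bytes / bandwidth_bytes_per_sec

# The worked example above: 100 KiB over a link with 100 us latency
# and 120 MB/sec effective bandwidth.
t = transfer_time(100 * 2**10, 100e-6, 120e6)
print(round(t, 6))  # 0.000953 sec
```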

In some cases the first term above (the latency) can dominate (i.e., with small data sizes and/or large bandwidths), or can be negligible (i.e., with large data sizes and/or small bandwidths).

Here we have used the term effective bandwidth to denote the maximum possible data transfer rate that a network link can achieve. Due to various network overheads, a network link can sustain a throughput of at most about 97% of its advertised physical bandwidth. Thus, if you purchase a 100 GB/sec physical link, you will not be able to transfer data at 100 GB/sec. From this point forward, when we refer to the bandwidth of a network link, we will always mean its effective bandwidth.

#### Practice Questions

To make sure the above is crystal clear, answer the following practice questions:

[A.3.1.p1.1] How long, in milliseconds, does it take to transfer 250 MB on a network link with latency 500 microseconds and 20 GB/sec bandwidth?

$T = 500 / 1000 + 1000 \times (250 \times 10^6) / (20 \times 10^9) = 13 \; \text{ms}.$

[A.3.1.p1.2] How long, in minutes, does it take to transfer 1 GB on a network link with latency 100 microseconds and 520 MB/sec bandwidth?

$T = 100 / (60 \times 10^6) + (1 / 60 ) \times (1 \times 10^9) / (520 \times 10^6) \simeq 0.032 \; \text{min} .$

[A.3.1.p1.3] You need to transfer 148 MB of data through a network link with latency 1 ms. What bandwidth, in GB/sec, should the link have so that the data transfer takes 2.5 sec?

Let $B$ be the needed bandwidth. We simply need to solve the equation below for $B$:

$$1/1000 + (148 / 10^3) / B = 2.5 ,$$

which gives:

$$B = (148 / 10^3) / (2.5 - 1/1000) \simeq .059 \; \text{GB/sec} .$$
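Solving for the bandwidth, as in this last practice question, is easy to check numerically. Here is a brief Python sketch using the same units (GB and seconds); the function name is our own:

```python
def required_bandwidth(size, latency, target_time):
    """Bandwidth B such that latency + size / B equals target_time
    (all quantities in consistent units, e.g., GB, sec, GB/sec)."""
    return size / (target_time - latency)

# 148 MB (= 0.148 GB), 1 ms latency, desired transfer time of 2.5 sec.
b = required_bandwidth(148 / 1e3, 1 / 1000, 2.5)
print(round(b, 3))  # 0.059 GB/sec
```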

#### Questions

[A.3.1.q1.1] How long, in seconds, does it take to transfer 12 GB of data over a link with latency 10 ms and bandwidth 500 MB/sec?

[A.3.1.q1.2] 3 MB of data was transferred over a link with 18 MB/sec bandwidth in 3.03 sec. What is the link’s latency in seconds?

[A.3.1.q1.3] A data transfer took 14 minutes on a link with latency 100 ms and bandwidth 120 KB/sec. How much data, in MB, was transferred?

[A.3.1.q1.4] Say you are sitting at your computer and need to download a 10 GB movie file. The file is available at two mirror sites, both of them one network link away from your computer. Mirror A is connected to your computer by a link with latency 100 ms and bandwidth 400 MB/sec. Mirror B is connected to your computer by a link with latency 300 ms and bandwidth 700 MB/sec. Which mirror should you use and why?

#### Suggested Activities

[Latencies] What do you think the network latency is between major U.S. cities? Go to this site and find out the current network latency between Houston and Seattle and the current latency between New York and Philadelphia. Which two U.S. cities (among those on that site) are the furthest apart in terms of network latencies?

[Latencies and speed of light] Assuming that data travels at the speed of light, in a straight line, what would be the network latency between New York and Los Angeles?
How much bigger is it in reality (using the same site as in the previous activity)?

[Bandwidth] Download a large file from some Web site and measure the (perceived) bandwidth that you achieved. For instance, you can download the 100 MB file on this site. Note that tools (other than Web browsers) are available to download files from URLs that print out download times and/or data transfer rates (e.g., wget on Linux).

#### Learning objectives

• Understand the concept of network topology
• Be able to compute end-to-end latencies and bandwidths
• Be able to compute end-to-end data transfer times

### Network Topologies

At an abstract level, a network topology is a graph. The edges of the graph are network links with various latencies and bandwidths. The vertices of the graph represent either end-points, i.e., computers connected to the network, or routers, i.e., devices used to connect network links together. We are abstracting away many interesting details of how network technology makes it possible to implement network topologies. For instance, we will not discuss how routers work (see a networking course or textbook for all the interesting details).

Figure 1: An example network topology that interconnects 5 hosts.

The figure above shows an example topology with 5 hosts (the end-point vertices), 4 routers (internal vertices), and 9 network links (the edges). Data communicated on the network flows through links and routers. The route between two hosts is the sequence of network links (and routers) that the data traverses when being communicated from one of the hosts to another.

Figure 2: Two possible routes between host A and host C.

As an example, the figure above shows two possible routes between host A and host C. The network configuration, the details of which are outside our scope, defines which route is taken, for instance the blue route. We will always assume that routes are static, i.e., data flowing from one host to another always traverses the same sequence of links. So in the example above, we assume that either the blue route or the red route is used, but never both.

### End-to-end Network Routes

Consider two hosts connected via a 3-link route, as depicted in the figure below.

Figure 3: An example 3-link route between two hosts.

In this example, all network links have the same bandwidth, 100 MB/sec. When transferring data from host A to host B, this transfer thus experiences a bandwidth of 100 MB/sec but a latency of 50 + 100 + 50 = 200 us, that is, the sum of the link latencies. Remember that in the “water in pipes” analogy in the previous tab, the latency is the length of a pipe. And so it makes sense that the length of a sequence of pipes is the sum of the individual pipe lengths.

For the route shown in Figure 3, transferring 100 MB of data from host A to host B will take time:

\begin{align} T_{100 \text{MB}} & = 200\;\text{us} + \frac{100\;\text{MB}}{100\;\text{MB/sec}} = 1.0002\; \text{sec} \end{align}

Figure 4: Another example 3-link route between two hosts.

In Figure 4, the middle link has a bandwidth of 10 MB/sec (shown in red). In this case, the data flows only as fast as the slowest link. The middle link is called the bottleneck link, and the other two links are only partially used (i.e., they can transfer data at 100 MB/sec, but they only transfer data at 10 MB/sec). This is again consistent with the “water in pipes” analogy, since the water flow is limited by the width of the narrowest pipe. In other words, the bandwidth available in a multi-link route is the minimum of the link bandwidths.

For the route shown in Figure 4, transferring 100 MB of data from host A to host B will take time:

\begin{align} T_{100 \text{MB}} & = 200\;\text{us} + \frac{100\;\text{MB}}{10\;\text{MB/sec}} = 10.0002\; \text{sec} \end{align}

### Putting it all together

Given a route $r$, i.e., a sequence of connected network links, and a data transfer of $size$ bytes, the data transfer time through the route is:

\begin{align} T_{size} & = \sum_{link \in r} latency(link) + \frac{size}{\min\limits_{link \in r} bandwidth(link)} \end{align}

The latency of the route is the sum of the latencies, and the bandwidth of the route is the minimum of the bandwidths.
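This end-to-end formula can be sketched in a few lines of Python (representing a link as a (latency, bandwidth) pair is our own choice):

```python
def route_transfer_time(size, links):
    """Transfer time over a route: sum of link latencies plus size
    divided by the bottleneck (minimum) bandwidth. Each link is a
    (latency_sec, bandwidth) pair; size and bandwidth must use
    consistent units (here, MB and MB/sec)."""
    total_latency = sum(latency for latency, _ in links)
    bottleneck = min(bandwidth for _, bandwidth in links)
    return total_latency + size / bottleneck

# The route of Figure 4: 50 us + 100 us + 50 us of latency,
# with the middle link limited to 10 MB/sec.
links = [(50e-6, 100), (100e-6, 10), (50e-6, 100)]
print(round(route_transfer_time(100, links), 4))  # 10.0002 sec
```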

#### Practice Questions

To make sure you have understood the above, answer the following practice questions, which all pertain to this topology:

Figure 5: Network topology for practice questions.

[A.3.1.p2.1] What is the latency of the route from host E to host D?

The latency is the sum of the link latencies along the route:

\begin{align} 100\;\text{us} + 50\;\text{us} + 120\;\text{us} = 270\;\text{us}. \end{align}

[A.3.1.p2.2] What is the bandwidth of the route from host E to host D?

The bandwidth is the minimum of the link bandwidths along the route:

\begin{align} \min(20\;\text{MB/sec}, 30\;\text{MB/sec}, 100\;\text{MB/sec}) = 20\;\text{MB/sec}. \end{align}

[A.3.1.p2.3] I am a user sitting at host E and have to download a large file. That file is on a Web site at host A but also on a mirror Web site at host D. Which mirror should I select?

I should pick the mirror at host D. This is a large file, so the latency component of the data transfer time is negligible. So it’s all about the bandwidth. The bandwidth of the route from host E to host A is 10 MB/sec, while that of the route from host E to host D is higher at 20 MB/sec.

[A.3.1.p2.4] What is the transfer time for sending 1 MB of data from host E to host D?

The data transfer time is:

\begin{align} T = 100\;\text{us} + 50\;\text{us} + 120\;\text{us} + \frac{1\;\text{MB}}{20\;\text{MB/sec}} \simeq .05\;\text{sec} \end{align}

#### Questions

Answer the following questions, which all pertain to this topology:

Figure 6: Network topology for questions (lat = "latency"; bw = "bandwidth").

[A.3.1.q2.1] What is the latency of the route from host A to host B?

[A.3.1.q2.2] What is the bandwidth of the route from host C to host D?

[A.3.1.q2.3] How long, in seconds, does it take to transfer 100 MB of data from host A to host D?

[A.3.1.q2.4] A lot of users are transferring data from host B to host D, thus competing for the bandwidth on that route. Would it help to purchase an upgrade to the link that connects host B to the network?

[A.3.1.q2.5] I am sitting at host D and want to download a tiny file that is mirrored at all other hosts. From my perspective, does it matter which mirror I pick, and if yes which one should I pick?

[A.3.1.q2.6] I am sitting at host D and want to download a huge file that is mirrored at all other hosts. From my perspective, does it matter which mirror I pick, and if yes which one should I pick?

[A.3.1.q2.7] Of all the possible routes above, which route has the highest bandwidth?

#### Learning Objectives

• Understand the concept of network contention
• Be able to estimate data transfer times in the presence of contention

#### Networks are Shared

Typically, several data transfers occur concurrently (i.e., at the same time) on a network, and some of these transfers may be using the same network links. For instance, two concurrent transfers could be along two routes that share a single link. As a result, a data transfer’s performance can be impacted by other data transfers. When a data transfer goes slower than it would go if alone in the network, it is because of contention (i.e., competition) for the bandwidth of at least one network link.

### A Simple example

Consider the following topology with the two depicted data transfers (symbolized by the red and the green arrows), which both start at exactly the same time and each transfer 100 MB of data.

Figure 1: A simple example in which two data transfers contend for bandwidth.

If the green data transfer were by itself, its bandwidth would be 30 MB/sec. If the red data transfer were by itself, its bandwidth would be 40 MB/sec. But when both transfers happen at the same time, they experience contention on the link into host C.

Contention on this link means that the two transfers split the link’s bandwidth. If this splitting is fair, they both receive half of the link’s bandwidth, 20 MB/sec. (It turns out that bandwidth sharing is a bit complicated in practice as it also depends on latencies, but in this case both transfers have the same end-to-end latencies, which leads to fair sharing; see a networking course/textbook for more details if interested.)

Given the above, both transfers proceed at 20 MB/sec, i.e., half the bandwidth of the link into host C, which is their bottleneck link. Thus, both transfers complete in time:

$T = 200\;\text{us} + \frac{100\; \text{MB}}{20\; \text{MB/sec}} = 5.0002\;\text{sec}$

### A slightly more complex example

Consider now another scenario, identical except that the “red” transfer now transfers only 50 MB:

Figure 2: A slightly more complex example in which one transfer operation transfers less data than the other.

In this scenario there are two phases:

1. In the first phase both transfers proceed with a bandwidth of 20 MB/sec due to contention;
2. In the second phase, after the “red” transfer has completed, the “green” transfer proceeds alone with a bandwidth of 30 MB/sec (because its bottleneck link is now the link out of host B!).

Therefore, the “red” transfer completes in:

$T_{red} = 200\;\text{us} + \frac{50\;\text{MB}}{20\;\text{MB/sec}} = 2.5002\;\text{sec}$

The “green” transfer transfers its first 50 MB of data with a bandwidth of 20 MB/sec and its last 50 MB of data with a bandwidth of 30 MB/sec. Therefore, it completes in time:

$T_{green} = 200\;\text{us} + \frac{50\;\text{MB}}{20\;\text{MB/sec}} + \frac{50\;\text{MB}}{30\;\text{MB/sec}} = 4.1668\;\text{sec}$
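The two-phase reasoning above is easy to replicate with a short computation; here is a Python sketch (variable names are ours):

```python
latency = 200e-6   # end-to-end latency of both routes: 200 us
shared_bw = 20     # MB/sec per transfer while the two transfers contend

# "Red" transfers its 50 MB entirely at the contended rate.
t_red = latency + 50 / shared_bw

# "Green" transfers its first 50 MB at 20 MB/sec (contention),
# then its last 50 MB alone at 30 MB/sec (bottleneck: link out of B).
t_green = latency + 50 / shared_bw + 50 / 30

print(t_red, t_green)  # red is about 2.5 sec, green about 4.17 sec
```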

#### Simulating Contention

So that you can gain hands-on experience, you can use the simulation Web application below. This simulation is for the following scenario in which a number of transfers occur concurrently on the same three-link route:

Figure 3: Simulation scenario.

The simulation app allows you to enter a list of file sizes (in MB). Each file size corresponds to one data transfer on the three-link route. For example, if you enter just the number “100” in the text box, the simulation will be for a single 100 MB data transfer and will produce this output:

----------------------------------------
100 MB transfer completed at time 10.5
----------------------------------------


Note that the transfer’s completion time is a bit higher than what the computations we’ve done so far would give. We would expect the transfer time to be:

$T = 30\;\text{us} + \frac{100\; \text{MB}}{10\; \text{MB/sec}} = 10.00003\;\text{sec}.$

This discrepancy is because the simulator captures some details of real-world networks (e.g., the TCP slow-start behavior that you may have read about in a networking textbook) that are not captured by the above mathematical expression. Such expressions are still useful approximations that we can use to reason about data transfer times. However, we should not be surprised that they are “a bit off”.

Entering “100, 100, 50” in the text box will simulate two 100 MB transfers and one 50 MB transfer, producing this output:

----------------------------------------
100 MB transfer completed at time 26.25
100 MB transfer completed at time 26.25
50 MB transfer completed at time 15.75
----------------------------------------


As expected, the 50 MB transfer completes first, and the two 100 MB transfers complete at the same time.
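These simulated times can be approximated with the model used so far. Below is a Python sketch of progressive fair sharing on a single bottleneck link; it ignores latencies and the TCP effects the simulator captures, so its estimates come out a bit lower than the simulated times:

```python
def contended_finish_times(sizes_mb, bottleneck_bw):
    """Estimate completion times (in seconds) of concurrent transfers
    that fairly share one bottleneck link of the given bandwidth
    (MB/sec), recomputing each transfer's share as others finish."""
    remaining = {i: s for i, s in enumerate(sizes_mb)}
    finish = {}
    now = 0.0
    while remaining:
        share = bottleneck_bw / len(remaining)   # fair share per transfer
        step = min(remaining.values()) / share   # time to next completion
        now += step
        for i in list(remaining):
            remaining[i] -= share * step
            if remaining[i] <= 1e-9:             # this transfer is done
                del remaining[i]
                finish[i] = now
    return [finish[i] for i in range(len(sizes_mb))]

# The "100, 100, 50" scenario above, on the 10 MB/sec bottleneck link.
print(contended_finish_times([100, 100, 50], 10))  # approx [25.0, 25.0, 15.0]
```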

You should use the simulation to explore different scenarios and test your computed data transfer time estimates for various combinations of concurrent transfers.

(Open simulator here)


#### Practice Questions

The following practice questions pertain to this topology:

Figure 4: Topology for practice questions.

[A.3.1.p3.1] A 100 MB transfer from host A to host C, and a 100 MB transfer from host B to host C start at the same time. Do they finish at the same time?

Yes! Both transfers are bottlenecked on the link into host C, sharing its bandwidth, so that both transfers proceed at bandwidth 20 MB/sec.

[A.3.1.p3.2] A 100 MB transfer from host D to host B, and a 100 MB transfer from host A to host C start at time 0. At what time does each of them complete?

The transfer from D to B proceeds at 30 MB/sec as it is bottlenecked on the link into host B. The transfer from A to C proceeds at 40 MB/sec as it is bottlenecked on the link into host C. These two transfers share one network link, but that network link has bandwidth 100 MB/sec, and so there is no contention on that link. Consequently, the transfer times are as follows:

\begin{align} T_{D \rightarrow B} & = 250\;\text{us} + \frac{100\;\text{MB}}{30\;\text{MB/sec}} = 3.3335\;\text{sec}\\ T_{A \rightarrow C} & = 250\;\text{us} + \frac{100\;\text{MB}}{40\;\text{MB/sec}} = 2.5002\;\text{sec} \end{align}

[A.3.1.p3.3] A 100 MB transfer from host B to host C and a 60 MB transfer from host A to host C start at time 0. At what time do they complete?

Both transfers are bottlenecked on the link into host C, sharing its bandwidth so that both transfers proceed at 20 MB/sec. When the 60 MB transfer completes, then the 100 MB transfer still has 40 MB to transfer and proceeds at 30 MB/sec (as it is now bottlenecked on the link from host B). Therefore:

\begin{align} T_{A \rightarrow C} & = 250\;\text{us} + \frac{60\;\text{MB}}{20\;\text{MB/sec}} = 3.0002\;\text{sec}\\ T_{B \rightarrow C} & = 250\;\text{us} + \frac{60\;\text{MB}}{20\;\text{MB/sec}} + \frac{40\;\text{MB}}{30\;\text{MB/sec}} = 4.3335\;\text{sec} \end{align}

#### Questions

Answer the following questions, which pertain to this topology:

Figure 5: Topology for questions (lat = "latency"; bw = "bandwidth").

[A.3.1.q3.1] At time 0, a 10 MB transfer starts from host B to host C, and another 10 MB transfer starts from host A to host D. Do they finish at the same time?

[A.3.1.q3.2] At time 0, a 100 MB transfer starts from host B to host C, and a 200 MB transfer starts from host A to host D. At what time do these transfers finish?


eduWRENCH is funded by the National Science Foundation (NSF) under grant numbers 1923539 and 1923621.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.