Figure 4 shows the signature graph for Ethernet, ATM, and FDDI networks using the TCP/IP communication protocol. All data were collected by executing NetPIPE on two identical SGI Indy workstations. In each case, the network consisted of a dedicated, noise-free link between the two machines. ATM communication was performed via FORE ATM interface cards using the FORE IP communication interface. Communication via the FDDI network yields the highest attainable throughput, followed by ATM and Ethernet. Notice, however, that Ethernet has a lower latency, implying that Ethernet can outperform ATM for small messages: Ethernet latency is on the order of 0.7 ms, followed by ATM at nearly 0.9 ms.
Figure 4: Signature Graphs for FDDI, ATM, and Ethernet
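The signature graph is built from ping-pong measurements: a message of a given size is bounced between the two machines, and half the round-trip time gives the one-way transfer time. The sketch below imitates this with plain Python sockets between two threads on one machine; it is an illustration of the measurement technique, not the actual NetPIPE code, and the sizes and trial counts are arbitrary choices.

```python
import socket
import threading
import time

def echo_peer(sock, size, trials):
    # The "return" half of the ping-pong: read a full message, send it back.
    for _ in range(trials):
        buf = b""
        while len(buf) < size:
            buf += sock.recv(size - len(buf))
        sock.sendall(buf)

def ping_pong(size, trials=10):
    # One signature-graph data point: send `size` bytes, wait for the echo,
    # and report half the average round-trip time as the transfer time.
    a, b = socket.socketpair()
    t = threading.Thread(target=echo_peer, args=(b, size, trials))
    t.start()
    msg = b"x" * size
    start = time.perf_counter()
    for _ in range(trials):
        a.sendall(msg)
        buf = b""
        while len(buf) < size:
            buf += a.recv(size - len(buf))
    elapsed = time.perf_counter() - start
    t.join()
    a.close()
    b.close()
    transfer_time = elapsed / (2 * trials)       # one-way time in seconds
    throughput = size * 8 / transfer_time / 1e6  # Mbit/s
    return transfer_time, throughput

if __name__ == "__main__":
    for size in (64, 1024, 65536):
        tt, bw = ping_pong(size)
        print(f"{size:6d} B: {tt * 1e6:8.1f} us  {bw:8.1f} Mbit/s")
```

Plotting throughput against transfer time for a sweep of message sizes yields a signature graph; plotting transfer time against message size yields the saturation graph discussed below.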
The reader may be alarmed to see that the signature graph is not a single-valued function of time. This is not an anomaly, but an indication that a larger message can indeed take less time to transfer because of system buffer sizes and the interaction with the operating system. The phenomenon is repeatable. One suspects that it indicates the need for improvement in system and messaging software, since a superset of a task should always take longer than the task by itself.
To examine this further, Figure 5 presents the saturation graph. It verifies the latency ordering and also shows that for messages up to approximately 8 Kbits, Ethernet has the shortest transmission time. It should be emphasized that all of the experiments were executed on dedicated network connections.
Figure 5: Saturation Graphs for FDDI, ATM, and Ethernet
The results presented in Figure 5 were significant enough to warrant verification by an application that uses small messages. For such an application, one would expect better performance using a dedicated Ethernet connection than using a dedicated ATM connection. The ideal application for this purpose is the HINT benchmark. The communication in HINT is a global sum collapse of two double-precision floating-point numbers. Using the same pair of SGI Indy workstations, HINT was run over the Ethernet link and over the ATM link. In each case, the links were dedicated and the configuration was identical to that used for the NetPIPE tests. The HINT QUIPS graphs for each configuration are shown in Figure 6. The Ethernet configuration is able to come up to speed sooner than the ATM configuration, and as a result, the Ethernet configuration produces better HINT performance.
Figure 6: HINT Graphs for Ethernet and ATM based communication
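HINT's communication step, a global sum collapse of two doubles, is small enough to sketch in a few lines. The fragment below (plain Python sockets between two threads on one machine, not HINT itself) shows the pattern: each side sends its pair of doubles, receives the peer's pair, and adds pairwise, so both sides end up holding the same global sums. The message is only 16 bytes, which is why latency rather than bandwidth dominates.

```python
import socket
import struct
import threading

def recv_exact(sock, n):
    # Read exactly n bytes from the socket.
    buf = b""
    while len(buf) < n:
        buf += sock.recv(n - len(buf))
    return buf

def sum_collapse(sock, local):
    # Exchange two doubles with the peer and add pairwise: a global sum
    # collapse over two processes, as in HINT's communication step.
    sock.sendall(struct.pack("!2d", *local))
    remote = struct.unpack("!2d", recv_exact(sock, 16))
    return (local[0] + remote[0], local[1] + remote[1])

a, b = socket.socketpair()
result = {}
t = threading.Thread(
    target=lambda: result.setdefault("b", sum_collapse(b, (3.0, 4.0))))
t.start()
result["a"] = sum_collapse(a, (1.0, 2.0))
t.join()
print(result["a"], result["b"])  # both sides hold (4.0, 6.0)
```

Because each collapse moves only 16 bytes, the network with the lower latency wins regardless of its peak throughput, which is exactly the Ethernet-versus-ATM result shown in Figure 6.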
Figure 7 depicts the differences in network throughput between block and streaming transfer modes, presenting signature graphs for Ethernet, FDDI, and ATM in both modes. NetPIPE simulates streaming data transfer by executing a series of sends in rapid succession, without acknowledgment at the application level. In block transfer, each block is sent to the receiver, which returns the message. In streaming mode, FDDI provides the largest throughput for all block sizes. We surmise that this is due to the large frames used by FDDI. This is important information for application programmers looking for a network solution: if the application involves streaming data across the network, FDDI presents the best solution for transferring data via a dedicated link.
Figure 7: Block Transfer vs. Streaming Transfer
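The two transfer modes differ only in the sender's loop: streaming fires off sends back to back with no application-level reply, while block mode waits for the full echo before sending again. A minimal sketch of both loops, again using Python sockets between two threads rather than NetPIPE itself (the 4 KB block size and 100-message count are arbitrary):

```python
import socket
import threading
import time

SIZE, COUNT = 4096, 100

def drain(sock, total):
    # Receiver for streaming mode: consume bytes, send nothing back.
    got = 0
    while got < total:
        got += len(sock.recv(65536))

def echo(sock, size, count):
    # Receiver for block mode: bounce every message back to the sender.
    for _ in range(count):
        buf = b""
        while len(buf) < size:
            buf += sock.recv(size - len(buf))
        sock.sendall(buf)

def stream_mode():
    a, b = socket.socketpair()
    t = threading.Thread(target=drain, args=(b, SIZE * COUNT))
    t.start()
    msg = b"x" * SIZE
    start = time.perf_counter()
    for _ in range(COUNT):
        a.sendall(msg)  # fire and forget: no application-level ack
    t.join()
    return time.perf_counter() - start

def block_mode():
    a, b = socket.socketpair()
    t = threading.Thread(target=echo, args=(b, SIZE, COUNT))
    t.start()
    msg = b"x" * SIZE
    start = time.perf_counter()
    for _ in range(COUNT):
        a.sendall(msg)  # each block waits for its echo before the next send
        buf = b""
        while len(buf) < SIZE:
            buf += a.recv(SIZE - len(buf))
    t.join()
    return time.perf_counter() - start

print(f"stream: {stream_mode():.4f}s  block: {block_mode():.4f}s")
```

Streaming keeps the pipe full and so measures raw throughput, whereas block mode pays one round-trip latency per message; this is why the two modes rank the networks differently in Figure 7.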
Figure 8: Protocol Layer Overhead