Channel-bonding is the striping of data from each message across multiple network interfaces. While channel-bonding multiple Fast Ethernet cards in a cluster can increase throughput dramatically, the low cost and high performance of Gigabit Ethernet has made this approach largely obsolete. I will still present this somewhat dated information to illustrate the beneficial role that reducing overhead can play, as was accomplished here by using M-VIA.
The Linux kernel has the ability to do low-level channel bonding. This works reasonably well at Fast Ethernet speeds, where a doubling of the throughput can be achieved using 2 cards. It is not yet tuned for Gigabit speeds.
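For reference, kernel-level bonding of this era was set up by loading the bonding driver and enslaving the physical interfaces with ifenslave. This is only a sketch; the interface names and IP address below are placeholders, not values from any particular cluster:

```shell
# Load the kernel bonding driver (the bonding.c module discussed here)
modprobe bonding

# Bring up the virtual bonded interface (address is a placeholder)
ifconfig bond0 192.168.1.1 netmask 255.255.255.0 up

# Enslave two Fast Ethernet cards so traffic stripes across both
ifenslave bond0 eth0
ifenslave bond0 eth1
```

The same commands would be run on each node of the cluster, with each node given its own bond0 address.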
MP_Lite can do channel bonding at a higher level by striping data from a single message across multiple sockets set up between each pair of computers. The algorithm also tries to hide latency effects by increasing the amount of data being striped exponentially: it starts with small chunks to get each interface primed, then doubles the chunk size each time to hide the latency. This is a flexible approach that works on any Unix system, but it will always give up some potential performance due to the higher latency involved. A nearly ideal doubling of the throughput has been achieved using 2 Fast Ethernet cards, but adding a 3rd Fast Ethernet card produced little additional benefit.
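The striping schedule described above can be sketched as follows. This is an illustration of the exponential-growth idea only, not MP_Lite's actual implementation; the starting chunk size of 4096 bytes and the function name are assumptions for the example:

```python
def stripe_schedule(msg_len, n_channels, first_chunk=4096):
    """Return a list of (channel, offset, size) send operations that
    round-robin exponentially growing chunks over n_channels sockets.

    The first chunks are small so every interface starts sending
    quickly; each subsequent chunk doubles in size so that the
    per-chunk latency is amortized over more data.
    """
    schedule = []
    offset = 0
    chunk = first_chunk
    channel = 0
    while offset < msg_len:
        size = min(chunk, msg_len - offset)  # last chunk may be short
        schedule.append((channel, offset, size))
        offset += size
        channel = (channel + 1) % n_channels  # round-robin the sockets
        chunk *= 2  # exponential growth hides latency on later chunks
    return schedule
```

For a 100 kB message over 2 channels, this yields chunks of 4096, 8192, 16384, ... bytes alternating between the two sockets, so both interfaces are busy almost immediately while most of the data moves in large, efficient chunks.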
M-VIA is an OS-bypass technique for Ethernet hardware. Using the MP_Lite via.c module running on M-VIA to reduce overhead costs, a nearly ideal tripling of the throughput can be achieved using 3 Fast Ethernet cards, while 4 cards produce a 3.5-times speedup. This illustrates the benefits of channel bonding at a low level, providing encouragement for tuning the Linux kernel bonding.c module.