You can imagine where it goes from here…

Upgrading to gigabit ethernet

So the thing to look for is jumbo frame support, both on the switch and the NICs. What exactly is jumbo frame support? Depending on who you ask, it means a network device that supports either an MTU > 1500 (1500 being the historical limit and standard for 10/100 ethernet; this definition is incorrect) or an MTU >= 9000 (correct).
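A quick way to check what MTU an interface is currently running, assuming a Linux box and an interface named eth1 (adjust for your setup):

    # Show the current MTU for eth1 (look for the "MTU:" field in the output)
    /sbin/ifconfig eth1 | grep -i mtu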

Then of course there's the reality distortion field between manufacturer claims and The Real World(tm). This site lists some compliant ("jumbo frame clean") hardware.

What is the advantage of jumbo frame support? There are two major advantages:

  1. A 50% increase in throughput
  2. At the same time, a 50% decrease in CPU utilization

Almost sounds like a free lunch to me. And we all know there’s no such thing. For more detail see this great article.
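Back-of-the-envelope, the win comes from moving the same payload in far fewer frames, which means far less per-packet work (interrupts, header processing) for the CPU. A rough sketch, assuming ~40 bytes of TCP/IP headers per frame:

    # Frames per second needed to carry 1 Gbit/s of TCP payload:
    echo "1000000000 / (1460 * 8)" | bc   # MTU 1500, ~1460 byte payload: ~85600 frames/s
    echo "1000000000 / (8960 * 8)" | bc   # MTU 9000, ~8960 byte payload: ~13900 frames/s

Roughly 6x fewer frames for the hosts to chew through.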

So fortunately, my two MacBook Pros come with Gigabit adapters that are jumbo frame capable. For my Linux server I installed an Intel Pro/1000 GT desktop adapter that is rumoured to be well supported by the e1000 driver (contributed to and supported by actual Intel employees through SourceForge).
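To confirm which driver (and version) the card ended up with, something like this does the trick (eth1 is an assumption for my setup):

    # Which driver and version is bound to eth1?
    /sbin/ethtool -i eth1
    # Which version of the e1000 module is installed?
    /sbin/modinfo e1000 | grep -i version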

As for a switch, I decided on the Netgear GS108, which appears to support jumbo frames too.

Configuring the MBP ethernet interfaces was as easy as expected; see the pic below. As for the Linux server (running FC4), the card auto-negotiated to 100Mbit/s, and forcing it into 1000Mbit (ethtool -s eth1 speed 1000 duplex full autoneg off) caused it to fail to establish a link. I upgraded ethtool 3 to ethtool 5 to no avail. I also upgraded the e1000 kernel driver from what appears to be 7.0.33 to the latest 7.3.20. No luck. As a last resort I replaced the longish cheap ethernet cable with a shorter one that works with the MBP and... voila! All cables involved in this episode are of the Cat 5e persuasion (yes, I need to get some Cat 6).

[picture-1.png: the MBP ethernet interface configured for jumbo frames]
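For completeness, bumping the MTU on the Linux side boils down to something like this (eth1 and the Fedora-style config file are specific to my setup; adjust as needed):

    # Bump the MTU on the fly
    /sbin/ifconfig eth1 mtu 9000
    # To make it stick across reboots, add this line to
    # /etc/sysconfig/network-scripts/ifcfg-eth1:
    #   MTU=9000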

So what do the actual benchmarks say? This is with iperf using a single thread:

Gigabit (1000Mbit) with MTU in brackets:

  • MBP (9000) -> Server (9000): 758 Mbit/s
  • MBP (9000) -> Server (1500): 480 Mbit/s
  • MBP (1500) -> Server (1500): 482 Mbit/s

FastEthernet (100Mbit) with MTU in brackets:

  • MBP (1500) -> Server (1500): 29.6 Mbit/s

It was unstable with MTU 9000 on 100Mbit ethernet. The FastEthernet figures are surprisingly low; I'm not quite sure why yet. As an aside, using 5 client-side threads I could get to 970 Mbit/s.
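For the record, the iperf runs were nothing fancy, roughly the following (the server address is a hypothetical placeholder):

    # On the server:
    iperf -s
    # On the MBP, a single stream for 30 seconds:
    iperf -c 192.168.1.10 -t 30
    # The 5-thread variant:
    iperf -c 192.168.1.10 -t 30 -P 5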

Eyeballing the CPU usage: during the MTU 9000 tests, the MBP averages around 50% on both cores while the server averages around 29%. During the MTU 1500 tests, the MBP averages around 70% on both cores while the server, interestingly enough, averages around 20%. I have not played with Linux driver settings on the server at all; the e1000 driver allows a myriad of settings to be fiddled with (including NAPI).
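If I do get around to fiddling, e1000 module options go in /etc/modprobe.conf on FC4. A hedged example; InterruptThrottleRate is a real e1000 knob, but the value here is a guess rather than a recommendation:

    # /etc/modprobe.conf: cap the e1000 interrupt rate,
    # trading a little latency for lower CPU usage
    options e1000 InterruptThrottleRate=8000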

More real-world figures from general usage: large file scp's went from around 10 Mbyte/s to around 14 Mbyte/s, totally unscientific of course 🙂 Not bad, but not quite the improvement I was looking for either. The process on either side doesn't appear to be disk or CPU bound; I need to look into this more.
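One suspect worth ruling out is the ssh cipher, since scp encrypts everything in flight. A quick and admittedly crude experiment (the file name, user and host are placeholders): rerun the copy with the cheaper arcfour cipher and compare.

    # If this is noticeably faster, the transfer was CPU-bound on crypto
    scp -c arcfour bigfile user@server:/tmp/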

UPDATE: it turns out the VIA Rhine Linux driver doesn't appreciate MTU 9000 packets, spamming "oversized ethernet frame spanned multiple buffers" to the console and being generally unhappy with life. This is unfortunate, since it means the EPIA connected to my home theater & hifi, which boots off the said Linux server using an NFS root fs, refuses to boot unless I switch back to MTU 1500 on the server. I see this problem mentioned in a few places but no solutions. I'll try upgrading the driver at some point 🙂
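One middle ground I might try, instead of dropping the whole server back to MTU 1500, is capping the MTU on just the route to the EPIA (the address below is hypothetical):

    # Keep eth1 at MTU 9000 globally, but never send this one host
    # packets larger than 1500 bytes
    /sbin/ip route add 192.168.1.20 dev eth1 mtu lock 1500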


January 4, 2007 | Tech
