I’m often seeing 40GbE backbones being specified on yachts. Is this the way to go? Here’s my take…

The short version is this: there are two scenarios. Either you are building an IT network, or you are building a converged AV/IT network (i.e. implementing HDMI over IP). In the first scenario my opinion is that the answer is a firm NO, which I will try to back up with examples below. In the second scenario I’ll make the case that there is probably a better alternative available to you. Read on for the full explanation.

Before we get into it, let me just clarify the difference between the terms Gbit and GbE that you will see repeatedly below. They may seem to be the same thing, but they are not.

  • Gbit is an abbreviation of Gigabits (per second is implied here), simply a measure of data throughput (bandwidth).
  • GbE is an abbreviation of Gigabit Ethernet, the underlying technology making gigabit speed network links possible, i.e. if you require 10Gbit of bandwidth on a network link, implementing 10GbE technology will make that possible.

First Scenario – IT networks for IT purposes

The heaviest traffic that you are likely to have on a modern network, other than the occasional large file transfer, is streaming video. That large file is only going to move as fast as the physical link from the client device to the network switch allows, which is typically going to be way less than the uplink can handle, so the uplink will not be a bottleneck there. Pushing big files around is not a common activity on a yacht network, and certainly not in a situation where that transfer is a huge priority and every second counts. Users tend to simply accept the time it takes to transfer data from A to B, as long as it happens reliably, while they will be far more critical of jerky or continuously buffering video streams. So putting in a 40GbE backbone for the eventuality that many guests connected to the same network switch will simultaneously be transferring lots of big files while being very critical of the transfer speeds is a real stretch. 10GbE covers your bases really well there. Don’t forget that 10Gbit/s = 1.25 GByte/s; yes, that is 1250 MByte/s.

Let’s instead focus on video streaming. As mentioned, guests are not likely to be very accepting of poor video streaming performance. So what is the heaviest stream you are likely to encounter in common use in the next 5 or 6 years? Most people will tell you 4k HDR video, and I agree with that. Fine, so let’s put this in perspective.

Netflix 4k HDR streams (or equivalent) are 25 to 40 Mbit. Let’s round up to 50 just to be on the safe side and then do some simple math: 10Gbit/50Mbit = 200. So to max out a single 10GbE uplink you would need to be pushing over 200 streams of 4k HDR video at Netflix quality to a single switch. Keep in mind that most distribution switches on yachts only serve a handful of users. Now jump to 40GbE, and you’ll need 800 streams of 4k HDR to saturate the uplink. Imagine having to explain to an owner he can’t serve 800 streams of 4k video to a single switch because you didn’t put in 40GbE uplinks. Oh the shame!! (Do I detect a hint of sarcasm?)

But wait, 4k Blu-ray is much more bandwidth intensive than Netflix, right? Yes indeed… UHD Blu-ray codec bitrates top out at 128Mbit. Let’s round up again, say to 150Mbit, just to be totally safe. You’re unlikely to see higher video bitrates than this for years to come. Now the math again: 10Gbit/150Mbit = 66. Oooh, down into the double digits there… starting to panic now!

I know I’m being sarcastic, but hopefully you get my point. I’m all for specifying networks to handle more than you expect to be necessary, these are superyachts after all, and the networks should be super too! But honestly, if your IT network is designed for IT tasks, a single 10GbE uplink on your distribution switches will serve you very well for many years to come. Anyone telling you any different is – in my opinion – trying to cover their behinds to the max, or trying to make extra money off you (or both). My advice would be to go for a 10GbE backbone with dual uplinks. This not only creates path redundancy, which is good practice in mission critical networks, but it even gets you double the bandwidth as a bonus. Further down you’ll see that this also happens to be an extremely future-proof solution.
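If you want to repeat this back-of-the-envelope arithmetic with your own numbers, here is a minimal sketch in plain Python. The bitrates are simply the rounded figures used above; swap in whatever you expect on your own network.

    # How many video streams of a given bitrate fit in an uplink?
    # Uplink speed in Gbit/s, stream bitrate in Mbit/s (per second implied, as above).
    def max_streams(uplink_gbit, stream_mbit):
        return int(uplink_gbit * 1000 / stream_mbit)

    print(10 * 1000 / 8)           # 10Gbit/s equals 1250 MByte/s
    print(max_streams(10, 50))     # 200 rounded-up Netflix-quality 4k HDR streams on one 10GbE uplink
    print(max_streams(40, 50))     # 800 such streams on a 40GbE uplink
    print(max_streams(10, 150))    # 66 generously rated UHD Blu-ray streams on one 10GbE uplink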

 

Second Scenario – IT networks for AV and IT purposes

If you are going down the route of HDMI over IP (and if you have read my other articles you’ll know that I am quite a fan of such solutions), you may want to take a different approach to network design. This is where it gets a bit more complex, as there are at least two types of HDMI over IP solutions.

If you go the 10Gbit HDMI over IP route, you will need 10GbE links to each endpoint, because you will be using much of that bandwidth for the HDMI signal. In this scenario you will probably want to install a big switch at the heart of your installation with enough 10GbE ports to handle all your displays and sources. This will be a serious switch, more than serious enough to be used as your core IT switch (or at least one of them). Given that your HDMI network connections are dedicated runs, you can now simply treat the rest of the network as IT only, and apply the logic explained above for that scenario. In other words, implement dual 10GbE links between your core and distribution switches. 10Gbit is more than enough bandwidth for that purpose as I have shown, so if one link fails, you can easily survive on the remaining one, and fix the issue without the users ever noticing something went wrong. Neither their network speed nor their HDMI video performance will suffer if one link drops out.

If, on the other hand, you go with a 1Gbit HDMI over IP solution, you are much more likely to want to integrate that onto your ‘regular’ IT network rather than keeping it separate. In that case, it is wise to consider going beyond 10GbE on your uplinks. That is simply because these HDMI over IP solutions consume a significant chunk of their 1Gbit connection, and your uplink can quickly become saturated. Let’s assume that they use the full 1Gbit per HDMI link, and your uplink is 10GbE; then you could only hope to get HDMI signals to 10 displays (on the same switch) simultaneously before the uplink becomes saturated and other network applications suffer badly. With a dual uplink in place you would have 20Gbit at your disposal. This would generally put you in safe territory as long as both links are up, but should one of the links fail, you drop down to 10Gbit and that might not be enough to keep you out of trouble, as video links may start failing, and other traffic could be seriously affected as well. Not a situation you want to put yourself or your client in, so then it really does make sense to move beyond 10GbE.
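As a rough sanity check, and assuming the worst case that every encoder really does fill its 1Gbit access link (real products vary), the same arithmetic applies:

    # Worst case: each HDMI-over-IP stream consumes a full 1Gbit/s.
    def max_hdmi_streams(uplink_gbit, per_stream_gbit=1.0):
        return int(uplink_gbit / per_stream_gbit)

    print(max_hdmi_streams(10))   # 10 displays per switch before a single 10GbE uplink saturates
    print(max_hdmi_streams(20))   # 20 with dual 10GbE uplinks, while both links are up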

Ok great, we have found a scenario where it makes sense to go higher. So then on to 40GbE, right? Eeeh…not necessarily. Why? 40GbE is basically old technology that is slowly on its way out. That doesn’t necessarily make it a bad choice. It is reliable and will still be around for years to come, but if you consider that there is a newer, proven technology available that is also much more efficient in terms of cabling, and that is quickly supplanting 40GbE tech in datacenters worldwide, then it becomes more apparent why 40GbE perhaps should not be your first choice.

 

What are you talking about?

40GbE technology was developed initially as a datacenter technology, and actually uses 10GbE technology at its core. 10GbE is a single lane technology, and each lane requires 2 cores of fiber optic cable. 40GbE bundles 4 lanes of 10GbE into a single connection, therefore requiring 8 fiber cores.

The ‘new kid on the block’ is 25GbE, and it has been around for a few years now. Datacenter builders have ditched 40GbE technology and are moving forward with 25GbE based solutions. In other words, this is not some bleeding edge, unproven tech; this is a technology being implemented at large scale worldwide.

But why? Well, as I just mentioned, 40GbE is a 4 lane technology based on 10GbE, but 25GbE is a single lane technology, meaning that other solutions can be, and are being, built using multiple lanes of 25GbE. So 50 and 100Gbit speeds become available using 2 and 4 lanes of 25GbE respectively. So datacenters can achieve 100Gbit speeds using the same cabling they were using to get 40Gbit. That’s why.

This is why 10GbE is such a good choice now for non-datacenter networks. If you have 10GbE in place, you can easily upgrade to 25GbE whenever you want, without changing your cabling. If you design for dual 10GbE links, you can upgrade to dual 25GbE links in the future, giving you 50Gbits of bandwidth and path redundancy using just 4 cores of fiber. Compare that to moving from 10 to 40Gbit links, where a dual link solution would require 16 cores of fiber.
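To make the cabling comparison concrete, here is a small sketch of the core counts, assuming the usual two fiber cores per optical lane (one for each direction) and the lane counts described above:

    # Fiber cores per link: 2 cores per lane (transmit + receive).
    lanes = {"10GbE": 1, "25GbE": 1, "40GbE": 4, "100GbE (4x25GbE)": 4}

    def cores_needed(tech, links=1):
        return lanes[tech] * 2 * links

    print(cores_needed("25GbE", links=2))   # dual 25GbE: 4 cores for 50Gbit plus path redundancy
    print(cores_needed("40GbE", links=2))   # dual 40GbE: 16 cores for 80Gbit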

Coming back to the example of 1Gbit HDMI over IP… Having a single 25GbE uplink in place, you could now serve 25 HDMI signals simultaneously before the uplink saturates. That’s typically going to be more than enough to keep you safe. If you implement a dual link, you have 50Gbit at your disposal, and you’re back in a scenario where things would have to go horribly wrong before anyone notices.

Now if you also consider that 50GbE single lane solutions are already being worked on, you can see how this plays out. In the not too distant future, you will have the option of implementing 50GbE on a single link, still using the cables you originally pulled for 10GbE! Now it must be said that the spec and quality of your cables will ultimately determine how much throughput you actually get, particularly if you are using multimode fiber. However, if you are reliably getting 10Gbit now, you should still be able to go well beyond that. If you are using single mode fiber, you should be able to upgrade to the latest standards in the future and get the maximum throughput without any problem, so if you want to be as future proof as possible, go with single mode fiber for your backbone cabling.

 

In closing

I would like to reiterate my opening remarks here. When building an IT network for the sake of IT functionality, 10GbE is your best friend right now. Dual 10GbE links will give you 20Gbit of bandwidth as well as path redundancy, making for a very robust and cost effective solution with more than enough bandwidth to last for years. So why pay for more speed that you will not need now, when you can wait several years and pay far less for a substantial upgrade? You also have the built-in security that, when the time comes to upgrade, you will be able to make the step to 25GbE and after that 50GbE without having to redo your cabling.

An added benefit of sticking with 10GbE now is that there is virtually no cost penalty for using single mode fiber rather than multimode fiber, so it’s a no-brainer to do so. The same cannot be said of 40GbE: for that technology, single mode optical modules are a lot more expensive than their multimode counterparts. That currently goes for 25GbE modules as well, which is another reason it makes sense to stick with 10GbE now and wait for the right time to upgrade to 25GbE or higher.

When converging HDMI over IP and IT on a single network, you will arguably need more uplink bandwidth than 10GbE can offer, and so you will need to consider the alternatives. Now there is nothing wrong with 40GbE, but it requires a lot of cable and it is basically a technology that is slowly on its way out. So be aware of that, and consider that there is a newer technology out there, 25GbE, that offers serious performance with a lot less cable and a very interesting upgrade path to boot.

 

Originally published on April 20, 2018 by Edwin Edelenbos on LinkedIn.
