
Paving the way for Terabit Ethernet

Despite advances in Wi-Fi technology and the recent introduction of Wi-Fi 6, Ethernet remains the technology of choice for businesses that need to move large amounts of data quickly, especially in data centers. While the technology behind Ethernet is now over 40 years old, new protocols developed over the years allow ever greater volumes of data to be sent.

To learn more about the latest technologies, protocols and advancements, and the future of Gigabit Ethernet and perhaps, one day soon, Terabit Ethernet, TechToSee Pro spoke with Tim Klein, CEO of storage connectivity company ATTO.

Ethernet was first introduced in 1980. How has the technology evolved since, and where does it fit into today’s data center?

Now over four decades old, Ethernet technology has made major improvements, but a lot of it still looks exactly as it did when it was first introduced. Originally intended for scientists sharing small packets of data at 10 megabits per second (Mbps), Ethernet now carries massive pools of unstructured data across giant data centers, with a roadmap that will reach Terabit Ethernet in just a few years.

The exponential growth of data, driven by new formats such as digital images, created huge demand, and early implementations of shared storage over Ethernet could not meet performance needs or handle congestion with deterministic latency. As a result, protocols such as Fibre Channel were developed specifically for storage. Over the years, innovations such as smart offloads and RDMA have been introduced so that Ethernet can meet the demands of unstructured data and overcome the bottlenecks that arise when large pools of data are transferred. The latest high-speed Ethernet standards such as 10/25/40/50/100GbE are now the backbone of the modern data center.

(Image credit: Pixabay)

Today’s applications demand ever higher performance. What are the challenges of configuring faster protocols? Can software help here?

Tuning is extremely important these days because of the demand for higher performance. Each system, whether a client or a server, must be tailored to the requirements of each specific workflow. The sheer number of file sharing protocols and workflow requirements can be overwhelming. In the past, you might simply have accepted that half of your bandwidth was eaten up by overhead, with hiccups and packet loss slowing you to a crawl.

There are a number of methods today to optimize throughput and tune Ethernet adapters for very intensive workloads. Hardware drivers now come with built-in algorithms that improve efficiency, and TCP offload engines reduce the overhead of the network stack. Large Receive Offload (LRO) and TCP Segmentation Offload (TSO) can also be implemented in hardware and software to ease the transfer of large volumes of unstructured data. Adding buffers, such as a continuous receive queue, speeds up packet delivery, increases fairness and improves performance. Newer technologies such as RDMA allow direct memory access, bypassing the operating system’s network stack and virtually eliminating overhead.
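As a rough illustration of what this tuning looks like in practice, here is a minimal sketch that checks and enables TSO and LRO on a Linux host using the standard ethtool utility. It assumes ethtool is installed; the interface name eth0 is a placeholder.

```python
# Minimal sketch: inspect and enable NIC offloads on Linux with ethtool.
# Assumes a Linux host with ethtool installed; "eth0" is a placeholder name.
import subprocess

IFACE = "eth0"  # hypothetical interface name; replace with your adapter

def show_offloads(iface: str) -> None:
    """Print the offload features the driver currently exposes."""
    out = subprocess.run(["ethtool", "-k", iface],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        # TSO and LRO are the two offloads discussed above
        if "tcp-segmentation-offload" in line or "large-receive-offload" in line:
            print(line.strip())

def enable_offload(iface: str, feature: str) -> None:
    """Ask the driver to turn a feature on, e.g. 'tso' or 'lro'."""
    subprocess.run(["ethtool", "-K", iface, feature, "on"], check=True)

if __name__ == "__main__":
    show_offloads(IFACE)
    enable_offload(IFACE, "tso")  # TCP Segmentation Offload
    enable_offload(IFACE, "lro")  # Large Receive Offload (not all NICs support it)
    show_offloads(IFACE)
```

Whether each offload helps depends on the adapter and the workload, which is why tuning is done per system rather than globally.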

What is driving the adoption of 10/25/50/100GbE interfaces?

The demand for larger, more efficient storage solutions and the enthusiasm for new Ethernet technologies such as RDMA and NVMe-over-Fabrics are driving the adoption of high-speed Ethernet in modern data centers. 10 Gigabit Ethernet (10GbE) is now the dominant interconnect for server-class adapters, and 40GbE was quickly introduced to push the boundaries by combining four lanes of 10GbE traffic. This eventually evolved into the 25/50/100GbE standard, which uses 25 Gigabit lanes. Networks now use a mix of all 10/25/40/50/100GbE speeds, with 100GbE at the core and 50GbE and 25GbE at the edge.
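A quick worked example of that lane math (limited to the lane configurations quoted above, not an exhaustive list of standardized variants):

```python
# Worked example of the lane math described above: each Ethernet speed class
# is built from a number of identical electrical lanes.
LANE_CONFIGS = {
    "10GbE":  (1, 10),   # one 10Gb/s lane
    "40GbE":  (4, 10),   # four 10Gb/s lanes bonded together
    "25GbE":  (1, 25),   # one 25Gb/s lane
    "50GbE":  (2, 25),   # two 25Gb/s lanes
    "100GbE": (4, 25),   # four 25Gb/s lanes
}

for name, (lanes, per_lane) in LANE_CONFIGS.items():
    print(f"{name:>6} = {lanes} lane(s) x {per_lane} Gb/s = {lanes * per_lane} Gb/s")
```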

The ability to mix and match speeds, design paths that deliver as much throughput as each one needs, and balance the data center from core to edge is driving the rapid adoption of the 25/50/100GbE standard. New technologies such as RDMA are opening up opportunities for businesses to use network adapters and network attached storage (NAS) with deterministic latency to handle workloads that in the past required more expensive storage area networks (SANs) built on Fibre Channel adapters and more specialized support. More recently, we are seeing NVMe-over-Fabrics, which uses RDMA transport to share cutting-edge NVMe technology across a storage fabric. 100GbE network cards with RDMA have opened the door to NVMe storage fabrics that achieve the fastest throughput on the market today. These previously unthinkable levels of speed and reliability allow businesses to do more with their data than ever before.

What is RDMA and how does it impact Ethernet technology?

Remote Direct Memory Access (RDMA) allows smart network adapters to access memory directly on another system without going through the traditional TCP path and without any CPU intervention. Traditional transfers relied on the operating system’s network stack (TCP/IP) to communicate, which created massive overhead, limiting performance and what was possible with Ethernet and storage. RDMA now enables lossless transfers that virtually eliminate that overhead, with a massive gain in efficiency from the CPU cycles saved. Performance goes up and latency goes down, allowing organizations to do more with less. RDMA is in fact an extension of DMA (Direct Memory Access) and bypasses the CPU to allow “zero-copy” operations. These technologies have been an integral part of Fibre Channel storage for many years. The deterministic latency that made Fibre Channel the first choice for businesses and heavy workloads is now readily available over Ethernet, making it easier for organizations of all sizes to take advantage of high-end shared storage.
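For a sense of what the RDMA programming model looks like, here is a minimal sketch using the pyverbs bindings that ship with rdma-core. It assumes those bindings and an RDMA-capable adapter are present, and the device name "mlx5_0" is a placeholder. It stops at memory registration, the step that lets the adapter move data directly in and out of application buffers without kernel copies.

```python
# Minimal sketch of the RDMA building blocks, using the pyverbs bindings from
# rdma-core (assumed installed, with an RDMA-capable NIC present).
import pyverbs.device as d
import pyverbs.enums as e
from pyverbs.pd import PD
from pyverbs.mr import MR

devices = d.get_device_list()
if not devices:
    raise SystemExit("No RDMA-capable devices found")
print(f"Found {len(devices)} RDMA-capable device(s)")

ctx = d.Context(name="mlx5_0")  # placeholder device name; list yours with `ibv_devices`
pd = PD(ctx)                    # protection domain grouping the resources below
# Register a 4 KiB buffer so the adapter can DMA into it directly ("zero-copy").
mr = MR(pd, 4096, e.IBV_ACCESS_LOCAL_WRITE | e.IBV_ACCESS_REMOTE_WRITE)
print(f"Registered memory region, rkey={mr.rkey}")
# A complete transfer would also create queue pairs and exchange keys with the
# remote peer; that plumbing is omitted here.
```

The key point is that once a buffer is registered, the NIC can read or write it directly, which is where the zero-copy, CPU-bypassing behavior described above comes from.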

How does NVMe integrate?

NVMe integrates with Ethernet through the NVMe-over-Fabrics protocol. It is quite simply the fastest way to transfer files over Ethernet today. NVMe itself was designed to take full advantage of modern SSDs and flash storage, superseding the SATA/SAS protocols, and it raises the bar even higher by exploiting the ability of non-volatile memory to operate in parallel. Since NVMe is a direct-attached storage technology, the next step towards shared storage is where Ethernet or Fibre Channel comes in: bringing NVMe to a shared storage fabric.
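For illustration, here is a minimal sketch of how a Linux initiator might attach an NVMe-over-Fabrics namespace with the nvme-cli tool. It assumes nvme-cli is installed and an NVMe/RDMA target has already been exported on the network; the address, port and NQN below are placeholders.

```python
# Minimal sketch of attaching an NVMe-over-Fabrics namespace from a Linux
# initiator using nvme-cli (assumed installed). The target address, port and
# NQN are placeholders for an NVMe/RDMA target assumed to already exist.
import subprocess

TARGET_ADDR = "192.168.0.10"                 # hypothetical target IP
TARGET_PORT = "4420"                         # conventional NVMe-oF service ID
TARGET_NQN  = "nqn.2024-01.com.example:ram"  # hypothetical subsystem NQN

def discover(addr: str, port: str) -> str:
    """Ask the target which NVMe subsystems it exports over RDMA."""
    out = subprocess.run(
        ["nvme", "discover", "-t", "rdma", "-a", addr, "-s", port],
        capture_output=True, text=True, check=True)
    return out.stdout

def connect(nqn: str, addr: str, port: str) -> None:
    """Attach the remote namespace; it then appears as a local /dev/nvmeXnY."""
    subprocess.run(
        ["nvme", "connect", "-t", "rdma", "-n", nqn, "-a", addr, "-s", port],
        check=True)

if __name__ == "__main__":
    print(discover(TARGET_ADDR, TARGET_PORT))
    connect(TARGET_NQN, TARGET_ADDR, TARGET_PORT)
```

Once connected, the remote NVMe namespace behaves like a local block device, which is what makes the fabric model so attractive for shared storage.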

(Image credit: Gorodenkoff / Shutterstock)

What are the Ethernet requirements for storage technologies like RAM disk and smart storage?

Smart NIC is a relatively new term for network controllers that can handle operations which in the past were the burden of the CPU. Offloading work from the system processor improves overall efficiency. Taking this concept further, network card manufacturers offer FPGA (Field Programmable Gate Array) technology that enables application-specific functionality, including data offloads and acceleration, to be developed and programmed onto the FPGA. Residing at the hardware layer makes these network cards blazingly fast, with huge potential for more innovations to be added at this layer in the future.

RAM disk smart storage takes this a step further by integrating data acceleration hardware into storage devices that use volatile RAM, which is faster than the non-volatile memory used in NVMe devices today. The result is blazingly fast storage with the ability to streamline heavy workloads.

The combination of super-fast RAM storage, an integrated NIC controller and an FPGA with smart offloads and data acceleration has huge potential for super high-speed storage. RAM disk and smart storage would not exist without the latest innovations in Ethernet RDMA and NVMe-over-Fabrics.

What does the future hold for Ethernet technology?

200 Gigabit Ethernet is already starting to spread from HPC solutions into data centers. The standard doubles the lane speed to 50Gb each, and there is a hefty roadmap that will reach 1.5 terabits in just a few years. PCI Express 4.0 and 5.0 will play an important role in enabling these higher speeds, and businesses will continue to look for ways to bring power to the edge, accelerate transfer speeds, and manage CPU and GPU operations with network controllers and FPGAs.
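A back-of-the-envelope sketch shows why the PCIe generation matters: it compares the raw bandwidth of an x16 slot per generation against a 200GbE line rate, ignoring protocol overhead beyond the line encoding, so real usable throughput is somewhat lower.

```python
# Back-of-the-envelope sketch: raw x16 slot bandwidth per PCIe generation
# versus a 200GbE line rate (ignores packet/protocol overhead, so real
# usable throughput is somewhat lower).
ENCODING = 128 / 130          # 128b/130b line encoding used since PCIe 3.0
LANES = 16                    # a typical x16 NIC slot

PCIE_GT_PER_LANE = {"PCIe 3.0": 8, "PCIe 4.0": 16, "PCIe 5.0": 32}  # GT/s per lane

for gen, gt in PCIE_GT_PER_LANE.items():
    slot_gbps = gt * ENCODING * LANES
    verdict = "can feed" if slot_gbps >= 200 else "cannot feed"
    print(f"{gen}: ~{slot_gbps:.0f} Gb/s per x16 slot -> {verdict} a 200GbE port")
```

Roughly 252 Gb/s from a PCIe 4.0 x16 slot comfortably feeds a 200GbE port, while a PCIe 3.0 slot cannot, which is why the newer host interfaces go hand in hand with faster Ethernet.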
