ConnectX-5
ConnectX-5 provides high-performance, flexible solutions with single or dual ports of 100GbE connectivity, 750ns latency, up to 200 million messages per second, and a record-setting 197 Mpps (million packets per second) when running the open-source Data Plane Development Kit (DPDK) over PCIe Gen 3.0. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe switch, and NVMe over Fabrics target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers, driving extremely high packet rates and throughput with reduced CPU resource consumption and thus boosting data center infrastructure efficiency. The MCX515A-CCAT is available for PCIe Gen 3.0 servers and supports 1, 10, 25, 40, 50, and 100 GbE speeds; the ConnectX-5 family is offered in stand-up PCIe card, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Mellanox Multi-Host® and Mellanox Socket Direct® technologies.
For Cloud and Web 2.0 Environments
ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage and cable complexity, allowing for more virtual appliances, virtual machines (VMs) and tenants to co-exist on the same hardware.
Supported vSwitch/vRouter offload functions include:
- Overlay network (e.g., VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation and decapsulation
- Stateless offloads of inner packets and packet-header rewrite, enabling NAT functionality
- Flexible and programmable parser and match-action tables
- SR-IOV technology, providing dedicated adapter resources with guaranteed isolation and protection for virtual machines (VMs) within the server
- Network Function Virtualization (NFV), enabling a VM to be used as a virtual appliance
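As a rough illustration of putting the SR-IOV and stateless-offload capabilities above to work, virtual functions (VFs) are typically created on Linux through the standard PCI sysfs interface, and the tunnel offloads a driver advertises can be inspected with ethtool. The interface name and VF count below are placeholders; exact feature names vary by driver, and the sysfs write requires root and SR-IOV enabled in firmware:

```shell
# Placeholder interface name; substitute your ConnectX-5 port.
IFACE=enp3s0f0

# Create 4 SR-IOV virtual functions via the standard sysfs knob.
echo 4 > /sys/class/net/$IFACE/device/sriov_numvfs

# List the VFs that now appear under the physical function.
ip link show $IFACE

# Show which tunnel (VXLAN/GENEVE) segmentation offloads the
# driver advertises for overlay traffic.
ethtool -k $IFACE | grep -i tnl
```

This is a hardware-dependent configuration fragment, not a portable script; production deployments usually manage VFs through the distribution's networking stack or an orchestrator rather than raw sysfs writes.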
For Storage Environments
NVMe storage devices are gaining popularity by offering very fast storage access. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, thus improving performance and reducing latency.
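For context on what the target offload accelerates: on Linux, a software NVMe-oF target is typically configured through the kernel's `nvmet` configfs interface, as sketched below. The subsystem name, block device, and address are placeholders, and enabling the ConnectX-5 hardware offload itself is a separate, driver-specific step:

```shell
# Load the NVMe target core and its RDMA transport.
modprobe nvmet nvmet-rdma

# Create a subsystem (name is a placeholder) and allow any host.
mkdir /sys/kernel/config/nvmet/subsystems/nvme-test
echo 1 > /sys/kernel/config/nvmet/subsystems/nvme-test/attr_allow_any_host

# Expose a local NVMe namespace (device path is a placeholder).
mkdir /sys/kernel/config/nvmet/subsystems/nvme-test/namespaces/1
echo /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/nvme-test/namespaces/1/device_path
echo 1 > /sys/kernel/config/nvmet/subsystems/nvme-test/namespaces/1/enable

# Create an RDMA (RoCE) port on the adapter's IP address.
mkdir /sys/kernel/config/nvmet/ports/1
echo rdma       > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo ipv4       > /sys/kernel/config/nvmet/ports/1/addr_adrfam
echo 192.0.2.10 > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo 4420       > /sys/kernel/config/nvmet/ports/1/addr_trsvcid

# Bind the subsystem to the port so initiators can discover it.
ln -s /sys/kernel/config/nvmet/subsystems/nvme-test \
      /sys/kernel/config/nvmet/ports/1/subsystems/nvme-test
```

This configuration fragment requires root, an RDMA-capable NIC, and the nvmet modules; with the target offload engaged, the NVMe command processing above is handled by the adapter rather than the host CPU.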
The embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with earlier generations of ConnectX adapters, standard block and file access protocols leverage RDMA over Converged Ethernet (RoCE) for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.
ConnectX-5 enables an innovative storage rack design, Host Chaining, which allows different servers to interconnect without involving the Top of Rack (ToR) switch. Leveraging Host Chaining, ConnectX-5 lowers the data center's total cost of ownership (TCO) by reducing CAPEX (cables, NICs, and switch port expenses). OPEX is also reduced by cutting down on switch port management and overall power usage.
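As a back-of-envelope illustration of the CAPEX argument (all counts and prices below are hypothetical, not vendor figures), Host Chaining lets a rack of servers share far fewer ToR ports than a one-uplink-per-server design:

```shell
# Hypothetical rack of 16 servers; all prices are illustrative.
servers=16
chain_size=4        # servers per Host Chaining chain (illustrative)
port_cost=500       # hypothetical cost per ToR switch port, USD
cable_cost=100      # hypothetical cost per cable to the ToR, USD

# Traditional design: one ToR port and one cable per server.
traditional_capex=$(( servers * (port_cost + cable_cost) ))

# Host Chaining: one ToR port/cable per chain; servers within a
# chain interconnect directly through the adapter's second port.
chains=$(( servers / chain_size ))
chained_capex=$(( chains * (port_cost + cable_cost) ))

savings=$(( traditional_capex - chained_capex ))
echo "traditional: \$${traditional_capex}  chained: \$${chained_capex}  saved: \$${savings}"
```

Actual savings depend on chain length, redundancy requirements, and the bandwidth oversubscription a given workload can tolerate.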
For Telecommunications
For telecom service providers, choosing the right networking hardware is critical to achieving a cloud-native NFV solution that is agile, reliable, fast, and efficient. Telco service providers typically leverage virtualization and cloud technologies to achieve agile service delivery and efficient scalability; these technologies require an advanced network infrastructure that supports higher rates of packet processing. However, the resultant east-west traffic generates numerous interrupts as I/O traverses between kernel and user space, consuming CPU cycles and degrading packet performance. Voice and video applications are particularly sensitive to delays, often requiring less than 100ms of latency.