MCX516A-CCAT Dual-Port 100GbE Ethernet Adapter by NVIDIA

Product Details:

Brand Name: Mellanox
Model Number: MCX516A-CCAT
Documentation: connectx-5-en-card.pdf

Payment and Shipping Terms:

Minimum Order Quantity: 1 piece
Price: Negotiable
Packaging Details: Outer carton
Delivery Time: Depends on stock availability
Payment Terms: T/T
Supply Ability: Supplied per project/batch

Detailed Information

Condition: In stock
Application: Server
Interface Type: Network
Ports: Dual
Max Speed: 100GbE
Connector Type: QSFP28
Type: Wired
Status: New, original
Warranty: 1 year
Model: MCX516A-CCAT
Name: Mellanox network card CX516A ConnectX-5 100GbE MCX516A-CCAT dual-port QSFP28 PCI-E adapter
Keywords: Mellanox network card

Product Description

NVIDIA ConnectX-5 EN MCX516A-CCAT

Dual-port QSFP28 100GbE Ethernet adapter card delivering up to 100Gb/s per port, 750ns latency, 200 million messages per second, and advanced application offloads. Ideal for Web 2.0, cloud, storage, AI, and telecommunications platforms that demand the highest bandwidth and lowest latency.

2x 100GbE ports · PCIe 3.0 x16 · RoCE support · SR-IOV up to 512 VFs · VXLAN/NVGRE/GENEVE offloads · NVMe-oF target offloads · ASAP2 vSwitch offload
Product Overview

The NVIDIA ConnectX-5 EN MCX516A-CCAT is a dual-port 100GbE Ethernet adapter card designed for the most demanding data center workloads. Built on the ConnectX-5 architecture, this adapter supports multiple speeds including 100GbE, 50GbE, 40GbE, 25GbE, 10GbE, and 1GbE, providing seamless migration paths and infrastructure flexibility. With 750ns latency, up to 200 million messages per second (Mpps), and a PCIe 3.0 x16 host interface, the MCX516A-CCAT delivers industry-leading throughput and CPU efficiency.

Key capabilities include RoCE (RDMA over Converged Ethernet), SR-IOV virtualization with up to 512 Virtual Functions, ASAP2 accelerated switching and packet processing for vSwitch/vRouter offloads, NVMe over Fabric target offloads, T10-DIF Signature Handover, and comprehensive overlay network offloads (VXLAN, NVGRE, GENEVE). The adapter ships in a low-profile PCIe form factor with enhanced host management features.
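As a quick post-installation sanity check, the sketch below (a minimal Python example, assuming a Linux host with the mlx5/OFED driver stack loaded) lists the RDMA devices the kernel exposes for RoCE. The /sys/class/infiniband path is the kernel's standard RDMA sysfs tree; each ConnectX-5 port typically appears there as a device such as mlx5_0.

```python
# Minimal sketch: list kernel-exposed RDMA (RoCE) devices on a Linux host.
# Assumes the mlx5_ib/OFED stack is loaded; /sys/class/infiniband is the
# standard kernel RDMA sysfs tree.
from pathlib import Path

rdma_root = Path("/sys/class/infiniband")
if not rdma_root.is_dir():
    print("No RDMA devices exposed - check that the mlx5_ib/OFED stack is loaded.")
else:
    for dev in sorted(rdma_root.iterdir()):
        fw_ver = (dev / "fw_ver").read_text().strip()  # firmware version attribute
        print(f"{dev.name}: firmware {fw_ver}")
```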

Key Features
Dual-Port 100GbE
Two QSFP28 ports supporting 100/50/40/25/10/1GbE speeds. Backward compatible with lower-speed infrastructure.
Ultra-Low Latency and High Message Rate
750ns latency, up to 200 Mpps message rate, and 197 Mpps with DPDK for kernel bypass applications.
RDMA over Converged Ethernet (RoCE)
Low-latency RDMA services over Layer 2 and Layer 3 networks for storage and compute workloads.
ASAP2 Accelerated Switching
Hardware offload of Open vSwitch (OvS) and vRouter data plane, preserving control plane flexibility while achieving wire-speed performance.
NVMe over Fabric Offloads
Hardware-accelerated NVMe-oF target offloads enabling efficient NVMe storage access with near-zero CPU intervention.
SR-IOV Virtualization
Up to 512 Virtual Functions (VFs) and 8 Physical Functions per port, with guaranteed QoS and VM isolation (a configuration sketch follows this feature list).
Overlay Network Offloads
Hardware encapsulation and de-encapsulation for VXLAN, NVGRE, GENEVE, MPLS, and NSH tunnels.
Flexible Programmable Pipeline
Flexible parser and match-action tables enabling hardware offloads for current and future protocols.
Host Management and Remote Boot
NC-SI over MCTP, BMC interface, PLDM for monitoring and firmware update, PXE and UEFI remote boot.
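
The SR-IOV feature above can be exercised through the standard Linux sysfs interface. The following minimal sketch assumes SR-IOV has already been enabled in the adapter firmware (for example with NVIDIA's mlxconfig tool); the interface name enp3s0f0 is a placeholder for your actual ConnectX-5 port.

```python
# Minimal sketch: create SR-IOV Virtual Functions via the standard Linux
# sysfs interface. "enp3s0f0" is a placeholder interface name; SR-IOV must
# already be enabled in the adapter firmware. Requires root privileges.
from pathlib import Path

IFACE = "enp3s0f0"   # placeholder - substitute your ConnectX-5 port name
NUM_VFS = 8          # any value up to the card's 512-VF limit

dev = Path(f"/sys/class/net/{IFACE}/device")
print("VFs supported by this function:", (dev / "sriov_totalvfs").read_text().strip())

# Writing the desired count instantiates the VFs.
(dev / "sriov_numvfs").write_text(str(NUM_VFS))
print(f"Created {NUM_VFS} VFs; verify with `lspci | grep 'Virtual Function'`.")
```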
Technology: ConnectX-5 Architecture

The ConnectX-5 EN ASIC delivers record-setting performance with advanced acceleration engines. Key technological innovations include:

  • PeerDirect (GPUDirect) – Eliminates unnecessary PCIe data copies between GPU and CPU, accelerating HPC, AI, and machine learning workloads.
  • Adaptive Routing on Reliable Transport – Enables out-of-order RDMA and adaptive routing for optimized fabric utilization.
  • Tag Matching and Rendezvous Offloads – Hardware offload of MPI tag matching and rendezvous protocol, reducing CPU overhead in HPC clusters.
  • Burst Buffer Offloads – Hardware acceleration for background checkpointing in large-scale simulations and ML training.
  • Embedded PCIe Switch – Supports up to 8 bifurcations, enabling host chaining and elimination of backend switches in storage racks.
  • On-Demand Paging (ODP) – Registration-free RDMA memory access, simplifying application development.
  • Extended Reliable Connected (XRC) and Dynamically Connected Transport (DCT) – Scales RDMA to tens of thousands of nodes.
  • T10-DIF Signature Handover – Hardware-based data integrity protection for storage workloads at wire speed.
Typical Deployments
Cloud and Web 2.0 Data Centers
High-density virtualization, overlay networks, and vSwitch offloads reduce CPU utilization while maintaining wire-speed performance.
High-Performance Storage
NVMe-oF target offloads, T10-DIF, and RoCE enable high-performance block storage with sub-microsecond latency.
AI and Machine Learning Clusters
PeerDirect GPUDirect, adaptive routing, and burst buffer offloads accelerate distributed training workloads.
Telecommunications and NFV
ASAP2 vSwitch offloads, service chaining, and hairpin hardware capability enable efficient Network Function Virtualization.
High-Frequency Trading (HFT)
Ultra-low latency (750ns) and high message rate (200 Mpps) meet the most demanding financial applications.
Host Chaining Storage Racks
Embedded PCIe switch enables servers to interconnect without top-of-rack switches, reducing TCO.
Compatibility and Ecosystem

The MCX516A-CCAT is compatible with a wide range of operating systems: RHEL/CentOS, Ubuntu, Windows Server, FreeBSD, VMware ESXi, and Citrix XenServer. It supports standard 100GbE QSFP28 optics, passive DAC cables, active optical cables (AOC), and breakout cables (100GbE to 4x25GbE or 2x50GbE). The adapter integrates seamlessly with NVIDIA Spectrum switches and any standards-based 25GbE/40GbE/50GbE/100GbE infrastructure. Software support includes OFED (OpenFabrics Enterprise Distribution), DPDK, and WinOF-2 for Windows.
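
To verify which driver and firmware a given host actually runs, a small sketch using the standard lspci and ethtool utilities may help; the interface name enp3s0f0 is again a placeholder.

```python
# Minimal sketch: confirm the adapter is enumerated on the PCI bus and
# report the bound driver and firmware version for one port.
import subprocess

# ConnectX-5 enumerates as the MT27800 family on the PCI bus.
subprocess.run("lspci | grep -i mellanox", shell=True, check=False)

# Driver name (mlx5_core), driver version, and firmware version for one port.
subprocess.run(["ethtool", "-i", "enp3s0f0"], check=False)
```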

Technical Specifications
Model: MCX516A-CCAT
Form Factor: Low-profile PCIe add-in card; ships with the tall bracket mounted and a short bracket included
Ports: 2x QSFP28 (100/50/40/25/10/1GbE)
Supported Speeds: 100GbE, 50GbE, 40GbE, 25GbE, 10GbE, 1GbE
Host Interface: PCIe 3.0 x16 (compatible with x8, x4, x2, x1; auto-negotiated)
Message Rate: Up to 200 million messages per second (Mpps); 197 Mpps with DPDK
Latency: 750ns (typical cut-through)
Virtualization: SR-IOV with up to 512 Virtual Functions and 8 Physical Functions per port
RoCE Support: Yes (RDMA over Converged Ethernet)
Overlay Offloads: VXLAN, NVGRE, GENEVE, MPLS, NSH hardware encapsulation/de-encapsulation
vSwitch/vRouter Offloads: ASAP2 Open vSwitch (OvS) and vRouter data plane offload with flexible match-action tables
Storage Offloads: NVMe-oF target offloads, T10-DIF Signature Handover, SRP, iSER, NFS RDMA, SMB Direct
Enhanced Features: Tag matching, rendezvous offload, adaptive routing, burst buffer offload, embedded PCIe switch, ODP, XRC, DCT
CPU Offloads: TCP/UDP stateless offloads, LSO/LRO, checksum offload, RSS/TSS, HDS, VLAN/MPLS tag insertion/stripping
Management Interfaces: NC-SI over MCTP (SMBus/PCIe), BMC interface, PLDM (monitoring and firmware update), SDN eSwitch management, SPI, JTAG
Remote Boot: PXE, UEFI, iSCSI remote boot
Power Consumption: Not publicly specified; please confirm before ordering
Operating Temperature: 0°C to 55°C (typical)
Standards: IEEE 802.3bj/3bm (100GbE), 802.3by (25/50GbE), 802.3ba (40GbE), 802.3ae (10GbE), 802.1Qbb PFC, 802.1Qaz ETS, 802.1Qau QCN, IEEE 1588v2, PCIe Gen 3.0
RoHS: Compliant
Note: Specifications are based on NVIDIA public documentation. Please confirm exact details with sales for your order.
Selection Guide: ConnectX-5 EN Portfolio
OPN (Ordering Part Number) | Ports | Max Speed | Interface | Host Interface | Key Feature
MCX516A-CCAT | 2 | 100GbE | QSFP28 | PCIe 3.0 x16 | Dual-port 100GbE, enhanced host management
MCX516A-CDAT | 2 | 100GbE | QSFP28 | PCIe 4.0 x16 | ConnectX-5 Ex enhanced performance, PCIe Gen 4.0
MCX512A-ACAT | 2 | 25GbE | SFP28 | PCIe 3.0 x8 | Dual-port 25GbE, UEFI enabled
MCX516A-GCAT | 2 | 50GbE | QSFP28 | PCIe 3.0 x16 | Dual-port 50GbE, enhanced host management
MCX516B-CCAT | 2 | 100GbE | QSFP28 | PCIe 3.0 x16 | Dual-port 100GbE variant
Why Choose MCX516A-CCAT from Starsurge
100GbE Ready
Future-proof your data center with 100GbE connectivity while maintaining backward compatibility to 50/40/25/10/1GbE.
Unmatched Message Rate
200 Mpps enables the highest packet processing density for telco NFV, vSwitch, and high-frequency trading.
Comprehensive Offloads
NVMe-oF, T10-DIF, ASAP2, and RoCE offloads dramatically reduce CPU utilization and improve application performance.
Global Logistics and Support
Hong Kong Starsurge offers competitive pricing, warranty support, and fast worldwide delivery.
Service and Support

Hong Kong Starsurge provides end-to-end support for NVIDIA/Mellanox adapters, including compatibility verification, firmware updates, and technical troubleshooting. Standard warranty aligns with NVIDIA's limited hardware warranty (1 year return-and-repair). Extended support options are available upon request. Our team can assist with driver installation, performance tuning, RoCE configuration, and integration into existing server, storage, and network environments.

Frequently Asked Questions
Q: What is the difference between MCX516A-CCAT and MCX516A-CDAT?
MCX516A-CCAT uses PCIe 3.0 x16 interface, while MCX516A-CDAT (ConnectX-5 Ex) uses PCIe 4.0 x16 for higher theoretical bandwidth. Both offer dual-port 100GbE.
Q: Does this card support RDMA over Converged Ethernet?
Yes. The ConnectX-5 EN fully supports RoCE (RDMA over Converged Ethernet) for low-latency memory access across the network, including RoCE over overlay networks.
Q: Can I use this adapter with a PCIe 4.0 slot?
The card is PCIe 3.0 x16 and is compatible with PCIe 4.0 slots, where the link simply trains at PCIe 3.0 speeds. For full PCIe 4.0 performance, consider the MCX516A-CDAT.
Q: What cables are compatible with 100GbE operation?
QSFP28 passive DAC cables (up to 5m), QSFP28 active optical cables (AOC), 100GBASE-SR4 (MPO, up to 100m), 100GBASE-LR4 (LC, up to 10km), and breakout cables (100GbE to 4x25GbE or 2x50GbE) are supported.
Q: What is ASAP2 and how does it benefit my deployment?
ASAP2 (Accelerated Switching and Packet Processing) offloads the Open vSwitch and vRouter data plane to hardware, achieving wire-speed performance while cutting the CPU cycles spent on packet processing by as much as 10x in virtualized environments (see the configuration sketch after this FAQ).
Q: Does this card support NVMe over Fabric?
Yes, the ConnectX-5 EN includes hardware offloads for NVMe-oF target, enabling efficient remote NVMe storage access with minimal CPU intervention.
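
As a companion to the ASAP2 answer above, the sketch below shows the two settings commonly flipped before OvS flows can be offloaded to the ConnectX-5 eSwitch. It is illustrative only: the PCI address is a placeholder, and NVIDIA's ASAP2 documentation covers the full procedure (VF representors, TC flower offload, and so on).

```python
# Illustrative sketch: the two switches normally flipped for ASAP2 OvS
# offload. Assumes Open vSwitch and iproute2's devlink are installed and
# that the adapter sits at the placeholder PCI address below.
import subprocess

PCI_ADDR = "0000:03:00.0"  # placeholder - locate yours with `lspci | grep Mellanox`

# 1. Switch the embedded eSwitch from legacy SR-IOV mode to switchdev mode.
subprocess.run(
    ["devlink", "dev", "eswitch", "set", f"pci/{PCI_ADDR}", "mode", "switchdev"],
    check=True,
)

# 2. Enable TC hardware offload in Open vSwitch's datapath.
subprocess.run(
    ["ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true"],
    check=True,
)
print("Restart the openvswitch service so hw-offload takes effect.")
```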
Key Facts
Dual-port 100GbE QSFP28 · 200 Mpps message rate · 750ns latency · RoCE support · 512 SR-IOV Virtual Functions · VXLAN/NVGRE/GENEVE offload · PCIe 3.0 x16 · ASAP2 vSwitch offload · NVMe-oF target offloads · T10-DIF Signature Handover · PeerDirect GPUDirect · NC-SI BMC management
Compatibility Matrix
Operating Systems: RHEL/CentOS 7/8/9, Ubuntu 18.04+, Windows Server 2016/2019/2022, FreeBSD 12+, VMware ESXi 6.7/7.0/8.0, Citrix XenServer
Switches: NVIDIA Spectrum SN3000/SN3700 series, Cisco Nexus 3000/9000, Arista 7000 series, Juniper QFX series, any standards-based 25/40/50/100GbE switch
Cables and Optics (100GbE): QSFP28 passive DAC (up to 5m), QSFP28 AOC, 100GBASE-SR4 (MPO, 100m), 100GBASE-LR4 (LC, 10km), 100GBASE-ER4 (LC, 40km)
Cables and Optics (Lower Speeds): QSFP28 to SFP28 breakout cables (100G to 4x25G), QSFP+ (40G), SFP28 (25G), SFP+ (10G) with appropriate adapters
Management Protocols: NC-SI, MCTP over PCIe/SMBus, PLDM for monitoring and firmware update, SDN eSwitch management
Buyer Checklist
  • Confirm the server has an available PCIe x16 slot, Gen 3.0 or higher. Note that PCIe 3.0 x16 delivers roughly 126 Gb/s per direction, so both ports cannot sustain full 100GbE line rate simultaneously on this card (see the arithmetic sketch after this list); if that matters, consider the PCIe 4.0 MCX516A-CDAT.
  • Determine required cable type: passive DAC (short distance), active optical (medium distance), or optical transceivers (long distance) for 100GbE operation.
  • Verify operating system driver availability from NVIDIA/Mellanox official site (latest OFED or inbox drivers).
  • Ensure your switch supports 100GbE QSFP28 ports (most modern spine switches do).
  • For RoCE deployments, confirm switch support for DCB (PFC, ETS, ECN) and congestion notification.
  • For NVMe-oF target offloads, verify your storage software stack compatibility.
  • For BMC integration, verify your motherboard supports NC-SI over SMBus or PCIe.
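
The PCIe caveat in the first checklist item comes down to simple arithmetic, sketched below in Python.

```python
# Back-of-envelope check: effective PCIe 3.0 x16 bandwidth versus the
# card's aggregate dual-port line rate.
PCIE3_GT_PER_LANE = 8.0          # PCIe 3.0 signals 8 GT/s per lane
ENCODING = 128 / 130             # 128b/130b line-code efficiency
LANES = 16

pcie_gbps = PCIE3_GT_PER_LANE * ENCODING * LANES  # ~126 Gb/s per direction
line_rate_gbps = 2 * 100                          # 200 Gb/s aggregate, both ports

print(f"PCIe 3.0 x16 effective: ~{pcie_gbps:.0f} Gb/s per direction")
print(f"Dual-port line rate:     {line_rate_gbps} Gb/s")
# A single Gen3 x16 link therefore cannot carry both ports at full line
# rate simultaneously; MCX516A-CDAT (PCIe 4.0 x16) removes that ceiling.
```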
Related Products
NVIDIA MCX516A-CDAT
ConnectX-5 Ex dual-port 100GbE with PCIe 4.0 x16 for enhanced performance.
NVIDIA SN3700 Switch
32x 200GbE spine switch for high-density 100GbE/200GbE aggregation.
NVIDIA LinkX QSFP28 DAC Cables
Passive copper direct-attach cables for 100GbE connections up to 5 meters.
NVIDIA SN3420 Switch
48x 25GbE + 12x 100GbE top-of-rack switch for leaf/spine fabrics.
Related Guides
  • RoCE Deployment Guide for ConnectX-5 Series
  • ASAP2 Open vSwitch Offload Configuration Guide
  • NVMe over Fabric with ConnectX-5 Best Practices
  • SR-IOV Configuration on VMware ESXi with Mellanox Adapters
  • 100GbE Migration: Planning and Implementation
About Hong Kong Starsurge Group

Hong Kong Starsurge Group Co., Limited has been a technology-driven provider of network hardware, IT services, and system integration since 2008, serving government, healthcare, manufacturing, finance, education, and enterprise clients worldwide. We deliver switches, NICs, wireless solutions, IoT systems, and custom software with multilingual support and global delivery. With a customer-first approach, Starsurge ensures reliable quality, responsive service, and tailored network infrastructure solutions.
