2m(7ft) NVIDIA/Mellanox MCP7H50-H002R26(980-9I99F-00H002) Compatible 200G QSFP56 to 2x 100G QSFP56 InfiniBand HDR Passive Direct Attach Copper Breakout Cable
P/N: Q200-2Q100-C2
$78.50
1 Sold | 2 Reviews | 3 Questions
5-Year Warranty
30-Day Returns
30-Day Exchange
(1 In Stock)

Product Highlights

Qualified for InfiniBand HDR End-to-end Systems
Lowest-latency, Low Insertion Loss, Low Crosstalk, High-speed Interconnect
4x 50G-PAM4 to Dual 2x 50G-PAM4
100% Tested on Original NVIDIA Equipment, Fully Compatible with NVIDIA QM8700 Switches
Minimum Bend Radius 26mm for Flexible Routing
Simplifies Patching and Offers a Cost-effective Solution for Short Links
The NVIDIA/Mellanox MCP7H50-H002R26 (980-9I99F-00H002) compatible cable is a 200Gb/s QSFP56 to two 100Gb/s QSFP56 InfiniBand HDR passive direct attach copper breakout cable. It uses the Quad Small Form Factor Pluggable 56 (QSFP56) form factor and contains four high-speed electrical copper pairs, each operating at data rates of up to 50Gb/s. The cable is compliant with the QSFP56 MSA (Multi-Source Agreement) and IEEE 802.3cd standards.
DACs offer the lowest latency, so they are widely used in data centers to connect servers and GPU compute systems to top-of-rack (ToR) switches and to link spine to super-spine switches inside racks over short cable lengths. DACs are also used to build InfiniBand storage fabrics with hard-disk-drive (HDD) and flash-memory subsystems.
Featuring a simple design and minimal components, DAC cables offer the lowest-cost, lowest-latency, and near-zero-power connections for high-speed links. They enhance port bandwidth, density, configurability, and reduce power requirements in supercomputers and hyperscale systems.
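The lane arrangement in the highlights above ("4x 50G-PAM4 to Dual 2x 50G-PAM4") reduces to simple arithmetic. The sketch below is purely illustrative; the lane count and per-lane rate are taken from this product's specifications, not from any vendor tooling:

```python
# Lane arithmetic for the QSFP56-to-2x-QSFP56 breakout (illustrative sketch).
LANES = 4            # a QSFP56 port carries four electrical lanes
LANE_RATE_GBPS = 50  # each lane runs at 50Gb/s using PAM4 signaling

total_gbps = LANES * LANE_RATE_GBPS    # full port: 4 x 50 = 200Gb/s
per_branch_gbps = total_gbps // 2      # each breakout end takes two lanes: 100Gb/s

print(total_gbps, per_branch_gbps)     # 200 100
```

This is why the single 200G QSFP56 end can feed two independent 100G QSFP56 ports: the four electrical lanes are simply split two-and-two between the breakout ends.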
Specifications
NVIDIA/Mellanox Compatible: MCP7H50-H002R26 (980-9I99F-00H002)
Vendor Name: AICPLIGHT
Connector Type: QSFP56 to 2x QSFP56
Max Data Rate: 200Gbps
Media: Copper
Cable Type: Passive Twinax
Cable Length: 2m (7ft)
Wire AWG: 30AWG
Minimum Bend Radius: 26mm
Power Consumption: ≤0.5W
MTBF: ≈50 Million Hours
Jacket Material: PVC (OFNR)
Commercial Temperature Range: 0 to 70°C (32 to 158°F)
Application: 200G InfiniBand HDR
Protocols: IEEE 802.3cd, SFF-8665, QSFP56 MSA
Warranty: 5 Years
Questions & Answers
Q: Can these cables be used for GPUDirect RDMA communication?
(a***m, 2025-09-18)

A: Yes. These InfiniBand cables support GPUDirect RDMA, enabling low-latency, high-throughput data transfers directly between GPUs across the fabric.
(AICPLIGHT, 2025-09-19)
Q: Do InfiniBand AOC cables support long-distance connections?
(G***m, 2025-06-19)

A: Yes. AOCs (Active Optical Cables) support longer distances than DACs, typically 30 meters or more, while maintaining low-latency, high-bandwidth transmission.
(AICPLIGHT, 2025-06-19)
Q: What is the difference between HDR and NDR InfiniBand cables?
(L***m, 2025-06-01)

A: HDR (High Data Rate) supports up to 200Gb/s per port, while NDR (Next Data Rate) handles 400Gb/s. Both are used for high-performance computing, but NDR is designed for newer-generation systems requiring ultra-high bandwidth.
(AICPLIGHT, 2025-06-01)
Customer Reviews
O***m on 2025-07-27 (Confirmed Purchase), 5.0:
Worked out of the box with ConnectX-6 NICs and Quantum switches. Auto-negotiation and link detection were flawless across all ports.
E***m on 2025-07-07 (Confirmed Purchase), 5.0:
Cables were clearly labeled and well packaged. Plug-and-play setup made our data center upgrade smooth and fast.