1.5m(5ft) NVIDIA/Mellanox MCP7Y10-N02A(980-9I80A-00N01A) Compatible 800G OSFP Finned Top to 2x 400G QSFP112 Ethernet NDR Passive Direct Attach Copper Breakout Cable
P/N: O800-2Q400-C1.5
US$167.00
5 Sold
5 Reviews
2 Questions
4-Year Warranty
30-Day Returns
30-Day Exchange
(1000 In Stock)
Product Highlights
Max. Power Consumption 0.1W
Minimum Bend Radius 40.5mm for Flexible Routing
Supports 2x 400G Breakout for Higher Port Density
Simplifies Patching and Offers a Cost-Effective Option for Short Links
High-Speed Connectivity Compliant with IEEE 802.3ck and CMIS v5.0
Plug-and-Play Operation for Faster Deployment and Easier Use
1.5m(5ft) NVIDIA/Mellanox MCP7Y10-N02A(980-9I80A-00N01A) Compatible 800G OSFP Finned Top to 2x 400G QSFP112 Ethernet NDR Passive Direct Attach Copper Breakout Cable
The 800G OSFP to 2x 400G QSFP112 direct attach cable operates over passive copper and requires no additional power to deliver reliable connectivity. This breakout cable is compliant with the OSFP MSA, QSFP112 MSA, CMIS Rev 5.0, and IEEE 802.3ck standards. It connects an 800G OSFP port on one end to two 400G QSFP112 ports on the other, making it a cost-effective choice for short-reach interconnects within telecom operator equipment rooms and data centers.
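For context on how the 800G aggregate splits across the breakout, the OSFP end carries eight 100Gb/s PAM4 lanes and each QSFP112 leg carries four; the short Python sketch below is purely illustrative lane arithmetic based on the figures in this listing, not vendor tooling.

# Illustrative lane arithmetic for the 800G OSFP to 2x 400G QSFP112 breakout.
# Values follow the PAM4 / IEEE 802.3ck figures in this listing; a sketch, not vendor tooling.
LANE_RATE_GBPS = 100        # each electrical lane runs at 100Gb/s PAM4
OSFP_LANES = 8              # 800G OSFP end
QSFP112_LANES = 4           # each 400G QSFP112 leg

osfp_total = LANE_RATE_GBPS * OSFP_LANES        # 800Gb/s aggregate
per_leg = LANE_RATE_GBPS * QSFP112_LANES        # 400Gb/s per breakout leg
assert osfp_total == 2 * per_leg                # the 800G port splits evenly into two 400G legs

print(f"OSFP end: {OSFP_LANES} lanes x {LANE_RATE_GBPS}G = {osfp_total}G")
print(f"Each QSFP112 leg: {QSFP112_LANES} lanes x {LANE_RATE_GBPS}G = {per_leg}G")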
Specifications
Part Number: MCP7Y10-N02A (980-9I80A-00N01A)
Vendor Name: AICPLIGHT
Form Factor: OSFP (Finned Top) to 2x QSFP112
Max Data Rate: 800Gbps
Media: Copper
Cable Type: Passive Twinax
Minimum Bend Radius: 40.5mm
Power Consumption: 0.1W
Cable Length: 1.5m (5ft)
Wire AWG: 28AWG
Temperature Range: 0 to 70°C (32 to 158°F)
Jacket Material: PVC (OFNR)
Protocols: OSFP MSA, QSFP112 MSA, IEEE 802.3ck, CMIS v5.0
Modulation Format: PAM4
Certification:
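As a practical check after installation, the cable's EEPROM coding (vendor name and part number) can be read from a Linux host with the standard ethtool -m command. The Python sketch below is a minimal example under that assumption; the interface name enp1s0f0 is a placeholder to replace with the actual port.

# Minimal sketch: read the module EEPROM with `ethtool -m` and check the vendor coding.
# Assumes a Linux host with ethtool installed; IFACE is a placeholder interface name.
import subprocess

IFACE = "enp1s0f0"              # placeholder - replace with the actual port
EXPECTED_PN = "MCP7Y10-N02A"    # part number this cable should report

eeprom = subprocess.run(
    ["ethtool", "-m", IFACE],
    capture_output=True, text=True, check=True,
).stdout

for line in eeprom.splitlines():
    if "Vendor name" in line or "Vendor PN" in line:
        print(line.strip())

if EXPECTED_PN in eeprom:
    print("Cable reports the expected part number.")
else:
    print("Expected part number not found - check seating or EEPROM coding.")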
Questions & Answers
All (2) | Compatibility (1) | Installation (1)
Q: How should cables be routed in a high-density rack?
by 6***m on 2025-08-23 13:20:50
A: Use cable managers to maintain airflow and prevent bending.
by AICPLIGHT on 2025-08-24 07:43:03
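To put the 40.5mm minimum bend radius from the specifications into routing terms, here is a small illustrative calculation of the cable length consumed by a turn at that radius (assuming a clean circular bend with no slack):

# Illustrative arc-length check against the 40.5mm minimum bend radius.
import math

MIN_BEND_RADIUS_MM = 40.5

def arc_length_mm(turn_degrees: float, radius_mm: float = MIN_BEND_RADIUS_MM) -> float:
    # Cable length consumed by a turn of the given angle at the given radius.
    return math.radians(turn_degrees) * radius_mm

print(f"90 degree turn:  {arc_length_mm(90):.1f} mm of cable")   # ~63.6 mm
print(f"180 degree turn: {arc_length_mm(180):.1f} mm of cable")  # ~127.2 mm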
Q: Can these transceivers auto-negotiate with older models?
by Y***m on 2025-05-05 21:20:15
A: Yes, backward compatibility is supported in many cases.
by AICPLIGHT on 2025-05-06 02:21:37
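When mixing generations as described above, the rate a link actually negotiated can be confirmed from a Linux host with the standard ethtool command; the sketch below is illustrative only, and the interface name is again a placeholder.

# Minimal sketch: report the negotiated link speed from standard `ethtool` output.
# Assumes a Linux host with ethtool installed; IFACE is a placeholder interface name.
import re
import subprocess

IFACE = "enp1s0f0"   # placeholder - replace with the port facing the older device

status = subprocess.run(["ethtool", IFACE], capture_output=True, text=True, check=True).stdout
speed = re.search(r"Speed:\s*(\S+)", status)
print(f"{IFACE} negotiated speed: {speed.group(1) if speed else 'unknown'}")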
Customer Reviews
All (5) | Performance (1) | Compatibility (1) | Ease of Use (2) | Value (1)
g***m | 2025-07-24 07:28:12 | Confirmed Purchase | 5.0
Used in our GPU training pod (8x H100s) to connect each server node. Latency remained sub-microsecond, even with large tensor workloads.
a***m | 2025-06-28 01:18:16 | Confirmed Purchase | 5.0
Strain relief sleeves added during manufacturing helped avoid cable damage during repositioning.
H***m | 2025-06-15 02:42:36 | Confirmed Purchase | 5.0
Plug-and-play experience was excellent. The cables came clean and ready for immediate deployment.
J***m | 2025-05-13 21:26:07 | Confirmed Purchase | 5.0
High-quality interconnects at a fraction of the price of branded cables. Ideal for scaling AI clusters on a budget.
b***m | 2025-05-04 23:31:55 | Confirmed Purchase | 5.0
Fully compatible with our NVIDIA ConnectX-6 NICs and Mellanox switches. No firmware warnings or handshake delays.