d-Matrix Launches JetStream Network Card to Accelerate AI Inference in Data Centers

JetStream is a full-height PCIe Gen5 card capable of delivering up to 400 Gbps.

AI infrastructure startup d-Matrix Corp. has unveiled JetStream, a custom-built network card designed to deliver high-speed, ultra-low-latency AI inference for modern data centers.

With the rise of generative AI and increasingly complex multimodal models, data centers are under pressure to scale both compute and networking capacity. JetStream aims to solve the networking bottleneck by enabling faster communication across distributed AI workloads.

“JetStream networking comes at a time when AI is going multimodal, and users are demanding hyper-fast levels of interactivity,” said Sid Sheth, co-founder and CEO of d-Matrix. “Together with our Corsair compute accelerator platform, we’re providing a path forward that makes AI both scalable and blazing fast.”

JetStream is a full-height PCIe Gen5 card capable of delivering up to 400 Gbps, designed for plug-and-play compatibility with standard Ethernet switches. This allows data centers to deploy it without major infrastructure changes.

According to d-Matrix, combining JetStream with its Corsair accelerators and Aviator software can deliver up to 10x faster performance and up to 3x better cost and energy efficiency compared with GPU-based systems.

Last month, the startup announced another advance in generative AI infrastructure: the integration of 3D-stacked DRAM (3DIMC) into its inference compute platform. The announcement, made at the Hot Chips 2025 conference, marks a step toward overcoming one of AI’s biggest bottlenecks: memory bandwidth.