Fabric Emulator
Realistically emulate data center fabrics on a single hardware box
Fabric Emulator Overview
- Emulate complex data center topologies under realistic traffic and congestion conditions.
- Faithfully reproduce real data plane behavior.
- Integrate with CI/CD pipelines by using open models and a lean API.
- Provide an intuitive and interactive workflow for engineers.
- Set up easily with a light operational burden.
Problem and Solution
The Problem
A multi-rack system testbed has many components, is complex, and is very difficult to manage. Reproducing realistic data center network behavior in a lab is challenging, which drives up the cost of innovation when designing modern distributed compute systems.
The Solution
Fabric Emulator removes that complexity, enabling you to emulate and test different customer networks and run experiments on them. With Fabric Emulator you can design and create a data center network sandbox to evaluate workload behavior. Without the headaches!
Product Capabilities – Fabric Emulator
- Emulate two- or three-tier Clos fabrics.
- Emulate a single switch or multiple disjoint L2 or L3 switches.
- Programmable oversubscription for the ToR-to-Pod and Pod-to-Spine tiers (a worked ratio example follows this list).
- 25GE, 50GE, 100GE, 200GE port speeds, uniform throughout the fabric.
- Support for FEC and for auto-negotiation with the option to advertise FEC.
- Multi-stage ECMP with traffic load balancing using ECMP random spray mode, 5-tuple hashing, and 3-tuple hashing, configurable per tier and direction (see the hashing sketch after this list).
- Layer 2 flooding for L2 discovery protocols.
- Knobs to configure QoS at the tier or switch level: packet classification and scheduling, ECN marking, and control of ingress-admission shared and reserved buffer sizes.
- Create congestion in the fabric by injecting up to 1 Tbps of background traffic at different points in the topology.
- Configure internal traffic sinks, so you can send packets to a certain IP address without having a server physically connected to the front panel port where the address is configured.
- Chaos Engineering: drop frames on up to six links during experiments.
- Get insights by mirroring any port in the emulated topology to a front panel port.
- Hop-by-hop statistics provided by a Prometheus monitoring agent; the data can be explored and analyzed with an external service such as Grafana, or pulled programmatically (see the query sketch after this list).
- Developer-centric Web UI.
- API-first design with declarative models and a gRPC API.
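For reference, oversubscription at a tier is simply the ratio of downstream-facing to upstream-facing bandwidth. The sketch below is an illustrative calculation with assumed port counts, not product defaults.

```python
# Illustrative ToR: 48 x 25GE server-facing ports, 4 x 100GE uplinks to the Pod tier.
downlink_gbps = 48 * 25   # 1200 Gbps toward servers
uplink_gbps = 4 * 100     # 400 Gbps toward the Pod tier
print(f"Oversubscription ratio: {downlink_gbps / uplink_gbps:.0f}:1")  # 3:1
```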
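As a quick illustration of the ECMP modes listed above, the following sketch contrasts per-flow 5-tuple hashing with random spray. It is a minimal conceptual model, not the emulator's implementation; the uplink names, the hash function, and the flow fields are assumptions chosen for the example.

```python
import hashlib
import random

# Example uplink set; a real Pod tier would be derived from the emulated topology.
UPLINKS = ["spine-1", "spine-2", "spine-3", "spine-4"]

def pick_uplink_5tuple(src_ip, dst_ip, proto, src_port, dst_port, uplinks=UPLINKS):
    """5-tuple hashing: every packet of a flow maps to the same uplink."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return uplinks[digest % len(uplinks)]

def pick_uplink_random_spray(uplinks=UPLINKS):
    """Random spray: each packet is sprayed independently across the uplinks."""
    return random.choice(uplinks)

# One flow always lands on the same uplink under 5-tuple hashing...
print(pick_uplink_5tuple("10.0.0.1", "10.0.1.1", "udp", 40000, 4791))
# ...while random spray scatters packets of that same flow over all uplinks.
print({pick_uplink_random_spray() for _ in range(100)})
```

The contrast is the point: per-flow hashing keeps every packet of a flow on one path, preserving ordering, while random spray spreads packets of the same flow across all paths for maximum link utilization.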
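Because the hop-by-hop statistics are exposed through a Prometheus monitoring agent, they can also be pulled programmatically instead of, or in addition to, being charted in Grafana. The sketch below uses the standard Prometheus HTTP query API; the server address and the metric name fabric_port_tx_bytes_total are placeholders, not documented names from the product.

```python
import requests

# Address of the Prometheus server scraping the Fabric Emulator agent (assumed).
PROMETHEUS_URL = "http://prometheus.example.local:9090"

def query_port_counters(metric="fabric_port_tx_bytes_total"):
    """Run an instant query against the standard Prometheus HTTP API.

    The metric name is a hypothetical placeholder; substitute whatever the
    monitoring agent actually exports.
    """
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": f"rate({metric}[5m])"},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    # Each entry carries the metric labels (e.g. switch, port) and the latest value.
    return {tuple(sorted(r["metric"].items())): float(r["value"][1]) for r in result}

if __name__ == "__main__":
    for labels, rate in query_port_counters().items():
        print(labels, rate)
```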
Realistically emulate data center fabrics on a single hardware box
The Fabric Emulator product brings the realism of customer networks into experimentation and test workflows for developers of next-generation system components, ranging from Data Processing Units, SmartNICs, and storage to communication libraries and transport protocols.
Use Case – Composable DC Infrastructure Vendor
Customer Persona:
- Fast moving startup environment.
- Building solution for massively distributed I/O in the data center.
Challenge:
- Requirement to test the solution against realistic data center setup in an R&D lab.
Solution:
- Leveraged Fabric Emulator to quickly instantiate data center topologies on demand and validate next-generation infrastructure for AI acceleration.
Outcome:
- Avoided large CAPEX and time investments in operating multiple hardware-based network fabrics.
Use Case – Large DPU Vendor
Customer Persona:
- Major player building out new SmartNIC product line.
Challenge:
- Requirement to reproduce complex data center congestion scenarios for product testing.
Solution:
- Fine-tuned Fabric Emulator to emulate data center traffic constraints and to gain insight into SmartNIC behavior under stress.
Outcome:
- Accelerated product development by rapidly testing SmartNIC under challenging conditions.
Extend the Capabilities – Protocol and Load Test
Featured Resources
- Congestion control with PFC and ECN
Fabric Emulator is used to emulate an eight-switch topology to demonstrate how a combination of PFC and ECN can manage network congestion more effectively (a conceptual marking sketch follows this list).
- Eight priorities
Fabric Emulator emulates a switch topology with PFC enabled and eight priority groups configured, followed by verification that the eight priority groups are in use by traffic flows.
- Maximizing available buffers: Using a shared buffer
Extending the fabric's total buffer availability by changing the default memory allocations for the ports used in the experiment.
- Maximizing available buffers: Memory partitions
Increasing total buffer availability by dividing the front panel ports evenly between the ASIC’s two memory partitions.
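The congestion-control resources above all hinge on how queues react to building congestion. As a purely conceptual sketch (not the emulator's configuration model, and with illustrative threshold values), the snippet below shows a RED/ECN-style marking decision: nothing is marked below a minimum queue depth, every ECN-capable packet is marked above a maximum depth, and the marking probability ramps linearly in between.

```python
import random

def ecn_mark(queue_depth_kb, min_kb=150, max_kb=1500, max_prob=1.0):
    """Decide whether to set the ECN Congestion Experienced bit.

    Thresholds are illustrative values, not product defaults.
    """
    if queue_depth_kb <= min_kb:
        return False
    if queue_depth_kb >= max_kb:
        return True
    # Linear ramp between the two thresholds (RED-style marking curve).
    prob = max_prob * (queue_depth_kb - min_kb) / (max_kb - min_kb)
    return random.random() < prob

# As the queue fills, more packets get marked and senders back off before
# PFC pause frames (on lossless priorities) have to kick in.
for depth in (100, 400, 800, 1200, 1600):
    marks = sum(ecn_mark(depth) for _ in range(10_000))
    print(depth, "KB queue ->", marks / 10_000, "mark probability")
```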