Businesses and service providers depend on networks that are fast, scalable, and resilient.
However, as networks grow, they face all kinds of challenges—from managing multi-location connectivity to ensuring efficient data flow and maintaining security and isolation.
Traditional networking models struggle to keep up with the demands of cloud computing, large-scale data centers, and modern applications.
This is where BGP-EVPN comes in: a solution designed to streamline network operations, enhance scalability, and optimize traffic flow.
Whether you’re an IT manager, a network engineer, or simply someone curious about how modern networks stay flexible and reliable, this blog will walk you through:
- How BGP-EVPN works
- Why BGP-EVPN matters
- How BGP-EVPN integrates with VXLAN
How does BGP-EVPN work?
Modern data centers and enterprise networks need scalable, efficient Layer 2 and Layer 3 connectivity across multiple sites.
                    Internet (BGP)
                          |
        +-----------------+-----------------+
        |                                   |
 DC1 Spine Router                    DC2 Spine Router
        |                                   |
        +--------BGP-EVPN Peering-----------+
        |                                   |
 DC1 Leaf Switch                     DC2 Leaf Switch
        |                                   |
   DC1 Server                          DC2 Server
At Layer 2, EVPN delivers traditional Ethernet connectivity across large-scale networks using VXLAN tunneling. This enables workload mobility and multi-tenancy across data centers.
At Layer 3, EVPN handles routing and segmentation across different networks, ensuring scalability and optimized traffic flow without excessive broadcasts.
BGP-EVPN (Border Gateway Protocol – Ethernet VPN) solves this by providing a control plane for multi-tenancy, workload mobility, and optimized traffic flow. Ideal for cloud providers, SDN architectures, and large-scale deployments, BGP-EVPN integrates with VXLAN to extend L2 over L3, reducing network complexity and improving automation.
BGP-EVPN is an extension of the Border Gateway Protocol (BGP) that leverages Multiprotocol BGP (MP-BGP) to distribute endpoint reachability information and provide efficient and scalable Ethernet-based VPN solutions.
Widely adopted in data centers and service provider environments, BGP EVPN automates the creation of VXLAN tunnels, addressing the need for flexible, multi-tenant connectivity and easy workload mobility.
What is VXLAN?
Virtual Extensible LAN (VXLAN), specified in RFC 7348, is an encapsulation protocol that carries Layer 2 frames over a Layer 3 network, designed to address the scaling limitations of traditional VLANs in modern data centers.
The issue with classic VLANs: while effective for segmenting networks, they provide only a limited number of identifiers (4,096) and have difficulties in spanning L2 domains over larger L3 networks.
VXLAN resolves these challenges by extending L2 connectivity over a L3 underlay network, offering scalability, flexibility, and improved performance for virtualized environments.
Why does VXLAN matter in data centers?
Applications often depend on microservices – an architecture that breaks applications into independent, modular services that interact dynamically.
These microservices are typically distributed across multiple servers, racks, or data centers, so seamless communication between them is essential. Furthermore, many microservices rely on Layer 2 connectivity to ensure efficient communication and smooth workload mobility.
VXLAN addresses this need by encapsulating Ethernet frames within UDP packets, allowing them to travel across a Layer 3 network while maintaining Layer 2 functionality.
By enabling Layer 2 connectivity over a Layer 3 infrastructure, VXLAN not only ensures efficient microservice communication but also lays the foundation for network overlays. These overlays provide scalability, segmentation, and flexibility, making them essential for modern data centers and cloud environments.
A quick detour into network overlays: Network overlays are created by encapsulating traffic into a tunneling protocol. Tunneling essentially encapsulates one network protocol within another, creating a “tunnel” for the original data. Here, VXLAN (Virtual Extensible LAN) encapsulates L2 Ethernet frames into L3 UDP packets. This encapsulation enables L2 networks, such as VLANs or subnets, to extend across a L3 infrastructure, like an IP-based data center or WAN.
VXLAN relies on VXLAN Tunnel Endpoints (VTEPs), located in servers or network switches. A VTEP is a network component (a physical switch, virtual switch, or software-based node) that processes VXLAN traffic by adding and removing encapsulation. This allows devices in separate L2 networks to communicate seamlessly over an L3 infrastructure and ensures that microservices remain interconnected, regardless of their physical locations.
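To make the encapsulation concrete, here is a minimal sketch using the Python scapy library (our choice for illustration, not something the VXLAN spec prescribes) of what a source VTEP does: an inner Ethernet frame is wrapped in an outer IP/UDP header (UDP port 4789) carrying the VXLAN header with its VNI. The addresses and the VNI value are purely illustrative.

```python
# Minimal sketch (Python + scapy) of VXLAN encapsulation as performed by a VTEP.
# Addresses and the VNI are illustrative values.
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP
from scapy.layers.vxlan import VXLAN

# Original L2 frame produced by a workload (e.g. a VM or container)
inner_frame = (Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") /
               IP(src="10.0.0.1", dst="10.0.0.2") /
               UDP(sport=12345, dport=80))

# Outer headers added by the source VTEP: routable IP, UDP 4789, VXLAN header with VNI
vxlan_packet = (Ether() /
                IP(src="192.168.1.1", dst="192.168.2.1") /   # VTEP-to-VTEP addresses
                UDP(sport=49152, dport=4789) /               # 4789 = IANA VXLAN port
                VXLAN(vni=5000) /                            # tenant segment identifier
                inner_frame)

vxlan_packet.show()   # inspect the layered headers
```

The destination VTEP simply strips the outer headers and delivers the original frame, which is why the workloads never notice the L3 network in between.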
All-in-all, VXLAN is considered essential for microservices and distributed systems, because it:
- Supports protocols like ARP and multicast (commonly used for service discovery and communication)
- Provides scalability, with support for over 16 million unique identifiers (VNIs)
- Provides network segmentation for isolating tenant traffic
- Increases operational efficiency by extending L2 networks over L3 infrastructures
By ensuring smooth communication between distributed services, VXLAN offers the flexibility & scalability that modern applications require.
What is MP-BGP?
In traditional BGP-4, peers exchange routing information through UPDATE messages.
These messages advertise reachable routes that share the same path attributes, with the prefixes carried in the Network Layer Reachability Information (NLRI) field.
The issue: the scope of BGP-4 was limited to handling only IPv4 unicast routing information.
To meet the increasing demand for diverse network layer protocols, such as IPv6 & multicast, the Multiprotocol Border Gateway Protocol (MP-BGP) was developed. MP-BGP is an extension of traditional BGP, enabling it to support multiple address families, including IPv6, multicast, and EVPN.
Address families refer to different network protocol types or services that MP-BGP can route, providing flexibility to accommodate various networking needs beyond traditional IPv4 routing.
The protocol achieves this by introducing new formats for Network Layer Reachability Information (NLRI).
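Each address family is identified by an AFI/SAFI pair negotiated between peers; the EVPN family, for example, uses AFI 25 (L2VPN) with SAFI 70. The small Python sketch below is only an illustration of this lookup, listing the families mentioned in this post.

```python
# Illustrative mapping of MP-BGP address families to their IANA AFI/SAFI codes.
# Only the families discussed in this post are listed.
ADDRESS_FAMILIES = {
    (1, 1):   "IPv4 unicast (classic BGP-4)",
    (2, 1):   "IPv6 unicast",
    (1, 2):   "IPv4 multicast",
    (25, 70): "L2VPN EVPN",
}

def describe(afi: int, safi: int) -> str:
    """Return a human-readable name for a negotiated AFI/SAFI pair."""
    return ADDRESS_FAMILIES.get((afi, safi), f"unknown family (AFI={afi}, SAFI={safi})")

print(describe(25, 70))   # -> L2VPN EVPN
```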
MP-BGP in data centers
MP-BGP became an important component in modern data centers due to its ability to scale and efficiently distribute routing information. When integrated with VXLAN & EVPN, MP-BGP serves as the control plane for distributing L2 and L3 reachability information.
What makes MP-BGP ideal for data centers is:
- Protocol versatility, due to its support for multiple address families, including EVPN
- Scalability, for efficient advertisement of MAC and IP addresses across large networks
- Policy control, for detailed control over routing policies and optimal traffic flows
MP-BGP alone, however, fell short of providing the flexibility and efficiency needed for L2 and L3 VPN services. To address this gap (enhanced network segmentation, optimized traffic flow, and improved redundancy), EVPN was developed.
BGP (Control Plane) - backbone protocol that distributes routing information across networks
↓
MP-BGP (Multi-Protocol) - supports multiple address families (including EVPN)
↓
EVPN (Ethernet VPN) - multi-tenancy, workload mobility, and optimized MAC/IP routing
↓
VXLAN (Encapsulation) - L2 traffic into L3 packets, enabling data center overlays
↓
Underlay Network (IP Fabric) - physical infrastructure that supports VXLAN tunnels
What is EVPN?
The original VXLAN specification lacked a control plane, requiring manual configuration of VXLAN tunnels and relying on flood-and-learn mechanisms to discover MAC addresses.
The problem: this directly caused significant overhead in large-scale networks and complicated scaling.
To address these issues, the Ethernet Virtual Private Network (EVPN) was introduced as a standards-based control plane for VXLAN.
EVPN leverages MP-BGP to distribute MAC and IP address information, providing a more scalable and efficient solution for large deployments.
How does MP-BGP EVPN help VXLAN?
Because MP-BGP EVPN serves as the control plane for VXLAN, it enables automated and efficient learning of MAC and IP addresses. Its key benefits include the following (a simplified sketch of the information EVPN distributes follows this list):
- Automated VTEP discovery, where MP-BGP EVPN allows VTEPs to automatically discover each other and establish VXLAN tunnels.
- Efficient routing, since instead of flooding the network to discover MAC addresses, MP-BGP EVPN advertises this through simpler BGP updates.
- Integrated L2 and L3 connectivity, because EVPN supports both MAC address learning and IP routing, providing seamless integration of Layer 2 and Layer 3 services.
- ARP suppression, which reduces broadcast traffic by advertising ARP information through the control plane.
- Scalability & flexibility: BGP’s proven scalability ensures that MP-BGP EVPN can support large, multi-tenant data center environments.
BGP-EVPN has become the foundation of modern data center and WAN architectures, providing a scalable, efficient, and flexible solution for Layer 2 and Layer 3 VPNs.
By using MP-BGP as the control plane, BGP-EVPN improves network efficiency by reducing broadcast traffic, optimizing routing, and enabling seamless multi-tenancy. All of these are essential for today’s cloud-centric infrastructures.
Data centers, SONiC & orchestration
With the growing adoption of SONiC, open networking, and multi-cloud environments, the role of BGP-EVPN will continue to expand, offering a standardized approach to network segmentation, automation, and intent-driven operations.
However, managing large-scale BGP-EVPN networks is complex, requiring advanced automation to streamline provisioning, ensure real-time validation, and maintain operational consistency.
Built for modern network fabrics, SandWork automates BGP-EVPN overlays, providing intent-based orchestration, real-time reconciliation, and seamless fabric lifecycle management.
By integrating VXLAN provisioning, automated validation, and network-wide configuration changes, SandWork empowers enterprises, service providers, and hyperscalers to operate resilient, scalable, and future-ready networks with minimal manual effort.
FAQ
1. What problem does VXLAN solve?
VXLAN overcomes the scalability limitations of VLANs by enabling over 16 million unique segments. It also extends Layer 2 networks over a Layer 3 underlay, supporting geographically distributed workloads.
2. How does MP-BGP EVPN enhance VXLAN?
MP-BGP EVPN provides a control plane for VXLAN, enabling dynamic and efficient learning of MAC and IP addresses. This replaces the flood-and-learn mechanism, reducing overhead and enhancing scalability.
3. Why is VXLAN-EVPN important for microservices?
Microservices often require Layer 2 connectivity for communication and service discovery. VXLAN-EVPN provides seamless Layer 2 overlays over Layer 3 networks, enabling efficient communication between distributed microservices.
4. How does PANTHEON.tech support VXLAN-EVPN deployments?
PANTHEON.tech offers solutions like SandWork, which simplifies the orchestration and management of VXLAN-EVPN networks, and LightSpeed, which validates the performance of these networks.
5. Is VXLAN-EVPN suitable for small enterprises?
While VXLAN-EVPN is often associated with large-scale data centers, its benefits—scalability, agility, and security—make it valuable for enterprises of all sizes. Small enterprises adopting cloud-native architectures can particularly benefit from its flexibility.
Related Products from PANTHEON.tech
SandWork: A network orchestration platform designed for managing complex data center environments, including VXLAN and MP-BGP EVPN deployments. It simplifies the automation and management of large-scale data center networks.
Custom Solutions: Tailored networking solutions to address specific enterprise needs, ensuring optimal performance and scalability.
Leave us your feedback on this post!
Explore our PANTHEON.tech GitHub.
Watch our YouTube Channel.
(Updated 3/2025)
[What Is] VLAN & VXLAN
Let’s start with an analogy – a busy airport. Thousands of passengers, dozens of terminals, countless gates. Now imagine trying to direct all that traffic – keeping passengers moving smoothly, without ending up at the wrong destination.
That’s what modern networks look like today: crowded, fast-paced, and constantly growing.
To manage this digital chaos, network engineers rely on segmentation—ways to divide and organize traffic. Two technologies that make this possible in the world of networks are VLANs and VXLANs. Imagine them as the traffic controllers of the network world, orchestrating who goes where and how data travels from point A to point B.
While VLANs have been around for decades, the demands created by cloud computing, virtualization, and multi-tenant environments have exposed their limits. Enter VXLANs – built for scale, flexibility, and the demands of today’s data centers.
Both VLANs and VXLANs are often misunderstood or oversimplified. Let’s break them down, compare them, and look at where each one fits in real-world scenarios.
What is a VLAN?
A Virtual Local Area Network (VLAN) is a method of logically separating network traffic, even when devices sit on the same physical switch or infrastructure.
A VLAN assigns a set of ports or devices to their own broadcast domain – isolated from others unless you explicitly allow communication between them. This is incredibly useful in office or enterprise environments, where different departments or tenants should not share broadcast domains. VLANs operate at Layer 2 (Data Link Layer) of the OSI model.
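As a small illustration of how this separation is expressed on the wire, the sketch below (Python + scapy, with illustrative addresses) tags an Ethernet frame with an 802.1Q header carrying a 12-bit VLAN ID – the same ID a switch uses to keep the frame inside its assigned broadcast domain.

```python
# Illustrative sketch: adding an 802.1Q VLAN tag to an Ethernet frame (scapy).
from scapy.layers.l2 import Ether, Dot1Q

# The Dot1Q header carries the 12-bit VLAN ID; switches forward the frame
# only within ports belonging to that VLAN.
tagged = (Ether(src="00:11:22:33:44:55", dst="ff:ff:ff:ff:ff:ff") /
          Dot1Q(vlan=10) /
          b"payload")

print(tagged.summary())   # shows the Ether / 802.1Q (vlan=10) layering
```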
Pros:
- Simple to configure and supported on virtually any managed switch
- Keeps broadcast domains small and traffic segmented within a site
- No extra encapsulation or overlay infrastructure required
Cons:
- Only 4,096 VLAN IDs to work with
- Hard to stretch Layer 2 segments across Layer 3 boundaries or between sites
Trying to scale Layer 2 across a large network also leads to more broadcast traffic, an increased risk of loops, and something known as spanning-tree hell. Network teams end up spending more time troubleshooting than scaling and building.
What is a VXLAN?
Virtual Extensible LAN (VXLAN) extends the concept of VLANs by allowing L2 networks to be overlaid on top of an L3 infrastructure, using tunneling.
VXLAN encapsulates Ethernet frames inside UDP packets, allowing them to traverse IP networks. It’s designed to address the scalability limitations of VLANs.
Simply put, VXLAN creates a virtual tunnel that lets traffic from isolated L2 groups travel across an IP (L3) network and extend its reach. In other words, VXLAN operates at Layer 2 over Layer 3 (encapsulation).
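The scale difference between the two is easy to quantify directly from the header field widths: a VLAN ID is 12 bits, while a VXLAN Network Identifier (VNI) is 24 bits.

```python
# VLAN vs. VXLAN identifier space, derived from the header field widths.
vlan_ids = 2 ** 12    # 802.1Q VLAN ID is 12 bits
vxlan_vnis = 2 ** 24  # VXLAN VNI is 24 bits

print(vlan_ids)    # 4096
print(vxlan_vnis)  # 16777216 (~16 million segments)
```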
Use cases for VXLAN include data center interconnects, cloud-scale networks, and multi-tenant environments.
Pros:
- Over 16 million segments (VNIs) instead of 4,096 VLAN IDs
- Extends Layer 2 connectivity across any routed (L3) network
- Strong fit for multi-tenancy and workload mobility
Cons:
- Adds encapsulation overhead and a steeper learning curve
- Requires capable hardware or software VTEPs, and typically a control plane such as EVPN to scale cleanly
VLAN vs. VXLAN, side-by-side
Real-World Usage
VLAN: Enterprises and campus networks
Most enterprise networks still rely on VLANs for segmenting traffic between departments. They’re easy to deploy and maintain. A simple Layer 3 switch and managed VLAN setup keeps traffic segmented and secure.
VXLAN: Data centers and cloud environments
VXLANs excel in modern data centers, especially when combined with EVPN as the control plane. VXLANs allow virtual machines or containers to move across physical servers and even geographic locations while maintaining the same network identity.
The choice is yours
VLANs are here to stay. They’re still highly effective for straightforward segmentation. But for modern, cloud-native, and large-scale networks, VXLAN is becoming the new standard.
Understanding the difference and knowing when to use each is a must for any network engineer building future-proof infrastructure.
Leave us your feedback on this post!
Explore our PANTHEON.tech GitHub.
Watch our YouTube Channel.
[What Is] Whitebox Networking?
A deep dive into open-source & SONiC Solutions
One of the many challenges in modern networking is the big decision between vendor-locked and open-source solutions.
While entry prices for vendor-locked solutions may seem reasonable, they can become unbearable in the long run and can block your enterprise from scaling effectively or experimenting with various brands.
Open-source solutions are often community-driven and offer more flexibility in terms of hardware support and functionality. However, they depend on strong leadership and continued interest in their future.
In recent years, there has been a growing trend of abandoning traditional, vendor-locked solutions in favor of more flexible, scalable, and cost-efficient architectures.
At the heart of this transformation is whitebox networking, an approach that separates network hardware from the software that powers it. This enables organizations to deploy open-source solutions and customize their networking stack to suit their needs, leading to increased efficiency, lower costs, and greater innovation.
What is whitebox networking?
Whitebox networking refers to using generic, commodity hardware—often referred to as bare-metal or whitebox switches—instead of proprietary, vendor-specific devices.
Unlike traditional networking, where companies purchase hardware bundled with pre-installed software from a single vendor, whitebox networking allows businesses to choose their preferred network operating system.
A popular option, which we at PANTHEON.tech are wholeheartedly supportive of, is SONiC – Software for Open Networking in the Cloud.
This separation of hardware and software mainly provides organizations with more control over their network infrastructure.
It also fosters a more competitive ecosystem, as companies are no longer locked into a single vendor for everything – from hardware variety and support to updates and new features.
With open-source network operating system (NOS) options gaining popularity, businesses can now customize, automate, and orchestrate their networks, for example, with SandWork by PANTHEON.tech.
Behind the popularity of Whitebox Networking
One of the biggest motivations behind the increased adoption of whitebox networking is cost reduction.
Traditional networking equipment is expensive due to vendor markups, licensing fees, and maintenance contracts. Whitebox switches, on the other hand, are built using standardized, off-the-shelf components, significantly reducing hardware costs.
Additionally, you get the freedom to choose from a wider range of vendors and their hardware.
Organizations can then install a software-defined networking (SDN) platform or an open-source NOS to manage their network efficiently.
Traditionally, vendors offer hardware with pre-installed, proprietary software and operating systems – which is what we referred to above as vendor lock-in.
Beyond cost savings, scalability and flexibility are major advantages. Large-scale data centers, telecom providers, and cloud service operators need infrastructure that can grow dynamically. Whitebox networking enables this by allowing enterprises to deploy customized solutions that integrate seamlessly with automation tools, network monitoring systems, and cloud-native architectures.
Speaking of customized solutions – the most important factor is the rise in popularity of open-source solutions, like SONiC.
Developed by Microsoft and later open-sourced, SONiC has become one of the most widely adopted open-source network operating systems for large-scale data center deployments.
Even if all of this may seem overwhelming at first, it is well worth the effort to dive into the open world of SONiC. We can boldly claim that SONiC has made whitebox networking more accessible & manageable than ever before.
How SONiC shapes the future of whitebox networking
Particularly in large-scale cloud and enterprise environments, SONiC is like a dream come true for complex deployments.
Built on Linux, SONiC offers high programmability, multi-vendor support, and deep integration with SDN frameworks.
One of SONiC’s biggest advantages is its hardware abstraction. Software is traditionally tied to specific hardware – a program written for one type of device may not work on another. Hardware abstraction removes this limitation by creating a standardized way for software to interact with hardware.
Since SONiC supports multiple hardware platforms, organizations using whitebox switches can switch vendors without changing their entire network stack. This eliminates vendor lock-in, making it easier to adopt more fitting solutions for different network components.
Furthermore, SONiC’s containerized architecture enables modular updates and faster development cycles. Organizations can choose which networking features they want to enable or disable, providing great customization compared to traditional monolithic network operating systems.
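To give a feel for SONiC’s configuration model, the fragment below sketches how a VLAN and one member port might appear in SONiC’s Redis-backed CONFIG_DB, commonly edited via config_db.json or the CLI. The table and key names follow common SONiC conventions, but treat this as an illustrative example and check the documentation of your particular build.

```python
# Illustrative sketch of a SONiC CONFIG_DB fragment (as found in config_db.json),
# defining one VLAN and attaching a port to it. Names and values are examples only.
import json

config_db_fragment = {
    "VLAN": {
        "Vlan100": {"vlanid": "100"}
    },
    "VLAN_MEMBER": {
        "Vlan100|Ethernet0": {"tagging_mode": "untagged"}
    },
}

print(json.dumps(config_db_fragment, indent=2))
```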
A worthwhile challenge
Despite its advantages, whitebox networking comes with operational challenges that organizations should consider.
Since hardware and software are decoupled, businesses must ensure that their chosen NOS is fully compatible with their whitebox switches. Unlike traditional, ready-made solutions, where vendors provide end-to-end support, whitebox deployments require a higher level of technical expertise.
Fortunately, vendors that sell SONiC-enabled hardware offer a pre-installed factory version of SONiC, including various support options.
While whitebox networking provides greater customization and flexibility, it may also require a shift in IT operations. Enterprises accustomed to vendor-managed solutions may need to invest in training their network engineers on open-source NOS platforms, automation tools, and new troubleshooting techniques.
The Future of Whitebox Networking
As data centers and the demand for their performance grow larger and more complex, whitebox networking is destined to become the de facto standard for scalable, cost-effective, and programmable network infrastructure.
With SONiC driving innovation, companies now have access to a vendor-neutral, software-driven networking model that empowers them to build more efficient, customizable networks.
As adoption increases, expect to see greater interoperability between whitebox switches and open-source NOS, improved automation capabilities, and wider support from major hardware vendors.
For enterprises looking to modernize their network infrastructure, reduce costs, and eliminate vendor lock-in, whitebox networking with open-source NOS like SONiC represents a compelling, future-proof solution.
Leave us your feedback on this post!
Explore our PANTHEON.tech GitHub.
Watch our YouTube Channel.
SONiC & FD.io: Exploration and Technical Deep Dive (Webinar)
Modern challenges need strong, open solutions. If you’re managing networks in data centers, cloud infrastructure, or telecom environments, this must-watch webinar on SONiC (Software for Open Networking in the Cloud) and FD.io (Fast Data Input/Output) is for you.
PANTHEON.tech, Intel Network Builders, and Maciek Konstantynowicz (CSIT Tech Lead & Cisco Distinguished Engineer) teamed up for a webinar to explain these technologies and their importance for the future of networking.
Why SONiC & FD.io are Better Together
Our webinar dives deep into two critical pillars of open networking: SONiC and FD.io.
Key Highlights from the Webinar
The History of SONiC: From its inception at Microsoft in 2016 to its global adoption by hyperscalers, enterprises, and telcos, SONiC has been a game-changer. Learn how organizations like Alibaba saved up to 90% in testing time and Walmart is scaling SONiC to stores and distribution centers.
Real-World Use Cases: Explore how companies like Broadcom transitioned to SONiC for enhanced network visibility and cost efficiency, or how Orange utilized SONiC to modernize their telco infrastructure with open, disaggregated switches.
Cutting-Edge Benchmarks & Methodologies: Get insights into FD.io CSIT benchmarks, including advanced testing methodologies. These innovative approaches redefine performance testing, ensuring reliability for large-scale deployments.
What You’ll Learn
Whether you’re an IT decision-maker, network architect, or developer, this webinar covers the history, real-world use cases, and benchmarking methodologies outlined above in depth.
Watch the Webinar Today
Don’t miss the opportunity to learn about all the above and more, at the link below:
[What Is] Cloud-Native Network Functions
This post is a reference to our sunset project CDNF.io, which was a portfolio of cloud-native network functions. Since this post was fairly popular, we will keep the informative parts of this article online. We are shifting towards consultations and custom CNF development.
If you are interested in custom CNF consultations, feel free to contact us.
CNF (Cloud-native Network Function) is a software implementation of a network function, traditionally performed on a physical device (e.g. IPv4/v6 router, L2 bridge/switch, VPN gateway, firewall), but built and deployed in a cloud-native way.
Why do you need CNFs?
CNFs are a new approach to building complex networking solutions, based on the principles of cloud-native computing and microservices. These bring many benefits, such as:
Lowered costs – cloud-native networking infrastructure does not need to run on specialized hardware anymore. It can run on commodity servers connected in a private cluster, or even in public cloud infrastructures like AWS or Google Cloud. With features like auto-scaling, metered billing, and pay-per-use models, you can completely eliminate sub-optimal physical hardware allocations and costs connected with the maintenance of physical hardware.
Agility – with CNFs, feature upgrades no longer involve hardware replacement. Instead, rolling out a new feature usually requires just implementing a new networking microservice and deploying it into the existing infrastructure. This decreases time to market dramatically – and again lowers the cost of new features.
Elastic scalability – a cloud-native solution can scale at the level of individual microservices (CNFs), which can automatically go live and terminate in a fraction of a second, based on demand for their services. The usage of public clouds allows them to scale almost without limits, with no need for hardware upgrades.
Fault-Tolerance & Resilience – cloud-native architecture patterns are based on loosely coupled microservices, which can greatly reduce the operational and security risk of massive failures. Containers can restart almost instantly, upgrades can be performed on a microservice level, without downtimes, allowing for automated fast rollbacks if needed.
How we build CNFs
Each cloud-native network function consists of three main building blocks: data plane, control plane, and management plane.
For the data plane, we often use FD.io VPP, which is a fast, scalable layer 2-4 network stack that runs in userspace. VPP benefits include high performance, proven technology, modularity, and a rich feature set. If the existing VPP features are not meeting our needs, we often extend the VPP functionality with our own plugins. If the network function that we aim to implement does not fit VPP at all (e.g. is too complex to be implemented on VPP, and/or the top performance is not crucial), we use other open-source projects and tools as well.
The management plane for our CNFs is built on the Ligato.io CNF framework, written in Golang. The API of each CNF is modeled with Google Protocol Buffers, which allows for a unified way of controlling whole CNF deployment via gRPC, Kubernetes CRDs, key-value data stores such as ETCD or Redis, or message brokers such as Apache Kafka.
The control plane is usually implemented within either the data plane or the management plane components.
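As a hedged illustration of the management-plane flow described above, the snippet below uses the python-etcd3 client (our choice for the example) to publish a JSON-encoded interface configuration under a Ligato-style key, which a watching vpp-agent could then apply to VPP. The key layout and field names are illustrative; the real configuration models are defined by Ligato’s protobuf APIs.

```python
# Illustrative sketch: pushing CNF configuration into etcd, where a Ligato-based
# management plane (vpp-agent) watching the key prefix can pick it up and program VPP.
# Key layout and field names are examples only; real models come from Ligato's
# protobuf definitions.
import json
import etcd3

client = etcd3.client(host="127.0.0.1", port=2379)

key = "/vnf-agent/vpp1/config/vpp/v2/interfaces/memif1"   # hypothetical key
value = json.dumps({
    "name": "memif1",
    "type": "MEMIF",
    "enabled": True,
    "ip_addresses": ["192.168.10.1/24"],
})

client.put(key, value)   # the agent watching this prefix applies the change to VPP
```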
How we connect (wire) the CNFs
For CNFs, the standard networking interconnections between containers provided by general-purpose CNI plugins are not sufficient – CNFs usually need to talk to each other and to external networks on different network layers, using a wide range of networking protocols. At the same time, since packet processing is the main goal of CNFs, their demand for fast, stable, and low-latency throughput is much higher.
With VPP, leveraging technologies like DPDK and memory interfaces (“memifs”), we are able to deliver the packets into the CNFs at speeds of several Mpps / Gbps. To accomplish that in a Kubernetes-native way, we use open-source projects such as Network Service Mesh, Contiv-VPP CNI, SR-IOV K8s Device Plugin.
Our CNFs are agnostic to the CNI plugin and container-wiring technology, meaning the same CNFs can be used on top of Network Service Mesh as well as other wiring technologies and CNI plugins.
CNF Designer & Provisioner
Since the configuration of complex interconnections between CNFs can be a cumbersome process involving the steep learning curve of container networking technologies and APIs, we have created a graphical user interface that allows defining a complex CNF deployment with a few clicks. Independently from the selected networking technology (e.g. Network Service Mesh or Contiv-VPP CNI), its CNF provisioning backend automatically deploys the CNFs with all the necessary wiring instructions into your Kubernetes cluster based on your GUI input.
CNF Use-Cases
Although the possibilities of CNF deployments are endless, it may be worth giving you an idea of what can be achieved with them.
Carrier-Grade NAT Solution
This example use-case shows a cloud-native carrier-grade NAT solution, which shifts the NAT function and configuration from the customer premises to the Internet service provider’s cloud infrastructure. This solution can lower the requirements on hardware and software features of CPE devices, as well as simplify the management and configuration of the NAT solution, allow for easy horizontal scaling, upgrades and failovers.
Virtual CPE (Customer Premise Equipment)
In this example use-case, the whole functionality of the customer premises equipment is moved into the service provider’s cloud infrastructure. The only networking equipment deployed at the customer premises can be a cheap, simple L2 switch with no additional features, connected to the ISP’s cloud infrastructure. That requires almost no additional maintenance, even in case of a future upgrade of CPE functionality. The CPE features are built as a chain of CNFs (networking microservices), which, apart from easy management, scaling, and upgrades, also allow for different feature sets per customer, deployed and modified on demand.
The hidden costs of network downtime: Why automation is non-negotiable
/in Hidden /by filip.sterlingA global outage of internet-related services seemed unimaginable – but possible, up until now.
ITIC’s survey data shows that nearly 44% of mid-sized and large enterprises experience at least one significant unplanned outage annually. IDC’s analysis suggests that data center downtimes can average between 1.5 to 4.5 hours per incident.
One outage causing downtime in your data centers can result in an average of $594,000 in damages.
Network automation has undergone a remarkable transformation since the early days of IT infrastructure, where manual configurations were the standard. As networks expanded and became more complex, the limitations of manual management became evident.
The introduction of automation brought significant improvements.
Tools such as SandWork offer sophisticated orchestration & automation capabilities that streamline processes and enable scalable, robust network management – including root-cause analysis.
While root-cause analysis can explain the reason for errors, their source can often be attributed to manual network management. According to research conducted by NetBox Labs, around 57% of network operations are still performed manually. As a result, companies become more vulnerable to human error and inefficient use of time.
Automation has shifted from a luxury to a necessity, providing the backbone for modern network infrastructures and ensuring seamless operations in increasingly complex environments.
Devastating costs
Downtime can impact your business financially through lost data and lost revenue. According to consulting firm Gartner, network downtime costs $5,600 or more per minute – that is, more than $300,000 per hour.
These numbers include lost sales, reduced worker productivity, and damage to the company’s reputation. For a business that relies on constant network availability, even momentary downtime can result in a large loss of cash and operations.
This clear evidence indicates the necessity of network management solutions that decrease downtime and thus ensure business continuity. As networks become increasingly complicated, the expenses associated with downtime are only set to rise; network management solutions like SandWork can reduce such risks.
Mitigate risks by using SandWork today
SandWork is a comprehensive solution designed to minimize network downtime and improve performance through advanced orchestration and automation.
By automating NOS installations, device configurations, and ongoing monitoring, SandWork reduces human error and ensures your network consistently performs at its best. Its greenfield checks and global intent verification proactively identify and address issues before they escalate, reducing potential disruptions.
With lifecycle management features, SandWork keeps all network elements updated for increased reliability. This platform scales network demands automatically while minimizing downtime, helping you increase reliability and significantly reduce operational costs.
Start mitigating risks with SandWork today.
Manual network management – a thing of the past
/in Hidden /by filip.sterlingThe modern data center world is complex and huge. Such complexity and scale need new solutions to efficiently and securely manage dynamic environments. Even the notion of manually managing the network seems like a thing of the past. Investing in senior engineers or manual interventions yields nearly zero ROI.
According to research conducted by NetBox Labs, around 57% of network operations are still performed manually in organizations where network automation projects have been marked as “completed.” Because of this, companies expose themselves to human-error and inefficient time spent.
Introducing SandWork, a modern orchestration, automation and assurance solution of PANTHEON.tech. SandWork is designed to optimize your network operations, delivering peak performance while minimizing operational overhead.
Embracing network orchestration
Manual network management is fraught with challenges that can hinder operational efficiency and scalability.
Worries disappear with SandWork
But how do we resolve this? SandWork addresses these issues by offering a comprehensive suite of features that automate and streamline network management processes.
Adopting SandWork
Boost productivity by letting SandWork take over mundane tasks. Your IT teams will praise you for it, as it frees them up to work on strategic initiatives instead of dull maintenance. When processes are automated, the chances of human error are minimized, which boosts accuracy. The result is a better-organized network architecture that can grow without any increase in labor.
Operational costs will immediately decrease, since you will not need as much manual supervision and intervention. What about improving security? Automation helps achieve a secure network through compliance checks and robust security measures like role-based access control (RBAC).
SandWork features
SandWork is equipped with advanced features that cater to the needs of modern data centers.
Future-proof your network with SandWork
The increasing demands on data centers are creating new challenges for scaling and management. Would you like to continue to rely on manual intervention?
SandWork is a flexible and powerful system that supports the latest trends in network disaggregation and open source network operating system SONiC.
No more manual network management.
With SandWork by PANTHEON.tech, companies can tap into a new way of networking. By automating and streamlining processes, SandWork makes managing the complexities of a modern data center simple and future-ready.
When you adopt SandWork, your network infrastructure is ready for today’s needs and tomorrow’s challenges.
[Case Study] Broadcom’s adoption of SONiC to modernize enterprise data center network design
/in Hidden /by filip.sterlingBroadcom’s use of SONiC switches in their network infrastructure is a great example of how open-source solutions, combined with modern network orchestration tools, can redefine data center operations.
Key takeaways
As Broadcom absorbed VMware’s operations, the company needed to replace a legacy three-tier architecture. The reason was simple – they were looking to expand, scale-up and improve their data center operations. Broadcom was looking for a solution that could scale easily, reduce costs, and increase overall operational effectiveness.
Who is Broadcom?
A global tech leader specializing in semiconductor and infrastructure software solutions. The company invests approximately $5 billion annually in research and development, supporting its extensive operations across 9 data centers. With over 40,000 employees, Broadcom’s innovations power a wide range of applications, from smartphones to data center networking and industrial automation.
The clear choice was SONiC, an open-source, community-driven network operating system. In the end, SONiC allowed Broadcom to achieve the outlined goals, creating an efficient and scalable network. Other sectors can draw inspiration from this example of building a resilient, future-proof infrastructure.
But why SONiC?
Broadcom’s previous network was a conventional 3-tier model, which relied heavily on vendor-specific solutions. This mainly meant high licensing and maintenance costs.
But let’s break down what a 3-tier architecture looks like: an access layer that connects servers, an aggregation (distribution) layer that interconnects access switches, and a core layer that ties everything together.
Broadcom mentioned it was drawn to SONiC for its open-source flexibility, automation potential, and scalability. This radical step let them move away from vendor-locked, pricey platforms and enjoy everything SONiC has to offer.
But what does SONiC offer?
One of SONiC’s greatest appeals for Broadcom was the opportunity to break free from vendor lock-in. Their legacy system required proprietary vendor intervention, which not only came with recurring licensing fees but also limited their ability to adopt alternative, more cost-effective hardware options.
SONiC’s adaptability also allowed Broadcom to integrate and scale its infrastructure quickly at a critical moment – as it acquired VMware’s data centers.
Network automation was a central goal in Broadcom’s migration to SONiC. The previous system relied heavily on manual configuration, which was labor-intensive and prone to errors. With SONiC, Broadcom initially leveraged Ansible scripts to automate configuration tasks.
As they became more advanced, they moved to a bespoke orchestration platform, significantly reducing manual interventions. Automation allowed Broadcom to minimize operational complexity and enabled fast deployment, updates, and streamlined maintenance. By automating repetitive tasks, you free staff to focus on higher-value activities and improve service response times.
Broadcom expanded its data centers drastically. Currently, they operate 9 data centers all over the world, the largest being the Las Vegas DC, with up to 4,500 network devices.
Source: Broadcom OCP SONiC Summit 2024
It became critical to support high-density traffic with a network architecture that could scale dynamically. The legacy system, with its 3-tier design, had limited flexibility, creating bottlenecks as traffic loads increased.
SONiC enabled Broadcom to scale up or down based on demand, with efficient horizontal and vertical expansion options. SONiC’s scalability ensured that networks could expand without extensive re-architecture or increased operational costs.
The migration to SONiC included adopting EVPN VXLAN, which enabled Broadcom to isolate network traffic into distinct virtual segments, enhancing both security and reliability. By implementing these advancements in SONiC, Broadcom contributed back to the community, readying the platform for broader enterprise adoption.
Introducing these technologies boosted the Broadcom team’s confidence in delivering uninterrupted services. Operations remained seamless even during hardware failures or maintenance windows.
Enhanced network security
SONiC provided Broadcom with the ability to secure its network through a decentralized architecture. Instead of a single centralized firewall, which can be a point of failure, Broadcom adopted a distributed firewall strategy. This ensured that traffic across various segments could be monitored and controlled closer to its source. This decentralized approach allows for a more secure network design with fewer vulnerabilities.
Building a future-ready infrastructure
However, this is not the end of Broadcom’s SONiC endeavor.
Broadcom plans to improve the bespoke orchestration system currently used, as well as upgrade to 1.6 Tbps to support its high-speed East-West traffic.
Broadcom’s ongoing efforts to expand SONiC’s footprint into its WAN and campus networks illustrate the potential to create a standardized, end-to-end infrastructure that is automated, flexible, and highly reliable.
When, if not today?
Broadcom’s adoption of SONiC is a great example of how open-source solutions can transform traditional network infrastructure. By introducing a flexible, automated, and resilient network approach, Broadcom has set a new trend for modern data centers, with significant cost savings, scalability, and operational efficiency.
Better Together: SONiC & OpenDaylight Integration
In a world increasingly driven by phenomena such as AI, it is easy to forget the importance of network reliability and throughput. The fundamental need for robust connectivity has always been present and remains a cornerstone of our digital infrastructure.
OpenDaylight’s centralized control allows for dynamic network management and policy enforcement, providing real-time insights and automated responses to network conditions.
SONiC, with its modular & containerized architecture, ensures high-performance packet processing and robust data plane operations. The integration of these two systems enables seamless coordination between management, control and data planes, resulting in improved network performance, reduced latency, and enhanced fault tolerance through proactive management and rapid recovery from issues.
What is SONiC?
SONiC, developed by Microsoft, is an open-source network operating system designed to run on network switches. Its architecture is modular, containerized, and built on a Linux-based foundation, offering a highly flexible environment for network operations. SONiC manages the control and data planes, which are essential for switch operations, and it allows for dynamic network configuration and monitoring through both CLI (Command-Line Interface) and structured APIs such as gNMI and YANG.
What is OpenDaylight?
OpenDaylight, an open-source SDN controller, was launched as part of the Linux Foundation project. Initially, its goal was to promote SDN adoption by providing vendor-agnostic control over networks. Today, OpenDaylight supports a wide range of network protocols, including BGP, NETCONF, and gNMI, offering cloud-native support and integration into multiple LF projects, such as ONAP and OPNFV.
Better Together
The true strength of these platforms lies in their integration. SONiC and OpenDaylight work together to provide a comprehensive solution for network management. SONiC manages the underlying switches and routers, while OpenDaylight oversees higher-level automation and orchestration. Together, they form a cohesive system that can manage diverse, multi-protocol network environments.
Key benefits of this integration include:
Improved network automation
With SONiC providing operational simplicity through structured APIs and OpenDaylight enabling centralized control and policy enforcement, network administrators can automate routine tasks and complex configurations with ease.
Enhanced network reliability
The combination of SONiC’s efficient control plane and OpenDaylight’s centralized orchestration ensures a robust, reliable network infrastructure. By leveraging both platforms, networks are better equipped to handle increased traffic demands, while maintaining high availability.
Vendor-agnostic & flexible
OpenDaylight’s commitment to vendor-neutrality allows for seamless integration across different hardware vendors. SONiC’s modular architecture enables customization based on specific needs, providing flexibility for network operators.
What are the advantages?
One of the key advantages of using OpenDaylight with SONiC is its support for standardized APIs. The model-driven architecture of OpenDaylight, utilizing YANG, ensures consistent configurations across various network devices. This structure significantly reduces errors, promotes vendor independence, and enhances scalability, making it easier for network operators to expand or modify their network infrastructure without vendor lock-in.
OpenDaylight also excels in network orchestration and programmability. Its ability to work with multiple protocols (e.g., NETCONF, BGP, gNMI) means it can manage a wide range of devices within the same environment, enabling greater control and reducing operational overhead.
Model-Driven APIs vs. CLI
One of the ongoing debates in the networking world is whether to rely on model-driven APIs like gNMI/YANG or CLI-based management.
While CLI may provide more granular control for device-specific configurations, model-driven APIs offer significant advantages in terms of automation, error handling, and transactional support.
With APIs, operators can perform multiple changes in a single transaction, ensuring consistency and avoiding configuration errors. In contrast, CLI-based management is prone to inconsistencies, as each device may require different commands and manual error correction.
gNMI & RESTCONF
gNMI integration in SONiC is still a work-in-progress and promises a lot of neat features.
This unified integration with OpenDaylight’s RESTCONF would provide a consistent and standardized interface for interacting with various network devices. This integration would allow administrators to manage heterogeneous networks efficiently, simplifying operations and enabling a streamlined approach to network management.
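As a rough sketch of what this looks like in practice, the snippet below pushes a configuration change to OpenDaylight’s RESTCONF northbound using Python’s requests library. The controller address, credentials, the mounted-device path, and the "example-interfaces" model are assumptions made for illustration only; actual URLs depend on your ODL release and the YANG models of the target device. The point is the pattern: one JSON document applied in a single transactional PUT, which is where model-driven management gains its consistency advantage over per-device CLI.

```python
# Hedged sketch: pushing configuration through OpenDaylight's RESTCONF API.
# Controller address, credentials, resource path, and the YANG module name are
# illustrative; adjust them to your ODL release and the target device's models.
import requests

ODL = "http://localhost:8181/rests/data"     # RESTCONF root in recent ODL releases
AUTH = ("admin", "admin")                    # default credentials, change in production
HEADERS = {"Content-Type": "application/yang-data+json"}

# Hypothetical path to a device mounted via the NETCONF topology
path = ("/network-topology:network-topology/topology=topology-netconf"
        "/node=leaf1/yang-ext:mount/example-interfaces:interfaces")

payload = {
    "example-interfaces:interfaces": {
        "interface": [{"name": "eth0", "enabled": True}]
    }
}

resp = requests.put(ODL + path, json=payload, auth=AUTH, headers=HEADERS, timeout=10)
print(resp.status_code)   # typically 201 Created or 204 No Content on success
```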
Why should I care?
You should care because of real-life use-cases where this integration can be applied:
SONiC and OpenDaylight enable seamless network service abstraction by translating models between RESTCONF and gNMI. This allows administrators to ensure that configuration intents are accurately reflected across different network protocols, simplifying the management of diverse infrastructures.
With the integration of SONiC and OpenDaylight, administrators can leverage a configuration intent datastore. This datastore ensures that configuration changes are made with intent-based execution, reducing the risk of misconfigurations and enabling more reliable operations across network devices.
OpenDaylight’s support for RESTCONF allows for efficient translation to gNMI, creating a consistent method for interacting with different network devices. This unification of interfaces simplifies network management, especially in heterogeneous environments with multiple protocols and vendors.
OpenDaylight’s architecture also allows for the integration of role-based access control. RBAC helps secure access to network devices by ensuring that users are given appropriate permissions based on their roles. This improves network security and reduces the risk of unauthorized changes.
By integrating SONiC’s modular and containerized architecture with OpenDaylight’s orchestration capabilities, networks can adopt a clear separation of concerns. This means that network functions, automation, and control can scale independently, optimizing performance and making future expansions simpler and more efficient.
A SONiC & OpenDaylight integration provides a powerful, flexible, and scalable solution that enhances network reliability, enables automation, and ensures vendor-neutrality. For organizations looking to modernize their network infrastructure, the combination of SONiC and OpenDaylight is a compelling choice.
[OpenDaylight] The Future is Modular
A sustained effort to improve the OpenDaylight codebase has focused on restructuring and modularizing the RESTCONF server architecture. According to Robert Varga, this work started back in 2021.
By modularizing the RESTCONF server architecture, our team was able to achieve a heightened separation of concerns. In simpler terms – greater flexibility and extensibility. For example, unit tests can target specific modules, ensuring each part works as intended.
This shift aims to simplify several parts of the OpenDaylight project and increase flexibility – particularly in how data is managed and how the system interacts with network devices.
State persistence in lighty.io RNC
In particular, future lighty.io RNC instances will benefit from decoupling state persistence from the application container itself. By leveraging external systems such as ONAP CPS, we not only reduce complexity within the RNC container but also allow for more specialized and robust state management solutions.
Statelessness is a crucial property for systems deployed in cloud environments, where resources scale dynamically in response to demand. This would enable, for example, Kubernetes autoscaling of RNC (download the package here) against an instance of ONAP CPS, or other similar services, such as sysrepo.
MD-SAL Independence
The Model-Driven Service Abstraction Layer (MD-SAL) is a core component in OpenDaylight that provides a common data store and messaging infrastructure. Recent changes allow the RESTCONF server to operate independently of MD-SAL, which was not previously possible.
MD-SAL & RESTCONF communication example
This decoupling enables the implementation of a RESTCONF server that can interface directly with various data stores or backend services, without relying on MD-SAL. For example, data from gNMI (gRPC Network Management Interface) devices can be accessed directly through the same RESTCONF server interface.
For users who continue to use MD-SAL, a dedicated integration layer has been preserved and will soon be separated into its own component. This layer handles the specific wiring needed to integrate MD-SAL with the new RESTCONF server architecture. By isolating this functionality, the system remains modular, allowing users to opt-in or -out of using MD-SAL, as needed.
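The following sketch illustrates the opt-in idea in miniature: a RESTCONF-style handler that talks only to a small backend abstraction, so an MD-SAL-backed implementation, a gNMI-backed one, or an in-memory store can be swapped without touching the request path. The class and method names are hypothetical and do not mirror OpenDaylight’s Java APIs.

```python
# Illustrative sketch only: the "pluggable backend" idea behind the modular
# RESTCONF server. Names are hypothetical, not OpenDaylight interfaces.
from abc import ABC, abstractmethod

class DataBackend(ABC):
    """Whatever serves RESTCONF requests: MD-SAL, a gNMI device, an external store."""
    @abstractmethod
    def read(self, path: str) -> dict: ...
    @abstractmethod
    def write(self, path: str, data: dict) -> None: ...

class InMemoryBackend(DataBackend):
    def __init__(self):
        self._store = {}
    def read(self, path: str) -> dict:
        return self._store.get(path, {})
    def write(self, path: str, data: dict) -> None:
        self._store[path] = data

def handle_restconf_get(backend: DataBackend, path: str) -> dict:
    # The server only talks to the abstraction; swapping backends
    # does not change this code path.
    return backend.read(path)

backend = InMemoryBackend()
backend.write("interfaces/interface=Ethernet0", {"mtu": 9100})
print(handle_restconf_get(backend, "interfaces/interface=Ethernet0"))
```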
Separation of Basic RESTCONF Concepts
The Future is Modular
These recent updates to OpenDaylight have significantly improved the RESTCONF server architecture, making it more modular, flexible, and scalable. These changes not only simplify the system but also prepare it for future enhancements and integrations, making it a more robust and versatile platform for network management.
Whether handling configuration, monitoring network state, or integrating with various data sources – the new architecture offers a solid and adaptable foundation for diverse networking needs.
[Demo] SONiC VPP BGP Multipath
Managing large-scale data centers and cloud environments can be incredibly complex, with challenges ranging from traffic congestion to maintaining high availability. The need for efficient, scalable, and reliable solutions is on the mind of most network engineers and architects.
The combination of:
- VPP (Vector Packet Processing)
- SONiC
- BGP ECMP (Equal-Cost Multi-Path)
is a particularly useful and powerful option for optimizing large-scale data center and cloud environments.
Download the demo here:
VPP, a high-performance software packet-processing framework, employs various techniques, including Single Instruction Multiple Data (SIMD) operations, to optimize the processing of packets on general-purpose CPUs. These optimizations enable VPP to manage large volumes of traffic in parallel, significantly enhancing throughput.
Complementing this, SONiC is an open-source network operating system designed for cloud-scale data centers, offering a modular and flexible architecture for deploying a comprehensive suite of network functions across various hardware platforms.
This flexibility allows SONiC to extend closer to compute nodes in data centers, or even to replace vendor hardware with general-purpose “pizza boxes.”
Additionally, BGP ECMP enables the distribution of network traffic across multiple equal-cost paths, ensuring load balancing, redundancy, and efficient bandwidth utilization. Together, these technologies provide scalable, reliable, and efficient solutions for managing complex and demanding network infrastructures.
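The sketch below shows the basic idea behind ECMP load sharing: hash a flow’s 5-tuple and use the result to pick one of several equal-cost next hops, so packets of one flow stick to one path while different flows spread across all of them. Real routers do this in hardware with vendor-specific hash functions; the addresses and hash choice here are illustrative only.

```python
# Minimal sketch of ECMP flow hashing: same flow -> same next hop,
# different flows spread across the equal-cost paths. Illustration only.
import hashlib

NEXT_HOPS = ["10.0.1.1", "10.0.2.1", "10.0.3.1", "10.0.4.1"]  # assumed equal-cost next hops

def pick_next_hop(src_ip, dst_ip, proto, src_port, dst_port):
    flow = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(flow).digest()
    index = int.from_bytes(digest[:4], "big") % len(NEXT_HOPS)
    return NEXT_HOPS[index]

print(pick_next_hop("192.0.2.10", "198.51.100.20", "tcp", 49152, 443))
print(pick_next_hop("192.0.2.11", "198.51.100.20", "tcp", 49153, 443))
```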
SONiC VPP Architecture
The architecture of SONiC-VPP closely mirrors the foundational SONiC architecture. In this setup, the syncd component interfaces with VPP through the shared library libsaivpp.so. SONiC’s architecture allows integration with the FRRouting project, which typically encompasses daemons for various routing protocols, including BGP and OSPF.
VPP, on the other hand, uses the linux-cp plugin, which can create a TAP interface in Linux and copy attributes from the VPP interface, effectively mirroring the VPP hardware interface into the kernel. All synchronization is then managed by the netlink listener, which listens for netlink messages and executes the corresponding events in VPP. Supported events include link, address, and route changes (RTM_*LINK, RTM_*ADDR, RTM_*ROUTE).
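To give a feel for what such a netlink listener does, here is a rough Python sketch using the pyroute2 library (our own choice for illustration; the actual integration is not written in Python). It subscribes to kernel netlink broadcasts and reacts to link, address, and route events, which a real listener would translate into VPP API calls.

```python
# Rough sketch of a netlink listener using pyroute2 (illustrative assumption):
# subscribe to kernel netlink broadcasts and react to link/address/route events.
from pyroute2 import IPRoute

INTERESTING = {"RTM_NEWLINK", "RTM_DELLINK", "RTM_NEWADDR", "RTM_DELADDR",
               "RTM_NEWROUTE", "RTM_DELROUTE"}

with IPRoute() as ipr:
    ipr.bind()                      # subscribe to netlink broadcast groups
    while True:
        for msg in ipr.get():       # blocks until the kernel sends events
            event = msg.get("event")
            if event in INTERESTING:
                # A real listener would translate this into the corresponding
                # VPP API call; here we just print the event and its attributes.
                print(event, dict(msg.get("attrs", [])))
```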
Normally, the linux-cp plugin would suffice for this use-case. However, SONiC uses two approaches:
Both of these approaches ensure that SONiC-VPP can effectively handle complex networking requirements while maintaining synchronization between the VPP data plane and the SONiC control plane. This seamless integration of SONiC and VPP provides a robust and flexible solution for managing advanced routing protocols and network interfaces in large-scale data center environments.
OpenDaylight practical applications
Abstract
The OpenDaylight project is an open-source platform for Software-Defined Networking (SDN) that uses open protocols to provide centralized, programmatic control and network device monitoring. It enables automation of networks of any size and scale. PANTHEON.tech, a top open-source contributor, actively participates in the Linux Foundation Networking community. Our contributions play a crucial role in shaping the OpenDaylight project. In this article, we delve into the insights of our developers, exploring their perspectives on the OpenDaylight environment and its practical applications.
How important is OpenDaylight for your clients?
Our customers operate modern, complex, multi-layered network environments. They face the demanding challenge of integrating many different systems, frameworks, engines, tools, and technologies. The OpenDaylight environment is a key enabler that helps them automate this integration. We use a variety of OpenDaylight APIs to do this.
Which API do you use the most?
The RESTCONF REST APIs provide a convenient way for the customer’s external systems, applications and scripts to access the data in the OpenDaylight datastore or within the mounted devices. REST is also used to allow external applications to trigger various processes/functions within OpenDaylight using RPC calls. OpenAPI has also been introduced to facilitate this type of REST interaction. JMX APIs are another way for external applications to interact with OpenDaylight, although this is mostly used for monitoring, statistics gathering and health checks. Some customers integrate JMX APIs with their own messaging systems to publish alarm notifications to various external nodes.
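As a hedged example of these two interaction styles, the snippet below reads the list of NETCONF-mounted devices from the OpenDaylight datastore and then invokes an RPC over RESTCONF. The controller address and credentials are the common defaults; the RPC module and name are placeholders, not a specific OpenDaylight feature.

```python
# Hedged example: read from the OpenDaylight datastore and invoke an RPC
# over RESTCONF. Address/credentials are assumed defaults; the RPC is a placeholder.
import requests

BASE = "http://localhost:8181/rests"   # assumed RFC 8040 RESTCONF root
AUTH = ("admin", "admin")              # assumed default AAA credentials
HEADERS = {"Accept": "application/json", "Content-Type": "application/json"}

# 1) Read the NETCONF topology (mounted devices) from the operational datastore.
topo = requests.get(
    f"{BASE}/data/network-topology:network-topology/topology=topology-netconf"
    "?content=nonconfig",
    auth=AUTH, headers=HEADERS)
print(topo.status_code, topo.json())

# 2) Trigger a process inside the controller via an RPC (module/name are placeholders).
rpc = requests.post(
    f"{BASE}/operations/example-module:example-rpc",
    auth=AUTH, headers=HEADERS, json={"input": {}})
print(rpc.status_code)
```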
How do you deal with backups and disaster recovery?
OpenDaylight is primarily used as a network device connector and configurator. Thanks to its internal data store (Model-Driven Service Abstraction Layer, MD-SAL), it is also used as a disaster recovery and backup tool – so in cases where some of the devices die and need to be rebooted cleanly, we use OpenDaylight to restore them to their last known state.
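A minimal sketch of that restore workflow, assuming a NETCONF-mounted device named device-1 and an OpenConfig interfaces subtree as the data worth protecting: read the configuration through the mount point, keep a copy, and push it back once the device comes up clean. Paths, names, and credentials are illustrative assumptions.

```python
# Illustrative backup/restore sketch via RESTCONF mount points.
# Device name, subtree, and credentials are assumptions for illustration.
import json
import requests

BASE = "http://localhost:8181/rests/data"
AUTH = ("admin", "admin")
HEADERS = {"Content-Type": "application/yang-data+json"}
MOUNT = (f"{BASE}/network-topology:network-topology/topology=topology-netconf"
         "/node=device-1/yang-ext:mount")
CONFIG_PATH = MOUNT + "/openconfig-interfaces:interfaces"   # assumed subtree to protect

# Back up: read the device's configuration through its mount point.
backup = requests.get(CONFIG_PATH + "?content=config", auth=AUTH, headers=HEADERS).json()
with open("device-1-backup.json", "w") as f:
    json.dump(backup, f)

# Restore: after a clean reboot, push the saved configuration back.
with open("device-1-backup.json") as f:
    saved = json.load(f)
requests.put(CONFIG_PATH, auth=AUTH, headers=HEADERS,
             data=json.dumps(saved)).raise_for_status()
```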
What protocols do you use?
The main protocol used to connect and communicate with devices is NETCONF. Other protocols are also available, but with varying levels of production quality. Connecting and communicating with devices is usually done in two ways:
How do you secure communication?
To secure NETCONF communication, OpenDaylight comes with an AAA (Authentication, Authorization, Accounting) plugin. However, some clients have their own security policies and procedures. For this purpose, the original AAA plugin can be replaced with custom plugins to better meet the client’s requirements.
What are OpenDaylight’s clustering capabilities good for?
OpenDaylight is often used for high availability, fault tolerance and geo-redundancy. This is achieved using either OpenDaylight’s clustering capabilities, Daexim utilities or other hybrid solutions.
Customers can customize their OpenDaylight deployments – the number of OpenDaylight instances (members) that form the cluster, their voting rights and the distribution of data between them. Customers often split data into different shards and then decide which shards to persist and which members to host them on – adding high availability.
Can you cluster multiple OpenDaylight instances?
Some customers prefer to run multiple standalone OpenDaylight instances and connect them into an HA cluster via a higher level orchestrator. This can be particularly useful in cases where load balancing is an important part of their deployment and needs to be carefully managed.
Human Errors in Network Configuration
SONiC: From Hyperscalers to Enterprise
Data Center Network Operations: Day 0, Day 1, Day 2 Config
OpenDaylight 2023: A Year of New Contributors
SandWork Agentless Orchestration with SONiC at the OCP2023
What is VXLAN (Virtual Extensible LAN)?
[What Is] SONiC (Software for Open Networking in the Cloud)
What is Network Address Translation (NAT)?