PANTHEON.tech

[What Are] PortChannels

May 12, 2025/in Blog /by filip.sterling

Network engineers love throwing around terms like LAG, EtherChannel, MC-LAG – and somewhere in the mix, you’ll hear PortChannel. But what exactly is a PortChannel, and where does it fit into modern data center design?

Let’s break it down in a way that makes sense, even if you’re not knee-deep in switch configs every day.

What is a PortChannel?

At its core, a PortChannel is a way to take multiple physical network links between two devices—like two switches or a server and a switch—and make them act as one logical connection.

But why?

  • More bandwidth: You combine the speed of all the links.
  • High availability: If one link fails, the others keep running—your connection doesn’t drop.
  • Load balancing: Traffic can be spread across the links for better performance.

So instead of managing and monitoring four separate cables, your network sees just one, called a PortChannel.
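The "load balancing" bullet above deserves a closer look: a PortChannel does not round-robin individual packets. Instead, switches hash packet header fields (MAC addresses, IPs, L4 ports, depending on configuration) to pick a member link, so all packets of one flow stay on the same link and are never reordered. The following Python sketch illustrates the idea; the hash function and inputs are illustrative, not any vendor's actual algorithm.

```python
import hashlib

def pick_member_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    """Hash a flow's header fields to a member-link index.

    Real switches use hardware hash functions over configurable
    fields; the sketch only needs to be deterministic per flow,
    so a given flow is always pinned to the same link.
    """
    key = f"{src_ip}-{dst_ip}".encode()
    digest = hashlib.sha256(key).digest()
    return digest[0] % num_links

# A 4-link PortChannel: every packet of the same flow uses the same link.
link = pick_member_link("10.0.0.1", "10.0.0.2", 4)
assert link == pick_member_link("10.0.0.1", "10.0.0.2", 4)
print(f"flow pinned to member link {link}")
```

A consequence worth knowing: a single flow can never exceed the speed of one member link; the aggregate bandwidth is only realized across many flows.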

Is this just link aggregation?

In Cisco environments, you'll hear the terms EtherChannel and PortChannel, whereas the vendor-neutral IEEE standard for link aggregation is commonly referred to by its specification, IEEE 802.3ad (which defined the Link Aggregation Control Protocol, LACP, later moved to IEEE 802.1AX).

Besides these common terms, other vendors have their own nomenclature for link aggregation, although they all use the same concept: combining multiple physical links into one logical link for greater bandwidth and reliability.

| Vendor / Platform | Terminology for aggregated links | Notes |
| --- | --- | --- |
| Cisco (Catalyst/Nexus) | EtherChannel, Port-Channel interface (logical PortChannel) | Cisco-specific names; uses PAgP or LACP |
| Juniper (Junos) | Aggregated Ethernet (AE) interface (e.g. ae0) | Standard LACP or static configuration |
| Arista (EOS) | Port-Channel (CLI similar to Cisco) | Supports LACP or static (no PAgP) |
| HP (Aruba ProCurve) | Trunk (link aggregation group) | Static or LACP; "Trunk" is HP's term in this context |
| Huawei | Eth-Trunk | Supports LACP or static; Huawei's name for a LAG |
| Linux / Windows | Bond, Team (NIC teaming) | Usually LACP (802.3ad mode) or static |
| Generic / Standard | LAG (Link Aggregation Group), LACP | Vendor-neutral terms (IEEE 802.3ad/802.1AX) |

PortChannels in real-life scenarios

Even in modern networks, PortChannels are everywhere:

  • Between servers and switches for higher throughput and redundancy
  • Between switches to prevent bottlenecks and ensure link failure doesn’t bring down the network
  • In storage networks, where consistent bandwidth and low latency are critical
  • In legacy 3-tier data centers, often used to connect the access and aggregation layers

They’re especially useful in setups that still rely on Layer 2 connectivity and haven’t fully transitioned to routed fabrics or overlays like EVPN-VXLAN.

Do PortChannels matter?

PortChannels might not make headlines like EVPN or SONiC, but they’re part of the backbone of real-world networks. They’re the kind of technology that just works and often stays in place, even as the rest of the architecture evolves.

At PANTHEON.tech, we see PortChannels in everything – from enterprise racks to edge deployments – and they’re often a great starting point for making networks more robust without adding unnecessary complexity.

If you’re rethinking your network architecture or wondering whether it’s time to move beyond PortChannels, we’re always happy to talk strategy.


Leave us your feedback on this post!

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


[What Is] VLAN & VXLAN

April 2, 2025/in Blog /by filip.sterling

Let’s start with an analogy – a busy airport. Thousands of passengers, dozens of terminals, countless gates. Now imagine trying to direct all that traffic – keeping passengers moving smoothly, without ending up at the wrong destination. 

That’s what modern networks look like today: crowded, fast-paced, and constantly growing.

To manage this digital chaos, network engineers rely on segmentation – ways to divide and organize traffic. Two technologies that make this possible in the world of networks are VLANs and VXLANs. Imagine them as the traffic controllers of the network world, orchestrating who goes where and how data travels from point A to point B.

While VLANs have been around for decades, the demands of cloud computing, virtualization, and multi-tenant environments have exposed their limits. Enter VXLANs – built for scale, flexibility, and the demands of today’s data centers.

Curious how VXLANs are implemented in open-source solutions like SONiC or FD.io? Reach out to us – we’ll be happy to explore your use case.

Both VLANs and VXLANs are often misunderstood or oversimplified. Let’s break them down, compare them, and look at where each one fits in real-world scenarios.

What is a VLAN?

A Virtual Local Area Network (VLAN) is a method of logically separating network traffic, even when devices sit on the same physical switch or infrastructure.

A VLAN assigns a group of ports or devices to their own isolated group – no communication with the others unless you explicitly allow it. This is incredibly useful in office or enterprise environments, where different departments or tenants should not share broadcast domains. VLANs operate at Layer 2 (the Data Link Layer) of the OSI model.

Pros:

  • Reduces broadcast traffic
  • Enhances security via segmentation
  • Easy to set up on managed switches

Cons:

  • VLAN IDs are limited to 4096, which may not be enough for large-scale multi-tenant environments.
  • VLANs are typically limited to a single Layer 2 domain, unless you involve complex bridging.

Trying to scale Layer 2 across a large network leads to more broadcast traffic, an increased risk of loops, and the phenomenon known as spanning tree hell. Network teams end up spending more time troubleshooting than scaling and building.
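The 4096 ceiling mentioned above comes straight from the 802.1Q tag format: the VLAN ID is a 12-bit field inside the 4-byte tag inserted into each Ethernet frame. A short Python sketch of the tag layout shows why the limit is exactly 2^12:

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def dot1q_tag(vlan_id: int, priority: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag inserted into an Ethernet frame.

    The tag is TPID (16 bits) followed by PCP (3 bits), DEI (1 bit),
    and the VLAN ID (12 bits) - hence at most 2**12 = 4096 IDs.
    """
    assert 0 <= vlan_id < 2**12, "VLAN ID is a 12-bit field"
    tci = (priority << 13) | vlan_id
    return struct.pack("!HH", TPID, tci)

tag = dot1q_tag(100)
print(len(tag))   # 4
print(2**12)      # 4096 - the hard VLAN ID ceiling
```

(In practice, IDs 0 and 4095 are reserved, so the usable count is slightly lower still.)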

What is a VXLAN?

Virtual Extensible LAN (VXLAN) extends the concept of VLANs by allowing L2 networks to be overlaid on top of an L3 infrastructure, using tunneling.

VXLAN encapsulates Ethernet frames inside UDP packets, allowing them to traverse IP networks. It’s designed to address the scalability limitations of VLANs.

Simply put, VXLAN creates a virtual tunnel that lets data from isolated L2 groups travel across any IP (L3) network and extend its reach. In other words, VXLAN operates at Layer 2 over Layer 3, via encapsulation.
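The encapsulation step can be made concrete with a short sketch. Per RFC 7348, the VXLAN header is 8 bytes, with the 24-bit VNI occupying bytes 4-6; it is prepended to the original Ethernet frame, and the result travels as the payload of a UDP packet (destination port 4789). The frame bytes below are dummy placeholders:

```python
import struct

VXLAN_FLAGS = 0x08  # "I" flag set: a valid VNI is present (RFC 7348)

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header to an Ethernet frame.

    Header layout: 1 byte flags, 3 reserved bytes, 3-byte VNI,
    1 reserved byte. The result becomes the payload of a UDP
    packet on the L3 underlay.
    """
    assert 0 <= vni < 2**24, "VNI is a 24-bit field"
    header = struct.pack("!B3s3sB", VXLAN_FLAGS, b"\x00" * 3,
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

frame = b"\xff" * 14 + b"payload"   # dummy Ethernet frame
packet = vxlan_encapsulate(frame, vni=5000)
print(len(packet) - len(frame))     # 8 - the VXLAN header overhead
print(2**24)                        # 16777216 possible VNIs
```

The 24-bit VNI is also where the "16 million segments" figure comes from: 2^24 = 16,777,216, versus 2^12 = 4096 for VLANs.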

VXLAN use cases include data center interconnects, cloud-scale networks, and multi-tenant environments.

Pros:

  • VXLAN Network Identifier (VNI): Instead of 4096 IDs, VXLAN supports up to 16 million (2^24) virtual networks
  • Works across distributed networks and hybrid cloud
  • Enables separation of tenant traffic across IP fabrics

Cons:

  • Requires additional configuration and often hardware/software support (e.g., VTEPs – VXLAN Tunnel Endpoints)
  • Higher complexity, compared to VLANs

Here at PANTHEON.tech, we love to support and empower new community efforts in networking, like SONiC. Follow our social media to find out where you can catch us for live demos (SONiC + VPP in Action: VXLAN & BGP EVPN in Containerized Networks).

VLAN vs. VXLAN, side-by-side

| Feature | VLAN | VXLAN |
| --- | --- | --- |
| Protocol layer | Layer 2 | Layer 2 over Layer 3 |
| ID range | 12-bit (4096 VLANs) | 24-bit (16 million VNIs) |
| Scalability | Limited | Very high |
| Encapsulation | None (native Ethernet) | UDP-based encapsulation |
| Use case | Office LANs, small networks | Cloud DCs, multi-tenant networks |
| Interoperability | Basic switching | Requires VTEPs and IP fabric |

Real-World Usage

VLAN: Enterprises and campus networks

Most enterprise networks still rely on VLANs for segmenting traffic between departments. They’re easy to deploy and maintain. A simple Layer 3 switch and managed VLAN setup keeps traffic segmented and secure.

VXLAN: Data centers and cloud environments

VXLANs excel in modern data centers, especially when combined with EVPN as the control plane. VXLANs allow virtual machines or containers to move across physical servers and even geographic locations while maintaining the same network identity.

The choice is yours

VLANs are here to stay. They’re still highly effective for straightforward segmentation. But for modern, cloud-native, and large-scale networks, VXLAN is becoming the new standard. 

Understanding the difference and knowing when to use each is a must for any network engineer building future-proof infrastructure.




[What Is] Whitebox Networking?

March 24, 2025/in Blog /by filip.sterling

A deep dive into open-source & SONiC Solutions

One of the many challenges in modern networking is the big decision between vendor-locked and open-source solutions.

While entry prices for vendor-locked solutions may seem reasonable, they can become unbearable in the long run and can block your enterprise from scaling effectively or experimenting with different brands.

Open-source solutions are often community-driven and offer more flexibility in terms of hardware support and functionality. However, they depend on strong leadership and continued community interest in their future.

In recent years, there has been a growing trend of abandoning traditional, vendor-locked solutions in favor of more flexible, scalable, and cost-efficient architectures.

At the heart of this transformation is whitebox networking, an approach that separates network hardware from the software that powers it. This enables organizations to deploy open-source solutions and customize their networking stack to suit their needs, leading to increased efficiency, lower costs, and greater innovation.

What is whitebox networking?

Whitebox networking refers to using generic, commodity hardware—often referred to as bare-metal or whitebox switches—instead of proprietary, vendor-specific devices. 

Unlike traditional networking, where companies purchase hardware bundled with pre-installed software from a single vendor, whitebox networking allows businesses to choose their preferred network operating system.

A popular option, which we at PANTHEON.tech are wholeheartedly supportive of, is SONiC – Software for Open Networking in the Cloud.

This separation of hardware and software mainly provides organizations with more control over their network infrastructure. 

It also fosters a more competitive ecosystem, as companies are no longer locked into a single vendor for everything – hardware choice, support, updates, and new features.

With open-source network operating system (NOS) options gaining popularity, businesses can now customize, automate, and orchestrate their networks – for example, with SandWork by PANTHEON.tech.

Behind the popularity of Whitebox Networking

One of the biggest motivations behind the increased adoption of whitebox networking is cost reduction. 

Traditional networking equipment is expensive due to vendor markups, licensing fees, and maintenance contracts. Whitebox switches, on the other hand, are built using standardized, off-the-shelf components, significantly reducing hardware costs. 

Additionally, you get the freedom to choose from a wider range of vendors and their hardware.

Organizations can then install a software-defined networking (SDN) platform or an open-source NOS to manage their network efficiently.

Traditionally, vendors offer hardware with pre-installed, proprietary software and operating systems – which is what we refer to as vendor lock-in.

Beyond cost savings, scalability and flexibility are major advantages. Large-scale data centers, telecom providers, and cloud service operators need infrastructure that can grow dynamically. Whitebox networking enables this by allowing enterprises to deploy customized solutions that integrate seamlessly with automation tools, network monitoring systems, and cloud-native architectures.

Speaking of customized solutions – the most important factor is the rise in popularity of open-source solutions, like SONiC. 

Developed by Microsoft and later open-sourced, SONiC has become one of the most widely adopted open-source network operating systems for large-scale data center deployments, thanks to:

  • a modular architecture,
  • support for advanced networking protocols,
  • robust automation capabilities.

Even if all of this seems overwhelming at first, it is well worth the effort to dive into the open world of SONiC. We can boldly claim that SONiC has made whitebox networking more accessible and manageable than ever before.

How SONiC shapes the future of whitebox networking

Particularly in large-scale cloud and enterprise environments, SONiC is like a dream come true for complex deployments.

Built on Linux, SONiC offers high programmability, multi-vendor support, and deep integration with SDN frameworks. 

One of SONiC’s biggest advantages is its hardware abstraction. Software is traditionally tied to specific hardware – a program written for one type of device may not work on another. Hardware abstraction removes this limitation by creating a standardized way for software to interact with hardware.

Since SONiC supports multiple hardware platforms, organizations using whitebox switches can switch vendors without changing their entire network stack. This eliminates vendor lock-in, making it easier to adopt more fitting solutions for different network components.

Furthermore, SONiC’s containerized architecture enables modular updates and faster development cycles. Organizations can choose which networking features they want to enable or disable, providing great customization compared to traditional monolithic network operating systems.

A worthwhile challenge

Despite its advantages, whitebox networking comes with operational challenges that organizations should consider. 

Since hardware and software are decoupled, businesses must ensure that their chosen NOS is fully compatible with their whitebox switches. Unlike traditional, ready-made solutions, where vendors provide end-to-end support, whitebox deployments require a higher level of technical expertise.

Fortunately, vendors that sell SONiC-enabled hardware offer a pre-installed factory version of SONiC, along with various support options.

Do you find the idea of SONiC great, but don’t know where to start? Make sure to contact us – we can help you get started with SONiC and move to our orchestration & automation solution – SandWork!

While whitebox networking provides greater customization and flexibility, it may also require a shift in IT operations. Enterprises accustomed to vendor-managed solutions may need to invest in training their network engineers on open-source NOS platforms, automation tools, and new troubleshooting techniques.

The Future of Whitebox Networking

As data centers and the demand for their performance grow larger and more complex, whitebox networking is destined to become the de facto standard for scalable, cost-effective, and programmable network infrastructure. 

With SONiC driving innovation, companies now have access to a vendor-neutral, software-driven networking model that empowers them to build more efficient, customizable networks.

As adoption increases, expect to see greater interoperability between whitebox switches and open-source NOS, improved automation capabilities, and wider support from major hardware vendors. 

For enterprises looking to modernize their network infrastructure, reduce costs, and eliminate vendor lock-in, whitebox networking with open-source NOS like SONiC represents a compelling, future-proof solution.




[What Is] BGP EVPN?

March 12, 2025/in Blog /by PANTHEON.tech

Businesses and service providers rely on networks that need to be fast, scalable, and resilient.

However, as networks grow, they face all kinds of challenges – from managing multi-location connectivity to ensuring efficient data flow and maintaining security and isolation.

Unfortunately, traditional networking models struggle to keep up with the demands of cloud computing, large-scale data centers, and modern applications.

This is where BGP-EVPN comes to the rescue: a solution designed to streamline network operations, enhance scalability, and optimize traffic flow.

Whether you’re an IT manager, a network engineer, or simply someone curious about how modern networks stay flexible and reliable, this blog will walk you through:

  • How BGP-EVPN works
  • Why BGP-EVPN matters
  • How BGP-EVPN integrates with VXLAN

How does BGP-EVPN work?

Modern data centers and enterprise networks need scalable, efficient Layer 2 and Layer 3 connectivity across multiple sites.

            Internet (BGP)
                  |
   ---------------------------------
   |                                |
DC1 Spine Router            DC2 Spine Router
   |                                |
   |----------BGP-EVPN Peering------|
   |                                |
DC1 Leaf Switch             DC2 Leaf Switch
   |                                |
DC1 Server                    DC2 Server

EVPN’s L2 services extend traditional Layer 2 Ethernet connectivity across large-scale networks using VXLAN tunneling. This enables workload mobility and multi-tenancy across data centers.

L3 in EVPN facilitates routing and segmentation across different networks, ensuring scalability and optimized traffic flow without excessive broadcasts.

Border Gateway Protocol – Ethernet VPN solves this by providing a control plane for multi-tenancy, workload mobility, and optimized traffic flow. Ideal for cloud providers, SDN architectures, and large-scale deployments, BGP EVPN integrates with VXLAN to extend L2 over L3, reducing network complexity and improving automation.

BGP EVPN is an extension of the Border Gateway Protocol (BGP) that leverages Multiprotocol BGP (MP-BGP) to distribute endpoint reachability information, providing an efficient and scalable Ethernet-based VPN solution.

Widely adopted in data centers and service provider environments, BGP EVPN automates the creation of VXLAN tunnels, addressing the need for flexible, multi-tenant connectivity and easy workload mobility.

What is VXLAN?

Virtual Extensible LAN, specified in RFC 7348, is an encapsulation protocol that carries L2 frames over an L3 network, designed to address the scaling limitations of traditional VLANs in modern data centers.

The issue with classic VLANs: while effective for segmenting networks, they provide only a limited number of identifiers (4,096) and struggle to span L2 domains over larger L3 networks.

VXLAN resolves these challenges by extending L2 connectivity over a L3 underlay network, offering scalability, flexibility, and improved performance for virtualized environments.

Why does VXLAN matter in data centers?

Applications often depend on microservices – an architectural style that breaks applications into independent, modular services that interact dynamically.

These microservices are typically distributed across multiple servers, racks, or data centers, so seamless communication between them is essential. Furthermore, many microservices rely on Layer 2 connectivity for efficient communication and smooth workload mobility.

VXLAN addresses this need by encapsulating Ethernet frames within UDP packets, allowing them to travel across a Layer 3 network while maintaining Layer 2 functionality.

By enabling Layer 2 connectivity over a Layer 3 infrastructure, VXLAN not only ensures efficient microservice communication but also lays the foundation for network overlays. These overlays provide scalability, segmentation, and flexibility, making them essential for modern data centers and cloud environments.


A quick detour into network overlays: network overlays are created by encapsulating traffic in a tunneling protocol. Tunneling essentially wraps one network protocol inside another, creating a “tunnel” for the original data. Here, VXLAN (Virtual Extensible LAN) encapsulates L2 Ethernet frames into L3 UDP packets. This encapsulation enables L2 networks, such as VLANs or subnets, to extend across an L3 infrastructure, like an IP-based data center or WAN.


VXLAN further relies on VXLAN Tunnel Endpoints (VTEPs), located in servers or network switches. A VTEP is a network component (a physical switch, virtual switch, or software-based node) that processes VXLAN traffic by adding and removing the encapsulation. This allows devices in separate L2 networks to communicate seamlessly over an L3 infrastructure and ensures that microservices remain interconnected, regardless of their physical location.

All in all, VXLAN is considered essential for microservices and distributed systems because it:

  • supports protocols like ARP and multicast (commonly used for service discovery and communication)
  • provides scalability, with support for over 16 million unique identifiers (VNIs)
  • provides network segmentation for isolating tenant traffic
  • increases operational efficiency by extending L2 networks over L3 infrastructures

By ensuring smooth communication between distributed services, VXLAN offers the flexibility & scalability that modern applications require.

What is MP-BGP?

In old-school networking, BGP-4 peers exchanged routing information through Update messages.

These messages inform about reachable routes, sharing the same path attributes, with the routing data carried in the Network Layer Reachability Information field.

The issue: the scope of BGP-4 was limited to handling only IPv4 unicast routing information.

To meet the increasing demand for diverse network layer protocols, such as IPv6 & multicast, the Multiprotocol Border Gateway Protocol was developed. MP-BGP is an extension of traditional BGP, enabling it to support multiple address families, including IPv6, multicast, and EVPN. 

Address families refer to different network protocol types or services that MP-BGP can route, providing flexibility to accommodate various networking needs beyond traditional IPv4 routing. 

The protocol achieves this by introducing new formats for Network Layer Reachability Information (NLRI).
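Concretely, each address family is identified on the wire by an (AFI, SAFI) pair carried alongside the NLRI in MP-BGP’s MP_REACH_NLRI attribute. The sketch below maps a few well-known pairs from the IANA registries (EVPN, for example, is AFI 25 / SAFI 70); it is a toy lookup for illustration, not a BGP implementation:

```python
# MP-BGP identifies what a route describes via (AFI, SAFI) pairs,
# rather than assuming IPv4 unicast as classic BGP-4 did.
ADDRESS_FAMILIES = {
    (1, 1):   "IPv4 unicast (classic BGP-4)",
    (2, 1):   "IPv6 unicast",
    (1, 2):   "IPv4 multicast",
    (25, 70): "L2VPN EVPN",
}

def describe(afi: int, safi: int) -> str:
    """Look up a human-readable name for an (AFI, SAFI) pair."""
    return ADDRESS_FAMILIES.get((afi, safi), "unknown address family")

print(describe(1, 1))     # IPv4 unicast (classic BGP-4)
print(describe(25, 70))   # L2VPN EVPN
```

Two peers negotiate which of these families they will exchange during session setup, which is how one BGP session can carry EVPN routes alongside ordinary IP routes.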

MP-BGP in data centers

MP-BGP became an important component in modern data centers due to its ability to scale and efficiently distribute routing information. When integrated with VXLAN & EVPN, MP-BGP serves as the control plane for distributing L2 and L3 reachability information.

What makes MP-BGP ideal for data centers is:

  • Protocol versatility, due to its support for multiple address families, including EVPN
  • Scalability, for efficient advertisement of MAC and IP addresses across large networks
  • Policy control, for detailed control over routing policies and optimal traffic flows

MP-BGP alone, however, fell short of providing the flexibility and efficiency needed for L2 and L3 VPN services. To address this gap (enhanced network segmentation, optimized traffic flow, and improved redundancy), EVPN was developed and came to the rescue.

   BGP (Control Plane) - backbone protocol that distributes routing information across networks
          ↓
   MP-BGP (Multi-Protocol) - supports multiple address families (including EVPN)
          ↓
   EVPN (Ethernet VPN) - multi-tenancy, workload mobility, and optimized MAC/IP routing
          ↓
   VXLAN (Encapsulation) - L2 traffic into L3 packets, enabling data center overlays
          ↓
Underlay Network (IP Fabric) - physical infrastructure that supports VXLAN tunnels

What is EVPN?

The original VXLAN specification lacked a control plane, requiring manual configuration of VXLAN tunnels and relying on flood-and-learn mechanisms to discover MAC addresses.

The problem: this directly caused significant overhead in large-scale networks and complicated scaling. 

To address these issues, the Ethernet Virtual Private Network (EVPN) was introduced as a standards-based control plane for VXLAN.

EVPN leverages MP-BGP to distribute MAC and IP address information, providing a more scalable and efficient solution for large deployments.
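The contrast between flood-and-learn and an EVPN control plane can be sketched with a toy MAC table: once a Type-2 (MAC/IP) route has been received, the VTEP forwards known unicast straight to the owning endpoint and never needs to flood for that address. The class and names below are illustrative, not a real EVPN implementation:

```python
# Toy model: EVPN-style MAC learning via route advertisements
# instead of data-plane flooding.
class EvpnVtep:
    def __init__(self, name: str):
        self.name = name
        self.mac_table: dict[str, str] = {}  # MAC -> remote VTEP IP

    def receive_type2_route(self, mac: str, vtep_ip: str) -> None:
        """An EVPN Type-2 (MAC/IP) route installs the binding directly."""
        self.mac_table[mac] = vtep_ip

    def forward(self, dst_mac: str) -> str:
        # Known MAC: unicast straight to the owning VTEP, no flooding.
        if dst_mac in self.mac_table:
            return f"tunnel to {self.mac_table[dst_mac]}"
        # Without a control-plane entry we are back to flood-and-learn.
        return "flood (unknown unicast)"

vtep = EvpnVtep("leaf1")
vtep.receive_type2_route("aa:bb:cc:00:00:01", "192.0.2.10")
print(vtep.forward("aa:bb:cc:00:00:01"))   # tunnel to 192.0.2.10
print(vtep.forward("aa:bb:cc:00:00:99"))   # flood (unknown unicast)
```

The key point the sketch captures: with EVPN, the MAC-to-VTEP binding arrives before any data packet is sent, so the flooding branch becomes the exception rather than the rule.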

How does MP-BGP EVPN help VXLAN?

Because MP-BGP EVPN serves as the control plane for VXLAN, it enables automated and efficient learning of MAC and IP addresses. Its key benefits include:

  • Automated VTEP discovery, where MP-BGP EVPN allows VTEPs to automatically discover each other and establish VXLAN tunnels.
  • Efficient routing, since instead of flooding the network to discover MAC addresses, MP-BGP EVPN advertises this information through BGP updates.
  • Integrated L2 and L3 connectivity, because EVPN supports both MAC address learning and IP routing, providing seamless integration of Layer 2 and Layer 3 services.
  • ARP suppression, which reduces broadcast traffic by advertising ARP information through the control plane.
  • Scalability & flexibility: BGP’s proven scalability ensures that MP-BGP EVPN can support large, multi-tenant data center environments.

BGP-EVPN has become the foundation of modern data center and WAN architectures, providing a scalable, efficient, and flexible solution for Layer 2 and Layer 3 VPNs.

By using MP-BGP as the control plane, BGP-EVPN improves network efficiency by reducing broadcast traffic, optimizing routing, and enabling seamless multi-tenancy. All of these are essential for today’s cloud-centric infrastructures.

Data centers, SONiC & orchestration

With the growing adoption of SONiC, open networking, and multi-cloud environments, the role of BGP-EVPN will continue to expand, offering a standardized approach to network segmentation, automation, and intent-driven operations.

However, managing large-scale BGP-EVPN networks is complex, requiring advanced automation to streamline provisioning, ensure real-time validation, and maintain operational consistency.

Built for modern network fabrics, SandWork automates BGP-EVPN overlays, providing intent-based orchestration, real-time reconciliation, and seamless fabric lifecycle management.

By integrating VXLAN provisioning, automated validation, and network-wide configuration changes, SandWork empowers enterprises, service providers, and hyperscalers to operate resilient, scalable, and future-ready networks with minimal manual effort.


FAQ

1. What problem does VXLAN solve?

VXLAN overcomes the scalability limitations of VLANs by enabling over 16 million unique segments. It also extends Layer 2 networks over a Layer 3 underlay, supporting geographically distributed workloads.

2. How does MP-BGP EVPN enhance VXLAN?

MP-BGP EVPN provides a control plane for VXLAN, enabling dynamic and efficient learning of MAC and IP addresses. This replaces the flood-and-learn mechanism, reducing overhead and enhancing scalability.

3. Why is VXLAN-EVPN important for microservices?

Microservices often require Layer 2 connectivity for communication and service discovery. VXLAN-EVPN provides seamless Layer 2 overlays over Layer 3 networks, enabling efficient communication between distributed microservices.

4. How does PANTHEON.tech support VXLAN-EVPN deployments?

PANTHEON.tech offers solutions like SandWork, which simplifies the orchestration and management of VXLAN-EVPN networks, and LightSpeed, which validates the performance of these networks.

5. Is VXLAN-EVPN suitable for small enterprises?

While VXLAN-EVPN is often associated with large-scale data centers, its benefits—scalability, agility, and security—make it valuable for enterprises of all sizes. Small enterprises adopting cloud-native architectures can particularly benefit from its flexibility.


Related Products from PANTHEON.tech

SandWork: A network orchestration platform designed for managing complex data center environments, including VXLAN and MP-BGP EVPN deployments. It simplifies the automation and management of large-scale data center networks.

Custom Solutions: Tailored networking solutions to address specific enterprise needs, ensuring optimal performance and scalability.



(Updated 3/2025)


SONiC & FD.io: Exploration and Technical Deep Dive (Webinar)

February 3, 2025/in Blog /by filip.sterling

Modern challenges need strong, open solutions. If you’re managing networks in data centers, cloud infrastructure, or telecom environments, this must-watch webinar on SONiC (Software for Open Networking in the Cloud) and FD.io (Fast Data Input/Output) is for you.

PANTHEON.tech, Intel Network Builders, and Maciek Konstantynowicz (CSIT Tech Lead & Cisco Distinguished Engineer) teamed up for a webinar to explain these technologies and their importance for the future of networking.

Why SONiC & FD.io are Better Together

Our webinar dives deep into two critical pillars of open networking:

  1. SONiC – An open-source, community-driven network operating system, adopted by hyperscalers, enterprises, and telcos. From hyperscaler pioneers like Microsoft and Alibaba to enterprise giants like Walmart and Broadcom, SONiC has reshaped network scalability, hardware agnosticism, and operational efficiency.
  2. FD.io – the driving force behind efficient, high-performance network functions. With benchmarks showcasing terabit throughput and real-world deployments on commercial off-the-shelf hardware, FD.io is leading the charge in software-defined networking.

Key Highlights from the Webinar

The History of SONiC: From its inception at Microsoft in 2016 to its global adoption by hyperscalers, enterprises, and telcos, SONiC has been a game-changer. Learn how organizations like Alibaba saved up to 90% in testing time and Walmart is scaling SONiC to stores and distribution centers.

Real-World Use Cases: Explore how companies like Broadcom transitioned to SONiC for enhanced network visibility and cost efficiency, or how Orange utilized SONiC to modernize their telco infrastructure with open, disaggregated switches.

Cutting-Edge Benchmarks & Methodologies: Get insights into FD.io CSIT benchmarks, including advanced testing methodologies. These innovative approaches redefine performance testing, ensuring reliability for large-scale deployments.

What You’ll Learn

Whether you’re an IT decision-maker, network architect, or developer, this webinar covers:

  • The architecture and scalability of SONiC
  • How FD.io accelerates software networking innovation
  • How you can benefit from SONiC-VPP

Watch the Webinar Today

Don’t miss the opportunity to learn about all the above and more, at the link below:

Watch our webinar for free (Intel Network Builders)

[What Is] Cloud-Native Network Functions

December 3, 2024/in Blog /by filip.sterling

This post is a reference to our sunset project CDNF.io, which was a portfolio of cloud-native network functions. Since this post was fairly popular, we will keep the informative parts of this article online. We are shifting towards consultations and custom CNF development.

If you are interested in custom CNF consultations, feel free to contact us.


CNF (Cloud-native Network Function) is a software implementation of a network function, traditionally performed on a physical device (e.g. IPv4/v6 router, L2 bridge/switch, VPN gateway, firewall), but built and deployed in a cloud-native way.

Why do you need CNFs?

CNFs are a new approach to building complex networking solutions, based on the principles of cloud-native computing and microservices. These bring many benefits, such as:

Lowered costs – cloud-native networking infrastructure does not need to run on specialized hardware anymore. It can run on commodity servers connected in a private cluster, or even in public cloud infrastructures like AWS or Google Cloud. With features like auto-scaling, metered billing, and pay-per-use models, you can completely eliminate sub-optimal physical hardware allocations and costs connected with the maintenance of physical hardware.

Agility – with CNFs, feature upgrades no longer involve hardware replacement. Instead, rolling out a new feature usually requires only implementing a new networking microservice and deploying it into the existing infrastructure. This dramatically decreases time to market – and further lowers the cost of new features.

Elastic scalability – a cloud-native solution can scale at the level of individual microservices (CNFs), which can automatically go live and terminate in a fraction of a second, based on demand for their services. Using public clouds allows them to scale almost without limits, with no need for hardware upgrades.

Fault-Tolerance & Resilience – cloud-native architecture patterns are based on loosely coupled microservices, which can greatly reduce the operational and security risk of massive failures. Containers can restart almost instantly, and upgrades can be performed at the microservice level without downtime, allowing for fast, automated rollbacks if needed.

How we build CNFs

Each cloud-native network function consists of three main building blocks: data plane, control plane, and management plane.

For the data plane, we often use FD.io VPP, which is a fast, scalable layer 2-4 network stack that runs in userspace. VPP benefits include high performance, proven technology, modularity, and a rich feature set. If the existing VPP features are not meeting our needs, we often extend the VPP functionality with our own plugins. If the network function that we aim to implement does not fit VPP at all (e.g. is too complex to be implemented on VPP, and/or the top performance is not crucial), we use other open-source projects and tools as well.

The management plane for our CNFs is built on the Ligato.io CNF framework, written in Golang. The API of each CNF is modeled with Google Protocol Buffers, which allows for a unified way of controlling whole CNF deployment via gRPC, Kubernetes CRDs, key-value data stores such as ETCD or Redis, or message brokers such as Apache Kafka.
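As an illustration of this key-value approach, here is a minimal Python sketch of how a CNF interface configuration could be serialized and keyed for a store like etcd. The key layout and field names are simplified stand-ins for illustration, not the actual Ligato schema:

```python
import json

def config_key(agent: str, model: str, name: str) -> str:
    """Build a key under which a config item is stored in a key-value
    store such as etcd (the layout here is illustrative only)."""
    return f"/vnf-agent/{agent}/config/{model}/{name}"

def serialize_interface(name: str, ip: str, enabled: bool = True) -> str:
    """Serialize an interface config (modeled loosely after a protobuf
    message) into the JSON form a KV data store would hold."""
    return json.dumps({"name": name, "ip_addresses": [ip], "enabled": enabled})

# A controller watching this key prefix would pick up the change and
# reconfigure the CNF's data plane accordingly.
key = config_key("cnf1", "vpp.interfaces", "memif0")
value = serialize_interface("memif0", "192.168.1.1/24")
```

The same modeled configuration can then be delivered over gRPC, Kubernetes CRDs, or a message broker, since the model, not the transport, defines the API.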

The control plane is usually part of either the data-plane or the management-plane components.

How we connect (wire) the CNFs

For CNFs, the standard networking interconnections between containers provided by general-purpose CNI plugins are not sufficient – CNFs usually need to talk to each other and to external networks on different network layers, using a wide range of networking protocols. At the same time, since packet processing is the main job of a CNF, the demand for fast, stable, and low-latency throughput is much higher.

With VPP, leveraging technologies like DPDK and shared-memory interfaces (“memifs”), we are able to deliver packets into the CNFs at speeds of several Mpps / Gbps. To accomplish that in a Kubernetes-native way, we use open-source projects such as Network Service Mesh, Contiv-VPP CNI, and the SR-IOV Kubernetes Device Plugin.

Our CNFs are agnostic to the CNI plugin and container-wiring technology, meaning the same CNFs can be used on top of Network Service Mesh as well as other wiring technologies and CNI plugins.

CNF Designer & Provisioner

Since the configuration of complex interconnections between CNFs can be a cumbersome process, involving the steep learning curve of container networking technologies and APIs, we have created a graphical user interface that allows defining a complex CNF deployment with a few clicks. Independently of the selected networking technology (e.g. Network Service Mesh or Contiv-VPP CNI), its CNF provisioning backend automatically deploys the CNFs, with all the necessary wiring instructions, into your Kubernetes cluster based on your GUI input.

CNF Use-Cases

Although the possibilities of CNF deployments are endless, it is worth giving you an idea of what can be achieved with them.

Carrier-Grade NAT Solution

This example use-case shows a cloud-native carrier-grade NAT solution, which shifts the NAT function and its configuration from the customer premises into the Internet service provider’s cloud infrastructure. This lowers the hardware and software requirements on CPE devices, simplifies the management and configuration of NAT, and allows for easy horizontal scaling, upgrades, and failovers.
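To make the NAT function itself concrete, here is a minimal Python sketch of the NAT44 mapping table a carrier-grade NAT maintains. Real CGNAT implementations add port blocks, session timeouts, and ALGs; all names here are illustrative:

```python
import itertools

class CgnatTable:
    """Toy carrier-grade NAT44 mapping table: each (private IP, private
    port) pair is mapped to a port on a shared public address, and the
    reverse table lets inbound replies find the original host."""

    def __init__(self, public_ip: str, first_port: int = 1024):
        self.public_ip = public_ip
        self._ports = itertools.count(first_port)  # next free public port
        self._out = {}   # (priv_ip, priv_port) -> (pub_ip, pub_port)
        self._back = {}  # (pub_ip, pub_port) -> (priv_ip, priv_port)

    def translate_out(self, priv_ip: str, priv_port: int):
        """Allocate (or reuse) a public mapping for an outbound flow."""
        key = (priv_ip, priv_port)
        if key not in self._out:
            mapping = (self.public_ip, next(self._ports))
            self._out[key] = mapping
            self._back[mapping] = key
        return self._out[key]

    def translate_in(self, pub_ip: str, pub_port: int):
        """Resolve an inbound packet back to the private endpoint."""
        return self._back.get((pub_ip, pub_port))
```

Running this table as a horizontally scaled CNF in the ISP cloud is exactly what lets the CPE stay a dumb L2 device.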

Virtual CPE (Customer Premise Equipment)

In this example use-case, the whole functionality of the customer premises equipment is moved into the service provider’s cloud infrastructure. The only networking equipment deployed at the customer premises can be a simple, inexpensive L2 switch with no additional features, connected to the ISP’s cloud infrastructure. Such a switch requires almost no maintenance, even in case of a future upgrade of CPE functionality. The CPE features are built as a chain of CNFs (networking microservices), which, apart from easy management, scaling, and upgrades, also allows for different feature sets per customer, deployed and modified on demand.


Better Together: SONiC & OpenDaylight Integration

September 23, 2024/in Blog /by filip.sterling

In a world increasingly driven by phenomena such as AI, it is easy to forget the importance of network reliability and throughput. The fundamental need for robust connectivity has always been present and remains a cornerstone of our digital infrastructure.

SONiC, an efficient data and control plane, and OpenDaylight, a powerful and flexible management plane, can work together to significantly enhance network reliability and speed.

OpenDaylight’s centralized control allows for dynamic network management and policy enforcement, providing real-time insights and automated responses to network conditions.

SONiC, with its modular & containerized architecture, ensures high-performance packet processing and robust data plane operations. The integration of these two systems enables seamless coordination between management, control and data planes, resulting in improved network performance, reduced latency, and enhanced fault tolerance through proactive management and rapid recovery from issues.

PANTHEON.tech @ SONiC Mini-Summit 2024

What is SONiC?

SONiC, developed by Microsoft, is an open-source network operating system designed to run on network switches. Its architecture is modular, containerized, and built on a Linux-based foundation, offering a highly flexible environment for network operations. SONiC manages the control and data planes, which are essential for switch operations, and it allows for dynamic network configuration and monitoring through both CLI (Command-Line Interface) and structured APIs such as gNMI and YANG.

What is OpenDaylight?

OpenDaylight, an open-source SDN controller, was launched as part of the Linux Foundation project. Initially, its goal was to promote SDN adoption by providing vendor-agnostic control over networks. Today, OpenDaylight supports a wide range of network protocols, including BGP, NETCONF, and gNMI, offering cloud-native support and integration into multiple LF projects, such as ONAP and OPNFV.

Better Together

The true strength of these platforms lies in their integration. SONiC and OpenDaylight work together to provide a comprehensive solution for network management. SONiC manages the underlying switches and routers, while OpenDaylight oversees higher-level automation and orchestration. Together, they form a cohesive system that can manage diverse, multi-protocol network environments.

Interested in combining the strengths of OpenDaylight & SONiC? Contact us today for a consultation!

Key benefits of this integration include:

Improved network automation

With SONiC providing operational simplicity through structured APIs and OpenDaylight enabling centralized control and policy enforcement, network administrators can automate routine tasks and complex configurations with ease.

Enhanced network reliability

The combination of SONiC’s efficient control plane and OpenDaylight’s centralized orchestration ensures a robust, reliable network infrastructure. By leveraging both platforms, networks are better equipped to handle increased traffic demands, while maintaining high availability.

Vendor-agnostic & flexible

OpenDaylight’s commitment to vendor-neutrality allows for seamless integration across different hardware vendors. SONiC’s modular architecture enables customization based on specific needs, providing flexibility for network operators.

What are the advantages?

One of the key advantages of using OpenDaylight with SONiC is its support for standardized APIs. The model-driven architecture of OpenDaylight, utilizing YANG, ensures consistent configurations across various network devices. This structure significantly reduces errors, promotes vendor independence, and enhances scalability, making it easier for network operators to expand or modify their network infrastructure without vendor lock-in.

OpenDaylight also excels in network orchestration and programmability. Its ability to work with multiple protocols (e.g., NETCONF, BGP, gNMI) means it can manage a wide range of devices within the same environment, enabling greater control and reducing operational overhead.


Model-Driven APIs vs. CLI

One of the ongoing debates in the networking world is whether to rely on model-driven APIs like gNMI/YANG or CLI-based management.

While CLI may provide more granular control for device-specific configurations, model-driven APIs offer significant advantages in terms of automation, error handling, and transactional support.

With APIs, operators can perform multiple changes in a single transaction, ensuring consistency and avoiding configuration errors. In contrast, CLI-based management is prone to inconsistencies, as each device may require different commands and manual error correction.
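The transactional behavior described above can be sketched in a few lines of Python (a toy in-memory datastore, not any real device API): either every change in the batch is applied, or the datastore rolls back to its pre-transaction state:

```python
class Transaction:
    """Toy model of the transactional semantics model-driven APIs
    provide: a batch of changes applies atomically, or none do."""

    def __init__(self, datastore: dict):
        self.datastore = datastore

    def apply(self, changes: dict) -> bool:
        snapshot = dict(self.datastore)  # saved for rollback
        try:
            for path, value in changes.items():
                if value is None:  # stand-in for a validation failure
                    raise ValueError(f"invalid value for {path}")
                self.datastore[path] = value
            return True
        except ValueError:
            # One bad change invalidates the whole batch: restore
            # the pre-transaction state instead of leaving it half-applied.
            self.datastore.clear()
            self.datastore.update(snapshot)
            return False
```

With CLI-based management, the half-applied state after a mid-batch failure is exactly what the operator is left to clean up by hand.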

gNMI & RESTCONF

gNMI integration in SONiC is still a work in progress and promises a number of useful features.

Unifying it with OpenDaylight’s RESTCONF would provide a consistent, standardized interface for interacting with various network devices. This integration would allow administrators to manage heterogeneous networks efficiently, simplifying operations and enabling a streamlined approach to network management.

Why should I care?

You should care because of real-life use-cases where this integration can be applied:

  • Network service abstraction & model-to-model translation

SONiC and OpenDaylight enable seamless network service abstraction by translating models between RESTCONF and gNMI. This allows administrators to ensure that configuration intents are accurately reflected across different network protocols, simplifying the management of diverse infrastructures.

  • Configuration intent datastore

With the integration of SONiC and OpenDaylight, administrators can leverage a configuration intent datastore. This datastore ensures that configuration changes are made with intent-based execution, reducing the risk of misconfigurations and enabling more reliable operations across network devices.

  • RESTCONF-2-gNMI translation

OpenDaylight’s support for RESTCONF allows for efficient translation to gNMI, creating a consistent method for interacting with different network devices. This unification of interfaces simplifies network management, especially in heterogeneous environments with multiple protocols and vendors.

  • RBAC

OpenDaylight’s architecture also allows for the integration of role-based access control. RBAC helps secure access to network devices by ensuring that users are given appropriate permissions based on their roles. This improves network security and reduces the risk of unauthorized changes.

  • Separation of concerns and scalability

By integrating SONiC’s modular and containerized architecture with OpenDaylight’s orchestration capabilities, networks can adopt a clear separation of concerns. This means that network functions, automation, and control can scale independently, optimizing performance and making future expansions simpler and more efficient.
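To illustrate the RESTCONF-to-gNMI translation mentioned above, here is a simplified Python sketch that maps a RESTCONF data resource path onto gNMI path elements. A real translator must also resolve YANG module prefixes against namespaces, handle multiple list keys, and apply percent-encoding per the two specifications:

```python
def restconf_to_gnmi_path(restconf_path: str) -> list:
    """Translate a RESTCONF data resource path into a list of gNMI
    path elements (simplified sketch of the idea only)."""
    elems = []
    for segment in restconf_path.strip("/").split("/"):
        name, _, key = segment.partition("=")
        # Drop the YANG module prefix ("module:node" -> "node").
        elem = {"name": name.split(":")[-1]}
        if key:
            # Assumes the list key leaf is called "name", as it is in
            # openconfig-interfaces; real code looks this up in the model.
            elem["key"] = {"name": key}
        elems.append(elem)
    return elems
```

For example, `openconfig-interfaces:interfaces/interface=Ethernet0/config` becomes three path elements, with the list key attached to the `interface` element.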

A SONiC & OpenDaylight integration provides a powerful, flexible, and scalable solution that enhances network reliability, enables automation, and ensures vendor-neutrality. For organizations looking to modernize their network infrastructure, the combination of SONiC and OpenDaylight is a compelling choice.

The question remains: are you ready to simplify and optimize your network?

 


[OpenDaylight] The Future is Modular

August 1, 2024/in Blog /by filip.sterling

A sustained effort to improve the OpenDaylight codebase focused on restructuring and modularizing the RESTCONF server architecture. The work, started by Robert Varga in 2021, has been completed, and we can start reaping its benefits.

By modularizing the RESTCONF server architecture, our team was able to achieve a heightened separation of concerns. In simpler terms – greater flexibility and extensibility. For example, unit tests can target specific modules, ensuring each part works as intended.

This shift aims to simplify several parts of the OpenDaylight project and increase flexibility – particularly in how data is managed and how the system interacts with network devices.

State persistence in lighty.io RNC

In particular, future lighty.io RNC instances will benefit from decoupling state persistence from the application container itself. By leveraging external systems like ONAP CPS, we not only reduce complexity within the RNC container but also allow for more specialized and robust state management solutions.

Statelessness is a crucial property for systems deployed in cloud environments, where resources can dynamically scale in response to demand. This would enable, for example, Kubernetes autoscaling of RNC (download the package here) against an instance of ONAP CPS – or other, similar services, like sysrepo. 

MD-SAL Independence

The Model-Driven Service Abstraction Layer (MD-SAL) is a core component in OpenDaylight that provides a common data store and messaging infrastructure. Recent changes allow the RESTCONF server to operate independently of MD-SAL, which was not previously possible.  

MD-SAL & RESTCONF communication example

This decoupling enables the implementation of a RESTCONF server that can interface directly with various data stores or backend services, without relying on MD-SAL. For example, data from gNMI (gRPC Network Management Interface) devices can be accessed directly, through the same RESTCONF server interface.

For users who continue to use MD-SAL, a dedicated integration layer has been preserved and will soon be separated into its own component. This layer handles the specific wiring needed to integrate MD-SAL with the new RESTCONF server architecture. By isolating this functionality, the system remains modular, allowing users to opt in or out of using MD-SAL as needed.

Separation of Basic RESTCONF Concepts

  • The RESTCONF API module serves as the foundation for RESTCONF operations. By isolating these core concepts into a dedicated module, developers can build RESTCONF clients and servers without being tied to specific implementations or underlying infrastructure.
  • The RestconfServer Interface supports multiple server implementations, such as the current JAX-RS and a newer, lighter implementation using Netty – a high-performance, non-blocking I/O framework for Java. This allows for flexibility in choosing the appropriate server technology based on performance needs or deployment constraints.
  • The RESTCONF Server SPI (Service Provider Interface) module provides the necessary interfaces and classes for implementing and integrating various components into the RESTCONF server. This layer ensures that different data stores or backend services can be plugged into the system, facilitating a wide range of use cases and integrations.

The Future is Modular

These recent updates to OpenDaylight have significantly improved the RESTCONF server architecture, making it more modular, flexible, and scalable. These changes do not only simplify the system but also prepare it for future enhancements and integrations, making it a more robust and versatile platform for network management. 

Whether handling configuration, monitoring network state, or integrating with various data sources – the new architecture offers a solid and adaptable foundation for diverse networking needs.

Leave us your feedback on this post!

You can contact us here.

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


[Demo] SONiC VPP BGP Multipath

July 22, 2024/in Blog /by filip.sterling

Managing large-scale data centers and cloud environments can be incredibly complex, with challenges ranging from traffic congestion to maintaining high availability. The need for efficient, scalable, and reliable solutions is on the mind of most network engineers and network architects.

The combination of:

  • Vector Packet Processing (VPP)
  • Software for Open Networking in the Cloud (SONiC)
  • Border Gateway Protocol Equal-Cost Multi-Path (BGP ECMP)

is a particularly useful and powerful option for optimizing large-scale data center and cloud environments.

Download the demo here:


VPP, a high-performance software packet-processing framework, employs various techniques, including Single Instruction Multiple Data (SIMD) operations, to optimize the processing of packets on general-purpose CPUs. These optimizations enable VPP to manage large volumes of traffic in parallel, significantly enhancing throughput.

Complementing this, SONiC is an open-source network operating system designed for cloud-scale data centers, offering a modular and flexible architecture for deploying a comprehensive suite of network functions across various hardware platforms.

Thanks to its integration with VPP, SONiC can now be deployed on general-purpose hardware.

This expansion allows SONiC to extend closer to compute nodes in data centers, or even replace vendor hardware with general-purpose “pizza boxes.”

Additionally, BGP ECMP enables the distribution of network traffic across multiple equal-cost paths, ensuring load balancing, redundancy, and efficient bandwidth utilization. Together, these technologies provide scalable, reliable, and efficient solutions for managing complex and demanding network infrastructures.
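The flow-based load balancing BGP ECMP relies on can be sketched in Python: hash the 5-tuple identifying a flow and use it to pick one of the equal-cost next-hops, so all packets of a flow take the same path while different flows spread across all paths. Hardware typically uses CRC/XOR hash functions; SHA-256 is used here purely for illustration:

```python
import hashlib

def ecmp_next_hop(src_ip: str, dst_ip: str, src_port: int,
                  dst_port: int, proto: str, next_hops: list) -> str:
    """Pick a next-hop for a flow by hashing its 5-tuple.
    Deterministic per flow, so packet ordering within a flow is preserved."""
    flow = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(flow).digest()[:4], "big")
    return next_hops[digest % len(next_hops)]
```

Because the selection is deterministic, a flow never reorders across paths; because it is uniform, aggregate traffic balances across all equal-cost links.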

SONiC VPP Architecture

The architecture of SONiC-VPP closely mirrors the foundational SONiC architecture. In this setup, the syncd component interfaces with VPP through the shared library libsaivpp.so. SONiC’s architecture allows integration with the FRRouting project, which typically encompasses daemons for various routing protocols, including BGP and OSPF.

VPP, on the other hand, uses the linux-cp plugin, which can create a TAP interface in Linux and copy attributes from the VPP interface, effectively mirroring the VPP hardware interface into the kernel. All synchronization is then managed by the netlink listener, which listens for netlink messages and executes the corresponding events in VPP. Supported events include RTM_LINK, RTM_ADDR, and RTM_*ROUTE messages.
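The netlink-listener mechanism can be sketched as a small event dispatcher in Python. The state layout is illustrative only; the real linux-cp plugin programs VPP through its binary API rather than a dict:

```python
class NetlinkMirror:
    """Toy model of the linux-cp netlink listener: kernel events
    (RTM_*) are dispatched to handlers that replay the change into
    the VPP data plane. Event names mirror rtnetlink message types."""

    def __init__(self):
        self.vpp_state = {"links": {}, "addrs": {}, "routes": {}}

    def handle(self, event: str, data: dict):
        dispatch = {
            "RTM_NEWLINK": lambda d: self.vpp_state["links"].update(
                {d["ifname"]: d["state"]}),
            "RTM_NEWADDR": lambda d: self.vpp_state["addrs"].update(
                {d["ifname"]: d["addr"]}),
            "RTM_NEWROUTE": lambda d: self.vpp_state["routes"].update(
                {d["prefix"]: d["nexthop"]}),
        }
        dispatch[event](data)  # replay the kernel event into VPP state
```

In the fpmsyncd path described below, route events arrive over FPM from Zebra instead of from the kernel, but the replay-into-the-data-plane idea is the same.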

Normally, the linux-cp plugin would suffice for this use-case. However, SONiC uses two approaches:

  1. The syncd process (e.g., portsyncd) listens to netlink messages from the kernel.
  2. The syncd process (fpmsyncd) listens to netlink messages using the FPM protocol from Zebra. This bypasses the handling of RTM_*ROUTE messages in linux-cp, which only listens to kernel netlink messages.

SONiC-VPP BGP Routing example architecture

Both of these approaches ensure that SONiC-VPP can effectively handle complex networking requirements while maintaining synchronization between the VPP data plane and the SONiC control plane. This seamless integration of SONiC and VPP provides a robust and flexible solution for managing advanced routing protocols and network interfaces in large-scale data center environments.

Leave us your feedback on this post!

You can contact us here.

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


OpenDaylight practical applications

May 7, 2024/in Blog /by PANTHEON.tech

Abstract

The OpenDaylight project is an open-source platform for Software-Defined Networking (SDN) that uses open protocols to provide centralized, programmatic control and network device monitoring. It allows automating networks of any size and scale. PANTHEON.tech, a top open-source contributor, actively participates in the Linux Foundation Networking environment. Our contributions play a crucial role in shaping the OpenDaylight project. In this article, we delve into the insights of our developers, exploring their perspectives on the OpenDaylight environment and its practical applications.

 

How important is OpenDaylight for your clients? 

Our customers operate modern complex and multi-layered network environments. They face a huge and demanding challenge to integrate many different systems, frameworks, engines, tools, technologies, etc. The OpenDaylight environment is a key enabler to help them automate this integration. We use a variety of OpenDaylight APIs to do this.

 

Which API do you use the most?

The RESTCONF REST APIs provide a convenient way for the customer’s external systems, applications and scripts to access the data in the OpenDaylight datastore or within the mounted devices. REST is also used to allow external applications to trigger various processes/functions within OpenDaylight using RPC calls. OpenAPI has also been introduced to facilitate this type of REST interaction. JMX APIs are another way for external applications to interact with OpenDaylight, although this is mostly used for monitoring, statistics gathering and health checks. Some customers integrate JMX APIs with their own messaging systems to publish alarm notifications to various external nodes. 

 

How do you deal with backups and disaster recovery?

OpenDaylight is primarily used as a network device connector and configurator. Thanks to its internal data store (the Model-Driven Service Abstraction Layer, MD-SAL), it also serves as a disaster recovery and backup tool – in cases where some devices die and need a clean reboot, we use OpenDaylight to restore them to their last known state.

 

What protocols do you use?

The main protocol used to connect and communicate with devices is NETCONF. Other protocols are also available, but with varying levels of production quality. Connecting and communicating with devices is usually done in two ways:

  • Directly – by calling the NETCONF APIs to create Netty-based NETCONF sessions with the devices. These sessions are then used by custom applications.
  • Via mount points – accessing the internal state of the device.

 

How do you secure communication?

To secure NETCONF communication, OpenDaylight comes with an AAA (Authentication, Authorization, Accounting) plugin. However, some clients have their own security policies and procedures. For this purpose, the original AAA plugin can be replaced by custom plugins to better meet the client’s requirements.

 

What are OpenDaylight’s clustering capabilities good for?

OpenDaylight is often used for high availability, fault tolerance and geo-redundancy. This is achieved using either OpenDaylight’s clustering capabilities, Daexim utilities or other hybrid solutions.

Customers can customize their OpenDaylight deployments – the number of OpenDaylight instances (members) that form the cluster, their voting rights and the distribution of data between them. Customers often split data into different shards and then decide which shards to persist and which members to host them on – adding high availability.
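As a toy illustration of splitting data into replicated shards, consider assigning shard replicas round-robin across cluster members. Placement in real OpenDaylight clustering is driven by a static module-shards.conf and Raft leadership per shard, not computed like this:

```python
def assign_shards(shards: list, members: list, replicas: int = 3) -> dict:
    """Assign each shard a list of hosting members, round-robin.
    Replicating every shard on several members is what provides
    high availability when one member goes down."""
    placement = {}
    for i, shard in enumerate(shards):
        placement[shard] = [members[(i + r) % len(members)]
                            for r in range(replicas)]
    return placement
```

Shifting the starting member per shard spreads shard leadership (and therefore load) across the cluster instead of piling every replica set onto the same members.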

 

Can you cluster multiple OpenDaylight instances? 

Some customers prefer to run multiple standalone OpenDaylight instances and connect them into an HA cluster via a higher-level orchestrator. This can be particularly useful in cases where load balancing is an important part of their deployment and needs to be carefully managed.

 


Human Errors in Network Configuration

January 25, 2024/in Blog /by PANTHEON.tech
Read more →

SONiC: From Hyperscalers to Enterprise

January 17, 2024/in Blog /by PANTHEON.tech
Read more →

Data Center Network Operations: Day 0, Day 1, Day 2 Config

December 19, 2023/in Blog /by PANTHEON.tech
Read more →

What is VXLAN (Virtual Extensible LAN)?

October 26, 2023/in Blog /by PANTHEON.tech
Read more →

[What Is] SONiC (Software for Open Networking in the Cloud)

October 19, 2023/in Blog /by PANTHEON.tech
Read more →

What is Network Address Translation (NAT) ?

October 10, 2023/in Blog /by PANTHEON.tech
Read more →

Empowering Service Providers’ Provider Edge: PANTHEON.tech’s StoneWork Multi-Service Router leveraging Intel’s latest Xeon CPU – Performance / Scale Test

September 27, 2023/in Blog /by PANTHEON.tech
Read more →

What is Virtual Local Area Network (VLAN) / 802.1Q?

September 26, 2023/in Blog /by PANTHEON.tech
Read more →

What is Virtual Routing and Forwarding (VRF) ?

September 18, 2023/in Blog /by PANTHEON.tech
Read more →

SONiC Enterprise Data Center Orchestration

April 13, 2023/in Blog, News /by PANTHEON.tech

SandWork is now Generally Available

SONiC (Software for Open Networking in the Cloud) is a popular open-source network operating system. This powerful and flexible platform is used by hyperscalers, large enterprises, and cloud providers to build their data center networks.

SONiC was originally developed by Microsoft as a network operating system for large-scale data center environments. It has gained popularity in recent years thanks to its flexibility, scalability, and extensibility, with contributions from many large organizations and individual contributors.

A key benefit of SONiC is its support for multi-vendor environments. Unlike many proprietary network operating systems, SONiC is vendor-agnostic and can be used with a wide range of network hardware from different manufacturers. This enables enterprises to choose the best network hardware for their needs, without being tied to a single vendor’s solutions.

In addition, SONiC is highly secure, with built-in support for secure communication, role-based access control, and other security features. This makes it an ideal platform for enterprises that require robust security measures to protect their data center infrastructure.

To maximize and build upon the benefits of SONiC in the data center, PANTHEON.tech has leveraged its open-source heritage and experience in network automation and software-defined networking to develop SandWork, which is now generally available.

One of the key benefits of SandWork is its ability to orchestrate network infrastructure in the enterprise data center. SandWork provides a programmable framework and intuitive user interface that enables network administrators to automate the configuration and management of datacenter network devices. This significantly reduces the time and effort required to manage the data center network fabric, while also increasing efficiency and scalability.

SandWork uses a modular design that enables administrators to customize the platform to meet their specific needs. It provides a rich set of APIs and tools that enable developers to build their own applications and scripts to automate network management tasks. This approach makes SandWork highly extensible and adaptable, making it an ideal solution for complex and dynamic data center environments.

SandWork is also highly scalable, supporting large-scale data center environments with thousands of network devices. Its advanced features for managing network infrastructure make it easier for network administrators to identify and troubleshoot issues in the data center, ensuring maximum uptime and availability.

Overall, SandWork is a powerful and flexible platform for managing and orchestrating enterprise data centers. With its advanced features for network management and security, SandWork is a valuable tool for any organization looking to optimize their data center operations.


Free Initial Consultation

 


by the PANTHEON.tech Team | Leave us your feedback on this post!

You can contact us here!

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


More @ PANTHEON.tech

  • [What Are] PortChannels
  • [What Is] VLAN & VXLAN
  • [What Is] Whitebox Networking?
© 2025 PANTHEON.tech s.r.o | Privacy Policy | Cookie Policy