Network Glossary

A

AF_XDP

Data Plane

A Linux socket address family that lets userspace applications send and receive packets directly to and from a network interface card, skipping most of the kernel network stack through the XDP (eXpress Data Path) framework. Unlike DPDK, AF_XDP works with standard kernel drivers and does not require hugepage configuration, which makes it practical for high-speed packet processing on stock Linux deployments where DPDK integration adds too much operational complexity. Performance falls between a full kernel stack and DPDK, and the trade-off is worth it for many software networking use cases.

Akka Clustering

Cloud Native

A JVM toolkit for building distributed, actor-model applications that need to stay consistent and available across multiple nodes. In the OpenDaylight and lighty.io context, Akka is the engine behind MD-SAL's high-availability configuration store - multiple controller nodes share state, elect a leader, and handle failures without an external database. The result is an SDN controller cluster that survives individual node loss without losing network state or dropping in-flight operations.

B

BGP (Border Gateway Protocol)

Protocols

The routing protocol that holds the internet together, exchanging reachability information between autonomous systems based on policy attributes rather than pure shortest-path metrics. BGP selects routes using a ranked set of attributes - AS path length, local preference, MED, and others - giving operators fine-grained control over traffic flows. In data center networks, BGP has moved inside the fabric too, replacing OSPF as the interior routing protocol in large-scale spine-leaf deployments because it scales further and carries richer policy without the flooding overhead of link-state protocols.
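
The ranked comparison described above can be sketched as a tuple ordering. This is a simplified illustration covering only three attributes, not a complete best-path algorithm (real implementations also weigh origin, eBGP vs iBGP, router IDs, and apply MED only between routes from the same neighbouring AS):

```python
# Simplified BGP best-path selection: compare candidate routes
# attribute by attribute, in order, until one wins. The route data
# below is illustrative.

def best_path(routes):
    """Pick the best route from a list of candidate route dicts."""
    return max(
        routes,
        key=lambda r: (
            r["local_pref"],        # higher local preference wins
            -len(r["as_path"]),     # then shorter AS path wins
            -r["med"],              # then lower MED wins (simplified)
        ),
    )

candidates = [
    {"next_hop": "10.0.0.1", "local_pref": 100, "as_path": [65001, 65002], "med": 0},
    {"next_hop": "10.0.0.2", "local_pref": 200, "as_path": [65001, 65002, 65003], "med": 0},
    {"next_hop": "10.0.0.3", "local_pref": 200, "as_path": [65001], "med": 10},
]

# Local preference ties between .2 and .3; the shorter AS path decides.
print(best_path(candidates)["next_hop"])  # 10.0.0.3
```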

At PANTHEON.TECH: lighty.io includes a BGP route reflector implementation used to centralise BGP peering in SDN deployments.

BGP-EVPN (BGP Ethernet VPN)

Protocols

A BGP address family that distributes MAC and IP reachability information across layer-2 and layer-3 boundaries, making it the standard control plane for modern data center overlay networks. BGP-EVPN is almost always paired with VXLAN as the data plane encapsulation - BGP-EVPN handles the control plane decisions (which host is where, which VNI it belongs to) while VXLAN does the actual tunnelling. Key capabilities include ARP suppression to cut broadcast traffic, symmetric IRB for distributed routing, and built-in multi-tenancy through VRF per-tenant separation.

At PANTHEON.TECH: SandWork uses BGP-EVPN for overlay control plane management in SONiC-based data center deployments.

BPMN (Business Process Model and Notation)

Automation

A graphical notation for modelling automated workflows as sequences of tasks, decision points, and parallel branches. BPMN diagrams are readable by both engineers and business stakeholders, which makes them a useful bridge between service design and operational implementation. In network automation, BPMN is used to define the step-by-step logic for service provisioning, device onboarding, and change management. ONAP's Service Design and Creation (SDC) component uses BPMN to model the lifecycle workflows executed by its Service Orchestrator.

At PANTHEON.TECH: BPMN workflows have been used to model ONAP-driven service chains and CNF orchestration sequences.

C

Cisco NSO (Network Services Orchestrator)

Automation

A multi-vendor network service orchestration platform built around YANG data models and NETCONF. NSO keeps a live copy of every managed device's configuration in a network-wide database, which means it can validate a change against the real network state before committing it and roll back automatically if something fails. Service providers use it for automated fulfilment across large, heterogeneous device populations where manual CLI management is simply not viable.

At PANTHEON.TECH: lighty.io has been integrated with Cisco NSO as a lightweight SDN-C replacement in ONAP architectures.

CNF (Cloud Native Network Function)

Cloud Native

A network function packaged as a container and deployed on Kubernetes, applying the same principles that made cloud application development faster - immutable images, declarative configuration, horizontal scaling, and operator-managed lifecycle. CNFs replace VNFs running on virtual machines, cutting the overhead of full VM management while enabling the same continuous delivery workflows used for application software. The shift matters because it brings operational consistency: the same pipelines, tooling, and observability that run production applications can now run network functions too.

At PANTHEON.TECH: StoneWork and the cdnf.io project deliver production-ready CNF implementations built on FD.io VPP.

CNCF (Cloud Native Computing Foundation)

Cloud Native

A Linux Foundation project that stewards the open-source technologies underpinning modern cloud infrastructure - Kubernetes, Prometheus, Envoy, Helm, and dozens of others. CNCF manages each project through a maturity lifecycle (sandbox, incubating, graduated) that signals production readiness to the wider community. For networking vendors, CNCF membership demonstrates alignment with the same ecosystem where cloud-native network functions are built and deployed.

At PANTHEON.TECH: PANTHEON.TECH became a certified CNCF member in 2022, reflecting its commitment to cloud-native networking delivery.

D

Day-0 / Day-1 / Day-2 Operations

Automation

A three-phase framework for thinking about network infrastructure lifecycle. Day-0 is the design and planning stage - topology decisions, capacity modelling, configuration templates. Day-1 is the initial deployment - bringing hardware online, pushing base configuration, validating connectivity. Day-2 is everything that follows: monitoring, software updates, scaling, configuration drift detection, and fault response. The distinction matters for tooling selection because many automation products handle one or two phases well but fall short on the others. A platform that covers all three removes the need to stitch multiple tools together for basic operations.

At PANTHEON.TECH: SandWork addresses all three operational phases for SONiC-based data center networks, from fabric design through ongoing configuration drift detection.

Declarative vs Imperative Configuration

Automation

Two fundamentally different ways to express what you want from a network. Imperative configuration spells out exactly how to get there - a sequence of commands the device executes in order. Declarative configuration states only the desired end state and lets the system figure out the steps needed to reach it. Declarative approaches are idempotent: running the same configuration twice produces the same result without side effects. This makes them safer to version-control, easier to audit, and far more compatible with automated pipelines. YANG-modelled configuration pushed over NETCONF is inherently declarative, which is one reason it has become the foundation of modern network automation.
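
The idempotence property can be shown with a toy reconciler - the function names and VLAN data below are illustrative, not any particular tool's API:

```python
# Declarative sketch: state only the desired end state and compute the
# steps needed to reach it. Applying the same desired state twice
# yields no further changes (idempotence).

def reconcile(current, desired):
    """Mutate `current` toward `desired`; return the operations applied."""
    ops = []
    for vlan, name in desired.items():
        if current.get(vlan) != name:
            ops.append(("set", vlan, name))
    for vlan in current:
        if vlan not in desired:
            ops.append(("delete", vlan))
    for op in ops:                       # apply the computed operations
        if op[0] == "set":
            current[op[1]] = op[2]
        else:
            del current[op[1]]
    return ops

device = {10: "users", 30: "legacy"}     # current device state
intent = {10: "users", 20: "servers"}    # declared desired state

first = reconcile(device, intent)
print(first)    # [('set', 20, 'servers'), ('delete', 30)]
second = reconcile(device, intent)
print(second)   # [] -- the second run is a no-op
```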

DPDK (Data Plane Development Kit)

Data Plane

A set of userspace libraries and poll-mode drivers that move packet processing out of the Linux kernel entirely. Instead of relying on interrupts and kernel network stack overhead, DPDK applications run on dedicated CPU cores in a tight polling loop, using hugepages for DMA-accessible memory and bypassing the OS scheduler for packet I/O. The result is deterministic, line-rate forwarding on commodity x86 hardware - the same hardware foundation used by FD.io VPP. DPDK is the reason software routers and firewalls can now compete with purpose-built hardware ASICs on throughput.

F

FD.io VPP (Vector Packet Processor)

Data Plane

A high-performance software data plane that processes packets in variable-sized vectors rather than one at a time. Instead of handling each packet individually, VPP pulls a batch of packets through the same processing node before moving to the next, maximising instruction cache efficiency and dramatically cutting per-packet CPU overhead compared to traditional single-packet processing. The vector size adapts dynamically to load - growing under high traffic, shrinking toward one under light traffic - so the scheduler always operates at the most efficient batch size for current conditions. Running in userspace on top of DPDK, VPP reaches forwarding rates that were previously only possible with purpose-built hardware. The feature set covers routing, bridging, NAT, IPsec, VXLAN, segment routing, and much more - all through a plugin architecture where features are added as shared libraries loaded at runtime.
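
The batching idea can be illustrated with a toy graph - the node names mimic VPP's graph nodes, but the code is a conceptual sketch, not the VPP API:

```python
# Vector processing sketch: each graph node handles the whole batch
# before packets move on, instead of one packet traversing the full
# pipeline alone.

def ethernet_input(batch):
    return [{**p, "l2_done": True} for p in batch]

def ip4_lookup(batch):
    return [{**p, "next_hop": "10.0.0.1"} for p in batch]

def ip4_rewrite(batch):
    return [{**p, "ttl": p["ttl"] - 1} for p in batch]

graph = [ethernet_input, ip4_lookup, ip4_rewrite]

vector = [{"id": i, "ttl": 64} for i in range(4)]   # one batch of packets
for node in graph:           # the batch stays "hot" in one node at a
    vector = node(vector)    # time, keeping that node's instructions
                             # in the CPU cache across all four packets

print(vector[0]["ttl"], vector[0]["next_hop"])   # 63 10.0.0.1
```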

At PANTHEON.TECH: VPP is a core technology across StoneWork, SandWork, and several CNF products. PANTHEON.TECH engineers are active contributors to the FD.io project.

G

gNMI (gRPC Network Management Interface)

Protocols

A network management protocol built on gRPC that supports configuration, state retrieval, and streaming telemetry from network devices using YANG-modelled data paths. gNMI defines four operations - Get, Set, Subscribe, and Capabilities - and its Subscribe operation is what makes real-time telemetry practical at scale. Instead of polling devices on a schedule, an operator subscribes to a data path and the device pushes updates whenever values change or on a defined sample interval. This is faster, cheaper on device CPU, and far more useful for detecting transient events that a polling cycle would miss.
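
The difference between polling and an on-change subscription can be sketched in a few lines - the path and values below are illustrative:

```python
# On-change sketch: emit an update only when the monitored value
# actually changes, rather than reporting on a fixed polling schedule.

def on_change_stream(samples):
    """Yield (path, value) updates only when the value changes."""
    last = object()                      # sentinel: always differs first
    for value in samples:
        if value != last:
            yield ("/interfaces/interface[name=eth0]/state/oper-status", value)
            last = value

readings = ["UP", "UP", "DOWN", "DOWN", "UP"]
updates = list(on_change_stream(readings))
print([v for _, v in updates])   # ['UP', 'DOWN', 'UP']
# Three pushes for five readings -- the two transitions are exactly
# what a collector needs to see, and a slow polling cycle could miss.
```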

At PANTHEON.TECH: lighty.io includes a gNMI application enabling OpenDaylight-based controllers to manage gNMI-capable devices alongside NETCONF devices.

gRPC

Protocols

A high-performance remote procedure call framework that uses Protocol Buffers (protobuf) for interface definition and HTTP/2 as its transport. Protobuf serialisation is compact and fast compared to JSON or XML, and HTTP/2 multiplexing allows multiple streams over a single connection - including bidirectional streaming that makes gRPC well-suited for real-time telemetry. In networking, gRPC is the transport layer underneath gNMI and gNOI, and it is used for agent-to-controller communication in cloud-native data plane architectures like the Ligato VPP Agent.

H

Honeycomb / HC2VPP

Data Plane

A NETCONF/RESTCONF management agent that runs alongside FD.io VPP and translates YANG-modelled configuration into VPP binary API calls. HC2VPP was the original bridge between the OpenDaylight ecosystem and VPP instances, letting operators configure VPP through standard management protocols without writing custom integration code per deployment. It established the pattern - northbound standard protocol, southbound binary API translation - that later architectures like the Ligato VPP Agent refined for cloud-native environments.

At PANTHEON.TECH: HC2VPP was a foundational component in early VPP-based CNF architectures. Its role has since been superseded by the Ligato VPP Agent in cloud-native deployments.

I

Intent-Based Networking (IBN)

SDN & Control

A network management approach where operators describe what they want the network to do, not how to configure individual devices to achieve it. The IBN system translates that high-level intent into specific device configurations, deploys them, and then continuously validates that the actual network state matches the declared intent. When it drifts, the system either corrects it automatically or raises an alert. IBN builds on SDN and automation but adds a closed feedback loop that distinguishes it from simple push-and-forget configuration management.

IPsec (Internet Protocol Security)

Protocols

A protocol suite that provides authentication and encryption at the IP layer, securing traffic between endpoints without requiring changes to the applications above it. IPsec operates in two modes: transport mode encrypts only the packet payload between two hosts, while tunnel mode wraps the entire original packet inside a new encrypted IP packet - the standard approach for site-to-site VPNs. Modern software data planes like FD.io VPP implement IPsec with hardware acceleration support (Intel QAT, AES-NI), enabling encrypted tunnel throughput that scales with available CPU cores rather than dedicated crypto hardware.
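
Tunnel mode's wrapping can be sketched as plain data structures - no cryptography here, just the encapsulation layout, with illustrative addresses:

```python
# Tunnel-mode sketch: the entire original packet becomes the protected
# payload of a new outer IP packet, so intermediate routers only ever
# see the gateway addresses. Not a real ESP implementation.

def tunnel_encapsulate(packet, gw_src, gw_dst):
    return {
        "src": gw_src,           # outer header: the tunnel endpoints
        "dst": gw_dst,
        "protocol": "ESP",
        "payload": packet,       # whole inner packet; encrypted in real IPsec
    }

inner = {"src": "192.168.1.10", "dst": "172.16.5.20", "protocol": "TCP"}
outer = tunnel_encapsulate(inner, "203.0.113.1", "198.51.100.1")

print(outer["src"], "->", outer["dst"])   # only the gateways are visible
print(outer["payload"]["src"])            # private address hidden inside
```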

At PANTHEON.TECH: StoneWork includes an IPsec implementation enabling SONiC-based switches and VPP-powered routers to establish encrypted tunnels.

IS-IS (Intermediate System to Intermediate System)

Protocols

A link-state interior gateway routing protocol that builds a complete picture of the network by flooding topology advertisements to all routers in the domain. Each router runs Dijkstra's algorithm on that shared map to compute its own forwarding table, with no single point of calculation. IS-IS was originally designed for OSI networks and extended to support IP - a design decision that makes it more flexible than OSPF, which was built for IP from the start. Large-scale service provider networks favour IS-IS for its scalability and clean support for traffic engineering extensions, and it also appears as the underlay protocol in some data center fabrics.

At PANTHEON.TECH: StoneWork added native IS-IS protocol support, enabling deployment in service provider edge and core routing scenarios.

IPv4 / IPv6

Protocols

The two active versions of the Internet Protocol. IPv4 uses 32-bit addresses, giving a theoretical maximum of around 4.3 billion unique addresses - a pool that IANA exhausted at the central level in February 2011. Regional registries continued allocating from their own reserves for years after: ARIN (North America) ran out in 2015, RIPE NCC (Europe) in 2019. IPv6 uses 128-bit addresses, providing a space large enough that every device ever manufactured could have a globally unique address with room to spare. Most production networks run dual-stack, carrying both protocols in parallel during a transition that has been slower than anticipated. IPv6 also eliminates most NAT requirements and does away with broadcast entirely, replacing it with multicast and anycast.
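
The scale difference is easy to make concrete with Python's standard ipaddress module:

```python
# Compare the two address spaces in concrete numbers.
import ipaddress

v4 = ipaddress.ip_network("0.0.0.0/0")   # the whole IPv4 space
v6 = ipaddress.ip_network("::/0")        # the whole IPv6 space

print(v4.num_addresses)   # 4294967296 (~4.3 billion)
print(v6.num_addresses)   # 340282366920938463463374607431768211456

# A modest IPv6 site allocation (/48) holds 65,536 /64 subnets, and
# each /64 alone is 2**32 times the size of the entire IPv4 space:
site = ipaddress.ip_network("2001:db8::/48")
print(site.num_addresses // v4.num_addresses)   # 281474976710656 (= 2**48)
```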

K

Karaf

SDN & Control

An OSGi-based runtime container that was the standard packaging format for OpenDaylight from its early releases. Karaf allows OSGi bundles to be installed, started, and stopped independently at runtime, which gave ODL its modular feature system. In practice, the flexibility came with significant complexity - slow startup, high memory consumption, and a steep learning curve for operators accustomed to conventional application deployment. lighty.io was built specifically to replace Karaf by packaging the same ODL core as a standard Java library that starts in seconds and runs in a plain JVM process.

Kubernetes

Cloud Native

An open-source container orchestration platform that automates the deployment, scaling, and lifecycle of containerised workloads across a cluster of machines. For networking, Kubernetes is the runtime environment where CNFs live - it provides the pod and service abstractions, manages container networking through the CNI plugin interface, and enforces network policies for traffic segmentation. Running a network function on Kubernetes means it inherits the same upgrade, rollback, health-check, and scaling mechanisms as any other application in the cluster.

L

LAG / MLAG (Link Aggregation Group / Multi-Chassis LAG)

Data Center

LAG bonds multiple physical links between two devices into a single logical interface, increasing available bandwidth and providing resilience against individual link failure. LACP (now standardised as IEEE 802.1AX, originally 802.3ad) defines how the two endpoints negotiate and maintain the aggregation automatically. MLAG extends this across two separate switches: a downstream device bonds links to two different upstream switches simultaneously, eliminating the single point of failure at the aggregation layer while keeping the topology loop-free and avoiding spanning tree. MLAG is a standard building block for redundant uplinks in spine-leaf data center fabrics.
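
Member selection is typically a hash over the flow's header fields, which keeps every packet of one flow on the same link and preserves packet ordering - a minimal sketch, with illustrative field choices:

```python
# LAG member selection sketch: hash the flow's 5-tuple onto one of the
# bonded links. Real hardware uses configurable hash fields; this toy
# uses Python's built-in hash for illustration.

def pick_member(flow, members):
    """Hash the flow identity onto one of the bonded links."""
    key = (flow["src_ip"], flow["dst_ip"], flow["proto"],
           flow["src_port"], flow["dst_port"])
    return members[hash(key) % len(members)]

links = ["eth1", "eth2", "eth3", "eth4"]
flow = {"src_ip": "10.0.0.1", "dst_ip": "10.0.1.1",
        "proto": "TCP", "src_port": 49152, "dst_port": 443}

# Every packet of this flow maps to the same member link, so packets
# within the flow can never be reordered by the aggregation:
print(all(pick_member(flow, links) == pick_member(flow, links)
          for _ in range(5)))   # True
```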

At PANTHEON.TECH: SandWork supports LAG and MLAG configuration on SONiC-based switches as part of data center fabric automation.

LFN (Linux Foundation Networking)

Telecom / RAN

The umbrella organisation under the Linux Foundation that hosts the major open networking projects - OpenDaylight, ONAP, FD.io, OPNFV, and Tungsten Fabric among them. LFN provides neutral governance, shared infrastructure, and a framework for cross-project collaboration that lets competing vendors contribute to the same codebase without any single company controlling the roadmap. Active contribution to LFN projects is one of the clearest signals of genuine engineering depth in the open networking space - it is verifiable through commit history rather than marketing.

At PANTHEON.TECH: PANTHEON.TECH has held leadership positions across multiple LFN projects, including serving as a top contributor to OpenDaylight.

Ligato VPP Agent

Data Plane

An open-source control plane agent for FD.io VPP designed to work inside Kubernetes. The Ligato VPP Agent exposes a gRPC and REST API, uses etcd as a key-value configuration store for declarative intent, and translates that intent into VPP binary API calls. This means a VPP-based CNF can be configured and monitored like any other cloud-native service - no CLI, no custom integration code. The agent also handles VPP process lifecycle, restoring the last-known configuration automatically if VPP restarts.

lighty.io

SDN & Control

A lightweight SDK that packages the OpenDaylight core - MD-SAL, YANG Tools, NETCONF, RESTCONF - as a standard Java library rather than an OSGi application. Removing the Karaf container cuts startup time from minutes to seconds and reduces memory footprint dramatically, making it practical to deploy an ODL-based controller in a container alongside other services. Developers include only the components their application actually needs, which keeps the runtime lean and the dependency tree manageable. lighty.io is used in production by telcos and cloud operators globally.

At PANTHEON.TECH: lighty.io is one of PANTHEON.TECH's primary open-source products, used in production deployments at telcos and cloud operators worldwide.

M

MD-SAL (Model-Driven Service Abstraction Layer)

SDN & Control

The core infrastructure layer of OpenDaylight and lighty.io that everything else is built on. MD-SAL provides a YANG-driven datastore where all network state is stored in a schema-aware tree, a notification bus for publishing and subscribing to data change events, and an RPC framework for calling functions exposed by plugins. Because the datastore is schema-driven, any application reading from it knows exactly what data it will find and can validate changes before committing them. MD-SAL is what makes it possible to write northbound applications and southbound device plugins that talk to each other without knowing each other's implementation details.

memif (Memory Interface)

Data Plane

A shared-memory packet interface for passing packets between network functions running on the same host without any kernel involvement. Two processes sharing a memif interface read and write directly to the same memory region, which eliminates the copy operations and system call overhead of conventional inter-process networking. The result is packet handoff at speeds close to in-process forwarding - useful in service chains where multiple CNFs need to pass traffic to each other at high rates without the cost of looping through a virtual switch.

Microservices

Cloud Native

An architectural pattern that structures an application as a set of small, independently deployable services, each owning a specific capability and communicating with others over well-defined APIs. In network management, the microservices approach lets orchestrators and controllers decompose into components that can be scaled, updated, or replaced individually - a BGP route reflector can be upgraded without touching the NETCONF configuration engine next to it. The trade-off is operational complexity: more services means more network connections, more failure modes, and more observability tooling required to understand what is happening across the system.

Multus CNI

Cloud Native

A Kubernetes CNI meta-plugin that allows pods to attach to multiple network interfaces at the same time. Standard Kubernetes gives every pod a single network interface managed by whichever CNI plugin is installed. Multus chains multiple CNI plugins together, assigning additional interfaces from SR-IOV, DPDK, or other high-performance networks alongside the default cluster network. For CNFs, this separation is essential: management traffic stays on the default interface while data plane traffic flows through a dedicated, high-throughput interface without competing with Kubernetes control plane traffic.
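
A secondary interface is requested through a NetworkAttachmentDefinition resource - the sketch below is illustrative (the name, subnet, and SR-IOV specifics would differ per cluster):

```yaml
# Illustrative NetworkAttachmentDefinition: a second, SR-IOV-backed
# interface available to pods alongside the default cluster network.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-dataplane          # example name
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "sriov",
    "ipam": { "type": "host-local", "subnet": "10.56.0.0/24" }
  }'
```

A pod then requests the extra interface with the annotation `k8s.v1.cni.cncf.io/networks: sriov-dataplane` and receives it in addition to its default one.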

N

NAT (Network Address Translation)

Data Center

A technique that rewrites IP address information in packet headers as traffic crosses a boundary between address spaces - most commonly translating private RFC 1918 addresses to a smaller pool of public IPs for internet access. NAT is implemented everywhere from home routers to carrier-grade platforms handling millions of concurrent sessions. In software networking, NAT is one of the most common functions implemented as a VNF or CNF, and FD.io VPP's NAT plugin handles it at line rate in userspace. The long-term direction is IPv6 adoption, which eliminates the need for most NAT, though the transition is slow.

NETCONF (Network Configuration Protocol)

Protocols

An IETF protocol (RFC 6241) for installing, modifying, and deleting network device configuration over an SSH transport using XML-encoded, YANG-modelled data. What sets NETCONF apart from older approaches like CLI scripting or SNMP is its transactional model: a configuration change can be staged in a candidate datastore, validated against the device's YANG schema, committed atomically, and rolled back automatically if the commit fails. This makes large, multi-step configuration changes reliable in a way that sequential CLI commands never were - if step four fails, the device returns to the state before step one.
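
The candidate/commit cycle can be modelled in a few lines - a toy datastore, not a NETCONF client (tools such as ncclient expose the same verbs against real devices):

```python
# Transactional sketch: stage changes in a candidate datastore,
# validate, then commit atomically -- or roll back and leave the
# running configuration untouched.

class Datastores:
    def __init__(self, running):
        self.running = dict(running)
        self.candidate = dict(running)       # candidate starts as a copy

    def edit_config(self, changes):
        self.candidate.update(changes)       # staged only; running untouched

    def validate(self):
        # stand-in for YANG schema validation: MTU must be plausible
        return all(v["mtu"] >= 68 for v in self.candidate.values())

    def commit(self):
        if not self.validate():
            self.candidate = dict(self.running)   # implicit rollback
            return False
        self.running = dict(self.candidate)       # atomic switch
        return True

ds = Datastores({"eth0": {"mtu": 1500}})
ds.edit_config({"eth0": {"mtu": 40}})        # invalid: below IPv4 minimum
print(ds.commit())                           # False -- change rejected
print(ds.running["eth0"]["mtu"])             # 1500 -- running unchanged
ds.edit_config({"eth0": {"mtu": 9000}})
print(ds.commit())                           # True
```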

At PANTHEON.TECH: NETCONF support is a core capability of lighty.io. PANTHEON.TECH engineers have contributed extensively to the OpenDaylight NETCONF implementation.

Network Automation

Automation

The practice of using software to handle network tasks that were previously done by hand - configuration, provisioning, testing, and routine operations. The goal is not just speed, but consistency: a script or pipeline applies the same logic every time, without the variation that comes from different engineers interpreting the same runbook differently. Network automation ranges from simple Python scripts wrapping SSH commands up to full intent-based platforms with closed-loop validation. The key enabling technologies are NETCONF/YANG for structured configuration, REST APIs for orchestration integration, and version-controlled infrastructure-as-code for audit trails and rollback.

At PANTHEON.TECH: Network automation is the central theme of both lighty.io and SandWork, targeting carrier and enterprise data center use cases respectively.

Network Fabric

Data Center

A switching architecture that provides any-to-any connectivity across all nodes in a data center with consistent, predictable latency. Traditional hierarchical designs - access, distribution, core - funnel traffic through shared aggregation points that become bottlenecks under load and single points of failure under faults. A fabric eliminates both problems by distributing forwarding intelligence across every node and providing multiple equal-cost paths between any two endpoints. Modern data center fabrics are built on spine-leaf topologies running BGP-EVPN/VXLAN, often with SONiC as the network operating system on whitebox hardware.

Network Orchestration

Automation

The coordination layer above network automation that ties together multiple controllers, automation tools, and configuration systems to deliver end-to-end services across heterogeneous infrastructure. Automation handles the mechanics of configuring a single device or domain; orchestration handles the workflow that spans multiple domains - for example, provisioning a VPN service that requires coordinated changes across a WAN controller, a data center controller, and a firewall policy engine simultaneously. Orchestrators translate service-level intent from a customer or operations team into a sequenced chain of automation tasks, handling dependencies, failure recovery, and rollback across the whole chain.

At PANTHEON.TECH: SandWork serves as the orchestration layer for SONiC-based data center networks, coordinating configuration across switches, overlays, and monitoring systems.

NFV (Network Function Virtualisation)

Cloud Native

An ETSI-defined architecture that decouples network functions from the proprietary hardware appliances they traditionally run on and deploys them as software on commodity x86 servers. A firewall that once required a dedicated hardware appliance becomes a VNF running in a VM, managed by the same infrastructure as any other virtual machine. NFV was the first large-scale attempt to bring software economics to carrier networking, and it largely succeeded in proving the concept - though the operational complexity of managing large VNF estates drove the next wave of evolution toward cloud-native CNFs on Kubernetes.

NSM (Network Service Mesh)

Cloud Native

A CNCF project that extends Kubernetes networking beyond its standard service model to support advanced connectivity patterns needed by network functions. Standard Kubernetes services work well for stateless HTTP applications but are not designed for use cases like direct CNF-to-CNF interfaces, cross-cluster VPN tunnels, or on-demand network service chaining. NSM introduces a control plane that negotiates and establishes these connections dynamically on request, using pluggable forwarders (kernel-based or VPP-based) to set up the actual data path between workloads without modifying the network function code itself.

O

OCP (Open Compute Project)

Data Center

A collaborative hardware community founded by Meta (formerly Facebook) that publishes open specifications for servers, storage, networking, and data center infrastructure. OCP-compliant network hardware - whitebox switches built on merchant silicon from Broadcom or Marvell - forms the physical layer for open networking deployments. Hyperscalers built OCP to escape proprietary vendor lock-in at the hardware level, and the designs have since spread to enterprise and service provider data centers where the same economics apply.

ONAP (Open Network Automation Platform)

Automation

A Linux Foundation Networking project that provides a comprehensive framework for automating the full lifecycle of physical, virtual, and cloud-native network functions. ONAP spans service design (SDC), orchestration (SO), controller design (CDS), analytics and closed-loop automation (DCAE), and policy enforcement - a deliberately broad scope that reflects the complexity of operating large carrier networks. Its architecture assumes a heterogeneous environment: multiple vendors, multiple technologies, and multiple control domains that all need to be coordinated from a single operational layer.

At PANTHEON.TECH: lighty.io has been deployed as a production SDN controller within ONAP at Orange. PANTHEON.TECH has also contributed ONAP performance testing and CDS integration work.

OpenDaylight (ODL)

SDN & Control

An open-source SDN controller framework hosted by Linux Foundation Networking, built around a model-driven architecture where every data path runs through YANG-modelled schemas in the MD-SAL core. OpenDaylight supports a wide range of southbound protocols - NETCONF, OpenFlow, BGP, gNMI, OVSDB - through a plugin system that lets device-specific code be added without touching the controller core. Applications sit above MD-SAL and interact with any connected device through the same datastore and RPC interfaces, regardless of which protocol the device speaks. ODL has been deployed in production carrier networks for over a decade.

At PANTHEON.TECH: PANTHEON.TECH is one of the largest contributors to OpenDaylight and maintains several active sub-projects. lighty.io is built on the OpenDaylight core.

OpenFlow

SDN & Control

The protocol that started the practical SDN movement. Defined by the Open Networking Foundation, OpenFlow enables an external controller to directly program the forwarding table of a switch. An OpenFlow controller pushes flow entries specifying match conditions - source IP, destination port, VLAN tag - and associated actions: forward out a port, drop, modify headers, or send to the controller for inspection. OpenFlow proved that centralised, programmable control of forwarding hardware was viable at scale. NETCONF/YANG has since become the dominant approach for device management, but OpenFlow remains useful for applications that need direct, granular control over individual flow entries.
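
Flow lookup is priority-ordered matching over whichever fields an entry specifies - a conceptual sketch with illustrative entries, not a real switch pipeline:

```python
# Flow-table sketch: match a packet against prioritized entries and
# apply the highest-priority match's action. Fields omitted from a
# match are wildcards; a miss goes to the controller.

def lookup(flow_table, packet):
    for entry in sorted(flow_table, key=lambda e: -e["priority"]):
        if all(packet.get(f) == v for f, v in entry["match"].items()):
            return entry["action"]
    return ("send_to_controller",)          # table miss

table = [
    {"priority": 200, "match": {"dst_ip": "10.0.0.5", "dst_port": 22},
     "action": ("drop",)},                  # block SSH to this host
    {"priority": 100, "match": {"dst_ip": "10.0.0.5"},
     "action": ("output", 3)},              # everything else: port 3
]

print(lookup(table, {"dst_ip": "10.0.0.5", "dst_port": 22}))   # ('drop',)
print(lookup(table, {"dst_ip": "10.0.0.5", "dst_port": 443}))  # ('output', 3)
print(lookup(table, {"dst_ip": "10.0.0.9", "dst_port": 80}))   # ('send_to_controller',)
```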

O-RAN (Open Radio Access Network)

Telecom / RAN

An industry initiative that disaggregates the radio access network into open, interoperable components connected through standardised interfaces. Traditional RAN is a vertically integrated stack where hardware, software, and management all come from a single vendor - changing one means changing all three. O-RAN separates them, defining interfaces like O1 (management), E2 (near-real-time RIC), and the fronthaul split between radio unit and distributed unit. Mobile operators adopt O-RAN to source components from multiple vendors, accelerate feature deployment, and reduce the cost of RAN upgrades that previously required hardware replacement.

OVSDB (Open vSwitch Database Protocol)

SDN & Control

A JSON-RPC management protocol for configuring Open vSwitch instances running in hypervisors and containers. OVSDB handles the control plane configuration of OVS - creating bridges, adding ports, defining tunnel endpoints - while OpenFlow programs the data plane flow tables of those same OVS instances. The two protocols are complementary: OVSDB sets up the switching infrastructure, OpenFlow controls what that infrastructure does with each packet. In SDN deployments, an OpenDaylight controller manages both through separate southbound plugins that share the same northbound application interface.

oc4vpp (OpenConfig for VPP)

Data Plane

A project that implements OpenConfig YANG models on top of FD.io VPP, making VPP instances configurable and observable through standard NETCONF or gNMI interfaces without custom integration code. OpenConfig is a set of vendor-neutral YANG models maintained by a consortium of network operators, and any controller that speaks NETCONF or gNMI can manage a device exposing OpenConfig models. oc4vpp bridges the gap between VPP's native binary API and that standard management layer, letting existing OpenConfig-compatible tooling manage VPP-based network functions out of the box.

P

PortChannel

Data Center

The SONiC and Cisco term for a Link Aggregation Group - a logical interface that presents multiple physical ports as a single, higher-bandwidth link to the rest of the network. PortChannels operate in static mode or LACP mode, where the two endpoints continuously negotiate membership and detect link failures automatically. In SONiC-based fabrics, PortChannels are configured between ToR (top-of-rack) leaf switches and spine switches to provide both bandwidth aggregation and uplink redundancy as a single logical construct.

At PANTHEON.TECH: SandWork automates PortChannel configuration as part of SONiC fabric provisioning workflows.

R

RESTCONF (RFC 8040)

Protocols

An HTTP-based protocol that exposes YANG-modelled network configuration as a REST API, mapping NETCONF operations to standard HTTP methods - GET for retrieval, PUT and PATCH for modification, DELETE for removal - with JSON or XML encoding. RESTCONF makes YANG-modelled device data accessible to any HTTP client, which dramatically lowers the integration barrier compared to raw NETCONF over SSH. CI/CD pipelines, monitoring systems, and web-based management consoles can interact with network devices using the same HTTP tooling they use for everything else.
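
The operation-to-method mapping can be sketched as request construction - the hostname is illustrative; the /restconf/data path and yang-data media type follow RFC 8040 conventions:

```python
# Build the HTTP request a RESTCONF client would send for each
# operation on a YANG data path. Construction only -- nothing is
# sent over the network here.

BASE = "https://router.example.com/restconf/data"   # illustrative host

def restconf_request(operation, path, body=None):
    methods = {"read": "GET", "replace": "PUT",
               "merge": "PATCH", "remove": "DELETE"}
    return {
        "method": methods[operation],
        "url": f"{BASE}/{path}",
        "headers": {"Accept": "application/yang-data+json"},
        "body": body,
    }

req = restconf_request("read", "ietf-interfaces:interfaces/interface=eth0")
print(req["method"], req["url"])
# GET https://router.example.com/restconf/data/ietf-interfaces:interfaces/interface=eth0
```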

At PANTHEON.TECH: RESTCONF is a core northbound interface in lighty.io. PANTHEON.TECH contributed the RESTCONF implementations, both the original Bierman draft and RFC 8040, to OpenDaylight.

S

SandWork

Automation

PANTHEON.TECH's enterprise data center orchestration platform built specifically for SONiC-based networks. SandWork provides a graphical and API-driven interface for Day-0 through Day-2 operations across SONiC switch fabrics - topology design, bulk configuration, VLAN and VRF management, PortChannel setup, BGP-EVPN overlay provisioning, and ongoing monitoring. It uses OpenDaylight as its southbound control plane and supports multi-vendor SONiC hardware from Edgecore, Celestica, MICAS Networks, and others.

At PANTHEON.TECH: SandWork is commercially available with enterprise support.

SDN (Software-Defined Networking)

SDN & Control

An architecture that separates the network control plane from the data plane, moving forwarding decisions out of individual devices and into a centralised software controller. Network devices become programmable forwarding elements; the controller holds the global network view and translates policies into per-device instructions pushed via southbound protocols like OpenFlow, NETCONF, or gNMI. Northbound APIs expose that view to applications and orchestrators above. The separation enables programmatic, vendor-neutral automation that is simply not possible when control logic is locked inside proprietary device firmware.

At PANTHEON.TECH: SDN is the foundation of both lighty.io (SDN controller SDK) and the broader PANTHEON.TECH product portfolio.

SDN Controller

SDN & Control

The centralised software component in an SDN architecture that maintains a real-time view of network topology and translates high-level policy into device-level forwarding instructions. A controller receives topology events from devices via southbound protocols, builds a network graph from that information, and uses it to make decisions that are then pushed back to devices as configuration or flow rules. Well-known examples include OpenDaylight, ONOS, and custom controllers built on the lighty.io SDK. The controller is often the component with the highest availability requirements in an SDN deployment - if it loses its topology view, the automation built on top of it loses its ground truth.

SONiC (Software for Open Networking in the Cloud)

Data Center

An open-source network operating system originally developed by Microsoft Azure for its own data center switches and now hosted under the Linux Foundation. SONiC runs on whitebox switches across a wide range of merchant silicon ASICs (Broadcom Tomahawk, Trident, Marvell Prestera) and delivers a full-featured NOS through a container-based architecture where each network function - BGP, LLDP, VXLAN, ACL management - runs in its own Docker container. Containers communicate through a centralised Redis database (housing stores such as CONFIG_DB and APPL_DB), with the Switch State Service (SWSS) daemon reading from Redis and programming the ASIC accordingly - giving SONiC clean service isolation and independent restartability per function. The same NOS that runs in Microsoft's hyperscale data centers is now available to enterprise and service provider deployments.

At PANTHEON.TECH: SONiC is the primary target platform for SandWork. PANTHEON.TECH also integrates FD.io VPP with SONiC as a high-performance software forwarding complement.

Spine-leaf topology

Data Center

A two-tier data center fabric where leaf switches connect directly to servers (and to every spine switch), while spine switches connect only to leaf switches. The full mesh between leaf and spine creates a non-blocking fabric with equal-cost paths between every pair of servers - traffic from any server to any other server crosses exactly the same number of hops. This consistency eliminates the latency asymmetry of three-tier hierarchical designs and makes ECMP load balancing predictable. Spine-leaf is the reference architecture for modern data center networks running BGP-EVPN/VXLAN overlays on SONiC or other NOS platforms.
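
The path math can be sketched directly; spine and leaf names are invented. Between servers on different leaves there is exactly one path per spine, and every path is leaf, spine, leaf:

```python
# Invented fabric: four spines, two leaves, every leaf wired to every spine.
spines = ["spine1", "spine2", "spine3", "spine4"]

# All equal-cost paths from a server on leaf1 to a server on leaf2:
paths = [("leaf1", spine, "leaf2") for spine in spines]

print(len(paths))                 # ECMP fan-out equals the number of spines
print({len(p) for p in paths})    # every path has the same switch hop count
```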

Spring Boot

Cloud Native

An opinionated Java framework that handles the boilerplate of building production-ready microservices - embedded HTTP server, health checks, metrics endpoints, dependency injection, and configuration management - so developers focus on application logic instead. In the SDN and network automation context, Spring Boot is used to host northbound REST APIs on top of lighty.io, exposing application-specific endpoints alongside or in addition to the standard RESTCONF interface. lighty.io ships a Spring Boot integration module that connects the two with minimal configuration.

StoneWork

Data Plane

An open-source, production-grade multi-service network function platform built on FD.io VPP and the Ligato VPP Agent. StoneWork consolidates what would otherwise be separate CNFs - router, firewall, NAT gateway, IPsec tunnel endpoint, VPN, IS-IS routing - into a single container with a unified configuration and management interface. Running everything in one container cuts the resource overhead of per-function sidecar containers and simplifies the operational model: one process to manage, one interface to configure, one log stream to monitor. This makes StoneWork practical for service provider edge deployments and enterprise branch sites where resource constraints rule out a full CNF service chain.

At PANTHEON.TECH: StoneWork is an open-source PANTHEON.TECH project available on GitHub, with enterprise support offered commercially.

T

T-API (Transport API)

Telecom / RAN

An Open Networking Foundation standard API for controlling optical and packet transport networks through a unified northbound interface. T-API abstracts the physical complexity of ROADM-based optical networks, OTN switching, and SDH/SONET equipment behind a consistent REST/YANG interface that network controllers and orchestrators can use regardless of the underlying vendor hardware. Multi-domain, multi-vendor transport orchestration - which previously required bespoke integration per vendor - becomes feasible when every domain exposes the same T-API interface.

V

vCPE (Virtual Customer Premises Equipment)

Data Plane

A virtualised approach to customer premises networking that replaces dedicated hardware appliances at each site with software functions running either in the service provider's cloud or on low-cost whitebox hardware at the customer location. A physical router, firewall, and VPN gateway that once required a truck roll to install and another truck roll to upgrade becomes a set of VNFs or CNFs that can be provisioned and reconfigured remotely. The model reduces hardware costs, cuts time-to-service, and lets service providers offer new features without touching site hardware.

VLAN (Virtual Local Area Network)

Data Center

A logical layer-2 segment that partitions a physical switch fabric into isolated broadcast domains without requiring separate hardware. IEEE 802.1Q defines the VLAN standard, inserting a 12-bit VLAN ID tag into Ethernet frames to identify which logical segment each frame belongs to. A single physical switch can carry traffic for up to 4,094 VLANs simultaneously. VLANs provide network segmentation, tenant isolation, and traffic policy enforcement at layer 2, and they are the building block that VXLAN was designed to scale beyond when 4,094 segments stopped being enough.
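
A minimal sketch of 802.1Q tag insertion, assuming an untagged Ethernet frame as input: the 4-byte tag (TPID 0x8100 plus a 16-bit TCI) goes between the source MAC and the EtherType.

```python
import struct

def add_vlan_tag(frame: bytes, vid: int, pcp: int = 0, dei: int = 0) -> bytes:
    """Insert an 802.1Q tag after the two 6-byte MAC addresses."""
    if not 1 <= vid <= 4094:
        raise ValueError("valid VLAN IDs are 1-4094")
    # TCI: 3-bit priority (PCP), 1-bit drop eligible (DEI), 12-bit VLAN ID.
    tci = (pcp << 13) | (dei << 12) | vid
    tag = struct.pack("!HH", 0x8100, tci)   # TPID 0x8100 marks an 802.1Q tag
    return frame[:12] + tag + frame[12:]

# Toy untagged frame: zeroed MACs, IPv4 EtherType, dummy payload.
frame = bytes(12) + b"\x08\x00" + b"payload"
tagged = add_vlan_tag(frame, vid=100)
print(tagged[12:16].hex())   # 81000064 -> TPID 0x8100, VID 100
```

The 12-bit VID field is exactly where the 4,094-segment ceiling comes from.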

VNF (Virtual Network Function)

Cloud Native

A network function deployed as a virtual machine on a hypervisor rather than a dedicated hardware appliance. VNFs were the first generation of software-based network functions under the NFV architecture - firewalls, load balancers, WAN optimisers, and session border controllers running as VMs on commodity servers. The model proved that hardware independence was achievable at scale, but the operational overhead of managing full VMs - image management, hypervisor licences, slow startup times - drove the shift toward CNFs on Kubernetes. VNFs remain in production in many carrier environments where the transition to cloud-native is ongoing.

VRF (Virtual Routing and Forwarding)

Data Center

A technology that creates multiple independent routing table instances on a single physical router or switch. Each VRF has its own routing table, forwarding table, and set of interfaces, so traffic in one VRF is completely isolated from traffic in another even though both share the same hardware. VRFs are the standard mechanism for multi-tenant layer-3 isolation in data center fabrics - each tenant gets its own VRF, and BGP-EVPN distributes per-tenant routing information while VXLAN carries the encapsulated tenant traffic across the fabric.

At PANTHEON.TECH: SandWork manages VRF configuration as part of SONiC fabric multi-tenancy automation.

VXLAN (Virtual Extensible LAN)

Data Center

A network overlay encapsulation protocol (RFC 7348) that tunnels layer-2 Ethernet frames inside UDP/IP packets, stretching layer-2 segments across layer-3 boundaries. VXLAN uses a 24-bit Virtual Network Identifier (VNI) to distinguish up to approximately 16.7 million logical segments - a practical answer to the 4,094-VLAN ceiling of 802.1Q. In modern data center fabrics, VXLAN is the data plane encapsulation used by BGP-EVPN: BGP-EVPN decides where MAC and IP addresses live, VXLAN carries the frames there. FD.io VPP handles VXLAN encapsulation and decapsulation at line rate in software.
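
The header layout can be sketched in a few lines, following the RFC 7348 field positions: an 8-byte header whose flags byte (0x08) marks the VNI as valid, with the 24-bit VNI in the upper bytes of the second word.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348) for a given VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit field")
    # Word 1: flags byte 0x08 ("I" bit set), rest reserved.
    # Word 2: 24-bit VNI in the top three bytes, low byte reserved.
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(5000)
print(hdr.hex())   # 0800000000138800 -> flags 0x08, VNI 5000 (0x001388)
```

The 24-bit VNI field is what lifts the segment count from 802.1Q's 4,094 to roughly 16.7 million.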

At PANTHEON.TECH: SandWork configures VXLAN overlays on SONiC fabrics and FD.io VPP provides high-performance VXLAN encapsulation/decapsulation in software forwarding paths.

W

Whitebox Networking

Data Center

An approach that separates the switching ASIC from the network operating system, letting operators choose hardware and software independently from any vendor. A whitebox switch is commodity hardware built on merchant silicon - Broadcom Tomahawk or Trident, Marvell Prestera - from an ODM manufacturer, with no vendor NOS bundled in. The operator installs their NOS of choice: SONiC, DENT, or a commercial option. This disaggregation eliminates hardware-level vendor lock-in and brings hyperscaler-grade economics to enterprise and service provider data centers: a whitebox switch runs the same forwarding silicon as a vendor-branded equivalent at a fraction of the cost.

Y

YANG (Yet Another Next Generation)

Protocols

A data modelling language (RFC 6020 / RFC 7950) that defines the structure, data types, and constraints for configuration and operational data on network devices and management systems. YANG is the schema layer that makes NETCONF, RESTCONF, and gNMI useful: without YANG models, those protocols transport unstructured data that requires custom parsing per device and per feature. With YANG models, every piece of data has a known type, valid value range, and relationship to other data - so a controller can validate a configuration before pushing it, and an application can read device state without understanding vendor-specific encoding. Vendors publish YANG models for their devices; the IETF and the OpenConfig operator consortium publish vendor-neutral models that work across implementations.
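
To give a flavour of the language, here is a small hypothetical module; the module name, namespace, and every node in it are invented for illustration:

```yang
module example-device {
  namespace "urn:example:device";
  prefix exdev;

  container interfaces {
    list interface {
      key "name";
      leaf name { type string; }
      leaf enabled {
        type boolean;
        default "true";
      }
      leaf mtu {
        // The range constraint lets a controller reject an invalid
        // MTU before the configuration ever reaches the device.
        type uint16 { range "68..9216"; }
      }
    }
  }
}
```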

At PANTHEON.TECH: YANG Tools is a core library within OpenDaylight and lighty.io. PANTHEON.TECH maintains a standalone YANG model validator tool.