
CDNF.io related posts on PANTHEON.tech.


[What Is] Multus CNI

May 20, 2022 | in Blog, CDNF.io | by PANTHEON.tech

Multus CNI (Container Network Interface) is a novel approach to managing multiple CNIs in your container network (Kubernetes). As its name – Latin for "many" – suggests, Multus is an open-source plugin that serves as an additional layer in a container network, enabling multi-interface support. Virtual Network Functions (VNFs), for example, often depend on connectivity to multiple network interfaces.

The CNI project itself, backed by the Cloud Native Computing Foundation, defines a minimum specification on what a common interface should look like. The CNI project consists of three primary components:

  • Specification: an API that lies between the network plugins and runtimes
  • Plugins: depending on use-cases, they help set up the network
  • Library: CNI specifications as Go implementations, which are then utilized by runtimes

Each CNI plugin can deliver different functionality, which makes Multus a convenient way to manage these plugins and make them work together.

Multus delivers this functionality in the form of a contract between the container runtime and a selection of plugins, which are called upon to do the actual network configuration tasks.

Multus Characteristics

  • Manages the contract between the container runtime and plugins
  • Performs no network configuration by itself (depends on other plugins)
  • Groups plugins into delegates (with Flannel commonly serving as the default)
  • Supports reference & third-party plugins
  • Supports SR-IOV, DPDK, OVS-DPDK & VPP workloads with cloud-native & NFV-based applications

Multus Plugin Support & Management

Currently, Multus supports all plugins maintained in the official CNI repository, as well as third-party plugins like Contiv, Cilium, or Calico.

Plugins are managed as delegates, which Multus can invoke in a defined sequence, based on either a JSON schema or a CNI configuration. Flannel is an overlay network for Kubernetes, which configures a layer-3 network fabric and therefore satisfies Kubernetes' networking requirements (it runs as the default plugin in many setups). Multus assigns the eth0 interface in the pod to the primary/master plugin, while the remaining plugins receive netX interfaces (net0, net1, etc.).
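For illustration, here is a minimal sketch of how a secondary interface is typically requested through Multus: a NetworkAttachmentDefinition wraps a delegate CNI configuration (a macvlan plugin in this example), and a pod references it via an annotation. The names and addresses below are our own assumptions, not taken from this post.

# Hypothetical delegate definition for a secondary macvlan interface
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net                  # illustrative name
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth0",
    "ipam": {
      "type": "static",
      "addresses": [ { "address": "10.10.0.10/24" } ]
    }
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-if-pod
  annotations:
    # Multus attaches this network as net1, in addition to the default eth0
    k8s.v1.cni.cncf.io/networks: macvlan-net
spec:
  containers:
    - name: app
      image: alpine
      command: ["sleep", "infinity"]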

StoneWork in K8s Pod with Multiple Interfaces

Our team created a demo on how to run StoneWork in a MicroK8s pod with multiple interfaces attached via the Multus add-on.

This example attaches two existing host interfaces to the StoneWork container running in a MicroK8s pod. Highlights include the option to add multiple DPDK interfaces, as well as multiple af_packet interfaces, to StoneWork with this configuration.

If you are interested in more details regarding this implementation, contact us for more information!

Utilizing Cloud-Native Network Functions

If you are interested in high-quality CNFs for your next or existing project, make sure to check out our portfolio of cloud-native network functions, by PANTHEON.tech.


Leave us your feedback on this post!

You can contact us here.

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.

 


BPMN & ONAP in Spine-Leaf DC’s

September 2, 2021 | in Blog, CDNF.io | by PANTHEON.tech

Enterprises require workflows to understand internal processes, how they apply to different branches, and how to divide responsibility to achieve a common goal. Using a workflow makes it possible to pick & choose which models are required.

Although there are many alternatives, BPMN is a standard widely used across several fields to graphically depict business processes and manage them.

Notable, although underrated, are its benefits for network administrators. BPMN enables network device management & automation, without having to fully comprehend the different programming languages involved in each task.

What is BPMN?

The Business Process Model & Notation (BPMN) standard graphically represents specifics of business processes in a business process model. In cooperation with the Camunda platform, which provides its own BPMN engine, it can do wonders with network orchestration automation.

BPMN lets enterprises graphically depict internal business procedures and enables companies to render these procedures in a standardized manner. Using BPMN removes the need for software developers to adjust business logic since the entire workflow can be managed through a UI.

In the case of network management, it provides a level of independence, abstracted from the network devices themselves.

The same logic of standardizing business processes as workflows is present in the Open Network Automation Platform (ONAP) as well.

What is ONAP?

ONAP is an open-source orchestration and automation framework that provides robust, real-time, policy-driven management of physical and virtual network functions.

ONAP allows network scaling and VNF/CNF implementations in a fully automated manner. Read our in-depth post on what ONAP is and how you can benefit from its usage. BPMN is implemented within ONAP via Camunda.

Camunda is an open-source platform, used in the ONAP Service Orchestrator – where it serves as one of the core components of the project to handle BPMN 2.0 process flows.

Relationship between ONAP & BPMN

The Service Orchestrator (SO) component includes a BPMN Execution Engine. Two Camunda products are utilized within ONAP SO:

  • Cockpit: View BPMN 2.0 workflow definitions
  • Modeler: Edit BPMN 2.0 process flows

The SO component is mostly composed of Java & Groovy code, including a Camunda BPMN code-flow.

PANTHEON.tech circumvents the need for SO and uses the Camunda BPMN engine directly. This resulted in a project with SO functionality, without the additional SO components – sort of a microONAP concept.

Features: Camunda & BPMN

Business process modeling is just one piece of network orchestration. As with any project integration, it is important to emphasize the project's strong points, which enabled us to achieve a successful use case.

Benefits of Camunda/BPMN

  • Automation: BPMN provides a library of reusable boxes, which make their use more accessible by avoiding/hiding unnecessary complexity
  • Performant BPMN Engine: the engine provides good out-of-the-box performance, with a variety of operator/DevOps UI tools, as well as BPMN modeling tools
  • User Interface: OOTB user interface, with the option of creating a custom user interface
  • DevOps: easy manipulation & development of processes
  • Scalability: in terms of performance tuning and architecture development for lots of tasks
  • Interoperability: extensible components, REST integration, or script hooks for Groovy, JavaScript & more
  • REST API: available for BPMN engine actions
  • Exceptional Error Handling
  • Scalability: tasks with a high execution cadence can be externalized and implemented as scalable microservices. That provides scalability not only to the system itself, but can be applied to teams and organizations as well
  • Process tracking: the execution of the process is persisted and tracked, which helps with system recovery and continuation of the process execution in partial and complete failure scenarios.

One thing PANTHEON.tech had to mitigate was, for example, parallelism – running several processes at once. Imprecise timing estimation limits high-precision configuration of network devices. Imagine you want to automate a process starting with Task 1; after a certain time, Task 2 takes effect. Timers in BPMN, however, need manual configuration to tune the interval between jobs & processes.

Our deep dive into this topic resulted in a concept for automating network configurations in spine-leaf data centers, using a lightweight ONAP SO architecture alternative.


Use Case: Virtual Network Configuration in Spine-Leaf Data Centers

PANTHEON.tech has verified that the design of this use case's custom architecture is fully functional and meets the required criteria – to fully adopt network automation in a demanding environment.

Our use-case shows how BPMN can be used as a network configuration tool in, for example, data centers. In other words – how ONAP’s SO and lighty.io could be used to automate your data center.

If you are interested in this use case, make sure to contact us and we can brief you on the details.


by Filip Gschwandtner | Leave us your feedback on this post!

You can contact us here!

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


Cloud-Native Firewall + ONAP (CDS) Integration

April 26, 2021 | in Blog, CDNF.io | by PANTHEON.tech

PANTHEON.tech’s Firewall CNF can be integrated with the ONAP Controller Design Studio (CDS) component.

We achieved a successful & effective integration with the Firewall CNF and CDS, in an easy-to-understand use-case: block and allow traffic between two Docker containers.

Cloud-Native Firewall & CDS

With ONAP, orchestration, management, and automation of network services is simple, yet effective. It allows operators to define policies and act on network changes in real time.

With CDS, users can configure other ONAP components as well – such as SDN-C or SDN-R, and thereby directly configure the network itself.

CDS is responsible for designing and controlling self-services – a fully self-defined software system. It makes these self-services so accessible that minimal to no code development is required; it is usable by non-programmers as well.

Position of CDS within the ONAP architecture

Self-contained services are defined by a Controller Blueprint Archive (CBA). The core of the CBA structure defines the service according to TOSCA – the Topology and Orchestration Specification for Cloud Applications. These blueprints are modeled, enriched to become fully self-contained TOSCA blueprints, and uploaded to CDS.

ONAP Demo Simplification

Our VPP-Agent-based Firewall CNF can be configured using CDS; afterward, it effectively blocks or allows traffic between two Alpine Linux containers.

The workflow of applying a configuration to our Firewall CNF is comprised of two steps:

  1. Resolve the configuration template
  2. Apply the resolved configuration to the CNF, using the REST API (a sketch of such a configuration follows below)
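For illustration, the resolved configuration applied in step 2 might look something like the following Ligato VPP-Agent ACL sketch. The ACL name, interface, and addresses are our own assumptions, not taken from the demo.

# Hypothetical resolved Firewall CNF configuration (Ligato vpp-agent ACL model)
vppConfig:
  acls:
    - name: deny-alpine1-to-alpine2    # illustrative ACL name
      interfaces:
        ingress:
          - memif1                     # CNF interface facing the first container
      rules:
        - action: DENY                 # block traffic matching the rule below
          ipRule:
            ip:
              sourceNetwork: 192.168.22.1/32
              destinationNetwork: 192.168.33.1/32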

This shows the versatility and agility of our CNFs, by showcasing another possible integration in a popular project, such as ONAP.

Try our Firewall CNF + CDS Demo

This demonstration is available on our GitHub!

The script in our demonstration provides a setup where the necessary containers are started and the data plane and control plane are brought into place.

The script will then showcase traffic (pinging) from the start point to endpoint in three scenarios:

  1. Firewall CNF is not configured
  2. Firewall CNF is configured by CDS to deny traffic
  3. Firewall CNF is configured by CDS to allow traffic

PANTHEON.tech & ONAP

PANTHEON.tech is closely involved in, and follows, the development of various ONAP components.

The CPS component is of crucial importance in the ONAP project, since it serves as a common data layer service, which preserves network-element runtime information in the form of database functionality.

PANTHEON.tech's involvement in ONAP CPS includes creating an easy and common platform that makes testing deployments easier and highlights where optimization is needed or has been achieved.

We hope you enjoyed this demonstration!


Make sure to visit our cloud-network functions (CNF) portfolio!

by Filip Gschwandtner | Leave us your feedback on this post!

You can contact us here.

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


[Tutorial] StoneWork + GNS3

April 5, 2021 | in Blog, CDNF.io | by PANTHEON.tech

PANTHEON.tech has made StoneWork, its data plane for managing cloud-native network functions, available on the GNS3 marketplace. This makes it easy for anybody to try out our all-in-one solution, which can combine multiple cloud-native network functions from our CNF portfolio, in an isolated environment.

This tutorial will give you the basics on how to set up StoneWork in an environment where you can safely test its interaction and positioning within your (simulated) network.

The goal of this tutorial is to have a basic setup (sketched below the list), where we will:

  • Set the StoneWork interface IP address
  • Set the status of StoneWork to UP
  • Verify the connection by pinging the address
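As a taste of what those steps involve, here is a minimal sketch of a StoneWork (Ligato vpp-agent style) interface configuration. The interface name, type, and address are illustrative assumptions, not taken from the tutorial itself.

# Hypothetical StoneWork configuration: one interface, enabled, with an IP address
vppConfig:
  interfaces:
    - name: my-iface          # illustrative logical name
      type: AF_PACKET         # attach to a host interface (assumption)
      afpacket:
        hostIfName: eth1      # host interface to bind (assumption)
      enabled: true           # sets the admin status to UP
      ipAddresses:
        - 192.168.1.1/24      # address to ping for verification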

Read the complete post after subscribing:



[Release] Cloud-Native Network Function YAML Editor

March 18, 2021 | in Blog, CDNF.io | by PANTHEON.tech

Verify & Edit CNF YAML Configurations

CDNF.io YAML Editor is an open-source YAML configuration editor & verification tool. It is part of the CNF portfolio – as an added bonus, you can verify your cloud-native network function configurations with our tool!

The editor is available on the official website!

Features

  • YAML & JSON Schema Validation
  • Generating YAML Examples
  • Import & Export of Configurations

YAML Configuration Validation

Import, or copy & paste, a YAML configuration via the three-dot menu in the Configuration tab. We have conveniently placed an Examples folder, with a JSON Schema that serves as the default validation example.

Errors will then be highlighted against the imported JSON Schema.

How-To: Validate your YAML file

  1. Visit the CDNF.io YAML Editor website
  2. Import/paste a valid draft-04 JSON Schema, or use the existing example, via the folder icon, in the JSON Schema tab, on the right.
    {
      "type": "object",
      "properties": {
        "user": {
          "type": "object",
          "properties": {
            "id": {
              "$ref": "#/definitions/positiveInt"
            },
            "name": {
              "type": "string"
            },
            "birthday": {
              "type": "string",
              "chance": {
                "birthday": {
                  "string": true
                }
              }
            },
            "email": {
              "type": "string",
              "format": "email"
            }
          },
          "required": [
            "id",
            "name",
            "birthday",
            "email"
          ]
        }
      },
      "required": [
        "user"
      ],
      "definitions": {
        "positiveInt": {
          "type": "integer",
          "minimum": 0,
          "minimumExclusive": true
        }
      }
    }
  3. Have a look at the generated Example YAML code in the YAML Example tab.

Invalid YAML File

  • Import, or copy & paste this invalid YAML example into the Configuration window
user:
  id: -33524623
  name: "Jon Snow"
  birthday: "19/12/283"
  email: "jonsnow@gmail.com"

Valid YAML File

  • Import, or copy & paste this valid YAML example into the Configuration window
user:
  id: 33524623
  name: "John Snow"
  birthday: "19/12/283"
  email: "jonsnow@gmail.com"

Limitations

The JSON Schema specification recommends using the definitions key, under which all definitions should be located. You should then use a relative path to point to the definitions.

Our implementation of JSON Schema requires a definitions object if the $ref ID links to a definition and does not use a relative path.

  • Supported: JSON Schema draft-04 (and included features, such as valid formats, etc.)
  • Not supported: Loading definitions from external URIs

Feedback for CNF Tools

Leave us your feedback here or create an Issue in the repository of the CDNF.io YAML Editor. Explore our portfolio of cloud-native network functions, developed by PANTHEON.tech.

Make sure to visit our playlist on YouTube!


Secure Access to SONiC Switch w/ IPSec & StoneWork

March 8, 2021 | in Blog, CDNF.io | by PANTHEON.tech

StoneWork | An IPSec Appliance

Our CNF portfolio is steadily growing. Our latest addition is StoneWork.

Here, StoneWork enables you to securely and remotely access your management plane.

StoneWork is a solution which, thanks to its modular architecture, enables you to combine multiple CNFs from the CNF portfolio using only one data plane – increasing the overall throughput while keeping rich functionality.

One of the many features of StoneWork is IPSec, which we will talk about in this post.

StoneWork IPSec + SONiC

This case study briefly describes how the StoneWork IPsec appliance can be used on your SONiC-enabled switch to secure & tunnel your out-of-band (OOB) SONiC management interface.

StoneWork is part of our CNF portfolio. It is an enhanced VPP distribution, which serves as an all-in-one switch/router/firewall.

StoneWork (IPsec) test setup by PANTHEON.tech

If you are interested in the deployment script, click here to contact us!

In this demonstration, two SONiC OS instances are provisioned to represent two IPSec gateways. But instead of actual physical switches, each SONiC OS runs inside a Docker container with a P4-simulated SAI behavioral model software switch ASIC underneath.

This P4 ASIC also runs as a separate container, to keep the emulated physical interfaces separated from kernel-space ports. The link between the ASIC and SONiC containers is a network namespace reference, /var/run/netns/sw_net, which the P4 ASIC expects to point to the ASIC container from the filesystem of the SONiC container.

On top of that, there is a StrongSwan appliance running in a container, using the same network namespace as SONiC for the sake of AF_PACKET. In total there are three containers to represent one switch.

In-between the switches there is a “bridge” container, used only to capture traffic and verify that it is indeed encrypted. On the opposite side of switches, there are containers representing hosts – one is used as a TCP client, the other as a server.

What is SONiC?

SONiC is a Linux-based network operating system, available as an open-source project, meant for network routers & switches.

The architecture is similar to that of OpenDaylight or lighty.io – it is composed of modules, on top of a centralized infrastructure, which is easily scalable.

Its main benefits are the usage of the Redis-engine infrastructure & placement of modules into Docker containers. The primary functional components are DHCP-Relay, PMon, SNMP, LLDP, BGP, TeamD, Database, SWSS, SyncD.

SONiC covers all the components needed for a complete L3 device. Its main use-case presents a cloud-data center, with the possibility of sharing software stacks among different platforms. Currently, over 100 platforms are officially supported.

An important concept of SONiC is that it does not interact with the hardware directly. Instead, it programs the switch ASIC via the vendor-neutral Switch Abstraction Interface, or SAI for short.

This approach, on one hand, allows maintaining vendor independence, while decoupling the network software and hardware. On the other hand, it creates boundaries on what can be performed with the underlying networking hardware.


PANTHEON.tech Proves 2020 Leadership in Contributions to Linux Foundation Networking Projects

January 26, 2021 | in Blog, CDNF.io | by PANTHEON.tech

The Linux Foundation Networking: 2020 Year in Review shows PANTHEON.tech leading open-source enrichment and customer innovation adoption in SDN Automation, Cloud-Native, 5G & O-RAN.

Source: LFX Insights

Leadership and Contribution

PANTHEON.tech is pleased to showcase the Linux Foundation Networking “2020 Year in Review”, which highlights our continued commitment to open-source enrichment and customer adoption.

This report reflects a series of metrics for last year and we are extremely proud to be highlighting our continued leading levels of participation and contribution in LFN’s technical communities. As an example, PANTHEON.tech provided over 60% of the commits to OpenDaylight in 2020.

This is an extraordinary achievement, given this is in the company of such acclaimed peers as AT&T, Orange S.A., Cisco Systems Inc., Ericsson, and Samsung.

Customer Enablement

Clearly, this report demonstrates that open-source software solutions have secured themselves in many customers' network architectures and strategies, with even more customers following this lead. Leveraging its expertise and experience, PANTHEON.tech has, since its inception, focused on offering customers application development services and enterprise-grade, tailored or productized open-source solutions with an accompanying full support model.

PANTHEON.tech leads the way in enabling customers with software-defined network automation, comprehensively integrating into an ecosystem of vendor and open orchestration, systems, and network devices across all domains of customers' networks. Our solutions facilitate automation for services such as O-RAN, L2/L3/E-VPN, 5G, or Data Centre, amongst many others.

Leveraging multiple open-source projects, including FD.io, we assist customers in embracing cloud-native, developing tailored, enterprise-grade network functions which focus on customers' immediate and future requirements and performance objectives.

We help our customers unlock the potential of their network assets, whether new, legacy, proprietary, open, multi-domain, or multi-layer. PANTHEON.tech has solutions to simplify and optimize customers' networks, systems, and operations.

The key takeaway is that customers can rely on PANTHEON.tech to deliver: unlocking services in your existing networks, innovating and adopting new networks and services, while simplifying your operations.

You can contact us here!

Please contact PANTHEON.tech to discuss how we can assist your open-source network and application goals with our comprehensive range of services, subscriptions, and training.


EntGuard | Next-Gen Enterprise VPN

January 25, 2021 | in Blog, CDNF.io | by PANTHEON.tech

At present, enterprises take various approaches to securing the external perimeters of their networks – from centralized Virtual Private Networks (VPNs), through access without a VPN, to solutions such as EntGuard VPN.

For encryption, protection & security, meet EntGuard – the ultimate enterprise VPN solution.

The most dangerous cyber-threats are those which are not yet identified. Enterprises need to act proactively and secure access to their networks.

Contact us for more information!

Work-From-Home & Cybersecurity

We saw an increased need for working from home last year. But what was first a necessity seems to be staying as a popular alternative to working from an office.

That also means that, as an enterprise, you need to go the extra mile to protect your employees and both your data and theirs. A VPN will:

  • Encrypt your internet traffic
  • Protect you from data-leaks
  • Provide secure access to internal networks – with an extra layer of security!

Encrypt – Secure – Protect.

With EntGuard VPN, PANTHEON.tech utilized years of working on network technologies and software, to give you an enterprise-grade product, that is built for the cloud.

With the world rapidly shifting towards virtual spaces, it is projected that the spending on cybersecurity will increase by 10% in 2021. EntGuard will save you costs, without compromising quality.

Built on WireGuard®

We decided to build EntGuard VPN on the critically-acclaimed WireGuard® protocol. The protocol focuses on ease-of-use & simplicity, as opposed to existing solutions like OpenVPN – while achieving incredible performance! Did you know that WireGuard® is natively supported in the Linux kernel and FD.io VPP since 2020?

WireGuard® is relied on for high-speeds and privacy protection. Complex, state-of-the-art cryptography, with lightweight architecture. An incredible combination.

Unfortunately, it is not easy to maintain WireGuard® in enterprise environments. That is why we decided to bring you EntGuard, which gives you the ability to use WireGuard® tunnels in your enterprise environment.

EntGuard Highlights

  • Supported server platforms: Linux
  • Clients: Windows, Linux, Android, macOS
  • Simple Management
  • State-of-the-art Cryptography (WireGuard®)
  • Built-In Roaming
  • Container Ready
  • Certification & Support Services
  • Premium Features: Be the first to try out new features, such as – MFA, LDAP, Radius, end-station remote support, traffic monitoring, problem analysis and more!

About our CNFs

The PANTHEON.tech cloud-native network functions portfolio keeps on growing. At the start of 2020, we introduced you to the project, which at the moment houses 18 CNFs. Make sure to keep up-to-date with our future products by following us on social media!


[What Is] ONAP | Open Network Automation Platform

January 18, 2021 | in Blog, CDNF.io | by PANTHEON.tech

ONAP (Open Network Automation Platform) is quite a trend in the contemporary SDN world. It is a broad project, consisting of a variety of sub-projects (or components), which together form a network function orchestration and automation platform. Several enterprises are active in ONAP, and its growth is accelerating rapidly. PANTHEON.tech is a proud contributor as well.

What is ONAP?

The platform itself emerged from the AT&T ECOMP (Enhanced Control, Orchestration, Management & Policy) and Open-O (Open Orchestrator) initiatives. ONAP is an open-source software platform, that offers a robust, real-time, policy-driven orchestration and automation framework, for physical and virtual network functions. It exists above the infrastructure layer, which automates the network.

ONAP enables end-users to connect services through the infrastructure. It allows network scaling and VNF/CNF implementations in a fully automated manner, among other benefits:

  • Bring agile deployment & best practices to the telecom world
  • Add & deploy new features on a whim
  • Improve network efficiency & sink costs

Its goal is to enable operators and developers, networks, IT, and the cloud to quickly automate new technologies and support full lifecycle management. It is capable of managing (build, plan, orchestrate) Virtual Network Functions (VNF), as well as Software-Defined Networks (SDN).

ONAP’s high-level architecture involves numerous software subsystems (components). PANTHEON.tech is involved in multiple ONAP projects, but mostly around controllers (like SDN-C). For a detailed view, visit the official wiki page for the architecture of ONAP.

SDN-C

SDN-C is one of the components of ONAP – the SDN controller. It is basically OpenDaylight, with additional Directed Graph Execution capabilities. In terms of architecture, ONAP SDN-C is composed of multiple Docker containers.

One of these containers runs the Directed Graph Creator – a user-friendly web UI that can be used to create directed graphs. Another container runs the Admin Portal. The next one runs the relational database, which is the focal point of the SDN-C implementation and is used by each container. Lastly, the SDN-C container runs the controller itself.

This component is of particular interest to us, because it contains all the logic behind the execution of directed graphs. We have previously shown how lighty.io can integrate well with SDN-C and drastically improve performance.

According to the latest 5G use-case paper for ONAP, SDN-C has managed to implement “radio-related optimizations through the SDN-R sub-project and support for the A1 interface”.

CDS: Controller Design Studio

As the official documentation puts it:

CDS Designer UI is a framework to automate the resolution of resources for instantiation and any config provisioning operation, such as day0, day1, or day2 configuration.

CDS has both design-time & run-time activities. During design time, the designer can define what actions are required for a given service, along with anything comprising the action. The design produces a CBA Package. Its content is driven by a catalog of reusable data dictionaries and components, delivering a reusable and simplified self-service experience.

CDS enables users to adapt resources in a way, where no direct code-changes are needed. The Design Studio gives users, not only developers, the option to customize the system, to meet the customer’s demands. The two main components of CDS are the frontend (GUI) and backend (run-time). It is possible to run CDS in Kubernetes or an IDE of your choice.

One interesting use-case shows the creation of a WordPress CNF via CDS.

SO: Service Orchestration

The primary role of SO is the automation of the provisioning operations of end-to-end service instances. In support of overall end-to-end service instantiation, processes, and maintenance, SO is accountable for the instantiation and setup of VNFs.

To accomplish its purpose, Service Orchestration performs well-defined processes – usually triggered by receiving service requests, created by other ONAP components, or by Order Lifecycle Management in the BSS layer.

The orchestration procedure is either manually developed or received from ONAP’s Service Design and Development (SDC) portion, where all service designs are created for consumption and exposed/distributed.

The latest achievement of the Service Orchestrator is the implementation of new workflows such as:

  • CSMF – Communication Service Management Function
  • NSMF – Network Slice Management Function
  • NSSMF – Network Slice Sub-Net Management Function

DMaaP: Data Movement as a Platform

The DMaaP component is a data movement service, which transports and processes data from a selected source to the desired target. It is capable of transferring data and messages between ONAP components, data filtering/compression/routing, as well as message routing and batch/event-based processing.

DCAE: Data Collection Analytics & Events

The Data Collection Analytics & Events component does exactly what’s in its name – gather performance, usage & configuration data from the managed environment. The component guards events in a sense – if something significant occurs or an anomaly is detected, DCAE takes appropriate actions.

The component collects and stores data that is necessary for analysis while providing a framework for the development of needed analytics.

In recent releases, DCAE added collectors and other microservices required to support telemetry collection for 5G network optimization; this includes the O1 interface from O-RAN.

A&AI: Active & Available Inventory

The Active & Available Inventory component offers real-time views of the managed products and services, the relationships between them, and their connections.

A&AI is an inventory of resources that are active, available, and allocated. It establishes a multi-dimensional relationship between the programs and infrastructure under administration, and it provides interfaces for dynamic network topology requests – both canned and ad-hoc queries.

Recently, A&AI gained schema support for 5G service design and slicing models.

Is ONAP worth it?

Yes, it is. If you have come to this conclusion, you might feel that ONAP is the right fit for your needs. It is an enormous project, with around 20 components.

If you feel overwhelmed, don’t worry and leave it to the experts – contact PANTHEON.tech today for your ONAP integration needs!


PANTHEON.tech 2020: A Look Back

December 22, 2020 | in Blog, CDNF.io | by PANTHEON.tech

Join us in reminiscing and reminding you, what PANTHEON.tech has managed to create, participate in, or inform about in 2020.

Project: CDNF.io

In the first quarter of the year, we have made our latest project, CDNF.io, accessible to the public. Cloud-native functions were long overdue in our portfolio and let me tell you – there are lots of them, ready to be deployed anytime.

We have prepared a series of videos, centered around our CNFs, which you can conveniently view here:

Perhaps you like to read more than hear someone explain things to you? We wrote a few posts on:

  • The Road from PNFs, to VNFs, to CNFs
  • Integrating Network Service Mesh with Cloud-Native Network Functions | by Milan Lenčo & Pavel Kotúček
  • A Cloud-Native Firewall | by Filip Gschwandtner

Integration Scenarios

Apart from our in-house solutions, we have worked on demonstrating several scenarios with common technologies behind them: ServiceNow® & Cisco’s Network Services Orchestrator.

In terms of ServiceNow®, our posts centered around:

  • Network Automation with ServiceNow® & OpenDaylight | by Miroslav Kováč
  • Cloud-Native Firewall Orchestration with ServiceNow® | by Slavomír Mazúr

Cisco's NSO got a nearly all-inclusive treatment, thanks to Samuel Kontriš, with a de facto NSO guide on:

  • NSO Integration w/ SDN-C (ONAP)
  • NSO Integration w/ lighty.io
  • Cisco Network Service Orchestrator

This includes two videos about the Network Service Orchestrator:

Open-Source Software Releases

We have made several projects available on our GitHub, which we regularly maintain and update. What stole the spotlight was the lighty.io NETCONF Device Simulator & Monitoring Tool, which you can download here.

PANTHEON.tech has also been active in adding new features to existing open-source projects, such as:

  • Integrating memif with T-REX | by Július Milan
  • Updating Swagger in OpenDaylight to OpenAPI 3.0

lighty.io, our open-source passion project, celebrated its 13th release, which also included a separate post highlighting improvements and changes. 

Thoughts, Opinions & Information

Since we did not want to exclude people who might not be that knowledgeable about what we do, we have created a few series on technologies and concepts PANTHEON.tech is engaged in, such as:

  • What is AF_XDP | by Marek Závodský
  • The Difference Between Binding-Aware & Binding Independent
  • What is SDN & NFV

We listen closely to what Robert Varga, the top single contributor to the OpenDaylight source code, has to say about OpenDaylight. That allowed us to publish opinion/informative pieces like:

  • The Future of Karaf
  • A Developers perspective on OpenDaylight Sodium
  • Ultimate Guide to OpenDaylight | with Samuel Kontriš

Step into a new decade

We would like to thank everybody who does their part in working on and contributing to projects at PANTHEON.tech, and to open-source projects as well. 2020 was challenging, to say the least, but pulling together makes us stronger.

Happy holidays and a happy New Year to our colleagues, partners, and readers – from PANTHEON.tech.


Road to Cloud-Native Network Functions

October 20, 2020 | in Blog, CDNF.io | by PANTHEON.tech

We have come a long way to enjoy all the benefits that cloud-native network functions bring us – lowered costs, agility, scalability & resilience. This post will break down the road to CNFs – from PNF to VNF, to CNF.

What are PNFs (physical network functions)?

Back in the ’00s, network functions were utilized in the form of physical, hardware boxes, where each box served the purpose of a specific network function. Imagine routers, firewalls, load balancers, or switches as PNFs, utilized in data centers for decades before another technology replaced them. PNF boxes were difficult to operate, install, and manage.

Just as a personal computer was once unimaginable, we were once unable to imagine virtualized network functions. Thanks to cheaper, off-the-shelf hardware and the expansion of cloud services, enterprises were able to afford to move some network parts from PNFs to generic, commodity hardware.

What are VNFs (virtual network functions)?

The approach of virtualization enabled us to share hardware resources between multiple tenants while keeping the isolation of environments in place. The next logical step was the move from the physical, to the virtual world.

A VNF is a virtualized network function that runs on top of hardware networking infrastructure. Individual functions of a network may be implemented or combined in order to create a complete package of a networking-communication service. A virtual network function can be part of an SDN architecture or used as a singular entity within a network.

Today's standardization of VNFs would not be possible without ETSI's Open Source MANO architecture, or the TOSCA standard, which can serve for lifecycle management. These are, for example, used in the open-source platform ONAP (Open Network Automation Platform).

What are CNFs (cloud-native network functions)?

Cloud-native network functions are software implementations of functions traditionally performed by PNFs – and they need to conform to cloud-native principles. They can be packaged within a container image, are always ready to be deployed & orchestrated, and can be chained together to perform a series of complex network functions.

Why should I use CNFs?

Microservices, and the overall benefits of adopting cloud-native principles, bring several advantages, which show a natural evolution of network functions in the 2020s. Imagine the benefits of:

  • Reduced Costs
  • Immediate Deployment
  • Easy Control
  • Agility, Scalability & Resilience

Our CNF project delivers on all of these promises. Get up-to-date with your network functions and contact us today, to get a quote.


[What Is] Declarative vs. Imperative Approach

August 27, 2020 | in Blog, CDNF.io | by PANTHEON.tech

by Filip Čúzy | Leave us your feedback on this post!

A DevOps paradigm, a programmatic approach, or Kubernetes management. The decision between a declarative and an imperative approach is not really a choice – as we will explain in this post.

The main difference between the declarative and imperative approaches is:

  • Declarative: You say what you want, but not how
  • Imperative: You describe how to do something

Declarative Approach

Users will mainly use the declarative approach when describing how services should run, for example: "I want 3 instances of this service to run simultaneously".

In the declarative approach, a YAML file containing the desired configuration is read and applied according to the declarative statement. A controller will then know about the YAML file and apply it where needed. Afterwards, the K8s scheduler will start the services wherever it has the capacity to do so.

Kubernetes, or K8s for short, lets you decide which approach to use. When using the imperative approach, you explain to Kubernetes in detail how to deploy something. The imperative way includes the commands create, run, get & delete – basically any verb-based command. A comparison of the two approaches is sketched below.
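Here is a minimal sketch of the contrast (our own illustration, not from the original post): the same three-replica service expressed declaratively as a manifest, with the imperative equivalents noted as comments.

# Declarative: describe the desired state and let Kubernetes converge to it.
#   kubectl apply -f nginx-deployment.yaml
# Imperative equivalent (you spell out each step yourself):
#   kubectl create deployment nginx --image=nginx:1.21
#   kubectl scale deployment nginx --replicas=3
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx            # illustrative name
spec:
  replicas: 3            # "I want 3 instances of this service to run simultaneously"
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21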

Will I ever manage imperatively?

Yes, you will. Even when using declarative management, there is always an operator that translates the intent into a sequence of orders and operations to carry out. There might also be several operators, which cooperate or split the responsibility for parts of the system.

Although declarative management is recommended in production environments, imperative management can serve as a faster introduction to managing your deployments, with more control over each step you would like to introduce.

Each approach has its pros and cons; the choice ultimately depends on your deployment and how you want to manage it.

While software-defined networking aims for automation, once your network is fully automated, enterprises should consider IBN (Intent-Based Networking) the next big step.

Intent-Based Networking (IBN)

Intent-Based Networking is an idea introduced by Cisco, which makes use of artificial intelligence, as well as machine learning, to automate various administrative tasks in a network. This means telling the network, in a declarative way, what you want to achieve, relieving you of the burden of describing exactly what the network should do.

For example, we can configure our CNFs in a declarative way, where we state the intent – how we want the CNF to function, but we do not care how the configuration of the CNF will be applied to, for example, VPP.

For this purpose, VPP-Agent will send the commands in the correct sequence (with additional help from KVscheduler), so that the configuration will come as close as possible to the initial intent.


[Integration] Network Service Mesh & Cloud-Native Functions

July 17, 2020 | in CDNF.io, News | by PANTHEON.tech

by Milan Lenčo & Pavel Kotúček | Leave us your feedback on this post!

As part of a webinar, in cooperation with the Linux Foundation Networking, we have created two repositories with examples from our demonstration “Building CNFs with FD.io VPP and Network Service Mesh + VPP Traceability in Cloud-Native Deployments“:

  • CNF NSM Example
  • VPP Traceability

Check out our full webinar, in cooperation with the Linux Foundation Networking, on YouTube:

What is Network Service Mesh (NSM)?

Recently, Network Service Mesh (NSM) has been drawing lots of attention in the area of network function virtualization (NFV). Inspired by Istio, Network Service Mesh maps the concept of a Service Mesh to L2/L3 payloads. It runs on top of (any) CNI and builds additional connections between Kubernetes Pods in the run-time, based on the Network Service definition deployed via CRD.

Unlike Contiv-VPP, for example, NSM is mostly controlled from within applications through the provided SDK. This approach has its pros and cons.

Pros: Gives programmers more control over the interactions between their applications and NSM

Cons: Requires a deeper understanding of the framework to get things right

Another difference is that NSM intentionally offers only minimalistic point-to-point connections between pods (or clients and endpoints, in its terminology). Everything that can be implemented via CNFs is left out of the framework. Even things as basic as connecting a service chain with external physical interfaces, or attaching multiple services to a common L2/L3 network, are not supported and are instead left to the users (programmers) of NSM to implement.

Integration of NSM with Ligato

At PANTHEON.tech, we see the potential of NSM and decided to tackle the main drawbacks of the framework. For example, we have developed a new plugin for Ligato-based control-plane agents, that allows seamless integration of CNFs with NSM.

Instead of having to use the low-level and imperative NSM SDK, the users (not necessarily software developers) can use the standard northbound (NB) protobuf API, in order to define the connections between their applications and other network services in a declarative form. The plugin then uses NSM SDK behind the scenes to open the connections and creates corresponding interfaces that the CNF is then ready to use.

The CNF components, therefore, do not have to care about how the interfaces were created, whether it was by Contiv, via NSM SDK, or in some other way, and can simply use logical interface names for reference. This approach allows us to decouple the implementation of the network function provided by a CNF from the service networking/chaining that surrounds it.

The plugin for Ligato-NSM integration is shipped both separately, ready for import into existing Ligato-based agents, and also as a part of our NSM-Agent-VPP and NSM-Agent-Linux. The former extends the vanilla Ligato VPP-Agent with the NSM support while the latter also adds NSM support but omits all the VPP-related plugins when only Linux networking needs to be managed.

Furthermore, since most of the common network features are already provided by Ligato VPP-Agent, it is often unnecessary to do any additional programming work whatsoever to develop a new CNF. With the help of the Ligato framework and tools developed at Pantheon, achieving the desired network function is often a matter of defining network configuration in a declarative way inside one or more YAML files deployed as Kubernetes CRD instances. For examples of Ligato-based CNF deployments with NSM networking, please refer to our repository with CNF examples.

Finally, included in the repository is also a controller for K8s CRD defined to allow deploying network configuration for Ligato-based CNFs like any other Kubernetes resource defined inside YAML-formatted files. Usage examples can also be found in the repository with CNF examples.

CNF Chaining using Ligato & NSM (example from LFN Webinar)

In this example, we demonstrate the capabilities of the NSM agent – a control-plane for Cloud-native Network Functions deployed in a Kubernetes cluster. The NSM agent seamlessly integrates the Ligato framework for Linux and VPP network configuration management, together with Network Service Mesh (NSM) for separating the data plane from the control plane connectivity, between containers and external endpoints.

In the presented use-case, we simulate a scenario in which a client from a local network needs to access a web server with a public IP address. The necessary Network Address Translation (NAT) is performed in-between the client and the webserver by the high-performance VPP NAT plugin, deployed as a true CNF (Cloud-Native Network Functions) inside a container. For simplicity, the client is represented by a K8s Pod running image with cURL installed (as opposed to being an external endpoint as it would be in a real-world scenario). For the server-side, the minimalistic TestHTTPServer implemented in VPP is utilized.

In all three Pods, an instance of the NSM Agent runs to communicate with the NSM manager via the NSM SDK and negotiate additional network connections that connect the pods into a chain:

Client <-> NAT-CNF <-> web-server (see diagrams below)

The agents then use the features of the Ligato framework to further configure Linux and VPP networking around the additional interfaces provided by NSM (e.g. routes, NAT).

The configuration to apply is described declaratively and submitted to NSM agents in a Kubernetes-native way, through our own Custom Resource called CNFConfiguration. The controller for this CRD (installed by cnf-crd.yaml) simply reflects the content of applied CRD instances into an ETCD datastore, from which it is read by NSM agents. For example, the configuration for the NSM agent managing the central NAT CNF can be found in cnf-nat44.yaml; a rough sketch of the idea follows below.
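As a rough illustration of the mechanism only – the API group, version, and field names below are our guesses, not the actual CRD schema; see cnf-nat44.yaml in the repository for the real definition:

# Hypothetical CNFConfiguration instance; the real schema lives in the demo repository
apiVersion: pantheon.tech/v1    # assumed API group/version
kind: CNFConfiguration
metadata:
  name: cnf-nat44
spec:
  microservice: nsm-agent-vpp   # assumed selector for the agent receiving the config
  configContent: |              # Ligato-style config reflected into ETCD by the controller
    vppConfig:
      nat44Global:
        forwarding: true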

More information about cloud-native tools and network functions provided by PANTHEON.tech can be found on our website here.

Networking Diagram: Network Service Mesh Manager Architecture

Routing Diagram: CNF NAT44 Routing

Steps to recreate the Demo

    1. Clone the following repository.
    2. Create Kubernetes cluster; deploy CNI (network plugin) of your preference
    3. Install Helm version 2 (latest NSM release v0.2.0 does not support Helm v3)
    4. Run helm init to install Tiller and to set up a local configuration for Helm
    5. Create a service account for Tiller
      $ kubectl create serviceaccount --namespace kube-system tiller
      $ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
      $ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
    6. Deploy NSM using Helm:
      $ helm repo add nsm https://helm.nsm.dev/
      $ helm install --set insecure=true nsm/nsm
    7. Deploy ETCD + controller for CRD, both of which will be used together to pass configuration to NSM agents:
      $ kubectl apply -f cnf-crd.yaml
    8. Submit the definition of the network topology for this example to NSM:
      $ kubectl apply -f network-service.yaml
    9. Deploy and start simple VPP-based webserver with NSM-Agent-VPP as control-plane:
      $ kubectl apply -f webserver.yaml
    10. Deploy VPP-based NAT44 CNF with NSM-Agent-VPP as control-plane:
      $ kubectl apply -f cnf-nat44.yaml
    11. Deploy Pod with NSM-Agent-Linux control-plane and curl for testing connection to the webserver through NAT44 CNF:
      $ kubectl apply -f client.yaml
    12. Test connectivity between client and webserver:
      $ kubectl exec -it client curl 80.80.80.80/show/version
    13. To confirm that client’s IP is indeed source NATed (from 192.168.100.10 to 80.80.80.102) before reaching the web server, one can use the VPP packet tracing:
      $ kubectl exec -it webserver vppctl trace add memif-input 10
      $ kubectl exec -it client curl 80.80.80.80/show/version
      $ kubectl exec -it webserver vppctl show trace
      
      00:01:04:655507: memif-input
        memif: hw_if_index 1 next-index 4
          slot: ring 0
      00:01:04:655515: ethernet-input
        IP4: 02:fe:68:a6:6b:8c -> 02:fe:b8:e1:c8:ad
      00:01:04:655519: ip4-input
        TCP: 80.80.80.100 -> 80.80.80.80
      ...

A Cloud-Native & Unified Firewall

April 28, 2020 | in Blog, CDNF.io | by PANTHEON.tech

by Filip Gschwandtner | Leave us your feedback on this post!

Updated 11/05/2020: Our Unified Firewall Demo was updated with additional insight into how we achieved great results with our solution.

We generally differentiate between hardware and software firewalls. Software firewalls can reside in userspace (for example, VPP) or kernel space (for example, Netfilter). These serve as a basis for cloud-native firewalls. The main advantage of software firewalls is the ability to scale without additional hardware. This is done in the virtual machines or containers (Docker) where these firewalls reside and function.

One traditional firewall utility in Linux is named iptables, which comes pre-installed in most Linux distributions. It is configured via the command line and acts as an enforcer of Netfilter rules and configuration. You can find a great how-to on configuring iptables in the Ubuntu Documentation.

For a more performance-oriented firewall solution, you can turn to the evergreen, Vector Packet Processing framework and Access Control Lists (ACLs).

Our CNF Project offers such a cloud-native function – Access Control List (ACL)-based firewall between CNF interfaces with FD.io VPP dataplane and Ligato management plane.

If we have sparked your interest in this solution, make sure to contact us directly. Until then, make sure to watch our CNF project closely – there is more to come!

Firewall Solutions

Multiple solutions mean a wide variety for a user or company to choose from. But since each firewall uses a different API, we can almost immediately see an issue with the management of multiple solutions. Some APIs are more fully-fledged than others, while requiring various levels of access (high-level vs. low-level API) and several layers of features.

At PANTHEON.tech, we found that having a unified API, above which a management system would reside, would make a perfectly balanced firewall.

Cloud-Native: We will be using the open-source Ligato microservices platform. The advantage: Ligato is cloud-native.

Implementation: The current implementation unifies the ACL in FD.io‘s VPP and the NetFilter in the Linux Kernel. For this purpose, we will be using the open-source VPP-Agent from Ligato.

Separate Layers: This architecture enables us to extend it to any configurable firewall, as seen below.


Layer Responsibilities: Computer networks are divided into network layers, where each layer has a different responsibility. We have modeled (proto-model) a unification API and translation to technology-specific firewall configuration. The unified layer has a unified API, which it translates and sends to the technology-specific API. The current implementation is via the VPP-Agent Docker container.

Ligato and VPP-Agent: In this implementation, we make full-use of VPP-Agent and Ligato, via gRPC communication. Each firewall has an API, modeled like a proto-model. This makes resolving failures a breeze.

Resolving Failures: Imagine that, in a cloud, software can end with a fatal error. The common solution is to suspend the container and restart it. This means, however, that you need to set up the configuration again or synchronize it with an existing configuration from higher layers.

Fast Reading of Configurations: There is no need to load everything again throughout all layers, down to the concrete firewall technology – these technologies are often slow in loading configuration. Ligato resolves this by keeping the configuration within the Ligato platform, in external key-value storage (ETCD, if integrated with Ligato).

How did we do this?

We created this unifying API by using a healthy subset of all technologies. We preferred simplified API writing – since, for example in iptables, there can be lots of rules which can be written in a more compact way.

We analyzed several firewall APIs, which we broke down into basic blocks. We defined the basic filters for packet traffic – meaning from which interface and in which direction the traffic is flowing. Furthermore, we defined rules, with the selector being the final filter, and actions, which should occur for the selected traffic (a simple allow/deny operation).

There are several types of selectors (a sketch of a unified rule follows the list):

  • L2 (according to the source MAC address)
  • L3 (IP and ICMP Selector)
  • L4 (Only TCP traffic via flags and ports / UDP traffic via ports)

The read/write performance of our Unified Firewall Layer solution was tested using VPP and iptables (Netfilter), at 250k rules. The initial tests ended with poor writing speed, but we experimented with various combinations and ended up putting a lot of rules into a few rule-groups.

That did not go as planned either.

A deep analysis showed that the issue was not within Ligato, since the task manager showed that VPP/the Linux kernel was fully working. We verified iptables separately, using only the go-iptables library: it was very slow when adding too many rules to one chain. Fortunately, iptables provides additional tools which are able to export and import data fast. The disadvantage is that the export format is poorly documented. I therefore performed an iptables export shortly before the commit and imported the data back afterward.

# Generated by iptables-save v1.6.1
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:testchain - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
<<insert new data here>>
COMMIT

Our Open-Source Commitment

We achieved a speed increase for 20k rules in one iptables chain – from 3 minutes and 14 seconds down to a few seconds. This proved a perfect performance fix for the VPP-Agent, which we committed to the Ligato VPP-Agent repository.

This also benefited updates, since each update has to be implemented as a delete-and-create case (recreated each time). I made it an optional method with a custom number of rules from which it applies: with few rules, the default approach (via the iptables rule API) is already fast, and now we have a solution for a lot of rules as well. Due to the lack of detailed documentation of the iptables-save output format, I decided to turn this option off by default.

The results of the performance test are:

  • 25 rule-groups x 10000 rules for each rule-group
  • Write: 1 minute 49 seconds
  • Read: 359.045785ms

Reading is super-fast, due to all data residing in RAM in the Unified Layer. This means it all comes down to one gRPC call with encoding/decoding.

If we have sparked your interest in this solution, make sure to contact us directly.


memif + T-REX: CNF Testing Made Easy

February 18, 2020 | in CDNF.io, News | by PANTHEON.tech

PANTHEON.tech developer Július Milan has integrated memif into T-REX, a traffic generator which you can use to test the speed of network devices. Now you can test cloud-native functions that support memif natively in the cloud, without specialized network cards!

Imagine a situation where multiple cloud-native functions are interconnected or chained via memif. Tracking their utilization would be a nightmare. With our memif + T-REX solution, you can make arbitrary measurements – effortlessly and straightforwardly. The results will be more precise and direct, as opposed to creating adapters and interconnecting them in order to be able to measure traffic.

Our commitment to open-source has a long track record. With lighty-core being open-sourced and our CTO Robert Varga being the top-single contributor to OpenDaylight source code, we are proving once again that our heart belongs to the open-source community.

The combination of memif & T-REX makes measuring cloud-native function performance easy & straightforward.

memif, the “shared memory packet interface”, allows for any client (VPP, libmemif) to communicate with DPDK using shared memory. Our solution makes memif highly efficient, with zero-copy capability. This saves memory bandwidth and CPU cycles while adding another piece to the puzzle for achieving a high-performance CNF.

It is important to note that zero-copy works on the newest version of DPDK. memif & T-REX can be used in zero-copy mode when the T-REX side of the pair is the master; the other side of the memif pair (VPP or some cloud-native function) is the zero-copy slave.

T-REX, developed by Cisco, solves the issue of buying stateful/realistic traffic generators, which can set your company back by up to $500,000 – limiting testing capabilities and slowing down the entire process. T-REX solves this by being an accessible, open-source, stateful/stateless traffic generator, fueled by DPDK.

Services that function in the cloud are characterized by unlimited presence: they can be accessed from anywhere with a functional connection and are located on remote servers. This may curb costs, since you do not have to create and maintain your own servers in a dedicated physical space.

PANTHEON.tech is proud to be a technology enabler, with continuous support for open-source initiatives, communities & solutions.


You can contact us at https://pantheon.tech/

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.

More @ PANTHEON.tech

  • [What Is] VLAN & VXLAN
  • [What Is] Whitebox Networking?
  • [What Is] BGP EVPN?