Multus CNI (Container Network Interface) is a novel approach to managing multiple CNIs in your container network (Kubernetes). True to its name (multus means "many" in Latin), Multus is an open-source plug-in that serves as an additional layer in a container network and enables multi-interface support. For example, Virtual Network Functions (VNFs) often depend on connectivity to multiple network interfaces.
The CNI project itself, backed by the Cloud Native Computing Foundation, defines a minimum specification on what a common interface should look like. The CNI project consists of three primary components:
Specification: an API that lies between the network plugins and runtimes
Plugins: depending on the use case, they help set up the network
Library: a Go implementation of the CNI specification, which is then utilized by runtimes
Each CNI plugin can deliver different functionality, which makes Multus a wonderful plugin for managing them and making them work together.
Multus delivers this functionality by acting as a contact point between the container runtime and a selection of plugins, which are called upon to do the actual network configuration tasks.
Multus Characteristics
Manages the contact between the container runtime and plugins
Does no network configuration by itself (it depends on other plugins)
Manages plugins by handling them as delegates (for example Flannel), which can be invoked in a certain sequence, based on either a JSON schema or a CNI configuration. Flannel is an overlay network for Kubernetes, which configures a layer 3 network fabric and therefore satisfies Kubernetes requirements (it runs by default in many deployments). Multus assigns the eth0 interface in the pod to the primary/master plugin, while the remaining plugins receive netX interfaces (net1, net2, etc.). A minimal example follows below.
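As a minimal sketch of how this looks in practice (the attachment name macvlan-net, the host interface eth1, the image, and the subnet are illustrative assumptions, and the macvlan and host-local plugins are assumed to be installed alongside Multus), a secondary interface can be attached to a pod like this:

kubectl apply -f - <<'EOF'
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "10.10.0.0/24"
      }
    }
EOF

# The annotation asks Multus for the extra attachment; eth0 stays on the
# default cluster network, the secondary interface shows up as net1.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: multi-if-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-net
spec:
  containers:
  - name: app
    image: alpine
    command: ["sleep", "infinity"]
EOF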
StoneWork in K8s Pod with Multiple Interfaces
Our team created a demo on how to run StoneWork in a MicroK8s pod with multiple interfaces attached via the Multus add-on.
This example attaches two existing host interfaces to the StoneWork container running in a MicroK8s pod. Highlights include the option to add multiple DPDK interfaces, as well as multiple af_packet interfaces, to StoneWork with this configuration.
If you are interested in high-quality CNFs for your next or existing project, make sure to check out CDNF.io – a portfolio of cloud-native network functions, by PANTHEON.tech.
Enterprises require workflows to understand internal processes, how they apply to different branches, and how to divide responsibility to achieve a common goal. Using a workflow lets you pick & choose which models are required.
Although there are many alternatives, BPMN is a standard widely used across several fields to graphically depict business processes and manage them.
Notable, although underrated, are its benefits for network administrators. BPMN enables network device management & automation, without having to fully comprehend the different programming languages involved in each task.
What is BPMN?
The Business Process Model & Notation (BPMN) standard graphically represents specifics of business processes in a business process model. In cooperation with the Camunda platform, which provides its own BPMN engine, it can do wonders with network orchestration automation.
BPMN lets enterprises graphically depict internal business procedures and enables companies to render these procedures in a standardized manner. Using BPMN removes the need for software developers to adjust business logic since the entire workflow can be managed through a UI.
In the case of network management, it provides a level of independence, abstracted from the network devices themselves.
This logic behind how business processes are standardized as workflows is present in the Open Network Automation Platform (ONAP) as well.
What is ONAP?
ONAP is an open-source orchestration and automation framework for physical and virtual network functions: robust, real-time, and policy-driven.
Camunda is an open-source platform, used in the ONAP Service Orchestrator – where it serves as one of the core components of the project to handle BPMN 2.0 process flows.
The SO component is mostly composed of Java & Groovy code, including a Camunda BPMN code-flow.
PANTHEON.tech circumvents the need for SO and uses the Camunda BPMN engine directly. This resulted in a project with SO functionality, without the additional SO components – sort of a microONAP concept.
Features: Camunda & BPMN
Business process modeling is only one part of network orchestration. As with any project integration, it is important to emphasize the project's strong points, which enabled us to achieve a successful use case.
Benefits of Camunda/BPMN
Automation: BPMN provides a library of reusable boxes, which make their use more accessible by avoiding/hiding unnecessary complexity
Performant BPMN Engine: the engine provides good out-of-the-box performance, with a variety of operator/DevOps UI tools, as well as BPMN modeling tools
User Interface: OOTB user interface, with the option of creating a custom user interface
DevOps: easy manipulation & development of processes
Scalability: in terms of performance tuning and architecture development for lots of tasks
Interoperability: extensible components, REST integration, or script hooks for Groovy, JavaScript & more
REST API: available for BPMN engine actions
Exceptional Error Handling
Scalability: tasks with high execution cadence can be externalized and be implemented as scalable microservices. That provides not only scalability to the system itself but can be applied to the teams and organizations as well
Process tracking: the execution of the process is persisted and tracked, which helps with system recovery and continuation of the process execution in partial and complete failure scenarios.
What PANTHEON.tech had to mitigate was, for example, parallelism: running several processes at once. Another limitation is timing estimation, which constrains high-precision configuration of network devices. Imagine you want to automate a process starting with Task 1; after a certain time, Task 2 takes effect. Timers in BPMN, however, need manual configuration to tune the interval between jobs & processes.
Our deep dive into this topic resulted in a concept for automating network configurations in spine-leaf data centers, using a lightweight ONAP SO architecture alternative.
Use Case: Virtual Network Configuration in Spine-Leaf Data Centers
PANTHEON.tech has ensured that the custom architecture designed for this use case is fully functional and meets the required criteria: to fully adopt network automation in a demanding environment.
Our use-case shows how BPMN can be used as a network configuration tool in, for example, data centers. In other words – how ONAP’s SO and lighty.io could be used to automate your data center.
If you are interested in this use case, make sure to contact us and we can brief you on the details.
We achieved a successful & effective integration with the Firewall CNF and CDS, in an easy-to-understand use case: block and allow traffic between two Docker containers.
CDNF.io Firewall & CDS
With ONAP, orchestration management and automation of network services is simple, yet effective. It allows you to define policies and act on network changes in real-time.
With CDS, users can configure other ONAP components as well – such as SDN-C or SDN-R, and thereby directly configure the network itself.
CDS is responsible for designing and controlling self-services: a fully self-defined software system. It makes these self-services so accessible that minimal to no code development is required, so it is usable by non-programmers as well.
Position of CDS within the ONAP architecture
Self-contained services are defined by a Controller Blueprint Archive (CBA). The core of the CBA structure defines the service according to TOSCA (the Topology and Orchestration Specification for Cloud Applications). These blueprints are modeled, enriched to become fully self-contained TOSCA blueprints, and uploaded to CDS.
Our VPP-Agent-based Firewall CNF can be configured using CDS and afterward, effectively blocks or allows traffic between two Alpine Linux containers.
The workflow of applying a configuration to our Firewall CNF is comprised of two steps:
Resolve the configuration template
Apply the resolved configuration to the CNF, using the REST API
This shows the versatility and agility of our CNFs, by showcasing another possible integration in a popular project, such as ONAP.
The script in our demonstration provides a setup, where necessary containers are started and the data plane and control plane are brought in place.
The script will then showcase traffic (pinging) from the start point to endpoint in three scenarios:
Firewall CNF is not configured
Firewall CNF is configured by CDS to deny traffic
Firewall CNF is configured by CDS to allow traffic
PANTHEON.tech & ONAP
PANTHEON.tech is closely involved in and follows the development of various ONAP components.
The CPS (Configuration Persistence Service) component is of crucial importance in the ONAP project, since it serves as a common data layer service that preserves network-element runtime information in the form of database functionality.
PANTHEON.tech's involvement in ONAP CPS includes creating a common platform that makes testing deployments easier and highlights where optimization is needed or has been achieved.
PANTHEON.tech has made its data plane for managing cloud-native network functions, StoneWork, available on the GNS3 marketplace. This makes it easy for anybody to try out our all-in-one solution, which can combine multiple cloud-native network functions from our CDNF.io portfolio, in a separate environment.
This tutorial will give you the basics on how to set up StoneWork in an environment where you can safely test out its interaction and positioning within your (simulated) network.
The goal of this tutorial is to have a basic setup, where we will:
Import, or copy & paste, a YAML configuration via the three-dot menu in the Configuration tab. We have conveniently placed an Examples folder, with a JSON Schema that serves as the basis for validation.
Errors will then be highlighted, against the imported JSON schema.
The JSON Schema specification recommends using the definitions key, where all definitions should be located. You should then use a relative path to point to the definitions.
Our implementation of the JSON schema requires a definitions object, if the ref ID links to a definition and does not use a relative path.
Not supported: Loading definitions from external URIs
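To illustrate (the schema below is a made-up example, not one of the shipped Examples), a schema that keeps its definitions local and references them via a relative $ref pointer looks like this:

cat > example.schema.json <<'EOF'
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "interface": { "$ref": "#/definitions/interface" }
  },
  "definitions": {
    "interface": {
      "type": "object",
      "properties": {
        "name":    { "type": "string" },
        "enabled": { "type": "boolean" }
      },
      "required": ["name"]
    }
  }
}
EOF

The $ref stays within the same document, which is exactly the case the editor supports.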
Feedback for CDNF.io Tools
Leave us your feedback here or create an Issue in the repository of the CDNF.io YAML Editor. CDNF.io is a portfolio of cloud-native network functions, developed by PANTHEON.tech.
Make sure to visit CDNF.io and our playlist on YouTube!
Our CDNF.io portfolio is steadily growing. Our latest addition to the CDNF.io family is StoneWork.
Here, StoneWork enables you to securely and remotely access your management plane.
StoneWork is a solution which, thanks to its modular architecture, enables you to combine multiple CNFs from the CDNF.io portfolio, using only one data-plane, to increase the overall throughput, while keeping rich functionality.
One of the many features of StoneWork is IPSec, which we will talk about in this post.
StoneWork IPSec + SONiC
This case study briefly describes how the StoneWork IPsec appliance can be used on your SONiC-enabled switch to secure & tunnel the out-of-band (OOB) management interface of SONiC.
StoneWork is part of our CDNF.io portfolio. It is an enhanced VPP distribution, which serves as an all-in-one switch/router/firewall.
In this demonstration, two SONiC OS instances are provisioned to represent two IPSec gateways. But instead of actual physical switches, each SONiC OS runs inside a Docker container with a P4-simulated SAI behavioral model software switch ASIC underneath.
This P4 ASIC is also running as a separate container, to keep the emulated physical interfaces separated from kernel-space ports. The link between the ASIC and the SONiC container is a network namespace reference, /var/run/netns/sw_net, which is expected to point to the ASIC container's network namespace from within the filesystem of the SONiC container.
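One possible way to create such a reference (a sketch only; the container name p4-asic and the exact wiring in the demo scripts may differ) is to expose the ASIC container's network namespace on the host and bind-mount it into the SONiC container:

# PID of the (hypothetically named) ASIC container
ASIC_PID=$(docker inspect -f '{{.State.Pid}}' p4-asic)
# Expose its network namespace under the expected path
sudo mkdir -p /var/run/netns
sudo ln -sf /proc/"$ASIC_PID"/ns/net /var/run/netns/sw_net
# The SONiC container can then be started with this path bind-mounted,
# so /var/run/netns/sw_net resolves to the ASIC container's namespace.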
On top of that, there is a StrongSwan appliance running in a container, using the same network namespace as SONiC for the sake of AF_PACKET. In total there are three containers to represent one switch.
In-between the switches there is a “bridge” container, used only to capture traffic and verify that it is indeed encrypted. On the opposite side of switches, there are containers representing hosts – one is used as a TCP client, the other as a server.
What is SONiC?
SONiC is a Linux-based network operating system, available as an open-source project, meant for network routers & switches.
The architecture is similar to that of OpenDaylight or lighty.io – it is composed of modules, on top of a centralized infrastructure, which is easily scalable.
Its main benefits are the usage of the Redis-engine infrastructure & placement of modules into Docker containers. The primary functional components are DHCP-Relay, PMon, SNMP, LLDP, BGP, TeamD, Database, SWSS, SyncD.
SONiC covers all the components needed for a complete L3 device. Its main use-case presents a cloud-data center, with the possibility of sharing software stacks among different platforms. Currently, over 100 platforms are officially supported.
An important concept of SONiC is that it does not interact with the hardware directly. Instead, it programs the switch ASIC via the vendor-neutral Switch Abstraction Interface, or SAI for short.
This approach, on one hand, allows maintaining vendor independence, while decoupling the network software and hardware. On the other hand, it creates boundaries on what can be performed with the underlying networking hardware.
This report reflects a series of metrics for last year and we are extremely proud to be highlighting our continued leading levels of participation and contribution in LFN’s technical communities. As an example, PANTHEON.tech provided over 60% of the commits to OpenDaylight in 2020.
This is an extraordinary achievement, given this is in the company of such acclaimed peers as AT&T, Orange S.A., Cisco Systems Inc., Ericsson, and Samsung.
Customer Enablement
Clearly, this report demonstrates that open-source software solutions have secured themselves in many customers' network architectures and strategies, with even more customers following this lead. Leveraging its expertise and experience, PANTHEON.tech has, since its inception, been focused on offering customers application development services and enterprise-grade tailored or productized open-source solutions with an accompanying full support model.
PANTHEON.tech leads the way in enabling customers with Software-Defined Network automation, comprehensively integrating into an ecosystem of vendor and open orchestration systems and network devices across all domains of customers' networks. Our solutions facilitate automation for such services as O-RAN, L2/L3/E-VPN, 5G, or Data Centre, amongst many others.
Leveraging multiple open-source projects, including FD.io, we assist customers in embracing cloud-native, developing tailored enterprise-grade network functions, which focus on customer’s immediate and future requirements and performance objectives.
We help our customers unlock the potential of their network assets, whether new, legacy, proprietary, open, multi-domain, or multi-layer. PANTHEON.tech has solutions to simplify and optimize customers' networks, systems, and operations.
The key takeaway is that customers can rely on PANTHEON.tech to deliver: unlocking services in existing networks, innovating and adopting new networks and services, all while simplifying operations.
Please contact PANTHEON.tech to discuss how we can assist your open-source network and application goals with our comprehensive range of services, subscriptions, and training.
At present, enterprises take various approaches to securing the external perimeters of their networks: from centralized Virtual Private Networks (VPNs), through access without a VPN, to solutions such as EntGuard VPN.
That also means that, as an enterprise, you need to go the extra mile to protect your employees, your data, and theirs. A VPN will:
Encrypt your internet traffic
Protect you from data-leaks
Provide secure access to internal networks – with an extra layer of security!
Encrypt – Secure – Protect.
With EntGuard VPN, PANTHEON.tech utilized years of working on network technologies and software to give you an enterprise-grade product that is built for the cloud.
We decided to build EntGuard VPN on the critically-acclaimed WireGuard® protocol. The protocol focuses on ease-of-use & simplicity, as opposed to existing solutions like OpenVPN – while achieving incredible performance! Did you know that WireGuard® is natively supported in the Linux kernel and FD.io VPP since 2020?
WireGuard® is relied on for high-speeds and privacy protection. Complex, state-of-the-art cryptography, with lightweight architecture. An incredible combination.
Unfortunately, it's not easy to maintain WireGuard® in enterprise environments. That's why we decided to bring you EntGuard, which gives you the ability to use WireGuard® tunnels in your enterprise environment.
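For context, this is what a single, hand-maintained WireGuard® tunnel configuration typically looks like (keys, addresses, and the endpoint are placeholders); EntGuard's job is to manage many such peers centrally:

# Client-side configuration for a wg-quick managed tunnel
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.8.0.0/24, 192.168.0.0/16
PersistentKeepalive = 25
EOF
wg-quick up wg0   # bring the tunnel up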
Premium Features: Be the first to try out new features, such as – MFA, LDAP, Radius, end-station remote support, traffic monitoring, problem analysis and more!
The PANTHEON.tech cloud-native network functions portfolio, CDNF.io, keeps on growing. At the start of 2020, we introduced you to the CDNF.io project, which at the moment houses 18 CNFs. Make sure to keep up-to-date with our future products by following CDNF.io and our social media!
ONAP (Open Network Automation Platform) is quite a trend in the contemporary SDN world. It is a broad project, consisting of a variety of sub-projects (or components), which together form a network function orchestration and automation platform. Several enterprises are active in ONAP and its growth is accelerating rapidly. PANTHEON.tech is a proud contributor as well.
What is ONAP?
The platform itself emerged from the AT&T ECOMP (Enhanced Control, Orchestration, Management & Policy) and Open-O (Open Orchestrator) initiatives. ONAP is an open-source software platform that offers a robust, real-time, policy-driven orchestration and automation framework for physical and virtual network functions. It sits above the infrastructure layer and automates the network.
ONAP enables end-users to connect services through the infrastructure. It allows network scaling and VNF/CNF implementations in a fully automated manner. Among other benefits, it lets you:
Bring agile deployment & best practices to the telecom world
Add & deploy new features on a whim
Improve network efficiency & cut costs
Its goal is to enable operators and developers, networks, IT, and the cloud to quickly automate new technologies and support full lifecycle management. It is capable of managing (build, plan, orchestrate) Virtual Network Functions (VNF), as well as Software-Defined Networks (SDN).
ONAP’s high-level architecture involves numerous software subsystems (components). PANTHEON.tech is involved in multiple ONAP projects, but mostly around controllers (like SDN-C). For a detailed view, visit the official wiki page for the architecture of ONAP.
SDN-C
SDN-C is one of the components of ONAP – the SDN controller. It is basically OpenDaylight, with additional Directed Graph Execution capabilities. In terms of architecture, ONAP SDN-C is composed of multiple Docker containers.
One of these containers runs the Directed Graph Creator: a user-friendly web UI that can be used to create directed graphs. Another container runs the Admin Portal. The next one runs the relational database, which is the focal point of the SDN-C implementation and is used by each container. Lastly, the SDN-C container runs the controller itself.
According to the latest 5G use-case paper for ONAP, SDN-C has managed to implement “radio-related optimizations through the SDN-R sub-project and support for the A1 interface”.
CDS: Controller Design Studio
As the official documentation puts it:
CDS Designer UI is a framework to automate the resolution of resources for instantiation and any config provisioning operation, such as day0, day1, or day2 configuration.
CDS has both design-time & run-time activities. During design time, the designer can define what actions are required for a given service, along with anything comprising the action. The design produces a CBA Package. Its content is driven by a catalog of reusable data dictionaries and components, delivering a reusable and simplified self-service experience.
CDS enables users to adapt resources in a way, where no direct code-changes are needed. The Design Studio gives users, not only developers, the option to customize the system, to meet the customer’s demands. The two main components of CDS are the frontend (GUI) and backend (run-time). It is possible to run CDS in Kubernetes or an IDE of your choice.
SO: Service Orchestrator
The primary role of SO is the automation of the provisioning operations of end-to-end service instances. In support of overall end-to-end service instantiation, processes, and maintenance, SO is accountable for the instantiation and setup of VNFs.
To accomplish its purpose, Service Orchestration performs well-defined processes – usually triggered by receiving service requests, created by other ONAP components, or by Order Lifecycle Management in the BSS layer.
The orchestration procedure is either manually developed or received from ONAP’s Service Design and Development (SDC) portion, where all service designs are created for consumption and exposed/distributed.
The latest achievement of the Service Orchestrator is the implementation of new workflows such as:
CSMF – Communication Service Management Function
NSMF – Network Slice Management Function
NSSMF – Network Slice Sub-Net Management Function
DMaaP: Data Movement as a Platform
The DMaaP component is a data movement service, which transports and processes data from a selected source to the desired target. It is capable of transferring data and messages between ONAP components, data filtering/compression/routing, as well as message routing and batch/event-based processing.
DCAE: Data Collection Analytics & Events
The Data Collection Analytics & Events component does exactly what’s in its name – gather performance, usage & configuration data from the managed environment. The component guards events in a sense – if something significant occurs or an anomaly is detected, DCAE takes appropriate actions.
The component collects and stores data that is necessary for analysis while providing a framework for the development of needed analytics.
A&AI: Active & Available Inventory
The Active & Available Inventory functionality offers real-time views of the managed products and services, the relationships between them, and their connections.
A&AI is an inventory of resources that are active, available, and allocated. It establishes a multi-dimensional relationship between the programs and infrastructure under administration, and it provides interfaces for dynamic network topology requests, both canned and ad-hoc queries.
Recently, A&AI gained schema support for 5G service design and slicing models.
Is ONAP worth it?
Yes, it is. If you have come to this conclusion as well, then ONAP might be the right fit for your needs. It is an enormous project with around 20 components.
Join us in reminiscing and reminding you, what PANTHEON.tech has managed to create, participate in, or inform about in 2020.
Project: CDNF.io
In the first quarter of the year, we made our latest project, CDNF.io, accessible to the public. Cloud-native functions were long overdue in our portfolio and let me tell you – there are lots of them, ready to be deployed anytime. Check out the list of available CNFs or ask us about them.
We have prepared a series of videos, centered around our CNFs, which you can conveniently view here:
Perhaps you like to read more than hear someone explain things to you? We wrote a few posts on:
Apart from our in-house solutions, we have worked on demonstrating several scenarios with common technologies behind them: ServiceNow® & Cisco’s Network Services Orchestrator.
In terms of ServiceNow®, our posts centered around:
Since we did not want to exclude people who might not be that knowledgeable about what we do, we have created a few series on technologies and concepts PANTHEON.tech is engaged in, such as:
We try to listen closely to what Robert Varga, the top single contributor to the OpenDaylight source code, has to say about OpenDaylight. That allowed us to publish opinion/informative pieces like:
We would like to thank everybody who does their part in working and contributing to projects in PANTHEON.tech, but open-source projects as well. 2020 was challenging, to say the least, but pulling together, makes us stronger – together.
Happy holidays and a happy New Year to our colleagues, partners, and readers – from PANTHEON.tech.
We have come a long way to enjoy all the benefits that cloud-native network functions bring us – lowered costs, agility, scalability & resilience. This post will break down the road to CNFs – from PNF to VNF, to CNF.
What are PNFs (physical network functions)?
Back in the ’00s, network functions were utilized in the form of physical, hardware boxes, where each box served the purpose of a specific network function. Imagine routers, firewalls, load balancers, or switches as PNFs, utilized in data centers for decades before another technology replaced them. PNF boxes were difficult to operate, install, and manage.
Just as it was unimaginable to have a personal computer 20 years ago, we were unable to imagine virtualized network functions. Thanks to cheaper, off-the-shelf hardware and expansion of cloud services, enterprises were able to afford to move some network parts from PNFs to generic, commodity hardware.
What are VNFs (virtual network functions)?
The approach of virtualization enabled us to share hardware resources between multiple tenants while keeping the isolation of environments in place. The next logical step was the move from the physical, to the virtual world.
A VNF is a virtualized network function, that runs on top of a hardware networking infrastructure. Individual functions of a network may be implemented or combined together, in order to create a complete package of a networking-communication service. A virtual network function can be part of an SDN architecture or used as a singular entity within a network.
What are CNFs (cloud-native network functions)?
Cloud-native network functions are software implementations of functions traditionally performed by PNFs, and they need to conform to cloud-native principles. They can be packaged within a container image, are always ready to be deployed & orchestrated, and can be chained together to perform a series of complex network functions.
Why should I use CNFs?
Microservices and the adoption of cloud-native principles come with several benefits, which show a natural evolution of network functions in the 2020s. Imagine the benefits of:
Reduced Costs
Immediate Deployment
Easy Control
Agility, Scalability & Resilience
Our CDNF.io project delivers on all of these promises. Get up-to-date with your network functions and contact us today, to get a quote.
A DevOps paradigm, a programmatic approach, or Kubernetes management. The decision between a declarative and an imperative approach is not really an either/or choice, as we will explain in this post.
The main difference between the declarative and imperative approach is:
Declarative: You will say what you want, but not how
Imperative: You describe how to do something
Declarative Approach
Users will mainly use the declarative approach when describing how services should start, for example: “I want 3 instances of this service to run simultaneously”.
In the declarative approach, a YAML file containing the desired configuration is read and applied according to the declarative statement. A controller then knows about the YAML file and applies it where needed. Afterwards, the K8s scheduler starts the services wherever it has the capacity to do so.
Kubernetes, or K8s for short, lets you choose which approach to use. When using the imperative approach, you explain to Kubernetes in detail how to deploy something. The imperative way includes the commands create, run, get & delete – basically any verb-based command. A minimal comparison of the two approaches follows below.
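The deployment name and image in this sketch are illustrative:

# Imperative: tell Kubernetes step by step what to do
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3
kubectl delete deployment web

# Declarative: describe the desired state and let Kubernetes reconcile it
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
EOF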
Will I ever manage imperatively?
Yes, you will. Even when using declarative management, there is always an operator that translates the intent into a sequence of orders and operations to perform. Or there might be several operators who cooperate or split the responsibility for parts of the system.
Although declarative management is recommended in production environments, imperative management can serve as a faster introduction to managing your deployments, with more control over each step you would like to introduce.
Each approach has its pros and cons, and the choice ultimately depends on your deployment and how you want to manage it.
While software-defined networking aims for automation, once your network is fully automated, enterprises should consider IBN (Intent-Based Networking) the next big step.
Intent-Based Networking (IBN)
Intent-Based Networking is an idea introduced by Cisco, which makes use of artificial intelligence, as well as machine learning to automate various administrative tasks in a network. This would be telling the network, in a declarative way, what you want to achieve, relieving you of the burden of exactly describing what a network should do.
For example, we can configure our CNFs in a declarative way, where we state the intent – how we want the CNF to function, but we do not care how the configuration of the CNF will be applied to, for example, VPP.
For this purpose, the VPP-Agent will send the commands in the correct sequence (with additional help from the KVScheduler), so that the resulting configuration comes as close as possible to the initial intent.
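A small sketch of that declarative flow, assuming an etcd-backed VPP-Agent whose microservice label is vpp1 (the label, interface name, and address are illustrative; the key layout follows the Ligato northbound models): the intent is simply written into the key-value store, and the KVScheduler works out how and in what order to program VPP.

# Declare the desired interface; the agent reconciles VPP towards this state
etcdctl put /vnf-agent/vpp1/config/vpp/v2/interfaces/loop1 \
  '{"name":"loop1","type":"SOFTWARE_LOOPBACK","enabled":true,"ip_addresses":["10.10.1.1/24"]}'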
As part of a webinar, in cooperation with the Linux Foundation Networking, we have created two repositories with examples from our demonstration “Building CNFs with FD.io VPP and Network Service Mesh + VPP Traceability in Cloud-Native Deployments“:
Check out our full-webinar, in cooperation with the Linux Foundation Networking on YouTube:
What is Network Service Mesh (NSM)?
Recently, Network Service Mesh (NSM) has been drawing lots of attention in the area of network function virtualization (NFV). Inspired by Istio, Network Service Mesh maps the concept of a Service Mesh to L2/L3 payloads. It runs on top of (any) CNI and builds additional connections between Kubernetes Pods in the run-time, based on the Network Service definition deployed via CRD.
Unlike Contiv-VPP, for example, NSM is mostly controlled from within applications through the provided SDK. This approach has its pros and cons.
Pros: Gives programmers more control over the interactions between their applications and NSM
Cons: Requires a deeper understanding of the framework to get things right
Another difference is that NSM intentionally offers only minimalistic point-to-point connections between pods (or clients and endpoints, in NSM terminology). Everything that can be implemented via CNFs is left out of the framework. Even things as basic as connecting a service chain with external physical interfaces, or attaching multiple services to a common L2/L3 network, are not supported and are instead left to the users (programmers) of NSM to implement.
Integration of NSM with Ligato
At PANTHEON.tech, we see the potential of NSM and decided to tackle the main drawbacks of the framework. For example, we have developed a new plugin for Ligato-based control-plane agents, that allows seamless integration of CNFs with NSM.
Instead of having to use the low-level and imperative NSM SDK, the users (not necessarily software developers) can use the standard northbound (NB) protobuf API, in order to define the connections between their applications and other network services in a declarative form. The plugin then uses NSM SDK behind the scenes to open the connections and creates corresponding interfaces that the CNF is then ready to use.
The CNF components, therefore, do not have to care about how the interfaces were created, whether it was by Contiv, via NSM SDK, or in some other way, and can simply use logical interface names for reference. This approach allows us to decouple the implementation of the network function provided by a CNF from the service networking/chaining that surrounds it.
The plugin for Ligato-NSM integration is shipped both separately, ready for import into existing Ligato-based agents, and also as a part of our NSM-Agent-VPP and NSM-Agent-Linux. The former extends the vanilla Ligato VPP-Agent with the NSM support while the latter also adds NSM support but omits all the VPP-related plugins when only Linux networking needs to be managed.
Furthermore, since most of the common network features are already provided by Ligato VPP-Agent, it is often unnecessary to do any additional programming work whatsoever to develop a new CNF. With the help of the Ligato framework and tools developed at Pantheon, achieving the desired network function is often a matter of defining network configuration in a declarative way inside one or more YAML files deployed as Kubernetes CRD instances. For examples of Ligato-based CNF deployments with NSM networking, please refer to our repository with CNF examples.
Finally, included in the repository is also a controller for K8s CRD defined to allow deploying network configuration for Ligato-based CNFs like any other Kubernetes resource defined inside YAML-formatted files. Usage examples can also be found in the repository with CNF examples.
CNF Chaining using Ligato & NSM (example from LFN Webinar)
In this example, we demonstrate the capabilities of the NSM agent – a control-plane for Cloud-native Network Functions deployed in a Kubernetes cluster. The NSM agent seamlessly integrates the Ligato framework for Linux and VPP network configuration management, together with Network Service Mesh (NSM) for separating the data plane from the control plane connectivity, between containers and external endpoints.
In the presented use-case, we simulate a scenario in which a client from a local network needs to access a web server with a public IP address. The necessary Network Address Translation (NAT) is performed in-between the client and the web server by the high-performance VPP NAT plugin, deployed as a true CNF (Cloud-Native Network Function) inside a container. For simplicity, the client is represented by a K8s Pod running an image with cURL installed (as opposed to being an external endpoint as it would be in a real-world scenario). For the server side, the minimalistic TestHTTPServer implemented in VPP is utilized.
In all three Pods, an instance of the NSM Agent runs to communicate with the NSM manager via the NSM SDK and negotiate the additional network connections that link the pods into a chain:
client <-> NAT-CNF <-> web server (see diagrams below)
The agents then use the features of the Ligato framework to further configure Linux and VPP networking around the additional interfaces provided by NSM (e.g. routes, NAT).
The configuration to apply is described declaratively and submitted to NSM agents in a Kubernetes native way through our own Custom Resource called CNFConfiguration. The controller for this CRD (installed by cnf-crd.yaml) simply reflects the content of applied CRD instances into an ETCD datastore from which it is read by NSM agents. For example, the configuration for the NSM agent managing the central NAT CNF can be found in cnf-nat44.yaml.
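Deploying the whole configuration is then only a matter of applying those Kubernetes resources (assuming a cluster prepared as in the example repository):

# Install the CNFConfiguration CRD with its controller, then the NAT44 config
kubectl apply -f cnf-crd.yaml
kubectl apply -f cnf-nat44.yaml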
More information about cloud-native tools and network functions provided by PANTHEON.tech can be found on our website CDNF.io.
To confirm that client’s IP is indeed source NATed (from 192.168.100.10 to 80.80.80.102) before reaching the web server, one can use the VPP packet tracing:
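A sketch of such a trace, assuming vppctl access to the NAT CNF's VPP instance and memif as the input node:

vppctl trace add memif-input 10     # capture the next 10 packets arriving via memif
# ...generate traffic from the client pod towards the web server...
vppctl show trace                   # the nat44 nodes should show 192.168.100.10 rewritten to 80.80.80.102
vppctl clear trace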
Updated 11/05/2020: Our Unified Firewall Demo was updated with additional insight as to how we achieved great results with our solution.
We generally differentiate between hardware and software firewalls. Software firewalls can reside in the userspace (for example, VPP) or the kernel space (for example, Netfilter). These serve as a basis for cloud-native firewalls. The main advantage of software firewalls is the ability to scale without hardware. This is done in the virtual machines or containers (Docker) where these firewalls reside and function.
One traditional firewall utility in Linux is named iptables. It is configured via the command line and acts as an enforcer of rules and configuration for Netfilter. You can find a great how-to on configuring iptables in the Ubuntu Documentation; iptables comes pre-installed in most Linux distributions.
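A few typical rules (port 22 is chosen as an example) give a feel for that command-line workflow:

# Keep established connections and SSH, drop all other inbound traffic
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
sudo iptables -P INPUT DROP
# Inspect the resulting ruleset
sudo iptables -L -n -v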
Firewall Solutions
Multiple solutions mean a wide variety for a user or company to choose from. But since each firewall uses a different API, we can almost immediately see an issue with managing multiple solutions. Some APIs are more fully-fledged than others, while requiring various levels of access (high-level vs. low-level API) and several layers of features.
At PANTHEON.tech, we found that having a unified API, above which a management system would reside, would make a perfectly balanced firewall.
Cloud-Native: We will be using the open-source Ligato microservices platform. The advantage: Ligato is cloud-native.
Implementation: The current implementation unifies the ACL in FD.io‘s VPP and the NetFilter in the Linux Kernel. For this purpose, we will be using the open-source VPP-Agent from Ligato.
Separate Layers: This architecture enables us to extend it to any configurable firewall, as seen below.
Layer Responsibilities: Computer networks are divided into network layers, where each layer has a different responsibility. We have modeled (as a proto-model) a unification API and its translation into technology-specific firewall configuration. The unified layer has a unified API, which it translates and sends to the technology-specific API. The current implementation is via the VPP-Agent Docker container.
Ligato and VPP-Agent: In this implementation, we make full use of the VPP-Agent and Ligato, via gRPC communication. Each firewall has an API, modeled as a proto-model. This makes resolving failures a breeze.
Resolving Failures: Imagine that, in a cloud, software can end with a fatal error. The common solution is to suspend the container and restart it. This means, however, that you need to set up the configuration again or synchronize it with an existing configuration from higher layers.
Fast Reading of Configurations: There is no need to load everything again throughout all layers, down to the concrete firewall technology, which can often be slow to load the configuration. Ligato resolves this by keeping the configuration within the Ligato platform, in an external key-value store (ETCD, if integrated with Ligato).
How did we do this?
We created this unifying API by using a healthy subset of all technologies. We preferred simplified API writing, since, for example, in iptables there can be lots of rules that could be written in a more compact way.
We analyzed several firewall APIs, which we broke down into basic blocks. We defined the basic filters for packet traffic, meaning the way from which interface, which way the traffic is flowing. Furthermore, we defined rules, based on the selector being the final filter for rules and actions, which should occur for selected traffic (simple allow/deny operation).
There are several types of selectors (a hypothetical unified rule combining them is sketched after this list):
L2 (according to the source MAC address)
L3 (IP and ICMP Selector)
L4 (Only TCP traffic via flags and ports / UDP traffic via ports)
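Purely as a hypothetical illustration (these field names are invented for this post and are not the actual proto-model of the Unified Firewall Layer), a unified rule combining an L3 and an L4 selector with a deny action could be sketched like this:

cat > deny-telnet.yaml <<'EOF'
# Hypothetical unified-firewall rule, for illustration only
name: deny-telnet-from-guest-net
interfaces: [eth1]            # traffic filter: which interface the traffic enters
direction: ingress
selector:
  l3:
    source_cidr: 10.20.0.0/24 # IP selector
  l4:
    protocol: TCP
    destination_ports: [23]   # TCP port selector
action: DENY
EOF

The unified layer would then translate such a rule into either VPP ACL entries or Netfilter rules, depending on the target technology.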
The read/write performance of our Unified Firewall Layer solution was tested using VPP and iptables (Netfilter), at 250k rules. The initial tests ended with poor writing speed. But we experimented with various combinations and ended up putting a lot of rules into a few rule-groups.
That did not go as planned either.
A deep analysis showed that the issue was not within Ligato, since the task manager showed that VPP/the Linux kernel was fully working. We performed additional verification for iptables alone, using the go-iptables library. It was very slow when adding too many rules into one chain. Fortunately, iptables provides additional tools that are able to export and import data fast. The disadvantage is that the export format is poorly documented. However, I did an iptables export, inserted the new data just before the COMMIT, and imported the data back afterward (a sketch of this workflow follows after the listing below).
# Generated by iptables-save v1.6.1
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:testchain - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
<<insert new data here>>
COMMIT
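The corresponding bulk-import workflow looks roughly like this (the chain and the inserted rule are illustrative; GNU sed is assumed):

# Export the current ruleset
iptables-save > /tmp/rules.txt
# Splice the generated rules in just before the COMMIT line
sed -i '/^COMMIT$/i -A testchain -s 10.0.0.1/32 -j DROP' /tmp/rules.txt
# Import everything back in a single transaction
iptables-restore < /tmp/rules.txt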
Our Open-Source Commitment
We achieved a speed increase for 20k rules in one iptables chain: from 3 minutes and 14 seconds down to a few seconds. This turned out to be a perfect performance fix for the VPP-Agent, which we committed to the Ligato VPP-Agent repository.
This also benefited updates, since each update has to be implemented as a delete-and-create case (the rule is recreated each time). I made it an optional method with a configurable number of rules, above which it applies. Using only a few rules can still achieve great speed with the default approach (via the iptables rule API). Now, we have a solution for using a lot of rules as well. Due to the lack of detailed documentation of the iptables-save output format, I decided to turn this option off by default.
The results of the performance test are:
25 rule-groups x 10,000 rules per rule-group
Write: 1 minute 49 seconds
Read: 359.045785ms
Reading is super-fast, due to all data being in the RAM in the Unified Layer. This means, that it’s all about one gRPC call with encoding/decoding.
If we have sparked your interest in this solution, make sure to contact us directly. Until then, make sure to watch our CDNF.io project closely – there is more to come!
PANTHEON.tech’s developer Július Milan has managed to integrate memif into the T-REX Traffic Generator. T-REX is a traffic generator, which you can use to test the speed of network devices. Now you can test Cloud-Native Functions, which support memif natively in the cloud, without specialized network cards!
Imagine a situation, where multiple cloud-native functions are interconnected or chained via memif. Tracking their utilization would be a nightmare. With our memif + T-REX solution, you can make arbitrary measurements – effortlessly and straightforward. The results will be more precise and direct, as opposed to creating adapters and interconnecting them, in order to be able to measure traffic.
Our commitment to open-source has a long track record. With lighty-core being open-sourced and our CTO Robert Varga being the top single contributor to the OpenDaylight source code, we are proving once again that our heart belongs to the open-source community.
The combination of memif & T-REX makes measuring cloud-native function performance easy & straightforward.
memif, the “shared memory packet interface”, allows for any client (VPP, libmemif) to communicate with DPDK using shared memory. Our solution makes memif highly efficient, with zero-copy capability. This saves memory bandwidth and CPU cycles while adding another piece to the puzzle for achieving a high-performance CNF.
It is important to note that zero-copy works on the newest version of DPDK. memif & T-REX can be used in zero-copy mode when the T-REX side of the pair is the master; the other side of the memif pair (VPP or some cloud-native function) is the zero-copy slave.
T-REX, developed by Cisco, solves the issue of buying stateful/realistic traffic generators, which can set your company back by up to $500,000. This limits testing capabilities and slows down the entire process. T-REX solves this by being an accessible, open-source, stateful/stateless traffic generator, fueled by DPDK.
Services that function in the cloud are characterized by an unlimited presence. They are accessed from anywhere, with a functional connection and are located on remote servers. This may curb costs since you do not have to create and maintain your servers in a dedicated, physical space.
PANTHEON.tech is proud to be a technology enabler, with continuous support for open-source initiatives, communities & solutions.