Customers can create, validate and visualize the YANG data model of their application, without the need to call any other external tool – just by using the lighty.io framework.
YANG Tools helps to parse YANG modules, represent the YANG model in Java, and serialize/deserialize YANG model data. However, a custom YANG module can contain improper data that would result in an application failure. To avoid such annoying situations, PANTHEON.tech engineers created the lighty YANG Validator.
The LightyController component of lighty.io utilizes OpenDaylight's core components, including YANG Tools, which provides a set of tools and libraries for YANG modeling of network topology, configuration, and state data, as defined by YANG 1.0 and YANG 1.1 models.
Prerequisites
Download the distribution from this page.
Make sure to run the tool in Linux and with Java installed.
Unzip the folder and read through the README.md file
What does the lighty YANG Validator offer?
The lighty YANG Validator (lighty-yang-validator) was inspired by pyang, a Python YANG validation tool. It checks the YANG module using the YANG Parser module. If any problem occurs during parsing, the corresponding stack trace is returned, to let you know what's wrong and where.
On top of the pyang functionality, the lighty YANG Validator, built on OpenDaylight's YANG engine, not only checks standard YANG compliance but also validates the given module for compatibility with the lighty.io and OpenDaylight frameworks.
Users can choose to validate only one module or all modules within the given directory.
It's not necessary to have all the imported and included modules of the validated module in the same path. The -p, --path option accepts a path, or a colon-separated list of paths, to the needed module(s). The YANG Validator can search for modules recursively within the file structure.
Of course, the customer can decide to search for a file just by module name, instead of specifying the whole path!
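For example, validating a module whose imports live in other directories might look like this (a minimal sketch; the lyv launcher script name and the trailing module argument are assumptions, so check the README.md of your distribution):
./lyv -p /opt/yang/standard:/opt/yang/vendor my-module.yang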
Backwards Compatibility
The lighty YANG Validator allows checking the backward compatibility of an updated YANG module via the --check-update-from option. Customers can choose to validate backward compatibility according to RFC 6020 or RFC 7950.
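As a sketch (the argument order and the file names here are assumptions, see the README.md of the distribution):
./lyv --check-update-from old/my-module@2020-06-30.yang my-module@2021-01-20.yang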
The lighty YANG Validator can be further used for:
Verification of backward-compatibility for a module
Notification of users about module status change (removal/deprecation)
Simplify the YANG file
The YANG file can be simplified based on an XML payload. The resulting data model can be reduced by removing all nodes defined with an "if-feature". This functionality is very useful with huge YANG files that are tested with a basic configuration, where not all schema nodes are used.
Using such trimmed YANG files can significantly speed up the loading of a customer's application in the development phase, when the application is started repeatedly. Thus, it saves overall development time. The simplified YANG file is printed to standard output, unless an output directory is defined.
Users can choose between the following output types:
Tree, in the format <status>--<flags> <name><opts> <type> <if-features>
Name-Revision, in the format <module_name>@<revision>
List of all modules that the validated module depends on
JSON Tree with all the node information
HTML page with JavaScript for visualization of the YANG tree
YANG File / simplified YANG file
Goal: Create a stable and reliable custom application
lighty.io was developed to provide a lightweight implementation of core OpenDaylight components, so customers are able to run their applications in a plain Java SE environment. PANTHEON.tech keeps improving the framework to make it as easy as possible for customers to create stable and reliable applications.
One step forward in this journey is the lighty YANG Validator – customers can create, validate and visualize the YANG data model of their application just by using the lighty.io framework without the need to call any other external tool.
A network can get messy. That is why many service providers require a Network Orchestrator to fill the gap between managing hundreds of devices & the corresponding services, over protocols like SNMP, NETCONF, REST, and others. This is where Cisco's Network Services Orchestrator comes into play and translates service orders to the various network devices in your network.
An NSO serves as a translator. It decouples high-level service layers from the management & resource layers, connecting various network functions, which may run in a virtualized or hardware environment. It defines how these network functions interact with other infrastructures and technologies within the network.
We have introduced Ansible & AWX for automation in the past. Since we also enjoy innovation, we decided to create this guide on installing Cisco NSO and its usage with lighty.io & ONAP (SDN-C).
The installation package can be downloaded from the official Cisco developer website. This guide contains steps on how to install the Cisco NSO. We will use the NSO 5.1.0.1 version in this tutorial. This tutorial was tested on Ubuntu 18.04 LTS.
Don't forget to set the NCS_DIR variable and source the ncsrc file!
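For a local install, that typically means sourcing the ncsrc script shipped with the installation (the install path below is an assumption – adjust it to where you unpacked NSO):
source $HOME/nso-5.1.0.1/ncsrc   # sets up PATH, NCS_DIR and other environment variables
echo $NCS_DIR                    # verify it points to your NSO installation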
In the output, you should see connect-result and sync-result from all three devices.
To leave CLI, press CTRL+D.
Create Cisco NSO Service
Go to the packages directory and use ncs-make-package command:
cd packages
ncs-make-package --service-skeleton template acl-service --augment /ncs:services
This will create the directory acl-service with a structure containing templates and default YANG models. Templates are used for applying configurations to devices. With the YANG file, we can model how our service can be activated and what parameters it uses.
Now, open the template XML file acl-service/templates/acl-service-template.xml and replace its content with:
This template will be used for configuring the selected devices. It will add an access-group with the specified interface_type, interface_number, ACL_Name, and ACL_Direction variables to their configuration.
The values of the mentioned variables will be set when we will activate this service. These variables are modeled in the YANG file, which we are going to update now.
Replace the content of the acl-service/src/yang/acl-service.yang file with:
And now log into the Cisco NSO CLI and reload the packages:
ncs_cli -C -u admin
packages reload
The output should look similar to this:
admin@ncs# packages reload
>>> System upgrade is starting.
>>> Sessions in configure mode must exit to operational mode.
>>> No configuration changes can be performed until upgrade has completed.
>>> System upgrade has completed successfully.
reload-result {
package acl-service
result true
}
reload-result {
package cisco-ios-cli-3.0
result true
}
Now a Cisco NSO instance with three simulated devices should be up and running!
Turn off and clean Cisco NSO
Later, when you want to stop and clean up what you started, call these commands in your project directory:
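The commands we would expect here, for a netsim-based setup like ours, are roughly the following (verify them against the NSO documentation for your version):
ncs --stop          # stop the NCS daemon
ncs-netsim stop     # stop the three simulated devices
ncs-setup --reset   # reset the runtime state created by ncs-setup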
OpenDaylight's distribution packaging has remained the same for several years. But what if there were a different way to do it, making the distribution more aligned with the latest containerization trends? This is where an OpenDaylight Static Distribution comes to the rescue.
Original Distribution & Containerized Deployments
Let’s take a quick look at the usual way.
A standard distribution is made up of:
a pre-configured Apache Karaf
a full set of OpenDaylight’s bundles (modules)
It's an excellent strategy when the user wants to choose modules and build their application dynamically from building blocks. Additionally, Karaf provides a set of tools that can affect configuration and features at runtime.
However, when it comes to micro-services and containerized deployments, this approach conflicts with some best practices for operating containers – statelessness and immutability.
Perks of a Static Distribution
Starting from version 4.2.x, Apache Karaf provides the capability to build a static distribution, aiming to be more compatible with the containerized environment – and OpenDaylight can use that as well.
So, what are the differences between a static vs. dynamic distribution?
Specified List of Features
Instead of adding everything to the distribution, you only need to specify a minimal list of features and required bundles in your runtime, so only they will be installed. This would help produce a lightweight distribution package and omit unnecessary stuff, including some Karaf features from the default distribution.
Pre-Configured Boot-Features
Boot features are pre-configured, no need to execute any feature installations from Karaf’s shell.
Configuration Admin
Configuration admin is replaced with a read-only version that only picks up configuration files from the distribution's 'etc/' folder.
Speed
Bundle dependencies are resolved and verified during the build phase, which speeds up startup and leads to more stable builds overall.
How to Build a Static Distribution with OpenDaylight’s Components
The latest version of the odl-parent component introduced a new project called karaf-dist-static, which defines a minimal list of features needed by all OpenDaylight’s components (static framework, security libraries, etc.).
This can be used as a parent POM to create our own static distribution. Let’s try to use it and assemble a static distribution with some particular features.
Assuming that you already have an empty pom.xml file, in the first step we're going to declare the karaf-dist-static project as the parent for ours:
Optionally, you can override two properties to disable assembling the .zip/.tar.gz archives with the distribution. The default value is 'true' for both properties. Let's assume that we only need the ZIP:
This example aims to demonstrate how to produce a static distribution containing the NETCONF southbound connectors and the RESTCONF northbound implementation. Let's add the corresponding items to the dependencies section:
Once we have these features on the dependency list, we can add them to Karaf's Maven plugin configuration. Usually, when you want to add some of OpenDaylight's features, you can use the <bootFeatures> container. This should work fine for everything except features delivered with the Karaf framework (like ssh, diagnostic, etc.).
When it comes to adding features provided by the Karaf framework, a <startupFeatures> block should be used – a sketch of the resulting plugin configuration follows below. Later on, we are going to check the installation of these features within the static distribution.
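A condensed sketch of what that plugin configuration could look like (the plugin coordinates are the standard karaf-maven-plugin ones; the OpenDaylight feature names are taken from the output shown later in this post):
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <configuration>
    <!-- features delivered with the Karaf framework itself -->
    <startupFeatures>
      <feature>ssh</feature>
    </startupFeatures>
    <!-- OpenDaylight features we want pre-installed -->
    <bootFeatures>
      <feature>odl-netconf-connector</feature>
      <feature>odl-restconf-nb-rfc8040</feature>
    </bootFeatures>
  </configuration>
</plugin>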
If you check the log messages, you will probably notice that the KAR artifact is not the same as the one used for the dynamic distribution (there, you can expect org.apache.karaf.features/framework/4.3.0/kar).
[INFO] Loading direct KAR and features XML dependencies
[INFO] Standard startup Karaf KAR found: mvn:org.apache.karaf.features/static/4.3.0/kar
[INFO] Feature static will be added as a startup feature
Eventually, we can check the output directory of the Maven build – it should contain an 'assembly' folder with the static distribution we created, and a netconf-karaf-static-1.0.0-SNAPSHOT.zip archive containing this distribution.
$ ls --group-directories-first -1 ./target
antrun
assembly
classes
dependency-maven-plugin-markers
site
checkstyle-cachefile
checkstyle-checker.xml
checkstyle-header.txt
checkstyle-result.xml
checkstyle-suppressions.xml
cpd.xml
netconf-karaf-static-1.0.0-SNAPSHOT.zip
While the ZIP archive is the artifact you would usually push to a repository, we will verify our distribution by running Karaf directly from the assembly folder.
./assembly/bin/karaf
If everything goes well, you should see some system messages saying that Karaf has started, followed by a shell command-line interface:
Apache Karaf starting up. Press Enter to open the shell now...
100% [========================================================================]
Karaf started in 1s. Bundle stats: 50 active, 51 total
________ ________ .__ .__ .__ __
\_____ \ ______ ____ ____ \______ \ _____ ___.__.| | |__| ____ | |___/ |_
/ | \\____ \_/ __ \ / \ | | \\__ \< | || | | |/ ___\| | \ __\
/ | \ |_> > ___/| | \| ` \/ __ \\___ || |_| / /_/ > Y \ |
\_______ / __/ \___ >___| /_______ (____ / ____||____/__\___ /|___| /__|
\/|__| \/ \/ \/ \/\/ /_____/ \/
Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown OpenDaylight.
opendaylight-user@root>
With a static distribution, you don’t need to do any feature installation manually.
Let's just check that our features are running by executing the following command:
feature:list | grep 'Started'
The produced output will contain a list of already started features; among them, you should find features we selected in our previous steps.
...
odl-netconf-connector | 1.10.0.SNAPSHOT │ Started │ odl-netconf-1.10.0-SNAPSHOT │ OpenDaylight :: Netconf Connector
odl-restconf-nb-rfc8040 | 1.13.0.SNAPSHOT │ Started │ odl-restconf-nb-rfc8040-1.13.0-SNAPSHOT │ OpenDaylight :: Restconf :: NB :: RFC8040
...
We can also run an additional check by sending a request to the corresponding RESTCONF endpoint:
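For instance, assuming the default RESTCONF settings (port 8181 and admin/admin credentials – both depend on your configuration), a request like this should return the NETCONF topology:
curl -u admin:admin http://localhost:8181/rests/data/network-topology:network-topology?content=nonconfig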
Now, we can produce immutable & lightweight OpenDaylight distributions with a selected set of pre-installed features, which can be the first step towards creating Docker images that are fully compliant with containerized deployments.
Our next steps would be to make logging and clustered configuration more suitable for running in containers, but that’s a topic for another article.
Binding Query (BQ) is an MD-SAL module, currently available in OpenDaylight's MD-SAL master (7.0.5), 6.0.x, and 5.0.x versions. Its primary function is to filter data from the Binding Awareness model.
To use BQ, it is required to create a QueryExpression and a QueryExecutor. The QueryExecutor contains a BindingCodecTree and the data represented by the Binding Awareness model. The filter and all operations from the QueryExpression will be applied to this data.
A QueryExpression is created from the QueryFactory class, starting with the querySubtree method. It takes an instance identifier, which has to point to the root of the data held by the QueryExecutor.
The next step is to create a path to the data we want to filter, and then apply the required filter. When the QueryExpression is ready, it is applied with the executeQuery method of the QueryExecutor. One QueryExpression can be used on multiple QueryExecutors with the same data schema.
Prerequisites for Binding Query
Now, we will demonstrate how to actually use Binding Query. We will create a YANG model for this purpose:
module queryTest {
    yang-version 1.1;
    namespace urn:yang.query;
    prefix qt;

    revision 2021-01-20 {
        description
            "Initial revision";
    }

    grouping container-root {
        container container-root {
            leaf root-leaf {
                type string;
            }
            leaf-list root-leaf-list {
                type string;
            }
            container container-nested {
                leaf nested-leaf {
                    type uint32;
                }
            }
        }
    }

    grouping list-root {
        container list-root {
            list top-list {
                key "key-a key-b";
                leaf key-a {
                    type string;
                }
                leaf key-b {
                    type string;
                }
                list nested-list {
                    key "identifier";
                    leaf identifier {
                        type string;
                    }
                    leaf weight {
                        type int16;
                    }
                }
            }
        }
    }

    grouping choice {
        choice choice {
            case case-a {
                container case-a-container {
                    leaf case-a-leaf {
                        type int32;
                    }
                }
            }
            case case-b {
                list case-b-container {
                    key "key-cb";
                    leaf key-cb {
                        type string;
                    }
                }
            }
        }
    }

    container root {
        uses container-root;
        uses list-root;
        uses choice;
    }
}
Then, we will build Binding Awareness objects with some test data, based on the provided YANG model.
public Root generateQueryData() {
    HashMap<NestedListKey, NestedList> nestedMap = new HashMap<>() {{
        put(new NestedListKey("NestedId"), new NestedListBuilder()
            .setIdentifier("NestedId")
            .setWeight((short) 10)
            .build());
        put(new NestedListKey("NestedId2"), new NestedListBuilder()
            .setIdentifier("NestedId2")
            .setWeight((short) 15)
            .build());
    }};
    HashMap<NestedListKey, NestedList> nestedMap2 = new HashMap<>() {{
        put(new NestedListKey("Nested2Id"), new NestedListBuilder()
            .setIdentifier("Nested2Id")
            .setWeight((short) 10)
            .build());
    }};
    HashMap<TopListKey, TopList> topMap = new HashMap<>() {{
        put(new TopListKey("keyA", "keyB"),
            new TopListBuilder()
                .setKeyA("keyA")
                .setKeyB("keyB")
                .setNestedList(nestedMap)
                .build());
        put(new TopListKey("keyA2", "keyB2"),
            new TopListBuilder()
                .setKeyA("keyA2")
                .setKeyB("keyB2")
                .setNestedList(nestedMap2)
                .build());
    }};
    HashMap<CaseBContainerKey, CaseBContainer> caseBMap = new HashMap<>() {{
        put(new CaseBContainerKey("test@test.com"),
            new CaseBContainerBuilder()
                .setKeyCb("test@test.com")
                .build());
        put(new CaseBContainerKey("test"),
            new CaseBContainerBuilder()
                .setKeyCb("test")
                .build());
    }};
    RootBuilder rootBuilder = new RootBuilder();
    rootBuilder.setContainerRoot(new ContainerRootBuilder()
        .setRootLeaf("root leaf")
        .setContainerNested(new ContainerNestedBuilder()
            .setNestedLeaf(Uint32.valueOf(10))
            .build())
        .setRootLeafList(new ArrayList<>() {{
            add("data1");
            add("data2");
            add("data3");
        }})
        .build());
    rootBuilder.setListRoot(new ListRootBuilder().setTopList(topMap).build());
    rootBuilder.setChoiceRoot(new CaseBBuilder()
        .setCaseBContainer(caseBMap)
        .build());
    return rootBuilder.build();
}
For better orientation in the test-data structure, there is also a JSON representation of the data we will use:
From the Binding Awareness model queryTest shown above, we can create a QueryExecutor. In this example, we will use the SimpleQueryExecutor. As a builder parameter, we pass the BindingCodecTree, and then add the Binding Awareness data produced by the method we created above.
public QueryExecutor createExecutor() {
    return SimpleQueryExecutor.builder(CODEC)
        .add(generateQueryData())
        .build();
}
Create a Query & Filter Data
Now, we can start with an example of how to create a query and filter some data. In the first example, we will describe how to filter a container by the value of its leaf. In the next steps, we will create a QueryExpression.
First, we will create a QueryFactory from the DefaultQueryFactory. The DefaultQueryFactory constructor takes BindingCodecTree as a parameter.
QueryFactory factory = new DefaultQueryFactory(CODEC);
The next step is to create the DescendantQueryBuilder from QueryFactory. The querySubtree method takes the instance identifier as a parameter. This identifier should be a root node from our model. In this case, it is a container with the name root.
The last step is to define which values should be filtered and then build the QueryExpression. For this case, we will filter a specific leaf, with the value “root leaf”.
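A sketch of these two steps, using the MD-SAL binding query API (imports are omitted, as in the listing above; the generated binding classes Root and ContainerRoot come from the model, while the exact builder method names should be double-checked against your MD-SAL version):
DescendantQueryBuilder<Root> rootQueryBuilder = factory
    .querySubtree(InstanceIdentifier.create(Root.class));

QueryExpression<ContainerRoot> query = rootQueryBuilder
    .extractChild(ContainerRoot.class)
    .matching()
        .leaf(ContainerRoot::getRootLeaf)
        .valueEquals("root leaf")
    .build();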
Now, the QueryExpression can be used to filter data from QueryExecutor. For creating QueryExecutor, we use the method defined above in “test query data”.
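Putting it together, a minimal sketch:
QueryExecutor executor = createExecutor();
QueryResult<ContainerRoot> result = executor.executeQuery(query);
// result now holds the container-root instances whose root-leaf equals "root leaf"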
The next example will show how to use Binding Query to filter data from nested-list. This example will filter nested-list items, where the weight parameter equals 10.
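A sketch of such a query, descending through list-root and top-list down to nested-list (same hedged API as above):
QueryExpression<NestedList> query = factory
    .querySubtree(InstanceIdentifier.create(Root.class))
    .extractChild(ListRoot.class)
    .extractChild(TopList.class)
    .extractChild(NestedList.class)
    .matching()
        .leaf(NestedList::getWeight)
        .valueEquals((short) 10)
    .build();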
In case we want to get top-list elements, but only those containing nested-list items with a weight greater than or equal to 15, it is possible to set a match on the top-list containers and then continue with a condition on the nested-list. For number operations, we can use the greaterThanOrEqual, lessThanOrEqual, greaterThan, and lessThan methods.
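Sketched with the same API (childObject() steps into the nested list while keeping top-list as the result type; verify the method names against your MD-SAL version):
QueryExpression<TopList> query = factory
    .querySubtree(InstanceIdentifier.create(Root.class))
    .extractChild(ListRoot.class)
    .extractChild(TopList.class)
    .matching()
        .childObject(NestedList.class)
        .leaf(NestedList::getWeight)
        .greaterThanOrEqual((short) 15)
    .build();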
The last example shows how to filter choice data by matching the values of the key-cb leaf. The condition that has to be met is defined as a pattern that matches an email address.
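A rough sketch – the two-argument extractChild() overload for choice cases and the matchesPattern() matcher are our assumptions about the query API, so treat this as illustrative only:
QueryExpression<CaseBContainer> query = factory
    .querySubtree(InstanceIdentifier.create(Root.class))
    .extractChild(CaseB.class, CaseBContainer.class)
    .matching()
        .leaf(CaseBContainer::getKeyCb)
        .matchesPattern(Pattern.compile("^[\\w.+-]+@[\\w-]+\\.[\\w-]{2,}$"))
    .build();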
Binding Query can be used to filter important data, as shown in the previous examples. With Binding Query, it is possible to filter data with various options and get all the required information. Binding Query also supports matching strings by patterns, as well as simple filter operations with numbers.
This report reflects a series of metrics for last year and we are extremely proud to be highlighting our continued leading levels of participation and contribution in LFN’s technical communities. As an example, PANTHEON.tech provided over 60% of the commits to OpenDaylight in 2020.
This is an extraordinary achievement, given this is in the company of such acclaimed peers as AT&T, Orange S.A., Cisco Systems Inc., Ericsson, and Samsung.
Customer Enablement
Clearly, this report demonstrates that open-source software solutions have secured themselves in many customers' network architectures and strategies, with even more customers following this lead. Leveraging its expertise and experience, PANTHEON.tech has, since its inception, been focused on offering customers application development services and enterprise-grade tailored or productized open-source solutions, with an accompanying full support model.
PANTHEON.tech leads the way in enabling customers with Software Defined Network automation, comprehensively integrating into an ecosystem of vendor and open orchestration systems and network devices across all domains of a customer's network. Our solutions facilitate automation for services such as O-RAN, L2/L3/E-VPN, 5G, or Data Centre, among many others.
Leveraging multiple open-source projects, including FD.io, we assist customers in embracing cloud-native, developing tailored enterprise-grade network functions, which focus on customer’s immediate and future requirements and performance objectives.
We help our customers unlock the potential of their network assets, whether new, legacy, proprietary, open, multi-domain, or multi-layer. PANTHEON.tech has solutions to simplify and optimize customers' networks, systems, and operations.
The key takeaway is that customers can rely on PANTHEON.tech to deliver: unlocking services in existing networks, innovating and adopting new networks and services, while simplifying operations.
Please contact PANTHEON.tech to discuss how we can assist your open-source network and application goals with our comprehensive range of services, subscriptions, and training.
At present, enterprises practice various approaches to securing the external perimeters of their networks: from centralized Virtual Private Networks (VPNs), through access without a VPN, to solutions such as EntGuard VPN.
That also means that, as an enterprise, you need to go the extra mile to protect your employees, as well as your and their data. A VPN will:
Encrypt your internet traffic
Protect you from data-leaks
Provide secure access to internal networks – with an extra layer of security!
Encrypt – Secure – Protect.
With EntGuard VPN, PANTHEON.tech utilized years of working on network technologies and software to give you an enterprise-grade product that is built for the cloud.
We decided to build EntGuard VPN on the critically-acclaimed WireGuard® protocol. The protocol focuses on ease-of-use & simplicity, as opposed to existing solutions like OpenVPN – while achieving incredible performance! Did you know that WireGuard® is natively supported in the Linux kernel and FD.io VPP since 2020?
WireGuard® is relied on for high-speeds and privacy protection. Complex, state-of-the-art cryptography, with lightweight architecture. An incredible combination.
Unfortunately, it's not easy to maintain WireGuard® in enterprise environments. That's why we decided to bring you EntGuard, which gives you the ability to use WireGuard® tunnels in your enterprise environment.
Premium Features: Be the first to try out new features, such as MFA, LDAP, RADIUS, end-station remote support, traffic monitoring, problem analysis, and more!
The PANTHEON.tech cloud-native network functions portfolio keeps on growing. At the start of 2020, we introduced you to the project, which at the moment houses 18 CNFs. Make sure to keep up-to-date with our future products by following us and our social media!
ONAP (Open Network Automation Platform) is quite a trend in the contemporary SDN world. It is a broad project, consisting of a variety of sub-projects (or components), which together form a network function orchestration and automation platform. Several enterprises are active in ONAP and its growth is accelerating rapidly. PANTHEON.tech is a proud contributor as well.
What is ONAP?
The platform itself emerged from the AT&T ECOMP (Enhanced Control, Orchestration, Management & Policy) and Open-O (Open Orchestrator) initiatives. ONAP is an open-source software platform that offers a robust, real-time, policy-driven orchestration and automation framework for physical and virtual network functions. It sits above the infrastructure layer and automates the network.
ONAP enables end-users to connect services through the infrastructure. It allows network scaling and VNF/CNF implementations in a fully automated manner, among other benefits:
Bring agile deployment & best practices to the telecom world
Add & deploy new features on a whim
Improve network efficiency & sink costs
Its goal is to enable operators and developers, networks, IT, and the cloud to quickly automate new technologies and support full lifecycle management. It is capable of managing (build, plan, orchestrate) Virtual Network Functions (VNF), as well as Software-Defined Networks (SDN).
ONAP’s high-level architecture involves numerous software subsystems (components). PANTHEON.tech is involved in multiple ONAP projects, but mostly around controllers (like SDN-C). For a detailed view, visit the official wiki page for the architecture of ONAP.
SDN-C
SDN-C is one of the components of ONAP – the SDN controller. It is basically OpenDaylight, with additional Directed Graph Execution capabilities. In terms of architecture, ONAP SDN-C is composed of multiple Docker containers.
One of these containers runs the Directed Graph Creator: a user-friendly web UI that can be used to create directed graphs. Another container runs the Admin Portal. The next one runs the relational database, which is the focal point of the SDN-C implementation and is used by each container. Lastly, the SDN-C container runs the controller itself.
According to the latest 5G use-case paper for ONAP, SDN-C has managed to implement “radio-related optimizations through the SDN-R sub-project and support for the A1 interface”.
CDS: Controller Design Studio
As the official documentation puts it:
CDS Designer UI is a framework to automate the resolution of resources for instantiation and any config provisioning operation, such as day0, day1, or day2 configuration.
CDS has both design-time & run-time activities. During design time, the designer can define what actions are required for a given service, along with anything comprising the action. The design produces a CBA Package. Its content is driven by a catalog of reusable data dictionaries and components, delivering a reusable and simplified self-service experience.
CDS enables users to adapt resources in a way, where no direct code-changes are needed. The Design Studio gives users, not only developers, the option to customize the system, to meet the customer’s demands. The two main components of CDS are the frontend (GUI) and backend (run-time). It is possible to run CDS in Kubernetes or an IDE of your choice.
The primary role of the Service Orchestrator (SO) is the automation of the provisioning operations of end-to-end service instances. In favor of overall end-to-end service instantiation, processes, and maintenance, SO is accountable for the instantiation and setup of VNFs.
To accomplish its purpose, Service Orchestration performs well-defined processes – usually triggered by receiving service requests, created by other ONAP components, or by Order Lifecycle Management in the BSS layer.
The orchestration procedure is either manually developed or received from ONAP’s Service Design and Development (SDC) portion, where all service designs are created for consumption and exposed/distributed.
The latest achievement of the Service Orchestrator is the implementation of new workflows such as:
CSMF – Communication Service Management Function
NSMF – Network Slice Management Function
NSSMF – Network Slice Sub-Net Management Function
DMaaP: Data Movement as a Platform
The DMaaP component is a data movement service, which transports and processes data from a selected source to the desired target. It is capable of transferring data and messages between ONAP components, data filtering/compression/routing, as well as message routing and batch/event-based processing.
DCAE: Data Collection Analytics & Events
The Data Collection Analytics & Events component does exactly what’s in its name – gather performance, usage & configuration data from the managed environment. The component guards events in a sense – if something significant occurs or an anomaly is detected, DCAE takes appropriate actions.
The component collects and stores data that is necessary for analysis while providing a framework for the development of needed analytics.
The Active & Available Inventory (A&AI) component offers real-time views of the managed products and services, as well as the relationships between them.
A&AI is an inventory of assets that are active, available, and allocated. It establishes multi-dimensional relationships between the products and infrastructure under administration, and provides interfaces for dynamic network topology requests – both canned and ad-hoc network topology queries.
Recently, A&AI gained schema support for 5G service design and slicing models.
Is ONAP worth it?
Yes, it is. If you have come to this conclusion, you might feel that ONAP is the right fit for your needs. It is an enormous project with around 20 components.
It is a long-term goal of several enterprises, including PANTHEON.tech, to embrace an open(-source) ecosystem for network development and connectivity.
An open approach to software development opens doors to all the talents around the globe, to contribute to projects that will shape the future of networking. One such project is the Open Radio Access Network or O-RAN for short.
Next In Line: O-RAN
Originally launched as OpenRAN, the project was started in 2017 by the Telecom Infra Project. The goal was to build a vendor-neutral, hardware- & software-defined technology for 2G, 3G, and 4G RAN solutions.
Then, the O-RAN Alliance was founded to increase community engagement, as well as to motivate operators to be included in this development. The alliance has made it a point to create a standard – a description of how this concept should function in reality.
O-RAN Architecture
O-RAN is part of the massive evolution from 4G networks into the 5G generation. In 5G, due to higher bandwidths, more antennas, and the use of multiple-input multiple-output (MIMO) technology, even more data needs to go back and forth.
We can observe the formation of two solutions: the high-level split (HLS) and the low-level split (LLS). The high-level split is a two-box solution, with much of the processing shifted to the edge: the F1 interface lies between the DU+RU and links to the centralized unit. Alternatively, with LLS, further processing is shifted to the middle and only the antenna is kept at the edge.
Three separate units are deployed with O-RAN:
O-RU: Radio Unit
O-DU: Distributed Unit
O-CU: Centralized Unit
At the edge sits the O-RU. In the center, the O-DU sits and performs some of the processing. Both HLS and LLS are included in O-RAN. They standardize the interfaces. For CUs, DUs, or RUs, operators may use different vendors. With one working group concentrating on the F1 interface and another on the front-haul, the components are much more interoperable and the protocols more clearly defined.
What’s more, O-RAN selected SDN-R as the project’s SDN controller. PANTHEON.tech is part of the SDN-R community.
What is a RAN?
A radio access network implements a radio access technology, which enables user devices (anything able to receive the signal) to connect to the core network above the specific RAN.
A visual representation of core networks, radio access networks, and user devices.
The types of radio access networks include GSM, EDGE, and LTE standards, named GRAN, GERAN, E-UTRAN in that order.
The core network provides a path for exchanging information between subnetworks or different LANs. Imagine the core network as the backbone of an enterprise's entire network.
The technology behind RANs is called RAT (radio access technology) and represents the principal technology behind radio-based communication. RATs include known network standards like GSM or LTE, or Bluetooth and WiFi.
Linux Foundation Networking Presents: O-RAN Software Community
In the first half of 2019, The Linux Foundation, in collaboration with the O-RAN Alliance, created the O-RAN Software Community, where members can contribute their knowledge & know-how to the O-RAN project.
Currently, the goal is to create a common O-RAN specification, that all RAN vendors would potentially adopt. This would mean a common interface, independent of the radio unit type.
This move certainly makes sense, since, at its core, O-RAN stands for openness – open-source, nonproprietary radio access networks. As the technical charter of the project puts it:
The mission of the Project is to develop open-source software enabling modular open, intelligent, efficient, and agile radio access networks, aligned with the architecture specified by O-RAN Alliance.
The further goal of creating a software community centered around this project is to include projects such as OPNFV, ONAP, and others, to create a complete package for future, open networking.
Join us in reminiscing and reminding you, what PANTHEON.tech has managed to create, participate in, or inform about in 2020.
Project: CDNF.io
In the first quarter of the year, we have made our latest project, CDNF.io, accessible to the public. Cloud-native functions were long overdue in our portfolio and let me tell you – there are lots of them, ready to be deployed anytime.
We have prepared a series of videos, centered around our CNFs, which you can conveniently view here:
Perhaps you like to read more than hear someone explain things to you? We wrote a few posts on:
Apart from our in-house solutions, we have worked on demonstrating several scenarios with common technologies behind them: ServiceNow® & Cisco’s Network Services Orchestrator.
In terms of ServiceNow®, our posts centered around:
Since we did not want to exclude people who might not be that knowledgeable about what we do, we have created a few series on technologies and concepts PANTHEON.tech is engaged in, such as:
We try to listen closely to what Robert Varga, the top single contributor to the OpenDaylight source code, has to say about OpenDaylight. That allowed us to publish opinion/informative pieces like:
We would like to thank everybody who does their part in working and contributing to projects in PANTHEON.tech, but open-source projects as well. 2020 was challenging, to say the least, but pulling together, makes us stronger – together.
Happy holidays and a happy New Year to our colleagues, partners, and readers – from PANTHEON.tech.
These thoughts were originally sent to the public karaf-dev mailing list, where Robert Varga wrote a compelling opinion on what the future holds for Karaf and where it is currently headed. The text below was slightly edited from the original.
With my various OpenDaylight hats on, let me summarize our project-wide view, with a history going back to the project that was officially announced (early 2013).
From the get-go, our architectural requirement for OpenDaylight was OSGi compatibility. This means every single production (not maven-plugin obviously) artifact has to be a proper bundle.
This highly-technical and implementation-specific requirement was set down because of two things:
What OSGi brings to MANIFEST.MF in terms of headers and intended wiring, incl. Private-Package
Typical OSGi implementation (we inherited Equinox and are still using it) uses multiple class loaders and utterly breaks on split packages
This serves as an architectural requirement that translates to an unbreakable design requirement of how the code must be structured.
We started out with a home-brew OSGi container. We quickly replaced it with Karaf 3.0.x (6?), massively enjoying it being properly integrated, with shell, management, and all that. Also, feature:install.
At the end of the day, though, OpenDaylight is a toolkit of a bunch of components that you throw together and they work.
Our initial thinking was far removed from the current world of containers, as far as operations go. The deployment was envisioned more like an NMS with a dedicated admin team (to paint a picture), providing a flexible platform.
The world has changed a lot, and the focus nowadays is on containers providing a single, hard-wired use-case.
We now also require Java 11, hence we have JPMS – and it can fulfill our architectural requirement just as well as OSGi. Thanks to OSGi, we have zero split packages.
We do not expect to ditch Karaf anytime soon, but rather leverage static-framework for a light-weight OSGi environment, as that is clearly the best option for us short-to-medium term, and definitely something we will continue supporting for the foreseeable future.
The shift to nimble single-purpose wirings is not going away and hence we will be expanding there anyway.
To achieve that, we will not be looking for a framework-of-frameworks, we will do that through native integration ourselves.
If Karaf can do the same, i.e. have its general-purpose pieces available as components, easily thrown together with @Singletons or @Components, with multiple frameworks, as well as nicely jlinkable – now that would be something.
From the get-go, the MD-SAL architecture was split into two distinct worlds: Binding-Independent (BI, DOM) and Binding-Aware (BA, Binding).
This split comes from two competing requirements:
Type-safety provided by Java, for application developers who interact with specific data models
Infrastructure services that are independent of data models.
Type-safety is supported by interfaces and classes generated from YANG models. It generally feels like any code, where you deal with DTOs.
Infrastructure services are supported by an object model similar to XML DOM, where you deal with hierarchical "document" trees. All you have to go by are QNames.
For obvious reasons, most developers interacting with OpenDaylight have never touched the Binding Independent world, even though it underpins pretty much every single feature available on the platform.
The old OpenDaylight SAL architecture looked like this:
A very dated picture of how the system is organized.
It is obvious that the two worlds need to seamlessly interoperate.
For example, RPCs invoked by one world must be able to be serviced by the other. Since RPCs are the equivalent of a method call, this process needs to be as fast as possible, too.
That leads to a design, where each world has its own broker and the two brokers are connected. Invocations within the world would be handled by that world’s broker, foregoing any translation.
The Binding-Aware layer sits on top of the Binding Independent one. But it is not a one-to-one mapping.
This comes from the fact, that the Binding-Independent layer is centered around what makes sense in YANG, whereas the Binding-Aware layer is centered around what makes sense in Java, including various trade-offs and restrictions coming from them.
Binding-Aware: what makes sense in Java.
Binding-Independent: what makes sense in YANG.
Remote Procedure Calls
For RPCs, this meant that there were two independent routing tables, with repeated exports being done from each of them.
The idea of an RPC router was generalized in the (now long-forgotten) RpcRouter interface. Within a single node, the Binding & DOM routers would be interconnected.
For clustered scenarios, a connector would be used to connect the DOM routers across all nodes. An inter-node Binding-Aware RPC request would therefore travel from node A's Binding router down to its DOM router, across the connector to node B's DOM router, and on to the implementation there.
Both the DOM routers and the connector speak the same language – hence they can communicate without data translation.
The design was simple and effective, but it has not survived the test of time – most notably, the transition to dynamic loading of models in the Karaf container.
BA/BI Debacle: Solution
Model loading impacts the data translation services needed to cross the BA/BI barrier, leading to situations where an RPC implementation was available in the BA world but could not yet be exported to the BI world. This, in turn, leads to RPC routing loops and, in the case of data-store services, missing data & deadlocks.
To solve these issues, we have decided to remove the BA/BI split from the implementation and turn the Binding-Aware world into an overlay on top of the Binding-Independent world.
This means that all infrastructure services always go through BI, and the Binding RPC Broker was gradually taken behind the barn; there was a muffled sound in 2015.
Welcome to Part 1 of the PANTHEON.tech Ultimate Guide to OpenDaylight! We will start off lightly with some tips & tricks regarding the tricky documentation, as well as some testing & building tips to speed up development!
Documentation
1. Website, Docs & Wiki
The differences between these three sources can be staggering. But no worries, we have got you covered!
OpenDaylight Docs – The holy grail for developers. The Docs page provides developers with all the important information to get started or go further.
OpenDaylight Wiki – A Confluence based wiki, for meeting minutes and other information, regarding the governance, projects structure, and other related stuff.
2. Mailing Lists
There are tens (up to hundreds) of mailing lists you can join, so you are up-to-date with all the important information – even dev talks, thoughts, and discussions!
DEV – 231 members – all projects development list with high traffic.
Release – 180 members – milestones & coordination of releases, informative if you wish to stay on top of all releases!
TSC – 236 members – the Technical Steering Committee acts as the guidance-council for the project
Testing & Building
1. Maven “Quick” Profile
There’s a “Quick” maven profile in most OpenDaylight projects. This profile skips a lot of tests and checks, which are unnecessary to run with each build.
This way, the build is much faster:
mvn clean install -Pq
2. GitHub x OpenDaylight
The OpenDaylight code is mirrored on GitHub! Since more people are familiar with the GitHub environment, rather than Gerrit, make sure to check out the official GitHub repo of ODL!
We have come a long way to enjoy all the benefits that cloud-native network functions bring us – lowered costs, agility, scalability & resilience. This post will break down the road to CNFs – from PNF to VNF, to CNF.
What are PNFs (physical network functions)?
Back in the ’00s, network functions were utilized in the form of physical, hardware boxes, where each box served the purpose of a specific network function. Imagine routers, firewalls, load balancers, or switches as PNFs, utilized in data centers for decades before another technology replaced them. PNF boxes were difficult to operate, install, and manage.
Just as a personal computer was once unimaginable, we were unable to imagine virtualized network functions. Thanks to cheaper, off-the-shelf hardware and the expansion of cloud services, enterprises were able to afford to move some network parts from PNFs to generic, commodity hardware.
What are VNFs (virtual network functions)?
The approach of virtualization enabled us to share hardware resources between multiple tenants while keeping the isolation of environments in place. The next logical step was the move from the physical, to the virtual world.
A VNF is a virtualized network function, that runs on top of a hardware networking infrastructure. Individual functions of a network may be implemented or combined, in order to create a complete package of a networking-communication service. A virtual network function can be part of an SDN architecture or used as a singular entity within a network.
What are CNFs (cloud-native network functions)?
Cloud-native network functions are software implementations of functions traditionally performed by PNFs – and they need to conform to cloud-native principles. They can be packaged within a container image, are always ready to be deployed & orchestrated, and can be chained together to perform a series of complex network functions.
Why should I use CNFs?
Microservices and the adoption of cloud-native principles come with several benefits, which show a natural evolution of network functions in the 2020s. Imagine the benefits of:
Reduced Costs
Immediate Deployment
Easy Control
Agility, Scalability & Resilience
Our CNF project delivers on all of these promises. Get up-to-date with your network functions and contact us today, to get a quote.
This is a continuation of our guide on the Cisco Network Service Orchestrator. In our previous article, we have shown you how to install and run Cisco NSO with three virtual devices. We believe you had time to test it out and get to know this great tool.
Now, we will show you how to use the Cisco NSO with our SDN Framework – lighty.io. You can read more about lighty.io here, and even download lighty-core from our GitHub here.
Prerequisites
This tutorial was tested on Ubuntu 18.04 LTS. In this tutorial we are going to use:
After the build, locate the lighty-community-restconf-netconf-app artifact and unzip its distribution from the target directory:
cd lighty-examples/lighty-community-restconf-netconf-app/target
unzip lighty-community-restconf-netconf-app-11.2.0-bin.zip
cd lighty-community-restconf-netconf-app-11.2.0
Now we can start the lighty.io application by running its .jar file:
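Assuming the default artifact name from the distribution we just unzipped:
java -jar lighty-community-restconf-netconf-app-11.2.0.jar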
After a few seconds, we should see a message in the logs saying that everything started successfully:
INFO [main] (Main.java:97) - lighty.io and RESTCONF-NETCONF started in 7326.731ms
The lighty.io application should now be up and running. The default RESTCONF port is 8888.
Connect Cisco NSO to the lighty.io application
To connect Cisco NSO to the lighty.io application via the NETCONF protocol, we must add it as a node to the configuration datastore using RESTCONF. To do this, call a PUT request on the URI:
The parameter nodeId specifies the name under which we will address Cisco NSO in the lighty.io application. The parameters host and port specify where the Cisco NSO instance is running. The default username and password for Cisco NSO are admin/admin. In case you would like to change the node-id, be sure to change it in the URI too.
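A sketch of such a request – we assume the Bierman-style RESTCONF URI on port 8888 and NSO's default NETCONF northbound port 2022, so adjust both to your setup:
curl -X PUT -H "Content-Type: application/json" \
  http://localhost:8888/restconf/config/network-topology:network-topology/topology/topology-netconf/node/nso \
  -d '{
    "node": [{
      "node-id": "nso",
      "netconf-node-topology:host": "127.0.0.1",
      "netconf-node-topology:port": 2022,
      "netconf-node-topology:username": "admin",
      "netconf-node-topology:password": "admin",
      "netconf-node-topology:tcp-only": false
    }]
  }'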
To check if Cisco NSO was connected successfully, call a GET request on the URI:
If Cisco NSO was connected successfully, the value of the connection-status should be connected.
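Again assuming the Bierman-style URI, the check could look like this:
curl http://localhost:8888/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/nso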
Activate Cisco NSO service using lighty.io
Activation of the Cisco NSO service is similar to connecting Cisco NSO to lighty.io. We are going to activate the ACL service we created in the previous tutorial by calling a PUT REST request on the URI:
This payload is modeled in a YANG model we created together with the ACL service in our previous tutorial. Feel free to change the values of the ACL parameters (first, check what types they are in the ACL service YANG model) and if you are changing ACL_Name, don’t forget to change it in the URI too.
Unfortunately, at the time of writing this tutorial, there is a bug in OpenDaylight NETCONF (NETCONF-568) with parsing the output from this call. It prevents lighty.io from sending a response to the RESTCONF request we sent, and we need to manually stop waiting for this response in Postman (or whichever REST client you are using).
Now, our service should be activated! To check activated services in Cisco NSO, call a GET request on the URI:
You can simulate hundreds or thousands of NETCONF devices within your development or CI/CD pipeline. We are, of course, talking about our lighty NETCONF Simulator, which is now available on GitHub! This tool is free & open-source, and based on OpenDaylight's state-of-the-art NETCONF implementation.
We have recently finished the implementation of the get-schema RPC from NETCONF Monitoring, which is based on IETF RFC 6022 and brings users a previously missing monitoring capability for NETCONF devices.
Let us know, what NETCONF device you would like to simulate!
What is get-schema?
Part of the NETCONF Monitoring feature set is the get-schema RPC, which allows the controller to download schemas directly from the NETCONF device, so they don't have to be added manually.
Step by step, the device connection process looks like this (once the controller and the device are started):
1. Connection between NETCONF device and controller is established
2. When established and hello message capabilities exchanged, the controller requests a list of available models from the NETCONF device
3. When NETCONF device supports this feature, it sends its models to the controller
4. The controller then processes those models and builds the schema context
In a more technical perspective, here is the process of connecting devices:
1. SSH connection from the controller to the NETCONF device is established
2. NETCONF device sends a hello message with its capabilities
3. Controller sends hello message with its capabilities
4. Controller requests (gets) a list of available schemas (models) from the NETCONF device datastore (ietf-netconf-monitoring:netconf-state/schemas)
5. NETCONF device sends a list of available schemas to the controller
6. The controller goes through this list, downloads each model via the get-schema RPC, and stores them in the cache/schema directory
7. Schema context is built in the controller from models in the cache/schema directory
How does the feature work in an enabled device?
In the device, there is a monitoring flag that can be set with the withNetconfMonitoringEnabled(boolean) method. The feature is enabled by default. If the flag is enabled, then when the device is built and started, the device's operational datastore is populated with schemas from the device's schema context.
In our device, we use a NetconfDevice implementation which is built with the NetconfDeviceBuilder pattern. The feature is enabled by default and can be disabled by calling withNetconfMonitoringEnabled(false) on the NetconfDeviceBuilder, which sets the flag controlling whether NETCONF monitoring will be enabled.
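A minimal sketch of that builder call – only withNetconfMonitoringEnabled() and build() are taken from the description above, the rest of the device setup is left out:
NetconfDevice device = new NetconfDeviceBuilder()
    // ... the usual device setup: YANG models, binding port, request processors ...
    .withNetconfMonitoringEnabled(false)  // monitoring is enabled by default; false turns it off
    .build();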
When the build() command is called on the device builder and that flag is set, the netconf-monitoring model is added to the device, and a NetconfDeviceImpl instance is created with the monitoring flag from the builder. Then, when the device is started, prepareSchemasForNetconfMonitoring is called if monitoring is enabled, and the datastore is populated with schemas, which are stored under the netconf-state/schemas path.
This is done via a write transaction, where each module and submodule in the device's schema context is converted to a schema and written into a map under its schema key (if the map doesn't already contain a schema with the given key). When the device is then connected to the controller, the get-schema RPC will ask for each of these schemas in the netconf-state/schemas path and download them to the cache/schema directory.
What is the purpose of the get-schema?
It helps to automate the device connection process. When a new device is connected, there is no need to manually find all the models the device advertises in its capabilities and add them to the controller; they are downloaded from the device by the controller.
[Example 1] NETCONF Monitoring schemas on our Toaster simulator device
To get a list of all schemas, it is necessary to send a <get> request with the netconf-state/schemas path specified in the filter.
To get a particular schema with its content in YANG format, the following RPC is sent – an example of getting the toaster schema, with revision 2009-11-20.
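Following RFC 6022, the XML RPC request would look roughly like this:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-schema xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring">
    <identifier>toaster</identifier>
    <version>2009-11-20</version>
    <format>yang</format>
  </get-schema>
</rpc>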
A DevOps paradigm, a programmatic approach, or Kubernetes management: the decision between a declarative and an imperative approach is not really an either/or choice – which we will explain in this post.
The main difference between the declarative and imperative approach is:
Declarative: You will say what you want, but not how
Imperative: You describe how to do something
Declarative Approach
Users will mainly use the declarative approach when describing how services should start, for example: "I want 3 instances of this service to run simultaneously".
In the declarative approach, a YAML file containing the desired configuration is read and applied according to the declarative statement. A controller will then know about the YAML file and apply it where needed. Afterwards, the K8s scheduler will start the services wherever it has the capacity to do so.
Kubernetes, or K8s for short, lets you decide which approach you choose. When using the imperative approach, you explain to Kubernetes in detail how to deploy something. The imperative way includes the commands create, run, get & delete – basically any verb-based command.
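For illustration, the same deployment handled both ways with kubectl (the deployment name, image, and manifest file are arbitrary):
# Declarative: describe the desired state in a manifest and let Kubernetes reconcile it
kubectl apply -f nginx-deployment.yaml

# Imperative: spell out each step yourself
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=3
kubectl delete deployment nginx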
Will I ever manage imperatively?
Yes, you will. Even when using declarative management, there is always an operator who translates the intent into a sequence of orders and operations. Or there might be several operators who cooperate or split the responsibility for parts of the system.
Although declarative management is recommended in production environments, imperative management can serve as a faster introduction to managing your deployments, with more control over each step you would like to introduce.
Each approach has its pros and cons, and the choice ultimately depends on your deployment and how you want to manage it.
While software-defined networking aims for automation, once your network is fully automated, enterprises should consider IBN (Intent-Based Networking) as the next big step.
Intent-Based Networking (IBN)
Intent-Based Networking is an idea introduced by Cisco, which makes use of artificial intelligence, as well as machine learning, to automate various administrative tasks in a network. It means telling the network, in a declarative way, what you want to achieve, relieving you of the burden of exactly describing what the network should do.
For example, we can configure our CNFs in a declarative way, where we state the intent – how we want the CNF to function, but we do not care how the configuration of the CNF will be applied to, for example, VPP.
For this purpose, VPP-Agent will send the commands in the correct sequence (with additional help from KVscheduler), so that the configuration will come as close as possible to the initial intent.
For newcomers to our blog – welcome to a series on explanations from the world of PANTHEON.tech software development. Today, we will be looking at what software-defined networking is – what it stands for, it’s past, present, future – and more.
What is SDN – Software Defined Networking?
Networks can scale exponentially and require around-the-clock troubleshooting in case something goes wrong – which it always can. Software-Defined Networking is a concept of decluttering enterprises of physical network devices and replacing them with software. The goal is to improve traditional network management and ease the entire process, by removing pricey, easily obsolete hardware and replacing it with its virtualized counterparts.
The core component is the control plane, which encompasses one (or several) controllers, like OpenDaylight. This makes centralization of the network a breeze and provides an overview of its entirety. The main benefits of utilizing SDN are:
Centralization
Open Standards
Scheduling
Most network admins can relate to the feeling when you have to manage multiple network devices separately, with different ones requiring proprietary software and making your network a decentralized mess. Utilizing SDN enables you to make use of a network controller and centralize the management, security, and other aspects of your network in one place.
Network topologies enable full control of the network flow. Bandwidth can be managed to go where it needs, but it does not end there – network resources, in general, can be secured, managed, and optimized, in order to accommodate current needs. Scheduling or programmability is what differs software-defined networking from a traditional network approach.
Open standards mean that you do not have to rely on one hardware provider with vendor-specific protocols and devices. Take projects such as OpenDaylight, which has been around since 2013, with contributions from major companies like Orange and Red Hat, and with leading contributions from PANTHEON.tech. Being an open-source project, you can rely on a community of expert engineers perfecting the solution with each commit or pull request.
The idea of a software-defined network supposedly started at Stanford University, where researchers played with the idea of virtualizing the network by separating the control plane and the data plane into two independent entities.
What is NFV – Network Function Virtualization?
On the other hand, NFV, or Network Function Virtualization, aims to replace hardware that serves a specific purpose with virtual network functions (Virtual Customer Premise Equipment – vCPE). Imagine getting rid of most proprietary hardware and the difficulty of upgrading each piece, while making these functions more accessible, scalable, and centralized.
SDN & NFV therefore go hand-in-hand in most of the aspects covered, but mainly in the goal of virtualizing most parts of the network equipment or functions.
As for the future, PANTHEON.tech’s mission is to bring enterprises closer to a complete SDN & NFV coverage, with training, support, and custom network software that will make the transition easier. Contact us today – the future of networking awaits.
As part of a webinar, in cooperation with the Linux Foundation Networking, we have created two repositories with examples from our demonstration “Building CNFs with FD.io VPP and Network Service Mesh + VPP Traceability in Cloud-Native Deployments“:
Check out our full webinar, in cooperation with the Linux Foundation Networking, on YouTube:
What is Network Service Mesh (NSM)?
Recently, Network Service Mesh (NSM) has been drawing lots of attention in the area of network function virtualization (NFV). Inspired by Istio, Network Service Mesh maps the concept of a service mesh to L2/L3 payloads. It runs on top of (any) CNI and builds additional connections between Kubernetes Pods at run-time, based on the Network Service definition deployed via CRD.
Unlike Contiv-VPP, for example, NSM is mostly controlled from within applications through the provided SDK. This approach has its pros and cons.
Pros: Gives programmers more control over the interactions between their applications and NSM
Cons: Requires a deeper understanding of the framework to get things right
Another difference is that NSM intentionally offers only minimalistic point-to-point connections between pods (or clients and endpoints, in its terminology). Everything that can be implemented via CNFs is left out of the framework. Even things as basic as connecting a service chain with external physical interfaces, or attaching multiple services to a common L2/L3 network, are not supported and are instead left to the users (programmers) of NSM to implement.
Integration of NSM with Ligato
At PANTHEON.tech, we see the potential of NSM and decided to tackle the main drawbacks of the framework. For example, we have developed a new plugin for Ligato-based control-plane agents that allows seamless integration of CNFs with NSM.
Instead of having to use the low-level and imperative NSM SDK, users (not necessarily software developers) can use the standard northbound (NB) protobuf API to define the connections between their applications and other network services in a declarative form. The plugin then uses the NSM SDK behind the scenes to open the connections and creates the corresponding interfaces that the CNF is then ready to use.
The CNF components, therefore, do not have to care about how the interfaces were created, whether it was by Contiv, via NSM SDK, or in some other way, and can simply use logical interface names for reference. This approach allows us to decouple the implementation of the network function provided by a CNF from the service networking/chaining that surrounds it.
The plugin for Ligato-NSM integration is shipped both separately, ready for import into existing Ligato-based agents, and as a part of our NSM-Agent-VPP and NSM-Agent-Linux. The former extends the vanilla Ligato VPP-Agent with NSM support, while the latter also adds NSM support but omits all the VPP-related plugins when only Linux networking needs to be managed.
Furthermore, since most of the common network features are already provided by the Ligato VPP-Agent, it is often unnecessary to do any additional programming work whatsoever to develop a new CNF. With the help of the Ligato framework and tools developed at PANTHEON.tech, achieving the desired network function is often a matter of defining the network configuration declaratively inside one or more YAML files deployed as Kubernetes CRD instances. For examples of Ligato-based CNF deployments with NSM networking, please refer to our repository with CNF examples.
Finally, the repository also includes a controller for a K8s CRD that allows deploying network configuration for Ligato-based CNFs like any other Kubernetes resource defined inside YAML-formatted files. Usage examples can also be found in the repository with CNF examples.
CNF Chaining using Ligato & NSM (example from LFN Webinar)
In this example, we demonstrate the capabilities of the NSM agent – a control plane for Cloud-native Network Functions deployed in a Kubernetes cluster. The NSM agent seamlessly integrates the Ligato framework for Linux and VPP network configuration management with Network Service Mesh (NSM), which separates the data-plane connectivity between containers and external endpoints from the control plane.
In the presented use-case, we simulate a scenario in which a client from a local network needs to access a web server with a public IP address. The necessary Network Address Translation (NAT) is performed in-between the client and the web server by the high-performance VPP NAT plugin, deployed as a true CNF (Cloud-Native Network Function) inside a container. For simplicity, the client is represented by a K8s Pod running an image with cURL installed (as opposed to being an external endpoint, as it would be in a real-world scenario). On the server side, the minimalistic TestHTTPServer implemented in VPP is utilized.
In each of the three Pods, an instance of the NSM Agent runs to communicate with the NSM manager via the NSM SDK and negotiate the additional network connections that join the pods into a chain:
Client <-> NAT-CNF <-> Web server (see diagrams below)
The agents then use the features of the Ligato framework to further configure Linux and VPP networking around the additional interfaces provided by NSM (e.g. routes, NAT).
The configuration to apply is described declaratively and submitted to NSM agents in a Kubernetes native way through our own Custom Resource called CNFConfiguration. The controller for this CRD (installed by cnf-crd.yaml) simply reflects the content of applied CRD instances into an ETCD datastore from which it is read by NSM agents. For example, the configuration for the NSM agent managing the central NAT CNF can be found in cnf-nat44.yaml.
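As a rough sketch of that workflow, assuming the manifests referenced above and an etcd endpoint reachable from your shell (both deployment-specific), the deployment and the reflected keys could be checked roughly like this:
# Install the CRD controller and apply the example NAT44 configuration
kubectl apply -f cnf-crd.yaml
kubectl apply -f cnf-nat44.yaml
# The controller reflects the applied CNFConfiguration into etcd, where the NSM agents read it;
# listing the Ligato key prefix shows what was reflected (the etcd endpoint is deployment-specific)
etcdctl get --prefix /vnf-agent/ --keys-only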
More information about cloud-native tools and network functions provided by PANTHEON.tech can be found on our website here.
To confirm that the client's IP is indeed source-NATed (from 192.168.100.10 to 80.80.80.102) before reaching the web server, one can use VPP packet tracing:
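The trace itself is not reproduced here, but a typical way to capture it with the VPP CLI inside the NAT CNF looks roughly like this (assuming the NSM connections are backed by memif interfaces; pod names are placeholders):
# Inside the NAT-CNF container, record the next packets arriving over memif
vppctl trace add memif-input 10
# Generate traffic from the client pod, e.g.:
# kubectl exec -it <client-pod> -- curl 80.80.80.102
# Display the recorded trace; the nat44 nodes should show the source address
# being rewritten from 192.168.100.10 to 80.80.80.102
vppctl show trace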
PANTHEON.tech s.r.o., its products or services, are not affiliated with ServiceNow®, neither is this post an advertisement of ServiceNow® or its products.
ServiceNow® is a cloud-based platform that enables enterprise organizations to automate business processes across the enterprise. We have previously shown how to use ServiceNow® & OpenDaylight to automate your network.
We will demonstrate the possibility of using ServiceNow® to interact with a firewall device. More precisely, we will manage Access Control Lists (ACLs), which use a set of rules that define how to forward or block packets in network traffic.
User Administration
The Now® platform offers, among other things, user administration, which allows us to work with users, assign them to groups, and assign both to roles based on their privileges. In this solution/demonstration, two different groups of users, with corresponding roles, are used.
The first group of users are requestors, which may represent basic end-users, employees, or customers of an enterprise organization. Such a user can create a new rule request by submitting a simple form and, without any knowledge of networking, briefly describe the request in the description field.
This request will then be handled by the network admin. At the same time, users can monitor their requests and their status:
The custom table used in the request process is inherited from the Task table, which is one of the core tables provided with the base system. It provides a series of fields, which can be used in the process of request-item management, and provides us access to the approval logic.
Approval Strategy
Network admins form the second group of users. They receive requests from end-users and decide whether to fulfill or reject them.
If they decide to fulfill a request, they have an extended view of the previous form available, which offers more specific fields, and simply fill in the necessary data. This data represents the ACL rule information that will later be applied. There are several types of rules (IP, TCP, UDP, ICMP, MAC), and different properties (form fields) must be filled for each of these types.
NOTE: It is possible to add another group of users, which could, for example, fill in the details of the rule. This group would create another layer in the entire process, and the network admin could then focus only on approving or rejecting requests.
The network admin has the existing set of rules available, stored in tables according to their type. Existing rules can be accessed from the Application navigator and viewed inside the created rule request, which the admin is currently reviewing. Data in the tables are updated at regular intervals, as well as after a new rule is added.
Workflow Overview
The network admin can decide to approve or reject the request. Once the request is approved, a flow of actions is triggered; everything after approval is done automatically. A list of existing rules is retrieved (GET) from the VPP-Agent using a REST API call, and based on the type of ACL rule, the corresponding action is performed.
Each action consists of two steps. First, the payload is created by inserting the new rules into the list of existing rules (if the ACL already exists) or by creating a new Access Control List (ACL). In the second step, the payload from the previous step is sent back to the VPP-Agent using the REST API. At the end of this action flow, the tables that contain data describing existing rules are updated.
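For orientation only – assuming the VPP-Agent's REST plugin is enabled on its default port 9191 (endpoint paths can differ between vpp-agent versions), the first step's dump of existing IP ACLs boils down to a call like this, issued by the MID server in the demo:
# Dump the IP ACLs currently configured in VPP through the agent's REST API
curl -s http://<vpp-agent-host>:9191/dump/vpp/v2/acl/ip
The updated payload is then pushed back through the agent's northbound API in the second step; the exact endpoint and payload format depend on the vpp-agent version, so treat the call above only as an orientation point.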
Managing existing rules
In addition to the approval process, the network admin can also update existing rules or create new ones. The network admin fills the data into a simple form; after submitting it, a request is sent directly to the device, without the need for the approval process, and the rule is applied immediately.
MID server
ServiceNow® applications need to communicate with external systems to transfer data. For this purpose, the MID server is used, which runs as a Windows service or UNIX daemon. In our case, we need to get the list of existing rules from the VPP-Agent, or send a request to the VPP-Agent when we want to create or update a rule. The advantage of a MID server is that communication is initiated inside the enterprise's firewall and therefore does not require any special firewall rules or VPNs.
This docker-compose file is based on this one from the official sdnc/oam Gerrit repository. The most important images are dgbuilder (which starts a webserver where directed graphs can be created) and sdnc (the SDN-Controller itself).
To download and start the images specified in the docker-compose file, call this command:
docker-compose up
Be patient, it may take a while.
In the end, when everything is up & running, we should see a log stating that Karaf was started successfully. It should look similar to this:
sdnc_controller_container | Karaf started in 0s. Bundle stats: 12 active, 12 total
The Directed Graph Builder should be accessible at this address (the port is specified in the docker-compose file):
https://localhost:3000
Default login for dgbuilder is:
username: dguser
password: test123
Upload and activate Directed Graphs
Steps to upload a DG from the clipboard:
On the upper right side of the webpage click on the menu button
In the menu click on the “Import…” button
Select “Clipboard…” option
Paste the JSON representation of the graph into the text field
Click “Ok”
Place graph on the sheet
Steps to activate DG:
Click on the small square at the left side of the beginning of the graph (DGSTART node)
Click on the “Upload XML” button
Click on the “ViewDGList” button
Click on the “Activate” button in the “Activate/Deactivate” column of the table
Click on the “Activate” button
These files contain exported, parametrized Directed Graphs that connect your Cisco NSO instance via the NETCONF protocol, retrieve information about the connected Cisco NSO instance from the operational datastore, and activate the ACL service (that we created in this tutorial). We will use them in later steps, so upload and activate them in your SDN-C instance now.
You can download the corresponding JSON files here:
In the previous tutorial, we started Cisco NSO with three simulated devices. Now, we are going to connect a running Cisco NSO instance to SDN-C, using the directed graphs we just imported and activated.
But first, we need to obtain the address of the Cisco NSO instance, which we will use in the connect request. Run the docker inspect command from the terminal like this:
docker inspect sdnc_controller_container
Search for “NetworkSettings” – “Networks” – “yaml_default” – “Gateway”. The “Gateway” field contains the IP address that we will use, so save it for later. In our case, it looks like this:
...
"Gateway": "172.18.0.1",
...
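If you prefer not to scroll through the full JSON output, a Go template filter can extract just that field (the network name yaml_default matches the output above):
# Print only the gateway address of the yaml_default network
docker inspect -f '{{ .NetworkSettings.Networks.yaml_default.Gateway }}' sdnc_controller_container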
Now, we are going to connect to the SDN-C Karaf console so we can see the log, because some of the DGs write information there. Execute these commands:
docker exec -it sdnc_controller_container /bin/bash
cd /opt/opendaylight/bin/
./client
log:tail
To execute a Directed Graph, call the RESTCONF RPC SLI-API:execute-graph. To do this, call a POST request on the URI shown below, where <module-name> is the name of the module in which the RPC you want to call is located and <rpc-name> is the name of the RPC. Additionally, you can specify parameters if they are required. We are using port 8282, which we specified in the docker-compose file.
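The URI follows the standard OpenDaylight RESTCONF operations pattern, so with the port from the docker-compose file the call can also be made with curl instead of Postman (the credentials and the payload file name below are placeholders for your setup):
# Generic form of the URI:
#   http://localhost:8282/restconf/operations/<module-name>:<rpc-name>
# Example: invoking SLI-API:execute-graph with a JSON payload stored in payload.json
curl -u <user>:<password> -H "Content-Type: application/json" \
  -X POST http://localhost:8282/restconf/operations/SLI-API:execute-graph \
  -d @payload.json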
This Postman collection contains all the requests we are going to use now. Feel free to change any attributes, according to your needs.
Don’t forget to set the correct nodeAddress in this request – we got this value earlier by executing the docker inspect command.
The parameter nodeId specifies the name under which we will address the Cisco NSO instance in SDN-C. The other parameters are defaults for Cisco NSO.
After executing this RPC, the DG should print the ID of the Cisco NSO node and its connection status (most probably “connecting”) to the SDN-C log output.
...
12:57:14.654 INFO [qtp1682691455-1614] About to execute node #2 block node in graph SvcLogicGraph [module=NSO-operations, rpc=getNSO, mode=sync, version=1.0, md5sum=f7ed8e2805f0b823ab05ca9e7bb1b997]
12:57:14.656 INFO [qtp1682691455-1614] About to execute node #3 record node in graph SvcLogicGraph [module=NSO-operations, rpc=getNSO, mode=sync, version=1.0, md5sum=f7ed8e2805f0b823ab05ca9e7bb1b997]
12:57:14.671 INFO [qtp1682691455-1614] |Node ID is: nso|
12:57:14.672 INFO [qtp1682691455-1614] About to execute node #4 record node in graph SvcLogicGraph [module=NSO-operations, rpc=getNSO, mode=sync, version=1.0, md5sum=f7ed8e2805f0b823ab05ca9e7bb1b997]
12:57:14.674 INFO [qtp1682691455-1614] |Connection status is: connecting|
...
To check whether the Cisco NSO node was connected successfully, call the getNSO DG by executing the SLI-API:execute-graph RPC with this payload:
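The original payload is part of the Postman collection mentioned above; based on the graph metadata visible in the logs (module NSO-operations, RPC getNSO, sync mode), it is roughly of this shape – treat it as a sketch rather than the exact collection entry:
curl -u <user>:<password> -H "Content-Type: application/json" \
  -X POST http://localhost:8282/restconf/operations/SLI-API:execute-graph \
  -d '{"input":{"module-name":"NSO-operations","rpc-name":"getNSO","mode":"sync"}}'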
In the SDN-C logs, we should now see the “connected” status:
...
13:02:15.888 INFO [qtp1682691455-188] About to execute node #2 block node in graph SvcLogicGraph [module=NSO-operations, rpc=getNSO, mode=sync, version=1.0, md5sum=f7ed8e2805f0b823ab05ca9e7bb1b997]
13:02:15.889 INFO [qtp1682691455-188] About to execute node #3 record node in graph SvcLogicGraph [module=NSO-operations, rpc=getNSO, mode=sync, version=1.0, md5sum=f7ed8e2805f0b823ab05ca9e7bb1b997]
13:02:15.892 INFO [qtp1682691455-188] |Node ID is: nso|
13:02:15.893 INFO [qtp1682691455-188] About to execute node #4 record node in graph SvcLogicGraph [module=NSO-operations, rpc=getNSO, mode=sync, version=1.0, md5sum=f7ed8e2805f0b823ab05ca9e7bb1b997]
13:02:15.895 INFO [qtp1682691455-188] |Connection status is: connected|
...
Activate Cisco NSO service using Directed Graph
We are now going to activate the ACL service we created in this tutorial, by executing the activateACL directed graph.
Execute the SLI-API:execute-graph RPC with this payload:
Feel free to change the values of the ACL parameters (but first check their types in the ACL service YANG model).
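The exact payload is in the Postman collection; purely as an illustration (the module name and the parameter names below are hypothetical and must match what the activateACL graph actually expects), it again follows the execute-graph shape, with the ACL values passed as sli-parameters:
curl -u <user>:<password> -H "Content-Type: application/json" \
  -X POST http://localhost:8282/restconf/operations/SLI-API:execute-graph \
  -d '{"input":{"module-name":"NSO-operations","rpc-name":"activateACL","mode":"sync",
        "sli-parameter":[
          {"parameter-name":"device","string-value":"c1"},
          {"parameter-name":"aclName","string-value":"aclFromDG"},
          {"parameter-name":"interface","string-value":"GigabitEthernet 1/1"},
          {"parameter-name":"direction","string-value":"in"}]}}'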
Unfortunately, at the time of writing this tutorial, there is a bug in OpenDaylight NETCONF (NETCONF-568) with parsing the output of this RPC call. It prevents ODL from sending a response to the RESTCONF request we sent (the SLI-API:execute-graph RPC), so we need to manually stop waiting for this response in Postman (or whichever REST client you are using).
Now, the service should be activated! To check the services activated in Cisco NSO, call a GET request on the URI:
To check whether the device was configured, log into the Cisco NSO CLI and execute the show command:
ncs_cli -u admin
show configuration devices device c1 config ios:interface
You should see an output similar to this:
admin@ncs> show configuration devices device c1 config ios:interface
FastEthernet 1/0;
GigabitEthernet 1/1 {
ip {
access-group {
access-list aclFromDG;
direction in;
}
}
}
Congratulations
You have successfully connected SDN-C with the Cisco NSO and concluded our series! In case you would like a custom integration, feel free to contact us.