Binding Query (BQ) is an MD-SAL module, currently available in OpenDaylight versions master (7.0.5), 6.0.x, and 5.0.x. Its primary function is to filter data from the Binding-Aware (binding) model.
To use BQ, you need to create a QueryExpression and a QueryExecutor. The QueryExecutor holds a BindingCodecTree and the data represented by the binding model; the filter and all other operations defined in the QueryExpression are applied to this data.
A QueryExpression is created from the QueryFactory class, and its construction starts with the querySubtree method. This method takes an instance identifier, which has to point to the root of the data held by the QueryExecutor.
The next step is to create a path to the data that we want to filter, and then apply the required filter. When the QueryExpression is ready, it is applied with the executeQuery method of the QueryExecutor. One QueryExpression can be used on multiple QueryExecutors with the same data schema.
Prerequisites for Binding Query
Now, we will demonstrate how to actually use Binding Query. We will create a YANG model for this purpose:
module queryTest {
  yang-version 1.1;
  namespace urn:yang.query;
  prefix qt;

  revision 2021-01-20 {
    description
      "Initial revision";
  }

  grouping container-root {
    container container-root {
      leaf root-leaf {
        type string;
      }
      leaf-list root-leaf-list {
        type string;
      }
      container container-nested {
        leaf nested-leaf {
          type uint32;
        }
      }
    }
  }

  grouping list-root {
    container list-root {
      list top-list {
        key "key-a key-b";
        leaf key-a {
          type string;
        }
        leaf key-b {
          type string;
        }
        list nested-list {
          key "identifier";
          leaf identifier {
            type string;
          }
          leaf weight {
            type int16;
          }
        }
      }
    }
  }

  grouping choice {
    choice choice {
      case case-a {
        container case-a-container {
          leaf case-a-leaf {
            type int32;
          }
        }
      }
      case case-b {
        list case-b-container {
          key "key-cb";
          leaf key-cb {
            type string;
          }
        }
      }
    }
  }

  container root {
    uses container-root;
    uses list-root;
    uses choice;
  }
}
Then, we will create Binding-Aware test data, based on the provided YANG model.
public Root generateQueryData() {
    HashMap<NestedListKey, NestedList> nestedMap = new HashMap<>() {{
        put(new NestedListKey("NestedId"), new NestedListBuilder()
            .setIdentifier("NestedId")
            .setWeight((short) 10)
            .build());
        put(new NestedListKey("NestedId2"), new NestedListBuilder()
            .setIdentifier("NestedId2")
            .setWeight((short) 15)
            .build());
    }};
    HashMap<NestedListKey, NestedList> nestedMap2 = new HashMap<>() {{
        put(new NestedListKey("Nested2Id"), new NestedListBuilder()
            .setIdentifier("Nested2Id")
            .setWeight((short) 10)
            .build());
    }};
    HashMap<TopListKey, TopList> topMap = new HashMap<>() {{
        put(new TopListKey("keyA", "keyB"),
            new TopListBuilder()
                .setKeyA("keyA")
                .setKeyB("keyB")
                .setNestedList(nestedMap)
                .build());
        put(new TopListKey("keyA2", "keyB2"),
            new TopListBuilder()
                .setKeyA("keyA2")
                .setKeyB("keyB2")
                .setNestedList(nestedMap2)
                .build());
    }};
    HashMap<CaseBContainerKey, CaseBContainer> caseBMap = new HashMap<>() {{
        put(new CaseBContainerKey("test@test.com"),
            new CaseBContainerBuilder()
                .setKeyCb("test@test.com")
                .build());
        put(new CaseBContainerKey("test"),
            new CaseBContainerBuilder()
                .setKeyCb("test")
                .build());
    }};
    RootBuilder rootBuilder = new RootBuilder();
    rootBuilder.setContainerRoot(new ContainerRootBuilder()
        .setRootLeaf("root leaf")
        .setContainerNested(new ContainerNestedBuilder()
            .setNestedLeaf(Uint32.valueOf(10))
            .build())
        .setRootLeafList(new ArrayList<>() {{
            add("data1");
            add("data2");
            add("data3");
        }})
        .build());
    rootBuilder.setListRoot(new ListRootBuilder().setTopList(topMap).build());
    rootBuilder.setChoiceRoot(new CaseBBuilder()
        .setCaseBContainer(caseBMap)
        .build());
    return rootBuilder.build();
}
For better orientation in the test-data structure, there is also a JSON representation of the data we will use:
{
  "queryTest:root": {
    "container-root": {
      "root-leaf": "root leaf",
      "root-leaf-list": [
        "data1",
        "data2",
        "data3"
      ],
      "container-nested": {
        "nested-leaf": 10
      }
    },
    "list-root": {
      "top-list": [
        {
          "key-a": "keyA",
          "key-b": "keyB",
          "nested-list": [
            {
              "identifier": "NestedId",
              "weight": 10
            },
            {
              "identifier": "NestedId2",
              "weight": 15
            }
          ]
        },
        {
          "key-a": "keyA2",
          "key-b": "keyB2",
          "nested-list": [
            {
              "identifier": "Nested2Id",
              "weight": 10
            }
          ]
        }
      ]
    },
    "case-b-container": [
      {
        "key-cb": "test@test.com"
      },
      {
        "key-cb": "test"
      }
    ]
  }
}
From the binding model queryTest shown above, we can create a QueryExecutor. In this example, we will use the SimpleQueryExecutor; its builder takes a BindingCodecTree as a parameter, and the binding data created by the method above is then added to it.
public QueryExecutor createExecutor() {
    return SimpleQueryExecutor.builder(CODEC)
        .add(generateQueryData())
        .build();
}
Create a Query & Filter Data
Now, we can start with an example of how to create a query and filter some data. In the first example, we will describe how to filter a container by the value of one of its leaves. In the next steps, we will create a QueryExpression.
- First, we will create a QueryFactory from the DefaultQueryFactory class. The DefaultQueryFactory constructor takes a BindingCodecTree as a parameter.
QueryFactory factory = new DefaultQueryFactory(CODEC);
- The next step is to create the DescendantQueryBuilder from QueryFactory. The querySubtree method takes the instance identifier as a parameter. This identifier should be a root node from our model. In this case, it is a container with the name root.
DescendantQueryBuilder<Root> descendantQueryRootBuilder
= factory.querySubtree(InstanceIdentifier.create(Root.class));
- Then, we will set the path to the parent container of the leaf on whose value we want to filter.
DescendantQueryBuilder<ContainerRoot> descendantQueryContainerRootBuilder
= descendantQueryRootBuilder.extractChild(ContainerRoot.class);
- Now, we create the StringMatchBuilder for the leaf named root-leaf, whose value we want to match.
StringMatchBuilder<ContainerRoot> stringMatchBuilder = descendantQueryContainerRootBuilder.matching()
.leaf(ContainerRoot::getRootLeaf);
- The last step is to define which values should match and then build the QueryExpression. In this case, we will match the leaf value “root leaf”.
QueryExpression<ContainerRoot> matchRootLeaf = stringMatchBuilder.valueEquals("root leaf").build();
Now, the QueryExpression can be used to filter data from the QueryExecutor. To create the QueryExecutor, we use the createExecutor() method defined above in the test-data section.
QueryExecutor executor = createExecutor();
QueryResult<ContainerRoot> items = executor.executeQuery(matchRootLeaf);
The entire previous example in one block will look like this:
QueryFactory factory = new DefaultQueryFactory(CODEC);
QueryExpression<ContainerRoot> rootLeafQueryExpression = factory
.querySubtree(InstanceIdentifier.create(Root.class))
.extractChild(ContainerRoot.class)
.matching()
.leaf(ContainerRoot::getRootLeaf)
.valueEquals("root leaf")
.build();
QueryExecutor executor = createExecutor();
QueryResult<ContainerRoot> result = executor.executeQuery(rootLeafQueryExpression);
When we validate the result, we will find that only one item matched the condition in our query:
assertEquals(1, result.getItems().size());
String resultItem = result.getItems().stream()
.map(item -> item.object().getRootLeaf())
.findFirst()
.orElse(null);
assertEquals("root leaf", resultItem);
Filter Nested-List Data
The next example shows how to use Binding Query to filter data from the nested-list. This example filters nested-list items whose weight parameter equals 10.
QueryFactory factory = new DefaultQueryFactory(CODEC);
QueryExpression<NestedList> queryExpression = factory
.querySubtree(InstanceIdentifier.create(Root.class))
.extractChild(ListRoot.class)
.extractChild(TopList.class)
.extractChild(NestedList.class)
.matching()
.leaf(NestedList::getWeight)
.valueEquals((short) 10)
.build();
QueryExecutor executor = createExecutor();
QueryResult<NestedList> result = executor.executeQuery(queryExpression);
assertEquals(2, result.getItems().size());
If we need to filter nested-list items, but only from a top-list entry with specific keys, it will look like this:
QueryFactory factory = new DefaultQueryFactory(CODEC);
QueryExpression<NestedList> queryExpression = factory
.querySubtree(InstanceIdentifier.create(Root.class))
.extractChild(ListRoot.class)
.extractChild(TopList.class, new TopListKey("keyA", "keyB"))
.extractChild(NestedList.class)
.matching()
.leaf(NestedList::getWeight)
.valueEquals((short) 10)
.build();
QueryExecutor executor = createExecutor();
QueryResult<NestedList> result = executor.executeQuery(queryExpression);
assertEquals(1, result.getItems().size());
If we want to get top-list elements, but only those that contain nested-list items with a weight greater than or equal to 15, it is possible to set a match on the top-list containers and then continue with a condition on the nested-list. For numeric leaves, the greaterThanOrEqual, lessThanOrEqual, greaterThan, and lessThan methods are available.
QueryExpression<TopList> queryExpression = factory
.querySubtree(InstanceIdentifier.create(Root.class))
.extractChild(ListRoot.class)
.extractChild(TopList.class)
.matching()
.childObject(NestedList.class)
.leaf(NestedList::getWeight).greaterThanOrEqual((short) 15)
.build();
QueryExecutor executor = createExecutor();
QueryResult<TopList> result = executor.executeQuery(queryExpression);
assertEquals(1, result.getItems().size());
List<TopList> topListResult = result.getItems().stream()
.map(Item::object)
.filter(item -> item.getKeyA().equals("keyA"))
.filter(item -> item.getKeyB().equals("keyB"))
.collect(Collectors.toList());
assertEquals(1, topListResult.size());
The last example shows how to filter choice data by matching values of the key-cb leaf. The condition that must be met is defined by a pattern matching an email address.
QueryFactory factory = new DefaultQueryFactory(CODEC);
QueryExpression<CaseBContainer> queryExpression = factory
.querySubtree(InstanceIdentifier.create(Root.class))
.extractChild(CaseBContainer.class)
.matching()
.leaf(CaseBContainer::getKeyCb)
.matchesPattern(Pattern.compile("^[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,6}$",
Pattern.CASE_INSENSITIVE))
.build();
QueryExecutor executor = createExecutor();
QueryResult<CaseBContainer> result = executor.executeQuery(queryExpression);
assertEquals(1, result.getItems().size());
Binding Query can be used to filter data of interest, as shown in the previous examples. With Binding Query, it is possible to filter data with various options and get all the required information, including matching strings against patterns and simple filter operations on numbers.
by Peter Šuňa | Leave us your feedback on this post!
[OpenDaylight] Static Distribution
OpenDaylight's distribution package has remained the same for several years. But what if there is a different way to do this, making the distribution more aligned with the latest containerization trends? This is where an OpenDaylight static distribution comes to the rescue.
Original Distribution & Containerized Deployments
Let’s take a quick look at the usual way.
A standard distribution bundles the Karaf container together with the full set of OpenDaylight feature repositories, which are installed on demand at runtime.
It's an excellent strategy when the user wants to choose modules and build an application dynamically from construction blocks. Additionally, Karaf provides a set of tools that can affect configuration and features at runtime.
However, when it comes to micro-services and containerized deployments, this approach conflicts with some best practices for operating containers – statelessness and immutability.
Perks of a Static Distribution
Starting from version 4.2.x, Apache Karaf provides the capability to build a static distribution, aiming to be more compatible with the containerized environment – and OpenDaylight can use that as well.
So, what are the differences between a static vs. dynamic distribution?
Instead of adding everything to the distribution, you only need to specify a minimal list of features and bundles required by your runtime, and only those will be installed. This produces a lightweight distribution package and omits unnecessary components, including some Karaf features from the default distribution.
Boot features are pre-configured, so there is no need to execute any feature installations from Karaf's shell.
Configuration admin is replaced with a read-only version that only picks up configuration files from the ‘/etc/’ folder.
Bundle dependencies are resolved and verified during the build phase, which leads to more stable builds overall.
With all these changes in place, we can achieve an almost entirely immutable distribution, which can be used for the containerized deployments.
How to Build a Static Distribution with OpenDaylight’s Components
The latest version of the odl-parent component introduced a new project called karaf-dist-static, which defines a minimal list of features needed by all OpenDaylight’s components (static framework, security libraries, etc.).
This can be used as a parent POM to create our own static distribution. Let’s try to use it and assemble a static distribution with some particular features.
When it comes to adding features provided by the Karaf framework, a <startupFeatures> block should be used, since we are going to verify the installation of these features within the static distribution.
First, let’s add the ‘ssh’ feature to the list.
After applying all of these things, you should get a pom.xml file similar to the one below:
Once everything is ready, let’s build a project!
Building a project
If you check the log messages, you will probably notice that the KAR artifact is not the same as the one used for the dynamic distribution (for a dynamic distribution, you can expect org.apache.karaf.features/framework/4.3.0/kar).
Eventually, we can check the output directory of the Maven build – it should contain an 'assembly' folder with the static distribution we created, and a netconf-karaf-static-1.0.0-SNAPSHOT.zip archive containing that distribution.
While the ZIP archive is the artifact you would usually push to a repository, here we will verify our distribution by running Karaf directly from the assembly folder.
If everything goes well, you should see some system messages saying that Karaf has started, followed by a shell command-line interface:
With a static distribution, you don’t need to do any feature installation manually.
Let's just check that our features are running by executing the following command:
The produced output will contain a list of already started features; among them, you should find features we selected in our previous steps.
We can also run an additional check by sending a request to the corresponding RESTCONF endpoint:
The expected output would be the following:
What’s next?
Now, we can produce immutable & lightweight OpenDaylight distributions with a selected number of pre-installed features, which can be the first step towards creating Docker images fully suited to containerized deployment.
Our next steps would be to make logging and clustered configuration more suitable for running in containers, but that’s a topic for another article.
by Oleksii Mozghovyi | Leave us your feedback on this post!
[OpenDaylight] Migrating to AKKA 2.6.X
PANTHEON.tech has enabled OpenDaylight to migrate to the current version of AKKA, 2.6.x. Today, we will review recent changes to AKKA, which is the heart of OpenDaylight's Clustering functionality.
As the largest committer to the OpenDaylight source-code, PANTHEON.tech will regularly keep you updated and posted about our efforts in projects surrounding OpenDaylight.
Released in November 2019, AKKA 2.6 added many improvements, including better documentation, and introduced a lot of new features. In case you are new to AKKA, here is a description from the official website:
OpenDaylight Migrates to AKKA 2.6
The most important features of AKKA 2.6 are shortly described below; for the full list, please see the release notes.
Make sure to check out the AKKA Migration Guide 2.5.x to 2.6.x, and the PANTHEON.tech OpenDaylight page!
AKKA Typed
AKKA Typed is the new typed Actor API. It was declared ready for production use as of AKKA 2.5.22, and since 2.6.0 it is the officially recommended Actor API for new projects. The documentation was also significantly improved.
In a typed API, each actor declares an acceptable message type, and only messages of this type can be sent to the actor. This is enforced by the system.
For untyped extensions, seamless access is allowed.
The classic APIs for modules, such as Persistence, Cluster Sharding and Distributed Data, are still fully supported, so the existing applications can continue to use those.
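As a minimal illustration of the typed API – a sketch using the plain AKKA 2.6 Java DSL, not OpenDaylight-specific code – the actor below declares String as its only acceptable message type, which the compiler enforces at the call site:

import akka.actor.typed.ActorSystem;
import akka.actor.typed.Behavior;
import akka.actor.typed.javadsl.Behaviors;

public final class TypedGreeter {
    public static void main(String[] args) {
        // The behavior declares String as the only acceptable message type.
        Behavior<String> greeter = Behaviors.receive(String.class)
            .onMessage(String.class, name -> {
                System.out.println("Hello, " + name);
                return Behaviors.same();
            })
            .build();

        ActorSystem<String> system = ActorSystem.create(greeter, "greeter");
        system.tell("OpenDaylight");   // compiles
        // system.tell(42);            // would not compile – wrong message type
        system.terminate();
    }
}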
Artery
Artery is a reimplementation of the old remoting module, aimed at improving performance and stability, based on Aeron (UDP) and AKKA Streams TCP/TLS, instead of Netty TCP.
Artery provides more stability and better performance for systems using AKKA Cluster. Classic remoting is deprecated but still can be used. More information is provided in the migration guide.
Jackson-Based Serialization
AKKA 2.6 includes a new Jackson-based serializer, supporting both JSON and CBOR formats. This is the recommended serializer for applications using AKKA 2.6. Java serialization is disabled by default.
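A small sketch of the recommended pattern, assuming the standard akka-serialization-jackson module (the marker interface and message class names here are made up for illustration): messages implement a marker interface, and that interface is bound to the Jackson serializer in the application configuration.

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

// Marker interface bound to the serializer in application.conf:
//   akka.actor.serialization-bindings {
//     "com.example.MySerializable" = jackson-json   // or jackson-cbor
//   }
interface MySerializable {}

final class NodeStatus implements MySerializable {
    public final String nodeId;
    public final boolean up;

    @JsonCreator
    NodeStatus(@JsonProperty("nodeId") String nodeId, @JsonProperty("up") boolean up) {
        this.nodeId = nodeId;
        this.up = up;
    }
}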
Distributed Publish-Subscribe
With AKKA Typed, actors on any node in the cluster can subscribe to specific topics. The message published to one of these topics will be delivered to all subscribed actors. This feature also works in a non-cluster setting.
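A rough sketch of the typed Topic API (class and method names from akka.actor.typed.pubsub, available in recent 2.6 releases; the topic name and messages below are made up):

import akka.actor.typed.ActorRef;
import akka.actor.typed.ActorSystem;
import akka.actor.typed.Behavior;
import akka.actor.typed.javadsl.Behaviors;
import akka.actor.typed.pubsub.Topic;

public final class PubSubExample {
    public static void main(String[] args) {
        Behavior<Void> root = Behaviors.setup(ctx -> {
            // Subscriber that prints every message published to the topic.
            ActorRef<String> subscriber = ctx.spawn(
                Behaviors.receive(String.class)
                    .onMessage(String.class, msg -> {
                        System.out.println("received: " + msg);
                        return Behaviors.same();
                    })
                    .build(),
                "subscriber");

            // Topic actor; topics with the same name are linked across cluster nodes.
            ActorRef<Topic.Command<String>> topic =
                ctx.spawn(Topic.create(String.class, "alerts"), "alerts-topic");

            topic.tell(Topic.subscribe(subscriber));
            topic.tell(Topic.publish("link-down on node-3"));
            return Behaviors.empty();
        });
        ActorSystem.create(root, "pubsub-example");
    }
}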
Passivation in Cluster
With the cluster passivation feature, you may stop persistent entities that are not being used, in order to reduce memory consumption, by defining a message-receive timeout. The passivation timeout can be pre-configured in cluster settings or set explicitly for an entity.
Cluster: External shard allocation
AKKA 2.6 provides a new, alternative external shard allocation strategy, which allows explicit control over the allocation of shards. This covers use cases such as matching Kafka partition consumption with shard locations, to avoid network hops.
Sharded Daemon Process
Sharded Daemon Process keeps a set number of specific actors alive and balanced across the cluster. For rebalancing, an actor can be stopped and then started on a new node. This feature can be useful when a data-processing workload needs to be split across a set number of workers.
by Konstantin Blagov | Leave us your feedback on this post!
PANTHEON.tech Proves 2020 Leadership in Contributions to Linux Foundation Networking Projects
The Linux Foundation Networking: 2020 Year in Review shows PANTHEON.tech leading open-source enrichment and customer innovation adoption in SDN Automation, Cloud-Native, 5G & O-RAN.
Source: LFX Insights
Leadership and Contribution
PANTHEON.tech is pleased to showcase the Linux Foundation Networking “2020 Year in Review”, which highlights our continued commitment to open-source enrichment and customer adoption.
This report reflects a series of metrics for last year and we are extremely proud to be highlighting our continued leading levels of participation and contribution in LFN’s technical communities. As an example, PANTHEON.tech provided over 60% of the commits to OpenDaylight in 2020.
This is an extraordinary achievement, given that we are in the company of such acclaimed peers as AT&T, Orange S.A., Cisco Systems Inc., Ericsson, and Samsung.
Customer Enablement
Clearly, this report demonstrates that open-source software solutions have secured themselves in many customers' network architectures and strategies, with even more customers following this lead. Leveraging its expertise and experience, PANTHEON.tech has, since its inception, focused on offering customers application development services and enterprise-grade, tailored or productized open-source solutions with an accompanying full support model.
PANTHEON.tech leads the way in enabling customers with Software Defined Network automation, comprehensively integrating into an ecosystem of vendor and open orchestration, systems, and network devices across all domains of customer’s networks. Our solutions facilitate automation, for such services as O-RAN, L2/L3/E-VPN, 5G, or Data Centre, amongst many others.
Leveraging multiple open-source projects, including FD.io, we assist customers in embracing cloud-native, developing tailored enterprise-grade network functions, which focus on customer’s immediate and future requirements and performance objectives.
We help our customers unlock the potential of their network assets – whether new, legacy, proprietary, open, multi-domain, or multi-layer. PANTHEON.tech has solutions to simplify and optimize customers' networks, systems, and operations.
The key takeaway is that customers can rely on PANTHEON.tech to deliver: unlocking services in existing networks, innovating and adopting new networks and services, all while simplifying operations.
Please contact PANTHEON.tech to discuss how we can assist your open-source network and application goals with our comprehensive range of services, subscriptions, and training.
EntGuard | Next-Gen Enterprise VPN
At present, enterprises take various approaches to securing the external perimeters of their networks: from centralized Virtual Private Networks (VPN), through access without a VPN, to solutions such as EntGuard VPN.
For encryption, protection, security, meet EntGuard – the ultimate, enterprise VPN solution.
The most dangerous cyber-threats are those, which are not yet identified. Enterprises need to act proactively and secure access to their networks.
Work-From-Home & Cybersecurity
We saw an increased need of working from home last year. But what was first a necessity, seems to stay as a popular alternative to working from an office.
That also means that, as an enterprise, you need to go the extra mile to protect your employees, as well as your and their data. A VPN will:
With EntGuard VPN, PANTHEON.tech utilized years of working on network technologies and software, to give you an enterprise-grade product, that is built for the cloud.
With the world rapidly shifting towards virtual spaces, it is projected that the spending on cybersecurity will increase by 10% in 2021. EntGuard will save you costs, without compromising quality.
Built on WireGuard®
We decided to build EntGuard VPN on the critically-acclaimed WireGuard® protocol. The protocol focuses on ease-of-use & simplicity, as opposed to existing solutions like OpenVPN – while achieving incredible performance! Did you know that WireGuard® is natively supported in the Linux kernel and FD.io VPP since 2020?
WireGuard® is relied on for high-speeds and privacy protection. Complex, state-of-the-art cryptography, with lightweight architecture. An incredible combination.
Unfortunately, it's not easy to maintain WireGuard® in enterprise environments. That's why we have decided to bring you EntGuard, which gives you the ability to use WireGuard® tunnels in your enterprise environment.
EntGuard Highlights
About our CNFs
The PANTHEON.tech cloud-native network functions portfolio keeps on growing. At the start of 2020, we introduced you to the project, which at the moment houses 18 CNFs. Make sure to keep up-to-date with our future products by following us on social media!
[What Is] ONAP | Open Network Automation Platform
ONAP (Open Network Automation Platform) is quite a trend in the contemporary SDN world. It is a broad project, consisting of a variety of sub-projects (or components), which together form a network function orchestration and automation platform. Several enterprises are active in ONAP and its growth is accelerating rapidly. PANTHEON.tech is a proud contributor as well.
What is ONAP?
The platform itself emerged from the AT&T ECOMP (Enhanced Control, Orchestration, Management & Policy) and Open-O (Open Orchestrator) initiatives. ONAP is an open-source software platform, that offers a robust, real-time, policy-driven orchestration and automation framework, for physical and virtual network functions. It exists above the infrastructure layer, which automates the network.
ONAP enables end-users to connect services through the infrastructure. It allows network scaling and VNF/CNF implementations in a fully automated manner. Among other benefits, like:
Its goal is to enable operators and developers, networks, IT, and the cloud to quickly automate new technologies and support full lifecycle management. It is capable of managing (build, plan, orchestrate) Virtual Network Functions (VNF), as well as Software-Defined Networks (SDN).
ONAP’s high-level architecture involves numerous software subsystems (components). PANTHEON.tech is involved in multiple ONAP projects, but mostly around controllers (like SDN-C). For a detailed view, visit the official wiki page for the architecture of ONAP.
SDN-C
SDN-C is one of the components of ONAP – the SDN controller. It is basically OpenDaylight, with additional Directed Graph Execution capabilities. In terms of architecture, ONAP SDN-C is composed of multiple Docker containers.
One of these containers runs the Directed Graph Creator – a user-friendly web UI that can be used to create directed graphs. Another container runs the Admin Portal. The next one runs the relational database, which is the focal point of the SDN-C implementation and is used by every container. Lastly, the SDN-C container runs the controller itself.
This component is of particular interest to us, because it contains all the logic behind directed-graph execution. We have previously shown how lighty.io can integrate well with SDN-C and drastically improve performance.
According to the latest 5G use-case paper for ONAP, SDN-C has managed to implement “radio-related optimizations through the SDN-R sub-project and support for the A1 interface”.
CDS: Controller Design Studio
As the official documentation puts it:
CDS has both design-time & run-time activities. During design time, the designer can define what actions are required for a given service, along with anything comprising the action. The design produces a CBA Package. Its content is driven by a catalog of reusable data dictionaries and components, delivering a reusable and simplified self-service experience.
CDS enables users to adapt resources in a way, where no direct code-changes are needed. The Design Studio gives users, not only developers, the option to customize the system, to meet the customer’s demands. The two main components of CDS are the frontend (GUI) and backend (run-time). It is possible to run CDS in Kubernetes or an IDE of your choice.
One interesting use-case shows the creation of a WordPress CNF via CDS.
SO: Service Orchestration
The primary role of SO is the automation of the provisioning operations of end-to-end service instances. In favor of overall end-to-end service instantiation, processes, and maintenance, SO is accountable for the instantiation and setup of VNFs.
To accomplish its purpose, Service Orchestration performs well-defined processes – usually triggered by receiving service requests, created by other ONAP components, or by Order Lifecycle Management in the BSS layer.
The orchestration procedure is either manually developed or received from ONAP’s Service Design and Development (SDC) portion, where all service designs are created for consumption and exposed/distributed.
The latest achievement of the Service Orchestrator is the implementation of new workflows such as:
DMaaP: Data Movement as a Platform
The DMaaP component is a data movement service, which transports and processes data from a selected source to the desired target. It is capable of transferring data and messages between ONAP components, data filtering/compression/routing, as well as message routing and batch/event-based processing.
DCAE: Data Collection Analytics & Events
The Data Collection Analytics & Events component does exactly what its name says – it gathers performance, usage & configuration data from the managed environment. The component also guards events in a sense: if something significant occurs or an anomaly is detected, DCAE takes appropriate actions.
The component collects and stores data that is necessary for analysis while providing a framework for the development of needed analytics.
DCAE: Collectors and other microservices required to support the telemetry collection for 5G network optimization; this includes the O1 interface from O-RAN.
A&AI: Active & Available Inventory
The Active & Available Inventory functionality offers real-time views of the managed products and services and the relationships between them, giving real-time insights into the managed resources and their connections.
A&AI is a registry of resources that are active, available, and allocated. It establishes a multi-dimensional relationship between the programs and infrastructure under administration, and provides interfaces for dynamic network topology requests – both canned and ad-hoc network topology queries.
Recently AAI gained schema support for 5G service design and slicing models.
Is ONAP worth it?
Yes, it is. If you have come to this conclusion, you might feel that ONAP is the right fit for your needs. It is an enormous project, with around 20 components.
If you feel overwhelmed, don’t worry and leave it to the experts – contact PANTHEON.tech today for your ONAP integration needs!
[What Is] O-RAN | Open Radio Access Network
It is a long-term goal of several enterprises, including PANTHEON.tech, to embrace an open(-source) ecosystem for network development and connectivity.
An open approach to software development opens doors to all the talents around the globe, to contribute to projects that will shape the future of networking. One such project is the Open Radio Access Network or O-RAN for short.
Next In Line: O-RAN
Originally launched as OpenRAN, the project was started in 2017 by the Telecom Infra Project. The goal was to build a vendor-neutral, hardware & software-defined technology for 2-3-4G RAN solutions.
Then, the O-RAN Alliance was founded to increase community engagement, as well as to motivate operators to be included in this development. The alliance has made it a point, to create a standardization – meaning a description, of how this concept should function in reality.
O-RAN Architecture
O-RAN is part of the massive evolution from 4G networks into the 5G generation. In 5G, due to higher bandwidths, more antennas, and the use of multiple-input multiple-output (MIMO) technology, even more data needs to go back and forth.
We can observe the formation of two solutions: the high-level split (HLS) and the low-level split (LLS). With so much of the processing shifting to the edge, the high-level split is a two-box solution. The F1 interface lies between the DU+RU and links to the centralized device. Alternatively, further processing is shifted to the middle by LLS and the antenna is held at the edge.
Three separate units are deployed with O-RAN:
At the edge sits the O-RU. In the center, the O-DU sits and performs some of the processing. Both HLS and LLS are included in O-RAN. They standardize the interfaces. For CUs, DUs, or RUs, operators may use different vendors. With one working group concentrating on the F1 interface and another on the front-haul, the components are much more interoperable and the protocols more clearly defined.
What’s more, O-RAN selected SDN-R as the project’s SDN controller. PANTHEON.tech is part of the SDN-R community.
What is a RAN?
A radio access network implements a radio access technology, which enables user devices (anything able to receive the signal) to connect to the core network above the specific RAN.
A visual representation of core networks, radio access networks, and user devices.
The types of radio access networks include GSM, EDGE, and LTE standards, named GRAN, GERAN, E-UTRAN in that order.
The core network provides a path for information exchanging between subnetworks or different LANs. Imagine the core network as the backbone of an enterprise’s entire network.
The technology behind RANs is called RAT (radio access technology) and represents the principal technology behind radio-based communication. RATs include known network standards like GSM or LTE, or Bluetooth and WiFi.
Linux Foundation Networking Presents: O-RAN Software Community
In the first half of 2019, The Linux Foundation, in collaboration with the O-RAN Alliance, created the O-RAN Software Community, where members can contribute their knowledge & know-how to the O-RAN project.
Currently, the goal is to create a common O-RAN specification, that all RAN vendors would potentially adopt. This would mean a common interface, independent of the radio unit type.
This move certainly makes sense, since, at its core, O-RAN stands for openness – open-source, nonproprietary radio access networks. As the technical charter of the project puts it:
The further goal of creating a software community centered around this project is to include projects such as OPNFV, ONAP, and others, to create a complete package for future, open networking.
PANTHEON.tech 2020: A Look Back
Join us in reminiscing and reminding you what PANTHEON.tech has managed to create, participate in, or inform about in 2020.
Project: CDNF.io
In the first quarter of the year, we have made our latest project, CDNF.io, accessible to the public. Cloud-native functions were long overdue in our portfolio and let me tell you – there are lots of them, ready to be deployed anytime.
We have prepared a series of videos, centered around our CNFs, which you can conveniently view here:
Perhaps you like to read more than hear someone explain things to you? We wrote a few posts on:
Integration Scenarios
Apart from our in-house solutions, we have worked on demonstrating several scenarios with common technologies behind them: ServiceNow® & Cisco’s Network Services Orchestrator.
In terms of ServiceNow®, our posts centered around:
Cisco’s NSO got a nearly all-inclusive treatment, thanks to Samuel Kontriš, with a defacto NSO Guide on:
This includes two videos about the Network Service Orchestrator:
Open-Source Software Releases
We have made several projects available on our GitHub, which we regularly maintain and update. What stole the spotlight was the lighty.io NETCONF Device Simulator & Monitoring Tool, which you can download here.
PANTHEON.tech has also been active in adding new features to existing open-source projects, such as:
lighty.io, our open-source passion project, celebrated its 13th release, which also included a separate post highlighting improvements and changes.
Thoughts, Opinions & Information
Since we did not want to exclude people who might not be that knowledgeable about what we do, we have created a few series on technologies and concepts PANTHEON.tech is engaged in, such as:
We try to listen closely to what Robert Varga, the top-single contributor to the OpenDaylight source-code, has to say about OpenDaylight. That allowed us to publish opinion/informative pieces like:
Step into a new decade
We would like to thank everybody who does their part in working on and contributing to PANTHEON.tech projects, as well as open-source projects in general. 2020 was challenging, to say the least, but pulling together makes us stronger – together.
Happy holidays and a happy New Year to our colleagues, partners, and readers – from PANTHEON.tech.
[Thoughts] On Karaf & Its Future
These thoughts were originally sent to the public karaf-dev mailing list, where Robert Varga wrote a compelling opinion on what the future holds for Karaf and where it is currently headed. The text below was slightly edited from the original.
With my various OpenDaylight hats on, let me summarize our project-wide view, with a history going back to the project that was officially announced (early 2013).
From the get-go, our architectural requirement for OpenDaylight was OSGi compatibility. This means every single production (not maven-plugin obviously) artifact has to be a proper bundle.
This highly-technical and implementation-specific requirement was set down because of two things:
This serves as an architectural requirement that translates to an unbreakable design requirement of how the code must be structured.
We started out with a home-brew OSGi container. We quickly replaced it with Karaf 3.0.x (6?), massively enjoying it being properly integrated, with shell, management, and all that. Also, feature:install.
At the end of the day, though, OpenDaylight is a toolkit of a bunch of components that you throw together and they work.
Our initial thinking was far removed from the current world of containers as far as operations go. The deployment was envisioned more like an NMS with a dedicated admin team (to paint a picture), providing a flexible platform.
We now provide out-of-the-box use-case wiring, using both dynamic Karaf and Guice (at least for one use case). We have an external project which shows the same can be done with pure Java, Spring Boot, and Quarkus.
We now also require Java 11, hence we have JPMS – and it can fulfill our architectural requirement just as well as OSGi. Thanks to OSGi, we have zero split packages.
We do not expect to ditch Karaf anytime soon, but rather leverage static-framework for a light-weight OSGi environment, as that is clearly the best option for us short-to-medium term, and definitely something we will continue supporting for the foreseeable future.
The shift to nimble single-purpose wirings is not going away and hence we will be expanding there anyway.
To achieve that, we will not be looking for a framework-of-frameworks, we will do that through native integration ourselves.
If Karaf can do the same, i.e. have its general-purpose pieces available as components, easily thrown together with @Singletons or @Components, with multiple frameworks, as well as nicely jlinkable – now that would be something.
[OpenDaylight] Binding-Independent & Binding-Aware
From the get-go, the MD-SAL architecture was split into two distinct worlds: Binding-Independent (BI, DOM) and Binding-Aware (BA, Binding).
This split comes from two competing requirements:
Type-safety is supported by interfaces and classes generated from YANG models. It generally feels like any code, where you deal with DTOs.
Infrastructure services are supported by an object model similar to XML DOM, where you deal with hierarchical “document” trees. All you have to go by are QNames.
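As a rough illustration of the split – a sketch reusing the example Root container from the Binding Query article above as a stand-in model, with imports of generated classes omitted – the same top-level node is addressed with a typed InstanceIdentifier in the Binding world and with a QName-based YangInstanceIdentifier in the DOM world:

import org.opendaylight.yangtools.yang.binding.InstanceIdentifier;
import org.opendaylight.yangtools.yang.common.QName;
import org.opendaylight.yangtools.yang.data.api.YangInstanceIdentifier;

final class IdentifierExample {
    // Binding-Aware: compile-time safe path built from generated classes.
    static InstanceIdentifier<Root> bindingPath() {
        return InstanceIdentifier.create(Root.class);
    }

    // Binding-Independent: the same node addressed purely by QNames.
    static YangInstanceIdentifier domPath() {
        return YangInstanceIdentifier.of(
            QName.create("urn:yang.query", "2021-01-20", "root"));
    }
}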
For obvious reasons, most developers interacting with OpenDaylight have never touched the Binding Independent world, even though it underpins pretty much every single feature available on the platform.
The old OpenDaylight SAL architecture looked like this:
A very dated picture of how the system is organized.
For example, RPCs invoked by one world, must be able to be serviced by the other. Since RPCs are the equivalent of a method call, this process needs to be as fast as possible, too.
That leads to a design, where each world has its own broker and the two brokers are connected. Invocations within the world would be handled by that world’s broker, foregoing any translation.
The Binding-Aware layer sits on top of the Binding Independent one. But it is not a one-to-one mapping.
This comes from the fact, that the Binding-Independent layer is centered around what makes sense in YANG, whereas the Binding-Aware layer is centered around what makes sense in Java, including various trade-offs and restrictions coming from them.
Remote Procedure Calls
For RPCs, this meant that there were two independent routing tables, with repeated exports being done from each of them.
The idea of an RPC router was generalized in the (now long-forgotten) RpcRouter interface. Within a single node, the Binding & DOM routers would be interconnected.
For clustered scenarios, a connector would be used to connect the DOM routers across all nodes. So an inter-node Binding-Aware RPC request from node A to node B would go through:
BA-A → BI-A → Connector-A → Connector-B → BI-B → BA-B (and back again)
Both the BI and connector speak the same language – hence they can communicate without data translation.
The design was simple and effective, but it has not survived the test of time – most notably, the transition to dynamic loading of models in the Karaf container.
BA/BI Debacle: Solution
Model loading impacts data translation services needed to cross the BA/BI barrier, leading to situations where an RPC implementation was available in the BA world, but could not yet be exported to the BI world. This, in turn, leads to RPC routing loops, and in the case of data-store services – missing data & deadlocks.
This means that all infrastructure services always go through BI; the Binding RPC Broker was gradually taken behind the barn, and there was a muffled sound in 2015.
Ultimate OpenDaylight Guide | Part 1: Documentation & Testing
by Samuel Kontriš, Robert Varga, Filip Čúzy | Leave us your feedback on this post!
Welcome to Part 1 of the PANTHEON.tech Ultimate Guide to OpenDaylight! We will start off lightly with some tips & tricks regarding the tricky documentation, as well as some testing & building tips to speed up development!
Documentation
1. Website, Docs & Wiki
The differences between these three sources can be staggering. But no worries, we have got you covered!
2. Dependencies between projects & distributions
3. Contributing to OpenDaylight
4. Useful Mailing Lists
There are tens (up to hundreds) of mailing lists you can join, so you are up-to-date with all the important information – even dev talks, thoughts, and discussions!
Testing & Building
1. Maven “Quick” Profile
There’s a “Quick” maven profile in most OpenDaylight projects. This profile skips a lot of tests and checks, which are unnecessary to run with each build.
This way, the build is much faster:
2. GitHub x OpenDaylight
The OpenDaylight code is mirrored on GitHub! Since more people are familiar with the GitHub environment, rather than Gerrit, make sure to check out the official GitHub repo of ODL!
3. Gerrit
Working with Gerrit can be challenging and new for newcomers. Here is a great guide on the differences between the two.
Road to Cloud-Native Network Functions
We have come a long way to enjoy all the benefits that cloud-native network functions bring us – lowered costs, agility, scalability & resilience. This post will break down the road to CNFs – from PNF to VNF, to CNF.
What are PNFs (physical network functions)?
Back in the ’00s, network functions were utilized in the form of physical, hardware boxes, where each box served the purpose of a specific network function. Imagine routers, firewalls, load balancers, or switches as PNFs, utilized in data centers for decades before another technology replaced them. PNF boxes were difficult to operate, install, and manage.
Just as it was unimaginable to have a personal computer 20 years ago, we were unable to imagine virtualized network functions. Thanks to cheaper, off-the-shelf hardware and expansion of cloud services, enterprises were able to afford to move some network parts from PNFs to generic, commodity hardware.
What are VNFs (virtual network functions)?
The approach of virtualization enabled us to share hardware resources between multiple tenants while keeping the isolation of environments in place. The next logical step was the move from the physical, to the virtual world.
A VNF is a virtualized network function, that runs on top of a hardware networking infrastructure. Individual functions of a network may be implemented or combined, in order to create a complete package of a networking-communication service. A virtual network function can be part of an SDN architecture or used as a singular entity within a network.
Today's standardization of VNFs would not be possible without ETSI's Open Source MANO architecture, or the TOSCA standard, which can serve for lifecycle management. These are, for example, used in the open-source platform ONAP (Open Network Automation Platform).
What are CNFs (cloud-native network functions)?
Cloud-native network functions are software implementations of functions traditionally performed by PNFs – and they need to conform to cloud-native principles. They can be packaged within a container image, are always ready to be deployed & orchestrated, and can be chained together to perform a series of complex network functions.
Why should I use CNFs?
Microservices and the overall benefits of adapting cloud-native principles, come with several benefits, which show a natural evolution of network functions in the 2020s. Imagine the benefits of:
Our CNF project delivers on all of these promises. Get up-to-date with your network functions and contact us today, to get a quote.
[Release] lighty.io 13
With enterprises already deploying lighty.io in their networks, what are you waiting for? Check out the official lighty.io website, as well as references.
13 is an unlucky number in some cultures – but not in the case of the 13th release of lighty.io!
What’s new in lighty.io 13?
PANTHEON.tech has released lighty.io 13, which keeps up-to-date with OpenDaylight's Aluminium release. A lot of major changes happened to lighty.io itself, which we will break down for you here:
Our team managed to fix start-scripts for examples in the repository, as well as bump the Maven Compiler Plugin & Maven JAR Plugin, for compiling and building JARs respectively. Fixes include Coverity issues & refactoring code, in order to comply with a source-quality profile (the source-quality profile was also enabled in this release). Furthermore, we have fixed the NETCONF delete-config preconditions, so they work as intended in RFC 6241.
As for improvements, we have reworked disabled tests and managed to improve AAA (Authentication, Authorization, Accounting) tests. Checkstyle was updated to the 8.34 version.
Since we are managing compatibility with OpenDaylight Aluminium, it is worth noting the several accomplishments of the 13th release of OpenDaylight as well.
The largest new features are the support of incremental data recovery & support of LZ4 compression. LZ4 offers lossless compression with speeds up to 500 MB/s (per core), which is quite impressive and can be utilized within OpenDaylight as well!
Incremental data recovery allows for datastore journal recovery and increased compression of datastore snapshots – which is where LZ4 support comes to the rescue!
Another major feature, if you remember, is the PANTHEON.tech initiative towards OpenAPI 3.0 support in OpenDaylight. Formerly known as Swagger, OpenAPI helps visualize API resources while giving the user the possibility to interact with them.
What is lighty.io?
Remember, lighty.io is a library that enables us to run OpenDaylight's core features without Karaf, on any available Java platform. Contact us today for a demo or custom integration!
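As a rough illustration of what that embedding looks like, here is a minimal sketch based on the lighty-core examples; the class and method names used here (LightyControllerBuilder, ControllerConfigUtils) are assumptions that may differ between lighty.io releases:

import io.lighty.core.controller.api.LightyController;
import io.lighty.core.controller.impl.LightyControllerBuilder;
import io.lighty.core.controller.impl.util.ControllerConfigUtils;

public final class EmbeddedControllerExample {
    public static void main(String[] args) throws Exception {
        // Build an embedded OpenDaylight controller with a default single-node configuration.
        LightyController controller = new LightyControllerBuilder()
            .from(ControllerConfigUtils.getDefaultSingleNodeConfiguration())
            .build();

        // start() returns a future; block until MD-SAL services are available.
        controller.start().get();

        // ... wire applications against controller.getServices() here ...

        controller.shutdown().get();
    }
}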
[NSO Guide] Cisco NSO® with lighty.io
by Samuel Kontriš | Leave us your feedback on this post!
This is a continuation of our guide on the Cisco Network Service Orchestrator. In our previous article, we have shown you how to install and run Cisco NSO with three virtual devices. We believe you had time to test it out and get to know this great tool.
Now, we will show you how to use the Cisco NSO with our SDN Framework – lighty.io. You can read more about lighty.io here, and even download lighty-core from our GitHub here.
Prerequisites
This tutorial was tested on Ubuntu 18.04 LTS. In this tutorial we are going to use:
Get and start the lighty.io application
To get the lighty.io 11.2.0 release, clone its GitHub repository and build it with Maven.
After the build, locate the lighty-community-restconf-netconf-app artifact and unzip its distribution from the target directory:
Now we can start the lighty.io application by running its .jar file:
After a few seconds, we should see a message in the logs that everything was started successfully:
The lighty.io application should now be up and running. The default RESTCONF port is 8888.
Connect Cisco NSO to the lighty.io application
To connect Cisco NSO to the lighty.io application via the NETCONF protocol, we must add it as a node to the configuration datastore using RESTCONF. To do this, call a PUT request on the URI:
with the payload:
The parameter nodeId specifies the name under which we will address Cisco NSO in the lighty.io application. The parameters host and port specify where the Cisco NSO instance is running. The default username and password for Cisco NSO are admin/admin. In case you would like to change the node-id, be sure to change it in the URI too.
To check if Cisco NSO was connected successfully, call a GET request on the URI:
The output should look similar to this:
If Cisco NSO was connected successfully, the value of the connection-status should be connected.
Activate Cisco NSO service using lighty.io
Activation of the Cisco NSO service is similar to connecting Cisco NSO to lighty.io. We are going to activate the ACL service we created in the previous tutorial by calling a PUT request on the URI:
with payload:
This payload is modeled in a YANG model we created together with the ACL service in our previous tutorial. Feel free to change the values of the ACL parameters (first, check what types they are in the ACL service YANG model) and if you are changing ACL_Name, don’t forget to change it in the URI too.
Unfortunately, at the time of writing this tutorial, there is a bug in OpenDaylight NETCONF (NETCONF-568) with parsing the output from this call. It prevents lighty.io from sending a response to the RESTCONF request we sent, so we need to manually stop waiting for this response in Postman (or another REST client you are using).
Now, our service should be activated! To check activated services in Cisco NSO, call a GET request on the URI:
In response, you should see the service we just activated. It should look similar to this:
To check if the device was configured, log into Cisco NSO CLI and execute a show command:
You should see an output, similar to this:
A Postman collection containing all REST requests we executed in this tutorial can be found here or downloaded directly from here.
Today, we showed you how to connect Cisco NSO with lighty.io. Up next, our tutorial will show how to connect ONAP SDN-C with Cisco NSO.
Leave us your feedback on this post!
10/03/2020 Update: Added the video demonstration, enjoy!
[Free Tool] NETCONF Device Simulator & Monitoring
/in Blog /by PANTHEON.techby Martin Bugáň | Leave us your feedback on this post!
You can simulate hundreds or thousands of NETCONF devices within your development or CI/CD. We are, of course, talking about our lighty NETCONF Simulator, which is now available on GitHub! This tool is free & open-source, and based on OpenDaylight's state-of-the-art NETCONF implementation.
We have recently finished the implementation of the get-schema RPC from NETCONF Monitoring, which is based on IETF RFC 6022 and brings users the previously missing monitoring capability for NETCONF devices.
Download NETCONF Device Simulator (from GitHub)
Let us know, what NETCONF device you would like to simulate!
What is get-schema?
Part of the NETCONF Monitoring feature set is the get-schema RPC, which allows the controller to download schemas from the NETCONF device, so they don't have to be added manually.
In the points, one after another, the process of device connection looks like this (when controller and device are started):
1. Connection between NETCONF device and controller is established
2. When established and hello message capabilities exchanged, the controller requests a list of available models from the NETCONF device
3. When NETCONF device supports this feature, it sends its models to the controller
4. The controller then processes those models and builds the schema context
In a more technical perspective, here is the process of connecting devices:
1. SSH connection from the controller to the NETCONF device is established
2. NETCONF device sends a hello message with its capabilities
3. Controller sends hello message with its capabilities
4. Controller requests (gets) a list of available schemas (models) from the NETCONF device datastore (ietf-netconf-monitoring:netconf-state/schemas)
5. NETCONF device sends a list of available schemas to the controller
6. The controller goes through this list, downloads each model via the get-schema RPC, and stores them in the cache/schema directory
7. Schema context is built in the controller from models in the cache/schema directory
How does the feature work in an enabled device?
In the device, there is a monitoring flag that can be set with the withNetconfMonitoringEnabled(boolean) builder method. The feature is enabled by default. If the flag is enabled, then when the device is built and started, the device's operational datastore is populated with schemas from the device's schema context.
Our device uses the NetconfDevice implementation, which is built with the NetconfDeviceBuilder pattern. The feature is enabled by default and can be disabled by calling withNetconfMonitoringEnabled(false) on the NetconfDeviceBuilder, which sets the flag controlling whether netconf-monitoring is enabled.
When build() is called on the device builder and that flag is set, the netconf-monitoring model is added to the device and a NetconfDeviceImpl instance is created with the monitoring flag from the builder. Then, when the device is started, prepareSchemasForNetconfMonitoring is called if monitoring is enabled and the datastore is populated with schemas, which are stored under the netconf-state/schemas path.
This is done via a write transaction, where each module and submodule in the device's schema context is converted to a schema and written into a map under its schema key (if the map doesn't already contain a schema with that key). When the device is then connected to the controller, the get-schema RPC asks for each of these schemas under the netconf-state/schemas path and downloads them into the cache/schema directory.
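As a rough sketch of how this looks in code, the snippet below builds a simulated device with monitoring left enabled. Only withNetconfMonitoringEnabled() is taken from the description above; the other setter names are illustrative assumptions, so check the simulator's NetconfDeviceBuilder for the exact API.
// Minimal sketch: a simulated NETCONF device with netconf-monitoring enabled (the default).
// Setter names other than withNetconfMonitoringEnabled() are assumptions for illustration.
NetconfDevice device = new NetconfDeviceBuilder()
        .setCredentials("admin", "admin")      // assumed SSH credentials setter
        .setBindingPort(17830)                 // assumed NETCONF/SSH port setter
        .withModels(deviceYangModels)          // YANG models forming the device's schema context
        .withNetconfMonitoringEnabled(true)    // keep netconf-state/schemas population on
        .build();
device.start();                                // prepareSchemasForNetconfMonitoring runs here when enabled
With monitoring enabled, a connecting controller can then list netconf-state/schemas and pull each model via get-schema, as shown in the examples below.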
What is the purpose of the get-schema?
It helps automate the device connection process. When a new device is connected, there is no need to manually find all the models that the device advertises in its capabilities and add them to the controller; the controller downloads them from the device instead.
[Example 1] NETCONF Monitoring schemas on our Toaster simulator device
To get a list of all schemas, send a get request with the netconf-state/schemas path specified in the filter.
Get request for netconf-state/schemas:
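A minimal sketch of such a request, using a standard NETCONF get with a subtree filter on the ietf-netconf-monitoring state (the message-id and framing depend on your client):
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get>
    <filter type="subtree">
      <netconf-state xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring">
        <schemas/>
      </netconf-state>
    </filter>
  </get>
</rpc>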
[Example 2] netconf-state/schemas path in datastore, after get request:
This is an example of a reply with schemas stored in the data store (albeit a little shortened). Reply for a get netconf-state/schemas request:
[Example 3] get-schema RPC
To get a particular schema with its content in YANG format, the following RPC is sent – an example of getting toaster schema, with revision version 2009-11-20. XML RPC request:
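A sketch of that RPC, following the get-schema definition from RFC 6022 (the message-id is arbitrary):
<rpc message-id="102" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-schema xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring">
    <identifier>toaster</identifier>
    <version>2009-11-20</version>
    <format>yang</format>
  </get-schema>
</rpc>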
In the reply, there is the YANG module of the requested toaster schema (again, shortened). XML get-schema RPC reply:
[What Is] Declarative vs. Imperative Approach
/in Blog, CDNF.io /by PANTHEON.tech
Declarative Approach
Users will mainly use the declarative approach when describing the desired state of their services, for example: “I want 3 instances of this service running at the same time”.
In the declarative approach, a YAML file containing the desired configuration is read and applied as the declarative statement. A controller then knows about the YAML file and applies it where needed. Afterwards, the K8s scheduler starts the services wherever it has the capacity to do so.
Kubernetes, or K8s for short, lets you choose between the two approaches. When using the imperative approach, you explain to Kubernetes in detail how to deploy something. The imperative way uses commands such as create, run, get & delete – basically any verb-based command.
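For illustration, the declarative intent “3 instances of this service” can be expressed as the manifest below and applied with kubectl apply -f deployment.yaml; the name and image are placeholders:
# deployment.yaml: declares the desired state of 3 replicas of this service.
# A rough imperative equivalent would be:
#   kubectl create deployment demo-service --image=nginx
#   kubectl scale deployment demo-service --replicas=3
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-service            # placeholder name
spec:
  replicas: 3                   # the declared intent
  selector:
    matchLabels:
      app: demo-service
  template:
    metadata:
      labels:
        app: demo-service
    spec:
      containers:
        - name: demo-service
          image: nginx:1.21     # placeholder image
kubectl apply can be re-run at any time; Kubernetes keeps reconciling the cluster towards the three declared replicas.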
Will I ever manage imperatively?
Yes, you will. Even with declarative management, there is always an operator that translates the intent into a sequence of orders and operations it will carry out. There may also be several operators that cooperate or split the responsibility for parts of the system.
Although declarative management is recommended in production environments, imperative management can serve as a faster introduction to managing your deployments, with more control over each step you would like to introduce.
Each approach has its pros and cons, and the choice ultimately depends on your deployment and how you want to manage it.
While software-defined networking aims for automation, once your network is fully automated, enterprises should consider IBN (Intent-Based Networking) as the next big step.
Intent-Based Networking (IBN)
Intent-Based Networking is an idea introduced by Cisco, which makes use of artificial intelligence, as well as machine learning, to automate various administrative tasks in a network. It means telling the network, in a declarative way, what you want to achieve, relieving you of the burden of describing exactly how the network should do it.
For example, we can configure our CNFs in a declarative way, where we state the intent – how we want the CNF to function – but we do not care how that configuration is applied to, for example, VPP.
For this purpose, the VPP-Agent sends the commands in the correct sequence (with additional help from the KVScheduler), so that the resulting configuration comes as close as possible to the initial intent.
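As a small sketch of such an intent, a declarative interface configuration handed to the VPP-Agent could look roughly like this; the field names follow the public vpp-agent interface model, while the interface name and address are placeholders:
# Intent: "this CNF should have a loopback interface with this address".
# How and in what order the corresponding VPP operations are executed
# is left to the VPP-Agent and its KVScheduler.
name: loop1                   # placeholder logical interface name
type: SOFTWARE_LOOPBACK
enabled: true
ip_addresses:
  - 192.168.10.1/24           # placeholder address
mtu: 1500
Whether this is delivered through a CRD, etcd, or gRPC is an implementation detail; the point is that only the desired state is declared.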
[What Is] SDN & NFV
/in Blog, SDN /by PANTHEON.tech
by Filip Čúzy | Leave us your feedback on this post!
For newcomers to our blog – welcome to a series of explanations from the world of PANTHEON.tech software development. Today, we will be looking at what software-defined networking is – what it stands for, its past, present, future – and more.
What is SDN – Software Defined Networking?
Networks can scale exponentially and require around-the-clock troubleshooting in case something goes wrong – which it always can. Software-Defined Networking is a concept of decluttering enterprises of physical network devices and replacing them with software. The goal is to improve traditional network management and ease the entire process by removing pricey, easily obsolete hardware and replacing it with virtualized counterparts.
The core component is the control plane, which encompasses one (or several) controllers, like OpenDaylight. This makes centralization of the network a breeze and provides an overview of the network in its entirety. The main benefits of utilizing SDN are:
Most network admins can relate to the feeling when you have to manage multiple network devices separately, with different ones requiring proprietary software and making your network a decentralized mess. Utilizing SDN enables you to make use of a network controller and centralize the management, security, and other aspects of your network in one place.
Network topologies enable full control of the network flow. Bandwidth can be managed to go where it needs, but it does not end there – network resources in general can be secured, managed, and optimized in order to accommodate current needs. Scheduling and programmability are what differentiate software-defined networking from a traditional network approach.
Open standards mean that you do not have to rely on one hardware provider with vendor-specific protocols and devices. Take a project such as OpenDaylight, which has been around since 2013, with contributions from major companies like Orange and Red Hat, and with leading contributions from PANTHEON.tech. Being an open-source project, you can rely on a community of expert technicians perfecting the solution with each commit or pull request.
You are also free to modify OpenDaylight yourself, but it gets easier with features from lighty.io, commercial support, or training.
The idea of a software-defined network reportedly started at Stanford University, where researchers explored virtualizing the network by making the control plane and the data plane two separate entities, independent of each other.
What is NFV – Network Function Virtualization?
On the other hand, NFV, or Network Function Virtualization, aims to replace hardware that serves a specific purpose with virtual network functions (for example, Virtual Customer Premise Equipment – vCPE). Imagine getting rid of most proprietary hardware and the difficulty of upgrading each part, while making these functions more accessible, scalable, and centralized.
SDN & NFV therefore go hand-in-hand in most of the aspects covered, mainly in the goal of virtualizing most network equipment and functions.
As for the future, PANTHEON.tech’s mission is to bring enterprises closer to a complete SDN & NFV coverage, with training, support, and custom network software that will make the transition easier. Contact us today – the future of networking awaits.
You can contact us here.
Explore our PANTHEON.tech GitHub.
Watch our YouTube Channel.
[What Is] XDP/AF_XDP and its potential
/in News /by PANTHEON.tech
What is XDP?
XDP (eXpress Data Path) is an eBPF (extended Berkeley Packet Filter) implementation for early packet interception. Received packets are not sent to the kernel IP stack directly, but can be sent to userspace for processing. Users may decide what to do with the packet (drop it, send it back, modify it, pass it to the kernel). A detailed description can be found here.
XDP is designed as an alternative to DPDK. It is slower and, at the moment, less mature than DPDK. However, it offers features and mechanisms already implemented in the kernel (DPDK users have to implement everything in userspace).
At the moment, XDP is under heavy development and features may change with each kernel version. Hence the first requirement: run the latest kernel version. Changes between kernel versions may not be compatible.
IO Visor description of XDP packet processing
XDP Attachment
The XDP program can be attached to an interface and can process the RX queue of that interface (incoming packets). It is not possible to intercept the TX queue (outgoing packets), but kernel developers are continuously extending the XDP feature-set. TX queue is one of the improvements with high interest from the community.
An XDP program can be loaded in several modes: generic (SKB) mode, executed in the kernel network stack and available with any driver; native (driver) mode, executed by the NIC driver; and offloaded mode, executed directly on supported NICs.
XDP programs run in a restricted, verified environment. This implies multiple constraints, which protect the kernel from errors in the XDP code. There is a limit on how many instructions one XDP program can contain. However, there is a workaround: a call table that references multiple XDP programs which can call each other.
The XDP verifier checks the range of used variables. Sometimes this is helpful: it does not allow you to access a packet offset beyond what has already been validated against the packet size.
Sometimes it is annoying, because when the packet pointer is passed to a subroutine, access may fail as out-of-bounds even if the original packet was already checked for that size.
BPF Compilation
Errors reported by the BPF compiler are quite tricky, because the program is compiled into bytecode. Errors reported against that bytecode usually do not make it obvious which part of the C program they relate to.
The error message is sometimes hidden at the beginning of the dump, sometimes at the end, and the instruction dump itself may be many pages long. Sometimes, the only way to identify the issue is to comment out parts of the code, to figure out which line introduced it.
XDP can’t (as of November 2019):
One of the requirements was to forward traffic between the host and namespaces, containers, or VMs. Namespaces do their isolation job properly, so XDP can access either host interfaces or namespaced interfaces, but I wasn't able to use it as a tunnel to pass traffic between them. The workaround is to use a veth pair to connect the host with a namespace and attach 2 XDP handlers (one on each side to process traffic). I'm not sure whether they can share tables to pass data. However, using the veth pair reduces the performance benefit of using XDP.
Another option is to create an AF_XDP socket as a sink for packets received on the physical interface and processed by the attached XDP program. But there are 2 major limitations:
XDP can (as of November 2019):
by Marek Závodský, PANTHEON.tech
AF_XDP
AF_XDP is a new type of socket, introduced in Linux kernel 4.18, which does not completely bypass the kernel, but utilizes its functionality and enables you to create something akin to DPDK or AF_Packet.
DPDK (Data Plane Development Kit) is a library, developed in 2010 by Intel and now under the Linux Foundation Projects umbrella, which accelerates packet-processing workloads on a broad range of CPU architectures.
AF_Packet is a socket in the Linux kernel which allows applications to send & receive raw packets through the kernel. It creates a shared mmap ring buffer between the kernel and userspace, which reduces the number of system calls between the two.
As opposed to AF_Packet, AF_XDP moves frames directly to the userspace, without the need to go through the whole kernel network stack. They arrive in the shortest possible time. AF_XDP does not bypass the kernel but creates an in-kernel fast path.
It also offers advantages like zero-copy (between kernel space & userspace) or offloading of the XDP bytecode into the NIC. AF_XDP can run in interrupt mode as well as polling mode, while DPDK poll-mode drivers always poll, which means they use 100% of the available CPU processing power.
Future potential
One future potential of offloaded XDP (one of the ways XDP bytecode can be executed) is that such an offloaded program runs directly in the NIC and therefore does not use any CPU power, as noted at FOSDEM 2018:
Decentralization
Furthermore, all signs point to a theoretical, decentralized architecture, with emphasis on community efforts in offloading workloads to NICs, for example in a decentralized NIC switching architecture. This type of offloading would decrease the cost of various expensive tasks, such as the CPU having to process the incoming packets.
We are excited about the future of AF_XDP and looking forward to the mentioned possibilities!
For a more detailed description, you can download a presentation with details surrounding AF_XDP & DPDK and another from FOSDEM 2019.
Update 08/15/2020: We have upgraded this page, its content and information for you to enjoy!
You can contact us at https://pantheon.tech/
Explore our Pantheon GitHub.
Watch our YouTube Channel.
[Integration] Network Service Mesh & Cloud-Native Functions
/in CDNF.io, News /by PANTHEON.tech
by Milan Lenčo & Pavel Kotúček | Leave us your feedback on this post!
As part of a webinar, in cooperation with the Linux Foundation Networking, we have created two repositories with examples from our demonstration “Building CNFs with FD.io VPP and Network Service Mesh + VPP Traceability in Cloud-Native Deployments“:
Check out our full-webinar, in cooperation with the Linux Foundation Networking on YouTube:
What is Network Service Mesh (NSM)?
Recently, Network Service Mesh (NSM) has been drawing lots of attention in the area of network function virtualization (NFV). Inspired by Istio, Network Service Mesh maps the concept of a Service Mesh to L2/L3 payloads. It runs on top of (any) CNI and builds additional connections between Kubernetes Pods in the run-time, based on the Network Service definition deployed via CRD.
Unlike Contiv-VPP, for example, NSM is mostly controlled from within applications through the provided SDK. This approach has its pros and cons.
Pros: Gives programmers more control over the interactions between their applications and NSM
Cons: Requires a deeper understanding of the framework to get things right
Another difference is that NSM intentionally offers only minimalistic point-to-point connections between pods (or clients and endpoints, in its terminology). Everything that can be implemented via CNFs is left out of the framework. Even things as basic as connecting a service chain to external physical interfaces, or attaching multiple services to a common L2/L3 network, are not supported and are instead left to the users (programmers) of NSM to implement.
Integration of NSM with Ligato
At PANTHEON.tech, we see the potential of NSM and decided to tackle the main drawbacks of the framework. For example, we have developed a new plugin for Ligato-based control-plane agents that allows seamless integration of CNFs with NSM.
Instead of having to use the low-level and imperative NSM SDK, users (not necessarily software developers) can use the standard northbound (NB) protobuf API to define the connections between their applications and other network services in a declarative form. The plugin then uses the NSM SDK behind the scenes to open the connections and creates the corresponding interfaces that the CNF is then ready to use.
The CNF components, therefore, do not have to care about how the interfaces were created, whether it was by Contiv, via NSM SDK, or in some other way, and can simply use logical interface names for reference. This approach allows us to decouple the implementation of the network function provided by a CNF from the service networking/chaining that surrounds it.
The plugin for Ligato-NSM integration is shipped both separately, ready for import into existing Ligato-based agents, and also as a part of our NSM-Agent-VPP and NSM-Agent-Linux. The former extends the vanilla Ligato VPP-Agent with the NSM support while the latter also adds NSM support but omits all the VPP-related plugins when only Linux networking needs to be managed.
Furthermore, since most of the common network features are already provided by Ligato VPP-Agent, it is often unnecessary to do any additional programming work whatsoever to develop a new CNF. With the help of the Ligato framework and tools developed at Pantheon, achieving the desired network function is often a matter of defining network configuration in a declarative way inside one or more YAML files deployed as Kubernetes CRD instances. For examples of Ligato-based CNF deployments with NSM networking, please refer to our repository with CNF examples.
Finally, included in the repository is also a controller for K8s CRD defined to allow deploying network configuration for Ligato-based CNFs like any other Kubernetes resource defined inside YAML-formatted files. Usage examples can also be found in the repository with CNF examples.
CNF Chaining using Ligato & NSM (example from LFN Webinar)
In this example, we demonstrate the capabilities of the NSM agent – a control-plane for Cloud-native Network Functions deployed in a Kubernetes cluster. The NSM agent seamlessly integrates the Ligato framework for Linux and VPP network configuration management, together with Network Service Mesh (NSM) for separating the data plane from the control plane connectivity, between containers and external endpoints.
In the presented use case, we simulate a scenario in which a client from a local network needs to access a web server with a public IP address. The necessary Network Address Translation (NAT) is performed between the client and the web server by the high-performance VPP NAT plugin, deployed as a true CNF (Cloud-Native Network Function) inside a container. For simplicity, the client is represented by a K8s Pod running an image with cURL installed (as opposed to being an external endpoint, as it would be in a real-world scenario). For the server side, the minimalistic TestHTTPServer implemented in VPP is utilized.
In all three Pods, an instance of the NSM Agent runs to communicate with the NSM manager via the NSM SDK and negotiate additional network connections that connect the pods into a chain:
Client <-> NAT-CNF <-> web-server (see diagrams below)
The agents then use the features of the Ligato framework to further configure Linux and VPP networking around the additional interfaces provided by NSM (e.g. routes, NAT).
The configuration to apply is described declaratively and submitted to NSM agents in a Kubernetes-native way through our own Custom Resource called CNFConfiguration. The controller for this CRD (installed by cnf-crd.yaml) simply reflects the content of applied CRD instances into an ETCD datastore, from which it is read by the NSM agents. For example, the configuration for the NSM agent managing the central NAT CNF can be found in cnf-nat44.yaml.
Networking Diagram
Routing Diagram
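To give a feel for the workflow, a purely illustrative CNFConfiguration instance is sketched below. The apiVersion and field names are hypothetical placeholders; the authoritative examples are cnf-crd.yaml and cnf-nat44.yaml in the CNF examples repository.
# Hypothetical sketch only; see cnf-nat44.yaml in the CNF examples repository
# for the real schema and values.
apiVersion: cnf.pantheon.tech/v1        # hypothetical API group/version
kind: CNFConfiguration
metadata:
  name: cnf-nat44                       # placeholder name
spec:
  microservice: nsm-agent-vpp           # hypothetical field: which NSM agent the config targets
  configData: |                         # hypothetical field: Ligato NB configuration reflected into ETCD
    # NAT44 and interface configuration for the central NAT CNF would go here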
Run helm init to install Tiller and to set up a local configuration for Helm.
Cloud-Native Firewall Orchestration w/ ServiceNow®
/in Blog /by PANTHEON.tech
by Slavomír Mazúr | Leave us your feedback on this post!
PANTHEON.tech s.r.o., its products or services, are not affiliated with ServiceNow®, neither is this post an advertisement of ServiceNow® or its products.
ServiceNow® is a cloud-based platform, that enables enterprise organizations to automate business processes across the enterprise. We have previously shown, how to use ServiceNow® & OpenDaylight to automate your network.
We will demonstrate the possibility of using ServiceNow® to interact with a firewall device. More precisely, we will manage Access Control Lists (ACLs), which work on a set of rules that define how to forward or block packets in network traffic.
User Administration
The Now® platform offers, among other things, user administration, which allows us to work with users, assign them to groups, and assign both users and groups to roles based on their privileges. In this demonstration, two different groups of users with corresponding roles are used.
The first group of users are requestors, who may represent basic end-users, employees, or customers of an enterprise organization. Such a user can create new rule requests by submitting a simple form. Without any knowledge of networking, the user can briefly describe the request in the description field.
This request will then be handled by the network admin. At the same time, users can monitor their requests and their status:
The custom table used in the request process is inherited from the Task table, which is one of the core tables provided with the base system. It provides a series of fields that can be used in request-item management and gives us access to the approval logic.
Approval Strategy
Network admins form the second group of users. They receive requests from end-users and decide whether to fulfill or reject them.
If they decide to fulfill a request, an extended view of the previous form is available, which offers more specific fields where they simply fill in the necessary data. This data represents the ACL rule information that will later be applied. There are several types of rules (IP, TCP, UDP, ICMP, MAC), and different properties (form fields) must be filled in for each of these types.
The network admin has the existing set of rules available, stored in tables according to their type. Existing rules can be accessed from the Application Navigator and viewed inside the rule request the admin is currently reviewing. Data in the tables is updated at regular intervals, as well as after a new rule is added.
Workflow Overview
The network admin can decide to approve or reject the request. Once the request is approved, a flow of actions is triggered; everything after approval is done automatically. A list of existing rules is fetched (GET) from the VPP-Agent using a REST API call. Based on the type of ACL rule, the corresponding action is performed.
Each action consists of two steps. First, the payload is created by inserting the new rules into the list of existing rules (if the ACL already exists) or by creating a new Access Control List (ACL). In the second step, the payload from the previous step is sent back to the VPP-Agent using the REST API. At the end of this action flow, the tables that contain data describing existing rules are updated.
Managing existing rules
In addition to the approval process, the network admin can also update existing rules or create new ones. The network admin fills the data into a simple form. After submitting the form, a request is sent directly to the device, without the need for the approval process, and the rule is applied.
MID server
ServiceNow® applications need to communicate with external systems to transfer data. For this purpose, a MID server is used, which runs as a Windows service or UNIX daemon. In our case, we need to get a list of existing rules from the VPP-Agent, or send a request to the VPP-Agent when we want to create or update a rule. The advantage of a MID server is that communications are initiated inside the enterprise's firewall and therefore do not require any special firewall rules or VPNs.
You can contact us at https://pantheon.tech/
Explore our PANTHEON.tech GitHub.
Watch our YouTube Channel.