[OpenDaylight] Static Distribution

OpenDaylight’s distribution package has remained essentially the same for several years. But what if there were a different way to do this, making the distribution more aligned with the latest containerization trends? This is where an OpenDaylight static distribution comes to the rescue.

Original Distribution & Containerized Deployments

Let’s take a quick look at the usual way.

A standard distribution is made up of:

  • a pre-configured Apache Karaf
  • a full set of OpenDaylight’s bundles (modules)

It’s an excellent strategy when the user wants to choose modules and build their application dynamically from building blocks. Additionally, Karaf provides a set of tools that can change configuration and features at runtime.

However, when it comes to micro-services and containerized deployments, this approach conflicts with some best practices for operating containers – statelessness and immutability.

Perks of a Static Distribution

Starting from version 4.2.x, Apache Karaf provides the capability to build a static distribution, aiming to be more compatible with containerized environments – and OpenDaylight can use that as well.

So, what are the differences between a static vs. dynamic distribution?

  • Specified List of Features

Instead of adding everything to the distribution, you only specify a minimal list of features and bundles required in your runtime, and only those will be installed. This helps produce a lightweight distribution package and omit unnecessary components, including some Karaf features from the default distribution.

  • Pre-Configured Boot-Features

Boot features are pre-configured, so there is no need to execute any feature installations from Karaf’s shell.

  • Configuration Admin

The Configuration Admin service is replaced with a read-only version that only picks up configuration files from the distribution’s ‘etc/’ folder (see the sketch after this list).

  • Speed

Bundle dependencies are resolved and verified during the build phase, which speeds up boot and leads to more stable builds overall.

With all these changes in place, we can achieve an almost entirely immutable distribution, which can be used for containerized deployments.
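To illustrate the read-only Configuration Admin mentioned above: any settings have to be baked into the distribution as property files under ‘etc/’ before packaging; they are read once at startup, and runtime edits are not picked up. The PID and property below are purely hypothetical:

# etc/com.example.app.cfg – read once at startup; runtime edits are ignored
poll-interval = 30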

How to Build a Static Distribution with OpenDaylight’s Components

The latest version of the odl-parent component introduced a new project called karaf-dist-static, which defines a minimal list of features needed by all OpenDaylight components (static framework, security libraries, etc.).

This can be used as a parent POM to create our own static distribution. Let’s use it to assemble a static distribution with some particular features.

  1. Assuming that you already have an empty pom.xml file, the first step is to declare the karaf-dist-static project as the parent of ours:
    <parent>
        <groupId>org.opendaylight.odlparent</groupId>
        <artifactId>karaf-dist-static</artifactId>
        <version>8.1.1-SNAPSHOT</version>
    </parent>
  2. Optionally, you can override two properties to disable the assembly of the .zip/.tar.gz distribution archives. The default value is ‘true’ for both properties.

    Let’s assume that we only need the ZIP:

    <properties>
        <karaf.archiveTarGz>false</karaf.archiveTarGz>
        <karaf.archiveZip>true</karaf.archiveZip>
    </properties>

  3. This example aims to demonstrate how to produce a static distribution containing NETCONF southbound connectors and a RESTCONF northbound implementation.

    Let’s add the corresponding items to the dependencies section:

    <dependencies>
       <dependency>
          <groupId>org.opendaylight.netconf</groupId>
          <artifactId>odl-netconf-connector-all</artifactId>
          <version>1.10.0-SNAPSHOT</version>
          <classifier>features</classifier>
          <type>xml</type>
       </dependency>
       <dependency>
          <groupId>org.opendaylight.netconf</groupId>
          <artifactId>odl-restconf-nb-rfc8040</artifactId>
          <version>1.13.0-SNAPSHOT</version>
          <classifier>features</classifier>
          <type>xml</type>
       </dependency>
    </dependencies>

  4. Once we have these features on the dependency list, we can add them to the Karaf Maven plugin configuration. Usually, when you want to add OpenDaylight features, you use the <bootFeatures> container.

    This works fine for everything except the features delivered with the Karaf framework itself (like ssh, diagnostic, etc.). For those, a <startupFeatures> block should be used, since we want such features installed and verified as part of the static distribution build.

    So, let’s put the ‘ssh’ feature into <startupFeatures>, and our OpenDaylight features into <bootFeatures>.

    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.karaf.tooling</groupId>
          <artifactId>karaf-maven-plugin</artifactId>
          <configuration>
             <startupFeatures combine.children="append">
                <feature>ssh</feature>
             </startupFeatures>
             <bootFeatures combine.children="append">
                <feature>odl-netconf-connector-all</feature>
                <feature>odl-restconf-nb-rfc8040</feature>
             </bootFeatures>
          </configuration>
        </plugin>
      </plugins>
    </build>

    After applying all of these changes, you should get a pom.xml file similar to the one below:

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <parent>
            <groupId>org.opendaylight.odlparent</groupId>
            <artifactId>karaf-dist-static</artifactId>
            <version>8.1.1-SNAPSHOT</version>
        </parent>
     
        <modelVersion>4.0.0</modelVersion>
        <groupId>org.opendaylight.examples</groupId>
        <artifactId>netconf-karaf-static</artifactId>
        <version>1.0.0-SNAPSHOT</version>
        <packaging>karaf-assembly</packaging>
     
        <properties>
            <karaf.archiveTarGz>false</karaf.archiveTarGz>
            <karaf.archiveZip>true</karaf.archiveZip>
        </properties>
     
        <dependencies>
            <dependency>
                <groupId>org.opendaylight.netconf</groupId>
                <artifactId>odl-netconf-connector-all</artifactId>
                <version>1.10.0-SNAPSHOT</version>
                <classifier>features</classifier>
                <type>xml</type>
            </dependency>
            <dependency>
                <groupId>org.opendaylight.netconf</groupId>
                <artifactId>odl-restconf-nb-rfc8040</artifactId>
                <version>1.13.0-SNAPSHOT</version>
                <classifier>features</classifier>
                <type>xml</type>
            </dependency>
        </dependencies>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.karaf.tooling</groupId>
                    <artifactId>karaf-maven-plugin</artifactId>
                    <configuration>
                        <startupFeatures combine.children="append">
                            <feature>ssh</feature>
                        </startupFeatures>
                        <bootFeatures combine.children="append">
                            <feature>odl-netconf-connector-all</feature>
                            <feature>odl-restconf-nb-rfc8040</feature>
                        </bootFeatures>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    </project>

Once everything is ready, let’s build the project!

Building a project

mvn clean package

If you check the log messages, you will probably notice that the KAR artifact is not the same one we had for the dynamic distribution (there, you would expect mvn:org.apache.karaf.features/framework/4.3.0/kar).

[INFO] Loading direct KAR and features XML dependencies
[INFO]    Standard startup Karaf KAR found: mvn:org.apache.karaf.features/static/4.3.0/kar
[INFO]    Feature static will be added as a startup feature

Eventually, we can check the output directory of the Maven build – it should contain an ‘assembly’ folder with the static distribution we created and a netconf-karaf-static-1.0.0-SNAPSHOT.zip archive that contains this distribution.

$ ls --group-directories-first -1 ./target
antrun
assembly
classes
dependency-maven-plugin-markers
site
checkstyle-cachefile
checkstyle-checker.xml
checkstyle-header.txt
checkstyle-result.xml
checkstyle-suppressions.xml
cpd.xml
netconf-karaf-static-1.0.0-SNAPSHOT.zip

While the ZIP archive is the artifact you would usually push to a repository, here we will verify our distribution by running Karaf directly from the assembly folder.

./assembly/bin/karaf

If everything goes well, you should see some system messages saying that Karaf has started, followed by a shell command-line interface:

Apache Karaf starting up. Press Enter to open the shell now...
100% [========================================================================]
Karaf started in 1s. Bundle stats: 50 active, 51 total
                                                                                            
    ________                       ________                .__  .__       .__     __      
    \_____  \ ______   ____   ____ \______ \ _____  ___.__.|  | |__| ____ |  |___/  |_    
     /   |   \\____ \_/ __ \ /    \ |    |  \\__  \<   |  ||  | |  |/ ___\|  |  \   __\   
    /    |    \  |_> >  ___/|   |  \|    `   \/ __ \\___  ||  |_|  / /_/  >   Y  \  |     
    \_______  /   __/ \___  >___|  /_______  (____  / ____||____/__\___  /|___|  /__|     
            \/|__|        \/     \/        \/     \/\/            /_____/      \/         
                                                                                            
 
Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown OpenDaylight.
 
opendaylight-user@root>

With a static distribution, you don’t need to do any feature installation manually.

Let’s just check that our features are running by executing the following command:

feature:list | grep 'Started'

The produced output will contain a list of already started features; among them, you should find the features we selected in the previous steps.

...
odl-netconf-connector    │ 1.10.0.SNAPSHOT  │ Started │ odl-netconf-1.10.0-SNAPSHOT             │ OpenDaylight :: Netconf Connector
odl-restconf-nb-rfc8040  │ 1.13.0.SNAPSHOT  │ Started │ odl-restconf-nb-rfc8040-1.13.0-SNAPSHOT │ OpenDaylight :: Restconf :: NB :: RFC8040
...

We can also run an additional check by sending a request to the corresponding RESTCONF endpoint:

curl -vs --user admin:admin 'http://localhost:8181/rests/data/network-topology:network-topology/topology=topology-netconf' | jq

The expected output would be the following:

{
  "network-topology:topology": [
    {
      "topology-id": "topology-netconf"
    }
  ]
}

What’s next?

Now we can produce immutable & lightweight OpenDaylight distributions with a selected set of pre-installed features, which can be the first step towards creating Docker images fully suited to containerized deployments; a sketch follows below.
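As a minimal Dockerfile sketch of that first step (the base image, target path, and the ‘server’ start mode are our assumptions, not something the distribution prescribes):

FROM eclipse-temurin:11-jre
# Copy the static assembly produced by the Maven build above.
COPY target/assembly/ /opt/opendaylight/
# RESTCONF northbound port.
EXPOSE 8181
# Run Karaf in the foreground, without a local console.
CMD ["/opt/opendaylight/bin/karaf", "server"]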

Our next steps would be to make logging and clustered configuration more suitable for running in containers, but that’s a topic for another article.


by Oleksii Mozghovyi | Leave us your feedback on this post!

You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

[OpenDaylight] Binding Query

Binding Query (BQ) is an MD-SAL module, currently available in OpenDaylight MD-SAL versions master (7.0.5), 6.0.x, and 5.0.x. Its primary function is to filter data from the Binding Awareness model.

To use BQ, you need to create a QueryExpression and a QueryExecutor. The QueryExecutor contains a BindingCodecTree and the data, represented by the Binding Awareness model; the filter and all operations from the QueryExpression will be applied to this data.

A QueryExpression is created through the QueryFactory class, starting with the querySubtree method. It takes an instance identifier as a parameter, which has to point at the root of the data held by the QueryExecutor.

The next step is to build a path to the data we want to filter, and then apply the required filter. When the QueryExpression is ready, it is applied with the executeQuery method of the QueryExecutor. One QueryExpression can be used on multiple QueryExecutors with the same data schema, as the sketch below illustrates.
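A minimal sketch of that reuse, assuming two hypothetical data sets (dataSetA, dataSetB) and a prepared expression matchRootLeaf; CODEC is a BindingCodecTree, and SimpleQueryExecutor is introduced below:

// One expression, two executors holding different data of the same schema.
QueryExecutor executorA = SimpleQueryExecutor.builder(CODEC).add(dataSetA).build();
QueryExecutor executorB = SimpleQueryExecutor.builder(CODEC).add(dataSetB).build();

// The same expression is evaluated against both data sets.
QueryResult<ContainerRoot> resultA = executorA.executeQuery(matchRootLeaf);
QueryResult<ContainerRoot> resultB = executorB.executeQuery(matchRootLeaf);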

Prerequisites for Binding Query

Now, we will demonstrate how to actually use Binding Query. We will create a YANG model for this purpose:

module queryTest {
    yang-version 1.1;
    namespace urn:yang.query;
    prefix qt;
 
    revision 2021-01-20 {
        description
          "Initial revision";
    }
 
    grouping container-root {
        container container-root {
            leaf root-leaf {
                type string;
            }
 
            leaf-list root-leaf-list {
                type string;
            }
 
            container container-nested {
                leaf nested-leaf {
                    type uint32;
                }
            }
        }
    }
 
    grouping list-root {
        container list-root {
            list top-list {
                key "key-a key-b";
 
                leaf key-a {
                    type string;
                }
                leaf key-b {
                    type string;
                }
                list nested-list {
                    key "identifier";
 
                    leaf identifier {
                        type string;
                    }
 
                    leaf weight {
                        type int16;
                    }
                }
            }
        }
    }
 
    grouping choice {
        choice choice {
            case case-a {
                container case-a-container {
                    leaf case-a-leaf {
                        type int32;
                    }
                }
            }
            case case-b {
                list case-b-container {
                    key "key-cb";
                    leaf key-cb {
                        type string;
                    }
                }
            }
        }
    }
 
    container root {
        uses container-root;
        uses list-root;
        uses choice;
    }
}

Then, we will build and create a Binding Awareness model, with some test data from the provided YANG model.

public Root generateQueryData() {
    HashMap<NestedListKey, NestedList> nestedMap = new HashMap<>() {{
        put(new NestedListKey("NestedId"), new NestedListBuilder()
            .setIdentifier("NestedId")
            .setWeight((short) 10)
            .build());
        put(new NestedListKey("NestedId2"), new NestedListBuilder()
            .setIdentifier("NestedId2")
            .setWeight((short) 15)
            .build());
    }};

    HashMap<NestedListKey, NestedList> nestedMap2 = new HashMap<>() {{
        put(new NestedListKey("Nested2Id"), new NestedListBuilder()
            .setIdentifier("Nested2Id")
            .setWeight((short) 10)
            .build());
    }};

    HashMap<TopListKey, TopList> topMap = new HashMap<>() {{
        put(new TopListKey("keyA", "keyB"),
            new TopListBuilder()
                .setKeyA("keyA")
                .setKeyB("keyB")
                .setNestedList(nestedMap)
                .build());
        put(new TopListKey("keyA2", "keyB2"),
            new TopListBuilder()
                .setKeyA("keyA2")
                .setKeyB("keyB2")
                .setNestedList(nestedMap2)
                .build());
    }};

    HashMap<CaseBContainerKey, CaseBContainer> caseBMap = new HashMap<>() {{
        put(new CaseBContainerKey("test@test.com"),
            new CaseBContainerBuilder()
                .setKeyCb("test@test.com")
                .build());
        put(new CaseBContainerKey("test"),
            new CaseBContainerBuilder()
                .setKeyCb("test")
                .build());
    }};

    RootBuilder rootBuilder = new RootBuilder();
    rootBuilder.setContainerRoot(new ContainerRootBuilder()
                                     .setRootLeaf("root leaf")
                                     .setContainerNested(new ContainerNestedBuilder()
                                                             .setNestedLeaf(Uint32.valueOf(10))
                                                             .build())
                                     .setRootLeafList(new ArrayList<>() {{
                                         add("data1");
                                         add("data2");
                                         add("data3");
                                     }})
                                     .build());
    rootBuilder.setListRoot(new ListRootBuilder().setTopList(topMap).build());
    rootBuilder.setChoiceRoot(new CaseBBuilder()
                                  .setCaseBContainer(caseBMap)
                                  .build());
    return rootBuilder.build();
}

For better orientation in the test-data structure, there is also a JSON representation of the data we will use:

{
  "queryTest:root": {
    "container-root": {
      "root-leaf": "root leaf",
      "root-leaf-list": [
        "data1",
        "data2",
        "data3"
      ],
      "container-nested": {
        "nested-leaf": 10
      }
    },
    "list-root": {
      "top-list": [
        {
          "key-a": "keyA",
          "key-b": "keyB",
          "nested-list": [
            {
              "identifier": "NestedId",
              "weight": 10
            },
            {
              "identifier": "NestedId2",
              "weight": 15
            }
          ]
        },
        {
          "key-a": "keyA2",
          "key-b": "keyB2",
          "nested-list": []
        }
      ]
    },
    "choice": {
      "case-b-container": {
        "top-list": [
          {
            "key-cb": "test@test.com"
          },
          {
            "key-cb": "test"
          }
        ]
      }
    }
  }
}

From the Binding Awareness model of queryTest shown above, we can create a QueryExecutor. In this example, we will use the SimpleQueryExecutor. Its builder takes a BindingCodecTree as a parameter, and the Binding Awareness data created by the method above is then added to it.

public QueryExecutor createExecutor() {
    return SimpleQueryExecutor.builder(CODEC)
        .add(generateQueryData())
        .build();
}

Create a Query & Filter Data

Now we can start with an example of how to create a query and filter some data. The first example describes how to filter a container by the value of its leaf. In the next steps, we will create a QueryExpression.

  1. First, we will create a QueryFactory from the DefaultQueryFactory. The DefaultQueryFactory constructor takes BindingCodecTree as a parameter.

    QueryFactory factory = new DefaultQueryFactory(CODEC);
  2. The next step is to create the DescendantQueryBuilder from QueryFactory. The querySubtree method takes an instance identifier as a parameter. This identifier should be the root node of our model; in this case, it is the container named root.
    DescendantQueryBuilder<Root> descendantQueryRootBuilder
        = factory.querySubtree(InstanceIdentifier.create(Root.class));
  3. Then we set the path to the parent container of the leaf on whose value we want to filter.
    DescendantQueryBuilder<ContainerRoot> descendantQueryContainerRootBuilder
        = descendantQueryRootBuilder.extractChild(ContainerRoot.class);
  4. Now we create the StringMatchBuilder for the leaf root-leaf, whose value we want to match.
    StringMatchBuilder<ContainerRoot> stringMatchBuilder = descendantQueryContainerRootBuilder.matching()
        .leaf(ContainerRoot::getRootLeaf);
  5. The last step is to define which values should be filtered and then build the QueryExpression. In this case, we will filter on a specific leaf with the value “root leaf”.
    QueryExpression<ContainerRoot> matchRootLeaf = stringMatchBuilder.valueEquals("root leaf").build();

Now the QueryExpression can be used to filter data from a QueryExecutor. To create the QueryExecutor, we use the createExecutor method defined above.

QueryExecutor executor = createExecutor();
QueryResult<ContainerRoot> items = executor.executeQuery(matchRootLeaf);

The entire previous example in one block will look like this:

QueryFactory factory = new DefaultQueryFactory(CODEC);
QueryExpression<ContainerRoot> rootLeafQueryExpression = factory
    .querySubtree(InstanceIdentifier.create(Root.class))
    .extractChild(ContainerRoot.class)
    .matching()
    .leaf(ContainerRoot::getRootLeaf)
    .valueEquals("root leaf")
    .build();

QueryExecutor executor = createExecutor();
QueryResult<ContainerRoot> result = executor.executeQuery(rootLeafQueryExpression);

When we validate the result, we will find that only one item matched the condition in our query:

assertEquals(1, result.getItems().size());
String resultItem = result.getItems().stream()
    .map(item -> item.object().getRootLeaf())
    .findFirst()
    .orElse(null);
assertEquals("root leaf", resultItem);

Filter Nested-List Data

The next example shows how to use Binding Query to filter data from a nested-list. It will filter nested-list items whose weight parameter equals 10.

QueryFactory factory = new DefaultQueryFactory(CODEC);
QueryExpression<NestedList> queryExpression = factory
    .querySubtree(InstanceIdentifier.create(Root.class))
    .extractChild(ListRoot.class)
    .extractChild(TopList.class)
    .extractChild(NestedList.class)
    .matching()
    .leaf(NestedList::getWeight)
    .valueEquals((short) 10)
    .build();

QueryExecutor executor = createExecutor();
QueryResult<NestedList> result = executor.executeQuery(queryExpression);
assertEquals(2, result.getItems().size());

If we need to filter nested-list items, but only from a top-list entry with specific keys, it will look like this:

QueryFactory factory = new DefaultQueryFactory(CODEC);
QueryExpression<NestedList> queryExpression = factory
    .querySubtree(InstanceIdentifier.create(Root.class))
    .extractChild(ListRoot.class)
    .extractChild(TopList.class, new TopListKey("keyA", "keyB"))
    .extractChild(NestedList.class)
    .matching()
    .leaf(NestedList::getWeight)
    .valueEquals((short) 10)
    .build();

QueryExecutor executor = createExecutor();
QueryResult<NestedList> result = executor.executeQuery(queryExpression);
assertEquals(1, result.getItems().size());

Suppose we want to get top-list elements, but only those that contain nested-list items with a weight greater than or equal to 15. It is possible to set a match on the top-list containers and then continue with a condition on the nested-list. For number operations, the greaterThanOrEqual, lessThanOrEqual, greaterThan, and lessThan methods are available; a lessThan variant is sketched right after this example.

QueryExpression<TopList> queryExpression = factory
    .querySubtree(InstanceIdentifier.create(Root.class))
    .extractChild(ListRoot.class)
    .extractChild(TopList.class)
    .matching()
    .childObject(NestedList.class)
    .leaf(NestedList::getWeight).greaterThanOrEqual((short) 15)
    .build();

QueryExecutor executor = createExecutor();
QueryResult<TopList> result = executor.executeQuery(queryExpression);
assertEquals(1, result.getItems().size());

List<TopList> topListResult = result.getItems().stream()
    .map(Item::object)
    .filter(item -> item.getKeyA().equals("keyA"))
    .filter(item -> item.getKeyB().equals("keyB"))
    .collect(Collectors.toList());
assertEquals(1, topListResult.size());
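Here is the promised lessThan sketch – the same query shape, this time selecting nested-list entries with a weight below 15. Against our test data, this matches the two entries with weight 10:

QueryExpression<NestedList> lighterThan15 = factory
    .querySubtree(InstanceIdentifier.create(Root.class))
    .extractChild(ListRoot.class)
    .extractChild(TopList.class)
    .extractChild(NestedList.class)
    .matching()
    .leaf(NestedList::getWeight)
    .lessThan((short) 15)
    .build();

// Two nested-list entries carry weight 10, one carries weight 15.
assertEquals(2, createExecutor().executeQuery(lighterThan15).getItems().size());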

The last example shows how to filter choice data by matching the values of the key-cb leaf. The condition to be met is defined by a pattern that matches an email address.

QueryFactory factory = new DefaultQueryFactory(CODEC);
QueryExpression<CaseBContainer> queryExpression = factory
    .querySubtree(InstanceIdentifier.create(Root.class))
    .extractChild(CaseBContainer.class)
    .matching()
    .leaf(CaseBContainer::getKeyCb)
    .matchesPattern(Pattern.compile("^[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,6}$",
                                    Pattern.CASE_INSENSITIVE))
    .build();

QueryExecutor executor = createExecutor();
QueryResult<CaseBContainer> result = executor.executeQuery(queryExpression);

assertEquals(1, result.getItems().size());

As the previous examples show, Binding Query can be used to filter the data you need. It offers various filtering options, supports matching strings against patterns, and provides simple filter operations on numbers.


by Peter Šuňa | Leave us your feedback on this post!

You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

Ultimate OpenDaylight Guide | Part 1: Documentation & Testing

by Samuel Kontriš, Robert Varga, Filip Čúzy | Leave us your feedback on this post!


Welcome to Part 1 of the PANTHEON.tech Ultimate Guide to OpenDaylight! We will start off lightly with some tips & tricks regarding the tricky documentation, as well as some testing & building tips to speed up development!


Documentation

1. Website, Docs & Wiki

The differences between these three sources can be staggering. But no worries, we have got you covered!

2. Dependencies between projects & distributions

3. Contributing to OpenDaylight

4. Useful Mailing Lists

There are tens (up to hundreds) of mailing lists you can join, so you are up-to-date with all the important information – even dev talks, thoughts, and discussions!

Testing & Building

1. Maven “Quick” Profile

There’s a “Quick” maven profile in most OpenDaylight projects. This profile skips a lot of tests and checks, which are unnecessary to run with each build.

This way, the build is much faster:

mvn clean install -Pq

2. GitHub x OpenDaylight

The OpenDaylight code is mirrored on GitHub! Since more people are familiar with the GitHub environment than with Gerrit, make sure to check out the official GitHub repo of ODL!

3. Gerrit

Working with Gerrit can be challenging for newcomers. Here is a great guide on the differences between GitHub and Gerrit.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

OpenAPI 3.0 & OpenDaylight: A PANTHEON.tech Initiative

PANTHEON.tech has contributed a commit to the official OpenDaylight repository, which updates the Swagger generator to OpenAPI 3.0.

This feature allows us to easily generate a JSON with the RESTCONF API documentation of OpenDaylight RESTCONF applications and import it into various services, such as ServiceNow®. The feature is not only about generating the OpenAPI JSON – it also includes a Swagger UI based on the generated JSON.

What is RESTCONF API?

RESTCONF API is an interface that allows access to the datastores in the controller via HTTP requests. OpenDaylight supports two versions of the RESTCONF protocol:

  • the older draft-bierman-netconf-restconf-02
  • RFC 8040

What is OpenAPI?

OpenAPI, formerly known as Swagger, visualizes API resources and enables the user to interact with them. This kind of visualization provides an easier way to implement APIs in the back-end, while automating the creation of documentation for the APIs in question.

The OpenAPI Specification (OAS for short), on the other hand, is a language-agnostic interface description for RESTful APIs. Its purpose is to describe them and make the APIs readable for people and machines alike, in YAML or JSON formats.

OAS 3.0 introduced several major changes, which made the specification structure clearer and more efficient. For a rundown of changes from OpenAPI 2 to version 3, make sure to visit this page detailing them.

How does it work?

OpenAPI is generated on the fly, with every manual request for the OpenAPI specification of the selected resource. The resource can be the OpenDaylight datastore or a device mount point. 

You can conveniently access the list of all available resources over the apidoc web application; the resources are located in the top-right part of the screen. Once you pick the resource you want to generate the OpenAPI specification for, it will be displayed below.

OpenAPI 3.0 (Swagger) in OpenDaylight

The apidoc is packed within the odl-restconf-all Karaf feature. To access it, you only need to type

feature:install odl-restconf-all

in the Karaf console. Then, you can use a web browser of your choice to access the apidoc web application over the following URL:

http://localhost:8181/apidoc/explorer/index.html

Once an option is selected, the page will load the documentation of your chosen resource, with the chosen protocol version.

The documentation of any resource endpoint (nodes, RPCs, actions) is located under its module spoiler. When you click on the link:

http://localhost:8181/apidoc/openapi3/${RESTCONF_version}/apis/${RESOURCE}

you will get the OpenAPI JSON for the particular RESTCONF version and selected resource. Here is a code snippet from the resulting OpenAPI specification:

{
  "openapi": "3.0.3",
  "info": {
    "version": "1.0.0",
    "title": "simulator-device21 modules of RestConf version RFC8040"
  },
  "servers": [
    {
      "url": "http://localhost:8181/"
    }
  ],
  "paths": {
    "/rests/data/network-topology:network-topology/topology=topology-netconf/node=simulator-device21/yang-ext:mount": {
      "get": {
        "description": "Queries the operational (running) datastore on the mounted hosted.",
        "summary": "GET - simulator-device21 - data",
        "tags": [
          "mounted simulator-device21 GET root"
        ],
        "responses": {
          "200": {
            "description": "OK"
          }
        }
      }
    },
    "/rests/operations/network-topology:network-topology/topology=topology-netconf/node=simulator-device21/yang-ext:mount": {
      "get": {
        "description": "Queries the available operations (RPC calls) on the mounted hosted.",
        "summary": "GET - simulator-device21 - operations",
        "tags": [
          "mounted simulator-device21 GET root"
        ],
        "responses": {
          "200": {
            "description": "OK"
          }
        }
      }
    }
...

You can look through the entire export by clicking here.
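If you prefer the command line, the same JSON can be fetched with curl; the credentials below are the defaults used earlier in this post, and ${RESTCONF_version} and ${RESOURCE} are the placeholders from the URL template above:

curl -vs --user admin:admin 'http://localhost:8181/apidoc/openapi3/${RESTCONF_version}/apis/${RESOURCE}' | jq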

Our Commitment to Open-Source

PANTHEON.tech is one of the largest contributors to the OpenDaylight source-code, with extensive knowledge that goes beyond a general service or integration.

This just goes to show that PANTHEON.tech is heavily involved in the development and progress of OpenDaylight. We are glad to be part of the open-source community and its contributors.


You can contact us at https://pantheon.tech/

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.

[Hands-On] Network Automation with ServiceNow® & OpenDaylight

by Miroslav Kováč | Leave us your feedback on this post!

PANTHEON.tech s.r.o., its products or services, are not affiliated with ServiceNow®, neither is this post an advertisement of ServiceNow® or its products.

ServiceNow® is a complex cloud application used to manage companies, their employees, and customers. It was designed to help you automate the IT aspects of your business – service, operations, and business management. It creates incidents where, using flows, you can automate part of the work that is very often done manually. All of this can easily be set up by anyone, even if you are not a developer.

An Example

If a new employee is hired by the company, they will need access to several things, based on their position. An incident will be created in ServiceNow® by HR, which will trigger a pre-created, generic flow. The flow might, for example, notify the new hire’s direct supervisor (probably a manager), who would be asked to approve this access request.

Once approved, the flow may continue and set everything up for the employee. It may notify the network engineer to provision the required network services (VPN, static IPs, firewall rules, and more), in order to hand the new employee a working computer. Once done, the engineer just updates the status of this task to done, which may trigger another action – for example, automatically granting access to the company intranet. Once everything is done, the flow notifies everyone it needs to about a successfully completed job, via email or any other communication channel the company uses.

Showing the ServiceNow® Flow Designer

Setting Up the Flow

Let’s take it a step further, and try to replace the network engineer, who has to manually configure the services needed for the device.

In a simple environment with a few network devices, we could set up the ServiceNow® Workflow, so that it can access them directly and edit the configuration, according to the required parameters.

In a complex, multi-tenant environment, we can leverage a network controller that can provide the required service and maintain the configuration of several devices, making the required service functional. In that case, we need ServiceNow® to communicate with the controller, which provides this required network service.

ServiceNow® orchestration understands and reads REST. The controller – in our case, OpenDaylight or lighty.io – provides us with a RESTCONF interface, over which we can easily integrate it with ServiceNow®, thanks to the support in both of these technologies.

Now, we look at how to simplify this integration. For this purpose, we used OpenAPI.

This is one of the features thanks to which we can generate a JSON, according to the OpenAPI specification, for every OpenDaylight/lighty.io application with RESTCONF – and then import it into ServiceNow®.

If your question is whether it is possible to integrate a network controller – for example, OpenDaylight or lighty.io – the answer is yes. Yes, it is.

Example of Network Automation

Let’s say we have an application with a UI that lets us manage the network from a control station. We want to connect a new device to it and set up its interfaces. Manually, we would have to make sure the device is running; if not, we would have to contact IT support to plug it in and create a request to connect to it. Once done, we would have to create another request to set up the interfaces and verify the setup.

Using flows in ServiceNow® lets you do all of that automatically. All your application needs to do is create an incident in ServiceNow®. This incident would be set up as a trigger for a flow to start. The flow would try to create a connection using a REST request, chosen from the API operations in our OpenAPI JSON, which was automatically generated from the YANG files used in the project.

If the connection fails, the flow automatically sends an email to IT support, creating a new, separate incident that has to be marked as done before the flow can continue. Once done, we can try to connect again using the same REST request. When the connection is successful, we can choose another API operation that sets up the interfaces.

After that, we can choose yet another API operation to collect all the created settings, send them via email to the person who created the incident, and mark the incident as done.

OpenAPI & oneOf

Showing the ServiceNow® API Operation

The import of OpenAPI is a new feature since the “New York” release of ServiceNow®, and it has some limitations.

During usage, we noticed a few inconsistencies, which we would like to share with you. Here are some tips on what to look out for when using this feature.

OpenAPI & ServiceNow®

OpenAPI supports the oneOf feature, which is needed for the choice keyword in YANG: you can pick which of the case nodes you want to use. ServiceNow®’s OpenAPI import, however, does not handle it. The current workaround is to use the Swagger 2.0 implementation, which does not use the oneOf feature and instead lists all the cases that exist in a choice statement. If you go to the input variables, you may then delete any input variables that you don’t want.

JSONs & identical item names

Another issue appears when we have a JSON that contains the same item names in different objects or levels. Say we need the following JSON:

{
    "username": "foo",
    "password": "bar",
    "another-log-in": {
        "username": "foo",
        "password": "bar"
    }
}

Here, we have the username and password twice; however, each would appear in the input variables just once, and when testing the action, we were unable to fill them in like the above JSON. The workaround is to manually add further input variables with the same name as the missing ones, using the “+” button in the input variables tab. A variable may then appear twice in the input variables, but during testing it appears only once – where it is supposed to.

Input Variables in ServiceNow®

The last issue we have is that ServiceNow® does not treat input variables as required. Imagine you create an action with a REST step. If there are variables you don’t need to set, you would normally not assign any value to them, and they would simply not be sent.

Here, however, ServiceNow® automatically sets such a variable to its default value, or to an empty string if there is no default value. This can cause problems with decimals as well, since you should not put strings into a decimal variable.

Again, the workaround is to remove all the input variables that you are not going to use.

This concludes our guide to network automation with ServiceNow®. Leave us your feedback on this post!


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

YANG Tools 2.0.1 integrated in OpenDaylight Oxygen

OpenDaylight’s YANG Tools project forms the bottom-most layer of OpenDaylight as an application platform. It defines and implements interfaces for modeling, storing and transforming data modeled in RFC7950, known as YANG 1.1 – such as a YANG parser and compiler.

What is YANG Tools?

Pantheon engineers started developing yangtools some 5 years ago. It originally supported RFC6020 and went through a number of different versions. After releasing yangtools-1.0.0, we introduced semantic versioning as an API contract. Since then, we have retrofitted the original RFC6020 meta-model to support RFC7950 and implemented the corresponding parser bits, which were finalized in yangtools-1.2.0 and shipped with the Nitrogen Simultaneous Release.

This release entered its development phase on August 14th, 2017. yangtools-2.0.0 was released on November 27th, 2017, which is when the search for an integration window started. Even though we had the most critical downstream integration patches prepared, most downstreams had not even started on theirs. Integration work and coordination were quickly escalated to the TSC, and the integration finally kicked off on January 11th, 2018.

Integration was mostly complicated by the fact that odlparent-3.0.x was riding with us, along with the usual Karaf/Jetty/Jersey/Jackson integration mess. It is now sorted out, with yangtools-2.0.1 being the release shipped in the Oxygen Simultaneous Release.

What is new in yangtools-2.0.1?

  • 309 commits
  • 2009 files changed
  • 54126 insertions(+)
  • 45014 deletions(-)

The most user-visible change is that the in-memory data tree now enforces mandatory leaf node presence for the operational store by default. This can be tweaked via the DataTreeConfiguration interface on a per-instance basis, if need be, but we recommend against switching it off.
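A minimal sketch of that per-instance tweak, assuming the yangtools-2.0.x in-memory data tree API (disabling the validation is shown purely for illustration; we recommend keeping it on):

// Configure an operational data tree without mandatory-node enforcement.
DataTreeConfiguration config = new DataTreeConfiguration.Builder(TreeType.OPERATIONAL)
        .setMandatoryNodesValidation(false)
        .build();
DataTree dataTree = InMemoryDataTreeFactory.getInstance().create(config);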

For downstream users of Karaf packaging, we have split our features into stable and experimental ones. Stable features are available from features-yangtools and contain the usual set of functionality, which will only expand in its capabilities. Experimental features are available from features-yangtools-experimental and carry functionality which is not stabilized yet and may get removed – this currently includes ObjectCache, which is slated for removal, as Guava’s Interners are better suited for the job.

Users of yang-maven-plugin will find that YANG files packaged in jars now have their names normalized to RFC7950 guidelines. This includes using the actual module or submodule name, as well as capturing the revision in the filename – for example, a module foo with revision 2017-08-14 is packaged as foo@2017-08-14.yang.

API Changes

From the API change perspective, two changes stand out. We have pruned all deprecated methods, and all YANG 1.1 API hacks marked with ‘FIXME: 2.0.0’ have been cleared up. This results in better ergonomics for both API users and implementors.

yang-model-api has seen some incompatible changes, ranging from the renaming of AugmentationNode, TypedSchemaNode and ChoiceCaseNode to some targeted use of Optional instead of nullable returns. The most significant change here is the introduction of EffectiveStatement specializations – I will cover these in detail in a follow-up post, but they have enabled us to do the next significant item.

The YANG parser has been refactored into multiple components, and its internal structure has changed in order to hide most of the implementation classes and methods. It is now split into:

  • yang-parser-reactor (language-independent inference pipeline)
  • yang-parser-rfc7950 (hosting baseline RFC6020/RFC7950 parser)
  • yang-parser-impl (being the default-configured parser instance)
  • and a slew of parser extensions (RFC6536, RFC7952, RFC8040)

There is also a yang-parser-spi artifact, which hosts common namespaces and utility classes, but its layout is far from stabilized. Overall, the parser has become a lot more efficient and better at detecting and reporting model issues. Implementing new semantic extensions has become a breeze.

YANG Codecs

YANG codecs have seen a major shift, with the old XML parser in yang-data-impl removed in favor of yang-data-codec-xml. yang-data-codec-gson gains the ability to parse and emit RFC7951 documents, which allows the RFC8040 RESTCONF implementation to come closer to full compliance. Since the SchemaContext is much more usable now, with Modules being indexed by their QNameModule, the codec operations have become significantly faster.

Overall, we are in a much better and cleaner shape. We are not looking at a 3.0.0 release anytime soon, and we can now deliver incremental improvements to YANG Tools at a much more rapid cadence than was previously possible with the entire OpenDaylight simultaneous release cycle in the way.

We already have another round of changes ready for yangtools-2.0.2 and are looking forward to publishing them.

Robert Varga