[Tutorial] Create & Use Containerized RNC Application

The lighty.io RESTCONF-NETCONF (RNC) application lets you easily initialize, start, and use the most common OpenDaylight services, and optionally add custom business logic.

The lighty.io RNC application ships with pre-prepared Helm 2 & Helm 3 charts, which can be easily used for Kubernetes deployment.

This tutorial shows how to deploy the RNC application with Helm and a local Kubernetes engine.


Deploy RNC application with Helm 2

lighty.io releases up to version 15.1.0 contain Helm charts that support only Helm 2 and Kubernetes version 1.21 or lower. Kubernetes 1.22 removed the networking.k8s.io/v1beta1 API, which these charts require, so they cannot be deployed on newer clusters.

Deploy RNC app with local Kubernetes engine

To deploy the RNC application, we will install and use the microk8s local Kubernetes engine. Feel free to use any other local Kubernetes engine you prefer; the only requirement is that it runs Kubernetes 1.21 or lower.

1) Install microk8s with snap. We need to specify the 1.21/stable channel, which installs Kubernetes 1.21.

sudo snap install microk8s --classic --channel=1.21/stable
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube

2) Verify that the microk8s instance is running

microk8s status --wait-ready
   microk8s is running
   high-availability: no
   datastore master nodes: 127.0.0.1:19001
   datastore standby nodes: none
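
You can also verify that the cluster really runs Kubernetes 1.21, as required by the Helm 2 charts; the server version in the output should report v1.21.x:

microk8s.kubectl version --short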

3) Enable required add-ons

microk8s enable dns helm

4) Initialize Helm. In microk8s, the stable repository URL and the tiller image must be overridden for the initialization to succeed.

microk8s.helm init --stable-repo-url=https://charts.helm.sh/stable --tiller-image ghcr.io/helm/tiller:v2.17.0
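
To verify that the initialization succeeded, ask Helm for its version; once the tiller pod is running, both a client and a server (tiller) version should be reported:

microk8s.helm version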

5) Check if all required k8s pods are working correctly.

microk8s.kubectl get pods -n kube-system

5.1) If not, inspect the error messages of the failing pods and resolve the problems so that they run correctly.

microk8s.kubectl describe pod [FAILED_POD_NAME] -n kube-system
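
The logs of the failing pod usually contain more detail than the describe output:

microk8s.kubectl logs [FAILED_POD_NAME] -n kube-system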

6) Add the PANTHEON.tech repository to your Helm installation and update it

microk8s.helm repo add pantheon-helm-repo https://pantheontech.github.io/helm-charts/
microk8s.helm repo update

7) Deploy the RNC app at version 15.1.0 with Helm.

microk8s.helm install --name lighty-rnc-app pantheon-helm-repo/lighty-rnc-app-helm --version 15.1.0

8) Check if the RNC app was successfully deployed and the k8s pod is running

microk8s.helm ls
microk8s.kubectl get pods
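
If everything went well, the output should look roughly like this (illustrative and trimmed; the revision, pod hash, and age will differ in your deployment):

NAME            REVISION  STATUS    CHART                       NAMESPACE
lighty-rnc-app  1         DEPLOYED  lighty-rnc-app-helm-15.1.0  default

NAME                                                 READY  STATUS   RESTARTS  AGE
lighty-rnc-app-lighty-rnc-app-helm-548774945b-4tjvz  1/1    Running  0         2m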

Configuration for your RNC app

The RNC application can be configured through the Helm values file. The default RNC app values.yaml file can be found in the lighty.io GitHub repository.

1) Set the RESTCONF port to 8181 through the --set flag

microk8s.helm install --name lighty-rnc-app pantheon-helm-repo/lighty-rnc-app-helm --set lighty.restconf.restconfPort=8181
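
You can confirm that the override is in effect by listing the values set for the release; the restconfPort value from above should appear in the output:

microk8s.helm get values lighty-rnc-app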

2) Set the RESTCONF port by providing a configured values.yaml file

2.1) Download the values.yaml file

2.2) Update the image to your desired version.

image:
  name: ghcr.io/pantheontech/lighty-rnc
  version: 15.1.0
  pullPolicy: IfNotPresent

2.3) Update the RESTCONF port or make any other required changes in the values.yaml file.
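
Based on the --set key used in option 1, the relevant part of values.yaml should look roughly like this (a sketch; the file contains further keys around this section):

lighty:
  restconf:
    restconfPort: 8181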

2.4) Deploy the RNC app with the changed values.yaml file. Use upgrade if you have already deployed the RNC application.

microk8s.helm upgrade lighty-rnc-app pantheon-helm-repo/lighty-rnc-app-helm --values [VALUES_YAML_FILE]
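
You can list the release revisions to confirm that the upgrade was applied; each upgrade adds a new revision:

microk8s.helm history lighty-rnc-app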

Deploy RNC application with Helm 3

The current lighty.io master branch contains an updated Helm chart compatible with Helm 3 and Kubernetes v1.22. This example shows how to deploy the RNC application with Helm 3, which is the recommended approach.

Download lighty.io Master

For this example, we will use the Helm chart and the NETCONF simulator Dockerfile located in the lighty.io master branch. We will modify the Helm chart to download the latest version of the RNC Docker image.

1) Clone lighty.io from the GitHub repository and check out the master branch, or download the lighty.io master zip file

git clone https://github.com/PANTHEONtech/lighty.git
cd lighty
git checkout master

2) Move to the lighty-rnc-app-helm directory

cd lighty-applications/lighty-rnc-app-aggregator/lighty-rnc-app-helm/helm/lighty-rnc-app-helm

3) Change the Docker image inside the values.yaml file to:

image:
  name: ghcr.io/pantheontech/lighty-rnc
  version: latest
  pullPolicy: IfNotPresent

Deploy RNC App with local Kubernetes engine

To deploy the RNC application, we will install and use the microk8s local Kubernetes engine. Feel free to use any other local Kubernetes engine you prefer.

1) Install microk8s with Snap

sudo snap install microk8s --classic
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube

2) Enable the required add-ons

microk8s enable dns helm3

3) Deploy the RNC app by pointing Helm at the chart directory (adjust the relative path if your current working directory differs)

microk8s helm3 install lighty-rnc-app ./lighty-rnc-app-helm/

4) Check if the RNC app was successfully deployed and the k8s pods are running.

microk8s.helm3 ls
microk8s.kubectl get pods

Create a testing device from the lighty.io NETCONF simulator

For testing purposes, we will need a device. PANTHEON.tech has already created a testing tool that simulates NETCONF devices.

We will use this simulated device and start it inside a Docker container. A Dockerfile that builds an image for the simulated device can be found inside lighty.io.

1) Download the NETCONF simulator Dockerfile from lighty.io to a separate folder

2) Create a Docker image from the Dockerfile

sudo docker build -t lighty-netconf-simulator .

3) Start the Docker container with the testing device on port 17830, or any other port, by changing the -p parameter.

sudo docker run -d --rm --name netconf-simulator -p17830:17830 lighty-netconf-simulator:latest
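
You can verify that the simulator container is up and running before connecting it to the RNC app:

sudo docker ps --filter name=netconf-simulator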

Test the RNC application with simple CRUD operations on a device

This part shows a simple use case: connecting a device and performing basic CRUD operations through the deployed RNC application.

1) Check the IP assigned to the k8s pod of the RNC app. This IP will be used as the HOST_IP parameter in the requests.

microk8s.kubectl get pod lighty-rnc-app-lighty-rnc-app-helm-548774945b-4tjvz -o custom-columns=":status.podIP" | xargs
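
The pod name above comes from our deployment; yours will have a different hash. A small sketch that first looks the pod name up (assuming the default lighty-rnc-app release name) and then reads its IP:

RNC_POD=$(microk8s.kubectl get pods -o name | grep lighty-rnc-app)
microk8s.kubectl get $RNC_POD -o custom-columns=":status.podIP" | xargs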

2) Check the IP assigned to the Docker container. This IP will be used as the DEVICE_IP parameter in the requests.

docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' netconf-simulator

3) Connect the simulated device to the RNC application

curl --request PUT 'http://[HOST_IP]:8888/restconf/data/network-topology:network-topology/topology=topology-netconf/node=new-node' \
--header 'Content-Type: application/json' \
--data-raw '{
    "netconf-topology:node": [
        {
            "node-id": "new-node",
            "host": [DEVICE_IP],
            "port": 17830,
            "username": "admin",
            "password": "admin",
            "tcp-only": false,
            "keepalive-delay": 0
        }
    ]
}'

4) Get device information from the RNC app. Check in the response whether the connection-status is “connected”.

curl --request GET 'http://[HOST_IP]:8888/restconf/data/network-topology:network-topology/topology=topology-netconf/node=new-node'

# Response
{
    "network-topology:node": [
        {
            "node-id": "new-node",
            "netconf-node-topology:connection-status": "connected",
            "netconf-node-topology:username": "admin",
            "netconf-node-topology:password": "admin",
            "netconf-node-topology:available-capabilities": {
               ...
            },
            "netconf-node-topology:host": "[DEVICE_IP]",
            "netconf-node-topology:port": 17830,
            "netconf-node-topology:tcp-only": false,
            "netconf-node-topology:keepalive-delay": 0
        }
    ]
}

5) Write a new topology-id into the simulated device data

curl --request PUT 'http://[HOST_IP]:8888/restconf/data/network-topology:network-topology/topology=topology-netconf/node=new-node/yang-ext:mount/network-topology:network-topology' \
--header 'Content-Type: application/json' \
--data-raw '{
    "network-topology:network-topology": {
        "topology": [
            {
                "topology-id": "new-topology"
            }
        ]
    }
}'

6) Get data from the simulated device

curl  --request GET 'http://[HOST_IP]:8888/restconf/data/network-topology:network-topology/topology=topology-netconf/node=new-node/yang-ext:mount/network-topology:network-topology'

# Response
{
    "network-topology:network-topology": {
        "topology": [
            {
                "topology-id": "new-topology"
            },
            {
                "topology-id": "default-topology"
            }
        ]
    }
}

7) Remove the device from the RNC application

curl --request DELETE 'http://[HOST_IP]:8888/restconf/data/network-topology:network-topology/topology=topology-netconf/node=new-node'
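
Repeating the GET request from step 4 should now return an error response (e.g. a data-missing error), confirming that the node was removed:

curl --request GET 'http://[HOST_IP]:8888/restconf/data/network-topology:network-topology/topology=topology-netconf/node=new-node'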

8) Device Logs: Logs from the device can be shown by executing the following command:

sudo docker logs [CONTAINER ID]

9) RNC Logs: Logs from the RNC app can be shown by executing the following command:

microk8s.kubectl logs [POD_NAME]
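
To follow the RNC logs in real time while sending the requests above, add the -f flag:

microk8s.kubectl logs -f [POD_NAME]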

We hope you enjoyed this tutorial! If you are interested in commercial support or a custom lighty.io integration, make sure to contact us.

Let us know what you thought of this tutorial and what you missed!


by Peter Šuňa | Leave us your feedback on this post!
