
[Thoughts] On Karaf & Its Future

These thoughts were originally sent to the public karaf-dev mailing list, where Robert Varga wrote a compelling opinion on what the future holds for Karaf and where it is currently headed. The text below has been lightly edited from the original.


With my various OpenDaylight hats on, let me summarize our project-wide view, with a history going back to when the project was officially announced (early 2013).

From the get-go, our architectural requirement for OpenDaylight was OSGi compatibility. This means every single production artifact (not maven-plugins, obviously) has to be a proper bundle.

This highly technical and implementation-specific requirement was set down for two reasons:

  1. What OSGi brings to MANIFEST.MF in terms of headers and intended wiring, incl. Private-Package
  2. A typical OSGi implementation (we inherited Equinox and are still using it) uses multiple class loaders and utterly breaks on split packages

This serves as an architectural requirement that translates into an unbreakable design requirement on how the code must be structured.
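
To make the first point concrete, here is an illustrative sketch of the kind of OSGi manifest headers involved (the bundle and package names are made up, not taken from any actual OpenDaylight artifact):

    Bundle-SymbolicName: org.example.controller
    Bundle-Version: 1.0.0
    Export-Package: org.example.controller.api;version="1.0.0"
    Import-Package: org.osgi.framework;version="[1.8,2)"
    Private-Package: org.example.controller.impl

Export-Package and Import-Package declare the intended wiring between bundles, while Private-Package marks implementation packages that no other bundle may wire against.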

We started out with a home-brew OSGi container. We quickly replaced it with Karaf 3.0.x (6?), massively enjoying it being properly integrated, with shell, management, and all that. Also, feature:install.

At the end of the day, though, OpenDaylight is a toolkit of a bunch of components that you throw together and they work.

Our initial thinking was far removed from the current world of containers where operations are concerned. The deployment was envisioned more like an NMS with a dedicated admin team (to paint a picture), providing a flexible platform.

The world has changed a lot, and the focus nowadays is on containers providing a single, hard-wired use-case.

We now provide out-of-the-box use-case wiring, using both dynamic Karaf and Guice (at least for one use case). We have an external project which shows the same can be done with pure Java, Spring Boot and Quarkus.

We now also require Java 11, hence we have JPMS – and it can fulfill our architectural requirement just as well as OSGi. Thanks to OSGi, we have zero split packages.
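
For illustration, here is a hypothetical module descriptor expressing the same boundaries under JPMS (the module and package names are made up, not taken from any actual OpenDaylight artifact):

    // module-info.java – an illustrative sketch, not an actual OpenDaylight module
    module org.example.controller {
        // analogous to Import-Package: dependencies are declared explicitly
        requires org.example.yangtools;
        // analogous to Export-Package: only the API package is visible to others
        exports org.example.controller.api;
        // everything not exported stays module-private, like Private-Package
    }

JPMS also refuses to resolve two modules containing the same package on the module path, so the zero-split-packages discipline carries over directly.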

We do not expect to ditch Karaf anytime soon, but rather to leverage static-framework for a lightweight OSGi environment, as that is clearly the best option for us in the short-to-medium term, and definitely something we will continue supporting for the foreseeable future.

The shift to nimble single-purpose wirings is not going away, and hence we will be expanding there anyway.

To achieve that, we will not be looking for a framework-of-frameworks; we will do that through native integration ourselves.

If Karaf can do the same, i.e. have its general-purpose pieces available as components, easily thrown together with @Singletons or @Components, with multiple frameworks, as well as being nicely jlinkable – now that would be something.
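
As a purely hypothetical sketch of what such a framework-neutral component could look like – a single class carrying both a javax.inject annotation (for Guice-style wiring) and an OSGi Declarative Services annotation, with CommandRegistry standing in for an arbitrary collaborator:

    import javax.inject.Inject;
    import javax.inject.Singleton;
    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;

    /** Hypothetical collaborator, stands in for any service dependency. */
    interface CommandRegistry { }

    // Illustrative only: usable as a @Singleton in a DI container,
    // or as a Declarative Services component in OSGi (DS 1.4 constructor injection).
    @Singleton
    @Component(immediate = true)
    public class ShellService {
        private final CommandRegistry registry;

        @Inject     // constructor injection under Guice/CDI
        @Activate   // constructor injection under OSGi DS 1.4+
        public ShellService(final CommandRegistry registry) {
            this.registry = registry;
        }
    }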


[Release] lighty.io 13

With enterprises already deploying lighty.io in their networks, what are you waiting for? Check out the official lighty.io website, as well as references.


13 is an unlucky number in some cultures – but not in the case of the 13th release of lighty.io!


What’s new in lighty.io 13?

PANTHEON.tech has released lighty.io 13, keeping up to date with OpenDaylight's Aluminium release. A lot of major changes happened in lighty.io itself, which we will break down for you here:

  • ODL Parent 7.0.5
  • MD-SAL 6.0.4
  • YANG Tools 5.0.5
  • AAA 0.12.0
  • NETCONF 1.9.0
  • OpenFlow 0.11.0
  • ServiceUtils 0.6.0
  • Maven SAL API Gen Plugin 6.0.4

Our team fixed the start scripts for the examples in the repository, and bumped the Maven Compiler Plugin & Maven JAR Plugin, used for compiling and building JARs, respectively. Fixes include Coverity issues & code refactoring, in order to comply with the source-quality profile (which was also enabled in this release). Furthermore, we have fixed the NETCONF delete-config preconditions, so they work as intended per RFC 6241.

As for improvements, we have reworked disabled tests and improved the AAA (Authentication, Authorization, Accounting) tests. Checkstyle was updated to version 8.34.

Since we maintain compatibility with OpenDaylight Aluminium, it is worth noting several accomplishments of the 13th OpenDaylight release as well.

The largest new features are support for incremental data recovery & LZ4 compression. LZ4 offers lossless compression with speeds up to 500 MB/s (per core), which is quite impressive and can be utilized within OpenDaylight as well!

Incremental data recovery allows for datastore journal recovery, and the increased compression of datastore snapshots is where LZ4 support comes to the rescue!

Another major feature, if you remember, is the PANTHEON.tech initiative towards OpenAPI 3.0 support in OpenDaylight. Formerly known as Swagger, OpenAPI helps visualize API resources while giving the user the possibility to interact with them.


What is lighty.io?

Remember: lighty.io takes OpenDaylight's core features and enables you to run them without Karaf, on any available Java platform. Contact us today for a demo or custom integration!



[What Is] XDP/AF_XDP and its potential

What is XDP?

XDP (eXpress Data Path) is an eBPF-based (extended Berkeley Packet Filter) mechanism for early packet interception. Received packets are not sent to the kernel IP stack directly, but can be sent to userspace for processing. Users may decide what to do with the packet (drop, send back, modify, pass to the kernel). A detailed description can be found here.

XDP is designed as an alternative to DPDK. It is slower and, at the moment, less mature than DPDK. However, it offers features/mechanisms already implemented in the kernel (DPDK users have to implement everything in userspace).

At the moment, XDP is under heavy development and features may change with each kernel version. This leads to the first requirement: run the latest kernel version. Changes between kernel versions may not be compatible.

XDP packet processing, as described by IO Visor

XDP Attachment

The XDP program can be attached to an interface, where it processes the RX queue of that interface (incoming packets). It is not possible to intercept the TX queue (outgoing packets), but kernel developers are continuously extending the XDP feature set; TX-queue support is one of the improvements with high interest from the community.

An XDP program can be loaded in several modes (see the example commands after this list):

  • Generic – Driver doesn’t have support for XDP, but the kernel fakes it. XDP program works, but there’s no real performance benefit because packets are handed to kernel stack anyways which then emulates XDP – this is usually supported with generic network drivers used in home computers, laptops, and virtualized HW.
  • Native – Driver has XDP support and can hand then to XDP without kernel stack interaction – Few drivers can support it and those are usually for enterprise HW
  • Offloaded – XDP can be loaded and executed directly on the NIC – just a handful of NICs can do that
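
As a quick illustration, iproute2 can request each mode explicitly when attaching a program (assuming an interface named eth0 and a compiled BPF object prog.o with a program section named xdp – both are placeholders):

    $ ip link set dev eth0 xdpgeneric obj prog.o sec xdp   # generic mode
    $ ip link set dev eth0 xdpdrv obj prog.o sec xdp       # native driver mode
    $ ip link set dev eth0 xdpoffload obj prog.o sec xdp   # offloaded to the NIC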

XDP programs run in an in-kernel virtual machine, with multiple constraints implied that should protect the kernel from errors in the XDP code. There is a limit on how many instructions one XDP program may contain. However, there is a workaround: a call table referencing various XDP programs, which can then call each other.

The verifier checks the ranges of used variables. Sometimes it's helpful – it does not allow you to access a packet offset beyond what has already been validated against the packet size.

Sometimes it is annoying, because the packet pointer can be passed to a subroutine, where access may fail with an out-of-bounds error, even if the original packet was already checked for that size.

BPF Compilation

Errors reported by the BPF compiler are quite tricky, because the program is compiled into bytecode. Errors reported against that bytecode usually do not make it obvious which part of the C program they relate to.

The error message is sometimes hidden at the beginning of the dump, sometimes at the end, and the instruction dump itself may be many pages long. Sometimes, the only way to identify the issue is to comment out parts of the code, to figure out which line introduced it.

XDP can’t (as of November 2019):

One of our requirements was to forward traffic between the host and namespaces, containers or VMs. Namespaces do their isolation job properly, so XDP can access either host interfaces or namespaced interfaces; I wasn't able to use it as a tunnel to pass traffic between them. The workaround is to use a veth pair to connect the host with a namespace and attach two XDP handlers (one on each side) to process traffic. I'm not sure whether they can share TABLES to pass data. However, using the veth pair mitigates the performance benefit of using XDP.
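
A sketch of that workaround (assuming a namespace called blue and two compiled XDP objects, host.o and ns.o, which are placeholders for your own handlers):

    $ ip netns add blue                                    # namespace to connect
    $ ip link add veth0 type veth peer name veth1          # create the veth pair
    $ ip link set veth1 netns blue                         # move one end inside
    $ ip link set dev veth0 xdpgeneric obj host.o sec xdp  # host-side handler
    $ ip netns exec blue ip link set dev veth1 xdpgeneric obj ns.o sec xdp  # namespace-side handler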

Another option is to create an AF_XDP socket as a sink for packets received on the physical interface and processed by the attached XDP program. But there are two major limitations:

  • One cannot create dozens of AF_XDP sockets and use XDP to redirect various traffic classes into their own AF_XDP sockets for processing, because each AF_XDP socket binds to an RX/TX queue of the physical interface. Most physical and emulated HW supports only a single RX/TX queue pair per interface, so if one AF_XDP socket is already bound, another one will fail. A few combinations of HW and drivers support multiple RX/TX queues, but they have 2/4/8 queues, which does not scale to hundreds of containers running in the cloud.
  • Another limitation is that XDP can forward traffic to an AF_XDP socket, where the client reads the data; but when the client writes something to the AF_XDP socket, the traffic goes out immediately via the physical interface, and XDP cannot see it. Therefore, XDP + AF_XDP is not viable for symmetric operations like encapsulation/decapsulation. Using a veth pair may mitigate this issue.

What XDP can do (as of November 2019):

  • Fast incoming packet filtering. XDP can inspect fields in incoming packets and take a simple action: DROP, TX to send the packet back out the interface it was received on, REDIRECT to another interface, or PASS to the kernel stack for processing. XDP can also alter packet data: swap MAC addresses, change IP addresses, ports, ICMP type, recalculate checksums, etc. Obvious uses include:
  • Firewalls (DROP)
  • L2/L3 lookup & forward
  • NAT – it is possible to implement static NAT indirectly (two XDP programs, each attached to its own interface, processing the traffic and forwarding it out via the other interface). Connection tracking is possible, but more complicated, with session-related data preserved and exchanged in TABLES.

by Marek Závodský, PANTHEON.tech


AF_XDP

AF_XDP is a new type of socket, introduced in Linux kernel 4.18, which does not completely bypass the kernel, but utilizes its functionality and makes it possible to create something akin to DPDK or AF_Packet.

DPDK (Data Plane Development Kit) is a library, developed in 2010 by Intel and now under the Linux Foundation umbrella, which accelerates packet-processing workloads on a broad palette of CPU architectures.

AF_Packet is a socket type in the Linux kernel which allows applications to send & receive raw packets through the kernel. It creates a shared mmap ring buffer between the kernel and userspace, which reduces the number of calls between the two.


AF_XDP basics, as described by Red Hat

As opposed to AF_Packet, AF_XDP moves frames directly to userspace, without the need to go through the whole kernel network stack. They arrive in the shortest possible time. AF_XDP does not bypass the kernel, but creates an in-kernel fast path.

It also offers advantages like zero-copy (between kernel space & userspace) and offloading of the XDP bytecode onto the NIC. AF_XDP can run in interrupt mode as well as polling mode, while DPDK poll-mode drivers always poll – meaning they use 100% of the available CPU processing power.

Future potential

One future potential of offloaded XDP (one of the ways XDP bytecode can be executed) is that such an offloaded program runs directly on the NIC and therefore does not use any CPU power, as noted at FOSDEM 2018:

Because XDP is so low-level, the only way to move packet processing further down to earn additional performances is to involve the hardware. In fact, it is possible since kernel 4.9 to offload eBPF programs directly onto a compatible NIC, thus offering acceleration while retaining the flexibility of such programs.

Decentralization

Furthermore, all signs point toward a theoretical, decentralized architecture – with emphasis on community efforts in offloading workloads to NICs – for example, a decentralized NIC switching architecture. This type of offloading would cut the cost of various expensive tasks, such as the CPU having to process incoming packets.

We are excited about the future of AF_XDP and looking forward to the mentioned possibilities!

For a more detailed description, you can download a presentation with details surrounding AF_XDP & DPDK and another from FOSDEM 2019.

Update 08/15/2020: We have upgraded this page, its content and information, for you to enjoy!


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.


[Integration] Network Service Mesh & Cloud-Native Functions

by Milan Lenčo & Pavel Kotúček | Leave us your feedback on this post!

As part of a webinar, in cooperation with the Linux Foundation Networking, we have created two repositories with examples from our demonstration "Building CNFs with FD.io VPP and Network Service Mesh + VPP Traceability in Cloud-Native Deployments".

If you would like to view the webinar on-demand, visit this link for registration, after which you can view the recording.

What is Network Service Mesh (NSM)?

Recently, Network Service Mesh (NSM) has been drawing lots of attention in the area of network function virtualization (NFV). Inspired by Istio, Network Service Mesh maps the concept of a service mesh to L2/L3 payloads. It runs on top of (any) CNI and builds additional connections between Kubernetes pods at run-time, based on Network Service definitions deployed via CRDs.

Unlike Contiv-VPP, for example, NSM is mostly controlled from within applications through the provided SDK. This approach has its pros and cons.

Pros: Gives programmers more control over the interactions between their applications and NSM

Cons: Requires a deeper understanding of the framework to get things right

Another difference is that NSM intentionally offers only minimalistic point-to-point connections between pods (or clients and endpoints, in its terminology). Everything that can be implemented via CNFs is left out of the framework. Even things as basic as connecting a service chain to external physical interfaces, or attaching multiple services to a common L2/L3 network, are not supported and are instead left to the users (programmers) of NSM to implement.

Check out our full webinar, in cooperation with the Linux Foundation Networking, on YouTube:

Integration of NSM with Ligato

At PANTHEON.tech, we see the potential of NSM and have decided to tackle the main drawbacks of the framework. For example, we have developed a new plugin for Ligato-based control-plane agents that allows seamless integration of CNFs with NSM.

Instead of having to use the low-level and imperative NSM SDK, users (not necessarily software developers) can use the standard northbound (NB) protobuf API to define the connections between their applications and other network services in a declarative form. The plugin then uses the NSM SDK behind the scenes to open the connections and create the corresponding interfaces that the CNF is then ready to use.

The CNF components therefore do not have to care about how the interfaces were created – whether by Contiv, via the NSM SDK, or in some other way – and can simply use logical interface names for reference. This approach allows us to decouple the implementation of the network function provided by a CNF from the service networking/chaining that surrounds it.

The plugin for Ligato-NSM integration is shipped both separately, ready for import into existing Ligato-based agents, and as part of our NSM-Agent-VPP and NSM-Agent-Linux. The former extends the vanilla Ligato VPP-Agent with NSM support, while the latter also adds NSM support but omits all the VPP-related plugins, for cases when only Linux networking needs to be managed.

Furthermore, since most common network features are already provided by the Ligato VPP-Agent, it is often unnecessary to do any additional programming work whatsoever to develop a new CNF. With the help of the Ligato framework and tools developed at PANTHEON.tech, achieving the desired network function is often a matter of defining the network configuration declaratively, inside one or more YAML files deployed as Kubernetes CRD instances. For examples of Ligato-based CNF deployments with NSM networking, please refer to our repository with CNF examples.

Finally, the repository also includes a controller for a K8s CRD, defined to allow deploying network configuration for Ligato-based CNFs like any other Kubernetes resource defined inside YAML-formatted files. Usage examples can also be found in the repository with CNF examples.

CNF Chaining using Ligato & NSM (example from LFN Webinar)

In this example, we demonstrate the capabilities of the NSM agent – a control plane for cloud-native network functions deployed in a Kubernetes cluster. The NSM agent seamlessly integrates the Ligato framework for Linux and VPP network configuration management with Network Service Mesh (NSM), separating the data-plane from the control-plane connectivity between containers and external endpoints.

In the presented use case, we simulate a scenario in which a client from a local network needs to access a web server with a public IP address. The necessary Network Address Translation (NAT) is performed between the client and the web server by the high-performance VPP NAT plugin, deployed as a true CNF (cloud-native network function) inside a container. For simplicity, the client is represented by a K8s pod running an image with cURL installed (as opposed to being an external endpoint, as it would be in a real-world scenario). For the server side, the minimalistic TestHTTPServer implemented in VPP is utilized.

In all three pods, an instance of the NSM agent runs to communicate with the NSM manager via the NSM SDK and negotiate additional network connections, connecting the pods into a chain:

Client <-> NAT-CNF <-> web server (see diagrams below)

The agents then use the features of the Ligato framework to further configure Linux and VPP networking around the additional interfaces provided by NSM (e.g. routes, NAT).

The configuration to apply is described declaratively and submitted to the NSM agents in a Kubernetes-native way, through our own custom resource called CNFConfiguration. The controller for this CRD (installed by cnf-crd.yaml) simply reflects the content of applied CRD instances into an ETCD datastore, from which it is read by the NSM agents. For example, the configuration for the NSM agent managing the central NAT CNF can be found in cnf-nat44.yaml.

More information about cloud-native tools and network functions provided by PANTHEON.tech can be found on our website CDNF.io.

Networking Diagram

Network Service Mesh Manager Architecture

Routing Diagram

CNF NAT 44 Routing

Steps to recreate the Demo

    1. Clone the following repository.
    2. Create a Kubernetes cluster; deploy the CNI (network plugin) of your preference
    3. Install Helm version 2 (the latest NSM release, v0.2.0, does not support Helm v3)
    4. Run helm init to install Tiller and to set up a local configuration for Helm
    5. Create a service account for Tiller
      $ kubectl create serviceaccount --namespace kube-system tiller
      $ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
      $ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
    6. Deploy NSM using Helm:
      $ helm repo add nsm https://helm.nsm.dev/
      $ helm install --set insecure=true nsm/nsm
    7. Deploy ETCD + controller for CRD, both of which will be used together to pass configuration to NSM agents:
      $ kubectl apply -f cnf-crd.yaml
    8. Submit the definition of the network topology for this example to NSM:
      $ kubectl apply -f network-service.yaml
    9. Deploy and start simple VPP-based webserver with NSM-Agent-VPP as control-plane:
      $ kubectl apply -f webserver.yaml
    10. Deploy VPP-based NAT44 CNF with NSM-Agent-VPP as control-plane:
      $ kubectl apply -f cnf-nat44.yaml
    11. Deploy Pod with NSM-Agent-Linux control-plane and curl for testing connection to the webserver through NAT44 CNF:
      $ kubectl apply -f client.yaml
    12. Test connectivity between client and webserver:
      $ kubectl exec -it client curl 80.80.80.80/show/version
    13. To confirm that the client's IP is indeed source-NATed (from 192.168.100.10 to 80.80.80.102) before reaching the web server, one can use VPP packet tracing:
      $ kubectl exec -it webserver vppctl trace add memif-input 10
      $ kubectl exec -it client curl 80.80.80.80/show/version
      $ kubectl exec -it webserver vppctl show trace
      
      00:01:04:655507: memif-input
        memif: hw_if_index 1 next-index 4
          slot: ring 0
      00:01:04:655515: ethernet-input
        IP4: 02:fe:68:a6:6b:8c -> 02:fe:b8:e1:c8:ad
      00:01:04:655519: ip4-input
        TCP: 80.80.80.100 -> 80.80.80.80
      ...

Business Update: March 2020

Here at PANTHEON.tech, we are saddened by the current situation in the world regarding the COVID-19 outbreak, and we fully support everybody on the front line of this fight.

As the world is continuously relying on stable networks and improvements in technology, our day-to-day business needs to continue as usual.

Firstly, we needed to assess the situation. That is why we have put the following responsible counter-measures in place, as of the 13th of March, 2020:

Off-Site Work: We have moved our staff to off-site work. Our employees were already able to utilize off-site work as a benefit at PANTHEON.tech, so we were sure that most of our staff would be used to this change and adapt to it quickly.

Great Software: A stable & fast VPN is a must, as well as stable conferencing software and an efficient alternative to e-mails.

Reliable Connection: We have renewed our ISO certificates through an online meeting system on the 27th of March, 2020. This guarantees that all standards for remote work are also being followed, making off-site work and staying in touch a breeze.

Business-as-usual: PANTHEON.tech is lucky enough to work in a field where we have the option to work effectively off-site. Our business hours are the same, and work continues as it did before.

PANTHEON.tech continues to strive for perfection in its solutions, whether in our office space or off-site. Because our employees do not need to travel to work, they can spend more time on their own work.

We miss the direct contact with our colleagues and customers, but appreciate the opportunity to continue our work. We sincerely hope that the current situation will be resolved as quickly as possible. But until then, it is business as usual for us.

We encourage anybody with needs within custom open-source solutions, software prototyping or network technologies to contact us. We are happy to be of assistance and create new partnerships, even in these trying times.

Nothing is changing for PANTHEON.tech – in terms of quality & delivery. We always strive for perfection.

[OpenDaylight] Sodium: A Developer's Perspective

by Robert Varga | Leave us your feedback on this post!

PANTHEON.tech continued to be the leader in terms of contributions to the OpenDaylight codebase. While our focus remained on the core platform, we also dabbled in individual plugins to deliver significant performance, scalability and correctness improvements. We are therefore glad to be a significant part of the effort that went into the newest release of OpenDaylight – Sodium.

In Sodium, we have successfully transitioned the OpenDaylight codebase to require Java 11 – an effort we have been spearheading since the mid-Neon timeframe. This not only allows our users to reap the runtime improvements of Java 11 (which was already possible before Sodium), but also allows OpenDaylight code to take advantage of language features available in Java 11.

Our continued stewardship of YANG Tools has seen us deliver multiple improvements and new features, most notable of which are:

YANG Parser: extended to support Errata 5617 leaf-ref path expressions, which violate RFC 7950 but are used by various models seen in the wild – such as ETSI NFV models. The YANG parser has also received improvements in CPU and memory usage when faced with large models, ranging from 10 to 25%, depending on the models and features used.

In-memory Data Tree: the technology underlying most datastore implementations. It has been improved in multiple areas, yielding an overall memory footprint reduction of 30%. This allows better out-of-the-box scalability.

Adoption of Java 11: Java 11 has allowed PANTHEON.tech to deploy improvements to the MD-SAL Binding runtime, resulting in measurably faster method dispatch across the system, providing benefits to all OpenDaylight plugin code.

We have also continued improving the Distributed Datastore, achieving further improvements to the persistence format and reducing the in-memory & on-disk footprint by as much as 25%.

Last but not least, we have taken a hard look at the OVSDB project and provided major refactors of its codebase. This has immensely improved its ability to scale in the number of connected devices, as well as in individual connection throughput.

If you would like a custom OpenDaylight integration, feel free to contact us!

memif + T-REX: CNF Testing Made Easy

PANTHEON.tech developer Július Milan has integrated memif into the T-REX traffic generator. T-REX is a traffic generator which you can use to test the speed of network devices. Now you can test cloud-native functions, which support memif natively, in the cloud – without specialized network cards!

Imagine a situation where multiple cloud-native functions are interconnected or chained via memif. Tracking their utilization would be a nightmare. With our memif + T-REX solution, you can make arbitrary measurements – effortlessly and straightforwardly. The results are more precise and direct, as opposed to creating adapters and interconnecting them in order to be able to measure traffic.

Our commitment to open source has a long track record. With lighty-core being open-sourced, and our CTO Robert Varga being the top single contributor to the OpenDaylight source code, we are proving once again that our heart belongs to the open-source community.

The combination of memif & T-REX makes measuring cloud-native function performance easy & straightforward.

memif, the "shared memory packet interface", allows any client (VPP, libmemif) to communicate with DPDK using shared memory. Our solution makes memif highly efficient, with zero-copy capability. This saves memory bandwidth and CPU cycles, adding another piece to the puzzle of achieving a high-performance CNF.

It is important to note that zero-copy works only with the newest version of DPDK. memif & T-REX can be used in zero-copy mode when the T-REX side of the pair is the master; the other side of the memif pair (VPP or some cloud-native function) is then the zero-copy slave.

T-REX, developed by Cisco, addresses the cost of stateful/realistic traffic generators, which can set your company back by up to $500,000 – a price that limits testing capabilities and slows down the entire process. T-REX solves this by being an accessible, open-source, stateful/stateless traffic generator, fueled by DPDK.

Services that function in the cloud are characterized by an unlimited presence. They can be accessed from anywhere with a functional connection and are located on remote servers. This may curb costs, since you do not have to create and maintain your own servers in a dedicated, physical space.

PANTHEON.tech is proud to be a technology enabler, with continuous support for open-source initiatives, communities & solutions.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.


[What Is] Cloud-Native and its future

The cloud-native methodology enables applications, as we know them, to function in a cloud environment and utilize it to the fullest. But what does the cloud offer that wasn't here before?

Forecast: Cloudy

Services that function in the cloud are characterized by an unlimited presence – they can be accessed from anywhere with a functional connection and are located on remote servers. This can curb costs, as you do not have to create and maintain your own servers in a dedicated, physical space. If you do, even more costs can arise from the expertise your company needs to build up to support such a solution.

Cloud-native apps are designed from the ground-up to be present in the cloud and use its potential to the fullest.

Apps re-imagined

The cloud-native approach breaks apps down into microservices. One app contains a set of services which are independent of each other – one failure does not break the entire app. If paired with orchestration platforms like Kubernetes, containers can automatically heal failures once they occur. Furthermore, cloud-native apps can use a variety of languages to build each service. This is possible due to the independence of each service: there is a clear separation by API.

Apps are packed into containers. Containers guarantee that all components needed by the application are present in the container image and ready to be deployed immediately within a container runtime. Since these containers, or blocks, function as one, users can utilize horizontal scaling capabilities based on actual demand, as sketched below.
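
For example (with my-cnf standing in for a hypothetical deployment name), Kubernetes exposes this kind of scaling directly:

    $ kubectl scale deployment my-cnf --replicas=5                           # manual horizontal scaling
    $ kubectl autoscale deployment my-cnf --min=2 --max=10 --cpu-percent=80  # demand-based autoscaling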

Since services are split up and independent of each other, cloud-native apps can apply a continuous-delivery approach. Different teams can collaborate on different parts of the container and manage continuous, independent development and improvement of services in the container, without redeploying the whole environment.

Industry shift towards clouds

It is hard to move on from an approach which has proven itself over the years and become a standard. However, the cloud-native approach can offer scalability & agility for the end-user that traditional applications cannot match.

Integration is made easier by the enormous involvement of the open-source community, namely the Cloud Native Computing Foundation, which covers most of the major projects present in common cloud architectures, including Kubernetes.

If your company is looking to adapt to these changes, stay tuned for what PANTHEON.tech can offer on this front.


lighty.io is Cloud-Native Ready, are you?

lighty.io, at its core, manages network devices in a virtualized network. It is therefore crucial that it works as a robust and, at the same time, stable tool. When deployed in modern networks, lighty.io by itself cannot guarantee stability and an error-free network.

PANTHEON.tech engineers have managed to extend OpenDaylight/lighty.io capabilities with Akka Clustering. This fulfills the requirements for dynamic, horizontal scaling of lighty.io applications.

Welcome, lighty.io, to the world of cloud-native & microservices.

lighty.io & containers

Thanks to lighty.io removing Karaf and using Java SE as its runtime, we have made it one of the most lightweight SDN controllers in the world. This makes not only lighty.io's, but also OpenDaylight's components usable in the world of containers & microservices.

We have further managed to include horizontal scaling in lighty.io, and are proud to announce that auto-scaling (up & down) functionality is available in the lighty.io Akka Cluster.

Akka

Akka is a set of open-source libraries for designing scalable, resilient systems and microservices. Its strongest points are:

  • Multi-threaded behavior
  • Transparency: with regard to communication between components and the systems using them
  • Clustered architecture: for designing a scalable, on-demand and reactive system

This means that Akka addresses situations common in any virtualized architecture – components failing without responding, messages getting lost in transit, and fluctuating network latency. The key to fully utilizing Akka is combining devices into clusters.

Cluster

As we mentioned earlier, networks can experience downtime. This may be due to the collapse of one machine breaking the entire chain of events. Running applications as clusters, instead of single instances, avoids this situation.

Think of clustering as grouping objects with a similar purpose together. If one of the cluster members fails, another member can come to the rescue and continue in its function. This happens in an instant – the end-user doesn't even notice that anything happened.

Requests are evenly distributed across the cluster members, thanks to load-balancing. One device won’t be overloaded with requests, as these are shared within the cluster.

If the number of connected devices is low, the cluster will decrease in size and use fewer resources. The cluster adapts to the needs of the network and can grow or shrink in size. Due to this balance, hardware efficiency is guaranteed.

Clusters can, in some cases, fail, but they do not necessarily require manual intervention. If used with tools like Kubernetes, potential outages in the cluster are healed automatically. This can be a lifesaver in critical situations.

PANTHEON.tech @ Open Networking Summit Europe ’19

Here at PANTHEON.tech, we are proud to be a corporate member of the Linux Foundation and one of the sponsors of the Open Networking Summit Europe 2019.

We were in attendance at this year's Open Networking Summit Europe in Antwerp, Belgium. The organization was, as always, handled by the Linux Foundation Networking.

As is tradition, we participated, networked and presented our know-how to potential business partners & like-minded open-source enthusiasts.

Belgium Meets Open-Source

The conference was held in Antwerp, Belgium, right next to the Antwerp Zoo. The Flanders Meeting & Convention Center continues the animal theme of its surroundings, with halls featuring whale skeletons, historical zoo photos & other themed objects. It was a wonderful, spacious place, ideal for a conference of this size. We were also more than pleased that so many attendees wore lanyards with our logo.

Part of our booth at ONS was a presentation of two demos:

  • CNF Designer: Design and visualize your CNFs and export the settings. In this demo, people used PANTHEON.tech instances of NAT44, DHCP, Ligato and ACL CNFs.
  • Raspberry Pi Access Point: We managed to run a VPP instance on a Raspberry Pi acting as a Wi-Fi access point, send flow data using IPFIX to ElastiFlow, and visualize it in Kibana.

Orange x PANTHEON.tech

PANTHEON.tech developer Samuel Kontriš was busy at a demo stand, in cooperation with Orange, named "TransportPCE: SDN Controller & Simulators for WDM Optical Networks (OpenDaylight, FD.io, Open ROADM, lighty.io)". TransportPCE supports both the original ODL Karaf (OSGi) runtime and a lighty.io build (without any proprietary component). We had the chance to present why lighty.io should be the right choice for your SDN controller.

It was a great opportunity to show why lighty.io deserves attention. Many people came to ask about lighty.io, and I gladly referred them to our PANTHEON.tech booth staff.

TransportPCE is an open-source implementation of an optical SDN controller, officially integrated into the OpenDaylight Fluorine and Neon releases.

Edge Computation Offloading by PANTHEON.tech

We continued with a session led by PANTHEON.tech VP of Engineering Miroslav Mikluš, called "Edge Computation Offloading". Its premise was the promise of 5G and the high expectations it has set. In his talk, he showed how the Radio Access Network could be used to offload tasks from the end-user device to the edge.


We showed a demo of a client application (running on an Android mobile device) that can execute compute-intensive tasks on an edge compute node. K8s was used on a RAN site to host a specialized container application, based on TensorFlow, which had received pre-trained models to classify image content.

Miroslav Mikluš - ONS Session "ECO"

As always, we appreciate the organizational skills of the Linux Foundation and would like to thank LFN, Orange and our colleagues who participated at this year’s Open Networking Summit in Antwerp.

We are proud to be working on several open-source projects, centered around, but not limited to, the Linux Foundation Networking. PANTHEON.tech is looking forward to overcoming all challenges, which will ultimately contribute to the open-source community and possibly change networking forever.



[Announcement] PANTHEON.tech @ ONS EU 2019!

PANTHEON.tech will be attending this year's Open Networking Summit Europe in Antwerp, Belgium. We are looking forward to this event, organized by the Linux Foundation Networking.

As is tradition, we will be participating, networking and presenting a variety of projects, ideas and skills to potential business partners & like-minded open-source enthusiasts.

You will be able to meet with:

  • COO Štefan Kobza 
  • VP of Engineering Miroslav Mikluš
  • Technical Business Development Manager Martin Varga
  • Software Engineer Samuel Kontriš
  • Technical Copywriter Filip Čúzy

Come talk to us @ Booth B8

Our demo with Orange

This year, we will be presenting a demo (as part of the LF Networking demo section), in partnership with Orange. The presentation is named:

TransportPCE: SDN Controller & Simulators for WDM Optical Networks (OpenDaylight, FD.io, Open ROADM, lighty.io)

TransportPCE is an open-source implementation of an optical SDN controller, officially integrated into the OpenDaylight Fluorine and Neon releases. It allows managing optical WDM devices compliant with the Open ROADM specifications – currently the only open standard focused on full interoperability of WDM devices.

Along with the controller implementation, TransportPCE also embeds a device simulator, derived from the FD.io honeycomb project. A fully functional test suite was built on top of this simulator for CI/CD purposes. The demo will simulate a small WDM network topology with FD.io honeycomb and show how to create and delete a WDM service with the TransportPCE test suite. The design of TransportPCE relies on a modular approach that leverages the classical OSGi OpenDaylight framework.

This design allows considering more deployment scenarios than the monolithic WDM network management systems classically found in commercial products. lighty.io is an alternative framework to OSGi for OpenDaylight, developed by PANTHEON.tech and partially open-sourced. It is intended for deployment scenarios that require streamlined applications with minimal resource consumption (e.g. microservices).

TransportPCE supports both OSGi and lighty.io (without any proprietary component). The demo will propose a comparison of OSGi and lighty.io.

Who is PANTHEON.tech?

PANTHEON.tech is a software research & development company located in Bratislava, Slovakia. Our focus lies on network technologies and software development, with over 17 years of experience.

We have been part of developing OpenDaylight components: MD-SAL, YANG Tools, Clustering, NETCONF, RESTCONF, SNMP, OpenFlow, BGP & PCEP. We look forward to helping speed up the development of open-source networking technology & participating in the SDN revolution.


[News] all-lighty.io Summer 2019

It has been a busy summer for PANTHEON.tech & the developers of our leading product, lighty.io. For all those interested in new information, here is a round-up of all the examples and demos we have published on our social media.

Integrating lighty.io & CCSDK (ONAP)

In the last weeks, we have been working intensively, together with the wonderful ONAP community, on a proposal to remove existing dependencies on the OSGi framework and Karaf from the CCSDK project – while still maintaining the same OpenDaylight services the community knows and uses. We will deep-dive into our proposal soon in a separate post, so stay tuned!

Spring Boot Example

Full NETCONF/RESTCONF stack for the Spring Boot runtime.

We have recently succeeded in running RESTCONF in a Spring Boot example designed for lighty.io. Spring Boot is now able to run a full OpenDaylight/lighty.io stack.

Spring Boot makes it easy to create & run stand-alone, Spring-based applications. It boasts no code generation, no requirement for XML configuration, and automatic configuration of Spring functionality, radically improving pure Spring deployment and development.

Clustered Application Demo

lighty.io now runs as a clustered RESTCONF/NETCONF SDN controller application. From now on, it supports deployment in an Akka cluster, so you are not limited to single-node SDN controllers.

Furthermore, you can deploy lighty.io as a Kubernetes cluster.

TransportPCE Controller

The lighty.io TransportPCE controller is now upstreamed into the OpenDaylight project! We previously managed to migrate TransportPCE to lighty.io. Now you can see the code for yourself in the OpenDaylight repository.

We have previously published a how-to on how to migrate the OpenDaylight TransportPCE to lighty.io.

BGP/EVPN Router

We are planning to extend the BGP function of an SDN controller with an EVPN extension in the BGP control plane. We will discuss BGP-EVPN functions in SDN controllers and how the lighty.io BGP function can replace existing legacy BGP route reflectors running in service providers' WAN or DC networks.


Match Made In Heaven: Spring Boot & lighty.io

lighty.io brings you a full NETCONF/RESTCONF stack for the Spring Boot runtime.

Check out our example, which runs core OpenDaylight components integrated into a Spring Boot application.

You are no longer restricted to the cumbersome legacy ODL Karaf runtime. lighty.io makes the Spring component ecosystem accessible for broad SDN development. Go ahead and enjoy the features of the ODL ecosystem together with the advantages of the vast Spring project ecosystem.

Popular Java Framework

We have recently succeeded in running RESTCONF in a Spring Boot example, available for lighty.io. Spring Boot is now able to run a full OpenDaylight/lighty.io stack; this includes:

  • RESTCONF
  • MD-SAL
  • NETCONF

Spring Boot makes it easy to create & run stand-alone, Spring-based applications. It boasts no code generation, no requirement for XML configuration, and automatic configuration of Spring functionality, radically improving pure Spring deployment and development.
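
As a minimal sketch of what such an entry point can look like (the class name is hypothetical, and the lighty.io wiring is only indicated in comments – it stands in for the actual integration code in our example repository):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    // Illustrative only: a plain Spring Boot application that would host
    // lighty.io services (MD-SAL, RESTCONF, NETCONF) as Spring-managed beans.
    @SpringBootApplication
    public class LightySpringApplication {
        public static void main(String[] args) {
            // lighty.io core services would be started here (or from a
            // @Configuration class) before serving RESTCONF requests.
            SpringApplication.run(LightySpringApplication.class, args);
        }
    }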

MD-SAL, part of the OpenDaylight controller, provides infrastructure services for data stores, RPCs (& service routing), and notification subscribe and publish services.

Model-Driven SAL Queries is a tool developed by PANTHEON.tech, aimed at OpenDaylight developers. Its goal is to speed up work with the MD-SAL API. On top of this, we added new features, such as query operations on a data store. Check out our introductory video on this project:

RESTCONF is a REST-like protocol running over HTTP. It provides access to data defined in YANG, using the datastores defined in NETCONF. Its aim is to provide a simple interface with REST-like features and principles, with device abstraction based on available resources.

NETCONF, on the other hand, is a network management protocol, developed and maintained by the Internet Engineering Task Force (IETF). It is able to manage (install, delete, manipulate) network device configurations. All of these operations are carried out on top of an RPC layer.
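
As an illustration of the RESTCONF side, an RFC 8040-style read of the network topology can look like this (the port, credentials and mounted model are assumptions for this sketch, not guaranteed defaults of any particular controller build):

    $ curl -u admin:admin http://localhost:8888/restconf/data/network-topology:network-topology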


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.


[Comparison] OpenDaylight Neon SR1 vs. SR2

The development of OpenDaylight Sodium is already producing significant improvements, some of which are finding their way into the upcoming Service Release of Neon.

These are the test results for OpenDaylight Neon SR2. We have recorded significant improvements in the following areas:

  • Datastore snapshot-size: ~49% reduction
  • Processing time: ~58% reduction
  • In-memory size: ~25% reduction
  • Object count: ~25% reduction
  • NodeIdentifier: ~99.9% reduction
  • AugmentationIdentifier: ~99.9% reduction

These enhancements will also be present in the future, long-awaited Sodium release.

Our commitment – more commits

PANTHEON.tech's CTO, Robert Varga, is currently the top single committer to OpenDaylight's source code, according to recent reports. PANTHEON.tech also remains the number 1 contributor to its source code.

Bitergia, a software development analytics company, regularly tracks the number of commits to OpenDaylight – by individual committers and by companies. OpenDaylight plays a significant role in PANTHEON.tech's offerings and solutions.

Apart from its position as a top contributing organization to ODL, PANTHEON.tech has always been a strong supporter of the open-source community. This support covers a wide range of developers, from local, private open-source development groups to international open-source platforms – such as ONAP, OPNFV or FD.io.

What is OpenDaylight?

OpenDaylight is a collaborative, open-source project, established in 2013. It aims to speed up the adoption of SDN and create a solid foundation for Network Functions Virtualization.

It was founded by global industry leaders, such as Cisco, Ericsson, Intel, IBM, Dell, HP, Red Hat, Microsoft and PANTHEON.tech, and is open to all.

PANTHEON.tech's involvement in OpenDaylight goes all the way back to its beginnings. We have led the way in defining what an SDN controller is and should aspire to be. This requires dedication, which we have proven over the years with an extensive amount of contributions and a successful track record, thanks to our expert developers.

Contact us if you are interested in our solutions, which are based on, or integrate, OpenDaylight's framework.


[Release] OpenDaylight Sodium, YANG Tools & MD-SAL

It is no secret that OpenDaylight is more than a passion project here at PANTHEON.tech. As one of the staples of the SDN revolution, we are also proud to be one of the largest contributors to the project, with PANTHEON.tech Fellow Robert Varga leading in the number of individual commits. Let us have a look at what is new.

YANG Tools 3.0.0

YANG Tools is a set of libraries and data-modeling language support, used to model state & configuration data. Its libraries provide NETCONF and YANG support for Java-based projects and applications. What's new in the world of the latest YANG Tools release?

It implements new RFCs: RFC7952 (Defining and Using Metadata with YANG) and RFC8528 (YANG Schema Mount).

The YANG parser now supports RFC 6241 & RFC 8528, while adding strict QName rules for "node identifiers".

JDK 11 support has been added, together with lots of small bugfixes, which you can read about in detail in the changelog.

MD-SAL 4.0.0

The Model-Driven Service Abstraction Layer is OpenDaylight’s kernel – it interfaces between different layers and modules. MD-SAL uses APIs to connect & bind requests and services and delegates certain types of requests.

The new version of MD-SAL has been released for ODL. MD-SAL is one of the central OpenDaylight components. This release contains mainly bugfixes and brings pre-packaged model updates for newly released IETF models: RFC 8519, RFC 8528, RFC 8529 and RFC 8530.

OpenDaylight Release Sequence

Get ready for OpenDaylight Sodium

OpenDaylight's development has an important new milestone ahead: the Sodium release, currently scheduled for around September 2019. This release will bring new versions of YANG Tools, MD-SAL and other core OpenDaylight components.

JDK 11, as well as JDK 8, is supported, providing an opportunity for OpenDaylight users to try the new features of JDK 11. The release bids farewell to Java EE modules & JavaFX, and brings performance improvements & bug fixes.

We believe you are also looking forward to these releases and encourage you to try them out!

Don’t forget to contact us, in case you are interested in a commercial solution for your business!


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.


[Release] VPPTop

Here at PANTHEON.tech, we really enjoy perfecting the user experience around VPP and creating projects based on this framework. That is why we have developed a handy tool which lets you analyze several crucial data points while using VPP. We call it VPPTop.

Requirements

In order to run VPPTop, you will need to install these dependencies first:

  1. [Go] version 1.11+
  2. [VPP], version 19.04-3~g1cb333cdf~b41 is recommended. You can find the installation guide here.
  3. Configure VPP according to the README file

How to install & run

1. Follow the above requirements, in order to have all dependencies for this software installed

2. Make sure VPP is running beforehand

3. Install VPPTop via Terminal to $GOPATH/bin:

go get -u github.com/PantheonTechnologies/vpptop

4. Run the tool from Terminal:

sudo -E vpptop

What is VPPTop?

VPPTop is an application for sysadmins and testers, implemented in Go. The goal of VPPTop is to provide VPP statistics which are updated in real-time, in one terminal window.

Before the release of our tool, admins could only rely on the CLI: first requesting the data flow, then receiving information which is neither up-to-date nor in real-time.

Now, you can use our VPPTop tool, to find up-to-date information on your framework!

Version 1.11 supports statistics for:

  • Interfaces
  • Nodes
  • Errors
  • Memory Usage per Thread
  • Per Thread-Data

Version 1.11 supports these functionalities:

  • Clearing counters
  • Filter by Name
  • Sorting

Why should you start using Vector Packet Processing?

The main advantages are:

  • high performance with a proven technology
  • production level quality
  • flexible and extensible

The main advantage of VPP is that you can plug in a new graph node, adapt it to your network's purposes, and run it right away. Including a new plugin does not mean you need to change your core code with each new addition. Plugins can either be included in the processing graph, or they can be built outside the source tree and become an individual component of your build.

Furthermore, this separation of plugins makes crashes a matter of a simple process restart; your whole build does not need to be restarted because of one plugin failure.

You can read more in our PANTHEON.tech series on VPP.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

PANTHEON.tech joins the Telecom Infra Project

In January 2019, PANTHEON.tech became a member of the Telecom Infra Project (TIP) – a collaborative telecom community.

The Telecom Infra Project is a network of companies which aim to share, create and collaborate on innovative technologies in the field of telecommunications. Its members comprise a variety of operators, technology providers, developers, startups and other institutions outside of telecommunication services. It was launched in February 2016, with the aim of accelerating the pace of innovation in the telecom industry.

How it works

The Telecom Infra Project is structured so that its members can actively contribute and share their expertise across the wide range of telecom technologies:

Project Groups design and build technologies. They are divided into three network areas that together create an end-to-end network: Access, Backhaul, and Core and Management.

Community Labs test and validate the technologies from Project Groups in field and production trials. There are 6 active TIP Community Labs around the world, which share their testing results and respond to the members' outputs.

Ecosystem Acceleration Centers aim to build a dedicated space with venture capital funding and experienced advisors, in order to help upcoming and promising telecom startups bring their ideas to the market and consumers.

PANTHEON.tech is proud to be part of the driving force in telecommunication innovation. We believe that we will prove ourselves worthy of this membership with our expertise in SDN, NFV and open-source solutions.

We are looking forward to cooperating with other TIP members on improving and creating technologies.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub. Follow us on Twitter.

Watch our YouTube Channel.


PANTHEON.tech adds OC4VPP code to Sweetcomb

Recently, PANTHEON.tech decided to contribute to the source code of FD.io's Sweetcomb project. This was mainly because our new project, OC4VPP, would complement the ideas and possibilities of the project, while improving its functionality.

What was the OC4VPP project?

Network administrators can use protocols such as gNMI or NETCONF to communicate with sysrepo.

OC4VPP was a sysrepo plugin with which we were able to set up and orchestrate the VPP framework. This plugin was part of an in-house solution we were working on.

Sweetcomb and OC4VPP both use VAPI to communicate with the VPP API. Here at PANTHEON.tech, our developers managed to include more YANG models in OC4VPP. But we soon realized that Sweetcomb was in advanced development and already provided some of the functionality we wanted to work on.

How will Sweetcomb benefit from OC4VPP?

Sweetcomb is a management agent used to expose YANG modules, with the help of RESTCONF and NETCONF, in order to allow immediate management of a VPP instance. It translates all communication between the northbound interface and the VPP API.

We believe that Sweetcomb will directly benefit from PANTHEON.tech's contribution of the OC4VPP source code, which will provide the project with more YANG models to expand its functionality.

Since Sweetcomb is a new project and mainly a proof of concept, it currently supports only these 5 IETF YANG models:

  • /ietf-interfaces:interfaces/interface
  • /ietf-interfaces:interfaces/interface/enabled
  • /ietf-interfaces:interfaces/interface/ietf-ip:ipv4/address
  • /ietf-interfaces:interfaces/interface/ietf-ip:ipv6/address
  • /ietf-interfaces:interfaces-state

First of all, Sweetcomb will gain support for OpenConfig models, which expands its usability and improves the project's deployment scale.

In the case of OC4VPP, we managed to implement 10 additional YANG models, which we would like to add to the list of supported modules in Sweetcomb:

  • /openconfig-interfaces:interfaces/interface/config
  • /openconfig-interfaces:interfaces/interface/state
  • /openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state
  • /openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/openconfig-if-ip:ipv4/openconfig-if-ip:addresses/openconfig-if-ip:address/openconfig-if-ip:config
  • /openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/openconfig-if-ip:ipv4/openconfig-if-ip:addresses/openconfig-if-ip:address/openconfig-if-ip:state
  • /openconfig-local-routing:local-routes/static-routes/static/next-hops/next-hop/config
  • /openconfig-local-routing:local-routes/static-routes/static/next-hops/next-hop/interface-ref/config
  • /openconfig-local-routing:local-routes/static-routes/static/state
  • /openconfig-local-routing:local-routes/static-routes/static/next-hops/next-hop/state
  • /openconfig-local-routing:local-routes/static-routes/static/next-hops/next-hop/interface-ref/state

As a regular open-source contributor and supporter, we are glad that we were able to make this important decision and showcase the crucial principles of open-source development:

Communication – Participation – Collaboration

You can find the source code for Sweetcomb in the official GitHub repository.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub. Follow us on Twitter.

Watch our YouTube Channel.


PANTHEON.tech is the 2nd largest OpenDaylight contributor for Q3/2018

In the last week of November 2018, Bitergia, a software development analytics company, published a report on the past and current status of the OpenDaylight project, which plays a significant role in PANTHEON.tech’s offerings and solutions.

PANTHEON.tech's CTO, Robert Varga, leads the list of per-user contributions to the OpenDaylight source code, with over 980 commits in Q3 of 2018. This achievement further establishes PANTHEON.tech's position as one of the largest contributors to the OpenDaylight project.

As for the list of companies which contribute to the source code of OpenDaylight, PANTHEON.tech is the 2nd largest contributor for Q3/2018, with 1034 commits. We were just 34 commits shy of the top contributor position, which belongs to Red Hat.

Due to ODL's open-source nature, anyone can contribute to the project and improve it in the long run. Any change that gets added to the source code is counted as a commit. These changes need approval and cannot be any kind of automated activity – including bot actions or merges. This means that each single commit is a unique change added to the source code.

PANTHEON.tech will continue its commitment to improving OpenDaylight and we are looking forward to being a part of the future of this project.

What is OpenDaylight?

ODL is a collaborative, open-source project, aimed at speeding up the adoption of SDN and creating a solid foundation for Network Functions Virtualization (NFV).

PANTHEON.tech's engagement with ODL goes back to its formation. In a sense, PANTHEON.tech has led the way in defining what an SDN controller is and should be. This requires dedication, which has been proven over the years with an extensive amount of contributions, thanks to its expert developers.

Click here if you are interested in our solutions, which are based on, or integrate ODL’s framework.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

PANTHEON.tech @ OTKD – 345 km in 33.5 hours.

PANTHEON.tech’s running team participated in the From Tatry To Danube Run (OTKD) 2018.

The relay run named From Tatry To Danube (OTKD) is a 345-kilometer-long run, with a maximum of 12 runners per team. This year, 214 teams took part. The PANTHEON.tech running team took part in this adventure with 12 runners, who had to cover 345 kilometers in less than 36 hours. We are proud that our colleagues were successful: the team finished the run in an overall time of 33 hours and 34 minutes.

The run took place from Saturday, August 18 to Sunday, August 19. The runners started on Saturday morning in Demänovská valley and ran non-stop – day and night, in harsh weather and terrain – to reach the finish in Bratislava on the bank of the Danube river. Each of the runners covered approximately 30 kilometers on average.

PANTHEON.tech believes a healthy mind can only flourish in a healthy body, and is happy to support its employees in participating in such activities. See you at OTKD 2019!