VPP & Honeycomb (with bees)

Integrating VPP and Honeycomb and the Extension of VPP Services

In this short article, I would like to share our experience with integrating VPP and Honeycomb, and with extending VPP services. Among our colleagues are many developers who contribute to both projects, as well as people who work on integrating the two with each other and with the rest of the networking world.

First, let’s define the basic terms.


What is VPP?

According to its wiki page, VPP is “an extensible framework that provides out-of-the-box production quality switch/router functionality”. There is definitely more to say about VPP, but from my perspective, what’s most important is that it

  • provides switch and router functionality,
  • is of production quality,
  • is platform independent.

“Platform independent” means that it is up to you where you run it (virtualized environment, bare metal, …). VPP is a piece of software which is, by default, distributed in the form of packages. Final VPP packages are available from Nexus repositories. Let’s say we decide to use stable VPP version 17.04 on a stable Ubuntu 16.04: we can download all the available packages from the corresponding Nexus site. If your platform is not available at Nexus, you can still download the VPP sources and build them on the platform you need.

VPP will process the packets flowing in your network similarly to a physical router, but with one big advantage: you do not need to buy a router. You can use whatever physical device you have and just install the VPP modules.

What is Honeycomb?

Honeycomb is a management interface for VPP. It provides NETCONF and RESTCONF interfaces on the northbound side and stores the requested configuration (in the form of XML or JSON) in a local datastore. The hc2vpp project then calls the corresponding VPP APIs in reaction to new configuration stored in that datastore. VPP itself is controlled through a special, text-based CLI (similar to an OS shell). To make VPP easier to use, Honeycomb provides an interface which sits somewhere between a GUI and a CLI: you can request VPP state or statistics via XML, and you will get the response in XML form as well. Honeycomb can be installed in the same way as VPP, through packages available from the Nexus site.
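To make this concrete, here is a minimal Python sketch of how a RESTCONF request for Honeycomb might be assembled. The endpoint URL, port and YANG module paths below are assumptions for illustration only; consult your Honeycomb version's northbound documentation for the actual mount points.

```python
import json

# Hypothetical Honeycomb RESTCONF base URL -- the port and paths are
# illustrative; check your Honeycomb instance's configuration.
HC_BASE = "http://localhost:8183/restconf/config"

def build_interface_request(name, description, enabled=True):
    """Build the URL and JSON body for configuring a VPP interface."""
    url = f"{HC_BASE}/ietf-interfaces:interfaces/interface/{name}"
    body = {
        "interface": [{
            "name": name,
            "description": description,
            "type": "iana-if-type:ethernetCsmacd",
            "enabled": enabled,
        }]
    }
    return url, json.dumps(body)

url, payload = build_interface_request("GigabitEthernet0/8/0", "uplink")
print(url)
print(payload)
# The request itself would be sent as an HTTP PUT with
# Content-Type: application/json and the credentials configured in Honeycomb.
```

A GET on the same URL under the `operational` datastore would return the interface state in the same JSON/XML shape.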


Where can the combination of VPP and Honeycomb be used?

We’ve already showcased several use cases on the Pantheon Technologies YouTube channel.

Another alternative is to use the two as a vCPE (Virtual Customer Premises Equipment), as specified in this draft. One of the projects that wants to implement it is ONAP, with VPP used as the vCPE endpoint for the internet connection from a provider. According to this use case, the vCPE should provide several services. Standalone VPP does not support such services, but they can still be added to the machine where VPP is running. For demonstration, we have chosen DHCP and DNS.


In this case, we have two VMs. VM0 simulates the client side (a DHCP client) which wants an IP address to be assigned to interface enp0s9. VM1 contains VPP and a DHCP server. The DHCP request is broadcast via enp0s9 at VM0 and reaches VPP1, which is set up as a DHCP proxy and forwards the request to the DHCP server, which responds with a DHCP offer. Finally, after all the DHCP configuration steps are done, interface enp0s9 at VM0 is configured with the offered IP address.



In this case, we also have two VMs. VM0 simulates the client side (a DNS client) which needs to resolve a domain name to an IP address. The request is routed via the local port to VPP1, which routes it to the DNS server in VM1. If the resolution is requested for the first time, the request is forwarded to an external DNS server; otherwise, the local DNS server serves the request.
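The local-first resolution logic described above can be sketched in a few lines of Python. This is an illustrative model of the caching behaviour, not the actual DNS server used in the demo; the stub function and the address it returns are hypothetical.

```python
# Minimal sketch of the caching behaviour described above: the local DNS
# server answers from its cache, and only asks an external server on a miss.

class CachingResolver:
    def __init__(self, external_lookup):
        self._cache = {}
        self._external_lookup = external_lookup  # called only on cache miss

    def resolve(self, domain):
        if domain not in self._cache:            # first request for this name
            self._cache[domain] = self._external_lookup(domain)
        return self._cache[domain]

# Stub standing in for the external DNS server.
calls = []
def external(domain):
    calls.append(domain)
    return "192.0.2.10"   # documentation address (RFC 5737)

resolver = CachingResolver(external)
resolver.resolve("example.com")   # goes to the external server
resolver.resolve("example.com")   # served from the local cache
print(len(calls))                 # the external server was asked only once: 1
```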


Jozef Glončák


Give us a requirement, we’ll provide a solution: Cassandra Datastore

As a company with highly skilled people and experience in networking and ODL, Pantheon Technologies provides solutions to any problem or requirement our clients bring up. In this case, we are going to illustrate what we can do by showcasing the workflow of a project.


Identifying a need

The first step was to identify a need: one of the main issues of working with the ODL datastore is that we lose data when the Controller goes down.


Proposing a solution

Once we’d identified the need, we started looking for possible solutions, analyzing the pros and cons of each one, looking for the best answer available.

In this case, the best available solution was to replace the in-memory ODL datastore with a persistent database: the Cassandra Database.


What is Cassandra?

If you need scalability and high availability without compromising performance, the Apache Cassandra database is the right choice for you. It is the perfect platform for mission-critical data, thanks to linear scalability and proven fault tolerance on cloud infrastructure or commodity hardware. Cassandra’s support for replicating across multiple datacenters is best in class. It provides your users with lower latency, and you with peace of mind, once you realize how simple it is to survive a regional outage.


Defining the solution requirements

We need to define the requirements for the proposed solution: what it will do and how, what the user requires, and so on.

For this project, we’ve decided that the user would need to register the service at a specific prefix, pointing at the specific path on the shard which the user is interested in storing.

The service will be listening to any changes under this subshard and, whenever the information is updated, it will take care of transforming the information into the JSON format and storing it in Cassandra.
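As an illustration, a minimal Python sketch of this flow might look as follows. The class, table and path names are hypothetical, and an in-memory object stands in for a real Cassandra driver session:

```python
import json

class CassandraStoreListener:
    """Sketch of the service described above: listens for changes under a
    registered prefix, serializes the changed data to JSON and writes it
    out. `session` stands in for a real Cassandra driver session."""

    def __init__(self, prefix, session):
        self.prefix = prefix
        self.session = session

    def on_data_change(self, path, data):
        if not path.startswith(self.prefix):
            return                       # not under the registered subtree
        payload = json.dumps(data)       # transform the change into JSON
        # With a real driver this would be a prepared INSERT statement.
        self.session.execute(
            "INSERT INTO odl.snapshots (path, payload) VALUES (%s, %s)",
            (path, payload),
        )

# In-memory stand-in for a Cassandra session, for demonstration only.
class FakeSession:
    def __init__(self):
        self.rows = []
    def execute(self, stmt, params):
        self.rows.append(params)

session = FakeSession()
listener = CassandraStoreListener("/network-topology", session)
listener.on_data_change("/network-topology/topology/flow:1", {"node-count": 3})
listener.on_data_change("/inventory/nodes", {"ignored": True})
print(len(session.rows))   # only the change under the registered prefix: 1
```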




Implementing the solution + testing

We’ve defined the requirements and have selected the solution. We’ve identified the steps required/wanted to achieve the results expected. Based upon them, we’ve created the tasks required and have implemented them.

Finally, we tested the result. Some of the anticipated results can be seen in the table below.

Cassandra values

  • Rate: writes-per-second rate.
  • Duration: request duration in milliseconds.
  • Count: number of simulated changes applied.

* The benchmark, Karaf and Cassandra were all running in the same virtual machine, with 8 GB of RAM and 4 dedicated processors.


Use cases

We’ve identified one use case for this project, which was to have a persistent datastore. But the list of possible benefits does not end there.

Given the case that we were storing OpenFlow statistics, we could benefit from that information by using Spark to apply real-time data analytics & visualization to it, allowing us to react and improve our network by, for example, banning or redirecting heavy traffic.

Once we have the information, all we need to do is pick the fruit.


For more information, please feel free to contact us via sales@pantheon.tech

Claudio David Gasparini


Sponsoring the SDN NFV World Congress Confirmed

In mid-October, the SDN NFV World Congress will dominate Europe’s IT landscape. Taking place in The Hague, Netherlands, the event is Europe’s largest dedicated forum addressing the growing markets of software-defined networking (SDN) and network functions virtualization (NFV).


Naturally, this is the type of event we at Pantheon Technologies gravitate towards sponsoring. Long story short, we’re one of the partners. There were already a couple of interesting names on board (Open Networking Foundation, Intel, Telefonica, BT, Nokia, Orange…), so how could we be the ones to miss out?

If you’d like to hear about technologies such as OpenDaylight, FD.io, OPNFV and many more – and learn about the magic we can work with them – we’ll be looking forward to talking to you live! Also, if you just want to get to know us, or simply have a chat, feel free to drop by!

Martin Firak



Pantheon partners up with TechXLR8 Asia

We’ve already started establishing a tradition of Pantheon Technologies partnering with the best tech events around the globe. To keep up with it, we’ll be sponsoring the Network Virtualization & SDN Asia conference, which will be taking place this fall in Singapore as a part of TechXLR8 Asia. On board with partners such as Juniper Networks, Fujitsu and VMware, we’ll be joining as a silver sponsor.


What does this mean in practice? Our colleagues will be able to showcase the Pantheon skills and know-how both as speakers and in the exhibition area.

As TechXLR8 London recently proved, our portfolio is quite unique. The topics revolving around ODL, SysRepo, FD.io, Honeycomb and Vector Packet Processing struck a chord. Not only did we meet lots of interesting people from telco, SDN and content delivery companies, but our business card supply wasn’t able to cover the demand!

Is there anything specific you’d like to hear us talk about?

See you in Singapore on October 3-4!

Martin Firak


OpenDaylight Developer Design Forum 2017: Nitrogen

On a regular basis, OpenDaylight (ODL) developers meet in order to discuss their ideas as well as plans for upcoming releases. Pantheon Technologies’ Robert Varga and Vratko Polak have joined this year’s gathering. Vratko’s account of the event follows.


Brief introduction to ODL

OpenDaylight is an open source project aimed at supporting Software Defined Networking, mainly through a Java application (also called ODL). It’s capable of communicating with network elements via various protocols (southbound) while accepting requests from humans and other programs (northbound), again, via various protocols (although RESTCONF is currently the main one).

ODL as a project is hosted by the Linux Foundation (LF), but has its own governance. ODL itself consists of (sub)projects, each with its own Git repository, committers and Project Lead. The Technical Steering Committee (TSC) allows creation of new projects, archival of old projects, and provides guidance on inter-project matters. Most projects focus on providing code for the Java application, so most of their code is in Java, together with Maven definitions used to build artifacts. These projects depend on each other; ODLParent is the most “upstream” of them. Leaf projects are those on which no other ODL Java project depends, not counting Integration/Distribution, which is a project aggregating all artifacts of a particular release into a file archive containing the ODL installation.

Integration/Test then runs system tests (CSIT stands for Continuous System Integration Testing) against this archive. Both building and testing are done in Jenkins; Releng/Builder is the project responsible for configuring those Jenkins jobs (and other minutiae of the infrastructure). In between releases, ODL projects build snapshot artifacts that are stored in a Nexus server, so an artifact version does not identify unique code, and there are possible race conditions when one job uploads new artifacts while another job downloads them. To avoid these downsides, Releng/Autorelease is a project which downloads all the code, bumps it to a non-snapshot version, builds that, and uploads it to a staging repository, thus creating a release candidate. The Integration/ and Releng/ projects are examples of support projects.

ODL releases are named after elements of the periodic table. This Forum took place just after the Carbon release, and its goal was to bring developers together in order to speed up the discussion and planning of the Nitrogen release. One of the few things every project has to agree on is the choice of the Java container. From Beryllium up to Carbon, the container of choice was Karaf, in versions from the 3.0.x series. Karaf is a Java container based on OSGi. The main concept in Karaf is a feature, which can contain OSGi bundles, config files and other Karaf features. ODL seems to be using Karaf features in a slightly different way from what the Karaf developers intended, therefore the Carbon initiative to upgrade to Karaf 4 failed. Previous ODL releases tended to come in roughly 8-month cycles, but ODL is now part of a larger ecosystem of networking-focused projects, so the TSC decided to change to a 6-month cycle. To fit into the correct slot, Nitrogen is scheduled to be released only 4 months after Carbon, with the upgrade to Karaf 4 as its main goal.

The Developer Design Forum (DDF) for Nitrogen took place in the Hotel Marriott, Santa Clara, California. The official program was two days long, opening on May 31 and concluding on June 1, 2017. DDF gatherings usually consist of scheduled “conference” sessions, accompanied by parallel “unconference” sessions created on the spot. Compared to previous DDFs, there were fewer participants than usual (roughly 50, compared to 150 in the past), leading to only one meeting room being used for conference sessions, leaving the other available for unconferences.

A list of sessions that I attended follows, together with short descriptions. Please note that the descriptions (and session names) are very loose paraphrases of what was actually discussed, based rather on my personal impressions than the official program.


Karaf 4 planning conference session

After reiterating facts about Nitrogen being a “short” release focused on Karaf 4 transition, a rough timeline was presented. It was stressed that active participation of all projects is required. Projects too slow to respond will be dropped from the release mercilessly.


Not many technical details were discussed at this point, aside from notifying projects that there will be a time period where usual build and test jobs will not be running (at least not for every project) as incompatible changes will require time for rebuilds, to be performed in order throughout the project dependency graph.


Emergency leaf project removal plan unconference session

Around half of the current projects are in a dormant state, no longer being actively developed, usually with only one person performing critical maintenance in their spare time. It is expected that multiple projects in this state will be unable to perform their Karaf 4 migration duties in time; therefore, many Carbon projects are not going to make it into the official Nitrogen release. Yet there is a backup plan in place, at least for leaf projects: they could release their artifacts in a standalone release. That means their artifacts will not be built within the usual Autorelease job. Releng/Builder can create a job template for that kind of release, so such a release won’t take a project much work to perform. Integration/Test would need more changes to allow CSIT for such projects, but we do not envision many projects asking for that.


ODLParent standalone release unconference session

It is a long-standing plan to “decentralize” the ODL release process, so that it depends less on Releng/Autorelease forcing everyone to release at the same time. ODLParent will be the first project to do separate releases (and still end up in Integration/Distribution builds). This needs a new job template, basically the same one as for the removed leafs. Version bumping in downstream will be somewhat painful at first, but the Autorelease project already has all the scripts and rights needed, and an automated job can be created later.


Karaf 4 specific changes unconference session

In Carbon, it was discovered that the two main ways to install features (the featuresBoot configuration line and the feature:install runtime command) use different code paths in Karaf 4, and therefore supporting both of them might not be possible. If the Linux Foundation pays a Karaf developer, it might become possible, but we cannot count on that within the Nitrogen cycle. The first Karaf 4 ready ODLParent release will drop support for Karaf 3, Integration/Distribution will stop building the Karaf 3 distribution, and all CSIT testing will be switched to Karaf 4. That means we do not need to support a transition period with both versions being built and tested at the same time. If we decide to only support feature:install, changes to the Releng/Builder scripts (for CSIT) will be needed.


Releng/Builder needed changes unconference session

This was a technical session, hashing out details of how items from the two previous sessions will be implemented. Few general enhancements were also discussed briefly, however, with no plans of implementing them in the Nitrogen cycle.


Jira instead of Bugzilla conference session

There is a long-standing plan of migrating from Bugzilla to Jira. We’ve discussed several technical reasons why we really need that, as well as a few risks involved. The general consensus is that we want Jira, but it takes some work and we need a person to take the responsibility and make it happen. Not likely within Nitrogen.


ODLParent planning conference session

A technical explanation of what went wrong with Karaf 4 in Carbon. We have a general plan to finally fix it, consisting of four approaches we intend to try, as well as explicit steps for how ODLParent standalone releases and Karaf 4 support will be done, with milestones and deadlines for ODLParent, Java projects, Integration/Distribution and Integration/Test. There will be at least one period where the usual Jenkins jobs will not work, perhaps more if multiple ODLParent releases are needed. Karaf 3 support will be dropped as soon as possible, so that projects are motivated to help their upstream with the migration.


Integration/Test planning unconference session

Few ideas were mentioned, but they were postponed in general, as Karaf 4 migration will consume most of the time. The old plan of migrating ODL installation logic from Releng/Builder bash scripts to Robot Framework suites is still good, but demanding. General Robot code maintenance will remain a slow gradual process. Having a small set of reliable “sanity” tests is still desired. We have a stub already running; all we need is to add more suites which are stable and quick enough. Test result availability and comprehensibility is still a major issue. The current plan is to export the test results to a database, and have a dashboard to render results in a user-friendly way. We have new interns to work on both steps.


MD-SAL usage conference session

A highly technical session where our colleague from Pantheon Technologies, Robert Varga, was talking about the ways MD-SAL (Model Driven Service Abstraction Layer) can be interacted with. Each has its pros and cons. Single listener subscribed to a set of subtrees seems to be the approach avoiding the most of pitfalls, but the cluster implementation is not ready yet.
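The idea can be sketched generically (in Python here, rather than the actual Java MD-SAL API): a single listener registered for a set of subtrees receives all matching changes through one callback, so it observes a consistent ordering of events across those subtrees. The class and path names below are illustrative.

```python
# Generic sketch of the "single listener, set of subtrees" pattern:
# one callback receives every change for all registered subtrees, so
# ordering across subtrees is seen by a single observer.

class DataTreeService:
    def __init__(self):
        self._registrations = []   # (subtrees, listener) pairs

    def register_listener(self, subtrees, listener):
        self._registrations.append((tuple(subtrees), listener))

    def publish(self, path, data):
        for subtrees, listener in self._registrations:
            if any(path.startswith(s) for s in subtrees):
                listener([(path, data)])   # deliver as one change batch

events = []
service = DataTreeService()
# One listener covers both subtrees, so it observes a consistent order.
service.register_listener(["/topology", "/inventory"], events.extend)

service.publish("/topology/node/1", {"up": True})
service.publish("/inventory/node/1", {"up": True})
service.publish("/unrelated", {})
print(len(events))   # 2 changes delivered, in publication order
```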


Infrastructure and CSIT, retrospective and improvements conference session

The changes to Integration/Test and Releng/Builder done in Carbon. Current gaps and how we plan to bridge them, rehashing some ideas from the unconference earlier.


Upgrade-ability conference session

Initially, we will be satisfied with reliable offline upgrades. We know that there are significant API changes between releases, and MD-SAL lacks a service which would tell the user that ODL has finished booting up. ODL has built-in persistence, but some of it is cleared on startup and, perhaps, also corrupted on shutdown. Nevertheless, companies that create ODL-based solutions usually have a way to transfer data from an earlier to a later version of ODL, so it should be possible to create a basic mechanism in ODL itself. The Daexim project provides a basic set of tools, but it is not equipped to handle data structure changes caused by API changes in each project. The ODL core can help by sticking to the current schema.


Service recovery mechanisms conference session

As the ‘uninstall’ feature does not really work correctly in ODL, current recovery options are limited to restarting the Java Virtual Machine. However, some services present in ODL support a softer restart on demand. A simple model was presented to abstract services and some actions on them, which would allow a client application to query service state and cause a restart without knowing details of a particular service implementation.
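A minimal sketch of such an abstraction follows, with illustrative names rather than the actual model that was presented: a client can query a service's state and trigger a restart without knowing anything about the implementation.

```python
# Sketch of the service-recovery abstraction discussed above. The states
# and class names are illustrative, not the actual ODL model.

from enum import Enum

class ServiceState(Enum):
    RUNNING = "running"
    FAILED = "failed"

class RecoverableService:
    def __init__(self, name):
        self.name = name
        self.state = ServiceState.RUNNING

    def fail(self):
        self.state = ServiceState.FAILED

    def restart(self):
        # A "soft" restart on demand, instead of restarting the whole JVM.
        self.state = ServiceState.RUNNING

svc = RecoverableService("openflowplugin")
svc.fail()
if svc.state is ServiceState.FAILED:   # client queries abstract state...
    svc.restart()                      # ...and triggers recovery generically
print(svc.state.value)                 # running
```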


Unit testing async code conference session

One of the criteria for ODL code quality is test coverage. Instead of testing each class as a unit, higher-level “component” tests are the more common option. They still rely upon JUnit executed during a Maven build, but they test a construct consisting of several classes wired together. This is quite positive, as “real” unit tests would frequently need more complicated assertions, and it would still not be clear whether the composite behaves correctly (while such unit tests would take significantly longer to develop). During Carbon development, significant progress was achieved in the wiring part of component tests, yet there is still one area that needs improvement: most ODL code is asynchronous, which means the component consists of several Java threads running concurrently.

One issue is that JUnit requires an assertion to be executed in the main thread to take effect. Another issue is that many asynchronous components lack visible intermediate state changes which the main thread could check. Most current tests just sleep for a fixed time before launching the final assert. However, everybody knows that a test which relies on sleep is a bad test. The ideal solution would be for each class within a component to support dependency injection of asynchronous building blocks, such as executors and listeners. That way, the component test can inject specialized building blocks with all the hooks the test needs. Failing that, the cheapest solution is to use Awaitility, which basically spins an assert (one that does not change state) until it passes or a predefined time runs out. That is better than sleep in that it can pass more quickly.
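The pattern Awaitility implements can be sketched language-agnostically; here is a minimal Python version of "spin an assert until it passes or a timeout expires", with a toy worker thread standing in for the asynchronous component under test:

```python
# The Awaitility pattern in miniature: repeatedly evaluate a condition
# until it holds or a timeout expires. Unlike a fixed sleep, this returns
# as soon as the condition passes.

import threading
import time

def await_until(condition, timeout=5.0, poll_interval=0.01):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    raise AssertionError("condition not met within timeout")

# Asynchronous component under test: a worker thread sets a flag "later".
result = {}
def worker():
    time.sleep(0.05)           # simulates asynchronous work
    result["done"] = True

threading.Thread(target=worker).start()
await_until(lambda: result.get("done"))   # passes well before the 5 s cap
print(result["done"])                      # True
```

Note that the polled condition must not change state, exactly as the session pointed out; the helper may evaluate it many times.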


Closing remarks conference session

The closing session mostly consisted of discussing why we were joined by far fewer attendees than usual. What can be done? One possibility is to merge the Developer Design Forum with some other LF event; however, people argued that this would take away focus from ODL planning. Another option is to ask member organizations to provide the venue, so that a smaller event like this could be hosted without hotel-high venue costs.

Vratko Polák


TechXLR8, London

In mid-June, the TechXLR8 multi-genre tech festival took place in London. Although part of London Tech Week 2017, it comprised a further eight ‘smaller’ events: 5G World, IoT World Europe, Cloud & DevOps World, Apps World Evolution, VR & AR World, AI & Machine Learning World, Connected Cars & Autonomous Vehicles Europe and Project Kairos.


Well, ‘smaller’ events… We are talking about a happening with more than 15 000 participants from 8 000 companies catered to by more than eight hundred tech guru speakers. Thus, these were not really what you would call small family gatherings…

Since it was, from a global perspective, one of the key industry meetings, Pantheon Technologies could not have missed it. We participated in TechXLR8’s Cloud & DevOps World section, where we showcased our SDN, ODL and networking skills and know-how. We saw a lot of great things and acquired interesting contacts with international companies active in the telco, content delivery and SDN segments. Products from our portfolio, such as SysRepo, ODL, HoneyComb, VPP and FD.io, turned out to be really great topics for discussion.

Which keywords did the participants respond to best? Linux Foundation, OpenStack, Docker, Kubernetes, BigData. The demand for Pantheon’s business cards was so high that it caught us by surprise. We even had to ration them on the last day, such was the appetite for Pantheon!


Juraj Veverka

GeeCon 2017, Krakow

Every year, Krakow welcomes some of the biggest industry names to talk about Java and everything related. This time, we couldn’t miss it.


May 16th

The proverbial long and winding road does exist. It sits between Žilina in northern Slovakia and the Polish Krakow. After a couple of hours of tiresome driving, we safely arrived in the city. It was a lonesome journey, with only radio Pogoda keeping us company by talking gibberish and playing traditional Polish songs (also in gibberish). The city of Wypadki is surely a magical place. A place where trucks have voting rights and bikers outnumber pigeons 3 to 1. Unfortunately, there was no time to explore further. We checked in with the cutest receptionist available and prepared a schedule of talks to visit.


May 17th

GeeCon took place in a well-equipped multiplex near the city centre. As it turned out, the venue was not built for this type of event. The corridors’ bottlenecks filled with attendees blocking the passage to the talk rooms, and you could spend whole breaks standing in line in front of a bathroom.

However, the 2017 GeeCon brought out the big guns right at the beginning. David Moore from Sabre showed us the true meaning of “experience.” Although his talk had a rather bland title, “Platform and Product Evolution at Sabre,” he touched on a broad spectrum of topics – from organizational structures and their need to reflect the software architecture, to his hatred of “layered-cake” architecture designs.

Next on the schedule were some sub-par talks about Java 9 in general, mixed with some never-ending Docker hype, CUDA computing, and introductory profiling. And then we got the juicy stuff. Milen Dyankov from Liferay was not afraid to speak openly about the state and purpose of Jigsaw, the need for the OSGi, and where it all fits together. Great talk for an audience of all levels of familiarity with modular concepts in Java. And of all genders, of course.

We were really pumped up for Monica Beckwith’s talk boldly called “Java Performance Engineers’ Survival guide.” The abstract was attractive and her CV was, so to put it, quite impressive: JavaOne rock star, previously working in AMD as performance engineer, then Sun, later at Oracle working on GC… Suffice to say, the expectations were really high. However, this was probably the biggest disappointment of the entire event.

We ended the day with a dry sauna back at the hotel and went to sleep.


May 18th

After such an exhausting first day, we started with a well-prepared soft-skills talk promising to improve our client presentations, only to continue with the trend of microservices and reactive programming. Right before lunch, Jarosław Pałka showed us the magic of bytecode. It stood up to the high anticipations and made us want to –javaagent something.

Avast people demonstrated how to utilize Docker in production and Marcin Grzejszczak explained the idea behind consumer-driven contracts of APIs. This certainly got our attention and we will consider it for future projects.

After Steve Poole’s light talk about Java vulnerabilities, we headed back to the hotel to get ready for the biggest IT party of the year. A large club located inside an old fort hosted geeks the entire night and they seriously did show their mad dancing skills, as you can see in the photo.


May 19th

The morning after the party, waking up was a bit more painful. We ate the breakfast quickly. Another pretty receptionist did the checkout.

And back to the conference… Even though the party had been hard, the audience listened carefully to the first presentation, about interrupted exceptions. We decided to fork ourselves and attend different presentations: one went to the roots of the JVM – Java native runtime – and the other to another hype, Akka (a full auditorium with no spare room left). Later on, we continued with some general JavaScript and JPA lectures. We joined up again at the presentation called “Distributed systems explained (with NodeJS),” given by Bruno Bossola, also known as the “network is a bitch” guy. Our long-standing question of how to do testing properly was answered by Anton Arhipov – TestContainers.

At the very end of the conference, there was a great presentation about code generation and the reasons why we should generate configurations instead of code. Here, we felt as if the future was already here: Rod Johnson presented Atomist, a bot for Slack.

Big thanks goes to Pantheon Technologies and to the organizers of GeeCon for this amazing experience.

Martin Dindoffer

Milan Frátrik

Sponsoring Tokyo’s Automotive Linux Summit

Pantheon Technologies is proud to announce that we’ve become a Silver Sponsor of the Automotive Linux Summit, which will be taking place at Tokyo Conference Center Ariake from May 31 till June 2, 2017. In practice, this means more visibility for our brand plus a lot of networking potential. Which equals great potential for meeting new customers.

The Automotive Linux Summit is a one-of-a-kind event where automotive innovators meet with Linux ninjas, research & development managers and business executives. The result? Connecting developers with their peers and vendors, driving innovation towards the automotive future.

With Pantheon Technologies’ background, skills and global plans, this is a place where we naturally belong.

And we’re not going to miss the chance.

Martin Firák

YANG Catalog: Making models work together

With YANG establishing itself as the standard modeling language of choice, the industry hit a speed bump. Although the entire industry develops YANG models, we lack a way to ensure they will work together in order for operators to automate coherent services.

The problem was, as the IETF blog elaborates, tackled at Internet Engineering Task Force’s IETF 98 Hackathon by a group of ten-ish enthusiasts, including Pantheon Technologies’ Miroslav Kováč, via integrating tools around a YANG catalog.

The idea behind the catalog is to become a reference for all available YANG modules, serving both the YANG developers as well as operators. The catalog should also provide metadata on YANG models, offering information on module implementation, availability of open-source code and possibly much more.

Compared to a Github repository, added value of the YANG catalog resides in the toolchain and the additional metadata, more about which you’ll find in the IETF article.

Martin Firák

PyCon 2017 Slovakia

The PyConSK 2017 conference took place at the Slovak University of Technology in Bratislava, at the Faculty of Informatics and Information Technologies, over the weekend of March 10th to 12th.

We believe in women in IT

The biggest Python conference in Slovakia, PyCon 2017, is being held during the weekend of March 10 – 12, 2017. We have decided to sponsor a one-day workshop called Django Girls. The project believes in women’s potential in IT and, since the co-owner of Pantheon Technologies, Janka Švorcová, is a woman, we considered our support a matter of course.

The workshop is focused on website development and, thanks to sponsor contributions, is completely free of charge. A grant programme covering the travel and accommodation costs of the participants was also set up. The application was open to all girls who speak Slovak or English and own a computer. The participants do not need any previous skills or knowledge in this field, since the programming curriculum covers even the very basics.
The whole project and workshop take place as a part of Django Girls, an international initiative and NGO aiming at making IT more attractive to women. Django Girls’ learning tools are being used by volunteers to teach programming skills all around the world.


Gabriel Žifčák

Marketing officer at Pantheon





Your Time is Now

  • 12,000 registered visitors on site
  • 14,000 connected devices
  • 25,000 registered online visitors
  • 29,374,360 words presented in sessions
  • 67,000 meals packed for Rise Against Hunger (17,000 more than planned!)
  • 2,713 customer and partner meetings in the Meeting Village
  • 31,000,000 people have seen the CLEUR (Cisco Live Europe) content

This year’s Cisco Live (CL) Berlin 2017 rocked the Messe Berlin. From a Cisco Data Center standpoint, ACI, Tetration and ASAP continued to grab the headlines. In particular, Cisco ACI has established itself as the dominant SDN technology, with over 2,700 customers and a growing ecosystem of 65 partners in just two and a half years.



Future-Proof your Business – a fantastic and catchy opening keynote was delivered by Cisco Vice President of Growth Initiatives (and Chief of Staff to CEO Chuck Robbins) Ruba Borno, who shared Cisco’s vision that the only future-proof solution for digital transformation is the next-generation secure network.

Cisco’s intelligence unit consists of more than 250 leading security experts, data scientists and hackers. These are the guys who are hacking the hackers, an organization that has the back of every Cisco partner’s customer. Cisco’s product teams then take all this intelligence, add automation and machine learning, and provide Cisco’s partners and customers with an integrated security architecture. All of this in order to protect their partners, their employees, their assets and their intellectual property.

By the way, did you know that Cisco has the best breach detection time in the market? They are able to detect over 90% of security incidents within three minutes. And since Cisco put the web everywhere, it’s now time to abandon the legacy point-product security behavior and adopt an integrated, dynamic, self-learning, holistic approach – so that you not only have less complexity, but also feel more secure.

With Cisco, we know that we have the best networking hardware with the most advanced software. Yet you shouldn’t be satisfied with the best software of today. What you should go for is the best software for tomorrow. And it not only needs to be advanced, it also needs to be advanceable.

Tetration, according to Ruba Borno, is one of the coolest platforms Cisco has. It understands your entire data center in the context of the application environment. It can automatically map your application landscape, automatically map your dependencies across applications, and determine which security policy to apply. And it can also enforce that policy, doing so at scale.

The closing guest keynote was delivered by Virgin Galactic’s Commercial Director Stephen Attenborough, who, at one point, was also the company’s first employee. He was the one who had established Virgin Galactic’s commercial foundations including a community of 700 future astronauts, and is now also responsible for work streams investigating additional applications and markets for space vehicles. This also includes the now very actively pursued small satellite launch program.


ACI Solutions Partners

CL Berlin’s platinum sponsor was Citrix, who had a significant presence in the partner area this year. At booth P2, you could engage their experts on how to securely deliver apps and data over any network with Citrix XenDesktop, XenApp and NetScaler on Cisco UCS/HyperFlex virtualization infrastructure, in order to increase your business’s productivity, agility and differentiation.


DevNet Zone

DevNet is Cisco’s new developer program, which provides partners with tools to produce Cisco-enabled applications. These can then be sold to Cisco’s customers, and partners can use the company’s APIs to enhance or manage existing Cisco networks. Or, to put it more simply, DevNet is where applications meet the infrastructure. DevNet is often considered an old tool, but did you know it’s only been around for three years, having launched in December 2013?

DevNet teaches researchers and engineers how to use new tools and resources, as well as helping them in their daily work and careers to make an impact in their companies. DevNet is about helping people innovate. And what specific role did DevNet play at this year’s Cisco Live? According to Cisco’s Senior Director Rick Tywoniak, one of DevNet’s contributions to the CL Berlin week was working with 13 innovation centers around the world, helping the CL visitors in cooperation with the engineers working there, and highlighting some of the cool apps that were developed using Cisco’s APIs. One such example can be found in the retail space: MishiPay is a company whose software allows integrating self-checkouts via wi-fi. In case you decide to leave the store without checking out all your items, the MishiPay application learns of your misbehavior and sets off the alarm!


Data Center Innovation: Speaker one

Liz Centoni (Senior Vice President & General Manager, Computing Systems Product Group at Cisco)

The world of Data Centers and the Cloud happens to be very dynamic. The number of network changes we see in modern data centers is probably bigger than in any other segment of the IT space, and the real challenge is managing all of them. The user community is evolving – this much is known and traditional. Applications can be anywhere from bare metal to virtualization to containers, and can sit anywhere – in multiple clouds as well as on premises. To address all of this, let’s have a look at an overall holistic approach based on four elements (four design principles) that build an integrated architecture: Analyze – Simplify – Automate – Protect.



What Liz Centoni sees as a well-working, customer-driven product can be described by three keywords: one architecture – standardized operations – simplicity. An example of this is the Cisco integrated system for Microsoft Azure Stack. Its advantages comprise unified infrastructure management, the Cisco generation 4 VIC card, an optimized fabric design, and a proven policy-driven architecture.


Data Center Innovation: Speaker two

Ishmael Limkakeng (Vice President, Product Marketing at Cisco)

“Combination of Cisco’s portfolio in a Data Center right now is the best it has ever been. We use Tetration to understand how an application works. We use ACI to automate how an application gets installed. We use Data Center to deploy that, wherever the right workload for the right environment. And then we come back again to Analytics to understand what we did, to make sure, we did, what we intended. That’s how we see this all coming together.”



Demos and Theater Presentations at World of Solutions

At this year’s World of Solutions, SDN/ACI, Tetration Analytics, UCS and Cloud took center stage in the Data Center category. There were multiple demos showcasing ACI and Tetration innovations.

Generally speaking, as Cisco’s recent ground-breaking innovation, Tetration was a hot topic during the whole CL. At the Tetration demo area, customers learned the details about end-to-end application visibility and automated white-list policies for granular segmentation. It was a unique occasion to meet Cisco’s experts and discuss recent innovations with them, such as automatic policy enforcement, Tetration Apps, and flexible form-factor-based deployment options. Following the Tetration launch on February 1, 2017, its innovations have attracted endorsements from customers, partners and media. Check out ecosystem partner quotes here.

Social Networking

Last but not least, as a Cisco Live attendee, you benefited from the opportunity to interact with your peers, Cisco staff and partner technical experts in both structured and informal settings. And this is what counts the most!

Andrej Vanko, MSc.

IT Project Manager at Pantheon Technologies, s.r.o.





YouTube channel: Cisco Live Europe

OPNFV Fast Data Stack on FOSDEM 2017

On February 5th we presented OPNFV Fast Data Stack at the FOSDEM conference, hosted every year at the Université libre de Bruxelles in Brussels. It was a great gathering of software developers, who presented their work in the form of 30-minute presentations. People came not just from Europe, but also from overseas and other parts of the world. Lectures took place in more than 30 rooms, and more than 600 speakers presented their projects.

There were many interesting lectures, not only in the networking field, but also in robotics, neural networks, microprocessors, algorithms and data modeling. Some presenters were members of large teams, some were presenting their own projects. The scope was very wide, including almost every programming language one has ever heard of. Visitors could see everything from startups up to trending projects like Kubernetes, OpenDaylight or OpenStack. Every lecture was recorded, and the videos can be found on the FOSDEM website. Our presentation was scheduled in the NFV (Network Function Virtualization) section.

About virtualization and networking

Virtualization has become very popular over the last few years. Virtual machines lower the need for physical resources and make data centers more flexible and accessible. Today’s servers are really powerful and can therefore host many VMs. This cast a new light on networking, which in response got virtualized too, in the form of virtual forwarders – processes capable of forwarding traffic within a hosting machine. OVS and VPP are popular technologies these days, and both support DPDK, a very powerful set of data plane libraries and network interface controller drivers for fast packet processing. You may think of VPP and OVS as virtual forwarders between the physical NICs and the virtual machines.

What is OPNFV Fast Data Stack

OPNFV FDS makes it easier to maintain complicated data center environments. It’s a complex multilayer suite that includes software components designed for creating virtual machines and forwarding traffic. All the components are deployed with the Apex installer on a given set of host machines, which need to meet demanding performance requirements and have basic connectivity between them. The result is a complex stack that provides a rich user interface to network operators, exposing an abstract set of tools for managing the lifecycle of networks, virtual machines and policies across the given nodes.

Under the hood

Let’s have a look at the key components of the OPNFV FDS suite. As mentioned above, multiple components operate at different layers of the stack, and each participates in transforming the defined abstraction into an actual configuration for the underlying infrastructure. On top of the stack resides OpenStack, software famous for its scalability, loads of plugins and gigantic community. FDS uses OpenStack for managing VMs and for defining the forwarding topology and policy rules. The forwarding input is characterized by elements like network, subnet, router or port; the policy input by security groups and security group rules. One layer below is the OpenDaylight controller, also popular for its community and plugins. In the OPNFV FDS setup it is used as a controller unit that consumes OpenStack’s abstractions and applies them to the underlying infrastructure by using OpenDaylight’s Group Based Policy plugin. When the plugin detects that a policy can be resolved for at least two endpoints, configuration is generated and flushed to the forwarders.

The OPNFV FDS setup presented at FOSDEM uses VPP in the hypervisor to forward packets between the physical NICs and the VMs. VPP is a virtual switching/routing technology operating at a very impressive rate; it is impressively fast thanks to the DPDK library and CPU cache optimisation techniques. The beauty of vector packet processing is that instead of handling packets one by one, VPP performs one micro-operation after another on a group of packets, which performs better under heavy load and results in increased throughput. VPP exposes C APIs and a CLI for configuration. It’s not yet possible to use the C API remotely, because VPP does not run any management client. Therefore, Honeycomb is used in the setup to provide a NETCONF interface for the VPP forwarder, and OpenDaylight uses NETCONF to talk to the HC Agent.
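As a rough illustration of the vector idea, here is a conceptual sketch (in Java, purely illustrative – VPP itself is written in C and its graph nodes are far more sophisticated) where each graph node applies its micro-operation to a whole batch of packets before the next node runs:

```java
import java.util.Arrays;
import java.util.List;

// One node of the forwarding graph: a single micro-operation over a whole vector.
interface GraphNode {
    void processVector(List<byte[]> packets);
}

// Example micro-operation; the packet layout is made up for illustration.
final class DecrementTtl implements GraphNode {
    @Override
    public void processVector(List<byte[]> packets) {
        for (byte[] pkt : packets) {
            pkt[0]--; // pretend byte 0 of our toy packet is the TTL
        }
    }
}

public final class VectorPipeline {
    // Each node finishes the entire vector before the next node starts, so the
    // node's instructions stay hot in the CPU instruction cache under load.
    public static void run(List<? extends GraphNode> graph, List<byte[]> vector) {
        for (GraphNode node : graph) {
            node.processVector(vector);
        }
    }

    public static void main(String[] args) {
        List<byte[]> vector = Arrays.asList(new byte[]{64}, new byte[]{64});
        run(Arrays.asList(new DecrementTtl()), vector);
        System.out.println(vector.get(0)[0]); // prints 63
    }
}
```

Handling packets one by one would instead run the whole graph per packet, evicting each node’s code from the cache between packets – that is the difference the batching approach exploits.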

Supported scenarios

The FDS demo presented at FOSDEM showed the L2 scenario, meaning that L2 traffic is passed via VXLAN tunnels between nodes. Traffic is routed on a centralized node, and routing is not performed by VPP itself but by the OpenStack qrouter service, which is connected into every L2 domain in VPP via tap ports. NAT and routing towards external networks are also done by the qrouter.

Moving forward, the FDS project is also looking at L3 scenarios, where routing could be either distributed or centralized and would be done by the VPP process together with NAT. All these efforts need attention at every layer of the stack, including the Apex installer.


We were pleased to present the FDS project at the FOSDEM conference. We believe that OPNFV FDS is a key component in network virtualization, with a very bright future. For more information about the setup and the project itself, please visit https://wiki.opnfv.org/display/fds.

Tomáš Čechvala

Software Engineer


Michal Čmarada

Software Engineer

OpenDaylight RPCs or What Could Possibly Go Wrong With Adding This One Cool Feature

OpenDaylight uses YANG as its Interface Definition Language. This is an architecture decision we have made way back in 2013 and it works reasonably well for the most part.

One of YANG concepts used rather heavily is the concept of an RPC. For YANG and its intended use in NETCONF’s client/server model it works perfectly fine, but trouble starts brewing when you borrow concepts and try to make them fit your use case.

OpenDaylight uses YANG RPCs not only to define its northbound model, but also to model interactions between its individual plugins. It does this in an environment which is not a single process, but rather a cluster of nodes, each having a mesh of plugins, some activated, some not.

From the architecture’s view, which looks at things from an elevation of 10,000 feet, the problem of making RPCs work in this sort of environment is quite simple: all you need are registries and request routers. From the implementation perspective, though, things can easily go wrong … implementations have bugs, quirks and limitations which are not immediately apparent. They only surface when you try to push the system closer to its architectural limits.

The Trouble with Names

RFC 6020 defines only the basic RPC concept and assumes there is a single implementation servicing any request for that RPC. This is okay as long as you are targeting singleton actions — like ‘ping IP’, ‘clear system log’ and similar. In a complex system, though, requests are typically associated with a particular resource — like ‘create a flow on this switch’. Since YANG did not give us this tool, we decided to create an OpenDaylight extension to allow an RPC to be bound to a context. This gave rise to two unfortunate names: ‘Global RPCs‘ and ‘Routed RPCs‘, the first being normal RPCs and the second being bound to a context. Plus a third name, ‘RPCs‘, to refer to either one of those concepts. Are you confused yet?

The initial implementation of these concepts was done back in 2013, when there was no clustering in sight, by a team who had spent days upon days discussing the difference. When clustering came into the implementation picture, in 2014, the implementation team attached their own meaning to the word ‘Routed’, and we ended up with an implementation where Routed RPCs are routed between cluster nodes, but the default ones are not. That is the subject matter behind BUG-3128. It did not matter much as long as all cluster-enabled applications used Routed RPCs, but that changed with the emergence of the Cluster Singleton Service and its wide-spread adoption among plugins.

These days we have YANG 1.1, defined in RFC 7950, which has the same underlying concept with much less confusing names. ‘Global RPCs’ are ‘RPCs‘. ‘Routed RPCs’ are ‘actions‘. Since those terms make the conversation about semantics a reasonable affair, this is the last you hear about Global and Routed RPCs from me.
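To make the distinction concrete, here is a small made-up YANG 1.1 model (an illustrative sketch, not taken from any OpenDaylight project) showing both concepts side by side:

```yang
module example-switching {
  yang-version 1.1;
  namespace "urn:example:switching";
  prefix exsw;

  // An RPC: no context, a single implementation services every request.
  rpc clear-system-log;

  list switch {
    key "name";
    leaf name { type string; }

    // An action (RFC 7950): invoked against one particular 'switch' instance.
    action create-flow {
      input {
        leaf match { type string; }
      }
    }
  }
}
```

Invoking clear-system-log needs no further qualification, whereas create-flow is always addressed to a specific entry of the switch list – exactly the context binding the OpenDaylight extension had to retrofit onto YANG 1.0.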

Fun with Concepts, Contexts and Contracts

In order to support both RPCs and actions, OpenDaylight’s MD-SAL infrastructure has to define a concept to identify them both. Since the two are utterly similar in what they do, DOMRpcIdentifier was born. It is used to identify either an action or an RPC. To do that, it is an abstract class with two concrete, private final implementations: DOMRpcIdentifier$Global and DOMRpcIdentifier$Local. Why those names? I do not remember the details, but I could wager a guess about what I was thinking back then. At any rate, the two implementations differ only in their implementation of DOMRpcIdentifier.getContextReference(): DOMRpcIdentifier$Global’s is always empty and DOMRpcIdentifier$Local’s is always non-empty.
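The shape described above can be sketched like this (a simplified stand-in, not the actual OpenDaylight source – the real class works with YANG identifiers rather than plain strings):

```java
import java.util.Optional;

// Sketch of the pattern: one abstract identifier, two private implementations
// that differ only in getContextReference().
public abstract class RpcIdentifier {
    private final String type;

    private RpcIdentifier(String type) {
        this.type = type;
    }

    public static RpcIdentifier global(String type) {
        return new Global(type);
    }

    public static RpcIdentifier local(String type, String contextReference) {
        return new Local(type, contextReference);
    }

    public final String getType() {
        return type;
    }

    /** Empty for an RPC, non-empty for an action bound to a context. */
    public abstract Optional<String> getContextReference();

    private static final class Global extends RpcIdentifier {
        Global(String type) {
            super(type);
        }

        @Override
        public Optional<String> getContextReference() {
            return Optional.empty(); // plain RPC: no context
        }
    }

    private static final class Local extends RpcIdentifier {
        private final String contextReference;

        Local(String type, String contextReference) {
            super(type);
            this.contextReference = contextReference;
        }

        @Override
        public Optional<String> getContextReference() {
            return Optional.of(contextReference); // action: bound to a context
        }
    }
}
```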

This is consistent with how RPCs (without a context reference) and actions (with a context reference) are invoked, and it makes the API involved in RPC/action invocation a clean and simple contract. In the context of registering an RPC or action implementation, things are slightly less straightforward: it is a separate interface, with a rather terse Javadoc. In both cases there is a hint of ‘a conceptual dynamic router’, but not much in terms of details.

Unless you were very curious as to the details of the API contracts involved, after reading the documentation available, with some OpenDaylight tutorials under your belt, you would feel this is a dead-simple matter and just use the interfaces provided. Run a few test cases and everything works just fine. No trouble in sight.

About That Router Thing…

The Simultaneous Release name of OpenDaylight for the release currently in development is Carbon, meaning we have shipped 5 major releases, so this ‘dynamic router’ thing vaguely referenced actually exists somewhere and it does something to fulfill the API contracts imposed on it, otherwise the applications would not be able to work at all. The entry point into the implementation is DOMRpcRouter. Glancing over that, it contains some ugliness, but it gets the general outline of the two sides of the contract done.

Digging a bit deeper into the invocation path, you get to the fork at AbstractDOMRpcRoutingTableEntry.invokeRpc(). The RPC invocation path is rather straightforward, but the invocation path for actions is far from simple. Out of two code paths (actions and RPCs) we suddenly have four, as an action can be invoked without a context reference as if it were an RPC, and there is a brief mention of the remote rpc connector registering action implementations with an empty context reference … wait … WHAT???!!!

Okay, we seem to have two implementations integrated based on implementation details, without that being supported by a single line in the API contract. The connector referenced is actually sal-remoterpc-connector and is something that is meaningful in clusters. To make some sense of this, we have to go back to 2013 again.

A Tale of Three Routers

From the get-go, the MD-SAL architecture was split into two distinct worlds: Binding-Independent (BI, DOM) and Binding-Aware (BA, Binding). This split comes from two competing requirements: type-safety provided by Java for application developers who interact with specific data models, and infrastructure services which are independent of data models. The former is supported by interfaces and classes generated from YANG models and generally feels like any code where you deal with DTOs. The latter is supported by an object model similar to XML DOM, where you deal with hierarchical ‘document’ trees and all you have to go by are QNames. For obvious reasons, most developers interacting with OpenDaylight have never touched the BI world, even though it underpins pretty much every single feature available in the platform.

A very dated picture of how the system is organized can be found here. It is obvious that the two worlds need to seamlessly interoperate — for example, RPCs invoked by one world must be able to be serviced by the other, and the caller should be none the wiser. Since RPCs are the equivalent of a method call, this process needs to be as fast as possible, too. That led to a design where each world has its own Broker and the two brokers are connected. Invocations within one world would be handled by that world’s broker, foregoing any translation. A very old picture of what an inter-world call would look like can be seen in this diagram.

For RPCs this meant that there were two independent routing tables with re-exports being done from each of them. The idea of an RPC router was generalized in the (now long-forgotten) RpcRouter interface. Within a single node, the Binding and DOM routers would be interconnected. For clustered scenarios, a connector would be used to connect the DOM routers across all nodes. So an inter-node BA RPC request from node A to node B would go through: BA-A -> BI-A -> Connector-A -> Connector-B -> BI-B -> BA-B (and back again). Both the BI and connector speak the same language, hence can communicate without data translation.

The design was simple and effective, but has not quite survived the test of time, most notably the transition to dynamic loading of models in the Karaf container. Model loading impacts the data translation services needed to cross the BA/BI barrier, leading to situations where an RPC implementation was available in the BA world, but could not yet be exported to the BI world — leading to RPC routing loops and, in the case of data store services, missing data and deadlocks.

To solve these issues, we have decided to remove the BA/BI split from the implementation and turn the Binding-Aware world into an overlay on top of the Binding-Independent world. This means that all infrastructure services always go through BI, and the Binding RPC Broker was gradually taken behind the barn, there was a muffled sound in 2015, and these days we only have two routers, one hiding behind a connector name.

Blueprint for a New Feature

Probably the most significant pain point identified by people coming to OpenDaylight is that the technology stack is a snowflake, providing few familiar components, with implementation and documentation being borderline hostile to newcomers. One such piece is the Configuration Subsystem (CSS) — driven by invalid YANG and magic XMLs, it is a model-driven service activation, dependency injection and configuration framework built on top of JMX. While it offers the ability to re-wire a running instance in a way which does not break anything half-way through reconfiguration, it is a major pain to get right. It pre-dates MD-SAL (which offers nicer configuration change interactions) and is utterly slow (because the JMX implementation is horrible). It was also designed to safeguard against operator errors, which is quite contrary to what Karaf’s feature service provides — if you hit feature:uninstall, those services are going down without any safeties whatsoever.

To fix this particular sore spot, one of the decisions from the Beryllium design summit was to extend Blueprint with a few capabilities and start the long journey to OpenDaylight without CSS, where internal wiring would be done in Blueprint and user-visible configuration would be stored in MD-SAL configuration data store. The crash-course page is a very easy read.

You will note that there is support for injecting and publishing RPC implementations — which is a nice feature for developers. Rather than having to deal with registries, I can declare a dependency on an RPC service and have Blueprint activate me when it becomes available like this:

<odl:rpc-service id="fooRpcService" interface="org.opendaylight.app.FooRpcService"/>

I can also publish my bean as an implementation, just with a single declaration, like this:

<bean id="fooRpcService" class="org.opendaylight.app.FooRpcServiceImpl">
  <!-- constructor args -->
</bean>
<odl:rpc-implementation ref="fooRpcService"/>

This is beyond neat, this is awesome.

FooRpcService vs. DOMRpcIdentifier

We have already covered how the Binding Aware layer sits on top of the Binding Independent one, but it is not a one-to-one mapping. This comes from the fact that the Binding Independent layer is centered around what makes sense in YANG, whereas the Binding Aware layer is centered around what makes sense in Java, including the various trade-offs and restrictions coming from each. One such difference is that RPCs do not have individual mappings, i.e. we do not generate an interface class for each RPC, but rather a single interface for all RPC definitions in a particular YANG module. Hence for a model like

module foo {
    rpc first { input { ... } output { ... } }
    rpc second { input { ... } output { ... } }
}

we generate a single FooService interface

public interface FooService {
    Future<FirstOutput> first(FirstInput input);
    Future<SecondOutput> second(SecondInput input);
}

The reasoning behind this is that a particular module’s RPCs (in the broad sense, including actions) will always be implemented by a single OpenDaylight plugin and hence it makes sense to bundle them together.

An unfortunate side-effect of this is that in the Binding Aware layer, both RPCs and actions are packaged in the same interface and it is up to the intermediate layers to sort out the ambiguities. This problem is being addressed in Binding V2, where each action has its own interface, but we have to have a solution which works even in this weird setup.

Fix Some, Break Some

Considering these complexities and the gaps in the API contract documentation department, it is not quite surprising that the fix for BUG-3128, while making RPCs work correctly across the cluster, had the unfortunate side-effect of breaking blueprint wiring in a downstream project (OpenFlow Plugin). In order to understand why that happened, we need to explore the interactions between DOMRpcRouter, blueprint and sal-remoterpc-connector.

When blueprint sees an <odl:rpc-service/> declaration, it will wire a dependency on the specified RPC (Binding Aware) interface being available in DOMRpcService (which is a facet of DOMRpcRouter). As soon as it sees a registration, it considers the dependency satisfied and proceeds with the wiring of the component. This is true for LLDP Speaker, too. Note how it declares a dependency on an implementation of PacketProcessingService. Try as you may, you will not find a place where the corresponding <odl:rpc-implementation/> lives. The reason for this is quite simple: this service contains a single action, and an implementation is registered when an OpenFlow switch connects to the OpenDaylight instance. So how is it possible this works?

Well, it does not. At least not the way it is intended to work.

What happens is that Blueprint starts listening for an implementation of PacketProcessingService becoming available with an empty context, just as with any old RPC. Except this is an action, so somebody has to register as a global provider for the action, i.e. as being capable of dynamically invoking it based on its content without being tied to a particular context. That someone is sal-remoterpc-connector, in its current shape and form, which does precisely what is mentioned in that terse comment. It registers itself as a dynamic router for all actions, and when a request comes in, it will try to find a remote node which has registered an implementation for the context specified in the invocation. That means that, unbeknownst to the Blueprint extension, all actions appear to have an implementation — even if there is no component actually providing it — and therefore LLDP Speaker will always activate, just as if that dependency declaration was not there.

The fix to address BUG-3128 performed a simple thing: rather than using blanket registrations, it only propagates registrations observed on other nodes — becoming really a connector rather than a dynamic router. Since no component provides the registration at startup time, blueprint will not see the LLDP Speaker dependency as satisfied, leading to a failure to activate. Unless an OpenFlow switch happens to connect while we are waiting — in that case activation will go through.

So we are at a fork: we either have blueprint ‘working’, or we have RPC routing in the cluster working. To get both to work at the same time, and to actually fix LLDP Speaker to activate when appropriate, we will obviously have to perform some amount of surgery on multiple components.

I will detail what changes are needed to close this little can of worms in my next post, so stay tuned 🙂


Róbert Varga

CTO Pantheon Technologies

ngPoland and beyond

In late November we visited one of the world’s biggest Angular conferences – ngPoland. Just two months before, Angular 2 had been released, so all sessions were more or less about it.

The first session was about Angular CLI. Tracy Lee showed us how to make a simple application and deploy it to Firebase in 30 minutes – all with the help of Angular CLI, a command line tool which helps you build applications faster, since your dev environment is prepared and you can start coding right away.
We have already tried Angular CLI in our project and it’s great. Do you want to watch functionality with live reload, do unit testing with Karma, and end-to-end testing with Protractor? It’s all there, and much more.

Shai Reznik told us the Legend Of ngModules, a pretty funny story with a lot of interesting info on how to write, yes, modules. It seems any skilled developer should know how to structure applications, but it’s good to be reminded of those best practices, especially when it’s your first try with Angular 2 and TypeScript.

There were a few moments we call „It‘s (put year here), so use (put library/pattern/language here)“. Like „It’s 2005, use asynchronous calls“, or „It‘s 2015, use promises, callbacks are baaaad“. Now we have another one: „It’s 2016, use observables“. On this topic, Ben Lesh gave a good talk about the RxJS library, which implements the Observer pattern for composing asynchronous and event-based programs.
We tried RxJS, and it works pretty well. We replaced promises in our AJAX calls and events in our components. It takes some time to get used to, but then it’s pretty straightforward.

There were more good talks at the ngPoland conference, so it’s great that we can all watch the recordings on YouTube.


I would like to finish this article with some advice. If you are about to start a new project and are deciding between Angular 1 – which you have used before and have the knowledge, skills and code snippets for – and Angular 2, use the latter. Angular 2 is just better.

PS: If you accept my advice, be prepared for a lack of documentation. But it’s getting better every day, trust me.


Daniel Malachovský

Technical Leader in Pantheon Technologies

Sysrepo at IETF 96 Hackathon in Berlin

Sysrepo, an open source project developed by several partners including Pantheon Technologies, participated in the IETF 96 Hackathon held in Berlin from July 16th to July 17th, 2016.

The IETF Hackathon is all about promoting the collaborative spirit of open source development and integrating it into IETF standards. The Sysrepo project provides a framework that can be used to bring NETCONF & YANG management to any existing or new Unix/Linux application, which should help spread these IETF standards into the wider open source community.

The hackathon was our first opportunity to introduce the Sysrepo project to an audience experienced with the NETCONF & YANG standards. In front of our poster (see below), we had many constructive discussions with other participants and gained a lot of feedback.

Apart from presenting the project to other participants of the IETF meeting, we spent the weekend hacking on three sub-projects based on Sysrepo:

NETCONF/YANG management of Raspberry Pi

To demonstrate that NETCONF & YANG are applicable in the IoT (Internet of Things) domain as well, and that Sysrepo can also run on systems with limited resources, we prepared a simple [Sysrepo plugin that can control GPIO pins of the Raspberry Pi]. We demonstrated this with a relay switch and a thermal sensor connected to the GPIO of a Pi running Raspbian Linux with Sysrepo and Netopeer2 – we were able to turn the relay on or off via NETCONF, and to retrieve the current temperature read from the sensor, also via NETCONF.


Sysrepo plugin for the ietf-system YANG module

Another part of the team, formed from the hackathon participants, focused on the development of a [Sysrepo plugin that implements the ietf-system YANG module] on a generic Linux host. During the hackathon they managed to write code that allows NETCONF management of the hostname, clock & timezone settings, and that can restart or shut down the device via NETCONF RPCs.
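To give a feel for what such management looks like on the wire, here is a sketch of a NETCONF edit-config request against the standard ietf-system module (RFC 7317), setting the hostname and timezone; the hostname value and timezone shown are just example data, not taken from the hackathon setup.

```xml
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <target><running/></target>
    <config>
      <system xmlns="urn:ietf:params:xml:ns:yang:ietf-system">
        <hostname>demo-host</hostname>
        <clock>
          <timezone-name>Europe/Berlin</timezone-name>
        </clock>
      </system>
    </config>
  </edit-config>
</rpc>
```

On the device side, the Sysrepo plugin is notified of the resulting datastore change and applies it to the underlying system, which is exactly the split the team implemented during the hackathon.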

NETCONF/YANG management of DHCPv6 in ISC Kea

The developers of the ISC Kea DHCP server joined our team with a clear goal: enable NETCONF/YANG management of their DHCP daemon using Sysrepo and Netopeer2. During the hackathon they wrote a [Sysrepo plug-in for ISC Kea] that is able to manage part of Kea's configuration via NETCONF. Their work did not stop when the hackathon ended – they expressed an interest in continuing in this direction in the future, too.

After the hacking ended, each team prepared a short presentation of their achievements. This was streamed online and is accessible on YouTube:

Although the biggest achievement for us was the high interest in the Sysrepo project among the IETF meeting participants and all the feedback we gained from them, we were also selected as a winner in the "Most Importance to IETF" category, as you can read in this blog post.

Rastislav Szabo

Software Engineer in Pantheon Technologies

More information on Sysrepo:

Project page: http://www.sysrepo.org/

GitHub: https://github.com/sysrepo/sysrepo

Mailing lists: http://lists.sysrepo.org/listinfo/


Topology processing framework for OpenDaylight made by Pantheon Technologies

Topology processing framework

Topology Processing Framework is a project in OpenDaylight developed by Pantheon Technologies. The framework provides advanced topology-related functionality for other components and applications. Let's take a closer look at the Topology Processing Framework's functionality in a video presentation made for the SDN & OpenFlow World Congress in Düsseldorf, Germany.


OpenDaylight at OpenSource Weekend

Open Source has a long history in Slovakia, reaching back to the late nineties, when the community was organized around the Slovak Linux Users Group (SK-LUG), which was quite successful with its Linux Weekends, gathering a following of young enthusiasts, mostly college students. Read more