KubeCon & CloudNativeCon 2017, Austin

At the beginning of December 2017, we attended the KubeCon & CloudNativeCon 2017 conference in Austin, Texas. The conference, organized by the Linux Foundation, brought together leading contributors in cloud-native applications and computing, containers, microservices, orchestration and related projects.

KubeCon 2017, Austin

More than four thousand developers, together with other people interested in cloud-native technologies, visited the event in Austin. The growing number of attendees is a testament to the rising importance of Kubernetes and containerized applications for companies of all sizes.

The schedule was full of talks about various CNCF technologies such as Kubernetes, Prometheus, Docker, Envoy, CNI and many others. “Kubernetes is the new Linux,” pointed out Google’s Kelsey Hightower in his keynote, predicting a bright future for these technologies.

KubeCon 2017, Austin

In addition to the talks, the sponsors showcased their projects in a huge exhibit hall. The FD.io booth presented a project our friends from Cisco contributed to – a VPP-centric network plugin for Kubernetes which aims to provide the fastest connectivity for containers by bypassing the kernel network stack. During the presentation of the project, we had many conversations with attendees from various companies, which demonstrated their interest in the solution.

KubeCon 2017, Austin

Rastislav Szabo, Lukas Macko

Building Infrastructure Systems 2017 Conference, Moscow

At the end of October 2017, I had a chance to visit one of the world’s largest cities – beautiful Moscow, the capital of Russia, where the BIS 2017 event took place. BIS – Building Infrastructure Systems – focuses on data centers, networks and related technologies. The venue was the impressive Azimut Olympic hotel, which pleasantly surprised everyone by being a completely smoke-free zone, with plenty of photos on the walls depicting healthy lifestyles.

Moscow business district under construction

The event was very well organized and precisely timed; everything ran on schedule and was easy to find. It was attended by nearly 1,000 delegates, among them many representatives of businesses and government bodies, highly skilled technical specialists and CxOs managing large companies. From the very beginning I literally had no time to sit down for a moment, such was the number of visitors to our booth. Most of them showed great interest in our company’s scope of work, the level of expertise we provide and the projects we have participated in; and there were hundreds of other questions they wanted to ask 🙂

BIS 2017 Moscow servers

At 11:20 on the event day, we had a presentation slot allocated to Pantheon Technologies. The room was full of people showing great interest in SDN, NFV and IoT technologies. I had 15 minutes to discuss the latest trends in SDN and NFV and to introduce our company to the audience. Unfortunately, there was almost no time left for the Q&A part, so I invited everyone to our booth. And people came. Right after the presentation, and until the very end of the day, people kept coming and asking questions, asking for references and contacts. That was truly amazing!

BIS 2017, Moscow, Pantheon Technologies brochures

I spoke to people from the Government of Moscow, from financial bodies, telecom and development companies. There were also several representatives of the largest Russian system integration companies who were interested in cooperation.

At the same time, it was inspiring to listen to their practical “field” experience and their understanding of the market. The overall impression I had is that SDN/NFV technologies have recently been actively researched and tested in Russia, although significant ROI is still rare there. More work and time will be needed before that point is reached.

BIS 2017, Moscow, robot

My final impression was that we came to introduce Pantheon Technologies to Russia at just the right time. There are many interesting projects out there where our long-term expertise in networking software development may prove useful.

 

Denis Rasulev

ONUG Fall 2017

Open Networking User Group, New York, USA, October 17 – 18, 2017

ONUG 2017 stage

ONUG belongs to the group of conferences that are rather small in size, but certainly not in importance. This year it took place in New York. The Big Apple is a truly interesting place, and so was the conference. The event was a combination of a trade show and panel discussions. Pantheon Technologies did not actively participate in the trade show part this time, as our focus was more on hunting for potential business.

ONUG 2017 crowd

ONUG is a 2-day event fully packed with big names on stage, as part of panel discussions, and a good selection of vendors, community leaders, service and solution providers.

The conference includes keynotes from enterprise IT business leaders addressing their journeys towards open, software-defined, cloud-based infrastructure, updates from the Working Group Initiative members, hands-on tutorials and interactive labs, real-world use cases, proof-of-concept demonstrations and a vendor technology showcase.

ONUG 2017 website screenshot - recap

The goal of all ONUG events and initiatives is to bring together the full IT community, to allow IT business leaders to learn from peers, make informed open infrastructure deployment decisions, and to open up the dialogue between the vendor and user communities in order to collectively drive open infrastructure.

ONUG 2017 Pantheon brochures

For Pantheon Technologies, this was a good opportunity to understand the current networking needs of service providers, enterprises and vendors. It helps us promote Pantheon even better in our field of expertise, customized software development. ONUG clearly showed that service providers are moving more and more towards SD-WAN solutions. We discussed our expertise in SDN and NFV with almost all of the ONUG participants and found several potential partners to explore this exciting business with. Software-defined networking is not just a buzzword anymore; it is well established and the market is very competitive, especially in the US. That is why we at Pantheon Technologies need to stay on top of it.

Peter Takáč

Windmills in the Netherlands - SDN NFV cover photo

SDN NFV World Congress: Intent-based Networking Still not in Sight

This year, our colleagues from Pantheon Technologies visited quite a few tech events around the globe. Among them, the SDN NFV World Congress, taking place in The Hague, was one we definitely couldn’t miss. As one of the largest conferences focused on network transformation, it attracted more than 1,700 visitors from companies all over the world. And it wasn’t only large companies, many of whom are among our long-term clients; a fairly large number of start-ups joined in order to present their solutions.

 

Haag SDN NFV Forum animated GIF

Pantheon Technologies booth @ SDN NFV, Hague

It’s thrilling to follow the gradual transformation of proprietary solutions into those based on open source. The reason is simple: at Pantheon Technologies, we contribute to several open-source projects, as we firmly believe that this is the only way to ensure interoperability and standardization of the individual building blocks of SDN and NFV solutions.

Yet SDN, software-defined networking, is still under development. To this day, most use cases have only been dealing with automation. The bottom line is that it is still HDN, a human-defined network: it is still people, not software, who express the desired state of the network. Therefore, once the issues with automation and interoperability of the building blocks are solved, a new adventure in the world of intent-based networking may await: the SDN solutions currently offered by the market will only provide the infrastructure used to fulfill the network users’ intentions.

Stefan @ SDN NFV, Hague

During the week we spent at the conference, we had plenty of interesting discussions, both sales-oriented and technical. Now we are very much looking forward to further meetings and talks.

 

 

Miroslav Miklus, Martin Firak

Quick Carbon Cluster Setup in Amazon Cloud

Have you ever needed to set up an SDN controller cluster? Did you do it manually? Didn’t it take too long?

No worries – we’ve got you covered. From now on, you can automate your setup using our CloudFormation 3-node cluster template with Pantheon Technologies’ Carbon SDN controller.

The recipe is quite straightforward. Use your Amazon account to subscribe to the Carbon product here (or look here to check out the instructions on how to do it). Open the CloudFormation console and start creating a new “stack.” Continue by feeding the creation wizard our template, which is available here. Then fill in the customization parameters – that goes something like this:

Carbon Cluster configuration

Some parameters will be pre-filled with default values, others must be filled in manually. For instance, you have to be creative and name your stack. You’ll also need to select a virtual private cloud (VPC), which will be something like your own private playground in the Amazon cloud.

Don’t worry – every Amazon account which is not too old should have one by default. Just use the combo box and click “select.”

 

Carbon Cluster / cloud picture

The last thing is the SSH access key pair, which is also used for the automatic cluster configuration. Since the private key gets uploaded, I would recommend creating a new key pair just for this purpose (look here).

Additionally, you can define basic machine parameters of your nodes by choosing the correct instance type. You can also enhance the security by reducing the reachability of certain ports.

After filling in the parameters, you can leave the other wizard pages untouched and finish the creation wizard. After some 11 minutes, the cluster will be up and running.
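If you prefer scripting over the console wizard, the same stack can also be created through the CloudFormation API. Below is a minimal sketch using the AWS SDK for Java; the stack name, template URL and parameter keys are only illustrative placeholders – use the template link above and the parameter names it actually defines.

import com.amazonaws.services.cloudformation.AmazonCloudFormation;
import com.amazonaws.services.cloudformation.AmazonCloudFormationClientBuilder;
import com.amazonaws.services.cloudformation.model.CreateStackRequest;
import com.amazonaws.services.cloudformation.model.Parameter;

public class CarbonClusterStack {
    public static void main(String[] args) {
        // Uses the default credential chain and region configured for your account.
        AmazonCloudFormation cfn = AmazonCloudFormationClientBuilder.defaultClient();

        CreateStackRequest request = new CreateStackRequest()
                .withStackName("carbon-cluster-demo")                // your stack name
                .withTemplateURL("https://s3.amazonaws.com/example-bucket/carbon-cluster.template") // placeholder URL
                .withParameters(
                        new Parameter().withParameterKey("KeyName")          // hypothetical parameter key
                                       .withParameterValue("carbon-cluster-key"),
                        new Parameter().withParameterKey("InstanceType")     // hypothetical parameter key
                                       .withParameterValue("m4.large"));

        // Kicks off the same ~11-minute provisioning the wizard does.
        System.out.println(cfn.createStack(request).getStackId());
    }
}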

Now you can play with it as much as you wish. You can access its nodes using SSH (see the “outputs” tab of your stack), access the data model of the Toaster example using RESTCONF, and simulate cluster failovers. Last but not least, our marketing department wants us to mention that the Carbon SDN controller costs you nothing. It’s free, so you can try it out and decide whether you like it or not.
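As a quick sanity check, you can query the Toaster model over RESTCONF from any machine that can reach the cluster. The sketch below assumes the usual OpenDaylight defaults – RESTCONF on port 8181 and the admin/admin credentials – so adjust it to whatever your stack outputs say.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class ToasterQuery {
    public static void main(String[] args) throws Exception {
        // Replace <node-ip> with one of the node addresses from the stack "outputs" tab.
        URL url = new URL("http://<node-ip>:8181/restconf/operational/toaster:toaster");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization",
                "Basic " + Base64.getEncoder().encodeToString("admin:admin".getBytes()));
        conn.setRequestProperty("Accept", "application/json");

        // Print the operational state of the toaster as returned by the controller.
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            reader.lines().forEach(System.out::println);
        }
    }
}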

 

Filip Gschwandtner

Senior Software Engineer

 

Ready to Discuss Automation, Data & Networks in Moscow?

Very soon, on October 25, 2017, a very special event will take place in Moscow: CIS Event Group’s BIS 2017 / Around Networks, Around Automation, Industry 4.0.

If we had to pick one single event focused on modern engineering infrastructure where we could meet our friends, peers and clients from Russia and the neighboring countries, this would be the one.

moscow business district

Let us introduce ourselves to the Russian market: Pantheon Technologies is among the leaders in Network Function Virtualization, with deep expertise in the Internet of Things, Software Defined Networking, OpenDaylight and several other fields, such as Sysrepo, Honeycomb and Ligato. As these technologies are gaining traction in Russia and starting to spread throughout the neighboring countries, the time has come for us to get more involved and offer our expertise where it is needed most.

We are working towards developing the future of the internet. Are you ready to join?

 

Denis Rasulev

OPNFV Fast Data Stack at FOSDEM 2017

On February 5th, we presented the OPNFV Fast Data Stack at the FOSDEM conference, which is hosted every year at the Université libre de Bruxelles in Brussels. It was a great gathering of software developers presenting their work in the form of 30-minute talks. People came not just from Europe, but also from overseas and other parts of the world. Lectures took place in more than 30 rooms, and more than 600 speakers presented their projects.

There were a number of interesting lectures not only in the field of networking, but also robotics, neural networks, microprocessors, algorithms and data modeling. Some presenters were members of large teams, others presented their own projects. The scope was very wide, covering almost every programming language one has ever heard of. Visitors could see everything from start-ups to trending projects such as Kubernetes, OpenDaylight or OpenStack. Every lecture was recorded and the videos can be found on the FOSDEM website. Our presentation was scheduled in the NFV (Network Function Virtualization) section.

 

About virtualization and networking

Virtualization has become very popular over the last few years. Virtual machines curb the need for physical resources and make data centers more flexible and accessible. Today’s servers are really powerful and therefore capable of hosting many VMs. This cast networking in a new light and, as a response, it got virtualized too, in the form of virtual forwarders – processes capable of forwarding traffic within a hosting machine. OVS and VPP are the popular technologies these days, and both support DPDK, a very powerful set of data plane libraries and network interface controller drivers for fast packet processing. You may think of VPP and OVS as virtual forwarders between physical NICs and the virtual machines.

 

What is OPNFV Fast Data Stack?

OPNFV FDS makes it easier to maintain complicated data center environments. It’s a complex multilayer suite that includes software components designed for creating virtual machines and forwarding traffic. All the components are built with the Apex installer on a given set of host machines, which need to meet demanding performance requirements and have basic connectivity as well. As a result, a complex stack is created, providing a rich user interface to network operators. The input exposes an abstract set of tools for managing the life cycle of networks, virtual machines and policies across the given nodes.

 

Under the hood

Let’s have a look at the key components of the OPNFV FDS suite. As mentioned above, multiple components operate at different layers of the stack. Each component participates in transforming the defined abstraction into an actual configuration for the underlying infrastructure. On top of the stack resides OpenStack. This software is known for its scalability, loads of plugins and vast community. FDS uses OpenStack for managing VMs and for defining forwarding topology and policy rules. Forwarding inputs can be characterized by elements such as networks, subnets, routers or ports; policy inputs by security groups and security group rules. One layer below is the OpenDaylight controller, also popular for its community and plugins.

In the OPNFV FDS setup, it is used as a controller unit that consumes OpenStack’s abstractions and applies them to the underlying infrastructure using OpenDaylight’s Group Based Policy plugin. When the plugin detects that a policy can be resolved for at least two endpoints, configuration is generated and flushed to the forwarders. The OPNFV FDS setup presented at FOSDEM uses VPP in the hypervisor to forward packets between physical NICs and the VMs.

VPP, Vector Packet Processing, is a virtual switching/routing technology operating at a very impressive rate. It is impressively fast thanks to the DPDK library and CPU cache-optimizing techniques. The beauty of Vector Packet Processing is that instead of handling packets one by one, VPP performs one micro-operation after another on a group of packets, which performs better under heavy load and results in increased throughput. VPP exposes a C API and a CLI for configuration. However, it is not yet possible to use the C API remotely, because VPP does not run any management client. Therefore, Honeycomb is used in the setup to provide a NETCONF interface for the VPP forwarder, and OpenDaylight uses NETCONF to talk to the Honeycomb agent.

 

Supported scenarios

The FDS demo presented at FOSDEM showed the L2 scenario, meaning that L2 traffic is passed via VXLAN tunnels between the nodes. Traffic is routed on a centralized node, and routing is not performed by VPP itself, but by the OpenStack Qrouter service, which is interconnected into every L2 domain in VPP via tap ports. NAT and routing towards external networks are also done by Qrouter.

Moving forward, the FDS project is also looking at L3 scenarios, where routing could be either distributed or centralized and would be done by the VPP process itself, together with NAT. All these efforts need attention on every layer of the stack, including the Apex installer.

Conclusion

We were pleased to present the FDS project at the FOSDEM conference. We believe that OPNFV FDS is a key component in network virtualization with a very bright future. For more information about the setup and the project itself, please visit https://wiki.opnfv.org/display/fds.

Tomáš Čechvala

and

Michal Čmarada

Software Engineers

OpenDaylight RPCs or What Could Possibly Go Wrong With Adding This One Cool Feature

OpenDaylight uses YANG as its Interface Definition Language. This is an architecture decision we have made way back in 2013 and it works reasonably well for the most part.

One YANG concept used rather heavily is the RPC. For YANG and its intended use in NETCONF’s client/server model it works perfectly fine, but trouble starts brewing when you borrow concepts and try to make them fit your use case.

OpenDaylight uses YANG RPCs not only to define its northbound model, but also to model interactions between its individual plugins. It does this in an environment which is not a single process, but rather a cluster of nodes, each having a mesh of plugins, some activated, some not.

From the architecture’s view, which looks at things from an elevation of 10,000 feet, the problem of making RPCs work in this sort of environment is quite simple: all you need are registries and request routers. From the implementation perspective, though, things can easily go wrong… implementations have bugs, quirks and limitations which are not immediately apparent. They only surface when you try to push the system closer to its architectural limits.

The Trouble with Names

RFC 6020 defines only the basic RPC concept and assumes there is a single implementation servicing any request for that RPC. This is okay as long as you are targeting singleton actions — like ‘ping IP’, ‘clear system log’ and similar. In a complex system, though, requests are typically associated with a particular resource — like ‘create a flow on this switch’. Since YANG did not give us this tool, we decided to create an OpenDaylight extension which allows an RPC to be bound to a context. This gave rise to two unfortunate names: ‘Global RPCs‘ and ‘Routed RPCs‘, the first being normal RPCs and the second being bound to a context. Plus a third name, ‘RPCs‘, to refer to either one of those concepts. Are you confused yet?

The initial implementation of these concepts was done back in 2013, when there was no clustering in sight, by a team who spent days upon days discussing the difference. When clustering entered the implementation picture in 2014, the implementation team attached their own meaning to the word ‘Routed’ and we ended up with an implementation where Routed RPCs are routed between cluster nodes, but the default ones are not. That is the subject matter behind BUG-3128. It did not matter much as long as all cluster-enabled applications used Routed RPCs, but that changed with the emergence of the Cluster Singleton Service and its wide-spread adoption among plugins.

These days we have YANG 1.1, defined in RFC 7950, which has the same underlying concept with much less confusing names. ‘Global RPCs’ are ‘RPCs‘. ‘Routed RPCs’ are ‘actions‘. Since those terms make the conversation about semantics a reasonable affair, this is the last you hear about Global and Routed RPCs from me.

Fun with Concepts, Contexts and Contracts

In order to support both RPCs and actions, OpenDaylight’s MD-SAL infrastructure has to define a concept to identify them both. Since the two are utterly similar in what they do, DOMRpcIdentifier was born. It is used to identify either an action or an RPC. To do that, it is an abstract class with two concrete, private final implementations: DOMRpcIdentifier$Global and DOMRpcIdentifier$Local. Why those names? I do not remember the details, but I could wager a guess about what I was thinking back then. At any rate, the two implementations differ only in their implementation of DOMRpcIdentifier.getContextReference(). DOMRpcIdentifier$Global’s is always empty and DOMRpcIdentifier$Local’s is always non-empty.
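To make the distinction concrete, here is a minimal sketch of how the two flavors are typically created, assuming the Carbon-era controller API (org.opendaylight.controller.md.sal.dom.api) and its DOMRpcIdentifier.create() factory methods. The QNames are purely illustrative; only the presence or absence of the context reference matters.

import org.opendaylight.controller.md.sal.dom.api.DOMRpcIdentifier;
import org.opendaylight.yangtools.yang.common.QName;
import org.opendaylight.yangtools.yang.data.api.YangInstanceIdentifier;
import org.opendaylight.yangtools.yang.model.api.SchemaPath;

public final class RpcVsActionExample {
    public static void main(String[] args) {
        // Illustrative schema path -- module, revision and names are made up for this example.
        QName clearLog = QName.create("urn:example:system", "2017-01-01", "clear-log");
        SchemaPath clearLogPath = SchemaPath.create(true, clearLog);

        // No context reference: behaves like a plain RPC (DOMRpcIdentifier$Global).
        DOMRpcIdentifier rpc = DOMRpcIdentifier.create(clearLogPath);

        // Non-empty context reference: bound to a particular resource,
        // i.e. an action (DOMRpcIdentifier$Local).
        YangInstanceIdentifier device = YangInstanceIdentifier.builder()
                .node(QName.create("urn:example:devices", "2017-01-01", "devices"))
                .node(QName.create("urn:example:devices", "2017-01-01", "device"))
                .build();
        DOMRpcIdentifier action = DOMRpcIdentifier.create(clearLogPath, device);

        System.out.println(rpc.getContextReference().isEmpty());    // true
        System.out.println(action.getContextReference().isEmpty()); // false
    }
}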

This is consistent with how RPCs (without a context reference) and actions (with a context reference) are invoked, and it makes the API contract involved in RPC/action invocation clean and simple. In the context of registering an RPC or action implementation, things are slightly less straightforward: registration is a separate interface, with a rather terse Javadoc. In both cases there is a hint of ‘a conceptual dynamic router’, but not much in terms of details.

Unless you were very curious about the details of the API contracts involved, after reading the available documentation and with some OpenDaylight tutorials under your belt, you would feel this is a dead-simple matter and just use the interfaces provided. Run a few test cases and everything works just fine. No trouble in sight.

About That Router Thing…

The Simultaneous Release name of the OpenDaylight release currently in development is Carbon, meaning we have shipped five major releases, so this vaguely referenced ‘dynamic router’ thing actually exists somewhere and does something to fulfill the API contracts imposed on it; otherwise the applications would not be able to work at all. The entry point into the implementation is DOMRpcRouter. Glancing over it, it contains some ugliness, but it gets the general outline of the two sides of the contract done.

Digging a bit deeper into the invocation path, you get to the fork in AbstractDOMRpcRoutingTableEntry.invokeRpc(). The RPC invocation path is rather straightforward, but the invocation path for actions is far from simple. Out of two code paths (actions and RPCs) we suddenly have four, as an action can be invoked without a context reference as if it were an RPC, and there is a brief mention of the remote rpc connector registering action implementations with an empty context reference … wait … WHAT???!!!

Okay, we seem to have two implementations integrated based on implementation details, without that being supported by a single line in the API contract. The connector referenced is actually sal-remoterpc-connector and is something that is meaningful in clusters. To make some sense of this, we have to go back to 2013 again.

A Tale of Three Routers

From the get-go, the MD-SAL architecture was split into two distinct worlds: Binding-Independent (BI, DOM) and Binding-Aware (BA, Binding). This split comes from two competing requirements: type safety provided by Java for application developers who interact with specific data models, and infrastructure services which are independent of data models. The former is supported by interfaces and classes generated from YANG models and generally feels like any code where you deal with DTOs. The latter is supported by an object model similar to XML DOM, where you deal with hierarchical ‘document’ trees and all you have to go by are QNames. For obvious reasons, most developers interacting with OpenDaylight have never touched the BI world, even though it underpins pretty much every single feature available in the platform.

A very dated picture of how the system is organized can be found here. It is obvious that the two worlds need to interoperate seamlessly — for example, RPCs invoked by one world must be able to be serviced by the other, and the caller should be none the wiser. Since RPCs are the equivalent of a method call, this process needs to be as fast as possible, too. That led to a design where each world has its own Broker and the two brokers are connected. Invocations within one world would be handled by that world’s broker, foregoing any translation. A very old picture of how an inter-world call would look can be seen in this diagram.

For RPCs this meant that there were two independent routing tables, with re-exports being done from each of them. The idea of an RPC router was generalized in the (now long-forgotten) RpcRouter interface. Within a single node, the Binding and DOM routers would be interconnected. For clustered scenarios, a connector would be used to connect the DOM routers across all nodes. So an inter-node BA RPC request from node A to node B would go through: BA-A -> BI-A -> Connector-A -> Connector-B -> BI-B -> BA-B (and back again). Both the BI routers and the connectors speak the same language, hence they can communicate without data translation.

The design was simple and effective, but it has not quite survived the test of time — most notably the transition to dynamic loading of models in the Karaf container. Model loading impacts the data translation services needed to cross the BA/BI barrier, leading to situations where an RPC implementation was available in the BA world, but could not yet be exported to the BI world — leading to RPC routing loops and, in the case of data store services, missing data and deadlocks.

To solve these issues, we have decided to remove the BA/BI split from the implementation and turn the Binding-Aware world into an overlay on top of the Binding-Independent world. This means that all infrastructure services always go through BI, and the Binding RPC Broker was gradually taken behind the barn, there was a muffled sound in 2015, and these days we only have two routers, one hiding behind a connector name.

Blueprint for a New Feature

Probably the most significant pain point identified by new people coming to OpenDaylight is that the technology stack is a snowflake, providing few familiar components, with implementation and documentation being borderline hostile to newcomers. One such piece is the Configuration Subsystem (CSS). Driven by invalid YANG and magic XMLs, it is a model-driven service activation, dependency injection and configuration framework built on top of JMX. While it offers the ability to re-wire a running instance in a way which does not break anything half-way through reconfiguration, it is a major pain to get right.

It pre-dates MD-SAL (which offers nicer configuration change interactions) and is utterly slow (because the JMX implementation is horrible). It was also designed to safeguard against operator errors, which is quite contrary to what Karaf’s feature service provides — if you hit feature:uninstall, those services are going down without any safeties whatsoever.

To fix this particular sore spot, one of the decisions from the Beryllium design summit was to extend Blueprint with a few capabilities and start the long journey to OpenDaylight without CSS, where internal wiring would be done in Blueprint and user-visible configuration would be stored in MD-SAL configuration data store. The crash-course page is a very easy read.

You will note that there is support for injecting and publishing RPC implementations — which is a nice feature for developers. Rather than having to deal with registries, I can declare a dependency on an RPC service and have Blueprint activate me when it becomes available like this:

<odl:rpc-service id="fooRpcService" interface="org.opendaylight.app.FooRpcService"/>

I can also publish my bean as an implementation, just with a single declaration, like this:

<bean id="fooRpcService" class="org.opendaylight.app.FooRpcServiceImpl">
  <!-- constructor args -->
</bean>
<odl:rpc-implementation ref="fooRpcService"/>

This is beyond neat, this is awesome.
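For completeness, the bean referenced by <odl:rpc-implementation/> is just an ordinary class implementing the binding interface. A minimal sketch might look as follows — FooRpcService, DoFooInput/DoFooOutput and the doFoo method are hypothetical stand-ins for generated classes, using the same simplified signature style as the generated-interface example later in this post.

import com.google.common.util.concurrent.Futures;
import java.util.concurrent.Future;

public class FooRpcServiceImpl implements FooRpcService {

    // Single RPC of the hypothetical FooRpcService interface.
    @Override
    public Future<DoFooOutput> doFoo(DoFooInput input) {
        // Build an empty output and complete the RPC immediately.
        return Futures.immediateFuture(new DoFooOutputBuilder().build());
    }
}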

FooRpcService vs. DOMRpcIdentifier

We have already covered how the Binding-Aware layer sits on top of the Binding-Independent one, but it is not a one-to-one mapping. This comes from the fact that the Binding-Independent layer is centered around what makes sense in YANG, whereas the Binding-Aware layer is centered around what makes sense in Java, including various trade-offs and restrictions coming from each. One such difference is that RPCs do not have individual mappings, i.e. we do not generate an interface class for each RPC; rather, we generate a single interface for all RPC definitions in a particular YANG module. Hence for a model like

module foo {
    rpc first { input { ... } output { ... } }
    rpc second { input { ... } output { ... } }
}

we generate a single FooService interface

public interface FooService {
    Future<FirstOutput> first(FirstInput input);
    Future<SecondOutput> second(SecondInput input);
}

The reasoning behind this is that a particular module’s RPCs (in the broad sense, including actions) will always be implemented by a single OpenDaylight plugin and hence it makes sense to bundle them together.

An unfortunate side-effect of this is that in the Binding Aware layer, both RPCs and actions are packaged in the same interface and it is up to the intermediate layers to sort out the ambiguities. This problem is being addressed in Binding V2, where each action has its own interface, but we have to have a solution which works even in this weird setup.

Fix Some, Break Some

Considering these complexities and the gaps in the API contract documentation department, it is not quite surprising that the fix for BUG-3128, while making RPCs work correctly across the cluster, had the unfortunate side-effect of breaking blueprint wiring in a downstream project (OpenFlow Plugin). In order to understand why that happened, we need to explore the interactions between DOMRpcRouter, blueprint and sal-remoterpc-connector.

When blueprint sees an <odl:rpc-service/> declaration, it will wire a dependency on the specified RPC (Binding-Aware) interface being available in DOMRpcService (which is a facet of DOMRpcRouter). As soon as it sees a registration, it considers the dependency satisfied and proceeds with the wiring of the component. This is true for LLDP Speaker, too. Note how it declares a dependency on an implementation of PacketProcessingService. Try as you may, you will not find a place where the corresponding <odl:rpc-implementation/> lives. The reason for this is quite simple: this service contains a single action, and an implementation is registered when an OpenFlow switch connects to the OpenDaylight instance. So how is it possible this works?

Well, it does not. At least not the way it is intended to work.

What happens is that Blueprint starts listening for an implementation of PacketProcessingService becoming available with an empty context, just as with any old RPC. Except this is an action, so somebody has to register as a global provider for the action, i.e. as being capable of dynamically invoking it based on its content without being tied to a particular context. That someone is sal-remoterpc-connector, which, in its current shape and form, does precisely what is mentioned in that terse comment. It registers itself as a dynamic router for all actions, and when a request comes in, it will try to find a remote node which has registered an implementation for the context specified in the invocation. That means that, unbeknownst to the Blueprint extension, all actions appear to have an implementation — even if there is no component actually providing it — and therefore LLDP Speaker will always activate, just as if that dependency declaration was not there.

The fix to address BUG-3128 performed a simple thing: rather than using blanket registrations, it only propagates registrations observed on other nodes — becoming really a connector rather than a dynamic router. Since no component provides the registration at startup time, blueprint will not see the LLDP Speaker dependency as satisfied, leading to a failure to activate. Unless an OpenFlow switch happens to connect while we are waiting — in that case, activation will go through.

So we are at a fork: we either have blueprint ‘working’, or we have RPC routing in a cluster working. To get both to work at the same time, and to actually fix LLDP Speaker to activate when appropriate, we will obviously have to perform some amount of surgery on multiple components.

I will detail what changes are needed to close this little can of worms in my next post, so stay tuned 🙂

 

Róbert Varga

CTO Pantheon Technologies