
KubeCon & CloudNativeCon 2017, Austin

At the beginning of December 2017, we attended the KubeCon & CloudNativeCon 2017 conference in Austin, Texas. The conference, organized by the Linux Foundation, brought together leading contributors in cloud-native applications and computing, containers, microservices, container orchestration, and related projects.


More than four thousand developers, along with others interested in cloud-native technologies, attended the event in Austin. The growing number of attendees testifies to the rising importance of Kubernetes and containerized applications for companies of all sizes.

The schedule was full of talks about various CNCF technologies such as Kubernetes, Prometheus, Docker, Envoy, CNI and many others. “Kubernetes is the new Linux,” pointed out Google’s Kelsey Hightower in his keynote, predicting a bright future for these technologies.


In addition to the talks, sponsors showcased their projects in a huge exhibit hall. The FD.io booth presented a project our friends from Cisco contribute to: a VPP-centric network plugin for Kubernetes, which aims to provide the fastest connectivity for containers by bypassing the kernel network stack. While presenting the project, we were drawn into many conversations with attendees from various companies, a clear sign of interest in the solution.


Rastislav Szabo, Lukas Macko


Integrating VPP and Honeycomb and the Extension of VPP Services

In this short article, I would like to share our experience with integrating VPP and Honeycomb, and with extending VPP services. Among our colleagues are many developers who contribute to both projects, as well as people who work on integrating the two with each other and with the rest of the networking world.

First, let’s define the basic terms.


What is VPP?

According to its wiki page, VPP is “an extensible framework that provides out-of-the-box production quality switch/router functionality”. There is definitely more to say about VPP, but from my perspective, what matters most is that it

  • provides switch and router functionality,
  • is of production quality,
  • is platform independent.

“Platform independent” means that it is up to you where you run it (a virtualized environment, bare metal, …). VPP is distributed by default in the form of packages, and the final VPP packages are available from Nexus repositories. Let’s say we decide to use stable VPP 17.04 on stable Ubuntu 16.04: you can download all the available packages from the corresponding Nexus site. If your platform is not available at Nexus, you can still download the VPP sources and build it for the platform you need.
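As an illustration, fetching a package from Nexus can also be scripted. The repository URL and file name in the following sketch are purely hypothetical; browse nexus.fd.io for the real artifact paths of the release you need.

```python
# Hypothetical sketch: download a VPP 17.04 package for Ubuntu 16.04
# from an fd.io Nexus repository. The URL and file name below are
# illustrative only; check nexus.fd.io for the actual artifact paths.
import requests

URL = ("https://nexus.fd.io/content/repositories/"
       "fd.io.stable.1704.ubuntu.xenial.main/vpp_17.04_amd64.deb")

with requests.get(URL, stream=True) as resp:
    resp.raise_for_status()
    with open("vpp_17.04_amd64.deb", "wb") as f:
        # Stream the package to disk in 1 MiB chunks.
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)

# The package can then be installed with: sudo dpkg -i vpp_17.04_amd64.deb
```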

VPP processes the packets flowing through your network much like a physical router, but with one big advantage: you do not need to buy a router. You can use whatever physical device you have and simply install VPP on it.


What is Honeycomb?

Honeycomb is a management agent for VPP. It provides NETCONF and RESTCONF interfaces on its northbound side and stores the requested configuration (in the form of XML or JSON) in a local data store. The hc2vpp project then calls the corresponding VPP APIs in reaction to new configuration appearing in that data store. VPP itself is configured through a special text-based CLI (similar to an OS shell); Honeycomb makes VPP easier to use by providing an interface that sits somewhere between a GUI and a CLI. For example, you can request VPP state or statistics via XML and receive the response in XML form. Honeycomb can be installed the same way as VPP, through packages available from the Nexus site.
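To give a feel for the RESTCONF side, here is a minimal sketch that reads the operational state of VPP’s interfaces through Honeycomb. It assumes Honeycomb’s usual defaults (RESTCONF on port 8183, admin/admin credentials) and the standard ietf-interfaces model exposed by hc2vpp; adjust these to your installation.

```python
# Minimal sketch: read VPP interface state via Honeycomb's RESTCONF
# northbound. Port 8183 and admin/admin are common Honeycomb defaults;
# adjust them to your setup.
import requests

BASE = "http://localhost:8183/restconf"

resp = requests.get(
    BASE + "/operational/ietf-interfaces:interfaces-state",
    auth=("admin", "admin"),
    headers={"Accept": "application/json"},  # XML is returned by default
)
resp.raise_for_status()

# Print the name and operational status of every interface VPP reports.
for iface in resp.json()["interfaces-state"]["interface"]:
    print(iface["name"], iface.get("oper-status"))
```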


Where can the combination of VPP and Honeycomb be used?

We’ve already showcased several use cases on the Pantheon Technologies YouTube channel.

Another alternative is to use the two as a vCPE (Virtual Customer Premises Equipment), as specified in this draft. One of the projects that wants to implement it is ONAP, with VPP used as the vCPE endpoint for the internet connection from a provider. According to this use case, the vCPE should provide several services. Standalone VPP does not support such services, but they can still be added to the machine where VPP is running. For demonstration, we have chosen DHCP and DNS.


DHCP

In this case, we have two VMs. VM0 simulates the client side (a DHCP client) that wants an IP address assigned to its interface enp0s9. VM1 contains VPP and a DHCP server. The DHCP request is broadcast via enp0s9 on VM0 and reaches VPP1 on the interface with address 192.168.40.2. VPP1 is configured as a DHCP proxy, so the request message is forwarded to 192.168.60.2, where the DHCP server responds with a DHCP offer. Finally, after all the DHCP configuration steps are done, interface enp0s9 on VM0 is configured with the IP address 192.168.40.10.
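The DHCP proxy part of VPP1’s configuration can be scripted from the host. The following is a minimal sketch that drives the VPP CLI through vppctl in a subprocess; the exact CLI syntax can vary between VPP releases, so check the CLI help on yours.

```python
# Minimal sketch: configure VPP1 as a DHCP proxy by driving the VPP CLI
# through vppctl. CLI syntax may differ between VPP releases.
import subprocess

def vppctl(command: str) -> str:
    """Run a single VPP CLI command and return its output."""
    return subprocess.run(
        ["vppctl"] + command.split(),
        check=True,
        capture_output=True,
        text=True,
    ).stdout

# Forward DHCP requests arriving on 192.168.40.2 to the DHCP server at
# 192.168.60.2, matching the topology described above.
print(vppctl("set dhcp proxy server 192.168.60.2 src-address 192.168.40.2"))
print(vppctl("show dhcp proxy"))
```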


DNS

In this case, we also have two VMs. VM0 simulates the client side (a DNS client) that needs to resolve a domain name to an IP address. The request is routed via a local port to VPP1, which routes it on to the DNS server in VM1. When a name is resolved for the first time, the request is sent to an external DNS server; afterwards, the local DNS server serves such requests itself.
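A quick way to exercise this from VM0 is to query the local DNS server twice and compare the timings; the second answer should come from the local cache. The sketch below uses the dnspython package, and the resolver address 192.168.60.2 is illustrative, reusing the addressing from the DHCP example.

```python
# Query the local DNS server twice and time both lookups. The second
# query should be noticeably faster, served from the local cache rather
# than the external DNS server. Uses dnspython; resolve() is the modern
# API (older dnspython versions call it query()).
import time
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["192.168.60.2"]  # illustrative DNS server address

for attempt in (1, 2):
    start = time.monotonic()
    answer = resolver.resolve("example.com", "A")
    elapsed = (time.monotonic() - start) * 1000
    print(f"attempt {attempt}: {answer[0]} in {elapsed:.1f} ms")
```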


Jozef Glončák

Software Engineer

OPNFV Fast Data Stack at FOSDEM 2017

On February 5th, we presented the OPNFV Fast Data Stack at the FOSDEM conference, hosted every year at the Université libre de Bruxelles in Brussels. It was a great gathering of software developers presenting their work in the form of 30-minute talks. People came not just from Europe, but also from overseas and other parts of the world. Lectures took place in more than 30 rooms, and more than 600 speakers presented their projects.

There were a number of interesting lectures, not only in the field of networking, but also robotics, neural networks, microprocessors, algorithms, and data modeling. Some presenters were members of large teams; some were presenting their own projects. The scope was very wide, covering almost every programming language one has ever heard of. Visitors could see everything from startups to trending projects such as Kubernetes, OpenDaylight, or OpenStack. Every lecture was recorded, and the videos can be found on the FOSDEM website. Our presentation was scheduled in the NFV (Network Function Virtualization) section.


About virtualization and networking

Virtualization has become very popular over the last years. Virtual machines curb the need for physical resources and make data centers more flexible and accessible. Today’s servers are really powerful and therefore capable of hosting many VMs. This cast networking in a new light and, in response, it got virtualized too, in the form of virtual forwarders: processes capable of forwarding traffic within a host machine. OVS and VPP are the popular technologies these days, and both support DPDK, a very powerful set of data plane libraries and network interface controller drivers for fast packet processing. You may think of VPP and OVS as virtual forwarders between the physical NICs and the virtual machines.


What is OPNFV Fast Data Stack?

OPNFV FDS makes it easier to maintain complicated data center environments. It is a complex multilayer suite of software components designed for creating virtual machines and forwarding traffic. All the components are built with the Apex installer on a given set of host machines, which need to meet demanding performance requirements and have basic connectivity. The result is a complete stack that provides a rich user interface to network operators, exposing an abstract set of tools for managing the life cycle of networks, virtual machines, and policies across the given nodes.


Under the hood

Let’s have a look at the key components of the OPNFV FDS suite. As mentioned above, multiple components operate at different layers of the stack, and each participates in transforming the defined abstraction into an actual configuration for the underlying infrastructure. On top of the stack resides OpenStack, software known for its scalability, loads of plugins, and vast community. FDS uses OpenStack for managing VMs and for defining the forwarding topology and policy rules. Forwarding inputs are characterized by elements such as networks, subnets, routers, or ports; policy inputs, by security groups and security group rules (see the sketch below).
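To make those inputs concrete, here is a hedged sketch of creating a forwarding input (a network with a subnet) and a policy input (a security group with one rule) using the openstacksdk Python library. The cloud name “fds” is hypothetical and refers to an entry in your clouds.yaml.

```python
# A minimal sketch of the forwarding and policy inputs FDS consumes
# from OpenStack, expressed with the openstacksdk library.
import openstack

conn = openstack.connect(cloud="fds")  # hypothetical clouds.yaml entry

# Forwarding inputs: a network with one subnet.
net = conn.network.create_network(name="demo-net")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="demo-subnet",
    ip_version=4,
    cidr="10.0.0.0/24",
)

# Policy inputs: a security group with one rule allowing inbound SSH.
sg = conn.network.create_security_group(name="demo-sg")
conn.network.create_security_group_rule(
    security_group_id=sg.id,
    direction="ingress",
    protocol="tcp",
    port_range_min=22,
    port_range_max=22,
    ether_type="IPv4",
)
```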

One layer below resides the OpenDaylight controller, likewise popular for its community and plugins. In the OPNFV FDS setup, it is used as a controller unit that consumes OpenStack’s abstractions and applies them to the underlying infrastructure using OpenDaylight’s Group Based Policy plugin. When the plugin detects that a policy can be resolved for at least two endpoints, configuration is generated and flushed to the forwarders. The OPNFV FDS setup presented at FOSDEM uses VPP in the hypervisor to forward packets between the physical NICs and the VMs.

VPP, Vector Packet Processing, is a virtual switching/routing technology operating at a very impressive rate, thanks to the DPDK library and CPU cache optimization techniques. The beauty of vector packet processing is that instead of handling packets one by one, VPP performs one micro-operation after another on a whole group of packets, which works better under heavy load and results in increased throughput. VPP exposes a C API and a CLI for configuration. However, it is not yet possible to use the C API remotely, because VPP does not run any management client. Therefore, Honeycomb is used in the setup to provide a NETCONF interface for the VPP forwarder, and OpenDaylight uses NETCONF to talk to the HC agent.
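For illustration, this is roughly what talking NETCONF to a Honeycomb agent looks like from Python with the ncclient library. The host address is hypothetical, and port 2831 with admin/admin credentials are common Honeycomb NETCONF-over-SSH defaults; adjust them to your deployment.

```python
# Minimal sketch: retrieve the running configuration from a Honeycomb
# agent over its NETCONF northbound using ncclient.
from ncclient import manager

with manager.connect(
    host="192.0.2.10",        # hypothetical address of the HC agent
    port=2831,                # common Honeycomb NETCONF-over-SSH default
    username="admin",
    password="admin",
    hostkey_verify=False,
) as m:
    config = m.get_config(source="running")
    print(config.xml)
```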


Supported scenarios

The FDS demo presented at FOSDEM showed the L2 scenario, meaning that L2 traffic is passed between the nodes via VXLAN tunnels. Traffic is routed on a centralized node, and the routing is performed not by VPP itself, but by the OpenStack Qrouter service, which is connected into every L2 domain in VPP via tap ports. NAT and routing towards external networks are also done by Qrouter.
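To picture what the generated configuration amounts to on a compute node, here is an illustrative sketch of creating a VXLAN tunnel and attaching it to a bridge domain through the VPP CLI. The addresses, the VNI, and the bridge-domain id are made up, and the CLI syntax may differ between VPP releases.

```python
# Illustrative sketch of the L2 plumbing on a compute node: a VXLAN
# tunnel toward another node, attached to a bridge domain, driven
# through the VPP CLI via vppctl.
import subprocess

def vppctl(command: str) -> str:
    """Run a single VPP CLI command and return its output."""
    return subprocess.run(
        ["vppctl"] + command.split(),
        check=True, capture_output=True, text=True,
    ).stdout

# Tunnel endpoints: local node 10.0.0.1, remote node 10.0.0.2 (made up).
vppctl("create vxlan tunnel src 10.0.0.1 dst 10.0.0.2 vni 100")
# Attach the new tunnel interface to bridge domain 1, next to VM ports.
vppctl("set interface l2 bridge vxlan_tunnel0 1")
```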

Moving forward, the FDS project is also looking at L3 scenarios, where routing could be either distributed or centralized and would be done by the VPP process itself, together with NAT. All these efforts need attention on every layer of the stack, including the Apex installer.

Conclusion

We were pleased to present the FDS project at the FOSDEM conference. We believe that OPNFV FDS is a key component in network virtualization, with a very bright future. For more information about the setup and the project itself, please visit https://wiki.opnfv.org/display/fds.

Tomáš Čechvala

and

Michal Čmarada

Software Engineers