[Guide] Intro to Vector Packet Processing (VPP)

27/04/2026

Welcome to our new series on how to build and program FD.io's Vector Packet Processing framework, also known as VPP.

The name stems from VPP's use of vector processing: instead of handling one packet at a time through the full processing path (the older, scalar approach), VPP pushes a vector of packets through each node of its processing graph. This amortizes fixed per-call costs, such as instruction-cache misses, across the whole batch, which is how VPP achieves high throughput at low latency.
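To make the amortization argument concrete, here is a toy cost model in Python (illustrative only, not VPP code; the `OVERHEAD` and `PER_PACKET` constants are hypothetical numbers chosen for the example):

```python
OVERHEAD = 10    # hypothetical fixed cost paid per processing call
PER_PACKET = 1   # hypothetical cost to process a single packet

def scalar_cost(n_packets):
    """Scalar model: one call per packet, so the fixed overhead
    is paid for every single packet."""
    return n_packets * (OVERHEAD + PER_PACKET)

def vector_cost(n_packets, batch=256):
    """Vector model: one call per batch of packets, so the fixed
    overhead is amortized across the whole vector."""
    full, rest = divmod(n_packets, batch)
    calls = full + (1 if rest else 0)
    return calls * OVERHEAD + n_packets * PER_PACKET

print(scalar_cost(1024))  # 11264
print(vector_cost(1024))  # 1064 -- 4 calls' overhead instead of 1024
```

The larger the fixed per-call cost relative to the per-packet work, the bigger the win from batching, which is exactly the effect VPP exploits on real CPUs.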

What will this series include?

This series will cover the following topics, with the ultimate goal of getting to know the VPP framework and adapting it to your network:

  1. Binary API
  2. Honeycomb/hc2vpp (Archived project)
  3. Ligato VPP Agent
  4. gRPC/REST
  5. Memory Management & DPDK APIs

Note on Honeycomb / hc2vpp: The hc2vpp and Honeycomb projects have been archived by the FD.io community – the GitHub repository now lives at FDio/archived-hc2vpp.

What’s new in 2026

Since this guide was first published, the FD.io community has continued to ship a stable cadence of releases. The most recent at the time of this update is VPP 26.02, released on 25 February 2026, which includes 32 new features across device drivers, the host stack, HTTP/3 and QPACK, network policies, and more – alongside more than 500 commits and over 120 fixes since the previous release.

A few notable highlights from the 26.02 cycle:

  • A native IGE driver for Intel Gigabit Adapters (i211, i225, i226).
  • DPDK 25.11 integration and rdma-core 60.0.
  • An HTTP/3 framing layer, H3 client-side support, and QPACK encoder/decoder improvements.
  • mTLS server-side support and peer-certificate retrieval in the TLS engine.
  • New npol_* API messages backing the new network policies capability.

VPP releases continue to follow a roughly four-month cadence (e.g. 25.10 → 26.02 → 26.06), with each cycle paired with a corresponding CSIT report that publishes performance data and side-by-side comparisons against the previous release.

Why should I start using Vector Packet Processing?

The main advantages are:

  • high performance built on proven technology
  • production-level quality
  • a flexible and extensible architecture

The core principle of VPP is that you can plug in a new graph node, adapt it to your network's purposes, and run it right off the bat. Adding a new plugin does not mean you need to change the core code with each new addition. Plugins can either be included in the processing graph, or they can be built outside the source tree and become an individual component of your build.
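For a flavor of what the plugin mechanism looks like, here is a minimal sketch of an out-of-tree plugin's registration, using VPP's `VLIB_PLUGIN_REGISTER` macro from `vnet/plugin/plugin.h` (this fragment only compiles inside a VPP build environment, and the description string is our own placeholder):

```c
/* Minimal VPP plugin registration sketch (illustrative only;
 * requires the VPP source tree and headers to build). */
#include <vlib/vlib.h>
#include <vnet/plugin/plugin.h>
#include <vpp/app/version.h>

/* Registers this shared object with VPP's plugin loader at startup. */
VLIB_PLUGIN_REGISTER () = {
  .version = VPP_BUILD_VER,
  .description = "Sample plugin (skeleton)",
};
```

A real plugin would then register one or more graph nodes (via `VLIB_REGISTER_NODE`) that implement its packet-processing logic; the registration above is only the entry point that makes the plugin visible to VPP.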

Furthermore, this separation of plugins makes recovering from a crash a matter of a simple process restart: a single plugin failure does not require rebuilding your whole system.

For a full list of features, please visit the official Vector Packet Processing wiki. You can also check our previous installments on VPP integration.

Preparation of VPP packages

In order to build and start with VPP yourself, you will have to:

  1. Download VPP's repository from this page, or follow the installation instructions
  2. Clone the repository onto your system, either from FD.io's Gerrit or from VPP's GitHub mirror
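In practice, the two steps above look roughly like this on a Linux machine, using the `make` targets from VPP's own build system (run times and dependency handling vary by distribution):

```shell
# Clone VPP from FD.io's Gerrit (a GitHub mirror also exists)
git clone https://gerrit.fd.io/r/vpp
cd vpp

# Install build dependencies, then build and run a debug image
make install-dep
make build
make run
```

The `install-dep`, `build`, and `run` targets are provided by VPP's top-level Makefile; `make run` starts the freshly built VPP instance from the build tree so you can experiment without installing packages system-wide.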

Enjoy and explore the repository as you wish. We will continue exploring the Binary API in the next part of our series.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

Related Articles

Vector Packet Processing 104: gRPC & REST

Welcome back to our Vector Packet Processing implementation guide, Part 4. Today, we will go through the essentials of gRPC and REST and introduce their core concepts, while introducing one missing functionality into our VPP build. This part will also introduce the...

Vector Packet Processing 103: Ligato & VPP Agent

Welcome back to our guide on Vector Packet Processing. In today's post number three from our VPP series, we will take a look at Ligato and its VPP Agent. Since the original version of this article, the Ligato project has consolidated. The VPP Agent and CN-Infra remain...
