VPP MASTERY · PHASE 3B · WEEKS 10–11
🔗 memif - Shared Memory Interface
Server/client roles · Unix socket control path · Zero-copy shared memory · libmemif · DPDK net_memif PMD
Covers: src/plugins/memif/ · extras/libmemif/ · net_memif PMD · Project 5

MEMIF ARCHITECTURE

🧩

memif Design - Control Plane vs Data Plane

ARCHITECTURE

memif (memory interface) is a shared-memory, zero-copy interface for connecting two processes - most commonly two VPP instances or VPP + a DPDK application. It is the highest-performance inter-process interface available for container-to-container packet forwarding.

memif has a strict two-plane design:

  • Control plane (Unix socket): Used once during connection setup. A server listens on a Unix socket; a client connects. They exchange memif_msg_t messages to negotiate region count, queue count, ring size, and buffer size (the full message sequence is sketched below). After the handshake, the socket is idle.
  • Data plane (shared memory): After handshake, both sides mmap the same physical memory regions. TX/RX rings (ring buffers of memif_desc_t descriptors) in this shared memory allow zero-copy packet passing - no copies, no system calls, no kernel involvement.
/* memif topology */

Process A (VPP master)              Process B (VPP slave / DPDK app)
┌──────────────────────┐            ┌──────────────────────┐
│  memif server        │            │  memif client        │
│  listen(/run/vpp/m0) │←──socket──→│  connect(/run/vpp/m0)│
│                      │  handshake │                      │
│  [shared mem region] │←───mmap───→│  [shared mem region] │
│  TX ring (A→B)       │            │  RX ring (reads A→B) │
│  RX ring (B→A)       │            │  TX ring (writes B→A)│
└──────────────────────┘            └──────────────────────┘

/* Key properties */
Zero copies:   packet data never leaves shared memory
No syscalls:   data path uses only memory reads/writes
Interrupt mode: optionally signal peer via eventfd (avoids busy poll)
Poll mode:     VPP polls the rings by default (same as DPDK)
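
The control-channel handshake itself is a short, fixed exchange. The sketch below paraphrases the sequence from the memif protocol documentation - the message type names are the real memif_msg_t types, but the framing is simplified:

/* Control-channel handshake (simplified)
 *
 * The slave allocates the shared memory; region and ring eventfd file
 * descriptors travel over the Unix socket as SCM_RIGHTS ancillary data.
 *
 *  slave (client)                          master (server)
 *  ── connect() to Unix socket ─────────→
 *                       ←──────────────── MEMIF_MSG_TYPE_HELLO
 *                                         (supported versions, max sizes)
 *  MEMIF_MSG_TYPE_INIT ─────────────────→ (interface id, mode, secret)
 *  MEMIF_MSG_TYPE_ADD_REGION ──────────→  (one per region, + region fd)
 *  MEMIF_MSG_TYPE_ADD_RING ────────────→  (one per ring, + eventfd)
 *  MEMIF_MSG_TYPE_CONNECT ─────────────→
 *                       ←──────────────── MEMIF_MSG_TYPE_CONNECTED
 *
 *  Both sides mmap the regions; the data path is live and the socket
 *  goes idle until disconnect (MEMIF_MSG_TYPE_DISCONNECT).
 */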

SHARED MEMORY RING LAYOUT

🧮

memif Ring and Descriptor Structure

INTERNALS
/* Shared memory region layout */
Region 0: Control - ring headers, metadata
  offset 0: memif_shm_t { cookie, version, ... }
  offset N: memif_ring_t[0] { head, tail, flags, desc[ring_size] }
  offset M: memif_ring_t[1] { ... }  /* one ring per queue */

Region 1+: Data - packet buffers
  Large contiguous buffer space subdivided into fixed-size slots
  Each slot = memif_buffer_size bytes (default 2048)

/* Descriptor - one per buffer slot */
typedef struct {
    u16  flags;         /* MEMIF_DESC_FLAG_NEXT = chained buffer */
    u16  region;        /* which shared memory region holds the data */
    u32  length;        /* bytes of valid data */
    u32  offset;        /* byte offset within the region */
    u32  metadata;      /* opaque: user can store anything */
} memif_desc_t;

/* Ring header (simplified - the real struct is cacheline-aligned and also carries a cookie) */
typedef struct {
    u16  head;          /* producer writes here */
    u16  tail;          /* consumer reads here */
    u16  flags;         /* MEMIF_RING_FLAG_MASK_INT: disable interrupts */
    memif_desc_t desc[ring_size];
} memif_ring_t;

/* TX side: advance head after filling descriptors */
/* RX side: read from tail, advance tail after processing */
/* ring is full when (head - tail) == ring_size */
⚙️ DPDK PARALLEL - rte_ring vs memif ring
  • memif ring is semantically equivalent to an rte_ring of buffer descriptors shared between two processes. The key difference: rte_ring's default multi-producer/multi-consumer mode relies on atomic compare-and-swap; memif needs no CAS at all - each queue is single-producer/single-consumer, so plain loads and stores with acquire/release ordering suffice (see the sketch after this list)
  • memif is designed for SPSC (single producer single consumer) per queue - each queue pair has exactly one writer and one reader. For multi-queue, create multiple queue pairs
  • memif's zero-copy model means the packet bytes sit in the shared region and are never copied between peers - analogous to what you'd achieve with DPDK's rte_ring of rte_mbuf pointers, but without the IPC overhead of separate mempools
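
To make the SPSC discipline concrete, here is a minimal single-packet enqueue/dequeue sketch against the simplified memif_ring_t / memif_desc_t structures above. It is illustrative, not libmemif source: real memif batches descriptor updates and handles chained buffers. It assumes the VPP-style u8/u16/u32 typedefs, <string.h> for memcpy, and a placeholder process_packet() consumer.

#define RING_SIZE 1024               /* power of two, so u16 indices wrap safely */
#define RING_MASK (RING_SIZE - 1)

/* Producer: fill the descriptor at head, then publish the new head */
static int
ring_enqueue (memif_ring_t *ring, u8 *region_base, void *pkt, u32 len)
{
    u16 head = ring->head;
    u16 tail = __atomic_load_n (&ring->tail, __ATOMIC_ACQUIRE);

    if ((u16) (head - tail) == RING_SIZE)
        return -1;                               /* ring full */

    memif_desc_t *d = &ring->desc[head & RING_MASK];
    /* d->offset was set at init time to point at this slot's buffer */
    memcpy (region_base + d->offset, pkt, len);  /* write into shared slot */
    d->length = len;

    /* release: descriptor contents must be visible before head moves */
    __atomic_store_n (&ring->head, (u16) (head + 1), __ATOMIC_RELEASE);
    return 0;
}

/* Consumer: read the descriptor at tail, then publish the new tail */
static int
ring_dequeue (memif_ring_t *ring, u8 *region_base)
{
    u16 tail = ring->tail;
    u16 head = __atomic_load_n (&ring->head, __ATOMIC_ACQUIRE);

    if (head == tail)
        return -1;                               /* ring empty */

    memif_desc_t *d = &ring->desc[tail & RING_MASK];
    process_packet (region_base + d->offset, d->length);  /* consume in place */

    /* release: producer may reuse the slot only after tail moves */
    __atomic_store_n (&ring->tail, (u16) (tail + 1), __ATOMIC_RELEASE);
    return 0;
}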

VPP CLI SETUP

💻

Complete memif CLI Reference

CLI
# ── VPP INSTANCE A: server (master) ──

# Create a memif socket (path to Unix socket file)
create memif socket id 1 filename /run/vpp/memif-a.sock

# Create memif interface in server (master) mode
create interface memif id 0 socket-id 1 master rx-queues 2 tx-queues 2 \
  ring-size 1024 buffer-size 2048

# Bring up and configure (interface name is memif<socket-id>/<id>)
set interface state memif1/0 up
set interface ip address memif1/0 10.10.0.1/30

# ── VPP INSTANCE B: client (slave) ──
create memif socket id 1 filename /run/vpp/memif-a.sock
create interface memif id 0 socket-id 1 slave rx-queues 2 tx-queues 2 \
  ring-size 1024 buffer-size 2048
set interface state memif1/0 up
set interface ip address memif1/0 10.10.0.2/30

# Verify connection status
show memif
# Should show: id 0, socket memif-a.sock, state connected, role master

show interface memif1/0
# Should show: link-up, rx/tx packet counters

# ── Zero-copy mode ──
# Configured on the slave (client) side: the slave exposes its own
# buffer memory as the shared region, so packet data is never copied
create interface memif id 1 socket-id 1 slave zero-copy

# ── L2 bridge use case (two memif ports in a VPP bridge domain) ──
create bridge-domain 10 learn 1 forward 1 flood 1
set interface l2 bridge memif1/0 10
set interface l2 bridge memif1/1 10
CLI Command                    Purpose
show memif                     All memif interfaces with socket path, role, connection state
show memif socket              All registered memif sockets
show memif <if>                Detailed view: queue count, ring size, buffer size, descriptor counts
delete interface memif <if>    Remove a memif interface (disconnects the peer)
delete memif socket id N       Remove a memif socket (must have no interfaces using it)

LIBMEMIF - C API FOR THIRD-PARTY APPS

📚

libmemif API - Connect Any Process to VPP

LIBRARY

libmemif (extras/libmemif/) is a standalone C library that implements the memif protocol. Any process - DPDK app, Python via ctypes, Go via cgo - can use it to create a memif peer that connects to VPP without running a full VPP instance.

/* Include */
#include "libmemif.h"

/* Step 1: Initialise the library */
memif_init(NULL, "my_app", NULL, NULL, NULL);

/* Step 2: Create a socket (path must match VPP's socket) */
memif_socket_handle_t sock;
memif_socket_args_t sock_args = {
    .path = "/run/vpp/memif-a.sock",
};
memif_create_socket(&sock, &sock_args, NULL);

/* Step 3: Create the memif connection as client (slave) */
memif_conn_handle_t conn;
memif_conn_args_t args = {
    .socket     = sock,
    .interface_id = 0,
    .is_master  = 0,        /* 0 = client/slave */
    .num_s2m_rings = 1,     /* slave-to-master queues */
    .num_m2s_rings = 1,
    .buffer_size   = 2048,
    .log2_ring_size = 10,   /* ring_size = 1024 */
};
memif_create(&conn, &args,
    on_connect_cb, on_disconnect_cb, on_interrupt_cb, NULL);

/* Step 4: Poll the socket (drives connection setup) */
while (running) {
    memif_poll_event(sock, 0 /* timeout ms */);
}

/* Step 5: TX - after on_connect_cb fires */
memif_buffer_t bufs[16];
u16 n_alloc;
memif_buffer_alloc(conn, 0 /* queue */, bufs, 16, &n_alloc, 2048);
for (int i = 0; i < n_alloc; i++) {
    /* bufs[i].data points to the shared memory region */
    memcpy(bufs[i].data, my_packet_data, my_packet_len);
    bufs[i].len = my_packet_len;
}
u16 n_tx;
memif_tx_burst(conn, 0, bufs, n_alloc, &n_tx);

/* Step 6: RX */
memif_buffer_t rx_bufs[256];
u16 n_rx;
memif_rx_burst(conn, 0, rx_bufs, 256, &n_rx);
for (int i = 0; i < n_rx; i++) {
    process_packet(rx_bufs[i].data, rx_bufs[i].len);
}
memif_refill_queue(conn, 0, n_rx, 0);
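
The snippet registers on_connect_cb, on_disconnect_cb, and on_interrupt_cb without defining them. A minimal sketch of those, following the libmemif callback signatures (int return, connection handle plus private context; the interrupt callback also receives the queue id):

/* Fires once the handshake completes - safe to start I/O from here */
static int
on_connect_cb (memif_conn_handle_t conn, void *private_ctx)
{
    /* pre-post RX buffers so the peer has slots to write into */
    memif_refill_queue (conn, 0 /* queue */, -1 /* as many as fit */, 0);
    return 0;
}

static int
on_disconnect_cb (memif_conn_handle_t conn, void *private_ctx)
{
    /* peer went away; in-flight memif_buffer_t entries are now invalid */
    return 0;
}

/* Fires when the peer signals a queue via eventfd (interrupt rx mode) */
static int
on_interrupt_cb (memif_conn_handle_t conn, void *private_ctx, u16 qid)
{
    /* drain the signalled queue with memif_rx_burst + memif_refill_queue */
    return 0;
}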

libmemif also has Python bindings via ctypes: extras/libmemif/python/libmemif.py. This is what you use in Project 5 to build the Python control-plane client.

DPDK net_memif PMD - CONNECT TESTPMD TO VPP

net_memif PMD - testpmd ↔ VPP

DPDK INTEGRATION

DPDK's net_memif PMD (drivers/net/memif/) implements the memif protocol as a DPDK poll-mode driver. This means testpmd, your DPDK forwarding application, or any DPDK-based app can connect directly to VPP as a memif peer - without running a second VPP instance.

# ── VPP side: set up as master ──
create memif socket id 1 filename /run/vpp/memif-dpdk.sock
create interface memif id 0 socket-id 1 master rx-queues 1 tx-queues 1
set interface state memif1/0 up

# ── DPDK testpmd side: connect as slave ──
dpdk-testpmd \
  --vdev="net_memif,socket=/run/vpp/memif-dpdk.sock,id=0,role=slave" \
  --no-pci \
  -- -i \
     --port-topology=chained \
     --rxq=1 --txq=1 \
     --nb-cores=1

# ── In testpmd interactive shell ──
testpmd> set fwd txonly
testpmd> start
# Now VPP receives packets on memif1/0
# Check: vppctl show interface memif1/0

# ── For zero-copy (DPDK side must match VPP buffer layout) ──
--vdev="net_memif,socket=/run/vpp/memif-dpdk.sock,id=0,role=slave,zero-copy=yes"
# zero-copy requires DPDK mbufs sized to match VPP's buffer-size (2048)
net_memif PMD Option   Description                    Must Match VPP?
socket                 Path to Unix socket file       Yes - exact path
id                     memif interface ID             Yes - must match VPP's id N
role=slave             Client role (VPP is master)    Yes - roles must be opposite
ring-size              Ring descriptor count          No - negotiated during handshake
pkt-buffer-size        Buffer size in bytes           Recommended: match VPP's buffer-size
zero-copy              Enable zero-copy mode          Both sides must agree
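
Nothing about the vdev string is testpmd-specific; any ethdev application can hand it to the EAL. A minimal sketch using standard EAL/ethdev calls (port configuration elided; the vdev arguments mirror the testpmd example above):

#include <rte_eal.h>
#include <rte_ethdev.h>

int
main (void)
{
    /* same vdev string as the testpmd example, as synthetic EAL argv */
    char *eal_args[] = {
        "memif_app", "--no-pci",
        "--vdev=net_memif,socket=/run/vpp/memif-dpdk.sock,id=0,role=slave",
    };
    if (rte_eal_init (3, eal_args) < 0)
        return -1;

    /* the memif vdev appears as an ordinary ethdev port */
    uint16_t port_id;
    RTE_ETH_FOREACH_DEV (port_id)
    {
        /* rte_eth_dev_configure + queue setup + rte_eth_dev_start here,
           then rte_eth_rx_burst / rte_eth_tx_burst as with any PMD */
    }
    return 0;
}
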
PROJECT 5

memif vSwitch - 3-Container Topology

Objective: Build a 3-container virtual switch using VPP as the central switch with memif interfaces. Container A and Container C are DPDK testpmd instances connected to VPP via memif. A Python libmemif client from Container B monitors traffic on a third memif interface (mirror port).

1. Container A: run testpmd with the net_memif PMD as slave, connected to /run/shared/memif-a.sock. Container C: testpmd as slave on /run/shared/memif-c.sock. Use Docker volumes to share the socket directory.
2. VPP (Container B): create two memif sockets, create a memif interface in master mode for each socket, create a bridge domain, and add both interfaces as L2 bridge members. Verify connectivity A→C with testpmd txonly/rxonly.
3. Add a third memif interface to VPP as a "mirror port". Use a feature arc or a custom tap-output node to copy each forwarded packet's metadata (src MAC, dst MAC, length) to the mirror memif.
4. Write a Python script using the libmemif Python bindings that connects to the mirror memif socket and prints per-second packet counts, unique source MACs seen, and bytes forwarded. Run it while A→C traffic is flowing.
5. Benchmark: send at increasing rates (100 Kpps → 1 Mpps → 5 Mpps) from testpmd. Record the maximum forwarding rate VPP sustains without packet drops (check show errors for drops). Note VPP CPU utilisation at each rate.
6. Test zero-copy mode: enable zero-copy on all memif interfaces (both the VPP and testpmd sides). Re-run the benchmark and compare peak throughput and CPU usage with and without zero-copy.

P3B COMPLETION CHECKLIST

✅ Next: P3C - TAP v2, AF_XDP, vhost-user, and AF_PACKET. These complete your knowledge of every interface type in VPP's arsenal.
