POLICY ENGINE — THE BRAIN OF THE NGFW
What a Policy Engine Does
The policy engine translates human-readable security rules into machine-executable lookup structures that can classify millions of packets per second. It is the authoritative decision-maker for every packet: permit, deny, inspect, rate-limit, NAT, or log.
Policy engines face a fundamental tension: rules are specified in human terms (zones, users, applications, threat levels) that are rich and overlapping, but packet processing requires O(1) or O(log n) decisions per packet. The policy compiler's job is to resolve this tension by pre-computing decision structures at rule-load time, not at packet-time.
Policy Inputs (Rule Fields)
- Source zone / interface
- Destination zone / interface
- Source IP / prefix / address-object
- Destination IP / prefix / address-object
- Application (app_id from DPI)
- Service (port, protocol)
- User / user-group (from AD/LDAP)
- URL category (from URL filter DB)
- Threat level (from IPS score)
- Time-of-day / schedule
Policy Actions
- permit — forward without further inspection
- deny — drop packet (silent)
- reject — drop + send TCP RST or ICMP unreachable
- inspect — continue to IPS + DLP engine
- ssl-decrypt — force TLS inspection
- nat — apply NAT rule
- rate-limit — traffic shaping
- log — record to SIEM
- quarantine — redirect to captive portal
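A minimal sketch of how these actions might be encoded for the action field of a compiled rule. The numeric values are assumptions, not a vendor ABI; ACTION_DEFAULT_DENY matches the implicit deny used later in policy_lookup().
typedef enum ngfw_action {
    ACTION_PERMIT       = 0,   /* forward without further inspection */
    ACTION_DENY         = 1,   /* silent drop */
    ACTION_REJECT       = 2,   /* drop + send TCP RST or ICMP unreachable */
    ACTION_INSPECT      = 3,   /* continue to IPS + DLP engine */
    ACTION_SSL_DECRYPT  = 4,   /* force TLS inspection */
    ACTION_NAT          = 5,   /* apply NAT rule */
    ACTION_RATE_LIMIT   = 6,   /* traffic shaping */
    ACTION_LOG          = 7,   /* record to SIEM (usually combined with another action) */
    ACTION_QUARANTINE   = 8,   /* redirect to captive portal */
    ACTION_DEFAULT_DENY = 255  /* implicit deny at the end of the rule list */
} ngfw_action_t;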
RULE COMPILATION — FROM HUMAN RULES TO FAST LOOKUP STRUCTURES
Rule Compiler Architecture
/* Human-readable policy rule example */
Rule 47:
from-zone: trust
to-zone: untrust
source: HR-Group-Subnet (10.10.50.0/24)
dest: any
app: social-media (facebook, instagram, twitter, tiktok)
time: work-hours (Mon-Fri 08:00-18:00)
action: deny
log: yes
/* Challenge: at packet time, we only have a five-tuple + app_id */
/* We need to evaluate 10K+ rules in nanoseconds */
/* Solution: compile rules into optimised lookup structures */
/* Step 1: Decompose rules into primitive match fields */
typedef struct compiled_rule {
/* IP prefix ranges (compiled from address objects) */
uint32_t src_ip_lo, src_ip_hi;
uint32_t dst_ip_lo, dst_ip_hi;
/* Port ranges */
uint16_t src_port_lo, src_port_hi;
uint16_t dst_port_lo, dst_port_hi;
/* Protocol bitmap */
uint32_t proto_mask; /* bit per protocol number */
/* Zone IDs */
uint16_t src_zone_id;
uint16_t dst_zone_id;
/* App IDs this rule matches (bitmap over app IDs 0–127) */
uint32_t app_id_bitmap[4]; /* 128 app IDs as bitmap */
/* Action */
uint8_t action;
uint8_t log;
uint8_t ssl_inspect;
uint8_t ips_profile;
/* Rule metadata */
uint32_t rule_id;
uint32_t hit_count;
uint64_t last_hit_ns;
} compiled_rule_t;
/* Step 2: Build classifier structures */
/* For IP-range matching: interval tree or PATRICIA trie */
/* For most rules: two-level hash (zone pair → rule subset) */
typedef struct policy_table {
/* Index 1: zone pair (src_zone × dst_zone) → rule_list */
/* Typical: 10 zones → 100 zone pairs → small list per pair */
rule_list_t zone_rules[MAX_ZONES][MAX_ZONES];
/* For each zone pair: sorted by specificity for first-match */
/* More specific rules listed first: /32 before /24 before /0 */
/* Index 2: 5-tuple prefix hash for most common rules */
/* Pre-computed: all /32 source + /32 dest combinations → direct action */
struct rte_hash *exact_match_cache; /* DPDK cuckoo hash */
} policy_table_t;
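/* Sketch (an assumption, not the original implementation): the per-rule match
 * predicate used by policy_lookup() below. session_t field names (src_ip,
 * dst_ip, ports, proto, app_id, zones) are assumed from context. */
static inline int rule_matches(const compiled_rule_t *r, const session_t *s) {
    if (s->src_zone != r->src_zone_id || s->dst_zone != r->dst_zone_id)
        return 0;
    if (s->src_ip < r->src_ip_lo || s->src_ip > r->src_ip_hi)
        return 0;
    if (s->dst_ip < r->dst_ip_lo || s->dst_ip > r->dst_ip_hi)
        return 0;
    if (s->src_port < r->src_port_lo || s->src_port > r->src_port_hi)
        return 0;
    if (s->dst_port < r->dst_port_lo || s->dst_port > r->dst_port_hi)
        return 0;
    /* proto_mask as declared above covers protocols 0-31 only; a full
     * implementation needs a 256-bit map for all IP protocol numbers */
    if (!(r->proto_mask & (1u << (s->proto & 31))))
        return 0;
    /* app_id_bitmap covers app IDs 0-127 (4 x 32 bits) */
    if (s->app_id >= 128 ||
        !(r->app_id_bitmap[s->app_id >> 5] & (1u << (s->app_id & 31))))
        return 0;
    return 1;
}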
/* Step 3: Fast-path lookup */
uint8_t policy_lookup(policy_table_t *pt, session_t *s) {
/* Fast path: exact match cache (pre-populated for common flows) */
void *data;
if (rte_hash_lookup_data(pt->exact_match_cache,
                         &s->key, &data) >= 0)
    return (uint8_t)(uintptr_t)data; /* action was stored as a pointer-sized integer */
/* Slow path: walk rule list for this zone pair */
rule_list_t *rl = &pt->zone_rules[s->src_zone][s->dst_zone];
for (int i = 0; i < rl->n_rules; i++) {
compiled_rule_t *r = &rl->rules[i];
if (rule_matches(r, s)) {
/* Add to exact match cache to speed up future identical flows */
rte_hash_add_key_data(pt->exact_match_cache, &s->key,
(void *)(uintptr_t)r->action);
r->hit_count++;
r->last_hit_ns = rte_get_tsc_cycles(); /* stored as TSC cycles; convert via rte_get_tsc_hz() if nanoseconds are needed */
return r->action;
}
}
return ACTION_DEFAULT_DENY; /* implicit deny at end of rule list */
}
💡 Rule compilation is triggered by every policy change. The compilation step can take 100ms–10s depending on rule complexity. During this time, packets continue using the old policy table. The atomic pointer swap (same pattern as threat intel updates) ensures zero-disruption policy updates — critical for carrier-grade NGFWs.
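A minimal sketch of that atomic-swap pattern, assuming a single global pointer that workers read on each first-packet lookup; the grace-period and free helpers are hypothetical names.
#include <stdatomic.h>

static _Atomic(policy_table_t *) active_policy;   /* read by all worker threads */

void wait_for_workers_to_drain(void);             /* hypothetical grace-period helper */
void policy_table_free(policy_table_t *pt);       /* hypothetical */

/* Control-plane thread: compile the new table off to the side, then publish it. */
void install_policy(policy_table_t *new_pt) {
    policy_table_t *old_pt =
        atomic_exchange_explicit(&active_policy, new_pt, memory_order_release);
    /* Grace period: wait until no worker can still be using old_pt
     * (per-worker epoch counters or RCU-style quiescent-state detection). */
    wait_for_workers_to_drain();
    policy_table_free(old_pt);
}

/* Forwarding worker: take a consistent snapshot for this packet's lookup. */
static inline policy_table_t *policy_acquire(void) {
    return atomic_load_explicit(&active_policy, memory_order_acquire);
}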
POLICY EVALUATION — FIRST-MATCH vs BEST-MATCH
Rule Matching Semantics
/* First-match semantics (most NGFW products, iptables, Snort) */
/* Rules evaluated in order; FIRST matching rule wins */
/* Implication: rule ordering MATTERS CRITICALLY */
/* More specific rules must come BEFORE less specific rules */
/* Example — correct ordering (first-match) */
Rule 1: src=10.1.0.5/32 dst=8.8.8.8/32 proto=UDP port=53 → PERMIT
Rule 2: src=10.1.0.5/32 dst=any proto=any → DENY (block this host)
Rule 3: src=10.1.0.0/24 dst=any proto=any → PERMIT
Rule 99: src=any dst=any → DENY (implicit)
/* Packet from 10.1.0.5 → 8.8.8.8:53 → matches Rule 1 → PERMIT */
/* Packet from 10.1.0.5 → 1.2.3.4 → matches Rule 2 → DENY */
/* Packet from 10.1.0.10 → anywhere → matches Rule 3 → PERMIT */
/* If Rule 3 were placed before Rule 2: Rule 2 would never fire! */
/* Best-match semantics (BGP routing, some firewall vendors) */
/* Most specific matching rule wins (longest prefix) */
/* Rule ordering does NOT matter */
/* More complex to implement but harder to get wrong */
/* Used by: Juniper SRX (route-based mode), some SDN firewalls */
/* Shadow rule detection — compiler-time check */
/* Rule A is "shadowed" by Rule B if: */
/* Rule B appears before Rule A AND Rule B matches all packets Rule A would match */
/* Shadow = Rule A can never fire (dead code) */
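/* Sketch (assumption): the field-coverage test used by detect_shadow() below.
 * Rule B covers Rule A when every match dimension of B is a superset of A's. */
static int rule_covers(const compiled_rule_t *b, const compiled_rule_t *a) {
    for (int w = 0; w < 4; w++)
        if (a->app_id_bitmap[w] & ~b->app_id_bitmap[w])
            return 0;
    return b->src_zone_id == a->src_zone_id &&
           b->dst_zone_id == a->dst_zone_id &&
           b->src_ip_lo   <= a->src_ip_lo   && b->src_ip_hi   >= a->src_ip_hi   &&
           b->dst_ip_lo   <= a->dst_ip_lo   && b->dst_ip_hi   >= a->dst_ip_hi   &&
           b->src_port_lo <= a->src_port_lo && b->src_port_hi >= a->src_port_hi &&
           b->dst_port_lo <= a->dst_port_lo && b->dst_port_hi >= a->dst_port_hi &&
           (a->proto_mask & ~b->proto_mask) == 0;
}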
int detect_shadow(compiled_rule_t *rules, int n) {
    int n_shadowed = 0;
    for (int i = 1; i < n; i++) {
        for (int j = 0; j < i; j++) {
            if (rule_covers(&rules[j], &rules[i])) {
                fprintf(stderr, "Rule %u shadowed by Rule %u\n",
                        rules[i].rule_id, rules[j].rule_id);
                /* In production: warn admin and optionally remove shadow rule */
                n_shadowed++;
                break; /* one covering rule is enough to declare a shadow */
            }
        }
    }
    return n_shadowed;
}
/* Rule conflict detection */
/* Rule A and Rule B conflict if: same traffic can match both */
/* but they have different actions */
/* Resolution: first-match resolves automatically, but warn admin */
/* Policy diff — show changes between two policy versions */
void policy_diff(policy_table_t *old_pt, policy_table_t *new_pt) {
/* For each rule in new: present in old? Same action? */
/* For each rule in old: removed from new? */
/* Output: added, removed, changed, reordered rules */
/* Critical for audit trail: every policy change must be logged */
}
ZONE-BASED POLICY — ENTERPRISE SEGMENTATION
Security Zones and Inter-Zone Policy
/* Security zones: logical groups of interfaces/subnets with same trust level */
Zone model (typical enterprise NGFW):
INTERNET — untrusted external connections (trust=0)
DMZ — public-facing servers: web, DNS, SMTP (trust=10)
TRUST — internal corporate network (trust=50)
SERVERS — internal server segment (trust=60)
MGMT — management network: SSH/SNMP to NGFW itself (trust=90)
VPN — remote access VPN clients (trust=40)
GUEST — guest WiFi (trust=5)
/* Default inter-zone policy (implicit) */
Same-zone: PERMIT (traffic within same zone flows freely)
Cross-zone: DENY (all inter-zone traffic denied unless explicitly permitted)
/* This is the zero-trust baseline */
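/* Minimal sketch (assumption) of the baseline above, using the action encoding
 * sketched earlier: when no explicit rule has matched, traffic staying inside a
 * zone is permitted and cross-zone traffic is denied. */
static inline uint8_t default_zone_action(uint8_t src_zone, uint8_t dst_zone) {
    return (src_zone == dst_zone) ? ACTION_PERMIT : ACTION_DENY;
}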
/* Zone definitions in VPP / iproute2 terms */
typedef struct {
char name[32];
uint8_t zone_id;
uint8_t trust_level; /* 0=untrusted, 100=fully trusted */
uint32_t interfaces[16]; /* sw_if_index list */
uint32_t subnets[16]; /* address ranges in this zone */
uint8_t n_interfaces;
uint8_t n_subnets;
} security_zone_t;
/* Zone determination for a packet */
uint8_t get_src_zone(session_t *s, security_zone_t *zones, int n_zones) {
/* Check which interface the packet arrived on */
uint32_t ingress_if = s->ingress_interface;
for (int z = 0; z < n_zones; z++)
for (int i = 0; i < zones[z].n_interfaces; i++)
if (zones[z].interfaces[i] == ingress_if)
return zones[z].zone_id;
return ZONE_UNKNOWN;
}
/* Standard inter-zone policy matrix */
/*
FROM/TO INTERNET DMZ TRUST SERVERS MGMT VPN GUEST
INTERNET - lim deny deny deny deny deny
DMZ any - deny lim deny deny deny
TRUST lim any - any deny - deny
SERVERS deny deny any - deny deny deny
MGMT deny deny deny deny - deny deny
VPN lim lim any lim deny - deny
GUEST HTTP/S deny deny deny deny deny -
lim = limited (specific ports only)
*/
/* Intra-zone security (lateral movement prevention) */
/* Even within TRUST zone, east-west traffic can be restricted */
/* Micro-segmentation: HR-VLAN cannot reach Finance-VLAN directly */
/* Implementation: sub-zones, or additional per-prefix rules */
/* Zone policy for your Jio NGFW project */
/* CUSTOMER-LAN: customer traffic requiring NGFW inspection */
/* CORE: peering/transit links */
/* MGMT: out-of-band management */
/* IDS-COPY: mirrored traffic for Suricata passive inspection */
LOGGING AND SIEM INTEGRATION
Structured Logging at NGFW Scale
/* Log record schema — one record per session close */
typedef struct ngfw_log_record {
/* Timestamps */
uint64_t session_start_ns;
uint64_t session_end_ns;
uint32_t duration_ms;
/* Five-tuple */
char src_ip[40]; /* text form */
char dst_ip[40];
uint16_t src_port;
uint16_t dst_port;
uint8_t proto;
char proto_str[8]; /* "TCP", "UDP", "ICMP" */
/* Policy */
uint32_t policy_id;
char policy_name[64];
char src_zone[32];
char dst_zone[32];
char action[16]; /* "permit", "deny", "reset" */
/* Application */
uint16_t app_id;
char app_name[64]; /* "HTTPS", "Netflix", "BitTorrent" */
char url_category[32]; /* "Streaming", "Social Media", etc. */
char url[512]; /* if HTTP inspection active */
/* Traffic */
uint64_t bytes_sent;
uint64_t bytes_received;
uint64_t pkts_sent;
uint64_t pkts_received;
/* Security */
uint8_t ssl_inspected;
char tls_sni[256];
char ja3_hash[33];
uint16_t threat_id;
char threat_name[128];
uint8_t threat_severity; /* 1=critical, 2=high, 3=medium, 4=low */
/* NAT */
char nat_src_ip[40];
uint16_t nat_src_port;
} ngfw_log_record_t;
/* High-performance logging architecture */
/* Problem: at 1M flows/second, synchronous write blocks forwarding */
/* Solution: lockless ring buffer → background logger thread */
#define LOG_RING_SIZE (1 << 20) /* 1M entries */
struct rte_ring *log_ring;
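/* Sketch (assumption): one-time ring creation at startup (needs <rte_ring.h> and
 * <rte_lcore.h>). The SP/SC flags assume exactly one enqueueing forwarding thread
 * and one logger thread; with several worker cores enqueueing, drop RING_F_SP_ENQ
 * to fall back to the default multi-producer mode. */
static int log_ring_init(void) {
    log_ring = rte_ring_create("ngfw_log", LOG_RING_SIZE,
                               rte_socket_id(),
                               RING_F_SP_ENQ | RING_F_SC_DEQ);
    return log_ring ? 0 : -1;
}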
/* In forwarding thread (non-blocking) */
void session_close_log(session_t *s) {
ngfw_log_record_t *rec = log_record_alloc(); /* from pool */
session_to_log_record(s, rec);
if (rte_ring_enqueue(log_ring, rec) != 0) {
/* Ring full → drop log record (or overflow counter++) */
log_record_free(rec);
}
}
/* In logger thread (background) */
void *logger_thread(void *arg) {
ngfw_log_record_t *recs[64];
while (1) {
int n = rte_ring_dequeue_burst(log_ring, (void **)recs, 64, NULL);
if (n > 0) {
/* Format as JSON and write to syslog / Kafka / Elasticsearch */
for (int i = 0; i < n; i++) {
char buf[4096];
record_to_json(recs[i], buf, sizeof(buf));
syslog_send(buf); /* or Kafka / HTTP */
log_record_free(recs[i]);
}
} else {
rte_delay_us(100); /* back off briefly when the ring is empty */
}
}
}
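/* Sketch (assumption): a minimal record_to_json() as called above. Only a few
 * fields are shown and string escaping is omitted; a production formatter would
 * emit every field (or CEF for legacy SIEMs). Needs <stdio.h>. */
int record_to_json(const ngfw_log_record_t *r, char *buf, size_t len) {
    return snprintf(buf, len,
        "{\"ts_end_ns\":%llu,\"src\":\"%s\",\"dst\":\"%s\","
        "\"sport\":%u,\"dport\":%u,\"proto\":\"%s\","
        "\"policy\":\"%s\",\"action\":\"%s\",\"app\":\"%s\","
        "\"bytes_sent\":%llu,\"bytes_rcvd\":%llu}",
        (unsigned long long)r->session_end_ns, r->src_ip, r->dst_ip,
        r->src_port, r->dst_port, r->proto_str,
        r->policy_name, r->action, r->app_name,
        (unsigned long long)r->bytes_sent,
        (unsigned long long)r->bytes_received);
}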
/* SIEM integration targets */
/* Kafka → Elasticsearch → Kibana (ELK stack): standard for large-scale */
/* Splunk: popular commercial SIEM */
/* Graylog: open-source alternative */
/* syslog-ng / rsyslog: for traditional syslog-based SIEMs */
/* CEF (Common Event Format) for interoperability */
/* "CEF:0|Jio|NGFW|1.0|100|Connection Denied|3|src=10.1.0.5 dst=8.8.8.8 ..." */COMPLETE NGFW ARCHITECTURE — ALL MODULES INTEGRATED
Full NGFW Data Plane — Component Integration
/* Complete NGFW packet processing pipeline */
/* Built on VPP (from M18) with all modules integrated */
INGRESS PACKET (from NIC via DPDK)
│
▼
[dpdk-input] DPDK PMD, burst receive, mbuf allocation
│
▼
[ethernet-input] L2 demux, MAC learning, VLAN stripping
│
▼
[ip4-input] IP header validation, TTL check, checksum verify
IP defragmentation (reassemble before conntrack)
│
▼
[ip4-unicast feature arc] ← VPP feature arc — ordered insertion points
│
├── [ngfw-zone-lookup] Classify src_zone and dst_zone
│ Set vnet_buffer meta: src/dst zone IDs
│
├── [acl-plugin-in-ip4-fa] Conntrack (M23): session lookup or create
│ TCP state machine
│ First-packet: evaluate policy (M26)
│ Cache: action in session entry
│
├── [ngfw-nat-in2out] NAT44-ED (M23): DNAT inbound
│ Rewrite dst_ip, dst_port, update checksums
│
├── [ngfw-dpi-node] DPI (M24): protocol dissection
│ App identification (app_id → session)
│ Hyperscan stream scan
│
├── [ngfw-ips-node] IPS (M25): Suricata rules inline
│ Threat intel IoC check
│ Beacon / anomaly scores
│
└── [ngfw-ssl-bump-node] SSL inspection (M22): TLS MITM if required
Generate forged cert, maintain two TLS legs
│
▼
[ip4-lookup] FIB lookup (M18 VPP FIB): find output interface
│
▼
[ip4-rewrite] Next-hop MAC rewrite (adjacency)
│
▼
[ip4-output feature arc]
│
├── [ngfw-nat-out2in] NAT44-ED: SNAT outbound
│ Rewrite src_ip, src_port, checksums
│
└── [ngfw-log-node] Session log (background ring buffer)
│
▼
[interface-output] TX queue, DPDK PMD transmit, batch to NIC
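/* Sketch (assumption): how one of the ngfw-* nodes above could be attached to the
 * ip4-unicast feature arc from a VPP plugin. Node and arc names come from the
 * diagram; the runs_before constraint is illustrative. Needs <vnet/feature/feature.h>. */
VNET_FEATURE_INIT (ngfw_zone_lookup, static) =
{
  .arc_name    = "ip4-unicast",
  .node_name   = "ngfw-zone-lookup",
  .runs_before = VNET_FEATURES ("acl-plugin-in-ip4-fa"),
};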
/* Control plane (separate from data plane) */
Control Plane Components:
Policy Manager: compile and install policy tables
Threat Intel: ingest feeds, maintain IoC databases
Certificate Manager: generate inspection certs, manage CA
Session Manager: monitor session table, enforce limits
Stats Collector: per-rule hit counts, per-app bytes, per-zone traffic
SIEM Exporter: consume log ring, format, forward to Kafka/syslog
REST API: policy CRUD, stats queries, operational commands
CLI: vppctl + custom NGFW CLI commands
/* Performance targets for production NGFW on 10G dual-port Mellanox */
/* (Based on your team's ConnectX infrastructure) */
Throughput:        10 Gbps bidirectional (line rate)
Sessions:          1M concurrent
New sessions/sec:  100K/s (TCP with 3-way handshake)
DPI throughput:    5–8 Gbps (with Hyperscan, 1000 sigs)
SSL inspect:       2–4 Gbps (crypto is the bottleneck)
Latency (add):     <100µs for established flows (DPDK)
Latency (add):     <500µs for new flows (session creation + policy eval)
CPU cores needed:  6–10 worker cores + 2 management cores
Memory:            16GB (1M sessions + DPI state + threat intel)
/* VPP worker affinity */
/* Workers 0-3: packet processing (pinned to NUMA 0, same as NIC) */
/* Workers 4-5: SSL inspection offload (CPU-intensive) */
/* Worker 6: management plane (policy updates, CLI) */
/* Worker 7: logging + SIEM export */
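A minimal sketch (assumption) of the affinity above expressed as a VPP startup.conf cpu stanza; core numbers are illustrative and must be matched to the host's actual NUMA layout.
cpu {
    main-core 1
    corelist-workers 2-9
}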
PERFORMANCE BENCHMARKING — MEASURING NGFW THROUGHPUT
NGFW Performance Testing Methodology
/* NGFW performance testing: RFC 2544 + security-specific extensions */
/* Tool: TRex (Cisco) — stateful traffic generator running on DPDK */
/* Alternative: Ixia, Spirent (commercial); MoonGen (academic) */
/* Test 1: Maximum Throughput (Raw forwarding, no inspection) */
/* Establish baseline: how fast can the data plane forward? */
/* Packet sizes: 64B, 128B, 256B, 512B, 1024B, 1518B */
/* Target: line rate (14.88 Mpps at 10Gbps for 64B packets) */
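/* Worked arithmetic for that figure: each 64B frame costs 64B + 8B preamble
 * + 12B inter-frame gap = 84B on the wire, so
 * 10e9 bit/s ÷ (84 B × 8 bit/B) = 10e9 / 672 ≈ 14.88 Mpps */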
/* Test 2: Connections Per Second */
/* Generate new TCP connections rapidly */
/* Measure: how many SYN→SYN-ACK→ACK→FIN per second */
/* Bottleneck: session table insertion, policy evaluation */
/* Target: 100K+ CPS */
/* Test 3: Maximum Concurrent Sessions */
/* Fill session table: open millions of connections, keep alive */
/* Measure: throughput degradation as table fills */
/* Observe: when does hash collision rate become significant? */
/* Test 4: DPI Impact */
/* Repeat Test 1 with DPI enabled */
/* Compare throughput with DPI on vs off */
/* Test with: 100 sigs, 1000 sigs, 10000 sigs */
/* Measure: Gbps lost per 1000 additional signatures */
/* Test 5: SSL Inspection Throughput */
/* TLS 1.3 connections at various key sizes */
/* Compare: AES-128-GCM vs AES-256-GCM vs ChaCha20-Poly1305 */
/* With hardware offload (QAT or Mellanox IPsec): compare vs software */
/* TRex stateful test configuration */
/*
port: 0
flows:
- clients: 10.0.0.0/16 # 65K clients
servers: 200.0.0.0/16 # 65K servers
transport: tcp
connections: 100000 # 100K concurrent
cps: 10000 # new connections per second
http: # HTTP/1.1 traffic profile
request_size: 512
response_size: 4096
*/
/* Metrics to capture */
typedef struct perf_metrics {
double throughput_gbps;
uint64_t pps; /* packets per second */
uint64_t cps; /* connections per second */
double latency_avg_us;
double latency_p99_us; /* 99th percentile latency */
double latency_p999_us; /* 99.9th percentile */
uint32_t drop_rate_ppm; /* drops per million packets */
uint32_t active_sessions;
uint32_t session_table_util_pct;
double cpu_util_pct;
double dpi_scan_gbps;
} perf_metrics_t;
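/* Sketch (assumption): filling the latency fields above from raw per-packet
 * samples collected during a test run (microseconds). Needs <stdlib.h>. */
static int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}
void fill_latency_metrics(perf_metrics_t *m, double *samples_us, size_t n) {
    if (n == 0) return;
    qsort(samples_us, n, sizeof(double), cmp_double);
    double sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += samples_us[i];
    m->latency_avg_us  = sum / n;
    m->latency_p99_us  = samples_us[(size_t)(0.99  * (n - 1))];
    m->latency_p999_us = samples_us[(size_t)(0.999 * (n - 1))];
}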
/* Monitoring during tests */
watch -n 1 'vppctl show run summary'     # VPP node performance
watch -n 1 'vppctl show interface'       # TX/RX stats
watch -n 1 'vppctl show nat44 summary'   # NAT session stats
perf stat -C 2,3,4,5 sleep 10            # CPU hardware counters
numastat -m                              # NUMA memory access
CAPSTONE PROJECT — YOUR TEAM'S NGFW
Design Document: Jio NGFW — Capstone Project
Your capstone project is to produce a detailed technical design document for your team's NGFW, incorporating all the knowledge from this curriculum. This document should be usable as the actual technical specification for your R&D work.
Capstone Deliverable Structure
- Executive Summary — What the NGFW must do; performance targets; technology stack choices and rationale (VPP + DPDK + Mellanox ConnectX)
- Data Plane Architecture — Complete VPP graph node pipeline diagram; all processing nodes, their order, and inter-node interfaces; how M18 VPP knowledge is applied
- Connection Tracking Design — Session table implementation: hash table choice, session_t struct fields, timer wheel, per-protocol state machines; sizing for your expected traffic profile
- NAT Implementation — Which NAT types required; NAPT pool sizing; DNAT rules for published services; hairpinning strategy; VPP NAT44-ED configuration
- DPI Engine — Pattern matching library choice (Hyperscan); initial signature set; protocol dissectors; app ID signals; per-flow state allocation strategy; memory budget
- Threat Detection — IPS integration (Suricata vs custom); threat intel feeds; beacon detection; DNS monitoring; alert thresholds and scoring
- Policy Engine — Zone model (which zones, trust levels); rule schema; compilation strategy; first-match vs best-match decision; shadow rule detection
- SSL Inspection — Which flows to inspect; CA hierarchy; certificate generation and caching; bypass list; ECH roadmap
- Performance Model — Expected throughput per subsystem; CPU core allocation; memory budget; NUMA topology; Mellanox offload utilisation (XFRM, checksum, TSO)
- Logging and Observability — Log schema; ring buffer sizing; SIEM target; operational metrics to expose
/* Capstone: suggested technology stack for Jio NGFW */
/*
Data Plane:       FD.io VPP 23.x on DPDK 23.x
NIC:              Mellanox ConnectX-6 Dx (100G, IPsec offload)
OS:               Ubuntu 22.04 LTS with RT kernel patch
Session Table:    clib_bihash_48_8 (VPP native)
DPI Engine:       Intel Hyperscan / Vectorscan (open-source)
IPS Rules:        Emerging Threats + custom Jio-specific rules
Pattern Matching: Hyperscan streaming mode (per-flow hs_stream)
SSL Inspect:      OpenSSL 3.x for cert generation; BoringSSL option
Threat Intel:     Feodo Tracker + AbuseCH + commercial feed TBD
Policy:           Custom compiled rule engine in C
Logging:          Ring buffer → Kafka → Elasticsearch
API:              REST (gRPC protobuf for performance-sensitive ops)
CLI:              vppctl + custom NGFW CLI (using vppctl framework)
Testing:          TRex for traffic generation; Suricata for IDS validation
*/
/* Decision: why VPP over custom DPDK */
/* Custom DPDK requires reimplementing: IPv4/IPv6 forwarding, ARP/ND, */
/* routing, fragmentation, GRE, VxLAN, MPLS, etc. — years of work */
/* VPP provides all of these plus a plugin framework and graph engine */
/* Estimated 12–18 months saved vs building raw DPDK pipeline from scratch */
/* VPP performance is within 5% of hand-optimised DPDK for most workloads */
Policy Engine with Rule Compiler
Objective: Build a policy engine that compiles human-readable rules into a fast lookup structure. Implement shadow rule detection and a zone matrix.
Implement policy_lookup() as shown in the rule-compilation section above. Benchmark: 100K lookups against a 500-rule policy table. Target: <1µs per lookup.
Structured Logging and SIEM Integration
Objective: Build a high-throughput logging pipeline from your NGFW data plane to Elasticsearch. Handle log ring overflow gracefully.
End-to-End NGFW Integration Test
Objective: Wire together all components built across M23–M26 into a single test harness. Verify the complete packet processing pipeline handles all scenarios correctly.
M26 MASTERY CHECKLIST
- Know policy engine role: translate human rules into O(1)–O(log n) lookup structures for line-rate packet classification
- Know policy rule fields: src/dst zone, src/dst IP, application, service, user, URL category, time, threat level
- Know policy actions: permit, deny, reject, inspect, ssl-decrypt, nat, rate-limit, log, quarantine
- Know rule compilation steps: parse rules → resolve address objects → build compiled_rule_t → zone-pair index → exact-match cache
- Know first-match semantics: rule ordering critical; specific before general; shadow rules cannot fire
- Know best-match semantics: longest prefix wins regardless of order; harder to implement, harder to get wrong
- Know shadow rule detection: Rule A shadowed if earlier Rule B covers all of A's traffic
- Know zero-downtime policy update: compile in background thread, atomic pointer swap, brief grace period
- Know zone-based policy: security zones group interfaces by trust level; cross-zone traffic denied by default
- Know typical enterprise zones: INTERNET(0), GUEST(5), DMZ(10), VPN(40), TRUST(50), SERVERS(60), MGMT(90)
- Know intra-zone security and micro-segmentation for lateral movement prevention
- Know log record schema: timestamps, five-tuple, policy, application, bytes, NAT info, threat info
- Know ring-buffer logging architecture: forwarding thread enqueues non-blocking; background thread drains and formats
- Know why synchronous logging blocks forwarding: disk/network I/O is orders of magnitude slower than packet forwarding
- Know CEF format: Common Event Format for SIEM interoperability
- Know SIEM integration stack: ring buffer → Kafka → Elasticsearch → Kibana
- Know complete NGFW pipeline: dpdk-input → L2/L3 → conntrack → NAT-in → DPI → IPS → SSL-bump → FIB → rewrite → NAT-out → logging → TX
- Know NGFW performance targets on 10G Mellanox: 10 Gbps forwarding, 100K CPS, 1M sessions, <100µs established flow latency
- Know VPP worker thread affinity model: NIC-local workers for forwarding, separate cores for SSL/logging
- Know RFC 2544 benchmarking: throughput, CPS, max sessions, latency, drop rate
- Know TRex as the standard DPDK-based stateful traffic generator for NGFW testing
- Know capstone document structure: executive summary, data plane architecture, conntrack, NAT, DPI, threat detection, policy, SSL, performance model, logging
- Completed Lab 1: policy engine with rule compiler, shadow detection, zone-pair index, exact-match cache, zero-downtime update
- Completed Lab 2: ring-buffer logging pipeline with Elasticsearch + Kibana dashboard
- Completed Capstone: end-to-end integration test; performance baseline measured; design document written
🎓 Networking Mastery Curriculum Complete
You have completed the full Networking Mastery curriculum — from OSI fundamentals through to a production NGFW data plane design. The journey covered: TCP/IP foundations, routing protocols, Linux networking, kernel bypass (eBPF, DPDK, VPP), security protocols (TLS, IPsec, PKI), and NGFW development (conntrack, NAT, DPI, IDS/IPS, policy engine).
Your capstone project is the synthesis: a complete technical design for your team's NGFW that applies every technique from every module. Use it to guide your R&D work. Update it as your team learns. Share it with colleagues joining the project.