M04 — gRPC & Protocol Buffers Deep Dive

Phase 1 Proto3 syntax & field encoding · Wire types & varint · 4 streaming modes · gRPC error model · Interceptors & middleware · gRPC-Gateway · Health checking · Schema evolution & field rules
⚡ Why gRPC Exists
REST over JSON is human-readable and universally supported — but every hop parses text, allocates strings, and re-serialises. Inside a datacenter where services talk thousands of times per second, that overhead compounds fast.

gRPC solves this with two decisions: binary serialisation (Protocol Buffers) and HTTP/2 multiplexing. A 100-field message that takes 2 KB as JSON may compress to 300 bytes as Protobuf. HTTP/2 lets 1,000 concurrent RPCs share one TCP connection — no head-of-line blocking per stream.
Analogy — REST vs gRPC: REST is a postcard: readable by anyone, slow to write and parse. gRPC is a binary radio protocol: compact, fast, typed — but you need the schema (proto file) to decode it.
✅ Use gRPC When…
  • Internal microservice-to-microservice calls
  • You need server or bidirectional streaming
  • Latency < 5 ms is a hard requirement
  • Strong contract (IDL) enforcement matters
  • Polyglot services (generated clients in 12 languages)
  • Mobile apps — smaller payload = less battery
🚫 Prefer REST When…
  • Public API consumed by unknown clients
  • Browser JS front-ends (gRPC-Web workaround exists)
  • Simple CRUD with low traffic
  • Team unfamiliar with Protobuf toolchain
  • You need human-readable request/response in logs
  • Firewall/CDN doesn't pass HTTP/2 trailers
🏗 gRPC Stack Layers
┌─────────────────────────────────────────────────────────────┐
│ Application Code                                            │
│ Generated Stub (client)    Generated Service (server)       │
├─────────────────────────────────────────────────────────────┤
│ gRPC Framework Layer                                        │
│ Serialise/deserialise · Interceptors · Deadline propagation │
│ Health check · Retry · Load balancing                       │
├─────────────────────────────────────────────────────────────┤
│ Protocol Buffers (Encoding/Decoding)                        │
│ Field tags · Varint encoding · Length-delimited bytes       │
├─────────────────────────────────────────────────────────────┤
│ HTTP/2 Transport                                            │
│ HEADERS frame (metadata) · DATA frames (Protobuf body)      │
│ TRAILERS frame (status code + error details)                │
├─────────────────────────────────────────────────────────────┤
│ TLS 1.3 / TCP                                               │
└─────────────────────────────────────────────────────────────┘
| Feature | REST / JSON | gRPC / Protobuf |
|---|---|---|
| Transport | HTTP/1.1 or HTTP/2 | HTTP/2 only |
| Payload format | JSON (text) | Protobuf (binary) |
| Schema | Optional (OpenAPI) | Mandatory (.proto IDL) |
| Streaming | SSE / WebSocket (ad-hoc) | 4 built-in modes |
| Code generation | Optional (openapi-generator) | Core requirement (protoc) |
| Browser support | Native | gRPC-Web proxy needed |
| Payload size (typical) | ~3–10× larger | Baseline |
| Error model | HTTP status + body | Status code + rich details |
📝 Proto3 File Structure
Every .proto file begins with a syntax declaration and optional package/import directives. Field numbers (not names) are the stable API contract — they become the wire-format tag.
// user_service.proto
syntax = "proto3";
package user.v1;

// Go package option — ignored by other languages
option go_package = "github.com/acme/user/v1;userv1";

import "google/protobuf/timestamp.proto";
import "google/protobuf/empty.proto";

// ── Messages ────────────────────────────────────────────────
message User {
  string id = 1;                            // field number = wire tag
  string email = 2;
  string username = 3;
  Role role = 4;                            // enum field
  google.protobuf.Timestamp created_at = 5;
  repeated string tags = 6;                 // list field
  oneof contact {                           // only one field set at a time
    string phone = 7;
    string slack = 8;
  }
}

enum Role {
  ROLE_UNSPECIFIED = 0;  // proto3 default; first value MUST be 0
  ROLE_USER = 1;
  ROLE_ADMIN = 2;
}

message GetUserRequest    { string user_id = 1; }
message ListUsersRequest  { int32 page_size = 1; string page_token = 2; }
message ListUsersResponse { repeated User users = 1; string next_page_token = 2; }

// ── Service Definition ──────────────────────────────────────
service UserService {
  rpc GetUser     (GetUserRequest)        returns (User);
  rpc ListUsers   (ListUsersRequest)      returns (ListUsersResponse);
  rpc WatchUser   (GetUserRequest)        returns (stream User);
  rpc UploadUsers (stream User)           returns (google.protobuf.Empty);
  rpc SyncUsers   (stream GetUserRequest) returns (stream User);
}
📋 Field Type Cheat-Sheet
| Proto Type | Wire Type | C/C++ Mapping | Notes |
|---|---|---|---|
| double | 1 (64-bit) | double | IEEE 754 little-endian |
| float | 5 (32-bit) | float | |
| int32 / int64 | 0 (varint) | int32_t / int64_t | Negative values cost 10 bytes; use sint32 instead |
| uint32 / uint64 | 0 (varint) | uint32_t / uint64_t | |
| sint32 / sint64 | 0 (varint) | int32_t / int64_t | ZigZag-encoded; efficient for negatives |
| fixed32 / sfixed32 | 5 (32-bit) | uint32_t / int32_t | Always 4 bytes; efficient if values > 2²⁸ |
| fixed64 / sfixed64 | 1 (64-bit) | uint64_t / int64_t | Always 8 bytes |
| bool | 0 (varint) | bool | 0 = false, 1 = true |
| string | 2 (length-delimited) | char* / std::string | Must be valid UTF-8 |
| bytes | 2 (length-delimited) | uint8_t* + len | Arbitrary binary |
| message (nested) | 2 (length-delimited) | struct pointer | Encoded as its own byte sequence |
| enum | 0 (varint) | int32_t | Unknown values preserved in proto3 |
🔒 Schema Evolution Rules (Backward & Forward Compatibility)
✅ Safe Changes
  • Add a new field (new number)
  • Remove a field (mark number as reserved)
  • Add a value to an enum
  • Change a singular string, bytes, or message field to repeated (not safe for numeric types — repeated numerics are packed)
  • Change string to bytes (compatible wire type)
❌ Breaking Changes
  • Reuse a field number with a different type
  • Rename a field (breaks JSON mapping)
  • Change a field number
  • Remove the first enum value (changes default)
  • Move a field out of / into a oneof
// Always reserve deleted field numbers and names:
message User {
  reserved 3, 7;                      // numbers can't be reused
  reserved "old_phone", "legacy_id";  // names can't be reused
  string id = 1;
  string email = 2;
  // field 3 was "username" — reserved above
}
Proto3 Default Values Trap: In proto3, every field has a default (0 / "" / false). You cannot distinguish "field not set" from "field set to zero." Use google.protobuf.Int32Value wrappers or the optional keyword (proto3 optional) when you need a three-state: unset / zero / non-zero.
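A minimal sketch of the proto3 optional escape hatch (the UpdateUserRequest message and its field numbers are hypothetical):

```proto
// proto3 "optional" restores explicit presence tracking
message UpdateUserRequest {
  string user_id = 1;
  // Generated code exposes a has_nickname presence check,
  // so "unset" is now distinguishable from "".
  optional string nickname = 2;
}
```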
🔬 The Wire Format
Protobuf encodes each field as a key-value pair. The key is a varint combining the field number and wire type: key = (field_number << 3) | wire_type. There are 6 wire types:
| Wire Type | Meaning | Used For |
|---|---|---|
| 0 | Varint | int32/64, uint32/64, sint32/64, bool, enum |
| 1 | 64-bit | fixed64, sfixed64, double |
| 2 | Length-delimited | string, bytes, embedded messages, packed repeated |
| 3 | Start group (deprecated) | Legacy — do not use |
| 4 | End group (deprecated) | Legacy — do not use |
| 5 | 32-bit | fixed32, sfixed32, float |
📐 Varint Encoding Step by Step
Varints encode integers using 1–10 bytes. Each byte contributes 7 bits; the MSB is a continuation bit (1 = more bytes follow, 0 = last byte).
// Encoding the integer 300 as a varint:
// 300 in binary = 0000 0001 0010 1100
// Split into 7-bit groups (LSB first): 010 1100 | 000 0010
// Add continuation bits: 1010 1100 | 0000 0010
// Result bytes: 0xAC 0x02

// Encoding field_number=1, wire_type=0, value=150:
// key = (1 << 3) | 0 = 0x08
// value 150 = 0x96 0x01
// Wire bytes: 08 96 01

uint8_t* encode_varint(uint8_t *buf, uint64_t v) {
    while (v > 0x7F) {
        *buf++ = (uint8_t)(v | 0x80);  // set MSB = more bytes
        v >>= 7;
    }
    *buf++ = (uint8_t)v;               // last byte, MSB clear
    return buf;
}

const uint8_t* decode_varint(const uint8_t *buf, uint64_t *out) {
    uint64_t result = 0;
    int shift = 0;
    do {
        if (shift >= 64) return NULL;  // malformed
        result |= (uint64_t)(*buf & 0x7F) << shift;
        shift += 7;
    } while (*buf++ & 0x80);
    *out = result;
    return buf;
}
🔄 ZigZag Encoding (sint32/sint64)
Normal varint encoding of −1 uses 10 bytes (it's treated as a large unsigned number 2⁶⁴−1). ZigZag maps signed ints to unsigned: positive n → 2n, negative n → 2|n|−1. This means small negative numbers also get small varint encodings.
// ZigZag encode: (n << 1) ^ (n >> 31) for int32
// 0 → 0 -1 → 1 1 → 2
// -2 → 3 2 → 4 -3 → 5

uint32_t zigzag_encode32(int32_t n) {
    return ((uint32_t)n << 1) ^ ((uint32_t)(n >> 31));
}
int32_t zigzag_decode32(uint32_t n) {
    return (int32_t)((n >> 1) ^ -(n & 1));
}
📦 Length-Delimited Encoding (strings, bytes, nested messages)
// Field number=2 (string "hi"), wire type=2:
// key = (2 << 3) | 2 = 0x12
// length varint = 0x02
// data bytes = 0x68 0x69 ('h', 'i')
// Wire bytes: 12 02 68 69

// Nested message: serialise inner message, prefix its byte length
// User { id="abc", email="x@y.com" } encoded then embedded in:
// GetUserResponse { user=<User bytes> }

// Packed repeated fields (default in proto3 for numeric types):
// [1, 2, 3] as repeated int32 field_number=4:
// key = (4<<3)|2 = 0x22 (wire type 2, not 0!)
// length = 3 bytes
// data: 01 02 03 (three varints packed together)
Why know the wire format? Debugging gRPC in Wireshark, writing custom serialisers (e.g., embedded firmware without a Protobuf runtime), optimising field ordering for cache locality, or implementing partial-decode ("field 5 only") for read amplification reduction.
📡 The 4 Streaming Patterns
Every gRPC RPC is fundamentally a function: request → response. The streaming variants replace one or both sides with an ordered sequence of messages over the same HTTP/2 stream.
  1. Unary RPC — rpc GetUser(Request) returns (Response)
     One request, one response. Equivalent to a REST POST. Client sends request frame → server processes → server sends response + TRAILERS. Simplest; 99% of internal APIs start here.
  2. Server Streaming — rpc WatchPrices(Symbol) returns (stream Tick)
     Client sends one message; server sends N messages then closes. Ideal for live feeds, log tailing, paginated results without cursor round-trips. HTTP/2 DATA frames keep arriving until FIN.
  3. Client Streaming — rpc UploadChunks(stream Chunk) returns (Summary)
     Client sends N messages, server responds once at the end. For bulk ingestion (file upload, sensor telemetry). Server buffers or processes incrementally, replies after client sends EOF.
  4. Bidirectional Streaming — rpc Chat(stream Msg) returns (stream Msg)
     Both sides send independently. Order within each side is preserved; the two streams interleave freely. Real-time collaboration, game state sync, interactive ML inference pipelines.
Mode            Client →                    ← Server
─────────────────────────────────────────────────────────────
Unary           ──[Req]──────────→          ←──[Res]──────────
Server stream   ──[Req]──────────→          ←──[Res1][Res2]...[END]
Client stream   ──[Req1][Req2]...[END]→     ←──[Res]──────────
Bidirectional   ──[R1][R2][R3]...[END]→     ←──[S1][S2]...[END]
                (both sides independent; order within each side preserved)
⚙️ HTTP/2 Mechanics Under the Hood
Each gRPC call maps to one HTTP/2 stream (unique stream ID). The request uses:
  • HEADERS frame — :method POST, :path /pkg.Service/Method, content-type: application/grpc, grpc-timeout, custom metadata as headers
  • DATA frames — 5-byte length-prefix + Protobuf bytes. First byte is a compression flag (0 = none, 1 = compressed); the next 4 bytes are the message length, big-endian.
  • TRAILERS (HTTP/2 HEADERS frame with END_STREAM) — grpc-status (int) + grpc-message (percent-encoded string) + optional grpc-status-details-bin
/* gRPC length-prefix framing — hand-decode a DATA payload */
typedef struct {
    uint8_t  compressed;  // 0 = none, 1 = gzip/deflate/snappy
    uint32_t length;      // big-endian message length
    uint8_t *data;        // Protobuf bytes (length bytes)
} grpc_frame_t;

int grpc_decode_frame(const uint8_t *buf, size_t buflen,
                      grpc_frame_t *out) {
    if (buflen < 5) return -1;
    out->compressed = buf[0];
    out->length = ((uint32_t)buf[1] << 24) |
                  ((uint32_t)buf[2] << 16) |
                  ((uint32_t)buf[3] << 8)  |
                   (uint32_t)buf[4];
    if (buflen < 5 + out->length) return -2;
    out->data = (uint8_t*)&buf[5];
    return 0;
}
⏱ Deadlines & Cancellation
gRPC propagates deadlines automatically. The client sets grpc-timeout in the HEADERS frame; every hop decrements the remaining budget. If the deadline expires mid-stream:
  • Client sends RST_STREAM with error code CANCEL
  • Server receives context cancellation; I/O operations return error
  • All open streams on the RPC are torn down
This enables deadline propagation: a 200 ms end-to-end budget shrinks as it passes through each service, preventing cascading timeouts where upstream services pile up waiting for a dead downstream.
// gRPC timeout header format: ASCII integer + unit suffix
// grpc-timeout: 200m → 200 milliseconds
// grpc-timeout: 5S → 5 seconds
// grpc-timeout: 100000u → 100 ms in microseconds
// Units: H(hours) M(minutes) S(seconds) m(ms) u(µs) n(ns)
🚨 gRPC Status Codes
gRPC defines 17 canonical status codes (0–16), transmitted in the grpc-status trailer. Map HTTP status codes to gRPC equivalents when building gRPC-Gateway or REST bridges.
| Code | Name | HTTP ≈ | When to Use |
|---|---|---|---|
| 0 | OK | 200 | Success |
| 1 | CANCELLED | 499 | Client cancelled (RST_STREAM) |
| 2 | UNKNOWN | 500 | Unknown server error |
| 3 | INVALID_ARGUMENT | 400 | Bad request field values |
| 4 | DEADLINE_EXCEEDED | 504 | Timeout expired |
| 5 | NOT_FOUND | 404 | Resource not found |
| 6 | ALREADY_EXISTS | 409 | Create of existing resource |
| 7 | PERMISSION_DENIED | 403 | Authenticated but not authorised |
| 8 | RESOURCE_EXHAUSTED | 429 | Quota / rate limit exceeded |
| 9 | FAILED_PRECONDITION | 400 | Precondition not met (e.g., non-empty bucket before delete) |
| 10 | ABORTED | 409 | Concurrency conflict (optimistic lock failed) |
| 11 | OUT_OF_RANGE | 400 | Value out of valid range (e.g., seek past end) |
| 12 | UNIMPLEMENTED | 501 | Method not implemented |
| 13 | INTERNAL | 500 | Internal invariant broken |
| 14 | UNAVAILABLE | 503 | Server temporarily unavailable — safe to retry |
| 15 | DATA_LOSS | 500 | Unrecoverable data corruption |
| 16 | UNAUTHENTICATED | 401 | Missing / invalid credentials |
📋 Rich Error Details (google.rpc.Status)
A bare status code is not enough for clients to act. The google.rpc.Status proto embeds structured error payloads in the grpc-status-details-bin trailer (base64-encoded).
// google/rpc/status.proto (simplified)
message Status {
  int32 code = 1;      // gRPC status code integer
  string message = 2;  // Human-readable, not for machines
  repeated google.protobuf.Any details = 3;
}

// Common detail types (google/rpc/error_details.proto):
// ErrorInfo — domain + reason + metadata
// RetryInfo — retry_delay (client should wait before retry)
// BadRequest — list of field violations
// QuotaFailure — which quota was exceeded
// RequestInfo — request_id for correlation

// Example: INVALID_ARGUMENT with field violations (Go pseudocode)
// st, _ := status.New(codes.InvalidArgument, "validation failed")
// br := &errdetails.BadRequest{}
// br.FieldViolations = append(br.FieldViolations,
// &errdetails.BadRequest_FieldViolation{
// Field: "email", Description: "must be valid RFC 5321 address"})
// st.WithDetails(br)
🔗 Interceptors (Middleware)
Interceptors wrap RPC handler calls — analogous to HTTP middleware. They run for every RPC on that connection. Typical uses: auth token validation, request logging, tracing span injection, rate limiting, retry logic, metrics counters.
Client side: [Retry] → [Auth token inject] → [Logging] → [Codec] → network
Server side: network → [Auth validator] → [Rate limiter] → [Logging] → [Handler]
/* Conceptual unary interceptor signature (language-agnostic) */
// invoke(ctx, request, method_info, handler) → (response, error)

// Server unary interceptor: JWT auth check (pseudocode)
func AuthInterceptor(ctx, req, info, handler) (resp, err) {
    token := metadata.FromIncomingContext(ctx)["authorization"]
    if !validate_jwt(token) {
        return nil, status.Error(UNAUTHENTICATED, "invalid token")
    }
    return handler(ctx_with_claims, req)  // pass enriched context
}

// Chain multiple interceptors:
// grpc.ChainUnaryInterceptor(LoggingInterceptor, AuthInterceptor, RateLimitInterceptor)
🔄 Retry Policy
gRPC supports automatic client-side retries defined in service config JSON (passed during channel creation or via xDS). Only UNAVAILABLE and RESOURCE_EXHAUSTED (with retry hint) are safe to retry transparently.
// Service config JSON snippet for retry policy:
{
  "methodConfig": [{
    "name": [{ "service": "user.v1.UserService", "method": "GetUser" }],
    "retryPolicy": {
      "maxAttempts": 4,
      "initialBackoff": "0.1s",
      "maxBackoff": "1s",
      "backoffMultiplier": 2,
      "retryableStatusCodes": ["UNAVAILABLE"]
    }
  }]
}
Idempotency & Retries: Only retry RPCs that are idempotent (GET-like unary reads, or methods marked with the idempotency annotation). Retrying a CreateOrder can result in duplicate orders. Use retry_delay from RetryInfo for server-directed backoff on RESOURCE_EXHAUSTED.
🌐 gRPC-Gateway: Serve REST + gRPC from One Proto
gRPC-Gateway is a protoc plugin that generates a reverse proxy. It reads HTTP annotations in your .proto file and transcodes REST/JSON requests to gRPC, forwarding them to the gRPC server. One service definition, two surfaces: gRPC for internal services, REST/JSON for browsers and third-party consumers.
Browser / curl               gRPC-Gateway Proxy                 gRPC Server
─────────────────────────────────────────────────────────────────────────────
HTTP GET /v1/users/42 ─────→ Transcode to GetUserRequest ─────→ Handler
                             (JSON → Protobuf)                  (C/Go/Java)
HTTP 200 {"id":"42"…} ←───── Transcode response ←─────
                             (Protobuf → JSON)
grpc://svc:50051 ──────────────────────────────────────────────→ (direct)
✏️ HTTP Annotations in Proto
import "google/api/annotations.proto";

service UserService {
  rpc GetUser (GetUserRequest) returns (User) {
    option (google.api.http) = {
      get: "/v1/users/{user_id}"  // user_id binds from path
    };
  }
  rpc CreateUser (User) returns (User) {
    option (google.api.http) = {
      post: "/v1/users"
      body: "*"                   // entire JSON body maps to User
    };
  }
  rpc ListUsers (ListUsersRequest) returns (ListUsersResponse) {
    option (google.api.http) = {
      get: "/v1/users"            // page_size, page_token become query params
    };
  }
  rpc UpdateUser (UpdateUserRequest) returns (User) {
    option (google.api.http) = {
      patch: "/v1/users/{user.id}"
      body: "user"                // only the "user" sub-message from body
    };
  }
}
💊 gRPC Health Checking Protocol
The standard gRPC health check service lets Kubernetes liveness/readiness probes and load balancers check service health without custom endpoints.
// grpc/health/v1/health.proto (standard)
service Health {
  rpc Check (HealthCheckRequest) returns (HealthCheckResponse);
  rpc Watch (HealthCheckRequest) returns (stream HealthCheckResponse);
}
message HealthCheckRequest { string service = 1; }
message HealthCheckResponse {
  enum ServingStatus {
    UNKNOWN = 0;
    SERVING = 1;
    NOT_SERVING = 2;
    SERVICE_UNKNOWN = 3;  // health-check for unknown service name
  }
  ServingStatus status = 1;
}

// Kubernetes grpc probe (k8s 1.24+):
// livenessProbe:
//   grpc:
//     port: 50051
//     service: "user.v1.UserService"
🔍 gRPC Reflection
Server reflection allows clients to query the available services and their proto schemas at runtime — without a .proto file. Tools like grpcurl and Postman use reflection to discover and call services dynamically.
# grpcurl: REST-like tool for gRPC
# List services (reflection required):
grpcurl -plaintext localhost:50051 list
# → user.v1.UserService
# → grpc.health.v1.Health

# Describe a method:
grpcurl -plaintext localhost:50051 describe user.v1.UserService.GetUser

# Call with JSON input:
grpcurl -plaintext -d '{"user_id": "42"}' \
localhost:50051 user.v1.UserService/GetUser
🔧 protobuf-c: Using Protocol Buffers in C
protobuf-c is the most widely used C implementation of Protocol Buffers. The protoc-gen-c plugin generates a .pb-c.h / .pb-c.c pair from each .proto file.
/* Install: sudo apt install libprotobuf-c-dev protobuf-c-compiler */
/* Generate: protoc --c_out=. user.proto */
/* Generated files: user.pb-c.h, user.pb-c.c */

/* Compile: gcc -o demo demo.c user.pb-c.c -lprotobuf-c */
📝 Encode / Decode a Message in C (protobuf-c)
#include <stdio.h>
#include <stdlib.h>
#include "user.pb-c.h" // generated from user.proto

int main(void) {
    /* ── Encode ─────────────────────────────────────────── */
    UserV1__User user;
    user__v1__user__init(&user);  /* zero-init with defaults */
    user.id = "42";
    user.email = "alice@example.com";
    user.username = "alice";
    user.role = USER_V1__ROLE__ROLE_ADMIN;

    size_t packed_size = user__v1__user__get_packed_size(&user);
    uint8_t *buf = malloc(packed_size);
    user__v1__user__pack(&user, buf);

    printf("Packed %zu bytes\n", packed_size);

    /* ── Decode ─────────────────────────────────────────── */
    UserV1__User *decoded =
        user__v1__user__unpack(NULL, packed_size, buf);
    if (!decoded) {
        fprintf(stderr, "decode failed\n");
        free(buf);
        return 1;
    }

    printf("id=%s email=%s role=%d\n",
           decoded->id, decoded->email, decoded->role);

    user__v1__user__free_unpacked(decoded, NULL);
    free(buf);
    return 0;
}
🚀 Minimal gRPC Unary Server in C (grpc-c core)
/* grpc_server.c — unary GetUser over gRPC */
#include <grpc/grpc.h>
#include <grpc/support/log.h>
#include "user.pb-c.h"
#include <stdlib.h>
#include <string.h>

static void run_server(const char *addr) {
    grpc_init();
    grpc_server *server = grpc_server_create(NULL, NULL);
    grpc_completion_queue *cq = grpc_completion_queue_create_for_next(NULL);
    grpc_server_register_completion_queue(server, cq, NULL);

    grpc_server_credentials *creds = grpc_insecure_server_credentials_create();
    grpc_server_add_http2_port(server, addr, creds);
    grpc_server_credentials_release(creds);
    grpc_server_start(server);
    gpr_log(GPR_INFO, "gRPC server listening on %s", addr);

    /* ── Event loop ────────────────────────────────────── */
    while (1) {
        grpc_call *call;
        grpc_call_details details;
        grpc_metadata_array req_meta;
        grpc_call_details_init(&details);
        grpc_metadata_array_init(&req_meta);

        /* request the next incoming call */
        grpc_server_request_call(server, &call, &details, &req_meta,
                                 cq, cq, (void*)1);
        grpc_event ev = grpc_completion_queue_next(
            cq, gpr_inf_future(GPR_CLOCK_REALTIME), NULL);
        if (ev.type != GRPC_OP_COMPLETE) continue;

        char *method = grpc_slice_to_c_string(details.method);
        gpr_log(GPR_INFO, "RPC: %s", method);
        gpr_free(method);

        /* receive request message */
        grpc_byte_buffer *recv_buf = NULL;
        grpc_op recv_ops[1];
        memset(recv_ops, 0, sizeof(recv_ops));
        recv_ops[0].op = GRPC_OP_RECV_MESSAGE;
        recv_ops[0].data.recv_message.recv_message = &recv_buf;
        grpc_call_start_batch(call, recv_ops, 1, (void*)2, NULL);
        grpc_completion_queue_next(cq, gpr_inf_future(GPR_CLOCK_REALTIME), NULL);

        /* decode request protobuf */
        grpc_byte_buffer_reader rdr;
        grpc_byte_buffer_reader_init(&rdr, recv_buf);
        grpc_slice req_slice = grpc_byte_buffer_reader_readall(&rdr);
        UserV1__GetUserRequest *req = user__v1__get_user_request__unpack(
            NULL, GRPC_SLICE_LENGTH(req_slice),
            (const uint8_t*)GRPC_SLICE_START_PTR(req_slice));

        /* build response */
        UserV1__User resp;
        user__v1__user__init(&resp);
        resp.id = req ? req->user_id : "unknown";
        resp.email = "alice@example.com";

        size_t resp_len = user__v1__user__get_packed_size(&resp);
        uint8_t *resp_buf = malloc(resp_len);
        user__v1__user__pack(&resp, resp_buf);

        /* The byte buffer carries only the Protobuf bytes — the HTTP/2
         * transport adds the 5-byte length-prefix frame itself. */
        grpc_slice resp_slice =
            grpc_slice_from_copied_buffer((char*)resp_buf, resp_len);
        grpc_byte_buffer *send_buf = grpc_raw_byte_buffer_create(&resp_slice, 1);
        grpc_slice_unref(resp_slice);

        /* send response + trailers */
        grpc_slice status_details = grpc_empty_slice();
        grpc_op send_ops[3];
        memset(send_ops, 0, sizeof(send_ops));
        send_ops[0].op = GRPC_OP_SEND_INITIAL_METADATA;
        send_ops[1].op = GRPC_OP_SEND_MESSAGE;
        send_ops[1].data.send_message.send_message = send_buf;
        send_ops[2].op = GRPC_OP_SEND_STATUS_FROM_SERVER;
        send_ops[2].data.send_status_from_server.status = GRPC_STATUS_OK;
        send_ops[2].data.send_status_from_server.status_details = &status_details;
        grpc_call_start_batch(call, send_ops, 3, (void*)3, NULL);
        grpc_completion_queue_next(cq, gpr_inf_future(GPR_CLOCK_REALTIME), NULL);

        /* cleanup */
        grpc_slice_unref(req_slice);
        grpc_byte_buffer_reader_destroy(&rdr);
        free(resp_buf);
        if (req) user__v1__get_user_request__free_unpacked(req, NULL);
        grpc_byte_buffer_destroy(recv_buf);
        grpc_byte_buffer_destroy(send_buf);
        grpc_call_details_destroy(&details);
        grpc_metadata_array_destroy(&req_meta);
        grpc_call_unref(call);
    }
}
In production C services, use the higher-level grpc-c wrapper or switch to C++ with gRPC's C++ API — it handles framing, completion queues, and threading for you. The C core API above is valuable for understanding the protocol mechanics and for embedding gRPC in constrained environments (RTOS, firmware).
🧪 Lab 1 — Build a Protobuf Serialiser from Scratch
Understand wire encoding at the byte level by writing a minimal varint + length-delimited encoder without using any Protobuf library.
1. Define a simple 3-field message in a .proto file: string name = 1; int32 age = 2; bool active = 3;
2. Write encode_varint() and decode_varint() in C (target: handle up to 64-bit values).
3. Write encode_field(field_num, wire_type, value) that emits the key varint followed by the value.
4. Encode a test struct (name="Bob", age=30, active=true) into a byte buffer manually.
5. Cross-verify: use protoc --encode to encode the same values and compare bytes with xxd.
6. Benchmark: encode 1 million structs — hand-coded C vs protobuf-c library. Record ns/op.

Expected outcome: Your manual encoding matches protobuf-c output byte-for-byte. Performance within 20% of library.

🧪 Lab 2 — Bidirectional Streaming Chat Service
Implement a bidirectional streaming gRPC service that simulates a chat session, exercising flow control and concurrent send/receive.
1. Define ChatService with rpc Chat(stream ChatMessage) returns (stream ChatMessage). Messages: string sender=1; string text=2; int64 timestamp=3;
2. Implement a Go (or Python) server that echoes each message back prefixed with "Echo: " after a 50 ms artificial delay.
3. Write a client that sends 100 messages and receives 100 replies, measuring P50/P99 RTT per message.
4. Add server-side deadline enforcement: cancel the stream if the client sends nothing for 5 seconds.
5. Test cancellation: have the client hang after sending 50 messages; verify the server receives context cancellation.
6. Add a logging interceptor on the server that prints sender + text length for every message.

Expected outcome: P99 RTT < 10 ms on localhost. Cancellation visible in server logs within 100 ms of client hang.

🧪 Lab 3 — Schema Evolution & gRPC-Gateway
Practice backward-compatible schema evolution and expose your gRPC service as REST using gRPC-Gateway.
1. Start with UserService v1: fields id, email, username (field numbers 1–3).
2. Serialize 100 User objects with v1. Save the bytes to disk.
3. Add a new field string department = 4 and an enum Role role = 5 — creating user_v2.proto.
4. Deserialize the v1 bytes using the v2 schema. Verify: old fields intact, new fields at defaults.
5. Try a breaking change: reuse field number 2 with a different type. Document the corruption you see.
6. Add gRPC-Gateway annotations to GetUser and ListUsers. Run the gateway. Test with curl.
7. Compare JSON payload size vs Protobuf payload size for the same 100-user list.

Expected outcome: v1→v2 migration is seamless. REST endpoints work with curl. JSON ≈ 3–5× larger than Protobuf.

— Concept Checklist —
✅ Phase 1 gRPC Mastery Checklist
  • Can write a .proto file with messages, enums, oneof, repeated, and map fields
  • Explain field number vs field name and why numbers are the stable API contract
  • Decode a varint by hand: given 0xAC 0x02, produce 300
  • Know all 6 wire types; identify which one string and int32 use
  • Describe ZigZag encoding and when to prefer sint32 over int32
  • Implement all 4 streaming modes: unary, server-stream, client-stream, bidirectional
  • Explain how deadlines propagate through a gRPC call chain
  • Map gRPC status codes to HTTP equivalents for at least 8 codes
  • Use google.rpc.Status with rich error details (BadRequest field violations)
  • Write a server-side unary interceptor for JWT auth validation
  • Add HTTP annotations to a proto and run gRPC-Gateway transcoding
  • Implement the gRPC health check protocol; wire it to a Kubernetes liveness probe
  • List 3 safe and 3 breaking schema changes; always reserve deleted field numbers
  • Use grpcurl with server reflection to list services and call methods
  • Encode and decode a Protobuf message using protobuf-c in C