Interview Cheat Sheet
Complete Reference | Pattern Selector | Complexity Tables | C++ STL | Top 50 Problems
📚 12 Chapters Covered
🏆 50+ Top Problems
📊 All Complexity Tables
FAANG Target Level
🗺 Section 1 — Pattern Selector: Which Algorithm to Use?
Read the problem, identify the signals, select the pattern. This table covers the decision process for ~90% of LeetCode-style problems.
| If the problem involves… | Primary Pattern | Secondary / Fallback |
|---|---|---|
| Sorted array + find/count target | Binary Search | Two Pointers |
| Optimal contiguous subarray/substring | Sliding Window | Two Pointers |
| Pairs/triplets summing to target | Two Pointers (sorted) | Hash Map (unsorted) |
| Next greater/smaller in array | Monotonic Stack | — |
| Histogram / rectangle area | Monotonic Stack | Divide & Conquer |
| Prefix queries / autocomplete | Trie | Hash Set |
| Dynamic connectivity / cycle detection | Union-Find | BFS/DFS |
| Shortest path (unweighted) | BFS | — |
| Shortest path (weighted, non-neg) | Dijkstra (BFS + Min-Heap) | — |
| Shortest path (negative edges) | Bellman-Ford | — |
| Minimum Spanning Tree | Kruskal (Union-Find) or Prim | — |
| Topological order / dependency | Kahn's BFS or DFS post-order | — |
| All subsets / combinations / paths | Backtracking | — |
| Minimum / maximum over sequence | Dynamic Programming | Greedy (if exchange argument holds) |
| Count ways / number of paths | DP (counting) | — |
| String alignment / edit operations | 2D DP (LCS / Edit Distance) | — |
| Pack items into capacity | 0/1 or Unbounded Knapsack DP | — |
| Interval scheduling (max non-overlap) | Greedy (earliest finish) | — |
| Merge / insert intervals | Sort + linear scan | — |
| Kth largest / smallest element | Min-Heap of size k | QuickSelect O(n) avg |
| Running median | Two Heaps (max-heap + min-heap) | — |
| Merge k sorted lists/arrays | Min-Heap of k heads | — |
| Level-order tree traversal | BFS | — |
| In/pre/post-order traversal | DFS (recursive or iterative) | — |
| LCA in binary tree | DFS post-order | Binary lifting for repeated queries |
| Detect cycle in graph | Union-Find or DFS with colour | — |
| Anagram / frequency matching | Sliding Window + freq array | Hash Map |
| Palindrome check / construction | Two Pointers or DP | — |
| Calculator / expression parsing | Stack | Recursive descent |
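Two of the rows above (next greater/smaller, histogram area) reduce to the same monotonic-stack loop, which is worth memorising. A minimal next-greater-element sketch (the function name `nextGreater` is illustrative):

```cpp
#include <stack>
#include <vector>
using namespace std;

// For each element, find the next element to its right that is strictly
// greater; -1 if none. Each index is pushed and popped at most once: O(n).
vector<int> nextGreater(const vector<int>& a) {
    int n = a.size();
    vector<int> res(n, -1);
    stack<int> st; // indices whose answer is not yet known (decreasing values)
    for (int i = 0; i < n; i++) {
        while (!st.empty() && a[i] > a[st.top()]) {
            res[st.top()] = a[i]; // a[i] resolves that earlier index
            st.pop();
        }
        st.push(i);
    }
    return res;
}
```

The next-smaller variant only flips the comparison; the histogram problem uses the same loop to find, for each bar, the nearest smaller bar on each side.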
📊 Section 2 — Master Complexity Reference
2.1 — Data Structures
| Structure | Access | Search | Insert | Delete | Space |
|---|---|---|---|---|---|
| Array | O(1) | O(n) | O(n) | O(n) | O(n) |
| Linked List | O(n) | O(n) | O(1) head | O(1) given ptr | O(n) |
| Stack / Queue | O(n) | O(n) | O(1) | O(1) | O(n) |
| Hash Map / Set | O(1) avg | O(1) avg | O(1) avg | O(1) avg | O(n) |
| Binary Search Tree | O(log n) avg | O(log n) avg | O(log n) avg | O(log n) avg | O(n) |
| AVL / Red-Black Tree | O(log n) | O(log n) | O(log n) | O(log n) | O(n) |
| Min/Max Heap | O(1) peek | O(n) | O(log n) | O(log n) | O(n) |
| Trie | O(L) | O(L) | O(L) | O(L) | O(n*L*26) |
| Union-Find (DSU) | — | O(alpha(n)) | O(alpha(n)) | — | O(n) |
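The near-constant O(alpha(n)) bound in the Union-Find row above requires both path compression and union by rank or size; a minimal sketch:

```cpp
#include <numeric>
#include <utility>
#include <vector>
using namespace std;

struct DSU {
    vector<int> parent, sz;
    DSU(int n) : parent(n), sz(n, 1) { iota(parent.begin(), parent.end(), 0); }
    int find(int x) {                      // path compression
        return parent[x] == x ? x : parent[x] = find(parent[x]);
    }
    bool unite(int a, int b) {             // union by size
        a = find(a); b = find(b);
        if (a == b) return false;          // already connected -> cycle detected
        if (sz[a] < sz[b]) swap(a, b);
        parent[b] = a; sz[a] += sz[b];
        return true;
    }
};
```

The `false` return from `unite` doubles as cycle detection, which is exactly how Kruskal's MST rejects edges.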
2.2 — Sorting Algorithms
| Algorithm | Best | Average | Worst | Space | Stable? |
|---|---|---|---|---|---|
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) | ✅ Yes |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) | ✅ Yes |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) | ❌ No |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) | ✅ Yes |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) | ❌ No |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) | ❌ No |
| Counting Sort | O(n+k) | O(n+k) | O(n+k) | O(k) | ✅ Yes |
| Radix Sort | O(d*(n+k)) | O(d*(n+k)) | O(d*(n+k)) | O(n+k) | ✅ Yes |
| Tim Sort (Python/Java built-in) | O(n) | O(n log n) | O(n log n) | O(n) | ✅ Yes |
| Introsort (C++ std::sort) | O(n log n) | O(n log n) | O(n log n) | O(log n) | ❌ No |
2.3 — Graph Algorithms
| Algorithm | Time | Space | Use Case |
|---|---|---|---|
| BFS | O(V+E) | O(V) | Shortest path (unweighted), level order |
| DFS | O(V+E) | O(V) | Cycle detection, topological sort, connected components |
| Dijkstra | O((V+E) log V) | O(V) | Shortest path, non-negative weights |
| Bellman-Ford | O(V*E) | O(V) | Shortest path, negative weights, detect neg cycles |
| Floyd-Warshall | O(V³) | O(V²) | All-pairs shortest path, dense graph |
| Kruskal MST | O(E log E) | O(V) | Minimum spanning tree (sparse graph) |
| Prim MST | O((V+E) log V) | O(V) | Minimum spanning tree (dense graph) |
| Kahn's (Topo Sort) | O(V+E) | O(V) | Topological order, detect cycle in DAG |
| Tarjan SCC | O(V+E) | O(V) | Strongly connected components |
2.4 — Key DSA Algorithms
| Algorithm | Time | Space | Notes |
|---|---|---|---|
| Binary Search | O(log n) | O(1) | Requires sorted / monotone input |
| Two Pointers | O(n) | O(1) | Requires sorted or monotone property |
| Sliding Window | O(n) | O(1) or O(k) | Optimal contiguous window |
| Monotonic Stack | O(n) | O(n) | Each element pushed/popped at most once |
| Backtracking (subsets) | O(2ⁿ * n) | O(n) | Exponential, pruning helps constant |
| Backtracking (permutations) | O(n! * n) | O(n) | — |
| Dynamic Programming 1D | O(n)–O(n²) | O(n) | Depends on recurrence |
| Dynamic Programming 2D | O(m*n) | O(n) opt | LCS, Edit Distance, Grid paths |
| 0/1 Knapsack | O(n*W) | O(W) | Reverse capacity iteration |
| LIS O(n log n) | O(n log n) | O(n) | Patience sort with binary search |
| Heap: Build | O(n) | O(1) in-place | Floyd's build-heap |
| Heap: Extract/Insert | O(log n) | O(1) | Sift-down / sift-up |
| Union-Find | O(alpha(n)) | O(n) | Path compression + union by rank |
| Trie: Insert/Search | O(L) | O(L) | L = length of string |
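The O(n log n) LIS row above is the one most candidates can name but not code. The idea: keep `tails[k]` = the smallest possible tail of any increasing subsequence of length k+1, and place each new element with `lower_bound`:

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Length of the longest strictly increasing subsequence, O(n log n).
int lengthOfLIS(const vector<int>& nums) {
    vector<int> tails; // tails[k] = smallest tail of an LIS of length k+1
    for (int x : nums) {
        auto it = lower_bound(tails.begin(), tails.end(), x);
        if (it == tails.end()) tails.push_back(x); // x extends the longest run
        else *it = x;                              // x improves an existing tail
    }
    return tails.size(); // note: tails is NOT itself an LIS, only its length is right
}
```

Swapping `lower_bound` for `upper_bound` gives the non-decreasing (duplicates allowed) variant.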
⚙️ Section 3 — C++ STL Quick Reference
3.1 — Containers
// ── vector ──────────────────────────────────────────────────
vector<int> v;
v.push_back(x); // O(1) amortised
v.pop_back(); // O(1)
v[i]; // O(1) random access
v.size(); v.empty(); v.back(); v.front();
sort(v.begin(), v.end()); // O(n log n)
reverse(v.begin(), v.end()); // O(n)
int idx = lower_bound(v.begin(),v.end(),x) - v.begin(); // O(log n)
// ── stack ────────────────────────────────────────────────────
stack<int> stk;
stk.push(x); stk.pop(); stk.top(); stk.empty(); // all O(1)
// ── queue ────────────────────────────────────────────────────
queue<int> q;
q.push(x); q.pop(); q.front(); q.back(); q.empty(); // all O(1)
// ── deque ────────────────────────────────────────────────────
deque<int> dq;
dq.push_back(x); dq.push_front(x); // O(1)
dq.pop_back(); dq.pop_front(); // O(1)
dq[i]; // O(1) random access
// ── priority_queue ──────────────────────────────────────────
priority_queue<int> maxH; // max at top
priority_queue<int,vector<int>,greater<int>> minH; // min at top
maxH.push(x); maxH.pop(); maxH.top(); // O(log n) except top
// ── set / multiset ────────────────────────────────────────────
set<int> s; // sorted, unique
s.insert(x); s.erase(x); s.count(x); s.find(x); // O(log n)
s.lower_bound(x); s.upper_bound(x); // O(log n)
// ── map / unordered_map ───────────────────────────────────────
unordered_map<string,int> um; // O(1) avg
map<string,int> m; // O(log n), sorted by key
um[key] = val; um.count(key); um.find(key);
for (auto& [k,v] : um) { } // structured binding (C++17)
3.2 — Useful Algorithms & Functions
// ── Numeric utilities ────────────────────────────────────────
#include <numeric>
long long sum = accumulate(v.begin(), v.end(), 0LL); // 0LL: accumulate in long long, avoids overflow
iota(v.begin(), v.end(), 0); // fill 0,1,2,...,n-1
// ── Min/Max ──────────────────────────────────────────────────
int mx = *max_element(v.begin(), v.end());
int mn = *min_element(v.begin(), v.end());
int g = __gcd(a, b); // GCD, O(log min(a,b))
int bits = __builtin_popcount(x); // count set bits
// ── String ───────────────────────────────────────────────────
string s = to_string(42);
int n = stoi("42");
s.substr(start, len); // O(len)
s.find(sub); // O(n*m) naive
// ── Functional / lambda ──────────────────────────────────────
sort(v.begin(), v.end(), [](int a, int b){ return a > b; }); // descending
function<int(int)> dfs = [&](int node) -> int { return 0; }; // recursive lambda
// ── Bit operations ───────────────────────────────────────────
// x & (x-1) → clear lowest set bit (power of 2: x&(x-1)==0)
// x | (1 << k) → set bit k
// x & ~(1 << k) → clear bit k
// (x >> k) & 1 → check bit k
// x ^ x = 0 → XOR self = 0 (find single number)
3.3 — Graph BFS/DFS Templates
// ── BFS from source ──────────────────────────────────────────
vector<int> dist(n, INT_MAX);
queue<int> q;
dist[src] = 0; q.push(src);
while (!q.empty()) {
    int u = q.front(); q.pop();
    for (int v : adj[u]) {
        if (dist[v] == INT_MAX) {
            dist[v] = dist[u] + 1;
            q.push(v);
        }
    }
}
// ── DFS iterative ────────────────────────────────────────────
vector<bool> visited(n, false);
stack<int> stk; stk.push(src);
while (!stk.empty()) {
    int u = stk.top(); stk.pop();
    if (visited[u]) continue;
    visited[u] = true;
    for (int v : adj[u]) if (!visited[v]) stk.push(v);
}
// ── Dijkstra ─────────────────────────────────────────────────
vector<long long> dist(n, LLONG_MAX);
priority_queue<pair<long long,int>,
vector<pair<long long,int>>,
greater<>> pq;
dist[src] = 0; pq.push({0, src});
while (!pq.empty()) {
    auto [d, u] = pq.top(); pq.pop();
    if (d > dist[u]) continue; // skip stale entry
    for (auto [v, w] : adj[u]) {
        if (dist[u] + w < dist[v]) {
            dist[v] = dist[u] + w;
            pq.push({dist[v], v});
        }
    }
}
🎯 Section 4 — Interview Problem-Solving Framework
Step 1 — UNDERSTAND (2–3 min)
- Restate the problem in your own words. Confirm your understanding with the interviewer.
- Ask about constraints: array size n, value range, negative numbers, duplicates, sorted?
- Ask about edge cases: empty input, single element, all equal, n=1.
- Clarify output format: return value, modify in-place, 1-indexed or 0-indexed?
- Write 1–2 concrete examples including an edge case.
Step 2 — PLAN (3–5 min)
- State the brute force first. Say: 'The naive solution is O(n²) by…'
- Identify the bottleneck: inner loop, repeated work, wrong data structure.
- Map to a known pattern using the pattern selector (Section 1).
- State your approach clearly and the expected complexity before writing code.
Step 3 — CODE (10–15 min)
- Write clean, readable code. Use meaningful variable names (lo/hi over i/j for pointers).
- Code the happy path first. Add edge case handling at the start.
- Think out loud as you code: 'Here I'm updating the window by removing the leftmost element…'
- Avoid premature optimisation. Get a working solution, then optimise.
Step 4 — TEST (3–5 min)
- Trace through your example from Step 1 line by line.
- Test with edge cases: empty array, single element, all same, maximum n.
- For graph problems: test disconnected graph, single node, cycle.
- Announce bugs before fixing: 'I see that my loop should be <= not <, let me fix that.'
Step 5 — OPTIMISE & DISCUSS (2–3 min)
- State final time and space complexity, explain why.
- Discuss trade-offs: 'We could reduce space from O(n) to O(1) by using rolling variables.'
- Mention alternative approaches and proactively discuss follow-up variations.
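As a worked illustration of Steps 2–3 (state the O(n²) brute force, identify the inner loop as a lookup, swap in a hash map), here is Two Sum in O(n):

```cpp
#include <unordered_map>
#include <vector>
using namespace std;

// Brute force: test every pair (i, j) for nums[i] + nums[j] == target, O(n²).
// Bottleneck: the inner loop just looks for (target - nums[i]) -> use a hash map.
vector<int> twoSum(const vector<int>& nums, int target) {
    unordered_map<int, int> seen; // value -> index
    for (int i = 0; i < (int)nums.size(); i++) {
        auto it = seen.find(target - nums[i]);
        if (it != seen.end()) return {it->second, i};
        seen[nums[i]] = i; // insert AFTER the lookup so an element can't pair with itself
    }
    return {}; // no pair found
}
```

Saying that transformation out loud ("the inner loop is a membership test, so a hash set/map removes it") is the move the interviewer is listening for.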
⚡ Section 5 — Complexity Signals from Constraints
FAANG interviewers set constraints that hint at the expected time complexity. Use these to validate your approach before coding.
| Constraint (n) | Target Complexity | Algorithms That Fit |
|---|---|---|
| n ≤ 10 | O(n!) or O(2ⁿ · n) | Backtracking (all permutations/subsets), brute force |
| n ≤ 20 | O(2ⁿ) | Bitmask DP, backtracking with heavy pruning |
| n ≤ 100 | O(n³) | Floyd-Warshall, interval DP, 3D DP |
| n ≤ 1,000 | O(n²) | 2D DP (LCS, Edit Distance), O(n²) DP, naive graph |
| n ≤ 10,000 | O(n²) (tight) or O(n·√n) | Acceptable O(n²), sqrt decomposition |
| n ≤ 100,000 | O(n log n) | Sorting, binary search, segment tree, heap, Dijkstra |
| n ≤ 1,000,000 | O(n) | Two pointers, sliding window, monotonic stack, hash map |
| n ≤ 10⁹ | O(log n) | Binary search on answer, math formula |
| n ≤ 10¹⁸ | O(log n) or O(√n) | Binary search, fast exponentiation, prime factorisation |
Quick Sanity Check: Will My Solution TLE?
- Modern CPUs execute ~10⁸ simple operations per second.
- O(n²) with n=10⁵: 10¹⁰ ops → TLE. Need O(n log n) or better.
- O(n log n) with n=10⁶: ~2·10⁷ ops → Fast ✓
- O(2ⁿ) with n=30: 10⁹ ops → borderline TLE. With n=20: 10⁶ ops → OK.
- O(n!) with n=12: 4.8·10⁸ ops → borderline. With n=10: 3.6·10⁶ ops → OK.
- When unsure: calculate n² or n·log(n) mentally and check against 10⁸.
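The mental check above can be sketched as a tiny helper (the ~10⁸ simple ops/sec budget is the usual competitive-programming rule of thumb, not a precise hardware figure):

```cpp
#include <cmath>

// Rough TLE check: does an estimated op count fit the time limit
// at ~1e8 simple operations per second?
bool fits(double ops, double seconds = 1.0) {
    return ops <= 1e8 * seconds;
}
```

For example, `fits(1e5 * 1e5)` is false (O(n²) at n = 10⁵ is 10¹⁰ ops), while `fits(1e6 * std::log2(1e6))` is true (about 2·10⁷ ops).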
🏆 Section 6 — Top 50 Must-Know LeetCode Problems
Arrays & Hashing
| # | Problem | Pattern | Difficulty |
|---|---|---|---|
| 1 | Two Sum | Hash Map | Easy |
| 2 | Best Time to Buy & Sell Stock | Greedy / Kadane variant | Easy |
| 3 | Contains Duplicate | Hash Set | Easy |
| 4 | Product of Array Except Self | Prefix Product | Medium |
| 5 | Maximum Subarray (Kadane) | Greedy / DP | Medium |
| 6 | Maximum Product Subarray | DP (track min & max) | Medium |
| 7 | Find Minimum in Rotated Array | Binary Search | Medium |
| 8 | 3Sum | Sort + Two Pointers | Medium |
| 9 | Container With Most Water | Two Pointers | Medium |
| 10 | Trapping Rain Water | Monotonic Stack / Two Ptr | Hard |
Strings
| # | Problem | Pattern | Difficulty |
|---|---|---|---|
| 11 | Longest Substring Without Repeating | Sliding Window | Medium |
| 12 | Minimum Window Substring | Sliding Window | Hard |
| 13 | Valid Anagram | Frequency Count | Easy |
| 14 | Group Anagrams | Sort as Key + Hash Map | Medium |
| 15 | Longest Palindromic Substring | Expand Around Centre / DP | Medium |
| 16 | Encode and Decode Strings | Prefix Length Encoding | Medium |
| 17 | Valid Parentheses | Stack | Easy |
Trees & Graphs
| # | Problem | Pattern | Difficulty |
|---|---|---|---|
| 18 | Invert Binary Tree | DFS/BFS | Easy |
| 19 | Maximum Depth of Binary Tree | DFS | Easy |
| 20 | Same Tree | DFS | Easy |
| 21 | Binary Tree Level Order Traversal | BFS | Medium |
| 22 | Validate Binary Search Tree | DFS + bounds | Medium |
| 23 | Lowest Common Ancestor | DFS post-order | Medium |
| 24 | Binary Tree Right Side View | BFS (last per level) | Medium |
| 25 | Clone Graph | DFS + Hash Map | Medium |
| 26 | Course Schedule (Topo Sort) | Kahn's BFS | Medium |
| 27 | Number of Islands | BFS/DFS or Union-Find | Medium |
| 28 | Word Ladder | BFS + Level | Hard |
| 29 | Word Search II | Trie + DFS Backtrack | Hard |
Binary Search & Heaps
| # | Problem | Pattern | Difficulty |
|---|---|---|---|
| 30 | Binary Search | Classic Template 1 | Easy |
| 31 | Search in Rotated Sorted Array | Rotated Binary Search | Medium |
| 32 | Find Minimum in Rotated Array | Compare mid to hi | Medium |
| 33 | Koko Eating Bananas | Answer Space BS | Medium |
| 34 | Kth Largest Element in Array | Min-Heap of size k | Medium |
| 35 | Merge K Sorted Lists | Min-Heap of k heads | Hard |
| 36 | Top K Frequent Elements | Min-Heap or Bucket Sort | Medium |
| 37 | Find Median from Data Stream | Two Heaps | Hard |
Dynamic Programming
| # | Problem | Pattern | Difficulty |
|---|---|---|---|
| 38 | Climbing Stairs | Fibonacci 1D DP | Easy |
| 39 | House Robber | 1D DP rolling | Medium |
| 40 | Coin Change | Unbounded Knapsack | Medium |
| 41 | Longest Increasing Subsequence | 1D DP or O(n log n) | Medium |
| 42 | Longest Common Subsequence | 2D DP | Medium |
| 43 | Edit Distance | 2D DP 3-way recurrence | Hard |
| 44 | Partition Equal Subset Sum | 0/1 Knapsack | Medium |
| 45 | Unique Paths | Grid path counting DP | Medium |
| 46 | Word Break | 1D DP + set | Medium |
| 47 | Best Time to Buy Stock w/ Cooldown | State Machine DP | Medium |
| 48 | Burst Balloons | Interval DP | Hard |
Backtracking & Stack
| # | Problem | Pattern | Difficulty |
|---|---|---|---|
| 49 | Combination Sum | Backtracking + pruning | Medium |
| 50 | N-Queens | Backtracking + O(1) check | Hard |
⚠️ Section 7 — Universal Mistakes & Red Flags
7.1 — Integer Overflow
// MISTAKE: int overflow in sum/product
int sum = 0;
for (int x : nums) sum += x; // overflows if nums has large values
// FIX: use long long
long long sum = 0;
for (int x : nums) sum += x;
// MISTAKE: mid calculation overflow
int mid = (lo + hi) / 2; // lo+hi can overflow if both ~2^30
// FIX:
int mid = lo + (hi - lo) / 2;
// MISTAKE: multiplying two ints before assigning to long long
long long area = height * width; // height*width computed as int first!
// FIX:
long long area = (long long)height * width;
7.2 — Off-By-One
// Binary Search variants:
while (lo <= hi) { hi = mid - 1; lo = mid + 1; } // exact search
while (lo < hi) { hi = mid; lo = mid + 1; } // lower bound
// Array bounds: always check before accessing
if (i >= 0 && i < n && j >= 0 && j < m) grid[i][j];
// Substring length
s.substr(start, end - start + 1); // inclusive end
s.substr(start, end - start); // exclusive end
7.3 — Graph & Tree Pitfalls
// MISTAKE: Forgetting to check visited before pushing to BFS queue -> infinite loop
// FIX: mark visited WHEN PUSHING, not when popping
if (!visited[v]) { visited[v] = true; q.push(v); }
// Dijkstra: forgetting to skip stale heap entries
auto [d, u] = pq.top(); pq.pop();
if (d > dist[u]) continue; // CRITICAL: skip outdated entries
// Tree: confusing null check
if (!node) return 0; // null node contributes 0 to depth
if (!node->left && !node->right) // leaf node check
7.4 — DP Pitfalls
// Missing base case: dp[0] must be set before the loop
vector<int> dp(amount+1, amount+1);
dp[0] = 0; // CRITICAL base case
// 0/1 Knapsack: iterating capacity forward (allows item reuse)
for (int cap = w[i]; cap <= W; cap++) // WRONG: unbounded knapsack
for (int cap = W; cap >= w[i]; cap--) // CORRECT: 0/1 knapsack
// 2D DP string indexing: text[i-1] not text[i] when using 1-indexed dp
if (s1[i-1] == s2[j-1]) dp[i][j] = dp[i-1][j-1] + 1;
📈 Section 8 — Big-O Growth Cheat Card
| Notation | Name | n=10 | n=100 | n=1,000 | n=10⁶ |
|---|---|---|---|---|---|
| O(1) | Constant | 1 | 1 | 1 | 1 |
| O(log n) | Logarithmic | 3 | 7 | 10 | 20 |
| O(√n) | Square Root | 3 | 10 | 32 | 1,000 |
| O(n) | Linear | 10 | 100 | 1,000 | 1,000,000 |
| O(n log n) | Linearithmic | 33 | 664 | 10,000 | 20,000,000 |
| O(n²) | Quadratic | 100 | 10,000 | 10⁶ | 10¹² (TLE) |
| O(n³) | Cubic | 1,000 | 10⁶ | 10⁹ (TLE) | — |
| O(2ⁿ) | Exponential | 1,024 | 10³⁰ (TLE) | — | — |
| O(n!) | Factorial | 3.6M | 10¹⁵⁷ (TLE) | — | — |
Rules for Simplifying Big-O
- DROP CONSTANTS: O(3n) = O(n). O(2n² + 5n) = O(n²).
- DROP LOWER TERMS: O(n² + n) = O(n²). O(n log n + n) = O(n log n).
- DIFFERENT INPUTS use different variables: O(a + b) is NOT O(n). Keep separate.
- NESTED LOOPS multiply: outer O(n) × inner O(n) = O(n²). Unless inner shrinks per outer.
- RECURSION: T(n) = 2T(n/2) + O(n) ⇒ O(n log n) by Master Theorem (Merge Sort).
- AMORTISED: dynamic array doubling is O(1) amortised per push_back despite occasional O(n) resize.
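The recurrence T(n) = 2T(n/2) + O(n) in the rules above is exactly merge sort; a compact sketch of how the two halves and the linear merge produce it:

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// T(n) = 2T(n/2) + O(n)  =>  O(n log n) by the Master Theorem.
void mergeSort(vector<int>& a, int lo, int hi) { // sorts a[lo, hi)
    if (hi - lo <= 1) return;
    int mid = lo + (hi - lo) / 2;                // overflow-safe midpoint
    mergeSort(a, lo, mid);                       // T(n/2)
    mergeSort(a, mid, hi);                       // T(n/2)
    vector<int> tmp;                             // O(n) merge step
    int i = lo, j = mid;
    while (i < mid || j < hi) {
        if (j >= hi || (i < mid && a[i] <= a[j])) tmp.push_back(a[i++]);
        else tmp.push_back(a[j++]);
    }
    copy(tmp.begin(), tmp.end(), a.begin() + lo);
}
```

Taking the left element on ties (`a[i] <= a[j]`) is what makes the sort stable.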
🔥 Section 9 — Last-Minute Interview Reminders
Before You Code
- Always ask: what are the constraints? (n, value range, sorted?)
- Always ask: can there be duplicates? negative numbers? empty input?
- State brute force first, then optimise. Never jump straight to the optimal.
- Announce your approach and complexity BEFORE writing code.
While Coding
- Use `long long` for sums/products that might exceed 2×10⁹.
- Use `lo + (hi - lo) / 2`, never `(lo + hi) / 2`.
- Check array bounds before every access: `if (i >= 0 && i < n)`.
- In BFS: mark visited when PUSHING, not when POPPING.
- In Dijkstra: skip stale heap entries with `if (d > dist[u]) continue`.
- In 0/1 Knapsack 1D: iterate capacity in REVERSE.
- In Backtracking: always undo (`pop_back` / `used[i] = false`) after recursion.
Common Gotchas by Topic
- BINARY SEARCH: `lo <= hi` for exact, `lo < hi` for bound. `hi = n` (not n-1) for bound.
- TWO POINTERS: only works on sorted arrays or when the property is monotone.
- SLIDING WINDOW: shrink left WHILE the constraint is violated (while, not if).
- HEAP: C++ priority_queue is a max-heap by default. For a min-heap: `greater<int>`.
- GRAPH BFS: use a queue, not a stack. A visited set prevents revisiting.
- TREE DFS: handle the null node at the start: `if (!node) return base_val`.
- BACKTRACKING: collect at every node for subsets; only at leaves for permutations.
- DP: define the state precisely BEFORE writing the recurrence. Base cases BEFORE the loop.
- TRIE: `search()` returns false if `isEnd == false`, even if the path exists.
- UNION-FIND: compare `find(a) == find(b)`, NOT `a == b`.
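The backtracking gotcha above (collect at every node for subsets, undo after each recursive call) in code form:

```cpp
#include <vector>
using namespace std;

// All subsets of nums: O(2^n * n). `cur` is recorded at EVERY node of the
// recursion tree, and each choice is undone (pop_back) after the recursion.
void dfsSubsets(const vector<int>& nums, int start,
                vector<int>& cur, vector<vector<int>>& out) {
    out.push_back(cur);                  // collect at every node
    for (int i = start; i < (int)nums.size(); i++) {
        cur.push_back(nums[i]);          // choose
        dfsSubsets(nums, i + 1, cur, out);
        cur.pop_back();                  // undo
    }
}

vector<vector<int>> subsets(const vector<int>& nums) {
    vector<int> cur;
    vector<vector<int>> out;
    dfsSubsets(nums, 0, cur, out);
    return out;
}
```

For permutations you would instead record `cur` only when it reaches full length, looping over all unused elements rather than `i >= start`.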
Complexity Red Flags
- O(n²) with n > 10⁴: you probably need sliding window, binary search, or a hash map.
- O(2ⁿ) with n > 25: need DP or bitmask DP instead of backtracking.
- O(n!) with n > 12: almost certainly needs pruning or a completely different approach.
- Calling `sort()` inside a loop: O(n² log n) — sort once outside.
- Using `substr()` inside a loop without memoisation: O(n² · L) — cache or use indices.
🎓 End of DSA Course — Chapters 0–12 Complete!
Arrays | Linked Lists | Stacks & Queues | Hashing | Trees | Graphs | Heaps | Greedy | Binary Search | Backtracking | Dynamic Programming | Bonus Topics