Algorithms
A recruitment and knowledge question base: filter, search, and test your knowledge.

easy · search · binary-search · algorithm +1
Answer
Binary search finds a target in a sorted array/list by comparing with the middle element and discarding half of the range each step. It requires sorted input and runs in O(log n) time (iteratively using O(1) extra space).
function binarySearch(arr: number[], target: number): number {
  let left = 0;
  let right = arr.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1;
}

medium · sorting · quicksort · mergesort +2
Answer
Both are divide‑and‑conquer sorts. QuickSort partitions around a pivot, is in‑place and very fast on average O(n log n), but has O(n²) worst‑case and is not stable. MergeSort splits and merges, is stable with guaranteed O(n log n) worst‑case, but needs O(n) extra memory and works well for linked lists/external sorting.
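A minimal MergeSort sketch illustrating the points above (the `<=` in the merge is what makes it stable; the extra arrays are the O(n) memory cost):

```typescript
// MergeSort sketch: split, sort halves, merge. Stable because `<=`
// keeps the left element first on ties; costs O(n) extra arrays.
function mergeSort(nums: number[]): number[] {
  if (nums.length <= 1) return nums;
  const mid = Math.floor(nums.length / 2);
  const left = mergeSort(nums.slice(0, mid));
  const right = mergeSort(nums.slice(mid));
  const merged: number[] = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}
```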

medium · graph · bfs · dfs +2
Answer
BFS explores a graph level by level using a queue and finds the shortest path in unweighted graphs. DFS explores as deep as possible using recursion/stack; it’s useful for topological sort, cycle detection and often uses less memory on wide graphs.

hard · graph · shortest-path · dijkstra +1
Answer
Dijkstra’s algorithm computes shortest paths from a single source in a graph with non‑negative edge weights. It repeatedly picks the closest unvisited node (usually via a priority queue) and relaxes outgoing edges, achieving O((V+E) log V) time with a heap.
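A sketch of the idea, using a plain O(V²) scan instead of a priority queue to keep it short (a binary heap gives the O((V+E) log V) bound mentioned above; the function name and graph shape are illustrative):

```typescript
type Edge = [to: number, weight: number];

// Dijkstra sketch without a heap: repeatedly pick the closest
// unvisited node, then relax its outgoing edges.
function dijkstra(graph: Edge[][], source: number): number[] {
  const n = graph.length;
  const dist = Array(n).fill(Infinity);
  const done = Array(n).fill(false);
  dist[source] = 0;
  for (let step = 0; step < n; step++) {
    // Pick the unvisited node with the smallest known distance.
    let u = -1;
    for (let v = 0; v < n; v++) {
      if (!done[v] && (u === -1 || dist[v] < dist[u])) u = v;
    }
    if (u === -1 || dist[u] === Infinity) break;
    done[u] = true;
    // Relax every outgoing edge of u.
    for (const [to, w] of graph[u]) {
      if (dist[u] + w < dist[to]) dist[to] = dist[u] + w;
    }
  }
  return dist;
}
```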

hard · dynamic-programming · optimization · memoization
Answer
Dynamic programming solves a problem by solving smaller subproblems and saving their results so you don’t recompute them. Use it when subproblems overlap and the best solution can be built from best sub‑solutions (memoization/top‑down or a bottom‑up table).

easy · bfs · dfs · graph +1
Answer
BFS explores level by level and finds the shortest path in an unweighted graph. DFS goes deep first and is handy for traversals, cycle detection, and topological-style problems.
function bfs(start: number, graph: number[][]): number[] {
  const order: number[] = [];
  const q: number[] = [start];
  const seen = new Set<number>([start]);
  while (q.length) {
    const v = q.shift()!; // for large inputs, use an index pointer instead of O(n) shift()
    order.push(v);
    for (const n of graph[v]) {
      if (!seen.has(n)) {
        seen.add(n);
        q.push(n);
      }
    }
  }
  return order; // vertices in BFS (level-by-level) order
}

easy · big-o · complexity · performance
Answer
Big-O describes how time or memory grows with input size n (the growth rate). It helps compare algorithms independent of hardware.

easy · two-pointers · array · technique
Answer
You keep two indices (e.g., left/right) and move them based on a rule, often on a sorted array or a sliding window. It can reduce nested loops to O(n) in many problems.
// Assumes `arr` is sorted in ascending order.
function hasPairSum(arr: number[], target: number): boolean {
  let l = 0;
  let r = arr.length - 1;
  while (l < r) {
    const sum = arr[l] + arr[r];
    if (sum === target) return true;
    if (sum < target) l++;
    else r--;
  }
  return false;
}

medium · sorting · stable-sort · algorithm
Answer
A stable sort keeps the relative order of elements with equal keys. It matters when you sort by multiple fields (e.g., sort by last name, then by first name).
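The multi-field trick relies on stability, which `Array.prototype.sort` guarantees since ES2019. A small sketch (names are illustrative):

```typescript
// Sort by the fine-grained key first, then by the coarse key.
// Stability keeps equal last names in first-name order.
const people = [
  { last: "Smith", first: "Bob" },
  { last: "Adams", first: "Zoe" },
  { last: "Smith", first: "Ann" },
];
people.sort((a, b) => a.first.localeCompare(b.first)); // secondary key
people.sort((a, b) => a.last.localeCompare(b.last));   // primary key
// Result: Adams/Zoe, Smith/Ann, Smith/Bob.
```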

medium · greedy · dynamic-programming · optimization
Answer
Greedy makes the best local choice at each step and never revisits it; it works only when the problem has the greedy-choice property. DP solves overlapping subproblems and reuses results (memoization/tabulation) to guarantee optimality.

medium · binary-search · bugs · overflow
Answer
Common bugs are inconsistent bounds (causing infinite loops or off-by-one misses) and integer overflow when computing mid in fixed-width languages. Use `mid = left + (right - left) / 2` and be consistent about inclusive/exclusive ranges.
function binarySearch(arr: number[], target: number): number {
  let left = 0;
  let right = arr.length - 1;
  while (left <= right) {
    const mid = left + Math.floor((right - left) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1;
}

hard · shortest-path · bfs · dijkstra +1
Answer
If all edges have equal weight (e.g., weight 1), BFS finds shortest paths. If weights are only 0 or 1, you can use 0–1 BFS with a deque; otherwise you need Dijkstra.
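A sketch of the 0–1 BFS variant, using a plain array as the deque for brevity (a real deque avoids the O(n) shift/unshift; names are illustrative):

```typescript
type Edge01 = [to: number, weight: 0 | 1];

// 0-1 BFS: weight-0 edges go to the front of the deque, weight-1
// edges to the back, so nodes come out in distance order: O(V + E).
function zeroOneBfs(graph: Edge01[][], source: number): number[] {
  const dist = Array(graph.length).fill(Infinity);
  dist[source] = 0;
  const deque: number[] = [source];
  while (deque.length) {
    const u = deque.shift()!;
    for (const [to, w] of graph[u]) {
      if (dist[u] + w < dist[to]) {
        dist[to] = dist[u] + w;
        if (w === 0) deque.unshift(to); // free edge: process next
        else deque.push(to);            // unit edge: process later
      }
    }
  }
  return dist;
}
```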

hard · topological-sort · dag · graph
Answer
It orders vertices so every directed edge u→v goes from earlier to later. It exists only for a DAG (directed acyclic graph); any cycle means no valid topological order.
function topoSort(n: number, edges: Array<[number, number]>) {
  const indeg = Array(n).fill(0);
  const g: number[][] = Array.from({ length: n }, () => []);
  for (const [u, v] of edges) {
    g[u].push(v);
    indeg[v]++;
  }
  const q: number[] = [];
  for (let i = 0; i < n; i++) if (indeg[i] === 0) q.push(i);
  const order: number[] = [];
  while (q.length) {
    const u = q.shift()!;
    order.push(u);
    for (const v of g[u]) {
      indeg[v]--;
      if (indeg[v] === 0) q.push(v);
    }
  }
  return order.length === n ? order : null; // null => cycle
}

hard · kadane · dynamic-programming · array
Answer
It finds the maximum sum of a contiguous subarray in O(n) time by tracking the best sum ending at the current position and the best overall.
function maxSubarraySum(nums: number[]): number {
  let best = nums[0];
  let cur = nums[0];
  for (let i = 1; i < nums.length; i++) {
    cur = Math.max(nums[i], cur + nums[i]);
    best = Math.max(best, cur);
  }
  return best;
}

hard · kmp · string-search · pattern-matching
Answer
KMP precomputes a prefix function (LPS) so when a mismatch happens, it shifts the pattern without re-checking characters. This makes search O(n+m) instead of O(n·m).

easy · recursion · base-case · call-stack
Answer
Recursion is when a function solves a problem by calling itself on a smaller input. The base case is the stopping condition that does not recurse, so the calls eventually end.

medium · memoization · dynamic-programming · cache
Answer
Memoization caches results of a function for given inputs, so repeated calls reuse the cached value. It helps when you have overlapping subproblems (common in dynamic programming) and the same states are computed many times.
function fib(n: number, memo = new Map<number, number>()): number {
  if (n <= 1) return n;
  const cached = memo.get(n);
  if (cached !== undefined) return cached;
  const value = fib(n - 1, memo) + fib(n - 2, memo);
  memo.set(n, value);
  return value;
}

medium · quickselect · partition · selection +1
Answer
Quickselect finds the k-th smallest element by partitioning like QuickSort, but recursing only into one side. Average time is O(n), worst case is O(n²) (bad pivots).
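A sketch with a random pivot and a Lomuto partition (0-based k assumed in range; randomization is what keeps the expected time at O(n)):

```typescript
// Quickselect: partition like QuickSort, but recurse into only one
// side. Expected O(n); worst case O(n^2) with unlucky pivots.
function quickselect(nums: number[], k: number): number {
  const a = nums.slice(); // don't mutate the caller's array
  let lo = 0, hi = a.length - 1;
  while (lo < hi) {
    // Random pivot avoids adversarial patterns (e.g. sorted input).
    const p = lo + Math.floor(Math.random() * (hi - lo + 1));
    [a[p], a[hi]] = [a[hi], a[p]];
    // Lomuto partition: elements < pivot end up left of `store`.
    let store = lo;
    for (let i = lo; i < hi; i++) {
      if (a[i] < a[hi]) {
        [a[i], a[store]] = [a[store], a[i]];
        store++;
      }
    }
    [a[store], a[hi]] = [a[hi], a[store]];
    if (store === k) return a[store];
    if (store < k) lo = store + 1; // k-th lies in the right part
    else hi = store - 1;           // k-th lies in the left part
  }
  return a[lo];
}
```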

hard · dijkstra · bellman-ford · negative-weights +1
Answer
Dijkstra assumes that once a node has the smallest known distance, it will never improve later — negative edges break this assumption. Use Bellman–Ford for negative weights (and it can detect negative cycles).

hard · stack · monotonic-stack · next-greater +1
Answer
A monotonic stack keeps elements in increasing or decreasing order. It’s used for “next greater/smaller element”, stock span, and histogram problems, often in O(n) time by pushing/popping each element at most once.
function nextGreater(nums: number[]): number[] {
  const res = Array(nums.length).fill(-1);
  const st: number[] = []; // stack of indices
  for (let i = 0; i < nums.length; i++) {
    while (st.length && nums[st[st.length - 1]] < nums[i]) {
      res[st.pop()!] = nums[i];
    }
    st.push(i);
  }
  return res;
}

easy · correctness · invariant · loops
Answer
A loop invariant is a statement that is true before and after each loop iteration. It helps you prove correctness: if it holds initially, stays true each step, and implies the goal at the end, the algorithm is correct.

medium · dynamic-programming · memoization · tabulation
Answer
Top-down uses recursion with memoization (compute states on demand). Bottom-up fills a table iteratively from smaller subproblems to bigger ones. Both reuse results; choose based on clarity and memory/control needs.
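The bottom-up counterpart of the memoized Fibonacci shown earlier, as a sketch: iterate from small states up, keeping only the last two values (so the "table" shrinks to O(1) space):

```typescript
// Bottom-up (tabulation) Fibonacci: no recursion, O(n) time.
function fibBottomUp(n: number): number {
  if (n === 0) return 0;
  let prev = 0, cur = 1; // f(0), f(1)
  for (let i = 2; i <= n; i++) {
    [prev, cur] = [cur, prev + cur]; // slide the two-value window
  }
  return cur;
}
```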

medium · randomization · quicksort · quickselect +1
Answer
A random pivot makes it unlikely to hit adversarial inputs that cause worst-case O(n²). In practice it gives good expected performance and avoids patterns like already-sorted arrays producing bad pivots.

hard · a-star · dijkstra · heuristics +1
Answer
Dijkstra expands nodes by current distance. A* adds a heuristic `h(n)` (estimated remaining cost) and prioritizes `g(n)+h(n)`. With an admissible heuristic (never overestimates), A* is optimal and usually explores fewer nodes, so it can be faster.

hard · mst · kruskal · prim +2
Answer
Kruskal sorts edges and adds the smallest ones that don’t create cycles (uses Union-Find). Prim grows a tree from a start node using a priority queue of edges. Both can be O(E log E) / O(E log V) depending on implementation.

easy · backtracking · search · pruning +1
Answer
Backtracking is a depth-first search over a decision tree: you build a partial solution, and when it becomes invalid you undo the last choice and try another. It’s used for problems like permutations, N-Queens, Sudoku, and subset search with pruning.
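A minimal sketch of the choose/explore/un-choose pattern, generating permutations (the helper names are illustrative):

```typescript
// Backtracking: build a partial permutation choice by choice and
// undo (pop) the last choice when the branch is exhausted.
function permutations<T>(items: T[]): T[][] {
  const result: T[][] = [];
  const current: T[] = [];
  const used = Array(items.length).fill(false);
  function backtrack(): void {
    if (current.length === items.length) {
      result.push(current.slice());
      return;
    }
    for (let i = 0; i < items.length; i++) {
      if (used[i]) continue;   // pruning: skip already-placed items
      used[i] = true;
      current.push(items[i]);  // choose
      backtrack();             // explore
      current.pop();           // un-choose (backtrack)
      used[i] = false;
    }
  }
  backtrack();
  return result;
}
```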

medium · amortized · complexity · dynamic-array +1
Answer
Amortized means “average cost per operation over a whole sequence”, even if some single operations are expensive. In a dynamic array, most appends are O(1), and once in a while you pay O(n) to resize/copy—spread across many appends it becomes O(1) amortized.

medium · counting-sort · sorting · stability +1
Answer
Counting sort is good when keys are integers in a small range 0..k. It runs in O(n + k) by counting occurrences and building the output, and it can be stable (useful as a building block for radix sort).
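A sketch of the stable formulation (prefix counts, then a right-to-left output pass; this is the variant radix sort builds on):

```typescript
// Counting sort for integer keys in 0..k: O(n + k), stable.
function countingSort(nums: number[], k: number): number[] {
  const count = Array(k + 1).fill(0);
  for (const x of nums) count[x]++;
  // Prefix sums: count[i] = number of elements <= i.
  for (let i = 1; i <= k; i++) count[i] += count[i - 1];
  const out = Array(nums.length);
  // Right-to-left pass keeps equal keys in their original order.
  for (let i = nums.length - 1; i >= 0; i--) {
    out[--count[nums[i]]] = nums[i];
  }
  return out;
}
```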

hard · complexity-theory · np-hard · np-complete +1
Answer
NP-complete problems are both in NP (a solution can be verified in polynomial time) and NP-hard. NP-hard means “at least as hard as NP problems” but it might not be in NP (e.g., optimization versions). If any NP-complete problem has a polynomial-time algorithm, then P = NP.

hard · graphs · shortest-path · floyd-warshall +1
Answer
Floyd–Warshall computes shortest paths between all pairs of vertices. It runs in O(V³) time and uses O(V²) memory, so it’s practical mainly for smaller or dense graphs. It can handle negative edges, but not negative cycles.
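The whole algorithm is three nested loops over an adjacency matrix; a sketch (Infinity marks a missing edge):

```typescript
// Floyd-Warshall: after iteration k, dist[i][j] is the shortest path
// from i to j using only intermediate vertices 0..k.
function floydWarshall(matrix: number[][]): number[][] {
  const n = matrix.length;
  const dist = matrix.map((row) => row.slice()); // copy input
  for (let k = 0; k < n; k++) {
    for (let i = 0; i < n; i++) {
      for (let j = 0; j < n; j++) {
        if (dist[i][k] + dist[k][j] < dist[i][j]) {
          dist[i][j] = dist[i][k] + dist[k][j]; // go through k
        }
      }
    }
  }
  return dist;
}
```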

easy · prefix-sum · range-query · preprocessing
Answer
A prefix sum array stores sums up to each index. After O(n) preprocessing, you can answer range sum queries in O(1): sum[l..r] = prefix[r] - prefix[l-1] (treat prefix[-1] as 0, or use a prefix array of length n+1). It’s also used for counting frequencies and quick “how many in a range” queries.
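A sketch using the length n+1 convention, which removes the l = 0 special case entirely:

```typescript
// prefix[i] = sum of nums[0..i-1], so prefix[0] = 0.
function buildPrefix(nums: number[]): number[] {
  const prefix = Array(nums.length + 1).fill(0);
  for (let i = 0; i < nums.length; i++) {
    prefix[i + 1] = prefix[i] + nums[i];
  }
  return prefix;
}

// Sum of nums[l..r], bounds inclusive, answered in O(1).
function rangeSum(prefix: number[], l: number, r: number): number {
  return prefix[r + 1] - prefix[l];
}
```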

medium · sliding-window · two-pointers · complexity
Answer
Sliding window keeps a moving range [l..r] and updates it in one pass. You expand `r` and move `l` to maintain a condition (e.g., sum <= X, at most K distinct). Many problems become O(n) instead of O(n²) because each pointer moves forward at most n times.
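A sketch of the pattern for "longest subarray with sum <= limit" (assumes non-negative numbers so the window sum is monotonic; the name is illustrative):

```typescript
// Sliding window: r expands, l shrinks until the window is valid
// again. Each pointer moves forward at most n times, so O(n) total.
function longestWithinLimit(nums: number[], limit: number): number {
  let l = 0, sum = 0, best = 0;
  for (let r = 0; r < nums.length; r++) {
    sum += nums[r];       // expand the right edge
    while (sum > limit) {
      sum -= nums[l++];   // shrink from the left until valid
    }
    best = Math.max(best, r - l + 1);
  }
  return best;
}
```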

medium · string · hashing · rabin-karp +1
Answer
A rolling hash lets you update the hash of a window when you shift by one character (remove left, add right). Rabin–Karp compares hashes to find candidate matches, then usually verifies the substring to avoid false positives. Caveat: hashes can collide, so without verification it’s not guaranteed correct.
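A Rabin–Karp sketch. The base and modulus here are illustrative choices (kept small so plain JS number arithmetic stays exact), and every hash match is verified with a real comparison, as the caveat above requires:

```typescript
function rabinKarp(text: string, pattern: string): number {
  const base = 256;
  const mod = 1_000_003; // prime; products stay well below 2^53
  const n = text.length, m = pattern.length;
  if (m === 0 || m > n) return m === 0 ? 0 : -1;
  let high = 1; // base^(m-1) % mod: weight of the outgoing character
  for (let i = 1; i < m; i++) high = (high * base) % mod;
  let hp = 0, ht = 0;
  for (let i = 0; i < m; i++) {
    hp = (hp * base + pattern.charCodeAt(i)) % mod;
    ht = (ht * base + text.charCodeAt(i)) % mod;
  }
  for (let i = 0; ; i++) {
    // A hash match is only a candidate; verify to rule out collisions.
    if (ht === hp && text.slice(i, i + m) === pattern) return i;
    if (i + m >= n) return -1;
    // Roll: drop text[i], append text[i + m], all mod `mod`.
    ht = ((ht - (text.charCodeAt(i) * high) % mod + mod) % mod * base
          + text.charCodeAt(i + m)) % mod;
  }
}
```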

hard · deque · monotonic-queue · sliding-window +1
Answer
A monotonic queue is a deque that keeps elements in decreasing (or increasing) order. For sliding window maximum, you pop smaller elements from the back when adding a new one, so the front is always the max. You also pop from the front when it leaves the window. Each element is added and removed at most once, so total time is O(n).
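A sliding-window-maximum sketch following exactly that recipe (a plain array stands in for the deque; indices are stored so we can tell when the front leaves the window):

```typescript
// dq holds indices; nums[dq[0]] >= nums[dq[1]] >= ... by construction,
// so the front is always the current window maximum.
function windowMax(nums: number[], k: number): number[] {
  const res: number[] = [];
  const dq: number[] = [];
  for (let i = 0; i < nums.length; i++) {
    // Pop smaller elements from the back before adding nums[i].
    while (dq.length && nums[dq[dq.length - 1]] <= nums[i]) dq.pop();
    dq.push(i);
    if (dq[0] <= i - k) dq.shift(); // front fell out of the window
    if (i >= k - 1) res.push(nums[dq[0]]);
  }
  return res;
}
```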

hard · dp · bitmask · subset +2
Answer
Bitmask DP uses a bitmask to represent a subset (e.g., which nodes are visited). A common form is dp[mask][i] = best result for subset `mask` ending at `i` (used in problems like TSP). Typical complexity is exponential, often O(n²·2ⁿ) time and O(n·2ⁿ) memory.
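A sketch of the dp[mask][i] form for the cheapest Hamiltonian path (a TSP-style problem; start node is free here, which is one common variant):

```typescript
// dp[mask][i] = cheapest way to visit exactly the nodes in `mask`,
// ending at node i. O(n^2 * 2^n) time, O(n * 2^n) memory.
function shortestHamiltonianPath(cost: number[][]): number {
  const n = cost.length;
  const full = (1 << n) - 1;
  const dp = Array.from({ length: 1 << n }, () => Array(n).fill(Infinity));
  for (let i = 0; i < n; i++) dp[1 << i][i] = 0; // start anywhere
  for (let mask = 1; mask <= full; mask++) {
    for (let i = 0; i < n; i++) {
      if (dp[mask][i] === Infinity || !(mask & (1 << i))) continue;
      for (let j = 0; j < n; j++) {
        if (mask & (1 << j)) continue; // j already visited
        const next = mask | (1 << j);
        dp[next][j] = Math.min(dp[next][j], dp[mask][i] + cost[i][j]);
      }
    }
  }
  return Math.min(...dp[full]); // best over all possible end nodes
}
```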

medium · greedy · correctness · exchange-argument +1
Answer
A greedy algorithm is correct when the problem has the greedy-choice property and optimal substructure. Intuitively, you can prove that making the locally best choice can be exchanged into an optimal solution (exchange argument), so the local choice leads to a global optimum.

hard · kmp · string · pattern-matching +1
Answer
KMP precomputes the longest proper prefix that is also a suffix (LPS) for each pattern prefix. On mismatch, it shifts the pattern using the LPS value instead of moving the text pointer back, so characters in the text are not re-checked.
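The LPS table computation can be sketched as follows (the search phase uses the same fallback loop against the text):

```typescript
// lps[i] = length of the longest proper prefix of pattern[0..i]
// that is also a suffix of it (a "border").
function buildLps(pattern: string): number[] {
  const lps = Array(pattern.length).fill(0);
  let len = 0; // length of the currently matched prefix
  for (let i = 1; i < pattern.length; i++) {
    while (len > 0 && pattern[i] !== pattern[len]) {
      len = lps[len - 1]; // fall back to the next shorter border
    }
    if (pattern[i] === pattern[len]) len++;
    lps[i] = len;
  }
  return lps;
}
```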

medium · bellman-ford · shortest-path · graphs +1
Answer
Use Bellman–Ford when you have negative edge weights. It can detect negative cycles by doing one extra relaxation round. The trade‑off is slower time complexity: O(V·E).
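A sketch of both the V-1 relaxation rounds and the extra cycle-detection round (the edge-list shape is illustrative):

```typescript
type WEdge = [from: number, to: number, weight: number];

// Bellman-Ford: relax every edge V-1 times; a further improvement in
// one extra round means a negative cycle is reachable. O(V * E).
function bellmanFord(n: number, edges: WEdge[], source: number): number[] | null {
  const dist = Array(n).fill(Infinity);
  dist[source] = 0;
  for (let round = 0; round < n - 1; round++) {
    for (const [u, v, w] of edges) {
      if (dist[u] + w < dist[v]) dist[v] = dist[u] + w;
    }
  }
  for (const [u, v, w] of edges) {
    if (dist[u] + w < dist[v]) return null; // negative cycle detected
  }
  return dist;
}
```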

medium · mst · kruskal · prim +2
Answer
Kruskal builds the MST by sorting edges and adding them if they don’t create a cycle (usually with DSU/Union‑Find). Prim grows the MST from a start node using a priority queue of edges. Kruskal is often good for sparse graphs; Prim is convenient when you have adjacency lists.
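A Kruskal sketch with a minimal Union-Find (path halving, no rank, which is enough for a sketch); it returns the MST weight, or null when the graph is disconnected:

```typescript
// Kruskal: sort edges by weight, add the cheapest edge that connects
// two different components (checked via Union-Find).
function kruskal(n: number, edges: Array<[number, number, number]>): number | null {
  const parent = Array.from({ length: n }, (_, i) => i);
  function find(x: number): number {
    while (parent[x] !== x) {
      parent[x] = parent[parent[x]]; // path halving
      x = parent[x];
    }
    return x;
  }
  const sorted = edges.slice().sort((a, b) => a[2] - b[2]);
  let total = 0, used = 0;
  for (const [u, v, w] of sorted) {
    const ru = find(u), rv = find(v);
    if (ru === rv) continue; // same component: would create a cycle
    parent[ru] = rv;         // union the two components
    total += w;
    used++;
  }
  return used === n - 1 ? total : null; // null => disconnected graph
}
```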

medium · quickselect · selection · partition +1
Answer
Quickselect finds the k‑th smallest/largest element without fully sorting the array. It uses partitioning like quicksort and runs in expected O(n) time, but can degrade to O(n²) in the worst case.

medium · binary-search · parametric-search · monotonic +1
Answer
It applies when you can define a monotonic predicate over the answer space (e.g., “is there a solution with value ≤ X?”). Then you can binary search the minimal/maximal X that satisfies the predicate.
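A sketch of "binary search on the answer": find the smallest x in [lo, hi] for which a monotonic predicate flips from false to true (assumes the predicate holds at hi; the names are illustrative):

```typescript
// Parametric search: halve the answer space instead of an array.
function minSatisfying(lo: number, hi: number, ok: (x: number) => boolean): number {
  while (lo < hi) {
    const mid = lo + Math.floor((hi - lo) / 2);
    if (ok(mid)) hi = mid; // mid might be the answer: keep it in range
    else lo = mid + 1;     // everything <= mid is ruled out
  }
  return lo;
}

// Example predicate: smallest x with x * x >= 12.
const answer = minSatisfying(0, 100, (x) => x * x >= 12);
```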

easy · cycle-detection · tortoise-hare · linked-list +1
Answer
It detects cycles in a linked list or any iterative sequence by moving one pointer twice as fast as the other. If they meet, there’s a cycle. It runs in O(n) time and O(1) extra space.
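A minimal sketch on a linked list (the `ListNode` shape is illustrative):

```typescript
interface ListNode {
  next: ListNode | null;
}

// Floyd's tortoise and hare: slow moves one step, fast moves two.
// They can only meet inside a cycle. O(n) time, O(1) space.
function hasCycle(head: ListNode | null): boolean {
  let slow = head, fast = head;
  while (fast !== null && fast.next !== null) {
    slow = slow!.next;
    fast = fast.next.next;
    if (slow === fast) return true;
  }
  return false; // fast reached the end: no cycle
}
```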

medium · heapsort · sorting · complexity +1
Answer
Heapsort runs in O(n log n) time, uses O(1) extra space (in‑place), and is not stable (equal elements can change order).

hard · a-star · heuristic · shortest-path +1
Answer
A* uses f(n) = g(n) + h(n). If the heuristic h is admissible (never overestimates) and consistent, A* is optimal and explores fewer nodes than Dijkstra. If h overestimates, optimality is not guaranteed. With h = 0, A* reduces to Dijkstra.

medium · majority-vote · array · linear-time +1
Answer
It finds the element that appears more than n/2 times (if it exists). The idea is to cancel pairs of different elements; the remaining candidate is the majority. It runs in O(n) time and O(1) space.
function majority(nums: number[]): number {
  let cand = 0;
  let count = 0;
  for (const x of nums) {
    if (count === 0) cand = x;
    count += x === cand ? 1 : -1;
  }
  // If a majority is not guaranteed to exist, verify `cand`
  // with a second counting pass before trusting it.
  return cand;
}