Resizing often means allocating a bigger table and rehashing/copying all entries, which is O(n) work done at once. That can create a pause and a latency spike. Mitigations: pick a good load factor, pre-size when you know the size, use incremental rehashing (move entries gradually), or use a concurrent map implementation that spreads the work.
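To make the cost concrete, here is a back-of-the-envelope sketch, assuming a table that doubles its capacity and (for simplicity) resizes only when completely full; real implementations resize earlier, at their load-factor threshold:

// Total copy work across all resizes vs. the single worst resize,
// for n inserts into a doubling table.
// Simplifying assumption: a resize fires when size == capacity.
function resizeCosts(n, initialCap = 8) {
  let cap = initialCap, totalCopied = 0, worstSingle = 0;
  while (cap < n) {
    totalCopied += cap;  // this resize rehashes/copies `cap` entries
    worstSingle = cap;   // the last resize is the biggest one
    cap *= 2;
  }
  return { totalCopied, worstSingle, copiesPerInsert: totalCopied / n };
}
console.log(resizeCosts(1_000_000));
// { totalCopied: 1048568, worstSingle: 524288, copiesPerInsert: 1.048568 }
// Amortized cost is about one copy per insert, but the final resize alone
// moves half a million entries in a single pause: that is the spike.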
Advanced answer
Deep dive
Expanding on the short answer, here is what usually matters in practice:
Complexity: compare typical operations (average vs worst-case).
Invariants: what must always hold for correctness.
When the choice is wrong: production symptoms (latency spikes, GC pressure, cache misses); see the measurement sketch after this list.
Explain the "why", not just the "what" (intuition + consequences).
Trade-offs: what you gain/lose (time, memory, complexity, risk).
Edge cases: empty inputs, large inputs, invalid inputs, concurrency.
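One way to observe the symptom directly is to time individual inserts and log the outliers. A minimal sketch, assuming Node.js (process.hrtime.bigint) and an arbitrary 0.1 ms threshold; on V8, spikes come from backing-store growth and from GC, so treat this as an illustration rather than a benchmark:

// Time each insert into a Map and flag suspicious pauses.
const m = new Map();
for (let i = 0; i < 1_000_000; i++) {
  const t0 = process.hrtime.bigint();
  m.set(i, i);
  const ns = Number(process.hrtime.bigint() - t0);
  if (ns > 100_000) { // > 0.1 ms for one insert: likely growth or GC
    console.log(`insert #${i} took ${(ns / 1e6).toFixed(2)} ms`);
  }
}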
Examples
A tiny example (an explanation template):
// Example: discuss trade-offs for "why-can-a-hash-table-resize-cause-latency-spikes"
function explain() {
  // Start from the core idea:
  // Resizing often means allocating a bigger table and rehashing/copying
  // all entries, which is O(n) work done at once and can stall the caller.
  return "A resize rehashes every entry in one go (O(n)), so one unlucky insert pays for all of them.";
}
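A sketch of the incremental-rehashing mitigation from the short answer (the approach Redis uses for its dicts): keep the old and new tables side by side and migrate at most one bucket per operation, so no single call pays the whole O(n) bill. This is a toy under stated assumptions, not a production implementation; the hash function, the 0.75 load-factor threshold, and the class name are placeholder choices:

class IncrementalHashMap {
  constructor(capacity = 8) {
    this.cur = Array.from({ length: capacity }, () => []); // active buckets
    this.old = null;        // table being drained during a resize, or null
    this.migrateIndex = 0;  // next old bucket to migrate
    this.size = 0;
  }

  _bucket(table, key) {
    // Toy string hash; a real table would use a stronger hash function.
    let h = 0;
    for (const c of String(key)) h = (h * 31 + c.charCodeAt(0)) | 0;
    return table[Math.abs(h) % table.length];
  }

  _migrateStep() {
    // Move at most one non-empty old bucket per call: small, bounded work.
    if (!this.old) return;
    while (this.migrateIndex < this.old.length) {
      const bucket = this.old[this.migrateIndex++];
      if (bucket.length === 0) continue;
      for (const [k, v] of bucket) {
        const dst = this._bucket(this.cur, k);
        if (!dst.some(([dk]) => dk === k)) dst.push([k, v]); // skip stale copies
      }
      return;
    }
    this.old = null; // fully drained: the resize is complete
  }

  set(key, value) {
    this._migrateStep();
    if (!this.old && this.size / this.cur.length > 0.75) {
      this.old = this.cur; // start a gradual resize instead of one big pause
      this.cur = Array.from({ length: this.old.length * 2 }, () => []);
      this.migrateIndex = 0;
    }
    const bucket = this._bucket(this.cur, key);
    const hit = bucket.find(([k]) => k === key);
    if (hit) {
      hit[1] = value;
    } else {
      // size may briefly overcount a key updated mid-resize; fine for a sketch
      bucket.push([key, value]);
      this.size++;
    }
  }

  get(key) {
    this._migrateStep();
    for (const table of [this.cur, this.old]) {
      if (!table) continue;
      const hit = this._bucket(table, key).find(([k]) => k === key);
      if (hit) return hit[1];
    }
    return undefined;
  }
}

Each operation now pays a small, bounded migration cost instead of one large one; the price is that lookups must check both tables until the old one is fully drained.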
Common pitfalls
Too generic: no concrete trade-offs or examples.
Mixing up average-case and worst-case complexity (e.g., quoting O(1) insert while ignoring the occasional O(n) resize).