The JVM starts by interpreting bytecode, then JIT‑compiles “hot” methods to native code based on profiling. Early requests can be slower; after warm‑up, optimized machine code runs faster.
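The warm-up effect described above can be sketched with a tiny timing demo (class name `WarmupDemo` is illustrative; the numbers vary by JVM and flags, and real measurements should use a harness like JMH):

```java
// Hypothetical micro-demo: time the same method once while cold (interpreted)
// and once after many invocations (by then likely JIT-compiled).
public class WarmupDemo {
    static long sum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        sum(10_000);
        long cold = System.nanoTime() - t0;

        // Warm-up: enough invocations to cross HotSpot's hot-method thresholds.
        for (int i = 0; i < 20_000; i++) sum(10_000);

        long t1 = System.nanoTime();
        sum(10_000);
        long warm = System.nanoTime() - t1;

        System.out.println("cold ~" + cold + " ns, warm ~" + warm + " ns");
    }
}
```

To watch the JIT at work, HotSpot can log compilations with `java -XX:+PrintCompilation`.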
Deep dive
Expanding on the short answer — what usually matters in practice:
Context (tags): java, jit, performance, jvm
- JVM runtime: memory layout (heap vs. stack), garbage collection, and what drives latency (GC pauses, JIT warm-up).
- Contracts: equals/hashCode/toString, and the consequences of mutability for hash-based collections.
- Explain the "why", not just the "what" (intuition plus consequences).
- Trade-offs: what you gain and lose in time, memory, complexity, and risk.
- Edge cases: empty inputs, very large inputs, invalid inputs, and concurrency.
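The equals/hashCode/toString contract in the list above can be shown with a minimal immutable value class (class name `Point` is illustrative):

```java
import java.util.Objects;

// Minimal value class honoring the equals/hashCode contract:
// equal objects MUST have equal hash codes, or hash-based
// collections (HashMap, HashSet) silently misbehave.
// Final fields make the class immutable, so its hash can't drift
// while the object sits inside a HashSet.
final class Point {
    final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override public int hashCode() { return Objects.hash(x, y); }

    @Override public String toString() { return "Point(" + x + ", " + y + ")"; }
}
```

Overriding `equals` without `hashCode` is the classic way to lose elements in a `HashSet`.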
Examples
A tiny example (an explanation template):
// Example: discuss trade-offs for "jit-compilation:-what-is-it-and-why-do-java-apps"
function explain() {
    // Start from the core idea:
    // The JVM starts by interpreting bytecode, then JIT-compiles "hot" methods
    // to native code based on profiling.
}
Common pitfalls
Too generic: no concrete trade-offs or examples.
Mixing up average-case and worst-case behavior (e.g., quoting an amortized complexity as if it bounded every single call).
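The average-vs-worst-case pitfall is concrete in `java.util.ArrayList`: `add` is amortized O(1), but an individual call that forces a grow-and-copy of the backing array is O(n). A minimal sketch (class name `AmortizedDemo` and helper `fill` are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// ArrayList.add is amortized O(1): most calls just store into a free slot,
// but a call that exceeds the current capacity copies the whole backing
// array (O(n)). Quoting only the average case hides those expensive calls.
public class AmortizedDemo {
    static List<Integer> fill(int n) {
        List<Integer> list = new ArrayList<>(4); // deliberately small capacity
        for (int i = 0; i < n; i++) {
            list.add(i); // a few of these adds trigger an internal resize
        }
        return list;
    }

    public static void main(String[] args) {
        System.out.println(fill(1_000).size()); // prints 1000
    }
}
```

The same distinction matters for hash-map lookups (O(1) average, O(n) degenerate) and quicksort (O(n log n) average, O(n²) worst case).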