JavaScript Event Loop Demystified: What Senior Devs Know That Juniors Don't

Technical PM + Software Engineer
Most explanations of the JavaScript event loop either get too academic too fast or stay so simplified that they stop being useful the moment you hit a real bug.
If you have ever stared at a log order that made no sense, watched setTimeout(fn, 0) run later than expected, or wondered why a resolved Promise beat a timer to execution, you were not really fighting JavaScript. You were fighting a fuzzy mental model.
The event loop gets much easier once you stop treating it like magic and start treating it like a scheduling system with a few very specific rules.
The mental model that actually helps
At a practical level, JavaScript execution is shaped by five moving parts:
- the call stack
- Web APIs or runtime APIs
- the task queue
- the microtask queue
- the event loop itself
The call stack is where JavaScript runs synchronous code right now.
Web APIs in the browser, or runtime APIs in Node.js, handle work that JavaScript asks for but does not execute directly, like timers, network requests, or DOM events.
The task queue, often called the macrotask queue, holds scheduled callbacks that are ready to run later.
The microtask queue holds higher-priority follow-up work, most commonly Promise reactions and queueMicrotask callbacks.
The event loop keeps checking one core thing: is the call stack empty? If it is, JavaScript can pull the next unit of scheduled work into execution.
That sounds simple, but the priority rules are what create most of the surprises.
Start with synchronous execution
Before you think about queues, remember that synchronous code always runs first.
console.log('A')
console.log('B')
console.log('C')
The output is boring:
A
B
C
That matters because a lot of event loop confusion comes from mentally interleaving async work too early. JavaScript will finish the current synchronous frame before it touches queued work.
What setTimeout really does
A timer does not mean "run this in exactly N milliseconds."
It means "after at least N milliseconds, make this callback eligible to run."
console.log('start')
setTimeout(() => {
console.log('timer')
}, 0)
console.log('end')
The output is:
start
end
timer
Why? Because setTimeout hands the callback to the runtime timer system. Once the delay expires, the callback is moved into the task queue. It still cannot run until the call stack is empty and all pending microtasks have been drained.
So 0 does not mean immediate. It means minimum delay before queue eligibility.
That is a much more accurate way to reason about timers.
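You can see the minimum-delay rule directly by blocking the call stack right after scheduling a timer. This sketch blocks synchronously for roughly 100 milliseconds, so even a 0 ms timer cannot fire until that work finishes:

```javascript
// Even a 0 ms timer cannot fire while the call stack is busy.
let timerElapsed = 0
const start = Date.now()

setTimeout(() => {
  timerElapsed = Date.now() - start
  console.log(`timer fired after ~${timerElapsed} ms`)
}, 0)

// Block the stack with synchronous work for ~100 ms.
while (Date.now() - start < 100) {}

console.log('sync work finished')
```

The log shows "sync work finished" first, and the timer reports roughly 100 ms even though it asked for 0.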
Why Promises run before timers
This is the behavior that trips up a lot of developers.
console.log('start')
setTimeout(() => {
console.log('timeout')
}, 0)
Promise.resolve().then(() => {
console.log('promise')
})
console.log('end')
The output is:
start
end
promise
timeout
The key rule is this: after synchronous code finishes, JavaScript drains the microtask queue before taking the next task from the task queue.
Callbacks passed to Promise.then(...) go into the microtask queue. Timer callbacks go into the task queue.
That means microtasks cut the line.
If you only remember one thing about the event loop, make it this:
- synchronous code runs first
- then microtasks run
- then the next task runs
That one rule explains a surprising amount of real-world behavior.
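One detail worth making explicit: the runtime drains the entire microtask queue, including microtasks scheduled by other microtasks, before it takes the next task. A chained .then demonstrates it:

```javascript
const order = []
const log = (label) => { order.push(label); console.log(label) }

setTimeout(() => log('task'), 0)

Promise.resolve()
  .then(() => log('micro 1'))  // scheduled as a microtask
  .then(() => log('micro 2'))  // queued by the first microtask

// Output: micro 1, micro 2, task
```

Even though "micro 2" was scheduled after the timer, it still runs first, because the microtask queue is drained completely before the task queue is touched.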
queueMicrotask is not obscure trivia
Developers often learn Promise behavior first and forget that microtasks are not limited to Promises.
console.log('A')
queueMicrotask(() => {
console.log('microtask')
})
setTimeout(() => {
console.log('task')
}, 0)
console.log('B')
Output:
A
B
microtask
task
queueMicrotask is useful when you explicitly want "after the current synchronous work, but before the next task."
That can be useful in framework internals, custom scheduling, and cleanup patterns where timing matters.
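As a sketch of that kind of use, here is a hypothetical change-batching helper (recordChange and the flush logic are illustrative, not from any particular framework): many synchronous updates are coalesced into a single flush that runs after the current synchronous work but before the next task.

```javascript
// Hypothetical batching helper: many synchronous calls, one flush.
let flushScheduled = false
let flushCount = 0
const pendingChanges = []

function recordChange(change) {
  pendingChanges.push(change)
  if (!flushScheduled) {
    flushScheduled = true
    // Flush after current sync work, before the next task.
    queueMicrotask(() => {
      flushScheduled = false
      const batch = pendingChanges.splice(0)
      flushCount++
      console.log(`flushing ${batch.length} changes`)
    })
  }
}

recordChange('a')
recordChange('b')
recordChange('c')
// One flush of 3 changes, not three separate flushes.
```

This is the same pattern several frameworks use internally to batch state updates without waiting a full timer tick.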
It can also be dangerous if overused.
Microtask starvation is real
Because the runtime drains the entire microtask queue before moving to the next task, it is possible to create starvation.
function loop() {
queueMicrotask(loop)
}
loop()
setTimeout(() => {
console.log('will never get a chance')
}, 0)
This is a terrible idea, but it demonstrates an important truth: if microtasks keep scheduling more microtasks, normal tasks can get delayed indefinitely.
In production, this usually shows up less dramatically. You are more likely to see UI responsiveness degrade, timers feel delayed, or rendering appear blocked by a chain of Promise-driven work.
So while microtasks are "higher priority," that does not mean "always better."
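If you do need repeating scheduled work, rescheduling through the task queue instead of the microtask queue leaves room for other tasks to interleave. A minimal sketch of the same repeating pattern, made safe:

```javascript
// Reschedule as a task, not a microtask, so other tasks get a turn.
let otherRan = false

function repeatAsTasks(remaining) {
  if (remaining === 0) return
  console.log('iteration', remaining)
  setTimeout(() => repeatAsTasks(remaining - 1), 0)
}

repeatAsTasks(3)

setTimeout(() => {
  otherRan = true
  console.log('other work still gets a chance')
}, 0)
```

Because each iteration yields back to the event loop, the second timer's callback runs between iterations instead of being starved.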
A better way to reason about async/await
async/await feels special because it reads like synchronous code. Under the hood, it still rides on Promise mechanics.
async function run() {
console.log('inside start')
await Promise.resolve()
console.log('inside after await')
}
console.log('outside start')
run()
console.log('outside end')
Output:
outside start
inside start
outside end
inside after await
The code before await runs synchronously as part of the current call stack.
Once JavaScript hits await, the rest of the function is scheduled like a Promise continuation, which means it resumes in a microtask.
That explains why await often feels "almost immediate" while still happening after surrounding synchronous code.
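That scheduling also explains how two async functions interleave: each await hands control back, and the continuations take turns through the microtask queue. In this sketch, await null is used only to force a suspension point:

```javascript
const order = []

async function a() {
  order.push('a1')
  await null        // continuation of a() becomes a microtask
  order.push('a2')
  await null
  order.push('a3')
}

async function b() {
  order.push('b1')
  await null        // continuation of b() becomes a microtask
  order.push('b2')
}

a()
b()

setTimeout(() => console.log(order.join(' → ')), 0)
// Logs: a1 → b1 → a2 → b2 → a3
```

Neither function "runs to completion" past an await; the microtask queue alternates between their continuations.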
The browser adds rendering to the story
In the browser, rendering does not happen randomly. The runtime gets opportunities to paint between tasks, not in the middle of a long synchronous block.
That is why this kind of code can freeze the UI even though it contains no network request and no obvious while-true loop.
button.addEventListener('click', () => {
heavyWork()
console.log('done')
})
If heavyWork() takes 300 milliseconds, the browser cannot repaint during that blocking work. The event loop is busy. Your spinner might not show, your click feedback might lag, and the app feels broken.
This is where event loop knowledge stops being trivia and starts becoming product quality.
If you need responsiveness, you may need to:
- split work into chunks
- yield between chunks
- offload work to a Web Worker
- avoid long synchronous loops on the main thread
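The chunking approach can be sketched like this (processInChunks, processItem, and the chunk size are illustrative assumptions; in real apps, requestAnimationFrame or newer scheduler APIs may fit better than setTimeout):

```javascript
// Process items in slices, yielding to the event loop between
// slices so the browser gets a chance to paint and handle input.
function processInChunks(items, processItem, chunkSize = 100) {
  let index = 0
  function runChunk() {
    const end = Math.min(index + chunkSize, items.length)
    for (; index < end; index++) {
      processItem(items[index])
    }
    if (index < items.length) {
      // Yield: continue in a later task instead of blocking.
      setTimeout(runChunk, 0)
    }
  }
  runChunk()
}

// Usage sketch with a trivial per-item function:
const processed = []
processInChunks([...Array(250).keys()], (n) => processed.push(n), 100)
```

Each chunk runs as its own task, so rendering and input handling can slot in between chunks instead of waiting for the whole job.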
A real bug pattern: forEach with async callbacks
One of the most common event-loop-adjacent mistakes is assuming forEach understands async flow.
const ids = [1, 2, 3]
ids.forEach(async (id) => {
await save(id)
console.log('saved', id)
})
console.log('done')
A lot of people expect done to print last. It does not.
forEach does not wait for async callbacks to settle. It schedules the callback calls synchronously, and each await continues later through Promise microtasks.
That means done logs before the saves finish.
If order matters, use for...of inside an async function (or at the top level of a module, where top-level await is allowed).
for (const id of ids) {
await save(id)
console.log('saved', id)
}
console.log('done')
If parallelism matters, use Promise.all intentionally, again inside an async context.
await Promise.all(
ids.map(async (id) => {
await save(id)
console.log('saved', id)
})
)
console.log('done')
The event loop is not the entire story here, but understanding microtask scheduling makes the behavior much easier to predict.
Another real bug pattern: assuming timer order means execution order
Consider this:
setTimeout(() => console.log('first'), 0)
setTimeout(() => console.log('second'), 0)
People often assume this ordering is guaranteed in every situation. In straightforward cases it usually holds, since timers with equal delays fire in the order they were scheduled. But once you mix timers with microtasks, rendering, I/O, or runtime-specific behavior, "I scheduled it first, so it must run first" becomes a fragile assumption.
The safer habit is not to encode critical logic around timing assumptions when you really mean dependency ordering.
If one step depends on another, model the dependency directly instead of hoping queue order happens to preserve it.
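In practice that means chaining the steps rather than scheduling two timers and hoping they fire in order. Here stepOne and stepTwo are hypothetical async steps:

```javascript
let result = ''

// Hypothetical async steps; step two starts because step one
// finished, not because a timer happened to fire second.
async function stepOne() {
  return 'one'
}

async function stepTwo(previous) {
  return `${previous}, then two`
}

async function run() {
  const first = await stepOne()
  result = await stepTwo(first)
  console.log(result)   // "one, then two"
}

run()
```

The ordering now comes from the data dependency itself, so it survives any change in how the underlying work is scheduled.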
Node.js adds a few runtime-specific wrinkles
In Node.js, the broad mental model still holds: call stack, queues, event loop. But Node has additional phases and special scheduling behavior, including things like process.nextTick.
That means some examples differ slightly between browser and Node environments.
The practical takeaway is not that your browser mental model is wrong. It is that you should not assume every scheduling detail is identical across runtimes.
For most app-level debugging, this hierarchy still gets you very far:
- synchronous work first
- Promise continuations and microtasks next
- timer and other queued callbacks after that
If you are debugging low-level Node timing behavior, then it is worth learning the full event loop phases. For day-to-day frontend and full-stack work, the simpler model is enough most of the time.
How senior developers usually think about this
What senior developers tend to know is not some secret internal spec detail. It is mostly a better instinct for where work is being queued and when the current execution context actually ends.
They tend to ask questions like:
- Is this still on the current call stack?
- Does this continue as a microtask or a task?
- Am I blocking rendering with synchronous work?
- Am I assuming a callback has finished when it is only scheduled?
- Does this loop create concurrency I did not intend?
That mindset is what turns the event loop from a memorization topic into a debugging tool.
A compact cheat sheet that is actually useful
If you want a minimal practical summary, use this:
- Synchronous code: runs immediately on the call stack.
- Promise.then, await, queueMicrotask: run as microtasks after the current synchronous work completes, before the next normal task.
- setTimeout, DOM events, most deferred callbacks: run as tasks after microtasks are drained and the stack is empty.
- Rendering in the browser: cannot happen during long synchronous JavaScript execution.
That is enough to explain most surprising log order bugs.
Final takeaway
The JavaScript event loop is not hard because the rules are impossible. It is hard because most people learn partial rules and then build the rest from intuition.
The more reliable mental model is simple:
- finish current synchronous work
- drain microtasks
- take the next task
- repeat
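Run that loop once in your head against a combined example. Note that microtasks drain again after each task, not just after the initial synchronous frame:

```javascript
const order = []
const log = (label) => { order.push(label); console.log(label) }

log('sync 1')

setTimeout(() => {
  log('task')
  // This microtask drains before any later task would run.
  Promise.resolve().then(() => log('microtask from task'))
}, 0)

Promise.resolve().then(() => log('microtask 1'))

log('sync 2')

// Output: sync 1, sync 2, microtask 1, task, microtask from task
```

If you can predict that output without running it, the model has clicked.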
Once that clicks, a lot of confusing async behavior stops feeling random. Timers make more sense. Promises stop feeling magical. async/await becomes easier to reason about. And debugging weird execution order stops feeling like guesswork.
That is the real payoff. The event loop is not something you memorize to sound smart. It is something you understand so you can trust your own predictions when the code gets weird.