What Is INP? Interaction to Next Paint Explained
Need to test your site right now? Use the INP Checker — it pulls real-user INP, LCP, and CLS from Google's Chrome UX Report with a mobile/desktop breakdown. No signup.
What is this?
Interaction to Next Paint (INP) is a Core Web Vital that measures the latency between a user interaction — a click, a tap, or a keypress — and the next frame the browser paints in response. It captures the full round-trip: input delay while the main thread is busy, the time your event handler takes to run, and the presentation delay before the browser can commit the next frame. Low INP means the page feels snappy; high INP means users see a visible lag between their action and the UI's response.
INP replaced First Input Delay (FID) as a Core Web Vital on 2024-03-12. FID only measured the delay before the first event handler on the page started running, on the first interaction only. That turned out to be a weak proxy for perceived responsiveness — most pages had great FID because the first click usually happens after the main thread has quieted down, but users still experienced jank during the rest of the session. INP fixes this by measuring every interaction and reporting the worst one (or the 98th percentile on pages with more than 50 interactions).
Google's thresholds for INP, measured at the 75th percentile across real users:
- Good: under 200 ms
- Needs improvement: 200 ms – 500 ms
- Poor: over 500 ms
If Search Console flags your URL group as "Poor INP," at least 25% of visits recorded an interaction slower than 500 ms.
Why it matters
- Ranking signal. INP is part of Core Web Vitals, which Google has confirmed as a ranking factor. A poor INP puts a URL group into the failing bucket for the page experience signal.
- User experience. INP correlates directly with perceived responsiveness. Sites that improve INP from 500 ms to under 200 ms typically see measurable lifts in engagement, session depth, and conversion.
- Coverage is broader than FID. FID only looked at one interaction per session; INP looks at all of them. A site that was passing FID easily can fail INP outright because a checkout click or a filter toggle is slow — interactions that never showed up in FID's first-interaction-only model.
- Mobile-first metric. Low-end Android devices dominate the long tail of INP. Code that runs in 50 ms on a developer laptop can take 400 ms on a Moto G4, and for most sites mobile sessions dominate the CrUX sample.
How INP is measured
INP is the 98th percentile of interaction latency across the user session (or the single worst interaction on pages with fewer than 50 interactions). For each interaction, the browser measures three components:
- Input delay — time from the user event to the first event handler starting. Caused by main-thread congestion: a long task is already running when the user clicks.
- Processing time — time your event handler takes to execute. Caused by heavy synchronous work inside the handler itself.
- Presentation delay — time from the handler finishing to the next frame being painted. Caused by layout, style, paint, and composite work triggered by DOM mutations in the handler.
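These three components can be read directly from the Event Timing API in the browser. A minimal sketch, where breakdown is our helper name and the 40 ms threshold is an arbitrary choice:

```javascript
// Splits an Event Timing entry into INP's three components.
// Fields used: startTime (the input event), processingStart and
// processingEnd (handler execution), duration (input to next paint,
// rounded to 8 ms granularity by the spec).
function breakdown(entry) {
  return {
    inputDelay: entry.processingStart - entry.startTime,
    processingTime: entry.processingEnd - entry.processingStart,
    presentationDelay: entry.startTime + entry.duration - entry.processingEnd,
  };
}

// Browser-only wiring: log every interaction slower than 40 ms.
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver(list => {
    for (const entry of list.getEntries()) {
      console.log(entry.name, breakdown(entry));
    }
  }).observe({ type: 'event', durationThreshold: 40 });
}
```

Because duration is rounded to 8 ms, the presentation-delay figure is approximate, but it is enough to tell which of the three components dominates.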
INP is sampled in two places:
- Lab data (Lighthouse, Chrome DevTools Performance panel) — runs a synthetic interaction against a throttled browser. Useful for local debugging and pre-deploy checks. Not what Google ranks on.
- Field data (Chrome User Experience Report, or CrUX) — aggregates real-user INP across every Chrome user who visited your page over a rolling 28-day window. This is what PageSpeed Insights shows in the green "Discover what your real users are experiencing" section, and it is the authoritative number for Search Console and ranking.
A page can have a fine lab INP and a poor field INP because field data captures the long tail: slow networks, low-end devices, browser extensions, stale cache states, and interaction patterns Lighthouse's one synthetic click never exercises. Always trust field data when it disagrees with lab.
How to fix it
Break up long tasks
Any task longer than 50 ms blocks the main thread and pushes out input delay. Split them. The modern API is scheduler.yield(), which yields to the browser and lets it process pending user input before resuming your work:
async function processItems(items) {
  for (const item of items) {
    doWork(item);
    // Yield to the browser every iteration so pending input
    // can be handled before we continue.
    if (typeof scheduler !== 'undefined' && scheduler.yield) {
      await scheduler.yield();
    } else {
      await new Promise(r => setTimeout(r, 0));
    }
  }
}
The setTimeout(0) fallback works in every browser. For broader control, scheduler.postTask() lets you assign priorities (user-blocking, user-visible, background) so the scheduler can bump high-priority input ahead of your background work automatically.
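A hedged sketch of that pattern; the wrapper name postTask is ours, and the fallback ignores priority and simply defers to a macrotask (the real scheduler.postTask also supports delay and AbortSignal):

```javascript
// Assumption: in browsers without scheduler.postTask, FIFO
// setTimeout scheduling is an acceptable stand-in for priorities.
function postTask(fn, priority = 'user-visible') {
  if (typeof scheduler !== 'undefined' && scheduler.postTask) {
    return scheduler.postTask(fn, { priority });
  }
  // Fallback: run in a macrotask so input events can interleave.
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      try { resolve(fn()); } catch (err) { reject(err); }
    }, 0);
  });
}

// Usage: background work stays out of the way of input handling.
postTask(() => console.log('background chunk done'), 'background');
```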
Move work off the main thread with Web Workers
Anything that does not touch the DOM — JSON parsing of large payloads, cryptographic work, image resizing, search-index building — belongs in a Web Worker. Workers run on a separate thread, so they cannot block input handling:
// main.js
const worker = new Worker('/search-index.js');
worker.postMessage({ type: 'build', docs });
worker.onmessage = e => renderResults(e.data);
Libraries like Comlink hide the postMessage boilerplate and let you call worker functions as if they were local async calls. Moving a 400 ms JSON parse into a worker is often the single largest INP win on data-heavy pages.
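The worker side pairs with the main.js snippet above. A sketch of a hypothetical /search-index.js, where buildIndex and the message shape are our assumptions, not a standard:

```javascript
// search-index.js: runs off the main thread, so a slow build
// cannot add input delay to any interaction.
function buildIndex(docs) {
  const index = new Map();
  for (const doc of docs) {
    for (const word of doc.text.toLowerCase().split(/\W+/)) {
      if (!word) continue;
      if (!index.has(word)) index.set(word, []);
      index.get(word).push(doc.id);
    }
  }
  return index;
}

// Worker wiring (browser-only): answer the 'build' message that
// main.js posts. The Map is sent back as an array of entries.
if (typeof self !== 'undefined' && typeof window === 'undefined') {
  self.onmessage = e => {
    if (e.data.type === 'build') {
      self.postMessage([...buildIndex(e.data.docs)]);
    }
  };
}
```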
Defer non-critical JavaScript
Every byte of JavaScript your page downloads competes for the main thread. Ship less on the critical path. Add defer to script tags that are not needed for first render:
<script src="/analytics.js" defer></script>
<script src="/chat-widget.js" defer></script>
Use dynamic import() to load interactive widgets only when the user reveals them — a modal's code should not load until the modal opens. Audit your bundle with Chrome DevTools' Coverage tab: anything over 50% unused on the initial route is a candidate for splitting.
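A tiny caching helper keeps the deferred-load pattern tidy; lazy is our name, and /modal.js is a hypothetical module path:

```javascript
// Caches the dynamic import promise so the chunk is requested
// at most once, no matter how many times the trigger fires.
function lazy(loader) {
  let promise = null;
  return () => (promise ??= loader());
}

// Usage sketch: the modal's code loads on first open only.
// const loadModal = lazy(() => import('/modal.js'));
// openButton.addEventListener('click', async () => {
//   const { showModal } = await loadModal();
//   showModal();
// });
```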
Use requestIdleCallback for low-priority work
Analytics beacons, prefetch hints, and background sync all belong in idle time, not in the path of user input:
requestIdleCallback(() => {
  sendAnalyticsBatch();
  prefetchNextRoute();
}, { timeout: 2000 });
The callback runs only when the main thread is idle, so it cannot extend the input-delay component of INP. The timeout option guarantees the callback eventually fires even under sustained main-thread pressure. Safari has historically lacked requestIdleCallback; a small polyfill that falls back to setTimeout covers browsers without it.
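That fallback pattern can be sketched in a few lines. This is a rough approximation only: it defers to a macrotask and fakes an idle budget rather than detecting real idleness, and it ignores the timeout option:

```javascript
// Rough stand-in for requestIdleCallback where it is missing.
const ric = (typeof requestIdleCallback !== 'undefined')
  ? requestIdleCallback
  : cb => setTimeout(() => {
      const start = Date.now();
      cb({
        didTimeout: false,
        // Pretend roughly 50 ms of idle budget remains.
        timeRemaining: () => Math.max(0, 50 - (Date.now() - start)),
      });
    }, 1);

// Same call shape as the snippet above.
ric(deadline => {
  // Do chunked low-priority work while deadline.timeRemaining() > 0.
});
```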
Avoid layout thrashing in event handlers
Reading a layout-sensitive property (offsetWidth, getBoundingClientRect, scrollTop) after writing to the DOM forces the browser to run layout synchronously before returning the value. In a loop, this produces O(n²) reflows. Batch reads first, then writes:
// Bad: reads and writes interleaved, one reflow per iteration
items.forEach(el => {
  const h = el.offsetHeight;         // read
  el.style.height = (h * 2) + 'px';  // write — invalidates layout
});

// Good: all reads, then all writes
const heights = items.map(el => el.offsetHeight);
items.forEach((el, i) => {
  el.style.height = (heights[i] * 2) + 'px';
});
FastDOM and requestAnimationFrame-based scheduling both enforce this pattern. In click handlers, defer the write to the next frame with requestAnimationFrame() so the handler returns quickly and the paint is coalesced with the next tick.
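The core of that read/write batching idea fits in a short sketch (the names batch, measure, and mutate are ours, loosely modeled on FastDOM; setTimeout stands in where requestAnimationFrame is unavailable):

```javascript
// Queues layout reads and DOM writes separately, then flushes all
// reads before any write in one frame, so layout runs once.
const raf = (typeof requestAnimationFrame !== 'undefined')
  ? requestAnimationFrame
  : cb => setTimeout(cb, 16);

const batch = {
  reads: [],
  writes: [],
  scheduled: false,
  measure(fn) { this.reads.push(fn); this.schedule(); },
  mutate(fn) { this.writes.push(fn); this.schedule(); },
  schedule() {
    if (this.scheduled) return;
    this.scheduled = true;
    raf(() => {
      const { reads, writes } = this;
      this.reads = [];
      this.writes = [];
      this.scheduled = false;
      reads.forEach(fn => fn());   // all layout reads first...
      writes.forEach(fn => fn());  // ...then all invalidating writes
    });
  },
};
```

Calling batch.measure() and batch.mutate() from anywhere in a handler yields the same reads-then-writes ordering as the "good" example above, without having to restructure the surrounding code.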
Prefer CSS animations over JS for visual updates
CSS transitions and animations run on the compositor thread, not the main thread. A button's hover state, a menu slide-in, or a loading spinner should never be driven by setInterval mutating style.left. Use transform and opacity:
.menu {
  transform: translateX(-100%);
  transition: transform 200ms ease-out;
}
.menu.open {
  transform: translateX(0);
}
Toggling a class is cheap, and the transition itself runs on the compositor, so the interaction picks up almost no processing or presentation delay. Avoid animating properties that trigger layout (width, height, top, left) — use transform instead. Add will-change: transform sparingly on elements that are about to animate, and remove it when the animation ends.
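The JavaScript side of that pattern, sketched under the assumption that the element uses the .menu / .menu.open CSS above (openMenu is our name):

```javascript
// Sets will-change just before the transition starts and clears
// it when the transition ends, so the compositor layer is not
// held alive forever.
function openMenu(menu) {
  menu.style.willChange = 'transform';
  menu.addEventListener('transitionend', () => {
    menu.style.willChange = '';
  }, { once: true });
  menu.classList.add('open'); // the CSS transition does the rest
}
```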
How to test INP on a site
Three tools, in order of which you should trust:
- PageSpeed Insights. Paste your URL into pagespeed.web.dev. The top card shows CrUX field data (real users, 28-day window) — this is what Google ranks on. Below that, Lighthouse runs a synthetic audit. If the two disagree, the CrUX number wins.
- Chrome DevTools Performance panel. Open DevTools, switch to the Performance tab, record while interacting. The "Interactions" lane at the top of the flame chart highlights each interaction with its full latency broken into input delay, processing, and presentation. Use this to pinpoint which handler or long task is responsible.
- Real-user monitoring with the web-vitals library. Install web-vitals and wire onINP() to your analytics endpoint. This gives you INP per page, per device class, per session — granularity that CrUX's aggregated view cannot provide. For a new site that does not yet have CrUX data (you need roughly 10k visits in 28 days), this is the only way to see INP at all.
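Wiring the web-vitals library up looks roughly like this; onINP is the library's actual export, while the endpoint path, payload shape, and the toPayload helper are our choices:

```javascript
// Serializes the fields worth keeping from a web-vitals metric.
function toPayload(metric) {
  return JSON.stringify({
    name: metric.name,               // 'INP'
    value: Math.round(metric.value), // milliseconds
    rating: metric.rating,           // 'good' | 'needs-improvement' | 'poor'
    id: metric.id,                   // unique per page load
  });
}

// Browser wiring (assumes `npm install web-vitals`):
// import { onINP } from 'web-vitals';
// onINP(metric => navigator.sendBeacon('/vitals', toPayload(metric)));
```

sendBeacon is the right transport here because the INP value is typically reported when the page is being hidden or unloaded.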
BeaverCheck's full audit pulls INP directly from CrUX when Chrome has collected enough traffic for the origin, and surfaces it in the Performance tab alongside LCP and CLS. For origin-wide checks that sit alongside INP investigation, see the security headers tool and the DNS lookup tool.