Core Web Vitals are still the most concrete user-experience signal Google exposes to site owners. In 2024 INP replaced FID as the responsiveness metric, and two years later most of the sites we audit are still failing on it — usually without realizing it.
This is how we run a CWV audit in 2026, what the real-world failure modes look like, and the fixes that actually work.
## The three metrics, and the 2026 thresholds
Nothing about the thresholds has changed since the INP transition. To pass, your site needs the 75th percentile of real-user measurements to meet:
- Largest Contentful Paint (LCP): ≤ 2.5s
- Interaction to Next Paint (INP): ≤ 200ms
- Cumulative Layout Shift (CLS): ≤ 0.1
The 75th percentile matters. If one in four of your users has a bad experience, you fail — even if your median is fine.
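The pass/fail rule is easy to get wrong, so here is a minimal sketch of it: grade field samples against the thresholds above at the 75th percentile (nearest-rank method). The `THRESHOLDS` object and function names are our own.

```javascript
// Thresholds from the 2026 (unchanged since the INP transition) pass criteria.
const THRESHOLDS = { LCP: 2500, INP: 200, CLS: 0.1 }; // ms, ms, unitless

// Nearest-rank 75th percentile of a list of field samples.
function p75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.ceil(sorted.length * 0.75) - 1];
}

function passes(metric, samples) {
  return p75(samples) <= THRESHOLDS[metric];
}
```

With INP samples like `[80, 90, 100, 100, 110, 450, 460, 470]` the median is a healthy 105ms, but the p75 is 450ms — one in four users has a bad experience, and the page fails.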
## Field data beats lab data
A Lighthouse run in DevTools gives you lab numbers from one synthetic device on one network. Useful for debugging, useless for deciding whether you're passing.
Your source of truth is field data from real Chrome users via the Chrome UX Report (CrUX). Three ways to see it:
- Google Search Console → Core Web Vitals report. The authoritative view for SEO purposes. Groups your URLs by pattern and shows pass/needs improvement/fail counts.
- PageSpeed Insights. Shows 28-day CrUX data for a single URL alongside a Lighthouse lab run. We use this for spot-checking.
- web-vitals JavaScript library. Embed it, ship metrics to your analytics. Gives you real-time trend data for your own traffic rather than the 28-day lag.
If you're not looking at field data, you're guessing.
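The web-vitals route above takes a few lines to wire up. A sketch, with assumptions labeled: the `/api/vitals` endpoint is our own naming, while `name`, `value`, and `rating` are real fields on the library's `Metric` object.

```javascript
// Shape a web-vitals Metric into the payload we ship to analytics.
// The /api/vitals endpoint below is hypothetical — use your own collector.
function toPayload(metric, page) {
  return JSON.stringify({
    name: metric.name,     // "LCP", "INP", or "CLS"
    value: Math.round(metric.value * 1000) / 1000,
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
    page,
  });
}

function send(metric) {
  // sendBeacon survives page unloads — the right transport for vitals,
  // since INP and CLS often finalize as the user is leaving.
  navigator.sendBeacon('/api/vitals', toPayload(metric, location.pathname));
}

// Wire up only in the browser; during SSR there is nothing to measure.
if (typeof window !== 'undefined') {
  import('web-vitals').then(({ onLCP, onINP, onCLS }) => {
    onLCP(send);
    onINP(send);
    onCLS(send);
  });
}
```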
## Why most React and Next.js sites fail INP
We audit a lot of React-based sites. The pattern is almost always the same:
- LCP is fine. Next.js with static generation and a properly configured hero image passes LCP easily. 1.5–2.2s on mobile is typical.
- CLS is fine. Modern CSS and a fixed-dimension `next/image` eliminate most shift.
- INP is bad. 300–500ms at the 75th percentile is common. This is almost always hydration and event-handler cost in a heavy client component.
INP measures the time from user input (tap, click, keypress) to the next paint. Long INP is user-visible: the tap lands, nothing happens for half a second, then the UI reacts. It feels broken.
The usual culprits in order of frequency:
- A giant client component tree under a layout — a navigation, modal system, or analytics wrapper marked `"use client"` that rehydrates on every route change
- Synchronous work in event handlers — filtering a 2000-item list, running expensive formatting, building React state from a fetched response
- Long tasks from third-party scripts — ad tags, chat widgets, A/B testing snippets that block the main thread when they finally execute
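The third-party long tasks in that last bullet are easy to catch in the field with a `PerformanceObserver`. A sketch — the `"longtask"` entry type is Chrome-only, and the `report` callback is whatever your analytics expects:

```javascript
// Surface long tasks (main-thread blocks over 50ms) and attribute them,
// so you can see which third-party script is producing them.
function watchLongTasks(report) {
  if (typeof PerformanceObserver === 'undefined' ||
      !PerformanceObserver.supportedEntryTypes.includes('longtask')) {
    return null; // unsupported environment (Node, Safari, Firefox)
  }
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      report({
        duration: Math.round(entry.duration),
        // attribution names the container (frame/script context) when known
        source: entry.attribution?.[0]?.name ?? 'unknown',
      });
    }
  });
  observer.observe({ type: 'longtask', buffered: true });
  return observer;
}
```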
## The audit flow we actually run
For a new client site, this is the 60-minute pass:
- Pull CrUX data for the top 10 pages. Search Console → Performance → get the highest-traffic URLs, then run each one through PageSpeed Insights. Note which metric fails on which page.
- Profile the worst INP page in Chrome DevTools → Performance. Start recording, click the slowest-feeling element, stop recording. Look for long tasks in the flame chart. Anything over 50ms on the main thread after a user interaction is a candidate.
- Check the Network tab for render-blocking resources. Anything in the critical path that isn't `async` or `defer` needs a reason to exist.
- Run the coverage tool. DevTools → Coverage → reload. Unused CSS and JS jump out. On most WordPress sites we audit, 60–80% of the CSS is unused on the page being rendered.
- Measure twice. Throttle to 4x CPU and Slow 4G, then repeat. If the numbers are wildly different from field data, your CrUX sample is probably dominated by a different user segment than you think.
## The fixes that move the needle
Listed in order of return on effort:
- Move work off the main thread. If you're running expensive JavaScript in an event handler, wrap it in `setTimeout(..., 0)` to yield to the browser, or better, use `requestIdleCallback` or a web worker. React 18+ lets you split work with `useTransition` for non-urgent updates.
- Remove the third-party scripts you don't need. Almost every site we audit has at least one script (often chat, always analytics) that could be deferred or removed. A good rule: if a script costs you more than 10% on INP, it has to justify itself.
- Reduce the client component tree. In Next.js App Router, anything that can be a server component should be. The less JavaScript shipped, the less hydration cost.
- Precompute, don't postcompute. Format dates, sort lists, and resolve relationships at build or request time on the server, not in client render. An MDX-powered blog like this one does zero client-side work to render a post.
- Lazy-load below the fold. Hero image eager, everything else `loading="lazy"`. If you're using `next/image`, this is the default behavior.
- Fix your fonts. `font-display: swap` plus preloading the primary font file eliminates most font-related LCP regression. `next/font` does this correctly out of the box.
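The first fix above — yielding mid-work — can be sketched as a chunked filter. Chunk size is a tuning knob, 500 is just a starting point; `scheduler.yield()` is the purpose-built primitive in newer Chrome, with `setTimeout(0)` as the portable fallback:

```javascript
// Yield to the event loop so the browser can handle input and paint.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && scheduler.yield) {
    return scheduler.yield(); // newer Chrome
  }
  return new Promise((resolve) => setTimeout(resolve, 0)); // everywhere else
}

// Filter a large list in chunks instead of one long blocking task.
async function filterInChunks(items, predicate, chunkSize = 500) {
  const out = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    const end = Math.min(i + chunkSize, items.length);
    for (let j = i; j < end; j++) {
      if (predicate(items[j])) out.push(items[j]);
    }
    if (end < items.length) await yieldToMain(); // let a paint happen
  }
  return out;
}
```

Instead of one 400ms task after the keypress, the browser gets four ~100ms tasks with paint opportunities between them — the interaction's next paint lands after the first chunk, not the last.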
## What not to bother with
- Micro-optimizing CSS selectors. The browser is fast. Spending hours on selector specificity for CWV is wasted time.
- Inlining critical CSS manually. Next.js and most modern frameworks handle this. Unless you're on a hand-rolled stack, skip it.
- Chasing 100 on Lighthouse. Lighthouse is a lab metric. Search Console is the metric that matters for SEO. A 92 desktop / 72 mobile Lighthouse score with field data in the green is a passing site.
## The monthly habit
CWV drifts. A plugin update, a new marketing tag, a reshuffled hero layout — any of these can silently degrade metrics. Once you're passing, the real discipline is monthly:
- Export the Core Web Vitals report from Search Console
- Compare top-page 75th percentile numbers month over month
- If anything regressed by more than 10%, investigate before it becomes a ranking issue
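The month-over-month comparison is a few lines once you have the p75s. A sketch — the `{ LCP, INP, CLS }` input shape and the function name are our own:

```javascript
// Flag any metric whose p75 regressed by more than 10% month over month.
// Higher is worse for all three CWV metrics, so only increases are flagged.
function regressions(prevP75, currP75, threshold = 0.10) {
  const flagged = [];
  for (const [metric, prev] of Object.entries(prevP75)) {
    const curr = currP75[metric];
    if (curr !== undefined && prev > 0 && (curr - prev) / prev > threshold) {
      flagged.push(metric);
    }
  }
  return flagged;
}
```

For example, `regressions({ LCP: 2000, INP: 180, CLS: 0.05 }, { LCP: 2090, INP: 240, CLS: 0.05 })` flags only `INP`: LCP moved +4.5%, INP +33%.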
We build this into our retainer engagements. For sites we don't manage, a monthly reminder and 30 minutes of review is usually enough.
If you want us to audit your site against the 2026 thresholds, get in touch — first pass is free.
