Google Core Web Vitals Explained: The Complete Guide

What the vitals metrics actually measure, how Google uses them as a ranking signal, and the practical steps to move your scores from poor to good.

By Josh Willett · Updated March 2026 · 12 min read

Core Web Vitals are the standardised metrics Google uses to measure real-world user experience. They feed into Google's ranking systems and tell you exactly what your visitors are experiencing. This guide covers what the metrics measure, how to diagnose problems, and the specific fixes that make the biggest difference.

What are Core Web Vitals?

Core Web Vitals are a set of standardised metrics Google uses to measure the real-world user experience of a webpage. Introduced in 2020, they focus on three dimensions of web performance: loading speed, interactivity, and visual stability. Google uses them as a ranking factor and publishes the thresholds that define a "good" user experience.

Before Core Web Vitals existed, web performance was a somewhat subjective conversation. Page speed tools gave different results depending on how you tested, and there was no industry standard for what "fast enough" actually meant. Google addressed that by defining specific vitals metrics with clear pass/fail thresholds tied to the 75th percentile of real users.

The key word is "real user." Core Web Vitals are collected from the Chrome User Experience Report (CrUX), which aggregates anonymous performance data from Chrome users who opt in to sharing usage statistics. This means your scores reflect what visitors to your site actually experience, not a synthetic test in a controlled environment.

The metrics are designed to be stable: Google has committed that the stable Core Web Vitals metrics will change no more than once per year. This gives site owners time to act on the data rather than constantly chasing moving targets.

The three Core Web Vitals metrics

The three current Core Web Vitals are Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS). Each measures a distinct aspect of user experience, and each has defined thresholds for good, needs improvement, and poor performance.

Metric 1: Largest Contentful Paint (LCP) - Loading

LCP measures the time from when a page starts loading to when the largest visible element in the viewport finishes rendering. That element is typically a hero image, a large text block, or a video thumbnail.

The benchmark is that LCP should happen within 2.5 seconds for a good score. Between 2.5 and 4 seconds needs improvement. Above 4 seconds is poor.

Metric 2: Interaction to Next Paint (INP) - Interactivity

INP replaced First Input Delay (FID) in March 2024. Where FID only measured the first interaction on a page, INP captures the responsiveness of all interactions throughout the page lifecycle. It records the time between a user interaction (click, tap, or key press) and the moment the browser paints the next frame.

A good INP score is under 200 milliseconds. Between 200 and 500ms needs improvement. Above 500ms is poor.

Metric 3: Cumulative Layout Shift (CLS) - Visual Stability

CLS measures how much page content moves around unexpectedly while loading. If an image loads and pushes text down, causing you to misclick a button, that is a layout shift. CLS is scored as a unitless number representing the total impact of unexpected layout shifts during the page's lifespan.

A good CLS score is 0.1 or below. Between 0.1 and 0.25 needs improvement. Above 0.25 is poor.

Metric | Good     | Needs improvement | Poor
-------|----------|-------------------|---------
LCP    | ≤ 2.5s   | 2.5s - 4s         | > 4s
INP    | ≤ 200ms  | 200 - 500ms       | > 500ms
CLS    | ≤ 0.1    | 0.1 - 0.25        | > 0.25

To pass the Core Web Vitals assessment at page level, you need 75% of real user visits to hit the "good" threshold on all three metrics simultaneously. This 75th percentile measurement is important because it means you need to fix performance for the majority of users, not just the fastest connections.

Why the 75th percentile matters

Developers often test on fast machines with fast broadband. Real users include people on mid-range Android phones with 4G connections in areas with variable signal, older laptops running multiple browser tabs, and mobile devices throttled by low battery mode. The 75th percentile requirement forces you to optimise for the range of conditions your actual audience uses, not the ideal conditions you test under.
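As a toy illustration of the percentile idea, here is a nearest-rank p75 check over a batch of LCP samples. This is a simplified sketch, not the exact aggregation CrUX performs, and the sample values are invented:

```javascript
// Nearest-rank percentile over a batch of LCP samples (ms).
// Simplified sketch; CrUX's real aggregation differs in detail.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

const lcpSamples = [1800, 2100, 2300, 2600, 4200, 1900, 2200, 2400];
const p75 = percentile(lcpSamples, 75);

// With these samples p75 is 2400 ms: most visits are fast enough,
// and the one slow 4200 ms visit does not fail the page on its own.
console.log(`p75 LCP: ${p75} ms -> ${p75 <= 2500 ? "good" : "not good"}`);
```

Note that a single very slow visit does not fail the assessment; it is the experience at the 75th percentile that has to clear the threshold.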

Core Web Vitals and SEO

Google incorporated Core Web Vitals into its ranking systems as part of the Page Experience update, which rolled out for mobile search in 2021 and reached desktop in early 2022. They are one of several page experience signals, alongside HTTPS, mobile-friendliness, and the absence of intrusive interstitials. Their direct weight in rankings is modest but real, and the indirect effects are significant.

The honest answer about how much Core Web Vitals affect rankings is that the direct weighting is relatively small compared to content quality and backlinks. Google has said it uses Core Web Vitals as a tiebreaker when two pages are otherwise comparable. In practice, that means a technically excellent competitor can outrank you on user experience signals even if your content is strong.

The more important effect is indirect. Poor web performance hurts engagement metrics: a slow-loading page has a higher bounce rate, and users who bounce quickly send negative signals that influence how Google interprets the value of your pages over time. A site that frustrates users on mobile will convert less and gradually lose visibility to one that delivers a smooth experience.

For competitive commercial terms, I always recommend treating Core Web Vitals as a basic hygiene requirement rather than a significant ranking lever. Fix them because poor scores lose you visitors who could convert, not because a 0.1 improvement in CLS will suddenly catapult you to position one.

Core Web Vitals in Google Search Console

Google Search Console has a dedicated Core Web Vitals report that groups your pages into good, needs improvement, and poor buckets based on CrUX data. The report shows you which URL groups are failing and which metric is causing the failure. This is typically the right starting point for any Core Web Vitals investigation because it shows you real user data at scale rather than a single synthetic test.

The Search Console report groups similar URLs together, so you will see groups like "Product pages" or "Blog posts" rather than individual URLs. To diagnose specific pages, you can then use PageSpeed Insights on individual URLs and look at the field data (real user data) alongside the lab data from Lighthouse.

Core Web Vitals are a ranking tiebreaker, not a silver bullet

The direct ranking weight of Core Web Vitals is modest compared to content quality and backlinks. But when two pages are otherwise comparable, the one with better user experience metrics wins. The indirect effects on bounce rate, engagement, and conversion are often more significant than the direct ranking impact.

Real user data versus lab data

Real user data, called field data, comes from actual Chrome users visiting your site and is what Google uses for rankings. Lab data comes from automated tools like Lighthouse that simulate a page load under controlled conditions. The two often diverge significantly, and the difference matters for how you prioritise fixes.

Field data is authoritative but has limitations. You need enough traffic for the CrUX dataset to have sufficient real user measurements to report on. Low-traffic pages often show no page-level data at all, in which case PageSpeed Insights falls back to origin-level data and Search Console cannot report on those URLs individually.

Lab data is available for any URL regardless of traffic, but it simulates one specific device and connection speed. Lighthouse in Chrome DevTools defaults to a mid-range Android device on a 4G connection. This is more representative than testing on your own machine, but it still does not capture the full range of conditions your users experience.

What to do when field data is not available

For low-traffic pages, use the origin-level CrUX data as a proxy. If your domain's overall field data shows good scores, it is reasonable to assume individual pages are likely performing similarly, provided they share the same architecture and asset loading patterns. For pages with unusual content, like very image-heavy pages or those with complex JavaScript components, test with Lighthouse directly and aim to pass the lab thresholds with enough margin to account for real-world variation.

Chrome's web-vitals JavaScript library and the Web Vitals extension let you collect real user data yourself. With the Web Vitals extension installed in Chrome, you see the metrics for your own visit alongside CrUX field data on any page, which is useful for debugging specific user experience issues that Lighthouse does not surface.
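As a sketch of self-collected field data with the web-vitals library (the CDN URL and the /analytics endpoint are placeholders; in production you would typically bundle the library and report to your own analytics backend):

```html
<!-- Collect Core Web Vitals from your own visitors.
     CDN URL and /analytics endpoint are placeholders. -->
<script type="module">
  import { onLCP, onINP, onCLS } from "https://unpkg.com/web-vitals@4?module";

  // Each callback fires when its metric finalises for this page view.
  function sendToAnalytics(metric) {
    navigator.sendBeacon("/analytics", JSON.stringify({
      name: metric.name,     // "LCP", "INP" or "CLS"
      value: metric.value,   // ms for LCP/INP, unitless for CLS
      rating: metric.rating  // "good" | "needs-improvement" | "poor"
    }));
  }

  onLCP(sendToAnalytics);
  onINP(sendToAnalytics);
  onCLS(sendToAnalytics);
</script>
```

This gives you page-level field data even for URLs too low-traffic to appear in CrUX.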

How to measure Core Web Vitals

The main tools for measuring Core Web Vitals are Google Search Console, PageSpeed Insights, Chrome DevTools with Lighthouse, and the CrUX API. Each serves a different purpose, and a thorough web performance investigation typically uses at least two of them together.

Google Search Console

The Core Web Vitals report in Search Console is the best place to get a high-level view of how your whole site is performing on real user data. It identifies the URL groups where most of your poor and needs-improvement pages sit, which lets you prioritise the fixes with the highest impact. The data is aggregated over a rolling 28-day window, so it reflects your site's recent performance rather than its live state.

PageSpeed Insights

PageSpeed Insights (PSI) combines field data from CrUX with lab data from Lighthouse for individual URLs. When you paste a URL into PSI, you see both sets of data side by side. The field data section shows how real users are experiencing that specific URL or, if there is insufficient page-level data, the origin. The lab section shows Lighthouse results from a single synthetic test.

Chrome DevTools and Lighthouse

For deeper diagnosis, Chrome DevTools gives you the Lighthouse tab, the Performance tab, and the Performance Insights panel. These are most useful once you have identified which metric is failing from field data and need to understand the specific element or resource causing the problem. The Performance panel shows a waterfall of resource loading, main thread activity, and rendering events, which is invaluable for finding what is blocking LCP or causing layout shifts.

Monitoring and alerting

One-off measurements are useful for diagnosis, but ongoing monitoring lets you catch regressions before they affect rankings. Tools like SpeedCurve, Calibre, and Ahrefs Site Audit can run scheduled Lighthouse tests and alert you when scores degrade. Google also provides the CrUX API and a BigQuery dataset for more advanced monitoring of real user data over time.


Improving LCP

Poor LCP is most often caused by slow image loading, slow server response times, or render-blocking resources that delay the browser from starting to render the page. The most impactful fixes are reducing the time to first byte, optimising the LCP element itself, and eliminating resource loading chains that push LCP back.

Identify the LCP element

The first step is finding out what element is currently acting as the LCP element. PageSpeed Insights and Lighthouse both highlight this in their output. On most marketing and editorial pages, it is the hero image. On some pages, it is a large text heading. The fix depends on what the element is.

Preload the LCP image

If the LCP element is an image, add a preload link in the head of the HTML so the browser fetches it immediately rather than waiting for it to be discovered in the CSS or body. The link tag looks like this: <link rel="preload" as="image" href="/hero.webp">. For responsive images, use the imagesrcset attribute on the preload tag.
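For example, a preload for a responsive hero image might look like this (file paths and widths are illustrative):

```html
<!-- Fetch the hero image immediately, before the parser discovers it.
     fetchpriority="high" additionally bumps it in the request queue. -->
<link
  rel="preload"
  as="image"
  href="/hero.webp"
  imagesrcset="/hero-480.webp 480w, /hero-960.webp 960w, /hero-1600.webp 1600w"
  imagesizes="100vw"
  fetchpriority="high">
```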

Serve images in modern formats

WebP and AVIF images are significantly smaller than JPEG and PNG for equivalent visual quality. Smaller file sizes mean faster loading, which directly improves LCP. Use the picture element or srcset to serve WebP to browsers that support it, with a JPEG fallback for older browsers. AVIF offers even better compression than WebP but has slightly lower browser support in older versions.
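A sketch of that fallback chain (file names illustrative): the browser picks the first source type it supports and otherwise falls back to the img element.

```html
<!-- AVIF where supported, then WebP, then JPEG for older browsers. -->
<picture>
  <source srcset="/hero.avif" type="image/avif">
  <source srcset="/hero.webp" type="image/webp">
  <img src="/hero.jpg" alt="Hero image" width="1600" height="900">
</picture>
```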

Reduce server response time

A slow time to first byte (TTFB) delays everything that happens after it, including LCP. If your server consistently takes over 600 milliseconds to respond, investigate server-side caching, database query optimisation, or moving to a faster hosting infrastructure. For static sites, a CDN like Cloudflare significantly reduces TTFB by serving pages from edge nodes close to the user.

Eliminate render-blocking resources

Stylesheets and synchronous scripts in the head of your HTML block the browser from rendering anything until they have loaded and parsed. Inline the small amount of CSS needed for the initial render and defer the rest. Add async or defer attributes to script tags that are not required for initial render. Removing render-blocking resources often produces the largest LCP improvement on older codebases.
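A sketch of these patterns together (file names illustrative; the media="print" technique loads a stylesheet without blocking render, then applies it to all media once downloaded):

```html
<!-- Critical above-the-fold CSS inlined so first render needs no fetch. -->
<style>/* critical above-the-fold rules here */</style>

<!-- Non-critical CSS: loads without blocking, applies once ready. -->
<link rel="stylesheet" href="/non-critical.css"
      media="print" onload="this.media='all'">

<!-- defer: download in parallel, execute after HTML parsing finishes. -->
<script src="/app.js" defer></script>
<!-- async: for independent scripts with no execution-order needs. -->
<script src="/analytics.js" async></script>
```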

Improving INP

Poor INP scores are almost always caused by heavy JavaScript execution on the main thread. When a user interacts with the page, the browser queues that interaction behind whatever work is already in progress on the main thread. If JavaScript tasks are long, the interaction response feels sluggish even if the page loaded quickly.

Break up long tasks

Any JavaScript task that runs for more than 50 milliseconds is considered a long task and risks blocking interactions. Use the Performance panel in Chrome DevTools to identify which scripts are generating long tasks. The fix is typically to split large synchronous tasks into smaller chunks using setTimeout or the scheduler.yield() API, which yields control back to the browser to process pending interactions between task chunks.
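As a sketch of the chunking pattern (processItems and the 50ms budget are illustrative; scheduler.yield() is used where the browser supports it, with a setTimeout fallback elsewhere):

```javascript
// Yield control back to the browser so queued interactions can run.
// Prefers scheduler.yield() where available, falls back to setTimeout.
function yieldToMain() {
  if (typeof scheduler !== "undefined" && scheduler.yield) {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large work queue in chunks that stay under the 50 ms
// long-task budget, yielding between chunks.
async function processItems(items, handleItem) {
  let deadline = Date.now() + 50;
  for (const item of items) {
    handleItem(item);
    if (Date.now() >= deadline) {
      await yieldToMain(); // pending clicks/taps get handled here
      deadline = Date.now() + 50;
    }
  }
}
```

The trade-off is slightly longer total processing time in exchange for the page staying responsive throughout.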

Reduce JavaScript bundle size

Excessive JavaScript is the root cause of poor INP on most sites. Audit your JavaScript dependencies and remove unused packages. Use code splitting to load only the JavaScript needed for the current page rather than a monolithic bundle. Tools like Bundlephobia show you the cost of individual npm packages before you add them to a project.
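A minimal code-splitting sketch using a dynamic import (the element IDs and the ./chart.js module are hypothetical):

```html
<script type="module">
  // Ship only the core bundle up front; pull the heavy charting
  // module in on demand when the user actually asks for it.
  document.querySelector("#show-chart").addEventListener("click", async () => {
    const { renderChart } = await import("./chart.js");
    renderChart(document.querySelector("#chart-container"));
  });
</script>
```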

Defer third-party scripts

Analytics tags, chat widgets, advertising scripts, and social media embeds all add JavaScript that runs on the main thread. Load them with the async or defer attribute, or delay them until after page load or first user interaction using a tag manager rule. For particularly heavy third-party scripts, consider whether the business value justifies the INP cost.
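As a sketch of interaction-triggered loading (the widget URL is a placeholder for whatever third-party script you are deferring):

```html
<script>
  // Delay a heavy third-party widget until the first interaction.
  let widgetLoaded = false;
  function loadChatWidget() {
    if (widgetLoaded) return; // only load once
    widgetLoaded = true;
    const s = document.createElement("script");
    s.src = "https://example.com/chat-widget.js"; // placeholder URL
    s.defer = true;
    document.head.appendChild(s);
  }
  ["pointerdown", "keydown", "scroll"].forEach((evt) =>
    addEventListener(evt, loadChatWidget, { once: true, passive: true })
  );
</script>
```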

Optimise event handlers

Interaction handlers that do too much synchronous work will block the next paint regardless of how fast the rest of your JavaScript is. Move expensive work out of click and input handlers into background threads using web workers, or defer it to the next idle period using requestIdleCallback. Keep event handlers focused on immediate UI updates and push heavier processing elsewhere.
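A sketch of the split between immediate feedback and deferred work (filterRows and renderResults are hypothetical application functions):

```html
<script>
  document.querySelector("#filter").addEventListener("input", (event) => {
    // 1. Cheap, visible feedback first so the next paint is fast.
    document.querySelector("#status").textContent = "Filtering…";

    // 2. Defer the expensive work to a separate task; the browser
    //    can paint and handle other interactions in between.
    setTimeout(() => {
      renderResults(filterRows(event.target.value));
      document.querySelector("#status").textContent = "";
    }, 0);
  });
</script>
```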

Improving CLS

CLS is caused by elements appearing or changing size after the initial render, which shifts content that the user can already see. The most common causes are images and videos without explicit dimensions, dynamically injected content above existing content, and web fonts that cause text to reflow as they load.

Set explicit width and height on images and videos

When the browser loads HTML, it needs to know how much space to reserve for images before they have loaded. If an image element has no width and height attributes, the browser allocates no space, then shifts content when the image loads. Adding explicit width and height attributes (or setting aspect-ratio in CSS) eliminates this shift. This is the single most common cause of poor CLS and the quickest fix.
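For example (dimensions illustrative): the attributes give the browser the aspect ratio before the file loads, and the CSS keeps the image responsive without reintroducing the shift.

```html
<!-- The browser reserves an 800x600 box immediately; max-width and
     height:auto scale it down responsively at the same ratio. -->
<img src="/product.jpg" alt="Product photo"
     width="800" height="600"
     style="max-width: 100%; height: auto;">
```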

Avoid inserting content above the fold dynamically

Cookie banners, notification bars, and promotion popups that appear above existing page content after load are a major source of layout shift. Reserve the space for these elements in the initial layout so that when the content loads, it fills a space that was already allocated rather than pushing everything down. If that is not possible, position them as overlays rather than in-flow content.
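A minimal sketch of reserving that space (the 48px height is illustrative and should match your banner's rendered height):

```html
<!-- Pre-allocated slot: when the promo bar is injected by script,
     it fills space the layout already reserved, so nothing shifts. -->
<div id="promo-bar" style="min-height: 48px;">
  <!-- banner injected here after load -->
</div>
```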

Preload fonts and use font-display: optional or swap

When a page uses a web font that has not been downloaded yet, the browser either renders invisible text (flash of invisible text, or FOIT) or renders fallback text that shifts when the web font loads (flash of unstyled text, or FOUT). Both can cause CLS. Preload critical fonts and use font-display: optional to prevent layout shifts by only using the font if it loads quickly enough, or font-display: swap combined with a closely matched fallback font stack to minimise reflow.
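As a sketch of both pieces together (the font file and family name are illustrative; note the crossorigin attribute, which font preloads require even for same-origin files):

```html
<link rel="preload" as="font" type="font/woff2"
      href="/fonts/inter.woff2" crossorigin>
<style>
  @font-face {
    font-family: "Inter";
    src: url("/fonts/inter.woff2") format("woff2");
    /* optional: use the web font only if it arrives almost
       immediately; otherwise keep the fallback, with no reflow. */
    font-display: optional;
  }
  body {
    /* A closely matched fallback stack minimises the size of any
       reflow if you choose font-display: swap instead. */
    font-family: "Inter", "Helvetica Neue", Arial, sans-serif;
  }
</style>
```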

Use CSS transforms for animations

Animations that change top, left, width, or height properties trigger layout recalculation and contribute to CLS. Animations using CSS transform and opacity run on the compositor thread without triggering layout, so they do not contribute to CLS. Audit any existing animations on your pages and replace layout-triggering properties with transforms where possible.
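For example, a slide-in animated with transform and opacity instead of left:

```html
<style>
  /* Compositor-friendly: transform and opacity animate without
     triggering layout, so this never contributes to CLS.
     (Animating `left` instead would force layout every frame.) */
  .slide-in {
    animation: slide 300ms ease-out;
  }
  @keyframes slide {
    from { transform: translateX(-100px); opacity: 0; }
    to   { transform: translateX(0);      opacity: 1; }
  }
</style>
```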

Key takeaways

- The three Core Web Vitals are LCP (loading, good ≤ 2.5s), INP (interactivity, good ≤ 200ms), and CLS (visual stability, good ≤ 0.1), assessed at the 75th percentile of real user visits.
- Field data from real Chrome users is what Google measures; use lab tools like Lighthouse for diagnosis, not as the final verdict.
- The direct ranking effect is a modest tiebreaker; the indirect effects on bounce rate, engagement, and conversion usually matter more.
- Start with the Search Console report to find failing URL groups, then diagnose individual pages with PageSpeed Insights and Chrome DevTools.

Frequently asked questions

What are web core vitals?

Web core vitals (more formally, Core Web Vitals) are three standardised performance metrics defined by Google to measure real-world user experience on web pages: Largest Contentful Paint (LCP), which measures loading performance; Interaction to Next Paint (INP), which measures responsiveness; and Cumulative Layout Shift (CLS), which measures visual stability. Google uses them as a ranking signal as part of its Page Experience assessment.

What are LCP and CLS?

LCP (Largest Contentful Paint) measures how long it takes for the largest visible element on a page to finish loading. A good LCP score is 2.5 seconds or less. CLS (Cumulative Layout Shift) measures how much page content moves unexpectedly while loading, which creates a disorienting experience for users. A good CLS score is 0.1 or below. Both are part of the three Core Web Vitals metrics that Google uses to assess page experience quality.

How do I improve my Core Web Vitals scores?

To improve Core Web Vitals, start by identifying which metric is failing using Google Search Console or PageSpeed Insights. For LCP, the most impactful fixes are preloading the LCP image, serving images in WebP format, and reducing server response time. For INP, focus on reducing JavaScript execution time by breaking up long tasks and deferring third-party scripts. For CLS, set explicit dimensions on all images and videos, and avoid inserting content above the fold after load. Always prioritise fixes using real user field data rather than lab scores alone.

What causes poor Core Web Vitals scores?

The most common causes of poor Core Web Vitals scores are: large unoptimised images that slow LCP, heavy JavaScript bundles that cause long main thread tasks hurting INP, images and embeds without explicit dimensions that cause layout shifts hurting CLS, slow server response times that delay the start of page loading, render-blocking CSS and JavaScript that prevent the browser from painting the page early, and third-party scripts from analytics, advertising, or chat tools that add significant JavaScript overhead.

Want expert SEO help?

Get senior SEO expertise for your business.

Work with me