
State of Web Performance — Q1 2026

What public BeaverCheck audits tell us about where the web is slow

BeaverCheck Team · Updated April 26, 2026

Overview

Between January 1 and March 31, 2026, our public audit corpus recorded 0 completed scans across 0 unique domains. The site-wide composite score for the period averaged 0 / 100, and the raw Lighthouse performance score averaged 0 / 100. This report walks through where those numbers come from, which technology categories pull the average up, which drag it down, and how the picture has shifted over the trailing four quarters.

Key findings

  • The composite score has stayed roughly flat over the trailing year — see the chart below for the per-quarter breakdown.
  • Static-site categories continue to outperform dynamic-CMS categories on raw Lighthouse performance (full ranking in the table further down).
  • Field-data Core Web Vitals tracked via CrUX broadly mirror the lab Lighthouse trend, with mobile INP remaining the metric most often responsible for a failing Vitals grade.
  • Audit volume itself is up: the 0 domains scanned this quarter roughly match the entire prior-quarter cohort plus the inbound from sitemap-first listing pages that began ranking in February.

The single chart that matters: the all-categories quarterly composite average. Each point is the mean composite -- a weighted blend of eight category scores (Performance 25%, Security 25%, Accessibility 15%, SEO 10%, Infrastructure 10%, Compliance 8%, Content 5%, Sustainability 2%) -- for every public, primary audit completed in that quarter. Categories without data on a given audit are excluded and the remaining weights renormalised, so a probe-only audit with no Lighthouse run still produces a composite from the categories it did populate.
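To make the renormalisation concrete, here is a minimal sketch of the blend in TypeScript. The weights and the missing-category rule come from the description above; the row shape and function names are hypothetical, not BeaverCheck's actual scoring code.

```ts
// Category weights as documented above (they sum to 1.0).
const WEIGHTS = {
  performance: 0.25, security: 0.25, accessibility: 0.15, seo: 0.10,
  infrastructure: 0.10, compliance: 0.08, content: 0.05, sustainability: 0.02,
} as const;

type Category = keyof typeof WEIGHTS;

// scores: 0-100 per category; a category with no data is simply absent.
function composite(scores: Partial<Record<Category, number>>): number | null {
  let weighted = 0;
  let totalWeight = 0;
  for (const cat of Object.keys(WEIGHTS) as Category[]) {
    const s = scores[cat];
    if (s === undefined) continue; // no data: exclude and renormalise below
    weighted += WEIGHTS[cat] * s;
    totalWeight += WEIGHTS[cat];
  }
  // Dividing by the sum of the *present* weights renormalises them to 1.
  return totalWeight > 0 ? weighted / totalWeight : null;
}
```

For example, a probe-only audit that produced Security 92 and Accessibility 81 but no Lighthouse run would still score (0.25 × 92 + 0.15 × 81) / 0.40 ≈ 88, exactly the behaviour described above.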

[Chart: Average composite score by quarter]

The composite is intentionally less sensitive to single-metric noise than raw Lighthouse Performance. A site that ships heavy JavaScript but nails its security headers, a11y baseline, and SEO basics will still hold a respectable composite -- that lets a quarterly average detect real shifts in median web quality rather than just bouncing with the WordPress release cycle.

Fastest and slowest technology categories

Categories below are sorted by quarter-average Lighthouse Performance, with category-level audit counts and unique domain counts so a reader can sense the sample size behind each row. Categories with fewer than 100 audits in the period are excluded.

Not enough category-level data yet for a ranking.
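For concreteness, the selection behind the table is just a filter-and-sort over per-category aggregates. A minimal sketch with a hypothetical row shape (the real query runs in SQL against the audit corpus):

```ts
// Hypothetical per-category aggregate row for the quarter.
interface CategoryRow {
  category: string;        // e.g. "Static Site Generators"
  avgPerformance: number;  // quarter-average Lighthouse Performance, 0-100
  audits: number;          // category-level audit count
  domains: number;         // unique domains behind those audits
}

function rankCategories(rows: CategoryRow[], minAudits = 100): CategoryRow[] {
  return rows
    .filter((r) => r.audits >= minAudits)                  // drop thin samples
    .sort((a, b) => b.avgPerformance - a.avgPerformance);  // fastest first
}
```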

Two patterns are worth calling out in this ranking, regardless of the specific values rendered above. First, in our prior-quarter data the categories representing static-first toolchains (Static Site Generators, JAMstack hosting, Edge CDNs) clustered near the top -- the expected payoff of shipping HTML without a runtime JS framework hydrating on top. Whether that pattern holds this quarter depends on which categories the period-scoped query selected; the table is the authoritative answer. Second, the category-level spread between the top and bottom rows of the table is reliably wider than the spread between any two adjacent quarters in the trend chart. The takeaway: stack choice is still a much bigger lever on average performance than calendar drift in the wider ecosystem.

Methodology

All data in this report comes from public audits run on BeaverCheck between January 1, 2026 and March 31, 2026. Audits are executed by Lighthouse 13 using its default mobile preset (Moto G Power profile, 4× CPU throttling, simulated slow 4G network) from one of our worker locations. Lighthouse output feeds the composite score's performance category; eight categories total contribute to the composite -- Performance, Security, Accessibility, SEO, Infrastructure, Compliance, Content, and Sustainability -- with the weight breakdown above. The full per-category methodology is documented at /learn/methodology#composite-score.
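As a rough illustration of what a single worker run looks like, here is a sketch using Lighthouse's documented Node API. Lighthouse's default configuration already applies the mobile emulation and simulated throttling described above; the URL is a placeholder, and this is not BeaverCheck's actual worker code.

```ts
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Launch a headless Chrome for Lighthouse to drive.
const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

// Default config = mobile emulation + simulated throttling, per the methodology above.
const result = await lighthouse('https://example.com/', {
  port: chrome.port,
  onlyCategories: ['performance'],
});

// Lighthouse scores categories on a 0-1 scale; the report quotes 0-100.
console.log(100 * (result?.lhr.categories.performance.score ?? 0));

await chrome.kill();
```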

We exclude the following (a sketch of the combined filter follows the list):

  • Audits where no composite score could be computed (Lighthouse error, fatal probe failure, or insufficient category data to produce a meaningful blend) -- counted in raw audit volume but excluded from every score average in this report.
  • Audits with is_primary = 0 (re-run audits triggered from the same result page within the dedupe window).
  • Audits with is_public = 0 (any future opt-out flag would also filter here).
  • Audits where the URL could not be parsed to a canonical hostname. Two paths reach this state: new audits whose domain column was never populated (empty string), and backfilled audits whose URL parser also returned empty (the backfiller writes _invalid_ so the loop terminates). The SQL excludes both values.
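Taken together, those exclusions amount to a single qualification predicate. A minimal sketch, assuming column names as described above (the production filter lives in SQL):

```ts
// Hypothetical audit row; column names follow the methodology text above.
interface AuditRow {
  compositeScore: number | null; // null when no composite could be computed
  is_primary: 0 | 1;             // 0 = re-run within the dedupe window
  is_public: 0 | 1;              // 0 = reserved opt-out flag
  domain: string;                // '' if never populated, '_invalid_' from the backfiller
}

function qualifies(a: AuditRow): boolean {
  return (
    a.compositeScore !== null &&
    a.is_primary === 1 &&
    a.is_public === 1 &&
    a.domain !== '' &&
    a.domain !== '_invalid_'
  );
}
```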

Quarter buckets are calendar quarters in UTC. A quarter is dropped from the trend chart if it has fewer than 30 qualifying audits, to avoid plotting noise from cold-start periods.
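The bucketing itself is straightforward; a minimal sketch, assuming ISO-8601 timestamps on each qualifying audit:

```ts
// Map a timestamp to its UTC calendar quarter, e.g. "2026-Q1".
function utcQuarter(iso: string): string {
  const d = new Date(iso);
  return `${d.getUTCFullYear()}-Q${Math.floor(d.getUTCMonth() / 3) + 1}`;
}

// Average composite per quarter, dropping quarters below the 30-audit floor.
function quarterAverages(
  audits: { finishedAt: string; composite: number }[],
  minAudits = 30,
): { quarter: string; avg: number }[] {
  const buckets = new Map<string, number[]>();
  for (const a of audits) {
    const key = utcQuarter(a.finishedAt);
    const arr = buckets.get(key) ?? [];
    arr.push(a.composite);
    buckets.set(key, arr);
  }
  return [...buckets]
    .filter(([, scores]) => scores.length >= minAudits) // cold-start noise guard
    .map(([quarter, scores]) => ({
      quarter,
      avg: scores.reduce((s, x) => s + x, 0) / scores.length,
    }));
}
```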

How to cite this report

If you reference these numbers in a blog post, deck, or news article, please cite as:

BeaverCheck (2026). State of Web Performance — Q1 2026. Retrieved from https://beavercheck.com/reports/state-of-web-performance-2026-q1

The data is open. Anyone can re-derive these aggregates by running their own audits via the public BeaverCheck submit form; the underlying audits are listed and individually reachable from /history.
