State of Small Business Websites (2026 Study): 96.9% Fail Core Web Vitals
Most performance articles repeat the same three stats everyone has already read. "Amazon found a 100ms delay costs 1% revenue." "53% of mobile users leave if a page takes over 3 seconds." You have seen them. They are old.
Last month we ran a different experiment. We took a real-world list of 292 North American B2B prospects, pointed our headless browser audit tool at every site, and measured what actually ships to production in 2026.
The short version: 96.9% of the sites we audited fail at least one Core Web Vital on mobile. Only 3.1% pass all three. And 100% of 191 sites with valid accessibility data failed the Link Labels check. Not most. Not 95%. Every single one.
The full report, dataset, and methodology are open source. This post is the developer angle: what is breaking, and why.
Report: State of Small Business Websites 2026
Key stats (copy-paste friendly)
| Metric | Sample | Result |
|---|---|---|
| Fail at least one Core Web Vital (mobile) | n=191 | 96.9% |
| Pass all three Core Web Vitals (mobile) | n=191 | 3.1% |
| Fail axe-core Link Labels rule | n=191 | 100% |
| "Poor" mobile LCP (>4.0s) | n=191 | 86.4% |
| "Poor" mobile FCP (>3.0s) | n=191 | 72.4% |
| Pass mobile CLS (<0.1) | n=191 | 85.5% |
| Total domains scanned | n=292 | — |
*Source: State of Small Business Websites 2026, Axion Deep Digital Research. April 2026.*
How we measured
This study runs on our production audit infrastructure. DeepAudit AI is the same pipeline we run against client sites daily. The dataset below is 292 consecutive scans pulled from a single week in April 2026. Because the pipeline runs continuously, future snapshots will replicate this analysis against a larger and eventually randomized sample, and the methodology stays frozen so the comparisons hold.
Each scan runs a real Chromium instance inside a Lambda (Puppeteer plus @sparticuz/chromium), renders the page the way Googlebot would, waits for hydration, and then measures against:
- Lighthouse mobile Core Web Vitals (LCP, FCP, CLS) using Google's published thresholds
- axe-core accessibility evaluations
- 60+ technical SEO checks (meta, headings, schema, canonicals, alt text)
- Open PageRank for domain authority
191 of the 292 sites returned valid Lighthouse mobile data. 191 returned valid axe-core data. 260 had valid Open PageRank. Every finding below is gated on the subset of sites where the relevant measurement succeeded, not averaged over missing data.
What this study does and does not prove
Before the numbers, the honest scope:
- This is a purposive sample, not a random one. The 292 domains came from a North American B2B prospect list, which skews toward construction, trades, and small professional services. It is not representative of the web as a whole.
- What it does represent is the category of business that small digital agencies actually get hired to fix. If you pitch to this market, the sites you convert will look like this sample.
- One snapshot per site. Results reflect the state of each site on the day it was scanned. A site could have shipped a perf fix the next morning.
- Lighthouse mobile scores vary. Google's Lighthouse CI documentation acknowledges a plus-or-minus 5 point variance across runs. None of the findings below hinge on a difference smaller than that. The 96.9% failure rate and the 100% Link Labels failure are robust to that noise.
The full dataset is open. Every row, every score, every axe-core violation is downloadable from the methodology page. If you disagree with a finding, you can reproduce it or refute it.
Finding 1: LCP is the silent killer
86.4% of sites post poor mobile LCP. Only 5.2% are good.
Largest Contentful Paint over 4.0 seconds is Google's "poor" threshold. Most of the sites we measured clock in between 4 and 8 seconds on mobile. Some over 10.
What is causing it, in order of frequency:
1. Hero images served as full-resolution PNG
One site we looked at ships a 2400x1600 PNG as its hero, served at 3.2 MB. Mobile user on 4G? That is 6 seconds of transfer on a good connection. The image renders last, so LCP measures the whole download.
The fix is trivial if you are on Next.js:

```jsx
import Image from 'next/image';

<Image
  src="/hero.jpg"
  alt="..."
  width={1200}
  height={600}
  priority
  sizes="(max-width: 768px) 100vw, 1200px"
  quality={75}
/>
```

`priority` injects a preload hint. `sizes` tells the browser to pick the right `srcset` variant for the viewport. `quality={75}` on JPEG looks identical to 90 in blind testing. Next.js serves WebP or AVIF automatically based on the `Accept` header.
If you are not on Next.js, the same rules apply: `<link rel="preload" as="image">`, a `<picture>` with AVIF first, and a `srcset` + `sizes` combo.
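A framework-free version might look like the sketch below. The file names, dimensions, and breakpoints are placeholders; adapt them to your asset pipeline:

```html
<!-- Start the hero fetch before layout: preload with responsive hints -->
<link rel="preload" as="image" href="/hero-1200.avif"
      imagesrcset="/hero-800.avif 800w, /hero-1200.avif 1200w"
      imagesizes="(max-width: 768px) 100vw, 1200px">

<!-- Modern formats first; the browser uses the first <source> it supports -->
<picture>
  <source type="image/avif"
          srcset="/hero-800.avif 800w, /hero-1200.avif 1200w"
          sizes="(max-width: 768px) 100vw, 1200px">
  <source type="image/webp"
          srcset="/hero-800.webp 800w, /hero-1200.webp 1200w"
          sizes="(max-width: 768px) 100vw, 1200px">
  <img src="/hero-1200.jpg" alt="..." width="1200" height="600"
       fetchpriority="high" decoding="async">
</picture>
```

The explicit `width` and `height` also protect CLS, and `fetchpriority="high"` nudges the browser's resource scheduler the same way Next.js's `priority` prop does.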
2. Render-blocking JavaScript in the head
Tag Manager. Hotjar. Intercom. Drift. One of every four sites we audited had three or more synchronous third-party scripts in the head. Each one blocks HTML parsing until it is fetched, parsed, and executed.
The `defer` attribute is older than most React developers. Use it:

```html
<script src="/vendor/gtm.js" defer></script>
```

Or on Next.js:

```jsx
import Script from 'next/script';

<Script src="https://www.googletagmanager.com/gtm.js" strategy="afterInteractive" />
<Script src="https://widget.intercom.io/widget.js" strategy="lazyOnload" />
```

`afterInteractive` loads after hydration. `lazyOnload` waits until the browser is idle. Neither blocks the LCP element.
3. Custom fonts with no font-display directive
Default font-loading behavior blocks text rendering for up to 3 seconds. If your LCP element contains text, it cannot paint until the font loads.
```css
@font-face {
  font-family: 'Inter';
  src: url('/fonts/inter.woff2') format('woff2');
  font-display: swap;
}
```

`swap` tells the browser to render with a system fallback immediately, then swap in the custom font once it arrives. On Next.js, `next/font` does this automatically. On anything else, it is three lines of CSS.
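For the Next.js path, a minimal `next/font` setup looks like this. The sketch assumes the App Router and a root layout file; the font self-hosts at build time, so there is no third-party request at all:

```jsx
// app/layout.js -- next/font self-hosts Inter and emits font-display: swap
import { Inter } from 'next/font/google';

const inter = Inter({ subsets: ['latin'], display: 'swap' });

export default function RootLayout({ children }) {
  return (
    <html lang="en" className={inter.className}>
      <body>{children}</body>
    </html>
  );
}
```

Self-hosting also removes the connection setup to fonts.googleapis.com, which is often as expensive as the font download itself.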
Finding 2: First Contentful Paint is worse than LCP, and nobody talks about it
72.4% of sites post poor mobile FCP.
FCP is the time to the first pixel of anything meaningful. It is the metric that correlates most directly with user perception of speed. If it takes 4 seconds to paint a single headline, the user has already decided the site is broken, regardless of how fast the rest of the page loads.
The causes overlap with LCP but there is one specific pattern we saw repeatedly: a client-side routing framework rehydrating the whole page before rendering anything.
Here is the anti-pattern. A site ships a React SPA that serves an empty shell HTML, boots up, fetches the route data, then renders. The browser sees nothing visible until JavaScript finishes executing. The fix is either:
- Server-side render the first paint (Next.js App Router, Remix, SvelteKit)
- Ship static HTML for the first paint, hydrate on demand (Astro model)
If you are shipping a React SPA as the entire site in 2026, you are fighting gravity. The frameworks that win on FCP are the ones that put real HTML on the first GET.
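The difference is visible in view-source. The second snippet's content is hypothetical, but the shape is what SSR and static-first frameworks produce:

```html
<!-- What a client-only SPA ships: nothing to paint until JS executes -->
<body>
  <div id="root"></div>
  <script src="/bundle.js"></script>
</body>

<!-- What an SSR/static build ships: real HTML on the first GET -->
<body>
  <div id="root">
    <h1>Northside Plumbing, 24/7 emergency service</h1>
    <!-- ...rest of the rendered page... -->
  </div>
  <script src="/bundle.js" defer></script>
</body>
```

In the second case the browser can paint the headline as soon as the HTML and CSS arrive; hydration happens afterward without blocking FCP.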
Finding 3: 100% of sites failed Link Labels
This is the number that should bother you most.
axe-core's Link Labels rule checks that every `<a>` element has a discernible accessible name. Every single one of the 191 sites we evaluated had at least one link that failed this rule. Every one.
What does a failing link look like? It looks like this:

```html
<a href="/products">
  <svg>...</svg>
</a>
```

A screen reader announces "link" and stops. There is no text content and no aria-label. The user has no idea where the link goes.
The fix is one attribute:

```html
<a href="/products" aria-label="Our products">
  <svg aria-hidden="true">...</svg>
</a>
```

Social icon sets are the most common offender: the Twitter, LinkedIn, and Instagram icons in every footer, almost all of them unlabeled. Across 191 sites, we did not find a single site that had labeled every one of its icon-only links.
This is not an edge case. This is the baseline.
The broader pattern: developers ship component-library icons without reading the a11y docs. React-icons, Font Awesome, Lucide, Heroicons, all of them render bare SVGs, and unless you explicitly add an `aria-label` or sibling text to the anchor wrapping them, the resulting link is unreadable.
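To make the failure concrete, here is a simplified model of the check. axe-core's real rule follows the full accessible-name computation (including `aria-labelledby` and alt text on child images); this sketch only looks at three sources, and the link objects are plain stand-ins rather than DOM nodes:

```javascript
// Simplified stand-in for axe-core's link-name rule: a link needs *some*
// discernible name. Real accessible-name computation covers more sources.
function hasDiscernibleName(link) {
  const candidates = [link.textContent, link.ariaLabel, link.title];
  return candidates.some((s) => typeof s === 'string' && s.trim().length > 0);
}

// Icon-only link with no label: fails.
console.log(hasDiscernibleName({ textContent: '', ariaLabel: '' })); // false
// Same link with an aria-label: passes.
console.log(hasDiscernibleName({ textContent: '', ariaLabel: 'Our products' })); // true
```

The whitespace trim matters: an anchor whose only content is an SVG often still has whitespace text nodes, which is why "it has text content" is not the same as "it has a name".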
If you ship a component with an icon-only link, wrap it:

```jsx
export function SocialLink({ href, label, children }) {
  return (
    <a href={href} aria-label={label}>
      <span aria-hidden="true">{children}</span>
    </a>
  );
}
```

Three lines of markup. You now pass axe-core.
Finding 4: CLS is the one metric most sites get right
85.5% of sites pass mobile CLS.
Cumulative Layout Shift is the one Core Web Vital developers have actually internalized. It helps that every modern framework bakes in `width` and `height` on images by default, and that `font-display: swap` cuts down on font-swap shifts. The sites that fail CLS are doing something unusual: late-loading ads, `position: sticky` headers without reserved space, or hero carousels that resize after the first slide loads.
If your site passes everything else but fails CLS, the cause is almost always one of those three.
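All three have the same shape of fix: reserve the space before the content arrives. A sketch, with hypothetical class names:

```css
/* Sticky header: an explicit height means pinning never shifts content */
.site-header {
  position: sticky;
  top: 0;
  height: 64px;
}

/* Late-loading ad slots and carousels: reserve the box up front */
.ad-slot,
.hero-carousel {
  aspect-ratio: 16 / 9;
  width: 100%;
}
```

`aspect-ratio` is the modern replacement for the old padding-top hack: the box holds its final size even while the content inside it is still loading.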
The takeaways for developers
If you read nothing else from this, read this:
- Priority-load one image and one font. Your LCP element and your primary typeface. Everything else lazy-loads. This single change moves most sites from "poor" LCP to "good."
- Defer or lazy-load every third-party script. If Tag Manager is synchronous in your head, you are paying for it on every paint. Move it.
- Run axe-core on every PR. A CI check that runs `@axe-core/cli` against your built site will catch unlabeled links before they ship. It takes 10 minutes to configure and it guards against the baseline failure.
- Test on mobile, not desktop. Every site we audited looked fine on desktop. 96.9% of them fail on mobile. Chrome DevTools offers mobile emulation plus CPU and network throttling. Use them on every build.
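As a concrete starting point, a GitHub Actions job for the axe-core check might look like the sketch below. The build and serve commands are assumptions about your stack; `--exit` makes `@axe-core/cli` return a non-zero code when violations are found, which fails the job:

```yaml
# .github/workflows/a11y.yml -- sketch; adjust build/serve to your stack
name: a11y
on: pull_request
jobs:
  axe:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci && npm run build
      - run: npx serve dist & npx wait-on http://localhost:3000
      - run: npx @axe-core/cli http://localhost:3000 --exit
```

Scanning a handful of representative routes (home, a product page, the contact form) catches most template-level violations without auditing the whole site on every PR.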
The dataset
The full 292-site dataset, methodology, and reproducibility notes are open. If you want to cite any of these findings in a post, article, or conference talk, you can pull the raw scan JSON and reproduce every chart.
Report: State of Small Business Websites 2026
Suggested citation:

Gutierrez, J.R. (2026). State of Small Business Websites 2026. Axion Deep Digital Research, n=292. axiondeepdigital.com/research/state-of-small-business-websites-2026

If you find an error, contact us and we will update the public methodology page with the correction and the reporter credit.
The takeaway, in one sentence
The small business web is not sophisticated enough to be slow. It is slow because nothing is prioritized, nothing is deferred, and nothing is labeled. Every one of the findings above is fixable in a single afternoon by a developer who knows the four keywords: priority, defer, font-display: swap, and aria-label.
That is what the 3.1% do.
Further reading
- Core Web Vitals thresholds (Google)
- Lighthouse scoring weights (Google)
- axe-core rule descriptions (Deque)
- The RAIL performance model (Google)
Want to see how your own site stacks up against the 292? Run a free DeepAudit. Takes 60 seconds, no signup.