Roughly 28% of small business websites feature at least one broken link. Not buried in a blog post from 2019, either; right on the homepage, the page that’s supposed to make the first impression!
This statistic gets funnier (and more painful) when you consider that most of these sites were “finished” and “approved” before going live. Someone signed off, someone celebrated, someone posted about the launch on LinkedIn. Yet nobody clicked every link on the homepage before any of it happened.
This is what happens when web design quality assurance gets skipped or shortcut: cross-browser testing, responsive breakpoint validation, accessibility compliance, and visual regression testing all get compressed into a five-minute scroll-through on one laptop screen, and the site launches carrying dozens of invisible defects into its most important traffic window.
The gap between a website that looks done and one that actually is done is wider than most people imagine. Here’s what lives in that gap.
What Does “Done” Look Like Before QA?
The design is gorgeous; the homepage loads on the designer’s 27-inch Retina display exactly as in the Figma mockup, and the client sees it, loves it, and asks how quickly it can go live.
At this stage, the site has been viewed on exactly one browser, one screen size, and one internet connection speed. Everything appears flawless because the testing conditions are flawless. This is the “before” snapshot, and it’s dangerously seductive.
The contact form works when you fill it out correctly. Nobody has tried submitting it blank, pasting a novel into the phone number field, or testing what happens when Chrome’s autofill populates the email field with a street address.
The mobile version “looks fine” because someone pulled it up on their own phone: one device, one carrier, one OS version. Meanwhile, the site’s actual audience will visit on more than 40 distinct combinations of screen width and browser in its first month.
What Breaks When Nobody’s Looking?
The list of things that can go wrong between “looks great on my screen” and “works for every visitor” is long enough to be its own article. Here’s a sampler:
- CSS rendering differences. A flexbox layout that behaves perfectly in Chrome can collapse into a stacked mess in Safari or Firefox. Font rendering varies by browser and OS, causing line breaks and paragraph spacing to shift unpredictably.
- Responsive breakpoints that leave gaps. A site designed for desktop and mobile often skips the messy middle: tablets, small laptops, and the increasingly common phone-in-landscape orientation. Content overlaps, buttons shrink below tap-target minimums, and navigation menus break.
- JavaScript conflicts that surface only under specific conditions. A smooth scroll animation that fires correctly on first load but breaks after a user navigates back, or a pop-up that works everywhere except iOS Safari, which handles viewport height differently (see the sketch after this list).
- Load time variation across real-world connections. The site loads in 1.8 seconds on office fiber. On a 4G mobile connection in a parking lot (where a surprising number of purchase decisions happen), it takes 7 seconds.
- Accessibility failures that are invisible to sighted, mouse-using testers. Missing alt text, unlabeled form fields, color contrast below WCAG minimums, and keyboard navigation paths that trap users inside dropdown menus.
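To make that iOS Safari quirk concrete: Safari’s 100vh includes the space behind the collapsing toolbar, so a “full height” overlay can overflow the visible screen. Modern CSS has the dvh unit for exactly this; where broader support is needed, the widely used fallback is a JS-set custom property. A minimal sketch (the `--vh` variable name is just a convention, not anything specific to this article’s stack):

```ts
// Sketch: expose the *visible* viewport height as a CSS custom property,
// because iOS Safari's 100vh includes the area behind its collapsing toolbar.
function setViewportUnit(): void {
  const vh = window.innerHeight * 0.01; // 1% of the visible viewport
  document.documentElement.style.setProperty('--vh', `${vh}px`);
}

setViewportUnit();
window.addEventListener('resize', setViewportUnit);

// CSS usage: height: calc(var(--vh, 1vh) * 100);
// Or, where browser support allows, skip the JS entirely: height: 100dvh;
```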
Every item on this list is preventable, so long as the team doesn’t skip structured QA.
What Does “Done” Look Like After QA?
The “after” version of the same site is visually identical to the “before” version on the designer’s screen. The difference is entirely invisible, which is exactly the point.
After QA, the contact form has been submitted with valid data, invalid data, empty fields, special characters, and absurdly long inputs. It rejects what it should reject, accepts what it should accept, and sends confirmation emails that arrive formatted correctly across Gmail, Outlook, and Apple Mail.
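Here’s a sketch of what that form pass can look like as an automated suite. Playwright is one common choice of harness; the URL, selectors, and error class below are hypothetical placeholders:

```ts
import { test, expect } from '@playwright/test';

// Hypothetical edge cases for a contact form; extend with special
// characters, script injection attempts, and autofill-shaped input.
const badInputs = [
  { name: 'empty submission', email: '', message: '' },
  { name: 'malformed email', email: 'not-an-email', message: 'Hello' },
  { name: 'oversized message', email: 'a@b.co', message: 'x'.repeat(50_000) },
];

for (const c of badInputs) {
  test(`contact form rejects ${c.name}`, async ({ page }) => {
    await page.goto('https://example.com/contact'); // placeholder URL
    await page.fill('#email', c.email);             // placeholder selectors
    await page.fill('#message', c.message);
    await page.click('button[type="submit"]');
    // Expect a visible validation error, not a silent success state.
    await expect(page.locator('.form-error')).toBeVisible();
  });
}
```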
The site has been loaded on a minimum of 12 browser-device combinations: Chrome, Safari, Firefox, and Edge on desktop; Chrome and Safari on iOS; Chrome on Android; and at least two tablet viewports. Layout, typography, image rendering, and interactive elements have been verified at each one.
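One way to encode a matrix like that is in the test runner’s config; the sketch below uses Playwright’s built-in device registry. One caveat worth flagging: Playwright emulates mobile viewports with desktop engines (the iPhone entries run WebKit, not real iOS Safari), so a pass on physical devices or a service like BrowserStack still belongs in the process.

```ts
import { defineConfig, devices } from '@playwright/test';

// Sketch of a browser/device matrix; names come from Playwright's registry.
export default defineConfig({
  projects: [
    { name: 'desktop-chrome',   use: { ...devices['Desktop Chrome'] } },
    { name: 'desktop-firefox',  use: { ...devices['Desktop Firefox'] } },
    { name: 'desktop-safari',   use: { ...devices['Desktop Safari'] } },
    { name: 'desktop-edge',     use: { ...devices['Desktop Edge'], channel: 'msedge' } },
    { name: 'iphone',           use: { ...devices['iPhone 13'] } },
    { name: 'android',          use: { ...devices['Pixel 5'] } },
    { name: 'tablet-portrait',  use: { ...devices['iPad (gen 7)'] } },
    { name: 'tablet-landscape', use: { ...devices['iPad (gen 7) landscape'] } },
  ],
});
```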
Page speed has been benchmarked on simulated 3G, 4G, and broadband connections, with images above the fold optimized and lazy-loaded. Third-party scripts have been audited for render-blocking behavior. Core Web Vitals scores are green across all three metrics before the site touches a live server.
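That performance gate can be scripted rather than eyeballed. A minimal sketch using Lighthouse’s Node API, with a placeholder URL and example thresholds (Lighthouse applies simulated mobile throttling by default):

```ts
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const result = await lighthouse('https://example.com', {
  port: chrome.port,
  onlyCategories: ['performance'],
});
if (!result) throw new Error('Lighthouse did not return a result');

// Lab proxies for Core Web Vitals (INP itself requires field data).
const lcp = result.lhr.audits['largest-contentful-paint'].numericValue ?? Infinity; // ms
const cls = result.lhr.audits['cumulative-layout-shift'].numericValue ?? Infinity;

console.log(`LCP: ${lcp} ms, CLS: ${cls}`);
if (lcp > 2500 || cls > 0.1) {
  throw new Error('Core Web Vitals thresholds not met; block the launch.');
}

await chrome.kill();
```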
Accessibility has been tested with automated tools (Axe, Lighthouse) and manual keyboard navigation. Screen reader compatibility has been verified on critical user paths: homepage to contact form, homepage to service page, homepage to checkout.
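The automated slice of that accessibility pass is easy to wire into the same suite. A minimal sketch with axe-core’s Playwright integration (URL is a placeholder):

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('homepage has no detectable WCAG A/AA violations', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa']) // WCAG 2.0/2.1 rules
    .analyze();
  expect(results.violations).toEqual([]);
});
```

Automated scanners catch only a subset of WCAG failures, which is exactly why the manual keyboard and screen reader passes described above still matter.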
Why Does QA Get Cut First?
Budget and timeline. Every single time.
Design revision rounds ate an extra week, development ran over because a feature was more complex than scoped, and content arrived late. By the time the site is built, the launch deadline is three days away, and the budget is spent. QA gets compressed from a structured two-week process into an afternoon of frantic clicking.
This pattern is so predictable that it practically qualifies as an industry tradition. And it produces the same result every time: a site that launches with a portfolio of undetected issues that surface over the following weeks through bug reports, customer complaints, and analytics anomalies.
Industry estimates put the cost of fixing a bug after launch at 4 to 5 times the cost of catching it during QA, and coordination overhead widens the gap further. A layout issue found in staging takes a developer 20 minutes to fix. The same issue found by a client’s customer, reported via email, triaged by a project manager, assigned to a developer, fixed, tested, and redeployed takes half a day of billable coordination.
What Does a Real QA Process Actually Include?
A structured QA process runs parallel to development, not after it. Testing begins the moment the first template is coded and continues through every phase.
Visual QA compares the live build against approved design files, pixel by pixel. Spacing, font sizes, color values, image cropping, and alignment are verified on every major template. This catches the drift that naturally occurs when a designer’s vision passes through a developer’s interpretation.
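Pixel-level comparison is usually automated with snapshot testing. A minimal sketch, again assuming Playwright and a placeholder URL; the first run records the approved baseline, and every later run fails on any drift beyond the tolerance:

```ts
import { test, expect } from '@playwright/test';

test('homepage matches the approved baseline', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL
  await expect(page).toHaveScreenshot('homepage.png', {
    fullPage: true,
    maxDiffPixelRatio: 0.001, // tolerate sub-0.1% anti-aliasing noise
  });
});
```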
Functional QA tests every interactive element: forms, buttons, navigation menus, search bars, accordions, modals, sliders, and embedded media. Each one is tested for intended, edge-case, and failure-state behavior.
Cross-browser and device QA runs the full site through a matrix of browsers, operating systems, and screen sizes. Tools like BrowserStack simulate real devices, but manual verification on physical hardware catches rendering quirks that emulators miss.
Performance QA benchmarks page speed, identifies render-blocking resources, and verifies that caching, compression, and CDN configurations are functioning correctly.
Accessibility QA validates WCAG 2.1 compliance at a minimum, covering color contrast, alt text, ARIA labels, keyboard navigation, and focus management.
Content QA reviews every page for typos, placeholder text, broken links, missing images, and metadata (titles, descriptions, Open Graph tags). This is the layer that catches the Lorem Ipsum paragraph still sitting on the About page.
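That last layer is also where the broken-link sweep lives, and it takes only a few lines of automation. A sketch with a placeholder URL; non-HTTP schemes like mailto: and tel: are skipped:

```ts
import { test, expect } from '@playwright/test';

test('homepage contains no broken links', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL
  const hrefs = await page.$$eval('a[href]', (anchors) =>
    anchors.map((a) => (a as HTMLAnchorElement).href)
  );

  const broken: string[] = [];
  for (const href of new Set(hrefs)) {
    if (!href.startsWith('http')) continue; // skip mailto:, tel:, javascript:
    const response = await page.request.get(href);
    if (response.status() >= 400) broken.push(`${href} -> ${response.status()}`);
  }

  expect(broken, `Broken links:\n${broken.join('\n')}`).toEqual([]);
});
```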
Remember That 28%?
Twenty-eight percent of small business websites have a broken link on their homepage. Each one was placed by someone, approved by someone, and launched by someone who assumed it worked.
QA is the process of replacing assumptions with evidence. It’s tedious, methodical, and unglamorous. It will never be the part of a web project that gets celebrated on social media or featured in a case study.
It is consistently the part that determines whether a site earns or erodes trust in the first five seconds. Every broken link, every misaligned element, every form that swallows a submission without confirmation tells a visitor the same thing: nobody checked.
The 142 things that need checking before launch are boring. Launching without checking them is expensive. The math has always favored the boring option.

