# How BeaverCheck Calculates Business Impact
BeaverCheck turns technical audit findings into financial estimates so you can prioritize fixes by revenue impact, not just severity level. This page documents every model, source, and assumption behind those numbers.
All figures are estimates. They are intended to help you rank which findings are worth your attention — not to serve as legal, financial, or actuarial advice. Actual impact depends on your site, your audience, your vertical, and the enforcement priorities of the regulators your site falls under.
If you spot an error in our data or methodology, let us know via the feedback widget on any result page.
## Performance → Conversion Model
Used in the Revenue Calculator (Prompt 9), Cost-of-Inaction (Prompt 13), ROI dashboard (Prompt 14), and Third-Party Script Cost calculator (Prompt 17).
The model:
- Baseline bounce rate: 25% for sites with Largest Contentful Paint (LCP) under 2.5 seconds
- Each additional second above 2.5s adds +7 percentage points of bounce
- Capped at 70% maximum bounce rate (beyond that, you're not measuring performance — you're measuring site abandonment)
From bounce rate to dollars:
lostVisitors = monthlyViews × additionalBounceRate
lostConversions = lostVisitors × 0.0235 (baseline conversion rate)
monthlyCostUSD = lostConversions × localCPC (CPC as conversion-value proxy)
We use cost-per-click as a conversion-value proxy because it's the only traffic-value signal we can localize reliably without the operator's analytics.
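The steps above can be put together in a minimal sketch; the function names here are illustrative, not BeaverCheck's actual API.

```go
package main

import "fmt"

// bounceRate implements the piecewise model: a 25% baseline below a
// 2.5s LCP, +7 percentage points per additional second, capped at 70%.
func bounceRate(lcpSeconds float64) float64 {
	rate := 0.25
	if lcpSeconds > 2.5 {
		rate += 0.07 * (lcpSeconds - 2.5)
	}
	if rate > 0.70 {
		rate = 0.70
	}
	return rate
}

// monthlyCostUSD converts the additional bounce (versus the 25% baseline)
// into dollars via the 2.35% baseline conversion rate and a local CPC.
func monthlyCostUSD(lcpSeconds, monthlyViews, localCPC float64) float64 {
	additionalBounce := bounceRate(lcpSeconds) - 0.25
	lostVisitors := monthlyViews * additionalBounce
	lostConversions := lostVisitors * 0.0235
	return lostConversions * localCPC
}

func main() {
	// A 4.5s-LCP site with 10,000 monthly views at the $2.69 US default CPC.
	fmt.Printf("bounce: %.0f%%\n", bounceRate(4.5)*100)                   // bounce: 39%
	fmt.Printf("monthly cost: $%.2f\n", monthlyCostUSD(4.5, 10000, 2.69)) // monthly cost: $88.50
}
```

A two-second LCP overshoot costs roughly $90/month at the default assumptions; the result scales linearly with traffic and CPC.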
Sources:
- Google / Deloitte, Milliseconds Make Millions (2020) — found that sites loading in 5s vs 19s see 70% longer sessions and 35% lower bounce rates
- Google Web Vitals documentation — the 2.5s LCP threshold for "Good"
- Akamai, State of Online Retail Performance (2017) — every 100ms of latency costs ~7% in conversion
- Portent, How Page Speed Impacts Conversion (2022) — highest ecommerce conversion rates at 0–2s load times
Limitations:
- The linear bounce-rate-per-second relationship is a simplification. Real behavior is non-linear near the thresholds, and depends on content quality, user intent, device, and network.
- The 2.35% baseline conversion rate is an industry average; vertical-specific rates range from under 1% (B2B SaaS) to over 5% (financial services lead-gen).
- CPC varies dramatically by vertical — legal and insurance CPCs can exceed $50; retail averages $1–$2.
## Developer Rate Defaults
Used in Cost-to-Fix (Prompt 12), ROI (Prompt 14), Sprint Plan (Prompt 15), and Stakeholder Summaries (Prompt 16).
Rates are fully loaded — base salary plus benefits, employer taxes, office and tooling overhead. Fully-loaded cost is typically 1.5–2.5x base salary depending on jurisdiction.
Representative rates (mid-level default; the UI lets operators override):
| Country | Currency | Junior | Mid-Level (Default) | Senior | Agency |
|---|---|---|---|---|---|
| United States | USD ($) | $50/hr | $100/hr | $150/hr | $200/hr |
| Germany | EUR (€) | €50/hr | €90/hr | €130/hr | €170/hr |
| United Kingdom | GBP (£) | £40/hr | £75/hr | £110/hr | £150/hr |
| France | EUR (€) | €45/hr | €80/hr | €120/hr | €150/hr |
| Netherlands | EUR (€) | €50/hr | €85/hr | €125/hr | €150/hr |
| Switzerland | CHF | CHF 80/hr | CHF 130/hr | CHF 180/hr | CHF 220/hr |
| Canada | CAD (C$) | C$45/hr | C$85/hr | C$125/hr | C$170/hr |
| Australia | AUD (A$) | A$60/hr | A$110/hr | A$160/hr | A$200/hr |
| Japan | JPY (¥) | ¥4,500/hr | ¥8,000/hr | ¥12,000/hr | ¥16,000/hr |
| Singapore | SGD (S$) | S$50/hr | S$90/hr | S$130/hr | S$160/hr |
| Brazil | BRL (R$) | R$90/hr | R$180/hr | R$300/hr | R$400/hr |
| Mexico | MXN ($) | $300/hr | $550/hr | $800/hr | $1,100/hr |
| India | INR (₹) | ₹800/hr | ₹1,500/hr | ₹2,500/hr | ₹3,500/hr |
| South Africa | ZAR (R) | R400/hr | R750/hr | R1,100/hr | R1,400/hr |
| UAE | AED (د.إ) | AED 150/hr | AED 280/hr | AED 400/hr | AED 550/hr |
BeaverCheck's jurisdiction database (`internal/jurisdiction/countries.go`) covers 130+ countries with localized rates and preset labels. Countries not shown above use locally-sourced medians following the same fully-loaded methodology.
Sources: Glassdoor, StepStone, Talent.com, Stack Overflow Developer Survey, and regional salary databases. Each country entry in the database cites its primary source.
Limitations:
- Rates reflect 2024 medians; expect 5–10% annual drift.
- "Mid-level" is an average across 3–7 years of experience — your team's actual loaded cost may be higher or lower.
- Rates don't account for specialization premiums (security, ML, cloud architecture can command 1.5–2x the base median).
## Compliance Risk Calculations
Used in Cost-of-Inaction (Prompt 13).
Fine ranges come from each regulation's published enforcement schedule. They reflect the maximum statutory exposure for a violation — not a prediction of what you'd actually be fined. Real enforcement is shaped by violation severity, company size, cooperation with authorities, and the regulator's priorities that year.
| Regulation | Jurisdiction | Fine Range | Type | Source |
|---|---|---|---|---|
| GDPR (Art. 83) | EU / EEA | €10M–€20M or 2–4% of turnover | Per incident | EU DPA published decisions |
| GDPR (SME typical) | EU / EEA | €5K–€100K | Per incident | CNIL, ICO, BfDI enforcement patterns |
| UK GDPR | United Kingdom | £8.7M–£17.5M or 2–4% of turnover | Per incident | ICO published decisions |
| ePrivacy / Cookie Directive | EU | €0–€20M | Per incident | National DPA decisions |
| EAA (European Accessibility Act) | EU | Varies (member-state fines) | Per violation | EU Directive 2019/882 |
| CCPA / CPRA (§1798.155) | California, US | $2,500–$7,500 | Per violation | California AG penalty schedule |
| ADA Title III | United States | $10K–$75K (first), $75K–$150K (subsequent) | Settlement range | DOJ settlements, Title III case law |
| PIPEDA | Canada | CAD 10K–100K | Per violation | OPC enforcement guidance |
| ACA | Canada | CAD 250K | Per violation | Accessible Canada Act §147 |
| LGPD (Art. 52) | Brazil | 2% of revenue, max R$50M | Per infraction | ANPD guidelines |
| APPI | Japan | ¥1M–¥100M + criminal | Per violation | PPC published guidelines |
| PIPA | South Korea | ₩30M–₩500M | Per violation | PIPC guidelines |
| DPDP Act 2023 | India | Up to ₹250 crore (~₹2.5B) | Per incident | MeitY penalty schedule |
| PIPL | China | Up to ¥50M or 5% of revenue | Per incident | CAC published guidelines |
| PDPA | Singapore | Up to S$1M | Per breach | PDPC enforcement decisions |
| POPIA | South Africa | Up to R10M or imprisonment | Per violation | Information Regulator guidance |
| NDPR | Nigeria | Up to ₦10M or 2% of revenue | Per violation | NDPC guidelines |
| Swiss FADP (revFADP) | Switzerland | Up to CHF 250K (personal liability) | Per violation | FDPIC guidance |
The full regulation catalog in `internal/jurisdiction/regulations.go` covers 30+ regulations across every region in the country database.
Disclaimer. These are published regulatory ranges, not legal assessments. Actual fines depend on violation severity, company size, mitigation steps taken, cooperation with authorities, and the regulator's priorities. Consult qualified legal counsel before making compliance decisions based on these figures.
## SEO Traffic Value Model
Used in Cost-of-Inaction (Prompt 13).
Each SEO finding has an estimated click-through-rate (CTR) loss. Lost monthly value is:
monthlyLostValue = monthlyViews × ctrLossFraction × localCPC
CTR impact by finding type:
| SEO Issue | CTR Loss | Source |
|---|---|---|
| Missing meta description | −5% | Google writes one from body text; handwritten is consistently better (Backlinko 2024) |
| Missing canonical tag | −10% | Duplicate variants split ranking signal (Ahrefs studies) |
| Missing structured data | −8% | No rich results (stars, FAQs, prices) in SERP (Search Engine Journal 2024) |
| Title too long | −3% | SERP truncation cuts the meaningful words (Backlinko/Sistrix 2024) |
| Title too short | −2% | Underutilized SERP real estate (Sistrix 2024) |
| Missing Open Graph tags | −2% | Plain URL previews reduce social referral CTR |
| Missing H1 | −2% | Weakens on-page SEO signals (Moz correlation studies) |
| Weak internal linking | −4% | Reduced crawl depth and PageRank distribution (Moz) |
| Missing alt text | −1% | Lost image search traffic + a11y signals |
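The per-finding formula can be sketched directly from the table above; the finding keys here are illustrative labels, not BeaverCheck's internal identifiers.

```go
package main

import "fmt"

// ctrLoss mirrors the CTR-loss table: estimated fraction of clicks
// lost per SEO finding type.
var ctrLoss = map[string]float64{
	"missing-meta-description": 0.05,
	"missing-canonical":        0.10,
	"missing-structured-data":  0.08,
	"title-too-long":           0.03,
	"title-too-short":          0.02,
	"missing-open-graph":       0.02,
	"missing-h1":               0.02,
	"weak-internal-linking":    0.04,
	"missing-alt-text":         0.01,
}

// monthlyLostValue applies monthlyViews × ctrLossFraction × localCPC
// for a single finding. Unknown findings contribute zero.
func monthlyLostValue(finding string, monthlyViews, localCPC float64) float64 {
	return monthlyViews * ctrLoss[finding] * localCPC
}

func main() {
	// Missing meta description on a 10,000-view/month site at $2.69 CPC.
	fmt.Printf("$%.2f/month\n", monthlyLostValue("missing-meta-description", 10000, 2.69)) // $1345.00/month
}
```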
CPC by country (2024 regional averages for Google Search Ads):
| Country | Currency | Default CPC | Source |
|---|---|---|---|
| United States | USD | $2.69 | WordStream 2024 US Benchmark |
| Germany / eurozone | EUR | €2.20 | WordStream 2024 EU Benchmark |
| United Kingdom | GBP | £2.10 | WordStream 2024 UK Benchmark |
| Switzerland | CHF | CHF 3.20 | SEMrush 2024 |
| Canada | CAD | C$2.80 | WordStream 2024 |
| Australia | AUD | A$3.50 | WordStream 2024 |
| Brazil | BRL | R$2.50 | SEMrush 2024 |
| Mexico | MXN | $15 | SEMrush 2024 |
| India | INR | ₹30 | SEMrush 2024 |
| Japan | JPY | ¥280 | SEMrush 2024 |
| Singapore | SGD | S$3.50 | SEMrush 2024 |
| South Africa | ZAR | R15 | SEMrush 2024 |
| Turkey | TRY | ₺8 | SEMrush 2024 |
Currencies without an explicit entry fall back to converting $2.69 USD through the local exchange rate.
Limitations:
- CPC varies dramatically by vertical — legal ($50+), insurance ($30+), retail ($1–2). The regional defaults are cross-vertical averages.
- CTR-loss estimates are published research averages; actual impact depends on how your competitors are optimized.
- Monthly views defaults to 10,000 when the operator doesn't supply a real number.
## Bandwidth Cost Model
Used in Cost-of-Inaction (Prompt 13).
wastedBytes = totalTransfer - efficientTransfer
wastedMonthly = wastedBytes × monthlyViews
monthlyCost = (wastedMonthly / 1e9) × cdnRatePerGB (bytes → GB)
CDN rate: $0.08/GB, blended across Cloudflare, AWS CloudFront, and Fastly public pricing tiers (2024). Converted to local currency via exchange rate.
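The three formulas above reduce to a few lines; this sketch assumes the $0.08/GB blended rate and an illustrative function name.

```go
package main

import "fmt"

// monthlyBandwidthCost prices avoidable transfer at a per-GB CDN rate.
// wastedBytes is the per-page-load waste (totalTransfer - efficientTransfer).
func monthlyBandwidthCost(wastedBytes, monthlyViews, ratePerGB float64) float64 {
	wastedMonthly := wastedBytes * monthlyViews // bytes wasted per month
	return wastedMonthly / 1e9 * ratePerGB      // bytes → GB → currency
}

func main() {
	// 500 KB of avoidable transfer per view, 100,000 views/month, $0.08/GB.
	fmt.Printf("$%.2f/month\n", monthlyBandwidthCost(500_000, 100_000, 0.08)) // $4.00/month
}
```

Even at six-figure traffic the direct CDN cost is small, which is why the limitations below stress the end-user cost of waste rather than the hosting bill.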
Sources: Published pricing pages for Cloudflare, AWS CloudFront, Fastly, Google Cloud CDN, Azure CDN, Bunny.net. The blended rate sits in the middle of the per-GB tiers that operators with typical traffic volumes land in.
Limitations:
- Enterprise pricing at high volume can be 50–90% lower than the public-rate blend.
- Free CDN tiers (Cloudflare Free, Netlify, Vercel) don't fit this model — but the waste is still real from the end user's perspective (loading 500KB you didn't need is 500KB of battery, data plan, and time).
## Third-Party Script Impact Model
Used in the Third-Party Script Cost calculator (Prompt 17).
Each third-party script's cost is derived from its contribution to page slowness:
LCPImpactMs = isRenderBlocking
? executionMs (full blocking contribution)
: executionMs × 0.30 (main-thread contention heuristic)
then fed through the bounce-rate model from the Performance → Conversion Model section above.
The 30% contention factor is a heuristic — async scripts still compete for main-thread time during parse and compile; we just can't measure that precisely from lab data. Published research on async-script LCP impact doesn't give a single canonical number; 30% sits in the middle of the observed range.
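The branch is straightforward to sketch; the function name is illustrative.

```go
package main

import "fmt"

// lcpImpactMs estimates a script's LCP contribution: render-blocking
// scripts count their full execution time, while async/deferred scripts
// count 30% of it as a main-thread-contention heuristic.
func lcpImpactMs(executionMs float64, isRenderBlocking bool) float64 {
	if isRenderBlocking {
		return executionMs
	}
	return executionMs * 0.30
}

func main() {
	fmt.Println(lcpImpactMs(400, true))  // 400: render-blocking, full contribution
	fmt.Println(lcpImpactMs(400, false)) // 120: async, 30% heuristic
}
```

The resulting milliseconds then feed the bounce-rate model above to produce a monthly dollar figure per script.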
Verdicts:
- Essential — consent banners, payment processors. Must keep regardless of cost.
- Efficient — under 50ms execution AND under the low-cost threshold. Fine to keep.
- Optional — 50–200ms non-essential scripts. Evaluate alternatives (Plausible vs GA4, LiveChat vs Intercom, etc.).
- Costly — over 200ms execution OR over the high-cost threshold. Evaluate whether the script justifies its cost.
Limitations:
- Script execution time is measured by Lighthouse in a lab environment. Real-world impact depends on user device, network, and concurrent scripts.
- The 30% contention factor is a heuristic, not a measured value.
- Render-blocking detection is conservative — we mark async scripts as non-blocking even when they're loaded early in `<head>`.
## Effort Estimates
Used in the Priority Matrix (Prompt 5), Cost-to-Fix (Prompt 12), Sprint Plan (Prompt 15).
Every finding in the internal/impact registry is tagged with one of three effort tiers:
| Tier | Time Range | Examples |
|---|---|---|
| Quick Win | under 30 min | Adding security headers, meta description, alt text, title fixes |
| Moderate | 1–4 hours | Image optimization pipeline, structured data, form labels, lazy loading |
| Significant | 4+ hours | CDN setup, code splitting, server optimization, auth refactor |
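A cost-to-fix sketch combines the tiers with the rate defaults from the table earlier; the midpoint hours per tier here are assumptions for illustration, not the registry's exact values.

```go
package main

import "fmt"

// tierHours assigns illustrative midpoint hours to each effort tier.
var tierHours = map[string]float64{
	"quick-win":   0.5, // under 30 min
	"moderate":    2.5, // 1–4 hours
	"significant": 6.0, // 4+ hours
}

// costToFix multiplies a tier's midpoint hours by the operator's
// fully loaded hourly rate.
func costToFix(tier string, hourlyRate float64) float64 {
	return tierHours[tier] * hourlyRate
}

func main() {
	// A "moderate" finding at the US mid-level default of $100/hr.
	fmt.Printf("$%.0f\n", costToFix("moderate", 100)) // $250
}
```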
Limitations:
- Estimates assume a developer familiar with the technology stack.
- Actual time depends on codebase complexity, test coverage, deployment process, and team conventions.
- "Quick Win" and "Significant" are the confident ends — most findings land in "Moderate," which has a wide range.
## Jurisdiction Detection
Used everywhere that produces localized currency, rates, or regulatory context (Prompts 11–17).
Seven signals, weighted toward stronger indicators:
| Signal | Weight | How we read it |
|---|---|---|
| Country-code TLD (.de, .fr, .jp) | 0.30 | URL analysis — strongest single signal when present |
| WHOIS / RDAP registrant country | 0.25 | Domain intelligence lookup |
| TLS certificate Subject country (C=) | 0.15 | OV/EV cert country attribute — cryptographically attested, but absent from DV certs (the common case) |
| HTML lang attribute | 0.10 | Content-language probe |
| ASN | 0.10 | IP-to-ASN lookup (ASN number preferred, org-name fallback) |
| Cookie consent banner detected | 0.05 | Compliance scanner |
| Privacy policy mentions a regulation | 0.05 | Legal analysis |
Combined confidence below 0.3 falls back to US/USD defaults rather than displaying a low-confidence guess.
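The combination can be sketched as a weighted sum, assuming that detected signals simply add their weights toward one candidate country; the real voting logic may differ.

```go
package main

import "fmt"

// signalWeights mirrors the table above.
var signalWeights = map[string]float64{
	"cctld":          0.30,
	"whois":          0.25,
	"tls-subject":    0.15,
	"html-lang":      0.10,
	"asn":            0.10,
	"cookie-banner":  0.05,
	"privacy-policy": 0.05,
}

// confidence sums the weights of signals agreeing on a country and
// reports whether the total clears the 0.3 fallback threshold.
func confidence(detected []string) (float64, bool) {
	total := 0.0
	for _, s := range detected {
		total += signalWeights[s]
	}
	return total, total >= 0.3
}

func main() {
	// A ccTLD alone clears the threshold; lang attribute plus ASN does not.
	c, ok := confidence([]string{"cctld"})
	fmt.Println(c, ok)
	c, ok = confidence([]string{"html-lang", "asn"})
	fmt.Println(c, ok)
}
```

Below the threshold, BeaverCheck falls back to US/USD defaults rather than showing a low-confidence guess.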
Limitations:
- CDN edge nodes may resolve to a different country than the origin server — the ASN signal can be misleading for CDN-fronted sites (we weight it low for this reason).
- Privacy-shielded WHOIS provides no registrant country.
- Generic TLDs (.com, .io, .app) give no country signal.
- A US company serving German customers legitimately triggers both GDPR and US regulations — the badge shows the primary jurisdiction, but our compliance risk sums across all triggered regulations.
## Composite Score Methodology
The overall score shown at the top of every result page is a weighted combination of category scores:
| Category | Weight | Why |
|---|---|---|
| Performance | 25% | Directly affects conversion and search ranking |
| Security | 25% | Direct financial and reputational risk |
| Accessibility | 15% | Legal exposure + addressable audience |
| SEO | 10% | Traffic acquisition |
| Infrastructure | 10% | DNS, TLS, reliability |
| Compliance | 8% | Regulatory exposure |
| Content | 5% | Editorial quality signals |
| Sustainability | 2% | Emerging concern, smaller signal today |
Weights are renormalized when a category has no available data (e.g. a site without a Lighthouse report skips Performance, and the remaining weights are rescaled to sum to 1.0).
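Renormalization amounts to dropping the missing categories and rescaling the survivors so they sum to 1.0; a sketch:

```go
package main

import "fmt"

// baseWeights are the composite-score category weights from the table.
var baseWeights = map[string]float64{
	"performance": 0.25, "security": 0.25, "accessibility": 0.15,
	"seo": 0.10, "infrastructure": 0.10, "compliance": 0.08,
	"content": 0.05, "sustainability": 0.02,
}

// renormalize keeps only categories with data and rescales their
// weights so they sum to 1.0. (Assumes at least one category has data.)
func renormalize(available map[string]bool) map[string]float64 {
	total := 0.0
	for cat, w := range baseWeights {
		if available[cat] {
			total += w
		}
	}
	out := make(map[string]float64)
	for cat, w := range baseWeights {
		if available[cat] {
			out[cat] = w / total
		}
	}
	return out
}

func main() {
	// No Lighthouse report: Performance is skipped; the remaining 0.75 rescales to 1.
	avail := map[string]bool{
		"security": true, "accessibility": true, "seo": true,
		"infrastructure": true, "compliance": true, "content": true,
		"sustainability": true,
	}
	w := renormalize(avail)
	fmt.Printf("security: %.3f\n", w["security"]) // security: 0.333
}
```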
Why these weights? See `docs/DECISIONS.md` in the source tree — the weights reflect both user-facing impact (performance, SEO) and legal/financial risk (security, compliance, accessibility).
## Data Freshness
- Exchange rates are approximate and updated periodically. They're not live — a 3% drift is typical between updates.
- Developer rates are based on annual salary surveys. Updated yearly.
- CPC values are based on WordStream and SEMrush public reports, updated semi-annually.
- Fine ranges reflect the statutory maximum as of the last regulation catalog update. Enforcement-patterns data updates when DPAs publish new decision summaries.
All estimates should be treated as order-of-magnitude guidance, not precise forecasts. BeaverCheck's purpose is to help you decide which findings deserve attention first — not to produce a number you'd put in a budget.
## Feedback
If you believe any of our data or methodology is incorrect, please let us know via the feedback form on any result page, or open an issue at the project's GitHub repository. We take accuracy seriously — specific citations beat general complaints.