
Performance Playbook

This document replaces the old checklist/changelog with a living reference for how we monitor and improve ignitionstack.pro performance. It covers the observability pipeline (Lighthouse, Chrome DevTools, MCP automations), the tech-specific guardrails woven into ../src, and how to react when metrics regress.

Core Metrics & Benchmarks

| Metric | Target | Tools |
| --- | --- | --- |
| Largest Contentful Paint (LCP) | < 2.5 s on 4G with 4× CPU slowdown | Lighthouse CI, PageSpeed Insights |
| First Input Delay / Interaction to Next Paint | < 100 ms | Chrome DevTools Performance panel, Web Vitals logger |
| Time to Interactive | < 3.5 s | Lighthouse, DevTools trace flamegraph |
| Cumulative Layout Shift | < 0.1 | Chrome Layout Shift overlay, Next Image layout hints |
| Bundle Size (main/app) | < 350 KB gzipped | npm run build:analyze (Webpack analyzer) |
| Server Response | < 200 ms p95 | Supabase logs, Next.js telemetry |

We enforce these through automated runs (Lighthouse CLI, Playwright/perf in CI) and manual spot-checks every release.
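
The Web Vitals logger mentioned in the table can be as small as a client-side hook built on the web-vitals package. A minimal sketch, assuming that package; the /api/vitals endpoint is illustrative, not the real route:

```ts
// Minimal Web Vitals logger sketch built on the web-vitals package.
// The /api/vitals endpoint is an assumption for illustration only.
'use client';

import { onCLS, onINP, onLCP } from 'web-vitals';

function report(metric: { name: string; value: number; id: string }) {
  // keepalive lets the beacon survive page unloads and route changes
  fetch('/api/vitals', {
    method: 'POST',
    body: JSON.stringify(metric),
    keepalive: true,
  });
}

export function registerVitalsLogger() {
  onCLS(report);
  onINP(report);
  onLCP(report);
}
```

Call registerVitalsLogger() once from a root client component so every route reports field data.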

Observability Toolkit

Lighthouse & PageSpeed Insights

Chrome DevTools

Supabase Insights

MCP (Model Control Plane) Automations

Architectural Guardrails

  1. Data Fetching – every server query under src/app/server/** wraps Supabase calls in React cache() plus tag-based unstable_cache. Mutations must call revalidateTag or revalidatePath so ISR caches stay fresh (see the data-fetching sketch after this list).
  2. Rendering – routes under src/app/[locale]/(pages) use Server Components by default. Client islands ("use client") exist only for interactive surfaces (chat, analytics, forms).
  3. Assets – next/image everywhere, with sizes tuned per layout and priority on hero banners. The asset pipeline runs npm run optimize:images before release.
  4. Styles – Tailwind theme tokens + globals.css reduce layout shift by ensuring height/spacing tokens align across breakpoints.
  5. Network – next.config.ts sets stale-while-revalidate headers, caches fonts/images aggressively, and strips unused locales.
  6. Scripts – next/script loads GA and Mixpanel only when NEXT_PUBLIC_APP_ENV !== 'development'; window.gtag loads lazily after first paint (see the script-loading sketch after this list).
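
As a reference for guardrail 1, here is a minimal sketch of the cache-and-revalidate pattern. The createClient helper, table name, and tag are assumptions for illustration, not the actual names under src/app/server/**:

```ts
// Sketch of guardrail 1: React cache() + tag-based unstable_cache around a Supabase query.
// createClient, the 'posts' table, and the 'posts' tag are illustrative assumptions.
import { cache } from 'react';
import { unstable_cache, revalidateTag } from 'next/cache';
import { createClient } from '@/lib/supabase/server';

export const getPosts = cache(
  unstable_cache(
    async () => {
      const supabase = createClient();
      const { data, error } = await supabase.from('posts').select('*');
      if (error) throw error;
      return data;
    },
    ['posts'],          // cache key parts
    { tags: ['posts'] } // tag used for on-demand revalidation
  )
);

export async function afterPostMutation() {
  // Invalidate the tagged entries after a mutation so ISR caches stay fresh.
  revalidateTag('posts');
}
```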

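Guardrail 6's conditional script loading can look like the following sketch; the component name and measurement ID are placeholders:

```tsx
// Illustrative sketch of guardrail 6: load third-party analytics only outside development,
// and only after the page has painted. The component name and GA ID are placeholders.
import Script from 'next/script';

export function AnalyticsScripts() {
  if (process.env.NEXT_PUBLIC_APP_ENV === 'development') return null;

  return (
    <Script
      src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX"
      strategy="lazyOnload" // defer loading until the browser is idle after paint
    />
  );
}
```
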
Debugging Workflow

  1. Reproduce – Run npm run dev + npm run build:test (if regression shows only in prod) and open Chrome DevTools.
  2. Trace – Capture a performance recording; note largest long task and LCP element. Save the .json trace to /performance-traces/issue-[id].json.
  3. Inspect Backend – Use Supabase dashboard metrics (RLS policies, query log) and pnpm supabase db show to profile slow endpoints.
  4. Compare Bundles – Run npm run build:analyze and compare stats.html with the previous release; ensure dynamic imports isolate heavy feature code.
  5. Automate – If the fix needs ongoing monitoring (e.g., chat streaming), author an MCP script or GitHub Action to guard against the regression (a Playwright-style CI guard is sketched below).
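
One way to encode such a guard in CI is a Playwright check that fails when a route exceeds its LCP budget. A minimal sketch, assuming @playwright/test with a configured baseURL; the route and budget come from the table above:

```ts
// Sketch of a CI perf guard: fail the run if the home page's LCP exceeds 2.5 s.
// Assumes @playwright/test with baseURL configured in playwright.config.
import { test, expect } from '@playwright/test';

test('home page stays within the LCP budget', async ({ page }) => {
  await page.goto('/');

  // Read the buffered largest-contentful-paint entry once the page has loaded.
  const lcp = await page.evaluate(
    () =>
      new Promise<number>((resolve) => {
        new PerformanceObserver((entryList) => {
          const entries = entryList.getEntries();
          resolve(entries[entries.length - 1].startTime);
        }).observe({ type: 'largest-contentful-paint', buffered: true });
      })
  );

  expect(lcp).toBeLessThan(2500);
});
```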

Optimization Playbook

Server Layer

Client Layer
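
A typical client-layer move, tying together guardrail 2's client islands and debugging step 4's bundle isolation, is loading heavy interactive components through next/dynamic. An illustrative sketch; the ChatPanel path is an assumption:

```tsx
'use client';

// Illustrative: keep a heavy client island out of the initial bundle via next/dynamic.
// The ChatPanel import path is a placeholder, not the real component.
import dynamic from 'next/dynamic';

const ChatPanel = dynamic(() => import('@/components/chat/ChatPanel'), {
  ssr: false, // client-only island, skipped during server rendering
  loading: () => <p>Loading chat…</p>,
});

export function SupportSection() {
  return <ChatPanel />;
}
```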

Analytics & Monitoring

When Metrics Regress

  1. Open a tracking issue with the failing metric, screenshot/report, and impacted routes.
  2. Add a checklist referencing observation tools (Lighthouse build hash, DevTools trace file, Supabase query ID).
  3. Implement fix following layer guidelines above.
  4. Prove improvement using the same tool (attach before/after). If regression was prevented by MCP automation, update scripts to guard the new path as well.

References

Keep this page up to date whenever new diagnostic scripts, MCP tools, or architectural guardrails land so all contributors share the same performance toolbox.