Every user interaction triggers a cascade of work on the browser's main thread. Understanding this pipeline is the foundation of all performance thinking.
When something changes in the UI, the browser runs through up to five stages — all on a single thread. If any stage takes too long, the user sees jank.
At 60fps, you have 16.67ms per frame. At 120fps (iPhone ProMotion, iPad), it's 8.33ms. Subtract ~4ms for browser housekeeping, and you're left with roughly 12ms of real work per frame at 60fps, and barely 4ms at 120fps.
This demo creates 200 boxes, then reads and writes their dimensions. Watch how interleaved reads/writes (bad) takes dramatically longer than batched operations (good).
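The two strategies in the demo boil down to a few lines. A minimal sketch, using plain objects as stand-ins for DOM elements (in the browser, every offsetWidth read that follows a style write forces a synchronous reflow):

```javascript
// Layout thrashing, distilled. `boxes` are plain objects standing in for DOM
// elements; in a real page, offsetWidth reads after style writes force layout.
function thrashed(boxes) {
  // Bad: read, write, read, write — one forced layout per iteration
  boxes.forEach(box => {
    const w = box.offsetWidth;           // read (forces layout)
    box.style.width = (w + 10) + 'px';   // write (invalidates layout)
  });
}

function batched(boxes) {
  // Good: all reads first, then all writes — at most one layout
  const widths = boxes.map(box => box.offsetWidth);
  boxes.forEach((box, i) => {
    box.style.width = (widths[i] + 10) + 'px';
  });
}
```

Both produce the same final styles; only the batched version avoids repeated forced layouts.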
There are three fundamentally different ways to put pixels on screen. Choosing the right one is the single biggest architecture decision for a data-rich app.
| | DOM | Canvas 2D | WebGL |
|---|---|---|---|
| Best for | Interactive UI, forms, text | Custom 2D graphics, charts | Maps, 3D, millions of points |
| Layout engine | Browser handles it | You handle it | You handle it |
| Perf ceiling | Medium | High | Very High |
| Dev complexity | Low | Medium | High |
| Accessibility | Built-in | Manual | Manual |
| Examples | React apps, forms, tables | Chart.js, Pretext, timelines | Mapbox, Deck.gl, Figma |
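The table's advice can be encoded as a toy decision helper. A sketch only: pickRenderer and its thresholds are illustrative assumptions, not hard rules.

```javascript
// Toy decision helper for the three rendering approaches.
// Thresholds are rough starting points — profile before committing.
function pickRenderer({ pointCount = 0, needs3D = false, needsNativeText = true }) {
  if (needs3D || pointCount > 100_000) return 'webgl';    // maps, 3D, huge datasets
  if (!needsNativeText && pointCount > 2_000) return 'canvas2d'; // custom 2D graphics
  return 'dom';                                           // interactive UI, forms, text
}
```

For example, a form with 500 rows stays in the DOM, while a scatterplot of 5,000 points without selectable text is a Canvas 2D candidate.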
Both panels render 500 colored blocks and animate them. Watch the FPS difference when both run simultaneously.
These four patterns solve the most common performance problems in data-heavy applications. Click each to expand.
Problem: You have 50,000 rows in a table. Rendering all 50,000 DOM nodes freezes the browser for seconds and eats hundreds of MB of RAM.
Virtual scrolling maintains a "render window" — only the rows currently visible in the viewport (plus a small overscan buffer) exist in the DOM. As the user scrolls, rows are recycled: old ones are removed, new ones are created. The scrollbar is faked with a spacer element at the correct total height.
// TanStack Virtual — minimal example (50,000 rows, ~30 in DOM at any time)
import { useRef } from 'react';
import { useVirtualizer } from '@tanstack/react-virtual';

function VirtualList({ items }) {
  const parentRef = useRef(null);
  const virtualizer = useVirtualizer({
    count: items.length,                        // 50,000
    getScrollElement: () => parentRef.current,
    estimateSize: () => 40,                     // estimated row height in px
    overscan: 5,                                // render 5 extra rows above/below
  });
  return (
    <div ref={parentRef} style={{ height: '600px', overflow: 'auto' }}>
      <div style={{ height: virtualizer.getTotalSize(), position: 'relative' }}>
        {virtualizer.getVirtualItems().map(vRow => (
          <div key={vRow.key}
            style={{
              position: 'absolute',
              top: 0,
              left: 0,
              width: '100%',
              transform: `translateY(${vRow.start}px)`,
              height: vRow.size,
            }}>
            {items[vRow.index].name}
          </div>
        ))}
      </div>
    </div>
  );
}
Rows with variable heights can be measured with TanStack Virtual's measureElement, but it adds complexity.

Problem: You shipped 50,000 rows to the browser and now you're sorting and filtering in JavaScript. Each operation takes 200ms+ and blocks the UI.
The pattern: debounce user input → send query to server → server does the heavy lifting (SQL, full-text search) → return minimal JSON → render results.
// Debounced search — the right pattern for large datasets
import { useEffect, useRef, useState } from 'react';

function SearchInput({ onResults }) {
  const [query, setQuery] = useState('');
  const controller = useRef(null);

  useEffect(() => {
    if (!query || query.length < 2) return;
    // Cancel previous in-flight request
    controller.current?.abort();
    controller.current = new AbortController();
    const timer = setTimeout(async () => {
      try {
        const res = await fetch(
          `/api/search?q=${encodeURIComponent(query)}&limit=20`,
          { signal: controller.current.signal }
        );
        const data = await res.json();
        onResults(data.results);
      } catch (e) {
        if (e.name !== 'AbortError') throw e;
      }
    }, 150); // 150ms debounce — sweet spot between latency and request volume
    return () => clearTimeout(timer);
  }, [query]);

  return <input value={query} onChange={e => setQuery(e.target.value)} />;
}
DuckDB is an embeddable analytical database that runs sub-100ms queries on millions of rows. It can run in-process (no separate server), supports SQL, and even compiles to WASM for the browser. For dashboards and analytical tools, it's transformative.
Problem: You're parsing a 5MB FHIR bundle or running a complex data transform. The entire UI freezes for 800ms while the main thread grinds through it.
The fix: move the heavy work into a Web Worker and communicate with the main thread via postMessage.

// worker.js — runs on a separate thread
self.onmessage = function (e) {
  const { type, payload } = e.data;
  if (type === 'PARSE_FHIR_BUNDLE') {
    // Heavy work happens here — main thread stays responsive
    const patients = payload.entry
      .filter(entry => entry.resource.resourceType === 'Patient')
      .map(entry => ({
        id: entry.resource.id,
        name: entry.resource.name?.[0]?.text || 'Unknown',
        birthDate: entry.resource.birthDate,
        conditions: extractConditions(entry.resource), // defined elsewhere in the worker
      }));
    self.postMessage({ type: 'PARSED', patients });
  }
};
// main.js — stays snappy
const worker = new Worker('worker.js');

worker.postMessage({
  type: 'PARSE_FHIR_BUNDLE',
  payload: hugeFhirBundle, // 5MB of FHIR data
});

worker.onmessage = (e) => {
  if (e.data.type === 'PARSED') {
    renderPatientList(e.data.patients); // UI updates instantly
  }
};
Tip: for large binary data, use transferable objects (postMessage(data, [buffer])) to avoid the serialization cost.

Problem: You're rendering 2,000 provider locations as <div> markers in the DOM. Pan and zoom run at 8fps. The page uses 400MB of RAM.
Never render individual pins at wide zoom levels. Supercluster groups nearby points into clusters that expand as the user zooms in. This keeps the rendered element count constant regardless of dataset size.
// Mapbox GL + Supercluster — minimal clustering example
import mapboxgl from 'mapbox-gl';

mapboxgl.accessToken = MAPBOX_TOKEN; // your access token

const map = new mapboxgl.Map({
  container: 'map',
  style: 'mapbox://styles/mapbox/light-v11',
  center: [-98.5, 39.8],
  zoom: 4,
});

map.on('load', () => {
  map.addSource('providers', {
    type: 'geojson',
    data: providersGeoJSON, // 10,000+ points
    cluster: true,          // Mapbox clusters via Supercluster internally
    clusterMaxZoom: 14,
    clusterRadius: 50,
  });

  // Cluster circles
  map.addLayer({
    id: 'clusters',
    type: 'circle',
    source: 'providers',
    filter: ['has', 'point_count'],
    paint: {
      'circle-color': ['step', ['get', 'point_count'],
        '#51bbd6', 100, '#f1f075', 750, '#f28cb1'],
      'circle-radius': ['step', ['get', 'point_count'],
        20, 100, 30, 750, 40],
    },
  });

  // Individual points (only visible at high zoom)
  map.addLayer({
    id: 'points',
    type: 'circle',
    source: 'providers',
    filter: ['!', ['has', 'point_count']],
    paint: { 'circle-radius': 6, 'circle-color': '#3b82f6' },
  });
});
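Under the hood, clustering is spatial bucketing. A toy sketch of the idea (gridCluster is our own illustration; Supercluster is far more sophisticated, using a spatial index and zoom-dependent radii):

```javascript
// Bucket points into grid cells; emit one cluster (centroid + count) per cell.
// Cell size in degrees plays the role of Supercluster's clusterRadius.
function gridCluster(points, cellSizeDeg) {
  const cells = new Map();
  for (const p of points) {
    const key = Math.floor(p.lng / cellSizeDeg) + ':' + Math.floor(p.lat / cellSizeDeg);
    const cell = cells.get(key) || { count: 0, lngSum: 0, latSum: 0 };
    cell.count += 1;
    cell.lngSum += p.lng;
    cell.latSum += p.lat;
    cells.set(key, cell);
  }
  return [...cells.values()].map(c => ({
    lng: c.lngSum / c.count,
    lat: c.latSum / c.count,
    count: c.count,
  }));
}
```

However many points go in, the number of clusters is bounded by the number of occupied cells, which is why the rendered element count stays constant.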
You can't optimize what you can't measure. Here's the framework for thinking about — and talking about — web performance.
Users don't think in milliseconds. They have three distinct performance-related feelings:
This is the single most important number in web performance:
RAIL gives you UX-driven performance budgets for four categories of work:
| Category | Budget | What it covers |
|---|---|---|
| Response | <100ms | React to user input (click, tap, type). If you can't finish in 100ms, show a loading state. |
| Animation | <16ms/frame | Visual transitions, scrolling, dragging. Each frame gets 16ms (aim for 10ms to leave room). |
| Idle | 50ms chunks | Use idle time for deferred work (analytics, prefetching). Keep each chunk under 50ms so you can respond to input instantly. |
| Load | <3s TTI | Page should be interactive within 3s on mid-range mobile with 3G. For data apps: show skeleton/content progressively. |
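The Idle budget can be honored by slicing long work into small cooperative chunks. A minimal sketch — processInChunks and its slice size are illustrative assumptions; in the browser, prefer requestIdleCallback (or scheduler.postTask) over setTimeout for the yield:

```javascript
// Slice a long task into chunks, yielding to the event loop between slices
// so the main thread can respond to input. Tune sliceSize so each slice
// finishes well under 50ms.
function processInChunks(items, work, sliceSize = 500) {
  return new Promise(resolve => {
    const results = [];
    let i = 0;
    function runSlice() {
      const end = Math.min(i + sliceSize, items.length);
      for (; i < end; i++) results.push(work(items[i]));
      if (i < items.length) {
        setTimeout(runSlice, 0); // yield between slices
      } else {
        resolve(results);
      }
    }
    runSlice();
  });
}

// Usage: processInChunks(bigArray, expensiveTransform).then(render);
```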
Three tabs you should know intimately:
Record a user interaction. The flame chart shows exactly where time is spent on the main thread. Yellow = JS, purple = layout, green = paint. Look for long tasks (>50ms) in the "Main" section. The frame timeline at the top shows dropped frames as red bars.
The waterfall view reveals request chains — requests that can't start until others finish. Look for: oversized payloads (are you shipping 5MB of JSON?), render-blocking resources, and slow TTFB (server response time). Filter by "XHR" to see just your API calls.
Automated audit that scores Performance, Accessibility, Best Practices, and SEO. Fix in this order: Largest Contentful Paint (LCP) → Cumulative Layout Shift (CLS) → Interaction to Next Paint (INP). These are the Core Web Vitals that actually matter.
Use this tool to evaluate an existing app or plan a new one. Two modes: diagnose problems or plan architecture.
Toggle the issues that apply to your app. Your risk score and recommendations update in real time.
Answer these questions about your planned app to get architecture recommendations.
Concrete examples of common mistakes and their fixes, with estimated performance impact.
// Render ALL 10,000 rows as real DOM nodes
// Sort by re-sorting the JS array and re-rendering everything
function renderTable(data) {
  const tbody = document.querySelector('tbody');
  tbody.innerHTML = data.map(row =>
    `<tr><td>${row.name}</td><td>${row.date}</td></tr>`
  ).join('');
}

// On sort click:
data.sort((a, b) => a.name.localeCompare(b.name));
renderTable(data); // 10,000 DOM nodes created
Impact: ~800ms to render, ~200ms to sort, 10,000 DOM nodes eating 80MB+ RAM. Scrolling jank on mobile.
// Server-side sort + virtual scroll
// Only ~30 DOM rows exist at any time
const res = await fetch(
  `/api/data?sort=name&page=1&limit=30`
);
const { rows, total: serverTotalCount } = await res.json();

// TanStack Virtual handles the render window
const virtualizer = useVirtualizer({
  count: serverTotalCount, // 10,000
  getScrollElement: () => parentRef.current,
  estimateSize: () => 44,
  overscan: 5,
});
Impact: ~5ms to render 30 rows, sort is instant (SQL), 30 DOM nodes, butter-smooth scrolling.
// Filter 50K items on EVERY keypress
input.addEventListener('input', (e) => {
  const q = e.target.value.toLowerCase();
  const filtered = allItems.filter(item =>
    item.name.toLowerCase().includes(q) ||
    item.description.toLowerCase().includes(q)
  );
  renderResults(filtered); // could be 40K results
});
Impact: ~150ms per keystroke to filter + re-render. UI freezes while typing. Rendering thousands of matching results compounds the problem.
// Debounce 150ms → server full-text search
// Return only 20 results
let timer;
input.addEventListener('input', (e) => {
  clearTimeout(timer);
  timer = setTimeout(async () => {
    const res = await fetch(
      `/api/search?q=${encodeURIComponent(e.target.value)}&limit=20`
    );
    renderResults(await res.json());
  }, 150);
});
Impact: Zero client-side computation. DuckDB/Postgres FTS returns 20 results in <10ms. Only 20 DOM nodes to render. Typing is never blocked.
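The inline timer above generalizes into a reusable helper. A minimal trailing-edge debounce sketch (the debounce name and default wait are our own):

```javascript
// Trailing-edge debounce: fn runs once, `wait` ms after the last call.
function debounce(fn, wait = 150) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// Usage: input.addEventListener('input', debounce(runSearch, 150));
```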
// 2,000 DOM marker divs
locations.forEach(loc => {
  const marker = document.createElement('div');
  marker.className = 'map-pin';
  marker.style.left = project(loc.lng) + 'px'; // project(): app-specific lng/lat → px
  marker.style.top = project(loc.lat) + 'px';
  mapContainer.appendChild(marker);
});
// On pan/zoom: update all 2,000 positions
Impact: 2,000 DOM nodes, each repositioned on every frame during pan. ~8fps on mobile. 400MB memory.
// WebGL rendering + clustering
map.addSource('locations', {
  type: 'geojson',
  data: locationsGeoJSON,
  cluster: true,
  clusterMaxZoom: 14,
  clusterRadius: 50,
});
// Mapbox renders everything on GPU
// ~20 cluster circles at zoom-out
// Individual pins only at high zoom
Your quick-reference card for building fast data-rich apps.
Built as an interactive reference guide · March 2026
Part of Blake's Reports