Work with Shubham
Connect with Shubham Jha
Available for senior engineering roles, technical consulting, and product advisory. I specialise in React, Next.js, and full-stack architecture for global-scale platforms.
The logistics portal I inherited had a 3.2 second LCP. That number had been sitting in a Notion doc labelled known issues for two quarters. Everyone knew it was bad. Nobody could tell you exactly why. If you've already added next/image, turned on compression, and you're still stuck in the high 70s on Lighthouse, the problem is almost certainly not your images.
When we finally fixed it — really fixed it, not just ran Lighthouse and called it a day — page load dropped 40%. Repeat orders went up 74%. Average order value climbed 12%. I'm not claiming performance caused all of that. But 23% of vendors were dropping off in their first session. When you stop losing a quarter of your users before they've done anything, everything else you're working on gets a fairer shot.
Most Core Web Vitals content is written for marketing sites. It tells you to compress images and defer scripts. That's fine for a WordPress blog. For a data-heavy Next.js App Router application, the standard checklist doesn't get you past 75. Google's published thresholds (LCP under 2.5s, CLS under 0.1) are the floor, not the target. If your users are power users who touch your product dozens of times a day, they notice every stutter.
Open Chrome DevTools, run a performance trace, and look at what the LCP element actually is before you touch a single image. If it's text or a data-driven component, image optimization is the wrong fix.
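Outside the Performance panel, the browser will also tell you this directly via the PerformanceObserver API, which reports each LCP candidate along with a reference to the DOM element. A sketch you can paste into the DevTools console as `watchLcp(PerformanceObserver, console.log)` — the constructor and logger are passed in only so the wiring can be exercised outside a browser:

```javascript
// Log every LCP candidate the browser reports, with the element itself.
function watchLcp(ObserverCtor, log) {
  const observer = new ObserverCtor((list) => {
    for (const entry of list.getEntries()) {
      // entry.element is the DOM node currently considered the LCP element
      log('LCP candidate:', entry.element, `at ${Math.round(entry.startTime)}ms`)
    }
  })
  // buffered: true replays candidates that fired before the observer attached
  observer.observe({ type: 'largest-contentful-paint', buffered: true })
  return observer
}
```

The browser reports successive candidates as larger elements paint; the last one logged is your real LCP element.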
The LCP element was a dashboard stats card: a <div> with numbers pulled from three separate API calls. The browser was waiting for all three before it could paint anything meaningful in the viewport. The hero image was fine. The data was the bottleneck.
Some of the things that killed our LCP had nothing to do with assets:
- A useEffect that fetched critical above-the-fold data client-side instead of server-side
- A "use client" boundary at the page level that forced the entire page to hydrate before any data could render

The fix wasn't clever, but it wasn't just "move to Promise.all" either. First, fetchRevenue had to be decoupled from the orders response: the original call passed orders.period as a dependency, making true parallelization impossible until that contract changed. Once decoupled, we moved all three fetches to the server with Promise.all and streamed secondary content below the fold with Suspense. The dashboard stats rendered in the first paint instead of the third.
// Before: client-side waterfall
const [stats, setStats] = useState(null)

useEffect(() => {
  fetchOrderCount().then(async (orders) => {
    const revenue = await fetchRevenue(orders.period)
    const shipments = await fetchShipments()
    setStats({ orders, revenue, shipments })
  })
}, [])

// After: server-side parallel fetch
// Note: refactored fetchRevenue to accept date range from URL params instead of chaining off orders
async function DashboardStats() {
  const [orders, revenue, shipments] = await Promise.all([
    fetchOrderCount(),
    fetchRevenue(),
    fetchShipments(),
  ])
  return <StatsCard orders={orders} revenue={revenue} shipments={shipments} />
}
The useEffect version waited sequentially for three round trips. The server version waited for the slowest of three parallel requests, and the result arrived in the initial HTML payload, not after hydration.
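The arithmetic is worth making explicit. With made-up latencies for the three calls (illustrative numbers, not measurements), the waterfall pays the sum while the parallel version pays only the max:

```javascript
// Hypothetical per-request latencies in ms — illustrative, not measured.
const latencies = { orders: 180, revenue: 220, shipments: 150 }

// Sequential waterfall: each request starts after the previous one finishes.
const waterfallMs = Object.values(latencies).reduce((sum, ms) => sum + ms, 0)

// Promise.all: all three start together; you wait only for the slowest.
const parallelMs = Math.max(...Object.values(latencies))
```

Here that's 550ms versus 220ms before any rendering cost, and the gap grows with every additional dependent request.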
"use client" boundary that's silently inflating your bundleThe most common hidden LCP regression in Next.js App Router apps: a page-level "use client" that got added for one interactive element and never revisited. Teams added it for a dropdown, a toast, a modal. The entire component tree beneath that boundary ships as client JavaScript and hydrates before rendering.
Push the boundary down to the smallest component that actually needs interactivity. A search input is a client component. The page layout, the data table, the navigation: those don't need to be. If Server Components and the App Router are still coming together for you, mastering React and Next.js in 2026 has the right sequence to build that intuition before performance work makes sense.
// Before: entire page is a client component
"use client"

export default function OrdersPage() {
  // 800 lines of component, all shipped to client
}

// After: only the search is a client component
export default async function OrdersPage() {
  const orders = await fetchOrders()
  return (
    <main>
      <OrderSearch /> {/* "use client" */}
      <OrderTable orders={orders} /> {/* server component, no JS shipped */}
    </main>
  )
}
Moving to proper island architecture cut the client JavaScript bundle by roughly 35%. That reduced Time to Interactive directly, which improved INP scores as well. If you're still working out which components belong on the server vs. the client, React Hooks, TypeScript 2026 covers the hook and component architecture patterns that make these boundaries easier to enforce consistently.
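To verify a reduction like that rather than guess at it, @next/bundle-analyzer (a first-party package) gives you a treemap of what each boundary pulls into the client bundle. A minimal config sketch:

```javascript
// next.config.js — minimal sketch. Run `ANALYZE=true next build` to get a
// treemap showing what each "use client" boundary adds to the client bundle.
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
})

module.exports = withBundleAnalyzer({
  // your existing Next.js config goes here
})
```

Run it before and after moving a boundary; the treemap makes regressions obvious in code review screenshots.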
The API waterfall and boundary fixes got us most of the way. The last LCP gains came from images — and not the fixes you'd expect.
The priority attribute is probably doing more harm than good

If you're already using next/image, the remaining gains are in details most teams skip.
The biggest remaining issue was priority abuse. Some teams (including ours) mark multiple images as priority to ensure they preload. The problem: priority adds a <link rel="preload"> tag for each image. With three or four of them, the browser competes for bandwidth on resources it doesn't all need immediately.
One priority={true}, on the LCP candidate. Everything else loads lazily.
The second issue was missing sizes on responsive images. Without sizes, Next.js generates a srcset but the browser defaults to 100vw as the assumed display width. On a 375px Retina screen at 2x DPR, that targets a 750px image, which for a narrow content column can be 2–3× more than necessary.
<Image
  src="/hero.webp"
  alt="Dashboard preview"
  width={1200}
  height={630}
  priority
  sizes="(max-width: 768px) 100vw, (max-width: 1200px) 50vw, 800px"
/>
The sizes attribute tells the browser exactly which image to download at each viewport width. On mobile, this alone can save hundreds of kilobytes per page load.
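The saving is simple arithmetic: the browser multiplies the CSS width implied by sizes by the device pixel ratio to decide which source to fetch. A simplified sketch of that selection (real browsers then snap to the nearest width actually present in the srcset):

```javascript
// Ideal source width = CSS pixels the image occupies × device pixel ratio.
// displayedFraction is what the sizes attribute communicates: the share of
// the viewport the image occupies (1 for the 100vw default).
function targetImageWidth(viewportPx, dpr, displayedFraction) {
  return Math.ceil(viewportPx * displayedFraction * dpr)
}
```

On a 375px viewport at 2x DPR, the 100vw default asks for a 750px source; a sizes value reflecting a half-width column asks for 375px, roughly a quarter of the pixels.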
CLS is deceptively hard to debug because it often doesn't appear in Lighthouse. Lighthouse measures CLS on a simulated load with a clean cache. Real CLS happens after that: when web fonts swap in, and when late-arriving dynamic content pushes the layout around.
Our CLS was 0.18. In practice, vendors saw the order table jump down when the pagination bar loaded. A small thing. It happened on every page visit. Multiply that by 30 visits a day per vendor across hundreds of vendors and it's a constant source of friction that nobody files a bug report about. Nobody files a bug that says "the page jumped." They just quietly stop using the product.
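The scoring model explains why a "small thing" adds up to 0.18. Each shift contributes impact fraction times distance fraction. A simplified sketch, assuming the shifted element spans the full viewport width and using illustrative numbers for the pagination jump:

```javascript
// Simplified layout-shift score: impact fraction × distance fraction.
// Assumes the element spans the full viewport width, so only heights matter.
// All inputs in CSS pixels.
function shiftScore(viewportH, elementH, shiftPx) {
  // Impact fraction: viewport area touched by the element before and after
  // the move (its height plus the distance travelled, capped at the viewport).
  const impact = Math.min(elementH + shiftPx, viewportH) / viewportH
  // Distance fraction: how far it moved, relative to the viewport.
  const distance = shiftPx / viewportH
  return impact * distance
}
```

A 600px table dropping 48px in an 800px viewport scores about 0.049 from that single shift; a couple of those per page and you're past the 0.1 threshold.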
Fallback fonts have different metrics than your custom font. When Inter loads, text that was rendering in Arial reflows to match Inter's line height, letter spacing, and word spacing. Paragraphs shift. Buttons resize. That's your CLS.
next/font handles the loading. The real fix is font metric override: CSS descriptors that make your fallback font match your custom font's dimensions closely enough that the reflow is imperceptible.
import { Inter } from "next/font/google"

const inter = Inter({
  subsets: ["latin"],
  display: "swap",
  fallback: ["system-ui", "Arial"],
  adjustFontFallback: true, // Next.js calculates override metrics automatically
})
adjustFontFallback: true generates size-adjust, ascent-override, descent-override, and line-gap-override for the fallback. The visual difference between fallback and loaded font becomes small enough that layout doesn't shift meaningfully.
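Under the hood, the override is a synthetic @font-face with plain CSS descriptors. The descriptor names below are real; the numeric values are illustrative, not the exact ones Next.js computes for Inter:

```css
/* Sketch of what adjustFontFallback generates. Descriptor names are real;
   the percentages here are illustrative placeholders. */
@font-face {
  font-family: "Inter Fallback";
  src: local("Arial");
  size-adjust: 107%;     /* scale Arial's glyphs toward Inter's widths */
  ascent-override: 90%;  /* match Inter's ascent metric */
  descent-override: 22%; /* match Inter's descent metric */
  line-gap-override: 0%; /* drop Arial's extra line gap */
}
```

The fallback stack then references "Inter Fallback" instead of raw Arial, so text rendered before the web font arrives already occupies nearly the same box.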
The hardest CLS to fix is from content whose size you don't know yet: banners, notification bars, data-driven cards, ad slots. The naive solution is to avoid adding things dynamically. The real solution is to reserve space.
For fixed-height elements like banners, use a min-height wrapper even when the content is empty:
<div style={{ minHeight: "48px" }}>{banner && <Banner message={banner.message} />}</div>
For data-driven content where you don't know the final height, skeleton loaders with accurate proportions are better than no loaders. A skeleton that's 80px tall and content that's 120px tall still causes a shift.
The largest CLS contributor was the order stats row: four cards that loaded with real data after the page rendered. Each card had a different final height depending on the number inside. We fixed it by setting a fixed card height and truncating overflowing numbers, then exposing a tooltip for the full value. CLS went from 0.18 to 0.04. The page stopped moving.
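The truncation itself was mundane: a formatter that keeps every card's content one line tall regardless of magnitude, with the full value in a tooltip. A sketch (helper name hypothetical):

```javascript
// Abbreviate large values so a stats card never wraps to a second line.
// The untruncated number goes into a title/tooltip attribute instead.
function abbreviate(n) {
  if (n >= 1_000_000) return (n / 1_000_000).toFixed(1).replace(/\.0$/, '') + 'M'
  if (n >= 1_000) return (n / 1_000).toFixed(1).replace(/\.0$/, '') + 'k'
  return String(n)
}
```

Pair this with a fixed height on the card container and the card's box no longer depends on the data inside it.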
Performance regresses because improvements and code changes happen in different places. Lighthouse scores feel good locally and break in production. You need to measure in both places, for different reasons.
Locally, Lighthouse tells you what's theoretically possible. Production RUM (real user monitoring) tells you what's actually happening.
For production monitoring, the Web Vitals JS library piped into your analytics is the minimum viable setup:
import { onLCP, onINP, onCLS } from "web-vitals"
import type { Metric } from "web-vitals"

function sendToAnalytics(metric: Metric) {
  const payload = JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating, // "good", "needs-improvement", "poor"
    page: window.location.pathname,
  })
  // Quick start: navigator.sendBeacon("/api/vitals", payload), which sends as text/plain
  navigator.sendBeacon("/api/vitals", new Blob([payload], { type: "application/json" }))
}

onLCP(sendToAnalytics)
onINP(sendToAnalytics)
onCLS(sendToAnalytics)
One caveat: fire this on a sample of sessions (10–20%) rather than every user, or you'll flood your analytics endpoint on high-traffic pages. Add a Math.random() < 0.1 guard around the beacon call in production.
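Decide the sample once per page load rather than per metric, so a sampled session reports its complete set of vitals instead of a random subset. A sketch (SAMPLE_RATE and the injectable random source are assumptions for illustration and testability):

```javascript
// Session-level sampling: one coin flip per page load, reused for every
// metric, so sampled sessions report a complete set of vitals.
const SAMPLE_RATE = 0.1 // 10% of sessions — tune to your traffic

function makeSampler(rate, random = Math.random) {
  const sampled = random() < rate // decided once, at setup time
  return (send) => (metric) => {
    if (sampled) send(metric)
  }
}

// Usage: const guard = makeSampler(SAMPLE_RATE)
// onLCP(guard(sendToAnalytics)); onINP(guard(sendToAnalytics)); ...
```

The alternative, flipping the coin inside each callback, can report a session's CLS but not its LCP, which makes per-session analysis impossible.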
This gives you per-page breakdown. You want to know that your homepage LCP is 1.8s but your order history page is 3.4s, because those are different problems with different fixes.
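Per-page slicing matters because Core Web Vitals are judged at the 75th percentile of page loads, not the average. A minimal nearest-rank p75 over whatever your collection endpoint has stored (endpoint and storage left unspecified, as in the snippet above):

```javascript
// 75th percentile (nearest-rank method) of collected metric values.
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b)
  // Nearest-rank index for the 75th percentile.
  const idx = Math.ceil(0.75 * sorted.length) - 1
  return sorted[idx]
}
```

Compute this per page path: an average blends your fast homepage with your slow order-history page and hides exactly the problem you need to see.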
The second piece is a performance budget in CI. Not a hard block (that creates friction), but a warning that goes to Slack or fails a check:
// lighthouserc.js
module.exports = {
  ci: {
    assert: {
      assertions: {
        "largest-contentful-paint": ["warn", { maxNumericValue: 2500 }],
        "cumulative-layout-shift": ["error", { maxNumericValue: 0.1 }],
        "total-blocking-time": ["warn", { maxNumericValue: 300 }],
      },
    },
  },
}
Make CLS a hard error. Make LCP a warning. Layout stability is non-negotiable. Load time is something to improve over time.
The budget and the monitoring tell you where you are. The process is what actually moves the number.
Single performance fixes don't compound. A workflow does. It doesn't need to be elaborate.
The loop we settled on: the CI budget runs on every PR, the RUM numbers get a regular look, and if a change introduces a "use client" boundary at a high level, it gets flagged.

That's it. No performance sprints. No big-bang optimization projects. Small, consistent, measured.
In six months of running this loop, we went from a team that did occasional performance "fixes" to a team where performance kept improving passively because the worst regressions never made it to production. For the broader Next.js and React patterns that make this kind of iteration sustainable at scale, building scalable web apps in 2026 is a useful companion read.
The 40% load time improvement, 74% repeat order increase, 12% AOV growth: I want to be honest about attribution. We also redesigned the portal, improved mobile layouts, and fixed navigation architecture in the same period. Performance wasn't the only variable.
The metric I'm most proud of didn't show up in Lighthouse at all: support tickets from vendors dropped to near zero. That wasn't purely a performance win. The React migration cleaned up brittle UI, and rethinking the information architecture meant vendors could find what they needed without calling support. But a fast, stable UI that doesn't jump around removes an entire category of frustration before it becomes a ticket. CLS at 0.18 means vendors watch content shift on every page load. That's not a bug they can articulate. It just makes the product feel broken in a way they can't explain.
The 23% first-session drop-off is the number I'll stake a claim on. When your LCP is 3.2 seconds, a quarter of your users have decided to close the tab before they've seen a single piece of your UI. When it drops to 1.9 seconds, those people stay. What they do once they stay is a product problem, not a performance problem.
Find your equivalent of the 23% number. It's in your analytics: session duration by page load time bucket, conversion rate by connection speed, bounce rate on your heaviest pages. The data is there. Use it to make the argument, because performance is a product problem disguised as a technical one, and nobody funds a Lighthouse score.
That Notion doc still exists. It's mostly empty now.
If your team has a performance number that's been sitting in a backlog for too long, I work with engineering teams on Next.js performance, architecture, and the delivery practices that make improvements stick. Browse my projects or reach out to talk through your situation.
next/core-web-vitals is an ESLint configuration preset bundled with Next.js (via eslint-config-next). It extends the base next config and promotes to errors the rules that flag patterns known to hurt Core Web Vitals — things like synchronous external scripts, and plain <img> or <a> elements where next/image and next/link should be used. Enable it in .eslintrc.json with: { "extends": ["next/core-web-vitals"] }. It's the recommended starting config for any Next.js project where performance matters.
ESLint v9 replaced the legacy extends syntax with flat config. To use next/core-web-vitals in flat config, install @eslint/eslintrc and use FlatCompat: import { FlatCompat } from '@eslint/eslintrc'; const compat = new FlatCompat({ baseDirectory: import.meta.dirname }); export default [...compat.extends('next/core-web-vitals')]; This wraps the legacy preset for compatibility with ESLint v9's flat config format. Next.js 15+ ships with a flat config-compatible eslint-config-next, so check your version before reaching for FlatCompat.
The key checkpoints before shipping: (1) Confirm the LCP element in Chrome DevTools — it's often a data-driven component, not an image. (2) Move above-the-fold data fetches to the server with Promise.all and remove client-side useEffect waterfalls. (3) Push 'use client' boundaries to the smallest component that needs interactivity. (4) Set priority on exactly one image — your LCP candidate — and add accurate sizes attributes to all responsive images. (5) Enable adjustFontFallback: true in next/font to prevent CLS from font swaps. (6) Reserve fixed height for dynamic content like banners. (7) Add a Lighthouse CI budget with CLS as a hard error and LCP as a warning.
Because LCP in a data-heavy Next.js app is often not caused by images. Open Chrome DevTools, run a performance trace, and check what the LCP element actually is. In most App Router apps with real data, it's a stats card, a table, or a heading that depends on a client-side fetch. The fix is to move that fetch to a Server Component so the data arrives in the initial HTML payload instead of after hydration. Sequential useEffect fetches that each wait for the previous response are the single most common cause of high LCP in Next.js apps that have already done the standard image optimization.
Use next/font/google with display: 'swap' and set adjustFontFallback: true. This generates CSS font metric overrides (size-adjust, ascent-override, descent-override, line-gap-override) that make your fallback font match the dimensions of your custom font. When the custom font loads, the reflow is imperceptible and CLS stays near zero. Without adjustFontFallback, even a correctly loaded font causes layout shift because Arial and Inter have different line heights and letter spacing — the page reflowing from one to the other counts as CLS.
next/image automates three things that matter for Core Web Vitals: it generates a srcset with multiple sizes so browsers download the right image for their viewport, it lazy-loads images below the fold by default to reduce initial page weight, and it prevents CLS by requiring width and height props so the browser reserves space before the image loads. The one thing teams get wrong: marking multiple images with priority={true}. Each priority image adds a <link rel='preload'> tag. Use priority on exactly one image — your LCP candidate. Everything else should load lazily.
Published: Fri Mar 27 2026