Frontend Development

React Performance Optimization Techniques for 2025

Modern strategies to build blazing-fast React applications. From code splitting to memo patterns, real techniques from building EduFly and GyanSathi LMS.

Harsh Rastogi
Jan 2, 2025 · Updated Jan 15, 2026 · 8 min
React · Performance · Frontend · JavaScript · TypeScript

Performance Matters More Than Ever

Building EduFly — an AI-powered School ERP serving multiple schools — taught me that performance isn't a luxury. When teachers are taking attendance for 500 students on a school-issued tablet over a 3G connection in rural India, every millisecond counts. A 3-second load time means frustrated teachers who go back to paper registers.

These are the techniques I applied building production React apps at Asynq.ai and now at Modelia.ai, where our Shopify extension must render AI-generated style recommendations within the merchant's existing storefront — and it has to be fast or merchants will uninstall.

Measuring Before Optimizing

The first rule of performance optimization: measure first, optimize second. I've seen developers spend days memoizing components that render once. React DevTools Profiler is your best friend:

typescript
// React DevTools profiling works in development by default; this
// alias also enables the profiling build in production bundles.
// In your vite.config.ts:
import { defineConfig } from 'vite';

export default defineConfig({
  resolve: {
    alias: {
      'react-dom': 'react-dom/profiling',
      'scheduler/tracing': 'scheduler/tracing-profiling',
    },
  },
});
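To turn raw Profiler output into a ranked list of offenders, you can feed `<Profiler>`'s `onRender` callback into a small aggregator. This is an illustrative sketch of that idea (the helper name and shape are mine, not a React API):

```typescript
// Aggregate <React.Profiler> onRender timings and rank components
// by total time spent rendering.
function createRenderTracker() {
  const totals = new Map<string, number>();
  return {
    // Pass as the onRender prop: <Profiler id="StudentList" onRender={tracker.record}>
    record(id: string, _phase: string, actualDuration: number) {
      totals.set(id, (totals.get(id) ?? 0) + actualDuration);
    },
    // Top-N components by cumulative render time
    report(top = 3): Array<{ id: string; totalMs: number }> {
      return [...totals.entries()]
        .map(([id, totalMs]) => ({ id, totalMs }))
        .sort((a, b) => b.totalMs - a.totalMs)
        .slice(0, top);
    },
  };
}
```

A report like this is exactly how you discover that three components dominate your render time.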

At EduFly, profiling revealed that 80% of our render time was spent in three places: the student list, the attendance grid, and the grade charts. Fixing just those three components improved perceived performance by 60%.

Code Splitting That Actually Works

The first and most impactful optimization is smart code splitting. With no configuration, most bundlers emit a single JavaScript file — your login page ships the analytics dashboard code too. That's wasted bandwidth.

typescript
import { lazy, Suspense } from 'react';
import { Routes, Route } from 'react-router-dom';

// Route-based splitting — each page loads only when navigated to
const Dashboard = lazy(() => import('./pages/Dashboard'));
const Analytics = lazy(() => import('./pages/Analytics'));
const Reports = lazy(() => import('./pages/Reports'));
const StudentProfile = lazy(() => import('./pages/StudentProfile'));

function App() {
  return (
    <Suspense fallback={<PageSkeleton />}>
      <Routes>
        <Route path="/dashboard" element={<Dashboard />} />
        <Route path="/analytics" element={<Analytics />} />
        <Route path="/reports" element={<Reports />} />
        <Route path="/student/:id" element={<StudentProfile />} />
      </Routes>
    </Suspense>
  );
}

For EduFly, this reduced our initial bundle from 1.2MB to 340KB — a 70% reduction that dramatically improved load times on the low-bandwidth connections common in Indian schools.
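Route-level `lazy()` pairs well with telling the bundler how to group vendor code, so rarely-changing dependencies stay in long-cacheable chunks. A sketch of the Vite/Rollup config this might look like (the chunk names and library list are illustrative, not EduFly's actual config):

```typescript
// vite.config.ts — keep rarely-changing vendor code in separate,
// long-cacheable chunks so app deploys don't invalidate them.
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks: {
          'vendor-react': ['react', 'react-dom', 'react-router-dom'],
          'vendor-charts': ['recharts'],
        },
      },
    },
  },
});
```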

Component-Level Splitting

For heavy components that aren't always visible (modals, charts, rich text editors), split at the component level:

typescript
// The chart library is 200KB — don't load it until the user clicks "Analytics"
const GradeChart = lazy(() => import('./components/GradeChart'));
const AttendanceHeatmap = lazy(() => import('./components/AttendanceHeatmap'));

function AnalyticsSection({ showCharts, onToggle, gradeData, attendanceData }: AnalyticsSectionProps) {
  if (!showCharts) return <button onClick={onToggle}>Show Analytics</button>;

  return (
    <Suspense fallback={<ChartSkeleton />}>
      <GradeChart data={gradeData} />
      <AttendanceHeatmap data={attendanceData} />
    </Suspense>
  );
}
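A useful follow-up is to start downloading the chunk before the user clicks, for example on hover. The dedupe helper below is my own sketch (not from the EduFly codebase); passing the same loader to both `lazy()` and an `onMouseEnter` handler guarantees hover and render share a single request:

```typescript
// Run an async loader at most once; later calls reuse the same promise.
function preloadOnce<T>(loader: () => Promise<T>): () => Promise<T> {
  let pending: Promise<T> | null = null;
  return () => (pending ??= loader());
}

// const loadGradeChart = preloadOnce(() => import('./components/GradeChart'));
// const GradeChart = lazy(loadGradeChart);
// In JSX: <button onMouseEnter={() => loadGradeChart()}>Show Analytics</button>
```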

Memoization: When and Where

Not everything needs useMemo or React.memo. Over-memoization adds complexity without benefit and can actually hurt performance (the memoization comparison itself has a cost). Here's my decision framework from building apps at Modelia.ai and Asynq.ai:

Use React.memo When:

  • Component re-renders frequently with the same props (parent state changes often)
  • Component is expensive to render (complex DOM, lots of children)
  • Component is far from the state change in the tree

Skip Memoization When:

  • Props change on every render anyway (new object/array references)
  • Component is simple (a button, a label, a divider)
  • You're optimizing before you've measured

typescript
// GOOD: This component renders 500 rows, memoize it
const StudentRow = React.memo(({ student, onSelect }: StudentRowProps) => {
  return (
    <tr onClick={() => onSelect(student.id)}>
      <td>{student.name}</td>
      <td>{student.grade}</td>
      <td>{student.attendance}%</td>
    </tr>
  );
});

// BAD: This creates a new function on every render, defeating memo
function StudentList({ students }: { students: Student[] }) {
  return students.map(s => (
    // onSelect is a new arrow function every render!
    <StudentRow key={s.id} student={s} onSelect={(id) => navigate(id)} />
  ));
}

// GOOD: Stabilize the callback with useCallback
function StudentList({ students }: { students: Student[] }) {
  const handleSelect = useCallback((id: string) => navigate(id), [navigate]);
  return students.map(s => (
    <StudentRow key={s.id} student={s} onSelect={handleSelect} />
  ));
}
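`useMemo` earns its keep for derived data: re-filtering and re-sorting 500 students on every keystroke-driven re-render is wasted work. Keeping the derivation pure makes it both testable and trivial to memoize (the `Student` shape here is simplified for illustration):

```typescript
type Student = { id: string; name: string; grade: string };

// Pure derivation: easy to unit-test, easy to wrap in useMemo
function visibleStudents(students: Student[], query: string): Student[] {
  const q = query.trim().toLowerCase();
  return students
    .filter((s) => s.name.toLowerCase().includes(q))
    .sort((a, b) => a.name.localeCompare(b.name));
}

// In the component: recomputes only when students or query change
// const visible = useMemo(() => visibleStudents(students, query), [students, query]);
```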

Virtual Lists for Large Datasets

GyanSathi LMS serves 15,000+ students across multiple districts. Rendering all student records in a single list would create 15,000 DOM nodes, enough to exhaust memory on a low-end tablet and make the page unresponsive.

Virtualization renders only the visible rows (typically 20-30) and swaps them as the user scrolls:

typescript
import { useVirtualizer } from '@tanstack/react-virtual';
import { useRef } from 'react';

function StudentList({ students }: { students: Student[] }) {
  const parentRef = useRef<HTMLDivElement>(null);

  const virtualizer = useVirtualizer({
    count: students.length,
    getScrollElement: () => parentRef.current,
    estimateSize: () => 60,
    overscan: 10, // Render 10 extra rows above/below viewport
  });

  return (
    <div ref={parentRef} style={{ height: '600px', overflow: 'auto' }}>
      <div style={{ height: virtualizer.getTotalSize(), position: 'relative' }}>
        {virtualizer.getVirtualItems().map((virtualRow) => {
          const student = students[virtualRow.index];
          return (
            <div
              key={student.id}
              style={{
                position: 'absolute',
                top: virtualRow.start,
                height: virtualRow.size,
                width: '100%',
              }}
            >
              <StudentRow student={student} />
            </div>
          );
        })}
      </div>
    </div>
  );
}

This renders 15,000 students with only ~40 DOM nodes at any time. Scroll performance is silky smooth even on low-end devices.

State Management at Scale

At Modelia.ai, our Shopify extension has complex state: product data from the Shopify API, AI recommendation results, user preferences, UI state like open panels and selected filters. Putting everything in a single state store causes unnecessary re-renders.

We use a layered approach:

typescript
// Server state — React Query handles caching, refetching, and synchronization
const { data: products } = useQuery({
  queryKey: ['products', merchantId],
  queryFn: () => trpc.product.list.query({ merchantId }),
  staleTime: 5 * 60 * 1000, // 5 minutes
});

// Client state — Zustand for UI state (lightweight, no boilerplate)
const useUIStore = create<UIState>((set) => ({
  selectedFilter: 'all',
  sidebarOpen: false,
  setFilter: (filter) => set({ selectedFilter: filter }),
  toggleSidebar: () => set((s) => ({ sidebarOpen: !s.sidebarOpen })),
}));

// URL state — for shareable state (filters, pagination, search)
const [searchParams, setSearchParams] = useSearchParams();
const page = Number(searchParams.get('page')) || 1;
const category = searchParams.get('category') || 'all';

This separation keeps re-renders minimal because server state updates don't trigger client state re-renders, and URL state changes don't trigger store re-renders.
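One detail that matters here: subscribe to slices, not whole stores. With Zustand, passing a selector means a component re-renders only when its slice changes. The selector itself is a plain function (the `UIState` shape matches the store above):

```typescript
type UIState = { selectedFilter: string; sidebarOpen: boolean };

// A component using this selector ignores sidebar toggles entirely
const selectFilter = (state: UIState) => state.selectedFilter;

// const filter = useUIStore(selectFilter);
// ...instead of destructuring the whole store, which re-renders on any change
```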

Image Optimization

For EduFly's dashboard with student photos, school logos, and document thumbnails, images were our biggest performance bottleneck:

typescript
function OptimizedImage({ src, alt, width, height }: ImageProps) {
  return (
    <picture>
      {/* AVIF first: browsers use the first <source> they support */}
      <source srcSet={`${src}?format=avif&w=${width}`} type="image/avif" />
      <source srcSet={`${src}?format=webp&w=${width}`} type="image/webp" />
      <img
        src={`${src}?w=${width}`}
        alt={alt}
        width={width}
        height={height}
        loading="lazy"
        decoding="async"
        style={{ contentVisibility: 'auto' }}
      />
    </picture>
  );
}

Combined with AWS CloudFront CDN for delivery, our image load times dropped by 75%.

Debouncing and Throttling

At Asynq.ai, the candidate search field fires on every keystroke. Without debouncing, typing "software engineer" triggers 17 API calls. With a 300ms debounce, it triggers 2:

typescript
import { useEffect, useState } from 'react';

function useDebounce<T>(value: T, delay: number): T {
  const [debouncedValue, setDebouncedValue] = useState(value);

  useEffect(() => {
    const timer = setTimeout(() => setDebouncedValue(value), delay);
    return () => clearTimeout(timer);
  }, [value, delay]);

  return debouncedValue;
}

// Usage in search component
function CandidateSearch() {
  const [query, setQuery] = useState('');
  const debouncedQuery = useDebounce(query, 300);

  const { data } = useQuery({
    queryKey: ['candidates', debouncedQuery],
    queryFn: () => trpc.candidate.search.query({ q: debouncedQuery }),
    enabled: debouncedQuery.length > 2,
  });

  return <input value={query} onChange={(e) => setQuery(e.target.value)} />;
}
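Throttling is the complementary tool for continuous events (scroll, resize, drag), where you want at most one call per interval rather than one call after the input settles. A minimal leading-edge throttle looks like this (my own sketch, not a snippet from our codebase):

```typescript
// Leading-edge throttle: invoke immediately, then ignore further
// calls until `interval` milliseconds have passed.
function throttle<A extends unknown[]>(
  fn: (...args: A) => void,
  interval: number,
): (...args: A) => void {
  let last = 0;
  return (...args: A) => {
    const now = Date.now();
    if (now - last >= interval) {
      last = now;
      fn(...args);
    }
  };
}

// e.g. window.addEventListener('scroll', throttle(updateStickyHeader, 100));
```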

Web Vitals Monitoring

We track Core Web Vitals in production to catch regressions before users complain:

typescript
import { onCLS, onINP, onLCP, type Metric } from 'web-vitals';

function reportWebVital(metric: Metric) {
  // Send to your analytics endpoint; keepalive lets the request
  // survive page unload
  fetch('/api/analytics/web-vitals', {
    method: 'POST',
    keepalive: true,
    body: JSON.stringify({
      name: metric.name,
      value: metric.value,
      delta: metric.delta,
      id: metric.id,
    }),
  });
}

onCLS(reportWebVital);
onINP(reportWebVital);
onLCP(reportWebVital);

Key Takeaways

  • Measure before optimizing — Use React DevTools Profiler to find actual bottlenecks, not assumed ones
  • Code splitting gives the biggest initial performance win — Route-level splitting reduced EduFly's bundle by 70%
  • Virtual lists are essential for any list over 100 items — We render 15,000 students with 40 DOM nodes
  • Separate server state from client state — React Query + Zustand + URL state keeps re-renders minimal
  • Memoize strategically — Only memoize expensive components with stable props
  • Debounce user input — A 300ms debounce on search reduced API calls by 85% at Asynq.ai
  • Optimize images — WebP/AVIF with lazy loading and CDN delivery
  • Monitor Web Vitals in production — Track LCP, CLS, and INP to catch regressions early

Harsh Rastogi
Full Stack Engineer

Full Stack Engineer building production AI systems at Modelia. Previously at Asynq and Bharat Electronics Limited. Published researcher.