TypeScript Strict Mode: Is the Extra Pain Worth It?

Brandon Perfetti

Technical PM + Software Engineer

Topics: TypeScript, Type Safety, Developer Tooling
Tech: Node.js, React, ESLint


Most teams treat TypeScript strict mode like a purity test: either flip every flag in one painful commit and watch the build explode, or never touch strict and accept a steady stream of surprise runtime bugs. That framing is misleading. Strict mode isn’t an ideological hill to die on — it’s a risk-management lever. If you apply it with a plan, you keep delivery velocity while removing entire classes of production failure modes.

This article is a practical, contrarian implementation guide for experienced full‑stack JavaScript developers. It gives concrete migration steps, tradeoffs you must reason about, common pitfalls, and decision criteria so you can choose when and how to pay the cost (and when not to).

Reframing strict: risk control, not purity

Strict mode is a tool for making tradeoffs explicit. Turning it on subjects your implicit assumptions to the compiler's scrutiny: "this function never receives undefined," "API responses always contain this field," and "this library never returns any."

Each strict flag has a cost (developer time) and a benefit (fewer runtime surprises). Your job is to choose an order and process that convert the most dangerous implicit assumptions into explicit, verifiable contracts with the smallest upfront delivery cost.

Decision criteria to consider:

  • How often do runtime null/undefined bugs occur in your codebase? If you’re debugging a steady stream of null-pointer-like failures, strictNullChecks is high ROI.
  • Which packages are highly depended on by other teams? Tightening shared libraries yields downstream benefits quickly.
  • How big is the codebase? A monorepo with hundreds of packages needs a per-package plan; a single small service can flip stricter faster.

Treat strictification like debt repayment with prioritization, not a one-time purity ritual.

What "strict" actually turns on — and how to choose flags

The top-level "strict": true flips a set of flags. You should think about each flag as an independent contract change.

Core flags to evaluate, in descending order of typical impact:

  • noImplicitAny
  • strictNullChecks
  • noUncheckedIndexedAccess
  • exactOptionalPropertyTypes
  • noImplicitOverride
  • strictBindCallApply
  • noImplicitReturns

Each flag moves an assumption into the typechecker. For example, strictNullChecks forces you to model the possibility of null/undefined explicitly; noImplicitAny forces explicit boundary typing.
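As a concrete sketch (the names here are illustrative): with strictNullChecks off, dereferencing a Map.get result compiles and can crash at runtime; with it on, the compiler forces you to handle the undefined case before the dereference.

```typescript
interface User {
  id: string;
  email: string;
}

const users = new Map<string, User>();
users.set("u1", { id: "u1", email: "ada@example.com" });

// With strictNullChecks off, `users.get(id).email` compiles and can throw
// at runtime. With the flag on, Map.get returns User | undefined, so the
// compiler requires a narrowing step first.
function getEmail(id: string): string | null {
  const user = users.get(id); // User | undefined under strictNullChecks
  return user ? user.email : null;
}
```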

Decision criteria for enabling a flag:

  • Benefit-per-fix: does this flag expose frequent classes of bugs we actually see in production?
  • Fix effort: is fixing exposed errors typically mechanical or does it require large design changes?
  • Cross-package surface area: does this flag affect many files or only a few boundaries?

You can and should enable flags incrementally. Once those incremental flags are on and your baseline is clean, setting "strict": true is mainly administrative.

Measure before you flip: create a migration backlog

When you run into strictness friction, resist the urge to flick every flag on or off. Instead, measure. Use the TypeScript compiler in noEmit mode to snapshot current error surface and prioritize work.

Example command to run locally or in CI:

tsc --noEmit --pretty false

If you capture the output into a file, you can group by file and error signature to find the high-impact fixes. Metrics to collect:

  • error count per package/folder
  • most common error messages (e.g., "Object is possibly 'undefined'")
  • number of files with implicit any (noImplicitAny violations)

Turn that data into a migration backlog sorted by risk and frequency, not by novelty. A small number of files will typically account for most of the errors.
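One way to turn the raw compiler output into those metrics is a small script. This sketch assumes the default `--pretty false` line format (e.g. `src/foo.ts(12,5): error TS2532: …`) and counts errors per file:

```typescript
// Count errors per file from `tsc --noEmit --pretty false` output so the
// noisiest files can be prioritized in the migration backlog.
function countErrorsByFile(tscOutput: string): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const line of tscOutput.split("\n")) {
    const match = line.match(/^(.+?)\(\d+,\d+\): error TS\d+:/);
    if (match) {
      const file = match[1];
      counts[file] = (counts[file] ?? 0) + 1;
    }
  }
  return counts;
}
```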

Don’t gate on a perfect baseline. Gate on trend: errors should decline over time and newly modified files should be fixed to the new standard.

Incremental strategy and recommended order (with rationale)

A recommended, pragmatic rollout order that balances benefit and fix effort:

  1. noImplicitAny — find untyped boundaries and force explicit contracts.
  2. strictNullChecks — eliminate the most common runtime surprises.
  3. strictBindCallApply and noImplicitReturns — tighten function contracts.
  4. noUncheckedIndexedAccess — make collection indexing safer.
  5. exactOptionalPropertyTypes and noImplicitOverride — harden API shape modelling and class hierarchy behavior.

Why this order? noImplicitAny exposes untyped surfaces where developers are already relying on unstated assumptions. strictNullChecks then forces correct modeling of those surfaces. The index/optional flags are harder to fix, but become much easier after you’ve cleaned up the core contracts.
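To get a feel for what the index flag changes, here is a minimal sketch: under noUncheckedIndexedAccess, an array index expression has type T | undefined, so the out-of-range case must be handled explicitly.

```typescript
// Under noUncheckedIndexedAccess, parts[i] has type string | undefined,
// so this function must supply a fallback instead of assuming a hit.
function lastSegment(path: string): string {
  const parts = path.split("/");
  const last = parts[parts.length - 1]; // string | undefined with the flag on
  return last ?? "";
}
```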

Example tsconfig snippet for per-package staged flags:

{
  "compilerOptions": {
    "noImplicitAny": true,
    "skipLibCheck": true,
    "strictNullChecks": false
  }
}

Enable noImplicitAny first in high-leverage packages, then enable strictNullChecks once calling sites stabilize.

Tradeoff: enabling noImplicitAny widely may create a lot of quick-to-fix but tedious annotation work. Doing nothing, however, lets any values keep spreading through the codebase, which makes every other strict flag less effective.

Practical refactors and patterns: avoid the common shortcuts

You will be tempted to silence the compiler. Don’t. There are safer patterns.

Prefer unknown at boundaries, then narrow:

// Bad: any is contagious and eliminates checks downstream.
function parseJson(payload: any) {
  return payload.user.id;
}

// Better: unknown forces you to make explicit checks before use.
function parseJson(payload: unknown) {
  if (typeof payload !== "object" || payload === null) {
    throw new Error("invalid payload");
  }
  const p = payload as Record<string, unknown>;
  if (typeof p.user !== "object" || p.user === null) {
    throw new Error("missing user");
  }
  return (p.user as { id: string }).id;
}

Use type guards and small, testable validators for complex shapes. Runtime validation libraries like Zod or io-ts convert runtime checks into TypeScript types and reduce assertion usage.

Example Zod pattern:

import { z } from "zod";

const UserSchema = z.object({
  id: z.string(),
  email: z.string().email()
});

type User = z.infer<typeof UserSchema>;

function parseUser(payload: unknown): User {
  return UserSchema.parse(payload); // throws or returns typed User
}

Tradeoffs: runtime validation adds CPU/time cost for parsing and validation. Use it where you accept untrusted inputs: network, database, external queues. For internal-only shaped data (e.g., values created by your code), prefer type narrowing and invariants enforced at creation sites.

Pitfall to avoid: replacing compiler errors with non-verifying type assertions, e.g., payload as User. Assertions hide problems; validators surface them.
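A user-defined type guard is the verifying alternative: the compiler narrows the type only because the checks actually run at runtime. A minimal sketch (the User shape is illustrative):

```typescript
interface User {
  id: string;
  email: string;
}

// A type predicate: the compiler trusts the `value is User` claim because
// the function performs real runtime checks before returning true.
function isUser(value: unknown): value is User {
  if (typeof value !== "object" || value === null) return false;
  const candidate = value as Record<string, unknown>;
  return typeof candidate.id === "string" && typeof candidate.email === "string";
}

function requireUser(payload: unknown): User {
  if (!isUser(payload)) {
    throw new Error("payload is not a User");
  }
  return payload; // narrowed to User; no unverified assertion needed
}
```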

React codebases: where strict bites—and where it pays off most

React projects see the most friction in three areas:

  • Props declared too loosely, especially callback shapes and generics
  • Imperative event handlers typed as any
  • Async data modeled as booleans or nulls without discrimination

Model async state with discriminated unions so JSX rendering benefits from exhaustiveness checks:

type UserState =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: { id: string; email: string } }
  | { status: "error"; message: string };

function UserView(props: { state: UserState }) {
  switch (props.state.status) {
    case "idle":
      return <div>Not loaded</div>;
    case "loading":
      return <div>Loading…</div>;
    case "success":
      return <div>{props.state.data.email}</div>;
    case "error":
      return <div>{props.state.message}</div>;
  }
}

This pattern forces you to model the real lifecycle and eliminates undefined access in render.

Event handler typings commonly start lax:

// Bad
function handleChange(e: any) { /* ... */ }

// Good
function handleChange(e: React.ChangeEvent<HTMLInputElement>) {
  const value = e.currentTarget.value;
  // ...
}

Tradeoffs and choices:

  • Strict props help component consumers early, but they can break tests or storybook usage until fixtures are typed.
  • If your UI uses many third-party libs with poor types, you may need local declaration files or wrapper layers. Prefer shim wrappers that expose typed interfaces rather than polluting libs with global ambient declarations.
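A sketch of the wrapper approach, assuming a hypothetical untyped dependency (the `legacyCharts` constant below stands in for an import with no type definitions): the any stays confined to one module, and consumers see only the typed facade.

```typescript
// Stand-in for an untyped third-party import, e.g.
//   import legacyCharts from "legacy-charts"; // hypothetical, no @types
const legacyCharts: any = {
  render: (_config: unknown) => ({ ok: true }),
};

// The typed facade: the only file where the library's `any` is visible.
export interface ChartConfig {
  title: string;
  values: number[];
}

export function renderChart(config: ChartConfig): { ok: boolean } {
  const result = legacyCharts.render(config);
  return { ok: result?.ok === true };
}
```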

Pitfall: overtyping generic components without clear boundaries can create maintenance burdens. For widely consumed UI primitives, invest the time to get types right; for app-specific components, keep boundaries coarser initially.

Monorepos and package-by-package rollout

Large repos need surgical changes. Flip flags package-by-package rather than repository-wide. The pattern that works:

  • Put shared base options in a root tsconfig.base.json.
  • Per-package tsconfig extends the base and enables stricter flags as the package is ready.
  • Start with packages that offer the highest downstream benefit: shared domain libraries, API clients, and core backend services.

Example per-package tsconfig:

{
  "extends": "../../tsconfig.base.json",
  "compilerOptions": {
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}

Monorepo tradeoffs:

  • Benefit: downstream packages get stronger guarantees from typed dependencies.
  • Cost: package boundaries become blockers if you allow cross-package implicit any usage. Introduce a migration window and require PRs that touch a package to fix its errors.
  • Use project references if full incremental typechecking is slow. Project references reduce repeated work for large graphs but increase build configuration complexity.
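If you adopt project references, a root solution-style tsconfig might look like this (package paths are illustrative; each referenced package must set "composite": true in its own tsconfig):

```json
{
  "files": [],
  "references": [
    { "path": "packages/domain" },
    { "path": "packages/api-client" },
    { "path": "services/api" }
  ]
}
```

Build with tsc --build (tsc -b), which typechecks packages in dependency order and caches results between runs.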

Pitfall: turning on skipLibCheck to speed up CI may hide issues in ambient types. Use skipLibCheck temporarily but audit third-party types sooner rather than later.

CI strategy: two-lane enforcement and guarded rollouts

A pragmatic CI approach avoids freezing the team while making progress:

Lane A — the baseline: continue running the full typecheck but temporarily tolerate existing errors. Track the error count and its trend, and disallow regression by failing the pipeline if new errors are introduced.

Lane B — new code: require that files touched by the PR introduce zero new strict errors. Implement this with a script that lists changed files and runs tsc only on them, or with ESLint rules (e.g. forbidding explicit any and unexplained suppression comments) scoped to changed files.

Example of running a per-PR check block (pseudocode):

CHANGED_FILES=$(git diff --name-only origin/main...HEAD | grep -E '\.tsx?$' || true)
if [ -n "$CHANGED_FILES" ]; then
  # Note: invoking tsc with file arguments ignores tsconfig.json,
  # so pass the flags you want to enforce explicitly.
  tsc --noEmit --strict $CHANGED_FILES
fi

Tradeoffs:

  • This two-lane approach prevents mass-breakage while ensuring forward progress on touched areas.
  • It requires automation (CI scripts) and discipline to keep the baseline shrinking.

Enforce policy via linters:

  • Enforce @typescript-eslint/no-explicit-any on changed files (its fixToUnknown option can auto-rewrite any to unknown).
  • Block @ts-ignore via @typescript-eslint/ban-ts-comment, or require a ticket reference in every suppression comment.
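One concrete way to enforce the suppression policy is @typescript-eslint/ban-ts-comment, which can forbid @ts-ignore outright while requiring a justification on @ts-expect-error (the minimum description length here is illustrative):

```json
{
  "rules": {
    "@typescript-eslint/ban-ts-comment": [
      "error",
      {
        "ts-ignore": true,
        "ts-expect-error": "allow-with-description",
        "minimumDescriptionLength": 10
      }
    ]
  }
}
```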

Pitfall: per-file enforcement plus liberal use of suppression comments creates a migration mess. Every suppression should be tracked and time-boxed.

Common anti-patterns and pitfalls (and concrete alternatives)

  1. Blanket assertions and casting
  • Anti-pattern example: const u = payload as User;
  • Why it hurts: silences the compiler and transfers risk to runtime.
  • Alternative: validate at the boundary (Zod/io-ts) or narrow with type guards.
  2. Overuse of // @ts-nocheck or // @ts-ignore
  • Anti-pattern: file-level ignores that never get removed.
  • Alternative: require a documented ticket and an expiration date. Prefer single-line suppression with comments linking to remediation tasks.
  3. Turning strict features on and then off when the first rush of errors arrives
  • Anti-pattern: flipping flags back to avoid work.
  • Why it’s tempting: immediate velocity relief.
  • Why it backfires: you lose the opportunity to fix classes of bugs and the codebase accumulates hidden traps.
  • Alternative: reduce the scope instead (per-package) and prioritize high-value fixes.
  4. Treating declarations and runtime checks as synonyms
  • Anti-pattern: relying only on type declarations to validate untrusted data.
  • Alternative: always validate at the external boundary and then narrow into typed domain objects.
  5. Ignoring type dependency churn in monorepos
  • Anti-pattern: making a shared type stricter without bumping dependents.
  • Alternative: coordinate changes with semver or adopt internal package compatibility layers, and use build-time errors as an invitation to upgrade.

Decision checklist and a practical 30-day rollout

Use this checklist before setting "strict": true repository-wide.

Must-haves:

  • noImplicitAny enabled in high-leverage packages and a plan to remove remaining errors incrementally
  • strictNullChecks enabled in core domain and backend packages
  • Runtime validation exists for untrusted inputs (HTTP, DB, queues)
  • CI enforces no new type debt on modified files
  • Suppressions are tracked and time-boxed

30-day pragmatic plan (example, adjust to your org):

Week 1 — Baseline and Rules

  • Run tsc --noEmit and collect error metrics.
  • Publish a migration playbook with rules: prefer unknown over any, annotate public API surfaces first, no global @ts-nocheck.

Week 2 — Boundaries and Contract Hygiene

  • Type all external boundaries: API clients, DB models, queue payloads.
  • Enable noImplicitAny in 1–3 highest-impact packages.

Week 3 — Nullability and First Hardening

  • Enable strictNullChecks in those same packages.
  • Replace implicit null usage with unions, Option types, or explicit validation.

Week 4 — Enforce and Expand

  • Add per-PR checks that prevent new errors in changed files.
  • Start turning on other flags package-by-package while tracking metrics.

Decision criteria to flip final switch:

  • Error counts in the baseline are decreasing week-over-week.
  • The team has removed the most common unsound patterns (widespread any, unchecked indexing).
  • CI/automation can maintain the new standard without manual policing.

Final considerations: when strict might not be worth the immediate full cost

Strict mode is almost always worth the long-term benefits, but there are legitimate cases to delay or scope the work:

  • Prototype code that will be discarded inside a week. Spend time delivering, not typing ephemeral code.
  • Glued-together legacy systems where the cost to model every shape is prohibitive; instead, invest in a thin validation adapter between the legacy and new code.
  • Very small teams with no history of type-related incidents and a high cost to slow delivery for now; plan the migration and schedule it, don’t ignore it.

If you delay strict, create a bounded plan with dates and outcomes. Otherwise postponement usually becomes permanent technical debt.

---

Strict mode forces painful clarity. The right migration strategy turns that pain into durable signal: clearer contracts, safer refactors, and fewer runtime surprises. Pay the cost incrementally, measure continuously, and automate enforcement. If you approach strictness as risk management instead of a purity test, the extra effort is not only worth it — it becomes an investment that compounds with every release.