Developer · 8 min read

The Technical Stack Behind Tanvrit: How We Build at Scale

We build in public. Here's an honest look at every layer of Tanvrit's technical architecture — what we chose, why, and what we'd do differently.

Engineering Team
15 February 2025 · Engineering

We build in public where we can. That means being honest about the technical decisions behind Tanvrit — what we chose, what we rejected, and what we've learned along the way. This is a full walkthrough of the architecture behind tanvrit.com, updated as of early 2025.

We are a small team building multiple products simultaneously. Every infrastructure decision is evaluated against one question: does this give us maximum leverage for minimum operational overhead? We have no desire to manage Kubernetes clusters or tune database connection pools. We want to ship products that work well for our users, with a stack that a team of three can maintain without dedicated DevOps.

Why Next.js App Router (and Why Not the Pages Router)

The tanvrit.com frontend runs on Next.js 15 with the App Router. We migrated from the Pages Router about a year ago and will not go back. The App Router model — where every component is a React Server Component by default, and you opt into client-side interactivity explicitly — aligns well with our use case: mostly static content and tools with isolated client-side computation.

With the Pages Router, every page ships its entire component tree as client-side JavaScript even if 90% of the page is static HTML. The App Router's server component default means that only the interactive parts — tool inputs, state-dependent UI — are included in the JavaScript bundle. The result is significantly smaller bundles and faster initial page loads, especially on the content pages (blog posts, landing pages) where interactivity is minimal.

The App Router also improves nested layouts. Before, sharing a layout between the /tools section and its individual tool pages required either a custom _app.tsx with conditional logic or duplicating the layout component. With the App Router, nested layout.tsx files compose naturally: the tools layout wraps all tool pages, the blog layout wraps all blog posts, and the root layout wraps everything — without a line of conditional rendering.

Static Export: No Server, No Cold Starts

We use Next.js's output: "export" mode, which generates a fully static site at build time. Every page is pre-rendered to HTML, CSS, and JavaScript files. There is no Node.js server running — the entire site is a collection of static files served from a CDN.

This eliminates a class of operational concerns: no server processes to monitor, no cold starts under low traffic, no horizontal scaling required under high traffic, and no server runtime errors in production. The tradeoff is that dynamic server-side rendering is not available — any personalisation or real-time data must happen on the client side. For our use case, this is acceptable: the tools run in the browser anyway, and the content pages are genuinely static.

Dynamic routes (blog post pages, individual tool pages) use generateStaticParams() to enumerate all valid paths at build time. Next.js pre-renders each one. The blog post content lives in TypeScript files in src/content/blog/ rather than a database, which means edits go through the same git workflow as code changes and blog content is included in the static build automatically.
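A minimal sketch of what that looks like for the blog routes — the post list and slugs here are hypothetical stand-ins for the real modules in src/content/blog/:

```typescript
// app/blog/[slug]/page.tsx (sketch) — enumerate every blog path at build time.
// The posts array stands in for importing the real content modules; the slugs
// and titles are illustrative, not actual Tanvrit posts.

interface BlogPostMeta {
  slug: string;
  title: string;
}

const posts: BlogPostMeta[] = [
  { slug: "technical-stack", title: "The Technical Stack Behind Tanvrit" },
  { slug: "building-in-public", title: "Building in Public" },
];

// Next.js calls this at build time and pre-renders one static page per result.
export function generateStaticParams(): Array<{ slug: string }> {
  return posts.map((post) => ({ slug: post.slug }));
}
```

Because the content is imported as code, a new post is just a new entry in the content directory — the next build picks it up with no extra wiring.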

Deployment: Cloudflare Pages and the Edge Network

We deploy to Cloudflare Pages. The static output is pushed to Cloudflare's global edge network — data centres in 300+ cities across 100+ countries — with zero configuration required beyond connecting the GitHub repository. Every push to the main branch triggers a build and deploys the result globally within about 90 seconds. Every pull request gets an automatic preview URL with the full site deployed.

For an India-focused product, the edge network matters concretely. Cloudflare has significant infrastructure in Mumbai, Chennai, Delhi, Bangalore, Kolkata, and Hyderabad. An Indian user is served from the nearest edge node, not from a single US-based origin server. The difference in TTFB (time to first byte) between "static files on a US VPS" and "static files on Cloudflare edge in Mumbai" is 200-400ms on a good connection, and substantially more on a 4G mobile network. For first contentful paint, that difference is visible.

Cloudflare Pages also provides automatic HTTPS, DDoS protection, and HTTP/3 support — all without configuration. The total infrastructure cost for the marketing and tools site is within Cloudflare's free tier.

Material UI 7 and the Dark Theme Architecture

We use Material UI (MUI) version 7 as the component foundation, with a custom dark theme that overrides the default Material Design colour system entirely. The decision to use MUI came down to pragmatism: it has the most comprehensive set of accessible, tested components, the sx prop composes cleanly with our design tokens, and the team already knew it well.

The custom theme defines our design tokens — brand colours, a spacing scale, a typography scale, and component-level overrides — in a single file. A brand colour change propagates across every MUI component that uses the theme palette. We deliberately chose a dark-first design: our primary background is a near-black shade with a faint green tint, and every colour decision was made against that background. We do not maintain a light theme — the product is dark-only.
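The shape of that single-file token setup, sketched below — the names and colour values are illustrative, not Tanvrit's actual palette:

```typescript
// src/theme/tokens.ts (sketch) — every design token in one place.
// Values are hypothetical examples, not the real Tanvrit palette.

export const tokens = {
  palette: {
    mode: "dark" as const,
    background: "#0B0F0D",   // illustrative near-black with a green tint
    primary: "#4ADE80",      // illustrative brand green
    textPrimary: "#E7ECE9",
  },
  spacing: 8,                // base spacing unit in px
  fontFamily: "'Inter Variable', system-ui, sans-serif",
} as const;

// In the real theme file these tokens feed MUI's createTheme(), e.g.:
//   createTheme({ palette: { mode: tokens.palette.mode, ... }, spacing: tokens.spacing })
// so a token change propagates to every themed component.
```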

For page-level structural layout (grids, sections, hero layouts), we use inline styles rather than MUI components. Inline styles have zero runtime cost, no class name generation, and are trivially readable. MUI handles interactive components; inline styles handle structural layout. This hybrid approach keeps our bundle size smaller than a fully MUI-driven layout.

TypeScript Strict Mode Throughout

Every line of code in the Tanvrit codebase is TypeScript with strict mode enabled. This means strictNullChecks, noImplicitAny, strictFunctionTypes, and the rest of the strict family are all on. We have no // @ts-ignore suppression comments, and no any types except where a poorly-typed third-party library makes one genuinely unavoidable.

// tsconfig.json — strict configuration
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "exactOptionalPropertyTypes": true,
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true
  }
}

The case for strict TypeScript in a small team is not primarily about catching type errors at compile time — although it does catch many real bugs before they hit production. The stronger argument is that types are living documentation. When a function signature specifies that it accepts a BlogPost interface, any developer reading or modifying that function immediately understands the data contract without reading the implementation. In a codebase where multiple people work across multiple products with varying context on any given day, that clarity is a significant productivity gain.
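A small sketch of what "types as living documentation" means in practice — the field names here are hypothetical, but the pattern is the point:

```typescript
// src/content/blog/types.ts (sketch) — the data contract as documentation.
// Field names are illustrative, not the actual Tanvrit schema.

interface BlogPost {
  slug: string;
  title: string;
  publishedAt: string;       // ISO date, e.g. "2025-02-15"
  body: string;
  tags: readonly string[];
}

// The signature alone tells a reader everything: what goes in, what comes
// out, and — via the `| undefined` that strictNullChecks makes explicit —
// that every caller must handle the not-found case.
function findPost(posts: readonly BlogPost[], slug: string): BlogPost | undefined {
  return posts.find((post) => post.slug === slug);
}
```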

Framer Motion vs CSS Animations

We use both Framer Motion and CSS animations, for different purposes. CSS animations are preferred for anything that can be expressed declaratively: hover states, transitions, keyframe animations on decorative elements, gradient animations. CSS animations on transform and opacity can run on the compositor thread, do not require JavaScript, and work even if Framer Motion's JavaScript bundle has not loaded yet.

Framer Motion is used for entrance animations, scroll-triggered animations, and complex gesture-driven interactions where the imperative API and spring physics are genuinely useful. The useReducedMotion hook from Framer Motion respects the user's system preference for reduced motion — we wrap all Framer Motion animations with a reduced-motion check.
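One way to structure that reduced-motion check is to keep the variant selection as a pure function — in a component the flag would come from Framer Motion's useReducedMotion(); here it is passed in so the logic is testable. The variant shapes follow Framer Motion's plain-object format, and the duration values are illustrative:

```typescript
// src/animation/variants.ts (sketch) — entrance variants gated on the
// user's reduced-motion preference. Durations and offsets are illustrative.

interface Variant {
  opacity: number;
  y: number;
  transition?: { duration: number };
}

function entranceVariants(prefersReducedMotion: boolean): { hidden: Variant; visible: Variant } {
  if (prefersReducedMotion) {
    // No movement and no fade: content simply appears in place.
    return {
      hidden: { opacity: 1, y: 0 },
      visible: { opacity: 1, y: 0 },
    };
  }
  return {
    hidden: { opacity: 0, y: 24 },
    visible: { opacity: 1, y: 0, transition: { duration: 0.4 } },
  };
}
```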

The performance tradeoff is real: Framer Motion adds approximately 45kb to the gzipped bundle. On the homepage and landing pages, we accept this cost because the entrance animations are central to the design. On the tool pages, we avoid Framer Motion entirely and use CSS transitions only — tool pages should load and be interactive as fast as possible.

API Architecture and Database Choices

The tanvrit.com marketing and tools site has no backend API — it is purely static. The SaaS products (Friendly POS and others) have a separate backend that is completely decoupled from the frontend deployment. This separation means a backend deployment or outage cannot affect the public-facing marketing site.

The backend API is built with Node.js and deployed as serverless functions on Cloudflare Workers. Workers run at the edge (same network as Pages), eliminating the latency of routing from the CDN edge to a distant origin server. For read-heavy API endpoints, the response time from an Indian user's perspective is comparable to serving from a local server.
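The routing inside such a Worker can be sketched as a pure function. A real Worker exports a default object with a fetch(request) handler that returns a Response; the version below returns a plain object instead so the routing logic stays testable, and the endpoints and payloads are entirely hypothetical:

```typescript
// Sketch of edge routing in the style of a Cloudflare Worker handler.
// Endpoints and payloads are hypothetical, not Tanvrit's real API.

interface RouteResult {
  status: number;
  body: string;
}

function route(url: URL): RouteResult {
  switch (url.pathname) {
    case "/api/health":
      return { status: 200, body: JSON.stringify({ ok: true }) };
    case "/api/version":
      return { status: 200, body: JSON.stringify({ version: "example" }) };
    default:
      return { status: 404, body: "not found" };
  }
}
```

Because the handler runs at the edge node that served the static page, there is no extra hop to a distant origin for these read paths.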

For the primary application database, we use PostgreSQL hosted on Supabase. The choice was pragmatic: Supabase provides row-level security, a clean REST and real-time API, built-in auth, and managed infrastructure with an India-region option. For the Friendly POS product, which requires offline capability, we use SQLite on-device with a sync layer — the local database is the source of truth, and changes are reconciled with the server when connectivity is available.
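To make the "local database is the source of truth" idea concrete, here is one plausible reconciliation step — a last-write-wins merge. This is an assumption for illustration; the actual Friendly POS sync protocol is not described in this post, and the field names are hypothetical:

```typescript
// Sketch of a last-write-wins reconciliation pass for an offline-first
// store. Assumed, illustrative logic — not the real Friendly POS sync.

interface Row {
  id: string;
  updatedAt: number;   // epoch millis of the last write to this row
  payload: string;
}

// For each id, keep whichever side wrote most recently; rows that only
// one side knows about are kept as-is. Ties go to the local row, since
// the on-device database is treated as the source of truth.
function reconcile(local: Row[], remote: Row[]): Row[] {
  const merged = new Map<string, Row>();
  for (const row of remote) merged.set(row.id, row);
  for (const row of local) {
    const existing = merged.get(row.id);
    if (!existing || row.updatedAt >= existing.updatedAt) {
      merged.set(row.id, row);
    }
  }
  return [...merged.values()];
}
```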

Deployment Pipeline: GitHub Actions to Cloudflare Pages

Our deployment pipeline is intentionally simple. Every push to the main branch triggers a GitHub Actions workflow that runs TypeScript type checking, ESLint, and the Next.js build. If all checks pass, Cloudflare Pages picks up the build automatically via its GitHub integration.

# .github/workflows/deploy.yml (simplified)
name: Deploy
on:
  push:
    branches: [main]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci
      - run: npx tsc --noEmit      # type check
      - run: npm run lint           # ESLint
      - run: npm run build          # Next.js static export

Pull requests get a preview deployment automatically from Cloudflare Pages without any additional workflow configuration. The preview URL is posted as a comment on the PR. This gives reviewers a live, fully deployed version of every change before it merges.

Performance Budget and Core Web Vitals

We target specific Core Web Vitals thresholds and measure them on every deploy using Lighthouse CI in our GitHub Actions workflow. Our targets for all pages: LCP (Largest Contentful Paint) under 2.5 seconds, CLS (Cumulative Layout Shift) under 0.1, INP (Interaction to Next Paint) under 200ms. These are the "good" thresholds from Google's Core Web Vitals specification.
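A minimal sketch of how those budgets might be expressed in a Lighthouse CI assertion config, using the thresholds above. Note that INP is a field metric with no direct lab audit, so in practice it is monitored from real-user data rather than asserted in CI:

```json
{
  "ci": {
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }]
      }
    }
  }
}
```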

The primary lever for LCP on the tools and content pages is the static export — HTML is served directly from the CDN edge with no server processing. Font loading is handled with font-display: swap and preloading the variable font files to avoid layout shift from font fallback swaps. Images use the Next.js Image component with explicit dimensions to prevent CLS, and are served as WebP with PNG fallbacks.

For the tool pages specifically, we target a JavaScript bundle under 150kb gzipped for the page-specific code (excluding shared framework code). Tools that would require large dependencies are implemented using browser-native APIs (Canvas for image resizing, the built-in URL parser, native regex) rather than adding libraries.
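The browser-native approach mostly means the heavy lifting is a few lines of arithmetic plus a platform API. For the image resizer, a real tool draws onto an HTMLCanvasElement with drawImage(); the pure dimension calculation below is the part worth unit-testing, and the function name is our own for illustration:

```typescript
// Sketch of the dimension math behind a canvas-based image resizer.
// The canvas drawing itself is a browser API call; this is the pure part.

interface Size {
  width: number;
  height: number;
}

// Scale an image down to fit within maxWidth x maxHeight while preserving
// aspect ratio; images that already fit are returned unchanged (scale
// is clamped to 1 so we never upscale).
function fitWithin(source: Size, maxWidth: number, maxHeight: number): Size {
  const scale = Math.min(maxWidth / source.width, maxHeight / source.height, 1);
  return {
    width: Math.round(source.width * scale),
    height: Math.round(source.height * scale),
  };
}
```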

We'll continue writing about specific technical decisions as the platform evolves. Explore the Tanvrit platform →

Tags: Tanvrit engineering · Next.js 15 · Cloudflare Pages · TypeScript · React architecture · technical stack India · MUI design system