
Serverless Architecture

Event-Driven Architecture, AWS Lambda, and Serverless Framework.

What are Backend Servers?

Typically, you write a backend (e.g., Express) and run it with `node index.js` on a specific port. To deploy this, you manage:

  • VMs (EC2): Renting a virtual machine.
  • Auto Scaling: Managing groups of VMs for traffic.
  • Kubernetes: Orchestrating containers.

The Downsides

  • Scaling: Deciding when to scale up/down is complex.
  • Idle Costs: You pay for running servers even with zero traffic.
  • Maintenance: Patching the OS, monitoring uptime.

Enter Serverless

Easier Definition

What if you could just write your express routes and run a command? The app would automatically:

  1. Deploy your code.
  2. Autoscale up and down to zero.
  3. Charge you on a per-request basis (rather than paying for uptime).

What is it exactly?

"Serverless" is a backend deployment model where the cloud provider dynamically manages the allocation and provisioning of servers.

Note: It doesn't mean there are no servers. It means developers don't have to worry about them.

The Wins

  • No server management
  • Scales to zero (no cost when idle)
  • Infinite scaling (theoretical)
  • Faster shipping/prototyping

The Trade-offs

  • More expensive than EC2 at sustained high scale.
  • Cold Starts: The first request may be slow while the function "wakes up".
  • Vendor lock-in risk.

How to fix Cold Starts?

1. Warm Pools (Provisioned Concurrency)

Keep instances initialized intentionally. Providers often call this Provisioned Concurrency. It costs money but eliminates startup lag.
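The startup lag can be sketched as module-level lazy initialization: a cold start pays the init cost on the first request, while a warm instance has already paid it. This is a hypothetical illustration (the names `loadConfig`/`handleRequest` are made up, not a provider API):

```typescript
// Sketch: why warm instances skip startup lag (toy example, not a provider API).
// In a serverless runtime, module-level state survives between requests
// on a warm instance.

let initCount = 0;           // how many times we paid the "cold start" cost
let config: string | null = null;

function loadConfig(): string {
  initCount++;               // expensive work: reading secrets, opening clients, etc.
  return "loaded-config";
}

function handleRequest(): string {
  if (config === null) {
    config = loadConfig();   // cold start: the first request pays the init cost
  }
  return config;             // warm path: later requests reuse the initialized state
}

handleRequest(); // cold: runs loadConfig()
handleRequest(); // warm: reuses config, loadConfig() is not called again
```

Provisioned Concurrency is essentially the provider running that initialization ahead of time, so even the "first" user request hits the warm path.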

2. Bot Hitting & Cron Jobs

Set up a scheduled job to ping your server at a fixed interval (e.g., every 14 minutes).

Example: the Eco-Pulse project on Render pings itself every 14 minutes to stay ahead of the 15-minute inactivity shutdown.
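One way to implement the scheduled ping is a Cloudflare Worker cron trigger. A minimal sketch (the target URL is a placeholder; the `ping` helper is my own, injected with a fetch function so it can be tested with a mock):

```typescript
// Sketch: keep-alive ping via a Cloudflare Worker cron trigger.
// TARGET_URL is a placeholder - point it at your own service's health endpoint.
// Schedule it in wrangler.toml, e.g.:
//   [triggers]
//   crons = ["*/14 * * * *"]   // every 14 minutes, under the 15-minute idle limit

const TARGET_URL = "https://my-app.example.com/health";

// The fetch function is injected so the logic is easy to test with a mock.
async function ping(
  fetchFn: (url: string) => Promise<{ status: number }>
): Promise<boolean> {
  const res = await fetchFn(TARGET_URL);
  return res.status === 200; // true if the service answered (i.e., it is awake)
}

export default {
  // Cloudflare calls scheduled() at each cron tick.
  async scheduled(): Promise<void> {
    await ping((url) => fetch(url));
  },
};
```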

When should you use a serverless architecture?

  1. When you have to get off the ground fast and don't want to worry about deployments.
  2. When you can't anticipate the traffic and don't want to worry about autoscaling.
  3. When you have very low traffic and want to optimise for costs.

Get Started with a Provider: Cloudflare Workers

Why start here?

No credit card required. It's the easiest way to deploy your first serverless function.

How do Cloudflare Workers work?

To understand Cloudflare, we need to look at how serverless evolved:

  1. VMs (Virtual Machines): The "dumbest" idea for serverless. Booting a full OS (Linux + Kernel + App) for every user request is extremely slow and resource-heavy.
  2. Containers (Docker): Better, but still too slow for per-request isolation. `docker run` takes time to allocate CPU/RAM.
  3. Firecracker (AWS Lambda): AWS uses Firecracker MicroVMs. They are incredibly lightweight VMs that boot in milliseconds. This is great, but still has some OS overhead.
  4. V8 Isolates (Cloudflare): Cloudflare doesn't use VMs or Containers. It runs on the V8 Engine (same as Chrome). It uses "Isolates" - lightweight contexts that share the same runtime but are memory-isolated. This eliminates "cold starts" almost entirely.

Wait, is it Node.js? No! Cloudflare Workers DO NOT use the Node.js runtime. They use their own runtime built on V8.

This is why historically they only supported JavaScript/TypeScript. Python support is newer (compiled to WebAssembly) because they had to build it into their specific runtime.

Getting Started

Method 1: Dashboard Quick Start (The Hello World)

  1. Sign Up

     Go to cloudflare.com and sign up.

  2. Navigate to Build

     Compute > Workers & Pages > Overview.

  3. Create & Deploy

     Click Create Worker > Hello World template > Deploy.

Method 2: Local Development (The Real Setup)

  1. Initialize (CLI)

     We use Wrangler, the official Cloudflare Workers CLI, to create and manage projects.

     npm create cloudflare -- my-app

     Select No for "Do you want to deploy?"

  2. Explore & Start

     Check package.json and run locally:

     npm start

  3. Concept: HTTP Server?

     Cloudflare handles the HTTP server. You don't write `app.listen(3000)`; you just write the logic to handle a request.

     index.ts
     // No app.listen() - Cloudflare calls fetch() for every incoming request.
     const main: ExportedHandler<Env> = {
       fetch(request, env, ctx): Response {
         if (request.method === "POST") {
           if (request.url.endsWith("/user")) {
             return new Response("This is a post request on route /user");
           } else {
             return new Response("This is a post request not on route /user");
           }
         } else {
           return Response.json({ msg: "This is a get request" });
         }
       },
     };

     export default main;

  4. Login

     Authenticate with your Cloudflare account:

     npx wrangler login

     This will open your browser. Click Allow.

  5. Deploy

     Deploy your worker to the global network:

     npx wrangler deploy

  6. Update & Redeploy

     Made changes? Just run the deploy command again:

     npx wrangler deploy

What about Express.js?

Historically, Cloudflare Workers (and other non-node runtimes) could not run Express because it relied heavily on Node.js internals.

Good News: You CAN now!

Cloudflare now supports running Express applications. Tools like npm create cloudflare can even scaffold an Express app for you. However, for maximum performance and lower latency, using the native fetch handler (as shown above) is still recommended for new microservices.

The Better Way: Hono

If you miss Express-like syntax (`app.get`, `app.post`) but want maximum performance, use Hono. It is a small, simple, and ultrafast web framework built on Web Standards.

  • Ultrafast: Zero dependencies and optimized for the Edge.
  • Familiar: If you know Express, you already know Hono.
  • Type-Safe: Built-in TypeScript support for a great DX.
Quick Setup
  1. Initialize

     npm create hono@latest my-app

  2. Select Template

     Choose cloudflare-workers.

     Frontend? Use the cloudflare-workers + Vite templates if you need a frontend.

  3. Install & Run

     cd my-app && npm i

Look how similar it is to Express:

import { Hono } from 'hono'
const app = new Hono()

// middleware
app.use(async (c, next) => {
  if(c.req.header("Authorization")) {
    console.log("Middleware")
    await next()
  } else {
    console.log("Authorization header not found")
    return c.text("Unauthorized!!")
  }
})

// route
app.get('/', (c) => c.text('Hello Hono!'))

app.post('/user', async (c) => {
  const body = await c.req.json()      // parse the JSON request body
  const userId = c.req.query("userId") // read the ?userId= query param
  return c.json({ message: 'Created' })
})

export default app

Connecting to DB

Serverless environments have one big problem when dealing with databases: Connection Exhaustion.

  • Traditional DBs (Postgres, MySQL) are built for persistent connections.
  • Serverless functions typically open 1 connection per request/worker.
  • 1000 concurrent users = 1000 open DB connections = Crash!
[Diagram: Worker 1, Worker 2, and Worker 3 each open their own connection to a Database with a limited connection count.]

Each worker opens a new connection, quickly hitting limits.

The Solution: Connection Pooling

You need a "middleman" that holds a pool of open connections and shares them.
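The middleman can be sketched as a tiny pool that hands out a fixed number of connections and queues everyone else. This is a toy illustration of the idea (real poolers like PgBouncer or Accelerate do far more):

```typescript
// Toy connection pool: at most `size` connections are ever open;
// extra workers wait in a queue instead of opening new connections.
class ConnectionPool<T> {
  private idle: T[];
  private waiters: ((conn: T) => void)[] = [];

  constructor(connections: T[]) {
    this.idle = [...connections];
  }

  // Borrow a connection, waiting if none is free.
  acquire(): Promise<T> {
    const conn = this.idle.pop();
    if (conn !== undefined) return Promise.resolve(conn);
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  // Return a connection; hand it straight to the next waiter if any.
  release(conn: T): void {
    const waiter = this.waiters.shift();
    if (waiter) waiter(conn);
    else this.idle.push(conn);
  }
}

// 5 workers share only 2 "connections" - the database never sees more than 2.
async function demo(): Promise<number> {
  const pool = new ConnectionPool(["conn-A", "conn-B"]);
  let inUse = 0;
  let maxInUse = 0;
  await Promise.all(
    Array.from({ length: 5 }, async () => {
      const conn = await pool.acquire();
      inUse++;
      maxInUse = Math.max(maxInUse, inUse);
      await new Promise((r) => setTimeout(r, 10)); // simulate a query
      inUse--;
      pool.release(conn);
    })
  );
  return maxInUse; // never exceeds the pool size (2)
}
```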

[Diagram: Workers W1-W5 share a small set of connections through a Connection Pool sitting in front of the Database.]

Examples: Prisma Accelerate, Supabase Connection Pooler, Cloudflare Hyperdrive.
Prisma in Serverless

  • The Problem: The standard Prisma library has Rust dependencies that the Cloudflare runtime (V8) doesn't understand.
  • ✔️ The Fix: Use Prisma Accelerate. It provides an Edge Client (lightweight, no Rust binary) that talks to a Connection Pool managed by Prisma.
  • 💰 Business Model: This is how Prisma makes money! Their ORM is free/open-source, but the Connection Pooling service (Accelerate) needed for serverless is a paid/usage-based product.
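Wiring that up looks roughly like this - a setup sketch based on Prisma's Accelerate docs, assuming `prisma generate` has run, a `User` model exists in schema.prisma, and DATABASE_URL is an Accelerate (prisma://...) connection string bound as a Worker secret:

```typescript
// Sketch: Prisma's edge client + the Accelerate extension in a Cloudflare Worker.
// Requires @prisma/client and @prisma/extension-accelerate as dependencies.
import { PrismaClient } from "@prisma/client/edge";
import { withAccelerate } from "@prisma/extension-accelerate";

export default {
  async fetch(request: Request, env: { DATABASE_URL: string }): Promise<Response> {
    const prisma = new PrismaClient({
      datasourceUrl: env.DATABASE_URL, // pooled via Accelerate, not a direct TCP connection
    }).$extends(withAccelerate());

    const users = await prisma.user.findMany(); // assumes a `User` model in schema.prisma
    return Response.json(users);
  },
};
```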
💡 Pro Tip: Built-in Pooling

If you use modern serverless databases like Neon or Aiven, they often provide native connection pooling out of the box. In that case, you don't necessarily need Prisma Accelerate - just use their pooled connection string!