I spent years thinking serverless was something only backend engineers touched. Lambdas, containers, edge functions — it all sounded like infrastructure I had no business poking around in. Then someone showed me Cloudflare Workers, and I had one deployed in the time it takes to brew coffee.

Here's the thing: a Cloudflare Worker is just a tiny program that runs at the edge. "The edge" means Cloudflare's network of data centers all over the world — so when someone hits your Worker, it responds from whichever server is closest to them. Fast. Stupid fast. And you don't manage a single server to make that happen.

The best infrastructure is the kind you never have to think about. Write a function, deploy it, walk away.

Getting Set Up

You need one tool: wrangler. It's Cloudflare's CLI for managing Workers. Install it globally and you're halfway there.

# Install Wrangler globally
npm install -g wrangler

# Authenticate with your Cloudflare account
wrangler login

# Scaffold a new Worker project
wrangler init my-first-worker

That last command creates a project folder with a wrangler.toml config file and a src/index.js (or src/index.ts, if you opt for TypeScript). The config file is where you name your Worker and set your account details. The source file is where the magic happens.
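A minimal wrangler.toml looks something like this — the name and date below are placeholders; wrangler init generates real values for you:

```toml
# Hypothetical minimal config — yours is generated by wrangler init
name = "my-first-worker"          # also becomes your *.workers.dev subdomain
main = "src/index.js"             # entry point for the Worker
compatibility_date = "2024-01-01" # pins runtime behavior to a known date
```

Three lines is genuinely all a basic Worker needs.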

Don't Panic

You don't need a paid Cloudflare plan for this. The free tier gives you 100,000 requests per day. That's more than enough for personal projects, prototypes, and even modest production tools. You'd have to try pretty hard to exceed it.

The Hello World

Open src/index.js and you'll see the skeleton of a Worker. At its core, a Worker is just a function that receives a request and returns a response. That's it. No routing frameworks, no middleware stacks, no twelve config files.

export default {
  async fetch(request) {
    return new Response("Hello from the edge!", {
      headers: { "Content-Type": "text/plain" },
    });
  },
};

Run wrangler dev and hit localhost:8787. You'll see your greeting. That's a Worker running locally. It uses the same runtime as production, so what you see is what you get.
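Because a Worker handler is built entirely on the web-standard Request and Response objects, you can even poke at the same logic in plain Node.js (18 or newer) with no wrangler involved. A quick sketch — same handler, minus the export wrapper so it runs as an ordinary script:

```javascript
// The hello-world handler as a plain object; in a real Worker
// you'd write `export default worker`.
const worker = {
  async fetch(request) {
    return new Response("Hello from the edge!", {
      headers: { "Content-Type": "text/plain" },
    });
  },
};

// Node 18+ ships Request/Response globally, so this runs outside Workers too.
worker.fetch(new Request("http://localhost:8787/")).then(async (res) => {
  console.log(res.status);       // 200
  console.log(await res.text()); // Hello from the edge!
});
```

Nothing Cloudflare-specific happens until you deploy; it's just a function.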

Making It Useful

A hello world is satisfying for about ten seconds. Let's do something real. How about a Worker that returns a JSON response with the current time and the visitor's location? Cloudflare attaches geolocation data to every request for free.

export default {
  async fetch(request) {
    const data = {
      timestamp: new Date().toISOString(),
      city: request.cf?.city || "Unknown",
      country: request.cf?.country || "Unknown",
      message: "You found the edge.",
    };

    return new Response(JSON.stringify(data, null, 2), {
      headers: { "Content-Type": "application/json" },
    });
  },
};
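One thing worth knowing: request.cf only exists inside Cloudflare's runtime, which is exactly why the optional chaining and "Unknown" fallbacks are there. A sketch of how the same logic degrades gracefully when cf is missing (runnable in plain Node 18+, outside Workers):

```javascript
// Same shape as the Worker above, as a plain object so it runs as a script.
// Outside Cloudflare's runtime, `request.cf` is simply undefined.
const geoWorker = {
  async fetch(request) {
    const data = {
      timestamp: new Date().toISOString(),
      city: request.cf?.city || "Unknown",
      country: request.cf?.country || "Unknown",
      message: "You found the edge.",
    };
    return new Response(JSON.stringify(data, null, 2), {
      headers: { "Content-Type": "application/json" },
    });
  },
};

// A standard Request has no `cf` property, so both fields fall back.
geoWorker.fetch(new Request("http://localhost:8787/")).then(async (res) => {
  const body = await res.json();
  console.log(body.city, body.country); // Unknown Unknown
});
```

Deployed on Cloudflare, those same fields come back filled in with the visitor's actual city and country.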

Or maybe you want a simple redirect — send anyone who hits /github to your GitHub profile:

export default {
  async fetch(request) {
    const url = new URL(request.url);

    if (url.pathname === "/github") {
      return Response.redirect(
        "https://github.com/yourusername", 301
      );
    }

    return new Response("Nothing here.", { status: 404 });
  },
};

You can build URL shorteners, API proxies, webhook handlers, auth gates — all in a single file. The constraint is actually liberating. No framework choices. No dependency debates. Just a function and a response.
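To make the API-proxy idea concrete, here's a sketch. The upstream host is a placeholder, it only forwards the method and headers (request bodies need a little more care), and in a real Worker you'd export default the object:

```javascript
// Hypothetical CORS proxy: api.example.com stands in for whatever
// third-party API you're fronting.
const proxyWorker = {
  async fetch(request) {
    // Keep the path and query string, but swap in the upstream hostname.
    const upstream = new URL(request.url);
    upstream.hostname = "api.example.com";

    // Forward method and headers to the upstream API.
    const response = await fetch(upstream, {
      method: request.method,
      headers: request.headers,
    });

    // Rebuild the response so its headers are mutable, then allow any origin.
    const proxied = new Response(response.body, {
      status: response.status,
      headers: response.headers,
    });
    proxied.headers.set("Access-Control-Allow-Origin", "*");
    return proxied;
  },
};
```

Now your frontend calls the Worker's URL instead of the API's, and the browser never complains about CORS.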

Ship It

Deploying is one command:

# Deploy to production
wrangler deploy

That's genuinely it. Wrangler reads your wrangler.toml, bundles your code, and pushes it to Cloudflare's network. Within seconds, your Worker is live on a *.workers.dev subdomain. You can also attach a custom domain if you want something cleaner.
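Attaching a custom domain is a small addition to wrangler.toml — the domain below is a placeholder, and it has to be a zone already on your Cloudflare account:

```toml
# Hypothetical: serve the Worker from your own domain instead of workers.dev
routes = [
  { pattern = "api.example.com", custom_domain = true }
]
```

Redeploy, and Cloudflare wires up the DNS and certificate for you.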

No build step to configure. No CI pipeline to set up (though you can add one later). No Docker containers. No AWS console with forty-seven tabs open. One command, and your code is running in 300+ cities around the world.

Why This Matters for Non-Technical Builders

Cloudflare Workers changed how I think about backend logic. Before, I'd either skip backend features entirely or beg a developer friend for help. Now I build small, focused Workers for specific jobs: a form handler here, a redirect service there, an API proxy for a third-party service that doesn't support CORS.

Each Worker is its own little island. If one breaks, nothing else goes down. If I want to delete one, I delete it. No tangled dependencies. No monolith to maintain.

You don't need to understand distributed systems to use Workers. You just need to write a function that takes a request and returns a response.

Fifteen minutes. That's all it takes to go from zero to deployed. The rest is just finding interesting problems to solve with a function at the edge.