
Serverless: Why I Stopped Spinning Up My Own Servers

A backend dev’s real talk on why Serverless isn’t just hype—it’s a perspective shift.


It all started when I was just getting into—what exactly? The web.
I had already learned Express, had a decent grasp of building a server, and could connect a database without much fuss. Life felt sorted. Like—yeah, I know how to build a backend now. No big deal.

But then, out of nowhere, something shifted in my mind.

The first few days, I tried becoming a social media influencer. Single-digit views.
When I wanted to become this amazing tech blogger? Again—single-digit views.
The first few startup ideas I chased? Burned out before even making a single buck.

All of it hit me hard. And somehow, weirdly enough, it all circled back to the same thing: the web.

Okay, maybe not all of this happened exactly like this—but that’s how it felt in my head. 😅

That’s when I started piecing it all together—connecting scattered ideas, failures, frustrations. Trying to make sense of it.

And everything kept pointing toward one word: Serverless.

Serverless? 🤔

Yeah, I had the same question.

Like—if we remove the server… then where do we even host the backend?
You obviously need something to run your code, right?

And if you’re thinking, “Chill bro, I’ll just host it on my own device”—then guess what?
Your device is the server now.
So technically, the server is still there.

So… what even is this Serverless thing?

And okay—even if it's some legit concept—how does it help someone just starting out?
How does it help run a microservice when you haven’t even got a proper setup yet?

Because let’s be real—starting anything usually means spinning up a server, setting up hosting, managing resources.
And here we are, with something called serverless. Like… no server?

So yeah, let’s slow down and break this down properly:

  • What is Serverless, really?

  • How does it help us stay performant from day one?

  • And most importantly—how does it save our pockets?

What is Serverless?

“Getting started? Just focus on your code.
The rest—from deployment to scaling to monitoring? Leave it to us.”

That’s it.
That’s what Serverless is.

Saying more than this right now would just overcomplicate things.

Instead, let’s do something better:
Let’s try to actually understand what this one-liner really means.

So, generally, what do we do after writing our backend code?

We look for a machine or a server to host it.
Then we expose it via a public IP so it can handle requests.
Simple flow, right?

But here’s the twist—what happens when your traffic suddenly spikes?

Now you’ve got to scale.

Sure, you can use EC2-like machines, set up auto-scaling, and throw in a load balancer. But guess what?
You’re still doing all that yourself.

And it doesn’t stop there. You also need to monitor everything.
Sure, these things can be automated, but setting them up takes effort. You’re still pulling pieces from different services and stitching them all together—manually.

Code --> Deploy --> Auto Scale --> Monitor
[All handled by Cloud]

But with Serverless?

Just deploy it. Forget about it.

No managing servers. No worrying about scaling.
From handling traffic spikes to logging and monitoring, your cloud provider handles it all.

And if you ever want to check what’s going on?
Open your dashboard. Analyze. Done.

It’s that simple.
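To make “just focus on your code” concrete, here’s roughly what a deployable unit looks like. This is an AWS Lambda-style handler as a sketch; the exact event and response shapes vary by provider, and the message here is just illustrative:

```javascript
// A minimal Lambda-style handler: this is the *entire* deployable unit.
// No server setup, no port binding. The provider turns the incoming HTTP
// request into `event`, runs this function, and maps the return value
// back into an HTTP response.
async function handler(event) {
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: "Hello from serverless!" }),
  };
}

module.exports = { handler };
```

Everything else in the old flow (hosting, exposing a public IP, scaling) happens on the provider’s side of that function boundary.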

So now, let’s actually peek behind the scenes —
The BTS stuff that makes all this feel like magic (even though it’s super-engineered).

I mean yeah, fine — we don’t have to worry about servers anymore.
But think about it… what did we used to do?

Let’s say our users are mostly from India.
We’d host our server in a data center close to India — maybe Mumbai or Chennai.
If users are in the Middle East? Maybe Bahrain or Dubai.
In the US or Europe? Pick a nearby region again.

Basically, we chose a location that made things faster for our users.
That was our job — pick the right server spot.


🌐 Enter the Cloud Giants

Now what do providers like AWS, Cloudflare, GCP do?

Simple:
They’ve already placed servers everywhere.
Like — literally everywhere. Globally distributed. 🌍
And those servers? They’re just sitting there, waiting. 24/7. For something to do.

So the moment your backend function is needed → Boom, spun up instantly.
And once it's done?
They wait for a while… and if no more requests come in → shut it down to save power + money.


Till here, everything makes sense, right?
Cool.


🧠 Let’s Take an Example — Cloudflare

Visit their site and you’ll read something like:

“Available in 330+ cities across 125+ countries.”

Now, you might think —
“Oh, so they have 330 servers?”

Not really.

Because that number doesn’t include redundancy — a critical part of infra design.
They need backup servers at every location.
Why?
So if one dies or overheats or just throws a tantrum — another one quietly steps in.
No downtime. No drama.


But let’s keep it simple.
Let’s just understand how the system behaves.


🛠️ Let’s Flip the View — From the User’s Perspective

So up till now:
We wrote our backend → deployed it → done. ✅

Now imagine your users are mostly in City A.

  • Whenever someone from City A sends a request →
    it gets served from a server near City A. Low latency. Fast response.

  • One fine day, someone from City B sends a request →
    They get served from a server near City B.

Global infra = global reach. Effortless.


🧊 But hold on — Here Comes the Cold Start

No matter which location is serving the request —
If it’s the very first request hitting that function in that region,
it’ll take slightly longer than usual.

Why?
Because the cloud provider needs to initialize that function first.

That tiny startup time? That’s called a Cold Start.

And yeah — serverless removes a lot of headaches from our side,
but it also means these cloud providers have to be insanely optimized to pull this off smoothly.


🔄 So What Do They Do?

They set a timeout window for each function instance.

Let’s say it’s 15 minutes.
If a function doesn’t get any traffic for 15 minutes, the system shuts it down.

So when a new request comes in after that idle time?
A new instance spins up.


⏱️ But Don't Worry — It’s Fast

We’re not talking about booting a full server from scratch.
The server is already running — we're just spinning up a runtime environment (Node.js, Python, whatever).

It’s more like waking up a sleepy tab on your browser.
Not starting the whole laptop.

The delay?

  • Sometimes just a few hundred milliseconds

  • Worst case, maybe a second or two

Nothing crazy. Nothing unusable.

But that momentary lag?
That’s your Cold Start.

Now let’s talk about another layer of optimization these cloud providers have done.

They observed something very important:

The entire backend doesn’t get hit at once, right?
Users hit it part by part—endpoint by endpoint.

So they put their thinking hats on. 🧠
Because users? They only care about what they see on the UI.
And we, as developers? We only care about making sure that user is served smoothly.

So now the challenge becomes:
How do we serve the user efficiently, without overloading the server?

Here’s where they got clever—with function instances, like we discussed earlier.


🔍 So what exactly is a Function Instance?

When we write backend code, we usually have multiple endpoints: /getUser, /login, /createPost, etc.

But do we need all of them at the same time?

Nope.
We only need a few—whatever the user is triggering at that moment.

So they thought:

“Instead of running the entire backend at once,
why not execute only the exact function (endpoint) that the user needs?”

Smart, right?

So for example, if the user just needs the getUser endpoint—
Only that function runs.
Not the whole backend.
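In code, that looks something like this: each endpoint becomes its own standalone handler, deployed independently (handler names and response data here are made up):

```javascript
// One handler per endpoint instead of one big app. Each function below
// would be deployed as its own unit; only the one the user actually
// hits gets executed.
async function getUser(event) {
  return { statusCode: 200, body: JSON.stringify({ id: 1, name: "Asha" }) };
}

async function createPost(event) {
  return { statusCode: 201, body: JSON.stringify({ postId: 42 }) };
}
```

A request to `getUser` never loads, runs, or pays for `createPost`.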


This is why you'll often hear something like this in Serverless talks:

“Keep your global environment clean.”

Because in Serverless, each function runs independently, inside its own little sandbox. 🧪

So ideally, you should:

  • Keep only what’s absolutely needed for that specific function within it

  • Avoid loading unnecessary global stuff, because that’s not shared across endpoints

Every endpoint is treated like a separate unit, not part of a big monolith.
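One common way to follow that advice is lazy initialisation: do expensive setup inside the function that needs it, so other endpoints never pay for it. A sketch, where `connectToDb` is a stand-in for any heavy dependency (DB client, SDK, big config parse):

```javascript
// Lazy initialisation: expensive setup lives behind the function that
// needs it, not eagerly in shared global scope.
let dbClient = null;

function connectToDb() {
  if (!dbClient) {
    dbClient = { connected: true }; // imagine an expensive connection here
  }
  return dbClient; // warm invocations of this function reuse it
}

async function getUserHandler(event) {
  const db = connectToDb(); // only endpoints that need the DB pay for it
  return { statusCode: 200, body: JSON.stringify({ dbReady: db.connected }) };
}

async function healthHandler(event) {
  // This endpoint never touches the DB, so it never pays the init cost.
  return { statusCode: 200, body: "ok" };
}
```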


📸 Take a look at this image (sourced from Cloudflare)

It perfectly shows how each function spins up in isolation and only handles what it needs to—nothing more, nothing less.
Although this image wasn’t really meant to show “function running in isolation” 😂😂 — it was actually Cloudflare showing off why their servers are faster than everyone else’s.
But hey… it still worked out, didn’t it? 😄

At the end of the day, everyone’s trying to achieve pretty much the same thing—
whether it’s Cloudflare, AWS, GCP, or any other provider.
The goal is the same.
But the process?
Everyone has their own way of doing it.

Now, there are basically two ways to build serverless components:

1️⃣ One Function Per Endpoint

This is the method we just discussed —
Where each endpoint is converted into its own independent handler.

2️⃣ Wrapping an Existing Express App

This comes in handy when you already have your backend written in something like Express.js.
You just install the serverless-http library and wrap your entire app with it.
Boom — your whole backend becomes compatible with serverless architecture.
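With the real library it’s basically a two-liner: `const serverless = require("serverless-http")` and then `module.exports.handler = serverless(app)`. To see what that wrapping actually means, here’s a toy model of the idea, using a fake mini-app instead of real Express (everything below is illustrative; the real library translates requests and responses far more thoroughly):

```javascript
// Toy model of what the wrapping does: take an Express-style app and
// return ONE Lambda-style handler that routes every request through it.
function createApp() {
  const routes = {};
  return {
    get(path, fn) { routes["GET " + path] = fn; },
    routes,
  };
}

function wrap(app) {
  // The whole app becomes a single function: one entry point, one sandbox.
  return async (event) => {
    const route = app.routes[event.httpMethod + " " + event.path];
    if (!route) return { statusCode: 404, body: "Not found" };
    let payload;
    route(event, { json: (obj) => { payload = obj; } });
    return { statusCode: 200, body: JSON.stringify(payload) };
  };
}

const app = createApp();
app.get("/user", (req, res) => res.json({ name: "Asha" }));
const handler = wrap(app); // every route now lives inside this one function
```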


But here’s the catch:
From what I’ve read and understood so far, when you wrap your entire app like this —
the whole backend is treated as a single function.

So that nice benefit of function-level isolation?
You kind of lose it here.

Because now, if one endpoint starts acting up,
your entire function (aka your full backend) can get interrupted.

Whereas, in the first approach, each endpoint lives in its own isolated space —
so if one fails, the others keep working just fine.


So what’s the takeaway?

If you’re just starting out, it’s great to go with the "one handler per endpoint" style.

But — if you want to stick to a traditional, robust code structure (like Express apps),
you can look into frameworks like HonoJS.

They let you keep your coding style almost the same,
while managing the behind-the-scenes structure to work with Serverless platforms.


And if your code is already written in Express or some other framework,
you can still follow a modular approach:

  • Write your controllers, services, etc., in separate files

  • Import them wherever needed

This way, when you transition to Serverless later,
you’ll only need to tweak your controller logic a bit.

Because your business logic, which lives in services,
is already cleanly separated — and that’s what matters in the long run.
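Here’s that split sketched in one file for brevity (normally these live in separate files; all names and data are illustrative):

```javascript
// services/user.js -- pure business logic, knows nothing about HTTP
function getUserById(id) {
  return { id, name: "Asha" }; // stand-in for a real DB lookup
}

// controllers/user.express.js -- thin Express adapter
function expressGetUser(req, res) {
  res.json(getUserById(req.params.id));
}

// controllers/user.lambda.js -- when moving to serverless, only this
// thin layer gets rewritten; the service above stays untouched.
async function lambdaGetUser(event) {
  return {
    statusCode: 200,
    body: JSON.stringify(getUserById(event.pathParameters.id)),
  };
}
```

Same service, two interchangeable controllers: that’s the whole payoff of keeping the business logic separate.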

So that was it — the complete story of Serverless.

Now, let’s wrap this up with three important questions:

  • Why should you use it?

  • When should you use it?

  • And one extra thing I found interesting…


🐞 The Debugging Struggle — A Personal Moment

You know how, when we’re testing something, we just throw in a console.log() for quick debugging?

Well, Cloudflare has optimized their platform so aggressively that in some setups,
even console.log doesn’t behave like you expect.

Since your code could be running on servers in different locations,
they’ve stripped down a lot of traditional Node.js features to keep things lightweight and fast.

And honestly — I reached a point where I was like,

“Am I really such a noob that even one-liners are breaking?!” 😂😂

But jokes apart — let’s get back on track.


🕒 So… When Should You Use Serverless?

Use it when:

  • You’re rolling out a startup and still exploring your user base

  • You’re building an MVP or a side project

  • You’re experimenting with a new microservice

  • You want to focus on code, not infrastructure

Basically, if your current goal is speed, market validation, and iteration,
then Serverless is a no-brainer.


💸 And Why Should You Use It?

Because cloud providers offer crazy good deals in the beginning.

For example:

You often get 1 million requests per month for free.

You’ll only start paying after that.

So yeah — if used smartly, this can save you a ton of money.

But hey, don’t get confused like I did 😅

Back when I saw “1 million requests”, I thought:

“Damn! That means 1 million users, right? Easy win!”

Reality check: Nope. 😅

Once you sit down and do the math, things start adding up fast.

Let’s say your app loads a page that makes around 25 API calls to fully render the UI.
Now assume a user visits 4–5 pages per session.

Boom — that’s 100+ requests per user.

So now, divide:

1,000,000 requests / 100 requests per user = ~10,000 users

Not bad — but definitely not “1 million users” like the marketing makes it sound. 😂
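The same back-of-the-envelope math, as code (all the numbers are guesses about a “typical” app, not measurements, and it assumes one session per user per month):

```javascript
// How far does a 1M-requests/month free tier actually stretch?
const freeRequestsPerMonth = 1_000_000; // a common free-tier allowance
const apiCallsPerPage = 25;             // calls needed to render one page
const pagesPerSession = 4;              // pages a user visits per session

const requestsPerUser = apiCallsPerPage * pagesPerSession; // 100
const usersCovered = freeRequestsPerMonth / requestsPerUser; // 10,000
```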


🧠 What’s the Bottom Line?

Serverless is amazing at small to medium scale:
perfect for startups, MVPs, and fast-moving experiments.

But once you start scaling massively and need tight control over infra, performance tuning, cost, etc.—
then yes, managing your own server might make more sense.

The point is:

The tools are out there.
You just need to know what to use, when — based on your needs.


That’s it.

That’s everything I’ve learned (and tripped over) on this wild little ride through Serverless land.
Hope this helped you make a little more sense of the magic behind it all ✨

Have questions? 🤔
Something didn’t make sense or want to dive deeper?
Drop a comment below or hit me up on LinkedIn or X.
Let’s keep this dev-to-dev conversation going.