No, ‘Serverless’ and ‘Cloud’ Are Not ‘Scams’ — But They Do Come with Trade-Offs
I recently read an article by a popular blogger here on Medium. This person had over 12,700 followers when I started writing this; they now have over 13,000.
The article is titled:
“Why Serverless Is A Scam”
Bold? Check.
Provocative? Check.
Accurate? No.
Now, don’t get me wrong: if the article had been written in a way that actually addressed the question of Serverless, the title might accurately articulate the author’s point of view, namely that they think Serverless is a scam. However, the author really should have given the article the following title:
“Why Cloud Is Sometimes Overkill for Small Businesses”
A much less provocative title, but accurate. Maybe that is why he has 13,000 followers on Medium and I have a paltry 193.
Based on the title, someone with even a basic understanding of cloud computing would expect the author to have cloud Serverless offerings like Lambda or Azure Functions in his crosshairs: services that abstract away the underlying infrastructure as long as you conform to the development and execution model imposed by the Serverless runtime. That is not the case. The author has apparently mis-titled the article, because his points are largely about the Cloud vs. On-Premises debate.
The author’s key points focus on the following classic Cloud vs. On-Premises points of contention:
- Vendor Lock-in
- Cost Concerns
- Complexity
- Uptime & Reliability
- Security
Of course, some of these are legitimate points of debate in the question of Serverless versus, well, not (whatever “not” might be). Usually “not” means Virtual Machines or Managed Kubernetes (e.g., EKS, AKS, GKE). Vendor Lock-in and Cost Concerns fall into that category; these two points actually represent strong arguments against using cloud Serverless offerings, which I will address later.
However, Complexity is usually a slam dunk in the FOR camp for Serverless. Why? Because Serverless reduces complexity for the consumer of the cloud service: as long as you conform to the contours of the Serverless runtime and the inherent constraints it imposes on you, your life actually gets easier. But in architecture there are always trade-offs, so what are we trading for this lower complexity? Well, actually, Vendor Lock-in, and possibly cost. Vendor Lock-in is pretty clear, as there have never been any real cross-cloud or multi-cloud standards for Serverless offerings. Lambda and Azure Functions, arguably the two largest Serverless platforms, are completely incongruous. Which means that if you decide to build on Lambda, you are pretty much stuck on AWS. Likewise, if you build on Azure Functions, you are pretty much stuck on Azure. However, with Azure Functions we do have the option of hosting within a Kubernetes environment, either on-premises or on another cloud, so there is more flexibility with that approach.
Maybe Lambda has similar capabilities now? Admittedly, my information on Lambda might be a bit dated; the last time I was seriously hands-on with Lambda in a professional capacity was before I joined Microsoft.
That being said, the fact that Azure Functions can be hosted outside of Azure still does not change the fact that you are locked into the Azure Functions way of doing things, which is not exactly the same kind of vendor lock-in, but still.
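To make that incongruity concrete, here is a minimal sketch of the same trivial HTTP “hello” endpoint written twice in Python: once as an AWS Lambda handler (assuming an API Gateway proxy-style event) and once as an Azure Functions handler (assuming the v2 Python programming model). The function and route names are illustrative.

```python
# Two versions of the same trivial "hello" endpoint. In reality these live in
# two separate projects; they are shown together only for comparison.

import azure.functions as func  # Azure Functions Python worker SDK


# --- AWS Lambda -------------------------------------------------------------
# The platform invokes a module-level function with (event, context). The
# shape of `event` depends on the trigger; for an API Gateway proxy
# integration it is a dict with keys like "queryStringParameters".
def lambda_handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {"statusCode": 200, "body": f"hello {name}"}


# --- Azure Functions (v2 Python programming model) --------------------------
# Routes are declared with decorators on a FunctionApp object, and the handler
# works with HttpRequest / HttpResponse objects instead of raw dicts.
app = func.FunctionApp()


@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
def hello(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(f"hello {name}", status_code=200)
```

Neither the handler signature, the event shape, nor the return type carries over, so even this toy function gets rewritten, not redeployed, when it moves between platforms.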
I call this out because we absolutely can debate both questions:
- Cloud vs. Not
- Serverless vs. Not
However, the author claims, not only in the bold and provocative title but throughout the article, that he is arguing against Serverless. That is topic #2: he is attempting to make the case for why we shouldn’t use Serverless because, well, according to him, “it’s a scam”.
However, what is so frustratingly clear is that the author either doesn’t understand what Serverless is, or he is purposefully conflating two distinct concepts: Cloud and Serverless.
Let me analyze the key points of the article and address his arguments, noting whether each is really a “Cloud vs. Not” argument or a “Serverless vs. Not” argument.
Vendor Lock-in
OK, we’re ready to talk about why Serverless is a scam. Vendor Lock-in is 100% a thing when it comes to Serverless. When you write a Lambda function, it only works in one place: AWS. If you want to move the applications or services built on Lambda, get ready for a re-write. This CapEx outlay is what keeps large organizations from moving off of the cloud, because once you are on, it’s hard to get off. But does that mean Serverless is a scam? Does that mean writing your apps in Lambda is bad? Not really, but there is a risk. As I like to say, a risk is an issue before it’s been born.
If you build all your apps in Lambda, you are doing so based on some assumptions about the cost of Lambda and all the other Serverless services you might be using, such as S3, DynamoDB, etc. The Vendor Lock-in risk is that after you’ve done all that, AWS ceases to be a benevolent partner and decides to soak you for all you’re worth by raising prices.
The author claims that AWS behaves like a monopoly because it locks customers into its ecosystem, making it difficult to migrate away. This argument conflates a broader critique of cloud computing with serverless. Vendor lock-in is a valid concern across all types of cloud services, whether Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), or serverless. However, with Serverless offerings, this lock-in is definitely more significant than with other cloud offerings such as VMs, Managed Kubernetes, or even PaaS.
But the author doesn’t address this at all. He merely asserts that AWS behaves like a monopoly because they offer services that place you firmly within their walled garden. Maybe the walled garden is kinda nice?
Now let’s talk about the elephant in the room: AWS is far from a monopoly; the cloud market is highly competitive, with Azure and Google Cloud offering robust alternatives. Vendor lock-in is an inherent challenge of proprietary cloud platforms and not unique to serverless offerings like AWS Lambda or Azure Functions.
If you’re locked into, say, AWS, you can’t really leave without massive amounts of work. Plus massive data egress fees.
The decision to adopt serverless should include an evaluation of the trade-offs between the convenience of the Serverless offerings and the potential difficulties of migration. When ROAMing risks (Resolved, Owned, Accepted, Mitigated), you need to consider both the Impact and the Likelihood of each risk. Due to the reduced initial operating cost and faster time to market that Serverless offerings provide, many organizations decide to Accept that risk: it’s worth it for them to explore a new business model, implementing a system of innovation to try out an idea without sinking a lot of money into the plumbing to get the thing running (and to keep it running).
Calling AWS a monopoly diminishes the nuanced discussion required to evaluate these trade-offs and ignores the competitive landscape of cloud computing.
Cost Concerns
The original author didn’t really tease out the Vendor Lock-in risk in terms of the CapEx of moving off the platform; he focused on the risk of the cloud provider jacking up their rates.
“The reason why serverless is so expensive is because it is effectively a monopoly… If you want to make a lot of money you have to have a monopoly and these serverless companies have effectively done just that.”
The author suggests serverless is more expensive because of monopolistic practices by cloud providers, driving up costs for users.
The author cites David Heinemeier Hansson (DHH), who claimed moving workloads off the cloud saved his company millions: “You’ve probably heard how DHH was leaving the cloud and how this would save his company 7 million dollars. Recently he was like, ‘Did I say 7? I meant 10.’”
If you actually read about it, the DHH saga is a “cloud exit” story, not a “Serverless is a scam” story. The evidence the author provides supports an argument, just not the one he claims to be making.
The article highlights cost concerns, but these concerns, especially high egress fees, are not serverless-specific; they are common across cloud services in general.
“Serverless is just an endless money pit… The people arguing for it have probably never seen the bill.”
The author suggests serverless costs escalate quickly and unexpectedly due to provider practices and pricing models.
However, the cost argument applies more broadly to cloud services in general rather than serverless specifically. For example, the author mentions data egress fees and runaway costs due to misconfigurations, which are common to cloud providers regardless of whether the services are serverless, PaaS, or IaaS.
The author contrasts serverless with their personal experience running a “moderately popular app with a level editor and cloud saves,” which they argue is cheaper and more reliable on a VPS. The author doesn’t go into much detail, but from the way he describes his setup, it sounds like he is running his little REST API on a single VPS.
“Well, since switching to a VPS for my RSS reader, it’s never gone down. Gone down by itself that is. I’ve restarted it a few times to update some things. But those are usually very short, taking less than a second. Serverless on the other hand?”
I think he is being a bit naïve. Nearing the level of teenage drivers. Just because you haven’t gotten into an accident, doesn’t mean you’re invincible. It means you’re lucky.
Additionally, the comparison with a VPS oversimplifies the issue. A single VPS might seem cheaper at first glance, but achieving the scalability, redundancy, and resilience of a serverless architecture would require multiple VPS instances, load balancers, and other infrastructure investments.
These costs often outweigh the perceived savings of “rolling your own” infrastructure. The key question isn’t whether serverless is inherently expensive but whether its cost aligns with the value it provides for specific workloads.
This is a point the author misses because he focuses only on anecdotes about running a small-scale mobile app.
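To make that question concrete, here is a rough back-of-envelope sketch. The per-request and per-GB-second figures below approximate AWS Lambda’s published list prices, but treat every number as an illustrative assumption (the VPS rate certainly is), and note that free tiers, egress, storage, and the human cost of operating the VPS are all ignored.

```python
# Back-of-envelope: flat-rate VPS vs. pay-per-use serverless for a simple
# request/response API. All figures are illustrative assumptions.

VPS_MONTHLY_USD = 20.00                  # assumed flat rate for one small VPS
REQUEST_PRICE_USD = 0.20 / 1_000_000     # assumed charge per request
GB_SECOND_PRICE_USD = 0.0000166667       # assumed charge per GB-second of compute
MEMORY_GB = 0.128                        # 128 MB function
AVG_DURATION_SEC = 0.100                 # 100 ms average invocation


def serverless_monthly_cost(requests_per_month: int) -> float:
    """Estimated monthly cost: per-request charge plus compute (GB-seconds)."""
    compute = requests_per_month * MEMORY_GB * AVG_DURATION_SEC * GB_SECOND_PRICE_USD
    return requests_per_month * REQUEST_PRICE_USD + compute


for requests in (100_000, 1_000_000, 50_000_000):
    print(f"{requests:>12,} req/month: serverless ≈ ${serverless_monthly_cost(requests):.2f}, "
          f"VPS = ${VPS_MONTHLY_USD:.2f}")
```

Under these assumptions, a hobby-scale workload costs pennies on serverless, which is exactly why a single-VPS anecdote proves little, while a sustained 50 million requests a month lands in the same ballpark as the flat-rate box. That crossover, plus whatever you would spend on redundancy and ops time to make the VPS comparable, is where the real evaluation happens.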
Complexity
Complexity is a central critique in the original article, with the claim that serverless architectures are unnecessarily complicated and prone to errors.
Here, the author misses an important distinction: complexity in serverless architectures often arises as a trade-off for scalability, abstraction, and reduced operational overhead.
While it is true that serverless requires adapting to certain technical constraints and best practices, this complexity is often front-loaded during the design phase and mitigated by the benefits of managed scaling, resilience, and infrastructure abstraction. The evidence he gives to support this claim? A Google Cloud snafu completely unrelated to Serverless:
“A missing field in the configuration” led to Google wiping out a $135 billion pension fund. “AWS should not have billed them for the failed write attempts and the companies should not have been writing tons of possibly confidential data to some random default location.”
The author criticizes serverless for being overly complex and prone to misconfigurations that can have severe consequences. However, the examples given in the article — such as a Google Cloud mishap — are not serverless at all. They are related to multi-tenant SaaS offerings provided by Google Cloud and instead highlight general challenges of cloud management.
Now we’re switching gears completely. The author’s focus shifts to microservices.
“On serverless? Good luck. Especially if you use microservices which serverless companies just love.”
He cynically derides “Serverless companies”. What exactly are “Serverless companies”? Honest question. I am familiar with quite a few cloud providers, but I have never heard anybody refer to Amazon AWS, Microsoft Azure, or Google Cloud as “Serverless companies”. They are cloud providers that have Serverless offerings.
The author argues that securing serverless and microservice architectures is harder than securing a traditional VPS due to the need for individualized configurations for each service.
“And it makes sense, if you have a VPS it’s fairly simple to lock down. There are tutorials to do it all over the internet, you want to install fail2ban and disable password login for the root user for starters. But on serverless? Good luck.”
Again, I am starting to notice that we are approaching the edges of the author’s knowledge. The idea of manually installing fail2ban and disabling password login for the root user on a Lambda function is comical at best, even to someone with the most shallow understanding of serverless offerings like Lambda and Azure Functions.
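Indeed, there is no sshd to firewall and no root account to disable on a Lambda function; “locking it down” means scoping its execution role to the bare minimum it needs. Here is a minimal sketch using boto3, with the role name, policy name, table ARN, and account ID all being hypothetical placeholders.

```python
import json

import boto3

iam = boto3.client("iam")

# Least-privilege inline policy for a hypothetical function's execution role:
# it may write its own CloudWatch logs and read one specific DynamoDB table,
# and nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "arn:aws:logs:*:*:*",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/example-table",
        },
    ],
}

iam.put_role_policy(
    RoleName="example-function-role",    # hypothetical execution role
    PolicyName="least-privilege-inline",
    PolicyDocument=json.dumps(policy),
)
```

The security work does not disappear; it shifts from patching and hardening an operating system to managing identities, permissions, and configuration, which is a different discipline rather than a missing one.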
It’s at this point that I had the epiphany moment and realized: maybe this guy doesn’t even know the difference between Cloud providers and Serverless. Maybe, in his head, he is legitimately substituting “Serverless” for “Cloud” in some cruel “CTRL-F, CTRL-R” search-and-replace mishap.
Comparing a VPS to serverless here is misleading. A single VPS is inherently less complex but also far less capable of handling modern workloads requiring elasticity and high availability. It is akin to comparing a bicycle to a freight train: both can move goods from Point A to Point B, and while the freight train is more complex to operate, that does not negate the utility it brings to the table. Does everybody need a freight train? No, and maybe that is fundamentally what the original author intends to highlight. However, to call the freight train a scam because you need to deliver a loaf of bread to grandma’s house is absurd.
If the article’s critique is truly about the complexity of serverless, it would benefit from focusing on platform-specific challenges like cold starts, debugging distributed systems, or managing event-driven architectures.
Uptime & Reliability
The author cites Firebase, EC2, and Azure outages to argue against serverless, but these examples are not exclusive to serverless services — they reflect general cloud infrastructure issues.
It’s also worth noting that many outages in serverless architectures are mitigated faster due to the scale and resources of cloud providers.
“When I used [Firebase], there was a massive outage involving Firebase Auth in 2023. It took a few hours to resolve which is pretty noticeable.”
The author contends serverless solutions are less reliable because of their complexity and reliance on cloud providers’ systems. They contrast this with their own VPS setup:
“Since switching to a VPS for my RSS reader, it’s never gone down.”
The claim that serverless systems are less reliable than a VPS is an oversimplification. While outages are an unfortunate reality of any computing platform, the article fails to account for the inherent resiliency of serverless services.
It comes down to what level of redundancy you need and what you’re willing to pay for. Multi-AZ is already better than your average on-premises data center, and enterprises are rarely able to implement resiliency equivalent to the multi-region solutions made increasingly possible and cost-effective by cloud platforms. The cloud makes it easier, but you gotta pay for it. Serverless is just another abstraction on top of the cloud.
“I’ve restarted it a few times to update some things.”
A VPS typically lacks redundancy, meaning even planned maintenance (like the kind the author references) requires downtime. In contrast, serverless platforms distribute workloads across multiple physical machines, ensuring higher availability during updates or hardware failures.
Arguing that a single VPS is inherently more reliable because the author hasn’t “noticed” any downtime is anecdotal and ignores the advantages of serverless resiliency at scale.
Security
The article describes serverless as a “security nightmare” due to its distributed nature and the complexity of locking down individual components.
“There’s just so much wrong here… writing tons of possibly confidential data to some random default location.”
Like other cloud services, serverless operates under a shared responsibility model, where the cloud platform takes on some responsibility and the tenant (you) takes on the rest. When working with serverless solutions, the cloud platform takes on more of that responsibility. This actually makes Serverless security a bit easier, since the provider takes on the job of locking things down on your behalf.
While securing serverless systems can be challenging, this critique applies equally to other distributed systems, including microservices hosted on traditional cloud infrastructure.
“Because microservices communicate through HTTP/S requests, so you better lock down each microservice individually.”
Building distributed systems does have inherent security challenges. However, serverless seeks to minimize those challenges by offloading more of that responsibility to the cloud platform. This is the trade-off. This is why you accept the vendor lock-in: the cloud platform makes a big chunk of this effort go away.
A VPS, while potentially simpler to secure, often comes with its own risks due to manual configuration and maintenance. Maybe this can be automated with things like Terraform? The author didn’t share what hosting platform their VPS runs on, but I am now curious whether there are Terraform providers for it. Serverless abstracts much of this operational overhead, offering built-in security features like encrypted storage, managed identity, and fine-grained access controls.
The article’s comparison fails to address the trade-offs between manual security management on a VPS versus leveraging the robust security frameworks built into serverless platforms.
Conclusion
The original article frames serverless computing as “a scam” but conflates serverless with broader cloud computing issues, claiming it has become overly complex, unreliable, and expensive due to monopolistic practices by cloud providers.
Many of the critiques — vendor lock-in, cost, complexity, reliability, and security — are valid concerns about cloud computing as a whole but are not exclusive to serverless. Serverless is a specific branch of cloud services, offering unique benefits such as abstraction, scalability, and reduced operational overhead. A more productive critique would focus on serverless-specific challenges, like the technical constraints of the platform, cost models at scale, and debugging in distributed architectures.
By addressing these nuances, the discussion could shift from broad generalizations to a meaningful evaluation of whether serverless is the right tool for specific workloads. As it stands, the article fails to substantiate its claim that “serverless is a scam,” and instead passes off critiques of cloud computing at large as if they were that case.
My biggest issue with this article is that the author seems to think he is arguing the question of serverless-or-not, but he is actually just arguing the question of cloud-or-not.
As you can see, both are very relevant questions, and each can be argued for and against depending on the situation and the trade-offs at play.