
Is Serverless Truly Dead?

Serverless architecture has been a hot topic in recent years, but some are asking if its popularity has peaked.

This article explains what exactly serverless computing entails and its main challenges, explores the origins of the “serverless is dead” discussion, and analyzes whether serverless still has a promising future.

By the end, you’ll have a balanced perspective on the viability of this cloud computing model.

What is meant by serverless computing?

Serverless computing refers to cloud services and architectures in which the cloud provider dynamically allocates machine resources on demand, taking care of the servers on behalf of its customers.

In serverless, the code is executed in stateless compute containers that can be triggered on-demand by specific events. This allows developers to build and deploy auto-scaling applications without having to manage infrastructure themselves.

In other words, the key advantage of serverless is that it removes the need for developers to worry about provisioning and maintaining servers. The cloud provider handles all that behind the scenes.

Of course, servers still run the code somewhere, but the term “serverless” means that service providers provision them on demand and abstract all server management. From a developer’s perspective, there are no servers to administer or think about.
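
To make this concrete, here is a minimal sketch of what a serverless function can look like in practice, assuming an AWS Lambda-style Python handler triggered by an HTTP request routed through API Gateway. The handler signature and event shape follow Lambda's conventions, but the example itself is illustrative rather than taken from any specific project.

```python
import json


def lambda_handler(event, context):
    """Minimal Lambda-style handler: it runs in a stateless container that the
    platform starts on demand when an event (here, an HTTP request) arrives.
    The developer writes only this function; no server is provisioned."""
    # API Gateway proxy events carry query parameters in this field.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deploying something like this requires no capacity planning: the platform starts a container when a request arrives, reuses it while traffic continues, and scales the number of concurrent executions automatically.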

What are the most common limitations of serverless?

While serverless provides many benefits, it also comes with distinct challenges teams need to be aware of.

  • The ephemeral nature of serverless architectures can make logging and monitoring, as well as troubleshooting issues, more difficult.
  • Serverless can also broaden the attack surface, since every function and event trigger is a potential entry point, and the opacity of serverless platforms gives engineers less visibility into the security of the underlying infrastructure.
  • Cost forecasting and optimization are another common headache. Billing is granular, charged per execution, which can lead to surprisingly high bills if workloads are not managed properly, and the automated scaling of serverless makes costs harder to predict and control; a rough estimate of how these charges add up is sketched after this list.
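
As a rough illustration of how per-execution billing adds up, the sketch below estimates a monthly bill from invocation count, average duration, and memory size. The prices are approximations of AWS Lambda's published x86 rates and are used purely for illustration; always check your provider's current pricing and free tier before relying on numbers like these.

```python
# Illustrative prices only (approximate AWS Lambda x86 list rates).
PRICE_PER_REQUEST = 0.20 / 1_000_000   # USD per invocation
PRICE_PER_GB_SECOND = 0.0000166667     # USD per GB-second of compute


def monthly_cost(invocations: int, avg_duration_ms: float, memory_mb: int) -> float:
    """Back-of-the-envelope monthly cost for a single function."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND


# A function that feels cheap per call, invoked very often:
# 50M invocations/month, 200 ms average duration, 512 MB memory.
print(f"${monthly_cost(50_000_000, 200, 512):,.2f} per month")
```

Under these assumptions, the "cheap" function ends up costing on the order of $90 per month once it is invoked fifty million times, before counting data transfer or the downstream services it calls.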

For many use cases, these drawbacks do not outweigh the benefits, which explains why serverless remained so popular among developers and cloud architects until the Prime Video case study was released.

Prime Video Sparked “Serverless is Dead” Sentiment

After years of popularity, serverless architectures started facing some backlash over the course of 2023. Much of it originated from a case study published by Amazon’s Prime Video team.

The Prime Video team initially built a large-scale video monitoring system using serverless components such as AWS Lambda and AWS Step Functions. However, they found that this architecture hit scaling and cost bottlenecks at only around 5% of the expected load.

Prime Video therefore transitioned to a monolithic design, consolidating all components into a single process deployed on Amazon ECS. This removed the need for orchestration between services and slashed costs by over 90%. The monolithic approach also scaled much better, handling thousands of concurrent video streams; when more capacity was needed than a single instance could provide, the team simply cloned the monolith and distributed the load.

Though microservices can work well at scale, for Prime Video’s specific use case, a monolithic architecture proved far superior for cost, scalability and performance. This real-world example of serverless limitations gained widespread attention, popularizing the idea that “serverless is dead”.

What does the Prime Video example truly teach us?

While the Prime Video example reveals real limitations of serverless architectures for a specific use case, it would be a mistake to generalize that experience into proof that serverless itself is dead or not viable.

The reality is, there is no one-size-fits-all solution in cloud computing or in technology in general. Every company and use case has unique constraints and requirements.

Monoliths can be better suited for workloads like Prime Video’s, but in many other situations serverless provides immense benefits over monolithic designs in terms of automation, scalability, and cost-efficiency. The key is choosing the right architecture for the specific goals and workload. Serverless excels for sporadic, unpredictable, event-driven workloads that need to scale rapidly.
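
Sporadic workloads are where per-execution billing works in your favor. The toy comparison below contrasts a job that needs roughly ten minutes of compute per day, run as a serverless function, with the same job hosted on a small always-on virtual machine. The prices are rough placeholders (about $0.02 per hour for the VM and the Lambda compute rate from the earlier sketch), so substitute your provider's actual rates.

```python
# Placeholder prices, for illustration only.
LAMBDA_GB_SECOND = 0.0000166667   # USD per GB-second of compute
SMALL_VM_HOURLY = 0.02            # USD per hour for a small always-on instance

daily_exec_seconds = 10 * 60      # ~10 minutes of actual compute per day
memory_gb = 0.5

# Per-request charges are negligible at one invocation per day, so they are
# omitted here for simplicity.
serverless_monthly = 30 * daily_exec_seconds * memory_gb * LAMBDA_GB_SECOND
always_on_monthly = 30 * 24 * SMALL_VM_HOURLY

print(f"serverless: ~${serverless_monthly:.2f}/month, "
      f"always-on VM: ~${always_on_monthly:.2f}/month")
```

Under these assumptions the serverless version costs well under a dollar per month while the idle VM costs over ten dollars; the picture inverts for sustained, high-throughput workloads like Prime Video’s.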

Rather than dismissing serverless altogether and claiming it is universally “dead”, the Prime Video case study mainly highlights the importance of thorough architecture evaluation and selecting optimal designs for each use case.

The future of serverless

Serverless remains a compelling paradigm for many situations.

In contexts like IoT, serverless computing has proven to be especially effective.

Let’s consider the pharmaceutical industry, an area I am particularly familiar with. In life sciences, where IoT devices are being developed at a significant pace, there are many good examples of successful serverless implementations. Companies such as Medtronic rely on serverless solutions for continuous patient monitoring, while pharmaceutical firms like Roche have built their DigitalLab using Lambda functions and Greengrass to analyze data from their laboratories.
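
To give a flavor of this pattern, here is a hypothetical sketch of a serverless function that ingests device or laboratory telemetry, assuming an AWS IoT Core rule forwards each reading to a Lambda function and a DynamoDB table named lab-sensor-readings already exists. It does not reflect the actual Medtronic or Roche implementations, only the general shape of the approach.

```python
import json
import os

import boto3

# Hypothetical table name; in a real deployment this is set via environment.
TABLE_NAME = os.environ.get("READINGS_TABLE", "lab-sensor-readings")
table = boto3.resource("dynamodb").Table(TABLE_NAME)


def lambda_handler(event, context):
    """Illustrative handler for a telemetry message forwarded by an IoT rule.
    Each invocation processes one reading and flags out-of-range values;
    concurrency scales automatically with the number of devices reporting."""
    reading = event  # the rule action passes the message payload as the event
    item = {
        "device_id": reading["device_id"],
        "timestamp": reading["timestamp"],
        "temperature_c": str(reading["temperature_c"]),
        # Example threshold, e.g. a cold-chain storage limit.
        "out_of_range": reading["temperature_c"] > 8.0,
    }
    table.put_item(Item=item)
    return {"statusCode": 200, "body": json.dumps({"stored": item["device_id"]})}
```

The appeal for IoT fleets is that each message triggers its own short-lived execution, so the ingestion layer grows and shrinks with the number of devices reporting, with no servers for the lab or hospital IT team to operate.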

While serverless computing may well decline further in popularity, I expect that such a decline would result primarily from the growing adoption of edge computing.

Edge computing is a distributed architecture in which client data is processed locally, at the periphery of the network, as close to the data source as possible. Unlike centralized serverless platforms, where cold starts and network round trips can add latency, this approach reduces latency and improves responsiveness.

With ultra-fast 5G networks expanding, more computation can occur at the edge rather than in centralized data centers.

This means that, looking ahead, developers will have the flexibility to choose the optimal approach between serverless and edge computing for each architecture.

The engineering decision ultimately depends on the constraints and goals of each project.

Conclusion

Serverless computing is neither universally perfect nor “dead.”

The ongoing debate around serverless highlights that there is still much to learn about cloud architectures, and diverse viewpoints to take into account.

With emerging technologies like edge computing, the range of options continues to expand. Precisely because there is no one-size-fits-all solution, it’s crucial to avoid dogmatic views and to evaluate options pragmatically, based on each project’s unique constraints and requirements.

Manfredi Pomar

Italian cloud computing professional with a strong background in project management & several years of international experience in business consulting. His expertise lies in bridging the gap between business stakeholders & developers, ensuring seamless project delivery. During his free time, he enjoys fatherhood and immersing himself in nature.
