LUC #73: Serverless Architecture Demystified: Strategies for Success and Pitfalls to Avoid
Plus, stateful vs stateless design, pub/sub pattern explained, and API gateway vs load balancer — what's the difference?

This week’s issue brings you:
Serverless Architecture Demystified: Strategies for Success and Pitfalls to Avoid
Pub/Sub Pattern Explained (Recap)
Stateful vs Stateless Design (Recap)
API Gateway vs Load Balancer — What's the Difference? (Recap)
READ TIME: 5 MINUTES
Thank you to our partners who keep this newsletter free to the reader.
Unlock your team's full potential with Depot. Depot’s remote build cache optimizes software builds, preventing wasted cycles and boosting productivity. Stop waiting and start shipping.
Take a look under the hood at Depot Cache. And try it free for 7 days.

Serverless Architecture Demystified: Strategies for Success and Pitfalls to Avoid
November 2014.
That’s when Amazon announced AWS Lambda at AWS re:Invent.
The concept of serverless computing was beginning to gain prominence, and AWS Lambda took it into the mainstream.
For the past decade, we’ve had the privilege of server management being abstracted away from us, with several options now available for how much abstraction we want.
Prior to ~2014, before the advent of container orchestration services and serverless computing, server management was a far more manual and complex process.
Serverless architectures have significantly transformed cloud computing.
Today we’ll be looking into serverless architecture, best practices, pitfalls to avoid, and when and where it works best.
Let’s jump in!
The Essence of Serverless Computing
Serverless computing abstracts server management tasks from the development team’s workload, allowing applications to run without provisioning or maintaining servers.
Instead of managing infrastructure directly, serverless relies on Functions-as-a-Service (FaaS) for executing code on demand and Backend-as-a-Service (BaaS) for managing cloud-based backend services.
FaaS (e.g., AWS Lambda, Google Cloud Functions) executes code on demand in response to events.
BaaS (e.g., DynamoDB, Firebase) provides fully managed backend services like databases, authentication, and messaging without requiring server administration.
Serverless computing allows cloud providers to dynamically allocate resources and charge only for actual compute time used, rather than pre-allocated capacity.
Serverless architectures can support a wide range of applications, from simple CRUD operations to complex, event-driven data processing workflows.
Serverless fosters a focus on code and functionality, streamlining the deployment of applications that can automatically adapt to fluctuating workloads.
This article looks at serverless through the lens of FaaS.
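To make the FaaS model concrete, here is a minimal sketch of a function handler in the style of AWS Lambda's Python runtime. The `(event, context)` signature matches Lambda's convention; the event shape and response body here are illustrative assumptions, not any particular service's contract.

```python
import json

def handler(event, context):
    """Entry point the FaaS runtime invokes once per event.

    The platform provisions compute, calls this function,
    and tears resources down (or keeps them warm) afterwards.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The developer ships only this function; scaling, patching, and capacity planning are the provider's job.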

Key Practices
To fully take advantage of serverless architectures, here are some best practices:
Design for failure
Ensuring your application can effectively handle failures is essential in a serverless setup.
Strategies like retry mechanisms and circuit breakers can help maintain reliability and availability.
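Both strategies can be sketched in a few lines. The following is a simplified illustration (real deployments typically lean on platform retry policies or libraries): retries with exponential backoff absorb transient failures, while a circuit breaker stops hammering a downstream service that keeps failing.

```python
import time

def with_retries(func, attempts=3, base_delay=0.1):
    """Retry a flaky call, doubling the delay after each failure."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * (2 ** attempt))

class CircuitBreaker:
    """Opens after `max_failures` consecutive failures, blocking further calls."""
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, func, *args, **kwargs):
        if self.failures >= self.max_failures:
            raise RuntimeError("circuit open: downstream presumed unhealthy")
        try:
            result = func(*args, **kwargs)
            self.failures = 0  # a success resets the breaker
            return result
        except Exception:
            self.failures += 1
            raise
```

In serverless setups, retries are often handled by the event source itself (queues, event buses), so make your functions idempotent: a retried invocation may run the same event twice.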
Optimize for performance
Serverless performance optimization has two goals: reduce cold start latency and maximize resource utilization.
Cold starts occur when a function is invoked after a period of inactivity, requiring the cloud provider to allocate resources. This can lead to significant latency, particularly for languages with heavier startup times.
Lightweight functions, programming language selection, and aligning memory and computing resources with function requirements can all help to reduce startup times and costs.
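One widely used technique is to initialize expensive resources once, outside the per-request path, so that warm invocations reuse them. A rough sketch (the cached "client" below is a stand-in for a real database connection or SDK client):

```python
import time

# Module-level cache: populated once per container at cold start,
# then reused by every warm invocation in that container.
_client = None

def get_client():
    """Lazily create the expensive resource on first use."""
    global _client
    if _client is None:
        time.sleep(0.01)  # stand-in for loading config / opening connections
        _client = {"connected": True}
    return _client

def handler(event, context):
    client = get_client()  # cheap on every warm invocation
    return {"warm": client["connected"]}
```

Keeping the deployment package small and choosing a fast-starting runtime attack the same problem from the other direction: less to load means a shorter cold start.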
Security considerations
A proactive approach to security is a must.
To protect your serverless applications, implement the least privilege principle, secure your API gateways, and encrypt data.
Cost management
While serverless is often cost-effective, improper utilization can result in increased costs.
Monitor usage patterns and adjust resource allocations to keep expenses under control.
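A back-of-envelope estimate makes the levers visible: serverless compute is typically billed per GB-second plus a small per-request fee. The rates below are illustrative assumptions; always check your provider's current pricing.

```python
def estimate_monthly_cost(invocations, avg_duration_ms, memory_mb,
                          gb_second_rate=0.0000166667,
                          per_request_rate=0.0000002):
    """Rough serverless bill: compute time (GB-seconds) plus request charges.

    Rates are illustrative placeholders, not any provider's official pricing.
    """
    gb_seconds = (memory_mb / 1024) * (avg_duration_ms / 1000) * invocations
    return gb_seconds * gb_second_rate + invocations * per_request_rate
```

Because cost scales linearly with both memory and duration, halving a function's memory allocation or shaving its runtime cuts the compute portion of the bill proportionally, which is why right-sizing matters.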
While the above practices yield results, there are also common pitfalls to be mindful of:
Ignoring cold start latency
The user experience can be significantly impacted by cold starts.
Reduce them by using warm-up techniques and optimizing your code.
Overlooking security in a shared environment
Don’t let the convenience of serverless computing lull you into complacency.
Inadequate function permissions and neglecting data encryption are common oversights.
Ensure that robust security measures are in place.
Complexity in managing multiple services
The granular nature of serverless can result in architectural complexity, particularly when integrating multiple services and functions.
Adopting Infrastructure as Code (IaC) and serverless frameworks streamlines management.
Limited control and vendor lock-in
Dependence on a single cloud provider can limit your control and flexibility.
Serverless solutions should be evaluated for flexibility and portability to ensure they align with long-term architectural goals.
When and Where Going Serverless Makes Sense
Serverless excels with event-driven applications due to its reactive execution model.
For microservices, it enables independent scaling and deployment.
It also works well for projects with fluctuating traffic through automatic, efficient scaling.
It's ideal for rapid development, allowing focus on coding over infrastructure management.
And the pay-as-you-go model also can be well-suited for cost-sensitive projects.
However, serverless architecture generally doesn’t fit well with long-running tasks due to execution time limits.
Applications requiring low latency can suffer because of potential cold start delays.
And cases needing precise environmental control may not be a great fit, as serverless offers limited infrastructure customization.
Assess your project's specific needs (performance, costs, scalability, and so on) to determine whether serverless aligns with the project goals.
Wrapping Up
Serverless architectures have simplified server management, enabling developers to focus more on code and functionality than on managing infrastructure.
Despite its benefits, navigating serverless computing requires an understanding of its complexities and limitations.
By adhering to best practices and being mindful of potential pitfalls, developers can leverage serverless technologies to build scalable, cost-efficient, and resilient applications.
Pub/Sub Pattern Explained (Recap)
The Publish/Subscribe Pattern is an approach to messaging that is commonly used in distributed systems.
There are three entities involved — publishers, topics, and subscribers.
Subscribers tell the system which messages they would like to be informed about by subscribing to a topic.
Publishers send messages to topics, without knowledge of who the message should be sent to.
A message broker or event bus then forwards these messages to the appropriate subscribers.
Message senders and receivers are heavily decoupled in a Pub/Sub model. This leads to improved scalability, flexibility, and fault tolerance.
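The three entities above fit in a few lines of code. This is a minimal in-memory sketch (real systems use a managed broker such as a message queue or event bus, with delivery guarantees this toy omits):

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory message broker for the pub/sub pattern."""
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """A subscriber registers interest in a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """A publisher sends to a topic without knowing who is listening;
        the broker forwards the message to every subscriber of that topic."""
        for callback in self._subscribers[topic]:
            callback(message)
```

Note that the publisher and subscribers never reference each other; either side can be added, removed, or scaled without touching the other, which is where the decoupling benefits come from.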

Stateful vs Stateless Design (Recap)
“State” refers to stored information that systems use to process requests.
Stateful applications store data like user IDs, session information, configurations, and preferences to help process requests for a given user.
As applications grew in complexity and received increasing amounts of traffic, the limitations of stateful design became apparent. The rapid need for scalability and efficiency drove the popularity of stateless design.
With stateless design, each request contains all the information needed to process it.
Stateless design has been pivotal in several areas including Microservices and Serverless Computing.
It does have it’s challenges though including larger requests sizes, and transmission inefficiencies.
Most applications pick a hybrid approach between stateful and stateless design depending on the needs and constraints of each component of the system.
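A small sketch of the stateless side: the request itself carries the context a stateful server would have kept in a session store. The base64-encoded blob below is only a stand-in; real systems use signed or encrypted tokens (e.g., JWTs) so the server can trust the carried state.

```python
import base64
import json

def make_request(user_id, preferences, action):
    """In a stateless design, every request carries its own context."""
    context = base64.b64encode(json.dumps(
        {"user_id": user_id, "preferences": preferences}
    ).encode()).decode()
    return {"context": context, "action": action}

def handle(request):
    # Any server instance can process this request --
    # nothing is read from local session state.
    context = json.loads(base64.b64decode(request["context"]))
    lang = context["preferences"]["lang"]
    return f"{context['user_id']} -> {request['action']} ({lang})"
```

The trade-off mentioned above is visible here: the request is larger because it repeats context on every call, in exchange for letting any instance serve it.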

API Gateway vs Load Balancer — What's the Difference? (Recap)
API gateways focus on request management and microservice communication, while load balancers focus on traffic distribution and server load management.
API gateways operate at the application layer (L7), while load balancers can operate at either the transport (L4) or application (L7) layer.
API gateways offer features like routing, rate limiting, authentication, service discovery, parameter validation, circuit breakers, and more, while load balancers handle traffic distribution.
API gateways are ideal for microservice architectures needing centralized API management, while load balancers are essential for applications requiring high availability, distributing traffic across multiple servers.
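The core distinction can be caricatured in code. In this deliberately simplified sketch (the service names and route table are made up), the load balancer ignores request content and just spreads load, while the gateway inspects the request and routes by path:

```python
from itertools import cycle

class LoadBalancer:
    """L4-style behavior: pick the next server, ignore request content."""
    def __init__(self, servers):
        self._pool = cycle(servers)  # simple round-robin

    def route(self, request):
        return next(self._pool)

class ApiGateway:
    """L7-style behavior: inspect the request and dispatch by path prefix."""
    def __init__(self, routes):
        self._routes = routes  # path prefix -> backend service name

    def route(self, request):
        for prefix, service in self._routes.items():
            if request["path"].startswith(prefix):
                return service
        raise LookupError("no route for " + request["path"])
```

In practice the two are complementary: a gateway often sits in front of the system handling routing and auth, with load balancers spreading traffic across the instances behind each service.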

That wraps up this week’s issue of Level Up Coding’s newsletter!
Join us again next week where we’ll explore and visually distill more important engineering concepts.