
LUC #57: Essential Caching Strategies for Optimal Performance

Plus, how the most prominent API architectural styles work, the circuit breaker pattern explained, and how a service mesh works

This week’s issue brings you:

  • Essential caching strategies for optimal performance

  • How the most prominent API architectural styles work

  • The circuit breaker pattern explained

  • How a service mesh works

READ TIME: 4 MINUTES

A big thank you to our partner Postman who keeps this newsletter free to the reader.

Imagine if you could auto-generate API tests. That’s now possible. All you have to do is send a request; Postbot can take care of the rest for you. Check it out.

Caching Strategies: A Comparative Overview

In the current world of big data and high-speed applications, performance is a prominent consideration for any development team.

Caching is one of the most widely used techniques for boosting performance due to its simplicity and wide range of use cases.

With caching, data is copied and stored in locations that are quick to access, such as the browser or a CDN.

How data is updated and cleared is a key component of the design of any caching strategy. There are many techniques to choose from, all with their own unique set of use cases that they aim to accommodate.

Least Recently Used (LRU) is an approach to cache management that frees up space for new data by removing the data that has not been accessed for the longest period of time.

It assumes that recently accessed data will be needed again soon.

This is quite a common approach and is often used in browsers, CDNs, and operating systems.
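
To make this concrete, here is a minimal LRU cache sketched in Python on top of collections.OrderedDict; the class name, capacity, and usage values are illustrative assumptions rather than a production implementation (Python's standard library also offers functools.lru_cache for caching function results).

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        # Mark the entry as most recently used by moving it to the back.
        self._data.move_to_end(key)
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            # Evict the least recently used entry (the front of the dict).
            self._data.popitem(last=False)

cache = LRUCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # "a" is now the most recently used entry
cache.put("c", 3)  # evicts "b", the least recently used entry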

Most Recently Used (MRU) is the opposite of LRU, where the most recently used data is removed first.

This approach is more commonly used in streaming or batch-processing platforms where data is unlikely to be needed again once it has been processed.
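
The LRU sketch above can be flipped into an MRU cache with a one-line change: when the cache is full and a new key arrives, evict from the opposite end with popitem(last=True) before inserting, so the most recently used entry is discarded first.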

Least Frequently Used (LFU) removes the data that is accessed least often.

Although it can be more accurate than LRU, it requires a mechanism to keep count of how often each item is accessed, which adds complexity.

LFU also has the risk of keeping outdated data in the cache.

For these reasons, it is often used in combination with other strategies such as LRU.
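
As a rough illustration, the sketch below tracks a per-key access count and evicts the key with the lowest count; the names and tie-breaking behavior are assumptions, and the linear scan at eviction time shows the extra bookkeeping cost mentioned above (real implementations use structures such as frequency buckets instead).

from collections import defaultdict

class LFUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = {}
        self._counts = defaultdict(int)  # access frequency per key

    def get(self, key):
        if key not in self._data:
            return None
        self._counts[key] += 1
        return self._data[key]

    def put(self, key, value):
        if key not in self._data and len(self._data) >= self.capacity:
            # Evict the least frequently used key; ties are broken arbitrarily.
            victim = min(self._counts, key=self._counts.get)
            del self._data[victim]
            del self._counts[victim]
        self._data[key] = value
        self._counts[key] += 1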

With Time-To-Live (TTL), data is kept in the cache for a pre-defined period of time.

This is ideal for cases where the current state of data is only valid for a certain period of time, such as session data.
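
A minimal TTL sketch in the same style, with expiry checked lazily on reads; the default lifetime of 300 seconds is an arbitrary assumption.

import time

class TTLCache:
    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._data = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            # The entry has outlived its TTL: drop it and treat it as a miss.
            del self._data[key]
            return None
        return value

    def put(self, key, value):
        self._data[key] = (value, time.monotonic() + self.ttl)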

Two-tiered caching provides a more complex approach that strikes a balance between speed and cost. In this design, data is split up between a first and second tier.

The first tier is a smaller, faster, and often more expensive caching tier that stores frequently used data.

The second tier is a larger, slower, and less expensive tier that stores data that is used less frequently.
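
One way to picture this is to compose two caches, reusing the LRUCache sketched earlier; the tier sizes are arbitrary assumptions, with tier 1 standing in for fast, expensive storage (such as memory) and tier 2 for a larger, cheaper one.

class TwoTierCache:
    def __init__(self, tier1_capacity: int = 100, tier2_capacity: int = 10_000):
        self.tier1 = LRUCache(tier1_capacity)  # small and fast
        self.tier2 = LRUCache(tier2_capacity)  # large and slower

    def get(self, key):
        value = self.tier1.get(key)
        if value is not None:
            return value
        value = self.tier2.get(key)
        if value is not None:
            # Promote data that is being used again into the faster tier.
            self.tier1.put(key, value)
        return value

    def put(self, key, value):
        # Write through both tiers; LRU eviction pushes cold entries out of
        # tier 1 while tier 2 retains a larger working set.
        self.tier1.put(key, value)
        self.tier2.put(key, value)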

The five strategies mentioned above are the most popular approaches to caching. There are other notable mentions, such as the following:

  • First In, First Out (FIFO): The oldest data is deleted first.

  • Random Replacement (RR): Randomly selects data to be deleted.

  • Adaptive Replacement Cache (ARC): Uses a self-tuning algorithm that tracks recency and frequency to determine which data to delete first.

The best caching strategy depends on the system’s specific requirements and constraints. Understanding and appropriately leveraging the different caching strategies available can make a significant difference in the performance of your application.

How Do The Most Prominent API Architecture Styles Work? (Recap)

REST — Uses HTTP methods for operations, providing a consistent API interface. Its stateless nature ensures scalability, while URI-based resource identification provides structure.

GraphQL — Unlike REST, it uses a single endpoint, allowing clients to specify exactly the data they need and receive it in a single query.

SOAP — Once dominant, SOAP remains vital in enterprises for its security and transactional robustness. It’s XML-based, versatile across various transport protocols, and includes WS-Security for comprehensive message security.

gRPC — Offers bidirectional streaming and multiplexing using Protocol Buffers for efficient serialization. It supports various programming languages and diverse use cases across different domains.

WebSockets — Provides a full-duplex communication channel over a single, long-lived connection. It is ideal for applications requiring real-time communication.

MQTT — A lightweight messaging protocol optimized for high-latency or unreliable networks. It uses an efficient publish/subscribe model.

Explaining The Circuit Breaker Pattern (Recap)

The circuit breaker pattern is a fundamental design strategy to bolster system resilience and prevent disruptions.

Here's how it works. It operates in three states, sketched in code after the list:

Closed state — In this default state, the system processes all requests normally. If failures exceed a threshold, it transitions to the open state to allow recovery.

Open state — The circuit breaker in the open state allows troubleshooting and recovery by stopping all incoming requests to the impacted service, preventing system-wide failures and major disruptions.

Half-open state — During this phase, a few test requests verify if the issues are resolved. If successful, the system returns to the closed state. If not, it reverts to the open state for further resolution.
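
Here is a minimal sketch of the three states in Python; the failure threshold and recovery timeout are arbitrary assumptions.

import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 5, recovery_timeout: float = 30.0):
        self.failure_threshold = failure_threshold
        self.recovery_timeout = recovery_timeout
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, func, *args, **kwargs):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.recovery_timeout:
                self.state = "half-open"  # let a test request through
            else:
                raise RuntimeError("circuit open: request rejected")
        try:
            result = func(*args, **kwargs)
        except Exception:
            self._record_failure()
            raise
        # A success in the half-open (or closed) state resets the breaker.
        self.failures = 0
        self.state = "closed"
        return result

    def _record_failure(self):
        self.failures += 1
        if self.state == "half-open" or self.failures >= self.failure_threshold:
            self.state = "open"
            self.opened_at = time.monotonic()

Any callable can then be wrapped, for example breaker.call(fetch_user, user_id) with a hypothetical fetch_user function; while the circuit is open, requests are rejected immediately instead of hitting the failing service.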

What Is a Service Mesh, and How Does It Work? (Recap)

A service mesh is a dedicated infrastructure layer for facilitating service-to-service communication.

How a service mesh operates (a conceptual sketch in code follows the list):

Deployment and proxy injection — Each service gets a sidecar proxy that manages its inbound and outbound traffic.

Service discovery — Proxies use the control plane to find services and determine the best communication paths.

Traffic routing — Requests are directed based on predefined rules, optimizing load balancing and routing.

Security and policy enforcement — Traffic encryption and policy application happen automatically, ensuring secure and authorized communication.

Observability and monitoring — Proxies collect metrics and logs, providing visibility into system performance and service health.

Failure recovery — The control plane dictates recovery actions, such as retries or alternative routing, in case of service failures.
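
As a purely conceptual model, the toy Python sketch below mirrors the steps above in a single process; every name in it is hypothetical, and in a real mesh this work is done by infrastructure (sidecar proxies such as Envoy and a control plane such as Istio's), not by application code.

import random

class ControlPlane:
    def __init__(self):
        self.registry = {}  # service name -> list of instance addresses

    def register(self, name, address):
        self.registry.setdefault(name, []).append(address)

    def discover(self, name):
        # Pick an instance; a real control plane applies routing rules here.
        return random.choice(self.registry[name])

class SidecarProxy:
    def __init__(self, control_plane, max_retries: int = 2):
        self.control_plane = control_plane
        self.max_retries = max_retries
        self.metrics = []  # observability: record the outcome of every call

    def call(self, service_name, send):
        last_error = None
        for attempt in range(1 + self.max_retries):
            address = self.control_plane.discover(service_name)
            try:
                response = send(address)  # the routed (and, in reality, encrypted) hop
                self.metrics.append((service_name, address, "ok"))
                return response
            except ConnectionError as error:
                # Failure recovery: retry, likely against another instance.
                self.metrics.append((service_name, address, "error"))
                last_error = error
        raise last_error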

That wraps up this week’s issue of Level Up Coding’s newsletter!

Join us again next week where we’ll explore concurrency vs parallelism, MQTT protocol, database indexing, and more.