
LUC #45: Demystifying System Design: An Easy-to-Follow Guide

Plus: caching eviction strategies explained, webhook vs polling, and what quantum computing is and how it works.

This week’s issue brings you:

• Demystifying System Design: An Easy-to-Follow Guide
• Caching Eviction Strategies Explained
• Webhook vs Polling: What’s the Difference? (Recap)
• What is Quantum Computing, and How Does it Work? (Recap)

READ TIME: 5 MINUTES

A big thank you to our partner Postman, who keeps this newsletter free for readers.

POST/CON, Postman's biggest API conference ever, runs from April 30 to May 1 in San Francisco, California! And until March 26, Postman is offering 30% off tickets. Check it out.

Demystifying System Design: An Easy-to-Follow Guide

As the saying goes, if you fail to plan, you plan to fail.

This adage holds a profound truth, especially when it comes to system design.

System design provides the blueprint for building an application. Without it, applications often become an unwieldy, costly mess.

The goal of system design is to create a dependable, efficient, secure, and user-friendly system that satisfies the requirements of the business and its users.

The process involves several steps, which we will walk through below in the general sequence in which they occur.

Let's dive in!

The Core of System Design

System design is the complex process of planning and structuring software solutions to meet specific goals.

It's the art and science of envisioning and defining software architectures, components, modules, interfaces, and data for a system to satisfy specified requirements.

This process is iterative, requiring continuous refining and adjustment of the design based on evolving requirements and challenges.

Diving Deeper into the System Design Process

1. Requirements analysis

The process begins by defining system requirements. This involves understanding the goals, tasks, and constraints of the system.

Talking with stakeholders, gathering detailed requirements, and setting specific goals are key.

This phase is especially critical as it lays the foundation for the other stages.

2. High-Level Design

Now we focus on the system's overall structure, creating the architectural overview of the system.

In this phase, we describe the major components of the system and how they interact with each other.

This step creates a basic map of the system, showing its components, the technologies it will use, and how it can grow and remain stable and easy to maintain.

3. Detailed Design

With the overall design of the entire system ready, we move on to the detailed specifications of each component.

This includes specifying algorithms, data structures, and the inner workings of each component, making sure everything fits well together.

4. Interface Design

Next up is interface design, which involves planning the user interfaces (UIs) and the application programming interfaces (APIs) for smooth interaction between different parts of the system.
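To make this concrete, here is a minimal sketch in Python of what pinning down an interface can look like. All names here are hypothetical; the point is that the rest of the system codes against the contract rather than any particular implementation.

```python
# A hypothetical service contract, defined before any real implementation.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Order:
    order_id: str
    total_cents: int


class OrderService(Protocol):
    """The interface other components depend on."""

    def get_order(self, order_id: str) -> Order: ...
    def place_order(self, order: Order) -> str: ...


class InMemoryOrderService:
    """A toy implementation that satisfies the contract; useful in tests."""

    def __init__(self) -> None:
        self._orders: dict[str, Order] = {}

    def get_order(self, order_id: str) -> Order:
        return self._orders[order_id]

    def place_order(self, order: Order) -> str:
        self._orders[order.order_id] = order
        return order.order_id


svc = InMemoryOrderService()
svc.place_order(Order("o-1", 1999))
print(svc.get_order("o-1"))  # Order(order_id='o-1', total_cents=1999)
```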

5. Database Design

Database design typically follows interface design. All phases are important, but this is one of the more critical ones.

This phase involves organizing data, designing tables, setting up relationships between them, deciding on indexes, and making sure the data is accurate, fast to access, and secure. It also defines how data will be stored, accessed, and manipulated.
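As a small, hedged illustration using Python's built-in sqlite3 (the tables, columns, and index are hypothetical), here is what a few of those decisions look like in practice:

```python
# Hypothetical schema showing relationships, integrity constraints, and an
# index chosen for the expected access pattern.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE          -- uniqueness keeps data accurate
    );
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        user_id     INTEGER NOT NULL REFERENCES users(id),  -- relationship
        total_cents INTEGER NOT NULL CHECK (total_cents >= 0)
    );
    -- Index chosen because orders will mostly be looked up by user.
    CREATE INDEX idx_orders_user_id ON orders(user_id);
""")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
print(conn.execute("SELECT id, email FROM users").fetchall())
```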

6. Security Design

In this step, we look at one very important element — security.

This is where we define how the system will protect data, ensure privacy, and handle potential threats.

Things like encryption, authentication, authorization, and vulnerability assessments are all discussed and planned here.
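As one small sketch (using only Python's standard library; these functions are illustrative, not a complete security design), here is what two of those decisions can look like in code:

```python
# Two hypothetical security-design decisions: salted password hashing for
# storage, and HMAC signature verification for authenticating requests.
import hashlib
import hmac
import secrets


def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)  # unique salt per user
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest


def verify_signature(secret: bytes, body: bytes, signature: str) -> bool:
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)  # constant-time compare
```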

7. Performance Design

The main focus of this phase is on the performance criteria listed in the initial requirements analysis. The outcome is a design that meets those requirements.

Here, we look at optimizing system responsiveness, throughput, and resource utilization, ensuring the system can handle the expected load and scale gracefully under peak conditions.

8. Error Handling and Logging

Failure is going to occur, so it’s important to anticipate and plan for it.

At this step, the emphasis is on analyzing potential areas of failure and determining how the system will respond.

This includes defining robust error handling mechanisms and logging strategies to diagnose issues, monitor system health, and facilitate recovery from failures.
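Here is a minimal sketch of such a policy in Python; fetch_report() is a hypothetical stand-in for a flaky downstream call, and the retry and backoff numbers are illustrative:

```python
# Retry with exponential backoff, logging each failure for diagnosis.
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("reports")


def fetch_report(attempt: int) -> str:
    # Stand-in for a flaky downstream call; fails on the first two attempts.
    if attempt < 2:
        raise ConnectionError("upstream unavailable")
    return "report-data"


def fetch_with_retries(max_attempts: int = 3) -> str:
    for attempt in range(max_attempts):
        try:
            return fetch_report(attempt)
        except ConnectionError as exc:
            # Log enough context to diagnose issues and monitor system health.
            logger.warning("attempt %d/%d failed: %s",
                           attempt + 1, max_attempts, exc)
            time.sleep(2 ** attempt)  # back off before retrying
    raise RuntimeError("all retries exhausted")


print(fetch_with_retries())  # succeeds on the third attempt
```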

9. Testability

Finally, designing for testability ensures that each component can be verified for correctness.

This step involves identifying which components will be tested, how the tests will be carried out, and how the findings will be communicated and used.
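As a tiny, hypothetical illustration, a component designed for testability can be verified in isolation with a plain unit test; add_vat() is an invented example function:

```python
# A small, pure function is easy to verify for correctness on its own.
import unittest


def add_vat(total_cents: int, rate: float = 0.20) -> int:
    return round(total_cents * (1 + rate))


class AddVatTest(unittest.TestCase):
    def test_standard_rate(self):
        self.assertEqual(add_vat(1000), 1200)


if __name__ == "__main__":
    unittest.main()
```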

Wrapping Up

System design is not a one-time task; it’s an iterative process.

It involves going back and forth between the steps outlined above to refine the solution.

Feedback loops and iterative cycles enhance the design’s robustness and adaptability, helping to ensure the system meets requirements.

Caching Eviction Strategies Explained

How data is updated and evicted is a key part of any caching strategy’s design. Here are five of the most popular caching eviction approaches:

🔶 Least Recently Used (LRU) - This strategy deletes the oldest unused data to make room for new content. It operates on the premise that data accessed recently will likely be needed again soon (see the sketch after this list).

🔶 Most Recently Used (MRU) - Contrary to LRU, MRU removes the most recently utilized data first and suits scenarios where once-used data isn’t needed again, like in streaming services.

🔶 Least Frequently Used (LFU) - LFU evicts the least frequently accessed data. It is more precise than LRU but adds complexity due to the need for tracking access frequency.

🔶 Time-To-Live (TTL) - TTL sets a predefined lifespan for stored data, ideal for information that becomes obsolete after a certain time, such as session data.

🔶 Two-Tiered Caching - This more complex approach divides data between two tiers: a high-speed, costly cache for frequently accessed data and a slower, more cost-effective cache for less popular data.
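To ground the most common of these, here is a minimal LRU sketch in Python; capacity and keys are illustrative, and for caching function results the standard library's functools.lru_cache provides this behavior out of the box:

```python
# A minimal LRU cache built on OrderedDict's insertion ordering.
from collections import OrderedDict


class LRUCache:
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self._data: OrderedDict[str, str] = OrderedDict()

    def get(self, key: str) -> str | None:
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key: str, value: str) -> None:
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used


cache = LRUCache(capacity=2)
cache.put("a", "1")
cache.put("b", "2")
cache.get("a")         # "a" is now the most recently used entry
cache.put("c", "3")    # evicts "b", the least recently used
print(cache.get("b"))  # None
```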

These strategies are also worth mentioning:

🔹 First in, First Out (FIFO): The oldest data is deleted first.

🔹 Random Replacement (RR): Randomly selects data to be deleted.

🔹 Adaptive Replacement Cache (ARC): Uses a self-tuning algorithm that tracks recency and frequency to determine which data to delete first.

Webhook vs Polling — What’s the Difference? (Recap)

Polling is a pull-based approach that operates on a 'check-in' system: clients call the server's API at set intervals to ask whether anything has changed. This ensures updates are consistently captured and communicated, even though they may not be instantaneous.

Webhooks represent a push-based methodology: the server sends a notification only when new data becomes available. When an update occurs, the server dispatches the information directly to a predefined webhook URL, with a payload containing details of the update. This mechanism allows for immediate data synchronization without the need for constant API requests.

Webhooks provide a more efficient and real-time solution, enabling immediate data synchronization as opposed to the delayed response of polling. However, they do come with the trade-off of increased complexity in setup and maintenance compared with polling.
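To make the contrast concrete, here is a sketch using only Python's standard library; the URL, port, and payload shapes are hypothetical:

```python
# Polling pulls on a fixed interval; a webhook receiver waits for a push.
import json
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


def poll_for_updates(url: str, interval_seconds: int = 30) -> None:
    """Polling: the client asks repeatedly, even when nothing has changed."""
    while True:
        with urllib.request.urlopen(url) as resp:  # one API call per tick
            updates = json.load(resp)
        if updates:
            print("changes:", updates)
        time.sleep(interval_seconds)               # wait, then ask again


class WebhookHandler(BaseHTTPRequestHandler):
    """Webhook: the server pushes a payload to a URL we registered."""

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))  # details of the update
        print("pushed update:", payload)
        self.send_response(204)                        # acknowledge receipt
        self.end_headers()


# HTTPServer(("", 8080), WebhookHandler).serve_forever()  # listen for pushes
```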

What is Quantum Computing, and How Does it Work? (Recap)

Quantum computers can perform multiple calculations simultaneously, which gives them far more processing power than classical computers for certain problems. Two of the primary principles behind this ability to process multiple possibilities concurrently are superposition and entanglement.

Unlike classical computing, which operates on a binary system of 1s and 0s, a quantum bit (qubit) can exist in multiple states at the same time; this is called ‘superposition’.

Entanglement means that two qubits can be intrinsically linked: the state of one qubit is directly related to the state of the other.
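For the mathematically inclined, both ideas have compact descriptions. A qubit in superposition is a weighted combination of both classical states:

|ψ⟩ = α|0⟩ + β|1⟩,  where |α|² + |β|² = 1

and the simplest entangled pair (a Bell state) is:

|Φ⁺⟩ = (|00⟩ + |11⟩) / √2

in which measuring one qubit immediately fixes the outcome for the other.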

Superposition and entanglement allow quantum computers to process information in a very different way from classical computers. Qubits can encode information far more densely than classical bits, and entanglement enables computational shortcuts, leading to algorithms that are far more efficient and powerful for certain problems.

That wraps up this week’s issue of Level Up Coding’s newsletter!

Join us again next week where we’ll explore event-driven architecture, GraphQL vs REST, how DDoS attacks work and how to prevent them, and SSL vs TLS.