
LUC #25: Edge Computing vs Cloud Computing — Which is Best?

Plus, Git branching strategies explained, how Linux permissions work, and debugging tips

Welcome back, fellow engineers! We're excited to share another issue of Level Up Coding’s newsletter with you.

In today’s issue:

🔶 Edge computing vs cloud computing: which is best?
🔶 Git branching strategies explained (recap)
🔶 Debugging tips (recap)
🔶 How do Linux permissions work? (recap)

Read time: 8 minutes

A big thank you to our partner Postman who keeps this newsletter free to the reader.

You can now simulate real-world traffic on your local machine to performance test your APIs. It's a very useful release from Postman, and it's available on their free tier. Check it out.

Edge Computing vs Cloud Computing — Unpacking the Paradigms of Modern Computing

Around 20 years ago, in the early 2000s, the cloud model began to take off. Cloud providers like AWS, Microsoft Azure, and GCP (Google Cloud Platform) arrived on the scene, and the rest is history. From startups to enterprises, organizations shifted away from the traditional infrastructure-heavy approach and moved to the cloud.

Cloud computing completely changed the game.

However, in recent times, there’s been a growing buzz about edge computing.

The increasing demands of evolving technologies and the modern world’s need for speed have been the key forces driving the growth of edge computing. Edge computing can provide qualities that cloud computing struggles with. That’s not to say it is going to replace cloud computing; each has its own pros, cons, and use cases where it performs best. Let’s dive in.

What is the Difference Between Cloud Computing and Edge Computing?

In cloud computing, computation and data storage happen in remote data centers, commonly referred to as "the cloud." It’s a centralized system, with cloud service providers in charge of managing the computer resources.

Edge computing is a distributed approach to data storage and task processing. By placing servers on the "edge" of the network or on the actual devices, it moves computing closer to where it is needed. The edge is the location where devices connect to the network.

This change in where computation and data storage occur leads to edge computing being faster than cloud computing. Processing data closer to its source leads to reduced latency, reduced network congestion, and other benefits.
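To make the latency point concrete, here is a toy back-of-the-envelope model. The distances, fiber speed, and processing time below are illustrative assumptions, not benchmarks:

```python
# Toy model: round-trip latency = travel time in fiber + fixed processing time.
# All numbers are illustrative assumptions, not measurements.
FIBER_SPEED_KM_PER_MS = 200.0  # light in fiber covers roughly 200 km per ms

def round_trip_ms(distance_km, processing_ms=5.0):
    """Estimate round-trip time to a server distance_km away."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS + processing_ms

cloud_rtt = round_trip_ms(1500)  # a distant regional data center
edge_rtt = round_trip_ms(15)     # a nearby edge node
print(f"cloud: {cloud_rtt:.1f} ms, edge: {edge_rtt:.1f} ms")
```

Even in this simplified model, shrinking the distance by two orders of magnitude cuts the network portion of the delay to almost nothing; in practice, congestion and routing hops widen the gap further.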

Benefits of Cloud Computing Over Edge Computing

The centralized infrastructure underpinning the cloud model benefits from economies of scale and a consolidated approach. Centralizing computational resources, storage, and associated services in massive data centers leads to:

Cost-effective resource pooling: Large-scale costs are distributed across many customers, which can translate into savings for each client.

Simple scaling: Cloud service providers (CSPs) have a lot of resources available that can be allocated almost instantly, making scaling up or down simple.

Infrastructure management by CSPs: One of the hallmarks of cloud computing is the "as a Service" model, where everything from infrastructure to software is provided as a service. This means things like maintaining servers, updating software, and other infrastructure concerns are abstracted away from companies; the CSP handles them. With edge computing, by contrast, infrastructure management tends to be more hands-on.

Maturity of cloud computing: Since cloud computing has been around for longer than edge computing, there's a broader ecosystem. This includes a wider range of providers, matured service models, extensive documentation, and community support.

Benefits of Edge Computing Over Cloud Computing

The strength of edge computing lies in its decentralization, placing computational resources closer to data sources like sensors, smartphones, and other devices requiring real-time processing. This proximity-centric model addresses some limitations posed by a more centralized cloud approach including:

Reduced latency and bandwidth usage: By processing data closer to where it's generated, there's less delay (latency) because the data doesn't have to travel to a centralized data center and back. This also means less data traveling over the network, conserving bandwidth.

Real-time data processing: Edge computing devices can process data in real-time or near real-time. This is crucial for applications like autonomous vehicles or any scenario where even a tiny delay could have significant consequences.

Enhanced security: Sensitive data can be processed locally without being sent over a network to a central server. Less data traveling across networks reduces interception risks.

Distributed nature enhances resilience: With many nodes working independently, the system intrinsically avoids the pitfall of a single point of failure. Even if one node encounters issues, the rest can continue to function.

Disadvantages of Cloud and Edge Computing

While both computing paradigms have their advantages over each other, nothing is without its drawbacks. Below are some of the main drawbacks of each computing model.

With cloud computing, the reliance on internet connectivity brings inherent downtime risks; if the connection drops, access to the cloud is compromised. The centralized nature of data storage can attract cyberattacks, underscoring the need for rigorous security protocols. Moreover, routing data over vast distances can lead to latency issues, especially in applications where real-time responses are crucial.

For edge computing, with so many devices involved, maintaining and ensuring their security becomes a challenging task. While these devices benefit from proximity to the data source, they often lack the robust processing capabilities of centralized data centers. Furthermore, achieving data consistency across the myriad of edge nodes can be an intricate endeavor.

Where Each Computing Model is Best Suited

Because of their differing methodologies, cloud computing and edge computing each thrive in a distinct set of use cases.

Common Cloud Computing Use Cases

Data Analytics and Machine Learning: When dealing with vast datasets, the cloud's processing power is unrivaled.

Web Hosting: CSPs revolutionized web hosting, offering scalable, flexible, and cost-effective solutions tailored to SaaS products, e-commerce platforms, websites, and more.

Backup and Storage: The vast storage capacities the cloud offers are unmatched, making it the ideal solution for backing up data securely.

Edge Computing Use Cases

Edge computing is best suited to scenarios where real-time data processing is required, such as:

Autonomous vehicles: They rely heavily on edge computing's swift data handling to make split-second decisions.

Real-time gaming: The instantaneous response times that edge computing can provide ensure deeply immersive and fluid gaming experiences.

IoT ecosystem: A lot of IoT devices like industrial machinery, sensors, and more require real-time processing. Delayed data processing can lead to inefficiencies, errors, or even accidents.

So which computing model is best? Well, the answer is it depends. Neither is a one-size-fits-all solution. Both have their distinct advantages and disadvantages catering to specific needs and scenarios. Rather than viewing them as competing forces, it's more appropriate to see them as complementary.

One thing is for sure, though: the newer edge computing model is here to stay. It will play an important role in the coming years as businesses of all sizes continue to look for ways to improve in a crowded, competitive marketplace.

Git Branching Strategies Explained (Recap)

When formulating your branching strategy, take the most relevant features from the strategies below and apply your own set of tweaks. Every project and team has its own unique needs and boundaries, which should be reflected in their Git branching strategy.

🔶 Feature Branching: A popular method where each feature gets its own branch to ensure that changes are isolated and code reviews are simplified.

🔶 Gitflow: Has two permanent branches — a production and a pre-production branch, often referred to as the “prod” and “dev” branches. Features, releases, and urgent bug fixes get temporary branches. It’s a great approach for scheduled releases and handling multiple production versions.

🔶 GitLab Flow: A blend of feature and environment-based branching. Changes merge into a main branch, then to branches aligned with the CI/CD environment stages.

🔶 GitHub Flow: Similar to feature branching but with a twist. The main branch is always production-ready, and changes to this branch set off the CI/CD process.

🔶 Trunk-based Development: Branches are short-lived. Changes merge into the main branch within a day or two, and feature flags are used for changes that require more time to complete. This is ideal for large teams with disciplined processes.
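The feature flags mentioned under trunk-based development can be as simple as a lookup that routes a call to the old or new code path. A minimal sketch, where the flag store and the checkout functions are hypothetical (real systems often back flags with a config service or environment variables):

```python
# Hypothetical in-memory flag store; real teams typically use a config
# service, environment variables, or a feature-flag platform.
FLAGS = {"new_checkout": False}

def legacy_checkout(cart_total):
    # The stable code path that ships today.
    return f"charged {cart_total}"

def new_checkout(cart_total):
    # Unfinished work merged to trunk but shipped "dark" behind the flag.
    return f"charged {cart_total} (v2 flow)"

def checkout(cart_total):
    # The flag decides which path runs, so incomplete code can live on
    # the main branch without affecting users.
    handler = new_checkout if FLAGS.get("new_checkout") else legacy_checkout
    return handler(cart_total)
```

Flipping `FLAGS["new_checkout"]` to `True` activates the new path without any branch merge or redeploy, which is what lets trunk-based teams merge daily.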

Debugging Tips (Recap)

1) Define the problem
Identify the problem’s symptoms, and compare expected versus actual outcomes. Determine its scope, assess its severity and impact, and note steps to reproduce it. This clarity streamlines the troubleshooting process.

2) Reproduce it
Reproducing the bug is often the most effective way to pinpoint its cause. However, if this can't be done, try checking the environment where it occurred, search the error message online, assess the system's state at the time, note how often it happens, and identify any recurring patterns. These steps can offer vital clues.

3) Identify the cause
Logs are a big help in the debugging process; if they're insufficient, add more logs and reproduce the issue. Some additional strategies are to use debugging tools for insights, test components in smaller chunks, and try commenting out code sections to pinpoint the problem area.
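As a sketch of the "add more logs" advice, here is a minimal Python `logging` setup; the `apply_discount` function and its log messages are hypothetical examples of instrumenting a suspect code path:

```python
import logging

# Configure once at startup: DEBUG level plus timestamps makes it easy
# to correlate log lines with the failure you're chasing.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("checkout")

def apply_discount(total, percent):
    # Log inputs on entry so a failing run shows exactly what arrived.
    log.debug("apply_discount(total=%s, percent=%s)", total, percent)
    if not 0 <= percent <= 100:
        # A warning surfaces suspicious data without crashing the flow.
        log.warning("discount percent out of range: %s", percent)
    result = total * (1 - percent / 100)
    log.debug("discounted total=%s", result)
    return result
```

Logging the inputs and the computed result at each step turns a vague "wrong total" report into a trail you can follow to the exact line where expectation and reality diverge.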

4) Provide a postmortem
When a bug's cause is identified and resolved, thoroughly document the issue, the fix, and ways to prevent it in the future. Sharing this knowledge with the team is important to ensure everyone is informed and can benefit from the lessons learned, promoting a proactive approach to future challenges.

How Do Linux Permissions Work? (Recap)

Linux is a multi-user OS that has robust built-in user and group permissions.

These permissions provide the ability to limit who has access to a file or directory and what actions (read, write, or execute) they are allowed to perform.

There are three permission types for each file and directory:

🔶 Read (r): Allows reading of a file or listing of the directory's contents.
🔶 Write (w): Allows you to modify the contents of a file or create or delete files from a directory.
🔶 Execute (x): Allows a file to be run as a program, or a directory to be entered into.

There are three types of users to whom permissions are assigned:

🔶 User (u): The owner of the file or directory.
🔶 Group (g): Other users who are members of the file's group.
🔶 Others (o): All other users who are not the owner or members of the group.
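These three permission bits for three user classes map directly onto the nine-character string shown by `ls -l` and the octal numbers passed to `chmod`. A small sketch using Python's standard `stat` module (the `rwx` helper is our own, not part of the library):

```python
import stat

def rwx(mode):
    """Render a numeric mode (e.g. 0o754) as the familiar rwxr-xr-- string."""
    bits = [
        (stat.S_IRUSR, "r"), (stat.S_IWUSR, "w"), (stat.S_IXUSR, "x"),  # user
        (stat.S_IRGRP, "r"), (stat.S_IWGRP, "w"), (stat.S_IXGRP, "x"),  # group
        (stat.S_IROTH, "r"), (stat.S_IWOTH, "w"), (stat.S_IXOTH, "x"),  # others
    ]
    return "".join(ch if mode & bit else "-" for bit, ch in bits)

print(rwx(0o754))  # rwxr-xr--  (user: all, group: read+execute, others: read)
print(rwx(0o644))  # rw-r--r--  (a typical default for regular files)
```

Each octal digit is just the sum of its bits (read = 4, write = 2, execute = 1), which is why `chmod 754 file` grants the user 7 (4+2+1), the group 5 (4+1), and others 4.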

That wraps up this week’s issue of Level Up Coding’s newsletter!

Join us again next week where we’ll explore the SQL execution order, how OAuth 2.0 works, and how gRPC works.