
Cloud Native – Foundation of Modern Software Development

Explore Cloud Native principles: containers, microservices, and service mesh. Learn how to design scalable, resilient, and cloud-agnostic applications.

The term “Cloud Native” has existed in the IT landscape for several years, and most of us are already applying its principles – at least in part. To refresh the core subjects behind these principles, we want to share an overview covering the basics with you. In general, Cloud Native describes an approach to the entire software development cycle that focuses on cloud infrastructure (public and private cloud). It targets designing, building, and running applications so that they utilize the advantages of cloud computing.

Technical Considerations

When you start to dive into Cloud Native from a more technical perspective, you will inevitably encounter the following concepts:

Containers

One key idea revolutionized how software is shipped and executed: containers encapsulate an application together with its dependencies, allowing it to run consistently and in isolation across different environments. This also solves the problem of multiple processes on the same OS (virtual or physical) having conflicting dependency requirements (lib A in v1.0 and lib A in v2.0). Containers can also be seen as an evolution of virtualization, which still has its place in a higher-level infrastructure context.

Furthermore, containers are portable and make moving applications between development, testing, and production environments easy. In fact, container images can be seen as archives that bundle all the needed files and root filesystem configuration. Because they are lightweight, containers can be quickly started, stopped, and replicated, which makes it easy to scale up and down to meet changing demand.

It is also worth checking out the Twelve-Factor App manifesto, a document describing further techniques that help developers and operations teams build solid software systems in a cloud native context.
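One of the manifesto’s factors, for example, demands that configuration live in the environment rather than in code, so the same container image can run everywhere. A minimal sketch of that idea in Python – the variable names and defaults here are illustrative, not prescribed by the manifesto:

```python
import os

def load_config():
    """Read service configuration from environment variables (12-factor style).

    Defaults keep local development simple; a container orchestrator
    injects the real values at deploy time.
    """
    return {
        "database_url": os.environ.get("DATABASE_URL", "postgres://localhost:5432/dev"),
        "port": int(os.environ.get("PORT", "8080")),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }
```

The same image then runs unchanged in development, testing, and production; only the injected environment differs.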

Microservices Architecture

A widely discussed pattern that is often applied the wrong way. That is mostly due to the term “micro”, which can be misread as an instruction to slice existing large services into many tiny, isolated services, each covering a small piece of functionality. However, the actual motivation for this pattern mostly comes from structural or organizational issues. For instance, overly large teams block development, release, and deployment processes as a software system keeps growing. With domain-oriented thinking, teams and software can be structured along business or functional areas. This keeps smaller teams from slowing down and establishes clear ownership of domain-specific microservices. As a side effect, the smaller applications also bring technical benefits: more independent development, isolated deployments, and better scalability at runtime.

Of course, there are also purely technical reasons to divide a service into smaller parts, for example to achieve better scalability or more independence in deployments.

Service Mesh

In software architecture, a service mesh is a dedicated infrastructure layer that handles cross-cutting concerns for service-to-service communication between distributed applications. By intercepting incoming and outgoing connections through a network proxy, a service mesh provides secure, reliable, and observable communication between microservices. Hence, you do not have to re-implement these aspects in each service. The most popular candidates in this area are Istio and Linkerd.
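The core mechanic can be illustrated without a real mesh: an interception layer wraps each call and adds retries and latency measurement, so the service code itself stays free of those concerns. The following Python sketch is a deliberately simplified analogy – Istio and Linkerd do this transparently at the network level in a sidecar proxy, not in application code:

```python
import time

def with_sidecar(call, retries=2, backoff=0.0):
    """Wrap a service-to-service call the way a mesh proxy would:
    retry transient failures and collect latency, outside the service code."""
    def wrapped(*args, **kwargs):
        last_error = None
        for _attempt in range(retries + 1):
            start = time.perf_counter()
            try:
                result = call(*args, **kwargs)
                # observability: record the latency of each successful call
                wrapped.latencies.append(time.perf_counter() - start)
                return result
            except ConnectionError as err:  # treat as a transient network failure
                last_error = err
                time.sleep(backoff)
        raise last_error
    wrapped.latencies = []
    return wrapped
```

Because the wrapper knows nothing about the business logic it forwards to, the same reliability and observability policy applies uniformly to every service – which is exactly the value proposition of a mesh.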

Cloud Agnostic

Applications should be designed to be portable across different cloud providers, avoiding lock-in. Relying on containers already supports this vendor-neutral paradigm. But you should also consider, at the implementation level, which cloud services you are attaching to. Proprietary APIs of, e.g., specific database or queueing engines can result in high implementation effort and complex migration processes when a switch to another hyperscaler becomes necessary. Therefore, following open standards and preferring open-source tools that work across various cloud platforms is key.
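One practical technique for this is to hide provider-specific services behind a small interface of your own, so that swapping the backend touches one adapter instead of the whole codebase. A hedged sketch in Python – the `BlobStore` interface and backend names here are illustrative, not a standard API:

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Narrow, provider-neutral storage interface owned by the application."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryBlobStore(BlobStore):
    """Local/testing backend; a cloud backend (e.g. one built on an
    object-storage SDK) would satisfy the same interface."""

    def __init__(self):
        self._data = {}

    def put(self, key, data):
        self._data[key] = data

    def get(self, key):
        return self._data[key]

def archive_report(store: BlobStore, name: str, content: bytes) -> None:
    # Application code depends only on the interface, never on a vendor SDK.
    store.put(f"reports/{name}", content)
```

Migrating to another hyperscaler then means writing one new `BlobStore` implementation rather than rewriting every call site.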

In part two of this post series, we will focus on runtime considerations such as scalability, resilience, and security. Stay tuned.



Author © 2024: Marcel Hoyer – www.linkedin.com/in/marcel-hoyer-0a21a12b2/
