
The Changing Face of Cloud Applications

Miha Kralj

VP, Cloud Strategy, EPAM

Over the past few years, as organizations migrated their applications to the cloud, they quickly discovered that treating this new environment as merely an extension of their data centers — that is, simply moving virtual machines (VMs) from on-premises servers to cloud instances — was inefficient. Worse, with all the opportunities the cloud offers, they were leaving money on the table.

Getting the most value from the cloud meant, in many cases, changing the way in which applications are constructed. As the cloud has rapidly evolved, so has the architecture of applications, in new and – in some cases – challenging ways.

Containers and Orchestration to the Rescue

Developers soon recognized the value of containerizing applications: containers use server resources far more effectively than VMs, launch faster and are portable. This last feature meant that, as application load scaled up (and down), new container instances could be created quickly, ensuring that users had quick and responsive access to their applications no matter the load. And, as the load declined, instances could be removed, thus saving money.

All this required some level of control, or orchestration. Early on, several orchestration solutions appeared, such as Docker Swarm, Apache Mesos (with Mesosphere's Marathon) and Rancher, but Kubernetes, first generally available in 2014, has become the de facto industry standard. Google, which created Kubernetes, released it as open source; an independent body, the Cloud Native Computing Foundation, now manages its development, and today all the major cloud platforms support it.

Such is its adoption that many now call it a cloud operating system. It provides a massively scalable platform for cloud applications, controlling the allocation and use of resources such as CPU, memory, networking and storage across tens, hundreds or more servers just as an operating system does for a single computer.
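To make the operating-system analogy concrete, the core of what an orchestrator's scheduler does can be sketched as placing workloads onto whichever node still has enough free CPU and memory. This is only a toy illustration under invented node names and sizes; real Kubernetes scheduling filters and scores nodes on many more criteria.

```python
# Toy sketch of orchestrator-style scheduling: place each workload on the
# first node with enough spare CPU (cores) and memory (GiB). Node and
# workload names are invented for illustration.

def schedule(workloads, nodes):
    placements = {}
    for name, (cpu, mem) in workloads.items():
        for node in nodes:
            if node["cpu_free"] >= cpu and node["mem_free"] >= mem:
                node["cpu_free"] -= cpu   # reserve the resources on that node
                node["mem_free"] -= mem
                placements[name] = node["name"]
                break
        else:
            placements[name] = None       # unschedulable: no node has capacity
    return placements

nodes = [
    {"name": "node-a", "cpu_free": 4, "mem_free": 8},
    {"name": "node-b", "cpu_free": 2, "mem_free": 16},
]
workloads = {"web": (2, 4), "cache": (1, 12), "batch": (4, 4)}
print(schedule(workloads, nodes))
```

In this run, "web" lands on node-a, "cache" fits only on node-b, and "batch" cannot be placed at all, which is exactly the kind of cluster-wide resource accounting a single computer's operating system performs for its processes.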

Microservices on Kubernetes: Made for Each Other

The microservices architecture pattern emerged roughly at the same time as containers and Kubernetes, and they complement each other well. The microservices pattern describes relatively small software components, each of which implements a specific business function within an application. For example, an eCommerce application might consist of a catalog microservice, a payment microservice, a shipping microservice and so on. The beauty of this approach is that each microservice can be independently developed by focused, expert teams (as long as they agree on APIs), and that in production, they can be independently scaled up and down. 
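The "agree on APIs" point above can be sketched in a few lines: each service owns its own data and business function, and other services interact with it only through that agreed contract. The service classes, SKUs and prices below are invented for illustration; in production these would be separate deployable services communicating over the network.

```python
# Hypothetical sketch of the microservice boundary idea: each service owns
# one business function and exposes a small, agreed-upon API.

class CatalogService:
    def __init__(self):
        # Catalog data is owned exclusively by this service.
        self._prices = {"sku-1": 29.99, "sku-2": 9.50}

    def get_price(self, sku: str) -> float:
        # The agreed API contract other teams build against.
        return self._prices[sku]

class PaymentService:
    def __init__(self, catalog: CatalogService):
        # Talks to the catalog only through its public API,
        # never its internal data.
        self.catalog = catalog

    def charge(self, sku: str, quantity: int) -> float:
        return self.catalog.get_price(sku) * quantity

payments = PaymentService(CatalogService())
print(payments.charge("sku-1", 2))
```

Because the payment team depends only on `get_price`, the catalog team can rewrite, redeploy or scale its service independently, which is the independence the pattern promises.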

Sounds great, right?

Mastering the Complexity of Kubernetes

Yes, but…

Early versions of Kubernetes tended to be complex in the extreme, and deploying it in a robust and secure manner often proved difficult. For example, the “control plane” (the set of components, such as the API server and scheduler, that manage the cluster) could be a single point of failure: if it went down, the Kubernetes cluster could no longer be managed. Similarly, setting up a private registry (where container images are stored and loaded from), configuring load balancing and traffic management, and ensuring cluster security were all tasks that could be daunting in the extreme.

To address the complexity of deploying Kubernetes, each of the cloud service providers (CSPs) has introduced a managed version of Kubernetes, sometimes called “containers as a service,” or CaaS.

While staying true to Kubernetes’ vision and capabilities (and APIs), these tightly integrate with the cloud vendors’ services. Microsoft’s offering, Azure Kubernetes Service (AKS), for example, takes advantage of Azure Active Directory for identity management and role-based access control (RBAC), offers its own Azure Container Registry and tightly integrates with the Azure Portal and Microsoft development tools. AWS’ Elastic Kubernetes Service (EKS) similarly leverages AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud and so forth. Google Kubernetes Engine (GKE) uses Google Cloud features such as Cloud Logging and Cloud Monitoring, Cloud Build for development and so on.

And each of the CSPs offers a highly available, redundant control plane (and registries, too), relieving customers of this difficult task.

…And On-Premises, Too

With the arrival of Google Anthos in 2019, followed quickly by Azure Arc and AWS Outposts, it became possible for Kubernetes customers to manage both cloud and on-premises clusters from a single location. Such hybrid deployments continue to be useful for customer scenarios that have, for example, limited internet connectivity or stringent regulatory compliance requirements. Moreover, in certain scenarios, as load increases, an on-premises Kubernetes cluster can “burst” (instantiate new container instances) onto a cloud-based cluster, and vice versa.

Serverless Makes Kubernetes Even Easier

The new frontier for cloud-based Kubernetes applications, serverless, seeks to remove as much infrastructure overhead as possible. That is, it requires little or no operations support, which in turn lets developers focus solely on business logic. Knative, a Kubernetes serverless platform originally developed at Google, invokes a Kubernetes-based container only when requested, scales up as needed and scales down to zero when there is no load, saving customers money.
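The scale-to-zero idea can be sketched as a simple rule: derive the replica count from observed demand, and drop to zero replicas when demand is zero. This is a toy model only; the target of 10 concurrent requests per replica is an invented number, and real Knative autoscaling uses windowed metrics and panic thresholds.

```python
import math

# Toy sketch of request-driven, scale-to-zero autoscaling: choose a replica
# count from the observed number of concurrent requests.

def desired_replicas(concurrent_requests: int, target_per_replica: int = 10) -> int:
    if concurrent_requests == 0:
        return 0  # scale to zero: no running pods means no cost
    # Otherwise, enough replicas to keep each at or under its target load.
    return math.ceil(concurrent_requests / target_per_replica)

print(desired_replicas(0))    # 0 replicas while idle
print(desired_replicas(25))   # 3 replicas under load
```

The customer-facing effect is the one described above: capacity (and cost) tracks traffic, all the way down to nothing.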

Other CSPs offer serverless Kubernetes services as well. AWS Fargate, which gained EKS support in 2019, lets EKS customers run containers without managing the underlying servers. Azure Container Apps, currently in preview, lets customers deploy containers on a serverless Kubernetes-based platform without managing an AKS cluster directly.

Operators & Service Meshes

Kubernetes in all its shapes and forms is highly extensible. Operators extend the Kubernetes API with custom resources, automating the deployment and ongoing management of complex workloads such as databases, message buses and other services. The OperatorHub.io site, jointly created by Red Hat, Amazon, Microsoft and Google, catalogs hundreds of available operators.
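At the heart of every operator is a reconciliation loop: compare the desired state declared in a custom resource with the state actually observed, and compute the actions needed to converge them. The sketch below is a stand-alone illustration with invented resource names; a real operator would issue these actions against the Kubernetes API rather than return them as a list.

```python
# Hypothetical sketch of the controller/operator reconciliation pattern:
# diff desired state against observed state and emit converging actions.

def reconcile(desired: dict, observed: dict) -> list:
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))      # missing entirely
        elif observed[name] != spec:
            actions.append(("update", name, spec))      # drifted from spec
    for name in observed:
        if name not in desired:
            actions.append(("delete", name))            # no longer wanted
    return actions

desired = {"db-primary": {"replicas": 3}, "db-replica": {"replicas": 2}}
observed = {"db-primary": {"replicas": 1}, "db-old": {"replicas": 1}}
print(reconcile(desired, observed))
```

Run repeatedly, this loop is what lets an operator keep a database or message bus healthy without human intervention, which is precisely the operational knowledge operators are meant to encode.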

Service meshes add management and control features to Kubernetes, such as the ability to throttle network traffic between containers, to encrypt that traffic and to provide logging and observability; Istio and Linkerd are two widely used examples.
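The throttling capability mentioned above is commonly implemented with a token-bucket limiter in the mesh's sidecar proxy. The sketch below shows only that core idea; the capacity and refill rate are invented numbers, and a real mesh configures such limits declaratively rather than in application code.

```python
# Toy sketch of sidecar-style traffic throttling between containers:
# a token bucket admits requests up to a configured rate, then refuses.

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)       # start full
        self.refill_per_sec = refill_per_sec
        self.last = 0.0                     # timestamp of the last check

    def allow(self, now: float) -> bool:
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                        # request throttled

bucket = TokenBucket(capacity=2, refill_per_sec=1.0)
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.5)])  # → [True, True, False, True]
```

Because this logic lives in the sidecar, the application containers themselves need no rate-limiting code, which is the mesh's central design point.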

A New World of Development

Kubernetes is changing rapidly. As with early versions of Windows and Linux, each advance brings whole new capabilities that enable new levels of scalability, performance and business value, and, as with those operating systems in their early years, the pace of change remains very fast.

It's therefore vital that teams and IT leaders stay on top of these changes, as Kubernetes is becoming as fundamental to cloud computing as Linux and Windows Server were to on-premises data centers.
