It is indisputable that cloud computing has been one of the great technological revolutions of recent years. More and more companies have realised that migrating to the cloud allows them to streamline processes, improve productivity and, ultimately, reduce costs.
But what is the cloud? The term has generally been used to describe remote resources accessible to multiple client systems via the Internet. In diagrams of system architectures, a cloud is often drawn to represent resources external to the local network infrastructure. With cloud computing, companies (now as service users) can obtain the resources they need to run their applications without owning servers or data centres, paying only for the use they make of those resources.
The opposite concept, and the most widespread until a few years ago, is the proprietary, on-premises installation. This paradigm is based on installing applications on servers owned by the operating company, which means maintaining its own data centre or contracting a third party to provide one.
The migration to cloud computing models minimises energy consumption and electronics production: physical resources are shared and pooled among several clients and used on demand, optimising their use and reducing the number of machines required. This benefits not only companies but also the environment, since lower energy consumption and less electronics manufacturing contribute directly to the reduction of emissions.
Evolution of software development
Let’s look at the historical problems related to software development for computer systems and the evolution that has taken place to solve them.
The waterfall lifecycle was found to be inefficient because it produces static software that does not adapt to changing customer needs. In addition, a misinterpretation of requirements in the initial phases snowballed through to the delivery of the final product, resulting in poor-quality deliveries and customer dissatisfaction. This led to a rethinking of software development methodologies and the rise of so-called agile methodologies.
These new methodologies (agile methodologies) define iterative life cycles where each iteration produces a delivery that brings new value to the solution, which translates into frequent software deployments. This requires the definition of continuous integration and deployment processes. This is known as Continuous Integration and Continuous Deployment, or CI/CD for short.
To meet this need, multiple tools provide the ability to synchronise the code generated by developers and to automate builds and deployments across different environments.
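As a sketch of what such automation looks like, a CI/CD pipeline in the style of GitHub Actions might be written as follows; the job name and the `make` targets are illustrative, not a prescription:

```yaml
# Hypothetical CI/CD pipeline: build and test on every push,
# deploy only when the main branch changes.
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4     # synchronise the latest code
      - run: make build               # compile the application
      - run: make test                # run the test suite
      - run: make deploy ENV=staging  # deploy to an environment
        if: github.ref == 'refs/heads/main'
```

Every push triggers the same repeatable steps, which is precisely what turns frequent deliveries from a risk into a routine.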
Until not so long ago, applications were built according to a monolithic architecture. A monolith is nothing more than a single block of stone of homogeneous composition standing on the ground. The simile is very graphic: applications were built from large code files that were all compiled together to generate a single executable artefact containing all the application's functionality. Developing and, even more so, maintaining this type of application was complex even when the application itself was not.
Putting these systems into production usually required temporary downtime and the attention of the technical teams in charge of maintaining the IT systems. These teams were usually not involved in developing the applications and were often unaware of the deployment needs in terms of dependencies and other technical requirements for running them. Deployments frequently proved unfeasible for technical reasons, and friction arose between the systems teams and the development teams. In this struggle, the last line of defence of the latter was the phrase 'it works on my machine', meaning that in their development environments the application ran without any problems. In addition, several applications were deployed on the same operating system, so the malfunctioning of one could affect the rest. An isolation mechanism between applications was needed.
To solve these problems, virtual machines and, a little later, containers began to be used. Virtual machines allow the installation of an operating system on virtualised hardware inside physical hardware. In this way, several independent operating systems can be installed on the same hardware. Containers are standardised units that package pieces of software and include everything needed to run them, including libraries and system tools. The advantage of containers over virtual machines is that the former are virtualised at the operating-system level and the latter at the hardware level, which makes containers lighter and faster to start.
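To illustrate the packaging idea, a minimal container image definition (a Dockerfile) might look like the sketch below; the base image, file names and start command are assumptions for the example:

```dockerfile
# Base image: provides the OS-level tools and the language runtime
FROM python:3.12-slim
WORKDIR /app
# Dependencies are installed inside the image, not on the host
COPY requirements.txt .
RUN pip install -r requirements.txt
# The application code itself is packaged alongside them
COPY . .
# Everything needed to run the service now travels with the container
CMD ["python", "service.py"]
```

Because the image carries its own libraries and tools, 'it works on my machine' becomes 'it works in any machine that runs the container'.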
Following the maxim 'divide and rule', monolithic architecture evolved by splitting off pieces of code, initially in the form of libraries compiled independently and exposing a programming interface (Application Programming Interface, or API) so that their functionality could be accessed from the main application. Later, these APIs were implemented as services that expose their interface over the HTTP protocol and run independently. In this way, an application that only exposes the graphical interface, or front-end, can use these services to perform the complex tasks, usually called business logic. This is the basis of the microservices architecture.
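A minimal sketch of such a service, using only the Python standard library, could look like this; the endpoint, the port and the `price_with_tax` business rule are invented purely for illustration:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def price_with_tax(net: float, rate: float = 0.21) -> float:
    """Business logic: lives behind the API, not in the front-end."""
    return round(net * (1 + rate), 2)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /price?net=100 returns a JSON body with the gross price
        qs = parse_qs(urlparse(self.path).query)
        net = float(qs.get("net", ["0"])[0])
        body = json.dumps({"gross": price_with_tax(net)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Runs independently of any front-end that consumes it
    HTTPServer(("", 8080), Handler).serve_forever()
```

A front-end never calls `price_with_tax` directly; it only speaks HTTP to the service, so the implementation behind the interface can change freely.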
This division into microservices allows each microservice to be deployed in a separate container, and therefore isolates not only applications from each other but also different parts of the same application. The microservices architecture makes continuous delivery easy because deployments are limited to the parts that have been modified rather than the entire application. It also fosters better quality assurance (QA), as splitting testing into smaller pieces makes it easier to pinpoint bugs in the specific part that is faulty.
Scalability, availability, and fault tolerance
On the other hand, deployments of large monolithic applications were slow and required planning. When errors occurred, the entire application stopped working, resulting in downtime. When more memory or processing power was needed, more powerful machines with more resources had to be procured. New software architectures try to avoid these problems.
A few years ago it was unthinkable to develop applications serving the general public on a global scale. Now such applications are commonplace and we use them all the time: just think of online shops, social networks or messaging systems.
This new need to process data at large scale for a massive number of users makes it necessary to streamline deployments so that computational power and memory resources can easily be increased or decreased on demand. High availability is also important: the ability to ensure continuity of service even when errors or failures occur.
Advantages of cloud computing
Cloud computing makes it possible to meet these new needs simply and cost-effectively.
On the one hand, it allows systems to scale horizontally, a concept closely related to the microservices architecture. Instead of increasing the memory or processing capacity of a machine (vertical scaling), the number of running instances of one or more services is increased.
Cloud platforms also enable high availability and fault tolerance, because providers can relocate the deployment of customer services among the hundreds or thousands of servers at their disposal. If any of these servers fail, the deployments are automatically reassigned to other servers.
Another advantage of cloud computing is the simplicity of system maintenance, facilitating the DevOps culture. DevOps refers to the set of practices that bring together software development and operations related to production release. Development teams and systems teams work with greater cohesion and may even be integrated into the same team.
Tools used in cloud computing
Many platforms offer cloud computing services. Examples include Amazon Web Services, Microsoft Azure, Google Cloud and IBM Cloud.
All these platforms include a tool that provides, among other things, management of the containers where services are deployed, automation of these deployments, horizontal scalability, redirection and balancing of incoming requests to services, and hiding sensitive configuration data (secrets). We are referring to Kubernetes.
Kubernetes is open-source software that provides a portable and extensible platform for managing workloads and services. It was originally developed by Google, drawing on some fifteen years of experience running applications in production, and was released as an open-source project in 2014.
Kubernetes provides a container-oriented environment with a set of components and tools that make it easy to deploy, scale and manage applications. It does not restrict the type of application deployed: any application that can run in a container can run in Kubernetes. Nor does it impose specific CI/CD tools, middleware such as data buses, or monitoring tools, but it integrates well with all of them.
One of the nice things about Kubernetes is that it exposes an API that can be used to configure the system declaratively. It also provides a command-line tool (kubectl) for creating, updating, deleting and querying objects via that API. This API allows other tools to be built on top of the functionality Kubernetes provides. One example is OpenShift, an enterprise Kubernetes platform: a distribution of Kubernetes with added elements such as a graphical interface.
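As a sketch of this declarative style, the following manifest asks Kubernetes for three replicas of a containerised service; the names and the image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3               # horizontal scaling: desired number of instances
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yaml`, Kubernetes continuously reconciles the cluster towards this declared state; scaling out is a one-line change to `replicas`, or a single command such as `kubectl scale deployment/web --replicas=5`.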
Kubernetes has a large ecosystem: many tools work alongside it, and they are renewed day by day. It also has a large community behind it providing support, tools and services, so companies can use this software without great risk.
Kubernetes can also be found as a managed service on the cloud platforms mentioned above: Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE) and IBM Cloud Kubernetes Service, among others.
It should be pointed out that, popular and widely used as it is, Kubernetes should not be adopted for every type of development. On the contrary, it should be carefully analysed whether it fits the needs of our project. Bear in mind that this technology provides great flexibility at the cost of some configuration complexity, and it requires staff trained in its use, as acquiring the knowledge needed to master the tool can take time.
There are alternatives for deploying smaller-scale developments on cloud platforms. One example is services that run code without provisioning or managing servers, known as serverless computing. The code for a specific function is simply uploaded, and the service executes it on a high-availability infrastructure, scaling automatically with the demand for that function. Examples of this type of service are AWS Lambda, Microsoft Azure Functions, Google Cloud Functions, and IBM Cloud Functions.
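In this model the developer writes little more than a handler function. The sketch below follows the AWS Lambda-style Python signature; the event shape and the greeting logic are invented for illustration:

```python
import json

def handler(event, context):
    # The platform invokes this function on demand and scales it automatically;
    # there is no server for the developer to provision or manage.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

The function is stateless: each invocation receives its input in `event`, and the provider decides how many copies run at any moment.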
The purpose of this article has not been to introduce Kubernetes (there are already countless articles on that subject) but to explain the context in which it was born and its importance today as a consolidated de facto standard that streamlines and strengthens software development.
Jorge Berjano Pérez
Software Architecture at decide4AI