Kubernetes Cookbook

Kubernetes: Navigating the Cloud-Native Field

Reviewers about the book

Giridhar Reddy Bojja:

“This book is a groundbreaking contribution to the field of cloud-native technologies and container orchestration. The authors have masterfully combined theoretical insights with practical applications, making it an indispensable resource for both novices and seasoned professionals. The detailed explanations and hands-on examples empower readers to confidently deploy, manage, and optimize Kubernetes clusters. This work stands out as a major scholarly contribution, providing deep technical knowledge and practical skills essential for advancing the field of computer science. It is a testament to the authors’ expertise and a must-read for anyone involved in modern cloud infrastructure.”

Vladislav Bilay:

“In an era where cloud-native applications are becoming the backbone of modern IT infrastructure, this book is a significant scholarly work that offers profound insights into Kubernetes. The authors have achieved an exceptional balance between clarity and depth, making complex topics accessible and practical. Through well-structured chapters and comprehensive case studies, readers gain a robust understanding of both self-hosted and PaaS Kubernetes environments. This book not only educates but also inspires innovation, marking a major scientific and technical contribution to the field. It is a critical resource for developers, DevOps engineers, and cloud architects aiming to excel in cloud computing and container orchestration.”

Introduction

This chapter covers:

— Scope and Objective of this Cookbook

— Why This Book? The Purpose Unveiled

— What You Will Learn

— Which Tasks Does Kubernetes Solve

— The Role of Kubernetes

“The magician’s power comes from being the only one that understands how something works. Learn how it works and they won’t be able to trick you.”

— Kelsey Hightower, ex-Google Cloud’s Principal Developer Advocate

In an era where the IT landscape is rapidly evolving, cloud-native architectures have emerged as the new standard for developing applications. At the heart of this transformation is Kubernetes, a platform that, much like Linux in its heyday, has become the foundational layer upon which countless projects are built. Kubernetes is not merely another tool in the developer’s arsenal; it represents an entire ecosystem teeming with plugins, addons, and tools designed to foster the creation of reliable, scalable, and secure systems. However, the complexity of Kubernetes means that a deep understanding of its inner workings is crucial. Without this knowledge, there’s a tangible risk of not just failure but significant financial and temporal losses.

This book is crafted to demystify Kubernetes, guiding you through its practical application in real-world scenarios while highlighting common pitfalls and how to sidestep them. Our goal is to arm you with the knowledge to not only prevent your company from facing catastrophic failures due to common missteps but also to provide insights into optimizing your Kubernetes infrastructure for both resource management and cost efficiency.

Why This Book? The Purpose Unveiled

If you’re contemplating migrating your projects to Kubernetes or eager to understand how to leverage this technology effectively in the real world, this book is your compass.

The journey to Kubernetes mastery is fraught with questions and challenges:

1. The Learning Curve: While initiating a simple demo may seem straightforward, the operational and troubleshooting aspects of Kubernetes are anything but. Real-world guidance and insights into potential hurdles are invaluable.

2. Navigational Challenges: The Kubernetes ecosystem is vast, offering numerous paths for teams. Determining the most effective route without wasting resources is a common quandary.

3. Resource Optimization: How can you ensure your Kubernetes clusters are as resource-efficient as possible?

4. Avoiding Pitfalls: The fear of “breaking the company” with Kubernetes is real. How do you use it safely?

5. The Infrastructure Puzzle: Kubernetes is not a standalone solution; it requires a suite of additional modules and infrastructure. The necessity of these components often catches teams off guard.

6. Production Challenges: Managing a live production cluster presents its own set of challenges. How do you address these effectively?

7. Developer Access: Not all developers need to know the intricacies of Kubernetes, but they should be able to deploy and manage applications. Simplifying access is crucial.

This book aims to address these and more, drawing from real-world experiences and challenges encountered by DevOps engineers deeply entrenched in the Kubernetes ecosystem.

What You Will Learn

Authored by seasoned DevOps engineers, this book distills years of hands-on experience with Kubernetes into actionable insights.

Here’s what you can expect to gain:

— Practical Application: Understand how to apply Kubernetes in real-world settings, sidestepping common pitfalls and optimizing for cost and efficiency.

— CI/CD and Developer Access: Learn to utilize Kubernetes for continuous integration and delivery, streamline developer interactions with the platform, and manage production issues effectively.

— Choosing the Right Tech Stack: Gain insights into selecting the optimal tools and solutions for your project, beyond just the Kubernetes platform itself.

— Cost Management: Dive into the financial aspects of Kubernetes, learning how to manage your infrastructure with an eye towards high availability and low costs.

— Advanced Concepts: Explore deeper topics such as metrics, logs, tracing, chaos experiments, CI/CD, GitOps, and more, enhancing your Kubernetes mastery.

Accompanied by real-world examples, best practices, and reproducible case studies, this book is your gateway to mastering Kubernetes, enabling you to build robust, scalable, and efficient cloud-native applications.

Which Tasks Does Kubernetes Solve?

— Automating Deployment and Scaling
Kubernetes automates the deployment, scaling, and management of containerized applications. It ensures that the desired state specified by the user is maintained, handling the scheduling and deployment of containers on available nodes, and scaling them up or down based on the demand.

— Load Balancing and Service Discovery
Kubernetes provides built-in solutions for load balancing and service discovery. It can automatically assign IP addresses to containers and a single DNS name for a set of containers, and can load-balance the traffic between them, improving application accessibility and performance.

— Health Monitoring and Self-healing
Kubernetes regularly checks the health of nodes and containers. It replaces containers that fail, kills those that don’t respond to user-defined health checks, and doesn’t advertise containers to clients until they are ready to serve.

— Automated Rollouts and Rollbacks
Kubernetes enables you to describe the desired state for your deployed containers using deployments and automatically changes the actual state to the desired state at a controlled rate. This means you can easily and safely roll out new code and configuration changes. If something goes wrong, Kubernetes can roll back the change for you.

— Secret and Configuration Management
Kubernetes allows you to store and manage sensitive information such as passwords, OAuth tokens, and SSH keys using Kubernetes secrets. You can deploy and update secrets and application configuration without rebuilding your container images and without exposing secrets in your stack configuration.

— Storage Orchestration
Kubernetes allows you to automatically mount a storage system of your choice, whether from local storage, a public cloud provider, or a network storage system like NFS, iSCSI, etc.

— Resource Management
Kubernetes enables you to allocate specific amounts of CPU and memory (RAM) for each container. It can also limit the resource consumption for a namespace, thus ensuring that one part of your cluster doesn’t monopolize all available resources.
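
As a minimal illustration of the last point, resource requests and limits are declared per container in the pod specification. The values below are purely illustrative, not a recommendation:

resources:
  requests:      # what the scheduler reserves for the container
    cpu: 250m
    memory: 128Mi
  limits:        # the hard ceiling enforced at runtime
    cpu: 500m
    memory: 256Mi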

The Role of Kubernetes

In the modern cloud-native ecosystem, characterized by a multitude of services, technologies, and key components, Kubernetes stands out as a unified platform that orchestrates these diverse elements to ensure seamless operation. By abstracting the underlying infrastructure, Kubernetes enables developers to concentrate on building and deploying applications without needing to manage the specifics of the hosting environment. Effectively, Kubernetes operates equally well across cloud systems and on-premises infrastructure, providing versatility in deployment options.

Kubernetes acts as a bridge between developers and infrastructure, offering a common framework and set of protocols. This functionality facilitates a more efficient and coherent interaction between those developing the applications and those managing the infrastructure. Through Kubernetes, the complexities of the infrastructure are masked, allowing developers to deploy applications that are scalable, resilient, and highly available, without needing deep knowledge of the underlying system details.

Getting Started With Kubernetes

This chapter covers

— In-depth exploration of containerization with Docker, Podman, and Colima

— Steps for effective application containerization

— Introduction to Kubernetes and its role in orchestration

— Deploying applications to your first cluster, created with Minikube

— Best practices and architectural considerations for migrating projects to Kubernetes

— Core components of Kubernetes architecture

— Fundamental concepts such as pods, nodes, and clusters

— Overview of Kubernetes interfaces, including CNI, CSI, and CRI

— Insights into command-line tools and plugins for efficient cluster management

Key Learnings

— Grasp the distinctions between Docker and Kubernetes containers.

— Master effective project migration to Kubernetes.

— Understand the fundamental architecture of Kubernetes.

— Explore the Kubernetes ecosystem and interfaces.

— Develop proficiency in managing Kubernetes using command-line tools.

Recipes:

— Wrap Your Application into a Container

— Deploying Your First Application to Kubernetes

— Use Podman for Kubernetes Migration

— Lightweight Distributions: Setting Up k3s and microk8s

— Enabling Calico CNI in Minikube and Exploring Its Features

— Enhancing Your CLI Cluster Management with Krew: kubectx, kubens, kubetail, kubectl-tree, and kubecolor

Introduction

Welcome to Chapter 2, where we demystify Kubernetes, the cloud-native orchestration platform revolutionizing the deployment and management of containerized applications at scale. We will delve into containerization, starting with Docker and contrasting traditional packaging with containerization’s benefits in the software development lifecycle. Exploring tools like Podman and Colima, we analyze Docker alternatives and enhance container configurations. Moving to Kubernetes, we unveil its orchestration capabilities, introducing Pods, Nodes, Clusters, and Deployments. Practical examples guide you in setting up a Kubernetes Cluster with Minikube, touching on alternatives like K3s and Microk8s. The chapter concludes by highlighting Kubernetes’ extensible plugin ecosystem, empowering you with enhanced kubectl functionalities. By the end, you’ve navigated containerization, mastered Kubernetes essentials, and gained confidence in managing clusters.

Docker and Kubernetes: Understanding Containerization

Traditional Ways to Package Software

Deploying software involves installing both the software itself and its dependencies on a server, coupled with the necessity of appropriate application configuration. This process demands considerable effort, time, and skills and is prone to errors.

To streamline this cumbersome task, engineers have devised solutions such as Ansible, Puppet, or Chef, which automate the installation and configuration of software on servers. These tools adopt a declarative approach to system configuration and management, often emphasizing idempotency as a crucial feature. Another strategy to simplify installation in specific programming languages is to package the application into a single file. For instance, in the Java Runtime Environment (JRE), Java class files can be bundled using JAR files.

Various methods can achieve a similar goal. Options like Omnibus or Homebrew packages offer diverse approaches to creating installers. Omnibus excels in crafting full-stack installers, while Homebrew packages leverage formulae written in Ruby. Alternatively, one can utilize virtual machine snapshots from VirtualBox or VMWare to encapsulate the entire state of the operating system alongside the installed application. Despite their continued use, these solutions exhibit notable limitations compared to Docker.

Containerization

As evidenced in the evolution of application packaging, containerization has emerged as a predominant format, encapsulating only essential libraries and dependencies for software delivery. It utilizes OS-level virtualization to run the code and create a single lightweight executable called a container that runs consistently on any infrastructure.

This OS-level virtualization is a complex subject, and its intricacies are beyond the scope of this book. In essence, it is a technology in which the kernel permits the existence of multiple isolated user-space instances. Each instance is a virtual environment with CPU, memory, block I/O, network, and process space. This mechanism is made possible through various underlying technologies such as filesystem (chroot), namespaces (unshare), and control groups (cgroups).

Understanding Docker

Nowadays, Docker has practically become synonymous with containers, and this reputation is well-deserved. Docker was the first tool to show many users the concept of containers. It made managing container lifecycle, communication, and orchestration easier.

What is Docker?

The term “Docker” encompasses various meanings. At a broad level, Docker refers to a collection of containerization tools, including Docker Desktop and Docker Compose. At a more detailed level, Docker represents a container image format, a container runtime library, and a suite of command-line tools. Additionally, Docker, Inc. develops and maintains these tools. Finally, Docker, Inc. founded the Open Container Initiative (OCI), a critical governance structure for container standards.

Docker Engine vs. Docker Desktop

As of today, Docker, Inc. offers two primary methods to use Docker: Docker Engine and Docker Desktop.

If you have a popular Linux system, you can install Docker Engine. Run the official installation script or use your package manager. Docker Engine installation includes the Docker Daemon (dockerd) and Docker Client (docker). Docker Engine is highly regarded for its user-friendly nature and ease of use.

Docker Desktop helps you use Docker Engine with a graphical interface and useful tools. If you are on macOS or Windows, the exclusive way to use Docker is through Docker Desktop. To use Docker Engine on these operating systems, you need to run it on a Linux virtual machine. You can use tools like VirtualBox, Hyper-V, or Vagrant to manage and set up the VMs.

Docker Desktop itself uses a virtual machine. The virtual machine runs a Linux environment with Docker Engine as its core component. The choice of virtualization technology depends on the host operating system: Windows Subsystem for Linux (WSL) or Hyper-V on Windows, HyperKit or QEMU on macOS. You don’t have to know how Docker Desktop’s virtualization works; you can use it just like Docker Engine.

Exploring Podman

Podman (the POD Manager) is a more recent container engine, initially released by Red Hat in 2018. Podman differs from Docker in that it doesn’t need a separate daemon to run containers. It utilizes the libpod library for running OCI-based containers on Linux. On macOS, each Podman machine is backed by QEMU, and on Windows, by WSL. Unlike Docker, Podman can run rootless containers by default without any prerequisites.

Podman makes it easy to migrate your project to Kubernetes. It is capable of generating manifests and quickly deploying them in your cluster. This chapter will delve deeper into Podman and its migration capabilities.

Colima: The Newcomer

Colima is a relatively new development tool released in 2021. It runs containers on Lima Linux virtual machines; Lima ships with the containerd runtime and nerdctl installed, and Colima adds support for the Docker and Kubernetes runtimes. Colima’s virtual machines use QEMU with the HVF accelerator. Colima works on macOS and Linux and is easier to use than Docker Desktop. The good part is that it’s completely free. Yet, it’s important to note that Colima is still in its early stages and has a few limitations.

Docker, Podman, Colima: Distinctions and Considerations

In most cases, you can use Docker, Podman, and Colima interchangeably. Podman’s command-line interface is largely compatible with Docker’s, so a common trick is to simply alias one to the other:

alias docker=podman

However, there are some critical distinctions between them.

To use Colima, you must install the Docker or Podman command-line tools.

When switching from Docker to Podman, users may still face minor problems. Podman’s compatibility mode with Docker lets you keep using the same commands. Yet, caution is crucial when switching between these tools in a production environment.

If you prefer a GUI, you can use Docker Desktop or Podman Desktop. Podman Desktop is a multi-engine tool compatible with the APIs of both Docker and Podman, which means you can see the containers and images of both engines at once.

All the container tools mentioned above can run Kubernetes, but their Kubernetes support is not as complete as that of dedicated tools like Rancher, Kind, or Kubespray. The Kubernetes server runs inside the container engine, is less customizable, and is designed for single-node setups, so it is primarily used for local testing purposes.

Recipe: Wrap Your Application into a Container

In this part, we’ll learn how to use various tools to put your application in a container. Assuming we have an online microservice called auth-app, which handles authorization. We wrote this microservice in Rust. We will begin with Docker, then move on to Podman, and finally, Colima. Also, we will modify our containerized application step by step for the better.

Containerizing with Docker

To start with Docker, you need to have Docker Desktop installed. Use the [official website](https://docs.docker.com/engine/install/) to get it done, then check the Docker version by using this command:

docker --version

You should see something like this:

Docker version 20.10.7, build f0df350

We won’t dive deep into our application’s code. You can find it in the GitHub repository. For now, assume that we have the following project structure:

auth-app/
├── src/
│   └── main.rs
├── Cargo.toml
├── .env
├── .gitignore
└── README.md

The `main.rs` file serves as the entry point for our Rust application. Hypothetically, if we were about to use it in a non-container environment, we would have to install all the Rust dependencies specified in Cargo.toml. You can do this with Cargo, the Rust package manager, which is similar to npm in the JavaScript world or pip in the Python world.

cargo build

Then, we can run the application by using the following command:

cargo run

And that’s it. The application will keep running in the Actix framework’s event loop until you stop it manually. You can verify this by making a curl request to the /health endpoint:

curl http://localhost:8000/health

You should see the following response:

{"status": "OK"}

Running the application in Docker isn’t significantly different. We need a Dockerfile to build an image. A Dockerfile is a text document with the instructions used to assemble the image, and its syntax is straightforward to learn. Let’s create a Dockerfile in the root directory of our project:

FROM rust:1.73-bookworm as builder

WORKDIR /app

COPY . .

RUN --mount=type=cache,target=$CARGO_HOME/registry/cache \
    cargo build --release --bins

FROM gcr.io/distroless/cc-debian12

ENV RUST_LOG=info

COPY --from=builder /app/target/release/auth-app .

CMD ["./auth-app", "-a", "0.0.0.0", "-p", "8080"]

Let’s go through this Dockerfile line by line:

FROM rust:1.73-bookworm as builder

This line tells Docker to use the official Rust image as a base image. The 1.73-bookworm tag pins Rust 1.73 on the Debian 12 Bookworm distribution. We also give this build stage the name builder; we will use it later.

Many base images from various vendors exist on the Docker Hub public registry. You can find any programming language, database, or full-fledged operating system. Anybody can create an image and publish it. You can inherit an image from any other image or build one from scratch.

The vital thing to note is that each “FROM” instruction starts a new build stage.

WORKDIR /app

This line sets the working directory for the following instructions. It is like the “cd” command in the shell. The “WORKDIR” instruction can be used multiple times in a Dockerfile. It will create the directory if it does not exist.

COPY . .

This line copies the current directory’s content to the `/app` directory in the container.

RUN --mount=type=cache,target=$CARGO_HOME/registry/cache \
    cargo build --release --bins

This line runs the build command. By default, the Rust base image includes the Cargo package manager. With this Cargo command, we build the binary of our application in release mode.

Build mounts are a relatively new Docker feature. They allow you to mount various types of volumes into the build container. In this case, we mount the Cargo cache directory. The persistent cache helps speed up the build steps: if you rebuild a layer, the cache ensures that you only download new or changed packages.

FROM gcr.io/distroless/cc-debian12

Now, we are beginning the second stage of the build process. We use the Distroless Docker image by Google, which contains a minimal Linux and glibc runtime. It is designed mainly for compiled languages such as Rust and is commonly used for creating highly minimal images. We chose it to reduce the size of the final image in which our app will eventually run.

Reducing image size is important because it decreases the time it takes to download and deploy the image. It also reduces the attack surface of the image. The smaller the image, the fewer the number of packages and dependencies it contains. This means there are fewer vulnerabilities to exploit.

ENV RUST_LOG=info

This line sets an environment variable. It is similar to the “export” command in the shell. We set the “RUST_LOG” variable to the “info” level, which means the application will log only informational messages.

COPY --from=builder /app/target/release/auth-app .

This line copies the binary from the first build stage to the current directory of the second stage image. We didn’t set “WORKDIR” in the second build stage, so by default, the current directory is the root directory.

CMD ["./auth-app", "-a", "0.0.0.0", "-p", "8080"]

This line sets the default command to run when the container starts.

Now, we can build the image by using the following command:

docker build -t auth-app .

The `-t` flag sets the image name and tag. We didn’t put a version after the colon, so Docker builds the image with the “latest” tag. The `.` at the end of the command is the build context; the build process happens inside the build context’s directory.

After the image is built, we can check it by using the following command:

docker images

Note that the resulting image is much smaller than the regular Rust image:

REPOSITORY TAG IMAGE ID CREATED SIZE

auth-app latest 94e11dc49c66 2 minutes ago 34.8MB

rust 1.73-bookworm 890a6b209e1c 3 days ago 1.5GB

Now it is time to create an instance of this image. We call such an instance a container. We can do it by using the following command:

docker run -p 8080:8080 auth-app:latest

The `-p` flag maps the container port to the host port. The first port is the host port, and the second is the container port. We also explicitly specified a tag; without it, Docker defaults to the “latest” tag. Let’s now request the `/health` endpoint:

curl http://localhost:8080/health

You should see the following response, meaning that our application is healthy:

{"status": "OK"}

Containerizing with Podman

To start with Podman, you need to install Podman Desktop. You can download it from the [official website](https://podman.io/docs/installation). Once you’ve installed it, you can check the Podman version by using this command:

podman --version

You should see the version of Podman:

podman version 4.7.0

Compared to Docker Desktop, Podman requires an additional step from the user to start a virtual machine. You can do it by running the command below:

podman machine start

By default, the Podman machine is configured in rootless mode. So if your container requires root permissions (as an example, you need to get access to privileged ports), you need to change the machine settings to root mode:

podman machine set --rootful

We will run our application in rootless mode, which is more secure. If the “USER” instruction is not specified, the container runs as root by default. That is not ideal, so we create a user and group in the container and run the application as that user. Let’s adjust the second stage in our Dockerfile:

# … (first stage is omitted)

FROM gcr.io/distroless/cc-debian12

ENV RUST_LOG=info

COPY --from=builder /app/target/release/auth-app .

USER nobody:nobody

ENTRYPOINT ["./auth-app"]

CMD ["-a", "0.0.0.0", "-p", "8080"]

The “nobody” user is specially reserved in Unix-like operating systems. To limit the harm if a process is compromised, it’s common to run daemons or other processes as the “nobody” user. We also added the “ENTRYPOINT” instruction to the Dockerfile. It is like “CMD”, but it is not overridden by the arguments passed when running the container.

After that, you can build the image similarly to Docker by using the following statement:

podman build -t auth-app:latest .

Start the container, overriding the predefined “CMD” instruction:

podman run -p 5555:5555 auth-app -a 0.0.0.0 -p 5555

The part after the image name overrides the “CMD” instruction, that is, the default command specified in the Dockerfile, with a new port. After requesting the `/health` endpoint on the new port, we should get the same response as with Docker, which says that our application is healthy.

Containerizing with Colima

To install Colima, you can obtain the latest release from the official [GitHub repository](https://github.com/abiosoft/colima) and follow the provided installation guide. Once you’ve installed it, you can check the Colima version by using this command:

colima --version

You should see something like this:

colima version 0.5.6

git commit: ceef812c32ab74a49df9f270e048e5dced85f932

To start the Colima machine, use the “start” command:

colima start

This command adds Docker context to your environment. You can use the Docker client (Docker CLI) to interact with the Docker daemon inside the Colima machine. To get the context list, run the following:

docker context ls --format=json

[
  {
    "Current": true,
    "Description": "colima",
    "DockerEndpoint": "unix:///Users/m_muravyev/.colima/default/docker.sock",
    "KubernetesEndpoint": "",
    "ContextType": "moby",
    "Name": "colima",
    "StackOrchestrator": ""
  },
  {
    "Current": false,
    "Description": "",
    "DockerEndpoint": "unix:///Users/m_muravyev/.docker/run/docker.sock",
    "KubernetesEndpoint": "",
    "ContextType": "moby",
    "Name": "desktop-linux",
    "StackOrchestrator": ""
  }
]

The Colima context is the default one pointing to the Docker daemon inside the Colima machine. The desktop-linux context is the default Docker Desktop context. You can always switch between them.

Building Multi-Architecture Docker Images

Docker, Podman, and Colima support multi-architecture images, a powerful feature. You can create and share container images that work on different hardware types. This section will briefly touch on the concept of multi-arch images and how to make them.

Let’s refresh our memory about computer architecture. The Rust compiler can build the application for different architectures. The default one is the host architecture. For example, if you want to run an application on a modern macOS with an M chip, you must compile it on that machine. That’s because the M chip has “arm64” architecture. This architecture differs from the common “amd64”, which you can find on most regular Windows or Linux systems.

You can use Rust’s cross-compilation feature to compile a project for any architecture. It works on any host platform, even if the target is different. You only need a couple of flags to build a binary for Apple’s M chip on a regular Linux machine. No matter what our host system is, the Rust compiler will produce an M-chip-compatible binary:

rustup target add aarch64-apple-darwin # add the M chip target triple

cargo build --release --target aarch64-apple-darwin # build the binary using that target

To build the application for Linux, we can use the target triple “x86_64-unknown-linux-gnu”. Don’t worry about the “unknown” part. It is just a placeholder for the vendor and operating system; in this case, it means any vendor and the Linux OS. The “gnu” part means that the GNU C library is used. It is the most common C library for Linux.

It is important to say that there are drawbacks to using this method instead of creating images that support multiple architectures:

— Cross-compilation adds complexity and overhead to the build process because it works differently for each programming language.

— Building an image takes more time because of installing and configuring the cross-compilation toolchains.

— Creating distinct Dockerfiles for each architecture becomes necessary, leading to a less maintainable and scalable approach.

— Distinguishing the image’s architecture relies on using tags or names. In the case of multi-arch images, these tags or names may remain identical across all architectures.

Let’s create a multi-arch image for our application. We will use the Dockerfile we created earlier.

docker buildx create --use --name multi-arch # create a builder instance

docker buildx build --platform linux/amd64,linux/arm64 -t auth-app:latest .

Buildx is a Docker CLI plugin that extends the Docker build command with BuildKit’s capabilities. Because we are using Colima with the Docker runtime inside, we can use Buildx; Podman also supports multi-platform builds. The `--platform` flag specifies the target platforms: “linux/amd64” is the common default platform, and “linux/arm64” is the platform for Apple’s M chip.

Under the hood, Buildx uses QEMU to emulate the target architecture. The build process can take more time than usual because each target architecture is built under emulation. After the build is complete, you can find out the image’s available architectures by using the following command:

docker inspect auth-app | jq '.[].Architecture'

You need to install the “jq” tool to run this and further commands. It is a command-line JSON processor that helps you parse and manipulate JSON data.

brew install jq

You will get the following output:

“amd64”

You might notice that only one architecture is available. This is because Buildx uses the `--output=docker` type by default, which cannot export multi-platform images. Instead, multi-platform images must be pushed to a registry using the `--output=oci` type or simply with the `--push` flag. When you use this flag, Docker creates a manifest listing all available architectures for the image and pushes it alongside the images to the registry. When you pull the image, the registry serves the variant matching your architecture. Let’s check the manifest for the [official Rust image](https://hub.docker.com/_/rust) on the Docker Hub registry:

docker manifest inspect rust:1.73-bookworm | jq '.manifests[].platform'

Why don’t we specify any URL for a remote Docker Hub registry? That is because Docker CLI has a default registry, so the actual command above explicitly looks like this:

docker manifest inspect docker.io/rust:1.73-bookworm | jq '.manifests[].platform'

You will see output like so:

{
  "architecture": "amd64",
  "os": "linux"
}
{
  "architecture": "arm",
  "os": "linux",
  "variant": "v7"
}
{
  "architecture": "arm64",
  "os": "linux",
  "variant": "v8"
}
{
  "architecture": "386",
  "os": "linux"
}

You can see that the Rust image supports four architectures. Roughly speaking, the “arm” architecture is for the Raspberry Pi, “386” is for 32-bit x86 systems, “amd64” is for 64-bit x86 systems, and “arm64” is for Apple’s M chip.

The Role of Docker in Modern Development

Docker has transformed modern software development by providing a standardized approach through containerization. This approach has made software development, testing, and operations more efficient. Docker creates container images for various hardware configurations, including traditional x86-64 and ARM architectures. It integrates with multiple programming languages, making development and deployment more accessible and versatile for developers.

Docker is helpful for individual development environments and container orchestration and management. Organizations use Docker to streamline their software delivery pipelines, making them more efficient and reliable. Docker provides a comprehensive tool suite for containerization, which impacts software development at all stages.

Our journey doesn’t end with Docker alone as we navigate the complex world of modern development. The following section will explain the critical role of Kubernetes in orchestration and how it fits into the contemporary development landscape. Let’s explore how Kubernetes can orchestrate containerized applications.

Understanding Kubernetes’ Role in Orchestration

Building on our prior knowledge, we understand that container deployment is straightforward. What Kubernetes brings to the table, as detailed earlier, is large-scale container orchestration — particularly beneficial in complex microservice and multi-cloud environments.

Kubernetes, often regarded as the cloud’s operating system, extends beyond its origins as Google’s internal project, now serving as a cornerstone in the orchestration of containerized applications. It is a decent system for automating containerized application deployment, scaling, and management. It is a portable, extensible, and open-source platform. It is also a production-ready platform that powers the most extensive applications worldwide. Google, Spotify, The New York Times, and many other companies use Kubernetes at scale.

With the increasing complexity of microservices, Kubernetes’ vibrant community, including contributors from leading entities like Google and Red Hat, continually enhances its capabilities to simplify its management. Its active development mirrors the characteristic rapid evolution of open-source projects. Expect more discussions about Kubernetes involving IT professionals and individuals from diverse technical backgrounds, even those less familiar with technology.

Comparing Docker Compose and Kubernetes

Docker is a container platform. Kubernetes is a platform for orchestrating containers. It’s crucial to recognize that these two platforms cater to distinct purposes. An alternative to Kubernetes, even if incomplete, is Docker Compose. It presents a simpler solution for running Docker applications with multiple containers, finding its niche in local development environments. Some fearless individuals even deploy it in production. However, when comparing them, Docker Compose is like a small forklift that moves containers. On the other hand, Kubernetes can be envisioned as a cutting-edge logistics center comparable to the top-tier facilities in Amazon’s warehouses. It gives advanced automation, offering unparalleled container management at scale.

Docker Compose for Multi-Container Applications

With Docker Compose, you can define and run multiple containers. It uses a simple YAML file structure to configure the services. A service definition contains the configuration that is applied to each container. You can create and start all the services from your configuration with a single command.

Let’s enhance our auth-app application. Assume it requires in-memory storage to keep the user’s data; we will use Redis for that. We also need a broker to send messages to a queue; we will use RabbitMQ, a traditional choice for that. Let’s create a “compose.yml” file with the following content:

version: "3"
services:
  auth-app:
    image: <username>/auth-app:latest
    ports:
      - "8080:8080"
    environment:
      RUST_LOG: info
      REDIS_HOST: redis
      REDIS_PORT: 6379
      RABBITMQ_HOST: rabbitmq
      RABBITMQ_PORT: 5672
  redis:
    image: redis:latest
    volumes:
      - redis:/data
    ports:
      - 6379
  rabbitmq:
    image: rabbitmq:latest
    volumes:
      - rabbitmq:/var/lib/rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    ports:
      - 5672
volumes:
  redis:
  rabbitmq:

To start all the services, use the following command:

docker-compose up

Often it’s practical to run the containers in the background:

docker-compose up -d

And follow the logs in the same terminal session:

docker-compose logs -f

To stop all the compose’s containers, use the following command:

docker-compose down

Transitioning from Docker Compose to Kubernetes Orchestration

Migrating from Docker Compose to Kubernetes can offer several benefits and enhance the capabilities of your containerized applications. There are various reasons why Kubernetes can be a suitable option for this transition:

— Docker Compose is constrained to a single host, so a deployment cannot span machines. Conversely, Kubernetes is a platform that effectively manages containers across multiple hosts.

— In Docker Compose, the failure of the host running containers results in the failure of all containers on that host. In contrast, Kubernetes employs a primary node to oversee the cluster and multiple worker nodes. If a worker node fails, the cluster can operate with minimal disruption.

— Kubernetes boasts many features and possibilities that can be expanded with new components and functionalities. Although Docker Compose allows adding a few features, it lags behind Kubernetes in popularity and scope.

— With robust cloud-native support, Kubernetes facilitates deployment on any cloud provider. This flexibility has contributed to its growing popularity among software developers in recent years.

Conclusion

This section discusses how software packaging has evolved from traditional methods to modern containerization techniques using Docker and Kubernetes. It explains the benefits and considerations associated with Docker Engine, Docker Desktop, Podman, and Colima. The book will further explore the practical aspects of encapsulating applications into containers, the importance of Docker in current development methods, and the crucial role Kubernetes plays in orchestrating containerized applications at scale.

Creating a Local Cluster with Minikube

Minikube is a tool that makes it easy to run Kubernetes locally. It simplifies the process by running a single-node cluster inside a virtual machine (VM) on your device, which can emulate a multi-node Kubernetes cluster. Minikube is the most used local Kubernetes cluster. It is a great way to get started with Kubernetes. It is also an excellent environment for testing Kubernetes applications before deploying them to a production cluster.

There are equivalent alternatives to Minikube, such as Kubernetes support in Docker Desktop and Kind (Kubernetes in Docker), where you can also run Kubernetes clusters locally. However, Minikube is the most favored and widely used tool. It is also the most straightforward. It is a single binary that you can quickly download and run on your machine. It is also available for Windows, macOS, and Linux.

Installing Minikube

To install Minikube, download the binary from the [official website](https://minikube.sigs.k8s.io/docs/start/). For example, if you use macOS with an Intel chip, apply these commands:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64

sudo install minikube-darwin-amd64 /usr/local/bin/minikube

If you prefer not to use the curl and sudo combination, you can use Homebrew:

brew install minikube

Configuring and Launching Your Minikube Cluster

You can start Minikube in the simplest possible way with the default configuration:

minikube start

While the command above generally works, it’s recommended to specify the Minikube driver explicitly so you understand how the cluster is provisioned. For instance, the Container Network Interface (CNI) is set to auto by default, which can lead to unforeseen consequences depending on the driver Minikube selects.

It’s worth noting that Minikube often selects the driver based on the underlying operating system configuration. For example, if the Docker service runs, Minikube might default to using the Docker driver. Explicitly specifying the driver ensures a more predictable and tailored configuration for your specific needs.

minikube start --cpus=4 --memory=8192 --disk-size=50g --driver=docker --addons=ingress --addons=metrics-server

Most options are self-explanatory. The `--driver` option specifies the virtualization driver. By default, Minikube prefers the Docker driver, or a VM on macOS if Docker is not installed. On Linux, the Docker, KVM2, and Podman drivers are favored; however, you can use any of the seven currently available options. The `--addons` option specifies the list of add-ons to enable. You can list the available add-ons by using the following command:

minikube addons list

If you use Docker Desktop, make sure the virtual machine’s CPU and memory settings are higher than Minikube’s settings. Otherwise, you will get an error like:

Exiting due to MK_USAGE: Docker Desktop has only 7959MB memory, but you specified 8192MB.

Once you’ve started, use this command to check the cluster’s status:

minikube status

And get:

minikube

type: Control Plane

host: Running

kubelet: Running

apiserver: Running

kubeconfig: Configured

Interacting with Minikube Cluster

The kubectl command-line tool is the most common way to interact with Kubernetes. It has to be the first tool for any Kubernetes user, being the official client for the Kubernetes API. Minikube already bundles it, and we could use that copy; however, the recommended way is to install kubectl from the [official website](https://kubernetes.io/docs/tasks/tools/) and use it separately from Minikube, not least because Minikube’s bundled kubectl is not always up to date and can be a few versions behind.

You can check Minikube’s kubectl version by using the following command:

minikube kubectl -- version

Alternatively, if you have kubectl installed separately, you can use it by using the following command:

kubectl version

From now on, we will use the kubectl command-line tool installed separately from Minikube.

You will receive the client version (kubectl itself) and the server version (the Kubernetes cluster). It’s okay if the versions differ, as the Kubernetes server has a different release cycle than kubectl. While it’s better to aim for identical versions, it’s not always necessary.

To get the list of nodes in the cluster, use the following command:

kubectl get nodes

You will get our cluster’s single node:

NAME STATUS ROLES AGE VERSION

minikube Ready control-plane 10m v1.24.1

This output means that we have one node that was created 10 minutes ago. The node has the control-plane role, which means it is the primary node. Usually, control-plane nodes are reserved for Kubernetes components (the things that make Kubernetes run), not for user workloads (the applications that users deploy on Kubernetes). But since Minikube is meant for development, this single node hosts everything.

It is also worth noting that this single node exposes the Kubernetes API server. You can find out the URL of it by using the following command:

kubectl cluster-info

You will get the address that kubectl sends its requests to:

Kubernetes control plane is running at https://127.0.0.1:59813

Finally, let’s use the first add-on we enabled earlier. The metrics server is a cluster-wide aggregator of resource usage data. It collects metrics from Kubernetes, such as CPU and memory usage per node and pod. It is a prerequisite for the autoscaling mechanism we will discuss later in this book. For now, let’s check cluster node resource usage:

kubectl top node

You will receive data showing the utilization of CPU and memory resources by the node. In our case, the usage might appear minimal because nothing has been deployed yet. The specific percentages can vary depending on background processes and Minikube’s overhead.

NAME CPU (cores) CPU% MEMORY (bytes) MEMORY%

minikube 408m 10% 1600Mi 20%

Stopping and Deleting Your Minikube Cluster

To stop Minikube, use the following command:

minikube stop

You can also delete the cluster by using the following command:

minikube delete

Recipe: Deploying Your First Application to Kubernetes

In this recipe, we will deploy our first application to the Kubernetes cluster. We will use the same application we containerized in the previous recipe; that is, the same Docker image we built earlier. However, we will deliberately start with the less common imperative approach, using command-line commands, to keep things simple. We will switch to the declarative way later in this chapter once we warm up. For now, let’s refresh our fundamental computer science knowledge and recall the differences between these two approaches.

Understanding Imperative vs. Declarative Management Model

The imperative paradigm is a term mainly, but not only, associated with programming. In this style, the engineer tells the computer step by step how to do a task. The imperative approach is used to operate programs or issue direct commands to configure infrastructure. For example, using terminal command-line commands to start a Docker container is an instance of the imperative approach.

In the declarative paradigm, the engineer tells the computer what to do, not how. The goal is to describe the desired state of the system. The declarative approach is mostly used to configure infrastructure, especially cloud infrastructure. The “compose.yml” file also describes and runs a containerized application in a declarative way. Usually, the declarative approach relies on a manifest file, a text file describing the system’s final state.

Even though the declarative approach is the norm for infrastructure, particularly for Kubernetes, in some rare situations, such as debugging and real-time troubleshooting, the imperative method is still the right tool, so let’s start with it.
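
To make the contrast concrete, here is a minimal sketch: the imperative command we will run later in this recipe next to a declarative Pod manifest describing the same desired state (the image name is a placeholder, as elsewhere in this chapter):

# Imperative: tell Kubernetes what to do, step by step
#   kubectl run auth-app --image=<username>/auth-app:latest --port=8080
#
# Declarative: describe the desired state in a manifest and apply it
#   kubectl apply -f auth-app-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: auth-app
spec:
  containers:
    - name: auth-app
      image: <username>/auth-app:latest
      ports:
        - containerPort: 8080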

Pushing Your Container Image to a Registry

Before we start, we need to push the image to a registry. We will use the Docker Hub registry. You can create a free account on the [official website](https://hub.docker.com/). Once you’ve created an account and generated an access token, you can log in to the registry by using the following command:

docker login

You will be prompted to enter your username and password. After that, you can push the image to the registry by using the following command:

docker tag auth-app:latest <username>/auth-app:latest

docker push <username>/auth-app:latest

Imperative Deployment with kubectl run

The fastest way to deploy an application instantly is to use the “kubectl run” command.

This command creates a pod Kubernetes object. A pod is the smallest and simplest unit of deployment in Kubernetes. At this point, let’s assume that it is a group of one or more containers that share storage, network, and specification. Also, it is the basic building block of Kubernetes.

Let’s start Minikube and create a deployment. Use the following command:

kubectl run auth-app --image=<username>/auth-app:latest --port=8080

Then check the pod status by using the following command:

kubectl get pods

You will get the following output:

NAME READY STATUS RESTARTS AGE

auth-app 1/1 Running 0 4m55s

To see all the events that led the pod to the Running state, use the following command:

kubectl get events --field-selector involvedObject.name=auth-app

You will get the following output:

LAST SEEN TYPE REASON OBJECT MESSAGE

10m Normal Scheduled pod/auth-app Successfully assigned default/auth-app to minikube

10m Normal Pulling pod/auth-app Pulling image "<username>/auth-app:latest"

10m Normal Pulled pod/auth-app Successfully pulled image "<username>/auth-app:latest" in 7.158188757s

10m Normal Created pod/auth-app Created container auth-app

10m Normal Started pod/auth-app Started container auth-app

The pod reached the Running state in four steps. First, it was scheduled to the node. Then, the image was pulled from the registry. After that, the container was created and started. We now have a running pod, but we cannot access it from outside the cluster. To do that, we need to expose the pod’s port.

Exposing Your Application with Port Forwarding

To expose the pod to the outside world, we use the “kubectl port-forward” command. It forwards a local port to a port on the pod. Use the following command to make the pod accessible on port 8080:

kubectl port-forward pod/auth-app 8080:8080

After that, you can request the `/health’ endpoint by using the following command:

curl http://localhost:8080/health

You will get the following output:

{"status": "OK"}

Also, we can check the pod’s access log by using the following command:

kubectl logs -f pod/auth-app

You will see the following line logged for our request:

[2023-11-11T12:58:01Z INFO actix_web::middleware::logger] 127.0.0.1 "GET /health HTTP/1.1" 200 15 "-" "curl/8.1.2" 0.000163

Using port-forwarding exposes the pod, but it’s not advised for production-like infrastructure. This is because it is not scalable, forwards one port at a time, and is insecure. It is also not reliable because it does not have any retry mechanism. And it’s still an imperative, less convenient command.

You can use port-forwarding with complete confidence in a local development environment, for example, when you must debug or test the application manually. Sometimes it also makes sense in CI/CD pipelines: when you just need to run integration or system tests against the application, a full declarative description looks redundant compared to a simple command.

Conclusion

In this section, we have introduced Minikube as a local Kubernetes environment, outlined its installation and usage, and demonstrated deploying and managing an application through an imperative method, emphasizing Minikube’s capabilities for local development, testing, and learning Kubernetes fundamentals.

Preparing Your Project for Kubernetes Migration

Architectural Redesign for Kubernetes Optimization

This section will discuss principles and patterns to help you scale and manage your workloads on Kubernetes. Kubernetes can handle different workloads, but your choices impact how easy it is to use and what’s possible. The Twelve-Factor App philosophy is a popular methodology for creating cloud-ready web apps. It helps you focus on the most essential characteristics.

Although checking out the Twelve-Factor App philosophy is highly recommended, we will discuss only some factors here. We will also discuss the most common anti-patterns and how to avoid them.

Choosing Between Stateless and Stateful Applications

The first factor on everyone’s lips is the application’s state. Kubernetes has robust mechanisms to handle both stateless and stateful applications. To make applications easier to scale and manage, it is essential to strive for statelessness, keeping containers as ephemeral as possible. You can also move the state to a separate service, like a database; this could be a managed cloud service such as Amazon RDS or Google Cloud SQL. Scaling managed databases and other storage services independently from your application is simple. Running stateful applications on Kubernetes itself takes extra effort and expertise, but in the long term it gives you great flexibility and efficiency in operations.

Embracing Decoupling

The next factor is that decoupling applications into multiple containers makes it easier to scale horizontally and reuse containers. The ideal is one process per container, but that is not always possible. Microservice design is something you should strive for; it is also worth noting that microservices are not a silver bullet. They have drawbacks, such as increased complexity and overhead, so use them only when it makes sense.

Managing Application Configuration

The third factor is configuration, which means the application’s configuration must be stored separately from the code. To keep the configuration for Kubernetes applications, you should use ConfigMap or Secret Kubernetes objects, mapping their data to your application’s environment variables or configuration files. You can always use third-party secret storage like HashiCorp Vault or AWS Secrets Manager, with or without Kubernetes integration.
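
A minimal sketch of this approach, using illustrative names and values rather than manifests from the book:

apiVersion: v1
kind: ConfigMap
metadata:
  name: auth-app-config
data:
  RUST_LOG: info
  REDIS_HOST: redis
  REDIS_PORT: "6379"

# In the pod spec, the keys above can be mapped to environment variables:
#   envFrom:
#     - configMapRef:
#         name: auth-app-config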

Storing configuration in the code is a common beginner mistake: because container images are immutable, you have to rebuild the image to change the configuration. It is also insecure, hurts scalability, and is inflexible.

Centralizing Logging

The fourth crucial aspect is how the application implements logging. The application should write its logs to standard output (stdout) and standard error (stderr). Agents such as Filebeat or Fluent Bit are instrumental in aggregating these logs and transmitting them to processors like Fluentd or Vector, which should be configured with log pipelines aligned with the specified format. It is advisable to store the logs in a database such as Elasticsearch or Loki and later access them using visualization tools like Kibana or Grafana.

In Chapter 14, we’ll talk about logging aspects related to Kubernetes. But logging in general is complicated and would require its own book to cover all the nuances.

Implementing Health and Readiness Probes

The next important factor is health checks, or probes as they are called in Kubernetes. The application must report whether it is healthy; a running process or a listening port is not enough. A good practice is to have two endpoints: `/health` and `/ready`. The first reports the application’s health, and the second reports readiness. The readiness endpoint is useful when you must wait for the application to become ready, for example, when the database your app depends on is not yet available. Kubernetes will only send traffic to the application once it is ready.
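
As a minimal sketch of how such endpoints might be wired into a container spec (the paths match our auth-app, while the timings are illustrative):

livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5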

Ensuring Graceful Shutdown

The final factor is graceful shutdown: the application must handle the “SIGTERM” signal. Some programming languages, like Go, handle it by default, but some do not, and many people forget to handle “SIGTERM” at the application level. Because of this, Kubernetes cannot tell whether the application is ready to be stopped, which can lead to various issues.
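
On the Kubernetes side, the pod specification controls how much time the application gets between SIGTERM and a forced kill. A minimal sketch with an illustrative value (30 seconds is also the default):

spec:
  terminationGracePeriodSeconds: 30  # time Kubernetes waits after SIGTERM before sending SIGKILL
  containers:
    - name: auth-app
      image: <username>/auth-app:latest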

Best Practices for Containerizing Your Application

This section will delve into containerizing your application, exploring challenges and best practices to enhance efficiency and seamless deployment.

Leveraging Multi-Stage Docker Builds

We touched on multi-stage builds in the previous section. It is a technique for creating a Docker image in several build stages, where each stage can use a different base image. Making the image smaller has advantages such as quicker deployment and fewer vulnerabilities.

Using .dockerignore for Efficient Builds

You can use the `.dockerignore` file to exclude files from the build context. It is similar to the `.gitignore` file. Depending on the programming language, you can exclude a lot of files. With Rust, you can exclude the “target” directory, the default output directory for the build command. With Node.js, you can exclude the “node_modules” directory containing all the dependencies. You can even organize your rules in an allow-list style:

*
!src
!Cargo.toml
!Cargo.lock

Optimizing Container Images

Optimization can depend on the environment you build the image for. For example, with Rust you can use the `--release` flag to build the application in release mode, which leaves out the debug symbols that are unnecessary in production. Using smaller base images like Alpine or Slim for production is also better. Regular images with extra debugging tools can be used for less critical environments like development or testing, which makes troubleshooting easier.

Use Non-Root User with Fine-Grained Permissions

If a service can operate without elevated privileges (e.g., it doesn’t require binding privilege ports or accessing system files), utilize the USER command to switch to a non-root user. Begin by establishing the user and group in the Dockerfile, as in the following example:

RUN groupadd -r myservice && useradd --no-log-init -r -g myservice myservice

USER myservice

Consider specifying a distinct UID/GID. The assignment of UIDs/GIDs to users and groups in an image is non-deterministic, meaning the “next” UID/GID is assigned irrespective of image rebuilds. If it’s crucial, giving an explicit UID/GID is favorable. Finally, to streamline layers and simplify the structure, minimize the frequency of switching USER back and forth.

If you need to adjust permissions inside the container, use the chmod command in your Dockerfile. For instance, if your application requires write permissions to a specific directory, you can include a line similar to the following:

RUN chmod -R 777 /var/log/myapp
