A Complete Kubernetes Tutorial for Beginners

Chafik Belhaoues

If you’ve heard the term “Kubernetes” in conversations about DevOps and the cloud but still don’t quite understand what it is, this Kubernetes tutorial for beginners from Brainboard is just for you.

Kubernetes is a system for deploying, scaling, and managing containers. Without it, teams spend hours manually maintaining servers, monitoring failed processes, and balancing workloads. With it, all of this happens automatically.

Today, Kubernetes is the de facto standard in cloud-native development. Almost every company that runs containers in production relies on it for convenience and speed. Understanding Kubernetes means understanding how modern infrastructure works.

In this guide, we’ll go from zero to your first real deployment: architecture, key concepts, installation, and hands-on practice. No unnecessary theory - just what you really need to get started and roll Kubernetes out quickly in your company.

What is Kubernetes and How It Works

Imagine: you have dozens of containers - frontend, backend, database, cache, and many others. Monitoring each one manually across multiple servers is practically impossible (or requires a lot of people). This Kubernetes tutorial starts with a simple question: Who keeps track of all this?

Kubernetes operates on the principle of declarative management. You describe what you want: “three copies of this service, always running.” Kubernetes takes your description and makes it a reality. If a container crashes, it restarts it; if the load increases, it adds replicas; if a node fails, it reschedules that node’s pods onto healthy ones.

You don’t tell the system “do this action”; you say “this is what the result should be.” Kubernetes figures out on its own how to get there and how to stay there.
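That “three copies of this service, always running” description is, in practice, a YAML manifest. A minimal sketch (the name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service            # placeholder name
spec:
  replicas: 3                 # the desired state: three copies, always
  selector:
    matchLabels:
      app: my-service
  template:                   # pod template Kubernetes stamps copies from
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: app
        image: my-service:1.0 # placeholder image
```

You never tell Kubernetes how to reach three copies; it compares this declaration against reality and closes the gap itself.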

Kubernetes Basics Explained

Before moving on, here are the Kubernetes basics without which the rest won’t make sense:

  • Pod - the smallest unit. One or more containers that run together and share a network. Think of a Pod as a single live instance of an application. Pods are ephemeral - they die and are recreated - and Kubernetes ensures the required number is always running.
  • Node - the machine where Pods reside. Physical or virtual - it makes no difference.
  • Cluster - multiple nodes under a single management system. A bird’s-eye view of your entire infrastructure.
  • Deployment - an object that describes how to run an application: which image, how many copies, how to update. It ensures the required number of pods is maintained at all times.
  • Service - a stable access point to pods. Pods die and are reborn with new addresses, but the Service always knows where to route traffic.

A simple analogy: Deployment - a shift manager who ensures there are always the right number of people on the job. Service - the reception desk through which all clients pass, regardless of who is on duty.
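To make the analogy concrete, here is a sketch of the “reception desk”: a Service that routes traffic to whichever pods currently carry a given label (the names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service    # traffic goes to any pod with this label
  ports:
  - port: 80           # port clients connect to
    targetPort: 8080   # port the container listens on
```

Pods behind the selector can come and go; clients always talk to the Service.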

Kubernetes Architecture Overview

A good Kubernetes guide is impossible without understanding how the system is structured internally. The architecture is divided into two parts: the control plane and worker nodes.

Control plane - the brain of the cluster.

API Server receives all commands - from your kubectl and from internal components. Everything goes through it.

etcd stores the cluster’s state: what’s running, what should be running, and what settings are applied. If etcd goes down, the cluster can no longer record or recall that state.

Scheduler decides which node to run a new pod on. It looks at available resources and constraints.

Controller Manager - a set of background processes that continuously compare the actual state with the desired state and correct any discrepancies.

Worker Nodes - the machines where containers actually run. On each: kubelet (communicates with the control plane, launches pods), kube-proxy (network rules), and the container runtime - the container engine itself.
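On a local cluster you can see most of these components yourself - they run as pods in the kube-system namespace (exact pod names vary by distribution):

```shell
# List control-plane and node components
kubectl get pods -n kube-system

# The kubelet is the exception: it runs directly on each node,
# not as a pod (e.g. as a systemd service on Linux).
```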

Installing and Setting Up Kubernetes

It’s best to start learning Kubernetes locally - breaking something on your own machine is much safer than breaking it in the cloud.

  • Minikube is the easiest way to get started. A single-node cluster that runs right on your laptop. Install Minikube and kubectl, then type ‘minikube start’ - the cluster is up and running in a couple of minutes.
  • Kind (Kubernetes in Docker) is an alternative for those already working with Docker: cluster nodes run as containers rather than virtual machines.

If you want to try the cloud right away, Amazon EKS, Google GKE, or Azure AKS will handle the control plane for you. Most providers offer a free tier or starter credits.

After installation, check that everything is working: ‘kubectl cluster-info’ should display the cluster address. Now you’re ready to begin.
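Assuming Minikube, the whole startup check fits in three commands:

```shell
minikube start         # create and start the local single-node cluster
kubectl cluster-info   # prints the API server address if all is well
kubectl get nodes      # the node should report a Ready status
```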

How to Use Kubernetes: First Deployment

Theory is good, practice is better. Let’s break down how to use Kubernetes using a concrete example.

  • Let’s run nginx: kubectl create deployment nginx --image=nginx
  • Kubernetes creates a Deployment and starts a pod. Let’s see what happened: kubectl get pods
  • The pod is running, but it’s not accessible from the outside. Let’s expose it via a Service: kubectl expose deployment nginx --port=80 --type=NodePort
  • Find the address: minikube service nginx --url
  • Open the link in a browser - you’ll see the nginx welcome page. Done, the first application is deployed.

This is the basic workflow: Deployment → verify that pods are running → Service → access. In real projects, the logic is the same, but the configuration is more complex.
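The workflow above, as one copy-pasteable session (assuming Minikube):

```shell
kubectl create deployment nginx --image=nginx               # Deployment + pod
kubectl get pods                                            # verify the pod is Running
kubectl expose deployment nginx --port=80 --type=NodePort   # Service
minikube service nginx --url                                # address to open
```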

Deploying Kubernetes Applications

In practice, deploying Kubernetes is done via YAML manifests, not through terminal commands. A manifest is a file that describes the desired state: which image to run, how many replicas to keep, and which ports to open.
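For the nginx example, such a deployment.yaml might look like this (the replica count and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3                # how many copies to keep
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.27    # change this tag to roll out an update
        ports:
        - containerPort: 80  # which port to open
```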

It is applied with a single command: kubectl apply -f deployment.yaml

YAML files are stored in Git, reviewed, and versioned. Want more replicas? Change a single number in the file and apply it again. Want to update the image? Change the tag. Want to roll back? Restore the previous version of the file.

Scaling looks like this: kubectl scale deployment nginx --replicas=5

Kubernetes brings the deployment up to five replicas, spread across the available nodes. It is precisely this simplicity of horizontal scaling that makes Kubernetes so valuable in production.

If you want to visualize your entire infrastructure and automatically generate ready-to-use manifests, Brainboard converts architectural diagrams into actual IaC code.

Kubernetes Services and Networking Basics

Once the application is deployed, the question arises: how do parts of the system communicate with each other, and how does traffic enter the system?

  • ClusterIP - an internal address accessible only within the cluster. The backend communicates with the database via ClusterIP - this connection is invisible from the outside.
  • NodePort - opens a port on every node in the cluster. External traffic arrives at this port and is routed to the appropriate pod. Convenient for testing, but not for serious production.
  • LoadBalancer - in cloud environments, it creates a true load balancer. Traffic is distributed among pods automatically. This is the standard way to expose a service to the outside world.
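All three are the same Service object; only the ‘type’ field differs. A sketch (names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # or ClusterIP (the default), or NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```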

Learning Kubernetes Best Practices

Learning Kubernetes from scratch isn’t a sprint. People who try to learn everything at once usually give up halfway through.

Start with Minikube and simple deployments. Break something on purpose - see what ‘kubectl describe’ says, what’s visible in ‘kubectl logs’. Most real-world problems are solved using these two commands.
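A typical debugging session with those two commands (the pod name is a placeholder):

```shell
# Why is the pod misbehaving? Events, restart counts, image pull errors:
kubectl describe pod my-pod

# What is the application itself saying?
kubectl logs my-pod

# Follow the log stream live:
kubectl logs -f my-pod
```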

Learn to read and write YAML - it’s the core language of Kubernetes. You don’t need to know it by heart; you need to understand the structure.

Work through the layers: first pods and deployments, then services, then storage and networking, then RBAC and security. Each layer builds on the previous one.

Kubernetes is vast, but it’s entirely possible to master it step by step. And when the time comes to build a real production infrastructure, Brainboard will help you build it right the first time.

April 20, 2026