Kubernetes for Beginners: Pods, Deployments, and Services Explained
If you’ve been using Docker to containerize your applications, the natural next step is Kubernetes (often abbreviated as K8s) — the industry-standard container orchestration platform. Kubernetes automates deploying, scaling, and managing containerized applications across clusters of machines.
This guide will take you from zero to running your first application on Kubernetes, covering all the core concepts you need to understand.
What Is Kubernetes?
Kubernetes is an open-source platform originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF). It solves the problem of managing containers at scale — when you have dozens or hundreds of containers that need to communicate, restart on failure, scale up during traffic spikes, and roll out updates without downtime.
Why Not Just Use Docker?
Docker is great for running individual containers. But in production, you need:
- Automatic restarts when a container crashes
- Load balancing across multiple instances of the same app
- Rolling updates with zero downtime
- Service discovery so containers can find each other
- Resource management to allocate CPU and memory efficiently
Kubernetes handles all of this and more.
Setting Up a Local Kubernetes Cluster
The easiest way to learn Kubernetes is to run it locally. Here are three popular options:
Option 1: Minikube
Minikube runs a single-node Kubernetes cluster on your machine.
# Install on macOS
brew install minikube
# Install on Linux
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
# Start the cluster
minikube start
# Verify it's running
minikube status
Option 2: Docker Desktop
If you already have Docker Desktop installed on macOS or Windows, you can enable Kubernetes in Settings → Kubernetes → Enable Kubernetes. This is the simplest approach if Docker Desktop is already part of your workflow.
Option 3: kind (Kubernetes in Docker)
kind runs Kubernetes clusters using Docker containers as nodes. It’s lightweight and popular for CI testing.
# Install
brew install kind # macOS
# or
go install sigs.k8s.io/kind@v0.24.0 # with Go
# Create a cluster
kind create cluster --name my-cluster
# Delete when done
kind delete cluster --name my-cluster
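By default, kind creates a single-node cluster, but it can also simulate multi-node topologies from a config file, which is handy for testing scheduling behavior locally. A minimal sketch (the file name is illustrative):

```yaml
# kind-config.yaml — one control-plane node plus two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Pass it when creating the cluster: kind create cluster --name my-cluster --config kind-config.yaml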
Installing kubectl
kubectl is the command-line tool for interacting with Kubernetes. Every operation — deploying apps, inspecting logs, scaling services — goes through kubectl.
# macOS
brew install kubectl
# Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
# Windows (with chocolatey)
choco install kubernetes-cli
# Verify installation
kubectl version --client
After setting up your cluster, verify the connection:
kubectl cluster-info
kubectl get nodes
You should see your node listed with a Ready status.
Core Concepts
Pods
A Pod is the smallest deployable unit in Kubernetes. It wraps one or more containers that share the same network namespace and storage volumes. In most cases, a pod runs a single container.
Create a simple pod definition in a file called nginx-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.27
      ports:
        - containerPort: 80
Apply it:
kubectl apply -f nginx-pod.yaml
Useful pod commands:
# List all pods
kubectl get pods
# Get detailed info about a pod
kubectl describe pod nginx-pod
# View pod logs
kubectl logs nginx-pod
# Execute a command inside the pod
kubectl exec -it nginx-pod -- /bin/bash
# Delete the pod
kubectl delete pod nginx-pod
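Since the containers in a pod share networking and can share volumes, pods are the natural home for the "sidecar" pattern: a helper container running alongside the main one. As an illustration not taken from the examples above, here is a sketch of a two-container pod where a busybox container writes a page that nginx serves:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-sidecar
spec:
  containers:
    - name: nginx
      image: nginx:1.27
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: writer
      image: busybox:1.36
      # Rewrites index.html with the current time every 5 seconds
      command: ["sh", "-c", "while true; do date > /pod-data/index.html; sleep 5; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /pod-data
  volumes:
    - name: shared-data
      emptyDir: {}
```

Both containers mount the same emptyDir volume, so changes written by one are immediately visible to the other.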
Deployments
In practice, you rarely create pods directly. Instead, you use a Deployment, which manages a set of identical pods (called replicas) and handles rolling updates and rollbacks.
Create nginx-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "128Mi"
              cpu: "250m"
Apply and manage:
# Create the deployment
kubectl apply -f nginx-deployment.yaml
# Check deployment status
kubectl get deployments
# Watch pods being created
kubectl get pods -w
# Scale to 5 replicas
kubectl scale deployment nginx-deployment --replicas=5
# Update the image (triggers rolling update)
kubectl set image deployment/nginx-deployment nginx=nginx:1.27.1
# Check rollout status
kubectl rollout status deployment/nginx-deployment
# Rollback to previous version
kubectl rollout undo deployment/nginx-deployment
# View rollout history
kubectl rollout history deployment/nginx-deployment
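Rolling updates replace pods gradually, and you can tune how aggressively with the Deployment's strategy field; a readiness probe tells Kubernetes when a new pod is actually ready to receive traffic before an old one is removed. A sketch of the relevant fragment of a Deployment spec (the probe values are illustrative defaults, not requirements):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 extra pod above the desired replica count
      maxUnavailable: 0  # never drop below the desired replica count
  template:
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 2
            periodSeconds: 5
```

With maxUnavailable: 0, Kubernetes only terminates an old pod once a replacement has passed its readiness probe, giving you a genuinely zero-downtime rollout.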
Services
Pods are ephemeral — they get new IP addresses when they restart. A Service provides a stable network endpoint to access a group of pods.
There are four types of services:
| Type | Description |
|---|---|
| ClusterIP | Internal-only IP (default). Accessible only within the cluster. |
| NodePort | Exposes the service on a static port (default range 30000–32767) on each node’s IP. Accessible externally via <NodeIP>:<NodePort>. |
| LoadBalancer | Provisions an external load balancer (cloud providers). |
| ExternalName | Maps a service to an external DNS name. |
Create nginx-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080
Apply and test:
# Create the service
kubectl apply -f nginx-service.yaml
# List services
kubectl get services
# If using Minikube, get the URL
minikube service nginx-service --url
# Describe the service
kubectl describe service nginx-service
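For traffic that only needs to flow inside the cluster, the default ClusterIP type is enough: other pods can reach the service by its DNS name (nginx-internal, or fully qualified, nginx-internal.default.svc.cluster.local). A minimal sketch (the service name here is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-internal
spec:
  # type: ClusterIP is the default, so it can be omitted
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

This built-in DNS-based service discovery is why the web app example later in this guide can reach Redis simply by the hostname redis-service.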
Namespaces
Namespaces provide isolation within a cluster. They’re useful for separating environments (dev, staging, production) or teams.
# List namespaces
kubectl get namespaces
# Create a namespace
kubectl create namespace development
# Deploy to a specific namespace
kubectl apply -f nginx-deployment.yaml -n development
# List pods in a namespace
kubectl get pods -n development
# Set a default namespace for kubectl
kubectl config set-context --current --namespace=development
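Like most Kubernetes resources, namespaces can also be declared in YAML so they live in version control alongside your other manifests:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
```

Applying this with kubectl apply -f is equivalent to the kubectl create namespace command above, but repeatable.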
ConfigMaps and Secrets
ConfigMaps store non-sensitive configuration data. Secrets store sensitive data like passwords and API keys. Note that Secret values are only base64-encoded, not encrypted; anyone who can read the Secret can decode them, so restrict access with RBAC or use an external secrets manager for stronger guarantees.
# Create a ConfigMap from literal values
kubectl create configmap app-config \
--from-literal=DATABASE_HOST=db.example.com \
--from-literal=LOG_LEVEL=info
# Create a Secret
kubectl create secret generic db-credentials \
--from-literal=username=admin \
--from-literal=password=s3cur3p@ss
# View ConfigMaps
kubectl get configmaps
kubectl describe configmap app-config
# View Secrets (values are base64 encoded)
kubectl get secrets
kubectl describe secret db-credentials
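The same objects can be written declaratively. Note that the data field of a Secret manifest must contain base64-encoded values; the stringData field accepts plain text and lets Kubernetes encode it for you:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: db.example.com
  LOG_LEVEL: info
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:  # plain text here; Kubernetes stores it base64-encoded
  username: admin
  password: s3cur3p@ss
```

This mirrors the two kubectl create commands above, but keeps the configuration reviewable in source control (keep real Secret manifests out of public repos).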
Use them in a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          envFrom:
            - configMapRef:
                name: app-config
          env:
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
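Besides environment variables, a ConfigMap can be mounted as files, where each key becomes a file under the mount path (useful for config files that apps read from disk). A sketch of the relevant fragment of a pod spec, reusing the app-config ConfigMap from above:

```yaml
spec:
  containers:
    - name: myapp
      image: myapp:latest
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config   # creates /etc/config/DATABASE_HOST, etc.
          readOnly: true
  volumes:
    - name: config-volume
      configMap:
        name: app-config
```

A practical difference: files mounted this way are updated in place when the ConfigMap changes, whereas environment variables are fixed at container start.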
A Complete Example: Deploying a Web App with a Database
Let’s deploy a practical example — a Node.js app with a Redis backend.
Step 1: Deploy Redis
Create redis-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          ports:
            - containerPort: 6379
          resources:
            limits:
              memory: "128Mi"
              cpu: "250m"
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
Step 2: Deploy the Web App
Create web-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: node:20-alpine
          command: ["node", "-e"]
          args:
            - |
              const http = require('http');
              const server = http.createServer((req, res) => {
                res.end('Hello from Kubernetes! Pod: ' + process.env.HOSTNAME);
              });
              server.listen(3000);
          ports:
            - containerPort: 3000
          env:
            - name: REDIS_HOST
              value: redis-service
          resources:
            limits:
              memory: "128Mi"
              cpu: "250m"
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30300
Step 3: Deploy Everything
kubectl apply -f redis-deployment.yaml
kubectl apply -f web-deployment.yaml
# Verify everything is running
kubectl get all
# Test the web app (Minikube)
minikube service web-service --url
# Then curl the URL or open it in your browser
Essential kubectl Commands Reference
Here’s a cheat sheet of the most useful commands:
# Cluster info
kubectl cluster-info
kubectl get nodes
# Working with resources
kubectl get pods,deployments,services
kubectl get all --all-namespaces
# Debugging
kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl logs <pod-name> --previous # logs from crashed container
kubectl exec -it <pod-name> -- sh
kubectl top pods # resource usage (requires metrics-server)
# Port forwarding (access a pod directly)
kubectl port-forward pod/<pod-name> 8080:80
# Apply/delete from files
kubectl apply -f manifest.yaml
kubectl delete -f manifest.yaml
# Quick resource creation
kubectl create deployment hello --image=nginx --replicas=3
kubectl expose deployment hello --port=80 --type=NodePort
# Watch resources in real-time
kubectl get pods -w
# Get YAML output of existing resource
kubectl get deployment nginx-deployment -o yaml
# Dry run (preview without applying)
kubectl apply -f manifest.yaml --dry-run=client
What’s Next?
Once you’re comfortable with these basics, explore these topics to deepen your Kubernetes knowledge:
- Ingress Controllers — Route external HTTP/HTTPS traffic to services using NGINX Ingress or Traefik
- Persistent Volumes — Store data that survives pod restarts using PersistentVolumes and PersistentVolumeClaims
- Helm — The package manager for Kubernetes at helm.sh, used to install complex applications with a single command
- Horizontal Pod Autoscaler — Automatically scale pods based on CPU/memory usage
- RBAC — Role-Based Access Control for securing your cluster
- Managed Kubernetes — Cloud offerings like Google GKE, AWS EKS, and Azure AKS that handle the control plane for you
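As a taste of the autoscaling topic above, here is a sketch of a HorizontalPodAutoscaler targeting the nginx Deployment from earlier. It requires metrics-server to be installed, and it assumes the Deployment sets CPU requests (which the earlier example does), since target utilization is measured against requests:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale up when average CPU exceeds 70% of requests
```

Apply it with kubectl apply -f, then watch it react with kubectl get hpa -w.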
Useful Resources
- Official Kubernetes Documentation
- Kubernetes Interactive Tutorial
- kubectl Cheat Sheet
- Kubernetes The Hard Way by Kelsey Hightower
- Play with Kubernetes — free browser-based K8s playground
Kubernetes has a steep learning curve, but once the core concepts click — pods, deployments, services, and namespaces — you’ll find it’s an incredibly powerful platform for running production workloads reliably.