When people first start learning Kubernetes, one of the biggest points of confusion is the architecture itself.

Terms like:

  • Control Plane
  • Worker Nodes
  • Scheduler
  • etcd
  • Kubelet

can initially feel overwhelming.

I remember when I first started learning Kubernetes: I could run pods and deployments, but I didn’t fully understand what was actually happening inside the cluster.

Once I understood how the Control Plane and Worker Nodes communicate, Kubernetes concepts started making much more sense.

In this guide, I’ll break down the core Kubernetes architecture in a practical and beginner-friendly way.

What is Kubernetes?

Kubernetes is a container orchestration platform.

It helps:

  • deploy containers
  • scale applications
  • manage workloads
  • recover failed containers
  • automate deployments

Instead of manually managing containers one by one, Kubernetes handles orchestration automatically.

Kubernetes Cluster Architecture

A Kubernetes cluster is mainly divided into two parts:

  1. Control Plane
  2. Worker Nodes

Control Plane = The Brain

The Control Plane controls the entire cluster.

It makes decisions like:

  • where pods should run
  • how workloads should scale
  • how failures should be handled

Without the Control Plane, the cluster cannot function properly.

API Server

The API Server is the main entry point into Kubernetes.

Whenever you run commands like:

kubectl get pods

or:

kubectl apply -f deploy.yml

the request first goes to the API Server.

Why the API Server is Important

The API Server acts as the communication hub between:

  • users
  • worker nodes
  • controllers
  • cluster components

Almost everything in Kubernetes eventually talks to the API Server.

Scheduler

The Scheduler decides:
👉 where pods should run.

When a new pod is created:

  • the Scheduler checks available resources
  • and selects a suitable worker node

This decision depends on:

  • CPU
  • memory
  • taints/tolerations
  • affinity rules
  • resource availability
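
To make those inputs concrete, here is a minimal, hypothetical pod spec showing where resource requests, tolerations, and affinity rules live (the names `my-app`, `dedicated`, and `disktype` are illustrative, not required values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app               # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25      # example image
      resources:
        requests:            # what the Scheduler uses to find a node with room
          cpu: "250m"
          memory: "128Mi"
  tolerations:               # allows scheduling onto nodes with a matching taint
    - key: "dedicated"
      operator: "Equal"
      value: "batch"
      effect: "NoSchedule"
  affinity:
    nodeAffinity:            # prefer (not require) nodes labeled disktype=ssd
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
```

The Scheduler reads exactly these fields when filtering and ranking candidate nodes.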

Simple Example

Imagine:

  • Worker Node 1 is overloaded
  • Worker Node 2 has free resources

The Scheduler will usually place the pod on:

Worker Node 2

That automatic decision-making is one of Kubernetes’ biggest strengths.

Controller Manager

The Controller Manager continuously watches cluster state.

Its job is to ensure:

desired state = actual state

Example:
If you specify:

3 replicas

but one pod crashes,
the Controller Manager creates another pod automatically.
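
As a sketch, a Deployment that declares 3 replicas looks like this (the name `web` and the image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # illustrative name
spec:
  replicas: 3                # desired state: 3 pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # example image
```

If one of the 3 pods crashes, the ReplicaSet controller (run inside the Controller Manager) notices actual state (2 pods) no longer matches desired state (3 pods) and creates a replacement.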

Why Controllers Matter

This is one of the reasons Kubernetes feels “self-healing.”

The controllers constantly monitor workloads and try to maintain the desired configuration.

etcd – The Cluster Database

etcd is one of the most important Kubernetes components.

It acts as:
👉 the single source of truth for the cluster.

What etcd Stores

etcd stores:

  • cluster state
  • deployments
  • services
  • secrets
  • configurations
  • node information

In short: Kubernetes remembers everything through etcd.

Why etcd is Critical

If etcd becomes corrupted or unavailable,
the cluster can run into serious problems.

That’s why etcd backup strategies are very important in production environments.

Cloud Controller Manager

In cloud environments,
Kubernetes may also use:

Cloud Controller Manager

This component handles:

  • cloud load balancers
  • storage integrations
  • cloud networking
  • cloud-specific APIs

Example:
On AWS,
this helps Kubernetes communicate with cloud resources automatically.

Worker Nodes

Worker Nodes are where applications actually run.

This is where containers, pods, and workloads actually execute.

Kubelet

Kubelet runs on every worker node.

It acts as the node-level agent.

The Kubelet ensures:

  • containers are running correctly
  • pods remain healthy
  • workloads follow the desired state
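
One concrete way the Kubelet keeps pods healthy is by running the probes you declare in a pod spec. A minimal, hypothetical example (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app           # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25      # example image
      livenessProbe:         # the Kubelet runs this check on the node itself
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```

If the probe keeps failing, the Kubelet restarts the container locally, without the Control Plane having to intervene.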

kube-proxy

kube-proxy handles networking across the cluster.

It helps:

  • pods communicate
  • services route traffic
  • internal cluster networking function properly

Without kube-proxy,
cluster communication becomes difficult.
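
For example, when you create a Service like this sketch, kube-proxy programs the routing rules (iptables or IPVS) on every node so that traffic to the Service reaches matching pods (the name and labels are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                  # illustrative name
spec:
  selector:
    app: web                 # routes to pods carrying this label
  ports:
    - port: 80               # the Service's stable port
      targetPort: 80         # the pod's container port
```

The Service itself is just an API object; kube-proxy is the component that turns it into working networking on each node.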

Container Runtime

The container runtime is responsible for actually running containers.

Popular runtimes:

  • containerd
  • CRI-O

Docker was commonly used directly in earlier versions, but Kubernetes removed its built-in Docker integration (dockershim) in v1.24 and now interacts with containers through CRI-compatible runtimes.

How Everything Works Together

Simple flow:

  1. User creates deployment
  2. API Server receives request
  3. Scheduler selects worker node
  4. Controller Manager monitors state
  5. Kubelet starts containers
  6. kube-proxy handles networking
  7. etcd stores cluster information

Once I visualized the process like this, Kubernetes became much easier to understand.

Understanding Quorum

One important production concept is:

Quorum

For high availability,
Control Plane nodes are usually deployed in odd numbers:

  • 1
  • 3
  • 5

Why Odd Numbers Matter

The Control Plane’s datastore, etcd, relies on the Raft consensus algorithm: writes succeed only when a majority of members agree.

Using odd numbers gives the best failure tolerance per member, because a majority must stay reachable during failures.

For example:

  • 3-node control plane can survive 1 failure
  • 5-node control plane can survive 2 failures

This improves cluster reliability.
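
The majority ("quorum") math behind these numbers is floor(n/2) + 1 voting members. A quick shell sketch:

```shell
# Quorum for an n-member control plane (etcd): floor(n/2) + 1 members must agree.
# Tolerated failures = n - quorum.
for n in 1 3 5; do
  quorum=$(( n / 2 + 1 ))
  echo "members=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
```

This also shows why even numbers don’t help: a 4-member cluster needs 3 for quorum, so it tolerates the same single failure as a 3-member cluster while adding cost.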

Managed vs Self-Managed Kubernetes

There are two main approaches:

Self-Managed

You manage:

  • control plane
  • upgrades
  • etcd
  • maintenance
  • networking

Example:

  • kubeadm
  • kops
  • bare metal clusters

Managed Kubernetes

Cloud providers manage the Control Plane for you.

Example:

  • Amazon EKS (AWS)
  • AKS (Azure)
  • GKE (Google Cloud)

This simplifies maintenance significantly.

EKS Cost Reality

One thing beginners often don’t realize:

Managed Kubernetes control planes are not free.

For example:

  • AWS EKS charges for the Control Plane itself (about $0.10 per hour at the standard rate)

which is roughly:

~$72/month

before worker node costs.

That’s important to remember while practicing.
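
Assuming the commonly cited ~$0.10/hour control-plane rate (always verify against the current AWS pricing page), the monthly math works out like this:

```shell
# Hypothetical rate: ~$0.10/hour for the EKS control plane (verify current AWS pricing).
# 24 hours x 30 days:
awk 'BEGIN { printf "%.2f\n", 0.10 * 24 * 30 }'   # 72.00
```

That cost accrues even for an idle practice cluster, which is why deleting unused EKS clusters matters while learning.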

Practical Deployment Example

In the video, I also demonstrated deploying an Nginx pod using kubectl.

Example:

kubectl run test --image awsdevops/youtube

Behind the scenes:

  • API Server receives request
  • Scheduler picks node
  • Kubelet pulls image
  • container runtime starts container

Actually watching the deployment flow helps connect all the architecture concepts together.

What Helped Me Understand Kubernetes Better

Initially, Kubernetes felt extremely complicated.

What helped most was:

  • deploying real workloads
  • checking pod states
  • troubleshooting failures
  • understanding architecture visually

Hands-on experimentation made the concepts much clearer.

Common Beginner Mistakes

Some common Kubernetes mistakes:

  • jumping into Kubernetes too early
  • weak Linux fundamentals
  • weak Docker understanding
  • ignoring networking basics
  • not understanding cluster architecture

Kubernetes becomes much easier once you’re already comfortable with:

  • Linux
  • Docker
  • networking
  • cloud basics

Full Video Walkthrough

I also created a complete walkthrough covering:

  • Kubernetes architecture
  • Control Plane
  • Worker Nodes
  • Scheduler
  • etcd
  • Kubelet
  • kube-proxy
  • quorum concepts
  • EKS basics
  • practical pod deployment

along with simple explanations and architecture demonstrations.

👉 Watch the full walkthrough here:

Why Architecture Understanding Matters

Many Kubernetes problems become easier to troubleshoot once you understand:

  • which component is responsible for what
  • how workloads move
  • how scheduling works
  • where cluster state is stored

Without architecture understanding,
Kubernetes can feel like magic.

Final Thoughts

Kubernetes becomes much less intimidating once you understand:

  • how the Control Plane works
  • what Worker Nodes actually do
  • and how cluster components communicate together.

In my experience, architecture understanding is what makes troubleshooting much easier later.

What You Should Learn Next

After understanding Kubernetes architecture, explore:

  • Pods
  • Deployments
  • ReplicaSets
  • Services
  • Ingress
  • ConfigMaps
  • Secrets
  • Horizontal Pod Autoscaler

Those concepts become much easier once the cluster architecture is clear.

Bonus Tip

Don’t just memorize Kubernetes components.

Actually:

  • deploy workloads,
  • break things,
  • restart nodes,
  • and troubleshoot failures.

That practical experience teaches much faster.

Related Guides

If you’re learning Kubernetes and DevOps, also check:

  • K3s on AWS EC2
  • AWS Auto Scaling Explained
  • OpenVPN + VPC Peering
  • AWS WAF Explained
  • DevOps Roadmap 2026

About the Author

Madhukar Reddy is a DevOps engineer focused on AWS, Docker, Kubernetes, cloud infrastructure, and cyber security. He shares practical cloud and DevOps content based on hands-on deployments, infrastructure troubleshooting, and real-world learning projects.


$ This blog is currently running on AWS EC2 using Docker-based deployment.
