5 min read · @Sdmrf

Every Kubernetes Cluster I've Tested This Year Had the Same Problem

After a dozen pentests involving K8s, I'm seeing the same misconfigurations over and over. Here's what keeps going wrong.


I’ve done eleven Kubernetes security assessments this year. Different industries, different cloud providers, different team sizes.

Same problems. Every single time.

Here’s the pattern I keep seeing, and why I think the K8s security tooling ecosystem is failing us.

The Usual Suspects

1. Overprivileged Service Accounts

This one’s nearly universal. Default service account tokens mounted in pods that don’t need them. Service accounts with cluster-admin because “it was easier during development.”

Real example from last month:

# Found in production
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend-api
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: backend-api-binding
roleRef:
  kind: ClusterRole
  name: cluster-admin  # WHY
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: backend-api
  namespace: production

A web backend with cluster-admin. In production. Because someone copied a tutorial from 2019 and never revisited it.
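
The fix is usually a namespace-scoped Role with only the verbs the workload actually uses. A minimal sketch (the resources and verbs here are assumptions; match them to what the backend really calls):

# Hypothetical least-privilege replacement for the binding above
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: backend-api-role
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: backend-api-binding
  namespace: production
roleRef:
  kind: Role
  name: backend-api-role
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: backend-api
    namespace: production

And if the pod never talks to the API server at all, set automountServiceAccountToken: false so there's no token to steal in the first place.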

2. Network Policies? What Network Policies?

Out of eleven clusters:

  • 3 had network policies
  • 2 of those had policies that were effectively “allow all”
  • 1 had actual restrictive policies

The other 8? Flat network. Every pod can talk to every other pod. Database pods accessible from frontend pods. Monitoring accessible from anywhere.

“We’re using a service mesh for security.”

Your service mesh has mTLS between pods. Great. That doesn’t stop a compromised pod from connecting to your database. It just means the malicious connection is encrypted.
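
For reference, an “effectively allow all” policy tends to look something like this illustrative sketch: the object exists, so the compliance box gets ticked, but an empty ingress rule admits traffic from everywhere.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
  namespace: production
spec:
  podSelector: {}      # selects every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - {}               # empty rule = allow ingress from all sources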

3. Secrets in Environment Variables

env:
  - name: DATABASE_PASSWORD
    value: "SuperSecret123!"
  - name: API_KEY
    value: "sk-live-..."

Found in seven out of eleven assessments. Not Kubernetes secrets. Not external secret management. Just… hardcoded in deployment manifests.

“But the manifests are in a private repo.”

Cool. Your developers can see production database passwords. Your CI system has them. Anyone who compromises a pod can dump the environment.
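
At minimum, move the value into a Kubernetes Secret and reference it, so the manifest in the repo no longer carries the credential itself. A sketch (the Secret name and key are made up):

env:
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: backend-db-credentials   # hypothetical Secret
        key: password

Not perfect, since the value still lives in etcd unless encryption at rest is enabled, but it keeps credentials out of git history and out of plain pod specs.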

4. No Pod Security Standards

Since Pod Security Policies were removed in Kubernetes 1.25, we have Pod Security Standards, enforced by the built-in Pod Security Admission controller. Most clusters I see are running in “privileged” mode or have no enforcement at all.

Which means pods can:

  • Run as root
  • Mount the host filesystem
  • Use host networking
  • Disable seccomp
  • Run privileged containers

That container escape PoC you saw on Twitter? It probably works against these clusters.
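
Turning enforcement on is just namespace labels that Pod Security Admission reads. A minimal sketch that enforces the restricted profile on one namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted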

5. Exposed Dashboards

Kubernetes Dashboard. Grafana. Prometheus. ArgoCD.

“It’s internal only.”

Your “internal only” means:

  • Accessible from any pod in the cluster
  • Often has weak/default authentication
  • Frequently exposed through an ingress with no auth
  • Sometimes has admin tokens embedded

Found three clusters where I could get cluster-admin through exposed dashboards.

Why This Keeps Happening

After enough assessments, I’ve started asking teams how they got here. Some patterns:

“We followed the quickstart.”

Kubernetes quickstarts optimize for getting something running, not security. The secure version requires three times as many YAML files.

“Security is next quarter.”

The cluster launched two years ago. Security is still “next quarter.”

“We have [fancy tool], so we’re covered.”

They have a CSPM. It’s generating 3,000 findings. Nobody’s looking at them.

“Our managed Kubernetes is secure by default.”

EKS/GKE/AKS secure some things by default. They don’t secure your workload configurations. That’s still on you.

“We don’t have Kubernetes expertise.”

Most honest answer. Teams get handed a K8s cluster and told to deploy to it. They don’t have a background in K8s security. Nobody trained them.

The Tooling Problem

There are dozens of Kubernetes security tools. Scanners, admission controllers, runtime protection, compliance checkers.

The problem isn’t lack of tools. It’s:

  1. Too many findings, no prioritization. Run any K8s scanner against a real cluster, get 500+ findings. Which ones matter? Unclear.

  2. Tools catch issues but don’t prevent them. Scanning happens after deployment. By then, the insecure config is running.

  3. Admission controllers are opt-in. Teams have to choose to enforce security. Guess what they choose.

  4. No baseline. What does “secure enough” look like? There’s no clear standard most teams can point to.

What Actually Works

From the few well-secured clusters I’ve seen:

1. Enforcement, not monitoring.

Don’t just detect bad configs. Reject them. Admission controllers that block:

  • Privileged containers
  • Host path mounts
  • Missing resource limits
  • Images from unapproved registries

Yes, this breaks deployments. That’s the point.
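
What that looks like depends on the controller; the sketch below assumes Kyverno (Gatekeeper or the built-in ValidatingAdmissionPolicy work too) and blocks just one item from that list, privileged containers, in enforce mode so the deployment is rejected rather than logged:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: Enforce   # reject, don't just audit
  rules:
    - name: privileged-containers
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              - =(securityContext):
                  =(privileged): "false"

This is trimmed for readability; a production version should also cover initContainers and ephemeral containers.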

2. Network policies from day one.

Default deny. Explicitly allow required communication. Add policies as part of the deployment process, not as an afterthought.
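
Default deny is a few lines of YAML per namespace; everything after that is an explicit allow. A minimal sketch (the app labels and port are hypothetical):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}      # applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
---
# Then allow only what the app actually needs, e.g. frontend -> backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080

One caveat: these only bite if your CNI actually enforces NetworkPolicy.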

3. External secrets management.

HashiCorp Vault, AWS Secrets Manager, whatever. Secrets don’t go in manifests. Pods fetch them at runtime.
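
With the External Secrets Operator, for example, the pod still consumes an ordinary Kubernetes Secret, but the value is synced from the external store and never lives in git. A rough sketch, assuming AWS Secrets Manager behind a ClusterSecretStore called aws-secrets-manager (both names hypothetical):

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: backend-db-credentials
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: aws-secrets-manager        # hypothetical store
  target:
    name: backend-db-credentials     # the Secret that gets created in-cluster
  data:
    - secretKey: DATABASE_PASSWORD
      remoteRef:
        key: prod/backend/db-password   # hypothetical path in the store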

4. Regular access reviews.

Who has cluster-admin? Why? When was it last reviewed?

One team I worked with does monthly RBAC audits. They’ve never had the overprivileged service account problem.

5. Actually using the security features.

Pod Security Standards. Seccomp profiles. AppArmor. Network policies. These exist. They work. Most teams don’t use them.
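
Most of it is a handful of lines in the pod spec. A sketch of a workload that passes the restricted Pod Security Standard (the image is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: registry.example.com/app:1.4.2   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
      resources:
        limits:
          cpu: "500m"
          memory: 256Mi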

The Assessment Playbook

When I hit a K8s cluster now, my first moves:

  1. Check for exposed services (dashboards, APIs)
  2. Look at RBAC (especially service accounts)
  3. Check network policies (usually none)
  4. Review pod security context
  5. Hunt for secrets in configs

This finds issues in the first hour. Every time.

If you’re running Kubernetes:

  • Run kubectl auth can-i --list from a pod. Be horrified.
  • Check if any network policies exist: kubectl get networkpolicies -A
  • Look for secrets in environment variables
  • Test if you can run privileged pods (see the sketch below)

You probably won’t like what you find.
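
For that last check, the test is just trying to schedule something hostile and seeing whether anything says no. A throwaway sketch (delete it afterwards):

# kubectl apply -f priv-test.yaml -- if this is admitted, nothing is enforcing pod security
apiVersion: v1
kind: Pod
metadata:
  name: priv-test
spec:
  containers:
    - name: priv-test
      image: busybox:1.36
      command: ["sleep", "3600"]
      securityContext:
        privileged: true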

Looking Forward

Kubernetes is mature. The security features exist. The problem is adoption and usability.

I’d love to see:

  • Secure defaults that require opting OUT of security
  • Better UX for security features
  • Admission controllers that are on by default
  • Fewer “deploy now, secure later” tutorials

Until then, I’ll keep finding cluster-admin service accounts in production.


The Kubernetes security model is actually good. The problem is almost nobody uses it.
