nodes/proxy Was Never a Monitoring Permission
Kubernetes security discussions often drift toward theater.
People get animated about supply chain headlines, image scanners, or whichever CNCF logo is fashionable this week. Meanwhile, some of the most meaningful security improvements happen in the boring corners of the control plane, where trust boundaries get a little less embarrassing.
That is why I think the Kubernetes v1.36 change for fine-grained kubelet API authorization matters more than it might seem at first glance.
My take is simple:
if your monitoring stack needs a permission that can also execute commands in any container on a node, that was never really a monitoring permission.
For years, too many clusters quietly lived with that compromise. Now Kubernetes finally has a better default direction.
The old deal was convenient and conceptually bad
The kubelet exposes a sensitive HTTPS API. It is not just a source of harmless health checks. Depending on the path, it can expose metrics, pod listings, logs, runtime state, and command execution inside running containers.
The ugly part was how authorization mapped onto that surface.
Historically, once webhook authorization was enabled, many kubelet API paths effectively collapsed into the same coarse-grained RBAC bucket: nodes/proxy. That meant a tool that only needed to scrape metrics or collect status often ended up with permission that was dramatically broader than its real job.
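For context, the old mapping from kubelet API paths to RBAC subresources looked roughly like this (abridged from the kubelet authorization docs; a sketch, not an exhaustive list):
# Abridged pre-v1.36 mapping of kubelet API paths to RBAC subresources
/stats/*: nodes/stats
/metrics/*: nodes/metrics
/logs/*: nodes/log
# everything else, including /pods, /exec, /run, and /attach,
# collapsed into one coarse bucket:
all other paths: nodes/proxy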
Observability agents needed access, charts shipped working defaults, and the compromise became normal. But least privilege was not really happening. We were just pretending the blast radius was acceptable because the operational path of least resistance had already won.
The problem was bigger than “a little too much RBAC”
What makes this worth paying attention to now is that the old model was not merely inelegant. It was dangerous in a very specific way.
The Kubernetes v1.36 announcement explicitly calls out something many teams should find uncomfortable: get access on nodes/proxy alone had become risky enough to enable command execution on reachable nodes, because the kubelet’s /exec path accepts WebSocket upgrades, and a WebSocket handshake arrives as a plain GET request.
That should reset how people talk about this permission. This was not a story about an administrator intentionally handing out broad write access. It was a story about a supposedly read-oriented permission ending up adjacent to a much more powerful execution surface because the authorization model was too coarse.
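To make the coarseness concrete: under the old mapping, the kubelet’s webhook authorizer would have asked the API server roughly the following for a WebSocket /exec upgrade (a sketch; the subject and node name are hypothetical):
# The SubjectAccessReview a kubelet sends for a WebSocket /exec upgrade
# under the old mapping. It is indistinguishable from a harmless read of
# /pods, which landed in the same bucket. Names below are hypothetical.
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:serviceaccount:monitoring:metrics-agent
  resourceAttributes:
    verb: get            # a WebSocket handshake arrives as a plain GET
    resource: nodes
    subresource: proxy   # same bucket as ordinary status reads
    name: worker-1       # hypothetical node name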
Once you see it that way, the lesson is obvious:
broad infrastructure permissions do not stay “just for monitoring” because the people granting them have good intentions. They stay broad because the system has not yet given operators a better way to be precise.
What changed in Kubernetes v1.36
With KubeletFineGrainedAuthz now GA in v1.36, the kubelet can authorize several common API paths against more specific subresources before falling back to nodes/proxy for backward compatibility.
In practical terms, Kubernetes is finally separating “this workload needs kubelet metrics” from “this workload gets a node-level skeleton key.”
The old pattern looked like this:
# Old approach: broad enough to be uncomfortable
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-agent
rules:
- apiGroups: [""]
  resources: ["nodes/proxy"]
  verbs: ["get"]
The new model can look more like this:
# New approach: much closer to the actual need
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-agent
rules:
- apiGroups: [""]
  resources: ["nodes/metrics", "nodes/stats"]
  verbs: ["get"]
That is not glamorous. It is also exactly the kind of improvement mature platforms need. A lot of Kubernetes evolution used to be about exposing capability. Now the more interesting work is about narrowing capability without breaking operability.
Observability has been carrying too much implicit trust
I think observability stacks are one of the most under-questioned trust concentrations in modern infrastructure.
They run everywhere. They often hold powerful credentials. They talk to sensitive system endpoints. And because they are framed as operational plumbing rather than product logic, teams tend to treat them as inherently legitimate.
That is a mistake.
The more ubiquitous a workload is, the more dangerous over-permissioning becomes. A compromised DaemonSet with broad kubelet access is not just “one more pod issue.” It is a cluster-wide incident multiplier.
This is why least privilege for observability tooling is not a cleanup task for later. It is one of the highest-leverage pieces of platform hygiene available.
And honestly, this is where Kubernetes has sometimes deserved criticism. For all the sophistication in the ecosystem, a lot of real-world clusters still depended on RBAC shortcuts that were easy to justify and hard to defend.
The encouraging part is that Kubernetes is now making the better path easier, and that matters, because safer architecture wins only when it is both more correct and operationally realistic.
Backward compatibility is doing the political work here
One reason I expect this feature to matter in practice is that the migration path is sane.
The kubelet first checks the new, narrower subresources and can still fall back to nodes/proxy where needed. That means teams do not have to choose between ideal least-privilege policy and a working cluster during upgrade week.
A lot of security improvements fail socially, not technically. They are correct in principle but painful in rollout, so they remain “important roadmap items” forever.
Kubernetes got this one mostly right. It gives the ecosystem a way to migrate charts, agents, and internal policies without detonating every existing deployment pattern on day one.
The next phase should be ecosystem pressure. Vendors should stop shipping nodes/proxy where nodes/metrics, nodes/stats, nodes/pods, or other narrower subresources are enough. Platform teams should start flagging or rejecting new RBAC that asks for the broader permission without a strong reason.
Once the platform supports precision, imprecision becomes a choice.
The bigger pattern
I do not see this as an isolated improvement. It fits a broader trend in recent Kubernetes evolution: user namespaces reaching GA, fine-grained kubelet authorization reaching GA, and incremental moves away from broad trust assumptions baked into early cloud-native operations.
The cloud-native world spent years proving it could automate everything. Now it has to prove it can automate things without quietly turning support tooling into privileged attack paths. That is a more adult phase of the ecosystem.
And it is one reason I remain bullish on Kubernetes despite all the justified frustration people have with it. When the project is at its best, it is slowly replacing ugly operational compromises with better primitives.
What platform teams should do now
If you run Kubernetes seriously, I think this feature should trigger a fairly practical review:
- audit every ClusterRole and ClusterRoleBinding that grants nodes/proxy
- identify which workloads only need metrics, stats, pod data, health, or logs
- update internal Helm charts and baseline manifests to use narrower kubelet subresources
- review third-party observability agents instead of assuming their defaults are acceptable
- add policy checks so new broad grants require explicit justification (a sketch follows below)
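For that last item, a policy engine can do the flagging automatically. Here is a minimal sketch using Kyverno in audit mode, assuming Kyverno is installed in the cluster; the policy name and message are illustrative:
# Kyverno-style sketch: flag any Role or ClusterRole that grants
# nodes/proxy. Audit mode only reports; it blocks nothing.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: flag-nodes-proxy-grants
spec:
  validationFailureAction: Audit
  background: true
  rules:
  - name: require-justification-for-nodes-proxy
    match:
      any:
      - resources:
          kinds: ["Role", "ClusterRole"]
    validate:
      message: >-
        Grants of nodes/proxy need explicit justification; prefer
        narrower kubelet subresources such as nodes/metrics.
      deny:
        conditions:
          any:
          - key: "nodes/proxy"
            operator: AnyIn
            value: "{{ request.object.rules[].resources[] }}"
Audit mode keeps the rollout non-disruptive: new broad grants show up in policy reports without blocking anyone mid-migration.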
This is not glamorous work. Neither is incident response. Choose your pain.
My take
nodes/proxy was always too broad for many of the jobs it got used for. We just lived with it because the ecosystem did not make precision easy enough yet.
Kubernetes v1.36 changes that in a meaningful way. Not by inventing a flashy new security product, but by doing something more valuable: making least privilege more specific at an important system boundary.
That is the kind of infrastructure progress I trust.
Because once your monitoring permissions stop doubling as a path toward node-level command execution, your cluster is not just more secure on paper. It is less dependent on wishful thinking.
And in platform engineering, that is usually what maturity looks like.