Kubernetes v1.36 is the first Kubernetes release of 2026, scheduled for April 22, 2026. This release packs 80 tracked enhancements, including 18 graduating to stable, 18 graduating to beta, and 26 new alpha features.
In this Kubernetes v1.36 sneak peek, we cover everything you need to know before the official release: the biggest stable features, what's moving to beta, and new alpha experiments.
Kubernetes 1.36: Stable (GA) Features
1. User Namespaces Support in Pods
Feature Group: SIG Node | KEP: #127 | Stage: Alpha v1.25 -> Stable v1.36
This one started in alpha back in Kubernetes v1.25 (August 2022). Nearly four years later, it's hitting GA in v1.36, and it's a big deal for anyone running multi-tenant or security-sensitive workloads.
User namespaces give each pod its own isolated user ID namespace. A process that appears as root (UID 0) inside a container is actually mapped to an unprivileged user on the host. So even if someone escapes the container sandbox, they have almost no power on the underlying node.
Before v1.36, running truly rootless containers in Kubernetes required third-party tooling (like gVisor or Kata Containers) or accepting weaker isolation guarantees.
With v1.36, it's native, stable, and production-ready. You can enable it with hostUsers: false in your pod spec.
spec:
  hostUsers: false
  containers:
  - name: app
    image: my-app
2. Mutating Admission Policies
Feature Group: SIG API Machinery | KEP: #3962 | Stage: Stable in v1.36
When you run a mutating admission webhook today, it means maintaining a TLS-secured HTTP server, managing certs, worrying about webhook latency, and handling failure modes that can block your entire API server if something goes wrong. That's a lot of overhead for common tasks like injecting default labels or setting resource limits.
Mutating Admission Policies bring CEL-based mutation directly into Kubernetes objects – no external server needed. You write your mutation logic as a CEL expression, and the API server applies it inline.
Before v1.36, mutations required a running webhook server, and a crashing webhook meant blocked pod creation cluster-wide.
With v1.36, define mutations as Kubernetes objects, version-control them with your GitOps tooling, and eliminate the webhook server dependency entirely.
This is the same approach that made Validating Admission Policies so popular for validation. Now it works for mutations too.
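As a sketch of what a policy could look like, based on the alpha API introduced in earlier releases (field names may shift slightly at GA), here is a policy that injects a default label on every new pod:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: MutatingAdmissionPolicy
metadata:
  name: add-team-label
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
  failurePolicy: Ignore
  reinvocationPolicy: Never
  mutations:
  - patchType: ApplyConfiguration
    applyConfiguration:
      # CEL expression describing the desired mutation:
      # merge a "team" label into the incoming pod's metadata
      expression: >
        Object{
          metadata: Object.metadata{
            labels: {"team": "platform"}
          }
        }
```

No webhook server, no TLS certs: the policy is just another Kubernetes object you can version-control and roll out like any manifest.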
3. OCI VolumeSource
Feature Group: SIG Storage | KEP: #4639 | Stage: Alpha v1.31 -> Stable v1.36
Getting non-code artifacts into a container used to be awkward. Your options: expand the main image, write an init container to pull things down, or fight with ConfigMap size limits.
OCI VolumeSource lets you reference any OCI image or artifact as a volume. Kubernetes pulls it through the same machinery it uses for container images, then mounts its contents into the pod as a read-only volume.
With v1.36, you can package model weights, config files, datasets, or binary tools as standalone OCI artifacts and distribute them through your normal image registry, completely independent of your application image.
volumes:
- name: model-weights
  image:
    reference: registry.example.com/models/gpt-mini:v2
    pullPolicy: IfNotPresent
4. External Signing of ServiceAccount Tokens
Feature Group: SIG Auth | KEP: #740 | Stage: Beta v1.33 -> Stable v1.36
By default, the kube-apiserver signs ServiceAccount tokens using its own internally managed key. That works for most clusters, but organizations with strict compliance requirements around key custody need more control.
The kube-apiserver can delegate token signing to an external system – a cloud KMS, HSM, or centralized signing service. Short-lived ServiceAccount tokens get signed by keys that live in your existing key management infrastructure, under your existing audit and rotation policies.
With v1.36, this is GA and stable. If your security team has requirements around key management or you're running in a regulated environment (PCI-DSS, FedRAMP, SOC 2), this is the path to Kubernetes-native token signing that fits your compliance framework.
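As a rough sketch of how this wires up, assuming the external-signer flag described in KEP-740 (which points the API server at a Unix domain socket served by your signer process; verify the exact flag name against the v1.36 docs):

```yaml
# kube-apiserver static pod (excerpt)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # assumed flag from KEP-740; replaces
    # --service-account-signing-key-file
    - --service-account-signing-endpoint=/var/run/signer/signer.sock
    - --service-account-issuer=https://kubernetes.default.svc
```

The external signer behind that socket is what talks to your KMS or HSM; the API server never holds the private key.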
5. KubeletPodResources API for DRA
Feature Group: SIG Node | KEP: #3063 | Stage: Stable in v1.36
The kubelet exposes a gRPC API (PodResources) that lets monitoring agents and device plugins query what hardware resources each pod has been allocated. Until recently, resources managed through Dynamic Resource Allocation (DRA) – GPUs, accelerators, custom silicon – weren't visible through this API.
With v1.36, the KubeletPodResourcesDynamicResources and KubeletPodResourcesGet feature gates are locked on by default. Monitoring tools, billing systems, and operators can now reliably query per-pod DRA resource allocation without worrying about API instability.
6. Accelerated Recursive SELinux Label Change
Feature Group: SIG Storage | KEP: #1710 | Stage: Beta v1.27 -> Stable v1.36
If you've run Kubernetes on SELinux-enabled nodes (common in RHEL and Rocky Linux environments), you've probably hit this: pod startup times spike on large volumes because Kubernetes has to relabel every single file recursively before mounting. On a volume with millions of files, that can take minutes.
Instead of relabeling file by file, Kubernetes now uses SELinux mount options to apply the correct label to the entire volume at mount time - one operation instead of thousands.
With v1.36, this hits stable after being in beta since v1.27 (April 2023). If your team has ever complained about slow pod startup on SELinux nodes, upgrade to v1.36, and this problem largely goes away.
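The mount-time relabel applies when a pod declares its SELinux context explicitly. A minimal sketch (the MCS level value is illustrative):

```yaml
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"   # illustrative MCS level
  containers:
  - name: app
    image: my-app
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: big-data-pvc
```

With the context known up front, Kubernetes can mount the volume with the matching SELinux label instead of walking every file.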
Kubernetes 1.36 Beta Features
7. HPA Scale to Zero
Feature Group: SIG Autoscaling | KEP: #2021 | Stage: Alpha v1.16 (2019) -> Beta / Default in v1.36
The HPAScaleToZero feature gate first appeared in Kubernetes v1.16 – back in 2019. Seven years later, it's finally enabled by default in v1.36.
The Horizontal Pod Autoscaler can now scale a deployment all the way down to zero replicas when there's no workload, and back up when demand returns. You still need an external metric source (like KEDA) to tell Kubernetes when to scale back up from zero.
Before v1.36, you had to manually enable this feature gate. Most people forgot it existed or didn't realize it was gated.
With v1.36, it's on by default. Staging environments, test clusters, and batch workloads with predictable idle windows can now scale to zero without any extra configuration.
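A sketch of what this looks like in an autoscaling/v2 manifest; the Deployment and metric names here are placeholders, and the external metric still has to come from an adapter like KEDA:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker        # hypothetical Deployment
  minReplicas: 0              # allowed now that HPAScaleToZero is on
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: queue_depth     # hypothetical external metric
      target:
        type: AverageValue
        averageValue: "5"
```

Setting minReplicas: 0 was previously rejected by the API server unless the feature gate was enabled by hand.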
8. DRA Support for Partitionable Devices
Feature Group: SIG Node | KEP: #4815 | Stage: Alpha v1.35 -> Beta (Default) v1.36
Modern GPUs like the NVIDIA A100 can be divided into smaller virtual instances using MIG (Multi-Instance GPU). Previously, you had to configure those partitions statically ahead of time and treat them as separate fixed devices.
DRA drivers can now advertise devices that support dynamic partitioning. Kubernetes requests a specific partition size at scheduling time rather than pre-slicing hardware upfront.
With v1.36, this feature is in beta and on by default. You can run a small inference workload and a training job on the same physical GPU without pre-partitioning your entire cluster. This makes GPU utilization dramatically more flexible.
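For orientation, a DRA request flows through a ResourceClaim that a pod then references. A sketch under the v1beta1 DRA API (the API group is still evolving, and the device class name below is a hypothetical driver-provided one):

```yaml
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: gpu-slice
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.example.com   # hypothetical DRA driver class
---
apiVersion: v1
kind: Pod
metadata:
  name: inference
spec:
  resourceClaims:
  - name: gpu
    resourceClaimName: gpu-slice
  containers:
  - name: app
    image: my-inference-app
    resources:
      claims:
      - name: gpu
```

With partitionable devices, the driver can carve out a slice matching the request at scheduling time rather than advertising fixed pre-sliced partitions.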
Kubernetes 1.36 Alpha Features
These are alpha features, not production-ready, but worth testing in staging clusters to get a head start.
9. Workload-Aware Preemption
Feature Group: SIG Scheduling | KEP: #137606 | Stage: New Alpha in v1.36
Standard Kubernetes preemption works pod by pod. When a high-priority workload needs resources, the scheduler evicts lower-priority pods one at a time. That approach doesn't work well for tightly coupled jobs like a distributed training run that needs all 8 GPU pods to start together or not at all.
With this feature, groups of related pods are treated as a single entity for preemption decisions. The scheduler can evict an entire lower-priority group at once to make room for a high-priority group, rather than doing it piecemeal and leaving the requesting workload with half the resources it needs.
This pairs directly with the gang scheduling work from v1.35 and is essential for large-scale AI/ML training infrastructure.
10. HPA External Metrics Fallback on Retrieval Failure
Feature Group: SIG Autoscaling | KEP: #5679 | Stage: New Alpha in v1.36
If your HPA uses an external metrics source (Datadog, cloud queue depth, custom API) and that source goes down, the HPA currently freezes: it stops making scaling decisions entirely. For availability-sensitive workloads, that's a real problem.
With this feature, you configure a fallback value that the HPA uses when the external metric is temporarily unavailable. Your autoscaling continues to function during metrics outages instead of stalling.
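The alpha API surface isn't final as of this writing. Purely hypothetically, a per-metric fallback could look something like this (the fallback field and metric name here are invented for illustration, not confirmed field names):

```yaml
metrics:
- type: External
  external:
    metric:
      name: queue_depth         # hypothetical external metric
    target:
      type: AverageValue
      averageValue: "5"
    # hypothetical field: the value the HPA assumes
    # while the metrics source is unreachable
    fallback:
      value: "10"
```

Check the KEP and the v1.36 API reference for the actual field shape before relying on this.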
11. PVC Last-Used Tracking
Feature Group: SIG Storage | KEP: #5541 | Stage: New Alpha in v1.36
A small but genuinely useful observability addition. PVCs now get a status.lastUsedTime field recording when the PVC was last actively mounted by a running pod.
Before this, identifying orphaned PVCs that hadn't been used in months required writing custom tooling. Now it's a single field query. Any team running clusters for more than six months likely has a pile of forgotten PVCs burning storage costs. This makes finding them trivial.
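A sketch of what this could surface on a bound claim (the field name follows the KEP; the exact shape may still change while alpha):

```yaml
# excerpt of a PVC's status subresource
status:
  phase: Bound
  capacity:
    storage: 100Gi
  lastUsedTime: "2026-01-15T09:30:00Z"   # assumed alpha field name
```

A cleanup job can then simply list PVCs and flag any whose lastUsedTime is older than your retention threshold.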
Stay tuned for the official release and a detailed blog post.
FAQ: Kubernetes v1.36
When is Kubernetes v1.36 releasing?
Kubernetes v1.36 is scheduled for release on Wednesday, April 22, 2026.
What is being removed in Kubernetes v1.36?
Two things are removed: the gitRepo volume plugin (security risk) and IPVS mode in kube-proxy (deprecated in v1.35). If you're using either, migrate before upgrading.
How many enhancements are in Kubernetes 1.36?
Kubernetes 1.36 includes approximately 80 tracked enhancements: 18 graduating to stable, 18 graduating to beta, and 26 brand-new alpha features.
Is Kubernetes v1.36 safe to upgrade to?
Stable (GA) features are production-ready. Alpha and beta features should be tested in staging first. Before upgrading, make sure you've migrated off gitRepo volumes and IPVS mode, and audit any services using externalIPs. Check your Ingress-NGINX usage and plan a migration if you rely on it.
What happened to Ingress-NGINX in Kubernetes 1.36?
Ingress-NGINX was retired by the Kubernetes SIG Security committee on March 24, 2026. It's not a core Kubernetes removal, but no further security patches or updates will be published. Migrate to a supported Gateway API-compatible ingress controller.
What is the release name for Kubernetes v1.36?
As of this writing, the official release name hasn't been announced. It will be revealed on release day, April 22, 2026.