Quick Facts
- Category: Technology
- Published: 2026-05-01 20:40:26
With Kubernetes v1.36, the community has reached another milestone: In-Place Pod-Level Resources Vertical Scaling has officially graduated to Beta. The feature, enabled by default via the InPlacePodLevelResourcesVerticalScaling feature gate, lets you adjust the aggregate resource budget of a running pod without necessarily restarting its containers. It builds on earlier advances: pod-level resources went Beta in v1.34, and in-place pod vertical scaling reached GA in v1.35. Now you can modify the shared resource pool for complex pods (e.g., those with sidecars) on the fly, simplifying cluster management and improving responsiveness during load spikes.
What is In-Place Pod-Level Resources Vertical Scaling?
In-Place Pod-Level Resources Vertical Scaling is a Kubernetes feature that lets you change the total resource budget (spec.resources) for a running pod without tearing it down. Previously, resizing a pod’s aggregate CPU or memory required recreating the pod or manually recalculating per‑container limits. With v1.36, you can patch the pod’s resize subresource, and the kubelet will dynamically update the cgroup limits for containers that inherit their budgets from the pod level. Containers with individual limits remain unaffected. The feature is especially valuable for pods where containers share a common pool—like a main app with a sidecar—enabling you to expand capacity during peak times while avoiding container restarts when possible.
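As a sketch of the distinction above, consider a pod where one container inherits the pod-level budget and another sets its own limit (the names and images here are illustrative, not from the article):

```yaml
# Illustrative manifest: only "web" shares the pod-level budget;
# "logger" keeps its own limit and is unaffected by a pod-level resize.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  resources:                 # pod-level aggregate budget
    limits:
      cpu: "2"
      memory: "4Gi"
  containers:
  - name: web
    image: nginx:1.27        # no container-level limits -> inherits pod budget
  - name: logger
    image: busybox:1.36
    command: ["sh", "-c", "sleep infinity"]
    resources:
      limits:
        cpu: "250m"          # own limit -> not changed by a pod-level resize
```

Patching `spec.resources` on this pod would adjust the pool that `web` draws from, while `logger` keeps its fixed 250m CPU cap.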
Why is pod-level in-place resize important?
The pod-level resource model simplifies management for multi‑container pods. Without it, you’d need to specify individual limits for each container, which becomes tedious with sidecars and auxiliary processes. By setting an aggregate boundary, containers automatically scale their effective limits within that pool. In‑place resizing takes this further: you can adjust the pool size on the fly. For example, if a sidecar‑heavy application faces a sudden traffic spike, you can double the pod’s CPU or memory limit without restarting any container—provided the resize policy allows it. This reduces downtime, improves responsiveness, and lets operators react faster to workload changes. It’s particularly useful for stateful applications where restarts are costly.
How does the resizePolicy affect container restart behavior?
When a pod-level resize occurs, the kubelet treats it as a resize event for every container that inherits its limits from the pod budget. The resizePolicy defined inside each container determines whether a restart is required:
- Non‑disruptive updates: If a container’s resize restart policy for a given resource is set to `NotRequired`, the kubelet attempts to update the cgroup limits dynamically via the Container Runtime Interface (CRI), avoiding a restart.
- Disruptive updates: If it is set to `RestartContainer`, the container is restarted to safely apply the new aggregate boundary.
Currently, the resizePolicy is not supported at the pod level; the kubelet defers to individual container settings. This granular control lets you choose which components can tolerate live changes and which need a clean restart.
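A container opting into live CPU resizes while requiring a restart for memory changes might declare the following (a sketch using the container-level `resizePolicy` fields; the container name and image are placeholders):

```yaml
containers:
- name: main-app
  image: registry.example.com/app:latest   # illustrative image
  resizePolicy:
  - resourceName: cpu
    restartPolicy: NotRequired        # apply CPU changes in place
  - resourceName: memory
    restartPolicy: RestartContainer   # restart to apply memory changes
```

With this policy, a pod-level CPU resize can land without disruption, while a pod-level memory resize would trigger a restart of this container.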
Can you show an example of resizing a shared resource pool?
Consider a pod named shared-pool-app with two containers—main-app and sidecar. Neither container has its own CPU limit, so both share a pod-level budget of 2 CPUs. To demonstrate:
- Initial pod specification: The pod defines `spec.resources.limits.cpu: "2"` and `memory: "4Gi"`. Both containers set `resizePolicy` to `NotRequired` for CPU.
- The resize operation: To double the CPU pool to 4 CPUs, apply a patch using the resize subresource: `kubectl patch pod shared-pool-app --subresource resize --patch '{"spec":{"resources":{"limits":{"cpu":"4"}}}}'`
Because both containers have NotRequired, the kubelet updates cgroup limits in place. No restarts occur, and the extra capacity is immediately available to both containers. This pattern scales for memory as well.
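Assuming a cluster where the feature gate is active, the resize and a quick sanity check might look like this (a sketch; the exact status fields reported can vary by version):

```shell
# Double the pod-level CPU budget in place (the example pod above).
kubectl patch pod shared-pool-app --subresource resize \
  --patch '{"spec":{"resources":{"limits":{"cpu":"4"}}}}'

# Confirm the new pod-level budget...
kubectl get pod shared-pool-app -o jsonpath='{.spec.resources.limits.cpu}'

# ...and that no container was restarted by the resize.
kubectl get pod shared-pool-app \
  -o jsonpath='{.status.containerStatuses[*].restartCount}'
```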
What checks does the Kubelet perform before applying a resize?
Issuing a resize patch is only the first step. The kubelet runs a series of validation and safety checks to ensure node stability:
- Feasibility check: It verifies that the requested resource totals do not exceed node capacity.
- Container policy evaluation: For each container inheriting from the pod budget, the kubelet reads the `resizePolicy` to decide whether a restart is needed.
- Sequence execution: If all containers allow a non‑disruptive update, the kubelet applies cgroup changes via CRI in a controlled order to avoid resource contention.
Only after passing these checks does the kubelet commit the new pod resource budget. This cautious approach prevents accidental overloads and ensures that even large-scale adjustments remain safe in production clusters.
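For container-level in-place resize, Kubernetes reports deferred or infeasible requests through pod conditions; assuming pod-level resizes surface the same way, a request that exceeds node capacity might leave a condition along these lines (the message text is illustrative):

```yaml
status:
  conditions:
  - type: PodResizePending
    status: "True"
    reason: Infeasible
    message: "Node didn't have enough capacity: cpu, requested: 8000, capacity: 4000"
```

Watching for this condition is a reasonable way to detect resize requests that the kubelet has declined or parked rather than applied.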
What does the graduation to Beta mean for users?
Graduating to Beta means the feature is now enabled by default in v1.36 and is considered stable enough for broader testing, though it may still undergo minor changes before General Availability. For adopters, this removes the need to manually enable feature gates—just upgrade your cluster and start using the resize subresource. Operators can experiment with in‑place scaling for sidecar‑heavy workloads, adjust shared pools under load, and reduce pod churn. The Beta phase also encourages more community feedback, which will help finalize the API and behavior. Teams running demanding applications (e.g., AI/ML pipelines, service meshes) will benefit most from the reduced operational overhead and faster response to capacity demands.