Declutter Your Kubernetes Cluster: Automate Resource Removal and Updates with Cleaner
In the realm of cloud-native applications, Kubernetes has emerged as the de facto platform for orchestrating containers and managing distributed systems. However, as Kubernetes environments grow in size and complexity, it becomes increasingly challenging to maintain a clean and efficient cluster. This is where the Kubernetes controller Cleaner comes into play.
Introducing Cleaner: A Powerful Tool for Resource Management
Cleaner is a Kubernetes controller that proactively identifies, removes, or updates stale resources to optimize resource utilization and maintain a clean and organized cluster. It offers a wide range of features that empower developers to effectively manage their Kubernetes environments:
Flexible Scheduling: Cleaner allows you to specify the frequency at which it scans the cluster and identifies stale resources using the Cron syntax, a widely adopted scheduling language. This ensures that resource cleanup tasks are performed at regular intervals, keeping your cluster clean and efficient.
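As a sketch, a spec fragment scheduling a nightly scan might look like this (the instance name is illustrative):

```yaml
apiVersion: apps.projectsveltos.io/v1alpha1
kind: Cleaner
metadata:
  name: nightly-scan # illustrative name
spec:
  # Standard five-field Cron expression: minute hour day-of-month month day-of-week
  schedule: "0 2 * * *" # scan every day at 02:00
  # resourcePolicySet with the selectors and action follows here
```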
Label-based Selection: Cleaner enables you to select resources based on their labels, allowing for precise resource management. You can filter resources based on label keys, operations (equal, different, etc.), and values, ensuring that only the intended resources are targeted for removal or update.
Lua-based Selection Criteria: Cleaner goes beyond label filtering with the introduction of Lua scripts. Lua functions, named evaluate, receive the resource object as obj and allow for complex and dynamic filtering rules. This flexibility empowers you to define custom selection criteria tailored to specific resource management needs.
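For instance, a selector that matches Services of type LoadBalancer could be sketched as follows (the rule itself is illustrative; any Lua expression over `obj` works the same way):

```yaml
resourceSelectors:
- kind: Service
  group: ""
  version: v1
  evaluate: |
    -- obj is the Service being examined; set hs.matching to true to select it
    function evaluate()
      local hs = {}
      hs.matching = (obj.spec.type == "LoadBalancer")
      return hs
    end
```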
Resource Removal and Updates: Cleaner's core functionality lies in its ability to remove or update stale resources. You can choose to delete outdated resources that are no longer required or update existing resources to align with the latest configurations. This ensures that your cluster remains consistent and adheres to current deployment standards.
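For the update case, a sketch assuming a `Transform` action and a Lua `transform` function that returns the modified object (verify the exact field names against the project documentation):

```yaml
resourcePolicySet:
  resourceSelectors:
  - kind: Deployment
    group: "apps"
    version: v1
  action: Transform # assumed action name for in-place updates
  transform: |
    -- illustrative: pin a label on every matching Deployment
    function transform()
      local hs = {}
      obj.metadata.labels["managed-by"] = "cleaner"
      hs.resource = obj
      return hs
    end
```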
DryRun Mode: Cleaner provides a DryRun flag that allows you to preview which resources match its filtering criteria without actually deleting or updating them. This is a valuable feature for safely testing the Cleaner's logic before committing to resource changes.
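To preview a configuration safely, set the flag in the spec before enabling the real action:

```yaml
spec:
  schedule: "0 0 * * *"
  dryRun: true # only report matching resources; do not delete or update them
```

Once you have validated which resources match, set `dryRun` back to `false` to let Cleaner act on them.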
```yaml
# Find all Deployments in the test namespace with labels
# `serving == api` and `environment != production` and delete those
apiVersion: apps.projectsveltos.io/v1alpha1
kind: Cleaner
metadata:
  name: cleaner-sample1
spec:
  schedule: "0 0 * * *" # Executes every day at midnight
  resourcePolicySet:
    resourceSelectors:
    - namespace: test
      kind: Deployment
      group: "apps"
      version: v1
      labelFilters:
      - key: serving
        operation: Equal
        value: api # Identifies Deployments with "serving" label set to "api"
      - key: environment
        operation: Different
        value: production # Identifies Deployments with "environment" label different from "production"
    action: Delete # Deletes matching Deployments
```
```yaml
# Find all Pods in completed state and delete those
apiVersion: apps.projectsveltos.io/v1alpha1
kind: Cleaner
metadata:
  name: completed-pods
spec:
  schedule: "8 * * * *"
  dryRun: false
  resourcePolicySet:
    resourceSelectors:
    - kind: Pod
      group: ""
      version: v1
      evaluate: |
        function evaluate()
          local hs = {}
          hs.matching = false
          if obj.status.conditions ~= nil then
            for _, condition in ipairs(obj.status.conditions) do
              if condition.reason == "PodCompleted" and condition.status == "True" then
                hs.matching = true
              end
            end
          end
          return hs
        end
    action: Delete
```
Cleaner can also examine a group of resources of different types together to decide which ones need to be removed (or updated). In that case an aggregatedSelection Lua function receives all matched resources in a `resources` table:
```yaml
# This Cleaner instance finds any Deployment in the namespace
# foo that has no associated HorizontalPodAutoscaler and instructs
# Cleaner to delete those instances
apiVersion: apps.projectsveltos.io/v1alpha1
kind: Cleaner
metadata:
  name: cleaner-sample3
spec:
  schedule: "0 0 * * *"
  resourcePolicySet:
    resourceSelectors:
    - namespace: foo
      kind: Deployment
      group: "apps"
      version: v1
    - namespace: foo
      kind: HorizontalPodAutoscaler
      group: "autoscaling"
      version: v2beta1
    action: Delete # Delete matching resources
    aggregatedSelection: |
      function evaluate()
        local hs = {}
        hs.valid = true
        hs.message = ""
        local deployments = {}
        local autoscalers = {}
        local deploymentWithNoAutoscaler = {}
        -- Separate Deployments and HorizontalPodAutoscalers from the resources
        for _, resource in ipairs(resources) do
          local kind = resource.kind
          if kind == "Deployment" then
            table.insert(deployments, resource)
          elseif kind == "HorizontalPodAutoscaler" then
            table.insert(autoscalers, resource)
          end
        end
        -- For each Deployment, check whether a HorizontalPodAutoscaler targets it
        for _, deployment in ipairs(deployments) do
          local matchingAutoscaler = false
          for _, autoscaler in ipairs(autoscalers) do
            if autoscaler.spec.scaleTargetRef.name == deployment.metadata.name then
              matchingAutoscaler = true
              break
            end
          end
          if not matchingAutoscaler then
            table.insert(deploymentWithNoAutoscaler, deployment)
          end
        end
        hs.resources = deploymentWithNoAutoscaler
        return hs
      end
```
Contribute to Cleaner Examples
We encourage you to contribute by adding your own Cleaner configurations 💡. This will help the community benefit from your expertise and build a stronger knowledge base of Cleaner use cases.
To add an example, simply create a new file in the example directory with a descriptive name and put your Cleaner configuration within the file. Once you’ve added your example, feel free to submit a pull request to share it with the community.