preferredDuringSchedulingIgnoredDuringExecution is a field used in Kubernetes to define "soft" (preferred) affinity and anti-affinity rules for scheduling pods. Affinity and anti-affinity rules control which nodes a pod should be scheduled on, or which nodes a pod should avoid.
Despite its length, the name encodes two behaviors. "PreferredDuringScheduling" means the rule is a soft preference: the scheduler tries to place the pod on a node that matches, but if no matching node is available, the pod is scheduled onto a non-matching node anyway. "IgnoredDuringExecution" means the rule is only evaluated at scheduling time: if a node's labels change after the pod is running so that the rule no longer matches, the pod keeps running and is not evicted or moved.
This contrasts with requiredDuringSchedulingIgnoredDuringExecution, which is a hard requirement: the pod will only be scheduled onto a node that matches the rule, and stays Pending otherwise.
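For comparison, a hard rule using requiredDuringSchedulingIgnoredDuringExecution might look like this (the pod name, image, and label values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-required-pod       # illustrative name
spec:
  affinity:
    nodeAffinity:
      # Hard rule: the pod stays Pending unless a matching node exists.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
  containers:
  - name: my-container
    image: my-image
```

Note the different shape: the required variant takes nodeSelectorTerms directly, with no weight, since there is nothing to rank.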
It's worth noting that preferredDuringSchedulingIgnoredDuringExecution can be used under nodeAffinity as well as under podAffinity and podAntiAffinity in the pod spec, though the shape differs slightly: under nodeAffinity it is a list of weighted preference terms, while under podAffinity and podAntiAffinity it is a list of weighted podAffinityTerm entries. The field is optional; when it's not specified, no preference is applied.
It's important to understand the distinction between preferred (soft) and required (hard) rules when configuring affinity and anti-affinity in a Kubernetes cluster, to make sure that your pods are scheduled and executed as expected.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
  containers:
  - name: my-container
    image: my-image
In this example, the pod my-pod has a node affinity rule that prefers to be scheduled on nodes with the label kubernetes.io/e2e-az-name set to either e2e-az1 or e2e-az2, with a weight of 100. Because the rule is listed under preferredDuringSchedulingIgnoredDuringExecution, the scheduler favors matching nodes but will still schedule the pod elsewhere if none match, and once the pod is running, later label changes on its node will not cause it to be moved.
It's worth noting that this is just a sample; in a real-world scenario, the matchExpressions keys and values should match your own node labels, and the weight (a value from 1 to 100) can be adjusted based on how strongly each preference should count.
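When several preference terms are listed, the scheduler sums the weights of all terms a node satisfies and favors the nodes with the highest total. A sketch with two terms (the disktype label is an illustrative assumption, not from the example above):

```yaml
preferredDuringSchedulingIgnoredDuringExecution:
# A node matching both terms scores 80 + 20 = 100;
# a node matching only the first term scores 80.
- weight: 80
  preference:
    matchExpressions:
    - key: kubernetes.io/e2e-az-name
      operator: In
      values:
      - e2e-az1
- weight: 20
  preference:
    matchExpressions:
    - key: disktype          # illustrative label
      operator: In
      values:
      - ssd
```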
You can also use preferredDuringSchedulingIgnoredDuringExecution under podAntiAffinity in the same way, to express a preference for not scheduling a pod onto nodes that already run pods matching a given label selector.
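A minimal podAntiAffinity sketch, assuming an illustrative app=my-app label on the pods to be avoided; note that each weighted entry wraps a podAffinityTerm and that topologyKey is required:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-anti-pod          # illustrative name
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app               # illustrative label
              operator: In
              values:
              - my-app
          # Spread at node granularity: prefer nodes not
          # already running a pod labeled app=my-app.
          topologyKey: kubernetes.io/hostname
  containers:
  - name: my-container
    image: my-image
```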