kubectl v1.15 now provides a rollout restart sub-command that allows you to restart Pods in a Deployment - taking into account your surge/unavailability config - and thus have them pick up changes to a referenced ConfigMap, Secret or similar. It’s worth noting that you can use this with clusters older than v1.15, as it’s implemented in the client.

Example usage: kubectl rollout restart deploy/admission-control to restart a specific deployment. Easy as that!
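If you want to watch the restart complete, kubectl's rollout status sub-command works here too (the deployment name below is just the one from the example above):

kubectl rollout restart deploy/admission-control
kubectl rollout status deploy/admission-control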
However, there are certainly cases where we want to:
- Update a ConfigMap
- Have our Deployment reference that specific ConfigMap version (in a version-control & CI friendly way)
- Rollout a new revision of our Deployment
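For reference, a demo-config ConfigMap like the one referenced in the Deployment below could be created imperatively; the key and value here are just placeholders:

kubectl create configmap demo-config --from-literal=GREETING=hello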
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  labels:
    app: config-demo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: config-demo-app
  template:
    metadata:
      labels:
        app: config-demo-app
      annotations:
        # The field we'll use to couple our ConfigMap and Deployment
        configHash: ""
    spec:
      containers:
      - name: config-demo-app
        image: gcr.io/optimum-rock-145719/config-demo-app
        ports:
        - containerPort: 80
        envFrom:
        # The ConfigMap we want to use
        - configMapRef:
            name: demo-config
        # Extra-curricular: We can make the hash of our ConfigMap available at a
        # (e.g.) debug endpoint via a fieldRef to the Pod annotation
        env:
        - name: CONFIG_HASH
          valueFrom:
            fieldRef:
              # The downward API resolves against the Pod, so we reference the
              # Pod's own annotation rather than the Deployment's spec.template
              fieldPath: metadata.annotations['configHash']
The trick is that configHash annotation: stamping the hash of the ConfigMap into the Pod template means that whenever the ConfigMap changes, the template changes too, which effectively gives us an automatic rollout restart.
# Invoke as: hash-deploy-config deployment.yaml configHash myConfigMap
# Writes the sha256 of the named ConfigMap into the given annotation
# (yq v3 "write" syntax; the stamped manifest is printed to stdout).
hash-deploy-config() {
  yq w "$1" "spec.template.metadata.annotations.$2" \
    "$(kubectl get cm/"$3" -oyaml | sha256sum | cut -d' ' -f1)"
}
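One way to wire this into a deploy step is to pipe the stamped manifest straight to kubectl - this sketch assumes yq v3's default write-to-stdout behaviour and uses the file, annotation and ConfigMap names from above:

hash-deploy-config deployment.yaml configHash demo-config | kubectl apply -f -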
We can now re-deploy our Deployment, and because our spec.template changed, Kubernetes will detect it as a change and re-create our Pods.
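As a quick sanity check that the new Pods picked up the hash, one option is to read the annotation back from a running Pod (the label selector is the one used in the Deployment above):

kubectl get pods -l app=config-demo-app \
  -o jsonpath='{.items[0].metadata.annotations.configHash}'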
There are existing projects built on this same principle.