kubectl v1.15 now provides a `rollout restart` sub-command that allows you to restart Pods in a Deployment - taking into account your surge/unavailability config - and thus have them pick up changes to a referenced ConfigMap, Secret or similar. It's worth noting that you can use this with clusters older than v1.15, as it's implemented in the client.

Example usage:

```
kubectl rollout restart deploy/admission-control
```
to restart a specific deployment. Easy as that!
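If you want to confirm the restart has finished, the standard `kubectl rollout status` sub-command will wait until the new Pods are ready:

```
# Wait for the restarted Pods to become ready
kubectl rollout status deploy/admission-control
```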
That said, there are certainly cases where we want to:
- Update a ConfigMap
- Have our Deployment reference that specific ConfigMap version (in a version-control & CI friendly way)
- Rollout a new revision of our Deployment
The approach is to use a config hash: embed a hash of the ConfigMap's contents in the Deployment manifest (an `apps/v1` Deployment), inside the Pod template, so that updating the ConfigMap changes the hash and automatically triggers a rollout restart of the Deployment.
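A rough sketch of what such a manifest could look like is below; the annotation key `configHash` and the ConfigMap name `myConfigMap` come from the invocation shown later, while the image and label names are made up for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admission-control
spec:
  replicas: 2
  selector:
    matchLabels:
      app: admission-control
  template:
    metadata:
      labels:
        app: admission-control
      annotations:
        # Updated (e.g. by a script or CI step) to a hash of myConfigMap's data;
        # because this lives under spec.template, changing it rolls new Pods.
        configHash: "placeholder"
    spec:
      containers:
        - name: admission-control
          image: example/admission-control:latest   # illustrative image
          envFrom:
            - configMapRef:
                name: myConfigMap
```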
```
Invoke as hash-deploy-config deployment.yaml configHash myConfigMap
```
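The script itself isn't reproduced here; a minimal sketch with the same interface, assuming it hashes the live ConfigMap's data with `kubectl` and rewrites an existing `configHash`-style annotation in the manifest, might look like this:

```
#!/usr/bin/env bash
# Hypothetical sketch of a hash-deploy-config style script.
# Usage: hash-deploy-config <deployment.yaml> <annotation-key> <configmap-name>
set -euo pipefail

manifest="$1"     # e.g. deployment.yaml
annotation="$2"   # e.g. configHash
configmap="$3"    # e.g. myConfigMap

# Hash the ConfigMap's data so any change to its contents produces a new value.
hash="$(kubectl get configmap "$configmap" -o jsonpath='{.data}' | sha256sum | cut -c1-16)"

# Rewrite the annotation's value in the manifest (assumes a line like
# `configHash: "..."` already exists under spec.template.metadata.annotations).
sed -i "s/^\(\s*${annotation}:\).*/\1 \"${hash}\"/" "$manifest"

echo "Set ${annotation}=${hash} in ${manifest}"
```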
We can now re-deploy our Deployment, and because our spec.template
changed, Kubernetes will detect it as a change and re-create our Pods.
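With the manifest updated, re-applying it is enough to roll out the change; for example (assuming the same file and Deployment names as above):

```
kubectl apply -f deployment.yaml
kubectl rollout status deploy/admission-control
```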
Projects built on this principle: