With default Kubernetes behavior we cannot trigger a rolling update of a Deployment while keeping the same container image version, but there is a workaround for this.
It can be done by patching a dummy environment variable into the Deployment's pod template. Once the patch is applied, the pods go through a rolling restart, pulling the image again:
$ kubectl patch deployment <your deployment name> -p '{"spec":{"template":{"spec":{"containers":[{"name":"<your container name>","env":[{"name":"LAST_ROLLOUT","value":"'$(date +%s)'"}]}]}}}}'
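The quoting in the command above is easy to get wrong, so it can help to build the patch JSON in a shell variable and inspect it before applying. A minimal sketch; the container name my-container is a placeholder for illustration:

```shell
# Build the patch JSON in a variable so it can be inspected before applying.
# "my-container" is a placeholder container name.
NAME="my-container"
TS=$(date +%s)   # current Unix timestamp; a new value forces a rollout
PATCH='{"spec":{"template":{"spec":{"containers":[{"name":"'"$NAME"'","env":[{"name":"LAST_ROLLOUT","value":"'"$TS"'"}]}]}}}}'
echo "$PATCH"
# Then apply it against your deployment:
# kubectl patch deployment <your deployment name> -p "$PATCH"
```

Because the timestamp changes on every run, the pod template always differs from the previous revision, which is what triggers the rollout.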
You can check the Deployment in Kubernetes for the new environment variable:
$ kubectl.exe get deployment <your deployment name> -o yaml
spec:
  template:
    ...
    spec:
      containers:
      - env:
        - name: LAST_ROLLOUT
          value: "1553772196"
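Instead of scanning the full YAML by eye, you can save it to a file and grep for the variable. A small sketch, where a here-doc with the same snippet stands in for the real `kubectl.exe get deployment ... -o yaml` output:

```shell
# deployment.yaml stands in for the output of
#   kubectl.exe get deployment <your deployment name> -o yaml
cat > deployment.yaml <<'EOF'
spec:
  template:
    spec:
      containers:
      - env:
        - name: LAST_ROLLOUT
          value: "1553772196"
EOF
# Print the variable and the line after it (its value), if present
grep -A1 'name: LAST_ROLLOUT' deployment.yaml
```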
You can list the pods to check that new ones are created and the existing ones are terminated:
$ kubectl.exe get pods
NAME                    READY   STATUS            RESTARTS   AGE
test-776885db4c-rdb6j   2/2     Running           0          5h
test-84fdbf5cc8-wv4qv   0/2     PodInitializing   0          4s

$ kubectl.exe get pods
NAME                    READY   STATUS        RESTARTS   AGE
test-776885db4c-rdb6j   0/2     Terminating   0          5h
test-84fdbf5cc8-wv4qv   2/2     Running       1          40s
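The middle segment of each pod name (776885db4c vs. 84fdbf5cc8) is the pod-template-hash, which the Deployment controller computes from the pod template. Patching the env var changes the template, so a new ReplicaSet with a new hash is created. A small sketch extracting that segment (assuming, as here, a deployment name without dashes):

```shell
# Pod names follow <deployment>-<pod-template-hash>-<random suffix>.
# This simple extraction assumes the deployment name contains no dashes.
OLD_POD="test-776885db4c-rdb6j"
NEW_POD="test-84fdbf5cc8-wv4qv"
echo "$OLD_POD" | cut -d- -f2   # prints 776885db4c
echo "$NEW_POD" | cut -d- -f2   # prints 84fdbf5cc8
```

The differing hashes confirm the two pods belong to different ReplicaSets, i.e. the rolling update really happened.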
Now you can check the rollout status using the command below:
$ kubectl.exe rollout status deployment <your deployment name>
deployment "<your deployment name>" successfully rolled out
You can also check the rollout history using the command below:
$ kubectl.exe rollout history deployment <your deployment name>
deployments "<your deployment name>"
REVISION  CHANGE-CAUSE
1         kubectl.exe patch deployment <your deployment name> --patch={"spec":{"template":{"spec":{"containers":[{"name":"<your container name>","env":[{"name":"LAST_ROLLOUT","value":"1553772196"}]}]}}}}
You can use the commands below to undo rollouts.

Undo to the previous deployment:
$ kubectl rollout undo deployment <your deployment name>

Undo to a specific rollout revision:
$ kubectl rollout undo deployment <deployment> --to-revision=<revision number that you get from the rollout history command (1,2,…)>