Monday, November 4, 2019

How to enable HTTPS backends in nginx without adding the backend server's certificates

Let's assume that we have a requirement to proxy from https://nginxserverip:443 to https://backendserverip:443, as mentioned below.


Below is a sample configuration:
server {
    server_name nginxserverip;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host backendserverip;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_pass https://backendserverip/;
    }

    listen [::]:443 ssl;
    listen 443 ssl;
    ssl_certificate xxxxxxx.pem;
    ssl_certificate_key xxxxxxxx.pem;
}
That's it.
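This works because nginx does not verify the upstream server's TLS certificate by default (`proxy_ssl_verify` is `off`), which is why no backend certificate has to be installed on the proxy. If you later want nginx to actually verify the backend, a minimal sketch of the relevant directives looks like this (the CA bundle path is a placeholder, not part of the setup above):

```nginx
location / {
    proxy_pass https://backendserverip/;

    # Verify the backend's certificate against a trusted CA bundle
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/nginx/backend-ca.pem; # placeholder path
    proxy_ssl_server_name on; # send SNI matching the proxied host name
}
```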

Set up Let's Encrypt certbot certificates with an nginx server on Debian/Ubuntu Linux

Prerequisites

1. Your nginx server must be publicly accessible via a public IP. If it is not, you will get an authentication error when creating the certificate via Let's Encrypt.
Install nginx and check accessibility from the public internet.

2. The CN (Common Name, i.e. your domain name) must resolve correctly to your publicly accessible nginx server.

Create an A record in your cloud console (if you are using one).
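Prerequisite 2 can be checked from any shell before running certbot. This sketch just resolves the A record; mysampledomain.com is the example domain used later in this post, not a real site:

```shell
# Resolve the domain's A record; an empty result means DNS has not propagated yet.
DOMAIN=mysampledomain.com   # example domain from this post
RESOLVED=$(getent hosts "$DOMAIN" | awk '{print $1}')
if [ -n "$RESOLVED" ]; then
  echo "A record found: $RESOLVED"
else
  echo "No A record yet for $DOMAIN"
fi
```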

Step 1

First, add the repository required to download certbot:

xxxxxxxxxxxxxxxxx$ sudo add-apt-repository ppa:certbot/certbot
This is the PPA for packages prepared by Debian Let’s Encrypt Team and backported for Ubuntu(s).
Press [ENTER] to continue or ctrl-c to cancel adding it
gpg: no valid OpenPGP data found.


Below are some errors I faced,

Error,

xxxxxxxxxxxxxxxxx$ sudo add-apt-repository ppa:certbot/certbot
sudo: add-apt-repository: command not found

Solution,

xxxxxxxxxxxxxxxxx$ sudo apt-get install software-properties-common
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following additional packages will be installed:
xxxxxxxxxxxxxxxxx$ sudo apt-get update
Hit:1 http://security.debian.org stretch/updates InRelease
Reading package lists… Done

Error,

xxxxxxxxxxxxxxxxx$ sudo add-apt-repository ppa:certbot/certbot
gpg: keyserver receive failed: No dirmngr

Solution,

xxxxxxxxxxxxxxxxx$ sudo apt-get install dirmngr
Reading package lists… Done
Building dependency tree

Step 2

Install the certbot packages:

xxxxxxxxxxxxxxxxx$ sudo apt-get install python-certbot-nginx
Reading package lists… Done
Building dependency tree
Reading state information… Done

Step 3

Let's assume that our SSL certificate domain name is mysampledomain.com. Please note that you must register your domain before continuing with Let's Encrypt.
Go to the /etc/nginx/sites-available folder and create a file named mysampledomain.com.
Add the below content to the file:
server {
    listen 443 ssl;
    server_name mysampledomain.com;
    <remaining code here>
}
Save the file.
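Note that `listen 443 ssl` will make `nginx -t` fail if no `ssl_certificate` is defined yet, and at this point the domain has no certificate. If the elided part of the file does not already configure one, a minimal pre-certbot block can simply listen on port 80 and let certbot add the TLS directives in Step 5 (the upstream address below is hypothetical):

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name mysampledomain.com;

    location / {
        # hypothetical application upstream; adjust to your setup
        proxy_pass http://127.0.0.1:8080;
    }
}
```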

Step 4

Test the configuration:

xxxxxxxxxxxxxxxxx$ sudo nginx -t

Restart the nginx service if the test passes.

xxxxxxxxxxxxxxxxx$ sudo systemctl restart nginx
xxxxxxxxxxxxxxxxx$ sudo systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Mon

Step 5

Now you can create the Let's Encrypt certificate using the certbot command:

xxxxxxxxxxxxxxxxx$ sudo certbot --nginx -d mysampledomain.com
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx
Enter email address (used for urgent renewal and security notices) (Enter 'c' to
cancel):
-------------------------------------------------------------------------------
Please read the Terms of Service at
(A)gree/(C)ancel: A
-------------------------------------------------------------------------------
Would you be willing to share your email address with the Electronic Frontier
-------------------------------------------------------------------------------
(Y)es/(N)o: N
Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Congratulations! You have successfully enabled


It's done now. Check the /etc/nginx/sites-enabled/default file for the SSL 443 configuration created by the Let's Encrypt certbot. You can write your own rules for load balancing in that part.
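For reference, the TLS directives that certbot typically inserts into the server block look like the following; the paths are the standard Let's Encrypt locations for this example domain:

```nginx
server {
    listen 443 ssl;
    server_name mysampledomain.com;

    # Managed by certbot (typical defaults)
    ssl_certificate /etc/letsencrypt/live/mysampledomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mysampledomain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
```

Renewal can be checked at any time with `sudo certbot renew --dry-run`.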

Saturday, October 26, 2019

Create an AWS private EKS cluster using eksctl commands

$ eksctl create cluster \
    --name <cluster name> \
    --region <your region> \
    --vpc-private-subnets=subnet-xxxxxxxx,subnet-xxxxxxxxxxxx \
    --node-private-networking \
    --version 1.xx \
    --nodegroup-name standard-workers \
    --node-type t3.medium \
    --nodes 3 \
    --nodes-min 1 \
    --nodes-max 4 \
    --node-ami auto
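The same cluster can also be described declaratively in an eksctl config file and created with `eksctl create cluster -f cluster.yaml`. A sketch, where the cluster name, region, availability zones, and subnet IDs are placeholders to be replaced with your own:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-private-cluster   # placeholder
  region: us-east-1          # placeholder
vpc:
  subnets:
    private:
      us-east-1a: { id: subnet-xxxxxxxx }       # placeholder AZ/subnet
      us-east-1b: { id: subnet-xxxxxxxxxxxx }   # placeholder AZ/subnet
nodeGroups:
  - name: standard-workers
    instanceType: t3.medium
    desiredCapacity: 3
    minSize: 1
    maxSize: 4
    privateNetworking: true   # equivalent of --node-private-networking
```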


Complete configuration of AWS CLI in Ubuntu for EKS (Kubernetes)

AWS CLI installation is pretty simple on Ubuntu, but there is one concern: the AWS CLI version available in the default repositories does not have the required EKS commands. You have to keep that in mind when you are typing eks commands.

Install AWS CLI

xxxx@xxxxxxx:~$ aws
Command 'aws' not found, but can be installed with:

xxxx@xxxxxxx:~$ sudo snap install aws-cli
or
xxxx@xxxxxxx:~$ sudo apt install awscli
See 'snap info aws-cli' for additional versions.

xxxx@xxxxxxx:~$ aws configure


Install eksctl on Ubuntu

xxxx@xxxxxxx:~$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

xxxx@xxxxxxx:~$ sudo mv /tmp/eksctl /usr/local/bin

xxxx@xxxxxxx:~$ eksctl version
[ℹ] version.Info{BuiltAt:"", GitCommit:"", GitTag:""}

Install aws-iam-authenticator

First download the aws-iam-authenticator binary for your platform (the download link is in the AWS EKS documentation), then:

xxxx@xxxxxxx:~$ chmod +x ./aws-iam-authenticator

xxxx@xxxxxxx:~$ mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$HOME/bin:$PATH

xxxx@xxxxxxx:~$ echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc

xxxx@xxxxxxx:~$ aws-iam-authenticator help


As I mentioned previously, you might get the below error if there is a version mismatch.

Error,

xxxx@xxxxxxx:~$ aws eks update-kubeconfig --name <cluster name>
Invalid choice: 'eks', maybe you meant: * es

Solution,

xxxx@xxxxxxx:~$ sudo apt-get remove -y --purge awscli
xxxx@xxxxxxx:~$ sudo apt-get install -y python3 python3-pip
xxxx@xxxxxxx:~$ sudo pip3 install awscli --upgrade
xxxx@xxxxxxx:~$ aws --version
xxxx@xxxxxxx:~$ aws eks update-kubeconfig --name <cluster name>

That's it. Now you can work with the AWS CLI.

Monday, April 1, 2019

How to set up rolling updates in Kubernetes deployments


Please note that the rolling update exercise below is only applicable if you maintain container image versioning for each deployment. That means you cannot perform a rolling update between two deployments with the same version number.

If you are looking for a way to do this while keeping the same version number, please check this post. 🙂

First we need to add a rolling update strategy to the deployment YAML.

Below is the part to be added to the YAML file:

  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1 # the limit on extra pods created during the rolling update
      maxUnavailable: 1 # the limit on pods that can be unavailable during the rolling update
  minReadySeconds: 25 # the minimum time for your application to start

Below is a sample deployment YAML file with the above strategy:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 25
  template:
    metadata:
      labels:
        app: test
        version: v1
    spec:
      imagePullSecrets:
      - name:
      containers:
      - name: test
        image: test:1.0.0  # a different version number is a must to perform a rolling update
        ports:
          - name:
            containerPort: 8080

Now apply the deployment with the --record flag.

$ kubectl.exe apply -f <your deployment name>.yml --record

Now you can run the below command to start the rolling update. You must mention a different version number here.

$ kubectl set image deployment test test=test:1.0.1 --record

Now you can check the rollout status using the below command:

$ kubectl.exe rollout status deployment <your deployment name>

Also, you can check the rollout history using the below command:

$ kubectl.exe rollout history deployment <your deployment name>

You can use the below commands to undo rollouts.

Undo to the previous deployment:

$ kubectl rollout undo deployment <your deployment name>

Undo to a specific rollout revision:

$ kubectl rollout undo deployment <deployment> --to-revision=<revision number from the rollout history command (1,2,…)>


Kubernetes Rolling update / restart without changing the version of pods

With default Kubernetes we cannot perform a rolling update while keeping the same container image version, but there is a workaround for this.

This can be done by patching an environment variable into the deployment. Once the patch is applied, the pods get a rolling restart, pulling the images again:

$ kubectl patch deployment <your deployment name> -p '{"spec":{"template":{"spec":{"containers":[{"name":"<your container name>","env":[{"name":"LAST_ROLLOUT","value":"'$(date +%s)'"}]}]}}}}'
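The quoting in that one-liner is easy to get wrong, so as a sketch the patch body can be built and inspected separately before passing it to kubectl. The container name "test" follows the sample deployment from the previous post:

```shell
# Build the JSON patch with a unique epoch-seconds value, the same trick the
# kubectl patch above uses to force a rollout without changing the image tag.
STAMP=$(date +%s)
PATCH='{"spec":{"template":{"spec":{"containers":[{"name":"test","env":[{"name":"LAST_ROLLOUT","value":"'"$STAMP"'"}]}]}}}}'
echo "$PATCH"
# then: kubectl patch deployment test -p "$PATCH"
```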

You can inspect your deployment in Kubernetes for the new variable:

$ kubectl.exe get deployment <your deployment name> -o yaml
spec:
  template:
...
    spec:
      containers:
      - env:
        - name: LAST_ROLLOUT
          value: "1553772196"
       

You can list the pods to check that new ones are created and the existing ones terminated:
$ kubectl.exe get pods
NAME                                           READY     STATUS             RESTARTS   AGE
test-776885db4c-rdb6j                  2/2       Running            0          5h
test-84fdbf5cc8-wv4qv                  0/2       PodInitializing    0          4s

$ kubectl.exe get pods
NAME                                           READY     STATUS             RESTARTS   AGE
test-776885db4c-rdb6j                  0/2       Terminating        0          5h
test-84fdbf5cc8-wv4qv                  2/2       Running              1          40s

Now you can check the rollout status using the below command:

$ kubectl.exe rollout status deployment <your deployment name>
deployment "<your deployment name>" successfully rolled out

Also, you can check the rollout history using the below command:
$ kubectl.exe rollout history deployment <your deployment name>
deployments "<your deployment name>"
REVISION  CHANGE-CAUSE
1         kubectl.exe patch deployment <your deployment name> --patch={"spec":{"template":{"spec":{"containers":[{"name":"<your deployment name>","env":[{"name":"LAST_ROLLOUT","value":"1553772196"}]}]}}}}

You can use the below commands to undo rollouts.

Undo to the previous deployment:

$ kubectl rollout undo deployment <your deployment name>

Undo to a specific rollout revision:

$ kubectl rollout undo deployment <deployment> --to-revision=<revision number from the rollout history command (1,2,…)>

References,

https://github.com/kubernetes/kubernetes/issues/27081

Tuesday, March 26, 2019

How to set up an Istio ingress gateway for an application to allow access from outside the network


To see the current gateways and their IPs and ports:

# kubectl get svc istio-ingressgateway -n istio-system

Below is the network traffic plan for the application via istio-system,

Client/Browser → http://<Istio ingressgateway External IP>:<gateway port>/<application URL> → Gateway (Istio) → VirtualService (Istio) → Service (k8s) → Deployment (Pods)

First we need to apply our deployment.
Below is a basic deployment.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
        version: v1
    spec:
      containers:
      - name: test
        image: <image location>test:latest   
        ports:
          - name: test
            containerPort: 80 # this is the application port exposing via pod

Use the below command to apply the deployment:

#  kubectl.exe apply -f <your deployment file>.yml

Now you need to apply a service.yaml to create a service with a ClusterIP:

apiVersion: v1
kind: Service
metadata:
  name: test
  labels:
    app: test
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
  selector:
    app: test
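One detail worth noting about the service above: when `targetPort` is omitted, Kubernetes defaults it to the value of `port`, which works here because the pod also exposes port 80. If the container listened elsewhere, the ports entry would need it explicitly, for example (8080 is a hypothetical container port):

```yaml
  ports:
  - name: http
    protocol: TCP
    port: 80         # port the Service exposes inside the cluster
    targetPort: 8080 # hypothetical container port, only needed if it differs from `port`
```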

Use the below command to apply the service:

#  kubectl.exe apply -f <your service file>.yml

Now you need to create a VirtualService to send the traffic to the service created above:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test
spec:
  hosts:
  - "*"
  gateways:
  - test-gateway # the gateway from which this VirtualService receives traffic
  http:
  - match:
    - uri:
        exact: /test-service/getall
    - uri:
        exact: /login
    - uri:
        exact: /logout
    route:
    - destination:
        host: test
        port:
          number: 80

Use the below command to apply the virtual service:

#  kubectl.exe apply -f <your virtual service file>.yml

Finally, you need to create a Gateway to receive traffic from the outside world and pass it on to the virtual services:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-gateway
spec:
  selector:
    istio: ingressgateway # this is the default selector
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

That's it. Now you have an ingress traffic path to your application cluster.
Use the below commands to check the created resources:

#  kubectl.exe get deployments
#  kubectl.exe get services
#  kubectl.exe get virtualservices
#  kubectl.exe get gateways

Now you can access the created application using the istio-ingressgateway external IP:

http://<Istio ingressgateway External IP>:<gateway port>/<application URL>

For example, according to the above sample deployments:

http://<Istio ingressgateway External IP>:80/test-service/getall