Tuesday, March 26, 2019

How to set up an Istio ingress gateway for an application so it can be accessed from outside the network


To see the current ingress gateway service with its IPs and ports,

# kubectl get svc istio-ingressgateway -n istio-system
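
If your cluster exposes the ingress gateway through a LoadBalancer, a quick way to capture the external IP and HTTP port into variables is a jsonpath query (a sketch; the port name http2 is what default Istio installations use for port 80, so adjust it if yours differs),

# export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')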

Below is the network traffic flow for the application via istio-system,

Client/Browser -> http://<Istio ingressgateway External IP>:<gateway port>/<application URL> -> Gateway (Istio) -> VirtualService (Istio) -> Service (k8s) -> Deployment (Pods)

First we need to apply our deployment,
Below is a basic deployment.yaml file content,

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
        version: v1
    spec:
      containers:
      - name: test
        image: <image location>test:latest   
        ports:
          - name: test
            containerPort: 80 # this is the application port exposed by the pod

Use below command to apply the deployment,

#  kubectl.exe apply -f <your deployment file>.yml
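
To confirm the deployment created its pod, you can filter by the app=test label from the manifest above (an optional quick check),

# kubectl get pods -l app=test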

Now you need to apply service.yaml to create a ClusterIP service,

apiVersion: v1
kind: Service
metadata:
  name: test
  labels:
    app: test
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
  selector:
    app: test

Use below command to apply the service,

#  kubectl.exe apply -f <your service file>.yml
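
The service should now have a ClusterIP assigned; you can verify it (an optional check) with,

# kubectl get svc test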

Now you need to create a VirtualService to route traffic to the service created above,

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test
spec:
  hosts:
  - "*"
  gateways:
  - test-gateway # this is the gateway from which traffic is received
  http:
  - match:
    - uri:
        exact: /test-service/getall
    - uri:
        exact: /login
    - uri:
        exact: /logout
    route:
    - destination:
        host: test
        port:
          number: 80

Use below command to apply the virtual service,

#  kubectl.exe apply -f <your virtual service file>.yml
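
If the application exposes many paths, listing each one as an exact match gets tedious; a prefix match can cover a whole path tree instead. Below is a sketch of an alternative http match block (not part of the sample above), assuming the same test service,

  http:
  - match:
    - uri:
        prefix: /test-service
    route:
    - destination:
        host: test
        port:
          number: 80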

Finally you need to create a Gateway to receive traffic from the outside world and forward it to the VirtualService,

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-gateway
spec:
  selector:
    istio: ingressgateway # this is the default selector
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

That’s it. Now you have an ingress traffic path to your application.
Use below commands to check created resources,

#  kubectl.exe get deployments
#  kubectl.exe get services
#  kubectl.exe get virtualservices
#  kubectl.exe get gateways

Now you can access the created application using the istio-ingressgateway external IP,

http://<Istio ingressgateway External IP>:<gateway port>/<application URL>

Ex. According to the above sample deployment,

http://<Istio ingressgateway External IP>:80/test-service/getall
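
If you captured the external IP and port into INGRESS_HOST and INGRESS_PORT as shown earlier, the same request can be made from the command line (assuming curl is available),

# curl http://$INGRESS_HOST:$INGRESS_PORT/test-service/getall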

Install Maven (Apache Maven) in Linux


Check whether Maven is installed or not,

# mvn -version

Run below command to set repository,


Run below command to install Apache Maven,

# sudo yum install apache-maven
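
Once the installation completes, running the version check again should print the installed Maven and Java versions,

# mvn -version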

That’s it.

Git command to reset local changes to match the remote repository


Run below command

$ git reset --hard origin/<branch name>
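
Note: if your local copy of the remote refs might be out of date, running a fetch beforehand makes sure origin/<branch name> points at the latest remote commit,

$ git fetch origin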

Check the git status using below command,

$ git status

How to access an external service port or external database from an Istio-enabled Kubernetes cluster


If you are using the Istio service mesh, you will not be able to access external services (egress traffic) by default.

If you check the container logs, you can see that there is a communications link failure (your error message will differ from the one below if you are using a database other than MySQL),

$ kubectl logs -f <pod name> -c <container name>
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

And you can see that the pod is not running correctly.

To access the external MySQL service (or any other external service) you need to create a ServiceEntry in Istio,

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mysql
spec:
  hosts:
  - <hostname of the service>
  addresses:
  - <IP address of the service>
  ports:
  - name: tcp
    number: <external port number>
    protocol: tcp
  location: MESH_EXTERNAL
EOF

After applying the above YAML, you can delete the existing pods and check whether the newly created pods connect to the database successfully,
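
A minimal way to do that, assuming the pods are managed by a Deployment (which recreates them automatically), is to delete the old pods and follow the logs of the replacements,

$ kubectl delete pod <pod name>
$ kubectl logs -f <new pod name> -c <container name>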





Reference: https://istio.io/blog/2018/egress-tcp/

No such file or directory @ rb_sysopen - vagrant-proxyconf-docker-config.json (Errno::ENOENT)



Problem

The guest additions on this VM do not match the installed version of
    default: VirtualBox! In most cases this is fine, but in rare cases it can
    default: prevent things such as shared folders from working properly. If you see
    default: shared folder errors, please make sure the guest additions within the
    default: virtual machine match the version of VirtualBox you have installed on
    default: your host and reload your VM.
    default:
    default: Guest Additions Version: 6.0.4 r128413
    default: VirtualBox Version: 5.2
==> default: Configuring proxy for Docker...
==> default: Running cleanup tasks for 'reload' provisioner...
==> default: Forcing shutdown of VM...
==> default: Destroying VM and associated drives...
/.vagrant.d/gems/2.4.4/gems/vagrant-proxyconf-2.0.0/lib/vagrant-proxyconf/action/configure_docker_proxy.rb:50:in `write': No such file or directory @ rb_sysopen - /tmp/vagrant-proxyconf-docker-config.json (Errno::ENOENT)
        from /.vagrant.d/gems/2.4.4/gems/vagrant-proxyconf-2.0.0/lib/vagrant-proxyconf/action/configure_docker_proxy.rb:50:in `block in docker_client_config_path'





Solution



First check and upgrade the VirtualBox version, then install the vagrant-vbguest plugin to keep the Guest Additions in sync:

https://github.com/dotless-de/vagrant-vbguest




$ vagrant plugin install vagrant-vbguest
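
With the plugin installed, bringing the VM back up lets vagrant-vbguest rebuild the Guest Additions to match the host VirtualBox version (this is the plugin's default behaviour; skip if you have disabled auto-update),

$ vagrant reload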


How to view Kubernetes container/pod logs in an Istio-installed k8s cluster

Error

$ kubectl logs -f <pod name>

Error from server (BadRequest): a container name must be specified for pod <pod name>, choose one of: [<pod name> istio-proxy] or one of the init containers: [istio-init]

Solution

$ kubectl logs -f <pod name> -c <container name>
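
If you are not sure which container names exist in the pod, you can list them from the pod spec instead of guessing (a small jsonpath query),

$ kubectl get pod <pod name> -o jsonpath='{.spec.containers[*].name}'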


*** Most of the time the container name is the same as the pod name without the hash suffix.