K8s

Create a Kubernetes cluster from scratch (on CentOS 7 / RHEL 7), deploy an application on the cluster, and expose it to the Internet — Complete Example PART 01

Hashith Karunarathne
7 min read · Jan 9, 2019


What I am going to do — Content

  • Install Docker
  • Create the cluster (on VM)
  • Build Dockerized application
  • Deploy the application on the cluster
  • Install Helm
  • Deploy above application using Helm chart
  • Expose the application to internet through an Ingress

Overview

The first reason I started this story is the set of problems I faced while doing the same thing myself; that made me want to share what I have done with Kubernetes beginners. That's it for the introduction. And the most important thing: this is my very first Medium blog. 😅

What is Kubernetes

Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon a decade and a half of experience that Google has with running production workloads at scale, combined with best-of-breed ideas and practices from the community. (Copied from kubernetes.io.)

If you want to find out more about what Kubernetes is, why we use it, what we can do with it, and so on, please google it; there is a lot of good reading out there. (I am very bad at writing explanations.)

What is K8s? If you have read the "What is Kubernetes" excerpt from kubernetes.io above, you already know: K8s is just a short form of Kubernetes (the K, the 8 letters in between, and the s).

Must: read the Kubernetes custom cluster documentation at https://kubernetes.io/docs/setup/scratch/, because you will not find explanations of my steps in this blog; all of them are clearly explained on the official site. Here I only show how to do it.

Here I have used kubeadm to bootstrap the cluster.

0.0 Pre-preparation

In my setup I have 3 VMs running RHEL 7, and all inbound/outbound firewall rules are under my control:

192.168.1.10   k8s-master
192.168.1.11 k8s-worker-node-1
192.168.1.12 k8s-worker-node-2

This is an optional step (rename the hostname on each node):

#on master node
hostnamectl set-hostname 'k8s-master'
exec bash
#on worker node 01
hostnamectl set-hostname 'k8s-worker-node-1'
exec bash
#on worker node 02
hostnamectl set-hostname 'k8s-worker-node-2'
exec bash

0.1 Update iptables — make sure bridged traffic goes through the iptables FORWARD chain so networking is set up correctly

echo '1' | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
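Note that this change does not survive a reboot, and the /proc path only exists once the br_netfilter module is loaded. A small optional sketch to make it persistent (the file name k8s.conf is just my own choice):

# load the bridge netfilter module so the bridge-nf-call-* settings exist
modprobe br_netfilter
# persist the settings across reboots (file name is arbitrary)
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system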

0.2 Turn off swap and disable SELinux

sudo swapoff -a
sudo sed -i '/ swap /d' /etc/fstab
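The commands above only handle swap. For the SELinux part of this step, the usual approach (assuming that switching SELinux to permissive mode is acceptable in your environment) looks like this:

# switch SELinux to permissive mode now and keep it that way after reboots
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config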

1.0 Install docker-ce

1.1 Install necessary packages

yum install -y yum-utils device-mapper-persistent-data lvm2

1.2 Add docker-ce repo

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

1.3 Install docker-ce 17.03.2

I checked with different versions; the version below gave no issues with Kubernetes.

yum install -y --setopt=obsoletes=0 docker-ce-17.03.2.ce

1.4 Start and enable docker service

systemctl enable docker && systemctl start docker

1.5 Check docker installed and running properly

[root@k8s-master ~]# docker version
Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 02:21:36 2017
OS/Arch: linux/amd64

Server:
Version: 17.03.2-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 02:21:36 2017
OS/Arch: linux/amd64
Experimental: false
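Optionally, you can also check which cgroup driver Docker is using and switch it to systemd, which the kubeadm documentation generally recommends. This is only a sketch and assumes /etc/docker/daemon.json does not exist yet; the default cgroupfs driver also works as long as the kubelet is configured to match.

# check the cgroup driver Docker is currently using
docker info | grep -i cgroup

# optionally switch Docker to the systemd cgroup driver
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker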

2.0 Create the Kubernetes cluster

On Each node

Note: If you don't have your own DNS server, update the /etc/hosts file on the master and worker nodes:

192.168.1.10   k8s-master
192.168.1.11 k8s-worker-node-1
192.168.1.12 k8s-worker-node-2

2.1 Add Kubernetes repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

2.2 Install required kubernetes packages

yum install -y kubelet kubeadm kubectl
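If you want to be sure that every node ends up on exactly the same version, you can pin the packages instead. This is a sketch; 1.13.1 is simply the version this walkthrough happened to install:

yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1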

2.3 Enable and start the kubelet service

systemctl enable kubelet && systemctl start kubelet

2.4 Check that the kubelet is installed (it will keep restarting until kubeadm init is run, so the activating (auto-restart) state below is expected)

[root@k8s-master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Tue 2019-01-01 20:19:01 +0530; 9s ago
Docs: https://kubernetes.io/docs/
Process: 1365 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 1365 (code=exited, status=255)

Jan 01 20:19:01 k8s-master systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jan 01 20:19:01 k8s-master systemd[1]: Unit kubelet.service entered failed state.
Jan 01 20:19:01 k8s-master systemd[1]: kubelet.service failed.

2.5 On master node

kubeadm init --pod-network-cidr=10.244.0.0/16

The result should look like this:

[root@k8s-master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.26.30.233 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.26.30.233 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.26.30.233]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.502868 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 6lrhwg.vg3whkvxhx0z2cow
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

The init command will print a join token. You can copy it now, but don't worry, we can retrieve it later.

At this point you can choose to update your own user's .kube config so that you can use kubectl as your own user in the future:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
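To confirm that kubectl can now talk to the new control plane, you can run a quick read-only check:

# both commands only read from the API server
kubectl cluster-info
kubectl get nodes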

Set up the Flannel virtual network (this is why we passed --pod-network-cidr=10.244.0.0/16 to kubeadm init; that is Flannel's default pod network range):

sudo sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
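Before joining the workers, you can watch the Flannel and CoreDNS pods come up; every pod in the kube-system namespace should eventually reach the Running state:

kubectl get pods -n kube-system -o wide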

2.6 Setup the Nodes

Run the following on each node so that networking is set up correctly:

sudo sysctl net.bridge.bridge-nf-call-iptables=1

We need a join token to connect each node to the master. We can retrieve it by running the following command on the master node:

sudo kubeadm token create --print-join-command
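If a valid token already exists, you can also simply list the existing ones instead of creating a new one:

sudo kubeadm token list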

Then run the join command on each of the nodes.

sudo kubeadm join 10.68.17.50:6443 --token adnkja.hwixqd5mb5dhjz1f --discovery-token-ca-cert-hash sha256:8524sds45s4df13as5s43d3as21zxchaikas94

Now our cluster is almost ready.

You can check the nodes by running the following command on the master node:

kubectl get nodes

The output should look like the one below:

[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 15m v1.13.1
k8s-worker-node-1 Ready <none> 3m9s v1.13.1
k8s-worker-node-2 Ready <none> 2m31s v1.13.1
[root@k8s-master ~]#

One additional thing: if you want to update the role of a worker node, use the command below:

kubectl label node node-name node-role.kubernetes.io/worker=worker

Now kubectl get nodes will give an output like the one below:

[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 22m v1.13.1
k8s-worker-node-1 Ready worker 10m v1.13.1
k8s-worker-node-2 Ready worker 9m34s v1.13.1
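For completeness, a label added this way can be removed again by appending a minus sign to the label key:

kubectl label node node-name node-role.kubernetes.io/worker-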

If I wrote the other steps here as well, this would no longer be a nice story, so I decided to write a few more stories to cover the rest of the content. In the next story I'll cover the topics below:

  • Build Dockerized application
  • Deploy the application on the cluster

Feel free to ask any questions about the steps above 😅

Next Story

Useful link(s)
