KUBERNETES - 3 (Installation)

What we will learn: Kubernetes cluster creation.

Steps for installation,
1. Swap should be disabled.
2. Disable SELinux.
3. firewalld should be stopped (if not, rules must be defined to allow k8s traffic).
4. Entries for all master & worker nodes in /etc/hosts.
5. MAC address and product_uuid should be unique on every node.
6. Enable Internet access.
7. yum update.
8. Install docker-ce on all nodes [Docker engine/daemon, responsible for all container management: docker build, docker network, docker volume, docker inspect. Requires docker-ce-cli.]
9. Install docker-ce-cli on all nodes [command-line interface for the Docker engine].
10. Install containerd.io on all nodes [the containerd daemon interfaces with the OS API; it uses kernel features to provide a runtime environment for containers. containerd is the container runtime responsible for running containers.]
11. Install kubelet on all nodes.
12. Install kubeadm on all nodes.
13. Install kubectl on all nodes.
14. Initialize kubeadm on the master node only.
15. Install/deploy a Container Network Interface (CNI) based Pod network on the master node only (in our case Calico). [Required for Pod-to-Pod communication. Cluster DNS (CoreDNS) will not start up before a network is installed; a CNI plugin is required to implement the Kubernetes network model.]
16. Join workers with the master.
17. Validate the setup.
SWAP,
[root@k8s-master ~]# free -t
              total        used        free      shared  buff/cache   available
Mem:        2027684      769760      688576       26976      569348     1080012
Swap:       2097148           0     2097148
Total:      4124832      769760     2785724
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# vi /etc/fstab
UUID=e7b57144-0ce9-47e7-8366-accb226788c0 /boot xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0  ← Comment this line
Validate,
[root@k8s-master ~]# free -g
              total        used        free      shared  buff/cache   available
Mem:              2           0           0           0           0           1
Swap:             0           0           0
SELINUX,
[root@k8s-master ~]# getenforce
Enforcing
[root@k8s-master ~]# setenforce 0
[root@k8s-master ~]# getenforce
Permissive
[root@k8s-master ~]# vi /etc/selinux/config
SELINUX=enforcing  ← Change this enforcing to disabled
[root@k8s-master ~]# grep disabled /etc/selinux/config
# disabled - No SELinux policy is loaded.
SELINUX=disabled
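The same edit can be done non-interactively instead of via vi; a sketch (setenforce only lasts until reboot, the sed makes it permanent):

```shell
# Relax the running SELinux mode immediately, then flip the persistent
# mode in /etc/selinux/config for the next boot.
setenforce 0 2>/dev/null || true
[ -w /etc/selinux/config ] && sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
true
```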
FIREWALL,
[root@k8s-master ~]# systemctl status firewalld.service |egrep "Loaded|Active"
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2023-02-06 21:17:28 IST; 35min ago
[root@k8s-master ~]# systemctl stop firewalld;systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-master ~]# systemctl status firewalld.service |egrep "Loaded|Active"
Active: inactive (dead)
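If stopping firewalld is not an option, the "rules defined to allow k8s" path from step 3 means opening the standard kubeadm ports instead. A sketch for the control-plane node (port numbers follow the upstream Kubernetes docs; 179/tcp is an extra assumption for Calico's BGP; workers need 10250/tcp plus the NodePort range 30000-32767/tcp):

```shell
# Open the control-plane ports permanently instead of disabling firewalld.
if command -v firewall-cmd >/dev/null 2>&1; then
    for p in 6443/tcp 2379-2380/tcp 10250/tcp 10257/tcp 10259/tcp 179/tcp; do
        firewall-cmd --permanent --add-port="$p"
    done
    firewall-cmd --reload
else
    echo "firewall-cmd not found; firewalld is not running here"
fi
```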
[root@k8s-master ~]# cat /etc/hosts |grep k8s
192.168.137.171 k8s-master
192.168.137.172 k8s-worker1
192.168.137.173 k8s-worker2
[root@k8s-master ~]# ifconfig ens33 |egrep "inet|ether" |grep -v inet6
inet 192.168.137.171 netmask 255.255.255.0 broadcast 192.168.137.255
ether 00:0c:29:9a:1e:89 txqueuelen 1000 (Ethernet)
[root@k8s-master ~]# cat /sys/class/dmi/id/product_uuid
F11D4D56-2BD1-E797-AE7A-28C72A9A1E89
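The step 5 check can be scripted: collect every NIC MAC and the product_uuid on each node and compare across the cluster (paths as in the kubeadm install docs):

```shell
# Print MAC addresses and the machine product_uuid; run on every node,
# the values must be unique cluster-wide.
ip link show 2>/dev/null | awk '/link\/ether/ {print $2}'
cat /sys/class/dmi/id/product_uuid 2>/dev/null || true
```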
Below entries should be present accordingly,
[root@k8s-master ~]# grep -i name /etc/resolv.conf
nameserver 192.168.137.2 # Gateway of VMs
nameserver 8.8.8.8 # Google DNS
nameserver 8.8.4.4 # Google DNS
And in the interface config (/etc/sysconfig/network-scripts/ifcfg-ens33):
DNS1="192.168.137.2"
DNS2="8.8.8.8"
DNS3="8.8.4.4"
# yum update -y

Load the required kernel modules,
[root@k8s-master ~]# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
> overlay
> br_netfilter
> EOF
overlay
br_netfilter

Set the required sysctl parameters (these persist across reboots),
[root@k8s-master ~]# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-iptables = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> net.ipv4.ip_forward = 1
> EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
[root@k8s-master ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.conf ...
[root@k8s-master ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
[root@k8s-master ~]#
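Before installing Docker it is worth confirming that the two kernel modules from the modules-load step are actually loaded; a quick check:

```shell
# Both overlay and br_netfilter must appear in lsmod; if not, load them
# with: modprobe overlay; modprobe br_netfilter
lsmod 2>/dev/null | grep -E '^(overlay|br_netfilter)' || echo "modules not loaded yet"
```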
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.nxtgen.com
* extras: mirrors.nxtgen.com
* updates: mirrors.nxtgen.com
Package yum-utils-1.1.31-54.el7_8.noarch already installed and latest version
Package device-mapper-persistent-data-0.8.5-3.el7_9.2.x86_64 already installed and latest version
Package 7:lvm2-2.02.187-6.el7_9.5.x86_64 already installed and latest version
Nothing to do
[root@k8s-master ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror, langpacks
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@k8s-master ~]# yum install docker-ce docker-ce-cli containerd.io
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.nxtgen.com
* extras: mirrors.nxtgen.com
* updates: mirrors.nxtgen.com
Resolving Dependencies
--> Running transaction check
---> Package containerd.io.x86_64 0:1.6.16-3.1.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: containerd.io-1.6.16-3.1.el7.x86_64
---> Package docker-ce.x86_64 3:23.0.1-1.el7 will be installed
--> Processing Dependency: docker-ce-rootless-extras for package: 3:docker-ce-23.0.1-1.el7.x86_64
---> Package docker-ce-cli.x86_64 1:23.0.1-1.el7 will be installed
--> Processing Dependency: docker-buildx-plugin for package: 1:docker-ce-cli-23.0.1-1.el7.x86_64
--> Processing Dependency: docker-compose-plugin for package: 1:docker-ce-cli-23.0.1-1.el7.x86_64
--> Processing Dependency: docker-scan-plugin(x86-64) for package: 1:docker-ce-cli-23.0.1-1.el7.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.119.2-1.911c772.el7_8 will be installed
---> Package docker-buildx-plugin.x86_64 0:0.10.2-1.el7 will be installed
---> Package docker-ce-rootless-extras.x86_64 0:23.0.1-1.el7 will be installed
--> Processing Dependency: fuse-overlayfs >= 0.7 for package: docker-ce-rootless-extras-23.0.1-1.el7.x86_64
--> Processing Dependency: slirp4netns >= 0.4 for package: docker-ce-rootless-extras-23.0.1-1.el7.x86_64
---> Package docker-compose-plugin.x86_64 0:2.16.0-1.el7 will be installed
---> Package docker-scan-plugin.x86_64 0:0.23.0-3.el7 will be installed
--> Running transaction check
---> Package fuse-overlayfs.x86_64 0:0.7.2-6.el7_8 will be installed
--> Processing Dependency: libfuse3.so.3(FUSE_3.2)(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
--> Processing Dependency: libfuse3.so.3(FUSE_3.0)(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
--> Processing Dependency: libfuse3.so.3()(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
---> Package slirp4netns.x86_64 0:0.4.3-4.el7_8 will be installed
--> Running transaction check
---> Package fuse3-libs.x86_64 0:3.6.1-4.el7 will be installed
--> Finished Dependency Resolution
Package Arch Version Repository Size
======================================================================================================================================================
Installing:
containerd.io x86_64 1.6.16-3.1.el7 docker-ce-stable 33 M
docker-ce x86_64 3:23.0.1-1.el7 docker-ce-stable 23 M
docker-ce-cli x86_64 1:23.0.1-1.el7 docker-ce-stable 13 M
Installing for dependencies:
container-selinux noarch 2:2.119.2-1.911c772.el7_8 extras 40 k
docker-buildx-plugin x86_64 0.10.2-1.el7 docker-ce-stable 12 M
docker-ce-rootless-extras x86_64 23.0.1-1.el7 docker-ce-stable 8.8 M
docker-compose-plugin x86_64 2.16.0-1.el7 docker-ce-stable 11 M
docker-scan-plugin x86_64 0.23.0-3.el7 docker-ce-stable 3.8 M
fuse-overlayfs x86_64 0.7.2-6.el7_8 extras 54 k
fuse3-libs x86_64 3.6.1-4.el7 extras 82 k
slirp4netns x86_64 0.4.3-4.el7_8 extras 81 k
======================================================================================================================================================
Install 3 Packages (+8 Dependent packages)
Installed size: 370 M
Is this ok [y/d/N]: y
Downloading packages:
(1/11): container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm | 40 kB 00:00:01
warning: /var/cache/yum/x86_64/7/docker-ce-stable/packages/docker-buildx-plugin-0.10.2-1.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Public key for docker-buildx-plugin-0.10.2-1.el7.x86_64.rpm is not installed
(2/11): docker-buildx-plugin-0.10.2-1.el7.x86_64.rpm | 12 MB 00:00:09
(3/11): containerd.io-1.6.16-3.1.el7.x86_64.rpm | 33 MB 00:00:23
(4/11): docker-ce-23.0.1-1.el7.x86_64.rpm | 23 MB 00:00:16
(5/11): docker-ce-cli-23.0.1-1.el7.x86_64.rpm | 13 MB 00:00:08
(6/11): docker-ce-rootless-extras-23.0.1-1.el7.x86_64.rpm | 8.8 MB 00:00:06
(7/11): docker-scan-plugin-0.23.0-3.el7.x86_64.rpm | 3.8 MB 00:00:03
(8/11): docker-compose-plugin-2.16.0-1.el7.x86_64.rpm | 11 MB 00:00:09
(9/11): fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm | 54 kB 00:00:09
(10/11): slirp4netns-0.4.3-4.el7_8.x86_64.rpm | 81 kB 00:00:10
(11/11): fuse3-libs-3.6.1-4.el7.x86_64.rpm | 82 kB 00:00:10
------------------------------------------------------------------------------------------------------------------------------------------------------
Total 2.4 MB/s | 106 MB 00:00:43
Retrieving key from https://download.docker.com/linux/centos/gpg
Importing GPG key 0x621E9F35:
Userid : "Docker Release (CE rpm) <docker@docker.com>"
Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
From : https://download.docker.com/linux/centos/gpg
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 1/11
setsebool: SELinux is disabled.
Installing : containerd.io-1.6.16-3.1.el7.x86_64 2/11
Installing : docker-buildx-plugin-0.10.2-1.el7.x86_64 3/11
Installing : fuse3-libs-3.6.1-4.el7.x86_64 4/11
Installing : fuse-overlayfs-0.7.2-6.el7_8.x86_64 5/11
Installing : slirp4netns-0.4.3-4.el7_8.x86_64 6/11
Installing : docker-scan-plugin-0.23.0-3.el7.x86_64 7/11
Installing : docker-compose-plugin-2.16.0-1.el7.x86_64 8/11
Installing : 1:docker-ce-cli-23.0.1-1.el7.x86_64 9/11
Installing : docker-ce-rootless-extras-23.0.1-1.el7.x86_64 10/11
Installing : 3:docker-ce-23.0.1-1.el7.x86_64 11/11
Verifying : docker-compose-plugin-2.16.0-1.el7.x86_64 1/11
Verifying : docker-scan-plugin-0.23.0-3.el7.x86_64 2/11
Verifying : fuse-overlayfs-0.7.2-6.el7_8.x86_64 3/11
Verifying : slirp4netns-0.4.3-4.el7_8.x86_64 4/11
Verifying : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 5/11
Verifying : docker-ce-rootless-extras-23.0.1-1.el7.x86_64 6/11
Verifying : 3:docker-ce-23.0.1-1.el7.x86_64 7/11
Verifying : fuse3-libs-3.6.1-4.el7.x86_64 8/11
Verifying : docker-buildx-plugin-0.10.2-1.el7.x86_64 9/11
Verifying : containerd.io-1.6.16-3.1.el7.x86_64 10/11
Verifying : 1:docker-ce-cli-23.0.1-1.el7.x86_64 11/11
Installed:
containerd.io.x86_64 0:1.6.16-3.1.el7 docker-ce.x86_64 3:23.0.1-1.el7 docker-ce-cli.x86_64 1:23.0.1-1.el7

Dependency Installed:
container-selinux.noarch 2:2.119.2-1.911c772.el7_8 docker-buildx-plugin.x86_64 0:0.10.2-1.el7 docker-ce-rootless-extras.x86_64 0:23.0.1-1.el7
docker-compose-plugin.x86_64 0:2.16.0-1.el7 docker-scan-plugin.x86_64 0:0.23.0-3.el7 fuse-overlayfs.x86_64 0:0.7.2-6.el7_8
fuse3-libs.x86_64 0:3.6.1-4.el7 slirp4netns.x86_64 0:0.4.3-4.el7_8

Complete!
[root@k8s-master ~]# systemctl enable containerd;systemctl start containerd
[root@k8s-master ~]# systemctl enable docker;systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
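A quick sanity check for steps 8-10 before moving on to the Kubernetes packages: both services active and the CLI able to talk to the engine.

```shell
# Verify the Docker/containerd installation on each node.
systemctl is-active containerd docker 2>/dev/null || true
docker version 2>/dev/null || echo "docker daemon not reachable (yet)"
```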
[root@k8s-master ~]# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
[root@k8s-master ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks
* base: mirrors.nxtgen.com
* extras: mirrors.nxtgen.com
* updates: mirrors.nxtgen.com
kubernetes/signature | 454 B 00:00:00
Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg
Importing GPG key 0x13EDEF05:
Userid : "Rapture Automatic Signing Key (cloud-rapture-signing-key-2022-03-07-08_01_01.pub)"
Fingerprint: a362 b822 f6de dc65 2817 ea46 b53d c80d 13ed ef05
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
kubernetes/signature | 1.4 kB 00:00:00 !!!
kubernetes/primary | 124 kB 00:00:01
kubernetes 920/920
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.26.1-0 will be installed
--> Processing Dependency: kubernetes-cni >= 0.8.6 for package: kubeadm-1.26.1-0.x86_64
---> Package kubectl.x86_64 0:1.26.1-0 will be installed
---> Package kubelet.x86_64 0:1.26.1-0 will be installed
--> Processing Dependency: socat for package: kubelet-1.26.1-0.x86_64
--> Processing Dependency: conntrack for package: kubelet-1.26.1-0.x86_64
--> Running transaction check
---> Package conntrack-tools.x86_64 0:1.4.4-7.el7 will be installed
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
---> Package cri-tools.x86_64 0:1.26.0-0 will be installed
---> Package kubernetes-cni.x86_64 0:1.2.0-0 will be installed
---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
--> Running transaction check
---> Package libnetfilter_cthelper.x86_64 0:1.0.0-11.el7 will be installed
---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7 will be installed
---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
--> Finished Dependency Resolution
Package Arch Version Repository Size
======================================================================================================================================================
Installing:
kubeadm x86_64 1.26.1-0 kubernetes 10 M
kubectl x86_64 1.26.1-0 kubernetes 11 M
kubelet x86_64 1.26.1-0 kubernetes 22 M
Installing for dependencies:
conntrack-tools x86_64 1.4.4-7.el7 base 187 k
cri-tools x86_64 1.26.0-0 kubernetes 8.6 M
kubernetes-cni x86_64 1.2.0-0 kubernetes 17 M
libnetfilter_cthelper x86_64 1.0.0-11.el7 base 18 k
libnetfilter_cttimeout x86_64 1.0.0-7.el7 base 18 k
libnetfilter_queue x86_64 1.0.2-2.el7_2 base 23 k
socat x86_64 1.7.3.2-2.el7 base 290 k
======================================================================================================================================================
Install 3 Packages (+7 Dependent packages)
Installed size: 296 M
Downloading packages:
(1/10): conntrack-tools-1.4.4-7.el7.x86_64.rpm | 187 kB 00:00:06
warning: /var/cache/yum/x86_64/7/kubernetes/packages/3f5ba2b53701ac9102ea7c7ab2ca6616a8cd5966591a77577585fde1c434ef74-cri-tools-1.26.0-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY
Public key for 3f5ba2b53701ac9102ea7c7ab2ca6616a8cd5966591a77577585fde1c434ef74-cri-tools-1.26.0-0.x86_64.rpm is not installed
(2/10): 3f5ba2b53701ac9102ea7c7ab2ca6616a8cd5966591a77577585fde1c434ef74-cri-tools-1.26.0-0.x86_64.rpm | 8.6 MB 00:00:39
(3/10): 97b4463d78ed8f124e01fdde075b0844e682c4595ecbaadf8cfb919f9e31ab77-kubeadm-1.26.1-0.x86_64.rpm | 10 MB 00:00:57
(4/10): 7c5ee9df7097fe780a8fd2e87541d5c4dba86120a96aec5eb4c9517ee88148ee-kubectl-1.26.1-0.x86_64.rpm | 11 MB 00:01:00
(5/10): libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm | 18 kB 00:00:00
(6/10): libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm | 23 kB 00:00:01
(7/10): libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm | 18 kB 00:00:02
(8/10): socat-1.7.3.2-2.el7.x86_64.rpm | 290 kB 00:00:04
(9/10): 2dcb121663166d78efad52d20fcbdc6f23fe67665d319930905a3e722e05ec30-kubelet-1.26.1-0.x86_64.rpm | 22 MB 00:00:58
(10/10): 0f2a2afd740d476ad77c508847bad1f559afc2425816c1f2ce4432a62dfe0b9d-kubernetes-cni-1.2.0-0.x86_64.rpm | 17 MB 00:00:35
------------------------------------------------------------------------------------------------------------------------------------------------------
Total 519 kB/s | 69 MB 00:02:15
Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg
Importing GPG key 0x13EDEF05:
Userid : "Rapture Automatic Signing Key (cloud-rapture-signing-key-2022-03-07-08_01_01.pub)"
Fingerprint: a362 b822 f6de dc65 2817 ea46 b53d c80d 13ed ef05
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Importing GPG key 0x3E1BA8D5:
Userid : "Google Cloud Packages RPM Signing Key <gc-team@google.com>"
Fingerprint: 3749 e1ba 95a8 6ce0 5454 6ed2 f09c 394c 3e1b a8d5
From : https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : libnetfilter_cthelper-1.0.0-11.el7.x86_64 1/10
Installing : socat-1.7.3.2-2.el7.x86_64 2/10
Installing : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 3/10
Installing : cri-tools-1.26.0-0.x86_64 4/10
Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64 5/10
Installing : conntrack-tools-1.4.4-7.el7.x86_64 6/10
Installing : kubernetes-cni-1.2.0-0.x86_64 7/10
Installing : kubelet-1.26.1-0.x86_64 8/10
Installing : kubectl-1.26.1-0.x86_64 9/10
Installing : kubeadm-1.26.1-0.x86_64 10/10
Verifying : kubectl-1.26.1-0.x86_64 1/10
Verifying : conntrack-tools-1.4.4-7.el7.x86_64 2/10
Verifying : libnetfilter_queue-1.0.2-2.el7_2.x86_64 3/10
Verifying : cri-tools-1.26.0-0.x86_64 4/10
Verifying : kubernetes-cni-1.2.0-0.x86_64 5/10
Verifying : kubelet-1.26.1-0.x86_64 6/10
Verifying : kubeadm-1.26.1-0.x86_64 7/10
Verifying : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 8/10
Verifying : socat-1.7.3.2-2.el7.x86_64 9/10
Verifying : libnetfilter_cthelper-1.0.0-11.el7.x86_64 10/10
Installed:
kubeadm.x86_64 0:1.26.1-0 kubectl.x86_64 0:1.26.1-0 kubelet.x86_64 0:1.26.1-0

Dependency Installed:
conntrack-tools.x86_64 0:1.4.4-7.el7 cri-tools.x86_64 0:1.26.0-0 kubernetes-cni.x86_64 0:1.2.0-0
libnetfilter_cthelper.x86_64 0:1.0.0-11.el7 libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7 libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
socat.x86_64 0:1.7.3.2-2.el7

Complete!
[root@k8s-master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
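Verify steps 11-13 on every node: all three binaries installed and at matching versions (kubelet stays in a restart loop until kubeadm init/join supplies its config, which is expected at this point):

```shell
# Print the installed versions of the three Kubernetes binaries.
kubeadm version -o short 2>/dev/null || true
kubelet --version 2>/dev/null || true
kubectl version --client 2>/dev/null || true
```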
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.137.171]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.137.171 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.137.171 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.505048 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 3dt294.1gj49plkwxml9g69
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.137.171:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:5c59831f4fe4bdd60dbd84ceae8e583f15da3807c9cb7d6ca847ad93f8280803
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
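With admin.conf in place, confirm the API server answers before deploying the CNI (the master will report NotReady until Calico is up, which is normal):

```shell
# Smoke-test kubectl access to the new control plane.
kubectl cluster-info 2>/dev/null || true
kubectl get nodes 2>/dev/null || echo "kubectl cannot reach the cluster from this shell"
```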
Deploy the Calico manifest (step 15); kubectl apply output,
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-57b57c56f-ghttw 0/1 Pending 0 37s
kube-system calico-node-m4kdn 0/1 Init:0/3 0 37s
kube-system coredns-787d4945fb-2ptpr 0/1 Pending 0 14m
kube-system coredns-787d4945fb-v5gvf 0/1 Pending 0 14m
kube-system etcd-k8s-master 1/1 Running 0 14m
kube-system kube-apiserver-k8s-master 1/1 Running 0 14m
kube-system kube-controller-manager-k8s-master 1/1 Running 0 14m
kube-system kube-proxy-ltmr6 1/1 Running 0 14m
kube-system kube-scheduler-k8s-master 1/1 Running 0 14m
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-57b57c56f-ghttw 0/1 ContainerCreating 0 73s
kube-system calico-node-m4kdn 0/1 Init:2/3 0 73s
kube-system coredns-787d4945fb-2ptpr 0/1 ContainerCreating 0 14m
kube-system coredns-787d4945fb-v5gvf 0/1 ContainerCreating 0 14m
kube-system etcd-k8s-master 1/1 Running 0 14m
kube-system kube-apiserver-k8s-master 1/1 Running 0 14m
kube-system kube-controller-manager-k8s-master 1/1 Running 0 14m
kube-system kube-proxy-ltmr6 1/1 Running 0 14m
kube-system kube-scheduler-k8s-master 1/1 Running 0 14m
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-57b57c56f-ghttw 1/1 Running 0 2m33s
kube-system calico-node-m4kdn 1/1 Running 0 2m33s
kube-system coredns-787d4945fb-2ptpr 1/1 Running 0 16m
kube-system coredns-787d4945fb-v5gvf 1/1 Running 0 16m
kube-system etcd-k8s-master 1/1 Running 0 16m
kube-system kube-apiserver-k8s-master 1/1 Running 0 16m
kube-system kube-controller-manager-k8s-master 1/1 Running 0 16m
kube-system kube-proxy-ltmr6 1/1 Running 0 16m
kube-system kube-scheduler-k8s-master 1/1 Running 0 16m
[root@k8s-master ~]#
[root@k8s-worker1 ~]# kubeadm join 192.168.137.171:6443 --token <token> \
> --discovery-token-ca-cert-hash sha256:5c59831f4fe4bdd60dbd84ceae8e583f15da3807c9cb7d6ca847ad93f8280803
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
[root@k8s-worker1 ~]#
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 36m v1.26.1
k8s-worker1 NotReady <none> 75s v1.26.1
k8s-worker2 NotReady <none> 55s v1.26.1
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 36m v1.26.1
k8s-worker1 Ready <none> 101s v1.26.1
k8s-worker2 Ready <none> 81s v1.26.1
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-57b57c56f-5w72p 1/1 Running 0 6m59s
calico-node-6mxlw 1/1 Running 0 7m
calico-node-ht9bm 1/1 Running 0 2m41s
calico-node-p99wc 1/1 Running 0 2m21s
coredns-787d4945fb-wbr8h 1/1 Running 0 37m
coredns-787d4945fb-zfqgb 1/1 Running 0 37m
etcd-k8s-master 1/1 Running 0 37m
kube-apiserver-k8s-master 1/1 Running 0 37m
kube-controller-manager-k8s-master 1/1 Running 0 37m
kube-proxy-2skbk 1/1 Running 0 2m21s
kube-proxy-62cnp 1/1 Running 0 37m
kube-proxy-pdvb6 1/1 Running 0 2m41s
kube-scheduler-k8s-master 1/1 Running 0 37m
[root@k8s-master ~]#
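One practical note for step 16: the bootstrap token printed by kubeadm init expires after 24 hours. To add a worker later, generate a fresh token and a ready-made join command on the master:

```shell
# Print a complete "kubeadm join ..." command with a new token.
kubeadm token create --print-join-command 2>/dev/null \
    || echo "kubeadm not available in this shell"
```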
TROUBLESHOOTING,
A failed kubeadm init with the stock containerd config,
[init] Using Kubernetes version: v1.26.1
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2023-02-06T23:38:33+05:30" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master ~]# rm /etc/containerd/config.toml
rm: remove regular file ‘/etc/containerd/config.toml’? y
[root@k8s-master ~]# systemctl restart containerd
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2023-02-10T22:31:39+05:30" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master ~]# cat > /etc/containerd/config.toml <<EOF
> [plugins."io.containerd.grpc.v1.cri"]
>   systemd_cgroup = true
> EOF
[root@k8s-master ~]# systemctl restart containerd
> --discovery-token-ca-cert-hash sha256:5c59831f4fe4bdd60dbd84ceae8e583f15da3807c9cb7d6ca847ad93f8280803
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2023-02-11T13:39:53+05:30" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master ~]# rm /etc/containerd/config.toml
rm: remove regular file ‘/etc/containerd/config.toml’? y
[root@k8s-master ~]# systemctl restart containerd
--discovery-token-ca-cert-hash sha256:824bd20bc0973d40177027939fe096390bfbb3d742347e3568bd0ef338ea5ac4
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
> --discovery-token-ca-cert-hash sha256:824bd20bc0973d40177027939fe096390bfbb3d742347e3568bd0ef338ea5ac4
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2023-02-10T22:39:10+05:30" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-worker1 ~]# cat > /etc/containerd/config.toml <<EOF
> [plugins."io.containerd.grpc.v1.cri"]
>   systemd_cgroup = true
> EOF
accepts at most 1 arg(s), received 2  ← this error appears when the two-line join command is pasted without the trailing "\"
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-worker1 ~]# kubeadm join 192.168.137.171:6443 --token 2rcpq4.sv5yjo8jgw4vui8d \
> --discovery-token-ca-cert-hash sha256:824bd20bc0973d40177027939fe096390bfbb3d742347e3568bd0ef338ea5ac4
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
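The root cause of the recurring "CRI v1 runtime API is not implemented" error above is that containerd 1.6's packaged config.toml ships with the cri plugin disabled. Rather than deleting the file, a cleaner fix (a sketch, run as root) is to regenerate the full default config and enable the systemd cgroup driver. Note that containerd 1.6 uses the runc option `SystemdCgroup` under `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]`, not the older `systemd_cgroup` key used earlier:

```shell
# Regenerate the default containerd config with the cri plugin enabled
# and the systemd cgroup driver turned on, then restart containerd.
if command -v containerd >/dev/null 2>&1; then
    mkdir -p /etc/containerd
    containerd config default \
        | sed 's/SystemdCgroup = false/SystemdCgroup = true/' \
        > /etc/containerd/config.toml
    systemctl restart containerd 2>/dev/null || true
else
    echo "containerd not installed in this shell"
fi
```

After this, kubeadm init/join should pass the CRI preflight check without `--ignore-preflight-errors`.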