What to do
References
- kubernetes.io
- docs.tigera.io
Configuration with kubeadm
Generating the initial configuration
Generating the configuration with kubeadm
Generating the configuration
| Option | Purpose |
| --- | --- |
| apiserver-advertise-address | IP address the control plane advertises on |
| pod-network-cidr | CIDR range assigned to the Pod network |
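The two options above can also be written as a kubeadm configuration file instead of command-line flags. A minimal sketch, assuming the kubeadm.k8s.io/v1beta3 API used by Kubernetes v1.27 and this environment's example values:

cat <<'EOF' > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.64.4      # corresponds to --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.128.0/18         # corresponds to --pod-network-cidr
EOF
# sudo kubeadm init --config kubeadm-config.yaml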
Initializing the control plane node
- Confirm there are no errors
- Record everything from "Your Kubernetes control-plane has initialized successfully!" onward
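The first command in the log below derives the control plane's IPv4 address from nmcli. If nmcli is not available, an equivalent sketch using iproute2 (the interface name enp0s1 is just this environment's example):

IPADDR=$(ip -4 -o addr show enp0s1 | awk '{split($4, a, "/"); print a[1]}')
echo ${IPADDR}    # should print the address passed to --apiserver-advertise-address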
[tsubame@control-plane01 ~]$ IPADDR=$(nmcli device show enp0s1 | grep -i ip4.add | awk -F"[ /]" '{print $(NF-1)}')
[tsubame@control-plane01 ~]$
[tsubame@control-plane01 ~]$ sudo kubeadm init --apiserver-advertise-address ${IPADDR} --pod-network-cidr 192.168.128.0/18
[init] Using Kubernetes version: v1.27.4
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0814 06:03:11.390419 12581 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [control-plane01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.64.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [control-plane01 localhost] and IPs [192.168.64.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [control-plane01 localhost] and IPs [192.168.64.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.001888 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node control-plane01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node control-plane01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: nftcvs.q11p4kc84cdw6px8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.64.4:6443 --token foobarbaz \
--discovery-token-ca-cert-hash sha256:nyanwanpaooooon
[tsubame@control-plane01 ~]$
[tsubame@control-plane01 ~]$
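If the join command printed above is lost, it can be regenerated later on the control plane with the standard kubeadm subcommand:

sudo kubeadm token create --print-join-command    # prints a fresh kubeadm join line with a new token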
Verification
- Confirm the resources below are displayed
- The coredns STATUS is Pending because the CNI installed later is required
[tsubame@control-plane01 ~]$ kubectl get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-5d78c9869d-88l7h 0/1 Pending 0 31m
kube-system pod/coredns-5d78c9869d-gk7r9 0/1 Pending 0 31m
kube-system pod/etcd-control-plane01 1/1 Running 0 31m
kube-system pod/kube-apiserver-control-plane01 1/1 Running 0 31m
kube-system pod/kube-controller-manager-control-plane01 1/1 Running 0 31m
kube-system pod/kube-proxy-6pdtq 1/1 Running 0 31m
kube-system pod/kube-scheduler-control-plane01 1/1 Running 0 31m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 31m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 31m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 31m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 0/2 2 0 31m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-5d78c9869d 2 2 0 31m
[tsubame@control-plane01 ~]$
Configuration as a regular user
Setting up kubeconfig permissions
[tsubame@control-plane01 ~]$ mkdir -p $HOME/.kube
[tsubame@control-plane01 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[tsubame@control-plane01 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[tsubame@control-plane01 ~]$
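As a quick sanity check that kubectl now uses the copied kubeconfig, list the nodes; the control plane shows NotReady until the CNI in the next step is installed:

kubectl get nodes     # control-plane01 is expected to be NotReady at this point
kubectl cluster-info  # confirms the API server endpoint being used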
Setting up Calico as the CNI (Container Network Interface)
Creation
Create Calico (the Tigera operator) from the YAML manifest
[tsubame@control-plane01 ~]$ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
[tsubame@control-plane01 ~]$
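Before creating the custom resources, it can help to confirm the operator Deployment itself rolled out; an illustrative check:

kubectl get pods -n tigera-operator
kubectl rollout status deployment/tigera-operator -n tigera-operator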
Create the manifest to install Calico
- Confirm there are no errors
- The network range substituted with sed must be changed to the value specified with `--pod-network-cidr` in kubeadm init
[tsubame@control-plane01 ~]$ curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml -O
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 824 100 824 0 0 1847 0 --:--:-- --:--:-- --:--:-- 1843
[tsubame@control-plane01 ~]$
[tsubame@control-plane01 ~]$ sed -i 's!192.168.0.0/16!192.168.128.0/18!g' custom-resources.yaml
[tsubame@control-plane01 ~]$
[tsubame@control-plane01 ~]$ kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
[tsubame@control-plane01 ~]$
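To confirm the sed substitution took effect, the CIDR sits under spec.calicoNetwork.ipPools in the Installation resource; a sketch, assuming the v3.26 operator schema:

grep -n cidr custom-resources.yaml    # should now show 192.168.128.0/18
kubectl get installation default -o jsonpath='{.spec.calicoNetwork.ipPools[*].cidr}{"\n"}'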
Verification 1
[tsubame@control-plane01 ~]$ kubectl get pods -n calico-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-7d8ffccfb4-r6bdc 1/1 Running 0 2m35s
calico-node-skrzs 1/1 Running 0 2m35s
calico-typha-5fbc6549d6-n8zzl 1/1 Running 0 2m35s
csi-node-driver-p29hk 2/2 Running 0 2m35s
[tsubame@control-plane01 ~]$
Verification 2
- Confirm coredns is Running
- This takes about 2 minutes
- Once everything is Running, exit with Ctrl+C
[tsubame@control-plane01 ~]$ watch -n1 -d kubectl get all -A
Every 1.0s: kubectl get all -A control-plane01: Mon Aug 14 06:51:54 2023
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver pod/calico-apiserver-7678748bc6-lzjj6 1/1 Running 0 2m34s
calico-apiserver pod/calico-apiserver-7678748bc6-p2gxd 1/1 Running 0 2m34s
calico-system pod/calico-kube-controllers-7d8ffccfb4-r6bdc 1/1 Running 0 3m25s
calico-system pod/calico-node-skrzs 1/1 Running 0 3m25s
calico-system pod/calico-typha-5fbc6549d6-n8zzl 1/1 Running 0 3m25s
calico-system pod/csi-node-driver-p29hk 2/2 Running 0 3m25s
kube-system pod/coredns-5d78c9869d-88l7h 1/1 Running 0 48m
kube-system pod/coredns-5d78c9869d-gk7r9 1/1 Running 0 48m
kube-system pod/etcd-control-plane01 1/1 Running 0 48m
kube-system pod/kube-apiserver-control-plane01 1/1 Running 0 48m
kube-system pod/kube-controller-manager-control-plane01 1/1 Running 0 48m
kube-system pod/kube-proxy-6pdtq 1/1 Running 0 48m
kube-system pod/kube-scheduler-control-plane01 1/1 Running 0 48m
tigera-operator pod/tigera-operator-5f4668786-w9zpn 1/1 Running 0 12m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
calico-apiserver service/calico-api ClusterIP 10.107.73.67 <none> 443/TCP 2m34s
calico-system service/calico-kube-controllers-metrics ClusterIP None <none> 9094/TCP 2m35s
calico-system service/calico-typha ClusterIP 10.98.16.233 <none> 5473/TCP 3m25s
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 48m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 48m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
calico-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 3m25s
calico-system daemonset.apps/csi-node-driver 1 1 1 1 1 kubernetes.io/os=linux 3m25s
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 48m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
calico-apiserver deployment.apps/calico-apiserver 2/2 2 2 2m34s
calico-system deployment.apps/calico-kube-controllers 1/1 1 1 3m25s
calico-system deployment.apps/calico-typha 1/1 1 1 3m25s
kube-system deployment.apps/coredns 2/2 2 2 48m
tigera-operator deployment.apps/tigera-operator 1/1 1 1 12m
NAMESPACE NAME DESIRED CURRENT READY AGE
calico-apiserver replicaset.apps/calico-apiserver-7678748bc6 2 2 2 2m34s
calico-system replicaset.apps/calico-kube-controllers-7d8ffccfb4 1 1 1 3m25s
calico-system replicaset.apps/calico-typha-5fbc6549d6 1 1 1 3m25s
kube-system replicaset.apps/coredns-5d78c9869d 2 2 2 48m
tigera-operator replicaset.apps/tigera-operator-5f4668786 1 1 1 12m
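Once everything above is Running, the node itself should also report Ready; a final quick check:

kubectl get nodes -o wide    # control-plane01 is expected to be Ready now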
For copy-paste
Initial setup
IPADDR=$(nmcli device show enp0s1 | grep -i ip4.add | awk -F"[ /]" '{print $(NF-1)}')
sudo kubeadm init --apiserver-advertise-address ${IPADDR} --pod-network-cidr 192.168.128.0/18
Per-user setup
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
CNI
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml -O
sed -i 's!192.168.0.0/16!192.168.128.0/18!g' custom-resources.yaml
kubectl create -f custom-resources.yaml