Funky's NoteBook

Install a single master K8S cluster with kubeadm

2019/03/10

Prestep: Before Installing K8S with kubeadm

On every node:

  • Disable swap
  • Disable SELinux
  • Disable the firewall
  • /etc/hosts on every node must contain the IPs and matching hostnames of all nodes in the cluster (see the example after this list)
  • Install Docker
  • Enable network forwarding in the kernel
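
For example, /etc/hosts on a three-node cluster might look like the following (the private IPs here are illustrative placeholders; the hostnames match the ones used in Step 6):

192.168.1.10 k8s-m1
192.168.1.11 k8s-s1
192.168.1.12 k8s-s2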

Disable Swap (every node)

swapoff -a &>/dev/null
sed -i '/\s\+swap\s\+/d' /etc/fstab &>/dev/null
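
To confirm swap is really off, swapon --show should print nothing and the Swap line from free should read all zeros:

swapon --show
free -h | grep -i swap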

Disable SELinux (every node)

setenforce 0 
cat > /etc/selinux/config << EOF
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
EOF
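
You can confirm the change took effect with getenforce, which should report Permissive now (and Disabled after the next reboot):

getenforce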

Disable the Firewall and Flush iptables (every node)

systemctl disable firewalld
systemctl stop firewalld
yum install iptables-services -y
iptables -F
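
A quick sanity check that firewalld is stopped and the filter table is empty:

systemctl is-active firewalld
iptables -L -n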

Install Docker on CentOS (every node)

# Create the registry mirror configuration file
mkdir -p /etc/docker && touch /etc/docker/daemon.json
# Configure the official China registry mirror
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
# Remove old Docker versions
yum remove docker docker-common docker-selinux docker-engine
# Install dependencies for the new Docker release
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the official repository
curl -s https://download.docker.com/linux/centos/docker-ce.repo > /etc/yum.repos.d/docker-ce.repo
# Switch to the Tsinghua mirror for faster downloads
sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Refresh the repo cache
yum makecache fast
# Install Docker
yum install -y docker-ce
# Enable Docker at boot and start it now
systemctl enable docker && systemctl daemon-reload && systemctl restart docker
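
To confirm Docker is running and the mirror configuration was picked up, docker info should list the configured registry mirror:

docker info | grep -A 1 -i 'registry mirrors'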

Step 1: Install kubeadm (every node)

# Add the Aliyun mirror repository for Kubernetes packages
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Update all system packages
yum update -y
# Install kubeadm (kubelet and kubectl are pulled in as dependencies)
yum install -y kubeadm
# Enable kubelet at boot and start it now
systemctl enable kubelet && systemctl start kubelet
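
kubeadm should now be on the PATH; you can check the installed version with:

kubeadm version -o short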

Step 2: Pull the Images (every node)

Pull the images through Aliyun's domestic mirror.

Run a Docker container to pull the images:

# Get the Kubernetes version matching the installed kubeadm
version=$(kubeadm config images list | head -1 | awk -F: '{ print $2 }')
# Pull the k8s container images for that version
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  registry.cn-hangzhou.aliyuncs.com/geekcloud/image-pull:k8s-$version

Or run this script instead:

# List the images kubeadm needs, keeping only the image name (the part after the k8s.gcr.io/ prefix)
images=($(kubeadm config images list 2>/dev/null | awk -F'/' '{print $2}'))
for imageName in ${images[@]} ; do
  echo "docker pull registry.cn-hangzhou.aliyuncs.com/image-mirror/${imageName}"
  docker pull registry.cn-hangzhou.aliyuncs.com/image-mirror/${imageName}
  echo "docker tag registry.cn-hangzhou.aliyuncs.com/image-mirror/${imageName} k8s.gcr.io/${imageName}"
  docker tag registry.cn-hangzhou.aliyuncs.com/image-mirror/${imageName} k8s.gcr.io/${imageName}
  echo "docker rmi registry.cn-hangzhou.aliyuncs.com/image-mirror/${imageName}"
  docker rmi registry.cn-hangzhou.aliyuncs.com/image-mirror/${imageName}
done
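
Either way, the images kubeadm expects should now be present locally under their k8s.gcr.io tags; compare the two lists:

kubeadm config images list
docker images | grep k8s.gcr.io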

Step 3: Initialize the Cluster (master node)

Enable kernel network forwarding (all nodes):

sysctl net.bridge.bridge-nf-call-iptables=1
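
Note that a sysctl set this way is lost on reboot. A minimal sketch to persist it, assuming a standard CentOS 7 layout:

cat > /etc/sysctl.d/k8s.conf <<EOF
# Let bridged traffic pass through iptables, as kube-proxy and most CNIs require
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system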

Initialize the cluster (master node):

version=$(kubeadm config images list | head -1 | awk -F: '{ print $2 }')
kubeadm init --kubernetes-version=$version --pod-network-cidr=10.244.0.0/16
## --kubernetes-version=v1.13.4 pins a specific version
## If you use the calico or flannel network, you must pass --pod-network-cidr to tell the CNI which pod subnet to use (10.244.0.0/16 for flannel and canal, 192.168.0.0/16 for calico's default manifest; see Step 4)

If initialization succeeds, you will see output like the following:

[init] Using Kubernetes version: vX.Y.Z
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeadm-master localhost] and IPs [10.138.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeadm-master localhost] and IPs [10.138.0.4 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.501735 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-X.Y" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubeadm-master" as an annotation
[mark-control-plane] Marking the node kubeadm-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubeadm-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: <token>
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Run the following on the master node so that kubectl works:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf
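
kubectl should now be able to reach the API server. The master will show NotReady until a CNI is installed in the next step:

kubectl get nodes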

Step 4: Install a CNI for the k8s Cluster (master node)

Choose exactly one of the following CNIs (a verification tip follows the list):

  • calico

    # Before running these commands, kubeadm init in the previous step must have been given --pod-network-cidr=192.168.0.0/16
    kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
    kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
  • flannel

    # Before running these commands, kubeadm init in the previous step must have been given --pod-network-cidr=10.244.0.0/16
    # Pull the image
    docker pull quay.azk8s.cn/coreos/flannel:v0.11.0-amd64
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  • canal

    # Before running these commands, kubeadm init in the previous step must have been given --pod-network-cidr=10.244.0.0/16
    kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
    kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/canal.yaml
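
Once the CNI pods come up, the master should flip from NotReady to Ready; you can watch the progress with:

kubectl get pods -n kube-system
kubectl get nodes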

Optional:
By default k8s does not schedule pods on the master node. If you need to schedule pods on the master, run:

kubectl taint nodes --all node-role.kubernetes.io/master-

This will print output similar to:

node "test-01" untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found

Step 5: Join the Worker Nodes (worker nodes)

Enable kernel network forwarding:

sysctl net.bridge.bridge-nf-call-iptables=1

The output of Step 3 contains a command of the form kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>.
Run it on each worker node:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
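
If you have lost the join command or the token has expired (bootstrap tokens last 24 hours by default), generate a new one on the master:

kubeadm token create --print-join-command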

Step 6: Verify the Cluster Status

After a short while, run the following command on the master node:

kubectl get nodes

You should see output similar to:

NAME     STATUS   ROLES    AGE     VERSION
k8s-m1   Ready    master   4d21h   v1.13.4
k8s-s1   Ready    <none>   4d21h   v1.13.4
k8s-s2   Ready    <none>   4d21h   v1.13.4

If every node is in the Ready state, the cluster is working properly and the k8s installation is complete.
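
For a deeper check, make sure every system pod (etcd, apiserver, controller-manager, scheduler, CoreDNS, kube-proxy, and the CNI pods) is Running:

kubectl get pods -n kube-system -o wide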

Postscript: Resetting the Cluster and kubeadm

  • First, drain and delete each node:

    kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
    kubectl delete node <node name>
  • Reset kubeadm

    kubeadm reset
  • Restore iptables

    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
  • If you enabled IPVS, clear it:

    ipvsadm -C