At work I have always used a ready-made Kubernetes environment with lots of components wrapped around it. This time I tried building a lightweight cluster myself and hit plenty of pitfalls along the way; here is what I learned.
Prerequisites: two hosts (cloud servers or virtual machines) with network connectivity between them, each ideally with more than 2 GB of RAM, and Docker installed on both in advance.
The two hosts I used:

Hostname (CentOS)   IP             Role
test-50             192.168.9.50   Master
test-51             192.168.9.51   Slave
The setup process
1. First, configure the Aliyun mirror repository on both hosts
# switch to root
sudo su
# write the repo definition
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. On both hosts, set SELinux to disabled and turn off swap
# disable SELinux (setenforce only lasts until reboot; edit /etc/selinux/config to make it permanent)
setenforce 0
# disable swap; Kubernetes disables it by default because swapping hurts performance
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
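The sed line above works by matching any fstab entry containing "swap" and replacing it with itself prefixed by "#" (in the replacement, & refers back to the whole match). A quick way to sanity-check the expression on a sample fstab line (the device name here is hypothetical):

```shell
# a sample fstab swap entry (hypothetical device name)
line='/dev/mapper/centos-swap swap swap defaults 0 0'
# the same substitution as above: comment out any line mentioning swap
result=$(echo "$line" | sed -r 's/.*swap.*/#&/')
echo "$result"
```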
3. Install kubeadm, kubelet and kubectl on both hosts
yum install -y kubelet kubeadm kubectl
4. Enable kubelet at boot on both hosts
systemctl enable kubelet.service
5. Create the cluster
# run on the host that will serve as the master node
kubeadm init \
  --kubernetes-version=v1.19.0 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --pod-network-cidr=10.24.0.0/16 \
  --ignore-preflight-errors=Swap

# --kubernetes-version        pin the Kubernetes version
# --image-repository          pull images from the Aliyun mirror, since the default registry is blocked
# --pod-network-cidr          the Pod address range; the cluster also works without it
# --ignore-preflight-errors   ignore the named preflight check failure
At this point you can see the control-plane components have started up.
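As an aside, the same flags can also be captured in a kubeadm configuration file, which is easier to keep in version control. A sketch, assuming kubeadm 1.19 (whose config API version is kubeadm.k8s.io/v1beta2); the file name kubeadm.yaml is my own choice:

```yaml
# kubeadm.yaml - equivalent to the command-line flags above
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: 10.24.0.0/16
```

You would then run kubeadm init --config kubeadm.yaml (most other flags cannot be mixed with --config, though --ignore-preflight-errors still can).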
If the previous step fails, reset kubeadm before retrying:
kubeadm reset
6. At this point kubectl still does not work; you first need to copy the kubeconfig into place
# switch back to the regular user
su centos
# copy the kubeconfig into place
mkdir ~/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now check the cluster:
kubectl get node
You can see the node is NotReady; the reason is that no CNI network plugin has been installed yet.
7. Install a CNI network plugin
Here I installed Weave Net:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
~$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
Wait for the installation to finish.
Check the master again a little later; its status is now Ready:
~$ kubectl get node
NAME STATUS ROLES AGE VERSION
test-50 Ready master 57m v1.19.3
8. Joining the node
# on the master, list the existing tokens
kubeadm token list
# if the token has expired (the list comes back empty), generate a new one
kubeadm token create
~$ kubeadm token list
TOKEN                     TTL   EXPIRES
dw7q0r.3ru1vrmwo84kprwd   22h   2020-12-29T19:16:37+08:00
On the master, get the SHA-256 hash of the CA certificate (alternatively, kubeadm token create --print-join-command prints a ready-to-use join command):
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
80b4e8b8445f748e76161b52ebea99933ad7c4c1397d35b07c035ce765528a22
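The pipeline extracts the CA certificate's public key, re-encodes it as DER, and takes its SHA-256 digest; the trailing sed strips openssl's "(stdin)= " prefix. You can verify that the pipeline always yields a 64-character hex string by pointing it at a throwaway self-signed certificate (the /tmp paths and the CN=demo subject below are just for this demo):

```shell
# generate a throwaway key + self-signed cert as a stand-in for ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
    -out /tmp/demo.crt -days 1 -subj "/CN=demo" 2>/dev/null
# the same pipeline as above, pointed at the demo certificate
hash=$(openssl x509 -pubkey -in /tmp/demo.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "$hash"
```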
Run kubeadm join on the slave:
sudo su   # switch to root
kubeadm join <master-ip:port> \                  # the API server address from the kubeconfig file
  --token <token> \                              # the token from the step above
  --discovery-token-ca-cert-hash sha256:<hash>   # the hash from the step above
The terminal output:
[root@guozhao-51 script]# kubeadm join 10.0.0.208:6443 --token k69336.gpgqokisbyux8ek1 --discovery-token-ca-cert-hash sha256:<hash>
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
        [WARNING Hostname]: hostname "guozhao-51" could not be reached
        [WARNING Hostname]: hostname "guozhao-51": lookup guozhao-51 on 10.0.0.40:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Note: if you hit the following error:
[ERROR FileContent--proc-sys-net-bridge-nf-call-iptables] /proc/sys/net/bridge/bridge-nf-call-iptables content are ...
run:
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
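Note that the echo above only lasts until reboot. To make the setting persistent, it can also go into a sysctl drop-in file (the file name k8s.conf is my own choice) and be applied with sysctl --system:

```
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```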
9. Using the kubeconfig on the node
Copy the kubeconfig file from the master into ~/.kube for the regular user on the slave node.
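A sketch of that copy, run from the slave; it assumes SSH access from the slave to the master as the centos user, using the master IP from the table at the top of this post:

```shell
# on the slave, as the regular user
mkdir -p ~/.kube
# pull the kubeconfig from the master (user/IP from this post; adjust for your setup)
scp centos@192.168.9.50:~/.kube/config ~/.kube/config
```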
10. Check the nodes
# the node may show NotReady at first; it becomes Ready once image pulls finish
kubectl get node
~$ kubectl get node
NAME      STATUS   ROLES    AGE    VERSION
test-50   Ready    master   40m    v1.19.3
test-51   Ready    <none>   8m8s   v1.19.3
With that, the one-master, one-slave Kubernetes cluster is up and running.