Environment

Ubuntu 22.04 (hardware: 2 CPUs / 4 GB RAM) x 3
Kubernetes 1.29
Calico 3.29
Kuboard v3
Local proxy (accelerator): 192.168.120.2:10809
If no such proxy is available, replace the relevant URLs and package sources accordingly, e.g. the Docker URLs, the apt sources, and the container image sources.

Description

One host serves as the control-plane node, two hosts serve as worker nodes.
All hosts get their IPs through static DHCP reservations, so the steps below contain no IP configuration.
Control-plane node hostname and IP: v01-ubuntu (192.168.120.81)
Worker node hostnames and IPs: v02-ubuntu (192.168.120.82), v03-ubuntu (192.168.120.83)
K8s CNI network plugin: Calico
All of the following commands are run as root; if you are not root, prefix them with sudo.

Steps

Preparation

Perform these steps on both the control-plane node and the worker nodes.

# Configure the hostname
echo <hostname> > /etc/hostname
sed -i.bak -r 's/(127.0.1.1 .+)/\1 <hostname>/' /etc/hosts
reboot

# Disable the ufw firewall
ufw disable
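# Check (optional): the status should report "inactive"
ufw status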

# Disable swap
swapoff -a 
sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab
# Check
free -m
cat /etc/fstab | grep swap

# Configure the overlay and br_netfilter kernel modules
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Load the modules
modprobe overlay
modprobe br_netfilter
# Check; both modules should appear in the output
lsmod | grep -E "overlay|br_netfilter"

# Configure forwarding and bridge netfilter sysctls
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply
sysctl --system
# Check; all three values should be 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

Configure proxy acceleration for curl, apt, and containerd

Perform these steps on both the control-plane node and the worker nodes.

# Configure a system-wide proxy for curl and similar tools
export http_proxy=http://192.168.120.2:10809
export https_proxy=http://192.168.120.2:10809
export no_proxy=localhost,127.0.0.1,192.168.120.0/24,10.96.0.0/12,10.244.0.0/16
# Configure an apt proxy
echo "Acquire::http::Proxy \"http://192.168.120.2:10809\";" > /etc/apt/apt.conf.d/90proxy

Install and configure containerd and K8s

Perform these steps on both the control-plane node and the worker nodes.

# Configure the containerd (Docker) apt repository
apt install -y apt-transport-https ca-certificates curl gpg gnupg2 software-properties-common
# Ensure the keyrings directory exists (it may already exist on Ubuntu 22.04)
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
# Write the repository definition into /etc/apt/sources.list.d/docker.list
echo "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | tee /etc/apt/sources.list.d/docker.list

# Configure the Kubernetes apt repository
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list

# Install containerd and the Kubernetes 1.29 components
apt update
apt install -y containerd.io kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

# Configure containerd
containerd config default | tee /etc/containerd/config.toml
# Edit the containerd config:
# search for SystemdCgroup and set it to true; the runtime_type above it should already be io.containerd.runc.v2, change it if not
vim /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
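
# Alternative to editing by hand, a sketch assuming the default generated config where "SystemdCgroup = false" appears exactly once:
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml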

# Configure a proxy for containerd
mkdir -p /etc/systemd/system/containerd.service.d
cat <<EOF | tee /etc/systemd/system/containerd.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.120.2:10809"
Environment="HTTPS_PROXY=http://192.168.120.2:10809"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.120.0/24,10.96.0.0/12,10.244.0.0/16"
EOF

# Configure crictl
cat <<EOF | tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: true # enable debug output
pull-image-on-create: false
EOF

# Check crictl; if the crictl command is missing, install it with `apt install cri-tools`
crictl ps

# Finish: reload, restart, and enable the services
systemctl daemon-reload
systemctl restart containerd.service
systemctl enable containerd.service
systemctl status containerd.service
systemctl enable kubelet

Initialize K8s

Control-plane node

kubeadm config print init-defaults --component-configs KubeProxyConfiguration,KubeletConfiguration > kubeadm-config.yaml
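# The file above is for reference only; the kubeadm init below is configured with flags instead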
kubeadm config images pull --cri-socket unix:///var/run/containerd/containerd.sock
# Check the pulled images
crictl images
kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--cri-socket unix:///var/run/containerd/containerd.sock \
--v=5
# On success, output similar to the following should appear
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.120.81:6443 --token x55b7v.5z6o...8w1a7 \
        --discovery-token-ca-cert-hash sha256:97bc82a55da...cbab9ebf31487b
# End of output
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
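# Check: the control-plane node should be listed; it stays NotReady until the Calico CNI plugin is installed later
kubectl get nodes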

Worker nodes

Run the kubeadm join command from the init output on the control-plane node.
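
If the join command has been lost or the token has expired, a new join command can be printed on the control-plane node:

kubeadm token create --print-join-command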

Configure the K8s CNI network plugin Calico

wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/custom-resources.yaml
sed -i.bak -r 's/.+(cidr.+)/      #\1\n      cidr: 10.244.0.0\/16/' ./custom-resources.yaml
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml

After a short wait, run kubectl get nodes and check that the node status becomes Ready.
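
The Calico rollout itself can also be watched; the tigera-operator manifest creates the tigera-operator namespace, and the operator creates calico-system once the custom resources are applied:

kubectl get pods -n tigera-operator
kubectl get pods -n calico-system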

Configure the K8s web UI Kuboard v3

wget https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
sed -i.bak -r 's/.+(KUBOARD_SERVER_NODE_PORT.+)/  #\1\n  KUBOARD_ENDPOINT: http:\/\/kuboard3.yudelei.com/' ./kuboard-v3.yaml
kubectl create -f kuboard-v3.yaml
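
Once the pods in the kuboard namespace are Running, the UI should be reachable on NodePort 30080; per the Kuboard v3 documentation the default credentials are admin / Kuboard123.

kubectl get pods -n kuboard
# then open http://<any-node-ip>:30080 in a browser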

Miscellaneous

Because cluster nodes are usually initialized one after another, the CoreDNS Pods are likely to all be running on the first control-plane node. For higher availability, rebalance the CoreDNS Pods with the following command after at least one new node has joined.

kubectl -n kube-system rollout restart deployment coredns 
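# Check: the CoreDNS replicas should now be spread across the nodes
kubectl -n kube-system get pods -o wide | grep coredns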

By default, for security reasons, the cluster does not schedule Pods on control-plane nodes. If you want Pods to be schedulable on control-plane nodes, for example in a single-machine Kubernetes cluster, run:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
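# Check: the control-plane NoSchedule taint should no longer be present
kubectl describe nodes | grep -i taint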

High-availability clusters: kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/

Related errors

If deploying Kuboard v3 reports a missing address error, the sed step above was not performed.

{"level":"warn","ts":"","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-8ab62de6-3718-42c9-a5b8-ef4bfcdd165a/","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial tcp: missing address\""}
failed to initialize server: server: failed to list connector objects from storage: context deadline exceeded

vim kuboard-v3.yaml

Comment out:
#KUBOARD_SERVER_NODE_PORT: '30080'
and add a service address:
KUBOARD_ENDPOINT: http://kuboard-v3

References

github.com/eip-work/kuboard-press/issues/449
