Environment

Ubuntu 22.04 (hardware: 2 vCPU / 4 GB RAM) x 3
kubernetes 1.29
calico 3.29
kuboard v3
Local proxy (accelerator) IP: 192.168.120.2:10809
If you have no such proxy, replace the relevant URLs and sources accordingly, e.g. the Docker URLs, the apt sources, and the container image sources.

Description

One host acts as the control-plane node and two hosts act as worker nodes.
All hosts receive static IPs via DHCP reservations, so the steps below do not include IP configuration.
Control-plane node hostname and IP: v01-ubuntu (192.168.120.81)
Worker node hostnames and IPs: v02-ubuntu (192.168.120.82), v03-ubuntu (192.168.120.83)
K8S CNI network plugin: Calico
All of the following commands are run as the root user; if you are not root, use sudo.

Steps

Preparation

Run these steps on both the control-plane node and the worker nodes.

# Set the hostname (replace <hostname> with this host's name)
echo <hostname> > /etc/hostname
sed -i.bak -r 's/(127.0.1.1 .+)/\1 <hostname>/' /etc/hosts
reboot

# Disable the ufw firewall
ufw disable

# Disable swap
swapoff -a 
sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab
# Verify
free -m
cat /etc/fstab | grep swap

# Load the overlay and br_netfilter kernel modules on boot
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Load them now
modprobe overlay
modprobe br_netfilter
# Check: both modules should appear in the output
lsmod | grep -E "overlay|br_netfilter"

# Enable bridged traffic filtering and IP forwarding
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply
sysctl --system
# Check: all three values should be 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

Configure proxy acceleration for curl, apt, and containerd

Run these steps on both the control-plane node and the worker nodes.

# Configure a system-wide proxy for curl and similar tools
export http_proxy=http://192.168.120.2:10809
export https_proxy=http://192.168.120.2:10809
export no_proxy=localhost,127.0.0.1,192.168.120.0/24,10.96.0.0/12,10.244.0.0/16
# Configure the apt proxy
echo "Acquire::http::Proxy \"http://192.168.120.2:10809\";" > /etc/apt/apt.conf.d/90proxy

Install and configure containerd and Kubernetes

Run these steps on both the control-plane node and the worker nodes.

# Configure the Docker apt repository (containerd.io is installed from it)
apt install -y apt-transport-https ca-certificates curl gpg gnupg2 software-properties-common
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
# Write the repository definition to /etc/apt/sources.list.d/docker.list
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | tee /etc/apt/sources.list.d/docker.list

# Configure the Kubernetes apt repository
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install containerd and the Kubernetes 1.29 components
apt update
apt install -y containerd.io kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

# Generate the default containerd configuration
containerd config default | tee /etc/containerd/config.toml
# Edit the containerd config:
# search for SystemdCgroup and set it to true; the enclosing runtime_type should
# already default to io.containerd.runc.v2; change it if it does not
vim /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
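
If you prefer not to edit the file interactively, the same change can be made with sed (a sketch that assumes the freshly generated default config, where SystemdCgroup appears exactly once):

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Verify the change
grep -n 'SystemdCgroup' /etc/containerd/config.toml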

# Configure the proxy for containerd (so it can pull images through the accelerator)
mkdir -p /etc/systemd/system/containerd.service.d
cat <<EOF | tee /etc/systemd/system/containerd.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.120.2:10809"
Environment="HTTPS_PROXY=http://192.168.120.2:10809"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.120.0/24,10.96.0.0/12,10.244.0.0/16"
EOF

# Configure crictl
cat <<EOF | tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: true # enable debug output
pull-image-on-create: false
EOF

# Check crictl; if the crictl command is missing, install it with `apt install cri-tools`
crictl ps

# Finally, reload and enable the services
systemctl daemon-reload
systemctl restart containerd.service
systemctl enable containerd.service
systemctl status containerd.service
systemctl enable kubelet

Initialize Kubernetes

Control-plane node

kubeadm config print init-defaults --component-configs KubeProxyConfiguration,KubeletConfiguration > kubeadm-config.yaml
kubeadm config images pull --cri-socket unix:///var/run/containerd/containerd.sock
# Check the pulled images
crictl images
kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--cri-socket unix:///var/run/containerd/containerd.sock \
--v=5
# On success, output similar to the following should appear
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.120.81:6443 --token x55b7v.5z6o...8w1a7 \
        --discovery-token-ca-cert-hash sha256:97bc82a55da...cbab9ebf31487b
# End of output
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
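
At this point kubectl can reach the API server; a quick check (the node will stay NotReady until the CNI plugin is installed in a later step):

kubectl get nodes
kubectl get pods -n kube-system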

Worker nodes

Run the kubeadm join command shown in the control-plane node's output above.
If you have lost it, regenerate it:

kubeadm token create --print-join-command
kubeadm token create --print-join-command --ttl 0 # non-expiring token
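
After the join commands finish on the workers, the nodes can be verified from the control-plane node (they also remain NotReady until Calico is installed):

kubectl get nodes -o wide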

Install the Calico CNI network plugin

wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/custom-resources.yaml
sed -i.bak -r 's/.+(cidr.+)/      #\1\n      cidr: 10.244.0.0\/16/' ./custom-resources.yaml
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml
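
To watch the rollout (tigera-operator.yaml creates the tigera-operator namespace; the operator then creates calico-system once custom-resources.yaml is applied):

kubectl get pods -n tigera-operator
watch kubectl get pods -n calico-system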

After a short wait, run kubectl get nodes and check that the nodes have become Ready.

Install the Kuboard v3 web UI

wget https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
sed -i.bak -r 's/.+(KUBOARD_SERVER_NODE_PORT.+)/  #\1\n  KUBOARD_ENDPOINT: http:\/\/kuboard3.yudelei.com/' ./kuboard-v3.yaml
kubectl create -f kuboard-v3.yaml
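# Check that the Kuboard Pods are Running before opening the UI (kuboard is the namespace created by kuboard-v3.yaml)
kubectl get pods -n kuboard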
cd && cat .kube/config # copy the output for the next step

Open http://192.168.120.81:30080, add a cluster using the KubeConfig method, paste the copied content, and confirm to access the cluster.

Username: admin
Password: Kuboard123
Commonly used cluster access identity: ServiceAccount Kuboard-admin (it can be switched freely with no side effects; unless otherwise noted, the following steps use this identity).

Configure IngressClass and IngressNginxController

This step is done in the Kuboard web UI.
Go to Cluster Management, Network, IngressClass; on the right, install an IngressNginxController and create an IngressClass; fill in the values below and confirm (an example Ingress using the result is sketched after this section).

Name: ingress
Replicas: 2
Use private images: enabled

After the configuration completes, an entry named ingress appears under IngressClass; clicking it shows the information below.
Record the ports that map to the load balancer (here: 32763 and 31174); they are needed when configuring the load balancer.
Without a load balancer, they are needed when accessing Pods by domain name.

Load balancer mapping
    It is recommended to use a load balancer outside the Kubernetes cluster and configure L4 forwarding (source addresses cannot be traced via X-FORWARDED-FOR) or L7 forwarding (configuring L7 forwarding is cumbersome on some load balancer products) for the ports below (ignore this message if the forwarding is already set up).
    Load balancer port 80
        forwards to port 32763 on any Kubernetes cluster node
    Load balancer port 443
        forwards to port 31174 on any Kubernetes cluster node

Ingress settings
    Once the forwarding is set up, an Ingress works with this IngressController as long as:
        the Ingress's .spec.ingressClassName is set to ingress
        the domain names in the Ingress rules (.spec.rules[*].host) resolve to the load balancer's address
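
As an illustration, a minimal Ingress meeting those conditions might look like this (a sketch; demo.example.com, the Ingress name demo, and the Service demo-svc on port 80 are hypothetical placeholders for your own workload):

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo                      # hypothetical name
spec:
  ingressClassName: ingress       # must match the IngressClass created above
  rules:
  - host: demo.example.com        # resolve this to the load balancer (or to a node IP if there is none)
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-svc        # hypothetical Service
            port:
              number: 80
EOF

Without an external load balancer, the same host can be reached directly through the NodePorts recorded above, e.g. http://demo.example.com:32763 with the name resolving to any node IP.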

Miscellaneous

Shell completion for kubectl, kubeadm, and kubelet:
cd && apt install -y bash-completion, append the lines below to the end of .bashrc (e.g. with vim), then run source .bashrc.

source <(kubectl completion bash)
source <(kubeadm completion bash)
source <(kubelet completion bash)

Because cluster nodes are usually initialized one after another, the CoreDNS Pods are likely to all run on the first control-plane node. For higher availability, rebalance the CoreDNS Pods with the following command after at least one new node has joined.

kubectl -n kube-system rollout restart deployment coredns 

By default, for security reasons, the cluster does not schedule Pods on control-plane nodes. If you want Pods to be schedulable on control-plane nodes, for example in a single-machine Kubernetes cluster, run:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-

High-availability clusters: kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/

Common errors

If deploying Kuboard v3 fails with a missing address error like the following, the sed step above was not performed.

{"level":"warn","ts":"","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-8ab62de6-3718-42c9-a5b8-ef4bfcdd165a/","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial tcp: missing address\""}
failed to initialize server: server: failed to list connector objects from storage: context deadline exceeded

vim kuboard-v3.yaml

Comment out:
#KUBOARD_SERVER_NODE_PORT: '30080'
Add a service address:
KUBOARD_ENDPOINT: http://kuboard-v3

References

https://kubernetes.io/zh-cn/releases/
https://v1-29.docs.kubernetes.io/zh-cn/docs/setup/production-environment/tools/
https://v1-29.docs.kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
https://kuboard.cn/install/v3/install-in-k8s.html#%E6%96%B9%E6%B3%95%E4%B8%80-%E4%BD%BF%E7%94%A8-hostpath-%E6%8F%90%E4%BE%9B%E6%8C%81%E4%B9%85%E5%8C%96
https://github.com/eip-work/kuboard-press/issues/449
https://v1-29.docs.kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/
https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart
https://kubernetes.io/zh-cn/blog/2023/05/11/nodeport-dynamic-and-static-allocation/
