Deploying a Kubernetes Cluster (v1.29)

Host Environment

The Kubernetes cluster deployment in this example is based on the following environment.

  • OS: Ubuntu 22.04

  • Kubernetes:v1.29.2

  • Container Runtime (choose one of the following two):

    • Docker CE 25.0.3 with cri-dockerd v0.3.10
    • containerd.io 1.6.28

Test Environment Preparation

(1) Use the chronyd service (package name: chrony) to keep time precisely synchronized across all nodes;

(2) Resolve every node's hostname, via DNS or (as in this example) /etc/hosts;

(3) Disable all swap devices on every node;

(4) Disable the default iptables firewall service on every node;

(5) Load the br_netfilter kernel module.

# HOSTS
cat >> /etc/hosts <<EOF
10.0.0.11 master-1
10.0.0.12 node-1
10.0.0.13 node-2
10.0.0.10 k8s-vip.com
EOF

# SWAP
swapoff -a
sed -i '/swap/s/^/#/g' /etc/fstab

# CHRONY
apt -y install chrony
vim /etc/chrony/chrony.conf    # replace the default pool entries with the line below
pool ntp.aliyun.com iburst
systemctl restart chrony.service

root@k8s-master:~# chronyc sources
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 203.107.6.88 2 9 377 14 -779us[-1192us] +/- 19ms

# FIREWALLD
ufw disable

# SSH
ssh-keygen -P "" -f /root/.ssh/id_rsa
ssh-copy-id -o StrictHostKeyChecking=no 10.0.0.11
for i in 11 12 13;do scp -o StrictHostKeyChecking=no -r /root/.ssh root@10.0.0.${i}: ;done

for i in 11 12 13;do scp /root/.ssh/known_hosts root@10.0.0.${i}:.ssh/ ;done

# br_netfilter
modprobe br_netfilter
echo "br_netfilter" | sudo tee /etc/modules-load.d/br_netfilter.conf

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sysctl --system
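To confirm that the module is loaded and the kernel parameters took effect, a quick check (all three sysctl values should report 1):

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward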

Installing the Packages

Choose one of the two container runtimes:
Runtime option 1: docker-ce with cri-dockerd
Runtime option 2: containerd

Option 1: docker-ce and cri-dockerd

Docker

apt -y install apt-transport-https ca-certificates curl software-properties-common

curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

apt update
apt install docker-ce

mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": [
    "https://006jgaue.mirror.aliyuncs.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "200m"
  },
  "storage-driver": "overlay2"
}
EOF

systemctl daemon-reload
systemctl start docker.service
systemctl enable docker.service
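Because kubeadm expects the kubelet and the container runtime to use the same cgroup driver, it is worth verifying that the daemon.json above actually took effect:

docker info | grep -i 'cgroup driver'
# expected output: Cgroup Driver: systemd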

Adding a proxy for Docker (optional)

# Edit /lib/systemd/system/docker.service and add the lines below to the [Service] section.
# Replace "$PROXY_SERVER_IP" with your proxy server's address and "$PROXY_PORT" with the port the proxy listens on;
# also make sure the http scheme matches the protocol your proxy actually serves; change it to https if necessary.
Environment="HTTP_PROXY=http://$PROXY_SERVER_IP:$PROXY_PORT"
Environment="HTTPS_PROXY=http://$PROXY_SERVER_IP:$PROXY_PORT"
Environment="NO_PROXY=127.0.0.0/8,172.17.0.0/16,172.29.0.0/16,10.244.0.0/16,192.168.0.0/16,10.96.0.0/12,magedu.com,cluster.local"

systemctl daemon-reload
systemctl restart docker.service

Installing cri-dockerd

Project: https://github.com/Mirantis/cri-dockerd

curl -LO https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.10/cri-dockerd_0.3.10.3-0.ubuntu-jammy_amd64.deb

apt install ./cri-dockerd_0.3.10.3-0.ubuntu-jammy_amd64.deb
systemctl status cri-docker.service

Option 2: containerd

Install and start containerd.io

apt -y install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt update
apt-get install containerd.io

Configure containerd.io

mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml

# 1. Configure containerd to use the systemd cgroup driver (SystemdCgroup):
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

# 2. Configure containerd to pull the pause image from a domestic mirror, pinned to the required version:
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

# 3. Configure registry mirrors to speed up image pulls:
[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://docker.mirrors.ustc.edu.cn", "https://registry.docker-cn.com"]

    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
      endpoint = ["https://registry.aliyuncs.com/google_containers"]

# 4. Configure containerd to use a private image registry; skip this step if no private registry is needed:
[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.magedu.com"]
      endpoint = ["https://registry.magedu.com"]

# 5. Skip TLS verification for the private registry; skip this step if the registry's TLS certificate verifies normally:
[plugins."io.containerd.grpc.v1.cri".registry.configs]
  [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.magedu.com".tls]
    insecure_skip_verify = true

systemctl daemon-reload;systemctl restart containerd

Configure the crictl client

vim /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: true
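With this file in place crictl should be able to reach containerd; a quick sanity check (run on a node where containerd is already running):

crictl info      # dumps the runtime status and config as JSON
crictl images    # lists the images containerd currently holds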

Install kubelet, kubeadm, and kubectl

apt-get update && apt-get install -y apt-transport-https

curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list

apt-get update
apt-get install -y kubelet kubeadm kubectl

Integrating kubelet with cri-dockerd

Required only for cri-dockerd (container runtime option 1)

Configure cri-dockerd

# Edit /usr/lib/systemd/system/cri-docker.service and adjust the ExecStart line as follows:
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-cache-dir=/var/lib/cni/cache --cni-conf-dir=/etc/cni/net.d --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9

systemctl daemon-reload && systemctl restart cri-docker.service

Parameters to add (each value must match the actual paths of the CNI plugins deployed on the system):

  • --network-plugin: specifies the type of network plugin spec to use; CNI here;
  • --cni-bin-dir: directory to search for CNI plugin binaries;
  • --cni-cache-dir: cache directory used by the CNI plugins;
  • --cni-conf-dir: directory from which the CNI plugin configuration files are loaded;
  • --pod-infra-container-image: the image used by each Pod's pause container; it defaults to the pause image on registry.k8s.io, so when that image cannot be pulled directly, an accessible alternative must be specified explicitly, e.g. "registry.aliyuncs.com/google_containers/pause:3.9".

Configure kubelet

mkdir /etc/sysconfig

# Note: the --container-runtime=remote flag was removed in Kubernetes 1.27 and must not be passed to the v1.29 kubelet.
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/cri-dockerd.sock"
EOF

Initializing the Cluster

Initialize the first control plane node

Method 1

kubeadm init \
--control-plane-endpoint="kubeapi.magedu.com" \
--kubernetes-version=v1.29.2 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--image-repository=registry.aliyuncs.com/google_containers \
--token-ttl=0 \
--upload-certs \
--cri-socket=unix:///var/run/cri-dockerd.sock

Method 2

# kubeadm config print init-defaults
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: magedu.comc4mu9kzd5q7ur
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
localAPIEndpoint:
  # IP address of the first control plane node
  advertiseAddress: 172.29.7.1
  bindPort: 6443
nodeRegistration:
  # When using docker-ce + cri-dockerd, uncomment and point to the cri-dockerd socket
  # criSocket: unix:///run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  # Hostname of the first control plane node
  name: k8s-master01.magedu.com
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# Unified control plane endpoint (domain name + port)
controlPlaneEndpoint: "kubeapi.magedu.com:6443"
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.29.2
networking:
  # Default cluster DNS domain
  dnsDomain: cluster.local
  # Service network CIDR
  serviceSubnet: 10.96.0.0/12
  # Pod network CIDR (Flannel's default)
  podSubnet: 10.244.0.0/16
scheduler: {}
apiServer:
  timeoutForControlPlane: 4m0s
  # Extra SANs for the API server certificate (every address clients may use)
  certSANs:
  - kubeapi.magedu.com
  - 172.29.7.1
  - 172.29.7.2
  - 172.29.7.3
  - 172.29.7.253
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# kube-proxy proxy mode (IPVS instead of the default iptables)
mode: "ipvs"


kubeadm init --config kubeadm-config.yaml --upload-certs

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
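At this point the control plane should be up. A quick check (the node will stay NotReady until a Pod network add-on such as Flannel, installed below, is in place):

kubectl get nodes
kubectl get pods -n kube-system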

Adding nodes to the cluster

kubeadm join kubeapi.magedu.com:6443 --token magedu.comc4mu9kzd5q7ur \
--discovery-token-ca-cert-hash sha256:2f8028974b3830c5cb13163e06677f52711282b38ee872485ea81992c05d8a78 \
--cri-socket=unix:///var/run/cri-dockerd.sock
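If the join command printed by kubeadm init is no longer at hand, it can be regenerated on the control plane node (append --cri-socket on workers that run cri-dockerd):

kubeadm token create --print-join-command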

Flannel

Images sometimes cannot be pulled directly; the mirror services below can help (a pull-and-retag sketch follows the list).

Dodo image sync mirror (渡渡鸟镜像同步站): https://docker.aityp.com/

Docker/NPM package download service: https://pull.7ii.win/
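A minimal pull-and-retag sketch, assuming the mirror exposes the Flannel image under a prefixed path; the <mirror-registry> prefix and the v0.24.2 tag below are placeholders, not paths confirmed by either service:

# nodes running containerd (the k8s.io namespace is the one kubelet uses)
ctr -n k8s.io images pull <mirror-registry>/flannel/flannel:v0.24.2
ctr -n k8s.io images tag <mirror-registry>/flannel/flannel:v0.24.2 docker.io/flannel/flannel:v0.24.2

# nodes running docker-ce + cri-dockerd
docker pull <mirror-registry>/flannel/flannel:v0.24.2
docker tag <mirror-registry>/flannel/flannel:v0.24.2 flannel/flannel:v0.24.2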

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

kubectl get pods -n kube-flannel

Metrics

# On the master, edit /etc/kubernetes/manifests/kube-apiserver.yaml
- --enable-aggregator-routing=true # add this flag
- --requestheader-client-ca-file=/etc/kubernetes/pki/ca.crt
- --requestheader-allowed-names=aggregator
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --proxy-client-cert-file=/etc/kubernetes/pki/apiserver.crt
- --proxy-client-key-file=/etc/kubernetes/pki/apiserver.key

# On all nodes
# Append to /var/lib/kubelet/config.yaml
echo "serverTLSBootstrap: true" | sudo tee -a /var/lib/kubelet/config.yaml

systemctl restart kubelet

# On the master, approve the pending kubelet serving-certificate CSRs
kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve

# Download the manifest
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.7.1/high-availability-1.21+.yaml -O metrics-server.yaml

kubectl apply -f metrics-server.yaml

kubectl get pods -n kube-system -l k8s-app=metrics-server


root@k8s-master:~# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master 100m 2% 1269Mi 33%
k8s-node-1 33m 0% 741Mi 19%
k8s-node-2 29m 0% 746Mi 19%

StorageClass_nfs

NFS-Server

kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/nfs-provisioner/nfs-server.yaml

NFS-CSI-Driver

curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/v4.11.0/deploy/install-driver.sh | bash -s v4.11.0 --

kubectl -n kube-system get pod -o wide -l app=csi-nfs-controller
kubectl -n kube-system get pod -o wide -l app=csi-nfs-node

NAME READY STATUS RESTARTS AGE IP NODE
csi-nfs-controller-56bfddd689-dh5tk 4/4 Running 0 35s 10.240.0.19 k8s-agentpool-22533604-0
csi-nfs-node-cvgbs 3/3 Running 0 35s 10.240.0.35 k8s-agentpool-22533604-1
csi-nfs-node-dr4s4 3/3 Running 0 35s 10.240.0.4 k8s-agentpool-22533604-0

Storage-Class

# kubectl apply -f storageclass.yaml
# cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.default.svc.cluster.local   # NFS server Service in the default namespace
  share: /
  # csi.storage.k8s.io/provisioner-secret is only needed for providing mountOptions in DeleteVolume
  # csi.storage.k8s.io/provisioner-secret-name: "mount-options"
  # csi.storage.k8s.io/provisioner-secret-namespace: "default"
reclaimPolicy: Delete          # can be changed; Retain is the safer choice if data must survive PVC deletion
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:
  - nfsvers=4.1

Test: create a PVC

kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/pvc-nfs-csi-dynamic.yaml
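If dynamic provisioning is working, the PVC should bind within a few seconds and a matching PV will appear:

kubectl get pvc
kubectl get pv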

MetalLB

# When kube-proxy runs in ipvs mode, strict ARP (strictARP) must be enabled, so configure kube-proxy as follows before deploying MetalLB.

kubectl get configmap kube-proxy -n kube-system -o yaml | sed -e "s/strictARP: false/strictARP: true/" | kubectl apply -f - -n kube-system

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml

root@k8s-master:~# kubectl get pods -n metallb-system
NAME READY STATUS RESTARTS AGE
controller-5f56cd6f78-xswqk 1/1 Running 0 5m10s
speaker-c78sb 1/1 Running 0 5m10s
speaker-d48wd 1/1 Running 0 5m10s
speaker-hdkfk 1/1 Running 0 5m10s


root@k8s-master:~# cat eip.yaml
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: localip-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.0.0.51-10.0.0.80     # change to a range in your own NIC's network
  autoAssign: true
  avoidBuggyIPs: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: localip-pool-l2a
  namespace: metallb-system
spec:
  ipAddressPools:
  - localip-pool
  interfaces:
  - ens32                   # NIC name


root@k8s-master:~# kubectl apply -f eip.yaml
ipaddresspool.metallb.io/localip-pool created
l2advertisement.metallb.io/localip-pool-l2a created
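The address pool and advertisement can be checked afterwards (resource names as defined in eip.yaml above):

kubectl get ipaddresspools.metallb.io,l2advertisements.metallb.io -n metallb-system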

Test

---
apiVersion: v1
kind: Pod
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  containers:
  - name: tomcat
    image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/tomcat:9.0.67-jdk8
    ports:
    - containerPort: 8080
---
# demoapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  selector:
    app: tomcat
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer

root@k8s-master:~/learning-k8s-master/OpenELB# kubectl get svc tomcat
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tomcat LoadBalancer 10.108.82.54 10.0.0.52 80:30975/TCP 27m

# Access 10.0.0.52 to test
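A quick check from any host in the same network, using the EXTERNAL-IP MetalLB assigned above:

curl -I http://10.0.0.52/
# any HTTP response from Tomcat (e.g. 200 or 404) confirms the LoadBalancer path works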

OpenELB

kubectl apply -f https://raw.githubusercontent.com/openelb/openelb/master/deploy/openelb.yaml

kubectl get pods -n openelb-system
NAME READY STATUS RESTARTS AGE
openelb-admission-create-kn4fg 0/1 Completed 0 5m
openelb-admission-patch-9jfxs 0/1 Completed 2 5m
openelb-keepalive-vip-7brjl 1/1 Running 0 4m
openelb-keepalive-vip-nfpgm 1/1 Running 0 4m
openelb-keepalive-vip-vsgkx 1/1 Running 0 4m
openelb-manager-d6df4dfc4-2q4cm 1/1 Running 0 5m

# cat eip-pool.yaml
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: eip-pool
  annotations:
    eip.openelb.kubesphere.io/is-default-eip: "true"
spec:
  address: 10.0.0.50-10.0.0.100   # must be within the node NIC's network
  protocol: layer2
  interface: ens32                # NIC name
  disable: false
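Apply the pool and confirm OpenELB has registered it:

kubectl apply -f eip-pool.yaml
kubectl get eip eip-pool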

Test

---
apiVersion: v1
kind: Pod
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  containers:
  - name: tomcat
    image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/tomcat:9.0.67-jdk8
    ports:
    - containerPort: 8080
---
# demoapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  annotations:                                   # required for OpenELB
    lb.kubesphere.io/v1alpha1: openelb
    eip.openelb.kubesphere.io/v1alpha2: eip-pool
spec:
  selector:
    app: tomcat
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer

root@k8s-master:~/learning-k8s-master/OpenELB# kubectl get svc tomcat
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tomcat LoadBalancer 10.108.82.54 10.0.0.52 80:30975/TCP 27m

Ingress Nginx

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.6/deploy/static/provider/cloud/deploy.yaml

root@k8s-master:~# kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-9nj26 0/1 Completed 0 14s
ingress-nginx-admission-patch-mmqgg 0/1 Completed 0 14s
ingress-nginx-controller-5f86b64b9d-d6xng 1/1 Running 0 14s


root@k8s-master:~# kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.103.88.73 10.0.0.53 80:30407/TCP,443:32105/TCP 29s
ingress-nginx-controller-admission ClusterIP 10.107.65.211 <none> 443/TCP 29s

root@k8s-master:~# kubectl get ingressclasses.networking.k8s.io
NAME CONTROLLER PARAMETERS AGE
nginx k8s.io/ingress-nginx <none> 59s

Test

root@k8s-master:~# kubectl create ingress nginx --rule='nginx.wang.com/*'=nginx:80 --class=nginx --dry-run=client -o yaml

# --rule='hostname/path'=svc_name:port
# --class=ingressclass_name

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: null
  name: nginx
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.wang.com
    http:
      paths:
      - backend:
          service:
            name: nginx
            port:
              number: 80
        path: /
        pathType: Prefix
status:
  loadBalancer: {}


root@k8s-master:~# kubectl create ingress nginx --rule='nginx.wang.com/*'=nginx:80 --class=nginx --dry-run=client -o yaml > ingress_nginx.yaml

root@k8s-master:~# kubectl apply -f ingress_nginx.yaml
ingress.networking.k8s.io/nginx created

root@k8s-master:~# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx nginx nginx.wang.com 10.0.0.53 80 29s

root@k8s-master:~# kubectl create ingress tomcat --rule='tomcat.wang.com/*'=tomcat:8080 --class=nginx --dry-run=client -o yaml > ingress_tomcat.yaml

root@k8s-master:~# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx nginx nginx.wang.com 10.0.0.53 80 5m30s
tomcat nginx tomcat.wang.com 10.0.0.53 80 8s

# Resolve the hostnames to the ingress controller IP, 10.0.0.53
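For a quick test without a DNS server, map the hostnames to the ingress controller's address in /etc/hosts on a client machine and request them through the controller:

echo "10.0.0.53 nginx.wang.com tomcat.wang.com" >> /etc/hosts
curl -I http://nginx.wang.com/
curl -I http://tomcat.wang.com/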

Kuboard

# Ephemeral storage
kubectl apply -f https://raw.githubusercontent.com/iKubernetes/learning-k8s/master/Kuboard/deploy.yaml

root@k8s-master:~/Kuboard# kubectl get pods -n kuboard
NAME READY STATUS RESTARTS AGE
kuboard-v3-54b4559f46-2h5j7 1/1 Running 0 2m35s

# Persistent storage
kubectl apply -f https://raw.githubusercontent.com/iKubernetes/learning-k8s/master/Kuboard/kuboard-persistent/kuboard-v3.yaml


root@k8s-master:~/Kuboard/kuboard-persistent# kubectl get pods -n kuboard
NAME READY STATUS RESTARTS AGE
kuboard-etcd-0 1/1 Running 0 12s
kuboard-etcd-1 1/1 Running 0 8s
kuboard-etcd-2 1/1 Running 0 4s
kuboard-v3-78858f687f-xn7fg 1/1 Running 0 11s


# vim kuboard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kuboard-v3
  namespace: kuboard
spec:
  ingressClassName: nginx
  rules:
  - host: kuboard.wang.com
    http:
      paths:
      - path: /
        backend:
          service:
            name: kuboard-v3
            port:
              number: 80
        pathType: Prefix

kubectl apply -f kuboard-ingress.yaml
  • Visit kuboard.wang.com
  • Username: admin
  • Password: Kuboard123