Several Ways to Install Kubernetes on Arch Linux

Thanks to the excellent pacman and the powerful AUR, installation is fairly straightforward (this post covers local single-machine setups for testing only).

(Optional) Improving Efficiency

If you use oh-my-zsh, for example, you can edit .zshrc to enable the kubectl plugin:

plugins=(
...
kubectl
...
)

This plugin shortens long, repetitive commands so you can stay focused on actual work: kubectl get pods becomes k get pods, or even just kgp. It defines aliases for a great many commands; the full table is here: kubectl
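A few of the shorthands can be sketched as plain shell aliases (just a sample; the actual plugin defines many more than these):

```shell
# A sample of the shorthands the kubectl plugin provides (sketch; the
# real plugin's alias table is much longer than this)
alias k='kubectl'
alias kgp='kubectl get pods'
alias kgno='kubectl get nodes'
alias kgs='kubectl get svc'
```

The kgno, kgp, and kgs shorthands that show up in the terminal transcripts throughout this post come from this plugin.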

Alternatively, install a tool like k9s:

sudo pacman -S k9s

Binary Installation

Out of scope here, since I haven't tried it.


Package Installation

The official wiki page Kubernetes describes the manual installation steps.

For a control-plane node, install kubernetes-control-plane, a package group that contains the required packages.

Worker nodes likewise have a group, kubernetes-node, with their packages.

I haven't tried this method either.


kind

The kind package is available in the AUR and can be installed directly:

yay -S kind

You can also build it from source, or install it with go install; see the official Quick Start for more.

Running the kind command prints something like this:

➜  ~ kind
kind creates and manages local Kubernetes clusters using Docker container 'nodes'

Usage:
  kind [command]

Available Commands:
  build       Build one of [node-image]
  completion  Output shell completion code for the specified shell (bash, zsh or fish)
  create      Creates one of [cluster]
  delete      Deletes one of [cluster]
  export      Exports one of [kubeconfig, logs]
  get         Gets one of [clusters, nodes, kubeconfig]
  help        Help about any command
  load        Loads images into nodes
  version     Prints the kind CLI version

Flags:
  -h, --help              help for kind
  -q, --quiet             silence all stderr output
  -v, --verbosity int32   info log verbosity, higher value produces more output
      --version           version for kind

Use "kind [command] --help" for more information about a command.

Create a cluster:

➜  ~ kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.32.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! 👋

That's it. No need to worry about a network plugin, and the .kube/config file is set up automatically.
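By default, kind create cluster gives you a single control-plane node. kind also accepts a config file for multi-node layouts; a minimal sketch (the file name is arbitrary), using the kind.x-k8s.io/v1alpha4 config format:

```yaml
# kind-config.yaml (sketch): one control-plane node plus two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Create it with kind create cluster --config kind-config.yaml.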

➜  ~ k get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane 6m43s v1.32.0

➜ ~ k create deployment --image nginx:alpine nginx
deployment.apps/nginx created

➜ ~ k expose deployment nginx --type NodePort --port 80
service/nginx exposed

If you can't get past the firewall and haven't configured a local mirror, the image pull may fail.

In my case, the proxy client needed its LAN interface enabled; after that, the images pulled fine.

Once the container is running, though, you'll find you can't reach it from the host at all.

Why? Because kind means Kubernetes in Docker: the whole cluster lives inside a container. If you exec into that container and try again, it works.

Likewise, images pulled inside kind generally have to be inspected from within the container, using crictl images.

➜  ~ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
kindest/node <none> 2d9b4b74084a 9 days ago 1.05GB
root@kind-control-plane:/# crictl images
IMAGE TAG IMAGE ID SIZE
docker.io/kindest/kindnetd v20241212-9f82dd49 d300845f67aeb 39MB
docker.io/kindest/local-path-helper v20241212-8ac705d0 baa0d31514ee5 3.08MB
docker.io/kindest/local-path-provisioner v20241212-8ac705d0 04b7d0b91e7e5 22.5MB
docker.io/library/nginx alpine 91ca84b4f5779 22.8MB
registry.k8s.io/coredns/coredns v1.11.3 c69fa2e9cbf5f 18.6MB
registry.k8s.io/etcd 3.5.16-0 a9e7e6b294baf 57.7MB
registry.k8s.io/kube-apiserver-amd64 v1.32.0 73afaf82c9cc3 98MB
registry.k8s.io/kube-apiserver v1.32.0 73afaf82c9cc3 98MB
registry.k8s.io/kube-controller-manager-amd64 v1.32.0 f3548c6ff8a1e 90.8MB
registry.k8s.io/kube-controller-manager v1.32.0 f3548c6ff8a1e 90.8MB
registry.k8s.io/kube-proxy-amd64 v1.32.0 aa194712e698a 95.3MB
registry.k8s.io/kube-proxy v1.32.0 aa194712e698a 95.3MB
registry.k8s.io/kube-scheduler-amd64 v1.32.0 faaacead470c4 70.6MB
registry.k8s.io/kube-scheduler v1.32.0 faaacead470c4 70.6MB
registry.k8s.io/pause 3.10 873ed75102791 320kB

So how do you reach it from outside? One simple approach is port-forward. Bind to port 1024 or higher, otherwise you'll hit a permission error:

➜  ~ k port-forward nginx-6b66fbbd46-t89f7 80:80
Unable to listen on port 80: Listeners failed to create with the following errors: [unable to create listener: Error listen tcp4 127.0.0.1:80: bind: permission denied unable to create listener: Error listen tcp6 [::1]:80: bind: permission denied]
error: unable to listen on any of the requested ports: [{80 80}]

➜ ~ k port-forward nginx-6b66fbbd46-t89f7 1024:80
Forwarding from 127.0.0.1:1024 -> 80
Forwarding from [::1]:1024 -> 80
Handling connection for 1024

➜ ~ curl 127.0.0.1:1024
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

For everyday use this works fine. I once wrote a small tool to expose ports in one step, goportforwarder, though I don't know whether it still works.
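Another option, if you know the ports up front, is kind's extraPortMappings, which maps a host port into the node container at cluster-creation time; a NodePort Service pinned to that port is then reachable from the host without any forwarding. A sketch (the port numbers here are just examples):

```yaml
# kind-config.yaml (sketch): expose NodePort 30080 on the host
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080
        hostPort: 30080
        protocol: TCP
```

Recreate the cluster with kind create cluster --config kind-config.yaml and set the Service's nodePort to 30080.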

Uninstall:

kind delete cluster

Kubeadm

The verdict up front: this route has plenty of pitfalls for beginners.

First, the traditional Docker-based variant.

Install the Docker package:

sudo pacman -S docker

You also need cri-dockerd, a shim that bridges Docker and Kubernetes. Note that it is in the AUR:

yay -S cri-dockerd-git

Install the Kubernetes packages:

sudo pacman -S kubectl kubelet kubeadm

kubectl is the client-side tool for operating Kubernetes (it is fundamentally a client/server architecture), kubelet is the core agent on each node that, roughly speaking, acts on incoming requests, and kubeadm bootstraps the cluster.

Enable the services at boot:

sudo systemctl enable --now docker

sudo systemctl enable --now cri-docker

sudo systemctl enable --now kubelet

At this point it's best to reboot the system, to head off all kinds of strange problems.

Initialize Kubernetes:

➜  ~ sudo kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock --skip-phases=addon/kube-proxy
I1224 00:46:49.514973 2153 version.go:261] remote version is much newer: v1.32.0; falling back to: stable-1.31
[init] Using Kubernetes version: v1.31.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W1224 00:46:50.556067 2153 checks.go:846] detected that the sandbox image "registry.k8s.io/pause:3.9" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [archlinux kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.31.60]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [archlinux localhost] and IPs [192.168.31.60 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [archlinux localhost] and IPs [192.168.31.60 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.222992ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 3.500830861s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node archlinux as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node archlinux as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: aop3y1.858jqah2hpknu8yg
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.31.60:6443 --token aop3y1.858jqah2hpknu8yg \
--discovery-token-ca-cert-hash sha256:05288f496fa271f180f87d88bfadfb8be13c64be2f4fa9081f464d42de572ab3

If kubelet isn't up at this point, the command fails, which is why rebooting after installing everything is a good idea.

Note the explicit cri-dockerd socket here; the default would be containerd.
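The flags above can also be written into a kubeadm config file and passed with kubeadm init --config. A sketch (the exact apiVersion depends on your kubeadm release):

```yaml
# kubeadm-init.yaml (sketch): cri-dockerd socket, skip the kube-proxy addon
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
skipPhases:
  - addon/kube-proxy
```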

The related Docker images are also visible:

➜  ~ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.k8s.io/kube-apiserver v1.31.4 bdc2eadbf366 11 days ago 94.2MB
registry.k8s.io/kube-controller-manager v1.31.4 359b9f230732 11 days ago 88.4MB
registry.k8s.io/kube-scheduler v1.31.4 3a66234066fe 11 days ago 67.4MB
registry.k8s.io/kube-proxy v1.31.4 ebf80573666f 11 days ago 91.5MB
registry.k8s.io/coredns/coredns v1.11.3 c69fa2e9cbf5 4 months ago 61.8MB
registry.k8s.io/etcd 3.5.15-0 2e96e5913fc0 4 months ago 148MB
registry.k8s.io/pause 3.10 873ed7510279 7 months ago 736kB
registry.k8s.io/pause 3.9 e6f181688397 2 years ago 744kB

Set up the kube config by running the three commands from the init output above:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Without the kube config, running kubectl directly fails with something like:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

kubectl needs to know which address to talk to.

Wrong file permissions also cause an error:

error: error loading config file "/home/linweiyuan/.kube/config": open /home/linweiyuan/.kube/config: permission denied

Once configured, kubectl commands return data normally.

The cluster is still NotReady at this point, however, because no network plugin has been installed:

➜  ~ kgno
NAME STATUS ROLES AGE VERSION
archlinux NotReady control-plane 18s v1.31.3

➜ ~ kgp -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7c65d6cfc9-rnj6q 0/1 Pending 0 3m26s
kube-system coredns-7c65d6cfc9-v74bc 0/1 Pending 0 3m26s
kube-system etcd-archlinux 1/1 Running 21 3m33s
kube-system kube-apiserver-archlinux 1/1 Running 21 3m34s
kube-system kube-controller-manager-archlinux 1/1 Running 18 3m33s
kube-system kube-scheduler-archlinux 1/1 Running 18 3m33s

There are many network plugins to choose from; here I use the one recommended in the wiki, Cilium, which requires adding --skip-phases=addon/kube-proxy when initializing the cluster.

➜  ~ sudo pacman -S cilium-cli
warning: cilium-cli-0.16.21-1 is up to date -- reinstalling
resolving dependencies...
looking for conflicting packages...

Package (1) Old Version New Version Net Change

extra/cilium-cli 0.16.21-1 0.16.21-1 0.00 MiB

Total Installed Size: 175.37 MiB
Net Upgrade Size: 0.00 MiB

:: Proceed with installation? [Y/n]
(1/1) checking keys in keyring [--------------------------------------------------------------------] 100%
(1/1) checking package integrity [--------------------------------------------------------------------] 100%
(1/1) loading package files [--------------------------------------------------------------------] 100%
(1/1) checking for file conflicts [--------------------------------------------------------------------] 100%
:: Processing package changes...
(1/1) reinstalling cilium-cli [--------------------------------------------------------------------] 100%
:: Running post-transaction hooks...
(1/2) Arming ConditionNeedsUpdate...
(2/2) Refreshing py3status arch_updates module...

➜ ~ cilium-cli status
/¯¯\
/¯¯\__/¯¯\ Cilium: 1 errors
\__/¯¯\__/ Operator: disabled
/¯¯\__/¯¯\ Envoy DaemonSet: disabled (using embedded mode)
\__/¯¯\__/ Hubble Relay: disabled
\__/ ClusterMesh: disabled

Containers: cilium
cilium-operator
Cluster Pods: 0/3 managed by Cilium
Helm chart version:
Errors: cilium cilium daemonsets.apps "cilium" not found
status check failed: [daemonsets.apps "cilium" not found, unable to retrieve ConfigMap "cilium-config": configmaps "cilium-config" not found]

➜ ~ cilium-cli install
ℹ️ Using Cilium version 1.16.4
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected kube-proxy has not been installed
ℹ️ Cilium will fully replace all functionalities of kube-proxy

➜ ~ cilium-cli status
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Envoy DaemonSet: OK
\__/¯¯\__/ Hubble Relay: disabled
\__/ ClusterMesh: disabled

DaemonSet cilium Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet cilium-envoy Desired: 1, Ready: 1/1, Available: 1/1
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Containers: cilium Running: 1
cilium-envoy Running: 1
cilium-operator Running: 1
Cluster Pods: 2/2 managed by Cilium
Helm chart version: 1.16.4
Image versions cilium quay.io/cilium/cilium:v1.16.4@sha256:d55ec38938854133e06739b1af237932b9c4dd4e75e9b7b2ca3acc72540a44bf: 1
cilium-envoy quay.io/cilium/cilium-envoy:v1.30.7-1731393961-97edc2815e2c6a174d3d12e71731d54f5d32ea16@sha256:0287b36f70cfbdf54f894160082f4f94d1ee1fb10389f3a95baa6c8e448586ed: 1
cilium-operator quay.io/cilium/operator-generic:v1.16.4@sha256:c55a7cbe19fe0b6b28903a085334edb586a3201add9db56d2122c8485f7a51c5: 1

A few new pods have come up, and the node is now Ready:

➜  ~ kgp -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cilium-envoy-qljvg 1/1 Running 0 68s
kube-system cilium-fmkdr 1/1 Running 0 68s
kube-system cilium-operator-6566687bbb-xnhf6 1/1 Running 0 68s
kube-system coredns-7c65d6cfc9-89b5h 1/1 Running 0 2m25s
kube-system coredns-7c65d6cfc9-n95pt 1/1 Running 0 2m25s
kube-system etcd-archlinux 1/1 Running 21 2m32s
kube-system kube-apiserver-archlinux 1/1 Running 21 2m32s
kube-system kube-controller-manager-archlinux 1/1 Running 18 2m32s
kube-system kube-scheduler-archlinux 1/1 Running 18 2m32s

➜ ~ kgno
NAME STATUS ROLES AGE VERSION
archlinux Ready control-plane 6m36s v1.31.3

Right now there is only a single control-plane node, and by default workloads are not scheduled onto it; they only go to worker nodes. You can lift that restriction manually by removing the taint:

➜  ~ k taint node archlinux node-role.kubernetes.io/control-plane-
node/archlinux untainted

After that, deployments schedule normally.

But!!!

This fine plugin managed to break my proxy client, possibly because of eBPF.

That left me unable to pull images for testing, so I abandoned this route.

Uninstall:

➜  ~ cilium-cli uninstall
🔥 Deleting pods in cilium-test namespace...
🔥 Deleting cilium-test namespace...
⌛ Uninstalling Cilium

➜ ~ k drain archlinux --delete-emptydir-data --force --ignore-daemonsets
node/archlinux cordoned
evicting pod kube-system/coredns-7c65d6cfc9-tjql6
evicting pod kube-system/cilium-zwg6k
evicting pod kube-system/coredns-7c65d6cfc9-j5rm2
pod/cilium-zwg6k evicted
pod/coredns-7c65d6cfc9-j5rm2 evicted
pod/coredns-7c65d6cfc9-tjql6 evicted
node/archlinux drained

➜ ~ sudo kubeadm reset --cri-socket unix:///var/run/cri-dockerd.sock
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1224 00:58:29.972845 6572 configset.go:78] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" not found
W1224 00:58:30.019313 6572 preflight.go:56] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

There is also a variant based directly on containerd; with it, Docker and its related packages are no longer needed.

Install containerd (if you installed docker, it is already pulled in as a dependency):

sudo pacman -S containerd

Enable it at boot:

sudo systemctl enable --now containerd

Initialize Kubernetes (omitting --cri-socket selects this setup by default):

sudo kubeadm init --skip-phases=addon/kube-proxy

From here on, everything is much the same.


Docker Desktop

Verdict: it has a GUI and is easy to get started with, but the package is in the AUR:

yay -S docker-desktop

Then it's just a matter of clicking through the UI.

➜  ~ kgno
NAME STATUS ROLES AGE VERSION
docker-desktop Ready control-plane 49s v1.30.5

➜ ~ kgp -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-55cb58b774-lqj84 1/1 Running 0 43s
kube-system coredns-55cb58b774-pgnz2 1/1 Running 0 43s
kube-system etcd-docker-desktop 1/1 Running 11 41s
kube-system kube-apiserver-docker-desktop 1/1 Running 11 41s
kube-system kube-controller-manager-docker-desktop 1/1 Running 11 44s
kube-system kube-proxy-d5t8n 1/1 Running 0 44s
kube-system kube-scheduler-docker-desktop 1/1 Running 11 40s
kube-system storage-provisioner 1/1 Running 0 41s
kube-system vpnkit-controller 1/1 Running 0 41s

After that it works as usual:

➜  ~ k create deployment --image nginx:alpine nginx
deployment.apps/nginx created

➜ ~ kgp
NAME READY STATUS RESTARTS AGE
nginx-6f564d4fd9-5b9vh 1/1 Running 0 3s

➜ ~ kgs
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2m6s

➜ ~ k expose deployment nginx --type NodePort --port 80
service/nginx exposed

➜ ~ kgs
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2m22s
nginx NodePort 10.98.146.118 <none> 80:31343/TCP 7s

➜ ~ curl 127.0.0.1:31343
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Extremely beginner-friendly and highly recommended; no port mapping back and forth.
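For reference, the create deployment / expose pair used above corresponds roughly to applying a manifest like this (a sketch; the label names are my own choice):

```yaml
# Roughly equivalent to `k create deployment` + `k expose` above (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

Apply it with kubectl apply -f and the result is the same Deployment plus NodePort Service.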

Uninstall:

sudo pacman -Rs docker-desktop

Minikube

Install:

sudo pacman -S minikube

This one is also quite simple.

Cluster creation supports many drivers, for example:

  • virtualbox
  • kvm2
  • qemu2
  • qemu
  • vmware
  • none
  • docker
  • podman
  • ssh

Here I choose docker:

➜  ~ minikube -d docker start
😄 minikube v1.34.0 on Arch
✨ Using the docker driver based on user configuration
📌 Using Docker driver with root privileges
👍 Starting "minikube" primary control-plane node in "minikube" cluster
🚜 Pulling base image v0.0.45 ...
🔥 Creating docker container (CPUs=2, Memory=16000MB) ...
🐳 Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

➜ ~ kgno
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane 2m50s v1.31.0

➜ ~ kgp -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6f6b679f8f-sdnh6 1/1 Running 0 2m44s
kube-system etcd-minikube 1/1 Running 0 2m50s
kube-system kube-apiserver-minikube 1/1 Running 0 2m50s
kube-system kube-controller-manager-minikube 1/1 Running 0 2m50s
kube-system kube-proxy-n5pjz 1/1 Running 0 2m45s
kube-system kube-scheduler-minikube 1/1 Running 0 2m50s
kube-system storage-provisioner 1/1 Running 1 (2m45s ago) 2m49s

And that's it.

It also ships with a pile of addons:

➜  ~ minikube addons list
|-----------------------------|----------|--------------|--------------------------------|
| ADDON NAME | PROFILE | STATUS | MAINTAINER |
|-----------------------------|----------|--------------|--------------------------------|
| ambassador | minikube | disabled | 3rd party (Ambassador) |
| auto-pause | minikube | disabled | minikube |
| cloud-spanner | minikube | disabled | Google |
| csi-hostpath-driver | minikube | disabled | Kubernetes |
| dashboard | minikube | disabled | Kubernetes |
| default-storageclass | minikube | enabled ✅ | Kubernetes |
| efk | minikube | disabled | 3rd party (Elastic) |
| freshpod | minikube | disabled | Google |
| gcp-auth | minikube | disabled | Google |
| gvisor | minikube | disabled | minikube |
| headlamp | minikube | disabled | 3rd party (kinvolk.io) |
| helm-tiller | minikube | disabled | 3rd party (Helm) |
| inaccel | minikube | disabled | 3rd party (InAccel |
| | | | [info@inaccel.com]) |
| ingress | minikube | disabled | Kubernetes |
| ingress-dns | minikube | disabled | minikube |
| inspektor-gadget | minikube | disabled | 3rd party |
| | | | (inspektor-gadget.io) |
| istio | minikube | disabled | 3rd party (Istio) |
| istio-provisioner | minikube | disabled | 3rd party (Istio) |
| kong | minikube | disabled | 3rd party (Kong HQ) |
| kubeflow | minikube | disabled | 3rd party |
| kubevirt | minikube | disabled | 3rd party (KubeVirt) |
| logviewer | minikube | disabled | 3rd party (unknown) |
| metallb | minikube | disabled | 3rd party (MetalLB) |
| metrics-server | minikube | disabled | Kubernetes |
| nvidia-device-plugin | minikube | disabled | 3rd party (NVIDIA) |
| nvidia-driver-installer | minikube | disabled | 3rd party (NVIDIA) |
| nvidia-gpu-device-plugin | minikube | disabled | 3rd party (NVIDIA) |
| olm | minikube | disabled | 3rd party (Operator Framework) |
| pod-security-policy | minikube | disabled | 3rd party (unknown) |
| portainer | minikube | disabled | 3rd party (Portainer.io) |
| registry | minikube | disabled | minikube |
| registry-aliases | minikube | disabled | 3rd party (unknown) |
| registry-creds | minikube | disabled | 3rd party (UPMC Enterprises) |
| storage-provisioner | minikube | enabled ✅ | minikube |
| storage-provisioner-gluster | minikube | disabled | 3rd party (Gluster) |
| storage-provisioner-rancher | minikube | disabled | 3rd party (Rancher) |
| volcano | minikube | disabled | third-party (volcano) |
| volumesnapshots | minikube | disabled | Kubernetes |
| yakd | minikube | disabled | 3rd party (marcnuri.com) |
|-----------------------------|----------|--------------|--------------------------------|

For example, this downloads the dashboard image and opens it automatically:

➜  ~ minikube addons enable dashboard
💡 dashboard is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
▪ Using image docker.io/kubernetesui/dashboard:v2.7.0
▪ Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
💡 Some dashboard features require the metrics-server addon. To enable all features please run:

minikube addons enable metrics-server

🌟 The 'dashboard' addon is enabled

➜ ~ kgp -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6f6b679f8f-sdnh6 1/1 Running 0 7m47s
kube-system etcd-minikube 1/1 Running 0 7m53s
kube-system kube-apiserver-minikube 1/1 Running 0 7m53s
kube-system kube-controller-manager-minikube 1/1 Running 0 7m53s
kube-system kube-proxy-n5pjz 1/1 Running 0 7m48s
kube-system kube-scheduler-minikube 1/1 Running 0 7m53s
kube-system storage-provisioner 1/1 Running 1 (7m48s ago) 7m52s
kubernetes-dashboard dashboard-metrics-scraper-c5db448b4-mhbxr 1/1 Running 0 3m49s
kubernetes-dashboard kubernetes-dashboard-695b96c756-d86w9 1/1 Running 0 3m49s

➜ ~ minikube dashboard
🤔 Verifying dashboard health ...
🚀 Launching proxy ...
🤔 Verifying proxy health ...
🎉 Opening http://127.0.0.1:44639/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
Opening in existing browser session.

Here too, services aren't reachable from the host directly; you either go inside minikube, or use the URLs shown by minikube service list:

➜  ~ k create deployment --image nginx:alpine nginx
deployment.apps/nginx created

➜ ~ k expose deployment nginx --type NodePort --port 80
service/nginx exposed

➜ ~ kgs
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8m33s
nginx NodePort 10.103.106.5 <none> 80:30978/TCP 2s

➜ ~ curl 127.0.0.1:30978
curl: (7) Failed to connect to 127.0.0.1 port 30978 after 0 ms: Could not connect to server

➜ ~ minikube ssh
docker@minikube:~$ curl 127.0.0.1:30978
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
docker@minikube:~$

➜ ~ minikube service list
|----------------------|---------------------------|--------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|----------------------|---------------------------|--------------|---------------------------|
| default | kubernetes | No node port | |
| default | nginx | 80 | http://192.168.49.2:30978 |
| kube-system | kube-dns | No node port | |
| kubernetes-dashboard | dashboard-metrics-scraper | No node port | |
| kubernetes-dashboard | kubernetes-dashboard | No node port | |
|----------------------|---------------------------|--------------|---------------------------|

➜ ~ curl http://192.168.49.2:30978
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Beyond that, there are plenty of handy commands that work out of the box; I'd recommend minikube as well.

Uninstall:

➜  ~ minikube delete
🔥 Deleting "minikube" in docker ...
🔥 Deleting container "minikube" ...
🔥 Removing /home/linweiyuan/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.