```shell
➜ ~ kind
kind creates and manages local Kubernetes clusters using Docker container 'nodes'

Usage:
  kind [command]

Available Commands:
  build       Build one of [node-image]
  completion  Output shell completion code for the specified shell (bash, zsh or fish)
  create      Creates one of [cluster]
  delete      Deletes one of [cluster]
  export      Exports one of [kubeconfig, logs]
  get         Gets one of [clusters, nodes, kubeconfig]
  help        Help about any command
  load        Loads images into nodes
  version     Prints the kind CLI version

Flags:
  -h, --help              help for kind
  -q, --quiet             silence all stderr output
  -v, --verbosity int32   info log verbosity, higher value produces more output
      --version           version for kind

Use "kind [command] --help" for more information about a command.
```
Create a cluster
```shell
➜ ~ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.32.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! 👋
```
And that's it. There's no need to fiddle with network plugins or anything else, and the `.kube/config` file is set up automatically.
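By default kind creates a single-node cluster, but extra worker nodes and host-port mappings can be declared in a config file. A sketch (field names per kind's `v1alpha4` config API; the port numbers here are made up for illustration):

```yaml
# kind-config.yaml — sketch of a multi-node cluster with a host-port mapping
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080   # a NodePort inside the cluster
        hostPort: 8080         # reachable as 127.0.0.1:8080 on the host
  - role: worker
  - role: worker
```

Then pass it at creation time: `kind create cluster --config kind-config.yaml`.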
```shell
➜ ~ k get nodes
NAME                 STATUS   ROLES           AGE     VERSION
kind-control-plane   Ready    control-plane   6m43s   v1.32.0

➜ ~ k create deployment --image nginx:alpine nginx
deployment.apps/nginx created

➜ ~ k port-forward nginx-6b66fbbd46-t89f7 80:80
Unable to listen on port 80: Listeners failed to create with the following errors: [unable to create listener: Error listen tcp4 127.0.0.1:80: bind: permission denied unable to create listener: Error listen tcp6 [::1]:80: bind: permission denied]
error: unable to listen on any of the requested ports: [{80 80}]

➜ ~ k port-forward nginx-6b66fbbd46-t89f7 1024:80
Forwarding from 127.0.0.1:1024 -> 80
Forwarding from [::1]:1024 -> 80
Handling connection for 1024
```
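The first attempt fails because, on Linux, an unprivileged process cannot bind ports below 1024, and `kubectl port-forward` listens on the local side as your user. A tiny hypothetical helper that bumps privileged ports into the high range (the `+8000` offset is just a convention I made up here):

```shell
# Hypothetical helper: kubectl port-forward needs a local port your user can
# bind, so map anything below 1024 up into the high range (80 -> 8080).
choose_local_port() {
  local want=$1
  if [ "$want" -lt 1024 ]; then
    echo $((want + 8000))
  else
    echo "$want"
  fi
}

choose_local_port 80    # prints 8080
choose_local_port 3000  # prints 3000
```

Usage would look like `k port-forward nginx-6b66fbbd46-t89f7 "$(choose_local_port 80)":80`.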
```shell
➜ ~ curl 127.0.0.1:1024
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
```shell
➜ ~ sudo kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock --skip-phases=addon/kube-proxy
I1224 00:46:49.514973    2153 version.go:261] remote version is much newer: v1.32.0; falling back to: stable-1.31
[init] Using Kubernetes version: v1.31.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W1224 00:46:50.556067    2153 checks.go:846] detected that the sandbox image "registry.k8s.io/pause:3.9" of the container runtime is inconsistent with that used by kubeadm. It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [archlinux kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.31.60]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [archlinux localhost] and IPs [192.168.31.60 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [archlinux localhost] and IPs [192.168.31.60 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.222992ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 3.500830861s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node archlinux as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node archlinux as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: aop3y1.858jqah2hpknu8yg
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
```
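The same flags can also live in a kubeadm config file instead of on the command line. A hedged sketch equivalent to the flags used above (field names per kubeadm's `v1beta4` config API):

```yaml
# kubeadm-init.yaml — sketch equivalent of
# `kubeadm init --cri-socket ... --skip-phases=addon/kube-proxy`
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
skipPhases:
  - addon/kube-proxy
```

Then run `sudo kubeadm init --config kubeadm-init.yaml`. Skipping the kube-proxy addon only makes sense if, as here, the CNI you plan to install (Cilium) replaces kube-proxy.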
```shell
➜ ~ docker images
REPOSITORY                                TAG        IMAGE ID       CREATED        SIZE
registry.k8s.io/kube-apiserver            v1.31.4    bdc2eadbf366   11 days ago    94.2MB
registry.k8s.io/kube-controller-manager   v1.31.4    359b9f230732   11 days ago    88.4MB
registry.k8s.io/kube-scheduler            v1.31.4    3a66234066fe   11 days ago    67.4MB
registry.k8s.io/kube-proxy                v1.31.4    ebf80573666f   11 days ago    91.5MB
registry.k8s.io/coredns/coredns           v1.11.3    c69fa2e9cbf5   4 months ago   61.8MB
registry.k8s.io/etcd                      3.5.15-0   2e96e5913fc0   4 months ago   148MB
registry.k8s.io/pause                     3.10       873ed7510279   7 months ago   736kB
registry.k8s.io/pause                     3.9        e6f181688397   2 years ago    744kB
```
```shell
➜ ~ sudo pacman -S cilium-cli
warning: cilium-cli-0.16.21-1 is up to date -- reinstalling
resolving dependencies...
looking for conflicting packages...

Package (1)       Old Version  New Version  Net Change

extra/cilium-cli  0.16.21-1    0.16.21-1      0.00 MiB

Total Installed Size:  175.37 MiB
Net Upgrade Size:        0.00 MiB
```
```shell
Containers:            cilium
                       cilium-operator
Cluster Pods:          0/3 managed by Cilium
Helm chart version:
Errors:                cilium    cilium    daemonsets.apps "cilium" not found
status check failed: [daemonsets.apps "cilium" not found, unable to retrieve ConfigMap "cilium-config": configmaps "cilium-config" not found]
```
```shell
➜ ~ cilium-cli install
ℹ️  Using Cilium version 1.16.4
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected kube-proxy has not been installed
ℹ️  Cilium will fully replace all functionalities of kube-proxy
```
```shell
➜ ~ cilium-cli status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled
```
```shell
➜ ~ k drain archlinux --delete-emptydir-data --force --ignore-daemonsets
node/archlinux cordoned
evicting pod kube-system/coredns-7c65d6cfc9-tjql6
evicting pod kube-system/cilium-zwg6k
evicting pod kube-system/coredns-7c65d6cfc9-j5rm2
pod/cilium-zwg6k evicted
pod/coredns-7c65d6cfc9-j5rm2 evicted
pod/coredns-7c65d6cfc9-tjql6 evicted
node/archlinux drained
```
```shell
➜ ~ sudo kubeadm reset --cri-socket unix:///var/run/cri-dockerd.sock
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1224 00:58:29.972845    6572 configset.go:78] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" not found
W1224 00:58:30.019313    6572 preflight.go:56] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
```
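As the output warns, `kubeadm reset` leaves some cleanup to the operator. A dry-run sketch that just prints the remaining steps (remove the `echo`s and run as root only if you really mean it; the exact iptables flushes you need depend on your setup):

```shell
# Dry-run sketch of the manual cleanup kubeadm reset leaves behind.
post_reset_cleanup() {
  echo "rm -rf /etc/cni/net.d"                              # leftover CNI config
  echo "iptables -F && iptables -t nat -F && iptables -X"   # leftover iptables rules
  echo "rm -f \$HOME/.kube/config"                          # stale kubeconfig
}

post_reset_cleanup
```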
```shell
➜ ~ kgs
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        2m22s
nginx        NodePort    10.98.146.118   <none>        80:31343/TCP   7s
```
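In the PORT(S) column, `80:31343/TCP` means service port 80 is exposed as NodePort 31343 on every node. A small hypothetical helper to pull the NodePort out of that column with plain parameter expansion:

```shell
# Hypothetical helper: extract the NodePort from a "port:nodePort/proto"
# PORT(S) entry, e.g. "80:31343/TCP" -> 31343.
nodeport() {
  local ports=$1
  ports=${ports#*:}     # drop the service port and colon -> "31343/TCP"
  echo "${ports%%/*}"   # drop the protocol suffix -> "31343"
}

nodeport "80:31343/TCP"   # prints 31343
```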
```shell
➜ ~ curl 127.0.0.1:31343
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
```shell
➜ ~ minikube addons enable dashboard
💡 dashboard is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image docker.io/kubernetesui/dashboard:v2.7.0
    ▪ Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
💡 Some dashboard features require the metrics-server addon. To enable all features please run:
```
```shell
➜ ~ kgs
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        8m33s
nginx        NodePort    10.103.106.5   <none>        80:30978/TCP   2s
```
```shell
➜ ~ curl 127.0.0.1:30978
curl: (7) Failed to connect to 127.0.0.1 port 30978 after 0 ms: Could not connect to server
```
```shell
➜ ~ minikube ssh
docker@minikube:~$ curl 127.0.0.1:30978
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
docker@minikube:~$
```
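With minikube's docker driver the "node" is itself a container, so the NodePort listens on that container's IP rather than the host's loopback. That is why the curl from the host fails while the same curl inside `minikube ssh` works; `minikube service list` (or `minikube ip`) gives the address that is actually reachable from the host. Splitting such a URL in plain shell, assuming the `http://host:port` shape minikube prints:

```shell
# Split a NodePort URL of the form http://host:port into host and port
# using parameter expansion (no external tools needed).
url="http://192.168.49.2:30978"
hostport=${url#http://}    # -> 192.168.49.2:30978
host=${hostport%%:*}       # -> 192.168.49.2
port=${hostport##*:}       # -> 30978

echo "$host $port"   # prints: 192.168.49.2 30978
```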
```shell
➜ ~ minikube service list
|----------------------|---------------------------|--------------|---------------------------|
|      NAMESPACE       |           NAME            | TARGET PORT  |            URL            |
|----------------------|---------------------------|--------------|---------------------------|
| default              | kubernetes                | No node port |                           |
| default              | nginx                     |           80 | http://192.168.49.2:30978 |
| kube-system          | kube-dns                  | No node port |                           |
| kubernetes-dashboard | dashboard-metrics-scraper | No node port |                           |
| kubernetes-dashboard | kubernetes-dashboard      | No node port |                           |
|----------------------|---------------------------|--------------|---------------------------|
```
```shell
➜ ~ curl http://192.168.49.2:30978
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
On top of that, minikube ships with a lot of handy, out-of-the-box commands; personally it's the option I'd recommend.
Uninstall
```shell
➜ ~ minikube delete
🔥 Deleting "minikube" in docker ...
🔥 Deleting container "minikube" ...
🔥 Removing /home/linweiyuan/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
```