The name Kubernetes comes from Greek, meaning "helmsman" or "pilot". K8s is an abbreviation that replaces the 8 letters "ubernete" with "8": only the first and last letters (k and s) are kept, and the 8 letters in between are collapsed into "8".

kubectl (tools whose names end in ctl are generally command-line tools)

  • The Kubernetes command-line tool, kubectl, lets you run commands against a Kubernetes cluster. With kubectl you can deploy applications, inspect and manage cluster resources, view logs, and more.
  • kubectl can be installed on a variety of Linux platforms, macOS, and Windows.

Installing kubectl

Before you install

  • You must use a kubectl version that is within one minor version of your cluster. For example, a v1.22 client can communicate with v1.21, v1.22, and v1.23 control planes. Using the latest version of kubectl helps avoid unforeseen issues (it is backward compatible within that skew, so prefer installing a newer version!)
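The skew rule above can be sketched as a small shell check. The version strings here are stand-ins; in practice they would come from `kubectl version --client` and the control plane:

```shell
# Hypothetical client/server version strings (in practice: from `kubectl version`).
client="v1.22.2"
server="v1.21.5"
# Extract the minor version (the middle number, e.g. 22 from v1.22.2).
cm=$(echo "$client" | cut -d. -f2)
sm=$(echo "$server" | cut -d. -f2)
skew=$((cm - sm))
skew=${skew#-}   # absolute value
if [ "$skew" -le 1 ]; then
  echo "supported skew: $skew"
else
  echo "unsupported skew: $skew"
fi
```

With the values above this prints "supported skew: 1".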

There are several ways to install kubectl on Linux:

  • Install the kubectl binary with curl on Linux (the method we use here)
  • Install using native package management
  • Install using other package management

The official docs download it with curl. We adapt that slightly: download it locally first, then upload it to the server.

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

Another trick learned: who knew curl could be used like this.

  • Validate the binary (skipped here)
  • Upload it to the server, then run chmod +x kubectl in the directory containing it to make it executable.
  • Run mv kubectl /usr/local/bin/ to move it onto a directory already on $PATH, so no extra PATH configuration is needed.
  • Verify with kubectl version --client

Output:

Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}

Installing kind (a local Kubernetes environment, written in Go)

Deploying Kubernetes v1.22.3 on RedHat 7.3 with kubeadm (minimum 2 CPUs and 2 GB RAM)

System preparation

  • Check the OS version
[root@master etc]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)
[root@master etc]# uname -a
Linux master.paas.com 3.10.0-514.el7.x86_64 #1 SMP Wed Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
  • Configure the network
[root@master ~]# cd /etc/sysconfig/network-scripts/
[root@master network-scripts]# vi ifcfg-ens33

TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=74fae38c-13f1-43fb-9d18-6db018408d12
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.107.123
NETMASK=255.255.255.0
GATEWAY=192.168.107.2
DNS1=223.5.5.5

Reload the network configuration

[root@localhost ~]# systemctl restart network
  • Add the Aliyun yum repo
[root@localhost ~]# rm -rfv /etc/yum.repos.d/*
[root@localhost ~]# sudo curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
Note: edit the Centos-7.repo file and replace every $releasever with 7
vi /etc/yum.repos.d/CentOS-Base.repo
:%s/$releasever/7/g
[root@localhost ~]# sudo yum clean all && sudo yum makecache

  • Configure the hostname

[root@localhost ~]# vi /etc/hosts
[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.107.124 master01.paas.com master01
  • Verify the MAC address and product_uuid
[root@test-bbank-das net]# cat  /sys/class/net/ens192/address
00:50:56:a4:28:84
[root@test-bbank-das net]# cat /sys/class/dmi/id/product_uuid
4224FD07-2098-2C8A-49C6-2896A9B3B75C
[root@test-bbank-das net]# 

Make sure the MAC address and product_uuid are unique on every node.

  • Turn off swap and comment out the swap partition
[root@localhost ~]# swapoff -a
[root@localhost ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sun Nov 21 05:23:35 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root   /                       xfs     defaults        0 0
UUID=45c0ba12-51aa-4627-a927-e62ed42d5a90 /boot                   xfs     defaults        0 0
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
[root@localhost ~]# systemctl daemon-reload
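The manual fstab edit above can also be done non-interactively with sed. The demo below works on a throwaway copy (a hypothetical /tmp/fstab.demo) so the real /etc/fstab is untouched:

```shell
# Build a demo fstab line, then comment out the swap entry with sed,
# mirroring the manual edit shown above (done on a copy for safety).
printf '/dev/mapper/rhel-swap   swap                    swap    defaults        0 0\n' > /tmp/fstab.demo
# Prepend '#' to any non-comment line mentioning swap.
sed -ri 's/^[^#].*swap.*/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

Against the real file that would be `sed -ri 's/^[^#].*swap.*/#&/' /etc/fstab`.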
  • Configure kernel parameters so that bridged IPv4/IPv6 traffic is passed to iptables chains
[root@localhost ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@localhost ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
[root@localhost ~]# 
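One caveat not shown above: on RHEL 7 the net.bridge.* sysctl keys only exist while the br_netfilter kernel module is loaded. If `sysctl --system` complains about unknown keys, a sketch of the fix (assuming the modules-load.d mechanism is available) is:

```shell
# Load the bridge netfilter module so the net.bridge.* sysctls exist.
modprobe br_netfilter
# Persist the module across reboots.
cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF
# Re-apply all sysctl settings.
sysctl --system
```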

Install common packages

[root@localhost ~]# yum install vim bash-completion net-tools gcc -y

Install docker-ce from the Aliyun repo

[root@master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master01 ~]# yum -y install docker-ce
  • Add the Aliyun Docker registry mirror (accelerator)
[root@master01 ~]# mkdir -p /etc/docker
[root@master01 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
  • The cgroup driver is changed to systemd to suppress this warning:
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
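After restarting docker, the change can be confirmed (this needs the docker daemon running, so it is shown for reference only):

```shell
# Print only the cgroup driver docker is using; should now say "systemd".
docker info --format '{{.CgroupDriver}}'
```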

Install kubectl, kubelet, and kubeadm

  • Add the Aliyun Kubernetes repo
[root@master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  • []: the text in square brackets is the repository id; it must be unique and identifies the repo

  • name: the repository name, free-form

  • baseurl: the repository URL

  • enabled: whether the repo is enabled; 1 means enabled (the default)

  • gpgcheck: whether to verify the signatures of packages downloaded from this repo; 1 means verify

  • repo_gpgcheck: whether to verify the repo metadata (the package list); 1 means verify

  • gpgkey=URL: location of the public key file used for signature checking; required when gpgcheck is 1, unnecessary when gpgcheck is 0

  • Install

[root@master01 ~]# yum list kubelet --showduplicates | sort -r
[root@master01 ~]# yum install -y kubelet-1.18.20 kubeadm-1.18.20 kubectl-1.18.20
[root@master01 ~]# systemctl enable kubelet
  • Disable the firewall and SELinux
[root@localhost ~]# systemctl stop firewalld
[root@localhost selinux]# vi /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

[root@master01 ~]# systemctl disable firewalld
[root@master01 ~]# systemctl start docker 
[root@master01 ~]# systemctl start kubelet
[root@master01 etc]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master01 etc]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service
  • Set cgroup-driver=systemd
[root@master01 kubelet.service.d]# cd /lib/systemd/system/kubelet.service.d
[root@master01 kubelet.service.d]# vi 10-kubeadm.conf

[root@master01 kubelet.service.d]# cat 10-kubeadm.conf 
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=cgroupfs"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS


[root@master01 kubelet.service.d]# systemctl daemon-reload
[root@master01 kubelet.service.d]# systemctl restart kubelet
[root@master01 kubelet.service.d]# systemctl status kubelet
  • Initialize the k8s cluster
[root@localhost ~]# kubeadm init --kubernetes-version=1.18.20  \
 --apiserver-advertise-address=192.168.107.124   \
 --image-repository registry.aliyuncs.com/google_containers  \
 --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

The pod network CIDR is 10.122.0.0/16, and the apiserver advertise address is the master host's own IP.

This step matters: by default kubeadm pulls its images from the official k8s.gcr.io registry, which is unreachable from mainland China, so --image-repository is used to point it at the Aliyun mirror.
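To confirm the mirror works before initializing, the control-plane images can be pre-pulled with the same flags used in the init command above (this needs network access to the mirror, so it is shown for reference):

```shell
# Pre-pull all control-plane images from the Aliyun mirror before `kubeadm init`.
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.20
```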

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.107.124:6443 --token iydl4u.tr81ql5aaxf83x02 \
        --discovery-token-ca-cert-hash sha256:add8472eb3c8203795d3bb4a9c3350645a617c65d9b35ec39994e1eb788085a0 
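The token in the join command above expires after 24 hours by default; if a node joins later, a fresh join command can be printed on the master at any time (for reference, requires the cluster):

```shell
# Regenerate a bootstrap token and print the full `kubeadm join` command.
kubeadm token create --print-join-command
```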

Save the last part of the output; it is needed when other nodes join the Kubernetes cluster.
Create the kubectl config as prompted:

[root@master01 ~]#  mkdir -p $HOME/.kube
[root@master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run the following command to enable kubectl auto-completion:

[root@master01 ~]# source <(kubectl completion bash)

Check the nodes and pods

[root@master01 kubelet.service.d]# kubectl get node
NAME                STATUS     ROLES                  AGE     VERSION
master01.paas.com   NotReady   control-plane,master   3m57s   v1.22.4
[root@master01 kubelet.service.d]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS        AGE
kube-system   coredns-7f6cbbb7b8-mjwls                    0/1     Pending   0               2m59s
kube-system   coredns-7f6cbbb7b8-xjw9k                    0/1     Pending   0               2m59s
kube-system   etcd-master01.paas.com                      1/1     Running   1               4m20s
kube-system   kube-apiserver-master01.paas.com            1/1     Running   1               4m20s
kube-system   kube-controller-manager-master01.paas.com   1/1     Running   2 (3m44s ago)   4m20s
kube-system   kube-proxy-9nqlw                            1/1     Running   0               3m59s
kube-system   kube-scheduler-master01.paas.com            1/1     Running   2 (3m47s ago)   4m20s

The node is NotReady because the coredns pods have not started yet: the network pods are still missing.

Install the Calico network add-on

[root@master01 ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Check the pods and nodes

[root@master01 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-555fc8cc5c-k8rbk    1/1     Running   0          36s
kube-system   calico-node-5km27                           1/1     Running   0          36s
kube-system   coredns-7ff77c879f-fsj9l                    1/1     Running   0          5m22s
kube-system   coredns-7ff77c879f-q5ll2                    1/1     Running   0          5m22s
kube-system   etcd-master01.paas.com                      1/1     Running   0          5m32s
kube-system   kube-apiserver-master01.paas.com            1/1     Running   0          5m32s
kube-system   kube-controller-manager-master01.paas.com   1/1     Running   0          5m32s
kube-system   kube-proxy-th472                            1/1     Running   0          5m22s
kube-system   kube-scheduler-master01.paas.com            1/1     Running   0          5m32s
[root@master01 ~]# kubectl get node
NAME                STATUS   ROLES    AGE     VERSION
master01.paas.com   Ready    master   5m47s   v1.18.0
[root@master01 ~]#

The cluster is now healthy.

Install kubernetes-dashboard

  • The official dashboard manifest does not expose the service via NodePort. Download the yaml file locally and add a nodePort to the Service.
[root@master01 ~]# wget  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
[root@master01 ~]# vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
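One assumption behind nodePort: 30000: the value must fall inside the apiserver's NodePort allocation range, which defaults to 30000-32767 (configurable via --service-node-port-range). A quick sanity check:

```shell
# Verify the chosen nodePort sits inside the default service-node-port-range.
port=30000
in_range=$([ "$port" -ge 30000 ] && [ "$port" -le 32767 ] && echo yes || echo no)
echo "nodePort $port in default range: $in_range"
```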

[root@master01 ~]# kubectl create -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
For reference, the unmodified recommended.yaml:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Check the pods and services

[root@master01 ~]# kubectl get pod --all-namespaces 
NAMESPACE              NAME                                         READY   STATUS    RESTARTS      AGE
kube-system            calico-kube-controllers-5d995d45d6-6fc2j     1/1     Running   0             17m
kube-system            calico-node-9xxpd                            1/1     Running   0             17m
kube-system            coredns-7f6cbbb7b8-mjwls                     1/1     Running   0             3h49m
kube-system            coredns-7f6cbbb7b8-xjw9k                     1/1     Running   0             3h49m
kube-system            etcd-master01.paas.com                       1/1     Running   1             3h50m
kube-system            kube-apiserver-master01.paas.com             1/1     Running   1             3h50m
kube-system            kube-controller-manager-master01.paas.com    1/1     Running   3 (14m ago)   3h50m
kube-system            kube-proxy-9nqlw                             1/1     Running   0             3h50m
kube-system            kube-scheduler-master01.paas.com             1/1     Running   3 (15m ago)   3h50m
kubernetes-dashboard   dashboard-metrics-scraper-856586f554-pgl8q   1/1     Running   0             6m29s
kubernetes-dashboard   kubernetes-dashboard-67484c44f6-xx5ll        1/1     Running   0             6m29s
[root@master01 ~]#  kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.10.63.175   <none>        8000/TCP        6m47s
kubernetes-dashboard        NodePort    10.10.87.83    <none>        443:30000/TCP   6m48s

Open https://192.168.107.124:30000/#/login in a browser.

  • Check the secrets in the kubernetes-dashboard namespace
[root@master01 ~]# kubectl get secret -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
default-token-kvsmw                kubernetes.io/service-account-token   3      11m
kubernetes-dashboard-certs         Opaque                                0      11m
kubernetes-dashboard-csrf          Opaque                                1      11m
kubernetes-dashboard-key-holder    Opaque                                2      11m
kubernetes-dashboard-token-hwzcb   kubernetes.io/service-account-token   3      11m
  • Find the secret that carries the token, kubernetes-dashboard-token-hwzcb
[root@master01 ~]# kubectl describe secret kubernetes-dashboard-token-hwzcb -n kubernetes-dashboard
Name:         kubernetes-dashboard-token-hwzcb
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 9dc6f90f-e06e-461d-93dc-c72a0d3008e4

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1099 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImR0NTBJakIxVkN6MWRPb1FpaWhZVEJPQ1NTWkdXanNjS0xvMDQyVXkzNFkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1od3pjYiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjlkYzZmOTBmLWUwNmUtNDYxZC05M2RjLWM3MmEwZDMwMDhlNCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.V1fAqb7RP4En7TE5f-tIYkFYYyaOqus5ja_awHA00zRm7Zjix2EfUkQB7n_vAXeSC1nHGkBPhXCl4sBo4zPtcae31yLBQRjmUBM8PV_ur90qHn2QhNc4198tefMrBc3Z4RaY97wag51C0fLNlkz9FMGy0b-tivcwjyvBcUhRo7mVdUXhdxrd3b8Tj83jL_d7gDa3QjzlN33E9W-S2LcGfSskQe7-z2Ck2HH2XjUkuDRT-GWTa2OcvYx_z0vn6fsajQfVYPnfQUJuNK7HkE0rYmk4XwUG0-uu9aCSVsFsInqUcNzByNJfcTqTt0grbu5rxOVVrP4WhNq6tEsf6jqKCg
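The token field printed by kubectl describe is already decoded. If you instead fetch the secret with -o jsonpath, the value comes back base64-encoded and needs one extra decode step, shown here with a stand-in value (the real cluster command is in the comment):

```shell
# Real command (requires the cluster):
#   kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-token-hwzcb \
#     -o jsonpath='{.data.token}' | base64 -d
# The same decode step, demonstrated with a stand-in value:
decoded=$(echo 'aGVsbG8=' | base64 -d)
echo "$decoded"
```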
  • Click Sign in to log in. By default only the contents of the default namespace are visible.

  • Create an admin binding so the token can view every namespace

[root@master01 ~]# kubectl create clusterrolebinding dashboard-cluster-amdin --clusterrole=admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-amdin created
  • Check the secrets in the kubernetes-dashboard namespace again
[root@master01 ~]#  kubectl get secret -n kubernetes-dashboard                  
NAME                               TYPE                                  DATA   AGE
default-token-kvsmw                kubernetes.io/service-account-token   3      19m
kubernetes-dashboard-certs         Opaque                                0      19m
kubernetes-dashboard-csrf          Opaque                                1      19m
kubernetes-dashboard-key-holder    Opaque                                2      19m
kubernetes-dashboard-token-hwzcb   kubernetes.io/service-account-token   3      19m
[root@master01 ~]#  kubectl describe secret kubernetes-dashboard-token-hwzcb -n kubernetes-dashboard
Name:         kubernetes-dashboard-token-hwzcb
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 9dc6f90f-e06e-461d-93dc-c72a0d3008e4

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1099 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImR0NTBJakIxVkN6MWRPb1FpaWhZVEJPQ1NTWkdXanNjS0xvMDQyVXkzNFkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1od3pjYiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjlkYzZmOTBmLWUwNmUtNDYxZC05M2RjLWM3MmEwZDMwMDhlNCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.V1fAqb7RP4En7TE5f-tIYkFYYyaOqus5ja_awHA00zRm7Zjix2EfUkQB7n_vAXeSC1nHGkBPhXCl4sBo4zPtcae31yLBQRjmUBM8PV_ur90qHn2QhNc4198tefMrBc3Z4RaY97wag51C0fLNlkz9FMGy0b-tivcwjyvBcUhRo7mVdUXhdxrd3b8Tj83jL_d7gDa3QjzlN33E9W-S2LcGfSskQe7-z2Ck2HH2XjUkuDRT-GWTa2OcvYx_z0vn6fsajQfVYPnfQUJuNK7HkE0rYmk4XwUG0-uu9aCSVsFsInqUcNzByNJfcTqTt0grbu5rxOVVrP4WhNq6tEsf6jqKCg
