K8s Overview and Deployment

Environment Planning

  1. Cluster types

    Kubernetes clusters broadly fall into two categories: single-master multi-node and multi-master multi-node.

    Single-master multi-node: one master node and multiple worker nodes. Simple to set up, but the master is a single point of failure; suitable for test environments.

    Multi-master multi-node: multiple master nodes and multiple worker nodes. More complex to set up, but highly available; suitable for production environments.

  2. Installation methods

    Kubernetes can be deployed in several ways; the mainstream options are kubeadm, Minikube, and binary packages.

    1. Minikube: a tool for quickly standing up a single-node Kubernetes instance.

    2. Kubeadm: a tool for quickly bootstrapping a Kubernetes cluster, https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

    3. Binary packages: download each component's binary from the official site and install them one by one. This approach is the most instructive for understanding the Kubernetes components, https://github.com/kubernetes/kubernetes

Environment Deployment

Environment preparation:

Role     IP                  OS                Components
master   192.168.100.10/24   CentOS Stream 8   docker, kubectl, kubeadm, kubelet
node1    192.168.100.11/24   CentOS Stream 8   docker, kubectl, kubeadm, kubelet
node2    192.168.100.12/24   CentOS Stream 8   docker, kubectl, kubeadm, kubelet

Lab steps:

  1. Disable the firewall and SELinux. (master/node1/node2)

    [root@master ~]# systemctl stop firewalld.service 
    [root@master ~]# systemctl disable firewalld.service 
    Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
    Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    [root@master ~]# setenforce 0	# disable temporarily
    [root@master ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config	# disable permanently (takes effect after reboot)
    [root@master ~]# systemctl stop postfix  # stop this service too, if present
    Failed to stop postfix.service: Unit postfix.service not loaded.
    
  2. Add entries for each host to the hosts file. (master/node1/node2)

    [root@node1 ~]# cat /etc/hosts 
    192.168.100.10 master
    192.168.100.11 node1
    192.168.100.12 node2
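
    To append these entries on every node in one shot, a minimal sketch (assuming the entries are not already present):

    [root@master ~]# cat >> /etc/hosts << EOF
    192.168.100.10 master
    192.168.100.11 node1
    192.168.100.12 node2
    EOF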
    
  3. Generate an SSH key pair

    [root@master ~]# ssh-keygen 
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa): 
    Enter passphrase (empty for no passphrase): 
    Enter same passphrase again: 
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:wQm/nP6I2M6Z84DeBtrMp0gJ3ZffEaeFnv+snkFwwnE root@master
    The key's randomart image is:
    +---[RSA 3072]----+
    |      .   . E    |
    |       + o +     |
    |        = * +    |
    | . .   o = X     |
    |. . . o S = .    |
    | . ..o o . +     |
    |  o=... o . o    |
    | ..o+=+= o   =   |
    |  . +=Ooo ..+.o  |
    +----[SHA256]-----+
    
    
  4. Copy the public key to each node

    [root@master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@node1's password: 
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'root@node1'"
    and check to make sure that only the key(s) you wanted were added.
    
    [root@master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@node2's password: 
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'root@node2'"
    and check to make sure that only the key(s) you wanted were added.
    
    
  5. Configure time synchronization

    # configure master as the time server
    [root@master ~]# vim /etc/chrony.conf 
    local stratum 10
    [root@master ~]# systemctl restart chronyd.service 
    [root@master ~]# systemctl enable chronyd
    [root@master ~]# hwclock -w
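
    Out of the box chronyd rejects NTP requests from other hosts, so the master's config also needs an allow rule for the lab subnet (the subnet below is an assumption based on the addressing plan above):

    [root@master ~]# grep -E "^(allow|local)" /etc/chrony.conf
    allow 192.168.100.0/24
    local stratum 10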
    
    # point the other nodes at master for time sync
    [root@node1 ~]# vim /etc/chrony.conf
    #pool 2.centos.pool.ntp.org iburst
    server master  iburst
    [root@node1 ~]# systemctl restart chronyd.service 
    [root@node2 ~]# vim /etc/chrony.conf
    #pool 2.centos.pool.ntp.org iburst
    server master  iburst
    [root@node2 ~]# systemctl restart chronyd.service 
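
    To confirm a node is actually syncing from master (output varies; master should appear as the selected source):

    [root@node1 ~]# chronyc sources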
    
  6. Disable the swap partition (master/node1/node2)

    [root@master ~]# vim /etc/fstab  # comment out the swap entry
    #/dev/mapper/cs-swap     none                    swap    defaults        0 0
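
    Commenting out the fstab entry only takes effect after a reboot; to turn swap off immediately (kubelet refuses to start while swap is on by default):

    [root@master ~]# swapoff -a
    [root@master ~]# free -m | grep -i swap   # Swap totals should now read 0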
    
  7. Enable IP forwarding and tune kernel parameters (master/node1/node2)

    [root@master ~]# vim /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    [root@master ~]# modprobe br_netfilter
    [root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
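
    modprobe only loads br_netfilter for the current boot; one way to have it loaded automatically at every boot is a modules-load.d entry:

    [root@master ~]# echo "br_netfilter" > /etc/modules-load.d/k8s.conf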
    
  8. Enable IPVS support (master/node1/node2)

    [root@node2 ~]# vim /etc/sysconfig/modules/ipvs.modules 
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    [root@node2 ~]# bash /etc/sysconfig/modules/ipvs.modules
    [root@node2 ~]# lsmod | grep -e ip_vs
    ip_vs_sh               16384  0
    ip_vs_wrr              16384  0
    ip_vs_rr               16384  0
    ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
    nf_conntrack          172032  3 nf_nat,nft_ct,ip_vs
    nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
    libcrc32c              16384  5 nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
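
    The script above only loads the modules for the current boot; to make them persistent across reboots, the same module names can go into modules-load.d:

    [root@node2 ~]# cat > /etc/modules-load.d/ipvs.conf << EOF
    ip_vs
    ip_vs_rr
    ip_vs_wrr
    ip_vs_sh
    EOF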
    
  9. Install docker (master/node1/node2)

    # make sure the repo mirrors are reachable before installing
    [root@master yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
    [root@master yum.repos.d]# dnf -y install epel-release
    [root@master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    # install docker
    [root@master ~]#  dnf -y install docker-ce --allowerasing
    
  10. Configure a docker registry mirror (master/node1/node2)

    # start and stop the docker service once before adding the config file
    [root@master ~]# systemctl restart docker.service 
    [root@master ~]# systemctl stop docker.service 
    Warning: Stopping docker.service, but it can still be activated by:
      docker.socket
    [root@master ~]# cat > /etc/docker/daemon.json << EOF
    {
      "registry-mirrors": ["https://rpnfe8c5.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF
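
    After writing daemon.json the docker service still needs to be started again; one way to apply the config and confirm the systemd cgroup driver took effect:

    [root@master ~]# systemctl enable --now docker
    [root@master ~]# docker info | grep -i "cgroup driver"   # should print: Cgroup Driver: systemd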
    
  11. Configure the Kubernetes package repository (master/node1/node2)

    [root@master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
  12. Install the kubeadm, kubelet and kubectl tools (master/node1/node2)
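
    The tools come from the repository configured in the previous step; a minimal sketch, pinned to the v1.25.4 release that kubeadm init uses below (assuming the mirror carries that version):

    [root@master ~]# dnf -y install kubeadm-1.25.4 kubelet-1.25.4 kubectl-1.25.4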

    [root@master ~]# systemctl restart kubelet
    [root@master ~]# systemctl enable kubelet
    Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
    
  13. Configure containerd (master/node1/node2)

    # To make sure cluster initialization and joining succeed later, containerd's config file /etc/containerd/config.toml must be adjusted; do this on all nodes
    [root@master ~]# containerd config default > /etc/containerd/config.toml
    [root@node1 ~]# vim /etc/containerd/config.toml
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
    [root@node1 ~]# systemctl restart containerd
    [root@node1 ~]# systemctl enable containerd
    Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
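
    Since the rest of this setup uses the systemd cgroup driver, containerd is commonly set to match; in the same config.toml, under the runc options section:

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true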
    
  14. Deploy the k8s master node

    [root@master ~]# kubeadm init \
    --apiserver-advertise-address=192.168.100.10 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.25.4 \
    --service-cidr=10.96.0.0/12 \
    --pod-network-cidr=10.244.0.0/16
    
    # it is recommended to save the init output to a file
    [root@master ~]# vim k8s
    To start using your cluster, you need to run the following as a regular
    user:
    
    	mkdir -p $HOME/.kube
    	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    	sudo chown $(id -u):$(id -g) $HOME/.kube/config
    	
    Alternatively, if you are the root user, you can run:
    
    	export KUBECONFIG=/etc/kubernetes/admin.conf
    	
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed
    at:
    	https://kubernetes.io/docs/concepts/cluster-administration/addons/
    	
    Then you can join any number of worker nodes by running the following on
    each as root:
    
    kubeadm join 192.168.100.10:6443 --token eav8jn.zj2muv0thd7e8dad \
    	--discovery-token-ca-cert-hash sha256:b38f8a6a6302e25c0bcba2a67c13b234fd0b9fdd8b0c0645154c79edf6555e09
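
    The bootstrap token in the join command expires after 24 hours by default; if it has lapsed, a fresh join command can be printed on the master:

    [root@master ~]# kubeadm token create --print-join-command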
    
  15. Install a pod network add-on

    [root@master ~]# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
    [root@master ~]# kubectl apply -f kube-flannel.yml
    namespace/kube-flannel created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds created
    [root@master ~]# kubectl get nodes
    NAME     STATUS     ROLES           AGE     VERSION
    master   NotReady   control-plane   6m41s   v1.25.4
    [root@master ~]# kubectl get nodes
    NAME     STATUS   ROLES           AGE     VERSION
    master   Ready    control-plane   7m10s   v1.25.4
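
    If the node stays NotReady for long, watching the flannel pods come up usually shows why:

    [root@master ~]# kubectl get pods -n kube-flannel -o wide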
    
  16. Join the node machines to the k8s cluster

    [root@node1 ~]# kubeadm join 192.168.100.10:6443 --token eav8jn.zj2muv0thd7e8dad \
    > --discovery-token-ca-cert-hash sha256:dskxy6sa5bwi786c5a09cad5v6b56gvubtdfst554asd4fdd8b0c0645154c79ed
    
    [root@node2 ~]# kubeadm join 192.168.100.10:6443 --token eav8jn.zj2muv0thd7e8dad \
    > --discovery-token-ca-cert-hash sha256:dskxy6sa5bwi786c5a09cad5v6b56gvubtdfst554asd4fdd8b0c0645154c79ed
    
  17. Check node status with kubectl get nodes

    [root@master ~]# kubectl get nodes
    NAME     STATUS     ROLES           AGE     VERSION
    master   Ready      control-plane   9m37s   v1.25.4
    node1    NotReady   <none>          51s     v1.25.4
    node2    NotReady   <none>          31s     v1.25.4
    [root@master ~]# kubectl get nodes
    NAME     STATUS   ROLES           AGE     VERSION
    master   Ready    control-plane   9m57s   v1.25.4
    node1    Ready    <none>          71s     v1.25.4
    node2    Ready    <none>          51s     v1.25.4
    
  18. Create a pod running an nginx container on the cluster and test it

    [root@master ~]# kubectl create deployment nginx --image nginx
    deployment.apps/nginx created
    [root@master ~]# kubectl get pods
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-76d6c9b8c-z7p4l   1/1     Running   0          35s
    [root@master ~]# kubectl expose deployment nginx --port 80 --type NodePort
    service/nginx exposed
    [root@master ~]# kubectl get pods -o wide
    NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
    nginx-76d6c9b8c-z7p4l   1/1     Running   0          119s   10.244.1.2   node1   <none>           <none>
    [root@master ~]# kubectl get services
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        15m
    nginx        NodePort    10.109.37.202   <none>        80:31125/TCP   17s
    
  19. Test access

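    The service is reachable on any node's IP at the NodePort allocated above (31125); the same check from the command line:

    [root@master ~]# curl http://192.168.100.10:31125   # should return the nginx welcome page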

  20. Modify the default page

    [root@master ~]# kubectl exec -it pod/nginx-76d6c9b8c-z7p4l -- /bin/bash
    root@nginx-76d6c9b8c-z7p4l:/# cd /usr/share/nginx/html/
    root@nginx-76d6c9b8c-z7p4l:/usr/share/nginx/html# echo "liu" > index.html
    

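    Fetching the page again should now return the new content:

    [root@master ~]# curl http://192.168.100.10:31125
    liu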

Original article: http://www.cnblogs.com/Archer-x/p/16901942.html
