
k8s High-Availability Cluster (Local Setup)

Source: Kingdee Cloud Community | Author: Kingdee | 2024-09-16



I. k8s HA Cluster Planning

Hostname | IP            | VIP
---------|---------------|--------------
my       | 192.168.3.100 | 192.168.3.234
master1  | 192.168.3.224 |
master2  | 192.168.3.225 |
node1    | 192.168.3.222 |
node2    | 192.168.3.223 |

II. Install Docker

1. Add the Docker yum repository on every node

# Install dependencies

yum install -y yum-utils device-mapper-persistent-data lvm2

# Configure the stable repository; the repo definition is saved to /etc/yum.repos.d/docker-ce.repo

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Update yum packages and install Docker CE (this installs the latest Docker version)

yum update -y && yum install -y docker-ce


2. Configure registry mirrors and the cgroup driver k8s expects, on every node


If the daemon.json file does not exist, create it yourself.


# Create the /etc/docker directory

mkdir -p /etc/docker

# Write the daemon.json file

cat > /etc/docker/daemon.json <<EOF

{

  "registry-mirrors": [

        "https://ebkn7ykm.mirror.aliyuncs.com",

        "https://docker.mirrors.ustc.edu.cn",

        "http://f1361db2.m.daocloud.io"

    ],

  "exec-opts": ["native.cgroupdriver=systemd"],

  "log-driver": "json-file",

  "log-opts": {

    "max-size": "100m"

  },

  "storage-driver": "overlay2"

}

EOF


3. Restart the Docker service

systemctl daemon-reload && systemctl restart docker
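
To confirm the new settings took effect, the cgroup driver can be checked (a quick sanity check; kubelet expects systemd):

# Verify Docker now reports the systemd cgroup driver
docker info --format '{{.CgroupDriver}}'
# expected output: systemd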


4. Configure the hosts file on every node


In a real production environment you would plan internal DNS so every machine can resolve hostnames without a hosts file (a minimal sketch follows the heredoc below); here we simply use /etc/hosts.


cat > /etc/hosts <<EOF

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.3.100 my

192.168.3.224 master1

192.168.3.225 master2

192.168.3.222 node1

192.168.3.223 node2

199.232.28.133  raw.githubusercontent.com

140.82.114.4 github.com

199.232.69.194 github.global.ssl.fastly.net

185.199.108.153 assets-cdn.github.com

185.199.109.153 assets-cdn.github.com

185.199.110.153 assets-cdn.github.com

185.199.111.153 assets-cdn.github.com

185.199.111.133 objects.githubusercontent.com

EOF
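
For reference, a minimal sketch of the internal-DNS alternative mentioned above, using dnsmasq (the DNS host IP 192.168.3.1 is a hypothetical placeholder; the records mirror the hosts file above):

# On a dedicated DNS host (hypothetical 192.168.3.1):
yum install -y dnsmasq
cat > /etc/dnsmasq.d/k8s.conf <<'EOF'
address=/my/192.168.3.100
address=/master1/192.168.3.224
address=/master2/192.168.3.225
address=/node1/192.168.3.222
address=/node2/192.168.3.223
EOF
systemctl enable --now dnsmasq
# Then point resolv.conf on every node at 192.168.3.1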



Passwordless SSH (this step only needs to be run on the master); it prepares for copying files between nodes later.


ssh-keygen

cat .ssh/id_rsa.pub >> .ssh/authorized_keys

chmod 600 .ssh/authorized_keys


# Generate the key pair on the master, then copy it to the node and other master machines

scp -r .ssh root@192.168.3.222:/root
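
To avoid repeating the scp for every machine, a small loop can push the keys to all nodes at once (a sketch; the IPs come from the plan in section I):

# Push the .ssh directory to every other node (enter each root password once)
for ip in 192.168.3.224 192.168.3.225 192.168.3.222 192.168.3.223; do
  scp -r ~/.ssh root@${ip}:/root/
done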



III. Prepare the System Environment

1. Disable the firewall and install dependencies on every node


yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git lrzsz

systemctl stop firewalld && systemctl disable firewalld

yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save



2. Disable SELinux on every node


setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config


3. Disable the swap partition on every node


swapoff -a && sed -i  '/ swap / s/^\(.*\)$/#\1/g'  /etc/fstab


4. Synchronize time across the nodes


If the machines can reach the internet directly, the commands below are enough.

If they cannot, set up one internal time server and have every other machine sync from it (a sketch follows the commands below).


yum -y install chrony

systemctl start chronyd.service

systemctl enable chronyd.service

timedatectl set-timezone Asia/Shanghai

chronyc -a makestep
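
A minimal sketch of the offline alternative mentioned above, assuming my (192.168.3.100) doubles as the internal time server (the directives are standard chrony syntax; the choice of host is an assumption):

# On the time server (my) -- append to /etc/chrony.conf:
#   allow 192.168.3.0/24    # let the LAN sync from this host
#   local stratum 10        # keep serving time even without upstream sources
# On every other node -- replace the server lines in /etc/chrony.conf with:
#   server 192.168.3.100 iburst
# Then on all machines:
systemctl restart chronyd
chronyc sources -v    # verify the configured source is reachable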


5. Tune kernel parameters on every node


cat > /etc/sysctl.d/k8s.conf << EOF

net.ipv4.ip_nonlocal_bind = 1

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

net.ipv4.ip_forward=1

# Disallow swap usage; only allow it when the system hits OOM
vm.swappiness=0

# Do not check whether physical memory is sufficient when allocating
vm.overcommit_memory=1

# Do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0

fs.inotify.max_user_instances=8192

fs.inotify.max_user_watches=1048576

fs.file-max=52706963

fs.nr_open=52706963

net.ipv6.conf.all.disable_ipv6=1

net.netfilter.nf_conntrack_max=2310720

EOF


# The net.bridge.* keys require the br_netfilter module to be loaded

modprobe br_netfilter

sysctl -p /etc/sysctl.d/k8s.conf


6. Configure the Kubernetes yum repository on every node


cat > /etc/yum.repos.d/kubernetes.repo << EOF

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF


7. Enable the IPVS kernel modules on every node


cat >/etc/sysconfig/modules/ipvs.modules <<EOF

#!/bin/sh

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

#modprobe -- nf_conntrack_ipv4   # kernels 4.19+ no longer have the _ipv4 module

modprobe -- nf_conntrack

EOF


chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
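
If the same script must run on both old and new kernels, the conntrack module can be picked conditionally (a sketch; 4.19 is the release where nf_conntrack_ipv4 was merged into nf_conntrack):

# Load the conntrack module appropriate for the running kernel
kver=$(uname -r | cut -d. -f1-2)
if [ "$(printf '%s\n' "$kver" 4.19 | sort -V | head -n1)" = "4.19" ]; then
  modprobe nf_conntrack          # 4.19 and newer
else
  modprobe nf_conntrack_ipv4     # older kernels such as 3.10
fi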



8. Configure rsyslogd and systemd journald


mkdir -p /var/log/journal    # directory for persistent journald storage

mkdir -p  /etc/systemd/journald.conf.d/


cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF

[Journal]

# Persist logs to disk

Storage=persistent

# Compress historical logs

Compress=yes

SyncIntervalSec=5m

RateLimitInterval=30s

RateLimitBurst=1000

# Maximum disk usage: 10G

SystemMaxUse=10G

# Maximum size of a single log file: 200M

SystemMaxFileSize=200M

# Retain logs for 2 weeks

MaxRetentionSec=2week

# Do not forward logs to syslog

ForwardToSyslog=no

EOF


systemctl restart systemd-journald


9. Upgrade the kernel

The stock 3.10.x kernel that ships with CentOS 7.x has bugs that make Docker and Kubernetes unstable.


rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

yum --enablerepo=elrepo-kernel install -y kernel-lt

grub2-set-default 'CentOS Linux (5.4.159-1.el7.elrepo.x86_64) 7 (Core)'   # adjust to the kernel version actually installed

## Finally, reboot the system

reboot
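
The exact kernel string varies per install; a common way to list the boot entries before setting the default (standard grub2 tooling on CentOS 7):

# List all boot menu entries, then pass the right one to grub2-set-default
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg

# After rebooting, confirm the running kernel
uname -r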


IV. Install keepalived and haproxy on all master nodes


1. Install the packages on every master node


yum -y install haproxy keepalived


2. Configure keepalived on the primary master (my)


The first master (my) runs as MASTER; the remaining masters run as BACKUP.


# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived


global_defs {

   router_id LVS_DEVEL

# 添加如下内容

   script_user root

   enable_script_security

}


vrrp_script check_haproxy {

    script "/etc/keepalived/check_haproxy.sh"         # 检测脚本路径

    interval 3

    weight -2

    fall 10

    rise 2

}


vrrp_instance VI_1 {

    state MASTER            # MASTER

    interface ens33         # local NIC name

    virtual_router_id 51

    priority 100             # priority 100

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

        192.168.3.234      # virtual IP

    }

    track_script {

        check_haproxy       # the vrrp_script defined above

    }

}



3. Configure keepalived on the backup master(s)


scp the /etc/keepalived/keepalived.conf file to master1; only the state (MASTER -> BACKUP) and the priority need to change:


# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived


global_defs {

   router_id LVS_DEVEL


# 添加如下内容

   script_user root

   enable_script_security

}


vrrp_script check_haproxy {

    script "/etc/keepalived/check_haproxy.sh"         # 检测脚本路径

    interval 3

    weight -2

    fall 10

    rise 2

}


vrrp_instance VI_1 {

    state BACKUP            # BACKUP

    interface ens33         # local NIC name

    virtual_router_id 51

    priority 90             # priority 90

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

        192.168.3.234      # virtual IP

    }

    track_script {

        check_haproxy       # the vrrp_script defined above

    }

}


4. Configure haproxy.cfg; the file is identical on every master node


vim /etc/haproxy/haproxy.cfg

#---------------------------------------------------------------------

# Example configuration for a possible web application.  See the

# full configuration options online.

#

#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt

#

#---------------------------------------------------------------------


#---------------------------------------------------------------------

# Global settings

#---------------------------------------------------------------------

global

    # to have these messages end up in /var/log/haproxy.log you will

    # need to:

    #

    # 1) configure syslog to accept network log events.  This is done

    #    by adding the '-r' option to the SYSLOGD_OPTIONS in

    #    /etc/sysconfig/syslog

    #

    # 2) configure local2 events to go to the /var/log/haproxy.log

    #   file. A line like the following can be added to

    #   /etc/sysconfig/syslog

    #

    #    local2.*                       /var/log/haproxy.log

    #

    log         127.0.0.1 local2


    chroot      /var/lib/haproxy

    pidfile     /var/run/haproxy.pid

    maxconn     4000

    user        haproxy

    group       haproxy

    daemon


    # turn on stats unix socket

    stats socket /var/lib/haproxy/stats


#---------------------------------------------------------------------

# common defaults that all the 'listen' and 'backend' sections will

# use if not designated in their block

#---------------------------------------------------------------------

defaults

    mode                    http

    log                     global

    option                  httplog

    option                  dontlognull

    option http-server-close

    option forwardfor       except 127.0.0.0/8

    option                  redispatch

    retries                 3

    timeout http-request    10s

    timeout queue           1m

    timeout connect         10s

    timeout client          1m

    timeout server          1m

    timeout http-keep-alive 10s

    timeout check           10s

    maxconn                 3000


#---------------------------------------------------------------------

# main frontend which proxys to the backends

#---------------------------------------------------------------------

frontend  kubernetes-apiserver

    mode                        tcp

    bind                        *:16443

    option                      tcplog

    default_backend             kubernetes-apiserver


#---------------------------------------------------------------------

# static backend for serving up images, stylesheets and such

#---------------------------------------------------------------------

listen stats

    bind            *:1080

    stats auth      admin:admin

    stats refresh   5s

    stats realm     HAProxy\ Statistics

    stats uri       /admin?stats


#---------------------------------------------------------------------

# round robin balancing between the various backends

#---------------------------------------------------------------------

backend kubernetes-apiserver

    mode        tcp

    balance     roundrobin

    server  my 192.168.3.100:6443 check

    server  master1 192.168.3.224:6443 check

    server  master2 192.168.3.225:6443 check



5. Create the health-check script; it is identical on all master nodes


# The quoted 'EOF' keeps the script's backticks and variables literal
cat > /etc/keepalived/check_haproxy.sh <<'EOF'

#!/bin/sh

# HAPROXY down

pid=`ps -C haproxy --no-header | wc -l`

if [ $pid -eq 0 ]

then

    systemctl start haproxy

    if [ `ps -C haproxy --no-header | wc -l` -eq 0 ]

    then

        killall -9 haproxy


        #Decide on your own alert handling here, e.g. send an email or an SMS

        echo "HAPROXY down" >>/tmp/haproxy_check.log

        sleep 10

    fi

fi

EOF


6. Make the check script executable


chmod 755 /etc/keepalived/check_haproxy.sh


7. Start the haproxy and keepalived services


systemctl enable keepalived && systemctl start keepalived 

systemctl enable haproxy && systemctl start haproxy 


8. Check the VIP address

Since my is configured as the keepalived MASTER, the VIP will only be visible on that machine.


ip addr
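
A quick way to confirm failover works (a sketch; stop keepalived on my and watch the VIP move to the highest-priority BACKUP):

# On my: confirm the VIP is bound, then simulate a failure
ip addr show ens33 | grep 192.168.3.234
systemctl stop keepalived

# On master1: the VIP should appear here within a few seconds
ip addr show ens33 | grep 192.168.3.234

# On my: restore the service; the VIP fails back since MASTER has higher priority
systemctl start keepalived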


V. Deploy the Cluster


The kubeadm, kubectl and kubelet packages you install must match the Kubernetes version being deployed. After adding kubelet to boot startup, do NOT start it manually or it will keep failing; the kubelet service starts automatically once the cluster is initialized!


1. Install the k8s packages; every node needs them


# (Incorrect advice commonly seen online: just install the latest)

# yum install -y kubeadm kubectl kubelet


Install pinned versions that match the target Kubernetes release:


yum install -y kubelet-1.22.5 kubectl-1.22.5 kubeadm-1.22.5

systemctl enable kubelet    # enable only; kubeadm init will start it
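
A quick check that all three components landed on the same version (standard CLI flags):

kubeadm version -o short             # expect v1.22.5

kubectl version --client --short     # expect v1.22.5

kubelet --version                    # expect Kubernetes v1.22.5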


2. Generate the default configuration file; log in to the master (my)


kubeadm config print init-defaults > kubeadm-config.yaml



3. Edit the configuration file

vim kubeadm-config.yaml

--------------------------------------------------------------------

apiVersion: kubeadm.k8s.io/v1beta3

bootstrapTokens:

- groups:

  - system:bootstrappers:kubeadm:default-node-token

  token: abcdef.0123456789abcdef

  ttl: 24h0m0s

  usages:

  - signing

  - authentication

kind: InitConfiguration

localAPIEndpoint:

  advertiseAddress: 192.168.3.100

  bindPort: 6443

nodeRegistration:

  criSocket: /var/run/dockershim.sock

  imagePullPolicy: IfNotPresent

  name: my

  taints: null

---

apiServer:

  timeoutForControlPlane: 4m0s

  certSANs:

  - 192.168.3.100

  - 192.168.3.224

  - 192.168.3.225

apiVersion: kubeadm.k8s.io/v1beta3

certificatesDir: /etc/kubernetes/pki

clusterName: kubernetes

controlPlaneEndpoint: "192.168.3.234:16443"

controllerManager: {}

dns: {}

etcd:

  local:

    dataDir: /var/lib/etcd

imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

kind: ClusterConfiguration

kubernetesVersion: 1.22.5   # match the installed package version

networking:

  dnsDomain: cluster.local

  serviceSubnet: 10.96.0.0/12

  podSubnet: 10.244.0.0/16

scheduler: {}

---

apiVersion: kubeproxy.config.k8s.io/v1alpha1

kind: KubeProxyConfiguration

mode: ipvs



4. Pull the required images


kubeadm config images pull --config kubeadm-config.yaml


List the images kubeadm requires (use docker images to confirm they were actually pulled):

kubeadm config images list --config kubeadm-config.yaml



5. Initialize the cluster


kubeadm init --config kubeadm-config.yaml


On success, init prints join commands like the following (your token and hash will differ):


kubeadm join 192.168.3.234:16443 --token abcdef.0123456789abcdef \

        --discovery-token-ca-cert-hash sha256:3f4937786226a046b3d6d67b8697d1b6df2eaf3b29f711831577282a484c67ec \

        --control-plane


kubeadm join 192.168.3.234:16443 --token abcdef.0123456789abcdef \

        --discovery-token-ca-cert-hash sha256:3f4937786226a046b3d6d67b8697d1b6df2eaf3b29f711831577282a484c67ec
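
The bootstrap token above expires after 24h (the ttl in kubeadm-config.yaml). If the printed join command is lost or stale, it can be regenerated with standard kubeadm subcommands (an alternative to re-copying certs manually as in step 7 below):

# Re-print a worker join command with a fresh token
kubeadm token create --print-join-command

# For control-plane joins, re-upload the certs, note the printed key,
# then append:  --control-plane --certificate-key <key>
kubeadm init phase upload-certs --upload-certs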

6. Create the etcd directory on the other master nodes


mkdir -p /etc/kubernetes/pki/etcd


7. Copy the primary master's certificates to the other master nodes


scp /etc/kubernetes/pki/ca.* root@192.168.3.224:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/sa.* root@192.168.3.224:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/front-proxy-ca.* root@192.168.3.224:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/etcd/ca.* 192.168.3.224:/etc/kubernetes/pki/etcd/


# Both master and node machines need this file

scp  /etc/kubernetes/admin.conf 192.168.3.224:/etc/kubernetes/


scp  /etc/kubernetes/admin.conf 192.168.3.222:/etc/kubernetes/

scp  /etc/kubernetes/admin.conf 192.168.3.223:/etc/kubernetes/


## Batch script (an alternative to the manual scp commands above)

# The quoted 'EOF' keeps the script's variables and backticks literal
cat > k8s-cluster-other-init.sh <<'EOF'

#!/bin/bash

IPS=(192.168.3.224 192.168.3.225)

JOIN_CMD=`kubeadm token create --print-join-command 2> /dev/null`


for index in 0 1; do

  ip=${IPS[${index}]}

  ssh $ip "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"

  scp /etc/kubernetes/pki/ca.crt $ip:/etc/kubernetes/pki/ca.crt

  scp /etc/kubernetes/pki/ca.key $ip:/etc/kubernetes/pki/ca.key

  scp /etc/kubernetes/pki/sa.key $ip:/etc/kubernetes/pki/sa.key

  scp /etc/kubernetes/pki/sa.pub $ip:/etc/kubernetes/pki/sa.pub

  scp /etc/kubernetes/pki/front-proxy-ca.crt $ip:/etc/kubernetes/pki/front-proxy-ca.crt

  scp /etc/kubernetes/pki/front-proxy-ca.key $ip:/etc/kubernetes/pki/front-proxy-ca.key

  # The etcd CA is also required for a control-plane join (see step 7)
  scp /etc/kubernetes/pki/etcd/ca.crt $ip:/etc/kubernetes/pki/etcd/ca.crt

  scp /etc/kubernetes/pki/etcd/ca.key $ip:/etc/kubernetes/pki/etcd/ca.key

  scp /etc/kubernetes/admin.conf $ip:/etc/kubernetes/admin.conf

  scp /etc/kubernetes/admin.conf $ip:~/.kube/config


  ssh ${ip} "${JOIN_CMD} --control-plane"

done

EOF
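
Run it once from the primary master after passwordless SSH is in place:

chmod +x k8s-cluster-other-init.sh

bash k8s-cluster-other-init.sh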


8. Join the other master nodes to the cluster


kubeadm join 192.168.3.234:16443 --token abcdef.0123456789abcdef \

        --discovery-token-ca-cert-hash sha256:206fc3f597db5676739d390e4e2ce6fac7e03c361695613d38363027dcb2c0c3 \

        --control-plane

9. Join the two worker nodes to the cluster


kubeadm join 192.168.3.234:16443 --token abcdef.0123456789abcdef \

        --discovery-token-ca-cert-hash sha256:206fc3f597db5676739d390e4e2ce6fac7e03c361695613d38363027dcb2c0c3 

10. Run the following on every master node (optional on workers)

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config


11. Check the status of all nodes from the master (my)

kubectl get nodes

------------------------------------------------------------------

NAME      STATUS   ROLES                  AGE     VERSION

master1   Ready    control-plane,master   6h38m   v1.22.5

master2   Ready    control-plane,master   4h45m   v1.22.5

my        Ready    control-plane,master   6h41m   v1.22.5

node1     Ready    <none>                 6h15m   v1.22.5

node2     Ready    <none>                 6h15m   v1.22.5





12. Install the network plugin; run on the master (my)


Without direct access to GitHub this manifest may fail to download (the raw.githubusercontent.com hosts entry added in section II helps):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
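
The podSubnet 10.244.0.0/16 set in kubeadm-config.yaml matches flannel's default Network, so the manifest needs no edits. Once applied, the flannel DaemonSet should start one pod per node (namespace and label depend on the manifest version; kube-system with app=flannel is assumed here):

# Watch the flannel pods come up (one per node)
kubectl get pods -n kube-system -l app=flannel -o wide

# Nodes flip from NotReady to Ready once the CNI is working
kubectl get nodes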


13. Check pod status across all namespaces

kubectl get pods --all-namespaces

----------------------------------------------------------------------

NAMESPACE              NAME                                        READY   STATUS    RESTARTS       AGE

kube-system            coredns-7d89d9b6b8-22f8g                    1/1     Running   7 (76m ago)    6h40m

kube-system            coredns-7d89d9b6b8-4mm4g                    1/1     Running   7 (76m ago)    6h40m

kube-system            etcd-master1                                1/1     Running   1 (76m ago)    118m

kube-system            etcd-master2                                1/1     Running   2 (76m ago)    4h45m

kube-system            etcd-my                                     1/1     Running   13 (76m ago)   6h41m

kube-system            kube-apiserver-master1                      1/1     Running   35 (76m ago)   6h37m

kube-system            kube-apiserver-master2                      1/1     Running   5 (23m ago)    4h45m

kube-system            kube-apiserver-my                           1/1     Running   22 (76m ago)   6h41m

kube-system            kube-controller-manager-master1             1/1     Running   5 (76m ago)    6h37m

kube-system            kube-controller-manager-...                 (output truncated)
