diff --git a/keepalived/高可用集群.md b/keepalived/高可用集群.md
new file mode 100644
index 0000000..e2afc76
--- /dev/null
+++ b/keepalived/高可用集群.md
@@ -0,0 +1,304 @@
+
+# High-Availability Clusters
+
+Author: 行癫 (unauthorized reproduction will be pursued)
+
+------
+
+## I: Introduction to Keepalived
+
+#### 1. Overview
+
+ keepalived is a service that provides high availability (HA) for clusters. Similar in function to heartbeat, it is used to eliminate single points of failure.
+
+#### 2. How It Works
+
+ keepalived is built on the VRRP protocol. VRRP, the Virtual Router Redundancy Protocol, is a protocol for making routers highly available.
+
+ Under VRRP, N routers that provide the same function form one virtual router group containing one master and several backups. The master holds the VIP that serves clients (the other machines on the LAN use this VIP as their default route) and periodically multicasts VRRP advertisements. When the backups stop receiving those advertisements, they conclude the master is down and elect a new master among themselves by VRRP priority. This is how the router group, and with it the cluster, stays highly available.
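The priority-based election described above can be sketched in a few lines of shell. This is only an illustration of the selection rule; the node names and priorities are made up, and real keepalived performs this internally:

```shell
# elect_master: given "name:priority" pairs for the surviving backups,
# print the name with the highest VRRP priority -- a sketch of the
# election that happens when advertisements from the master stop.
# Node names and priorities below are illustrative only.
elect_master() {
    printf '%s\n' "$@" | sort -t: -k2,2 -rn | head -n1 | cut -d: -f1
}

elect_master backup1:50 backup2:80 backup3:30    # prints: backup2
```

(Real VRRP breaks priority ties by comparing the nodes' IP addresses; that detail is omitted here.)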
+
+ keepalived consists of three main modules: core, check, and vrrp. The core module is the heart of keepalived: it starts and maintains the main process and loads and parses the global configuration file. The check module performs health checks and supports all the common check methods. The vrrp module implements the VRRP protocol itself.
+
+![image-20230523174323441](https://xingdian-image.oss-cn-beijing.aliyuncs.com/xingdian-image/image-20230523174323441.png)
+
+ How to tell which node is the master: look at the VIP; whichever node currently holds the VIP is the master.
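That check can be scripted. A minimal sketch, using the VIP 192.168.246.160 from the configurations below; the interface name `ens33` and the helper name are assumptions:

```shell
# holds_vip: succeed when the supplied "ip addr" output contains the VIP.
# The VIP matches the configs in this document; the interface name is an
# assumption about the test machines.
holds_vip() {
    echo "$1" | grep -qF '192.168.246.160'
}

# On a node:  holds_vip "$(ip addr show ens33)" && echo MASTER || echo BACKUP
```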
+
+#### 3. What Is Split-Brain
+
+ Split-brain: in a high-availability (HA) system, when the two interconnected nodes lose contact with each other, a system that used to act as one whole splits into two independent nodes. The two nodes then start competing for the shared resources, which throws the system into disorder and corrupts data.
+
+ For a stateless service, split-brain matters little; for a stateful service (such as MySQL), split-brain must be strictly prevented.
+
+ Whether a service is stateful or stateless is decided by one criterion: whether two requests from the same client share a context on the server side.
+
+ Split-brain here means: the backup grabs the resources while the master does not consider itself dead, and the two end up competing to serve clients at the same time.
+
+ Solution: STONITH ("shoot the other node in the head"), i.e. forcibly power off or fence the failed master.
+
+Note:
+
+ Buying an item in an online store takes several steps: adding it to the cart, confirming the order, and paying. Because the HTTP protocol itself is stateless, extra machinery is needed to build a stateful service on top of it. The most common choice is the session: the goods the user picks (the shopping cart) are stored in the session, and at payment time the goods are read back out of the cart.
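The cart-in-session idea can be sketched with a tiny file-backed store. The function names and the /tmp path are made up for illustration; this is not a real session implementation:

```shell
# A toy "session store": per-user state (the cart) survives between
# otherwise independent requests. Function names and paths are invented.
cart_add()   { echo "$2" >> "/tmp/cart-$1"; }      # request 1..n: add items
cart_items() { cat "/tmp/cart-$1" 2>/dev/null; }   # checkout: read the cart back

cart_add alice book
cart_add alice pen
cart_items alice        # prints "book" then "pen"
rm -f /tmp/cart-alice   # the session ends; the state is discarded
```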
+
+## II: LVS + Keepalived
+
+#### 1. Environment Preparation
+
+ VIP: 192.168.246.160; real servers: 192.168.246.162 and 192.168.246.163 (running nginx); schedulers: lvs-keepalived-master (active) and lvs-keepalived-slave (standby)
+
+#### 2. Project Deployment
+
+Install the software on both the master and backup schedulers:
+
+```shell
+[root@lvs-keepalived-master ~]# yum -y install ipvsadm keepalived
+[root@lvs-keepalived-slave ~]# yum -y install ipvsadm keepalived
+```
+
+Configure the master and backup schedulers:
+
+```shell
+lvs-master:
+[root@lvs-keepalived-master ~]# vim /etc/keepalived/keepalived.conf
+! Configuration File for keepalived
+
+global_defs {
+   router_id lvs-keepalived-master          #on the backup, change to lvs-keepalived-slave
+}
+
+vrrp_instance VI_1 {
+    state MASTER
+    interface ens33                  #interface the VIP binds to
+    virtual_router_id 80             #VRID; must be identical on master and backup in the same group
+    priority 100                     #this node's priority; change to 50 on the backup
+    advert_int 1                     #advertisement interval, default 1s
+    authentication {
+        auth_type PASS
+        auth_pass 1111
+    }
+    virtual_ipaddress {
+        192.168.246.160/24
+    }
+}
+
+virtual_server 192.168.246.160 80 {          #LVS virtual server configuration
+    delay_loop 3                             #health-check interval in seconds
+    lb_algo rr                               #LVS scheduling algorithm (round robin)
+    lb_kind DR                               #LVS forwarding mode (direct routing)
+    nat_mask 255.255.255.0
+    persistence_timeout 20                   #keep a client on the same real server for 20s (matches the backup config and the ipvsadm output below)
+    protocol TCP                             #protocol used for health checks
+    real_server 192.168.246.162 80 {
+        weight 1
+        inhibit_on_failure                   #on failure, set the weight to 0 instead of removing the server from IPVS
+        TCP_CHECK {                          #health check
+            connect_port 80                  #port to check
+            connect_timeout 3                #connection timeout in seconds
+        }
+    }
+    real_server 192.168.246.163 80 {
+        weight 1
+        inhibit_on_failure
+        TCP_CHECK {
+            connect_timeout 3
+            connect_port 80
+        }
+    }
+}
+
+[root@lvs-keepalived-slave ~]# vim /etc/keepalived/keepalived.conf
+! Configuration File for keepalived
+
+global_defs {
+ router_id lvs-keepalived-slave
+}
+
+vrrp_instance VI_1 {
+ state BACKUP
+ interface ens33
+    nopreempt                    #do not preempt: a recovered higher-priority node will not take the VIP back
+ virtual_router_id 80
+ priority 50
+ advert_int 1
+ authentication {
+ auth_type PASS
+ auth_pass 1111
+ }
+ virtual_ipaddress {
+ 192.168.246.160/24
+ }
+}
+virtual_server 192.168.246.160 80 {
+ delay_loop 3
+ lb_algo rr
+ lb_kind DR
+ nat_mask 255.255.255.0
+ persistence_timeout 20
+ protocol TCP
+ real_server 192.168.246.162 80 {
+ weight 1
+ inhibit_on_failure
+ TCP_CHECK {
+ connect_port 80
+ connect_timeout 3
+ }
+ }
+ real_server 192.168.246.163 80 {
+ weight 1
+ inhibit_on_failure
+ TCP_CHECK {
+ connect_timeout 3
+ connect_port 80
+ }
+ }
+}
+```
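Before starting both nodes it is worth verifying that the settings which must match across the pair (notably `virtual_router_id`) really do. A minimal sketch with a made-up helper name, assuming both config files are readable from one machine:

```shell
# same_vrid: check that two keepalived configs agree on virtual_router_id.
# If they differ, the nodes form separate VRRP groups and both will claim
# to be master. Helper name and usage are illustrative.
same_vrid() {
    a=$(grep -o 'virtual_router_id [0-9]*' "$1" | head -n1)
    b=$(grep -o 'virtual_router_id [0-9]*' "$2" | head -n1)
    [ -n "$a" ] && [ "$a" = "$b" ]
}

# same_vrid master.conf slave.conf && echo "VRID matches"
```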
+
+#### 3. LVS Deployment
+
+```shell
+[root@lvs-keepalived-master ~]# ip addr add dev ens33 192.168.246.160/32     #add the VIP for a first manual test; keepalived manages it from then on
+[root@lvs-keepalived-master ~]# ipvsadm -A -t 192.168.246.160:80 -s rr
+[root@lvs-keepalived-master ~]# ipvsadm -a -t 192.168.246.160:80 -r 192.168.246.162 -g
+[root@lvs-keepalived-master ~]# ipvsadm -a -t 192.168.246.160:80 -r 192.168.246.163 -g
+[root@lvs-keepalived-master ~]# ipvsadm -S > /etc/sysconfig/ipvsadm          #save the rules only after creating them
+[root@lvs-keepalived-master ~]# systemctl start ipvsadm
+
+[root@lvs-keepalived-master ~]# ipvsadm -Ln
+IP Virtual Server version 1.2.1 (size=4096)
+Prot LocalAddress:Port Scheduler Flags
+  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
+TCP  192.168.246.160:80 rr persistent 20
+  -> 192.168.246.162:80           Route   1      0          0
+  -> 192.168.246.163:80           Route   0      0          0
+```
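The table above can also be checked from a script. A sketch that counts the real-server forwarding lines in `ipvsadm -Ln` output; the helper name is made up:

```shell
# real_server_count: count real servers registered in the given
# "ipvsadm -Ln" output by matching the "->  ...  Route" lines.
real_server_count() {
    echo "$1" | grep -c 'Route'
}

# real_server_count "$(ipvsadm -Ln)"    # expect 2 when both RS are registered
```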
+
+#### 4. Start Keepalived (on both nodes)
+
+```shell
+[root@lvs-keepalived-master ~]# systemctl start keepalived
+[root@lvs-keepalived-master ~]# systemctl enable keepalived
+[root@lvs-keepalived-slave ~]# systemctl start keepalived
+[root@lvs-keepalived-slave ~]# systemctl enable keepalived
+```
+
+#### 5. Real Server (RS) Configuration
+
+```shell
+[root@test-nginx1 ~]# yum install -y nginx
+[root@test-nginx2 ~]# yum install -y nginx
+#repeat the following steps on test-nginx2 as well, with its own index page
+[root@test-nginx1 ~]# ip addr add dev lo 192.168.246.160/32          #bind the VIP to loopback so the RS accepts traffic for it
+[root@test-nginx1 ~]# echo "net.ipv4.conf.all.arp_ignore = 1" >> /etc/sysctl.conf
+[root@test-nginx1 ~]# echo "net.ipv4.conf.all.arp_announce = 2" >> /etc/sysctl.conf
+[root@test-nginx1 ~]# sysctl -p
+[root@test-nginx1 ~]# echo "web1..." >> /usr/share/nginx/html/index.html
+[root@test-nginx1 ~]# systemctl start nginx
+```
+
+## III: Nginx + Keepalived
+
+```shell
+1. Prepare the web servers (do this on both)
+web-server-1:10.0.0.42
+web-server-2:10.0.0.141
+Install nginx and make sure it runs normally
+Create a test page on each web server:
+echo "web-server-1" > /usr/share/nginx/html/index.html
+echo "web-server-2" > /usr/share/nginx/html/index.html
+Verify that both web servers can be reached
+
+2. Deploy the load balancers (do this on both)
+master:10.0.0.154
+backup:10.0.0.27
+
+Do the following on both master and backup:
+vim /etc/nginx/nginx.conf        #add the following inside the http block
+ upstream xingdian {
+ server 10.0.0.42:80;
+ server 10.0.0.141:80;
+ }
+
+vim /etc/nginx/conf.d/default.conf        #change the location block as follows
+
+ location / {
+ proxy_pass http://xingdian;
+ proxy_redirect default;
+ proxy_set_header Host $http_host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header REMOTE-HOST $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ }
+
+Make sure the nginx load balancing works and a client can reach the test pages before continuing
+
+Keepalived provides HA for the schedulers (the VIP is written directly in the configuration file)
+
+1. Install keepalived on the master and backup schedulers
+[root@master ~]# yum install -y keepalived
+[root@backup ~]# yum install -y keepalived
+2. Configure keepalived on both schedulers
+[root@master ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
+[root@master ~]# vim /etc/keepalived/keepalived.conf
+! Configuration File for keepalived
+
+global_defs {
+   router_id director1          #on the backup, change to director2
+}
+
+vrrp_instance VI_1 {
+    state MASTER              #master or backup role
+    interface ens33           #interface the VIP binds to
+    virtual_router_id 80      #must be identical on every scheduler in the cluster
+    priority 100              #change to 50 on the backup
+ advert_int 1
+ authentication {
+ auth_type PASS
+ auth_pass 1111
+ }
+ virtual_ipaddress {
+ 10.0.0.110/24
+ }
+}
+
+[root@backup ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
+[root@backup ~]# vim /etc/keepalived/keepalived.conf
+! Configuration File for keepalived
+
+global_defs {
+   router_id director2
+}
+
+vrrp_instance VI_1 {             #instance name; must be identical on both nodes
+    state BACKUP                 #this node is the backup
+    interface ens33              #heartbeat/VRRP interface
+    nopreempt                    #backup only: do not take the VIP back by preemption
+    virtual_router_id 80         #virtual router ID; must match on master and backup
+    priority 50                  #lower than the master's 100
+    advert_int 1                 #advertisement interval in seconds
+    authentication {             #password authentication
+ auth_type PASS
+ auth_pass 1111
+ }
+ virtual_ipaddress {
+ 10.0.0.110/24
+ }
+}
+3. Start keepalived (start it on both master and backup)
+[root@master ~]# systemctl enable keepalived
+[root@master ~]# systemctl start keepalived
+```
+
+## IV: Health Checking
+
+```shell
+#!/bin/bash
+#Check whether the nginx process is running (substitute your own service name here)
+counter=$(ps -C nginx --no-headers | wc -l)
+if [ "${counter}" = "0" ]; then
+    #Try to start nginx once, then wait 5 seconds and check again
+    systemctl start nginx
+    sleep 5
+    counter=$(ps -C nginx --no-headers | wc -l)
+    if [ "${counter}" = "0" ]; then
+        #If the restart failed, stop keepalived to trigger a master/backup failover
+        systemctl stop keepalived
+    fi
+fi
+```
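To have keepalived run a check like this automatically, reference it from the configuration with `vrrp_script`/`track_script`. A sketch, assuming the script above is saved as /etc/keepalived/check_nginx.sh and made executable (the path and block names other than the keepalived keywords are assumptions):

```shell
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"   #path is an assumption
    interval 5                                #run the check every 5 seconds
}

vrrp_instance VI_1 {
    ...                                       #existing instance settings stay as-is
    track_script {
        check_nginx
    }
}
```

When the tracked script fails, the instance drops out of the MASTER role and the backup takes over the VIP, which is the same effect the script achieves by stopping keepalived itself.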
+