How to Build a MySQL MHA High Availability Architecture
This article walks through how to build a MySQL MHA high availability architecture. Background terminology is not covered here; look it up online or in a reference book if needed. Let's get straight to the point.
Building the MySQL MHA high availability architecture
Environment and required packages
OS: CentOS 6.5
MySQL: mysql-5.5.6, installed from source (the database installation itself is not covered here)
Host layout:
Master : 192.168.0.101 node1 (primary)
Slave1 : 192.168.0.102 node2 (standby master)
Slave2 : 192.168.0.103 node3 (slave + MHA manager node)
The MHA packages are provided in the attachment.
Set up passwordless SSH login with mutual trust between the nodes
++++++++++ The following must be run on all three nodes ++++++++++
## Add hosts entries and set each node's hostname accordingly
192.168.0.101 node1
192.168.0.102 node2
192.168.0.103 node3
## Generate the SSH key pair
ssh-keygen -t rsa
cat /root/.ssh/id_rsa.pub > /root/.ssh/authorized_keys
## Append the public keys of all three nodes, including the node's own, to authorized_keys (an optional ssh-copy-id sketch follows at the end of this section)
## Run the following commands on each node to verify SSH access
ssh node1 date
ssh node2 date
ssh node3 date
## Only proceed once passwordless login works between every pair of nodes
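## Optional: instead of editing authorized_keys by hand, the keys can also be distributed with ssh-copy-id.
## This is only a sketch and assumes password SSH login is still enabled at this point; run it on every node.
for h in node1 node2 node3; do
    ssh-copy-id -i /root/.ssh/id_rsa.pub root@$h
done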
Install the MHA node package
++++++++++ The following must be run on all three nodes ++++++++++
## Switch yum to the Aliyun mirror
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.old
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo
yum clean all
yum makecache
yum update
## Install the prerequisite packages
yum install perl-DBD-MySQL perl perl-devel cpan
rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm
## After the node package is installed, the following four scripts appear under /usr/bin (they are normally invoked by the MHA manager and need no manual use):
/usr/bin/save_binary_logs        ## saves and copies the master's binary logs
/usr/bin/apply_diff_relay_logs   ## identifies differential relay log events and applies them to the other slaves
/usr/bin/filter_mysqlbinlog      ## strips unnecessary ROLLBACK events (no longer used by MHA)
/usr/bin/purge_relay_logs        ## purges relay logs without blocking the SQL thread (see the cron sketch below)
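## Because MHA relies on relay logs when recovering a failed master, relay_log_purge is normally disabled on the slaves, so purge_relay_logs should be run periodically from cron instead.
## A minimal crontab sketch for node2 and node3; the credentials and working directory below are examples, adjust them to your environment:
0 */4 * * * /usr/bin/purge_relay_logs --user=monitor --password=123456 --disable_relay_log_purge --workdir=/tmp >> /tmp/purge_relay_logs.log 2>&1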
Install the MHA manager package
[node3]# tar xf mha.tar.gz
[node3]# cd mha
## Installing mha4mysql-manager-0.56-0.el6.noarch directly on node3 fails because several Perl module packages are missing.
## Install them in the following order:
compat-db43-4.3.29-15.el6
perl-Mail-Sender-0.8.16-3.el6
perl-Mail-Sendmail-0.79-12.el6
perl-Config-Tiny-2.12-7.1.el6
perl-Parallel-ForkManager-0.7.9-1.el6
perl-Log-Dispatch-2.27-1.el6
# Installing perl-Log-Dispatch-2.27-1.el6 in turn needs more module packages; install these first, in this order:
perl-TimeDate-1.16-11.1.el6
perl-MIME-Types-1.28-2.el6
perl-MailTools-2.04-4.el6
perl-Email-Date-Format-1.002-5.el6
perl-Params-Validate-0.92-3.el6
perl-MIME-Lite-3.027-2.el6
After that, mha4mysql-manager-0.56-0.el6.noarch installs without errors:
rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm
## Once the manager package is installed, the following scripts appear under /usr/bin:
/usr/bin/masterha_check_repl        ## checks the MySQL replication status
/usr/bin/masterha_check_ssh         ## checks the MHA SSH configuration
/usr/bin/masterha_check_status      ## checks the current MHA running status
/usr/bin/masterha_conf_host         ## adds or removes configured server entries
/usr/bin/masterha_manager           ## starts MHA
/usr/bin/masterha_master_monitor    ## monitors whether the master is down
/usr/bin/masterha_master_switch     ## controls failover (automatic or manual)
/usr/bin/masterha_secondary_check   ## a secondary check of whether the master is really down
/usr/bin/masterha_stop              ## stops MHA
Configure master-slave replication
## In my.cnf on the three nodes, set server-id to 1, 2 and 3 respectively
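## For reference, a minimal sketch of the relevant my.cnf lines on node1 (the binlog base name is an example; binary logging must be enabled at least on the master and the candidate master):
[mysqld]
server-id = 1
log-bin   = mysql-bin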
[node1]# mysql -uroot
mysql> show master status;
// Note the File and Position values; they are needed for CHANGE MASTER below
## Create the repl and monitor users on all three databases
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.0.%' IDENTIFIED BY '123456';
mysql> grant all privileges on *.* to 'monitor'@'192.168.0.%' identified by '123456';
mysql> flush privileges;
## Run the following statements on node1
mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
// Install the semisync_master plugin on the master
mysql> set global rpl_semi_sync_master_enabled=1;
mysql> set global rpl_semi_sync_master_timeout=30000;
// If the master waits more than 30000 ms for a slave acknowledgement during a transaction, it downgrades to normal asynchronous replication and stops waiting. Once the master detects that the slave has recovered, it automatically returns to semi-sync mode.
## On node2 and node3:
mysql> CHANGE MASTER TO
MASTER_HOST='192.168.0.101',
MASTER_PORT=3306,
MASTER_USER='repl',
MASTER_PASSWORD='123456',
MASTER_LOG_FILE='mysql-bin.000004',
MASTER_LOG_POS=0;
// Fill MASTER_LOG_FILE= and MASTER_LOG_POS= with the values returned by SHOW MASTER STATUS earlier
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
mysql> set global rpl_semi_sync_slave_enabled=1;
mysql> flush privileges;
mysql> start slave;
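## After starting replication it is worth confirming that both replication threads are running on node2 and node3:
mysql> show slave status\G
// Slave_IO_Running and Slave_SQL_Running should both show Yes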
#### MySQL 5.5 introduced semi-synchronous (semi-sync) replication to improve data consistency between master and slaves.
With semi-sync the master only needs an acknowledgement from one slave before the commit returns to the client; otherwise it waits until the timeout, falls back to asynchronous mode and then commits.
Advantages:
Once a transaction has returned success to the client, its log is guaranteed to exist on at least two hosts.
Semi-sync suits small transactions; when the lag between the two hosts is small, it achieves zero data loss at only a small performance cost.
Disadvantages:
Every transaction incurs extra wait latency, whose size depends on network quality.
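## A quick way to confirm that semi-sync is actually active is to check the plugin status variables:
mysql> show global status like 'Rpl_semi_sync_master_status';   -- should be ON on the master
mysql> show global status like 'Rpl_semi_sync_slave_status';    -- should be ON on node2 and node3
mysql> show global variables like 'rpl_semi_sync_master_timeout';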
Manager node configuration
[node3]# mkdir -p /etc/masterha /masterha/app1 /masterha/scripts
[node3]# cd /etc/masterha/
[node3]# vim app1.cnf
// The full app1.cnf is given at the end of this article. For now leave the following two parameters empty; change them to the values shown in app1.cnf when you add the VIP later.
master_ip_failover_script=""
master_ip_online_change_script=""
### Verify the configuration
## Check the SSH configuration
[node3]# export PERL5LIB=$PERL5LIB:/usr/lib/perl5/vendor_perl/
[node3]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
## Check the replication configuration
[node3]# masterha_check_repl --conf=/etc/masterha/app1.cnf
## Check the MHA running status
[node3]# masterha_check_status --conf=/etc/masterha/app1.cnf
### Start MHA
[node3]# nohup masterha_manager --conf=/etc/masterha/app1.cnf < /dev/null > /masterha/app1/manager.log 2>&1 &
[node3]# tail -f /masterha/app1/manager.log
[node3]# masterha_check_status --conf=/etc/masterha/app1.cnf
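## If you later need to stop the manager cleanly (for example before maintenance), masterha_stop from the list above does it:
[node3]# masterha_stop --conf=/etc/masterha/app1.cnf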
Verification
## First check the slave list on the master
mysql> show slave hosts;
+-----------+------+------+-----------+
| Server_id | Host | Port | Master_id |
+-----------+------+------+-----------+
| 3 | | 3306 | 1 |
| 2 | | 3306 | 1 |
+-----------+------+------+-----------+
## Shut down the master on node1, then check on node2 whether it has been promoted to master
mysql> show slave hosts;
+-----------+------+------+-----------+
| Server_id | Host | Port | Master_id |
+-----------+------+------+-----------+
| 3 | | 3306 | 2 |
+-----------+------+------+-----------+
## Restore the high availability architecture
## After a successful failover MHA stops itself, and replication now runs from slave1 (node2) to slave2 (node3). Assuming the database on 192.168.0.101 has been repaired, we need to rebuild the HA setup and start MHA again.
## Start the database on 192.168.0.101, then run the following on the manager (node3) to find the CHANGE MASTER statement to execute:
[node3]# grep -i "All other slaves should start replication from here" /masterha/app1/manager.log
Fri Aug 25 12:11:40 2017 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.0.102', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000012', MASTER_LOG_POS=328, MASTER_USER='repl', MASTER_PASSWORD='123456';
// Note the values of MASTER_HOST=, MASTER_LOG_FILE= and MASTER_LOG_POS=
[node1]# mysql -uroot
mysql> CHANGE MASTER TO MASTER_HOST='192.168.0.102', MASTER_LOG_FILE='mysql-bin.000012', MASTER_LOG_POS=328, MASTER_USER='repl', MASTER_PASSWORD='123456';
mysql> start slave;
[node2]# mysql -uroot
mysql> show slave hosts;
+-----------+------+------+-----------+
| Server_id | Host | Port | Master_id |
+-----------+------+------+-----------+
| 3 | | 3306 | 2 |
| 1 | | 3306 | 2 |
+-----------+------+------+-----------+
### Start the MHA service again, then stop MySQL on node2 to simulate a crash
## Delete the app1.failover.complete file first, otherwise masterha_manager will refuse to start
[node3]# rm -f /masterha/app1/app1.failover.complete
[node3]# nohup masterha_manager --conf=/etc/masterha/app1.cnf < /dev/null > /masterha/app1/manager.log 2>&1 &
[node2]# service mysql3306 stop
[node1]# mysql -uroot
mysql> show slave hosts;
+-----------+------+------+-----------+
| Server_id | Host | Port | Master_id |
+-----------+------+------+-----------+
| 3 | | 3306 | 1 |
+-----------+------+------+-----------+
## Now repeat the recovery steps above (Restore the high availability architecture) to bring slave1 back into the cluster
VIP address failover
#### Note: every host must use eth0 as its network interface name
## First add the virtual IP on node1
[node1]# /sbin/ifconfig eth0:1 192.168.0.100/24
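## A quick check that the VIP is actually bound on the current master (and, after a failover, that it has moved to the new master):
[node1]# /sbin/ip addr show eth0 | grep 192.168.0.100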
## Update the following two lines in app1.cnf:
master_ip_failover_script=/etc/masterha/master_ip_failover            # executed on master failover
master_ip_online_change_script=/etc/masterha/master_ip_online_change  # executed on an online master switchover
## Create the master_ip_failover and master_ip_online_change scripts
[node3]# cd /etc/masterha/
[node3]# vim master_ip_failover
[node3]# vim master_ip_online_change
// The contents of both scripts are given at the end of this article
app1.cnf configuration file
app1.cnf
[server default]
user=monitor
password=123456
ping_interval=10
#repl_workdir=/masterha/app1
repl_user=repl
repl_password=123456
ssh_user=root
master_ip_failover_script=/etc/masterha/master_ip_failover
master_ip_online_change_script=/etc/masterha/master_ip_online_change
#report_script= /etc/masterha/send_report
shutdown_script=""
secondary_check_script=/usr/bin/masterha_secondary_check -s node2 -s node1
manager_workdir=/masterha/app1
manager_log=/masterha/app1/manager.log
#remote_workdir=/masterha/app1
[server1]
hostname=192.168.0.101
port=3306
master_binlog_dir=/data/mysql/data3306
candidate_master=1
#check_repl_delay=0
[server2]
hostname=192.168.0.102
port=3306
master_binlog_dir=/data/mysql/data3306
candidate_master=1
#check_repl_delay=0
[server3]
hostname=192.168.0.103
#port=3306
#master_binlog_dir=/data/mysql/data3306
no_master=1
# node3 never competes for master; it serves only as a slave and the manager node
read_only=1
The master_ip_failover script
master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '192.168.0.100/24';
my $key = "1";
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig eth0:$key down";
my $exit_code = 0;
GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);
exit &main();
sub main {
    #print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
        # If you manage the master IP address in a global catalog database,
        # invalidate orig_master_ip here.
        my $exit_code = 1;
        eval {
            print "\n\n\n***************************************************************\n";
            print "Disabling the VIP - $vip on old master: $orig_master_host\n";
            print "***************************************************************\n\n\n\n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        # All arguments are passed.
        # If you manage the master IP address in a global catalog database,
        # activate new_master_ip here.
        # You can also grant write access (create user, set read_only=0, etc.) here.
        my $exit_code = 10;
        eval {
            print "\n\n\n***************************************************************\n";
            print "Enabling the VIP - $vip on new master: $new_master_host \n";
            print "***************************************************************\n\n\n\n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        `ssh $ssh_user\@$orig_master_host \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}
# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
    print "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
The master_ip_online_change script
master_ip_online_change
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my $vip = '192.168.0.100/24';
my $key = "1";
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig eth0:$key down";
my $exit_code = 0;
my (
    $command, $orig_master_is_new_slave, $orig_master_host,
    $orig_master_ip, $orig_master_port, $orig_master_user,
    $orig_master_password, $orig_master_ssh_user, $new_master_host,
    $new_master_ip, $new_master_port, $new_master_user,
    $new_master_password, $new_master_ssh_user,
);
GetOptions(
    'command=s'                => \$command,
    'orig_master_is_new_slave' => \$orig_master_is_new_slave,
    'orig_master_host=s'       => \$orig_master_host,
    'orig_master_ip=s'         => \$orig_master_ip,
    'orig_master_port=i'       => \$orig_master_port,
    'orig_master_user=s'       => \$orig_master_user,
    'orig_master_password=s'   => \$orig_master_password,
    'orig_master_ssh_user=s'   => \$orig_master_ssh_user,
    'new_master_host=s'        => \$new_master_host,
    'new_master_ip=s'          => \$new_master_ip,
    'new_master_port=i'        => \$new_master_port,
    'new_master_user=s'        => \$new_master_user,
    'new_master_password=s'    => \$new_master_password,
    'new_master_ssh_user=s'    => \$new_master_ssh_user,
);
exit &main();
sub main {
    #print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
        # If you manage the master IP address in a global catalog database,
        # invalidate orig_master_ip here.
        my $exit_code = 1;
        eval {
            print "\n\n\n***************************************************************\n";
            print "Disabling the VIP - $vip on old master: $orig_master_host\n";
            print "***************************************************************\n\n\n\n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        # All arguments are passed.
        # If you manage the master IP address in a global catalog database,
        # activate new_master_ip here.
        # You can also grant write access (create user, set read_only=0, etc.) here.
        my $exit_code = 10;
        eval {
            print "\n\n\n***************************************************************\n";
            print "Enabling the VIP - $vip on new master: $new_master_host \n";
            print "***************************************************************\n\n\n\n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        `ssh $orig_master_ssh_user\@$orig_master_host \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}
# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $new_master_ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $orig_master_ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
    print "Usage: master_ip_online_change --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
That covers how to build the MySQL MHA high availability architecture.