Percona MySQL 5.5.38 Dual-Master Replication & MMM Configuration


I. Overview

This setup removes the single point of failure of running a single MySQL master node and makes switching between databases easier.

II. How it works

It is an extension of MySQL master-slave replication: the two MySQL nodes act as master and slave for each other, and both can serve reads and writes.

III. Test environment

192.168.0.54     db54     CentOS 6.5 x64    Percona MySQL 5.5.38

192.168.0.108    db108    CentOS 6.5 x64    Percona MySQL 5.5.38

IV. Configuration steps

1.    Install MySQL (the one-click installation script written earlier can be used); a minimal manual alternative is sketched below.
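
If the script is not at hand, installing Percona Server 5.5 from Percona's yum repository might look roughly like this (a sketch; the repository URL and package names are assumptions based on Percona's CentOS packages of that period and should be verified before use):

# rpm -Uvh http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
# yum install -y Percona-Server-server-55 Percona-Server-client-55
# /etc/init.d/mysql start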

 

2.    Check that binary logging (log_bin) is enabled on both DB servers

mysql> show variables like 'log_bin';

+---------------+-------+

| Variable_name | Value |

+---------------+-------+

| log_bin       | ON    |

+---------------+-------+

 

3.    Set a distinct server-id on each server, then restart MySQL

On db54:  server-id = 54

On db108: server-id = 108 (a sample [mysqld] snippet follows)
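
For reference, the relevant [mysqld] entries might look like this (a sketch; the config file location and all other settings depend on your installation):

[mysqld]
log-bin   = mysql-bin
server-id = 54        # use 108 on db108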

 

Restart:

# /etc/init.d/mysql restart

 

4.    Open port 3306 between the two servers

On db54:

-A INPUT -s 192.168.0.108/32 -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT

On db108:

-A INPUT -s 192.168.0.54/32 -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT

 

5.    db54 as master, db108 as slave

5.1  On the master (db54), create the user the slave will replicate with

mysql> grant replication client, replication slave on *.* to repl@'192.168.0.108' identified by '123456';

mysql> flush privileges;

5.2  Reset the binary logs

mysql> reset master;

5.3  On the slave (db108), point it at the master

mysql> change master to master_host='192.168.0.54', master_user='repl', master_password='123456';

 

5.4  Start the slave threads and check their status

mysql> start slave;

mysql> show slave status\G

If the following two lines appear, replication has started correctly:

Slave_IO_Running: Yes

Slave_SQL_Running: Yes
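
From the shell, the interesting fields can be pulled out with something like this (a convenience one-liner, not part of the original procedure):

# mysql -uroot -p -e 'show slave status\G' | egrep 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_Error'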

 

5.5  Test master-slave replication from db54 to db108

5.5.1  On the master (db54), create the jibuqi database:

mysql> create database jibuqi;

mysql> show databases;

5.5.2  On the slave (db108), check the result:

mysql> show databases;

The database shows up on db108, as expected.

 

6.    db108 as master, db54 as slave

6.1  Add the following to the [mysqld] section of db54's configuration file (a short example of how the two auto-increment settings interact follows the list):

auto-increment-increment = 2      # total number of servers in the topology

auto-increment-offset = 1         # starting point for auto-increment values; must differ between the two DBs, otherwise primary keys will collide

replicate-do-db = jibuqi          # replicate only this database; other databases are not replicated
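
To see what the two auto-increment settings buy you, consider a hypothetical table with an AUTO_INCREMENT primary key (illustrative only):

mysql> create table jibuqi.t (id int auto_increment primary key, v varchar(10));
-- With increment=2 and offset=1, inserts on db54 get ids 1, 3, 5, ...
-- With increment=2 and offset=2, inserts on db108 get ids 2, 4, 6, ...
-- so rows written on the two masters can never collide on the primary key.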

 

Restart MySQL:

# /etc/init.d/mysql restart

 

mysql> show master status;

+------------------+----------+--------------+------------------+

| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |

+------------------+----------+--------------+------------------+

| mysql-bin.000003 |      107 |              |                  |

+------------------+----------+--------------+------------------+

6.2  On db108, create the replication user:

mysql> grant replication client, replication slave on *.* to repl@'192.168.0.54' identified by '123456';

mysql> flush privileges;

Add the following to the [mysqld] section of db108's configuration file:

log-bin = mysql-bin

auto-increment-increment = 2

auto-increment-offset = 2  # must differ from db54

replicate-do-db = jibuqi

Restart MySQL:

# /etc/init.d/mysql restart

 

mysql> show master status;

+------------------+----------+--------------+------------------+

| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |

+------------------+----------+--------------+------------------+

| mysql-bin.000005 |      107 |              |                  |

+------------------+----------+--------------+------------------+

 

6.3  On db54 and db108, point each server at the other as its master:

On db108:

mysql> stop slave;

mysql> change master to master_host='192.168.0.54', master_user='repl', master_password='123456', master_log_file='mysql-bin.000003', master_log_pos=107;

mysql> start slave;

 

On db54:

mysql> change master to master_host='192.168.0.108', master_user='repl', master_password='123456', master_log_file='mysql-bin.000005', master_log_pos=107;

mysql> start slave;

 

6.4  Test:

Import the table api_pedometeraccount into the jibuqi database on db54 and check that the table appears on db108 (it does).

Import the table api_pedometerdevice into the jibuqi database on db108 and check that the table appears on db54 (it does).
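
Run from the shell, the test might look like this (the dump file name is hypothetical):

On db54:
# mysql -uroot -p jibuqi < api_pedometeraccount.sql
On db108:
# mysql -uroot -p -e "show tables from jibuqi;"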

 

At this point, the dual-master replication setup is complete.

 

V. MySQL-MMM (Master-Master Replication Manager for MySQL)

5.1 Introduction

MMM (Master-Master Replication Manager for MySQL) is a flexible set of scripts for monitoring, failover, and management of MySQL master-master replication configurations (only one node accepts writes at any given time). It can also load-balance reads across any number of slaves in a standard master-slave setup, so it can be used to bring up virtual IPs on a group of replicating servers, and it ships with scripts for data backup and for resynchronizing nodes. MySQL itself provides no replication failover solution; MMM fills that gap and makes MySQL highly available. Beyond managing the floating IPs, it will, when the current master fails, automatically repoint the back-end slaves at the new master without any manual change to the replication configuration. It is one of the more mature solutions of its kind. See the official site for details: http://mysql-mmm.org

 

5.2 Architecture

a. Server list

Server     Hostname    IP              Server ID    MySQL version    OS
master1    db1         192.168.1.19    54           MySQL 5.5        CentOS 6.5
master2    db2         192.168.1.20    108          MySQL 5.5        CentOS 6.5

 

b. Virtual IP list

VIP              Role     Description
192.168.1.190    write    Write VIP used by the application
192.168.1.201    read     Read VIP used by the application
192.168.1.203    read     Read VIP used by the application

 

5.3 Installing MMM (installed on db1 here; a separate dedicated server could also be used)

a. Upgrade the Perl modules

# yum install -y cpan

# cpan -i Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Mail::Send Net::Ping Proc::Daemon Time::HiRes Params::Validate Net::ARP

The File::Basename and File::stat modules may fail to install here (they seem to need Perl 5.12.2 or newer), but MMM works without upgrading them. Net::ARP, on the other hand, must be installed, otherwise the VIPs will never come up.
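
A quick way to confirm that Net::ARP really is available (a sanity check, not part of the original write-up):

# perl -MNet::ARP -e 'print "ok\n"'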

 

b. Download and install mysql-mmm

# wget http://mysql-mmm.org/_media/:mmm2:mysql-mmm-2.2.1.tar.gz -O mysql-mmm-2.2.1.tar.gz

# tar -zxvf mysql-mmm-2.2.1.tar.gz

# cd mysql-mmm-2.2.1

# make; make install

 

c. Configure mysql-mmm

Edit the configuration files directly.

Configuration files to modify on db1:

mmm_agent.conf

# vim /etc/mysql-mmm/mmm_agent.conf

  include mmm_common.conf

  this db1

 

mmm_common.conf

# vim /etc/mysql-mmm/mmm_common.conf

active_master_role      writer

 

<host default>

   cluster_interface       eth0

   pid_path               /var/run/mysql-mmm/mmm_agentd.pid

   bin_path               /usr/libexec/mysql-mmm/

   replication_user        repl

   replication_password    123456

   agent_user              mmm_agent

   agent_password          mmm_agent

</host>

 

<host db1>

   ip      192.168.1.19

   mode    master

   peer    db2

</host>

 

<host db2>

   ip      192.168.1.20

   mode    master

   peer    db1

</host>

 

<role writer>

   hosts   db1, db2

   ips     192.168.1.190

   mode    exclusive   # exclusive: only one host may hold this role at any time

</role>

 

<role reader>

   hosts   db1, db2

   ips     192.168.1.201, 192.168.1.203

   mode    balanced    # balanced: several hosts may hold this role at the same time

</role>

 

mmm_mon.conf

# vim /etc/mysql-mmm/mmm_mon.conf

include mmm_common.conf

 

<monitor>

       ip               127.0.0.1

       pid_path         /var/run/mmm_mond.pid

       bin_path         /usr/lib/mysql-mmm/

       status_path      /var/lib/misc/mmm_mond.status

       auto_set_online  5    # seconds after which a recovered host is automatically set online again

       ping_ips         192.168.1.19, 192.168.1.20

</monitor>

 

<host default>

       monitor_user                   mmm_monitor

       monitor_password                mmm_monitor

</host>

 

debug 0

 

Configuration files to modify on db2:

mmm_agent.conf

# vim /etc/mysql-mmm/mmm_agent.conf

  include mmm_common.conf

  this db2

 

mmm_common.conf on db2 has exactly the same contents as on db1.
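
The simplest way to keep the two copies identical is to copy the file over (assuming root SSH access between the hosts):

# scp /etc/mysql-mmm/mmm_common.conf db2:/etc/mysql-mmm/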

 

d. Start the MMM daemons:

On db1, start the agent and the monitor:

# /etc/init.d/mysql-mmm-agent start

  Daemon bin: '/usr/sbin/mmm_agentd'

  Daemon pid: '/var/run/mmm_agentd.pid'

  Starting MMM Agent daemon... Ok

 

# /etc/init.d/mysql-mmm-monitor start

Daemon bin: '/usr/sbin/mmm_mond'

Daemon pid: '/var/run/mmm_mond.pid'

Starting MMM Monitor daemon: Base class package "Class::Singleton" is empty.

   (Perhaps you need to 'use' the module which defines that package first,

   or make that module available in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .).

 at /usr/share/perl5/vendor_perl/MMM/Monitor/Agents.pm line 2

BEGIN failed--compilation aborted at /usr/share/perl5/vendor_perl/MMM/Monitor/Agents.pm line 2.

Compilation failed in require at /usr/share/perl5/vendor_perl/MMM/Monitor/Monitor.pm line 15.

BEGIN failed--compilation aborted at /usr/share/perl5/vendor_perl/MMM/Monitor/Monitor.pm line 15.

Compilation failed in require at /usr/sbin/mmm_mond line 28.

BEGIN failed--compilation aborted at /usr/sbin/mmm_mond line 28.

Failed

Starting mysql-mmm-monitor failed. The fix is to install Class::Singleton from CPAN:

# perl -MCPAN -e shell

Terminal does not support AddHistory.

cpan shell -- CPAN exploration and modules installation (v1.9402)

Enter 'h' for help.

cpan[1]> Class::Singleton

Catching error: "Can't locate object method \"Singleton\" via package \"Class\" (perhaps you forgot to load \"Class\"?) at /usr/share/perl5/CPAN.pm line 375, <FIN> line 1.\cJ" at /usr/share/perl5/CPAN.pm line 391

         CPAN::shell() called at -e line 1

cpan[2]> Class

Unknown shell command 'Class'. Type ? for help.

 

cpan[3]> install Class::Singleton

CPAN: Storable loaded ok (v2.20)

Going to read '/root/.cpan/Metadata'

  Database was generated on Thu, 27 Nov 2014 08:53:16 GMT

CPAN: LWP::UserAgent loaded ok (v5.833)

CPAN: Time::HiRes loaded ok (v1.9726)

Warning: no success downloading '/root/.cpan/sources/authors/01mailrc.txt.gz.tmp47425'. Giving up on it. at /usr/share/perl5/CPAN/Index.pm line 225

Fetching with LWP:

 http://www.perl.org/CPAN/authors/01mailrc.txt.gz

Going to read '/root/.cpan/sources/authors/01mailrc.txt.gz'

............................................................................DONE

Fetching with LWP:

 http://www.perl.org/CPAN/modules/02packages.details.txt.gz

Going to read '/root/.cpan/sources/modules/02packages.details.txt.gz'

  Database was generated on Fri, 28 Nov 2014 08:29:02 GMT

..............

  New CPAN.pm version (v2.05) available.

  [Currently running version is v1.9402]

  You might want to try

    install CPAN

    reload cpan

  to both upgrade CPAN.pm and run the new version without leaving

  the current session.

 

 

..............................................................DONE

Fetching with LWP:

 http://www.perl.org/CPAN/modules/03modlist.data.gz

Going to read '/root/.cpan/sources/modules/03modlist.data.gz'

DONE

Going to write /root/.cpan/Metadata

Running install for module 'Class::Singleton'

CPAN: Data::Dumper loaded ok (v2.124)

'YAML' not installed, falling back to Data::Dumper and Storable to read prefs '/root/.cpan/prefs'

Running make for S/SH/SHAY/Class-Singleton-1.5.tar.gz

CPAN: Digest::SHA loaded ok (v5.47)

Checksum for /root/.cpan/sources/authors/id/S/SH/SHAY/Class-Singleton-1.5.tar.gz ok

Scanning cache /root/.cpan/build for sizes

............................................................................DONE

Class-Singleton-1.5/

Class-Singleton-1.5/Changes

Class-Singleton-1.5/lib/

Class-Singleton-1.5/lib/Class/

Class-Singleton-1.5/lib/Class/Singleton.pm

Class-Singleton-1.5/Makefile.PL

Class-Singleton-1.5/MANIFEST

Class-Singleton-1.5/META.yml

Class-Singleton-1.5/README

Class-Singleton-1.5/t/

Class-Singleton-1.5/t/singleton.t

CPAN: File::Temp loaded ok (v0.22)

  CPAN.pm: Going to build S/SH/SHAY/Class-Singleton-1.5.tar.gz

Checking if your kit is complete...

Looks good

Generating a Unix-style Makefile

Writing Makefile for Class::Singleton

Writing MYMETA.yml and MYMETA.json

Could not read '/root/.cpan/build/Class-Singleton-1.5-42kiLS/MYMETA.yml'. Falling back to other methods to determine prerequisites

cp lib/Class/Singleton.pm blib/lib/Class/Singleton.pm

Manifying 1 pod document

  SHAY/Class-Singleton-1.5.tar.gz

  /usr/bin/make -- OK

Warning (usually harmless): 'YAML' not installed, will not store persistent state

Running make test

PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t

t/singleton.t.. ok    

All tests successful.

Files=1, Tests=29,  0 wallclock secs ( 0.01 usr  0.01 sys +  0.01 cusr  0.00 csys =  0.03 CPU)

Result: PASS

  SHAY/Class-Singleton-1.5.tar.gz

  /usr/bin/make test -- OK

Warning (usually harmless): 'YAML' not installed, will not store persistent state

Running make install

Prepending /root/.cpan/build/Class-Singleton-1.5-42kiLS/blib/arch /root/.cpan/build/Class-Singleton-1.5-42kiLS/blib/lib to PERL5LIB for 'install'

Manifying 1 pod document

Installing /usr/local/share/perl5/Class/Singleton.pm

Installing /usr/local/share/man/man3/Class::Singleton.3pm

Appending installation info to /usr/lib64/perl5/perllocal.pod

  SHAY/Class-Singleton-1.5.tar.gz

  /usr/bin/make install  -- OK

Warning (usually harmless): 'YAML' not installed, will not store persistent state

 

cpan[4]> exit

Terminal does not support GetHistory.

Lockfile removed.

 

# /etc/init.d/mysql-mmm-monitor start

Daemon bin: '/usr/sbin/mmm_mond'

Daemon pid: '/var/run/mmm_mond.pid'

Starting MMM Monitor daemon: Ok

 

On db2, start the agent:

# /etc/init.d/mysql-mmm-agent start

  Daemon bin: '/usr/sbin/mmm_agentd'

  Daemon pid: '/var/run/mmm_agentd.pid'

  Starting MMM Agent daemon... Ok

 

e. Adjust the firewall and open the MMM ports as needed (details omitted).

f. On db1 and db2, create the agent and monitor accounts that MMM uses to check MySQL status.

mysql> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.1.20' identified by 'mmm_agent';

mysql> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.1.19' identified by 'mmm_agent';

mysql> grant super, replication client, process on *.* to 'mmm_agent'@'localhost' identified by 'mmm_agent';

mysql> grant super, replication client, process on *.* to 'mmm_agent'@'127.0.0.1' identified by 'mmm_agent';

mysql> grant super, replication client, process on *.* to 'mmm_monitor'@'192.168.1.20' identified by 'mmm_monitor';

mysql> grant super, replication client, process on *.* to 'mmm_monitor'@'192.168.1.19' identified by 'mmm_monitor';

mysql> grant super, replication client, process on *.* to 'mmm_monitor'@'localhost' identified by 'mmm_monitor';

mysql> grant super, replication client, process on *.* to 'mmm_monitor'@'127.0.0.1' identified by 'mmm_monitor';

mysql> flush privileges;
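
Before starting the daemons it may be worth confirming that the accounts can actually log in across the network (a quick check, not part of the original procedure); for example, run on db1:

# mysql -ummm_monitor -pmmm_monitor -h 192.168.1.20 -e "select 1"
# mysql -ummm_agent -pmmm_agent -h 192.168.1.20 -e "select 1"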

 

 

g. Check the status (on the machine running the monitor):

# mmm_control ping

OK: Pinged successfully!

 

# mmm_control show

 db1(192.168.1.19) master/ONLINE. Roles: reader(192.168.1.203),writer(192.168.1.190)

 db2(192.168.1.20) master/ONLINE. Roles: reader(192.168.1.201)

 

# mmm_control checks

db2 ping         [last change: 2014/12/01 13:49:47]  OK

db2 mysql        [last change: 2014/12/01 13:49:47]  OK

db2 rep_threads  [last change: 2014/12/01 13:49:47]  OK

db2 rep_backlog  [last change: 2014/12/01 13:49:47]  OK: Backlog is null

db1 ping         [last change: 2014/12/01 13:49:47]  OK

db1 mysql        [last change: 2014/12/01 13:49:47]  OK

db1 rep_threads  [last change: 2014/12/01 13:52:19]  OK

db1 rep_backlog  [last change: 2014/12/01 13:49:47]  OK: Backlog is null

 

# mmm_control help

Valid commands are:

   help                             - show this message

   ping                             - ping monitor

   show                             - show status

   checks [<host>|all [<check>|all]] - show checks status

   set_online <host>                - set host <host> online

   set_offline <host>               - set host <host> offline

   mode                             - print current mode.

   set_active                       - switch into active mode.

   set_manual                       - switch into manual mode.

   set_passive                      - switch into passive mode.

   move_role [--force] <role> <host> - move exclusive role <role> to host <host>

                                       (Only use --force if you know what you are doing!)

   set_ip <ip> <host>                - set role with ip <ip> to host <host>
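
For example, after a failed master has been repaired and set back online, the writer role can be moved back by hand (illustrative; the role and host names follow the configuration above):

# mmm_control move_role writer db1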

 

 

h. Failover test

Information on DB1:

# mmm_control show

  db1(192.168.1.19) master/ONLINE. Roles:reader(192.168.1.203), writer(192.168.1.190)

  db2(192.168.1.20) master/ONLINE. Roles:reader(192.168.1.201)

 

# ip a

1: lo:<LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:93:d2:50 brd ff:ff:ff:ff:ff:ff

    inet 192.168.1.19/23 brd 192.168.1.255 scope global eth0

    inet 192.168.1.203/32 scope global eth0

    inet 192.168.1.190/32 scope global eth0

    inet6 fe80::20c:29ff:fe93:d250/64 scope link

valid_lft forever preferred_lft forever

 

Information on DB2:

# ip a

1: lo:<LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:9f:7c:c6 brd ff:ff:ff:ff:ff:ff

    inet 192.168.1.20/23 brd 192.168.1.255 scope global eth0

    inet 192.168.1.201/32 scope global eth0

    inet6 fe80::20c:29ff:fe9f:7cc6/64 scope link

valid_lft forever preferred_lft forever

 

Now stop the MySQL service on DB1 and see whether MMM moves all the VIPs over to DB2.

On DB1:

# /etc/init.d/mysql stop

Shutting down MySQL (Percona Server)..... SUCCESS!

 

# ip a

1: lo:<LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:93:d2:50 brd ff:ff:ff:ff:ff:ff

    inet 192.168.1.19/23 brd 192.168.1.255 scope global eth0

    inet6 fe80::20c:29ff:fe93:d250/64 scope link

valid_lft forever preferred_lft forever

 

 

On DB2:

# ip a

1: lo:<LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:9f:7c:c6 brd ff:ff:ff:ff:ff:ff

    inet 192.168.1.20/23 brd 192.168.1.255 scope global eth0

    inet 192.168.1.201/32 scope global eth0

    inet 192.168.1.203/32 scope global eth0

    inet 192.168.1.190/32 scope global eth0

    inet6 fe80::20c:29ff:fe9f:7cc6/64 scope link

valid_lft forever preferred_lft forever

 

All VIPs moved over to DB2, so MMM's failover works as expected.

 

To add read/write splitting on top of this setup, mysql-proxy can be used; a rough sketch follows.
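
A minimal mysql-proxy invocation pointing writes at the write VIP and reads at the two read VIPs might look like this (a sketch; the listen port and the rw-splitting.lua path are assumptions that depend on the installed mysql-proxy version):

# mysql-proxy --daemon \
    --proxy-address=0.0.0.0:3307 \
    --proxy-backend-addresses=192.168.1.190:3306 \
    --proxy-read-only-backend-addresses=192.168.1.201:3306 \
    --proxy-read-only-backend-addresses=192.168.1.203:3306 \
    --proxy-lua-script=/usr/share/mysql-proxy/rw-splitting.lua

Applications would then connect to the proxy (port 3307 here) instead of connecting to MySQL directly.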

This article originally appeared on the "运维之道" blog; please keep this attribution: http://pesen.blog.51cto.com/4575807/1585010