

MySQL Load Balancing with HAProxy - Tutorial

Table of Contents

1. Introduction
2. What is HAProxy?
3. Health Checks for MySQL
3.1. Health Check Script
3.1.1. mysqlchk for Galera Cluster
3.1.2. mysqlchk for MySQL Replication
3.1.3. mysqlchk-iptables for Galera Cluster
3.2. Health Check Methods
3.2.1. mysql-check
3.2.2. tcp-check
3.2.3. httpchk
4. Failure Detection and Failover
5. Read/Write Splitting with HAProxy
6. Integration with ClusterControl
7. HAProxy Redundancy with Keepalived
8. HAProxy Statistics
9. Troubleshooting & Workaround
9.1. MySQL Deadlocks in Galera
9.2. MySQL Server has gone away
9.3. Non-uniform Hardware

1. Introduction
Applications typically connect to a database cluster or replicated setup by opening connections on one of the database nodes in order to run transactions. If the database node fails, the client would need to reconnect to another database node before it can continue to serve requests.

There are different ways to provide connectivity to one or more MySQL database servers. One way is to use a database driver that supports connection pooling, load balancing and failover, for example:
JDBC driver for MySQL (MySQL Connector/J)
PHP MySQL native driver for master-slave (mysqlnd_ms)

The above database drivers are built to provide transparency for clients when connecting to standalone MySQL servers, MySQL Cluster (NDB) or MySQL Replication setups. However, in other clustering setups like Galera Cluster for MySQL or MariaDB, the JDBC and PHP drivers are not aware of internal Galera state information. For instance, a Galera donor node might be in read-only mode while it is helping another node resynchronize (if the SST method is mysqldump or rsync), or it could be in Non-Primary state if a split-brain happens. Another solution is to use a load balancer between the clients and the database cluster.

This tutorial will walk you through how to deploy, configure and manage MySQL load balancing with HAProxy using ClusterControl.

You might also want to view these 2 webinar replays:


How to set up SQL Load Balancing with HAProxy
Performance Tuning of HAProxy for Database Load Balancing

2. What is HAProxy?

HAProxy stands for High Availability Proxy. It is a fast and reliable software load balancer that distributes a workload (in this case, MySQL connections) across a set of backend servers, which maximizes throughput, optimizes resource usage and prevents a single server from becoming overloaded with too many requests. All clients connect to the HAProxy instance, and the reverse proxy forwards the connection to one of the available MySQL servers based on the load-balancing algorithm used.

One possible setup is to install an HAProxy on each web server (or application server making requests on the database). This works fine if there are only a few web servers, so that the load introduced by the health checks is kept in check. The web server would connect to the local HAProxy (e.g. making a MySQL connection on 127.0.0.1:3306), and can access all the database servers. The web server and HAProxy together form a working unit, so the web server will not work if HAProxy is not available.
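For example, an application on the web server would simply point its MySQL client at the local proxy; the user and schema below are placeholders:

$ mysql -h 127.0.0.1 -P 3306 -u appuser -p mydb

HAProxy then forwards that connection to one of the backend MySQL servers according to the configured balancing algorithm.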

With HAProxy in the load balancer tier, you will have the following advantages:
All applications access the cluster via one single IP address or hostname. The topology of the database cluster is
masked behind HAProxy.
MySQL connections are load-balanced between available DB nodes.
It is possible to add or remove database nodes without any changes to the applications.
Once the maximum number of database connections (in MySQL) is reached, HAProxy queues additional new
connections. This is a neat way of throttling database connection requests and achieving overload protection.

ClusterControl supports HAProxy deployment right from the UI, and by default it supports three load-balancing algorithms:
roundrobin, leastconn or source. We recommend placing HAProxy between clients and a pool of database
servers, especially for Galera Cluster or MySQL Cluster where the backends are treated equally.

3. Health Checks for MySQL


It is possible to have HAProxy check that a server is up by just making a connection to the MySQL port (usually 3306);
however, this is not good enough. The instance might be up, but the underlying storage engine might not be working as it
should. There are specific checks that need to be passed, depending on whether the clustering type is Galera, MySQL
Replication or MySQL Cluster.

3.1. Health Check Script

The best way to perform a MySQL health check is by using a custom shell script which determines whether a MySQL server
is available by carefully examining its internal state, which depends on the clustering solution used. By default,
ClusterControl provides its own version of the health check script called mysqlchk, which resides on each MySQL server in the load
balancing set and has the ability to return an HTTP response status and/or standard output (stdout), which is useful for a TCP
health check.

3.1.1. mysqlchk for Galera Cluster

If the backend MySQL server is healthy, the script returns a simple HTTP 200 OK status code with exit status 0.
Otherwise, the script returns 503 Service unavailable and exit status 1. Using xinetd is the simplest way to get the health check
script executed, by making it daemonize and listen to a custom port (default is 9200). HAProxy will then connect to this port
and request the health check output. If the health check method is httpchk, HAProxy will look for the HTTP response code,
and if the method is tcp-check, it will look for the expect string (as shown in section 3.2.2).
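Below is a minimal sketch of what the xinetd service definition for mysqlchk typically looks like; the exact file deployed by ClusterControl may differ in its options, and the allowed network shown here is an assumption:

# /etc/xinetd.d/mysqlchk (illustrative)
service mysqlchk
{
        disable         = no
        flags           = REUSE
        socket_type     = stream
        port            = 9200
        wait            = no
        user            = nobody
        server          = /usr/local/sbin/mysqlchk
        log_on_failure  += USERID
        only_from       = 0.0.0.0/0
        per_source      = UNLIMITED
}

The service name also needs an entry such as "mysqlchk 9200/tcp" in /etc/services so that xinetd can bind the port.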

The following flowchart illustrates the process to report the health of a Galera node for multi-master setup:

The template file is located at /usr/share/cmon/templates/mysqlchk.galera on the ClusterControl server. This mysqlchk script is automatically installed by ClusterControl on each Galera node participating in the load balancing set.
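Since the flowchart image is not reproduced here, the following simplified shell sketch shows the kind of checks such a script performs for Galera; it is not ClusterControl's exact template, and the credentials are placeholders:

#!/bin/bash
# Simplified Galera health check (illustrative, not the ClusterControl template).
MYSQL="mysql -u clustercheck -pchangeme -h 127.0.0.1"

WSREP_STATE=$($MYSQL -Nse "SHOW STATUS LIKE 'wsrep_local_state';" 2>/dev/null | awk '{print $2}')
READ_ONLY=$($MYSQL -Nse "SHOW GLOBAL VARIABLES LIKE 'read_only';" 2>/dev/null | awk '{print $2}')

# wsrep_local_state 4 = Synced. A donor node (state 2) is normally excluded
# unless the SST method is non-blocking and donors are explicitly allowed.
if [ "$WSREP_STATE" = "4" ] && [ "$READ_ONLY" = "OFF" ]; then
    printf "HTTP/1.1 200 OK\r\n\r\nMySQL is running.\r\n"
    exit 0
else
    printf "HTTP/1.1 503 Service Unavailable\r\n\r\nMySQL is *down*.\r\n"
    exit 1
fi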

3.1.2. mysqlchk for MySQL Replication

This script is based on the standard mysqlchk script but tailored to monitor MySQL Replication backend servers correctly.
The template is available in this GitHub repository, and you can use it by replacing the default template located at
/usr/share/cmon/templates/mysqlchk.mysql before the HAProxy deployment begins. It's similar to mysqlchk for Galera Cluster,
where xinetd is required to daemonize the health check script.

The script detects the MySQL Replication role based on the following flow chart:

Setting up HAProxy for MySQL Replication requires two different HAProxy listeners, e.g. port 3307 for writes to the master
and port 3308 for reads to all available slaves (including the master); a configuration sketch is shown in section 5. We have covered this in this blog post.

3.1.3. mysqlchk-iptables for Galera Cluster

This is a background script that checks the availability of a Galera node and adds a redirection port using iptables if the Galera
node is healthy (instead of returning an HTTP response). This allows other TCP load balancers with limited health check
capabilities to monitor the backend Galera nodes correctly.
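Conceptually, the script adds or removes a NAT redirect rule depending on the node's health; the port numbers below are purely illustrative:

# Node is healthy: expose a check port 3308 that redirects to mysqld on 3306
iptables -t nat -A PREROUTING -p tcp --dport 3308 -j REDIRECT --to-ports 3306

# Node is unhealthy: remove the redirect so the load balancer's TCP check on 3308 fails
iptables -t nat -D PREROUTING -p tcp --dport 3308 -j REDIRECT --to-ports 3306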

Other than HAProxy, you can now use your favorite reverse proxy to load balance requests across Galera nodes, namely:
nginx 1.9 (--with-stream)
keepalived
IPVS
distributor
balance
pen

This health check script is out of the scope of this tutorial, since it is built for TCP load balancers (other than HAProxy) with
limited health check capabilities to monitor the backend Galera nodes correctly. You can watch it in action in the following
screencast:

3.2. Health Check Methods

HAProxy offers several backend health check methods usable for MySQL through the following options:
mysql-check
tcp-check
httpchk

3.2.1. mysql-check

The check consists of sending two MySQL packets: one Client Authentication packet and one QUIT packet, to correctly
close the MySQL session. HAProxy then parses the MySQL Handshake Initialisation packet and/or Error packet. It is a basic
but useful test which does not produce errors or aborted connects on the server. However, it requires adding a user
in the mysql.user table, like this:

USE mysql;
INSERT INTO user (Host,User) VALUES ('<ip_of_haproxy>','<username>');
FLUSH PRIVILEGES;
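
On the HAProxy side, the corresponding check is enabled with the mysql-check option, referencing the user created above. A minimal sketch (the listener name and IP addresses are illustrative):

listen  mysql_3306
        bind *:3306
        mode tcp
        option mysql-check user <username>
        balance leastconn
        server db1 192.168.55.111:3306 check
        server db2 192.168.55.112:3306 check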

Take note that this does not check database presence nor database consistency. To do this, we must use an external check
(via xinetd) which is explained in the next section.

3.2.2. tcp-check

By default, if "check" is set, the server is considered available when it’s able to accept periodic TCP connections. This is not
robust enough for a database backend, since the database server might be able to respond to connection requests while
being in a non-operational state. The instance might be up, but the underlying storage engine might not be working
properly. Also, there are specific checks that need to be done, depending on whether the clustering type is Galera or
MySQL Cluster (NDB).

The mysqlchk script provided by ClusterControl supports returning an HTTP status code and standard output (stdout). By utilizing
the stdout in xinetd, we can extend the tcp-check capability to make it more accurate with respect to the Galera or MySQL Replication
node status. The following example configuration shows how to use it:

listen  haproxy_192.168.55.110_3307
        bind *:3307
        mode tcp
        timeout client  10800s
        timeout server  10800s
        balance leastconn
        option tcp-check
        tcp-check expect string is\ running.
        option allbackups
        default-server port 9200 inter 2s downinter 5s rise 3 fall 2 slowstart 60s maxconn 64 maxqueue 128 weight 100
        server galera1 192.168.55.111:3306 check
        server galera2 192.168.55.112:3306 check
        server galera3 192.168.55.113:3306 check

The configuration above tells HAProxy to perform health checks using a TCP send/expect sequence. It connects to port
9200 of the database node and expects a string that contains "is\ running" (the backslash is used to escape the whitespace). To verify
the mysqlchk output through xinetd port 9200, telnet to the database node from the HAProxy node:

$ telnet 192.168.55.111 9200
Trying 192.168.55.111...
Connected to 192.168.55.111.
Escape character is '^]'.
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 43

<html><body>MySQL is running.</body></html>

Connection closed by foreign host.

You can use a similar configuration for MySQL Replication, where the expect string for the master is "MySQL Master is
running". For more details on an example deployment of HAProxy as a MySQL Replication load balancer, please read this
blog.

ClusterControl defaults to using httpchk, as described in the next section.

3.2.3. httpchk

Option httpchk uses the HTTP protocol to check the servers' health. This is common if you want to load balance an HTTP
service, where HAProxy ensures the backend returns specific HTTP response codes before routing the incoming
connections. This option does not necessarily require an HTTP backend; it also works with plain TCP backends. Using
httpchk is the preferred option whenever possible, since it utilizes fewer resources with stateless HTTP connections.

listen  haproxy_192.168.55.110_3307
        bind *:3307
        mode tcp
        timeout client  10800s
        timeout server  10800s
        balance leastconn
        option httpchk
        option allbackups
        default-server port 9200 inter 2s downinter 5s rise 3 fall 2 slowstart 60s maxconn 64 maxqueue 128 weight 100
        server galera1 192.168.55.111:3306 check
        server galera2 192.168.55.112:3306 check
        server galera3 192.168.55.113:3306 check

The example above tells HAProxy to connect to port 9200, where xinetd is listening on the database servers.
HAProxy will look for an expected HTTP response code. The mysqlchk script will return 'HTTP/1.1 200 OK' if the server is
healthy, or 'HTTP/1.1 503 Service Unavailable' otherwise.

4. Failure Detection and Failover


When a database node fails, the database connections that have been opened on that node will also fail. It is important
that HAProxy does not redirect new connection requests to the failed node.

There are several user-defined parameters that determine how fast HAProxy will be able to detect that a server is not
available. The following is the example HAProxy configuration deployed by ClusterControl, located at /etc/haproxy/haproxy.cfg:

listen  haproxy_192.168.55.110_3307
        bind *:3307
        mode tcp
        timeout client  10800s
        timeout server  10800s
        balance leastconn
        option httpchk
        option allbackups
        default-server port 9200 inter 2s downinter 5s rise 3 fall 2 slowstart 60s maxconn 64 maxqueue 128 weight 100
        server galera1 192.168.55.111:3306 check
        server galera2 192.168.55.112:3306 check
        server galera3 192.168.55.113:3306 check

Quick explanation for each line above:


listen: The listen section defines a complete proxy, with its frontend and backend parts combined in one section. It is
generally useful for TCP-only traffic. Specify the HAProxy instance name next to it. The subsequent lines describing the
section must be indented.
bind: Bind to all IP addresses on this host on port 3307. Your clients will have to connect to the port defined in this
line.
mode: Protocol of the instance. For MySQL, the instance should work in pure TCP mode. A full-duplex connection will
be established between clients and servers, and no layer 7 examination will be performed.
timeout client: Maximum inactivity time in the client side. It’s recommended to keep the same value with timeout
server for predictability.
timeout server: Maximum inactivity time in the server side. It’s recommended to keep the same value with timeout
client for predictability.
balance: Load balancing algorithm. ClusterControl is able to deploy leastconn, roundrobin and source, though you
can customize the configuration at a later stage. Using leastconn is the preferred option so that the database server
with the lowest number of connections receives the connection. If the database servers have the same number of
connections, then roundrobin is performed to ensure that all servers are used.
option httpchk: Perform an HTTP-based health check instead. ClusterControl configures an xinetd script on each backend
server in the load balancing set which returns an HTTP response code.
option allbackups: The load balancing will be performed among all backup servers when all normal ones are
unavailable. This option is suitable if the MySQL server is configured as backup, as explained in the Troubleshooting
& Workaround section of this tutorial.
default-server: Default options for the backend servers listed under server option.
port: The backend health check port. ClusterControl configures an xinetd process listening on port 9200 on each
database node, running a custom health check script.
inter: The interval between health checks for a server that is "up", transitionally "up or down" or not yet checked is 2
seconds.
downinter: The down interval is 5 seconds when the server is 100% down or unreachable.
rise: The server will be considered available after 3 consecutive successful health checks.
fall: The server will be considered down/unavailable after 2 consecutive unsuccessful health checks.
slowstart: In 60 seconds, the number of connections accepted by the server will grow from 1 to 100% of the usual
dynamic limit after it gets back online.
maxconn: HAProxy will stop accepting new connections to this server when the number of connections reaches 64.
maxqueue: The maximal number of connections which will wait in the queue for this server. If this limit is reached,
next requests will be redispatched to other servers instead of indefinitely waiting to be served.
weight: In Galera, all nodes are usually treated equally, so setting it to 100 is a good start.
server: Define the backend server name, IP address, port and the server's options. We enabled health checks by using the
check option on each server. The rest of the options are the same as under default-server.

Based on the parameters above, failure detection and failover proceeds as follows:

1. Every 2 seconds, HAProxy performs health check on port 9200 of the backend server (port 9200 inter 2s).

2. If the health check fails, the fall count starts and the next check determines the second failure. After 5 seconds, if that try still
fails, HAProxy will mark the MySQL server as down (downinter 5s fall 2).
3. The MySQL server is automatically excluded from the list of available servers.

4. Once the MySQL server gets back online and a health check succeeds, the rise count starts and HAProxy checks whether the next
consecutive attempts also succeed. If the count reaches 3, the server will be marked as available (rise 3).
5. The MySQL server is automatically included into the list of available servers.

6. The MySQL server starts to accept the connections gradually for 60 seconds (slowstart 60s).

7. The MySQL server is up and fully operational.

5. Read/Write Splitting with HAProxy


HAProxy as a MySQL load balancer works similarly to a TCP forwarder, operating in the transport layer of the TCP/IP
model. It does not understand the MySQL queries (which operate at a higher layer) that it distributes to the backend
MySQL servers. Because of this, HAProxy is popular among multi-master replication setups like Galera Cluster and MySQL
Cluster, where all backend MySQL servers are treated equally and are able to handle the forwarded
reads and writes. Operating in the transport layer also incurs less overhead compared to a database-aware load
balancer/reverse proxy like MaxScale or ProxySQL.

That does not mean HAProxy is not applicable to other, non-multi-master setups, especially master-slave
replication. In MySQL Replication, things get a little bit more complicated. Writes must be forwarded only to the master, while
reads can be forwarded to all slaves (or the master) as long as the slave does not lag behind. Updating more than one
master in a replication setup can result in data inconsistency and cause replication to break.

To make HAProxy capable of handling reads and writes separately, one must:
Configure health checks for MySQL Replication. The health check script must be able to:
Report the replication role (master, slave or none)
Report the slave lag (if slave)
Be accessible by HAProxy (configured via xinetd or a forwarded port)
Create two HAProxy listeners, one for read and one for write:
Read listener - forward to all slaves (or master) to handle reads.
Write listener - forward writes to a single master.
Instruct your application to send reads/writes to the respective listener:
Build/modify your application to have the ability to send reads and writes to the respective listeners, or
Use an application connector which supports built-in read/write splitting. If you are using Java, you can use
Connector/J. For PHP, you can use mysqlnd_ms (the PHP native driver for master-slave). This will minimize the changes on the
application side.

We have covered the PHP case in detail in this blog post, High availability read-write splitting with php-mysqlnd, MySQL
Replication and HAProxy.
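
To illustrate, here is a hedged sketch of what the two listeners could look like, assuming the replication-aware mysqlchk from section 3.1.2 serves health checks on port 9200 and returns the string "MySQL Master is running" only on the master; listener names and IP addresses are illustrative:

# Write listener - only the node reporting itself as master passes the check
listen  haproxy_3307_write
        bind *:3307
        mode tcp
        balance leastconn
        option tcp-check
        tcp-check expect string MySQL\ Master\ is\ running.
        default-server port 9200 inter 2s downinter 5s rise 3 fall 2
        server db1 192.168.55.111:3306 check
        server db2 192.168.55.112:3306 check
        server db3 192.168.55.113:3306 check

# Read listener - any healthy node (master or slave) passes the check
listen  haproxy_3308_read
        bind *:3308
        mode tcp
        balance leastconn
        option tcp-check
        tcp-check expect string is\ running.
        default-server port 9200 inter 2s downinter 5s rise 3 fall 2
        server db1 192.168.55.111:3306 check
        server db2 192.168.55.112:3306 check
        server db3 192.168.55.113:3306 check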

6. Integration with ClusterControl


ClusterControl integrates with HAProxy to ease the deployment and management of the load balancer in combination with
a clustered MySQL backend like Galera or MySQL NDB Cluster. It is also possible to add an existing/already deployed
HAProxy instance to ClusterControl, so you can monitor and manage it directly from the ClusterControl UI together with the
database nodes.


To install, you just need to go to ClusterControl > Manage > Load Balancer > Install HAProxy tab and enter the required
information:
HAProxy Address: IP address or hostname of HAProxy node. ClusterControl must be able to connect via
passwordless SSH.
Listen Port: Port that the HAProxy instance will listen on. This port will be used for the load-balanced MySQL
connections.
Policy: Load balancing algorithm. Supported values are:
leastconn - The server with the lowest number of connections receives the connection.
roundrobin - Each server is used in turns, according to their weights.
source - The client IP address is hashed and divided by the total weight, so it will always reach the same server
as long as no server goes down or up.
Install from Package Manager: On Red Hat-based systems, HAProxy is installed via the yum package manager; on
Debian-based systems, apt-get is used.
Build from Source (the latest available source package will be used):
ClusterControl will compile the latest available source package downloaded from http://www.haproxy.org/#down.
This option is only required if you intend to use the latest version of HAProxy or if you are having problem with
the package manager of your OS distribution. Some older OS versions do not have HAProxy in their package
repositories.
Overwrite Existing /usr/local/sbin/mysqlchk on targets: If the mysqlchk script is already there, overwrite it for this
deployment. If you have adjusted the script to suit your needs, you might need to uncheck this.
Show Advanced Settings:
Stats Socket: UNIX socket file location for various statistics outputs. Default is /var/run/haproxy.socket and it’s
recommended not to change this.
Admin Port: Port for HAProxy admin-level statistic page. Default is 9600.
Admin User: Admin user when connecting to the statistic page.
Admin Password: Password for Admin User
Backend Name: The listener name for backend. No whitespace.
Timeout Server (seconds): Maximum inactivity time in the server side.
Timeout Client (seconds): Maximum inactivity time in the client side.
Max Connections Frontend: Maximum per-process number of concurrent connections for the frontend.
Max Connection Backend per instance: Limit the number of connections that can be made from HAProxy to each
MySQL Server. Connections exceeding this value will be queued by HAProxy. A best practice is to set it to less
than MySQL's max_connections to prevent connection flooding.
xinetd allow connections from: Only allow this network to connect to the health check script on MySQL server
via xinetd service.
Server Instances: List of MySQL servers in the cluster.
Include: Include the server in the load balancing set.
Role: Choose whether the node is Active or Backup. In Backup mode, the server is only used in load balancing
when all other Active servers are unavailable.

Once the dialog is filled in, click the 'Install HAProxy' button to trigger the deployment job:


ClusterControl will perform the following tasks when the deployment job is triggered:
1. Installs helper packages

2. Tunes the TCP stack of the instance

3. Copies and configures mysqlchk script (from template) on every Galera node

4. Installs and configures xinetd at /etc/xinetd.d/mysqlchk

5. Registers HAProxy node into ClusterControl

You can monitor the deployment progress under ClusterControl > Logs > Jobs, similar to example below:

By default, the HAProxy server will listen on port 3307 for connections. In this example, the HAProxy host IP address is
192.168.55.170. You can connect your applications to 192.168.55.170:3307 and requests will be load balanced on the
backend MySQL Servers.

Do not forget to GRANT access from the HAProxy server to the MySQL Servers, because the MySQL Servers will only see the HAProxy host as the source of the incoming connections.
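
For example, assuming the HAProxy host is 192.168.55.170 (as in this example) and the application connects as a hypothetical user 'appuser' to a schema 'mydb', the grant on each MySQL 5.x server would look something like this:

GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'192.168.55.170' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;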


The HAProxy process will be managed by ClusterControl, and is automatically restarted if it fails.


7. HAProxy Redundancy with Keepalived


Since all applications depend on HAProxy to connect to an available database node, to avoid a single point of
failure in HAProxy itself, one would set up two identical HAProxy instances (one active and one standby) and use
Keepalived to run VRRP between them. VRRP provides a virtual IP address to the active HAProxy, and transfers the virtual
IP to the standby HAProxy in case of failure. This is seamless because the two HAProxy instances need no shared state.

By adding Keepalived into the picture, our infrastructure will now look something like this:

In this example, we are using two nodes to act as load balancers with IP address failover in front of our database
cluster. A virtual IP (VIP) address floats between HAProxy #1 (master) and HAProxy #2 (backup). When
HAProxy #1 goes down, the VIP is taken over by HAProxy #2, and once HAProxy #1 is up again, the VIP fails back
to HAProxy #1 since it holds the higher priority number. The failover and failback process is automatic, controlled
by Keepalived.
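
The sketch below shows the general shape of such a Keepalived configuration on the master node; it is illustrative only (interface name, router ID, priorities and the VIP are assumptions) and is not necessarily what ClusterControl generates:

# /etc/keepalived/keepalived.conf on HAProxy #1 (illustrative)
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # returns non-zero if haproxy is not running
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER                  # the backup node would use state BACKUP
    interface eth0
    virtual_router_id 51
    priority 101                  # backup node gets a lower priority, e.g. 100
    virtual_ipaddress {
        192.168.55.100            # the VIP your applications connect to
    }
    track_script {
        chk_haproxy
    }
}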

You need at least two HAProxy instances in order to install Keepalived. Use "Install HAProxy" to install another
HAProxy instance, then go to ClusterControl > Manage > Load Balancer > Install Keepalived to install or add an existing
Keepalived instance, as shown in the following screenshot:


Make sure your network environment supports VRRP (IP protocol 112) for health check communication between the two
nodes. It's also possible to let Keepalived run in a non-multicast environment by configuring unicast, which ClusterControl will use by
default if the installed Keepalived is version 1.2.8 or later.

For more details on how ClusterControl configures Keepalived and what to expect from the failover and failback, see this
blog post.

8. HAProxy Statistics
Other than deployment and management, ClusterControl also provides insight into HAProxy statistics from the UI. From
ClusterControl, you can access the statistics page at ClusterControl > Nodes > choose the HAProxy node, similar to the screenshot
below:

You can enable/disable a server in the load balancing set by ticking/unticking the checkbox under the "Enabled"
column. This is very useful when you want your application to intentionally skip connecting to a server, e.g. for
maintenance, or for testing and validating new configuration parameters or optimized queries.
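
The same can be done from the command line on the HAProxy node through the stats socket, provided the socket was created with admin level; the socket path and backend/server names below are the defaults used in this tutorial:

# Show the current status of all backends and servers
$ echo "show stat" | socat unix-connect:/var/run/haproxy.socket stdio

# Temporarily take galera3 out of the load balancing set, then put it back
$ echo "disable server haproxy_192.168.55.110_3307/galera3" | socat unix-connect:/var/run/haproxy.socket stdio
$ echo "enable server haproxy_192.168.55.110_3307/galera3" | socat unix-connect:/var/run/haproxy.socket stdio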

It’s also possible to access the default HAProxy statistics page by connecting to port 9600 on the HAProxy node. The following
screenshot shows an example of connecting to http://[HAProxy_node_IP_address]:9600/ with the default username and
password "admin":


Based on the table legend, the green rows indicate that the servers are available, while red indicates down. When a server
becomes available again, you should notice the throttling part (last column) where "slowstart 60s" kicks in. This server receives
gradual connections, with its weight dynamically adjusted for 60 seconds before it reaches the expected weight
(weight 100).

9. Troubleshooting & Workaround


This section provides guidance on common issues when configuring MySQL with HAProxy, Keepalived and
ClusterControl.

9.1. MySQL Deadlocks in Galera

Galera Cluster has known limitations; one of them is that it uses cluster-wide optimistic locking. This may cause some
transactions to roll back. With an increasing number of writeable masters, the transaction rollback rate may increase,
especially if there is write contention on the same dataset. It is of course possible to retry the transaction and perhaps it
will COMMIT on retry, but this will add to the transaction latency. However, some designs are deadlock prone, e.g.
sequence tables.

The solution is to create another listener for single-node reads/writes and tell your application to send the problematic
queries to that listener. The following example shows snippets of HAProxy listeners for multi-node reads/writes on
port 3307 and single-node reads/writes on port 3308:

listen  haproxy_192.168.55.110_3307_multi
        bind *:3307
        mode tcp
        timeout client  10800s
        timeout server  10800s
        balance leastconn
        option httpchk
        option allbackups
        default-server port 9200 inter 2s downinter 5s rise 3 fall 2 slowstart 60s maxconn 64 maxqueue 128 weight 100
        server galera1 192.168.55.111:3306 check
        server galera2 192.168.55.112:3306 check
        server galera3 192.168.55.113:3306 check

listen  haproxy_192.168.55.110_3308_single
        bind *:3308
        mode tcp
        timeout client  10800s
        timeout server  10800s
        balance leastconn
        option httpchk
        option allbackups
        default-server port 9200 inter 2s downinter 5s rise 3 fall 2 slowstart 60s maxconn 64 maxqueue 128 weight 100
        server galera1 192.168.55.111:3306 check
        server galera2 192.168.55.112:3306 check backup
        server galera3 192.168.55.113:3306 check backup

We have covered this in detail in this blog post, Avoiding Deadlocks in Galera.

9.2. MySQL Server has gone away

This is usually caused when HAProxy has closed the connection due to a timeout, or the connection was closed on the server
side. Sometimes you can see this when the server is restarting, or when the connection has reached one of the following
timeouts (the defaults below are for the MySQL variables deployed by ClusterControl):


innodb_lock_wait_timeout - Default is 50s
interactive_timeout - Default is 28800s
lock_wait_timeout - Default is 31536000s
net_read_timeout - Default is 30s
net_write_timeout - Default is 60s
slave_net_timeout - Default is 3600s
thread_pool_idle_timeout - Default is 60s
wait_timeout - Default is 28800s

Our recommendation is to configure net_read_timeout and net_write_timeout with the same values as
timeout client and timeout server in the HAProxy configuration file.
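
For example, with the 10800s client/server timeouts used in the listener examples above, the matching MySQL settings would look like this (illustrative; align the numbers with your own HAProxy values):

-- On each MySQL server, align with HAProxy's "timeout client/server 10800s"
SET GLOBAL net_read_timeout = 10800;
SET GLOBAL net_write_timeout = 10800;
-- and persist the same values in my.cnf under [mysqld] so they survive a restart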

9.3. Non-uniform Hardware

If you have non-uniform hardware, setting a weight for each server might help to balance the load. All servers
will receive a load proportional to their weight relative to the sum of all weights, so the higher the weight, the higher the
load. It is recommended to start with values which can both grow and shrink, for instance between 10 and 100, to leave
enough room above and below for later adjustments.

Galera replication performance is determined by the slowest node in the cluster. Let's say in a three-node cluster, the third
node has half the capacity of the other two nodes. It's a good practice to reduce the weight for that
particular server by half, so it gets a fair share of connections and doesn't drag down the other members from running at full
capacity:

...
default-server port 9200 inter 2s downinter 5s rise 3 fall 2 slowstart 60s maxconn 64 maxqueue 128 weight 100
server galera1 192.168.55.111:3306 check
server galera2 192.168.55.112:3306 check
server galera3 192.168.55.113:3306 check weight 50
