Tuesday, April 16, 2013

MySQL clustering step by step


I have followed this link for the basics of MySQL clustering: MySQL clustering basics

Setting up the Cluster

To set up the cluster, you need three servers: two data nodes and one management node. I should point out that the management node is not required after the cluster install, but I strongly recommend keeping it, as it gives you the automatic failover capability (by default it acts as the arbitrator if the data nodes lose contact with each other). I will use three servers as examples:


Server1 192.168.52.128 (Cluster Management Server)
Server2 192.168.52.129 (Data Node 1)
Server3 192.168.52.130 (Data Node 2)
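
Optionally, you can add the servers to /etc/hosts on all three machines so they can refer to each other by name; the hostnames here are just examples, the cluster itself only needs the IP addresses:

192.168.52.128   server1    # cluster management server
192.168.52.129   server2    # data node 1
192.168.52.130   server3    # data node 2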

#############################################################################################!
The first step is to install the MySQL Cluster Management Server on Server1.
Let's download MySQL Cluster 6.2 from the MySQL website (http://dev.mysql.com/downloads/cluster/).
This guide is intended for Debian-based systems, so we will download the non-RPM package (mysql-cluster-gpl-6.2.15-linux-i686-glibc23.tar.gz).
Here are the steps to set up the MySQL Cluster Management Server (ndb_mgmd) and the Cluster Management Client (ndb_mgm):
mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
tar xvfz mysql-cluster-gpl-6.2.15-linux-i686-glibc23.tar.gz
cd mysql-cluster-gpl-6.2.15-linux-i686-glibc23
mv bin/ndb_mgm /usr/bin
mv bin/ndb_mgmd /usr/bin
chmod 755 /usr/bin/ndb_mg*
cd /usr/src
rm -rf /usr/src/mysql-mgm
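
To make sure the two binaries are in place and executable, you can print their versions (just a sanity check, not a required step):

ndb_mgm --version
ndb_mgmd --version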

Next step is to create the Cluster configuration file:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi config.ini

(I prefer nano as the text editor, simply because it is much easier to use than vi.)


Here is the sample config file:


[NDBD DEFAULT]
NoOfReplicas=2

[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]

[TCP DEFAULT]

# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (server1)
HostName=192.168.52.128

# Section for the storage nodes
[NDBD]
# IP address of the first data node (Server2)
HostName=192.168.52.129
DataDir=/var/lib/mysql-cluster

[NDBD]
# IP address of the second storage node (Server3)
HostName=192.168.52.130
DataDir=/var/lib/mysql-cluster

# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]
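
A side note on node IDs: the IDs you will see later in the management client (id=1, id=2, ...) are assigned from the order of the sections above. If you would rather pin them explicitly, you can add a NodeId line to each section (spelled Id in some older releases); a minimal sketch for the management node:

[NDB_MGMD]
NodeId=1
HostName=192.168.52.128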


Now let's start the Management Server:

ndb_mgmd -f /var/lib/mysql-cluster/config.ini

You should see output similar to this:

[root@localhost mysql]# ndb_mgmd -f /var/lib/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-5.5.29 ndb-7.2.10
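
You can also confirm from the shell that the management daemon is up, without entering the interactive client, by passing a command with -e:

ndb_mgm -e show

At this point only the management node will show as connected; the data nodes will appear once we set them up below.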


Now, we want the Management Server to start automatically in case of a system reboot, so we add a simple init script to do that:


echo 'ndb_mgmd -f /var/lib/mysql-cluster/config.ini' > /etc/init.d/ndb_mgmd
chmod 755 /etc/init.d/ndb_mgmd
update-rc.d ndb_mgmd defaults
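
The one-line script above works, but it has no shebang and no stop action. Here is a slightly more complete sketch, if you prefer (it overwrites the same /etc/init.d/ndb_mgmd file, so the update-rc.d registration above still applies):

cat > /etc/init.d/ndb_mgmd <<'EOF'
#!/bin/sh
# Minimal start/stop wrapper for the NDB management daemon.
# This is a sketch, not a full LSB init script.
case "$1" in
  start)
    ndb_mgmd -f /var/lib/mysql-cluster/config.ini
    ;;
  stop)
    killall ndb_mgmd
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
EOF
chmod 755 /etc/init.d/ndb_mgmd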

#######################################################################################################################
Data Node Configuration (Server2 and Server3):

Now let's set up the data nodes. Here are the steps (run them on both data nodes):


groupadd mysql
useradd -g mysql mysql
cd /usr/local/
wget <tarball URL from any mirror on MySQL's website>
tar xvfz mysql-cluster-gpl-6.2.15-linux-i686-glibc23.tar.gz
ln -s mysql-cluster-gpl-6.2.15-linux-i686-glibc23 mysql
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root:mysql .
chown -R mysql data
cp support-files/mysql.server /etc/init.d/
chmod 755 /etc/init.d/mysql.server
update-rc.d mysql.server defaults
cd /usr/local/mysql/bin
mv * /usr/bin
cd ../
rm -fr /usr/local/mysql/bin
ln -s /usr/bin /usr/local/mysql/bin

#################################################################################
Next we need to create the MySQL config file /etc/my.cnf on both data nodes:

vi /etc/my.cnf


Here is the sample file:

[mysqld]
ndbcluster
# IP address of the cluster management server (Server1)
ndb-connectstring=192.168.52.128

[mysql_cluster]
# IP address of the cluster management Server (Server1)
ndb-connectstring=192.168.52.128
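
A quick note on the connect string: the management server listens on port 1186 by default, so the lines above are equivalent to spelling the port out explicitly:

ndb-connectstring=192.168.52.128:1186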


Our MySQL installation is almost complete. Now let's create the data directories and start the MySQL server on both nodes:


mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
ndbd --initial
/etc/init.d/mysql.server start


Starting ndbd should produce output like this:

2013-02-08 08:52:46 [ndbd] INFO -- Angel connected to '192.168.52.128:1186'
2013-02-08 08:52:46 [ndbd] INFO -- Angel allocated nodeid

(Important: we need to run ndbd --initial only when we start MySQL Cluster for the first time, or if /var/lib/mysql-cluster/config.ini on the Management Server changes.)
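
In other words, the two start modes look like this:

ndbd --initial   # first-ever start, or after config.ini changed on the management server
ndbd             # every normal start after that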

The MySQL installation is complete, so now let's set a root password for our MySQL servers:

mysqladmin -u root password newrootpassword
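
Run this on both data nodes, since each mysqld keeps its own privilege tables (the mysql system database is not clustered by default). You can then verify the new password with a quick login test:

mysql -u root -p -e "SELECT VERSION();"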


Again, it makes sense to start the cluster nodes automatically in case of a system restart or failure.
Here are the ndbd init script and system startup links for that:

echo 'ndbd' > /etc/init.d/ndbd
chmod 755 /etc/init.d/ndbd
update-rc.d ndbd defaults


This completes our cluster installation process. Next, let's test it.



################################################################################################
Test:

On Cluster Management Server, run the Cluster Management Client:

ndb_mgm

It will take you to the ndb_mgm prompt:


-- NDB Cluster -- Management Client --
ndb_mgm>


Now type show on the prompt:

ndb_mgm> show;


You should see output similar to this:

ndb_mgm> show;
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.52.129 (Version: version number, Nodegroup: 0, Master)
id=3 @192.168.52.130 (Version: version number, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.52.128 (Version: version number)

[mysqld(API)] 2 node(s)
id=4 @192.168.52.129 (Version: version number)
id=5 @192.168.52.130 (Version: version number)

ndb_mgm>



Both data nodes should show as connected in the output above. Now type quit to close the management client:

ndb_mgm> quit;



Test the Cluster:

Now, let's create a test database on Server2 (192.168.52.129) and run some tests.

On Server2:

mysql -u root -p
CREATE DATABASE testdb;
USE testdb;
CREATE TABLE tblCustomer (ID INT) ENGINE=NDBCLUSTER;
INSERT INTO tblCustomer VALUES (1);
SELECT * FROM tblCustomer;
quit;

Pay attention to the CREATE TABLE statement: we must specify ENGINE=NDBCLUSTER for all tables that we want clustered. As stated earlier, MySQL Cluster only supports the NDB engine, so if you use any other engine, the table simply won't get clustered.
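
If you already have a table that was created with another engine, you can convert it in place rather than recreating it; for example:

ALTER TABLE tblCustomer ENGINE=NDBCLUSTER;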

The result of the SELECT statement would be:

mysql> SELECT * FROM tblCustomer;
+------+
| ID   |
+------+
|    1 |
+------+


Since clustering in MySQL is at the table level, not at the database level, we have to create the database separately on
Server3 (192.168.52.130) as well; afterwards, tblCustomer will be replicated with all its data (since its engine is NDBCLUSTER):


On Server3 (192.168.52.130):

mysql -u root -p
CREATE DATABASE testdb;
USE testdb;
SELECT * FROM tblCustomer;

Now, if we insert a row of data on Server3, it should be replicated back to Server2 (192.168.52.129):

INSERT INTO tblCustomer VALUES (2);


If we run a SELECT query on Server2, here is what we should see:


mysql> SELECT * FROM tblCustomer;

+------+
| ID   |
+------+
|    1 |
|    2 |
+------+


Test Node shutdown:

Now run the following on Server2 (192.168.52.129) to test what happens if a node goes offline:


killall ndbd

and run this command to make sure that all ndbd processes have terminated:


ps aux | grep ndbd | grep -iv grep


If you still see any processes, run this again:

killall ndbd

Now, let's go to the management server (Server1, 192.168.52.128) and run the following to check the cluster status:


ndb_mgm

On the ndb_mgm console, run:

show;

It should bring up output similar to the following:

ndb_mgm> show;
Connected to Management Server at: localhost:1186

Cluster Configuration
---------------------

[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from 192.168.52.129)
id=3 @192.168.52.130 (Version: -----, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.52.128 (Version: -----)

[mysqld(API)] 2 node(s)
id=4 @192.168.52.129 (Version: --------)
id=5 @192.168.52.130 (Version: --------)

ndb_mgm>

You see, Server2 is not connected anymore.

Type quit; to leave the ndb_mgm management console. Now, let's check on Server3
whether our database is still up and we can connect to it:

mysql -u root -p
USE testdb;
SELECT * FROM tblCustomer;
quit;

It should bring up the following result set:


mysql> SELECT * FROM tblCustomer;

+------+
| ID   |
+------+
|    1 |
|    2 |
+------+


Now, let's start the data node (ndbd) on Server2 again by issuing the following command:

ndbd
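
If you want to watch the node rejoin, run show; again in the management client on Server1 (or ndb_mgm -e show from the shell); id=2 should go from "not connected" through "starting" back to a connected state.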


How to Restart MySQL Cluster:

In managing a production MySQL environment, or any other transactional database environment, there are times when we have to restart or shut down our systems. So, let's see how we would shut down our MySQL Cluster:

On Server1, open the management console:

ndb_mgm

then type:

shutdown;

It will bring up output like this:

ndb_mgm> shutdown;
Node 3: Cluster shutdown initiated
Node 2: Node shutdown completed.
2 NDB Cluster node(s) have shutdown.
NDB Cluster management server shutdown.
ndb_mgm>

This means that the cluster nodes Server2 and Server3 and also the Management node (Server1) have shut down.
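
Note that shutdown; stops the ndbd processes and the management server, but not the mysqld daemons on the data nodes; those keep running, their NDB tables just become unreachable. To stop MySQL as well, run this on Server2 and Server3:

/etc/init.d/mysql.server stop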

To leave the Management console, run:

quit;


To start the cluster management server again, run the following (on Server1, Management Server):

ndb_mgmd -f /var/lib/mysql-cluster/config.ini


and on Server2 and Server3, run the following:

ndbd

In case /var/lib/mysql-cluster/config.ini on the Management Server has changed, you should run the following instead:


ndbd --initial


You can go back to the Management node and verify that the cluster started OK, without any errors:

ndb_mgm

On the Management console, run the following:

show;

This should bring up the cluster configuration.

