How to build Highly Available LAMP on Ubuntu

In this tutorial we will set up a highly available server providing Linux, Apache, MySQL, and PHP (LAMP) services to clients. Should a server become unavailable, services provided by our cluster will continue to be available to client systems.

Our highly available system will resemble the following:

LAMP server1: node1.home.local IP address:
LAMP server2: node2.home.local IP address:
LAMP Server Virtual IP address:

A Distributed Replicated Block Device (DRBD) mirrors /srv/data between node1 and node2

Getting started

To begin, set up two Ubuntu systems. In this guide, the servers will be set up in a virtual environment using KVM-84. Using a virtual environment will allow us to add additional disk devices and NICs as needed.

The following partition scheme will be used for the operating system installation:

/dev/vda1 -- 10 GB / (primary, jfs, Bootable flag: on)
/dev/vda5 -- 1 GB swap (logical)

Bonded network interface

After a minimal Ubuntu installation on both servers, we will install the packages required to configure a bonded network interface and, in turn, assign static IP addresses to bond0 on node1 and node2. Using a bonded interface prevents a single point of failure should the client-accessible network fail.

Be sure to disable AppArmor on both nodes before beginning, or the systems will be unable to start the required services:

sudo invoke-rc.d apparmor kill
sudo update-rc.d -f apparmor remove

Install ifenslave:

apt-get -y install ifenslave

Append the following to /etc/modprobe.d/aliases.conf:

alias bond0 bonding
options bond0 mode=0 miimon=100 downdelay=200 updelay=200 max_bonds=2

Modify our network configuration and assign eth0 and eth1 as slaves of bond0.

Example /etc/network/interfaces:

# The loopback network interface
auto lo
iface lo inet loopback

# The interfaces that will be bonded
auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

# The client-accessible network interface
auto bond0
iface bond0 inet static
        up /sbin/ifenslave bond0 eth0
        up /sbin/ifenslave bond0 eth1

We do not need to define eth0 or eth1 in /etc/network/interfaces as they will be brought up when the bond comes up. I have included them for documentation purposes.

Review the current status of the bonded interface:

cat /proc/net/bonding/bond0 
Example output:
Ethernet Channel Bonding Driver: v3.3.0 (June 10, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 54:52:00:6d:f7:4d

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 54:52:00:11:36:cf

Please note: A bonded network interface supports multiple modes. In this example eth0 and eth1 are in a round-robin configuration.
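Rather than eyeballing that output on every check, the test can be scripted. The helper below is a sketch of ours (not part of the original tutorial); it assumes the /proc/net/bonding format shown above, where each slave reports its own MII status line.

```shell
# bond_all_up: succeed only if no interface in a /proc/net/bonding
# report shows "MII Status: down". Helper name is ours, for illustration.
bond_all_up() {
    # $1 = path to a bonding status file, e.g. /proc/net/bonding/bond0
    ! grep -q 'MII Status: down' "$1"
}

# On a live node:
#   bond_all_up /proc/net/bonding/bond0 && echo "bond0 healthy"
```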

DRBD data

Shut down both servers and add additional devices. We will add an additional disk that will contain the DRBD meta data and the data that is mirrored between the two servers. We will also add an isolated network for the two servers to communicate and transfer the DRBD data.

The following partition scheme will be used for the DRBD data:

/dev/vdb1 -- 10 GB unmounted (primary) DRBD replication data and DRBD meta data

Sample output from fdisk -l:

Disk /dev/vda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000d570a

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           1        1244     9992398+  83  Linux
/dev/vda2            1245        1305      489982+   5  Extended
/dev/vda5            1245        1305      489951   82  Linux swap / Solaris

Disk /dev/vdb: 10.7 GB, 10737418240 bytes
16 heads, 63 sectors/track, 20805 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Disk identifier: 0xf505afa1

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1               1       20805    10485688+  83  Linux

The isolated network between the two servers will be:

LAMP server1: node1-private IP address:
LAMP server2: node2-private IP address:

We will again bond these two interfaces. If our server is to be highly available, we should eliminate all single points of failure.

Append the following to /etc/modprobe.d/aliases.conf:

alias bond1 bonding
options bond1 mode=0 miimon=100 downdelay=200 updelay=200

Example /etc/network/interfaces:

# The loopback network interface
auto lo
iface lo inet loopback

# The interfaces that will be bonded
auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto eth2
iface eth2 inet manual

auto eth3
iface eth3 inet manual

# The client-accessible network interface
auto bond0
iface bond0 inet static
        up /sbin/ifenslave bond0 eth0
        up /sbin/ifenslave bond0 eth1

# The isolated network interface
auto bond1
iface bond1 inet static
        up /sbin/ifenslave bond1 eth2
        up /sbin/ifenslave bond1 eth3

Ensure that /etc/hosts on both nodes contains the names and IP addresses of the two servers.

Example /etc/hosts:

        localhost
        node1.home.local    node1
        node2.home.local    node2
        node1-private
        node2-private

Install NTP to ensure both servers have the same time:

apt-get -y install ntp

You can verify the time is in sync with the date command.

At this point, you can either modprobe the second bond, or restart both servers.

Install DRBD and heartbeat:

apt-get -y install drbd8-utils heartbeat

As we will be using heartbeat with DRBD, we need to change ownership and permissions on several DRBD-related files on both servers.

chgrp haclient /sbin/drbdsetup
chmod o-x /sbin/drbdsetup
chmod u+s /sbin/drbdsetup
chgrp haclient /sbin/drbdmeta
chmod o-x /sbin/drbdmeta
chmod u+s /sbin/drbdmeta
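To confirm the commands took effect on both nodes, you can check the resulting mode bits. The function below is our own sketch; it assumes GNU stat and that the intended result is a setuid binary with no world-execute bit (e.g. mode 4754).

```shell
# drbd_perms_ok: succeed if the file is setuid and not world-executable.
# Function name and the exact expected mode are our assumptions.
drbd_perms_ok() {
    mode=$(stat -c '%a' "$1") || return 1
    case "$mode" in
        4???) ;;             # four octal digits starting with 4: setuid bit set
        *) return 1 ;;
    esac
    case "${mode#???}" in    # last octal digit = permissions for "other"
        1|3|5|7) return 1 ;; # execute bit for "other" still present
    esac
}

# Example: drbd_perms_ok /sbin/drbdsetup && echo "drbdsetup looks right"
```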

Using /etc/drbd.conf as an example, create your resource configuration. We will define a single resource.

Example /etc/drbd.conf:

resource lamp {
        protocol C;
        handlers {
                pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
                pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
                local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
                outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
        }
        startup {
                degr-wfc-timeout 120;
        }
        disk {
                on-io-error detach;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "password";
                after-sb-0pri disconnect;
                after-sb-1pri disconnect;
                after-sb-2pri disconnect;
                rr-conflict disconnect;
        }
        syncer {
                rate 100M;
                verify-alg sha1;
                al-extents 257;
        }
        on node1 {
                device  /dev/drbd0;
                disk    /dev/vdb1;
                meta-disk internal;
        }
        on node2 {
                device  /dev/drbd0;
                disk    /dev/vdb1;
                meta-disk internal;
        }
}
Duplicate the DRBD configuration to the other server:

scp /etc/drbd.conf root@

Initialize the meta-data disk on both servers:

[node1]drbdadm create-md lamp
[node2]drbdadm create-md lamp

If a reboot was not performed post-installation of DRBD, the module for DRBD will not be loaded.

Start the DRBD service (which will load the module):

[node1]/etc/init.d/drbd start
[node2]/etc/init.d/drbd start

Decide which server will act as a primary for the DRBD device that will contain the LAMP configuration files and initiate the first full sync between the two servers.

We will execute the following on node1:

[node1]drbdadm -- --overwrite-data-of-peer primary lamp

Review the current status of DRBD:

cat /proc/drbd 
Example output:
GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by ivoks@ubuntu, 2009-01-17 07:49:56
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---
    ns:761980 nr:0 dw:0 dr:769856 al:0 bm:46 lo:10 pe:228 ua:256 ap:0 ep:1 wo:b oos:293604
        [=============>......] sync'ed: 72.3% (293604/1048292)K
 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r---
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:10485692

I prefer to wait for the initial sync to complete before proceeding; however, waiting is not a requirement.
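If you do want to wait, the sync state can be polled rather than watched by hand. The loop below is a sketch of ours; it assumes the /proc/drbd format shown above, where an unfinished sync reports an Inconsistent disk state.

```shell
# drbd_synced: succeed once the status text no longer reports any
# Inconsistent disk state. Helper name is ours, for illustration.
drbd_synced() {
    # $1 = path to the status file, normally /proc/drbd
    ! grep -q 'Inconsistent' "$1"
}

# Poll every 30 seconds until the initial sync completes:
#   until drbd_synced /proc/drbd; do sleep 30; done
```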

Once completed, format /dev/drbd0 and mount it:

[node1]mkfs.jfs -q /dev/drbd0
[node1]mkdir -p /srv/data
[node1]mount /dev/drbd0 /srv/data

To ensure replication is working correctly, create data on node1 and then switch node2 to be primary:

[node1]dd if=/dev/zero of=/srv/data/test.zeros bs=1M count=100

Switch to node2 and make it the primary DRBD device:

On node1:
[node1]umount /srv/data
[node1]drbdadm secondary lamp
On node2:
[node2]mkdir -p /srv/data
[node2]drbdadm primary lamp
[node2]mount /dev/drbd0 /srv/data

You should now see the 100MB file in /srv/data on node2. We will now delete this file and make node1 the primary DRBD server to ensure replication is working in both directions.

Switch to node1 and make it the primary DRBD device:

On node2:
[node2]rm /srv/data/test.zeros
[node2]umount /srv/data
[node2]drbdadm secondary lamp
On node1:
[node1]drbdadm primary lamp
[node1]mount /dev/drbd0 /srv/data

Running ls /srv/data on node1 will verify the file is now removed and that synchronization successfully occurred in both directions.

Installing and configuring LAMP

Next we will install the packages for the LAMP suite. Since heartbeat, not init, will control the services, we will prevent the LAMP services from starting through the normal init routines. We will then place the LAMP configuration and data files on the DRBD device, so that whichever server is the primary DRBD node has the information available.

Install LAMP packages on node1 and node2:

[node1]tasksel install lamp-server
[node2]tasksel install lamp-server

Please note: You will be prompted to create a MySQL root password during the installation process.

Temporarily stop all LAMP services:

[node1]/etc/init.d/apache2 stop
[node1]/etc/init.d/mysql stop
[node1]/etc/init.d/mysql-ndb stop
[node1]/etc/init.d/mysql-ndb-mgm stop
[node2]/etc/init.d/apache2 stop
[node2]/etc/init.d/mysql stop
[node2]/etc/init.d/mysql-ndb stop
[node2]/etc/init.d/mysql-ndb-mgm stop

Verify all LAMP services are stopped by viewing the running processes and the listening network connections:

[node1]ps aux | grep mysql
[node1]ps aux | grep apache
[node1]ss -at

Remove LAMP from the init scripts:

[node1]update-rc.d -f apache2 remove
[node1]update-rc.d -f mysql remove
[node1]update-rc.d -f mysql-ndb remove
[node1]update-rc.d -f mysql-ndb-mgm remove
[node2]update-rc.d -f apache2 remove
[node2]update-rc.d -f mysql remove
[node2]update-rc.d -f mysql-ndb remove
[node2]update-rc.d -f mysql-ndb-mgm remove
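To verify the removal worked, you can look for leftover SysV start/kill links. The helper below is our own sketch; it assumes the standard /etc/rcN.d link naming (S or K, two digits, then the service name).

```shell
# rc_links_for: print any remaining SysV rc links for a service.
# Helper name is ours; after removal it should print nothing.
rc_links_for() {
    # $1 = directory containing the rc?.d directories (normally /etc)
    # $2 = service name, e.g. apache2 or mysql
    find "$1"/rc?.d -name "[SK][0-9][0-9]$2" 2>/dev/null
}

# Example: rc_links_for /etc apache2
```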

Relocate LAMP configuration to /srv/data:

# Create location to store files
[node1]mkdir -p /srv/data/etc
[node1]mkdir -p /srv/data/var/lib
[node1]mkdir -p /srv/data/var/log
# Move files to new location
[node1]mv /etc/apache2 /srv/data/etc
[node1]mv /etc/php5 /srv/data/etc
[node1]mv /etc/mysql /srv/data/etc
[node1]mv /var/lib/mysql /srv/data/var/lib
[node1]mv /var/lib/php5 /srv/data/var/lib
[node1]mv /var/www /srv/data/var
[node1]mv /var/log/apache2 /srv/data/var/log
[node1]mv /var/log/mysql /srv/data/var/log
# Link  to new location
[node1]ln -s /srv/data/etc/apache2 /etc/apache2
[node1]ln -s /srv/data/etc/php5 /etc/php5
[node1]ln -s /srv/data/etc/mysql /etc/mysql
[node1]ln -s /srv/data/var/lib/mysql /var/lib/mysql
[node1]ln -s /srv/data/var/lib/php5 /var/lib/php5
[node1]ln -s /srv/data/var/www /var/www
[node1]ln -s /srv/data/var/log/apache2 /var/log/apache2
[node1]ln -s /srv/data/var/log/mysql /var/log/mysql
# Remove files on node2 and create links
[node2]rm -rf /etc/apache2
[node2]rm -rf /etc/php5
[node2]rm -rf /etc/mysql
[node2]rm -rf /var/lib/mysql
[node2]rm -rf /var/lib/php5
[node2]rm -rf /var/www
[node2]rm -rf /var/log/apache2
[node2]rm -rf /var/log/mysql
[node2]ln -s /srv/data/etc/apache2 /etc/apache2
[node2]ln -s /srv/data/etc/php5 /etc/php5
[node2]ln -s /srv/data/etc/mysql /etc/mysql
[node2]ln -s /srv/data/var/lib/mysql /var/lib/mysql
[node2]ln -s /srv/data/var/lib/php5 /var/lib/php5
[node2]ln -s /srv/data/var/www /var/www
[node2]ln -s /srv/data/var/log/apache2 /var/log/apache2
[node2]ln -s /srv/data/var/log/mysql /var/log/mysql
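A quick way to confirm the relocation on either node is to check that every moved path is now a symlink that resolves to an existing target. This helper is a sketch of ours, not part of the tutorial.

```shell
# links_ok: succeed only if every argument is a symlink whose target
# exists. Helper name is ours, for illustration.
links_ok() {
    for p in "$@"; do
        if [ ! -L "$p" ] || [ ! -e "$p" ]; then
            echo "BROKEN: $p" >&2
            return 1
        fi
    done
}

# Example usage on node1 or node2:
#   links_ok /etc/apache2 /etc/php5 /etc/mysql /var/lib/mysql \
#            /var/lib/php5 /var/www /var/log/apache2 /var/log/mysql
```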

Last but not least, configure heartbeat to fail over a virtual IP address, Apache, and MySQL should a node fail.

On node1, define the cluster within /etc/heartbeat/

Example /etc/heartbeat/

logfacility     local0
keepalive 2
deadtime 30
warntime 10
initdead 120
bcast bond0
bcast bond1
node node1
node node2

On node1, define the authentication mechanism the cluster will use within /etc/heartbeat/authkeys.

Example /etc/heartbeat/authkeys:

3 md5 password

Change the permissions of /etc/heartbeat/authkeys:

[node1]chmod 600 /etc/heartbeat/authkeys

On node1, define the resources that will run on the cluster within /etc/heartbeat/haresources. We will define the master node for the resource, the Virtual IP address, the file systems used, and the service to start.

Example /etc/heartbeat/haresources:

node1 IPaddr:: drbddisk::lamp Filesystem::/dev/drbd0::/srv/data::jfs mysql-ndb-mgm mysql-ndb mysql apache2
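Read left to right, that haresources line breaks down as follows (this annotation is ours; heartbeat starts the resources in order and stops them in reverse on failover):

```
node1                      -> preferred (master) node for the resource group
IPaddr::                   -> bring up the cluster virtual IP address
drbddisk::lamp             -> promote the "lamp" DRBD resource to primary
Filesystem::/dev/drbd0::/srv/data::jfs
                           -> mount the DRBD device on /srv/data as jfs
mysql-ndb-mgm mysql-ndb mysql apache2
                           -> init scripts started left to right
```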

Copy the cluster configuration files from node1 to node2:

[node1]scp /etc/heartbeat/ root@
[node1]scp /etc/heartbeat/authkeys root@
[node1]scp /etc/heartbeat/haresources root@

At this point you can either:

  1. Unmount /srv/data, make node1 secondary for drbd, and start heartbeat
  2. Reboot both servers

Testing LAMP server with Joomla

To test connectivity to our new highly available LAMP server, we will set up a CMS that uses the LAMP stack. In this tutorial it will be Joomla.

Complete the following steps:

  1. Download Joomla
  2. Unpack Joomla
  3. Create database
  4. Configure Joomla
  5. Test failover

Joomla 1.5.10 was the current version when this document was written.

Download Joomla from the Joomla download page:


Unpack Joomla. We will allow Joomla to be our default Apache site:

[node1]tar xjf Joomla_1.5.10-Stable-Full_Package.tar.bz2 -C /var/www

Ownership of the extracted files may be set to the UID of the user who created the archive. Change the ownership to the Apache user:

[node1]chown -R www-data:www-data /var/www/*

Create the MySQL database and user for Joomla:

mysql -u root -p
create database joomla;
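The heading above mentions creating a database and a user, but only the database statement is shown. A hedged example of also creating a dedicated user, using the MySQL 5.x GRANT syntax of that era; the user name joomla_user and password changeme are placeholders of ours, so substitute your own:

```shell
# Run on node1; the user name and password below are placeholders.
mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS joomla;
GRANT ALL PRIVILEGES ON joomla.* TO 'joomla_user'@'localhost'
    IDENTIFIED BY 'changeme';
FLUSH PRIVILEGES;
SQL
```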

Next we will configure Joomla to use our MySQL database using the database and credentials we created.

Remove /var/www/index.html. This file was installed when Apache was installed:

[node1]rm /var/www/index.html

To ease installation, create the file which stores Joomla's configuration information and temporarily allow it to be writable by the Apache user:

touch /var/www/configuration.php
chown www-data:www-data /var/www/configuration.php
chmod 644 /var/www/configuration.php

Browse to (our Virtual IP address).

You will be presented with the Joomla configuration wizard. Be sure to enter the previously created database name and username/password.

For testing purposes, install the example data.

Once you have stepped through the Joomla configuration, you will be prompted to remove the installation directory.

Remove the installation directory:

[node1]rm -rf /var/www/installation

Update the configuration file to be read only:

[node1]chmod 444 /var/www/configuration.php

The configuration of our highly available LAMP server is now complete.

You can simply test the system by changing the pre-existing example data, or by creating new articles.

Once you have created/modified the Joomla site, failover to the redundant node:

[node1]/etc/init.d/heartbeat stop

The changes should have propagated to node2. Make additional changes, then fail back to node1; starting heartbeat on node1 will cause the resources to fail back:

[node1]/etc/init.d/heartbeat start
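During these failover tests, it helps to know which node currently holds the virtual IP. The filter below is a sketch of ours; it parses `ip -o addr` output, which lists one address per line.

```shell
# holds_ip: succeed if the `ip -o addr`-style text on stdin contains
# the given address. Helper name is ours, for illustration.
holds_ip() {
    # $1 = the virtual IP address to look for
    grep -qw "inet $1"
}

# On either node (substitute your virtual IP):
#   ip -o addr show | holds_ip <virtual-ip> && echo "this node holds the VIP"
```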

Source: Ubuntu Community
