I’ve been using Vagrant for several years now and love it. One of my few complaints was that each time I wanted to create a new machine I would need to edit my /etc/hosts file. Then I found the excellent Vagrant plugin named Landrush.

My hosts file went from this:

127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
192.168.10.2 project1
192.168.10.3 project2
192.168.10.4 project4
192.168.10.5 project5
192.168.10.6 project6
192.168.10.7 project7
192.168.10.8 project8
192.168.10.9 project9

To this:

127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost

How to Install Landrush

Installing and using Landrush is really easy.

Step 1: Install the plugin

vagrant plugin install landrush

Step 2: Add the Landrush configuration to your Vagrantfile

config.vm.hostname = "project1.vagrant.dev" # if not set yet
config.landrush.enabled = true

There are more options you can add, which can be found in the Landrush documentation.

If you don’t want to use the TLD of vagrant.dev you can change it, but keep in mind it will override that TLD on your computer. If you set your box’s hostname to something.google.com and set landrush.tld = google.com, your searches won’t work very well unless you use Bing…nevermind, your searches still won’t work very well.
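For example, a Vagrantfile using a custom TLD might look something like this (the project name and TLD below are placeholders; the TLD should match the one used in your hostname):

config.vm.hostname = "project1.example.test"
config.landrush.enabled = true
config.landrush.tld = "example.test"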

Step 3: Start up your vagrant box

vagrant up

That’s it. Landrush does everything else for you.

Test your box: project1.vagrant.dev should now point to the IP address of your Vagrant box.
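For example, from the host machine you can check resolution with a quick ping (Landrush also ships a vagrant landrush ls subcommand that should list the DNS entries it manages):

ping project1.vagrant.dev
vagrant landrush ls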


I recently came across a scenario where both of our Couchbase servers had failed due to major outages at our hosting providers’ data centers. One server eventually came back up, but its state was set to “pending” and our app could not connect to it. We did enable replication, but when we attempted to click the “fail over” button on the bad node, the scary data loss warnings frightened us away from attempting the fail over. Eventually, the second server came back on its own and the state of both Couchbase nodes changed to “up”.

This exercise is a test to see just how easy it is to recover from a single-node failure and an all-node failure (assuming the nodes’ hard drives are still intact).

While the Couchbase documentation does explain all of this, I found this experiment most helpful to properly understand exactly what happens when nodes go down.

Set up a test two-node Couchbase environment

If you are using CentOS 6 or Red Hat, these steps should work. Otherwise, just follow the instructions on couchbase.com.

sudo yum update -y
sudo wget http://packages.couchbase.com/releases/2.2.0/couchbase-server-community_2.2.0_x86_64.rpm
sudo rpm --install couchbase-server-community_2.2.0_x86_64.rpm 

Make sure the server’s firewall has these TCP ports open:

11209-11211, 4369, 8091-8092, 21100-21299
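On CentOS 6 this is typically done with iptables. A minimal sketch, assuming the default iptables firewall (adjust the rules to your own environment):

sudo iptables -I INPUT -p tcp -m multiport --dports 4369,8091:8092,11209:11211,21100:21299 -j ACCEPT
sudo service iptables save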

Once Couchbase is installed, you can access the Couchbase admin console from your browser:

http://your-couchbase-server-1:8091

Set up Couchbase

Since this is the first node we will start a new cluster:
Couchbase create new cluster

Default settings are fine for our test.
Create default Couchbase bucket

Select the beer-sample bucket so we have some data to check when the nodes recover. You can use your own bucket too; just make sure replication is enabled.
Import sample bucket

We don’t care about Couchbase notifications for our test servers.
Ignore Couchbase notifications

Set up a Couchbase administrator account.
Setup an admin user

First node setup is complete:
First Couchbase node is setup

Now we need to set up the second node.

Repeat the steps above to install Couchbase.

Once Couchbase is installed on the second server, visit that server’s administration console in your browser.

http://your-couchbase-server-2:8091

This time we will be joining an existing cluster. Enter the IP address of the first node and the administrator username and password you set during the setup of the first node.
Join an existing Couchbase cluster

The server should now be associated with your Couchbase cluster.
Server added to cluster

In order to actually use the new node with your cluster, the cluster needs to be rebalanced. Click “Server Nodes” from the top nav and then click the “Pending Rebalance” tab. Then click “Rebalance” to the right.
Rebalance the Couchbase cluster
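If you prefer the command line, 2.x releases should also let you trigger a rebalance with couchbase-cli; a rough sketch (host and credentials are placeholders):

/opt/couchbase/bin/couchbase-cli rebalance -c your-couchbase-server-1:8091 -u Administrator -p password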

Wait for the nodes to rebalance before proceeding.
Rebalancing Couchbase nodes

When rebalancing is complete your nodes should look similar to this:
Couchbase nodes are rebalanced and active.

Now it’s time to fail some nodes.

Single-node failure

First have a look at the buckets in your cluster. Note the number of items in the beer-sample bucket. You should see 7303 items (unless the sample bucket has changed since this post).
Couchbase cluster buckets

The item count is an easy way to see how much data is potentially available.
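The item count is also available outside the web console through Couchbase’s REST API; something like this should return the bucket details, including the item count under basicStats (host and credentials are placeholders):

curl -u Administrator:password http://your-couchbase-server-1:8091/pools/default/buckets/beer-sample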

Ok, now it’s time to kill a node. Choose one of your Couchbase nodes (it doesn’t matter which one) and either shut it down or just stop the Couchbase server service.

sudo service couchbase-server stop

If you were viewing the “failed” node’s web administration console, you will be disconnected and should log in to the other node’s web console.

You should see one node up and one down.
Single Couchbase node failure

Now have a look at your buckets. Note that the item count is now reduced by 50%. The data is still safe because it was replicated and evenly distributed across the nodes. We are seeing a reduced item count because half of the active data is gone.
Buckets state with one node down.

To get access to all of our data back, we need to make the replica data (on our remaining node) active. This is actually really easy. Just click “Fail Over” on the down node.

You will be presented with the very scary data loss warning. I’m sure in some circumstances you will lose data but not with this simple scenario.
Confirm failover

The “down” server will be added to the “pending rebalance” tab. If you rebalance now, any data not replicated across the cluster on the “down” server will be lost. If the “down” server comes back online while it is pending rebalance you will be prompted to add the server back. If you did rebalance, the server will have to be reconfigured manually to join the cluster again.

Have a look at your buckets now. Item count should be 7303 again and it should look the same as before, except you now only have 1 node.
Cluster up with 1 node

Your Couchbase cluster should now be working (but slower and without replication).

Restart the “down” node so we can do the next test.
Couchbase should automatically detect that the previously “down” server is back and it will prompt you to add it.
Add node back

Add the node back and rebalance. Once complete your cluster should be up and running with 2 nodes.
Couchbase cluster working
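The add-and-rebalance step can likely be scripted with couchbase-cli as well; a hedged sketch (hosts and credentials are placeholders):

/opt/couchbase/bin/couchbase-cli server-add -c your-couchbase-server-1:8091 -u Administrator -p password --server-add=your-couchbase-server-2:8091 --server-add-username=Administrator --server-add-password=password
/opt/couchbase/bin/couchbase-cli rebalance -c your-couchbase-server-1:8091 -u Administrator -p password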

Two-node failure

This is the actual situation we found ourselves in last week. Both of our nodes went down at the same time. To replicate this, stop the Couchbase service on both nodes.

Node 1:

sudo service couchbase-server stop

Node 2:

sudo service couchbase-server stop

Now start the Couchbase service on one of the nodes.

sudo service couchbase-server start

Login to the web administration console for the running node. You should see something like this:
Couchbase cluster pending and down

Now look at the buckets. Yikes! Item count is 0 on beer-sample.
Cluster down, bucket item count 0

To resolve this, it’s actually the same procedure as a single-node failure. The only difference is that this time no nodes are up, which means none of the Couchbase data is in an active state.
Click “Fail over” on the “down” node and confirm the fail over.

The node that was “pending” should now be “up”.

Couchbase, up down

Have a look at the buckets which should show 7303 items.
All items available in bucket

The cluster should now be running, just without replication and slower since we only have one node.

Now restart the Couchbase service on the “down” node.

sudo service couchbase-server start

Add it back to the cluster and rebalance.
Add node

Your cluster should now be fully restored.
Couchbase fully restored


Recently I was installing Couchbase Server on CentOS 6:

rpm --install http://packages.couchbase.com/releases/2.1.1/couchbase-server-community_x86_64_2.1.1.rpm

I received this dependency error:

error: Failed dependencies:
libcrypto.so.6()(64bit) is needed by couchbase-server-2.1.1-764.x86_64
libssl.so.6()(64bit) is needed by couchbase-server-2.1.1-764.x86_64

To fix it, just install this:

yum install openssl098e



Warning: Use this script at your own risk. I am not responsible if it messes up your server or if you lose data.

I have tested the script on a fresh installation of Ubuntu 10.10 and recommend you also install this script on a fresh install. If you want to modify an existing installation this script might work but I’d recommend you read my previous blog post on this subject instead.
Copy and paste the following line into your SSH terminal.

wget https://blog.jtclark.ca/wp-content/uploads/vpn-setup.sh;chmod +x vpn-setup.sh

Run the script

./vpn-setup.sh

Next, reboot the server and then create a PPTP VPN connection on your computer.
The script automatically sets the login to user: user and password: pass.
You can change this by editing /etc/ppp/chap-secrets (the format is shown below).
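Each line in chap-secrets is: client (username), server (pptpd), secret (password), and allowed IP addresses. For example, a hypothetical replacement for the line the script adds:

# client    server    secret            IP addresses
myuser      pptpd     MyStr0ngPassword  *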
If you are curious about what the script does, here is the source below.

#!/bin/sh
# Install the PPTP daemon
apt-get install pptpd -y
# Set the VPN server's IP and the range of IPs handed out to clients
echo "localip 192.168.123.1" >> /etc/pptpd.conf
echo "remoteip 192.168.123.234-238,192.168.123.245" >> /etc/pptpd.conf

# Add the default VPN credentials (user / pass) and restart pptpd
echo "user pptpd pass *" >> /etc/ppp/chap-secrets
/etc/init.d/pptpd restart
# Hand out OpenDNS resolvers to VPN clients
echo "ms-dns 208.67.222.222" >> /etc/ppp/pptpd-options
echo "ms-dns 208.67.220.220" >> /etc/ppp/pptpd-options
# Enable IP forwarding so the server can route client traffic
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p
# Drop the trailing "exit 0" line from rc.local so we can append to it
sed -n '$!p' /etc/rc.local > /tmp/rc.local.temp
cp /tmp/rc.local.temp /etc/rc.local
rm /tmp/rc.local.temp
# NAT the VPN subnet out through eth0 on every boot, then restore "exit 0"
echo "/sbin/iptables -t nat -A POSTROUTING -s 192.168.123.0/24 -o eth0 -j MASQUERADE" >> /etc/rc.local
echo "exit 0" >> /etc/rc.local


I loved my Windows Home Server. Every time I needed more storage I would just buy the biggest drive I could afford and plug it in, instantly expanding my storage. And then Windows Home Server 2011 came out. I read about the lack of Drive Extender but I figured it would still be good enough. It wasn’t. I really miss Drive Extender. I have 8 drives and don’t really need or want to deal with RAID, so I am forced to somehow try to manually balance the files in my shares. The problem is that the biggest hard drive in my server is 2TB and I have 3TB of video. How do I split that up?
Windows Home Server was the only Microsoft product I still used because there was nothing else like it out there. Now that they have removed its best feature, I see absolutely no good reason to stay with Microsoft at all. I am going to have to use a RAID solution anyway, and it would be nice to use an operating system that requires fewer resources.

The most popular free NAS operating systems I have come across are OpenFiler and FreeNAS. I have used OpenFiler in the past to provide NFS storage for my ESXi virtual machines, but not much else. I had also heard of FreeNAS as the “not for enterprise” solution, so I didn’t investigate it much. I have since revisited both OpenFiler and FreeNAS and have set up virtual environments to test each.

My requirements for my NAS are as follows.
1. Sharing files with Samba must be as seamless as any Windows Server.
2. Ability to use iSCSI and NFS for ESXi virtual machine storage.
3. Must support software RAID.
4. Be supported and have new releases relatively frequently (at least once a year).

What I would like:
– Ability to back up my Mac with Time Machine to the server without running any hacks on my Mac.
– Can be installed to a USB flash drive.
– Easy cloud backup

Both OpenFiler and FreeNAS have all of my required features, with the possible exception of OpenFiler, whose releases are very infrequent (although they did just release version 2.99 this month).
So how did I choose between them?

I set up a complete virtual network on my ESXi server.
I had a pfSense box to be used as a router, plus a Windows XP box and an Ubuntu 10.10 box to administer the router and the NAS test box.

I performed the following tests on both OpenFiler 2.99 and FreeNAS 8 RC5 separately:
– File sharing: Can the client machine browse the network to find the machine, or does it have to connect to the server manually? Does guest sharing work? Does permission-based sharing work?
– RAID: What happens when a drive fails? How do you replace the drive?

I found Samba file sharing on both OpenFiler and FreeNAS to require a bit of work. Definitely not as easy as Windows Home Server, but I did manage to get both guest and permission-based sharing working on both boxes. There was a problem with OpenFiler though. When trying to access the machine by double clicking on its computer name while browsing the network, it prompted me for a password. Entering “guest” worked, but FreeNAS and WHS never prompted me for a password before allowing me to see all the shared folders. I am sure there must be a Samba setting that would fix this, but I couldn’t find it.

For RAID, I found FreeNAS a little easier to set up, but both systems provide the same type of functionality. FreeNAS supports ZFS, and after I did some reading to learn what ZFS was, I was very impressed. I found my reading on ZFS software RAID being as fast as or faster than hardware RAID intriguing. I next tried testing drive failures. I did this by removing the drive from the VM. I expected that the GUI on both systems would almost immediately show the removed drive, but that didn’t happen. Upon reboot, FreeNAS showed the missing drive, but figuring out that the drive was missing in OpenFiler was not straightforward. I next tried to install a new drive. Although it wasn’t clear, I managed to do this on FreeNAS via the GUI and through the command line. I tried to add the new drive to OpenFiler via the GUI, but I couldn’t figure out how to add the new RAID volume to the existing volume. I am sure it’s possible, but I don’t want to have to do a lot of research on how to replace drives in the event one actually fails.

Overall I found OpenFiler and FreeNAS to be very similar, and I am confident either one would work. I ended up choosing FreeNAS for the following small reasons.
1. Samba was slightly easier to configure and didn’t prompt for a password when viewing the machine’s shared folders.
2. FreeNAS can be installed on a flash drive. (There are articles for doing this with OpenFiler, but the solutions seemed a little too hackish for my tastes.)
3. FreeNAS is based on m0n0wall. I use pfSense for my router/firewall and it is also based on m0n0wall.
4. ZFS is very interesting.
