Building a snow shelter

To give you a break from the usual GNU/Linux/DevOps/Puppet/GlusterFS drab, I’ve decided to have a go at writing a different kind of technical article. This article will show you how to build the traditional Canadian snow dwelling known as a quinzee. If you will be travelling to Canada, I recommend that you read through this article ahead of time, so that you don’t offend your host by being unfamiliar with their traditional living accommodations.

Warning:

If you are not Canadian or if it is your first time building this type of shelter, I recommend that you have a professional supervise and guide you! If it is not sufficiently cold out (below 0°C) when you are building or sleeping in your shelter, it could collapse, squashing and suffocating you! If it’s built correctly, a quinzee can safely support the weight of many people, although it is not intended for use as scaffolding. When you are inside the quinzee digging, you must have a buddy outside, standing watch for your safety, in case of quinzee collapse or polar bear attack.

Fresh snow:

Start off by choosing a place to build your quinzee. Usually you’ll build your quinzee near the other quinzees in the village, however for this article, I have decided to set up in a remote location, as this will be used as my vacation home. It is good to build on flat terrain, where there is a lot of fresh snow. In Canada, there is always an abundance of snow so it should be easy to find a pristine area.

Fresh snow

If you build in a wooded area, be sure that there are no trees or dead branches that might fall or break off in the wind and squish you or your quinzee!

Stratification:

With each snowfall, new layers of snow are deposited onto the older layers which is called stratification. Since each layer could have different consistencies or strengths, it is important to mix or break up the layering before we build our quinzee. This will make sure that the walls of the quinzee are stable, and don’t shift later on.

Destratification of the snow

Here, the layers of snow are destratified with a shovel, in a circle of about two to three metres in diameter. This circle will be the outside size of your quinzee. A three-metre-diameter quinzee would more than comfortably fit three large sleeping Canadians, which is the equivalent of two standard polar bears wide.

Piling the snow:

To make a quinzee, you need a large pile of snow. You’ll want to shovel the snow from outside of the destratified quinzee circle, into the centre of the pile. Work your way around the circle, gradually moving outwards as you need more snow. This technique has the added advantage that the area outside of your quinzee will be cleared of snow, and gives you room for stargazing, polar bear watching, and other typical Canadian pastimes.

A pile of snow

After a few hours, your pile should have grown to about two to three metres high. You don’t have to worry about the specific height, as long as you always throw the snow on the top of the pile. If the pile becomes too high to be supported on a smaller base, it will automatically spread out to form a larger-diameter base.

Some quinzee builders will pile their backpacks and other equipment in the centre of the pile under a tarp before piling on the snow. This is a tricky optimization, but if done correctly, there will be less snow to shovel onto the pile, and less snow to dig out at the end.

A larger pile of snow

When not in use, make sure you don’t leave shovels or other items lying around on the ground, as they could easily get covered up by snowfall and lost forever. Conveniently, shovels can be stuck into the snow, where they’ll stand upright and remain visible.

At this point your pile of snow should now be complete. Congratulate yourself on a job well done and award yourself with a cool mug of maple syrup.

Sticks:

When digging out the quinzee, we’ll need to know how much snow to remove so that we don’t create unwanted windows in the wall of the shelter. To help keep track of the wall thickness, first gather about 40 to 50 sticks, each about 30cm in length. Insert the sticks into the wall of the quinzee, spaced about 30cm from one another, and all pointing towards the centre of the shelter.

Sticks inserted into the pile of snow

If the sticks are longer than 30cm, it’s okay if they protrude from the outside of the shelter. If they are too short, you can use them for firewood. When we dig out the shelter, we should encounter the sticks from inside the quinzee. Whenever you see a stick, you should stop digging in that direction.

Compaction:

At this point the quinzee needs time to settle. It typically takes between two and six hours. In this time, the snow crystals will lock together under their own weight, and the structure will harden because of the cold.

Waiting for compaction

Although not easily noticeable, the structure may shrink due to this compaction. This is usually a great time to prepare lunch!

Lunch:

A typical Canadian meal might consist of bannock cooked on a stick, Montreal bagels, poutine, and a glass of maple syrup. Dessert is usually tire, served on snow. When your quinzee has sufficiently compacted, it’s time to start digging it out. If your quinzee needs more time to settle, then you can keep busy by gathering some firewood or practising your favourite Canadian song. While gathering a little firewood, I had a chance to take a quick photo looking down the main street of my village. On the right you can see some of the kindling I’ve collected, and in the distance to the left you can see a quinzee.

Main road of my village

Digging:

You typically want as small a door opening as you can fit through. Start digging near, but above, the base of the quinzee, at a downward angle towards the centre. As you reach the centre, start digging outwards in all directions. The downward angle will prevent strong winds from blowing directly into the quinzee. The inner dome will keep heat from escaping. It takes some experience to get the shape right, but after years of practice, you’ll quickly become an expert.

Cross sectional diagram of a quinzee

This process should take you about an hour. When you are digging inside of the quinzee, remember to always have your buddy on the outside in the event of collapse. While collapses are rare, they can happen, in particular if the weather is too warm, if you didn’t let your quinzee settle for long enough, or if you made your walls too thin or your roof too thick. It is important that your buddy pay attention, because the quinzee insulates you from sound (and cold) very well. To demonstrate this, scream or talk loudly while you’re inside the quinzee. Your buddy will not be able to hear you very easily.

Entrance to the quinzee

During the day, if you ask your buddy to seal off the entrance with their body, it will be very dark inside the quinzee, but you should be able to see faint blue areas where the walls are the thinnest and some light is coming in. You probably don’t want to dig much further in those areas.

Finishing up:

When you are finished digging out your quinzee, pull out one of the sticks in the roof to create an air hole for ventilation. You’ll probably want to gently smooth out the inside walls and roof with your gloved hand. This will prevent the jagged digging marks (snow stalactites) from dripping on you once you’re sleeping in the quinzee and your body heat is causing them to melt.

View of the door from inside the quinzee

As a related optimization, some quinzee builders like to light a candle in the centre of the quinzee for a few hours, to melt any irregularities on the inner walls; this forms a glaze that prevents dripping.

Making it a home:

It’s customary to line the bottom of your quinzee with either a tarp, or if it’s available, some hay. The hay is a particularly comfortable bedding material because it soaks up excess moisture which would otherwise make your floor damp. In general, Canadians only line their quinzees with hay when royalty or a special guest is coming to visit. If you do not receive any hay when staying over at a friend’s place, please do not be offended as it is hard to come by in a country that has a winter climate for most of the year.

Looking into the quinzee

To get in and out of your quinzee, it is customary to crawl or slide in on your stomach. In the above photo, an inconsiderate guest decided to “walk” on their hands and knees, causing damage to the door. In some Canadian villages, damaging a neighbour’s door can be a great insult, and has led to at least two serious incidents. [1] [2]

For sleeping, orient your feet so they are closest to the door, and with your head away from it. When multiple people are lying next to each other in this way, the quinzee can be quite warm during cold winter nights.

Conclusion:

If you plan on leaving your home unattended for some time, it is common practice to destroy your quinzee so that small animals or children don’t get accidentally trapped inside should it be left unsupervised. Sadly, only a few days after I took these photos, a polar bear attacked my village and destroyed the quinzee!

What remained after a polar bear attack

One positive outcome of the attack is that you can get a good look inside the quinzee and see a nice cross-section of the walls. Fortunately, this won’t cost me much in building materials, but it will be some time before I have time to rebuild, and even longer before I get a guest quinzee built!

I hope this has been an instructional article! Stay warm, stay safe, and

Happy Hacking,

James

Scathing review of the Lenovo X240

I’m using a Lenovo X201 with 8GiB of RAM. Apart from some minor issues, I’ve been very satisfied with this laptop. It’s over four years old, and so I decided to see what’s available on the horizon. I did not buy an X240 because of the following reasons:

The X240 has only one slot for RAM and thus supports a maximum of 8GiB.

I think it’s pretty ridiculous for any successor to the X230 to support less RAM. In and of itself this is a deal breaker! My X201 has 8GiB, and I wish it had more! It doesn’t make sense to settle for less on new kit. In other words, Lenovo is stating that 8GiB ought to be enough for anybody.

The X240 keyboard is missing buttons.

The X201 keyboard has PageUp and PageDown keys in addition to the Forward and Back buttons which are located above the Left and Right arrow keys. It also has dedicated Insert and Delete keys. This is all to be expected, and is awesome.

The X240 removes the dedicated Forward and Back buttons, and replaces them with the (now relocated) PageUp and PageDown keys. To use the Forward and Back buttons, you now need an extra finger to hold the Fn button.

If that wasn’t bad enough, the Insert key now shares a button with the End key, which makes common Shift+Insert sequences near impossible for anyone that isn’t either a cephalopod or an Emacs user. Time to update the pedals to include an Fn key.

The X240 keyboard is missing buttons.

It’s also worth mentioning that the dedicated volume controls are gone.

The X240 “Trackpad”/”Touchpad” is now buttonless and unusable.

The mechanics of the new Touchpad make it unusable! It has no buttons; instead, the whole pad feels like it travels 1cm vertically when pressed! This throws off any fine positioning you had with your cursor. Try it for five minutes, and you’ll quickly learn that this alone ensures that this machine can only be used as a desktop. Many, many users are complaining about the feel of the new pad. This feels like an attempt to imitate the look of an Apple product, instead of maintaining the functionality of the X series, except that the hardware isn’t very good.

Interlude…

I could deal with the issues below if my top three issues (above) weren’t strong deal breakers! They are all easy to fix too! I hope these get addressed so that I can post a follow-up “rave” article. ;)

Other issues/differences…

The X240 has a new, incompatible power adapter.

Lenovo has decided it needs to squeeze more money out of users by introducing new, backwards-incompatible technology. To this end, the well-known barrel adapter is now gone, and it has been replaced by a strange rectangle. You might as well throw out all your old power bricks, as Lenovo has said they won’t offer a compatibility adapter. Apparently, they used to sell such an adapter, but it has been discontinued.

The X240 has a new, incompatible docking bay.

If you were a fan of docking, you’ll now have to purchase a new bay.

The X240 has a combined microphone/headphone jack.

Minimalist designs and the abundance of cheap phone headsets has reduced the dual jack design to a combined single jack. I don’t like the change, but I realize there’s not much that can be done to prevent it.

Lenovo has a terrible warranty program.

Getting a hardware replacement with the Lenovo “return to depot” service can take about two weeks. Bringing your laptop in to a service centre is equally challenging. The locator database is out of date, and once you do find a service centre, many are unwilling to accept your hardware because they’re backlogged with work! If you are lucky enough to get a part replaced, hope that it isn’t defective or DOA. I had to go through three rounds of bad motherboards (graphics faults) on my X201. The on-site service is a good option if you can afford it.

Conclusion

Instead of iterating and improving on the much-loved X series, Lenovo has decided to sacrifice its followers in trying to appeal to a low-end laptop market. Once the model of power and portability, the X series is now designed for users who are afraid of having too many buttons or too much RAM.

The X220 and X230 are not available for sale anymore. The X250 isn’t due before 2015. I’ve extended the warranty of my X201 to the maximum of five years. Let’s hope someone at Lenovo reads this article and can make a difference before it’s too late. I’m happy to consult and demo new hardware if you contact me. I’d genuinely like to help.

Happy Hacking,

James

Show the exit status in your $PS1

As an update to my earlier article, a friend gave me an idea of how to make my $PS1 even better… First, the relevant part of my ~/.bashrc:

ps1_prompt() {
	# this must run first, to capture the exit status of the last command
	local ps1_exit=$?
	local ps1_status ps1_git

	if [ $ps1_exit -eq 0 ]; then
		#ps1_status=`echo -e "\[\033[32m\]"'\$'"\[\033[0m\]"`
		ps1_status='\$'
	else
		# non-zero exit: colour the trailing $ in bold red
		ps1_status=`echo -e "\[\033[1;31m\]"'\$'"\[\033[0m\]"`
	fi

	# show the current git branch (in green), unless we're on master
	# NOTE: __git_ps1 is provided by git's shell prompt support (git-prompt.sh)
	ps1_git=''
	if [ "$(__git_ps1 %s)" != '' -a "$(__git_ps1 %s)" != 'master' ]; then
		ps1_git=" (\[\033[32m\]"$(__git_ps1 "%s")"\[\033[0m\])"
	fi

	PS1="${debian_chroot:+($debian_chroot)}\u@\h:\[\033[01;34m\]\w\[\033[00m\]${ps1_git}${ps1_status} "
}

# preserve earlier PROMPT_COMMAND entries...
PROMPT_COMMAND="ps1_prompt;$PROMPT_COMMAND"

If you haven’t figured it out, the magic is that the trailing $ prompt gets coloured in red when the previous command exited with a non-zero value. Example:

james@computer:~$ cdmkdir /tmp/ttboj # yes, i built cdmkdir
james@computer:/tmp/ttboj$ false
james@computer:/tmp/ttboj$ echo ttboj
ttboj
james@computer:/tmp/ttboj$ ^C
james@computer:/tmp/ttboj$ true
james@computer:/tmp/ttboj$ cd ~/code/puppet/puppet-gluster/
james@computer:~/code/puppet/puppet-gluster$ # hack, hack, hack...

You can still:

$ echo $?
42

if you want to know exactly what the return code was, and of course you can edit the above ~/.bashrc snippet to match your needs.
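
As an aside, cdmkdir isn’t a standard command (as noted above, I built it). A minimal sketch of such a helper, assuming it simply combines mkdir and cd (my actual version may do more), would be:

cdmkdir() {
	# create the directory (and any missing parents), then change into it
	mkdir -p "$1" && cd "$1"
}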

Hopefully this will help you be more productive; I know it’s helping me!

Happy hacking,

James

Screencasts of Puppet-Gluster + Vagrant

I decided to record some screencasts to show how easy it is to deploy GlusterFS using Puppet-Gluster+Vagrant. You can follow along even if you don’t know anything about Puppet or Vagrant. The hardest part of this process was producing the actual videos!

I recommend first reading my earlier articles if you’re planning on following along:

Without any further delay, here are the screencasts:

Part 1: Intro, and provisioning of the Puppet server.

Part 2: Initial building of the Gluster hosts.

Part 3: Finishing the Gluster builds.

Part 4: GlusterFS client mounting and tests.

Part 5: Mixed bag of code, infrastructure tours, examples and other details.

I hope you enjoyed these videos. Thank you to the Gluster.org community for hosting them. If you liked these videos, please consider sponsoring some of my work, or making a donation!

As a side note, the only screencast tool that worked was gtk-recordmydesktop; however, it deleted my second recording (which had to be re-recorded), and the audio stopped working one minute into my third recording (which then had to be separately recorded, and mixed in). Amazingly, pitivi was the only tool which worked to properly mix them together!

Happy Hacking,

James

PS: Please note, you may not sell, edit, redistribute, perform, or host these videos elsewhere without my permission. I especially don’t want to see them on youtube until Google lets me unlink my youtube account! If you do want my permission to use these videos for something, contact me, and we can work something out. I’ll surely allow it if it’s not for something evil. If you’d rather have an interactive, live demo, let me know!

Building base images for Vagrant with a Makefile

I needed a base image “box” for my Puppet-Gluster+Vagrant work. It would have been great if good boxes already existed, and even better if it were easy to build my own. As it turns out, I wasn’t able to satisfy either of these conditions, so I’ve had to build one myself! I’ve published all of my code, so that you can use these techniques and tools too!

Status quo:

Having an NIH problem is bad for your vision, and it’s best to benefit from existing tools before creating your own. I first tried vagrant-cachier, then veewee, and then packer. Vagrant-cachier is a great tool, but it turned out not to be very useful here, because there weren’t any base images available for download that met my needs. Veewee and packer can build those images, but they both failed to do so, for different reasons. Hopefully this situation will improve in the future.

Writing a script:

I started by hacking together a short shell script of commands for building base images. There wasn’t much programming involved, as the process is fairly linear, but it was useful for figuring out what needed to get done.

I decided to use the excellent virt-builder command to put together the base image. This is exactly what it’s good at doing! To install it on Fedora 20, you can run:

$ sudo yum install libguestfs-tools

It wasn’t available in Fedora 19, but after a lot of pain, I managed to build (mostly correct?) packages. I have posted them online if you are brave (or crazy?) enough to want them.

Using the right tool:

After building a few images, I realized that a shell script was the wrong tool, and that it was time for an upgrade. What was the right tool? GNU Make! After working on this for more hours than I’m ready to admit, I present to you, a lovingly crafted virtual machine base image (“box”) builder:

Makefile

The Makefile itself is quite compact. It uses a few shell scripts to do some of the customization, and builds a clean image in about ten minutes. To use it, just run make.
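
To give you a rough idea of what make automates here, the following is a simplified shell sketch of the main steps, reconstructed from the example output further below; the exact flags, package lists and paths are illustrative, not the real Makefile contents:

# build the image with virt-builder ("Running virt-builder...")
virt-builder centos-6 \
	--size 40G --format qcow2 --output builder.img \
	--install 'screen,vim-enhanced,git,puppet' \
	--upload files/selinux:/etc/selinux/config \
	--run-command 'yum update -y' \
	--run files/cleanup.sh
# convert and package it as a vagrant-libvirt box
# ("Running convert..." and "Running tar...")
qemu-img convert -O qcow2 builder.img box.img
tar cvzf centos-6.box ./Vagrantfile ./metadata.json ./box.img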

Customization:

At the moment, it builds x86_64, CentOS 6.5+ machines for vagrant-libvirt, but you can edit the Makefile to build a custom image of your choosing. I’ve gone out of my way to add an $(OUTPUT) variable to the Makefile so that your generated files get saved in /tmp/ or somewhere outside of your source tree.
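
Since make lets you override variables on the command line, redirecting the output is as simple as (the path shown is just an example):

$ make OUTPUT=/tmp/builder/gluster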

Download the image:

If you’d like to download the image that I generated, it is being generously hosted by the Gluster community here. If you’re using the Vagrantfile from my Puppet-Gluster+Vagrant setup, then you don’t have to download it manually; this will happen automatically.
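
If you do want to add the box to Vagrant by hand, the box add subcommand does it; a sketch, with a placeholder for the real download URL:

$ vagrant box add centos-6 <URL-of-the-box>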

Open issues:

The biggest issue with the images is that SELinux gets disabled! You might be okay with this, but it’s actually quite unfortunate. It is disabled to avoid the SELinux relabelling that happens on first boot, as this overhead defeats the usefulness of a fast vagrant deployment. If you know of a way to fix this problem, please let me know!

Example output:

If you’d like to see this in action, but don’t want to run it yourself, here’s an example run:

$ date && time make && date
Mon Jan 20 10:57:35 EST 2014
Running templater...
Running virt-builder...
[   1.0] Downloading: http://libguestfs.org/download/builder/centos-6.xz
[   4.0] Planning how to build this image
[   4.0] Uncompressing
[  19.0] Resizing (using virt-resize) to expand the disk to 40.0G
[ 173.0] Opening the new disk
[ 181.0] Setting a random seed
[ 181.0] Setting root password
[ 181.0] Installing packages: screen vim-enhanced git wget file man tree nmap tcpdump htop lsof telnet mlocate bind-utils koan iftop yum-utils nc rsync nfs-utils sudo openssh-server openssh-clients
[ 212.0] Uploading: files/epel-release-6-8.noarch.rpm to /root/epel-release-6-8.noarch.rpm
[ 212.0] Uploading: files/puppetlabs-release-el-6.noarch.rpm to /root/puppetlabs-release-el-6.noarch.rpm
[ 212.0] Uploading: files/selinux to /etc/selinux/config
[ 212.0] Deleting: /.autorelabel
[ 212.0] Running: yum install -y /root/epel-release-6-8.noarch.rpm && rm -f /root/epel-release-6-8.noarch.rpm
[ 214.0] Running: yum install -y bash-completion moreutils
[ 235.0] Running: yum install -y /root/puppetlabs-release-el-6.noarch.rpm && rm -f /root/puppetlabs-release-el-6.noarch.rpm
[ 239.0] Running: yum install -y puppet
[ 254.0] Running: yum update -y
[ 375.0] Running: files/user.sh
[ 376.0] Running: files/ssh.sh
[ 376.0] Running: files/network.sh
[ 376.0] Running: files/cleanup.sh
[ 377.0] Finishing off
Output: /home/james/tmp/builder/gluster/builder.img
Output size: 40.0G
Output format: qcow2
Total usable space: 38.2G
Free space: 37.3G (97%)
Running convert...
Running tar...
./Vagrantfile
./metadata.json
./box.img

real	9m10.523s
user	2m23.282s
sys	0m37.109s
Mon Jan 20 11:06:46 EST 2014
$

If you have any other questions, please let me know!

Happy hacking,

James

PS: Be careful when writing Makefiles. They can be dangerous if used improperly, and in fact I once took out part of my lib/ directory by running one. Woops!

UPDATE: This technique now exists in its own repo here: https://github.com/purpleidea/vagrant-builder

Testing GlusterFS during “Glusterfest”

The GlusterFS community is having a “test day”. Puppet-Gluster+Vagrant is a great tool to help with this, and it has now been patched to support alpha, beta, qa, and rc releases! Because it was built so well (*cough*, shameless plug), it only took one patch.

Okay, first make sure that your Puppet-Gluster+Vagrant setup is working properly. I have only tested this on Fedora 20. Please read:

Automatically deploying GlusterFS with Puppet-Gluster+Vagrant!

to make sure you’re comfortable with the tools and infrastructure.

This weekend we’re testing 3.5.0 beta1. It turns out that the full rpm version for this is:

3.5.0-0.1.beta1.el6

You can figure out these strings yourself by browsing the folders in:

https://download.gluster.org/pub/gluster/glusterfs/qa-releases/

To test a specific version, use the --gluster-version argument that I added to the vagrant command. For this deployment, here is the list of commands that I used:

$ mkdir /tmp/vagrant/
$ cd /tmp/vagrant/
$ git clone --recursive https://github.com/purpleidea/puppet-gluster.git
$ cd puppet-gluster/vagrant/gluster/
$ vagrant up puppet
$ sudo -v && vagrant up --gluster-version='3.5.0-0.1.beta1.el6' --gluster-count=2 --no-parallel

As you can see, this is a standard vagrant deploy. I’ve decided to build two gluster hosts (--gluster-count=2) and I’m specifying the version string shown above. I’ve also decided to build in series (--no-parallel) because I think there might be some hidden race conditions, possibly in the vagrant-libvirt stack.

After about five minutes, the two hosts were built, and about six minutes after that, Puppet-Gluster had finished doing its magic. I had logged in to watch the progress, but if you were out getting a coffee, when you came back you could run:

$ gluster volume info

to see your newly created volume!

If you want to try a different version or host count, you don’t need to destroy the entire infrastructure. You can destroy the gluster annex hosts:

$ vagrant destroy annex{1..2}

and then run a new vagrant up command.

In addition, I’ve added a --gluster-firewall option. Currently it defaults to false because there’s a strange firewall bug blocking my VRRP (keepalived) setup. If you’d like to enable it and help me fix this bug, you can use:

--gluster-firewall=true

To make sure the firewall is off, you can use:

--gluster-firewall=false

In the future, I will change the default value to true, so specify it explicitly if you need a certain behaviour.
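
Putting all of the above together, a complete re-test of this beta with the firewall explicitly disabled looks something like:

$ vagrant destroy annex{1..2}
$ sudo -v && vagrant up --gluster-version='3.5.0-0.1.beta1.el6' --gluster-count=2 --gluster-firewall=false --no-parallel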

Happy hacking,

James

Automatically deploying GlusterFS with Puppet-Gluster + Vagrant!

Puppet-Gluster was always about automating the deployment of GlusterFS. Getting your own Puppet server and the associated infrastructure running was never included “out of the box”. Today, it is! (This is big news!)

I’ve used Vagrant to automatically build these GlusterFS clusters. I’ve tested this with Fedora 20, and vagrant-libvirt. This won’t work with Fedora 19 because of bz#876541. I recommend first reading my earlier articles for Vagrant and Fedora:

Once you’re comfortable with the material in the above articles, we can continue…

The short answer:

$ sudo service nfs start
$ git clone --recursive https://github.com/purpleidea/puppet-gluster.git
$ cd puppet-gluster/vagrant/gluster/
$ vagrant up puppet && sudo -v && vagrant up

Once those commands finish, you should have four running gluster hosts, and a puppet server. The gluster hosts will still be building. You can log in and tail -F log files, or watch -n 1 gluster status commands.

The whole process including the one-time downloads took about 30 minutes. If you’ve got faster internet than I do, I’m sure you can cut that down to under 20. Building the gluster hosts themselves probably takes about 15 minutes.

Enjoy your new Gluster cluster!

Screenshots!

I took a few screenshots to make this more visual for you. I like to have virt-manager open so that I can visually see what’s going on:

The annex{1..4} machines are building in parallel. The valleys happened when the machines were waiting for the vagrant DHCP server (dnsmasq).

Here we can see two puppet runs happening on annex1 and annex4.

Notice the two peaks on the puppet server which correspond to the valleys on annex{1,4}.

Here’s another example, with four hosts working in parallel:

Can you answer why the annex machines have two peaks? Why are the second peaks bigger?

Tell me more!

Okay, let’s start with the command sequence shown above.

$ sudo service nfs start

This needs to be run once if the NFS server on your host is not already running. It is used to provide folder synchronization for the Vagrant guest machines. I offer more information about the NFS synchronization in an earlier article.

$ git clone --recursive https://github.com/purpleidea/puppet-gluster.git

This will pull down the Puppet-Gluster source, and all the necessary submodules.

$ cd puppet-gluster/vagrant/gluster/

The Puppet-Gluster source contains a vagrant subdirectory. I’ve included a gluster subdirectory inside it, so that your machines get a sensible prefix. In the future, this might not be necessary.

$ vagrant up puppet && sudo -v && vagrant up

This is where the fun stuff happens. You’ll need a base box image to run machines with Vagrant. Luckily, I’ve already built one for you, and it is generously hosted by the Gluster community.

The very first time you run this Vagrant command, it will download this image automatically, install it and then continue the build process. This initial box download and installation only happens once. Subsequent Puppet-Gluster+Vagrant deploys and re-deploys won’t need to re-download the base image!

This command starts by building the puppet server. Vagrant might use sudo to prompt you for root access. This is used to manage your /etc/exports file. After the puppet server is finished building, we refresh the sudo cache to avoid bug #2680.

The last vagrant up command starts up the remaining gluster hosts in parallel, and kicks off the initial puppet runs. As I mentioned above, the gluster hosts will still be building. Puppet automatically waits for the cluster to “settle” and enter a steady state (no host/brick changes) before it creates the first volume. You can log in and tail -F log files, or watch -n 1 gluster status commands.

At this point, your cluster is running and you can do whatever you want with it! Puppet-Gluster+Vagrant is meant to be easy. If this wasn’t easy, or you can think of a way to make this better, let me know!

I want N hosts, not 4:

By default, this will build four (4) gluster hosts. I’ve spent a lot of time writing a fancy Vagrantfile, to give you speed and configurability. If you’d like to set a different number of hosts, you’ll first need to destroy the hosts that you’ve built already:

$ vagrant destroy annex{1..4}

You don’t have to rebuild the puppet server because this command is clever and automatically cleans the old host entries from it! This makes re-deploying even faster!

Then, run the vagrant up command with the --gluster-count=<N> argument. Example:

$ vagrant up --gluster-count=8

This is also configurable in the puppet-gluster.yaml file which will appear in your vagrant working directory. Remember that before you change any configuration option, you should destroy the affected hosts first, otherwise vagrant can get confused about the current machine state.

I want to test a different GlusterFS version:

By default, this will use the packages from:

https://download.gluster.org/pub/gluster/glusterfs/LATEST/

but if you’d like to pick a specific GlusterFS version you can do so with the --gluster-version=<version> argument. Example:

$ vagrant up --gluster-version='3.4.2-1.el6'

This is also stored, and configurable in the puppet-gluster.yaml file.
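
I won’t document the file format here, but conceptually it just stores the same values that the command line arguments set; a hypothetical illustration (the real key names may differ):

# puppet-gluster.yaml (hypothetical key names, for illustration only)
:gluster_count: 4
:gluster_version: '3.4.2-1.el6'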

Does (repeating) this consume a lot of bandwidth?

Not really, no. There is an initial download of about 450MB for the base image. You’ll only ever need to download this again if I publish an updated version.

Each deployment will hit a public mirror to download the necessary puppet, GlusterFS and keepalived packages. The puppet server caused about 115MB of package downloads, and each gluster host needed about 58MB.

The great thing about this setup is that it is integrated with vagrant-cachier, so that you don’t have to re-download packages. When building the gluster hosts in parallel (the default), each host will have to download the necessary packages into a separate directory. If you’d like each host to share the same cache folder, and save yourself 58MB or so per machine, you’ll need to build in series. You can do this with:

$ vagrant up --no-parallel

I chose speed over preserving bandwidth, so I do not recommend this option. If your ISP has a bandwidth cap, you should find one that isn’t crippled.

Subsequent re-builds won’t download any packages that haven’t already been downloaded.

What should I look for?

Once the vagrant commands are done running, you’ll want to look at something to see how your machines are doing. I like to log in and run different commands. I usually log in like this:

$ vcssh --screen root@annex{1..4}

I explain how to do this kind of magic in an earlier post. I then run some of the following commands:

# tail -F /var/log/messages

This lets me see what the machine is doing. I can see puppet runs and other useful information fly by.

# ip a s eth2

This lets me check that the VIP is working correctly, and which machine it’s on. This should usually be the first machine.

# ps auxww | grep again.[py]

This lets me see if puppet has scheduled another puppet run. My scripts automatically do this when they decide that there is still building (deployment) left to do. If you see a python again.py process, this means it is sleeping and will wake up to continue shortly.

# gluster peer status

This lets me see which hosts have connected, and what state they’re in.

# gluster volume info

This lets me see if Puppet-Gluster has built me a volume yet. By default it builds one distributed volume named puppet.

I want to configure this differently!

Okay, you’re more than welcome to! All of the scripts can be customized. If you want to configure the volume(s) differently, you’ll want to look in the:

puppet-gluster/vagrant/gluster/puppet/manifests/site.pp

file. The gluster::simple class is well-documented, and can be configured however you like. If you want to do more serious hacking, have a look at the Vagrantfile, the source, and the submodules. Of course the GlusterFS source is a great place to hack too!
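
For example, a tweaked gluster::simple declaration in site.pp might look roughly like the following; I’m writing the parameter names from memory, so treat them as assumptions and check the class documentation:

class { '::gluster::simple':
	replica => 2,		# assumed parameter: two-way replication
	volume  => 'puppet',	# assumed parameter: the volume name
}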

The network block, domain, and other parameters are all configurable inside of the Vagrantfile. I’ve tried to use sensible defaults where possible. I’m using example.com as the default domain. Yes, this will work fine on your private network. DNS is currently configured with the /etc/hosts file. I wrote some magic into the Vagrantfile so that the slow /etc/hosts shell provisioning only has to happen once per machine! If you have a better, functioning, alternative, please let me know!
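
For reference, the generated /etc/hosts entries conceptually look like the following; the addresses shown are made up for illustration:

# /etc/hosts (illustrative addresses only)
192.168.121.1	puppet.example.com puppet
192.168.121.101	annex1.example.com annex1
192.168.121.102	annex2.example.com annex2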

What’s the greatest number of machines this will scale to?

Good question! I’d like to know too! I know that GlusterFS probably can’t scale to 1000 hosts yet. Keepalived can’t support more than 256 priorities; therefore, Puppet-Gluster can’t scale beyond that count until a suitable fix can be found. There are likely some earlier limits inside of Puppet-Gluster due to maximum command line length constraints that you’ll hit. If you find any, let me know and I’ll patch them. Patches now cost around seven karma points. Other than those limits, there’s the limit of my hardware. Since this is all being virtualized on a lowly X201, my tests are limited. An upgrade would be awesome!

Can I use this to test QA releases, point releases and new GlusterFS versions?

Absolutely! That’s the idea. There are two caveats:

  1. Automatically testing QA releases isn’t supported until the QA packages have a sensible home on download.gluster.org or similar. This will need a change to:
    https://github.com/purpleidea/puppet-gluster/blob/master/manifests/repo.pp#L18

    The gluster community is working on this, and as soon as a solution is found, I’ll patch Puppet-Gluster to support it. If you want to disable automatic repository management (in gluster::simple) and manage this yourself with the vagrant shell provisioner, you’re able to do so now.

  2. It’s possible that new releases introduce bugs, or change things in a backwards incompatible way that breaks Puppet-Gluster. If this happens, please let me know so that something can get patched. That’s what testing is for!

You could probably use this infrastructure to test GlusterFS builds automatically, but that’s a project that would need real funding.

Updating the puppet source:

If you make a change to the puppet source, but you don’t want to rebuild the puppet virtual machine, you don’t have to. All you have to do is run:

$ vagrant provision puppet

This will update the puppet server with any changes made to the source tree at:

puppet-gluster/vagrant/gluster/puppet/

Keep in mind that the modules subdirectory contains all the necessary puppet submodules, and a clone of puppet-gluster itself. You’ll first need to run make inside of:

puppet-gluster/vagrant/gluster/puppet/modules/

to refresh the local clone. To see what’s going on, or to customize the process, you can look at the Makefile.

Client machines:

At the moment, this doesn’t build separate machines for gluster client use. You can either mount your gluster pool from the puppet server, another gluster server, or if you add the right DNS entries to your /etc/hosts, you can mount the volume on your host machine. If you really want Vagrant to build client machines, you’ll have to persuade me.
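
For example, assuming the default volume named puppet, and a working host entry for annex1.example.com, mounting on your host machine would look something like:

# mkdir -p /mnt/gluster
# mount -t glusterfs annex1.example.com:/puppet /mnt/gluster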

What about firewalls?

Normally I use shorewall to manage the firewall. It integrates well with Puppet-Gluster, and does a great job. For an unexplained reason, it seems to be blocking my VRRP (keepalived) traffic, and I had to disable it. I think this is due to a libvirt networking bug, but I can’t prove it yet. If you can help debug this issue, please let me know! To reproduce it, enable the firewall and shorewall directives in:

puppet-gluster/vagrant/gluster/puppet/manifests/site.pp

and then get keepalived to work.

Re-provisioning a machine after a long wait throws an error:

You might be hitting: vagrant-cachier #74. If you do, there is an available workaround.

Keepalived shows “invalid passwd!” messages in the logs:

This is expected. This happens because we build a distributed password for use with keepalived. Before the cluster state has settled, the password will be different from host to host. Only when the cluster is coherent will the password be identical everywhere, which incidentally is the only time when the VIP matters.

How did you build that awesome base image?

I used virt-builder, some scripts, and a clever Makefile. I’ll be publishing this code shortly. Hasn’t this been enough to keep you busy for a while?

Are you still here?

If you’ve read this far, then good for you! I’m sorry that it has been a long read, but I figured I would try to answer everyone’s questions in advance. I’d like to hear your comments! I get very little feedback, and I’ve never gotten a single tip! If you find this useful, please let me know.

Until then,

Happy hacking,

James