Building a snow shelter

To give you a break from the usual GNU/Linux/DevOps/Puppet/GlusterFS drab, I’ve decided to have a go at writing a different kind of technical article. This article will show you how to build the traditional Canadian snow dwelling known as a quinzee. If you will be travelling to Canada, I recommend that you read through this article ahead of time, so that you don’t offend your host by being unfamiliar with their traditional living accommodations.

Warning:

If you are not Canadian or if it is your first time building this type of shelter, I recommend that you have a professional supervise and guide you! If it is not sufficiently cold out (less than 0°C) when you are building or sleeping in your shelter, it could collapse, squashing and suffocating you! If it’s built correctly, a quinzee can safely support the weight of many people, although it is not intended for use as scaffolding. When you are inside the quinzee digging, you must have a buddy outside, standing watch for your safety, in case of quinzee collapse or polar bear attack.

Fresh snow:

Start off by choosing a place to build your quinzee. Usually you’ll build your quinzee near the other quinzees in the village, however for this article, I have decided to set up in a remote location, as this will be used as my vacation home. It is good to build on flat terrain, where there is a lot of fresh snow. In Canada, there is always an abundance of snow so it should be easy to find a pristine area.

Fresh snow

If you build in a wooded area, be sure that there are no trees or dead branches that might fall or break off in the wind and squish you or your quinzee!

Stratification:

With each snowfall, new layers of snow are deposited onto the older layers which is called stratification. Since each layer could have different consistencies or strengths, it is important to mix or break up the layering before we build our quinzee. This will make sure that the walls of the quinzee are stable, and don’t shift later on.

Destratification of the snow

Here, the layers of snow are destratified with a shovel, in a circle of about two to three metres in diameter. This circle will be the outside size of your quinzee. A three metre diameter quinzee would more than comfortably fit three large sleeping Canadians, which is the equivalent of two standard polar bears wide.

Piling the snow:

To make a quinzee, you need a large pile of snow. You’ll want to shovel the snow from outside of the destratified quinzee circle, into the centre of the pile. Work your way around the circle, gradually moving outwards as you need more snow. This technique has the added advantage that the area outside of your quinzee will be cleared of snow, and gives you room for stargazing, polar bear watching, and other typical Canadian pastimes.

A pile of snow

After a few hours your pile will grow large enough that it should be about two to three metres high. You don’t have to worry about the specific height as long as you always throw the snow on the top of the pile. The pile will automatically spread out to form a larger diameter base if it is too high to be supported on a smaller base.

Some quinzee builders will pile their backpacks and other equipment in the centre of the pile under a tarp before piling on the snow. This is a tricky optimization, but if done correctly, there will be less snow to shovel onto the pile, and less snow to dig out at the end.

A larger pile of snow

When not in use, make sure you don’t leave shovels or other items lying around on the ground, as they could easily get covered up by snowfall and lost forever. Conveniently, shovels can be stuck into the snow, where they’ll stand upright and remain visible.

At this point your pile of snow should now be complete. Congratulate yourself on a job well done and award yourself with a cool mug of maple syrup.

Sticks:

When digging out the quinzee, we’ll need to know how much snow to remove so that we don’t create unwanted windows in the wall of the shelter. To help keep track of the wall thickness, first gather about 40 to 50 sticks, each about 30cm in length. Insert all the sticks in the wall of the quinzee spaced out about 30cm from one another, and all pointing towards the centre of the shelter.

Sticks inserted into the pile of snow

If the sticks are longer than 30cm, it’s okay if they protrude from the outside of the shelter. If they are too short, you can use them for firewood. When we dig out the shelter, we should encounter the sticks from inside the quinzee. Whenever you see a stick, you should stop digging in that direction.

Compaction:

At this point the quinzee needs time to settle. It typically takes between two and six hours. In this time, the snow crystals will lock together under their own weight, and the structure will harden because of the cold.

Waiting for compaction

Although not easily noticeable, the structure may shrink due to this compaction. This is usually a great time to prepare lunch!

Lunch:

A typical Canadian meal might consist of bannock cooked on a stick, Montreal bagels, poutine, and a glass of maple syrup. Dessert is usually tire, served on snow. When your quinzee has sufficiently compacted, it’s time to start digging it out. If your quinzee needs more time to settle, then you can keep busy by gathering some firewood or practising your favourite Canadian song. While gathering a little firewood, I had a chance to take a quick photo looking down the main street of my village. On the right you can see some of the kindling I’ve collected, and in the distance to the left you can see a quinzee.

Main road of my village

Digging:

You typically want as small of a door opening as you can fit through. Start digging near, but above the base of the quinzee at a downward angle towards the centre. As you reach the centre, start digging outwards in all directions. The downward angle will prevent strong winds from blowing directly into the quinzee. The inner dome will keep heat from escaping. It takes some experience to get the shape right, but after years of practice, you’ll quickly become an expert.

Cross-sectional diagram of a quinzee

This process should take you about an hour. When you are digging inside of the quinzee, remember to always have your buddy on the outside in the event of collapse. While collapses are rare, they can happen, in particular if the weather is too warm, if you didn’t let your quinzee settle for long enough, or if you made your walls too thin or your roof too thick. It is important that your buddy pay attention, because the quinzee insulates you from sound (and cold) very well. To demonstrate this, scream or talk loudly while you’re inside the quinzee. Your buddy will not be able to hear you very easily.

Entrance to the quinzee

During the day, if you ask your buddy to seal off the entrance with their body, it will be very dark inside the quinzee, but you should be able to see faint blue areas where the walls are the thinnest and some light is coming in. You probably don’t want to dig much further in those areas.

Finishing up:

When you are finished digging out your quinzee, pull out one of the sticks in the roof to create an air hole for ventilation. You’ll probably want to gently smooth out the inside walls and roof with your gloved hand. This will prevent the jagged digging marks (snow stalactites) from dripping on you once you’re sleeping in the quinzee and your body heat is causing them to melt.

View of the door from inside the quinzee

As a related optimization, some quinzee builders like to light a candle in the centre of the quinzee for a few hours to try to melt any variations on the inner walls, causing a glaze and preventing dripping.

Making it a home:

It’s customary to line the bottom of your quinzee with either a tarp, or if it’s available, some hay. The hay is a particularly comfortable bedding material because it soaks up excess moisture which would otherwise make your floor damp. In general, Canadians only line their quinzees with hay when royalty or a special guest is coming to visit. If you do not receive any hay when staying over at a friend’s place, please do not be offended as it is hard to come by in a country that has a winter climate for most of the year.

Looking into the quinzee

To get in and out of your quinzee it is customary to crawl or slide in on your stomach. In the above photo, an inconsiderate guest decided to “walk” on their hands and knees, causing damage to the door. In some Canadian villages, damaging a neighbour’s door can be a great insult and has led to at least two serious incidents. [1] [2]

For sleeping, orient your feet so they are closest to the door, and with your head away from it. When multiple people are lying next to each other in this way, the quinzee can be quite warm during cold winter nights.

Conclusion:

If you plan on leaving your home unattended for some time, it is common practice to destroy your quinzee so that small animals or children don’t accidentally get trapped inside. Sadly, only a few days after I took these photos, a polar bear attacked my village and destroyed the quinzee!

What remained after a polar bear attack

One positive outcome of the attack is that you can get a good look inside the quinzee and see a nice cross-section of the walls. Fortunately, this won’t cost me much in building materials, but it will be a while before I have time to rebuild, and even longer before I can get a guest quinzee built!

I hope this has been an instructional article! Stay warm, stay safe, and

Happy Hacking,

James

Scathing review of the Lenovo X240

I’m using a Lenovo X201 with 8GiB of RAM. Apart from some minor issues, I’ve been very satisfied with this laptop. It’s over four years old, and so I decided to see what’s available on the horizon. I did not buy an X240 because of the following reasons:

The X240 has only one slot for RAM and thus supports a maximum of 8GiB.

I think it’s pretty ridiculous for any successor to the X230 to support less RAM. In and of itself this is a deal breaker! My X201 has 8GiB, and I wish it had more! It doesn’t make sense to settle for less on new kit. In other words, Lenovo is stating that 8GiB ought to be enough for anybody.

The X240 keyboard is missing buttons.

The X201 keyboard has PageUp and PageDown keys in addition to the Forward and Back buttons which are located above the Left and Right arrow keys. It also has dedicated Insert and Delete keys. This is all to be expected, and is awesome.

The X240 removes the dedicated Forward and Back buttons, and replaces them with the (now relocated) PageUp and PageDown keys. To use the Forward and Back buttons, you now need an extra finger to hold the Fn button.

If that wasn’t bad enough, the Insert key now shares a button with the End key, which makes common Shift+Insert sequences near impossible for anyone who isn’t either a cephalopod or an Emacs user. Time to update the pedals to include an Fn key.

The X240 keyboard is missing buttons.

It’s also worth mentioning that the dedicated volume controls are gone.

The X240 “Trackpad”/”Touchpad” is now buttonless and unusable.

The mechanics of the new Touchpad make it unusable! It has no buttons, and instead it feels like it travels 1cm vertically when it is pressed! This throws off any fine positioning you had with your cursor. Try it for five minutes, and you’ll quickly learn that this alone ensures that this machine can only be used as a desktop. Many, many users are complaining about the feel of the new pad. This feels like an attempt to imitate the look of an Apple product instead of maintaining the functionality of the X series, except that the hardware isn’t very good.

Interlude…

I could deal with the below issues, if my top three (above) issues weren’t strong deal breakers! They are all easy to fix too! I hope these are addressed so that I can post a follow up “rave” article. ;)

Other issues/differences…

The X240 has a new, incompatible power adapter.

Lenovo has decided it needs to squeeze more money out of users by introducing new, backwards-incompatible technology. To this end, the well-known barrel adapter is now gone, and it has been replaced by a strange rectangle. You might as well throw out all your old power bricks, as Lenovo has said they won’t offer a compatibility adapter. Apparently, they used to sell such an adapter, but it has been discontinued.

The X240 has a new, incompatible docking bay.

If you were a fan of docking, you’ll now have to purchase a new bay.

The X240 has a combined microphone/headphone jack.

Minimalist designs and the abundance of cheap phone headsets has reduced the dual jack design to a combined single jack. I don’t like the change, but I realize there’s not much that can be done to prevent it.

Lenovo has a terrible warranty program.

Getting a hardware replacement with the Lenovo “return to depot” service can take about two weeks. Bringing your laptop in to a service centre is equally challenging. The locator database is out of date, and once you do find a service centre, many are unwilling to accept your hardware because they’re backlogged with work! If you are lucky enough to get a part replaced, hope that it isn’t defective or DOA. I had to go through three rounds of bad motherboards (graphics issues) on my X201. The on-site service is a good option if you can afford it.

Conclusion

Instead of iterating and improving on the much-loved X series, Lenovo has decided to sacrifice its followers in trying to appeal to a low-end laptop market. Once the model of power and portability, the X series is now designed for users who are afraid of having too many buttons or too much RAM.

The X220 and X230 are not available for sale anymore. The X250 isn’t due before 2015. I’ve extended the warranty of my X201 to the maximum of five years. Let’s hope someone at Lenovo reads this article and can make a difference before it’s too late. I’m happy to consult and demo new hardware if you contact me. I’d genuinely like to help.

Happy Hacking,

James

Show the exit status in your $PS1

As an update to my earlier article, a friend gave me an idea of how to make my $PS1 even better… First, the relevant part of my ~/.bashrc:

ps1_prompt() {
	local ps1_exit=$?	# exit status of the previous command

	local ps1_status
	if [ $ps1_exit -eq 0 ]; then
		#ps1_status=`echo -e "\[\033[32m\]"'\$'"\[\033[0m\]"`
		ps1_status='\$'	# plain prompt character on success
	else
		# colour the trailing prompt character red on failure
		ps1_status=`echo -e "\[\033[1;31m\]"'\$'"\[\033[0m\]"`
	fi

	# show the current git branch (in green), unless we're on master
	local ps1_git=''
	if [ "$(__git_ps1 %s)" != '' -a "$(__git_ps1 %s)" != 'master' ]; then
		ps1_git=" (\[\033[32m\]"$(__git_ps1 "%s")"\[\033[0m\])"
	fi

	PS1="${debian_chroot:+($debian_chroot)}\u@\h:\[\033[01;34m\]\w\[\033[00m\]${ps1_git}${ps1_status} "
}

# preserve earlier PROMPT_COMMAND entries...
PROMPT_COMMAND="ps1_prompt;$PROMPT_COMMAND"

If you haven’t figured it out, the magic is that the trailing $ prompt gets coloured in red when the previous command exited with a non-zero value. Example:

james@computer:~$ cdmkdir /tmp/ttboj # yes, i built cdmkdir
james@computer:/tmp/ttboj$ false
james@computer:/tmp/ttboj$ echo ttboj
ttboj
james@computer:/tmp/ttboj$ ^C
james@computer:/tmp/ttboj$ true
james@computer:/tmp/ttboj$ cd ~/code/puppet/puppet-gluster/
james@computer:~/code/puppet/puppet-gluster$ # hack, hack, hack...

You can still:

$ echo $?
42

if you want more specifics about what the exact return code was, and of course you can edit the above ~/.bashrc snippet to match your needs.
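
As an aside, the cdmkdir helper used in the example session isn’t part of the snippet above, and I’m only guessing at its behaviour, but a minimal version that creates a directory (and parents) and changes into it could look like this:

# hypothetical cdmkdir: mkdir -p the target and cd into it in one step
function cdmkdir {
	mkdir -p "$1" && cd "$1"
}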

Hopefully this will help you be more productive; I know it’s helping me!

Happy hacking,

James

Screencasts of Puppet-Gluster + Vagrant

I decided to record some screencasts to show how easy it is to deploy GlusterFS using Puppet-Gluster+Vagrant. You can follow along even if you don’t know anything about Puppet or Vagrant. The hardest part of this process was producing the actual videos!

I recommend first reading my earlier articles if you’re planning on following along:

Without any further delay, here are the screencasts:

Part 1: Intro, and provisioning of the Puppet server.

Part 2: Initial building of the Gluster hosts.

Part 3: Finishing the Gluster builds.

Part 4: GlusterFS client mounting and tests.

Part 5: Mixed bag of code, infrastructure tours, examples and other details.

I hope you enjoyed these videos. Thank you to the Gluster.org community for hosting them. If you liked these videos, please consider sponsoring some of my work, or making a donation!

As a side note, the only screencast tool that worked was gtk-recordmydesktop; however, it deleted my second recording (which had to be re-recorded), and the audio stopped working one minute into my third recording (which then had to be separately recorded, and mixed in). Amazingly, pitivi was the only tool which worked to properly mix them together!

Happy Hacking,

James

PS: Please note, you may not sell, edit, redistribute, perform, or host these videos elsewhere without my permission. I especially don’t want to see them on youtube until Google lets me unlink my youtube account! If you do want my permission to use these videos for something, contact me, and we can work something out. I’ll surely allow it if it’s not for something evil. If you’d rather have an interactive, live demo, let me know!

Building base images for Vagrant with a Makefile

I needed a base image “box” for my Puppet-Gluster+Vagrant work. It would have been great if good boxes already existed, and even better if it were easy to build my own. As it turns out, I wasn’t able to satisfy either of these conditions, so I’ve had to build one myself! I’ve published all of my code, so that you can use these techniques and tools too!

Status quo:

Having an NIH problem is bad for your vision, and it’s best to benefit from existing tools before creating your own. I first tried using vagrant-cachier, and then veewee, and packer. Vagrant-cachier is a great tool, but it turned out not to be very useful because there weren’t any base images available for download that met my needs. Veewee and packer can build those images, but they both failed in doing so for different reasons. Hopefully this situation will improve in the future.

Writing a script:

I started by hacking together a short shell script of commands for building base images. There wasn’t much programming involved as the process was fairly linear, but it was useful for figuring out what needed to get done.

I decided to use the excellent virt-builder command to put together the base image. This is exactly what it’s good at doing! To install it on Fedora 20, you can run:

$ sudo yum install libguestfs-tools

It wasn’t available in Fedora 19, but after a lot of pain, I managed to build (mostly correct?) packages. I have posted them online if you are brave (or crazy?) enough to want them.

Using the right tool:

After building a few images, I realized that a shell script was the wrong tool, and that it was time for an upgrade. What was the right tool? GNU Make! After working on this for more hours than I’m ready to admit, I present to you, a lovingly crafted virtual machine base image (“box”) builder:

Makefile

The Makefile itself is quite compact. It uses a few shell scripts to do some of the customization, and builds a clean image in about ten minutes. To use it, just run make.
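
If you just want a feel for what the Makefile drives, the heart of it is a single virt-builder invocation. Here is a simplified sketch reconstructed from the example output further down; the real Makefile templates these arguments, so treat the exact flags, package list and output path as illustrative:

$ virt-builder centos-6 \
    --size 40G --format qcow2 \
    --output /tmp/builder/gluster/builder.img \
    --install screen,vim-enhanced,git,wget,puppet \
    --upload files/selinux:/etc/selinux/config \
    --delete /.autorelabel \
    --update \
    --run files/cleanup.sh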

Customization:

At the moment, it builds x86_64, CentOS 6.5+ machines for vagrant-libvirt, but you can edit the Makefile to build a custom image of your choosing. I’ve gone out of my way to add an $(OUTPUT) variable to the Makefile so that your generated files get saved in /tmp/ or somewhere outside of your source tree.

Download the image:

If you’d like to download the image that I generated, it is being generously hosted by the Gluster community here. If you’re using the Vagrantfile from my Puppet-Gluster+Vagrant setup, then you don’t have to download it manually; this will happen automatically.

Open issues:

The biggest issue with the images is that SELinux gets disabled! You might be okay with this, but it’s actually quite unfortunate. It is disabled to avoid the SELinux relabelling that happens on first boot, as this overhead defeats the usefulness of a fast vagrant deployment. If you know of a way to fix this problem, please let me know!

Example output:

If you’d like to see this in action, but don’t want to run it yourself, here’s an example run:

$ date && time make && date
Mon Jan 20 10:57:35 EST 2014
Running templater...
Running virt-builder...
[   1.0] Downloading: http://libguestfs.org/download/builder/centos-6.xz
[   4.0] Planning how to build this image
[   4.0] Uncompressing
[  19.0] Resizing (using virt-resize) to expand the disk to 40.0G
[ 173.0] Opening the new disk
[ 181.0] Setting a random seed
[ 181.0] Setting root password
[ 181.0] Installing packages: screen vim-enhanced git wget file man tree nmap tcpdump htop lsof telnet mlocate bind-utils koan iftop yum-utils nc rsync nfs-utils sudo openssh-server openssh-clients
[ 212.0] Uploading: files/epel-release-6-8.noarch.rpm to /root/epel-release-6-8.noarch.rpm
[ 212.0] Uploading: files/puppetlabs-release-el-6.noarch.rpm to /root/puppetlabs-release-el-6.noarch.rpm
[ 212.0] Uploading: files/selinux to /etc/selinux/config
[ 212.0] Deleting: /.autorelabel
[ 212.0] Running: yum install -y /root/epel-release-6-8.noarch.rpm && rm -f /root/epel-release-6-8.noarch.rpm
[ 214.0] Running: yum install -y bash-completion moreutils
[ 235.0] Running: yum install -y /root/puppetlabs-release-el-6.noarch.rpm && rm -f /root/puppetlabs-release-el-6.noarch.rpm
[ 239.0] Running: yum install -y puppet
[ 254.0] Running: yum update -y
[ 375.0] Running: files/user.sh
[ 376.0] Running: files/ssh.sh
[ 376.0] Running: files/network.sh
[ 376.0] Running: files/cleanup.sh
[ 377.0] Finishing off
Output: /home/james/tmp/builder/gluster/builder.img
Output size: 40.0G
Output format: qcow2
Total usable space: 38.2G
Free space: 37.3G (97%)
Running convert...
Running tar...
./Vagrantfile
./metadata.json
./box.img

real	9m10.523s
user	2m23.282s
sys	0m37.109s
Mon Jan 20 11:06:46 EST 2014
$

If you have any other questions, please let me know!

Happy hacking,

James

PS: Be careful when writing Makefiles. They can be dangerous if used improperly, and in fact I once took out part of my lib/ directory by running one. Whoops!

UPDATE: This technique now exists in its own repo here: https://github.com/purpleidea/vagrant-builder

Testing GlusterFS during “Glusterfest”

The GlusterFS community is having a “test day”. Puppet-Gluster+Vagrant is a great tool to help with this, and it has now been patched to support alpha, beta, qa, and rc releases! Because it was built so well (*cough*, shameless plug), it only took one patch.

Okay, first make sure that your Puppet-Gluster+Vagrant setup is working properly. I have only tested this on Fedora 20. Please read:

Automatically deploying GlusterFS with Puppet-Gluster+Vagrant!

to make sure you’re comfortable with the tools and infrastructure.

This weekend we’re testing 3.5.0 beta1. It turns out that the full rpm version for this is:

3.5.0-0.1.beta1.el6

You can figure out these strings yourself by browsing the folders in:

https://download.gluster.org/pub/gluster/glusterfs/qa-releases/

To test a specific version, use the --gluster-version argument that I added to the vagrant command. For this deployment, here is the list of commands that I used:

$ mkdir /tmp/vagrant/
$ cd /tmp/vagrant/
$ git clone --recursive https://github.com/purpleidea/puppet-gluster.git
$ cd vagrant/gluster/
$ vagrant up puppet
$ sudo -v && vagrant up --gluster-version='3.5.0-0.1.beta1.el6' --gluster-count=2 --no-parallel

As you can see, this is a standard vagrant deploy. I’ve decided to build two gluster hosts (--gluster-count=2) and I’m specifying the version string shown above. I’ve also decided to build in series (--no-parallel) because I think there might be some hidden race conditions, possibly in the vagrant-libvirt stack.

After about five minutes, the two hosts were built, and about six minutes after that, Puppet-Gluster had finished doing its magic. I had logged in to watch the progress, but if you were out getting a coffee, when you came back you could run:

$ gluster volume info

to see your newly created volume!

If you want to try a different version or host count, you don’t need to destroy the entire infrastructure. You can destroy the gluster annex hosts:

$ vagrant destroy annex{1..2}

and then run a new vagrant up command.
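
For example, to rebuild four hosts against the same beta release (the count and version string here are only illustrative; substitute whatever you want to test), you could run:

$ sudo -v && vagrant up --gluster-version='3.5.0-0.1.beta1.el6' --gluster-count=4 --no-parallel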

In addition, I’ve added a --gluster-firewall option. Currently it defaults to false because there’s a strange firewall bug blocking my VRRP (keepalived) setup. If you’d like to enable it and help me fix this bug, you can use:

--gluster-firewall=true

To make sure the firewall is off, you can use:

--gluster-firewall=false

In the future, I will change the default value to true, so specify it explicitly if you need a certain behaviour.

Happy hacking,

James

Automatically deploying GlusterFS with Puppet-Gluster + Vagrant!

Puppet-Gluster was always about automating the deployment of GlusterFS. Getting your own Puppet server and the associated infrastructure running was never included “out of the box”. Today, it is! (This is big news!)

I’ve used Vagrant to automatically build these GlusterFS clusters. I’ve tested this with Fedora 20, and vagrant-libvirt. This won’t work with Fedora 19 because of bz#876541. I recommend first reading my earlier articles for Vagrant and Fedora:

Once you’re comfortable with the material in the above articles, we can continue…

The short answer:

$ sudo service nfs start
$ git clone --recursive https://github.com/purpleidea/puppet-gluster.git
$ cd puppet-gluster/vagrant/gluster/
$ vagrant up puppet && sudo -v && vagrant up

Once those commands finish, you should have four running gluster hosts, and a puppet server. The gluster hosts will still be building. You can log in and tail -F log files, or watch -n 1 gluster status commands.

The whole process including the one-time downloads took about 30 minutes. If you’ve got faster internet than I do, I’m sure you can cut that down to under 20. Building the gluster hosts themselves probably takes about 15 minutes.

Enjoy your new Gluster cluster!

Screenshots!

I took a few screenshots to make this more visual for you. I like to have virt-manager open so that I can visually see what’s going on:

The annex{1..4} machines are building in parallel. The valleys happened when the machines were waiting for the vagrant DHCP server (dnsmasq).

Here we can see two puppet runs happening on annex1 and annex4.

Notice the two peaks on the puppet server which correspond to the valleys on annex{1,4}.

Here’s another example, with four hosts working in parallel:

Can you answer why the annex machines have two peaks? Why are the second peaks bigger?

Tell me more!

Okay, let’s start with the command sequence shown above.

$ sudo service nfs start

This needs to be run once if the NFS server on your host is not already running. It is used to provide folder synchronization for the Vagrant guest machines. I offer more information about the NFS synchronization in an earlier article.

$ git clone --recursive https://github.com/purpleidea/puppet-gluster.git

This will pull down the Puppet-Gluster source, and all the necessary submodules.

$ cd puppet-gluster/vagrant/gluster/

The Puppet-Gluster source contains a vagrant subdirectory. I’ve included a gluster subdirectory inside it, so that your machines get a sensible prefix. In the future, this might not be necessary.

$ vagrant up puppet && sudo -v && vagrant up

This is where the fun stuff happens. You’ll need a base box image to run machines with Vagrant. Luckily, I’ve already built one for you, and it is generously hosted by the Gluster community.

The very first time you run this Vagrant command, it will download this image automatically, install it and then continue the build process. This initial box download and installation only happens once. Subsequent Puppet-Gluster+Vagrant deploys and re-deploys won’t need to re-download the base image!

This command starts by building the puppet server. Vagrant might use sudo to prompt you for root access. This is used to manage your /etc/exports file. After the puppet server is finished building, we refresh the sudo cache to avoid bug #2680.

The last vagrant up command starts up the remaining gluster hosts in parallel, and kicks off the initial puppet runs. As I mentioned above, the gluster hosts will still be building. Puppet automatically waits for the cluster to “settle” and enter a steady state (no host/brick changes) before it creates the first volume. You can log in and tail -F log files, or watch -n 1 gluster status commands.

At this point, your cluster is running and you can do whatever you want with it! Puppet-Gluster+Vagrant is meant to be easy. If this wasn’t easy, or you can think of a way to make this better, let me know!

I want N hosts, not 4:

By default, this will build four (4) gluster hosts. I’ve spent a lot of time writing a fancy Vagrantfile, to give you speed and configurability. If you’d like to set a different number of hosts, you’ll first need to destroy the hosts that you’ve built already:

$ vagrant destroy annex{1..4}

You don’t have to rebuild the puppet server because this command is clever and automatically cleans the old host entries from it! This makes re-deploying even faster!

Then, run the vagrant up command with the --gluster-count=<N> argument. Example:

$ vagrant up --gluster-count=8

This is also configurable in the puppet-gluster.yaml file which will appear in your vagrant working directory. Remember that before you change any configuration option, you should destroy the affected hosts first, otherwise vagrant can get confused about the current machine state.

I want to test a different GlusterFS version:

By default, this will use the packages from:

https://download.gluster.org/pub/gluster/glusterfs/LATEST/

but if you’d like to pick a specific GlusterFS version you can do so with the --gluster-version=<version> argument. Example:

$ vagrant up --gluster-version='3.4.2-1.el6'

This is also stored, and configurable in the puppet-gluster.yaml file.

Does (repeating) this consume a lot of bandwidth?

Not really, no. There is an initial download of about 450MB for the base image. You’ll only ever need to download this again if I publish an updated version.

Each deployment will hit a public mirror to download the necessary puppet, GlusterFS and keepalived packages. The puppet server caused about 115MB of package downloads, and each gluster host needed about 58MB.

The great thing about this setup, is that it is integrated with vagrant-cachier, so that you don’t have to re-download packages. When building the gluster hosts in parallel (the default), each host will have to download the necessary packages into a separate directory. If you’d like each host to share the same cache folder, and save yourself 58MB or so per machine, you’ll need to build in series. You can do this with:

$ vagrant up --no-parallel

I chose speed over preserving bandwidth, so I do not recommend this option. If your ISP has a bandwidth cap, you should find one that isn’t crippled.

Subsequent re-builds won’t download any packages that haven’t already been downloaded.

What should I look for?

Once the vagrant commands are done running, you’ll want to look at something to see how your machines are doing. I like to log in and run different commands. I usually log in like this:

$ vcssh --screen root@annex{1..4}

I explain how to do this kind of magic in an earlier post. I then run some of the following commands:

# tail -F /var/log/messages

This lets me see what the machine is doing. I can see puppet runs and other useful information fly by.

# ip a s eth2

This lets me check that the VIP is working correctly, and which machine it’s on. This should usually be the first machine.

# ps auxww | grep again.[py]

This lets me see if puppet has scheduled another puppet run. My scripts automatically do this when they decide that there is still building (deployment) left to do. If you see a python again.py process, this means it is sleeping and will wake up to continue shortly.

# gluster peer status

This lets me see which hosts have connected, and what state they’re in.

# gluster volume info

This lets me see if Puppet-Gluster has built me a volume yet. By default it builds one distributed volume named puppet.

I want to configure this differently!

Okay, you’re more than welcome to! All of the scripts can be customized. If you want to configure the volume(s) differently, you’ll want to look in the:

puppet-gluster/vagrant/gluster/puppet/manifests/site.pp

file. The gluster::simple class is well-documented, and can be configured however you like. If you want to do more serious hacking, have a look at the Vagrantfile, the source, and the submodules. Of course the GlusterFS source is a great place to hack too!

The network block, domain, and other parameters are all configurable inside of the Vagrantfile. I’ve tried to use sensible defaults where possible. I’m using example.com as the default domain. Yes, this will work fine on your private network. DNS is currently configured with the /etc/hosts file. I wrote some magic into the Vagrantfile so that the slow /etc/hosts shell provisioning only has to happen once per machine! If you have a better, functioning, alternative, please let me know!

What’s the greatest number of machines this will scale to?

Good question! I’d like to know too! I know that GlusterFS probably can’t scale to 1000 hosts yet. Keepalived can’t support more than 256 priorities, therefore Puppet-Gluster can’t scale beyond that count until a suitable fix can be found. There are likely some earlier limits inside of Puppet-Gluster due to maximum command line length constraints that you’ll hit. If you find any, let me know and I’ll patch them. Patches now cost around seven karma points. Other than those limits, there’s the limit of my hardware. Since this is all being virtualized on a lowly X201, my tests are limited. An upgrade would be awesome!

Can I use this to test QA releases, point releases and new GlusterFS versions?

Absolutely! That’s the idea. There are two caveats:

  1. Automatically testing QA releases isn’t supported until the QA packages have a sensible home on download.gluster.org or similar. This will need a change to:
    https://github.com/purpleidea/puppet-gluster/blob/master/manifests/repo.pp#L18

    The gluster community is working on this, and as soon as a solution is found, I’ll patch Puppet-Gluster to support it. If you want to disable automatic repository management (in gluster::simple) and manage this yourself with the vagrant shell provisioner, you’re able to do so now.

  2. It’s possible that new releases introduce bugs, or change things in a backwards incompatible way that breaks Puppet-Gluster. If this happens, please let me know so that something can get patched. That’s what testing is for!

You could probably use this infrastructure to test GlusterFS builds automatically, but that’s a project that would need real funding.

Updating the puppet source:

If you make a change to the puppet source, but you don’t want to rebuild the puppet virtual machine, you don’t have to. All you have to do is run:

$ vagrant provision puppet

This will update the puppet server with any changes made to the source tree at:

puppet-gluster/vagrant/gluster/puppet/

Keep in mind that the modules subdirectory contains all the necessary puppet submodules, and a clone of puppet-gluster itself. You’ll first need to run make inside of:

puppet-gluster/vagrant/gluster/puppet/modules/

to refresh the local clone. To see what’s going on, or to customize the process, you can look at the Makefile.

Client machines:

At the moment, this doesn’t build separate machines for gluster client use. You can either mount your gluster pool from the puppet server, another gluster server, or if you add the right DNS entries to your /etc/hosts, you can mount the volume on your host machine. If you really want Vagrant to build client machines, you’ll have to persuade me.
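
For example, to mount the default volume directly on your host machine, something like the following should work. The hostname and IP address are assumptions based on the example.com defaults mentioned above (check what vagrant-libvirt actually handed out), and puppet is the default volume name:

# first point the hostname at one of the annex machines in /etc/hosts, e.g.:
# 192.168.121.101 annex1.example.com annex1
$ sudo mkdir -p /mnt/gluster
$ sudo mount -t glusterfs annex1.example.com:/puppet /mnt/gluster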

What about firewalls?

Normally I use shorewall to manage the firewall. It integrates well with Puppet-Gluster, and does a great job. For an unexplained reason, it seems to be blocking my VRRP (keepalived) traffic, and I had to disable it. I think this is due to a libvirt networking bug, but I can’t prove it yet. If you can help debug this issue, please let me know! To reproduce it, enable the firewall and shorewall directives in:

puppet-gluster/vagrant/gluster/puppet/manifests/site.pp

and then get keepalived to work.

Re-provisioning a machine after a long wait throws an error:

You might be hitting: vagrant-cachier #74. If you do, there is an available workaround.

Keepalived shows “invalid passwd!” messages in the logs:

This is expected. This happens because we build a distributed password for use with keepalived. Before the cluster state has settled, the password will be different from host to host. Only when the cluster is coherent will the password be identical everywhere, which incidentally is the only time when the VIP matters.

How did you build that awesome base image?

I used virt-builder, some scripts, and a clever Makefile. I’ll be publishing this code shortly. Hasn’t this been enough to keep you busy for a while?

Are you still here?

If you’ve read this far, then good for you! I’m sorry that it has been a long read, but I figured I would try to answer everyone’s questions in advance. I’d like to hear your comments! I get very little feedback, and I’ve never gotten a single tip! If you find this useful, please let me know.

Until then,

Happy hacking,

James

Vagrant clustered SSH and ‘screen’

Some fun updates for vagrant hackers… I wanted to use the venerable clustered SSH (cssh) and screen with vagrant. I decided to expand on my vsftp script. First read:

Vagrant on Fedora with libvirt

and

Vagrant vsftp and other tricks

to get up to speed on the background information.

Vagrant screen:

First, a simple screen hack… I often use my vssh alias to quickly ssh into a machine, but I don’t want to have to waste time with sudo-ing to root and then running screen each time. Enter vscreen:

# vagrant screen
function vscreen {
	[ "$1" = '' ] || [ "$2" != '' ] && echo "Usage: vscreen <vm-name> - vagrant screen" 1>&2 && return 1
	wd=`pwd`		# save wd, then find the Vagrant project
	while [ "`pwd`" != '/' ] && [ ! -e "`pwd`/Vagrantfile" ] && [ ! -d "`pwd`/.vagrant/" ]; do
		#echo "pwd is `pwd`"
		cd ..
	done
	pwd=`pwd`
	cd $wd
	if [ ! -e "$pwd/Vagrantfile" ] || [ ! -d "$pwd/.vagrant/" ]; then
		echo 'Vagrant project not found!' 1>&2 && return 2
	fi

	d="$pwd/.ssh"
	f="$d/$1.config"
	h="$1"
	# hostname extraction from user@host pattern
	p=`expr index "$1" '@'`
	if [ $p -gt 0 ]; then
		let "l = ${#h} - $p"
		h=${h:$p:$l}
	fi

	# if mtime of $f is > than 5 minutes (5 * 60 seconds), re-generate...
	if [ `date -d "now - $(stat -c '%Y' "$f" 2> /dev/null) seconds" +%s` -gt 300 ]; then
		mkdir -p "$d"
		# we cache the lookup because this command is slow...
		vagrant ssh-config "$h" > "$f" || rm "$f"
	fi
	[ -e "$f" ] && ssh -t -F "$f" "$1" 'screen -xRR'
}

I usually run it this way:

$ vscreen root@machine

which logs in as root, to machine and gets me (back) into screen. This is almost identical to the vsftp script which I explained in an earlier blog post.

Vagrant cssh:

First you’ll need to install cssh. On my Fedora machine it’s as easy as:

# yum install -y clusterssh

I’ve been hacking a lot on Puppet-Gluster lately, and occasionally multi-machine hacking demands multi-machine key punching. Enter vcssh:


# vagrant cssh
function vcssh {
	[ "$1" = '' ] && echo "Usage: vcssh [options] [user@]<vm1>[ [user@]vm2[ [user@]vmN...]] - vagrant cssh" 1>&2 && return 1
	wd=`pwd`		# save wd, then find the Vagrant project
	while [ "`pwd`" != '/' ] && [ ! -e "`pwd`/Vagrantfile" ] && [ ! -d "`pwd`/.vagrant/" ]; do
		#echo "pwd is `pwd`"
		cd ..
	done
	pwd=`pwd`
	cd $wd
	if [ ! -e "$pwd/Vagrantfile" ] || [ ! -d "$pwd/.vagrant/" ]; then
		echo 'Vagrant project not found!' 1>&2 && return 2
	fi

	d="$pwd/.ssh"
	cssh="$d/cssh"
	cmd=''
	cat='cat '
	screen=''
	options=''

	multi='f'
	special=''
	for i in "$@"; do	# loop through the list of hosts and arguments!
		#echo $i

		if [ "$special" = 'debug' ]; then	# optional arg value...
			special=''
			if [ "$1" -ge 0 -o "$1" -le 4 ]; then
				cmd="$cmd $i"
				continue
			fi
		fi

		if [ "$multi" = 'y' ]; then	# get the value of the argument
			multi='n'
			cmd="$cmd '$i'"
			continue
		fi

		if [ "${i:0:1}" = '-' ]; then	# does argument start with: - ?

			# build a --screen option
			if [ "$i" = '--screen' ]; then
				screen=' -o RequestTTY=yes'
				cmd="$cmd --action 'screen -xRR'"
				continue
			fi

			if [ "$i" = '--debug' ]; then
				special='debug'
				cmd="$cmd $i"
				continue
			fi

			if [ "$i" = '--options' ]; then
				options=" $i"
				continue
			fi

			# NOTE: commented-out options are probably not useful...
			# match for key => value argument pairs
			if [ "$i" = '--action' -o "$i" = '-a' ] || \
			[ "$i" = '--autoclose' -o "$i" = '-A' ] || \
			#[ "$i" = '--cluster-file' -o "$i" = '-c' ] || \
			#[ "$i" = '--config-file' -o "$i" = '-C' ] || \
			#[ "$i" = '--evaluate' -o "$i" = '-e' ] || \
			[ "$i" = '--font' -o "$i" = '-f' ] || \
			#[ "$i" = '--master' -o "$i" = '-M' ] || \
			#[ "$i" = '--port' -o "$i" = '-p' ] || \
			#[ "$i" = '--tag-file' -o "$i" = '-c' ] || \
			[ "$i" = '--term-args' -o "$i" = '-t' ] || \
			[ "$i" = '--title' -o "$i" = '-T' ] || \
			[ "$i" = '--username' -o "$i" = '-l' ] ; then
				multi='y'	# loop around to get second part
				cmd="$cmd $i"
				continue
			else			# match single argument flags...
				cmd="$cmd $i"
				continue
			fi
		fi

		f="$d/$i.config"
		h="$i"
		# hostname extraction from user@host pattern
		p=`expr index "$i" '@'`
		if [ $p -gt 0 ]; then
			let "l = ${#h} - $p"
			h=${h:$p:$l}
		fi

		# if mtime of $f is > than 5 minutes (5 * 60 seconds), re-generate...
		if [ `date -d "now - $(stat -c '%Y' "$f" 2> /dev/null) seconds" +%s` -gt 300 ]; then
			mkdir -p "$d"
			# we cache the lookup because this command is slow...
			vagrant ssh-config "$h" > "$f" || rm "$f"
		fi

		if [ -e "$f" ]; then
			cmd="$cmd $i"
			cat="$cat $f"	# append config file to list
		fi
	done

	cat="$cat > $cssh"
	#echo $cat
	eval "$cat"			# generate combined config file

	#echo $cmd && return 1
	#[ -e "$cssh" ] && cssh --options "-F ${cssh}$options" $cmd
	# running: bash -c glues together --action 'foo --bar' type commands...
	[ -e "$cssh" ] && bash -c "cssh --options '-F ${cssh}${screen}$options' $cmd"
}

This can be called like this:

$ vcssh annex{1..4} -l root

or like this:

$ vcssh root@hostname foo user@bar james@machine --action 'pwd'

which, as you can see, passes cssh arguments through! Can you see any other special surprises in the code? Well, you can run vcssh like this too:

$ vcssh root@foo james@bar --screen

which will perform exactly as vscreen did above, but in cssh!

You’ll see that the vagrant ssh-config lookups are cached, so this will be speedy when it’s running hot, but expect a few seconds delay when you first run it. If you want a longer cache timeout, it’s easy to change yourself in the function.
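
For example, to cache the ssh-config lookups for 30 minutes instead of five, just bump the constant in the mtime test (the same change applies to the vscreen and vsftp functions):

# 30 minutes is 30 * 60 = 1800 seconds
if [ `date -d "now - $(stat -c '%Y' "$f" 2> /dev/null) seconds" +%s` -gt 1800 ]; then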

I’ve uploaded the code here, so that you don’t have to copy+paste it from my blog!

Happy hacking,

James

Vagrant vsftp and other tricks

As I previously wrote, I’ve been busy with Vagrant on Fedora with libvirt, and have even been submitting patches and issues! (This “closed” issue needs solving!) Here are some of the tricks that I’ve used while hacking away.

Default provider:

I should have mentioned this in my earlier article but I forgot: If you’re always using the same provider, you might want to set it as the default. In my case I’m using vagrant-libvirt. To do so, add the following to your .bashrc:

export VAGRANT_DEFAULT_PROVIDER=libvirt

This helps you avoid having to add --provider=libvirt to all of your vagrant commands.

Aliases:

All good hackers use shell (in my case bash) aliases to make their lives easier. I normally keep all my aliases in .bash_aliases, but I’ve grouped them together with some other vagrant related items in my .bashrc:

alias vp='vagrant provision'
alias vup='vagrant up'
alias vssh='vagrant ssh'
alias vdestroy='vagrant destroy'

Bash functions:

There are some things aliases just can’t do. For those situations, a bash function might be most appropriate. These go inside your .bashrc file. I’d like to share two with you:

Vagrant logging:

Sometimes it’s useful to get more information from the vagrant command that is running. To do this, you can set the VAGRANT_LOG environment variable to info, or debug.

function vlog {
	VAGRANT_LOG=info vagrant "$@" 2> vagrant.log
}

This is usually helpful for debugging a vagrant issue. When you use the vlog command, it works exactly as the normal vagrant command, but it spits out a vagrant.log logfile into the working directory.
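
For example, a verbose bring-up looks like this (any vagrant sub-command works the same way):

$ vlog up --no-parallel
$ less vagrant.log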

Vagrant sftp:

Vagrant provides an ssh command, but it doesn’t offer an sftp command. The sftp tool is fantastically useful when you want to quickly push or pull a file over ssh. First I’ll show you the code so that you can practice reading bash:

# vagrant sftp
function vsftp {
	[ "$1" = '' ] || [ "$2" != '' ] && echo "Usage: vsftp  - vagrant sftp" 1>&2 && return 1
	wd=`pwd`		# save wd, then find the Vagrant project
	while [ "`pwd`" != '/' ] && [ ! -e "`pwd`/Vagrantfile" ] && [ ! -d "`pwd`/.vagrant/" ]; do
		#echo "pwd is `pwd`"
		cd ..
	done
	pwd=`pwd`
	cd $wd
	if [ ! -e "$pwd/Vagrantfile" ] || [ ! -d "$pwd/.vagrant/" ]; then
		echo 'Vagrant project not found!' 1>&2 && return 2
	fi

	d="$pwd/.ssh"
	f="$d/$1.config"

	# if mtime of $f is > than 5 minutes (5 * 60 seconds), re-generate...
	if [ `date -d "now - $(stat -c '%Y' "$f" 2> /dev/null) seconds" +%s` -gt 300 ]; then
		mkdir -p "$d"
		# we cache the lookup because this command is slow...
		vagrant ssh-config "$1" > "$f" || rm "$f"
	fi
	[ -e "$f" ] && sftp -F "$f" "$1"
}

This vsftp command will work from anywhere in the vagrant project root directory or deeper. It does this by first searching upwards until it gets to that directory. When it finds it, it creates a .ssh/ directory there, and stores the proper ssh_config stubs needed for sftp to find the machines. Then the sftp -F command runs, connecting you to your guest machine.

You’ll notice that the first time you connect takes longer than the second time. This is because the vsftp function caches the vagrant ssh-config lookup. You might want to set a different timeout, or disable it altogether. I should also mention that this script searches for a Vagrantfile and .vagrant/ directory to identify the project root. If there is a more "correct" method, please let me know.

I’m particularly proud of this because it was a fun (and useful) hack. This function, and all the above .bashrc material is available here as an easy download.

NFS folder syncing/mounting:

The vagrant-libvirt provider supports one way folder “synchronization” with rsync, and NFS mounting if you want two-way synchronization. I would love if libvirt/vagrant-libvirt had support for real folder sharing via QEMU guest agent and/or 9p. Getting it to work wasn’t trivial. Here’s what I had to do:

Vagrantfile:

Define your “synced_folder” so that you add the :mount_options array:

config.vm.synced_folder "nfs/", "/tmp/nfs", :nfs => true, :mount_options => ['rw', 'vers=3', 'tcp']

When you specify anything in the :mount_options array, it erases all the defaults. The defaults are: vers=3,udp,rw. Obviously, choose your own directory paths! I’d rather see this use NFSv4, but it would have taken slightly more fighting. Please become a warrior today!

Firewall:

Firewalling is not automatic. To work around this, you’ll need to run the following commands before you use your NFS mount:

# systemctl restart nfs-server # this as root, the rest will prompt
$ firewall-cmd --permanent --zone public --add-service mountd
$ firewall-cmd --permanent --zone public --add-service rpc-bind
$ firewall-cmd --permanent --zone public --add-service nfs
$ firewall-cmd --reload

You should only have to do this once, though possibly again after each reboot. If you like, you can patch those commands to run at the top of your Vagrantfile. If you can help fix this issue permanently, the bug is here.

Sudo:

The use of sudo to automatically edit /etc/exports is a neat trick, but it can cause some problems:

  1. It is often not needed
  2. It annoyingly prompts you
  3. It can bork your stdin

You get internet karma from me if you can help permanently solve any of those three problems.

Package caching:

As part of my deployments, Puppet installs a lot of packages. Repeatedly hitting a public mirror isn’t a very friendly thing to do, it wastes bandwidth, and it’s slower than having the packages locally! Setting up a proper mirror is a nice solution, but it comes with a management overhead. There is really only one piece of software that manages repositories properly, but using pulp has its own overhead, and is beyond the scope of this article. Because we’re not using our own local mirror, this won’t allow you to work entirely offline, as the metadata still needs to get downloaded from the net.

Installation:

As an interim solution, we can use vagrant-cachier. This plugin adds synced_folder entries for the apt/yum cache directories. Sneaky! Since this requires two-way sync, you’ll need to get NFS folder syncing/mounting working first. Luckily, I’ve already taught you that!

To pass the right options through to vagrant-cachier, you’ll need this patch. I’d recommend first installing the plugin with:

$ vagrant plugin install vagrant-cachier

and then patching your vagrant-cachier manually. On my machine, the file to patch is found at:

~/.vagrant.d/gems/gems/vagrant-cachier-0.5.0/lib/vagrant-cachier/provision_ext.rb

Vagrantfile:

Here’s what I put in my Vagrantfile:

# NOTE: this doesn't cache metadata, full offline operation not possible
config.cache.auto_detect = true
config.cache.enable :yum		# choose :yum or :apt
if not ARGV.include?('--no-parallel')	# when running in parallel,
	config.cache.scope = :machine	# use the per machine cache
end
config.cache.enable_nfs = true	# sets nfs => true on the synced_folder
# the nolock option is required, otherwise the NFSv3 client will try to
# access the NLM sideband protocol to lock files needed for /var/cache/
# all of this can be avoided by using NFSv4 everywhere. die NFSv3, die!
config.cache.mount_options = ['rw', 'vers=3', 'tcp', 'nolock']

Since multiple package managers operating on the same cache can cause locking issues, this plugin lets you decide if you want a shared cache for all machines, or if you want separate caches. I know that when I use the --no-parallel option it’s safe (enough) to use a common cache because Puppet does all the package installations, and they all happen during each of their first provisioning runs which are themselves sequential.

This isn’t perfect, it’s a hack, but hacks are fun, and in this case, very useful. If things really go wrong, it’s easy to rebuild everything! I still think a few sneaky bugs remain in vagrant-cachier with libvirt, so please find, report, and patch them if you can, although it might be NFS that’s responsible.

Puppet integration:

Integration with Puppet makes this all worthwhile. There are two scenarios:

  1. You’re using a puppetmaster, and you care about things like exported resources and multi-machine systems.
  2. You don’t have a puppetmaster, and you’re running standalone “single machine” code.

The correct answer is #1. Standalone Puppet code is bad for a few reasons:

  1. It neglects the multi-machine nature of servers, and software.
  2. You’re not able to use useful features like exported resources.
  3. It’s easy to (unknowingly) write code that won’t work with a puppetmaster.

If you’d like to see some code which was intentionally written to only work without a puppetmaster (and the puppetmaster friendly equivalent) have a look at my Pushing Puppet material.

There is one good reason for standalone Puppet code, and that is: bootstrapping. In particular, bootstrapping the Puppet server.

DNS:

You need some sort of working DNS setup for this to function properly. Whether that involves hacking up your /etc/hosts files, using one of the vagrant DNS plugins, or running dnsmasq/bind is up to you. I haven’t found an elegant solution yet, hopefully not telling you what I’ve done will push you to come up with something awesome!

The :puppet_server provisioner:

This is the vagrant (puppet agent) provisioner that lets you talk to a puppet server. Here’s my config:

vm.vm.provision :puppet_server do |puppet|
	puppet.puppet_server = 'puppet'
	puppet.options = '--test'    # see the output
end

I’ve added the --test option so that I can see the output go by in my console. I’ve also disabled the puppet service on boot in my base machine image, so that it doesn’t run before I get the chance to watch it. If your base image already has the puppet service running on boot, you can disable it with a shell provisioner. That’s it!
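
If you do need that shell provisioner, the underlying command is just ordinary service disabling. On the CentOS 6 base image used here it would be something like the following (the systemd line is the equivalent for newer distros):

# chkconfig puppet off			# sysvinit (CentOS 6)
# systemctl disable puppet.service	# systemd equivalent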

The :puppet provisioner:

This is the standalone vagrant (puppet apply) provisioner. It seems to get misused quite a lot! As I mentioned, I’m only using it to bootstrap my puppet server. That’s why it’s a useful provisioner. Here’s my config:

vm.vm.provision :puppet do |puppet|
	puppet.module_path = 'puppet/modules'
	puppet.manifests_path = 'puppet/manifests'
	puppet.manifest_file = 'site.pp'
	# custom fact
	puppet.facter = {
		'vagrant' => '1',
	}
end

The puppet/{modules,manifests} directories should exist in your project root. I keep the puppet/modules/ directory fresh with a make script that rsyncs code in from my git directories (a rough sketch follows below), but that’s purely a personal choice. I added the custom fact as an example. All that’s left is for this code to build a puppetmaster!
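
That make script isn’t shown here, but the idea is simple. A rough sketch of the sync step might look like this; the source paths are assumptions about my local layout, so adjust them for yours:

# sketch: refresh puppet/modules/ from local git clones of each module
for m in ~/code/puppet/puppet-*/; do
	name=$(basename "$m" | sed 's/^puppet-//')
	rsync -av --delete --exclude '.git' "$m" "puppet/modules/$name/"
done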

Building a puppetmaster:

Here’s my site.pp:

node puppet {	# puppetmaster

	class { '::puppet::server':
		pluginsync => true,	# do we want to enable pluginsync?
		storeconfigs => true,	# do we want to enable storeconfigs?
		autosign => ['*'],	# NOTE: this is a temporary solution
		#repo => true,		# automatic repos (DIY for now)
		start => true,
	}

	class { '::puppet::deploy':
		path => '/vagrant/puppet/',	# puppet folder is put here...
		backup => false,		# don't use puppet to backup...
	}
}

As you can see, I include two classes. The first one installs and configures the puppetmaster, puppetdb, and so on. The second class runs an rsync from the vagrant deployed /vagrant/puppet/ directory to the proper directories inside of /etc/puppet/ and wherever else is appropriate. Each time I want to push new puppet code, I run vp puppet and then vp <host> of whichever host I want to test. You can even combine the two commands into a one-liner with && if you’d like.
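
For example, using the vp alias from the start of this article (annex1 is just whichever test host you care about):

$ vp puppet && vp annex1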

The puppet-puppet module:

This is the hairy part. I’ve always had an issue with this module. I recently ripped out support for a lot of the old puppet 0.24 era features, and I’ve only since tested it on the recent 3.x series. A lot has changed from version to version, so it has been hard to follow this and keep it sane. There are serious improvements that could be made to the code. In fact, I wouldn’t recommend it unless you’re okay hacking on Puppet code:

https://github.com/purpleidea/puppet-puppet

Most importantly:

This was a long article! I could have split it up into multiple posts, gotten more internet traffic karma, and would have seemed like a much more prolific blogger as a result, but I figured you’d prefer it all in one big lot, and without any suspense. Prove me right by leaving me a tip, and giving me your feedback.

Happy hacking!

James

PS: You’ll want to be able to follow along with everything in this article if you want to use the upcoming Puppet-Gluster+Vagrant infrastructure that I’ll be releasing.

PPS: Heh heh heh…

PPPS: Get it? Vagrant -> Oscar the Grouch -> Muppets -> Puppet

Vagrant on Fedora with libvirt

Apparently lots of people are using Vagrant these days, so I figured I’d try it out. I wanted to get it working on Fedora, and without Virtualbox. This is an intro article on Vagrant, and what I’ve done. I did this on Fedora 19. Feel free to suggest improvements.

Intro:

Vagrant is a tool that easily provisions virtual machines, and then kicks off a configuration management deployment like Puppet. It’s often used for development. I’m planning on using it to test some Puppet-Gluster managed GlusterFS clusters.

Installation:

You should already have the base libvirt/QEMU packages installed, and be comfortable with basic virtual machine concepts. If not, come back when you’re ready.

Fedora 20 was supposed to include Vagrant, but I don’t think this happened. In the meantime, I installed the latest x86_64 RPM from the Vagrant download page.

$ mkdir ~/vagrant
$ cd ~/vagrant
$ wget http://files.vagrantup.com/packages/a40522f5fabccb9ddabad03d836e120ff5d14093/vagrant_1.3.5_x86_64.rpm
$ sudo yum install vagrant_1.3.5_x86_64.rpm

EDIT: Please use version 1.3.5 only! The 1.4 series breaks compatibility with a lot of the necessary plugins. You might have success with different versions, but I have not tested them yet.

You’ll probably need some dependencies for the next few steps. Ensure they’re present:

$ sudo yum install libvirt-devel libxslt-devel libxml2-devel

The virsh and virt-manager tools are especially helpful additions. Install those too.
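
On Fedora, something like this should pull them in (virsh comes from the libvirt-client package; exact package names can vary between releases):

$ sudo yum install libvirt-client virt-manager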

Boxes:

Vagrant has a concept of default “boxes”. These are pre-built images that you download, and use as base images for the virtual machines that you’ll be building. Unfortunately, they are hypervisor specific, and the most commonly available images have been built for Virtualbox. Yuck! To get around this limitation, there is an easy solution.

First we’ll need to install a special Vagrant plugin:

$ sudo yum install qemu-img # depends on this
$ vagrant plugin install vagrant-mutate

EDIT: Note that there is a bug on Fedora 20 that breaks mutate! Feel free to skip over the mutate steps below if you’re on Fedora 20, and use this image instead. You can install it with:

$ vagrant box add centos-6 https://download.gluster.org/pub/gluster/purpleidea/vagrant/centos-6.box --provider=libvirt

You’ll have to replace the precise32 name in the below Vagrantfile with centos-6.
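
In other words, that one line becomes:

config.vm.box = "centos-6"			# instead of "precise32"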

END EDIT

Now download an incompatible box. We’ll use the well-known example:

$ vagrant box add precise32 http://files.vagrantup.com/precise32.box

Finally, convert the box so that it’s libvirt compatible:

$ vagrant mutate precise32 libvirt

This plugin can also convert to KVM-friendly boxes.
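
If you’re using the vagrant-kvm plugin instead, the conversion should presumably look something like this (I haven’t tested this path myself):

$ vagrant mutate precise32 kvm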

You’ll need to make a working directory for Vagrant and initialize it:

$ mkdir test1
$ cd test1
$ vagrant init precise32

Hypervisor:

I initially tried getting this working with the vagrant-kvm plugin that Fedora 20 will eventually include, but I did not succeed. Instead, I used the vagrant-libvirt plugin:

$ vagrant plugin install vagrant-libvirt

EDIT: I forgot to mention that you’ll need to specify the --provider argument when running vagrant commands. I wrote about how to do this in my second article. You can use --provider=libvirt for each command or include the:

export VAGRANT_DEFAULT_PROVIDER=libvirt

line in your ~/.bashrc file.

END EDIT

I have found a number of issues with it, but I’ll show you which magic knobs I’ve used so that you can replicate my setup. Here’s my Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

# NOTE: vagrant-libvirt needs to run in series (not in parallel) to avoid
# trying to create the network twice... eg: vagrant up --no-parallel
# alternatively, you can just create the vm's one at a time manually...

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

	# Every Vagrant virtual environment requires a box to build from
	config.vm.box = "precise32"			# choose your own!

	# List each virtual machine
	config.vm.define :vm1 do |vm1|
		vm1.vm.network :private_network,
			:ip => "192.168.142.101",	# choose your own!
			:libvirt__network_name => "default"	# leave it
	end

	config.vm.define :vm2 do |vm2|
		vm2.vm.network :private_network,
			:ip => "192.168.142.102",	# choose your own!
			:libvirt__network_name => "default"	# leave it
	end

	# Provider-specific configuration
	config.vm.provider :libvirt do |libvirt|
		libvirt.driver = "qemu"
		# leave out host to connect directly with qemu:///system
		#libvirt.host = "localhost"
		libvirt.connect_via_ssh = false		# also needed
		libvirt.username = "root"
		libvirt.storage_pool_name = "default"
	end

	# Enable provisioning with Puppet. You might want to use the agent!
	config.vm.provision :puppet do |puppet|
		puppet.module_path = "modules"
		puppet.manifests_path = "manifests"
		puppet.manifest_file = "site.pp"
		# custom fact
		puppet.facter = {
			"vagrant" => "1",
		}
	end
end

A few caveats:

  • The :libvirt__network_name => "default" needs to always be specified. If you don’t specify it, or if you specify a different name, you’ll get an error:
    Call to virDomainCreateWithFlags failed: Network not found: no network with matching name 'default'
  • With this Vagrantfile, you’ll get an IP address from DHCP, and a static IP address from the :ip definition. If you attempt to disable DHCP with :libvirt__dhcp_enabled => false you’ll see errors:
    grep: /var/lib/libvirt/dnsmasq/*.leases: No such file or directory

    As a result, each machine will have two IP addresses.

  • Running vagrant up causes some errors when more than one machine is defined. This is caused by duplicate attempts to create the same network. To work around this, you can either start each virtual machine manually, or use the --no-parallel option when the network is first created:
    $ vagrant up --no-parallel
    [...]
    <magic awesome happens here>

Commands:

The most common commands you’ll need are:

  • vagrant init – initialize a directory
  • vagrant up [machine] – start/build/deploy a machine
  • vagrant ssh <machine> – connect to a machine
  • vagrant halt [machine] – stop the machine
  • vagrant destroy [machine] – remove/erase/purge the entire machine

I won’t go into further details here, because this information is well documented.
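
As a quick reference, a typical session with the above Vagrantfile might look something like this:

$ cd test1
$ vagrant up vm1 --provider=libvirt	# or set VAGRANT_DEFAULT_PROVIDER
$ vagrant ssh vm1
$ vagrant halt vm1
$ vagrant destroy vm1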

PolicyKit (polkit):

At this point you’re probably getting annoyed by having to repeatedly type your password to interact with your VM through libvirt. You’re probably seeing a dialog like this:

The PolicyKit authentication dialog

This should look familiar if you’ve used virt-manager before. When you open virt-manager, and it tries to connect to qemu:///system, PolicyKit does the right thing and ensures that you’re allowed to get access to the resource first! The annoying part is that you get repeatedly prompted when you’re using Vagrant, because it is constantly opening up and closing new connections. If you’re comfortable allowing this permanently, you can add a policy file:

[Allow james libvirt management permissions]
Identity=unix-user:james
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes

Create this file (you’ll need root) as:

/etc/polkit-1/localauthority/50-local.d/vagrant.pkla

and then you shouldn’t experience any more prompts when you try to manage libvirt! You’ll obviously want to replace the james string with whatever user account you’re using. For more information on the format of this file, and to learn other ways to do this, read the pkla documentation.

If you want to avoid PolicyKit, you can connect to libvirtd over SSH, however I don’t have sshd running on my laptop, and I wanted to connect directly. The default vagrant-libvirt example demonstrates this method.
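
For completeness, here’s a rough sketch of what the SSH variant could look like in the provider block. The remote-host value is a placeholder, and you’d obviously need sshd running on the target machine:

config.vm.provider :libvirt do |libvirt|
	libvirt.driver = "qemu"
	libvirt.host = "remote-host"		# placeholder: the machine running libvirtd
	libvirt.connect_via_ssh = true
	libvirt.username = "root"
	libvirt.storage_pool_name = "default"
end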

Puppet:

Provisioning machines without configuration management wouldn’t be very useful. Thankfully, Vagrant integrates nicely with Puppet. The documentation on this is fairly straightforward, so I won’t cover this part. I show a simple Puppet deployment in my Vagrantfile above, but a more complex setup involving puppet agent and puppetmasterd is possible too.
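
If you want the agent approach, Vagrant also ships a :puppet_server provisioner. Here’s a minimal sketch; the puppetmaster hostname is a placeholder that you’d replace with your own:

config.vm.provision :puppet_server do |puppet|
	puppet.puppet_server = "puppet.example.com"	# placeholder: your puppetmaster's hostname
	puppet.options = "--test"			# watch the output go by
end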

Conclusion:

Hopefully this makes it easier for you to hack in GNU/Linux land. I look forward to seeing this supported natively in Fedora, but until then, hopefully I’ve helped out with the heavy lifting.

Happy hacking!

James