Keeping git submodules in sync with your branches

This is a quick trick that makes working with git submodules a little more magical.

One day you might find that your project needs git submodules. It’s probably not necessary for everyday hacking, but if you’re gluing things together, it can be quite useful. Puppet-Gluster uses this technique to easily include all the dependencies needed for a Puppet-Gluster+Vagrant automatic deployment.

If you’re a good hacker, you develop things in separate feature branches. Example:

cd code/projectdir/
git checkout -b feat/my-cool-feature
# hack hack hack
git add -p
# add stuff
git commit -m 'my cool new feature'
git push
# yay!

The problem arises if you git pull inside of a git submodule to update it to a particular commit. When you switch branches in the parent project, the git submodule's checkout doesn't move along with you! Personally, I think this is a bug, but perhaps it's not. In any case, here's the fix:

add:

#!/bin/bash
exec git submodule update

to your:

<projectdir>/.git/hooks/post-checkout

and then run:

chmod u+x <projectdir>/.git/hooks/post-checkout

and you’re good to go! Here’s an example:

james@computer:~/code/puppet/puppet-gluster$ git checkout feat/yamldata
M vagrant/gluster/puppet/modules/puppet
Switched to branch 'feat/yamldata'
Submodule path 'vagrant/gluster/puppet/modules/puppet': checked out 'f139d0b7cfe6d55c0848d0d338e19fe640a961f2'
james@computer:~/code/puppet/puppet-gluster (feat/yamldata)$ git checkout master
M vagrant/gluster/puppet/modules/puppet
Switched to branch 'master'
Your branch is up-to-date with 'glusterforge/master'.
Submodule path 'vagrant/gluster/puppet/modules/puppet': checked out '07ec49d1f67a498b31b4f164678a76c464e129c4'
james@computer:~/code/puppet/puppet-gluster$ cat .git/hooks/post-checkout
#!/bin/bash
exec git submodule update
james@computer:~/code/puppet/puppet-gluster$
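
If you'd rather set the hook up in one shot, here's a minimal sketch of the same two steps (run it from your project's root directory):

$ cat > .git/hooks/post-checkout <<'EOF'
#!/bin/bash
exec git submodule update
EOF
$ chmod u+x .git/hooks/post-checkout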

Hope that helps you out too! If someone knows of a use case where you don't want this behaviour, please let me know! Many thanks to #git for helping me solve this issue!

Happy hacking,

James

 

Puppet-Gluster now available as RPM

I've been afraid of RPM and package maintaining [1] for years, but thanks to Kaleb Keithley, I have finally made some RPMs that weren't generated from a high-level tool. Now that I have the boilerplate done, it's a relatively painless process!

In case you don’t know kkeithley, he is a wizard [2] who happens to also be especially cool and hardworking. If you meet him, be sure to buy him a $BEVERAGE. </plug>

A photo of kkeithley after he (temporarily) transformed himself into a wizard penguin.

The full source of my changes is available in git.

If you want to make the RPMs yourself, simply clone the puppet-gluster source, and run: make rpm. If you'd rather download pre-built RPMs, SRPMs, or source tarballs, they are all being graciously hosted on download.gluster.org, thanks to John Mark Walker and the gluster.org community.
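
Here's roughly what that looks like (a sketch only: I'm assuming you already have the usual rpmbuild tooling installed, and the --recursive flag just matches the clone commands used elsewhere in these posts):

$ git clone --recursive https://github.com/purpleidea/puppet-gluster.git
$ cd puppet-gluster/
$ make rpm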

These RPMs will install their contents into /usr/share/puppet/modules/. They should work on Fedora or CentOS, but they do require a puppet package to be installed. I hope to offer them in the future as part of a repository for easier consumption.

There are also RPMs available for puppet-common, puppet-keepalived, puppet-puppet, puppet-shorewall, puppet-yum, and even puppetlabs-stdlib. These are the dependencies required to install the puppet-gluster module.

Please let me know if you find any issues with any of the packages, or if you have any recommendations for improvement! I’m new to packaging, so I probably made some mistakes.

Happy Hacking,

James

[1] package maintainer, aka: “paintainer” – according to semiosis, who is right!

[2] wizard as in an awesome, talented, hacker.

Introducing Puppet Exec['again']

Puppet is missing a number of much-needed features. That’s the bad news. The good news is that I’ve been able to write some of these as modules that don’t need to change the Puppet core! This is an article about one of these features.

Posit: It’s not possible to apply all of your Puppet manifests in a single run.

I believe that this holds true for the current implementation of Puppet. Most manifests can, do, and should apply completely in a single run. If your configuration takes more than one run to converge, then chances are that you're doing something wrong.

(For the sake of this article, convergence means that everything has been applied cleanly, and that a subsequent Puppet run wouldn’t have any work to do.)

There are some advanced corner cases where this is not possible. In these situations, you will either have to wait for the next Puppet run (by default it will run every 30 minutes) or keep running Puppet manually until your configuration has converged. Neither of these situations is acceptable because:

  • Waiting 30 minutes while your machines are idle is (mostly) a waste of time.
  • Doing manual work to set up your automation kind of defeats the purpose.

[comic: 'Are you stealing those LCDs?' 'Yeah, but I'm doing it while my code compiles.']

Waiting 30 minutes while your machines are idle is (mostly) a waste of time. Okay, maybe it’s not entirely a waste of time :)

So what’s the solution?

Introducing: Puppet Exec['again']!

Exec['again'] is a feature which I've added to my Puppet-Common module.

What does it do?

Each Puppet run, your code can decide if it thinks there is more work to do, or if the host is not yet in a converged state. If so, it will notify Exec['again'].

What does Exec['again'] do?

Exec['again'] will fork a process off from the running puppet process. It will wait until that parent process has finished, and then it will spawn (technically: execvpe) a new puppet process to run puppet again. The module is smart enough to inspect the parent puppet process, and it knows how to run the child puppet. Once the new child puppet process is running, you won't see any leftover process id from the parent Exec['again'] tool.
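
To make the mechanism a bit more concrete, here is a rough bash sketch of the wait-then-respawn idea. This is not the actual implementation (the real tool is a standalone python script, described further down); it assumes it gets handed the pid of the currently running puppet process, and that puppet agent --test is an acceptable way to kick off the next run:

#!/bin/bash
# NOT the real tool -- just a sketch of the idea. $1 is assumed to be the
# pid of the puppet process that is currently running.
parent_pid="$1"
while kill -0 "$parent_pid" 2>/dev/null; do
    sleep 1                    # the parent puppet run is still going; wait
done
exec puppet agent --test       # parent has exited; run puppet again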

How do I tell it to run?

It's quite simple: all you have to do is import my puppet module, and then notify the magic Exec['again'] type that my class defines. Example:

include common::again

$some_str = 'ttboj is awesome'
# you can notify from any type that can generate a notification!
# typically, using exec is the most common, but is not required!
file { '/tmp/foo':
    content => "${some_str}\n",
    notify => Exec['again'], # notify puppet!
}

How do I decide if I need to run again?

This depends on your module, and isn’t always a trivial thing to figure out. In one case, I had to build a finite state machine in puppet to help decide whether this was necessary or not. In some cases, the solution might be simpler. In all cases, this is an advanced technique, so you’ll probably already have a good idea about how to figure this out if you need this type of technique.

Can I introduce a minimum delay before the next run happens?

Yes, absolutely. This is particularly useful if you are building a distributed system, and you want to give other hosts a chance to export resources before each successive run. Example:

include common::again

# when notified, this will run puppet again, delta sec after it ends!
common::again::delta { 'some-name':
    delta => 120, # 2 minutes (pick your own value)
}

# to run the above Exec['again'] you can use:
exec { '/bin/true':
    onlyif => '/bin/false', # TODO: some condition
    notify => Common::Again::Delta['some-name'],
}

Can you show me a real-world example of this module?

Have a look at the Puppet-Gluster module. This module was one of the reasons that I wrote the Exec['again'] functionality.

Are there any caveats?

Maybe! It's possible to cause a fast "infinite loop", where Puppet gets run unnecessarily. This could effectively DDoS your puppetmaster if left unchecked, so please use this with caution! Keep in mind that puppet typically runs in an infinite loop already, except with a 30 minute interval.

Help, it won’t stop!

Either your code has become sentient and has decided it wants to enable kerberos, or you've got a bug in your Puppet manifests. If you fix the bug, things should eventually go back to normal. To kill the process that's re-spawning puppet, look for it in your process tree. Example:

[root@server ~]# ps auxww | grep again[.py]
root 4079 0.0 0.7 132700 3768 ? S 18:26 0:00 /usr/bin/python /var/lib/puppet/tmp/common/again/again.py --delta 120
[root@server ~]# killall again.py
[root@server ~]# echo $?
0
[root@server ~]# ps auxww | grep again[.py]
[root@server ~]# killall again.py
again.py: no process killed
[root@server ~]#

Does this work with puppet running as a service or with puppet agent --test?

Yes.

How was the spawn/exec logic implemented?

The spawn/exec logic was implemented as a standalone python program that gets copied to your local system, and does all the heavy lifting. Please have a look and let me know if you can find any bugs!

Conclusion

I hope you enjoyed this addition to your toolbox. Please remember to use it with care. If you have a legitimate use for it, please let me know so that I can better understand your use case!

Happy hacking,

James

 

Speaking at SCALE today!

I’ll be giving a talk at SCALE today about automatically deploying GlusterFS with Puppet-Gluster and Vagrant. I’ll be giving some live demos, and this will cover some of the material from:

Automatically deploying GlusterFS with Puppet-Gluster + Vagrant!

and it will contain excerpts from:

Screencasts of Puppet-Gluster + Vagrant

I’ll also be talking about some new upcoming features, and am happy to answer all of your questions!

The talk will be part of Infrastructure.next and is starting around 1:30 or 2pm in the Century AB room.

If you’re not able to attend, or you’d like a more personalized demo, send me a note! I’ll be around for the next few days.

Thanks to Joe Brockmeier and John Mark Walker for hosting the event and sending me here.

Happy Hacking,

James

 

Building base images for Vagrant with a Makefile

I needed a base image “box” for my Puppet-Gluster+Vagrant work. It would have been great if good boxes already existed, and even better if it were easy to build my own. As it turns out, I wasn’t able to satisfy either of these conditions, so I’ve had to build one myself! I’ve published all of my code, so that you can use these techniques and tools too!

Status quo:

Having an NIH problem is bad for your vision, and it's best to benefit from existing tools before creating your own. I first tried using vagrant-cachier, and then veewee, and packer. Vagrant-cachier is a great tool, but it turned out not to be very useful, because there weren't any base images available for download that met my needs. Veewee and packer can build those images, but they both failed to do so, for different reasons. Hopefully this situation will improve in the future.

Writing a script:

I started by hacking together a short shell script of commands for building base images. There wasn't much programming involved, as the process was fairly linear, but it was useful for figuring out what needed to get done.

I decided to use the excellent virt-builder command to put together the base image. This is exactly what it’s good at doing! To install it on Fedora 20, you can run:

$ sudo yum install libguestfs-tools

It wasn’t available in Fedora 19, but after a lot of pain, I managed to build (mostly correct?) packages. I have posted them online if you are brave (or crazy?) enough to want them.

Using the right tool:

After building a few images, I realized that a shell script was the wrong tool, and that it was time for an upgrade. What was the right tool? GNU Make! After working on this for more hours than I'm ready to admit, I present to you a lovingly crafted virtual machine base image (“box”) builder:

Makefile

The Makefile itself is quite compact. It uses a few shell scripts to do some of the customization, and builds a clean image in about ten minutes. To use it, just run make.

Customization:

At the moment, it builds x86_64, CentOS 6.5+ machines for vagrant-libvirt, but you can edit the Makefile to build a custom image of your choosing. I’ve gone out of my way to add an $(OUTPUT) variable to the Makefile so that your generated files get saved in /tmp/ or somewhere outside of your source tree.
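
For example, since a command line variable assignment overrides definitions inside a Makefile, something like this should work (the path here is just an illustration):

$ make OUTPUT=/tmp/builder/gluster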

Download the image:

If you’d like to download the image that I generated, it is being generously hosted by the Gluster community here. If you’re using the Vagrantfile from my Puppet-Gluster+Vagrant setup, then you don’t have to download it manually, this will happen automatically.

Open issues:

The biggest issue with the images is that SELinux gets disabled! You might be okay with this, but it’s actually quite unfortunate. It is disabled to avoid the SELinux relabelling that happens on first boot, as this overhead defeats the usefulness of a fast vagrant deployment. If you know of a way to fix this problem, please let me know!

Example output:

If you’d like to see this in action, but don’t want to run it yourself, here’s an example run:

$ date && time make && date
Mon Jan 20 10:57:35 EST 2014
Running templater...
Running virt-builder...
[   1.0] Downloading: http://libguestfs.org/download/builder/centos-6.xz
[   4.0] Planning how to build this image
[   4.0] Uncompressing
[  19.0] Resizing (using virt-resize) to expand the disk to 40.0G
[ 173.0] Opening the new disk
[ 181.0] Setting a random seed
[ 181.0] Setting root password
[ 181.0] Installing packages: screen vim-enhanced git wget file man tree nmap tcpdump htop lsof telnet mlocate bind-utils koan iftop yum-utils nc rsync nfs-utils sudo openssh-server openssh-clients
[ 212.0] Uploading: files/epel-release-6-8.noarch.rpm to /root/epel-release-6-8.noarch.rpm
[ 212.0] Uploading: files/puppetlabs-release-el-6.noarch.rpm to /root/puppetlabs-release-el-6.noarch.rpm
[ 212.0] Uploading: files/selinux to /etc/selinux/config
[ 212.0] Deleting: /.autorelabel
[ 212.0] Running: yum install -y /root/epel-release-6-8.noarch.rpm && rm -f /root/epel-release-6-8.noarch.rpm
[ 214.0] Running: yum install -y bash-completion moreutils
[ 235.0] Running: yum install -y /root/puppetlabs-release-el-6.noarch.rpm && rm -f /root/puppetlabs-release-el-6.noarch.rpm
[ 239.0] Running: yum install -y puppet
[ 254.0] Running: yum update -y
[ 375.0] Running: files/user.sh
[ 376.0] Running: files/ssh.sh
[ 376.0] Running: files/network.sh
[ 376.0] Running: files/cleanup.sh
[ 377.0] Finishing off
Output: /home/james/tmp/builder/gluster/builder.img
Output size: 40.0G
Output format: qcow2
Total usable space: 38.2G
Free space: 37.3G (97%)
Running convert...
Running tar...
./Vagrantfile
./metadata.json
./box.img

real	9m10.523s
user	2m23.282s
sys	0m37.109s
Mon Jan 20 11:06:46 EST 2014
$

If you have any other questions, please let me know!

Happy hacking,

James

PS: Be careful when writing Makefiles. They can be dangerous if used improperly, and in fact I once took out part of my lib/ directory by running one. Woops!

UPDATE: This technique now exists in its own repo here: https://github.com/purpleidea/vagrant-builder

Testing GlusterFS during “Glusterfest”

The GlusterFS community is having a “test day”. Puppet-Gluster+Vagrant is a great tool to help with this, and it has now been patched to support alpha, beta, qa, and rc releases! Because it was built so well (*cough*, shameless plug), it only took one patch.

Okay, first make sure that your Puppet-Gluster+Vagrant setup is working properly. I have only tested this on Fedora 20. Please read:

Automatically deploying GlusterFS with Puppet-Gluster+Vagrant!

to make sure you’re comfortable with the tools and infrastructure.

This weekend we’re testing 3.5.0 beta1. It turns out that the full rpm version for this is:

3.5.0-0.1.beta1.el6

You can figure out these strings yourself by browsing the folders in:

https://download.gluster.org/pub/gluster/glusterfs/qa-releases/

To test a specific version, use the --gluster-version argument that I added to the vagrant command. For this deployment, here is the list of commands that I used:

$ mkdir /tmp/vagrant/
$ cd /tmp/vagrant/
$ git clone --recursive https://github.com/purpleidea/puppet-gluster.git
$ cd puppet-gluster/vagrant/gluster/
$ vagrant up puppet
$ sudo -v && vagrant up --gluster-version='3.5.0-0.1.beta1.el6' --gluster-count=2 --no-parallel

As you can see, this is a standard vagrant deploy. I’ve decided to build two gluster hosts (--gluster-count=2) and I’m specifying the version string shown above. I’ve also decided to build in series (--no-parallel) because I think there might be some hidden race conditions, possibly in the vagrant-libvirt stack.

After about five minutes, the two hosts were built, and about six minutes after that, Puppet-Gluster had finished doing its magic. I had logged in to watch the progress, but if you were out getting a coffee, when you came back you could run:

$ gluster volume info

to see your newly created volume!

If you want to try a different version or host count, you don’t need to destroy the entire infrastructure. You can destroy the gluster annex hosts:

$ vagrant destroy annex{1..2}

and then run a new vagrant up command.
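
For example, to rebuild with four gluster hosts instead of two, using the same version string as before:

$ vagrant up --gluster-version='3.5.0-0.1.beta1.el6' --gluster-count=4 --no-parallel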

In addition, I’ve added a --gluster-firewall option. Currently it defaults to false because there’s a strange firewall bug blocking my VRRP (keepalived) setup. If you’d like to enable it and help me fix this bug, you can use:

--gluster-firewall=true

To make sure the firewall is off, you can use:

--gluster-firewall=false

In the future, I will change the default value to true, so specify it explicitly if you need a certain behaviour.

Happy hacking,

James

Automatically deploying GlusterFS with Puppet-Gluster + Vagrant!

Puppet-Gluster was always about automating the deployment of GlusterFS. Getting your own Puppet server and the associated infrastructure running was never included “out of the box”. Today, it is! (This is big news!)

I’ve used Vagrant to automatically build these GlusterFS clusters. I’ve tested this with Fedora 20, and vagrant-libvirt. This won’t work with Fedora 19 because of bz#876541. I recommend first reading my earlier articles for Vagrant and Fedora:

Once you’re comfortable with the material in the above articles, we can continue…

The short answer:

$ sudo service nfs start
$ git clone --recursive https://github.com/purpleidea/puppet-gluster.git
$ cd puppet-gluster/vagrant/gluster/
$ vagrant up puppet && sudo -v && vagrant up

Once those commands finish, you should have four running gluster hosts, and a puppet server. The gluster hosts will still be building. You can log in and tail -F log files, or watch -n 1 gluster status commands.

The whole process, including the one-time downloads, took about 30 minutes. If you've got faster internet than I do, I'm sure you can cut that down to under 20. Building the gluster hosts themselves probably takes about 15 minutes.

Enjoy your new Gluster cluster!

Screenshots!

I took a few screenshots to make this more visual for you. I like to have virt-manager open so that I can visually see what’s going on:


The annex{1..4} machines are building in parallel. The valleys happened when the machines were waiting for the vagrant DHCP server (dnsmasq).

Here we can see two puppet runs happening on annex1 and annex4.

Notice the two peaks on the puppet server which correspond to the valleys on annex{1,4}.

Here’s another example, with four hosts working in parallel:

[screenshot: four annex machines working in parallel]

Can you answer why the annex machines have two peaks? Why are the second peaks bigger?

Tell me more!

Okay, let’s start with the command sequence shown above.

$ sudo service nfs start

This needs to be run once if the NFS server on your host is not already running. It is used to provide folder synchronization for the Vagrant guest machines. I offer more information about the NFS synchronization in an earlier article.

$ git clone --recursive https://github.com/purpleidea/puppet-gluster.git

This will pull down the Puppet-Gluster source, and all the necessary submodules.

$ cd puppet-gluster/vagrant/gluster/

The Puppet-Gluster source contains a vagrant subdirectory. I’ve included a gluster subdirectory inside it, so that your machines get a sensible prefix. In the future, this might not be necessary.

$ vagrant up puppet && sudo -v && vagrant up

This is where the fun stuff happens. You’ll need a base box image to run machines with Vagrant. Luckily, I’ve already built one for you, and it is generously hosted by the Gluster community.

The very first time you run this Vagrant command, it will download this image automatically, install it and then continue the build process. This initial box download and installation only happens once. Subsequent Puppet-Gluster+Vagrant deploys and re-deploys won’t need to re-download the base image!

This command starts by building the puppet server. Vagrant might use sudo to prompt you for root access. This is used to manage your /etc/exports file. After the puppet server is finished building, we refresh the sudo cache to avoid bug #2680.

The last vagrant up command starts up the remaining gluster hosts in parallel, and kicks off the initial puppet runs. As I mentioned above, the gluster hosts will still be building. Puppet automatically waits for the cluster to “settle” and enter a steady state (no host/brick changes) before it creates the first volume. You can log in and tail -F log files, or watch -n 1 gluster status commands.

At this point, your cluster is running and you can do whatever you want with it! Puppet-Gluster+Vagrant is meant to be easy. If this wasn’t easy, or you can think of a way to make this better, let me know!

I want N hosts, not 4:

By default, this will build four (4) gluster hosts. I’ve spent a lot of time writing a fancy Vagrantfile, to give you speed and configurability. If you’d like to set a different number of hosts, you’ll first need to destroy the hosts that you’ve built already:

$ vagrant destroy annex{1..4}

You don’t have to rebuild the puppet server because this command is clever and automatically cleans the old host entries from it! This makes re-deploying even faster!

Then, run the vagrant up command with the --gluster-count=<N> argument. Example:

$ vagrant up --gluster-count=8

This is also configurable in the puppet-gluster.yaml file which will appear in your vagrant working directory. Remember that before you change any configuration option, you should destroy the affected hosts first, otherwise vagrant can get confused about the current machine state.

I want to test a different GlusterFS version:

By default, this will use the packages from:

https://download.gluster.org/pub/gluster/glusterfs/LATEST/

but if you’d like to pick a specific GlusterFS version you can do so with the --gluster-version=<version> argument. Example:

$ vagrant up --gluster-version='3.4.2-1.el6'

This is also stored, and configurable in the puppet-gluster.yaml file.

Does (repeating) this consume a lot of bandwidth?

Not really, no. There is an initial download of about 450MB for the base image. You’ll only ever need to download this again if I publish an updated version.

Each deployment will hit a public mirror to download the necessary puppet, GlusterFS and keepalived packages. The puppet server caused about 115MB of package downloads, and each gluster host needed about 58MB.

The great thing about this setup is that it is integrated with vagrant-cachier, so that you don't have to re-download packages. When building the gluster hosts in parallel (the default), each host will have to download the necessary packages into a separate directory. If you'd like each host to share the same cache folder, and save yourself 58MB or so per machine, you'll need to build in series. You can do this with:

$ vagrant up --no-parallel

I chose speed over preserving bandwidth, so I do not recommend this option. If your ISP has a bandwidth cap, you should find one that isn’t crippled.

Subsequent re-builds won’t download any packages that haven’t already been downloaded.

What should I look for?

Once the vagrant commands are done running, you’ll want to look at something to see how your machines are doing. I like to log in and run different commands. I usually log in like this:

$ vcssh --screen root@annex{1..4}

I explain how to do this kind of magic in an earlier post. I then run some of the following commands:

# tail -F /var/log/messages

This lets me see what the machine is doing. I can see puppet runs and other useful information fly by.

# ip a s eth2

This lets me check that the VIP is working correctly, and which machine it’s on. This should usually be the first machine.

# ps auxww | grep again.[py]

This lets me see if puppet has scheduled another puppet run. My scripts automatically do this when they decide that there is still building (deployment) left to do. If you see a python again.py process, this means it is sleeping and will wake up to continue shortly.

# gluster peer status

This lets me see which hosts have connected, and what state they’re in.

# gluster volume info

This lets me see if Puppet-Gluster has built me a volume yet. By default it builds one distributed volume named puppet.

I want to configure this differently!

Okay, you’re more than welcome to! All of the scripts can be customized. If you want to configure the volume(s) differently, you’ll want to look in the:

puppet-gluster/vagrant/gluster/puppet/manifests/site.pp

file. The gluster::simple class is well-documented, and can be configured however you like. If you want to do more serious hacking, have a look at the Vagrantfile, the source, and the submodules. Of course the GlusterFS source is a great place to hack too!

The network block, domain, and other parameters are all configurable inside of the Vagrantfile. I’ve tried to use sensible defaults where possible. I’m using example.com as the default domain. Yes, this will work fine on your private network. DNS is currently configured with the /etc/hosts file. I wrote some magic into the Vagrantfile so that the slow /etc/hosts shell provisioning only has to happen once per machine! If you have a better, functioning, alternative, please let me know!

What’s the greatest number of machines this will scale to?

Good question! I’d like to know too! I know that GlusterFS probably can’t scale to 1000 hosts yet. Keepalived can’t support more than 256 priorities, therefore Puppet-Gluster can’t scale beyond that count until a suitable fix can be found. There are likely some earlier limits inside of Puppet-Gluster due to maximum command line length constraints that you’ll hit. If you find any, let me know and I’ll patch them. Patches now cost around seven karma points. Other than those limits, there’s the limit of my hardware. Since this is all being virtualized on a lowly X201, my tests are limited. An upgrade would be awesome!

Can I use this to test QA releases, point releases and new GlusterFS versions?

Absolutely! That’s the idea. There are two caveats:

  1. Automatically testing QA releases isn’t supported until the QA packages have a sensible home on download.gluster.org or similar. This will need a change to:
    https://github.com/purpleidea/puppet-gluster/blob/master/manifests/repo.pp#L18

    The gluster community is working on this, and as soon as a solution is found, I’ll patch Puppet-Gluster to support it. If you want to disable automatic repository management (in gluster::simple) and manage this yourself with the vagrant shell provisioner, you’re able to do so now.

  2. It’s possible that new releases introduce bugs, or change things in a backwards incompatible way that breaks Puppet-Gluster. If this happens, please let me know so that something can get patched. That’s what testing is for!

You could probably use this infrastructure to test GlusterFS builds automatically, but that’s a project that would need real funding.

Updating the puppet source:

If you make a change to the puppet source, but you don’t want to rebuild the puppet virtual machine, you don’t have to. All you have to do is run:

$ vagrant provision puppet

This will update the puppet server with any changes made to the source tree at:

puppet-gluster/vagrant/gluster/puppet/

Keep in mind that the modules subdirectory contains all the necessary puppet submodules, and a clone of puppet-gluster itself. You’ll first need to run make inside of:

puppet-gluster/vagrant/gluster/puppet/modules/

to refresh the local clone. To see what’s going on, or to customize the process, you can look at the Makefile.
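
Putting those steps together, a typical edit-and-refresh cycle looks something like this (a sketch of the steps described above, starting from the top of the source tree):

$ cd vagrant/gluster/puppet/modules/
$ make                        # refresh the local clone and submodules
$ cd ../../                   # back to the vagrant working directory
$ vagrant provision puppet    # push the updated source to the puppet server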

Client machines:

At the moment, this doesn’t build separate machines for gluster client use. You can either mount your gluster pool from the puppet server, another gluster server, or if you add the right DNS entries to your /etc/hosts, you can mount the volume on your host machine. If you really want Vagrant to build client machines, you’ll have to persuade me.
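
For example, to mount the default puppet volume directly on your host, you could add host entries and mount it like this (the addresses below are made up; use your actual VM addresses, and make sure the glusterfs client packages are installed on your host first):

# append to /etc/hosts (addresses are made up -- check your Vagrantfile/network):
192.168.142.101 annex1.example.com annex1
192.168.142.102 annex2.example.com annex2

$ sudo mkdir -p /mnt/gluster
$ sudo mount -t glusterfs annex1.example.com:/puppet /mnt/gluster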

What about firewalls?

Normally I use shorewall to manage the firewall. It integrates well with Puppet-Gluster, and does a great job. For an unexplained reason, it seems to be blocking my VRRP (keepalived) traffic, and I had to disable it. I think this is due to a libvirt networking bug, but I can’t prove it yet. If you can help debug this issue, please let me know! To reproduce it, enable the firewall and shorewall directives in:

puppet-gluster/vagrant/gluster/puppet/manifests/site.pp

and then get keepalived to work.

Re-provisioning a machine after a long wait throws an error:

You might be hitting: vagrant-cachier #74. If you do, there is an available workaround.

Keepalived shows “invalid passwd!” messages in the logs:

This is expected. This happens because we build a distributed password for use with keepalived. Before the cluster state has settled, the password will be different from host to host. Only when the cluster is coherent will the password be identical everywhere, which incidentally is the only time when the VIP matters.

How did you build that awesome base image?

I used virt-builder, some scripts, and a clever Makefile. I’ll be publishing this code shortly. Hasn’t this been enough to keep you busy for a while?

Are you still here?

If you’ve read this far, then good for you! I’m sorry that it has been a long read, but I figured I would try to answer everyone’s questions in advance. I’d like to hear your comments! I get very little feedback, and I’ve never gotten a single tip! If you find this useful, please let me know.

Until then,

Happy hacking,

James