Oh-My-Vagrant “Mainstream” mode and COPR RPMs

Making Oh-My-Vagrant (OMV) more accessible to developers and easy to install (from a distribution package like an RPM) has always been a goal, but was previously never a priority. This is all sorted out now. In this article, I’ll explain how “mainstream” mode works, and how the RPM work was done. (I promise this will be somewhat interesting!)

Prerequisites:

If you haven’t read any of the previous articles about Oh-My-Vagrant, I’d recommend you start there. Many of the articles include screencasts, and combined with the examples/ folder, this is probably the best way to learn OMV, because the documentation could use some love.

Installation:

OMV is now easily installable on Fedora 22 via COPR. It probably works on other distros and versions, but I haven’t tested all of those combinations. This is a colossal improvement from when I first posted about this publicly in 2013. There is still one annoying bug that I occasionally hit. Let me know if you can reproduce.

Install from COPR:

james@computer:~$ sudo dnf copr enable purpleidea/oh-my-vagrant

You are about to enable a Copr repository. Please note that this
repository is not part of the main Fedora distribution, and quality
may vary.

The Fedora Project does not exercise any power over the contents of
this repository beyond the rules outlined in the Copr FAQ at
, and
packages are not held to any quality or security level.

Please do not file bug reports about these packages in Fedora
Bugzilla. In case of problems, contact the owner of this repository.

Do you want to continue? [y/N]: y
Repository successfully enabled.
james@computer:~$ sudo dnf install oh-my-vagrant
Last metadata expiration check performed 0:05:08 ago on Tue Jul  7 22:58:45 2015.
Dependencies resolved.
================================================================================
 Package           Arch     Version            Repository                  Size
================================================================================
Installing:
 oh-my-vagrant     noarch   0.0.7-1            purpleidea-oh-my-vagrant   270 k
 vagrant           noarch   1.7.2-7.fc22       updates                    428 k
 vagrant-libvirt   noarch   0.0.26-2.fc22      fedora                      57 k

Transaction Summary
================================================================================
Install  3 Packages

Total download size: 755 k
Installed size: 2.5 M
Is this ok [y/N]: n
Operation aborted.
james@computer:~$ sudo dnf install -y oh-my-vagrant
Last metadata expiration check performed 0:05:19 ago on Tue Jul  7 22:58:45 2015.
Dependencies resolved.
================================================================================
 Package           Arch     Version            Repository                  Size
================================================================================
Installing:
 oh-my-vagrant     noarch   0.0.7-1            purpleidea-oh-my-vagrant   270 k
 vagrant           noarch   1.7.2-7.fc22       updates                    428 k
 vagrant-libvirt   noarch   0.0.26-2.fc22      fedora                      57 k

Transaction Summary
================================================================================
Install  3 Packages

Total download size: 755 k
Installed size: 2.5 M
Downloading Packages:
(1/3): vagrant-1.7.2-7.fc22.noarch.rpm          626 kB/s | 428 kB     00:00    
(2/3): vagrant-libvirt-0.0.26-2.fc22.noarch.rpm  70 kB/s |  57 kB     00:00    
(3/3): oh-my-vagrant-0.0.7-1.noarch.rpm         243 kB/s | 270 kB     00:01    
--------------------------------------------------------------------------------
Total                                           246 kB/s | 755 kB     00:03     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Installing  : vagrant-1.7.2-7.fc22.noarch                                 1/3 
  Installing  : vagrant-libvirt-0.0.26-2.fc22.noarch                        2/3 
  Installing  : oh-my-vagrant-0.0.7-1.noarch                                3/3 
  Verifying   : oh-my-vagrant-0.0.7-1.noarch                                1/3 
  Verifying   : vagrant-libvirt-0.0.26-2.fc22.noarch                        2/3 
  Verifying   : vagrant-1.7.2-7.fc22.noarch                                 3/3 

Installed:
  oh-my-vagrant.noarch 0.0.7-1                vagrant.noarch 1.7.2-7.fc22       
  vagrant-libvirt.noarch 0.0.26-2.fc22       

Complete!
james@computer:~$

If you’d like to avoid typing passwords over and over again when using vagrant, you can add yourself to the vagrant group. 99% of people do this. The downside is that it could allow your user account to escalate to root privileges. Since most developers have a single-user environment, it’s not a big issue. This is necessary because vagrant uses the qemu:///system connection instead of qemu:///session. If you can help fix this, please hack on it.

james@computer:~$ groups
james wheel docker
james@computer:~$ sudo usermod -aG vagrant james
# you'll need to logout/login for this change to take effect...
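
Once you’ve logged back in, a quick sanity check is to poke the system libvirt connection directly. If the packaged polkit rule for the vagrant group is what grants the access (that’s my assumption here), then this should connect without prompting for a password, and so will vagrant:

james@computer:~$ virsh --connect qemu:///system list --all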

Lastly, a modified vagrant plugin needs to be added to your user session. Installation is automatic the first time you create a new OMV project. Let’s do that and see how it works!

james@computer:~$ mkdir /tmp/omvtest
james@computer:~$ cd !$
cd /tmp/omvtest
james@computer:/tmp/omvtest$ which omv
/usr/bin/omv
james@computer:/tmp/omvtest$ omv init
Oh-My-Vagrant needs to install a modified vagrant-hostmanager plugin.
Is this ok [y/N]: y
Cloning into 'vagrant-hostmanager'...
remote: Counting objects: 801, done.
remote: Total 801 (delta 0), reused 0 (delta 0), pack-reused 801
Receiving objects: 100% (801/801), 132.22 KiB | 0 bytes/s, done.
Resolving deltas: 100% (467/467), done.
Checking connectivity... done.
Branch feat/oh-my-vagrant set up to track remote branch feat/oh-my-vagrant from origin.
Switched to a new branch 'feat/oh-my-vagrant'
sending incremental file list
./
vagrant-hostmanager.rb
vagrant-hostmanager/
vagrant-hostmanager/action.rb
vagrant-hostmanager/command.rb
vagrant-hostmanager/config.rb
vagrant-hostmanager/errors.rb
vagrant-hostmanager/plugin.rb
vagrant-hostmanager/provisioner.rb
vagrant-hostmanager/util.rb
vagrant-hostmanager/version.rb
vagrant-hostmanager/action/
vagrant-hostmanager/action/update_all.rb
vagrant-hostmanager/action/update_guest.rb
vagrant-hostmanager/action/update_host.rb
vagrant-hostmanager/hosts_file/
vagrant-hostmanager/hosts_file/updater.rb

sent 20,560 bytes  received 286 bytes  41,692.00 bytes/sec
total size is 19,533  speedup is 0.94
Patched successfully!
Current machine states:

omv1                      not created (libvirt)

The Libvirt domain is not created. Run `vagrant up` to create it.
james@computer:/tmp/omvtest$ ls
ansible/  docker/  kubernetes/  omv.yaml  puppet/  shell/
james@computer:/tmp/omvtest$

You can see that the plugin installation worked perfectly, and that OMV created a few files and folders.

More usage:

You can hide that generated mess in a subfolder if you prefer:

james@computer:/tmp/omvtest$ mkdir /tmp/omvtest2
james@computer:/tmp/omvtest$ cd !$
cd /tmp/omvtest2
james@computer:/tmp/omvtest2$ omv init mess
Current machine states:

omv1                      not created (libvirt)

The Libvirt domain is not created. Run `vagrant up` to create it.
james@computer:/tmp/omvtest2$ ls
mess/  omv.yaml@
james@computer:/tmp/omvtest2$ ls -lAh
total 0
drwxrwxr-x. 7 james 160 Jul  7 23:26 mess/
lrwxrwxrwx. 1 james  13 Jul  7 23:26 omv.yaml -> mess/omv.yaml
drwxrwxr-x. 3 james  60 Jul  7 23:26 .vagrant/
james@computer:/tmp/omvtest2$ tree
.
├── mess
│   ├── ansible
│   │   └── modules
│   ├── docker
│   ├── kubernetes
│   │   ├── applications
│   │   └── templates
│   ├── omv.yaml
│   ├── puppet
│   │   └── modules
│   └── shell
└── omv.yaml -> mess/omv.yaml

10 directories, 2 files
james@computer:/tmp/omvtest2$

As you can see, all the mess is wrapped up in a single folder. This could even be named .omv if you prefer, and should all be committed inside of your project. Now that we’re installed, let’s get hacking!

Mainstream mode:

Mainstream mode further hides the ruby/Vagrantfile aspect of a Vagrant project and extends OMV so that you can define your entire project via the omv.yaml file, without the rest of the OMV project cluttering up your development tree. This makes it possible to have your project use OMV by only committing that one yaml file into the project repo.

The main difference is that you now control everything with the new omv command line tool. It’s essentially a smart wrapper around the vagrant command, so anywhere you’d normally run vagrant, you can run omv instead. It also saves typing four extra characters!

As it turns out (and by no accident) the omv tool works exactly like the vagrant tool. For example:

james@computer:/tmp/omvtest2$ omv status
Current machine states:

omv1                      not created (libvirt)

The Libvirt domain is not created. Run `vagrant up` to create it.
james@computer:/tmp/omvtest2$ omv up
Bringing machine 'omv1' up with 'libvirt' provider...
==> omv1: Box 'centos-7.1' could not be found. Attempting to find and install...
    omv1: Box Provider: libvirt
    omv1: Box Version: >= 0
==> omv1: Adding box 'centos-7.1' (v0) for provider: libvirt
    omv1: Downloading: https://dl.fedoraproject.org/pub/alt/purpleidea/vagrant/centos-7.1/centos-7.1.box
[snip]
james@computer:/tmp/omvtest2$ omv destroy
Unlocking shell provisioning for: omv1...
==> omv1: Domain is not created. Please run `vagrant up` first.
james@computer:/tmp/omvtest2$

BUT THAT’S NOT ALL…

The existing tools you know and love, like vlog, vsftp, vscreen, vcssh, vfwd, vansible, have all been modified to work with OMV mainstream mode as well. The same goes for common aliases such as vs, vp, vup, vdestroy, vrsync, and the useful (but occasionally dangerous) vrm-rf. Have a look at the above links on my blog and the source to see what these do. If it’s not clear enough, let me know!

All of these are now packaged up in the oh-my-vagrant COPR and are installed automatically into /etc/profile.d/oh-my-vagrant.sh for your convenience. Since they’re part of the OMV project, you’ll get updates when new functions or bug fixes are made.
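
New shells will pick these up automatically. If you want them in a shell that was already open before the install, just source the file and check that the helpers are defined:

$ . /etc/profile.d/oh-my-vagrant.sh
$ type -t vup vscreen vcssh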

The plumbing:

Mainstream mode is possible because of an idea rbarlow had. He gets full credit for the idea, in particular for teaching me about VAGRANT_CWD which is what makes it all work. I rejected his 6 line prototype, but loved the idea, and since he was busy making juice, I got bored one day and hacked on a full implementation.

james@computer:~/code/oh-my-vagrant$ git diff --stat 853073431d227cbb0ba56aaf4fedd721904de9a8 aa764ae79d69475b87f293c43af4f20fd7d1d000
 DOCUMENTATION.md    | 18 +++++++++++++++
 bin/omv.sh          | 50 +++++++++++++++++++++++++++++++++++++++++
 vagrant/Vagrantfile | 65 ++++++++++++++++++++++++++++++++++-------------------
 3 files changed, 110 insertions(+), 23 deletions(-)
james@computer:~/code/oh-my-vagrant$

It turned out to be a little longer than his six lines, but I artificially inflated the diff by including some quick doc patches. What does it actually do differently? It sets VAGRANT_CWD and VAGRANT_DOTFILE_PATH so that the vagrant command looks in a different directory for the Vagrantfile and the .vagrant/ directory. That way, all the plumbing is hidden away and shipped as part of the RPM.
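
To make the idea concrete, here is a deliberately simplified sketch of that wrapper logic. This is not the actual bin/omv.sh, and the install path shown is an assumption on my part; it only shows how the two environment variables redirect vagrant:

#!/bin/bash
# sketch only: point vagrant at the packaged Vagrantfile, keep the .vagrant/
# state in the current project directory, and pass everything else through.
export VAGRANT_CWD='/usr/share/oh-my-vagrant/vagrant/'	# assumed RPM install path
export VAGRANT_DOTFILE_PATH="${PWD}/.vagrant/"		# vagrant state lives here
exec vagrant "$@"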

Making the RPM:

The RPMs happened because stefw made me feel bad about not having them. He was right to do so. In any case, RPM packaging still scares me. I think repetitive work scares me even more. That’s why I automate as much as I can. So after a lot of brain loss, I finally made you an RPM so that you could easily install it. Here’s how it went:

I started by adding the magic so that my Makefile could build an RPM.

This made it so I can easily run make srpm to get a new RPM or SRPM.
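
I won’t paste the whole Makefile here, but the srpm target boils down to something like the following sketch. The archive name and spec file path are illustrative, not copied from the real Makefile:

# roughly what `make srpm` runs behind the scenes (illustrative sketch):
git archive --prefix=oh-my-vagrant-0.0.8/ -o rpmbuild/SOURCES/oh-my-vagrant-0.0.8.tar.gz 0.0.8
rpmbuild --define "_topdir $(pwd)/rpmbuild" -bs rpmbuild/SPECS/oh-my-vagrant.spec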

Then I added COPR integration, so a make copr automatically kicks off a new COPR build. This was the interesting part. You’ll need a Fedora account for this to work. Once you’re logged in, if you go to https://copr.fedoraproject.org/api you’ll be able to download a snippet to put in your ~/.config/copr file. Lastly, the work happens in copr-build.py where the python copr library does the heavy lifting.
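
For reference, the ~/.config/copr file you download is a small ini-style snippet that looks roughly like this (tokens redacted, and use your own username of course):

[copr-cli]
login = XXXXXXXXXXXXXXXXXXX
username = <your fedora username>
token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
copr_url = https://copr.fedoraproject.org

Here’s copr-build.py itself: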

#!/usr/bin/python

# README:
# for initial setup, browse to: https://copr.fedoraproject.org/api/
# and it will have a ~/.config/copr config that you can download.
# happy hacking!

import os
import sys
import copr

COPR = 'oh-my-vagrant'
if len(sys.argv) != 2:
    print("Usage: %s <srpm url>" % sys.argv[0])
    sys.exit(1)

url = sys.argv[1]

client = copr.CoprClient.create_from_file_config(os.path.expanduser("~/.config/copr"))

result = client.create_new_build(COPR, [url])
if result.output != "ok":
    print(result.error)
    sys.exit(1)
print(result.message)

A build looks like this:

james@computer:~/code/oh-my-vagrant$ git tag 0.0.8 # set a new tag
james@computer:~/code/oh-my-vagrant$ make copr 
Running templater...
Running git archive...
Running git archive submodules...
Running rpmbuild -bs...
Wrote: /home/james/code/oh-my-vagrant/rpmbuild/SRPMS/oh-my-vagrant-0.0.8-1.src.rpm
Running SRPMS sha256sum...
/home/james/code/oh-my-vagrant
Running SRPMS gpg...

You need a passphrase to unlock the secret key for
user: "James Shubin (Third PGP key.) <james@shubin.ca>"
4096-bit RSA key, ID 24090D66, created 2012-05-09

gpg: WARNING: The GNOME keyring manager hijacked the GnuPG agent.
gpg: WARNING: GnuPG will not work properly - please configure that tool to not interfere with the GnuPG system!
Running SRPMS upload...
sending incremental file list
SHA256SUMS
SHA256SUMS.asc
oh-my-vagrant-0.0.8-1.src.rpm

sent 8,583 bytes  received 2,184 bytes  4,306.80 bytes/sec
total size is 1,456,741  speedup is 135.30
Build was added to oh-my-vagrant.
james@computer:~/code/oh-my-vagrant$

A few minutes later, the COPR build page should look like this:

A screenshot of the Oh-My-Vagrant COPR build page for people who like to look at pretty pictures instead of just terminal output.

There was a bunch of additional fixing and polishing required to get this as seamless as possible for you. Have a look at the git commits and you’ll get an idea of all the work that was done, and you’ll probably even learn about some new features I haven’t blogged about yet. It was exhausting!

As a result of all this, you can download fresh builds easily. Visit the COPR page to see how things are cooking:

https://copr.fedoraproject.org/coprs/purpleidea/oh-my-vagrant/

I’ll try to keep pumping out releases regularly. If I lag behind, please holler at me. In any case, please let me know if you appreciate this work. Comment, tweet, or contact me!

Happy Hacking,

James

Screencasts of Puppet-Gluster + Vagrant

I decided to record some screencasts to show how easy it is to deploy GlusterFS using Puppet-Gluster+Vagrant. You can follow along even if you don’t know anything about Puppet or Vagrant. The hardest part of this process was producing the actual videos!

I recommend first reading my earlier articles if you’re planning on following along.

Without any further delay, here are the screencasts:

Part 1: Intro, and provisioning of the Puppet server.

Part 2: Initial building of the Gluster hosts.

Part 3: Finishing the Gluster builds.

Part 4: GlusterFS client mounting and tests.

Part 5: Mixed bag of code, infrastructure tours, examples and other details.

I hope you enjoyed these videos. Thank you to the Gluster.org community for hosting them. If you liked these videos, please consider sponsoring some of my work, or making a donation!

As a side note, the only screencast tool that worked was gtk-recordmydesktop; however, it deleted my second recording (which had to be re-recorded), and the audio stopped working one minute into my third recording (which then had to be recorded separately and mixed in). Amazingly, pitivi was the only tool that worked to properly mix them together!

Happy Hacking,

James

PS: Please note, you may not sell, edit, redistribute, perform, or host these videos elsewhere without my permission. I especially don’t want to see them on YouTube until Google lets me unlink my YouTube account! If you do want my permission to use these videos for something, contact me, and we can work something out. I’ll surely allow it if it’s not for something evil. If you’d rather have an interactive, live demo, let me know!

Automatically deploying GlusterFS with Puppet-Gluster + Vagrant!

Puppet-Gluster was always about automating the deployment of GlusterFS. Getting your own Puppet server and the associated infrastructure running was never included “out of the box”. Today, it is! (This is big news!)

I’ve used Vagrant to automatically build these GlusterFS clusters. I’ve tested this with Fedora 20, and vagrant-libvirt. This won’t work with Fedora 19 because of bz#876541. I recommend first reading my earlier articles for Vagrant and Fedora:

Once you’re comfortable with the material in the above articles, we can continue…

The short answer:

$ sudo service nfs start
$ git clone --recursive https://github.com/purpleidea/puppet-gluster.git
$ cd puppet-gluster/vagrant/gluster/
$ vagrant up puppet && sudo -v && vagrant up

Once those commands finish, you should have four running gluster hosts, and a puppet server. The gluster hosts will still be building. You can log in and tail -F log files, or watch -n 1 gluster status commands.

The whole process including the one-time downloads took about 30 minutes. If you’ve got faster internet than I do, I’m sure you can cut that down to under 20. Building the gluster hosts themselves probably takes about 15 minutes.

Enjoy your new Gluster cluster!

Screenshots!

I took a few screenshots to make this more visual for you. I like to have virt-manager open so that I can visually see what’s going on:

The annex{1..4} machines are building in parallel. The valleys happened when the machines were waiting for the vagrant DHCP server (dnsmasq).

Here we can see two puppet runs happening on annex1 and annex4.

Notice the two peaks on the puppet server which correspond to the valleys on annex{1,4}.

Here’s another example, with four hosts working in parallel:

Can you answer why the annex machines have two peaks? Why are the second peaks bigger?

Tell me more!

Okay, let’s start with the command sequence shown above.

$ sudo service nfs start

This needs to be run once if the NFS server on your host is not already running. It is used to provide folder synchronization for the Vagrant guest machines. I offer more information about the NFS synchronization in an earlier article.

$ git clone --recursive https://github.com/purpleidea/puppet-gluster.git

This will pull down the Puppet-Gluster source, and all the necessary submodules.

$ cd puppet-gluster/vagrant/gluster/

The Puppet-Gluster source contains a vagrant subdirectory. I’ve included a gluster subdirectory inside it, so that your machines get a sensible prefix. In the future, this might not be necessary.

$ vagrant up puppet && sudo -v && vagrant up

This is where the fun stuff happens. You’ll need a base box image to run machines with Vagrant. Luckily, I’ve already built one for you, and it is generously hosted by the Gluster community.

The very first time you run this Vagrant command, it will download this image automatically, install it and then continue the build process. This initial box download and installation only happens once. Subsequent Puppet-Gluster+Vagrant deploys and re-deploys won’t need to re-download the base image!

This command starts by building the puppet server. Vagrant might use sudo to prompt you for root access. This is used to manage your /etc/exports file. After the puppet server is finished building, we refresh the sudo cache to avoid bug #2680.

The last vagrant up command starts up the remaining gluster hosts in parallel, and kicks off the initial puppet runs. As I mentioned above, the gluster hosts will still be building. Puppet automatically waits for the cluster to “settle” and enter a steady state (no host/brick changes) before it creates the first volume. You can log in and tail -F log files, or watch -n 1 gluster status commands.

At this point, your cluster is running and you can do whatever you want with it! Puppet-Gluster+Vagrant is meant to be easy. If this wasn’t easy, or you can think of a way to make this better, let me know!

I want N hosts, not 4:

By default, this will build four (4) gluster hosts. I’ve spent a lot of time writing a fancy Vagrantfile, to give you speed and configurability. If you’d like to set a different number of hosts, you’ll first need to destroy the hosts that you’ve built already:

$ vagrant destroy annex{1..4}

You don’t have to rebuild the puppet server because this command is clever and automatically cleans the old host entries from it! This makes re-deploying even faster!

Then, run the vagrant up command with the --gluster-count=<N> argument. Example:

$ vagrant up --gluster-count=8

This is also configurable in the puppet-gluster.yaml file which will appear in your vagrant working directory. Remember that before you change any configuration option, you should destroy the affected hosts first, otherwise vagrant can get confused about the current machine state.

I want to test a different GlusterFS version:

By default, this will use the packages from:

https://download.gluster.org/pub/gluster/glusterfs/LATEST/

but if you’d like to pick a specific GlusterFS version you can do so with the --gluster-version=<version> argument. Example:

$ vagrant up --gluster-version='3.4.2-1.el6'

This is also stored, and configurable in the puppet-gluster.yaml file.

Does (repeating) this consume a lot of bandwidth?

Not really, no. There is an initial download of about 450MB for the base image. You’ll only ever need to download this again if I publish an updated version.

Each deployment will hit a public mirror to download the necessary puppet, GlusterFS and keepalived packages. The puppet server caused about 115MB of package downloads, and each gluster host needed about 58MB.

The great thing about this setup, is that it is integrated with vagrant-cachier, so that you don’t have to re-download packages. When building the gluster hosts in parallel (the default), each host will have to download the necessary packages into a separate directory. If you’d like each host to share the same cache folder, and save yourself 58MB or so per machine, you’ll need to build in series. You can do this with:

$ vagrant up --no-parallel

I chose speed over preserving bandwidth, so I do not recommend this option. If your ISP has a bandwidth cap, you should find one that isn’t crippled.

Subsequent re-builds won’t download any packages that haven’t already been downloaded.

What should I look for?

Once the vagrant commands are done running, you’ll want to look at something to see how your machines are doing. I like to log in and run different commands. I usually log in like this:

$ vcssh --screen root@annex{1..4}

I explain how to do this kind of magic in an earlier post. I then run some of the following commands:

# tail -F /var/log/messages

This lets me see what the machine is doing. I can see puppet runs and other useful information fly by.

# ip a s eth2

This lets me check that the VIP is working correctly, and which machine it’s on. This should usually be the first machine.

# ps auxww | grep again.[py]

This lets me see if puppet has scheduled another puppet run. My scripts automatically do this when they decide that there is still building (deployment) left to do. If you see a python again.py process, this means it is sleeping and will wake up to continue shortly.

# gluster peer status

This lets me see which hosts have connected, and what state they’re in.

# gluster volume info

This lets me see if Puppet-Gluster has built me a volume yet. By default it builds one distributed volume named puppet.

I want to configure this differently!

Okay, you’re more than welcome to! All of the scripts can be customized. If you want to configure the volume(s) differently, you’ll want to look in the:

puppet-gluster/vagrant/gluster/puppet/manifests/site.pp

file. The gluster::simple class is well-documented, and can be configured however you like. If you want to do more serious hacking, have a look at the Vagrantfile, the source, and the submodules. Of course the GlusterFS source is a great place to hack too!

The network block, domain, and other parameters are all configurable inside of the Vagrantfile. I’ve tried to use sensible defaults where possible. I’m using example.com as the default domain. Yes, this will work fine on your private network. DNS is currently configured with the /etc/hosts file. I wrote some magic into the Vagrantfile so that the slow /etc/hosts shell provisioning only has to happen once per machine! If you have a better, functioning, alternative, please let me know!

What’s the greatest number of machines this will scale to?

Good question! I’d like to know too! I know that GlusterFS probably can’t scale to 1000 hosts yet. Keepalived can’t support more than 256 priorities, therefore Puppet-Gluster can’t scale beyond that count until a suitable fix can be found. There are likely some earlier limits inside of Puppet-Gluster due to maximum command line length constraints that you’ll hit. If you find any, let me know and I’ll patch them. Patches now cost around seven karma points. Other than those limits, there’s the limit of my hardware. Since this is all being virtualized on a lowly X201, my tests are limited. An upgrade would be awesome!

Can I use this to test QA releases, point releases and new GlusterFS versions?

Absolutely! That’s the idea. There are two caveats:

  1. Automatically testing QA releases isn’t supported until the QA packages have a sensible home on download.gluster.org or similar. This will need a change to:
    https://github.com/purpleidea/puppet-gluster/blob/master/manifests/repo.pp#L18

    The gluster community is working on this, and as soon as a solution is found, I’ll patch Puppet-Gluster to support it. If you want to disable automatic repository management (in gluster::simple) and manage this yourself with the vagrant shell provisioner, you’re able to do so now.

  2. It’s possible that new releases introduce bugs, or change things in a backwards incompatible way that breaks Puppet-Gluster. If this happens, please let me know so that something can get patched. That’s what testing is for!

You could probably use this infrastructure to test GlusterFS builds automatically, but that’s a project that would need real funding.

Updating the puppet source:

If you make a change to the puppet source, but you don’t want to rebuild the puppet virtual machine, you don’t have to. All you have to do is run:

$ vagrant provision puppet

This will update the puppet server with any changes made to the source tree at:

puppet-gluster/vagrant/gluster/puppet/

Keep in mind that the modules subdirectory contains all the necessary puppet submodules, and a clone of puppet-gluster itself. You’ll first need to run make inside of:

puppet-gluster/vagrant/gluster/puppet/modules/

to refresh the local clone. To see what’s going on, or to customize the process, you can look at the Makefile.
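
Putting those two steps together, pushing local puppet changes to the running puppet server looks something like this (paths are relative to the top of the puppet-gluster checkout):

$ cd puppet-gluster/vagrant/gluster/puppet/modules/
$ make            # refresh the local clone of puppet-gluster and the submodules
$ cd ../../       # back to the vagrant working directory (vagrant/gluster/)
$ vagrant provision puppet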

Client machines:

At the moment, this doesn’t build separate machines for gluster client use. You can either mount your gluster pool from the puppet server, another gluster server, or if you add the right DNS entries to your /etc/hosts, you can mount the volume on your host machine. If you really want Vagrant to build client machines, you’ll have to persuade me.
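
For example, assuming the gluster client packages are installed on your host and the annex hosts resolve (via /etc/hosts), a manual mount of the default volume (named puppet) might look roughly like this:

# mkdir -p /mnt/gluster
# mount -t glusterfs annex1.example.com:/puppet /mnt/gluster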

What about firewalls?

Normally I use shorewall to manage the firewall. It integrates well with Puppet-Gluster, and does a great job. For an unexplained reason, it seems to be blocking my VRRP (keepalived) traffic, and I had to disable it. I think this is due to a libvirt networking bug, but I can’t prove it yet. If you can help debug this issue, please let me know! To reproduce it, enable the firewall and shorewall directives in:

puppet-gluster/vagrant/gluster/puppet/manifests/site.pp

and then get keepalived to work.

Re-provisioning a machine after a long wait throws an error:

You might be hitting: vagrant-cachier #74. If you do, there is an available workaround.

Keepalived shows “invalid passwd!” messages in the logs:

This is expected. This happens because we build a distributed password for use with keepalived. Before the cluster state has settled, the password will be different from host to host. Only when the cluster is coherent will the password be identical everywhere, which incidentally is the only time when the VIP matters.

How did you build that awesome base image?

I used virt-builder, some scripts, and a clever Makefile. I’ll be publishing this code shortly. Hasn’t this been enough to keep you busy for a while?

Are you still here?

If you’ve read this far, then good for you! I’m sorry that it has been a long read, but I figured I would try to answer everyone’s questions in advance. I’d like to hear your comments! I get very little feedback, and I’ve never gotten a single tip! If you find this useful, please let me know.

Until then,

Happy hacking,

James

Vagrant clustered SSH and ‘screen’

Some fun updates for vagrant hackers… I wanted to use the venerable clustered SSH (cssh) and screen with vagrant. I decided to expand on my vsftp script. First read:

Vagrant on Fedora with libvirt

and

Vagrant vsftp and other tricks

to get up to speed on the background information.

Vagrant screen:

First, a simple screen hack… I often use my vssh alias to quickly ssh into a machine, but I don’t want to have to waste time with sudo-ing to root and then running screen each time. Enter vscreen:

# vagrant screen
function vscreen {
	[ "$1" = '' ] || [ "$2" != '' ] && echo "Usage: vscreen <vm-name> - vagrant screen" 1>&2 && return 1
	wd=`pwd`		# save wd, then find the Vagrant project
	while [ "`pwd`" != '/' ] && [ ! -e "`pwd`/Vagrantfile" ] && [ ! -d "`pwd`/.vagrant/" ]; do
		#echo "pwd is `pwd`"
		cd ..
	done
	pwd=`pwd`
	cd $wd
	if [ ! -e "$pwd/Vagrantfile" ] || [ ! -d "$pwd/.vagrant/" ]; then
		echo 'Vagrant project not found!' 1>&2 && return 2
	fi

	d="$pwd/.ssh"
	f="$d/$1.config"
	h="$1"
	# hostname extraction from user@host pattern
	p=`expr index "$1" '@'`
	if [ $p -gt 0 ]; then
		let "l = ${#h} - $p"
		h=${h:$p:$l}
	fi

	# if mtime of $f is > than 5 minutes (5 * 60 seconds), re-generate...
	if [ `date -d "now - $(stat -c '%Y' "$f" 2> /dev/null) seconds" +%s` -gt 300 ]; then
		mkdir -p "$d"
		# we cache the lookup because this command is slow...
		vagrant ssh-config "$h" > "$f" || rm "$f"
	fi
	[ -e "$f" ] && ssh -t -F "$f" "$1" 'screen -xRR'
}

I usually run it this way:

$ vscreen root@machine

which logs in to machine as root, and gets me (back) into screen. This is almost identical to the vsftp script which I explained in an earlier blog post.

Vagrant cssh:

First you’ll need to install cssh. On my Fedora machine it’s as easy as:

# yum install -y clusterssh

I’ve been hacking a lot on Puppet-Gluster lately, and occasionally multi-machine hacking demands multi-machine key punching. Enter vcssh:


# vagrant cssh
function vcssh {
	[ "$1" = '' ] && echo "Usage: vcssh [options] [user@]<vm1>[ [user@]vm2[ [user@]vmN...]] - vagrant cssh" 1>&2 && return 1
	wd=`pwd`		# save wd, then find the Vagrant project
	while [ "`pwd`" != '/' ] && [ ! -e "`pwd`/Vagrantfile" ] && [ ! -d "`pwd`/.vagrant/" ]; do
		#echo "pwd is `pwd`"
		cd ..
	done
	pwd=`pwd`
	cd $wd
	if [ ! -e "$pwd/Vagrantfile" ] || [ ! -d "$pwd/.vagrant/" ]; then
		echo 'Vagrant project not found!' 1>&2 && return 2
	fi

	d="$pwd/.ssh"
	cssh="$d/cssh"
	cmd=''
	cat='cat '
	screen=''
	options=''

	multi='f'
	special=''
	for i in "$@"; do	# loop through the list of hosts and arguments!
		#echo $i

		if [ "$special" = 'debug' ]; then	# optional arg value...
			special=''
			if [ "$1" -ge 0 -o "$1" -le 4 ]; then
				cmd="$cmd $i"
				continue
			fi
		fi

		if [ "$multi" = 'y' ]; then	# get the value of the argument
			multi='n'
			cmd="$cmd '$i'"
			continue
		fi

		if [ "${i:0:1}" = '-' ]; then	# does argument start with: - ?

			# build a --screen option
			if [ "$i" = '--screen' ]; then
				screen=' -o RequestTTY=yes'
				cmd="$cmd --action 'screen -xRR'"
				continue
			fi

			if [ "$i" = '--debug' ]; then
				special='debug'
				cmd="$cmd $i"
				continue
			fi

			if [ "$i" = '--options' ]; then
				options=" $i"
				continue
			fi

			# NOTE: commented-out options are probably not useful...
			# match for key => value argument pairs
			if [ "$i" = '--action' -o "$i" = '-a' ] || \
			[ "$i" = '--autoclose' -o "$i" = '-A' ] || \
			#[ "$i" = '--cluster-file' -o "$i" = '-c' ] || \
			#[ "$i" = '--config-file' -o "$i" = '-C' ] || \
			#[ "$i" = '--evaluate' -o "$i" = '-e' ] || \
			[ "$i" = '--font' -o "$i" = '-f' ] || \
			#[ "$i" = '--master' -o "$i" = '-M' ] || \
			#[ "$i" = '--port' -o "$i" = '-p' ] || \
			#[ "$i" = '--tag-file' -o "$i" = '-c' ] || \
			[ "$i" = '--term-args' -o "$i" = '-t' ] || \
			[ "$i" = '--title' -o "$i" = '-T' ] || \
			[ "$i" = '--username' -o "$i" = '-l' ] ; then
				multi='y'	# loop around to get second part
				cmd="$cmd $i"
				continue
			else			# match single argument flags...
				cmd="$cmd $i"
				continue
			fi
		fi

		f="$d/$i.config"
		h="$i"
		# hostname extraction from user@host pattern
		p=`expr index "$i" '@'`
		if [ $p -gt 0 ]; then
			let "l = ${#h} - $p"
			h=${h:$p:$l}
		fi

		# if mtime of $f is > than 5 minutes (5 * 60 seconds), re-generate...
		if [ `date -d "now - $(stat -c '%Y' "$f" 2> /dev/null) seconds" +%s` -gt 300 ]; then
			mkdir -p "$d"
			# we cache the lookup because this command is slow...
			vagrant ssh-config "$h" > "$f" || rm "$f"
		fi

		if [ -e "$f" ]; then
			cmd="$cmd $i"
			cat="$cat $f"	# append config file to list
		fi
	done

	cat="$cat > $cssh"
	#echo $cat
	eval "$cat"			# generate combined config file

	#echo $cmd && return 1
	#[ -e "$cssh" ] && cssh --options "-F ${cssh}$options" $cmd
	# running: bash -c glues together --action 'foo --bar' type commands...
	[ -e "$cssh" ] && bash -c "cssh --options '-F ${cssh}${screen}$options' $cmd"
}

This can be called like this:

$ vcssh annex{1..4} -l root

or like this:

$ vcssh root@hostname foo user@bar james@machine --action 'pwd'

which, as you can see, passes cssh arguments through! Can you see any other special surprises in the code? Well, you can run vcssh like this too:

$ vcssh root@foo james@bar --screen

which will perform exactly as vscreen did above, but in cssh!

You’ll see that the vagrant ssh-config lookups are cached, so this will be speedy when it’s running hot, but expect a few seconds delay when you first run it. If you want a longer cache timeout, it’s easy to change yourself in the function.
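
For example, to cache the vagrant ssh-config lookups for an hour instead of five minutes, bump the threshold in the mtime test from 300 to 3600 seconds:

	if [ `date -d "now - $(stat -c '%Y' "$f" 2> /dev/null) seconds" +%s` -gt 3600 ]; then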

I’ve uploaded the code here, so that you don’t have to copy+paste it from my blog!

Happy hacking,

James