Remote execution in mgmt

Bootstrapping a cluster from your laptop, or managing machines without needing to first set up a separate config management infrastructure are both very reasonable and fundamental asks. I was particularly inspired by Ansible’s agent-less remote execution model, but never wanted to build a centralized orchestrator. I soon realized that I could have my ice cream and eat it too.

Prior knowledge

If you haven’t read the earlier articles about mgmt, then I recommend you start with those, and then come back here. The first and fourth are essential if you’re going to make sense of this article.

Limitations of existing orchestrators

Current orchestrators have a few limitations.

  1. They can be a single point of failure
  2. They can have scaling issues
  3. They can’t respond instantaneously to node state changes (they poll)
  4. They can’t usually redistribute remote node run-time data between nodes

Despite these limitations, orchestration is still very useful because of the facilities it provides. Since those facilities are essential in a next generation design, I set about integrating them, but with a novel twist.

Implementation, Usage and Design

Mgmt is written in golang, and that decision was no accident. One benefit is that it simplifies our remote execution model.

To use this mode you run mgmt with the --remote flag. Each use of the --remote argument points to a different remote graph to execute. Eventually this will be integrated with the DSL, but this plumbing is exposed for early adopters to play around with.

Startup (part one)

Each invocation of --remote causes mgmt to remotely connect over SSH to the target hosts. This happens in parallel, and runs up to --cconns simultaneous connections.

A temporary directory is made on the remote host, and the mgmt binary and graph are copied across the wire. Since mgmt compiles down to a single statically linked binary, transferring the software is simple. The binary is cached remotely to speed up future runs, unless you pass the --no-caching option.
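
As a rough illustration of that copy step, here’s a hedged sketch in Go, assuming the github.com/pkg/sftp library and an already-connected *ssh.Client. This is not mgmt’s actual code; it only shows the caching idea:

// A sketch only: skip the (large) binary upload if a file of the same
// size is already cached remotely. The real code does more (hashing,
// honouring --no-caching, cleanup).
package remote

import (
    "io"
    "log"
    "os"

    "github.com/pkg/sftp"
    "golang.org/x/crypto/ssh"
)

func copyBinary(conn *ssh.Client, localPath, remotePath string) error {
    client, err := sftp.NewClient(conn) // an SFTP session over the SSH connection
    if err != nil {
        return err
    }
    defer client.Close()

    src, err := os.Open(localPath)
    if err != nil {
        return err
    }
    defer src.Close()
    info, err := src.Stat()
    if err != nil {
        return err
    }
    // Cache check: if a remote file of the same size exists, skip the copy.
    if fi, err := client.Stat(remotePath); err == nil && fi.Size() == info.Size() {
        log.Println("Remote: binary is cached, skipping copy")
        return nil
    }
    dst, err := client.Create(remotePath)
    if err != nil {
        return err
    }
    defer dst.Close()
    if _, err := io.Copy(dst, src); err != nil { // the "please be patient" part
        return err
    }
    return client.Chmod(remotePath, 0755) // it needs to be executable
}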

A TCP connection is tunnelled back over SSH to the originating host’s etcd server, which is embedded and running inside of the initiating mgmt binary.
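
Here’s a hedged sketch of that reverse tunnel using golang.org/x/crypto/ssh; the addresses and auth setup are illustrative, and mgmt’s real plumbing differs:

// A sketch only: ask the remote sshd to listen on a port, and proxy
// every connection it accepts back to the local embedded etcd server.
package main

import (
    "io"
    "log"
    "net"

    "golang.org/x/crypto/ssh"
)

func tunnelEtcd(client *ssh.Client) error {
    // Remote port forwarding: the remote host's 127.0.0.1:2379 now
    // points back at us.
    remote, err := client.Listen("tcp", "127.0.0.1:2379")
    if err != nil {
        return err
    }
    defer remote.Close()
    for {
        conn, err := remote.Accept() // something on the remote host connected
        if err != nil {
            return err
        }
        go func(c net.Conn) {
            defer c.Close()
            local, err := net.Dial("tcp", "127.0.0.1:2379") // our embedded etcd
            if err != nil {
                log.Println(err)
                return
            }
            defer local.Close()
            go io.Copy(local, c) // remote -> local
            io.Copy(c, local)    // local -> remote
        }(conn)
    }
}

func main() {
    config := &ssh.ClientConfig{
        User:            "root",
        Auth:            []ssh.AuthMethod{ /* keys or a password; see the Auth section below */ },
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only; verify host keys for real
    }
    client, err := ssh.Dial("tcp", "192.168.121.201:22", config)
    if err != nil {
        log.Fatal(err)
    }
    log.Fatal(tunnelEtcd(client))
}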

Execution (part two)

The remote mgmt binary is now run! It wires itself up through the SSH tunnel so that its internal etcd client can connect to the etcd server on the initiating host. This is particularly powerful because remote hosts can now participate in resource exchanges as if they were part of a regular etcd backed mgmt cluster! They don’t connect directly to each other, but they can share runtime data, and only need an incoming SSH port open!
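
From the remote binary’s point of view there is nothing special going on: its etcd client simply dials what looks like a local endpoint. A tiny hedged sketch of that (the import path is the modern etcd one, not what mgmt used at the time):

// A sketch only: 127.0.0.1:2379 on the remote host is really the
// SSH-forwarded port back to the initiator's embedded etcd server.
package main

import (
    "fmt"
    "log"
    "time"

    clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
    cli, err := clientv3.New(clientv3.Config{
        Endpoints:   []string{"http://127.0.0.1:2379"}, // the tunnel endpoint
        DialTimeout: 5 * time.Second,
    })
    if err != nil {
        log.Fatal(err)
    }
    defer cli.Close()
    fmt.Println("connected through the tunnel")
}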

Closure (part three)

At this point mgmt can either keep running continuously, or it can close the connections and shut down.

In the former case, you can either remain attached over SSH, or you can disconnect from the child hosts and let this new cluster take on a new life and operate independently of the initiator.

In the latter case, shutdown happens either at the operator’s request (via a ^C on the initiator) or when the cluster has simultaneously converged for a number of seconds.

This second possibility occurs when you run mgmt with the familiar --converged-timeout parameter. It is indeed clever enough to also work in this distributed fashion.
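
The shape of that check is easy to illustrate: a timer that resets on every event, and only fires after the requested period of quiet. This is a simplified sketch of the idea, not mgmt’s exact algorithm:

// A sketch only: events from anywhere in the cluster reset the
// countdown; silence for the full timeout means we've converged.
package main

import (
    "fmt"
    "time"
)

func waitForConvergence(events <-chan struct{}, timeout time.Duration) {
    timer := time.NewTimer(timeout)
    defer timer.Stop()
    for {
        select {
        case <-events:
            if !timer.Stop() { // drain if it already fired
                <-timer.C
            }
            timer.Reset(timeout)
        case <-timer.C:
            fmt.Println("converged: shutting down")
            return
        }
    }
}

func main() {
    events := make(chan struct{})
    go func() { // simulate three events, one second apart
        for i := 0; i < 3; i++ {
            events <- struct{}{}
            time.Sleep(time.Second)
        }
    }()
    waitForConvergence(events, 5*time.Second)
}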

Diagram

I’ve used my poor LibreOffice Draw skills to make a diagram. Hopefully this helps out my visual readers.

[diagram: remote-execution]

If you can improve this diagram, please let me know!

Example

I find that using one or more vagrant virtual machines for the remote endpoints is the best way to test this out. In my case I use Oh-My-Vagrant to set up these machines, but the method you use is entirely up to you! Here’s a sample remote execution. Please note that I have omitted a number of lines for brevity, and added emphasis to the more interesting ones.

james@hostname:~/code/mgmt$ ./mgmt run --remote examples/remote2a.yaml --remote examples/remote2b.yaml --tmp-prefix 
17:58:22 main.go:76: This is: mgmt, version: 0.0.5-3-g4b8ad3a
17:58:23 remote.go:596: Remote: Connect...
17:58:23 remote.go:607: Remote: Sftp...
17:58:23 remote.go:164: Remote: Self executable is: /home/james/code/gopath/src/github.com/purpleidea/mgmt/mgmt
17:58:23 remote.go:221: Remote: Remotely created: /tmp/mgmt-412078160/remote
17:58:23 remote.go:226: Remote: Remote path is: /tmp/mgmt-412078160/remote/mgmt
17:58:23 remote.go:221: Remote: Remotely created: /tmp/mgmt-412078160/remote
17:58:23 remote.go:226: Remote: Remote path is: /tmp/mgmt-412078160/remote/mgmt
17:58:23 remote.go:235: Remote: Copying binary, please be patient...
17:58:23 remote.go:235: Remote: Copying binary, please be patient...
17:58:24 remote.go:256: Remote: Copying graph definition...
17:58:24 remote.go:618: Remote: Tunnelling...
17:58:24 remote.go:630: Remote: Exec...
17:58:24 remote.go:510: Remote: Running: /tmp/mgmt-412078160/remote/mgmt run --hostname '192.168.121.201' --no-server --seeds 'http://127.0.0.1:2379' --file '/tmp/mgmt-412078160/remote/remote2a.yaml' --depth 1
17:58:24 etcd.go:2088: Etcd: Watch: Path: /_mgmt/exported/
17:58:24 main.go:255: Main: Waiting...
17:58:24 remote.go:256: Remote: Copying graph definition...
17:58:24 remote.go:618: Remote: Tunnelling...
17:58:24 remote.go:630: Remote: Exec...
17:58:24 remote.go:510: Remote: Running: /tmp/mgmt-412078160/remote/mgmt run --hostname '192.168.121.202' --no-server --seeds 'http://127.0.0.1:2379' --file '/tmp/mgmt-412078160/remote/remote2b.yaml' --depth 1
17:58:24 etcd.go:2088: Etcd: Watch: Path: /_mgmt/exported/
17:58:24 main.go:291: Config: Parse failure
17:58:24 main.go:255: Main: Waiting...
^C17:58:48 main.go:62: Interrupted by ^C
17:58:48 main.go:397: Destroy...
17:58:48 remote.go:532: Remote: Output...
|    17:58:23 main.go:76: This is: mgmt, version: 0.0.5-3-g4b8ad3a
|    17:58:47 main.go:419: Goodbye!
17:58:48 remote.go:636: Remote: Done!
17:58:48 remote.go:532: Remote: Output...
|    17:58:24 main.go:76: This is: mgmt, version: 0.0.5-3-g4b8ad3a
|    17:58:48 main.go:419: Goodbye!
17:58:48 remote.go:636: Remote: Done!
17:58:48 main.go:419: Goodbye!

You can see how we kick off the remote executions, and how they are wired back through the tunnel. In this particular case we terminated the runs with a ^C.

The example configurations I used are available here and here. If you had a terminal open on the first remote machine, after about a second you would have seen:

[root@omv1 ~]# ls -d /tmp/file*  /tmp/mgmt*
/tmp/file1a  /tmp/file2a  /tmp/file2b  /tmp/mgmt-412078160
[root@omv1 ~]# cat /tmp/file*
i am file1a
i am file2a, exported from host a
i am file2b, exported from host b

You can see the remote execution artifacts, and that there was clearly data exchange. You can repeat this example with --converged-timeout=5 to automatically terminate after five seconds of cluster-wide inactivity.

Live remote hacking

Since mgmt is event based, and graph structure configurations manifest themselves as event streams, you can actually edit the input configuration on the initiating machine, and as soon as the file is saved, the change propagates instantly to the remote hosts and the graph differential is applied.
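
As an approximation of the detection half of this, here’s a hedged sketch using the fsnotify library, with a hypothetical copyGraph helper standing in for the re-copy over the SFTP session (mgmt’s actual implementation differs):

// A sketch only: watch the input graph on the initiator, and push it
// out again whenever it is saved.
package main

import (
    "log"

    "github.com/fsnotify/fsnotify"
)

func watchGraph(path string, copyGraph func(string) error) error {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        return err
    }
    defer watcher.Close()
    if err := watcher.Add(path); err != nil {
        return err
    }
    for {
        select {
        case ev := <-watcher.Events:
            if ev.Op&fsnotify.Write != 0 { // the file was saved
                if err := copyGraph(path); err != nil {
                    log.Println(err)
                }
            }
        case err := <-watcher.Errors:
            return err
        }
    }
}

func main() {
    log.Fatal(watchGraph("examples/remote2b.yaml", func(p string) error {
        log.Printf("Remote: Copied over new graph definition: %s", p)
        return nil // stub: the real work happens over the SFTP session
    }))
}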

For this particular example, since we export and collect resources through the tunnelled SSH connections, editing the exported file will also cause both hosts to update that file on disk!

You’ll see this occurring with this message in the logs:

18:00:44 remote.go:973: Remote: Copied over new graph definition: examples/remote2b.yaml

While you might not necessarily want to use this functionality on a production machine, it will definitely make your interactive hacking sessions more useful, in particular because you never need to re-run parts of the graph which have already converged!

Auth

In case you’re wondering, mgmt can look in your ~/.ssh/ for keys to use for authentication, or it can prompt you interactively. It can also read a plain text password from the connection string, but this isn’t a recommended security practice.
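
With Go’s SSH library, those fallbacks look roughly like this; the key path, prompt, and ordering here are illustrative rather than mgmt’s exact behaviour:

// A sketch only: try a key from ~/.ssh/ first, then fall back to an
// interactive password prompt.
package main

import (
    "fmt"
    "log"
    "os"
    "path/filepath"

    "golang.org/x/crypto/ssh"
    "golang.org/x/term"
)

func authMethods() []ssh.AuthMethod {
    var methods []ssh.AuthMethod
    home, _ := os.UserHomeDir()
    if key, err := os.ReadFile(filepath.Join(home, ".ssh", "id_rsa")); err == nil {
        if signer, err := ssh.ParsePrivateKey(key); err == nil {
            methods = append(methods, ssh.PublicKeys(signer))
        }
    }
    methods = append(methods, ssh.PasswordCallback(func() (string, error) {
        fmt.Print("Password: ")
        b, err := term.ReadPassword(int(os.Stdin.Fd()))
        fmt.Println()
        return string(b), err
    }))
    return methods
}

func main() {
    config := &ssh.ClientConfig{
        User:            "root",
        Auth:            authMethods(),
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
    }
    client, err := ssh.Dial("tcp", "192.168.121.201:22", config)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()
    log.Println("authenticated")
}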

Hierarchical remote execution

Even though we recommend running mgmt in a normal clustered mode instead of over SSH, we didn’t want to limit the number of hosts that can be configured using remote execution. For this reason it would be architecturally simple to add support for what we’ve decided to call “hierarchical remote execution”.

In this mode, the primary initiator would first connect to one or more secondary nodes, which would then stage a second series of remote execution runs, resulting in an order of depth equal to two or more. This fan-out approach can be used to distribute the number of outgoing connections across more intermediate machines, or as a method to conserve remote execution bandwidth on the primary link into your datacenter, by having the secondary machines run most of the remote executions.

[diagram: remote-execution2]

This particular extension hasn’t been built, although some of the plumbing has been laid. If you’d like to contribute this feature to the upstream project, please join us in #mgmtconfig on Freenode and let us (I’m @purpleidea) know!

Docs

There is some generated documentation for the mgmt remote package available. There is also the beginning of some additional documentation in the markdown docs. You can help contribute to either of these by sending us a patch!

Novel resources

Our event based architecture can enable some previously improbable kinds of resources. In particular, I think it would be quite beautiful if someone built a provisioning resource. The Watch method of the resource API normally serves to notify us of events, but since it is a main loop that blocks in a select call, it could also be used to run a small server that hosts a kickstart file and associated TFTP images. If you like this idea, please help us build it!
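
To make the idea concrete, here’s a sketch with a deliberately simplified Watch signature (mgmt’s real resource API differs): the loop blocks in a select as usual, but also keeps a small HTTP server alive that hands out the kickstart directory, with TFTP left out for brevity:

// A sketch only: a provisioning resource whose Watch main loop doubles
// as the lifetime of an embedded kickstart file server.
package main

import (
    "log"
    "net/http"
)

type ProvisionRes struct {
    kickstartDir string // directory holding ks.cfg and friends
}

func (r *ProvisionRes) Watch(events, exit chan struct{}) error {
    srv := &http.Server{
        Addr:    ":8666", // hypothetical provisioning port
        Handler: http.FileServer(http.Dir(r.kickstartDir)),
    }
    errCh := make(chan error, 1)
    go func() { errCh <- srv.ListenAndServe() }()
    defer srv.Close() // the server lives exactly as long as Watch does

    for {
        select {
        case <-events: // a normal resource event arrived
            // ...notify the engine, trigger CheckApply, etc.
        case err := <-errCh:
            return err // the embedded server died
        case <-exit:
            return nil // the engine asked us to shut down
        }
    }
}

func main() {
    r := &ProvisionRes{kickstartDir: "/var/lib/provision"}
    log.Fatal(r.Watch(make(chan struct{}), make(chan struct{})))
}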

Conclusion

I hope you enjoyed this article and found this remote execution methodology as novel as we do. In particular I hope that I’ve demonstrated that configuration software doesn’t have to be constrained behind a static orchestration topology.

Happy Hacking,

James

Upcoming speaking

I’ve got a few upcoming speaking engagements. If you’ll be attending one of these events, come see me or any of the other excellent speakers!

Please remember to check the official schedules in case there are any changes!

I’ll be speaking at the Brussels CentOS Dojo:

Automated Infrastructure Testing with Oh-My-Vagrant
…and the CentOS CI

Time/date unconfirmed: I’ll be showing some CI tricks, and showing you how the CentOS CI is the perfect CI for multi-machine test environments.

~

I’ll be speaking at FOSDEM:

TL;DR on legal strategy for commercial ventures
An abridged review of legal strategy and licensing issues for commercial ventures and enterprises

30/Jan/2016 (Saturday) @ 18:00: I’ll be giving my first ever “legal” talk!

~

Oh, My! Oh-My-Vagrant (with live demos!)
Oh-My-Vagrant development environments for hackers

31/Jan/2016 (Sunday) @ 16:30: A short Oh-My-Vagrant talk with a few live demos.

~

I’ll be speaking at Config Management Camp:

Next Generation Config Mgmt
A prototype for a next generation config management tool, and the specific problems this design solves.

02/Feb/2016 (Tuesday) @ 11:00: I’ll be introducing my magnum opus for 2016. Particularly excited about this presentation.

~

I’ll be speaking at DevConf.cz:

Next Generation Config Mgmt
A prototype for a next generation config management tool, and the specific problems this design solves.

06/Feb/2016 (Saturday) @ 14:00: I’ll be repeating the same presentation from Config Management Camp, but with renewed enthusiasm!

~

If you want to hack on something or talk about tech, I’ll be at the above mentioned events, so feel free to ping me or contact me.

Cheers,

James

 

Trying out Ceph with Oh-My-Vagrant

Daniel P. Berrangé wrote about trying out a single node Ceph cluster. I decided to take his article and turn it into an Oh-My-Vagrant omv.yaml file. It took me about two minutes to do so, and two hours to debug a problem caused by something I had broken on my laptop.

If you’d like to replicate his article in less than 5 minutes, pull down the omv.yaml file that I’ve just published and run omv up. Here’s the full terminal output of my session:

james@computer:~/code/oh-my-vagrant/examples$ git pull
Already up-to-date.
james@computer:~/code/oh-my-vagrant/examples$ cdtmpmkdir 
james@computer:/tmp/tmp.DhD$ cp $OLDPWD/ceph-deploy.yaml omv.yaml
james@computer:/tmp/tmp.DhD$ time omv up
Bringing machine 'omv1' up with 'libvirt' provider...
==> omv1: Creating image (snapshot of base box volume).
==> omv1: Creating domain with the following settings...
==> omv1:  -- Name:              omv_omv1
==> omv1:  -- Domain type:       kvm
==> omv1:  -- Cpus:              1
==> omv1:  -- Memory:            512M
==> omv1:  -- Base box:          fedora-23
==> omv1:  -- Storage pool:      default
==> omv1:  -- Image:             /var/lib/libvirt/images/omv_omv1.img
==> omv1:  -- Volume Cache:      default
==> omv1:  -- Kernel:            
==> omv1:  -- Initrd:            
==> omv1:  -- Graphics Type:     spice
==> omv1:  -- Graphics Port:     5900
==> omv1:  -- Graphics IP:       127.0.0.1
==> omv1:  -- Graphics Password: Not defined
==> omv1:  -- Video Type:        qxl
==> omv1:  -- Video VRAM:        9216
==> omv1:  -- Keymap:            en-us
==> omv1:  -- Command line : 
==> omv1: Creating shared folders metadata...
==> omv1: Starting domain.
==> omv1: Waiting for domain to get an IP address...
==> omv1: Waiting for SSH to become available...
==> omv1: Setting hostname...
==> omv1: Configuring and enabling network interfaces...
==> omv1: Rsyncing folder: /tmp/tmp.DhD/ => /vagrant
==> omv1: Updating /etc/hosts file on active guest machines...
==> omv1: Running provisioner: shell...
    omv1: Running: inline script
==> omv1: Changing password for user root.
==> omv1: passwd: all authentication tokens updated successfully.
==> omv1: Running provisioner: shell...
    omv1: Running: inline script
==> omv1: TARGET SOURCE    FSTYPE OPTIONS
==> omv1: /      /dev/vda3 xfs    rw,relatime,seclabel,attr2,inode64,noquota
==> omv1: TARGET                                SOURCE     FSTYPE     OPTIONS
==> omv1: /                                     /dev/vda3  xfs        rw,relatime,seclabel,attr2,inode64,noquota
==> omv1: ├─/sys                                sysfs      sysfs      rw,nosuid,nodev,noexec,relatime,seclabel
==> omv1: │ ├─/sys/kernel/security              securityfs securityfs rw,nosuid,nodev,noexec,relatime
==> omv1: │ ├─/sys/fs/cgroup                    tmpfs      tmpfs      ro,nosuid,nodev,noexec,seclabel,mode=755
==> omv1: │ │ ├─/sys/fs/cgroup/systemd          cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
==> omv1: │ │ ├─/sys/fs/cgroup/devices          cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,devices
==> omv1: │ │ ├─/sys/fs/cgroup/cpu,cpuacct      cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
==> omv1: │ │ ├─/sys/fs/cgroup/memory           cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,memory
==> omv1: │ │ ├─/sys/fs/cgroup/perf_event       cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,perf_event
==> omv1: │ │ ├─/sys/fs/cgroup/blkio            cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,blkio
==> omv1: │ │ ├─/sys/fs/cgroup/hugetlb          cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,hugetlb
==> omv1: │ │ ├─/sys/fs/cgroup/freezer          cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,freezer
==> omv1: │ │ ├─/sys/fs/cgroup/net_cls,net_prio cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
==> omv1: │ │ └─/sys/fs/cgroup/cpuset           cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,cpuset
==> omv1: │ ├─/sys/fs/pstore                    pstore     pstore     rw,nosuid,nodev,noexec,relatime,seclabel
==> omv1: │ ├─/sys/fs/selinux                   selinuxfs  selinuxfs  rw,relatime
==> omv1: │ ├─/sys/kernel/debug                 debugfs    debugfs    rw,relatime,seclabel
==> omv1: │ └─/sys/kernel/config                configfs   configfs   rw,relatime
==> omv1: ├─/proc                               proc       proc       rw,nosuid,nodev,noexec,relatime
==> omv1: │ ├─/proc/sys/fs/binfmt_misc          systemd-1  autofs     rw,relatime,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct
==> omv1: │ └─/proc/fs/nfsd                     nfsd       nfsd       rw,relatime
==> omv1: ├─/dev                                devtmpfs   devtmpfs   rw,nosuid,seclabel,size=242328k,nr_inodes=60582,mode=755
==> omv1: │ ├─/dev/shm                          tmpfs      tmpfs      rw,nosuid,nodev,seclabel
==> omv1: │ ├─/dev/pts                          devpts     devpts     rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000
==> omv1: │ ├─/dev/hugepages                    hugetlbfs  hugetlbfs  rw,relatime,seclabel
==> omv1: │ └─/dev/mqueue                       mqueue     mqueue     rw,relatime,seclabel
==> omv1: ├─/run                                tmpfs      tmpfs      rw,nosuid,nodev,seclabel,mode=755
==> omv1: │ └─/run/user/1000                    tmpfs      tmpfs      rw,nosuid,nodev,relatime,seclabel,size=50112k,mode=700,uid=1000,gid=1000
==> omv1: ├─/tmp                                tmpfs      tmpfs      rw,seclabel
==> omv1: ├─/boot                               /dev/vda1  ext4       rw,relatime,seclabel,data=ordered
==> omv1: └─/var/lib/nfs/rpc_pipefs             sunrpc     rpc_pipefs rw,relatime
==> omv1: meta-data=/dev/vda3              isize=512    agcount=4, agsize=321792 blks
==> omv1:          =                       sectsz=512   attr=2, projid32bit=1
==> omv1:          =                       crc=1        finobt=1
==> omv1: data     =                       bsize=4096   blocks=1287168, imaxpct=25
==> omv1:          =                       sunit=0      swidth=0 blks
==> omv1: naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
==> omv1: log      =internal               bsize=4096   blocks=2560, version=2
==> omv1:          =                       sectsz=512   sunit=0 blks, lazy-count=1
==> omv1: realtime =none                   extsz=4096   blocks=0, rtextents=0
==> omv1: data blocks changed from 1287168 to 10199744
==> omv1: Running provisioner: shell...
    omv1: Running: inline script
==> omv1: this is ceph-deploy on fedora-23
==> omv1: Last metadata expiration check performed 0:00:12 ago on Mon Dec 28 14:34:32 2015.
==> omv1: Dependencies resolved.
==> omv1: ================================================================================
==> omv1:  Package               Arch          Version                Repository     Size
==> omv1: ================================================================================
==> omv1: Installing:
==> omv1:  ceph-deploy           noarch        1.5.25-2.fc23          fedora        160 k
==> omv1:  python-execnet        noarch        1.3.0-3.fc23           fedora        275 k
==> omv1:  python-remoto         noarch        0.0.25-2.fc23          fedora         26 k
==> omv1: 
==> omv1: Transaction Summary
==> omv1: ================================================================================
==> omv1: Install  3 Packages
==> omv1: Total download size: 461 k
==> omv1: Installed size: 1.5 M
==> omv1: Downloading Packages:
==> omv1: --------------------------------------------------------------------------------
==> omv1: Total                                           214 kB/s | 461 kB     00:02     
==> omv1: Running transaction check
==> omv1: Transaction check succeeded.
==> omv1: Running transaction test
==> omv1: Transaction test succeeded.
==> omv1: Running transaction
==> omv1:   Installing  : python-execnet-1.3.0-3.fc23.noarch                          1/3
==> omv1:  
==> omv1:   Installing  : python-remoto-0.0.25-2.fc23.noarch                          2/3
==> omv1:  
==> omv1:   Installing  : ceph-deploy-1.5.25-2.fc23.noarch                            3/3
==> omv1:  
==> omv1:   Verifying   : ceph-deploy-1.5.25-2.fc23.noarch                            1/3
==> omv1:  
==> omv1:   Verifying   : python-remoto-0.0.25-2.fc23.noarch                          2/3
==> omv1:  
==> omv1:   Verifying   : python-execnet-1.3.0-3.fc23.noarch                          3/3
==> omv1:  
==> omv1: 
==> omv1: Installed:
==> omv1:   ceph-deploy.noarch 1.5.25-2.fc23       python-execnet.noarch 1.3.0-3.fc23    
==> omv1:   python-remoto.noarch 0.0.25-2.fc23    
==> omv1: Complete!
==> omv1: [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
==> omv1: [ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy new omv1.example.com
==> omv1: [ceph_deploy.new][DEBUG ] Creating new cluster named ceph
==> omv1: [ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
==> omv1: [omv1.example.com][DEBUG ] connected to host: omv1.example.com 
==> omv1: [omv1.example.com][DEBUG ] detect platform information from remote host
==> omv1: [omv1.example.com][DEBUG ] detect machine type
==> omv1: [omv1.example.com][DEBUG ] find the location of an executable
==> omv1: [omv1.example.com][INFO  ] Running command: /usr/sbin/ip link show
==> omv1: [omv1.example.com][INFO  ] Running command: /usr/sbin/ip addr show
==> omv1: [omv1.example.com][DEBUG ] IP addresses found: ['192.168.123.100', '172.17.0.1', '192.168.121.48']
==> omv1: [ceph_deploy.new][DEBUG ] Resolving host omv1.example.com
==> omv1: [ceph_deploy.new][DEBUG ] Monitor omv1 at 192.168.123.100
==> omv1: [ceph_deploy.new][DEBUG ] Monitor initial members are ['omv1']
==> omv1: [ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.123.100']
==> omv1: [ceph_deploy.new][DEBUG ] Creating a random mon key...
==> omv1: [ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
==> omv1: [ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
==> omv1: [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
==> omv1: [ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy install --no-adjust-repos omv1.example.com
==> omv1: [ceph_deploy.install][DEBUG ] Installing stable version hammer on cluster ceph hosts omv1.example.com
==> omv1: [ceph_deploy.install][DEBUG ] Detecting platform for host omv1.example.com ...
==> omv1: [omv1.example.com][DEBUG ] connected to host: omv1.example.com 
==> omv1: [omv1.example.com][DEBUG ] detect platform information from remote host
==> omv1: [omv1.example.com][DEBUG ] detect machine type
==> omv1: [ceph_deploy.install][INFO  ] Distro info: Fedora 23 Twenty Three
==> omv1: [omv1.example.com][INFO  ] installing ceph on omv1.example.com
==> omv1: [omv1.example.com][INFO  ] Running command: yum -y -q install ceph ceph-radosgw
==> omv1: [omv1.example.com][WARNING] Yum command has been deprecated, redirecting to '/usr/bin/dnf -y -q install ceph ceph-radosgw'.
==> omv1: [omv1.example.com][WARNING] See 'man dnf' and 'man yum2dnf' for more information.
==> omv1: [omv1.example.com][WARNING] To transfer transaction metadata from yum to DNF, run:
==> omv1: [omv1.example.com][WARNING] 'dnf install python-dnf-plugins-extras-migrate && dnf-2 migrate'
==> omv1: [omv1.example.com][WARNING] 
==> omv1: [omv1.example.com][INFO  ] Running command: ceph --version
==> omv1: [omv1.example.com][DEBUG ] ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
==> omv1: [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
==> omv1: [ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy mon create-initial
==> omv1: [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts omv1
==> omv1: [ceph_deploy.mon][DEBUG ] detecting platform for host omv1 ...
==> omv1: [omv1][DEBUG ] connected to host: omv1 
==> omv1: [omv1][DEBUG ] detect platform information from remote host
==> omv1: [omv1][DEBUG ] detect machine type
==> omv1: [ceph_deploy.mon][INFO  ] distro info: Fedora 23 Twenty Three
==> omv1: [omv1][DEBUG ] determining if provided host has same hostname in remote
==> omv1: [omv1][DEBUG ] get remote short hostname
==> omv1: [omv1][DEBUG ] deploying mon to omv1
==> omv1: [omv1][DEBUG ] get remote short hostname
==> omv1: [omv1][DEBUG ] remote hostname: omv1
==> omv1: [omv1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
==> omv1: [omv1][DEBUG ] create the mon path if it does not exist
==> omv1: [omv1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-omv1/done
==> omv1: [omv1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-omv1/done
==> omv1: [omv1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-omv1.mon.keyring
==> omv1: [omv1][DEBUG ] create the monitor keyring file
==> omv1: [omv1][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i omv1 --keyring /var/lib/ceph/tmp/ceph-omv1.mon.keyring
==> omv1: [omv1][DEBUG ] ceph-mon: mon.noname-a 192.168.123.100:6789/0 is local, renaming to mon.omv1
==> omv1: [omv1][DEBUG ] ceph-mon: set fsid to 94276740-7431-49ab-80da-a37325d2d020
==> omv1: [omv1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-omv1 for mon.omv1
==> omv1: [omv1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-omv1.mon.keyring
==> omv1: [omv1][DEBUG ] create a done file to avoid re-doing the mon deployment
==> omv1: [omv1][DEBUG ] create the init path if it does not exist
==> omv1: [omv1][DEBUG ] locating the `service` executable...
==> omv1: [omv1][INFO  ] Running command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.omv1
==> omv1: [omv1][DEBUG ] === mon.omv1 === 
==> omv1: [omv1][DEBUG ] Starting Ceph mon.omv1 on omv1...
==> omv1: [omv1][WARNING] Running as unit run-3493.service.
==> omv1: [omv1][DEBUG ] Starting ceph-create-keys on omv1...
==> omv1: [omv1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.omv1.asok mon_status
==> omv1: [omv1][DEBUG ] ********************************************************************************
==> omv1: [omv1][DEBUG ] status for monitor: mon.omv1
==> omv1: [omv1][DEBUG ] {
==> omv1: [omv1][DEBUG ]   "election_epoch": 2, 
==> omv1: [omv1][DEBUG ]   "extra_probe_peers": [], 
==> omv1: [omv1][DEBUG ]   "monmap": {
==> omv1: [omv1][DEBUG ]     "created": "0.000000", 
==> omv1: [omv1][DEBUG ]     "epoch": 1, 
==> omv1: [omv1][DEBUG ]     "fsid": "94276740-7431-49ab-80da-a37325d2d020", 
==> omv1: [omv1][DEBUG ]     "modified": "0.000000", 
==> omv1: [omv1][DEBUG ]     "mons": [
==> omv1: [omv1][DEBUG ]       {
==> omv1: [omv1][DEBUG ]         "addr": "192.168.123.100:6789/0", 
==> omv1: [omv1][DEBUG ]         "name": "omv1", 
==> omv1: [omv1][DEBUG ]         "rank": 0
==> omv1: [omv1][DEBUG ]       }
==> omv1: [omv1][DEBUG ]     ]
==> omv1: [omv1][DEBUG ]   }, 
==> omv1: [omv1][DEBUG ]   "name": "omv1", 
==> omv1: [omv1][DEBUG ]   "outside_quorum": [], 
==> omv1: [omv1][DEBUG ]   "quorum": [
==> omv1: [omv1][DEBUG ]     0
==> omv1: [omv1][DEBUG ]   ], 
==> omv1: [omv1][DEBUG ]   "rank": 0, 
==> omv1: [omv1][DEBUG ]   "state": "leader", 
==> omv1: [omv1][DEBUG ]   "sync_provider": []
==> omv1: [omv1][DEBUG ] }
==> omv1: [omv1][DEBUG ] ********************************************************************************
==> omv1: [omv1][INFO  ] monitor: mon.omv1 is running
==> omv1: [omv1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.omv1.asok mon_status
==> omv1: [ceph_deploy.mon][INFO  ] processing monitor mon.omv1
==> omv1: [omv1][DEBUG ] connected to host: omv1 
==> omv1: [omv1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.omv1.asok mon_status
==> omv1: [ceph_deploy.mon][INFO  ] mon.omv1 monitor has reached quorum!
==> omv1: [ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
==> omv1: [ceph_deploy.mon][INFO  ] Running gatherkeys...
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Checking omv1 for /etc/ceph/ceph.client.admin.keyring
==> omv1: [omv1][DEBUG ] connected to host: omv1 
==> omv1: [omv1][DEBUG ] detect platform information from remote host
==> omv1: [omv1][DEBUG ] detect machine type
==> omv1: [omv1][DEBUG ] fetch remote file
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from omv1.
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Checking omv1 for /var/lib/ceph/bootstrap-osd/ceph.keyring
==> omv1: [omv1][DEBUG ] connected to host: omv1 
==> omv1: [omv1][DEBUG ] detect platform information from remote host
==> omv1: [omv1][DEBUG ] detect machine type
==> omv1: [omv1][DEBUG ] fetch remote file
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from omv1.
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Checking omv1 for /var/lib/ceph/bootstrap-mds/ceph.keyring
==> omv1: [omv1][DEBUG ] connected to host: omv1 
==> omv1: [omv1][DEBUG ] detect platform information from remote host
==> omv1: [omv1][DEBUG ] detect machine type
==> omv1: [omv1][DEBUG ] fetch remote file
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from omv1.
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Checking omv1 for /var/lib/ceph/bootstrap-rgw/ceph.keyring
==> omv1: [omv1][DEBUG ] connected to host: omv1 
==> omv1: [omv1][DEBUG ] detect platform information from remote host
==> omv1: [omv1][DEBUG ] detect machine type
==> omv1: [omv1][DEBUG ] fetch remote file
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-rgw.keyring key from omv1.
==> omv1: [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
==> omv1: [ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy osd prepare omv1.example.com:/srv/ceph/osd
==> omv1: [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks omv1.example.com:/srv/ceph/osd:
==> omv1: [omv1.example.com][DEBUG ] connected to host: omv1.example.com 
==> omv1: [omv1.example.com][DEBUG ] detect platform information from remote host
==> omv1: [omv1.example.com][DEBUG ] detect machine type
==> omv1: [ceph_deploy.osd][INFO  ] Distro info: Fedora 23 Twenty Three
==> omv1: [ceph_deploy.osd][DEBUG ] Deploying osd to omv1.example.com
==> omv1: [omv1.example.com][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
==> omv1: [omv1.example.com][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
==> omv1: [ceph_deploy.osd][DEBUG ] Preparing host omv1.example.com disk /srv/ceph/osd journal None activate False
==> omv1: [omv1.example.com][INFO  ] Running command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /srv/ceph/osd
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Preparing osd data dir /srv/ceph/osd
==> omv1: [omv1.example.com][INFO  ] checking OSD status...
==> omv1: [omv1.example.com][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
==> omv1: [ceph_deploy.osd][DEBUG ] Host omv1.example.com is now ready for osd use.
==> omv1: [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
==> omv1: [ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy osd activate omv1.example.com:/srv/ceph/osd
==> omv1: [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks omv1.example.com:/srv/ceph/osd:
==> omv1: [omv1.example.com][DEBUG ] connected to host: omv1.example.com 
==> omv1: [omv1.example.com][DEBUG ] detect platform information from remote host
==> omv1: [omv1.example.com][DEBUG ] detect machine type
==> omv1: [ceph_deploy.osd][INFO  ] Distro info: Fedora 23 Twenty Three
==> omv1: [ceph_deploy.osd][DEBUG ] activating host omv1.example.com disk /srv/ceph/osd
==> omv1: [ceph_deploy.osd][DEBUG ] will use init type: sysvinit
==> omv1: [omv1.example.com][INFO  ] Running command: ceph-disk -v activate --mark-init sysvinit --mount /srv/ceph/osd
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Cluster uuid is 94276740-7431-49ab-80da-a37325d2d020
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Cluster name is ceph
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:OSD uuid is 43a20af4-7a45-4f70-a0dd-baf906d58026
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Allocating OSD id...
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 43a20af4-7a45-4f70-a0dd-baf906d58026
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:OSD id is 0
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Initializing OSD...
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /srv/ceph/osd/activate.monmap
==> omv1: [omv1.example.com][WARNING] got monmap epoch 1
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /srv/ceph/osd/activate.monmap --osd-data /srv/ceph/osd --osd-journal /srv/ceph/osd/journal --osd-uuid 43a20af4-7a45-4f70-a0dd-baf906d58026 --keyring /srv/ceph/osd/keyring
==> omv1: [omv1.example.com][WARNING] 2015-12-28 14:36:47.525213 7f66f9f0c880 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
==> omv1: [omv1.example.com][WARNING] 2015-12-28 14:36:49.412257 7f66f9f0c880 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
==> omv1: [omv1.example.com][WARNING] 2015-12-28 14:36:49.413018 7f66f9f0c880 -1 filestore(/srv/ceph/osd) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
==> omv1: [omv1.example.com][WARNING] 2015-12-28 14:36:49.813515 7f66f9f0c880 -1 created object store /srv/ceph/osd journal /srv/ceph/osd/journal for osd.0 fsid 94276740-7431-49ab-80da-a37325d2d020
==> omv1: [omv1.example.com][WARNING] 2015-12-28 14:36:49.813566 7f66f9f0c880 -1 auth: error reading file: /srv/ceph/osd/keyring: can't open /srv/ceph/osd/keyring: (2) No such file or directory
==> omv1: [omv1.example.com][WARNING] 2015-12-28 14:36:49.813683 7f66f9f0c880 -1 created new key in keyring /srv/ceph/osd/keyring
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Marking with init system sysvinit
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Authorizing OSD key...
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.0 -i /srv/ceph/osd/keyring osd allow * mon allow profile osd
==> omv1: [omv1.example.com][WARNING] added key for osd.0
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:ceph osd.0 data dir is ready at /srv/ceph/osd
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/osd/ceph-0 -> /srv/ceph/osd
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Starting ceph osd.0...
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/sbin/service ceph --cluster ceph start osd.0
==> omv1: [omv1.example.com][DEBUG ] === osd.0 === 
==> omv1: [omv1.example.com][WARNING] create-or-move updating item name 'osd.0' weight 0.04 at location {host=omv1,root=default} to crush map
==> omv1: [omv1.example.com][DEBUG ] Starting Ceph osd.0 on omv1...
==> omv1: [omv1.example.com][WARNING] Running as unit run-4116.service.
==> omv1: [omv1.example.com][INFO  ] checking OSD status...
==> omv1: [omv1.example.com][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
==> omv1: [omv1.example.com][INFO  ] Running command: systemctl enable ceph
==> omv1: [omv1.example.com][WARNING] ceph.service is not a native service, redirecting to systemd-sysv-install
==> omv1: [omv1.example.com][WARNING] Executing /usr/lib/systemd/systemd-sysv-install enable ceph
==> omv1:     cluster 94276740-7431-49ab-80da-a37325d2d020
==> omv1:      health HEALTH_WARN
==> omv1:             64 pgs stuck inactive
==> omv1:             64 pgs stuck unclean
==> omv1:      monmap e1: 1 mons at {omv1=192.168.123.100:6789/0}
==> omv1:             election epoch 2, quorum 0 omv1
==> omv1:      osdmap e5: 1 osds: 1 up, 1 in
==> omv1:       pgmap v6: 64 pgs, 1 pools, 0 bytes data, 0 objects
==> omv1:             0 kB used, 0 kB / 0 kB avail
==> omv1:                   64 creating
==> omv1: you can now use ceph with the rbd command

real    4m14.815s
user    0m2.239s
sys    0m0.288s
james@computer:/tmp/tmp.DhD$ vscreen root@omv1
[root@omv1 ~]# ceph status
    cluster 94276740-7431-49ab-80da-a37325d2d020
     health HEALTH_OK
     monmap e1: 1 mons at {omv1=192.168.123.100:6789/0}
            election epoch 2, quorum 0 omv1
     osdmap e5: 1 osds: 1 up, 1 in
      pgmap v9: 64 pgs, 1 pools, 0 bytes data, 0 objects
            6642 MB used, 33190 MB / 39832 MB avail
                  64 active+clean
[root@omv1 ~]#

You’ll notice that the ceph status command initially returned a warning when run with OMV, but when I logged in to the machine and ran it a second time, everything was happy. I imagine this is because it takes a moment to converge on a healthy state.

I’ve also just released a new Fedora-23 vagrant box, which is what this example uses. Naturally if you haven’t already downloaded the box, this example will take a bit longer to run the first time. The download will cost you about 865M and will happen automatically if you’re missing the box.

Happy hacking!

James

PS: If you’re curious what that cdtmpmkdir is, it’s here:

$ type cdtmpmkdir 
cdtmpmkdir is a function
cdtmpmkdir () 
{ 
    cd `mktemp --tmpdir -d tmp.XXX`
}

Thanking Oh-My-Vagrant contributors for version 1.0.0

The Oh-My-Vagrant project became public about one year ago and at the time it was more of a fancy template than a robust project, but 188 commits (and counting) later, it has gotten surprisingly useful and mature.

james@computer:~/code/oh-my-vagrant$ git rev-list HEAD --count
188
james@computer:~/code/oh-my-vagrant$ git log $(git log --pretty=format:%H|tail -1)
commit 4faa6c89cce01c62130ef5a6d5fa0fff833da371
Author: James Shubin <james@shubin.ca>
Date:   Thu Aug 28 01:08:03 2014 -0400

    Initial commit of vagrant-puppet-docker-template...
    
    This is an attempt to prototype a default environment for
    vagrant+puppet+docker hacking. More improvements are needed for it to be
    useful, but it's probably already useful as a reference for now.

It would be easy to claim most of the credit for taking the project this far, as I’ve been responsible for about 87% of the commits, but as is common, the numbers don’t tell the whole story. It is also a bug (but hopefully just an artifact) that I’ve had such a large percentage of commits. It’s quite common for a new project to start this way, but for Free Software to succeed long-term, it’s essential that the users become the contributors. Let’s try to change that going forward.

james@computer:~/code/oh-my-vagrant$ git shortlog -s | sort -rn
   165    James Shubin
     5    Vasyl Kaigorodov
     4    Randy Barlow
     2    Scott Collier
     2    Milan Zink
     2    Christoph Görn
     2    aweiteka
     1    scollier
     1    Russell Tweed
     1    ncoghlan
     1    John Browning
     1    Flavio Fernandes
     1    Carsten Clasohm
james@computer:~/code/oh-my-vagrant$ echo '165/188*100' | bc -l
87.76595744680851063800

The true story behind these 188 commits is the living history of the past year. Countless hours testing the code, using the project, suggesting features, getting hit by bugs, debugging issues, patching those bugs, and so on… If you could see an accurate graph of the number of hours put into the project, you’d see a much longer list of individuals, and I would have nowhere close to 87% of that breakdown.

Contributions are important!

Contributions are important, and patches especially help. Patches from your users are what make something a community project as opposed to two separate camps of consumers and producers. It’s about time we singled out some of those contributors!

Vasyl Kaigorodov

Vasyl is a great hacker who first fixed the broken networking in OMV. Before his work merged, it was not possible to run two different OMV environments at the same time. Now networking makes a lot more sense. Unfortunately the GitHub contributors graph doesn’t acknowledge his work because he doesn’t have a GitHub account. Shame on them!

Randy Barlow (bowlofeggs)

Randy came up with the idea for “mainstream mode”, and while his initial proof of concept didn’t quite work, the idea was good. His time budget didn’t afford the project this new feature, but he has sent in some other patches, including some tweaks used by the Pulp Vagrantfile. He’s got a patch or two pending on his TODO list which we’re looking forward to, as he finishes the work to port Pulp to OMV.

Scott Collier

Scott is a great model user. He gets very enthusiastic, he’s great at testing things out and complaining if they don’t behave as he’d like, and if you’re lucky, you can browbeat him into writing a quick patch or two. He actually has three commits in the project so far, which would show up correctly above if he had set his git user variables properly ;) Thanks for spending the time to deal with OMV when there was a lot more cruft, and fewer features. I look forward to your next patch!

Milan Zink

Milan is a ruby expert who fixed the ruby xdg bugs we had in an earlier version of the project. Because of his work, new users don’t even realize that there was ever an issue!

Christoph Görn

Christoph has been an invaluable promoter and dedicated user of the project. His work pushing OMV to the limit has generated real world requirements and feature requests, which have made the project useful for real users! It’s been hard to say no when he opens an issue ticket, but I’ve been able to force him to write a patch or two as well.

Russell Tweed

Russell is a new OMV user who jumped right into the code and sent in a patch for adding an arbitrary number of disks to OMV machines. As a first time contributor, I thank him for his patch and for withstanding the number of reviews it had to go through. It’s finally merged, even though we might have let one bug (now fixed) slip in too. I particularly like his patch, because I actually wrote the initial patch to add extra disks support to vagrant-libvirt, and I’m pleased to see it get used one level up!

John Browning

John actually found an edge case in the subscription manager code and, after an interesting discussion, patched the issue. More users means more edge cases will fall out! Thanks John!

Flavio Fernandes

Even though Flavio is an OSX user, we’re thankful that he wrote and tested the virtualbox patch for OMV. OMV still needs an installer for OSX + mainstream mode, but once that’s done, we know the rest will work great!

Carsten Clasohm

Carsten actually wrote a lovely patch for a subtle OMV issue that is very hard to reproduce. I was able to merge his patch on the first review, and in fact it looked nicer than how I would have written it!

Nick Coghlan

Nick is actually a python hacker, so getting a ruby contribution proved a bit tricky! Fortunately, he is also a master of words, and helped clean up the documentation a bit. We’d love to get a few more doc patches if you have the time and some love!

Aaron Weitekamp

Even though aweiteka (as we call him) has only added five lines of source (two of which were comments), he was an early user and tester, and we thank him for his contributions! Hopefully we’ll see him in our commit logs in the future!

Máirín Duffy

Máirín is a talented artist who does great work using free tools. I asked her if she’d be kind enough to make us a logo, and I’ll hopefully be able to show it to you soon!

Everyone else

To everyone else who isn’t in the commit log yet, thank you for using and testing OMV, finding bugs, opening issues and even for your social media love in getting the word out! I hope to get a patch from you soon!

The power of the unknown user

They’re sometimes hard to measure, but a recently introduced bug was reported to me independently by two different (and previously unknown) users very soon after it landed! I’m sorry for letting the bug in, but I am glad that people picked up on it so quickly! I’d love to have your help in improving our automated test infrastructure!

The AUTHORS file

Every good project needs a “hall of fame” for its contributors. That’s why, starting today there is an AUTHORS file, and if you’re a contributor, we urge you to send a one-line patch with your name, so it can be immortalized in the project forever. We could try to generate this file with git log, but that would remove the prestige behind getting your first and second patches in. If you’re not in the AUTHORS file, and you should be, send me your patch already!

Version 1.0.0

I think it’s time. The project deserves a 1.0.0 release, and I’ve now made it so. Please share and enjoy!

I hope you enjoy this project, and I look forward to receiving your patch.

Happy Hacking!

James

PS: Thanks to Brian Bouterse for encouraging me to focus on community, and for inspiring me to write this post!

Vagrant and Oh-My-Vagrant on RHEL7

My employer keeps paying me, which I appreciate, so it’s good to spend some time to make sure RHEL7 customers get a great developer experience! So here’s how to make vagrant, vagrant-libvirt and Oh-My-Vagrant work on RHEL 7+. The same steps should work for CentOS 7+.

I’ll first paste the commands you need to run, and then I’ll explain what’s happening for those that are interested:

# run these commands, and then get hacking!
# this requires the rhel-7-server-optional-rpms repo enabled
sudo subscription-manager repos --enable rhel-7-server-optional-rpms
sudo yum install -y gcc ruby-devel libvirt-devel libvirt qemu-kvm
sudo systemctl start libvirtd.service
wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.4_x86_64.rpm
sudo yum install -y vagrant_1.7.4_x86_64.rpm
vagrant plugin install vagrant-libvirt
wget https://copr.fedoraproject.org/coprs/purpleidea/vagrant-libvirt/repo/epel-7/purpleidea-vagrant-libvirt-epel-7.repo
sudo cp -a purpleidea-vagrant-libvirt-epel-7.repo /etc/yum.repos.d/
sudo yum install -y vagrant-libvirt    # noop plugin for oh-my-vagrant dependency
wget https://copr.fedoraproject.org/coprs/purpleidea/oh-my-vagrant/repo/epel-7/purpleidea-oh-my-vagrant-epel-7.repo
sudo cp -a purpleidea-oh-my-vagrant-epel-7.repo /etc/yum.repos.d/
sudo yum install -y oh-my-vagrant
. /etc/profile.d/oh-my-vagrant.sh # logout/login or source

Let’s go through it line by line.

sudo subscription-manager repos --enable rhel-7-server-optional-rpms

Make sure you have the optional repos enabled, which are needed for the ruby-devel package.

sudo yum install -y gcc ruby-devel libvirt-devel libvirt qemu-kvm
sudo systemctl start libvirtd.service

Other than the base os, these are the dependencies you’ll need. If you have some sort of super minimal installation, and find that there is another dependency needed, please let me know and I’ll update this article. Usually libvirt is already installed, and libvirtd is started, but this includes those two operations in case they are needed.

wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.4_x86_64.rpm
sudo yum install -y vagrant_1.7.4_x86_64.rpm

Vagrant has finally landed in Fedora 22, but unfortunately it’s not in RHEL or any of the software collections yet. As a result, we install it from upstream.

vagrant plugin install vagrant-libvirt

Similarly, vagrant-libvirt hasn’t been packaged for RHEL either, so we’ll install it into the user’s home directory via the vagrant plugin system.

wget https://copr.fedoraproject.org/coprs/purpleidea/vagrant-libvirt/repo/epel-7/purpleidea-vagrant-libvirt-epel-7.repo
sudo cp -a purpleidea-vagrant-libvirt-epel-7.repo /etc/yum.repos.d/
sudo yum install -y vagrant-libvirt    # noop plugin for oh-my-vagrant dependency

Since there isn’t a vagrant-libvirt RPM, and because the RPM’s for Oh-My-Vagrant depend on that "requires" to install correctly, I built an empty vagrant-libvirt RPM so that Oh-My-Vagrant thinks the dependency has been met in system-wide RPM land, when it’s actually been met in the user-specific home directory space. I couldn’t think of a better way to do this, and as a result, you get to read about the exercise that prompted my recent "empty RPM" article.

wget https://copr.fedoraproject.org/coprs/purpleidea/oh-my-vagrant/repo/epel-7/purpleidea-oh-my-vagrant-epel-7.repo
sudo cp -a purpleidea-oh-my-vagrant-epel-7.repo /etc/yum.repos.d/
sudo yum install -y oh-my-vagrant

This last part installs Oh-My-Vagrant from the COPR. There is no "dnf copr enable" command in RHEL, so we manually wget the repo file into place.

. /etc/profile.d/oh-my-vagrant.sh # logout/login or source

Lastly if you’d like to reuse your current terminal session, source the /etc/profile.d/ file that is installed, otherwise close and reopen your terminal.

You’ll need to do an omv init at least once to make sure all the user plugins are installed, and you should be ready for your first vagrant up! Please note that the above process definitely includes some dirty workarounds until vagrant is more easily consumable in RHEL, but I wanted to get you hacking earlier rather than later!

I hope this article helps you hack it out in RHEL land, be sure to read about how to build your own custom RHEL vagrant boxes too!

Happy Hacking,

James

Git archive with submodules and tar magic

Git submodules are actually a very beautiful thing. You might prefer the word powerful or elegant, but that’s not the point. The downside is that they are sometimes misused, so as always, use with care. I’ve used them in projects like puppet-gluster, oh-my-vagrant, and others. If you’re not familiar with them, do a bit of reading and come back later, I’ll wait.

I recently did some work packaging Oh-My-Vagrant as RPM’s. My primary goal was to make sure the entire process was automatic, as I have no patience for manually building RPM’s. Any good packager knows that the prerequisite for building an SRPM is a source tarball, and I wanted to build those automatically too.

Simply running a tar -cf on my source directory wouldn’t work, because I only want to include files that are stored in git. Thankfully, git comes with a tool called git archive, which does exactly that! No scary tar commands required:

[image: Nobody likes tar]

Here’s how you might run it:

$ git archive --prefix=some-project/ -o output.tar.bz2 HEAD

Let’s decompose:

The --prefix argument prepends a string prefix onto every file in the archive. Therefore, if you’d like the root directory to be named some-project, then you pass that string with a trailing slash, and you’ll have everything nested inside a directory!

The -o flag predictably picks the output file and format. Using .tar.bz2 is quite common.

Lastly, the HEAD portion at the end specifies which git tree to pull the files from. I usually specify a git tag here, but you can specify a commit id if you prefer.

Obligatory, "make this article more interesting" meme image.

Obligatory, “make this article more interesting” meme image.

This is all well and good, but unfortunately, when I open my newly created archive, it is notably missing my git submodules! It would probably make sense for there to be an upstream option so that a --recursive flag would do this magic for you, but unfortunately it doesn’t exist yet.

There are a few scripts floating around that can do this, but I wanted something small, and without any real dependencies, that I can embed in my project Makefile, so that it’s all self-contained.

Here’s what that looks like:

sometarget:
    @echo Running git archive...
    # use HEAD if tag doesn't exist yet, so that development is easier...
    git archive --prefix=oh-my-vagrant-$(VERSION)/ -o $(SOURCE) $(VERSION) 2> /dev/null || (echo 'Warning: $(VERSION) does not exist.' && git archive --prefix=oh-my-vagrant-$(VERSION)/ -o $(SOURCE) HEAD)
    # TODO: if git archive had a --submodules flag this would easier!
    @echo Running git archive submodules...
    # i thought i would need --ignore-zeros, but it doesn't seem necessary!
    p=`pwd` && (echo .; git submodule foreach) | while read entering path; do \
        temp="$${path%\'}"; \
        temp="$${temp#\'}"; \
        path=$$temp; \
        [ "$$path" = "" ] && continue; \
        (cd $$path && git archive --prefix=oh-my-vagrant-$(VERSION)/$$path/ HEAD > $$p/rpmbuild/tmp.tar && tar --concatenate --file=$$p/$(SOURCE) $$p/rpmbuild/tmp.tar && rm $$p/rpmbuild/tmp.tar); \
    done

This is a bit tricky to read, so I’ll try to break it down. Remember, double dollar signs are used in Make syntax for embedded bash code since a single dollar sign is a special Make identifier. The $(VERSION) variable corresponds to the version of the project I’m building, which matches a git tag that I’ve previously created. $(SOURCE) corresponds to an output file name, ending in the .tar.bz2 suffix.

    p=`pwd` && (echo .; git submodule foreach) | while read entering path; do \

In this first line, we store the current working directory for use later, and then loop through the output of the git submodule foreach command. That output normally looks something like this:

james@computer:~/code/oh-my-vagrant$ git submodule foreach 
Entering 'vagrant/gems/xdg'
Entering 'vagrant/kubernetes/templates/default'
Entering 'vagrant/p4h'
Entering 'vagrant/puppet/modules/module-data'
Entering 'vagrant/puppet/modules/puppet'
Entering 'vagrant/puppet/modules/stdlib'
Entering 'vagrant/puppet/modules/yum'

As you can see, the above read command eats up the Entering string, and pulls the quoted path into the second variable (path). The next part of the code:

        temp="$${path%\'}"; \
        temp="$${temp#\'}"; \
        path=$$temp; \
        [ "$$path" = "" ] && continue; \

uses bash idioms to remove the two single quotes that wrap our string, and then skip over any empty versions of the path variable in our loop. Lastly, for each submodule found, we first switch into that directory:

        (cd $$path &&

Run a normal git archive command and create a plain uncompressed tar archive in a temporary directory:

git archive --prefix=oh-my-vagrant-$(VERSION)/$$path/ HEAD > $$p/rpmbuild/tmp.tar &&

Then use the magic of tar to overlay this new tar file on top of the source file that we’re building up with each iteration of this loop, and then remove the temporary file.

tar --concatenate --file=$$p/$(SOURCE) $$p/rpmbuild/tmp.tar && rm $$p/rpmbuild/tmp.tar); \

Finally, we end the loop:

    done

Boom, magic! Short, concise, and without any dependencies but bash and git.

Nobody should have to figure that out by themselves, and I wish it was built in to git, but until then, here’s how it’s done! Many thanks to #git on IRC for pointing me in the right direction.

This is the commit where I landed this patch for oh-my-vagrant, if you’re curious to see this in the wild. Now that this is done, I can definitely say that it was worth the time:

Is it worth the time? In this case, it was.

With this feature merged, along with my automatic COPR builds, a simple ‘make rpm’ causes all of this automation to happen, and delivers a fresh build from git in a few minutes.

I hope you enjoyed this technique, and if you’ve got the coding skills, I hope you’ll help get this feature upstream into git.

Happy Hacking,

James

Oh-My-Vagrant “Mainstream” mode and COPR RPMs

Making Oh-My-Vagrant (OMV) more developer-accessible and easy to install (from a distribution package like RPM) has always been a goal, but was previously never a priority. This is all sorted out now. In this article, I’ll explain how “mainstream” mode works, and how the RPM work was done. (I promise this will be somewhat interesting!)

Prerequisites:

If you haven’t read any of the previous articles about Oh-My-Vagrant, I’d recommend you start there. Many of the articles include screencasts, and combined with the examples/ folder, this is probably the best way to learn OMV, because the documentation could use some love.

Installation:

OMV is now easily installable on Fedora 22 via COPR. It probably works on other distros and versions, but I haven’t tested all of those combinations. This is a colossal improvement from when I first posted about this publicly in 2013. There is still one annoying bug that I occasionally hit. Let me know if you can reproduce it.

Install from COPR:

james@computer:~$ sudo dnf copr enable purpleidea/oh-my-vagrant

You are about to enable a Copr repository. Please note that this
repository is not part of the main Fedora distribution, and quality
may vary.

The Fedora Project does not exercise any power over the contents of
this repository beyond the rules outlined in the Copr FAQ at
, and
packages are not held to any quality or security level.

Please do not file bug reports about these packages in Fedora
Bugzilla. In case of problems, contact the owner of this repository.

Do you want to continue? [y/N]: y
Repository successfully enabled.
james@computer:~$ sudo dnf install oh-my-vagrant
Last metadata expiration check performed 0:05:08 ago on Tue Jul  7 22:58:45 2015.
Dependencies resolved.
================================================================================
 Package           Arch     Version            Repository                  Size
================================================================================
Installing:
 oh-my-vagrant     noarch   0.0.7-1            purpleidea-oh-my-vagrant   270 k
 vagrant           noarch   1.7.2-7.fc22       updates                    428 k
 vagrant-libvirt   noarch   0.0.26-2.fc22      fedora                      57 k

Transaction Summary
================================================================================
Install  3 Packages

Total download size: 755 k
Installed size: 2.5 M
Is this ok [y/N]: n
Operation aborted.
james@computer:~$ sudo dnf install -y oh-my-vagrant
Last metadata expiration check performed 0:05:19 ago on Tue Jul  7 22:58:45 2015.
Dependencies resolved.
================================================================================
 Package           Arch     Version            Repository                  Size
================================================================================
Installing:
 oh-my-vagrant     noarch   0.0.7-1            purpleidea-oh-my-vagrant   270 k
 vagrant           noarch   1.7.2-7.fc22       updates                    428 k
 vagrant-libvirt   noarch   0.0.26-2.fc22      fedora                      57 k

Transaction Summary
================================================================================
Install  3 Packages

Total download size: 755 k
Installed size: 2.5 M
Downloading Packages:
(1/3): vagrant-1.7.2-7.fc22.noarch.rpm          626 kB/s | 428 kB     00:00    
(2/3): vagrant-libvirt-0.0.26-2.fc22.noarch.rpm  70 kB/s |  57 kB     00:00    
(3/3): oh-my-vagrant-0.0.7-1.noarch.rpm         243 kB/s | 270 kB     00:01    
--------------------------------------------------------------------------------
Total                                           246 kB/s | 755 kB     00:03     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Installing  : vagrant-1.7.2-7.fc22.noarch                                 1/3 
  Installing  : vagrant-libvirt-0.0.26-2.fc22.noarch                        2/3 
  Installing  : oh-my-vagrant-0.0.7-1.noarch                                3/3 
  Verifying   : oh-my-vagrant-0.0.7-1.noarch                                1/3 
  Verifying   : vagrant-libvirt-0.0.26-2.fc22.noarch                        2/3 
  Verifying   : vagrant-1.7.2-7.fc22.noarch                                 3/3 

Installed:
  oh-my-vagrant.noarch 0.0.7-1                vagrant.noarch 1.7.2-7.fc22       
  vagrant-libvirt.noarch 0.0.26-2.fc22       

Complete!
james@computer:~$

If you’d like to avoid typing passwords over and over again when using vagrant, you can add yourself to the vagrant group. 99% of people do this. The downside is that it could allow your user account to get root privileges. Since most developers have a single-user environment, it’s not a big issue. This is necessary because vagrant uses the qemu:///system connection instead of qemu:///session. If you can help fix this, please hack on it.

james@computer:~$ groups
james wheel docker
james@computer:~$ sudo usermod -aG vagrant james
# you'll need to logout/login for this change to take effect...

Lastly, a modified user-session vagrant plugin is required. It gets installed automatically the first time you create a new OMV project. Let’s do that and see how it works!

james@computer:~$ mkdir /tmp/omvtest
james@computer:~$ cd !$
cd /tmp/omvtest
james@computer:/tmp/omvtest$ which omv
/usr/bin/omv
james@computer:/tmp/omvtest$ omv init
Oh-My-Vagrant needs to install a modified vagrant-hostmanager plugin.
Is this ok [y/N]: y
Cloning into 'vagrant-hostmanager'...
remote: Counting objects: 801, done.
remote: Total 801 (delta 0), reused 0 (delta 0), pack-reused 801
Receiving objects: 100% (801/801), 132.22 KiB | 0 bytes/s, done.
Resolving deltas: 100% (467/467), done.
Checking connectivity... done.
Branch feat/oh-my-vagrant set up to track remote branch feat/oh-my-vagrant from origin.
Switched to a new branch 'feat/oh-my-vagrant'
sending incremental file list
./
vagrant-hostmanager.rb
vagrant-hostmanager/
vagrant-hostmanager/action.rb
vagrant-hostmanager/command.rb
vagrant-hostmanager/config.rb
vagrant-hostmanager/errors.rb
vagrant-hostmanager/plugin.rb
vagrant-hostmanager/provisioner.rb
vagrant-hostmanager/util.rb
vagrant-hostmanager/version.rb
vagrant-hostmanager/action/
vagrant-hostmanager/action/update_all.rb
vagrant-hostmanager/action/update_guest.rb
vagrant-hostmanager/action/update_host.rb
vagrant-hostmanager/hosts_file/
vagrant-hostmanager/hosts_file/updater.rb

sent 20,560 bytes  received 286 bytes  41,692.00 bytes/sec
total size is 19,533  speedup is 0.94
Patched successfully!
Current machine states:

omv1                      not created (libvirt)

The Libvirt domain is not created. Run `vagrant up` to create it.
james@computer:/tmp/omvtest$ ls
ansible/  docker/  kubernetes/  omv.yaml  puppet/  shell/
james@computer:/tmp/omvtest$

You can see that the plugin installation worked perfectly, and that OMV created a few files and folders.

More usage:

You can hide that generated mess in a subfolder if you prefer:

james@computer:/tmp/omvtest$ mkdir /tmp/omvtest2
james@computer:/tmp/omvtest$ cd !$
cd /tmp/omvtest2
james@computer:/tmp/omvtest2$ omv init mess
Current machine states:

omv1                      not created (libvirt)

The Libvirt domain is not created. Run `vagrant up` to create it.
james@computer:/tmp/omvtest2$ ls
mess/  omv.yaml@
james@computer:/tmp/omvtest2$ ls -lAh
total 0
drwxrwxr-x. 7 james 160 Jul  7 23:26 mess/
lrwxrwxrwx. 1 james  13 Jul  7 23:26 omv.yaml -> mess/omv.yaml
drwxrwxr-x. 3 james  60 Jul  7 23:26 .vagrant/
james@computer:/tmp/omvtest2$ tree
.
├── mess
│   ├── ansible
│   │   └── modules
│   ├── docker
│   ├── kubernetes
│   │   ├── applications
│   │   └── templates
│   ├── omv.yaml
│   ├── puppet
│   │   └── modules
│   └── shell
└── omv.yaml -> mess/omv.yaml

10 directories, 2 files
james@computer:/tmp/omvtest2$

As you can see, all the mess is wrapped up in a single folder. This could even be named .omv if you prefer, and it should all be committed inside of your project. Now that we’re installed, let’s get hacking!

Mainstream mode:

Mainstream mode further hides the ruby/Vagrantfile aspect of a Vagrant project and extends OMV so that you can define your entire project via the omv.yaml file, without the rest of the OMV project cluttering up your development tree. This makes it possible to have your project use OMV by only committing that one yaml file into the project repo.

The main difference is that you now control everything with the new omv command line tool. It’s essentially a smart wrapper around the vagrant command, so for any command you used to run with vagrant, you can now substitute in omv. It also saves typing four extra characters!

As it turns out (and by no accident) the omv tool works exactly like the vagrant tool. For example:

james@computer:/tmp/omvtest2$ omv status
Current machine states:

omv1                      not created (libvirt)

The Libvirt domain is not created. Run `vagrant up` to create it.
james@computer:/tmp/omvtest2$ omv up
Bringing machine 'omv1' up with 'libvirt' provider...
==> omv1: Box 'centos-7.1' could not be found. Attempting to find and install...
    omv1: Box Provider: libvirt
    omv1: Box Version: >= 0
==> omv1: Adding box 'centos-7.1' (v0) for provider: libvirt
    omv1: Downloading: https://dl.fedoraproject.org/pub/alt/purpleidea/vagrant/centos-7.1/centos-7.1.box
[snip]
james@computer:/tmp/omvtest2$ omv destroy
Unlocking shell provisioning for: omv1...
==> omv1: Domain is not created. Please run `vagrant up` first.
james@computer:/tmp/omvtest2$

BUT THAT’S NOT ALL…

The existing tools you know and love, like vlog, vsftp, vscreen, vcssh, vfwd, vansible, have all been modified to work with OMV mainstream mode as well. The same goes for common aliases such as vs, vp, vup, vdestroy, vrsync, and the useful (but occasionally dangerous) vrm-rf. Have a look at the above links on my blog and the source to see what these do. If it’s not clear enough, let me know!

All of these are now packaged up in the oh-my-vagrant COPR and are installed automatically into /etc/profile.d/oh-my-vagrant.sh for your convenience. Since they’re part of the OMV project, you’ll get updates when new functions or bug fixes are made.
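
To give you a flavour of what lands in that profile script, wrappers of this sort boil down to simple shell aliases. The expansions below are illustrative guesses only; check the shipped file for the real definitions:

# illustrative only -- see /etc/profile.d/oh-my-vagrant.sh for the real thing:
alias vs='vagrant status'
alias vup='vagrant up'
alias vdestroy='vagrant destroy'
alias vrsync='vagrant rsync'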

The plumbing:

Mainstream mode is possible because of an idea rbarlow had. He gets full credit for the idea, in particular for teaching me about VAGRANT_CWD, which is what makes it all work. I rejected his 6 line prototype, but loved the idea, and since he was busy making juice, I got bored one day and hacked on a full implementation.

james@computer:~/code/oh-my-vagrant$ git diff --stat 853073431d227cbb0ba56aaf4fedd721904de9a8 aa764ae79d69475b87f293c43af4f20fd7d1d000
 DOCUMENTATION.md    | 18 +++++++++++++++
 bin/omv.sh          | 50 +++++++++++++++++++++++++++++++++++++++++
 vagrant/Vagrantfile | 65 ++++++++++++++++++++++++++++++++++-------------------
 3 files changed, 110 insertions(+), 23 deletions(-)
james@computer:~/code/oh-my-vagrant$

It turned out to be a little longer, but I artificially inflated that by including some quick doc patches. What does it actually do differently? It sets VAGRANT_CWD and VAGRANT_DOTFILE_PATH so that the vagrant command looks in a different directory for the Vagrantfile and the .vagrant/ directory. That way, all the plumbing is hidden and part of the RPM.
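
To sketch the mechanism (this is not the actual bin/omv.sh, and the install path below is my assumption), a wrapper in this style boils down to just a few lines of shell:

#!/bin/bash
# minimal sketch of a VAGRANT_CWD-style wrapper -- not the real bin/omv.sh
# point vagrant at the Vagrantfile that ships with the package (assumed path):
export VAGRANT_CWD='/usr/share/oh-my-vagrant/vagrant/'
# keep the .vagrant/ state directory in the user's current project instead:
export VAGRANT_DOTFILE_PATH="$(pwd)/.vagrant"
exec vagrant "$@"    # everything else passes straight through to vagrant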

Making the RPM:

The RPMs happened because stefw made me feel bad about not having them. He was right to do so. In any case, RPM packaging still scares me. I think repetitive work scares me even more. That’s why I automate as much as I can. So after a lot of brain loss, I finally made you an RPM so that you can easily install it. Here’s how it went:

I started by adding the magic so that my Makefile could build an RPM.

This made it so I can easily run make rpm (or make srpm) to get a fresh RPM (or SRPM).
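
Under the hood, the interesting part of such a target is just an rpmbuild invocation pointed at a local tree. A sketch (the spec file path is my assumption; the real Makefile wires in more):

rpmbuild --define "_topdir $(pwd)/rpmbuild" -bs rpmbuild/SPECS/oh-my-vagrant.spec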

Then I added COPR integration, so a make copr automatically kicks off a new COPR build. This was the interesting part. You’ll need a Fedora account for this to work. Once you’re logged in, if you go to https://copr.fedoraproject.org/api you’ll be able to download a snippet to put in your ~/.config/copr file. Lastly, the work happens in copr-build.py where the python copr library does the heavy lifting.

#!/usr/bin/python

# README:
# for initial setup, browse to: https://copr.fedoraproject.org/api/
# and it will have a ~/.config/copr config that you can download.
# happy hacking!

import os
import sys
import copr

COPR = 'oh-my-vagrant'
if len(sys.argv) != 2:
    print("Usage: %s <srpm url>" % sys.argv[0])
    sys.exit(1)

url = sys.argv[1]

client = copr.CoprClient.create_from_file_config(os.path.expanduser("~/.config/copr"))

result = client.create_new_build(COPR, [url])
if result.output != "ok":
    print(result.error)
    sys.exit(1)
print(result.message)
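
Running it by hand looks like this, with a hypothetical SRPM URL (the Makefile passes in the real location of the uploaded SRPM):

$ ./copr-build.py https://example.com/SRPMS/oh-my-vagrant-0.0.8-1.src.rpm
Build was added to oh-my-vagrant.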

A build looks like this:

james@computer:~/code/oh-my-vagrant$ git tag 0.0.8 # set a new tag
james@computer:~/code/oh-my-vagrant$ make copr 
Running templater...
Running git archive...
Running git archive submodules...
Running rpmbuild -bs...
Wrote: /home/james/code/oh-my-vagrant/rpmbuild/SRPMS/oh-my-vagrant-0.0.8-1.src.rpm
Running SRPMS sha256sum...
/home/james/code/oh-my-vagrant
Running SRPMS gpg...

You need a passphrase to unlock the secret key for
user: "James Shubin (Third PGP key.) <james@shubin.ca>"
4096-bit RSA key, ID 24090D66, created 2012-05-09

gpg: WARNING: The GNOME keyring manager hijacked the GnuPG agent.
gpg: WARNING: GnuPG will not work properly - please configure that tool to not interfere with the GnuPG system!
Running SRPMS upload...
sending incremental file list
SHA256SUMS
SHA256SUMS.asc
oh-my-vagrant-0.0.8-1.src.rpm

sent 8,583 bytes  received 2,184 bytes  4,306.80 bytes/sec
total size is 1,456,741  speedup is 135.30
Build was added to oh-my-vagrant.
james@computer:~/code/oh-my-vagrant$

A few minutes later, the COPR build page should look like this:

A screenshot of the Oh-My-Vagrant COPR build page for people who like to look at pretty pictures instead of just terminal output.

There was a bunch of additional fixing and polishing required to get this as seamless as possible for you. Have a look at the git commits and you’ll get an idea of all the work that was done, and you’ll probably even learn about some new features I haven’t blogged about yet. It was exhausting!

As a result of all this, you can download fresh builds easily. Visit the COPR page to see how things are cooking:

https://copr.fedoraproject.org/coprs/purpleidea/oh-my-vagrant/

I’ll try to keep pumping out releases regularly. If I lag behind, please holler at me. In any case, please let me know if you appreciate this work. Comment, tweet, or contact me!

Happy Hacking,

James

A super privileged Puppet container

In this new crazy world of containers and immutable hosts, one might still want to run previous-generation software such as Puppet on a current-generation Atomic host. This article will explain how you can do that, and offer some proof of concept code.

The Atomic host doesn’t provide a yum or dnf command, because the software is pre-baked into a read-only /usr/ partition. To “install” (to use) additional software, it usually needs to be distributed and run as a container.

The Dockerfile, which describes the docker container that we will build, has two important sections:

ENV FACTER_fqdn=localhost
ENTRYPOINT ["puppet", "apply"]

The ENTRYPOINT verb causes docker to run the puppet command that we built into the container whenever the container is run with externally provided arguments. The ENV verb suppresses some facter errors that would otherwise occur, since facter isn’t running on the host. The rest of the file contains boilerplate and other build/run dependencies.
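
One consequence of this ENTRYPOINT is that anything you append to docker run after the image name is handed straight to puppet apply. For example, assuming the image is tagged spc-puppet-apply as in the build below:

# everything after the image name becomes arguments to `puppet apply`:
docker run --rm -it spc-puppet-apply --help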

You can build this container yourself manually, or if you are an Oh-My-Vagrant user, then you can use the supplied omv.yaml file to build the container automatically. A build with Oh-My-Vagrant looks like this:

$ time vup omv1
Bringing machine 'omv1' up with 'libvirt' provider...
==> omv1: Creating image (snapshot of base box volume).
==> omv1: Creating domain with the following settings...
==> omv1:  -- Name:              omv_omv1
==> omv1:  -- Domain type:       kvm
==> omv1:  -- Cpus:              1
==> omv1:  -- Memory:            512M
==> omv1:  -- Base box:          centos-7.1-docker
==> omv1:  -- Storage pool:      default
==> omv1:  -- Image:             /var/lib/libvirt/images/omv_omv1.img
==> omv1:  -- Volume Cache:      default
==> omv1:  -- Kernel:            
==> omv1:  -- Initrd:            
==> omv1:  -- Graphics Type:     vnc
==> omv1:  -- Graphics Port:     5900
==> omv1:  -- Graphics IP:       127.0.0.1
==> omv1:  -- Graphics Password: Not defined
==> omv1:  -- Video Type:        cirrus
==> omv1:  -- Video VRAM:        9216
==> omv1:  -- Keymap:            en-us
==> omv1:  -- Command line : 
==> omv1: Starting domain.
==> omv1: Waiting for domain to get an IP address...
==> omv1: Waiting for SSH to become available...
==> omv1: Starting domain.
==> omv1: Waiting for domain to get an IP address...
==> omv1: Waiting for SSH to become available...
==> omv1: Creating shared folders metadata...
==> omv1: Setting hostname...
==> omv1: Rsyncing folder: /home/james/code/oh-my-vagrant/vagrant/ => /vagrant
==> omv1: Configuring and enabling network interfaces...
==> omv1: Updating /etc/hosts file on active guest machines...
==> omv1: Running provisioner: shell...
    omv1: Running: inline script
==> omv1: Running provisioner: shell...
    omv1: Running: inline script
==> omv1: Running provisioner: docker...
    omv1: Configuring Docker to autostart containers...
==> omv1: Running provisioner: docker...
    omv1: Configuring Docker to autostart containers...
==> omv1: Building Docker images...
==> omv1: -- Path: /vagrant/docker/spc-puppet-apply
==> omv1: Sending build context to Docker daemon 122.9 kB
==> omv1: Sending build context to Docker daemon 
==> omv1: Step 0 : FROM centos:7
==> omv1:  ---> 0114405f9ff1
==> omv1: Step 1 : MAINTAINER James Shubin <james@shubin.ca>
==> omv1:  ---> Running in 2dbdc37494e9
==> omv1:  ---> e154b3cfed7f
==> omv1: Removing intermediate container 2dbdc37494e9
==> omv1: Step 2 : RUN echo Hello from purpleidea and aweiteka > README
==> omv1:  ---> Running in 97475154152f
==> omv1:  ---> e4728d71caac
==> omv1: Removing intermediate container 97475154152f
==> omv1: Step 3 : ADD el7-puppet.repo /etc/yum.repos.d/
==> omv1:  ---> c591e0a9a54d
==> omv1: Removing intermediate container 24e55893313c
==> omv1: Step 4 : ADD RPM-GPG-KEY-puppetlabs /etc/pki/rpm-gpg/
==> omv1:  ---> 223549fbf661
==> omv1: Removing intermediate container 2bceb998f1c3
==> omv1: Step 5 : RUN yum install -y puppet
==> omv1:  ---> Running in e44c9cf55dc5
==> omv1: Loaded plugins: fastestmirror
==> omv1: Determining fastest mirrors
==> omv1:  * base: centos.mirror.vexxhost.com
==> omv1:  * extras: mirror.gpmidi.net
==> omv1:  * updates: centos.mirror.vexxhost.com
==> omv1: Resolving Dependencies
==> omv1: --> Running transaction check
==> omv1: ---> Package puppet.noarch 0:3.7.5-1.el7 will be installed
==> omv1: --> Processing Dependency: ruby >= 1.8 for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: facter >= 1:1.7.0 for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: ruby >= 1.8.7 for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: hiera >= 1.0.0 for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: rubygem-json for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: libselinux-utils for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: ruby-augeas for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: ruby(selinux) for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: /usr/bin/ruby for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: ruby-shadow for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Running transaction check
==> omv1: ---> Package facter.x86_64 1:2.4.3-1.el7 will be installed
==> omv1: --> Processing Dependency: net-tools for package: 1:facter-2.4.3-1.el7.x86_64
==> omv1: --> Processing Dependency: virt-what for package: 1:facter-2.4.3-1.el7.x86_64
==> omv1: --> Processing Dependency: pciutils for package: 1:facter-2.4.3-1.el7.x86_64
==> omv1: --> Processing Dependency: dmidecode for package: 1:facter-2.4.3-1.el7.x86_64
==> omv1: ---> Package hiera.noarch 0:1.3.4-1.el7 will be installed
==> omv1: ---> Package libselinux-ruby.x86_64 0:2.2.2-6.el7 will be installed
==> omv1: ---> Package libselinux-utils.x86_64 0:2.2.2-6.el7 will be installed
==> omv1: ---> Package ruby.x86_64 0:2.0.0.598-24.el7 will be installed
==> omv1: --> Processing Dependency: ruby-libs(x86-64) = 2.0.0.598-24.el7 for package: ruby-2.0.0.598-24.el7.x86_64
==> omv1: --> Processing Dependency: rubygem(bigdecimal) >= 1.2.0 for package: ruby-2.0.0.598-24.el7.x86_64
==> omv1: --> Processing Dependency: ruby(rubygems) >= 2.0.14 for package: ruby-2.0.0.598-24.el7.x86_64
==> omv1: --> Processing Dependency: libruby.so.2.0()(64bit) for package: ruby-2.0.0.598-24.el7.x86_64
==> omv1: ---> Package ruby-augeas.x86_64 0:0.4.1-3.el7 will be installed
==> omv1: --> Processing Dependency: augeas-libs >= 0.8.0 for package: ruby-augeas-0.4.1-3.el7.x86_64
==> omv1: --> Processing Dependency: libaugeas.so.0(AUGEAS_0.8.0)(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
==> omv1: --> Processing Dependency: libaugeas.so.0(AUGEAS_0.1.0)(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
==> omv1: --> Processing Dependency: libaugeas.so.0(AUGEAS_0.12.0)(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
==> omv1: --> Processing Dependency: libaugeas.so.0(AUGEAS_0.10.0)(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
==> omv1: --> Processing Dependency: libaugeas.so.0(AUGEAS_0.11.0)(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
==> omv1: --> Processing Dependency: libaugeas.so.0()(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
==> omv1: ---> Package ruby-shadow.x86_64 1:2.2.0-2.el7 will be installed
==> omv1: ---> Package rubygem-json.x86_64 0:1.7.7-24.el7 will be installed
==> omv1: --> Running transaction check
==> omv1: ---> Package augeas-libs.x86_64 0:1.1.0-17.el7 will be installed
==> omv1: ---> Package dmidecode.x86_64 1:2.12-5.el7 will be installed
==> omv1: ---> Package net-tools.x86_64 0:2.0-0.17.20131004git.el7 will be installed
==> omv1: ---> Package pciutils.x86_64 0:3.2.1-4.el7 will be installed
==> omv1: --> Processing Dependency: pciutils-libs = 3.2.1-4.el7 for package: pciutils-3.2.1-4.el7.x86_64
==> omv1: --> Processing Dependency: libpci.so.3(LIBPCI_3.2)(64bit) for package: pciutils-3.2.1-4.el7.x86_64
==> omv1: --> Processing Dependency: libpci.so.3(LIBPCI_3.1)(64bit) for package: pciutils-3.2.1-4.el7.x86_64
==> omv1: --> Processing Dependency: libpci.so.3(LIBPCI_3.0)(64bit) for package: pciutils-3.2.1-4.el7.x86_64
==> omv1: --> Processing Dependency: hwdata for package: pciutils-3.2.1-4.el7.x86_64
==> omv1: --> Processing Dependency: libpci.so.3()(64bit) for package: pciutils-3.2.1-4.el7.x86_64
==> omv1: ---> Package ruby-libs.x86_64 0:2.0.0.598-24.el7 will be installed
==> omv1: ---> Package rubygem-bigdecimal.x86_64 0:1.2.0-24.el7 will be installed
==> omv1: ---> Package rubygems.noarch 0:2.0.14-24.el7 will be installed
==> omv1: --> Processing Dependency: rubygem(rdoc) >= 4.0.0 for package: rubygems-2.0.14-24.el7.noarch
==> omv1: --> Processing Dependency: rubygem(psych) >= 2.0.0 for package: rubygems-2.0.14-24.el7.noarch
==> omv1: --> Processing Dependency: rubygem(io-console) >= 0.4.2 for package: rubygems-2.0.14-24.el7.noarch
==> omv1: ---> Package virt-what.x86_64 0:1.13-5.el7 will be installed
==> omv1: --> Running transaction check
==> omv1: ---> Package hwdata.noarch 0:0.252-7.5.el7 will be installed
==> omv1: ---> Package pciutils-libs.x86_64 0:3.2.1-4.el7 will be installed
==> omv1: ---> Package rubygem-io-console.x86_64 0:0.4.2-24.el7 will be installed
==> omv1: ---> Package rubygem-psych.x86_64 0:2.0.0-24.el7 will be installed
==> omv1: --> Processing Dependency: libyaml-0.so.2()(64bit) for package: rubygem-psych-2.0.0-24.el7.x86_64
==> omv1: ---> Package rubygem-rdoc.noarch 0:4.0.0-24.el7 will be installed
==> omv1: --> Processing Dependency: ruby(irb) = 2.0.0.598 for package: rubygem-rdoc-4.0.0-24.el7.noarch
==> omv1: --> Running transaction check
==> omv1: ---> Package libyaml.x86_64 0:0.1.4-11.el7_0 will be installed
==> omv1: ---> Package ruby-irb.noarch 0:2.0.0.598-24.el7 will be installed
==> omv1: --> Finished Dependency Resolution
==> omv1: 
==> omv1: Dependencies Resolved
==> omv1: 
==> omv1: ================================================================================
==> omv1:  Package            Arch   Version                    Repository           Size
==> omv1: ================================================================================
==> omv1: Installing:
==> omv1:  puppet             noarch 3.7.5-1.el7                puppetlabs-products 1.5 M
==> omv1: Installing for dependencies:
==> omv1:  augeas-libs        x86_64 1.1.0-17.el7               base                332 k
==> omv1:  dmidecode          x86_64 1:2.12-5.el7               base                 78 k
==> omv1:  facter             x86_64 1:2.4.3-1.el7              puppetlabs-products  98 k
==> omv1:  hiera              noarch 1.3.4-1.el7                puppetlabs-products  23 k
==> omv1:  hwdata             noarch 0.252-7.5.el7              base                2.0 M
==> omv1:  libselinux-ruby    x86_64 2.2.2-6.el7                base                127 k
==> omv1:  libselinux-utils   x86_64 2.2.2-6.el7                base                135 k
==> omv1:  libyaml            x86_64 0.1.4-11.el7_0             base                 55 k
==> omv1:  net-tools          x86_64 2.0-0.17.20131004git.el7   base                304 k
==> omv1:  pciutils           x86_64 3.2.1-4.el7                base                 90 k
==> omv1:  pciutils-libs      x86_64 3.2.1-4.el7                base                 45 k
==> omv1:  ruby               x86_64 2.0.0.598-24.el7           base                 67 k
==> omv1:  ruby-augeas        x86_64 0.4.1-3.el7                puppetlabs-deps      22 k
==> omv1:  ruby-irb           noarch 2.0.0.598-24.el7           base                 88 k
==> omv1:  ruby-libs          x86_64 2.0.0.598-24.el7           base                2.8 M
==> omv1:  ruby-shadow        x86_64 1:2.2.0-2.el7              puppetlabs-deps      14 k
==> omv1:  rubygem-bigdecimal x86_64 1.2.0-24.el7               base                 79 k
==> omv1:  rubygem-io-console x86_64 0.4.2-24.el7               base                 50 k
==> omv1:  rubygem-json       x86_64 1.7.7-24.el7               base                 75 k
==> omv1:  rubygem-psych      x86_64 2.0.0-24.el7               base                 77 k
==> omv1:  rubygem-rdoc       noarch 4.0.0-24.el7               base                318 k
==> omv1:  rubygems           noarch 2.0.14-24.el7              base                212 k
==> omv1:  virt-what          x86_64 1.13-5.el7                 base                 26 k
==> omv1: 
==> omv1: Transaction Summary
==> omv1: ================================================================================
==> omv1: Install  1 Package (+23 Dependent packages)
==> omv1: 
==> omv1: Total download size: 8.6 M
==> omv1: Installed size: 27 M
==> omv1: Downloading packages:
==> omv1: warning: /var/cache/yum/x86_64/7/puppetlabs-products/packages/hiera-1.3.4-1.el7.noarch.rpm: Header V4 RSA/SHA512 Signature, key ID 4bd6ec30: NOKEY
==> omv1: 
==> omv1: Public key for hiera-1.3.4-1.el7.noarch.rpm is not installed
==> omv1: warning: /var/cache/yum/x86_64/7/base/packages/dmidecode-2.12-5.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
==> omv1: 
==> omv1: Public key for dmidecode-2.12-5.el7.x86_64.rpm is not installed
==> omv1: Public key for ruby-augeas-0.4.1-3.el7.x86_64.rpm is not installed
==> omv1: --------------------------------------------------------------------------------
==> omv1: Total                                              811 kB/s | 8.6 MB  00:10     
==> omv1: Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
==> omv1: Importing GPG key 0xF4A80EB5:
==> omv1:  Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
==> omv1:  Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
==> omv1:  Package    : centos-release-7-1.1503.el7.centos.2.8.x86_64 (@CentOS/$releasever)
==> omv1:  From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
==> omv1: 
==> omv1: Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
==> omv1: Importing GPG key 0x4BD6EC30:
==> omv1:  Userid     : "Puppet Labs Release Key (Puppet Labs Release Key) <info@puppetlabs.com>"
==> omv1:  Fingerprint: 47b3 20eb 4c7c 375a a9da e1a0 1054 b7a2 4bd6 ec30
==> omv1:  From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
==> omv1: 
==> omv1: Running transaction check
==> omv1: Running transaction test
==> omv1: Transaction test succeeded
==> omv1: Running transaction
==> omv1:   Installing : ruby-libs-2.0.0.598-24.el7.x86_64                           1/24
==> omv1:  
==> omv1:   Installing : 1:dmidecode-2.12-5.el7.x86_64                               2/24
==> omv1:  
==> omv1:   Installing : virt-what-1.13-5.el7.x86_64                                 3/24
==> omv1:  
==> omv1:   Installing : libyaml-0.1.4-11.el7_0.x86_64                               4/24
==> omv1:  
==> omv1:   Installing : rubygem-psych-2.0.0-24.el7.x86_64                           5/24
==> omv1:  
==> omv1:   Installing : rubygem-bigdecimal-1.2.0-24.el7.x86_64                      6/24
==> omv1:  
==> omv1:   Installing : rubygem-io-console-0.4.2-24.el7.x86_64                      7/24
==> omv1:  
==> omv1:   Installing : ruby-irb-2.0.0.598-24.el7.noarch                            8/24
==> omv1:  
==> omv1:   Installing : ruby-2.0.0.598-24.el7.x86_64                                9/24
==> omv1:  
==> omv1:   Installing : rubygems-2.0.14-24.el7.noarch                              10/24
==> omv1:  
==> omv1:   Installing : rubygem-json-1.7.7-24.el7.x86_64                           11/24
==> omv1:  
==> omv1:   Installing : rubygem-rdoc-4.0.0-24.el7.noarch                           12/24
==> omv1:  
==> omv1:   Installing : hiera-1.3.4-1.el7.noarch                                   13/24
==> omv1:  
==> omv1:   Installing : 1:ruby-shadow-2.2.0-2.el7.x86_64                           14/24
==> omv1:  
==> omv1:   Installing : hwdata-0.252-7.5.el7.noarch                                15/24
==> omv1:  
==> omv1:   Installing : libselinux-utils-2.2.2-6.el7.x86_64                        16/24
==> omv1:  
==> omv1:   Installing : pciutils-libs-3.2.1-4.el7.x86_64                           17/24
==> omv1:  
==> omv1:   Installing : pciutils-3.2.1-4.el7.x86_64                                18/24
==> omv1:  
==> omv1:   Installing : augeas-libs-1.1.0-17.el7.x86_64                            19/24
==> omv1:  
==> omv1:   Installing : ruby-augeas-0.4.1-3.el7.x86_64                             20/24
==> omv1:  
==> omv1:   Installing : net-tools-2.0-0.17.20131004git.el7.x86_64                  21/24
==> omv1:  
==> omv1:   Installing : 1:facter-2.4.3-1.el7.x86_64                                22/24
==> omv1:  
==> omv1:   Installing : libselinux-ruby-2.2.2-6.el7.x86_64                         23/24
==> omv1:  
==> omv1:   Installing : puppet-3.7.5-1.el7.noarch                                  24/24
==> omv1:  
==> omv1:   Verifying  : libselinux-ruby-2.2.2-6.el7.x86_64                          1/24
==> omv1:  
==> omv1:   Verifying  : rubygem-json-1.7.7-24.el7.x86_64                            2/24
==> omv1:  
==> omv1:   Verifying  : ruby-irb-2.0.0.598-24.el7.noarch                            3/24
==> omv1:  
==> omv1:   Verifying  : ruby-libs-2.0.0.598-24.el7.x86_64                           4/24
==> omv1:  
==> omv1:   Verifying  : net-tools-2.0-0.17.20131004git.el7.x86_64                   5/24
==> omv1:  
==> omv1:   Verifying  : augeas-libs-1.1.0-17.el7.x86_64                             6/24
==> omv1:  
==> omv1:   Verifying  : pciutils-libs-3.2.1-4.el7.x86_64                            7/24
==> omv1:  
==> omv1:   Verifying  : rubygem-psych-2.0.0-24.el7.x86_64                           8/24
==> omv1:  
==> omv1:   Verifying  : 1:facter-2.4.3-1.el7.x86_64                                 9/24
==> omv1:  
==> omv1:   Verifying  : pciutils-3.2.1-4.el7.x86_64                                10/24
==> omv1:  
==> omv1:   Verifying  : puppet-3.7.5-1.el7.noarch                                  11/24
==> omv1:  
==> omv1:   Verifying  : hiera-1.3.4-1.el7.noarch                                   12/24
==> omv1:  
==> omv1:   Verifying  : rubygem-rdoc-4.0.0-24.el7.noarch                           13/24
==> omv1:  
==> omv1:   Verifying  : virt-what-1.13-5.el7.x86_64                                14/24
==> omv1:  
==> omv1:   Verifying  : rubygems-2.0.14-24.el7.noarch                              15/24
==> omv1:  
==> omv1:   Verifying  : libselinux-utils-2.2.2-6.el7.x86_64                        16/24
==> omv1:  
==> omv1:   Verifying  : 1:ruby-shadow-2.2.0-2.el7.x86_64                           17/24
==> omv1:  
==> omv1:   Verifying  : rubygem-bigdecimal-1.2.0-24.el7.x86_64                     18/24
==> omv1:  
==> omv1:   Verifying  : 1:dmidecode-2.12-5.el7.x86_64                              19/24
==> omv1:  
==> omv1:   Verifying  : hwdata-0.252-7.5.el7.noarch                                20/24
==> omv1:  
==> omv1:   Verifying  : libyaml-0.1.4-11.el7_0.x86_64                              21/24
==> omv1:  
==> omv1:   Verifying  : rubygem-io-console-0.4.2-24.el7.x86_64                     22/24
==> omv1:  
==> omv1:   Verifying  : ruby-augeas-0.4.1-3.el7.x86_64                             23/24
==> omv1:  
==> omv1:   Verifying  : ruby-2.0.0.598-24.el7.x86_64                               24/24
==> omv1:  
==> omv1: 
==> omv1: Installed:
==> omv1:   puppet.noarch 0:3.7.5-1.el7                                                   
==> omv1: 
==> omv1: Dependency Installed:
==> omv1:   augeas-libs.x86_64 0:1.1.0-17.el7                                             
==> omv1:   dmidecode.x86_64 1:2.12-5.el7                                                 
==> omv1:   facter.x86_64 1:2.4.3-1.el7                                                   
==> omv1:   hiera.noarch 0:1.3.4-1.el7                                                    
==> omv1:   hwdata.noarch 0:0.252-7.5.el7                                                 
==> omv1:   libselinux-ruby.x86_64 0:2.2.2-6.el7                                          
==> omv1:   libselinux-utils.x86_64 0:2.2.2-6.el7                                         
==> omv1:   libyaml.x86_64 0:0.1.4-11.el7_0                                               
==> omv1:   net-tools.x86_64 0:2.0-0.17.20131004git.el7                                   
==> omv1:   pciutils.x86_64 0:3.2.1-4.el7                                                 
==> omv1:   pciutils-libs.x86_64 0:3.2.1-4.el7                                            
==> omv1:   ruby.x86_64 0:2.0.0.598-24.el7                                                
==> omv1:   ruby-augeas.x86_64 0:0.4.1-3.el7                                              
==> omv1:   ruby-irb.noarch 0:2.0.0.598-24.el7                                            
==> omv1:   ruby-libs.x86_64 0:2.0.0.598-24.el7                                           
==> omv1:   ruby-shadow.x86_64 1:2.2.0-2.el7                                              
==> omv1:   rubygem-bigdecimal.x86_64 0:1.2.0-24.el7                                      
==> omv1:   rubygem-io-console.x86_64 0:0.4.2-24.el7                                      
==> omv1:   rubygem-json.x86_64 0:1.7.7-24.el7                                            
==> omv1:   rubygem-psych.x86_64 0:2.0.0-24.el7                                           
==> omv1:   rubygem-rdoc.noarch 0:4.0.0-24.el7                                            
==> omv1:   rubygems.noarch 0:2.0.14-24.el7                                               
==> omv1:   virt-what.x86_64 0:1.13-5.el7                                                 
==> omv1: Complete!
==> omv1:  ---> bf169104271a
==> omv1: Removing intermediate container e44c9cf55dc5
==> omv1: Step 6 : RUN yum install -y hostname
==> omv1:  ---> Running in 6e5bd5a59223
==> omv1: Loaded plugins: fastestmirror
==> omv1: Loading mirror speeds from cached hostfile
==> omv1:  * base: centos.mirror.vexxhost.com
==> omv1:  * extras: mirror.gpmidi.net
==> omv1:  * updates: centos.mirror.vexxhost.com
==> omv1: Resolving Dependencies
==> omv1: --> Running transaction check
==> omv1: ---> Package hostname.x86_64 0:3.13-3.el7 will be installed
==> omv1: --> Finished Dependency Resolution
==> omv1: 
==> omv1: Dependencies Resolved
==> omv1: 
==> omv1: ================================================================================
==> omv1:  Package            Arch             Version               Repository      Size
==> omv1: ================================================================================
==> omv1: Installing:
==> omv1:  hostname           x86_64           3.13-3.el7            base            17 k
==> omv1: 
==> omv1: Transaction Summary
==> omv1: ================================================================================
==> omv1: Install  1 Package
==> omv1: 
==> omv1: Total download size: 17 k
==> omv1: Installed size: 19 k
==> omv1: Downloading packages:
==> omv1: Running transaction check
==> omv1: Running transaction test
==> omv1: Transaction test succeeded
==> omv1: Running transaction
==> omv1:   Installing : hostname-3.13-3.el7.x86_64                                   1/1
==> omv1:  
==> omv1:   Verifying  : hostname-3.13-3.el7.x86_64                                   1/1
==> omv1:  
==> omv1: 
==> omv1: Installed:
==> omv1:   hostname.x86_64 0:3.13-3.el7                                                  
==> omv1: 
==> omv1: Complete!
==> omv1:  ---> 2a20361f2d9d
==> omv1: Removing intermediate container 6e5bd5a59223
==> omv1: Step 7 : ADD run.sh /
==> omv1:  ---> 5493e62ac377
==> omv1: Removing intermediate container 84c8b7677f72
==> omv1: Step 8 : ADD test.pp /
==> omv1:  ---> 4f4d0023e612
==> omv1: Removing intermediate container 84cb691304a3
==> omv1: Step 9 : ENV FACTER_fqdn localhost
==> omv1:  ---> Running in 104fe3925dd6
==> omv1:  ---> 864c113d54b5
==> omv1: Removing intermediate container 104fe3925dd6
==> omv1: Step 10 : ENTRYPOINT puppet apply
==> omv1:  ---> Running in f6fe26141b19
==> omv1:  ---> cd19e4344bf1
==> omv1: Removing intermediate container f6fe26141b19
==> omv1: Step 11 : USER root
==> omv1:  ---> Running in 435752b9fb17
==> omv1:  ---> 86193e9815a8
==> omv1: Removing intermediate container 435752b9fb17
==> omv1: Step 12 : LABEL INSTALL docker run --rm -it --privileged -v /etc:/etc -v /var:/var -v /run:/run --net=host IMAGE
==> omv1:  ---> Running in d24f98a56936
==> omv1:  ---> 72f2dfbc4e07
==> omv1: Removing intermediate container d24f98a56936
==> omv1: Successfully built 72f2dfbc4e07

real    2m46.515s
user    0m10.036s
sys    0m1.678s

Once you’ve built the container, you should confirm its existence:

$ vscreen root@omv1
# docker images | grep spc
spc-puppet-apply    latest              72f2dfbc4e07        7 minutes ago       314.5 MB

The project repository tries to make your life easier, so it comes with a test.pp file that you can run to confirm puppet is working correctly. If you are using a regular host (not Atomic), you can run:

# mkdir /var/tmp/spc-puppet-apply/
# cp -a /vagrant/docker/spc-puppet-apply/test.pp /var/tmp/spc-puppet-apply/

or if you’re using an Atomic host, you can run:

# mkdir /var/tmp/spc-puppet-apply/
# cp -a /home/vagrant/sync/docker/spc-puppet-apply/test.pp /var/tmp/spc-puppet-apply/

since Oh-My-Vagrant can’t put files in /vagrant on an immutable Atomic host.

Finally, run the application with this monster docker command:

# docker run --rm -it --privileged -v /etc:/etc -v /var:/var -v /run:/run --net=host spc-puppet-apply /var/tmp/spc-puppet-apply/test.pp
Notice: Compiled catalog for omv1.example.com in environment production in 0.02 seconds
Notice: This is a puppet test!

Notice: /Stage[main]/Main/Notify[hello]/message: defined 'message' as 'This is a puppet test!'
Notice: Finished catalog run in 0.06 seconds

Since remembering that monster each time you want to do a simple puppet run is a bit much, the Atomic project provides labels which let you use the atomic run command instead.
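
I haven’t shown it here, but an atomic invocation would look something like this sketch (illustrative only; it assumes a suitable run label is baked into the image, similar to the INSTALL label seen in the build above):

# atomic expands the docker invocation stored in the image label for you:
atomic run spc-puppet-apply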

Future work:

Interested parties are encouraged to test this, find any issues, and become comfortable with the idea of applications running from within containers. If running puppet with a puppet master is desirable, an spc-puppet-agent would be needed.

Happy Hacking,

James

Kubernetes clusters with Oh-My-Vagrant

I’ve added the ability to deploy a Kubernetes cluster with Oh-My-Vagrant (omv). I’ve also built an automated developer experience so that you can test your Kubernetes-powered app in minutes. If you want to redeploy a new version, or see how your app behaves during a rolling update, you can use omv to test this out in minutes! I’ve recorded a screencast (~15 min), if you’d like to see some of this in action.

Background:

Kubernetes is a container cluster manager. It groups containers into pods, and those pods get scheduled to run on a certain machine in the cluster. We’ll be talking about Docker containers in this article, although there are lots of great new technologies such as nspawn.

An advantage of using Kubernetes is that it was created by a team at Google, which has a lot of experience running containers. A lot of smart folks (including many from Red Hat) are working on this project too! It is becoming the foundation for other software such as OpenShift v3. Google has released a paper on their earlier work (Borg), which is interesting, but lacks many details, and (unsurprisingly) no source code is present.

Some big disadvantages include the requirement of a CLA to contribute to the project, and the lack of good documentation and articles about it. Kubernetes itself can’t yet be decentralized, but this might change in the future. It’s (currently) very difficult to construct the foo.json files required to build an application.

While the project is open source (ALv2), I’ve gotten the feeling that Google has a pretty strong hold on the project and has some changes to make before the community really trusts them. This can be a learning experience for any company where proprietary software is the culture. I wish them well on their journey!

Oh-My-Vagrant integration:

To deploy a Kubernetes cluster with omv, the kubernetes variable needs to be set to something meaningful. It’s also recommended that you boot up a minimum of three machines. Here is an excerpt from an example omv.yaml config:

---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.1-docker
:sync: rsync
:extern:
- type: git
  system: docker
  repository: https://github.com/purpleidea/docker-simple1
  directory: docker-simple1
- type: git
  system: kubernetes
  repository: https://github.com/purpleidea/kube-simple1
  directory: kube-simple1
:puppet: false
:docker: []
:kubernetes:
  applications:
  - kube-simple1/simple1.json
:vms: []
:namespace: omv
:count: 3

In the above example, you can see that we’ve only listed one Kubernetes application, but an arbitrary number can be included. The kubernetes variable can also accept a key named master if you’d like to choose which of your vms should be the primary. If you don’t specify this, the first vagrant machine will be chosen.

In the list of applications, instead of only specifying the .json file, you can instead specify a dictionary of key/value pairs. The .json key (file) is required, but you can also specify additional keys, such as the boolean key roll. When true, it will cause a rolling update to occur instead of a “normal” update.
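
For example, a hypothetical excerpt using the dictionary form might look like this (the file and roll key names follow the description above; treat this as a sketch, not verified syntax):

:kubernetes:
  applications:
  - file: kube-simple1/simple1.json
    roll: true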

If you’re interested to see what needs to be done to set up Kubernetes, the bulk of the work was done by me in 1f26, but figuring out the manual steps was only possible thanks to the work of my hacker friends: scollier and eparis.

Due to the power of the omv project, when you vagrant up, the necessary docker and Kubernetes projects listed in the extern variable will be automatically cloned and pulled into your omv environment.

Screencast:

You’re probably due for a screencast (~15 min). Have a watch, and then if you need review, go back and read what I’ve written above.

https://download.gluster.org/pub/gluster/purpleidea/screencasts/oh-my-vagrant-kubernetes-screencast.ogv

(Thanks to the Gluster community for generously hosting this video!)

Final thoughts:

I haven’t talked about networking, or actually building applications.

Container networking might be a lot easier if you use an overlay network like flannel. Unfortunately, this isn’t yet built into Oh-My-Vagrant, but it is on the list of feature requests if someone shows some interest.

Building useful applications is harder. In my screencast, you’ll see where to put the code, and how to iterate on it, but not what kind of code to write, or architecturally how your multi-container applications should work. Unfortunately this is out of scope for today’s article! My goal was to make it easy for you to focus on that topic, instead of having to figure out how to build the infrastructure. Hopefully Oh-My-Vagrant helps you accomplish that!

Happy hacking,

James


Docker containers in Oh-My-Vagrant

The Oh-My-Vagrant (omv) project is an easy way to bootstrap a development environment. It is particularly useful for spinning up an arbitrary number of virtual machines in Vagrant without writing ruby code. For multi-machine container development, omv helps this happen more naturally.

Oh-My-Vagrant can be very useful as a docker application development environment. I’ve made a quick (<9 min) screencast demoing this topic. Please have a look:

https://download.gluster.org/pub/gluster/purpleidea/screencasts/oh-my-vagrant-docker-screencast.ogv

If you watched the screencast, you should have a good overview of what’s possible. Let’s discuss some of these features in more detail.

Pull an arbitrary list of docker images:

If you use an image that was baked with vagrant-builder, you can make sure that an arbitrary list of docker images will be pre-cached into the base image so that you don’t have to wait for the slow docker registry every time you boot up a development vm.

This is easily seen in the CentOS-7.1 image definition file. Here’s an excerpt:

VERSION='centos-7.1'
POSTFIX='docker'
SIZE='40'
DOCKER='centos fedora'		# list of docker images to include

The GlusterFS community generously hosts a copy of this image here.

If you’d like to add images to a vm, you can list the images to pull in the docker omv.yaml variable:

---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.1-docker
:docker:
- ubuntu
- busybox
:count: 1
:vms: []

This key is also available in the vms array.

Automatic docker builds:

If you have a Dockerfile in a vagrant/docker/*/ folder, then it will get automatically added to the running vagrant vm, and built every time you run a vagrant up. If the machine is already running, and you’d like to rebuild it from your local working directory, you can run: vagrant rsync && vagrant provision.
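
Concretely, that means a layout like the following is all you need. The app name here is hypothetical, and the Dockerfile content mirrors the simple example that gets built below:

$ mkdir -p vagrant/docker/myapp
$ cat > vagrant/docker/myapp/Dockerfile <<'EOF'
FROM fedora
ENTRYPOINT python -m SimpleHTTPServer
EXPOSE 8000
EOF
$ vagrant rsync && vagrant provision    # rebuild it on an already-running vm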

Automatic docker environments:

Building and defining docker applications can be a tricky process, particularly because the techniques are still quite new to developers. With Oh-My-Vagrant, this process is simplified for container developers because you can build an enhanced omv.yaml file which defines your app for you:

---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0-docker
:extern:
- type: git
  system: docker
  repository: https://github.com/purpleidea/docker-simple1
  directory: simple-app1
:docker: []
:vms: []
:count: 3

If you list multiple git repos in your omv.yaml file, they will be automatically pulled down and built for you. An example of the above running would look similar to this:

$ time vup omv1
Cloning into 'simple-app1'...
remote: Counting objects: 6, done.
remote: Total 6 (delta 0), reused 0 (delta 0), pack-reused 6
Unpacking objects: 100% (6/6), done.
Checking connectivity... done.

Bringing machine 'omv1' up with 'libvirt' provider...
==> omv1: Creating image (snapshot of base box volume).
==> omv1: Creating domain with the following settings...
==> omv1:  -- Name:              omv_omv1
==> omv1:  -- Domain type:       kvm
==> omv1:  -- Cpus:              1
==> omv1:  -- Memory:            512M
==> omv1:  -- Base box:          centos-7.0-docker
==> omv1:  -- Storage pool:      default
==> omv1:  -- Image:             /var/lib/libvirt/images/omv_omv1.img
==> omv1:  -- Volume Cache:      default
==> omv1:  -- Kernel:            
==> omv1:  -- Initrd:            
==> omv1:  -- Graphics Type:     vnc
==> omv1:  -- Graphics Port:     5900
==> omv1:  -- Graphics IP:       127.0.0.1
==> omv1:  -- Graphics Password: Not defined
==> omv1:  -- Video Type:        cirrus
==> omv1:  -- Video VRAM:        9216
==> omv1:  -- Command line : 
==> omv1: Starting domain.
==> omv1: Waiting for domain to get an IP address...
==> omv1: Waiting for SSH to become available...
==> omv1: Starting domain.
==> omv1: Waiting for domain to get an IP address...
==> omv1: Waiting for SSH to become available...
==> omv1: Creating shared folders metadata...
==> omv1: Setting hostname...
==> omv1: Rsyncing folder: /home/james/code/oh-my-vagrant/vagrant/ => /vagrant
==> omv1: Configuring and enabling network interfaces...
==> omv1: Running provisioner: shell...
    omv1: Running: inline script
==> omv1: Running provisioner: docker...
    omv1: Configuring Docker to autostart containers...
==> omv1: Running provisioner: docker...
    omv1: Configuring Docker to autostart containers...
==> omv1: Building Docker images...
==> omv1: -- Path: /vagrant/docker/simple-app1
==> omv1: Sending build context to Docker daemon 54.27 kB
==> omv1: Sending build context to Docker daemon 
==> omv1: Step 0 : FROM fedora
==> omv1:  ---> 834629358fe2
==> omv1: Step 1 : MAINTAINER James Shubin <james@shubin.ca>
==> omv1:  ---> Running in 2afded16eec7
==> omv1:  ---> a7baf4784f57
==> omv1: Removing intermediate container 2afded16eec7
==> omv1: Step 2 : RUN echo Hello and welcome to the Technical Blog of James > README
==> omv1:  ---> Running in 709b9dc66e9b
==> omv1:  ---> b955154474f4
==> omv1: Removing intermediate container 709b9dc66e9b
==> omv1: Step 3 : ENTRYPOINT python -m SimpleHTTPServer
==> omv1:  ---> Running in 76840da9e963
==> omv1:  ---> b333c179dd56
==> omv1: Removing intermediate container 76840da9e963
==> omv1: Step 4 : EXPOSE 8000
==> omv1:  ---> Running in ebf83f08328e
==> omv1:  ---> f13049706668
==> omv1: Removing intermediate container ebf83f08328e
==> omv1: Successfully built f13049706668

real	1m12.221s
user	0m5.923s
sys	0m0.932s

All that happened in about a minute!

Conclusion:

I hope these tools help. If you’re following my git commits, you’ll notice that there are some new features I haven’t blogged about yet. Kubernetes integration exists, so please have a look, and hopefully I’ll have some screencasts and blog posts about it shortly.

Happy hacking,

James