A super privileged Puppet container

In this new crazy world of containers and immutable hosts, one might still want to run previous generation software such as Puppet on a current generation Atomic host. This article will explain how you can do that, and offer some proof of concept code.

An Atomic host doesn’t provide a yum or dnf command, because its software is pre-baked into a read-only /usr/ partition. To “install” (that is, to use) additional software, it usually needs to be distributed and run as a container.

The Dockerfile that describes the docker container we will build has two important sections:

ENV FACTER_fqdn=localhost
ENTRYPOINT ["puppet", "apply"]

The ENTRYPOINT instruction causes docker to run the puppet command that we built into the container, passing along whatever arguments the container is run with. The ENV instruction suppresses some facter errors, since facter isn’t running on the host. The rest of the file contains boilerplate and other build/run dependencies.
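
For context, here is roughly what the complete Dockerfile looks like; this is a sketch reconstructed from the build output shown below, not a verbatim copy from the repository:

FROM centos:7
MAINTAINER James Shubin <james@shubin.ca>
RUN echo Hello from purpleidea and aweiteka > README

# the Puppet Labs yum repository and its signing key
ADD el7-puppet.repo /etc/yum.repos.d/
ADD RPM-GPG-KEY-puppetlabs /etc/pki/rpm-gpg/

# puppet itself, plus the hostname utility
RUN yum install -y puppet
RUN yum install -y hostname

# a helper script and a test manifest
ADD run.sh /
ADD test.pp /

# suppress facter fqdn errors, since facter isn't running on the host
ENV FACTER_fqdn=localhost

# run `puppet apply` with whatever arguments the container is given
ENTRYPOINT ["puppet", "apply"]
USER root

# label consumed by the atomic tool on Atomic hosts
LABEL INSTALL="docker run --rm -it --privileged -v /etc:/etc -v /var:/var -v /run:/run --net=host IMAGE"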

You can build this container yourself manually, or if you are an Oh-My-Vagrant user, then you can use the supplied omv.yaml file to build the container automatically. A build with Oh-My-Vagrant looks like this:

$ time vup omv1
Bringing machine 'omv1' up with 'libvirt' provider...
==> omv1: Creating image (snapshot of base box volume).
==> omv1: Creating domain with the following settings...
==> omv1:  -- Name:              omv_omv1
==> omv1:  -- Domain type:       kvm
==> omv1:  -- Cpus:              1
==> omv1:  -- Memory:            512M
==> omv1:  -- Base box:          centos-7.1-docker
==> omv1:  -- Storage pool:      default
==> omv1:  -- Image:             /var/lib/libvirt/images/omv_omv1.img
==> omv1:  -- Volume Cache:      default
==> omv1:  -- Kernel:            
==> omv1:  -- Initrd:            
==> omv1:  -- Graphics Type:     vnc
==> omv1:  -- Graphics Port:     5900
==> omv1:  -- Graphics IP:       127.0.0.1
==> omv1:  -- Graphics Password: Not defined
==> omv1:  -- Video Type:        cirrus
==> omv1:  -- Video VRAM:        9216
==> omv1:  -- Keymap:            en-us
==> omv1:  -- Command line : 
==> omv1: Starting domain.
==> omv1: Waiting for domain to get an IP address...
==> omv1: Waiting for SSH to become available...
==> omv1: Starting domain.
==> omv1: Waiting for domain to get an IP address...
==> omv1: Waiting for SSH to become available...
==> omv1: Creating shared folders metadata...
==> omv1: Setting hostname...
==> omv1: Rsyncing folder: /home/james/code/oh-my-vagrant/vagrant/ => /vagrant
==> omv1: Configuring and enabling network interfaces...
==> omv1: Updating /etc/hosts file on active guest machines...
==> omv1: Running provisioner: shell...
    omv1: Running: inline script
==> omv1: Running provisioner: shell...
    omv1: Running: inline script
==> omv1: Running provisioner: docker...
    omv1: Configuring Docker to autostart containers...
==> omv1: Running provisioner: docker...
    omv1: Configuring Docker to autostart containers...
==> omv1: Building Docker images...
==> omv1: -- Path: /vagrant/docker/spc-puppet-apply
==> omv1: Sending build context to Docker daemon 122.9 kB
==> omv1: Sending build context to Docker daemon 
==> omv1: Step 0 : FROM centos:7
==> omv1:  ---> 0114405f9ff1
==> omv1: Step 1 : MAINTAINER James Shubin <james@shubin.ca>
==> omv1:  ---> Running in 2dbdc37494e9
==> omv1:  ---> e154b3cfed7f
==> omv1: Removing intermediate container 2dbdc37494e9
==> omv1: Step 2 : RUN echo Hello from purpleidea and aweiteka > README
==> omv1:  ---> Running in 97475154152f
==> omv1:  ---> e4728d71caac
==> omv1: Removing intermediate container 97475154152f
==> omv1: Step 3 : ADD el7-puppet.repo /etc/yum.repos.d/
==> omv1:  ---> c591e0a9a54d
==> omv1: Removing intermediate container 24e55893313c
==> omv1: Step 4 : ADD RPM-GPG-KEY-puppetlabs /etc/pki/rpm-gpg/
==> omv1:  ---> 223549fbf661
==> omv1: Removing intermediate container 2bceb998f1c3
==> omv1: Step 5 : RUN yum install -y puppet
==> omv1:  ---> Running in e44c9cf55dc5
==> omv1: Loaded plugins: fastestmirror
==> omv1: Determining fastest mirrors
==> omv1:  * base: centos.mirror.vexxhost.com
==> omv1:  * extras: mirror.gpmidi.net
==> omv1:  * updates: centos.mirror.vexxhost.com
==> omv1: Resolving Dependencies
==> omv1: --> Running transaction check
==> omv1: ---> Package puppet.noarch 0:3.7.5-1.el7 will be installed
==> omv1: --> Processing Dependency: ruby >= 1.8 for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: facter >= 1:1.7.0 for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: ruby >= 1.8.7 for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: hiera >= 1.0.0 for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: rubygem-json for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: libselinux-utils for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: ruby-augeas for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: ruby(selinux) for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: /usr/bin/ruby for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Processing Dependency: ruby-shadow for package: puppet-3.7.5-1.el7.noarch
==> omv1: --> Running transaction check
==> omv1: ---> Package facter.x86_64 1:2.4.3-1.el7 will be installed
==> omv1: --> Processing Dependency: net-tools for package: 1:facter-2.4.3-1.el7.x86_64
==> omv1: --> Processing Dependency: virt-what for package: 1:facter-2.4.3-1.el7.x86_64
==> omv1: --> Processing Dependency: pciutils for package: 1:facter-2.4.3-1.el7.x86_64
==> omv1: --> Processing Dependency: dmidecode for package: 1:facter-2.4.3-1.el7.x86_64
==> omv1: ---> Package hiera.noarch 0:1.3.4-1.el7 will be installed
==> omv1: ---> Package libselinux-ruby.x86_64 0:2.2.2-6.el7 will be installed
==> omv1: ---> Package libselinux-utils.x86_64 0:2.2.2-6.el7 will be installed
==> omv1: ---> Package ruby.x86_64 0:2.0.0.598-24.el7 will be installed
==> omv1: --> Processing Dependency: ruby-libs(x86-64) = 2.0.0.598-24.el7 for package: ruby-2.0.0.598-24.el7.x86_64
==> omv1: --> Processing Dependency: rubygem(bigdecimal) >= 1.2.0 for package: ruby-2.0.0.598-24.el7.x86_64
==> omv1: --> Processing Dependency: ruby(rubygems) >= 2.0.14 for package: ruby-2.0.0.598-24.el7.x86_64
==> omv1: --> Processing Dependency: libruby.so.2.0()(64bit) for package: ruby-2.0.0.598-24.el7.x86_64
==> omv1: ---> Package ruby-augeas.x86_64 0:0.4.1-3.el7 will be installed
==> omv1: --> Processing Dependency: augeas-libs >= 0.8.0 for package: ruby-augeas-0.4.1-3.el7.x86_64
==> omv1: --> Processing Dependency: libaugeas.so.0(AUGEAS_0.8.0)(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
==> omv1: --> Processing Dependency: libaugeas.so.0(AUGEAS_0.1.0)(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
==> omv1: --> Processing Dependency: libaugeas.so.0(AUGEAS_0.12.0)(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
==> omv1: --> Processing Dependency: libaugeas.so.0(AUGEAS_0.10.0)(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
==> omv1: --> Processing Dependency: libaugeas.so.0(AUGEAS_0.11.0)(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
==> omv1: --> Processing Dependency: libaugeas.so.0()(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
==> omv1: ---> Package ruby-shadow.x86_64 1:2.2.0-2.el7 will be installed
==> omv1: ---> Package rubygem-json.x86_64 0:1.7.7-24.el7 will be installed
==> omv1: --> Running transaction check
==> omv1: ---> Package augeas-libs.x86_64 0:1.1.0-17.el7 will be installed
==> omv1: ---> Package dmidecode.x86_64 1:2.12-5.el7 will be installed
==> omv1: ---> Package net-tools.x86_64 0:2.0-0.17.20131004git.el7 will be installed
==> omv1: ---> Package pciutils.x86_64 0:3.2.1-4.el7 will be installed
==> omv1: --> Processing Dependency: pciutils-libs = 3.2.1-4.el7 for package: pciutils-3.2.1-4.el7.x86_64
==> omv1: --> Processing Dependency: libpci.so.3(LIBPCI_3.2)(64bit) for package: pciutils-3.2.1-4.el7.x86_64
==> omv1: --> Processing Dependency: libpci.so.3(LIBPCI_3.1)(64bit) for package: pciutils-3.2.1-4.el7.x86_64
==> omv1: --> Processing Dependency: libpci.so.3(LIBPCI_3.0)(64bit) for package: pciutils-3.2.1-4.el7.x86_64
==> omv1: --> Processing Dependency: hwdata for package: pciutils-3.2.1-4.el7.x86_64
==> omv1: --> Processing Dependency: libpci.so.3()(64bit) for package: pciutils-3.2.1-4.el7.x86_64
==> omv1: ---> Package ruby-libs.x86_64 0:2.0.0.598-24.el7 will be installed
==> omv1: ---> Package rubygem-bigdecimal.x86_64 0:1.2.0-24.el7 will be installed
==> omv1: ---> Package rubygems.noarch 0:2.0.14-24.el7 will be installed
==> omv1: --> Processing Dependency: rubygem(rdoc) >= 4.0.0 for package: rubygems-2.0.14-24.el7.noarch
==> omv1: --> Processing Dependency: rubygem(psych) >= 2.0.0 for package: rubygems-2.0.14-24.el7.noarch
==> omv1: --> Processing Dependency: rubygem(io-console) >= 0.4.2 for package: rubygems-2.0.14-24.el7.noarch
==> omv1: ---> Package virt-what.x86_64 0:1.13-5.el7 will be installed
==> omv1: --> Running transaction check
==> omv1: ---> Package hwdata.noarch 0:0.252-7.5.el7 will be installed
==> omv1: ---> Package pciutils-libs.x86_64 0:3.2.1-4.el7 will be installed
==> omv1: ---> Package rubygem-io-console.x86_64 0:0.4.2-24.el7 will be installed
==> omv1: ---> Package rubygem-psych.x86_64 0:2.0.0-24.el7 will be installed
==> omv1: --> Processing Dependency: libyaml-0.so.2()(64bit) for package: rubygem-psych-2.0.0-24.el7.x86_64
==> omv1: ---> Package rubygem-rdoc.noarch 0:4.0.0-24.el7 will be installed
==> omv1: --> Processing Dependency: ruby(irb) = 2.0.0.598 for package: rubygem-rdoc-4.0.0-24.el7.noarch
==> omv1: --> Running transaction check
==> omv1: ---> Package libyaml.x86_64 0:0.1.4-11.el7_0 will be installed
==> omv1: ---> Package ruby-irb.noarch 0:2.0.0.598-24.el7 will be installed
==> omv1: --> Finished Dependency Resolution
==> omv1: 
==> omv1: Dependencies Resolved
==> omv1: 
==> omv1: ================================================================================
==> omv1:  Package            Arch   Version                    Repository           Size
==> omv1: ================================================================================
==> omv1: Installing:
==> omv1:  puppet             noarch 3.7.5-1.el7                puppetlabs-products 1.5 M
==> omv1: Installing for dependencies:
==> omv1:  augeas-libs        x86_64 1.1.0-17.el7               base                332 k
==> omv1:  dmidecode          x86_64 1:2.12-5.el7               base                 78 k
==> omv1:  facter             x86_64 1:2.4.3-1.el7              puppetlabs-products  98 k
==> omv1:  hiera              noarch 1.3.4-1.el7                puppetlabs-products  23 k
==> omv1:  hwdata             noarch 0.252-7.5.el7              base                2.0 M
==> omv1:  libselinux-ruby    x86_64 2.2.2-6.el7                base                127 k
==> omv1:  libselinux-utils   x86_64 2.2.2-6.el7                base                135 k
==> omv1:  libyaml            x86_64 0.1.4-11.el7_0             base                 55 k
==> omv1:  net-tools          x86_64 2.0-0.17.20131004git.el7   base                304 k
==> omv1:  pciutils           x86_64 3.2.1-4.el7                base                 90 k
==> omv1:  pciutils-libs      x86_64 3.2.1-4.el7                base                 45 k
==> omv1:  ruby               x86_64 2.0.0.598-24.el7           base                 67 k
==> omv1:  ruby-augeas        x86_64 0.4.1-3.el7                puppetlabs-deps      22 k
==> omv1:  ruby-irb           noarch 2.0.0.598-24.el7           base                 88 k
==> omv1:  ruby-libs          x86_64 2.0.0.598-24.el7           base                2.8 M
==> omv1:  ruby-shadow        x86_64 1:2.2.0-2.el7              puppetlabs-deps      14 k
==> omv1:  rubygem-bigdecimal x86_64 1.2.0-24.el7               base                 79 k
==> omv1:  rubygem-io-console x86_64 0.4.2-24.el7               base                 50 k
==> omv1:  rubygem-json       x86_64 1.7.7-24.el7               base                 75 k
==> omv1:  rubygem-psych      x86_64 2.0.0-24.el7               base                 77 k
==> omv1:  rubygem-rdoc       noarch 4.0.0-24.el7               base                318 k
==> omv1:  rubygems           noarch 2.0.14-24.el7              base                212 k
==> omv1:  virt-what          x86_64 1.13-5.el7                 base                 26 k
==> omv1: 
==> omv1: Transaction Summary
==> omv1: ================================================================================
==> omv1: Install  1 Package (+23 Dependent packages)
==> omv1: 
==> omv1: Total download size: 8.6 M
==> omv1: Installed size: 27 M
==> omv1: Downloading packages:
==> omv1: warning: /var/cache/yum/x86_64/7/puppetlabs-products/packages/hiera-1.3.4-1.el7.noarch.rpm: Header V4 RSA/SHA512 Signature, key ID 4bd6ec30: NOKEY
==> omv1: 
==> omv1: Public key for hiera-1.3.4-1.el7.noarch.rpm is not installed
==> omv1: warning: /var/cache/yum/x86_64/7/base/packages/dmidecode-2.12-5.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
==> omv1: 
==> omv1: Public key for dmidecode-2.12-5.el7.x86_64.rpm is not installed
==> omv1: Public key for ruby-augeas-0.4.1-3.el7.x86_64.rpm is not installed
==> omv1: --------------------------------------------------------------------------------
==> omv1: Total                                              811 kB/s | 8.6 MB  00:10     
==> omv1: Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
==> omv1: Importing GPG key 0xF4A80EB5:
==> omv1:  Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
==> omv1:  Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
==> omv1:  Package    : centos-release-7-1.1503.el7.centos.2.8.x86_64 (@CentOS/$releasever)
==> omv1:  From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
==> omv1: 
==> omv1: Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
==> omv1: Importing GPG key 0x4BD6EC30:
==> omv1:  Userid     : "Puppet Labs Release Key (Puppet Labs Release Key) <info@puppetlabs.com>"
==> omv1:  Fingerprint: 47b3 20eb 4c7c 375a a9da e1a0 1054 b7a2 4bd6 ec30
==> omv1:  From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
==> omv1: 
==> omv1: Running transaction check
==> omv1: Running transaction test
==> omv1: Transaction test succeeded
==> omv1: Running transaction
==> omv1:   Installing : ruby-libs-2.0.0.598-24.el7.x86_64                           1/24
==> omv1:  
==> omv1:   Installing : 1:dmidecode-2.12-5.el7.x86_64                               2/24
==> omv1:  
==> omv1:   Installing : virt-what-1.13-5.el7.x86_64                                 3/24
==> omv1:  
==> omv1:   Installing : libyaml-0.1.4-11.el7_0.x86_64                               4/24
==> omv1:  
==> omv1:   Installing : rubygem-psych-2.0.0-24.el7.x86_64                           5/24
==> omv1:  
==> omv1:   Installing : rubygem-bigdecimal-1.2.0-24.el7.x86_64                      6/24
==> omv1:  
==> omv1:   Installing : rubygem-io-console-0.4.2-24.el7.x86_64                      7/24
==> omv1:  
==> omv1:   Installing : ruby-irb-2.0.0.598-24.el7.noarch                            8/24
==> omv1:  
==> omv1:   Installing : ruby-2.0.0.598-24.el7.x86_64                                9/24
==> omv1:  
==> omv1:   Installing : rubygems-2.0.14-24.el7.noarch                              10/24
==> omv1:  
==> omv1:   Installing : rubygem-json-1.7.7-24.el7.x86_64                           11/24
==> omv1:  
==> omv1:   Installing : rubygem-rdoc-4.0.0-24.el7.noarch                           12/24
==> omv1:  
==> omv1:   Installing : hiera-1.3.4-1.el7.noarch                                   13/24
==> omv1:  
==> omv1:   Installing : 1:ruby-shadow-2.2.0-2.el7.x86_64                           14/24
==> omv1:  
==> omv1:   Installing : hwdata-0.252-7.5.el7.noarch                                15/24
==> omv1:  
==> omv1:   Installing : libselinux-utils-2.2.2-6.el7.x86_64                        16/24
==> omv1:  
==> omv1:   Installing : pciutils-libs-3.2.1-4.el7.x86_64                           17/24
==> omv1:  
==> omv1:   Installing : pciutils-3.2.1-4.el7.x86_64                                18/24
==> omv1:  
==> omv1:   Installing : augeas-libs-1.1.0-17.el7.x86_64                            19/24
==> omv1:  
==> omv1:   Installing : ruby-augeas-0.4.1-3.el7.x86_64                             20/24
==> omv1:  
==> omv1:   Installing : net-tools-2.0-0.17.20131004git.el7.x86_64                  21/24
==> omv1:  
==> omv1:   Installing : 1:facter-2.4.3-1.el7.x86_64                                22/24
==> omv1:  
==> omv1:   Installing : libselinux-ruby-2.2.2-6.el7.x86_64                         23/24
==> omv1:  
==> omv1:   Installing : puppet-3.7.5-1.el7.noarch                                  24/24
==> omv1:  
==> omv1:   Verifying  : libselinux-ruby-2.2.2-6.el7.x86_64                          1/24
==> omv1:  
==> omv1:   Verifying  : rubygem-json-1.7.7-24.el7.x86_64                            2/24
==> omv1:  
==> omv1:   Verifying  : ruby-irb-2.0.0.598-24.el7.noarch                            3/24
==> omv1:  
==> omv1:   Verifying  : ruby-libs-2.0.0.598-24.el7.x86_64                           4/24
==> omv1:  
==> omv1:   Verifying  : net-tools-2.0-0.17.20131004git.el7.x86_64                   5/24
==> omv1:  
==> omv1:   Verifying  : augeas-libs-1.1.0-17.el7.x86_64                             6/24
==> omv1:  
==> omv1:   Verifying  : pciutils-libs-3.2.1-4.el7.x86_64                            7/24
==> omv1:  
==> omv1:   Verifying  : rubygem-psych-2.0.0-24.el7.x86_64                           8/24
==> omv1:  
==> omv1:   Verifying  : 1:facter-2.4.3-1.el7.x86_64                                 9/24
==> omv1:  
==> omv1:   Verifying  : pciutils-3.2.1-4.el7.x86_64                                10/24
==> omv1:  
==> omv1:   Verifying  : puppet-3.7.5-1.el7.noarch                                  11/24
==> omv1:  
==> omv1:   Verifying  : hiera-1.3.4-1.el7.noarch                                   12/24
==> omv1:  
==> omv1:   Verifying  : rubygem-rdoc-4.0.0-24.el7.noarch                           13/24
==> omv1:  
==> omv1:   Verifying  : virt-what-1.13-5.el7.x86_64                                14/24
==> omv1:  
==> omv1:   Verifying  : rubygems-2.0.14-24.el7.noarch                              15/24
==> omv1:  
==> omv1:   Verifying  : libselinux-utils-2.2.2-6.el7.x86_64                        16/24
==> omv1:  
==> omv1:   Verifying  : 1:ruby-shadow-2.2.0-2.el7.x86_64                           17/24
==> omv1:  
==> omv1:   Verifying  : rubygem-bigdecimal-1.2.0-24.el7.x86_64                     18/24
==> omv1:  
==> omv1:   Verifying  : 1:dmidecode-2.12-5.el7.x86_64                              19/24
==> omv1:  
==> omv1:   Verifying  : hwdata-0.252-7.5.el7.noarch                                20/24
==> omv1:  
==> omv1:   Verifying  : libyaml-0.1.4-11.el7_0.x86_64                              21/24
==> omv1:  
==> omv1:   Verifying  : rubygem-io-console-0.4.2-24.el7.x86_64                     22/24
==> omv1:  
==> omv1:   Verifying  : ruby-augeas-0.4.1-3.el7.x86_64                             23/24
==> omv1:  
==> omv1:   Verifying  : ruby-2.0.0.598-24.el7.x86_64                               24/24
==> omv1:  
==> omv1: 
==> omv1: Installed:
==> omv1:   puppet.noarch 0:3.7.5-1.el7                                                   
==> omv1: 
==> omv1: Dependency Installed:
==> omv1:   augeas-libs.x86_64 0:1.1.0-17.el7                                             
==> omv1:   dmidecode.x86_64 1:2.12-5.el7                                                 
==> omv1:   facter.x86_64 1:2.4.3-1.el7                                                   
==> omv1:   hiera.noarch 0:1.3.4-1.el7                                                    
==> omv1:   hwdata.noarch 0:0.252-7.5.el7                                                 
==> omv1:   libselinux-ruby.x86_64 0:2.2.2-6.el7                                          
==> omv1:   libselinux-utils.x86_64 0:2.2.2-6.el7                                         
==> omv1:   libyaml.x86_64 0:0.1.4-11.el7_0                                               
==> omv1:   net-tools.x86_64 0:2.0-0.17.20131004git.el7                                   
==> omv1:   pciutils.x86_64 0:3.2.1-4.el7                                                 
==> omv1:   pciutils-libs.x86_64 0:3.2.1-4.el7                                            
==> omv1:   ruby.x86_64 0:2.0.0.598-24.el7                                                
==> omv1:   ruby-augeas.x86_64 0:0.4.1-3.el7                                              
==> omv1:   ruby-irb.noarch 0:2.0.0.598-24.el7                                            
==> omv1:   ruby-libs.x86_64 0:2.0.0.598-24.el7                                           
==> omv1:   ruby-shadow.x86_64 1:2.2.0-2.el7                                              
==> omv1:   rubygem-bigdecimal.x86_64 0:1.2.0-24.el7                                      
==> omv1:   rubygem-io-console.x86_64 0:0.4.2-24.el7                                      
==> omv1:   rubygem-json.x86_64 0:1.7.7-24.el7                                            
==> omv1:   rubygem-psych.x86_64 0:2.0.0-24.el7                                           
==> omv1:   rubygem-rdoc.noarch 0:4.0.0-24.el7                                            
==> omv1:   rubygems.noarch 0:2.0.14-24.el7                                               
==> omv1:   virt-what.x86_64 0:1.13-5.el7                                                 
==> omv1: Complete!
==> omv1:  ---> bf169104271a
==> omv1: Removing intermediate container e44c9cf55dc5
==> omv1: Step 6 : RUN yum install -y hostname
==> omv1:  ---> Running in 6e5bd5a59223
==> omv1: Loaded plugins: fastestmirror
==> omv1: Loading mirror speeds from cached hostfile
==> omv1:  * base: centos.mirror.vexxhost.com
==> omv1:  * extras: mirror.gpmidi.net
==> omv1:  * updates: centos.mirror.vexxhost.com
==> omv1: Resolving Dependencies
==> omv1: --> Running transaction check
==> omv1: ---> Package hostname.x86_64 0:3.13-3.el7 will be installed
==> omv1: --> Finished Dependency Resolution
==> omv1: 
==> omv1: Dependencies Resolved
==> omv1: 
==> omv1: ================================================================================
==> omv1:  Package            Arch             Version               Repository      Size
==> omv1: ================================================================================
==> omv1: Installing:
==> omv1:  hostname           x86_64           3.13-3.el7            base            17 k
==> omv1: 
==> omv1: Transaction Summary
==> omv1: ================================================================================
==> omv1: Install  1 Package
==> omv1: 
==> omv1: Total download size: 17 k
==> omv1: Installed size: 19 k
==> omv1: Downloading packages:
==> omv1: Running transaction check
==> omv1: Running transaction test
==> omv1: Transaction test succeeded
==> omv1: Running transaction
==> omv1:   Installing : hostname-3.13-3.el7.x86_64                                   1/1
==> omv1:  
==> omv1:   Verifying  : hostname-3.13-3.el7.x86_64                                   1/1
==> omv1:  
==> omv1: 
==> omv1: Installed:
==> omv1:   hostname.x86_64 0:3.13-3.el7                                                  
==> omv1: 
==> omv1: Complete!
==> omv1:  ---> 2a20361f2d9d
==> omv1: Removing intermediate container 6e5bd5a59223
==> omv1: Step 7 : ADD run.sh /
==> omv1:  ---> 5493e62ac377
==> omv1: Removing intermediate container 84c8b7677f72
==> omv1: Step 8 : ADD test.pp /
==> omv1:  ---> 4f4d0023e612
==> omv1: Removing intermediate container 84cb691304a3
==> omv1: Step 9 : ENV FACTER_fqdn localhost
==> omv1:  ---> Running in 104fe3925dd6
==> omv1:  ---> 864c113d54b5
==> omv1: Removing intermediate container 104fe3925dd6
==> omv1: Step 10 : ENTRYPOINT puppet apply
==> omv1:  ---> Running in f6fe26141b19
==> omv1:  ---> cd19e4344bf1
==> omv1: Removing intermediate container f6fe26141b19
==> omv1: Step 11 : USER root
==> omv1:  ---> Running in 435752b9fb17
==> omv1:  ---> 86193e9815a8
==> omv1: Removing intermediate container 435752b9fb17
==> omv1: Step 12 : LABEL INSTALL docker run --rm -it --privileged -v /etc:/etc -v /var:/var -v /run:/run --net=host IMAGE
==> omv1:  ---> Running in d24f98a56936
==> omv1:  ---> 72f2dfbc4e07
==> omv1: Removing intermediate container d24f98a56936
==> omv1: Successfully built 72f2dfbc4e07

real    2m46.515s
user    0m10.036s
sys    0m1.678s

Once you’ve built the container, you should confirm its existence:

$ vscreen root@omv1
# docker images | grep spc
spc-puppet-apply    latest              72f2dfbc4e07        7 minutes ago       314.5 MB

The project repository tries to make your life easier, so it comes with a test.pp file that you can run to confirm puppet is working correctly. If you are using a regular host (not Atomic) you can run:

# mkdir /var/tmp/spc-puppet-apply/
# cp -a /vagrant/docker/spc-puppet-apply/test.pp /var/tmp/spc-puppet-apply/

or if you’re using an Atomic host, you can run:

# mkdir /var/tmp/spc-puppet-apply/
# cp -a /home/vagrant/sync/docker/spc-puppet-apply/test.pp /var/tmp/spc-puppet-apply/

since Oh-My-Vagrant can’t put files in /vagrant on an immutable Atomic host.
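
For reference, test.pp itself is just a trivial manifest; judging from the run output below, it is essentially a single notify resource, something like this sketch:

notify { 'hello':
  message => 'This is a puppet test!',
}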

Finally, run the application with this monster docker command:

# docker run --rm -it --privileged -v /etc:/etc -v /var:/var -v /run:/run --net=host spc-puppet-apply /var/tmp/spc-puppet-apply/test.pp
Notice: Compiled catalog for omv1.example.com in environment production in 0.02 seconds
Notice: This is a puppet test!

Notice: /Stage[main]/Main/Notify[hello]/message: defined 'message' as 'This is a puppet test!'
Notice: Finished catalog run in 0.06 seconds

Since remembering that monster each time you want to do a simple puppet run is a bit of a pain, the Atomic project provides labels which let you use the atomic run command instead.
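
For example, if the image also carried a RUN label (the build above only sets an INSTALL label, so treat this as a sketch rather than what is in the repository), it might look like:

LABEL RUN="docker run --rm -it --privileged -v /etc:/etc -v /var:/var -v /run:/run --net=host IMAGE"

The atomic tool substitutes the image name for IMAGE, so the invocation on the host would shrink to roughly atomic run spc-puppet-apply /var/tmp/spc-puppet-apply/test.pp (depending on the atomic version, the manifest path may need to be baked into the label itself).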

Future work:

Interested parties are encouraged to test this, find any issues, and become comfortable with the idea of applications running from within containers. If running puppet with a puppet master is desirable, an spc-puppet-agent would be needed.

Happy Hacking,

James

Kubernetes clusters with Oh-My-Vagrant

I’ve added the ability to deploy a Kubernetes cluster with Oh-My-Vagrant (omv). I’ve also built an automated developer experience so that you can test your Kubernetes powered app in minutes. If you want to redeploy a new version, or see how your app behaves during a rolling update, you can use omv to test this out in minutes! I’ve recorded a screencast (~15 min), if you’d like to see some of this in action.

Background:

Kubernetes is a container cluster manager. It groups containers into pods, and those pods get scheduled to run on a certain machine in the cluster. We’ll be talking about Docker containers in this article, although there are lots of great new technologies such as nspawn.

An advantage of using Kubernetes is that it was created by a team at Google, who has a lot of experience running containers. A lot of smart folks (including many from Red Hat) are working on this project too! It is becoming the foundation for other software such as OpenShift v3. Google has released a paper on their earlier work (Borg), which is interesting, but lacks many details, and (unsurprisingly) no source code is present.

Some big disadvantages include the requirement of a CLA to contribute to the project, and the lack of good documentation and articles about it. Kubernetes itself can’t yet be decentralized, but this might change in the future. It’s (currently) very difficult to construct the foo.json files required to build an application.

While the project is open source (ALv2), I’ve gotten the feeling that Google has a pretty strong hold on the project and has some changes to make before the community really trusts them. This can be a learning experience for any company where proprietary software is the culture. I wish them well on their journey!

Oh-My-Vagrant integration:

To deploy a Kubernetes cluster with omv, the kubernetes variable needs to be set to something meaningful. It’s also recommended that you boot up a minimum of three machines. Here is an excerpt from an example omv.yaml config:

---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.1-docker
:sync: rsync
:extern:
- type: git
  system: docker
  repository: https://github.com/purpleidea/docker-simple1
  directory: docker-simple1
- type: git
  system: kubernetes
  repository: https://github.com/purpleidea/kube-simple1
  directory: kube-simple1
:puppet: false
:docker: []
:kubernetes:
  applications:
  - kube-simple1/simple1.json
:vms: []
:namespace: omv
:count: 3

In the above example, you can see that we’ve only listed one Kubernetes application, but an arbitrary number can be included. The kubernetes variable can also accept a key named master if you’d like to choose which of your vms should be the primary. If you don’t specify this, the first vagrant machine will be chosen.

In the list of applications, instead of only specifying the .json file, you can specify a dictionary of key/value pairs. The .json key (file) is required, but you can also specify additional keys, such as the boolean key roll. When true, it will cause a rolling update to occur instead of a normal update.
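
Here is a hedged sketch of that dictionary form; the file and roll key names follow the description above, so treat it as illustrative rather than canonical:

:kubernetes:
  applications:
  - file: kube-simple1/simple1.json
    roll: true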

If you’re interested to see what needs to be done to set up Kubernetes, the bulk of the work was done by me in 1f26, but figuring out manual steps was only possible thanks to the work of my hacker friends: scollier and eparis.

Due to the power of the omv project, when you vagrant up, the necessary docker and Kubernetes projects listed in the extern variable will be automatically cloned and pulled into your omv environment.

Screencast:

You’re probably due for a screencast (~15 min). Have a watch, and then if you need review, go back and read what I’ve written above.

https://download.gluster.org/pub/gluster/purpleidea/screencasts/oh-my-vagrant-kubernetes-screencast.ogv

(Thanks to the Gluster community for generously hosting this video!)

Final thoughts:

I haven’t talked about networking, or actually building applications.

Container networking might be a lot easier if you use an overlay network like flannel. Unfortunately, this isn’t yet built into Oh-My-Vagrant, but it is on the list of feature requests if someone shows some interest.

Building useful applications is harder. In my screencast, you’ll see where to put the code, and how to iterate on it, but not what kind of code to write, or architecturally how your multi-container applications should work. Unfortunately this is out of scope for today’s article! My goal was to make it easy for you to focus on that topic, instead of having to figure out how to build the infrastructure. Hopefully Oh-My-Vagrant helps you accomplish that!

Happy hacking,

James

 

Docker containers in Oh-My-Vagrant

The Oh-My-Vagrant (omv) project is an easy way to bootstrap a development environment. It is particularly useful for spinning up an arbitrary number of virtual machines in Vagrant without writing ruby code. For multi-machine container development, omv can help make this happen more naturally.

Oh-My-Vagrant can be very useful as a docker application development environment. I’ve made a quick (<9min) screencast demoing this topic. Please have a look:

https://download.gluster.org/pub/gluster/purpleidea/screencasts/oh-my-vagrant-docker-screencast.ogv

If you watched the screencast, you should have a good overview of what’s possible. Let’s discuss some of these features in more detail.

Pull an arbitrary list of docker images:

If you use an image that was baked with vagrant-builder, you can make sure that an arbitrary list of docker images will be pre-cached into the base image so that you don’t have to wait for the slow docker registry every time you boot up a development vm.

You can see this in the CentOS-7.1 image definition file, seen here. Here’s an excerpt:

VERSION='centos-7.1'
POSTFIX='docker'
SIZE='40'
DOCKER='centos fedora'		# list of docker images to include

The GlusterFS community graciously hosts a copy of this image here.

If you’d like to add images to a vm, you can add a list of images to pull in the docker variable of your omv.yaml:

---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.1-docker
:docker:
- ubuntu
- busybox
:count: 1
:vms: []

This key is also available in the vms array.
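
For instance, a per-vm override might look roughly like this excerpt (web1 is a hypothetical machine name; the nested :docker: key takes the same list form as the top-level one):

:docker: []
:vms:
- :name: web1
  :docker:
  - ubuntu
  - busybox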

Automatic docker builds:

If you have a Dockerfile in a vagrant/docker/*/ folder, then it will get automatically added to the running vagrant vm, and built every time you run a vagrant up. If the machine is already running, and you’d like to rebuild it from your local working directory, you can run: vagrant rsync && vagrant provision.
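
The Dockerfile itself can be tiny; reconstructed from the simple-app1 build output further down in this post, it is roughly:

FROM fedora
MAINTAINER James Shubin <james@shubin.ca>
RUN echo Hello and welcome to the Technical Blog of James > README
# serve the current directory over http on port 8000
ENTRYPOINT python -m SimpleHTTPServer
EXPOSE 8000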

Automatic docker environments:

Building and defining docker applications can be a tricky process, particularly because the techniques are still quite new to developers. With Oh-My-Vagrant, this process is simplified for container developers because you can build an enhanced omv.yaml file which defines your app for you:

---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0-docker
:extern:
- type: git
  system: docker
  repository: https://github.com/purpleidea/docker-simple1
  directory: simple-app1
:docker: []
:vms: []
:count: 3

If you list multiple git repos in your omv.yaml file, they will be automatically pulled down and built for you. An example of the above running would look similar to this:

$ time vup omv1
Cloning into 'simple-app1'...
remote: Counting objects: 6, done.
remote: Total 6 (delta 0), reused 0 (delta 0), pack-reused 6
Unpacking objects: 100% (6/6), done.
Checking connectivity... done.

Bringing machine 'omv1' up with 'libvirt' provider...
==> omv1: Creating image (snapshot of base box volume).
==> omv1: Creating domain with the following settings...
==> omv1:  -- Name:              omv_omv1
==> omv1:  -- Domain type:       kvm
==> omv1:  -- Cpus:              1
==> omv1:  -- Memory:            512M
==> omv1:  -- Base box:          centos-7.0-docker
==> omv1:  -- Storage pool:      default
==> omv1:  -- Image:             /var/lib/libvirt/images/omv_omv1.img
==> omv1:  -- Volume Cache:      default
==> omv1:  -- Kernel:            
==> omv1:  -- Initrd:            
==> omv1:  -- Graphics Type:     vnc
==> omv1:  -- Graphics Port:     5900
==> omv1:  -- Graphics IP:       127.0.0.1
==> omv1:  -- Graphics Password: Not defined
==> omv1:  -- Video Type:        cirrus
==> omv1:  -- Video VRAM:        9216
==> omv1:  -- Command line : 
==> omv1: Starting domain.
==> omv1: Waiting for domain to get an IP address...
==> omv1: Waiting for SSH to become available...
==> omv1: Starting domain.
==> omv1: Waiting for domain to get an IP address...
==> omv1: Waiting for SSH to become available...
==> omv1: Creating shared folders metadata...
==> omv1: Setting hostname...
==> omv1: Rsyncing folder: /home/james/code/oh-my-vagrant/vagrant/ => /vagrant
==> omv1: Configuring and enabling network interfaces...
==> omv1: Running provisioner: shell...
    omv1: Running: inline script
==> omv1: Running provisioner: docker...
    omv1: Configuring Docker to autostart containers...
==> omv1: Running provisioner: docker...
    omv1: Configuring Docker to autostart containers...
==> omv1: Building Docker images...
==> omv1: -- Path: /vagrant/docker/simple-app1
==> omv1: Sending build context to Docker daemon 54.27 kB
==> omv1: Sending build context to Docker daemon 
==> omv1: Step 0 : FROM fedora
==> omv1:  ---> 834629358fe2
==> omv1: Step 1 : MAINTAINER James Shubin <james@shubin.ca>
==> omv1:  ---> Running in 2afded16eec7
==> omv1:  ---> a7baf4784f57
==> omv1: Removing intermediate container 2afded16eec7
==> omv1: Step 2 : RUN echo Hello and welcome to the Technical Blog of James > README
==> omv1:  ---> Running in 709b9dc66e9b
==> omv1:  ---> b955154474f4
==> omv1: Removing intermediate container 709b9dc66e9b
==> omv1: Step 3 : ENTRYPOINT python -m SimpleHTTPServer
==> omv1:  ---> Running in 76840da9e963
==> omv1:  ---> b333c179dd56
==> omv1: Removing intermediate container 76840da9e963
==> omv1: Step 4 : EXPOSE 8000
==> omv1:  ---> Running in ebf83f08328e
==> omv1:  ---> f13049706668
==> omv1: Removing intermediate container ebf83f08328e
==> omv1: Successfully built f13049706668

real	1m12.221s
user	0m5.923s
sys	0m0.932s

All that happened in about a minute!

Conclusion:

I hope these tools help. If you’re following my git commits, you’ll notice that there are some new features I haven’t blogged about yet. Kubernetes integration exists, so please have a look, and hopefully I’ll have some screencasts and blog posts about this shortly.

Happy hacking,

James

Sharing dev environments with Oh-My-Vagrant

With Oh-My-Vagrant (omv) you can set up a dev environment in seconds. (Read the omv introduction if you’ve never used it before!) Since everything is defined in a single omv.yaml file, it is easy to share your cluster prototype with a friend! The one missing feature was associating code with this config file. This is now possible! Let me show you how it works…

In the omv.yaml file there is an extern variable. It is a list of each external repository which you’d like to include. Each element in this list is a hash of key value pairs. Currently four are supported: type, system, repository, directory.

An example will help you visualize this:

---
:domain: example.com
:network: 192.168.123.0/24
:image: fedora-21
:extern:
- type: git
  system: ansible
  repository: https://github.com/eparis/kubernetes-ansible
  directory: kubernetes
:reallyrm: true

In this example, we list one external repository. It is of type git, it is intended for use with the ansible integration provided by omv, the repository is hosted by eparis, and we’ll store this in a local directory called kubernetes.

We currently only support git repositories, but patches for other systems are welcome. A few different “systems” are supported, including puppet, docker and kubernetes. They each integrate with omv, and, as a result, can pull code and modules into the appropriate places. Any repository path that is valid is acceptable (including local file paths), and lastly, the directory you choose is entirely up to you!
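
As a sketch, mixing systems in the same extern list might look like this (the puppet entry uses a hypothetical local path; the docker entry reuses a repository that appears elsewhere on this blog):

:extern:
- type: git
  system: puppet
  repository: /home/james/code/puppet-gluster
  directory: puppet-gluster
- type: git
  system: docker
  repository: https://github.com/purpleidea/docker-simple1
  directory: docker-simple1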

The most important part that I need to mention is the reallyrm variable. If this is set to true, and you remove a git repository from the list, omv will make sure that the local copy is removed too! Since some users might not expect this behaviour, it defaults to false, and shouldn’t bite you! If you’re not comfortable with it, don’t use it! I find it incredibly helpful.

Here’s a small screencast to show you some examples of this in action:

oh-my-vagrant-extern-screencast.ogv

I hope you enjoyed this. Please share and enjoy, and I’ll be back soon to explain some more of the features! Documentation patches are appreciated!

Happy hacking,

James

Building RHEL Vagrant Boxes with Vagrant-Builder

Vagrant is a great tool for development, but Red Hat Enterprise Linux (RHEL) customers have typically been left out, because it has been impossible to get RHEL boxes! It would be extremely elegant if hackers could quickly test and prototype their code on the same OS as they’re running in production.

Secondly, when hacking on projects that have a long initial setup phase (e.g. a long rpm install), it would be excellent if hackers could roll their own modified base boxes, so that certain common operations could be re-factored out into the base image.

This all changes today.

Please continue reading if you’d like to know how :)

Subscriptions:

In order to use RHEL, you first need a subscription. If you don’t already have one, go sign up… I’ll wait. You do have to pay money, but in return, you’re funding my salary (and many others) so that we can build you lots of great hacks.

Prerequisites:

I’ll be working through this whole process on a Fedora 21 laptop. It should probably work on different OS versions and flavours, but I haven’t tested it. Please test, and let me know your results! You’ll also need virt-install and virt-builder installed:

$ sudo yum install -y /usr/bin/virt-{install,builder}

Step one:

Login to https://access.redhat.com/ and check that you have a valid subscription available. This should look like this:

A view of my available subscriptions.

If everything looks good, you’ll need to download an ISO image of RHEL. First head to the downloads section and find the RHEL product:

A view of my available product downloads.

In the RHEL download section, you’ll find a number of variants. You want the RHEL 7.0 Binary DVD:

A view of the available RHEL downloads.

After it has finished downloading, verify the SHA-256 hash is correct, and continue to step two!

$ sha256sum rhel-server-7.0-x86_64-dvd.iso
85a9fedc2bf0fc825cc7817056aa00b3ea87d7e111e0cf8de77d3ba643f8646c  rhel-server-7.0-x86_64-dvd.iso

Step two:

Grab a copy of vagrant-builder:

$ git clone https://github.com/purpleidea/vagrant-builder
Cloning into 'vagrant-builder'...
[...]
Checking connectivity... done.

I’m pleased to announce that it now has some documentation! (Patches are welcome to improve it!)

Since we’re going to use it to build RHEL images, you’ll need to put your subscription manager credentials in ~/.vagrant-builder/auth.sh:

$ cat ~/.vagrant-builder/auth.sh
# these values are used by vagrant-builder
USERNAME='purpleidea@redhat.com' # replace with your access.redhat.com username
PASSWORD='hunter2'               # replace with your access.redhat.com password

This is a simple shell script that gets sourced, so you could instead replace the static values with a script that calls out to the GNOME Keyring. This is left as an exercise to the reader.
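
A minimal sketch of that approach, using libsecret’s secret-tool as the bridge to the GNOME Keyring (the service/user attribute names are arbitrary; use whatever you stored the secret under):

# ~/.vagrant-builder/auth.sh
USERNAME='purpleidea@redhat.com'   # replace with your access.redhat.com username
PASSWORD="$(secret-tool lookup service access.redhat.com user "${USERNAME}")"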

To build the image, we’ll be working in the v7/ directory. This directory supports OS families and versions that have a lot in common, including Fedora 20, Fedora 21, CentOS 7.0, and RHEL 7.0.

Put the downloaded RHEL ISO in the iso/ directory. To allow qemu to see this file, you’ll need to add some ACLs:

$ sudo -s # do this as root
$ cd /home/
$ getfacl james # james is my home directory
# file: james
# owner: james
# group: james
user::rwx
group::---
other::---
$ setfacl -m u:qemu:r-x james # this is the important line
$ getfacl james
# file: james
# owner: james
# group: james
user::rwx
user:qemu:r-x
group::---
mask::r-x
other::---

An unusually small /tmp directory might also be an issue. You’ll need at least 6GiB free, but a bit extra is a good idea. Check your free space first:

$ df -h /tmp
Filesystem Size Used Avail Use% Mounted on
tmpfs 1.9G 1.3M 1.9G 1% /tmp

Let’s increase this a bit:

$ sudo mount -o remount,size=8G /tmp
$ df -h /tmp
Filesystem Size Used Avail Use% Mounted on
tmpfs 8.0G 1.3M 8.0G 1% /tmp

You’re now ready to build an image…

Step three:

In the versions/ directory, you’ll see that I have provided a rhel-7.0-iso.sh script. You’ll need to run it from its parent directory. This will take a while, and will cause two sudo prompts, which are required for virt-install. One downside to this process is that your https://access.redhat.com/ password will be briefly shown in the virt-builder output. Patches to fix this are welcome!

$ pwd
/home/james/code/vagrant-builder/v7
$ time ./versions/rhel-7.0-iso.sh
[...]
real    38m49.777s
user    13m20.910s
sys     1m13.832s
$ echo $?
0

With any luck, this should eventually complete successfully. This uses your CPU’s virtualization instructions, so if they’re not enabled, it will be a lot slower. It also uses the network, which, in North America, means you’re in for a wait. Lastly, the xz compression utility will use a bunch of CPU when building the virt-builder base image. On my laptop, this whole process took about 30 minutes. The above run was done without an SSD and took a bit longer.

The good news is that most of the hard work is now done and won’t need to be repeated! If you want to see the fruits of your CPU labour, have a look in: ~/tmp/builder/rhel-7.0-iso/.

$ ls -lAhGs
total 4.1G
1.7G -rw-r--r--. 1 james 1.7G Feb 23 18:48 box.img
1.7G -rw-r--r--. 1 james  41G Feb 23 18:48 builder.img
 12K -rw-rw-r--. 1 james  10K Feb 23 18:11 docker.tar
4.0K -rw-rw-r--. 1 james  388 Feb 23 18:39 index
4.0K -rw-rw-r--. 1 james   64 Feb 23 18:11 metadata.json
652M -rw-rw-r--. 1 james 652M Feb 23 18:50 rhel-7.0-iso.box
200M -rw-r--r--. 1 james 200M Feb 23 18:28 rhel-7.0.xz

As you can see, we’ve produced a bunch of files. The rhel-7.0-iso.box is your RHEL 7.0 vagrant base box! Congratulations!

Step four:

If you haven’t ever installed vagrant, you’ll be pleased to know that as of last week, vagrant and vagrant-libvirt RPMs have hit Fedora 21! I started trying to convince the RPM wizards about a year ago, and we finally have something that is quite usable! Hopefully we’ll iterate on any packaging bugs, and keep this great work going! There are now only three things you need to do to get a working vagrant-libvirt setup on Fedora 21:

  1. $ yum install -y vagrant-libvirt
  2. Source this .bashrc add-on from: https://gist.github.com/purpleidea/8071962
  3. Add a vagrant.pkla file as mentioned here (a sketch is shown below)
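
For reference, the vagrant.pkla from step three is a small polkit local authority file; a sketch (dropped into /etc/polkit-1/localauthority/50-local.d/, with your own user name substituted) looks roughly like this:

[Allow james to manage libvirt]
Identity=unix-user:james
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes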

Now we’re in well-known vagrant territory. Adding the box into vagrant is as simple as:

$ vagrant box add rhel-7.0-iso.box --name rhel-7.0

Using the box effectively:

Having a base box is great, but having to manage subscription-manager manually isn’t fun in a DevOps environment. Enter Oh-My-Vagrant (omv). You can use omv to automatically register and unregister boxes! Edit the omv.yaml file so that the image variable refers to the base box you just built, enter your https://access.redhat.com/ username and password, and vagrant up away!

$ cat omv.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: rhel-7.0
:boxurlprefix: ''
:sync: rsync
:folder: ''
:extern: []
:puppet: false
:classes: []
:docker: false
:cachier: false
:vms: []
:namespace: omv
:count: 2
:username: 'purpleidea@redhat.com'
:password: 'hunter2'
:poolid: true
:repos: []
$ vs
Current machine states:

omv1                      not created (libvirt)
omv2                      not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

You might want to set repos to be:

['rhel-7-server-rpms', 'rhel-7-server-extras-rpms']

but it depends on what subscriptions you want or have available. If you’d like to store your credentials in an external file, you can do so like this:

$ cat ~/.config/oh-my-vagrant/auth.yaml
---
:username: purpleidea@redhat.com
:password: hunter2

Here’s an actual run to see the subscription-manager action:

$ vup omv1
[...]
==> omv1: The system has been registered with ID: 00112233-4455-6677-8899-aabbccddeeff
==> omv1: Installed Product Current Status:
==> omv1: Product Name: Red Hat Enterprise Linux Server
==> omv1: Status:       Subscribed
$ # the above lines show that your machine has been registered
$ vscreen root@omv1
[root@omv1 ~]# echo thanks purpleidea!
thanks purpleidea!
[root@omv1 ~]# exit

Make sure to unregister when you are permanently done with a machine, otherwise your subscriptions will be left idle. This happens automatically on vagrant destroy when using Oh-My-Vagrant:

$ vdestroy omv1 # make sure to unregister when you are done
Unlocking shell provisioning for: omv1...
Running 'subscription-manager unregister' on: omv1...
Connection to 192.168.121.26 closed.
System has been unregistered.
==> omv1: Removing domain...

Idempotence:

One interesting aspect of this build process is that it’s mostly idempotent. It’s able to do this because it uses GNU Make to ensure that only out of date steps or missing targets are run. As a result, if the build process fails part way through, you’ll only have to repeat the failed steps! This speeds up debugging and iterative development enormously!

To prove this to you, here is what a second run looks like (after the first successful run):

$ time ./versions/rhel-7.0-iso.sh 

real    0m0.029s
user    0m0.013s
sys    0m0.017s

As you can see, it completes almost instantly.

Variants:

To build a variant of the base image that we just built, create a versions/*.sh file, and modify the variables to add your changes in. If you start with a copy of the ~/tmp/builder/${VERSION}-${POSTFIX} folder, then you shouldn’t need to repeat the initial steps. Hint: btrfs is excellent at reflinking data, so you don’t unnecessarily store multiple copies!
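
As a sketch, a variant definition reuses the same sort of variables as the centos-7.1 image definition excerpted earlier; the file name and exact variable set here are illustrative:

# versions/rhel-7.0-docker.sh (hypothetical variant)
VERSION='rhel-7.0'
POSTFIX='docker'         # distinguishes this variant's output folder
SIZE='40'                # disk size, as in the centos-7.1 example
DOCKER='centos fedora'   # docker images to pre-cache into the base box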

Plumbing Pipeline:

What actually happens behind the scenes? Most of the magic happens in the Makefile. The relevant series of transforms is as follows:

  1. virt-install: install from iso
  2. virt-sysprep: remove unneeded junk
  3. virt-sparsify: make sparse
  4. xz --best: compress into builder format
  5. virt-builder: use builder to bake vagrant box
  6. qemu-img convert: convert to correct format
  7. tar -cvz: tar up into vagrant box format

There are some intermediate dependency steps that I didn’t mention, so feel free to explore the source.
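
To make the idempotence concrete, here is a heavily simplified sketch of the idea; the target names come from the output files listed above, but the dependencies and recipes are illustrative, not the project’s actual Makefile:

# every step's output is a file, so make re-runs only the steps whose output is missing or stale
rhel-7.0.xz: rhel-7.0.img
	# compress the sysprepped, sparsified disk into a virt-builder template
	xz --best --keep rhel-7.0.img
	mv rhel-7.0.img.xz rhel-7.0.xz

rhel-7.0-iso.box: box.img metadata.json
	# bundle the converted disk and its metadata (plus a Vagrantfile in practice) into the vagrant .box tarball
	tar -czvf rhel-7.0-iso.box metadata.json box.img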

Future work:

  • Some of the above steps in the pipeline are actually bundled under the same target. It’s not a huge issue, but it could be changed if someone feels strongly about it.
  • Virt-builder can’t run docker commands during build. This would be very useful for pre-populating images with docker containers.
  • Oh-My-Vagrant needs to have its DNS management switched to use vagrant-hostmanager instead of puppet resource commands.

Disclaimers:

While I expect you’ll love using these RHEL base boxes with Vagrant, the above builder methodology is currently not officially supported, and I can’t guarantee that the RHEL vagrant dev environments will be either. I’m putting this out there for the early (DevOps) adopters who want to hack on this and who didn’t want to invent their own build tool chain. If you do have issues, please leave a comment here, or submit a vagrant-builder issue.

Thanks:

Special thanks to Richard WM Jones and Pino Toscano for their great work on virt-builder that this is based on. Additional thanks to Randy Barlow for encouraging me to work on this. Thanks to Red Hat for continuing to pay my salary :)

Subscriptions?

If I’ve convinced you that you want some RHEL subscriptions, please go have a look, and please let Red Hat know that you appreciated this post and my work.

Happy Hacking!

James

UPDATE: I’ve tested that this process also works with the new RHEL 7.1 release!
UPDATE: I’ve tested that this process also works with the new RHEL 7.2 release!

Introducing: Oh My Vagrant!

If you’re a reader of my code or of this blog, it’s no secret that I hack on a lot of puppet and vagrant. Recently I’ve fooled around with a bit of docker, too. I realized that the vagrant environments I built for puppet-gluster and puppet-ipa needed to be generalized, and they needed new features too. Therefore…

Introducing: Oh My Vagrant!

Oh My Vagrant is an attempt to provide an easy to use development environment so that you can be up and hacking quickly, and focusing on the real devops problems. The README explains my choice of project name.

Prerequisites:

I use a Fedora 20 laptop with vagrant-libvirt. Efforts are underway to create an RPM of vagrant-libvirt, but in the meantime you’ll have to read: Vagrant on Fedora with libvirt (reprise). This should work with other distributions too, but I don’t test them very often. Please step up and help test :)

The bits:

First clone the oh-my-vagrant repository and look inside:

git clone --recursive https://github.com/purpleidea/oh-my-vagrant
cd oh-my-vagrant/vagrant/

The included Vagrantfile is the current heart of this project. You’re welcome to use it as a template and edit it directly, or you can use the facilities it provides. I’d recommend starting with the latter, which I’ll walk you through now.

Getting started:

Start by running vagrant status (vs) and taking a look at the vagrant.yaml file that appears.

james@computer:/oh-my-vagrant/vagrant$ ls
Dockerfile  puppet/  Vagrantfile
james@computer:/oh-my-vagrant/vagrant$ vs
Current machine states:

template1                 not created (libvirt)

The Libvirt domain is not created. Run `vagrant up` to create it.
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms: []
:namespace: template
:count: 1
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$

Here you’ll see the list of resultant machines that vagrant thinks are defined (currently just template1), and a bunch of different settings in YAML format. The values of these settings help define the vagrant environment that you’ll be hacking in.

Changing settings:

The settings exist so that your vagrant environment is dynamic and can be changed quickly. You can change the settings by editing the vagrant.yaml file. They will be used by vagrant when it runs. You can also change them at runtime with --vagrant-foo flags. Running a vagrant status will show you how vagrant currently sees the environment. Let’s change the number of machines that are defined. Note the location of the --vagrant-count flag and how it doesn’t work when positioned incorrectly.

james@computer:/oh-my-vagrant/vagrant$ vagrant status --vagrant-count=4
An invalid option was specified. The help for this command
is available below.

Usage: vagrant status [name]
    -h, --help                       Print this help
james@computer:/oh-my-vagrant/vagrant$ vagrant --vagrant-count=4 status
Current machine states:

template1                 not created (libvirt)
template2                 not created (libvirt)
template3                 not created (libvirt)
template4                 not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms: []
:namespace: template
:count: 4
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$

As you can see in the above example, changing the count variable to 4 causes vagrant to see a possible four machines in the vagrant environment. You can change as many of these parameters as you like at a time by using the --vagrant- flags, or you can edit the vagrant.yaml file. The latter is much easier and more expressive, in particular for expressing complex data types. The former is much more powerful when building one-liners, such as:

vagrant --vagrant-count=8 --vagrant-namespace=gluster up gluster{1..8}

which should bring up eight hosts in parallel, named gluster1 to gluster8.

Other VM’s:

Since one often wants to be more expressive in machine naming and heterogeneity of machine type, you can specify a list of machines to define in the vms array of the vagrant.yaml file. If you’d rather define these machines in the Vagrantfile itself, you can also set them up in the vms array defined there. It is empty by default, but it is easy to uncomment one of the many examples. These will be used as the defaults if nothing else overrides the selection in the vagrant.yaml file. I’ve uncommented a few to show you this functionality:

james@computer:/oh-my-vagrant/vagrant$ grep example[124] Vagrantfile 
    {:name => 'example1', :docker => true, :puppet => true, },    # example1
    {:name => 'example2', :docker => ['centos', 'fedora'], },    # example2
    {:name => 'example4', :image => 'centos-6', :puppet => true, },    # example4
james@computer:/oh-my-vagrant/vagrant$ rm vagrant.yaml # note that I remove the old settings
james@computer:/oh-my-vagrant/vagrant$ vs
Current machine states:

template1                 not created (libvirt)
example1                  not created (libvirt)
example2                  not created (libvirt)
example4                  not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms:
- :name: example1
  :docker: true
  :puppet: true
- :name: example2
  :docker:
  - centos
  - fedora
- :name: example4
  :image: centos-6
  :puppet: true
:namespace: template
:count: 1
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$ vim vagrant.yaml # edit vagrant.yaml file...
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms:
- :name: example1
  :docker: true
  :puppet: true
- :name: example4
  :image: centos-7.0
  :puppet: true
:namespace: template
:count: 1
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$ vs
Current machine states:

template1                 not created (libvirt)
example1                  not created (libvirt)
example4                  not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
james@computer:/oh-my-vagrant/vagrant$

The above output might seem a little long, but if you try these steps out in your terminal, you should get the hang of it fairly quickly. If you poke around in the Vagrantfile, you should see the format of the vms array. Each element in the array should be a dictionary, where the keys correspond to the flags you wish to set. Look at the examples if you need help with the formatting.

Other settings:

As you saw, other settings are available. There are a few notable ones that are worth mentioning. This will also help explain some of the other features that this Vagrantfile provides.

  • domain: This sets the domain part of each vm’s FQDN. The default is example.com, which should work for most environments, but you’re welcome to change this as you see fit.
  • network: This sets the network that is used for the vms. You should pick a network/cidr that doesn’t conflict with any other networks on your machine. This is particularly useful when you have multiple vagrant environments hosted off of the same laptop.
  • image: This is the default base image to use for each machine. It can be overridden per-machine in the vms list of dictionaries.
  • sync: This is the sync type used for vagrant. rsync is the default and works in all environments. If you’d prefer to fight with the nfs mounts, or try out 9p, both those options are available too.
  • puppet: This option enables or disables integration with puppet. It is possible to override this per machine. This functionality will be expanded in a future version of Oh My Vagrant.
  • docker: This option enables and lists the docker images to set up per vm. It is possible to override this per machine. This functionality will be expanded in a future version of Oh My Vagrant.
  • namespace: This sets the namespace that your Vagrantfile operates in. This value is used as a prefix for the numbered vms, as the libvirt network name, and as the primary puppet module to execute.

More on the docker option:

For now, if you specify a list of docker images, they will be automatically pulled into your vm environment. It is recommended that you pre-cache them in an existing base image to save bandwidth. Custom base vagrant images can easily be built with vagrant-builder, but this process is currently undocumented.

I’ll try to write up a post on this process if there are enough requests. To keep you busy in the meantime, I’ve published a CentOS 7 vagrant base image that includes docker images for CentOS and Fedora. It is being graciously hosted by the GlusterFS community.

What other magic does this all do?

There is a certain amount of magic glue that happens behind the scenes. Here’s a list of some of it:

  • Idempotent /etc/hosts based DNS
  • Easy docker base image installation
  • IP address calculations and assignment with ipaddr
  • Clever cleanup on ‘vagrant destroy’
  • Vagrant docker base image detection
  • Integration with Puppet

If you don’t understand what all of those mean, and you don’t want to go source diving, don’t worry about it! I will explain them in greater detail when it’s important, and hopefully for now everything “just works” and stays out of your way.

Future work:

There’s still a lot more that I have planned, and some parts of the Vagrantfile need clean up, but I figured I’d try and release this early so that you can get hacking right away. If it’s useful to you, please leave a comment and let me know.

Happy hacking,

James