Remote execution in mgmt

Bootstrapping a cluster from your laptop, or managing machines without needing to first set up a separate config management infrastructure are both very reasonable and fundamental asks. I was particularly inspired by Ansible’s agent-less remote execution model, but never wanted to build a centralized orchestrator. I soon realized that I could have my ice cream and eat it too.

Prior knowledge

If you haven’t read the earlier articles about mgmt, then I recommend you start with those, and then come back here. The first and fourth are essential if you’re going to make sense of this article.

Limitations of existing orchestrators

Current orchestrators have a few limitations.

  1. They can be a single point of failure
  2. They can have scaling issues
  3. They can’t respond instantaneously to node state changes (they poll)
  4. They can’t usually redistribute remote node run-time data between nodes

Despite these limitations, orchestration is still very useful because of the facilities it provides. Since these facilities are essential in a next generation design, I set about integrating these features, but with a novel twist.

Implementation, Usage and Design

Mgmt is written in golang, and that decision was no accident. One benefit is that it simplifies our remote execution model.

To use this mode you run mgmt with the --remote flag. Each use of the --remote argument points to a different remote graph to execute. Eventually this will be integrated with the DSL, but this plumbing is exposed for early adopters to play around with.

Startup (part one)

Each invocation of --remote causes mgmt to remotely connect over SSH to the target hosts. This happens in parallel, and runs up to --cconns simultaneous connections.
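
As a rough sketch of that connection limiting (not mgmt’s actual code; the host list and the connect function are placeholders), a buffered channel makes a simple counting semaphore in Go:

package main

import (
    "fmt"
    "sync"
)

// connect is a hypothetical stand-in for the real SSH dial and setup logic.
func connect(host string) {
    fmt.Println("connecting to:", host)
}

func main() {
    hosts := []string{"192.168.121.201", "192.168.121.202"} // example targets
    cconns := 2 // maximum simultaneous connections, as with --cconns

    semaphore := make(chan struct{}, cconns) // buffered channel as a counting semaphore
    var wg sync.WaitGroup
    for _, host := range hosts {
        wg.Add(1)
        go func(host string) {
            defer wg.Done()
            semaphore <- struct{}{}        // acquire a slot; blocks when cconns are busy
            defer func() { <-semaphore }() // release the slot when done
            connect(host)
        }(host)
    }
    wg.Wait() // wait for every remote connection to finish
}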

A temporary directory is made on the remote host, and the mgmt binary and graph are copied across the wire. Since mgmt compiles down to a single statically linked binary, transferring the software is simple. The binary is cached remotely to speed up future runs, unless you pass the --no-caching option.
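
If you’re curious what such a binary copy can look like, here is a minimal sketch using the golang.org/x/crypto/ssh and github.com/pkg/sftp packages. This is an illustration with assumed paths, not mgmt’s actual implementation:

package remote

import (
    "io"
    "os"

    "github.com/pkg/sftp"
    "golang.org/x/crypto/ssh"
)

// copyBinary pushes a local file to the remote host over SFTP and makes it
// executable. Error handling is abbreviated for brevity.
func copyBinary(sshClient *ssh.Client, localPath, remotePath string) error {
    sftpClient, err := sftp.NewClient(sshClient) // speak SFTP over the SSH connection
    if err != nil {
        return err
    }
    defer sftpClient.Close()

    src, err := os.Open(localPath) // e.g. the running mgmt binary itself
    if err != nil {
        return err
    }
    defer src.Close()

    dst, err := sftpClient.Create(remotePath) // e.g. /tmp/mgmt-XXXXXX/remote/mgmt
    if err != nil {
        return err
    }
    defer dst.Close()

    if _, err := io.Copy(dst, src); err != nil { // the actual transfer over the wire
        return err
    }
    return sftpClient.Chmod(remotePath, 0755) // so we can execute it remotely
}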

A TCP connection is tunnelled back over SSH to the originating host’s etcd server, which is embedded and running inside the initiating mgmt binary.
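
Conceptually this is the same as an ssh -R remote port forward. Here is a minimal sketch of the idea with golang.org/x/crypto/ssh (again an illustration, not mgmt’s actual code; the addresses are examples):

package remote

import (
    "io"
    "net"

    "golang.org/x/crypto/ssh"
)

// tunnel asks the remote sshd to listen on 127.0.0.1:2379 and proxies every
// connection it accepts back to the local embedded etcd server.
func tunnel(client *ssh.Client) error {
    listener, err := client.Listen("tcp", "127.0.0.1:2379") // listener lives on the remote host
    if err != nil {
        return err
    }
    defer listener.Close()
    for {
        conn, err := listener.Accept() // a remote etcd client dialled 127.0.0.1:2379
        if err != nil {
            return err
        }
        go func(conn net.Conn) {
            defer conn.Close()
            local, err := net.Dial("tcp", "127.0.0.1:2379") // our embedded etcd server
            if err != nil {
                return
            }
            defer local.Close()
            go io.Copy(local, conn) // shuttle bytes in both directions
            io.Copy(conn, local)
        }(conn)
    }
}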

Execution (part two)

The remote mgmt binary is now run! It wires itself up through the SSH tunnel so that its internal etcd client can connect to the etcd server on the initiating host. This is particularly powerful because remote hosts can now participate in resource exchanges as if they were part of a regular etcd backed mgmt cluster! They don’t connect directly to each other, but they can share runtime data, and only need an incoming SSH port open!

Closure (part three)

At this point mgmt can either keep running continuously, or it can close the connections and shut down.

In the former case, you can either remain attached over SSH, or you can disconnect from the child hosts and let this new cluster take on a new life and operate independently of the initiator.

In the latter case, the shutdown happens either at the operator’s request (via a ^C on the initiator) or when the cluster has simultaneously converged for a number of seconds.

This second possibility occurs when you run mgmt with the familiar --converged-timeout parameter. It is indeed clever enough to also work in this distributed fashion.
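
To sketch the idea: each host publishes its own convergence state into etcd, and the initiator only shuts things down once every host has reported converged for the full timeout window. With a recent etcd clientv3 package, that might look roughly like the following (the key layout here is hypothetical, and not necessarily what mgmt uses internally):

package converged

import (
    "context"

    clientv3 "go.etcd.io/etcd/client/v3"
)

// setConverged publishes this host's convergence state under a shared prefix.
func setConverged(cli *clientv3.Client, hostname string, converged bool) error {
    val := "false"
    if converged {
        val = "true"
    }
    _, err := cli.Put(context.TODO(), "/_mgmt/converged/"+hostname, val)
    return err
}

// isClusterConverged is true only when every host has reported converged.
func isClusterConverged(cli *clientv3.Client) (bool, error) {
    resp, err := cli.Get(context.TODO(), "/_mgmt/converged/", clientv3.WithPrefix())
    if err != nil {
        return false, err
    }
    for _, kv := range resp.Kvs {
        if string(kv.Value) != "true" {
            return false, nil // at least one host is still busy
        }
    }
    return len(resp.Kvs) > 0, nil
}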

Diagram

I’ve used my poor LibreOffice Draw skills to make a diagram. Hopefully this helps out my visual readers.

[diagram: remote execution]

If you can improve this diagram, please let me know!

Example

I find that using one or more vagrant virtual machines for the remote endpoints is the best way to test this out. In my case I use Oh-My-Vagrant to set up these machines, but the method you use is entirely up to you! Here’s a sample remote execution. Please note that I have omitted a number of lines for brevity.

james@hostname:~/code/mgmt$ ./mgmt run --remote examples/remote2a.yaml --remote examples/remote2b.yaml --tmp-prefix 
17:58:22 main.go:76: This is: mgmt, version: 0.0.5-3-g4b8ad3a
17:58:23 remote.go:596: Remote: Connect...
17:58:23 remote.go:607: Remote: Sftp...
17:58:23 remote.go:164: Remote: Self executable is: /home/james/code/gopath/src/github.com/purpleidea/mgmt/mgmt
17:58:23 remote.go:221: Remote: Remotely created: /tmp/mgmt-412078160/remote
17:58:23 remote.go:226: Remote: Remote path is: /tmp/mgmt-412078160/remote/mgmt
17:58:23 remote.go:221: Remote: Remotely created: /tmp/mgmt-412078160/remote
17:58:23 remote.go:226: Remote: Remote path is: /tmp/mgmt-412078160/remote/mgmt
17:58:23 remote.go:235: Remote: Copying binary, please be patient...
17:58:23 remote.go:235: Remote: Copying binary, please be patient...
17:58:24 remote.go:256: Remote: Copying graph definition...
17:58:24 remote.go:618: Remote: Tunnelling...
17:58:24 remote.go:630: Remote: Exec...
17:58:24 remote.go:510: Remote: Running: /tmp/mgmt-412078160/remote/mgmt run --hostname '192.168.121.201' --no-server --seeds 'http://127.0.0.1:2379' --file '/tmp/mgmt-412078160/remote/remote2a.yaml' --depth 1
17:58:24 etcd.go:2088: Etcd: Watch: Path: /_mgmt/exported/
17:58:24 main.go:255: Main: Waiting...
17:58:24 remote.go:256: Remote: Copying graph definition...
17:58:24 remote.go:618: Remote: Tunnelling...
17:58:24 remote.go:630: Remote: Exec...
17:58:24 remote.go:510: Remote: Running: /tmp/mgmt-412078160/remote/mgmt run --hostname '192.168.121.202' --no-server --seeds 'http://127.0.0.1:2379' --file '/tmp/mgmt-412078160/remote/remote2b.yaml' --depth 1
17:58:24 etcd.go:2088: Etcd: Watch: Path: /_mgmt/exported/
17:58:24 main.go:291: Config: Parse failure
17:58:24 main.go:255: Main: Waiting...
^C17:58:48 main.go:62: Interrupted by ^C
17:58:48 main.go:397: Destroy...
17:58:48 remote.go:532: Remote: Output...
|    17:58:23 main.go:76: This is: mgmt, version: 0.0.5-3-g4b8ad3a
|    17:58:47 main.go:419: Goodbye!
17:58:48 remote.go:636: Remote: Done!
17:58:48 remote.go:532: Remote: Output...
|    17:58:24 main.go:76: This is: mgmt, version: 0.0.5-3-g4b8ad3a
|    17:58:48 main.go:419: Goodbye!
17:58:48 remote.go:636: Remote: Done!
17:58:48 main.go:419: Goodbye!

You can see how we kick off the remote executions, and how they are wired back through the tunnel. In this particular case we terminated the runs with a ^C.

The example configurations I used are available here and here. If you had a terminal open on the first remote machine, after about a second you would have seen:

[root@omv1 ~]# ls -d /tmp/file*  /tmp/mgmt*
/tmp/file1a  /tmp/file2a  /tmp/file2b  /tmp/mgmt-412078160
[root@omv1 ~]# cat /tmp/file*
i am file1a
i am file2a, exported from host a
i am file2b, exported from host b

You can see the remote execution artifacts, and that there was clearly data exchange. You can repeat this example with --converged-timeout=5 to automatically terminate after five seconds of cluster wide inactivity.

Live remote hacking

Since mgmt is event based, and graph structure configurations manifest themselves as event streams, you can actually edit the input configuration on the initiating machine, and as soon as the file is saved, it will instantly remotely propagate and apply the graph differential.

For this particular example, since we export and collect resources through the tunnelled SSH connections, editing the exported file will also cause both hosts to update that file on disk!

You’ll see this occurring with this message in the logs:

18:00:44 remote.go:973: Remote: Copied over new graph definition: examples/remote2b.yaml

While you might not necessarily want to use this functionality on a production machine, it will definitely make your interactive hacking sessions more useful, in particular because you never need to re-run parts of the graph which have already converged!
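
If you want a feel for the mechanism, here is roughly how the file watching half can be done in Go with the github.com/fsnotify/fsnotify package. This is a sketch only; copyGraph is a hypothetical stand-in for the code that re-sends the graph over the tunnel:

package main

import (
    "log"

    "github.com/fsnotify/fsnotify"
)

// copyGraph is a hypothetical stand-in for re-sending the graph over SFTP.
func copyGraph(path string) {
    log.Println("Remote: Copied over new graph definition:", path)
}

func main() {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        log.Fatal(err)
    }
    defer watcher.Close()
    if err := watcher.Add("examples/remote2b.yaml"); err != nil { // the input graph
        log.Fatal(err)
    }
    for {
        select {
        case event := <-watcher.Events:
            if event.Op&fsnotify.Write == fsnotify.Write { // the file was saved
                copyGraph(event.Name) // push the new graph to the remote host
            }
        case err := <-watcher.Errors:
            log.Println("watch error:", err)
        }
    }
}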

Auth

In case you’re wondering, mgmt can look in your ~/.ssh/ for keys to use for authentication, or it can prompt you interactively. It can also read a plain text password from the connection string, but this isn’t a recommended security practice.
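
As a sketch of the approach (assumed paths and a hypothetical promptPassword helper; not mgmt’s actual auth code), building the SSH auth methods with golang.org/x/crypto/ssh might look like:

package remote

import (
    "fmt"
    "io/ioutil"
    "path/filepath"

    "golang.org/x/crypto/ssh"
)

// promptPassword is a hypothetical helper; a real one would disable echo.
func promptPassword() (string, error) {
    fmt.Print("Password: ")
    var p string
    _, err := fmt.Scanln(&p)
    return p, err
}

// buildAuth tries a key from ~/.ssh/ first, then falls back to an
// interactive password prompt.
func buildAuth(home string) ([]ssh.AuthMethod, error) {
    var methods []ssh.AuthMethod
    if pem, err := ioutil.ReadFile(filepath.Join(home, ".ssh", "id_rsa")); err == nil {
        signer, err := ssh.ParsePrivateKey(pem)
        if err != nil {
            return nil, err
        }
        methods = append(methods, ssh.PublicKeys(signer)) // key based auth
    }
    methods = append(methods, ssh.PasswordCallback(promptPassword)) // interactive fallback
    return methods, nil
}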

Hierarchical remote execution

Even though we recommend running mgmt in a normal clustered mode instead of over SSH, we didn’t want to limit the number of hosts that can be configured using remote execution. For this reason, it would be architecturally simple to add support for what we’ve decided to call “hierarchical remote execution”.

In this mode, the primary initiator would first connect to one or more secondary nodes, which would then stage a second series of remote execution runs, resulting in an order of depth equal to two or more. This fan-out approach can be used to distribute the number of outgoing connections across more intermediate machines, or as a method to conserve remote execution bandwidth on the primary link into your datacenter, by having the secondary machines run most of the remote execution runs.

[diagram: hierarchical remote execution]

This particular extension hasn’t been built, although some of the plumbing has been laid. If you’d like to contribute this feature to the upstream project, please join us in #mgmtconfig on Freenode and let us (I’m @purpleidea) know!

Docs

There is some generated documentation for the mgmt remote package available. There is also the beginning of some additional documentation in the markdown docs. You can help contribute to either of these by sending us a patch!

Novel resources

Our event based architecture can enable some previously improbable kinds of resources. In particular, I think it would be quite beautiful if someone built a provisioning resource. The Watch method of the resource API normally serves to notify us of events, but since it is a main loop that blocks in a select call, it could also be used to run a small server that hosts a kickstart file and associated TFTP images. If you like this idea, please help us build it!
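
To make the shape of that concrete, here is a rough sketch of the idea. The ProvisionerRes type, the Watch signature and the paths are all hypothetical, since no such resource exists yet:

package resources

import "net/http"

type ProvisionerRes struct{} // hypothetical provisioning resource

// Watch blocks in a select loop like other resources do, but also hosts a
// tiny HTTP server that hands out the kickstart file to installing machines.
func (obj *ProvisionerRes) Watch(exitChan chan struct{}) error {
    mux := http.NewServeMux()
    mux.HandleFunc("/ks.cfg", func(w http.ResponseWriter, r *http.Request) {
        http.ServeFile(w, r, "/var/lib/provisioner/ks.cfg") // the kickstart file
    })
    srv := &http.Server{Addr: ":8080", Handler: mux}
    go srv.ListenAndServe() // serve in the background while we block below

    for {
        select {
        case <-exitChan: // the engine asked us to shut down
            return srv.Close()
        }
    }
}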

Conclusion

I hope you enjoyed this article and found this remote execution methodology as novel as we do. In particular I hope that I’ve demonstrated that configuration software doesn’t have to be constrained behind a static orchestration topology.

Happy Hacking,

James

Making an empty RPM

I am definitely not an RPM expert, in fact, I’m afraid of it, but with recent tools such as COPR, and my glorious Makefile, some aspects of it have become palatable. This post will be about a recent journey I had building the most useless RPM ever.

A video of my journey building this RPM.

Because of reasons, I wanted to satisfy an RPM dependency for a package that I wanted to install without rebuilding that RPM. As a result, I wanted to build as small an RPM as possible. This took me down a much longer path than I thought it would.

Step 1: The empty spec file

I thought this would be easy. It turns out it was not. Here’s what happened…

james@computer:/tmp/rpmbuild$ cat vagrant-libvirt.spec
%global project_version 0.0.24

Name:       vagrant-libvirt
Version:    0.0.24
Release:    noop
Summary:    A fake vagrant-libvirt RPM
License:    AGPLv3+
BuildArch:  noarch

Requires:   vagrant >= 1.6.5

%description
A fake vagrant-libvirt RPM

%prep
%setup -c -q -T -D -a 0

%build

%install

%files

%changelog
james@computer:/tmp/rpmbuild$ rpmbuild -bs vagrant-libvirt.spec 
error: No "Source:" tag in the spec file

Amazingly, rpmbuild fails to build without specifying a Source0 directive. Gah… As an aside, yes, the License field is also required, or it won’t build either! So let’s create a dummy tarball to use as the source!

Step 2: The empty tarball

james@computer:/tmp/rpmbuild$ tar -cjf vagrant-libvirt-noop.tar.bz2
tar: Cowardly refusing to create an empty archive
Try 'tar --help' or 'tar --usage' for more information.

Apparently tar doesn’t want to cooperate either! Maybe these utilities have some sort of ingrained existential fear of nothingness? I can work around this though.

Step 3: The empty file

james@computer:/tmp/rpmbuild$ echo hello > README
james@computer:/tmp/rpmbuild$ tar -cjf vagrant-libvirt-noop.tar.bz2 README
james@computer:/tmp/rpmbuild$ echo $?
0
james@computer:/tmp/rpmbuild$ cat vagrant-libvirt.spec
%global project_version 0.0.24

Name:       vagrant-libvirt
Version:    0.0.24
Release:    noop
Summary:    A fake vagrant-libvirt RPM
License:    AGPLv3+
Source0:    vagrant-libvirt-noop.tar.bz2
BuildArch:  noarch

Requires:   vagrant >= 1.6.5

%description
A fake vagrant-libvirt RPM

%prep
%setup -c -q -T -D -a 0

%build

%install

%files

%changelog

Okay great! Now to build the RPM…

Step 4: The empty RPM

james@computer:/tmp/rpmbuild$ mkdir SOURCES
james@computer:/tmp/rpmbuild$ mv vagrant-libvirt-noop.tar.bz2 SOURCES/
james@computer:/tmp/rpmbuild$ rpmbuild --define "_topdir $(pwd)/" -bs vagrant-libvirt.spec
Wrote: /tmp/rpmbuild/SRPMS/vagrant-libvirt-0.0.24-noop.src.rpm
james@computer:/tmp/rpmbuild$ rpmbuild --define "_topdir $(pwd)/" -bb vagrant-libvirt.spec
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.dUivHv
+ umask 022
+ cd /tmp/rpmbuild//BUILD
+ cd /tmp/rpmbuild/BUILD
+ /usr/bin/mkdir -p vagrant-libvirt-0.0.24
+ cd vagrant-libvirt-0.0.24
+ /usr/bin/bzip2 -dc /tmp/rpmbuild/SOURCES/vagrant-libvirt-noop.tar.bz2
+ /usr/bin/tar -xf -
+ STATUS=0
+ '[' 0 -ne 0 ']'
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ exit 0
Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.kLSHn2
+ umask 022
+ cd /tmp/rpmbuild//BUILD
+ cd vagrant-libvirt-0.0.24
+ exit 0
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.xTiM4y
+ umask 022
+ cd /tmp/rpmbuild//BUILD
+ '[' /tmp/rpmbuild/BUILDROOT/vagrant-libvirt-0.0.24-noop.x86_64 '!=' / ']'
+ rm -rf /tmp/rpmbuild/BUILDROOT/vagrant-libvirt-0.0.24-noop.x86_64
++ dirname /tmp/rpmbuild/BUILDROOT/vagrant-libvirt-0.0.24-noop.x86_64
+ mkdir -p /tmp/rpmbuild/BUILDROOT
+ mkdir /tmp/rpmbuild/BUILDROOT/vagrant-libvirt-0.0.24-noop.x86_64
+ cd vagrant-libvirt-0.0.24
+ /usr/lib/rpm/find-debuginfo.sh --strict-build-id -m --run-dwz --dwz-low-mem-die-limit 10000000 --dwz-max-die-limit 110000000 /tmp/rpmbuild//BUILD/vagrant-libvirt-0.0.24
/usr/lib/rpm/sepdebugcrcfix: Updated 0 CRC32s, 0 CRC32s did match.
+ /usr/lib/rpm/check-rpaths /usr/lib/rpm/check-buildroot
+ /usr/lib/rpm/brp-compress
+ /usr/lib/rpm/brp-strip-static-archive /usr/bin/strip
+ /usr/lib/rpm/brp-python-bytecompile /usr/bin/python 1
+ /usr/lib/rpm/brp-python-hardlink
+ /usr/lib/rpm/redhat/brp-java-repack-jars
Processing files: vagrant-libvirt-0.0.24-noop.noarch
Checking for unpackaged file(s): /usr/lib/rpm/check-files /tmp/rpmbuild/BUILDROOT/vagrant-libvirt-0.0.24-noop.x86_64
Wrote: /tmp/rpmbuild/RPMS/noarch/vagrant-libvirt-0.0.24-noop.noarch.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.0lR0a6
+ umask 022
+ cd /tmp/rpmbuild//BUILD
+ cd vagrant-libvirt-0.0.24
+ /usr/bin/rm -rf /tmp/rpmbuild/BUILDROOT/vagrant-libvirt-0.0.24-noop.x86_64
+ exit 0
james@computer:/tmp/rpmbuild$

This worked too! It has some interesting output though…

james@computer:/tmp/rpmbuild$ rpm -qlp RPMS/noarch/vagrant-libvirt-0.0.24-noop.noarch.rpm
(contains no files)
james@computer:/tmp/rpmbuild$ ls -lAh RPMS/noarch/vagrant-libvirt-0.0.24-noop.noarch.rpm
-rw-rw-r--. 1 james 5.5K Aug 11 11:53 RPMS/noarch/vagrant-libvirt-0.0.24-noop.noarch.rpm
james@computer:/tmp/rpmbuild$

As you can see, this created an empty RPM, which is still about 5.5K in size. While this worked, builds submitted to COPR didn’t generate any output. I suppose this is a bug in COPR, but in the meantime, I still wanted something working. I added some nonsense to the spec file to continue.

Step 5: The final product

james@computer:/tmp/rpmbuild$ cat vagrant-libvirt.spec 
%global project_version 0.0.24

Name:       vagrant-libvirt
Version:    0.0.24
Release:    noop
Summary:    A fake vagrant-libvirt RPM
License:    AGPLv3+
Source0:    vagrant-libvirt-noop.tar.bz2
BuildArch:  noarch

Requires:   vagrant >= 1.6.5

%description
A fake vagrant-libvirt RPM

%prep
%setup -c -q -T -D -a 0

%build

%install
rm -rf %{buildroot}
# _datadir is typically /usr/share/
install -d -m 0755 %{buildroot}/%{_datadir}/vagrant-libvirt/
echo "This is a phony vagrant-libvirt package." > %{buildroot}/%{_datadir}/vagrant-libvirt/README

%files
%{_datadir}/vagrant-libvirt/README

%changelog

After running the usual build commands, and sticking an SRPM up in COPR, this builds and installs as expected! Phew! There might be a manual way to do this with cpio, but I wanted to use the official tools, and avoid hacking the spec.

Perhaps there is a simpler way to work around all of this, but until I find it, I hope you’ve enjoyed my story,

Happy Hacking!

James

UPDATE: Reader Jan pointed out that you could use fpm to accomplish the same thing with a one-liner. The modified one-liner is:

fpm -s empty -t rpm -d 'vagrant >= 1.6.5' -n vagrant-libvirt -v 0.0.24 --iteration noop

This is a much shorter and more elegant solution, with the one exception that fpm doesn’t currently produce SRPMs, which are needed so that a trusted build service like COPR can distribute them to the users.

Here’s the full output and the comparison anyway:

james@computer:/tmp/ftest$ fpm -s empty -t rpm -d 'vagrant >= 1.6.5' -n vagrant-libvirt -v 0.0.24 --iteration noop
no value for epoch is set, defaulting to nil {:level=>:warn}
no value for epoch is set, defaulting to nil {:level=>:warn}
Created package {:path=>"vagrant-libvirt-0.0.24-noop.x86_64.rpm"}
james@computer:/tmp/ftest$ sha1sum vagrant-libvirt-0.0.24-noop.x86_64.rpm
61b1c200d2efa87d790a2243ccbc4c4ebb7ef64d  vagrant-libvirt-0.0.24-noop.x86_64.rpm
james@computer:/tmp/ftest$ sha1sum ~/code/oh-my-vagrant/extras/.rpmbuild/RPMS/noarch/vagrant-libvirt-0.0.24-noop.noarch.rpm 
5f2abb15264de6c1c7f09039945cd7bbd3a96404  /home/james/code/oh-my-vagrant/extras/.rpmbuild/RPMS/noarch/vagrant-libvirt-0.0.24-noop.noarch.rpm

While the two sha1sums aren’t identical (probably due to timestamps or some other variance), the two RPMs should be functionally identical.

Adding range support to python’s http server to kickstart with anaconda

I’ve been working on automatic installs using kickstart and puppet. I’m using a modified python httpserver because it’s lightweight and easy to integrate into my existing python code base. The server was churning away perfectly until anaconda started downloading the full RPMs for installation. What was going wrong?

Traceback (most recent call last):
[...]
error: [Errno 32] Broken pipe
BorkedError: See TTBOJ for explanation and discussion

As it turns out, anaconda first downloads the headers, and then later requests the full rpm with an http range request. This second range request, which begins at byte 1384, causes the “simple” httpserver to bork, because it doesn’t support this more elaborate feature.
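
To illustrate the mechanics, here is a small Go program (the URL is an example) that sends the same kind of range request that anaconda does:

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Ask for everything from byte 1384 onwards, like anaconda's second request.
    req, err := http.NewRequest("GET", "http://192.168.123.1/repo/foo.rpm", nil) // example URL
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Range", "bytes=1384-")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    // A range-aware server answers "206 Partial Content" and sets a
    // Content-Range header describing the slice it is returning.
    fmt.Println(resp.Status, resp.Header.Get("Content-Range"))
}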

After a bit of searching, I found rangehttpserver and was very grateful that I wouldn’t have to write this feature myself. This work by smgoller was based on the similar httpserver by xyne. Both of these people have been very responsive and kind in giving me special permission to use the relevant portions of their code that I needed under the GPLv2/3+. Thanks to these two and their contributions to Free Software, we can all see further instead of having to reinvent previously solved problems.

This derivative work is only one part of a larger software release that I have coming shortly, but I wanted to put this out here early to thank these guys and to make you all aware of the range issue and solution.

Thank you again and,
Happy Hacking,

James

A quick anaconda trick

Here’s a quick anaconda solution that I am now using in some of my kickstart files…

I wanted to bootstrap a machine and do all the partitioning and logical volume creation, but not format or mount one of the logical volumes. The magic parameter I needed was:

--fstype=none

This seems to work perfectly for me. It’s not 100% intuitive, but it does work. I hope it’s not an accidental bug in the anaconda code! The full text of my partitioning is:

clearpart --all --drives=sda
part /boot --fstype=ext4 --size=1024
part pv.01 --grow --size=1024
volgroup VolGroup00 --pesize=4096 pv.01
logvol / --fstype=ext4 --name=root --vgname=VolGroup00 --size=65536
logvol swap --name=swap --vgname=VolGroup00 --size=16384
logvol /foo --name=foo --vgname=VolGroup00 --fstype=none --grow --size=1

Now all that anaconda is missing is support for RAID1 EFI /boot.

Happy hacking,

James