Copyleft is Dead. Long live Copyleft!

As you may have noticed, we recently re-licensed mgmt from the AGPL (Affero General Public License) to the regular GPL. This post explains the decision, and hopefully includes some insights at the intersection of technology and legal issues.

Disclaimer:

I am not a lawyer, and these are not necessarily the opinions of my employer. I think I’m knowledgeable in this area, but I’m happy to be corrected in the comments. I’m friends with a number of lawyers, and they like to include disclaimer sections, so I’ll include this so that I blend in better.

Background:

It’s well understood in infrastructure coding that control of, and trust in, the software is paramount. It can be risky basing your business on a product if the vendor has the ultimate ability to change the behaviour, discontinue the software, make it prohibitively expensive, or in the extreme case, use it as a backdoor for corporate espionage.

While many businesses have realized this, it’s unfortunate that many individuals have not. The difference might be protecting corporate secrets vs. individual freedoms, but that’s a discussion for another time. I use Fedora and GNOME, and don’t have any Apple products, but you might value the temporary convenience more. I also support your personal choice to use the software you want. (Not sarcasm.)

This is one reason why Red Hat has done so well. If it ever mistreated its customers, they would be able to fork the software and grow new communities. The lack of an asymmetrical power dynamic keeps customers feeling safe and happy!

Section 13:

The main difference between the AGPL and the GPL is the “Remote Network Interaction” section. Here’s a simplified explanation:

Both licenses require that if you modify the code, you give back your contributions. “Copyleft” is the use of copyright law to legally require this share-alike provision. These licenses never require this when using the software privately, whether as an individual or within a company. The thing that “activates” the licenses is distribution: if you sell or give someone a modified copy of the program, then you must also include the source code.

The AGPL extends the GPL so that the license is also activated when the software runs on an application provider’s computer, which is common with hosted software-as-a-service. In other words, if you were an external user of a web calendaring solution containing AGPL software, then that provider would have to offer up the code to the application. The GPL would not require this, and neither license would require distribution of code if the application were only available to employees of that company, nor would either require distribution of the software used to deploy the calendaring software.

Network Effects and Configuration Management:

If you’re familiar with the infrastructure automation space, you’re probably already aware of three interesting facts:

  1. Hosted configuration management as a service probably isn’t plausible
  2. The infrastructure automation your product uses isn’t the product
  3. Copyleft does not apply to the code or declarations that describe your configuration

As a result of this, it’s unlikely that the Section 13 requirement of the AGPL would actually ever apply to anyone using mgmt!

A number of high profile organizations outright forbid the use of the AGPL. Google and OpenStack are two notable examples. There are others. Many claim this is because the cost of legal compliance is high. One argument I heard is that they live in fear that their entire proprietary software development business would be turned on its head if some sufficiently important library were AGPL. Despite weak enforcement, and with many companies flouting the GPL, Linux and the software industry have shown no signs of waning. Compliance has even helped their bottom line.

Nevertheless, as a result of misunderstanding, fear and doubt, using the AGPL still cuts off a portion of your potential contributors. Possibly overzealous enforcement has also probably caused some to fear the GPL.

Foundations and Permissive Licensing:

Why use copyleft at all? Copyleft is an inexpensive way of keeping the various contributors honest. It provides an organizational constitution so that community members who invest in the project all get a fair, representative stake.

In the corporate world, there is a lot of governance in the form of “foundations”. The most well-known ones exist in the United States and are usually classified as 501(c)(6) under US Federal tax law. They aren’t allowed to generate a profit, but they exist to fulfill the desires of their dues-paying membership. You’ve probably heard of the Linux Foundation, the .NET foundation, the OpenStack Foundation, and the recent Linux Foundation child, the CNCF. With the major exception of Linux, they primarily fund permissively licensed projects, since that’s what their members demand, and the foundations probably also help convince some percentage of their membership to voluntarily contribute code back.

Running an organization like this is possible, but it certainly adds a layer of overhead that I don’t think is necessary for mgmt at this point.

It’s also interesting to note that of the top corporate contributions to open source, virtually all of the licensing is permissive, usually under the Apache v2 license. I’m not against using or contributing to permissively licensed projects, but I do think there’s a danger if most of our software becomes a non-copyleft monoculture, and I wanted to take a stand against that trend.

Innovation:

I started mgmt to show that there was still innovation to be done in the automation space, and I think I’ve achieved that. I still have more to prove, but I think I’m on the right path. I also wanted to innovate in licensing by showing that the AGPL isn’t actually harmful. I’m sad to say that I’ve lost that battle, and that maybe it was too hard to innovate in too many different places simultaneously.

Red Hat has been my main source of funding for this work up until now, and I’m grateful for that, but I’m sad to say that they’ve officially set my time quota to zero. Without their support, I just don’t have the energy to innovate in both areas. Truth be told, I’m more interested in the technical advancements than in the licensing progress it might have brought to our software ecosystem.

Conclusion / TL;DR:

If you, your organization, or someone you know would like to help fund my mgmt work either via a development grant, contract or offer of employment, or if you’d like to be a contributor to the project, please let me know! Without your support, mgmt will die.

Happy Hacking,

James

You can follow James on Twitter for more frequent updates and other random noise.

EDIT: I mentioned in my article that: “Hosted configuration management as a service probably isn’t plausible”. Turns out I was wrong. The splendiferous Nathen Harvey was kind enough to point out that Chef offers a hosted solution! It’s free for five hosts as well!

I was probably thinking more about how I would be using mgmt, and not about the greater ecosystem. If you’d like to build or use a hosted mgmt solution, please let me know!

Metaparameters in mgmt

In mgmt we have meta parameters. They are similar in concept to what you might be familiar with from other tools, except that they are more clearly defined (in a single struct) and vastly more powerful.

In mgmt, a meta parameter is a parameter which is codified entirely in the engine, and which can be used by any resource. In contrast, Puppet considers require/before to be meta parameters, whereas in mgmt the equivalent is a graph edge, which is not a meta parameter. [1]

Kinds

As of this writing we have seven different kinds of meta parameters: AutoEdge, AutoGroup, Noop, Retry & Delay, Poll, Limit & Burst, and Sema.

The astute reader will note that there are actually nine different meta parameters listed, but I have grouped them into seven categories since some of them are very tightly interconnected. The first two, AutoEdge and AutoGroup have been covered in separate articles already, so they won’t be discussed here. To learn about the others, please read on…

Noop

Noop stands for no-operation. If it is set to true, we tell the CheckApply portion of the resource to not make any changes. It is up to the individual resource implementation to respect this facility, which is the case for all correctly written resources. You can learn more about this by reading the CheckApply section in the resource guide.

If you’d like to set the noop state on all resources at runtime, there is a CLI flag which you can use to do so. It is unsurprisingly named --noop, and it overrides all the resources in the graph. This is in stark contrast with Puppet, which allows an individual resource definition to override the user’s choice!

james@computer:/tmp$ cat noop.pp 
file { '/tmp/puppet.noop':
    content => "nope, nope, nope!\n",
    noop => false,    # set at the resource level
}
james@computer:/tmp$ time puppet apply noop.pp 
Notice: Compiled catalog for freed in environment production in 0.29 seconds
Notice: /Stage[main]/Main/File[/tmp/puppet.noop]/ensure: defined content as '{md5}d8bda32dd3fbf435e5a812b0ba3e9a95'
Notice: Applied catalog in 0.03 seconds

real    0m15.862s
user    0m7.423s
sys    0m1.260s
james@computer:/tmp$ file puppet.noop    # verify it worked
puppet.noop: ASCII text
james@computer:/tmp$ rm -f puppet.noop    # reset
james@computer:/tmp$ time puppet apply --noop noop.pp    # safe right?
Notice: Compiled catalog for freed in environment production in 0.30 seconds
Notice: /Stage[main]/Main/File[/tmp/puppet.noop]/ensure: defined content as '{md5}d8bda32dd3fbf435e5a812b0ba3e9a95'
Notice: Class[Main]: Would have triggered 'refresh' from 1 events
Notice: Stage[main]: Would have triggered 'refresh' from 1 events
Notice: Applied catalog in 0.02 seconds

real    0m15.808s
user    0m7.356s
sys    0m1.325s
james@computer:/tmp$ cat puppet.noop 
nope, nope, nope!
james@computer:/tmp$

If you look closely, Puppet just trolled you by performing an operation when you thought it would be a noop! I think this behaviour is incorrect, but if this isn’t supposed to be a bug, then I’d sure like to know why!

It’s worth mentioning that there is also a noop resource in mgmt which is similarly named because it does absolutely nothing.

Retry & Delay

In mgmt we can run continuously, which means that it’s often more useful to do something interesting when there is a resource failure, rather than simply shutting down completely. As a result, if there is an error during the CheckApply phase of the resource execution, the operation can be retried (retry) a number of times, and there can be a delay between each retry.

The delay value is an integer representing the number of milliseconds to wait between retries, and it defaults to zero. The retry value is an integer representing the maximum number of allowed retries, and it defaults to zero. A negative value will permit an infinite number of retries. If the number of retries is exhausted, then the temporary resource failure will be converted into a permanent failure. Resources which depend on a failed resource will be blocked until there is a successful execution. When there is a successful CheckApply, the resource retry counter is reset.

In general it is best to leave these values at their defaults unless you are expecting spurious failures; this way, if you do get a failure, it won’t be masked by the retry mechanism.
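To make the semantics concrete, here is a minimal sketch of the retry loop in Go; this is illustrative only, and not the engine’s actual code.

import "time"

// retryCheckApply retries a failing checkApply up to retry times, sleeping
// delay milliseconds between attempts; a negative retry means try forever.
func retryCheckApply(retry int, delay int64, checkApply func() error) error {
    for {
        err := checkApply()
        if err == nil {
            return nil // success; the engine also resets the retry counter here
        }
        if retry == 0 {
            return err // retries exhausted: the failure becomes permanent
        }
        if retry > 0 {
            retry-- // a negative value is never decremented, so it loops forever
        }
        time.Sleep(time.Duration(delay) * time.Millisecond)
    }
}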

It’s worth mentioning that the Watch loop can fail as well, and that the retry and delay meta parameters apply to it too! While these could have had their own set of meta parameters, I felt it would have unnecessarily cluttered up the interface, and I couldn’t think of a case where it would be helpful to have different values. They do have their own separate retry counter and delay timer of course! If someone has a valid use case, then I’m happy to separate these.

If someone would like to implement a pluggable back-off algorithm (eg: exponential back-off) to be used here instead of a simple delay, then I think it would be a welcome addition!

Poll

Despite mgmt being event based, there are situations where you’d really like to poll instead of using the Watch method. For these cases, I reluctantly implemented a poll meta parameter. It does exactly what you’d expect, generating events every poll seconds. It defaults to zero, which means that it is disabled and Watch is used instead.
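As a rough sketch (illustrative only, not the engine’s actual code), a poll-driven Watch loop amounts to something like this:

import "time"

// pollWatch emits an event every poll seconds until done is closed, standing
// in for the event-based Watch method of a resource.
func pollWatch(poll int, events chan<- struct{}, done <-chan struct{}) {
    ticker := time.NewTicker(time.Duration(poll) * time.Second)
    defer ticker.Stop()
    for {
        select {
        case <-ticker.C:
            events <- struct{}{} // ask the engine to run CheckApply
        case <-done:
            return
        }
    }
}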

Despite my earlier knock of it, it is actually quite useful, in that some operations might require or prefer polling, and having it as a meta parameter means that those resources won’t need to duplicate the polling code.

This might be very powerful for an aws resource that can set up hosted Amazon EC2 resources. When combined with the retry and delay meta parameters, it will even survive outages!

One particularly interesting aspect is that ever since the converged graph detection was improved, we can still converge a graph and shutdown with the converged-timeout functionality while using polling! This is described in more detail in the documentation.

Limit & Burst

In mgmt, the events generated by the Watch main loop of a resource do not need to be matched 1-1 with the CheckApply remediation step. This is very powerful because it allows mgmt to collate multiple events into a single CheckApply step, which is helpful when the duration of the CheckApply step is longer than the interval between the Watch events being generated.

In addition, you might not want to constantly Check or Apply (converge) the state of your resource as often as it goes out of state. For this situation, that step can be rate limited with the limit and burst meta parameters.

The limit and burst meta parameters implement something known as a token bucket. This models a bucket which is filled with tokens and which is drained slowly. It has a particular rate limit (which sets a maximum rate) and a burst count (which sets the maximum bolus that can be absorbed).

This doesn’t cause us to permanently miss events (and stay un-converged) because when the bucket overfills, instead of dropping events, we actually cache the last one for playback once the bucket falls within the execution rate threshold. Remember, we expect to be converged in the steady state, not at every infinitesimal delta t in between.

The limit and burst meta parameters default to allowing an infinite rate, with zero burst. As it turns out, if you have a non-infinite rate, the burst must be non-zero or you will cause a Validate error. Similarly, a non-zero burst with an infinite rate is effectively the same as the default. A good rule of thumb is to remember to either set both values or neither. This is all because of the mathematical implications of token buckets, which I won’t explain in this article.
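To make the token bucket mechanics concrete, here’s a small standalone sketch using the golang.org/x/time/rate package; it only illustrates the limit and burst semantics, and isn’t necessarily how the engine wires this up internally.

package main

import (
    "fmt"
    "time"

    "golang.org/x/time/rate"
)

func main() {
    // limit: at most one CheckApply every two seconds; burst: absorb up to three events at once
    limiter := rate.NewLimiter(rate.Limit(0.5), 3)
    for i := 0; i < 10; i++ {
        if limiter.Allow() {
            fmt.Println("event", i, "-> run CheckApply")
        } else {
            // over the rate: cache the event for later playback instead of dropping it
            fmt.Println("event", i, "-> cached for later playback")
        }
        time.Sleep(250 * time.Millisecond)
    }
}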

Sema

Sema is short for semaphore. In mgmt we have implemented P/V style counting semaphores. This is a mechanism for reducing parallelism in situations where there are no explicit dependencies between resources. This might be useful when the number of operations could outnumber the number of CPUs on your machine and you want to avoid starving your other processes. Alternatively, there might be a particular operation that you want to add a mutex (mutual exclusion) around, which can be represented with a semaphore of size one (1). Lastly, it was a particularly fun meta parameter to write, and I had been itching to do so for some time.

To use this meta parameter, simply give a list of semaphore ids to the resource you want to lock. These can be any string, and are shared globally throughout the graph. By default, they have a size of one. To specify a semaphore with a different size, append a colon (:) followed by an integer at the end of the semaphore id.

Valid ids might include: “some_id:42”, “hello:13”, and “lockname”. Remember, the size parameter is the number of simultaneous resources which can run their CheckApply methods at the same time. It does not prevent multiple Watch methods from returning events simultaneously.

If you would like to force a semaphore globally on all resources, you can pass in the --sema argument with a size integer. This will get appended to the existing semaphores. For example, to simulate Puppet’s traditional non-parallel execution, you could specify --sema 1.

Oh, no! Does this mean I can deadlock my graphs? Interestingly enough, this is actually completely safe! The reason is that all the semaphores exist in the mgmt directed acyclic graph, and because that DAG represents dependencies that are always respected, there will always be a way to make progress, which eventually unblocks any waiting resources! The trick is to ensure that each resource always acquires its list of semaphores in alphabetical order. (Actually the order doesn’t matter as long as it’s consistent across the graph, and alphabetical is as good as any!) Unfortunately, I don’t have a formal proof of this, but I was able to convince myself on the back of an envelope that it is true! Please contact me if you can prove me right or wrong! The one exception is that a counting semaphore of size zero would never let anyone acquire it, so by definition it would permanently block, and as a result is not currently permitted.
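For the curious, here’s a minimal sketch of P/V counting semaphores and the ordered acquisition in Go; it’s illustrative only, and not mgmt’s actual implementation.

import "sort"

// Semaphore is a counting semaphore built on a buffered channel; the
// capacity of the channel is the size of the semaphore.
type Semaphore chan struct{}

func NewSemaphore(size int) Semaphore { return make(Semaphore, size) }
func (s Semaphore) P()                { s <- struct{}{} } // acquire: blocks when the count is exhausted
func (s Semaphore) V()                { <-s }             // release

// acquireAll takes a resource's semaphores in sorted (alphabetical) order,
// the consistent ordering that avoids deadlock as described above.
func acquireAll(sems map[string]Semaphore, ids []string) {
    sort.Strings(ids)
    for _, id := range ids {
        sems[id].P()
    }
}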

The last important point to mention is about the interplay between automatic grouping and semaphores. When more than one resource is grouped, they are considered to be part of the same resource. As a result, the resulting list of semaphores is the sum of the individual semaphores, de-duplicated. This ensures that individual locking guarantees aren’t broken when multiple resources are combined.

Future

If you have ideas for future meta parameters, please let me know! We’d love to hear about your ideas on our mailing list or on IRC. If you’re shy, you can contact me privately as well.

Happy Hacking,

James

[1] This is a bit of an apples vs. flame-throwers comparison because I’m comparing the mgmt engine meta parameters with the Puppet language meta parameters, but I think it’s worth mentioning because there’s a clear separation between the two in mgmt, whereas the separation is much more blurry in the Puppet scenario. It’s also true that the mgmt language might grow a concept of language-level meta parameters which only map partially onto engine meta parameters, but this is a discussion for another day!

Send/Recv in mgmt

I previously published “A revisionist history of configuration management”. I meant for that to be the intro to this article, but it ended up being long enough that it deserved a separate post. I will explain Send/Recv in this article, but first a few clarifications to the aforementioned article.

Clarifications

I mentioned that my “revisionist history” was inaccurate, but I failed to mention that it was also not exhaustive! Many things were left out either because they were proprietary, niche, not well-known, of obscure design or simply for brevity. My apologies if you were involved with Bcfg2, Bosh, Heat, Military specifications, SaltStack, SmartFrog, or something else entirely. I’d love it if someone else wrote an “exhaustive history”, but I don’t think that’s possible.

It’s also worth re-iterating that without the large variety of software and designs which came before me, I wouldn’t have learned or been able to build anything of value. Thank you, giants! Discussing the problems and designs of other tools makes it easier to contrast with, and explain, what I’m doing in mgmt.

Notifications

If you’re not familiar with the directed acyclic graph model for configuration management, you should start by reviewing that material first. It models a system of resources (workers) as the vertices in that DAG, and the edges as the dependencies. We’re going to add some additional mechanics to this model.

There is a concept in mgmt called notifications. Any time the state of a resource is successfully changed by the engine, a notification is emitted. These notifications are emitted along any graph edge (dependency) that has been asked to relay them. Edges with the Notify property will do so. These are usually called refresh notifications.
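As a rough sketch, an edge carrying such a property might look like the following; the Notify name comes from the description above, while the rest of the struct is my illustration, not necessarily mgmt’s exact API.

// illustrative only: an edge in the DAG that can relay refresh notifications
type Edge struct {
    Name   string
    Notify bool // if true, refresh notifications are relayed along this edge
}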

Any time a resource receives a refresh notification, it can apply a special action which is resource specific. The svc resource reloads the service, the password resource generates a new password, the timer resource resets the timer, and the noop resource prints a notification message. In general, refresh notifications should be avoided if possible, but there are a number of legitimate use cases.

In mgmt, notifications are designed to be crash-proof; that is to say, undelivered notifications are re-queued when the engine restarts. While we don’t expect mgmt to crash, this is also useful when a graph is stopped by the user before it has completed.

You’ll see these notifications in action momentarily.

Send/Recv

I mentioned in the revisionist history that I felt that Chef opted for raw code as a solution to the lack of power in Puppet. Having resources in mgmt which are event-driven is one example of increasing their power. Send/Recv is another mechanism to make the resource primitive more powerful.

Simply put: Send/Recv is a mechanism where resources can transfer data along graph edges.
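Conceptually, each such link pairs an output field on the sender with an input field on the receiver. Here’s a tiny illustrative sketch; the type and field names are mine, not mgmt’s API, though the Password -> Content pairing matches the example further below.

// illustrative only: one Send/Recv link along a graph edge
type SendRecvLink struct {
    FromRes, FromKey string // sender and its output field, e.g. Password[password1] and "Password"
    ToRes, ToKey     string // receiver and its input field, e.g. File[file1] and "Content"
}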

The status quo

Consider the following pattern (expressed as Puppet code):

# create a directory
file { '/var/foo/':
    ensure => directory,
}
# download a file into that directory
exec { 'wget http://example.com/data -O - > /var/foo/data':
    creates => '/var/foo/data',
    require => File['/var/foo/'],
}
# set some property of the file
file { '/var/foo/data':
    mode => '0644',
    require => Exec['wget http://example.com/data -O - > /var/foo/data'],
}

First, a disclaimer: Puppet now actually supports an HTTP URL as a source. Nevertheless, this was a common pattern for many years, and that solution only improves a narrow use case. Here are some of the past and current problems:

  • File can’t take output from an exec (or other) resource
  • File can’t pull from an unsupported protocol (sftp, tftp, imap, etc…)
  • File can get created with zero-length data
  • Exec won’t update if http endpoint changes the data
  • Requires knowledge of bash and shell glue
  • Potentially error-prone if a typo is made

There’s also a layering violation if you believe that network code (http downloading) shouldn’t be in a file resource. I think it adds unnecessary complexity to the file resource.

The solution

What the file resource actually needs is to be able to accept (Recv) data of the same type as any of its input arguments. We also need resources which can produce (Send) data that is useful to consumers. This occurs along a graph (dependency) edge, since the sending resource needs to produce the data before the receiver can act!

This also opens up a range of possibilities for new resource kinds that are clever about sending or receiving data. An http resource could contain all the necessary network code, and replace our use of the exec { 'wget ...': } pattern.

Diagram

[diagram: in this graph, a password resource generates a random string and stores it in a file; more clever linkages are planned]

Example

As a proof of concept for the idea, I implemented a Password resource. This is a prototype resource that generates a random string of characters. To use the output, it has to be linked via Send/Recv to something that can accept a string. The file resource is one such possibility. Here’s an excerpt of some example output from a simple graph:

03:06:13 password.go:295: Password[password1]: Generating new password...
03:06:13 password.go:312: Password[password1]: Writing password token...
03:06:13 sendrecv.go:184: SendRecv: Password[password1].Password -> File[file1].Content
03:06:13 file.go:651: contentCheckApply: Invalidating sha256sum of `Content`
03:06:13 file.go:579: File[file1]: contentCheckApply(true)
03:06:13 noop.go:115: Noop[noop1]: Received a notification!

What you can see is that initially, a random password is generated. Next Send/Recv transfers the generated Password to the file’s Content. The file resource invalidates the cached Content checksum (a performance feature of the file resource), and then stores that value in the file. (This would normally be a security problem, but this is for example purposes!) Lastly, the file sends out a refresh notification to a Noop resource for demonstration purposes. It responds by printing a log message to that effect.

Libmgmt

Ultimately, mgmt will have a DSL to express the different graphs of configuration. In the meantime, you can use Puppet code, or a raw YAML file. The latter is primarily meant for testing purposes until we have the language built.

Lastly, you can also embed mgmt and use it like a library! This lets you write raw golang code to build your resource graphs. I decided to write the above example that way! Have a look at the code! This can be used to embed mgmt into your existing software! There are a few more examples available here.

Resource internals

When a resource receives new values via Send/Recv, it’s likely that the resource will have work to do. As a result, the engine will automatically mark the resource state as dirty and then poke it from the sending node. When the receiver resource runs, it can look up the list of keys that have been sent. This is useful if it wants to perform a cache invalidation, for example. In the resource, the code is quite simple:

// was a new value received via Send/Recv for our "Content" input, and did it change?
if val, exists := obj.Recv["Content"]; exists && val.Changed {
    // the "Content" input has changed; e.g. invalidate our cached checksum
}

Here is a good example of that mechanism in action.

Future work

This is only powerful if there are interesting resources to link together. Please contribute some ideas, and help build these resources! I’ve got a number of ideas already, but I’d love to hear yours first so that I don’t influence or stifle your creativity. Leave me a message in the comments below!

Happy Hacking,

James

A revisionist history of configuration management

I’ve got a brand new core feature in mgmt called send/recv which I plan to show you shortly, but first I’d like to start with some background.

History

This is my historical perspective and interpretation about the last twenty years in configuration management. It’s likely inaccurate and slightly revisionist, but it should be correct enough to tell the design story that I want to share.

Sometime after people started to realize that writing bash scripts wasn’t a safe, scalable, or reusable way to automate systems, CFEngine burst onto the scene with the first real solution to this problem. I think it was mostly all quite sane, but it wasn’t a tool which let us build autonomous systems, so people eventually looked elsewhere.

Later on, a new tool called Puppet appeared, and advertised itself as a “CFEngine killer”. It was written in a flashy new language called Ruby, and started attracting a community. I think it had some great ideas, and in particular, the idea of a safe declarative language was a core principle of the design.

I first got into configuration management around this time. My first exposure was to Puppet version 0.24, IIRC. Two major events followed.

  1. Puppet (the company, previously named “Reductive Labs”) needed to run a business (rightly so!) and turned their GPL licensed project into an ALv2 licensed one. This opened the door to an open-core business model, and I think was ultimately a detriment to the Puppet community.
  2. Some felt that the Puppet DSL (language) was too restrictive, and that this was what prevented them from building autonomous systems. They eventually started a project called Chef which let you write your automation using straight Ruby code. It never did lead them to build autonomous systems.

At this point, as people began to feel that the complexity (in particular around multi-machine environments) was starting to get too high, a flashy new orchestrator called Ansible appeared. While I like to put centralized orchestrators in a different category than configuration management, Ansible sits in the same problem space, so we’ll include it here.

Ansible tried to solve the complexity and multi-machine issue by determining the plan of action in advance, and then applying those changes remotely over SSH. Instead of a brand new “language”, they ended up with a fancy YAML syntax which has been loved by many and disliked by others. You also couldn’t really exchange host-local information between hosts at runtime, but this was a more advanced requirement anyway. They never did end up building reactive, autonomous systems, but this might not have been a goal.

Sometime later, container technology had a renaissance. The popular variant that caused a stir was called Docker. This dominant form was one in which you used a bash script wrapped in some syntactic sugar (a “Dockerfile”) to build your container images. Many believed (although incorrectly) that container technology would be a replacement for the configuration management scene. Their solution was to build these blobs with shell scripts, and to mix in the mostly useless concept of image layering.

They seem to have taken the renaissance too literally, and when they revived container technology, they also brought back using the shell as a build primitive. While I am certainly a fan and user of bash, and I do appreciate the nostalgia, it isn’t the safe, scalable design that I was looking for.

Docker is definitely in a different category than configuration management, and in fact, I think the two are actually complementary, and even though I prefer systemd-nspawn, we’ll mention Docker here so that I can publicly discredit the notion that it sits in or replaces this problem space.

While in some respects they got much closer to being able to build autonomous systems, you had to rewrite your software to be able to fit into this model, and even then, there are many shortcomings that still haven’t been resolved.

Analysis

On the path to autonomous systems, there is certainly a lot of trial and error. I don’t pretend to think that I have solved this problem, but I think I’ll get pretty close.

  • Where CFEngine chose the C language for portability, it lacked safety, and
  • Where Puppet chose a declarative language for safety, it lacked power, and
  • Where Chef chose raw code for power, it lacked simplicity, and
  • Where Ansible chose an orchestrator for simplicity, it lacked distribution and
  • Where Docker chose multiple instances for distribution, it lacked coordination.

I believe the answer to all of these is still ahead of us. When discussing power, I think the main mistake was the lack of a sufficiently advanced resource primitive. The event based engine in mgmt is intended to be the main aspect of this solution, but not the whole story. For another piece of this story, I invented something I’m calling send/recv.

Send/Recv

I’d like to go into this today, but I think I’m going to split this discussion into a separate blog post. Expect something here within a week!

If you hate the suspense, become a contributor and be involved in these discussions! We’re hanging out in #mgmtconfig on Freenode. I also hold occasional videoconferences with code contributors where we talk about the future.

Thanks

I learned a tremendous amount from all of these earlier tools and communities, and even though I am working on a next generation tool, I would never be where I am now if it wasn’t for all of those who came before me. I’m even trying to borrow ideas where it is appropriate to do so! I welcome all of those communities into the mgmt circle, and I hope that their users all continue to positively influence the design of mgmt.

Happy hacking,

James

Remote execution in mgmt

Bootstrapping a cluster from your laptop, or managing machines without needing to first set up a separate config management infrastructure, are both very reasonable and fundamental asks. I was particularly inspired by Ansible‘s agent-less remote execution model, but never wanted to build a centralized orchestrator. I soon realized that I could have my ice cream and eat it too.

Prior knowledge

If you haven’t read the earlier articles about mgmt, then I recommend you start with those, and then come back here. The first and fourth are essential if you’re going to make sense of this article.

Limitations of existing orchestrators

Current orchestrators have a few limitations.

  1. They can be a single point of failure
  2. They can have scaling issues
  3. They can’t respond instantaneously to node state changes (they poll)
  4. They can’t usually redistribute remote node run-time data between nodes

Despite these limitations, orchestration is still very useful because of the facilities it provides. Since these facilities are essential in a next generation design, I set about integrating these features, but with a novel twist.

Implementation, Usage and Design

Mgmt is written in golang, and that decision was no accident. One benefit is that it simplifies our remote execution model.

To use this mode you run mgmt with the --remote flag. Each use of the --remote argument points to a different remote graph to execute. Eventually this will be integrated with the DSL, but this plumbing is exposed for early adopters to play around with.

Startup (part one)

Each invocation of --remote causes mgmt to remotely connect over SSH to the target hosts. This happens in parallel, and runs up to --cconns simultaneous connections.

A temporary directory is made on the remote host, and the mgmt binary and graph are copied across the wire. Since mgmt compiles down to a single statically linked binary, transferring the software is simple. The binary is cached remotely to speed up future runs unless you pass the --no-caching option.

A TCP connection is tunnelled back over SSH to the originating host’s etcd server, which is embedded and running inside the initiating mgmt binary.
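A rough sketch of such a reverse tunnel using the golang.org/x/crypto/ssh package follows; the addresses are placeholders, and this isn’t mgmt’s actual plumbing.

import (
    "io"
    "net"

    "golang.org/x/crypto/ssh"
)

// tunnelEtcd asks the remote sshd to listen on a port, and proxies every
// connection made to it back to our locally embedded etcd server.
func tunnelEtcd(client *ssh.Client) error {
    listener, err := client.Listen("tcp", "127.0.0.1:2379") // on the remote host
    if err != nil {
        return err
    }
    defer listener.Close()
    for {
        remoteConn, err := listener.Accept()
        if err != nil {
            return err
        }
        localConn, err := net.Dial("tcp", "127.0.0.1:2379") // our local etcd
        if err != nil {
            return err
        }
        go func() {
            defer remoteConn.Close()
            defer localConn.Close()
            io.Copy(localConn, remoteConn) // remote client -> local etcd
        }()
        go func() {
            io.Copy(remoteConn, localConn) // local etcd -> remote client
        }()
    }
}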

Execution (part two)

The remote mgmt binary is now run! It wires itself up through the SSH tunnel so that its internal etcd client can connect to the etcd server on the initiating host. This is particularly powerful because remote hosts can now participate in resource exchanges as if they were part of a regular etcd backed mgmt cluster! They don’t connect directly to each other, but they can share runtime data, and only need an incoming SSH port open!

Closure (part three)

At this point mgmt can either keep running continuously or it can close the connections and shutdown.

In the former case, you can either remain attached over SSH, or you can disconnect from the child hosts and let this new cluster take on a new life and operate independently of the initiator.

In the latter case, you can shut down either at the operator’s request (via a ^C on the initiator) or when the cluster has simultaneously converged for a number of seconds.

This second possibility occurs when you run mgmt with the familiar --converged-timeout parameter. It is indeed clever enough to also work in this distributed fashion.

Diagram

I’ve used my poor libreoffice draw skills to make a diagram. Hopefully this helps out my visual readers.

[diagram: remote execution]

If you can improve this diagram, please let me know!

Example

I find that using one or more vagrant virtual machines for the remote endpoints is the best way to test this out. In my case I use Oh-My-Vagrant to set up these machines, but the method you use is entirely up to you! Here’s a sample remote execution. Please note that I have omitted a number of lines for brevity, and added emphasis to the more interesting ones.

james@hostname:~/code/mgmt$ ./mgmt run --remote examples/remote2a.yaml --remote examples/remote2b.yaml --tmp-prefix 
17:58:22 main.go:76: This is: mgmt, version: 0.0.5-3-g4b8ad3a
17:58:23 remote.go:596: Remote: Connect...
17:58:23 remote.go:607: Remote: Sftp...
17:58:23 remote.go:164: Remote: Self executable is: /home/james/code/gopath/src/github.com/purpleidea/mgmt/mgmt
17:58:23 remote.go:221: Remote: Remotely created: /tmp/mgmt-412078160/remote
17:58:23 remote.go:226: Remote: Remote path is: /tmp/mgmt-412078160/remote/mgmt
17:58:23 remote.go:221: Remote: Remotely created: /tmp/mgmt-412078160/remote
17:58:23 remote.go:226: Remote: Remote path is: /tmp/mgmt-412078160/remote/mgmt
17:58:23 remote.go:235: Remote: Copying binary, please be patient...
17:58:23 remote.go:235: Remote: Copying binary, please be patient...
17:58:24 remote.go:256: Remote: Copying graph definition...
17:58:24 remote.go:618: Remote: Tunnelling...
17:58:24 remote.go:630: Remote: Exec...
17:58:24 remote.go:510: Remote: Running: /tmp/mgmt-412078160/remote/mgmt run --hostname '192.168.121.201' --no-server --seeds 'http://127.0.0.1:2379' --file '/tmp/mgmt-412078160/remote/remote2a.yaml' --depth 1
17:58:24 etcd.go:2088: Etcd: Watch: Path: /_mgmt/exported/
17:58:24 main.go:255: Main: Waiting...
17:58:24 remote.go:256: Remote: Copying graph definition...
17:58:24 remote.go:618: Remote: Tunnelling...
17:58:24 remote.go:630: Remote: Exec...
17:58:24 remote.go:510: Remote: Running: /tmp/mgmt-412078160/remote/mgmt run --hostname '192.168.121.202' --no-server --seeds 'http://127.0.0.1:2379' --file '/tmp/mgmt-412078160/remote/remote2b.yaml' --depth 1
17:58:24 etcd.go:2088: Etcd: Watch: Path: /_mgmt/exported/
17:58:24 main.go:291: Config: Parse failure
17:58:24 main.go:255: Main: Waiting...
^C17:58:48 main.go:62: Interrupted by ^C
17:58:48 main.go:397: Destroy...
17:58:48 remote.go:532: Remote: Output...
|    17:58:23 main.go:76: This is: mgmt, version: 0.0.5-3-g4b8ad3a
|    17:58:47 main.go:419: Goodbye!
17:58:48 remote.go:636: Remote: Done!
17:58:48 remote.go:532: Remote: Output...
|    17:58:24 main.go:76: This is: mgmt, version: 0.0.5-3-g4b8ad3a
|    17:58:48 main.go:419: Goodbye!
17:58:48 remote.go:636: Remote: Done!
17:58:48 main.go:419: Goodbye!

You should see how we kick off the remote executions, and how they are wired back through the tunnel. In this particular case we terminated the runs with a ^C.

The example configurations I used are available here and here. If you had a terminal open on the first remote machine, after about a second you would have seen:

[root@omv1 ~]# ls -d /tmp/file*  /tmp/mgmt*
/tmp/file1a  /tmp/file2a  /tmp/file2b  /tmp/mgmt-412078160
[root@omv1 ~]# cat /tmp/file*
i am file1a
i am file2a, exported from host a
i am file2b, exported from host b

You can see the remote execution artifacts, and that there was clearly data exchange. You can repeat this example with --converged-timeout=5 to automatically terminate after five seconds of cluster wide inactivity.

Live remote hacking

Since mgmt is event based, and graph structure configurations manifest themselves as event streams, you can actually edit the input configuration on the initiating machine, and as soon as the file is saved, it will instantly remotely propagate and apply the graph differential.

For this particular example, since we export and collect resources through the tunnelled SSH connections, it means that editing the exported file will also cause both hosts to update that file on disk!

You’ll see this occurring with this message in the logs:

18:00:44 remote.go:973: Remote: Copied over new graph definition: examples/remote2b.yaml

While you might not necessarily want to use this functionality on a production machine, it will definitely make your interactive hacking sessions more useful, in particular because you never need to re-run parts of the graph which have already converged!
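Under the hood this amounts to watching the graph definition file for changes; here’s a toy sketch using the fsnotify library to show the general idea (this isn’t mgmt’s actual code).

package main

import (
    "log"

    "github.com/fsnotify/fsnotify"
)

func main() {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        log.Fatal(err)
    }
    defer watcher.Close()
    // watch the graph definition for writes
    if err := watcher.Add("examples/remote2b.yaml"); err != nil {
        log.Fatal(err)
    }
    for event := range watcher.Events {
        if event.Op&fsnotify.Write != 0 {
            log.Println("graph definition changed:", event.Name)
            // here mgmt would copy over the new graph and apply the differential
        }
    }
}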

Auth

In case you’re wondering, mgmt can look in your ~/.ssh/ for keys to use for the auth, or it can prompt you interactively. It can also read a plain text password from the connection string, but this isn’t a recommended security practice.

Hierarchical remote execution

Even though we recommend running mgmt in a normal clustered mode instead of over SSH, we didn’t want to limit the number of hosts that can be configured using remote execution. For this reason it would be architecturally simple to add support for what we’ve decided to call “hierarchical remote execution”.

In this mode, the primary initiator would first connect to one or more secondary nodes, which would then stage a second series of remote execution runs, resulting in an order of depth equal to two or more. This fan-out approach can be used to distribute the number of outgoing connections across more intermediate machines, or as a method to conserve remote execution bandwidth on the primary link into your datacenter, by having the secondary machines run most of the remote execution runs.

[diagram: hierarchical remote execution]

This particular extension hasn’t been built, although some of the plumbing has been laid. If you’d like to contribute this feature to the upstream project, please join us in #mgmtconfig on Freenode and let us (I’m @purpleidea) know!

Docs

There is some generated documentation for the mgmt remote package available. There is also the beginning of some additional documentation in the markdown docs. You can help contribute to either of these by sending us a patch!

Novel resources

Our event based architecture can enable some previously improbable kinds of resources. In particular, I think it would be quite beautiful if someone built a provisioning resource. The Watch method of the resource API normally serves to notify us of events, but since it is a main loop that blocks in a select call, it could also be used to run a small server that hosts a kickstart file and associated TFTP images. If you like this idea, please help us build it!

Conclusion

I hope you enjoyed this article and found this remote execution methodology as novel as we do. In particular I hope that I’ve demonstrated that configuration software doesn’t have to be constrained behind a static orchestration topology.

Happy Hacking,

James

Automatic clustering in mgmt

In mgmt, deploying and managing your clustered config management infrastructure needs to be as automatic as the infrastructure you’re using mgmt to manage. With mgmt, instead of a centralized data store, we function as a distributed system, built on top of etcd and the raft protocol.

In this article, I’ll cover how this feature works.

Foreword:

Mgmt is a next generation configuration management project. If you haven’t heard of it yet, or you don’t remember why we use a distributed database, start by reading the previous articles.

Embedded etcd:

Since mgmt and etcd are both written in golang, the etcd code can be built into the same binary as mgmt. As a result, etcd can be managed directly from within mgmt. Unfortunately, there’s currently no recommended API to do this, but I’ve tried to get such a feature upstream to avoid code duplication in mgmt. If you can help out here, I’d really appreciate it! In the meantime, I’ve had to copy+paste the necessary portions into mgmt.

Clustering mechanics:

You can deploy an automatically clustered mgmt cluster by following these three steps:

1) If no mgmt servers exist you can start one up by running mgmt normally:

./mgmt run --file examples/graph0.yaml

2) To add any subsequent mgmt server, run mgmt normally, but point it at any number of existing mgmt servers with the --seeds argument:

./mgmt run --file examples/graph0.yaml --seeds <ip address:port>

3) Profit!

We internally implement a clustering algorithm which does the hard work of building and managing the etcd cluster for you, so that you don’t have to. If you’re interested, keep reading to find out how it works!

Clustering algorithm:

The clustering algorithm works as follows:

If you aren’t given any seeds, then assume you are the first etcd server (peer) and start-up. If you are given a seeds argument, then connect to that peer to join the cluster as a client. If you’d like to be promoted to a server, then you can “volunteer” by setting a special key in the cluster data store.

The existing cluster of peers will decide if they want additional peers, and if so, they can “nominate” someone from the pool of volunteers. If you have been nominated, you can start-up an etcd peer and peer with the rest of the cluster. Similarly, the cluster can decide to un-nominate a peer, and if you’ve been un-nominated, then you should shutdown your etcd server.

All cluster decisions are made by consensus using the raft algorithm. In practice this means that the elected cluster leader looks at the state of the system, and makes the necessary nomination changes.

Lastly, if you don’t want to be a peer any more, you can revoke your volunteer message, which will be seen by the cluster, and if you were running a server, you should receive an un-nominate message in response, which will let you shutdown cleanly.
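To give a flavour of the volunteer step, here’s a hedged sketch using the etcd v3 client API; the key path is my invention, modelled on the /_mgmt/idealClusterSize key shown below, and is not necessarily what mgmt actually uses.

package main

import (
    "context"
    "log"
    "time"

    "github.com/coreos/etcd/clientv3"
)

func main() {
    cli, err := clientv3.New(clientv3.Config{
        Endpoints:   []string{"http://127.0.0.1:2379"},
        DialTimeout: 5 * time.Second,
    })
    if err != nil {
        log.Fatal(err)
    }
    defer cli.Close()
    // volunteer to be promoted to a server by setting a special key; the
    // elected leader watches for these and nominates from the pool
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    if _, err := cli.Put(ctx, "/_mgmt/volunteers/h2", "http://127.0.0.1:2382"); err != nil {
        log.Fatal(err)
    }
}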

Disclaimer:

It’s probably worth mentioning that the current implementation has a few issues, and at least one race. The goal is to have it polished up by the time etcd v3 is released, but it’s perfectly usable for testing and experimentation today! If you don’t want to automatically cluster, you can always use the --no-server flag, and point mgmt at a manually managed mgmt cluster using the --seeds flag.

Testing:

Testing this feature on a single machine makes development and experimentation easier, so as a result, there are a few flags which make this possible.

--hostname <hostname>
With this flag, you can force your mgmt client to pretend it is running on a host with the above mentioned name. You can use this to specify --hostname h1, or --hostname h2, and so on; one for each mgmt agent you want to run on the same machine.

--server-urls <ip:port>
With this flag you can specify which IP address and port the etcd server will listen on for peer requests. By default this will use 127.0.0.1:2380, but when running multiple mgmt agents on the same machine you’ll need to specify this manually to avoid collisions. You can specify as many IP address and port pairs as you’d like by separating them with commas or semicolons. The --peer-urls flag is an alias which does the same thing.

--client-urls <ip:port>
This flag specifies which IP address and port the etcd server will listen on for client connections. It defaults to 127.0.0.1:2379, but you’ll occasionally want to specify this manually for the same reasons as mentioned above. You can specify as many IP address and port pairs as you’d like by separating them with commas or semicolons. This is the address that will be used by the --seeds flag when joining an existing cluster.

Elastic clustering:

In the future, you’ll be able to specify a much more elaborate method to decide how many hosts should be promoted into peers, and which hosts should be nominated or un-nominated when growing or shrinking the cluster.

At the moment, we do the grow or shrink operation when the current peer count does not match the requested cluster size. This value has a default of 5, and can even be changed dynamically. To do so you can run:

ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 put /_mgmt/idealClusterSize 3

You can also set it at start-up by using the --ideal-cluster-size flag.

Example:

Here’s a real example if you want to dive in. Try running the following four commands in separate terminals:

./mgmt run --file examples/etcd1a.yaml --hostname h1 --ideal-cluster-size 3
./mgmt run --file examples/etcd1b.yaml --hostname h2 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2381 --server-urls http://127.0.0.1:2382
./mgmt run --file examples/etcd1c.yaml --hostname h3 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2383 --server-urls http://127.0.0.1:2384
./mgmt run --file examples/etcd1d.yaml --hostname h4 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2385 --server-urls http://127.0.0.1:2386

Once you’ve done this, you should have a three host cluster! Check this by running any of these commands:

ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 member list
ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2381 member list
ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2383 member list

Note that you’ll need a v3 beta version of the etcdctl command which you can get by running ./build in the etcd git repo.

To grow your cluster, try increasing the desired cluster size to five:

ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2381 put /_mgmt/idealClusterSize 5

You should see the last host start-up an etcd server. If you reduce the idealClusterSize, you’ll see servers shutdown! You’re responsible if you destroy the cluster by setting it too low! You can then try growing your cluster again, but unfortunately due to a bug, hosts can’t be re-used yet, and you’ll get a “bind: address already in use” error. We hope to have this fixed shortly!

Security:

Unfortunately no authentication security or transport security has been implemented yet. We have a great design, but are busy working on other parts of the project at the moment. If you’d like to help out here, please let us know!

Future work:

There’s still a lot of work to do, to improve this feature. The biggest challenge has been getting a reasonable embedded server API upstream. It’s not clear whether this patch can be made to work or if something different will need to be written, but at least one other project looks like it could benefit from this as well.

Video:

A recording from the recent Berlin CoreOSFest 2016 has been published! I demoed these recent features, but one interesting note is that I am actually presenting an earlier version of the code which used the etcd V2 API. I’ve since ported the code to V3, but it is functionally similar. It’s probably worth mentioning, that I found the V3 API to be more difficult, but also more correct and powerful. I think it is a net improvement to the project.

Community:

I can’t end this blog post without mentioning some of the great stuff that’s been happening in the mgmt community! In particular, Felix has written some great code to run existing Puppet code on mgmt. Check out his work!

Upcoming speaking:

I’ve got some upcoming speaking in Hong Kong at HKOSCon16 and in Cape Town at DebConf16 about the project. Please ping me if you’ll be in one of these cities and would like to hack on mgmt or just chat about the project. I’m happy to give some impromptu demos if you ask!

Thanks for reading!

Happy Hacking,

James

PS: We now have a community run twitter account. Check us out!

Upcoming speaking in Hong Kong and South Africa

I’m thrilled to tell you that I’ll be speaking about mgmt in Hong Kong and South Africa. It will be my first time to both countries and my first time to Asia and Africa!

In Hong Kong I’ll be speaking at HKOSCon2016.

In South Africa I’ll be speaking at DebConf16.

I’m looking forward to meeting with many of the hard-working Debian hackers, and collaborating with them to build and promote excellent Free Software. The mgmt project considers both Fedora and Debian to be first class platforms, and parity is a primary design goal.

I’ll be presenting and demoing many of the new features in mgmt. If you haven’t heard about the project, please read some of the posts about it…

You’re also welcome to come join our IRC channel! It’s #mgmtconfig on Freenode.

Many special thanks to both conference organizers as well as Red Hat and the OSAS department for sponsoring my travel! Without it, visiting these places and interacting with new continents of hackers would be impossible.

Happy Hacking,

James

PS: We now have a community run twitter account. Check us out!

Automatic grouping in mgmt

In this post, I’ll tell you about the recently released “automatic grouping” or “AutoGroup” feature in mgmt, a next generation configuration management prototype. If you aren’t already familiar with mgmt, I’d recommend you start by reading the introductory post, and the second post. There’s also an introductory video.

Resources in a graph

Most configuration management systems use something called a directed acyclic graph, or DAG. This is a fancy way of saying that it is a bunch of circles (vertices) which are connected with arrows (edges). The arrows must be connected to exactly two vertices, and you’re only allowed to move along each arrow in one direction (directed). Lastly, if you start at any vertex in the graph, you must never be able to return to where you started by following the arrows (acyclic). If you can, the graph is not fit for our purpose.

[diagram: an example DAG, from Wikipedia]

The graphs in configuration management systems usually represent the dependency relationships (edges) between the resources (vertices) which is important because you might want to declare that you want a certain package installed before you start a service. To represent the kind of work that you want to do, different kinds of resources exist which you can use to specify that work.

Each of the vertices in a graph represents a unique resource, and each is backed by an individual software routine or “program” which can check the state of the resource, and apply the correct state if needed. This makes each resource idempotent. If we have many individual programs, this might turn out to be a lot of work to do to get our graph into the desired state!

Resource grouping

It turns out that some resources have a fixed overhead to starting up and running. If we can group resources together so that they share this fixed overhead, then our graph might converge faster. This is exactly what we do in mgmt!

Take for example, a simple graph such as the following:

[diagram: a simple DAG showing one svc, two file, and three pkg resources]

We can logically group the three pkg resources together and redraw the graph so that it now looks like this:

[diagram: the same DAG with the three pkg resources now grouped into one; overlapping vertices act as if they’re one vertex instead of three]

This all happens automatically of course! It is very important that the new graph is a faithful, logical representation of the original graph, so that the specified dependency relationships are preserved. What this represents, is that when multiple resources are grouped (shown by overlapping vertices in the graph) they run together as a single unit. This is the practical difference between running:

$ dnf install -y powertop
$ dnf install -y sl
$ dnf install -y cowsay

if not grouped, and:

$ dnf install -y powertop sl cowsay

when grouped. If you try this out you’ll see that the second scenario is much faster, and on my laptop about three times faster! This is because of fixed overhead such as cache updates, and the dnf dep solver that each process runs.

This grouping means mgmt uses this faster second scenario instead of the slower first scenario that all the current generation tools do. It’s also important to note that different resources can implement the grouping feature to optimize for different things besides performance. More on that later…

The algorithm

I’m not an algorithmist by training, so it took me some fiddling to come up with an appropriate solution. I’ve implemented it along with an extensive testing framework and a series of test cases, which it passes of course! If we ever find a graph that does not get grouped correctly, then we can iterate on the algorithm and add it as a new test case.

The algorithm turns out to be relatively simple. I first noticed that vertices which had a relationship between them must not get grouped, because that would undermine the precedence ordering of the vertices! This property is called reachability. I then attempt to group every vertex to every other vertex that has no reachability or reverse reachability to it!
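In code, under assumed simple types (a plain adjacency list, rather than mgmt’s actual graph library), the grouping predicate reduces to a pair of reachability checks:

// Graph is a simple adjacency list: vertex name -> names of direct successors.
type Graph map[string][]string

// reachable reports whether vertex `to` can be reached from vertex `from`.
func reachable(g Graph, from, to string) bool {
    seen := map[string]bool{}
    stack := []string{from}
    for len(stack) > 0 {
        v := stack[len(stack)-1]
        stack = stack[:len(stack)-1]
        if v == to {
            return true
        }
        if seen[v] {
            continue
        }
        seen[v] = true
        stack = append(stack, g[v]...)
    }
    return false
}

// canGroup permits grouping only when neither vertex can reach the other,
// which preserves the precedence ordering of the graph.
func canGroup(g Graph, v1, v2 string) bool {
    return !reachable(g, v1, v2) && !reachable(g, v2, v1)
}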

The hard part turned out to be getting all the plumbing surrounding the algorithm correct, and in particular the actual vertex merging algorithm, so that “discarded edges” are reattached in the correct places. I also took a bit of extra time to implement the algorithm as a struct which satisfies an “AutoGrouper” interface. This way, if you’d like to implement a different algorithm, it’s easy to drop in your replacement. I’m fairly certain that a more optimal version of my algorithm is possible for anyone wishing to do the analysis.

A quick note on nomenclature: I’ve actually decided to call this grouping and not merging, because we actually preserve the unique data of each resource so that they can be taken apart and mixed differently when (and if) there is a change in the compiled graph. This makes graph changeovers very cheap in mgmt, because we don’t have to re-evaluate anything which remains constant between graphs. Merging would imply a permanent reduction and loss of unique identity.

Parallelism and user choice

It’s worth noting two important points:

  1. Auto grouping of resources usually decreases the parallelism of a graph.
  2. The user might not want a particular resource to get grouped!

You might remember that one of the novel properties of mgmt is that it executes the graph in parallel whenever possible. Although grouping resources removes some of this parallelism, certain resources, such as the pkg resource, already have an innate constraint forcing sequential behaviour, namely: the package manager lock. Since these tools can’t operate in parallel, and since each execution has a fixed overhead, it’s almost always beneficial to group pkg resources together.

Grouping is also not mandatory, so while it is a sensible default, you can disable grouping per resource with a simple meta parameter.

Lastly, it’s also worth mentioning that grouping doesn’t “magically” happen without some effort. The underlying resource needs to know how to optimize, watch, check and apply multiple resources simultaneously for it to support the feature. At the moment, only the pkg resource can do any grouping, and even then, there could always be some room for improvement. It’s also not optimal (or even logical) to group certain types of resources, so those will never be able to do any grouping. We also don’t group together resources of different kinds, although mgmt could support this if a valid use case is ever found.
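
For the curious, here is a hedged Go sketch of what that resource-side contract might look like; the interface and method names are illustrative guesses at the shape of such an API, not mgmt’s exact one:

package res

// Res is a simplified stand-in for a resource: it can check the current
// state and apply the desired one if needed.
type Res interface {
	Kind() string                        // eg: "pkg", "file", "svc"
	CheckApply(apply bool) (bool, error) // idempotent converge step
}

// GroupableRes is a hypothetical sketch of what a resource must offer to
// support grouping: the surviving "parent" resource absorbs its peers and
// must then watch, check and apply on behalf of all of them at once.
type GroupableRes interface {
	Res
	GroupCmp(other GroupableRes) bool  // would grouping with other make sense?
	GroupRes(other GroupableRes) error // absorb other into this resource
	Grouped() []GroupableRes           // the peers currently hidden inside
}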

File grouping

As I mentioned, only the pkg resource supports grouping at this time. The file resource demonstrates a different use case for resource grouping. Suppose you want to monitor 10000 files in a particular directory, but they are specified individually. This would require far more inotify watches than a normal system usually allows, so the grouping algorithm could group them into a single resource, which then uses a recursive watcher such as fanotify to reduce the watcher count by a factor of 10000. Unfortunately neither the file resource grouping, nor the fanotify support for this exist at the moment. If you’d like to implement either of these, please let me know!

If you can think of another resource kind that you’d like to write, or in particular, if you know of one which would work well with resource grouping, please contact me!

Science!

I wouldn’t be a very good scientist (I’m actually a Physiologist by training) if I didn’t include some data and a demonstration that this all actually works, and improves performance! What follows will be a good deal of information, so skim through the parts you don’t care about.

Science <3 data

I decided to test the following four scenarios:

  1. single package, package check, package already installed
  2. single package, package install, package not installed
  3. three packages, package check, packages already installed
  4. three packages, package install, packages not installed

These are the situations you’d encounter when running your tool of choice to install one or more packages, and finding them either already present, or in need of installation. I timed each test, which ends when the tool tells us that our system has converged.

Each test is performed multiple times, and the average is taken, but only after we’ve run the tool at least twice so that the caches are warm.

We chose small packages so that delays due to bandwidth and latency are minimal, and so that our data is more representative of the underlying tool.

The single package tests use the powertop package, and the three package tests use powertop, sl, and cowsay. All tests were performed on an up-to-date Fedora 23 laptop, with an SSD. If you haven’t tried sl and cowsay, do give them a go!

The four tools tested were:

  1. puppet
  2. mgmt
  3. pkcon
  4. dnf

The last two are package manager front ends, included so that it’s more obvious what a given operation is expected to cost, and so that you can discern how much overhead is inherent to the package manager, and how much puppet or mgmt adds. Here are a few representative runs:

mgmt installation of powertop:

$ time sudo ./mgmt run --file examples/pkg1.yaml --converged-timeout=0
21:04:18 main.go:63: This is: mgmt, version: 0.0.3-1-g6f3ac4b
21:04:18 main.go:64: Main: Start: 1459299858287120473
21:04:18 main.go:190: Main: Running...
21:04:18 main.go:113: Etcd: Starting...
21:04:18 main.go:117: Main: Waiting...
21:04:18 etcd.go:113: Etcd: Watching...
21:04:18 etcd.go:113: Etcd: Watching...
21:04:18 configwatch.go:54: Watching: examples/pkg1.yaml
21:04:20 config.go:272: Compile: Adding AutoEdges...
21:04:20 config.go:533: Compile: Grouping: Algorithm: nonReachabilityGrouper...
21:04:20 main.go:171: Graph: Vertices(1), Edges(0)
21:04:20 main.go:174: Graphviz: No filename given!
21:04:20 pgraph.go:764: State: graphStateNil -> graphStateStarting
21:04:20 pgraph.go:825: State: graphStateStarting -> graphStateStarted
21:04:20 main.go:117: Main: Waiting...
21:04:20 pkg.go:245: Pkg[powertop]: CheckApply(true)
21:04:20 pkg.go:303: Pkg[powertop]: Apply
21:04:20 pkg.go:317: Pkg[powertop]: Set: installed...
21:04:25 packagekit.go:399: PackageKit: Woops: Signal.Path: /8442_beabdaea
21:04:25 packagekit.go:399: PackageKit: Woops: Signal.Path: /8443_acbadcbd
21:04:31 pkg.go:335: Pkg[powertop]: Set: installed success!
21:04:31 main.go:79: Converged for 0 seconds, exiting!
21:04:31 main.go:55: Interrupted by exit signal
21:04:31 pgraph.go:796: Pkg[powertop]: Exited
21:04:31 main.go:203: Goodbye!

real    0m13.320s
user    0m0.023s
sys    0m0.021s

puppet installation of powertop:

$ time sudo puppet apply pkg.pp 
Notice: Compiled catalog for computer.example.com in environment production in 0.69 seconds
Notice: /Stage[main]/Main/Package[powertop]/ensure: created
Notice: Applied catalog in 10.13 seconds

real    0m18.254s
user    0m9.211s
sys    0m2.074s

dnf installation of powertop:

$ time sudo dnf install -y powertop
Last metadata expiration check: 1:22:03 ago on Tue Mar 29 20:04:29 2016.
Dependencies resolved.
==========================================================================
 Package          Arch           Version            Repository       Size
==========================================================================
Installing:
 powertop         x86_64         2.8-1.fc23         updates         228 k

Transaction Summary
==========================================================================
Install  1 Package

Total download size: 228 k
Installed size: 576 k
Downloading Packages:
powertop-2.8-1.fc23.x86_64.rpm            212 kB/s | 228 kB     00:01    
--------------------------------------------------------------------------
Total                                     125 kB/s | 228 kB     00:01     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Installing  : powertop-2.8-1.fc23.x86_64                            1/1 
  Verifying   : powertop-2.8-1.fc23.x86_64                            1/1 

Installed:
  powertop.x86_64 2.8-1.fc23                                              

Complete!

real    0m10.406s
user    0m4.954s
sys    0m0.836s

puppet installation of powertop, sl and cowsay:

$ time sudo puppet apply pkg3.pp 
Notice: Compiled catalog for computer.example.com in environment production in 0.68 seconds
Notice: /Stage[main]/Main/Package[powertop]/ensure: created
Notice: /Stage[main]/Main/Package[sl]/ensure: created
Notice: /Stage[main]/Main/Package[cowsay]/ensure: created
Notice: Applied catalog in 33.02 seconds

real    0m41.229s
user    0m19.085s
sys    0m4.046s

pkcon installation of powertop, sl and cowsay:

$ time sudo pkcon install powertop sl cowsay
Resolving                     [=========================]         
Starting                      [=========================]         
Testing changes               [=========================]         
Finished                      [=========================]         
Installing                    [=========================]         
Querying                      [=========================]         
Downloading packages          [=========================]         
Testing changes               [=========================]         
Installing packages           [=========================]         
Finished                      [=========================]         

real    0m14.755s
user    0m0.060s
sys    0m0.025s

and finally, mgmt installation of powertop, sl and cowsay with autogrouping:

$ time sudo ./mgmt run --file examples/autogroup2.yaml --converged-timeout=0
21:16:00 main.go:63: This is: mgmt, version: 0.0.3-1-g6f3ac4b
21:16:00 main.go:64: Main: Start: 1459300560994114252
21:16:00 main.go:190: Main: Running...
21:16:00 main.go:113: Etcd: Starting...
21:16:00 main.go:117: Main: Waiting...
21:16:00 etcd.go:113: Etcd: Watching...
21:16:00 etcd.go:113: Etcd: Watching...
21:16:00 configwatch.go:54: Watching: examples/autogroup2.yaml
21:16:03 config.go:272: Compile: Adding AutoEdges...
21:16:03 config.go:533: Compile: Grouping: Algorithm: nonReachabilityGrouper...
21:16:03 config.go:533: Compile: Grouping: Success for: Pkg[powertop] into Pkg[cowsay]
21:16:03 config.go:533: Compile: Grouping: Success for: Pkg[sl] into Pkg[cowsay]
21:16:03 main.go:171: Graph: Vertices(1), Edges(0)
21:16:03 main.go:174: Graphviz: No filename given!
21:16:03 pgraph.go:764: State: graphStateNil -> graphStateStarting
21:16:03 pgraph.go:825: State: graphStateStarting -> graphStateStarted
21:16:03 main.go:117: Main: Waiting...
21:16:03 pkg.go:245: Pkg[autogroup:(cowsay,powertop,sl)]: CheckApply(true)
21:16:03 pkg.go:303: Pkg[autogroup:(cowsay,powertop,sl)]: Apply
21:16:03 pkg.go:317: Pkg[autogroup:(cowsay,powertop,sl)]: Set: installed...
21:16:08 packagekit.go:399: PackageKit: Woops: Signal.Path: /8547_cbeaddda
21:16:08 packagekit.go:399: PackageKit: Woops: Signal.Path: /8548_bcaadbce
21:16:16 pkg.go:335: Pkg[autogroup:(cowsay,powertop,sl)]: Set: installed success!
21:16:16 main.go:79: Converged for 0 seconds, exiting!
21:16:16 main.go:55: Interrupted by exit signal
21:16:16 pgraph.go:796: Pkg[cowsay]: Exited
21:16:16 main.go:203: Goodbye!

real    0m15.621s
user    0m0.040s
sys    0m0.038s

Results and analysis

My hard work seems to have paid off, because we do indeed see a noticeable improvement from grouping package resources. The data shows that even in the single package comparison cases, mgmt has very little overhead, as demonstrated by the fact that the mgmt run times are very similar to the times it takes to run the package managers manually.

In the three package scenario, mgmt installation is approximately 2.39 times faster than puppet. Checking was about 12 times faster! These ratios are expected to grow as the number of resources increases.

Bigger bars are worse… puppet is in red, mgmt is in blue.

The four groups along the x axis correspond to the four scenarios I tested; within each group, bars 1, 2 and 3 correspond to the individual runs of that scenario, and the average of the three is shown as well.

Versions

The test wouldn’t be complete if we didn’t tell you which specific version of each tool we used. Let’s time those as well! ;)

puppet:

$ time puppet --version 
4.2.1

real    0m0.659s
user    0m0.525s
sys    0m0.064s

mgmt:

$ time ./mgmt --version
mgmt version 0.0.3-1-g6f3ac4b

real    0m0.007s
user    0m0.006s
sys    0m0.002s

pkcon:

$ time pkcon --version
1.0.11

real    0m0.013s
user    0m0.006s
sys    0m0.005s

dnf:

$ time dnf --version
1.1.7
  Installed: dnf-0:1.1.7-2.fc23.noarch at 2016-03-17 13:37
  Built    : Fedora Project at 2016-03-09 16:45

  Installed: rpm-0:4.13.0-0.rc1.12.fc23.x86_64 at 2016-03-03 09:39
  Built    : Fedora Project at 2016-02-29 09:53

real    0m0.438s
user    0m0.379s
sys    0m0.036s

Yep, puppet even takes the longest to tell us what version it is. Now I’m just teasing…

Methodology

It might have been more useful to time the removal of packages instead, to further reduce the variability from internet bandwidth and latency; however, since most configuration management is used to install packages (rather than remove them), we figured this would be more appropriate and easier to understand. You’re welcome to conduct your own study and share the results!

Additionally, for fun, I also looked at puppet runs where three individual resources were used instead of a single resource with the title being an array of all three packages, and found no significant difference in the results. Indeed puppet runs dnf three separate times in either scenario:

$ ps auxww | grep dnf
root     12118 27.0  1.4 417060 110864 ?       Ds   21:57   0:03 /usr/bin/python3 /usr/bin/dnf -d 0 -e 0 -y install powertop
$ ps auxww | grep dnf
root     12713 32.7  2.0 475204 159840 ?       Rs   21:57   0:02 /usr/bin/python3 /usr/bin/dnf -d 0 -e 0 -y install sl
$ ps auxww | grep dnf
root     13126  0.0  0.7 275324 55608 ?        Rs   21:57   0:00 /usr/bin/python3 /usr/bin/dnf -d 0 -e 0 -y install cowsay

Data

If you’d like to download the raw data as a text formatted table, and the terminal output from each type of run, I’ve posted it here.

Conclusion

I hope that you enjoyed this feature and analysis, and that you’ll help contribute to making it better. Come join our IRC channel and say hello! Thanks to those who reviewed my article and pointed out some good places for improvements!

Happy Hacking,

James


Automatic edges in mgmt

It’s been two months since I announced mgmt, and now it’s time to continue the story by telling you more about the design of what’s now in git master. Before I get into those details, let me quickly recap what’s happened since then.

Mgmt community recap:

Okay, time to tell you what’s new in mgmt!

Types vs. resources:

Configuration management systems have the concept of primitives for the basic building blocks of functionality. The well-known ones are “package”, “file”, “service” and “exec” or “execute”. Chef calls these “resources”, while puppet (misleadingly) calls them “types”.

I made the mistake of calling the primitives in mgmt “types”, no doubt due to my extensive background in puppet. However, this overloads the word, because it usually refers to programming types, so I’ve decided not to use it any more to refer to these primitives. The Chef folks got this right! From now on they’ll be referred to as “resources”, or “res” when abbreviated.

Mgmt now has: “noop”, “file”, “svc”, “exec”, and now: “pkg”…

The package (pkg) resource:

The most obvious change to mgmt, is that it now has a package resource. I’ve named it “pkg” because we hackers prefer to keep strings short. Creating a pkg resource is difficult for two reasons:

  1. Mgmt is event based
  2. Mgmt should support many package managers

An event based system involves an inotify watch on the rpmdb (or a similar watch on /var/lib/dpkg/), and the logic to respond to it. Supporting many package managers boils down to covering the entire matrix of possible GNU/Linux packaging systems. Yuck! Additionally, it would be particularly unfriendly if we primarily supported RPM and DNF based systems, but left the DPKG and APT backend out as “an exercise for the community”.

Therefore, we solve both of these problems by basing the pkg resource on the excellent PackageKit project! PackageKit provides the events we need, and more importantly, it supports many backends already! If there’s ever a new backend that needs to be added, you can add it upstream in PackageKit, and everyone (including mgmt) will benefit from your work.
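
To give a feel for the event side, here is a rough Go sketch that subscribes to PackageKit signals over D-Bus using the godbus library; the specific signal name and match rules are illustrative assumptions, not a description of mgmt’s actual watcher:

package main

import (
	"log"

	"github.com/godbus/dbus/v5"
)

func main() {
	conn, err := dbus.SystemBus()
	if err != nil {
		log.Fatal(err)
	}
	// Ask the bus to route PackageKit signals to us. The member name here
	// is an assumption for illustration; a real watcher would pick its
	// signals carefully and filter out its own transactions.
	if err := conn.AddMatchSignal(
		dbus.WithMatchInterface("org.freedesktop.PackageKit"),
		dbus.WithMatchMember("TransactionListChanged"),
	); err != nil {
		log.Fatal(err)
	}
	ch := make(chan *dbus.Signal, 16)
	conn.Signal(ch)
	for sig := range ch {
		// On each event we would re-check the package state and apply if
		// something diverged, instead of polling on a timer.
		log.Printf("PackageKit event: %s %v", sig.Name, sig.Body)
	}
}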

As a result, I’m proud to announce that both Debian and Fedora (and many other friends) all got a working pkg resource on day one. Here’s a small demo:

Run mgmt:

root@debian:~/mgmt# time ./mgmt run --file examples/pkg1.yaml 
18:58:25 main.go:65: This is: mgmt, version: 0.0.2-41-g963f025
[snip]
18:18:44 pkg.go:208: Pkg[powertop]: CheckApply(true)
18:18:44 pkg.go:259: Pkg[powertop]: Apply
18:18:44 pkg.go:266: Pkg[powertop]: Set: installed...
18:18:52 pkg.go:284: Pkg[powertop]: Set: installed success!

The “powertop” package will install… Now while mgmt is still running, remove powertop:

root@debian:~# pkcon remove powertop
Resolving                     [=========================]         
Testing changes               [=========================]         
Finished                      [=========================]         
Removing                      [=========================]         
Loading cache                 [=========================]         
Running                       [=========================]         
Removing packages             [=========================]         
Committing changes            [=========================]         
Finished                      [=========================]         
root@debian:~# which powertop
/usr/sbin/powertop

It gets installed right back! Similarly, you can do it like this:

root@debian:~# apt-get -y remove powertop
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libnl-3-200 libnl-genl-3-200
Use 'apt-get autoremove' to remove them.
The following packages will be REMOVED:
  powertop
0 upgraded, 0 newly installed, 1 to remove and 80 not upgraded.
After this operation, 542 kB disk space will be freed.
(Reading database ... 47528 files and directories currently installed.)
Removing powertop (2.6.1-1) ...
Processing triggers for man-db (2.7.0.2-5) ...
root@debian:~# which powertop
/usr/sbin/powertop

And it will also get installed right back! Try it yourself to see it happen “live”! Similar behaviour can be seen on Fedora and other distros.

As a quick aside: if you’re a C hacker and you would like to help with the upstream PackageKit project, they would surely love your contributions, and in particular, we here working on mgmt would especially like it if you worked on any of the open issues that we’ve uncovered. In order of decreasing severity, they are: #118 (please help!), #117 (needs love), #110 (needs testing), and #116 (would be nice to have). If you’d like to test mgmt on your favourite distro, and report and fix any issues, that would be helpful too!

Automatic edges:

Since we’re now hooked into the pkg resource, there’s no reason we can’t use that wealth of knowledge to make mgmt more powerful. For example, the PackageKit API can give us the list of files that a certain package would install. Since any file resource would obviously want to get “applied” after the package is installed, we use this information to automatically generate the relationships or “edges” in the graph. This means that module authors don’t have to waste time manually adding or updating the “require” relationships in their modules!

For example, the /etc/drbd.conf file will want to require the drbd-utils package to be installed first. With traditional config management systems, if this dependency chain is missing, your system will not be in a converged state after one run, and will require another run. With mgmt, since it is event based, it would still converge, but it might run in a sub-optimal order. That’s one reason why we add this dependency for you automatically.

This is represented via what mgmt calls the “AutoEdges” API. (If you can think of a better name, please tell me now!) It’s also worth noting that this isn’t entirely a novel idea. Puppet has a concept of “autorequires”, which is used for some of their resources, but doesn’t work with packages. I’m particularly proud of my design, because in my opinion, I think the API and mechanism in mgmt are much more powerful and logical.

Here’s a small demo:

james@fedora:~$ ./mgmt run --file examples/autoedges3.yaml 
20:00:38 main.go:65: This is: mgmt, version: 0.0.2-42-gbfe6192
[snip]
20:00:38 configwatch.go:54: Watching: examples/autoedges3.yaml
20:00:38 config.go:248: Compile: Adding AutoEdges...
20:00:38 config.go:313: Compile: Adding AutoEdge: Pkg[drbd-utils] -> Svc[drbd]
20:00:38 config.go:313: Compile: Adding AutoEdge: Pkg[drbd-utils] -> File[file1]
20:00:38 config.go:313: Compile: Adding AutoEdge: Pkg[drbd-utils] -> File[file2]
20:00:38 main.go:149: Graph: Vertices(4), Edges(3)
[snip]

Here we define four resources: pkg (drbd-utils), svc (drbd), and two files (/etc/drbd.conf and /etc/drbd.d/), both of which happen to be listed inside the RPM package! The AutoEdge magic works out these dependencies for us by examining the package data, and as you can see, adds the three edges. Unfortunately, there is no elegant way that I know of to add an automatic relationship between the svc and any of these files at this time. Suggestions welcome.

Finally, we also use the same interface to make sure that a parent directory gets created before any managed file that is a child of it.

Automatic edges internals:

How does it work? Each resource has a method to generate a “unique id” for that resource. This happens in the “UIDs” method. Additionally, each resource has an “AutoEdges” method which, unsurprisingly, generates an “AutoEdge” object (struct). When the compiler is generating the graph and adding edges, it calls two methods on this AutoEdge object:

  1. Next()
  2. Test(…)

The Next() method produces a list of possible matching edges for that resource. Whichever edges match are added to the graph, and the result of each match is fed into the Test(…) function. This information is used to tell the resource whether there are more potential matches available or not. The process iterates until Test returns false, which means that there are no more available matches.
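
Putting that together, here is a simplified sketch of the iteration contract; the type and method shapes are stand-ins modelled on the description above, not mgmt’s exact code:

package autoedge

// ResUID and AutoEdge are simplified stand-ins for mgmt's types, sketching
// the Next/Test iteration described above.
type ResUID interface {
	IFF(other ResUID) bool // do these two UIDs match one another?
}

type AutoEdge interface {
	Next() []ResUID           // candidate UIDs to try and match
	Test(matched []bool) bool // feed results back; false means we're done
}

// addAutoEdges drives one resource's AutoEdge object against the UIDs of
// every other resource in the graph. A real implementation would add a
// graph edge wherever a match is found; here we only record the result.
func addAutoEdges(ae AutoEdge, graphUIDs []ResUID) {
	for {
		candidates := ae.Next()
		matched := make([]bool, len(candidates))
		for i, c := range candidates {
			for _, u := range graphUIDs {
				if c.IFF(u) {
					matched[i] = true // an edge would be added here
				}
			}
		}
		if !ae.Test(matched) {
			return // no more potential matches remain
		}
	}
}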

This iteration ensures that a file such as /var/lib/foo/bar/baz/ will first seek a dependency on /var/lib/foo/bar/, but will be able to fall back to /var/lib/ if that’s the first file resource available. This way, we avoid adding more resource dependencies than necessary, which would diminish the amount of parallelism possible while running the graph.

Lastly, it’s also worth noting that users can choose to disable AutoEdges per resource if they so desire. If you’ve got an idea for a clever automatic edge, please contact me, send a patch, or leave the information in the comments!

Contributing:

Good ideas and designs help, but contributors are what will make the project. All sorts of help is appreciated, and you can join in even if you’re not an “expert”. I’ll try to tag “easy” or “first time” patches with the “mgmtlove” tag. Feel free to work on other issues too, or suggest something that you think will help! Want to add more interesting resources? Anyone want to write a libvirt resource? How about a network resource? Use your imagination!

Lastly, thanks to both Richard and Matthias Klumpp in particular from the PackageKit project for their help so far, and to everyone else who has contributed in some way.

That’s all for today. I’ve got more exciting things coming. Please contribute!

Happy Hacking!

James

Next generation configuration mgmt

It’s no secret to the readers of this blog that I’ve been active in the configuration management space for some time. I owe most of my knowledge to what I’ve learned while working with Puppet and from other hackers working in and around various other communities.

I’ve published a number of articles in an attempt to push the field forwards, and to share the knowledge that I’ve learned with others. I’ve spent many nights thinking about these problems, but it is not without some chagrin that I realized that the current state-of-the-art in configuration management cannot easily (or elegantly) solve all the problems for which I wish to write solutions.

To that end, I’d like to formally present my idea (and code) for a next generation configuration management prototype. I’m calling my tool mgmt.

Design triad

Mgmt has three unique design elements which differentiate it from other tools. I’ll try to cover these three points, and explain why they’re important. The summary:

  1. Parallel execution, to run all the resources concurrently (where possible)
  2. Event driven, to monitor and react dynamically only to changes (when needed)
  3. Distributed topology, so that scale and centralization problems are replaced with a robust distributed system

The code is available, but you may prefer to read on as I dive deeper into each of these elements first.

1) Parallel execution

Fundamentally, all configuration management systems represent the dependency relationships between their resources in a graph, typically one that is directed and acyclic.

Directed acyclic graph g1, showing the dependency relationships with black arrows, and the linearized dependency sort order (a topological sort) with red arrows.

Unfortunately, the execution of this graph typically has a single worker that runs through a linearized, topologically sorted version of it. There is no reason why a graph with a number of disconnected parts cannot run each separate section in parallel with the others.

g2

Graph g2 with the red arrows again showing the execution order of the graph. Please note that this graph is composed of two disconnected parts: one diamond on the left and one triplet on the right, both of which can run in parallel. Additionally, nodes 2a and 2b can run in parallel only after 1a has run, and node 3a requires the entire left diamond to have succeeded before it can execute.

Typically, some nodes will have a common dependency which, once met, will allow all of its children to execute simultaneously.
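
As a toy illustration of this design (and not mgmt’s actual scheduler), each vertex can run in its own goroutine and simply block until its direct parents have finished:

package main

import (
	"fmt"
	"sync"
	"time"
)

// runGraph executes each vertex in its own goroutine, waiting only on its
// direct parents, so disconnected parts of the graph proceed in parallel.
func runGraph(deps map[string][]string) {
	done := make(map[string]chan struct{})
	for v := range deps {
		done[v] = make(chan struct{})
	}
	var wg sync.WaitGroup
	for v, parents := range deps {
		wg.Add(1)
		go func(v string, parents []string) {
			defer wg.Done()
			for _, p := range parents {
				<-done[p] // block until each parent has finished
			}
			time.Sleep(time.Second) // stand-in for the real work
			fmt.Println("applied:", v)
			close(done[v]) // signal our children
		}(v, parents)
	}
	wg.Wait()
}

func main() {
	// The two disconnected parts below run concurrently; total runtime is
	// bounded by the longest chain, not the sum of all vertices.
	runGraph(map[string][]string{
		"1a": nil, "2a": {"1a"}, "2b": {"1a"}, "3a": {"2a", "2b"},
		"1b": nil, "2c": {"1b"},
	})
}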

This is the first major design improvement that the mgmt tool implements. It has obvious improvements for performance, in that a long running process in one part of the graph (eg: a package installation) will cause no delay on a separate disconnected part of the graph which is in the process of converging unrelated code. It also has other benefits which we will discuss below.

In practice this is particularly powerful since most servers under configuration management typically combine different modules, many of which have no inter-dependencies.

An example is the best way to show off this feature. Here we have a set of four long-running (10 second) processes or exec resources. Three of them form a linear dependency chain, while the fourth has no dependencies or prerequisites. I’ve asked the system to exit after it has been converged for five seconds. As you can see in the example, it finishes five seconds after the limiting resource chain completes, which is the longest running delay in the whole process. That limiting chain took 30 seconds, which we can see in the log as three 10 second executions. The logical total of about 35 seconds, as expected, is shown at the end:

$ time ./mgmt run --file graph8.yaml --converged-timeout=5 --graphviz=example1.dot
22:55:04 This is: mgmt, version: 0.0.1-29-gebc1c60
22:55:04 Main: Start: 1452398104100455639
22:55:04 Main: Running...
22:55:04 Graph: Vertices(4), Edges(2)
22:55:04 Graphviz: Successfully generated graph!
22:55:04 State: graphStarting
22:55:04 State: graphStarted
22:55:04 Exec[exec4]: Apply //exec4 start
22:55:04 Exec[exec1]: Apply //exec1 start
22:55:14 Exec[exec4]: Command output is empty! //exec4 end
22:55:14 Exec[exec1]: Command output is empty! //exec1 end
22:55:14 Exec[exec2]: Apply //exec2 start
22:55:24 Exec[exec2]: Command output is empty! //exec2 end
22:55:24 Exec[exec3]: Apply //exec3 start
22:55:34 Exec[exec3]: Command output is empty! //exec3 end
22:55:39 Converged for 5 seconds, exiting! //converged for 5s
22:55:39 Interrupted by exit signal
22:55:39 Exec[exec4]: Exited
22:55:39 Exec[exec1]: Exited
22:55:39 Exec[exec2]: Exited
22:55:39 Exec[exec3]: Exited
22:55:39 Goodbye!

real    0m35.009s
user    0m0.008s
sys     0m0.008s
$

Note that I’ve edited the example slightly to remove some unnecessary log entries for readability sake, and I have also added some comments and emphasis, but aside from that, this is actual output! The tool also generated graphviz output which may help you better understand the problem:

example1.dot

This example is obviously contrived, but is designed to illustrate the capability of the mgmt tool.

Hopefully you’ll be able to come up with more practical examples.

2) Event driven

All configuration management systems have some notion of idempotence. Put simply, an idempotent operation is one which can be applied multiple times without causing the result to diverge from the desired state. In practice, each individual resource will typically check the state of the element, and if different than what was requested, it will then apply the necessary transformation so that the element is brought to the desired state.

The current generation of configuration management tools typically checks the state of each element once every 30 minutes. Some do it more or less often, and some do it only when manually requested. In all cases, this can be an expensive operation due to the size of the graph and the cost of each check operation. This problem is compounded by the fact that the graph doesn’t run in parallel.

g3

In this time / state sequence diagram g3, time progresses from left to right. Each of the three elements (from top to bottom) wants to converge on states a, b and c respectively. Initially the first two are in states x and y, whereas the third is already converged. At t1 the system runs and converges the graph, which entails a state change of the first and second elements. At some time t2, the elements are changed by some external force, and the system is no longer converged. We won’t notice till later! At time t3, when we run for the second time, we notice that the second and third elements are no longer converged, and we apply the necessary operations to fix this. An unknown amount of time passed where our cluster was in a diverged or unhealthy state. Traditionally this is on the order of 30 minutes.

More importantly, if something diverges from the requested state you might wait 30 minutes before it is noticed and repaired by the system!

The mgmt system is unique, because I realized that an event based system could fulfill the same desired behaviour, and in fact it offers a more general and powerful solution. This is the second major design improvement that the mgmt tool implements.

These events that we’re talking about are inotify events for file changes, systemd events (from dbus) for service changes, packagekit events (from dbus again) for package change events, and events from exec calls, timers, network operations and more! In the inotify example, on first run of the mgmt system, an inotify watch is taken on the file we want to manage, the state is checked and it is converged if need be. We don’t ever need to check the state again unless inotify tells us that something happened!
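mgmt implements its own watchers, but a minimal, self-contained illustration of the inotify pattern, using the third-party fsnotify library, might look like this:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/tmp/managed.conf"); err != nil {
		log.Fatal(err)
	}
	// checkApply is a placeholder for the resource's usual idempotent
	// converge step. We converge once up front, then only again when the
	// kernel tells us something actually changed.
	checkApply := func() { log.Println("checking and converging state...") }
	checkApply()
	for {
		select {
		case ev := <-w.Events:
			log.Println("event:", ev) // write, remove, rename, ...
			checkApply()
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
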

g4

In this time / state sequence diagram g4, time progresses from left to right. After the initial run, since all the elements are being continuously monitored, the instant something changes, mgmt reacts and fixes it almost instantly.

Astute config mgmt hackers might end up realizing three interesting consequences:

  1. If we don’t want the mgmt program to continuously monitor events, it can be told to exit after the graph converges, and run again 30 minutes later. This can be done with my system by running it from cron with the --converged-timeout=1 flag (see the sketch after this list). This effectively offers the same behaviour that current generation systems do, for administrators who do not want to experiment with a newer model. Thus, the current systems are a special, simplified model of mgmt!
  2. It is possible that some resources don’t offer an event watching mechanism. In those instances, a fallback to polling is possible for that specific resource. That said, there aren’t any missing event APIs that your narrator knows about at this time.
  3. A monitoring system (read: nagios and friends) could probably be built with this architecture, thus demonstrating that my world view of configuration management is actually a generalized version of system monitoring! They’re the same discipline!
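
Here is a small, self-contained sketch of the converged-timeout idea from point 1: a timer that is reset by every incoming event, and which triggers an exit once no events have arrived for the full window. This is an illustration, not mgmt’s implementation:

package main

import (
	"fmt"
	"time"
)

// convergedTimeout exits once no events arrive for the given window.
func convergedTimeout(events <-chan string, window time.Duration) {
	timer := time.NewTimer(window)
	for {
		select {
		case ev, ok := <-events:
			if !ok {
				return
			}
			fmt.Println("event:", ev) // activity: reset the countdown
			if !timer.Stop() {
				<-timer.C // drain a timer that already fired
			}
			timer.Reset(window)
		case <-timer.C:
			fmt.Printf("Converged for %v, exiting!\n", window)
			return
		}
	}
}

func main() {
	events := make(chan string)
	go func() {
		events <- "file changed"
		time.Sleep(1 * time.Second)
		events <- "file converged"
	}()
	convergedTimeout(events, 5*time.Second)
}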

Here is a small real-world example to demonstrate this feature. I have started the agent, and I have told it to create three files (f1, f2, f3) with the contents seen below, and also, to ensure that file f4 is not present. As you can see the mgmt system responds quite quickly:

james@computer:/tmp/mgmt$ ls
f1  f2  f3
james@computer:/tmp/mgmt$ cat *
i am f1
i am f2
i am f3
james@computer:/tmp/mgmt$ rm -f f2 && cat f2
i am f2
james@computer:/tmp/mgmt$ echo blah blah > f2 && cat f2
i am f2
james@computer:/tmp/mgmt$ touch f4 && file f4
f4: cannot open `f4' (No such file or directory)
james@computer:/tmp/mgmt$ ls
f1  f2  f3
james@computer:/tmp/mgmt$

That’s fast!

3) Distributed topology

All software typically runs with some sort of topology. Puppet and Chef normally run in a client server topology, where you typically have one server with many clients, each running an agent. They also both offer a standalone mode, but in general this is not more interesting than running a fancy bash script. In this context, I define interesting as “relating to clustered, multiple machine architectures”.

g5

Here in graph g5 you can see one server which has three clients initiating requests to it.

This traditional model of computing is well-known, and fairly easy to reason about. You typically put all of your code in one place (on the server) and the clients or agents need very little personalized configuration to get working. However, it can suffer from performance and scalability issues, and it can also be a single point of failure for your infrastructure. Make no mistake: if you manage your infrastructure properly, then when your configuration management infrastructure is down, you will be unable to bring up new machines or modify existing ones! This can be a disastrous type of failure, and is one which is seldom planned for in disaster recovery scenarios!

Other systems such as Ansible are actually orchestrators, and are not technically configuration management in my opinion. That doesn’t mean they don’t share much of the same problem space, and in fact they are usually idempotent and share many of the same properties of traditional configuration management systems. They are useful and important tools!

graph6

The key difference about an orchestrator is that it typically operates with a push model, where the server (or the sysadmin laptop) initiates a connection to the machines that it wants to manage. One advantage is that this is sometimes very easy to reason about for multi machine architectures; however, it shares the usual downsides around having a single point of failure. Additionally, there are some very real performance considerations when running large clusters of machines. In practice these clusters are typically segmented or divided in some logical manner so as to lessen the impact of this, which unfortunately detracts from the aforementioned simplicity of the solution.

Unfortunately, with either of these two topologies, we can’t immediately detect when an issue has occurred and respond without sufficiently advanced third party monitoring. By itself, a machine that is being managed by orchestration cannot detect an issue and communicate back to its operator, or tell the cluster of servers it peers with to react accordingly.

The good news about current and future generation topologies is that algorithms such as the Paxos family and Raft are gaining wider acceptance, and good implementations now exist as Free Software. Mgmt depends on these algorithms to create a mesh of agents. There are no clients and servers, only peers! Each peer can choose to both export and collect data from a distributed data store which lives as part of the cluster of peers. The software that currently implements this data store is a marvellous piece of engineering called etcd.

graph7

In graph g7, you can see what a fully interconnected graph topology might look like. It should be clear that the number of connections (or edges) is quite large. Try to work out the number of edges required for a fully connected graph with 128 nodes; for n nodes it’s n×(n−1)/2. Hint: it’s large!

In practice the number of connections required for each peer to connect to each other peer would be too great, so instead the cluster first achieves distributed consensus, and then the elected leader picks a certain number of machines to run etcd masters. All other agents then connect through one of these masters. The distributed data store can easily handle failures, and agents can reconnect seamlessly to a different temporary master should they need to. If there is a lack or an abundance of transient masters, then the cluster promotes or demotes an agent automatically by asking it to start or stop an etcd process on its host.

g8

In graph g8, you can see a tightly interconnected centre of nodes running both their configuration management tasks, but also etcd masters. Each additional peer picks any of them to connect to. As the number of nodes scale, it is far easier to scale such a cluster. Future algorithm designs and optimizations should help this system scale to unprecedented host counts. It should go without saying that it would be wise to ensure that the nodes running etcd masters are in different failure domains.

By allowing hosts to export and collect data from the distributed store, we actually end up with a mechanism that is quite similar to what Puppet calls exported resources. In my opinion, the mechanism and data interchange are actually a brilliant idea, but with some obvious shortcomings in the implementation. This is because, for a cluster of N nodes, each wishing to exchange data with one another, puppet must run N times (once on each node), and then N-1 more times for the entire cluster to see all of the exchanged data. Each of these runs requires an entire sequential pass through every resource, and an expensive check of each resource, each time.

In contrast, with mgmt, the graph is redrawn only when an etcd event notifies us of a change in the data store, and when the new graph is applied, only members who are affected either by a change in definition or dependency need to be re-run. In practice this means that a small cluster where the resources themselves have a negligible apply time, can converge a complete connected exchange of data in less than one second.
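
As a rough sketch of the collect side of this pattern, here is how a peer could watch for exported data using the etcd clientv3 library; the key layout is an invented example, not mgmt’s actual schema:

package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()
	// Hypothetical layout: each peer writes its exported resources under
	// a shared prefix, keyed by its own hostname.
	for resp := range cli.Watch(context.Background(), "/exported/", clientv3.WithPrefix()) {
		for _, ev := range resp.Events {
			log.Printf("store changed: %s %q", ev.Type, ev.Kv.Key)
		}
		// Only now would we recompile the graph, and only resources whose
		// definition or dependencies changed would need to be re-run.
	}
}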

An example demonstrates this best.

  • I have three nodes in the system: A, B, C.
  • Each creates four files, two of which it will export.
  • On host A, the two files are: /tmp/mgmtA/f1a and /tmp/mgmtA/f2a.
  • On host A, it exports: /tmp/mgmtA/f3a and /tmp/mgmtA/f4a.
  • On host A, it collects all available (exported) files into: /tmp/mgmtA/
  • It is done similarly with B and C, except with the letters B and C substituted into the emphasized locations above.
  • For demonstration purposes, I startup the mgmt engine first on A, then B, and finally C, all the while running various terminal commands to keep you up-to-date.

As before, I’ve trimmed the logs and annotated the output for clarity:

james@computer:/tmp$ rm -rf /tmp/mgmt* # clean out everything
james@computer:/tmp$ mkdir /tmp/mgmt{A..C} # make the example dirs
james@computer:/tmp$ tree /tmp/mgmt* # they're indeed empty
/tmp/mgmtA
/tmp/mgmtB
/tmp/mgmtC

0 directories, 0 files
james@computer:/tmp$ # run node A, it converges almost instantly
james@computer:/tmp$ tree /tmp/mgmt*
/tmp/mgmtA
├── f1a
├── f2a
├── f3a
└── f4a
/tmp/mgmtB
/tmp/mgmtC

0 directories, 4 files
james@computer:/tmp$ # run node B, it converges almost instantly
james@computer:/tmp$ tree /tmp/mgmt*
/tmp/mgmtA
├── f1a
├── f2a
├── f3a
├── f3b
├── f4a
└── f4b
/tmp/mgmtB
├── f1b
├── f2b
├── f3a
├── f3b
├── f4a
└── f4b
/tmp/mgmtC

0 directories, 12 files
james@computer:/tmp$ # run node C, exit 5 sec after converged, output:
james@computer:/tmp$ time ./mgmt run --file examples/graph3c.yaml --hostname c --converged-timeout=5
01:52:33 main.go:65: This is: mgmt, version: 0.0.1-29-gebc1c60
01:52:33 main.go:66: Main: Start: 1452408753004161269
01:52:33 main.go:203: Main: Running...
01:52:33 main.go:103: Etcd: Starting...
01:52:33 config.go:175: Collect: file; Pattern: /tmp/mgmtC/
01:52:33 main.go:148: Graph: Vertices(8), Edges(0)
01:52:38 main.go:192: Converged for 5 seconds, exiting!
01:52:38 main.go:56: Interrupted by exit signal
01:52:38 main.go:219: Goodbye!

real    0m5.084s
user    0m0.034s
sys    0m0.031s
james@computer:/tmp$ tree /tmp/mgmt*
/tmp/mgmtA
├── f1a
├── f2a
├── f3a
├── f3b
├── f3c
├── f4a
├── f4b
└── f4c
/tmp/mgmtB
├── f1b
├── f2b
├── f3a
├── f3b
├── f3c
├── f4a
├── f4b
└── f4c
/tmp/mgmtC
├── f1c
├── f2c
├── f3a
├── f3b
├── f3c
├── f4a
├── f4b
└── f4c

0 directories, 24 files
james@computer:/tmp$

Amazingly, the cluster converges in less than one second. Admittedly it didn’t have large amounts of IO to do, but since those are fixed constants, it still shows how fast this approach should be. Feel free to do your own tests to verify.

Code

The code is publicly available and has been for some time. I wanted to release it early, but I didn’t want to blog about it until I felt I had the initial design triad completed. It is written entirely in golang, which I felt was a good match for the design requirements that I had. It is my first major public golang project, so I’m certain there are many things I could be doing better. As a result, I welcome your criticism and patches, just please try and keep them constructive and respectful! The project is entirely Free Software, and I plan to keep it that way. As long as Red Hat is involved, I’m pretty sure you won’t have to deal with any open core compromises!

Community

There’s an IRC channel going. It’s #mgmtconfig on Freenode. Please come hangout! If we get bigger, we’ll add a mailing list.

Caveats

There are a few caveats that I’d like to mention. Please try to keep these in mind.

  • This is still an early prototype, and as a result isn’t ready for production use, or as a replacement for existing config management software. If you like the design, please contribute so that together we can turn this into a mature system faster.
  • There are some portions of the code which are notably absent. In particular, there is no lexer or parser, nor is there a design for what the graph building language (DSL) would look like. This is because I am not a specialist in these areas, and as a result, while I have some ideas for the design, I don’t have any useful code yet. For testing the engine, there is a (quickly designed) YAML graph definition parser available in the code today.
  • The etcd master election/promotion portion of the code is not yet available. Please stay tuned!
  • There is no design document, roadmap or useful documentation currently available. I’ll be working to remedy this, but I first wanted to announce the project, gauge interest and get some initial feedback. Hopefully others can contribute to the docs, and I’ll try to write about my future design ideas as soon as possible.
  • The name mgmt was the best that I could come up with. If you can propose a better alternative, I’m open to the possibility.
  • I work for Red Hat, and at first it might seem confusing to announce this work alongside our existing well-known Puppet and Ansible teams. To clarify, this is a prototype of some work and designs that I started before I was employed at Red Hat. I’m grateful that they’ve been letting me work on interesting projects, and I’m very pleased that my managers have had the vision to invest in future technologies and projects that (I hope) might one day become the de facto standard.

Presentations

It is with great honour that my first public talk about this project will be at Config Management Camp 2016. I am thrilled to be speaking at such an excellent conference, where I am sure the subject matter will be a great fit for all the respected domain experts who will be present. Find me on the schedule, and please come to my session.

I’m also fortunate enough to be speaking about the same topic, just four days later in Brno, at DevConf.CZ. It’s a free conference, in an excellent city, and you’ll be sure to find many excellent technical sessions and hackers!

I hope to see you at one of these events or at a future conference. If you’d like to have me speak at your conference or event, please contact me!

Conclusion

Thank you for reading this far! I hope you appreciate my work, and I promise to tell you more about some of the novel designs and properties that I have planned for the future of mgmt. Please leave me your comments, even if they’re just +1’s.

Happy hacking!

James


Post scriptum

There seems to be a new trend of calling certain types of management systems or designs “choreography”. Since this term is sufficiently overloaded and without a clear definition, I choose to avoid it. However, it’s worth mentioning that some of the ideas from some definitions of this word, as pertaining to the configuration management field, match what I’m trying to do with this design. Instead of using the term “choreography”, I prefer to refer to what I’m doing as “configuration management”.

Some early peer reviews suggested this might be a “puppet-killer”. In fact, I actually see it as an opportunity to engage with the puppet community and to share my design and engine, which I hope some will see as a more novel solution. Existing puppet code could be fed through a cross compiler to output a graph that actually runs on my engine. While I plan to offer an easier to use and more powerful DSL language, 80% of existing puppet code is little more than plumbing, package installation, and simple templating, so a gradual migration would be possible, where the multi-system configuration management parts are implemented using my new patterns instead of with slowly converging puppet. The same could probably also be done with other languages like Chef. Anyone from Puppet Labs, Chef Software Inc., or the greater hacker community is welcome to contact me personally if they’d like to work on this.

Lastly, but most importantly, thanks to everyone who has discussed these ideas with me, reviewed this article, and contributed in so many ways. You’re awesome!