Send/Recv in mgmt

I previously published "A revisionist history of configuration management". I meant for that to be the intro to this article, but it ended up being long enough that it deserved a separate post. I will explain Send/Recv in this article, but first, a few clarifications to the aforementioned article.

Clarifications

I mentioned that my “revisionist history” was inaccurate, but I failed to mention that it was also not exhaustive! Many things were left out either because they were proprietary, niche, not well-known, of obscure design or simply for brevity. My apologies if you were involved with Bcfg2, Bosh, Heat, Military specifications, SaltStack, SmartFrog, or something else entirely. I’d love it if someone else wrote an “exhaustive history”, but I don’t think that’s possible.

It’s also worth re-iterating that without the large variety of software and designs which came before me, I wouldn’t have learned or been able to build anything of value. Thank you, giants! Discussing the problems and designs of other tools makes it easier to contrast them with, and explain, what I’m doing in mgmt.

Notifications

If you’re not familiar with the directed acyclic graph (DAG) model for configuration management, you should start by reviewing that material first. It models a system by using resources (workers) as the vertices of the DAG, and dependencies as the edges. We’re going to add some additional mechanics to this model.

There is a concept in mgmt called notifications. Any time the state of a resource is successfully changed by the engine, a notification is emitted. These notifications are emitted along any graph edge (dependency) that has been asked to relay them. Edges with the Notify property will do so. These are usually called refresh notifications.

Any time a resource receives a refresh notification, it can apply a special action which is resource specific. The svc resource reloads the service, the password resource generates a new password, the timer resource resets the timer, and the noop resource prints a notification message. In general, refresh notifications should be avoided if possible, but there are a number of legitimate use cases.
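These mechanics can be sketched with a toy graph in Go. Everything below (the `Resource` and `Edge` types, the `Apply` and `Refresh` methods) is purely illustrative and is not mgmt's actual API:

```go
package main

import "fmt"

// Resource is a vertex in the DAG, with a resource-specific
// refresh action.
type Resource struct {
	Name      string
	Edges     []Edge
	Refreshed bool // set when a refresh notification arrives
}

// Edge is a dependency between two resources; Notify marks it
// as a relay for refresh notifications.
type Edge struct {
	To     *Resource
	Notify bool
}

// Apply changes the resource's state; on success, a refresh
// notification is emitted along every Notify edge.
func (r *Resource) Apply() {
	fmt.Printf("%s: state changed\n", r.Name)
	for _, e := range r.Edges {
		if e.Notify {
			e.To.Refresh()
		}
	}
}

// Refresh runs the resource-specific refresh action, e.g. a
// service reload or a timer reset.
func (r *Resource) Refresh() {
	r.Refreshed = true
	fmt.Printf("%s: received a notification!\n", r.Name)
}

func main() {
	svc := &Resource{Name: "Svc[foo]"}
	conf := &Resource{Name: "File[conf]", Edges: []Edge{{To: svc, Notify: true}}}
	conf.Apply() // changing the file refreshes the service
}
```

The point of the sketch is only the relay rule: a notification travels along a dependency edge if and only if that edge opted in with Notify.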

In mgmt notifications are designed to be crash-proof, that is to say, undelivered notifications are re-queued when the engine restarts. While we don’t expect mgmt to crash, this is also useful when a graph is stopped by the user before it has completed.

You’ll see these notifications in action momentarily.

Send/Recv

I mentioned in the revisionist history that I felt that Chef opted for raw code as a solution to the lack of power in Puppet. Having resources in mgmt which are event-driven is one example of increasing their power. Send/Recv is another mechanism to make the resource primitive more powerful.

Simply put: Send/Recv is a mechanism where resources can transfer data along graph edges.
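As a toy illustration of the idea (the types and the `SendRecv` function here are mine, not mgmt's actual API):

```go
package main

import "fmt"

// Password is a toy sending resource: it exposes a value.
type Password struct {
	Password string // sendable output
}

// File is a toy receiving resource: Content is an input field
// that can be filled in over a graph edge instead of being set
// directly by the user.
type File struct {
	Content string
}

// SendRecv copies the sender's output into the receiver's
// input; in mgmt this happens along a dependency edge, after
// the sender has run and before the receiver acts.
func SendRecv(from *Password, to *File) {
	to.Content = from.Password
}

func main() {
	p := &Password{Password: "hunter2"} // pretend this was generated
	f := &File{}
	SendRecv(p, f)
	fmt.Println(f.Content)
}
```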

The status quo

Consider the following pattern (expressed as Puppet code):

# create a directory
file { '/var/foo/':
    ensure => directory,
}
# download a file into that directory
exec { 'wget http://example.com/data -O - > /var/foo/data':
    creates => '/var/foo/data',
    require => File['/var/foo/'],
}
# set some properties of the file
file { '/var/foo/data':
    mode    => '0644',
    require => Exec['wget http://example.com/data -O - > /var/foo/data'],
}

First, a disclaimer: Puppet does now support an HTTP URL as a file source. Nevertheless, this was a common pattern for many years, and that solution only improves one narrow use case. Here are some of the past and current problems:

  • File can’t take output from an exec (or other) resource
  • File can’t pull from an unsupported protocol (sftp, tftp, imap, etc…)
  • File can get created with zero-length data
  • Exec won’t update if http endpoint changes the data
  • Requires knowledge of bash and shell glue
  • Potentially error-prone if a typo is made

There’s also a layering violation if you believe that network code (http downloading) shouldn’t be in a file resource. I think it adds unnecessary complexity to the file resource.

The solution

What the file resource actually needs is the ability to accept (Recv) data of the same type as any of its input arguments. We also need resources which can produce (Send) data that is useful to consumers. This occurs along a graph (dependency) edge, since the sending resource needs to produce the data before the receiver can act!

This also opens up a range of possibilities for new resource kinds that are clever about sending or receiving data. An http resource, for example, could contain all the necessary network code, and replace our use of the exec { 'wget ...': } pattern.

Diagram

In this graph, a password resource generates a random string and stores it in a file; more clever linkages are planned.

Example

As a proof of concept for the idea, I implemented a Password resource. This is a prototype resource that generates a random string of characters. To use the output, it has to be linked via Send/Recv to something that can accept a string. The file resource is one such possibility. Here’s an excerpt of some example output from a simple graph:

03:06:13 password.go:295: Password[password1]: Generating new password...
03:06:13 password.go:312: Password[password1]: Writing password token...
03:06:13 sendrecv.go:184: SendRecv: Password[password1].Password -> File[file1].Content
03:06:13 file.go:651: contentCheckApply: Invalidating sha256sum of `Content`
03:06:13 file.go:579: File[file1]: contentCheckApply(true)
03:06:13 noop.go:115: Noop[noop1]: Received a notification!

What you can see is that initially, a random password is generated. Next Send/Recv transfers the generated Password to the file’s Content. The file resource invalidates the cached Content checksum (a performance feature of the file resource), and then stores that value in the file. (This would normally be a security problem, but this is for example purposes!) Lastly, the file sends out a refresh notification to a Noop resource for demonstration purposes. It responds by printing a log message to that effect.
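The checksum invalidation step can be sketched on its own. This is a simplified stand-in for the file resource's caching behaviour, not the real implementation:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// cachedSHA256 mimics the file resource's performance feature:
// the content checksum is computed once and reused until it is
// explicitly invalidated.
type cachedSHA256 struct {
	sum string // cached hex digest; empty means invalidated
}

// Invalidate drops the cached checksum, e.g. after Send/Recv
// delivered a new Content value.
func (c *cachedSHA256) Invalidate() { c.sum = "" }

// Sum returns the cached digest, recomputing it only if the
// cache was invalidated (or never filled).
func (c *cachedSHA256) Sum(content []byte) string {
	if c.sum == "" {
		h := sha256.Sum256(content)
		c.sum = hex.EncodeToString(h[:])
	}
	return c.sum
}

func main() {
	var c cachedSHA256
	fmt.Println(c.Sum([]byte("old password")))
	c.Invalidate() // new content arrived via Send/Recv
	fmt.Println(c.Sum([]byte("new password")))
}
```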

Libmgmt

Ultimately, mgmt will have a DSL to express the different graphs of configuration. In the meantime, you can use Puppet code, or a raw YAML file. The latter is primarily meant for testing purposes until we have the language built.

Lastly, you can also embed mgmt and use it like a library! This lets you write raw Go code to build your resource graphs, and it's how I wrote the above example. Have a look at the code! This can be used to embed mgmt into your existing software. There are a few more examples available here.

Resource internals

When a resource receives new values via Send/Recv, it’s likely that the resource will have work to do. As a result, the engine automatically marks the resource state as dirty, and then pokes it from the sending node. When the receiving resource runs, it can look up the list of keys that have been sent. This is useful if it wants to perform a cache invalidation, for example. In the resource, the code is quite simple:

if val, exists := obj.Recv["Content"]; exists && val.Changed {
    // the "Content" input has changed
}

Here is a good example of that mechanism in action.
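For context, here is roughly the shape of data that snippet assumes. The type names and the simplified `contentCheckApply` below are my own guesses for illustration, not mgmt's exact internals:

```go
package main

import "fmt"

// Recvd records one value received over Send/Recv: which field
// it came from, and whether it differs from the previous value.
type Recvd struct {
	Key     string // the sender's field name, e.g. "Password"
	Changed bool   // true if the received value was new
}

// FileRes is a cut-down receiver; the engine would fill in
// Recv before poking the resource.
type FileRes struct {
	Recv map[string]*Recvd
}

// contentCheckApply shows the pattern from the snippet above:
// it reports whether the cached checksum must be invalidated
// because a new "Content" value arrived.
func (obj *FileRes) contentCheckApply() bool {
	if val, exists := obj.Recv["Content"]; exists && val.Changed {
		return true // the "Content" input has changed
	}
	return false
}

func main() {
	f := &FileRes{Recv: map[string]*Recvd{
		"Content": {Key: "Password", Changed: true},
	}}
	fmt.Println(f.contentCheckApply()) // prints "true"
}
```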

Future work

This is only powerful if there are interesting resources to link together. Please contribute some ideas, and help build these resources! I’ve got a number of ideas already, but I’d love to hear yours first so that I don’t influence or stifle your creativity. Leave me a message in the comments below!

Happy Hacking,

James

4 thoughts on “Send/Recv in mgmt”

  1. I like mgmt. Did you think about the language as well? Personally, I like the idea of a declarative language like Puppet. Unfortunately, Puppet failed to deliver: https://people.cs.umass.edu/~arjun/papers/2016-rehearsal.html

    One example I have personally suffered from: installing packages in conflict. Puppet will play ping-pong forever. One core problem is that apt does not provide the necessary interface to do this cleanly. I built myself a wrapper, which in theory suffers from racing, but will actually show a conflict: https://github.com/qznc/dot/blob/master/bin/apt-requirements

    That script is just bandaid though. There should be a safe declarative language for configuration. That might be impossible with package managers like apt in the loop though.

    Here is my current benchmark: three items specified independently. 1. A file change in /etc/apt/sources.list.d/something; 2. An apt package installation of foo; 3. A system service file configuration change which affects foo.

    The configuration language must figure out that 1-2-3 is the only valid order. Puppet and others will only do that, if you add explicit dependencies, which is obviously error-prone.

    From your posts, it seems this aspect of configuration is not your focus?

    • > Did you think about the language as well

      Currently working on the language alongside other things. Would you like to help out? We’ve got something quite special planned I think.

      > The configuration language must figure out that 1-2-3 is the only valid order. Puppet and others will only do that, if you add explicit dependencies, which is obviously error-prone. From your posts, it seems this aspect of configuration is not your focus?

      We actually aim to add as many dependencies automatically as we can. Did you see the article on autoedges? https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/ If this is not what you were referring to, LMK.

      • I have read that after my comment and I will need some more reading still.

        Autoedges (and autogrouping) look good, but I also see that I can disable them. That smells. Why would I do that? Is it not safe? Not fast? Not correct? It also seems incomplete, since you asked for more autoedges?

        This is not meant as criticism, just for clarification. As far as I know, there is no good solution. I suspect that it is a fundamental conflict with package management (and maybe other parts). A package manager is also kind of a configuration manager, so mgmt (and all others) must cooperate.

      • > Autoedges (and autogrouping) look good, but I also see that I can disable them. That smells. Why would I do that? Is it not safe? Not fast? Not correct? It also seems incomplete, since you asked for more autoedges?

        In general, I would expect that users writing safe code won’t disable them, but I also thought it was unreasonable to *force* them on users when they might want something different. So it’s the default, but you can also turn them off per resource.

        They’re safe, fast (although they do reduce parallelism, even though that’s usually necessary anyway) and AFAIK never incorrect. As for incompleteness, mgmt and its core resources aren’t complete, so there are still areas where we should add more autoedges. For example: if the path to a binary in an exec resource exists in a package, then we could add an autoedge between those two. Other scenarios exist too! Patches welcome! https://github.com/purpleidea/mgmt/blob/master/resources/exec.go#L315
