Extracting movies from libreoffice

I have a short movie that I imported into a libreoffice presentation. I wanted a copy of that movie back, but I couldn’t figure out how to extract a copy. In desperation, I figured I’d try opening the file with file-roller, the GNOME archive manager.

james@computer:/tmp$ file mgmt-berlin-osdc-17may2017.odp 
mgmt-berlin-osdc-17may2017.odp: OpenDocument Presentation
james@computer:/tmp$ mkdir out
james@computer:/tmp$ file-roller -f mgmt-berlin-osdc-17may2017.odp -e out/
[snip]
james@computer:/tmp$ cd out/
james@computer:/tmp/out$ ls
Configurations2/ Media/ meta.xml Pictures/ styles.xml
content.xml META-INF/ mimetype settings.xml Thumbnails/
james@computer:/tmp/out$

To my amazement, this worked perfectly! I found my video in the Media/ folder, and out it came! You can do this entirely with the GUI if you prefer:

file-roller-ods

This is the graphical view of file-roller, opening up a libreoffice .odp presentation file.

As you might expect, the pictures are also available (in the Pictures/ folder). I haven’t dug much further, but hopefully you enjoyed this tip!

Happy Hacking,

James


Declarative vs. Imperative paradigms

Recently, while operating two different remote-controlled appliances, I realized that it was high time for a discussion about declarative and imperative paradigms. Let’s start by looking at the two remotes:

declarative-imperative

Two different “remotes”. The one on the left operates a television, and the one on the right controls a central heating and cooling system.

At first glance you will notice that one of these remotes is dark, and the other is light. You might also notice that my photography skills are terrible. Neither of these facts is very important to the discussion at hand. Is there anything interesting that you can infer?

Here’s a hint: to reduce costs, preserve battery life, and keep the design simple, the manufacturers included only infrared transmitters in these two devices. That is to say, there is no infrared receiver built into the remote control units; receivers are only found on the appliances.

The difference is that the dark coloured television remote control sends commands imperatively, whilst the light coloured climate control remote sends them declaratively. What does this mean?

Imperative:

When you press the “increase volume” button on the dark-coloured remote, it sends the equivalent of a literal “increase the volume” command to the television. It has no idea what the previous volume was, or even if the television is powered on.

It is fine to operate this way because the human operator will discern that an action occurred from the on-screen popup which displays the new volume. If the remote wasn’t properly aimed at the television, then the lack of that feedback loop will probably convince you to repeat the command.

This is not idempotent, and there is always the chance that an erroneous extra command could be sent. (Perhaps the television had a GC pause and didn’t display a notification until some seconds after you had retried the command.) It is not particularly safe, but since we’re changing channels and not launching missiles, it’s not a significant problem.

Declarative:

When you press the “increase temperature” button on the light-coloured remote, it actually sends the entire state as shown on the display, including the equivalent of “22°C please”. It is able to remember what it believes to be the previous state by storing it on-board, and it even displays it on the built-in LCD screen.

It is useful to operate in this manner, because the designers decided that it was preferable to view a display positioned in your hand, rather than one on a fan unit which might be mounted in an awkward spot that strains your neck to view.

In this case to provide feedback that the command was actually received, each key press will trigger a “beep” noise from the wall unit, and a brief flash of its power LED. This is idempotent, although the designers didn’t provide a button to “resend state”.

Advantages and disadvantages:

Clearly, it is quite straightforward to imperatively send an action command. This is the common paradigm which we’re usually most comfortable with, and layered beneath a declarative design there usually are some imperative operations. This often works best for low latency operations where feedback is guaranteed.

Anyone who has operated a recent television might remember that they require a few seconds to “start-up”. In my impatience when trying to turn on a television, I’ve pressed the “power button” a second time, only to see the television turn itself back off. This is because that button only knows to send a “change power state” command, which must be sent and received an odd number of times to actually toggle the (binary) power state.

In the declarative world we always request the desired result, and allow the machine to figure out the mechanism and if an action even needs to be performed at all. (It’s worth mentioning that I could always send a subset of the desired state if sending the full “struct” would be too onerous!) I can however trick the light-coloured remote into displaying an incorrect visual by pressing buttons when it is out of transmitting range of the wall unit. The good news is that this is only due to a lack of bi-directional communication, and isn’t a failure of the declarative paradigm.

Bi-directional communication:

In practice after either an imperative or declarative command is sent, we can usually expect to receive a response, whether it be a 200 OK, or an error message. We’re talking about software and systems engineering now, and while TCP makes everyone’s lives easier, it’s not immune from failure. If the successful ACK message to an imperative command never got delivered (maybe the server crashed after having sent it) and the client repeats the request, you’ve got a possible problem on your hands!

With the declarative message, it’s safe to retry your command ad nauseam until you’re satisfied that you were heard.

Multiple senders:

With a declarative approach, it might seem that if two near-simultaneous users send differing commands, the “winner” would overwrite the other user’s command! It might also seem that this isn’t a problem in imperative systems. It turns out that both of these assumptions are wrong.

In the imperative system, two independent senders might both decide that the volume isn’t loud enough. If they were to both send a command, the volume would now be increased twice as much as desired, leaving them in a similar situation but with the added problem that their ears now hurt.

In the declarative system it is easy to avoid overwriting another user’s request with a simple CAS (compare-and-swap) mechanism. For any state change command, the user first looks up the current state and gets a unique integer (possibly incremental) that represents the version of the returned state. When the new state is sent, it should include the integer from the previous state, and the endpoint should only process the state change if the stored version currently matches that value. In this manner, with simultaneous requests, as long as the endpoint linearizes the requests, only one will succeed, while the others will learn that their cached view of the world needs to be refreshed. This might annoy them, but it is the safer mechanism.
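
As a minimal sketch of that scheme (in Go, with hypothetical names chosen just for illustration, not taken from any real API):

package main

import (
    "errors"
    "fmt"
    "sync"
)

// State is the full declarative state that a remote would send.
type State struct {
    Temperature int // desired temperature, eg: 22
}

// Store linearizes requests, and only accepts a new state if the
// sender's version matches the current one (compare-and-swap).
type Store struct {
    mu      sync.Mutex
    version int
    state   State
}

// Get returns the current state along with its version integer.
func (s *Store) Get() (State, int) {
    s.mu.Lock()
    defer s.mu.Unlock()
    return s.state, s.version
}

// Set applies the new state only if the version matches; otherwise the
// caller's cached view of the world is stale and must be refreshed.
func (s *Store) Set(next State, version int) error {
    s.mu.Lock()
    defer s.mu.Unlock()
    if version != s.version {
        return errors.New("stale version: refresh and retry")
    }
    s.state = next
    s.version++ // possibly incremental, as described above
    return nil
}

func main() {
    store := &Store{}
    st, v := store.Get()
    st.Temperature = 22 // "22°C please"
    if err := store.Set(st, v); err != nil {
        fmt.Println(err) // another sender won the race; refresh!
        return
    }
    fmt.Println("state accepted")
}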

Multiple endpoints:

If multiple endpoints form a consistent cluster where all members can serve client requests using a declarative tasking system, the client can send the request to one member, and retrieve the status from another. This is particularly useful if the operation might take some time, and there is a chance that the initial endpoint was replaced during this interval.

declarative-imperative-diagram

Hand drawn diagram of the scenario. As you can see the user is happy because they are using a declarative paradigm, and are more resistant to failures.

While this could be replicated to some degree with imperative systems, it’s much more complicated to keep track of whether an operation occurred or not, whereas in the declarative system each cluster member can simply determine whether the current state matches what’s expected.

Conclusion:

This wasn’t an exhaustive discussion about the topic, but I couldn’t help writing about it since the two paradigms present some interesting engineering questions. As systems become more complex, will the adoption of new methodologies be widespread and expedient, or niche and sluggish?

Some of my declarative work involves my next generation configuration management project! Get involved today!

Happy Hacking,

James

Osmocom femtocell un-boxing

LaForge and the fine folks at Osmocom (Sysmocom) recently had a femtocell giveaway. I didn’t expect to have much time to hack on things, but they were still quite generous in sending me one. It arrived, and I took some un-boxing photos for anyone who is curious.

IMG_20170317_211113

A box arrived in the mail…

IMG_20170317_211256

Which recurses into an inner box…

IMG_20170317_211348

Inner box is box like.

IMG_20170317_211402

Finally… The unit is displayed.

IMG_20170317_211440

Here it is in all its glory.

IMG_20170317_211600

It came complete with all the cables and power adapters necessary.

IMG_20170317_212813

The sysmocom folks even included some special test sim cards so that you can use the femtocell with your own phones.

I’m very excited about the work the project is doing in their efforts to create and support free software telephony. If you are a telecom, please check out the various Sysmocom products and services available.

Happy Hacking,

James

Disclaimer: I received the above femtocell for free, but I was not asked to publish this article and the opinions stated are my own.

Metaparameters in mgmt

In mgmt we have meta parameters. They are similar in concept to what you might be familiar with from other tools, except that they are more clearly defined (in a single struct) and vastly more powerful.

In mgmt, a meta parameter is a parameter which is codified entirely in the engine, and which can be used by any resource. By contrast, in Puppet, require/before are considered meta parameters, whereas in mgmt the equivalent is a graph edge, which is not a meta parameter. [1]

Kinds

As of this writing we have seven different kinds of meta parameters:

  • AutoEdge
  • AutoGroup
  • Noop
  • Retry & Delay
  • Poll
  • Limit & Burst
  • Sema

The astute reader will note that these seven kinds actually comprise nine different meta parameters, but I have grouped them into seven categories since some of them are very tightly interconnected. The first two, AutoEdge and AutoGroup, have been covered in separate articles already, so they won’t be discussed here. To learn about the others, please read on…

Noop

Noop stands for no-operation. If it is set to true, we tell the CheckApply portion of the resource to not make any changes. It is up to the individual resource implementation to respect this facility, which is the case for all correctly written resources. You can learn more about this by reading the CheckApply section in the resource guide.
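
As a rough sketch of what this looks like inside a correctly written resource (the names here are hypothetical, and the real mgmt API differs slightly):

package main

import "fmt"

// MyRes is a stand-in resource, just for illustration.
type MyRes struct{}

func (obj *MyRes) isStateOK() bool     { return false } // hypothetical check
func (obj *MyRes) applyChanges() error { return nil }   // hypothetical apply

// CheckApply checks the state, and applies a change only if apply is
// true. With noop set, the engine asks the resource not to change
// anything, and a correctly written resource respects this.
func (obj *MyRes) CheckApply(apply bool) (checkOK bool, err error) {
    if obj.isStateOK() { // is the state already correct?
        return true, nil // nothing to do
    }
    if !apply { // noop: report that we *would* change, change nothing
        return false, nil
    }
    return false, obj.applyChanges() // actually fix the state
}

func main() {
    res := &MyRes{}
    fmt.Println(res.CheckApply(false)) // noop run: false <nil>
}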

If you’d like to set the noop state on all resources at runtime, there is a cli flag which you can use to do so. It is unsurprisingly named --noop, and it overrides the noop setting of every resource in the graph. This is in stark contrast with Puppet, which allows an individual resource definition to override the user’s choice!

james@computer:/tmp$ cat noop.pp 
file { '/tmp/puppet.noop':
    content => "nope, nope, nope!\n",
    noop => false,    # set at the resource level
}
james@computer:/tmp$ time puppet apply noop.pp 
Notice: Compiled catalog for freed in environment production in 0.29 seconds
Notice: /Stage[main]/Main/File[/tmp/puppet.noop]/ensure: defined content as '{md5}d8bda32dd3fbf435e5a812b0ba3e9a95'
Notice: Applied catalog in 0.03 seconds

real    0m15.862s
user    0m7.423s
sys    0m1.260s
james@computer:/tmp$ file puppet.noop    # verify it worked
puppet.noop: ASCII text
james@computer:/tmp$ rm -f puppet.noop    # reset
james@computer:/tmp$ time puppet apply --noop noop.pp    # safe right?
Notice: Compiled catalog for freed in environment production in 0.30 seconds
Notice: /Stage[main]/Main/File[/tmp/puppet.noop]/ensure: defined content as '{md5}d8bda32dd3fbf435e5a812b0ba3e9a95'
Notice: Class[Main]: Would have triggered 'refresh' from 1 events
Notice: Stage[main]: Would have triggered 'refresh' from 1 events
Notice: Applied catalog in 0.02 seconds

real    0m15.808s
user    0m7.356s
sys    0m1.325s
james@computer:/tmp$ cat puppet.noop 
nope, nope, nope!
james@computer:/tmp$

If you look closely, Puppet just trolled you by performing an operation when you thought it would be noop! I think this behaviour is incorrect, but if it isn’t a bug, then I’d sure like to know why!

It’s worth mentioning that there is also a noop resource in mgmt which is similarly named because it does absolutely nothing.

Retry & Delay

In mgmt we can run continuously, which means that it’s often more useful to do something interesting when there is a resource failure, rather than simply shutting down completely. As a result, if there is an error during the CheckApply phase of the resource execution, the operation can be retried (retry) a number of times, and there can be a delay between each retry.

The delay value is an integer representing the number of milliseconds to wait between retries, and it defaults to zero. The retry value is an integer representing the maximum number of allowed retries, and it defaults to zero. A negative value will permit an infinite number of retries. If the number of retries is exhausted, then the temporary resource failure will be converted into a permanent failure. Resources which depend on a failed resource will be blocked until there is a successful execution. When there is a successful CheckApply, the resource retry counter is reset.

In general it is best to leave these values at their defaults unless you are expecting spurious failures; this way, if you do get a failure, it won’t be masked by the retry mechanism.

It’s worth mentioning that the Watch loop can fail as well, and that the retry and delay meta parameters apply to it too! While these could have had their own set of meta parameters, I felt that would have unnecessarily cluttered up the interface, and I couldn’t think of a situation where different values would be helpful. They do have their own separate retry counter and delay timer of course! If someone has a valid use case, then I’m happy to separate these.

If someone would like to implement a pluggable back-off algorithm (eg: exponential back-off) to be used here instead of a simple delay, then I think it would be a welcome addition!
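
To make the semantics concrete, here’s a simplified sketch of such a loop (not the actual engine code); the sleep is also the natural place where a pluggable back-off could hook in:

package main

import (
    "errors"
    "fmt"
    "time"
)

// retryLoop runs fn until it succeeds or the retries are exhausted. A
// negative retry count permits an infinite number of retries, and the
// delay is expressed in milliseconds, matching the semantics above.
func retryLoop(retry int, delay int64, fn func() error) error {
    for {
        err := fn()
        if err == nil {
            return nil // success; the counter resets upstream
        }
        if retry == 0 {
            return err // temporary failure becomes permanent
        }
        if retry > 0 { // negative means infinite retries
            retry--
        }
        time.Sleep(time.Duration(delay) * time.Millisecond)
        // a pluggable back-off algorithm could adjust delay here
    }
}

func main() {
    count := 0
    err := retryLoop(3, 100, func() error {
        count++
        if count < 3 {
            return errors.New("spurious failure")
        }
        return nil
    })
    fmt.Println(err, count) // <nil> 3
}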

Poll

Despite mgmt being event based, there are situations where you’d really like to poll instead of using the Watch method. For these cases, I reluctantly implemented a poll meta parameter. It does exactly what you’d expect, generating events every poll seconds. It defaults to zero which means that it is disabled, and Watch is used instead.
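
You can picture the engine substituting a simple ticker loop for the resource’s Watch method; here’s a sketch with hypothetical names, not the actual implementation:

package main

import (
    "fmt"
    "time"
)

// pollLoop generates an event every interval seconds, in place of the
// resource's own Watch method. A real engine would also handle errors,
// pause/resume, and convergence tracking.
func pollLoop(interval uint32, events chan<- struct{}, done <-chan struct{}) {
    ticker := time.NewTicker(time.Duration(interval) * time.Second)
    defer ticker.Stop()
    for {
        select {
        case <-ticker.C:
            events <- struct{}{} // time for another CheckApply!
        case <-done:
            return
        }
    }
}

func main() {
    events := make(chan struct{})
    done := make(chan struct{})
    go pollLoop(1, events, done)
    <-events // wait for the first poll event
    fmt.Println("poll event received")
    close(done)
}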

Despite my earlier knock of it, it is actually quite useful, in that some operations might require or prefer polling, and having it as a meta parameter means that those resources won’t need to duplicate the polling code.

This might be very powerful for an aws resource that can set up hosted Amazon ec2 resources. When combined with the retry and delay meta parameters, it will even survive outages!

One particularly interesting aspect is that ever since the converged graph detection was improved, we can still converge a graph and shut down with the converged-timeout functionality while using polling! This is described in more detail in the documentation.

Limit & Burst

In mgmt, the events generated by the Watch main loop of a resource do not need to be matched 1-1 with the CheckApply remediation step. This is very powerful because it allows mgmt to collate multiple events into a single CheckApply step, which is helpful when the duration of the CheckApply step is longer than the interval between Watch events.

In addition, you might not want to constantly Check or Apply (converge) the state of your resource as often as it goes out of state. For this situation, that step can be rate limited with the limit and burst meta parameters.

The limit and burst meta parameters implement something known as a token bucket. This models a bucket which is filled with tokens and which is drained slowly. It has a particular rate limit (which sets a maximum rate) and a burst count which sets a maximum bolus which can be absorbed.

This doesn’t cause us to permanently miss events (and stay un-converged) because when the bucket overfills, instead of dropping events, we actually cache the last one for playback once the bucket falls within the execution rate threshold. Remember, we expect to be converged in the steady state, not at every infinitesimal delta t in between.

The limit and burst meta parameters default to allowing an infinite rate, with zero burst. As it turns out, if you have a non-infinite rate, the burst must be non-zero or you will cause a Validate error. Similarly, a non-zero burst with an infinite rate is effectively the same as the default. A good rule of thumb is to remember to either set both values or neither. This is all because of the mathematical implications of token buckets, which I won’t explain in this article.
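
The golang.org/x/time/rate package implements exactly this token bucket model, and a sketch of the rate-limited event path might look something like this (assuming that package; the real engine code differs):

package main

import (
    "context"
    "fmt"

    "golang.org/x/time/rate"
)

func main() {
    // at most one CheckApply every two seconds, with a burst bucket
    // of five tokens; rate.Inf with zero burst would be the default.
    limiter := rate.NewLimiter(rate.Limit(0.5), 5)

    for i := 0; i < 3; i++ {
        // instead of dropping an event when no token is available,
        // we block (caching the latest event) until one is, and
        // then run the collated CheckApply step.
        if err := limiter.Wait(context.Background()); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("CheckApply!", i)
    }
}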

Sema

Sema is short for semaphore. In mgmt we have implemented P/V style counting semaphores. This is a mechanism for reducing parallelism in situations where there are no explicit dependencies between resources. This might be useful when the number of operations could outnumber the number of CPUs on your machine and you want to avoid starving your other processes. Alternatively, there might be a particular operation that you want to add a mutex (mutual exclusion) around, which can be represented with a semaphore of size one (1). Lastly, it was a particularly fun meta parameter to write, and I had been itching to do so for some time.

To use this meta parameter, simply give a list of semaphore ids to the resource you want to lock. These can be any string, and are shared globally throughout the graph. By default, they have a size of one. To specify a semaphore with a different size, append a colon (:) followed by an integer at the end of the semaphore id.

Valid ids might include: “some_id:42”, “hello:13”, and “lockname”. Remember, the size parameter is the number of simultaneous resources which can run their CheckApply methods at the same time. It does not prevent multiple Watch methods from returning events simultaneously.

If you would like to force a semaphore globally on all resources, you can pass in the --sema argument with a size integer. This will get appended to the existing semaphores. For example, to simulate Puppet’s traditional non-parallel execution, you could specify --sema 1.

Oh, no! Does this mean I can deadlock my graphs? Interestingly enough, this is actually completely safe! The reason is that all the semaphores exist in the mgmt directed acyclic graph, and because that DAG represents dependencies that are always respected, there will always be a way to make progress, which eventually unblocks any waiting resources! The trick is ensuring that each resource always acquires its list of semaphores in alphabetical order. (Actually the order doesn’t matter as long as it’s consistent across the graph, and alphabetical is as good as any!) Unfortunately, I don’t have a formal proof of this, but I was able to convince myself on the back of an envelope that it is true! Please contact me if you can prove me right or wrong! The one exception is that a counting semaphore of size zero would never let anyone acquire it, so by definition it would permanently block, and as a result is not currently permitted.
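
A counting semaphore with the id:size convention can be modelled with a buffered channel, and the deadlock-avoidance trick is simply to sort the ids before acquiring them. Here’s a sketch (the real engine implementation differs, and would need locking around the shared map):

package main

import (
    "fmt"
    "sort"
    "strconv"
    "strings"
)

// semaphores are shared globally throughout the graph.
var semaphores = make(map[string]chan struct{})

// sema returns the semaphore for an id like "hello:13"; the size after
// the colon defaults to one.
func sema(id string) chan struct{} {
    size := 1
    if i := strings.LastIndex(id, ":"); i != -1 {
        if n, err := strconv.Atoi(id[i+1:]); err == nil && n > 0 {
            size = n // a size of zero is not permitted!
        }
    }
    if _, ok := semaphores[id]; !ok {
        semaphores[id] = make(chan struct{}, size)
    }
    return semaphores[id]
}

// acquire P()s every semaphore in a consistent (sorted) order, which
// is the property that guarantees the graph can always make progress.
func acquire(ids []string) {
    sort.Strings(ids)
    for _, id := range ids {
        sema(id) <- struct{}{} // P (wait)
    }
}

// release V()s them again once CheckApply has finished.
func release(ids []string) {
    for _, id := range ids {
        <-sema(id) // V (signal)
    }
}

func main() {
    ids := []string{"hello:13", "lockname"}
    acquire(ids)
    fmt.Println("CheckApply runs here...")
    release(ids)
}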

The last important point to mention is about the interplay between automatic grouping and semaphores. When more than one resource is grouped, they are considered to be part of the same resource. As a result, the resulting list of semaphores is the sum of the individual semaphores, de-duplicated. This ensures that individual locking guarantees aren’t broken when multiple resources are combined.

Future

If you have ideas for future meta parameters, please let me know! We’d love to hear about your ideas on our mailing list or on IRC. If you’re shy, you can contact me privately as well.

Happy Hacking,

James

[1] This is a bit of an apples vs. flame-throwers comparison because I’m comparing the mgmt engine meta parameters with the puppet language meta parameters, but I think it’s worth mentioning because there’s a clear separation between the two in mgmt, whereas the separation is much more blurry in the puppet scenario. It’s also true that the mgmt language might grow a concept of language-level meta parameters which only maps partially to engine meta parameters, but this is a discussion for another day!

Faster golang builds

I’ve been hacking in golang since before version 1.4, and the speed at which my builds finished has been mostly trending downwards. Let’s look into the reasons and some fixes. TL;DR click-bait title: “Get 4x faster golang builds with this one trick!”.

Here are the three reasons my builds got slower:

The compiler

Before version 1.5, the compiler was written in C but with that release, it moved to being pure golang. This unfortunately reduced build performance quite measurably, even though it was the right decision for the project.

There have been slight improvements with newer versions, however Google’s focus has been on improving runtime performance instead of build performance. This is understandable, since they want to save lots of electricity dollars at scale, but it’s not as helpful for smaller shops where the developer iteration cycle is the metric to optimize.

This could still be improved if folks wanted to put in the effort. A gcc style -O0 option could help. The sad thing about this whole story is that “instant” builds were a major marketing feature of early golang presentations.

Project size

Over time, my main project (mgmt) has gotten much bigger! It compiles in a number of libraries, including etcd, prometheus, and more! This naturally increases build time and is a mostly unavoidable consequence of building cool things and standing on giants!

This mostly can’t be helped, but it can be mitigated…

Dependency caching

When you build a project and all of its dependencies, the unchanged dependencies shouldn’t need to be rebuilt! Unfortunately golang does a great job of silently rebuilding things unnecessarily. Here’s why…

When you run a build, golang will attempt to re-use any common artefacts from previous builds. If the golang versions or library versions don’t match, these won’t be used, and the compiler will redo this work. Unfortunately, by default, those results won’t be saved, causing you to waste CPU cycles every time you test!

When the intermediate results are kept, they are found in your $GOPATH/pkg/. To save them, you need to either run go install (which makes a mess in your $GOPATH/bin/), or you can run go build -i. (Thanks to Dave for the tip!)

The sad part of this story is that these aren’t cached by default, and stale results aren’t discarded by default! If you’re experiencing slow builds, you should rm -rf $GOPATH/pkg/ and then go build -i. After one successful build, future builds should be much faster!

Example

james@computer:~/code/mgmt$ time go build    # before

real    0m28.152s
user    1m17.097s
sys     0m5.235s

james@computer:~/code/mgmt$ time go build -i    # after

real    0m8.129s
user    0m12.014s
sys     0m0.839s

Debugging

If you want to debug what’s going on, you can always run go build -x.

Blame!

I don’t like assigning blame, but this feels like a case of the golang tools being obtuse, and the man pages being non-existent. The golang project has a lot of maturing to do to integrate sanely with a stock GNU environment:

  • build intermediates could be saved and discarded by default
  • man go build could exist and provide useful information
  • go build --help and go help build could provide more useful information
  • POSIX style flags could be used (eg: --help)
  • build cache could be stored in $XDG_CACHE_HOME.

Hope this helped improve your golang experience! I always knew something was going on in $GOPATH/pkg/, but I think it’s pretty absurd that I only fully understood it now. My builds are about 4x faster now. :)

Happy Hacking,

James

Ten minute hacks: Process pause & resume

I’m old school and still rocking an old X220 laptop because I didn’t like the new ones. My battery life isn’t as great as I’d like it to be, but it gets worse when some “webapp” (which I’d much rather have as a native GTK+ app) causes Firefox to rev my CPU with their websocket (hi gmail!) poller.

This seems to happen most often on planes or when I’m disconnected from the internet. Since it’s difficult to know which tab is the offending one, and since I might want to keep that tab’s state anyway, I decided to write a little shell script to pause and resume misbehaving processes.

After putting it into my ~/bin/ and running chmod u+x ~/bin/pause-continue.sh on it, you can now:

james@computer:~$ pause-continue.sh firefox
Stopping 'firefox'...
[press any key to continue]
Continuing 'firefox'...
james@computer:~$ echo $?        # error codes work iirc
0
james@computer:~$

The code is trivially simple, with an added curses hack to make this 13% more fun. It sends a SIGSTOP signal initially, and then when you press a key, it resumes the process with SIGCONT. Here is the code.
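
If you’d rather not click through, here’s a rough golang sketch of the same mechanism (the original is a plain shell script; this version assumes pkill is available, and trades the curses hack for a plain enter key):

package main

import (
    "bufio"
    "fmt"
    "os"
    "os/exec"
)

func main() {
    if len(os.Args) != 2 {
        fmt.Fprintln(os.Stderr, "usage: pause-continue <process-name>")
        os.Exit(1)
    }
    name := os.Args[1]

    fmt.Printf("Stopping '%s'...\n", name)
    // SIGSTOP pauses every process matching the name...
    if err := exec.Command("pkill", "-STOP", name).Run(); err != nil {
        fmt.Fprintln(os.Stderr, "could not stop process:", err)
        os.Exit(1)
    }

    fmt.Println("[press enter to continue]")
    bufio.NewReader(os.Stdin).ReadString('\n')

    fmt.Printf("Continuing '%s'...\n", name)
    // ...and SIGCONT resumes them where they left off.
    exec.Command("pkill", "-CONT", name).Run()
}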

You should obviously substitute in the name of the process that you’d like to pause and resume. If your process breaks because it didn’t deal well with the signals, then you get to keep both pieces!

This should help me on my upcoming travel! I’ll be presenting some of my mgmtconfig work at DevConf.cz, FOSDEM and CfgMgmtCamp.eu! CfgMgmtCamp will also have a short mgmt track (looking forward to seeing Felix present!) and we’ll be around to hack on the 8th during fringe (the day after the official camp) if you’d like help to get your patch merged! I’m looking forward to it!

Happy hacking!

James

Send/Recv in mgmt

I previously published “A revisionist history of configuration management”. I meant for that to be the intro to this article, but it ended up being long enough that it deserved a separate post. I will explain Send/Recv in this article, but first a few clarifications to the aforementioned article.

Clarifications

I mentioned that my “revisionist history” was inaccurate, but I failed to mention that it was also not exhaustive! Many things were left out either because they were proprietary, niche, not well-known, of obscure design or simply for brevity. My apologies if you were involved with Bcfg2, Bosh, Heat, Military specifications, SaltStack, SmartFrog, or something else entirely. I’d love it if someone else wrote an “exhaustive history”, but I don’t think that’s possible.

It’s also worth re-iterating that without the large variety of software and designs which came before me, I wouldn’t have learned or been able to build anything of value. Thank you, giants! Discussing the problems and designs of other tools makes it easier to contrast them with, and explain, what I’m doing in mgmt.

Notifications

If you’re not familiar with the directed acyclic graph model for configuration management, you should start by reviewing that material first. It models a system of resources (workers) as the vertices in that DAG, and the edges as the dependencies. We’re going to add some additional mechanics to this model.

There is a concept in mgmt called notifications. Any time the state of a resource is successfully changed by the engine, a notification is emitted. These notifications are emitted along any graph edge (dependency) that has been asked to relay them. Edges with the Notify property will do so. These are usually called refresh notifications.

Any time a resource receives a refresh notification, it can apply a special action which is resource specific. The svc resource reloads the service, the password resource generates a new password, the timer resource resets the timer, and the noop resource prints a notification message. In general, refresh notifications should be avoided if possible, but there are a number of legitimate use cases.
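
Schematically, a resource acts on the pending notification during its CheckApply; here’s a sketch with hypothetical names (see the real noop log output later in this article):

package main

import "log"

// NoopRes is a stand-in for the noop resource, just for illustration.
type NoopRes struct {
    refresh bool // set by the engine when a notification arrives
}

// Refresh reports whether a refresh notification is pending.
func (obj *NoopRes) Refresh() bool { return obj.refresh }

// CheckApply runs the resource-specific refresh action; for the noop
// resource that action is simply printing a message.
func (obj *NoopRes) CheckApply(apply bool) (bool, error) {
    if obj.Refresh() {
        log.Println("Noop[noop1]: Received a notification!")
    }
    return true, nil
}

func main() {
    res := &NoopRes{refresh: true} // pretend a notification arrived
    res.CheckApply(true)
}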

In mgmt, notifications are designed to be crash-proof; that is to say, undelivered notifications are re-queued when the engine restarts. While we don’t expect mgmt to crash, this is also useful when a graph is stopped by the user before it has completed.

You’ll see these notifications in action momentarily.

Send/Recv

I mentioned in the revisionist history that I felt that Chef opted for raw code as a solution to the lack of power in Puppet. Having resources in mgmt which are event-driven is one example of increasing their power. Send/Recv is another mechanism to make the resource primitive more powerful.

Simply put: Send/Recv is a mechanism where resources can transfer data along graph edges.

The status quo

Consider the following pattern (expressed as Puppet code):

# create a directory
file { '/var/foo/':
    ensure => directory,
}
# download a file into that directory
exec { 'wget http://example.com/data -O - > /var/foo/data':
    creates => '/var/foo/data',
    require => File['/var/foo/'],
}
# set some property of the file
file { '/var/foo/data':
    mode => '0644',
    require => Exec['wget http://example.com/data -O - > /var/foo/data'],
}

First a disclaimer. Puppet now actually supports an http url as a source. Nevertheless, this was a common pattern for many years and that solution only improves a narrow use case. Here are some of the past and current problems:

  • File can’t take output from an exec (or other) resource
  • File can’t pull from an unsupported protocol (sftp, tftp, imap, etc…)
  • File can get created with zero-length data
  • Exec won’t update if http endpoint changes the data
  • Requires knowledge of bash and shell glue
  • Potentially error-prone if a typo is made

There’s also a layering violation if you believe that network code (http downloading) shouldn’t be in a file resource. I think it adds unnecessary complexity to the file resource.

The solution

What the file resource actually needs is to be able to accept (Recv) data of the same type as any of its input arguments. We also need resources which can produce (Send) data that is useful to consumers. This occurs along a graph (dependency) edge, since the sending resource would need to produce it before the receiver could act!

This also opens up a range of possibilities for new resource kinds that are clever about sending or receiving data. An http resource could contain all the necessary network code, and replace our use of the exec { 'wget ...': } pattern.

Diagram

In this graph, a password resource generates a random string and stores it in a file; more clever linkages are planned.

Example

As a proof of concept for the idea, I implemented a Password resource. This is a prototype resource that generates a random string of characters. To use the output, it has to be linked via Send/Recv to something that can accept a string. The file resource is one such possibility. Here’s an excerpt of some example output from a simple graph:

03:06:13 password.go:295: Password[password1]: Generating new password...
03:06:13 password.go:312: Password[password1]: Writing password token...
03:06:13 sendrecv.go:184: SendRecv: Password[password1].Password -> File[file1].Content
03:06:13 file.go:651: contentCheckApply: Invalidating sha256sum of `Content`
03:06:13 file.go:579: File[file1]: contentCheckApply(true)
03:06:13 noop.go:115: Noop[noop1]: Received a notification!

What you can see is that initially, a random password is generated. Next, Send/Recv transfers the generated Password to the file’s Content. The file resource invalidates the cached Content checksum (a performance feature of the file resource), and then stores that value in the file. (This would normally be a security problem, but it’s fine for example purposes!) Lastly, the file sends out a refresh notification to a Noop resource for demonstration purposes. It responds by printing a log message to that effect.

Libmgmt

Ultimately, mgmt will have a DSL to express the different graphs of configuration. In the meantime, you can use Puppet code, or a raw YAML file. The latter is primarily meant for testing purposes until we have the language built.

Lastly, you can also embed mgmt and use it like a library! This lets you write raw golang code to build your resource graphs. I decided to write the above example that way! Have a look at the code! This can be used to embed mgmt into your existing software! There are a few more examples available here.

Resource internals

When a resource receives new values via Send/Recv, it’s likely that the resource will have work to do. As a result, the engine will automatically mark the resource state as dirty and then poke it from the sending node. When the receiver resource runs, it can look up the list of keys that have been sent. This is useful if it wants to perform a cache invalidation, for example. In the resource, the code is quite simple:

if val, exists := obj.Recv["Content"]; exists && val.Changed {
    // the "Content" input has changed
}

Here is a good example of that mechanism in action.

Future work

This is only powerful if there are interesting resources to link together. Please contribute some ideas, and help build these resources! I’ve got a number of ideas already, but I’d love to hear yours first so that I don’t influence or stifle your creativity. Leave me a message in the comments below!

Happy Hacking,

James