Puppet-Gluster and me at Linuxcon

John Mark Walker (from Red Hat) has been kind enough to invite me to speak at the Linuxcon Gluster Workshop in New Orleans. I’ll be speaking about puppet-gluster, giving demos, and hopefully showing off some new features. I’m also looking forward to meeting up with gluster expert Joe Julian.

If there are features that puppet-gluster is missing, or you have a use case that I haven’t covered, please let me know, and I’ll try to work on it for you ahead of the conference. If you want to meet up for some puppet-gluster help, or to hack on code, I’ll be around from the 16th to the 20th of September. My talk is on the 19th.

Special thanks to John Mark and Red Hat for sponsoring this trip. Without them, none of this would be possible.

Happy hacking,

James

a puppet-ipa user type and a new difference engine

A simple hack to add a user type to my puppet-ipa module turned out to cause quite a stir. I’ve just pushed these changes out for your testing:

3 files changed, 1401 insertions(+), 215 deletions(-)

You should now have a highly capable user type, along with some quick examples.

I’ve also done a rewrite of the difference engine, so that it is cleaner and more robust. It now uses function decorators and individual function comparators to help wrangle the data into easily comparable forms. This should make adding future types easier, and less error prone. If you’re not comfortable with ruby, that’s okay, because it’s written in python!
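
To give a rough idea of the pattern (this is only a toy sketch in the same spirit, not the actual module code), each comparator gets registered with a decorator and normalizes one field before comparing:

comparators = {}

def comparator(key):
    """register a comparison function for a particular key"""
    def wrap(fn):
        comparators[key] = fn
        return fn
    return wrap

@comparator('mail')
def cmp_mail(x, y):
    # the order of email addresses shouldn't count as a difference
    return sorted(x) == sorted(y)

@comparator('gidnumber')
def cmp_gid(x, y):
    # the api returns strings, while the catalog might use ints
    return str(x) == str(y)

def differs(key, wanted, actual):
    # fall back to a plain equality test for keys without a comparator
    fn = comparators.get(key, lambda x, y: x == y)
    return not fn(wanted, actual)

print(differs('mail', ['a@x.com', 'b@x.com'], ['b@x.com', 'a@x.com']))  # False
print(differs('gidnumber', 500, '500'))                                 # False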

Have a look at the commit message, and please test this code and let me know how it goes.

Happy hacking,

James

PS: This update also adds management of server configuration globals, which you may find useful. Not all keys are supported yet, but the framework and placeholders have all been added.


scary cool bash scripting inside a Makefile

Makefiles are both scary and wonderful. When both these adjectives are involved, it often makes for interesting hacking. This is likely the reason I use bash.

In any case, I digress, back to real work. I use Makefiles as a general purpose tool to launch any of a number of shell scripts which I use to maintain my code, and instead of actually having external shell scripts, I just build any necessary bash right into the Makefile.

One benefit of all this is that when you type “make <target>”, the <target> can actually tab-complete, which makes your shell experience that much more friendly.

In any case, let me show you the code in question. Please note the double $$, which escapes the dollar sign from make so that the shell can use it for command substitution and variable references. The calls to rsync and sort make me pleased.

rsync -avz --include=*$(EXT) --exclude='*' --delete dist/ $(WWW)
# empty the file
echo -n '' > $(METADATA)
cd $(WWW); \
for i in *$(EXT); do \
b=$$(basename $$i $(EXT)); \
V=$$(echo -n $$(basename "`echo -n "$$b" | rev`" \
"`echo -n "$(NAME)-" | rev`") | rev); \
echo $(NAME) $$V $$i >> $(METADATA); \
done; \
sort -V -k 2 -o $(METADATA) $(METADATA) # sort by version key

The full Makefile can be found inside of the bash-tutor tarball.

getting gedit to work like magic

i use gnu/linux. it’s probably no secret. what is more of a secret is that i secretly (well, actually not so secretly) love using gedit for editing text. i still use vim, echo (gnu bash) and emacs (but only for org-mode).

vim is really, really great. but for day to day full-screen coding, i love working in gedit. i only have one [1] longstanding gripe, and today i believe that it is solved. here is the magic combination which appeases my troubled spirit:

  • gedit smart spaces plugin [2]
  • gedit autotab plugin [3]
  • gedit modelines plugin [4]

install these, restart gedit, enable them, and happy coding!

while these plugins make life much friendlier when you’re forced to work with spaces for indentation, i still recommend using tabs; i mean, that’s what 0x09 was invented for!
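
for example, the modelines plugin understands a subset of vim, emacs and kate style modelines, so a (made-up) python file could carry its indentation settings around with it, something like:

# vim: ts=8 noexpandtab
# the modeline above is only an example; with the modelines plugin enabled,
# gedit should pick up the tab settings for this particular file.
def hello():
	print('hello from a tab-indented file')

hello()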

[1] actually i wish that everyone would just use eight-space-tabs for all their coding needs, but i realize there are some problems with this, and so i reluctantly am glad that modelines and the above magic exist.

[2] http://git.gnome.org/browse/gedit-plugins/tree/plugins/smartspaces

[3] http://code.google.com/p/gedit-autotab/

[4] http://library.gnome.org/users/gedit/stable/gedit-modelines-plugin.html.en

getopt vs. optparse vs. argparse

sooner or later you’ll end up needing to do some argument parsing. the foolish end up writing their own yucky parser that ends up having a big if statement filled with things like:

if len(sys.argv) > 1:

in it. don’t do this unless you have a really good excuse.

sooner or later, someone directs you to getopt, and you happily continue on with buggy manual parsing thinking you’ve “found the way”. it’s useful in some circumstances, but should generally be avoided.

since you’re a good student, you read the docs, and one chapter later, you find out about optparse. higher level parsing! alright! the library that we all wanted to write, actually exists, and it seems to follow some ideals too. this i actually appreciate, and it is lovely to use. you dream about all programs using this common library and unifying the world. consistency is a dream.
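
a minimal sketch of what that looks like (the options here are made up, just to show the style):

from optparse import OptionParser

parser = OptionParser(usage='usage: %prog [options] file')
parser.add_option('-v', '--verbose', action='store_true', default=False,
                  help='print more output')
parser.add_option('-o', '--output', metavar='FILE',
                  help='write results to FILE')
(options, args) = parser.parse_args()
print(options, args)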

you then remember that the positional syntax of cp, git, man, and friends actually does make sense, and you’d like for them not to change. you go on with life, hacking up optparse when needed. everything is pretty good, and you’re a seasoned coder by now, but sooner or later, someone sets you straight with a nice blog post like this.

there’s a new kid in town, and it’s called argparse. you read the docs, and you promise yourself to use standard argument styles. subparsers and types finally exist in a sensible way. you love the inheritance schemes, and you’re one step away from being able to complete your parsing code, but you still haven’t found that magic place in the manual that hides the precious answer you need. and now you have (probably the fourth code block down from that link, maybe also the fifth). why this was buried in with the api specs, i don’t know, but i’m glad it was there.
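
for the record, here’s a rough sketch of the subparser style (the command names are made up):

#!/usr/bin/python
import argparse

parser = argparse.ArgumentParser(description='example tool')
parser.add_argument('--verbose', action='store_true', help='print more output')
subparsers = parser.add_subparsers(dest='command')

add = subparsers.add_parser('add', help='add an item')
add.add_argument('name')                            # positional argument
add.add_argument('--count', type=int, default=1)    # type conversion for free

args = parser.parse_args(['add', 'widget', '--count', '3'])
print(args)    # Namespace(command='add', count=3, name='widget', verbose=False)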

thanks to ivan for getting me to check out argparse in the first place.

sorting out the confusion

if i’ve been silent as of late, it’s because i’ve been furiously coding away. i’ve got what i think are some elegant implementations cooking, and with any luck my extra work will pay off in hours and days and months of time saved down the road. i’ve got a few interesting (interesting with respect to your average rating of the blog posts on this site) posts cooking in my mind, and hopefully they’ll appear shortly!

in other news, i’d like to reference an already pretty well referenced, but probably less read, link explaining the confusion you’ve no doubt once had to suffer through (or still do).

hth: http://gstreamer.freedesktop.org/documentation/splitup.html

the python subprocess module

i’m sure that i won’t be able to tell you anything revolutionary which can’t be found out by reading the manual, but i thought i would clarify it by showing you a specific example which i needed.

subprocess.Popen accepts a bunch of args, one of which is the shell argument, which is False by default. If you specify shell=True then the first argument of Popen should be a string, which is what gets parsed by the shell and then eventually run. (nothing revolutionary)

the magic happens if you use shell=False (the default), in which case the first argument accepts a list of arguments to pass. this list becomes, element for element, the sys.argv of the subprocess that you’ve opened with Popen. magic!

this means you could pass an argument like: “hello how are you” and it will get received as one element in sys.argv, versus being split up into 4 arguments: “hello”, “how”, “are”, “you”. it’s still possible to try to do some shell quoting magic, and achieve the same result, but it’s much harder that way.
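
a quick throwaway check makes this obvious (python 2 style, like the rest of this post):

>>> import subprocess
>>> _ = subprocess.Popen(['python', '-c', 'import sys; print sys.argv[1:]', 'hello how are you'])
>>> ['hello how are you']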


>>> _ = subprocess.Popen(['python', '-c', 'print "dude, this is sweet"'])
>>> dude, this is sweet

vs.


>>> _ = subprocess.Popen("python -c 'print "dude, this isnt so sweet"'", shell=True)
>>> dude, this isnt so sweet

and i’m not 100% sure how i would even add an ascii apostrophe for the isn’t.

the second thing i should mention is that you have to remember that each argument actually needs to be split up; for example:


>>> _ = subprocess.Popen(['ls', '-F', '--human-readable', '-G'])
[ ls output ]

yes it’s true that you can combine flags into one argument, but that’s magic happening inside the program.

all this wouldn’t be powerful if we couldn’t pipe programs together. here is a simple example:


>>> p1 = subprocess.Popen(['dmesg'], stdout=subprocess.PIPE)
>>> p2 = subprocess.Popen(['grep', '-i', 'sda'], stdin=p1.stdout)
[ dmesg output that matches sda ]

i think it’s pretty self explanatory. now let’s say we wanted to add one more stage to the pipeline, but have it be something that usually gets executed with os.system:


import os
import subprocess

p1 = subprocess.Popen(['dmesg'], stdout=subprocess.PIPE)
p2 = subprocess.Popen(['grep', '-i', 'sda'], stdin=p1.stdout)
p3 = subprocess.Popen(['less'], stdin=p2.stdout)
sts = os.waitpid(p3.pid, 0)
print 'all done'

this above example should all be pasted into a file and run; the call to waitpid is important, because it stops the interpreter from continuing on before less has finished executing.

hope this took the learning curve and guessing out of using the new subprocess module (even though it has actually existed for a while…)

cheetah == fortran

turns out the cheetah python templating engine (2.0, around since 2006) is quite reminiscent of fortran (around since the 1950’s) in its use of the #slurp directive (cheetah) and the $ edit descriptor (fortran). either one, when appended to the end of an output line, removes the implicit newline which usually gets printed. it took me ages to figure out how to suppress newline printing back when i did someone’s fortran homework, and now i’ve had to struggle with it all over again.
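
for reference, here’s a minimal sketch of what i mean (assuming the Cheetah package is installed, and going from memory of its api):

from Cheetah.Template import Template

with_slurp = Template("hello #slurp\nworld\n")
without    = Template("hello \nworld\n")
print(str(with_slurp))   # prints: "hello world" on one line
print(str(without))      # prints: "hello " and "world" on two lines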

i’m not a language designer, but it never seemed like the best idea to me! but what do i know? i hope this saves someone an hour of searching.

[py]inotify, polling, gtk and gio

i have this software with a gtk mainloop, using dbus and all that fun stuff that seems to play together nicely. i know about the kernel inotify support, and i wanted it to get integrated with that above stack. i thought i was supposed to do it with pyinotify and io_add_watch, but on closer inspection into the pyinotify code it turns out that it seems to actually use polling! (search for: select.poll)

thinking i was confused, i emailed a friend to see if he could confirm my suspicions. we both weren’t 100% sure, but a little searching later i was convinced when i found this blog posting. i’m surprised i didn’t find out about this sooner. in any case, my application seems to be happy now.

as a random side effect, it seems that when a file is written, i still see the G_FILE_MONITOR_EVENT_ATTRIBUTE_CHANGED *after* the G_FILE_MONITOR_EVENT_CHANGES_DONE_HINT event, which i would have expected to always come last. maybe this is a bug, or maybe this is something magical that $EDITOR is doing; in any case it doesn’t affect me, i just wasn’t sure if it was a bug or not. to make it harder, different editors save the file in different ways. gedit seems to first delete the file, and then create it again, or at least that’s what i see in the gio debug.

the code i’m using to test all this is:

#!/usr/bin/python
import gtk
import gio
count = 0
def file_changed(monitor, file, unknown, event):
  global count
  print 'debug: %d' % count
  count = count + 1
  print 'm: %s' % monitor
  print 'f: %s' % file
  print 'u: %s' % unknown
  print 'e: %s' % event
  if event == gio.FILE_MONITOR_EVENT_CHANGES_DONE_HINT:
    print "file finished changing"
  print '#'*79
  print '\n'
myfile = gio.File('/tmp/gio')
monitor = myfile.monitor_file()
monitor.connect("changed", file_changed)
gtk.main()

(very similar to the aforementioned blog post)

and if you want to see how i’m using it in the particular app, the latest version of evanescent is now available. (look inside of evanescent-client.py)