Ten minute hacks: Process pause & resume

I’m old school and still rocking an old X220 laptop because I didn’t like the new ones. My battery life isn’t as great as I’d like it to be, but it gets worse when some “webapp” (which I’d much rather have as a native GTK+ app) causes Firefox to rev my CPU with their websocket (hi gmail!) poller.

This seems to happen most often on planes or when I’m disconnected from the internet. Since it’s difficult to know which tab is the offending one, and since I might want to keep that tab’s state anyway, I decided to write a little shell script to pause and resume misbehaving processes.

After dropping it into ~/bin/ and making it executable with chmod u+x ~/bin/pause-continue.sh, you can now do:

james@computer:~$ pause-continue.sh firefox
Stopping 'firefox'...
[press any key to continue]
Continuing 'firefox'...
james@computer:~$ echo $?        # error codes work iirc
0
james@computer:~$

The code is trivially simple, with an added curses hack to make this 13% more fun. It sends a SIGSTOP signal initially, and then when you press a key it resumes the process with SIGCONT. Here is the code.
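The original uses that little curses trick for the pause prompt; a minimal sketch of the same idea (not the exact script: it matches processes by name with pgrep/pkill and uses a plain read for the “press any key” part) looks like this:

#!/bin/bash
# pause-continue.sh (sketch): pause a named process, resume it on a keypress
NAME="$1"
if [ -z "$NAME" ]; then
    echo "usage: $0 <process name>" >&2
    exit 1
fi
if ! pgrep -x "$NAME" > /dev/null; then
    echo "No process named '$NAME' found." >&2
    exit 1
fi
echo "Stopping '$NAME'..."
pkill -STOP -x "$NAME"    # SIGSTOP pauses every matching process
read -n 1 -s -r -p '[press any key to continue]'
echo
echo "Continuing '$NAME'..."
pkill -CONT -x "$NAME"    # SIGCONT resumes them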

You should obviously substitute in the name of the process that you’d like to pause and resume. If your process breaks because it didn’t deal well with the signals, then you get to keep both pieces!

This should help me on my upcoming travel! I’ll be presenting some of my mgmtconfig work at DevConf.cz, FOSDEM and CfgMgmtCamp.eu! CfgMgmtCamp will also have a short mgmt track (looking forward to seeing Felix present!) and we’ll be around to hack on the 8th during fringe (the day after the official camp) if you’d like help to get your patch merged! I’m looking forward to it!

Happy hacking!

James

Ten minute hacks: Hacking airplane headphones

I was stuck on a 14 hour flight last week, and to my disappointment, only one of the two headphone speakers was working. The plane’s media centre has an audio connector that looks like this:

airplane-headphones-jack

Someone should consider probing this USB port.

The hole on the left is smaller than a 3.5mm headphone jack and is designed for a proprietary headphone connector that I didn’t have. The two holes on the right are part of a different proprietary connector which mates with the cheap airline headphones to provide the left and right audio channels.

airplane-headphones-connected

Completely reversible, and therefore completely ambiguous. Stereo is so 1880’s anyways.

By reversing the connector, I was quickly able to determine that the headphones were not faulty, because this swapped the missing audio channel to the other ear. It’s also immediately obvious that since there are no left vs. right polarity markings on either the receptacle or the headphones, there’s a 50% chance that you’ll get reverse stereo.

With the fault identified, and lots of time to kill, I decided to try to hack a workaround. I borrowed some tweezers from a nearby passenger, and slowly ripped off some of the exterior plastic to expose the signal wires. To my surprise there were actually four wires, instead of three with a shared ground.

airplane-headphones-separated

Headphone wires stripped, exposed and ready for splicing.

With a bit of care this only took about five minutes. The next step was to “patch” the positive and ground wires from the working channel into the speaker of the broken channel. I did this by trial and error, using a bit of intuition to try to keep both speakers in phase.

airplane-headphones-spliced

After a twist splice and using paper as an insulator.

A small scrap of paper acted as an insulator to prevent short circuits between the positive and negative wires. Lastly, a figure eight on a bight was tied to isolate the weak splice from any tension, thus preventing damage and disconnects.

airplane-headphones-knotted

All wrapped up neatly and tied with a knot.

The finished product worked beautifully, despite now providing only monaural audio and being about five centimetres shorter, which is still perfectly usable since the seats hardly recline. The flight staff weren’t angry that I had cannibalized their headphones, but they also didn’t understand how my contraption was able to solve the problem.

This fun little ten minute hack helped provide some distraction in economy class, and maybe it will be useful to you since I doubt they’ve repaired the media system in the seat! If you work for Emirates, let me know and I’ll give you the seat and flight number.

Happy hacking!

James

One hour hacks: Remote LUKS over SSH

I have a GNU/Linux server which I mount a few LUKS encrypted drives on. I only ever interact with the server over SSH, and I never want to keep the LUKS credentials on the remote server. I don’t have anything especially sensitive on the drives, but I think it’s a good security practice to encrypt it all, if only to add noise into the system and for solidarity with those who harbour much more sensitive data.

This means that every time the server reboots or whenever I want to mount the drives, I have to log in and go through the series of luksOpen and mount commands before I can access the data. This turned out to be a bit laborious, so I wrote a quick script to automate it! I also made sure that it was idempotent.
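To be concrete, each drive needs roughly this dance over SSH on the server (the UUID, mapper name and mount point here are made up):

sudo cryptsetup luksOpen /dev/disk/by-uuid/<your-uuid> music
sudo mkdir -p /mnt/music
sudo mount /dev/mapper/music /mnt/music

Multiply that by a few drives, plus the matching umount, luksClose and rmdir on the way back down, and it adds up quickly.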

I decided to share it because I couldn’t find anything similar, and I was annoyed that I had to write this in the first place. Hopefully it saves you some anguish. It also contains a clever little bash hack that I am proud to have in my script.

Here’s the script. You’ll need to fill in the map of mount folder names to drive UUIDs, and you’ll want to set the server hostname and FQDN to match your environment, of course. It will prompt you for your root password to mount, and for the LUKS passphrase when needed.
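For reference, a stripped-down sketch of the idea looks something like this (this is not the original script: the hostname, UUIDs and mount points are placeholders, the clever bash hack isn’t reproduced, and error handling is minimal):

#!/bin/bash
# rluks.sh (sketch): mount/unmount remote LUKS drives over SSH, idempotently
FQDN='server.example.com'           # your server
declare -A DRIVES=(                 # folder name => LUKS partition UUID (placeholders!)
    [music]='11111111-1111-1111-1111-111111111111'
    [files]='22222222-2222-2222-2222-222222222222'
)

echo "Running on: ${FQDN}..."
read -r -p 'Mount/Unmount [m/u] ? ' action

CMDS=''
for name in "${!DRIVES[@]}"; do
    uuid="${DRIVES[$name]}"
    if [ "$action" = 'm' ]; then
        # idempotent mount: each step is skipped if it has already happened
        CMDS+="mkdir -p '/mnt/$name'; "
        CMDS+="test -e '/dev/mapper/$name' || cryptsetup luksOpen '/dev/disk/by-uuid/$uuid' '$name'; "
        CMDS+="mountpoint -q '/mnt/$name' || mount '/dev/mapper/$name' '/mnt/$name'; "
    else
        # idempotent unmount, in reverse order
        CMDS+="mountpoint -q '/mnt/$name' && umount '/mnt/$name'; "
        CMDS+="test -e '/dev/mapper/$name' && cryptsetup luksClose '$name'; "
        CMDS+="rmdir '/mnt/$name' 2>/dev/null; "
    fi
done

# -t allocates a tty so that sudo and the LUKS passphrase can prompt interactively
ssh -t "$FQDN" "sudo bash -c \"${CMDS} echo Done!\""

The real version also prints a ✓ after each step and is more careful about failures; this is just the skeleton.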

Example of mounting:

james@computer:~$ rluks.sh 
Running on: myserver...
[sudo] password for james: 
Mount/Unmount [m/u] ? m
Mounting...
music: mkdir ✓
LUKS Password: 
music: luksOpen ✓
music: mount ✓
files: mkdir ✓
files: luksOpen ✓
files: mount ✓
photos: mkdir ✓
photos: luksOpen ✓
photos: mount ✓
Done!
Connection to server.example.com closed.

Example of unmounting:

james@computer:~$ rluks.sh 
Running on: myserver...
[sudo] password for james: 
Sorry, try again.
[sudo] password for james: 
Mount/Unmount [m/u] ? u
Unmounting...
music: umount ✓
music: luksClose ✓
music: rmdir ✓
files: umount ✓
files: luksClose ✓
files: rmdir ✓
photos: umount ✓
photos: luksClose ✓
photos: rmdir ✓
Done!
Connection to server.example.com closed.
james@computer:~$

It’s worth mentioning that there are many improvements that could be made to this script. If you’ve got patches, send them my way. After all, this is only a: one hour hack.

Happy hacking,

James

PS: One day this sort of thing might be possible in mgmt. Let me know if you want to help work on it!

Trying out Ceph with Oh-My-Vagrant

Daniel P. Berrangé wrote about trying out a single node ceph cluster. I decided to take his article and turn it into an Oh-My-Vagrant omv.yaml file. It took me about two minutes to do so, and two hours to debug a problem caused by something I had broken on my laptop.

If you’d like to replicate his article in less than 5 minutes, pull down the omv.yaml file that I’ve just published and run omv up. Here’s the full terminal output of my session:

james@computer:~/code/oh-my-vagrant/examples$ git pull
Already up-to-date.
james@computer:~/code/oh-my-vagrant/examples$ cdtmpmkdir 
james@computer:/tmp/tmp.DhD$ cp $OLDPWD/ceph-deploy.yaml omv.yaml
james@computer:/tmp/tmp.DhD$ time omv up
Bringing machine 'omv1' up with 'libvirt' provider...
==> omv1: Creating image (snapshot of base box volume).
==> omv1: Creating domain with the following settings...
==> omv1:  -- Name:              omv_omv1
==> omv1:  -- Domain type:       kvm
==> omv1:  -- Cpus:              1
==> omv1:  -- Memory:            512M
==> omv1:  -- Base box:          fedora-23
==> omv1:  -- Storage pool:      default
==> omv1:  -- Image:             /var/lib/libvirt/images/omv_omv1.img
==> omv1:  -- Volume Cache:      default
==> omv1:  -- Kernel:            
==> omv1:  -- Initrd:            
==> omv1:  -- Graphics Type:     spice
==> omv1:  -- Graphics Port:     5900
==> omv1:  -- Graphics IP:       127.0.0.1
==> omv1:  -- Graphics Password: Not defined
==> omv1:  -- Video Type:        qxl
==> omv1:  -- Video VRAM:        9216
==> omv1:  -- Keymap:            en-us
==> omv1:  -- Command line : 
==> omv1: Creating shared folders metadata...
==> omv1: Starting domain.
==> omv1: Waiting for domain to get an IP address...
==> omv1: Waiting for SSH to become available...
==> omv1: Setting hostname...
==> omv1: Configuring and enabling network interfaces...
==> omv1: Rsyncing folder: /tmp/tmp.DhD/ => /vagrant
==> omv1: Updating /etc/hosts file on active guest machines...
==> omv1: Running provisioner: shell...
    omv1: Running: inline script
==> omv1: Changing password for user root.
==> omv1: passwd: all authentication tokens updated successfully.
==> omv1: Running provisioner: shell...
    omv1: Running: inline script
==> omv1: TARGET SOURCE    FSTYPE OPTIONS
==> omv1: /      /dev/vda3 xfs    rw,relatime,seclabel,attr2,inode64,noquota
==> omv1: TARGET                                SOURCE     FSTYPE     OPTIONS
==> omv1: /                                     /dev/vda3  xfs        rw,relatime,seclabel,attr2,inode64,noquota
==> omv1: ├─/sys                                sysfs      sysfs      rw,nosuid,nodev,noexec,relatime,seclabel
==> omv1: │ ├─/sys/kernel/security              securityfs securityfs rw,nosuid,nodev,noexec,relatime
==> omv1: │ ├─/sys/fs/cgroup                    tmpfs      tmpfs      ro,nosuid,nodev,noexec,seclabel,mode=755
==> omv1: │ │ ├─/sys/fs/cgroup/systemd          cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
==> omv1: │ │ ├─/sys/fs/cgroup/devices          cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,devices
==> omv1: │ │ ├─/sys/fs/cgroup/cpu,cpuacct      cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
==> omv1: │ │ ├─/sys/fs/cgroup/memory           cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,memory
==> omv1: │ │ ├─/sys/fs/cgroup/perf_event       cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,perf_event
==> omv1: │ │ ├─/sys/fs/cgroup/blkio            cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,blkio
==> omv1: │ │ ├─/sys/fs/cgroup/hugetlb          cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,hugetlb
==> omv1: │ │ ├─/sys/fs/cgroup/freezer          cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,freezer
==> omv1: │ │ ├─/sys/fs/cgroup/net_cls,net_prio cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
==> omv1: │ │ └─/sys/fs/cgroup/cpuset           cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,cpuset
==> omv1: │ ├─/sys/fs/pstore                    pstore     pstore     rw,nosuid,nodev,noexec,relatime,seclabel
==> omv1: │ ├─/sys/fs/selinux                   selinuxfs  selinuxfs  rw,relatime
==> omv1: │ ├─/sys/kernel/debug                 debugfs    debugfs    rw,relatime,seclabel
==> omv1: │ └─/sys/kernel/config                configfs   configfs   rw,relatime
==> omv1: ├─/proc                               proc       proc       rw,nosuid,nodev,noexec,relatime
==> omv1: │ ├─/proc/sys/fs/binfmt_misc          systemd-1  autofs     rw,relatime,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct
==> omv1: │ └─/proc/fs/nfsd                     nfsd       nfsd       rw,relatime
==> omv1: ├─/dev                                devtmpfs   devtmpfs   rw,nosuid,seclabel,size=242328k,nr_inodes=60582,mode=755
==> omv1: │ ├─/dev/shm                          tmpfs      tmpfs      rw,nosuid,nodev,seclabel
==> omv1: │ ├─/dev/pts                          devpts     devpts     rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000
==> omv1: │ ├─/dev/hugepages                    hugetlbfs  hugetlbfs  rw,relatime,seclabel
==> omv1: │ └─/dev/mqueue                       mqueue     mqueue     rw,relatime,seclabel
==> omv1: ├─/run                                tmpfs      tmpfs      rw,nosuid,nodev,seclabel,mode=755
==> omv1: │ └─/run/user/1000                    tmpfs      tmpfs      rw,nosuid,nodev,relatime,seclabel,size=50112k,mode=700,uid=1000,gid=1000
==> omv1: ├─/tmp                                tmpfs      tmpfs      rw,seclabel
==> omv1: ├─/boot                               /dev/vda1  ext4       rw,relatime,seclabel,data=ordered
==> omv1: └─/var/lib/nfs/rpc_pipefs             sunrpc     rpc_pipefs rw,relatime
==> omv1: meta-data=/dev/vda3              isize=512    agcount=4, agsize=321792 blks
==> omv1:          =                       sectsz=512   attr=2, projid32bit=1
==> omv1:          =                       crc=1        finobt=1
==> omv1: data     =                       bsize=4096   blocks=1287168, imaxpct=25
==> omv1:          =                       sunit=0      swidth=0 blks
==> omv1: naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
==> omv1: log      =internal               bsize=4096   blocks=2560, version=2
==> omv1:          =                       sectsz=512   sunit=0 blks, lazy-count=1
==> omv1: realtime =none                   extsz=4096   blocks=0, rtextents=0
==> omv1: data blocks changed from 1287168 to 10199744
==> omv1: Running provisioner: shell...
    omv1: Running: inline script
==> omv1: this is ceph-deploy on fedora-23
==> omv1: Last metadata expiration check performed 0:00:12 ago on Mon Dec 28 14:34:32 2015.
==> omv1: Dependencies resolved.
==> omv1: ================================================================================
==> omv1:  Package               Arch          Version                Repository     Size
==> omv1: ================================================================================
==> omv1: Installing:
==> omv1:  ceph-deploy           noarch        1.5.25-2.fc23          fedora        160 k
==> omv1:  python-execnet        noarch        1.3.0-3.fc23           fedora        275 k
==> omv1:  python-remoto         noarch        0.0.25-2.fc23          fedora         26 k
==> omv1: 
==> omv1: Transaction Summary
==> omv1: ================================================================================
==> omv1: Install  3 Packages
==> omv1: Total download size: 461 k
==> omv1: Installed size: 1.5 M
==> omv1: Downloading Packages:
==> omv1: --------------------------------------------------------------------------------
==> omv1: Total                                           214 kB/s | 461 kB     00:02     
==> omv1: Running transaction check
==> omv1: Transaction check succeeded.
==> omv1: Running transaction test
==> omv1: Transaction test succeeded.
==> omv1: Running transaction
==> omv1:   Installing  : python-execnet-1.3.0-3.fc23.noarch                          1/3
==> omv1:  
==> omv1:   Installing  : python-remoto-0.0.25-2.fc23.noarch                          2/3
==> omv1:  
==> omv1:   Installing  : ceph-deploy-1.5.25-2.fc23.noarch                            3/3
==> omv1:  
==> omv1:   Verifying   : ceph-deploy-1.5.25-2.fc23.noarch                            1/3
==> omv1:  
==> omv1:   Verifying   : python-remoto-0.0.25-2.fc23.noarch                          2/3
==> omv1:  
==> omv1:   Verifying   : python-execnet-1.3.0-3.fc23.noarch                          3/3
==> omv1:  
==> omv1: 
==> omv1: Installed:
==> omv1:   ceph-deploy.noarch 1.5.25-2.fc23       python-execnet.noarch 1.3.0-3.fc23    
==> omv1:   python-remoto.noarch 0.0.25-2.fc23    
==> omv1: Complete!
==> omv1: [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
==> omv1: [ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy new omv1.example.com
==> omv1: [ceph_deploy.new][DEBUG ] Creating new cluster named ceph
==> omv1: [ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
==> omv1: [omv1.example.com][DEBUG ] connected to host: omv1.example.com 
==> omv1: [omv1.example.com][DEBUG ] detect platform information from remote host
==> omv1: [omv1.example.com][DEBUG ] detect machine type
==> omv1: [omv1.example.com][DEBUG ] find the location of an executable
==> omv1: [omv1.example.com][INFO  ] Running command: /usr/sbin/ip link show
==> omv1: [omv1.example.com][INFO  ] Running command: /usr/sbin/ip addr show
==> omv1: [omv1.example.com][DEBUG ] IP addresses found: ['192.168.123.100', '172.17.0.1', '192.168.121.48']
==> omv1: [ceph_deploy.new][DEBUG ] Resolving host omv1.example.com
==> omv1: [ceph_deploy.new][DEBUG ] Monitor omv1 at 192.168.123.100
==> omv1: [ceph_deploy.new][DEBUG ] Monitor initial members are ['omv1']
==> omv1: [ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.123.100']
==> omv1: [ceph_deploy.new][DEBUG ] Creating a random mon key...
==> omv1: [ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
==> omv1: [ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
==> omv1: [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
==> omv1: [ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy install --no-adjust-repos omv1.example.com
==> omv1: [ceph_deploy.install][DEBUG ] Installing stable version hammer on cluster ceph hosts omv1.example.com
==> omv1: [ceph_deploy.install][DEBUG ] Detecting platform for host omv1.example.com ...
==> omv1: [omv1.example.com][DEBUG ] connected to host: omv1.example.com 
==> omv1: [omv1.example.com][DEBUG ] detect platform information from remote host
==> omv1: [omv1.example.com][DEBUG ] detect machine type
==> omv1: [ceph_deploy.install][INFO  ] Distro info: Fedora 23 Twenty Three
==> omv1: [omv1.example.com][INFO  ] installing ceph on omv1.example.com
==> omv1: [omv1.example.com][INFO  ] Running command: yum -y -q install ceph ceph-radosgw
==> omv1: [omv1.example.com][WARNING] Yum command has been deprecated, redirecting to '/usr/bin/dnf -y -q install ceph ceph-radosgw'.
==> omv1: [omv1.example.com][WARNING] See 'man dnf' and 'man yum2dnf' for more information.
==> omv1: [omv1.example.com][WARNING] To transfer transaction metadata from yum to DNF, run:
==> omv1: [omv1.example.com][WARNING] 'dnf install python-dnf-plugins-extras-migrate && dnf-2 migrate'
==> omv1: [omv1.example.com][WARNING] 
==> omv1: [omv1.example.com][INFO  ] Running command: ceph --version
==> omv1: [omv1.example.com][DEBUG ] ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
==> omv1: [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
==> omv1: [ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy mon create-initial
==> omv1: [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts omv1
==> omv1: [ceph_deploy.mon][DEBUG ] detecting platform for host omv1 ...
==> omv1: [omv1][DEBUG ] connected to host: omv1 
==> omv1: [omv1][DEBUG ] detect platform information from remote host
==> omv1: [omv1][DEBUG ] detect machine type
==> omv1: [ceph_deploy.mon][INFO  ] distro info: Fedora 23 Twenty Three
==> omv1: [omv1][DEBUG ] determining if provided host has same hostname in remote
==> omv1: [omv1][DEBUG ] get remote short hostname
==> omv1: [omv1][DEBUG ] deploying mon to omv1
==> omv1: [omv1][DEBUG ] get remote short hostname
==> omv1: [omv1][DEBUG ] remote hostname: omv1
==> omv1: [omv1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
==> omv1: [omv1][DEBUG ] create the mon path if it does not exist
==> omv1: [omv1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-omv1/done
==> omv1: [omv1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-omv1/done
==> omv1: [omv1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-omv1.mon.keyring
==> omv1: [omv1][DEBUG ] create the monitor keyring file
==> omv1: [omv1][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i omv1 --keyring /var/lib/ceph/tmp/ceph-omv1.mon.keyring
==> omv1: [omv1][DEBUG ] ceph-mon: mon.noname-a 192.168.123.100:6789/0 is local, renaming to mon.omv1
==> omv1: [omv1][DEBUG ] ceph-mon: set fsid to 94276740-7431-49ab-80da-a37325d2d020
==> omv1: [omv1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-omv1 for mon.omv1
==> omv1: [omv1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-omv1.mon.keyring
==> omv1: [omv1][DEBUG ] create a done file to avoid re-doing the mon deployment
==> omv1: [omv1][DEBUG ] create the init path if it does not exist
==> omv1: [omv1][DEBUG ] locating the `service` executable...
==> omv1: [omv1][INFO  ] Running command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.omv1
==> omv1: [omv1][DEBUG ] === mon.omv1 === 
==> omv1: [omv1][DEBUG ] Starting Ceph mon.omv1 on omv1...
==> omv1: [omv1][WARNING] Running as unit run-3493.service.
==> omv1: [omv1][DEBUG ] Starting ceph-create-keys on omv1...
==> omv1: [omv1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.omv1.asok mon_status
==> omv1: [omv1][DEBUG ] ********************************************************************************
==> omv1: [omv1][DEBUG ] status for monitor: mon.omv1
==> omv1: [omv1][DEBUG ] {
==> omv1: [omv1][DEBUG ]   "election_epoch": 2, 
==> omv1: [omv1][DEBUG ]   "extra_probe_peers": [], 
==> omv1: [omv1][DEBUG ]   "monmap": {
==> omv1: [omv1][DEBUG ]     "created": "0.000000", 
==> omv1: [omv1][DEBUG ]     "epoch": 1, 
==> omv1: [omv1][DEBUG ]     "fsid": "94276740-7431-49ab-80da-a37325d2d020", 
==> omv1: [omv1][DEBUG ]     "modified": "0.000000", 
==> omv1: [omv1][DEBUG ]     "mons": [
==> omv1: [omv1][DEBUG ]       {
==> omv1: [omv1][DEBUG ]         "addr": "192.168.123.100:6789/0", 
==> omv1: [omv1][DEBUG ]         "name": "omv1", 
==> omv1: [omv1][DEBUG ]         "rank": 0
==> omv1: [omv1][DEBUG ]       }
==> omv1: [omv1][DEBUG ]     ]
==> omv1: [omv1][DEBUG ]   }, 
==> omv1: [omv1][DEBUG ]   "name": "omv1", 
==> omv1: [omv1][DEBUG ]   "outside_quorum": [], 
==> omv1: [omv1][DEBUG ]   "quorum": [
==> omv1: [omv1][DEBUG ]     0
==> omv1: [omv1][DEBUG ]   ], 
==> omv1: [omv1][DEBUG ]   "rank": 0, 
==> omv1: [omv1][DEBUG ]   "state": "leader", 
==> omv1: [omv1][DEBUG ]   "sync_provider": []
==> omv1: [omv1][DEBUG ] }
==> omv1: [omv1][DEBUG ] ********************************************************************************
==> omv1: [omv1][INFO  ] monitor: mon.omv1 is running
==> omv1: [omv1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.omv1.asok mon_status
==> omv1: [ceph_deploy.mon][INFO  ] processing monitor mon.omv1
==> omv1: [omv1][DEBUG ] connected to host: omv1 
==> omv1: [omv1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.omv1.asok mon_status
==> omv1: [ceph_deploy.mon][INFO  ] mon.omv1 monitor has reached quorum!
==> omv1: [ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
==> omv1: [ceph_deploy.mon][INFO  ] Running gatherkeys...
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Checking omv1 for /etc/ceph/ceph.client.admin.keyring
==> omv1: [omv1][DEBUG ] connected to host: omv1 
==> omv1: [omv1][DEBUG ] detect platform information from remote host
==> omv1: [omv1][DEBUG ] detect machine type
==> omv1: [omv1][DEBUG ] fetch remote file
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from omv1.
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Checking omv1 for /var/lib/ceph/bootstrap-osd/ceph.keyring
==> omv1: [omv1][DEBUG ] connected to host: omv1 
==> omv1: [omv1][DEBUG ] detect platform information from remote host
==> omv1: [omv1][DEBUG ] detect machine type
==> omv1: [omv1][DEBUG ] fetch remote file
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from omv1.
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Checking omv1 for /var/lib/ceph/bootstrap-mds/ceph.keyring
==> omv1: [omv1][DEBUG ] connected to host: omv1 
==> omv1: [omv1][DEBUG ] detect platform information from remote host
==> omv1: [omv1][DEBUG ] detect machine type
==> omv1: [omv1][DEBUG ] fetch remote file
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from omv1.
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Checking omv1 for /var/lib/ceph/bootstrap-rgw/ceph.keyring
==> omv1: [omv1][DEBUG ] connected to host: omv1 
==> omv1: [omv1][DEBUG ] detect platform information from remote host
==> omv1: [omv1][DEBUG ] detect machine type
==> omv1: [omv1][DEBUG ] fetch remote file
==> omv1: [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-rgw.keyring key from omv1.
==> omv1: [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
==> omv1: [ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy osd prepare omv1.example.com:/srv/ceph/osd
==> omv1: [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks omv1.example.com:/srv/ceph/osd:
==> omv1: [omv1.example.com][DEBUG ] connected to host: omv1.example.com 
==> omv1: [omv1.example.com][DEBUG ] detect platform information from remote host
==> omv1: [omv1.example.com][DEBUG ] detect machine type
==> omv1: [ceph_deploy.osd][INFO  ] Distro info: Fedora 23 Twenty Three
==> omv1: [ceph_deploy.osd][DEBUG ] Deploying osd to omv1.example.com
==> omv1: [omv1.example.com][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
==> omv1: [omv1.example.com][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
==> omv1: [ceph_deploy.osd][DEBUG ] Preparing host omv1.example.com disk /srv/ceph/osd journal None activate False
==> omv1: [omv1.example.com][INFO  ] Running command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /srv/ceph/osd
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Preparing osd data dir /srv/ceph/osd
==> omv1: [omv1.example.com][INFO  ] checking OSD status...
==> omv1: [omv1.example.com][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
==> omv1: [ceph_deploy.osd][DEBUG ] Host omv1.example.com is now ready for osd use.
==> omv1: [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
==> omv1: [ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy osd activate omv1.example.com:/srv/ceph/osd
==> omv1: [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks omv1.example.com:/srv/ceph/osd:
==> omv1: [omv1.example.com][DEBUG ] connected to host: omv1.example.com 
==> omv1: [omv1.example.com][DEBUG ] detect platform information from remote host
==> omv1: [omv1.example.com][DEBUG ] detect machine type
==> omv1: [ceph_deploy.osd][INFO  ] Distro info: Fedora 23 Twenty Three
==> omv1: [ceph_deploy.osd][DEBUG ] activating host omv1.example.com disk /srv/ceph/osd
==> omv1: [ceph_deploy.osd][DEBUG ] will use init type: sysvinit
==> omv1: [omv1.example.com][INFO  ] Running command: ceph-disk -v activate --mark-init sysvinit --mount /srv/ceph/osd
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Cluster uuid is 94276740-7431-49ab-80da-a37325d2d020
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Cluster name is ceph
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:OSD uuid is 43a20af4-7a45-4f70-a0dd-baf906d58026
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Allocating OSD id...
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 43a20af4-7a45-4f70-a0dd-baf906d58026
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:OSD id is 0
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Initializing OSD...
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /srv/ceph/osd/activate.monmap
==> omv1: [omv1.example.com][WARNING] got monmap epoch 1
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /srv/ceph/osd/activate.monmap --osd-data /srv/ceph/osd --osd-journal /srv/ceph/osd/journal --osd-uuid 43a20af4-7a45-4f70-a0dd-baf906d58026 --keyring /srv/ceph/osd/keyring
==> omv1: [omv1.example.com][WARNING] 2015-12-28 14:36:47.525213 7f66f9f0c880 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
==> omv1: [omv1.example.com][WARNING] 2015-12-28 14:36:49.412257 7f66f9f0c880 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
==> omv1: [omv1.example.com][WARNING] 2015-12-28 14:36:49.413018 7f66f9f0c880 -1 filestore(/srv/ceph/osd) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
==> omv1: [omv1.example.com][WARNING] 2015-12-28 14:36:49.813515 7f66f9f0c880 -1 created object store /srv/ceph/osd journal /srv/ceph/osd/journal for osd.0 fsid 94276740-7431-49ab-80da-a37325d2d020
==> omv1: [omv1.example.com][WARNING] 2015-12-28 14:36:49.813566 7f66f9f0c880 -1 auth: error reading file: /srv/ceph/osd/keyring: can't open /srv/ceph/osd/keyring: (2) No such file or directory
==> omv1: [omv1.example.com][WARNING] 2015-12-28 14:36:49.813683 7f66f9f0c880 -1 created new key in keyring /srv/ceph/osd/keyring
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Marking with init system sysvinit
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Authorizing OSD key...
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.0 -i /srv/ceph/osd/keyring osd allow * mon allow profile osd
==> omv1: [omv1.example.com][WARNING] added key for osd.0
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:ceph osd.0 data dir is ready at /srv/ceph/osd
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/osd/ceph-0 -> /srv/ceph/osd
==> omv1: [omv1.example.com][WARNING] DEBUG:ceph-disk:Starting ceph osd.0...
==> omv1: [omv1.example.com][WARNING] INFO:ceph-disk:Running command: /usr/sbin/service ceph --cluster ceph start osd.0
==> omv1: [omv1.example.com][DEBUG ] === osd.0 === 
==> omv1: [omv1.example.com][WARNING] create-or-move updating item name 'osd.0' weight 0.04 at location {host=omv1,root=default} to crush map
==> omv1: [omv1.example.com][DEBUG ] Starting Ceph osd.0 on omv1...
==> omv1: [omv1.example.com][WARNING] Running as unit run-4116.service.
==> omv1: [omv1.example.com][INFO  ] checking OSD status...
==> omv1: [omv1.example.com][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
==> omv1: [omv1.example.com][INFO  ] Running command: systemctl enable ceph
==> omv1: [omv1.example.com][WARNING] ceph.service is not a native service, redirecting to systemd-sysv-install
==> omv1: [omv1.example.com][WARNING] Executing /usr/lib/systemd/systemd-sysv-install enable ceph
==> omv1:     cluster 94276740-7431-49ab-80da-a37325d2d020
==> omv1:      health HEALTH_WARN
==> omv1:             64 pgs stuck inactive
==> omv1:             64 pgs stuck unclean
==> omv1:      monmap e1: 1 mons at {omv1=192.168.123.100:6789/0}
==> omv1:             election epoch 2, quorum 0 omv1
==> omv1:      osdmap e5: 1 osds: 1 up, 1 in
==> omv1:       pgmap v6: 64 pgs, 1 pools, 0 bytes data, 0 objects
==> omv1:             0 kB used, 0 kB / 0 kB avail
==> omv1:                   64 creating
==> omv1: you can now use ceph with the rbd command

real    4m14.815s
user    0m2.239s
sys    0m0.288s
james@computer:/tmp/tmp.DhD$ vscreen root@omv1
[root@omv1 ~]# ceph status
    cluster 94276740-7431-49ab-80da-a37325d2d020
     health HEALTH_OK
     monmap e1: 1 mons at {omv1=192.168.123.100:6789/0}
            election epoch 2, quorum 0 omv1
     osdmap e5: 1 osds: 1 up, 1 in
      pgmap v9: 64 pgs, 1 pools, 0 bytes data, 0 objects
            6642 MB used, 33190 MB / 39832 MB avail
                  64 active+clean
[root@omv1 ~]#

You’ll notice that the ceph status command initially returned a warning when run through OMV, but when I logged in to the machine and ran it a second time, everything was happy. I imagine this is because it takes a moment to converge on a healthy state.
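If you’d rather watch it converge than log in twice, something like this on the node does the trick:

[root@omv1 ~]# ceph -w                 # stream cluster status changes as they happen
[root@omv1 ~]# watch -n 5 ceph status  # or just poll the status every few seconds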

I’ve also just released a new Fedora-23 vagrant box, which is what this example uses. Naturally, if you haven’t already got the box, this example will take a bit longer the first time: the download will cost you about 865M and will happen automatically.

Happy hacking!

James

PS: If you’re curious what that cdtmpmkdir is, it’s here:

$ type cdtmpmkdir 
cdtmpmkdir is a function
cdtmpmkdir () 
{ 
    cd `mktemp --tmpdir -d tmp.XXX`
}

Fixing dropbox “conflicted copy” problems

I usually avoid proprietary cloud services because of freedom, privacy and vendor lock-in concerns. Secondly, there are some excellent libre (and hosted) services such as WordPress, Wikipedia and OpenShift which don’t have the above problems. Thirdly, there are everyday Free Software tools such as Fedora GNU/Linux, Libreoffice, and git-annex-assistant which make my computing much more powerful. Finally, there are some hosted services that I use that don’t lock me in, because I use them as push-only mirrors and I only interact with them using Free Software tools. The two examples are GitHub and Dropbox.

Today, Dropbox bit me. Here’s how I saved my data.

Dropbox integrates with GNOME’s nautilus to sync your data to their proprietary cloud hosting. I periodically run the dropbox client to sync any changes to my public files up to their servers. Today, the client decided that some of my newer files were older than the stored server-side versions, and promptly overwrote my newer versions.

Thankfully I have real backups, and, to be fair, Dropbox actually renamed my newer files instead of blatantly clobbering them. My filesystem now looks like this:

$ tree files/
files/
|-- bar
|-- baz
|   |-- file1
|   |-- file1\ (james's\ conflicted\ copy\ 2014-09-29)
|   |-- file2\ (james's\ conflicted\ copy\ 2014-09-29).sh
|   `-- file2.sh
`-- foo
    `-- magic.sh

You’ll note that my previously clean file system now has the “conflicted copy” versions everywhere. These are the good versions, whereas in the example above file1 and file2.sh are the older unwanted versions.

I spent some time with find and diff convincing myself that this was true, and eventually I wrote a script. The script looks through the current working directory for “conflicted copy” matches, saves the unwanted versions (just in case) and then clobbers them with the good “conflicted” version.
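If you want to do the same sanity check, something along these lines works (a rough sketch: it strips the conflicted-copy suffix to find the original name, then compares the pair):

find . -path "*conflicted copy*" -print0 | while read -d '' -r f; do
    orig="$(echo "$f" | sed "s/ (.*'s conflicted copy [0-9-]*)//")"
    diff -qs "$orig" "$f"    # -q only reports whether they differ; -s also reports identical files
done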

Please look through, edit, and understand this script before running it. It might not be what you want, and it was designed to only work for me. It is available as a gist, and below in the body of this article.

$ cat fix-dropbox.sh 
#!/bin/bash

# XXX: use at your own risk - do not run without understanding this first!
exit 1

# safety directory
BACKUP='/tmp/fix-dropbox/'

# TODO: detect or pick manually...
NAME=`hostname`
#NAME='myhostname'
DATE='2014-09-29'

mkdir -p "$BACKUP"
find . -path "*(*'s conflicted copy [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]*" -print0 | while read -d $'' -r file; do
    printf 'Found: %s\n' "$file"

    # TODO: detect or pick manually...
    #NAME='XXX'
    #DATE='2014-09-29'

    STRING=" (${NAME}'s conflicted copy ${DATE})"
    #echo $STRING
    RESULT=`echo "$file" | sed "s/$STRING//"`
    #echo $RESULT

    SAVE="$BACKUP"`dirname "$RESULT"`
    #echo $SAVE
    mkdir -p "$SAVE"
    cp "$RESULT" "$SAVE"
    mv "$file" "$RESULT"

done

You can thank bash for saving your data. Stop bashing it and read this article instead.

Happy hacking,

James


One minute hacks: the nautilus scripts folder

Master SDN hacker Flavio sent me some tunes. They were sitting on my desktop in a folder:

$ ls ~/Desktop/
uncopyrighted_tunes_from_flavio/

I wanted to listen to them while hacking, but what was the easiest way…? I wanted to use the nautilus file browser to select which folder to play, and the totem music/video player to do the playing.

Drop a file named totem into:

~/.local/share/nautilus/scripts/

with the contents:

#!/bin/bash
# o hai from purpleidea
exec totem -- "$@"

and make it executable with:

$ chmod u+x ~/.local/share/nautilus/scripts/totem

Now right-click on that music folder in nautilus, and you should see a Scripts menu. In it there will be a totem menu item. Clicking on it should load all of the contents into totem, and you’ll be rocking out in no time. You can also run scripts on a selection of files instead of a whole folder.

Here’s a screenshot:

nautilus is pretty smart and even lets you know that this folder is special

I wrote this to demonstrate a cute nautilus hack. Hopefully you’ll use this idea to extend this feature for something even more useful.

Happy hacking,

James


Finite state machines in puppet

In my attempt to push puppet to its limits (for no particular reason), to develop more powerful puppet modules, to build in a distributed lock manager, and to be more dynamic, I’m now attempting to build a Finite State Machine (FSM) in puppet.

Is this a real finite state machine, and why would you do this?

Computer science professionals might not approve of the purity level, but they will hopefully appreciate the hack value. I’ve done this to illustrate a state transition technique that will be necessary in a module that I am writing.

Can we have an example?

Sure! I’ve decided to model thermodynamic phase transitions. Here’s what we’re building:

Phase_change_-_en.svg

How does it work?

Start off with a given define that accepts an argument. It could have one argument, or many, and be of whichever type you like, such as an integer, or even a more complicated list type. To keep the example simple, let’s work with a single argument named $input.

define fsm::transition(
        $input = ''
) {
        # TODO: add amazing code here...
}

The FSM runs as follows: On first execution, the $input value is saved to a local file by means of a puppet exec type. A corresponding fact exists to read from that file and create a unique variable for the fsm::transition type. Let’s call that variable $last. This is the special part!

# ruby fact to pull in the data from the state file
# (transition_dir and regexp are defined earlier in the full fact; elided here)
found = {}
Dir.glob(transition_dir+'*').each do |d|
    n = File.basename(d)    # should be the fsm::transition name
    if n.length > 0 and regexp.match(n)
        f = d.gsub(/\/$/, '')+'/state'    # full file path
        if File.exists?(f)
            # TODO: future versions should unpickle (but with yaml)
            v = File.open(f, 'r').read.strip    # read into str
            if v.length > 0 and regexp.match(v)
                found[n] = v
            end
        end
    end
end

found.keys.each do |x|
    Facter.add('fsm_transition_'+x) do
        #confine :operatingsystem => %w{CentOS, RedHat, Fedora}
        setcode {
            found[x]
        }
    end
end

On subsequent runs, the process gets more interesting: The $input value and the $last value are used to decide what to run. They can be different because the user might have changed the $input value. Logic trees then decide what actions you’d like to perform. This lets us compare the previous state to the new desired state, and as a result, be more intelligent about which actions need to run for a successful state transition. This is the FSM part.

# logic tree modeling phase transitions
# https://en.wikipedia.org/wiki/Phase_transition
$transition = "${valid_last}" ? {
        'solid' => "${valid_input}" ? {
               'solid' => true,
               'liquid' => 'melting',
               'gas' => 'sublimation',
               'plasma' => false,
               default => '',
        },
        'liquid' => "${valid_input}" ? {
               'solid' => 'freezing',
               'liquid' => true,
               'gas' => 'vaporization',
               'plasma' => false,
               default => '',
        },
        'gas' => "${valid_input}" ? {
               'solid' => 'deposition',
               'liquid' => 'condensation',
               'gas' => true,
               'plasma' => 'ionization',
               default => '',
        },
        'plasma' => "${valid_input}" ? {
               'solid' => false,
               'liquid' => false,
               'gas' => 'recombination',
               'plasma' => true,
               default => '',
        },
        default => '',
}

Once the state transition actions have completed successfully, the exec must store the $input value in the local file for future use as the unique $last fact for the next puppet run. If there are errors during state transition execution, you may choose to not store the updated value (to cause a re-run) and/or to add an error condition fact that the subsequent puppet run will have to read in and handle accordingly. This is the important part.

$f = "${vardir}/transition/${name}/state"
$diff = "/usr/bin/test '${valid_input}' != '${valid_last}'"

# TODO: future versions should pickle (but with yaml)
exec { "/bin/echo '${valid_input}' > '${f}'":
        logoutput => on_failure,
        onlyif => "/usr/bin/test ! -e '${f}' || ${diff}",
        require => File["${vardir}/"],
        alias => "fsm-transition-${name}",
}

Can we take this further?

It might be beneficial to remember the path we took through our graph. To do this, on each transition we append the new state to a file on our local puppet client. The corresponding fact is similar to the $last fact, except that it maintains a list of values instead of just one. There is a max length variable that can be used to avoid storing an unlimited number of old states.
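The exec that maintains this history can be a simple shell pipeline, for example (a sketch that reuses the ${vardir} layout from above and assumes a hypothetical ${maxlength} parameter):

# append the new state, then trim the history to the last ${maxlength} entries
echo "${valid_input}" >> "${vardir}/transition/${name}/path" && \
tail -n "${maxlength}" "${vardir}/transition/${name}/path" > "${vardir}/transition/${name}/path.tmp" && \
mv "${vardir}/transition/${name}/path.tmp" "${vardir}/transition/${name}/path"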

Does this have a practical use?

Yes, absolutely! I realized that something like this could be useful for puppet-gluster. Stay tuned for more patches.

Hopefully you enjoyed this. By following the above guidelines, you should now have some extra tricks for building state transitions into your puppet modules. Let me know if you found this hack awesome and unique.

I’ve posted the full example module here.

Happy Hacking,

James


Collecting duplicate resources in puppet

I could probably write a long design article explaining why identical duplicate resources should be allowed [1] in puppet. If puppet is going to survive in the long-term, they will have to build in this feature. In the short-term, I will have to hack around the deficiency. As luck would have it, Mr. Bode has already written part one of the hack: ensure_resource.

Why?

Suppose you have a given infrastructure with N vaguely identical nodes. N could equal 2 for a dual primary or active-passive cluster, or N could be greater than 2 for a more elaborate N-ary cluster. It is sufficient to say that each of those N nodes might export an identical puppet resource, which one (or many) clients might need to collect in order to operate correctly. It’s important that each node export this, so that there is no single point of failure if one or more of the cluster nodes goes missing.

How?

As I mentioned, ensure_resource is a good enough hack to start. Here’s how you take an existing resource and make it duplicate-friendly. Take, for example, the bulk of my dhcp::subnet resource:

define dhcp::subnet(
      $subnet,
      # [...]
      $range = [],
      $allow_duplicates = false
) {
      if $allow_duplicates { # a non-empty string also counts as true
            # allow the user to specify a specific split string to use...
            $c = type($allow_duplicates) ? {
                  'string' => "${allow_duplicates}",
                  default => '#',
            }
            if "${c}" == '' {
                  fail('Split character(s) cannot be empty!')
            }

            # split into $realname-$uid where $realname can contain split chars
            $realname = inline_template("<%= name.rindex('${c}').nil?? name : name.slice(0, name.rindex('${c}')) %>")
            $uid = inline_template("<%= name.rindex('${c}').nil?? '' : name.slice(name.rindex('${c}')+'${c}'.length, name.length-name.rindex('${c}')-'${c}'.length) %>")

            $params = { # this must use all the args as listed above...
                  'subnet' => $subnet,
                  # [...]
                  'range' => $range,
                  # NOTE: don't include the allow_duplicates flag...
            }

            ensure_resource('dhcp::subnet', "${realname}", $params)
      } else { # body of the actual resource...

            # BUG: lol: https://projects.puppetlabs.com/issues/15813
            $valid_range = type($range) ? {
                  'array' => $range,
                  default => [$range],
            }

            # the templating part of the module... 
            frag { "/etc/dhcp/subnets.d/${name}.subnet.frag":
                  content => template('dhcp/subnet.frag.erb'),
            }
      }
}

As you can see, I added an $allow_duplicates parameter to my resource. If it is set to true, then when the resource is defined, it parses out a trailing #comment from the $namevar. This can guarantee uniqueness for the $name (if they happen to be on the same node), but more importantly, it can guarantee uniqueness on a collector, where you would otherwise be unable to work around the $name collision.

This is how you use this on one of the exporting nodes:

@@dhcp::subnet { "dmz#${hostname}":
      subnet => ...,
      range => [...],
      allow_duplicates => '#',
}

and on the collector:

Dhcp::Subnet <<| tag == 'dhcp' and title != "${dhcp_zone}" |>> {
}

There are a few things to notice:

  1. The $allow_duplicates argument can be set to true (a boolean), or to any string. If you pick a string, then that will be used to “split” out the end comment. It’s smart enough to split with a reverse index search so that your name can contain the #’s if need be. By default it looks for a single #, but you could replace this with ‘XXX123HACK‘ if that was the only unique string match you can manage. Make sure not to use the string value of ‘true‘.
  2. On my collector I like to filter by title. This is the $namevar. Sadly, this doesn’t support any fancier matching like in_array or startswith. I consider this a puppet deficiency. Hopefully someone will fix this to allow general puppet code here.
  3. Adding this to each resource is kind of annoying. It’s obviously a hack, but it’s the right thing to do for the time being IMHO.

Hope you had fun with this.

Happy hacking,

James

PS: [1] One side note, in the general case for custom resources, I actually think that by default duplicate parameters should be required, but that a resource could provide an optional function such as is_matched which would take as input the two parameter hash trees, and decide if they’re “functionally equivalent”. This would let an individual resource decide if it matters that you specified thing=>yes in one and thing=>true in the other. Functionally it matters that duplicate resources don’t have conflicting effects. I’m sure this would be particularly bug prone, and probably cause thrashing in some cases, which is why, by default the parameters should all match. </babble>

Forcing firefox to remember passwords

There are a handful of websites out there that decide that they know better than your browser and tell it to not offer to save passwords. They do this by setting a form autocomplete attribute to off.

Since we already agree that HTML and the web are a terrible idea, hopefully we can find a way to hack around this. It turns out that I didn’t have to, because many others have solved this hack before me. The cleanest version I found is here: http://www.howtogeek.com/62980/how-to-force-your-browser-to-remember-passwords/

It’s not that complicated actually: a little bookmarklet (javascript code, stored in a bookmark, and activated when you open it) is saved in your browser, and on activation it loops through all of the page’s forms and flips any autocomplete=off attributes back to on.

I’ve copied the code here, in the interests of archiving this very useful hack. Here you go:

Shortened form:

javascript:(function(){var%20ac,c,f,fa,fe,fea,x,y,z;ac="autocomplete";c=0;f=document.forms;for(x=0;x<f.length;x++){fa=f[x].attributes;for(y=0;y<fa.length;y++){if(fa[y].name.toLowerCase()==ac){fa[y].value="on";c++;}}fe=f[x].elements;for(y=0;y<fe.length;y++){fea=fe[y].attributes;for(z=0;z<fea.length;z++){if(fea[z].name.toLowerCase()==ac){fea[z].value="on";c++;}}}}alert("Enabled%20'"+ac+"'%20on%20"+c+"%20objects.");})();

Long form:

function() {
   var ac, c, f, fa, fe, fea, x, y, z;
   //ac = autocomplete constant (attribute to search for)
   //c = count of the number of times the autocomplete constant was found
   //f = all forms on the current page
   //fa = attributes in the current form
   //fe = elements in the current form
   //fea = attributes in the current form element
   //x,y,z = loop variables

   ac = "autocomplete";
   c = 0;
   f = document.forms;

   //cycle through each form
   for(x = 0; x < f.length; x++) {
      fa = f[x].attributes;
      //cycle through each attribute in the form
      for(y = 0; y < fa.length; y++) {
         //check for autocomplete in the form attribute
         if(fa[y].name.toLowerCase() == ac) {
            fa[y].value = "on";
            c++;
         }
      }

      fe = f[x].elements;
      //cycle through each element in the form
      for(y = 0; y < fe.length; y++) {
         fea = fe[y].attributes;
         //cycle through each attribute in the element
         for(z = 0; z < fea.length; z++) {
            //check for autocomplete in the element attribute
            if(fea[z].name.toLowerCase() == ac) {
               fea[z].value = "on";
               c++;
            }
         }
      }
   }

   alert("Enabled '" + ac + "' on " + c + " objects.");
}

Happy hacking,

James

PS: Best friends forever if you can get firefox to natively integrate with the gnome-keyring. No, I don’t want to force it in myself; get this code merged upstream please!

Mothers day hacks

Firstly, Happy Mother’s Day to my mother.

Google is, as usual, busily releasing doodles. Today, the doodle takes you through a Rube Goldberg-esque sequence, giving you four decisions to make along the way. Each decision gives you one of three different choices, and at the end a unique drawing is displayed. I expect:

3 * 3 * 3 * 3 = 81

different permutations. At the end of the process, you can print your image. I got directed to:

https://www.google.ca/logos/2013/mom/print/3311.html

which suspiciously has a four digit filename composed of threes and ones. Inspecting the file in firefox shows that the main image URL is:

https://www.google.ca/logos/2013/mom/hdcards/3311.jpg

with a similarly suspicious “3311”. I decided that I wanted to see all the permutations without having to refresh google’s page. Jumping to a terminal and using some bash expansion magic:

$ cd /tmp
$ mkdir hack1
$ cd hack1/
$ wget https://www.google.ca/logos/2013/mom/hdcards/{1..3}{1..3}{1..3}{1..3}.jpg
[snip]
FINISHED --2013-05-12 07:07:22--
Total wall clock time: 1m 23s
Downloaded: 81 files, 138M in 1m 19s (1.75 MB/s)
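If you want to convince yourself that the brace expansion really produces 81 URLs before hitting the network, you can count them first:

$ echo https://www.google.ca/logos/2013/mom/hdcards/{1..3}{1..3}{1..3}{1..3}.jpg | wc -w
81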

I am pleased to see that 81 files have downloaded, and that they’re all unique:

$ md5sum * | sort | uniq | wc -l
81

and all the same file type (JPEGs, as it turns out):

$ file * | awk '{print $2}' | sort | uniq | wc -l
1
$ file * | head -1
1111.jpg: JPEG image data, EXIF standard

Now I can very easily browse through the images with eog, and select my favourite.

Hope this has been instructive,

Happy hacking,

James