Saturday, June 6, 2015

Personal kanban and process-oriented work...

So several weeks have passed since I wrote my last post about personal kanban, where I described using a weekly board and pulling cards onto it.

The big thing is I'm still using this system - which is a big deal for me.  I've been going at it for many weeks and I feel I'm unlikely to discard it any time soon.  But things have changed (evolved) quite a bit...

The biggest thing I've noticed is that an activity that I had earmarked as an everyday one, that I would do for as little as 10 minutes in some cases and record in a tally next to my weekly board, very quickly became part of my daily habit.  It was really "sticking".  And it led me to this realisation: If you want to get good at something, you should do it regularly.

Mind-blowing I know, but I think it bears examining more carefully here.  I'd wager that small, continual, regular sessions learning a skill beat longer and irregular ones any time, though this might depend on the activity you are doing and also the way in which you spend that time.  At any rate, if you're trying to build a skill that's really important to you, work on it either every day or very regularly (think "recurring") even if it's for a short period.  [Note: my context for writing this is that I am trying to make efficient use of the time I have outside of regular work and other commitments that I already have.]

How does this fit in with (personal) kanban and WIP?  I think for the moment I have to make a distinction between activities that have a recurring nature, that don't easily fit into a backlog, and other activities that are more discrete, task-like in nature and which do.  That distinction is the difference between

  • process-oriented work, and
  • goal-oriented work

This is the difference that is described in Thomas Sterner's book "The Practicing Mind".  In his book, Sterner, a musician and piano tuner amongst other things, points out the power of process-oriented work and its de-emphasis in the Western mindset and culture.  The Western mindset is goal-oriented; it is one of getting results and measuring success by results, to the point that how the result is achieved becomes irrelevant (eg cheating on an exam or faking a result).  Thinking in terms of outcomes or results isn't intrinsically bad.  I think it can be a powerful way to think.  But there is a darker side, perhaps symptomatic of a culture that only knows how to think this way.  It is a culture that assesses people's worth and capabilities purely by some abstract metric (a mark or grade), that is materialistic ... "If I get this [thing], *then* I'll have made it / I'll be happy" etc...

By contrast, process-oriented thinking is about focusing on what you are doing: the quality of what you are doing, how you are doing it, what you focus on as you do it.  Process-oriented behaviour focuses on the process of achieving a goal rather than on having it.  You don't achieve something, you do it; and in a neurological sense, you are what you do, because you're building connections in your brain as you do something, hopefully making you better at doing it, if you do it enough and focus on the right things.

As a quick aside, Timothy Gallwey's analogy of the "10 cent computer", which is your conscious mind, and the "billion dollar computer", which is the rest of your brain, is apt here.  It is your billion dollar computer (the not-so-conscious part of you) that performs the juggling you spent a week learning how to do, or hits a golf ball effortlessly (if you ever got that far), or a tennis ball, or writes code cleanly in a programming language or framework that you've mastered.  When you're engaging in recurring tasks in a process-oriented way, your 10 cent computer is directing ("allowing" might be a better word) your billion dollar computer in what to focus on, steadily and regularly, allowing it to learn subtleties and complexities that you could never perform consciously without such an investment.  And every day you wake up to do it again, with your brain having formed new connections overnight from the previous day, ready to make new insights.

So picking something to work on every day, or with a similar level of regularity, is quite a significant thing.

For me, I can support maybe 2 "everyday"-type things.  There's a limit here, just like there's a limit on the number of discrete goal-oriented / task-oriented things you should be tackling (your WIP); you can only make yourself do so many different things every day, and you might just want to start with one and see how many more you can add from there.  Mine are currently a musical skill I'm trying to work on, and the other is a set of technical skills I'm trying to build.  These things are so important to me that I make a point of doing them every day, even if it's just 10 to 20 minutes that I can spare.

There are other recurring, process-oriented things you can fit into your week that don't have that "everyday" type of intensity.  For instance there are things you might do 2 or 3 times a week, such as exercise.  These things might require scheduling, so I have cards for these that I put onto my weekly board.  They may be things you want to tally (do "n" times a week) or simply make a habit of doing on a particular day etc... The weekly board really helps me to plan these out and shuffle them around as circumstances change.

Process-oriented elements in my personal kanban give me something else: balance.
In the past (before personal kanban and visualising my situation), if I wanted to work on a skill, I'd spend long periods on it, usually late into the evening.  Then maybe a day or two later, I'd totally neglect it or be distracted by some other task; eventually I'd lose track of it or revisit it after a long hiatus.  This irregular and lumpy workflow doesn't happen so much now.  I know I can't spend indefinite periods of time on a favoured activity du jour anymore.  If I did, other things both mundane and important would suffer.  On the flip-side, when time is short, I will try to work on an everyday type activity even if I can only spare 10 or 20 minutes of my time for that day.  At least I've kept it "warm".

My personal kanban (if I can call it that) is a marriage of the "smooth" (continuous, process-oriented elements) and the "granular" (discrete, task-like, goal-oriented elements).  I don't see these 2 ways of thinking as antithetical or incompatible; as I engage in a process-oriented way, specific goals and tasks may emerge that I can put into my backlog.

These roughly are the main elements in my pkb now:
  • a weekly board with days of the week with cards that represent both process-oriented and goal-oriented items of work; I've dispensed with the "blue" cards that I push on
  • an everyday / tally column for recording recurring things
  • a backlog of goal-oriented items, short range and long range; the short range stuff is a bit like a sprint, a small number of things I want to try to achieve that week; I actually have some more specific backlogs for different things that feed into this 
  • skill area columns; I have about 4 of these that represent areas I want to focus on in a process-oriented way; these are the home of my process-oriented efforts.

Saturday, May 9, 2015

Personal kanban evolution

In this blog post I look at the idea of a "daily or day-of-the-week WIP" in the context of a personal week.  Some of this takes its cues from the Personal Kanban book [PKB], which discusses things like sequestering and the "large project" approach with "roll up" tasks - basically, techniques to visualise recurring personal tasks and recurring work generated from on-going projects.

Perhaps the biggest idea I've gotten from kanban so far is the importance of visualisation or representing your situation.  It's hard to start thinking about WIP or fine-tuning things until you've had time to do this and evolve it a bit.  So here goes...

It started with...

My situation and context for this article: I'm always trying to do stuff outside of my day job for one reason or another, and I'm interested in how to get the most out of myself given such a huge constraint.   So I started trying out personal kanban after reading the personal kanban book a couple of weeks ago.

The 2 rules of personal kanban are:
  • visualise your work
  • limit your work-in-progress (WIP) (aka "don't multitask or avoid incessantly switching between (unfinished) things")
I started with a backlog of things I wanted to do in trello with a "READY" column for things that I could potentially work on, a "DOING" column where you limit how many things you are working on and a "DONE" column.

This was an interesting exercise as I realised just how much I tend to jump around from one thing to another. Having a WIP limit on my work forces me to focus and it forces me to think about what I want to do next.

What to do with recurring things....

As I was reading through the first part of the book however I started wondering about periodic or recurring things.

There are things I really don't want to record, like brushing my teeth - that makes no sense.  And I feel like there are definitely things I'd rather leave "unstructured".  But there are other periodic things that I want to do, that don't have any kind of immediate goal or end state, but that I'm keen to track in some way.  Things like going for a run, for instance - especially in winter, when I need to motivate myself.  Or doing some weight training, or doing some music exercises because I want to train my ear.  These are regular things that I want to coordinate, but I don't want to drag them repeatedly into a "done" column.

Appendix A of the PKB turns out to be the most interesting part from my perspective, because the authors discuss some real case studies where recurring work was important.  One case involves handling both a training regime and a regular study schedule. In these cases additional "value streams" or swim lanes (grids or additional boards of some sort) were created in addition to the main board with the express purpose of visualising the recurring work and tracking it.

Finding a better visualisation...

So, a week into my trello pkb experience I started thinking...   How do I track these little regular things I want to do alongside the bigger projects and things I want to take on... all outside of my regular job?

I can plan big goals and projects; prioritize them,  put them into a backlog,  split them up and manage in a semi-scrum like fashion.

But my initial attempts to visualise the recurring aspects of my life involved having a daily column or a weekly column and using things like trello checklists or cards that recorded tallies for particular activities. One of the limitations I was hitting was my desire to use more grid-like visualisations (ones that had both rows and columns) to explore better ways to do this rather than the columnar approach that trello provides.

The weekly board...

This is what I came up with (still in trello), after allowing my thinking to evolve over about a week...
  • I created a new board I call the "weekly board" separate to my backlog board
  • I created a column for each day of the week (a list in trello) - something I had seen in one of the case studies at the back of the PKB book. 
  • I added an "everyday / tally" column for (a small number of) things that were daily that I wanted to track or tally over weekly or even more extended periods  (one way might be to use a trello checklist inside a card and periodically resetting it). 
Next to these I have a whole bunch of other columns, but 2 in particular are:
  • recurring list containing recurring things,  things that are weekly or less regular than weekly; 
  • and a list I call the "try-to-do" list, also a list of recurring things which I'll discuss shortly
  • (An example of another column I have is a "calendar" list; it lists things like particular meetups I want to try to go to or at least be aware of during the week etc)
Where my recurring list consists of regular chores and the stuff of life,  my "try to do" column focuses on recurring things I want to target on a regular basis because they are significant to me in some way.

So for running, I create 2 cards that initially go in my "try to do" column, one for each run. Let's also say I want to target doing 3 weight training sessions per week if I can possibly squeeze this in. And maybe I want to do 2 music reading exercises during the week as well.

So:
  • I create the requisite number of cards for each of these activities and have them all start in my "try to do" column
  • Then I push these cards onto my weekly board - onto one of the days of the week. 
I have to space out the running and the weights with rest days between like-exercises.  I regularly review my "recurring" list and drag things from there onto my weekly board as well, things like shopping and cooking (if I don't plan cooking I end up eating badly the whole week).  And there are potentially "one-off" cards that I need to create.

Next to my days of the week is my "everyday / tally" column where I have a small number of cards that are so important to me I want to track or do them every day.  This reduces clutter on my week day columns since I don't have to create a card for each day of the week.

Getting a weekly rhythm - the engine

The final inversion of my PKB experience occurred when I created some additional "try to do" cards explicitly for the purpose of doing "work".  These "work" cards are placeholders to indicate time spent doing "non-recurring" stuff; at the moment each one is roughly equivalent to an hour.  I push these onto my day columns along with my recurring cards.  I colour my "work" cards blue in trello so they stick out as points where I hope to do a solid block of (non-recurring) work.  Knowing where I can put these work cards and how many I can sustain on top of my regular week is one of the key things I want to visualise.

And suddenly... I have a dashboard;  a gauge (or set of gauges??) showing my week.  Each day of the week on my board has a WIP limit of sorts;  I can see if I've got too much on any given day. I can drag things around and make trade-offs as unexpected things happen during the course of the week, and I can prioritise my day, eg "can I try to do this before I go to work" etc .

I say "inversion" because prior to building my weekly board, my main focus was my backlog board with a single WIP (doing) column.  Now my primary dashboard is my weekly board with a column for each day of the week forming a "daily WIP" in conjunction with the "everyday / tally" column.  Is it a WIP in the kanban sense?  Well, maybe it roughly maps to the idea of a "today" column as proposed in the Personal Kanban book.  When I'm ready to do one of my blue "work" cards that I've scheduled for the current day, I can switch to my backlog board to easily see and review what my current focus / WIP for non-recurring work is.

My aim is to build up a "rhythm" between my recurring "try to do's" and my non-recurring "work" items whilst fitting all the other recurring stuff in as necessary.

The PKB book mentions that small tasks can be periodically rounded up and "sweated out" to help you clear out your backlog - these tasks are dubbed "ankle biters".  Small recurring things for me are handled by my weekly board.  I've rounded up the other small non-recurring stuff into a mini-backlog on my weekly board so that I can drag items from there to a day of the week as and when I think I can do them - I could even create a couple of "sweat-out" cards if I wanted. [Actually, I've put the ankle biters backlog list back on my backlog board next to my main backlog.  It's a way to weed out smaller things from my backlog.]

So, my backlog board is allowed to focus on the bigger things I'm trying to do with my life and my weekly board is the engine I need to tune to help do that as best I can given all my other constraints.

Sprinting on (non-recurring) work?

Knowing how my week is going to look when I plan out my weekly board also means I can look to gauge how much (non-recurring) "work" (on my backlog board) I can try to achieve in a week and on which day.  So I corral some high-priority cards on my backlog board into a "week" column.  This is maybe a little like creating a scrum sprint inside kanban.  It gives me a weekly focus or goal; it makes me think more sharply about what I'm trying to do with my blue non-recurring placeholder "work" cards on my weekly board.

Every gauge needs a dial - using a slider card....

For each day of the week on my weekly board I also have a "slider" card called "-- done --" which starts at the top of the list.  As I complete tasks I drag this slider card down so that only cards not done are below it. This gives me a nice little indicator of where I'm at during the day, and a visual indicator of how much I got done on previous days and what didn't get done... plus I get a small dopamine hit for pushing things above that "--done--" card :)

Is my week a "value stream"?  Maybe, hopefully, I come out at the end of it a little better and a little closer to what I want to do. For me, Sunday is a good day to review the week, look how far my "-- done --" sliders got, think about which days were good or bad, whether I hit my everyday / tally targets, what rhythms or discoveries I made about how to do things better.   There's usually a relationship between my (recurring) try-to-dos and my "work" items as they represent things that are significant to me, as well as rhythms around the more humdrum stuff I have to do.

Monday, January 5, 2015

Arch linux on Dell XPS13

What?

Installing archlinux on a Dell XPS13 using archboot (instead of the standard archiso installer) via usb stick.

This turned out to be a pretty straightforward process. I also set up dm-crypt / luks and gave btrfs a go.

When: I ordered mine late 2014, got it before xmas.

Specs...

  • XPS 13BASE NBK XPS BTX 9333 WW (if that means anything)
  • i7-4510U processor
  • 8G ram
  • 256G ssd
  • basically this is the high end model

Disclaimer

I only infrequently do installs or set up operating systems. I don't really want to know more about UEFI than I absolutely have to, and I'm not bothered about Secure Boot either, nor am I all that knowledgeable on disk encryption. The commands and steps here should be taken as a guide only. Any security settings such as encryption settings and configurations should be researched by you.

Impressions of the XPS 13

Dell XPS13 just after switching over to Archlinux.

I asked about the developer edition of the XPS 13, aka project sputnik but was told this wasn't available in Australia. I bit my tongue, and went ahead anyway and paid the microsoft tax.

This is a sleek piece of kit. However I was alarmed on first switching this on to have the fans roaring into sudden and furious action. There I was, staring at this thing and trying to reconcile the desire to play with something so obviously new and shiny, with the increasing horror that maybe I had a turkey on my hands. Visions flashed through my head of me on the train or at work having to explain to people around me why the fan was so loud: "All the cool new ultrabooks do it", I'd protest weakly.

Fortunately, a reboot into the firmware (hit f12 at startup and look for Diagnostics) and then running some diagnostics including fan control seemed to quiet the system down and I haven't had a repeat episode since.

When doing nothing, this system is whisper quiet, no whirring and moving parts etc. There is a slight high-pitched and directional "eeeee" sound which goes from a lower pitch when the keyboard lights are off to a higher pitch when the keyboard lights go on - this apparently was a real big issue with earlier models. It feels like the "whine" is louder (or at least lower in pitch, so I can hear it) when the lights are off. UPDATE: I feel the noise the keyboard makes when the lights go off (which happens after some inactivity) has gotten louder after a day or two and is potentially an issue.

My main concern at this point is that the XPS 13 may run hot. I'm packing an i7 in there and the base is really not that much thicker than a usb slot - and that's the end that doesn't taper. Time will tell. Browsing (using chromium) seems to provoke the fan almost straight away, especially if scrolling. That being said, my experience during the install of arch on this system just got better and better. Working on a bare console (before building a gui) was almost a pleasant experience, probably because the gorgeous screen and keyboard were such a pleasure to use.

First things

  • f2 brings up the UEFI firmware setup utility
  • f12 brings up the UEFI loader
    • from which you can also activate the setup utility or do other things
  • fn + arrow keys gives you Home, End, Pg up, Pg down which might be worth knowing if you've got to hit the man pages
  • there is no optical drive; this is a usb job; nothing spins on this thing, which is awesome

Installing Arch...

There are 2 choices up front.

  • which installer to use:
    • archiso
      • when I booted with this in early 2014 on a different system using an optical disk, I was dropped into a shell, and I did some re-partitioning and some fancy mounting and chrooting on the host file system to build the new arch system directly; this was a good experience, but I tried archboot this time...
    • or archboot
      • this is a larger image and seems dedicated to installing arch
      • the key differences are apparently listed here
        • archboot will boot up in ram and will provide a terminal UI (basic installer) to walk you through the setup
        • it won't mount your host system by default
        • a welcome script sits on tty1 through tty6 (well, I tested up to 2)
        • you hit enter on any of these and it will launch you into basic install mode which is a UI that walks you through an install (which is what I did below)
        • but if you switch over to another one (eg alt + f2), hit enter, you'll just get dropped into zsh where you'll have a lot more options about what you decide to do; personally I think it would be less confusing if the system just dropped you into zsh and printed a helpful message
        • so archboot can double as a rescue utility: I was able to rescue my system this way after I b0rked my xorg input settings and lost my keyboard
  • and secondly either:
    • writing the image to the usb stick
      • this will limit your usb stick to the size of the image
    • or installing to a partition on the usb stick; involves copying files to the partition, more complicated, but you can still use the left over free space on your usb stick

I went with archboot and wrote the image directly onto the usb stick because this seemed like the most expedient option:

dd bs=4M if=archlinux-2014.11-1-archboot.iso of=/dev/sdX && sync

... which turned my 4G stick into a 1G stick.

This can be reset afterwards.

dd count=1 bs=512 if=/dev/zero of=/dev/sdX && sync

...and re-partition.

Booting the usb

At the beginning of this process my Phoenix SecureCore uefi was set to

  • boot mode: UEFI
  • Secure boot: ON
  • Legacy mode: disabled

These settings were the default.

I could cut a long story short here and just say: disable Secure Boot. But I did initially try to go with these settings.

I gave up trying to set boot order in the UEFI firmware setup utility (f2); I was able to add a usb entry to the top of the list (only after plugging the usb in) but windows would always boot; maybe it was falling through to windows, but it wasn't terribly obvious.

Instead, I used f12 to load the uefi loader which showed the usb I had inserted into the laptop.

Selecting the usb entry from this menu gave an error dialog:

### Secure Boot ###
Image failed to verify with *ACCESS DENIED*.
Press any key to continue.

This led to another screen:

Failed to start loader
It should be called loader.efi (in the current directory)
Please enrol its hash and try again
I will now execute HashTool for you to do this
OK

This led to another screen

### Select Binary ###
The Selected Binary will have its hash Enrolled.
This means it will subsequently boot with no prompting
Remember to make sure it is a genuine binary before Enrolling its hash.
[a selection box with files in it including loader.efi]

(At this point I think I should have just turned Secure Boot off since I don't need it, but I persisted for a bit longer...)

I selected loader.efi from the selection box.

Which led to

Enroll this hash in MOK database?
Hash: big scary hash

Restarting with f12 and selecting the usb brought up the ACCESS DENIED error again. BUT this time, hitting ok brought up the archboot loader with the following selection

* Arch Linux x86_64 Archboot EFISTUB
* GRUB X64 - if EFISTUB boot fails
* UEFI Shell X64 v1
* UEFI Shell X64 v2
* EFI Default Loader
* Reboot into firmware interface

I tried the EFISTUB option 1. This ultimately failed for me. I got ACCESS DENIED again and a message about "enroll /boot/vmlinuz..." (I can't remember the last bit). I tried to enrol this using the Select Binary screen.

I did this, but then hit another crop of errors:

Failed to open file: boot\intel-uecode.img
Trying to load files to higher address
Failed to open file: boot\intel-uecode.img

I tried option 2. This time I had to enrol a grub file. I can't remember which one, I think it was /boot/EFI/grub/grubx64.efi . Rebooting, hitting f12 and hitting the 2nd option as before, getting ACCESS DENIED but clicking past this, I eventually got the usb to boot.

As I mentioned, I should have turned secure boot off to avoid this rigmarole.

Archboot installer

You should see a message like this on booting the usb:

Welcome to Arch Linux (archboot environment)

Hitting enter put me straight into the basic install utility which is a series of sections you can enter and configure things.

  • Keyboard and console font
    • Pretty much skipped over this
  • Networking
    • archboot will set up a wireless profile in /etc/netctl, and it also asked me if I preferred to use dhclient over dhcpcd, which from previous experience I did. So I had wireless working straight away, which was super encouraging.
  • Prepare Storage Drive
    • I went with a GUID partition table (GPT)
    • I made a 2G swap partition; overkill? probably
    • archboot seems to make you set up separate / and /home partitions; I didn't like this initially, but after some consideration I went with it; I selected ext4 for both and gave the / root partition 20G. Not sure if that is enough; hopefully it will be.
    • selected /boot as the UEFISYS mountpoint
    • select the device name scheme
      • PARTUUID and PARTLABEL are specific to GPT disks, PARTUUID is recommended for GPT
      • I went with PARTUUID
  • Select a source
    • we are given 2 options: peripheral device cd/usb or network
    • I went with usb
  • Install packages
    • I selected BASE and SUPPORT
  • there were some steps around configuring the system
    • I set /etc/hostname
    • mirror list (enabled australian sites)
    • set root password
  • Install bootloader
    • Setup has detected that you are using "X64 UEFI", do you want to install a X64 UEFI bootloader?
      • yes
    • now it gets tricky, we have 3 choices
      • EFISTUB
      • GRUB_UEFI
      • SYSLINUX_UEFI
    • I went with GRUB_UEFI
      • Got this cryptic message

        You have entered /boot as the mountpoint of your EFISYS partition. Any other partition using /boot as mountpoint will be ignored. You may have to re-install kernel and bootloader files (currently existing in /boot) to the EFISYS partition once it is setup at /boot.

      • asked to edit grub.conf file; I did, but didn't change anything
      • then: do you want to copy /boot/EFI/grub/grubx64.efi to /boot/EFI/BOOT/bootx64.efi? This might be needed in some systems where efibootmgr may not work due to firmware issues.
        • yes

Booting Arch

After going through the options, I then exited the installer and rebooted the system and removed the usb.

Rebooting gave me the ACCESS DENIED dialog again. It appeared to try twice, then gave up with this message:

No bootable devices - strike F1 to retry boot, F2 for setup utility.
Press F5 to run onboard diagnostics.

Time to give secure boot the boot. I hit f2 to get back into setup, went to the Boot tab, and set 'Secure Boot' to Disabled.

And voila, it boots.

Might be a good time to do

pacman -Syu

While you're at it, get rid of the console beep:

echo "blacklist pcspkr" > /etc/modprobe.d/nobeep.conf

or

rmmod pcspkr  # for immediate non-permanent relief

Modifications

At this point I had a system with /, /home using ext4, and swap.

I decided I wanted to:

  • encrypt swap and /home
  • and also try btrfs on /home for snapshots, bitrot and compression and also for eventually super fast syncing.
  • I also wanted to tune the system a little for the solid state drive.

There is still debate about the stability of btrfs, but things look like they are getting to a point where it's becoming ok to use. One of the big motivations was this article (a year ago now). Since that time at least one major distro has made it the default.

And then there is https://btrfs.wiki.kernel.org/index.php/FAQ#Is_btrfs_stable.3F from 2 years ago:

Pragmatic answer: (2012-12-19) Many of the developers and testers run btrfs as their primary filesystem for day-to-day usage, or with various forms of "real" data. With reliable hardware and up-to-date kernels, we see very few unrecoverable problems showing up. As always, keep backups, test them, and be prepared to use them.

I've used zfs on my backup media for some time now via the arch AUR zfs-git package which requires kernel modules so I'm hoping to replace this eventually with just btrfs.

Crypting Swap

This turned out to be super easy. I'm not bothered about hibernating aka suspend-to-disk, so this simplified the task.

I need gdisk to look at my partitions:

pacman -S gptfdisk # gdisk

This identified my swap partition:

gdisk -l /dev/sda  # for me /dev/sda3 was the swap partition (GPT code 8200)

This shows what swap you're using if any:

swapon -s

You can turn swap off like this:

swapoff /dev/sdaX

Use

blkid

to give you UUID and PARTUUID etc.
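Since /etc/crypttab below wants the PARTUUID, it can help to pull that field out of blkid's output directly rather than copy it by eye. A minimal sketch, assuming blkid's usual NAME="value" line format; the sample line and values here are made up for illustration:

```shell
#!/bin/sh
# Sample line in the format blkid prints (values are made up):
blkid_line='/dev/sda3: UUID="ab12-cd34" TYPE="swap" PARTUUID="d4ae0d26-8df6-4005-aaf7-f419418134c2"'

# Extract just the PARTUUID value, ready to paste into /etc/crypttab:
partuuid=$(printf '%s\n' "$blkid_line" | sed 's/.*PARTUUID="\([^"]*\)".*/\1/')
echo "$partuuid"
```

On a real system you'd feed the sed through a pipe from `blkid /dev/sda3` instead of the sample string.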

So, all I had to do to get this working was adding a line like this to /etc/crypttab:

swap           PARTUUID=d4ae0d26-8df6-4005-aaf7-f419418134c2 /dev/urandom           swap,cipher=aes-cbc-essiv:sha256,size=256,discard

and adding this to /etc/fstab:

/dev/mapper/swap    none    swap    sw  0 0

Later I experimented with an alternative setting in /etc/crypttab:

swap           PARTUUID=d4ae0d26-8df6-4005-aaf7-f419418134c2 /dev/urandom           swap,cipher=aes-xts-plain64:sha256,size=512,discard

Originally, I tried to use UUID but this failed on a second reboot. Turned out the UUID for /dev/sda3 (my swap partition) was no longer present. So I switched to PARTUUID.

That's pretty much it. Reboot and see if it works. If it doesn't, the system may hang for a while at bootup and then give up and boot the system without any swap devices.

You might be able to get some information this way:

systemctl list-units | grep swap
journalctl -xe -u swap.target

you'll see not-so-helpful messages like:

Dependency failed for Swap.
Job swap.target/start failed with result 'dependency'.

If you forget the /etc/fstab entry, the system may also hang for a while at startup and ask for a password. You'll just have to sit it out.

fstab

I should note that my fstab had no uncommented entries in it.

genfstab might help in this regard:

genfstab -U -p /  # prints entries you could put in fstab

Get it using: pacman -S arch-install-scripts .

Crypting Home

aes-cbc-essiv looks to no longer be the default for encryption (see https://wiki.archlinux.org/index.php/Dm-crypt/Device_encryption ):

Please note that with release 1.6.0, the defaults have changed to an AES cipher in XTS mode. It is advised against using the previous default --cipher aes-cbc-essiv, because of its known issues and practical attacks against them.

You should have an empty /home directory courtesy of archboot.

Unmount it and prepare it for block encryption:

umount /dev/sdaX  # /home partition (for me /dev/sda5)
cryptsetup -s 512 luksFormat /dev/sdaX

Check:

cryptsetup luksDump /dev/sdaX

should show the aes cipher in xts-plain64 mode.

Now we can open this device and put a filesystem on it:

cryptsetup luksOpen /dev/sdaX home
mkfs.btrfs /dev/mapper/home

I have this entry in my /etc/crypttab - the system can boot without an entry in here, but I want to use the discard option:

home           PARTUUID=df5ba5f5-9492-4ae2-b1aa-2ac548841394 none                   home,luks,size=512,discard

Using discard may have security implications if you're using block encryption. A confession: I don't know whether this actually enables TRIM support, but the option is listed in man 5 crypttab.
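A side check of mine (separate from whether dm-crypt passes discards through): lsblk can report whether the underlying device supports TRIM at all. Non-zero DISC-GRAN / DISC-MAX columns mean the device accepts discard requests.

```shell
# Report discard (TRIM) capabilities for the block devices lsblk can see.
# Non-zero DISC-GRAN / DISC-MAX columns mean the device accepts discards.
# Guarded so the snippet degrades gracefully where lsblk is unavailable.
trim_report=$( { command -v lsblk >/dev/null 2>&1 \
    && lsblk --discard 2>/dev/null; } || echo "unavailable" )
echo "$trim_report"
```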

And in /etc/fstab:

UUID=3f05ccbe-137b-4fb0-9236-be2520fda140   /           ext4        rw,noatime,discard,data=ordered 0 1
/dev/mapper/home    /home   btrfs   rw,compress=zlib,discard,ssd,relatime   0 0

I've also included the / entry above, which I added with some additional flags: noatime / relatime, discard and ssd.

If this is all working, your system will ask for a password to open the /home partition as part of the boot up process. This is separate from any user account password(s) you may have.

GUI

I won't cover xorg setup. I do use infinality to improve fonts, especially in the browser.

Sound

I had some system sounds, but nothing when playing video.

Alsamixer and pavucontrol both showed 2 devices; the second one was the one I wanted: HDA Intel PCH. I found this setting worked:

cat /etc/modprobe.d/modprobe.conf 
options snd_hda_intel enable=0,1

After this, earphones and pc speakers worked as expected.

Touchpad and touchscreen

The system is almost impossible to type on with the touchpad default settings. A slight glance will either click or scroll.

I have /etc/X11/xorg.conf.d/50-synaptics.conf with:

Section "InputClass"
    Identifier "touchpad"
    Driver "synaptics"
    MatchIsTouchpad "on"
        Option "TapButton1" "-1"
        Option "TapButton2" "-1"
        Option "TapButton3" "3"
        Option "VertEdgeScroll" "-1"
        Option "VertTwoFingerScroll" "on"
        Option "HorizEdgeScroll" "-1"
        Option "HorizTwoFingerScroll" "on"
        Option "CircularScrolling" "on"
        Option "CircScrollTrigger" "2"
        Option "EmulateTwoFingerMinZ" "40"
        Option "EmulateTwoFingerMinW" "8"
        Option "CoastingSpeed" "0"
        Option "FingerLow" "35"
        Option "FingerHigh" "40"
EndSection

This disables tap clicking (you can still depress the end of the touchpad for a normal click), and it enables 2-finger scrolling. I can still trigger 2-finger scrolling when both hands glance the trackpad so nothing's perfect.

The touchscreen is a bit of a mystery. It is recognised and looks to be treated like a mouse / pointer device (xinput --list). I can touch links in chromium but they are not clicked. What I really want is the ability to scroll but I haven't figured this out.

cat /proc/bus/input/devices  | egrep 'Bus|Name'
...
I: Bus=0018 Vendor=06cb Product=2734 Version=0100
N: Name="DLL060A:00 06CB:2734"
I: Bus=0003 Vendor=06cb Product=0af8 Version=0111
N: Name="SYNAPTICS Synaptics Large Touch Screen"
...

So I'm assuming DLL060A is the touchpad and the second set of entries is my screen.

I'm wondering if I can set up another Section in xorg.conf.d with

MatchIsTouchscreen "on"

But I haven't gotten anywhere yet.
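For the record, a minimal sketch of what such a section might look like, assuming the generic evdev driver picks the device up (untested; the Identifier is arbitrary and I haven't confirmed this does anything useful for scrolling):

```
Section "InputClass"
    Identifier "touchscreen"
    MatchIsTouchscreen "on"
    Driver "evdev"
EndSection
```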

Sunday, December 28, 2014

Emacs, flycheck and jshint and other jslinters

One of the benefits of emacs 24 is the new flycheck package.

Once you've installed flycheck, be sure to head over to M-x flycheck-info, which should bring up the info manual for flycheck, where you can learn that flycheck is the new flymake and how you can extend it. The author(s) of this document aren't shy in expressing their opinion:

"...we consider Flycheck superior to Flymake in all aspects"

I'm no judge of this. I only know that it works well for the things I need it on: js and jshint and also other languages such as php. I won't go into php here, but flycheck will happily use both phpcs - the code sniffer - and phpmd, the mess detector.

I'm probably nowhere near close to using the full capabilities of flycheck. Below I look at chaining an additional linter after jshint to provide indentation warnings.

But before I go any further...

Installing / Setup

I installed flycheck using the emacs package manager (ELPA), specifically from the melpa.milkbox.net repository.

Note I have the following in one of my emacs initialisation files (rightly or wrongly) which gives me several package repositories to choose from:

(require 'package)
(add-to-list 'package-archives
             '("marmalade" . "http://marmalade-repo.org/packages/") t)
(add-to-list 'package-archives
             '("melpa-stable" . "http://melpa-stable.milkbox.net/packages/") t)
(add-to-list 'package-archives
             '("melpa" . "http://melpa.milkbox.net/packages/") t)
(package-initialize)

I'm using:

  • flycheck 20141102.652 (use M-x package-list-packages to list and install packages)
    • WARNING: version 20141224.16 is more recent but blows up with my version of gjslint
    • delete it from *Packages* buffer (D) to reinstate the previous version

In my emacs configuration I have this to have flycheck always on:

(global-flycheck-mode 1)

Also, I use helm, which means I can run interactive functions using M-x almost as fast as, and sometimes faster than, hitting keyboard shortcuts. It's gotten to the point that I often don't bother trying to add a keyboard shortcut for a new piece of functionality I'm using, preferring instead to hit M-x (helm-M-x under the covers) and type a few short keystrokes to precisely identify the command I want to use.

helm is a revelation; if you haven't tried it, you should.

The vexing issue of javascript major modes in emacs

Yes... it seems we have at least 3:

  • js-mode / javascript-mode
    • this is the in-built js package in emacs
  • js2-mode
    • version 20141224.347 melpa.milkbox.net
  • js3-mode
    • version 20140805.1529 melpa.milkbox.net
  • (There's also a json-mode which is worth installing as well to assist with things like ensuring double quotes.)

I sometimes switch between these.

  • One thing to note is that js2-mode insists on semi-colons, at least by default, and will apply an angry red underline to any offending line that doesn't.
    • This can be turned off with M-x customize-option RET js2-strict-missing-semi-warning and then toggling it off.
  • js3-mode appears to have been modelled on js2-mode and js-mode but tries to support npm-style js conventions which means out of the box it won't worry about semicolons.

Both js2-mode and js3-mode offer some error / linting / parsing and make this available via emacs' next-error facility.

  • In a js file, hitting M-g M-n will invoke next-error which will move you to the next error / warning or issue in your file.
  • M-g M-p will take you to the previous one
  • (Seasoned emacs users may be familiar with these keys since they are used by facilities such as M-x occur (buffer searching) and M-x find-grep allowing you to traverse through a list of search hits.)
  • js2-mode will flag standard nodejs globals like require and process as undeclared, whereas js3-mode doesn't.
  • Also, js2-mode doesn't seem to detect some obvious syntax errors such as a double dot "..".

All of this however is unnecessary if you're using flycheck as flycheck will override the next-error system for its own purposes.

I tend to go with js2-mode as it seems more solid with things like switch-statement indentation.

Linters and hinters and code styler sniffers ...

M-x customize-group RET flycheck-executables will give you a good idea of what tools flycheck can handle. For js, there are jshint, gjslint, eslint and also jsonlint (for json).

A bunch of these come from the node eco-system:

sudo npm install -g jshint   # /usr/bin/jshint
sudo npm install -g jsonlint # /usr/bin/jsonlint
sudo npm install -g eslint   # /usr/bin/eslint

For google closure js linter (this is on an archlinux system) I had to do:

sudo easy_install-2.7 http://closure-linter.googlecode.com/files/closure_linter-latest.tar.gz
# /usr/bin/gjslint

Looking up eslint led to some interesting links: jshint and jslint apparently have their own parsers as opposed to using esprima or some other toolkit. eslint has now gone the same way and has its own parser called espree.

My version of flycheck didn't have out-of-the-box support for jscs -- the javascript code style(r) / ?sniffer. But it's not too hard to get it working with flycheck and it also solves the indent problem :) (see further down).

sudo npm install -g jscs     # /usr/bin/jscs

Flycheck and linter configurations

M-x customize-group RET flycheck-config-files allows you to set configuration files for your linters.

You can explicitly set config files for your linters, eg you can set:

  • Flycheck Jshintrc: ".jshintrc"
  • Flycheck Gjslintrc: ".gjslintrc"
  • Flycheck Eslintrc: ".eslintrc"

Flycheck will probably default to looking for dot files of the above form for jshint and gjslint. And it will probably intelligently determine where to look for these dot files in your project.
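For example, a minimal .jshintrc might look like this (the options shown are real jshint options, but the particular choices are just mine):

```json
{
  "undef": true,
  "unused": true,
  "eqeqeq": true,
  "node": true
}
```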

Using flycheck

Flycheck appears to use jshint by default (assuming you installed it), probably because it is listed before any of the other eligible checkers in flycheck-checkers variable.

In a js file, you should be able to do M-g M-n and M-g M-p to move forwards and backwards over warnings and errors that have been detected (using the next-error interface provided by emacs).

C-c ! l (M-x flycheck-list-errors) will list the errors in a buffer called *Flycheck errors* next to your js file giving you a point and click index into all your linting issues.

I use M-x flycheck-mode to toggle flycheck on and off.

Invoking multiple js linters in flycheck

Use M-x flycheck-select-checker to select a different checker (hint: you should be using helm or something like it which greatly accelerates selections and use of M-x).

  • M-x flycheck-select-checker RET javascript-jshint
  • M-x flycheck-select-checker RET javascript-gjslint
  • M-x flycheck-select-checker RET javascript-eslint

Flycheck has a mechanism for running multiple checkers at the same time which it refers to as chaining. For instance, by default flycheck's php facility will chain mess detector and code sniffer checkers so you will see output from both in the *Flycheck errors* buffer. But for js, by default, flycheck only uses one of the supported linters at a time.

Checkers in flycheck are defined using flycheck-define-checker. Chaining checkers requires adding the :next-checkers property to this definition. All the standard in-built checkers are located in elpa/flycheck-20141102.652/flycheck.el .

Here's the jshint checker (it doesn't have :next-checkers property by default):

(flycheck-define-checker javascript-jshint
  "A JavaScript syntax and style checker using jshint.

See URL `http://www.jshint.com'."
  :command ("jshint" "--checkstyle-reporter"
            (config-file "--config" flycheck-jshintrc)
            source)
  :error-parser flycheck-parse-checkstyle
  :error-filter flycheck-dequalify-error-ids
  :modes (js-mode js2-mode js3-mode))

Chaining additional checkers is a topic of interest because, well, indentation...

The equally vexed issue of indenting in javascript

jshint no longer explicitly warns for indentation via the indent option. Boo, sad-face.

Indenting warnings with gjslint...

gjslint may come to the rescue here: it does show warnings for indentation, but it's not clear to me at this point how to control its indentation facility. It looks to be hard-coded to 2 spaces and it has particular ideas about breaking up lines of code that fall under the same rule (0006).

Here's an example of default gjslint preferences on indentation of function parameters:

// BAD: and anything less
promise.then (
  function promiseOk (result) {
  ...

// BAD: Up to this point...
promise.then (
             function promiseOk (result) {
             ...

// OK: if param comes after opening paren...
promise.then (
              function promiseOk (result) {
              ...

Personally I don't have a problem with the first form above; I think it makes the code more readable when a function has complex and extended arguments and reduces the chance of exceeding line length.

You can put the gjslint flags you'd pass on the cli into a flagfile, eg create .gjslintrc in the root of your project with the following content:

--jslint_error=indentation

and then invoke like this:

gjslint --flagfile=path/to/.gjslintrc <file>

Flycheck presumably sets the flagfile option if it finds .gjslintrc (assuming you've configured the flycheck-config-files variable (above)). So you can have a .gjslintrc in the root of your project and flycheck will use this for all files.

gjslint lists a rule number or id for each rule that has been infringed. This provides a way to suppress some of this linter's warnings (eg ones already covered by jshint). You can add to your flagfile like this:

--jslint_error=indentation
--disable=0001,0002,0220,0110

The above disables the rules for extra space (0001), missing space (0002), "no docs found" (0220), and line length (0110).

We can get gjslint to chain after jshint by using flycheck-add-next-checker.

In an emacs configuration file:

(with-eval-after-load 'flycheck
  ;; Chain javascript-gjslint to run after javascript-jshint.
  (flycheck-add-next-checker 'javascript-jshint '(t . javascript-gjslint)))

This bit of elisp code waits for flycheck to get loaded and then executes its body. It calls flycheck-add-next-checker to add javascript-gjslint as the next checker after javascript-jshint. This means that both will be run on your js file and errors will be organised by line number in the *Flycheck errors* buffer.

Indentation warnings with eslint?

I ran out of gas here. It looks like eslint has taken the same stance as jshint. You can add it all the same using the with-eval-after-load trick we just used - but this is not something I'm doing atm.

(with-eval-after-load 'flycheck
  (flycheck-add-next-checker 'javascript-jshint '(t . javascript-eslint)))

Indentation warnings with jscs ... finally!

There doesn't look to be an in-built checker for jscs. This is something that should probably go into flycheck, but there's nothing stopping us from using it with flycheck right now.

First, define a flycheck checker in your emacs configuration file(s) somewhere:

(flycheck-define-checker javascript-jscs
  "A JavaScript style checker using jscs.

See URL `https://www.npmjs.com/package/jscs'."
  :command ("jscs" "--reporter=checkstyle" 
            (config-file "--config" flycheck-jscsrc)
            source)
  :error-parser flycheck-parse-checkstyle
  :modes (js-mode js2-mode js3-mode))

For it to work, we'll need to add a config file variable for it as well:

(flycheck-def-config-file-var flycheck-jscsrc javascript-jscs ".jscsrc"
  :safe #'stringp)

Then add a .jscsrc file to the root of your project, eg:

{
    "preset": "google",
    "requireCurlyBraces": null
}

At this point, you can do M-x flycheck-select-checker and see if you can select javascript-jscs. C-c ! l should give you a list of issues (assuming your file has issues).

With the above, I can now get indentation warnings, eg if a line is not indented to 2 spaces (using the google preset here) - hooray!

So maybe what we want to do now, apart from tweaking configurations and presets, is combine this with jshint.

Easy enough, we extend what we did before with gjslint but replace it with jscs:


    (with-eval-after-load 'flycheck
    
      ;; Define a checker for jscs...
    
      (flycheck-define-checker javascript-jscs
        "A JavaScript style checker using jscs.
    
      See URL `https://www.npmjs.com/package/jscs'."
        :command ("jscs" "--reporter=checkstyle" 
                  (config-file "--config" flycheck-jscsrc)
                  source)
        :error-parser flycheck-parse-checkstyle
        :modes (js-mode js2-mode js3-mode))
    
      ;; Make flycheck-jscsrc configuration with default.
    
      (flycheck-def-config-file-var flycheck-jscsrc javascript-jscs ".jscsrc"
        :safe #'stringp)
    
      ;; Make javascript-jscs automatically selectable to flycheck
      ;;
      ;; Use t to append at the end so it's not used by default.
    
      (add-to-list 'flycheck-checkers 'javascript-jscs t)
    
      ;; Chain javascript-jscs to run after javascript-jshint.
    
      (flycheck-add-next-checker 'javascript-jshint '(t . javascript-jscs)))

Note we use add-to-list to add our new checker to flycheck-checkers to make it automatically selectable by flycheck; chaining may not work without this. And we take advantage of jscs's checkstyle output, which flycheck can already handle.

Tweaking faces for flycheck

Probably because of my theme setup in emacs, the highlighted row in *Flycheck errors* buffer is all grey with the text hidden, so I do

M-x customize-group RET flycheck-faces

and customize Flycheck Error List Highlight and turn inherit off and set background to inverse video.

Similarly for mouse over text: M-x customize-option RET mouse-highlight which I disable.

Saturday, September 11, 2010

Adventures in Ruby Land: a tutorial on ruby

  • Updated: 15-Sep-2010; section on modules and constants
[Image: a Penrose Impossible Triangle around which circle the 3 main objects in ruby land]
This is a tutorial on the ruby programming language and how it's structured, a structure which I'll refer to as ruby land. Ruby is a very dynamic programming language with a particular emphasis on object oriented programming but with some nice functional aspects.
This article might be useful to anyone who wants to get a bit of a roadmap on how ruby is structured. It could also be wrong. But if it's wrong, I'd like to think that it might still help to provide a framework or starting point from which to get the right picture.
You should be familiar with things like object oriented programming, basic ruby programming and irb (ruby's repl) which is very useful for trying things out as you think and experiment.

In the beginning... there was Object

The first thing you need to know in ruby is that everything is an object. Even classes are objects.
So what is an object? An object is an instance of a class. Objects don't contain methods - these are defined by the class of the object¹. All an object does is keep track of the state of a particular instance of that class². Consequences of this will be discussed further on.
¹ Ok, we could say that classes "have" or, probably better, "define" instance methods. But these methods are intended for instances of the class. However an object that is a class has a class in turn which defines instance methods for it.
² State is stored in instance variables in ruby; these are variables that start with a single `@` in their name.

Class and Superclass

We said everything in ruby is an object and this can be seen by inspecting the class method. Every object has one, even classes, because every object is an instance of some class when you get down to it.
Let's create a class:
class Foo
end
Now let's create an object:
foo = Foo.new
Now we can look at the class of foo
foo.class # => Foo
This tells us that foo's class is Foo. foo gets its "instance methods" from Foo. But we can also look at the class of Foo
Foo.class # => Class
They're both objects and objects must have a class that they belong to.
Objects that have a class of Class are clearly special objects because they are classes and they can be used to instantiate new instances (foo = Foo.new).
  • We'll call any object with a class of Class, a class-like object.
  • We'll call any other object an instance or object instance.
  • We'll refine this a little when we get to modules.

Superclass

Another thing that makes class-like objects stick out from their peers is the fact that they have a superclass. This is no surprise. If you're a class, you're probably a subclass of some other class. That's how classes work. Instances of a class may have access not only to the methods of their class, but also any superclasses of that class.
Back to our example, Foo has a superclass:
  Foo.superclass # => Object
But foo, being just a lowly object instance, doesn't have one:
  foo.superclass # => Error!
We could make a new class SubFoo and make it a subclass of Foo like this:
  class SubFoo < Foo
  end
Now we have:
  SubFoo.superclass # => Foo
  SubFoo.superclass.superclass # => Object
In the land of class-like objects and by extension, the instances that are built from them, all roads eventually lead back to Object.
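To see this concretely, here's a little helper of my own (not part of ruby) that walks a class's superclass chain; note that from ruby 1.9 onwards BasicObject sits above Object, so on modern rubies the chain ends one step later:

```ruby
# Walk the superclass chain until we run out of ancestors.
def superclass_chain(klass)
  chain = [klass]
  chain << chain.last.superclass while chain.last.superclass
  chain
end

superclass_chain(String)
# On ruby 1.8: [String, Object]
# On ruby 1.9+: [String, Object, BasicObject]
```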

Instance Land

Here is Instance Land:
[Image: ruby instance land]
This is the land of object instances. foo is an instance of Foo (foo = Foo.new); bar is an instance of Bar. obj is an instance of Object (obj = Object.new).
A lot of the basic work gets done in Instance Land because ruby encourages you to define and then instantiate classes like Foo in order to create object instances that store specific state information that can be readily modified as your program goes about solving its problems.

Class Land

Now let's add in Class Land. Like the men in Plato's cave staring at the shadows cast on the wall by the fire behind them, the object instances in Instance Land look up to Class Land, the land of Platonic ideas. This is where we define the idea of a Dog, or an Animal, or in our case a Foo or a Bar. Once those ideas have been fashioned we can then instantiate them to create object instances.
[Image: ruby class land]
We draw the dotted lines from object instances to the classes in Class Land which define them. These dotted lines represent the class relationship that stamps a particular object instance as being an instance of a certain class.
In Class Land we can also trace out the relationships that classes maintain between themselves. The thick black lines with white arrow tips are the superclass lines that tell us that a certain class is a subclass of another.
SubFoo is a subclass of Foo. We write it like this:
  class SubFoo < Foo
  end
  SubFoo.superclass # => Foo
  Foo.superclass # => Object
But, this being ruby, there is yet another level. A level that the class-like objects themselves look up to. Welcome to...

Strange Land

Strange Land is strange. We have come to the axis of the ruby world; this is the backbone on which ruby is built. To stretch the Plato cave allegory just a little bit too far, it is as if we climbed out of the cave, past the fire and up into the daylight to see the real world. There are essentially three main protagonists: Class, which is a subclass of Module; Module, which is a subclass of Object; and of course Object itself.
[Image: ruby strange land]
The class-like objects in Class Land look up to these objects in Strange Land the way object instances in Instance Land look up to classes in Class Land. (Well, excluding Object which we include in Class Land to keep the diagram slightly this side of sane.)
The denizens of Strange Land are class-like objects. Their class is Class. Even Class has a class of Class. And as just noted they form a class hierarchy that can be explored using superclass.
What makes things slightly strange is that Object is at the top of the tree but its class is one of its own subclasses, Class.
[Image: Escher-like hands drawing hands. Object has class of Class; Class has superclass of Object. A small homage to M.C. Escher's "Drawing Hands".]
Object itself is a funny beast. As noted earlier, all roads lead back to Object because, well, everything is an object.
self is always the same in Object
  class Object
    def self.self1
      self
    end
    def self2
      self
    end
  end
  Object.self1 == Object.self2 # => true
Object allows you to extend it with methods which will then become available to all other objects at all levels of ruby land.
  class Object
    def everywhere
      "hi I'm #{self}"
    end
  end
Whilst Object defines the notion of class, Class gives it the ability to instantiate objects and the idea of class hierarchies (superclass).
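These relationships can be checked directly in irb:

```ruby
# The three protagonists of Strange Land and how they relate.
Class.superclass   # => Module
Module.superclass  # => Object
Object.class       # => Class (Object's class is one of its own subclasses)
Class.class        # => Class (even Class is an instance of Class)
```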

Module Land: Modules and instance methods

We need to briefly step down from Strange Land to look at modules.
Module Land is a strange little adjunct to Class Land. Members of Module Land have a class of Module, not Class. They are instances of a thing that precedes the notion of Class itself. So the denizens of Module Land can't be instantiated like Foo or Bar. This is because in Strange Land, Module sits in-between Object and Class and appears to represent a primordial Class; a thing that is not able to confer the ability of instantiation upon its instances, but which encapsulates the idea of instance methods.
Unlike Object where self is always the same, self is most definitely not always the same in an instance of Module. This is because Module introduces the idea of instance methods, methods that are defined in a class-like object but which are used only by instances of that class-like object.
  m = Module.new do
    # Singleton method
    def self.self1
      self
    end
    # Instance method
    def self2
      self
    end
    self
  end
  m == m.self1       # => true
  # At this point it is already clear that self1 and self2
  # are not the same because self2 does not apply to m.
  # Instead we could try to add self2 to some other object:
  o = Object.new.extend(m)
  o.self2 == m.self1 # => false
In the above, self1 is a singleton method which can be called on m - more on singleton methods later. self2 cannot be called on m because self2 is an instance method. The only way we can use it is to extend another object with it or include it into a class-like object, at which point self2 will return the self of that object.
So modules (the instances of Module) can't create instances of themselves but they can provide instance methods to be mixed in to other classes or used to extend other objects.
Classes (instances of Class) inherit the idea of instance methods from modules but go one step further, allowing themselves to be instantiated so that these instance methods can be used.

Modules and Constants

Ruby constants are an interesting construct. Use a variable name that starts with a capital letter and you have a constant, a very different thing from a non-capitalised variable. Constants can be seen by the rest of your program.
You can create a constant as easily as this
Foo=1
You can't nest a constant in a constant; this makes no sense:
Foo::Bar=2  # Error
This is where modules come in again. You can set constants inside a module and access them accordingly. Instances of Module have instance methods for handling the constants a module contains.
If our module is Bar, we can define a constant inside it:
module Bar
  Foo=1
end
Bar::Foo # => 1
The module that houses the constant doesn't have to be referenced by a constant:
m = Bar
m::Foo # => 1
Here m is a local variable.
You can see the instance methods for handling constants that Module defines for its instances like this:
irb(main):013:0> Module.instance_methods(false).sort.grep(/const/)
=> [:const_defined?, :const_get, :const_missing, :const_set, :constants]
For example:
irb(main):019:0> m.constants
=> [:Foo]
Modules and constants work hand in hand to allow you to house and namespace your code. Because classes are instances of a subclass of Module, they inherit the same capabilities.
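As a small illustration of mine, an anonymous module can serve as a namespace built entirely at runtime:

```ruby
# Build a namespace on the fly and stash a constant in it.
ns = Module.new
ns.const_set(:Answer, 42)

ns::Answer             # => 42
ns.const_get(:Answer)  # => 42
ns.constants           # => [:Answer]
```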

Where do class-like objects get their methods from?

Modules and classes define instance methods which ultimately get used by instances of classes. So where do classes and modules themselves, as objects, get their methods from?
Take for example the include method. This is a private instance method of Module:
    irb(main):010:0> Module.private_instance_methods(false).sort
    => [:alias_method, :append_features, :attr, :attr_accessor,
    :attr_reader, :attr_writer, :define_method, :extend_object,
    :extended, :include, :included, :initialize, :initialize_copy,
    :method_added, :method_removed, :method_undefined,
    :module_function, :private, :protected, :public,
    :remove_const, :remove_method, :undef_method]
What does that mean?
It means that instances of Module (ie modules) will have an include method. Because Class is a subclass of Module, instances of Class (ie classes in Class Land) will also have include.
But because Module has a class of Class, it too will have include as a method, inherited, oddly enough, from itself since Class is a subclass of Module¹. :)
Perhaps another name for Strange Land might be "Metaclass Land".
¹ Note, the underlying implementation of ruby (in C) may be somewhat more straightforward (I don't actually know). We are just looking at the "logic" that ruby presents to us as ruby programmers.

Singleton Methods

It gets stranger.
Recall that we said that everything is an object and that objects don't contain their own methods¹ but derive their methods from the class that they belong to.
¹ Class-like objects "contain" instance methods, but these are for their instances.
Well, if you've programmed for any time in ruby you might be aware that you can add singleton methods to objects:
  foo = Foo.new
  foo2 = Foo.new
  def foo.a
    'a'
  end
  foo.a # => 'a'
  foo2.a # => Error!
foo and foo2 are both instances of Foo and have the instance methods that are defined by Foo. But foo now has a method that foo2 does not.
We can do the same thing with class-like objects of course:
  def Foo.a
    'a'
  end
  Foo.a # => 'a'
Often, singleton methods for classes are defined like this:
  class Foo
    def self.a
      'a'
    end
  end
Singleton methods on classes are like class methods in other programming languages such as java.

Extend and include

As a small digression:
Yet another way to add singleton methods to an object is extend. extend is a method defined by ruby's Kernel module which is mixed in to Object and so is available to any other object in ruby land. extend takes the instance methods defined by a module and adds them as singleton methods to the object being extended.
Compare this to include which is a private instance method defined by Module which injects instance methods into a class-like object.
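The difference is easy to see with a throwaway module (the names here are my own):

```ruby
module Greeter
  def greet
    "hi from #{self.class}"
  end
end

class Host
  include Greeter    # instances of Host gain greet as an instance method
end

loner = Object.new
loner.extend(Greeter)  # only this one object gains greet (a singleton method)

Host.new.respond_to?(:greet)    # => true
loner.respond_to?(:greet)       # => true
Object.new.respond_to?(:greet)  # => false
```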
So where do these mysterious singleton methods belong if objects always derive their "instance methods" from a class?

Behold, Shadow Land!

Well, the shadowy answer to this conundrum is that they do get them from a class, just not the usual class that we've been talking about up till now.

Virtual Classes

In fact, this is going to come as a shock, but every object in ruby land has two classes. It has a class - the bright shiny platonic things in Class Land we discussed earlier. But also something called a virtual class (some people refer to it as an eigenclass), a shadowy denizen that resides in a kind of bottomless underworld of usually unnamed and often unseen classes.
[Image: shadow land. The dotted lines are `class`; the white arrow lines are `superclass`.]
One way to think about virtual classes is that every object in ruby, class-like or instance or whatever, has its own special, unique class that shadows it and can be called upon to stash instance methods that are unique to that object. Such methods are best referred to as singleton methods.

Getting the virtual class

Getting the virtual class is a tricky business. (It has gotten less tricky in recent times: ruby 1.9.2 added Object#singleton_class. I'm just referring to the state of play with the now bewhiskered ruby 1.8.6 as an example.)
One way to get it is by using ruby's class syntax in a form that is both totally ungoogleable and rather unintuitive. We can open one up like this:
  class << Foo
    ... do something with Foo's eigenclass ...
  end
The self inside this class-statement is our virtual class.
We can make life easy for ourselves here and stash a method directly in Object to get the above self
  class Object
    def eigenclass
      class << self; self; end
    end
  end
Adding a method to Object will make it accessible to all other objects in ruby land, since every object ultimately derives from Object. This technique was mentioned some years ago on the ruby mailing list.
Back to our example of def Foo.a, the singleton method we defined above. We note that:
  Foo.instance_methods(false).sort
does not show a; but
  Foo.eigenclass.instance_methods(false).sort
does.
You'll note that Shadow Land has several layers in the above diagram. In fact those layers just keep going down:
  irb(main):015:0> Object.class
  => Class
  irb(main):016:0> Object.eigenclass
  => #<Class:Object>
  irb(main):017:0> Object.eigenclass.eigenclass
  => #<Class:#<Class:Object>>
  irb(main):018:0> Object.eigenclass.eigenclass.eigenclass
  => #<Class:#<Class:#<Class:Object>>>
  ...
And so it goes...

Practicalities

So how does mapping out ruby land help in the real world?
Well, if we look back at the above map of ruby land we can see that it is segmented into the different "lands" and that this separation revolves around the act of instantiation.
We can see, for instance, that
  Class.new
will give us an anonymous Class-like object in class land. We can further see that
  Class.new.new
will give us an anonymous instance of an anonymous class.
We can also create anonymous modules:
  m = Module.new
and it should be clear that modules can't be instantiated any further.
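A quick irb-style check of these claims:

```ruby
c = Class.new          # an anonymous class-like object in class land
c.name                 # => nil - no constant refers to it yet
obj = c.new            # an anonymous instance of the anonymous class
obj.is_a?(c)           # => true

m = Module.new
m.respond_to?(:new)    # => false - modules can't be instantiated further
```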
We can see as a result that:
  class Foo
  end
is just a nice way to assign a new class to the constant Foo
  Foo = Class.new
Similarly for modules:
  module M
  end
becomes
  M = Module.new
We're really only a hop, skip and a jump away from meta programming - building classes and object oriented structures on the fly.
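For example, we can build a class on the fly exactly as `class Foo; end` would (Point is a hypothetical example):

```ruby
Point = Class.new do
  attr_reader :x, :y
  def initialize(x, y)
    @x, @y = x, y
  end
end

pt = Point.new(1, 2)
pt.x         # => 1
Point.name   # => "Point" - the anonymous class is named on constant assignment
```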
We can also see that to understand ruby to any deep level including metaprogramming we must first inspect and appreciate the 3 central objects that form the backbone of ruby land: Object, Module and Class and the roles they play.

Final Thoughts...

Personally, I think one can go a little too far by viewing the world exclusively in an object oriented way. This is a static world where things like to be isolated and named and allotted associated behaviours and where potentially they can be specialisations (subclass) of more general things. Ruby sits in an interesting space with its highly dynamic approach to this static world.

Final notes

A word on variables and scope

There are at least 5 different types of variables. Ruby's scoping rules vary depending on the type.
  • Globals
    • These start with a $
    • Ruby has many in-built globals
    • Globals are accessible everywhere
  • Constants
    • Any variable name that starts with a capital becomes a constant
    • like globals, constants can be seen anywhere
    • however, constants can be nested
    • if you define a constant within the context of a module or class, that constant will be nested within that context
          class Foo
            Bar='bar'
          end
          Foo::Bar # => 'bar'
      
    • constants are usually camel-cased whereas instance, class and local variables are usually underscored
    • constants are often used to store modules or classes
  • Instance variables
    • Any variable that starts with @ is an instance variable.
    • Every object will have a bunch of these.
    • One gotcha for ruby novices is to assume that @inst1 is an instance variable of an instance of Foo (eg Foo.new).
        class Foo
          attr_reader :inst1
          @inst1='inst1'
        end
      
      It is not.
        Foo.new.inst1 # => nil
      
      We can show that it is an instance variable of the object Foo by defining an instance method to access it in Foo's virtual class:
        class << Foo
          attr_reader :inst1
        end
        Foo.inst1 # => 'inst1'
      
  • Class variables
    • Any variable that starts with @@ is a class variable.
    • class variables allow instances of a class to access state that is associated with that class; these variables are per-class whereas instance variables are per-instance.
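A minimal sketch of per-class state (Counter is a made-up example):

```ruby
class Counter
  @@count = 0            # one @@count shared by the class and all its instances
  def initialize
    @@count += 1
  end
  def self.count
    @@count
  end
end

Counter.new
Counter.new
Counter.count # => 2
```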
  • local variables
    • anything that isn't prefixed by @ or @@ or $ or a capital letter is a local variable (or possibly the name of a method available within the context you are working in)

Ruby 1.9

Ruby 1.9 introduced a superclass to Object called BasicObject. I've omitted it here.

Tuesday, September 7, 2010

Split and join (in Javascript)

This article...

I want to take a quick look at splitting and joining text using javascript.

Splitting

Suppose you want to split some text. A language like javascript (and many other languages besides) makes this very easy:

  ',,a,,,,b,,c,,'.split(/,/) // case (I)
  => ["", "", "a", "", "", "", "b", "", "c", "", ""]

Or

  ',,a,,,,b,,'.split(/,+/) // case (II)
  => ["", "a", "b", ""]

You can recreate the string for (I) using an in-built join function:

  ',,a,,,,b,,c,,'.split(/,/).join(',')
  => ',,a,,,,b,,c,,'

Case (II) can't be put back the way it was because we no longer know, for any given joining point, how many characters were matched there: /,+/ matches a variable number of characters (commas in this case).
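We can see the information loss directly:

```javascript
var original = ',,a,,,,b,,';
// split on runs of commas, then re-join with single commas
var roundTrip = original.split(/,+/).join(',');
roundTrip; // => ',a,b,' - the runs of commas were collapsed, so the original is gone
```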

Joining for case (I)

Sometimes, you want to join a split as in case (I) but not back into a string. When I first tried to do this I ended up writing a horrifically complicated function.

Looking at this case again:

  ',,a,,,,b,,c,,'.split(/,/) // case (I)
  => ["", "", "a", "", "", "", "b", "", "c", "", ""]

The thing to remember is that "" represents the gaps between the commas in ',,a,,,,b,,c,,', including the gap before the very first comma and the gap after the very last one. "a", "b" etc. are filled-in gaps. This is probably what is confusing about manually joining such a split array: it's easy to fall into thinking that the ""-terms represent commas instead of the gaps.

Algorithm for manual joining

From an algorithmic point of view we want to map over the array produced in case (I) and process both the "" and non-"" terms.

The commas in the string may signify a point where we want to insert something. In my case, the strings I was splitting were text nodes from preformatted text (in pre-tags) that contained line feeds (\n or \r\n). I was tokenizing the text and wanted to preserve line feeds in the form of individual span tokens. So in this case the commas in case (I) would represent line feeds eg '\n\na\n\n\n\nb\n\nc\n\n' instead of ',,a,,,,b,,c,,'.

Going back to case (I), the terms (or gaps) are the best indication of where the commas are; if there are n commas, then there will be n+1 gaps (including filled in ones). Keeping this in mind the rules we could follow as we map over the array might be:

  • when we have a ""-term we insert a comma
  • when we have a non-""-term we insert the term followed by a comma
  • at the last position in the array don't insert a comma
    • if last position in the array is a "" then do nothing
    • if last position in the array is a filled-in gap, process it but don't insert comma

Functional approach

There are some nice ways to do this in javascript. ECMAScript 5 probably has mapping functions that might assist, but here is a manual version that, whilst not overly functional, facilitates a functional style when used (using the term 'functional' in a very loose sense):

  // Join elements that have been split by String.prototype.split(...).
  var join = function(arr,unsplit,process) {
      var i,l=arr.length;
      for(i=0;i<l;i++) {
          if(arr[i]!=='') process(arr[i],this);
          if(i!=l-1) unsplit(this);
      }
  }

Notes:

  • unsplit is a function that represents the "insert comma" operation
  • process is a function that represents the "insert term" operation which we apply to filled-in gaps like "a"
  • in addition, we pass this to both unsplit and process as this can facilitate sharing privileged information between unsplit and process; although this isn't necessary.

We could run join like this:

join(arr,f,g)

for some array arr and functions f and g.
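As a concrete, self-contained example, here is join rebuilding case (I), accumulating into an array rather than a string (out is just a hypothetical accumulator):

```javascript
// Join elements that have been split by String.prototype.split(...).
var join = function(arr, unsplit, process) {
    var i, l = arr.length;
    for (i = 0; i < l; i++) {
        if (arr[i] !== '') process(arr[i], this);
        if (i != l - 1) unsplit(this);
    }
};

var out = [];
join(
    ',,a,,,,b,,c,,'.split(/,/),
    function() { out.push(','); },      // unsplit: re-insert a comma
    function(term) { out.push(term); }  // process: keep a filled-in gap
);
out.join(''); // => ',,a,,,,b,,c,,'
```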

But suppose we want to accumulate a result as join maps over arr or otherwise share privileged information between f and g, this is where this could be used:

var module1 = function() {
  var prog1 = function(text) {
    ...
    var someObj = {};
    ... initialize someObj ...
    var arr = text.split(...);
    join.call(someObj,arr,unsplit,process);
    ...
  }     
  var unsplit = function(obj) {
    ...
  }     
  var process = function(item,obj) {
    ...
  }     
}();

In the above we have a function prog1 inside a module that performs a split on some text. We invoke join using call, passing someObj as the first argument; this becomes the this reference within join, which in turn passes it on to unsplit and process.

Variations

We could skip using call/this and simply add an extra parameter to join to allow us to pass an object in.

Or we could invoke unsplit and process with call as well. This removes the need to specify the obj parameter in these two functions:

  // Join elements that have been split by String.prototype.split(...).
  var join = function(arr,unsplit,process) {
      var i,l=arr.length;
      for(i=0;i<l;i++) {
          if(arr[i]!=='') process.call(this,arr[i]);
          if(i!=l-1) unsplit.call(this);
      }
  }
  var unsplit = function() {
    ... do something with 'this' ...
  }     
  var process = function(item) {
    ... do something with 'this' ...
  }     

We could also define unsplit and process within prog1 giving these functions privileged access to someObj. These functions would be generated every time prog1 is invoked. But there would be no need to mess about with an extra parameter or this.
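A sketch of that closure-based variation, using the line-feed tokenizing scenario from earlier (tokenize and the token shapes are hypothetical):

```javascript
// Join elements that have been split by String.prototype.split(...).
var join = function(arr, unsplit, process) {
    var i, l = arr.length;
    for (i = 0; i < l; i++) {
        if (arr[i] !== '') process(arr[i]);
        if (i != l - 1) unsplit();
    }
};

// Splits pre-formatted text on line feeds; unsplit and process
// close over tokens, so no extra parameter or `this` is needed.
var tokenize = function(text) {
    var tokens = [];
    var unsplit = function() { tokens.push({ type: 'linefeed' }); };
    var process = function(item) { tokens.push({ type: 'text', value: item }); };
    join(text.split(/\n/), unsplit, process);
    return tokens;
};

tokenize('a\n\nb');
// => [ {type:'text', value:'a'}, {type:'linefeed'},
//      {type:'linefeed'}, {type:'text', value:'b'} ]
```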

Wednesday, September 1, 2010

Surviving the twitter OAuthcalypse on the commandline (using Ruby)

Surviving the twitter OAuthcalypse on the command line

In this article...

  • I try to cover how to use twitter apis from the (linux) commandline via OAuth using the ruby twitter gem

Warning:

  • I'm a very light user of web services and social media in general
  • I have little knowledge of OAuth other than a general appreciation of what it is trying to do

Quick background...

I woke up to the OAuthcalypse today.

Up till now I had been using twitter in a very innocent, low-cal kind of way from the commandline (via an ungodly combination of curl and bash/shell utilities) and also from emacs. Both methods mysteriously failed today leaving me with blank screens and cryptic error messages.

Whilst I should probably have given up at this point and embraced one of the popular twitter services, I instead ended up wasting half a day wrestling with OAuth in a bid to get my twitter commandline working again.

Ruby twitter gem

I've given up for the moment using shell utilities to access twitter like before the OAuthcalypse, although this might be possible. Instead, I'm going to use ruby which for me is rapidly turning into the new perl.

There are probably numerous libraries in ruby for doing twitter but I chose John Nunemaker's twitter gem

  • I found it helpful to get the actual source which I git cloned
    • this turned out to be useful because the source includes a number of example files that are worth looking at
  • That being done, I installed the twitter gem in the normal way
      gem install twitter
    
    • This will load several other gems; in my case:
      • oauth-0.4.2
      • hashie-0.2.2
      • crack-0.1.6
      • httparty-0.5.2
      • yajl-ruby-0.7.7
      • twitter-0.9.8
  • At this point you should be able to do a require 'twitter' successfully

OAuth Terminology

Just to be clear, here are the main protagonists in an OAuth exchange:

  • service is an oauth enabled web service like twitter
  • user is a person that has an account with a service and who is using a consumer (or app) to access that service
  • consumer - consumes a service; a consumer may itself be some sort of service or a client application that the user is using; the consumer has to use oauth protocols to access the user's information in service
  • app is an alternative name for a consumer; I use both interchangeably

OAuth 1.0a and "out of band" (oob) processing

I'm going to cover the OAuth 1.0a process as it pertains to twitter and as best I can understand it after one day of head pounding.


OAuth requires 3 sets of tokens; each set consists of a token and a shared secret:

  • Consumer token / secret ctoken/csecret
    • this is a once-only token and shared secret that identifies the app (consumer)
    • you only need one for your app; so once you get it, you stash it somewhere where your app can load it
    • in the case of twitter you can set up access privileges when setting these up; for twitter this is whether the consumer will have read-only or read/write access
    • you can register your application with twitter (at http://twitter.com/oauth_clients/new) to have it generate the ctoken/csecret pair for you
      • twitter will require you to give it some information such as the application name and a description
        • interestingly, you can't leave the description blank and twitter doesn't like you putting in an app name that has 'twitter' in it; I think you are also required to put in a url for the app
        • I never had to bother with this when I was using my old commandline app with http auth api so I am wondering if this is the only way to proceed now
  • Request token / secret rtoken/rsecret
    • this is a transient token/secret pair that appears to represent the act of requesting an atoken/asecret pair; once you've used it to request an atoken/asecret pair (and hopefully succeeded), you can dump it
    • you need to present a valid ctoken/csecret pair to twitter before you can get a rtoken/rsecret pair
    • to "activate" this rtoken/rsecret pair the user is required to authenticate with the service (in the case of twitter via a specially crafted login url)
      • the user will be asked to login and then specify whether the consumer that initiated the request token should be allowed to proceed (allow or deny)
        • note: it may also be possible here to specify authorization privileges but in the case of twitter, this was done during the consumer token phase above
      • the user authenticates successfully and clicks 'allow',
      • because we're using out of band (oob) processing, the service will display a pin number; the user needs to (manually) give this to the app (our commandline twitter-gem based app) in order for it to proceed
        • the pin number is part of the "out of band" process flow; since we're trying to access twitter from the command line, we are very much out of band
  • Access token / secret atoken/asecret
    • the app uses the pin and the associated rtoken/rsecret pair from the previous step, to authenticate itself with the service; all going well, the service should provide an atoken/asecret pair
    • Once the app/consumer has an atoken/asecret pair, it can access the user's data from the service; this pair is acting like a substitute username/password.
      • note that the app/consumer never gets to see the real username/password of the user's account for the service and that the access pair can be easily revoked or set to expire

Using ruby twitter gem

Setting up config

  • If you looked at the examples section of the source for the twitter gem, there is a helpers/ directory containing config_store.rb
    • This defines a small class called ConfigStore that can be configured to load information (such as stored tokens) from a yaml file
    • Here's an example yaml file
        --- 
        ctoken: random-string-of-characters
        csecret: random-string-of-characters
        atoken: random-string-of-characters
        asecret: random-string-of-characters
      
    • You'll need to set ctoken and csecret by visiting twitter to register your application.
    • You won't be able to set atoken or asecret; these will be stored by ConfigStore when you do a successful authentication so leave these out for now
    • I use a slightly modified version of config_store.rb which I copied from examples/ into my directory of choice

Managing an OAuth session

The first thing we've got to do is manage an OAuth session.

I've managed to boil it down to one of two routes:

  • if you don't have a valid atoken/asecret, then you'll need to do a "full login" which means requesting an rtoken/rsecret pair and then getting a pin out-of-band and feeding it back to our commandline app
  • if we have a valid atoken/asecret, then we can skip the above rigmarole and access the service directly since the atoken/asecret is acting like a temporary username/password.

I've encapsulated this behaviour in a TwitterSession class:


require 'twitter'
require File.join(File.dirname(__FILE__), 'config_store')
require 'pp'

# Handles OAuth authentication with twitter.
#
# @config must already contain a valid 'ctoken' and 'csecret'
# which you can get from twitter: http://twitter.com/oauth_clients/new

class TwitterSession

  attr_reader :oauth,:config

  def initialize config
    @config = ConfigStore.new(config)

    # Request rtoken/rsecret and login url from service:
    @oauth = Twitter::OAuth.new(@config['ctoken'], @config['csecret'])
    @config.update({ 'rtoken'  => @oauth.request_token.token,
                     'rsecret' => @oauth.request_token.secret, })
  end

  # Request new atoken.
  #
  # @config must already contain a valid 'ctoken' and 'csecret'
  # which you can get from twitter: http://twitter.com/oauth_clients/new
  #
  # You will need to do an out-of-band process which
  # will load a browser (lynx) to log the user into
  # twitter and which will provide a pin.

  def login

    # Get user to login and allow the consumer to proceed:
    #%x(firefox #{@oauth.request_token.authorize_url})
    system %{lynx #{@oauth.request_token.authorize_url}}

    STDOUT.print "> what was the PIN twitter provided you with? "
    pin = STDIN.gets.chomp

    @oauth.authorize_from_request(@oauth.request_token.token,
                                  @oauth.request_token.secret,
                                  pin)
    @config.update({ 'atoken'  => @oauth.access_token.token,
                     'asecret' => @oauth.access_token.secret,
                   }).delete('rtoken', 'rsecret')
  end

  # Login with existing atoken.

  def login_with_atoken
    if(@config['atoken'] && @config['asecret'])
      @oauth.authorize_from_access(@config['atoken'], @config['asecret'])
    else
      login
    end
  end

end


In the above file:

  • In addition to requiring the twitter gem, I also require my version of config_store.rb which is almost identical to the one in examples/
  • we initialize TwitterSession by giving it a name of the config store
    • I have multiple accounts which, for the moment, I'll manage in separate stores and which we will instantiate separate TwitterSession instances for
  • initialize makes an oauth call to get the request token/secret pair
  • login is the full login method
    • it uses ruby's system to call lynx which loads up the authentication/authorization page on twitter where we will get the pin; this will all get done in the same console;
    • we copy the pin to the clipboard
    • we then quit lynx
    • the procedure will then ask for the pin and read it from STDIN
    • login will then attempt to get atoken/asecret using the rtoken/rsecret pair and associated pin
  • login_with_atoken is the quick login method which will only work if you have an existing valid atoken/asecret in your config store
    • You'll be able to run this most of the time after doing a single login.

Getting your timeline

We take the above TwitterSession class and use it like so:


require File.join(File.dirname(__FILE__), 'session')
sess = TwitterSession.new("/home/danb/twitter2/config_store")

# Force full login if we specify -l on commandline.
if ARGV[0]=='-l'
  sess.login
else
  sess.login_with_atoken
end

client = Twitter::Base.new(sess.oauth)
pp client.user_timeline


In the above file:

  • I require session.rb which houses TwitterSession.
  • I pass in a yaml file to TwitterSession which represents a config store for my particular twitter account.