2013 masthead
Canberra, Australia - January 28 to February 2, 2013

April 19, 2014

Swap Space and SSD

In 2007 I wrote a blog post about swap space [1]. The main point of that article was to debunk the claim that Linux needs a swap space twice as large as main memory (in summary, that advice comes from BSD Unix systems, has never applied to Linux, and most storage devices aren’t fast enough for a large swap anyway). That post was picked up by Barrapunto (the Spanish Slashdot) and became one of the most popular posts I’ve written [2].

In the past 7 years things have changed. Back then 2G of RAM was still a reasonable amount and 4G was a lot for a desktop system or laptop. Now there are even phones with 3G of RAM, 4G is about the minimum for any new desktop or laptop, and desktop/laptop systems with 16G aren’t that uncommon. Another significant development is the use of SSDs which dramatically improve speed for some operations (mainly seeks).

As SATA SSDs for desktop use start at about $110 I think it’s safe to assume that everyone who wants a fast desktop system has one. As a major limiting factor in swap use is the seek performance of the storage, the use of SSDs should allow greater swap use. My main desktop system has 4G of RAM (it’s an older Intel 64bit system and doesn’t support more) and has 4G of swap space on an Intel SSD. My work flow involves having dozens of Chromium tabs open at the same time; performance usually starts to drop when I get to about 3.5G of swap in use.

While SSDs generally have excellent random IO performance, their contiguous IO performance often isn’t much better than that of hard drives. My Intel SSDSC2CT12 300i 128G can do over 5000 random seeks per second, but for sustained contiguous filesystem IO it can only do 225M/s for writes and 274M/s for reads. That contiguous IO performance is less than twice as good as a cheap 3TB SATA disk. It also seems that the performance of SSDs isn’t as consistent as that of hard drives: when a hard drive delivers a certain level of performance it can generally do so 24*7, but an SSD will sometimes reduce performance to move blocks around (the erase block size is usually a lot larger than the filesystem block size).

It’s obvious that SSDs allow significantly better swap performance and therefore make it viable to run a system with more swap in use but that doesn’t allow unlimited swap. Even when using programs like Chromium (which seems to allocate huge amounts of RAM that aren’t used much) it doesn’t seem viable to have swap be much bigger than 4G on a system with 4G of RAM. Now I could buy another SSD and use two swap spaces for double the overall throughput (which would still be cheaper than buying a PC that supports 8G of RAM), but that still wouldn’t solve all problems.
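For reference, the kernel will stripe across multiple swap devices if they are given equal priority, which is how a two-SSD setup would roughly double swap throughput. A sketch of the relevant /etc/fstab entries (the device names are illustrative assumptions, not my actual setup):

```shell
# /etc/fstab - two swap spaces with equal priority are used
# round-robin by the kernel, roughly doubling swap throughput.
# Device names below are illustrative only.
/dev/sda2  none  swap  sw,pri=10  0  0
/dev/sdb2  none  swap  sw,pri=10  0  0
```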

One issue I have been having on occasion is BTRFS failing to allocate kernel memory when managing snapshots. I’m not sure if this would be solved by adding more RAM as it could be an issue of RAM fragmentation – I won’t file a bug report about this until some of the other BTRFS bugs are fixed. Another problem I have had is that when running Minecraft the driver for my ATI video card fails to allocate contiguous kernel memory. This is one problem that almost certainly wouldn’t be solved by just adding more swap, but it might be solved if I tweaked the kernel to be more aggressive about swapping out data.
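Making the kernel more aggressive about swapping out idle data is what the vm.swappiness sysctl controls; a sketch (100 is just an illustrative value, the default is 60):

```shell
# Check the current value
cat /proc/sys/vm/swappiness
# Swap idle pages out more aggressively (as root, lasts until reboot)
sysctl vm.swappiness=100
# To make it permanent, an illustrative line for /etc/sysctl.conf:
# vm.swappiness=100
```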

In 2007 when using hard drives for swap I found that the maximum space that could be used with reasonable performance for typical desktop operations was something less than 2G. Now with a SSD the limit for usable swap seems to be something like 4G on a system with 4G of RAM. On a system with only 2G of RAM that might allow the system to be usable with swap being twice as large as RAM, but with the amounts of RAM in modern PCs it seems that even SSD doesn’t allow using a swap space larger than RAM for typical use unless it’s being used for hibernation.


It seems that nothing has significantly changed in the last 7 years. We have more RAM, faster storage, and applications that are more memory hungry. The end result is that swap still isn’t very usable for anything other than hibernation if it’s larger than RAM.

It would be nice if application developers could stop increasing the use of RAM. Currently it seems that the RAM requirements for Linux desktop use are about 3 years behind the RAM requirements for Windows. This is convenient as a PC is fully depreciated according to the tax office after 3 years. This makes it easy to get 3 year old PCs cheaply (or sometimes for free as rubbish) which work really well for Linux. But it would be nice if we could be 4 or 5 years behind Windows in terms of hardware requirements to reduce the hardware requirements for Linux users even further.

Phone Based Lectures

Early this month at a LUV meeting I gave a talk with only my mobile phone to store notes. I used Google Keep to write the notes as it’s one of the easiest ways of writing a note on a PC and quickly transferring it to a phone – if I keep doing this I will find some suitable free software for this task. Owncloud seems promising [1], but at the moment I’m more concerned with people issues than software.

Over the years I’ve experimented with different ways of presenting lectures. I’m now working with the theory that presenting the same data twice (by speaking and text on a projector) distracts the audience and decreases learning.

Editing and Viewing Notes

Google Keep is adequate for maintaining notes. It’s based on notes that are a list of items (like a shopping list), which is fine for lecture notes. It probably has lots of other functionality but I don’t care much about that. Keep is really fast at updating notes; I can commit a change on my laptop and have it visible on my phone in a few seconds over 3G.

Most of the lectures that I’ve given have involved notes on a laptop. My first laptop was a Thinkpad 385XD with a 12.1″ display and all my subsequent laptops have had a bigger screen. When a laptop with a 12″ or larger screen is on a lectern I can see the notes at a glance without having to lean forward, as long as 15 or fewer lines of text are displayed on the screen. 15 lines of text is about the maximum that can be displayed on a slide for the audience to read, and the width of a computer display or projector is enough for a reasonable quantity of text.

When I run Keep on my Galaxy Note 2 it displays about 20 rather short lines of text in a “portrait” orientation (5 points for a lecture) and 11 slightly longer lines in a “landscape” orientation (4 points). In both cases the amount of text displayed on a screen is less than that with a laptop while the font is a lot smaller. My aim is to use free software for everything, so when I replace Keep with Owncloud (or something similar) I will probably have some options for changing the font size. But that means having less than 5 points displayed on screen at a time and thus a change in the way I present my talks (I generally change the order of points based on how well the audience seem to get the concepts so seeing multiple points on screen at the same time is a benefit).

The Samsung Galaxy Note 2 has a 5.5″ display which is one of the largest displays available in a phone. The Sony Xperia Z Ultra is one of the few larger phones with a 6.44″ display – that’s a large phone but still not nearly large enough to have more than a few points on screen with a font readable by someone with average vision while it rests on a lectern.

The most obvious solution to the problem of text size is to use a tablet. Modern 10″ tablets have resolutions ranging from 1920*1080 to 2560*1600 and should be more readable than the Thinkpad I used in 1998 which had a 12″ 800*600 display. Another possibility that I’m considering is using an old phone: a Samsung Galaxy S weighs 118 to 155 grams and is easier to hold up than a Galaxy Note 2 which weighs 180g. While 60g doesn’t seem like much of a difference, if I’m going to hold a phone in front of me for most of an hour the smaller and lighter phone will be easier to hold and maybe less distracting for the audience.

Distributing URLs

When I give a talk I often want to share the addresses of relevant web sites with the audience. When I give a talk with traditional style lecture notes I just put the URLs on the final page for people to copy during question time. When I use a phone I have to find another way.

I did a test with QR code recognition and found that a code that takes up most of the width of the screen of my Galaxy Note 2 can be recognised by a Galaxy S at a distance of 50cm. If I ran the same software on a 10″ tablet then it would probably be readable at a distance of a meter, and if I had the QR code take up the entire screen of a tablet it might be readable 1.5m away. So it doesn’t seem plausible to hold up a tablet and allow even the first few rows of the audience to decode a QR code. Even if newer phones have better photographic capabilities than the Galaxy S that I had available for testing, there are still lots of people using old phones who I want to support. I think that if QR codes are to be used they have to be usable by at least the first three rows of the audience for a small audience of maybe 50 people, as that would allow everyone who’s interested to quickly get in range and scan the code at the end.
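The readable distance should scale roughly linearly with the physical size of the code (same camera, same QR density), so the estimates above are just proportions from the 50cm test. A quick sketch of the arithmetic:

```python
def readable_distance_cm(display_inches, baseline_inches=5.5, baseline_cm=50.0):
    """Estimate the distance at which a QR code filling a display can be
    scanned, scaling linearly from the Galaxy Note 2 (5.5", 50cm) test."""
    return baseline_cm * display_inches / baseline_inches

print(round(readable_distance_cm(5.5)))    # 50 (the Note 2 baseline)
print(round(readable_distance_cm(10.0)))   # 91, about the metre estimated above
```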

Chris Samuel has a photo (taken at the same meeting) showing how a QR code from a phone could be distributed to a room [2]. But that won’t work for all rooms.

One option is to just have the QR code on my phone and allow audience members to scan it after the lecture. As most members of the audience won’t want the URLs it should be possible for the interested people to queue up to scan the QR code(s).

Another possibility I’m considering is to use a temporary post on my documents blog (which isn’t syndicated) for URLs. The WordPress client for Android works reasonably well so I could edit the URL list at any time. That would work reasonably well for talks that have lots of URLs – which is quite rare for me.

A final option is to use Twitter: at the end of a talk I could just tweet the URLs with suitable descriptions. A good portion of the tweets that I have written are URLs for web sites that I find interesting, so this isn’t a change. This is probably the easiest option, but with the usual caveat of using a proprietary service as an interim measure until I get a free software alternative working.

Any suggestions?

Please comment if you have any ideas about ways of addressing these issues.

Also please let me know if anyone is working on a distributed Twitter replacement. Please note that anything which doesn’t support followers on multiple servers and re-tweets and tweeting to users on other servers isn’t useful in this regard.

April 17, 2014


There will be opportunities to meet with me in some exotic places (at least for me, as I have never travelled there before) in May!

I’ll first be in Wien, Austria, early May, but that’s to celebrate my 50th birthday somewhere (half a century, as my kids like to call it ;-)) and during vacations, so I won’t talk about anything other than early music, architecture and pictures of the nice buildings over there!!

But after that, I’ll attend the UEFI plugfest in Seattle again, and be in charge of managing the interface between Linux distributions and HP. So if you plan to attend, and want to test your Linux distribution on nice shiny UEFI hardware platforms, feel free to contact me so we can organize a meeting over there.

The week after that I’ll be in Japan to present again during a LinuxCon event! I’m very lucky, first to be selected as a presenter to talk another time about Mageia, and then to be sponsored by our VP & Deputy General Counsel, Cloud Computing and Open Source, Eileen Evans, who is leading HP’s Open Source Program Office and allowing me to attend.

So feel free to drop me a mail if you want to chat about any topic I can decently talk about, such as Disaster Recovery and Imaging or Continuous Packaging, and some others surely HP related!

See you there.

Filed under: FLOSS Tagged: Event, HP, HPLinux, Japan, Linux, LinuxCon, LinuxFoundation, Mageia, Open Source, UEFI

April 13, 2014

After the poster.

Thank you to everyone who dropped by. Thank you if you let me guide you through the map in the poster, if you took the handout, if you just looked at my Grumpy Cat telling you to test your code.

You'll be able to see and download a version of both the poster and the handout here in the next few days.

And things might evolve into something more...

When I grow up.

[Puts on her best wise auntie look.]

Growing up is also about choices.

[Wise auntie look comes crashing down.]

No, I'm not that good at following my own wisdom. I want to do everything, at once.

But you should also find out which way is your way. Start with your strength first, feel great, then challenge yourself out of your comfort zone later.

For instance, let's say that you want to move your Python knowledge to yet another level.

But the further you go, the more roads you see in front of you.

So now it's time for a test. You know, like those "Which Star Wars Character Are You?" tests. But simpler.

And let's hope that this time I don't come out as Greedo.

Q1: Where did you sit when you were in school?

  1. First row. Hand up.
  2. Last row. Under the radar.
  3. School?

Q2: Pick one light reading.

  1. War and Peace. And without skipping the philosophical digressions.
  2. Harry Potter.
  3. The Evil Genius Guide to Taking Over the World.

Q3: Are you still here?

  1. Of course. I don't leave what I'm reading behind.
  2. Uh... yeah?
  3. [Silence. Somewhere someone's smashing something to see how it works.]

Results! Majority of...

  1. Have a cookie. Yes, you can have it. Go to Think Python: How to Think Like a Computer Scientist. You will be thorough, as usual, and patient. You will come out of the book with a methodical understanding of concepts with some theoretical background (you're allowed to peek into appendices before the end and see what the big-Oh notation is), the satisfaction of overcoming the tricks of many nice exercises, and a beautiful vocabulary (courtesy of the glossary at the end of each chapter).
  2. Have a cookie. Yes, you can take a cookie for each one of your friends. Bring them all here, while we're at it. No, I'm not sending you to the principal's office. I'm sending you to the intermediate projects of the Python Workshops that you can find via those great guys at OpenHatch. Some projects of the Boston Python Workshop are here; but you can find more, and they're always growing. If you're lucky (and, mostly, US-based) you can go and play with your friends live! at one of these workshops. But I guess that just the idea of making your own game of Snakes beats that tic-tac-toe that you were playing with over there.
  3. Have a cookie. Come on, come here. Please. I'll be quick. Have a cookie, yes. I know you're thinking about how to steal the jar. But I have something better for you: go to and see if you can get those tasty cookies. They're not easy to get. You have to deal with the world out there. You'll have to get your hands dirty, I'm afraid. I bet you... Hey! Where are you?

No majority?

You got me. Have all the cookies, you deserve them. Now throw a die with a number of faces that is a multiple of three to choose between the options above.


Now have fun. Whoever you are, Python's got something for you.

And after you had fun your way, explore the rest. Challenge and surprise yourself.

Let's learn Python and grow up.

My poster at PyCon 2014 will be tomorrow (today, in EST) at 10:00am. Come and say hello.

April 12, 2014

An official tutorial (and a gentleman).

I have fond memories of The Python Tutorial. My first Python textbook was Learn Python The Hard Way, but as you might have gathered, around the tenth chapter of printing text I was a bit dispirited. So I went for something that sounded short, to the point, and a bit official, even dry.

The Python Tutorial was a great help. I read (more on this "read") it, and I found what I wanted to know: that's the way you write a while loop, this is the way you define a function, classes are written according to this syntax. Then I moved on, keeping the website as a go-to reference for my little doubts.

I've just gone through it again, and I realise that I was saved by my worst flaws. No, not from, by.

First of all, I can be restless: I got to Defining Functions and enjoyed it, then I peeked into the next chapter and saw Data Structures, thought that was useful and quite easy to understand, and the section on Lambda Expressions in the middle of the two fell off my radar.

Then there's the fact that I knew something about programming. It was in Java, it was very little, but it was more than zero. This means that I knew the names of the topics: if what I wanted to write was a while, I knew to look for "loops." (Actually, the while is covered in an example in the chapter before loops. The organisation of the topics is not the strong point of The Python Tutorial.)

Even the fact that I didn't know very much played in my favour: it meant that the scope of my quest to do what I already knew wasn't wide enough to get myself lost in small(ish) details.

So I found The Python Tutorial a great tool, under these two conditions: some (even very basic) background in code, some tendency (ability?) to skim and overlook topics that you'll get back to at a second reading. Also, the topics are many but the pace is quick, and the examples are simple: it's a perfect complement to Dive Into Python 3.

Looking again at The Python Tutorial after using it as a "beginners' text" was a bit of a "Princess and the Frog" story: you think that you're over what's in there, then you find out that there's a lot more. The most important thing is in the URL: it's the official tutorial on the PSF website, so it's a gateway to the official documentation.

And there are many other good reasons why The Python Tutorial should always be there in your bookmarks: that drop-down menu (on the upper left of each page) that takes you from the page in Python 2.x to the same page in Python 3.x; the glossary (again, both in the 2.x and 3.x flavour).

So: this official tutorial can sound (or even be) a bit aloof sometimes. But you can (and shall) become fond of the company of this gentleman.

Replacement Credit Cards and Bank Failings

I just read an interesting article by Brian Krebs about the difficulty in replacing credit cards [1].

The main reason that credit cards need to be replaced is that they have a single set of numbers that is used for all transactions. If credit cards were designed properly for modern use (i.e. since 2000 or so) they would act as smart-cards, with that as the recommended way of paying in stores. Currently I have a Mastercard and an Amex card; the Mastercard (issued about a year ago) has no smart-card feature, and as Amex is rejected by most stores I’ve never had a chance to use the smart-card part of a credit card. If all American credit cards had a smart-card feature which was recommended by store staff then the problems that Brian documents would never have happened: the attacks on Target and other companies would have got very few card numbers, and the companies that make cards wouldn’t have a backlog of orders.

If a bank was to buy USB smart-card readers for all their customers then they would be very cheap (the hardware is simple and therefore the unit price would be low if purchasing a few million). As banks are greedy they could make customers pay for the readers and even make a profit on them. Then for online banking at home the user could use a code that’s generated for the transaction in question and thus avoid most forms of online banking fraud – the only possible form of fraud would be to make a $10 payment to a legitimate company become a $1000 payment to a fraudster but that’s a lot more work and a lot less money than other forms of credit card fraud.
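A one-time code bound to the transaction details is easy to construct from a shared secret; a minimal sketch of the idea (the HMAC construction, the 6-digit truncation, and all the names are my assumptions for illustration, not any bank's actual protocol):

```python
import hmac, hashlib

def transaction_code(secret_key, payee, amount_cents):
    """Derive a 6-digit one-time code tied to this exact payee and amount."""
    msg = f"{payee}:{amount_cents}".encode()
    digest = hmac.new(secret_key, msg, hashlib.sha256).digest()
    # Truncate to 6 decimal digits, HOTP-style
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

key = b"cardholder-secret"  # would live inside the smart-card
code = transaction_code(key, "example-merchant", 1000)
# A code for a $10 payment is useless for authorising a $1000 payment,
# which is the substitution attack described above
assert code != transaction_code(key, "example-merchant", 100000)
```

A real scheme would also mix in a counter or a bank-supplied challenge so that a code can't be replayed even for an identical transaction.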

A significant portion of all credit card transactions performed over the phone are made from the customer’s home. Of the ones that aren’t made from home a significant portion would be done from a hotel, office, or other place where a smart-card reader might be conveniently used to generate a one-time code for the transaction.

The main remaining problem seems to be the use of raised numbers. Many years ago it used to be common for credit card purchases to involve using some form of “carbon paper” and the raised numbers made an impression on the credit card transfer form. I don’t recall ever using a credit card in that way, I’ve only had credit cards for about 18 years and my memories of the raised numbers on credit cards being used to make an impression on paper only involve watching my parents pay when I was young. It seems likely that someone who likes paying by credit card and does so at small companies might have some recent experience of “carbon paper” payment, but anyone who prefers EFTPOS and cash probably wouldn’t.

If the credit card number (used for phone and Internet transactions in situations where a smart-card reader isn’t available) wasn’t raised then it could be changed by posting a sticker with a new number that the customer could apply to their card. The customer wouldn’t even need to wait for the post before their card could be used again, as the smart-card part would never be invalid. The magnetic stripe on the card could be changed at any bank, and there’s no reason why an ATM couldn’t identify a card by its smart-card and then write a new magnetic stripe automatically.

These problems aren’t difficult to solve. The amounts of effort and money involved in solving them are tiny compared to the costs of cleaning up the mess from a major breach such as the recent Target one, the main thing that needs to be done to implement my ideas is widespread support of smart-card readers and that seems to have been done already. It seems to me that the main problem is the incompetence of financial institutions. I think the fact that there’s no serious competitor to Paypal is one of the many obvious proofs of the incompetence of financial companies.

The effective operation of banks is essential to the economy and the savings of individuals are guaranteed by the government (so when a bank fails a lot of tax money will be used). It seems to me that we need to have national banks run by governments with the aim of financial security. Even if banks were good at their business (and they obviously aren’t) I don’t think that they can be trusted with it, an organisation that’s “too big to fail” is too big to lack accountability to the citizens.

April 10, 2014

The Big Jump.

So, are you ready for the next level?

So let's Dive Into Python 3.

You should know how to deal with an if or with a loop, in Python or in another language. You should have learned what a function is. You should have an idea of what "Object Oriented" means. You should be able to keep cool when you see that there's a parallel with a language you don't know.

So you dive straight into the good stuff. In detail.

I said "in detail": the chapters are quite long. But they're well subdivided into sections, so you don't have to swallow everything in one gulp. I'm looking at you, dear chapter on regular expressions.

And since we're on the subject of chapters and sections: Dive Into Python 3 is probably the most pleasantly readable textbook that I have met. There are the collapsible tables of contents; there is just one column, so you're not distracted from what you're studying. The font is beautiful and easy to read.

Another great idea: the difficulty of each topic is marked at the beginning of each chapter. I think that difficulty is always quite subjective; but it's always good to remark that sometimes prerequisites are harder than more advanced topics.

And then there's the text itself. The writing is conversational but precise; it takes you seriously and it doesn't make you feel stupid if you don't know something. The examples and the snippets of code are neither trivial nor unnecessarily convoluted, and they are cleverly annotated outside the code itself.

So: if you're already comfortable with programming, if you automatically go beyond the "reading" part of teaching yourself into the "writing code and playing around with it" part, here's a fantastic book for you.

Now you might wonder where's the catch. There's mostly one: Dive Into Python 3 is, well, a textbook on Python 3. This means that you have to look elsewhere for Python 2.x; but if you're comfortable with Dive Into Python 3 you shouldn't find googling "Python 2 and 3 differences" too hard. I recommend this page, which being on the website of the PSF is trustworthy by definition. Dive Into Python 3 has an appendix on the 2to3 script; but it's an appendix, and it's marked as "very difficult" in a book that is already not so easy.

Another possible issue is that you cannot really skip chapters. Sections, maybe. But you have to follow the path that's been laid down in the book. This is not a bad thing; but if you're reading something at this level you might sometimes wish to have the chance of a more flexible syllabus. But, on the other hand, if you're at this level you can realise when you have to go back and where to; so no harm done.

One last thing: there are many links from which you can take your study to a deeper level, so you can complement the somewhat "cookbook" flavour of Dive Into Python 3 by looking at the official documentation or even at a good post on a blog. But beware the "Problem with Wikipedia"...

The Problem with Wikipedia

April 06, 2014

LCA 2015 – Getting started

Disclaimer: The below is my personal opinion and does not represent the views of the 2015 LCA organising committee. Some details have been left out, stuff may change, names may be wrong, may contain nuts, etc.

In January 2015 the conference will be held in Auckland, New Zealand. Each year the conference brings together 600 (±100) Linux developers and users for 5 days of talks, chat and social events. LCA 2015 will be the 12th I’ve attended (every year since 2004) and the first I’ve helped organise. It will be the 3rd time the conference has been held in New Zealand.

Each year’s LCA is held in a different city by a group who bid for and run it. The Auckland team consists of a “core team” of about 10 under the overall lead of Cherie Ellis, plus another dozen “supporters” (including me). Other volunteers will be recruited closer to the time, and there are also external groups like the papers committee and people from Auckland University doing various jobs.

The majority of the conference will be held in the Owen G Glenn Building at Auckland University. It is a single big building with several large lecture theatres along with big central areas and smaller rooms. The current plan is for just about the whole conference proper to happen there.

Over half the attendees will probably stay at nearby student accommodation; this is cheap, close by, and lets people mingle with other attendees after-hours. There will also be some planned social events (like the conference dinner) elsewhere in Auckland.

Since January 2014 when Auckland was announced as the winning bid for 2015 the pace has gradually been picking up. Over 30 main positions have been filled (most with both a main and backup person) and the core team is meeting (usually online) weekly and the second supporters meeting is coming up.

The amount of stuff to organise is pretty big. As well as the venues, there is food, travel, accommodation, swag, the programme, the websites, network, dinners, registration, etc. A huge amount of stuff which will take up many hours per week for the rest of 2014.

At the end of March there was a “Ghosts visit”: half a dozen previous conference organisers (“Ghosts of conferences past”) came over for a weekend to look over the setup and talk to the group. The purpose is twofold: the Ghosts check that everything is on track and look for problems, while the 2015 organisers get to pick the Ghosts’ brains.

Large Brain possibly belonging to Ghost

Even the Ghosts’ event itself is a small test of the organisers’ ability. They have to fly in, meet, accommodate, host, feed and otherwise look after half a dozen people: a mini rehearsal for the full conference.

You and me.

Codecademy sweeps you off your feet. At least, as you might have guessed in the last post (sorry for the hiatus), it swept me off my feet. But you need a quiet bedroom to rest after the most amazing party, to regain the strength you need to plan the next one.

Welcome to Python for you and me.

Take a seat. A cup of tea, possibly caffeine-free. Reorganise what your mind has devoured.

The topics are already there for you, clearly subdivided into short chapters. Nothing too theoretical, mostly to-the-point examples that will immediately bring back to your mind what you’ve studied, or make you wish to learn more about something you haven’t seen before.

It’s great if you want to revise the very basics. It’s even better if you’ve never met the topic: you are likely to get the general idea of it, the sketch of a map to help you not to get lost when you will bring your study to the next level.

That’s basically the best and the worst about Python for you and me: it’s simple. It’s unthreatening, in the best meaning of the word, so you don’t spin into "I’ll never get this!" mode. But you must be wary of complacency. You read a chapter, then another, then the next: everything is calm and quiet. So quiet that you don’t check if you really understood what you read. The code samples are simple. (I just tried to say the last sentence out loud; my tongue is tied in a knot. Let’s move on.) The lack of exercises tempts you into not challenging yourself.

I love Python for you and me. It’s the reference text to keep on your (metaphorical) bedside table: you revise your background, you go back and forth a few chapters to get some context, you check your general position. It’s the perfect companion for Codecademy or Learn Python the Hard Way: I suggest a combination with the latter if you like the Hard Way and you don’t mind (or even enjoy) being taken between two opposite poles; if you like a more playful or relaxed approach, I would pair it with the former.

Last but not least: don’t underestimate how far the book will take you. Under the unassuming look you will find an amazing guide to PEP 8 guidelines, a great introduction to testing (and since you will already be nervous because tests will fail, the relaxed approach of Python for you and me will be a real help), and the basics of structuring and releasing a project (and since you will already be self-conscious about the big step, the Python for you and me style will put you at ease).

All in all: a fantastic resource. The soft bed from which you rise so much more rested, before moving on to the next level.

And now, on to the next level.

Finding Corrupt Files that cause a Kernel Error

There is a BTRFS bug in kernel 3.13 which is triggered by Kmail and causes Kmail index files to become seriously corrupt. Another bug in BTRFS causes a kernel GPF when an application tries to read such a file, that results in a SEGV being sent to the application. After that the kernel ceases to operate correctly for any files on that filesystem and no command other than “reboot -nf” (hard reset without flushing write-back caches) can be relied on to work correctly. The second bug should be fixed in Linux 3.14, I’m not sure about the first one.

In the mean time I have several systems running Kmail on BTRFS which have this problem.

(strace tar cf - . | cat > /dev/null) 2>&1 | tail

To discover which file is corrupt I run the above command after a reboot. Below is a sample of the typical output of that command, which shows that the file named “.trash.index” is corrupt. After discovering the file name I run “reboot -nf” and then delete the file (the file can be deleted on a clean system but not after a kernel GPF). Recently I’ve been doing this about once every 5 days, so on average each Kmail/BTRFS system has been getting disk corruption every two weeks. Fortunately every time the corruption has been on an index file so I don’t need to restore from backups.

newfstatat(4, ".trash.index", {st_mode=S_IFREG|0600, st_size=33, …}, AT_SYMLINK_NOFOLLOW) = 0

openat(4, ".trash.index", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_NOFOLLOW|O_CLOEXEC) = 5

fstat(5, {st_mode=S_IFREG|0600, st_size=33, …}) = 0

read(5,  <unfinished …>

+++ killed by SIGSEGV +++
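
The extraction step can be scripted; this is a hedged sketch (not from the original post) that pulls the last quoted path out of the strace tail. A saved copy of output like the sample above stands in for a live "strace tar" run, and GNU grep is assumed for the -o flag:

```shell
# The last quoted path before the SIGSEGV names the file being read
# when the kernel bug fired.
log='newfstatat(4, ".trash.index", {st_mode=S_IFREG|0600, st_size=33, ...}, AT_SYMLINK_NOFOLLOW) = 0
openat(4, ".trash.index", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_NOFOLLOW|O_CLOEXEC) = 5
fstat(5, {st_mode=S_IFREG|0600, st_size=33, ...}) = 0
read(5,  <unfinished ...>
+++ killed by SIGSEGV +++'
# Keep only quoted strings, take the last one, strip the quotes.
corrupt=$(printf '%s\n' "$log" | grep -oE '"[^"]+"' | tail -n 1 | tr -d '"')
echo "$corrupt"
```

In real use the `log=` assignment would be replaced by capturing the tail of the strace output from the command above.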


Java Sound on Linux

I’m often in the position of having my favorite Java program (AltosUI) unable to make any sounds. Here’s a history of the various adventures I’ve had.

Java and PulseAudio ALSA support

When we started playing with Java a few years ago, we discovered that if PulseAudio were enabled, Java wouldn’t make any sound. Presumably, that was because the ALSA emulation layer offered by PulseAudio wasn’t capable of supporting Java.

The fix for that was to make sure pulseaudio would never run. That’s harder than it seems; pulseaudio is like the living dead, rising from the grave every time you kill it. As it’s nearly impossible to install any desktop applications without gaining a bogus dependency on pulseaudio, the solution that works best is to make sure dpkg never manages to actually install the program, with dpkg-divert:

# dpkg-divert --rename /usr/bin/pulseaudio

With this in place, Java was a happy camper for a long time.

Java and PulseAudio Native support

More recently, Java has apparently gained some native PulseAudio support in some fashion. Of course, I couldn’t actually get it to work, even after running the PulseAudio daemon. But some kind Debian developer decided that sound should be broken by default for all Java applications and selected the PulseAudio back-end in the Java audio configuration file.

Fixing that involved learning about said Java audio configuration file and then applying a patch to revert the Debian packaging damage.

$ cat /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/

You can see the PulseAudio mistakes at the top of that listing, with the corrected native interface settings at the bottom.
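
For reference, a hedged sketch of what the native-interface entries in such a file can look like: the javax.sound.sampled.* keys are the standard Java sound service-provider properties, while the provider class names shown are from OpenJDK 7 and may differ on other versions.

```properties
# Select the native ALSA/DirectAudio providers rather than the
# PulseAudio back-end (class names assumed from OpenJDK 7):
javax.sound.sampled.Clip=com.sun.media.sound.DirectAudioDeviceProvider
javax.sound.sampled.Port=com.sun.media.sound.PortMixerProvider
javax.sound.sampled.SourceDataLine=com.sun.media.sound.DirectAudioDeviceProvider
javax.sound.sampled.TargetDataLine=com.sun.media.sound.DirectAudioDeviceProvider
```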

Java and single-open ALSA drivers

It used to be that ALSA drivers could support multiple applications having the device open at the same time. Those with hardware mixing would use that to merge the streams together; those without hardware mixing might do that in the kernel itself. While the latter is probably not a great plan, it did make ALSA a lot more friendly to users.

My new laptop is not friendly, and returns EBUSY when you try to open the PCM device more than once.

After downloading the jdk and alsa library sources, I figured out that Java was trying to open the PCM device multiple times when using the standard Java sound API in the simplest possible way. I thought I was going to have to fix Java, when I figured out that ALSA provides user-space mixing with the ‘dmix’ plugin. I enabled that on my machine and now all was well.

$ cat /etc/asound.conf
pcm.!default {
    type plug
    slave.pcm "dmixer"
}

pcm.dmixer {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:1,0"
        period_time 0
        period_size 1024
        buffer_size 4096
        rate 44100
    }
    bindings {
        0 0
        1 1
    }
}

ctl.dmixer {
    type hw
    card 1
}

ctl.!default {
    type hw
    card 1
}
As you can see, my sound card is not number 0, it’s number 1, so if your card is a different number, you’ll have to adapt as necessary.

April 05, 2014


I’m in contact with some fellows at HP, such as Dong Wei for the UEFI forum. I thought we should promote them externally and showcase their careers, as is done for some of our partners: Intel has a page describing their senior fellows as well as pages giving access to the Intel fellows. Similarly, IBM promotes their fellows each year and maintains a list of the 257 fellows they have!

But after searching around, I was unable to find such pages for our Fellows. And when you know the level of the people we have internally, they would definitely deserve a page with their bio. You can find the one of Kirk Bresniker e.g., but you have to know them to find them :-(

I think it’s time to repair what I consider an error. Come on HP, it’s important to give visibility and show that we too have great people. Why do you think we’re the number one IT company? It’s because we have some of the best people in the industry.

Filed under: FLOSS Tagged: HP

April 02, 2014

Do you want to build a project? (Come on, let's go and play.)

This time I'm going to play it safe. Maybe too safe. But hey - I've always been against criticising something just because it's fashionable.

So, let's talk about Codecademy: the Python track and beyond.

I was wary of Codecademy. Exercises without apparent theory? Mmmh. A little stern teacher inside me kept preaching sermons at me about pampering myself too much. But then, at PyCon 2013, Jessica McKellar introduced Python for beginners with the hands-on Codecademy tool. And I fell in love.

Codecademy is the Addictive and Playful Way to approach the Hard Way. It's the cool aunt who gives you her Led Zeppelin CDs, teaches you not to give up listening even if it's not what you're used to, but never brags about "in my times". And as an icing on the cake she tells you about her escapades when she was listening to that song.

You do your exercises and you don't fool yourself into thinking that you know how to do something just because you read the theory. Even better: your exercises are checked as soon as you've done them, so you know if you know or you don't know. You soon become committed not to lose your winning streak, so you exercise every day. You want to grab those nice colourful badges, so you challenge your fear of not being good enough.

And you learn. Not so slowly, and surely.

The topics are introduced at a fundamentally constant pace. If you pick up the basics quickly, you can dash through the first tutorials and then take your time to absorb the harder stuff (and since the track goes as far as lambda expressions, the harder stuff is not always so intuitive). If you are the kind of student who gains velocity over time, you can spend your first days getting acquainted with the basic concepts and then enjoy the more complex ideas.

After you've learned your concepts (even if you didn't notice that you were learning, being too busy solving the problem at hand), for each section of the track you have a project to put everything into practice: a project on a larger and more satisfying scale than an "exercise section" in a traditional textbook, but small enough not to derail you into an overwhelmingly ambitious plan. (There isn't one for the Advanced Topics, but once you're there you'll be old and wise enough to take care of yourself. One would be nice, but that's a minor issue.)

And then, you go beyond! The community gives you nice projects. You can give back to the community with nice projects.

You try your skills at dealing with APIs, and you learn the wider concepts you need for those APIs.

You learn to ask questions and get answers (and that's another skill, and a most useful one) discussing with other students in the forum: a good practice for when you will eventually interact with others on Stack Overflow.

To top it all off, you have a glossary that will help you in those "I knew what this word meant"/"How should I say that?" moments.

Theory and practice, play and commitment. You have it all. You just have to avoid falling into the trap of "it's not serious if it's not painful."


The title of this post is a reference to my favourite movie of this past winter, the story of a young woman who learns to control her power and to put it to a better use, for the fun of everyone including herself.

April 01, 2014

Comparing Telcos Again

Late last year I compared the prices of mobile providers after Aldi started getting greedy [1]. Now Aldi have dramatically changed their offerings [2] so at least some of the phones I manage have to be switched to another provider.

There are three types of use that are of interest to me. One is for significant use, that means hours of calls per month, lots of SMS, and at least 2G of data transfer. Another is for very light use, maybe a few minutes of calls per month where the aim is to have the lowest annual price for an almost unused phone. The third is somewhere in between – and being able to easily switch between plans for moderate and significant use is a major benefit.

Firstly please note that I have no plans to try and compare all telcos, I’ll only compare ones that seem to have good offers. Ones with excessive penalty clauses or other potential traps are excluded.

Sensible Plans

The following table has the minimum costs for plans where the amount paid counts as credit for calls and data, this makes it easy to compare those plans.

Plan | Cost per min or SMS | Data | Minimum cost
AmaySIM As You Go [3] | $0.12 | $0.05/meg; $19.90 for 2.5G in 30 days; $99.90 for 10G in 365 days | $10 per 90 days
AmaySIM Flexi [4] | $0.09 | 500M included; free calls to other AmaySIM users; $19.90 for 2.5G in 30 days; $99.90 for 10G in 365 days | $19.90 per 30 days
Aldi pre-paid [5] | $0.12 | $0.05/meg; $30 for 3G in 30 days | $15 per 365 days

AmaySIM has a $39.90 “Unlimited” plan which doesn’t have any specific limits on the number of calls and SMS (unlike Aldi “Unlimited”) [6]; that plan also offers 4G of data per month. The only down-side is that changing between plans is difficult enough to discourage people from doing so, but if you use your phone a lot every month then this would be OK. AmaySIM uses the Optus network.

Lebara has a $29.90 “National Unlimited” plan that offers unlimited calls and SMS and 2G of data [7]. The Lebara web site doesn’t seem to include details such as how long pre-paid credit lasts; the lack of such detail doesn’t give me confidence in their service. Lebara uses the Vodafone network, which used to have significant problems; hopefully they have fixed them. My lack of confidence in the Vodafone network and in Lebara’s operations makes me inclined to avoid them.

Obscure Plans

Telechoice has a $28 per month “i28” plan that offers unlimited SMS, $650 of calls (which can be international) at a rate of just over $1 per minute, unlimited calls to other Telechoice customers, and 2G of data [8]. According to the Whirlpool forum they use the Telstra network, although the TeleChoice web site doesn’t state this (one of many failings of a horrible site).

The TeleChoice Global Liberty Starter plan costs $20 per month and includes unlimited calls to other TeleChoice customers, unlimited SMS, $500 of calls at a rate of over $1 per minute, and 1G of data [9].

Which One to Choose

For my relatives who only rarely use their phones the best options are the AmaySIM “As You Go” [3] plan which costs $40 per 360 days and the Aldi prepaid which costs $15 per year. Those relatives are already on Aldi and it seems that the best option for them is to keep using it.

My wife typically uses slightly less than 1G of data per month and makes about 25 minutes of calls and SMS. For her use the best option is the AmaySIM “As You Go” [3] plan, which will cost her about $4 in calls per month plus $99.90 for 10G of data which will last 10 months, averaging out to about $13 per month. It could end up being a bit less, because a 10G allocation that can be used over a year gives an incentive to reduce data use, while previously with Aldi she had no reason to use less than 2G of data per month; her average cost will be $11.30 per month if she can make the 10G of data last a year. The TeleChoice “Global Liberty Starter” [9] plan is also appealing, but it is a little more expensive at $20 per month; it would be good value for someone who averages more than 83 minutes per month and also uses almost 1G of data.
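
As a sanity check on the estimate above, a quick back-of-the-envelope calculation (a sketch, not part of the original post; awk is used purely as a calculator):

```shell
# ~25 minutes of calls/SMS per month at $0.12 each, plus the $99.90
# 10G data pack amortised over 12 months.
avg=$(awk 'BEGIN { printf "%.1f", 25 * 0.12 + 99.90 / 12 }')
echo "about \$$avg per month"
```

which agrees with the $11.30 per month figure.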

Some of my relatives use significantly less than 1G of data per month. For someone who uses less than 166MB of billable data per month, the Aldi pre-paid rate of $0.05 per meg [5] is the best option, but with a modern phone that does so many things in the background and a plan that rounds up data use it seems almost impossible to be billed for less than 300MB/month. Even when you tell the phone not to use any mobile data some phones still do; on a Nexus 4 and a Nexus 5 I’ve found that the only way to prevent being billed for 3G data transfer is to delete the APN from the phone’s configuration. So it seems that the AmaySIM “As You Go” [3] plan with a 10G annual data pack is the best option.
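
The 166MB threshold appears to be the break-even against the AmaySIM 10G/365-day pack amortised monthly; a hedged check of that assumption (awk as calculator):

```shell
# Monthly megabytes at which Aldi's $0.05/meg rate equals the
# $99.90 / 12 monthly cost of the 10G annual pack.
mb=$(awk 'BEGIN { printf "%.1f", (99.90 / 12) / 0.05 }')
echo "$mb MB per month"
```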

One of my relatives needs less than 1G of data per month and not many calls, but needs to be on the Telstra network because their holiday home is out of range of Optus. For them the TeleChoice Global Liberty Starter [9] plan seems best.

I have been averaging a bit less than 2G of data transfer per month. If I use the AmaySIM “As You Go” [3] plan with the 10G data packs then I would probably average about $18 worth of data per month. If I could keep my average number of phone calls below $10 (83 minutes) then that would be the cheapest option. However I sometimes spend longer than that on the phone (one client with a difficult problem can involve an hour on the phone). So the TeleChoice i28 plan looks like the best option for me: it gives $650 of calls at a rate of $0.97 per minute + $0.40 connection (that’s $58.60 for an hour-long call, and I can do 11 of those calls in a month) and 2G of data. The Telstra coverage is an advantage for TeleChoice; I can run my phone as a Wifi access point so my wife can use the Internet when we are out of Optus range.
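
The call arithmetic can be checked the same way (a quick sketch, awk as calculator):

```shell
# One hour at $0.97/minute plus the $0.40 connection fee, and how
# many such calls the $650 allowance covers.
call=$(awk 'BEGIN { printf "%.2f", 60 * 0.97 + 0.40 }')
n=$(awk 'BEGIN { printf "%d", 650 / (60 * 0.97 + 0.40) }')
echo "\$$call per hour-long call; $n calls per month"
```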

Please let me know if there are any good Australian telcos you think I’ve missed or if there are any problems with the above telcos that I’m not aware of.

I did it my (hard) way.

Let's start this series by gambling all my credibility.

[Deep breath.]

[Another deep breath.]

[Yet another deep breath.]

The hard way is not my way.

There, I said it. And now, please, let me defend my position.

I know that Learn Python The Hard Way is one of the, if not the, most common answers to the question "I want to learn Python, can you suggest a tutorial?" - so I don't doubt that there are people who find it extremely helpful. And I don't think it's a bad book; but I think that it has some serious limits.

Learn Python The Hard Way reminds me of Gioacchino Rossini's joke about Richard Wagner: "a composer who has beautiful moments but awful quarter hours."

Here's an example of a beautiful moment; even better, a deeply wise one:

As you study this book, and continue with programming, remember that anything worth doing is difficult at first. Maybe you are the kind of person who is afraid of failure so you give up at the first sign of difficulty. Maybe you never learned self-discipline so you can't do anything that's "boring." Maybe you were told that you are "gifted" so you never attempt anything that might make you seem stupid or not a prodigy. Maybe you are competitive and unfairly compare yourself to someone like me who's been programming for 20+ years.

Whatever your reason for wanting to quit, keep at it. Force yourself. If you run into a Study Drill you can't do, or a lesson you just do not understand, then skip it and come back to it later. Just keep going because with programming there's this very odd thing that happens. At first, you will not understand anything. It'll be weird, just like with learning any human language. You will struggle with words, and not know what symbols are what, and it'll all be very confusing. Then one day BANG your brain will snap and you will suddenly "get it."

(Emphasis mine.)

I cannot find a single fault in this passage.

It's encouraging and it keeps you grounded at the same time. It spells out the incredibly important study trick of "your tutorial is not a TV series, spoilers are good." It reminds me of what John von Neumann once said, that in mathematics you don't understand things, you get used to them (something that should be taught to every child, to dispel that horrid idea that "I don't understand this" means "I'm hopelessly not good at this").

I want to stand up and cheer.

But then there are the quarter hours.

Twenty. Nine. Chapters. To. Get. To. See. An. If.

Twenty. Nine.

This self-discipline borders on masochism.

As someone who always finds it easier to solve other people's problems than to imagine brand new problems (although I'm good at finding the problematic limits of any solution; but I digress) I truly appreciate the incredible amount of exercises that the book offers. But there's a terrible side effect of this "old master teaches young apprentice by the way of apparently mundane and repetitive tasks" approach: the young apprentice becomes so diligent that they never step out of line. It's not just a matter of being overwhelmed by the amount of lines that you have to copy: it's ending up depending on the teacher even to break the toy to see what's in there.

Perhaps I'm taking for granted the desire to rebel. The joy of "hey, let's see what happens if I turn left instead of right as you told me!" The mindset of not taking anything for granted.

(I'm not sure whether the paragraph above is more introspective or paradoxical.)

If you don't have this questioning instinct, it's likely that Learn Python The Hard Way is the perfect book to kickstart it; and since this instinct is a fundamental tool in any scientific and technical field, The Hard Way could be a very good way to start your journey into programming.

By the way: The Hard Way will also take you quite far if you stick with it. Many of the topics in the poster are there.

I still think it's very hard, if not altogether impossible, to teach rebellion - especially through discipline. But maybe it's just my hedonism, all play and not enough work, no reverence to my betters.

You decide. I'm off to boost my rebellious feelings with some Wagner. The last quarter hour of The Valkyrie. A beautiful opera about a girl rebelling against her father in order to obey her father's wishes; it's worth sitting through all four hours (plus intervals) of it. By the way: have I ever told you that I love Wagner much more than I like Rossini? Personal taste can be so strange and, well, personal.

Everyone has their own way, after all.

Unfortunately necessary disclaimer: let's not go into the "Wagner and politics" or the "Wagner was an [expletive] who lived on his friends' money while having affairs with their wives" issue. At least, not here.

March 31, 2014

So you want to learn Python. What's next?

So in ten days I will be in Montréal for PyCon2014.

So in two weeks I will be presenting a poster.

I'm honoured and grateful to be there. If I started thanking everyone who's helped me, I'd get too tearful, you would space out, and anyway I would forget someone and then feel terrible. I just hope that I'll be able to tackle these people without being annoying and thank them in person. I don't think that a cup of coffee makes up for two dozen emails exchanged due to my clumsiness, but I can try. Sugar, milk?

But now! Onwards with the main reason why I'm writing here!

The poster I'm presenting is a road map. Not a roadmap, a road map.

Let's say that you can survive in Python, but not much more. If Python were a natural language and you were visiting its country, you could ask for some water in a cafe; but you couldn't discuss with the waiter which wine goes better with your dinner. You know the basics. You like it (otherwise: why are you at PyCon?!) but now you want more.

But there's so much to explore in this Python country! Which way should you go first? And how should you get there?

And here's why the poster is a road map and not a roadmap: there is no should.

There are many travel guides out there, and all but a few of them are guided tours. I'm here to give you a travel guide for navigating between travel guides, and a map to plan your own tour, jumping on and off the buses of their guided tours.

When I was planning this poster I thought about handouts. The handouts were supposed to be a smaller version of the poster. But wait! There's not enough space to do justice to the resources that you can use, even selecting just a few, and still include the graphical part of the poster! But wait! There's not enough space to do justice to the resources that I want to tell you about, period!

But wait! I have a blog! A blog with posts and tags!

In the coming days I will review some resources that you can use to take your Python to an intermediate level. The posts will be tagged with the title of my poster.

So, if you're at PyCon on the 13th of April 2014 and you want to have a chat, or just to say hello, look for So you want to learn Python. What's next? in the poster session. I will be there.

If you're not at PyCon on the 13th April 2014 but you're interested in the topic, watch this space.

Links March 2014

Typing Animal wrote an interesting article about the dangers of stainless steel in a medical environment [1]. Apparently silver and copper are best due to the oligodynamic effect. Instead of stainless steel drinking bottles they should sell silver plated drinking bottles for kids, I’m sure that lots of parents would pay extra for that.

Mark Kendall gave an interesting TED talk about a replacement for the hypodermic syringe in vaccinations [2]. His invention can reduce the cost of immunisation while increasing the effectiveness and avoiding problems with people who have a needle phobia.

The TED blog has an interesting interview with Will Potter about the use of the “war on terror” to silence journalists and the invention of the term “eco terrorism” for non-violent people who are politically active [3].

The TED blog has an interesting article by Kate Torgovnick May about designing products for sustainability [4]. It links to an insightful TED talk by Leyla Acaroglu about some of the complex issues related to sustainability [5].

Manoush Zomorodi wrote an informative article about How one college went from 10% female computer-science majors to 40% [6].

Slate has an interesting article by Jamelle Bouie showing the way that support for capital punishment in the US is linked to racism [7].

The Southern California Public Radio blog has an interesting article by Josie Huang about Suey Park and her success in using twitter to oppose racism [8].

Andrew Solomon wrote an insightful interview with the father of Adam Lanza for the New Yorker [9].

Waleed Aly wrote an insightful article about George Brandis’ attempt to change the Racial Discrimination Act specifically to allow Andrew Bolt to be racist [10]. He describes it as “the whitest piece of proposed legislation I’ve encountered” which is significant in a country with as much racism as Australia. Really we need stronger laws against racism, there should be no right to be bigoted.

A German Court has ruled that “non commercial” licenses don’t permit non-commercial organisations to re-publish material [11]. This seems bogus to me, I’d be happy to have my non-commercial licensed work published by a non-commercial publishing organisation – just as long as they don’t run adverts on the page.

Professors Woolley and Malone wrote an interesting article about their research into group performance, apparently having more women in a group improves the collective intelligence of a group, but having smarter men in the group doesn’t [12].

Susie Hill wrote an article about the SPARX computer game that is designed to treat adolescent depression [13]. They are working on a “rainbow” edition for GLBT kids and a version for Maoris. Unfortunately their web site is down right now and the version at says that it’s currently only available to participants in a clinical trial.

Tim Chevalier wrote an insightful article explaining why people who campaign against equality shouldn’t be given senior positions in corporations [14].

Zeynep Tufekci wrote an insightful article about how French High Theory and Dr. Seuss can help explain gender problems in geek communities [15].

Hannah Levintova wrote an informative article for Mother Jones about how the US based hate group the World Congress of Families incites homophobic violence in Russia [16].

Josh Sanburn wrote an article for Time about people in the Deep South who claim to be Christian giving away guns to encourage people to attend church [17]. This is the same part of the world where people who claimed to be Christian used their “religion” as an excuse for supporting slavery. I’m quitting bourbon, too much evil comes from that part of the world and I’m not buying anything that comes from there.

March 30, 2014

OpenWRT WDS between legacy WRT54G and recent TP-Link devices

For a while now I had a multiple wifi routers all providing access points, and a connection to each other, using a feature called WDS. All of the routers run OpenWRT. Recently one of them died and everything kind of stopped working properly. I actually had the following configuration: TP-LINK <--wired,bridged--> ASUS WL500G <--wireless,WDS,bridged--> Linksys […]

March 26, 2014

The Aspie Accent

I am often asked about my “accent”. The most common guess is that it’s a “British” accent; while I lived in London for about a year I don’t think that my accent changed much during that time (people have commented on the way I speak since I was in primary school). Also there isn’t a single “British accent” anyway: the Wikipedia page on Regional Accents of English has its first three sections devoted to accents in the island of Britain (and Northern Ireland is part of the United Kingdom, which is what people often mean when they say “Britain”). Received Pronunciation is the main BBC accent and the accent that is most associated with Britain/England/the UK (which are three different things even though most people don’t know it), and I don’t think that I sound like that at all.

I’ve had various other guesses, the Netherlands (where I lived for a few years but mostly spoke to other foreigners), New Zealand (which I’ve visited a couple of times for conferences), Denmark (the closest I got was attending a conference in Sweden), and probably others I can’t remember.

If I actually had developed an accent from another country then it would probably be from the US. The amount of time I’ve spent watching Hollywood movies and watching US TV shows greatly exceeds the amount of time I’ve spent listening to people from all other countries. The fact that among all the people who wanted to try and guess where my accent supposedly originated none have ever included the US seems like strong evidence to suggest that I don’t have any sort of accent that really derives from another country. Also I have never had someone mistake me for being a resident of their own country based on accent which seems like clear evidence that all claims about me having a foreign accent are bogus.

Autism forums such as [1] always turn up plenty of results for a search on “accent”. In such discussions it seems that a “British accent” is the most common mistake, and there are often theories raised about why that is, often related to speaking in a formal or precise way or to using a large vocabulary. Also in such discussions the list of countries that people supposedly have accents from is very inclusive; it seems that any country that the listener has heard of but doesn’t know well is a good candidate. The fact that Aspies from outside the US are rarely regarded as having an American accent could be due to Hollywood having made most of the world’s population aware of what most American accents sound like.

Also if I really had some sort of accent from another country then probably someone would comment on that when I’m outside Australia. When I’m travelling people tend to recognise my accent as Australian, while it doesn’t please me when someone thinks that I sound like Crocodile Dundee (as happened in the Netherlands) it might not be entirely inaccurate.

This is Annoying

The way the issue of accent is raised is generally in the form of people asking where I’m from, it seems to imply that they don’t think I belong in Australia because of the way I speak. It’s particularly annoying when people seem unable to realise that they are being obnoxious after the first wrong guess. When I reply “no” to the first “are you from $COUNTRY” question and don’t offer any further commentary it’s not an invitation to play 20 questions regarding where I’m supposedly from, it’s actually an indication that I’m not interested in a conversation on that topic. A Social Skills 101 course would include teaching people that when someone uses one-word answers to your questions it usually means that they either don’t like your questions or don’t want to talk to you.

Social Skills vs Status

The combination of persistence and misreading a social situation which are involved when someone interrogates me about my supposed accent are both parts of the diagnostic criteria for Autism. But I generally don’t get questions about my “accent” in situations where there are many Aspies (IE anything related to the Free Software community). I think that this is because my interactions with people in the Free Software community are based around work (with HR rules against being a jerk) and community events where no-one would doubt that I belong.

I mostly get questions about my “accent” from random middle-class white people who feel entitled to query other people about their status, met in situations where there is nothing restraining them from being a jerk. For example, random people I meet on public transport.

March 25, 2014

Nexus5 Armourdillo Hybrid Case

[Photos: back of case showing both layers; front of case; case stand; the mirror surface of the Nexus 5]

I’ve just been given an Armourdillo Hybrid case for the Nexus 5 [1] to review. The above pictures show the back of the case, the front of the case, the stand, and the front of the case with the screen blank. When I first photographed the case the camera focused on a reflection of the window, I include that picture for amusement and to demonstrate how reflective the phone screen is.

This case is very hard, the green plastic is the soft inner layer which is still harder than the plastic in a typical “gel case”. The black part is polycarbonate which is very hard and also a little slippery. The case is designed with lots of bumps for grip (a little like the sole of a running shoe) so it’s not likely to slip out of your hand. But the polycarbonate slides easily on plastic surfaces such as the dash of a car. It’s fortunate that modern cars have lots of “cup holders” that can be used for holding a phone.

I haven’t dropped the phone since getting the new case, but I expect that the combination of a hard outer shell and a slightly softer inner shell (to cushion the impact) will protect it well. All the edges of the case extend above the screen so dropping the phone face down on a hard flat surface shouldn’t cause any damage.

The black part has a stand for propping the phone on its side to watch a movie. The stand is very solid and is in the ideal position for use on soft surfaces such as a doona or pillow for watching TV in bed.


This case is mostly designed to protect the phone and the bumps that are used for grip detract from the appearance IMHO. I think that the Ringke Fusion case for my Nexus 4 [2] looks much better, it’s a trade-off between appearance and functionality.

My main criteria for this case were good protection (better than a gel case) and small size (not one of the heavy waterproof cases). It was a bonus to get a green case for the Enlightened team in Ingress. NB Armourdillo also offers a blue case for the Resistance team in Ingress as well as other colors.

MobileZap also have a number of other cases for the Nexus 5 [3].

March 24, 2014

Legal Questions About localbitcoins.com and Australia

As my previous post documented, I’ve experimented with localbitcoins.com. Following the arrest of two Miami men for trading on localbitcoins, I decided to seek legal advice on the situation in Australia.

Online research led me to Nick Karagiannis of Kelly and Co, who was already familiar with Bitcoin: I guess it’s a rare opportunity for excitement in financial regulatory circles! This set me back several thousand dollars (in fiat, unfortunately), but the result was reassuring.

They’ve released an excellent summary of the situation, derived from their research.  I hope that helps other bitcoin users in Australia, and I’ll post more in future should the legal situation change.

March 23, 2014

Aldi Deserves an Award for Misleading Email

Aldi Mobile has made a significant change to their offerings. They previously had an offer of $35 for “unlimited” calls and 2.5G of data in a month, for which they had to publicly apologise for misleading customers, as 2500 minutes of calls a month (83 minutes per day) is nowhere near unlimited [1]. They also had an offer of $15 for 2G of data.

In an email about this Aldi said “Many of our customers are using a lot less than what is included in our current $35 plan. So we will soon be introducing new Value Packs with more flexible options; meaning you only pay for what you really need and they start from just $10.”. That is grossly misleading. If they offered new plans in addition to the old ones and allowed customers to choose the plan that is the best match then it would be useful to some customers. But removing the supposedly “unlimited” plan and the $15 for 2G of data option removes affordable options for people who want to use their phones for lots of calls, or for very few calls but moderate amounts of data.

New Plans

The base rate for calls on Aldi pre-paid is $0.12 per minute and $0.12 per SMS; consider every mention of “minute” in this section to be “minute or SMS”. The Aldi new plans page [2] starts with a $10 per month plan which offers 100 minutes of calls, which would cost $12 at the previous rate of $0.12 per minute. That is OK value when compared to just using pre-paid calls if you consistently use more than 83 minutes of calls per month. However if you don’t use 84 minutes of calls (e.g. you don’t speak much on the phone and use Google Hangouts instead of SMS) then it’s not good value. Also the advertised data rate is $5 per 100MB, and 100MB is way below what a typical user with an Android phone needs. My mother-in-law was barely able to stay within a limit of 300MB/month when that was her limit, and while using the Aldi 2G/month bolt-on she has increased her data usage.

The smallest of the new plans costs $20 per month; it provides 300 minutes of calls and includes 300MB of data. For an extra $7 you can get another 300MB of data. For my mother-in-law it seems that the cheapest option on the new plans would be $27 per month, which would cover the 60 minutes of calls she might make and the 450MB of data she’s probably using. That’s significantly more expensive than her previous cost of $15 for 2G of data plus $7.20 for calls, and has the additional difficulty that I would have to be more involved in helping her avoid excessive data use.

The 2G data bolt-on was really good for some of my relatives; when they used it and configured their phones not to update software over 3G they never had to ask me about any problems related to excess data use. So my mother-in-law is facing an extra $5 per month (or maybe more depending on data use) and more time spent calling me for tech support.

The data bolt-on that Aldi is going to offer in future is $30 for 3G of data, replacing the previous offer of $15 for 2G of data. The cost will be unchanged for anyone who uses between 2G and 3G a month; for everyone who uses less than 2G or more than 3G the data bolt-on will cost more. There is simply no way for any Aldi data-only customer to save money. The only way someone who uses a moderate amount of data could save money is if they use more than 160 minutes of calls and less than 1G of data.
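To sanity-check the arithmetic above, here is a quick sketch in Python. The function names and the modelling are mine; the prices ($0.12 per minute base rate, the old $15 for 2G bolt-on, and the new $20 pack plus $7 per extra 300MB) are the ones quoted in this post, not an official Aldi rate card.

```python
# Rough cost model for the mother-in-law scenario described above:
# about 60 minutes of calls and about 450MB of data per month.

CALL_RATE = 0.12  # dollars per minute on the old pre-paid base rate

def old_plan_cost(minutes, mb):
    """Old option: $15 bolt-on covering 2G of data, calls at the base rate."""
    assert mb <= 2048, "the $15 bolt-on only covered 2G"
    return 15.00 + minutes * CALL_RATE

def new_plan_cost(minutes, mb):
    """New option: $20 Value Pack (300 min + 300MB), $7 per extra 300MB."""
    assert minutes <= 300, "only modelling the smallest pack"
    extra_blocks = max(0, -(-(mb - 300) // 300))  # ceiling of extra data blocks
    return 20.00 + 7.00 * extra_blocks

print(round(old_plan_cost(60, 450), 2))  # 22.2
print(round(new_plan_cost(60, 450), 2))  # 27.0
```

For the usage pattern described, the new plans cost $27 against $22.20 before, matching the roughly $5/month increase claimed above.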


My analysis above is based on interpreting the Aldi web site. As with most telcos they aren’t trying to make things easy in this regard; it seems that the consensus among telcos is to use complex pricing to make plans difficult to compare and thus reduce competitive pressure. I blame any inaccuracies in my analysis on the Aldi web site.

Why Aldi Shouldn’t Mislead Customers

Aldi isn’t primarily a mobile phone company, their main business is running a supermarket. The trust of customers is important to them, raising prices when competition goes away is one thing, but misleading customers about it is another. If Aldi were to honestly say “now that Kogan Mobile no longer exists there is nothing forcing us to have low prices” then I’d have a lot more respect for their company and be more inclined to shop at their supermarket.

It’s a sad indictment of our society that I need to include a “why lying is wrong” section in such a blog post.

March 22, 2014

Patterns and things underwater

Sea Urchins

It’s been a while since I have had an update on the blog, but rest assured I have been taking many photos and there will be lots of catch-ups once things slow down again.  Today was one of those days where I could not decide what to do, so I grabbed my Olympus OM-D E-M5, 60mm macro lens, Nauticam underwater housing, and two Sea & Sea YS-D1 strobes and decided to go for a dive in the Tweed River.  I knew the high tide was around lunch time and that the visibility would be fairly horrible due to the recent weather, but I really needed to get wet.  The water was quite warm (not 27 de


Core Rendering with Glamor

I’ve hacked up the intel driver to bypass all of the UXA paths when Glamor is enabled so I’m currently running an X server that uses only Glamor for all rendering. There are still too many fall backs, and performance for some operations is not what I’d like, but it’s entirely usable. It supports DRI3, so I even have GL applications running.

Core Rendering Status

I’ve continued to focus on getting the core X protocol rendering operations complete and correct; those remain a critical part of many X applications and are a poor match for GL. At this point, I’ve got accelerated versions of the basic spans functions, filled rectangles, text and copies.

GL and Scrolling

OpenGL has been on a many-year vendetta against one of the most common 2D accelerated operations — copying data within the same object, even when that operation overlaps itself. This used to be the most performance-critical operation in X; it was used for scrolling your terminal windows and when moving windows around on the screen.

Reviewing the OpenGL 3.x spec, Eric and I both read the glCopyPixels specification as clearly requiring correct semantics for overlapping copy operations — it says that the operation must be equivalent to reading the pixels and then writing the pixels. My CopyArea acceleration thus uses this path for the self-copy case. However, the ARB decided that having a well defined blt operation was too nice to the users, so the current 4.4 specification adds explicit language to assert that this is not well defined anymore (even in the face of the existing language which is pretty darn unambiguous).

I suspect we’ll end up creating an extension that offers what we need here; applications are unlikely to stop scrolling stuff around, and GPUs (at least Intel) will continue to do what we want. This is the kind of thing that makes GL maddening for 2D graphics — the GPU does what we want, and the GL doesn’t let us get at it.

For implementations not capable of supporting the required semantic, someone will presumably need to write code that creates a temporary copy of the data.
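The read-then-write semantic at issue can be illustrated outside of GL. This is a plain Python sketch of my own (lists standing in for pixel rows, not actual GL calls): a naive element-by-element copy corrupts the data when the destination overlaps the source, while copying through a temporary buffer, as suggested above, gives the result X expects.

```python
# Overlapping copy: why "read all the pixels, then write" matters.

def naive_copy(buf, src, dst, n):
    for i in range(n):                # overwrites source data it has
        buf[dst + i] = buf[src + i]   # not yet read when dst > src

def copy_via_temp(buf, src, dst, n):
    tmp = buf[src:src + n]            # read everything first...
    buf[dst:dst + n] = tmp            # ...then write

a = list(range(8)); naive_copy(a, 0, 2, 6)
b = list(range(8)); copy_via_temp(b, 0, 2, 6)
print(a)  # [0, 1, 0, 1, 0, 1, 0, 1] -- corrupted by the self-overlap
print(b)  # [0, 1, 0, 1, 2, 3, 4, 5] -- the scrolled result X wants
```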

PBOs for fall backs

For operations which Glamor can’t manage, we need to fall back to using a software solution. Direct-to-hardware acceleration architectures do this by simply mapping the underlying GPU object to the CPU. GL doesn’t provide this access, and it’s probably a good thing, as such access needs to be carefully synchronized with GPU access, and attempting to access tiled GPU objects with the CPU requires either piles of CPU code to ‘de-tile’ accesses (à la wfb), or special hardware detilers (like the Intel GTT).

However, GL does provide a fairly nice abstraction called pixel buffer objects (PBOs) which work to speed access to GPU data from the CPU.

The fallback code allocates a PBO for each relevant X drawable, asks GL to copy pixels in, and then calls fb, with the drawable now referencing the temporary buffer. On the way back out, any potentially modified pixels are copied back through GL and the PBOs are freed.

This turns out to be dramatically faster than malloc’ing temporary buffers as it allows the GL to allocate memory that it likes, and for it to manage the data upload and buffer destruction asynchronously.

Because X pixmaps can contain many X windows (the root pixmap being the most obvious example), they are often significantly larger than the actual rendering target area. As an optimization, the code only copies data from the relevant area of the pixmap, saving considerable time as a result. There’s even an interface which further restricts that to a subset of the target drawable which the Composite function uses.

Using Scissoring for Clipping

The GL scissor operation provides a single clipping rectangle. X provides a list of rectangles to clip to. There are two obvious ways to perform clipping here — either perform all clipping in software, or hand each X clipping rectangle in turn to GL and re-execute the entire rendering operation for each rectangle.

You’d think that the former plan would be the obvious choice; clearly re-executing the entire rendering operation potentially many times is going to take a lot of time in the GPU.

However, the reality is that most X drawing occurs under a single clipping rectangle. Accelerating this common case by using the hardware clipper provides enough benefit that we definitely want to use it when it works. We could duplicate all of the rendering paths and perform CPU-based clipping when the number of rectangles was above some threshold, but the additional code complexity isn’t obviously worth the effort, given how infrequently it will be used. So I haven’t bothered. Most operations look like this:

Allocate VBO space for data
Fill VBO with X primitives
loop over clip rects {
    Set the scissor to the clip rect
    Draw the primitives
}

This obviously out-sources as much of the problem as possible to the GL library, reducing the CPU time spent in glamor to a minimum.

A Peek at Some Code

With all of these changes in place, drawing something like a list of rectangles becomes a fairly simple piece of code:

First, make sure the program we want to use is available and can be used with our GC configuration:

prog = glamor_use_program_fill(pixmap, gc,
                               /* … remaining arguments elided … */);

if (!prog)
    goto bail_ctx;

Next, allocate the VBO space and copy all of the X data into it. Note that the data transfer is simply ‘memcpy’ here — that’s because we break the X objects apart in the vertex shader using instancing, avoiding the CPU overhead of computing four corner coordinates.

/* Set up the vertex buffers for the points */

v = glamor_get_vbo_space(drawable->pScreen, nrect * (4 * sizeof (GLshort)), &vbo_offset);

glEnableVertexAttribArray(GLAMOR_VERTEX_POS);
glVertexAttribDivisor(GLAMOR_VERTEX_POS, 1);
glVertexAttribPointer(GLAMOR_VERTEX_POS, 4, GL_SHORT, GL_FALSE,
                      4 * sizeof (GLshort), vbo_offset);

memcpy(v, prect, nrect * sizeof (xRectangle));


Finally, loop over the pixmap tile fragments, and then over the clip list, selecting the drawing target and painting the rectangles:


glamor_pixmap_loop(pixmap_priv, box_x, box_y) {
    int nbox = RegionNumRects(gc->pCompositeClip);
    BoxPtr box = RegionRects(gc->pCompositeClip);

    glamor_set_destination_drawable(drawable, box_x, box_y, TRUE, FALSE, prog->matrix_uniform, &off_x, &off_y);

    while (nbox--) {
        glScissor(box->x1 + off_x,
                  box->y1 + off_y,
                  box->x2 - box->x1,
                  box->y2 - box->y1);
        glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, nrect);
        box++;
    }
}

GL texture size limits

X pixmaps use 16 bit dimensions for width and height, allowing them to be up to 65536 x 65536 pixels. Because the X coordinate space is signed, only a quarter of this space is actually useful, which makes the useful size of X pixmaps only 32767 x 32767. This is still larger than most GL implementations offer as a maximum texture size though, and while it would be nice to just say ‘we don’t allow pixmaps larger than GL textures’, the reality is that many applications expect to be able to allocate such pixmaps today, largely to hold the ever increasing size of digital photographs.

Glamor has always supported large X pixmaps; it does this by splitting them up into tiles, each of which is no larger than the largest texture supported by the driver. What I’ve added to Glamor is some simple macros that walk over the array of tiles, making it easy for the rendering code to support large pixmaps without needing any special case code.
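The tile walk itself is simple to picture. This is a sketch of my own in Python, not the actual C macros: split a large pixmap into tiles no bigger than the maximum texture size, covering the whole area with partial tiles at the right and bottom edges.

```python
# Enumerate the tiles covering a width x height pixmap, with no tile
# larger than max_tex on a side (standing in for GL_MAX_TEXTURE_SIZE).

def tile_boxes(width, height, max_tex):
    """Yield (x, y, w, h) tile boxes covering the pixmap."""
    for y in range(0, height, max_tex):
        for x in range(0, width, max_tex):
            yield (x, y, min(max_tex, width - x), min(max_tex, height - y))

# A 100x70 pixmap with a maximum texture size of 64 needs four tiles:
print(list(tile_boxes(100, 70, 64)))
# [(0, 0, 64, 64), (64, 0, 36, 64), (0, 64, 64, 6), (64, 64, 36, 6)]
```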

Glamor also had some simple testing support — you can compile the code to ignore the system-provided maximum texture size and supply your own value. This code had gone stale, and couldn’t work as there were parts of the code for which tiling support just doesn’t make sense, like the glyph cache, or the X scanout buffer. I fixed things so that you could leave those special cases as solitary large tiles while breaking up all other pixmaps into tiles no larger than 32 pixels square.

I hope to remove the single-tile case and leave the code supporting only the multiple-tile case; we have to have the latter code anyway, and having the single-tile code around simply increases our code size for no obvious benefit.

Getting accelerated copies between tiled pixmaps added a new coordinate system to the mix and took a few hours of fussing until it was working.

Rebasing Many (many) Times

I’m sure most of us remember the days before git; changes were often monolithic, and the notion of changing how the changes were made for the sake of clarity never occurred to anyone. It used to be that the final code was the only interesting artifact; how you got there didn’t matter to anyone. Things are different today; I probably spend a third of my development time communicating how the code should change with other developers by changing the sequence of patches that are to be applied.

In the case of Glamor, I’ve now got a set of 28 patches. The first few are fixes outside of the glamor tree that make the rest of the server work better. Then there are a few general glamor infrastructure additions. After that, each core operation is replaced, one at a time. Finally, a bit of stale code is removed. By sequencing things in a logical fashion, I hope to make review of the code easier, which should mean that people will spend less time trying to figure out what I did and be able to spend more time figuring out if what I did is reasonable and correct.

Supporting Older Versions of GL

All of the new code uses vertex instancing to move coordinate computation from the CPU to the GPU. I’m also pulling textures apart using integer operations. Right now, we should correctly fall back to software for older hardware, but it would probably be nicer to just fall back to simpler GL instead. Unless everyone decides to go buy hardware with new enough GL driver support, someone is going to need to write simplified code paths for glamor.

If you’ve got such hardware, and are interested in making it work well, please take this as an opportunity to help yourself and others.

Near-term Glamor Goals

I’m pretty sure we’ll have the code in good enough shape to merge before the window closes for X server 1.16. Eric is in charge of the glamor tree, so it’s up to him when stuff is pulled in. He and Markus Wick have also been generating code and reviewing stuff, but we could always use additional testing and review to make the code as good as possible before the merge window closes.

Markus has expressed an interest in working on Glamor as a part of the summer of code this year; there’s clearly plenty of work to do here. Eric and I haven’t touched the render acceleration stuff at all, and that code could definitely use some updating to use more modern GL features.

If that works as well as the core rendering code changes, then we can look forward to a Glamor which offers GPU-limited performance for classic X applications, without requiring GPU-specific drivers for every generation of every chip.

March 19, 2014

AppRTC : Google’s WebRTC test app and its parameters

If you’ve been interested in WebRTC and haven’t lived under a rock, you will know about Google’s open source testing application for WebRTC: AppRTC.

When you go to the site, a new video conferencing room is automatically created for you and you can share the provided URL with somebody else and thus connect (make sure you’re using Google Chrome, Opera or Mozilla Firefox).

We’ve been using this application forever to check whether any issues with our own WebRTC applications are due to network connectivity issues, firewall issues, or browser bugs, in which case AppRTC breaks down, too. Otherwise we’re pretty sure to have to dig deeper into our own code.

Now, AppRTC creates a pretty poor quality video conference, because the browsers use a 640×480 resolution by default. However, there are many query parameters that can be added to the AppRTC URL through which the connection can be manipulated.

Here are my favourite parameters:

  • hd=true : turns on high definition, ie. minWidth=1280,minHeight=720
  • stereo=true : turns on stereo audio
  • debug=loopback : connect to yourself (great to check your own firewalls)
  • tt=60 : by default, the channel is closed after 30min – this gives you 60 (max 1440)

For example, a stereo, HD loopback test would combine debug=loopback, hd=true and stereo=true in the URL.
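Assembling such a URL is just query-string construction. In this sketch the apprtc.appspot.com host is my assumption (it is where Google’s demo ran at the time); the helper function is mine, and the parameter names are the ones listed above.

```python
# Build an AppRTC test URL from the query parameters described above.
from urllib.parse import urlencode

def apprtc_url(base="https://apprtc.appspot.com/", **params):
    return base + "?" + urlencode(params)

# A stereo, HD loopback call with a 60-minute timeout:
print(apprtc_url(debug="loopback", hd="true", stereo="true", tt=60))
# https://apprtc.appspot.com/?debug=loopback&hd=true&stereo=true&tt=60
```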

That is not the limit of the available parameters, though. Here are some others that you may find interesting for some more in-depth geekery:

  • ss=[stunserver] : in case you want to test a different STUN server to the default Google ones
  • ts=[turnserver] : in case you want to test a different TURN server to the default Google ones
  • tp=[password] : password for the TURN server
  • audio=true&video=false : audio-only call
  • audio=false : video-only call
  • audio=googEchoCancellation=false,googAutoGainControl=true : disable echo cancellation and enable gain control
  • audio=googNoiseReduction=true : enable noise reduction (more Google-specific parameters)
  • asc=ISAC/16000 : preferred audio send codec is ISAC at 16kHz (use on Android)
  • arc=opus/48000 : preferred audio receive codec is opus at 48kHz
  • dtls=false : disable datagram transport layer security
  • dscp=true : enable DSCP
  • ipv6=true : enable IPv6

AppRTC’s source code is available here. And here is the file with the parameters (in case you want to check if they have changed).

Have fun playing with the main and always up-to-date WebRTC application: AppRTC.


Some time late last Sunday night, I stumbled upon a discussion (more of a fight, really) on Twitter. It turns out there are actually more threads to it than I’ve reproduced here, so if you’re really keen, do feel free to click through on individual tweets to see where it went. Here I’ve only reproduced the thread I read at the time. It starts with this:


I’m pretty sure this next tweet is the one I first noticed in my timeline:



At this point my stupidity got the better of me, and I decided to engage:



On reflection, that’s not too terrible an ending. But this exchange got me thinking about the words we use. Various ancient cultures had a notion that words had power; that to name a thing would cause it to come into existence. So I tried this myself today. I said, in the most solemn voice I could manage, “there is a bacon cheeseburger on the corner of my desk”. Alas, it didn’t work. I was stuck with my 2014 lanyard, my headset, sunglasses, a stapler, some Ceph and HP stickers, a stack of SUSE post-it notes and a pile of folded up Tasmanian state election propaganda I’ve been meaning to set fire to.

Perhaps I’m not as adept as the ancients at manipulating reality. Perhaps “bacon cheeseburger” isn’t actually a word of power. Or perhaps that notion was simply never meant to be taken literally. Maybe it was more that the words we use frame the way we think. More disturbingly, that the words we use put walls up around the way we are able to think.

Cassy O’Connor said “rape”, which I (with the benefit of never having actually been raped; apparently it helps to be a reasonably sized, allegedly scary looking, bearded white male) took to be a rather evocative analogy for the violence that can be wrought upon forests. But, she was shot down for this usage, because it was seen to be “disrespectful toward those who have been raped”.

Rob from Taroona seems to be referring to forests as “resources”, and while it’s apparent he understands that there’s a balance to be struck between the existence of forests and our use of them, for me the term “resource” is problematic. Dig up “resource” in a dictionary if you still have one (or just take the lazy approach), and it tends to be defined along the lines of “something that one uses to achieve an objective”.

I can’t bring myself to see forests that way. Rather I see timber as a resource, and trees as life forms.

And I wonder to what extent the words I choose to describe things trap my thinking inside little mental boxes.

Bitcoin Trading In Australia

I bought 10 BTC to play with back in 2011, and have been slowly spending them to support bitcoin adoption.  One thing which I couldn’t get reliable information on was how to buy and sell bitcoin within Australia, so over the last few months I decided to sell a few via different methods and report the results here (this also helps my budget, since I’m headed off on paternity leave imminently!).

All options listed here use two-factor authentication, otherwise I wouldn’t trust them with more than cents.  And obviously you shouldn’t leave your bitcoins in an exchange for any longer than necessary, since most exchanges over time have gone bankrupt.

Option 1: MtGox AUD

Yes, I transferred some BTC into MtGox and sold them.  This gave the best price, but after over two months of waiting the bank transfer of my money still hadn’t been completed.  So I gave up, bought back into bitcoins (fewer, since the price had jumped) and thus discovered that MtGox was issuing invalid BTC transactions, so I couldn’t even get those out.  Then they halted transactions altogether, blaming TX malleability.  Then they went bankrupt.  Then they leaked my personal data just for good measure.  The only way their failure could be more complete is if my MtGox Yubikey caught fire and burned my home to the ground.

Volume: Great (5M AUD/month)

Price Premium: $25 – $50 / BTC

Charge: 0.65%

Hassle: Infinite

Summary: 0/10

Option 2: localbitcoins

localbitcoins is reportedly the largest volume method for AUD exchange.  It’s not an exchange so much as a matching and escrow service, though there are a number of professional traders active on the site.  The bulk of AUD trades are online, though I sold face to face (and I’ll be blogging about the range of people I met doing that).  localbitcoins is a great place for online BTC buyers, since they have been around for quite a while, have an excellent reputation with no previous security issues, and hold bitcoins in escrow as soon as you hit “buy”.  It’s a bit more work than an exchange, since you have to choose the counter-party yourself.

For online sellers, transfers from stolen bank accounts are a real issue.  Electronic Funds Transfer (aka “Pay Anyone”) is reversible, so when the real bank account owner realizes their money is missing, the bank tends to freeze the receiving (i.e. the BTC seller’s) bank account to make sure they can’t remove the disputed funds.  This process can take weeks or months, and banks’ anti-fraud departments generally treat bitcoin sellers who get defrauded with hostility (ANZ is reported to be the exception here).  A less common scam is fraudsters impersonating the Australian Tax Office and telling the victim to EFT to the localbitcoins seller.

Mitigations for sellers include any combination of:

  1. Only accepting old-fashioned cash deposits via a branch (though I’m aware of one US report where a fraudster convinced the teller to reverse the deposit, I haven’t heard of that in Australia)
  2. Insisting on “” in the transfer message (to avoid the ATO fraud problem)
  3. Only dealing with buyers with significant reputation (100+ trades with over 150 BTC is the Gold Standard)
  4. Insisting on real ID checking (eg. Skype chat of buyer with drivers’ license)
  5. Only dealing with buyers whose accounts are older than two weeks (most fraudsters are in and out before then, though their reputation can be very good until they get caught)
  6. Only allowing internal transfers between the same bank (eg. Commonwealth), relying on the bank’s use of two factor authentication to reduce fraud.

Many buyers on localbitcoins are newcomers, so anticipate honest mistakes for the most part.  The golden rule always applies: if someone is offering an unrealistic price, it’s because they’re trying to cheat you.

Volume: Good (1M AUD/month)

Price Premium: $5 – $20 / BTC

Charge: 1% (selling), 0% (buying)

Hassle: Medium

Summary: 7/10

Option 3:

You’ll need to get your bank account checked to use this fairly low-volume exchange, but it’s reasonably painless.  Their issues are a lack of exposure (I found out about them through …) and a lack of volume (about a quarter of localbitcoins’ volume), but they also trade litecoin if you’re into that.  You can leave standing orders, or just manually place one that will be matched instantly.

They seem like a small operation, based in Sydney, but my interactions with them have been friendly and fast.

Volume: Low (300k AUD/month)

Price Premium: $0 / BTC

Charge: 1%

Hassle: Low

Summary: 7/10

Option 4:

I heard about this site from a well-circulated blog post about Commonwealth Bank closing their bank account last year.  I didn’t originally consider them since they don’t promote themselves as an exchange, but you can use their form to sell them bitcoins at a spot rate.  It’s limited to $4000 per day according to their FAQ.

They have an online ID check, using the usual sources, which didn’t quite work for me due to out-of-date electoral information, but they cleared that manually within a day.  They deposit 1c into your bank account to verify it, but that hasn’t worked for me, so I’ve no way to withdraw my money, and they haven’t responded to the query I sent 5 days ago, leaving me feeling nervous.  A search of reddit points to common delays, and the founder’s links to the hacked-and-failed Bitcoinica give me a distinct “magical gathering” feel.

Volume: Unknown (self-reports indicate ~250k/month?)

Price Premium: $0 / BTC

Charge: 1.1% (selling) 2% (buying)

Hassle: Medium

Summary: 4/10

If you trade, I’d love to hear corrections, comments etc. or email me on

March 16, 2014

BTRFS Status March 2014

I’m currently using BTRFS on most systems that I can access easily. It’s not nearly reliable enough that I want to install it on a server in another country or an embedded device that’s only accessible via 3G, but for systems where I can access the console it’s not doing too badly.

Balancing and Space Allocation

# btrfs filesystem df /

Data, single: total=103.97GiB, used=85.91GiB

System, DUP: total=32.00MiB, used=20.00KiB

Metadata, DUP: total=1.78GiB, used=1.31GiB

# df -h /

Filesystem Size Used Avail Use% Mounted on
/dev/disk/by-uuid/ac696117-473c-4945-a71e-917e09c6503c 108G 89G 19G 84% /

Currently there are still situations where it can run out of space and deadlock on freeing space. The above shows the output of the btrfs df command and the regular df command: I have about 106G of disk space used by data and metadata in BTRFS while df shows that the entire filesystem (i.e. the block device) is 108G. So if I use another 2G of data or metadata then the system is at risk of deadlocking. To avoid that happening I have to run “btrfs balance start /” to start a balance, which defragments the space use and frees some blocks. Currently there is a bug in BTRFS (present in all Debian/Unstable kernels) which prevents a balance operation from completing when systemd is used in a default configuration (there’s something about the way systemd accesses its journal files that triggers a BTRFS bug). This is really inconvenient, particularly given that there’s probably a strong correlation between people who use experimental filesystems and people who use experimental init programs.

When you get to the stage of the filesystem being deadlocked you can sometimes recover by removing snapshots and sometimes by adding a new device to the filesystem (even a USB flash drive will do). But I once had a filesystem get into a state where there wasn’t enough space to balance, add a device, or remove a snapshot – so I had to do a backup/format/restore.

Quota Groups

Last time I asked the developers (a few weeks ago) they told me that quota groups aren’t ready to use. They also said that they know about enough bugs that there’s no benefit in testing that feature. Even people who want to report bugs in BTRFS shouldn’t use quotas.

Kernel Panics with Kmail

I’ve had three systems develop filesystem corruption on files related to Kmail (the email program from KDE). I suspect that Kmail is triggering a bug in BTRFS. On all three systems the filesystem developed corruption that persisted across a reboot. One of the three systems was fixed by deleting the file for the Outbox, the others are waiting for kernel 3.14 which is supposed to fix the bug that causes kernel panics when accessing the corrupted files in question.

I don’t know whether kernel 3.14 will fix the bug that caused the corruption in the first place.


As I don’t use quotas BTRFS is working well for me on systems that have plenty of storage space and don’t run Kmail. There are some systems running systemd where I plan to upgrade the kernel before all the filesystem space is allocated. One of my systems is currently running SysVinit so I can balance the filesystem.

Apart from these issues BTRFS is working reasonably well for me. I haven’t yet had its filesystem checksums correct corrupted data from disk in any situation other than tests (I have had ZFS correct such an error, so the hardware I use does benefit from this). I have restored data from BTRFS snapshots on many occasions, so that feature has been a major benefit for me. When I had a system with faulty RAM the internal checks in BTRFS alerted me to the problem and I didn’t lose any data; the filesystem became read-only and I was able to copy everything off even though it was too corrupted for writes.

March 14, 2014


Even if things are not progressing at the speed I’d like (lots of travel on the HP side since early 2014 and four concerts to perform on the private side), I’ve tried to improve the 3.2 version I published unofficially as a beta. Interestingly, even when I do not announce that packages are delivered, there are people who use them!!! Which gave me some feedback (you can guess it wasn’t that positive), so in fact this is already my second delivery :-) and it contains some interesting new features:

  • mindi now uses the new mr-kernel-get-modules perl script, which allows mindi to mention just the end module names and not their dependencies anymore, which are now computed by the script!! This will help a lot in maintaining the list of modules, which was always impacted by low-level dependency changes at the kernel level.
  • Support of symlinks for the newest distributions based on systemd such as Fedora, Mageia, … is now finally working!! Again this was done by extracting the existing broken code into a separate perl script which now operates correctly. This is part of the global willingness to recode most of mindi and some of mondo in perl. It took quite a long time because, of course, we need to stay compatible (a word the systemd team doesn’t care about, of course) with other tools and older distributions. As a side note, this is probably one of the reasons MondoRescue is still appreciated by its community :-)
  • The introduction of a dependency on a perl function was incorrect and people trying to install from packages gave feedback that they had errors dof course ue to that. This is now fixed, as indeed had an issue because a low level function was depending on a higher level function not part of the perl modules provided for MondoRescue. With 0.12.5 of this is completely solved.
  • Now I still have regressions with the isolinux menus, NFS on Mageia 4 and systemd not working anymore (change of network NIC name is the root cause). However the ldlinux.c32 issue for syslinux > 5.x is now solved.

Next week is the TES, so I won’t have much time to work on it. Expect news the week after.

Filed under: FLOSS Tagged: Fedora, Linux, Mageia, Mondorescue, Open Source,

March 09, 2014


The first paper from Tim Highfield and my Mapping Movements project is now out in First Monday:

Social media have become crucial tools for political activists and protest movements, providing another channel for promoting messages and garnering support. Twitter, in particular, has been identified as a noteworthy medium for protests in countries including Iran and Egypt to receive global attention. The Occupy movement, originating with protests in, and the physical occupation of, Wall Street, and inspiring similar demonstrations in other U.S. cities and around the world, has been intrinsically linked with social media through location–specific hashtags: #ows for Occupy Wall Street, #occupysf for San Francisco, and so on. While the individual protests have a specific geographical focus–highlighted by the physical occupation of parks, buildings, and other urban areas — Twitter provides a means for these different movements to be linked and promoted through tweets containing multiple hashtags. It also serves as a channel for tactical communications during actions and as a space in which movement debates take place.

This paper examines Twitter’s use within the Occupy Oakland movement. We use a mixture of ethnographic research through interviews with activists and participant observation of the movements’ activities, and a dataset of public tweets containing the #oo hashtag from early 2012. This research methodology allows us to develop a more accurate and nuanced understanding of how movement activists use Twitter by cross–checking trends in the online data with observations and activists’ own reported use of Twitter. We also study the connections between a geographically focused movement such as Occupy Oakland and related, but physically distant, protests taking place concurrently in other cities. This study forms part of a wider research project, Mapping Movements, exploring the politics of place, investigating how social movements are composed and sustained, and the uses of online communication within these movements.

[Read the full paper.]

Satanic Shed of Geeveston

I've walked this trail "around the block" through the dense pine and eucalypt plantations many times before but it was only yesterday that a shape embedded in a pine plantation caught my eye:

Shed, long shot

Whoever had planted this particular plantation had just left the old shed where it was and planted around it. How had I missed this before?

Shed, short shot

So I headed in for a closer look and the tin on the roof was much younger than the shed itself.

Shed, through shot

After crunching my way across the moss and pine-needle-carpeted forest floor, this spooky scene greeted me as I stepped gingerly inside the shed:

Shed, inside shot

Admittedly the lack of evidence of actual satanic ritual was an anti-climax, but I enjoyed the find nonetheless.

Shed, rear shot

March 08, 2014

Less Work

I previously wrote about the idea of a Basic Income for Australia [1]; that post was mainly to show how it could be introduced with little real change. That is not because I don’t think that we need real changes, but because we should consider the various changes separately as much as possible.

In terms of how society works I think that we need to move from the current model where most people are expected to work most of the time and the people who don’t work are treated badly to encourage them to work for low wages. I think that we should aim as a society to have less time spent doing paid work which means more people working part time (maybe 6 months a year, 3 or 4 days a week, or other ways of doing less than 40 hours a week * ~45 weeks) and more people who aren’t doing paid work.

The idea of 100% employment is the cause of many jokes about the Soviet Union. I don’t know how much truth is behind the jokes about needless work being done to fulfill Soviet plans, but regardless of the accuracy I think we should take such things as an idea of what not to do in our society. The Broken Window Fallacy is an example of the Soviet production problem in supposedly capitalist societies.

Here are some of my ideas for decreasing the amount of needless work without decreasing anyone’s quality of life – in fact most of these make things better for most people.

The War on Drugs

The easiest way to reduce needless work is to end the “war on drugs”. During the course of the “war”, drug use has steadily increased and the amount of law enforcement energy devoted to it has also increased. Some estimates claim that 50% of law enforcement is devoted to it. Also some of the hospital budget is related to drug use.

I think that we should allow adults to consume any “drug” that they wish (not just tobacco, alcohol, caffeine, and other popular drugs) and apply the same laws regarding product quality to all of them. That will reduce the violence related to drugs, reduce the health impact (it’s usually poor quality control that causes medical problems), and allow law enforcement to concentrate on crimes that hurt other people.

Treating drug addiction as a medical problem has been shown to be the most effective way of reducing drug use, reducing violent crime, and reducing the health impact of drug use.

Other Crime Issues

Gary Slutkin’s TED talk about treating violence as a contagious disease has some interesting ideas for reducing violence in the community [2]. Implementing those ideas on a wide scale seems to have the potential for reducing law enforcement and medical work.

Generally I think we should aim to have as few laws as possible. Whenever adults want to do things that don’t hurt other people they should be permitted to do so. There should also be an aim for laws to be consistent and easy to understand. Ideally there would be a single database with all laws in some form of hypertext (maybe some type of Wiki) so that any citizen can discover all implications of the laws that affect them.


More money should be spent on preventing people from getting sick. One problem we have in recent times is silly people refusing to get their children vaccinated because they take medical advice from a Playboy model instead of doctors. I think that as a society we need to do more to protect children from the stupid things that their parents do, and that reducing the amount of medical work is the least of the reasons for doing so.

Another problem is the quality of food. The big supermarket chains are pushing prices down, which reduces food quality. The excessive use of antibiotics is a serious threat to world health that is driven by the low price of meat.


Rodney Brooks gave an interesting TED talk about robotics [3]. He showed how robots can be programmed/trained and talked about the factory workers who want their children to do anything other than factory work. Work that can be done by robots should be done by robots so that people can do better things.


One problem that we have in Australia is rent-seeking companies being permitted to establish toll roads. To make such toll roads profitable (which is guaranteed by corrupt politicians) they close roads that might be used to bypass the toll roads. This creates needless traffic congestion (wasted work time driving). Also, running toll roads involves employing people to collect the tolls and take legal action against people who don’t pay. I believe that toll roads shouldn’t exist; this would reduce the number of needless workers and make everything more efficient. Also, as a matter of principle, I believe that the government should own and control every monopoly in the country.

Currently in Australia most adults own cars, which involves a lot of work in car maintenance. Even when not being used a car needs to be maintained for safety. When people drive to work instead of using public transport their travel time counts as work. While time spent on a train, tram, or bus isn’t leisure time, it’s a lesser degree of work than driving a car. You can read a book, play phone games, or do other recreational activities when on public transport.

Car sharing companies are taking off in urban areas; this allows some people to avoid owning a car and some families to own fewer cars. We also need more government investment in public transport, including more routes, greater passenger capacity, and more service late at night.

We also need to encourage companies not to have employees working from 9 to 5 to reduce the peak demand for public transport. A short term tax incentive would do some good in this regard, if companies were to encourage their employees to work different hours for a year then it might change the norms for work enough to permanently break the 9-5 concept.

I believe that all land-based mass public transport (buses, trams, and trains) should be free. That would remove the need to pay people to collect fares and fine people who don’t have tickets, and also remove the work time involved in buying tickets. Not requiring tickets would also decrease the time needed to get on and off public transport which would improve the speed of public transport and reduce disruption to traffic. One simulation of traffic in New York City showed that collecting fares on buses slowed down traffic enough to impose costs on all workers that were greater than the fares collected – so it would be cheaper overall for people in NYC to have free buses paid by the government. I don’t think that Melbourne has congestion similar to NYC and even Sydney might not have the same economic issues. But I still think that we should have free transport for the convenience of everyone.

Google has been doing some interesting research into cars that are driven by computer; their prototypes have been shown to work well in practice but have not been approved for real use. Trains and trams are easier to drive because there is no steering, so it seems that they would be good options for the first implementation of computer-driven public transport. Robot trams would allow more regular service late at night and thus make the network more useful.

Stop Buying Rubbish

A significant waste of resources is the commercial events of Christmas, Valentine’s day, and Easter. A lot of marketing money is spent to encourage people to buy rubbish for other people in celebration of such events. I think that Christmas presents for children are a good thing and that even the trivial things (crackers and party hats) are OK, but adults don’t need it. Valentine’s day is OK for people who are in relationships, but coercing single people to rush to find someone so that they aren’t single on that day is bad for everyone. Most Easter and Valentine’s chocolate is rubbish, cheap and nasty chocolate in fancy wrappings. Buy a small quantity of good chocolate instead of a large quantity of rubbish.

There’s a big market for knick-knacks for adults outside of those holidays too. Stands at trade shows usually give away junk to delegates; some of it has enough value to be taken home, but really it’s mostly rubbish. If you can’t sell your stuff without giving away freebies then giving away plastic toys or cheap chocolate isn’t going to make it sell.

Any Other Ideas?

Does anyone have other ideas about how to reduce the amount of work required to sustain our society? Suggestions for improvements that suit other countries are welcome too, while I’m mostly thinking about Australia while writing this I’m interested in ways of making things better all around the world.

March 07, 2014


Brief Glamor Hacks

Eric Anholt started writing Glamor a few years ago. The goal was to provide credible 2D acceleration based solely on the OpenGL API, in particular, to implement the X drawing primitives, both core and Render extension, without any GPU-specific code. When he started, the thinking was that fixed-function devices were still relevant, so that original code didn’t insist upon “modern” OpenGL features like GLSL shaders. That made the code less efficient and hard to write.

Glamor used to be a side project within the X world, seen as something that really wasn’t very useful, something that any credible 2D driver would replace with custom, highly-optimized GPU-specific code. Eric and I both hoped that Glamor would turn into something credible and that we’d be able to eliminate all of the horror-show GPU-specific code in every driver for drawing X text, rectangles and composited images. That hadn’t happened though, until now.

Fast forward to the last six months. Eric has spent a bunch of time cleaning up Glamor internals, and in fact he’s had it merged into the core X server for version 1.16 which will be coming up this July. Within the Glamor code base, he’s been cleaning some internal structures up and making life more tolerable for Glamor developers.

Using libepoxy

A big part of the cleanup was transitioning all of the extension function calls to use his other new project, libepoxy, which provides a sane, consistent and performant API to OpenGL extensions for Linux, Mac OS and Windows. That library is awesome, and you should use it for everything you do with OpenGL because not using it is like pounding nails into your head. Or eating non-tasty pie.

Using VBOs in Glamor

One thing he recently cleaned up was how to deal with VBOs during X operations. VBOs are absolutely essential to modern OpenGL applications; they’re really the only way to efficiently pass vertex data from application to the GPU. And, every other mechanism is deprecated by the ARB as not a part of the blessed ‘core context’.

Glamor provides a simple way of getting some VBO space, dumping data into it, and then using it through two wrapping calls which you use along with glVertexAttribPointer as follows:

pointer = glamor_get_vbo_space(screen, size, &offset);
glVertexAttribPointer(attribute_location, count, type,
              GL_FALSE, stride, offset);
memcpy(pointer, data, size);
glamor_put_vbo_space(screen);

glamor_get_vbo_space allocates the specified amount of VBO space and returns a pointer to that along with an ‘offset’, which is suitable to pass to glVertexAttribPointer. You dump your data into the returned pointer, call glamor_put_vbo_space and you’re all done.

Actually Drawing Stuff

At the same time, Eric has been optimizing some of the existing rendering code. But, all of it is still frankly terrible. Our dream of credible 2D graphics through OpenGL just wasn’t being realized at all.

On Monday, I decided that I should go play in Glamor for a few days, both to hack up some simple rendering paths and to familiarize myself with the insides of Glamor as I’m getting asked to review piles of patches for it, and not understanding a code base is a great way to help introduce serious bugs during review.

I started with the core text operations. Not because they’re particularly relevant these days, as most applications draw text with the Render extension to provide better looking results, but instead because they’re often one of the hardest things to do efficiently with a heavyweight GPU interface, and OpenGL can be amazingly heavyweight if you let it.

Eric spent a bunch of time optimizing the existing text code to try and make it faster, but at the bottom, it actually draws each lit pixel as a tiny GL_POINT object by sending a separate x/y vertex value to the GPU (using the above VBO interface). This code walks the array of bits in the font, checking each one to see if it is lit, then checking whether the lit pixel is within the clip region, and only then adding the coordinates of the lit pixel to the VBO. The amazing thing is that even with all of this CPU and GPU work, the venerable 6x13 font is drawn at an astonishing 3.2 million glyphs per second. Of course, pure software draws text at 9.3 million glyphs per second.

I suspected that a more efficient implementation might be able to draw text a bit faster, so I decided to just start from scratch with a new GL-based core X text drawing function. The plan was pretty simple:

  1. Dump all glyphs in the font into a texture. Store them in 1bpp format to minimize memory consumption.

  2. Place raw (integer) glyph coordinates into the VBO. Place four coordinates for each and draw a GL_QUAD for each glyph.

  3. Transform the glyph coordinates into the usual GL range (-1..1) in the vertex shader.

  4. Fetch a suitable byte from the glyph texture, extract a single bit and then either draw a solid color or discard the fragment.

This makes the X server code surprisingly simple; it computes integer coordinates for the glyph destination and glyph image source and writes those to the VBO. When all of the glyphs are in the VBO, it just calls glDrawArrays(GL_QUADS, 0, 4 * count). The results were “encouraging”:

1: fb-text.perf
2: glamor-text.perf
3: keith-text.perf

       1                 2                           3                 Operation
------------   -------------------------   -------------------------   -------------------------
   9300000.0      3160000.0 (     0.340)     18000000.0 (     1.935)   Char in 80-char line (6x13) 
   8700000.0      2850000.0 (     0.328)     16500000.0 (     1.897)   Char in 70-char line (8x13) 
   6560000.0      2380000.0 (     0.363)     11900000.0 (     1.814)   Char in 60-char line (9x15) 
   2150000.0       700000.0 (     0.326)      7710000.0 (     3.586)   Char16 in 40-char line (k14) 
    894000.0       283000.0 (     0.317)      4500000.0 (     5.034)   Char16 in 23-char line (k24) 
   9170000.0      4400000.0 (     0.480)     17300000.0 (     1.887)   Char in 80-char line (TR 10) 
   3080000.0      1090000.0 (     0.354)      7810000.0 (     2.536)   Char in 30-char line (TR 24) 
   6690000.0      2640000.0 (     0.395)      5180000.0 (     0.774)   Char in 20/40/20 line (6x13, TR 10) 
   1160000.0       351000.0 (     0.303)      2080000.0 (     1.793)   Char16 in 7/14/7 line (k14, k24) 
   8310000.0      2880000.0 (     0.347)     15600000.0 (     1.877)   Char in 80-char image line (6x13) 
   7510000.0      2550000.0 (     0.340)     12900000.0 (     1.718)   Char in 70-char image line (8x13) 
   5650000.0      2090000.0 (     0.370)     11400000.0 (     2.018)   Char in 60-char image line (9x15) 
   2000000.0       670000.0 (     0.335)      7780000.0 (     3.890)   Char16 in 40-char image line (k14) 
    823000.0       270000.0 (     0.328)      4470000.0 (     5.431)   Char16 in 23-char image line (k24) 
   8500000.0      3710000.0 (     0.436)      8250000.0 (     0.971)   Char in 80-char image line (TR 10) 
   2620000.0       983000.0 (     0.375)      3650000.0 (     1.393)   Char in 30-char image line (TR 24)

This is our old friend x11perfcomp, but slightly adjusted for a modern reality where you really do end up drawing billions of objects (hence the wider columns). This table lists the performance for drawing a range of different fonts in both poly text and image text variants. The first column is for Xephyr using software (fb) rendering, the second is for the existing Glamor GL_POINT based code and the third is the latest GL_QUAD based code.

As you can see, drawing points for every lit pixel in a glyph is surprisingly fast, but only about 1/3 the speed of software for essentially any size glyph. By minimizing the use of the CPU and pushing piles of work into the GPU, we manage to increase the speed of most of the operations, with larger glyphs improving significantly more than smaller glyphs.

Now, you ask how much code this involved. And, I can truthfully say that it was a very small amount to write:

                    |    2 
 glamor.c           |    5 
 glamor_core.c      |    8 
 glamor_font.c      |  181 ++++++++++++++++++++
 glamor_font.h      |   50 +++++
 glamor_priv.h      |   26 ++
 glamor_text.c      |  472 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 glamor_transform.c |    2 
 8 files changed, 741 insertions(+), 5 deletions(-)

Let’s Start At The Very Beginning

The results of optimizing text encouraged me to start at the top of x11perf and see what progress I could make. In particular, looking at the current Glamor code, I noticed that it did all of the vertex transformation with the CPU. That makes no sense at all for any GPU built in the last decade; they’ve got massive numbers of transistors dedicated to performing precisely this kind of operation. So, I decided to see what I could do with PolyPoint.

PolyPoint is absolutely brutal on any GPU; you have to pass it two coordinates for each pixel, and so the very best you can do is send it 32 bits, or precisely the same amount of data needed to actually draw a pixel on the frame buffer. With this in mind, one expects that about the best you can do compared with software is tie. Of course, the CPU version is actually computing an address and clipping, but those are all easily buried in the cost of actually storing a pixel.

In any case, the results of this little exercise are pretty close to a tie — the CPU draws 190,000,000 dots per second and the GPU draws 189,000,000 dots per second. Looking at the vertex and fragment shaders generated by the compiler, it’s clear that there’s room for improvement.

The fragment shader is simply pulling the constant pixel color from a uniform and assigning it to the fragment color in this the simplest of all possible shaders:

uniform vec4 color;
void main()
{
       gl_FragColor = color;
}

This generates five instructions:

Native code for point fragment shader 7 (SIMD8 dispatch):
   FB write target 0
0x00000000: mov(8)          g113<1>F        g2<0,1,0>F                      { align1 WE_normal 1Q };
0x00000010: mov(8)          g114<1>F        g2.1<0,1,0>F                    { align1 WE_normal 1Q };
0x00000020: mov(8)          g115<1>F        g2.2<0,1,0>F                    { align1 WE_normal 1Q };
0x00000030: mov(8)          g116<1>F        g2.3<0,1,0>F                    { align1 WE_normal 1Q };
0x00000040: sendc(8)        null            g113<8,8,1>F
                render ( RT write, 0, 4, 12) mlen 4 rlen 0      { align1 WE_normal 1Q EOT };
   END B0

As this pattern is actually pretty common, it turns out there’s a single instruction that can replace all four of the moves. That should actually make a significant difference in the run time of this shader, and this shader runs once for every single pixel.

The vertex shader has some similar optimization opportunities, but it only runs once for every 8 pixels — with the SIMD format flipped around, the vertex shader can compute 8 vertices in parallel, so it ends up executing 8 times less often. It’s got some redundant moves, which could be optimized by improving the copy propagation analysis code in the compiler.

Of course, improving the compiler to make these cases run faster will probably make a lot of other applications run faster too, so it’s probably worth doing at some point.

Again, the amount of code necessary to add this path was tiny:

                    |    1 
 glamor.c           |    2 
 glamor_polyops.c   |  116 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 glamor_priv.h      |    8 +++
 glamor_transform.c |  118 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 glamor_transform.h |   51 ++++++++++++++++++++++
 6 files changed, 292 insertions(+), 4 deletions(-)

Discussion of Results

These two cases, text and points, are probably the hardest operations to accelerate with a GPU and yet a small amount of OpenGL code was able to meet or beat software easily. The advantage of this work over traditional GPU 2D acceleration implementations should be pretty clear — this same code should work well on any GPU which offers a reasonable OpenGL implementation. That means everyone shares the benefits of this code, and everyone can contribute to making 2D operations faster.

All of these measurements were actually done using Xephyr, which offers a testing environment unlike any I’ve ever had — build and test hardware acceleration code within a nested X server, debugging it in a windowed environment on a single machine. Here’s how I’m running it:

$ ./Xephyr  -glamor :1 -schedMax 2000 -screen 1024x768 -retro

The one bit of magic here is the ‘-schedMax 2000’ flag, which causes Xephyr to update the screen less often when applications are very busy and serves to reduce the overhead of screen updates while running x11perf.

Future Work

Having managed to accelerate 17 of the 392 operations in x11perf, it’s pretty clear that I could spend a bunch of time just stepping through each of the remaining ones and working on them. Before doing that, we want to try and work out some general principles about how to handle core X fill styles. Moving all of the stipple and tile computation to the GPU will help reduce the amount of code necessary to fill rectangles and spans, along with improving performance, assuming the above exercise generalizes to other primitives.

Getting and Testing the Code

Most of the changes here are from Eric’s glamor-server branch:

git:// glamor-server

The two patches shown above, along with a pair of simple clean up patches that I’ve written this week are available here:

git:// glamor-server

Of course, as this now uses libepoxy, you’ll need to fetch, build and install that before trying to compile this X server.

Because you can try all of this out in Xephyr, it’s easy to download and build this X server and then run it right on top of your current driver stack inside of X. I’d really like to hear from people with Radeon or nVidia hardware to know whether the code works, and how it compares with fb on the same machine, which you get when you elide the ‘-glamor’ argument from the example Xephyr command line above.


Andrew Solomon gave an interesting TED talk about depression [1].

I’ve had problems with depression at various times through my life; about 18 months ago I recognised it as depression after reading a list of depression symptoms on the Beyond Blue site. I think that they have changed their site recently: they now have an interactive checklist for depression on their web site [2] (or maybe I just missed the interactive part last time).

There is a strong correlation between Autism and depression; this is noted in research and described on the web, and Elspeth’s article on Bluehackers is a good description of this [3]. Her experiences differ from mine in some ways, but it’s within the common range of experiences you see described on Autism forums etc.

Depression is becoming more widely known; organisations such as Beyond Blue and Bluehackers are doing good work in spreading information to people who might be depressed and the people who know them. The general advice is to see a psychologist, which is good advice for average people.

Alexithymia and Choice of Psychologists

One problem with such advice is that it doesn’t apply so well to people with alexithymia (read the Wikipedia page) [4], that means most people on the Autism Spectrum. The Wikipedia page says “psychosomatic illness or substance abuse is frequently exacerbated should these individuals enter psychotherapy”. Based on people I know and accounts I’ve read on the Internet I expect that anyone on the Autism Spectrum who sees a psychologist that doesn’t specialise in Autism (which means most psychologists) will get a result that’s about the opposite of what one might desire. In theory a psychologist could recognise someone as being possibly on the Autism Spectrum and refer them to an expert for assessment, but I haven’t heard of that happening to an adult.

I think that most people who have some degree of alexithymia will avoid psychologists; without ever seeing one you can just know that it’s going to be unpleasant. So while you wouldn’t want someone who has alexithymia to visit a random psychologist, in practice that shouldn’t happen too often, as such people will be more likely to reject any advice about seeing a psychologist.

My page of Autism self-diagnosis tests has a link to an Alexithymia test [5]. If you get a high score on that test (or if taking the test seems too unpleasant) then it’s best to see a psychologist who specialises in Autism. Such psychologists are usually happy to work with people who don’t quite meet the Autism diagnostic criteria, but they may strongly recommend an Autism assessment so that they can determine the best strategies for treatment.

In terms of addressing such problems it seems that the best thing we can do is try and reduce the stigma associated with Autism. The vast majority of people on the Autism Spectrum have little in common with Rain Man. Many of the so-called Autism advocacy organisations make things worse by implying that everyone who is Autistic is unable to live an independent life which helps them in fundraising but doesn’t help us.

March 05, 2014

IPv4 and NAT, the future

A discussion came up during lunch at the IPv6 Roadshow… is NAT robust and well understood?

Since this is kind-of an opinion piece, I’m not going to cite evidence; it’s just random experience and anecdote that (to me) makes a lot of sense.

NAT is robust: NAT, for the sake of NAT – sure, maybe it’s robust (or “robust enough”). But even if you argue that NAT itself is robust, it in turn breaks a host of other things. Off the top of my head, some examples are:

IP Reputation: Any sizable NAT endpoint (like 3G carriers) has a rubbish IP reputation. Want to send mail directly to the internet instead of via your 3G provider’s server? Simply not going to work reliably.

Rate limiting: Anything that rate-limits by IP (say, to block obvious DoS attacks) is going to get triggered by heavy use from behind a NAT. Similar for things that limit the number of simultaneous connections per IP.

ALGs: ALGs in CPE are a crufty workaround that, as often as not, break the protocol they’re trying to fix (like rewriting SIP packets and messing things up).

Any P2P technology (Torrent, Skype, etc.): If you don’t have a public IP just for yourself, these are either not going to work, or they’re going to see increased latency compared to the ideal. The common ways of making this stuff work (port forwarding, UPnP IGD) just aren’t likely to be supported at the carrier level (IMHO). More dependence on routing through third-party servers, and more off-net traffic for carriers. There is a thing called PCP that may allow port forwarding in a CGNAT world, but I’m not holding my breath.

Timeouts: If your carrier, going forward, needs to maintain a NAT entry for every connection, it’s going to have to time them out at some point. You’re going to be absolutely forced to use keepalives in every protocol that you don’t want summarily disconnected. And I hope those NAT tables are replicated to other gateways in case they fail…

Summary: There are challenges with this stuff in IPv6 – e.g. you don’t want each /128 to have its own reputation – but the sort of wide-scale NAT that the future may hold is going to really mess up a lot of things and make them worse… but only over IPv4.

But it’s well understood: Actually, I’d also disagree that NAT is all that “well understood”. Amongst sysadmins, do we all understand the different types of NAT (symmetric, full-cone; 1:1/1:N, NAT+PAT vs just NAT, etc.) and the implications of each of those on every end-user application we support?

February 28, 2014

Links February 2014

The Economist has an interesting and informative article about the lack of reproducibility of scientific papers and the implications for scientific research [1].

Regina Dugan gave an interesting TED talk about some of the amazing DARPA projects [2].

Chris Anderson interviewed Elon Musk about the Tesla cars, SpaceX, and his new venture Solar City [3]. Elon has a lot of great ideas for improving humanity while also making money.

Smart Planet has an interesting article about Bhutan’s switch to electric vehicles [4].

Paul Piff gave an insightful and well researched TED talk about the ways that money makes people mean [5].

Maryn McKenna wrote an interesting article for Wired about what happens when the current antibiotics stop working [6]. Unfortunately she lists increasing food prices as a consequence; really it’s the misuse of antibiotics that keeps the price of meat unreasonably low, and that same misuse is what is causing this problem.

Linda Walther Tirado wrote an interesting article about being poor titled “Why I Make Terrible Decisions, or, Poverty Thoughts” [7]. It gives a real insight into the situation of people who are trapped in poverty. When someone who is as obviously intelligent as Linda feels that it’s impossible to escape poverty there is a real problem in the system. While Australia doesn’t suck nearly as badly as the US in this regard (higher minimum wage and better health care) we still need to improve things; I know people in Australia whose experience bears some similarity to Linda’s.

Maxwell Neely-Cohen wrote an interesting article about peer pressure [8]. Some of the conclusions are dubious, but the ideas on the way the Internet changes peer relationships in high school are interesting.

An English pediatrician wrote an article for The Daily Beast about why he won’t accept anti-vax clients [9].

There are some decent people in the Liberal Party, Liberal MP Warren Entsch attacks Cory Bernardi on ‘gay obsession’ [10]. AFAIK we haven’t yet had a gay sex scandal involving a homophobic Australian politician…

February 24, 2014

iPads as in-flight entertainment

I’m writing this whilst sitting on a Qantas flight from Perth to Sydney, heading home after attending the fantastic 2014.

The plane is a Boeing 767, and unlike most flights I have been on in the last decade, this one has no in-flight entertainment system built into the backs of seats.

Instead, every passenger is issued with an Apple iPad (located in the back seat pocket), fitted with what appears to be a fairly robust leather jacket emblazoned with the words “SECURITY DEVICE ATTACHED” (presumably to discourage theft).

There are a few observations I have made about this setup.

At first glance, using a portable tablet device to deliver audio/visual entertainment instead of using a more traditional fixed setup seems like a good idea.

For one, the cost reduction for replacing faulty or out of date devices immediately becomes obvious.

And people by and large are reasonably capable these days when it comes to interacting with a tablet touchscreen interface, even if they don’t own one of those devices themselves.

However, I did notice quite a few passengers having issues with the iPads. The cabin crew spent roughly the first 30 minutes after takeoff dealing with support issues, which mainly involved pointing out user interface buttons, assisting with plugging in headphones, or replacing the occasional non-functional iPad with spares they had on-hand.

The iPad assigned to me initially had only 62% charge at the start of the flight, and had no Wi-Fi connection. Safari was the only app allowed, and due to the lack of Wi-Fi, the predefined home page could not load, resulting in an error. Due to the interface lockdown, I could not manually tell it to reconnect. The seat next to me was empty, so I was able to use the iPad assigned to that seat, which worked without a problem.

The fact that there were no charging ports or cables provided suggests the iPads are charged elsewhere, perhaps off-plane, and loaded on prior to boarding.

The entertainment system was delivered entirely through the web browser, which provided TV shows, movies, and some radio content. Unfortunately the system provided no flight information (e.g. location, altitude, or flight path), so I had to rely on spotting landmarks out the window to get a rough idea of where I was.

Because the iPads had an airplane mode symbol with a Wi-Fi indicator, it was obvious that the media was delivered wirelessly. That raises interesting questions about delivering streaming media to hundreds of devices at once. Presumably either they have been careful to distribute a substantial number of access points throughout the aircraft, or they are relying on not too many passengers using up all the bandwidth at once.

Running a site survey on my laptop (iwlist wlan0 scan), I was able to detect 13 access points within range of my seat (53K, just a few rows from the very back), each of them with a hidden SSID. There was a mixture of 2.4 GHz and 5 GHz frequencies in use. Because I did not have airodump-ng installed on my laptop at the time, and I am not familiar with other methods of wireless sniffing, I was unable to find out the names of the hidden SSIDs used.
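Tallying access points, hidden SSIDs and bands from a saved scan is a short scripting job. A sketch parsing iwlist-style output (the sample text below is made up, not the actual Qantas scan):

```python
import re

def summarise_scan(text):
    """Count access points, hidden SSIDs and bands in `iwlist wlan0 scan` output."""
    cells = len(re.findall(r"Cell \d+ - Address:", text))
    # Hidden SSIDs show up as empty (or NUL-filled) ESSID strings.
    hidden = len(re.findall(r'ESSID:"(?:\\x00)*"', text))
    freqs = [float(m) for m in re.findall(r"Frequency:([\d.]+) GHz", text)]
    bands = {"2.4GHz": sum(f < 3 for f in freqs), "5GHz": sum(f >= 3 for f in freqs)}
    return cells, hidden, bands

sample = '''\
          Cell 01 - Address: 00:11:22:33:44:55
                    ESSID:""
                    Frequency:2.412 GHz (Channel 1)
          Cell 02 - Address: 00:11:22:33:44:66
                    ESSID:""
                    Frequency:5.18 GHz (Channel 36)
'''
print(summarise_scan(sample))  # (2, 2, {'2.4GHz': 1, '5GHz': 1})
```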

At this point I was wondering why Qantas have issued iPads rather than some other, non-Apple tablet device. Presumably the lockdown features that Apple provides work well enough for 90% of Qantas’ use case, but I can’t help but wonder whether a customised system, e.g. a custom-built Android on a more generic tablet, would provide better lockdown security and easier management.

I don’t think that a customised Android or other Linux–based system is outside Qantas’ reach, especially given that fixed setups in some of their other planes already appear to sport a similar amount of customisation.

Presumably the decision to deploy iPads was made by executive management, perhaps even being first drafted on a bar napkin, rather than being a technical decision that was made by objectively weighing up the benefits and disadvantages of various systems.

At this stage, I should probably point out that Qantas is a largely brand-oriented company, relying on a strong corporate identity to justify their markedly higher prices. For example, in my dinner serving, I was given brand-name Arnotts crackers, Bega cheese, Dairy Farmers milk, Mount Franklin water, Just Juice, and Coca-Cola. Perhaps given this, it is unsurprising that Apple iPads were chosen.

From a power perspective: the iPads run on battery for the duration of the flight, and the only gear the aircraft needs to power is the access points and file-serving infrastructure rather than fixed entertainment consoles as well, so this probably reduces the strain on the aircraft’s power supply. How significant that is I am not sure, as I would imagine power is not a particularly scarce resource on an aircraft the scale of a Boeing 767.

I wonder how much this idea will catch on. Assuming the Wi-Fi based approach continues to perform well, it makes a good retrofit solution for replacing older generation entertainment systems in current aircraft.

Given that what was once a cutting-edge premium feature aboard aircraft has now become standard, and demonstrably delivers customer satisfaction whilst being built atop commodity consumer hardware, I feel we will see more of this in aircraft in years to come.

Enlightenment 0.18 Won't Suspend on Debian

I recently switched my Debian (Sid) laptop from Enlightenment 0.17 to 0.18. After the upgrade my laptop would no longer suspend. A minor but inconvenient problem for which there were many possible rabbit holes to run down and I ran down a few.


All of the leads were dead ends for me but one thing I kept reading was how Enlightenment hooks into systemd if it's installed. Installing systemd was on my ToDo list, so I opted to install it and see if that brought me any E18 suspend love.


My Debian (Sid) installation already had systemd installed:

$ dpkg -l | grep -i systemd
ii  systemd     204-7   amd64   system and service manager

but it wasn't active:

$ cat /proc/1/comm
init

So I hit up the Debian wiki entry for systemd and saw that I just needed to install systemd-sysv and reboot:

$ sudo apt-get install systemd-sysv
$ sudo reboot

After the reboot, E18 suspended perfectly and systemd was the default:

$ cat /proc/1/comm
systemd

Can't get much more elegant than that :-)

February 23, 2014

Electric Car Charging in Melbourne

GoGet plug-in Prius charging at one of the ChargePoint chargers

This morning I noticed some parking bays reserved for car charging in a car park at the corner of Sydney Rd and Glenlyon St in Brunswick (near Aldi). One of the parking spots was occupied by a Plug-in Prius from GoGet [1]. I didn’t even realise that you could get a plug-in Prius in Australia. The charging station is run by Charge Point [2].

The charging points are about 1.5m high and the cable is about 3cm thick (about as thick as the pipe used for filling a car with petrol), so it would charge a car much faster than could be done with a regular power point.

One big problem with the Charge Point web site is that they don’t give any information on pricing. They sell home charge points (which I guess means just an all-weather two-phase power point) but don’t give a price for them. They sell charge points that can be used commercially but don’t give a price for those either. Also their billing infrastructure is apparently based on companies installing charge points and setting a price for the service. Some charge points may offer free service (I guess staff car parks and some government agencies) and others will charge varying rates – none of which is available on the web site. Apparently they have an “online portal” which gives such information to registered users – so you have to register to discover what it costs. Of course hardly anyone is going to register before discovering the price, even when registration is free. And while registration is free, the web site demands the make and model of your electric car, so presumably one has to spend $40,000 or more on a vehicle before discovering the price and availability of charging it.

Charge Point can be used as an example of how not to design a web site that promotes a service, or at least how not to promote a service that is aimed at saving money (electricity is significantly cheaper than petrol so it’s of interest to people and organisations that want to save money). The Charge Point site seems to be better suited to showing that the concept can work than convincing people that they should sign up for it. It seems to me that the best thing that they could do would be to prominently display the average cost of all non-free charge points that are open to the public along with an explanation of the price of driving a desirable car (such as a plug-in Prius or a Nissan Leaf) with such an electricity cost.

The “contact” section on the web site only has a link for “careers”.

I don’t think it’s possible to get widespread use of electric vehicles without getting better information out there. It appears that Charge Point is relying on councils to do the work of promoting their business by installing their stations and reserving car parking as Moreland council has done in this case.

Using a Brother network scanner with Linux

For a while now we have had a Brother MFC-J415W all-in-one wireless printer / fax / scanner thingy. It prints fine using CUPS and we have used it as a scanner with SD cards in sneaker-net mode. Linux support: Brother actually does a reasonably good job of supporting all of their recent equipment under […]

February 20, 2014


If you are an HP employee, or a Value Added Reseller (VAR), then I’d like to invite you to come to an Event I’m co-organizing:


High Performance Computing and Open Source Linux

Technical Excellence Symposium

17 – 21 March 2014

Grenoble, France


The goal is to gain expertise and increase your sales and earnings potential.

You are invited to the 2014 High Performance Computing and Open Source & Linux Technical Excellence Symposium. This highly regarded technological event dedicated to professionals is a unique opportunity to boost your skills and knowledge in just 4.5 days!

Why should you attend the EMEA HPC & OSL event?

  • Receive an update on the entire HPC and Open Source & Linux solutions portfolio, including Enterprise Storage, Servers and Networking (EG).
  • Reinforce your knowledge in this focus area.
  • Connect with the HP management and peers in the HP Certified Professional Community.

Do not miss this exclusive opportunity to give your career a boost and be part of the HPC and OSL community. This is also an excellent opportunity to get the latest technology updates even if you participated in this event before.

Pre-register today to secure your seat by clicking on the following link:

Choose among over 100 technological and solution sessions, a mix of hands-on labs, breakout demos and round tables.

This technical event covers HP’s broad enterprise products and solutions in an open source environment for high performance computing, the web services portfolio and new product introductions (e.g. the Moonshot program and big data solutions).

Alongside the sessions, you will meet HPC and OSL experts to share knowledge and insights in many key business areas, including competitive tools.

Back home and freshly armed with an enhanced skill set, you will be ready to provide great business value to your customers.

Throughout the event, we will offer detailed tracks on the High Performance Computing solutions portfolio using our HP BladeSystem, ProLiant scalable servers, StorageWorks and networking products, and solutions using Open Source & Linux software products and services.

We will have a huge emphasis on topics like Hyperscale products, smart storage, purpose-built servers for big data, Moonshot program wave 2 solutions as well as open cloud solutions.

Our High-level agenda is available. The detailed one will be sent to you after your registration.

In addition, there will be:

  • Sponsor sessions delivered by top names in the industry
  • Presentations by HPC and Open Source experts
  • Evening events

For detailed program information, please visit the event website

Any question? We invite you to contact us at:

We are looking forward to meeting you at the 2014 HPC and OSL Technical Symposium!

Filed under: FLOSS Tagged: Event, High Performance Computing, HP, HPLinux, Linux, Open Source, ProLiant, TES

February 18, 2014

IPv6 Roadshow, in a city near you!

A group I’m a part of (SAGE-AU) is running an “IPv6 Roadshow”, to deliver a full day of training on IPv6 to major Australian cities.

Each city is covered on a different day, starting on March 3rd (ie., really soon). There’s also a live stream being broadcast from the Sydney event on March 4th.

Deep discounts are naturally available for SAGE-AU members (don’t tell anyone but it’s cheaper to become a member, then sign up for the roadshow :)

Check it out -

February 17, 2014

In progress.


This is just to say that I'm still around, and that I'm working on polishing (i.e., making a new start with) my life online.

And now I'm off to have a cup of tea and keep on studying. Bye!

MicroPeak Approved for NAR Contests

MicroPeak Approved for NAR Contests

The NAR Contest Board has approved MicroPeak for use in contests requiring a barometric altimeter starting on the 1st of April, 2014. You can read the announcement message on the contestRoc Yahoo message board here:

Contest Board Approves New Altimeter

The message was sent out on the 30th of January, but there is a 90-day waiting period after the announcement has been made before you can use MicroPeak in a contest, so the first date approved for contest flights is April 1. After that date, you should see MicroPeak appear in Appendix G of the pink book, which lists the altimeters approved for contest use.

Thanks much to the NAR contest board and all of the fliers who helped get MicroPeak ready for this!

lazy-loading class-based-views in Django

So one of the nice things with method-based views in Django is the ability to do this sort of thing to load a view at the path frontend.views.home:

urlpatterns = patterns(
    'frontend.views',

    url(r'^$', 'home', name='home'),
)

Unfortunately, if you’re using class-based-views, you can’t do this:

urlpatterns = patterns(
    'frontend.views',

    url(r'^$', 'HomeView', name='home'),
)

Instead you have to resort to importing the view and calling HomeView.as_view(). Sort of annoying when you don’t want to import all of those views.

It turns out however that overloading the code to resolve HomeView is not that difficult, and we can do it with a pretty straightforward monkeypatch. This version uses the kwargs argument of url() to pass keyword arguments to as_view().

from django.conf import urls
from django.views.generic import View

class ClassBasedViewURLPattern(urls.RegexURLPattern):
    """A version of RegexURLPattern able to handle class-based views.

    Monkey-patch it in to support class-based views.
    """

    @property
    def callback(self):
        """Hook locating the view, to handle class-based views."""
        view = super(ClassBasedViewURLPattern, self).callback

        if isinstance(view, type) and issubclass(view, View):
            view = view.as_view(**self.default_args)
            self.default_args = {}

        return view

urls.RegexURLPattern = ClassBasedViewURLPattern
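The idea generalises beyond Django. Here is a self-contained sketch of the same lazy resolution (all classes here are illustrative stand-ins, not Django's API): a callback property checks whether its target is a class and, if so, converts it with as_view() on first access, consuming the default arguments just as the monkeypatch above does.

```python
class View:
    """Minimal stand-in for a class-based view."""
    @classmethod
    def as_view(cls, **initkwargs):
        def handler(request):
            return cls(**initkwargs).dispatch(request)
        return handler

class HomeView(View):
    def __init__(self, greeting="hello"):
        self.greeting = greeting
    def dispatch(self, request):
        return "%s %s" % (self.greeting, request)

class URLPattern:
    def __init__(self, target, default_args=None):
        self._target = target
        self.default_args = default_args or {}

    @property
    def callback(self):
        view = self._target
        # Classes are converted lazily, on first access, like the monkeypatch.
        if isinstance(view, type) and issubclass(view, View):
            view = view.as_view(**self.default_args)
            self.default_args = {}
        return view

pattern = URLPattern(HomeView, {"greeting": "hi"})
print(pattern.callback("world"))  # hi world
```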

February 15, 2014


AltOS 1.3.2 — Bug fixes and improved APRS support

Bdale and I are pleased to announce the release of AltOS version 1.3.2.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a minor release of AltOS, including bug fixes for TeleMega, TeleMetrum v2.0 and AltosUI.

AltOS Firmware — GPS Satellite reporting and APRS improved

Firmware version 1.3.1 has a bug on TeleMega when it has data from more than 12 GPS satellites. This causes buffer overruns within the firmware. 1.3.2 limits the number of reported satellites to 12.

APRS now continues to send the last known good GPS position, and reports GPS lock status and number of sats in view in the APRS comment field, along with the battery and igniter voltages.

AltosUI — TeleMega GPS Satellite, GPS max height and Fire Igniters

AltosUI was crashing when TeleMega reported that it had data from more than 12 satellites. While the TeleMega firmware has been fixed to never do that, AltosUI also has a fix in case you fly a TeleMega board without updated firmware.

GPS max height is now displayed in the flight statistics. As the u-Blox GPS chips now provide accurate altitude information, we’ve added the maximum height as computed by GPS here.

Fire Igniters now uses the letters A through D to label the extra TeleMega pyro channels instead of the numbers 0-3.

February 14, 2014

A Script for Broken Links

The Scenario:

Imagine that there's been a tumultuous cut-over from one website to another that's left you with more than 6,000 broken links ruining your search engine foo. As a sysadmin you can't fix the missing content but you can bring some peace to those 404s.

You programmatically converted a report of those 6,000+ URLs into an Apache redirect file and reloaded Apache. You tested a couple of redirects manually and they were fine, but you're not going to manually test all 6,000+ URLs, are you? (The answer's "no", in case you're unsure.)
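Such a redirect file is just a pile of mod_alias directives, one per dead URL (the paths here are hypothetical; the real file would contain 6,000+ such lines):

```apache
# old path                          new destination
Redirect permanent /old/about.html  http://example.com/about/
Redirect permanent /old/news.html   http://example.com/news/
```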

My Nasty Hack Solution

I wrote a script to address the testing workload (available from my git repo or via github). When run, it asks for the file of raw URLs and where you would like any links that are still broken to be saved.
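For the curious, the core of such a checker fits in a page. A hedged sketch (plain Python and its standard urllib; the original script's internals aren't shown, and the file-format details here are assumptions):

```python
#!/usr/bin/env python3
"""Check a list of URLs and record those still returning errors."""
import urllib.request
import urllib.error

def check_urls(infile, outfile):
    broken = []
    with open(infile) as f:
        urls = [line.strip() for line in f if line.strip()]
    for url in urls:
        print("Now checking:", url)
        try:
            urllib.request.urlopen(url, timeout=10)
        except (urllib.error.URLError, ValueError):
            broken.append(url)          # still 404/unreachable/malformed
    with open(outfile, "w") as f:
        f.write("\n".join(broken))
    print("Total URLs in the file:", len(urls))
    print("Total URLs still broken:", len(broken), outfile)
```

A HEAD request would be lighter on the server than the full GET that urlopen performs, at the cost of a little more code.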

Sample Input File of URLs to be Tested

$ cat /tmp/URLs.txt

When you run it, your output will be similar to the below:

$ ./
Enter the path and file name that contains the broken URLs:
Enter the full or relative path & file name for the list of still broken URLs:
Now checking:
Now checking:
Now checking:

Total URLs in the file: 3
Total URLs still broken: 1 /tmp/broken.txt

The output to the requested broken.txt looks like this:

Happy hunting still broken URLs :-)

February 10, 2014

That's Not a Zucchini...

Most of the work we've done on the farm since we moved there has been around livestock, planting hedges and wildlife corridors, but not food. Late last year I built Kristina a rose bed out of goat bedding and alpaca and horse manure, which was promptly planted with tomatoes, pumpkins and zucchinis instead.

It's now zucchini harvest time, and we thought we'd found a monster yesterday:

THAT's not a zucchini:

Then we found this one today:

THIS is a zucchini:

Fingerprints and Authentication

Dustin Kirkland wrote an interesting post about fingerprint authentication [1]. He suggests using fingerprints for identifying users (NOT authentication) and gives an example of a married couple sharing a tablet and using fingerprints to determine whose apps are loaded.

In response Tollef Fog Heen suggests using fingerprints for lightweight authentication, such as resuming a session after a toilet break [2].

I think that one of the best comments on the issue of authentication for different tasks is in XKCD comic 1200 [3]. It seems obvious that the division between administrator (who installs new device drivers etc) and user (who does everything from playing games to online banking with the same privileges) isn’t working, and never could work well – particularly when the user in question installs their own software.

I think that one thing which is worth considering is the uses of a signature. A signature can be easily forged in many ways and they often aren’t checked well. It seems that there are two broad cases of using a signature: one is to enter into a serious, legally binding contract such as a mortgage (where wanting to sign is the relevant issue) and the other is cases where the issue doesn’t matter so much (e.g. signing off on a credit card purchase, where the parties at risk can afford to lose money on occasion in exchange for efficient transactions). Signing is relatively easy, but that’s because it either doesn’t matter much or because it’s just a legal issue which isn’t connected to authentication. The possibility of serious damage (sending life savings or incriminating pictures to criminals in another jurisdiction) being done instantly never applied to signatures. It seems to me that in many ways signatures are comparable to fingerprints, and neither of them is particularly good for authentication to a computer.

In regard to Tollef’s ideas about “lightweight” authentication, I think that the first thing required is direct user control over the authentication needed to unlock a system. I have read about some Microsoft research into a computer monitoring the office environment to better facilitate the user’s requests; an obvious extension to such research would be to have greater unlock requirements if there are more unknown people in the area or if the device is in a known unsafe location. But apart from that sort of future development, it seems that having the user request a greater or lesser authentication check, either at the time they lock their session or by policy, would make sense. Generally users have a reasonable idea of the risk of another user trying to log in at their terminal, so the user should be able to decide that a toilet break at home only requires a fingerprint (enough to keep out other family members) while a toilet break at the office requires greater authentication. Mobile devices could use GPS location to determine unlock requirements; GPS can be forged, but if your attacker is willing and able to do that then you face a greater risk than most users.

Some users turn off authentication on their phone because it’s too inconvenient. If they had the option of using a fingerprint most of the time and a password for the times when a fingerprint can’t be read then it would give an overall increase in security.

Finally it should be possible to unlock only certain applications. Recent versions of Android support widgets on the lock screen so you can perform basic tasks such as checking the weather forecast without unlocking your phone. But it should be possible to have different authentication requirements for various applications. Using a fingerprint scan to allow playing games or reading email in the mailing list folder would be more than adequate security. But reading the important email and using SMS probably needs greater authentication. This takes us back to the XKCD cartoon.

February 09, 2014

I see Mannequins

I See Mannequins

As a street photographer you spend a lot of time walking the streets, often the same streets many times.  To some these streets look much the same, but with a trained eye you soon start to see the differences every day.  You also start to notice the little things that change, such as signage in a shop window or the menu of a cafe, but the one thing I have noticed for many years is the changing Mannequins that many shops use to advertise or display a particular product.  Over the last couple of years I have slowly been photographing a large number of Mannequins - especially those that stan

View of Navarinou Park

This Thursday I’ll be running a workshop around more inclusive strategies for activism at Dasein Coffee Art & Spirits (Σολωμού 12, Εξάρχεια / Solomou 12, Exarcheia). Feel free to come along! (The Facebook event page is here.)

Στο εργαστήρι αυτό θα ερευνήσουμε διαφορετικούς τρόπους με τους οποίους μπορούμε να διοργανώνουμε ακτιβιστικά δρώμενα, ανοιχτά, προσβάσιμα, χωρίς αποκλεισμούς.

Υπάρχει μία τάση στον ακτιβισμό να καταλήγουμε να μιλάμε αποκλειστικά με ανθρώπους με τους οποίους συμφωνούμε, και να χτίζουμε κοινότητες στις οποίες, άνθρωποι που δεν θεωρούν τους εαυτούς τους ακτιβιστές, αισθάνονται άβολα να συμμετέχουν. Αν θέλουμε όμως να φέρουμε πραγματική αλλαγή, χρειάζεται να μάθουμε να είμαστε πιο ανοιχτοί προς τους ανθρώπους που σκέφτονται διαφορετικά από εμάς, που έχουν διαφορετικές ιδέες.

Αυτό το εργαστήρι θα εξερευνήσει μερικές βασικές θεματικές, όπως:

  • Tiered activism- Κλιμακωτός ακτιβισμός (επιτρέπει στα άτομα να πραγματοποιούν μικρά βήματα κάθε φορά προς την αλλαγή)
  • Η σημασία εναλλακτικών ιστοριών: Προτείνουμε πολλές εκδοχές για το πώς ο κόσμος μας μπορεί να αλλάξει, όχι μόνο μία.
  • Ενσυναισθητική επικοινωνία
  • Οι ταυτότητες του ακτιβιστή: Κριτική
  • Συζητώντας για κάθε μία από τις παραπάνω θεματικές, προτρέπουμε τους συμμετέχοντες να μοιραστούν δικές τους εμπειρίες σχετικά με το τι λειτούργησε καλά και τί όχι, και να ανακαλύψουν μεθόδους αποτελεσματικότερου ακτιβισμού.

Η συντονίστρια του εργαστηρίου, Sky Croeser, είναι ερευνήτρια και ακτιβίστρια από την Αυστραλία, που πρόσφατα μετακόμισε στο Τορόντο. Η έρευνά της επικεντρώνεται στο πώς ακτιβιστές σε όλο τον κόσμο προσπαθούν να επιδράσουν εναλλακτικά στο παρόν σύστημα, και ενδιαφέρεται να μοιραστεί τακτικές και στρατηγικές ώστε να βοηθήσει διαφορετικές κοινότητες και κολλεκτίβες να μάθουν η μία από την άλλη. Πρόσφατα, συνεπιμελήθηκε το βιβλίο “Lessons for Social Change in the Global Economy: Voices from the Field”. Μπορείτε να μάθετε περισσότερα για την έρευνά της στην ιστοσελίδα

This workshop explores different ways to build activist events that are open and inclusive. There is a tendency in activism to end up talking only to people we agree with, and to build communities where people who don’t think of themselves as activists are uncomfortable taking part. However, if we want to create meaningful change we need to think about how to be open to people who are different from us, and have different ideas. This workshop will explore a few basic principles, including:

  • Tiered activism (allowing people to take small steps),
  • The importance of other stories,
  • Empathic communication, and
  • Challenging activist identities.

As we talk about each principle, we will encourage people to share their own experiences about what has worked well and what has not, and imagine ways to build more effective activism.

Bio: Sky is a researcher and activist from Australia who is currently based in Toronto. Her research focuses on how activists around the world are trying to build alternatives to the current system, and she is interested in sharing tactics and strategies to help different communities learn from each other. She recently co-edited Lessons for Social Change in the Global Economy: Voices from the Field.

Many thanks to Maria Sidiropoulou for her work organising this and translating the content (she will also be providing translation during the workshop). You can find her writing on Global Voices.

February 08, 2014

Next Phase of Learning

Whilst I'm still happily making my way through Learning Python the Hard Way, the next phase of learning has arrived. It may not look it, but Learning Python is 7cm thick. I might digress into Raspberry Pi Hacks for a little bit first...

Learning Python and Raspberry Pi Hacks

February 06, 2014


Finally, after having published version 0.12.3 on time for, I found some remaining bugs in it (a problem of time-based releases and not enough time to test). I then made a 0.12.4 version soon after, as promised, to solve these, but it still missed some aspects, and those are now solved with this 0.12.5 version.

So this version fixes an issue with messages being printed twice, and also the way we manage the naming of Debian versions, following the recent change for Debian 7 (now using only the major version, and not mentioning the minor version, which is now useless).

I also fixed a dependency issue, seen when trying to install the beta of MondoRescue, where a function used in the code wasn’t provided by the perl-ProjectBuilder module :-( This is now solved, and I’ll now start working on the next MondoRescue beta, which should now be installable with yum/apt-get/urpmi.

This version should now be stable to use for some weeks, as I do not plan to make big changes in it and it is perfectly fitting my needs now. Of course, more could always be done, but I can’t advance on all sides at the same time, so now it’s MondoRescue‘s turn!

Enjoy !

Filed under: FLOSS Tagged: Mondorescue, Open Source,

February 05, 2014

Pettycoin and working with limited visibility.

At I gave a last-minute talk previewing my work on pettycoin (video, slides), an experiment to shard a bitcoin-like network.  The idea is to trade off some security and robustness in return for scale, but use it only for small amounts where fraud is less worthwhile.  On the bitcoin network today this is already seen with zero-confirmation transactions, and this is the niche pettycoin seeks to fill.

There are numerous problems to be overcome (one being the time taken by my day job, of course).  But segmenting the network and the blockchain is an interesting challenge: bitcoin’s blockchain is already designed so that you can have partial knowledge (mainly so you can prune used outputs).  But there’s a clear divide between full nodes, and second-class partial nodes.  I want a system where no one need know everything, and I’m getting closer to that goal.

Consider the simplest useful transaction in the bitcoin network, with one input (ie. a previous transaction’s output) and one output.  To verify this is a fairly simple process:

  1. Is the transaction well-formed?
  2. Find the transaction whose output this is spending.
  3. Does the signature match the address of that output?
  4. Has that output already been spent?

With bitcoin, you’re expected to know every transaction with unspent outputs, so if you can’t find the transaction at step 2, the verification fails. Even better, you can verify that previous transaction too, all the way back to the creation of the coins involved.  Your only worry is that the blockchain you have is the same as everyone else’s, so they’ll accept your transaction later.

If you don’t expect to know everything, it’s more difficult.  You can use a merkle proof to show that a transaction was present in a block; it takes just log(N) hashes for an N-transaction block.  So you could prove that all those previous transactions are in the blockchain (though that might be thousands of transactions) by providing me with each transaction and proof.
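A merkle proof can be sketched as follows (a generic bitcoin-style construction for illustration, not pettycoin's actual code; the function names are mine):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Reduce a list of leaf hashes to a single root."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                 # odd level: duplicate the last hash
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes linking leaves[index] to the root: log2(N) of them."""
    proof, level = [], list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1                # the pair partner of this node
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_proof(leaf, proof, root):
    """Recompute the root from a leaf and its sibling path."""
    acc = leaf
    for sibling, sibling_is_left in proof:
        acc = h(sibling + acc) if sibling_is_left else h(acc + sibling)
    return acc == root
```

For an 8-transaction block the proof is just 3 hashes, and it grows only logarithmically with block size.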

But this can’t prove that there are not duplicate transactions in the blockchain itself.  Only knowing the entire contents would do that.  So we’re relying on the rest of the network, each with a partial view, to check that didn’t happen.

This leads to the two requirements for any aspect of the pettycoin system which a node can’t verify by itself:

  1. The information to verify must be seen by some honest nodes.
  2. Each node must have an efficient way of reporting if it sees a problem.

The former is a bit tricky.  Consensus is formed by the blockchain, but you don’t want to see all of it.  You might expect to see some fraction of it, but if you don’t, how would you alert the network in a way that can’t be faked?  Imagine a miner holds back 5 transactions in a block: they might wait for your complaint message about one, then release that transaction, making you look like the dishonest one.  By making you cry wolf, they can ensure you are ignored.

The solution used in pettycoin is that miners have to prove that they know the transactions in the 10 previous blocks.  They do this by hashing the transactions from the previous block into a merkle tree as normal, only they prefix each transaction with their payout address (this is called prev_merkle in the code).  The only way to generate this hash is to know the contents of each transaction, and you can’t make a valid block without it.  Unfortunately, the only way to demonstrate that this hash is wrong (thus the block is invalid) is to also know the contents of each transaction in the block.  Thus transactions are batched into groups of 4096; you only need to send 4096 transactions to prove that one of the hashes in a block is wrong.  Miners will insist on knowing the transactions for those blocks, knowing that if they fake it they’ll likely be caught.
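The prev_merkle idea can be sketched like this (illustrative only; pettycoin's actual construction is in C and may differ in detail):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def prev_merkle_root(transactions, payout_address: bytes) -> bytes:
    """Merkle root over full transaction bodies, each prefixed with the
    miner's payout address.  Knowing only the transactions' hashes is not
    enough: the prefix forces you to hash the complete contents."""
    level = [h(payout_address + tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Because the leaves depend on the payout address, a miner can't reuse someone else's tree or precomputed hashes; they must possess every transaction body themselves.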

Reporting most other problems in a block is fairly straightforward:

  1. You can prove a duplicate spend in the block chain by showing both transactions and the merkle proofs that they are in each block.  The second block is invalid.
  2. You can prove a malformed transaction by showing the transactions and the merkle proof it is in the block.  That block is invalid.
  3. You can prove an overspend by showing the transactions used as inputs.  That block is invalid.

But if a transaction in a block relies on an output of a transaction which never existed, you can’t prove it.  Even if you know every transaction which ever happened, you can’t prove that to me (without sending me the whole blockchain).  The initial design lived with such warts in the blockchain, instead insisting that you would have to show all the predecessors when you paid me (via a payment protocol).  That predecessor tree quickly becomes unwieldy, however.

The new approach is that for each input of a transaction in the blockchain, the miner has to include the block and transaction number where it appeared.  Now anyone who knows that previous transaction can check it, and if there’s a problem it’s easy for any node to prove by showing the transaction which is in that previous block (with merkle proof that it is).

This means that the blockchain can be trusted if half the mining power can be trusted.  This is a weaker guarantee than bitcoin’s, but sufficiently strong for pettycoin.  If you send me a new transaction along with the transactions it uses as inputs and merkle proofs that they are in the blockchain, I only need to ensure that the new transaction isn’t a double-spend.  That’s the same as the bitcoin network with zero-confirmation transactions (though pettycoin has a special double-spend report message to expedite it a little).

Next post, I’ll respond to the best criticism of pettycoin yet, the problem of gateways (by Jason Gi)…

First Day of School

My third little man started school today. Here's the obligatory photos:

In the Paddock

Waiting for the Bus

February 04, 2014

Clothing and Phone Cameras

In 2012 I wrote about my jeans from Rivers that fit the largest available phones (and the smaller tablets) in their pockets [1]. Those jeans are still working well for me, I can add the fact that they don’t wear out quickly to the list of positive attributes.

Recently my sister asked for advice on getting a new phone; she was considering the Samsung Galaxy Note 2 (the phone I’m using now) because it apparently takes better pictures than the Nexus 4 she’s using. I’ve used both phones and hadn’t noticed a difference in picture quality, but there is some variation in manufacturing and it could be that I’ve got a below-average Note 2 and a better-than-average Nexus 4 – so for the sake of discussion I’ll assume that my sister would actually get an improvement in picture quality from a Note 2.

If you have a phone that doesn’t have the picture quality you desire then one option is to buy a phone with a better camera, but you will be limited by issues of physics. A thin phone has a short focal length which means that the lens has to be small and therefore the amount of light that gets to the sensor is small. The Nokia Lumia 1020 has some of the best camera hardware that you’ll find in a phone, but it’s still only 14.5mm thick where the camera is and that will limit the quality a lot.

Any “compact” camera should be able to beat all phone cameras in picture quality in most areas. The Samsung Galaxy Camera [2] is also worth considering: it has more features than a typical compact camera and a good GUI that lets novice photographers take advantage of them. Being able to blog your photos directly from the camera could also be a useful feature. But the big down-side of a “compact” camera is that it’s not that compact. Most people won’t find it convenient to carry one at all times and therefore might miss a good opportunity to take a photo. The Galaxy Note series of phones also suffers in this regard because they are larger than most phones. If your phone won’t fit in your pocket and sits in your backpack when on the move, or on a bench at home, then you will probably miss some good photos.

As I was at a Rivers store recently I tested my Note 2 in the pockets of women’s jeans. Rivers scored very poorly in this regard: one pair of women’s jeans had fake pockets (which is just wrong for working clothes), and of the rest only one pair could fit a Note 2. That pair didn’t completely enclose the phone (one corner stuck out), which would risk the phone falling out of the pocket and cause some discomfort to the wearer. I have a pair of shorts with similar-sized pockets and find it very annoying with the Note 2 in the pocket (for about 10 months of the year I wear jeans, so this isn’t a big deal). Rivers jeans only count as “geeky jeans” for male geeks. It’s disappointing that among about a dozen different styles of women’s jeans there didn’t seem to be a single one with pockets comparable in size to the men’s jeans.

I had to recommend that my sister not get a phone from the Galaxy Note series if taking pictures is a priority due to the apparent difficulty in getting it to fit in a pocket and the probability that she would miss good photos due to this.

In past discussions of phone size there have been mentions of the possibility of getting clothing altered. Does anyone have a good experience in getting clothes altered to have bigger pockets or in the case of women’s clothing to have fake pockets replaced with real ones?

February 02, 2014

A libnss plugin for Docker

So we’ve been starting to use Docker at Infoxchange. A little while ago my Infoxchange colleague Ricky Cook wrote a Twisted service to resolve Docker container names into their private IP addresses.

I thought this was a pretty neat idea, although you had to do some tricks like returning SERVFAIL to round-robin to another DNS server if the name wasn’t a Docker host. However, it got problematic when something like dhclient/NetworkManager was in charge of your resolv.conf.

I realised on Friday night that the truly UNIX solution was to write an NSS plugin. This was not quite as trivial as I thought it would be. However I spent a bit of time staring at Lennart’s mdns plugin and started writing one of my own.

docker-nss should work well enough to resolve container IDs and temporary assigned names that you get from docker ps.
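For reference, an NSS hosts plugin is enabled by adding its service name to the hosts line of /etc/nsswitch.conf. Assuming the library installs as libnss_docker.so.2 (so its NSS service name is "docker"), that would look something like:

```
# /etc/nsswitch.conf
hosts: files docker dns
```

Lookups then try local files first, then Docker containers, then regular DNS, with no resolv.conf trickery required.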

There’s a bit of work to do. It can’t reverse IPs back into container names yet. Nor does it return a full list of aliases for a container. Nor does it return a list of IPs for an image name. Patches welcome! It currently depends on GLib, which drags GLib into the memory space of every process on your system, however it wouldn’t be too hard to port away from it (I’m just lazy).

It has unit tests via a mocked docker client. You do have to have modified your nsswitch.conf in order to be able to run the tests. Although if this really bothers someone we could make the test suite itself a Docker container.

Hamish and Sen - Explorers

Another magical Summer's day in the Huon Valley:

Roaring Beach

The cave at Roaring Beach. Intrepid explorers. Glad their mothers didn't see them climb this. The eagles overlooking the beach. When too much fun is not enough. Panoramas are a thing now.

Geeveston Car Show

Getting in touch with her bourgeois side. We saw a lot of EH Holdens.


Anne has announced that Mageia 4 is now available, just in time for Fosdem!

But I won’t be at Fosdem this year again (I’m attending an HP event instead)

And while there are good reasons for Mageia to be my distribution of choice, I won’t update my laptop this week, as I’m presenting during this HP event, and want to stay on the safe side. And for my home computer, well, I generally do it after my laptop ;-)

But anyway, great job done by the Mageia team and lots of good apps in this new version, including OpenStack and UEFI ! Enjoy and try it. It’s really worth it !

Filed under: FLOSS Tagged: Event, Fosdem, Linux, Mageia, Open Source, OpenStack, UEFI

February 01, 2014

First Blackberry of the Season

Hamish spotted the first blackberry of the season:

It's a blackberry!

It didn't last long:

Optimus Prime recommends blackberries!

Next week I expect we'll be gorging ourselves on blackberries :-)

No Fosdem

Missing FOSDEM

I’m afraid Eric and I won’t be at FOSDEM this weekend; our flight got canceled, and the backup they offered would have gotten us there late Saturday night. It seemed crazy to fly to Brussels for one day of FOSDEM, so we decided to just stay home and get some work done.

Sorry to be missing all of the fabulous FOSDEM adventures and getting to see all of the fun people who attend one of the best conferences around. Hope everyone has a great time, and finds only the best chocolates.

January 31, 2014

Launchpad to Github round trip, saving 5 gigabytes on the way

As alluded to in my previous entry, as part of working on the OQGraph storage engine I have set up a development tree on GitHub, with beta and released code remaining hosted on Launchpad using bzr as part of the MariaDB official codebase.  This arrangement is subject to the following constraints: I will be doing […]

Django utility methods (including New Relic deployment notification)

So we’ve moved to Github here at Infoxchange as our primary development platform, because pull requests and Travis CI are much nicer than yelling across the room at each other [1]. To enable Travis to build our code, we’ve needed to move our little utility libraries to Github too. Since some of these were already on pip, it made sense to open source the rest of them as well.

The most useful is a package called IXDjango, which includes a number of generally useful management commands for Django developers. Especially useful are deploy, which runs a sequence of other commands for deployment, and newrelic_notify_deploy, which notifies New Relic of your deployment and annotates all of your graphs with the version number.

We hope these are useful to people.

  1. big shout out to both Github and Travis CI for supporting our not-for-profit mission with gratis private accounts

January 30, 2014

Learning Python the Hard Way

I've recently started learning Python. In particular I've started Learning Python the Hard Way, which I'm now halfway through while I wait for O'Reilly's book Learning Python to arrive.


The free HTML version has been an enjoyable and well-paced experience. I highly recommend it to anyone else interested in picking up Python.

I've been punting my exercises up into my gitweb repo which I'm also mirroring to my github account. Yay for free backups.

Utilising VLANs in OpenWRT

There are many ways you can utilise VLANs in OpenWRT. This documents particularly how to configure a TP-Link WDR4300 running OpenWRT 12.09 to conform to an existing VLAN topology, where VLAN 1 is for data and VLAN 2 is for VOIP (voice) traffic.

OpenWRT logo - Wireless Freedom!

Create a third VLAN:

By default OpenWRT comes with VLANs disabled and when enabled, there are two VLANs. VLAN 1 is for the lan interface and VLAN 2 is for the wan interface. Here we have to enable VLANs, create a third VLAN and swap configurations between VLAN 2 and 3:

  • Click on Network.
  • Click on Switch.
  • Select Enable VLAN functionality.
  • Scroll down and click on Add - VLAN ID 3 should appear.
  • Make VLAN 3 settings the same as VLAN 2
    • Set CPU to tagged for VLAN 3
    • Set Port 1 to off for VLAN 2
    • Set Port 1 to untagged for VLAN 3
  • Press Save
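For reference, these clicks end up in /etc/config/network. A sketch of the resulting switch sections is below; the switch name and port numbers are device-specific, and the numbering here is an assumption for the WDR4300 (port 0 as the CPU port, port 1 as the physical WAN port):

```
config switch
        option name 'switch0'
        option enable_vlan '1'

# VLAN 2: CPU tagged, port 1 removed
config switch_vlan
        option device 'switch0'
        option vlan '2'
        option ports '0t'

# VLAN 3: CPU tagged, port 1 untagged
config switch_vlan
        option device 'switch0'
        option vlan '3'
        option ports '0t 1'
```

Check your device's wiki page before copying port numbers; they vary between switch chips.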

Re-Configure the WAN Interface

Now we need to tell the WAN interface to use VLAN 3 instead of VLAN 2:

  • Click on Network.
  • Click on Interfaces.
  • Click on Edit for the WAN interface
  • Click on Physical Settings
    • In Interface select VLAN Interface: "eth0.3".
    • Press Save & Apply.
  • Check that wan is set to VLAN Interface: "eth0.3"
  • Re-check the Switch settings are as per the previous section.
  • Click on System then Reboot.
  • Click on Perform reboot.

It can take about a minute for services to return after this reboot.

Create the Voice VLAN Interface

Welcome back! Hopefully you didn't make any typos and haven't had to factory reset the box once or twice to get here. Now we're going to create the interface for voice traffic:

  • Click on Network.
  • Click on Interfaces.
  • Click on Add new interface...
  • Name the interface appropriately; I'll use Voice for this example.
  • Ensure the interface is set to Static address.
  • Set Cover the following interface to VLAN Interface: "eth0.2".
  • Press Submit.

You will now be taken to the page titled Interfaces - Voice.

Common Configuration

General Setup

Here we set the basic IPv4 configuration for this interface (did I mention I'm assuming IPv4?).

  • Set the IPv4 address as appropriate.
  • Select a class C netmask ( if you have a class C network (most likely).
  • Press Save.
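The equivalent /etc/config/network stanza would look roughly like this (the address below is a placeholder example, not a value from this setup; substitute your own):

```
config interface 'Voice'
        option ifname 'eth0.2'
        option proto 'static'
        option ipaddr ''
        option netmask ''
```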

Firewall settings

After clicking on the Firewall Settings tab, we will assign the Voice interface to the lan firewall zone:

  • Select lan.
  • Press Save.

DHCP Server

General Setup

I required a DHCP server for the VOIP handsets, so here's what we set:

  • De-select Disable DHCP for this interface.
  • Set the Limit to 100
  • Press Save.
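In UCI terms this becomes a section in /etc/config/dhcp, roughly as follows (the start offset of 100 is OpenWRT's default and an assumption here; only the limit was changed above):

```
config dhcp 'Voice'
        option interface 'Voice'
        option start '100'
        option limit '100'
        option leasetime '12h'
```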

That's it. Everything else is automatically determined by the IPv4 address you set in Common Configuration. I set the limit to 100 as that is the dedicated DHCP range for VOIP handsets on that LAN. You could probably safely leave it at 150 (the default) or set it to the range appropriate for your network.

Switch Configuration

Now we need to tell the OpenWRT switch which ports will be active on the Voice VLAN. In this example I select all 4 LAN ports on the WDR4300:

  • Click on Network.
  • Click on Switch.
  • Set Port 2 through to Port 5 as tagged for VLAN 2 (as per picture below)
  • Press Save & Apply.

This picture illustrates the connection between switch ports in OpenWRT and the physical ports on the WDR4300, as well as the final switch configuration: openWRT Switch to WDR4300 Ports

Send it live!

Okay, let's roll the dice and hope it comes back up:

  • Click on System then Reboot.
  • Click on Perform reboot.

If you've not made any typos, it should all come back up and you'll have a working VLAN 2 for voice. If you can't access the box, you've made a typo. Factory reset and try again. Happy VLAN-ing!

The Movie Experience

Phandroid has one of many articles about a man being detained for wearing Google Glass in a cinema [1]. The article states as a “fact” that “it’s probably not smart to bring a recording device into a movie theater”, which is totally bogus. I’ve visited a government office where recording devices were prohibited; they provided a locker for me to store everything that could be used for electronic storage outside their main security zone. That’s what you do when you ban recording devices. Any place that doesn’t have such facilities really isn’t banning recording. The Gadgeteer has the original story with more detail, with an update showing that the Department of Homeland Security was responsible for detaining the victim [2].

There are lots of issues here with DHS continuing to do nothing good and more bad things than most people suspect and with the music and film industry organisations attacking innocent people. But one thing that seems to be ignored is that movies are a recreational activity, so it’s an experience that they are selling not just a movie.

Any organisation that wants to make money out of movies really should be trying to make movies fun. The movie experience has always involved queuing, paying a lot of money for tickets ($20 per seat seems common), buying expensive drinks/snacks, and having to waste time on anti-piracy adverts. Now they are adding the risk of assault, false-arrest, and harassment under color of law to the down-sides of watching a movie. Downloading a movie via Bittorrent takes between 20 minutes and a few hours (depending on size and internet connectivity). Sometimes it can be quicker to download a movie than to drive to a cinema and if you are organising a group to watch a movie it will definitely be easier to download it. When you watch a movie at home you can pause it for a toilet break and consume alcoholic drinks while watching (I miss the Dutch cinemas where an intermission and a bar were standard features). It’s just a better experience to download a movie via Bittorrent. I’ve previously written about the way that downloading movies is better than buying a DVD [3], now they are making the cinema a worse experience too.

I sometimes wonder if groups like the MPAA are actually trying to make money from movies or whether they just want to oppress their audiences for fun or psychological research. I could imagine someone like the young Phillip Zimbardo working for the MPAA and doing experiments to determine how badly movie industry employees can treat their customers before the customers revolt.

Anyone who watches a Jack Ryan movie (or any movie with a Marty-Stu/Gary-Stu character) obviously doesn’t even want to experience the stress of an unhappy ending to a movie. It seems obvious that such people won’t want the stress of potentially being assaulted in the cinema.

In terms of economics it seems a bad idea to do anything about recording in the cinema. When I was 11 I was offered the opportunity to watch a movie that had been recorded by a video camera in the US before it was released in Australia; I wasn’t interested because watching a low quality recording wouldn’t be fun. It seems to me that if The Pirate Bay (the main site for Bittorrent downloads of movies) [4] were filled with awful camera recordings of movies then it would discourage people from using it. A quick search shows some camera recordings on The Pirate Bay; it seems that if you want to download a movie of reasonable quality then you have to read the Wikipedia page about Pirated Movie Release Types [5] to make sure that you get a good quality download. But if you buy a DVD in a store or visit a cinema then you are assured of image and sound quality. If the movie industry were smarter they would start uploading camera recordings of movies described as Blu-ray rips to mess with Bittorrent users and put newbies off downloading movies.

January 28, 2014

Crashing the Web UI in OpenWRT

The web user interface in OpenWRT is a reasonably good tool for configuring and managing OpenWRT devices. However it is possible to crash it with fairly innocuous behaviour.

OpenWRT logo - Wireless Freedom!

If you're doing a large volume of changes to firewall rules via the web UI, it is tempting to use the Save option rather than Save & Apply, as it saves you a little time and spares service disconnections as the rules are reloaded.

In the top-right-hand corner you will see in red the number of unsaved changes that have been made and require Save & Apply to be pressed to commit persistently. If you store too many changes via Save, somewhere >50, before pressing Save & Apply you can crash the web UI.

You'll start to receive partial page loads or complete failures to load. You'll still be able to ssh in and poke around. If you do, top will tell you everything is fine and restarting uhttpd will not correct the problem.

I was pressed for time, and the only way I've found to resolve this thus far is to reboot the device. Everything came good after that, except that any changes accumulated since the last Save & Apply were lost.

The lesson here is to not rely on Save. When you're happy with how a ruleset works, press Save & Apply early and often.

January 27, 2014

OQGraph – bazaar adventures in migrating to git

Background I have been acting as a ‘community maintainer’ for a project called OQGraph for a bit over a year now. OQGraph provides graph traversal algorithms over SQL data for MariaDB.  Briefly, it is very difficult if not impossible to write a SQL query to find the shortest path through a graph where the edges […]

Old Man of the Sea

Southern Tasmania put on a special day and we made the most of it by heading to Roaring Beach for some kayaking and Hamish's first surf:

Roaring Beach on a perfect Summer's day. Hamish and I had been out visiting "pirates" on their catamaran. We're actually pulling onto what passes for a wave at Roaring Beach. Hamish caught his first waves on my 9'4" McTavish Fireball. The old man of the sea comes ashore after dusting off his surfboard for the first time since crossing Bass Strait 5+ years ago.

Concert Variations

…If your heart so desires, and if you have a little time, come and hear the ensemble Variations (dir.: Bruno Cornec), accompanied by the Jardin Musical (dir.: Christine Antoine), in this concert of sacred music built around André Campra’s Requiem, which we hope will touch you with its beauty and the hope it carries. After Victoria’s and before Mozart’s, it is a great masterpiece of the century of Louis XIV that you will be able to enjoy.

All the details are on the poster:

Concert Variations

The twelve singers and six instrumentalists will also perform Marc-Antoine Charpentier’s Reniement de St Pierre, a small Passion oratorio, and the programme will be rounded out with instrumental pieces by the same composer, all in a historically informed style.

We wanted to keep the price modest (EUR 12 full, EUR 6 concession), enough to pay the professional instrumentalists who accompany us while still letting you discover, or come back to hear again, this magnificent repertoire of French baroque music.

Hoping to meet you there…

Filed under: Musique Tagged: français, Marc-Antoine Charpentier, Music

January 25, 2014

Links January 2014

Fast Coexist has an interesting article about the art that Simon Beck creates by walking in snow [1]. If you are an artist you can create art in any way, even by walking in patterns in the snow.

Russ Altman gave an interesting TED talk about using DNA testing before prescribing drugs [2]. I was surprised by the amount of variation in effects of codeine based on genetics, presumably many other drugs have a similar range.

Helen Epstein wrote an interesting article about Dr. Sara Josephine Baker who revolutionised child care and saved the lives of a huge number of children [3]. Her tenacity is inspiring. Also it’s interesting to note that the US Republican party was awful even before the “Southern Strategy”. The part about some doctors opposing child care because it’s “the will of God” for children to die and keep them in employment is chilling.

Jonathan Weiler wrote an insightful article about the problems with American journalism in defending the government [4]. He criticises the media for paying more attention to policing decorum than to content.

Tobias Buckell wrote an interesting post about the so-called “socialised” health-care in the US [5]. He suggests that Ronald Reagan “socialised” health-care by preventing hospitals from dumping dying people on the street. I guess if doing nothing for people until they have a medical emergency counts as “socialised” health-care then the US has it.

Kelvin Thomson MP made some insightful comments about climate change, the recent heat-wave in Australia, and renewable energy [6].

Iwan Baan gave an interesting TED talk about ways that people have built cheap homes in unexpected places [7], lots of good pictures.

Racialicious has an interesting article by Arturo R. García about research into the effects of concussion and the way the NFL in the US tried to prevent Dr. Bennet Omalu publicising the results of his research [8].

Stani (Jan Schmidt) wrote an interesting post about how they won a competition to design a commemorative Dutch 5 Euro coin [9]. The coin design is really good (a candidate for the geekiest coin ever), I want one! Seriously, if anyone knows how to get one at a reasonable price (IE close to face value for circulated, or not unreasonably expensive for uncirculated) then please let me know.

When writing about Edward Snowden, Nathan says “Imagine how great a country would be if it were governed entirely by people who Dick Cheney would call Traitor” [10]. That’s so right, that might make the US a country I’d be prepared to live in.

Andrew Solomon gave an interesting TED talk “Love No Matter What” about raising different children [11].

Aditi Shankardass gave an interesting TED talk about using an EEG to analyse people diagnosed with severe Autism and other developmental disorders [12]. Apparently some severe cases of Autism have a root cause that can be treated with anti-seizure medication.

George Monbiot wrote an insightful article about the way that Bono and Bob Geldoff promote G8 government intervention in Africa and steal air-time that might be given to allow Africans to represent themselves in public debates [13].

Daniel Pocock wrote an informative article about racism in Australian politics and how it is bad for job-seekers and the economy (in addition to being horribly wrong) [14].

Aeon Magazine has an interesting article by Anne Buchanan about the difference between scientists and farmers [15]. She has some interesting points about the way that the lack of general knowledge impacts research, but misses the point that in most fields of study there is a huge problem of people not knowing about recent developments in their own field. I don’t think it’s a pipe dream to be well educated in humanities and science, but I guess that depends on the definition of “well educated”.

Brian Cox gave an interesting TED talk titled “Why We Need the Explorers” about the benefits of scientific research [16].

Yupu Zhang, Abhishek Rajimwale, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau from the University of Wisconsin-Madison wrote an interesting paper about ZFS corruption in the face of disk and memory errors [17]. One thing to note is that turning off atime can reduce the probability of a memory error leading to corrupt data being written to disk; run “zfs set atime=off tank” to do this.

The comedian Solomon Georgio celebrated Martin Luther King day by tweeting “I love you” to racists [18]. It’s an interesting approach and appears to have worked well.

January 24, 2014

Embed RoundCube In ownCloud

This describes how to embed your RoundCube 0.9.5 instance into your ownCloud 6.x installation. You would do this primarily as a convenience for heavy ownCloud users, so they don't need to leave ownCloud to interact with their webmail client.

On Your Server:

  • Download the roundcube app from the ownCloud apps repo.
  • Unzip it in your ownCloud web root (usually /var/www/owncloud)
  • Set the permissions to be readable by your webserver:

In a Debian installation, that looks like this

$ sudo chown -R www-data:www-data /var/www/owncloud/apps/roundcube

Inside ownCloud's Web Interface

  • Scroll down the apps menu and click on the +Apps icon ownCloud Apps
  • Scroll down until you see RoundCube Mail and select it.
  • Press Enable ownCloud Enable RoundCube
  • Click on your name in the top-right hand corner to reveal your menu.
  • Press Admin ownCloud Admin
  • Scroll down to RoundCube Settings.
  • In Basic Settings type the absolute path to the RoundCube installation, usually /roundcube/
  • Press Save. ownCloud RoundCube Settings

Log into RoundCube Mail inside ownCloud

  • Click on your name in the top-right hand corner to reveal your menu again.
  • Press Personal
  • Scroll down to RoundCube Mailaccount and enter your username and password.
  • Click on the Webmail icon in the apps menu on the left ownCloud Webmail

You should now see the app logging you in and after a few seconds, hey presto, there's your RoundCube session inside ownCloud.

ownCloud RoundCube


As part of my activities for the Conseil des technologistes d’HP France, I wrote an article for the Webzine IT experts on Open Source integration and IT governance. Many thanks to Aurélie Magniez for helping me make this publication happen.

Below is a slightly modified version that takes some feedback into account and restores certain turns of phrase I am fond of (though less journalistically correct), as well as certain links (judged too numerous by the Webzine, but I insist on citing my sources, and Tim Berners-Lee didn't invent links for us not to use them, did he? :-))

Happy reading!

Today, in 2013, all entities, public and private, in France as everywhere in the world, make massive use of Free, Libre and Open Source Software (abbreviated FLOSS (1)). A few examples of this state of affairs are provided by the Linux Foundation, such as the 600,000 smart televisions running Linux sold daily, or the 1.3 million Android phones activated each day. The latest report on the world's supercomputers shows Linux in use on 96.4% of them. Companies with a daily impact on our digital environment, such as FaceBook or Twitter, have not only built their infrastructure on a wide variety of FLOSS, but have also published large quantities of code and complete projects under free licences. The same is true of more traditional players in the IT world such as HP or IBM.

Ceci peut sembler normal, car on évolue là dans le monde du numérique, mais le phénomène touche tous les secteurs comme le montre une récente étude de l’INSEE, qui reporte que 43% des entreprises françaises d’au moins 10 personnes utilisent des suites bureautique FLOSS ou encore que 15% des sociétés de construction utilisent un système d’exploitation FLOSS par exemple. Cette large adoption se trouve corroborée par le développement de la filière FLOSS en France, comme rapporté par le CNLL, représentant en 2013 2,5 milliard d’Euros et 30 000 emplois.

Enfin, le secteur public n’est pas en reste avec la publication en septembre 2012 de la circulaire du premier ministre qui reconnait la longue pratique de l’administration des FLOSS, et incite celle-ci, à tous les niveaux, à un “bon usage du logiciel libre”, ce qui se vérifie dans certains ministères comme celui de l’intérieur ou de l’économie. Le ministère de l’éducation nationale a ainsi déployé 23 000 serveurs EOLE sous Linux et utilise de nombreux projets FLOSS pour la gestion multi-fonctions (réseau, sécurité, partage) des établissements scolaires.

Departments involved in FLOSS governance

In this context of generalized usage, questions arise about the specific governance to put in place, or how to adapt the existing one, in order to increase the use of, distribution of, and contribution to FLOSS, for providers as well as users of these technologies. FLOSS indeed has technical as well as organizational specificities (relationship to the community, development methodology, license used) which have an impact on the way it is managed within an organization. Open Source governance must therefore, today, be an integral part of IT governance.

Contrary to what a quick analysis might suggest, the IT department is not the only one concerned by the use of FLOSS. It touches the whole organization, and the governance model must be adapted accordingly. The purchasing department often finds itself bypassed by the use of software components that are downloaded rather than bought through the procedures it has put in place; the HR department has no employment contracts covering employee contributions to FLOSS projects (not to mention interns or subcontractors); the legal department has to learn to distinguish the Apache license from the GPL v2, or v3; the intellectual property department has to consider whether a given modification made to a FLOSS project can or must be contributed back to the project, and in which context; and the CEO may even have to evaluate, when splitting the company into separate legal entities, the impact on the software redistribution that happens on that occasion and the compliance with the licenses used. These are just a few examples of the questions organizations must answer within an IT governance that includes FLOSS.

This is no idle debate: there have now been too many examples, going as far as lawsuits over non-compliance with FLOSS licenses, for companies and public services to ignore the problem. The financial consequences, as well as those for their brand image, can be very significant and cause much more serious damage than what coming into compliance represents (which most often consists simply of publishing the modified source code).

The point here is not to state elements that would tend to restrict the use of FLOSS within an organization. On the contrary, the benefits of using it are too obvious today: the cost reductions brought by mutualization and the technological gains of having such versatile and proven software bases just need to be accompanied by the management measures required to reap all the announced benefits. Risk analysis is part of the daily choices exercised within an organization, and as with a quality process, the impulse must come from the top of the hierarchy. Management must support the creation of the bodies needed to establish FLOSS governance by giving them the required authority and the interaction with the organization's various departments.

Components of FLOSS governance

First of all, the goal is to develop an understanding of the free software ecosystem within the organization so as to grasp its specificities.

The first of these is the license governing FLOSS. As with the use of any software or service, this document describes a user's rights and duties. These differ depending on whether the license is permissive (such as Apache v2, for instance), allowing use (including in non-FLOSS developments) and redistribution with few constraints (legal notices and attribution, for example). It thus allows companies to sell proprietary versions of Android, distributed under the Apache v2 license, embedded in their mobile phones. This is what makes such a license "free". In contrast, there are also copyleft licenses (such as GPL v2, for instance), which allow use as long as the distributed software is accompanied by the (possibly modified) sources used to build it. Thanks to this constraint of keeping the source code available, it allows projects like the Linux kernel to be developed by thousands of developers while remaining accessible in all its variants. This is what makes such a license "free" too: the freedoms are simply seen here from the angle of the project (which remains free forever) rather than from that of the user, as in the other case. This is why all these licenses are considered Open Source by the OSI.

An organization must therefore choose the FLOSS building blocks it wishes to use according to the intended usage, in order to respect the rights and duties codified in the licenses (no more, no less than with a non-FLOSS offering), knowing that, in most cases, the event triggering the application of the license is the distribution of the software. Thus a company may perfectly well use software licensed under the GPL v2, make modifications to it and not publish them, as long as the usage remains internal to its legal structure (a frequent case when software is simply used inside an IT department). On the other hand, if it incorporates the software into a product it sells, it will simply have to comply with the license and provide, alongside the product, access to the corresponding sources.

This is not that complicated in the end, given the enormous gains the company can derive from a proven software building block it has neither to develop nor to maintain. In any case, it is important that its legal department understand the rights and duties of the licenses used so as to provide the required advice, just as when signing contracts with any supplier.

As we can see, training the legal department is the foundation of any governance effort. It is also necessary to organize, within the organization, the relationship between this legal department and the development teams. Not only so that they get to know each other, but also so that they exchange views on their respective needs and understand how each seeks to protect the organization they work for: the former with regard to compliance with legal rules, which includes explaining free licenses to the developers; the latter with regard to the way the development teams use their specific technical components.

Personally, as an engineer by training, I found it very beneficial to talk with various lawyers specializing in free licenses, to better understand their desire to protect the company they work for and how they had to do it in this context. And conversely, I know that technical information and sometimes complex examples of aggregated software components help them, in return, to better take into account the particular cases that may arise. Communication on this subject must go beyond the organization's traditional structures and work like a community.

Moreover, the second specificity of free software is the fact that it is developed by a community of people sharing an interest in that software. Communities come in all sizes (from a single developer doing everything, to several hundred people in large foundations such as Apache or OpenStack). Studying a community before using the free component it produces is a good practice to gather information on its vitality, its organization and its roadmap, in addition to the purely technical characteristics of the component. Sites such as Ohloh can help form an opinion in this area for the projects they track. It is then also relevant to consider how to contribute back. This can consist of bug fixes, code bringing new features, documentation, translations, community management, purchasing professional services from people working on the component, or financial support for an event bringing the community together physically. Some companies, such as the Compagnie Nationale des Commissaires aux Comptes, give an account of their contributions back to a project like LibreOffice.

As before, each of these aspects can be studied in the Open Source chapter of the IT governance. Note that intellectual property management will need particular consideration for contributions in the form of code, in connection with the license used. But this aspect can also have an impact on the employment contracts of employees, subcontractors and interns, in order to determine under which conditions their contributions are authorized.

Once again, the point is to encourage organizations using free software not to be content with being mere FLOSS users, but to be actors in the ecosystem and to contribute in turn to improving it by integrating into the communities. The current dynamism around FLOSS is the result of the very active support of many users. To cite just one example, consider the synergy created around the GENIVI project by its 120+ members, including many companies from outside the IT sector.

Finally, the last specificity of free software is the development methodology used by the community. Although they are all attached to access to the code, methodologies vary enormously from one project to another, depending on its size, its governance style, the tools used and its history. It is important for an organization wishing to interact with a community to understand its culture. While the Linux kernel has a methodology organized around a "benevolent dictator" (Linus Torvalds), who makes the final decisions, and his appointed lieutenants, whom he fully trusts to make the decisions concerning a development branch, other projects such as OpenStack seek to adopt the most "merito-democratic" mode, with the technical leads of the project's branches elected by the developers, and the board representatives elected by all the members of the foundation, whatever their roles. OpenStack's continuous integration process involves precise steps to add a patch, for example: it must first apply to the current tree without error, then receive two positive votes, then pass the whole planned set of automated tests. And this applies both to the technical leads of the project's branches, who propose hundreds of patches per year, and to the occasional contributor making a minor documentation change. By contrast, someone wishing to submit a change to the Linux kernel will have to go through mailing lists, where exchanges can sometimes turn lively, and adapt to the potentially different wishes of the branch maintainers.

FLOSS governance best practices

Faced with all these aspects of this teeming world, a few simple best practices can allow companies to make the right choices and ensure optimal use of FLOSS, getting the most out of it without putting their good reputation at risk through actions frowned upon by the communities.

A first best practice can be to create an Open Source committee. For a large group, for example, it can be useful for general management to appoint representatives of the various departments (purchasing, human resources, IT, technical, legal, intellectual property) to define the policy to put in place. This committee should meet regularly, both during the definition phase of the Open Source part of the IT governance and later on, to revise it based on user feedback and the evolution of projects. It should also be given the means matching its missions. To help in this activity, a Syntec Numérique working group has developed template contracts for suppliers, asking them to provide, with their software delivery, an exhaustive inventory of the licenses used. A presentation on contracts made within this group is also worth consulting. FSF France also offers template employment contract amendments for employees contributing to free projects, and AFUL offers economic models and financing schemes for FLOSS projects or communities. It will then be easy to give this group of people broader missions and authority as FLOSS usage increases. For an SME, a FLOSS correspondent will probably be enough (just as there may be a security or CNIL correspondent), a task that can even be outsourced to companies specializing in the field.

Once the committee/correspondent is appointed and the FLOSS policy established, training cycles should be planned. On the one hand for the legal department, in case it lacks expertise in the specific area of free licenses; the company Alterway, for example, offers training by a lawyer for lawyers. On the other hand, internally, for all staff, to explain this new FLOSS policy.

In parallel, it is important to have a precise view of the current use of FLOSS in your organization, in particular to verify that it complies with the licenses under which the software is used. Non-compliance is more often due to ignorance than to a real will to infringe licenses. This task may seem tedious at first, but in my view it is fundamental to protect yourself, especially if your activity leads you to redistribute software to your customers. Fortunately, tools exist to automate this inventory work and ease the analysis of the licenses used. The first one to recommend is free: FOSSology was developed by HP for its internal use, then freed in 2007 under the GPL v2 license. It collects in a database all the metadata associated with the analyzed software (it can process entire Linux distributions without problem) and allows the analysis of the licenses actually found in the code from a Web interface. Many organizations besides HP, such as Alcatel-Lucent, INRIA or OW2, use it, for some of them coupled with their development forges. But its free availability and ease of deployment do not reserve it for large groups, and it should be systematically used as a natural complement to a source code manager or continuous integration tooling. In addition, non-FLOSS tools can also help with this inventory work by giving access to pre-built databases of known, already inventoried components, and they provide many other functions. The French company Antelink, a spin-off of INRIA, has developed great expertise in this area and has coupled its tooling with FOSSology. Other players such as Blackduck and Palamida also have complementary tooling worth considering.
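To give an idea of what such inventory tools automate, here is a deliberately naive Python sketch of a license scan. The patterns and names below are illustrative assumptions only; real tools like FOSSology match against full license texts and cover many more licenses:

```python
import os
import re

# Hypothetical, minimal first-pass signatures; a real scanner uses
# far more robust matching than these three patterns.
LICENSE_PATTERNS = {
    "GPL-2.0": re.compile(r"GNU General Public License.*version 2", re.I | re.S),
    "Apache-2.0": re.compile(r"Apache License,? Version 2\.0", re.I),
    "MIT": re.compile(r"Permission is hereby granted, free of charge", re.I),
}

def scan_tree(root):
    """Walk a source tree and map detected license IDs to file paths."""
    found = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    text = f.read(65536)  # license headers sit near the top
            except OSError:
                continue
            for lic, pat in LICENSE_PATTERNS.items():
                if pat.search(text):
                    found.setdefault(lic, []).append(path)
    return found
```

Even a toy pass like this makes the point: the inventory is mechanical work that a machine should do, with humans reviewing the report.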

Later on, one can also set up a way of declaring FLOSS usage, or even, if requests are numerous and regular, create a dedicated review committee in charge of evaluating and approving them.

Finally, some reference documents, such as the Syntec Numérique Open Source Guide, the fundamentals of free software governance, the vision of large companies on Open Source governance and maturity, and the FOSSBazaar reference site, will allow you to deepen the topics discussed and provide additional best practices on implementing Open Source governance.

And for those who would like to be accompanied in the process, companies such as Smile, Alterway, Linagora, Atos, Inno3 or HP offer services to help implement Open Source governance. But whether you do it alone or with help, it is time, and I hope this article will have given you some keys to integrate Open Source into your IT governance policy.

(1): Throughout this document, the term FLOSS is used as a generic term covering the notions of "logiciel libre", "Free Software" and "Open Source", while being aware that nuances exist.

Filed under: FLOSS Tagged: AFUL, APRIL, FOSSBazaar, FOSSology, français, Gouvernance, HP, HPLinux, LinuxFoundation, Open Source

January 23, 2014


AltOS 1.3.1 — Bug fixes and improved APRS support

Bdale and I are pleased to announce the release of AltOS version 1.3.1.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a minor release of AltOS, including bug fixes for TeleMega, TeleMetrum v2.0 and AltosUI.

AltOS Firmware — Antenna down fixed and APRS improved

Firmware version 1.3 has a bug in the support for operating the flight computer with the antenna facing downwards; the accelerometer calibration data would be incorrect. Furthermore, the accelerometer self-test routine would be confused if the flight computer were moved in the first second after power on. The firmware now simply re-tries the self-test several times.

I went out and bought a “real” APRS radio, the Yaesu FT1D to replace my venerable VX 7R. With this in hand, I changed our APRS support to use the compressed position format, which takes fewer bytes to send, offers increased resolution and includes altitude data. I took the altitude data out of the comment field and replaced that with battery and igniter voltages. This makes APRS reasonably useful in pad mode to monitor the state of the flight computer before boost.
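For the curious, the compressed position format packs latitude and longitude into four Base-91 digits each. This is a Python sketch of the encoding as described in the public APRS specification, not AltOS's actual firmware code:

```python
def base91_encode(value, width=4):
    """Encode an integer as `width` APRS Base-91 digits (ASCII offset 33)."""
    chars = []
    for _ in range(width):
        chars.append(chr(33 + value % 91))
        value //= 91
    return "".join(reversed(chars))

def compress_position(lat, lon):
    """Compress a lat/lon pair per the APRS compressed position format."""
    y = int(380926 * (90 - lat))   # 4-digit Base-91 latitude
    x = int(190463 * (180 + lon))  # 4-digit Base-91 longitude
    return base91_encode(y) + base91_encode(x)

# Worked example from the APRS spec: 49 deg 30' N, 72 deg 45' W
print(compress_position(49.5, -72.75))  # -> 5L!!<*e7
```

Eight printable bytes for a full position is why the compressed format leaves room in the packet for altitude and telemetry.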

Anyone with a TeleMega should update to the new firmware eventually; there aren’t any critical bug fixes here unless you’re trying to operate the device with the antenna pointing downwards.

AltosUI — TeleMega support and offline map loading improved.

I added all of the new TeleMega sensor data as possible elements in the graph. This lets you see roll rates and horizontal acceleration values for the whole flight. The ‘Fire Igniter’ dialog now lists all of the TeleMega extra pyro channels so you can play with those on the ground as well.

Our offline satellite images are downloaded from Google, but they restrict us to reading 50 images per minute. When we tried to download a 9x9 grid of images to save for later use on the flight line, Google would stop feeding us maps after the first 50. You’d have to poke the button a second time to try and fill in the missing images. We fixed this by just limiting how fast we load maps, and now we can reliably load an 11x11 grid of images.
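The throttling fix described above can be sketched as a simple pacing loop. This Python illustration is not AltosUI's actual Java code; the class name and the injectable clock/sleep are assumptions made so the pacing logic can be shown (and tested) without real waiting:

```python
import time

class RateLimiter:
    """Space operations so no more than `limit` run per `period` seconds."""

    def __init__(self, limit, period=60.0, clock=time.monotonic, sleep=time.sleep):
        # clock/sleep are injectable so the pacing can be exercised
        # in tests without actually sleeping.
        self.interval = period / float(limit)
        self.clock = clock
        self.sleep = sleep
        self.next_ok = clock()

    def wait(self):
        """Block until the next operation is allowed, then reserve a slot."""
        now = self.clock()
        if now < self.next_ok:
            self.sleep(self.next_ok - now)
            now = self.next_ok
        self.next_ok = now + self.interval

# Usage sketch: pace tile downloads at 50 per minute.
# limiter = RateLimiter(50, 60.0)
# for tile in tiles:
#     limiter.wait()
#     fetch(tile)
```

Spacing requests evenly (here 1.2 s apart for 50/minute) trades a slower grid download for never tripping the server's limit, which is exactly the reliability the flight-line use case needs.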

Of course, there are also a few minor bug fixes, so it’s probably worth updating even if the above issues don’t affect you.

January 22, 2014


For those of you working for an HP value added reseller (VAR) and attending the HP ExpertOne Technology & Solutions Summit in Barcelona from the 3rd to the 7th of February, feel free to meet me during one of the sessions I’ll (co-)deliver:

For those of you wondering why and MondoRescue versions are not published more regularly, that gives you a hint ;-)

See you there to talk about these subjects, or anything else as you see fit. I’m around the whole week.

Filed under: Event, FLOSS Tagged: CloudOS, ETSS, Event, HP, HPLinux, Linux, Open Source, OpenStack, OSSI, ProLiant, Red Hat, RHEV

Writing your first web app using Python and Flask

I presented a tutorial a couple of weeks ago on what there is for Python developers between CGI scripts and Django: developers needn’t still be writing CGI scripts in 2014 (it happens), and there are frameworks that meet their needs.

This tutorial introduces the microframework Flask, and shows off a whole bunch of things you can do with it, up to it being a fully-fledged replacement for Django if you’re so inclined.
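To give a flavour, this is roughly the kind of minimal app such a tutorial starts from. It is a sketch of standard Flask usage, not the tutorial's actual example code; the route names are just illustrations:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # The decorator maps the URL path to this view function.
    return "Hello from Flask!"

@app.route("/greet/<name>")
def greet(name):
    # Angle-bracket URL fragments become function arguments.
    return "Hello, %s!" % name

if __name__ == "__main__":
    app.run(debug=True)
```

That's the whole app: no CGI plumbing, and nothing Django-sized either; you grow it one decorator at a time.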


Examples source

In other news, I am now a maintainer of Lettuce, a BDD framework for Python/Django, so expect a few more Lettuce-related blog posts (if I stop ignoring my blog).

January 21, 2014

Rocket Stove

According to Wikipedia, a Rocket Stove is:

an efficient cooking stove using small diameter wood fuel which is burned in a simple high-temperature combustion chamber containing a vertical chimney and a secondary air supply which ensures almost complete combustion prior to the flames reaching the cooking surface.

Last night I was fortunate enough to have a drive of one that Rob and Ken had knocked up during the week. The ingredients were:

  • Some spare square piping
  • A small found drum
  • About 19 kilos of kitty litter

and the result was this:

Rocket Stove

Ignore the larger sticks in the picture above, this rocket stove generated an awful lot of heat using little more than bark and kindling while the kitty litter kept the outside cool enough to touch. It was brilliant.

A rocket stove generates high heat and very little smoke, keeping the air as clean as possible. You can even make one with as few as 16 bricks. There are 6 plans here which are very simple and easy to follow.

As it turns out, there are a lot of possible uses for these little beauties.


Those interested in the previous part of the story can read the previous article on LCA14.

Time goes quickly! It was already time to pack and prepare to fly back. But before that, a full day of interesting sessions was ahead of me.

Keynote at LCA

And the first one, the keynote, was particularly impressive. Even if I must confess that space never really made me dream that much (sorry, Bdale!), I was pretty impressed by the work done to send up 15 cm³ satellites, which Jonathan Oxer called ArduSat (in reference to the Arduino project), and by the willingness to open source everything.

Jonathan Oxer with an ArduSat

Jonathan explained how he worked with NASA to be able to launch his satellites alongside a more “normal” one, in order to benefit from an existing launch. I think his idea of making them accessible to schools, so they can invent their own uses for such a “tool”, is a great one. He also underlined that satellites are still considered weapons in the USA, so it wasn’t possible for him to disclose everything he would have liked to. He mentioned that he is working with Bdale Garbee on trying to relax this.

ArduSat goal

I then attended a session on “Building APIs Developers Love”. While the presentation gave good overall recommendations, I’d have expected a few more concrete examples, mentions of tools to help you do so, and a bit more example code.

Darcy Laycock

After this, I was again most interested in following closely what happens on the OpenStack side, so I attended the three sessions on that topic which followed.

The first was from Robert Collins (HP), on Diskimage-builder. I was curious to see the commonalities with what I do around MondoRescue. But in fact the tool is really focussed on an OpenStack environment (as it should be!) and doesn’t need to manage physical setup, as it targets virtual machines (IIUC). Also, it really builds the target image rather than analyzing existing content to create an image from it, so the architecture is pretty different. For example, to get the kernel needed, they use a qcow2 container in which they look for the kernel they want; on my side, I look in the local file system to do the same. This shows that we could probably merge some of the techniques used, but it would take time and need some work. Maybe I can take that into account when refactoring mindi’s code, as I started to do with the new 3.0 version.

Anyway, yet another tool to put on the TODO list and look at.

Robert Collins

The next one was about log content analysis in the continuous integration process of OpenStack, by Clark Boylan (HP). Clark explained the tools they used and integrated to support this activity (Gearman, Logstash and ElasticSearch), all in the open, as usual. In particular, he showed how to extract useful information from billions of lines of logs in their CI process.

Clark Boylan

Finally, the last session was on bare metal provisioning with OpenStack by Devananda van der Veen (HP), who is the PTL of the Ironic project. Again, my interest relates to MondoRescue, which does similar things, though from yet another angle: deploying OpenStack with OpenStack. The project is still new but progressing rapidly, and it aims typically at replacing Crowbar. I really like the way Devananda presents, as I feel much smarter at the end of his presentations: they are always very clear and easy to follow! He even finished his talk early (which was very convenient for me, as I had to leave quickly to catch my plane!). He could have used that time for a short demo (I like demos; they help me understand better!).

Devananda van der Veen

As you can see, HP’s involvement in OpenStack is really strong: not just throwing money at the project (which we also do!) but bringing expertise, project leads, build resources and code to it.


LCA is really a great event, focussing on various Open Source and Linux communities. It’s also one of the oldest worldwide, the locations are always very nice, and the people there are just awesome! That’s why I’ll continue to submit proposals and hopefully be able to attend again in the future if they are accepted. However, while I think LCA’s reputation is well deserved, the largest event today for our communities is FOSDEM in Europe, attracting more than two thousand people each year over just a weekend, though maybe without the same involvement from the Linux kernel community. I won’t be there this year due to a conflict with another HP event, but I also recommend attending it, as a lot happens there as well.


Reminder: all the photos I took during the event are available online.

Filed under: Event, FLOSS Tagged: Australia, Event, HP, HPLinux, Linux, Open Source, OpenStack

Internal Port Forwarding with OpenWRT 12.09

This describes how to simply forward an internal port out to the Internet in OpenWRT 12.09:

OpenWRT logo - Wireless Freedom!

  • Click on Network.
    • Click on Firewall.
      • Click on Traffic Rules
        • Scroll down to New forward rule.
        • Enter in an appropriate name for the rule. Appending "- out" is useful.
        • Set the Source zone to lan; usually it's the default already.
        • The Destination zone should default to wan. Select wan if it isn't already selected.
        • Press Add and edit....
          • Set the Protocol as appropriate
          • Check the Source zone is correctly set to lan.
          • Check the Destination zone is correctly set to wan.
          • Set Destination port to the port you wish to allow out.
        • Press Save & Apply.

You should now be able to test access on this port out from the lan to the wan, and it ought to succeed :-)
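For those who prefer the command line, the clicks above correspond to a stanza in /etc/config/firewall. This is a sketch of the UCI syntax used by 12.09; the rule name and port are just examples, not values from the steps above:

```
config rule
	option name		'Allow-HTTPS-out'
	option src		'lan'
	option dest		'wan'
	option proto		'tcp'
	option dest_port	'443'
	option target		'ACCEPT'
```

After editing the file, `/etc/init.d/firewall restart` applies the change, the same as pressing Save & Apply in LuCI.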

January 20, 2014

External Port Forwarding with OpenWRT 12.09

This describes how to forward an external port to an internal server / port in OpenWRT 12.09:

OpenWRT logo - Wireless Freedom!

  • Click on Network.
    • Click on Firewall.
      • Click on Port Forwards.
        • Scroll down to New port forward
        • Name the port forward appropriately
        • Select the Protocol as required.
        • Select the External zone, normally wan for a service you wish to forward internally.
        • Enter the external port you wish to forward internally
        • Select the Internal zone, normally lan for a service you wish to forward internally.
        • Select the Internal IP address you wish to receive the forward.
        • The Internal port should default to what you put in the external port. Change if required.
        • Press Add.
      • Click on Traffic Rules
        • Scroll down to New forward rule.
        • Enter in an appropriate name for the rule.
        • Select the Source zone, usually wan for inbound forwarding.
        • The Destination zone should default to wan. Select wan if it isn't already selected.
        • Press Add and edit....
        • Set any fields here as appropriate; ensure the protocol field is correct.
        • At the very least you will need to set the port, as per the previous step.
        • Press Save & Apply.

You should now be able to test this port forward, and it ought to succeed :-)
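The port forward itself also corresponds to a stanza in /etc/config/firewall. This is a sketch of the UCI redirect syntax used by 12.09; the name, ports and internal IP address are illustrative examples only:

```
config redirect
	option name		'Forward-HTTP'
	option src		'wan'
	option src_dport	'80'
	option proto		'tcp'
	option dest		'lan'
	option dest_ip		'192.168.1.10'
	option dest_port	'80'
	option target		'DNAT'
```

As before, `/etc/init.d/firewall restart` applies the change from the command line.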

January 19, 2014


Those interested in the previous part of the story can read the previous article on LCA14.

This time, even though still tired from night work, I had to be on time in order to attend the keynote.

Matthew Garrett's Keynote

Matthew Garrett was talking about security in general, and Secure Boot in particular (a subject he has been covering for a long time now). This was complementary to the keynote given on Day 1. Matthew insisted on the need, in a post-Snowden era, for technologies such as Secure Boot to improve security, and these are more and more often turned on by default. Having a trusted and verified boot chain is a critical aspect of system security, which Secure Boot provides, while letting users replace their keys, so leaving them their freedom (on x86). And today Linux distributions are indeed supporting Secure Boot on UEFI pretty well, as I was able to test myself during the UEFI Plugfest we had last year. Matthew in fact posed more questions than he gave answers around security: what level of trust should we place in hosting providers, software, hypervisors, firmware or hardware manufacturers? Of course, as he rightly said, “Imperfect security is better than no security”, and he mentioned that most attacks are model-specific, so we may still have a certain level of trust in our platforms. He also reminded us that security agencies may not be the biggest concern with regard to security.

And BTW, as Matthew said, “if we don’t trust our hardware, then why are we running a computer?”

I found that keynote very relevant; it makes you think once more about how you treat security and how you can do better to protect your content.

Matthew Garrett

After the traditional morning coffee, during which people exchange views on various topics of their choice, such as shoes, it was time to attend the two sessions of the morning.

Coffee pause

The first was on Python Packaging 2.0 by Nick Coghlan, which I was curious to see in order to measure its impact. Unfortunately, it went rapidly into a lot of concepts intimately linked to Python, which I’m not savvy enough in to follow closely, and PEP proposals for evolutions that I’d need to read before being able to understand things better. So no doubt a good presentation, but I wasn’t the right audience ;-)

Nick Coghlan

And it was the same with the next one. Very good presentation from Katie Miller on functional languages (and in particular Elixir), but that’s very far from what I do and know. I didn’t have a better choice for that time slot, so at least I enjoyed the pictures she used.

Things got much better for me after lunch. The first session was full, and after having seen it I understood why!

Full session on mosh

mosh belongs to that category of small software that rapidly becomes mandatory in your environment. If you use ssh, and screen to manage disconnections, then mosh is for you. What does it do for you? Well, it uses ssh to log in to your remote system, and after that it communicates with it using UDP on another port, keeping the connection available and ready to use even if you disconnect and reconnect from another IP — typical usage when you hibernate to go back home. It also echoes characters locally, and underlines them until they are confirmed by the other side. The demo was pretty impressive from that perspective. I always found screen annoying, as it is unable to keep my keyboard shortcuts, and here mosh does the job for you. The only point is that you need an open port on your firewall for the UDP dialog to go through, but that’s a light drawback I’m ready to accept to benefit from the services it provides. Ah, and ^C is just immediate! That was really a useful and great presentation. Thanks Jim Cheetham for making it obvious it was the tool I needed!

Jim Cheetham

The next session I chose, on Petitboot, was also a good one. Again the principle is easy to understand: what if you used Linux (the kernel) as your boot loader, instead of redeveloping driver support for each boot loader and dealing with bugs already solved in the Linux kernel? That’s the simple yet powerful idea behind Petitboot. Jeremy clearly explained the advantages of this approach, such as native support for a network stack and easy addition of client protocols for http, ftp, nfs… From that initial kernel acting as boot loader, the launch of the “real” kernel is done with kexec. Definitely worth exploring. So again, added to my TODO list, which never gets empty!

Jeremy Kerr

After the afternoon break, it was time to go to Bdale’s session. I arrived early in the room, and that was a good choice, as 5 minutes before the start the room was completely full and they prevented more people from entering.

Full room for Bdale

He explained that when the LCA organizers contacted him for a talk in 2013, he was still sorting out issues following the fire disaster that struck his house, and wasn’t really thinking about a talk for LCA. But they convinced him to just talk about that, as they really wanted to have him on board. So it was indeed not a technical talk on IT, but rather a technical talk on life during and after such a disaster. It was a shocking story of course, and Bdale, despite the humour he always keeps in his talks, was still touched months after the event as he told us the story. He also gave some hints on keeping important papers outside of the house, or scanning them to keep them on a mobile computer you can take with you. Surprisingly (for me) he insisted on insurance contracts and the good choices he made without really realizing it when he was younger. Maybe insurance companies in Europe explain the consequences of such choices to us more when we make them. As I’m a bit paranoid about valuable data, I already have scans of the most important documents, and I make backup copies regularly on a laptop and on a server, to always have many versions of each file for both backup and disaster recovery purposes. Maybe my work on MondoRescue made me that way :-) Bdale received a lot of help from the community during this period, and he concluded by saying that people were awesome, and you could hear from the tone of his voice that he really meant it. Pretty moving presentation, I must confess.

Bdale Garbee

I didn’t attend the last session of the day due to a call with my daughter, and later I attended the Dinner for Professionals, where I spent most of the time talking with Thomas Petazzoni again. Robert Collins was HP’s representative introducing the session, and he did it briefly and with lots of energy!

Robert Collins

Waiting now for the last day !

Filed under: Event, FLOSS Tagged: Australia, Event, HP, HPLinux, Linux, Linux kernel, Open Source, Secure Boot

January 16, 2014

DVD playback in GStreamer 1.0

Some time in 2012, the GStreamer team was busy working toward the GStreamer 1.0 major release. Along the way, I did my part and ported the DVD playback components from 0.10. DVD is a pretty complex playback scenario (let’s not talk about Blu-ray).

I gave a talk about it at the GStreamer conference way back in 2010 – video here. Apart from the content of that talk, the thing I liked most was that I used Totem as my presentation tool :)

With all the nice changes that GStreamer 1.0 brought, DVD playback worked better than ever. I was able to delete a bunch of hacks and workarounds from the 0.10 days. There have been some bugs, but mostly minor things. Recently though, I became aware of a whole class of DVDs that didn’t work for a very silly reason. The symptom was that particular discs would error out at the start with a cryptic “The stream is in the wrong format” message.

It turns out that these are DVDs that begin with a piece of video that has no sound.

Sometimes that’s implemented on a disc as a video track with accompanying silence, but in the broken case the DVDs have no audio track at all for that initial section. For a normal file, GStreamer would handle that by not creating any audio decoder chain or audio sink output element and just decoding and playing video. For DVDs though, there are very few discs that are entirely without audio, so we’re going to need the audio decoder chain sooner or later. There’s no point creating and destroying it as the audio track appears and disappears.

Accordingly, we create an audio output pad, GStreamer plugs in a suitable audio output sink, and then nothing happens because the pipeline can’t get to the Playing state – it is stuck in the Paused state. Before a pipeline can start playing, it has to progress through the Ready and Paused states and then to Playing. The key to getting from Paused to Playing is that each output element (video sink and audio sink, in our case) has to receive some data and be ready to output it – a process called pre-roll. Pre-rolling the pipeline avoids stuttering at the start, because otherwise the decoders would have to race to deliver something in time for it to get on screen.

With no audio track, there are no actual audio packets to deliver, and the audio sink can’t pre-roll. The solution in GStreamer 1.0 is a GAP event, sent to indicate that there is a space in the data, and elements should do whatever they need to do to skip or fill it. In the audio sink’s case, it should handle it by considering itself pre-rolled and allowing the pipeline to go to Playing, starting the ring buffer and the audio clock – from which the rest of the pipeline will be timed.

Everything up to that point was working OK – the sink received the GAP event… and then errored out. It expects to be told what format the audio samples it’s receiving are in, so it knows how to fill in the gap… but when there’s no audio track and no audio data, it was never being told.

In the end, the fix was to make the dummy place-holder audio decoder choose an audio sample format if it gets a GAP event and hasn’t received any data yet – any format, it doesn’t really matter as long as it’s reasonable. It’ll be discarded and a new format selected and propagated when some audio data really is encountered later in playback.

That fix is #c24a12 – later fixed up a bit by thiagoss to add the ‘sensible’ part to format selection. The initial commit liked to choose a samplerate of 1Hz :)

If you have any further bugs in your GStreamer DVD playback, please let us know!

January 14, 2014

Perth by night

Those interested in the previous part of the story can read the previous article on LCA14.

I really had to sleep a bit that night (5 hours is a minimum these days), so I missed the morning lightning talks, but it doesn’t seem I missed any major announcement there. But I didn’t want to miss the start of the main conferences.


As a follow-up to the previous day, and in order to know better what we at HP do for the OpenStack project, I chose to attend Elisabeth Krumbach Joseph’s talk describing the management of the project infrastructure. She gave a good description of the tools used, including some navigation through the various web interfaces they are using. She insisted on the fact that all the infrastructure is managed the same way as the rest of the OpenStack project, which means in the open, and using a gated trunk as well, which is much less usual for an infrastructure than for a project. It sounded to me like a risk of adding too much bureaucracy, maybe more suited to an established infrastructure not moving too much, but they seem to deal with it without problems. Definitely an interesting approach, to have peer review of infrastructure configuration changes before applying them. Maybe my Solution Center could get some inspiration from it. We’ll see when I’m back. About 6 people manage around 35 servers for the OpenStack infra (all virtual, so they do not deal with HW issues).

As she had time left, I think next time she could demo some of the tools she mentioned in more detail, and give a bit more detail around sizing (which I gathered from the questions I asked).

Elisabeth Krumbach Joseph

As I’m always interested in build and execution environments, I then attended the presentation from Jay Coles on Linception: playing with containers under Linux. It took me nearly the full presentation to understand what it really was about, as I first thought it would be an explanation of LXC or something similar, but it was in fact yet another implementation of containers for Linux, using various capabilities provided by the kernel. (Thanks to the audience member who asked for a demo at the end, which made it completely clear to me!) And re-reading the description of the talk, I think the content wasn’t completely what was described. Anyway, it was interesting, but do we really need yet another solution in that area, after LXC, libvirt-lxc and systemd-nspawn? Have a look yourself to make up your own mind. I’m not completely convinced.

Jay Coles

After lunch, Mark Nottingham explained to us what HTTP 2.0 is all about. Plenty of good reasons to improve HTTP for sure, with lots of graphs demonstrating the issues. But guys, please stop making stuff so administrator-unfriendly! Compressing headers may be a win for resources, but for sure a loss for us. We won’t be able to do a simple telnet host 80 and type GET / HTTP/1.0 to check things quickly. I don’t think that helps in the end. Of course modifications are required to solve real issues. But be smart and invent something more friendly for us.

Mark Nottingham.

Then there was a talk from someone I really wanted to see, if only for his initiative, and moreover because Martin Krafft was talking about configuration management for system administration, which is something I’m also working on for my systems, even if I don’t have the final solution at the moment (Ansible looking closest to what I like). And his presentation was in line with what I expected: he prefers pets to cattle, and I must confess so do I ;-) He has strong opinions (he is a Debian advocate ;-)) on system management, and most of them sound pretty accurate to me. Martin developed a tool called reclass to help him encapsulate Salt and Ansible and make them do what he wants. Definitely worth exploring, even if it seems that the latest versions of Salt have similar features, making the tool less useful. Anyway, I prefer Ansible (for the SSH communication aspect), so it’s added to my TODO list, which never gets empty!

Martin Krafft

After that talk, I could not miss the only one given by a French hacker! Especially as I have known Thomas Petazzoni for a long time now, have loved the kernel presentations he gave in the past, and was interested to see a new tool such as Buildroot, which could be relevant in the MondoRescue context. And it seems I wasn’t the only one, as the room was pretty packed for his presentation, which is good for his first LCA.

Thomas Petazzoni

Is it chauvinism? I found his presentation one of the best of the week. He gave a very detailed view of the tool’s features, giving concrete examples of usage corresponding to the various configurations he was describing, while keeping everything crystal clear. The presentation was easily understood by the audience. The tool itself is pretty impressive, recompiling in order everything needed to make a standalone Linux-from-scratch system ready to be embedded on any target. Of course, Thomas knows the subject extremely well, as he does this for a living for Free Electrons, which appeared in Jon Corbet’s stats as a major ARM kernel contributor company.

He even mentioned the help that is provided with regard to the licenses of the software contained in the build, which is a topic too few developers take seriously, and it was great to see him mention it.

My only regret: I’d have loved a short demo of one of his existing systems, but honestly he had no additional time for it, so that could be worth adding for a one-hour time slot. Anyway, well done Thomas, and I’m looking forward to attending more of your presentations, as I always learn stuff when I do, which makes my day :-)

And finally, I finished the day with another of my favorite speakers at such events, Ric Wheeler from Red Hat, who was talking about file systems and storage, of course! But this time he was exploring the concerns that new storage systems bring to the Linux kernel community: flash components used in memory slots, and post-flash components with low latency and high storage capacity (such as HP’s own memristor). But even if it creates some difficulties for the kernel hackers to keep up with the pace of storage technology, this is really good news. First, storage technology will now also make a giant leap, as networking and CPUs did in the past; we will soon have TBs of data in a very small form factor, providing optimal bandwidth and latency. Second, the Linux community is part of the developments made around these new devices, instead of running behind as was the case in its infancy. And finally, it will change a lot the way we architect solutions in the future, especially thinking about the introduction of optics to interconnect all these new components. So the next 10 years are at least as bright as the past 20 were.

Ric Wheeler

That day was also the day of the Penguin Dinner, which was within walking distance of the University.

Penguin Dinner

And I must confess I spent that dinner as a lazy guy, speaking French with Thomas and also two other French people from New Caledonia! We were near the Swan River and believe me, it was pretty cold compared to the Monday, so I ended up with a small flu :-( But that was a nice time remaking the world again and discussing technology.

French discussion at LCA

The place was really neat, with a view over the Swan River up to the city, and we stayed there a couple of hours before going back to our hotels, tired but happy with the day, and just waiting for the next one!

Swan River view

BTW all videos are now online, so you can easily form your own opinion by watching them, as if you were there (you just lack the sun, the wine, and the nice talks ;-)) Cf:

Perth by night

Filed under: Event, FLOSS Tagged: ansible, Australia, Event, HP, HPLinux, Linux, Open Source, OpenStack, presentation


My talk on building activism around the TPP at LinuxConf 2013 is now online – thanks to the AV team for all of their hard work, and for catching me to re-record the talk when they discovered the first recording had gone awry. You can listen to the full talk here [mp4 on the linux site] [or watch on YouTube], and a truncated version of the first version of the talk I gave, mostly notable for catching some good discussion in the questions (which begin a few minutes in – look for the slide that just has my name and contacts on it), here [mp4].

Abstract: This presentation suggests a variety of strategies and tactics that the Linux community might adopt when acting on political issues, with the Trans Pacific Partnership Agreement (TPPA) being of particular concern at the moment. The TPPA is a multinational free trade agreement (FTA), and will probably build on and extend the damaging provisions imposed by the 2004 Australia-US FTA. The extent of damage likely to be done by the TPPA is not yet known, as only draft copies have been leaked and the negotiations remain secret.

Currently, free and open source communities often find ways to deal with problematic laws, such as the copyright extensions and restrictions on circumventing technological restrictions brought in by the 2004 Australia-US FTA, with clever hacks of the legal system (such as copyleft and Creative Commons licenses); workarounds which meet the letter of the law (such as providing Linux installations without potentially-illegal codecs); or ignoring laws which seem unlikely to be enforced. However, all of these strategies have problems. Hacks can only go so far; relying on a lack of enforcement is risky; and workarounds make free and open source software less accessible for novice users and others who would prefer software that works out of the box. Part of the work of promoting free and open source software must therefore involve activism that is directly aimed at the TPPA and other FTAs.

Important activism did take place around the 2004 Australia-US FTA, including work within Linux Australia led by Rusty Russell, Kimberlee Weatherall and others. Much of this took a similar form to activism currently happening around the TPPA: the focus has been on lobbying, letter-writing, and media relations. Coalition-building and other activism around the TPPA, as with the 2004 FTA, has predominantly taken place within tech communities. However, while this work has been valuable, it may be useful to explore ways to build alliances with other communities and to draw on a broader range of activist tactics. This discussion will draw on some of the lessons learned from relatively successful attempts to oppose FTAs in the past, including protests in the late 1990s around the Multilateral Agreement on Investment and World Trade Organization negotiations, as well as more recent FTAs such as those between the US and Malaysia and the Free Trade Area of the Americas proposed by the US. Drawing on this work, I will suggest tactics for effective action, including use of a spectrum of allies model, organizational models which facilitate tiered levels of participation, and creative use of the Overton window. I will also outline some of the key groups opposing the TPPA outside of the tech community in both Australia and the US.

Many thanks go to Rusty Russell, Brendan Molloy, and Nathalie Latter for their help preparing the talk.


X bitmaps vs OpenGL

Of course, you all know that X started life as a monochrome window system for the VS100. Back then, bitmaps and rasterops were cool; you could do all kinds of things with simple bit operations. Things changed, and eventually X bitmaps became useful only off-screen for clip masks, text and stipples. These days, you’ll rarely see anyone using a bitmap — everything we used to use bitmaps for has gone all alpha-values on us.

In OpenGL, there aren’t any bitmaps. About the most ‘bitmap-like’ object you’ll find is an A8 texture, holding 8 bits of alpha value for each pixel. There’s no way to draw to or texture from anything where each pixel is represented as a single bit.

So, as Eric went about improving Glamor, he got a bit stuck with bitmaps. We could either:

  • Support them only on the CPU, uploading copies as A8 textures when used as a source in conjunction with GPU objects.

  • Support them as 1bpp on the CPU and A8 on the GPU, doing fancy tracking between the two objects when rendering occurred.

  • Fix the CPU code to deal with bitmaps stored 8 bits per pixel.

I thought the third choice would be the best plan — directly sharing the same object between CPU and GPU rendering, avoiding all reformatting as things move around in the server.

Why is this non-trivial?

Usually, you can flip formats around with reckless abandon in X, as it has separate bits-per-pixel and depth values everywhere. That’s how we do things like 32 bits-per-pixel RGB surfaces; we just report them as depth 24 and everyone is happy.

Bitmaps are special though. The X protocol has separate (and overly complicated) image formats for single bit images, and those have to be packed 1 bit per pixel. Within the server, bitmaps are used for drawing core text, stippling and creating clip masks. They’re the ‘lingua franca’ of image formats, allowing you to translate between depths by pulling a single “plane” out of a source image and painting it into a destination of arbitrary depth.

As such, the frame buffer rendering code in the server assumes that bitmaps are always 1 bit per pixel. Given that it must deal with 1bpp images on the wire, and given the history of X, it certainly made sense at the time to simplify the code with this assumption.

A blast from the past

I’d actually looked into doing this before. As part of the DECstation line, DEC built the MX monochrome frame buffer board, and to save money, they actually created it by populating a single bit in each byte rather than packing 8 pixels per byte. I have this vague memory that they were able to use only 4 memory chips this way.

The original X driver for this exposed a depth-8 static color format because of the assumptions made by the (then current) CFB code about bitmap formats.

Jim Gettys wandered over to MIT while the MX frame buffer was in design and asked how hard it would be to support it as a monochrome device instead of the depth-8 static color format. At the time, fixing CFB would have been a huge effort, and there wasn’t any infrastructure for separating the wire image format from the internal pixmap format. So, we gave up and left things looking weird to applications.

Hacking FB

These days, the amount of frame buffer code in the X server is dramatically less; CFB and MFB have been replaced with the smaller (and more general) FB code. It turns out that the number of places which need to deal with individual bits in a bitmap are now limited to a few stippling and CopyPlane functions. And, in those functions, the number of individual read operations from the bitmap are few in number. Each of those fetches looked like:

bits = READ(src++)

All I needed to do was make this operation return 32 bits by pulling one bit from each of 8 separate 32-bit chunks and merge them together. The first thing to do was to pad the pixmap out to a 32 byte boundary, rather than a 32 bit boundary. This ensured that I would always be able to fetch data from the bitmap in 8 32-bit chunks. Next, I simply replaced the READ macro call with:

    bits = fb_stip_read(src, srcBpp);
    src += srcBpp;

The new fb_stip_read function checks srcBpp and packs things together for 8bpp images:

/*
 * Given a depth 1, 8bpp stipple, pull out
 * a full FbStip worth of packed bits
 */
static inline FbStip
fb_pack_stip_8_1(FbStip *bits)
{
    FbStip      r = 0;
    int         i;

    for (i = 0; i < 8; i++) {
        FbStip  b;
        FbStip  p;

        b = FB_READ(bits++);
#if BITMAP_BIT_ORDER == LSBFirst
        p = (b & 1) | ((b >> 7) & 2) | ((b >> 14) & 4) | ((b >> 21) & 8);
        r |= p << (i << 2);
#else
        p = (b & 0x80000000) | ((b << 7) & 0x40000000) |
            ((b << 14) & 0x20000000) | ((b << 21) & 0x10000000);
        r |= p >> (i << 2);
#endif
    }
    return r;
}

/*
 * Return packed stipple bits from src
 */
static inline FbStip
fb_stip_read(FbStip *bits, int bpp)
{
    switch (bpp) {
    case 1:
        return FB_READ(bits);
    case 8:
        return fb_pack_stip_8_1(bits);
    }
    return 0;
}
It turns into a fairly hefty amount of code, but the number of places this ends up being used is pretty small, so it shouldn’t increase the size of the server by much. Of course, I’ve only tested the LSBFirst case, but I think the MSBFirst code is correct.

I’ve sent the patches to do this to the xorg-devel mailing list, and they’re also on the ‘depth1’ branch in my repository.



Eric also hacked up the test suite to be runnable by piglit, and I’ve run it in that mode against these changes. I had made a few mistakes, and the test suite caught them nicely. Let’s hope this adventure helps Eric out as he continues to improve Glamor.

January 13, 2014

Installing OpenWRT on a TP-Link TL-WDR4300

The TP-Link WDR4300 (v1.6) has an impressive array of hardware features and 128M of RAM for about $160. Unfortunately it comes with firmware that the vendor has deliberately crippled and a known backdoor.

Backdoor warning

Fortunately the amazing OpenWRT exists so we can make this nifty little machine work really hard for us doing a wide array of things. Here's how you get OpenWRT onto this router:


Install the Firmware

  • Log into the WDR4300 at using admin:admin
  • Select System Tools then Firmware Upgrade
  • Press Browse and select the firmware you just downloaded
  • Press Upgrade then OK.
  • You will then see the firmware upgrade in progress.
  • After a couple of minutes this should switch to Software Upgraded Successfully! and commence restarting.

Post Flashing

  • The network range will have changed to, so you will need to obtain a lease on this range by restarting your network services or disconnecting / reconnecting your network cable
  • The new OpenWRT firmware is on so point your browser in that direction
  • You'll be greeted with a warning that there is no password set for this device; click the link to set one.
  • Enter your new password and confirmation, then scroll down and press Save & Apply

That's it for basic installation. You've now got a powerful little router running OpenWRT, and the possibilities abound.

January 12, 2014

Snakes! Cicadas! On a Farm!

Coming to a cinema near you these holidays...

There are no cicadas in Tasmania. Coming from NSW, the summer hum of cicadas was a familiar sound. Since moving to Tasmania five years ago I've never heard one.

You get to know every sound in the country. There's no background city hum, so you get to recognise each distinct sound: birds, frogs, insects. A loud new sound stands out dramatically and prompts you to rush outside to check it out...

There was what looked like a small "black prince" on the fence post which flew off before I could photograph it.

So, I chased it 20M up the driveway to another fence but as I got to within 1M it flew into the bucks' paddock.

I ran back down the drive, through the gate, and ran in the direction I last saw it flying, then stood very, very still. A few minutes later it began to sing again, so I approached the source of the sound in a tuft of grass very carefully... and he flew off behind the bucks' shed!

I bolted up to the bucks' shed, ran behind it, stopped, waited and listened. Then I heard a sound, a new sound. My peripheral vision caught some movement at my feet and my heart leapt out of my chest:

Tiger Snake?

This jet black beauty was over 2M long, thicker than my wrist, its head the size of my fist (zoom in), and was probably a tiger snake. Fortunately it was significantly more terrified of me (it could undoubtedly see up the utility kilt) and was heading as fast as it could into its bolt hole, where I left it.

Once my heart rate was back under control, I listened for a few minutes for the cicada but there was nothing to be heard so I sulked off back out of the paddock.

As I neared the gate, above the sound of frogs and crickets I heard the new sound again, from the direction of the bucks' shed! So I ran back through the paddock like I was in a Dr Who episode, up the hill, behind the bucks' shed and stood with my ears to the wind and my eyes to the ground.

I heard the sound again and walked steadily to the wildlife corridor, through the gate towards an old, moss covered fence post and started taking photos from about 2M away but this time I was eventually able to get only centimeters away:

Black Prince Cicada

What a little beauty. Their sound is very unlike that of cicadas in NSW and I have thus far only heard and seen this one. Good luck mating little fella!

Fun Facts!

So of course I did a little research on the intarwebs when I got back:

many Tasmanian invertebrates (and, of course, vertebrates and plants) are of Gondwanan origin, some reveal even more ancient lineages that extend back to Pangea - the supercontinent that predated Gondwana beyond 200 million years ago.

Nearly half of the invertebrate species found within the Tasmanian Wilderness World Heritage Area, for example, are found nowhere else on Earth.

IF YOU thought the noise of cicadas in the bush around Northern Tasmania this year was louder than usual, you would be correct.

Launceston entomologist Simon Fearn said our first wetter summer for a while had seen the hatching of millions of the large and loud black insects.

Many people around the world regularly eat cicadas. They are known to have been eaten in Ancient Greece as well as China, Malaysia, Burma, Latin America, and the Congo. Female cicadas are prized for being meatier.

LCA2014 - Friday

These are my raw notes from talks held on Friday at LCA2014. May contain errors, mis-heard quotes. Also completely un-reviewed or spell checked:


Deploying software updates to ArduSat in orbit by Jonathan Oxer

Arduino and Cubesat - ArduSat can allow every child to put an experiment into space as part of their schooling

  • One week's experiment time is < $300
  • Well affordable for schools
  • 30 ArduSats are going to be launched every year over the next 5 years
  • 500 thousand school kids will be participating
  • Power supply module has 2800mAh
  • Uses amateur band radio for communication (encrypted)
  • Orbit is 90 minutes long
  • CubeSat gets slammed from -40C to +80C every 90 minutes


  • Release of designs without attracting black helicopters
  • Satellite technology is listed as a weapon
  • Collaborating with UNSW on a flight computer

The future

A water-propelled CubeSat with 2.5L of water has enough propellant to reach Mars.

Raspberry Pi Hacks: Building Great Pi Projects by Tom Callaway and Ruth Suehle

  • Authors of "Raspberry Pi Hacks" - discount code: authd
  • Designed for educational use and intended for Python
  • There are solar powered raspberry pi labs
  • have free shipping to Australia
  • HDMI display, PAL / NTSC / DSI. No VGA
  • Atrix Lapdock can be used as a screen.
  • USB touch screens available
  • Occidentalis is a good hacking distro
  • NOOBS good for kids hacking on rPI
  • Needs a clean 5v - do not use laptop USB ports for power. Do buy a power supply
  • Do grab a case.
  • Useful mini camera available
  • Lego (TARDIS) case (you can buy one too)
  • Use an old gameboy case.
  • PIP Boy case
  • Game emulating up to PS2
  • Scratch is included - teaches coding
  • Google Coder for teaching HTML, JavaScript and Node.js
  • New SETI programme
  • PiGate - rPI stargate
  • PiLorean!
  • PiFM - turn it into an FM transmitter
  • Tux photobooth
  • RaspBMC - XBMC
  • Ardiuno hacks available
  • R2D2 pi :-)
  • Video game table out of IKEA parts

Simple DIY Engine Management for simple software hackers by Josh Stewart


  • Existing Options are:
    • Too closed
    • Too expensive
    • Too complex
    • Too hacker unfriendly
  • Good for learning how it works
  • Goal: Engine management system for $100 - usable in the real world


  • Engine Speed
  • Engine Position
  • Engine Load
  • Engine Temp
  • Air temp
  • O2 reading


  • Volumetric Efficiency - measuring the engine's efficiency at pumping air
  • Often only about 80% VE
  • Fuel algorithm is a fixed constant based on fuel required against 100% VE
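
The fuelling calculation described in these notes can be sketched as follows; the function and parameter names are my own illustration, not from the talk:

```python
def injector_pulse_width_ms(req_fuel_ms, ve_percent, map_kpa, baro_kpa=100.0):
    """Speed-density fuelling sketch: req_fuel_ms is the fixed constant
    (the injector pulse width that delivers the right amount of fuel at
    100% VE and atmospheric pressure); scale it by the measured VE and
    manifold pressure to get the actual injector open time."""
    return req_fuel_ms * (ve_percent / 100.0) * (map_kpa / baro_kpa)
```

For example, at 80% VE and full manifold pressure, an engine with a 10ms base constant gets an 8ms pulse.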

The Bits

  • Arduino Mega 2560
  • IO Shield
    • 4 injector circuits
    • 4 ignitions control circuits
    • Relevant protection for 12v power


  • Math capabilities
  • Slow IO
  • Results with timing / accuracy


  • Has an autotune function

Processing Continuous Integration Log Events for Great Good by Clark Boylan

Project Gating

  • Tests run on all proposed patches
  • Code merges are gated on tests
  • Ensure code quality
  • Protects developers
  • Protects code tree quality
  • Tests run continuously.

Log Archive

  • Logs on disk
  • Fronted by Apache and mod_autoindex
  • Lots of data, no information


  • Often not the fault of newly submitted code
  • 1% failures (race conditions / hardware)
  • "recheck" comments automate rechecks
  • Bug:failure relationships are manual

Need Something Different

  • Accessible logs with good UI, REST API & query language
  • Lots of existing options.
  • Went with Logstash, Kibana, ElasticSearch
  • Near realtime results


  • Collects logs from untrusted Jenkins slaves
  • Redis is unreliable at this scale
  • Can't index all logs due to volume
  • 1.3 billion logs events archived
  • 72 thousand queries
  • Doesn't currently tie back to Launchpad
  • CRM114 SPAM filter helps identify probable success / failures "probabilistic diffs"

Provisioning Bare Metal with OpenStack by Devananda van der Veen

  • Blocker to adoption is complexity of OpenStack install.
  • To resolve this installation to bare metal is critical
  • Ironic-conductor and ironic-api manage the hardware drivers and abstract them from Nova and other components

OpenStack is not a virtualisation layer, it's an abstraction layer

  • Consistent updates
  • nodes remapped to conductors
  • take-over hooks fire up

Conference Close

  • LCA2015 in Auckland
  • LCA2016 in Geelong

Lightning Talks

  • Freedombox 0.2 release coming out in a few weeks
  • OneRNG - hardware random number generator
  • Central Coast LUG starting up
  • dlect - Lecture Recording Downloader
  • Debian is cool
  • EFA need volunteers

January 10, 2014

2014 – Day 5 – Finish

Winner Rusty Wrench Award: Andrew Tridgell


Host of LCA2015: Auckland!!



Lightning Talks part 2

  • My toothbrush has a serial number
    • after sales support
    • can they find it for me?
    • In the post-Snowden world this should be investigated
  • DIY Book Scanning for Fun
    • Scanned book useful for good reasons
  • Freedom Box project update
    • Almost ready for 0.2 release which will be pretty good
  • OneRNG
    • Open Hardware, Random number generator
    • Trustable, see raw or AES whitened
    • trying various options
  • Central Coast LUG
  • Bitcoin Myths
    • Anonymous – Nope, all transactions are recorded
    • Bubble – nope, infrastructure
    • Giant Ponzi scheme – Not sold as investment, no claims
  • dlect – Lecture recording downloader
    • Looking for help, and to extend to other universities
  • Debian in Australia
    • Trying to get Debian Australia mailing list started
  • Bitcoin architecture applied to capital markets
  • Learning Opportunities in Rocketry Software
    • Maths makes my head hurt
  • Electronic Frontiers Australia
    • Would like to invite you to volunteer and drink beer
  • LA does other things
    • pycon AU in August
    • Drupal camps
    • Barcamp
    • Join a user group
    • hacker space
    • add your blog to our planet

2014 – Day 5 – Session 2

Provisioning Bare Metal with OpenStack by Devananda van der Veen

  • Tried to use the existing Nova tool (which was used for VM provisioning) but hit all sorts of limitations
    • Hacked it a lot and then gave up
  • Created a new project “Ironic” for bare-metal provisioning in May 2013
  • Status
    • Being worked on by lots of devs
    • Not in main release yet
    • Some push to simplify openstack installation – Triple-O (Openstack on Openstack)
    • “Openstack is not a virtualisation layer”
  • Security not there yet
  • Driver Interface
    • 3 classes of interfaces: core, common, vendor
    • core: power management, deploy,
    • common: console, rescue
    • vendor: firmware? boot-from-volume? something-else?
  • Architecture
    • REST API
    • DB
    • conductor services
    • RPC
    • ( the slide makes more sense )
  • Many conductors, Many drivers
  • If the cluster changes
    • take-over hooks
    • consistent hash updates
    • node(s) re-mapped to conductors
  • Feature equiv to bare-metal in a few months
  • Usable by that point, eventually to replace bare-metal, may require re-deployment
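
The "consistent hash updates / node(s) re-mapped to conductors" point refers to consistent hashing; a generic sketch of the technique (the names below are illustrative, not Ironic's actual code):

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Stable 64-bit hash derived from MD5, good enough for ring placement
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

class ConductorRing:
    """Map node IDs onto conductors via a consistent hash ring, so that
    adding or removing a conductor only re-maps a fraction of the nodes."""
    def __init__(self, conductors, replicas=64):
        # Each conductor gets several virtual points on the ring
        self._ring = sorted(
            (_hash(f"{c}:{i}"), c) for c in conductors for i in range(replicas)
        )
        self._keys = [h for h, _ in self._ring]

    def conductor_for(self, node_id: str) -> str:
        # A node belongs to the first conductor point clockwise of its hash
        idx = bisect.bisect(self._keys, _hash(node_id)) % len(self._ring)
        return self._ring[idx][1]
```

The same node ID always maps to the same conductor until the set of conductors changes, which is what makes take-over after a cluster change cheap.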
Talk only took 20 minutes. That was quick.

2014 – Day 5 – Session 1

D-Bus in the kernel by Lennart Poettering

  • kdbus
  • Most newer OS designs started around powerful IPC
  • Linux only had IPC primitives ( sockets, fifos, shared memory)
  • DBUS is powerful IPC
    • Method Call Transactions, Signals, Properties, OO, Broadcasting, Discovery, Introspection, Policy, etc,  etc etc
  • D-Bus has its limitations
    • Suitable only for control, not payload
    • It’s inefficient ( 10 copies, 4 complete validations, 4 context switches per duplex method call transaction)
      • Can we optimise that rather than moving it into the kernel? – “We are, We are moving it into the kernel with kdbus”
    • Credentials one can send/recv are limited
    • No implicit timestamping
    • Not available in early boot, initrd, late boot
    • Hookup for security frameworks happens in userspace
    • Activatable bus services are independent from other system services
    • Codebase is a bit too baroque, XML
    • No race-free exit-on-idle bus activated services
  • D-Bus is fantastic, solves real problems
    • Right approach: good concepts, generic, comprehensive, covers all areas
    • Established, it is the single most used local high-level IPC system on Linux, with bindings for most languages
    • Used in init systems, desktop, etc
  • kdbus
    • Suitable for large data (GiB), zero-copy, optionally reusable
    • Efficient ( 2 or fewer copies, 2 validations, 2 context switches, per duplex method call transaction)
    • Comprehensive credentials sent along ( uid, pid, gid, selinux, cgroup, caps, audit, etc etc)
    • Implicit timestamping
    • Always available from earliest boot to latest shutdown
    • Open for LSMs to hook into from the kernel side
    • Activation is identical to activation of other services
    • Userspace is much simpler, No XML
    • Race-free exit-on-idle
  • Overview
    • Intention to replace the current D-Bus implementation
    • Should be a drop-in replacement, so programs using the bindings work without changing anything
    • No performance tests yet
    • Receiver buffers
    • Single copy to destination(s)
    • method call windows
    • Name registry
    • memfds
      • File descriptors for memory regions
      • Zero Copy
      • Sealing
      • At 512k and above zero copy is faster than single copy, below that the mapping overhead is too expensive
      • a bit like Android ashmem
    • Signal Broadcasting
      • Bloom filters
    • Userspace
      • proxy provides compatibility with dbus1 sockets
    • When?
      • Code all there, pretty much implemented, policy still to do
      • Compile-time switch in systemd
      • We hope to get in the kernel in 2014
      • gdbus support coming soon
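
The Bloom filters mentioned under signal broadcasting let a broadcaster cheaply skip receivers whose match rules cannot possibly apply: membership tests may give false positives but never false negatives. A generic Bloom filter sketch (not kdbus's actual bit layout):

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: a fixed-size bit array plus k hash positions
    per item. 'No' answers are definitive; 'yes' answers may rarely be
    false positives."""
    def __init__(self, size_bits=1024, num_hashes=4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = 0  # a Python int doubles as an arbitrary-size bit array

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))
```

A receiver publishes the filter for its match rules; the sender only needs a few bit tests to rule a receiver out.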

VisualEditor: Wikipedia’s new editing system, and how you can use it too by James Forrester

  • Why a visual editor
    • Drop off in new contributors is major issue for wikipedia
    • MediaWiki is based on the very rich wikitext syntax
    • Wikitext stops people becoming contributors or drives them away
    • Not an appropriate way to ask people to contribute things in the 21st century
    • Prevents us from providing other helpful editing tools
  • Flexibility and modularity
    • Modular
    • support for 300 languages
    • replace or extend existing tools
    • expand it for editing new content
  • What?
    • Client-side JavaScript contentEditable HTML editor
    • Stand-alone, integrate into any platform
    • Mediawiki integration (parsoid converts HTML+RDFa to/from wikitext)
  • 3 Main layers
    • Data Model
      • We can’t edit the input HTML+RDFa directly
      • Reliably convert from HTML to things we can edit and back without corruption
      • Synchronise with ContentEditable (both ways)
      • Needs to allow real-time collaborative editing
      • All sorts of problems with splitting characters in various languages
    • Browser contentEditable
      • Messes up HTML, inconsistent key handling
      • Native text insertion, spell-check, selection and cursor
      • Equal rendering
    • Programmatic cursor handling
      • Intercept the browser’s native cursor handling
      • allows us to skip over non-editable elements
      • Thought this would allow avoiding UX issues
      • but this breaks internationalisation
      • Move to “observe and fixup” of native browser action
    • Input Method Editors
      • Lots of languages rely on IMEs
      • Mainly East Asian and Indic languages
      • Each IME has its own unique way of changing content
      • But can change by OS, browser, IME version
    • SurfaceObserver
      • Poll DOM for changes and notify model
      • Lots of actions emit no or few events (eg spellcheck)
      • Possibly use MutationObserver in the future
      • Re-render if really necessary
    • Actual rich editing of stuff
      • Annotation (bold, underline)
      • Generated content (references, tables )
      • Tool UX types – Dialogs
  • Future
    • Patches welcome
    • Maybe can be integrated in wordpress or something (but might be hard)

2014 – Day 5 – Keynote – Jonathan Oxer

  • How many people have wanted to work on something that went into space? – Everybody
  • How many people have actually done that? – about a dozen
  • How about if every kid could answer "yes" to that question
  • Cubesats – Small standard way to launch tiny sats – 10cm cubes
  • ArduSat – Kickstart – June 2012
  • Feb 2013 – Revised design of board
  • Mar 2013 – New design built
  • June 2013 – Hardware integrated and built – ArduSat-1 and ArduSat-X
  • 3 Cubesats launched to ISS including the 2 ArduSats
  • Need to upgrade/update Experiments in flight
    • Browser based updates
    • Code sent via Internet to California, run on a duplicate of the satellite
    • Transmitted to satellite
    • Distributed to one of the nodes
  • CubeSat System bus
    • Supervisor node controls smaller nodes
    • Stores stuff on SD card
    • Allocates experiment to node, powers up the node
    • sends it via “serial port” to node
    • Supervisor controls Power and reset and serial for each node
    • node thinks it is just a stand-alone off-the-shelf Arduino
  • Satellite
    • 4-6 month life
    • 12-15 experiments
    • If allocated in 1 week blocks we can sell those for $300 each
    • So much backlog and interest is plan to launch 30 ArduSats/year for next 5 years
  • Make Satellite available
    • Flyable version costs approx $50k
    • Replace flying modules with open-source parts
  • TODO
    • Release of designs without attracting the black helicopters
    • Flight computer design
    • Terrestrial sensor suite design
    • Development of education material
  • Wait there is more…
  • Water powered cube-sat
    • 5kg total mass, 2.5kg of water propellant
    • On kickstarter


January 09, 2014


Those interested in the start of the story can read the previous article on LCA14.

Kate Chapman

Day 2 started with another keynote.

This time we had a presentation of a subproject of OpenStreetMap called HOT, showing how they're using and promoting OpenStreetMap in humanitarian situations such as earthquakes or floods. It was particularly interesting that they now use it proactively in some parts of the world, such as Indonesia, to *prevent* disasters rather than reacting afterwards.

Kate Chapman

Listening to that, I thought that each country should propose to its young people of conscription age (where they still have such a thing in place) to contribute freely for a 3-6 month period to such an initiative: they don't need highly skilled IT engineers, just people able to make correct maps of a given area in their neighbourhood. We could end up with a large Open Data collection in OpenStreetMap, regularly kept up to date.

Anyway, I was impressed by the generosity of this project, and by seeing how concretely useful some of the work done by the Open Source community is.

The rest of the day was again dedicated to Miniconfs. I spent my time in the OpenStack one. The track was organized by Michael Still of Rackspace, who was Conference Director last year, and who made probably the most interesting introduction of the week, which consisted of giving an OpenStack T-shirt to all attendees ;-)

Michael Still

There were then some changes in the set of sessions due to unfortunate problems for some speakers. So James E Blair from the OpenStack Foundation and Michael Still gave the presentation on OpenStack governance on behalf of Tristan Goode.

James Blair & Michael Still

Followed by Paul Holland's presentation on the OpenStack community's move to a foundation to manage their brand, and on the governance model just explained, which is quite unique in its willingness to be egalitarian. That is probably a big concern when you gather a thousand-plus developers and dozens of companies and want everybody to play the game. So the whole election mechanism (the board, the PTL roles, the gated trunk, …) is probably mandatory now for such a huge project with such a young history. I'd not necessarily recommend that every project adopt it, as IMO it would be overkill for most of them, even if continuous testing (for them, with Jenkins) is surely a good idea (I wish they would do continuous packaging as well ;-)). However, for the projects I'm leading, it's still an objective rather than a reality.

Paul Holland

In the afternoon, there was a presentation by Bruno Lago on an incubated OpenStack project for billing called Artifice, based on the recently introduced OpenStack telemetry component, Ceilometer. There are different proposals around this billing need, and time will tell which solution is retained as a project.

The following presentation was more interesting for me, as it was on improving network support in OpenStack, by Iain Robertson from Brocade. It's interesting to observe how fast this area of Network Functions Virtualization (NFV) and Software Defined Networking (SDN) is moving, which is surely only possible due to the rapid evolution of the many Open Source components supporting them. I think there will be more and more overlap between the functions provided by the major frameworks such as OpenStack (Python based) and OpenDaylight (Java based), if only for language reasons (API, interfaces, reuse of code)…

Then we had a very good presentation on Erasure Codes for OpenStack Swift by John Dickinson, the Swift PTL, which explained how this sort of RAID technology for Swift, the object storage brick of OpenStack, will be provided, and included a demo which helped in understanding the concepts exposed.

John Dickinson
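
As a sketch of the idea (not Swift's actual implementation, which uses more capable codes such as Reed-Solomon), the simplest erasure code is a single XOR parity chunk: store the parity alongside the data chunks, and any one lost chunk can be rebuilt from the rest:

```python
def xor_parity(chunks):
    """Compute an XOR parity chunk over equal-length data chunks.
    Any single lost chunk equals the XOR of the parity with the
    surviving chunks, so one failure is recoverable without storing
    a full replica of every chunk."""
    parity = bytes(len(chunks[0]))
    for chunk in chunks:
        parity = bytes(p ^ c for p, c in zip(parity, chunk))
    return parity
```

This is the storage-efficiency trade-off erasure codes offer over plain replication: n data chunks plus one parity chunk survive a single failure at a fraction of the space.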

Anita Kuno, HP, came back on stage for a short presentation on some unknown aspects of OpenStack around the CI tooling (grab her presentation when it is published to get the links).

Anita Kuno

Then again Robert Collins, HP, took over Joe Gordon's presentation on the evolution of OpenStack between Diablo and Havana, which on one hand looks very similar, with the same core components, yet also has lots of differences, with three times the number of lines of code and many additional bricks added. Of course, while not the original speaker, Robert's extremely broad knowledge of OpenStack as a whole allowed him to very precisely show the dynamism of this project and the progress made in the last 2 years. He also explained how Nova lost most of the features it initially had, to create separate components instead.

Robert Collins

Overall remarks about LCA2014: timings are respected very strictly, and they provide enough time to move from one conference to another (which could take 5 minutes crossing the university); they have a timekeeper in each room, and video in each room (at least all the ones I was in). The first day they forgot to print banners to indicate where the rooms were, so it was a bit difficult to find one's way (despite online maps, but regular readers know I'm old(-fashioned) and also like signs!). Day 3 was thus perfect in that regard!

Signs on Wednesday !

I had a very nice dinner that second day at the Friends Restaurant, with a grilled barramundi fillet, quinoa risotto, roasted almonds and confit summer vegetables in extra virgin olive oil! Of course, with such a name it could only have a French touch, and indeed the chef was French. But the fish was local and very good!


That was a long evening full of mail, and some coding. More pictures are available at

Next day is the start of the main conferences !

Filed under: Event, FLOSS Tagged: Australia, Event, HP, HPLinux, Linux, Open Source, Open Street Map, OpenStack, presentation

Adding a Git Repo To GitWeb

This assumes that you have a server with GitWeb installed on it already. The file paths are based on Debian.

On your server

Create a directory for the new git repo:

$ sudo mkdir /var/cache/git/MyRepo

Then change the permissions so you can use it:

$ sudo chown myusername:myusername /var/cache/git/MyRepo/

Now we need to initialise the repo for GitWeb:

$ cd /var/cache/git/MyRepo
$ git init --bare
Initialized empty Git repository in /var/cache/git/MyRepo/

Then provide an appropriate description of your repo by editing the description file:

$ vi description

On your workstation

Clone the new repo:

$ git clone ssh://

Cloning into 'MyRepo'...
warning: You appear to have cloned an empty repository.
Checking connectivity... done

Create some content:

$ cd ./MyRepo
$ touch me

Then add the file to git

$ git add me
$ git status
# On branch master
# Initial commit
# Changes to be committed:
#   (use "git rm --cached <file>..." to unstage)
#       new file:   me
$ git commit -am "Added me"
[master 61e4a60] Added me
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 me
Counting objects: 4, done.

Now push it up to your server:

$ git push
Delta compression using up to 2 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 268 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To ssh://
   0a18962..61e4a60  master -> master

If you check your GitWeb instance, you should see that the repo is now there and your first file is in it.

Happy GitWeb-ing :-)

LCA2014 - Thursday

These are my raw notes from talks held on Thursday at LCA2014. They may contain errors and mis-heard quotes, and are completely unreviewed and not spell-checked:


Matthew Garrett

  • Video
  • Interesting security events in 2013:
    • UEFI was deployed in production
    • Snowden revelations
    • Governments involved in sophisticated hacking on domestic populations
  • Who are we concerned about
    • NSA - complete set of capabilities are unknown, assume the worst.
    • Our hosting / service providers
    • Opportunistic attackers
  • NSA are able to perform attacks undetectable from the operating system
  • Leaks describe model-specific exploits
  • Plausible that vendors are not actively involved
  • Passive involvement is likely
  • Who would benefit from a generic exploit?
  • Intelligence agencies are probably not our biggest concern
  • Most security breaches are political or profit driven
  • How can we protect our users?
    • Boot verification is an absolute requirement, despite being a vector for freedom infringements
    • Operating systems are too big to be perfect
    • Persistent infections make recovery impractical
    • Users will choose to keep an infected machine rather than have it repaired
    • Must be able to replace vendor components
    • Especially firmware
  • UEFI secure boot still allows users to replace keys
  • No guarantee of being able to replace firmware
  • Android:
    • Some vendors allow OS replacement
    • No way to replace keys or firmware
    • Choice between freedom and no security or security and no freedom
  • Chromebooks are no better
  • Apple are the worst, can't replace OS, keys or firmware
  • How much can you trust your system:
    • OS backdoors? (not necessary)
    • Firmware backdoors?
      • Jetway have not had their leak audited
      • Should be a project people engage in
      • There are obvious vulnerabilities in the code
    • Lower level hacks?
      • AMT, CPU microcode
  • Attack vector on modern devices is low due to move to cloud services
  • If you give your data to someone else, you're trusting them to not steal, share or lose it.
  • Spectrum of trust from software you run through to where you store your data.
  • Cloud security is poorly understood
  • Balance of probability suggests hypervisors have security vulnerabilities
  • SELinux / Apparmor allows you to run a VM in an isolated context
  • Introspection of bare metal is difficult
  • Introspection of VMs is trivial
Security for 2014
  • Be more aggressive about securing every layer
  • In a way that doesn't compromise freedom
  • Ask cloud vendors hard questions
  • Customers too
  • Don't buy into exchanging freedom for security or vice versa

Without verified boot you are insecure. With verified boot you may be insecure.

Rapid OpenStack Deployment for Novices and Experts Alike by Florian Haas

Rough overview of OpenStack Architecture
  • OpenStack is the largest community driven cloud architecture
  • Keystone is the central identity/location service
  • Nova is the compute service that interacts with hypervisors (most of them)
  • Glance is the VM image service
  • Horizon is the OpenStack dashboard
  • All unified APIs are RESTful JSON
  • Node roles are atomic, composable classes of nodes
    • Infrastructure Node runs a database and a message queue (MySQL + RabbitMQ)
  • Authentication Node runs the OpenStack Identity Service providing authentication
  • API Node provides RESTful endpoints to OpenStack services
  • Controller Node provides scheduling and registration services internal to OpenStack
  • Network Node provides network connectivity within the cloud
  • Compute Node(s) hosts and runs VMs
  • Block Storage Node provides storage
  • Dashboard Node provides unified user interface
  • Orchestration Node runs an orchestration service
Tutorial Architecture
  • Using one node (alice) running all the node services except:
  • bob will run compute
  • Charlie will be the network node (this would normally have one interface that is public)
  • Puppet node running... a puppet master.
  • Stackforge is a collection of puppet modules for OpenStack (and other things).
  • KickStack - OpenStack deployment with puppet made easy

Test the puppet architecture:

$ puppet agent --test

Set classes via puppet dashboard to define the node's roles

$ puppet agent --runinterval 10
  • Packstack is RedHat's tool.
  • Crowbar is a DELL project's deployment platform and is used by SuSE (along with Chef)
  • Juju is Canonical's deployment tool using a yaml file generated from "charms".
  • TripleO/Tuskar (OpenStack on OpenStack). Uses OpenStack scheduling and deployment for deploying hardware via PXE and IPMI - manage hardware like a VM.
  • Foreman "puppet on steroids"
  • Presentation tool is reveal.js, with shell-in-a-box in an iframe

Writing your first web app using Python and Flask by Danielle Madeley

  • Flask is a lightweight python web framework
  • Flask will gracefully finish requests
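
A minimal Flask "hello world" in the spirit of the talk (the route and strings are my own example, and this assumes Flask is installed):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Flask routes a returned string straight into the HTTP response body
    return "Hello, LCA!"

if __name__ == "__main__":
    app.run()  # development server, by default on http://127.0.0.1:5000/
```

Flask's built-in test client can exercise routes without starting a real server, which keeps the development loop quick.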

Debian on AWS by James Bromberger

  • AWS is a collection of remote computing services
    • Compute, storage etc
    • Certification available
    • Customers can choose software / operating systems
How is Debian using AWS:
  • Distributed Debian package compilation on EC2
    • Funded by grant
    • Helped find bugs in packages
    • Helped find bugs in compilers
    • Spot Instances allow you to name your own price for EC2, and you get the resources you pay for - dynamic resource allocation based on market prices
    • 12 complete archive rebuilds
  • Accelerating
    • Speed up access for all regions
    • Use this in sources.list (or use
    • 24 hours caching for Debian-CD
    • Cached in 51 locations
Who's using it?
  • DD's mostly
  • Looking for people to do stats analysis
  • - 18TB of Debian packages 55K of files, 1 Postgres database - every package, ever.
  • All files are on S3 with automatic aging to Glacier
  • $200/month - donated by AWS
Official Debian images on EC2
  • AWS now has official EC2 AMIs for Debian generated by DDs.
  • Generation script is on GitHub
  • Available:
    • In AWS Marketplace ($0)
    • Shared from Debian AWS Account directly
  • Available in all regions (including GovCloud)
Creating the Official AMIs
  • Uses build-debian-cloud
  • Uses in apt sources
  • resizes root file system if larger than default (8G)
  • Cloud-init is installed in the Debian AMI
  • ssh as admin using your keys
  • No remote access - can be changed after log in.
  • In the AWS marketplace for discoverability
  • 5% growth in usage every week.
Why Debian on AWS?
  • First place many people will discover Debian
  • Existing users now use at scale.
  • Providing a trusted operating system
  • AWS are hiring

2014 – Day 4 – Session 3

Disaster Recovery Lessons I Hoped I’d Never Have to Learn by Bdale Garbee

  • Got backups?
    • Do you keep a copy of your essential data off-site
    • If someone called you and said your house was gone: completely devastating or just a really bad day?
  • 11th June 2013
    • Got no warnings of fire, saw it one km away and then got instant evacuation order
    • Had 20m-1h to pack (unsure of time due to heat of situation)
    • Went to leave and fire was on driveway, unable to get car out.
    • Had to park car and firefighters walked them out.
  • The fire
    • Started 1pm 11th of June 2013
    • 13,000 homes evacuated
    • 94,000 acres evacuated
    • 500 fire fighters involved
    • Destroyed – 14,290 acres ( 57.8 km²) , 511 homes, 2 people
    • Let back into property on 21 June
  • Aftermath
    • House pretty destroyed
    • Such fires are normally 2000-2200 degrees; his house got to 3000+ degrees Fahrenheit
    • Most metals melted
  • Ready to go – what would you grab?
    • Usual pocket contents… mobile phone, wallet, keys
    • Photo albums and scrapbooks
    • Jewelry
    • Computers
  • Stuff you haven't thought about
    • Mobile phone charger
    • Identity documents
    • Safe deposit box key
    • Account numbers and contact info
  • How's your insurance
    • Types and levels of coverage
      • Structure
      • Contents
      • Loss of use
      • Outbuildings
      • Landscaping
    • Replacement cost vs actual cash value
    • Personal vs Business Property – Have you even taken a business tax deduction for any of these items?
    • Costed out how much it would cost to rebuild the old house, got that much money to build the new house (not the same)
  • Rapidly recovering
    • Take care of human needs first
    • Find a place to live.. being homeless sucks
    • Minimise the amount of “throw away” stuff purchased ( don’t buy stuff you don’t want to keep, eg apartment-only stuff)
    • Pick one or two things to “put back to normal” ASAP
      • Mobile phone for my son
      • Big-screen TV with cable service
      • 3d printer
      • art supplies
    • Registries and wish lists
  • Cleaning up
    • Lots of metal extracted from the ashes – copper, silver
    • Found very few intact items – a Japanese doll, a few other ceramics
    • Since the house was taken out by embers, around 2/3 of the trees are intact
  • Recovering Hobbies and Small Business
    • Back on the net quickly but not back in business
    • Commitment to AJ and Mike Beatie for rocket launch

A web page in seven syscalls by Tollef Fog Heen

  • “Most of us, well at least the ones of us who have CS degrees”
  • Modern Computers – Cores, Caches, 64 bit, Virtual memory
  • Massively multi-threaded – usually 1000 threads or more
  • Relative performance of CPU -> system calls -> Disk access
  • Varnish, from HTTP's point of view, is an origin server not an "http cache"
  • Two processes
    • manager process – starts child, compiles config, watchdog on child and restarts it if it dies
    • Child process, handles all connections, handles storage
    • params shared via shared memory
  • VCL
    • Domain specific language
    • Compiled into C when loaded
    • Can escape to C from within VCL
    • Can import plugins
  • Optimisations
    • Avoid syscalls
    • Memory workspaces
    • Length-counted strings
    • Threads
    • Don’t fight the VM, use hints
    • Don’t copy data
    • LIFO schedule threads
    • Preallocate
    • Accept filters

2014 – Day 4 – Session 2

Is it safe to mosh? by Jim Cheetham

  • Replacement for ssh remote terminal connectivity, uses udp
  • Remote terminal applications, changing IPs, intermittent connectivity, more robust and responsive than ssh
  • Is it safe? It depends…
  • Use cases differ, requirements differ
  • Highpoints
    • No “disconnect” when roaming/sleeping
    • SSP remains responsive; Control-C works when cat’ing a large file or big “find”
    • Instant predictive local echo
    • Very clean UTF-8 terminal
    • In all the main distros
    • Run from userspace
  • Demo “Luckily one of the things I need is an unreliable network”
  • Cloud at cost – – $35 VM for life
  • connect via ssh, run mosh-server, disconnects and reconnects back via mosh
  • Problems
    • Terminal scrollback is not yet implemented
    • “If you want scrollback, go get tmux. If you’ve got screen, go get tmux”
    • Logging is not mature
    • Server may live after client has died
  • SSP transport
    • diff and patch are the two main methods
    • RTT times are tracked
    • delayed acks reduce traffic requirements
    • 3s heartbeats keep the session alive
  • SSP Datagram
    • Payload from transport layer is opaque
    • AES-128 protects the payload
    • UDP – receives packets from anywhere
    • Timestamps everything – maintain RTT estimates
  • SSP authentication
    • 63-bit monotonically increasing sequence number, unencrypted
    • out of order packets discarded
    • at 2PB the session dies
    • Payload must decrypt – not realistic to brute-force
  • SSP allows roaming
    • The server knows where the client was
    • But doesn’t care – utmp is updated though
    • Other protocols are “protected” by having fixed network endpoints – which can be spoofed
  • Roaming
    • IP shouldn’t have tied IPs to location, but too late now
    • SSP is designed to ignore IP address
  • What is safety
    • Risk = Likelihood * damage
    • If client or server is compromised then session can always be taken over
  • What is unsafe
    • Connections from known-bad locations – known in advance
    • Connections from known-compromised users – detected by behaviour
    • Connections to insecure software – Prohibited by administrator
  • Good and bad habits
    • ssh password vs keys
    • Detached terminal sessions with privilege
  • YES for home users and small business
  • POSSIBLY for enterprise users

Below The Line: Fixing The Voting Process With Technology by Benno Rice

  • Australian Senate
  • So many people vote above the line because it needs only one tick; below the line requires numbering up to 100 separate boxes
  • If you vote above the line then you accept the order of preferences from the people you voted for
  • Can get party preference lists from the Australian Electoral Commission
  • Create a custom “how to vote card”
  • Site ideas
    • Store nothing
    • Just do it
  • First site 2010
    • Python
    • javascript, jquery, sortable
    • ballot renderer – python, reportlab, WSGI, truly awful code
    • Hosted on dreamhost
    • Melted on polling day
    • Typed in the data by hand, it was not fun
  • 2013 version of site
    • Got data in csv from AEC
    • Also did lower house (Geo lookup to find electorate)
    • Store and share ballots
    • Can shuffle parties as well as candidates
    • Links to party websites
    • Ruby
    • Javascript – Angular , ui.sortable
    • Ballot renderer – Python – reportlab
    • Geolocation – AEC has division boundaries mapped and available
    • PostGIS, Python, Google Maps API
    • Storing and sharing – python, redis
    • Ballot rendering in HTML – Ruby, Haml, responsive via Bootstrap
    • Ballots stored under a random identifier that was never reused
    • Rackspace hosting – free hosting
    • Cloudflare as CDN
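The store-and-share scheme above (each saved ballot filed under a random identifier that is never reused) can be sketched like this. A plain dict stands in for the Redis backend the site actually used, and all names are illustrative:

```python
import secrets

# Sketch of the store-and-share scheme: each saved ballot gets a
# fresh random token, and a token is never reused for another ballot.
# A dict stands in for the Redis backend mentioned in the talk.
store = {}

def save_ballot(ballot):
    """Store a ballot under a fresh random token and return the token."""
    while True:
        token = secrets.token_urlsafe(12)  # ~96 bits of randomness
        if token not in store:             # never reuse an identifier
            store[token] = ballot
            return token

def load_ballot(token):
    return store.get(token)

t = save_ballot({"senate": ["candidate A", "candidate B"]})
assert load_ballot(t) == {"senate": ["candidate A", "candidate B"]}
```

With identifiers this long and random, a shared link is effectively unguessable, so the "store nothing" privacy goal survives even though ballots are persisted.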
  • 2600 concurrent users
  • 165,000 unique visitors
  • 34,000 PDFs
  • Conclusion
    • The senate voting system is broken
    • You too can change the world
    • Just do it
  • 20+ people in the room used the site to vote below the line

2014 – Day 4 – Session 1

Programming Diversity by Ashe Dryden

  • What is diversity
  • More than gender
    • backgrounds, experiences and lifestyles
    • not always visible
    • sexuality, age, language, class, race, ability
  • Terms
    • Intersectionality
      • The interaction of traits (race, sex, etc) and how people treat you because of them
    • Privilege
      • unearned advantages a person gets for a perceived trait
      • Education, access to technology, higher pay, assumed competency, quality of network
      • Seen as a skill-set instead of traits
      • Easily fit/identify with subculture
    • Stereotype Threat
      • Worry you will confirm the stereotype that is applied to you
      • Lots of pressure
    • Imposter Syndrome
      • Unable to internalise their accomplishments
      • Almost anyone can suffer from it
      • less likely to apply for jobs, apply to talk at conferences or even attend conferences
    • Marginalised
      • Doesn’t fit into the default groups
      • their needs or desires being ignored
      • Even marginalised groups marginalise others, nobody is trait blind
  • Women are about 20% of tech
  • Maybe Women aren’t into programming
    • Women like Grace Hopper were prominent in the field early on
    • No physical or biological difference in race or gender affects programming ability
  • Bulgaria
    • 73% of CS Students are women
    • teach children in schools that STEM is important to everybody, push everybody towards it
  • Diversity matters
    • Companies that are more diverse tend to have better sales, profits, etc
    • Diverse teams:
      • solve complex problems faster
      • more creative and stimulated
      • make better decisions
      • financial viability and success
  • Why lack of diversity?
    • Pipeline
      • Difference in toys and games for boys and girls
      • no famous role models that represent them
      • Access to technology: on average boys get their first computer at age 11, girls at 14; the early teens are the best age to learn and retain skills
    • Geek stereotypes
      • people who don’t identify and aren’t represented by the geek stereotype are turned off by those who do
    • Attrition
      • 56% of women leave tech in 10 years
      • twice the rate of men
      • our Grandmothers more likely to be programmers than our granddaughters are
    • Why attrition?
      • Harassment
      • People in marginalised groups twice as likely to report being harassed or mistreated
      • Men are 2.7 times more likely to be promoted to higher-ranking positions
    • What can I do about this stuff?
      • Change starts with us
      • educate people who don’t understand this problem
      • Get to know people different from us – talk to people wearing a specific color that day
      • Follow people on twitter that are different from you
      • bias & discrimination are often subtle
      • learn to apologize
      • Talk about these issues openly – “That’s not cool :(”
      • increase education and access
      • Facilitate event for marginalised groups
      • work with colleges and universities to remove bias
      • “have you programmed before?”
      • Think about what the “about” page of your website looks like
      • Think about the company culture
      • Job listing language and requirements
      • Interviewing
      • equal pay
      • mentoring and career goal attainment

From Kookaburra to the Cloud: where to now for copyright in Australia by Ben Powell

  • Several recent cases
  • Australian law deals by exception: everything is under copyright except where “fair dealing”, “fair use” etc are specifically allowed by law
  • ALRC Review
    • More exceptions or general “fair use”
    • Report not yet tabled, but interim discussion paper released
  • Kookaburra
    • Song from 1932
    • “Down under” 1981
    • Nobody noticed till 2007 when on TV Quiz show
    • Court decided infringing
    • Two culturally significant songs
  • Fair Use vs Fair dealing
    • Fair dealing has specific exceptions
    • Things that are not fair dealing:
      • Sampling
      • non commercial use of incidental music
      • memes
      • commercial services to allow recording in the cloud
      • storage of copyright material
      • copying DVD to other devices
      • search engines (thumbnails)
      • digital archiving
    • More exceptions?
      • Quotations
        • in the Berne Convention
        • anachronistic term
        • doesn’t cover transformation, implies verbatim use
      • Transformation
        • not a substitute for the original work
        • low threshold – undermines creators rights
        • high threshold – confusing, how much change needed
        • How does commercial use fit?
        • Hard for court to decide
      • Private and Domestic use
        • Format shifting and time shifting exists already (VHS only, not DVD)
        • doesn’t cover the cloud
        • not technology neutral
        • Canadian law is more technology neutral but “non-commercial” is hard to define
    • Fair Use
      • See US Copyright Act
      • Fair Use in Australia
        • Fairness factors
        • illustrative uses (non-exhaustive)
        • flexible defence, weighing up the factors
      • Advantages
        • Balance
        • Flexible
        • aligns with community expectations
      • Against Fair Use
        • Uncertainty (parliament vs law)
        • Requires litigation
        • Originated from a different legal environment
      • The reply to objections
        • Uncertainty – See normal consumer law with terms like “unfair contracts” , “misleading and deceptive conduct”
        • Different legal env – same common law roots, AUSFTA meant to “harmonise” copyright law.
        • International law, 3-step test – the US gets away with it, never challenged
  • Govt unlikely to go forward with fair use based on their leanings
  • The introduction of a Fair Use defence would encourage Australian innovation
  • “Generally the US likes to export the ‘bad parts’ of its copyright law, not the ‘good bits’”

2014 – Day 4 – Keynote – Matthew Garrett

Security in the modern world

  • 2013 was an interesting year
    • UEFI Secure boot was deployed to the masses. On most PCs by default
    • …and vendor implementations were promptly broken
    • Snowden revelations
    • First highly visible hypervisor related compromise?
    • it turns out
  • Who are we protecting against?
    • The NSA?
    • Our Hosting providers?
    • Opportunistic attackers?
  • Imperfect security is better than no security
  • NSA
    • Leaked material is from 2007–2008, so we don’t know how advanced they are now
    • No evidence that the entire stack has been subverted
    • Leaked material describes model-specific (rather than vendor-specific) exploits
    • Plausible that the vendors aren’t actively involved
      • although passive involvement is likely
    • Would it be in anyone’s interest to have a generic exploit?
  • Intelligence agencies are probably not your biggest concern
  • Most security compromises are either political or profit driven
    • But that doesn’t make the user feel better
  • What can we do to protect users
  • Protecting the entire chain
    • Boot verification is an absolute requirement
      • OSes are just too big to be perfect
      • Persistent infections (of boot process) make recovery impractical
    • …but so is user freedom
      • Stopping users from building/booting their own kernels is not a good long term situation
      • …ideally including the firmware
  • Where do we stand
    • UEFI Secure boot on x86 systems
      • Guaranteed that the user can replace keys
      • No guarantee that the user can replace the firmware
    • Android
      • Primary compute device for many
      • Some permit the user to replace the OS
      • No ability to replace keys or firmware – cannot boot your own signed kernels
      • Need to push vendors to provide replacement of OS and keys
    • Apple
      • No ability to replace OS, keys or firmware
  • How much can I trust my system
    • OS backdoors
      • Doesn’t really seem necessary, too many holes already
    • Firmware backdoors
      • Why has nobody audited the Jetway leak (leaked BIOS and firmware)?
    • Lower Level?
      • AMT, CPU microcode
      • AMT has a lot of access to running and even turned off systems, Intel would be greatly embarrassed
      • CPU microcode – could be updated by an OS-level exploit
    • It’s fine, all my data is in the cloud
      • What even is the cloud
      • If you are giving your data to someone else you are trusting them not to lose it or steal it
      • …history suggests this is not a good idea
      • But this is still a spectrum
        • Running your own server means you trust all your software
        • Running a VM means you need to trust the hypervisor and other guests
        • Do you trust that those guests will be unable to compromise the hypervisor?
      • Questions to ask your cloud provider
        • What security isolates guests? SELinux over KVM perhaps?
        • How do you manage hypervisor updates in response to security issues?
        • What mechanisms do you have to detect compromises of the hypervisor?
        • What is your response to finding a compromised device?
      • Can you trust them at all?
        • Introspection of the bare metal is hard
        • Introspection of VMs is trivial
        • Virtualisation requires different security considerations than bare metal – more attack surface
  • Security in 2014
    • Be more aggressive about securing every layer of systems
    • …but do so in a way that ensures users don’t have to choose between freedom and security
    • Start asking cloud vendors hard questions
    • … and their customers, too
    • Security and freedom are two sides of the same coin
    • Don’t buy into any narrative that asks you to give up one for the other