One of the talks I gave at Linux.conf.au this year was a quick-start guide to using Docker.
The slides begin with building Apache from source on your local host using the project’s documentation, and then show how much simpler it is when, instead of documentation, the project provides a
Dockerfile. I quickly gloss over making a slim production container from that large development container – see my other talk, which I’ll blog about a little later.
The second example is using a
Dockerfile to create and execute a test environment, so everyone can replicate identical test results.
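To make this concrete, here is a minimal sketch of such a test-environment Dockerfile. It assumes a hypothetical Python project with a requirements.txt; the details are illustrative, not taken from the talk:

```dockerfile
# Hypothetical example: a reproducible test environment.
# Everyone who builds this image gets the same interpreter and the
# same dependency versions, so test results are comparable.
FROM debian:wheezy
RUN apt-get update && apt-get install -y python python-pip
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY . /app
CMD ["python", "-m", "unittest", "discover"]
```

Then "docker build -t myproject-test . && docker run --rm myproject-test" runs the whole suite in a clean environment every time (the image name is arbitrary).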
Finally, I end with a quick example of fig (Docker Compose) and running GUI applications in containers.
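For reference, fig composes multiple containers from a single YAML file. The fig.yml below is an invented illustration (not the one from the talk) showing a web app linked to redis:

```yaml
# Hypothetical fig.yml: one app container plus a redis container.
web:
  build: .          # build from the Dockerfile in this directory
  ports:
    - "8000:8000"   # host:container
  links:
    - redis         # makes `redis` resolvable from the web container
redis:
  image: redis      # pulled from the Docker Hub
```

With this in place, "fig up" starts both containers with one command.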
A core principle of the App Container (appc) specification is that it is open: multiple implementations of the spec should exist and be developed independently. Even though the spec is young and pre-1.0, it has already seen a number of implementations.
With this in mind, over the last few weeks we have been working on ways to make appc interoperable with the Docker v1 Image format. As we discovered, the two formats are sufficiently compatible that Docker v1 Images can easily be run alongside appc images (ACIs). Today we want to describe two different demonstrations of this interoperability, and start a conversation about closer integration between the Docker and appc communities.
Rocket is an App Container implementation that fully implements the current state of the spec. This means it can download, verify and run App Container Images (ACIs). And now, along with ACI support, the latest release of Rocket, v0.3.2, can download and run container images directly from the Docker Hub or any other Docker Registry:
$ rkt --insecure-skip-verify run docker://redis docker://tenstartups/redis-commander
rkt: fetching image from docker://redis
rkt: warning: signature verification has been disabled
Downloading layer: 511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158
...
Redis 2.8.19 (00000000/0) 64 bit
Running in stand alone mode
Port: 6379
PID: 3
...
12 Feb 09:09:19.071 # Server started, Redis version 2.8.19
# redis will be running on 127.0.0.1:6379 and redis-commander on 127.0.0.1:8081
At the same time as adding Docker support to Rocket, we have also opened a pull-request that enables Docker to run appc images (ACIs). This is a simple functional PR that includes many of the essential features of the image spec. Docker API operations such as image list, run image by appc image ID and more work as expected and integrate with the native Docker experience. As a simple example, downloading and running an etcd ACI works seamlessly with the addition of this patchset:
$ docker pull --format aci coreos.com/etcd:v2.0.0
$ docker run --format aci coreos.com/etcd
2015/02/12 11:21:05 no data-dir provided, using default data-dir ./default.etcd
2015/02/12 11:21:05 etcd: listening for peers on http://localhost:2380
2015/02/12 11:21:05 etcd: listening for peers on http://localhost:7001
For more details, check out the PR itself.
We think App Container represents the next logical iteration of what a container image format, runtime engine, and discovery protocol should look like. App Container is young, but we want to continue to get wider community feedback and see the spec evolve into something that can work for a number of runtimes.
Before the appc spec reaches 1.0 (stable) status, we would like feedback from the Docker community on what might need to be modified in the spec in order for it to be supported natively in Docker. To gather feedback and start the discussion, we have put up a proposal to add appc support to Docker.
We are looking forward to getting additional feedback from the Docker community on this proposal. Working together, we can create a better appc spec for everyone to use, and over time, work towards a shared standard.
Join us on a mission to create a secure, composable, and standards-based container runtime. If you are interested in hacking on Rocket or App Container we encourage you to get involved:
This release of Rocket includes new user-facing features and some important changes under the hood which make further progress towards our goals of security and composability.
First, the Rocket CLI has a couple of new commands:
rkt trust can be used to easily add keys to the public keystore for ACI signatures (introduced in the previous release).
This supports retrieving public keys directly from a URL or using discovery to locate public keys - a simple example of the latter is
rkt trust --prefix coreos.com/etcd. See the commit for other examples.
rkt list is a simple tool to list the containers on the system. It leverages the same file-based locking as
rkt status and
rkt gc to ensure safety during concurrent invocations of these commands.
As mentioned, v0.3.1 includes two significant changes to how Rocket is built internally.
Instead of embedding the (default) stage1 using go-bindata, Rocket now consumes a stage1 in the form of an actual ACI, containing a rootfs and stage1 init/enter binaries.
This makes it much more straightforward to use an alternative stage1 image with rkt and facilitates packaging for other distributions like Fedora.
Rocket now vendors a copy of appc/spec instead of depending on HEAD. This means that Rocket can be built in a self-contained and reproducible way, and that master will no longer break in response to changes to the spec. It also makes explicit the specific version of the spec against which a particular release of Rocket is compiled.
As a consequence of these two changes, it is now possible to use the standard Go workflow to build the Rocket CLI (e.g.
go get github.com/coreos/rocket/rkt).
Note, however, that this does not implicitly build a stage1, so that will still need to be done using the included
./build script, or in some other way for those wanting to use a different stage1.
This week saw a number of interesting projects emerge that implement the App Container Spec. Please note, all of these are very early and actively seeking more contributors.
Nose Cone is an appc runtime that is built on top of the libappc C++ library that was released a few weeks ago. This project is only a few days old but you can find it up on GitHub. It makes no use of Rocket, but implements the App Container specification. It is great to see this level of experimentation around the appc spec: having multiple, alternative runtimes with different goals is an important part of building a robust specification.
A few tools have emerged since last week for building App Container Images. All of these are very early and could use your contributions to help get them production ready.
A Dockerfile and the "docker build" command are a very convenient way to build an image, and many people already have existing infrastructure and pipelines around Docker images. To take advantage of this, the docker2aci tool and library takes an existing Docker image and generates an equivalent ACI. This means the image can now be run by any implementation of the appc spec.
$ docker2aci quay.io/lafolle/redis
Downloading layer: 511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158
...
Generated ACI(s): lafolle-redis-latest.aci
$ rkt run lafolle-redis-latest.aci
04 Feb 03:56:31.186 # Server started, Redis version 2.8.8
While a Dockerfile is a very convenient way to build, it should not be the only way to create a container image. With the new experimental goaci tool, it is possible to build a minimal ACI for a Go program without the need for any additional build environment. Example:
$ goaci github.com/coreos/etcd
Wrote etcd.aci
$ actool -debug validate etcd.aci
etcd.aci: valid app container image
Finally, we have added experimental support for App Container Images to Quay.io, our hosted container registry. Test it out by pulling any public image using Rocket:
$ rkt trust --prefix quay.io
Prefix: "quay.io"
Key: "https://quay.io/aci-signing-key"
GPG key fingerprint is: BFF3 13CD AA56 0B16 A898 7B8F 72AB F5F6 799D 33BC
Quay.io ACI Converter (ACI conversion signing key) <email@example.com>
Are you sure you want to trust this key (yes/no)? yes
$ rkt run quay.io/philips/golang-outyet
$ curl 127.0.0.1:8080
While these tools are very young, they are an important milestone towards our goals with appc. We are on a path to being able to create images with multiple, independent tools (from Docker conversion to native language tools), and have multiple ways to run them (with runtimes like Rocket and Nose Cone). This is just the beginning, but a great early example of the power of open standards.
Join us on a mission to create a secure, composable, and standards-based container runtime. If you are interested in hacking on Rocket or App Container we encourage you to get involved:
There is still much to do - onward!
We’ve just come from FOSDEM ‘15 in Belgium and have an exciting rest of the month planned. We’ll be in Europe and the United States in February, and you can even catch Alex Polvi, CEO of CoreOS, keynoting at two events – TurboFest West (February 13) and Linux Collab Summit (February 18). Read more to see where we’ll be and meet us.
See slides from the Config Management Camp 2015 talk by Kelsey Hightower (@kelseyhightower), developer advocate at CoreOS. He presented in Belgium on February 2 about Managing Containers at Scale with CoreOS and Kubernetes.
Tuesday, February 3 at 7 p.m. CET – Munich, Germany
Learn about CoreOS and Rocket at the Munich CoreOS meetup led by Brian Harrington/Redbeard (@brianredbeard), principal architect at CoreOS, and Jonathan Boulle (@baronboulle), senior engineer at CoreOS.
Tuesday, February 3 at 7 p.m. GMT – London, United Kingdom
See the first Kubernetes London meetup with Craig Box, solutions engineer for Google Cloud Platform, and Kelsey Hightower (@kelseyhightower), developer advocate at CoreOS. Attendees will be guided through the first steps with Kubernetes and Kelsey will discuss managing containers at scale with CoreOS and Kubernetes.
Thursday, February 5 at 7:00 p.m. CET – Frankfurt, Germany
Check out the DevOps Frankfurt meetup, where we will give a rundown on CoreOS and Rocket from Redbeard (@brianredbeard), principal architect at CoreOS, and Jonathan Boulle (@baronboulle), senior engineer at CoreOS.
Monday, February 9 at 7:00 p.m. CET – Berlin, Germany
Wednesday, February 4 at 6:00 p.m. – New York, New York
Come to our February CoreOS New York City meetup at Work-Bench, 110 Fifth Avenue on the 5th floor, where our team will discuss our new container runtime, Rocket, as well as new Quay.io features. In addition, Nathan Smith, head of engineering at Wink (www.wink.com), will walk us through how they are using CoreOS.
Monday, February 9 at 6:30 p.m. EST – New York, New York
The CTO School meetup will host an evening on Docker and the Linux container ecosystem. See Jake Moshenko (@JacobMoshenko), product manager at CoreOS, and Borja Burgos-Galindo, CEO & co-founder of Tutum, for an intro to containers and an overview on the ecosystem, followed by a presentation from Tom Leach and Travis Thieman of Gamechanger.
Friday, February 13 – San Francisco, California
See Alex Polvi, CEO of CoreOS, keynote at TurboFest West, a program of cloud and virtualization thought leadership discussions hosted by VMTurbo. Register for more details.
Tuesday, February 17 at 5:30 p.m. CST – Kansas City, Missouri
Wednesday, February 18 at 10:00 a.m. PST – Santa Rosa, California
Alex Polvi, CEO of CoreOS, will present a keynote on Containers and the Changing Server Landscape at the Linux Collab Summit. See more about what Alex will discuss in a Q&A with Linux.com and tweet to us to meet at the event if you’ll be there.
Thursday, February 19 at 7:00 p.m. CST – Carrollton, Texas
February 19-February 22 – Los Angeles, California
Meet Jonathan Boulle (@baronboulle), senior engineer at CoreOS, at SCALE 13x, the SoCal Linux Expo. Jon will present a session on Rocket and the App Container spec on Saturday, February 21 at 3:00 p.m. PT in the Carmel room.
More events will be added, so check back for updates here and at our community page!
In case you missed it, watch a webinar with Kelsey Hightower, developer advocate at CoreOS, and Matt Williams, DevOps evangelist at Datadog, on Managing CoreOS Container Performance for Production Workloads.
The NO_HZ_FULL_SYSIDLE functionality is supposed to determine whether or not all non-housekeeping CPUs are idle. The normal Linux-kernel review process located an unexpected bug (which was allegedly fixed), so it seemed worthwhile to apply some formal verification. Unfortunately, all of the tools that I tried failed. Not simply failed to verify, but failed to run correctly at all — though I have heard a rumor that one of the tools was fixed, and thus advanced to the “failed to verify” state, where “failed to verify” apparently meant that the tool consumed all available CPU and memory without deigning to express an opinion as to the correctness of the code.
I eventually created a Promela model and used spin to do a full-state-space verification. After some back and forth, this model did claim verification, and correctly refused to verify bug-injected perturbations of the model. Mathieu Desnoyers created a separate Promela model that made more deft use of temporal logic, and this model also claimed verification and refused to verify bug-injected perturbations. So maybe I can trust them. Or maybe not.
So can we trust NO_HZ_FULL_SYSIDLE? The relevant fragments of the C code, along with both Promela models, can be found here. See the README file for a description of the files, and you know where to find me for any questions that you might have.
For a quick overview, etcd is an open source, distributed, consistent key-value store for shared configuration, service discovery, and scheduler coordination. By using etcd, applications can ensure that even in the face of individual servers failing, the application will continue to work. etcd is a core component of CoreOS software: it facilitates safe automatic updates, coordinates work being scheduled to hosts, and sets up overlay networking for containers.
The etcd team has been hard at work to improve the ease-of-use and stability of the project. Some of the highlights compared to the last official release, etcd 0.4.6, include:
etcdctl backup was added to make recovering from cluster failure easier
etcdctl member list/add/remove commands for easily managing a cluster
The major goal of all of these changes has been to make etcd more usable and stable. Across the hundreds of pull requests merged to make this release, many other improvements and bug fixes have been made. Thank you to the 150 contributors who have helped etcd get where it is today and provided those bug fixes, pull requests and more.
Many projects use etcd – Google’s Kubernetes, Pivotal’s Cloud Foundry, Mailgun, and now Apache Mesos and Mesosphere DCOS too. In addition to these projects, there are more than 500 projects on GitHub using etcd. The feedback from these application developers continues to be an important part of the development cycle; thank you for being involved.
Direct quotes from people using etcd:
"We evaluated a number of persistent stores, yet etcd’s HTTP API and strong Go client support was the best fit for Cloud Foundry," said Onsi Fakhouri, engineering manager at Pivotal. "Anyone currently running a recent version of Cloud Foundry is running etcd. We are big fans of etcd and are excited to see the rapid progress behind the key-value store."
"etcd is an important part of configuration management and service discovery in our infrastructure," said Sasha Klizhentas, lead engineer at Mailgun. "Our services use etcd for dynamic load-balancing, leader election and canary deployment patterns. etcd’s simple HTTP API helps make our infrastructure reliable and distributed."
"Shared configuration and shared state are two very tricky domains for distributed systems developers as services no longer run on one machine but are coordinated across an entire datacenter," said Benjamin Hindman, chief architect at Mesosphere and chair of Apache Mesos. "Apache Mesos and Mesosphere’s Datacenter Operating System (DCOS) will soon have a standard plugin to support etcd. Users and customers have asked for etcd support, and we’re delivering it as an option."
After nearly two years of diligent work, we are eager to hear your continued feedback on etcd. We will continue to work to make etcd a fundamental building block for Google-like infrastructure that users can take off the shelf, build upon and rely on.
CoreOS CTO Brandon Philips speaking about etcd 2.0 at the CoreOS San Francisco meetup:
The glibc vulnerability, CVE-2015-0235, known as “GHOST”, has been patched on CoreOS. If automatic updates are enabled (default configuration), your server should already be patched.
If automatic updates are disabled, you can force an update by running
Currently, the auto-update mechanism only applies to the base CoreOS system, not inside your containers. If your container was built from an older Ubuntu base, for example, you’ll need to update the container and get the patch from Ubuntu.
If you have any questions or concerns, please join us in IRC freenode/#coreos.
I work on SE Linux to improve security for all computer users. I think that my work has gone reasonably well in terms of directly improving the security of computers and helping developers find and fix certain types of security flaws in apps. But a large part of the security problems we have at the moment are related to subversion of Internet infrastructure. The Tor project is a significant step towards addressing such problems, so to achieve my goals in improving computer security I have to support it. I therefore decided to put my latest SE Linux Play Machine online as a Tor hidden service. There is no real need for it to be hidden (for the record it’s in my bedroom), but it’s a learning experience for me and for everyone who logs in.
A Play Machine is what I call a system with root as the guest account with only SE Linux to restrict access.
A hidden service in Tor is just a cryptographically protected address that forwards to a regular TCP port. It’s not difficult to set up and the Tor project has good documentation. For Debian the file to edit is /etc/tor/torrc.
I added the following 3 lines to my torrc to create a hidden service for SSH. I forwarded port 80 for test purposes because web browsers are easier to configure for SOCKS proxying than ssh.
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 22 192.168.0.2:22
HiddenServicePort 80 192.168.0.2:22
Generally when setting up a hidden service you want to avoid using an IP address that gives anything away. So it’s a good idea to run a hidden service on a virtual machine that is well isolated from any public network. My Play machine is hidden in that manner not for secrecy but to prevent it being used for attacking other systems.
Howtoforge has a good article on setting up SSH with Tor. That has everything you need for setting up Tor for a regular ssh connection, but the tor-resolve program only works for connecting to services on the public Internet. By design the .onion addresses used by hidden services have no mapping to anything that resembles an IP address, and tor-resolve breaks on them. I believe that the fact that tor-resolve breaks things in this situation is a bug; I have filed Debian bug report #776454 requesting that tor-resolve allow such things to just work.
ProxyCommand connect -5 -S localhost:9050 %h %p
I use the above ssh configuration (which can go in ~/.ssh/config or /etc/ssh/ssh_config) to tell the ssh client how to deal with .onion addresses. I also had to install the connect-proxy package which provides the connect program.
The authenticity of host ‘zp7zwyd5t3aju57m.onion ()’ can’t be established.
ECDSA key fingerprint is 3c:17:2f:7b:e2:f6:c0:c2:66:f5:c9:ab:4e:02:45:74.
Are you sure you want to continue connecting (yes/no)?
I now get the above message when I connect; the ssh developers have dealt with connecting via a proxy that doesn’t have an IP address.
This week both Rocket and the App Container (appc) spec have reached 0.2.0. Since our launch of the projects in December, both have been moving very quickly with a healthy community emerging. Rocket now has cryptographic signing by default and a community is emerging around independent implementations of the appc spec. Read on for details on the updates.
Development on Rocket has continued rapidly over the past few weeks, and today we are releasing v0.2.0. This important milestone release brings a lot of new features and improvements that enable securely verified image retrieval and tools for container introspection and lifecycle management.
Notably, this release introduces several important new subcommands:
rkt enter, to enter the namespace of an app within a container
rkt status, to check the status of a container and applications within it
rkt gc, to garbage collect old containers no longer in use
In keeping with Rocket's goals of being simple and composable, we've taken care to implement these lifecycle-related subcommands without introducing additional daemons or databases. Rocket achieves this by taking advantage of existing file-system and kernel semantics like advisory file-locking, atomic renames, and implicit closing (and unlocking) of open files at process exit.
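Rocket itself is written in Go, but the idea is easy to show in miniature. The Python sketch below (illustrative only, not CoreOS code, and the file names are invented) uses the same kernel facilities: an advisory flock() that the kernel drops automatically when the process exits, and an atomic rename() so readers never observe a half-written status file:

```python
import errno
import fcntl
import os
import tempfile

def try_lock(path):
    """Take a non-blocking exclusive advisory lock on `path`.

    Returns the open fd while the lock is held, or None if another
    open file description already holds it. The kernel releases the
    lock when the fd is closed, including implicitly at process
    exit, so no daemon or lock database is needed for cleanup."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError as e:
        os.close(fd)
        if e.errno in (errno.EAGAIN, errno.EACCES):
            return None  # lock is held elsewhere
        raise
    return fd

def write_status(dirpath, status):
    """Publish `status` atomically: write a temp file, then rename it.

    rename() is atomic within a filesystem, so a concurrent reader sees
    either the old status file or the new one, never a partial write."""
    tmp = tempfile.NamedTemporaryFile("w", dir=dirpath, delete=False)
    tmp.write(status)
    tmp.close()
    os.rename(tmp.name, os.path.join(dirpath, "status"))
```

A second try_lock() on the same path (even from the same process, via a new file descriptor) returns None until the first fd is closed, which is exactly the property commands like rkt status and rkt gc need when racing over a container directory.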
v0.2.0 also marks the arrival of automatic signature validation: when retrieving an image during
rkt fetch or
rkt run, Rocket will verify its signature by default. Kelsey Hightower has written up an overview guide explaining this functionality. This signature verification is backed by a flexible system for storing public keys, which will soon be even easier to use with a new
rkt trust subcommand. This is a small but important step towards our goal of Rocket being as secure as possible by default.
Here's an example of the key validation in action when retrieving the latest etcd release (in this case the CoreOS ACI signing key has previously been trusted using the process above):
$ rkt fetch coreos.com/etcd:v2.0.0-rc.1
rkt: searching for app image coreos.com/etcd:v2.0.0-rc.1
rkt: fetching image from https://github.com/coreos/etcd/releases/download/v2.0.0-rc.1/etcd-v2.0.0-rc.1-linux-amd64.aci
Downloading aci: [=============================                ] 2.31 MB/3.58 MB
Downloading signature from https://github.com/coreos/etcd/releases/download/v2.0.0-rc.1/etcd-v2.0.0-rc.1-linux-amd64.sig
rkt: signature verified: CoreOS ACI Builder <firstname.lastname@example.org>
The appc spec continues to evolve but is now stabilizing. Some of the major changes are highlighted in the announcement email that went out earlier this week.
This last week has also seen the emergence of two different implementations of the spec: jetpack (a FreeBSD/Jails-based executor) and libappc (a C++ library for working with app containers). The authors of both projects have provided extremely helpful feedback and pull requests to the spec, and it is great to see these early implementations develop!
Jetpack is an implementation of the App Container Specification for FreeBSD. It uses jails as an isolation mechanism and ZFS for layered storage. Jetpack is a great test of the cross-platform portability of appc.
libappc is a C++ library for doing things with app containers. The goal of the library is to be a flexible toolkit: manifest parsing and creation, pluggable discovery, image creation/extraction/caching, thin-provisioned file systems, etc.
If you are interested in contributing to any of these projects, please get involved! A great place to start is issues in the Help Wanted label on GitHub. You can also reach out with questions and feedback on the Rocket and appc mailing lists:
Lastly, thank you to the community of contributors emerging around Rocket and App Container:
Alan LaMielle, Alban Crequy, Alex Polvi, Ankush Agarwal, Antoine Roy-Gobeil, azu, beadon, Brandon Philips, Brian Ketelsen, Brian Waldon, Burcu Dogan, Caleb Spare, Charles Aylward, Daniel Farrell, Dan Lipsitt, deepak1556, Derek, Emil Hessman, Eugene Yakubovich, Filippo Giunchedi, Ghislain Guiot, gprggr, Hector Fernandez, Iago López Galeiras, James Bayer, Jimmy Zelinskie, Johan Bergström, Jonathan Boulle, Josh Braegger, Kelsey Hightower, Keunwoo Lee, Krzesimir Nowak, Levi Gross, Maciej Pasternacki, Mark Kropf, Mark Lamourine, Matt Blair, Matt Boersma, Máximo Cuadros Ortiz, Meaglith Ma, PatrickJS, Pekka Enberg, Peter Bourgon, Rahul, Robo, Rob Szumski, Rohit Jnagal, sbevington, Shaun Jackman, Simone Gotti, Simon Thulbourn, virtualswede, Vito Caputo, Vivek Sekhar, Xiang Li
Our team has been on a fantastic tour meeting CoreOS contributors and friends around the world. A special thank you to the organizers of those meetups and to all those who came out to the meetups and made us feel at home. Come join us at the following events this month:
Tuesday, January 27 at 11 a.m. PST – Online
Join us for a webinar on Managing CoreOS Container Performance for Production Workloads. Kelsey Hightower (@kelseyhightower) from CoreOS and Matt Williams from Datadog will discuss trends in container usage and show how container performance can be monitored, especially as the container deployments grow. Register here.
Tuesday, January 27 at 6 p.m. EST – New York, NY
Come to our January New York City meetup at Work-Bench, 110 Fifth Avenue on the 5th floor, where our team will discuss our new container runtime, Rocket, as well as new Quay.io features. In addition, Nathan Smith, head of engineering at Wink (www.wink.com), will walk us through how they are using CoreOS. Register here.
Tuesday, January 27 at 6 p.m. PST – San Francisco, CA
Thursday, January 29 at 7 p.m. CET – Barcelona, Spain
Meet Brian Harrington, better known as Redbeard (@brianredbeard), for CoreOS: An Overview, at itnig. Dedicated VMs and configuration management tools are being replaced by containerization and new service management technologies like systemd. This meetup will give an overview of CoreOS, including etcd, schedulers (mesos, kubernetes, etc.), and containers (nspawn, docker, rocket). Understand how to use these new technologies to build performant, reliable, large distributed systems. Register here.
Saturday, January 31-Sunday, February 1 – Brussels, Belgium
Our team is attending FOSDEM ’15 to connect with developers and the open source community. See our talks and meet the team at our dev booth throughout the event.
A special shout out to the organizers of those meetups - Fintan Ryan, Ranganathan Balashanmugam, Muharem Hrnjadovic, Frédéric Ménez, Richard Paul, Piotr Zurek, Patrick Heneise, Benjamin Reitzammer, Sunday Ogwu, Tom Martin, Chris Kuhl and Johann Romefort.
If you are interested in hosting an event of your own or inviting someone from CoreOS to speak, reach out to us at email@example.com.
As I mentioned on Twitter last week, I’m very happy SUSE was able to support linux.conf.au 2015 with a keynote giveaway on Wednesday morning and sponsorship of the post-conference Beer O’Clock at Catalyst:
— Tim Serong, Esquire (@tserong) January 13, 2015
For those who were in attendance, I thought a little explanation of the keynote gift (a Samsung Galaxy Tab 4 8″) might be in order, especially given the winner came up to me during the post-conference drinks and asked “what’s up with the tablet?”
To put this in perspective, I’m in engineering at SUSE (I’ve spent a lot of time working on high availability, distributed storage and cloud software), and while it’s fair to say I represent the company in some sense simply by existing, I do not (and cannot) actually speak on behalf of my employer. Nevertheless, it fell to me to purchase a gift for us to provide to one lucky delegate sensible enough to arrive on time for Wednesday’s keynote.
I like to think we have a distinct engineering culture at SUSE. In particular, we run a hackweek once or twice a year where everyone has a full week to work on something entirely of their own choosing, provided it’s related to Free and Open Source Software. In that spirit (and given that we don’t make hardware ourselves) I thought it would be nice to be able to donate an Android tablet which the winner would either be able to hack on directly, or would be able to use in the course of hacking something else. So I’m not aware of any particular relationship between my employer and that tablet, but as it says on the back of the hackweek t-shirt I was wearing at the time:
Some things have to be done just because they are possible.
Not because they make sense.
NoOps with Ansible and Puppet – Monty Taylor
When Everything Falls Apart: Stories of Version Control System Scaling – Ben Kero
SL[AUO]B: Kernel memory allocator design and philosophy – Christopher Lameter
How to get one of those Open Source jobs – Mark Atwood
Pettycoin: Towards 1.0 – Rusty Russell
My first linux.conf.au was in 2003; it was absolutely fantastic and I’ve been to every one since. Since I like this radical idea of equality and the LCA2015 organizers said there were 20% female speakers this year, I thought I’d look through the history.
So, since there isn’t an M or F next to names on the conference programme, I have to guess. This probably means I get things wrong and have a bias. But, heck, I’ll have a go, and this is my best guess (and it mostly excludes miniconfs as I don’t have programmes for them).
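The counting itself is trivial. The sketch below shows the sort of guessing involved; the name-to-gender mapping and the speaker list here are invented, not the actual conference data:

```python
# Illustrative only: guess speaker gender from a hand-maintained mapping
# of first names, then compute the percentage of women speakers.
GUESSES = {"Alice": "F", "Bob": "M", "Carol": "F", "Dave": "M"}

def percent_women(speakers):
    """speakers: list of first names; returns the guessed percentage
    of women among the names we have a guess for."""
    known = [GUESSES[name] for name in speakers if name in GUESSES]
    if not known:
        return 0.0
    return 100.0 * known.count("F") / len(known)

# e.g. a (made-up) year's speaker list:
print(percent_women(["Alice", "Bob", "Carol", "Dave", "Bob"]))  # prints 40.0
```

Any name missing from the mapping is simply skipped, which is one source of the bias mentioned above.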
Or, in graph form:
Update/correction: lca2012 had around 20% women speakers at the main conference (organizers gave numbers at the opening) and 2006 had 3 at the sysadmin miniconf and 1 in the main conference.
Drupal8 outta the box – Donna Benjamin
Connecting Containers: Building a PaaS with Docker and Kubernetes – Katie Miller
Tunnels and Bridges: A drive through OpenStack Networking – Mark McClain
Crypto Won’t Save You Either – Peter Gutmann
8 writers in under 8 months: from zero to a docs team in no time flat – Lana Brindley
There was a Q&A. Mostly questions about diversity at the companies, and grumbles about people having to move to the US or Sydney to work for them.
Mass automatic roll out of Linux with Windows as a VM guest – Steven Sykes
etcd: distributed locking and service discovery – Brandon Philips
Linux at the University – Randy Appleton
Untangling the strings: Scaling Puppet with inotify – Steven McDonald
Configuration Management – A love Story – Javier Turegano
Healthy Operations – Phil Ingram
Developments in PCP (Performance Co-Pilot) – Nathan Scott
Security options for container implementations – Jay Coles
EQNZ – crisis response, open source style – Brenda Wallace
collectd in dynamic environments – Florian Forster
CoreOS: an introduction – Brandon Philips
Why you should consider using btrfs, real COW snapshots and file level incremental server OS upgrades like Google does. – Marc Merlin
Alerting Husbandry – Julien Goodwin
Managing microservices effectively – Daniel Hall
Corralling logs with ELK – Mark Walkom
FAI — the universal deployment tool – Thomas Lange
Documentation made complicated – Eric Burgueno
A few months ago I gave a lecture about systemd for the Linux Users of Victoria. Here are some of my notes reformatted as a blog post:
Scripts in /etc/init.d can still be used; they work the same way as they do under sysvinit for the user. You type the same commands to start and stop daemons.
To get a result similar to changing runlevel, use the “systemctl isolate” command (e.g. “systemctl isolate multi-user.target”). Runlevels were never really supported in Debian (unlike Red Hat, where they were used for starting and stopping the X server), so for Debian users there’s no change here.
The command systemctl with no params shows a list of loaded services and highlights failed units.
The command “journalctl -u UNIT-PATTERN” shows journal entries for the unit(s) in question. The pattern uses wildcards, not regexes (for example, “journalctl -u 'ssh*'”).
The systemd journal includes the stdout and stderr of all daemons. This solves the problem of daemons that don’t log all errors to syslog and leave the sysadmin wondering why they don’t work.
The command “systemctl status UNIT” gives the status and last log entries for the unit in question.
A program can use ioctl(fd, TIOCSTI, …) to push characters into a tty input buffer. If the sysadmin runs an untrusted program with the same controlling tty then it can cause the sysadmin’s shell to run hostile commands. The setsid() system call, which creates a new terminal session, is one solution, but managing which daemons can be started with it is difficult. The way that systemd manages the start/stop of all daemons solves this. I am glad to be rid of the run_init program we used to use on SE Linux systems to deal with this.
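The setsid() idea can be sketched in a few lines of Python: a hypothetical wrapper (run_detached is my name for it, not a real tool) that forks, starts a new session so the child has no controlling tty and TIOCSTI can’t reach the invoking shell’s terminal, and then runs the untrusted program. This illustrates the mechanism only; it is not a substitute for run_init or for systemd’s own daemon management:

```python
import os

def run_detached(argv):
    """Run argv in a new session, detached from the controlling tty,
    so it cannot use ioctl(fd, TIOCSTI, ...) to inject keystrokes
    into the caller's terminal. Illustrative sketch only."""
    pid = os.fork()
    if pid == 0:
        try:
            os.setsid()               # new session: child has no controlling tty
            os.execvp(argv[0], argv)  # replace child with the untrusted program
        finally:
            os._exit(127)             # only reached if setsid/exec failed
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)
```

The wrapper returns the program’s exit code, e.g. run_detached(["true"]) returns 0.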
Systemd has a mechanism to ask for passwords for SSL keys and encrypted filesystems etc. There have been problems with that in the past but I think they are all fixed now. While there is some difficulty during development the end result of having one consistent way of managing this will be better than having multiple daemons doing it in different ways.
The commands “systemctl enable” and “systemctl disable” enable/disable daemon start at boot which is easier than the SysVinit alternative of update-rc.d in Debian.
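Side by side, assuming a Debian system with a unit and init script both named ssh:

```
$ systemctl enable ssh.service   # systemd: start at boot
$ systemctl disable ssh.service  # systemd: don't start at boot
$ update-rc.d ssh defaults       # SysVinit equivalent of enable
$ update-rc.d ssh disable        # SysVinit equivalent of disable
```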
Systemd has built in seat management, which is not more complex than consolekit which it replaces. Consolekit was installed automatically without controversy so I don’t think there should be controversy about systemd replacing consolekit.
Systemd improves performance by parallel start and autofs style fsck.
The command systemd-cgtop shows resource use for cgroups it creates.
The commands “systemd-analyze blame” and “systemd-analyze critical-chain” show what delayed the boot process and the critical path in boot delays, respectively.
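A quick session might look like this (output omitted, as it varies per machine):

```
$ systemd-analyze                 # total time spent in kernel, initrd and userspace
$ systemd-analyze blame           # units sorted by how long they took to start
$ systemd-analyze critical-chain  # the chain of units that gated the boot
```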
Systemd also has security features, such as a service-private /tmp and restricting a service’s access to directory trees.
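As an illustration, a hypothetical service (mydaemon is an assumed name) could opt into those protections with a few directives in its unit file. Directive names are as of systemd at the time; newer releases spell some of these differently (e.g. ReadOnlyPaths=):

```ini
[Service]
ExecStart=/usr/local/bin/mydaemon
PrivateTmp=yes                 # service gets its own empty /tmp and /var/tmp
ReadOnlyDirectories=/etc       # /etc is read-only from the service's view
InaccessibleDirectories=/home  # /home is invisible to the service
```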
For basic use things just work; you don’t need to learn anything new to use systemd.
It provides significant benefits for boot speed and potentially security.
It doesn’t seem more complex than other alternative solutions to the same problems.
Last spoke 10 years ago at Linux.conf.au in Canberra
Things have improved in the last ten years
The Next 10 years
The last 10 years
Our common Values
AWS OpsWorks Orchestration War Stories – Andrew Boag
Slim Application Containers from Source – Sven Dowideit
Containers and PCP (Performance Co-Pilot) – Nathan Scott
The Challenges of Containerizing your Datacenter – Daniel Hall
Cloud Management and ManageIQ – John Mark Walker
LXD: The Container-Based Hypervisor That Isn’t – Tycho Andersen
Rocket and the App Container Spec – Brandon Philips
Since we launched in 2014, we have assisted numerous companies, open source projects and individuals in learning about, experimenting with and using the automation tools that nowadays define operations. Many things are changing in this area.
We have helped many people achieve their automation goals, and we are happy to see how their operational costs are reduced and their productivity increased.
Stay tuned! Very soon we will release a new set of tools that will make your life in operations even easier.
Recently I was invited to give a TEDx talk at a Canberra event for women speakers. It was a good opportunity to have some fun with some ideas I’ve been playing with for a while around the concept of being a citizen in the era of the Internet, and what that means for individuals and traditional power structures in society, including government. A snipped transcript is below. Enjoy, and comments are welcome. I’ve put a few links that might be of interest throughout, and the slides are in the video for reference.
Video is at http://www.youtube.com/embed/iqjM_HU0WSw
I want to talk to you about digital citizenship and how, not only the geek will inherit the earth but, indeed, we already have. All the peoples just don’t know it yet.
We are in the most exciting of times. People are connected from birth and are engaged across the world. We are more powerful as individuals than ever before. Particularly in communities and societies like Australia, we have a population that has all of our basic needs taken care of. So we have got time to kill. And we’ve got resources. Time and resources give a greater opportunity for introspection, which has led, over the last hundred years in particular, to enormous progress. To the establishment of the concept of individual rights and strange ideas like the concept that animals might actually have feelings and perhaps maybe shouldn’t be treated awfully or just as a food source.
We’ve had these huge, enormous revolutions and evolutions of thought and perspective for a long, long time but it’s been growing exponentially. It’s a combination of the growth in democracy, the rise of the concept of individual rights, and the concept of individuals being able to participate in the macro forces that shape their world.
But it’s also a combination of technology and the explosion in what an individual can achieve, both as an individual and also en masse, collaborating dynamically across the globe. It’s the fact that many of us are kind of fat, content and happy and now wanting to make a bit of a difference, which is quite exciting. So what we’ve got is a massive and unprecedented distribution of power.
We’ve got the distribution of publishing. The ability to publish whatever you want. Whether you do it through formal mechanisms or anonymously. You can distribute to a global audience with fewer barriers to entry than ever before. We have the distribution of the ability to communicate with whomever you please. The ability to monitor, which has traditionally been a top-down thing for ensuring laws are followed and taxes are paid. But now people can monitor sideways, they can monitor up. They can monitor their governments. They can monitor companies. There is the distribution of enforcement. This gets a little tricky because if anyone can enforce then anyone can enforce anything. And you start to get some real concerns there, but it is an interesting time. Finally, with the advent of 3D printing starting to go mainstream, we’re seeing the massive distribution of property.
And if you think about these five concepts – publishing, communications, monitoring, enforcement and property – these five power bases have traditionally been centralised. We usually look at the industrial revolution and the broadcast age as two major periods in history but arguably they’re both actually part of the same era. Because both of them are about the centralised creation of stuff – whether it’s physical or information – by a small number of people that could afford to do so, and then distributed to the rest of the population.
The idea that anyone can create any of these things and distribute it to anyone else, or indeed for their own purposes is a whole new thing and very exciting. And what that means is that the relationship between people and governments and industry has changed quite fundamentally. Traditional institutions and bastions of any sort of power are struggling with this and are finding it rather scary but it is creating an imperative to change. It is also creating new questions about legitimacy and power relations between people, companies and governments.
Individuals, however, are thriving in this environment. There are always arguments about trolls and about whether the power is being used trivially. The fact is the Internet isn’t all unicorns or all doom. It is something different, it is something exciting and it is something that is empowering people in a way that’s unprecedented and often unexpected.
The term singularity is one of those fluffy things that’s been touted around by futurists but it does have a fairly specific meaning which is kind of handy. The concept of the distance between things getting smaller. Whether that’s the distance between you and your publisher, you and your food, you and your network or you and your device. The concept of approaching the singularity is about reducing those distances between. Now, of course the internet has reduced the distance between people quite significantly and I put to you that we’re in a period of a “democratic singularity” because the distance between people and power has dramatically reduced.
People are in many ways now as powerful as a lot of the institutions which frame and shape their lives. So, to paraphrase and slightly turn on its head the quote by William Gibson: the future is here and it is already widely distributed. So we’ve approached the democratic singularity and it’s starting to make democracy a lot more participatory, a lot more democratic.
So, what does this mean in reality? What does this actually translate to for us as people, as a society, as a “global village”, to quote Marshall McLuhan? There are quite massive changing expectations of individuals. I see a lot of people focused on the shift in power from the West to the East. But I believe the more interesting shift is the shift in power from institutions to individuals.
That is the more fascinating shift, not just because individuals have power but because it is changing our expectations as a society. And when you start to get a massive change of expectations across an entire community of people, that starts to change behaviors, change economics, change social patterns, change social norms.
What are those changing expectations? Well, the internet teaches us a lot of things. The foundational technical principles of the internet are effectively shaping the social characteristics of this new society. This distributed society, or “Society 5” if you will.
Some of the expectations are the ability to access what you want. The ability to talk to whom you want. The ability to cross-reference. When I was a kid and you did an essay on anything, you had to go look at Encyclopedia Britannica. It was a single source of truth. The concept that you could get multiple perspectives, some of which might be skewed by the way, but still the concept of getting the context of those different perspectives and a little comparison, was hard and alien for the average person. Now you can often talk to someone who is there right now, let alone find myriad sources to help inform your view. You can get a point of comparison against traditionally official sources like a government source or a media report. People online start to intuitively understand that the world’s actually a lot more gray than we are generally taught in school. Learning that the world is gray is great because you start to say, “you know what? You could be right and I could be right and that doesn’t make either perspective necessarily invalid, and that isn’t a terrible thing.” It doesn’t have to be mutually exclusive or a zero-sum game, or a single view of history. We can both have a perspective and be mutually respectful in a lot of cases and actually have a more diverse and interesting world as a result.
Changing expectations are helping many people overcome barriers that traditionally stopped them from being socially successful: economically, reputationally, etc. People are more empowered to basically be a superhero which is kinda cool. Online communities can be one of the most exciting and powerful places to be because it starts to transcend limitations and make it possible for people to excel in a way that perhaps traditionally they weren’t able to. So, it’s very exciting.
Individual power also brings a lot of responsibility. We’ve got all these power structures but at the end of the day there’s usually a techie implementing the big red button so the role of geeks in this world is very important. We are the ones who enable technology to be used for any agenda. Everything is basically based on technology, right? So everything is reliant upon technology. Well, this means we are exactly as free as the tools that we use.
If the tool that you’re using for social networking only allows you to talk to people in the same geographic area as you, then you’re limited. If the email tool you’re using only allows you to send to someone on the same secure network, then you’re only as free as that tool. Tech literacy becomes an enabler or an inhibitor, and it defines an individual’s privacy. Because you might say to yourself, oh you know, I will never tell anyone where I am at a particular point in time because I don’t want someone to rob my house while I’m out on holiday. But you’ll still put a photo up that you’re in Argentina right now, because that’s fun, so now people know. Technical literacy for the masses is really important but largely, at this point, confined to the geeks. So hacker ethos ends up being a really important part of this.
For those that don’t know, hacker is not a rude word. It’s not a bad word. It’s the concept of having a creative and clever approach to technology and applying tech in cool and exciting ways. It helps people scratch an itch, test their skills, solve tricky problems collaboratively. Hacker ethos is a very important thing because you start to say that freedom, including technical freedom, is actually very, very important. It’s very high on the list. And with this ethos, technologists know that to implement and facilitate technologies that actually hobble our fellow citizens kind of screws them over.
Geeks will always be the most free in a digital society because we will always know how to route around the damage. Again, going back to the technical construct of the internet. But fundamentally we have a role to play to actually be leaders and pioneers in this society and to help lead the masses into a better future.
There are also a lot of other sorts of dangers. Tools don’t discriminate. The same tools that can lead a wonderful social revolution or empower individuals to tell their stories are the same technology that can be used by criminals or those with a nefarious agenda. This is an important reason to remember we shouldn’t lock down the internet because someone can use it for a bad reason, in the same way we don’t ban cars just because someone used a vehicle to rob a bank. The idea of hobbling technology because it’s used in a bad way is a highly frustrating one.
Another danger is “privilege cringe”. In communities like Australia we’re sort of taught to say, well, you’ve got privilege because you’ve been brought up in a safe, stable environment, you’ve got an education, you’ve got enough money, you’ve got a sense of being able to go out and conquer the world. But you’ve got to hide that because you should be embarrassed by your opportunities when so many others have so little. I suggest to you all that you in this room, and pretty much anyone that would probably come and watch a TED event or go to a TED talk or watch it online, is the sort of person who is probably reasonably privileged in a lot of ways, and you can use your privilege to influence the world in a powerful and positive way.
You’ve got access to the internet which makes you part of the third of the world that has access. So use your privilege for the power of good! This is the point. We are more powerful than ever before so if you’re not using your power for the power of good, if you’re not actually contributing to making the world a better place, what are you doing?
Hipsters are a major danger. Billy Bragg has a perfect quote: cynicism is the perfect enemy of progress. There is nothing more frustrating than actually making progress and having people tear you down because you haven’t done it exactly so.
Another danger is misdirection. We have a lot of people in Australia who want to do good. That’s very exciting and really cool. But Australians tend to say, I’m going to go to another country and feed some poor people and that’ll make me feel good, that’ll be doing some good and that’ll be great. For me personally, that would really not be good for people because I don’t cook very well. Deciding how you can actually contribute to making the world a better place is like finding a lever. You need to identify what you are good at, what real differences you can make when you apply your skills very specifically. Where do you push to get a major change, rather than contributing to maintaining the status quo? How do you rewrite the rules? How do you actually help those people that need help all around the world, including here in Australia, in a way that actually helps them sustainably? Enthusiastic misdirection is, I guess, what I’m getting at.
And of course, one of the most frustrating dangers is hyperbole. It is literally destroying us. Figuratively speaking.
So there’s a lot of dangers, there’s a lot of issues but there is a lot of opportunities and a lot of capacities to do awesome. How many people here have been to a TED talk of some sort before? So keep your hand up if, after that, you went out and did something world changing. OK. So now you’re gonna do that, yeah? Right. So next time we do this all of those hands will stay up.
I’ll make a couple of last points. My terrible little diagram here maps the concept that if you look at the last 5,000 years, the quality of life for individuals in many societies has been down here fairly low for a long time. In millennia past, kings come and go, people get killed, property is taken. All sorts of things happen and individuals were very much at the behest of the powers of the day, but you just kept ploughing your fields and tried to be all right. But it has slowly improved over a long time, and the collective epiphany of the individual starts to happen: the idea of having rights, the idea that things could be better and that people could contribute to their own future, and democracy starts to kick off. Then the many suffrage movements addressing gender, ethnicity and other biases, with more and more individuals in societies starting to be granted more equal recognition and rights.
The last hundred years, boom! It has soared up here somewhere. And I’m not tall enough to actually make the point, right? This is so exciting! So where are we going to go next?
How do we contribute to the future if we’re not involved in shaping the future? If we aren’t involved, then other powerful individuals are going to shape it for us. And this is the thing I’ve really learned by working in government, by working in the Minister’s office, by working in the public service. I specifically went to work for a politician – even though I’m very strongly apolitical – and then in the government and the public service because I wanted to understand the executive, legislative, and administrative arms of the entity that shapes our lives so much. I feel like I have a fairly good understanding of that now, and there are a lot of people who influence your lives every day.
Have we really hit this tipping point? You know, is it really any different today than it was yesterday? Well, we’ve had this exponential progress, we’ve got a third of the world online, we’ve got these superhumanly powerful individuals in a large chunk of different societies around the world. I argue that we have hit and passed the tipping point, but the realisation hasn’t hit everyone yet.
So, the question is for you to figure out your super power. How do you best contribute it to making the world a better place?
Powers and kryptonite
For me, going and working in a soup kitchen will not help anybody. I could possibly design a robot that creates super delicious and nutritional food to actually feed people. But me doing it myself would actually probably give them food poisoning and wouldn’t help anyone. You need to figure out your specific super powers so you can deploy them to some effect. Figure out how you can contribute to the world. Also figure out your kryptonite.
What biases do you have in place? What weaknesses do you have? What things will actually get in the way of you trying to do what you’re doing? I quite often see people apply critical analysis and critical thinking tools without any self-awareness and the problem is that we are super clever beings and we can rationalize anything we want if, emotionally, we like it or dislike it.
So try and have both self-awareness and critical analysis and now you’ve got a very powerful way to do some good. So I’m going to just finish with a quote.
What better place than here? What better time than now? All hell can’t stop us now — RATM
The future is being determined whether you like it or not. But it’s not really being determined by the traditional players in a lot of ways. The power’s been distributed. It’s not just the politicians or the scholars or the researchers or corporates. It’s being invented right here, right now. You are contributing to that future either passively or actively. So you may as well get up and be active about it.
We’re heading towards this and we’ve possibly even hit the tipping point of a digital singularity and a democratic singularity. So, what are you going to do about it? I invite you to join me in creating the future together.
Thank you very much.
Recently I adventured to Antarctica. It’s not every day you get to say that, and it has always been a dream of mine to travel to the south pole (or close to it!) and to see the glaciers, penguins, whales and birds that inhabit such a remote environment. There is something liberating and awesome (in the full sense of the word) in going somewhere very few humans have traveled, especially for someone like me who spends so much time online.
Being Australian and unused to real cold, I think I was also attracted to exploring a truly cold place. The problem with travelling to Antarctica is, as it turns out, the 48-60 hours of torment you need to go through to get there and to get back. The Drake Passage is the strip of open sea between the bottom of South America and the peninsula of the Antarctic continent. It is by far the most direct way to get to Antarctica by ship, and the port town of Ushuaia is well set up to support intrepid travellers in this venture. We took off from Ushuaia on a calm Wednesday afternoon and within a few hours were into the dreaded Drake. I found that as long as I was lying down I was OK, but walking around was torture! So I ended up staying in bed for about 40 hours, by which time it had calmed down significantly. See my little video of the calmer but still awful parts. And that was apparently a calm crossing! Ah well, turns out I don’t have sea legs. At least I wasn’t actually sick, and I certainly caught up with a few months of sleep deprivation, so arguably it was the perfect enforced rest!
Now the adventure begins! We were accompanied by a number of stunning and enormous birds, including Cape Petrels and a number of Albatrosses. Then we came across a Blue Whale, which is apparently quite a rare thing to see in the Drake. It gave us a little show and then went on its way. We entered the Gerlache Strait and saw our first ice, which was quite exciting, but by the end of the trip these early views were just breadcrumbs! We landed at Cuverville Island, which was stunning! I had taken the snowshoeing option and so, with 12 other adventurous travellers, we started up the snow-covered hill to get some better views. We saw a large colony of Gentoo penguins, which was fun; they are quite curious and cute creatures. We had to be careful not to block any “penguin highways”, so we often gave way to scores of them as we explored. We saw a Leopard Seal in the water, which managed to catch one unfortunate penguin for lunch.
We then landed at Neko Harbour, our first step onto the actual Antarctic continent! Again, more stunning views and Gentoo penguins. We had the good fortune to also have time that day to land at Port Lockroy, an old British station in Antarctica and the southernmost post office in the world. I sent a bunch of postcards to friends and family on the 23rd of December; I guess we’ll see how long they take to make the trip. We got to see a number of Snowy Sheathbill birds, which are a bit of a scavenger. They eat everything, including penguin poo, which is truly horrible. Although their eating habits are awful, they are quite beautiful, and I was lucky enough to score a really good shot of one mid flight.
The next day we traveled down the Lemaire Channel to Petermann Island, where we saw more Gentoo penguins, but also Adélie penguins, which are terribly cute! Again we did some snowshoeing, which was excellent. I took some time to just sit and drink in the remoteness and the pristine environment that is Antarctica. It was humbling and wonderful to remember how truly small we all are and the magnificence of this world on which we reside. We also saw some Minke Whales in the water beside the ship.
In the afternoon we broke through a few kilometres of ice and took the small boats (zodiacs) a short distance, then walked half a kilometre over ocean ice to land at Vernadsky Base, a Ukrainian scientific post. The dozen or so scientists there hadn’t seen any other humans for 8 months and were very pleased to see us. All of them were men, and when I asked why there weren’t any women scientists there I got a one-word answer from our young Ukrainian guide: politics. Interesting… At any rate it was fascinating and it looks like they do some incredible science down there. There was also a small Elephant Seal who crawled up to the bar to say hi. They have the southernmost bar in the world, and there we were treated to home-made sugar-based vodka, which was actually pretty good. So good, in fact, that one of the guests from our ship drank a dozen shots, then traded her bra for some snowmobile moonlighting around the base. It was quite hilarious, and our poor expedition leader dealt with it very diplomatically.
To cap off a fantastic day, the catering crew put on a BBQ on the deck of the Ocean Nova, which was a cold but excellent affair. The mulled wine and hot apple dessert went down particularly well against the cold! We did a trivia night, which was great fun, and our team, “The Rise of the Gentoo”, won! There was much celebration, though the sweet victory was snatched from us when they found a score card for a team that hadn’t been marked. Ah well, all is fair in love and war! I had only one question for our expedition leader: would we see any Orca? Orca are a new favourite animal of mine. They are brilliant, social and strategic animals. Well worth looking into.
The next morning we were woken particularly early as there were some Orca in the water! I was first on deck, in my pyjamas, and I have to admit I squealed quite a lot, much to the amusement of our new American friends. At one point I saw all five Orca come to the surface and I could only watch in awe. They really are stunning animals. I learned from the on-board whale expert that Orca have some particularly unique hunting techniques. Often they come across a seal or two on a small iceberg surrounded by water, so they swim up to it in formation and then dive and hit their tails simultaneously, creating a small tidal wave that washes the seal off into the water, ready for the taking. Very clever animals. They always share the spoils of a hunt amongst the pod, and often will simply daze a victim to teach young Orca how to hunt before dealing a death blow. Apparently Orca have been known to kill much larger animals, including Humpback Whales.
Anyway, the rest of the day we did some zodiac trips (the small courier boats) around Paradise Harbour, which was bitterly cold, and then around the Melchior Islands in Dallman Bay, which was spectacular. One of the birds down here is the Antarctic Cormorant, closely related to the Cormorants in Australia; they look quite similar. We got to see a number of them nesting. Going back through the Drake I had to confine myself to my room again, which meant I missed seeing Humpback Whales. This was unfortunate, but I really did struggle to travel around the ship in the Drake without getting very ill.
On a final note, I traveled with the Antarctica XXI, which has a caring and wonderful crew. The crew includes scientists, naturalists, biologists and others who genuinely love Antarctica. As a result we had a number of amazing lectures throughout the trip about the wildlife and ecosystem of Antarctica. Learning about Krill, ice flow, climate change and the migratory patterns of the whales was awesome. I wish I had been able to attend more talks but I couldn’t get up during most of the Drake :/ The rest of the crew, who looked after navigation, feeding us, cleaning and all the other operations, were just amazing. A huge thank you to you all for making this voyage the trip of a lifetime!
One thing I didn’t anticipate was the land sickness! 24 hours after getting off the boat I still feel the sway of the ocean! All of my photos, plus a couple of group photos and a video or two, are up on my flickr account in the Antarctica 2013 set at http://www.flickr.com/photos/piawaugh/sets/72157638364999506/ You can also see photos from Buenos Aires if you are interested at http://www.flickr.com/photos/piawaugh/sets/72157638573728155/
A special thank you also to Jamie, our expedition leader, who delivered an incredible itinerary under some quite trying circumstances, and to all the expedition crew! You guys totally rock.
I met some amazing new friends on the trip, and got to spend some quality time with existing friends. You don’t go on adventures like this without meeting other people of a similar adventurous mindset, which is always wonderful.
For everyone else, I highly highly recommend you check out the Antarctica XXI (Ocean Nova) trips if you are interested in going to Antarctica or the Arctic.
For all my linux.conf.au friends, yes I did scope out Antarctica for a potential future conference, but given the only LUGs there are Gentoos, I think we should all spare ourselves the pain.
Below are links to some additional reading about the places we visited as provided by the Antarctica XXI crew, the list of animals that were sighted throughout the journey, and some other bits and pieces that might be of interest. Below are also some excellent quotes about Antarctica that were on the ship intranet that I just had to post, to give you a flavour of what we experienced.
The church says the earth is flat, but I know that it is round, for I have seen the shadow on the moon, and I have more faith in a shadow than in the church. — Ferdinando Magallanes
We were the only pulsating creatures in a dead world of ice. — Frederick Albert Cook
Below the 40th latitude there is no law; below the 50th no god; below the 60th no common sense and below the 70th no intelligence whatsoever. — Kim Stanley Robinson
I have never heard or felt or seen a wind like this. I wondered why it did not carry away the earth. — Cherry-Garrard
Great God ! this is an awful place. — Robert Falcon Scott, referring to the South Pole
Human effort is not futile, but Man fights against the giant forces of Nature in the spirit of humility. — Ernest Shackleton
Had we lived I should have had a tale to tell of the hardihood, endurance and courage of my companions …. These rough notes and our dead bodies must tell the tale. — Robert Falcon Scott
People do not decide to become extraordinary. They decide to accomplish extraordinary things. — Edmund Hillary
Superhuman effort isn’t worth a damn unless it achieves results. — Ernest Shackleton
Adventure is just bad planning. — Roald Amundsen
For scientific leadership, give me Scott; for swift and efficient travel, Amundsen; but when you are in a hopeless situation, when there seems to be no way out, get on your knees and pray for Shackleton. — Sir Raymond Priestley
Below are some of the interesting imperatives I have observed as key drivers for changing how governments do things, especially in Australia. I thought it might be of interest to some of you, particularly those trying to understand “digital government”, and why technology is now so vital for government services delivery:
Note: I originally had some of this in another blog post about open data and digital government in NZ, buried some way down. Have republished with some updated ideas.
This was a speech I gave in Brisbane to launch the QUT OSS group. It talks about FOSS, hacker culture, open government/data, and why we all need to embrace our inner geek.
Welcome to the beginning of something magnificent. I have had the luck, privilege and honour to be involved in some pretty awesome things over the 15 or so years I’ve been in the tech sector, and I can honestly say it has been my involvement in the free and Open Source software community that has been one of the biggest contributors.
It has connected me to amazing and inspiring geeks and communities nationally and internationally, it has given me an appreciation of the fact that we are exactly as free as the tools we use and the skills we possess, it has given me a sense of great responsibility as part of the pioneer warrior class of our age, and it has given me the instincts and tools to do great things and route around issues that get in the way of awesomeness.
As such, I am really excited to be part of launching this new student-focused Open Source group at QUT, especially one with academic and industry backing, so congratulations to QUT, Red Hat, Microsoft and Tech One.
It’s also worth mentioning that Open Source skills are in high demand, both nationally and internationally, and something like two thirds of Open Source developers contribute in some professional capacity.
So thanks in advance for having me, and I should say up front that I am here in a voluntary capacity and not to represent my employer or any other organisation.
Who am I? Many things: martial artist, musician, public servant, recently recovered ministerial adviser, but most of all, I am a proud and reasonably successful geek.
So firstly, why does being a geek make me so proud? Because technology underpins everything we do in modern society. It underpins industry, progress, government, democracy, a more empowered, equitable and meritocratic society. Basically technology supports and enhances everything I care about, so being part of that sector means I can play some small part in making the world a better place.
It is the geeks of this world that create and forge the world we live in today. I like to go to non-geek events and tell people who usually take us completely for granted, “we made the Internet, you’re welcome”, just to try to embed a broader appreciation for tech literacy and creativity.
Geeks are the pioneers of the modern age. We are carving out the future one bit at a time, and leading the charge for mainstream culture. As such we have, I believe, a great responsibility to ensure our powers are used to improve life for all people, but that is another lecture entirely.
Geek culture is one of the driving forces of innovation and progress today, and it is organisations that embrace technology as an enabler and strategic benefit that are able to rapidly adapt to emerging opportunities and challenges.
FOSS culture is drawn very strongly from the hacker culture of the ’60s and ’70s. Unfortunately the term hacker has been stolen by the media and spooks to imply bad or illegal behaviours, which we would refer to as black hat hacking or cracking. But true hacker culture is all about being creative and clever with technology, building cool stuff, showing off one’s skills, scratching an itch.
Hacker culture led to free software culture in the ’80s and ’90s, also known as Open Source in business speak, which in turn led to a broader free culture movement in the ’90s and ’00s with Creative Commons, Wikipedia and other online cultural commons. And now we are seeing a strong emergence of open government and open science movements, which is very exciting.
A lot of people are aware of the enormous scale of Wikipedia. Even though Open Source well predates Wikipedia, Wikipedia ends up being a good tool for articulating the importance of Open Source to the general population.
Wikipedia is a globally crowdsourced phenomenon that, love it or hate it, has made knowledge more accessible than ever before. I personally believe that the greatest success of Wikipedia is in demonstrating that truth is perception, and the “truth” held in the pages of Wikipedia ends up, ideally anyway, being the most credible middle ground of perspectives available. The discussion pages of any page give a wonderful insight into any contradicting perspectives or controversies, and it teaches us the importance of taking everything with a grain of salt.
Open Source is the software equivalent of Wikipedia. There are literally hundreds of thousands, if not millions, of Open Source software projects in the world, and you would use thousands of the most mature and useful ones every day without even knowing it. Open Source operating systems like Linux or MINIX power your cars, devices, phones, telephone exchanges and the majority of servers and supercomputers in the world. Open Source web tools like WordPress, Drupal or indeed MediaWiki (the software behind Wikipedia) power an enormous number of the websites you go to every day. Even Google heavily uses Open Source software to build the world’s most reliable infrastructure. If Google.com doesn’t work, you generally check your own network reliability first.
Open Source is all about people working together to scratch a mutual itch, sharing in the development and maintenance of software that is developed in an open and collaborative way. You can build on top of existing Open Source software platforms as a technical foundation for innovation, or employ Open Source development methodologies to better innovate internally. I’m still terrified by the number of organisations I see that don’t use basic version control systems and email around zip files!
Open Source means you can leverage expertise far beyond what you could ever hope to hire, and you build your business around services. The IT sector used to be all about services before the proprietary lowest common denominator approach to software emerged in the 80s.
But we have seen the IT sector largely swing heavily back to services, except in the case of niche software markets, and companies compete on quality of services and whole solution delivery rather than specific products. Services companies that leverage Open Source often find their cost of delivery lower, particularly in the age of “cloud” software as a service, where customers want to access software functionality as a utility based on usage.
Open Source can help improve quality and cost effectiveness of technology solutions as it creates greater competition at the services level.
The Open Source movement has given us an enormous collective repository of stable, useful, innovative, responsive and secure software solutions. I must emphasise secure because many eyes reviewing code means a better chance of identifying and fixing issues. Security through obscurity is a myth and it always frustrates me when people buy into the line that Open Source is somehow less secure than proprietary solutions because you can see the code.
If you want to know about government use of Open Source, check out the Open Source policy on the Department of Finance and Deregulation website. It’s a pretty good policy not only because it encourages procurement processes to consider Open Source equally, but because it encourages government agencies to contribute to and get involved in the Open Source community.
It has been fascinating to see a lot of Open Source geeks taking their instincts and skills with them into other avenues. And to see non-technical and non-Open Source people converging on the same basic principles of openness and collaboration for mutual gain from completely different avenues.
For me, the most exciting recent evolution of hacker ethos is the Open Government movement.
Open Government has always been associated with parliamentary and bureaucratic transparency, such as Freedom of Information and Hansard.
I currently work primarily on the nexus where open government meets technology. Where we start to look at what government means in a digital age where citizens are more empowered than ever before, where globalisation challenges sovereignty, where the need to adapt and evolve in the public service is vital to provide iterative, personalised and timely responses to new challenges and opportunities both locally and globally.
There are three key pillars of what we like to call “Government 2.0”. A stupid term I know, but bear with me:
Open data is very much my personal focus at the moment. I’m now in charge of data.gov.au, which we are in the process of migrating to an excellent Open Source data repository called CKAN which will be up soon. There is currently a beta up for people to play with.
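For those who haven’t played with CKAN before, the flavour of it is easy to get: every CKAN portal exposes a JSON “action” API, with package_search as the standard free-text dataset query. Here is a minimal sketch in Python (assuming the data.gov.au base URL mentioned above; the actual datasets returned will of course vary, so the helpers below just build the request URL and pull titles out of a response):

```python
import json
from urllib.parse import urlencode

# CKAN portals expose a JSON "action" API; package_search is the standard
# free-text dataset query endpoint. The base URL is whichever CKAN portal
# you are talking to, e.g. the data.gov.au instance discussed above.

def build_search_url(base, query, rows=5):
    """Build a CKAN package_search URL for a free-text dataset query."""
    params = urlencode({"q": query, "rows": rows})
    return "{}/api/3/action/package_search?{}".format(base, params)

def dataset_titles(response_text):
    """Extract dataset titles from a package_search JSON response."""
    payload = json.loads(response_text)
    if not payload.get("success"):
        raise RuntimeError("CKAN reported an error for this request")
    return [ds["title"] for ds in payload["result"]["results"]]
```

Fetching the URL from build_search_url with urllib.request.urlopen and passing the response body to dataset_titles is all it takes; the success/result/results shape is part of CKAN’s documented action API, which is exactly what makes it so friendly for hackers and mashup builders.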
I am also the head cat herder for a volunteer-run project called GovHack, which ran just a week ago, where we had 1,000 participants from 8 cities, including here in Brisbane, all working with government data to build 130 new hacks, including mashups, data visualisations, mobile and other applications, interactive websites and more. GovHack clearly shows the benefits to society of opening up government data for public use, particularly when it is available in a machine-readable way and under a very permissive copyright licence such as Creative Commons.
I would highly recommend you check out my blog posts about open data around the world from when I went to a conference in Helsinki last year and got to meet luminaries in this space including Hans Rosling, Dr Tim Hubbard and Rufus Pollock. I also did some work with the New Zealand Government looking at NZ open data practice and policy which might be useful, where we were also able to identify some major imperatives for changing how governments work.
The exciting thing is how keen government agencies in Federal, State, Territory and Local governments are to open up their data! To engage meaningfully with citizens. And to evolve their service delivery to be more personalised and effective for everyone. We are truly living in a very exciting time for technologists, democracy and the broader society.
Though to be fair, governments don’t really have much choice. Citizens are more empowered than ever before, and governments have to adapt, delivering responsive, iterative and personalised services and policy, or risk losing relevance. We have seen the massive distribution of every traditional bastion of power, from publishing and communications to monitoring and enforcement, and even property is about to dramatically shift with the leaps in 3D printing and nanotechnologies. Ultimately governments are under a lot of pressure to adapt the way they do things, and it is a wonderful thing.
The Federal Australian Government already has in place several policies that directly support opening up government data:
Australia has also recently signed up to the Open Government Partnership, an international consortium of over 65 governments, which will be a very exciting step for open data and other aspects of open government.
At the State and Territory level, there is also a lot of movement around open data. Queensland and the ACT launched their new open data platforms late last year with some good success. NSW and South Australia have launched new platforms in the last few weeks with hundreds of new datasets. Western Australia and Victoria have been publishing some great data for some time, and everyone is looking at how they can do so better!
Many local governments have been very active in trying to open up data, and a huge shout out to the Gold Coast City Council here in Queensland who have been working very hard and doing great things in this space!
It is worth noting that the NSW government currently has a big open data policy consultation happening, which closes on 17 June and is well worth looking into and contributing to.
One of my biggest bugbears is when people say “I’m sorry, the software can’t do that”. It is the learned helplessness of the tech illiterate that is our biggest challenge to innovating and being globally competitive, and as countries like Australia are overwhelmingly well off, with the vast majority of our citizens living high quality lives, it is this learned helplessness that is becoming the difference between the haves and have-nots, the empowered and the disempowered.
Teaching everyone to embrace their inner geek isn’t just about improving productivity, efficiency, innovation and competitiveness, it is about empowering our people to be safer, smarter, more collaborative and more empowered citizens in a digital world.
If everyone learnt and experienced even the tiniest amount of programming, we would all have embedded that wonderful instinct that says “the software can do whatever we can imagine”.
Open Source communities and ethos give us a clear vision of how we can overcome every traditional barrier to collaboration to make awesome stuff in a sustainable way. It teaches us that enlightened self-interest in the age of the Internet translates directly to open and mutually beneficial collaboration.
We can all stand on the shoulders of giants that have come before, and become the giants that support the next generation of pioneers. We can all contribute to making this world just a bit more awesome.
So get out there, embrace your inner geek and join the open movement. Be it Open Source, open government or open knowledge, and whatever your particular skills, you can help shape the future for us all.
Thank you for coming today, thank you to Jim for inviting me to be a part of this launch, and good luck to you all in your endeavours with this new project. I look forward to working with you to create the future of our society, together.
Recently I spoke at BarCamp Canberra about my tips and tricks to changing the world. I thought it might be useful to get people thinking about how they can best contribute to the world, according to their skills and passions.
Completely coincidentally, my most excellent boss did a talk a few sessions ahead of me which was the American Civil War version of the same thing. I highly recommend it: John Sheridan – Lincoln, Lee and ICT: Lessons from the Civil War.
So you want to change the world?
Here are the tactics I use to some success. I heartily recommend you find what works for you. Then you will have no excuse but to join me in implementing Operation World Awesomeness.
The Short Version:
No wasted movement.
The Long Version:
1) Pick your battles: there are a million things you could do. What do you most care about? What can you maintain constructive and positive energy about even in the face of towering adversaries and significant challenges? What do you think you can make a difference in? There is a subtle difference between choosing to knock down a mountain with your forehead, and renting a bulldozer. If you find yourself expending enormous energy on something but not making a difference, you need to be comfortable to change tactics.
2) Work to your strengths: everyone is good at something. If you choose to contribute to your battle in a way that doesn’t work to your strengths, whatever they are, then you are wasting energy. You are not contributing in the best way you can. You need to really know yourself, understand what you can and can’t do, then do what you can do well, and supplement your army with the skills of others. Everyone has a part to play and a meaningful way to contribute. FWIW, I work to know myself through my martial arts training, which provides a useful cognitive and physical toolkit to engage in the world with clarity. Find what works for you. As Sun Tzu said: know yourself.
3) Identify success: figure out what success actually looks like, otherwise you have neither a measure of progress nor a measure of completion. I’ve seen too many activists get caught up in a battle and continue fighting well beyond the battle being won, or indeed keep hitting their heads against a battle that can’t be won. It’s important to continually monitor and measure, holding yourself to account, and ensuring you are making progress. If not, change tactics.
4) Reconnaissance: do your research. Whatever your area of interest there is likely a body of work that has come before you that you can build upon. Learn about the environment you are working in, the politics, the various motivations and interests at play, the history and structure of your particular battlefield. Find levers in the system that you can press for maximum effect, rather than just straining against the weight of a mountain. Identify the various moving parts of the system and you have the best chance to have a constructive and positive influence.
5) Networks & Mentors: identify all the players in your field. Who is involved, influential, constructive, destructive, effective, etc. It is important to understand the motivations at play so you can engage meaningfully and collaboratively, and build a mutually beneficial network in the pursuit of awesomeness. Strong mentors are a vital asset, and they will teach you how to navigate the rapids and make things happen. A strong network of allies is also vital to keep you on track, accountable, and true to your own purpose. People usually strive to meet the expectations of those around them, so surround yourself with high expectations. Knowing your network also helps you identify issues and opportunities early.
6) Sustainability: have you put in place a succession plan? How will your legacy continue on without you? It’s important if your work is to continue on that it not be utterly reliant upon one individual. You need to share your vision, passion and success. Glory shared is glory sustained, so bring others on board, encourage and support them to succeed. Always give recognition and thanks to people who do great stuff.
7) Patience: remember the long game. Nothing changes overnight. It always takes a lot of work and persistence, and remembering the long game will help during those times when it doesn’t feel like you are making progress. Again, your network is vital, as it will help you maintain your strength, confidence and patience. Speaking of which, a huge thanks to Geoff Mason for reminding me of this one on the day.
8) Shifting power: it is worth noting that we are living in the most exciting of times. Truly. Individuals are more empowered than ever before to do great things. The Internet has created a mechanism for the mass distribution of power, putting into the hands of all people (all those online, anyway) the tools to:
Side note: Poverty and hunger, we shall overcome you yet! Then we just urgently need to prioritise education for all people. But that is a post for another day. Check out my blog post on Unicorns and Doom, which goes into my thoughts on how online culture is fundamentally changing society.
This last aspect is particularly fascinating, as it changes the game from one between the haves and the have-nots, to one between those with and those without skills and knowledge. We are moving from a material wealth differentiation in society towards an intellectual wealth differentiation. Arguably we always had the latter, but the former has long been a bastion for law, structures, power and hierarchies. And it is all changing.
“What better place than here, what better time than now?” — RATM
I will be doing a longer blog post about the incredible adventure it was to bring Sir Tim Berners-Lee and Rosemary Leith to Australia 10 days ago, but tonight I have had something just amazing happen that I wanted to briefly reflect upon.
I feel humbled, amazed and extremely extremely thankful to be part of such an incredible community in Australia and New Zealand, and a lot of people have stood up and supported me with something I felt very uncomfortable having to deal with.
Basically, a large sponsor pulled out from the TBL Down Under Tour (which I was the coordinator for, supported by the incredible and hard working Jan Bryson) just a few weeks before the start, leaving us with a substantial hole in the budget. I managed to find sponsorship to cover most of the gap, but was left $20k short (for expenses only) and just decided to figure it out myself. Friends rallied around and suggested the crowdsourcing approach which I was hesitant to do, but eventually was convinced it wouldn’t be a bad thing.
We started crowdfunding less than two days ago and raised around $6k ($4,800 on GoGetFunding and $1,200 from Jeff’s earlier effort). This was incredible, especially the wonderfully supportive and positive comments that people left. Honestly, it was amazing. And then, much to my surprise and shock, Linux Australia offered to contribute the rest of the $20k. Silvia is closing the crowdfunding site as I write this, and I’m thankful to her for setting it up in the first place.
I am truly speechless. And humbled. And….
It is worth noting that, stress and exhaustion aside, and though I put over 350 hours of my own time into this project, for me it has been completely worth it. It has brought many subjects dear to my heart into the mainstream public narrative and media, including open government, open data, open source, net neutrality, data retention and indeed, the importance of geeks. I think such a step forward in public narrative will help us take a few more steps towards the future where Geeks Rule Over Kings ;) (my lca2013 talk)
It was also truly a pleasure to hang out with Tim and Rosemary who are extremely lovely people, clever and very interesting to chat to.
For the haters: no, I am not suffering from cultural cringe, and no, I do not need an external voice to validate perspectives locally. There is only one TBL, and if he were Australian I’d still have done what I did.
More to come in the wrap up post on the weekend, but thank you again to all the individuals who contributed, and especially to Linux Australia for offering to fill the gap. There are definitely lessons learnt from this experience which I’ll outline later, but if I was an optimist before, this gives me such a sense of confidence, strength and support to continue to do my best to serve my community and the broader society as best I can.
And I promise I won’t burn out in the meantime.
Po is looking forward to spending more time with his human. We all made sacrifices (old photo courtesy of Mary Gardiner)
On a recent trip to New Zealand I spent three action packed days working with Keitha Booth and Alison Stringer looking at open data. These two have an incredible amount of knowledge and experience to share, and it was an absolute pleasure to work with them, albeit briefly. They arranged meetings with about 3000* individuals from across different parts of the NZ government to talk about everything from open data, ICT policy, the role of government in a digital era, iterative policy, public engagement and the components that make up a feasible strategy for all of the above.
It’s important to note that I did this trip in a personal capacity only, and was sure to be clear I was not representing the Australian government in any official sense. I saw it as a bit of a public servant cultural exchange, which I think is probably a good idea even between agencies, let alone governments.
I got to hear about some of the key NZ Government data projects, including data.govt.nz, data.linz.govt.nz, the statistical data service, some additional geospatial and linked data work, some NZ government planning and efforts around innovation and finding more efficient ways to do tech, and much more. I also found myself in various conversations with extremely clever people about science and government communications, public engagement, rockets, circus and more.
It was awesome, inspiring, informative and exhausting. But this blog post aims to capture the key ideas from the visit. I’d love your feedback on the ideas/frameworks below, and I’ll extrapolate on some of these ideas in followup posts.
I’m also looking forward to working more collaboratively with my colleagues in New Zealand, as well as from across all three spheres of government in Australia. I’d like to set up a way for government people in the open data and open government space across Australia/New Zealand to freely share information and technologies (in code), identify opportunities to collaborate, share their policies and planning for feedback and ideas, and generally work together for more awesome outcomes all round. Any suggestions for how best to do this? GovDex? A new thing? Will continue public discussions on the Gov 2.0 mailing list, but I think it’ll be also useful to connect govvies privately whilst encouraging individuals and agencies to promote their work publicly.
This blog post is a collaboration with the wonderful Alison Stringer, in a personal capacity only. Enjoy!
* 3000 may be a wee stretch
Below are some basic building blocks we have found to be needed for an open data strategy to be sustainable and effective in gaining value for both the government and the broader community including industry, academia and civil society. It is based on the experiences in NZ, Aus and discussions with open data colleagues around the world. Would love your feedback, and I’ll expand this out to a broader post in the coming weeks.
Below are some potential technical building blocks for supporting a whole of government(s) approach to information management, proactive publishing and collaboration. Let me know what you think I’m missing.
Please note, I am not in any way suggesting this should be a functional scope for a single tool. On the contrary, I would suggest for each functional requirement the best of breed tool be found and that there be a modular approach such that you can replace components as they are upgraded or as better alternatives arise. There is no reason why a clever frontend tool couldn’t talk to a number of backend services.
Although I was primarily in New Zealand to discuss open data, I ended up entering into a number of discussions about the broader aspects of digital and open government, which is entirely appropriate and a natural evolution. I was reminded of the three pillars of open government that we often discuss in Australia which roughly translate to:
There is a good speech by my old boss, Minister Kate Lundy, which explains these in some detail.
I got into a couple of discussions which went into the concept of public engagement at length. I highly recommend those people check out the Public Sphere consultation methodology that I developed with Minister Kate Lundy, which is purposefully modular so that you can adapt it to any community and how they best communicate, digitally or otherwise. It is also focused on getting evidence-based, peer-reviewed, contextually analysed and actually useful outcomes. It got an international award from the World eDemocracy Forum, which was great to see. Particularly check out how we applied computer forensics tools to help figure out whether a consultation is being gamed by any individual or group.
When I consider digital government, I find myself standing back in the first instance to consider the general role of government in a digital society. I think this is an important starting point as our understanding is broadly out of date. New Zealand has definitions in the State Sector Act 1988, but they aren’t necessarily very relevant to 2013, let alone an open and transparent digital government.
Below are some of the interesting imperatives I have identified as key drivers for changing how we do government:
Some additional reading and thoughts
Digital literacy and ICT skills – should be embedded into curriculum and encouraged across the board. I did a paper on this as a contribution to the National Australian Curriculum consultation in 2010 with Senator Kate Lundy which identified three areas of ICT competency: 1) Productivity skills, 2) Online engagement skills, & 3) Automation skills as key skills for all citizens. It’s also worth looking at the NSW Digital Citizenship courseware. It’s worth noting that public libraries are a low cost and effective way to deliver digital services, information and skills to the broader community and minimise the issue of the digital divide.
Media data – often when talking about open data, media is completely forgotten: video, audio, arts, etc. The GLAM sector (galleries, libraries, archives and museums) is all over this and should be part of the conversation about how to manage this kind of content across whole of government.
Just a few additional links for those interested, somewhat related to some of the things I discussed this last week.
I worked for Senator Kate Lundy from April 2009 till January 2012. It was a fascinating experience learning how the executive and legislative arms of government work, and working closely with Kate, who is extremely knowledgeable and passionate about good policy and tech. As someone who is very interested in the interrelation between governments, society, the private sector and technology, I could not have asked for a better place to learn.
But last October (2011) I decided I really wanted to take the next step and expand my experience to better understand the public service: how policy moves between the political sphere and the administrative arm of government, how policy is implemented in practice, and the impact on and engagement with the general public.
I sat back and considered where I would ideally like to work if I could choose. I wanted to get an insight into different departments and public sector cultures across the whole government. I wanted to work in tech policy, and open government stuff if at all possible. I wanted to be in a position where I might be able to make a difference, and where I could look at government in a holistic way. I think a whole of government approach is vital to serving the public in a coherent and consistent way, as is serious public engagement and transparency.
So I came up with my top three places to work that would satisfy these criteria. My top option happened to have a job going, which I applied for, and by November I was informed I was their first choice. This was remarkable and I was very excited to get started, but I also wanted to tie up a few things in Kate’s office. So we arranged a starting date of January 31st 2012.
What is the job, you ask? You’ll have to wait till the end of the post.
Unfortunately for me, I was already 6 months into a Top Secret Positive Vetting (TSPV) process (what you need in a Ministerial office in order to work with any classified information), and that process had to be completed even though I needed a lower clearance level for the new job. I was informed back in October that it should be done by Christmas.
So I blogged on my last day with Kate about what I had learned, and indicated that I was entering the public service to get a better understanding of the administrative arm of government. There was some amusing speculation, and it has probably been the worst kept secret around Canberra for the last year.
Of course, I thought I would be able to update my “Moving On” blog post within a few weeks or so. It ended up taking another 10 months for my clearance to be finalised. TSPV does take a while, and I’m a little more complicated a case than the average bear, given my travel and online profile.
As it turns out, the 10 months presented some useful opportunities. During the last year I did a bunch of contracting work looking largely at tech policy, some website development, and I ended up working for the ACT Government for the last 5 months.
In the ACT Government I worked in a policy role under Mick Chisnall, the Executive Director of the ACT Government Information Office. That was a fantastic learning experience and I’d like to thank Mick for being such a great person to work with and learn from. I worked on open government policy, open data policy and projects (including the dataACT launch, and some initial work for the Canberra Digital Community Connect project), looked at tech policies around mobile, cloud, real time data, accessibility and much more. I also helped write some fascinating papers around the role of government in a digital city. Again, I feel very fortunate to have had the opportunity to work with excellent people with vision. A huge thanks to Mick Chisnall, Andrew Cappie-Wood, Pam Davoren, Christopher Norman, Kerry Webb, James Watson, Greg Tankard, Gavin Tapp and all the people I had the opportunity to work with. I learnt a lot, much of which will be useful in my new role.
It also showed me that the hype around “shared services” being supposedly terrible doesn’t quite map to reality. For sure, some states have had significant challenges, but in some states it works reasonably well (nothing is perfect) and presents some pretty useful opportunities for whole of government service delivery.
Anyway, so my new job is at AGIMO as Divisional Coordinator for the Agency Services Division, working directly to John Sheridan who has long been quite an active and engaged voice in the Australian Gov 2.0 scene. I started a week and a half ago and am really enjoying it already. I think there are some great opportunities for me through this job to usefully serve the public and the broader public service. I look forward to making my mark and contributing to the pursuit of good tech in government. I’m also taking the role of Media Coordinator for AGIMO, and supporting John in his role.
I’ve met loads of brilliant people working in the public service across Australia, and I’m looking forward to learning a lot. I’m also keen to take a very collaborative approach (no surprises there), so I’m looking at ways to better enable people to work together across the APS and indeed, across all government jurisdictions in Australia. There is a lot to be gained by collaboration between the Federal, States/Territories and Local spheres of government, particularly when you can get the implementers and policy developers working together rather than just those up the stack.
So, if you are in government (any sphere) and want to talk open government, open data, tech policy, iterative policy development, public engagement, or all the things, please get in touch. I’m hoping to set up an open data working group to bring together the people in various governments doing great work across the country, and I’ll be continuing to participate in the Gov 2.0 community, now from within the tent.
I recently gave a speech about “collaborative innovation” in the public service, and I thought I’d post it here for those interested.
The short version was that governments everywhere, or more specifically, public services everywhere are unlikely to get more money to do the same work, and are struggling to deliver and to transform how they do things under the pressure of rapidly changing citizen expectations. The speech used Game of Thrones as a bit of a metaphor for the public service, and basically challenged public servants (the audience), whatever their level, to take personal responsibility for change, to innovate (in the true sense of the word), to collaborate, to lead, to put the citizen first and to engage beyond the confines of their desk, business unit, department or jurisdiction to co-develop better ways of doing things. It basically said that the public service needs to work better across the silos.
The long version is below, on YouTube or you can check out the full transcript:
The first thing I guess I wanted to talk about was pressure number one on government. I’m still new to government. I’ve been working in I guess the public service, be it federal or state, only for a couple of years. Prior to that I was an adviser in a politician’s office, but don’t hold that against me, I’m strictly apolitical. Prior to that I was in the industry for 10 years, and I’ve been involved in non-profits, I’ve been involved in communities, I’ve been involved in online communities for 15 years. I sort of got a bit of an idea what’s going on when it comes to online communities and online engagement. It’s interesting for me to see a lot of these things being done now that they’ve become very popular and very interesting.
My background is systems administration, which a lot of people would think is very boring, but it’s been a very useful skill for me because in everything I’ve done, I’ve tried to figure out what all the moving parts are, what the inputs are, where the configuration files are, and how to tweak those configurations to get better outputs. The entire thing has been building up my knowledge of the whole system, of how the society-wide system, if you like, operates.
One of the main pressures I’ve noticed on government, of course, is around resources. Everyone has to do more with less. In some cases, some of those pressures are around fatigued systems that haven’t had investment for 20 years. Fatigued people who have been trying to do more with less for many years. Some of it is around assumptions. There’s a lot of assumptions about what it takes to innovate. I’ve had people say, “Oh yeah, we can totally do an online survey, that’ll cost you $4 million.” “Oh my, really? Okay. I’m going to just use Survey Monkey, that’s cool.” There are a lot of perceptions that, I would suggest, are a little out of date.
It was a very opportunistic and a very wonderful thing that I worked in the ACT Government prior to coming into the federal government. A lot of people in the federal government look down on working in other jurisdictions, but it was very useful, because when you see what some of the state, territory and local governments do with a tiny fraction of the funding that the federal government has, it’s really quite humbling to start to say, “Well why do we have these assumptions that a project is going to cost a billion dollars?”
I think our perceptions about what’s possible today are a little bit out of whack. Some of those resource problems are also self-imposed limitations: our assumptions, our expectations and such. So the first major pressure that we’re dealing with is around resources, both a real issue and, I would argue, a slight issue of perception. This is the only gory one (slide), so turn away from it if you like; I should have said that before, sorry.
The second pressure is around changing expectations. Citizens now, because of the Internet, are more powerful than ever before. This is a real challenge for entities such as government, or large traditional power brokers shall we say. Having citizens that can solve their own problems, that can make their own applications that can pull data from wherever they like, that can screen scrape what we put online, is a very different situation to the Game of Thrones land or Medieval times, even up to only 100 years ago; the role of a citizen was more about being a subject, and they were basically subject to whatever you wanted. A citizen today is able to engage, and if you’re not responsive to them, if government isn’t agile and doesn’t actually fulfil a role, then that void gets picked up by other people. So the changing expectations of the public that we serve in an internet society are a major pressure, when fundamentally government can’t, in a lot of cases, innovate quickly enough, particularly in isolation, to solve the new challenges of today and to adapt and grab on to the new opportunities of today.
We (public servants) need to collaborate. We need to collaborate across government. We need to collaborate across jurisdictions and we need to collaborate across society and I would argue the world. These are things that are very, very foreign concepts to a lot of people in the public service. One of the reasons I chose this topic today was because when I undertook to kick off Data.gov.au again, which is just about to hit its first anniversary and I recommend that you come along on the 17th of July, but when I kicked that off, the first thing I did was say, “Well who else is doing stuff? What are they doing? How’s that working? What’s the best practice?” When I chatted to other jurisdictions in Australia, when I chatted to other countries, I sat down and grilled for a couple of hours the Data.gov.uk guys to find out exactly how they do it, how it’s resourced, what their model was. It was fabulous because it really helped us create a strategy which has really worked and it’s continuing to work in Australia.
A lot of these problems and pressures are relatively new, we can’t use old methods to solve these problems. So to quote another Game of Thrones-ism, if we look back, we are lost.
The third pressure, and it’s not too gory, this one. The third pressure is upper management. They don’t always get what we’re trying to do. Let’s be honest, right? I’m very lucky, I work for a very innovative, collaborative person who delegates responsibilities down … Audience Member: And still has his head. Pia Waugh: … and still has his head. Well actually it’s the other way around. Upper management is Joffrey Baratheon; but I guess you could say it that way, too. In engaging with upper management, a lot of the time, and this has been touched on by several speakers earlier today, they have risks to manage, they have to maintain reputation, and when you say we can’t do it that way, if you can’t give a solution that will solve the problem, then what do you expect to happen? We need to engage with upper management to understand what their concerns are, what their risks are, and help mitigate those risks. If we can’t do that, then in a lot of cases it is to our detriment: our projects are not going to be able to get up.
We need to figure out what the agendas are, we need to be able to align what we’re trying to do effectively and we need to be able to help provide those solutions and engage more constructively, I would suggest, with upper management.
Okay, but the biggest issue, the biggest issue I believe, is around what I call systemic silos. So this is how people see government: it’s remote, it’s very hard to get to; it’s one entity. It’s a bit crumbling, a bit off in the realm, it’s out of touch with people, it’s off in the clouds and it’s untouchable. It’s very hard to get to; there’s a winding, dangerous road you might fall off. Most importantly, it’s one entity. When people have a good or bad experience with your department, they just see that as government. We are all judged by the best and the worst examples of all of these, and yet we’re all motivated to work independently of each other in order to meet fairly arbitrary goals in some cases. In terms of how government sees people, they’re these trouble-making people that are climbing up to try and destroy us. They’re a threat, they’re outsiders, they don’t get it. If only we could teach them how government works and then this will all be okay.
Well, it’s not their job; I mean half of the people in government don’t know how government works. By the time you take MOG changes into account, by the time you take changes of functions, changes of management, changes of different approaches, different cultures throughout the public service. The number of times someone has said to me, “The public service can’t innovate.” I’m like, “Well, the public service is myriad organisations with myriad cultures.” It’s not one entity, and yet people see us as one entity. It’s not, I think, the job of the citizen to understand the complexities of government, but rather the job of the government to abstract the complexities of government to get a better engagement and service for citizens. That’s our job, which means if you’re not collaborating and looking across government, then you’re not actually doing your job, in my opinion. But again, I’m still possibly seen as one of these troublemakers, that’s okay.
This is how government sees government (map of the Realm): a whole map of fiefdoms, of castles to defend, of armies that are beating at your door, people trying to take your food, and this is just one department. We don’t have this concept of: that flag has these skills that we could use; these people are doing this project; here’s this fantastic thing happening over there that we could chat to. We’re not doing that enough across departments, across jurisdictions, let alone internationally, and there are some fantastic opportunities to actually tap into some of those skills. This massive barrier to doing the work of the public service better is, in my opinion, systemic silos. So what’s the solution?
The solution is we need to share. We’re all taught as children to share the cookie, and yet as we get into primary school and high school we’re told to hide our cookie. Keep it away. Oh, you don’t want to share the cookie because there’s only one cookie, and if you gave any of it away you wouldn’t have any cookie left. Well, there’s only so many potatoes in this metaphor, and if we don’t share those potatoes then someone’s going to starve, and probably the person who’s going to starve is actually right now delivering a service that, if they’re not there to deliver it, we’re going to have to figure out how to deliver for the one potato that we have. So finding ways to collaborate and to share those resources is, I think, a very important step forward.
Innovative collaboration. Innovative collaboration is a totally made up term, as a lot of things are, I guess. It’s the concept of actually forging strategic partnerships. I’ve actually had a number of projects now. I didn’t have a lot of funding for Data.gov.au. I don’t need a lot of funding for Data.gov.au because fundamentally, a lot of agencies want to publish data because they see it now to be in their best interest. It helps them improve their policy outcomes, helps them improve their services, helps them improve efficiency in their organisations. Now that we’ve sort of hit that tipping point of agencies increasingly wanting to do this stuff, it’s not completely proliferated yet, but I’m working on it; now that we’ve sort of hit that tipping point, I’ve got a number of agencies that say, “Well, we’d love to open data but we just need a data model registry.” “Oh, cool. Do you have one?” “Yes, we do but we don’t have anywhere to host it.” “Okay, how about I host it for you. You develop it and I’ll host it. Rock!” I’ve got five of those projects happening right now where I’ve aligned the motivation and the goals of what we’re doing with the motivation and goals of five other departments, and we actually have some fantastic outcomes coming out that meet all the needs of all the players involved, plus create a whole of government improved service.
I think this idea of having a shared load, pooling our resources, pooling our skills, getting a better outcome for everyone, is a very important way of thinking. It gives you better outcomes in terms of dealing, again, with upper management. If you start from the premise that most people do, that we’ve only got this number of people and this amount of money and therefore we’re only going to be able to get this outcome, then in a year’s time you’ll be told, “That’s fine, just do it with 20% less.” If you say our engagement with this agency is going to help us get more resilience in a project and more expertise on a project, and by the way, upper management, it means we’re splitting the cost with someone else, that starts to help the conversation. You can start to leverage resources across multiple departments, across society and across the world.
Here’s a little how-to, just a couple of ideas; I’m going to go into this in a little bit more detail. In the first case, research. I’m a child of the internet; I’m a little bit unique for my age bracket in that my mom was a geek, so I have been using computers since I was four, 30 years ago. A lot of people my age got their first taste of computing and the internet when they got to university or, at best, maybe high school, whereas I was playing with computers very young. In fact, there’s a wonderful photo, if you want to check it out, of my mom and I sitting and looking at the computer, very black and white, this beautiful photo of a mother with a tiny child at the computer. What I tell people is that it’s a cute photo, but actually my mom had spent three days programming that system, and when her back was turned, just five minutes, I completely broke it. The picture is actually of her fixing my first breaking of a system. I guess I could have had a career in testing, but anyway, I got in big trouble.
One of the things about being a child of the internet, or someone who’s really adopted the internet into the way that I think, is that my work space is not limited to the desk area that I have. I don’t start with a project and sort of go, okay, what’s on my computer, who’s in my immediate team, who’s in my area, my business area. I start with what’s happening in the world. The idea of research is not just to say what’s happening elsewhere so that we can integrate it into what we are going to do, but to start to see the whole world as your work space or as your playground or as your sandpit, whichever metaphor you prefer. In this way, you can start to automatically, as opposed to by force, get into a collaborative mindset.
Research is very important. You need to establish something. You need to actually do something. This is an important one, that’s why I’ve got it in bold. You need to demonstrate that success and you need to wrap up. I think a lot of times people get very caught up with establishing a community and then maintaining that community for the sake of maintaining the community. What are the outcomes? You need to identify fairly quickly: is this going to have an outcome, or is this sort of an ongoing community which is not necessarily outcome driven? Part of this is around, again, understanding how the system works and how you can actually work in the system. Some of that research is about understanding projects and skills, which I’ll jump into a little bit. So what already exists? If I had a mammoth (slide), I’d totally do cool stuff. What exists out there? What are the people and skills that are out there? What are the motivations that exist in those people that are already out there? How can I align with those? What are the projects that are already doing cool stuff? What are the agendas and priorities and, I guess, systemic motivations that are out there? What tech exists?
And this is why I always contend, and I always slip into a talk somewhere, so I’ll slip it in here, you need to have a geek involved somewhere. How many people here would consider yourselves geeks? Not many. You need to have people that have technical literacy in order to make sure that your great idea, your shiny vision, your shiny policy can actually be implemented. If you don’t have a techie person, then you don’t have the person who has a very, very good skill at identifying opportunities and risks. You can say, “Well we’ll just go to our IT department and they’ll give us a quote of how much it costs to do a survey.” Well in that case, okay, not necessarily our case, it was $4 million. So you need to have techie people who will help you keep your finger on the pulse of what’s possible, what’s probable and how it’s going to possibly work. You don’t need to be that person, but I highly recommend you have the different skills in the room.
This is where and I said this on Twitter, I do actually recommend Malcolm Gladwell’s ‘The Tipping Point’, not because he’s the most brilliant author in the world, but because he has a concept in there that’s very important. Maybe I’ll save you reading it now, but of having three skills – connectedness, so the connector; the maven, your researcher sort of person; and your sales person. Those three skills, one person might have all or none of those skills, but a project needs to have all of those skills represented in some format for the project to go from nothing to being successful or massively distributed. It’s a very interesting concept. It’s been very beneficial to a lot of projects I’ve been involved in. I’ve run a lot of volunteer projects. The biggest of which is happening this weekend, which is GovHack. Having 1,300 participants in an 11-city event only with volunteer organisers is a fairly big deal and part of the reason we can do that is because we align natural motivation with the common vision and we get geeks involved obviously.
What already exists? Identifying the opportunities, identifying what’s out there, treating the world like a basket of goodies that you can draw from. Secondly, you want to form an A team. Communities are great and communities are important. Communities establish an ongoing presence which you can engage with, draw from, get support from and all those kinds of things. This kind of community is very, very important, but innovative collaboration is about building a team to do something, a project team. You want to have your A-list. You want to have a wide variety of skills. You want to have doers. You want to establish the common and different needs of the individuals involved, and they might be across departments or across governments or from society. Establishing what the people involved have in common in what they want to get out of it, and then what’s different, is important to making sure that when you go to announce this, everyone’s needs are taken care of, that it doesn’t put someone offside, or whatever. You need to understand the dynamics of your group very, very well and you need to have the right people in the room. You want to plan realistic outcomes and milestones. These need to be tangible.
This is where I get just super pragmatic, and I apologise, but if you’re building a team to build the project report to build the team, maybe you’ve lost your way just slightly. If the return on investment or the business case that you’re writing takes 10 times the amount of time of doing the project itself, maybe you could do a little optimisation. So just sit back and say: what is the scale of what we’re trying to do? What are the tangible outcomes and what is actually necessary for this? This comes back to the concept, again, of managing and mapping risk to projects. If the risk is very, very, very low, then maybe the amount of time and effort that goes into building the enormous structure of governance around it can be somewhat minimised. Taking an engaged, proactive approach to risk, I think, is very important in this kind of thing, as is making sure that the outcomes are actually achievable and tangible. This is also important because if you have tangible outcomes then you can demonstrate tangible outcomes. You need to also avoid scope creep.
I had a project recently that didn’t end up happening. It was a very interesting lesson to me though, where something simple was asked and I came out with a way to do it in four weeks. Brilliant! Then the scope started to creep significantly, and then it became this and this and then this, and then we want to have an elephant with bells on it. Well, you can have the elephant with bells if you do this, in this way, in six months. So how about you have that as a second project? Anyway, so basically try to hold your ground. Often enough when people ask for something, they don’t know what they’re asking for. We need to be the people that are on the front line saying, “What you want to achieve fundamentally, you’re not going to achieve the way that you’re trying to achieve it. So how about we think about what the actual end goal that we all want is and how to achieve that? And by the way, I’m the technical expert and you should believe me, and if you don’t, ask another technical expert, but for God’s sake, don’t leave it to someone who doesn’t know how to implement this, please.”
You want to plan your goals. You want to ensure, and this is another important bit, that there is actually someone responsible for each bit; otherwise, your planning committee will get together in another four weeks or eight weeks and will say, “So, how is action A going? Oh, nothing’s happened. Okay, how’s action B going?” You need to actually make sure that there are nominated responsibilities, and they again should align to those individuals’ natural motivations and systemic motivations.
My next bit: don’t reinvent the wheel. I find a lot of projects where someone has gone and completely recreated something. The number of times someone has said, “Well that’s a really good piece of software, but let’s rewrite it in another language.” In technical land, this is very common, but I see it happen from a process perspective, I see it happen from a policy perspective. Again, going back to see what’s available is very important, but I’ll just throw in another thing here: the idea of taking responsibility is a very scary thing, apparently, in the public service. Let’s go back to the wheel. If your wheel is perfect, you’ve developed it, you’ve designed it, you’ve spent six years getting it to this point and it’s shiny and it’s beautiful and it works, but it’s not connected to a car, what’s the point, seriously?
You want to make sure that what you’re doing actually contributes to something bigger, that it’s actually part of the engine, because if your wheel or your cog is perfectly defined but the engine as a whole doesn’t work, then there’s a problem there, and sometimes that’s out of your control. Quite often what’s missing is someone actually looking end to end and saying, “Well, the reason there’s a problem is because there’s actually a spanner, just here.” I know it’s not my job to remove that spanner, but if someone removed that spanner, the whole thing would work. Sometimes that’s very scary for some people to do, and I understand that, but you need to understand what you’re doing and how it fits into the bigger picture, and how the bigger picture is or isn’t working, I would suggest.
Monitoring. Obviously, measuring and monitoring success in Game of Thrones was a lot more messy than it is for us. They had to deal with birds, they had to feed them, they had to deal with what they fed them. Measuring and monitoring your project is a lot easier in a lot of cases. There’s a lot of ways to automate it. There’s a lot of ways to come up with it at the beginning. How do we define success? If you don’t define it then you don’t know if you’ve got there. These things are all kind of obvious, but I remember having a real epiphany moment with a very senior person from another department. I was talking to him about the challenge that I was having with a project, and I said, “Well if you’re doing this great thing, then why aren’t you shouting it from the rooftop? This is wonderful. It’s very innovative, it’s very clever. You’ve solved a really great problem.” Then he looked at me and said, “Well Pia, you know success is just as bad as failure, don’t you?” It really struck me, and then I realised that any sort of success or failure attracts attention, and the moment something attracts attention, that’s very scary for some people. I put to you that having success, having defensible projects, having evidence that actually underpins why what you’re doing is important, is probably one of the most important things that you can do today to make sure that you continue getting funding, resources and all these kinds of things. Measuring, monitoring and reporting are more important now than ever and, luckily and coincidentally, easier now than ever. There’s a lot of ways that we can automate this stuff. There’s a lot of ways that we can put in place these mechanisms from the start of a project. There’s a lot of ways we can use technology to help. We need to define success, and we need to defend and promote the outcomes of those projects.
Share the glory. If it’s you sitting on the throne then everyone starts to get a little antsy. I like to say that shared glory is the key to sustainable success. I’ve had a number of projects, and I don’t think I’ve told John this, but I’ve had a couple of things where I’ve collaborated with someone and then let them announce their part of it first, because that’s a good way to build a great relationship. It doesn’t really matter to me if I announce it now or in a week’s time. It helps share the success, it helps share the glory. It means everyone is a little bit more onside, and it builds trust. The point that was made earlier today about trust is a very important one, and the way that you build trust is by having integrity, following through on what you’re doing and sharing the glory a little. Sharing the glory is a very important part because if everyone feels like they’re getting out of the collaboration what they need to justify their work, to justify to their bosses, to justify their investment of time, then that’s a very good thing for everyone.
Everything great starts small. This goes to the point of doing pilots, doing demos. How many of you have heard the term release early, release often? Not many. It’s a technology sector idea, but the idea is, in big terms, rather than taking four years to scope something out and then get $100 million and then implement it, yeah I know, right? You actually start to do smaller modular projects, and if it fails straight away, then at least you haven’t spent four years and $100 million failing. The other part of release early, release often is fail early, fail often, which sounds very scary in the public sector but it’s a very important thing, because from failure and from early releases, you get lessons. You can iteratively improve projects or policies or outcomes that you’re doing if you’re continually getting out there and actually testing with people and demoing and doing pilots. It’s a very, very useful thing to realise that sometimes even the tiniest baby step is still a step, and for yourselves as individuals, we don’t always get the big success that we hope, so you need to make sure that you have a continuous success loop in your own environment and for yourself, to make sure that you maintain your own sense of moving forward, I guess, so even small steps are very important steps. Audience Member: Fail early, fail often to succeed sooner. Pia Waugh: That’s probably a better sentence.
There’s a lot of lessons that we can learn from other sectors and from other industries, from both the corporate and community sectors, that don’t always necessarily translate in the first instance; but they’re tried and true in those sectors. Understanding why they work and why they do or, in some cases, don’t map to our sector, I think is very important.
Finally, this is the last thing I want to leave you with. The number of times that I hear someone say, “Oh, we can’t possibly do that. We need to have good leadership. Leadership is what will take us over the line.” We are the leaders of this sector. We are the future of the public service, and so there’s a question about needing to start acting like it as well; not just you, all of us. You lead through doing. You establish change through being the change you want to see, to quote another great guy. When you realise that a large proportion of the SES are actually retiring in the next five to ten years, and that we are all the future of the public service, it means that we can be those leaders. Now if you go to your boss and say, “I want to do this great, cool thing and it’s going to be great and I’m going to go and work with all these other people. I’m going to spend lots of your money,” yeah, they’re going to probably get a little nervous. If you say to them, “Here’s why this is going to be good for you. I want to make you look good, I want to achieve something great that’s going to help our work, it’s going to help our area, it’s going to help our department, it’s going to help our Minister, and it aligns with all of these things,” you’re going to have a better chance of getting it through. There’s a lot of ways that you can demonstrate leadership just at our level, just by working with people directly.
So I spoke before about how the first thing I did was go and research what everyone else was doing. I followed that up by establishing an informal forum, a series of informal get togethers. One of those informal get togethers is a cross-jurisdictional meeting with open data people from other jurisdictions. What that means is every two months I meet with the people who are in charge of the open data policies and practice from most of the states and territories, from a bunch of local governments, and from a few other departments at the federal level, just to talk about what we’re all doing. I made very clear from the start: this is not formal, this is not mandatory, it’s not top down, it’s not the feds trying to tell you what to do, which is an unfortunate although often accurate picture that the other jurisdictions have of us; unfortunate because there’s so much we can learn from them. By just setting that up and getting the tone of it right, everyone is sharing policy, sharing outcomes, sharing projects, starting to share code, starting to share functionality, and we’ve got to a point, only I guess eight months into the establishment of that group, where we’ve really started to get some great benefits for everyone and it’s bringing everyone’s baseline up.
There’s a lot of leadership to be had at every level, and identifying what you can do in your job today is very important, rather than waiting for permission. I’m going to tell a little story that I hope John doesn’t mind. I remember when I was a week into my job and I said to John, “So, I’ve been here a week, I really don’t know if this is what you wanted from me. Are you happy with how I’m going?” He said, “Well Pia, don’t change what you’re doing, but I just want to give you a bit of feedback. I’ve never been in a meeting before with outsiders, with vendors or whatever, and had an EL speak before.” I said, “Oh, what’s wrong with your department? What’s wrong with ELs?” Because certainly by a particular level you have expertise, you have knowledge, you have something to contribute, so why wouldn’t you be encouraging people of all levels, and certainly of senior levels, to speak and engage in meetings? It was a really interesting thought experiment and discussion to have about the culture.
The amount of people that have said to me, just quietly, “Hey, we’d love to do that but we don’t want to get any criticism.” Well, criticism comes in two forms: it’s either constructive or unconstructive. It can be given negatively, it can be given positively, it can be given in a little bottle in the sea, but it only comes in those two forms. If it’s constructive, even if yelled at you online, it’s something to learn from; take it and roll with it. If it’s unconstructive, you can ignore it safely. It’s about having self-knowledge, a certain amount of clarity, and comfort with the idea that you can improve, and that often, in fact in most cases, other people will be the mechanism for you to improve. Conflict is not a bad thing. Conflict is actually a very healthy thing in a lot of ways, if you engage with it. It’s really up to us how we engage with conflict or with criticism.
This is again where I’m going to be a slight outsider, but everything I hear, not that I’ve seen this directly, is that it’s very, very hard to get rid of someone in the public service. So I put it to you: why would you not be brave? Seriously. You can’t have it both ways. You can’t say, “Oh, I’m so scared about criticism, I’m so scared blah, blah, blah,” when at the same time it’s so difficult to be fired. Why not be brave? We can do great things and it’s up to us as individuals not to wait for permission to do great things. We can all do great things at lots and lots of different levels. Yes, there will be bad bosses and yes, there will be good bosses, but if you continually pin your ability to shine on those external factors and wait, then you’ll be waiting a long time. Anyway, it’s just my opinion.
So be the leader, be the leader that you want to see. That’s I guess what I wanted to talk about with collaborative innovation.
I don’t have nearly enough time to blog these days, but I am doing a bunch of writing for university. I decided I would publish a selection of the (hopefully) more interesting essays. Please note, my academic writing is pretty awful, but hopefully some of the ideas, research and references are useful.
For this essay, I had the most fun developing my own alternative public policy model at the end. I would love to hear your thoughts. Enjoy, and comments welcome!
Question: Critically assess the accuracy of and relevance to Australian public policy of the Bridgman and Davis policy cycle model.
The public policy cycle developed by Peter Bridgman and Glyn Davis is both relevant to Australian public policy and, simultaneously, not an accurate representation of how policy is developed in practice. This essay outlines some of the ways the policy cycle model both assists and distracts from quality policy development in Australia, and provides an alternative model as a thought experiment, based on the author’s policy experience and on the research conducted around the applicability of Bridgman and Davis’ policy cycle model.
In 1998 Peter Bridgman and Glyn Davis released the first edition of The Australian Policy Handbook, a guide developed to assist public servants to understand and develop sound public policy. The book includes a policy cycle model, developed by Bridgman and Davis, which portrays a number of cyclic, logical steps for developing and iteratively improving public policy. This policy model has attracted much analysis, scrutiny, criticism and debate since it was first developed, and it continues to be taught as a useful tool in the kit of any public servant. The fifth and most recent edition of the Handbook was released in 2012, with Catherine Althaus joining Bridgman and Davis as a co-author from the fourth edition in 2007.
The policy cycle model
The policy cycle model presented in the Handbook is below:
The model consists of eight steps in a circle that is meant to encourage an ongoing, cyclic and iterative approach to developing and improving policy over time with the benefit of cumulative inputs and experience. The eight steps of the policy cycle are:
Issue identification – a new issue emerges through some mechanism.
Policy analysis – research and analysis of the policy problem to establish sufficient information to make decisions about the policy.
Policy instrument development – the identification of which instruments of government are appropriate to implement the policy. Could include legislation, programs, regulation, etc.
Consultation (which permeates the entire process) – garnering of external and independent expertise and information to inform the policy development.
Coordination – once a policy position is prepared it needs to be coordinated through the mechanisms and machinations of government. This could include engagement with the financial, Cabinet and parliamentary processes.
Decision – a decision is made by the appropriate person or body, often a Minister or the Cabinet.
Implementation – once approved the policy then needs to be implemented.
Evaluation – an important process to measure, monitor and evaluate the policy implementation.
In the first instance it is worth reflecting on the stages of the model, which imply that the entire policy process is centrally managed and coordinated by the policy makers, which is rarely true. The model thus gives very little indication of who is involved, where policies originate, the external factors and pressures at play, or how policies go from a concept to being acted upon. Even to develop a position, resources must be allocated, so the development of one policy is prioritised above that of other policies competing for resourcing. Bridgman and Davis do very little to help the policy practitioner or entrepreneur understand this broader picture, which is vital to the development and successful implementation of a policy.
The policy cycle model is relevant to Australian public policy in two key ways: 1) it presents a useful reference model for identifying the various potential parts of policy development; and 2) it is instructive for policy entrepreneurs in understanding the expectations and approach taken by their peers in the public service, given that the Bridgman and Davis model has been taught to public servants for a number of years. In the first instance the model presents a basic framework that policy makers can use in thinking about and planning their policy development. In practice, some stages may be skipped, reversed or compressed depending upon the context, or a completely different approach may be taken altogether, but the model gives a starting point in the absence of anything formally imposed.
Bridgman and Davis themselves paint a picture of vast complexity in policy making whilst holding up their model as both an explanatory and prescriptive approach, albeit with some caveats. This is problematic because public policy development almost never follows a cleanly structured process. Many criticisms of the policy cycle model question its accuracy as a descriptive model given it doesn’t map to the experiences of policy makers. This draws into question the relevance of the model as a prescriptive approach as it is too linear and simplistic to represent even a basic policy development process. Dr Cosmo Howard conducted many interviews with senior public servants in Australia and found that the policy cycle model developed by Bridgman and Davis didn’t broadly match the experiences of policy makers. Although they did identify various aspects of the model that did play a part in their policy development work to varying degrees, the model was seen as too linear, too structured, and generally not reflective of the at times quite different approaches from policy to policy (Howard, 2005). The model was however seen as a good starting point to plan and think about individual policy development processes.
Howard also discovered that political engagement changed throughout the process and from policy to policy depending on government priorities, making a consistent approach to policy development quite difficult to articulate. The common need for policy makers to respond to political demands and tight timelines often leads to an inability to follow a structured policy development process resulting in rushed or pre-canned policies that lack due process or public consultation (Howard, 2005). In this way the policy cycle model as presented does not prepare policy-makers in any pragmatic way for the pressures to respond to the realities of policy making in the public service. Colebatch (2005) also criticised the model as having “not much concern to demonstrate that these prescriptions are derived from practice, or that following them will lead to better outcomes”. Fundamentally, Bridgman and Davis don’t present much evidence to support their policy cycle model or to support the notion that implementation of the model will bring about better policy outcomes.
Policy development is often heavily influenced by political players and agendas, which is not captured in the Bridgman and Davis policy cycle model. Some policies are effectively handed over to the public service to develop and implement, but often policies have strong political involvement, with the outcomes of policy development ultimately given to the respective Minister for consideration, who may also take the policy to Cabinet for final ratification. This means even the most evidence-based, logical, widely consulted and highly researched policy position can be overturned entirely at the behest of the government of the day (Howard, 2005). The policy cycle model does not capture this process, nor prepare public servants to manage it. Arguably, the most important aspects of successful policy entrepreneurship lie outside the policy development cycle entirely: in the mapping and navigation of the treacherous waters of stakeholder and public management, myriad political and other agendas, and other policy areas competing for prioritisation and limited resources.
The changing role of the public in the 21st century is not captured by the policy cycle model. The proliferation of digital information and communications creates new challenges and opportunities for modern policy makers. They must now compete for influence and attention in an ever expanding and contestable market of experts, perspectives and potential policies (Howard, 2005), which is a real challenge for policy makers used to being the single trusted source of knowledge for decision makers. This has moved policy development and influence away from the traditional Machiavellian bureaucratic approach of an internal, specialised, tightly controlled monopoly on advice, towards a more transparent and inclusive though more complex approach to policy making. Although Bridgman and Davis go part of the way to reflecting this post-Machiavellian approach to policy by explicitly including consultation and the role of various external actors in policy making, they still maintain the Machiavellian role of the public servant at the centre of the policy making process.
The model does not clearly articulate the need for public buy-in and communication of the policy throughout the cycle, from development to implementation. There are a number of recent examples of policies that were developed and implemented well by any traditional public service standards, but which the general public have seen as complete failures due to a lack of, or a negative, public narrative around the policies. Key examples include the Building the Education Revolution policy and the insulation scheme. In both cases the policy implementation largely met the policy goals, and independent analysis showed the policies to be quite successful through quantitative and qualitative assessment. However, both policies were announced very publicly and politically prior to implementation and then had little to no public narrative throughout implementation, leaving the public narrative around both to be determined by media reporting on problems and by the Opposition, who were motivated to undermine the policies. The policy cycle model, in focusing on consultation, ignores the necessity of a public engagement and communication strategy throughout the entire process.
The Internet also presents significant opportunities for policy makers to get better policy outcomes through public and transparent policy development. The model does not reflect how to strengthen a policy position in an open environment of competing ideas and expertise (aka, the Internet), though this is arguably one of the greatest opportunities to establish evidence-based, peer-reviewed policy positions with a broad range of expertise, experience and buy-in from experts, stakeholders and those who might be affected by a policy. This establishes a public record for consideration by government. A Minister or the Cabinet has the right to deviate from these publicly developed policy recommendations as our democratically elected representatives, but doing so increases the accountability and transparency of political decision making regarding policy development, thus improving the likelihood of an evidence-based rather than purely political outcome. History has shown that transparency in decision making tends to improve outcomes, as it aligns the motivations of those involved to pursue what they can defend publicly. Currently, the lack of transparency at the political end of policy decision making has led to a number of examples where policy makers are asked to rationalise policy decisions rather than investigate the best possible policy approach (Howard, 2005). Within the public service there is a joke about developing policy-based evidence, rather than the generally desired public service approach of developing evidence-based policy.
Although there are clearly issues with any policy cycle model in practice, due to the myriad factors involved and the at times quite complex landscape of influences, by constantly referencing throughout their book the importance of “good process” to “help create better policy” (Bridgman & Davis, 2012), Bridgman and Davis imply their model is a “good process” and subtly encourage a check-box style, formally structured and iterative approach to policy development. The policy cycle in practice becomes impractical and inappropriate for much policy development (Everett, 2003). Essentially, it gives new and inexperienced policy makers a false sense of confidence in a model put forward as descriptive, when it is at best just a useful point of reference. In a book review of the fifth edition of the Handbook, Kevin Rozzoli supports this by criticising the policy cycle model as being too generic and academic rather than practical, and compares it to the relatively pragmatic policy guide by Eugene Bardach (2012).
Bridgman and Davis do concede that their policy cycle model is not an accurate portrayal of policy practice, calling it “an ideal type from which every reality must curve away” (Bridgman & Davis, 2012). However, they still teach it as a prescriptive and normative model from which policy developers can begin. This unfortunately provides policy developers with an imperfect model that can’t be implemented in practice, and little guidance as to when it is implemented well or how to successfully “curve away”. At best, the model establishes some useful ideas that policy makers should consider, but as a normative model it rapidly loses traction, as every implementation will inevitably “curve away”.
The model also embeds in the minds of public servants some questionable assumptions about policy development, such as: the role of the public service as a source of policy; the idea that good policy will be naturally adopted; a simplistic view of implementation, when that is arguably the trickiest aspect of policy making; a top-down approach to policy that doesn’t explicitly engage or value input from administrators, implementers or stakeholders throughout the entire process; and no framework for the healthy termination or finalisation of policies. Bridgman and Davis effectively promote the virtues of a centralised policy approach whereby the public service controls the process, inputs and outputs of public policy development. According to Colebatch, this perspective is somewhat self-serving, as it supports a central agency agenda. The model reinforces a perspective that policy makers control the process and consult where necessary, as opposed to being just one part of a necessarily diverse ecosystem in which they must engage with experts, implementers, the political agenda, the general public and more to create robust policy positions that might be adopted and successfully implemented. The model and handbook as a whole reinforce the somewhat dated and Machiavellian idea of policy making as a standalone profession, with policy makers the trusted source of policies. Although Bridgman and Davis emphasise that consultation should happen throughout the process, modern policy development requires ongoing input and indeed co-design from independent experts, policy implementers and those affected by the policy. This is implied, but the model offers no pragmatic way to do such engagement. Without these three perspectives built into any policy proposal, the outcomes are unlikely to be informed, pragmatic, measurable, implementable or easily accepted by the target communities.
The final problem with the Bridgman and Davis public policy development model is that, by focusing so completely on the policy development process without looking at implementation or considering the engagement of policy implementers in that process, the resulting policy is unlikely to be pragmatic or to take implementation opportunities and issues into account. Basically, the policy cycle model encourages policy makers to focus on the policy itself, iterative and cyclic though it may be, as an outcome, rather than on practical outcomes that support the policy goals. The means is mistaken for the ends. This approach artificially delineates policy development from implementation, and the motivations of those involved in each are not necessarily aligned.
The context of the model in the Handbook is also somewhat misleading, which affects the accuracy and relevance of the model. The book oversimplifies the roles of various actors in policy development, placing policy responsibility clearly in the domain of Cabinet, Ministers, the Department of Prime Minister & Cabinet and senior departmental officers (Bridgman and Davis, 2012, Figure 2.1). Arguably, this conflicts with the supposed point of the book: to support even quite junior or inexperienced public servants throughout a government administration to develop policy. It does not match reality in practice, thus confusing students at best, or at worst establishing misplaced confidence in outcomes derived from policies developed according to the Handbook.
An alternative model
Part of the reason the Bridgman and Davis policy cycle model has had such traction is that it was created in the absence of much in the way of pragmatic advice for policy makers, and has thus been useful in filling a need, regardless of how effective it has been in doing so. The authors have, however, not significantly revisited the model since it was developed in 1998. Doing so would be quite useful, given new technologies have established both new mechanisms for public engagement and new public expectations to co-develop, or at least have a say about, the policies that shape people’s lives.
From my own experience, policy entrepreneurship in modern Australia requires a highly pragmatic approach that takes into account the various new technologies, influences, motivations, agendas, competing interests, external factors and policy actors involved. This means researching the landscape in the first instance, and then shaping the policy development process accordingly to maximise the quality and potential adoptability of the policy position developed. As a bit of a thought experiment, below is my attempt at a more usefully descriptive, and thus potentially more usefully prescriptive, policy model. I have included the main aspects involved in policy development, along with a number of additional factors that might be useful to policy makers and policy entrepreneurs looking to successfully develop and implement new and iterative policies.
It is also important to identify the inherent motivations of the various actors involved in the pursuit, development of and implementation of a policy. In this way it is possible to align motivations with policy goals or vice versa to get the best and most sustainable policy outcomes. Where these motivations conflict or leave gaps in achieving the policy goals, it is unlikely a policy will be successfully implemented or sustainable in the medium to long term. This process of proactively identifying motivations and effectively dealing with them is missing from the policy cycle model.
The Bridgman and Davis policy cycle model is demonstrably inaccurate, and yet it is held up by its authors as a reasonable descriptive and prescriptive normative approach to policy development. Evidence is lacking for both the model’s accuracy and any tangible benefits from applying it to a policy development process, and research into policy development across the public service continually deviates from and often directly contradicts the model. Although Bridgman and Davis concede policy development in practice will deviate from their model, there is very little useful guidance as to how to implement or deviate from the model effectively. The model is also inaccurate in that it overly simplifies policy development, leaving policy practitioners to learn for themselves about external factors, the various policy actors involved throughout the process, the changing nature of public and political expectations, and the myriad other realities that affect modern policy development and implementation in the Australian public service.
Regardless of the policy cycle model inaccuracy, it has existed and been taught for nearly sixteen years. It has shaped the perspectives and processes of countless public servants and thus is relevant in the Australian public service in so far as it has been used as a normative model or starting point for countless policy developments and provides a common understanding and lexicon for engaging with these policy makers.
The model is therefore both inaccurate and relevant to policy entrepreneurs in the Australian public service today. I believe a review and rewrite of the model would greatly improve the advice and guidance available for policy makers and policy entrepreneurs within the Australian public service and beyond.
(Please note, as is usually the case with academic references, most of these are not freely available to the public. Sorry. It is an ongoing bugbear of mine and many others.)
Althaus, C., Bridgman, P. and Davis, G. 2012, The Australian Policy Handbook, 5th ed., Allen and Unwin, Sydney.
Bridgman, P. and Davis, G. 2004, The Australian Policy Handbook, 3rd ed., Allen and Unwin, Sydney.
Bardach, E. 2012, A Practical Guide for Policy Analysis: The Eightfold Path to More Effective Problem Solving, 4th ed., Chatham House Publishers, New York.
Everett, S. 2003, ‘The Policy Cycle: Democratic Process or Rational Paradigm Revisited?’, The Australian Journal of Public Administration, 62(2), pp. 65-70.
Howard, C. 2005, ‘The Policy Cycle: a Model of Post-Machiavellian Policy Making?’, The Australian Journal of Public Administration, 64(3), pp. 3-13.
Rozzoli, K. 2013, ‘Book Review of The Australian Policy Handbook: Fifth Edition’, Australasian Parliamentary Review, Autumn 2013, 28(1).
Really excited to note that I’m going to be attending Linux.conf.au 2015 and running the Cloud, Containers, and Orchestration mini-conf. Will be issuing the CfP for that shortly, but just wanted to give a shout (and create the category feed for LCA planet…) about heading to New Zealand next January. Extremely psyched to be going to LCA once again!
Another successful day of Linux geeking has passed, this week is going surprisingly quickly…
Some of the day’s highlights:
The conference presentations finished up with a surprise talk from Simon Hackett and Robert Llewellyn from Red Dwarf, which was somewhat entertaining, but not highly relevant for me – personally I’d rather have heard more from Simon Hackett on the history and future expectations for the ISP industry in Australia than having them debate their electric cars.
Thursday was the evening of the Penguin Dinner, the (usually) formal dinner held at each LCA. This year, rather than the usual sit-down 3-course dinner, the conference decided to do a BBQ-style event up at the Observatory on Mount Stromlo.
The Penguin Dinner is always a little pricey at $80, but for a night out, good food, drinks and spending time with friends, it’s usually a fun and enjoyable event. Sadly this year had a few issues that kind of spoilt it, at least for me personally, with some major failings on the food and transport which led to me spending only 2 hours up the mountain and feeling quite hungry.
At the same time, LCA is a volunteer-organised conference and I must thank them for making the effort, even if it was quite a failure this year – I don’t necessarily know all the behind-the-scenes factors, although the conflicting/poor communications really didn’t put me in the best mood that night.
Next year a professional events coordinator is being hired to help with the event, so hopefully their experience handling logistics and catering will help avoid a repeat of the issue.
On the plus side, for the limited time I spent up the mountain, I got some neat photographs (I *really* need to borrow Lisa’s DSLR rather than using my cellphone for this stuff) and spent some good time discussing life with friends lying on the grass looking at the stars after the sun went down.
The other perk from the penguin dinner was the AWESOME shirts they gave everyone in the conference as a surprise. Lisa took this photo when I got back to Sydney since she loves it  so much.
 She hates it.
Having reached mid-week, my morning wakeup is getting increasingly difficult from late nights, thankfully there were large amounts of deep fried potato and coffee readily available.
The day had some interesting talks, most of the value I got was out of the web development space:
With all the talks this week, I’m feeling particularly motivated to do some more development this week, starting with writing some new proper landing pages for some of my projects.
The second day of linux.conf.au has been and gone; it was another day of interesting miniconf talks and many geeky discussions with old and new friends.
The keynote was a really good talk by Radia Perlman about how engineers approach developing network protocols and an interesting talk of the history of STP and the designed replacement, TRILL. Great to see a really technical female keynote speaker at LCA this year, particularly one as passionate about her topic as Radia.
The conference WiFi is still pretty unhappy this year; I’ve been suffering pretty bad latency and packet loss (30-50%) most of the past few days – when I’ve been able to find an AP at all, as it seems they’re only located around the lecture rooms. Yesterday afternoon it seemed to start improving, however, so it may be that the networking team have beaten the university APs into submission.
Of course, some of the projectors decided not to play nicely, which seems pretty much business as usual when it comes to projectors. It appears that the projector in question would complain about the higher refresh rates provided by DVI and HDMI connected devices, but functioned correctly with VGA.
Someone did an interesting talk a couple of LCAs ago on the issue; apparently many projectors lie about their true capabilities and request resolutions and refresh rates from the computer that are higher than what they can actually support, which really messes with any modern operating system’s auto-detection.
A few of my friends were delivering talks today, so I spent my time between the Browser miniconf and Open Programming miniconf, picked up some interesting new technologies and techniques to look at:
After that, dinner at one of the (many!) Asian restaurants in the area, followed by some delicious beer at the Wig and Pen.
Another great day, looking forwards to Wednesday and the rest of the week. :-)
First proper day of linux.conf.au today, starting with breakfast and the quest of several hundred geeks to find and consume coffee.
After acquiring coffee, we started the day with a keynote by the well known Bdale Garbee, talking about a number of (somewhat controversial) thoughts and reflections on Linux and the open source ecosystem in regards to the uptake by commercial companies.
Bdale raised some really good points, particularly how GNU/Linux isn’t a sellable idea to OEM vendors on cost – many vendors pay nothing for Microsoft licensing, or even make a profit due to the amount of preloaded crapware they ship with the computers. Vendors are unlikely to ship GNU/Linux unless there is sufficient consumer demand or a feature set that makes it compelling.
My take on the talk was that Bdale was advocating that we aren’t going to win the desktop through mass popularity – instead of trying to build a desktop for the average Joe, we should build desktops that meet our own needs as power users.
It’s an interesting approach – some of the more recent endeavours by desktop developers have led to environments that newer users like, but power users hate (e.g. GNOME 3). As a power user, I share this view; I’d rather we develop a really good power user OS than an OS designed for the simplest user. Having said that, the nice thing about open source is that developers can target different audiences and share each other’s work.
Bdale goes on to state that the year of the Linux desktop isn’t relevant, as it’s something we’re probably never going to win – but we have won the year of Linux on the mobile, which is going to replace conventional workstations more and more for the average user and become the dominant device.
It’s something I personally believe as well; I already have some friends who *only* own a phone or tablet, instead of a desktop or laptop, and use it for all their communications. In this space, Android/Linux is selling extremely well.
And although it’s not the conventional GNU/Linux space we know and love, and it still has its share of problems, a future where Android/Linux is the dominant device OS is much more promising than the current Windows/MacOS duopoly.
The rest of the day had a mix of miniconf talks – there wasn’t anything particularly special for me, but there were some good highlights during the day:
Overall it was a good first day, followed up by some casual drinks and chats with friends – thankfully we even managed to find an open liquor store in Canberra on a public holiday.
It’s time for the most important week of the year – linux.conf.au – which is being held in Canberra this year. I’m actually going to try and blog each day this year, unlike last year, which still has all my photos in the “to be blogged” folder. :-)
Ended up taking the bus down from Sydney to Canberra – at only around $60 and a 3 hour trip, it made more sense to take the bus down, rather than go through the hassle of getting to and from the airports and all the security hassles of flying.
Ended up having several other linux.conf.au friends on the bus, which makes for an interesting trip – and having a bus with WiFi and power was certainly handy.
The road trip down to Canberra wasn’t particularly scenic; most of the route is just dry Australian bush and motorways. Generally it seems that inter-city road trips in AU tend not to be wildly scenic, unlike most of the ones I take in NZ.
Canberra itself is interesting; my initial thought on entering the city was that it’s kind of a cross between Rotorua and post-quake Christchurch – most of the city is low-rise 5-10 story buildings and low density sprawl, and extremely quiet with both the university and parliament on leave. In fact many have already commented it would be a great place to film a zombie movie simply due to its eerily deserted nature.
Considering it’s a designed city, I do wonder why they chose such a sprawled design; IMHO it would have been way better to have a very small high-density tower CBD which would be easily walkable, with massive parklands around it. Canberra also made the mistake of not putting in light rail, instead relying on buses and cars as primary transport.
One nice side of Canberra is that with the sprawl, there tends to be a lot of greenery (or what passes for greenery in the aussie heat!) around the town and campus, including a bit of wildlife – so far I’ve seen rabbits, cockatoos, and lizards, which makes a nice change from Sydney’s wildlife viewing of giant rats running over concrete pavements.
The evening was spent tracking down the best pub options nearby, and we were fortunate enough to discover the Wig and Pen, a local British-style brewery/pub, with about 10 of their own beers on hand pulled taps. I’m told that when the conference was here in Canberra in 2005, the attendees drank the pub dry – twice. Hopefully they have more beer in stock this year.
Normally every year the conference provides a swag bag; typically the bag is pretty good and there’s usually a few good bits in there, as well as spammy items like brochures and cheap branded gadgets (USB speakers, reading lights, etc).
This year they’ve cut down hugely on the swag volume, my bag simply had some bathroom supplies (yes, that means there’s no excuse for the geeks to wash this week), a water bottle, some sunblock and the conference t-shirt. I’m a huge fan of this reduction in waste and hope that other conferences continue on with this theme.
The conference accommodation isn’t the best this year – it’s clean and functional, but I’m really not a huge fan of the older shared dorm styles with communal bathroom facilities, particularly the showers with their coffin-style claustrophobic feel.
The plus side of course, is that the accommodation is always cheap and your evenings are filled with awesome conversations and chats with other geeks.
Looking forward to the actual talks – going to be lots of interesting cloud and mobile talks this year, as well as the usual kernel, programming and sysadmin streams. :-)
It’s nearing that important time of year when the NZ-AU open source flock congregate for the time-honoured tradition of linux.conf.au. I’ve said plenty about this conference in the past; I’m going to make an effort to write a lot more about it this year.
There’s a bit of concern this year that there might not be a team ready to take up the mantle for 2014; unfortunately linux.conf.au is a victim of its own success – as each year has grown bigger and better, it’s at the stage where a lot of volunteers consider it too daunting to take on themselves. Hopefully a team has managed to put together a credible bid for 2014; it would be sad to lose this amazing conference.
As I’m now living in Sydney, I can actually get to this year’s conference via a business class coach service which is way cheaper than flying, and really just as fast once taking the hassles of getting to the airport, going through security and flying into account. Avoiding the security theatre is a good enough reason for me really – I travel a lot, but I actually really hate all the messing about.
If you’re attending the conference and departing from Sydney (or flying into Sydney from NZ to then transfer to Canberra), I’d also suggest this bus service – feel free to join me on my booked bus if you want a chat buddy:
The bus has WiFi and power and extra leg room, so should be pretty good if you want to laptop the whole way in style – for about $35 each way.
As I predicted at the time, this quickly became the gateway drug – having been given an awesome 8-bit processor that can run off the USB port and can provide any possibility of input/output with both digital and analogue hardware, it was inevitable that I would want to actually acquire some hardware to connect to it!
My background in actual electronics hasn’t been great; my parents kindly got me a Dick Smith starter kit when I was much younger (remember back in the day when DSE actually sold components? Now I feel old :-/) but I never quite managed to grasp all the concepts, and a few attempts since then haven’t been that successful.
Part of the issue for me is that I learn by doing and by having good resources to refer to. Back then it wasn’t so easy, but with internet connectivity and thousands of companies selling components to consumers, offering tutorials and circuit design information, it’s never been easier.
Interestingly I found it hard to find a really good “you’re a complete novice with no clue about any of this” guide, but the Arduino learning resources are very good at detailing how their digital circuits work, and with a bit of wikipediaing they got me on the right track so far.
Also not having the right tools and components for the job is an issue, so I made a decision to get a proper range of components, tools, hookup wire and some Arduino units to make a few fun projects to learn how to make this stuff work.
I settled on 3 main projects:
These cover a few main areas – learning how to talk with one-wire sensor devices, how to use transistors to act as switches, different forms of serial communication, and some new programming languages.
Having next to no current electronic parts (soldering iron, breadboard and my general PC tools were about it) I went down the path of ordering a full set of different bits to make sure I had a good selection of tools and parts to make most circuits I want.
Ended up sourcing most of my electronic components (resistor packs, prototyping boards, hookup wire, general capacitors & ICs) from Mindkits in NZ, who also import a lot of Sparkfun stuff giving them a pretty awesome range.
Whilst the Arduinos I ordered supply 5V and 3.3V, I grabbed a separate USB-powered supply kit for projects needing their own feed – much easier running off USB (of which I have an abundance of ports around) than adding yet-another-wallwart transformer. I haven’t tackled it yet, but I’m sure my soldering skills will be horrific and naturally worth blogging about in future to scare any competent electronics geek.
I also grabbed two Dallas 1-wire temperature sensors, which whilst expensive compared to the analog options are so damn simple to work with and can be daisy chained. Freetronics sell a breakout board model all pre-assembled, but they’re pricey and they’re so simple you can just wire the sensors straight back to your Arduino circuit anyway.
Next I decided to order some regular size Arduinos from Freetronics – if I start wanting to make my own shields (expansion boards for the Arduinos), I’d need a regular sized unit rather than the ultrasmall Leostick.
Ended up getting the classic Arduino Eleven/Uno and one of the Arduino USB Droids, which provide a USB Host port so they can be used with Android phones to write software that can interface with hardware.
After a bit of time, all my bits have arrived from AU and the US and now I’m ready to go – planning to blog my progress as I get on with my electronics discovery; hopefully before long I’ll have some neat circuit designs up on here. :-)
Once I actually have a clue what I’m doing, I’ll probably go and prepare a useful resource on learning from scratch, to cover all the gaps that I found hard to fill, since learning this stuff opens up so many exciting projects once you get past the initial barrier.
I’ll keep posting my adventures as I get further into the development of different designs, I expect this is going to become a fun new hobby that ties into my other two main interests – computers and things with blinky lights. :-)
For my trip to linux.conf.au in Melbourne/Ballarat I had rescheduled my flights from Wellington to Auckland due to the fact that I had booked my flights before my lovely lady dragged me up to Auckland to live with her.
It’s the first time I’ve ever flown out of Auckland International Airport and to my delight, I was booked on an Air New Zealand 747. This is the very first time I’ve ever flown on one, and with AirNZ phasing out the 747s in favor of 777s, I’m glad to have been able to fly on one before they get phased out entirely.
I’d also like to add just for @thatjohn, that I got some awesome perks on the flight over, including a smile from a cute attendant and a FREE PEN! \m/
I’ve just returned from my annual pilgrimage to linux.conf.au, which was held in Perth this year. It’s the first time I’ve been over to Western Australia – it’s a whole 5 hour flight from Sydney, longer than it takes to fly to New Zealand.
Perth’s climate is a very dry heat compared to Sydney, so although it was actually hotter than Sydney for most of the week, it didn’t feel quite as unpleasant – other than the final day which hit 45 degrees and was baking hot…
It’s also a very clean/tidy city, the well maintained nature was very noticeable with the city and gardens being immaculately trimmed – not sure if it’s always been like this, or if it’s a side effect of the mining wealth in the economy allowing the local government to afford it more effectively.
As usual, the conference ran for 5 full days and featured 4-5 concurrent streams of talks during the week. The quality was generally high as always, although I feel that content selection has shifted away from a lot of deep dive technical talks to more high level talks, and that OpenStack (whilst awesome) is taking up far too much of the conference and really deserves its own dedicated conference now.
I’ve prepared my personal shortlist of the talks I enjoyed most of all for anyone who wants to spend a bit of time watching some of the recorded sessions.
Interesting New(ish) Software
Evolution of Linux
Walkthroughs and Warstories
Other Cool Stuff
Naturally there have been many other excellent talks – the above is just a selection of the ones that I got the most out of during the conference. Take a look at the full schedule to find other talks that might interest you; almost all sessions were recorded during the conference.
Final day of linux.conf.au – I’m about a week behind schedule in posting, but that’s about how long it takes to catch up on life following a week at LCA. ;-)
Friday’s conference keynote was delivered by Tim Berners-Lee, who is widely known as “the inventor of the world wide web”, but is more accurately described as the developer of HTML, the markup language behind all websites. Certainly TBL was an influential player in the internet’s creation and evolution, but the networking and IP layers of the internet were already being developed by others and are arguably more important than HTML itself; calling anyone the inventor of the internet is wrong for such a collaborative effort.
His talk was enjoyable, although very much a case of preaching to the choir – there wasn’t a lot that would really surprise any linux.conf.au attendee. What *was* more interesting than his talk content is the aftermath…
TBL was in Australia and New Zealand for just over 1 week, where he gave several talks at different venues, including linux.conf.au, as part of the “TBL Down Under Tour”. It turns out that the 1 week tour cost the organisers/sponsors around $200,000 in charges for TBL to speak at these events, a figure I personally consider outrageous for someone to charge non-profits for a speaking event.
I can understand high demand speakers charging to ensure that they have comfortable travel arrangements and even to compensate for lost earnings, but even at an expensive consultant’s charge rate of $1,500 per day, that’s no more than $30,000 for a 1 week trip.
I could understand charging a little more for an expensive commercial conference, such as a $2k-per-ticket-per-day corporate affair, but I would rather have a passionate technologist who comes for the chance to impart ideas and knowledge at a geeky conference than someone there to make a profit, any day – the $20-40k that Linux Australia contributed would have paid several airfares for some well deserving hackers to come to AU to present.
So whilst I applaud the organisers, and particularly Pia Waugh, for the effort spent making this happen, I have to state that I don’t think it was worth it, and seeing the amount TBL charged a non-profit entity for this visit actually really sours my opinion of the man.
I just hope that seeing a well known figure talking about open data and internet freedom at some of the more public events leads to more positive work in that space in NZ and AU and goes towards making up for this cost.
Friday had its share of interesting talks:
Sadly it turns out Friday is the last day of the conference, so I had to finish it up with the obligatory beer and chat with friends, before we all headed off for another year. ;-)
Turns out I’m not very good at blogging very often. However I thought I would put what I’ve been working on for the last few days here out of interest.
For a while the OpenStack Infrastructure team have wanted to move away from storing logs on disk to something more cloudy – namely, swift. I’ve been working on this on and off for a while and we’re nearly there.
For the last few weeks the openstack-infra/project-config repository has been uploading its CI test logs to swift as well as storing them on disk. This has given us the opportunity to compare the last few weeks of data and see what kind of effects we can expect as we move assets into object storage.
Fetching files from object storage is nothing particularly new or special (CDNs have been doing it for ages). However, for our usage we want to serve logs with os-loganalyze, giving the opportunity to hyperlink to timestamp anchors or to filter by log severity.
First though we need to get the logs into swift somehow. This is done by having the job upload its own logs. Rather than using (or writing) a Jenkins publisher, we use a bash script to grab the job’s own console log (pulled from the Jenkins web UI) and then upload it to swift using credentials supplied to the job as environment variables (see my zuul-swift contributions).
This does, however, mean part of the logs are missing. For example, the fetching and upload processes write to Jenkins’ console log, but because the log has already been fetched these entries are missing from the upload. Therefore this wants to be the very last thing you do in a job. I did see somebody do something similar where they keep the download process running in a fork so that they can fetch the full log, but we’ll look at that another time.
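As a rough illustration of that upload step, here is a minimal Python sketch. The container layout, base URL and token handling below are assumptions for illustration only – the real scheme lives in the project-config scripts, and the actual PUT (via curl, urllib or python-swiftclient) is elided to keep the example offline:

```python
def swift_object_path(base_url, container, job_name, build_number, filename):
    """Build the object URL for an uploaded log file.

    The <container>/<job>/<build>/<file> layout here is a hypothetical
    naming scheme for illustration, not project-config's real one.
    """
    return "/".join([base_url.rstrip("/"), container,
                     job_name, str(build_number), filename])


def upload_headers(token, content_type="text/plain"):
    """Headers for an authenticated swift PUT; the token would come from
    the credentials supplied to the job as environment variables."""
    return {"X-Auth-Token": token, "Content-Type": content_type}
```

The point is only that the job itself owns the upload, needing nothing beyond its environment variables and an HTTP client.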
When a request comes into logs.openstack.org, a request is handled like so:
os-loganalyze is set up as a WSGIScriptAlias at /htmlify/. This means all files that aren’t on disk are sent to os-loganalyze (or if the file is on disk but matches a file we want to mark up, it is also sent to os-loganalyze). os-loganalyze then does the following:
If the file exists both on disk and in swift then step #2 can be skipped by passing ?source=swift as a parameter (thus only attempting to serve from swift). In our case the files exist both on disk and in swift since we want to compare the performance, so this feature is necessary.
So now that we have the logs uploaded into swift and stored on disk we can get into some more interesting comparisons.
My first attempt at this was simply to fetch the files from disk and then from swift and compare the results. A crude little python script did this for me: http://paste.openstack.org/show/122630/
The script fetches a copy of the log from disk and then from swift (both through os-loganalyze and therefore marked-up) and times the results. It does this in two scenarios:
I then ran this in two environments.
Running on my home computer likely introduced a lot of errors due to my limited bandwidth, noisy network and large network latency. To help eliminate these errors I also tested it on 5 performance servers in the Rackspace cloud next to the log server itself. In this case I used ansible to orchestrate the test nodes thus running the benchmarks in parallel. I did this since in real world use there will often be many parallel requests at once affecting performance.
The following metrics are measured for both disk and swift:
The total time can be found by adding the first 3 metrics together.
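A sketch of how those three phases can be timed; the three callables stand in for the real HTTP connect/read-headers/read-body steps, which is an assumption about the shape of the original script rather than a copy of it:

```python
import time


def timed_fetch(open_conn, read_headers, read_body):
    """Time the three metrics measured above: request sent, response
    (time to first byte), and transfer (time to read the body)."""
    t0 = time.perf_counter()
    conn = open_conn()
    t1 = time.perf_counter()   # request sent
    read_headers(conn)
    t2 = time.perf_counter()   # response received
    body = read_body(conn)
    t3 = time.perf_counter()   # transfer complete
    return {
        "request_ms": (t1 - t0) * 1000.0,
        "response_ms": (t2 - t1) * 1000.0,
        "transfer_ms": (t3 - t2) * 1000.0,
        "total_ms": (t3 - t0) * 1000.0,
        "size_kb": len(body) / 1024.0,
    }
```

By construction the total is the sum of the three phases, which is why the post can simply add the first three metrics together.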
The complementary colours are the same metric and the darker line represents swift’s performance (over the lighter disk performance line). The vertical lines over the plots are the error bars while the fetched filesize is the column graph down the bottom. Note that the transfer and file size metrics use the right axis for scale while the rest use the left.
As you would expect, the requests for both disk and swift files are more or less comparable. We see a more noticeable difference on the responses though, with swift being slower. This is because disk is checked first, and if the file isn’t found on disk then a request is sent to swift to check there. Clearly this is going to be slower.
The transfer times are erratic and varied. We can’t draw much from these, so let’s keep analyzing deeper.
The total time from request to transfer can be seen by adding the times together. I didn’t do this as when requesting files of different sizes (in the next scenario) there is nothing worth comparing (as the file sizes are different). Arguably we could compare them anyway as the log sizes for identical jobs are similar but I didn’t think it was interesting.
The file sizes are there for interest’s sake, but as expected they never change in this case.
You might notice that the end of the graph is much noisier. That is because I’ve applied some rudimentary data filtering.
[Table: request sent, response and transfer times (ms) and fetched size (KB), disk vs swift – first pass]
I know it’s argued as poor practice to remove outliers using twice the standard deviation, but I did it anyway to see how it would look. I only did one pass at this even though I calculated new standard deviations.
[Table: request sent, response and transfer times (ms) and fetched size (KB), disk vs swift – without outliers]
I then moved the outliers to the end of the results list instead of removing them completely and used the newly calculated standard deviation (ie without the outliers) as the error margin.
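That outlier treatment can be sketched like this (a hypothetical helper, not the script actually used):

```python
import statistics


def split_outliers(samples, n_sigma=2.0):
    """Move points more than n_sigma standard deviations from the mean
    to the end of the list (rather than dropping them), and report the
    standard deviation recomputed without them as the error margin."""
    mean = statistics.mean(samples)
    sd = statistics.pstdev(samples)
    kept = [x for x in samples if abs(x - mean) <= n_sigma * sd]
    outliers = [x for x in samples if abs(x - mean) > n_sigma * sd]
    # Error margin from the filtered data only, as described above.
    error_margin = statistics.pstdev(kept) if len(kept) > 1 else 0.0
    return kept + outliers, error_margin
```

Only one pass is made here, matching the post: the new standard deviation is calculated but not used to re-filter.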
Then, to get a better indication of what the average times are, I plotted histograms of each of these metrics.
Here we can see a similar request time.
Here it is quite clear that swift is slower at actually responding.
Interestingly both disk and swift sources have a similar total transfer time. This is perhaps an indication of my network limitation in downloading the files.
Next from my home computer I fetched a bunch of files in sequence from recent job runs.
Again I calculated the standard deviation and average to move the outliers to the end and get smaller error margins.
[Tables: request sent, response and transfer times (ms) and size (KB), disk vs swift – first pass, and second pass without outliers]
What we are probably seeing here with the large number of slower requests is network congestion in my house. Since the script requests disk, swift, disk, swift and so on, this evens out, causing similar latency in both sources as seen.
Swift is very much slower here.
Although comparable in transfer times. Again this is likely due to my network limitation.
The size histograms don’t really add much here.
Now to reduce latency and other network effects I tested fetching the same log over again in 5 parallel streams. Granted, it may have been interesting to see a machine close to the log server do a bunch of sequential requests for the one file (with little other noise), but I didn’t do it at the time unfortunately. Also we need to keep in mind that others may be accessing the log server, and therefore any request, in both my testing and normal use, is going to have competing load.
I collected a much larger amount of data here making it harder to visualise through all the noise and error margins etc. (Sadly I couldn’t find a way of linking to a larger google spreadsheet graph). The histograms below give a much better picture of what is going on. However out of interest I created a rolling average graph. This graph won’t mean much in reality but hopefully will show which is faster on average (disk or swift).
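The rolling average itself is straightforward; a minimal sketch of the smoothing applied to the noisy per-request times:

```python
from collections import deque


def rolling_average(samples, window=50):
    """Simple moving average over the last `window` samples, used to
    smooth noisy per-request timings so the disk and swift series can
    be compared by eye. The window size is an arbitrary choice."""
    out, buf, total = [], deque(), 0.0
    for x in samples:
        buf.append(x)
        total += x
        if len(buf) > window:
            total -= buf.popleft()
        out.append(total / len(buf))
    return out
```

As noted, the smoothed curve doesn’t mean much in absolute terms; it only shows which source is faster on average.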
You can see now that we’re closer to the server that swift is noticeably slower. This is confirmed by the averages:
[Tables: request sent, response and transfer times (ms) and size (KB), disk vs swift – first pass, and second pass without outliers]
Even once outliers are removed we’re still seeing a large latency from swift’s response.
The standard deviation in the requests has now gotten very small. We’ve clearly made a difference moving closer to the logserver.
Very nice and close.
Here we can see that for roughly half the requests the response time was the same for swift as for the disk. It’s the other half of the requests bringing things down.
The transfer for swift is consistently slower.
Finally I ran just over a thousand requests in 5 parallel streams from computers near the logserver for recent logs.
Again the graph is too crowded to see what is happening so I took a rolling average.
[Tables: request sent, response and transfer times (ms) and size (KB), disk vs swift – first pass, and second pass without outliers]
The averages here are much more reasonable than when we continually tried to request the same file. Perhaps we’re hitting limitations with swift’s serving abilities.
I’m not sure why we have a sinc-like function here. A network expert may be able to tell you more. As far as I know this isn’t important to our analysis, other than the fact that both disk and swift match.
Here we can now see swift keeping a lot closer to disk results than when we only requested the one file in parallel. Swift is still, unsurprisingly, slower overall.
Swift still loses out on transfers but again does a much better job of keeping up.
I haven’t accounted for any of the following swift intricacies (in terms of caches etc):
I also haven’t done anything to account for things like file system caching, network profiling, noisy neighbours etc etc.
os-loganalyze tries to stay authenticated with swift, but tokens eventually expire. We could possibly explore getting longer authentication tokens, or having os-loganalyze pull from an unauthenticated CDN to add the markup and then serve; I haven’t explored those here though.
os-loganalyze also handles all of the requests not just from my testing but also from anybody looking at OpenStack CI logs. In addition to this it also needs to deflate the gzip stream if required. As such there is potentially a large unknown (to me) load on the log server.
In other words, there are plenty of sources of errors. However I just wanted to get a feel for the general responsiveness compared to fetching from disk. Both sources had noise in their results so it should be expected in the real world when downloading logs that it’ll never be consistent.
As you would expect the request times are pretty much the same for both disk and swift (as mentioned earlier) especially when sitting next to the log server.
The response times vary, but looking at the averages and the histograms these are rarely large. Even in the case where requesting the same file over and over in parallel caused responses to slow down, these were only on the order of 100ms.
The response time is the important one as it indicates how soon a download will start for the user. The total time to stream the contents of the whole log is seemingly less important if the user is able to start reading the file.
One thing that wasn’t tested was streaming of different file sizes. All of the files were roughly the same size (being logs of the same job). For example, what if the asset was a few gigabytes in size, would swift have any significant differences there? In general swift was slower to stream the file but only by a few hundred milliseconds for a megabyte. It’s hard to say (without further testing) if this would be noticeable on large files where there are many other factors contributing to the variance.
Whether or not these latencies are an issue is relative to how the user is using/consuming the logs. For example, if they are just looking at the logs in their web browser on occasion they probably aren’t going to notice a large difference. However if the logs are being fetched and scraped by a bot then it may see a decrease in performance.
Overall I’ll leave deciding on whether or not these latencies are acceptable as an exercise for the reader.
Why is this hipster voting on my code?!
Soon you are going to see a new robot barista leaving comments on Nova code reviews. He is obsessed with espresso, that band you haven’t heard of yet, and easing the life of OpenStack operators.
Doing a large OpenStack deployment has always been hard when it came to database migrations. Running a migration requires downtime, and when you have giant datasets that downtime could be hours. To help catch these issues Turbo-Hipster (http://josh.people.rcbops.com/2013/09/building-a-zuul-worker/) will now run your patchset’s migrations against copies of real databases. This will give you valuable feedback on the success of the patch, and how long it might take to migrate.
Depending on the results, Turbo-Hipster will add a review to your patchset that looks something like this:
That depends on why it has failed. Here are some scenarios and steps you can take for different errors:
FAILURE – Did not find the end of a migration after a start
WARNING – Migration %s took too long
FAILURE – Final schema version does not match expectation
FAILURE – Could not setup seed database.
FAILURE – Could not find seed database.
FAILURE – Could not import required module.
If you receive an error that you think is a false positive, leave a comment on the review with the sole contents “recheck migrations”.
If you see any false positives or have any questions or problems please contact us on firstname.lastname@example.org
After travelling very close to literally the other side of the world I’m in Edinburgh for LinuxCon EU recovering from jetlag and getting ready to attend. I’m very much looking forward to my first LinuxCon, meeting new people and learning lots :-).
If you’re around and would like to catch up drop me a comment here. Otherwise I’ll see you at the conference!
Welcome to my new blog.
You can find my old one here: http://josh.opentechnologysolutions.com/blog/joshua-hesketh
Zuul is the continuous integration utility used by OpenStack to gate patchsets against tests. It takes care of communicating with gerrit (the code review system) and the test workers – usually Jenkins. You can read more about how the systems tie together on the OpenStack Project Infrastructure page.
“Turbo-hipster is a CI worker with pluggable tasks initially designed to test OpenStack’s database migrations against copies of real databases.”
This will hopefully catch scenarios where changes to the database schema may not work due to outliers in real datasets and also help find where a migration may take an unreasonable amount of time against a large database.
In zuul’s layout configuration we are able to specify which jobs should be run against which projects in which pipelines. For example, for nova we want to run tests when a patchset is created, but we don’t (necessarily) need to run tests against it once it is merged etc. So in zuul we specify a new gate (aka job) to test nova against real databases.
turbo-hipster then listens for jobs created on that gate using the gearman protocol. Once it receives a patchset from zuul, it creates a virtual environment and tests the upgrades. It then compiles the results and sends them back.
At the moment turbo-hipster is still under heavy development, but I hope to have it reporting results back to gerrit patchsets soon as part of zuul’s report summary. For the moment I have a separate zuul instance running to test new nova patches and email the results back to me. Here is an example result report:
<code>Build succeeded. - http://thw01.rcbops.com/logviewer/?q=/results/47/47162/9/check/gate-real-db-upgrade_nova_mysql/c4bc35c/index.html : SUCCESS in 13m 31s </code>
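For illustration, the shape of such a report line can be rendered with a trivial helper. The field layout here is inferred from the single sample above, not taken from zuul’s code, and the URL is just a placeholder:

```python
def format_result(job_url, status, minutes, seconds):
    """Render a result line in the style of the sample report above:
    "- <log url> : <STATUS> in <M>m <S>s"."""
    return "- %s : %s in %dm %ds" % (job_url, status, minutes, seconds)


# A hypothetical successful run, echoing the sample report's shape.
line = format_result("http://example.com/c4bc35c/index.html",
                     "SUCCESS", 13, 31)
```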
*The name was randomly generated and does not necessarily contain meaning.
The last week has been an interesting nexus of Open and Free.
On Saturday I attended the Firefox OS App day in Wellington. I had heard about Firefox OS some time ago under its project name Boot2Gecko (b2g). At the time I had thought that it was an intriguing idea, but wouldn't be very powerful. I was certainly wrong. Firefox OS is fairly mature and looking like it will be very powerful. Check out arewemobileyet.com for an idea where they are heading (for example WebUSB!) It appeared to work well on the developer phones (re-flashed Android phones; the same Linux kernel is used).
I wasn't able to stick around to see what people developed, but it was very interesting.
Last night I watched the live stream of Sir Tim Berners-Lee, the inventor of the World Wide Web, giving a public lecture in Wellington (I missed out on a ticket) on "The Open Internet and World Wide Web". He covered the many forms of openness and freedom, including open standards, open source software, open access, open data, and open Internet. One key point from the lecture was that native apps (on IOS or Android, for example) take you off the Web, and therefore away from the core of social discourse. This is significant and currently increasingly happening. I will tweet a link when the lecture is available to view online.
These events dovetail nicely and fit with my general strategy of focusing on web apps that work nicely on phones, tablets, and computers.
I've updated this site over the last few weeks to help manage it going forward.
The key piece is this blog, which I hope to update somewhat frequently. I haven't enabled comments, so reply by twitter or email.
So after thinking about it, I have purchased a new domain, http://begg.digital/. There have been many, many new top level domains launched recently.
I've also taken the opportunity to refresh the Begg Digital website as well. There is now a link to the various tools I've created to help the Python community, and there are some more pages to come soon too. The underlying code got a significant upgrade as well (there is a saying about a builder's house...)
Hopefully there will be some more news soon.
I have updated the py3progress site. I really should automate it sometime, since the last update was in September.
Since the whole of 2013 is now up, I think we should review what happened.
So the first thing that jumps out at me is there is less red and more green. That's great! Concretely, the percentage of the top 200 that supports Python 3 has gone from 51% (103) to 69% (138), and it's up another 5% in the first two months of 2014.
The oddly consistent period in the middle of the year was when PyPI changed to providing downloads via a CDN (Content Delivery Network) [UPDATE: 2014-04-11; I originally attributed this to a change by the mirror team] and the stats took a week or so to be updated. In some ways, the data after that point might not accurately reflect the actual popularity of the packages, but we are only really worried about the indicative relative popularity, and the data should be good enough.
Not long after, the ssl module races up from outside the top 200 to inside the top 10. It's clearly visible as it is included in Python 3 and therefore shown in light blue. I'm not sure what has driven its increase; perhaps a popular package now depends on it?
About 5 projects changed to Python 2 only during the year. On the whole they have lost popularity. Some even dropped out of the top 200.
I note from the python3wos page that December 2013 marked 5 years since Python 3.0 was first released. Python 3.3, which added a few features to support backward compatibility, was released in September 2012. Python 3.4 is currently at the release candidate stage.
So looking at the Python 3 Wall of Superpowers today, 149 of the top 200 downloads support Python 3. Let's look at some of the ones that don't.
Boto is the highest ranked non-Python 3 package, at 3rd. It is a library for interacting with AWS services. Python-cloudfiles depends on it and is further down the list.
Paste (18th), the web framework, is next. It hasn't been updated since 2010.
Paramiko (22nd) is an SSH library which, from the GitHub issue, appears to be under active porting. Paramiko is something I use in multiple ways. One is Fabric, a remote execution tool used for deployment and automation, which is 37th and will be ported once paramiko is ported.
Just above Fabric, at 35th, is the MySQL-python library. This also appears to be not too far away from having a working Python 3 version.
The first Python 2 only package is meld3 (56th), a templating library. The second is more important to me: Twisted, at 76th. Twisted is an asynchronous networking framework and it's used by other packages on the list, such as carbon (52nd) and graphite-web (53rd). Unusually, the Python 2 only tag means something slightly different for the Twisted project - they have an active project to port to Python 3, it's just a really, really big job.
Python 3 makes a significant improvement, mostly removing old wrinkles and being clearer about the bytestring/unicode datatypes. The transition is ongoing (like IPv6), and a good portion of the libraries people use will need to support Python 3 before the bulk of developers will start developing with Python 3 (even though it's technically better). I'm looking at what packages I use and hope to soon start using Python 3 for some things.
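The bytestring/unicode clean-up is easiest to see in a small example. This snippet (mine, not from any particular package) shows how Python 3 makes the text/bytes boundary explicit:

```python
# In Python 3, text (str) and binary data (bytes) are distinct types,
# and converting between them is always an explicit encode/decode step.
text = "caf\u00e9"            # str: a sequence of Unicode code points
data = text.encode("utf-8")   # bytes: the UTF-8 encoding of that text

assert isinstance(data, bytes)
assert data == b"caf\xc3\xa9"
assert data.decode("utf-8") == text

# Mixing the two types raises an error instead of silently coercing,
# which removes a whole class of Python 2 UnicodeDecodeError surprises.
try:
    b"abc" + "def"
except TypeError:
    print("cannot concatenate bytes and str")
```

In Python 2 the equivalent concatenation would have "worked" until a non-ASCII byte appeared at runtime, which is exactly the wrinkle Python 3 removes.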
One of the issues when developing in Python is keeping track of security updates to dependencies, such as libraries. While I could subscribe to every mailing list and check all the websites regularly, that is a lot of work.
Most packages in the Python environment are released on PyPI, also known as the Cheese Shop. This site lists over 33,000 packages. Handily, PyPI provides an API to query what packages are available and what versions of each package exist. It doesn't, however, let you know when a package important to you is updated.
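As a rough sketch of the kind of query involved (the function names and the pinned-requirements format are my own; the JSON endpoint is PyPI's `/pypi/<name>/json` API), checking a pinned dependency against PyPI looks something like this:

```python
import json
from urllib.request import urlopen

def parse_requirement(line):
    """Split a pinned requirements.txt line like 'South==0.8.4'."""
    name, _, version = line.strip().partition("==")
    return name, version

def latest_version(package):
    """Ask PyPI's JSON API for the newest release of a package."""
    url = "https://pypi.org/pypi/%s/json" % package
    with urlopen(url) as resp:
        return json.load(resp)["info"]["version"]

def check(req_line):
    """Print a message if the pinned version is behind PyPI."""
    name, pinned = parse_requirement(req_line)
    newest = latest_version(name)
    if newest != pinned:
        print("%s: pinned %s, latest is %s" % (name, pinned, newest))
```

A real checker would run this on a schedule over a whole requirements file and send the email alert; this is just the core query.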
So I created ReqFile Check (warning, self-signed certificate). I created this website to track what packages I'm using and send me an email alert when one is updated. Today it helpfully told me there was an update to South, and checking the website for South shows that it fixes a bug I had encountered.
Begg Digital is proud to announce that Classie has launched.
Classie (www.getclassie.com) is a dance studio management web application. It makes taking attendance simple and saves money on sending notices. The app is designed to work on mobile phones, tablets and desktop browsers so it can be used anywhere, avoiding pieces of paper and entering data later.
Begg Digital manages the hosting for Classie. Lee, owner of Begg Digital, is also a founder of Classie and the primary developer.
This weekend I took part in the International Space Apps Challenge. It is an event (not too dissimilar from Startup Weekend) where teams come together to solve challenges relating to space. The challenges range from hardware to software and visualisation, and from chicken farms to Mars.
I undertook the bootstrapping lunar industry challenge, also known as #Moonville: building a game for planning how to get private industry onto the moon. I wasn't able to attract a team, but made a reasonable start.
MoonBizGame is the web-based game I created. It is a turn-based game with highly compressed timelines. Currently, you can log in with Persona, create your enterprise and buy launches into space. The SSL certificate is self-signed so you will get a warning, but the game is available at https://moonbizgame.beggdigital.com/
I mentioned above that the site uses Persona to log in. This is a technology created by Mozilla so that people don't need to create a username and password for every site they use. The Django implementation works well, even when debugging on localhost. I look forward to using it on more sites in the future.
Over the weekend I published the Py3 Progress site. I had been meaning to make the data available and now finally I have. It didn't take much.
The Py3 Progress site has a "waterfall" plot of the Python 3 port status of the top 200 downloads from the Python Package Index. The raw data was downloaded from the Python 3 Wall of Superpowers every day, and then accumulated into the waterfall plots.
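As a rough illustration of the accumulation step (the data structures below are invented; the real scrape of the Wall of Superpowers will differ), daily snapshots can be folded into a per-package rank history, which is all a waterfall plot needs to draw:

```python
# Each daily snapshot is an ordered list of (package, status) pairs,
# rank 1 first. Accumulating them gives each package its history of
# (day, rank, status) rows - one line of the waterfall plot.
def accumulate(snapshots):
    """Map each package name to its [(day, rank, status), ...] history."""
    history = {}
    for day, packages in sorted(snapshots.items()):
        for rank, (name, status) in enumerate(packages, start=1):
            history.setdefault(name, []).append((day, rank, status))
    return history

snapshots = {
    "2013-01-01": [("distribute", "py3"), ("Twisted", "py2-only")],
    "2013-01-02": [("Twisted", "py2-only"), ("distribute", "py3")],
}
hist = accumulate(snapshots)
# distribute slips from rank 1 to rank 2 between the two days,
# which would appear as a line drifting right on the plot.
```

Colouring each line by its status column is then all that separates the green, red, and purple trends described below.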
There are some interesting trends in the data. About half way down the 2012 plot (August onwards), you can see where Django first reported that it was Python 3 compatible - it's in the sixth column. I haven't tried it yet. The purple lines are projects that say they will not be ported to Python 3 in the near future (or ever), and they are slowly trending to the right (down in ranking). Among those projects is Twisted, which has an active py3 porting branch. A couple of weeks ago a couple more projects marked themselves as not porting. The light green/blue lines also trend right. These are the packages which are now included in Python 3, and in some cases Python 2.7 and even 2.6. Since people don't need to download the files to use the package, they are slowly falling in the rankings as well. The best news is that the Python 3 compatible packages shown in green are generally trending left, and the not yet ported packages in red are generally trending right (or converting).
There are a lot of Django-based packages in the top 200 (approximately 11 packages). Since Django 1.5 was released a couple of weeks ago and it supports Python 3, I expect that most of those packages will also be ported fairly quickly. Some already have.