Planet linux.conf.au
Celebrating the wonderful linux.conf.au 2015 conference...

January 23, 2015

Rocket and App Container 0.2.0 Release

This week both Rocket and the App Container (appc) spec have reached 0.2.0. Since our launch of the projects in December, both have been moving very quickly with a healthy community emerging. Rocket now has cryptographic signing by default and a community is emerging around independent implementations of the appc spec. Read on for details on the updates.

Rocket 0.2.0

Development on Rocket has continued rapidly over the past few weeks, and today we are releasing v0.2.0. This important milestone release brings a lot of new features and improvements that enable securely verified image retrieval and tools for container introspection and lifecycle management.

Notably, this release introduces several important new subcommands:

  • rkt enter, to enter the namespace of an app within a container
  • rkt status, to check the status of a container and applications within it
  • rkt gc, to garbage collect old containers no longer in use
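
A rough usage sketch of these subcommands (illustrative only; the container UUID is a placeholder and exact flags and output may differ in 0.2.0):

$ rkt run coreos.com/etcd:v2.0.0-rc.1   # launch an app in a new container
$ rkt status $UUID                      # status of that container and the apps inside it
$ rkt enter $UUID                       # enter the namespace of an app in the container
$ rkt gc                                # garbage collect containers no longer in use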

In keeping with Rocket's goals of being simple and composable, we've taken care to implement these lifecycle-related subcommands without introducing additional daemons or databases. Rocket achieves this by taking advantage of existing file-system and kernel semantics like advisory file-locking, atomic renames, and implicit closing (and unlocking) of open files at process exit.

v0.2.0 also marks the arrival of automatic signature validation: when retrieving an image during rkt fetch or rkt run, Rocket will verify its signature by default. Kelsey Hightower has written up an overview guide explaining this functionality. This signature verification is backed by a flexible system for storing public keys, which will soon be even easier to use with a new rkt trust subcommand. This is a small but important step towards our goal of Rocket being as secure as possible by default.

Here's an example of the key validation in action when retrieving the latest etcd release (in this case the CoreOS ACI signing key has previously been trusted using the process above):

$ rkt fetch coreos.com/etcd:v2.0.0-rc.1
rkt: searching for app image coreos.com/etcd:v2.0.0-rc.1
rkt: fetching image from https://github.com/coreos/etcd/releases/download/v2.0.0-rc.1/etcd-v2.0.0-rc.1-linux-amd64.aci
Downloading aci: [=============================                ] 2.31 MB/3.58 MB
Downloading signature from https://github.com/coreos/etcd/releases/download/v2.0.0-rc.1/etcd-v2.0.0-rc.1-linux-amd64.sig
rkt: signature verified: 
  CoreOS ACI Builder <release@coreos.com>

App Container 0.2.0

The appc spec continues to evolve but is now stabilizing. Some of the major changes are highlighted in the announcement email that went out earlier this week.

This last week has also seen the emergence of two different implementations of the spec: jetpack (a FreeBSD/Jails-based executor) and libappc (a C++ library for working with app containers). The authors of both projects have provided extremely helpful feedback and pull requests to the spec, and it is great to see these early implementations develop!

Jetpack, App Container for FreeBSD

Jetpack is an implementation of the App Container Specification for FreeBSD. It uses jails as an isolation mechanism, and ZFS for layered storage. Jetpack is a great test of the cross platform portability of appc.

libappc, C++ library for App Container

libappc is a C++ library for doing things with app containers. The goal of the library is to be a flexible toolkit: manifest parsing and creation, pluggable discovery, image creation/extraction/caching, thin-provisioned file systems, etc.

Get involved

If you are interested in contributing to any of these projects, please get involved! A great place to start is the issues with the Help Wanted label on GitHub. You can also reach out with questions and feedback on the Rocket and appc mailing lists:

Rocket

App Container

In the SF Bay Area or NYC next week? Come to the meetups in each area to hear more about these changes and the future of rocket and appc. RSVP to the CoreOS NYC meetup and SF meetup to learn more.

Lastly, thank you to the community of contributors emerging around Rocket and App Container:

Alan LaMielle, Alban Crequy, Alex Polvi, Ankush Agarwal, Antoine Roy-Gobeil, azu, beadon, Brandon Philips, Brian Ketelsen, Brian Waldon, Burcu Dogan, Caleb Spare, Charles Aylward, Daniel Farrell, Dan Lipsitt, deepak1556, Derek, Emil Hessman, Eugene Yakubovich, Filippo Giunchedi, Ghislain Guiot, gprggr, Hector Fernandez, Iago López Galeiras, James Bayer, Jimmy Zelinskie, Johan Bergström, Jonathan Boulle, Josh Braegger, Kelsey Hightower, Keunwoo Lee, Krzesimir Nowak, Levi Gross, Maciej Pasternacki, Mark Kropf, Mark Lamourine, Matt Blair, Matt Boersma, Máximo Cuadros Ortiz, Meaglith Ma, PatrickJS, Pekka Enberg, Peter Bourgon, Rahul, Robo, Rob Szumski, Rohit Jnagal, sbevington, Shaun Jackman, Simone Gotti, Simon Thulbourn, virtualswede, Vito Caputo, Vivek Sekhar, Xiang Li

January 20, 2015

Meet us for our January 2015 events

CoreOS CTO Brandon Philips speaking at Linux Conf AU

January has been packed with meetups and events across the globe. So far, we’ve been to India, Switzerland, France, England and New Zealand.

Check out a CoreOS tutorial from Brandon Philips (@brandonphilips) at Linux Conf New Zealand.

Our team has been on a fantastic tour meeting CoreOS contributors and friends around the world. A special thank you to the organizers of those meetups and to all those who came out to the meetups and made us feel at home. Come join us at the following events this month:

Tuesday, January 27 at 11 a.m. PST – Online

Join us for a webinar on Managing CoreOS Container Performance for Production Workloads. Kelsey Hightower (@kelseyhightower) from CoreOS and Matt Williams from Datadog will discuss trends in container usage and show how container performance can be monitored, especially as the container deployments grow. Register here.


Tuesday, January 27 at 6 p.m. EST – New York, NY

Come to our January New York City meetup at Work-Bench, 110 Fifth Avenue on the 5th floor, where our team will discuss our new container runtime, Rocket, as well as new Quay.io features. In addition, Nathan Smith, head of engineering at Wink, www.wink.com, will walk us through how they are using CoreOS. Register here.


Tuesday, January 27 at 6 p.m. PST – San Francisco, CA

Our January San Francisco meetup is not to be missed! We’ll discuss news and updates on etcd, Rocket and appc. Register here.


Thursday, January 29 at 7 p.m. CET – Barcelona, Spain

Meet Brian Harrington, better known as Redbeard (@brianredbeard), for CoreOS: An Overview, at itnig. Dedicated VMs and configuration management tools are being replaced by containerization and new service management technologies like systemd. This meetup will give an overview of CoreOS, including etcd, schedulers (mesos, kubernetes, etc.), and containers (nspawn, docker, rocket). Understand how to use these new technologies to build performant, reliable, large distributed systems. Register here.


Saturday, January 31-Sunday, February 1 – Brussels, Belgium

Our team is attending FOSDEM ’15 to connect with developers and the open source community. See our talks and meet the team at our dev booth throughout the event.

  • Redbeard (@brianredbeard) will discuss How CoreOS is built, modified, and updated on Saturday at 1 p.m. CET.
  • Jon Boulle (@baronboulle) from our engineering team will discuss all things Go at CoreOS on Sunday at 9:05 a.m. CET.
  • Kelsey Hightower (@kelseyhightower), developer advocate at CoreOS, will give a talk on Rocket and the App Container Spec at 11:40 a.m. CET.

A special shout out to the organizers of those meetups - Fintan Ryan, Ranganathan Balashanmugam, Muharem Hrnjadovic, Frédéric Ménez, Richard Paul, Piotr Zurek, Patrick Heneise, Benjamin Reitzammer, Sunday Ogwu, Tom Martin, Chris Kuhl and Johann Romefort.

If you are interested in hosting an event of your own or inviting someone from CoreOS to speak, reach out to us at press@coreos.com.

Sahana @ linux.conf.au

Last week I was able to attend linux.conf.au which was being hosted in my home town of Auckland. This was a great chance to spend time with people from the open source community from New Zealand, Australia and around the [Read the Rest...]

‘Sup With The Tablet?

As I mentioned on Twitter last week, I’m very happy SUSE was able to support linux.conf.au 2015 with a keynote giveaway on Wednesday morning and sponsorship of the post-conference Beer O’Clock at Catalyst:

For those who were in attendance, I thought a little explanation of the keynote gift (a Samsung Galaxy Tab 4 8″) might be in order, especially given the winner came up to me during the post-conference drinks and asked “what’s up with the tablet?”

To put this in perspective, I’m in engineering at SUSE (I’ve spent a lot of time working on high availability, distributed storage and cloud software), and while it’s fair to say I represent the company in some sense simply by existing, I do not (and cannot) actually speak on behalf of my employer. Nevertheless, it fell to me to purchase a gift for us to provide to one lucky delegate sensible enough to arrive on time for Wednesday’s keynote.

I like to think we have a distinct engineering culture at SUSE. In particular, we run a hackweek once or twice a year where everyone has a full week to work on something entirely of their own choosing, provided it’s related to Free and Open Source Software. In that spirit (and given that we don’t make hardware ourselves) I thought it would be nice to be able to donate an Android tablet which the winner would either be able to hack on directly, or would be able to use in the course of hacking something else. So I’m not aware of any particular relationship between my employer and that tablet, but as it says on the back of the hackweek t-shirt I was wearing at the time:

Some things have to be done just because they are possible.

Not because they make sense.

 

January 16, 2015

Linux.conf.au 2015 – Day 5 – Session 3

NoOps with Ansible and Puppet – Monty Taylor

  • NoOps
    • didn’t know it was a contentious term
    • “devs can code and let a service deploy, manage and scale their code”
    • I want to change the system by landing commits. don’t want to “do ops”
    • if I have to use my root access it is a bug
  • Cloud Native
    • Ephemeral Compute
    • Data services
    • Design your applications to be resilient via scale out
    • Cloud scale out, forget HA for one system, forget long-lived system, shared-nothing for everything. Cloud provides the hard scale-out/HA/9s stuff
    • Great for new applications
  • OpenStack Infra
    • Tooling, automation, and CI for the openstack project
    • 2000 devs
    • every commit is fully tested.
    • each test runs on a single use cloud slave
    • 1.7 million test jobs in the last 6 months. 18 TB of log data
    • all runs in HP and rackspace public clouds
  • Create Servers manually at 1st
  • Step 1 – Puppet
    • extra hipster because it is in ruby
    • If you like ruby it is awesome. If you don’t, it is less awesome
    • collaboration from non-root users
    • code review
    • problem that it blows up when you try and install the same thing in two different places
    • 3 ways to run: masterless puppet apply; master + puppet agent daemon; master + puppet agent non-daemon
  • Secret stuff that you don’t want in your puppet git repo
    • hiera
  • Step 2 – Ansible for orchestration
    • Control the puppet agent so it runs nicely, on schedule, and on the correct hosts first (ad-hoc sketch after this list)
    • Open source system management tool
    • Sequence of steps not description of state like puppet
    • ad-hoc operation. run random commands
    • easy to slowly grow over time till it takes over from puppet
    • yaml syntax of config files
  • Step 3 – Ansible for cloud management
  • Ansible config currently mixed in with puppet under – http://git.openstack.org/cgit/openstack-infra/system-config/
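
A minimal sketch of step 2 above, using Ansible ad-hoc mode to drive the puppet agent across a group of hosts (the inventory group name "puppet" and the playbook name are made up for illustration):

$ ansible puppet -f 10 -m command -a "puppet agent --test --noop"   # dry-run puppet on all puppet-managed hosts
$ ansible-playbook site.yml --limit puppet                          # or drive the same steps from a playbook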

 

Conference Closing

  • Steve Walsh wins Rusty Wrench award
  • Preview of Linux.conf.au 2016 in Geelong
    • Much flatter than Auckland
    • Deakin University – Waterfront Campus
    • Waurn Ponds student accommodation, 15 minutes away with shuttles
    • Feb 8th – 12th 2016
    • CFP 1st of June 2015
    • Theme “life is better with linux”
    • 4 keynotes confirmed or in final stages of discussion, 2 female, 2 male
    • NFS keytags
    • lcabythebay.org.au
  • Announcement for Linux.conf.au 2017 will be in Hobart

 

Linux.conf.au 2015 – Day 5 – Session 2

When Everything Falls Apart: Stories of Version Control System Scaling – Ben Kero

  • Sysadmin at Mozilla looking after VCS
  • Primarily covering mercurial
  • Background
    • Primarily mercurial
    • 3445 repos (1223 unique)
    • 32 million commits
    • 2TB+ transfer per day
    • 1000+ clones per day
    • Biggest customer = ourselves
    • tested platforms > 12
  • Also use  git (a lot) and a bit of:  subversion, CVS, Bazaar, RCS
  • 2 * ssh servers, 10 machines mirror http traffic behind load balancer
  • 1st story – know what you are hosting
    • Big git repo (1.7G) somebody asked to move off github
    • Turned out to be mozilla git mirror, so important to move
    • plenty of spare resources
    • But high load straight away
    • turned out to be mercurial->git converter, huge load
    • Ran garbage collection – took several hours
    • tweaked some other settings
  • 2nd story
    • 2003 . “Try” CI system
    • Simple CI system (before the term existed or they were common)
    • flicks off to build server, sends status back to dev
    • mercurial had history being immutable up until v2.1 and mozilla was stuck on old version
    • ended up with 29,000 branches in repo
    • Around 10,000 heads some operations just start to fail
    • Wait times for pushes over 45 minutes. Manual fixes for this
    • process was “hg serve” just freezing up, with no debug info
    • had to attach debugging; it was trying to update the cache
    • cache got nuked by cached push, long process to rebuild it.
    • mercurial bug 4255 in process of being looked at, no fix yet
  • The new system
    • More web-scalable, to replace the old system
    • Closer to the pull-request model
    • multi-homing
    • leverage mercurial bundles
    • stores bundles in scalable object store
    • hopefully minimal retooling from other groups (lots of weird systems supported)
  • Planet release engineering @ mozilla

SL[AUO]B: Kernel memory allocator design and philosophy – Christopher Lameter

  • NOTE: I don’t do kernel stuff so much of this is over my head.
  • Role of the allocator
    • page allocator only works in full page size (4k) and is fairly slow
    • slab allocator for smaller allocation
    • SLAB is one of the “slab allocators”
  • kmem_cache, NUMA aware, etc
  • History
    • SLOB: K&R style, 1991-1999, compact
    • SLAB: Solaris style, 1999-2008, cache friendly, benchmark friendly
    • SLUB: 2008-today, simple and instruction costs count, better debugging, defrag, execution time friendly
  • 2013 – work to split out common code for allocators
  • SLOB
    • manages a list of free objects within the space of the free objects
    • have to traverse list to find object of sufficient size
    • rapid fragmentation of memory
  • SLAB
    • queues per cpu and per node to track cache hotness
    • queues for each remote node
    • complete data structures
    • cold object expiration every 2 seconds on each CPU
    • large systems with LOTS of CPUs have huge amount of memory trapped, spending lots of time cleaning cache
  • SLUB
    • A lot less queuing
    • Pages associated with per-cpu. increased locality
    • page based policies and interleave
    • de-fragmentation on multiple levels
    • current default in the kernel
  • slabinfo tool for SLUB: tune, modify, query, control objects and settings (quick sketch after this list)
  • can be asked to go into debug mode even when debugging not enabled with rest of the kernel
  • Comparing
    • SLUB faster (SLAB good for benchmarks)
    • SLOB slow
    • SLOB less memory overhead for small/simple systems (only, doesn’t handle lots of reallocations that fragment)
  • Roadmap
    • More common framework
    • Various other speedups and features
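
For the slabinfo tool mentioned above, a quick look at allocator state might be something like this (the slabinfo binary ships in the kernel source under tools/vm and has to be built locally; treat this as a sketch):

$ sudo cat /proc/slabinfo | head     # raw per-cache statistics
$ sudo ./slabinfo                    # summarised SLUB cache report (built from tools/vm/slabinfo.c)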

 

Linux.conf.au 2015 – Day 5 – Session 1

How to get one of those Open Source jobs – Mark Atwood

  • Warns the talk might still have some US-centric stuff in it
  • “Open Source Job” – most important word is “Job”
    • The Open Source bit means you are a bit more transferable than a closed-source programmer
    • Don’t have to move to major tech city
  • Communication skills
    • Have to learn to Write clearly in English
    • Have to learn how to speak, including in meetings, and give some talks
    • Reachable – Have a public email address
    • Don’t be a jerk, reputation very important
  • Technical skills
    • Learn how to program
    • Start with python and javascript
    • Learn other languages eg Scala, Erlang, Clojure, C, C++
    • How to use debugger and IDE
    • Learn to use git well
    • Learn how to code test (especially to work with CI testers like jenkins)
    • Idea: Do lots of simple practice problems in programming using a specific technique or language
  • Relationships & Peers
    • Work with people remote and nearby
    • stackoverflow
    • Don’t be a jerk
  • Work
    • Have to “do the work” then “get the job”
    • Start by fixing bugs on a project
    • Your skills will improve and others will see you have those skills
  • Collaborate
    • Many projects use IRC
    • Most projects have bug tracker
    • Learn how to use the non-basic stuff in git
    • Peer programming
  • Reputation
    • Portfolio vs resume
    • github account is your portfolio
    • Need to be on social media, at least a little bit; must be reachable
  • Getting the Job
    • If you have a good enough rep the jobs will seek you out
    • Keywords on github and linkedin will attract recruiters
    • People will suggest that you apply
    • Conferences like linux.conf.au
    • Remember to counter-offer the offer letter
    • Once you are working for them, work out what is job related and the company might have a claim on. Make sure you list in your agreement any projects you are already working on
  • Health
    • Don’t work longer than 40h a week regularly
    • 60h weeks can only be sustained for a couple of weeks
    • Don’t just eat junk food
    • Don’t work for jerks
  • Money
    • Startups – bad for your health. Do not kill yourself for a nickel, have real equity
  • Keep Learning
  • 3 books to read
    • Oh, the Places You’ll Go! – Dr Seuss
    • Getting things Done – David Allen
    • How to fail at almost everything and still win big – Scott Adams

 

Pettycoin: Towards 1.0 – Rusty Russell

  • Problem is that bitcoin mining is expensive, which places a lower limit on transaction fees
  • Took 6 months off to mostly work on pettycoin
  • Pettycoin
    • Simple
    • gateway to bitcoin
    • small amounts
    • partial knowledge, don’t need to know everything
    • fast block times
  • Altcoins – bitcoin like things that are not bitcoin
    • 2 million posts to altcoin announce forum
    • lots of noise to talk to people
  • review
    • Paper released saying how it should have been done
    • hash functions
    • bitcoin blocks
    • Bitcoin transactions
  • Sidechain
    • alternative chains that use real bitcoins
    • Lots of wasted work? – bitcoin miners can mine other chains at the same time
    • too fast to keep notes
    • Compact SPV Proofs (reduce length of block headers needed to go all the way back)

 

January 15, 2015

Gender diversity in linux.conf.au speakers

My first linux.conf.au was 2003 and it was absolutely fantastic and I’ve been to every one since. Since I like this radical idea of equality and the LCA2015 organizers said there were 20% female speakers this year, I thought I’d look through the history.

So, since there isn’t M or F on the conference program, I have to guess. This probably means I get things wrong and have a bias. But, heck, I’ll have a go and this is my best guess (and mostly excludes miniconfs as I don’t have programmes for them)

  • 2003: 34 speakers: 5.8% women.
  • 2004: 46 speakers: 4.3% women.
  • 2005: 44 speakers: 4.5% women
  • 2006: 66 speakers: 0% women (somebody please correct me, there’s some non gender specific names without gender pronouns in bios)
  • 2007: 173 speakers: 12.1% women (and an order of magnitude more than previously). Includes miniconfs

    (didn’t have just a list of speakers, so this is numbers of talks and talks given by… plus some talks had multiple presenters)
  • 2008: 72 speakers: 16.6% women
  • 2009: 177 speakers (includes miniconfs): 12.4% women
  • 2010: 207 speakers (includes miniconfs): 14.5% women
  • 2011: 194 speakers (includes miniconfs): 14.4% women
  • 2012: (for some reason site isn’t responding…)
  • 2013: 188 speakers (includes most miniconfs), 14.4% women
  • 2014: 162 speakers (some miniconfs included): 19.1% women
  • 2015: As announced at the opening: 20% women.

Or, in graph form:

Sources:

  • the historical schedules up on linux.org.au.
  • my brain guessing the gender of names. This is no doubt sometimes flawed.

Update/correction: lca2012 had around 20% women speakers at main conference (organizers gave numbers at opening) and 2006 had 3 at sysadmin miniconf and 1 in main conference.

Linux.conf.au 2015 – Day 5 – Keynote/Panel

  • Everybody sang Happy Birthday to Bdale
  • Bdale said he has a new house and FreedomBox 0.3 release this week
  • Rusty also on the panel
  • Questions:
    • Why is Linus so mean
    • Unified Storage/Memory machines – from HP
    • Young people getting into community
    • systemd ( I asked this)
    • Year of the Linux Desktop
    • Documentation & training material
    • Predict the security problems in next 12 month
    • Does NZ and Australia need a joint space agency
    • Will you be remembered more for Linux or Git?

Linux.conf.au 2015 – Day 4 – Session 3

Drupal8 outta the box – Donna Benjamin

  • I went to the first half of this but wanted to catch the talk below so I missed the 2nd part

 

Connecting Containers: Building a PaaS with Docker and Kubernetes – Katie Miller

  • co-presented with Steve Pousty
  • Plugs their OpenShift book; they are re-architecting the whole thing based on what is in the book
  • Platform as a service
    • dev tooling, runtime, OS , App server, middleware.
    • everything except the application itself
    • Openshift is an example
  • Reasons to rebuild
    • New tech
    • Lessons learned from old deploy
  • Stack
    • Atomic + Docker + Kubernetes
  • Atomic
    • Red Hat’s answer to CoreOS
    • RPM-OSTree – atomic update to the OS
    • Minimal System
    • Fast boot, container management, good kernel
  • Containers
    • Docker
    • Nice way of specifying everything
    • Pros – portable, easy to create, fast boot
    • Cons – host centric, no reporting
    • Wins – BYOP (each container brings all its dependencies), standard way to make containers, big ecosystem
  • Kubernetes
    • system for managing containerized apps across multiple hosts
    • declarative model
    • open source by google
    • pod + service + label + replication controller
    • cluster = N*nodes + master(s) + etcd
    • Wins: Runtime and operation management + management of related containers as a unit, container communication, available, scalable, automated, across multiple hosts
  • Rebuilding Openshift
    • Kubernetes provides container runtime
    • Openshift provides devops and team environment
  • Concepts
    • application = multiple pods linked together (front + back + db), managed as a unit, scaled independently
    • config
    • template
    • build config = source + build -> image
    • deployment = image and settings for it
  • This is OpenShift v3 – things have been moving very fast so some docs are out of date
  • Slides http://containers.codemiller.com

Linux.conf.au 2015 – Day 4 – Session 2

Tunnels and Bridges: A drive through OpenStack Networking – Mark McClain

  • Challenges with the cloud
    • High density multi-tenancy
    • On demand provisioning
    • Need to place / move workloads
  • SDN , L2 fabric, network virtualisation Overlay tunneling
  • The Basics
    • The user sees the API, doesn’t matter too much what is behind
    • Neutron = Virtual subnet + L2 virtual network + virtual port
    • Nova = Server + interface on the server
  • Design Goals
    • Unified API
    • Small Core. Networks + Subnets + Ports
    • Pluggable open architecture
  • Features
    • Overlapping IPs
    • Configuration DHCP/Metadata
    • Floating IPs
    • Security Groups ( Like AWS style groups ) . Ingress/egress rules, IPv6 . VMs with multiple VIFS
  • Deployment
    • Database + Neutron Server + Message Queue
    • L2 Agent , L3 agent + DHCP Agent
  • Server
    • Core
    • Plugin types = proxy (proxy to backend) or direct control (logic inside plugin)
    • ML2 – Modular Layer 2 plugin
  • Plugin extensions
    • Add to REST API
    • dhcp, l3, quota, security group, metering, allowed addresses
  • L2 Agent
    • Runs on a hypervisor
    • Watch and notify when devices have been added/removed
  • L3 agent – static routing only for now
  • Load balancing as a service, based on haproxy
  • VPN as a service , based on openswan, replicates AWS VPC.
  • What is new in Juno?
    • IPv6
    • based on radvd
    • Advised to go dual-stack
  • Look ahead to Kilo
    • Paying down technical debt
    • IPv6 prefix delegation, metadata service
    • IPAM – hook into external systems
    • Facilitate dynamic routing
    • Enabling NFV Applications
  • See Cloud Administrators Guide
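
As a rough sketch of the small core API described above (networks + subnets + ports), creating a tenant network with the 2015-era python-neutronclient might look like this; the names and the net-id are placeholders:

$ neutron net-create private
$ neutron subnet-create private 10.0.0.0/24 --name private-subnet
$ neutron port-create private
$ nova boot --image cirros --flavor m1.tiny --nic net-id=<NET_ID> testvm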

 

Crypto Won’t Save You Either – Peter Gutmann

  • US Govt has capabilities against common encryption protocols
  • BULLRUN
  • Example Games consoles
    • Signed executables
    • encrypted storage
    • Full media and memory encryption
    • All of these have been hacked
  • Example – Replaced signature checking code
  • Example – Hacked “secure” kernel to attack the application code
  • Example – Modify firmware to load over the checking code
  • Example – Recover key from firmware image
  • Example – Spoof on-air update
  • LOTS of examples
  • Nobody noticed a bunch of DKIM keys were bad, because all attackers had bypassed the crypto rather than trying to beat it
  • No. of times crypto broken: 0, bypassed: all the rest
  • National Security Letters – The Legalised form of rubber-hose cryptanalysis
  • Any well-designed crypto is NSA-proof
  • The security holes are sitting right next to the crypto

 

Linux.conf.au 2015 – Day 4 – Session 1

8 writers in under 8 months: from zero to a docs team in no time flat – Lana Brindley

  • Co Presenting with Alexandra Settle
  • 8 months ago only 1 documentation person at Rackspace
  • Hired a couple people
  • Horrible documentation suite
  • Hired some more
  • 4 in Australia, 4 in the US
  • Building a team fast without a terrible culture
    • Management by MEME – everybody had a meme created for them when they started
    • Not all work and No play. But we still get a lot of work done
    • Use tech to overcome geography
    • Treat people as humans not robots
    • Always stay flexible. Couch time, Gym time
  • Finding the right people
    • Work your network; the job is probably not going to be advertised on LinkedIn, which is bad for diversity
    • Find great people, and work out how to hire them
    • If you do want a job, network
  • Toolchains and Systems
    • Have a vision and work towards it
    • acknowledge imperfection. If you can’t fix, ack and just move forward anyway
  • You can’t maintain crazy growth forever. You have to level off.
  • Pair US person with AU person for projects
  • Writers should attend the Docs summit and are encouraged to attend at least one OpenStack summit

 

January 14, 2015

Linux.conf.au 2015 – Day 4 – Keynotes

Cooper Lees – Facebook

  • Open Source at facebook
  • Increase in pull requests, not just pushing out stuff or throwing over the wall anymore
  • Focussing on full life-cycle of opensource
  • Big Projects: react , hhvm , asyncdisplaykit , presto
  • Working on other projects and sending to upstream
  • code.facebook.com  github.com/facebook
  • Network Switches and Open Compute
    • Datacentre in NZ using open compute designs
  • Open source Switch
    • Top of rack switch
    • Want to be the open compute of network switches
    • Installer, OS, API to talk to asic that runs ports
    • Switches = Servers. running chef
  • Wedge
    • 16-32 of 40GE ports
    • Internal facebook design
    • 1st building block for disaggregated switching technology
    • Contributed to OCP project
    • Micro Server + Switchports

Carol Smith – Google

  • Works in Google Open Source office
  • Google Summer of code
    • Real world experience
    • Contacts and references
  • 11th year of the program
  • 8600 participated over last 10 years
  • Not enough people in office to do southern hemisphere programme. There is “Google code-in” though

Mark McLoughlin – Red Hat

  • Open Source and the datacenter
  • iaas, paas, microservices, etc
  • The big guys are leading (amazon, google). They are building on open source
  • Telcos
    • Squeezed and scrambling
    • Not so “special” anymore
    • Need to be agile and responsive
    • Telecom datacentre – filled with big, expensive, proprietary boxes
    • opposite of agile
  • OPNFV reference architecture
  • OpenStack, Open vswitch, etc
  • Why Open Source? – collaboration and coopetition , diversity drives innovation , sustainability

 

There was a Q&A. Mostly questions about diversity at the companies and grumps about having to move to the US/Sydney for people to work for them.

Linux.conf.au – Day 3 – Lightning talks

 

  • Clinton Roy + Tom Eastman – Python Conference Australia 2015 + Kiwi PyCon 2015
    • Brisbane , late July 2015
    • Similar Structure to LCA
    • Christchurch – Septemberish
    • kiwi.pycon.org
  • Daniel Bryan – Comms for Camps
    • Detention camps for boat people in Australia
    • Please contact if you can offer technical help
  • Phil Ingram – Beernomics
    • Doing stuff for people in return for beer
    • Windows reinstall = a Keg
    • Beercoin
  • Patrick Shuff – Open sourcing proxygen
    • C++ http framework. Built own webserver
    • Features they need, monitoring, fast, easy to add new features
    • github -> /facebook/proxygen
  • Nicolás Erdödy – Multicore World 2015 & the SKA.
    • Multicore World – 17-18 Feb 2015 Wellington
  • Paul Foxworthy – Open Source Industry Australia (OSIA)
    • Industry Body
    • Govt will consult with industry bodies but won’t listen to individual companies
    • Please join
  • Francois Marier – apt-get remove –purge skype
    • Web RTC
    • Now usable to replace skype
    • Works in firefox and chrome. Click link, no account, video conversation
    • Firefox Hello
  • Tobin Harding – Central Coast LUG
    • Update on Central Coast of NSW LUG
    • About 6 people regularly
  • Mark Smith – Failing Gracefully At 10,000ft
    • Private pilot
    • Aircrafts have 400+ page handbooks
    • Things will fail…
    • Have procedures…
    • Before the engine is on fire
    • test
    • The most important task is to fly the plane
  • Tim Serong – A very short song about memory management
    • 1 verse song
  • Angela Brett – Working at CERN and why you should do it
    • Really Really awesome
    • Basic I applied, lots of fellowship
    • Meet someone famous
    • Lectures online from famous people
  • Donna Benjamin – The D8 Chook Raffle
    • $125k fund to get Drupal8 out
    • Raffle. google it
  • Matthew Cengia/maia sauren – What is the Open Knowledge Foundation?
    • au.okfn.org
    • Open govt / data / tech / journalism / etc
    • govHack
    • Open Knowledge Brisbane Meetup Govt
  • Florian Forster – noping
    • Pretty graphs and output on command line ping
    • http://noping.cc
  • Jan Schmidt – Supporting 3D movies in GStreamer
    • A brief overview of it all
  • Justin Clacherty ORP – An open hardware, open software router
    • PowerPC 1-2G RAM
    • Package based updates
    • Signed packages
    • ORP1.com

Linux.conf.au 2015 – Day 2 – Session 2 – Sysadmin Miniconf

Mass automatic roll out of Linux with Windows as a VM guest – Steven Sykes

  • Was late and missed the start of the talk

etcd: distributed locking and service discovery – Brandon Philips

  • /etc distributed
  • open source, failure tolerant, durable, watchable, exposed via http, runtime configurable
  • API – get/put/del  basics plus some extras
  • Applications
    • Locksmith, distributed locks used when machines update
    • Vulcan http load balancer
  • Leader Election
    • TTL and atomic operations
    • Magical stuff explained faster than I can type it.
    • Just one leader cluster-wide
  • Aims for consistency ahead of raw performance
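
A minimal sketch of the HTTP API and the TTL-based locking described above, against the etcd v2 keys API (4001 was the default client port at the time; the key names and values are placeholders):

$ curl -L http://127.0.0.1:4001/v2/keys/message -XPUT -d value="hello"   # put
$ curl -L http://127.0.0.1:4001/v2/keys/message                          # get
$ curl -L http://127.0.0.1:4001/v2/keys/message -XDELETE                 # del
# leader election: atomically create a key only if it does not already exist, with a TTL
$ curl -L "http://127.0.0.1:4001/v2/keys/leader?prevExist=false" -XPUT -d value="node1" -d ttl=30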

 

Linux at the University – Randy Appleton

  • No numbers on how many students use Linux
  • Peninsula Michigan
  • 3 schools
  • Michigan Tech
    • research, 7k students, 200 CS students, Sysadmin majors in biz school
    • Linux used in Sysadmin courses, one of two main subjects
    • Research uses Linux “a lot”
    • Inactive LUG
    • Scripting languages. Python, perl etc
  • Northern Michigan
    • 9k students, 140 CS Majors
    • Growing CIS program
    • No Phd Programs
    • Required for sophomore and senior network programming course
    • Optional Linux sysadmin course
    • Inactive LUG
    • Sysadmin course: One teacher, app of the week (Apache, nfs, email ), shell scripting at end, big project at the end
    • No problem picking distributions, no problem picking topics, huge problem with disparate incoming knowledge
    • Kernel hacking. Difficult to do, difficult to teach, best students do great. Hard to teach the others
  • Lake Superior State
    • 2600 students
    • 70 CS Majors
    • One professor teaches Sysadmin and PHP/MySQL
    • No LUG
    • Not a lot of research
  • What is missing
    • Big power Universities
    • High Schools – None really
    • Community college – None really
  • Usage for projects
    • Sometimes, not for video games
  • Usage for infrastructure
    • Web sites, ALL
    • Beowulf Clusters
    • Databases – Mostly
  • Obstacles
    • Not in High Schools
    • Not on laptops, not supported by Uni
    • Need to attract liberal studies students
    • Is Sysadmin a core concept – not academic enough
  • What would make it better
    • Servers but not desktops
    • Not a edu distribution
    • Easier than Eclipse, better than Visual Studio

Untangling the strings: Scaling Puppet with inotify – Steven McDonald

  • Around 1000 nodes at site
  • Lots of small changes, specific to one node that we want to happen quickly
  • Historically restarting the puppet master after each update
  • Problem is the master gets slow as you scale up
  • 1300 manifests, takes at least a minute to read them at each startup
  • Puppet internal caching very coarse, per environment basis (and they have only one prod one)
  • Multiple environments doesn’t work well at site
  • Ideas – tell puppet exactly what files have changed with each rollout (via git, inotify). But puppet doesn’t support this
  • I missed the explanation of exactly how puppet parses the change. I think it is “import”, which is getting removed in the future
  • Inotify seemed to be more portable and simpler
  • Speed up of up to 5 minutes for nodes with complex catalogs, 70 seconds off average agent run
  • implementation doesn’t support the future parser; re-opening the class in a separate file is not supported
  • Available on github. Doesn’t work with current ruby-inotify ( in current master branch )
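
The site's actual tooling isn't shown here, but the underlying idea of watching the manifest tree with plain inotifywait looks roughly like this (the paths are assumptions):

$ inotifywait -m -r -e modify,create,delete,move /etc/puppet/manifests /etc/puppet/modules   # stream filesystem change events as manifests are rolled out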

 

 

Linux.conf.au – Day 2 – Session 1 – Sysadmin Miniconf

Configuration Management – A love Story – Javier Turegano

  • June 2008 – Devs want to deploy fast
  • June 2009 – git -> jenkins -> Puppet master
  • But things got pretty complicated and hard to maintain
  • Removed puppet master, ran puppet in noop mode, but it only happened now and then; lots of changes but a couple of errors
  • Now doing manual changes
  • June 2010 – Things turned into a mess.
  • June 2011 – Devs want prod-like development
  • Cloud! Tooling! Chef! – each dev has their own environment
  • June 2012 – dev environments for all working in ec2
  • dev no longer prod-like. cloud vs datacentre, puppet vs chef , debian vs centos, etc
  • June 2013 – More into cloud, teams re-arranged
  • Build EC2 images and deploy out of jenkins, either as AMI or as rpm
  • Each team fairly separate, doing things different ways. Had guilds to share skills and procedures and experience
  • June 2014 – Cloudformation, Ansible used by some groups, random

Healthy Operations – Phil Ingram

  • Acquia – Enterprise Drupal as a service. GovCMS Australian Federal Government. 1/4 are remote
  • Went from working in office to working from home
  • Every week had phone call with boss
  • Talk about things other than work, ask how people are going, talk to people.
  • Not sleeping, waking up at night, not exercising, quick to anger and negative thinking, inability to concentrate
  • Hadn’t taken more than 1 week off work, let exercise lapse, hobbies were computer stuff
  • In general being in Ops there is not as much of an option to take time off. Things stay broken until fixed
  • Unable to learn via Osmosis, Timing of handing over between shifts
  • People do not understand that computers are run by people not robots
  • Methods: Turn work off at the end of the day, Rubber Ducking, exercise

Developments in PCP (Performance Co-Pilot) : Nathan Scott

  • See my slides from yesterday for intro to PCP
  • Stuff in last 12 months
    • Included and supported in RHEL 6.6 and RHEL 7
    • Regular stable releases
    • Better out of the box experience
    • Tackling some long-standing problems
  • JSON access – pmwebd , interactive web charts ( Graphite, grafana )
  • zero-install look-inside containers
  • Docker support but written to allow use by others
  • Collectors
    • Lots of new kernel metrics additions
    • New applications from web devs (memcached, DNS, web )
    • DB server additions
    • Python PMDA interfaces
  • Monitor work
    • Reporting tools
    • Web tools, GUIs
  • Also improving ease of setup
  • Getting historical data from sar, iostat
  • www.pcp.io
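
A couple of quick PCP commands as a taste of the above (which metric names exist depends on the PMDAs installed; this is a sketch, not from the talk):

$ pminfo | head                  # list available metric names
$ pmval -s 5 kernel.all.load     # sample the load-average metric 5 times
$ pmchart                        # interactive charting GUI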

Security options for container implementations – Jay Coles

  • What doesn’t work: rlimits, quotas, blacklisting via ACLs
  • Capabilities: Big list that containers probably shouldn’t have
  • Cgroups – Accounting, Limiting resource usage, tracking of processes, preventing/allowing device access
  • AppArmor vs SELinux – use at least one; SELinux is a little more featured

Linux.conf.au 2015 – Day 3 – Session 2

EQNZ – crisis response, open source style – Brenda Wallace

  • Started with a Trigger warning and “fucker”
  • First thing posted – “I am okay” , one tweet, one facebook
  • State of Social Media
    • Social media not as common, SMS king, not many smartphones
    • Google Buzz, twitter, Facebook
    • Multiple hashtags
  • Questions people asked on social media
  • Official info was under strain, websites down due to bad generators
  • Crisis Commons
  • Skype
    • Free
    • Multi-platform
    • Txt based
    • Battery Drain very bad
    • Bad internet in Chch made it hard to use, no mobile, message replay for minutes on join
  • Things pop up within an hour
    • Pirate Pad
    • Couch apps
    • Wikis
    • WordPress installs
  • Short code 4000 for non-urgent help live by 5pm
    • Volunteers processing the queue
  • All telcos agree to coordinate their social media effort
  • Civil defence didn’t have a site ready and refused offers, so people decided to do it independently
  • Ushahidi instance setup
    • Google setup people finder app
    • Moved into an EC2 cluster
    • hackfest, including added mobile
    • Some other Ushahidis; in the end newspaper sites embedded it
  • Council
    • chc council wordpress for info
    • Very slow and bad UI
    • Hit very hard, old information from the previous earthquake
    • staff under extreme pressure
  • Civil Defence
    • Official info only
    • Falls over
    • Caught by DDOS against another govt site
  • Our reliability
    • Never went down
    • contacted and reassured some authorities
    • After 24h . 78k page impressions
  • Skype
    • 100+ chatting. limitations
    • IRC used by some but not common enough for many
    • Gap for something common. cross platform, easy to use
  • Hashtag
    • twitter to SMS notifications to add stuff to website
  • Maps were a new thing
    • None of the authorities knew them
  • Council and DHB websites did not work on mobile and were not updating
  • Government
    • Govt officers didn’t talk – except the NZ Geospatial office
    • Meeting that some people attended
  • Wrap up after 3 weeks
    • Redirected website
    • Anonymous copy of database
  • Pragmatic
    • Used closed source where we had to (eg skype)
    • But easier with open source, could quickly modify it
    • Closed source people could install webserver, use git, etc. Hard to use contributions
  • Burned Bridges
    • Better jobs with Gov agencies
  • These days
    • Tablets
    • Would use EC2 again
    • phones have low power mode
    • more open street maps

 

collectd in dynamic environments – Florian Forster

  • Started collectd in 2005
  • Dynamic environments – Number and location of machines change frequently – VM or job management system
  • NOTE: I use collectd so my notes are a little sparse here cause I knew most of it already
  • Collects timeseries data, does one thing well. collectd.org
  • agent runs on each host; plugins mostly in C for lots of things, or an exec plugin to run arbitrary scripts (sketch after this list).
  • Read Plugins to get metrics from system metrics, applications, other weird stuff
  • Write plugins – Graphite, RRD, Riemann, MongoDB
  • Virtual machine Metrics
    • libvirt plugin
    • Various metrics, cpu, memory, swap, disk ops/bytes, network
    • GenericJMX plugin – connects to JVM. memory and garbage collection, threads
  • Network plugin
    • sends and receives metric
    • Efficient binary protocol: 50-100 byte UDP multicast/unicast packets
    • crypto available
    • send, receive, forward packets
  • Aggregation
    • Often more useful for alerting
  • Aggregation plugin
    • Subscribes to metric
    • aggregates and forwards
    • Limitation: no state, eg median and mean are missing
    • only metrics with one value
    • can be aggregated at any level
    • eg instead of each CPU then total usage of all your CPUS
  • Riemann
    • Lots of filters and functions
    • can aggregate, many options
  • Bosun
    • Monitoring and alert language
  • Storage
    • Graphite
    • OpenTSDB based on hadoop
    • InfluxDB – understands the collectd protocol natively (and graphite).
    • Vaultaire ( no collectd integration but… )
  • New dashboard – facette.io
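
As referenced above, a tiny exec-plugin style script just prints PUTVAL lines that collectd's exec plugin reads; this is a sketch (the thermal-zone path and the "cpu_temp" type instance are assumptions):

#!/bin/sh
# collectd exports these for exec-plugin scripts; fall back to sane defaults when run by hand.
HOSTNAME="${COLLECTD_HOSTNAME:-$(hostname -f)}"
INTERVAL="${COLLECTD_INTERVAL:-10}"
while true; do
  TEMP=$(cat /sys/class/thermal/thermal_zone0/temp 2>/dev/null || echo 0)
  # identifier is host/plugin-instance/type-instance; "N:" means "timestamp = now"
  echo "PUTVAL \"$HOSTNAME/exec-example/gauge-cpu_temp\" interval=$INTERVAL N:$TEMP"
  sleep "$INTERVAL"
done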

Linux.conf.au 2015 – Day 3 – Keynote

Bob Young

  • Warns that some stories might not be 100% true
  • “Liked about early Linux – Nobody was very nice to each other but everybody was very respectful of the Intel Microprocessor”
  • CEO of Redhat 1992 – 2000
  • Various stories, hard to take notes from
  • One person said they walked out of the Keynote when they heard the quote “it was a complete meritocracy” re the early days of Linux.
  • Others didn’t like other parts of the talk. General tone and some statements similar to the one above.
  • “SuSE User Loser” provoked laughs and a SUSE lizard being thrown at the speaker
  • Reasons the publishing industry rejects books: 1. no good; 2. market not big enough; 3. They already publish one on the subject.

Linux.conf.au 2015 – Day 3 – Session 1

CoreOS: an introduction – Brandon Philips

  • Reference to the “Datacenter as a Computer” paper
  • Intro to containers
  • cAdvisor – API of what resources are used by a container
  • Rocket
    • Multiple implementations of container spec , rocket is just one implementation
  • Operating system is able to make less promises to applications
  • Kernel API is really stable
  • Making updates easy
    • Based on ChromeOS
    • Update one partition with OS version. Then flip over to that.
    • Keep another partition/version ready to fail back if needed
    • Safer to update the OS separately from the app
    • Just around 100MB in size. Kernel, very base OS, systemd
  • etcd
    • Key value store over http (see my notes from yesterday)
    • multiple, leader election etc
    • Individual server less critical since data across multiple hosts
  • Scheduling stuff to servers
    • fleet – very simple, kinda systemd looking
    • fleetctl start foo.service   – sends it off to some machine
    • mesos, kubernetes, swarm are other alternative schedulers
  • Co-ordination
    • locksmith
  • Service discover
    • skydns, discoverd, confd
    • Export location of application to DNS or http API
    • Need proxies to forward request to the right place (for apps not able to query service discovery directly)
  • It is all pretty much a new way of thinking about problems
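
A small fleet sketch of what "sends it off to some machine" looks like in practice (foo.service is a placeholder unit file; illustrative only):

$ fleetctl submit foo.service    # register the unit with the cluster
$ fleetctl start foo.service     # schedule it onto some machine
$ fleetctl list-units            # see where everything is running
$ fleetctl status foo.service    # like systemctl status, run against whichever host got the unit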

 

Why you should consider using btrfs, real COW snapshots and file level incremental server OS upgrades like Google does. – Marc Merlin

  • Worked at netapp, hooked on snapshots, lvm snapshots never worked too well , also lvm partitions not too good
  • Switched laptop to btrfs 3 years ago
  • Why you should consider btrfs
    • Copy on Write
    • Snapshots
    • cp -reflink=always
    • metadata is redundant and checksummed, data checksummed too
    • btrfs underlying filesystem [for now]
    • RAID 0, 1, 5, 6 built in
    • file compression is also built in
    • online background scrub (partial fsck)
    • block level filesystem diff backups(instead of a slow rsync)
    • convert directly from ext3 (fails sometimes)
  • Why not use ZFS instead
    • ZFS more mature than btrfs
    • Same features plus more
    • Bad license. Oracle not interested in relicensing. Either hard to do or they prefer btrfs
    • Netapp sued sun for infringing patents with ZFS. Might be a factor
    • Hard to ship a project with it due to license conditions
  • Is it safe now?
    • Use new kernels. 3.14.x works okay
    • You have to manually balance sometimes
    • snapshots, raid 0 , raid 1 mostly stable
    • Send/receive mostly works reliably
  • Missing
    • btrfs incomplete, but mostly not needed
    • file encryption not supported yet
    • dedup experimental
  • Who use it
    • openSUSE 13.2 ships with it by default
  • File System recovery
    • Good entry on the btrfs wiki
    • btrfs scrub, run weekly
    • Plan for recovery though, keep backups, not as mature as ext4/ext3 yet, prepare beforehand
    • btrfs-tools are in the Ubuntu initrd
  • Encryption
    • Recommends setup encryption on md raid device if using raid
  • Partitions
    • Not needed anymore
    • Just create storage pools, under them create sub volumes which can be mounted
    • boot: root=/dev/sda1  rootflags=subvol=root
  • Snapshots
    • Works using subvolumes
    • Read only or read-write
    • noatime is strongly recommended
    • Can sneakily fill up your disk; “btrfs fi show” tells you the real situation. Hard to tell which snapshots to delete to reclaim space
  • Compression
    • Mount option
    • lzo fast, zlib slower but better
    • if change option then files changed from then on use new option
  • Turn off COW for big files with lots of random writes in the middle, eg DBs and virtual disk images
  • Send/receive
    • rsync very slow to scan many files before copy
    • initial copy, then only the diffs. diff is computed instantly
    • backup up ssd to hard drive hourly. very fast
  • You can make the metadata of the file system a different raid level than the data
  • Talk slides here. Lots of command examples
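
A hedged sketch of the subvolume / snapshot / send workflow covered above (device names, mount points and snapshot names are placeholders, not taken from the talk):

$ mkfs.btrfs /dev/sdb
$ mount -o compress=lzo,noatime /dev/sdb /mnt/pool
$ btrfs subvolume create /mnt/pool/root
$ btrfs subvolume snapshot -r /mnt/pool/root /mnt/pool/root.20150113   # read-only snapshot
$ btrfs fi show                                                        # real space usage
$ btrfs scrub start /mnt/pool                                          # background scrub
$ btrfs send /mnt/pool/root.20150113 | btrfs receive /backup           # block-level backup instead of rsync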

 

January 13, 2015

Linux.conf.au 2015 – Day 2 – Session 3 – Sysadmin

Alerting Husbandry – Julien Goodwin

  • Obsolete alerts
    • New staff members won’t have context to know what is obsolete and should have been removed (or ignored)
  • Unactionable alerts – It is managed by another team but thought you’d like to be woken up
  • SLA Alerts – can I do something about that?
  • Bad thresholds ( server with 32 cores had load of 4 , that is not load ), Disk space alerts either too much or not enough margin
  • Thresholds only redone after complete monitoring rebuilds
  • Hair trigger alerts ( once at 51ms not 50ms )
  • Not impacting redundancy ( only one of 8 web servers is down )
  • Spamming alerts, thing is down for the 2925379857th time. Even if important you’ve stopped caring
  • Alerts for something nobody cares about, eg test servers
  • Most of earlier items end up in “don’t care” bucket
  • Emails bad, within a few weeks the entire team will have a filter to ignore it.
  • Undocumented alerts – If it is broken, what am I supposed to do about it?
  • Document actions to take in  “playbook”
  • Alert acceptance practice, only oncallers should be accepting alerts
  • Need a way to silence it
  • Production by Fiat

 

 

Managing microservices effectively – Daniel Hall

  • Step one – write your own apps
  • keep state outside apps
  • not nanoservices, not milliservices
  • Each should be replaceable, independently deployable, have a single capability
  • think about dependencies, especially circular ones
  • Packaging
    • small
    • multiple versions on same machine
    • in dev and prod
    • maybe use docker, have local registry
    • Small performance hit compared to VMs
    • Docker is a little immature
  • Step 3 deployment
    • Fast in and out
    • Minimal human interaction
    • Recovery from failures
    • Less overhead requires less overhead
    • We use Mesos and Marathon
    • Marathon handles switches from old app to new, task failure and recover
    •  Early on the Hype Cycle
  • Extra Credit: Scheduling
    • Chronos within Mesos
    • A bit newish

 

Corralling logs with ELK – Mark Walkom

  • You don’t want to be your boss’s grep
  • Cluster Elasticsearch, single master at any point
  • Sizing best to determine with a single machine, see how much it can handle. Keep Java heap under 31GB
  • Lots of plugins and clients
  • APIs return json. ?pretty makes it look nicer. The “_cat/*” API is more command-line friendly (see the sketch after this list)
  • new nodes scale; it auto-balances and grows automatically
  • Logstash. lots of filters, handles just about any format, easy to setup.
  • Kibana – graphical front end for Elasticsearch
  • Curator, logstash-forwarder, grokdebugger
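
The ?pretty and _cat APIs mentioned above, against a default local Elasticsearch on port 9200 (a sketch, not from the talk):

$ curl 'localhost:9200/_cluster/health?pretty'   # cluster status as readable JSON
$ curl 'localhost:9200/_cat/indices?v'           # command-line style index listing
$ curl 'localhost:9200/_cat/nodes?v'             # node listing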

FAI — the universal deployment tool – Thomas Lange

  • From power off to applications running
  • It is all about installing software packages
  • Central administration and control
  • no master or golden image
  • can be expanded by hooks
  • plan your installation and FAI installs the plan
  • Boot up diskless client via PXE/tftp
  • creates partitions, file systems, installs, reboots
  • groups hosts by classes, multiple classes per host etc
  • Classes can be executables writing to standard output, can be in shell, pass variables
  • partitioning, can handle LVM, RAID
  • Projected started in 1999
  • Supports debian based distributions including ubuntu
  • Supports bare metal, VM, chroot, LiveCD, Golden image

 

Documentation made complicated – Eric Burgueno

  • Incomplete, out of date, inconsistent
  • Tools – Word, LibreOffice  -> Sharepoint
  • Sharepoint = lets put this stuff over here so nobody will read it ever again
  • txt , markdown, html. Need to track changes
  • Files can be put in version control.
  • Mediawiki
  • Wiki – uncontrolled proliferation of pages, duplicate pages
  • Why can’t documentation be mixed in with the configuration management
  • Documentation snippets
    • Same everywhere (mostly)
    • Reusable
  • Transclusion in mediawiki (include one page inside another)
  • Modern version of mediawiki have parser functions. display different content depending on a condition
  • awesomewiki.co

Systemd Notes

A few months ago I gave a lecture about systemd for the Linux Users of Victoria. Here are some of my notes reformatted as a blog post:

Scripts in /etc/init.d can still be used, they work the same way as they do under sysvinit for the user. You type the same commands to start and stop daemons.

To get a result similar to changing runlevel use the “systemctl isolate” command. Runlevels were never really supported in Debian (unlike Red Hat where they were used for starting and stopping the X server) so for Debian users there’s no change here.

The command systemctl with no params shows a list of loaded services and highlights failed units.

The command “journalctl -u UNIT-PATTERN” shows journal entries for the unit(s) in question. The pattern uses wildcards not regexs.

The systemd journal includes the stdout and stderr of all daemons. This solves the problem of daemons that don’t log all errors to syslog and leave the sysadmin wondering why they don’t work.

The command “systemctl status UNIT” gives the status and last log entries for the unit in question.

A program can use ioctl(fd, TIOCSTI, …) to push characters into a tty buffer. If the sysadmin runs an untrusted program with the same controlling tty then it can cause the sysadmin shell to run hostile commands. The system call setsid() to create a new terminal session is one solution but managing which daemons can be started with it is difficult. The way that systemd manages start/stop of all daemons solves this. I am glad to be rid of the run_init program we used to use on SE Linux systems to deal with this.

Systemd has a mechanism to ask for passwords for SSL keys and encrypted filesystems etc. There have been problems with that in the past but I think they are all fixed now. While there is some difficulty during development the end result of having one consistent way of managing this will be better than having multiple daemons doing it in different ways.

The commands “systemctl enable” and “systemctl disable” enable/disable daemon start at boot which is easier than the SysVinit alternative of update-rc.d in Debian.

Systemd has built in seat management, which is not more complex than consolekit which it replaces. Consolekit was installed automatically without controversy so I don’t think there should be controversy about systemd replacing consolekit.

Systemd improves performance by parallel start and autofs style fsck.

The command systemd-cgtop shows resource use for cgroups it creates.

The command “systemd-analyze blame” shows what delayed the boot process and “systemd-analyze critical-chain” shows the critical path in boot delays.
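
A quick session pulling the commands above together (the unit name is just an example):

$ systemctl                        # list loaded units, with failed units highlighted
$ systemctl status ssh.service     # status plus recent journal entries for the unit
$ journalctl -u "ssh*"             # journal entries, wildcard unit pattern
$ systemctl enable ssh.service     # start the daemon at boot
$ systemd-analyze blame
$ systemd-analyze critical-chain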

Systemd also has security features such as service private /tmp and restricting service access to directory trees.

Conclusion

For basic use things just work, you don’t need to learn anything new to use systemd.

It provides significant benefits for boot speed and potentially security.

It doesn’t seem more complex than other alternative solutions to the same problems.

https://wiki.debian.org/systemd

http://freedesktop.org/wiki/Software/systemd/Optimizations/

http://0pointer.de/blog/projects/security.html

January 12, 2015

Linux.conf.au – Day 2 – Keynote by Eben Moglen

Last spoke 10 years ago in Canberra Linux.conf.au

Things have improved in the last ten years

  • $10s of billions of value have been lost in software patent war
  • But things have been so bad that some help was acquired, so worst laws have been pushed back  a little
  • “Fear of God” in industry was enough to push open Patent pools
  • Judges determined that Patent law was getting pathological, 3 wins in Supreme court
  • Likelihood worst patent laws will be applied against free software devs has decreased
  • “The Nature of the problem has altered because the world has altered”

The Next 10 years

  • Most important Patent system will be China’s
  • Lack of rule of law in China will cause problems in environment of patents
  • Too risky for somebody to try and stop a free software project. We have “our own baseball bat” to spring back at them

The last 10 years

  • Changes in Society more important changes in software
  • 21st century vs 20th century social organisations
    • Less need for hierarchy and secrecy
    • Transparency, Participation, non-hierarchical interaction
  • OS invented that organisation structure
  • Technology we made has taken over the creation of software
  • “Where is BitKeeper now?” – Eben Moglen
  • Even Microsoft recognises that our way of software making won
  • Long term, the organisation structure change everywhere will be more important than just its application in software
  • If there has been good news about politics = “we did it”, bad news = “we tried”

Our common Values

  • “Bridge entire environment between vi and emacs”

Snowden

  • Without PGP and free software then things could have been worse
  • The world would be a far more despotic place if PGP was driven underground back in 1993. Imagine today’s Net without HTTPS or SSH!
  • “We now live in the world we are afraid of”
  • “What stands between them and us is our inventions”
  • “Freedom itself depends on how we make use of the technologies we are creating.” – Eben Moglen
  • “You can’t trust what you can’t read”
  • Big power in the world is committed against the first law of robotics; they want technology to work for them.
  • From a guy on Twitter – “You can’t trust what you can’t read.” True, but if OpenSSL teaches us anything, you can’t necessarily trust what you can read either
  • Attitudes among under-18s are a lot more positive towards him than among those who are older (not just because he looks like Harry Potter)
  • The GNU Project is 30 years old, almost the same age as Snowden

Opportunity

  • We can’t control the net, but we have the opportunity to prevent others from controlling it
  • Opportunity to prevent failure of freedom
  • Society is changing, demographics under control
  • But 1.6 billion people live in China, America is committed to spying, consumer companies are committed to collecting consumer information
  • Collecting everything is not the way we want the net to work
  • We are playing for keeps now.

linux.conf.au 2015

I'm at linux.conf.au this week learning lots. I'll have my Begg Digital hat on (when outside).

Linux.conf.au 2015 – Day 1 – Session 2 – Containers

AWS OpsWorks Orchestration War Stories – Andrew Boag

  • Autoscaling too slow since running build-from-scratch every time
  • Communications dependencies
  • Full stack rebuild in 20-40 minutes to use data currently in production
  • A bit longer in a different region
  • Great for load testing
  • If we were doing it again
    • AMI-based would be better
    • OpsWorks is not suitable for all AWS stacks
    • Golden master is more flexible
  • Auto-Scaling
    • Not every AMI instance is Good to Go upon provisioning
    • Not a magic bullet, you can’t broadly under-provision
    • Needs to be thoroughly load-tested
  • Tips
    • Dual factor authentication
    • No single person / credentials should be able to delete all cloud-hosted copies of your data
  • Looked at Cloudformation at start, seemed to be more work
  • Fallen out of love with OpsWorks
  • Nice distinction by Andrew Boag: he doesn’t talk about “lock-in” to cloud providers, but about “cost to exit”.   – Quote from Paul

Slim Application Containers from Source – Sven Dowideit

  • Choose a base image and make a local version (so all your stuff uses the same one)
  • I’d pick debian (a little smaller) unless you can make do with busybox or scratch
  • Do I need these files? (check through the Dockerfile) e.g. remove doc files, manpages, timezones
  • Then build, export and import, and it comes out all clean with just one layer (see the sketch at the end of this list).
  • If all your images use the same base, it is only on the disk once
  • Use related images with all your tools, related to deployment image but with the extra dev, debug, network tools
  • Version the dev images
  • Minimise to 2 layers
    • look at docker-squash
    • Get rid of all the source code from your image; just end up with what is needed, not junk hidden in layers
  • Static micro-container nginx
    • Build as container
    • export as tar , reimport
    • It crashes :(
    • Use inotifywait to find what extra files (like shared libraries) it needs
    • Create new tarball with those extra files and “docker import” again
    • Just 21MB instead of 1.4GB with all the build fragments and random system stuff
    • Use docker build as last stage rather than docker import and you can run nginx from docker command line
    • Make 2 tar files, one for each image, one in libs/etc, second is nginx
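
A rough sketch of the build/export/import flattening flow described in these notes (the image and container names are hypothetical, and a Dockerfile is assumed to exist in the current directory):

$ docker build -t nginx-build .                          # full build, many layers
$ docker create --name tmp-nginx nginx-build             # create (but do not start) a container from the image
$ docker export tmp-nginx | docker import - nginx-slim   # flatten its filesystem into a single-layer image
$ docker rm tmp-nginx
$ docker images nginx-slim                               # compare the size against nginx-build

Note that docker import discards metadata such as the CMD and exposed ports, which is why the talk suggests a final docker build step on top of the flattened tarball if you want to run the result straight from the docker command line.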

Containers and PCP (Performance Co-Pilot) -  Nathan Scott

  • Been around for 20+ years, 11 years open source, Not a big mindshare
  • What is PCP?
    • Toolkit, System level analysis, live and historical, Extensible, distributed
    • pmcd daemon on each server, plus agents for various functions (a bit like the collectd model)
    • pmlogger, pmchart, pmie, etc. talk (pull or poll) to pmcd to get data
  • With Containers
    • Use --container= to grab info from inside a container/namespace (see the example after this list)
    • Lots of work still needed. Metrics inside containers limited compared to native OS
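
A hedged example of what that option looks like with a standard PCP client tool (the container name and metric here are illustrative only, and exact option support varies between PCP versions):

$ pminfo --container=webapp -f proc.nprocs    # fetch a metric as seen inside the "webapp" container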

The Challenges of Containerizing your Datacenter – Daniel Hall

  • Goals at LIFX
    • Apps all stateless, easy to dockerize
    • Using mesos, zookeeper, marathon, chronos
    • Databases and other stuff outside that cloud
  • Mesos slave launches docker containers
  • Docker Security
    • chroot < Docker < KVM
    • Running untrusted Docker containers is a BAD IDEA
    • Don’t run apps as root inside container
    • Use a recent kernel
    • Run as little as possible in container
    • Single static app if possible
    • Run SELinux on the host
  • Finding things
    • Lots of microservices, marathon/mesos moves things all over the place
    • Whole machines going up and down
    • Marathon comes with a tool that pushes its state into HAProxy, works fairly well; apps talk to localhost on each machine and haproxy forwards
    • Use custom script for this
  • Collecting Logs
    • Not a good solution
    • Can mount /dev/log, but don’t restart syslog
    • Mesos collects stdout/stderr, hard to work with and no timestamps
    • Centralized logs
    • rsyslog logs to 127.0.0.1 -> haproxy -> central machine
    • Sometimes needs to queue/drop if things take a little while to start
    • rsyslog -> logstash
    • elasticsearch on mesos
    • nginx tasks running kibana
  • Troubleshooting
    • Similar to the service discovery problem
    • Easier to get into a container than getting out
    • Find a container in marathon
    • Use docker exec to run a shell (see the example after this list), doesn’t work so well on really thin containers
    • So debugging tools can work from outside; pprof or jconsole can connect to an exposed port/pid of the container
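
A minimal example of the docker exec approach (the container name is hypothetical, and on very thin containers there may be no /bin/sh to run, which is the limitation noted above):

$ docker ps | grep myapp                     # find the container that Marathon/Mesos launched
$ docker exec -it <container-id> /bin/sh     # drop into a shell inside it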

Linux.conf.au 2015 – Day 1 – Session 1 – Containers

Clouds, Containers, and Orchestration Miniconf

Cloud Management and ManageIQ – John Mark Walker

  • Who needs management – Needs something to tie it all together
  • New Technology -> Adoption -> Proliferation -> chaos -> Control -> New Technology
  • Many technologies follow this, flies under the radar, becomes a problem to control, management tools created, management tools follow the same pattern
  • Large number of customers using hybrid cloud environment ( 70% )
  • Huge potential complexity, lots of requirements, multiple vendors/systems to interact with
  • ManageIQ
    • Many vendor managed open source products fail – open core, runt products
    • Better way – give more leeway to upstream developers
    • Article about taking it open source on opensource.com. It took around a year from when the decision was made
    • Lots of work to create a good open source project that will grow
    • Release named after Chess Grandmasters
    • Rails App

LXD: The Container-Based Hypervisor That Isn’t -  Tycho Andersen

  • Part of Openstack
  • Based on LXC , container based hypervisor
  • Secure by default: user namespaces, cgroups, Apparmor, etc
  • A REST API
  • A daemon that does hypervisor-y things
  • A framework for maintaining container based applications
  • It Isn’t
    • No network configuration
    • No storage management – But storage aware
    • Not an application container tool
    • handwavy difference between it and docker, I’m sure it makes sense to some people. Something about running an init/systemd rather than the app directly.
  • Features
    • Snapshotting – e.g. for something that is slow to start, snapshot it just after it starts and deploy it in that state
    • Injection – add files into the container for app to work on.
    • Migration – designed to go fairly fast with low downtime
  • Image
    • Public and private images
    • can be published
  • Roadmap
    • MVP 0.1 released late January 2015
    • container management only

 

Rocket and the App Container Spec – Brandon Philips

  • Single binary – rkt – runs everywhere, systemd not required
  • rkt fetch – downloads and discovers images ( can run as non-root user )
  • bash -> rkt -> application
  • upstart -> rkt -> application
  • rkt run coreos.com/etcd-v2.3.1
  • Multiple processes in a container are common. They can be run from the command line or specified in the JSON file of the spec.
  • Steps in launch
    • stage 0 – downloads images, checks it
    • Stage 1 – Exec as root, setup namespaces and cgroups, run systemd container
    • Stage 2 – runs actual app in container. Things like policy to restart the app
    • rocket-gc garbage collects stuff, runs periodically; no management daemon
  • App Container spec is work in progress
    • images, files, compressed, meta-data, dependencies on other images
    • runtime , restarts processes, run multiple processes, run extra procs under specified conditions
    • metadata server
    • Intended to be built with test suite to verify

January 11, 2015

January 10, 2015

DevOps Automation Services

Since we launched in 2014, we have assisted numerous companies, open source projects and individuals in learning, experimenting with and using the automation tools that nowadays define operations. Many things are changing in this area.

We have helped many people to achieve their automation goals, and we are happy to see how their operational costs are reduced and how productivity is increased.

Do you need help with DevOps and automation? Don't hesitate to contact us at sales@manageacloud.com. You can also find more information at https://manageacloud.com/operations

Stay tuned! Very soon we will release a new set of tools that will make your life in operations even easier.

January 09, 2015

The new citizenship: digital citizenship

Recently I was invited to give a TEDx talk at a Canberra event for women speakers. It was a good opportunity to have some fun with some ideas I’ve been playing with for a while around the concept of being a citizen in the era of the Internet, and what that means for individuals and traditional power structures in society, including government. A snipped transcript below. Enjoy and comments welcome :) I’ve put a few links that might be of interest throughout and the slides are in the video for reference.

Video is at http://www.youtube.com/embed/iqjM_HU0WSw

Digital Citizenship

I want to talk to you about digital citizenship and how, not only the geek will inherit the earth but, indeed, we already have. All the peoples just don’t know it yet.

Powerful individuals

We are in the most exciting of times. People are connected from birth and are engaged across the world. We are more powerful as individuals than ever before. Particularly in communities and societies like Australia, we have a population that has all of our basic needs taken care of. So we have got time to kill. And we’ve got resources. Time and resources give a greater opportunity for introspection, which has led, over the last hundred years in particular, to enormous progress. To the establishment of the concept of individual rights and strange ideas like the concept that animals might actually have feelings and perhaps shouldn’t be treated awfully or just as a food source.

We’ve had these huge, enormous revolutions and evolutions of thought and perspective for a long, long time but it’s been growing exponentially. It’s a combination of the growth in democracy, the rise of the concept of individual rights, and the concept of individuals being able to participate in the macro forces that shape their world.

But it’s also a combination of technology and the explosion in what an individual can achieve, both as an individual and en masse, collaborating dynamically across the globe. It’s the fact that many of us are kind of fat, content and happy and now wanting to make a bit of a difference, which is quite exciting. So what we’ve got is a massive and unprecedented distribution of power.

Distributed power

We’ve got the distribution of publishing. The ability to publish whatever you want. Whether you do it through formal mechanisms or anonymously. You can distribute to a global audience with fewer barriers to entry than ever before. We have the distribution of the ability to communicate with whomever you please. The ability to monitor, which has traditionally been a top down thing for ensuring laws are followed and taxes are paid. But now people can monitor sideways, they can monitor up. They can monitor their governments. They can monitor companies. There is the distribution of enforcement. This gets a little tricky because if anyone can enforce then anyone can enforce anything. And you start to get a little bit of active concern there, but it is an interesting time. Finally, with the advent of 3D printing starting to go mainstream, we’re seeing the massive distribution of property.

And if you think about these five concepts – publishing, communications, monitoring, enforcement and property – these five power bases have traditionally been centralised. We usually look at the industrial revolution and the broadcast age as two major periods in history but arguably they’re both actually part of the same era. Because both of them are about the centralised creation of stuff – whether it’s physical or information – by a small number of people who could afford to do so, and then distributed to the rest of the population.

The idea that anyone can create any of these things and distribute it to anyone else, or indeed for their own purposes is a whole new thing and very exciting. And what that means is that the relationship between people and governments and industry has changed quite fundamentally. Traditional institutions and bastions of any sort of power are struggling with this and are finding it rather scary but it is creating an imperative to change. It is also creating new questions about legitimacy and power relations between people, companies and governments.

Individuals however, are thriving in this environment. There’s always arguments about trolls and about whether the power’s being used trivially. The fact is the Internet isn’t all unicorns or all doom. It is something different, it is something exciting and it is something that is empowering people in a way that’s unprecedented and often unexpected.

The term singularity is one of those fluffy things that’s been touted around by futurists but it does have a fairly specific meaning which is kind of handy. The concept of the distance between things getting smaller. Whether that’s the distance between you and your publisher, you and your food, you and your network or you and your device. The concept of approaching the singularity is about reducing those distances between. Now, of course the internet has reduced the distance between people quite significantly and I put to you that we’re in a period of a “democratic singularity” because the distance between people and power has dramatically reduced.

People are in many ways now as powerful as a lot of the institutions which frame and shape their lives. So to paraphrase and slightly turn on its head the quote by William Gibson: the future is here and it is already widely distributed. So we’ve approached the democratic singularity and it’s starting to make democracy a lot more participatory, a lot more democratic.

Changing expectations

So, what does this mean in reality? What does this actually translate to for us as people, as a society, as a “global village”, to quote Marshall McLuhan? There are massive changing expectations of individuals. I see a lot of people focused on the shift in power from the West to the East. But I believe the more interesting shift is the shift in power from institutions to individuals.

That is the more fascinating shift not just because individuals have power but because it is changing our expectations as a society. And when you start to get a massive change of expectations across an entire community of people, that starts to change behaviours, change economics, change social patterns, change social norms.

What are those changing expectations? Well, the internet teaches us a lot of things. The foundation technical principles of the internet are effectively shaping the social characteristics of this new society. This distributed society or “Society 5″ if you will.

Some of the expectations are the ability to access what you want. The ability to talk to whom you want. The ability to cross reference. When I was a kid and you did an essay on anything you had to go look at Encyclopedia Britannica. It was a single source of truth. The concept that you could get multiple perspectives, some of which might be skewed by the way, but still the concept of getting the context of those different perspectives and a little comparison was hard and alien for the average person. Now you can often talk to someone who is there right now, let alone find myriad sources to help inform your view. You can get a point of comparison against traditionally official sources like a government source or media report. People online start to intuitively understand that the world’s actually a lot more gray than we are generally taught in school and such. Learning that the world is gray is great because you start to say, “you know what? You could be right and I could be right and that doesn’t make either perspective necessarily invalid, and that isn’t a terrible thing.” It doesn’t have to be mutually exclusive or a zero sum game, or a single view of history. We can both have a perspective and be mutually respectful in a lot of cases and actually have a more diverse and interesting world as a result.

Changing expectations are helping many people overcome barriers that traditionally stopped them from being socially successful: economically, reputationally, etc. People are more empowered to basically be a superhero which is kinda cool. Online communities can be one of the most exciting and powerful places to be because it starts to transcend limitations and make it possible for people to excel in a way that perhaps traditionally they weren’t able to. So, it’s very exciting. 

Individual power also brings a lot of responsibility. We’ve got all these power structures but at the end of the day there’s usually a techie implementing the big red button so the role of geeks in this world is very important. We are the ones who enable technology to be used for any agenda. Everything is basically based on technology, right? So everything is reliant upon technology. Well, this means we are exactly as free as the tools that we use. 

Technical freedom

If the tool that you’re using for social networking only allows you to talk to people in the same geographic area as you then you’re limited. If the email tool you’re using only allows you to send to someone who has another secure network then you’re only as free as that tool. Tech literacy becomes an enabler or an inhibitor, and it defines an individual’s privacy. Because you might say to yourself, oh you know, I will never tell anyone where I am at a particular point in time cause I don’t want someone to rob my house while I’m out on holiday. But you’ll still put a photo up that you’re in Argentina right now, because that’s fun, so now people know. Technical literacy for the masses is really important but largely, at this point, confined to the geeks. So hacker ethos ends up being a really important part of this.

For those that don’t know, hacker is not a rude word. It’s not a bad word. It’s the concept of having a creative and clever approach to technology and applying tech in cool and exciting ways. It helps people scratch an itch, test their skills, solve tricky problems collaboratively. Hacker ethos is a very important thing because you start to say freedom, including technical freedom is actually very, very important. It’s very high on the list. And with this ethos, technologists know that to implement and facilitate technologies that actually hobble our fellow citizens kind of screws them over.

Geeks will always be the most free in a digital society because we will always know how to route around the damage. Again, going back to the technical construct of the internet. But fundamentally we have a role to play to actually be leaders and pioneers in this society and to help lead the masses into a better future.

Danger!

There’s also a lot of other sorts of dangers. Tools don’t discriminate. The same tools that can lead a wonderful social revolution or empower individuals to tell their stories are the same technology that can be used by criminals or those with a nefarious agenda. This is an important reason to remember we shouldn’t lock down the internet because someone can use it for a bad reason, in the same way we don’t ban cars just because someone used a vehicle to rob a bank. The idea of hobbling technology because it’s used in a bad way is a highly frustrating one.

Another danger is “privilege cringe”. In communities like Australia we’re sort of taught to say, well, you’ve got privilege because you’ve been brought up in a safe stable environment, you’ve got an education, you’ve got enough money, you’ve got a sense of being able to go out and conquer the world. But you’ve got to hide that because you should be embarrassed of your opportunities when so many others have so little. I suggest to you all that you in this room, and pretty much anyone that would probably come and watch a TED event or go to a TED talk or watch it online, is the sort of person who is probably reasonably privileged in a lot of ways and you can use your privilege to influence the world in a powerful and positive way.

You’ve got access to the internet which makes you part of the third of the world that has access. So use your privilege for the power of good! This is the point. We are more powerful than ever before so if you’re not using your power for the power of good, if you’re not actually contributing to making the world a better place, what are you doing?

Hipsters are a major danger. Billy Bragg made the perfect quote which is, cynicism is the perfect enemy of progress. There is nothing more frustrating than actually making progress and having people tear you down because you haven’t done it exactly so.

Another danger is misdirection. We have a lot of people in Australia who want to do good. That’s very exciting and really cool. But Australians tend to say, I’m going to go to another country and feed some poor people and that’ll make me feel good, that’ll be doing some good and that’ll be great. Me personally, that would really not be good for people because I don’t cook very well. Deciding how you can actually contribute to making the world a better place is, in a way, like finding a lever. You need to identify what you are good at, what real differences you can make when you apply your skills very specifically. Where do you push to get a major change rather than contributing to actually maintaining the status quo? How do you rewrite the rules? How do you actually help those people that need help all around the world, including here in Australia, in a way that actually helps them sustainably? Enthused misdirection is, I guess, what I’m getting at.

And of course, one of the most frustrating dangers is hyperbole. It is literally destroying us. Figuratively speaking ;)

So there’s a lot of dangers, there’s a lot of issues but there is a lot of opportunities and a lot of capacities to do awesome. How many people here have been to a TED talk of some sort before? So keep your hand up if, after that, you went out and did something world changing. OK. So now you’re gonna do that, yeah? Right. So next time we do this all of those hands will stay up.

Progress

I’ll make a couple of last points. My terrible little diagram here maps the concept that if you look at the last 5,000 years, the quality of life for individuals in many societies has been down here fairly low for a long time. In millennia past, kings come and go, people get killed, properties taken. All sorts of things happen and individuals were very much at the behest of the powers of the day but you just keep plowing your fields and try to be all right. But it has slowly improved over a long time, and the collective epiphany of the individual starts to happen, the idea of having rights, the idea that things could be better and that the people could contribute to their own future, and democracy starts to kick off. The many suffrage movements addressed gender, ethnicity and other biases, with more and more individuals in societies starting to be granted more equal recognition and rights.

The last hundred years, boom! It has soared up here somewhere. And I’m not tall enough to actually make the point, right? This is so exciting! So where are we going to go next?

How do we contribute to the future if we’re not involved in shaping the future? If we aren’t involved, then other powerful individuals are going to shape it for us. And this, this is the thing I’ve really learned by working in government, by working in the Minister’s office, by working in the public service. I specifically went to work for a politician – even though I’m very strongly apolitical – to work in the government and in the public service because I wanted to understand the executive, legislative, and administrative arms of the entity that shapes our lives so much. I feel like I have a fairly good understanding of that now and there’s a lot of people who influence your lives every day.

Tipping point

Have we really hit this tipping point? You know, is it, is it really any different today than it was yesterday? Well, we’ve had this exponential progress, we’ve got a third of the world online, we’ve got these super human powerful individuals in a large chunk of different societies around the world. I argue that we have hit and passed the tipping point but the realisation hasn’t hit everyone yet.

So, the question is for you to figure out your super power. How do you best contribute it to making the world a better place?

Powers and kryptonite

For me, going and working in a soup kitchen will not help anybody. I could possibly design a robot that creates super delicious and nutritional food to actually feed people. But me doing it myself would actually probably give them food poisoning and wouldn’t help anyone. You need to figure out your specific super powers so you can deploy them to some effect. Figure out how you can contribute to the world. Also figure out your kryptonite.

What biases do you have in place? What weaknesses do you have? What things will actually get in the way of you trying to do what you’re doing? I quite often see people apply critical analysis and critical thinking tools without any self-awareness and the problem is that we are super clever beings and we can rationalize anything we want if, emotionally, we like it or dislike it.

So try and have both self-awareness and critical analysis and now you’ve got a very powerful way to do some good. So I’m going to just finish with a quote.

JFDI

What better place than here? What better time than now? All hell can’t stop us now — RATM

The future is being determined whether you like it or not. But it’s not really being determined by the traditional players in a lot of ways. The power’s been distributed. It’s not just the politicians or the scholars or the researchers or corporates. It’s being invented right here, right now. You are contributing to that future either passively or actively. So you may as well get up and be active about it.

We’re heading towards this and we’ve possibly even hit the tipping point of a digital singularity and a democratic singularity. So, what are you going to do about it? I invite you to share with me in creating the future together.

Thank you very much.

You might also be interested in my blog post on Creating Open Government for a Digital Society and I think the old nugget of noblesse oblige applies here very well.

Antarctica Adventure!

Recently I adventured to Antarctica. It’s not every day you get to say that and it has always been a dream of mine to travel to the south pole (or close to it!) and to see the glaciers, penguins, whales and birds that inhabit such a remote environment. There is something liberating and awesome (in the full sense of the word) in going somewhere where very few humans have traveled. Especially for someone like me who spends so much time online.


Being Australian and unused to real cold, I think I was also attracted to exploring a truly cold place. The problem with travelling to Antarctica is, as it turns out, the 48-60 hours of torment you need to go through to get there and to get back. The Drake Passage is the strip of open sea between the bottom of South America and the Peninsula of the Antarctic continent. It is by far the most direct way by ship to get to Antarctica and the port town of Ushuaia is well set up to support intrepid travelers in this venture. We took off from Ushuaia on a calm Wednesday afternoon and within a few hours, were into the dreaded Drake. I found that whilst ever I was lying down I was ok but walking around was torture! So I ended up staying in bed about 40 hours by which time it had calmed down significantly. See my little video of the more calm but still awful parts :) And that was apparently a calm crossing! Ah well, turns out I don’t have sea legs. At least I wasn’t actually sick and I certainly caught up with a few months of sleep deprivation so arguably, it was the perfect enforced rest!

Now the adventure begins! We were accompanied by a number of stunning and enormous birds, including Cape Petrels and a number of Albatrosses. Then we came across a Blue Whale which is apparently quite a rare thing to see in the Drake. It gave us a little show and then went on its way. We entered the Gerlache Strait and saw our first ice which was quite exciting, but by the end of the trip these early views were just breadcrumbs! We landed at Cuverville Island which was stunning! I had taken the snowshoeing option and so with 12 other adventurous travellers, we started up the snow covered hill to get some better views. We saw a large colony of Gentoo penguins which was fun, they are quite curious and cute creatures. We had to be careful to not block any “penguin highways” so we were often giving way to scores of them as we explored. We saw a Leopard Seal in the water, which managed to catch one unfortunate penguin for lunch.

We then landed at Neko Harbour, our first step onto the actual Antarctic continent! Again, more stunning views and Gentoo penguins. We had the good fortune to also have time that day to land at Port Lockroy, an old British station in Antarctica and the southernmost post office in the world. I sent a bunch of postcards to friends and family on the 23rd December, I guess we’ll see how long they take to make the trip. We got to see a number of the Snowy Sheathbill birds, which is a bit of a scavenger. It eats everything, including penguin poo, which is truly horrible. Although their eating habits are awful, they are quite beautiful and I was lucky enough to score a really good shot of one mid flight.

The next day we traveled down the Lemaire Channel to Petermann Island where we saw more Gentoo penguins, but also Adélie penguins, which are terribly cute! Again we did some snowshoeing which was excellent. I took some time to just sit and drink in the remoteness and the pristine environment that is Antarctica. It was humbling and wonderful to remember how truly small we all are and the magnificence of this world on which we reside. We saw some Minke Whales in the water beside the ship.

In the afternoon we broke through a few kilometres of ice and took the small boats (zodiacs) a short distance, then walked a half kilometre over ocean ice to land at Vernadsky Base, a Ukrainian scientific post. The dozen or so scientists there hadn’t seen any other humans for 8 months and were very pleased to see us :) All of them were men and when I asked why there weren’t any women scientists there I had a one word answer from our young Ukrainian guide: politics. Interesting… At any rate it was fascinating and it looks like they do some incredible science down there. There was also a small Elephant Seal who crawled up to the bar to say hi. They also have the southernmost bar in the world, and there we were treated to home-made sugar-based vodka, which was actually pretty good. So good in fact that one of the guests from our ship drank a dozen shots, then traded her bra in exchange for some snowmobile moonlighting around the base. It was quite hilarious and our poor expedition leader dealt with it very diplomatically.

To cap off a fantastic day, the catering crew put on a BBQ on the deck of the Ocean Nova which was a cold but excellent affair. The mulled wine and hot apple dessert went down particularly well against the cold! We did a trivia night which was great fun, and our team, “The Rise of the Gentoo” won! There was much celebration though the sweet victory was snatched from us when they found a score card for a team that hadn’t been marked. Ah well, all is fair in love and war! I had only one question for our expedition leader, would we see any Orca? Orca are a new favourite animal of mine. They are brilliant, social and strategic animals. Well worth looking into.

The next morning we were woken particularly early as there were some Orca in the water! I was first on deck, in my pyjamas, and I have to admit I squealed quite a lot, much to the amusement of our new American friends. At one point I saw all five Orca come to the surface and I could only watch in awe. They really are stunning animals. I learned from the on board whale expert that Orca have some particularly unique hunting techniques. Often they come across a seal or two on a small iceberg surrounded by water, and so they swim up to it in formation and then dive and hit their tails simultaneously, creating a small tidal wave that washes the seal off into the water ready for the taking. Very clever animals. Then they always share the spoils of a hunt amongst the pod, and often will simply daze a victim to teach young Orca how to hunt before dealing a death blow. Apparently Orca have been known to kill much larger animals including Humpback Whales.

Anyway, the rest of the day we did some zodiac trips (the small courier boats) around Paradise Harbour which was bitterly cold, and then around the Melchior Islands in Dallman Bay which was spectacular. One of the birds down here is the Antarctic Cormorant, closely related to the Cormorants in Australia. They look quite similar :) We got to see a number of them nesting. Going back through the Drake I had to confine myself to my room again, which meant I missed seeing Humpback Whales. This was unfortunate but I really did struggle to travel around the ship when in the Drake without getting very ill.

On a final note, I traveled with Antarctica XXI, which has a caring and wonderful crew. The crew includes scientists, naturalists, biologists and others who genuinely love Antarctica. As a result we had a number of amazing lectures throughout the trip about the wildlife and ecosystem of Antarctica. Learning about Krill, ice flow, climate change and the migratory patterns of the whales was awesome. I wish I had been able to attend more talks but I couldn’t get up during most of the Drake :/ The rest of the crew who looked after navigation, feeding us, cleaning and all the other operations were just amazing. A huge thank you to you all for making this voyage the trip of a lifetime!

One thing I didn’t anticipate was the land sickness! 24 hours after getting off the boat and I still feel the sway of the ocean! All of my photos, plus a couple of group photos and a video or two are up on my flickr account in the Antarctica 2013 set at http://www.flickr.com/photos/piawaugh/sets/72157638364999506/ You can also see photos from Buenos Aires if you are interested at http://www.flickr.com/photos/piawaugh/sets/72157638573728155/

A special thank you also to Jamie, our expedition leader who delivered an incredible itinerary under some quite trying circumstances, and all the expedition crew! You guys totally rock :)

I met some amazing new friends on the trip, and got to spend some quality time with existing friends. You don’t go on adventures like this without meeting other people of a similar adventurous mindset, which is always wonderful.

For everyone else, I highly highly recommend you check out the Antarctica XXI (Ocean Nova) trips if you are interested in going to Antarctica or the Arctic.

For all my linux.conf.au friends, yes I did scope out Antarctica for a potential future conference, but given the only LUGs there are Gentoos, I think we should all spare ourselves the pain ;)

Below are links to some additional reading about the places we visited as provided by the Antarctic XX1 crew, the list of animals that were sighted throughout the journey and some other bits and pieces that might be of interest. Below are also some excellent quotes about Antarctica that were on the ship intranet that I just had to post to give you a flavour of what we experienced :)

  • AXXI_Logbook_SE2-1314 (PDF) Log book for the trip. Includes animals we saw, where we went and some details of our activities. Lovely work by the Antarctica XXI crew :)
  • Daily-program (PDF) – our daily program for the journey
  • Info-landings (PDF) – information about the landing sites we went to

The church says the earth is flat, but I know that it is round, for I have seen the shadow on the moon, and I have more faith in a shadow than in the church. — Ferdinando Magallanes

We were the only pulsating creatures in a dead world of ice. — Frederick Albert Cook

Below the 40th latitude there is no law; below the 50th no god; below the 60th no common sense and below the 70th no intelligence whatsoever. — Kim Stanley Robinson

I have never heard or felt or seen a wind like this. I wondered why it did not carry away the earth. — Cherry-Garrard

Great God ! this is an awful place. — Robert Falcon Scott, referring to the South Pole

Human effort is not futile, but Man fights against the giant forces of Nature in the spirit of humility. — Ernest Shackleton

Had we lived I should have had a tale to tell of the hardihood, endurance and courage of my companions …. These rough notes and our dead bodies must tell the tale. — Robert Falcon Scott

People do not decide to become extraordinary. They decide to accomplish extraordinary things. — Edmund Hillary

Superhuman effort isn’t worth a damn unless it achieves results. — Ernest Shackleton

Adventure is just bad planning. — Roald Amundsen

For scientific leadership, give me Scott; for swift and efficient travel, Amundsen; but when you are in a hopeless situation, when there seems to be no way out, get on your knees and pray for Shackleton. — Sir Raymond Priestley

The imperatives for changing how we do government

Below are some of the interesting imperatives I have observed as key drivers for changing how governments do things, especially in Australia. I thought it might be of interest for some of you :) Particularly those trying to understand “digital government”, and why technology is now so vital for government services delivery:

  • Changing public expectations – public expectations have fundamentally changed, not just with technology and everyone being connected to each other via ubiquitous mobile computing, but our basic assumptions and instincts are changing, such as the innate assumption of routing around damage, where damage might be technical or social. I’ve gone into my observations in some depth in a blog post called Online Culture – Part 1: Unicorns and Doom (2011).
  • Tipping point of digital engagement with government – in 2009 Australia had more citizens engaging with government  online than through any other means. This digital tipping point creates a strong business case to move to digitally delivered services, as a digital approach enables more citizens to self serve online and frees up expensive human resources for our more vulnerable, complex or disengaged members of the community.
  • Fiscal constraints over a number of years have largely led to IT Departments having done more for less for years, with limited investment in doing things differently, and effectively a legacy technology millstone. New investment is needed but no one has money for it, and IT Departments have in many cases, resorted to being focused on maintenance rather than project work (an upgrade of a system that maintains the status quo is still maintenance in my books). Systems have reached a difficult point where the fat has been trimmed and trimmed, but the demands have grown. In order to scale government services to growing needs in a way that enables more citizens to self service, new approaches are necessary, and the capability to aggregate services and information (through open APIs and open data) as well as user-centric design underpins this capability.
  • Disconnect between business and IT – there has been for some time a growing problem of business units disengaging with IT. As cheap cloud services have started to appear, many parts of government (esp Comms and HR) have more recently started to just avoid IT altogether and do their own thing. On one hand this enables some more innovative approaches, but it also leads directly to a problem in whole of government consistency, reliability, standards and generally a distribution of services which is the exact opposite of a citizen centric approach. It’s important that we figure out how to get IT re-engaged in the business, policy and strategic development of government such that these approaches are more informed and implementable, and such that governments use, develop, fund and prioritise technology in alignment with a broader vision.
  • Highly connected and mobile community and workforce – the opportunities (and risks) are immense, and it is important that governments take an informed and sustainable approach to this space. For instance, in developing public facing mobile services, a mobile optimised web services approach is more inclusive, cost efficient and sustainable than native applications development, but by making secure system APIs and open data available, the government can also facilitate public and private competition and innovation in services delivery.
  • New opportunities for high speed Internet are obviously a big deal in Australia (and also New Zealand) at the moment with the new infrastructure being rolled out (FTTP in both countries), and setting up to better support and engaging with citizens digitally now, before mainstream adoption, is rather important and urgent.
  • Impact of politics and media on policy – the public service is generally motivated to have an evidence-based approach to policy, and where this approach is developed in a transparent and iterative way, in collaboration with the broader society, it means government can engage directly with citizens rather than through the prism of politics or the media, each which have their own motivations and imperatives.
  • Prioritisation of ICT spending – it is difficult to ensure the government investment and prioritisation of ICT projects aligns with the strategic goals of the organisation and government, especially where the goals are not clearly articulated.
  • Communications and trust – with anyone able to publish pretty much anything, it is incumbent on governments to be a part of the public narrative as custodians of a lot of information and research. By doing this in a transparent and apolitical way, the public service can be a valued and trusted source.
  • The expensive overhead of replication of effort across governments – consolidating where possible is vital to improve efficiencies, but also to put in place the mechanisms to support whole of government approaches.
  • Skills – a high technical literacy directly supports the capacity to innovate across government and across the society in every sector. As such this should be prioritised in our education systems, way above and well beyond “office productivity” tools.

Note: I originally had some of this in another blog post about open data and digital government in NZ, buried some way down. Have republished with some updated ideas.

Embrace your inner geek: speech to launch QUT OSS community

This was a speech I gave in Brisbane to launch the QUT OSS group. It talks about FOSS, hacker culture, open government/data, and why we all need to embrace our inner geek :)

Welcome to the beginning of something magnificent. I have had the luck, privilege and honour to be involved in some pretty awesome things over the 15 or so years I’ve been in the tech sector, and I can honestly say it has been my involvement in the free and Open Source software community that has been one of the biggest contributors.

It has connected me to amazing and inspiring geeks and communities nationally and internationally, it has given me an appreciation of the fact that we are exactly as free as the tools we use and the skills we possess, it has given me a sense of great responsibility as part of the pioneer warrior class of our age, and it has given me the instincts and tools to do great things and route around issues that get in the way of awesomeness.

As such I am really excited to be part of launching this new student focused Open Source group at QUT, especially one with academic and industry backing, so congratulations to QUT, Red Hat, Microsoft and Tech One.

It’s also worth mentioning that Open Source skills are in high demand, both nationally and internationally, and something like 2/3 of Open Source developers are doing so in some professional capacity.

So thanks in advance for having me, and I should say up front that I am here in a voluntary capacity and not to represent my employer or any other organisation.

Who am I? Many things: martial artist, musician, public servant, recently recovered ministerial adviser, but most of all, I am a proud and reasonably successful geek.

Geek Culture

So firstly, why does being a geek make me so proud? Because technology underpins everything we do in modern society. It underpins industry, progress, government, democracy, a more empowered, equitable and meritocratic society. Basically technology supports and enhances everything I care about, so being part of that sector means I can play some small part in making the world a better place.

It is the geeks of this world that create and forge the world we live in today. I like to go to non-geek events and tell people who usually take us completely for granted, “we made the Internet, you’re welcome”, just to try to embed a broader appreciation for tech literacy and creativity.

Geeks are the pioneers of the modern age. We are carving out the future one bit at a time, and leading the charge for mainstream culture. As such we have, I believe, a great responsibility to ensure our powers are used to improve life for all people, but that is another lecture entirely.

Geek culture is one of the driving forces of innovation and progress today, and it is organisations that embrace technology as an enabler and strategic benefit that are able to rapidly adapt to emerging opportunities and challenges.

FOSS culture is drawn very strongly from the hacker culture of the 60′s and 70′s. Unfortunately the term hacker has been stolen by the media and spooks to imply bad or illegal behaviours, which we would refer to as black hat hacking or cracking. But true hacker culture is all about being creative and clever with technology, building cool stuff, showing off one’s skills, scratching an itch.

Hacker culture led to free software culture in the 80′s and 90′s, also known as Open Source in business speak, which also led to a broader free culture movement in the 90′s and 00′s with Creative Commons, Wikipedia and other online cultural commons. And now we are seeing a strong emergence of open government and open science movements which is very exciting.

Open Source

A lot of people are aware of the enormity of Wikipedia. Even though Open Source well predates Wikipedia, it ends up being a good tool to articulate to the general population the importance of Open Source.

Wikipedia is a globally crowdsourced phenomenon that, love it or hate it, has made knowledge more accessible than ever before. I personally believe that the greatest success of Wikipedia is in demonstrating that truth is perception, and the “truth” held in the pages of Wikipedia ends up, ideally anyway, being the most credible middle ground of perspectives available. The discussion pages of any page give a wonderful insight into any contradicting perspectives or controversies and it teaches us the importance of taking everything with a grain of salt.

Open Source is the software equivalent of Wikipedia. There are literally hundreds of thousands if not millions of Open Source software projects in the world, and you would use thousands of the most mature and useful ones every day, without even knowing it. Open Source operating systems like Linux or MINIX power your cars, devices, phones, telephone exchanges and the majority of servers and supercomputers in the world. Open Source web tools like WordPress, Drupal or indeed MediaWiki (the software behind Wikipedia) power an enormous number of websites you go to every day. Even Google heavily uses Open Source software to build the world’s most reliable infrastructure. If Google.com doesn’t work, you generally check your own network reliability first.

Open Source is all about people working together to scratch a mutual itch, sharing in the development and maintenance of software that is developed in an open and collaborative way. You can build on the top of existing Open Source software platforms as a technical foundation for innovation, or employ Open Source development methodologies to better innovate internally. I’m still terrified by the number of organisations I see that don’t use base code revision systems and email around zip files!

Open Source means you can leverage expertise far beyond what you could ever hope to hire, and you build your business around services. The IT sector used to be all about services before the proprietary lowest common denominator approach to software emerged in the 80s.

But we have seen the IT sector largely swing heavily back to services, except in the case of niche software markets, and companies compete on quality of services and whole solution delivery rather than specific products. Services companies that leverage Open Source often find their cost of delivery lower, particularly in the age of “cloud” software as a service, where customers want to access software functionality as a utility based on usage.

Open Source can help improve quality and cost effectiveness of technology solutions as it creates greater competition at the services level.

The Open Source movement has given us an enormous collective repository of stable, useful, innovative, responsive and secure software solutions. I must emphasise secure because many eyes reviewing code means a better chance of identifying and fixing issues. Security through obscurity is a myth and it always frustrates me when people buy into the line that Open Source is somehow less secure than proprietary solutions because you can see the code.

If you want to know about government use of Open Source, check out the Open Source policy on the Department of Finance and Deregulation website. It’s a pretty good policy not only because it encourages procurement processes to consider Open Source equally, but because it encourages government agencies to contribute to and get involved in the Open Source community.

Open Government

It has been fascinating to see a lot of Open Source geeks taking their instincts and skills with them into other avenues. And to see non-technical and non-Open Source people converging on the same basic principles of openness and collaboration for mutual gain from completely different avenues.

For me, the most exciting recent evolution of hacker ethos is the Open Government movement.

Open Government has always been associated with parliamentary and bureaucratic transparency, such as Freedom of Information and Hansard.

I currently work primarily on the nexus where open government meets technology. Where we start to look at what government means in a digital age where citizens are more empowered than ever before, where globalisation challenges sovereignty, where the need to adapt and evolve in the public service is vital to provide iterative, personalised and timely responses to new challenges and opportunities both locally and globally.

There are three key pillars of what we like to call “Government 2.0”. A stupid term I know, but bear with me:

  1. Participatory governance – this is about engaging the broader public in the decision making processes of government, both to leverage the skills, expertise and knowledge of the population for better policy outcomes, and to give citizens a way to engage directly with decisions and programs that affect their everyday lives. Many people think about democratic engagement as political engagement, but I contend that the public service has a big role to play in engaging citizens directly in co-developing the future together.
  2. Citizen centricity – this is about designing government services with the citizen at the centre of the design. Imagine if you will, and I know many in the room are somewhat technical, imagine government as an API, where you can easily aggregate information and services thematically or in a deeply personalised way for citizens, regardless of changes to the structure or machinery of government. Imagine being able to change your address in one location, and have one place to ask questions or get the services you need. This is the vision of my.gov.au and indeed there are several initiatives that deliver on this vision, including the Canberra Connect service in the ACT, which is worth looking at. In the ACT you can go into any Canberra Connect location for all your Territory/Local government needs, and they then interface with all the systems of that government behind the scenes in a way that is seamless to a citizen. It is vital that governments and agencies start to realise that citizens don’t care about the structures of government, and neither should they have to. It is up to us all to start thinking about how we do government in a whole of government way to best serve the public.
  3. Open and transparent government – this translates as both parliamentary transparency and opening up government data and APIs. Open data also opens up opportunities for greater analysis, policy development, mobile service delivery, public transparency and trust, economic development through new services and products being developed in the private sector, and much more.

Open Data

Open data is very much my personal focus at the moment. I’m now in charge of data.gov.au, which we are in the process of migrating to an excellent Open Source data repository called CKAN which will be up soon. There is currently a beta up for people to play with.

I also am the head cat herder for a volunteer run project called GovHack which ran only just a week ago, where we had 1000 participants from 8 cities, including here in Brisbane, all working with government data to build 130 new hacks including mashups, data visualisations, mobile and other applications, interactive websites and more. GovHack shows clearly the benefits to society when you open up government data for public use, particularly if it is available in a machine readable way and is available under a very permissive copyright such as Creative Commons.

I would highly recommend you check out my blog posts about open data around the world from when I went to a conference in Helsinki last year and got to meet luminaries in this space including Hans Rosling, Dr Tim Hubbard and Rufus Pollock. I also did some work with the New Zealand Government looking at NZ open data practice and policy which might be useful, where we were also able to identify some major imperatives for changing how governments work.

The exciting thing is how keen government agencies in Federal, State, Territory and Local governments are to open up their data! To engage meaningfully with citizens. And to evolve their service delivery to be more personalised and effective for everyone. We are truly living in a very exciting time for technologists, democracy and the broader society.

Though to be fair, governments don’t really have much choice. Citizens are more empowered than ever before and governments have to adapt, delivering responsive, iterative and personalised services and policy, or risk losing relevance. We have seen the massive distribution of every traditional bastion of power, from publishing to communications, monitoring and enforcement, and even property is about to dramatically shift with the leaps in 3D printing and nanotechnologies. Ultimately governments are under a lot of pressure to adapt the way we do things, and it is a wonderful thing.

The Federal Australian Government already has in place several policies that directly support opening up government data.

Australia has also recently signed up to the Open Government Partnership, an international consortium of over 65 governments, which will be a very exciting step for open data and other aspects of open government.

At the State and Territory level, there is also a lot of movement around open data. Queensland and the ACT launched their new open data platforms late last year with some good success. NSW and South Australia have launched new platforms in the last few weeks with hundreds of new data sets. Western Australia and Victoria have been publishing some great data for some time and everyone is looking at how they can do so better!

Many local governments have been very active in trying to open up data, and a huge shout out to the Gold Coast City Council here in Queensland who have been working very hard and doing great things in this space!

It is worth noting that the NSW government currently have a big open data policy consultation happening which closes on the 17th June and is well worth looking into and contributing to.

Embracing geekiness

One of my biggest bugbears is when people say “I’m sorry, the software can’t do that”. It is the learned helplessness of the tech illiterate that is our biggest challenge for innovating and being globally competitive, and as countries like Australia are overwhelmingly well off, with the vast majority of our citizens living high quality lives, it is this learned helplessness that is becoming the difference between the haves and have nots. The empowered and the disempowered.

Teaching everyone to embrace their inner geek isn’t just about improving productivity, efficiency, innovation and competitiveness, it is about empowering our people to be safer, smarter, more collaborative and more empowered citizens in a digital world.

If everyone learnt and experienced even the tiniest amount of programming, we would all have embedded that wonderful instinct that says “the software can do whatever we can imagine”.

Open Source communities and ethos give us a clear vision as to how we can overcome every traditional barrier to collaboration to make awesome stuff in a sustainable way. They teach us that enlightened self interest in the age of the Internet translates directly to open and mutually beneficial collaboration.

We can all stand on the shoulders of giants that have come before, and become the giants that support the next generation of pioneers. We can all contribute to making this world just a bit more awesome.

So get out there, embrace your inner geek and join the open movement. Be it Open Source, open government or open knowledge, and whatever your particular skills, you can help shape the future for us all.

Thank you for coming today, thank you to Jim for inviting me to be a part of this launch, and good luck to you all in your endeavours with this new project. I look forward to working with you to create the future of our society, together.

So you want to change the world?

Recently I spoke at BarCamp Canberra about my tips and tricks to changing the world. I thought it might be useful to get people thinking about how they can best contribute to the world, according to their skills and passions.

Completely coincidentally, my most excellent boss did a talk a few sessions ahead of me which was the American Civil War version of the same thing :) I highly recommend it. John Sheridan – Lincoln, Lee and ICT: Lessons from the Civil War.

So you want to change the world?

Here are the tactics I use with some success. I heartily recommend you find what works for you. Then you will have no excuse but to join me in implementing Operation World Awesomeness.

The Short Version:

No wasted movement.

The Long Version:

1) Pick your battles: there are a million things you could do. What do you most care about? What can you maintain constructive and positive energy about, even in the face of towering adversaries and significant challenges? What do you think you can make a difference in? There is a subtle difference between choosing to knock down a mountain with your forehead, and renting a bulldozer. If you find yourself expending enormous energy on something, but not making a difference, you need to be comfortable enough to change tactics.

2) Work to your strengths: everyone is good at something. If you choose to contribute to your battle in a way that doesn’t work to your strengths, whatever they are, then you are wasting energy. You are not contributing in the best way you can. You need to really know yourself, understand what you can and can’t do, then do what you can do well, and supplement your army with the skills of others. Everyone has a part to play and a meaningful way to contribute. FWIW, I work to know myself through my martial arts training, which provides a useful cognitive and physical toolkit to engage in the world with clarity. Find what works for you. As Sun Tzu said: know yourself.

3) Identify success: Figure out what success actually looks like, otherwise you have neither a measure of progress nor a measure of completion. I’ve seen too many activists get caught up in a battle and continue fighting well beyond the battle being won, or indeed keep hitting their heads against a battle that can’t be won. It’s important to continually monitor and measure, holding yourself to account, and ensuring you are making progress. If not, change tactics.

4) Reconnaissance: do your research. Whatever your area of interest there is likely a body of work that has come before you that you can build upon. Learn about the environment you are working in, the politics, the various motivations and interests at play, the history and structure of your particular battlefield. Find levers in the system that you can press for maximum effect, rather than just straining against the weight of a mountain. Identify the various moving parts of the system and you have the best chance to have a constructive and positive influence.

5) Networks & Mentors: identify all the players in your field. Who is involved, influential, constructive, destructive, effective, etc. It is important to understand the motivations at play so you can engage meaningfully and collaboratively, and build a mutually beneficial network in the pursuit of awesomeness. Strong mentors are a vital asset and they will teach you how to navigate the rapids and make things happen. A strong network of allies is also vital to keep you on track, accountable, and true to your own purpose. People usually strive to meet the expectations of those around them, so surround yourself with high expectations. Knowing your network also helps you identify issues and opportunities early.

6) Sustainability: have you put in place a succession plan? How will your legacy continue on without you? It’s important if your work is to continue on that it not be utterly reliant upon one individual. You need to share your vision, passion and success. Glory shared is glory sustained, so bring others on board, encourage and support them to succeed. Always give recognition and thanks to people who do great stuff.

7) Patience: remember the long game. Nothing changes overnight. It always takes a lot of work and persistence, and remembering the long game will help during those times when it doesn’t feel like you are making progress. Again, your network is vital as it will help you maintain your strength, confidence and patience :) Speaking of which, a huge thanks to Geoff Mason for reminding me of this one on the day.

8) Shifting power: it is worth noting that we are living in the most exciting of times. Truly. Individuals are more empowered than ever before to do great things. The Internet has created a mechanism for the mass distribution of power, putting into the hands of all people (all those online, anyway) the tools to:

  1. publish and access knowledge;
  2. communicate and collaborate with people all around the world;
  3. monitor and hold others to account including companies, governments and individuals;
  4. act as enforcers for whatever code or law they uphold. This is of course quite controversial but fascinating nonetheless; and
  5. finally, with the advances in 3D printing and nanotechnology, we are on the cusp of all people having unprecedented access to property.

Side note: Poverty and hunger, we shall overcome you yet! Then we just urgently need to prioritise education of all the people. But that is a post for another day :) Check out my blog post on Unicorns and Doom, which goes into my thoughts on how online culture is fundamentally changing society.

This last aspect is particularly fascinating as it changes the game from one between the haves and the have nots, to one between those with and those without skills and knowledge. We are moving from a material wealth differentiation in society towards an intellectual wealth differentiation. Arguably we always had the latter, but the former has long been a bastion for law, structures, power and hierarchies. And it is all changing.

“What better place than here, what better time than now?” — RATM

I am so thankful – the gap is sorted

I will be doing a longer blog post about the incredible adventure it was to bring Sir Tim Berners-Lee and Rosemary Leith to Australia 10 days ago, but tonight I have had something just amazing happen that I wanted to briefly reflect upon.

I feel humbled, amazed and extremely extremely thankful to be part of such an incredible community in Australia and New Zealand, and a lot of people have stood up and supported me with something I felt very uncomfortable having to deal with.

Basically, a large sponsor pulled out from the TBL Down Under Tour (which I was the coordinator for, supported by the incredible and hard working Jan Bryson) just a few weeks before the start, leaving us with a substantial hole in the budget. I managed to find sponsorship to cover most of the gap, but was left $20k short (for expenses only) and just decided to figure it out myself. Friends rallied around and suggested the crowdsourcing approach which I was hesitant to do, but eventually was convinced it wouldn’t be a bad thing.

The crowdfunding went live less than two days ago and raised around $6k ($4,800 on GoGetFunding and $1,200 from Jeff’s earlier effort). This was incredible, especially the wonderfully supportive and positive comments that people left. Honestly, it was amazing. And then, much to my surprise and shock, Linux Australia offered to contribute the rest of the $20k. Silvia is closing the crowdsourcing site as I write this and I’m thankful to her for setting it up in the first place.

I am truly speechless. And humbled. And….

It is worth noting that stress and exhaustion aside, and though I put over 350 hours of my own time into this project, for me it has been completely worth it. It has brought many subjects dear to my heart into the mainstream public narrative and media, including open government, open data, open source, net neutrality, data retention and indeed, the importance of geeks. I think such a step forward in public narrative will help us take a few more steps towards the future where Geeks Rule Over Kings ;) (my lca2013 talk)

It was also truly a pleasure to hang out with Tim and Rosemary who are extremely lovely people, clever and very interesting to chat to.

For the haters :) No I am not suffering from cultural cringe. No I am not needing an external voice to validate perspectives locally. There is only one TBL and if he was Australian I’d still have done what I did :P

More to come in the wrap up post on the weekend, but thank you again to all the individuals who contributed, and especially to Linux Australia for offering to fill the gap. There are definitely lessons learnt from this experience which I’ll outline later, but if I was an optimist before, this gives me such a sense of confidence, strength and support to continue to do my best to serve my community and the broader society as best I can.

And I promise I won’t burn out in the meantime ;)

Po is looking forward to spending more time with his human. We all made sacrifices :) (old photo courtesy of Mary Gardiner)

My NZ Open Data and Digital Government Adventure

On a recent trip to New Zealand I spent three action packed days working with Keitha Booth and Alison Stringer looking at open data. These two have an incredible amount of knowledge and experience to share, and it was an absolute pleasure to work with them, albeit briefly. They arranged meetings with about 3000* individuals from across different parts of the NZ government to talk about everything from open data, ICT policy, the role of government in a digital era, iterative policy, public engagement and the components that make up a feasible strategy for all of the above.

It’s important to note, I did this trip in a personal capacity only, and was sure to be clear I was not representing the Australian government in any official sense. I saw it as a bit of a public servant cultural exchange, which I think is probably a good idea even between agencies let alone governments ;)

I got to hear about some of the key NZ Government data projects, including data.govt.nz, data.linz.govt.nz, the statistical data service, some additional geospatial and linked data work, some NZ government planning and efforts around innovation and finding more efficient ways to do tech, and much more. I also found myself in various conversations with extremely clever people about science and government communications, public engagement, rockets, circus and more.

It was awesome, inspiring, informative and exhausting. But this blog post aims to capture the key ideas from the visit. I’d love your feedback on the ideas/frameworks below, and I’ll extrapolate on some of these ideas in followup posts.

I’m also looking forward to working more collaboratively with my colleagues in New Zealand, as well as from across all three spheres of government in Australia. I’d like to set up a way for government people in the open data and open government space across Australia/New Zealand to freely share information and technologies (in code), identify opportunities to collaborate, share their policies and planning for feedback and ideas, and generally work together for more awesome outcomes all round. Any suggestions for how best to do this? :) GovDex? A new thing? Will continue public discussions on the Gov 2.0 mailing list, but I think it’ll be also useful to connect govvies privately whilst encouraging individuals and agencies to promote their work publicly.

This blog post is a collaboration with the wonderful Alison Stringer, in a personal capacity only. Enjoy!

* 3000 may be a wee stretch :)

Table of Contents

Open Data

  • Strategic/Policy Building Blocks
  • Technical Building Blocks
  • References

Digital and Open Government

  • Some imperatives for changing how we do government
  • Policy/strategic components

Open data

Strategic/policy building blocks

Below are some basic building blocks we have found to be needed for an open data strategy to be sustainable and effective in gaining value for both the government and the broader community including industry, academia and civil society. It is based on the experiences in NZ, Aus and discussions with open data colleagues around the world. Would love your feedback, and I’ll expand this out to a broader post in the coming weeks.

  • Policy – open as the default, specifically encouraging and supporting a proactive and automated disclosure of government information in an appropriate, secure and sustainable way. Ideally, each policy should be managed as an iterative and live document that responds to changing trends, opportunities and challenges:
    • Copyright and licensing – providing clear guidance that government information can be legally used. Using simple, permissive and known/trusted licences is important to avoid confusion.
    • Procurement – procurement policy creates a useful and efficient lever to establish proactive “business as usual” disclosure of information assets, by requiring new systems to support such functionality and publishing in open data formats from the start. This also means the security and privacy of data can be built into the system.
    • Proactive publishing – a policy of proactive disclosure helps avoid the inefficiencies of retrospective data publishing. It is important to also review existing assets and require an implementation plan from all parts of government on how they will open up their information assets, and then measure, monitor and report on the progress.
  • Legislation – ensuring any legislative blockers to publishing data are sorted; for instance, in some jurisdictions civil servants are personally liable if someone takes umbrage at the publication of something. Indeed there may be some issues here that are perceptions as opposed to reality. A review of any relevant legislation and a plan to fix any blockers to publishing information assets is recommended.
  • Leadership/permission – this is vital, especially in early days whilst open data is still being integrated as business as usual. It should be as senior as possible.
  • Resourcing – it is very hard to find new money in governments in the current fiscal environment. However, we do have people. Resourcing the technical aspects of an open data project would only need a couple of people and a little infrastructure that can both host and point to data and data services. The UK open data platform runs on less than £460K per year (including the costs of three staff). But there needs to be a policy of distributed publishing. In the UK there are ~760 registered publishers of data throughout government. It would be useful to have at least one data publisher in each department (probably doing this as part of their job only, alongside the current senior agency data champion role) who spends a day or two a week just seeking out and publishing data for their department, and identifying opportunities to automate data publishing with the data.govt.nz team.
  • Value realisation – including:
    • Improved policy development across government through better and early access to data and tools to use data
    • Knowledge transfer across government, especially given so many senior public servants are retiring in the coming years
    • Improved communication of complex issues to the public, better public engagement and exploration of data – especially with data visualisation tools
    • Monitoring, reporting, measuring clear outcomes (productivity savings, commercialisation, new business or products/projects, innovation in government, improved efficiency in Freedom of Information responses, efficiencies in not replicating data or reports, effectiveness and metrics around projects, programs and portfolios)
    • Application of data in developing citizen centric services and information
    • Supporting and facilitating commercialisation opportunities
  • Agency collaboration – the importance of agency collaboration cannot be overstated, especially on sharing/using/reusing data, on sharing knowledge and skills, and on public engagement and communications. It is also important to work together where projects or policy areas might be mutually beneficial, and on public engagement such that there is a consistent and effective dialogue with citizens. This shouldn’t be a bottlenecked approach, but rather a distributed network of individuals at different levels and in different functions.
  • Technology – need to have the right bits in place, or the best policy/vision won’t go anywhere :) See below for an extrapolation on the technical building blocks.
  • Public engagement – a public communications and engagement strategy is vital to build and support a community of interest and innovation around government data.

Technical building blocks

Below are some potential technical building blocks for supporting a whole of government(s) approach to information management, proactive publishing and collaboration. Let me know what you think I’m missing :)

Please note, I am not in any way suggesting this should be a functional scope for a single tool. On the contrary, I would suggest for each functional requirement the best of breed tool be found and that there be a modular approach such that you can replace components as they are upgraded or as better alternatives arise. There is no reason why a clever frontend tool couldn’t talk to a number of backend services.

  • Copyright and licensing management – if an appropriately permissive copyright licence is applied to data/content at the point of creation, and stored in the metadata, it saves on the cost of administration down the track. The Australian Government default licence has been determined as Creative Commons BY, so agencies and departments should use that, regardless of whether the data/content is ever published publicly. The New Zealand government recommends CC-BY as the default for data and information published for re-use.
  • An effective data publishing platform(s) (see Craig Thomler’s useful post about different generations of open data platforms) that supports the publishing, indexing and federation of data sources/services including:
    • Geospatial data – one of the pivotal data sets required for achieving citizen centric services, and in bringing the various other datasets together for analysis and policy development.
    • Real time data – eg, buses, weather, sensor networks
    • Statistical data – eg census and surveys, where raw access to data is only possible through an API that only returns aggregates above a minimum count, so as to make individual identification difficult (see the sketch after this list)
    • Tabular data – such as spreadsheets or databases of records in structured format
  • Identity management – for publishers at the very least.
  • Linked data and metadata system(s) – particularly where such data can be automatically inferred or drawn from other systems.
  • Change control – the ability to push or take updates to datasets, or multiple files in a dataset, including iterative updates from public or private sources in a verifiable way.
  • Automation tools for publishing and updating datasets including where possible, from their source, proactive system-to-system publishing.
  • Data analysis and visualisation tools – both to make it easier to communicate data, but also to help people (in government and the public) analyse and interact with any number of published datasets more effectively. This is far more efficient for government than each department trying to source their own data visualisation and analysis tools.
  • Reporting tools – that clearly demonstrate status, progress, trends and value of open data and open government on an ongoing basis. Ideally this would also feed into a governance process to iteratively improve the relevant policies on an ongoing basis.
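To illustrate the statistical data point above, here is a minimal sketch of the kind of guard such an API might apply: aggregates below a minimum cell count are suppressed so that individuals are harder to identify. The threshold, field names and data are illustrative assumptions, not any agency’s actual rules.

# A hypothetical query guard for statistical data: counts below a minimum
# threshold are suppressed before results are returned.
from collections import Counter

MINIMUM_CELL_COUNT = 5  # illustrative suppression threshold

def aggregate_by(records, field):
    """Count records per category, suppressing any cell below the threshold."""
    counts = Counter(record[field] for record in records)
    return {
        category: count if count >= MINIMUM_CELL_COUNT else "suppressed"
        for category, count in counts.items()
    }

# Example with made-up survey records:
survey = [{"postcode": "4000"}] * 6 + [{"postcode": "2600"}]
print(aggregate_by(survey, "postcode"))
# {'4000': 6, '2600': 'suppressed'}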

Some open data references

Digital and Open Government

Although I was primarily in New Zealand to discuss open data, I ended up entering into a number of discussions about the broader aspects of digital and open government, which is entirely appropriate and a natural evolution. I was reminded of the three pillars of open government that we often discuss in Australia which roughly translate to:

  • Transparency
  • Participation
  • Citizen centricity

There is a good speech by my old boss, Minister Kate Lundy, which explains these in some detail.

I got into a couple of discussions which went into the concept of public engagement at length. I highly recommend those people check out the Public Sphere consultation methodology that I developed with Minister Kate Lundy, which is purposefully modular so that you can adapt it to any community and however they best communicate, digitally or otherwise. It is also focused on getting evidence based, peer reviewed, contextually analysed and genuinely useful outcomes. It got an international award from the World eDemocracy Forum, which was great to see. Particularly check out how we applied computer forensics tools to help figure out if a consultation is being gamed by any individual or group.

When I consider digital government, I find myself standing back in the first instance to consider the general role of government in a digital society. I think this is an important starting point as our understanding is broadly out of date. New Zealand has definitions in the State Sector Act 1988, but they aren’t necessarily very relevant to 2013, let alone an open and transparent digital government.

Some imperatives for changing how we do government

Below are some of the interesting imperatives I have identified as key drivers for changing how we do government:

  • Changing public expectations – public expectations have fundamentally changed, not just with technology and everyone being connected to each other via ubiquitous mobile computing, but our basic assumptions and instincts are changing, such as the innate assumption of routing around damage, where damage might be technical or social. I’ve gone into my observations in some depth in a blog post called Online Culture – Part 1: Unicorns and Doom (2011).
  • Tipping point of digital engagement with government – in 2009 Australia had more citizens engaging with government  online than through any other means. This digital tipping point creates a strong business case to move to digitally delivered services, as a digital approach enables more citizens to self serve online and frees up expensive human resources for our more vulnerable, complex or disengaged members of the community.
  • Fiscal constraints over a number of years have largely led to IT departments doing more for less, with limited investment in doing things differently, and effectively a legacy technology millstone. New investment is needed but no one has money for it, and IT departments have in many cases resorted to focusing on maintenance rather than project work (an upgrade of a system that maintains the status quo is still maintenance in my books). Systems have reached a difficult point where the fat has been trimmed and trimmed, but the demands have grown. In order to scale government services to growing needs in a way that enables more citizens to self serve, new approaches are necessary, and the capability to aggregate services and information (through open APIs and open data), as well as user-centric design, underpins this.
  • Disconnect between business and IT – there has been for some time a growing problem of business units disengaging with IT. As cheap cloud services have started to appear, many parts of government (esp Comms and HR) have more recently started to just avoid IT altogether and do their own thing. On one hand this enables some more innovative approaches, but it also leads directly to a problem in whole of government consistency, reliability, standards and generally a distribution of services which is the exact opposite of a citizen centric approach. It’s important that we figure out how to get IT re-engaged in the business, policy and strategic development of government such that these approaches are more informed and implementable, and such that governments use, develop, fund and prioritise technology in alignment with a broader vision.
  • Highly connected and mobile community and workforce – the opportunities (and risks) are immense, and it is important that governments take an informed and sustainable approach to this space. For instance, in developing public facing mobile services, a mobile optimised web services approach is more inclusive, cost efficient and sustainable than native applications development, but by making secure system APIs and open data available, the government can also facilitate public and private competition and innovation in services delivery.
  • New opportunities for high speed Internet are obviously a big deal in Australia and New Zealand at the moment with the new infrastructure being rolled out (FTTP in both countries), and setting up to better support and engage with citizens digitally now, before mainstream adoption, is rather important and urgent.
  • Impact of politics and media on policy – the public service generally strives to take an evidence-based approach to policy, and where this approach is developed in a transparent and iterative way, in collaboration with the broader society, it means government can engage directly with citizens rather than through the prism of politics or the media, each of which has its own motivations and imperatives.
  • Prioritisation of ICT spending – it is difficult to ensure the government investment and prioritisation of ICT projects aligns with the strategic goals of the organisation and government, especially where the goals are not clearly articulated.
  • Communications and value realisation – with anyone able to publish pretty much anything, it is incumbent on governments to be a part of the public narrative as custodians of a lot of information and research. By doing this in a transparent and apolitical way, the public service can be a valued and trusted source.
  • The expensive overhead of replication of effort across governments – consolidating where possible is vital to improve efficiencies, but also to put in place the mechanisms to support whole of government approaches.
  • Skills – a high technical literacy directly supports the capacity to innovate across government and across the society in every sector. As such this should be prioritised in our education systems, way above and well beyond “office productivity” tools.

Policy/strategic components

  • Strategic approach to information policy – many people looking at information policy tend to look deeply at one or a small number of areas, but it is only in looking at all of the information created by government, and how we can share, link, re-use and analyse it, that we will gain the significant policy, service delivery and social/economic benefits and opportunities. When one considers geospatial, tabular, real time and statistical (census and survey) data, and then the application of metadata and linked data, it gets rather complicated. But we need to be able to interface effectively with these different data types.
  • Facilitating public and private innovation – taking a “government as a platform” approach, including open data and open APIs, such that industry and civil society can innovate on top of government systems and information assets, creating new value and services to the community.
  • Sector and R&D investment – it is vital that government ensures that investment in digital industries, internal innovation and indeed R&D more broadly aligns with the strategic vision. This means understanding how to measure and monitor digital innovation more effectively and not through the lens of traditional approaches that may not be relevant, such as the number of patents and other IP metrics. The New Zealand and Australian business and research community need to make the most of their governments’ leadership in Open Government. The Open Government Partnership network might provide a way to build upon and export this expertise.
  • Exports – by creating local capacity in the arena of improved and citizen-centric services delivery, Australia and New Zealand set themselves up nicely for exporting services and products to Asia Pacific, particularly given the rapid uptake of countries in the region to join the Open Government Partnership which requires signatories to develop plans around topics such as open data, citizen centricity and parliamentary transparency, all of which we are quite skilled in.
  • Distributed skunkworks for government – developing the communities/spaces/tools across government to encourage and leverage the skills and enthusiasm of clever geeks both internally (internal hackdays, communities of practice) and externally (eg – GovHack). No one can afford new resources, but allocating a small amount of time from the existing workforce who are motivated to do great things is a cost efficient and empowering way to create a distributed skunkworks. And as people speak to each other about common problems and common solutions we should see less duplication of these solutions and improved efficiency across agencies.
  • Iterative policy – rethinking how policy is developed, implemented, measured and governed to take a more iterative and agile approach that a) leverages the skills and expertise of the broader community for more evidence based and peer reviewed policy outcomes and b) is capable of responding effectively and in a timely manner to new challenges and opportunities as they arise. It would also be useful to build better internal intelligence systems for an improved understanding of the status of projects, and improved strategic planning for success.
  • An Information Commissioner for New Zealand – an option for a policy lead on information management to work closely with departments to have a consolidated, consistent, effective and overall strategic approach to the management, sharing and benefits realisation of government information. This would also build the profile of Open Government in New Zealand and hopefully be the permanent solution to current resourcing challenges. The Office of the Australian Information Commissioner, and similar roles at State level, include the functions of Information Commissioner, Privacy Commissioner and Freedom of Information Commissioner, and these combined give a holistic approach to government information policy that ideally balances open information and privacy. In New Zealand it could be a role that builds on recent information policies, such as NZGOAL, which is designed, amongst other things, to replace bespoke content licences. Bespoke licences create an unnecessary liability issue for departments.
  • Citizen centricity – the increasing importance of consolidating government service and information delivery, putting citizens (and business) at the centre of the design. This is achieved through open mechanisms (eg, APIs) to interface with government systems and information such that they can be managed in a distributed and secure way, but aggregated in a thematic way.
  • Shared infrastructure and services – the shared services approach being taken up by some parts of the New Zealand Government is very encouraging to see, particularly when such an approach has been very successful in the ACT and SA governments in Australia, and with several shared infrastructure and services projects at a national level in Australia including the AGIMO network and online services, and the NECTAR examples (free cloud stack tools for researchers). Shared services create the capacity for a consistent and consolidated approach, as well as enabling the foundations of citizen centric design in a practical sense.

Some additional reading and thoughts

Digital literacy and ICT skills – should be embedded into curriculum and encouraged across the board. I did a paper on this as a contribution to the National Australian Curriculum consultation in 2010 with Senator Kate Lundy which identified three areas of ICT competency: 1) Productivity skills, 2) Online engagement skills, & 3) Automation skills as key skills for all citizens. It’s also worth looking at the NSW Digital Citizenship courseware. It’s worth noting that public libraries are a low cost and effective way to deliver digital services, information and skills to the broader community and minimise the issue of the digital divide.

Media data – often when talking about open data, media is completely forgotten: video, audio, arts, etc. The GLAM sector (galleries, libraries, archives and museums) is all over this and should be part of the conversation about how to manage this kind of content across whole of government.

Just a few additional links for those interested, somewhat related to some of the things I discussed this last week.

Getting started in the Australian Public Service

I worked for Senator Kate Lundy from April 2009 till January 2012. It was a fascinating experience learning how the executive and legislative arms of government work and working closely with Kate, who is extremely knowledgeable and passionate about good policy and tech. As someone who is very interested in the interrelation between governments, society, the private sector and technology, I could not have asked for a better place to learn.

But last October (2011) I decided I really wanted to take the next step and expand my experience to better understand the public service, how policy goes from (and to) the political sphere from the administrative arm of government, how policy is implemented in practice, and the impact on, and engagement with, the general public.

I sat back and considered where I would ideally like to work if I could choose. I wanted to get an insight into different departments and public sector cultures across the whole government. I wanted to work in tech policy, and open government stuff if at all possible. I wanted to be in a position where I might be able to make a difference, and where I could look at government in a holistic way. I think a whole of government approach is vital to serving the public in a coherent and consistent way, as is serious public engagement and transparency.

So I came up with my top three places to work that would satisfy these criteria. My top option happened to have a job going, which I applied for, and by November I was informed I was their first choice. This was remarkable and I was very excited to get started, but I also wanted to tie up a few things in Kate’s office. So we arranged a starting date of January 31st 2012.

What is the job you ask? You’ll have to wait till the end of the post ;)

Unfortunately for me, I was already 6 months into a Top Secret Positive Vetting (TSPV) process (what you need for a Ministerial office in order to work with any classified information), and that process had to be completed, even though I only needed a lower level of clearance for the new job. I was informed back in October that it should be done by Christmas.

So I blogged on my last day with Kate about what I had learned and indicated that I was entering the public service to get a better understanding of the administrative arm of government. There was some amusing speculation, and it has probably been the worst kept secret around Canberra for the last year :)

Of course, I thought I would be able to update my “Moving On” blog post within a few weeks or so. It ended up taking another 10 months for my clearance to be finalised. TSPV does take a while, and I’m a little more complicated a case than the average bear given my travel and online profile :)

As it turns out, the 10 months presented some useful opportunities. During the last year I did a bunch of contracting work looking largely at tech policy, some website development, and I ended up working for the ACT Government for the last 5 months.

In the ACT Government I worked in a policy role under Mick Chisnall, the Executive Director of the ACT Government Information Office. That was a fantastic learning experience and I’d like to thank Mick for being such a great person to work with and learn from. I worked on open government policy, open data policy and projects (including the dataACT launch, and some initial work for the Canberra Digital Community Connect project), looked at tech policies around mobile, cloud, real time data, accessibility and much more. I also helped write some fascinating papers around the role of government in a digital city. Again, I feel very fortunate to have had the opportunity to work with excellent people with vision. A huge thanks to Mick Chisnall, Andrew Cappie-Wood, Pam Davoren, Christopher Norman, Kerry Webb, James Watson, Greg Tankard, Gavin Tapp and all the people I had the opportunity to work with. I learnt a lot, much of which will be useful in my new role.

It also showed me that the hype around “shared services” being supposedly terrible doesn’t quite map to reality. For sure, some states have had significant challenges, but in some states it works reasonably well (nothing is perfect) and presents some pretty useful opportunities for whole of government service delivery.

Anyway, so my new job is at AGIMO as Divisional Coordinator for the Agency Services Division, working directly to John Sheridan who has long been quite an active and engaged voice in the Australian Gov 2.0 scene. I started a week and a half ago and am really enjoying it already. I think there are some great opportunities for me through this job to usefully serve the public and the broader public service. I look forward to making my mark and contributing to the pursuit of good tech in government. I’m also taking the role of Media Coordinator for AGIMO, and supporting John in his role.

I’ve met loads of brilliant people working in the public service across Australia, and I’m looking forward to learning a lot. I’m also keen to take a very collaborative approach (no surprises there), so I’m looking at ways to better enable people to work together across the APS and indeed, across all government jurisdictions in Australia. There is a lot to be gained by collaboration between the Federal, States/Territories and Local spheres of government, particularly when you can get the implementers and policy developers working together rather than just those up the stack.

So, if you are in government (any sphere) and want to talk open government, open data, tech policy, iterative policy development, public engagement, or all the things, please get in touch. I’m hoping to set up an open data working group to bring together the people in various governments doing great work across the country and I’ll be continuing to participate in the Gov 2.0 community, now from within the tent :)

Collaborative innovation in the public service: Game of Thrones style

I recently gave a speech about “collaborative innovation” in the public service, and I thought I’d post it here for those interested :)

The short version was that governments everywhere, or more specifically, public services everywhere are unlikely to get more money to do the same work, and are struggling to deliver and to transform how they do things under the pressure of rapidly changing citizen expectations. The speech used Game of Thrones as a bit of a metaphor for the public service, and basically challenged public servants (the audience), whatever their level, to take personal responsibility for change, to innovate (in the true sense of the word), to collaborate, to lead, to put the citizen first and to engage beyond the confines of their desk, business unit, department or jurisdiction to co-develop better ways of doing things. It basically said that the public service needs to work better across the silos.

The long version is below, on YouTube or you can check out the full transcript:

The first thing I guess I wanted to talk about was pressure number one on government. I’m still new to government. I’ve been working in I guess the public service, be it federal or state, only for a couple of years. Prior to that I was an adviser in a politician’s office, but don’t hold that against me, I’m strictly apolitical. Prior to that I was in the industry for 10 years and I’ve been involved in non-profits, I’ve been involved in communities, I’ve been involved in online communities for 15 years. I’ve sort of got a bit of an idea of what’s going on when it comes to online communities and online engagement. It’s interesting for me to see a lot of these things being done now that they’ve become very popular and very interesting.

My background is systems administration, which a lot of people would think is very boring, but it’s been a very useful skill for me because in everything I’ve done, I’ve tried to figure out what all the moving parts are, what the inputs are, where the configuration files are, and how to tweak those configurations to get better outputs. The entire thing has been building up my knowledge of the whole system, how the society-wide system, if you like, operates.

One of the main pressures I’ve noticed on government of course is around resources. Everyone has less to do more with. In some cases, some of those pressures are around fatigued systems that haven’t had investment for 20 years. Fatigued people who have been trying to do more with less for many years. Some of that is around assumptions. There’s a lot of assumptions about what it takes to innovate. I’ve had people say, “Oh yeah, we can totally do an online survey, that’ll cost you $4 million.” “Oh my, really? Okay. I’m going to just use Survey Monkey, that’s cool.” There are a lot of perceptions that are, I would suggest, a little out of date.

It was a very opportunistic and a very wonderful thing that I worked in the ACT Government prior to coming into the federal government. A lot of people in the federal government look down on working in other jurisdictions, but it was very useful because when you see what some of the state territory and local governments do with the tiny fraction of the funding that the federal government has, it’s really quite humbling to start to say, “Well why do we have these assumptions that a project is going to cost a billion dollars?”

I think our perceptions about what’s possible today are a little bit out of whack. Some of those resource problems are also self-imposed limitations: our assumptions, our expectations and such. So the first major pressure that we’re dealing with is around resources, both the real issue and, I would argue, a slight issue of perception. This is the only gory one (slide), so turn away from it if you like, I should have said that before sorry.

The second pressure is around changing expectations. Citizens now, because of the Internet, are more powerful than ever before. This is a real challenge for entities such as government or large traditional power brokers, shall we say. Having citizens that can solve their own problems, that can make their own applications that can pull data from wherever they like, that can screen scrape what we put online, is a very different situation, whether you compare it to Game of Thrones land or medieval times, or even up to only 100 years ago; the role of a citizen was more about being a subject and they were basically subject to whatever you wanted. A citizen today is able to engage, and if you’re not responsive to them, if government isn’t agile and doesn’t actually fulfil its role, then that void gets picked up by other people. So the changing expectations of the public that we serve in an internet society are a major pressure, when fundamentally government can’t in a lot of cases innovate quickly enough, particularly in isolation, to solve the new challenges of today and to adapt and grab onto the new opportunities of today.

We (public servants) need to collaborate. We need to collaborate across government. We need to collaborate across jurisdictions and we need to collaborate across society and I would argue the world. These are things that are very, very foreign concepts to a lot of people in the public service. One of the reasons I chose this topic today was because when I undertook to kick off Data.gov.au again, which is just about to hit its first anniversary (and I recommend that you come along on the 17th of July), the first thing I did was say, “Well, who else is doing stuff? What are they doing? How’s that working? What’s the best practice?” When I chatted to other jurisdictions in Australia, when I chatted to other countries, I sat down and grilled the Data.gov.uk guys for a couple of hours to find out exactly how they do it, how it’s resourced, what their model was. It was fabulous because it really helped us create a strategy which has really worked and it’s continuing to work in Australia.

A lot of these problems and pressures are relatively new, we can’t use old methods to solve these problems. So to quote another Game of Thrones-ism,  if we look back, we are lost.

The third pressure, and it’s not too gory, this one. The third pressure is upper management. They don’t always get what we’re trying to do. Let’s be honest, right? I’m very lucky, I work for a very innovative, collaborative person who delegates responsibilities down … Audience Member: And still has his head. Pia Waugh: … and still has his head. Well actually it’s the other way around. Upper management is Joffrey Baratheon; but I guess you could say it that way, too. In engaging with upper management, a lot of the time, and this has been touched on by several speakers earlier today, a lot of the time they have risks to manage, they have to maintain reputation, and when you say we can’t do it that way, if you can’t give a solution that will solve the problem, then what do you expect to happen? We need to engage with upper management to understand what their concerns are, what their risks are and help mitigate those risks. If we can’t do that then it is in a lot of cases to our detriment, and our projects are not going to be able to get up.

We need to figure out what the agendas are, we need to be able to align what we’re trying to do effectively and we need to be able to help provide those solutions and engage more constructively, I would suggest, with upper management.

Okay, but the biggest issue, the biggest issue I believe is around what I call systemic silos. So this is how people see government: it’s remote, it’s very hard to get to; it’s one entity. It’s a bit crumbling, a bit off in the realm, it’s out of touch with people, it’s off in the clouds and it’s untouchable. It’s very hard to get to, there’s a winding, dangerous road you might fall off. Most importantly, it’s one entity. When people have a good or bad experience with your department, they just see that as government. We are all judged by the best and the worst examples of all of these, and yet we’re all motivated to work independently of each other in order to meet fairly arbitrary goals in some cases. In terms of how government sees people, they’re these trouble-making people that are climbing up to try and destroy us. They’re a threat, they’re outsiders, they don’t get it. If only we could teach them how government works and then this will all be okay.

Well, it’s not their job; I mean half of the people in government don’t know how government works. By the time you take MOG changes into account, by the time you take changes of functions, changes of management, changes of different approaches, different cultures throughout the public service, the number of times someone has said to me, “The public service can’t innovate.” I’m like, “Well, the public service is myriad organisations with myriad cultures.” It’s not one entity and yet people see us as one entity. It’s not, I think, the job of the citizen to understand the complexities of government, but rather the job of the government to abstract the complexities of government to get a better engagement and service for citizens. That’s our job, which means if you’re not collaborating and looking across government, then you’re not actually doing your job, in my opinion. But again, I’m still possibly seen as one of these troublemakers, that’s okay.

This is how government sees government (map of the Realm): a whole map of fiefdoms, of castles to defend, of armies that are beating at your door, people trying to take your food, and this is just one department. We don’t have this concept of “that flag has these skills that we could use”. These people are doing this project; here’s this fantastic thing happening over there that we could chat to. We’re not doing that enough across departments, across jurisdictions, let alone internationally, and there are some fantastic opportunities to actually tap into some of those skills. This massive barrier to doing the work of the public service better is, in my opinion, systemic silos. So what’s the solution?

The solution is we need to share. We’re all taught as children to share the cookie and yet as we get into primary school and high school we’re told to hide our cookie. Keep it away. Oh, you don’t want to share the cookie because there’s only one cookie and if you gave any of it away you don’t have any cookie left. Well, there’s only so many potatoes in this metaphor and if we don’t share those potatoes then someone’s going to starve, and probably the person who’s going to starve is actually right now delivering a service that, if they’re not there to deliver it, we’re going to have to figure out how to deliver with the one potato that we have. So I’m feeling that to collaborate and to share those resources is, I think, a very important step forward.

Innovative collaboration. Innovative collaboration is a totally made up term, as a lot of things are I guess. It’s the concept of actually forging strategic partnerships. I’ve actually had a number of projects now. I didn’t have a lot of funding for Data.gov.au. I don’t need a lot of funding for Data.gov.au because fundamentally, a lot of agencies want to publish data because they see it now to be in their best interest. It helps them improve their policy outcomes, helps them improve their services, helps them improve efficiency in their organisations. Now that we’ve sort of hit that tipping point of agencies increasingly wanting to do this stuff, it’s not completely proliferated yet, but I’m working on it; now that we’ve sort of hit that tipping point, I’ve got a number of agencies that say, “Well, we’d love to open data but we just need a data model registry.” “Oh, cool. Do you have one?” “Yes, we do but we don’t have anywhere to host it.” “Okay, how about I host it for you. You develop it and I’ll host it. Rock!” I’ve got five of those projects happening right now where I’ve aligned the motivation and the goals of what we’re doing with the motivation and goals of five other departments, and we actually have some fantastic outcomes coming out that meet all the needs of all the players involved, plus create an improved whole of government service.

I think this idea of having a shared load, pooling our resources, pooling our skills, getting a better outcome for everyone is a very important way of thinking. It gives you improved outcomes in terms of dealing again with upper management. If you start from the premise that most people do, well we’ve only got this number of people and this amount of money and therefore, we’re only going to be able to get this outcome. In a year’s time you’ll be told, “That’s fine, just still do it with 20% less.” If you say our engagement with this agency is going to help us get more resilience in a project and more expertise on a project and by the way, upper management, it means we’re splitting the cost with someone else, that starts to help the conversation. You can start to leverage resources across multiple departments, across society and across the world.

Here’s a little how-to, just a couple of ideas, I’m going to go into this in a little bit more detail. In the first case, research. So I’m a child of the internet, I’m a little bit unique for my age bracket in that my mom was a geek, so I have been using computers since I was four, 30 years ago. A lot of people my age got their first taste of computing and the internet when they got to university or at best maybe high school, whereas I was playing with computers very young. In fact, there’s a wonderful photo, if you want to check it out, of my mom and me sitting and looking at the computer, very black and white, and there’s this beautiful photo of this mother with a tiny child at the computer. What I tell people is that it’s a cute photo but actually my mom had spent three days programming that system and when her back was turned, just five minutes, I completely broke it. The picture is actually of her fixing my first breaking of a system. I guess I could have had a career in testing but anyway I got in big trouble.

One of the things about being a child of the internet or someone, who’s really adopted the internet into the way that I think, is that my work space is not limited to the desk area that I have. I don’t start with a project and sort of go, okay, what’s on my computer, who’s in my immediate team, who’s in my area, my business area. I start with what’s happening in the world. The idea of research is not just to say what’s happening elsewhere so that we can integrate into what we are going to do, but to start to see the whole world as your work space or as your playground or as your sandpit, whichever metaphor you prefer. In this way, you can start to automatically as opposed to by force, start to get into a collaborative mindset.

Research is very important. You need to establish something. You need to actually do something. This is an important one that’s why I’ve got it in bold. You need to demonstrate that success and you need to wrap up. I think a lot of times people get very caught up with establishing a community and then maintaining that community for the sake of maintaining the community. What are the outcomes? You need to identify fairly quickly, is this going to have an outcome or is this sort of a community, an ongoing community which is not necessarily outcome driven? Part of this is around, again, understanding how the system works and how you can actually work in the system. Some of that research is about understanding projects and skills. I’ll jump into a little bit. So what already exists? If I had a mammoth (slide), I’d totally do cool stuff. What exists out there? What are the people and skills that are out there? What are the motivations that exist in those people that are already out there? How can I align with those? What are the projects that are already doing cool stuff? What are the agendas and priorities and I guess systemic motivations that are out there? What tech exists?

And this is why I always contend, and I always slip into a talk somewhere, so I’ll slip it in here: you need to have a geek involved somewhere. How many people here would consider yourselves geeks? Not many. You need to have people that have technical literacy in order to make sure that your great idea, your shiny vision, your shiny policy can actually be implemented. If you don’t have a techie person, then you don’t have the person who has a very, very good skill at identifying opportunities and risks. You can say, “Well we’ll just go to our IT department and they’ll give us a quote of how much it costs to do a survey.” Well in one case, okay, not necessarily our case, it was $4 million. So you need to have techie people who will help you keep your finger on the pulse of what’s possible, what’s probable and how it’s going to possibly work. I highly recommend it; you don’t need to be that person but you need to have the different skills in the room.

This is where, and I said this on Twitter, I do actually recommend Malcolm Gladwell’s ‘The Tipping Point’, not because he’s the most brilliant author in the world, but because he has a concept in there that’s very important. Maybe I’ll save you reading it now, but it’s the concept of having three skills – connectedness, so the connector; the maven, your researcher sort of person; and your sales person. Those three skills – one person might have all or none of those skills, but a project needs to have all of those skills represented in some format for the project to go from nothing to being successful or massively distributed. It’s a very interesting concept. It’s been very beneficial to a lot of projects I’ve been involved in. I’ve run a lot of volunteer projects, the biggest of which is happening this weekend, which is GovHack. Having 1,300 participants in an 11-city event with only volunteer organisers is a fairly big deal, and part of the reason we can do that is because we align natural motivation with the common vision and we get geeks involved, obviously.

What already exists? Identifying the opportunities, identifying what’s out there, treating the world like a basket of goodies that you can draw from. Secondly, you want to form an A team. Communities are great and communities are important. Communities establish an ongoing presence which you can engage with, draw from, get support from and all those kinds of things. That kind of community is very, very important, but innovative collaboration is about building a team to do something, a project team. You want to have your A-list. You want to have a wide variety of skills. You want to have doers. You want to establish the common and different needs of the individuals involved, and they might be across departments or across governments or from society. Establishing what the people involved have in common in what they want to get out of it, and then establishing what’s different, is important to making sure that when you go to announce this, everyone’s needs are taken care of and it doesn’t put someone offside or whatever. You need to understand the dynamics of your group very, very well and you need to have the right people in the room. You want to plan realistic outcomes and milestones. These need to be tangible.

This is where I get just super pragmatic and I apologise, but if you’re building a team to build the project report to build the team, maybe you’ve lost your way just slightly. If the return on investment or the business case that you’re writing takes 10 times the amount of time of doing the project itself, maybe you could do a little optimisation. So just sort of sit back and ask what is the scale of what we’re trying to do, what are the tangible outcomes, and what is actually necessary for this? This comes back to the concept of, again, managing and mapping risk to projects. If the risk is very, very, very low, then maybe the amount of time and effort that goes into building the enormous structure of governance around it can be somewhat minimised. Taking an engaged, proactive approach to the risk, I think, is very important in this kind of thing, as is making sure that the outcomes are actually achievable and tangible. This is also important because if you have tangible outcomes then you can demonstrate tangible outcomes. You need to also avoid scope creep.

I had a project recently that didn’t end up happening. It was a very interesting lesson to me though where something simple was asked and I came out with a way to do it in four weeks. Brilliant! Then the scope started to creep significantly and then it became this and this and then this and then we want to have an elephant with bells on it. Well, you can have the elephants with bells if you do this in this way in six months. So how about you have that as a second project? Anyway, so basically try to hold your ground. Often enough when people ask for something, they don’t know what they’re asking for. We need to be the people that are on the front line saying, “What you want to achieve fundamentally, you’re not going to achieve the way that you’re trying to achieve it. So how about we think about what the actual end goal that we all want is and how to achieve that? And by the way, I’m the technical expert and you should believe me and if you don’t, ask another technical expert but for God’s sake, don’t leave it to someone who doesn’t know how to implement this, please.”

You want to plan your goals. You want to ensure, and this is another important bit, that there is actually someone responsible for each bit; otherwise, your planning committee will get together in another four weeks or eight weeks and will say, “So, how is action A going? Oh, nothing’s happened. Okay, how’s action B going?” You need to actually make sure that there are nominated responsibilities, and they again should align to those individuals’ natural motivations and systemic motivations.

My next bit: don’t reinvent the wheel. I find a lot of projects where someone has gone and completely recreated something. The number of times someone has said, “Well, that’s a really good piece of software but let’s rewrite it in another language.” In technical land, this is very common, but I see it happen from a process perspective, I see it happen from a policy perspective. Again, going back to see what’s available is very important, but I’ll just throw in another thing here: the idea of taking responsibility is a very scary thing, apparently, in the public service. Let’s go back to the wheel. If your wheel is perfect, you’ve developed it, you’ve designed it, you’ve spent six years getting it to this point and it’s shiny and it’s beautiful and it works, but it’s not connected to a car, what’s the point, seriously?

What you’re doing needs to actually contribute to something bigger, needs to actually be part of the engine, because if your wheel or your cog is perfectly defined but the engine as a whole doesn’t work, then there’s a problem there, and sometimes that’s out of your control. Quite often what’s missing is someone actually looking end to end and saying, “Well, the reason there’s a problem is because there’s actually a spanner, just here.” If we remove that spanner – and I know it’s not my job to remove that spanner – but if someone removed that spanner, the whole thing would work. Sometimes that’s very scary for some people to do and I understand that, but you need to understand what you’re doing and how it fits into the bigger picture, and how the bigger picture is or isn’t working, I would suggest.

Monitoring. Obviously, measuring and monitoring success in Game of Thrones was a lot more messy than it is for us. They had to deal with birds, they had to feed them, they had to deal with what they fed them. Measuring and monitoring your project is a lot easier in a lot of cases. There’s a lot of ways to automate it. There’s a lot of ways to come up with it at the beginning. How do we define success? If you don’t define it, then you don’t know if you’ve got there. These things are all kind of obvious, but I remember having a real epiphany moment with a very senior person from another department. I was talking to him about the challenge that I was having with a project and I said, “Well if you’re doing this great thing, then why aren’t you shouting it from the rooftop? This is wonderful. It’s very innovative, it’s very clever. You’ve solved a really great problem.” Then he looked at me and said, “Well Pia, you know success is just as bad as failure, don’t you?” It really struck me, and then I realised that any sort of success or failure is seen as attention, and the moment something attracts attention, it becomes scary for some people. I put to you that having success, having defensible projects, having evidence that actually underpins why what you’re doing is important, is probably one of the most important things that you can do today to make sure that you continue getting funding, resources and all these kinds of things. Measuring, monitoring, reporting is more important now than ever and, luckily and coincidentally, it’s easier now than ever. There’s a lot of ways that we can automate this stuff. There’s a lot of ways that we can put in place these mechanisms from the start of a project. There’s a lot of ways we can use technology to help. We need to define success, and we need to defend and promote the outcomes of those projects.

Share the glory. If it’s just you sitting on the throne then everyone starts to get a little antsy. I like to say that shared glory is the key to sustainable success. I’ve had a number of projects, and I don’t think I’ve told John this, but I’ve had a couple of things where I’ve collaborated with someone and then I’ve let them announce their part of it first, because that’s a good way to build a great relationship. It doesn’t really matter to me if I announce it now or in a week’s time. It helps share the success, it helps share the glory. It means everyone is a little bit more onside and it builds trust. The point that was made earlier today about trust is a very important one, and the way that you build trust is by having integrity, following through on what you’re doing and sharing the glory a little. Sharing the glory is a very important part because if everyone feels like they’re getting out of the collaboration what they need to justify their work, to justify to their bosses, to justify their investment of time, then that’s a very good thing for everyone.

Everything great starts small. This goes to the point of doing pilots, doing demos. How many of you have heard the term release early, release often? Not many. It’s a technology sector idea, but the idea is that rather than taking, in big terms, four years to scope something out and then getting $100 million and then implementing it – yeah I know, right? – you actually start to do smaller modular projects, and if it fails straight away, then at least you haven’t spent four years and $100 million failing. The other part of release early, release often is fail early, fail often, which sounds very scary in the public sector but is a very important thing, because from failure and from early releases you get lessons. You can iteratively improve projects or policies or outcomes that you’re doing if you’re continually getting out there and actually testing with people and demoing and doing pilots. It’s a very, very useful thing to realise that sometimes even the tiniest baby step is still a step, and for yourselves as individuals, we don’t always get the big success that we hope for, so you need to make sure that you have a continuous success loop in your own environment and for yourself, to make sure that you maintain your own sense of moving forward, I guess, so even small steps are very important steps.

Audience Member: Fail early, fail often to succeed sooner.

Pia Waugh: That’s probably a better sentence.

There’s a lot of lessons that we can learn from other sectors and from other industries, from both the corporate and community sectors, that don’t always necessarily translate in the first instance; but they’re tried and true in those sectors. Understanding why they work, and why they do or in some cases don’t map to our sector, is I think very important.

Finally, this is the last thing I want to leave you with. The number of times that I hear someone say, “Oh, we can’t possibly do that. We need to have good leadership. Leadership is what will take us over the line.” We are the leaders of this sector. We are the future of the public service, and so there’s a need to start acting like it as well – not just you, all of us. You lead through doing. You establish change through being the change you want to see, to quote another great guy. When you realise that a large proportion of the SES is actually retiring in the next five to ten years, and that we are all the future of the public service, it means that we can be those leaders. Now if you go to your boss and say, “I want to do this great, cool thing and it’s going to be great and I’m going to go and work with all these other people. I’m going to spend lots of your money,” yeah, they’re probably going to get a little nervous. If you say to them “here’s why this is going to be good for you, I want to make you look good, I want to achieve something great that’s going to help our work, it’s going to help our area, it’s going to help our department, it’s going to help our Minister, it aligns with all of these things”, you’re going to have a better chance of getting it through. There’s a lot of ways that you can demonstrate leadership just at our level, just by working with people directly.

So I spoke before about how the first thing I did was go and research what everyone else was doing. I followed that up by establishing an informal forum, a series of informal get-togethers. One of those informal get-togethers is a cross-jurisdictional meeting with open data people from other jurisdictions. What that means is every two months I meet with the people who are in charge of the open data policies and practice from most of the states and territories, from a bunch of local governments, and from a few other departments at the federal level, just to talk about what we’re all doing. I made very clear from the start: this is not formal, this is not mandatory, it’s not top down, it’s not the feds trying to tell you what to do, which is an unfortunate although often accurate picture that the other jurisdictions have of us – unfortunate because there’s so much we can learn from them. By just setting that up and getting the tone of it right, everyone is sharing policy, sharing outcomes, sharing projects, starting to share code, starting to share functionality, and we’ve got to a point, only I guess eight months into the establishment of that group, where we’ve really started to get some great benefits for everyone and it’s bringing everyone’s baseline up.

There’s a lot of leadership to be had at every level, and identifying what you can do in your job today is very important, rather than waiting for permission. I remember, and I’m going to tell a little story that I hope John doesn’t mind, when I started in my job I got a week into it and said to John, “So, I’ve been here a week, I really don’t know if this is what you wanted from me. Are you happy with how I’m going?” He said, “Well Pia, don’t change what you’re doing, but I just want to give you a bit of feedback. I’ve never been in a meeting before with outsiders, with vendors or whatever, and had an EL speak before.” I said, “Oh, what’s wrong with your department? What’s wrong with ELs?” Because certainly by a particular level you have expertise, you have knowledge, you have something to contribute, so why wouldn’t you be encouraging people of all levels, but certainly of senior levels, to be actually speaking and engaging in the meetings? It was a really interesting thought experiment and discussion to be had about the culture.

The number of people that have said to me, just quietly, “Hey, we’d love to do that but we don’t want to get any criticism.” Well, criticism comes in two forms. It’s either constructive or unconstructive. Now it can be given negatively, it can be given positively, it can be given in a little bottle in the sea, but it only comes in those two forms. If it’s constructive, even if it’s yelled at you online, if it’s something to learn from, take that, roll with it. If it’s unconstructive, you can ignore it safely. It’s about having self-knowledge, a certain amount of clarity, and comfort with the idea that you can improve and that sometimes other people will be the mechanism for you to improve – in a lot of cases other people will be the mechanism for you to improve. Conflict is not a bad thing. Conflict is actually a very healthy thing in a lot of ways, if you engage with it. It’s really up to us how we engage with conflict or with criticism.

This is again where I’m going to be a slight outsider, but it’s very, very hard – not that I’ve seen this directly, but everything I hear is that it’s very, very hard – to get rid of someone in the public service. So I put to you: why would you not be brave? Seriously. You can’t have it both ways. You can’t say, “Oh, I’m so scared about criticism, I’m so scared blah, blah, blah,” when at the same time it’s that difficult to be fired – why not be brave? We can do great things, and it’s up to us as individuals not to wait for permission to do great things. We can all do great things at lots and lots of different levels. Yes, there will be bad bosses and yes, there will be good bosses, but if you continually pin your ability to shine on those external factors and wait, then you’ll be waiting a long time. Anyway, it’s just my opinion.

So be the leader, be the leader that you want to see. That’s I guess what I wanted to talk about with collaborative innovation.

Essays: Improving the Public Policy Cycle Model

I don’t have nearly enough time to blog these days, but I am doing a bunch of writing for university. I decided I would publish a selection of the (hopefully) more interesting essays in case people find them useful :) Please note, my academic writing is pretty awful, but hopefully some of the ideas, research and references are useful.

For this essay, I had the most fun developing my own alternative public policy model at the end. Would love to hear your thoughts. Enjoy, and comments welcome!

Question: Critically assess the accuracy of and relevance to Australian public policy of the Bridgman and Davis policy cycle model.

The public policy cycle developed by Peter Bridgman and Glyn Davis is both relevant to Australian public policy and, simultaneously, not an accurate representation of developing policy in practice. This essay outlines some of the ways the policy cycle model both assists and distracts from quality policy development in Australia, and provides an alternative model as a thought experiment based on the author’s policy experience and reflections on the research conducted around the applicability of Bridgman and Davis’ policy cycle model.

Background

In 1998 Peter Bridgman and Glyn Davis released the first edition of The Australian Policy Handbook, a guide developed to assist public servants to understand and develop sound public policy. The book includes a policy cycle model, developed by Bridgman and Davis, which portrays a number of cyclic, logical steps for developing and iteratively improving public policy. This policy model has attracted much analysis, scrutiny, criticism and debate since it was first developed, and it continues to be taught as a useful tool in the kit of any public servant. The fifth and most recent edition of the Handbook was released in 2012, with Catherine Althaus joining Bridgman and Davis as an author from the fourth edition (2007) onwards.

The policy cycle model

The policy cycle model presented in the Handbook is below:

(Image: the Bridgman and Davis policy cycle model)

The model consists of eight steps in a circle that is meant to encourage an ongoing, cyclic and iterative approach to developing and improving policy over time with the benefit of cumulative inputs and experience. The eight steps of the policy cycle are:

  1. Issue identification – a new issue emerges through some mechanism.

  2. Policy analysis – research and analysis of the policy problem to establish sufficient information to make decisions about the policy.

  3. Policy instrument development – the identification of which instruments of government are appropriate to implement the policy. Could include legislation, programs, regulation, etc.

  4. Consultation (which permeates the entire process) – garnering of external and independent expertise and information to inform the policy development.

  5. Coordination – once a policy position is prepared it needs to be coordinated through the mechanisms and machinations of government. This could include engagement with the financial, Cabinet and parliamentary processes.

  6. Decision – a decision is made by the appropriate person or body, often a Minister or the Cabinet.

  7. Implementation – once approved the policy then needs to be implemented.

  8. Evaluation – an important process to measure, monitor and evaluate the policy implementation.

In the first instance it is worth reflecting on the stages of the model, which imply the entire policy process is centrally managed and coordinated by the policy makers, which is rarely true. The model thus gives very little indication of who is involved, where policies originate, the external factors and pressures at play, or how policies go from a concept to being acted upon. Even to just develop a position, resources must be allocated, and the development of a policy is thus prioritised above the development of some other policy competing for resourcing. Bridgman and Davis do very little to help the policy practitioner or entrepreneur understand the broader picture, which is vital in the development and successful implementation of a policy.

The policy cycle model is relevant to Australian public policy in two key ways: 1) it presents a useful reference model for identifying various potential parts of policy development; and 2) it is instructive for policy entrepreneurs in understanding the expectations and approach taken by their peers in the public service, given that the Bridgman and Davis model has been taught to public servants for a number of years. In the first instance the model presents a basic framework that policy makers can use to go about the thinking of and planning for their policy development. In practice, some stages may be skipped, reversed or compressed depending upon the context, or a completely different approach altogether may be taken, but the model gives a starting point in the absence of anything formally imposed.

Bridgman and Davis themselves paint a picture of vast complexity in policy making whilst holding up their model as both an explanatory and prescriptive approach, albeit with some caveats. This is problematic because public policy development almost never follows a cleanly structured process. Many criticisms of the policy cycle model question its accuracy as a descriptive model given it doesn’t map to the experiences of policy makers. This draws into question the relevance of the model as a prescriptive approach as it is too linear and simplistic to represent even a basic policy development process. Dr Cosmo Howard conducted many interviews with senior public servants in Australia and found that the policy cycle model developed by Bridgman and Davis didn’t broadly match the experiences of policy makers. Although they did identify various aspects of the model that did play a part in their policy development work to varying degrees, the model was seen as too linear, too structured, and generally not reflective of the at times quite different approaches from policy to policy (Howard, 2005). The model was however seen as a good starting point to plan and think about individual policy development processes.

Howard also discovered that political engagement changed throughout the process and from policy to policy depending on government priorities, making a consistent approach to policy development quite difficult to articulate. The common need for policy makers to respond to political demands and tight timelines often leads to an inability to follow a structured policy development process resulting in rushed or pre-canned policies that lack due process or public consultation (Howard, 2005). In this way the policy cycle model as presented does not prepare policy-makers in any pragmatic way for the pressures to respond to the realities of policy making in the public service. Colebatch (2005) also criticised the model as having “not much concern to demonstrate that these prescriptions are derived from practice, or that following them will lead to better outcomes”. Fundamentally, Bridgman and Davis don’t present much evidence to support their policy cycle model or to support the notion that implementation of the model will bring about better policy outcomes.

Policy development is often heavily influenced by political players and agendas, which is not captured in Bridgman and Davis’ policy cycle model. Some policies are effectively handed over to the public service to develop and implement, but often policies have strong political involvement, with the outcomes of policy development ultimately given to the respective Minister for consideration, who may also take the policy to Cabinet for final ratification. This means even the most evidence based, logical, widely consulted and highly researched policy position can be overturned entirely at the behest of the government of the day (Howard, 2005). The policy cycle model does not capture, nor prepare public servants for how to manage, this process. Arguably, the most important aspects of successful policy entrepreneurship lie outside the policy development cycle entirely, in the mapping and navigation of the treacherous waters of stakeholder and public management, myriad political and other agendas, and other policy areas competing for prioritisation and limited resources.

The changing role of the public in the 21st century is not captured by the policy cycle model. The proliferation of digital information and communications creates new challenges and opportunities for modern policy makers. They must now compete for influence and attention in an ever expanding and contestable market of experts, perspectives and potential policies (Howard, 2005), which is a real challenge for policy makers used to being the single trusted source of knowledge for decision makers. This has moved policy development and influence away from the traditional Machiavellian bureaucratic approach of an internal, specialised, tightly controlled monopoly on advice, towards a more transparent and inclusive though more complex approach to policy making. Although Bridgman and Davis go part of the way to reflecting this post-Machiavellian approach to policy by explicitly including consultation and the role of various external actors in policy making, they still maintain the Machiavellian role of the public servant at the centre of the policy making process.

The model does not clearly articulate the need for public buy-in and communication of the policy throughout the cycle, from development to implementation. There are a number of recent examples of policies that were developed and implemented well by any traditional public service standards, but which the general public has seen as complete failures due to a lack of, or a negative, public narrative around the policies. Key examples include the Building the Education Revolution policy and the insulation scheme. In both cases, the policy implementation largely met the policy goals and independent analysis showed the policies to be quite successful through quantitative and qualitative assessment. However, both policies were announced very publicly and politically prior to implementation and then had little to no public narrative throughout implementation, leaving the public narrative around both to be determined by media reporting on issues and a Government Opposition motivated to undermine the policies. The policy cycle model, in focusing on consultation, ignores the necessity of a public engagement and communication strategy throughout the entire process.

The Internet also presents significant opportunities for policy makers to get better policy outcomes through public and transparent policy development. The model does not reflect how to strengthen a policy position in an open environment of competing ideas and expertise (aka, the Internet), though it is arguably one of the greatest opportunities to establish evidence-based, peer reviewed policy positions with a broad range of expertise, experience and public buy-in from experts, stakeholders and those who might be affected by a policy. This establishes a public record for consideration by government. A Minister or the Cabinet has the right to deviate from these publicly developed policy recommendations as our democratically elected representatives, but it increases the accountability and transparency of the political decision making regarding policy development, thus improving the likelihood of an evidence-based rather than purely political outcome. History has shown that transparency in decision making tends to improve outcomes as it aligns the motivations of those involved to pursue what they can defend publicly. Currently the lack of transparency at the political end of policy decision making has led to a number of examples where policy makers are asked to rationalise policy decisions rather than investigate the best possible policy approach (Howard, 2005). Within the public service there is a joke about developing policy-based evidence rather than the generally desired public service approach of developing evidence-based policy.

Although there are clearly issues with any policy cycle model in practice, due to the myriad factors involved and the at times quite complex landscape of influences, by constantly referencing throughout their book the importance of “good process” to “help create better policy” (Bridgman & Davis, 2012), Bridgman and Davis imply their model is such a “good process” and subtly encourage a check-box style, formally structured and iterative approach to policy development. The policy cycle in practice becomes impractical and inappropriate for much policy development (Everett, 2003). Essentially, it gives new and inexperienced policy makers a false sense of confidence in a model put forward as descriptive which is at best just a useful point of reference. In a book review of the 5th edition of the Handbook, Kevin Rozzoli supports this by criticising the policy cycle model as being too generic and academic rather than practical, and compares it to the relatively pragmatic policy guide by Eugene Bardach (2012).

Bridgman and Davis do concede that their policy cycle model is not an accurate portrayal of policy practice, calling it “an ideal type from which every reality must curve away” (Bridgman & Davis, 2012). However, they still teach it as a prescriptive and normative model from which policy developers can begin. This unfortunately provides policy developers with an imperfect model that can’t be implemented in practice, and little guidance as to when it is implemented well or how to successfully “curve away”. At best, the model establishes some useful ideas that policy makers should consider, but as a normative model it rapidly loses traction as every implementation of the model will inevitably “curve away”.

The model also embeds in the minds of public servants some questionable assumptions about policy development, such as: the role of the public service as a source of policy; the idea that good policy will be naturally adopted; a simplistic view of implementation when that is arguably the trickiest aspect of policy-making; a top down approach to policy that doesn’t explicitly engage or value input from administrators, implementers or stakeholders throughout the entire process; and very little assistance, including no framework in the model, for the process of healthy termination or finalisation of policies. Bridgman and Davis effectively promote the virtues of a centralised policy approach whereby the public service controls the process, inputs and outputs of public policy development. However, this perspective is somewhat self-serving according to Colebatch, as it supports a central agency agenda approach. The model reinforces a perspective that policy makers control the process and consult where necessary, as opposed to being just part of a necessarily diverse ecosystem where they must engage with experts, implementers, the political agenda, the general public and more to create robust policy positions that might be adopted and successfully implemented. The model and handbook as a whole reinforce the somewhat dated and Machiavellian idea of policy making as a standalone profession, with policy makers the trusted source of policies. Although Bridgman and Davis emphasise that consultation should happen throughout the process, modern policy development requires ongoing input and indeed co-design from independent experts, policy implementers and those affected by the policy. This is implied, but the model offers no pragmatic way to do policy engagement in this way. Without these three perspectives built into any policy proposal, the outcomes are unlikely to be informed, pragmatic, measurable, implementable or easily accepted by the target communities.

The final problem with the Bridgman and Davis public policy development model is that by focusing so completely on the policy development process, not looking at implementation, and not considering the engagement of policy implementers in the policy development process, the resulting policy is unlikely to be pragmatic or take implementation opportunities and issues into account. Basically, the policy cycle model encourages policy makers to focus on the policy itself, iterative and cyclic though it may be, as an outcome rather than on practical outcomes that support the policy goals. The means is mistaken for the ends. This approach artificially delineates policy development from implementation, and the motivations of those involved in each are not necessarily aligned.

The context of the model in the Handbook is also somewhat misleading, which affects the accuracy and relevance of the model. The book oversimplifies the roles of various actors in policy development, placing policy responsibility clearly in the domain of Cabinet, Ministers, the Department of Prime Minister & Cabinet and senior departmental officers (Bridgman and Davis, 2012, Figure 2.1). Arguably, this conflicts with the supposed point of the book, to support even quite junior or inexperienced public servants throughout a government administration to develop policy. It does not match reality in practice, thus confusing students at best and establishing misplaced confidence in outcomes derived from policies developed according to the Handbook at worst.

(Image: spheres of government)

An alternative model

Part of the reason the Bridgman and Davis policy cycle model has had such traction is that it was created in the absence of much in the way of pragmatic advice for policy makers, and thus it has been useful in filling a need, regardless of how effective it has been in doing so. The authors have, however, not significantly revisited the model since it was developed in 1998. A revisit would be quite useful given new technologies have established both new mechanisms for public engagement and new public expectations to co-develop, or at least have a say about, the policies that shape people’s lives.

From my own experience, policy entrepreneurship in modern Australia requires a highly pragmatic approach that takes into account the various new technologies, influences, motivations, agendas, competing interests, external factors and policy actors involved. This means researching in the first instance the landscape and then shaping the policy development process accordingly to maximise the quality and potential adoptability of the policy position developed. As a bit of a thought experiment, below is my attempt at a more usefully descriptive and thus potentially more useful prescriptive policy model. I have included the main aspects involved in policy development, but have included a number of additional factors that might be useful to policy makers and policy entrepreneurs looking to successfully develop and implement new and iterative policies.

(Image: proposed alternative policy model)

It is also important to identify the inherent motivations of the various actors involved in the pursuit, development of and implementation of a policy. In this way it is possible to align motivations with policy goals or vice versa to get the best and most sustainable policy outcomes. Where these motivations conflict or leave gaps in achieving the policy goals, it is unlikely a policy will be successfully implemented or sustainable in the medium to long term. This process of proactively identifying motivations and effectively dealing with them is missing from the policy cycle model.

Conclusion

The Bridgman and Davis policy cycle model is demonstrably inaccurate and yet is held up by its authors as a reasonable descriptive and prescriptive normative approach to policy development. Evidence is lacking for both the model’s accuracy and any tangible benefits in applying the model to a policy development process, and research into policy development across the public service continually deviates from and often directly contradicts the model. Although Bridgman and Davis concede policy development in practice will deviate from their model, there is very little useful guidance as to how to implement or deviate from the model effectively. The model is also inaccurate in that it overly simplifies policy development, leaving policy practitioners to learn for themselves about external factors, the various policy actors involved throughout the process, the changing nature of public and political expectations, and the myriad other realities that affect modern policy development and implementation in the Australian public service.

Regardless of the policy cycle model’s inaccuracy, it has existed and been taught for nearly sixteen years. It has shaped the perspectives and processes of countless public servants, and thus is relevant in the Australian public service in so far as it has been used as a normative model or starting point for countless policy developments and provides a common understanding and lexicon for engaging with these policy makers.

The model is therefore both inaccurate and relevant to policy entrepreneurs in the Australian public service today. I believe a review and rewrite of the model would greatly improve the advice and guidance available for policy makers and policy entrepreneurs within the Australian public service and beyond.

References

(Please note, as is usually the case with academic references, most of these are not freely available to the public. Sorry. It is an ongoing bugbear of mine and many others.)

Althaus, C, Bridgman, P and Davis, G. 2012, The Australian Policy Handbook. Sydney, Allen and Unwin, 5th ed.

Bridgman, P and Davis, G. 2004, The Australian Policy Handbook. Sydney, Allen and Unwin, 3rd ed.

Bardach, E. 2012, A practical guide for policy analysis: the eightfold path to more effective problem solving, 4th Edition. New York. Chatham House Publishers.

Everett, S. 2003, The Policy Cycle: Democratic Process or Rational Paradigm Revisited?, The Australian Journal of Public Administration, 62(2) 65-70

Howard, C. 2005, The Policy Cycle: a Model of Post-Machiavellian Policy Making?, The Australian Journal of Public Administration, Vol. 64, No. 3, pp3-13.

Rozzoli, K. 2013, Book Review of The Australian Policy Handbook: Fifth Edition., Australasian Parliamentary Review, Autumn 2013, Vol 28, No. 1.

Attending Linux.conf.au 2015

Really excited to note that I’m going to be attending Linux.conf.au 2015 and running the Cloud, Containers, and Orchestration mini-conf. Will be issuing the CfP for that shortly, but just wanted to give a shout (and create the category feed for LCA planet…) about heading to New Zealand next January. Extremely psyched to be going to LCA once again!

linux.conf.au: day 4

Another successful day of Linux geeking has passed; this week is going surprisingly quickly…

Some of the day’s highlights:

  • James Bottomley spoke on the current state of Linux UEFI support and demonstrated the tools and processes to install and manage keys and hashes for the installed software. Would have been interesting to have Matthew Garrett at LCA this year to present his somewhat different solution in comparison.
  • Avi Miller from Oracle did an interesting presentation on a new Linux feature called “Transcendent Memory“, which is a solution to the memory ballooning problems for virtualised environments. Essentially it works by giving the kernel the option to request more memory from another host, which could be the VM host, or even another host entirely connected via 10GigE or Infiniband, and having the kernel request and release memory when required. To make it even more exciting, the memory doesn’t have to be just RAM – SSDs are also usable – meaning you could add a couple of memory hosts to your Xen (and soon KVM) environments and stack them with RAM and SSD to then be provided to all your other guests as memory ballooning space. It’s a very cool concept and one I intend to review further in the future.
  • To wrap up the day, Michael Schwern presented on the 2038 bug – the problem where 32-bit computers are unable to keep time past January 2038 and wrap around to 1901, due to the limits of a signed 32-bit time counter (see wikipedia); there’s a small sketch of the wraparound just after this list. Time is something that always appears very simple, yet is extremely complex to do right once you consider timezones and other weirdness like leap years/seconds.
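
For anyone curious, here’s a rough sketch (mine, not from the talk) of why the rollover lands in 1901: a signed 32-bit time_t counts seconds from 1 January 1970, so it tops out in January 2038 and then wraps to its most negative value, which corresponds to December 1901.

import datetime

EPOCH = datetime.datetime(1970, 1, 1)

# Largest value a signed 32-bit time_t can hold.
print(EPOCH + datetime.timedelta(seconds=2**31 - 1))   # 2038-01-19 03:14:07

# One second later the counter wraps to -2**31, landing back in 1901.
print(EPOCH + datetime.timedelta(seconds=-2**31))      # 1901-12-13 20:45:52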

The end of time is here! Always trust announcements by a guy wearing a cardboard and robes.

The conference presentations finished up with a surprise talk from Simon Hackett and Robert Llewellyn (of Red Dwarf fame), which was somewhat entertaining, but not highly relevant for me – personally I’d rather have heard more from Simon Hackett on the history and future expectations of the ISP industry in Australia than have them debate their electric cars.

Thursday evening was the Penguin Dinner, the (usually) formal dinner held at each LCA. This year, rather than the usual sit-down 3-course dinner, the conference decided to do a BBQ-style event up at the Observatory on Mount Stromlo.

The Penguin Dinner is always a little pricey at $80, but for a night out, good food, drinks and spending time with friends, it’s usually a fun and enjoyable event. Sadly this year had a few issues that kind of spoilt it, at least for me personally, with some major failings on the food and transport which led to me spending only 2 hours up the mountain and feeling quite hungry.

At the same time, LCA is a volunteer organised conference and I must thank them for making the effort, even if it was quite a failure this year – I don’t necessarily know all the behind-the-scenes factors, although the conflicting/poor communications really didn’t put me in the best mood that night.

Next year a professional events coordinator is being hired to help with the event, so hopefully their experience handling logistics and catering will help avoid a repeat of the issues.

On the plus side, for the limited time I spent up the mountain, I got some neat photographs (I *really* need to borrow Lisa’s DSLR rather than using my cellphone for this stuff) and spent some good time discussing life with friends lying on the grass looking at the stars after the sun went down.

Part of the old burnt-out observatory

Sun setting along the ridge.

What is it with geeks and blue LEDs? ;-)

The other perk from the Penguin Dinner was the AWESOME shirts they gave everyone at the conference as a surprise. Lisa took this photo when I got back to Sydney since she loves it [1] so much.

Paaaartay!

[1] She hates it.

linux.conf.au: day 3

Having reached mid-week, my morning wakeup is getting increasingly difficult after the late nights; thankfully there were large amounts of deep fried potato and coffee readily available.

Breakfast of champions – just add cheese and it would be a meal.

Coffee Coffee Coffee Coffee Coffee Coffee Coffee Coffee Coffee

The day had some interesting talks; most of the value I got was out of the web development space:

  • Andy Fitzsimon did an interesting presentation on design and how to approach designing applications or websites and the terminologies that developers use.
  • Sarah Sharp presented on “vampire mice” – essentially a lot of USB devices don’t correctly obey the USB power suspend options. The result is that by enabling USB suspend for all your devices and disconnecting those that don’t obey, considerable power can be saved – one audience member found he could save 4W by sleeping all his USB devices. I also discovered that newer versions of Powertop now provide the ability to select particular USB devices for power-save mode (there’s a rough sketch of the underlying sysfs knob just after this list).
  • There was a really good talk by Joel Stanley, probably one of the most interesting talks that day, on how they designed and built some hardware for doing digital radio transmissions using a radio circuit connected into an Android phone and the challenges encountered of doing hardware integration with Android.
  • We had an update on IPv6 adoption by Geoff Huston – sadly, as expected, we’re dangerously low on IPv4 space, yet IPv6 adoption isn’t taking place particularly quickly either, with Internode still being the only major AU ISP with dual-stacked addressing for consumers. On a side note, it’s really awesome to see a former keynote presenter come back as a regular presenter and give a talk – that kind of community engagement really adds to my respect for them.
  • My friend Adam Harvey did another awesome web development talk, this time presenting on some of the new CSS3 techniques including animation and transitions with some demonstrations on how these can work.
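
As a rough illustration of the USB power-save mechanism mentioned above (my own sketch, not from Sarah’s talk): Powertop ultimately flips the per-device power/control files under sysfs, which you can also do directly. The paths below assume the standard /sys/bus/usb layout and writing to them requires root.

import glob

# Enable USB autosuspend ('auto') for every device; writing 'on' would disable it.
for ctl in glob.glob('/sys/bus/usb/devices/*/power/control'):
    try:
        with open(ctl, 'w') as f:
            f.write('auto')
        print('autosuspend enabled for', ctl)
    except OSError as err:
        print('could not update', ctl, '-', err)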

Open source radio receiver with Android phone coupled.

users: delighted, presenter: smug :-P

Spot the possum!

With all the talks this week, I’m feeling particularly motivated to do some more development, starting with writing some proper new landing pages for some of my projects.

Playing with new HTML5/CSS3 effects having been inspired to upskill my web development skills.

linux.conf.au: day 2

The second day of linux.conf.au has been and gone; it was another day of interesting miniconf talks and many geeky discussions with old and new friends.

Jethro: Booted, with the power of coffee!

The keynote was a really good talk by Radia Perlman about how engineers approach developing network protocols, and an interesting look at the history of STP and its designed replacement, TRILL. Great to see a really technical female keynote speaker at LCA this year, particularly one as passionate about her topic as Radia.

The conference WiFi is still pretty unhappy this year; I’ve been suffering pretty bad latency and packet loss (30-50%) for most of the past few days – when I’ve been able to find an AP at all, as they seem to be located only around the lecture rooms. Yesterday afternoon it seemed to start improving, however, so it may be that the networking team have beaten the university APs into submission.

No internet makes sad Jethro sad. :'(

Of course, some of the projectors decided not to play nicely, which seems pretty much business as usual when it comes to projectors. It appears that the projector in question would complain about the higher refresh rates provided by DVI and HDMI connected devices, but functioned correctly with VGA.

Someone did an interesting talk a couple of LCAs ago on the issue; apparently many projectors lie about their true capabilities and request resolutions and refresh rates from the computer that are higher than what they can actually support, which really messes with any modern operating system’s auto-detection.

Lending my VGA enabled Thinkpad to @lgnome whist a @chrisjrn observes.

A startled @colmiga approaches!

Geeks listening intently to concurrent programming.

@lgnome pushing some crazy new drugs to all the kiddies

A few of my friends were delivering talks today, so I spent my time between the Browser miniconf and Open Programming miniconf, and picked up some interesting new technologies and techniques to look at:

  • Adam Harvey’s PHP talks were great as usual, always good to get an update on the latest developments in the PHP world.
  • Francois Marier from Mozilla NZ presented on Content Security Policy, a technique I wasn’t aware of until now. Essentially it allows you to set a header defining which sites should be trusted as sources of CSS, Javascript and image content, allowing a well-developed site to be locked down to prevent many forms of XSS (cross-site scripting).
  • Francois also spoke briefly about HTTP Strict Transport Security, a header which can be used by SSL websites to fix the long-standing problem of users being intercepted by a bad proxy and served up a hacked HTTP-only version of the website. Essentially this header tells your browser that your site should only ever be accessed by HTTPS – anything that then directs your browser to HTTP will result in a security block, protecting the user, since your browser has been told from its previous interaction that the site should only ever be SSL. It’s not perfect, but it’s a great step forwards – as long as the first connection is made on a trusted, non-intercepted link, it makes man-in-the-middle attacks impossible. (A rough sketch of what these two headers look like follows after this list.)
  • Daniel Nadasi from Google presented on AngularJS, a modern Javascript framework suitable for building complex applications with features designed to reduce the complexity of developing the required Javascript.
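
For reference, here’s a minimal sketch of what those two headers might look like when set by a small Python web app – my own illustration, not code from the talks, and the exact policy values are just placeholders.

from wsgiref.simple_server import make_server

def app(environ, start_response):
    headers = [
        ('Content-Type', 'text/html'),
        # Content Security Policy: only trust our own origin for scripts,
        # styles and images, blocking most injected/XSS content.
        ('Content-Security-Policy',
         "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'"),
        # HTTP Strict Transport Security: tell the browser to use HTTPS only
        # for this site for the next year (value is in seconds).
        ('Strict-Transport-Security', 'max-age=31536000; includeSubDomains'),
    ]
    start_response('200 OK', headers)
    return [b'<html><body>Hello, LCA!</body></html>']

if __name__ == '__main__':
    make_server('', 8000, app).serve_forever()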

After that, dinner at one of the (many!) Asian restaurants in the area, followed by some delicious beer at the Wig and Pen.

Either I’ve already had too many beers, or there’s a giant stone parcel in my way.

Onwards to delicious geekiness!

Delicious hand pulled pale ale.

The beetroot beer is an interesting idea. But some ideas should just not be attempted. :-/

Native Australian night life! This little fellow was very up close and friendly.

Linux.conf.au native wildlife. ;-)

Another great day, looking forwards to Wednesday and the rest of the week. :-)

linux.conf.au: day 1

First proper day of linux.conf.au today, starting with breakfast and the quest of several hundred geeks to find and consume coffee.

Some of us went a bit overboard to get their exact daily coffee fix….

After acquiring coffee, we started the day with a keynote by the well-known Bdale Garbee, talking about a number of (somewhat controversial) thoughts and reflections on Linux and the open source ecosystem in regard to uptake by commercial companies.

Keynote venue.

Bdale raised some really good points, particularly how GNU/Linux isn’t a sellable idea to OEM vendors on cost – many vendors pay nothing for Microsoft licensing, or even make a profit due to the amount of preloaded crapware they ship with the computers. Vendors are unlikely to ship GNU/Linux unless there is sufficient consumer demand or a feature set that makes it a compelling choice.

My take on the talk was that Bdale was advocating that we aren’t going to win the desktop with mass popularity – instead of trying to build a desktop for the average joe, we should build desktops that meet our own needs as power users.

It’s an interesting approach – some of the more recent endeavours by desktop developers have led to environments that newer users like but power users hate (eg GNOME 3). As a power user, I share this view: I’d rather we develop a really good power user OS than an OS designed for the simplest user. Having said that, the nice thing about open source is that developers can target different audiences and share each other’s work.

Bdale goes on to state that the year of the Linux desktop isn’t relevant – it’s something we’re probably never going to win – but we have won the year of Linux on the mobile, which is going to replace conventional workstations more and more for the average user and become the dominant device used.

It’s something I personally believe as well; I already have some friends who *only* own a phone or tablet, instead of a desktop or laptop, and use it for all their communications. In this space, Android/Linux is selling extremely well.

And although it’s not the conventional GNU/Linux space we know and love, and it still has its share of problems, a future where Android/Linux is the dominant device OS is much more promising than the current Windows/MacOS duopoly.

The rest of the day had a mix of miniconf talks – there wasn’t anything particularly special for me, but there were some good highlights during the day:

  • Sherri Cabral did a great talk on what it means to be a senior sysadmin, stating that a proper senior sysadmin knows how to solve problems by experience (not guesswork), works to continuously automate themselves out of a job with better tools, and works to impart knowledge onto others.
  • Andrew Bartlett did a brief update on Samba 4 (the Linux CIFS/SMB file system implementation) – it’s production ready now and includes proper Active Directory support. The trade-off is that in order to implement AD, you can’t use an external LDAP directory or Kerberos server when using Samba 4 in AD server mode.
  • Nick Clifford did an entertaining presentation on the experiences and suffering from working with SNMP, turns out that both vendor and open source SNMP implementations are generally quite poor quality.
  • Several interesting debates over the issues with our current monitoring systems (Nagios, Icinga, Munin, etc) and how we can fix them and scale better – no clear “this is the solution” responses, but some good food for thought.

Overall it was a good first day, followed up by some casual drinks and chats with friends – thankfully we even managed to find an open liquor store in Canberra on a public holiday.

Poor @lgnome expresses his pain at yet another closed liquor store before we located an open location.


 

 

linux.conf.au: day 0

It’s time for the most important week of the year – linux.conf.au – which is being held in Canberra this year. I’m actually going to try and blog each day this year, unlike last year, which still has all my photos sitting in the “to be blogged” folder. :-)

Ended up taking the bus down from Sydney to Canberra – at only around $60 and a 3 hour trip, it made more sense to take the bus down, rather than go through the hassle of getting to and from the airports and all the security hassles of flying.

Ended up having several other linux.conf.au friends on the bus, which makes for an interesting trip – and having a bus with WiFi and power was certainly handy.

I am geek, hear me roar!


Horrifying wail of the Aucklander!


The road trip down to Canberra wasn’t particularly scenic – most of the route is just dry Australian bush and motorways. Generally it seems that inter-city road trips in AU tend not to be wildly scenic, unlike most of the ones I take in NZ.

Canberra itself is interesting; my initial thought on entering the city was that it’s kind of a cross between Rotorua and post-quake Christchurch – most of the city is low-rise 5-10 story buildings and low density sprawl, and it’s extremely quiet with both the university and parliament on leave. In fact many have already commented it would be a great place to film a zombie movie simply due to its eerily deserted nature.

Considering it’s a designed city, I do wonder why they chose such a sprawled design – IMHO it would have been way better to have a very small, high density, easily walkable CBD with massive parklands around it. Canberra also made the mistake of not putting in light rail, instead relying on buses and cars as primary transport.

Neat fountain in town



The Aussies can never make fun of us Kiwis and sheep again… at least we don’t have THIS in our capital city O_o

Impressively large transmission tower for such a small city.


One nice side of Canberra is that with the sprawl there tends to be a lot of greenery (or what passes for greenery in the aussie heat!) around the town and campus, including a bit of wildlife – so far I’ve seen rabbits, cockatoos and lizards, which makes a nice change from Sydney’s wildlife viewing of giant rats running over concrete pavements.

Sqwark!


The evening was spent tracking down the best pub options nearby, and we were fortunate enough to discover the Wig and Pen, a local British-style brewery/pub with about 10 of their own beers on hand-pulled taps. I’m told that when the conference was here in Canberra in 2005, the attendees drank the pub dry – twice. Hopefully they have more beer in stock this year.


First beer casualty from the conference – laptop being stood vertically to drain, whilst charging a cellphone.

Normally every year the conference provides a swag bag; typically the bag is pretty good and there are usually a few good bits in there, as well as spammy items like brochures and cheap branded gadgets (USB speakers, reading lights, etc).

This year they’ve cut down hugely on the swag volume – my bag simply had some bathroom supplies (yes, that means there’s no excuse for the geeks not to wash this week), a water bottle, some sunblock and the conference t-shirt. I’m a huge fan of this reduction in waste and hope that other conferences continue on with this theme.

Arrrrrr there be some swag me mateys!


The conference accommodation isn’t the best this year – it’s clean and functional, but I’m really not a huge fan of the older shared dorm styles with communal bathroom facilities, particularly the showers with their coffin-style claustrophobic feel.

The plus side of course, is that the accommodation is always cheap and your evenings are filled with awesome conversations and chats with other geeks.

Looking forward to the actual talks – going to be lots of interesting cloud and mobile talks this year, as well as the usual kernel, programming and sysadmin streams. :-)

linux.conf.au 2013 plans

It’s nearing that important time of year when the NZ-AU open source flock congregates for the time-honoured tradition of linux.conf.au. I’ve said plenty about this conference in the past, and I’m going to make an effort to write a lot more this year about the conference.

There’s a bit of concern this year that there might not be a team ready to take up the mantle for 2014 – unfortunately linux.conf.au is a victim of its own success. As each year has grown bigger and better, it’s at the stage where a lot of volunteers consider it too daunting to take on themselves. Hopefully a team has managed to put together a credible bid for 2014; it would be sad to lose this amazing conference.

As I’m now living in Sydney, I can actually get to this year’s conference via a business class coach service, which is way cheaper than flying and really just as fast once you take the hassles of getting to the airport, going through security and flying into account. Avoiding the security theatre is a good enough reason for me really – I travel a lot, but I actually really hate all the messing about.

If you’re attending the conference and departing from Sydney (or flying into Sydney from NZ to then transfer to Canberra), I’d also suggest this bus service – feel free to join me on my booked bus if you want a chat buddy:

  • Depart Sydney, Sunday 27th Jan at 11:45 on bus GX273.
  • Depart Canberra, Saturday 2nd Feb at 14:00 on bus GX284.

The bus has WiFi and power and extra leg room, so should be pretty good if you want to laptop the whole way in style – for about $35 each way.

Leosticks are a gateway drug

At linux.conf.au earlier this year, the guys behind Freetronics gave every attendee a free Leostick Arduino compatible board.

As I predicted at the time, this quickly became the gateway drug – having been given an awesome 8-bit processor that can run off the USB port and can provide any possibility of input/output with both digital and analogue hardware, it was inevitable that I would want to actually acquire some hardware to connect to it!

Beware kids, this is what crack looks like.

My background in actual electronics hasn’t been great; my parents kindly got me a Dick Smith starter kit when I was much younger (remember back in the day when DSE actually sold components? Now I feel old :-/) but I never quite managed to grasp all the concepts, and a few attempts since then haven’t been that successful.

Part of the issue for me is that I learn by doing and by having good resources to refer to – back then it wasn’t so easy, but with internet connectivity and thousands of companies selling components to consumers, offering tutorials and circuit design information, it’s never been easier.

Interestingly I found it hard to get a really good “you’re a complete novice with no clue about any of this” guide, but the Arduino learning resources are very good at detailing how their digital circuits work, and with a bit of wikipediaing, they got me on the right track so far.

Also not having the right tools and components for the job is an issue, so I made a decision to get a proper range of components, tools, hookup wire and some Arduino units to make a few fun projects to learn how to make this stuff work.

I settled on 3 main projects:

  1. Temperature monitoring inside my home server – this is a whitebox machine so it doesn’t have too many sensors in good locations; I’d like to be able to monitor some of the major disk bays, fans, motherboard, etc.
  2. Out-of-band serial management and watchdog restart of my home server. This is more complex & ambitious, but all the components are there – with an RS232 to TTL conversion circuit I can read the server’s serial port from the Arduino, and use the Arduino and a transistor to control the reset header on the motherboard to power-restart if my slightly flaky CPU crashes again.
  3. Android controlled projects. This is a great one, since I have an abundance of older model Android phones available and would like a project that allows me to improve my C coding (Arduino) and to learn Java/Dalvik (Android). This ticks both boxes. ATM considering adding an Android phone to the Arduino server monitoring solution, or maybe hooking it into my car and using the Android phone as the display.

These cover a few main areas – to learn how to talk with 1-wire sensor devices, to learn how to use transistors as switches, to learn different forms of serial communication, and to learn some new programming languages.

Having next to no current electronic parts (soldering iron, breadboard and my general PC tools were about it) I went down the path of ordering a full set of different bits to make sure I had a good selection of tools and parts to make most circuits I want.

Ended up sourcing most of my electronic components (resistor packs, prototyping boards, hookup wire, general capacitors & ICs) from Mindkits in NZ, who also import a lot of Sparkfun stuff, giving them a pretty awesome range.

Whilst the Arduinos I ordered supply 5V and 3.3V, I grabbed a separate USB-powered supply kit for projects needing their own feed – much easier running off USB (of which I have an abundance of ports around) than adding yet-another-wallwart transformer. I haven’t tackled it yet, but I’m sure my soldering skills will be horrific and naturally worth blogging about in future to scare any competent electronics geek.

I also grabbed two Dallas 1-wire temperature sensors, which whilst expensive compared to the analog options are so damn simple to work with and can be daisy chained. Freetronics sell a breakout board model all pre-assembled, but they’re pricey and they’re so simple you can just wire the sensors straight back to your Arduino circuit anyway.

Next I decided to order some regular size Arduinos from Freetronics – if I start wanting to make my own shields (expansion boards for the Arduinos), I’d need a regular sized unit rather than the ultrasmall Leostick.

Ended up getting the classic Arduino Eleven/Uno and one of the Arduino USB Droids, which provide a USB host port so they can be used with Android phones to write software that can interface with hardware.

After a bit of time, all my bits arrived from AU and the US and now I’m all ready to go – planning to blog my progress as I get on with my electronics discovery – hopefully before long I’ll have some neat circuit designs up on here. :-)

Once I actually have a clue what I’m doing, I’ll probably go and prepare a useful resource on learning from scratch, to cover all the gaps that I found hard to fill, since learning this stuff opens up so many exciting projects once you get past the initial barrier.

Arduino Uno/Eleven making an LED blink. HIGH TECH STUFF ;-)

Push a button to make the LED blink! Sure you can do this with just a battery, switch and LED, but using a whole CPU to read the button state and switch on the LED is much geekier! ;-)

1-wire temperature sensors. Notably with a few more than one wire. ;-)

I’ll keep posting my adventures as I get further into the development of different designs, I expect this is going to become a fun new hobby that ties into my other two main interests – computers and things with blinky lights. :-)

AirNZ 747, yay!

For my trip to linux.conf.au in Melbourne/Ballarat I had rescheduled my flights from Wellington to Auckland due to the fact that I had booked my flights before my lovely lady dragged me up to Auckland to live with her.

It’s the first time I’ve ever flown out of Auckland International Airport and to my delight, I was booked on an Air New Zealand 747. This is the very first time I’ve ever flown on one, and with AirNZ phasing out the 747s in favor of 777s, I’m glad to have been able to fly on one before they get phased out entirely.

OMG PLANE! WITH AN UPSTAIRS!

I’d also like to add just for @thatjohn, that I got some awesome perks on the flight over, including a smile from a cute attendant and a FREE PEN! \m/

 

linux.conf.au 2014

I’ve just returned from my annual pilgrimage to linux.conf.au, which was held in Perth this year. It’s the first time I’ve been over to West Australia, it’s a whole 5 hour flight from Sydney –  longer than it takes to fly to New Zealand.

Perth’s climate is a very dry heat compared to Sydney, so although it was actually hotter than Sydney for most of the week, it didn’t feel quite as unpleasant – other than the final day which hit 45 degrees and was baking hot…

It’s also a very clean/tidy city, the well maintained nature was very noticeable with the city and gardens being immaculately trimmed – not sure if it’s always been like this, or if it’s a side effect of the mining wealth in the economy allowing the local government to afford it more effectively.

The towering metropolis of mining wealth.


As usual, the conference ran for 5 full days and featured 4-5 concurrent streams of talks during the week. The quality was generally high as always, although I feel that content selection has shifted away from deep-dive technical talks to more high level talks, and that OpenStack (whilst awesome) is taking up far too much of the conference and really deserves its own dedicated conference now.

I’ve prepared my personal shortlist of the talks I enjoyed most of all for anyone who wants to spend a bit of time watching some of the recorded sessions.

 

Interesting New(ish) Software

  1. RatticDB – A web-based password storage system written in Python by friends in Melbourne. I’ve been trialling it, and since then it’s been growing in popularity and awareness, as well as getting security audits (and fixes) [video] [project homepage].
  2. MARS Light – This is an insanely awesome replacement for DRBD designed to address the issues of DRBD when replicating over slower long WAN links. Like DRBD, MARS Light does block-level replication, so it is ideal for entire datacenter and VM replication. [video] [project homepage].
  3. Pettycoin – Proposal/design for an adjacent network to Bitcoin designed for microtransactions. It’s currently under development, but is an interesting idea. [video] [project homepage].
  4. Lua code in Mediawiki – the Mediawiki developers have added the ability for Wikipedia editors to write Lua code that is executed server side, which is pretty insanely awesome when you think about how normally nobody wants to give the untrusted public the ability to remotely execute code on their systems. The developers have taken Lua and created a “safe” version that runs inside PHP with restrictions to make this possible. [video] [project homepage].
  5. OpenShift – RedHat did a demonstration of their hosted (and open source) PaaS platform, OpenShift. It’s a solution I’ve been looking at before; if you’re a developer who doesn’t care about infrastructure management, it looks very attractive. [video] [project homepage].

 

Evolution of Linux

  1. D-Bus in the Kernel – Lennart Poettering (of Pulseaudio and SystemD fame) presented the efforts he’s been involved in to fix D-Bus’s shortcomings and move it into the kernel itself, making D-Bus a proper high speed IPC solution for the Linux kernel. [video]
  2. The Six Stages of SystemD – Presentation by an engineer who has been moving systems to SystemD and the process he went through and his thoughts/experience with SystemD. Really showcases the value that moving to SystemD will bring to GNU/Linux distributions. [video]
  3. Development Tools & The UNIX Philosophy – Excellent talk by a Python developer on how we should stop accepting command-line only tools as being the “right” or “proper” UNIX-style tools. Some tools (eg debuggers) are just better suited to graphical interfaces, and a GUI can still meet the UNIX philosophy of having one tool doing one thing well. I really like the argument he makes and have to agree; in some cases GUIs are just more suitable for some tasks. [video]

 

Walkthroughs and Warstories

  1. TCP Tuning for the Web – presented by one of the co-founders of Fastly showing the various techniques they use to improve the performance of TCP connections and handle issues such as DDOS attacks. Excellent talk by a very smart networking engineer. [video]
  2. Massive Scaling of Graphite – very interesting talk on the massive scaling issues involved to collect statistics with Graphite and some impressive and scary stats on the lifespans and abuse that SSDs will tolerate (which is nowhere near as much as they should!). [video]
  3. Maintaining Internal Forks – One of the FreeBSD developers spoke on how his company maintains an internal fork of FreeBSD (with various modifications for their storage product) and the challenges of keeping it synced with the current releases. Lots of common problems, such as pain of handling new upstream releases and re-merging changes. [video]
  4. Reverse engineering firmware – Matthew Garrett dug deep into vendor firmware configuration tools and explained how to reverse engineer their calls with various tools such as strace, IO and memory mapping tools. Well worth a watch purely for the fact that Matthew Garrett is an amazing speaker. [video]
  5. Android, The positronic brain – Interesting session on how to build native applications for Android devices, such as cross compiling daemons and how the internal structure of Android is laid out. [video]
  6. Rapid OpenStack Deployment – Double-length Tutorial/presentation on how to build OpenStack clusters. Very useful if you’re looking at building one. [video]
  7. Debian on AWS – Interesting talk on how the Debian project is using Amazon AWS for various serving projects and how they’re handling AMI builds. [video]
  8. A Web Page in Seven Syscalls – Excellent walk through on Varnish by one of the developers. Nothing too new for anyone who’s been using it, but a good explanation of how it works and what it’s used for. [video]

 

Other Cool Stuff

  1. Deploying software updates to ArduSat in orbit by Jonathan Oxer – Launching Arduino-powered satellites into orbit and updating them remotely to allow them to be used for educational and research purposes. What could possibly be more awesome than this? [video].
  2. HTTP/2.0 and you – Discussion of the emerging HTTP/2.0 standard. Interesting and important stuff for anyone working in the online space. [video]
  3. OpenStreetMap – Very interesting talk from the director of OpenStreetMap Team about how OpenStreetMap is used around disaster prone areas and getting the local community to assist with generating maps, which are being used by humanitarian teams to help with the disaster relief efforts. [video]
  4. Linux File Systems, Where did they come from? – A great look at the history and development cycles of the different filesystems in the Linux kernel – comparing ext1/2/3/4, XFS, ReiserFS, Btrfs and others. [video]
  5. A pseudo-random talk on entropy – Good explanation of the importance of entropy on Linux systems, but much more low level and about what tools there are for helping with it. Some cross-over with my own previous writings on this topic. [video]

Naturally there have been many other excellent talks – the above is just a selection of the ones that I got the most out of during the conference. Take a look at the full schedule to find other talks that might interest you; almost all sessions were recorded during the conference.

linux.conf.au: day 5

Final day of linux.conf.au – I’m about a week behind schedule in posting, but that’s about how long it takes to catch up on life following a week at LCA. ;-)

uuuurgggh need more sleep



I like that guy’s idea!

Friday’s conference keynote was delivered by Tim Berners-Lee, who is widely known as “the inventor of the world wide web”, but is more accurately described as the developer of HTML, the markup language behind all websites. Certainly TBL was an influential player in the internet’s creation and evolution, but the networking and IP layers of the internet were already being developed by others and are arguably more important than HTML itself; calling anyone the inventor of the internet is wrong for such a collaborative effort.

His talk was enjoyable, although very much a case of preaching to the choir – there wasn’t a lot that would really surprise any linux.conf.au attendee. What *was* more interesting than his talk content is the aftermath…

TBL was in Australia and New Zealand for just over 1 week, where he gave several talks at different venues, including linux.conf.au as part of the “TBL Down Under Tour”. It turns out that the 1 week tour cost the organisers/sponsors around $200,000 in charges for TBL to speak at these events, a figure I personally consider outrageous for someone to charge non-profits for a speaking event.

I can understand high demand speakers charging to ensure that they have comfortable travel arrangements and even to compensate for lost earnings, but even at an expensive consultant’s charge rate of $1,500 per day, that’s no more than $30,000 for a 1 week trip.

I could understand charging a little more for an expensive commercial conference, such as a $2k-per-ticket-per-day corporate affair, but I would rather have a passionate technologist who comes for the chance to impart ideas and knowledge at a geeky conference than someone there to make a profit any day – the $20-40k that Linux Australia contributed would have paid several airfares for some well deserving hackers to come to AU to present.

So whilst I applaud the organisers, and particularly Pia Waugh, for the effort spent making this happen, I have to state that I don’t think it was worth it, and seeing the amount TBL charged a non-profit entity for this visit actually really sours my opinion of the man.

I just hope that seeing a well known figure talking about open data and internet freedom at some of the more public events leads to more positive work in that space in NZ and AU and goes towards making up for this cost.

Outside the conference hall.


Friday had its share of interesting talks:

  • Stewart Smith spoke a bit about SQL databases, with a focus on MySQL & its varieties being used in cloud and hosted environments. Read his latest blog post for some amusing hacks to execute on databases.
  • I ended up frequenting a few Linux graphical environment related talks, including David Airlie talking about improvements coming up in the X.org server, as well as Daniel Stone explaining the Wayland project and architecture.
  • Whilst I missed Keith Packard’s talk due to a scheduling clash, he was there heckling during both of the above talks. (Top tip – when presenting at LCAs, if one of the main developers of the software being discussed is in the audience, expect LOTS of heckles). ;-)
  • Francois Marier presented on Persona (developed by Mozilla), a single sign-on system for the internet with a federated, decentralised design. Whilst I do have some issues with parts of its design, overall it’s pretty awesome and it fixes a lot of problems that plagued other attempts like OpenID. I expect I’ll cover Persona more in a future blog post, since I want to set up a Persona server myself and test it out more, and I’ll detail more about the good and the bad of this proposed solution.

Sadly it turns out Friday is the last day of the conference, so I had to finish it up with the obligatory beer and chat with friends, before we all headed off for another year. ;-)

They're taking the hobbits to Isengard! Or maybe just back to the dorms via the stream.


A dodgy looking character with a wire running into a large duffle bag.....

Hopefully not a road-side bomber.

The fuel that powers IT


Incoming!


OpenStack infrastructure swift logs and performance

Turns out I’m not very good at blogging very often. However I thought I would put what I’ve been working on for the last few days here out of interest.

For a while the OpenStack Infrastructure team have wanted to move away from storing logs on disk to something more cloudy – namely, swift. I’ve been working on this on and off for a while and we’re nearly there.

For the last few weeks the openstack-infra/project-config repository has been uploading its CI test logs to swift as well as storing them on disk. This has given us the opportunity to compare the last few weeks of data and see what kind of effects we can expect as we move assets into an object storage.

  • I should add a disclaimer/warning, before you read, that my methods here will likely make statisticians cringe horribly. For the moment though I’m just getting an indication for how things compare.

The set up

Fetching files from object storage is nothing particularly new or special (CDNs have been doing it for ages). However, for our usage we want to serve logs with os-loganalyze, giving the opportunity to hyperlink to timestamp anchors or filter by log severity.

First though we need to get the logs into swift somehow. This is done by having the job upload its own logs. Rather than using (or writing) a Jenkins publisher, we use a bash script to grab the job’s own console log (pulled from the Jenkins web UI) and then upload it to swift using credentials supplied to the job as environment variables (see my zuul-swift contributions).

This does, however, mean part of the logs are missing. For example the fetching and upload processes write to Jenkins’ console log but because it has already been fetched these entries are missing. Therefore this wants to be the very last thing you do in a job. I did see somebody do something similar where they keep the download process running in a fork so that they can fetch the full log but we’ll look at that another time.
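
For illustration, the upload step might look something like the following python-swiftclient sketch – the environment variable names, container and object path are made up here, and the real zuul-swift scripts differ:

<code>
# Hypothetical sketch of the upload step described above, using
# python-swiftclient. Env var names, container and paths are illustrative only.
import os
import urllib.request

import swiftclient

def upload_console_log(build_url, log_path):
    """Fetch the job's console log from Jenkins and push it into swift."""
    # Pull the console text from the Jenkins web UI for this build.
    console = urllib.request.urlopen(build_url + "/consoleText").read()

    # Credentials are supplied to the job as environment variables.
    conn = swiftclient.Connection(
        authurl=os.environ["SWIFT_AUTH_URL"],
        user=os.environ["SWIFT_USER"],
        key=os.environ["SWIFT_KEY"],
        auth_version="2.0",
    )
    conn.put_object("logs", log_path, contents=console,
                    content_type="text/plain")
</code>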

When a request comes into logs.openstack.org, it is handled like so:

  1. apache vhost matches the server
  2. if the request ends in .txt.gz, console.html or console.html.gz rewrite the url to prepend /htmlify/
  3. if the requested filename is a file or folder on disk, serve it up with apache as per normal
  4. otherwise rewrite the requested file to prepend /htmlify/ anyway

os-loganalyze is set up as a WSGIScriptAlias at /htmlify/. This means all files that aren’t on disk are sent to os-loganalyze (or if the file is on disk but matches a file we want to mark up, it is also sent to os-loganalyze). os-loganalyze then does the following:

  1. Checks the requested file path is legitimate (or throws a 400 error)
  2. Checks if the file is on disk
  3. Checks if the file is stored in swift
  4. If the file is found, markup (such as anchors) is optionally added and the request is served
    1. When serving from swift the file is fetched via the swiftclient by os-loganalyze in chunks and streamed to the user on the fly. Obviously fetching from swift will have larger network consequences.
  5. If no file is found, 404 is returned

If the file exists both on disk and in swift then step #2 can be skipped by passing ?source=swift as a parameter (thus only attempting to serve from swift). In our case the files exist both on disk and in swift since we want to compare the performance so this feature is necessary.
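
As a rough sketch (not the actual os-loganalyze code), the disk-then-swift serving logic above might look something like this; get_swift_object() is a hypothetical helper and the markup step is left out:

<code>
# Rough sketch only: try disk first, fall back to swift, stream in chunks.
import os

CHUNK_SIZE = 64 * 1024

def serve_log(file_path, source=None):
    """Yield chunks of a log file from disk or swift."""
    if source != "swift" and os.path.isfile(file_path):
        # The file exists on disk, so stream it directly.
        with open(file_path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                yield chunk
        return

    # Otherwise fall back to swift, fetching via swiftclient in chunks.
    obj = get_swift_object(file_path, chunk_size=CHUNK_SIZE)  # hypothetical helper
    if obj is None:
        raise FileNotFoundError(file_path)  # becomes the 404 response
    for chunk in obj:
        yield chunk
</code>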

So now that we have the logs uploaded into swift and stored on disk we can get into some more interesting comparisons.

Testing performance process

My first attempt at this was simply to fetch the files from disk and then from swift and compare the results. A crude little python script did this for me: http://paste.openstack.org/show/122630/

The script fetches a copy of the log from disk and then from swift (both through os-loganalyze and therefore marked-up) and times the results. It does this in two scenarios:

  1. Repeatedly fetching the same file over again (to get a good average)
  2. Fetching a list of recent logs from gerrit (using the gerrit api) and timing those

I then ran this in two environments.

  1. On my local network, on the other side of the world from the logserver
  2. On 5 parallel servers in the same DC as the logserver

Running on my home computer likely introduced a lot of errors due to my limited bandwidth, noisy network and large network latency. To help eliminate these errors I also tested it on 5 performance servers in the Rackspace cloud next to the log server itself. In this case I used ansible to orchestrate the test nodes thus running the benchmarks in parallel. I did this since in real world use there will often be many parallel requests at once affecting performance.

The following metrics are measured for both disk and swift:

  1. request sent – time taken to send the http request from my test computer
  2. response – time taken for a response from the server to arrive at the test computer
  3. transfer – time taken to transfer the file
  4. size – filesize of the requested file

The total time can be found by adding the first 3 metrics together.
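
A stripped-down version of that timing loop might look like the following – the log URL and repetition count are placeholders, and the first two metrics are lumped together here where the real script separates them:

<code>
# Minimal timing sketch; the real script is at the paste link above.
import time
import urllib.request

def time_fetch(url):
    """Return (response_time, transfer_time, size) for one request."""
    start = time.time()
    resp = urllib.request.urlopen(url)   # request sent + response received
    responded = time.time()
    body = resp.read()                   # transfer of the marked-up log
    finished = time.time()
    return responded - start, finished - responded, len(body)

log = "http://logs.openstack.org/some/job/console.html"   # placeholder path
for url in (log, log + "?source=swift"):   # disk first, then force swift
    print(url, [time_fetch(url) for _ in range(10)])
</code>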

 

Results

Home computer, sequential requests of one file

 

The complementary colours are the same metric and the darker line represents swift’s performance (over the lighter disk performance line). The vertical lines over the plots are the error bars while the fetched filesize is the column graph down the bottom. Note that the transfer and file size metrics use the right axis for scale while the rest use the left.

As you would expect, the requests for both disk and swift files are more or less comparable. We see a more noticeable difference in the responses though, with swift being slower. This is because disk is checked first, and if the file isn’t found on disk then a connection is made to swift to check there. Clearly this is going to be slower.

The transfer times are erratic and varied. We can’t draw much from these, so let’s keep analyzing deeper.

The total time from request to transfer can be seen by adding the times together. I didn’t do this as when requesting files of different sizes (in the next scenario) there is nothing worth comparing (as the file sizes are different). Arguably we could compare them anyway as the log sizes for identical jobs are similar but I didn’t think it was interesting.

The file sizes are there for interest sake but as expected they never change in this case.

You might notice that the end of the graph is much noisier. That is because I’ve applied some rudimentary data filtering.

Metric              Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)   54.89516183      43.71917948       283.9594368    282.5074598
response (ms)       56.74750291      194.7547117       373.7328851    531.8043908
transfer (ms)       849.8545127      838.9172066       5091.536092    5122.686897
size (KB)           7.121600095      7.311125275       1219.804598    1220.735632

 

I know it’s argued as poor practice to remove outliers using twice the standard deviation, but I did it anyway to see how it would look. I only did one pass at this even though I calculated new standard deviations.

 

Metric              Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)   13.88664039      14.84054789       274.9291111    276.2813889
response (ms)       44.0860569       115.5299781       364.6289583    503.9393472
transfer (ms)       541.3912899      515.4364601       5008.439028    5013.627083
size (KB)           7.038111654      6.98399691        1220.013889    1220.888889

 

I then moved the outliers to the end of the results list instead of removing them completely and used the newly calculated standard deviation (ie without the outliers) as the error margin.
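
In code, that outlier handling is roughly the following (a sketch using Python’s statistics module, not the exact method used for the spreadsheets):

<code>
# Move values more than two standard deviations from the mean to the end of
# the list, and use the recalculated standard deviation as the error margin.
import statistics

def split_outliers(values):
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    kept = [v for v in values if abs(v - mean) <= 2 * stdev]
    outliers = [v for v in values if abs(v - mean) > 2 * stdev]
    margin = statistics.pstdev(kept)   # smaller error margin without outliers
    return kept + outliers, margin
</code>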

Then, to get a better indication of what the average times are, I plotted histograms of each of these metrics.

Here we can see a similar request time.

 

Here it is quite clear that swift is slower at actually responding.

 

Interestingly both disk and swift sources have a similar total transfer time. This is perhaps an indication of my network limitation in downloading the files.

 

Home computer, sequential requests of recent logs

Next from my home computer I fetched a bunch of files in sequence from recent job runs.

 

 

Again I calculated the standard deviation and average to move the outliers to the end and get smaller error margins.

Metric              Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)   54.89516183      43.71917948       283.9594368    282.5074598
response (ms)       194.7547117      56.74750291       531.8043908    373.7328851
transfer (ms)       849.8545127      838.9172066       5091.536092    5122.686897
size (KB)           7.121600095      7.311125275       1219.804598    1220.735632

Second pass without outliers:

Metric              Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)   13.88664039      14.84054789       274.9291111    276.2813889
response (ms)       115.5299781      44.0860569        503.9393472    364.6289583
transfer (ms)       541.3912899      515.4364601       5008.439028    5013.627083
size (KB)           7.038111654      6.98399691        1220.013889    1220.888889

 

What we are probably seeing here with the large number of slower requests is network congestion in my house. Since the script requests disk, swift, disk, swift, disk.. and so on this evens it out causing a latency in both sources as seen.

 

Swift is very much slower here.

 

Although comparable in transfer times. Again this is likely due to my network limitation.

 

The size histograms don’t really add much here.

 

Rackspace Cloud, parallel requests of same log

Now to reduce latency and other network effects I tested fetching the same log over again in 5 parallel streams. Granted, it may have been interesting to see a machine close to the log server do a bunch of sequential requests for the one file (with little other noise), but I didn’t do it at the time unfortunately. Also we need to keep in mind that others may be accessing the log server, and therefore any request in both my testing and normal use is going to have competing load.

 

I collected a much larger amount of data here making it harder to visualise through all the noise and error margins etc. (Sadly I couldn’t find a way of linking to a larger google spreadsheet graph). The histograms below give a much better picture of what is going on. However out of interest I created a rolling average graph. This graph won’t mean much in reality but hopefully will show which is faster on average (disk or swift).
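
For reference, a rolling average like that can be computed with pandas along these lines; the results file, column names and window size are all illustrative:

<code>
# Sketch of producing the rolling average plot from the collected timings.
import pandas as pd

df = pd.read_csv("parallel_results.csv")          # hypothetical results dump
rolling = df[["response_disk", "response_swift"]].rolling(50).mean()
rolling.plot(title="Rolling average response time (ms)")
</code>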

 

You can see now that we’re closer to the server that swift is noticeably slower. This is confirmed by the averages:

 

Metric              Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)   32.42528982      9.749368282       4.87337544     4.05191168
response (ms)       245.3197219      781.8807534       39.51898688    245.0792916
transfer (ms)       1082.253253      2737.059103       1553.098063    4167.07851
size (KB)           0                0                 1226           1232

Second pass without outliers:

Metric              Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)   1.375875503      0.8390193564      3.487575109    3.418433003
response (ms)       28.38377158      191.4744331       7.550682037    96.65978872
transfer (ms)       878.6703183      2132.654898       1389.405618    3660.501404
size (KB)           0                0                 1226           1232

 

Even once outliers are removed we’re still seeing a large latency from swift’s response.

The standard deviation in the requests has now gotten very small. We’ve clearly made a difference moving closer to the logserver.

 

Very nice and close.

 

Here we can see that for roughly half the requests the response time was the same for swift as for the disk. It’s the other half of the requests bringing things down.

 

The transfer for swift is consistently slower.

 

Rackspace Cloud, parallel requests of recent logs

Finally I ran just over a thousand requests in 5 parallel streams from computers near the logserver for recent logs.

 

Again the graph is too crowded to see what is happening so I took a rolling average.

 

 

Metric              Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)   0.7227904332     0.8900549012      3.515711867    3.56191383
response (ms)       434.8600827      909.095546        145.5941102    189.947818
transfer (ms)       1913.9587        2132.992773       2427.776165    2875.289455
size (KB)           6.341238774      7.659678352       1219.940039    1221.384913

Second pass without outliers:

Metric              Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)   0.4798803247     0.4966553679      3.379718381    3.405770445
response (ms)       109.6540634      171.1102999       70.31323922    86.16522485
transfer (ms)       1348.939342      1440.2851         2016.900047    2426.312363
size (KB)           6.137625464      7.565931993       1220.318912    1221.881335

 

The averages here are much more reasonable than when we continually tried to request the same file. Perhaps we’re hitting limitations with swift’s serving abilities.

 

I’m not sure why we have a sinc function here. A network expert may be able to tell you more. As far as I know this isn’t important to our analysis, other than the fact that both disk and swift match.

 

Here we can now see swift keeping a lot closer to disk results than when we only requested the one file in parallel. Swift is still, unsurprisingly, slower overall.

 

Swift still loses out on transfers but again does a much better job of keeping up.

 

Error sources

I haven’t accounted for any of the following swift intricacies (in terms of caches etc):

  • Fetching random objects
  • Fetching the same object over and over
  • Fetching in parallel multiple different objects
  • Fetching the same object in parallel

I also haven’t done anything to account for things like file system caching, network profiling, noisy neighbours etc etc.

os-loganalyze tries to keep authenticated with swift, however

  • This can timeout (causes delays while reconnecting, possibly accounting for some spikes?)
  • This isn’t thread safe (are we hitting those edge cases?)

We could possibly explore getting longer authentication tokens or having os-loganalyze pull from an unauthenticated CDN to add the markup and then serve. I haven’t explored those here though.

os-loganalyze also handles all of the requests not just from my testing but also from anybody looking at OpenStack CI logs. In addition to this it also needs to deflate the gzip stream if required. As such there is potentially a large unknown (to me) load on the log server.

In other words, there are plenty of sources of errors. However I just wanted to get a feel for the general responsiveness compared to fetching from disk. Both sources had noise in their results so it should be expected in the real world when downloading logs that it’ll never be consistent.

Conclusions

As you would expect the request times are pretty much the same for both disk and swift (as mentioned earlier) especially when sitting next to the log server.

The response times vary, but looking at the averages and the histograms these are rarely large. Even in the case where requesting the same file over and over in parallel caused responses to slow down, these were only on the order of 100 ms.

The response time is the important one as it indicates how soon a download will start for the user. The total time to stream the contents of the whole log is seemingly less important if the user is able to start reading the file.

One thing that wasn’t tested was streaming of different file sizes. All of the files were roughly the same size (being logs of the same job). For example, what if the asset was a few gigabytes in size, would swift have any significant differences there? In general swift was slower to stream the file but only by a few hundred milliseconds for a megabyte. It’s hard to say (without further testing) if this would be noticeable on large files where there are many other factors contributing to the variance.

Whether or not these latencies are an issue is relative to how the user is using/consuming the logs. For example, if they are just looking at the logs in their web browser on occasion they probably aren’t going to notice a large difference. However if the logs are being fetched and scraped by a bot then it may see a decrease in performance.

Overall I’ll leave deciding on whether or not these latencies are acceptable as an exercise for the reader.

Third party testing with Turbo-Hipster

Why is this hipster voting on my code?!

Soon you are going to see a new robot barista leaving comments on Nova code reviews. He is obsessed with espresso, that band you haven’t heard of yet, and easing the life of OpenStack operators.

Doing a large OpenStack deployment has always been hard when it came to database migrations. Running a migration requires downtime, and when you have giant datasets that downtime could be hours. To help catch these issues Turbo-Hipster (http://josh.people.rcbops.com/2013/09/building-a-zuul-worker/) will now run your patchset’s migrations against copies of real databases. This will give you valuable feedback on the success of the patch, and how long it might take to migrate.

Depending on the results, Turbo-Hipster will add a review to your patchset that looks something like this:

Example turbo-hipster post

What should I do if Turbo-Hipster fails?

That depends on why it has failed. Here are some scenarios and steps you can take for different errors:

FAILURE – Did not find the end of a migration after a start

  • If you look at the log you should find that a migration began but never finished. Hopefully there’ll be a traceback for you to follow through to get some hints about why it failed.

WARNING – Migration %s took too long

  • In this case your migration took a long time to run against one of our test datasets. You should reconsider what operations your migration is performing and see if there are any optimisations you can make, or if each step is really necessary. If there is no way to speed up your migration you can email us at rcbau@rcbops.com for an exception.

FAILURE – Final schema version does not match expectation

  • Somewhere along the line the migrations stopped and did not reach the expected version. The datasets start at previous releases and have to upgrade all the way through. If you see this, inspect the log for tracebacks or other hints about the failure.

FAILURE – Could not setup seed database. FAILURE – Could not find seed database.

  • These two are internal errors. If you see either of these, contact us at rcbau@rcbops.com to let us know so we can fix and rerun the tests for you.

FAILURE – Could not import required module.

  • This error probably shouldn’t happen as Jenkins should catch it in the unit tests before Turbo-Hipster launches. If you see this, please contact us at rcbau@rcbops.com and let us know.

If you receive an error that you think is a false positive, leave a comment on the review with the sole contents of recheck migrations.

If you see any false positives or have any questions or problems please contact us on rcbau@rcbops.com

LinuxCon Europe

After travelling very close to literally the other side of the world[0] I’m in Edinburgh for LinuxCon EU, recovering from jetlag and getting ready to attend. I’m very much looking forward to my first LinuxCon, meeting new people and learning lots :-).

If you’re around and would like to catch up drop me a comment here. Otherwise I’ll see you at the conference!

[0] http://goo.gl/maps/JeJO2

New Blog

Welcome to my new blog.

You can find my old one here: http://josh.opentechnologysolutions.com/blog/joshua-hesketh

I intend on back-porting those posts into this one in due course. For now though I’m going to start posting about my adventures in OpenStack!

Introducing turbo-hipster for testing nova db migrations

Zuul is the continuous integration utility used by OpenStack to gate patchsets against tests. It takes care of communicating with gerrit (the code review system) and the test workers – usually Jenkins. You can read more about how the systems tie together on the OpenStack Project Infrastructure page.

The nice thing is that zuul doesn’t require you to use Jenkins. Anybody can provide a worker to zuul using the gearman protocol (which is a simple job server). Enter turbo-hipster*.

“Turbo-hipster is a CI worker with pluggable tasks initially designed to test OpenStack’s database migrations against copies of real databases.”

This will hopefully catch scenarios where changes to the database schema may not work due to outliers in real datasets and also help find where a migration may take an unreasonable amount of time against a large database.

In zuul’s layout configuration we are able to specify which jobs should be run against which projects in which pipelines. For example, for nova we want to run tests when a patchset is created, but we don’t (necessarily) need to run tests against it once it is merged etc. So in zuul we specify a new gate (aka job) to test nova against real databases.

turbo-hipster then listens for jobs created on that gate using the gearman protocol. Once it receives a patchset from zuul it creates a virtual environment and tests the upgrades. It then compiles and sends back the results.
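
A minimal gearman worker in the same spirit, using the gear library, looks roughly like the sketch below – the registered function name, server address and job handling are illustrative, and the real turbo-hipster plugin interface is richer:

<code>
# Illustrative gearman worker; server address and function name are made up.
import json

import gear

worker = gear.Worker("turbo-hipster-example")
worker.addServer("zuul.example.org")
worker.registerFunction("build:gate-real-db-upgrade_nova_mysql")

while True:
    job = worker.getJob()
    args = json.loads(job.arguments)
    # ... check out the patchset, build a virtualenv, run the migrations ...
    result = {"result": "SUCCESS", "url": "http://example.org/logs/..."}
    job.sendWorkComplete(json.dumps(result).encode("utf-8"))
</code>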

At the moment turbo-hipster is still under heavy development, but I hope to have it reporting results back to gerrit patchsets soon as part of zuul’s report summary. For the moment I have a separate zuul instance running to test new nova patches and email the results back to me. Here is an example result report:

<code>Build succeeded.

- http://thw01.rcbops.com/logviewer/?q=/results/47/47162/9/check/gate-real-db-upgrade_nova_mysql/c4bc35c/index.html : SUCCESS in 13m 31s
</code>

Turbo Hipster Meme

*The name was randomly generated and does not necessarily contain meaning.

Open and Free Internet

The last week has been an interesting nexus of Open and Free.

On Saturday I attended the Firefox OS App Day in Wellington. I had heard about Firefox OS some time ago under its project name Boot2Gecko (b2g). At the time I had thought that it was an intriguing idea, but wouldn't be very powerful. I was certainly wrong. Firefox OS is fairly mature and looking like it will be very powerful. Check out arewemobileyet.com for an idea of where they are heading (for example WebUSB!). It appeared to work well on the developer phones (re-flashed Android phones; the same Linux kernel is used).

All the applications on Firefox OS are web applications. In particular, they are Open Web Apps, using HTML5, CSS and Javascript. Even the phone dialer is an HTML5/JS app! Mozilla showed off a framework for building apps called mortar that takes care of the basic UI consistent with the standard apps, but you could use any html5/css/js tools or frameworks. Unless you use some of the newer (and higher security required) APIs, the apps also work in a normal web browser.

I wasn't able to stick around to see what people developed, but it was very interesting.

Last night I watched the live stream of Sir Tim Berners-Lee, the inventor of the World Wide Web, giving a public lecture in Wellington (I missed out on a ticket) on "The Open Internet and World Wide Web". He covered the many forms of openness and freedom, including open standards, open source software, open access, open data, and open Internet. One key point from the lecture was that native apps (on IOS or Android, for example) take you off the Web, and therefore away from the core of social discourse. This is significant and currently increasingly happening. I will tweet a link when the lecture is available to view online.

These events dovetail nicely and fit with my general strategy of focusing on web apps that work nicely on phones, tablets, and computers.

First Post

I've updated this site over the last few weeks to help manage it going forward.

The key piece is this blog, which I hope to update somewhat frequently. I haven't enabled comments, so reply by twitter or email.

Welcome to 2014 Sahana Google Summer of Code Students!

The Sahana Software Foundation is proud to announce the eleven recipients of the 2014 Google Summer of Code internships with the Sahana Eden and Vesuvius Projects. Competition for these slots was very high. We received many proposals including several from [Read the Rest...]

Symbols in Alerts feed into Federation of Internet Alerting Standards

Almost one year ago, I had presented a concept on the use of “pictographs in alerting” and shared the evidence for the growing need for such an initiative. This was at the 2013 CAP Implementation Workshop in Geneva. The real [Read the Rest...]

Sahana Vesuvius Migrates to GitHub

On behalf of Sahana community, I’m happy to announce that we finished the migration of Sahana Vesuvius from Launchpad to GitHub. All the code and bugs have been successfully transferred with the complete commit history. We can now profit from the best platform [Read the Rest...]

Sahana Vesuvius moves to Github

The Sahana Vesuvius project has moved its repository from Launchpad to Github. This decision was universally supported by the Vesuvius community. See this blog post for more information about the migration process. More information about instructions for developers to migrate [Read the Rest...]

Sahana at Google Code-In 2013

The Sahana Software Foundation has gained a solid reputation over the years for participating successfully in Google’s Open Source programs. Sahana has been able to provide an enriching experience for young students in Open Source through both the Google Summer [Read the Rest...]

Sahana News January 2014

Happy New Year!  It was a busy month of December (and beginning of January) with the ongoing response to Super Typhoon Yolanda in the Philippines, the Google Code-In, a SahanaCamp in Thailand, and hopefully, everyone had some time off with [Read the Rest...]

New design and new URL

So after thinking about it, I have purchased a new domain, http://begg.digital/. There have been many, many new top level domains launched recently.

I've also taken the opportunity to refresh the Begg Digital website as well. There is now a link to the various tools I've created to help the Python community, and there are some more pages to come soon too. The underlying code got a significant upgrade as well (there is a saying about a builder's house...).

Hopefully there will be some more news soon.

Py3progress updated and 2013 review

I have updated the py3progress site. I really should automate it sometime, since the last update was in September.

Since the whole of 2013 is now up, I think we should review what happened.

2013 in review

So the first thing that jumps out at me is there is less red and more green. That's great! Concretely, the percentage of the top 200 that supports Python 3 has gone from 51% (103) to 69% (138), and it's up another 5% in the first two months of 2014.

The oddly consistent period in the middle of the year was when PyPI changed to providing downloads via a CDN (Content Delivery Network) [UPDATE 2014-04-11: originally I thought the mirror team had changed how that worked] and the stats took a week or so to be updated. In some ways, the data after that point might not be as accurate to the actual popularity of the packages, but we are only really worried about the indicative relative popularity and the data should be good enough.

Not long after, the ssl module races up from outside the top 200 to in the top 10. It's clearly visible as it is in Python 3 and therefore in light blue. I'm not sure what has driven its increase – perhaps a popular package now depends on it?

About 5 projects changed to python 2 only during the year. On the whole they have lost popularity. Some even dropped out of the top 200.

I note from the python3wos page that December 2013 marked 5 years since Python 3.0 was first released. Python 3.3, which has a few features that help with backward compatibility, was released in September 2012. Python 3.4 is currently at the release candidate stage.

Current state

So looking at the Python 3 Wall of Superpowers today, 149 of the top 200 downloads support Python 3. Let's look at some of the ones that don't.

Boto is the highest ranked non-Python 3 package, at 3rd. It is a library for interacting with AWS services. Python-cloudfiles depends on this and is further down the list.

Paste (18th), the web framework, is next. It hasn't been updated since 2010.

Paramiko (22nd) is an SSH library which, from the github issue, appears to be under active porting. Paramiko is something I use in multiple ways. One is Fabric, a remote execution tool used for deployment and automation, which is 37th and will be ported once paramiko is ported.

Just above Fabric at 35th is the MySQL-python library. This also appears to be not too far away from having a working Python 3 version.

The first Python 2 only package is meld3 (56th), a templating library. The second is more important to me: Twisted, at 76th. Twisted is an asynchronous networking framework and it's used by other packages on the list, such as carbon (52nd) and graphite-web (53rd). Unusually, the python 2 only tag has a slightly different meaning for the Twisted project – they have an active project to port to Python 3, it's just a really, really big job.

Why supporting Python 3 matters

Python 3 makes a significant improvement, mostly removing old wrinkles and being clearer about bytestring/unicode datatypes. The transition is ongoing (like IPv6), and a good portion of the libraries people use will need to support Python 3 before the bulk of developers will start developing with Python 3 (even though it's technically better). I'm looking at what packages I use and hope to soon start using Python 3 for some things.

ReqFile Check website

One of the issues with developing in Python is keeping track of security updates to dependencies, such as libraries. While I could subscribe to every mailing list and check all the websites regularly, that is a lot of work.

Most packages in the Python environment are released on PyPI, also known as the Cheese Shop. This site lists over 33,000 packages. Handily, PyPI provides an API to query what packages are available and what versions of the packages are there. It doesn't, however, let you know when a package important to you is updated.
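
To illustrate the kind of query involved, the JSON API can be asked for the latest release of a package with just the standard library – the package name here is only an example:

<code>
# Query PyPI's JSON API for a package's latest released version.
import json
import urllib.request

url = "https://pypi.org/pypi/South/json"        # "South" is just an example
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(data["info"]["version"])                  # latest release on PyPI
</code>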

So I created ReqFile Check (warning, self-signed certificate). I created this website to track what packages I'm using and send me an email alert when one is updated. Today it helpfully told me there was an update to South, and checking the website for South shows that it fixes a bug I had encountered.

Login is with Mozilla Persona, just like MoonBizGame. It fits this really well as we can use the email address the user logs in as to send the alert emails.

The other Python Development support site I run py3progress gets updated occasionally, about once a month. I announce updates on Twitter, so follow @BeggDigital for updates.

 

Classie launched

Begg Digital is proud to announce that Classie has launched.

Classie (www.getclassie.com) is a dance studio management web application. It makes taking attendance simple and saves money on sending notices. The app is designed to work on mobile phones, tablets and desktop browsers so it can be used anywhere, avoiding pieces of paper and entering data later.

Begg Digital manages the hosting for Classie. Lee, owner of Begg Digital, is also a founder of Classie and the primary developer.

International Space Apps Challenge 2013

This weekend I took part in the International Space Apps Challenge. It is an event (not too dissimilar from Startup Weekend) where teams come together to solve challenges relating to space. The challenges range from hardware to software and visualisation, and from chicken farms to Mars.

I undertook the bootstrapping lunar industry challenge, also known as #Moonville, being a game for planning how to get private industry onto the moon. I wasn't able to attract a team, but made a reasonable start.

MoonBizGame is the web-based game I created. It is a turn-based game with highly compressed timelines. Currently, you can log in with Persona, create your enterprise and buy launches into space. The SSL certificate is self-signed so you will get a warning, but the game is available at https://moonbizgame.beggdigital.com/

I mentioned above that the site uses Persona to login. This is a technology created by Mozilla so that people don't need to create a username and password for every site they use. The Django implementation works well, even in localhost debugging. I look forward to using it on more sites in the future.

Python 3 Progress

Over the weekend I published the Py3 Progress site. I had been meaning to make the data available and now finally I have. It didn't take much.

The Py3 Progress site has a "waterfall" plot of the Python 3 port status of the top 200 downloads from the Python Package Index. The raw data was downloaded from the Python 3 Wall of Superpowers every day, and then accumulated into the waterfall plots.

There are some interesting trends in the data. About half way down the 2012 plot (August onwards), you can see where Django first reported that it was Python 3 compatible - it's in the sixth column. I haven't tried it yet. The purple lines are projects that say they will not be ported to Python 3 in the near future (or ever), and they are slowly trending to the right (down in ranking). Among those projects is Twisted, which has an active py3 porting branch. A couple of weeks ago a couple more projects marked themselves as not porting. The light green/blue lines also trend right. These are the packages which are now included in Python 3, and in some cases Python 2.7 and even 2.6. Since people don't need to download the files to use the package, they are slowly falling in the rankings as well. The best news is that the Python 3 compatible packages shown in green are generally trending left, and the not-yet-ported packages in red are generally trending right (or converting).

There are a lot of Django-based packages in the top 200 (approximately 11 packages). Since Django 1.5 was released a couple of weeks ago and it supports Python 3, I expect that most of those packages will also port fairly quickly. Some already have.

The 10th Anniversary of the 2004 Indian Ocean Tsunami Marks the Inception of Sahana

“The beaches are purely white, the waves washing sandcastles away on the way out to the open sea, while the inhabitants of this paradise; that in daily day life is known as part of the Indian Ocean, are in a [Read the Rest...]

Sahana’s Ebola Response

In August the World Health Organization (WHO) declared the Ebola outbreak in West Africa as an international public health emergency. Although media attention to the outbreak has decreased, the number of people who have died from Ebola (6,856 to date) continues [Read the Rest...]

OCHA Asia Regional Business Consultation

Last week I was invited by OCHA to attend the Asia Regional Business Consultation in Bangkok to be part of discussions about how to improve collaboration and better-targeted private sector support in emergencies. These were interesting conversations to be a [Read the Rest...]

Update from the International Conference of Crisis Mappers

I had the pleasure of attending the International Conference of Crisis Mappers (ICCM) in my home town of New York City.  I was particularly excited about this event because I provide emergency management solutions to nonprofits in the New York [Read the Rest...]

Sahana Strategic Plan 2014-2015

Thanks to all your effort throughout our comprehensive community planning process, we now have a Strategic Plan for the next year: http://bit.ly/sahana-strategic-plan-2014-2015 . This is a high level overview: I would like to thank the following people who contributed to the [Read the Rest...]

VMware Sahana Usability Review

Over the past months, Rafae Aziz, Danny Walcoff and a team from VMware have worked on a UI/UX Review of Sahana through a series of half-day sprints. Sahana has been deployed as a customized solution in a wide number of [Read the Rest...]

Sahana Eden Software Development Internship 2014-15

The Sahana Software Foundation is holding its Software Development Internship Program again this year from 13th October 2014 until 13th March 2015. This is a virtual internship and is open to applicants from any country. This program was highly successful [Read the Rest...]

Google Open Source Blog: The Sahana Software Foundation annual conference

Today Sahana was featured in a post on the Google Open Source Blog about our annual conference which they helped to sponsor.

Sahana Community Strategic Planning

One of the priorities for the Sahana Software Foundation is to find out where the community wants to take Sahana over the next years and support the development of a strategic plan to guide us together on this journey. A [Read the Rest...]

Sahana: The Next Generation

Hello All, It is my great honor to have been selected as the Chief Executive Officer of the Sahana Software Foundation. The Sahana Software Foundation is an amazing organization with a very rich history. I have to thank both Mark Prutsalis, [Read the Rest...]

Sahana Software Foundation announces next CEO Michael Howden

Sahana Software Foundation is pleased to announce Michael Howden will be the organisation’s next Chief Executive Officer. Michael served as Managing Director of AidIQ from 2009-2014, and since 2012, as a member of the Board of Directors in Sahana Software [Read the Rest...]

Sahana Software Foundation announces reconstitution of the Board of Directors

It is with great pleasure that I can announce that the Sahana Software Foundation, effective 23rd June 2014, reconstituted the Board of Directors, following the Annual Meeting of the Board of Directors. I am especially pleased that we can congratulate Chamindra [Read the Rest...]

IOTX Sri Lanka: A glimpse from the eyes of an intern

I have been associated with Sahana Software Foundation since October, 2013, first as a volunteer contributor, then as a Sahana intern for four months and since April as a Google Summer of Code (GSoC) intern. Sahana Software Foundation held its [Read the Rest...]

Sahana MeetUp – Colombo, Sri Lanka + Virtual

We had a great Meetup this week with 15 people – 10 in Sri Lanka and 5 joining virtually. We started off together, and then split into a virtual group and an on-location group before reporting back together at the [Read the Rest...]

IOTX Sri Lanka : Sahana Bar Camp

The first day of the Sahana Barcamp was held at the WSO2 office. It was a really cool place and seemed to have a good work culture (they had a drum kit and a guitar in the cafeteria. That made my day [Read the Rest...]

IOTX Sri Lanka : CAP Code-Fest

The CAP (Common Alerting Protocol) Code-Fest was the hacker’s day of the week (everybody was plugged in, with coffee beside every laptop). CAP is an XML-based data format which is used to send warnings, alerts and other important data during [Read the Rest...]

IOTX Sri Lanka : SahanaCamp

IOTX Sri Lanka : SahanaCamp was a participatory workshop on the use of Disaster Management Information Systems with a focus on the Sahana Open Source Disaster Management Information System. It gave participants An introduction to real-world Sahana disaster management solutions Hands [Read the Rest...]

IOTX Sri Lanka: Sahana Strategic Planning Workshop

I am happy to announce that Sahana Strategic Planning Workshop was conducted on 15 June 2014 in Colombo, Sri Lanka. The workshop was attended by many members of the Sahana Software Foundation as well as some of the GSOC interns. [Read the Rest...]

SahanaCamp & Sahana Conference, Colombo, Sri Lanka, 18-22nd June 2014

The Sahana Software Foundation is excited to invite participants for our upcoming SahanaCamp and Sahana Conference to be held in conjunction with the  Indian Ocean Tsunami 10th (IOTX) Anniversary convention in Colombo Sri Lanka between 18-22nd June. Registration is open [Read the Rest...]

Progress on the OpenRadio for LCA2015

While I’ve been away, David VK5DGR and Mark VK5QI have been racing ahead on the prototype PCB v1.0 for the OpenRadio for LCA2015.

Mark's design went out to be fabbed into PCBs by Edwin from Dragino. Edwin has now added the OpenRadio kit to his store for pre-order.

OpenRadio PCB v1.0 – SMD under-side, designed and assembled by Mark VK5QI.

Parts were ordered from the usual vendors and this week Mark assembled one to get a better idea of assembly time. It's an important factor in getting the hardware complete on the day, so attendees can take home a known good working SDR. There will be some through-hole parts and some SMD parts to load onto the board. Next is looking at the low pass filter.

You can read more about work on the prototype over on David's blog: OpenRadio Part 2 – Prototype Works!

OpenRadio PCB v1.0 – through hole parts top-side, designed and assembled by Mark VK5QI.

73, Kim VK5FJ

Open Radio Miniconf 2015 announcement

Hi, this is Kim VK5FJ.

In early January, I’ll be kicking off the second one day Open Radio Miniconf, in Auckland, NZ.

It’s about exploring the hardware in a software defined radio.

It’s about understanding the software used in your software defined radio.

It’s also about exploring the protocols used over the air.

We’ll start off with a build-a-thon and a little theory.

We’re using an established SDR design, reworked by Mark VK5QI and Codec2 author David VK5DGR.

We will cover the how and why of SDR, and look at encoding and decoding some old and new modes.

Later in the day we will have a session for short talks on these topics, each around 10-15 minutes.

So if you are interested in presenting please send an email to vk5fj@wia.org.au

Edit: CfP opens 1st November and closes 14th December.

More information on registering for Linux Conf in Auckland can be found at lca2015.linux.org.au

73 from Kim VK5FJ

Audio in MP3

Astronomy Miniconf at Linux.conf.au 2015

So it's on again next year: the Astronomy Miniconf at Linux.conf.au 2015.

It was lovely to be invited to the Astronomy Miniconf this year in Perth at LCA 2014, where I described how I was involved in the ‘TheSkyNet’ project and the trip to see the MWA.

I’ve been invited to present at the Miniconf in Auckland again. Also, make sure other astronomers, amateur or professional, get along and even throw their hats in the ring to present.

Manageacloud.com blog

Manageacloud is a website that offers configuration management tools for Linux system administrators. Try it at http://manageacloud.com

Web Based Terminal

We have upgraded the web-based terminal. It is now a fully compatible terminal emulator. Until now, we have been recommending the use of your favourite terminal if you were performing longer sessions or if you needed to run programs that are graphically complex.

With the new implementation of the web-based terminal, you can run long sessions and complex visual programs with ease. You are still able to use your favourite terminal emulator if you prefer, but it is no longer mandatory.

This upgrade removes the need for any external program when you are creating and maintaining server configurations with Manageacloud. Your web browser is all you need.

Ubuntu Utopic Unicorn 14.10

Manageacloud is now compatible with Ubuntu Utopic Unicorn 14.10, released on the 23rd of October. Ubuntu Utopic Unicorn 14.10 is currently supported on Rackspace and DigitalOcean. We are working towards compatibility with Amazon Web Services too.

If you want to automate the deployment of an application in Ubuntu Utopic Unicorn, try manageacloud now!

The Sysadmin IDE

A few weeks ago, I was working with a well known Integrated Development Environment (IDE) for Java. I was analysing a plain SQL query when I decided to hold CTRL over the table name and click on it. I'd never tried this before. Suddenly the right side of the IDE opened up, connected to the database and fetched the table schema. What a wonderful feature!

And the truth is that an IDE is so useful for programming that programmers won't work without one. It makes everything so much quicker and easier.

When you automate systems, you will iterate over three basic steps:

  1. Create new functionality (or fix an existing one)
  2. Test the code
  3. Analyse new problems

The Sysadmin IDE is designed to assist you in those iterations.

 

Create new functionality (or fix an existing one)

The new functionality is created through the Sysadmin IDE. If you know your Linux distribution well enough, you won't need much training to become proficient in this technology.

This IDE is based on the Package Centric Design Pattern.

The package is the pointer to where the configuration resides. You can customise its files:

  1. Add, modify or delete files and folders
  2. Modify file and folder permissions

If you need to create a very specific behaviour (like cloning from a git repository or adding a user), you do so through hook scripts, which can be written in any language. These hook scripts can be pre-hooked (executed before the package is installed) or post-hooked (executed after the package is installed).
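
As a concrete illustration (hooks can be written in any language, so this one happens to be Python; the repository URL, user name and paths are made up, not part of any real configuration), a post-install hook might look like:

#!/usr/bin/env python
# Hypothetical post-install hook: create a service user and clone the
# application repository. All names and paths here are illustrative only.
import subprocess

REPO = "https://github.com/example/myapp.git"
TARGET = "/var/www/mywebsite"

subprocess.check_call(["useradd", "--system", "--no-create-home", "myapp"])
subprocess.check_call(["git", "clone", REPO, TARGET])
subprocess.check_call(["chown", "-R", "myapp:myapp", TARGET])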

Along with these configuration capabilities, you also have access to all the documentation related to the package you are customising.

Test the configuration

You can test the configuration by creating temporary virtual servers with the base operating system in the cloud.

Once the virtual server is created, you can run the configuration.

When you install several packages at once, you can control the order by the dependencies in the hooks. If there are no hooks with dependencies, we assume that the order of installation is unimportant. The different configuration items are installed in this order:

  1. Repository configuration
  2. Pre hooks
  3. Package
  4. Post hooks

Repository configuration, package and hooks will be performed when the conditions are met. For example, if you mark apache2 as 'installed' and it is not in the system, it will be installed automatically.

The hooks run successfully once per session. If they fail, they will be executed again the next time you run the configuration.

If the virtual server becomes unusable, or if you want to test the configuration from a new box, you can delete and recreate the server.

If you want to check the stability of the installation, you have the option of clicking "Check stability" under the configuration tab. This will execute the configuration twice and will analyse the returned results. This will detect problems like:

  • A service that is marked as 'started' and dies right after it is initialised
  • A package that is marked as 'installed' but has been removed because of a dependency or in a different stage of installation.

If you need to quickly execute a command in the temporary virtual server, you can use the web based terminal1.

The tools tab will give you information about the server:

  • The ports that are open
  • The processes that are running
  • The credentials of the server. If you want to access the server with your favourite terminal2, you can use those credentials or change the password3 using the web based terminal.

Analyse problems with the changes

When something fails, the log shows you information about the problem. To debug, you can copy and paste the command into the web based terminal (or any other terminal) or click the "Run in terminal" button.

The configuration has executed successfully when no errors are shown.

Would you like to try it out?

Would you like to try our Sysadmin IDE? Try our Interactive Tutorial.

 

Foot notes:

  1. The web based terminal is meant to be an aid for executing simple commands. If you need long sessions or to run graphically complex programs, it is recommended that you use your favourite terminal.
  2. By default, some servers like Amazon Linux have several restrictions in the way OpenSSH is configured.
  3. Our system connects via SSH to manage the installation. Those credentials are displayed in the tools tab. If you change those credentials (e.g. removing access to a certain private key), our system won't be able to access the server any more.

Open beta released

Over the last two months, we have been performing multiple usability tests for different roles in different countries. Today, we are proud to announce the open beta version release.

We still have a long journey ahead, but the current product already covers multiple business and user needs. With your help, input and collaboration, we are confident that over the next months we will increase the flexibility of our products and reach new heights.

Create and publish server configurations

If you want a way to install a server and configure it with a single click from a website, you are in the right place. You also have the ability to create the configuration from your browser: you do not need anything else.

Are you a Sysadmin, DevOps engineer or a developer maintaining infrastructure? Create Modules so you can have the exact instructions, with historical changes, for configuring production-ready servers.

Are you a blogger who wants to teach people how to configure Linux software? You could finish the article with a link to a Module, so your users can try the configuration quickly and easily.

Are you involved with software that has a wiki with instructions on how to install it? Now you can provide a link to your modules so your users can install your software quickly and easily. You won't lose users because they do not know how to install it, and you will gain a competitive advantage as your competitors may not offer equivalent services to their users.

Fork configurations

You do not need to create a whole working module yourself. If an experienced Sysadmin, DevOps engineer or developer has already created and published a similar module that fits your needs, you can just fork it and use it as the basis for your own module.

Raise your professional visibility

Our user profiles are designed to show what you are capable of. Add your LinkedIn, your Twitter, etc. and let other people know who you are and what you can do.

 

Roadmap

As we progress through the testing process, we are confirming the features that we intend to develop before the end of the year. The main ones are:

  1. Private repositories. Currently all configurations are public. We understand that some companies will want privacy in their server configurations.
  2. Infrastructures. Modules are the basis for configuration: each represents the configuration (or partial configuration) of a single server. We want to add the ability to run those modules across different servers, with the modules aware of one another.

 

Business model

As long as we are operating in the open beta version, all services are free of charge.

However, we intend to implement the business model known as "freemium". The freemium model will have two types of users:

  • Users that need public configurations, such as open source projects. As long as their configurations are public, the service is free of charge.
  • Users that need private configurations. We will soon develop paid plans adapted to different professional needs. If you want further information about those plans, please contact us at sales@manageacloud.com.

We are continually listening to our users, and will be refining the business models while we are in the open beta.

We hope you enjoy using our services as much as we enjoyed developing them. As usual, any comments, ideas or concerns can be sent to support@manageacloud.com.

Configuration architecture

This post explains the architecture and philosophy that govern Manageacloud configuration management.

Background

Object-oriented programming (OOP) was a paradigm invented in the early 1960s and first implemented in 1967. It became the dominant programming methodology in the 1990s when programming languages supporting the techniques became widely available.

OOP is very powerful, but in order to make the best of it we need to use it with design patterns. The concept of design patterns originated in 1977, was first applied in 1987 and gained popularity in 1994.

Those design patterns add a set of rules that restrict what you can do with the object oriented paradigm. So, for example, in Model View Controller you should not access the database from the view. Those types of restrictions make it easier to organise the application as well as having numerous other advantages.

System automation is relatively new. The first implementation was in 1993 and it has been continuously evolving ever since. As with OOP, there are tools that can do absolutely anything in any possible way, but now it is necessary to start developing "design patterns" that offer a structured and more restrictive way to solve common system automation problems.

Package centric design pattern

When we designed the configuration management architecture, we were inspired by Debian packages. Oversimplifying, a Debian package is a set of scripts that run before and after the software is installed, removed, upgraded or downgraded. It is a solution for delivering software (in some cases configurations too) that has been demonstrated to work very well for many years. The packages have another advantage: maintainers have already thought about how to organise the software well, and the resulting organisation is mature and time-tested.

Our configuration is package-centric. You need a package as a pointer to where the configuration resides. Those packages are surrounded by two types of scripts: the pre-install scripts (executed before the package) and the post-install scripts (executed after the package).

The post-install scripts can create dependencies on other packages, and those scripts won't be executed until the dependent packages have processed their pre-install scripts, the package itself and the post-install scripts.

If, for example, you want to install nginx, you will add that package to your configuration. If you are happy with the packaged version of nginx, you can mark that package as "Install". If you are unhappy with the packaged version and want to compile it yourself, you can mark the package as "remove" and then create a post-install script that downloads the source, compiles it and installs it. If you need extra packages installed in the system in order to compile this software, you can create dependencies in the post-install script linked to other packages.

The Module

When you create a configuration for a server, you are creating a Module. Therefore a module is a unit of configuration. Modules can be combined to create the configuration of a server, and several servers are combined in infrastructures.

The module is a combination of packages. The package is used as a pointer to where the configuration resides.

Every item is executed in a determined order depending on the internal dependencies within the package:

 - Pre-install scripts: These will be executed first and can be written in any language.

 - Package: The package can have three states: install, remove or default. Marking the package as 'installed', unsurprisingly, installs the package and its dependencies. If it is marked as 'removed', this will remove the package. If it is marked as 'default', no action will be taken: if the package was installed, it will remain installed, and if the package was not installed, it will remain absent.

 - Files, folders and permissions: You can create and delete folders, and create, delete or modify files. For example, you can create the folder /var/www/mywebsite with owner www-data.

 - Post-install scripts: These will be executed last. Those scripts can be written in any language.

The package and the files, folders and permissions are applied whenever the conditions are met. For example, if you mark the folder as owned by user "www-data" and it eventually changes to "nobody", the next time the configuration runs it will revert to "www-data".

Pre-install scripts and post-install scripts are only executed once. This is something open for discussion for our next releases. If we can make sure that those scripts are idempotent, it should not be a problem to execute them whenever the conditions are met.
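
An idempotent hook checks the state it wants to reach before acting, so re-running it is harmless. A minimal sketch, using a made-up config file path and value rather than anything from a real Module:

#!/usr/bin/env python
# Sketch of an idempotent hook: it only touches the system when the desired
# state is not already present, so running it repeatedly has the same result.
import subprocess

CONF = "/etc/memcached.conf"   # illustrative path
WANTED = "-m 128\n"            # illustrative setting

with open(CONF) as f:
    lines = f.readlines()

if WANTED not in lines:
    # Replace any existing memory setting with the one we want.
    lines = [line for line in lines if not line.startswith("-m ")]
    lines.append(WANTED)
    with open(CONF, "w") as f:
        f.writelines(lines)
    subprocess.check_call(["service", "memcached", "restart"])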

The post-install hooks can create additional dependencies on other packages. They will then be executed after the package and after all the dependencies of that package have executed successfully. One example: we want to compile Apache and then PHP. If PHP depends on Apache, the PHP post-install script will run only after the Apache package and its pre-install and post-install scripts have executed successfully.

Case Study

We have the following case: a private project on GitHub, written in Python. This project needs the following architecture:

 - Apache with WSGI

 - Memcached configured for 120MB

 - MySQL with an initial database

There are multiple solutions to configure this Module. My proposal would be to install the following packages for Debian Wheezy:

 - git

 - libapache2-mod-wsgi

 - mysql-server

 - memcached

Package git

The goal of the git package is to install git and everything that we need to connect to GitHub and retrieve the project. We have several options:

 - If the github project is public, we do not need any authentication.

 - If the github project is private, we need to authenticate. For example, we could use a post-install hook that contains a private key that allows us to clone the project. This post-install hook creates a user (or uses an existing one) and utilises the private key.

Package libapache2-mod-wsgi

The goal of this package is to install and configure Apache, WSGI and the website.

1) We use files, folders and permissions to create the directory that will contain the project served by Apache. This folder could be /var/www/mywebsite

2) We use files, folders and permissions: we create the file that contains the configuration for the virtual host, for example at the location /etc/apache2/sites-available/mywebsite.conf

3) We create a post-install hook that executes the command "git clone" and creates a copy of the project in the folder. After executing the script, we restart/reload Apache. This post-install hook has a dependency on the git package, as we need the git command and the authentication.

4) We create another post-install hook that configures Apache: it disables the default active website "000-default", enables the "mywebsite" virtual host and reloads or restarts Apache.

Package mysql-server

The goal of this package is to set up the database.

1) We use a post-install hook that creates the database and loads the database dump into MySQL.

Package memcached

The goal of this package is to install and configure memcached. The default configuration is set to 64MB and we need to increase it to 128MB.

1) We use files, folders and permissions to modify the file /etc/memcached.conf and increase the memory to 128MB.

2) Create a post-install hook script to restart the service.



The diagram would look like this:

 

The complexity of automating the installation of a whole website has been reduced to a handful of actions performed in the Sysadmin IDE and a few simple scripts (run git clone, restore the MySQL database, and enable/disable the virtual host in Apache).

We will publish a module that reproduces this configuration as proof of concept soon after releasing the open beta.

Do you have any questions or comments? Please write to us at support@manageacloud.com

 

Introduction to manageacloud.com configuration management

We are building a web based platform that will allow you to create and maintain your infrastructure's server configurations.

The only tool required is a browser. From there, you can create and maintain server configurations. Our technology is:

Easy to use

Creating a configuration from scratch should not be a complicated task. We aim to convert complex infrastructures into a set of simple configurations.

Easy to learn

It is possible to achieve high productivity in just a short amount of time.

Transparency

Configurations are not black boxes. Anyone who is not familiar with a configuration can identify how it works at a glance.

Data driven design

All configuration is data. This allows, for example, the sharing of configurations through services like github.

Language agnostic

You are free to choose your preferred programming language when completing certain tasks that require the use of scripts.

Sysadmin IDE

We provide comprehensive facilities for sysadmins when creating configurations.

Linux compatible

Our platform is compatible with any Red Hat or Debian based distribution.

Test friendly

Analysing and testing configurations is straightforward and can be run automatically (Continuous Integration).

Accessibility

You only require a modern browser to use our services.

Version control

Our software integrates git by default as a version control system.

Agentless

Only ssh is needed to deploy configurations.

Works everywhere

Bare metal, cloud and hybrid architectures. Configurations can even run on your own laptop!

Creating this software has been an interesting and challenging task for us. We have many plans down the track but have decided to launch with the minimum feasible product. The roadmap will be guided by the feedback we receive.

Our software is currently a private alpha version. We plan to deploy the open beta by 8 September 2014, and would love to hear from you! Please drop us a line at support@manageacloud.com

 

Using the official Go Docker image to try out a library

We received a Pull Request to add Swagger support for documenting the Docker API, and @proppy asked if we could make sure the schema loads in a standard JSON schema loader, for example gojsonschema

The answer is no, not yet – but we’ll work towards it :)

But to find out, I added 3 files, a Dockerfile:



FROM golang:onbuild

a Makefile:



default:
	docker build -t loadschema .
	docker run --rm loadschema

and a tiny 13 line go program:



package main

import (
	"github.com/xeipuuv/gojsonschema"
)

func main() {
	_, err := gojsonschema.NewJsonSchemaDocument("file:///go/src/app/docker-v1.14.json")
	if err != nil {
		panic(err.Error())
	}
}

The golang:onbuild image has ONBUILD instructions that COPY the current context, download the dependencies, build the code, and then set the built application as the default CMD.

AWE.


Speeding up CPAN module contributions using the Docker language stack images

Docker Inc. just released our first set of programming language images on the Docker Hub. They cover c/c++ (gcc), clojure, go (golang), hy (hylang), java, node, perl, php, python, rails, and ruby.

As I need to do some work on API testing when I come back from holidays, I thought I’d look at the Net::Docker CPAN module – and of course, there is no Perl on my Boot2Docker image, so it’s a perfect opportunity to see what I should do.

After forking and cloning the Git repository, I created the following initial Dockerfile:



FROM perl:5.20
MAINTAINER Sven Dowideit

COPY . /docker-perl
WORKDIR /docker-perl

RUN cpanm --installdeps .
RUN perl Build.PL
RUN ./Build build
RUN ./Build test

It fails to build during the ‘test’ step:



$ docker build -t docker-perl .

... snip ...

Step 6 : RUN ./Build test

---> Running in 367afe04c77e

Can't open socket var/run/docker.sock: No such file or directory at /usr/local/lib/perl5/site_perl/5.20.0/LWP/Protocol/http/SocketUnixAlt.pm line 27. at t/docker-api.t line 9.

# Tests were run but no plan was declared and done_testing() was not seen.

# Looks like your test exited with 255 just after 1.

t/docker-api.t ....

Dubious, test returned 255 (wstat 65280, 0xff00)

All 1 subtests passed

Can't locate IO/String.pm in @INC (you may need to install the IO::String module) (@INC contains: /docker-perl/blib/arch /docker-perl/blib/lib /usr/local/lib/perl5/site_perl/5.20.0/x86_64-linux /usr/local/lib/perl5/site_perl/5.20.0 /usr/local/lib/perl5/5.20.0/x86_64-linux /usr/local/lib/perl5/5.20.0 .) at t/docker-start.t line 3.

BEGIN failed--compilation aborted at t/docker-start.t line 3.

t/docker-start.t ..

Dubious, test returned 2 (wstat 512, 0x200)

No subtests run

Test Summary Report

-------------------

t/docker-api.t (Wstat: 65280 Tests: 1 Failed: 0)

Non-zero exit status: 255

Parse errors: No plan found in TAP output

t/docker-start.t (Wstat: 512 Tests: 0 Failed: 0)

Non-zero exit status: 2

Parse errors: No plan found in TAP output

Files=2, Tests=1, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.21 cusr 0.03 csys = 0.26 CPU)

Result: FAIL

2014/09/26 16:08:19 The command [/bin/sh -c ./Build test] returned a non-zero code: 1

I’m going to have to give this Dockerfile a DOCKER_HOST setting (incorrectly using http://) pointing to one of my insecure plain-text tcp based servers :), and add IO::String and JSON::XS to the cpanfile.

Unfortunately, because cpanm --installdeps . uses the files in the build context, this way does not use the build cache – so it's slow. It's worth duplicating the contents of the cpanfile as separate RUN cpanm instructions before the COPY instruction for speed.

So the working Dockerfile looks like:



FROM perl:5.20
MAINTAINER Sven Dowideit

RUN cpanm Module::Build::Tiny
RUN cpanm Moo
#', '1.002000';
RUN cpanm JSON
RUN cpanm JSON::XS
RUN cpanm LWP::UserAgent
RUN cpanm LWP::Protocol::http::SocketUnixAlt
RUN cpanm URI
RUN cpanm AnyEvent
RUN cpanm AnyEvent::HTTP
RUN cpanm IO::String

COPY . /docker-perl
WORKDIR /docker-perl
RUN cpanm --installdeps .
RUN perl Build.PL
RUN ./Build build

# This is a terrible cheat.
ENV DOCKER_HOST http://10.10.10.4:2375

RUN ./Build test
RUN ./Build install

CMD ["docker.pl", "ps"]

and then docker build -t docker-perl . results in:



bash-3.2$ docker build -t docker-perl .

Sending build context to Docker daemon 138.8 kB

Sending build context to Docker daemon

Step 0 : FROM perl:5.20

---> 4d4674548e76

Step 1 : MAINTAINER Sven Dowideit

---> Using cache

---> 4ad0946e76aa

Step 2 : RUN cpanm Module::Build::Tiny

---> Using cache

---> f1b94d36a51c

Step 3 : RUN cpanm Moo

---> Using cache

---> 98de8c3a19a8

Step 4 : RUN cpanm JSON

---> Using cache

---> 73debd4ee367

Step 5 : RUN cpanm JSON::XS

---> Using cache

---> 89378a425f0b

Step 6 : RUN cpanm LWP::UserAgent

---> Using cache

---> 252fe329cf22

Step 7 : RUN cpanm LWP::Protocol::http::SocketUnixAlt

---> Using cache

---> a77d289faf19

Step 8 : RUN cpanm URI

---> Using cache

---> 6804b418778d

Step 9 : RUN cpanm AnyEvent

---> Using cache

---> c595f66bcf73

Step 10 : RUN cpanm AnyEvent::HTTP

---> Using cache

---> 31b25b2da3c4

Step 11 : RUN cpanm IO::String

---> Using cache

---> e54cd3d01988

Step 12 : COPY . /docker-perl

---> 4d4801209a79

Removing intermediate container c42897136186

Step 13 : WORKDIR /docker-perl

---> Running in 36575a59e465

---> 7042c67cf1b7

Removing intermediate container 36575a59e465

Step 14 : RUN cpanm --installdeps .

---> Running in c1b5cbb75c4a

--> Working on .

Configuring Net-Docker-0.002005 ... OK

<== Installed dependencies for .. Finishing.

---> 071f9caca472

Removing intermediate container c1b5cbb75c4a

Step 15 : RUN perl Build.PL

---> Running in fae9bbce142f

Creating new 'Build' script for 'Net-Docker' version '0.002005'

---> 2800182bd0ff

Removing intermediate container fae9bbce142f

Step 16 : RUN ./Build build

---> Running in a98cb6c7a808

cp lib/Net/Docker.pm blib/lib/Net/Docker.pm

cp script/docker.pl blib/script/docker.pl

---> f5ba5be85f9d

Removing intermediate container a98cb6c7a808

Step 17 : ENV DOCKER_HOST http://10.10.10.4:2375

---> Running in 1e8b3273974c

---> fffb42d69011

Removing intermediate container 1e8b3273974c

Step 18 : RUN ./Build test

---> Running in 3baacccbf17e

t/docker-api.t .... ok

t/docker-start.t .. ok

All tests successful.

Files=2, Tests=41, 5 wallclock secs ( 0.02 usr 0.02 sys + 0.26 cusr 0.06 csys = 0.36 CPU)

Result: PASS

---> f5d371cdc1fa

Removing intermediate container 3baacccbf17e

Step 19 : RUN ./Build install

---> Running in 60cd90714e02

Installing /usr/local/lib/perl5/site_perl/5.20.0/Net/Docker.pm

Installing /usr/local/bin/docker.pl

---> 62c6368a2fb0

Removing intermediate container 60cd90714e02

Step 20 : CMD ["docker.pl", "ps"]

---> Running in cb5ade11e146

---> 94984ed5756d

Removing intermediate container cb5ade11e146

Successfully built 94984ed5756d

So that I can use it:



bash-3.2$ docker run --rm -it docker-perl

ID IMAGE COMMAND CREATED STATUS PORTS

e619112eae2f 10.10.10.2:5001/sve bash 1411104597 Up 7 days ARRAY(0x2b84a48)

363ec1c45841 10.10.10.2:5001/sve bash 1411104470 Up 7 days ARRAY(0x29bae20)

You can also run the container with bash – docker run --rm -it docker-perl bash so you can do some more testing, or try out more complex examples.

In this case, the `./Build test` step probably needs to happen in the `docker run` phase, as it needs access to a working Docker daemon – this issue will be true for modules that talk to external resources.

I’ve made a pull request for the tiny changes to get me this far. Perhaps Dockerfiles like this could be a gateway into the world of contributing quick fixes for open source libraries.


Docker, containers and simplicity.

I’ve now been working for Docker Inc. for 2 months. My primary role is Enterprise Support Engineer: I’m one of the guys that your company can turn to when the going gets tough, for training, or just generally to ask questions.

In these months, I’ve been working on Boot2Docker (OSX, Windows installers), our Documentation, and generally helping users come to terms with the broad spectrum of effects that Docker has on developing, managing and thinking about software components.

I’m still trying to work out ways to explain what Docker does – this is March’s version:

Virtual machines emulate complete computers, so you setup, maintain and run a complete Operating System, and copy around complete monolithic filesystem images.

Docker Containers emulate Operating Systems, allowing you to build, manage and run applications and services. And you copy around your application, data and configurations.

This might not quite feel right, given that images are built ‘FROM’ a base image – but one thought I have is that, as that base image (and most often some local modifications) is likely to be common to your entire infrastructure, that layer will be shared by all your containers. Chances are, you didn’t build it either – Tianon did :).

Solomon keeps reminding me that Dockerfiles are like Makefiles – and in the back of my mind, I think of our application image layers as packages, thin wrappers around applications that are then orchestrated together to produce your service. The base image you choose is only there to support that, and over time I’m sure we’ll simplify those much more.


Boot2Docker dom0 and more docker orchestration magic.

So, some concrete examples for my previous Boot2Docker rules post:

I’m modifying boot2docker to

  1. if present, auto-start an image named ‘dom0:latest’. This image then orchestrates the remainder of the system.
  2. my personal dom0 image starts sshd and the containers I want this system to auto-run.
  3. Set up a `home-volume` container, which I -volumes-from mount into all my development containers.

When I do some development, testing or production, it happens in containers; the base OS is pristine and can be trivially updated (atm, I’m booting from USB flash and SD card).

Similarly, the dom0 container is also a bare busybox container, cloned from the filesystem of the boot2docker image itself. I’m not ready for my end goal of doing this to my notebook and desktop – but then, this setup is only a few days old :).

This setup uses my "detect existing /var/lib/docker on HD" pull request, the dom0-rootfs, dom0-base and dom0 images, and then, from there, an initial dev image.

Two customisations I’ve made to boot2docker are persisted on the HD – /var/lib/boot2docker/etc/hostname is set to something useful to me, and the optional /var/lib/boot2docker/bootlocal.sh script starts the dom0 container at boot.

When I need a set of containers started, I can create a tiny orchestration container that can talk to the docker daemon and thus start more containers, controlling how they interact with each other and the outside world.

 
