

Distributed agile

by Martin Aspeli last modified May 10, 2009 04:29 AM

Managing a Google Summer of Code 2009 project like a real project

Google's Summer of Code programme has kicked off again, and the Plone Foundation is once again a mentoring organisation. This year, Google are sponsoring seven students working on Plone Foundation projects. I am mentoring one of them, Timo Stollenwerk, who is working on improving Plone's default commenting infrastructure.

Plone's commenting story is getting a bit long in the tooth architecturally, feature-wise and UI-wise. A number of add-on products attempt to address the latter two aspects, but none of them has so far been robust enough to become part of the core. The aim of this year's Summer of Code is very much to build something that will be part of the next major release of Plone. To do that, we'll need this year's GSoC project to deliver the brunt of the code required for next-generation Plone commenting.

We're lucky to have Timo as the chosen student. He is both motivated and capable, and has already contributed a number of Plone products. However, we need to run this as a proper project, with requirements, deliverables and tracking of progress if we are to ensure that the end result is going to outlive the project.

In the past, we've often let GSoC students loose on their proposals, with mentoring being largely reactive ("ask if you need anything"). That works if the student is very motivated and has the type of self-discipline usually required of freelancers and consultants. I suspect Timo actually does have this degree of self-discipline. However, the lack of focus, direction and regular feedback can be daunting, and often means that we fail to deliver the most important features, or leave too many loose ends (like documentation, testing, or refactoring) to make the code viable once GSoC ends.

The team

We've assembled a team of people around the commenting GSoC project:

  • Timo Stollenwerk is the main developer, and of course, the recipient of Google funding.
  • I will act as mentor and project manager. I'll also QA the code and help turn the end result into something we can realistically propose for a future Plone version.
  • Jon Stahl is acting as the customer, and owns a catalogue of requirements that Timo and I are delivering against.
  • David Glick provides technical expertise and advice on an ad-hoc basis.
  • Matthew Wilkes runs Plone's GSoC programme, and will help us interface with Google and with the programme overall.

This is a great team, but there is a problem: Timo is in Barcelona (GMT+2), Jon and David are in Seattle (GMT-7), and I'm in Perth (GMT+8). Face-to-face time is pretty much impossible (it takes me about 25 hours to fly to Timo...), and scheduling a call or IRC session is pretty tricky too. We did manage one - 7am in Seattle / 4pm in Barcelona / 10pm in Perth on a Saturday - but we are clearly going to need tools that support a distributed team.

The process

To manage our project, we're using a process based loosely on Scrum (albeit without the daily scrum stand-up meetings). It goes a little like this:

We started out discussing requirements via email and IRC. We captured those in a shared Google Spreadsheet, in the form of user stories. User stories are just sentences of the form "As a <role>, I can <do something>, so that <something useful is achieved>", though I often skip the last clause when the value delivered from a user story is obvious. User stories should be:

  • atomic - ideally stories should not depend on one another
  • swappable - during the project, we need to be able to take out a story that we no longer need to deliver and replace it with another one
  • deliverable-focused - as opposed to task-focused: when a story is complete, the result should be tangible and demonstrable
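To make the "As a <role>, I can <do something>, so that <something useful is achieved>" form concrete, here is a minimal sketch of a user story as a data structure. The class and field names are illustrative only - they are not part of any tool we used:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """A backlog item in the 'As a <role>, I can <action>, so that
    <benefit>' form. Names here are illustrative, not any tool's API."""
    role: str
    action: str
    benefit: str = ""  # often omitted when the value delivered is obvious

    def __str__(self):
        sentence = f"As a {self.role}, I can {self.action}"
        if self.benefit:
            sentence += f", so that {self.benefit}"
        return sentence

# A hypothetical story from a commenting backlog:
story = UserStory("site visitor", "reply to an existing comment",
                  "discussions can be threaded")
print(story)
```

Keeping stories this small and self-describing is what makes them atomic and swappable: each one is a single sentence that can be pulled out of the backlog without unpicking anything else.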

Once we had agreed on the initial set of stories, Jon, as the 'customer', went through each one and assigned a measure of business value to each. This is just a number that represents the relative importance of a given story. I prefer to use a Fibonacci sequence number - 1, 2, 3, 5, 8 or 13 - though other schemes are equally valid. Some people prefer to use words like Must Have, Should Have, Could Have and Won't Have (aka MOSCOW), or just Critical, High, Medium, and Low.

The important thing is to ensure that you don't end up with half your requirements being "critical". Customers often feel that if they don't call something "critical", they may not get it. That's a self-defeating policy, since we then can't ensure that we deliver the most valuable things first. Since words like "critical" and "high" have specific meanings, it's often better to use numbers that are a bit more abstract, and look at the spread of values used, re-adjusting the priorities if necessary.
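Checking the spread of values can be as simple as tallying them up. The sketch below flags a backlog where more than half the stories carry the top two values; the "more than half" threshold is a rule of thumb I'm assuming for illustration, not a standard:

```python
from collections import Counter

def value_spread(values):
    """Tally how often each business value occurs, and flag a top-heavy
    backlog in which more than half the stories score 8 or 13.
    The threshold is an illustrative rule of thumb, not a standard."""
    counts = Counter(values)
    top_heavy = (counts[8] + counts[13]) > len(values) / 2
    return counts, top_heavy

# A reasonably spread backlog (hypothetical values):
counts, top_heavy = value_spread([13, 8, 8, 5, 5, 3, 3, 2, 1])
print(dict(counts), top_heavy)
```

If the flag trips, it's a cue to sit down with the customer and re-adjust priorities before sorting the backlog.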

We then sorted the list by business value. This is a requirement of Pivotal Tracker (more on that in a moment), so in this case we didn't have much of a choice, but seeing a sorted list is quite useful in any case.

With the prioritised list of stories, we moved on to estimating. For this, we used a technique called planning poker, estimating in story points. A story point is just a measure of the relative size of a story. You don't need to have technical expertise to judge the relative size of a story (in fact, it's very valuable to get opinions from non-technical people who understand the problem domain). Again, we use a numerical scale, and my preference is to use a Fibonacci sequence.

At this point, we are not concerned with actual time. In fact, it's really important that you don't end up mapping points to days or hours. It's better to think that 1 point means "trivial", 2 means "quite small", 3 means "a decent chunk of work", 5 means "a fair amount of work" and 8 means "lots of work and/or uncertainty". Sometimes, we also include higher numbers, like 13 or 100, though Pivotal doesn't support those. Normally, such larger numbers just mean that stories are placeholders that need to be broken up later. For this project, the time horizon and our understanding of the subject domain are such that we can avoid having to deal with those types of requirements.

It's also possible to estimate using "ideal days" (how long it would take if there were no distractions and you had exactly the same number of hours available each day), although in practice this requires a much better understanding of how features will be implemented, and it is difficult to stop people from padding their estimates when they start thinking about actual time.

Regardless of the scale you use, planning poker is a great technique for arriving at consensus estimates quickly. If you've never used it, I highly recommend that you give it a try the next time you have to estimate anything. Of course, it's normally done in person, around a table. In our case, that wasn't an option. Therefore, we used IRC to discuss stories, and to actually do the estimation in real time. We imported all the stories from the Google Spreadsheet into a game, and estimated them one by one. The website simulates a planning poker card game, complete with animated cards. It works pretty well, though it is a bit buggy and we had to reload the game a few times. If you're using it, I recommend that you have a trial run first.
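The mechanics of a poker round can be sketched in a few lines. This is a deliberately simplified model of the game - real rounds involve discussion and re-estimation, which I've reduced to a None result meaning "talk it over and play again":

```python
FIB_CARDS = (1, 2, 3, 5, 8, 13)

def poker_round(estimates):
    """One planning poker round: every estimator reveals a card at once.
    Returns the agreed number if all cards match, or None to signal that
    the high and low estimators should explain their reasoning before
    the next round. A simplified model, not the website's actual code."""
    if any(card not in FIB_CARDS for card in estimates.values()):
        raise ValueError("estimates must come from the agreed scale")
    cards = set(estimates.values())
    return cards.pop() if len(cards) == 1 else None

# Hypothetical rounds: the first diverges, the second reaches consensus.
first = poker_round({"Timo": 3, "Martin": 5, "Jon": 3})
second = poker_round({"Timo": 3, "Martin": 3, "Jon": 3})
```

The simultaneous reveal is the important part: because nobody sees anyone else's card before committing, estimates don't anchor on the first number spoken.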

Using this technique, we were able to estimate all 58 stories in about 90 minutes, which is not bad. We could then export all the stories to a CSV file.

With estimation out of the way, the next step is release planning. For this, we moved to Pivotal Tracker, a free online agile project management tool. Pivotal is very good, and feels pretty solid. It is a bit opinionated about the way you run things (e.g. you have to maintain a prioritised backlog of stories as a strictly ordered list, and it assumes you'll largely work on those in order of priority), and you need to accept some limitations on how iterations are set up (they pretty much just run at regular intervals from when you set up the project). It is also quite strict about using velocity (the number of story points you can get done in one iteration, based on historical performance) to determine which stories go into an iteration, which can be a bit frustrating, but overall, it's slick and intuitive to use.
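Velocity itself is a simple calculation. Pivotal Tracker uses a rolling average along these lines; the three-iteration window in this sketch is my assumption for illustration, not the tool's documented behaviour:

```python
def velocity(points_per_iteration, window=3):
    """Estimated velocity: average story points completed over the most
    recent iterations. The three-iteration window is an assumption for
    illustration, not Pivotal Tracker's documented formula."""
    if not points_per_iteration:
        return 0
    recent = points_per_iteration[-window:]
    return sum(recent) / len(recent)

# Three finished iterations delivering 8, 12 and 10 points:
print(velocity([8, 12, 10]))
```

Because it's a historical average, velocity self-corrects: an optimistic team that over-commits one iteration gets a lower number to plan against in the next.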

With a prioritised list in place, we decided to split the project up into seven iterations of two weeks each, with code releases at the end of iterations 3, 5 and 7. Each release is described by an epic (a grouping of stories), such as "basic functionality" or "moderation and spam protection".

Jon, as customer, took charge of this, moving stories into the backlog in accordance with our epics and release milestones. This is an ongoing process: the current iteration is treated as fixed, while other items in the backlog (accepted stories we are not yet working on) are placed more accurately the closer they get to the current iteration. The idea is to have high certainty about what we want to work on in the next iteration, as well as a good view of what we're likely to work on after that - not to plan the entire project in excruciating detail. The plan will change over time, and stories will drop out or come in as new requirements are discovered. That's OK, because the process treats plans as malleable, and we are not making a large investment in writing detailed specifications or developing complex project plans.
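Filling an iteration from a prioritised backlog is essentially a greedy walk down the ordered list until the velocity budget runs out. Pivotal Tracker plans in roughly this way; the strict stop-at-first-misfit rule below is a simplification of its actual behaviour, and the story titles are made up:

```python
def plan_iteration(backlog, velocity):
    """Fill the next iteration from a prioritised backlog of
    (title, points) pairs, stopping when velocity is used up.
    A simplified model of Pivotal Tracker's planning, not its code."""
    iteration, remaining = [], float(velocity)
    for title, points in backlog:
        if points > remaining:
            break  # stop at the first story that doesn't fit: order matters
        iteration.append(title)
        remaining -= points
    return iteration

# A hypothetical backlog, already sorted by business value:
backlog = [("threaded replies", 5), ("email notification", 3),
           ("spam protection", 8), ("RSS feed of comments", 2)]
print(plan_iteration(backlog, velocity=10))
```

This is exactly where the tool's strictness bites: the 2-point story at the bottom would fit in the leftover budget, but an in-order planner won't reach past the 8-point story to grab it.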

With the current iteration planned out, Timo and I, as the delivery team, looked at the stories there and moved some things around until we were happy that we had a set of stories we could commit to delivering, arriving at a consensus with Jon, the customer. We (mainly Timo) then moved on to actually working on these features, marking them as complete as we go along.

At the end of each iteration, we will meet up with Jon again to demonstrate what we've delivered and plan for the next iteration. A story that is delivered needs to be complete, which means:

  • the code is complete
  • there are automated tests
  • it has been tested through the web
  • the UI is complete
  • there is documentation

If this is not the case, we may need to re-open a story and return to it in the next iteration.

Will it work?

This approach is a bit novel as far as an open source project like this is concerned. We'll see in time whether it works out. However, everyone involved is pretty upbeat about this right now, and at least it'll give us a way to track and report against our progress. Timo and I will be in touch about the project over the next few months, so watch out for updates on and the mailing lists.




Posted by at May 10, 2009 12:19 PM
Thanks for the in-depth writeup, Martin! And for taking the lead on this incredibly innovative approach. It's been really fun so far, and it definitely "feels right."

Succeed or fail (and I'm betting on the former!), this will have been an experiment worth doing, and I'm certain we will be wiser for it.

In the next iteration, I'd like to try weaving in an approach we used late last year to organize a sprint on PloneFormGen. We used to solicit and prioritize user stories from the PloneFormGen user community in advance of a sprint here in Seattle. This allowed a wider community of users to contribute and discuss ideas before the sprint team sat down to work.

Tweaking velocity in Pivotal Tracker

Posted by at May 10, 2009 05:57 PM
> [Pivotal Tracker] is also quite strict about using velocity (the amount of story point you can get done in one iteration, based on historical performance) to determine what stories go into an iteration

Pivotal Tracker has a "team strength" factor which you can effectively use as a multiplier to the velocity for a given iteration.

Tweaking velocity in Pivotal Tracker

Posted by Martin Aspeli at May 10, 2009 10:30 PM
Hi David,

This is true. However, you basically end up twiddling the team strength until you get the number of stories you want in, which is kind of self-defeating. I'd like to be able to have some leeway in dropping stories into the iteration even if that goes slightly over the velocity prediction, e.g. because I'm sceptical of some of the estimates or because some of the stories may have been started already.


Overcoming distributed hurdles

Posted by at May 11, 2009 12:03 PM
Great write-up, Martin. I've found great for distributed teams also, though a bit buggy as you mentioned. I'm amazed you got through that many stories so quickly, tho maybe the small size of the team helped. I've found IRC chat slows down the process, so the teams I work with use a Skype conference call instead.

I've had some success with alternatives / companions to planning poker for teams in one location. James Grenning has a great writeup -

Pivotal Tracker has a lot of strengths, but its rigidness did me in - like no stories larger than 8 points, which is unreasonable with a backlog that spans a few months. I've been using instead, which I believe is free for open source projects.