Agile – Should You Ever Re-Estimate User Stories?

agile, estimation, scrum

My current project is having a 'discussion' which is split down the middle: "this story is more complex than we originally thought, so we should re-estimate" vs. "you should never re-estimate, as you only ever estimate up and never down".

Can anyone shed some light on whether you ever should re-estimate?

IMHO I'd imagine you could bring up an entirely new card for a new requirement or story, but going back and re-estimating backlog items seems to skew the concept of relative sizing and will only ever 'inflate' your backlog.

Best Answer

A core part of estimating stories on one team I worked with was the idea of a story that was 'too big' to estimate; that is, the workload implied by the story was beyond the scope of a single sprint.

As more information came to hand, or the team got a better grasp on what at first seemed a single beast of a story, we would often re-estimate the story down. In most cases, this meant breaking the 'too big' story into smaller, achievable stories and estimating those instead.

These 'too big' stories never went into sizing numbers or burn down charts.

We might also come back to a story down the track and, with a better understanding of the requirements, re-estimate it. You should not re-estimate a story simply because it has become easier to achieve (e.g. after building up a bunch of framework libraries, a dependent piece of work becomes easier), because the whole idea is that as the team gets 'better', it can achieve more in a sprint. But I certainly think it is valid to re-estimate stories if your understanding of them changes.

The following was going to be a comment but I got carried away...

Don't forget to distinguish between size and complexity in your estimating. You should estimate on size only, not complexity or difficulty. For example, adding a button to a screen should always be a '1': as far as the user is concerned, they are getting a button, so the size is very low. It doesn't matter whether you actually implement it in C# (low complexity, very easy) or Assembly (high complexity, very difficult); the user story has the same size.

So, when I say that it's worthwhile re-estimating when understanding changes, it's not that your understanding of how to implement the feature has changed; it's your understanding of what the feature is that has changed.

So, "I want a button" is easy, but later you realise the user means "I want a clickable button, which turns green and pops up a message to the user, is now a more complex story, and so should be re-estimated.


Update: as requested, I'll try to elaborate on what I mean by estimating on 'size' rather than complexity.

I think it is easiest to consider this distinction in terms of a new product. Your team is tasked with building a multi-screen system, where everything is new. Amongst your user stories, you have a series of stories like:

1) I want a button on Screen A, which when clicked will show an error to unauthorised users.
2) I want a button on Screen B, which when clicked will show a message if the current day is a weekend.
3) I want a menu option on Screen C, which when clicked will make the screen flash every 5 minutes.

Now, when the team estimates these stories, they agree that they are all roughly the same size, and estimate each one as a 'small' story, worth 5 points on their sprint velocity scale.

The sprints kick off, and in the first sprint the team completes none of these stories, because they spend the whole cycle setting up projects, infrastructure, core libraries, etc. And there's a new guy on the team who is still learning.

A few sprints in, the team puts together a screen which fulfills Story #1. Happy days: they've now achieved 5 points of velocity (with an average of, say, 1 point per sprint, due to the unproductive sprints at the start).

Now, for the next sprint, the infrastructure is in place, the team has a screen template to re-use and the new guy is getting his head around things; this sprint, the team knocks off Stories #2 and #3.

Now, they have achieved 10 points in a single sprint, for an average of about 4 points per sprint. This shows that team productivity is improving over time, which is entirely expected: the project evolves, the team upskills, and core code is reused (not rewritten).

This, to me, is the ideal: well-estimated stories demonstrating an increase in velocity over time (which you would assume will eventually plateau, unless something major changes, like a new team member).
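
To make that arithmetic concrete, here's a minimal sketch in Python with hypothetical sprint numbers chosen to match the story above (sprints 1 and 2 spent on setup, Story #1 landing in sprint 3, Stories #2 and #3 in sprint 4); it just tracks the running average velocity:

# Hypothetical sprint history for the size-based estimation example above.
# Points are only banked when a story is completed, so the setup sprints
# contribute zero.
sprint_points = [0, 0, 5, 10]  # sprint 3: Story #1; sprint 4: Stories #2 and #3

total = 0
for sprint, points in enumerate(sprint_points, start=1):
    total += points
    average = total / sprint
    print(f"Sprint {sprint}: +{points} points, "
          f"running average = {average:.2f} points/sprint")

# The average climbs from roughly 1.7 to roughly 3.8 points per sprint,
# which is the productivity improvement described above.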

On the other hand, if right at the start the team looked at those stories and estimated them based on complexity, they would find that Story #1 is a BIG story, as they are factoring in all the ramp-up effort, plus the new guy needing training. Story #2 is a MEDIUM, because they figure they'll be at least on the way by then, and it should be easier. And finally, Story #3 is a SMALL, because it'll be easy once #1 and #2 are done.

Now, what you've ended up with in this model is simply an obfuscated estimate of TIME: the estimates factor in how hard a piece of work will be and how long it will take, and as we know, that is difficult at best. In this model, velocity is evened out from the start, and you'll never be able to demonstrate an improvement in team performance. If you estimate on time, then you'll only ever be able to achieve 40 hours of work in a week, and you'd be silly to plan a sprint with any more or less work. If the team improves its capabilities, you can still only book 40 hours of work. You will only ever achieve 40 hours of work.
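
To illustrate that contrast (again with made-up numbers, not anything from a real project): if the estimate is really time in disguise, the 'velocity' never moves off the sprint's hour budget, no matter how much faster the team actually delivers features.

HOURS_PER_SPRINT = 40
# Hypothetical relative delivery speed: the team gets faster as the
# infrastructure settles and the new guy comes up to speed.
relative_speed = [1.0, 1.0, 2.0, 3.0]

for sprint, speed in enumerate(relative_speed, start=1):
    size_points_delivered = 4 * speed  # size-based estimates grow with capability
    print(f"Sprint {sprint}: time-based velocity = {HOURS_PER_SPRINT} hours (always), "
          f"size-based velocity = {size_points_delivered:.0f} points")

# The hour-based figure is pinned at 40 every sprint; only the size-based
# figure shows the team improving.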

Hence why I noted that a job in C# is easier than a job in Assembly (less complexity), but that the 'size' of the task should be estimated equivalently. That way, you can see that the choice to move languages, improvements in capability, or adjustments to some other team dynamic have a direct impact on velocity.


[Another Update: Addressing Prioritisation]

As for prioritisation, I believe this is a separate discussion from sizing/estimating. You don't prioritise the queue simply on the estimates of a story, else we would only ever do small jobs, and never the bigger, [possibly] more important, ones. In a team I led, we routinely had conversations about complexity when managing a sprint queue. The PO would be given to understand that, whilst some task is a "SMALL" task (in story points), it might be difficult to achieve because of X, Y, Z. At times, the team's velocity would take a hit in order to implement some of these more complex stories. Other times, the PO would say "well, I'd rather have 5 other things this sprint, so we will put off the more complex jobs".

If we simply estimated stories by difficulty, then that would hide the real velocity. Difficult or time-consuming tasks would always be given more weighting, in order to make velocity average out. As I noted before, this is just a different form of time-estimating, and there's no point tracking velocity if this is your estimation method: you always have a fixed duration for a sprint, so your "velocity" would only change if you estimated incorrectly (e.g. an 8-hour task took 1 hour).

Hope that clears it up a little?