Adding a new developer just before a deadline is horrible. But what isn't?

Tags: deadlines, project-management, team

Imagine a project is assigned to a team, and the deadline is estimated at 8 months. After 6 months it becomes apparent that the project will almost certainly not be complete on time (e.g. a law changes, a hidden monumental hurdle is discovered, the lead dev gets hit by a bus, etc.). But the project is important (e.g. you lose an important client on failure, or have to pay reparations).

One solution we all agree is horrible is adding more developers, especially ones new to the company. They will need at least a month to get up to speed and will occupy the rest of the team during that time.

One solution we all agree is awesome is prevention. But such situations do happen.

What is a reasonable solution in such a situation for the manager of the team, provided they have plenty of leverage for additional people, funding, client negotiation, etc.?

Best Answer

We have historically seen over and over again that there are two working and two non-working ways of combining the two fundamental constraints on software releases: dates and features.

  1. Fixed date, flexible features, aka "release what's ready": you release at a pre-determined date, but you only release what is working. This is a model that is successfully used by Ubuntu, Windows, Linux, and many others.
  2. Fixed features, flexible date, aka "release when ready" or "It's done when it's done": you determine the set of features beforehand, and then you simply work until the features are finished. Some Open Source projects work this way.
  3. Fixed date and features.
  4. Flexible date and features.

#1 and #2 have been shown to work well in many different projects. For example, both Ubuntu and Windows are released with a fixed 6-month cadence with whatever features are ready in time for the release. If you make the cadence fast enough, even if a feature misses the release, customers don't have to wait a very long time for the next release.

Linux actually uses an interesting staging of the two: as soon as there is a new release, there is a fixed-time "merge window" of two weeks, during which new features are added. When this merge window closes, the set of merged features is frozen, and a "stabilization period" starts, during which that fixed set of features is stabilized and bugs are fixed. This process takes as long as it takes; there is no deadline. When everything is stable, a new release is made, and the process starts anew. In practice, this leads to a fairly stable release cadence of 6-8 weeks, but the point is that this cadence is not enforced; it emerges naturally.

Note that this does not invalidate my assertion that #3 doesn't work: Linux development does not fix dates and features. They do #1, then make a cutoff point and switch over to #2.
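To make the staging concrete, here is a toy state machine modeling the cycle described above: a merge window of fixed length (#1: fixed date, flexible features) followed by a stabilization period with no deadline (#2: done when the bug count hits zero). The class and field names are illustrative only, not actual kernel tooling.

```python
from dataclasses import dataclass, field

@dataclass
class ReleaseCycle:
    """Toy model of the Linux-style staged release cycle.

    Phase "merge" lasts a fixed number of days; phase "stabilize"
    ends only when all known bugs are fixed. Hypothetical sketch,
    not real kernel process automation.
    """
    merge_window_days: int = 14
    day: int = 0
    phase: str = "merge"            # "merge" -> "stabilize" -> "released"
    features: list = field(default_factory=list)
    open_bugs: int = 0

    def tick(self, merged=None, bugs_found=0, bugs_fixed=0):
        """Advance the cycle by one day."""
        self.day += 1
        if self.phase == "merge":
            if merged:
                self.features.extend(merged)   # features land only here
            self.open_bugs += bugs_found
            if self.day >= self.merge_window_days:
                self.phase = "stabilize"       # feature set is now frozen
        elif self.phase == "stabilize":
            # Late features are ignored: the window has closed.
            self.open_bugs = max(0, self.open_bugs + bugs_found - bugs_fixed)
            if self.open_bugs == 0:
                self.phase = "released"        # "done when it's done"
```

Note that the one hard deadline (the end of the merge window) only ever drops features, never quality; the release date itself is an output of the process, not an input.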

#3 is always a big problem, especially with a larger feature list and longer timeframes. It is pretty much impossible to predict the future (many have tried), so your estimates are almost always off. Either you have finished all the features and are sitting around bored twiddling your thumbs, or, more likely, you bump up against the deadline and frantically try to finish all the features in a hellish death march.

It does work if you keep the feature list and timeframe short enough. This is essentially what a Sprint is in Agile methodologies: a fixed set of features in a fixed timeframe. However, the timeframe is kept short (typically a Sprint is one or two weeks), and rapid, immediate feedback and adjustment are built in. You generally have a Sprint Retrospective after every Sprint, where you gather all the problems and successes of the Sprint and incorporate what you have learned into the next Sprint. And of course there is a Sprint Planning Meeting where the team discusses the next Sprint with the customer and agrees on a set of features to be implemented during that Sprint.

Weekly (or two-weekly) Sprint Retrospectives are still not fast enough feedback, though, so there is also a Daily Standup Meeting with essentially the same goals as the Sprint Retrospective, except being able to react even faster: check whether the previous day's goals were met, and if they weren't, figure out what the problem was and fix it. (Note, I wrote "what" the problem was, not "who"!)

It is also very important that every Sprint ends with the release of a working product, so that the customer can immediately start using the new features, play around with them, get a feel for them, and give feedback for the next Sprint what is good, what isn't, what should be changed, etc.

#4 almost always leads to never-ending releases with feature creep. Debian 3.1 and Windows Longhorn were famous examples that, interestingly, happened around the same time. Neither had a fixed release date, and neither had a fixed set of features. Longhorn took 5 years, Debian 3.1 took 3. In both cases, nobody wanted to cut features, because the long release cycle meant people would have to wait even longer for a cut feature to appear in the next release. But not cutting features made the release date slip even further, so even more features were added because otherwise users would have to wait even longer, which made the date slip again, and so on. An even more famous example might be ECMAScript 4.

So, what can you actually do in your situation? Well, you are currently in situation #3, and that simply does not work. You have to turn your situation #3 either into a #1 or a #2 by either relaxing the release date or dropping features. There simply is nothing else you can do.

The damage was done 6 months ago, and it cannot be magically fixed. You are in the situation where the amount of features cannot be delivered in the amount of time, and one of the two has to give.

If, and only if, you can manage to move the release, then you might have a chance to grow the team. But once you get to 5-10 members, adding more people really won't make you any faster. You would then have to break the work into two or more projects, each with its own feature set, release date, and team; and then you also have to coordinate those projects and define stable interfaces between both the projects and their software deliverables.
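The 5-10 member ceiling is usually explained by communication overhead: the number of pairwise channels in a team grows quadratically, which is the arithmetic behind Brooks's law ("adding manpower to a late software project makes it later"). A quick back-of-the-envelope calculation:

```python
def communication_paths(n: int) -> int:
    """Number of pairwise communication channels in a team of n
    people: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Doubling a 5-person team more than quadruples coordination overhead:
for n in (5, 10, 20):
    print(n, communication_paths(n))   # 5 -> 10, 10 -> 45, 20 -> 190
```

Splitting into smaller teams with defined interfaces replaces many of those pairwise channels with a few team-to-team ones, which is exactly why the coordination and interface work described above becomes necessary.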

Note that in terms of culpability, the three scenarios presented in the question are very different:

  • If the applicable law changes, then it is perfectly possible to deliver the agreed-upon features at the agreed-upon time; it's just that those features are now useless to the customer. (Another good reason to be Agile.) In this case, it is actually in the customer's interest to re-negotiate the project, because if you just stuck to the agreed contract, they would have to pay for a completely useless result. So this is essentially either a completely new project or a requirements change for the existing project, and both mean new prices and new timelines.
  • If the lead developer gets hit by a bus, the culpability is squarely on the project manager. Making sure that the bus factor is > 1 is pretty much a core responsibility of the PM. Practices that can improve the bus factor include Collective Code Ownership, Pair Programming, Promiscuous Pairing, Mob Programming, and Code Reviews.
  • The "monumental hurdle" is a bit squishy. The question doesn't really specify what kind of hurdle it is. If it turns out that the supplier massively underestimated the complexity, then it's obviously their fault. This can be mitigated by Spiking or Prototyping, for example.

However, regardless of who screwed up, we are still in the same place: we have an agreed set of features that cannot be delivered in the agreed time, so there is absolutely no way around the fact that one of the two has to give. There simply is no "non-horrible" solution.