Agile – Why can’t we get anything done?

agile, productivity, teamwork, testing

I work on a small team, in a medium-sized company, most of which isn't involved in software development. I'm the newest and least-experienced developer and had no professional or academic background in software before starting, but I'm quite pleased with how respected my input is and am grateful for being taken seriously at such an early stage in my career.

Still, I feel like I should be doing more with this generous amount of airtime. As a team, we seem to have trouble getting things done. I'd like to be able to suggest something to improve the situation, and I think I'd be listened to if it was a good idea, but I'm at a loss for what to suggest.

Things I can identify as being issues include:

  • Specification of the tasks at hand is sparse. This is partly because management is a bottleneck and we don't have the money or people to commit to working out detailed requirements as much as we'd like. It's also partly because the software we're developing is investigative and the precise method isn't clear until it's demonstrated and used to determine its effectiveness.
  • The lead dev is very fond of what he calls 'prototyping', to the point that he's lately started insisting that everything be 'prototyped' — which to the rest of us looks like writing bad code and handing it to the modellers to play with. In many cases it isn't clear what he expects to come out of this exercise. The 'actual' implementation then suffers, because he insists that good practice takes too much time away from the prototyping. I haven't even begun to untangle this twisted logic, and I'm not sure I want to try.
  • The modellers are expected to tell us everything about the desired methodology in precise detail, and it's taken on absolute trust that what they come out with is theoretically flawless. This is hardly ever true, but no action is taken to rectify this situation. Nobody on the modelling side raises any concerns in a structured way that is likely to be acted upon, nor do they seek guidance in applying best practices. Nothing is done about their passivity either.
  • I've tried to push TDD in the team before, but found it difficult as it's new to me and while those with oversight of my work were willing to tolerate it, no enthusiasm has been forthcoming from anyone else. I can't justify the amount of time I spend wallowing and not finishing features, so the idea has – for the moment – been abandoned. I'm concerned it won't be picked up again, because nobody likes to be told how to do their job.
  • We now have a continuous integration server, but it's mostly only being used to run multiple-hour regression tests. It's been left open that it ought to be running full-coverage unit and integration tests as well, but at the moment nobody writes them.
  • Every time I raise the issue of quality with the lead dev, I get an answer to the effect of 'Testing feature A is straightforward, feature B is much more important to the user but too difficult to test, therefore we shouldn't test feature A'. Once again I've made no headway in trying to untangle this logic.

…phew. When I phrase it like that, it looks much worse than I thought. I suppose, as it turns out, this is a cry for help.

Best Answer

Let me play devil's advocate for a moment:

Specification of the tasks at hand is sparse... The Lead Dev is very fond of what he calls 'prototyping'

The lead dev is fond of prototyping because specifications are sparse. This is probably a good thing; this is how iterative shops work.

The modellers are expected to tell us everything about the desired methodology in precise detail

This won't work in an iterative shop. The very nature of iterative development is that requirements are often incomplete. The iterations are what is needed to flesh out the requirements.

I've tried to push TDD in the team before, but found it difficult as it's new to me

This won't work either; you need to understand the technology before you can evangelize it. Further, in an iterative shop with scant requirements, TDD may be too much overhead. It's better to encourage adequate unit testing coverage.
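To make "adequate unit testing coverage" concrete: it can be as lightweight as a handful of plain assertions per feature, written after the code if need be, with no TDD ceremony. A minimal sketch (the `decay` function and its values are hypothetical stand-ins for one of your modelling routines):

```python
import unittest

# Hypothetical modelling routine standing in for a real feature;
# the point is that tests like these cost minutes, not days.
def decay(value, rate, steps):
    """Apply exponential decay `steps` times."""
    for _ in range(steps):
        value *= (1 - rate)
    return value

class TestDecay(unittest.TestCase):
    def test_zero_steps_leaves_value_unchanged(self):
        self.assertEqual(decay(100.0, 0.1, 0), 100.0)

    def test_decay_reduces_value(self):
        self.assertLess(decay(100.0, 0.1, 3), 100.0)

    def test_zero_rate_is_identity(self):
        self.assertEqual(decay(100.0, 0.0, 5), 100.0)

if __name__ == "__main__":
    unittest.main()
```

A few such tests per module are far easier to sell to a sceptical lead than full TDD, and they run fast enough to sit in front of the regression suite.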

We now have a continuous integration server, but it's mostly only being used to run multiple-hour regression tests.

That may be appropriate in a small, iterative shop.
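If the multi-hour runtime becomes a bottleneck, one common compromise is to run fast unit tests on every push and reserve the long regression suite for a nightly schedule. A sketch only — the trigger variable, test directories, and runner are all hypothetical and depend on your CI system:

```shell
#!/bin/sh
# Hypothetical CI entry point: split fast checks from the slow suite.
# CI_TRIGGER is assumed to be set by the CI server ("push" or "nightly").
set -e

case "${CI_TRIGGER:-push}" in
  push)
    # Minutes: unit tests only, on every commit.
    python -m pytest tests/unit -q
    ;;
  nightly)
    # Hours: the full regression suite, once a day.
    python -m pytest tests/regression -q
    ;;
esac
```

This keeps the CI server you already have, while giving developers feedback in minutes instead of hours.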

Every time I raise the issue of quality with the lead dev, I get an answer to the effect of 'Testing feature A is straightforward, feature B is much more important to the user but too difficult to test, therefore we shouldn't test feature A'

It sounds like your shop has some fairly tight time constraints; like it or not, you are bound by those constraints.

It also sounds like you came from a part of the software industry that values doing things "the right way" over getting things to the market first. There's nothing wrong with that (it's admirable, in fact), except that the first to market with a buggy piece of software is often the winner. It's not fair, but that's how it is.
