Architecture – a good method for lightweight architecture evaluation

I'm familiar with architecture evaluation methods such as the technically oriented Architecture Tradeoff Analysis Method (ATAM) and the more business-oriented Cost Benefit Analysis Method (CBAM). However, these methods are fairly large scale: they prescribe several brainstorming sessions, presentations, the development of a host of scenarios describing tradeoffs, and so on. While useful for projects of a certain size, they are too heavyweight for internal projects or desktop applications that are typically developed by a handful of developers (or fewer) but that, despite their small size, still have some fairly steep quality constraints (performance, scalability, adaptability).

A typical practice I have used in the past is to have one developer (or the architect, if the team has one) come up with a general architecture for the application and then discuss it on a whiteboard with the rest of the team, typically using some pseudo-UML notation that is easy to draw and understand. This usually leads to feedback and a few iterations on the architecture, but it tends to be a little too informal: all kinds of assumptions get made that can later turn out to be wrong decisions.

Methods like ATAM force all stakeholders to think deeply about the architecture, which leads to discussion until everyone at least agrees on what exactly the architecture is.

Does anyone have experience with doing lightweight up-front architecture evaluation? If so, what are good practices?

Best Answer

The key to lightweight evaluation is to evaluate the right things at the right time. There are two ways that I know of to do this effectively. With scenario-based evaluation, you use quality attribute scenarios and use cases to drive the evaluation, focusing only on the highest-priority quality attributes. With risk-based evaluation, you identify risks and let those risks drive your architecture design activities.
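
To make the scenario-based side concrete: quality attribute scenarios are commonly written in the six-part form used in SEI literature (source, stimulus, artifact, environment, response, response measure). Here's a minimal sketch of that form in Python; the concrete scenario content (the load figures, the service name) is entirely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    """A quality attribute scenario in the six-part SEI-style form."""
    source: str            # who or what generates the stimulus
    stimulus: str          # the condition arriving at the system
    artifact: str          # the part of the system being stimulated
    environment: str       # conditions under which the stimulus arrives
    response: str          # how the system should react
    response_measure: str  # how you decide whether the response is acceptable
    priority: int          # 1 = highest; only top scenarios drive the evaluation

# Hypothetical performance scenario for a small internal web application:
peak_load = QualityAttributeScenario(
    source="concurrent end users",
    stimulus="100 simultaneous report requests",
    artifact="reporting service",
    environment="normal operation, peak hours",
    response="all requests are served without errors",
    response_measure="95th-percentile latency stays under 2 seconds",
    priority=1,
)
```

The response measure is what makes a scenario evaluable: during a walkthrough you argue (or prototype) whether the architecture can actually meet it.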

There are two books I can recommend which explore these two (somewhat related) approaches.

Architecting Software Intensive Systems by Anthony Lattanze introduces the Architecture Centric Design Methodology (ACDM) and covers lightweight scenario-based evaluations. You may recognize Lattanze from the SEI's Quality Attribute Workshop; similar ideas are involved.

Just Enough Software Architecture: A Risk-Driven Approach by George Fairbanks introduces, well, a risk-driven approach to designing and evaluating the architecture of a software system. There are also some free chapters available on his website if you want a preview. While the principles in this book are immediately applicable, the approach does not come with a specific method, so you will need to combine ideas from other areas. I highly recommend the SEI's continuous risk management approach for identifying and prioritizing risks.
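
As a minimal sketch of risk prioritization, here is the common exposure = probability × impact scheme (the SEI's continuous risk management material uses comparable likelihood/impact rankings); the risks and numbers below are invented:

```python
# Each risk: (description, probability 0..1, impact 1..5).
# Exposure = probability * impact; walk the architecture against the
# highest-exposure risks first.
risks = [
    ("ORM layer cannot meet the reporting latency target", 0.4, 5),
    ("Third-party auth service changes its API",           0.2, 3),
    ("Team is unfamiliar with the UI framework (GWT)",     0.7, 2),
]

for description, probability, impact in sorted(
        risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{probability * impact:4.1f}  {description}")
```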

The basic idea behind these approaches is that you reduce the cost of evaluation (and design) by evaluating as you go rather than waiting until the end. While this is certainly a little more heavyweight than talking around a whiteboard, it's nowhere near as costly as a full-blown ATAM. And if you're comfortable doing so, you can cherry-pick practices to meet your specific needs.

No matter which approach you use to drive the evaluation, the general idea is the same...

Before you start:

  • Quality attribute scenarios or risks, prioritized (can be informal if that's all you've got)
  • Clear definition of the go/no-go decision, i.e., how you will know the architecture is "good enough" (see the sketch after this list)
  • Most recent cut of the architecture description (the artifact you are evaluating)
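
One way to make the go/no-go definition concrete is to phrase each criterion as an explicit check against the evidence the session produces. This is only a sketch; the criteria and numbers are hypothetical:

```python
# Hypothetical go/no-go criteria, one per high-priority scenario.
# "Good enough" = every check passes against the findings from the session.
go_no_go = {
    "peak-load latency": lambda p95_seconds: p95_seconds < 2.0,
    "add a report type": lambda modules_touched: modules_touched <= 3,
}

# Findings gathered during the walkthrough (estimates, prototype numbers, ...):
findings = {"peak-load latency": 1.6, "add a report type": 5}

for criterion, check in go_no_go.items():
    print(f"{criterion}: {'go' if check(findings[criterion]) else 'no-go'}")
```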

Sit down for an evaluation session:

  • Architect presents an overview of the architecture
  • Walk through a view, showing how each scenario is satisfied or each risk is mitigated
  • Issues are recorded to be fixed later (see the sketch after this list)
  • Roles and general procedure are similar to those used for a Fagan inspection (architect or author, moderator, recorder).
  • The session could take as little as an hour or two, depending on the size of your system.
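
For the recorder's log, something as simple as the following is enough. This is a sketch; the view name and the issue are made up:

```python
from dataclasses import dataclass

@dataclass
class Issue:
    view: str         # architecture view the issue was found in
    scenario: str     # scenario or risk being walked through at the time
    description: str
    blocking: bool    # True if it jeopardizes a go/no-go criterion

session_log: list[Issue] = []

# Recorded during the walkthrough, to be fixed after the session:
session_log.append(Issue(
    view="component-and-connector",
    scenario="peak-load latency",
    description="a single shared DB connection pool may serialize report queries",
    blocking=True,
))
```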

Once the session is over:

  • Review the identified issues and determine whether the go/no-go criteria are met. Generally it takes about three reviews to get everything worked out. If the criteria are not met, keep refining and experimenting (or mitigating architectural risks).
  • This is not an "all or nothing" evaluation: different parts of your architecture might "pass" while others still need refinement (see the sketch below for one way to track this per part).
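
A sketch of that per-part bookkeeping, assuming issues were logged per architecture view as above (the views and issue data are invented): a view "passes" once it has no open blocking issues:

```python
from collections import defaultdict

# Hypothetical (view, blocking?) pairs taken from a session log:
issues = [
    ("component-and-connector", True),
    ("deployment", False),
    ("component-and-connector", False),
]

open_blockers = defaultdict(int)
for view, blocking in issues:
    open_blockers[view] += blocking  # bools count as 0/1

for view in ("module", "component-and-connector", "deployment"):
    print(f"{view}: {'needs refinement' if open_blockers[view] else 'pass'}")
```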

To help give you a feel for what the scenario-based approach might look like, there's some public documentation from a capstone project I worked on in grad school. The documentation is a little rough, but it gives some examples of the scenario-based approach within the context of ACDM. We were a team of 5 and built a typical web-based application, about 35 KLOC of Java/GWT.
