The pattern I have seen with testing over my career shows a strong correspondence with the risk of failure in a project: big projects are more likely to be tested than small ones, mission-critical applications are more likely to be tested than one-off marketing web sites, and in-house systems are less likely to be tested than public-facing ones.
That said, there are still projects that have been tested excessively and others that have not been tested enough, but these are the minority.
To be honest, even a test plan template that had been used successfully by other agile teams might not work well for your team - but seeing what other people are doing is useful for getting ideas about different approaches.
I've also been thinking about the same issue for a while now. My approach so far has been pragmatic: I was working in a small team, initially as the only tester alongside 6 developers. Creating documentation instead of testing would have been a very poor choice. Creating documentation instead of testing, so that the developers could run the tests: another very poor choice, IMHO.
Currently, I add a page to our wiki for each story, and that holds a set of test ideas, used as a basis for exploratory testing sessions. If necessary, I also add setup information there. I would prefer to keep that separate, so it can serve as a resource that is easier to update, but at the moment it goes onto the same page. (I generally don't like mixing the "how" and the "what"; it's harder to see "what" you're doing if you have to pick it out of pages of "how".)

We don't have a template for those pages - I don't feel we've needed one yet. When we do, I will add one, and then tweak it as we learn more. For now, it works for me to give an overview of what areas we'll look at when testing, and being on the wiki, anyone can add items if they feel something is missing.
I have considered setting up a low-tech testing dashboard, but at the moment I believe our whiteboard is sufficient for us to see how stories are progressing - though as the team grows, we may want to revisit that.
You also wanted to know what other Agile testers are doing - here are a few blog posts that I think you'll find useful:
I very much like Marlena Compton's description of how she uses a wiki for testing at Atlassian: http://marlenacompton.com/?p=1894
Again, a light-weight approach, keeping test objectives tied to the story. She uses a testing dashboard for a high-level view of what features are in a release. Each feature links to a page of test objectives, which are sorted under different headings - function, domain, stress, data, flow, and claims. This gives an "at-a-glance" view of what areas/types of tests you have planned, and you can see instantly if one area has far fewer tests. This may be an approach you'd find useful.
Trish Khoo also has some interesting things to say on using a wiki, more on the level of structuring the individual tests (they've moved to using a Gherkin-style "Given, When, Then" format for their tests):
http://ubertest.hogfish.net/?p=243
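If you haven't worked with that format before, here is a minimal sketch of what a "Given, When, Then" structured test can look like, written here as plain Python rather than an actual Gherkin feature file (the `withdraw` function is a made-up example, not anything from Trish's post):

```python
def withdraw(balance, amount):
    """Toy function under test: withdraw an amount from an account balance."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    balance = 100
    # When the user withdraws 30
    new_balance = withdraw(balance, 30)
    # Then the remaining balance is 70
    assert new_balance == 70
```

The appeal of the format is that each test reads as a small specification: precondition, action, expected outcome.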
Elisabeth Hendrickson's blog post about specialised test management systems is a little off-topic, but you may find some useful points raised:
http://testobsessed.com/2009/10/06/specialized-test-management-systems-are-an-agile-impediment/
Best Answer
Here is what I would propose:
1. Test strategy document:
This outlines the overall testing objectives: what testing goals exist and how the overall testing will be performed, linking all levels from unit test through component test and system test to integration test. There is no single standard for this, but it can be something along those lines.
2. Test suite:
This is the collection of test cases, along with conditions on when and how each case needs to be performed: the set of inputs, the procedure, and the expected output behaviour for each element. At times more than simple success or failure is noted, so that further analysis can be done.
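As a rough illustration of that idea - each case as inputs, procedure, and expected output, with the raw observation kept rather than just pass/fail - here is a small Python sketch (the `add` function and field names are invented for the example):

```python
def add(a, b):
    """Toy function under test."""
    return a + b

# Each test case records its inputs and expected output explicitly.
test_suite = [
    {"name": "positive numbers", "inputs": (2, 3), "expected": 5},
    {"name": "negative numbers", "inputs": (-2, -3), "expected": -5},
]

results = []
for case in test_suite:
    actual = add(*case["inputs"])
    results.append({
        "name": case["name"],
        "expected": case["expected"],
        "actual": actual,  # raw observation kept for later analysis
        "status": "pass" if actual == case["expected"] else "fail",
    })
```

Keeping the actual output alongside the verdict is what lets you go back later and analyse failures rather than just count them.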
3. Test environment/setup and procedures:
If you are automating the testing process fully or partly, it is worthwhile to document exactly how the (various elements of) testing will get executed. Whether the testing method used here is correct should be debated and validated. The developers and QA involved should know how to operate the set of tools and what procedures to follow.
4. Traceability Matrix:
This is a well-defined matrix which identifies which set of test cases is relevant to assuring each functionality point. Whenever a new bug is discovered or a new feature is requested, the traceability matrix should be updated to capture these changes.
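One lightweight way to represent such a matrix is a simple mapping from requirements to test case IDs, which also makes coverage gaps easy to find (all the IDs and requirement names here are made up for illustration):

```python
# Traceability matrix: each requirement maps to the test cases
# that verify it. An empty list is a visible coverage gap.
traceability = {
    "REQ-001 user can log in":       ["TC-01", "TC-02"],
    "REQ-002 user can reset pwd":    ["TC-03"],
    "REQ-003 audit log is recorded": [],  # gap: no tests yet
}

uncovered = [req for req, cases in traceability.items() if not cases]
```

Updating this structure when a bug or new feature arrives is then just adding a row or appending a test ID.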
5. Test results:
Whether generated automatically or performed manually, the results (detailed and summarised) should be captured in a test execution sheet. The most important things to note down are: a. the original observations (such as logs, the actual output of the application) should be captured where relevant, so that conclusions can be validated; b. the document needs to capture the build against which these tests were carried out - a different build may not produce the same behaviour for the same test.
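A minimal sketch of an execution record that captures both points a. and b. - the raw observation and the build - might look like this (field names and values are illustrative only):

```python
# Each record keeps the raw observation and the build identifier
# alongside the verdict, so results from different builds are
# never conflated.
execution_log = [
    {"build": "1.4.2-b317", "test_id": "TC-01",
     "observation": "HTTP 200, body: {'ok': true}", "verdict": "pass"},
    {"build": "1.4.2-b317", "test_id": "TC-02",
     "observation": "HTTP 500, see server.log", "verdict": "fail"},
]

def results_for_build(log, build):
    """Summarise verdicts for one specific build only."""
    return {r["test_id"]: r["verdict"] for r in log if r["build"] == build}
```

Filtering by build, as `results_for_build` does, is what lets you compare how the same test behaved across builds.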
Procedures and formats can be developed as needed. The most important thing, from my personal experience, is that instead of enforcing watertight compliance with some format, you let people document this like a running diary: make only a few things mandatory and let people pour in more information freely. Testing is never static (at least for any reasonably complex project), so over time all these templates must evolve continuously - quite often the next step is a major departure from the last one. If the templates go stale, or people do not follow them because they are too rigid, eventually much of the knowledge gained through the testing process won't be reflected accurately.