I'm doing a school project that requires us to create a computer application and then write documentation for it, so I have to write testing documentation. I already have a test plan (validate all input and test all features), but how do I present the testing results in my project (e.g. in a table)?
Testing Documentation – Effective Ways to Present Testing Results
Tags: documentation, testing
Related Solutions
To be honest, even a test plan template that had been used successfully by other agile teams might not work well for your team - but seeing what other people are doing is useful for getting ideas about different approaches.
I've also been thinking about the same issue for a while now. My approach so far has been pragmatic: I was working in a small team, initially as the only tester alongside six developers. Creating documentation instead of testing would have been a very poor choice. Writing that documentation so the developers could run the tests themselves: another very poor choice, IMHO.
Currently, I add a page to our wiki for each story, holding a set of test ideas that serves as a basis for exploratory testing sessions. If necessary, I also add setup information there. I would prefer to keep that separate, so it remains a resource that can be updated more easily, but at the moment it goes onto the same page. (I generally don't like mixing the "how" and the "what" - it makes it harder to see "what" you're doing if you have to pick it out of pages of "how".) We don't have a template for those pages - I don't feel we've needed one yet. When we do, I will add one and then tweak it as we learn more. For now, it works for me to give an overview of what areas we'll look at when testing, and since it's on the wiki, anyone can add items if they feel something is missing.
I have considered setting up a low-tech testing dashboard, but at the moment our whiteboard is sufficient for us to see how stories are progressing - though as the team grows, we may want to revisit that.
You also wanted to know what other Agile testers are doing - here are a few blog posts that I think you'll find useful:
I very much like Marlena Compton's description of how she uses a wiki for testing at Atlassian: http://marlenacompton.com/?p=1894
Again, a lightweight approach, keeping test objectives tied to the story. She uses a testing dashboard for a high-level view of what features are in a release. Each feature links to a page of test objectives, sorted under different headings - function, domain, stress, data, flow, and claims. This gives an at-a-glance view of what areas/types of tests you have planned, and you can see instantly if one area has far fewer tests. This may be an approach you'd find useful.
Trish Khoo also has some interesting things to say on using a wiki, more on the level of structuring the individual tests (they've moved to using a Gherkin-style "Given, When, Then" format for their tests): http://ubertest.hogfish.net/?p=243
Elizabeth Hendrickson's blog post about specialised test management systems is a little off-topic, but you may find some useful points raised: http://testobsessed.com/2009/10/06/specialized-test-management-systems-are-an-agile-impediment/
When every run of the test suite can yield a different result, the test is almost completely worthless: when the suite shows you a bug, there is a high chance you cannot reproduce it, and when you try to fix the bug, you cannot verify whether your fix works (or not).
So when you think you need some kind of random number generator to generate your test data, either make sure you always initialize the generator with the same seed, or persist your random test data to a file before feeding it into your test, so you can re-run the test with exactly the same data as before. This way, you can transform any non-deterministic test into a deterministic one.
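The seeding approach can be sketched like this in Python (the seed value, data size, and value range are arbitrary choices for illustration):

```python
import random

SEED = 42  # any fixed value works; what matters is that it never changes

def make_test_data(n=5):
    # Use a dedicated, seeded generator rather than the module-level one,
    # so other code using random() cannot disturb the sequence.
    rng = random.Random(SEED)
    return [rng.randint(0, 100) for _ in range(n)]

# Every call produces the identical list, so a failure seen in one run
# can be reproduced exactly in the next.
assert make_test_data() == make_test_data()
print(make_test_data())
```

Persisting the generated data to a file before the first run achieves the same reproducibility when you cannot control the seed.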
EDIT: Using a random number generator to pick test data is, IMHO, sometimes a sign of being too lazy to pick good test data. Instead of throwing 100,000 randomly chosen values at the code and hoping that this will be enough to discover all serious bugs by chance, better to use your brain, pick 10 to 20 "interesting" cases, and use those in the test suite. This will not only result in better-quality tests, but also in a much faster suite.
Best Answer
I would consider reporting various test coverage metrics. This will show that your tests correctly exercise the full functionality of your application. That said, coverage metrics are like any statistics: they need to be clearly explained, as they can be reported in misleading ways.
See types of coverage metrics on Wikipedia.
You can gather coverage metrics for applications that are tested by both manual and automated testing techniques.
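As a toy illustration of one such metric, the sketch below computes branch coverage by hand for a made-up function (real projects would use a tool such as coverage.py; `classify` and its branch names are invented for the example):

```python
# Record which branches the test inputs exercise, then report the
# percentage of all branches that were hit.
visited = set()

def classify(n):
    """Hypothetical function under test with three branches."""
    if n < 0:
        visited.add("negative")
        return "negative"
    elif n == 0:
        visited.add("zero")
        return "zero"
    else:
        visited.add("positive")
        return "positive"

ALL_BRANCHES = {"negative", "zero", "positive"}

# Two test inputs exercise only 2 of the 3 branches.
for value in (-5, 7):
    classify(value)

coverage = 100 * len(visited) / len(ALL_BRANCHES)
print(f"branch coverage: {coverage:.0f}%")  # prints "branch coverage: 67%"
```

A result like this is easy to present in a table in your documentation: one row per feature, with columns for cases run, cases passed, and coverage achieved.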