Testing is meant to find defects in the code or, from a different angle, to demonstrate to a suitable level of confidence (it can never be 100%) that the program does what it is supposed to do. It can be manual or automated, and it comes in many different kinds: unit, integration, system/acceptance, stress, load, soak testing and so on.
Debugging is the process of finding and removing a specific bug from the program. It is always a manual, one-off process, as all bugs are different.
My guess is that the author means that on Level 0, only manual tests are performed, in an ad hoc fashion, without a test plan or anything else to ensure that the tester actually exercised the feature under test thoroughly, or that the tests can be reliably repeated.
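To make that concrete, here is a minimal sketch (in Python, with an entirely made-up function under test) of what separates ad hoc manual poking from even the most basic repeatable testing: the expectations are written down once and can be rerun identically after every change.

```python
# Hypothetical example: parse_price and both tests are invented for
# illustration. An ad hoc manual check ("run it and eyeball the output")
# leaves no record and may never be repeated the same way twice; even a
# couple of automated assertions pin the expected behaviour down.

def parse_price(text: str) -> float:
    """Toy function under test (hypothetical)."""
    return float(text.strip().lstrip("$"))

def test_parse_price_strips_dollar_sign():
    assert parse_price(" $19.99 ") == 19.99

def test_parse_price_plain_number():
    assert parse_price("5") == 5.0

if __name__ == "__main__":
    test_parse_price_strips_dollar_sign()
    test_parse_price_plain_number()
    print("all checks passed")
```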
Technically speaking, it was more a case of "software rot". The flight software (specifically, the inertial reference system's code) was recycled from the earlier Ariane 4 rocket, a sensible move given how expensive it is to develop software, especially mission-critical software which must be tested and verified to far more rigorous standards than most commercial software needs to be.
Unfortunately, nobody bothered testing what effect the change in operating environment would have, or if they did, the testing wasn't done to a sufficiently thorough standard.
The software was built to expect certain parameters (thrust, acceleration, fuel consumption rates, vibration levels, etc.) never to exceed certain values. In normal flight on an Ariane 4 this wasn't a problem, because those parameters would never reach invalid values unless something was already spectacularly wrong. The Ariane 5, however, is much more powerful, and values that would seem silly on the 4 could quite easily occur on the 5.
The parameter that went out of range was a horizontal velocity value (the "horizontal bias" variable in the alignment code). When it grew beyond what the Ariane 4-era code expected, the conversion from a 64-bit floating point number to a 16-bit signed integer overflowed, and there was insufficient error checking and recovery code in place to cope. The inertial reference system shut down and began emitting diagnostic data, which the guidance computer interpreted as flight data and passed on to the engine nozzle gimbals, pointing the nozzle pretty much randomly. The rocket started to tumble and break up, and the automatic self-destruct system detected that the rocket was now in an unsafe, irrecoverable attitude and finished the job.
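As an illustration of that failure mode, here is a sketch in Python (the real code was Ada running on the inertial reference computer, and the numeric values below are invented, not the actual flight figures):

```python
# Sketch of a 64-bit float being narrowed to a 16-bit signed integer.
# The int16 range (-32768..32767) was generous for Ariane 4 trajectories
# but not for the much more powerful Ariane 5's.

INT16_MIN, INT16_MAX = -2**15, 2**15 - 1

def to_int16(value: float) -> int:
    """Narrow a float to a signed 16-bit integer, failing on overflow."""
    result = int(value)
    if not INT16_MIN <= result <= INT16_MAX:
        # In the Ada code the equivalent condition raised an Operand Error
        # exception for which no handler existed, shutting the unit down.
        raise OverflowError(f"{value} does not fit in a signed 16-bit integer")
    return result

print(to_int16(21_000.0))      # plausible Ariane 4-scale value: fits
try:
    print(to_int16(48_000.0))  # hypothetical Ariane 5-scale value
except OverflowError as exc:
    print("overflow:", exc)    # on flight 501 this case went unhandled
```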
To be honest, this incident probably didn't teach any new lessons, as these kinds of problems had been unearthed before in all manner of systems, and there were already strategies in place for finding and fixing such errors. What the incident did do was ram home the point that being lax in following those strategies can have enormous consequences: in this case millions of dollars of destroyed hardware, some extremely pissed off customers and an ugly dent in the reputation of Arianespace.
This particular case was especially glaring because a shortcut taken to save money ended up costing a huge amount, both in money and in lost reputation. If the software had been tested just as rigorously in a simulated Ariane 5 environment as it had been when it was originally developed for the Ariane 4, the error would surely have come to light long before the software was installed in launch hardware and put in command of an actual flight. Moreover, if a developer had deliberately thrown some nonsense input at the software, the error might even have been caught back in the Ariane 4 era, as it would have highlighted that the error recovery in place was inadequate.
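As a sketch of that "nonsense input" idea, here is what even a crude randomised test of such a narrowing conversion might look like (the function and value ranges are illustrative, not taken from the real system):

```python
# Feed deliberately out-of-range random values at the conversion routine
# and require that it either returns a valid result or fails in a
# controlled way; it must never silently produce garbage.
import random

INT16_MIN, INT16_MAX = -2**15, 2**15 - 1

def to_int16(value: float) -> int:
    result = int(value)
    if not INT16_MIN <= result <= INT16_MAX:
        raise OverflowError(value)
    return result

def fuzz_to_int16(trials: int = 10_000) -> None:
    for _ in range(trials):
        value = random.uniform(-1e6, 1e6)  # far wider than the int16 range
        try:
            assert INT16_MIN <= to_int16(value) <= INT16_MAX
        except OverflowError:
            pass  # a controlled, recoverable failure is acceptable

fuzz_to_int16()
print("no silent garbage and no unhandled failures")
```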
So in short, it didn't really teach new lessons, but it rammed home the dangers of forgetting old ones. It also demonstrated that the environment within which a software system operates is every bit as important as the software itself. Just because the software is verifiably correct for environment X doesn't mean it's fit for purpose in the similar but distinct environment Y. Finally, it highlighted how important it is for mission-critical software to be robust enough to deal with circumstances that supposedly can't happen.
Contrast flight 501 with Apollo 11 and its computer problems. Whilst the LGC (the Lunar Module's guidance computer) suffered a serious overload during the landing, its software was designed to be extremely robust: it shed low-priority tasks and remained operational in spite of the program alarms it raised, without putting any astronauts in danger and while still completing its mission.
Perhaps a good analogy is that (manual) testing is to dynamic analysis what code reviews are to static analysis. Both manual testing and dynamic analysis rely on the behaviour of code as it is executed to find problems.
But testing is not simply a means of dynamic analysis. For starters, dynamic analysis is automated, and it helps you observe behaviours that are not easily seen otherwise, such as memory usage and performance profiles. Testing, on the other hand, also helps you assess qualities like usability and presentation, things you cannot ask a dynamic analysis tool to help you with.
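As a rough illustration of the difference, here is a small Python example using the standard library's tracemalloc module (the function being measured is made up): a dynamic analysis tool observes a property of the running program, in this case memory allocation, that a manual tester could never assess by eye.

```python
# Dynamic analysis sketch: instrument a running program to see where
# memory actually goes, something manual testing cannot reveal.
import tracemalloc

def build_table(n: int) -> list[str]:
    """Hypothetical workload whose memory behaviour we want to observe."""
    return [f"row-{i}" for i in range(n)]

tracemalloc.start()
table = build_table(100_000)
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"current: {current / 1024:.0f} KiB, peak: {peak / 1024:.0f} KiB")
```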