I think (and I may be going out on a limb here) that ALL projects should have a bit of classic waterfall: The initial analysis and specification phase is essential. You must know what you are doing, and you must have it in writing. Getting the requirements in writing is difficult, time-consuming, and easy to do badly. That's why so many skip it - any excuse will do: "Oh, we do agile, so we don't need to do that." Once upon a time, before agile, it was "oh, I'm really clever and know how to solve this, so we don't need to do that." The words have changed a bit, but the song is essentially the same.
This is of course all bull: You have to know what you are meant to do - and a specification is the means by which developer and client communicate what is intended.
Once you know what you have to do - sketch out an architecture. This is the "get the big picture right" part. There is no magic solution here, no one right way, and no methodology that will help you. Architectures are the SYNTHESIS of a solution, and they come partly from inspired genius and partly from hard-won knowledge.
At each of these steps there will be iteration: you find things wrong or missing, and go fix 'em. That's debugging. It's just done before any code gets written.
Some see these steps as boring, or not needed. In fact, these two steps are the most important of all in solving any problem - get these wrong and everything that follows will be wrong. These steps are like the foundations of a building: Get them wrong and you have a Leaning Tower of Pisa.
Once you have the WHAT (that's your spec) and the HOW (that's the architecture - which is a high-level design) then you have tasks. Usually lots of them.
Bust the tasks up however you want, allocate them however you want. Use whatever methodology-of-the-week that you like, or that works for you. And get those tasks done, knowing where you are heading and what you need to accomplish.
Along the way there will be false trails, mistakes, and problems found with the spec and the architecture. This prompts comments like: "Well, all that planning was a waste of time then." Which is also bull. It just means you have FEWER foul-ups to deal with later. As you find problems with the high-level early-days stuff, FIX THEM.
(And on a side issue here: There is a big temptation I've seen over and over to try to meet a spec which is expensive, difficult, or even impossible. The correct response is to ask: "Is my implementation broken, or is the spec broken?" Because if an issue can be sorted out quickly and cheaply by changing the spec, then that is what you should do. Sometimes this works with a client, sometimes it does not. But you won't know if you don't ask.)
Finally - you must test. You can use TDD or anything else you like, but this is no guarantee that, at the end, you did what you said you would do. It helps, but it does not guarantee. So you need to do a final test. That's why things like Verification and Validation are still big items in most approaches to project management - be that development of software or making bulldozers.
Summary: You need all the up-front boring stuff. Use things like Agile as a means of delivery, but you can't eliminate old-fashioned thinking, specifying, and architectural design.
[Would you seriously expect to build a 25-story building by putting 1000 laborers on site and telling them to form teams to do a few jobs? Without plans. Without structural calculations. Without a design or vision of how the building should look. And with only knowing that it is a hotel. No - didn't think so.]
First: get rid of the split between 'testers' and 'developers'. Everyone tests.
Second: in TDD, the developers code the tests before they code the feature/story.
What you have described is not TDD. [It may be Scrum, though; Scrum is a project-management methodology independent of the development methodology, and Scrum is not relevant to your problem.]
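To illustrate the test-first rhythm, here is a minimal Python sketch (the function `slugify` and its behavior are hypothetical, invented purely for illustration): the test is written first, fails because the code does not exist yet, and then just enough code is written to make it pass.

```python
import unittest

# Step 1 (red): the test is written FIRST. Running it at this point
# fails, because slugify() has not been written yet.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Agile Rocks  "), "agile-rocks")

# Step 2 (green): write just enough code to make the tests pass.
def slugify(title: str) -> str:
    return "-".join(title.strip().lower().split())

if __name__ == "__main__":
    unittest.main()
```

The point is the ordering: the test defines the expected behavior before any feature code exists, so the developer always knows when the story is done.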
Scenarios where automated testing is impossible are exceedingly rare. Scenarios where automated testing is difficult, expensive, or inconvenient are much more common - but it is precisely these scenarios where automated testing is needed the most.
From the vague description, I assume the software is drawing something on the screen. If what is being drawn is determined by data, formulas, or functional steps, then at least write automated tests that test to the level of the data/formulas/functional steps. If the screen output is deterministic (the same steps result in the same drawing output every time) then test manually once, take a screenshot, and let future tests compare the output to the verified screenshot. If the screen output is nondeterministic and not governed by data, formulas, or functional steps, then you're in that rare area where automated testing may be impossible. But I doubt it.
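The "verify once, then compare against the saved screenshot" idea can be sketched as a byte-level comparison against a stored golden file. This is an assumed setup, not the asker's actual code - the paths and helper names are hypothetical, and a real suite might use an image library with a pixel tolerance instead of an exact digest:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def matches_golden(actual: Path, golden: Path) -> bool:
    """True if a freshly captured screenshot is byte-identical to the
    manually verified 'golden' screenshot saved earlier."""
    return file_digest(actual) == file_digest(golden)
```

An exact byte comparison only works when rendering is fully deterministic; anti-aliasing or font differences across machines usually call for a perceptual diff with a small tolerance instead.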
I'm guessing that the only reason testing has not been automated so far is that the developers don't care about it. In TDD, the developers do the testing, so they feel the pain of the boring repetition of testing the same 62-step process a hundred times until all the bugs are gone. Nothing will get an automated testing solution developed faster than making the developers do the testing.
Whenever it makes sense to.
From experience, we don't set aside strict allocated time for pair programming. It's simply a practice we use when it benefits the developers. When allocating tasks, it sometimes makes sense for two people to tackle a task together. So there's no strict rotation like speed dating.