I think (and I may be going out on a limb here) that ALL projects should have a bit of classic waterfall: The initial analysis and specification phase is essential. You must know what you are doing, and you must have it in writing. Getting the requirements in writing is difficult, time-consuming, and easy to do badly. That's why so many skip it - any excuse will do: "Oh, we do agile, so we don't need to do that." Once upon a time, before agile, it was "Oh, I'm really clever and know how to solve this, so we don't need to do that." The words have changed a bit, but the song is essentially the same.
This is of course all bull: You have to know what you are supposed to do - and a specification is the means by which developer and client communicate what is intended.
Once you know what you have to do - sketch out an architecture. This is the "get the big picture right" part. There is no magic solution here, no one right way, and no methodology that will help you. Architectures are the SYNTHESIS of a solution, and they come from partly inspired genius, and partly hard-won knowledge.
At each of these steps there will be iteration: you find things wrong or missing, and go fix 'em. That's debugging. It's just done before any code is written.
Some see these steps as boring, or not needed. In fact, these two steps are the most important of all in solving any problem - get these wrong and everything that follows will be wrong. These steps are like the foundations of a building: Get them wrong and you have a Leaning Tower of Pisa.
Once you have the WHAT (that's your spec) and the HOW (that's the architecture - which is a high-level design) then you have tasks. Usually lots of them.
Bust the tasks up however you want, allocate them however you want. Use whatever methodology-of-the-week that you like, or that works for you. And get those tasks done, knowing where you are heading and what you need to accomplish.
Along the way there will be false trails, mistakes, and problems found with the spec and the architecture. This prompts things like: "Well, all that planning was a waste of time then." Which is also bull. It just means you have FEWER foul-ups to deal with later. As you find problems with the high-level, early-days stuff, FIX THEM.
(And on a side issue here: There is a big temptation I've seen over and over to try to meet a spec which is expensive, difficult, or even impossible. The correct response is to ask: "Is my implementation broken, or is the spec broken?" Because if an issue can be sorted out quickly and cheaply by changing the spec, then that is what you should do. Sometimes this works with a client, sometimes it does not. But you won't know if you don't ask.)
Finally - you must test. You can use TDD or anything else you like, but this is no guarantee that at the end, you did what you said you would do. It helps, but it does not guarantee. So you need to do a final test. That's why things like Verification and Validation are still big items in most approaches to project management - be that development of software or making bulldozers.
Summary: You need all the up-front boring stuff. Use things like Agile as a means of delivery, but you can't eliminate old-fashioned thinking, specifying, and architectural design.
[Would you seriously expect to build a 25-story building by putting 1000 laborers on site and telling them to form teams to do a few jobs? Without plans. Without structural calculations. Without a design or vision of how the building should look. And with only knowing that it is a hotel. No - didn't think so.]
The waterfall model that you are referring to was never intended to be a process model used on a real project. Instead, it is a strawman. It identifies the key phases and activities that exist in software projects and the most basic flow between them. This oversimplification of how to develop software is flawed, and it was presented as flawed from the start.
From the Wikipedia article:
The first formal description of the waterfall model is often cited as a 1970 article by Winston W. Royce, though Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a flawed, non-working model.
The paper discussed is titled Managing the Development of Large Software Systems. In it, Royce does present that model on the second page. However, the text immediately below the pictorial representation goes on to read:
I believe in this concept, but the implementation described above is risky and invites failure.
He follows this with a discussion of the problems with testing following the "completion" of the development phase: how failures here can lead to major redesigns and code changes, and how these can lead to major overruns in cost and schedule. Throughout the paper, he refines the original model into one that is indeed viable on a project. In the end, he arrives at a model that introduces prototyping, customer interaction, and refinement of artifacts - ideas that would eventually become central to the agile movement of the late 1990s and early 2000s.
To answer your question: The Waterfall that you are asking about is not, and never was, a viable method to deliver software projects with a reasonable amount of quality on time and budget. However, there are other plan-driven methodologies at the opposite end of the spectrum from agile that can and do work on projects.
Of course waterfall is viable. It brought us to the moon!
And it's an agile coach talking here!
Unless you can clearly identify problems related to the way you manage your projects, there is no valid reason to change.
As an alternative to Agile and Waterfall methodologies, I will suggest YOUR methodology. Adapted to your specific business, your specific team, your products, your way of working, your company culture... That's why Scrum is called a simple framework instead of a methodology.
Wanting to implement a methodology because someone on a blog you like talked about it is as stupid as letting problems fester without doing anything.