Top-down is a great way to describe things you know, or to re-build things that you've already built.
Top-down's biggest problem is that quite often there simply is no "top". You will change your mind about what the system should do while developing it and while exploring the domain. How can your starting point be something you don't know (i.e. what you want the system to do)?
A "local" top-down is a good thing... some thinking ahead of coding is clearly good. But thinking and planning too much is not, because what you are envisioning is not the real scenario (unless you've already been there before, i.e. unless you are not building but re-building). Global top-down when building new things is just nonsense.
Bottom-up should be the (global) approach unless you know 100% of the problem, you just need the known solution to be coded, and you don't care about exploring possible alternative solutions.
The Lisp approach is distilled bottom-up. You not only build bottom-up, but you can also shape the bricks the way you need them to be. Nothing is fixed; freedom is total. Of course freedom carries responsibility, and you can make horrible things by misusing this power.
But horrible code can be written in any language. Even in languages that are shaped as cages for the mind, designed with the hope that with those languages even monkeys could get good programs up and running (an idea so wrong on so many levels that it hurts even just thinking about it).
Your example is about a web server. Now in 2012 this is a well-defined problem, you have specs to be followed. A web server is just an implementation problem.
Especially if you are aiming at writing a web server substantially identical to the gajillion other web servers out there, then nothing is really unclear except some minutiae. Even your comment about RSA is still talking about a clearly defined problem with formal specifications.
With a well-defined problem, formal specifications, and already-known solutions, coding is just connecting the dots. Top-down is OK for that. This is project-manager heaven.
In many cases, however, there is no proven well-known approach to be used to connect the dots. Actually, very often it is hard to say even what the dots are.
Suppose, for example, you are asked to instruct an automatic cutting machine to align the parts to be cut with a printed material that does not perfectly conform to the theoretical repeating logo. You are given the parts and pictures of the material as taken by the machine.
What is an alignment rule? You decide. What is a pattern, and how do you represent it? You decide. How do you align the parts? You decide. Can parts be "bent"? It depends: some cannot and some can, but of course not too much. What do you do if the material is just too deformed to cut a part acceptably? You decide. Are all the material rolls identical? Of course not, but you cannot bug the user to adapt the alignment rules for every roll... that would be impractical. What do the cameras see? The material, whatever that may mean... it can be color, it can be black on black where only the light reflex makes the pattern visible. What does it mean to recognize a pattern? You decide.
Now try to design the general structure of a solution to this problem and give a quote, in money and time. My bet is that even your system architecture... (yes, the architecture) will be wrong. Cost and time estimates will be random numbers.
We implemented it, and now it's a working system, but we changed our minds about the very shape of the system many times. We added entire sub-systems that now cannot even be reached from the menus. We switched master/slave roles in protocols more than once. Probably now we have enough knowledge to attempt re-building it better.
Other companies did, of course, solve the same problem... but unless you are in one of those companies, most probably your detailed top-down project will be a joke. We can design it top-down. You cannot, because you have never done it before.
You can probably solve the same problem too, but working bottom-up: starting with what you know, learning what you don't, and building up.
New complex software systems are grown, not designed. Every now and then someone starts designing a big new complex ill-specified software system from scratch (note that with a big complex software project there are only three possibilities: [a] the specification is fuzzy, [b] the specification is wrong and self-contradictory, or [c] both... and most often [c] is the case).
These are the typical huge-company projects, with thousands and thousands of hours thrown into PowerPoint slides and UML diagrams alone. They invariably fail completely after burning embarrassing amounts of resources... or in some very exceptional case they finally deliver an overpriced piece of software that implements only a tiny part of the initial specs. And that software is invariably deeply hated by its users... not the kind of software you would buy, but the kind of software you use because you're forced to.
Does this mean that I think you should think only about code? Of course not. But in my opinion construction should start from the bottom (bricks, concrete code) and go up... and your focus and attention to detail should in a sense "fade" as you get farther from what you have. Top-down is often presented as if you should apply the same level of detail to the whole system at once: just keep splitting every node until everything is obvious. In reality, modules and subsystems are "grown" from subroutines.
If you do not have previous experience with the specific problem, your top-down design of a subsystem, module, or library will be horrible. You can design a good library once you know what functions to put in it, not the other way around.
Many of the Lisp ideas are getting more popular (first-class functions, closures, dynamic typing by default, garbage collection, metaprogramming, interactive development), but Lisp is still today (among the languages I know) quite unique in how easy it is to shape the code to what you need.
Keyword parameters, for example, are already present; but if they were not, they could be added. I did it (including keyword verification at compile time) for a toy Lisp compiler I am experimenting with, and it doesn't take much code.
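That toy compiler's macro isn't shown here, but the flavor of the idea can be sketched in another language. This is purely an analogue in Python (the names `with_keywords` and `draw_line` are invented for illustration): the allowed keyword set is fixed once, when the function is defined, and unknown keywords are rejected before the body ever runs.

```python
def with_keywords(*allowed):
    """Build a decorator that rejects keywords outside `allowed`.

    Hypothetical analogue of compile-time keyword verification: in
    Lisp the check can happen at macro-expansion time, while this
    sketch can only check at call time.
    """
    def decorate(fn):
        def wrapper(*args, **kwargs):
            unknown = set(kwargs) - set(allowed)
            if unknown:
                raise TypeError(f"unknown keywords: {sorted(unknown)}")
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@with_keywords("color", "width")
def draw_line(**opts):
    return opts

print(draw_line(color="red"))   # {'color': 'red'}
```

The point is not the decorator itself but that the feature is built from ordinary user code; in Lisp the same move also gets you the compile-time half.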
With C++, instead, the most you can get is a bunch of C++ experts telling you that keyword parameters are not that useful, or an incredibly complex, broken, half-baked template implementation that indeed is not that useful.
Are C++ classes first-class objects? No, and there's nothing you can do about it. Can you have introspection at runtime or at compile time? No, and there's nothing you can do about it.
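For contrast, here is what those two properties look like in a language that does have them at runtime. This is Python rather than Lisp, and the `Point` class is invented for illustration; the point is only that a class is an ordinary value you can inspect and construct programmatically.

```python
class Point:
    """A class whose definition we will inspect and extend at runtime."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def norm2(self):
        return self.x ** 2 + self.y ** 2

cls = Point                       # the class itself is a value
print(cls.__name__)               # 'Point'

# Runtime introspection: list the public attributes of the class.
print([n for n in dir(cls) if not n.startswith("_")])   # ['norm2']

# First-class classes: build a new subclass at runtime with
# type(name, bases, namespace).
Point3D = type("Point3D", (Point,), {"z": 0})
p = Point3D(1, 2)
print(isinstance(p, Point))       # True
```

In C++ neither query nor construction is expressible: the class exists only at compile time, and no amount of user code changes that.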
This language flexibility is what makes Lisp great for bottom-up building. You can build not only subroutines, but also the syntax and the semantics of the language. And in a sense Lisp itself is bottom-up.
I would be hesitant to discard Waterfall across the board so quickly.
Although it is a flawed model for actually building software systems, it's not a bad teaching model to instruct on good practices for each stage of the lifecycle. Regardless of the process model that you apply to the project, you still perform requirements engineering, system architecture and design, implementation, testing, release, and maintenance (including refactoring and enhancement). The difference is how these phases are organized and conducted, but all of the activities still happen.
I'd argue that your transition from Waterfall to Scrum in the middle of the project is not the best idea. A key to Scrum's success is a long-running project: the first three to five sprints are the team settling in on a velocity, learning the process, and going through team development. Although you are going through the motions, it's not really Scrum at that point. In addition, trying to create an exclusively Scrum-based curriculum is probably a bad idea, as Scrum is not a silver bullet - it's better to teach best practices than a single methodology. In the workforce, not all projects are going to use Scrum. In fact, in some environments, Scrum would be detrimental to the success of the project.
You've already found problems with Scrum in an academic setting, and some of them are hard to adequately address.
The non-issue in your list of incompatibilities is that estimating is difficult. Yes, it is. But the only way to get better at estimating is to estimate and compare actuals against estimates. Students should be estimating size, time, and effort using various means (story points, source lines of code, hours, pages, person-hours) early so that they are more prepared to do so after graduating and entering the workforce.
The need for documentation is something that can be addressed from both the perspective of the professor and the perspective of the students. The Lean approaches tell us that documentation that doesn't add value to either the team or the customer is wasteful (in terms of time and cost). However, some documentation is needed to achieve some objectives of both the students and the professor (the customer/client) for various purposes. Overall, it sounds like an opportunity to teach process tailoring and quantitative project management (which does have a role even in agile methods).
With respect to Scrum meetings and scheduling, two ideas come to my mind. The first is that this indicates Scrum might not be the best process to use in an academic setting. The second is that there is no singular "best process model" for software projects; the right fit depends on factors such as schedule, staffing, visibility, and the experience of the development team (among others).
Overall, I'd suggest emphasizing good practices, process tailoring, and process improvement over single methodologies. This will make you the most effective for everyone taking the courses, exposing them to a variety of process methodologies and helping them understand what the best practices are for a given set of conditions.
Since you're working to build a university curriculum, I'll give a high level overview of how the software engineering curriculum at the university I attended fit together.
There was an introductory software engineering course that took a project through a waterfall model, with the lectures during each phase corresponding to different ways to conduct the activities of that phase. The teams progressed through the phases at the same rate. Having those clearly defined boundaries made the course fit well into the teaching model for a group of people with little to no experience working on teams to build software. Throughout the course, references were made to other methodologies - various agile methods (Scrum, XP), the Rational Unified Process, the Spiral Model - with regard to their advantages and disadvantages.
In terms of the activities, there were specific courses to discuss requirements engineering, architecture and design (two courses - one focusing on detailed design using object-oriented methods and one focusing on system architecture), a number of courses focusing on designing and implementing various classes of systems (real-time and embedded systems, enterprise systems, concurrent systems, distributed systems, and so on), and software testing.
There were also three courses dedicated to software process. Software Engineering Process and Project Management focused on best practices for managing a software project with respect to multiple methodologies. A second process course taught measurement, metrics, and process improvement (emphasizing CMMI, Six Sigma, and Lean). Finally, there was a process course that taught agile software development (discussing Scrum, Extreme Programming, Crystal, and DSDM) using a project carried out with the Scrum methodology.
The capstone project was a two-quarter project performed for a sponsoring company and run entirely by the student project team, with guidance from both the sponsors and a faculty advisor. Every aspect of how to conduct the project was up to the students, within any constraints set forth by the sponsors. The only university-mandated deadlines were an interim presentation halfway (10 weeks) into the project, a final presentation at the end, and a quad poster presentation shortly before the end. Everything else was up to the sponsor and team to agree on.
Best Answer
It depends a bit on your target audience, but my experience (more in small/medium-scale development than very large-scale work) is that detailed design documents are arduous and boring to write, rarely read, and tend to end up out of date by the time a project is delivered.
This does not mean they are worthless: if you are delivering something for someone, there needs to be an authoritative, agreed statement of what will be delivered, detailed enough that if anyone is dissatisfied with the deal, everyone can point to it, say "this is what we promised", and evaluate it against what was delivered.
If I were setting up a company to build a product, however, I wouldn't worry so much about a detailed specification. I would want to document what we were going to do, but I wouldn't want to go into too much depth regarding how - that is the part that is most likely to change and leave the documents out of date and useless or even inaccurate enough to be actually obstructive. I would prefer to document the "how" stuff in code using whatever documentation format the language or IDE supports best, so that as the code changes it is easier to update the documentation at the same time. It won't stop it going out of date, but it will reduce it somewhat.
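As a minimal sketch of keeping the "how" beside the code (the function and its behavior here are invented for illustration), a Python docstring is one such format: tools like `help()` or Sphinx can surface it, and whoever edits the logic has the stale sentence right under their nose.

```python
import json

def parse_config(path):
    """Load the application's configuration from the given path.

    How: the file is read as UTF-8 JSON. Unknown keys are kept but
    ignored, so configs written by newer versions still load. Because
    this note lives beside the code, a change to the parsing logic is
    a change made right next to the documentation that describes it.
    """
    with open(path, encoding="utf-8") as f:
        return json.load(f)

# Tooling can surface the documentation without a separate document:
print(parse_config.__doc__.splitlines()[0])
```

The same idea applies with Javadoc, Doxygen, rustdoc, and similar; the specific format matters less than the proximity.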
Ideally you would want a design document that could double as your manual when your code is complete, but I don't know of anyone who has managed that successfully.