Your understanding is correct, if you're from the past. You're pretty much describing how it looked in the 1990s.
Yes, many languages can be executed directly by a web server plugin. That's right for PHP: mod_php for Apache is still the most popular way to host it. However, high-traffic sites use a more modern approach, with the web server acting only as a proxy for a FastCGI process manager (in the case of PHP, that's PHP-FPM). Either way, the output is embedded into HTML pages, which are then sent back to the client.
I think you're referring to early-90s so-called spaghetti code; the modern approach, however, is to use one of many MVC frameworks. In the case of PHP that would mean, for example, Zend Framework (there are numerous alternatives).
As for ASP, you're probably referring to so-called "classic ASP", which is obsolete. Currently it's ASP.NET, which can use any of the .NET languages (C# being the most popular) and, of course, the .NET framework.
C and C++ are not typically used for web applications. When they are, such services are implemented either as stand-alone servers, as modules for a web server, or as FastCGI processes.
Perl can be executed directly from a web server module using mod_perl. There is also PSGI, which is basically a clone of Python's WSGI.
Python is a very popular language for web apps. It can be executed directly from the Apache web server via mod_python, however that is obsolete and not recommended. Currently the way to go with Python is either a WSGI server module (such as mod_wsgi), a WSGI server implemented in Python (e.g. CherryPy, WebPy), or a stand-alone Python web stack (Tornado and Twisted's Web module are fine examples). And of course, again, you'd most probably be using a WSGI-compatible MVC framework; Django is the most popular one (again, multiple alternatives are available). To give a concrete sense of the interface, a minimal WSGI application is sketched below.
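A minimal sketch, served here with the wsgiref reference server from Python's standard library (the port is arbitrary; in production the same application callable would be handed to mod_wsgi, uWSGI, Gunicorn, or similar):

```python
# Minimal WSGI application: a plain callable taking the request
# environment and a start_response callback.
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # environ carries CGI-style request variables (PATH_INFO, etc.);
    # start_response sends the status line and response headers.
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<html><body>Hello from WSGI</body></html>"]

if __name__ == "__main__":
    # wsgiref is fine for demos; real deployments put a proper
    # WSGI server behind (or inside) the web server instead.
    with make_server("", 8000, application) as server:
        server.serve_forever()
```

Frameworks like Django build their request/response objects on top of this same callable interface.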
Ruby is, again, a very popular language for web apps, best known for the web framework Ruby on Rails, which again is MVC. You can execute Ruby directly from a server module via mod_ruby, or via FastCGI.
Servlets/JSP are executed in stand-alone J2EE application servers and servlet containers, such as JBoss or Tomcat. They are more commonly used to add a web interface to business systems than to create stand-alone web apps.
Classical CGI (i.e. spawning a process on each request) became obsolete many years ago. It has been replaced by FastCGI (where the process is long-running rather than spawned on each request), server modules, interfaces such as WSGI and its clones, and stand-alone solutions. For reference, a sketch of a classic CGI script follows.
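For illustration, a classic CGI script in Python (any language that can read environment variables and write to stdout works the same way; the server spawns this process anew for every request, which is exactly the overhead FastCGI avoids):

```python
#!/usr/bin/env python3
# Classic CGI: the web server sets request metadata in environment
# variables, runs this script, and relays its stdout to the client.
import os

method = os.environ.get("REQUEST_METHOD", "GET")
query = os.environ.get("QUERY_STRING", "")

# Headers first, then a blank line, then the response body.
print("Content-Type: text/html")
print()
print(f"<html><body>Handled a {method} request (query: {query!r})</body></html>")
```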
The request-processing paradigm has also evolved. With CGI it was one process per request. Then came the process pool (or thread pool), with each process (thread) handling one request at a time. Now, however, the most modern approach is for web servers and stand-alone frameworks to use event-driven programming, as sketched below.
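A minimal sketch of the event-driven model using Python's asyncio (host, port, and the hard-coded response are arbitrary illustration choices): one thread multiplexes many connections instead of dedicating a process or thread to each request.

```python
import asyncio

async def handle(reader, writer):
    # Each await yields control back to the event loop while this
    # connection is idle, so one thread serves many clients.
    await reader.read(1024)  # read (and here, ignore) the request
    writer.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

This is the same model nginx and frameworks like Tornado use internally: readiness events drive callbacks or coroutines rather than blocking threads.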
It's important to remember that interpreting and compiling are not just alternatives to each other. In the end, any program that you write (including one compiled to machine code) gets interpreted. Interpreting code simply means taking a set of instructions and returning an answer.
Compiling, on the other hand, means converting a program in one language to another language. Usually it is assumed that when compilation takes place, the code is compiled to a "lower-level" language (e.g. machine code, some kind of VM bytecode, etc.). This compiled code is still interpreted later on, as the sketch below demonstrates.
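Python makes both steps visible in a few lines: compile() translates source text into bytecode (the compilation step), and exec() has the Python VM interpret that bytecode (the interpretation step). The dis module merely exposes the compiled form; the source string is an arbitrary example.

```python
import dis

source = "x = 2 + 3"
code = compile(source, "<example>", "exec")  # compile: source -> bytecode

dis.dis(code)  # inspect the "lower-level" language we compiled to
exec(code)     # interpret: the VM executes the bytecode
```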
With regards to your question of whether there is a useful distinction between interpreted and compiled languages, my personal opinion is that everyone should have a basic understanding of what is happening to the code they write during interpretation. So, if their code is being JIT compiled, or bytecode-cached, etc., the programmer should at least have a basic understanding of what that means.
There is no technical standard that defines a scripting language. It's just a word that is defined by common usage, and like any other word in common usage, there is no guarantee that all the usages are consistent. Tackling your specific questions:
The dynamic code generation they are talking about is machine code. In a classic interpreted language (think BASIC interpreter), each time a line of a script is executed, that line is translated on the spot into native machine code. It's more complicated now, since many scripting languages will be translated into byte code for a virtual machine, and the byte code may get cached.
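Python is a handy concrete case of the bytecode-plus-caching variant: py_compile writes compiled bytecode to a .pyc file under __pycache__, which later imports reuse instead of recompiling ("example.py" below is a hypothetical file name).

```python
import py_compile

# Compiles example.py (a hypothetical module) to bytecode and
# writes the cache file; returns the path to the resulting .pyc.
cached = py_compile.compile("example.py")
print(cached)  # e.g. __pycache__/example.cpython-312.pyc
```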
This is where it gets very fuzzy, and changes with time. In ye olden days, pretty much every scripting language was a classic interpreted language. Nowadays many use byte code, virtual machines, and may use just-in-time compilers. At that point the line between interpreted languages and compiled languages is blurry. Still, I don't know of any language commonly referred to as a scripting language that is compiled in the classic sense of a one-time conversion to native machine code.
Languages commonly called scripting languages usually provide a suite of high-level data structures like sets, lists, and dictionaries, as well as features like regular expressions. There are interpreted languages that don't provide those high-level features, and they usually aren't called scripting languages. I don't think many folks would refer to interpreted BASIC or even UCSD Pascal as a scripting language.
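For instance, here is a short Python snippet using all of those high-level features at once (the log string and the pattern are made up for illustration):

```python
import re

log = "alice GET /home\nbob POST /login\nalice GET /login"

# Regular expression: extract (user, path) pairs into a list.
hits = re.findall(r"(\w+) \w+ (\S+)", log)

# Set: unique users, via a set comprehension.
users = {user for user, _ in hits}

# Dictionary: count requests per path.
counts = {}
for _, path in hits:
    counts[path] = counts.get(path, 0) + 1

print(users)   # {'alice', 'bob'}
print(counts)  # {'/home': 1, '/login': 2}
```

Equivalent code in C or interpreted BASIC would need hand-rolled data structures, which is much of what makes scripting languages feel "scripty".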