If you want an academic viewpoint, read this paper. If you are thinking about Scala, read Odersky and friends’ overview where they discuss what makes Scala scalable. There’s also a related question.
In a nutshell, Scala has features (operator overloading, user-defined classes, traits, and many others) that allow one to express many domain-specific problems in a very natural way.
I think language designers revise existing languages to introduce innovation while keeping their target developer community together: moving an existing community to a new revision of a language is more effective than building a new community around a new language. Of course, this forces some developers to adopt the new standard even if they were fine with the old one: in a community, you sometimes have to impose decisions on a minority if you want to keep the community together.
Also, consider that a general-purpose language tries to serve as many programmers as possible and often gets applied in areas it wasn't designed for. So instead of aiming for simplicity and stability of design, the community may choose to incorporate new ideas (even from other languages) as the language moves into new application areas. In such a scenario, you cannot expect to get the design right on the first attempt.
This means that languages can undergo deep change over the years and the latest revision may look very different from the first one. The name of the language is not kept for technical reasons, but because the community of developers agrees to use an old name for a new language. So the name of the programming language identifies the community of its users rather than the language itself.
IMO the reason many developers find this acceptable (or even desirable) is that a gradual transition to a slightly different language is easier and less confusing than a jump to a completely new language that would take more time and effort to master.
Consider that many developers have one or two favourite languages and are not very keen on learning new (radically different) ones. And even for those who do like learning new things, picking up a new programming language is always a hard and time-consuming activity.
Also, it can be preferable to be part of a large community with a rich ecosystem rather than a very small community using a lesser-known language. So, when the community decides to move on, many members follow to avoid isolation.
As a side comment, I think the argument that evolution must happen inside a language in order to maintain compatibility with legacy code is rather weak: Java can call C code, Scala integrates easily with Java, and C# with C++. There are many examples showing that you can interface with legacy code written in another language without needing source-code compatibility.
NOTE
From some answers and comments, I gather that some readers have interpreted the question as "Why do programming languages need to evolve?"
I think this is not the main point of the question, since it is obvious that programming languages need to evolve and why (new requirements, new ideas). The question is rather "Why does this evolution have to happen inside a programming language instead of spawning many new languages?"
Best Answer
You say "especially for financial software", which brings up one of my pet peeves: money is not a float, it's an int.
Sure, it looks like a float. It has a decimal point in there. But that's just because you're used to units that confuse the issue. Money always comes in integer quantities. In America, it's cents. (In certain contexts I think it can be mills, but ignore that for now.)
So when you say $1.23, that's really 123 cents. Always, always, always do your math in those terms, and you will be fine.
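To make this concrete, here is a minimal sketch in Java (an illustrative choice; the point is language-agnostic). Summing $0.10 ten times in binary floating point does not give exactly $1.00, while the same sum in integer cents is exact:

```java
public class CentsDemo {
    public static void main(String[] args) {
        // Summing $0.10 ten times as doubles: 0.10 has no exact
        // binary representation, so the error accumulates.
        double dollars = 0.0;
        for (int i = 0; i < 10; i++) {
            dollars += 0.10;
        }
        System.out.println(dollars == 1.0); // false
        System.out.println(dollars);        // 0.9999999999999999

        // The same sum in integer cents is exact.
        long cents = 0;
        for (int i = 0; i < 10; i++) {
            cents += 10; // ten cents each time
        }
        System.out.println(cents == 100);   // true
    }
}
```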
Answering the question directly: programming languages should include a Money type as a reasonable primitive.
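As a sketch of what such a primitive might look like (the class below is hypothetical, not from any standard library): an immutable wrapper around an integer count of the currency's minor unit, with arithmetic that refuses to mix currencies and fails loudly on overflow instead of wrapping silently.

```java
import java.util.Currency;

// Hypothetical Money type: an immutable integer count of minor units
// (cents for USD). Illustrative only; not a standard library class.
public final class Money {
    private final long minorUnits;
    private final Currency currency;

    public Money(long minorUnits, Currency currency) {
        this.minorUnits = minorUnits;
        this.currency = currency;
    }

    public Money plus(Money other) {
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException("currency mismatch");
        }
        // Math.addExact throws on overflow instead of wrapping silently.
        return new Money(Math.addExact(minorUnits, other.minorUnits), currency);
    }

    @Override
    public String toString() {
        // Display assumes a non-negative amount in a two-decimal
        // currency such as USD; kept minimal for the sketch.
        return String.format("%d.%02d %s",
                minorUnits / 100, minorUnits % 100, currency);
    }
}
```

Usage: `new Money(123, Currency.getInstance("USD"))` represents $1.23 without ever touching floating point.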
update
OK, I should have said "always" only twice, rather than three times. Money is indeed always an int; those who think otherwise are welcome to try sending me 0.3 cents and showing me the result on their bank statement. But as commenters point out, there are rare exceptions where you need to do floating-point math on money-like numbers, e.g., certain kinds of prices or interest calculations. Even then, those should be treated as exceptions. Money comes in and goes out as integer quantities, so the closer your system hews to that, the saner it will be.
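For those exceptional cases, a common pattern is to do the fractional math at the boundary and immediately round the result back to whole cents with an explicit rounding mode before it re-enters the system. A Java sketch of that pattern (the 3.75% rate and the HALF_EVEN rounding mode are illustrative assumptions, not a prescription):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class InterestDemo {
    public static void main(String[] args) {
        long principalCents = 123_456;                    // $1,234.56 held as cents
        BigDecimal annualRate = new BigDecimal("0.0375"); // illustrative 3.75% rate

        // The fractional math happens here, at the boundary, and the
        // result is rounded to whole cents with an explicit rounding mode.
        long interestCents = BigDecimal.valueOf(principalCents)
                .multiply(annualRate)
                .setScale(0, RoundingMode.HALF_EVEN)
                .longValueExact();

        System.out.println(interestCents);                // 4630, i.e. $46.30
        long newBalance = principalCents + interestCents; // back to integers
        System.out.println(newBalance);                   // 128086
    }
}
```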