While performance is really a result of the implementation rather than the language, in practice some languages are faster and some are slower.
C is usually the fastest in such comparisons. C compilers are relatively mature, and C programs require minimal run-time support. A C program is normally compiled to something that can be loaded and executed directly, with only a little preparation on the part of the operating system. (There have been C interpreters, and they were slow, as you'd expect.)
Fortran is not usually included in those comparisons, but it is similar in most respects. Fortran was inherently faster than the C of the original Standard in large-scale floating-point computation, since a Fortran compiler could assume, say, that the three matrices passed to a multiplication routine were disjoint, and could optimize on that basis. C compilers couldn't assume that. (C99 later added the `restrict` qualifier to let programmers make the same promise.)
Java programs are normally compiled to an artificial machine language, which is then normally compiled on the fly (just-in-time compilation). That could theoretically be faster than C-style ahead-of-time compilation (the JIT can make better guesses about the flow of execution, and it can tailor the code to the exact system in use), but in practice it usually isn't. Java also requires more run-time support, such as a garbage collector, and the JIT compiler and runtime have to load and get going, which results in increased startup time that can be noticeable.
Python programs are normally compiled to bytecode for an artificial machine and then interpreted, which is slower. The compiled files (".pyc") can be cached, but for a script run directly only the source is typically available, so execution means compiling first and then interpreting, which is slow. Also, Python is dynamically typed: the compiler doesn't know the type of anything up front, so Python functions have to be able to accept different data types at runtime, which is inefficient.
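As a small illustration of both points (using the standard `dis` module), Python's compiler produces bytecode for a virtual machine, and one compiled function must serve whatever operand types show up at runtime:

```python
import dis

def add(a, b):
    # One compiled function serves every operand type; the compiler
    # cannot specialize it, so each call dispatches on runtime types.
    return a + b

# The source is compiled to bytecode for Python's virtual machine;
# dis shows the instructions the interpreter loop will execute.
dis.dis(add)

# The very same bytecode handles all of these:
print(add(1, 2))        # 3
print(add("py", "c"))   # pyc
print(add([1], [2]))    # [1, 2]
```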
There's always room for surprises. On one celebrated occasion, a CMU Common Lisp program out-number-crunched a Fortran program. Common Lisp requires garbage collection, which apparently wasn't an issue in that application. It is normally dynamically typed, but it's possible to declare all types statically. The Fortran compiler had a small inefficiency the CMU Common Lisp compiler didn't, and was duly improved afterwards.
I think the motivation for language designers to revise existing languages is to introduce innovation while ensuring that their target developer community stays together and adopts the new language: moving an existing community to a new revision of an existing language is more effective than creating a new community around a new language. Of course, this forces some developers to adopt the new standard even if they were fine with the old one: in a community you sometimes have to impose certain decisions on a minority if you want to keep the community together.
Also, consider that a general-purpose language tries to serve as many programmers as possible, and often it is applied in new areas it wasn't designed for. So instead of aiming for simplicity and stability of design, the community can choose to incorporate new ideas (even from other languages) as the language moves to new application areas. In such a scenario, you cannot expect to get it right at the first attempt.
This means that languages can undergo deep change over the years and the latest revision may look very different from the first one. The name of the language is not kept for technical reasons, but because the community of developers agrees to use an old name for a new language. So the name of the programming language identifies the community of its users rather than the language itself.
IMO the reason many developers find this acceptable (or even desirable) is that a gradual transition to a slightly different language is easier and less confusing than a jump to a completely new language that would take more time and effort to master.
Consider that there are a number of developers who have one or two favourite languages and are not very keen on learning new (radically different) ones. And even for those who do like learning new things, learning a new programming language is always a hard and time-consuming activity.
Also, it can be preferable to be part of a large community with a rich ecosystem rather than of a very small community using a lesser-known language. So, when the community decides to move on, many members follow to avoid isolation.
As a side comment, I think the argument that in-language evolution is needed to maintain compatibility with legacy code is rather weak: Java can call C code, Scala can easily integrate with Java code, C# can integrate with C++. There are many examples showing that you can easily interface with legacy code written in another language without the need for source-code compatibility.
NOTE
From some answers and comments, it seems that some readers have interpreted the question as "Why do programming languages need to evolve?"
I think this is not the main point of the question, since it is obvious that programming languages need to evolve and why (new requirements, new ideas). The question is rather "Why does this evolution have to happen inside a programming language instead of spawning many new languages?"
Best Answer
One of the biggest advantages of an XML-based language is that it looks easy to implement.
No, really: there are a ton of validating parsers available that will diagnose the syntax-related compile errors and give you the AST for free.
Execution is then simply a matter of iterating over said AST while keeping a map of the functions and variables.
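A minimal sketch of that approach in Python, using the standard library's ElementTree parser (which is not a validating parser, but does hand back the tree for free) and a hypothetical toy XML language with `<set>`, `<print>`, `<add>`, and `<var>` elements invented for this example:

```python
import xml.etree.ElementTree as ET

# A hypothetical toy XML language (elements invented for illustration).
PROGRAM = """
<program>
  <set name="x" value="2"/>
  <set name="y" value="3"/>
  <print><add><var name="x"/><var name="y"/></add></print>
</program>
"""

def eval_expr(node, env):
    # Evaluate an expression node against the variable environment.
    if node.tag == "var":
        return env[node.get("name")]
    if node.tag == "add":
        return sum(eval_expr(child, env) for child in node)
    raise ValueError(f"unknown expression: {node.tag}")

def run(source):
    env = {}                      # the map of variables described above
    root = ET.fromstring(source)  # the parser gives us the tree
    for stmt in root:             # execution is a walk over that tree
        if stmt.tag == "set":
            env[stmt.get("name")] = int(stmt.get("value"))
        elif stmt.tag == "print":
            print(eval_expr(stmt[0], env))
    return env

run(PROGRAM)  # prints 5
```

A real implementation would add a function map alongside `env` and a schema to validate programs up front, but the shape (parse, then walk) stays the same.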