While performance is ultimately a property of implementations rather than languages, there are, in practice, faster and slower languages.
C is usually the fastest in comparisons. C compilers are relatively mature, and C programs require minimal run-time support. A C program is normally compiled to something that can be loaded and executed with only a little preparation on the part of the computer. (There have been C interpreters, and they were slow, as you'd expect.)
Fortran is not usually in those comparisons, but is similar in most respects. Fortran was inherently faster in large-scale floating-point computation than the C of the original Standard, because a Fortran compiler could assume, say, that the three matrices passed to a multiplication subroutine were disjoint, and could optimize on that basis. C compilers couldn't assume that until C99 added the "restrict" qualifier, which lets the programmer promise it.
Java programs are normally compiled to an artificial machine language (bytecode), which is then normally compiled on the fly (just-in-time compilation). That could theoretically be faster than C-style compilation (the JIT can make better guesses about the flow of execution, and it can tailor the code to the exact system in use), but in practice it isn't. Java also requires more run-time support, such as a garbage collector, and the JIT compiler and runtime themselves have to load and get going. The result is increased startup time, which can be noticeable.
Python programs are normally compiled to an artificial machine language (bytecode) and then interpreted, which is slower. It is possible to store the compiled files (".pyc"), but frequently only the source is stored, so execution requires compiling first and then interpreting, which is slow. Python is also dynamically typed: the compiler doesn't know the type of anything up front, so every Python function has to accept whatever types arrive at runtime and dispatch on them operation by operation, which is inefficient.
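A minimal sketch of that last point in plain Python (the "add" function here is just an illustration, not anything from a real codebase):

    # The same function body must handle whatever types show up at runtime,
    # so the interpreter re-dispatches on the operand types at every call
    # instead of running one specialized, pre-compiled version.
    def add(a, b):
        return a + b

    print(add(1, 2))          # integer addition     -> 3
    print(add(1.5, 2.5))      # float addition       -> 4.0
    print(add("py", "thon"))  # string concatenation -> "python"

    # The bytecode cache mentioned above can also be produced explicitly;
    # this assumes a file named example.py sits next to this script:
    # import py_compile
    # py_compile.compile("example.py")  # writes __pycache__/example.*.pyc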
There's always room for surprises. On one celebrated occasion, a CMU Common Lisp program out-number-crunched a Fortran program. Common Lisp requires garbage collection, which apparently wasn't an issue in that application; it is also normally dynamically typed, but it's possible to declare all the types statically. The Fortran compiler had a small inefficiency the CMU Common Lisp compiler didn't, and was duly improved afterwards.
Yes, it is irrelevant.
Computers are tireless, near-perfect execution engines working at speeds that simply aren't comparable to brains. While there is a measurable amount of time that a function call adds to the execution time of a program, it is nothing compared to the additional time the next person who touches the code will need to disentangle the unreadable routine and even begin to understand how to work with it. You can try the calculation for fun: assume your code has to be maintained only once, and that the clever version adds only half an hour to the time someone needs to come to terms with it. Take your processor's clock speed and calculate: how many times would the code have to run to even dream of offsetting that?
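Here is what that back-of-the-envelope calculation looks like; the numbers (a 3 GHz processor, roughly ten cycles of overhead saved per avoided function call) are assumptions purely for illustration:

    # Rough break-even calculation: how many executions would be needed for
    # the saved function-call overhead to pay back half an hour of a
    # maintainer's time? All figures below are illustrative assumptions.
    clock_hz = 3e9                    # assumed 3 GHz processor
    call_overhead_s = 10 / clock_hz   # ~3.3 ns saved per avoided call
    maintenance_cost_s = 30 * 60      # half an hour of human time

    calls_to_break_even = maintenance_cost_s / call_overhead_s
    print(f"{calls_to_break_even:.1e} calls")  # 5.4e+11 -- over half a trillion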
In short, taking pity on the CPU is completely, utterly misguided 99.99% of the time. For the rare remaining cases, use profilers. Do not assume that you can spot those cases - you can't.
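When you do hit one of those rare cases, the standard library's profiler is enough to get started; the "work" function below is just a stand-in for your real workload:

    # Profile a toy workload with cProfile from the standard library.
    # In practice you would run this on the code you actually suspect.
    import cProfile

    def work():
        total = 0
        for i in range(1_000_000):
            total += i * i
        return total

    cProfile.run("work()")  # prints per-function call counts and times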
We're all just learning bits of programming languages. The only people I'd rate a 10 out of 10 in knowledge of a language are its implementers.
Learning multiple languages, and paradigms, is the only way to develop a "taste" for what you like and don't like. If you had learned only one language, you wouldn't really be able to decide whether you even like it.
You're actually doing it the correct way. You will be able to reuse the most important fundamentals you learn in each while getting exposure to different syntax, libraries, and frameworks.