Chrome V8 and JIT Compilation – How Chrome V8 Works and Why JavaScript Was Not JIT-Compiled Initially

compiler, interpreters, javascript, jit

I have been researching interpreters and compilers, and then I stumbled across JIT compilation, specifically Google Chrome's V8 JavaScript engine.

My questions are:

  1. How can it be faster than standard Interpretation?
  2. Why wasn't JIT-Compilation used in the first place?

My Current Understanding

  1. Every JavaScript program starts out as source code and then, regardless of the method of execution, is ultimately translated to machine code.

    Both JIT compilation and interpretation must follow this path, so how can JIT compilation be faster (especially since JIT is time-constrained, unlike AOT compilation)?

  2. It seems that JIT compilation is a relatively old innovation, based on Wikipedia's JIT compilation article.

"The earliest published JIT compiler is generally attributed to work on LISP by McCarthy in 1960."

"Smalltalk (c. 1983) pioneered new aspects of JIT compilations. For example, translation to machine code was done on demand, and the result was cached for later use. When memory became scarce, the system would delete some of this code and regenerate it when it was needed again."

So why was JavaScript interpreted to begin with?


I'm very confused, and I've done a lot of research on this, but I haven't found satisfactory answers.

Clear, concise answers would be appreciated, and if additional explanation about interpreters, JIT compilers, etc. is needed, that would be appreciated as well.

Best Answer

The short answer is that JIT has longer initialization times, but is a lot faster in the long run, and JavaScript wasn't originally intended for the long run.

In the 90s, typical JavaScript on a web site would amount to one or two functions in the header, and a handful of code embedded directly in onclick properties and the like. It would typically get run right when the user was expecting a huge page load delay anyway. Think extremely basic form validation or tiny math utilities like mortgage interest calculators.

Interpreting as needed was a lot simpler and provided perfectly adequate performance for the use cases of the day. If you wanted something with long-run performance, you used Flash or a Java applet.

Google Maps, launched in 2004, was one of the first killer apps for heavy JavaScript use. It opened people's eyes to the possibilities of JavaScript, but it also highlighted its performance problems. Google spent some time trying to encourage browsers to improve their JavaScript performance, then eventually decided that competition would be the best motivator, and would also give Google the best seat at the browser-standards table. Chrome and V8 were released in 2008 as a result. Now, 11 years after Google Maps came on the scene, we have new developers who don't remember JavaScript ever being considered inadequate for that sort of task.

Say you have a function animateDraggedMap. It might take 500 ms to interpret it and 700 ms to JIT compile it. After JIT compilation, though, it might take only 100 ms to actually run. If it's the 90s and you're only calling a function once and then reloading the page, JIT is not worth it at all. If it's today and you're calling animateDraggedMap hundreds or thousands of times, that extra 200 ms at initialization is nothing, and the compilation can be done behind the scenes before the user even tries to drag the map.
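To put that arithmetic in one place, here is a rough back-of-the-envelope sketch in JavaScript. The numbers are the made-up ones from the paragraph above, not measurements of V8 or any real engine, and the model assumes the 500 ms interpretation cost is paid on every call while the 700 ms compile cost is paid once:

    // Hypothetical costs from the example above (illustrative only).
    const INTERPRET_MS = 500;    // cost to interpret animateDraggedMap, paid per call
    const JIT_COMPILE_MS = 700;  // one-time cost to compile it to machine code
    const COMPILED_RUN_MS = 100; // cost per call once it has been compiled

    function totalInterpretedMs(calls) {
      return INTERPRET_MS * calls;
    }

    function totalJitMs(calls) {
      return JIT_COMPILE_MS + COMPILED_RUN_MS * calls;
    }

    for (const calls of [1, 2, 10, 1000]) {
      console.log(`${calls} call(s): interpreted ${totalInterpretedMs(calls)} ms, JIT ${totalJitMs(calls)} ms`);
    }
    // 1 call:     interpreted 500 ms,    JIT 800 ms     (the 90s case: JIT loses)
    // 2 calls:    interpreted 1000 ms,   JIT 900 ms     (JIT is already ahead)
    // 10 calls:   interpreted 5000 ms,   JIT 1700 ms
    // 1000 calls: interpreted 500000 ms, JIT 100700 ms  (the Google Maps case)

With these made-up numbers the break-even point comes after only a couple of calls, which is why the trade-off flipped once pages started calling the same functions constantly.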