The poster boy for HTML5 apps, LinkedIn went native in early 2013.
In an interview with VentureBeat they explain why.
I think this is the part most relevant to your question:
Prasad said performance issues weren’t causing crashes or making the
app run slowly. What he did say shows that HTML5 for the mobile web
still has a bright future — but only if developers are willing to
build the tools to support it.
...
There are a few things that are critically missing. One is tooling
support — having a debugger that actually works, performance tools
that tell you where the memory is running out. If you look at Android
and iOS, there are two very large corporations that are focused on
building tools to give a lot of detailed information when things go
wrong in production. On the mobile web side, getting those desktop
tools to work for mobile devices is really difficult. The second big
chunk we are struggling with is operability, runtime diagnostics
information. Even now, when we build HTML5, we build it as a
client-side app. It’s more of a client-server architecture. … The
operability of that, giving us information when we’re distributed to a
large volume of users, there aren’t as many great tools to support
that, as well.
[Prasad also noted that dev and ops tools for solving
issues quickly "don't exist."]
Because those two things don’t exist,
people are falling back to native. It’s not that HTML5 isn’t ready;
it’s that the ecosystem doesn’t support it. … There are tools, but
they’re at the beginning. People are just figuring out the basics.
There are really two things to understand:
- prototypal inheritance has nothing to do with performance at all. The performance issues come from runtime changes to the inheritance structure.
- prototypal inheritance (and flexible object structures) are not so much inherently slower, as they are harder to optimize.
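A minimal sketch of the first point: a stable call site lets the engine cache the method lookup, and mutating the prototype at runtime invalidates that cache. The caching itself happens inside the engine; the observable part here is just the changed behavior.

```javascript
function Point(x, y) { this.x = x; this.y = y; }
Point.prototype.norm = function () { return Math.sqrt(this.x * this.x + this.y * this.y); };

const p = new Point(3, 4);
// Stable call site: the engine can cache where `norm` lives.
console.log(p.norm()); // 5

// Runtime change to the inheritance structure: cached lookups for `norm`
// must be thrown away and redone.
Point.prototype.norm = function () { return Math.abs(this.x) + Math.abs(this.y); };
console.log(p.norm()); // 7
```

The code is perfectly legal either way; the cost is that the engine's optimization assumptions were invalidated mid-run.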
To illustrate the first claim: Ruby is a flagship example of class-based objects being abysmally slow. The problem generally persists throughout Smalltalk-like languages, e.g. Objective-C, that employ message passing for method calls. The Objective-C runtime uses some nifty method caching to tackle the issue, but it did take a while to get that far.
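The method-cache idea can be sketched in JavaScript. This is loosely in the spirit of the Objective-C runtime's dispatch caching, not how it actually works; `makeSend` and the one-entry cache are made up for this sketch.

```javascript
// One-entry method cache per call site: remember which prototype we last
// dispatched on, and skip the lookup while the receiver's prototype matches.
function makeSend(name) {
  let cachedProto = null;
  let cachedFn = null;
  return function send(receiver, ...args) {
    const proto = Object.getPrototypeOf(receiver);
    if (proto !== cachedProto) {           // cache miss: do the full lookup
      cachedProto = proto;
      cachedFn = proto[name];
    }
    return cachedFn.apply(receiver, args); // cache hit: no lookup at all
  };
}

const Dog = { speak() { return "woof from " + this.label; } };
const speak = makeSend("speak");

const rex = Object.create(Dog);
rex.label = "Rex";
console.log(speak(rex)); // "woof from Rex"
```

Every `Object.create(Dog)` receiver after the first hits the cache; the cost of message passing is paid only when the receiver's prototype changes.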
What makes method calls cheap is that the compiler (or JIT, or even the runtime) has definite knowledge about the structure of an object. Whether that knowledge is given explicitly through language features or inferred implicitly through static analysis, it exists, and the compiler can use it to optimize.
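In JavaScript terms, this structural knowledge is what engines recover with hidden classes (a.k.a. shapes): objects built the same way share a layout, so a property access can become a fixed-offset load. A hedged sketch of the effect, with `Vec` and `sumX` as illustrative names:

```javascript
// `a` and `b` are built identically, so they share one shape and `v.x`
// can be compiled to a fixed-offset load at call sites that only see them.
function Vec(x, y) { this.x = x; this.y = y; }
const a = new Vec(1, 2);
const b = new Vec(3, 4);

// Adding a property afterwards gives `c` a different shape, so a call site
// that sees all three must fall back to a more generic lookup.
const c = new Vec(5, 6);
c.z = 7;

function sumX(vs) {
  let s = 0;
  for (const v of vs) s += v.x; // fast for [a, b]; generic once `c` shows up
  return s;
}
console.log(sumX([a, b, c])); // 9
```

The result is the same either way; only the machine code the engine can generate for `v.x` differs.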
Now when the structure can change at runtime, things get tricky: you need good heuristics to decide which portions of the code are worth optimizing at all. To get the best overall runtime characteristics, you want a good ratio between how often the code runs, how often its assumptions change, and how much the optimization costs.
So what is the point of classes in dynamic languages? Well, there must be one, because there are numerous JavaScript class systems (like that of Ext). Familiarity for programmers used to classes is one reason, sure. But the real benefit comes from helping to ensure explicit definitions of object types. Such a class definition is one (albeit complex) statement. With vanilla JavaScript constructors, you have a whole bunch of statements that are grouped together if you're lucky; the class structure is really just a side effect of imperative code. Class declarations, unsurprisingly, are meant to facilitate a more declarative style.
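The contrast is easy to see side by side. ES2015 later added `class` syntax that makes exactly this point; both definitions below produce equivalent objects, but one is a pile of statements and the other is a single declaration:

```javascript
// Vanilla constructor: the "class" emerges from several imperative statements
// that merely happen to sit next to each other.
function Animal(name) { this.name = name; }
Animal.prototype.speak = function () { return this.name + " makes a sound"; };

// ES2015 class syntax: the same structure as one declarative unit.
class AnimalClass {
  constructor(name) { this.name = name; }
  speak() { return this.name + " makes a sound"; }
}

console.log(new Animal("Rex").speak());      // "Rex makes a sound"
console.log(new AnimalClass("Rex").speak()); // "Rex makes a sound"
```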
It's worth noting that Self, the language that really made prototypal inheritance explicit (although it is straightforward to achieve in any language with message passing), was created for an environment where you programmed a fully interactive system while it was running: you could actually see an object on screen. This allowed for a clean declaration that was still modifiable at runtime, because the declaration and the result were so intimately coupled. Without such a coupling, fiddling with object structure at runtime quickly becomes an unintelligible mess that is hard to fit in your brain, let alone reason about.
You can pretty much get prototypal inheritance if you have good support for delegation: you just delegate all unimplemented calls to an object that you consider a prototype, and you're done. It's more flexible. However, it's equally harder to optimize.
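That recipe can be written out directly. Here is a hand-rolled delegation sketch using a `Proxy`; `delegate` is an illustrative helper, not a standard API (JavaScript's built-in prototype chain already does this for you):

```javascript
// Forward any property the target lacks to a designated "prototype" object.
function delegate(target, proto) {
  return new Proxy(target, {
    get(t, key) {
      return key in t ? t[key] : proto[key];
    }
  });
}

const base = { describe() { return "I am " + this.kind; } };
const cat = delegate({ kind: "cat" }, base);

// `describe` is unimplemented on the target, so the call is delegated to
// `base`, while `this` still resolves `kind` on the delegating object.
console.log(cat.describe()); // "I am cat"
```

The flexibility is apparent: swap `proto` at any time and every lookup changes. That same mutability is precisely what makes this hard for a compiler to optimize.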
Best Answer
EcmaScript language geeks often use the term "ES interpreter" to refer to an implementation of EcmaScript, but the spec itself does not use that term. The language overview in particular describes the language in interpreter-agnostic terms.
So EcmaScript assumes a "host environment" which is defined as a provider of object definitions including all those that allow I/O or any other links to the outside world, but does not require an interpreter.
The semantics of statements and expressions in the language are defined in terms of completion records, which are trivial to implement in an interpreter, but the specification does not require one.
Non-local transfers of control can be converted to arrays of instructions with jumps, allowing for native or byte-code compilation.
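The completion-record idea can be sketched directly, the way a naive interpreter might implement it. The record shape `{ type, value }` follows the spec's completion types; `runBlock` and the statement thunks are illustrative, not spec machinery:

```javascript
// Each "statement" returns a completion record. A non-normal completion
// (return, break, throw, ...) propagates outward instead of being compiled
// into a jump, which is why this is trivial for an interpreter.
function runBlock(stmts) {
  for (const stmt of stmts) {
    const completion = stmt();
    if (completion.type !== "normal") return completion; // non-local transfer
  }
  return { type: "normal", value: undefined };
}

const result = runBlock([
  () => ({ type: "normal", value: undefined }),
  () => ({ type: "return", value: 42 }),    // like a `return` inside the block
  () => { throw new Error("unreachable"); } // never evaluated
]);
console.log(result.type, result.value); // return 42
```

A compiler would instead lower the same semantics to a jump past the remaining statements, which is the point the paragraph above makes.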
"EcmaScript Engine" might be a better way to express the same idea.
This is not true. The V8 "interpreter" compiles to native code internally, Rhino optionally compiles to Java bytecode internally, and the various Mozilla engines ({Trace,Spider,Jager}Monkey) use JIT compilers.