JavaScript – Pros and Cons of Facebook’s React vs. Web Components (Polymer)

html · javascript · react

What are the main benefits of Facebook's React over the upcoming Web Components spec and vice versa (or perhaps a more apples-to-apples comparison would be to Google's Polymer library)?

According to this JSConf EU talk and the React homepage, the main benefits of React are:

  • Decoupling and increased cohesion using a component model
  • Abstraction, Composition and Expressivity
  • Virtual DOM & Synthetic events (which basically means they completely re-implemented the DOM and its event system)
    • Enables modern HTML5 event stuff on IE 8
    • Server-side rendering
    • Testability
    • Bindings to SVG, VML, and <canvas>

Almost everything mentioned is being integrated into browsers natively through Web Components except this virtual DOM concept (obviously). I can see how the virtual DOM and synthetic events can be beneficial today to support old browsers, but isn't throwing away a huge chunk of native browser code kind of like shooting yourself in the foot in the long term? As far as modern browsers are concerned, isn't that a lot of unnecessary overhead/reinventing of the wheel?

Here are some things I think React is missing that Web Components will take care of for you. Correct me if I'm wrong.

  • Native browser support (read "guaranteed to be faster")
  • Write script in a scripting language, write styles in a styling language, write markup in a markup language.
  • Style encapsulation using Shadow DOM
    • React instead has this, which requires writing CSS in JavaScript. Not pretty.
  • Two-way binding

Best Answer

Update: this answer seems to be pretty popular so I took some time to clean it up a little bit, add some new info and clarify a few things that I thought were not clear enough. Please comment if you think anything else needs clarification or updates.

Most of your concerns are really a matter of opinion and personal preference but I'll try to answer as objectively as I can:

Native vs. Compiled

Write JavaScript in vanilla JavaScript, write CSS in CSS, write HTML in HTML.

Back in the day there were hot debates about whether one should write native assembly by hand or use a higher-level language like C and let the compiler generate the assembly code for you. Even before that, people refused to trust assemblers and preferred to write machine code by hand (and I'm not joking).

Meanwhile, today there are a lot of people who write HTML in Haml or Jade, CSS in Sass or Less and JavaScript in CoffeeScript or TypeScript. It's there. It works. Some people prefer it, some don't.

The point is that there is nothing fundamentally wrong in not writing JavaScript in vanilla JavaScript, CSS in CSS and HTML in HTML. It's really a matter of preference.

Internal vs. External DSLs

  • Style encapsulation using Shadow DOM
    • React instead has this, which requires writing CSS in JavaScript. Not pretty.

Pretty or not, it is certainly expressive. JavaScript is a very powerful language, much more powerful than CSS (even including any of CSS preprocessors). It kind of depends on whether you prefer internal or external DSLs for those sorts of things. Again, a matter of preference.

(Note: I was talking about the inline styles in React that were referenced in the original question.)

Types of DSLs - explanation

Update: Reading my answer some time after writing it, I think I need to explain what I mean here. A DSL is a domain-specific language, and it can be either internal (using the syntax of the host language, like JavaScript - for example React without JSX, or the inline styles in React mentioned above) or external (using a different syntax than the host language - for example, inlining CSS, an external DSL, inside JavaScript).

It can be confusing because some literature uses different terms than "internal" and "external" to describe those kinds of DSLs. Sometimes "embedded" is used instead of "internal", but the word "embedded" can mean different things. For example, Lua is described as "Lua: an extensible embedded language", where "embedded" has nothing to do with an embedded (internal) DSL (in that sense it is quite the opposite - an external DSL); it means that it is embedded in the same sense that, say, SQLite is an embedded database. There is even eLua, where the "e" stands for "embedded" in a third sense - that it is meant for embedded systems! That's why I don't like using the term "embedded DSL": things like eLua can be "DSLs" that are "embedded" in two different senses while not being an "embedded DSL" at all!

To make things worse, some projects introduce even more confusion to the mix. E.g. Flatiron templates are described as "DSL-free" while in fact they are a perfect example of an internal DSL, with syntax like: map.where('href').is('/').insert('newurl');

That having been said, when I wrote "JavaScript is a very powerful language, much more powerful than CSS (even including any of CSS preprocessors). It kind of depends on whether you prefer internal or external DSLs for those sorts of things. Again, a matter of preference." I was talking about those two scenarios:

One:

/** @jsx React.DOM */
var myColor = 'red';            // an ordinary JavaScript variable
var colored = {
  color: myColor                // a style is just a plain JavaScript object
};
// mountNode is whatever DOM element you want to render into
React.renderComponent(<div style={colored}>Hello World!</div>, mountNode);

Two:

// Sass:
$my-color: red;
.colored {
  color: $my-color;
}
// HTML:
<div class="colored">Hello World!</div>

The first example uses what was described in the question as "writing CSS in JavaScript. Not pretty." The second example uses Sass. While I agree that using JavaScript to write CSS may not be pretty (for some definitions of "pretty"), there is one advantage of doing it.

I can have variables and functions in Sass, but are they lexically scoped or dynamically scoped? Are they statically or dynamically typed? Strongly or weakly? What about the numeric types? Type coercion? Which values are truthy and which are falsy? Can I have higher-order functions? Recursion? Tail calls? Lexical closures? Are they evaluated in normal order or applicative order? Is there lazy or eager evaluation? Are arguments to functions passed by value or by reference? Are they mutable? Immutable? Persistent? What about objects? Classes? Prototypes? Inheritance?

Those are not trivial questions, and yet I have to know the answers to them if I want to understand Sass or Less code. I already know those answers for JavaScript, which means that I already understand every internal DSL (like the inline styles in React) on those very levels. So if I use React, I have to know only one set of answers to those (and many similar) questions, while if I use, for example, Sass and Handlebars, I have to know three sets of those answers and understand their implications.

That's not to say that one way or the other is always better, but every time you introduce another language to the mix you pay a price that may not be obvious at first glance, and that price is complexity.
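
To make the internal-DSL point concrete, here is a minimal sketch, using the same old React.renderComponent API as in the example above (the variable and helper names are made up for illustration): because a style is just a JavaScript value, ordinary language features like variables, functions and composition apply to it directly, with no separate preprocessor semantics to learn.

/** @jsx React.DOM */
// styles are plain JavaScript objects
var baseText = { fontFamily: 'sans-serif', lineHeight: 1.4 };

// an ordinary function: takes a style object, returns a new one with extra rules
function emphasized(style) {
  var result = {};
  for (var key in style) { result[key] = style[key]; }
  result.fontWeight = 'bold';
  return result;
}

// mountNode is whatever DOM element you want to render into
React.renderComponent(
  <p style={emphasized(baseText)}>Hello World!</p>,
  mountNode
);

Whether that is nicer than learning Sass's own variable and function semantics is exactly the matter of preference discussed above.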

I hope I clarified what I originally meant a little bit.

Data binding

Two-way binding

This is a really interesting subject and in fact also a matter of preference. Two-way is not always better than one-way; it's a question of how you want to model mutable state in your application. I have always viewed two-way bindings as an idea somewhat contrary to the principles of functional programming, but functional programming is not the only paradigm that works; some people prefer this kind of behavior, and both approaches seem to work pretty well in practice. If you're interested in the details of the design decisions related to the modeling of state in React, watch the talk by Pete Hunt (linked to in the question) and the talk by Tom Occhino and Jordan Walke, who explain it very well in my opinion.
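
To make that contrast concrete, here is a minimal sketch of what one-way flow looks like in React, using the React.createClass API of that era (the component and handler names are made up for illustration): the input's value always comes from state, and the only way it changes is through an explicit handler, so there is no implicit write-back from the DOM to the model.

/** @jsx React.DOM */
var NameField = React.createClass({
  getInitialState: function () {
    return { name: '' };
  },
  handleChange: function (event) {
    // the single, explicit path from the DOM back to the state
    this.setState({ name: event.target.value });
  },
  render: function () {
    // the value always flows one way: from state into the DOM
    return <input value={this.state.name} onChange={this.handleChange} />;
  }
});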

Update: See also another talk by Pete Hunt: Be predictable, not correct: functional DOM programming.

Update 2: It's worth noting that many developers are arguing against bidirectional data flow, or two-way binding; some even call it an anti-pattern. Take for example the Flux application architecture, which explicitly avoids the MVC model (which proved to be hard to scale for large Facebook and Instagram applications) in favor of a strictly unidirectional data flow (see the Hacker Way: Rethinking Web App Development at Facebook talk by Tom Occhino, Jing Chen and Pete Hunt for a good introduction). A lot of the critique against AngularJS (the most popular Web framework loosely based on the MVC model, known for two-way data binding) also includes arguments against that bidirectional data flow.

Update 3: Another interesting article that nicely explains some of the issues discussed above is Deconstructing ReactJS's Flux - Not using MVC with ReactJS by Mikael Brassman, author of RefluxJS (a simple library for unidirectional data flow application architecture inspired by Flux).
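
The core of that unidirectional idea is small enough to sketch. The following is a deliberately simplified illustration of the action -> dispatcher -> store -> view shape (it is not the actual Flux, Reflux or React API; all names are made up), where data only ever moves in one direction:

var listeners = [];
var store = { todos: [] };                 // the single source of truth

function dispatch(action) {                // every change enters through here
  if (action.type === 'ADD_TODO') {
    store.todos.push(action.text);
  }
  listeners.forEach(function (render) {    // views re-render from the store
    render(store);
  });
}

function subscribe(render) {
  listeners.push(render);
}

// a "view" only reads from the store and emits actions - it never writes back directly
subscribe(function (state) {
  console.log('render', state.todos);
});
dispatch({ type: 'ADD_TODO', text: 'learn unidirectional data flow' });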

Update 4: Ember.js is currently moving away from two-way data binding, and in future versions it will be one-way by default. See: The Future of Ember talk by Stefan Penner from the Embergarten Symposium in Toronto on November 15th, 2014.

Update 5: See also: The Road to Ember 2.0 RFC - interesting discussion in the pull request by Tom Dale:

"When we designed the original templating layer, we figured that making all data bindings two-way wasn't very harmful: if you don't set a two-way binding, it's a de facto one-way binding!

We have since realized (with some help from our friends at React), that components want to be able to hand out data to their children without having to be on guard for wayward mutations.

Additionally, communication between components is often most naturally expressed as events or callbacks. This is possible in Ember, but the dominance of two-way data bindings often leads people down a path of using two-way bindings as a communication channel. Experienced Ember developers don't (usually) make this mistake, but it's an easy one to make." [emphasis added]

Native vs. VM

Native browser support (read "guaranteed to be faster")

Now finally something that is not a matter of opinion.

Actually here it is exactly the other way around. Of course "native" code can be written in C++ but what do you think the JavaScript engines are written in?

As a matter of fact the JavaScript engines are truly amazing in the optimizations that they use today - and not only V8 any more; SpiderMonkey and even Chakra shine these days. And keep in mind that with JIT compilers the code is not only as native as it can possibly be, but there are also run-time optimization opportunities that are simply impossible in any statically compiled code.

When people think that JavaScript is slow, they usually mean JavaScript that accesses the DOM. The DOM is slow. It is native, written in C++ and yet it is slow as hell because of the complexity that it has to implement.

Open your console and write:

console.dir(document.createElement('div'));

and see how many properties an empty div element that is not even attached to the DOM has to implement. These are only the first level properties that are "own properties", i.e. not inherited from the prototype chain:

align, onwaiting, onvolumechange, ontimeupdate, onsuspend, onsubmit, onstalled, onshow, onselect, onseeking, onseeked, onscroll, onresize, onreset, onratechange, onprogress, onplaying, onplay, onpause, onmousewheel, onmouseup, onmouseover, onmouseout, onmousemove, onmouseleave, onmouseenter, onmousedown, onloadstart, onloadedmetadata, onloadeddata, onload, onkeyup, onkeypress, onkeydown, oninvalid, oninput, onfocus, onerror, onended, onemptied, ondurationchange, ondrop, ondragstart, ondragover, ondragleave, ondragenter, ondragend, ondrag, ondblclick, oncuechange, oncontextmenu, onclose, onclick, onchange, oncanplaythrough, oncanplay, oncancel, onblur, onabort, spellcheck, isContentEditable, contentEditable, outerText, innerText, accessKey, hidden, webkitdropzone, draggable, tabIndex, dir, translate, lang, title, childElementCount, lastElementChild, firstElementChild, children, nextElementSibling, previousElementSibling, onwheel, onwebkitfullscreenerror, onwebkitfullscreenchange, onselectstart, onsearch, onpaste, oncut, oncopy, onbeforepaste, onbeforecut, onbeforecopy, webkitShadowRoot, dataset, classList, className, outerHTML, innerHTML, scrollHeight, scrollWidth, scrollTop, scrollLeft, clientHeight, clientWidth, clientTop, clientLeft, offsetParent, offsetHeight, offsetWidth, offsetTop, offsetLeft, localName, prefix, namespaceURI, id, style, attributes, tagName, parentElement, textContent, baseURI, ownerDocument, nextSibling, previousSibling, lastChild, firstChild, childNodes, parentNode, nodeType, nodeValue, nodeName

Many of them are actually nested objects - to see second level (own) properties of an empty native div in your browser, see this fiddle.

I mean seriously, onvolumechange property on every single div node? Is it a mistake? Nope, it's just a legacy DOM Level 0 traditional event model version of one of the event handlers "that must be supported by all HTML elements, as both content attributes and IDL attributes" [emphasis added] in Section 6.1.6.2 of the HTML spec by W3C - no way around it.
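
For a rough sense of scale, you can count everything such an element exposes (its own properties plus everything inherited through the prototype chain) right in the console:

var div = document.createElement('div');
var count = 0;
for (var name in div) {   // enumerates own and inherited enumerable properties
  count++;
}
console.log(count);       // typically well over two hundred in current browsers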

Meanwhile, these are the first level properties of a fake-DOM div in React:

props, _owner, _lifeCycleState, _pendingProps, _pendingCallbacks, _pendingOwner

Quite a difference, isn't it? In fact this is the entire object serialized to JSON (LIVE DEMO), because hey you actually can serialize it to JSON as it doesn't contain any circular references - something unthinkable in the world of native DOM (where it would just throw an exception):

{
  "props": {},
  "_owner": null,
  "_lifeCycleState": "UNMOUNTED",
  "_pendingProps": null,
  "_pendingCallbacks": null,
  "_pendingOwner": null
}

This is pretty much the main reason why React can be faster than the native browser DOM - because it doesn't have to implement this mess.
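
To get a feel for why that matters, here is a rough benchmark you can paste into the console. It only illustrates the relative cost of plain JavaScript objects versus native elements and is not React's actual diffing algorithm:

console.time('plain JavaScript objects');
for (var i = 0; i < 100000; i++) {
  // a lightweight description of an element: cheap to create and to throw away
  var fake = { tagName: 'div', props: {}, children: [] };
}
console.timeEnd('plain JavaScript objects');

console.time('real DOM elements');
for (var j = 0; j < 100000; j++) {
  // a real element carries all the baggage listed above
  var real = document.createElement('div');
}
console.timeEnd('real DOM elements');

On most browsers the second timer comes out noticeably larger, and that is before any of those elements are attached to the document, styled or laid out.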

See this presentation by Steven Luscher to see what is faster: native DOM written in C++ or a fake DOM written entirely in JavaScript. It's a very fair and entertaining presentation.

Update: Ember.js in future versions will use a virtual DOM heavily inspired by React to improve performance. See: The Future of Ember talk by Stefan Penner from the Embergarten Symposium in Toronto on November 15th, 2014.

To sum it up: features from Web Components like templates, data binding or custom elements will have a lot of advantages over React, but until the document object model itself gets significantly simplified, performance will not be one of them.

Update

Two months after I posted this answer there was some news that is relevant here. As I have just written on Twitter, the latest version of the Atom text editor written by GitHub in JavaScript uses Facebook's React to get better performance, even though according to Wikipedia "Atom is based on Chromium and written in C++", so it has full control of the native C++ DOM implementation (see The Nucleus of Atom) and is guaranteed to have support for Web Components since it ships with its own web browser. It is just a very recent example of a real-world project that could've used any other kind of optimization typically unavailable to Web applications, and yet it chose React, which is itself written in JavaScript, to achieve the best performance, even though Atom was not built with React to begin with, so doing it was not a trivial change.

Update 2

There is an interesting comparison by Todd Parker, who used WebPagetest to measure the performance of TodoMVC examples written in Angular, Backbone, Ember, Polymer, CanJS, YUI, Knockout, React and Shoestring. This is the most objective comparison that I've seen so far. What is significant here is that all of the respective examples were written by experts in those frameworks; they are all available on GitHub and can be improved by anyone who thinks that some of the code could be optimized to run faster.

Update 3

Ember.js in future versions will include a number of React's features that are discussed here (including a virtual DOM and unidirectional data binding, to name just a few) which means that the ideas that originated in React are already migrating into other frameworks. See: The Road to Ember 2.0 RFC - interesting discussion in the pull request by Tom Dale (Start Date: 2014-12-03): "In Ember 2.0, we will be adopting a "virtual DOM" and data flow model that embraces the best ideas from React and simplifies communication between components."

Angular.js 2.0 is also implementing a lot of the concepts discussed here.

Update 4

I have to elaborate on a few issues to answer this comment by Igwe Kalu:

"it is not sensible to compare React (JSX or the compilation output) to plain JavaScript, when React ultimately reduces to plain JavaScript. [...] Whatever strategy React uses for DOM insertion can be applied without using React. That said, it doesn't add any special benefits when considering the feature in question other than the convenience." (full comment here)

In case it wasn't clear enough, in part of my answer I am comparing the performance of operating directly on the native DOM (implemented as host objects in the browser) vs. React's fake/virtual DOM (implemented in JavaScript). The point I was trying to make is that the virtual DOM implemented in JavaScript can outperform the real DOM implemented in C++ and not that React can outperform JavaScript (which obviously wouldn't make much sense since it is written in JavaScript). My point was that "native" C++ code is not always guaranteed to be faster than "not-native" JavaScript. Using React to illustrate that point was just an example.

But this comment touched on an interesting issue. In a sense it is true that you don't need any framework (React, Angular or jQuery) for any reason whatsoever (like performance, portability or features), because you can always recreate what the framework does for you and reinvent the wheel - if you can justify the cost, that is.

But - as Dave Smith nicely put it in How to miss the point when comparing web framework performance: "When comparing two web frameworks, the question is not can my app be fast with framework X. The question is will my app be fast with framework X."

In my 2011 answer to What are some empirical technical reasons not to use jQuery, I explain a similar issue: it is not impossible to write portable DOM-manipulation code without a library like jQuery, but people rarely do so.

When using programming languages, libraries or frameworks, people tend to use the most convenient or idiomatic ways of doing things, not the perfect but inconvenient ones. The true value of good frameworks is making easy what would otherwise be hard to do - and the secret is making the right things convenient. The result is that you still have exactly the same power at your disposal as with the simplest form of lambda calculus or the most primitive Turing machine, but the relative expressiveness of certain concepts means that those very concepts tend to get expressed more easily, or at all, and that the right solutions are not just possible but actually implemented widely.

Update 5

The React + Performance = ? article by Paul Lewis from July 2015 shows an example where React is slower than vanilla JavaScript written by hand for an infinite list of Flickr pictures, which is especially significant on mobile. This example shows that everyone should always test performance for their specific use case and their specific target platforms and devices.

Thanks to Kevin Lozandier for bringing it to my attention.
