Bubbling is an event-flow mechanism in the DOM: events designated as bubbling (such as your click event), after being dispatched to their target, travel up the target's parent chain, triggering any matching event listeners registered on each ancestor along the way.
In other words, given a DOM like this:
<div id="foo">
    <div id="bar">
        <a href="/something">Click Target!</a>
    </div>
</div>
The click event that is triggered when the anchor is clicked will bubble up through the chain, triggering any click handlers on `div#bar` and `div#foo` on the way up.
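The dispatch walk can be sketched without a browser. The `bubble` helper and the plain objects below are hypothetical stand-ins for the real DOM API, just to show the parent-chain traversal:

```javascript
// Hypothetical stand-ins for the DOM nodes above: "listeners" maps
// event types to handler arrays, "parent" links each node to its ancestor.
const foo = { id: "foo", parent: null, listeners: {} };
const bar = { id: "bar", parent: foo, listeners: {} };
const anchor = { id: "anchor", parent: bar, listeners: {} };

// Simplified bubbling dispatch: run the target's handlers, then walk
// the parent chain upwards, running each ancestor's handlers in turn.
function bubble(target, type, event) {
  for (let node = target; node !== null; node = node.parent) {
    (node.listeners[type] || []).forEach(fn => fn.call(node, event));
  }
}

const order = [];
for (const node of [anchor, bar, foo]) {
  node.listeners.click = [() => order.push(node.id)];
}

bubble(anchor, "click", { type: "click" });
console.log(order); // ["anchor", "bar", "foo"]
```

The real DOM adds a capture phase before this and lets handlers call `stopPropagation()`, but the upward walk is the essence of bubbling.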
A grammar is context-free if all of its production rules have a single non-terminal on their left-hand side.
For example, this grammar which recognizes strings of matched parentheses ("()", "()()", "(())()", ...) is context-free:
S → SS
S → (S)
S → ()
The left-hand side of every rule consists of a single non-terminal (in this case it's always `S`, but there could be more).
Now consider this other grammar which recognizes strings of the form {a^n b^n c^n : n >= 1} (e.g. "abc", "aabbcc", "aaabbbccc"):
S → abc
S → aSBc
cB → WB
WB → WX
WX → BX
BX → Bc
bB → bb
If the non-terminal `B` is preceded by the terminal/literal character `c`, you rewrite that `cB` to `WB`; but if it's preceded by `b`, you expand `bB` to `bb` instead. The rule you may apply to `B` depends on its surroundings, and this is presumably what the context-sensitivity of context-sensitive grammars is alluding to.
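To see those rules in action, here is a sketch that mechanically derives "aabbcc" by applying one rule at a time (the `rewrite` helper is purely illustrative; it replaces the first occurrence of a rule's left-hand side):

```javascript
// Apply a single grammar rule lhs → rhs to the sentential form s,
// replacing the first occurrence of lhs.
function rewrite(s, lhs, rhs) {
  const i = s.indexOf(lhs);
  if (i === -1) throw new Error(`rule ${lhs} → ${rhs} does not apply to ${s}`);
  return s.slice(0, i) + rhs + s.slice(i + lhs.length);
}

let s = "S";
s = rewrite(s, "S", "aSBc"); // aSBc
s = rewrite(s, "S", "abc");  // aabcBc
s = rewrite(s, "cB", "WB");  // aabWBc   <- B preceded by c
s = rewrite(s, "WB", "WX");  // aabWXc
s = rewrite(s, "WX", "BX");  // aabBXc
s = rewrite(s, "BX", "Bc");  // aabBcc   <- net effect: cB became Bc
s = rewrite(s, "bB", "bb");  // aabbcc   <- B preceded by b
console.log(s); // "aabbcc"
```

The middle four steps exist only to swap `cB` into `Bc`, marching each `B` leftward until it sits next to the `b`s and can become a `b` itself.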
A context-free language can be recognized by a push-down automaton. Whereas a finite state machine uses no auxiliary storage, i.e. its decisions are based only on its current state and input, a push-down automaton also has a stack at its disposal and can peek at the top of the stack when making decisions.
To see that in action, you can parse nested parentheses by moving left to right, pushing a left parenthesis onto a stack each time you encounter one and popping each time you encounter a right parenthesis. If you never end up trying to pop from an empty stack, and the stack is empty at the end of the string, the string is valid.
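That procedure is only a few lines of code; in this sketch an array plays the role of the automaton's stack:

```javascript
// Push-down recognition of matched parentheses: push on "(", pop on ")".
function isBalanced(s) {
  const stack = [];
  for (const ch of s) {
    if (ch === "(") {
      stack.push(ch);
    } else if (ch === ")") {
      if (stack.length === 0) return false; // tried to pop an empty stack
      stack.pop();
    } else {
      return false; // the language's alphabet is just "(" and ")"
    }
  }
  return stack.length === 0; // leftover "(" means an unmatched paren
}

console.log(isBalanced("(())()")); // true
console.log(isBalanced("(()"));    // false
```

No finite state machine can do this, because for deeply nested input it would need one state per nesting depth, and it only has finitely many.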
For a context-sensitive language, a PDA isn't enough. You'll need a linear-bounded automaton, which is like a Turing Machine whose tape isn't unlimited (the amount of tape available is proportional to the length of the input). Note that this describes real computers pretty well - we like to think of them as Turing Machines, but in the real world you can't grab arbitrarily more RAM mid-program. If it's not obvious how an LBA is more powerful than a PDA: an LBA can emulate a PDA by using part of its tape as a stack, but it can also choose to use its tape in other ways.
(If you're wondering what a finite state machine can recognize, the answer is regular languages. But not the regexes on steroids with capture groups and look-behind/look-ahead you see in programming languages; I mean the ones you can build with operators like `[abc]`, `|`, `*`, `+`, and `?`. You can see that `abbbz` matches the regex `ab*z` just by keeping your current position in the string and the regex, no stack required.)
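For instance, a hand-rolled matcher for `ab*z` needs nothing but a current-state variable (a sketch of one specific machine, not a general regex engine):

```javascript
// A three-state machine for the regex ab*z; its only memory is which
// state it is in - no stack, no tape.
function matchesABStarZ(s) {
  let state = "start"; // start --a--> sawA --b--> sawA --z--> accept
  for (const ch of s) {
    if (state === "start" && ch === "a") state = "sawA";
    else if (state === "sawA" && ch === "b") state = "sawA";
    else if (state === "sawA" && ch === "z") state = "accept";
    else return false; // no transition for this input: reject
  }
  return state === "accept";
}

console.log(matchesABStarZ("abbbz")); // true
console.log(matchesABStarZ("abzb"));  // false
```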
Best Answer
You already described what it is. The use is that it's a low-level language that works across all browsers, is quite fast in most and very fast in some. What you make of this is as open-ended as what you do with any other programming language.
The use case Mozilla seems most keen on is this: there are already ways of compiling languages with LLVM backends (most prominently C and C++) to JavaScript, via Emscripten. asm.js is very close to what Emscripten already emits, so this allows Emscripten output (which is already impressively fast on today's JavaScript JIT compilers) to become even faster, furthering the goal of porting existing code bases to the web. Again, what exactly you use this for is your decision. Porting games is one use case (which Mozilla is apparently actively involved in), but there are countless things written in C and C++, quite a few of which might come in handy for someone's web site. Some I've seen flung around (plus some of my own devising), with no guarantee regarding feasibility: