You make a special message to A that changes it in a way that is not propagated. It's a common idiom where you want general updates to send notifications, but not internal updates or refreshes.
For example, I have a dialog that has notification handlers for changes to various controls, but during initialisation I want to set the controls' initial state without sending these notifications. In this case, I simply set an 'initialising' flag that the notification handler checks for; if it's set, the handler simply does nothing.
If you cannot set a flag to change the state of A to prevent notification, then you will have to send it with the change message. So you have two messages: one to update A (the general case) and another to quietly change A, sent by B, that still allows changes to A to be triggered when others change it. I don't know how you make these updates, so the implementation specifics are up to you.
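A minimal sketch of the 'initialising' flag idiom (the class and member names here are invented for illustration):

```javascript
// Sketch of the 'initialising' flag idiom: the handler always runs,
// but does nothing while the flag is set.
class Dialog {
  constructor() {
    this.initialising = true;
    this.notified = [];
    this.setValue("default"); // set initial state: no notification fires
    this.initialising = false;
  }
  setValue(value) {
    this.value = value;
    this.onValueChanged(value); // the notification handler always runs...
  }
  onValueChanged(value) {
    if (this.initialising) return; // ...but bails out during init
    this.notified.push(value);
  }
}

const dialog = new Dialog();
dialog.setValue("user edit"); // a general update: notification fires
```

After this runs, `dialog.notified` contains only the user edit, not the initial default.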
Repeat after me:
REST and asynchronous events are not alternatives. They're completely orthogonal.
You can have one, or the other, or both, or neither. They're entirely different tools for entirely different problem domains. In fact, general purpose request-response communication is absolutely capable of being asynchronous, event-driven, and fault tolerant.
As a trivial example, the AMQP protocol sends messages over a TCP connection. In TCP, every packet must be acknowledged by the receiver. If a sender of a packet doesn't receive an ACK for that packet, it keeps resending that packet until it's ACK'd or until the application layer "gives up" and abandons the connection. This is clearly a non-fault-tolerant request-response model because every "packet send request" must have an accompanying "packet acknowledge response", and failure to respond results in the entire connection failing. Yet AMQP, a standardized and widely adopted protocol for asynchronous fault tolerant messaging, is communicated over TCP! What gives?
The core concept at play here is that scalable loosely-coupled fault-tolerant messaging is defined by what messages you send, not how you send them. In other words, loose coupling is defined at the application layer.
Let's look at two parties communicating either directly with RESTful HTTP or indirectly with an AMQP message broker. Suppose Party A wishes to upload a JPEG image to Party B who will sharpen, compress, or otherwise enhance the image. Party A doesn't need the processed image immediately, but does require a reference to it for future use and retrieval. Here's one way that might go in REST:
- Party A sends an HTTP `POST` request message to Party B with `Content-Type: image/jpeg`
- Party B processes the image (for a long time if it's large) while Party A waits, possibly doing other things
- Party B sends an HTTP `201 Created` response message to Party A with a `Content-Location: <url>` header which links to the processed image
- Party A considers its work done since it now has a reference to the processed image
- Sometime in the future when Party A needs the processed image, it `GET`s it using the link from the earlier `Content-Location` header
The `201 Created` response code tells a client that not only was their request successful, it also created a new resource. In a 201 response, the `Content-Location` header is a link to the created resource. This is specified in RFC 7231, Sections 6.3.2 and 3.1.4.2.
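On Party A's side, the synchronous success case boils down to reading that header. A sketch, with the response modelled as a plain object (the object shape is invented for illustration):

```javascript
// Sketch: Party A records the link from a 201 Created response.
function handleUploadResponse(response) {
  if (response.status === 201) {
    // Per RFC 7231, Content-Location in a 201 links to the created resource.
    return response.headers["content-location"];
  }
  throw new Error("unexpected status " + response.status);
}

const imageUrl = handleUploadResponse({
  status: 201,
  headers: { "content-location": "/images/processed/42" },
});
// imageUrl can now be stored and GET'd whenever Party A actually needs it
```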
Now, let's see how this interaction works over a hypothetical RPC protocol on top of AMQP:
- Party A sends an AMQP message broker (call it Messenger) a message containing the image and instructions to route it to Party B for processing, then respond to Party A with an address of some sort for the image
- Party A waits, possibly doing other things
- Messenger sends Party A's original message to Party B
- Party B processes the message
- Party B sends Messenger a message containing an address for the processed image and instructions to route that message to Party A
- Messenger sends Party A the message from Party B containing the processed image address
- Party A considers its work done since it now has a reference to the processed image
- Sometime in the future when Party A needs the image, it retrieves the image using the address (possibly by sending messages to some other party)
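The broker roundtrip above can be sketched with a toy in-memory queue (all queue names and message shapes are invented; a real AMQP broker is far richer):

```javascript
// Toy in-memory "broker": named queues, send and receive.
const queues = {};
const send = (queue, msg) => (queues[queue] = queues[queue] || []).push(msg);
const receive = (queue) => (queues[queue] || []).shift();

// Party A: upload the image, asking for the reply on its own queue
send("partyB", { image: "<jpeg bytes>", replyTo: "partyA" });

// Messenger delivers; Party B processes and replies with an address
const job = receive("partyB");
send(job.replyTo, { address: "/images/processed/42" });

// Sometime later, Party A picks up the address
const reply = receive("partyA");
```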
Do you see the problem here? In both cases, Party A can't get an image address until after Party B processes the image. Yet Party A doesn't need the image right away and, by all rights, couldn't care less if processing is finished yet!
We can fix this pretty easily in the AMQP case by having Party B tell A that B accepted the image for processing, giving A an address for where the image will be after processing completes. Then Party B can send A a message sometime in the future indicating the image processing is finished. AMQP messaging to the rescue!
Except guess what: you can achieve the same thing with REST. In the AMQP example we changed a "here's the processed image" message to a "the image is processing, you can get it later" message. To do that in RESTful HTTP, we'll use the `202 Accepted` code and `Content-Location` again:
- Party A sends an HTTP `POST` message to Party B with `Content-Type: image/jpeg`
- Party B immediately sends back a `202 Accepted` response which contains some sort of "asynchronous operation" content describing whether processing is finished and where the image will be available when it's done processing. Also included is a `Content-Location: <link>` header which, in a `202 Accepted` response, is a link to the resource represented by whatever the response body is. In this case, that means it's a link to our asynchronous operation!
- Party A considers its work done since it now has a reference to the processed image
- Sometime in the future when Party A needs the processed image, it first `GET`s the async operation resource linked to in the `Content-Location` header to determine if processing is finished. If so, Party A then uses the link in the async operation itself to `GET` the processed image.
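Party A's side of this flow can be sketched with plain objects standing in for the HTTP responses (the response and operation shapes are invented for illustration):

```javascript
// Sketch: Party A handles the 202 and later polls the async operation.
function handleAccepted(response) {
  if (response.status !== 202) throw new Error("expected 202 Accepted");
  // Content-Location in a 202 links to the async operation resource.
  return response.headers["content-location"];
}

function imageUrlFrom(operation) {
  // null means "not done yet, GET the operation again later"
  return operation.done ? operation.imageUrl : null;
}

const opUrl = handleAccepted({
  status: 202,
  headers: { "content-location": "/operations/7" },
});

// Later, Party A GETs opUrl; suppose the operation resource looks like:
const pending = imageUrlFrom({ done: false });
const ready = imageUrlFrom({ done: true, imageUrl: "/images/processed/42" });
```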
The only difference here is that in the AMQP model, Party B tells Party A when the image processing is done, while in the REST model, Party A checks whether processing is done just before it actually needs the image. These approaches are equivalently scalable: as the system gets larger, the number of messages sent in both the async AMQP and the async REST strategies grows with equivalent asymptotic complexity. The client simply sends an extra message instead of the server.
But the REST approach has a few more tricks up its sleeve: dynamic discovery and protocol negotiation. Consider how both the sync and async REST interactions started. Party A sent the exact same request to Party B, with the only difference being the particular kind of success message that Party B responded with. What if Party A wanted to choose whether image processing was synchronous or asynchronous? What if Party A doesn't know if Party B is even capable of async processing?
Well, HTTP actually has a standardized protocol for this already! It's called HTTP Preferences, specifically the `respond-async` preference of RFC 7240 Section 4.1. If Party A desires an asynchronous response, it includes a `Prefer: respond-async` header with its initial POST request. If Party B decides to honor this request, it sends back a `202 Accepted` response that includes a `Preference-Applied: respond-async` header. Otherwise, Party B simply ignores the `Prefer` header and sends back `201 Created` as it normally would.
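Party B's side of that negotiation might be sketched like this (function and URL names invented; responses modelled as plain objects):

```javascript
// Sketch: Party B honours Prefer: respond-async only if it can.
function respondToUpload(request, canProcessAsync) {
  const prefer = request.headers["prefer"] || "";
  if (prefer.includes("respond-async") && canProcessAsync) {
    return {
      status: 202, // preference honoured: asynchronous processing
      headers: {
        "preference-applied": "respond-async",
        "content-location": "/operations/7",
      },
    };
  }
  return {
    status: 201, // preference ignored or absent: synchronous as normal
    headers: { "content-location": "/images/processed/42" },
  };
}

const asyncReply = respondToUpload({ headers: { prefer: "respond-async" } }, true);
const syncReply = respondToUpload({ headers: {} }, false);
```

Party A only has to branch on the status code it gets back; it never needs to know in advance which kind of server it is talking to.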
This allows Party A to negotiate with the server, dynamically adapting to whatever image processing implementation it happens to be talking to. Furthermore, the use of explicit links means Party A doesn't have to know about any parties other than B: no AMQP message broker, no mysterious Party C that knows how to actually turn the image address into image data, no second B-Async party if both synchronous and asynchronous requests need to be made, etc. It simply describes what it needs, what it would optionally like, and then reacts to status codes, response content, and links. Add in Cache-Control
headers for explicit instructions on when to keep local copies of data, and now servers can negotiate with clients which resources clients may keep local (or even offline!) copies of. This is how you build loosely-coupled fault-tolerant microservices in REST.
Not sure what you mean by reference-driven programming. From what I gather, you're wondering what the advantages of event-driven programming are, as opposed to writing code and then using a bunch of branches to determine when to call a given method.
Before I set off, allow me to be pedantic and point out that: a module can't listen for any event, nor can a button have a reference to a module. The button is part of the DOM, the module is JS. Both live in separate universes, and never the twain shall meet.
You can have the event loop pick up on certain events, and then invoke a given function object (either stand-alone, anonymous, or part of a module) to handle that event, or you can mix JS into your markup using certain HTML attributes (`onclick=""` and the like). The latter method is generally said to be outdated, messy, and therefore bad practice. In this day and age, where OOP buzzwords have found their way into the daily vocab of the average middle-manager, mixing JS in with markup isn't what I'd call separation of concerns...
I see it like this:
JavaScript was originally intended to run client side. It's not been considered a valid server-side language for that long. It's small, light and portable, and was meant to make it easy to "liven up" your website. Making the UI responsive, move stuff around and, most importantly, load content dynamically using AJAX.
When talking about AJAX, I think you'll agree, event-driven programming is the better option: you request content based on user input (clicking links, scrolling the page down etc...), and as you probably know, an AJAX response is dealt with using an event handler (`xhr.onreadystatechange = function`).

Since then, we've come a long way, but some things have remained the same: JS is, in essence, a functional language, which implies closures, lambdas and callback functions being used all over the place. The easiest way to make functional code work is to set all the functions up, and then let a change in state (aka an event) call one or more of these functions. Basically: functional languages feel at home in an event-driven universe.
The expressive power of these constructs is huge, and so it stands to reason that with every new trick, technology or feature that gets added, the people developing the JS engines will tend to implement events to better support these new bells and whistles. Couple that to the fact that the main libs/toolkits will probably provide an API that requires you to pass around functions, and you end up with a bunch of developers who, when confronted with a problem, will look at it as a series of events, that they can tap into to respond accordingly.
So, I guess you might say that JS was designed to be event-driven to a rather large extent, and that the people who use it are "trained" if you will to think accordingly.
Another thing to consider is: how the engines work. V8, for example, is great at idling. If you run node.js, you'll see that it's perfectly happy, consuming very little resources, just sitting there, doing nothing. Once it receives a request (which is an event in a way), it'll wake up and set to work.
Event-driven code reflects this:
This script doesn't do anything until the server "has to be there"; then the function passed to `createServer` is invoked and, you could say, becomes the server.

Back to the client-side:
You say you see the point of event binding with DOM elements, and responding to the user's actions. Well, that's what client-side JS does. Its main purpose was (and is) to augment the user experience. Whatever code gets executed client side should be triggered by an event.
The event could be the `load` event, or a `click` or `change` event... it doesn't matter. If the user isn't doing anything, JS shouldn't be busy. Some browsers kill a script if it's busy for too long anyway.

You ought to realize that picking up on changes without using the event loop JS already has to offer leaves you no real alternative than to write a busy loop. JS isn't meant to be a system language, so you could end up with `for(;;)` or `while(1)` loops. If you then were to have a single event handler, owing to JS being single-threaded, that handler can't ever get called, because the JS thread is always busy, claiming the CPU like there's no tomorrow.

I can't see how else you could set about your business, to be honest.
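That starvation is easy to demonstrate with a bounded version of the busy loop (bounded so it actually terminates):

```javascript
// A timer due "immediately" cannot fire while a loop holds the only thread.
let firedAfter = null;
const start = Date.now();
setTimeout(() => { firedAfter = Date.now() - start; }, 0);

while (Date.now() - start < 50) { /* busy: claiming the CPU */ }

// Still in the same tick: the callback has not run yet.
const starved = firedAfter === null;
```

Only once the synchronous code releases the thread does the event loop finally get to run the callback.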
`Worker`s? No, because they communicate only through events. `setTimeout` and `setInterval`? Not really: it's a right hassle to sort out the scope, it's error prone and expensive. Besides, an event probably translates to an interrupt at some lower level, as does an interval or a timeout. What those do is essentially the same thing as events, only they're obtrusive, because they weren't initiated by the user, and they could block the UI.

The only "alternative" I can think of (and reading your question and comments again, it's what you're actually thinking of) is something along the following lines. The first is the reference-driven example, the second is a quick translation of the same basic concept to event-driven programming:
then either have markup like below, or bind the module methods to certain elements
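The original snippets appear to have been lost in formatting; a reconstruction of the reference-driven idea, with every name invented, might look like:

```javascript
// Reference-driven sketch: a module whose methods the markup calls directly,
// branching on state to decide what to do.
var module = {
  step: 0,
  next: function () {
    // a bunch of branches deciding what "next" means right now
    if (this.step === 0) { this.step = 1; }
    else if (this.step === 1) { this.step = 2; }
    else { this.step = 0; }
    return this.step;
  }
};

// The markup then reaches straight into the module, e.g.:
//   <button onclick="module.next()">Next</button>
```

Every control needs its own `onclick` attribute, and every handler grows another branch as states are added.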
As opposed to doing something like
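An event-driven translation of the same idea (a stub stands in for the DOM element so the sketch runs anywhere; all names are invented):

```javascript
// Bind one listener and let the event loop drive the state.
function makeWizard(button) {
  var step = 0;
  button.addEventListener("click", function () {
    step = (step + 1) % 3; // state advances in response to the event
  });
  return { getStep: function () { return step; } };
}

// Stub standing in for a DOM element:
var clickHandlers = [];
var fakeButton = {
  addEventListener: function (type, handler) { clickHandlers.push(handler); }
};

var wizard = makeWizard(fakeButton);
clickHandlers.forEach(function (h) { h(); }); // simulate one click
```

In a real page, `fakeButton` would simply be the element returned by `document.getElementById`, and the markup would carry no JS at all.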
Well, quite apart from the fact that event-driven code is lighter (only one listener is bound at a time) and less error prone (if the `module.next` listener is bound, the `switch` listener isn't), it also doesn't require JS to be mixed in with the markup. Given some more effort, it can also be written a lot shorter than the snippets I've posted here, and it's not littered with tons of `if`s and `else`s. It uses functions, which is what you do in a functional language.

Bottom line:
The advantages of event-driven JS are simple, IMO: JS was intended to be used on the web, client-side. It was meant to enhance the user experience by making static pages responsive, and by fetching content on-the-fly. All through events. Coupled with the functional paradigm, that makes for a rather expressive little language. To use it in any other way means using it in a way it wasn't designed to be used (much like Excel).
You don't use a hammer to fasten a screw if you have a screwdriver lying right next to it.
Likewise: you don't work around events, if there is an event readily available.
In the end, though, what's in a name? If you ever write a standalone program in C using the GTK+ lib, you'll use "events", too. C# is popular nowadays; guess what, it won't take many tutorials before you see the word "event" crop up. You want to react to some change in state or user input? That's what we happen to call an event. Lastly, as svidgen pointed out: JS engines already have an event system written out. Good programmers are lazy: they don't go and write their own systems if they can find an existing one that allows them to do what they need to do. Sure, sometimes you may find the existing systems lacking in some respect, and curse the people making them for not seeing that there might be a need for whatever you happen to need at that time, but that's life.
Personally, I'd say: if you don't like event-driven programming, you don't really like JS(?)