Repeat after me:
REST and asynchronous events are not alternatives. They're completely orthogonal.
You can have one, or the other, or both, or neither. They're entirely different tools for entirely different problem domains. In fact, general purpose request-response communication is absolutely capable of being asynchronous, event-driven, and fault tolerant.
As a trivial example, the AMQP protocol sends messages over a TCP connection. In TCP, every packet must be acknowledged by the receiver. If a sender of a packet doesn't receive an ACK for that packet, it keeps resending that packet until it's ACK'd or until the application layer "gives up" and abandons the connection. This is clearly a non-fault-tolerant request-response model because every "packet send request" must have an accompanying "packet acknowledge response", and failure to respond results in the entire connection failing. Yet AMQP, a standardized and widely adopted protocol for asynchronous fault tolerant messaging, is communicated over TCP! What gives?
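The resend-until-ACK behavior described above can be illustrated with a small language-agnostic sketch (the function names and the flaky-link failure pattern are invented for this example, not real TCP internals):

```python
def send_with_retries(packet, transmit, max_attempts=5):
    """Resend `packet` until the receiver ACKs it, or give up and treat
    the whole connection as failed -- the TCP-like behavior described
    above. `transmit` returns True when an ACK arrives."""
    for attempt in range(1, max_attempts + 1):
        if transmit(packet):
            return attempt  # ACK received on this attempt
    raise ConnectionError(
        "no ACK after %d attempts; abandoning connection" % max_attempts)

# A flaky link that drops the first two transmissions:
attempts_seen = []
def flaky_link(packet):
    attempts_seen.append(packet)
    return len(attempts_seen) >= 3  # ACK only on the third try

print(send_with_retries(b"segment-1", flaky_link))  # -> 3
```

Every send demands a response, and a missing response eventually fails the whole connection; yet this strict request-response loop is exactly what AMQP's "fire-and-forget" messaging rides on.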
The core concept at play here is that scalable loosely-coupled fault-tolerant messaging is defined by what messages you send, not how you send them. In other words, loose coupling is defined at the application layer.
Let's look at two parties communicating either directly with RESTful HTTP or indirectly with an AMQP message broker. Suppose Party A wishes to upload a JPEG image to Party B who will sharpen, compress, or otherwise enhance the image. Party A doesn't need the processed image immediately, but does require a reference to it for future use and retrieval. Here's one way that might go in REST:
- Party A sends an HTTP `POST` request message to Party B with `Content-Type: image/jpeg`
- Party B processes the image (for a long time if it's large) while Party A waits, possibly doing other things
- Party B sends an HTTP `201 Created` response message to Party A with a `Content-Location: <url>` header which links to the processed image
- Party A considers its work done since it now has a reference to the processed image
- Sometime in the future when Party A needs the processed image, it `GET`s it using the link from the earlier `Content-Location` header
The `201 Created` response code tells a client that not only was their request successful, it also created a new resource. In a 201 response, the `Content-Location` header is a link to the created resource. This is specified in RFC 7231 Sections 6.3.2 and 3.1.4.2.
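Here's a minimal in-process sketch of that synchronous exchange, with plain Python functions and dicts standing in for the two parties and for HTTP itself (all names and URLs are illustrative, not a real API):

```python
# Hypothetical in-process stand-ins for Party A and Party B.
processed_images = {}

def party_b_post(jpeg_bytes):
    """Party B: process the image synchronously, then answer 201 Created
    with a Content-Location header pointing at the result."""
    processed = jpeg_bytes + b"-sharpened"        # stand-in for real processing
    url = "/images/%d" % len(processed_images)
    processed_images[url] = processed
    return {"status": 201, "headers": {"Content-Location": url}}

def party_b_get(url):
    return {"status": 200, "body": processed_images[url]}

# Party A's side of the interaction:
response = party_b_post(b"raw-jpeg")
link = response["headers"]["Content-Location"]    # keep the reference...
later = party_b_get(link)                         # ...and GET it later
assert later["body"] == b"raw-jpeg-sharpened"
```

Note that Party A blocks (conceptually) inside `party_b_post` until processing is finished, which is exactly the problem discussed below.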
Now, let's see how this interaction works over a hypothetical RPC protocol on top of AMQP:
- Party A sends an AMQP message broker (call it Messenger) a message containing the image and instructions to route it to Party B for processing, then respond to Party A with an address of some sort for the image
- Party A waits, possibly doing other things
- Messenger sends Party A's original message to Party B
- Party B processes the message
- Party B sends Messenger a message containing an address for the processed image and instructions to route that message to Party A
- Messenger sends Party A the message from Party B containing the processed image address
- Party A considers its work done since it now has a reference to the processed image
- Sometime in the future when Party A needs the image, it retrieves the image using the address (possibly by sending messages to some other party)
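The broker-mediated flow above can be sketched with in-memory queues standing in for AMQP (the inbox layout and routing scheme are invented for illustration, not real AMQP semantics):

```python
from queue import Queue

# Hypothetical broker: one inbox queue per party; each message carries a
# reply-to address telling the broker where the answer should be routed.
inboxes = {"A": Queue(), "B": Queue()}

def broker_send(route_to, body, reply_to=None):
    """Messenger: deliver `body` to the named party's inbox."""
    inboxes[route_to].put({"body": body, "reply_to": reply_to})

# Party A asks the broker to route the image to B, to be answered to A:
broker_send("B", b"raw-jpeg", reply_to="A")

# Party B picks up the message, processes it, and replies via the broker:
msg = inboxes["B"].get()
address = "/images/1"              # address of the processed image
broker_send(msg["reply_to"], address)

# Party A eventually receives the address:
reply = inboxes["A"].get()
assert reply["body"] == "/images/1"
```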
Do you see the problem here? In both cases, Party A can't get an image address until after Party B processes the image. Yet Party A doesn't need the image right away and, by all rights, couldn't care less if processing is finished yet!
We can fix this pretty easily in the AMQP case by having Party B tell A that B accepted the image for processing, giving A an address for where the image will be after processing completes. Then Party B can send A a message sometime in the future indicating the image processing is finished. AMQP messaging to the rescue!
Except guess what: you can achieve the same thing with REST. In the AMQP example we changed a "here's the processed image" message to a "the image is processing, you can get it later" message. To do that in RESTful HTTP, we'll use the `202 Accepted` code and `Content-Location` again:
- Party A sends an HTTP `POST` message to Party B with `Content-Type: image/jpeg`
- Party B immediately sends back a `202 Accepted` response which contains some sort of "asynchronous operation" content describing whether processing is finished and where the image will be available once it's done. Also included is a `Content-Location: <link>` header which, in a `202 Accepted` response, is a link to the resource represented by the response body. In this case, that means it's a link to our asynchronous operation!
- Party A considers its work done since it now has a reference that will eventually lead it to the processed image
- Sometime in the future when Party A needs the processed image, it first `GET`s the async operation resource linked to in the `Content-Location` header to determine if processing is finished. If so, Party A then uses the link in the async operation itself to `GET` the processed image.
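A minimal sketch of this 202-plus-polling flow, again with dicts standing in for HTTP resources (the URLs and helper names are invented for illustration):

```python
operations = {}   # async-operation resources, keyed by URL
images = {}       # processed images, keyed by URL

def party_b_post(jpeg_bytes):
    """Party B: accept immediately with 202 Accepted and a
    Content-Location link to an async-operation resource."""
    op_url, image_url = "/operations/1", "/images/1"
    operations[op_url] = {"done": False, "location": image_url}
    # (processing would continue in the background; simulated below)
    return {"status": 202, "headers": {"Content-Location": op_url}}

def finish_processing(jpeg_bytes):
    """Stand-in for Party B's background work completing."""
    images["/images/1"] = jpeg_bytes + b"-sharpened"
    operations["/operations/1"]["done"] = True

def party_b_get(url):
    return operations.get(url) or {"body": images[url]}

resp = party_b_post(b"raw-jpeg")               # A gets a 202 right away
op_link = resp["headers"]["Content-Location"]

finish_processing(b"raw-jpeg")                 # background work completes

op = party_b_get(op_link)                      # A polls the operation first...
if op["done"]:
    image = party_b_get(op["location"])        # ...then GETs the image itself
assert image["body"] == b"raw-jpeg-sharpened"
```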
The only difference here is that in the AMQP model, Party B tells Party A when the image processing is done, while in the REST model, Party A checks whether processing is done just before it actually needs the image. These approaches are equivalently scalable: as the system gets larger, the number of messages sent in both the async AMQP and the async REST strategies increases with equivalent asymptotic complexity. The only difference is that the client sends an extra message instead of the server.
But the REST approach has a few more tricks up its sleeve: dynamic discovery and protocol negotiation. Consider how both the sync and async REST interactions started. Party A sent the exact same request to Party B, with the only difference being the particular kind of success message that Party B responded with. What if Party A wanted to choose whether image processing was synchronous or asynchronous? What if Party A doesn't know if Party B is even capable of async processing?
Well, HTTP actually has a standardized protocol for this already! It's called HTTP Preferences, specifically the `respond-async` preference of RFC 7240 Section 4.1. If Party A desires an asynchronous response, it includes a `Prefer: respond-async` header with its initial POST request. If Party B decides to honor this request, it sends back a `202 Accepted` response that includes a `Preference-Applied: respond-async` header. Otherwise, Party B simply ignores the `Prefer` header and sends back `201 Created` as it normally would.
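A sketch of that negotiation, assuming a hypothetical `party_b` handler (the `supports_async` flag is an invented stand-in for whether the server implements async processing at all):

```python
def party_b(request, supports_async=True):
    """Honor Prefer: respond-async when we can; otherwise fall back to
    the ordinary synchronous 201 Created response (per RFC 7240,
    ignoring a preference is always allowed)."""
    wants_async = request["headers"].get("Prefer") == "respond-async"
    if wants_async and supports_async:
        return {"status": 202,
                "headers": {"Preference-Applied": "respond-async",
                            "Content-Location": "/operations/1"}}
    return {"status": 201,
            "headers": {"Content-Location": "/images/1"}}

assert party_b({"headers": {"Prefer": "respond-async"}})["status"] == 202
assert party_b({"headers": {}})["status"] == 201
# A server that can't do async just ignores the preference:
assert party_b({"headers": {"Prefer": "respond-async"}},
               supports_async=False)["status"] == 201
```

The client code stays the same either way: it inspects the status code and `Preference-Applied` header and reacts accordingly.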
This allows Party A to negotiate with the server, dynamically adapting to whatever image processing implementation it happens to be talking to. Furthermore, the use of explicit links means Party A doesn't have to know about any parties other than B: no AMQP message broker, no mysterious Party C that knows how to actually turn the image address into image data, no second B-Async party if both synchronous and asynchronous requests need to be made, etc. It simply describes what it needs, what it would optionally like, and then reacts to status codes, response content, and links. Add in `Cache-Control` headers for explicit instructions on when to keep local copies of data, and now servers can negotiate with clients which resources clients may keep local (or even offline!) copies of. This is how you build loosely-coupled fault-tolerant microservices in REST.
There are quite a few approaches you can take depending on the problem you're trying to solve. I didn't quite understand the motivation behind having a separate Write Actor and a Calculator Actor. The important thing to keep in mind is that each Actor is turn-based. If you want to delegate the responsibility of saving the Actor's state to a third-party system, that's OK, but you don't really need to. You could simply write to a Stateful Service, which can then be queried directly (via a REST API, for example) by other systems if needed.
Please note that Actors already manage their state and keep three copies of it (configurable).
If you feel like you want to persist this state on a durable medium (say SQL Server, Azure Table Storage or Redis), you could backup the state at regular intervals.
It's kinda hard to tell the exact motivation behind your solution but I have an example that I implemented (using Akka) which I'm now porting over to Service Fabric because of the overhead it eliminates.
- I have a Shopping Cart Actor that can keep a user's cart in memory - great because I can infinitely scale this and its state is replicated.
In Akka, I was using Akka Persistence on MongoDB, but this comes with the performance overhead of restarting, restoring state, managing snapshots, etc. With Service Fabric, this overhead goes away. You simply don't have to worry about the state (within reason, as long as you don't corrupt it; for example, always use cancellation tokens).
This Shopping Cart Actor checks inventory when someone wants to buy a product (with Akka, I was checking in memory inventory persisted on mongo store).
Now in Service Fabric, my Shopping Cart Actor calls an Inventory Actor for the product being purchased to check if inventory is available.
This Inventory Actor internally calls a stateful service which maintains a reliable dictionary of inventory. I can easily update the inventory without threading issues.
At some point, I would like to get a Product's inventory summary (e.g. how many in stock, how many sold etc). To simplify this, I also update this Inventory in Redis (not sure if this is needed, but given that Redis is extremely fast, I think there might be benefits to keeping a copy of the inventory for quick reads). All writes happen using the Inventory Actor which in turn calls the Stateful Service.
The biggest benefit of having an inventory actor is that it will always be turn based. I could've stored the Inventory directly in the Inventory Actor's state, but I chose against it because I may want to eventually open up the inventory for reads across different systems.
Stateful services allow you to do that. Turn-based inventory reads wouldn't have worked very well because you're pretty much waiting for your turn just to read the inventory summary, so delegating that to a Stateful Service (and Redis) helps.
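To see why turn-based actors make check-and-decrement safe (and why reads queue up behind writes), here is a generic mailbox-based actor sketch; this is a language-agnostic illustration of the concept, not Service Fabric's actual Actor API:

```python
from queue import Queue
from threading import Thread

class InventoryActor:
    """Turn-based actor: messages are processed one at a time from a
    mailbox, so check-and-decrement never races with another message."""
    def __init__(self, stock):
        self.stock = stock
        self.mailbox = Queue()
        Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            reply_to, qty = self.mailbox.get()   # one "turn" per message
            if self.stock >= qty:
                self.stock -= qty
                reply_to.put(True)               # reservation succeeded
            else:
                reply_to.put(False)              # not enough inventory

actor = InventoryActor(stock=5)
reply = Queue()
actor.mailbox.put((reply, 3))
assert reply.get(timeout=1) is True    # 3 units reserved
actor.mailbox.put((reply, 3))
assert reply.get(timeout=1) is False   # only 2 left
```

Every message, including a pure read, waits its turn in the mailbox, which is exactly why offloading read-heavy summaries to a Stateful Service or Redis pays off.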
Hopefully this gives you some direction. Using this Architecture in Akka I was able to scale significantly more than ever imagined. With Service Fabric, this will become even easier as I don't have to worry about Cluster Management, Actor location (simple partition keys), state management etc.
Hopefully this gives you some insights.
No, Service Fabric doesn't provide any messaging mechanism for communicating with reliable services. In practice, reliable services (stateful and stateless) are basically processes that sit doing nothing until you tell them to do something. You are responsible for creating the mechanism that allows a service to receive requests to do work (with remoting, REST, messaging, ...). Service Fabric only provides the basis for implementing this mechanism, which is the ICommunicationListener.
If you want asynchronous communication between services, you could use, for example, Azure Service Bus with a ready-made communication listener.
Now, using messaging doesn't fully solve your problem of having long-running processes, as starting a long-running process can be done synchronously as well (you could call your service with REST, for example, and the service could return immediately with an OK and start the long-running process so that the client is not synchronously waiting). The main problem with long-running processes is normally how to communicate back to the client that the process has completed. Note that with stateless services you cannot poll the service to get the status of the process, as the request could go to a different instance of the service (and it would go against the design rules of stateless services anyway).
If your scenario is two microservices talking to each other, you have several options for communicating back the results. If you are OK with introducing a circular dependency, you could simply do a REST call from A to B to start the process and a REST call from B to A to communicate the completion.
If you use messaging, you could create a mechanism that allows replies without introducing a circular dependency, for example by sending the reply queue as part of the original message (or in the message properties in ASB). This way your service B doesn't have to know anything about the caller.
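The reply-queue idea can be sketched with in-memory queues (real Service Bus queues would replace these, with the reply queue's name carried in a message property; the worker and payload here are invented for illustration):

```python
from queue import Queue

# Hypothetical queue that service B consumes from.
service_b_queue = Queue()

def service_b_worker():
    """Service B: do the long-running work, then reply on whatever queue
    the caller named -- B needs no knowledge of who the caller is."""
    msg = service_b_queue.get()
    result = msg["payload"].upper()        # stand-in for the real work
    msg["reply_queue"].put({"result": result})

# Service A sends the request along with its own reply queue:
reply_queue = Queue()
service_b_queue.put({"payload": "job-42", "reply_queue": reply_queue})

service_b_worker()                         # B processes one message
assert reply_queue.get()["result"] == "JOB-42"
```

Because the reply destination travels inside the message, B stays decoupled from A: any caller that names a reply queue can use B.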
I hope it makes sense.