Given that you can't control or access the REST server, you have to make a call for every table until the REST service exposes an API for this. But of course you can optimize the data flowing through the network:
- Create a list of tables that have updates more recent than your local copy (by querying the REST service).
- Sync only the tables from that list.
or,
Try to fetch all the updated/new data in one call (in step 1).
In my opinion the solutions above are not fundamentally different from your current one, but they do try to limit the amount of network traffic. You'd have to implement both and run load tests to see which fits best.
How do I deal with side effects in Event Sourcing?
Short version: the domain model doesn't perform side effects. It tracks them. Side effects are performed using a port that connects to the boundary; when the email is sent, you send the acknowledgement back to the domain model.
This means that the email is sent outside of the transaction that updates the
event stream.
Precisely where, outside, is a matter of taste.
So conceptually, you have a stream of events like
EmailPrepared(id:123)
EmailPrepared(id:456)
EmailPrepared(id:789)
EmailDelivered(id:456)
EmailDelivered(id:789)
And from this stream you can create a fold
{
deliveredMail : [ 456, 789 ],
undeliveredMail : [123]
}
The fold tells you which emails haven't been acknowledged, so you send them again:
undeliveredMail.each( mail -> {
    send(mail);
    dispatch( EmailDelivered.from(mail) );
});
Effectively, this is a two-phase commit: you are modifying SMTP in the real world, and then you are updating the model.
The pattern above gives you an at-least-once delivery model. If you want at-most-once, you can turn it around:
undeliveredMail.each( mail -> {
    commit( EmailDelivered.from(mail) );
    send(mail);
});
There's a transaction barrier between making EmailPrepared durable and actually sending the email. There's also a transaction barrier between sending the email and making EmailDelivered durable.
Udi Dahan's Reliable Messaging with Distributed Transactions may be a good starting point.
Best Answer
I wrote this stuff a long time ago for a C++ application and made a post on my blog. Here's a quick summary (since the original article is quite long and contains file attachments):
In SQLite you write callbacks in raw C++:
Header file (file.h)
It declares the two additional functions we'll need for the next step (our C++-side function and the SQLite stored procedure).
Implementation file (file.cpp)
This function will be called from our trigger in the DB.
And now a SQLite stored procedure to call our function; a SQLite trigger will invoke this.
We don't have to limit the function to just a simple print; this function could have (for example) moved data to another table.
Register the stored procedure to the DB
Create the SQLite trigger
We need the stored procedure since triggers are built on top of them.
The trigger is only temporary, so it won't stick around in the database after our whole program terminates. Our callback won't exist after that anyway, and the trigger would (given its nature) cause major problems if the database were used outside this application.