Deciding whether order is important is up to each microservice. In your case, it definitely is. Your example above is actually easy to solve: if the OrderService gets an UpdateProduct event for a product that doesn't exist, it can put the event back on the queue on the assumption that a Create event is coming soon. However, two UpdateProduct events arriving out of order could still cause problems, so the issue remains.
There are ways of ensuring order in the publisher and subscriber, but they are clumsy. If your Update events contain the entire new state of the product, you can add a timestamp and refuse to update your cache if the incoming timestamp is older than the persisted one. If your Update events only send the changed data, you can put an incrementing version number on your aggregate; if an incoming event's version is more than one greater than your existing version, requeue it on the assumption that the intervening event is coming soon. As you can see, both approaches are clumsy. You're much better off ensuring order in the queue.
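A minimal sketch of both guards (the `CachedProduct` shape and field names are illustrative, not from any particular framework):

```python
from dataclasses import dataclass

@dataclass
class CachedProduct:
    product_id: str
    name: str
    version: int        # incrementing aggregate version
    updated_at: float   # timestamp of the last applied event

def apply_full_state_event(cache: dict, event: dict) -> bool:
    """Full-state Update events: reject any event older than what we hold."""
    current = cache.get(event["product_id"])
    if current is not None and event["timestamp"] <= current.updated_at:
        return False  # stale event, ignore it
    cache[event["product_id"]] = CachedProduct(
        event["product_id"], event["name"], event["version"], event["timestamp"]
    )
    return True

def should_requeue_delta_event(current: CachedProduct, event: dict) -> bool:
    """Delta Update events: requeue if applying this event would
    skip over an intervening version we haven't seen yet."""
    return event["version"] > current.version + 1
```

The full-state variant is simpler because stale events can just be dropped; the delta variant has to stall (requeue) until the gap is filled.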
You may want to consider Azure Event Hubs instead of Azure Event Grid. Event Hubs guarantees order within a single partition, so partitioning your app wisely can give you the ordering guarantee you need. Event Hubs acts more like Kafka than a traditional queue in that it provides a queue plus persistence of events for a configurable number of days. This can be advantageous if a system goes down and needs to recover.
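The usual trick for "wisely partitioning" is to use the entity id as the partition key, so every event for a given product lands in the same partition and therefore arrives in order. A toy sketch of that mapping (with Event Hubs you would normally just pass the id as the partition key and let the service hash it for you):

```python
import hashlib

def partition_for(entity_id: str, partition_count: int) -> int:
    """Stable hash of the entity id -> partition index.
    All events for one product map to the same partition,
    so their relative order is preserved within it."""
    digest = hashlib.sha256(entity_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % partition_count
```

Note that ordering is only guaranteed per partition, so events for two different products may still interleave; that is usually fine, because ordering only matters per aggregate.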
There is nothing that explicitly forbids or argues against using stored procedures with microservices.
Disclaimer: I don't like stored procedures from a developer's POV, but that is not related to microservices in any way.
Stored procedures typically work on a monolithic database.
I think you're succumbing to a logical fallacy.
Stored procedures are on the decline nowadays. Most stored procedures still in use come from older codebases that have been kept around. Back then, monolithic databases were also much more prevalent than they are now that microservices have become popular.
Stored procs and monolithic databases both occur in old codebases, which is why you see them together more often. But that's not a causal link. You don't use stored procs because you have a monolithic database, and you don't have a monolithic database because you use stored procs.
most books on microservices recommend one database per microservice.
There is no technical reason why these smaller databases cannot have stored procedures.
As I mentioned, I don't like stored procs. I find them cumbersome and resistant to future maintenance. I do think that spreading sprocs over many small databases further exacerbates the issues that I already don't like. But that doesn't mean it can't be done.
again, most microservice architecture books state that they should be autonomous and loosely coupled. Using stored procedures written specifically for, say, Oracle tightly couples the microservice to that technology.
On the other side, the same argument can be made for whatever ORM your microservice uses. Not every ORM will support every database either. Coupling (specifically its tightness) is a relative concept. It's a matter of being as loose as you can reasonably be.
Sprocs do suffer from tight coupling in general regardless of microservices. I would advise against sprocs in general, but not particularly because you're using microservices. It's the same argument as before: I don't think sprocs are the way to go (in general), but that might just be my bias, and it's not related to microservices.
most MSA books (that I have read) recommend that microservices should be business-oriented (designed using DDD). By moving business logic into stored procedures in the database, this is no longer the case.
This has always been my main gripe about sprocs: business logic in the database. Even when not the intention, it tends to somehow always end up that way.
But again, that gripe exists regardless of whether you use microservices or not. The only reason it looks like a bigger issue is because microservices push you to modernize your entire architecture, and sprocs are not that favored anymore in modern architectures.
Best Answer
First off, no, your microservice shouldn't keep a copy of another microservice's data. Each microservice should own only its own data and call out to other APIs when required, although designing your system to avoid lots of those calls is key.
However, this need to replay past or missed events to catch up is a common problem in event-driven systems.
Solution 1.
Add a "get past events" API in addition to the push messages. This allows catch-up, checking for missed messages, and other scenarios. It's not really an anti-pattern unless you are forced to use it so much that you are basically admitting the push messages can't be trusted to work.
I think it's fairly common to add such an interface and have some sort of check/sync job which audits your overall system on a schedule. Say, for example, you have jobs which haven't moved on within the expected SLA; that might be because a message was missed or errored, so you might want to poll for missed messages on those delayed jobs.
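A catch-up call against such an API could look like the sketch below. The API function and the event shape (a `sequence` field) are hypothetical; the key idea is that the consumer tracks the sequence number of the last event it processed and asks for anything newer:

```python
def catch_up(api_get_events, last_seen_sequence: int, apply_event):
    """Poll the 'get past events' API for anything newer than the
    last event we processed, and apply missed events in order."""
    missed = api_get_events(after_sequence=last_seen_sequence)
    for event in sorted(missed, key=lambda e: e["sequence"]):
        apply_event(event)
        last_seen_sequence = event["sequence"]
    return last_seen_sequence  # persist this for the next run
```

The scheduled check/sync job would call this for any aggregate that appears stuck past its SLA.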
Solution 2.
Change from a queue-based system to a streaming platform like Kafka. A streaming platform will support replaying events from a point in time, giving you a method for catch-up and for spotting missed messages.
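What makes replay possible is that the broker retains events in an append-only log and each consumer tracks its own read position, so a recovering service simply re-reads from where it left off. A toy in-memory illustration of that model (real Kafka consumers do this by seeking to a committed offset or timestamp):

```python
class EventLog:
    """Toy stand-in for a Kafka partition: an append-only log
    that consumers can re-read from any offset."""

    def __init__(self):
        self._events = []

    def append(self, event):
        self._events.append(event)
        return len(self._events) - 1  # offset of the new event

    def replay_from(self, offset: int):
        """Yield every event from `offset` onward - this is what
        lets a recovering consumer catch up on missed messages."""
        yield from self._events[offset:]
```

Contrast this with a traditional queue, where a delivered (and acknowledged) message is gone and cannot be re-read later.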
Solution 3.
Add a separate "replicate existing data" process that you can apply manually. This can be useful where you have a specific one-off process that needs the full data set, say deploying a new tenant. You might need a large amount of base data before subscribing to the push messages: too much for an API, but doable with an export/import flat file on a memory stick.