May Domain Event handlers lead to new events?

domain-driven-design

I want to understand the consequences of allowing handlers to be a direct or indirect source of new domain events.

In all the examples I see in the literature, Domain Events are published by Aggregate Roots. In practice, events may simply be returned and then published later, before and/or after the transaction is committed.
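To make that concrete, here is a minimal sketch (all names are illustrative) of an Aggregate Root that records events instead of publishing them itself, so the application layer can drain and publish them around the commit:

```csharp
using System.Collections.Generic;
using System.Linq;

public interface IDomainEvent { }

public abstract class AggregateRoot
{
    private readonly List<IDomainEvent> _events = new();

    // Entities call Raise() instead of publishing directly.
    protected void Raise(IDomainEvent @event) => _events.Add(@event);

    // The application layer drains the buffer and publishes the
    // events before and/or after committing the transaction.
    public IReadOnlyList<IDomainEvent> DequeueEvents()
    {
        var events = _events.ToList();
        _events.Clear();
        return events;
    }
}
```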

Let's consider the flow:

AR returns event -> event is published -> handler modifies another AR that belongs to the same Bounded Context -> modified AR produces another event -> event is published/persisted -> ... -> commit (all or nothing).

Event dispatching and processing continues until no new events occur. (It somewhat reminds me of database trigger behaviour; and triggers are hated for their cascades of state changes. But to be honest, I don't find eventual consistency, with all its event and message flows, that much easier to reason about.)
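A naive version of that dispatch loop might look like this (IEventPublisher is an assumed abstraction; the AggregateRoot sketch from above is reused):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public interface IEventPublisher
{
    Task Publish(IDomainEvent @event); // handlers run synchronously here
}

public static class ExhaustiveDispatcher
{
    // Keeps publishing until the tracked aggregates stop producing
    // events; the caller then commits once: all or nothing.
    public static async Task DispatchUntilQuiescent(
        IReadOnlyCollection<AggregateRoot> trackedAggregates,
        IEventPublisher publisher)
    {
        bool anyPublished;
        do
        {
            anyPublished = false;
            foreach (var aggregate in trackedAggregates)
            {
                foreach (var @event in aggregate.DequeueEvents())
                {
                    await publisher.Publish(@event); // may modify other ARs
                    anyPublished = true;
                }
            }
        } while (anyPublished); // handlers may have raised new events
    }
}
```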

Ok, you may say:
"Changing multiple ARs in the same transaction is an indication of broken AR design, as the AR's main task is to be a transaction boundary."

I may agree, so let me suggest another scenario:

Imagine two requirements:

  • A client's address must be unique.
  • (A recently introduced requirement.) Some addresses (not only clients') must be automatically substituted with another address.

We have multiple ARs and thus multiple workflows where an address is set. Address is modelled as a value object; it is validated and created by `Address IAddressService.New(string address)`.
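Roughly, the current shape is this (the record and interface bodies here are simplified):

```csharp
public sealed record Address
{
    public string Value { get; }
    internal Address(string value) => Value = value; // created via the service only
}

public interface IAddressService
{
    // Validates the raw string and constructs the value object,
    // i.e. the `Address IAddressService.New(string address)` above.
    Address New(string address);
}
```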

The logic of substitution depends partially on the AR's state and partially on some external state.

Option 1.
Create a set of address services, each working with a particular AR type and returning the coerced address. Unfortunately, some ARs require an Address to be provided as part of their creation, which means the service has to work with 'raw' data. This would be a viable solution, except that we need to create an event about the address substitution that references the AR, and that is impossible if the AR's Id is generated by the store.
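A sketch of what such a per-AR service could look like (all names here are hypothetical):

```csharp
// One coercion service per AR type; because some ARs need an Address
// during their creation, it accepts raw data rather than the AR itself.
public interface IClientAddressService
{
    // Validates the raw address and applies the substitution rules,
    // using whatever client-specific state they depend on.
    Address CoerceForNewClient(string rawAddress, string clientRegion);
}
```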

Option 2.
Adopt eventual consistency: save the AR as is, run the address substitution after the commit, and perhaps fail if the unique constraint is broken.
Personally, I have no idea how to notify a downstream Bounded Context about the failure when that BC belongs to a third party – they just make a call to an HTTP endpoint and expect either success or failure.

Option 3.
Let the handler change the address in the AR. The whole flow is:

AR is created -> the event is handled by the handler that checks the address for uniqueness -> the same event is handled by a new handler, which changes the AR's address and emits a new AddressSubstituted event about the performed substitution -> the AR emits a regular AddressChanged event -> the uniqueness check runs once again, but no substitution is needed this time -> no new events are emitted -> commit -> the Id is assigned to the AR -> publish/persist the AddressSubstituted event (this event holds a reference to the AR, so by the time it is published/persisted the Id already has a valid value).

Note that all the critical parts run inside the same transaction. The handler itself produces one event, AddressSubstituted, and indirectly leads to another, AddressChanged.
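A sketch of the two handlers in this flow (the handler abstraction, the event shape, and the lookup/policy interfaces are all assumptions, as are Client, ChangeAddress, and RecordSubstitution):

```csharp
using System;
using System.Threading.Tasks;

public interface IEventHandler<in TEvent> { Task Handle(TEvent @event); }

public interface IAddressIndex { bool IsTaken(Address address); }
public interface ISubstitutionPolicy { bool TrySubstitute(Address current, out Address replacement); }

// Carries a reference to the AR, as described above.
public sealed record AddressChanged(Client Aggregate, Address NewAddress);

// First handler: rejects duplicates, which aborts the whole transaction.
public sealed class AddressUniquenessHandler : IEventHandler<AddressChanged>
{
    private readonly IAddressIndex _index;
    public AddressUniquenessHandler(IAddressIndex index) => _index = index;

    public Task Handle(AddressChanged @event)
    {
        if (_index.IsTaken(@event.NewAddress))
            throw new InvalidOperationException("Address is not unique.");
        return Task.CompletedTask;
    }
}

// Second handler: performs the substitution. Changing the AR raises a
// fresh AddressChanged; the handler records the intent as AddressSubstituted.
public sealed class AddressSubstitutionHandler : IEventHandler<AddressChanged>
{
    private readonly ISubstitutionPolicy _policy;
    public AddressSubstitutionHandler(ISubstitutionPolicy policy) => _policy = policy;

    public Task Handle(AddressChanged @event)
    {
        if (_policy.TrySubstitute(@event.NewAddress, out var replacement))
        {
            @event.Aggregate.ChangeAddress(replacement);      // raises AddressChanged
            @event.Aggregate.RecordSubstitution(replacement); // raises AddressSubstituted
        }
        return Task.CompletedTask;
    }
}
```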

AddressSubstituted is published outside the transaction; however, it is still possible to save everything within the same transaction. EF Core is quite a clever beast: with configured relationships it will figure out the order of persistence, saving the AR first and then using its Id to save the event. EF can also generate a Guid automatically when the AR is added to the DbContext (but still after the AR is created).
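For example, a possible EF Core configuration for both behaviours (the entity shapes and names are made up for illustration):

```csharp
using System;
using Microsoft.EntityFrameworkCore;

public class Client
{
    public Guid Id { get; set; }
}

public class StoredEvent
{
    public int Id { get; set; }
    public Guid AggregateId { get; set; } // reference to the AR
}

public class AppDbContext : DbContext
{
    public DbSet<Client> Clients => Set<Client>();
    public DbSet<StoredEvent> StoredEvents => Set<StoredEvent>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // EF generates the Guid when the AR is Added to the context
        // (still after the AR object itself has been created).
        modelBuilder.Entity<Client>()
            .Property(c => c.Id)
            .ValueGeneratedOnAdd();

        // With the relationship configured, EF figures out the insert
        // order: the Client row first, then the event row using its Id.
        modelBuilder.Entity<StoredEvent>()
            .HasOne<Client>()
            .WithMany()
            .HasForeignKey(e => e.AggregateId);
    }
}
```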

It's worth mentioning that the order in which event handlers execute may or may not be important, but that is a separate subject for discussion.

Pros of this solution:
– Old code is almost untouched; we just develop new handlers.
– Atomicity: the caller is immediately notified of the operation's status.
– The intent of the change is captured.

Cons:
– Extremely hard to reason about the flow (as with any event-driven solution, I assume).
– Event generation may never end because of some weird non-breaking loop created by a handler (a simple guard is sketched below).
– <your items here>
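One cheap mitigation for the runaway-loop con is to cap the number of dispatch rounds and fail loudly instead of looping forever (the cap value is arbitrary, and the dispatch delegate is an assumed hook):

```csharp
using System;

public static class DispatchGuard
{
    private const int MaxDispatchRounds = 10; // arbitrary safety cap

    // dispatchPendingEvents is assumed to return true while any
    // handler is still producing new events.
    public static void Run(Func<bool> dispatchPendingEvents)
    {
        var round = 0;
        while (dispatchPendingEvents())
        {
            if (++round >= MaxDispatchRounds)
                throw new InvalidOperationException(
                    "Event dispatching did not converge; possible handler loop.");
        }
    }
}
```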

I find this approach legitimate, but I may be missing something very important here. As I have never encountered this – let's call it exhaustive event dispatching – before, I am in doubt.

Best Answer

DDD is a design methodology and has nothing to say about transaction management outside the domain. In DDD terms, the "transactional boundary" means that an Entity is the sole, authoritative, and unambiguous representation of its state – an Entity "owns" its data. It really has nothing to do with database transactions.

The reason many argue for only changing one Aggregate per database transaction has to do with scaling. Of course, like any other design decision, this comes with trade-offs. Modifying multiple Aggregates in a single database transaction means you don't have to deal with eventual consistency at the cost of not being able to shard your system. Whether or not that benefit is worth the cost is a function of your system's requirements, and should be considered carefully by your stakeholders. This does not have to be an "all or nothing" decision.

As far as Domain Events are concerned, again, they represent a trade-off. One of the main objectives of DDD is to "make the implicit explicit". Domain Events work against this idea by introducing implicit coupling. This can add quite a bit of complexity as well as reduce the cohesiveness of a system. Put another way, I often see Domain Events introduced into a system to make up for poor design. That is, instead of taking the care to model the domain according to behavior and vectors of change, a complex web of Domain Events is rigged together ad hoc in order to meet business requirements. As you astutely point out, the entire purpose of modelling Aggregates is to bring together data that changes together. If this isn't happening, we can look to another principle of DDD to guide us: continuous refactoring.

Domain Events are best used to trigger orthogonal (async) processes, like sending an email when a CustomerRegistered event is raised – NOT simply keeping state in sync. In this way, they are not like database triggers. I wouldn't go so far as to say there is no situation where Domain Events should be used to keep state in sync, but it's important to understand the trade-offs and to always be on the lookout for an alternative approach.
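For instance, a handler along those lines might look like this (every type besides the event name is an assumption):

```csharp
using System.Threading.Tasks;

public sealed record CustomerRegistered(string Email);

public interface IEmailSender
{
    Task SendWelcome(string email);
}

// Reacts to the event with an orthogonal side effect (an email);
// no other aggregate's state is touched. Typically dispatched
// asynchronously, after the registration has committed.
public sealed class SendWelcomeEmailHandler
{
    private readonly IEmailSender _sender;
    public SendWelcomeEmailHandler(IEmailSender sender) => _sender = sender;

    public Task Handle(CustomerRegistered @event) =>
        _sender.SendWelcome(@event.Email);
}
```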

As for your example problem: it's a bit too vague and convoluted to address (I had to). It reads like a problem contrived backwards from a faulty premise. Of course, starting with a poor design, or not changing your design to accommodate "recently introduced requirements", will create problems down the road. You mention a number of Services/Aggregates in your explanation that only seem to create the problem you are outlining. Why design a system that has a problem? Start with the requirements, THEN model the system around them.

Systems are not designed to meet future requirements. They are designed to meet current requirements and, hopefully, are designed according to principles that make them easy to change, so that when new requirements crop up, they can be quickly integrated into the system. This is the entire purpose of DDD!
