Domain-Driven Design – Ensuring Transactional Consistency

domain-driven-design, domain-model

I am starting out with DDD and understand that aggregate roots are used to ensure transactional consistency. We should not modify multiple aggregates in one application service.

I would like to know, however, how to deal with the following situation.

I have an aggregate root called Product.

There is also an aggregate root called Group.

Both have Ids and can be edited independently.

Multiple Products can point to the same Group.

I have an application service that can change a product's group:

ProductService.ChangeProductGroup(string productId, string groupId)

  1. Check group exists
  2. Get product from repository
  3. Set its group
  4. Write the product back to repository
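The four steps above can be sketched as follows. This is a minimal Java illustration (the question's code is C#-flavored); the in-memory maps standing in for the repositories, and all class and field names, are assumptions made for the sketch.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory repositories stand in for real persistence.
class Product {
    final String id;
    String groupId;
    Product(String id, String groupId) { this.id = id; this.groupId = groupId; }
}

class ProductService {
    final Map<String, Product> products = new HashMap<>(); // productId -> Product
    final Map<String, String> groups = new HashMap<>();    // groupId -> name

    void changeProductGroup(String productId, String groupId) {
        // 1. Check the group exists
        if (!groups.containsKey(groupId))
            throw new IllegalArgumentException("No such group: " + groupId);
        // 2. Get the product from the repository
        Product p = products.get(productId);
        // 3. Set its group -- note the race window: the group may have been
        //    deleted by another user between step 1 and this point.
        p.groupId = groupId;
        // 4. Write the product back (a no-op for an in-memory map)
        products.put(productId, p);
    }
}
```

The comment at step 3 marks exactly the window the question is about.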

I also have an application service where the group can be deleted:

GroupService.DeleteGroup(string groupId)
1. Get products from the repository whose groupId is set to the provided groupId; ensure the count is 0, otherwise abort
2. Delete group from groups repository
3. Save changes
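The delete steps can be sketched the same way. Again a hedged Java illustration with hypothetical in-memory maps as repositories; returning `false` stands in for "abort".

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the delete-group steps; repositories are
// hypothetical in-memory maps, names are illustrative.
class GroupService {
    final Map<String, String> groups = new HashMap<>();        // groupId -> name
    final Map<String, String> productGroups = new HashMap<>(); // productId -> groupId

    boolean deleteGroup(String groupId) {
        // 1. Abort unless no product currently references this group
        if (productGroups.containsValue(groupId)) return false;
        // 2. Delete the group from the groups repository
        groups.remove(groupId);
        // 3. "Save changes" is a no-op for an in-memory map
        return true;
    }
}
```

Note that this check suffers from the mirror-image race: a product may be attached to the group just after step 1 passes.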

My question concerns the following scenario:

In ProductService.ChangeProductGroup, we check that the group exists (it does). Just after this check, a separate user deletes the group (via GroupService.DeleteGroup). In that case, don't we set a reference on the product to a group that has just been deleted?

Is this a flaw in my design, meaning I should use a different domain design (adding additional elements if necessary), or would I have to use transactions?

Best Answer

We have the same issue.

And I see no way to solve this problem other than with transactions, or with a consistency check in the database so that you get an exception in the worst case.

You can also use a pessimistic lock, blocking the aggregate root (or only parts of it) for other clients until the business transaction is completed, which is to some extent equivalent to a serializable transaction.
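A pessimistic lock along these lines can be sketched as follows: hold a per-group lock across the whole check-then-act sequence, so change-group and delete-group cannot interleave. All names here are assumptions for the sketch, not part of the question's code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// One lock per group serializes change-group and delete-group,
// closing the race window between "check exists" and "set group".
class LockedGroupStore {
    final Map<String, String> groups = new HashMap<>();        // groupId -> name
    final Map<String, String> productGroups = new HashMap<>(); // productId -> groupId
    final Map<String, ReentrantLock> locks = new HashMap<>();

    synchronized ReentrantLock lockFor(String groupId) {
        return locks.computeIfAbsent(groupId, id -> new ReentrantLock());
    }

    void changeProductGroup(String productId, String groupId) {
        ReentrantLock lock = lockFor(groupId);
        lock.lock(); // no one can delete the group while we hold this
        try {
            if (!groups.containsKey(groupId))
                throw new IllegalStateException("No such group: " + groupId);
            productGroups.put(productId, groupId);
        } finally {
            lock.unlock();
        }
    }

    boolean deleteGroup(String groupId) {
        ReentrantLock lock = lockFor(groupId);
        lock.lock(); // no one can attach a product while we hold this
        try {
            if (productGroups.containsValue(groupId)) return false;
            groups.remove(groupId);
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```

In a real system the lock would live in the database or a distributed lock manager rather than in process memory; the sketch only shows the shape of the serialization.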

Which way you go depends heavily on your system and business logic.
Concurrency is not an easy task at all.
Even if you can detect a problem, how will you resolve it? Just cancel the operation, or allow the user to 'merge' the changes?
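The "just cancel the operation" option is often implemented as an optimistic version check: each aggregate carries a version number, and a save is rejected if another writer bumped the version in between. A minimal sketch, with all names invented for the illustration (this is not Entity Framework's actual API):

```java
import java.util.HashMap;
import java.util.Map;

// Optimistic concurrency sketch: reads take a snapshot with a version;
// a save is rejected (cancelled) if the stored version has moved on.
class VersionedStore {
    static class Row {
        final String groupId;
        final long version;
        Row(String groupId, long version) { this.groupId = groupId; this.version = version; }
    }

    final Map<String, Row> products = new HashMap<>(); // productId -> Row

    Row read(String productId) {
        Row r = products.get(productId);
        return new Row(r.groupId, r.version); // snapshot copy
    }

    boolean save(String productId, Row snapshot, String newGroupId) {
        Row current = products.get(productId);
        if (current.version != snapshot.version)
            return false; // conflict detected: someone else changed it; cancel
        products.put(productId, new Row(newGroupId, current.version + 1));
        return true;
    }
}
```

A 'merge' strategy would replace the `return false` branch with logic that reconciles the two versions instead of cancelling.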

We use Entity Framework, and EF6 uses READ_COMMITTED_SNAPSHOT by default, so two consecutive reads from a repository can give us inconsistent data. We just keep that in mind for the future, when the business processes are more clearly outlined and we can make an informed decision. And yes, we still check consistency at the model level as you do. This at least allows us to test the business logic separately from the database.

I also suggest you think through repository consistency in case you have 'long' business transactions. It turned out to be quite tricky.
