I'd go with Option 3, with the following notes:
- Try to reduce the amount of domain logic your clients need to know to get the job done. Create services that expose data in ways that are meaningful to your clients, so that they can request collections of domain objects fulfilling certain criteria rather than doing that crunching themselves.
- Treat client-side validation as optional: you can never guarantee that future client implementations will be done properly. Therefore, always validate on the server side as if no validation had been done elsewhere. Of course, clients should still validate on their side too.
- Rather than mapping the WCF data back to full domain objects on the client side, consider mapping it to simpler ViewModel-type objects: slimmed-down versions of your domain objects containing only the properties the client needs. This makes client programming simpler.
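A minimal sketch of that kind of criteria-based service (the `OrderQueryService` and `findOverdue` names are my own illustration, not from the original design):

```typescript
// Hypothetical sketch: a query service that hides domain crunching
// from the client by exposing criteria-based lookups.
interface Order {
  id: string;
  dueDate: Date;
  paid: boolean;
}

class OrderQueryService {
  constructor(private readonly orders: Order[]) {}

  // Clients ask for "overdue orders" instead of fetching everything
  // and applying the business rule themselves.
  findOverdue(asOf: Date): Order[] {
    return this.orders.filter(o => !o.paid && o.dueDate < asOf);
  }
}
```

The business rule ("unpaid and past due") lives on the server; the client only knows the name of the question it wants answered.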
The problem you're still faced with is a lot of mapping. I guess that price is worth paying (and made easier with a tool such as AutoMapper), because removing the client's dependency on your domain model gives you breathing room to change your domain and tweak the mapping without breaking any client code.
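For illustration, here is a hand-rolled version of that kind of mapping (the shapes of `OrderDomain` and `OrderViewModel` are assumed; AutoMapper would replace the manual copying with convention-based configuration):

```typescript
// Full domain object as it exists on the server.
interface OrderDomain {
  id: string;
  customerId: string;
  lines: { sku: string; qty: number; unitPrice: number }[];
  internalAuditNotes: string; // not something the client should see
}

// Slimmed-down ViewModel with only client-relevant properties.
interface OrderViewModel {
  id: string;
  lineCount: number;
  total: number;
}

function toViewModel(order: OrderDomain): OrderViewModel {
  return {
    id: order.id,
    lineCount: order.lines.length,
    total: order.lines.reduce((sum, l) => sum + l.qty * l.unitPrice, 0),
  };
}
```

Note that the ViewModel can flatten and precompute (`total`) as well as omit (`internalAuditNotes`), which is exactly the decoupling that lets the domain change without breaking clients.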
This is a very broad question, but I will try to give you an answer.
> ...there is some ambiguity with regard to the use of domain models and application services
There is no ambiguity if you design your bounded contexts, your domain models, and the relationships between them well.
> However, what happens if there are multiple database hits that depend on business logic?
In DDD, all operations go through the Aggregate Root (AR). The application services load the ARs from persistence, send commands to them, then persist the ARs back. ARs don't need to hit the database at all; in fact, a well-designed AR does not even know that a database exists. All it touches/sees/smells is its internal state and the immutable arguments it receives in its command methods. If an AR needs something from the database, the application service passes it in as an argument.
ARs should be pure, side-effect-free objects/functions. One reason is that commands applied to them must be retry-able in case of concurrent modifications.
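A sketch of what that retry-ability buys you in the application service, assuming a repository whose `save` throws a `ConcurrencyError` on a version conflict (both names are illustrative):

```typescript
// Hypothetical application-service retry loop. Because the AR's command
// is pure (no side effects), it is safe to reload and replay it when a
// concurrent modification is detected at save time.
class ConcurrencyError extends Error {}

interface Repository<T> {
  load(id: string): T;
  save(ar: T): void; // assumed to throw ConcurrencyError on conflict
}

function execute<T>(repo: Repository<T>, id: string, command: (ar: T) => void): void {
  for (let attempt = 0; attempt < 3; attempt++) {
    const ar = repo.load(id);
    command(ar); // touches only in-memory state
    try {
      repo.save(ar);
      return;
    } catch (e) {
      if (!(e instanceof ConcurrencyError)) throw e;
      // conflict: loop, reload the fresh AR, and retry the command
    }
  }
  throw new Error("could not apply command after 3 attempts");
}
```

If the command sent emails or wrote files directly, replaying it like this would duplicate those side effects; purity is what makes the retry safe.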
As an example: ARs don't send emails. Instead, an AR returns a value object that holds the email data (from, to, subject and body), and the application service passes that value object to an infrastructure service that does the actual sending.
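That email example might look like this, with `UserAccount` standing in for the AR and `Mailer` for the infrastructure service (both names assumed for illustration):

```typescript
// Value object returned by the AR instead of the AR sending mail itself.
interface EmailMessage {
  from: string;
  to: string;
  subject: string;
  body: string;
}

// The AR stays pure: registering just returns the email to be sent.
class UserAccount {
  constructor(readonly email: string) {}

  register(): EmailMessage {
    return {
      from: "noreply@example.com",
      to: this.email,
      subject: "Welcome",
      body: "Thanks for registering.",
    };
  }
}

// Infrastructure service (assumed interface) that does the actual I/O.
interface Mailer {
  send(message: EmailMessage): void;
}

// The application service wires the two together.
function registerUser(account: UserAccount, mailer: Mailer): void {
  const message = account.register(); // pure, side-effect-free
  mailer.send(message);               // the side effect lives here
}
```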
> For example, I have a domain model "Order" that has a method "IsValid". To determine whether an order is valid, a database read must be performed.
You don't need an `IsValid` method, as an `Order` cannot get into an invalid state anyway: any modifications are done through its methods. If you are referring to the existence of an `Order`, then that kind of validation is done by the application service: if it does not find the `Order` in persistence, then it does not exist, as simple as that. Maybe you are referring to the `ShoppingCart` as being valid, not the `Order`. What then? Well, you could try to create an `Order` from the `ShoppingCart`; if you succeed, then the cart is ready to be ordered. As the `Order` is side-effect-free, no order will actually be created. That's just an example of how you might think in DDD.
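A sketch of that cart-to-order validity check, with the invariants invented purely for illustration:

```typescript
// Sketch: "is this cart valid?" becomes "can an Order be created from it?"
interface CartLine { sku: string; qty: number }

class ShoppingCart {
  constructor(readonly lines: CartLine[]) {}
}

class Order {
  private constructor(readonly lines: CartLine[]) {}

  // The factory enforces the invariants: it either returns an Order or throws.
  static fromCart(cart: ShoppingCart): Order {
    if (cart.lines.length === 0) throw new Error("cart is empty");
    if (cart.lines.some(l => l.qty <= 0)) throw new Error("invalid quantity");
    return new Order([...cart.lines]);
  }
}

// Validity check: attempt creation and discard the result. The attempt
// is side-effect-free, so merely trying persists nothing.
function isReadyToOrder(cart: ShoppingCart): boolean {
  try {
    Order.fromCart(cart);
    return true;
  } catch {
    return false;
  }
}
```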
> DDD seems like it provides a lot of nice value, but I'm worried about these kinds of issues leading to design "rot".
If you follow the DDD approach well, your design will never rot. Never.
As a footnote: in the beginning I was a little misled by the Layered Architecture. Don't make the same mistake. Take a look at some newer architectures like CQRS, which fits very well with DDD.
From what I gather, his point is that domain objects will not generally map 1-to-1 to the DTOs, in the sense that they will not have all the same properties. In fact, domain objects may have very few getters and setters; they may even have none at all. This is related to the "tell, don't ask" principle: if responsibilities are properly distributed across classes, domain objects have very little need to pull data from each other in order to achieve something; instead, they mostly call methods on each other. Now, while you can use AutoMapper-like libraries to copy data from DTOs to the domain objects in such a scenario, the author's point is that this requires a fairly complicated setup, which defeats the purpose of having an automapper.
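A tiny sketch of "tell, don't ask" (the `Account` class is my own example): rather than asking an object for its data and applying the rule outside it, the caller tells the object what to do and the rule stays inside:

```typescript
class Account {
  private balance: number;
  constructor(balance: number) { this.balance = balance; }

  // Asking style would be: if (account.getBalance() >= amount) { ... }
  // done by every caller. Telling style keeps the rule in one place:
  withdraw(amount: number): boolean {
    if (amount > this.balance) return false;
    this.balance -= amount;
    return true;
  }

  get remaining(): number { return this.balance; }
}
```

Notice the object exposes almost no raw state to map against, which is exactly why automappers struggle with well-encapsulated domain objects.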
If the design of your domain objects is influenced by the desire to support easy automapping, then you'll break encapsulation by introducing getters and setters that are not really needed. This can introduce problems with validation. For example, without automapping considerations, your domain objects may be designed so that it is not possible to invoke a method or a property and have the object end up in an invalid state when control returns to the caller. If you introduce getters, setters and a Validate() method, then it becomes possible for an object to be in an invalid state between a call to a setter and the call to Validate(). That may or may not be OK; like everything else, it's a trade-off.
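To make the trade-off concrete, here is a sketch contrasting the two styles (both class names invented for illustration):

```typescript
// Automapper-friendly style: public mutable state plus a separate
// validate(). Between construction and the last setter call, the
// object can sit in an observably invalid state.
class MutableOrder {
  quantity = 0;
  validate(): boolean {
    return this.quantity > 0;
  }
}

// Encapsulated style: the invariant is checked at the only entry
// point, so no caller can ever observe an invalid instance.
class EncapsulatedOrder {
  constructor(readonly quantity: number) {
    if (quantity <= 0) throw new Error("quantity must be positive");
  }
}
```

The first shape is trivial for a mapper to populate; the second guarantees validity but gives a mapper no setters to work with, which is precisely the tension described above.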