I would think if your domain objects on the command side are always in a valid state, you shouldn't need to worry that the queries return invalid results.
What would you do, anyway, if you discovered an invalidity on the read side? Tell the read repository to re-enter the values? ;-)
If the read side were to have any checking, I would think it would belong in some kind of unit or integration testing, because at that point it's no longer a data-entry error message, it's a bug.
One of the points of CQRS is prying those concerns apart and keeping each in one place. Otherwise it makes things harder, not easier (and really, CQRS can be as easy or easier; if it isn't, look for a better way to implement it or keep studying it).
Unfortunately, I am not sure what your first question really means, so I am unable to answer it, but I can help you with the rest.
2) We're currently performing validation on the request data before
passing it into our immutable command objects. However, there may be
some cases where we need to parse data from the DB/CLI prior to
initializing our commands.
As a result, I've considered placing our validation logic into the
command objects themselves. If validation fails then an exception might
be thrown. Does this seem like a reasonable approach or am I again
violating some design principles?
What you should ask yourself is why you are dealing with invalid objects in the first place, such that they require validation after they are created.
There are two situations why this could happen:
- you do not trust the variables which are used when an object is created
- you do not trust the objects themselves, i.e. you do not believe that what they do with their data is correct
Both of these cases are bad, but the second one is much worse.
The first case can and does happen when you use variables which the user has control over, i.e. they can input anything they want. This is something you really have no control over, so you need to check all options before creating an object.
The second case happens when, even though you model a class, you do not trust the class to do what it is supposed to do. Do you perhaps expose too much of its internal structure, giving the programmer too many possibilities for operating on the object, perhaps by having simple getters and setters instead of actual methods?
I said the second case is worse. The reason it is worse is that you do not trust your own domain. And if you cannot trust it, then what can you trust?
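To make the "validate on creation, then trust the object" idea concrete, here is a minimal TypeScript sketch (all names are invented for illustration, not from the original post): the command validates its arguments in its own constructor and throws on bad data, so any instance that exists is valid by construction.

```typescript
// Hypothetical command object: invalid instances cannot exist, because
// the constructor rejects untrusted input before the object is created.
class PostCommentCommand {
  readonly articleId: string;
  readonly text: string;

  constructor(articleId: string, text: string) {
    if (articleId.trim() === "") {
      throw new Error("articleId must not be empty");
    }
    if (text.length < 20 || text.length > 200) {
      throw new Error("comment text must be between 20 and 200 characters");
    }
    this.articleId = articleId;
    this.text = text;
  }
}
```

The rest of the code base never re-validates a `PostCommentCommand`: if you hold one, it is valid, which answers question 2 — throwing from the command's constructor is a reasonable approach, not a violation of design principles.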
3) This may sound foolish but what exactly is the difference between a
Domain Object and a DTO. I've heard the command objects themselves
being referred to as a DTO yet they appear to belong to the domain
layer (unless I'm mistaken).
A DTO is a very simple, dumb object which consists only of getters and setters; it glues small chunks of seemingly related information together into one larger structure, so that the number of data requests may be reduced.
Martin Fowler provides a simple example of a DTO object.
A domain model is an object containing your business logic. It is an object which should have no methods starting with the prefix set; it should follow the Tell, Don't Ask principle, etc. The domain is the core of your application; the domain objects make sure everything works well, and they throw domain exceptions on invalid operations to ensure they are always in a valid state.
The Tell, Don't ask principle provides encapsulation. It turns this:
if (personObject.GetSex() == "Male") { ... }
into this:
if (personObject.IsMale()) { ... }
The programmer using a Person class no longer cares how the Sex property is represented, whether it is a string, an enum or a boolean. The only thing they care about is the Person::IsMale : boolean method.
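A small TypeScript sketch of that encapsulation (camelCase instead of the PascalCase used in the snippets above; the enum is just one arbitrary internal representation, which callers never see and which could later become a boolean or string without breaking them):

```typescript
// Internal representation of sex; hidden from callers of Person.
enum Sex {
  Male,
  Female,
}

class Person {
  constructor(private readonly sex: Sex) {}

  // Tell, Don't Ask: callers ask a question about behavior-relevant
  // state instead of pulling out the raw representation.
  isMale(): boolean {
    return this.sex === Sex.Male;
  }
}
```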
Domain model summary: it is an object containing data directly tied to it (no persistence logic is allowed; the object deals only with its own data) and encapsulating operations which make sure that, once created, it will never find itself in an invalid state.
Best Answer
I think you need to separate two types of validation in this case; domain validation and application validation.
Application validation is what you have when you verify that the command property 'text' is between 20 and 200 characters; you validate this in the GUI and with a view-model validator that also executes at the server after a POST. The same goes for e-mail (by the way, I hope you realize that an address such as `32.d+"Hello World .42"@mindomän.local` is valid according to the RFC).
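A sketch of such a server-side view-model validator (TypeScript; the field names and the 20-200 character rule come from the example above, everything else is invented). Unlike a domain invariant, it collects human-readable error messages instead of throwing, because these are data-entry mistakes, not bugs:

```typescript
interface CommentViewModel {
  email: string;
  text: string;
}

// Runs at the server after a POST, mirroring the GUI-side checks.
function validateCommentViewModel(vm: CommentViewModel): string[] {
  const errors: string[] = [];
  if (vm.text.length < 20 || vm.text.length > 200) {
    errors.push("text must be between 20 and 200 characters");
  }
  // Deliberately permissive: the RFC allows far stranger addresses than
  // most "strict" regexes accept, so only check the bare shape here.
  if (!/^.+@.+$/.test(vm.email)) {
    errors.push("email must contain an @");
  }
  return errors;
}
```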
Then you have another kind of validation: checking that the article exists. You have to ask yourself why the article would not exist if a command was indeed sent from the GUI asking to attach a comment to it. Is your GUI eventually consistent, and do you have an aggregate root, the article, that can be physically deleted from the data store? In that case you just move the command to the error queue, because the command handler fails to load the aggregate root.
In the above case, you would have infrastructure that handles poison messages: it would, for example, retry the message 1-5 times and then move it to a poison queue where you could manually inspect the collection of messages and re-dispatch the relevant ones. This is a good thing to monitor.
So now we've discussed:
Application validation
Missing aggregate root + poison queues
What about commands that are out of sync with the domain? Perhaps you have a rule in your domain logic saying that after 5 comments on an article, only comments below 400 characters are allowed, but one guy was too late with his 5th comment and it became the 6th. The GUI didn't catch it because it was not consistent with the domain at the point when he sent his command. In this case you have a 'validation failure' as part of your domain logic, and you would return the corresponding failure event.
The event could be in the form of a message onto a message broker or your custom dispatcher. The web server, if the application is monolithic, could synchronously listen for both a success event and the failure event mentioned and display the appropriate view/partial.
Often you have a custom event that means failure for many types of commands, and it is this event that you subscribe to from the web server's perspective.
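The "5 comments, then max 400 characters" rule above could look roughly like this (TypeScript sketch with invented names): the aggregate enforces the invariant itself and, on violation, returns a failure event rather than throwing, so the web server can subscribe to it and render the appropriate view.

```typescript
type DomainEvent =
  | { kind: "CommentAdded"; text: string }
  | { kind: "CommentRejected"; reason: string };

class Article {
  private comments: string[] = [];

  // Domain validation: the invariant lives in the aggregate, and a
  // violation produces a failure event, not an application error.
  addComment(text: string): DomainEvent {
    if (this.comments.length >= 5 && text.length > 400) {
      return {
        kind: "CommentRejected",
        reason: "after 5 comments, only comments below 400 characters are allowed",
      };
    }
    this.comments.push(text);
    return { kind: "CommentAdded", text };
  }
}
```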
In the system that we are working on, we're doing request-response with commands/events over a MassTransit+RabbitMQ message bus+broker, and we have an event in this particular domain (modelling a workflow in part) named InvalidStateTransitionError. Most commands that try to move along an edge in the state graph may cause this event to happen. In our case, we're modelling the GUI after an eventually consistent paradigm, so we send the user to a 'command accepted' page and thereafter let the web server's views update passively through event subscriptions. It should be mentioned that we are also doing event sourcing in the aggregate roots (and will be doing it for sagas as well).
So you see, a lot of the validation you are talking about is actually application-type validation, not actual domain logic. There's no problem in having a simple domain model if your domain is simple but you are doing DDD. As you continue modelling your domain, however, you'll discover that it might not be as simple as it first seemed. In many cases the aggregate root/entity might just accept a method invocation caused by a command and change some of its state without performing any validation at all, especially if you trust your commands, as you would if you validate them in a web server that you control.
I can recommend watching the two presentations on DDD from the Norwegian Developer Conference 2011 and also Greg's presentation at Öredev 2010.
Cheers, Henke