I think you need to separate two types of validation in this case: domain validation and application validation.
Application validation is what you have when you verify that the command property 'text' is between 20 and 200 characters; you validate this in the GUI and with a view-model validator that also executes on the server after a POST. The same goes for e-mail (by the way, I hope you realize that an address such as `32.d+"Hello World .42"@mindomän.local` is valid according to the RFC).
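A minimal sketch of such a view-model validator, in Python for illustration (the function name, error messages, and the deliberately loose e-mail check are all my own assumptions, not anything from a specific framework):

```python
import re

# Hypothetical view-model validator sketch: the same rules can run
# client-side in the GUI and server-side after the POST.
def validate_comment_form(text, email):
    errors = []
    if not 20 <= len(text) <= 200:
        errors.append("text must be between 20 and 200 characters")
    # Deliberately loose: the full RFC 5322 grammar accepts far more than
    # most ad-hoc regexes do (quoted local parts, IDN domains, ...), so a
    # permissive check plus a confirmation mail is safer than a strict regex.
    if not re.match(r"^\S+@\S+\.\S+$", email):
        errors.append("email looks malformed")
    return errors
```

Running the same rules on both sides keeps the GUI responsive without trusting the client.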
Then you have another kind of validation: checking that the article exists. Ask yourself why the article might not exist when the GUI has sent a command to attach a comment to it. Was your GUI eventually consistent, with an aggregate root - the article - that can be physically deleted from the data store? In that case you simply move the command to the error queue, because the command handler fails to load the aggregate root.
In the above case, you would have infrastructure that handles poison messages - it would, for example, retry the message 1-5 times and then move it to a poison queue, where you could manually inspect the collection of messages and re-dispatch the relevant ones. That queue is a good thing to monitor.
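The retry-then-park behavior can be sketched like this (a toy in-process stand-in, not any real bus's API; the names and retry count are illustrative):

```python
# Hypothetical poison-message handling sketch: retry the handler a few
# times, then park the message on a poison queue so an operator can
# inspect it and re-dispatch the relevant ones.
MAX_RETRIES = 5

def dispatch(message, handler, poison_queue):
    for _attempt in range(MAX_RETRIES):
        try:
            handler(message)
            return True
        except Exception:
            continue  # e.g. the aggregate root could not be loaded
    poison_queue.append(message)  # worth monitoring in production
    return False
```

Real brokers (RabbitMQ dead-letter exchanges, MassTransit error queues) give you this out of the box; the sketch just shows the shape of the policy.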
So far we've discussed application validation and commands whose aggregate root is missing.
What about commands that are out of sync with the domain? Perhaps you have a rule in your domain logic saying that after 5 comments on an article, only comments below 400 characters are allowed - but one user was too late with the 5th comment and ended up being the 6th. The GUI didn't catch it because it was not consistent with the domain at the point when he sent his command. In this case the 'validation failure' is part of your domain logic, and you would return the corresponding failure event.
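That rule, expressed as an aggregate that answers with an event rather than throwing (a sketch under my own naming assumptions - `Article`, `CommentRejected`, and so on are illustrative):

```python
# Hypothetical aggregate sketch: after 5 comments, only comments below
# 400 characters are allowed. The rule violation surfaces as a failure
# event, which the caller can publish, rather than as an exception.
class Article:
    def __init__(self):
        self.comments = []

    def add_comment(self, text):
        if len(self.comments) >= 5 and len(text) >= 400:
            return {"event": "CommentRejected",
                    "reason": "only comments below 400 characters after 5 comments"}
        self.comments.append(text)
        return {"event": "CommentAdded", "count": len(self.comments)}
```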
The event could take the form of a message onto a message broker, or go through your custom dispatcher. If the application is monolithic, the web server could synchronously listen for both the success event and the failure event mentioned, and display the appropriate view/partial.
Often you have a custom event that means failure for many types of commands, and it is this event that you subscribe to from the web server's perspective.
In the system that we are working on, we're doing request-response with commands/events over a MassTransit + RabbitMQ message bus/broker, and in this particular domain (which in part models a workflow) we have an event named InvalidStateTransitionError. Most commands that try to move along an edge in the state graph may cause this event. In our case, we're modelling the GUI after an eventually consistent paradigm, so we send the user to a 'command accepted' page and thereafter let the web server's views update passively through event subscriptions. It should be mentioned that we are also doing event sourcing in the aggregate roots (and will be for sagas as well).
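The pattern - one failure event shared by every command that walks the state graph - can be shown with a tiny in-process stand-in (the real system uses MassTransit + RabbitMQ; the state names and edge set below are my own illustration):

```python
# In-process stand-in for the bus described above. The handler publishes
# either a success event or InvalidStateTransitionError, and the web tier
# picks the view based on which event arrives.
ALLOWED_EDGES = {("draft", "review"), ("review", "published")}

def handle_move_command(current_state, target_state, publish):
    if (current_state, target_state) in ALLOWED_EDGES:
        publish({"event": "StateTransitioned",
                 "from": current_state, "to": target_state})
    else:
        publish({"event": "InvalidStateTransitionError",
                 "from": current_state, "to": target_state})
```

Note that every command handler in the workflow can reuse the same failure event, which is what makes a single subscription on the web server sufficient.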
So you see, a lot of the validation you are talking about is actually application-level validation, not actual domain logic. There's no problem in having a simple domain model if your domain is simple, even though you are doing DDD. As you continue modelling your domain, however, you may discover that it is not as simple as it first appeared. In many cases the aggregate root/entity might just accept a method invocation caused by a command and change some of its state without performing any validation at all - especially if you trust your commands, as you would if you validate them in a web server that you control.
I can recommend watching the two presentations on DDD from the Norwegian Developer Conference 2011, and also Greg's presentation at Öredev 2010.
Cheers,
Henke
Broadly speaking, an event handler should notify the client if the client is (or might be) interested in the information.
However, the granularity of the message is important.
Remember that whenever you end up passing information over the network you move from the realm of sub-microsecond processing to multi-milliseconds of elapsed time.
For this reason, I'm not convinced the fine granularity of events that you described is appropriate.
Jim Webber has a fair bit to say on the subject, recommending that the granularity should be similar to what you'd use if the service were an actual person in a mid-to-large-size business. In your case, that means a single CustomerUpdated message instead of multiple smaller messages.
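To make the granularity point concrete, a sketch of the coarser message (the function and field names are illustrative, not from any particular framework):

```python
# Batch the changed fields into one CustomerUpdated message instead of
# publishing a separate fine-grained message per field - one network
# round-trip carries everything the subscriber needs.
def customer_updated(customer_id, **changed_fields):
    return {"type": "CustomerUpdated",
            "customer_id": customer_id,
            "changes": changed_fields}

msg = customer_updated(42, name="Ada", email="ada@example.org")
```

Two field changes, one message - versus two multi-millisecond round trips for a CustomerNameChanged plus a CustomerEmailChanged.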
Best Answer
There is another design principle: KISS - Keep It Simple...
A simple Repository can be an advantage, since it simplifies the code of your services, but going deeper is not necessarily needed.
You have to think about what business value an over-engineered query engine would actually bring.
Do you expect that you will need to pass queries between subsystems? If so, some kind of dispatcher can be helpful (though I would call it a router).
Do you expect that you will need to add new query parameters frequently? If so, query classes will certainly be needed.
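The two ideas together can be sketched like this (a hypothetical illustration - `CommentsByArticle`, `QueryRouter`, and the parameters are my own names, chosen to match the commenting example earlier in the thread):

```python
# Parameters live on a small query object, and a router/dispatcher maps
# the query type to a handler - worth the ceremony mainly when queries
# cross subsystem boundaries or new parameters appear often.
class CommentsByArticle:
    def __init__(self, article_id, min_length=0):
        self.article_id = article_id
        self.min_length = min_length  # new parameters are easy to add

class QueryRouter:
    def __init__(self):
        self._handlers = {}

    def register(self, query_type, handler):
        self._handlers[query_type] = handler

    def dispatch(self, query):
        return self._handlers[type(query)](query)
```

If neither question above gets a "yes", a plain repository method is simpler and serves you just as well.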