Does it make sense for each module to have its own version of an inventory item, designed to suit the needs of that module? In that case, the Financial module would have to perform its own lookup of the inventory items when handling the GoodsReceivedEvent. Where do I draw the line between modularity and the need to share information?
You will always have to accept trade-offs; there is no single right answer here. It has been common practice to keep the entities in a separate assembly, so many applications keep them in a shared library. But that means there are no (physical) business-logic boundaries. Performing your own lookups means a lot of redundant code, which I'd only implement if you have special requirements in both modules.

Of course, from an architectural point of view, your Financial module and your Inventory module have to share a dependency anyway; the Financial module most likely depends on the Inventory module. If requirements permit one module to depend on another, then I'd say it's fine to encapsulate the entities within the modules they belong to the most. You'll need to find a good balance between physical separation (shared dependencies) and maintenance: too many shared dependencies can turn into a versioning nightmare. Also, with an ORM you'll run into trouble if you want to map new relations to existing entities, or extend them, from modules that depend on the module containing the entity.
If you don't use an ORM, keep in mind that all your modules probably share the same database anyway, as is often the case. You could use that as common ground.
You could also keep the data plain (e.g. CSV or result sets) and use a central messaging service to notify registered modules about new data, along with some meta-information. That means no real dependencies, but it sacrifices type safety and contracts, and there can be trouble with malformed data. I guess that's how many big ERP systems do it.
So in short, you'll have to decide which kind of interface you want: more modularity means fewer contracts, and vice versa.
I'd probably use a shared library for my data objects and a central messaging service with a publish-subscribe pattern; that seems like the best solution to me.
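To make the publish-subscribe idea concrete, here is a minimal sketch of such a central messaging service. All names (MessageBus, the "GoodsReceived" topic, the payload keys) are illustrative assumptions, not part of any real framework; real systems would use a broker such as RabbitMQ instead of an in-process map.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical central messaging service: modules register interest in a
// topic and are notified when another module publishes plain data for it.
class MessageBus {
    private final Map<String, List<Consumer<Map<String, String>>>> subscribers = new HashMap<>();

    // A module (e.g. Financial) registers a handler for a topic.
    void subscribe(String topic, Consumer<Map<String, String>> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    // A module (e.g. Inventory) publishes plain key/value data plus metadata;
    // no shared entity types are needed, only an agreed-upon payload shape.
    void publish(String topic, Map<String, String> payload) {
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(payload));
    }
}
```

Note the trade-off mentioned above: because the payload is a plain string map, a typo in a key compiles fine and only fails at runtime, which is exactly the loss of type safety and contracts this approach accepts.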
I think you need to separate two types of validation in this case: domain validation and application validation.
Application validation is what you have when you verify that the command property 'text' is between 20 and 200 characters; you validate this in the GUI and with a view-model validator that also executes on the server after a POST. The same goes for e-mail (by the way, I hope you realize that an address such as `32.d+"Hello World .42"@mindomän.local` is valid according to the RFC).
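A server-side view-model validator for the length rule above could look like this minimal sketch. The class and method names are assumptions for illustration; in .NET you'd typically express the same rule with data annotations or a validation library.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical view-model validator duplicating the GUI rule on the server:
// the command property 'text' must be between 20 and 200 characters.
class CommentViewModelValidator {
    static List<String> validate(String text) {
        List<String> errors = new ArrayList<>();
        if (text == null || text.length() < 20 || text.length() > 200) {
            errors.add("text must be between 20 and 200 characters");
        }
        return errors;
    }
}
```

The point is that this check belongs to the application layer: it runs before a command ever reaches the domain, and rejecting input here is a normal response, not a domain failure.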
Then you have another kind of validation: checking that the article exists. Ask yourself why the article should not exist if a command attaching a comment to it is indeed sent from the GUI. Is your GUI eventually consistent, with an aggregate root, the article, that can be physically deleted from the data store? In that case you just move the command to the error queue, because the command handler fails to load the aggregate root.
In the above case, you would have infrastructure that handles poison messages: it would, for example, retry the message 1-5 times and then move it to a poison queue, where you could manually inspect the collection of messages and re-dispatch the relevant ones. It's a good thing to monitor.
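The retry-then-park behaviour can be sketched like this. This is a simplified in-process model, not MassTransit's actual fault-handling API; the class name and method signatures are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of poison-message infrastructure: retry a handler a fixed number
// of times, then park the message in a poison queue for manual inspection
// and possible re-dispatch.
class PoisonMessageHandler<T> {
    private final int maxRetries;
    private final List<T> poisonQueue = new ArrayList<>();

    PoisonMessageHandler(int maxRetries) {
        this.maxRetries = maxRetries;
    }

    // Returns true if the handler eventually succeeded, false if the
    // message was moved to the poison queue after exhausting retries.
    boolean dispatch(T message, Consumer<T> handler) {
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            try {
                handler.accept(message);
                return true;
            } catch (RuntimeException e) {
                // handler failed; fall through and retry
            }
        }
        poisonQueue.add(message); // retries exhausted: park for inspection
        return false;
    }

    List<T> poisonQueue() {
        return poisonQueue;
    }
}
```

A real broker would additionally add delays between retries and keep the failure reason alongside the parked message, which is what makes the poison queue worth monitoring.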
So far we've discussed application validation, commands whose aggregate root no longer exists, and poison-message handling.
What about commands that are out of sync with the domain? Perhaps you have a rule in your domain logic saying that after 5 comments on an article, only comments below 400 characters are allowed, but one user was too late with his 5th comment and ended up being the 6th. The GUI didn't catch it, because it was not consistent with the domain at the point when he sent his command. In this case you have a 'validation failure' as part of your domain logic, and you would return the corresponding failure event.
The event could take the form of a message on a message broker or on your custom dispatcher. The web server, if the application is monolithic, could synchronously listen for both the success event and the failure event mentioned, and display the appropriate view/partial.
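As a minimal sketch of that domain rule, here is a hypothetical Article aggregate that returns an event name instead of throwing. The class, method, and event names are assumptions for illustration, not taken from the original system.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical aggregate root enforcing the example rule: after 5 comments,
// only comments below 400 characters are accepted. Instead of throwing,
// the domain produces an event that the dispatcher can publish.
class Article {
    private final List<String> comments = new ArrayList<>();

    String attachComment(String text) {
        if (comments.size() >= 5 && text.length() >= 400) {
            // domain validation failure: emit the failure event
            return "CommentRejected";
        }
        comments.add(text);
        return "CommentAttached"; // success event
    }
}
```

Returning a failure event rather than throwing fits the messaging style described above: the web server subscribes to both outcomes and renders the matching view, and nothing about the command pipeline has to treat the rejection as exceptional.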
Often you have a custom event that signals failure for many types of commands, and it is this event that you subscribe to from the web server's perspective.
In the system that we are working on, we're doing request-response with commands/events over a MassTransit + RabbitMQ message bus/broker, and in this particular domain (modelling a workflow in part) we have an event named InvalidStateTransitionError. Most commands that try to move along an edge in the state graph may cause this event. In our case, we're modelling the GUI after an eventually consistent paradigm, so we send the user to a 'command accepted' page and thereafter let the web server's views update passively through event subscriptions. It should be mentioned that we are also doing event sourcing in the aggregate roots (and will be for sagas as well).
So you see, a lot of the validation you are talking about is actually application-type validation, not actual domain logic. There's no problem in having a simple domain model, if your domain is simple, even while doing DDD. As you continue modelling your domain, however, you'll discover that it might not be as simple as it first appeared. In many cases the aggregate root/entity might just accept a method invocation caused by a command and change some of its state without performing any validation at all, especially if you trust your commands, as you would if you validate them on a web server that you control.
I can recommend watching the two presentations on DDD from the Norwegian Developer Conference 2011 and also Greg's presentation at Öredev 2010.
Cheers,
Henke
Extensive use of "service" is indicative of bad OOP. However, the nature of development sometimes makes service-like classes unavoidable, such as in your case.
Your user entity should not be responsible for storing or retrieving itself. Storing itself does not violate OOP as such, but it does violate many good architectural practices, such as separation of concerns and the single responsibility principle. Allowing an object to store and retrieve itself will make your application very brittle and hard to maintain.
You will need a storage and retrieval service; however, simply labeling it a "service" is somewhat lazy, imo. OOP is about nouns that verb; let's make this service a little more OOP...
An object that is responsible for only CRUD is called a repository. It should not contain anything other than CRUD methods. If you find your repository having a method like GiveUserPermission, you are violating the repository pattern: that method does not belong there; it belongs either in the User class or in a mediator class between User and Permission. Each entity should have its own repository, and a repository should not be responsible for multiple entities. And remember, it is per entity, not per table.
It is irrelevant whether your repository gets its data from a database or an HTTP REST call; the rest of your application should be ignorant of this detail.
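The shape described above can be sketched as follows. The interface and class names are illustrative assumptions; the point is that the interface exposes only CRUD and hides the backend entirely.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal entity for the sketch.
class User {
    final long id;
    final String name;

    User(long id, String name) {
        this.id = id;
        this.name = name;
    }
}

// Per-entity repository: CRUD only, nothing like GiveUserPermission here.
// Callers cannot tell whether the backend is SQL, REST, or in-memory.
interface UserRepository {
    void add(User user);
    User findById(long id);
    void update(User user);
    void remove(long id);
}

// An in-memory implementation; a SQL- or REST-backed implementation would
// satisfy the same interface and the rest of the application would not
// notice the swap.
class InMemoryUserRepository implements UserRepository {
    private final Map<Long, User> store = new HashMap<>();

    public void add(User user) { store.put(user.id, user); }
    public User findById(long id) { return store.get(id); }
    public void update(User user) { store.put(user.id, user); }
    public void remove(long id) { store.remove(id); }
}
```

Code that depends only on UserRepository can be tested against the in-memory variant and deployed against a real data store, which is exactly the separation of concerns the answer argues for.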