I think we often out-think ourselves when we are trying to learn a new way of doing things. Instead of boiling the problem down to the smallest thing that can possibly work, we worry about things that aren't important right now. Onion architecture and DDD work together to give you a way to find that simplest way of doing things.
What's really your domain?
That's the question you need to be asking yourself. The concepts most central to the domain are:
- How domain objects interact with other domain objects
- How domain objects interact with primary services
Everything else is completely outside the domain. Whether you use a Repository pattern for persistence or something else is beside the point. There will be points where your domain object needs to interact with a service. That's OK.
- Stub your services using interfaces and domain terminology
And stop. You can have a fully unit-tested domain-driven design with just these two concepts.
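A minimal sketch of the idea, in Java (the same shape applies in C#). The names here (`ExchangeRates`, `Price`) are hypothetical, made up for illustration: the service is declared as an interface in domain terminology, and a test stubs it without touching any real implementation.

```java
import java.math.BigDecimal;

// Domain service, expressed in domain terminology. The real
// implementation (an HTTP rate feed, say) lives outside the domain.
interface ExchangeRates {
    BigDecimal rate(String from, String to);
}

// Domain object that collaborates with the service through the interface.
class Price {
    private final BigDecimal amount;
    private final String currency;

    Price(BigDecimal amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    Price in(String target, ExchangeRates rates) {
        return new Price(amount.multiply(rates.rate(currency, target)), target);
    }

    BigDecimal amount() { return amount; }
}

public class PriceTest {
    public static void main(String[] args) {
        // Stub the service: a fixed rate, no network, fully unit-testable domain.
        ExchangeRates fixed = (from, to) -> new BigDecimal("2");
        Price converted = new Price(new BigDecimal("10"), "USD").in("EUR", fixed);
        System.out.println(converted.amount()); // prints 20
    }
}
```

Nothing in the domain code above knows, or cares, where exchange rates actually come from.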
Services from the inside out
In an Onion architecture, the implementations of your services live in a layer outside of your domain model. That keeps the domain clean and separated from the concerns of your services.
One example would be using a SQL database for persistence. If you need to change that in the future, or add other types of databases (graph, search, etc) to facilitate your system, you can localize those changes behind the service you defined in the domain model.
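To make that concrete, here is a hedged sketch (hypothetical names, Java standing in for C#): the domain defines only the interface, and any number of implementations can live in the outer layers. Swapping the in-memory version below for a SQL-backed or graph-backed one changes nothing inside the domain.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Interface defined in the domain model, in domain terms.
interface CustomerRepository {
    Optional<String> nameOf(long customerId);
}

// Outer-layer implementation. A SQL-backed version would live out
// here too; swapping it in is invisible to the domain model.
class InMemoryCustomerRepository implements CustomerRepository {
    private final Map<Long, String> rows = new HashMap<>();

    void save(long id, String name) { rows.put(id, name); }

    public Optional<String> nameOf(long id) {
        return Optional.ofNullable(rows.get(id));
    }
}

public class RepositoryDemo {
    public static void main(String[] args) {
        InMemoryCustomerRepository repo = new InMemoryCustomerRepository();
        repo.save(42L, "Ada");
        // Domain code only ever sees the CustomerRepository interface.
        CustomerRepository customers = repo;
        System.out.println(customers.nameOf(42L).orElse("unknown")); // prints Ada
    }
}
```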
Common things that are not central to the domain:
- Authentication
- Persistence
- Rendering (e.g. REST services, a full web application, a desktop app, etc.)
- Internationalization
- Messaging
That is not a full and exhaustive list.
Your domain may have users, and those users have roles... but how the user is authenticated and those roles supplied are things that will change over time.
Modes of messaging may change over time, as well as specific content, but the types of messages and triggers that kick them off are related to the domain.
Feel free to apply the same concepts to implementing your services as you did for the core domain. The only difference is that the domain of a persistence service is different from that of your core application.
Project Structure
It really depends on how granular you want to go, but at the very least you should have one library as your core domain model. You can have service implementations in separate projects, or group some things together.
+ Solution
  + Project.Domain (or Core if you prefer)
  + Project.Domain.Test
  + Project.Persistence.Service
  + WebAPI
  + etc.
Don't be overly religious about your project organization. It only needs to be reasonably clear where to look for things. Your core domain library has the domain objects and interfaces for services as necessary. The only rule I'd recommend is to have a clear hierarchy of project dependencies. Project.Domain shouldn't depend on anything outside of your standard library for the language you are using.
GUIDs are, by definition, "Globally Unique IDentifiers". There's a similar but slightly different concept in Java called UUIDs, "Universally Unique IDentifiers". For all practical purposes, the two names are interchangeable.
GUIDs are central to how Microsoft envisioned database clustering to work, and if you need to incorporate data from sometimes connected sources, they really help prevent data collisions.
Some Pro-GUID Facts:
- GUIDs prevent key collisions
- GUIDs help with merging data between networks, machines, etc.
- SQL Server has support for semi-sequential GUIDs to help minimize index fragmentation (with some caveats)
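The merge benefit is easy to demonstrate with Java's built-in `java.util.UUID`. Here, two data sets are generated independently, as if on two disconnected machines, and merged without any coordination of key ranges:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

public class MergeDemo {
    public static void main(String[] args) {
        // Two independently generated key sets, as if produced on two
        // sometimes-connected machines with no shared sequence.
        Set<UUID> siteA = new HashSet<>();
        Set<UUID> siteB = new HashSet<>();
        for (int i = 0; i < 10_000; i++) {
            siteA.add(UUID.randomUUID());
            siteB.add(UUID.randomUUID());
        }

        // Merging the data requires no key remapping: with random
        // (version 4) UUIDs, collisions are vanishingly unlikely.
        Set<UUID> merged = new HashSet<>(siteA);
        merged.addAll(siteB);
        System.out.println(merged.size()); // prints 20000
    }
}
```

With auto-increment keys, the same merge would require renumbering one side or pre-assigning sequence ranges per site.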
Some Ugliness with GUIDs
- They are big, 16 bytes each
- They are out of order, so you can't sort on the ID and expect to get insertion order the way you can with auto-increment IDs
- They are more cumbersome to work with, particularly on small data sets (like look up tables)
- GUID generation is more capable in SQL Server than in the C# library (SQL Server can produce sequential GUIDs; C#'s Guid.NewGuid() is always random)
GUIDs will make your indexes bigger, so the disk space cost of indexing a column will be higher. Random GUIDs will fragment your indexes.
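One common mitigation is a "comb"-style GUID: put a timestamp in the high-order bits and randomness in the rest, so later keys sort later and index pages fill front-to-back instead of fragmenting. The sketch below (in Java, with a simplified layout that skips the version/variant bits a real UUID would carry) is an illustration of the idea, not a drop-in replacement for SQL Server's sequential GUID support:

```java
import java.security.SecureRandom;
import java.util.UUID;

// Comb-style UUID sketch: current millis in the high bits, random
// bits below. Keys generated later compare later, which keeps
// clustered-index inserts mostly append-only.
public class CombUuid {
    private static final SecureRandom RANDOM = new SecureRandom();

    static UUID next() {
        long hi = (System.currentTimeMillis() << 16) | (RANDOM.nextInt() & 0xFFFF);
        return new UUID(hi, RANDOM.nextLong());
    }

    public static void main(String[] args) throws InterruptedException {
        UUID a = next();
        Thread.sleep(2); // ensure a later timestamp for the demo
        UUID b = next();
        // Generated later => sorts later.
        System.out.println(a.compareTo(b) < 0); // prints true
    }
}
```

Within the same millisecond, ordering falls back to the random bits, so this is "semi-sequential" in the same spirit as the SQL Server feature mentioned above.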
If you know you aren't going to synchronize data from different networks, GUIDs can carry more overhead than they are worth.
If you have a need to ingest data from sometimes connected clients, they can be a lot more robust for preventing key collisions than relying on setting sequence ranges for those clients.
Short answers
What follows will make more sense if you are familiar with the different flavors of test doubles; Mocks Aren't Stubs is an accessible starting point.
Longer answers:
Using test doubles is a trade-off. There are risks associated with the fact that the system under test isn't talking to a real collaborator, and there are benefits. Part of our craft is understanding the trade-off we are making.
There are properties that we want our tests to have. But all of those properties are just window dressing unless the tests actually catch our mistakes.
Replacing real collaborators (which tend to be messy) with fake collaborators (which tend to be simple) increases the probability that we miss certain categories of mistakes; so we had better be sure when we do that that the benefits we gain offset the increased risks.
We derive almost no additional benefit from mocking a value. Well designed value objects are already well isolated, side effect free, and tend to express the semantics of a test better than a substitute would. They live entirely within the functional core of your application.
If you run that same math on entities, you will see that it doesn't make much sense to mock an entity either.
With domain services, however, the trade offs start to look really interesting. Domain services are the mechanism by which an encapsulated part of the domain model communicates with its collaborators; those collaborators might be other parts of the same model, or they may be further away.
When Evans described domain services in the blue book, he included among the motivating examples needing to access application and infrastructure services -- code that lives outside of the abstraction boundary of the domain model. Domain services are often proxies for communicating side effects, which may even cross process boundaries.
Domain services are often proxies for code across an architecturally significant boundary.
So if you've got a domain service that is an in-memory abstraction -- say, Orders needs access to an in-memory tax calculator service -- then a test double doesn't provide nearly as much marginal benefit as one that stands in for a domain service that needs to talk to a database.
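The distinction can be sketched as follows (hypothetical names, Java for illustration): the same `TaxCalculator` interface can hide either a pure in-memory implementation, which tests can use directly, or a database-backed one, which is the kind of collaborator worth replacing with a test double.

```java
import java.math.BigDecimal;

// The umbrella term "domain service" covers very different things.
interface TaxCalculator {
    BigDecimal taxOn(BigDecimal subtotal);
}

// Pure in-memory implementation: fast, deterministic, side-effect
// free. Using the real thing in a test costs almost nothing, so a
// test double buys you little here.
class FlatRateTaxCalculator implements TaxCalculator {
    private final BigDecimal rate;

    FlatRateTaxCalculator(BigDecimal rate) { this.rate = rate; }

    public BigDecimal taxOn(BigDecimal subtotal) {
        return subtotal.multiply(rate);
    }
}

// A database-backed TaxCalculator would implement the same interface;
// *that* implementation crosses an architecturally significant
// boundary, and is where a test double earns its keep.

public class TaxTest {
    public static void main(String[] args) {
        TaxCalculator taxes = new FlatRateTaxCalculator(new BigDecimal("0.08"));
        System.out.println(taxes.taxOn(new BigDecimal("100"))); // prints 8.00
    }
}
```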
Put another way, there are a lot of different things under the umbrella term "domain service", and they have different trade offs.
You are much more likely to use a test double when the actual behavior of the service is hard to predict, or hard to constrain.
That suggests the marginal advantage of introducing a mock here is small; so I would recommend using the real implementation in this circumstance.