I'd go with Option 3, with the following notes:
- Try to reduce the amount of domain logic your clients need to know about to get the job done. Create services that expose data in ways that are meaningful to your clients, so that they can request collections of domain objects fulfilling certain criteria rather than doing that crunching themselves.
- Validation should be considered optional on the client side, as you can never guarantee that future client implementations will be done properly. Therefore, always validate on the server side as if it hadn't been done elsewhere. Of course, clients should validate on the client side too.
- Rather than mapping the WCF data back to full domain objects on the client side, consider mapping it to simpler ViewModel-type objects: slimmed-down versions of your full domain objects containing only the properties appropriate to the client. This makes client programming simpler.
The problem you're still faced with is lots of mapping. I'd say this price is worth paying (and it's made easier with a tool such as AutoMapper), because removing the client's dependency on your domain model gives you breathing room to change your domain and tweak the mapping without breaking any client code.
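The ViewModel mapping itself is mechanical, which is why a tool like AutoMapper can automate it. A minimal sketch of the idea (the `Customer` names here are hypothetical, not from the original question):

```python
from dataclasses import dataclass

# Hypothetical domain object: richer than what the client needs.
@dataclass
class Customer:
    id: int
    name: str
    email: str
    credit_limit: float       # internal detail the client never sees
    internal_risk_score: int  # internal detail the client never sees

# Slimmed-down ViewModel: only client-appropriate properties.
@dataclass
class CustomerViewModel:
    id: int
    name: str
    email: str

def to_view_model(customer: Customer) -> CustomerViewModel:
    """One-way mapping at the service boundary (the part AutoMapper automates)."""
    return CustomerViewModel(id=customer.id, name=customer.name, email=customer.email)

vm = to_view_model(Customer(1, "Ada", "ada@example.com", 5000.0, 2))
print(vm)  # CustomerViewModel(id=1, name='Ada', email='ada@example.com')
```

Because the ViewModel is a separate type, renaming or restructuring `Customer` internals only forces a change in `to_view_model`, never in client code.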
Overview
Let's step back a little, and look at the original Onion Architecture proposed by Jeffrey Palermo.
The outside skin is the interface to the external world: the user interface, the test suite (the idea is to promote TDD-style systematic tests for everything inside), and the infrastructure.
Then you dig deeper towards the core to find the application services, the domain services, and the domain objects (at the core of the core).
Infrastructure
Infrastructure has a different meaning here than elsewhere. It is in fact the interface to the outside world, and especially to externally sourced services such as database management systems, external web services, local or cloud storage, etc.
The term "adapter" comes directly from Cockburn's hexagonal architecture: the inner side of the adapter remains unchanged, while the external part can vary. Hence, one day you work with an Oracle adapter connecting your application services to an Oracle DBMS; the next day, you can develop a MongoDB adapter to switch to Mongo as the new persistence layer.
So the platform- and OS-specific stuff should be in the infrastructure layer. If you follow this logic, everything in the inner circles is platform neutral ("technology neutral" may be misleading).
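The adapter swap described above can be sketched as follows (hypothetical port and adapter names; real adapters would wrap actual Oracle or MongoDB drivers, which are assumed here):

```python
from abc import ABC, abstractmethod

# The port: owned by the inner rings, platform neutral.
class OrderStore(ABC):
    @abstractmethod
    def save(self, order_id: str, payload: dict) -> None: ...
    @abstractmethod
    def load(self, order_id: str) -> dict: ...

# Adapters live in the infrastructure ring; each wraps one concrete technology.
class InMemoryOrderStore(OrderStore):
    """Stand-in for an Oracle or MongoDB adapter; only this class changes on a swap."""
    def __init__(self) -> None:
        self._rows: dict[str, dict] = {}

    def save(self, order_id: str, payload: dict) -> None:
        self._rows[order_id] = payload

    def load(self, order_id: str) -> dict:
        return self._rows[order_id]

# Inner code depends only on the port, so swapping persistence
# technology means swapping the adapter and nothing else.
def place_order(store: OrderStore, order_id: str, item: str) -> None:
    store.save(order_id, {"item": item})

store = InMemoryOrderStore()
place_order(store, "o-1", "widget")
print(store.load("o-1"))  # {'item': 'widget'}
```

The inner side of the adapter, the `OrderStore` interface, stays fixed; only the implementation behind it varies.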
Towards the core
Here is an example of the mapping in the original onion architecture:
- Domain objects (entities, value objects, aggregates) are in the core
- Around the domain, you'll have the domain services (e.g. repositories and other services)
- And around those you have the application services, i.e. the services to which the user interface and other external interfaces connect.
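That layering can be sketched in code (hypothetical names, a sketch rather than a prescription): a domain object in the core, a repository abstraction in the domain-services ring, and an application service that the outer ring calls:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Core: a domain object (entity).
@dataclass
class Product:
    sku: str
    price: float

# Domain-services ring: the repository abstraction.
class ProductRepository(ABC):
    @abstractmethod
    def by_sku(self, sku: str) -> Product: ...

# Application-services ring: what the UI, tests, and infrastructure connect to.
class PricingService:
    def __init__(self, repo: ProductRepository) -> None:
        self._repo = repo

    def price_with_tax(self, sku: str, rate: float) -> float:
        return round(self._repo.by_sku(sku).price * (1 + rate), 2)

# Outermost ring: here a test double stands in for real infrastructure.
class FakeRepo(ProductRepository):
    def by_sku(self, sku: str) -> Product:
        return Product(sku, 10.0)

print(PricingService(FakeRepo()).price_with_tax("abc", 0.2))  # 12.0
```

Note the dependency direction: each ring references only the rings inside it, which is what lets the outer skin (like `FakeRepo` here) be replaced freely.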
However, Wade Waldron's variant is a little different for the inner part:
- The core is made of basic application building blocks which are domain independent and used by the domain, like those offered by a standard library. Personally, I feel this is not a good idea: those basic constructs do not fit into this structure. They are indeed used by the next ring, but they may also be used by any other ring directly, without going through the domain. In fact, the core should be orthogonal to this architecture. In other words, you may use a list anywhere in the architecture without having to traverse the successive rings down to the core.
- Ok, the domain is clear, but he puts everything domain-related there, including entities, value objects, aggregates, and also repositories, factories, and domain services. So he regroups the domain objects and domain services of the original model.
- Ok, the API is obvious: it corresponds to the application services in the original architecture, those which are not accessible directly but only via the outside ring.
This video explains Waldron's variant of the Onion architecture in detail, with examples.
Best Answer
Business Logic is the core responsibility of the domain layer. However, if that was strictly all the domain layer knew about, it wouldn't be able to communicate with anything else.
What you're being prohibited from putting in the domain layer is knowledge of the volatile, concrete, happens-to-be-what-we're-using-today details of NHibernate, or the web, the database, the file system, or whatever else they want to use tomorrow.
The domain should communicate using whatever is convenient for itself: a data structure, an object, a collection. It should not be flinging around result sets, JSON, table rows, or anything else that gives away what it's talking to.
Which means you need to translate result sets, JSON, etc. into, or out of, those convenient-to-use things in a layer outside the domain. That's part of the infrastructure. That's the adapter half of the ports and adapters in Hexagonal Architecture, which honestly isn't all that different from Onion Architecture or Clean Architecture.
The interfaces/abstractions needed to talk to that infrastructure, or have that infrastructure talk to the domain, are what the domain will know about. The domain will own them and dictate when and if they can change. Nothing else gets to have a say in how they change.
Doing that is what "removes all knowledge" of those outside details yet still allows for communication.
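That ownership can be sketched like this (hypothetical names throughout): the domain declares the interface in its own terms, and an infrastructure adapter implements it, translating a raw JSON-ish row into a domain object before it ever crosses the boundary:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Domain-owned: a convenient data structure, and the contract the domain dictates.
@dataclass
class Invoice:
    number: str
    total: float

class InvoiceGateway(ABC):  # the domain owns this abstraction
    @abstractmethod
    def fetch(self, number: str) -> Invoice: ...

# Infrastructure: translates what the outside world speaks (rows, JSON)
# into the domain's convenient shape. Only this layer knows the raw format.
class JsonInvoiceGateway(InvoiceGateway):
    def __init__(self, raw_rows: dict[str, dict]) -> None:
        self._raw = raw_rows  # stand-in for a real HTTP or database call

    def fetch(self, number: str) -> Invoice:
        row = self._raw[number]  # e.g. {"inv_no": "A1", "amt_cents": 1250}
        return Invoice(number=row["inv_no"], total=row["amt_cents"] / 100)

gw = JsonInvoiceGateway({"A1": {"inv_no": "A1", "amt_cents": 1250}})
print(gw.fetch("A1"))  # Invoice(number='A1', total=12.5)
```

The domain only ever sees `Invoice` and `InvoiceGateway`; the field names and units of the raw payload can change tomorrow without the domain noticing.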