I cannot put the object into the state needed to perform the tests.
If you cannot put the object into the state needed to perform a test, then you cannot put the object into that state in production code either, so there's no need to test that state. Obviously, this isn't true in your case: you can put your object into the needed state; just call approve().
I cannot publish unless the Document has been approved: write a test that verifies calling publish() before approve() causes the right error without changing the object state.
void testPublishBeforeApprove() {
    doc = new Document("Doc");
    AssertRaises(doc.publish, ..., NotApprovedException);
}
I cannot re-publish a Document: write a test that approves an object, then verifies that the first call to publish() succeeds but a second call causes the right error without changing the object state.
void testRePublish() {
    doc = new Document("Doc");
    doc.approve();
    doc.publish();
    AssertRaises(doc.publish, ..., RepublishException);
}
When published, the PublishedBy and PublishedOn values are properly set: write a test that calls approve() then publish(), and asserts that the object state changes correctly.
void testPublish() {
    doc = new Document("Doc");
    doc.approve();
    doc.publish();
    Assert(doc.PublishedBy, ...);
    ...
}
When published, the PublishedEvent is raised: hook into the event system and set a flag to make sure the event is actually fired.
You also need to write tests for approve().
In other words, don't test the relationship between internal fields and IsPublished and IsApproved; such tests would be quite fragile, since changing a field would mean changing your test code, making the tests pointless. Instead, test the relationship between calls of public methods; that way, even if you modify the fields, you won't need to modify the tests.
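The whole pattern can be sketched end to end. The Document class below is a hypothetical implementation, written only to make the tests concrete; the exception names and the list-of-handlers event mechanism are assumptions carried over from the pseudocode above, not a real API:

```python
import datetime


class NotApprovedException(Exception):
    """Raised when publish() is called before approve()."""


class RepublishException(Exception):
    """Raised when publish() is called a second time."""


class Document:
    def __init__(self, name):
        self.name = name
        self._approved = False     # internal state: tests never read these
        self._published = False
        self.published_by = None
        self.published_on = None
        self.published_event = []  # handlers invoked when publishing succeeds

    def approve(self):
        self._approved = True

    def publish(self, user="system"):
        # Guard clauses enforce the state machine without exposing fields.
        if not self._approved:
            raise NotApprovedException(self.name)
        if self._published:
            raise RepublishException(self.name)
        self._published = True
        self.published_by = user
        self.published_on = datetime.date.today()
        for handler in self.published_event:
            handler(self)


def test_publish_before_approve():
    doc = Document("Doc")
    try:
        doc.publish()
        raise AssertionError("expected NotApprovedException")
    except NotApprovedException:
        pass


def test_republish():
    doc = Document("Doc")
    doc.approve()
    doc.publish()
    try:
        doc.publish()
        raise AssertionError("expected RepublishException")
    except RepublishException:
        pass


def test_publish_event():
    calls = []
    doc = Document("Doc")
    doc.published_event.append(calls.append)  # flag via captured list
    doc.approve()
    doc.publish()
    assert calls == [doc]
```

Note that every test drives the object exclusively through approve() and publish(); renaming `_approved` or `_published` would not break a single one of them.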
Kinds of objects
For purposes of our discussion, let's separate our objects into three different kinds:
Domain Logic objects are the objects that get work done. They move money from one checking account to another, fulfill orders, and take all of the other actions that we expect business software to take.
Domain logic objects normally do not require accessors (getters and setters). Rather, you create the object by handing it dependencies through a constructor, and then manipulate the object through methods (tell, don't ask).
Data Transfer Objects are pure state; they don't contain any business logic. They will always have accessors. They may or may not have setters, depending on whether or not you're writing them in an immutable fashion. You will either set your fields in the constructor and their values will not change for the lifetime of the object, or your accessors will be read/write. In practice, these objects are typically mutable, so that a user can edit them.
View Model objects contain a displayable/editable data representation. They may contain business logic, usually confined to data validation. An example of a View Model object might be an InvoiceViewModel, containing a Customer object, an Invoice Header object, and Invoice Line Items. View Model objects always contain accessors.
So the only kind of object that will be "pure" in the sense that it doesn't contain field accessors will be the Domain Logic object. Serializing such an object saves its current "computational state," so that it can be retrieved later to complete processing. View Models and DTO's can be freely serialized, but in practice their data is normally saved to a database.
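The distinction can be sketched with a hypothetical money-transfer domain (every name here is invented for illustration): the domain logic object receives its dependencies through the constructor and is driven by methods with no accessors, while the DTO is pure state.

```python
from dataclasses import dataclass


@dataclass
class AccountDto:
    # Data Transfer Object: pure state, no business logic, all accessors.
    account_id: str
    balance_cents: int


class InMemoryRepository:
    # Stand-in dependency; in production this would wrap a database.
    def __init__(self, accounts):
        self._accounts = {a.account_id: a for a in accounts}

    def load(self, account_id):
        return self._accounts[account_id]

    def save(self, account):
        self._accounts[account.account_id] = account


class TransferService:
    # Domain logic object: dependency injected in the constructor,
    # manipulated through a method (tell, don't ask), no getters.
    def __init__(self, repository):
        self._repository = repository

    def transfer(self, from_id, to_id, amount_cents):
        source = self._repository.load(from_id)
        target = self._repository.load(to_id)
        source.balance_cents -= amount_cents
        target.balance_cents += amount_cents
        self._repository.save(source)
        self._repository.save(target)
```

Callers tell TransferService what to do and never ask it for state; the DTOs, by contrast, exist only to carry state.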
Serialization, dependencies and coupling
While it is true that serialization creates dependencies, in the sense that you have to deserialize to a compatible object, it does not necessarily follow that you have to change your serialization configuration. Good serialization mechanisms are general purpose; they don't care if you change the name of a property or member, so long as it can still map values to members. In practice, this only means that you must re-serialize the object instance to make the serialization representation (xml, json, whatever) compatible with your new object; no configuration changes to the serializer should be necessary.
It is true that objects should not be concerned with how they are serialized. You've already described one way such concerns can be decoupled from the domain classes: reflection. But the serializer should be concerned about how it serializes and deserializes objects; that, after all, is its function. The way you keep your objects decoupled from your serialization process is to make serialization a general-purpose function, able to work across all object types.
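To make that concrete, here is a minimal sketch of a general-purpose serializer that works by reflecting over any object's fields, so the domain classes never reference the serializer at all; Python's instance dictionary stands in for whatever reflection mechanism your platform provides, and the Invoice class is a hypothetical example:

```python
import json


def serialize(obj):
    # Map every field name to its value; no per-class configuration.
    return json.dumps(vars(obj))


def deserialize(cls, text):
    # Build a blank instance and map the stored values back onto members.
    obj = cls.__new__(cls)
    obj.__dict__.update(json.loads(text))
    return obj


class Invoice:
    # Knows nothing about serialization; any class with plain fields works.
    def __init__(self, number, total):
        self.number = number
        self.total = total
```

If you rename a field, only the stored representations need to be re-serialized to match; serialize() and deserialize() themselves never change.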
One of the things people get confused about is that decoupling has to occur in both directions. It does not; it only has to work in one direction. In practice, you can never decouple completely; there is always some coupling. The goal of loose coupling is to make code maintenance easier, not to remove all dependencies.
This question has a couple of different flavors of answer.
The most straightforward answer is that we are using the domain model to manage change to a data model. That Money.value came from somewhere, and the motivation for invoking Account.withdraw is to change that specific value at its source.
For the state of the resource, think DTO: a representation of that state designed to cross process boundaries.
The same idea expressed another way: there must be a way to inject data into the domain model, and extract it back out. There's something that understands how to take primitive inputs and transform them into Value Objects; and likewise there must be something that understands how to take Value Objects and convert them back into primitive outputs.
If you are familiar with TDD, you see this approach taken when designing a new class -- consider the bowling game kata. The test models rolls as int inputs, and models the score as an int output. When we start making those concepts explicit in the model, we write code to transform the primitives to the explicit concept, and vice versa.
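That boundary can be sketched in miniature. The class below handles only bonus-free games (no spare or strike scoring), because the point here is the shape of the interface, primitives in and primitives out, not the full bowling rules:

```python
class BowlingGame:
    def __init__(self):
        self._rolls = []  # primitive inputs transformed into internal state

    def roll(self, pins: int) -> None:
        # The test speaks in ints; the model decides how to represent them.
        self._rolls.append(pins)

    def score(self) -> int:
        # Primitive output extracted from the model (no spare/strike bonuses).
        return sum(self._rolls)
```

As the model grows richer, the ints at the edge stay, and the transformation into explicit concepts (Frame, Roll) happens inside.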
A slightly different variation: what we are typically doing with a domain model is ensuring that changes to a book of record satisfy some business invariant. So we think in terms of passing the "book of record" to the model, letting the model make the changes, and then inspecting the book of record to verify them.
A different answer came from Greg Young when he introduced the probability kata: "test the model in the calculus of itself".
For example, the fact that, in your example, the domain model uses a floating point value to track the current state of Money is an implementation detail. It's one that is likely to change as your model gets more sophisticated about rounding rules.
That kind of a change shouldn't break all of your tests.
The "state" of the model should be a black box; what the tests should be focused on is not that the model ends up in the expected "state", but instead that the model produces the right responses to queries.
Some of those queries may be as simple as "is the model internally consistent?"
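To illustrate testing through queries rather than state, consider a hypothetical Account whose internal balance representation might change (say, from a floating point value to integer cents as rounding rules mature). The test only asks questions, so it survives that change:

```python
class Account:
    def __init__(self):
        self._balance_cents = 0  # implementation detail; free to change

    def deposit(self, amount_cents):
        self._balance_cents += amount_cents

    def withdraw(self, amount_cents):
        if amount_cents > self._balance_cents:
            raise ValueError("insufficient funds")
        self._balance_cents -= amount_cents

    def can_cover(self, amount_cents):
        # A query: tests ask this instead of inspecting _balance_cents.
        return self._balance_cents >= amount_cents
```

A test asserts can_cover(70) after a deposit of 100 and a withdrawal of 30; it never mentions `_balance_cents`, so swapping the representation breaks nothing.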