To be honest, your question is not easy to understand.
When you start your DDD journey, you identify your domains and bounded contexts. Ideally you have one bounded context per domain, but it depends. Each domain has its own domain model, as you mention yourself, and these domain models have nothing in common; they don't know about each other. Your idea of having one base class for two entities from different domains does not fit DDD. A real domain model does not expose anything inside it to anything outside its bounded context. All communication between bounded contexts goes through pre-defined interfaces, such as queries and commands if you do CQRS. This keeps your components loosely coupled and allows different teams to work on different domains without depending on each other.
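As a minimal sketch of what "pre-defined interfaces" can look like, here a context exposes only commands and queries; other contexts never touch its entities. All names (`ReserveStock`, `InventoryFacade`, etc.) are invented for illustration:

```typescript
// Illustrative public surface of an Inventory context (names are hypothetical).
interface ReserveStock { sku: string; quantity: number } // command
interface GetStock { sku: string }                       // query
interface StockView { sku: string; available: number }   // read model returned to callers

class InventoryFacade {
  // Internal state; never leaks out of the bounded context.
  private stock = new Map<string, number>([["book-42", 5]]);

  handleReserveStock(cmd: ReserveStock): void {
    const current = this.stock.get(cmd.sku) ?? 0;
    this.stock.set(cmd.sku, current - cmd.quantity);
  }

  handleGetStock(q: GetStock): StockView {
    // Callers get a plain view, not the entity itself.
    return { sku: q.sku, available: this.stock.get(q.sku) ?? 0 };
  }
}
```

Other contexts depend only on the command and query shapes, so the Inventory team can change its internal model freely.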
A bounded context always means a separate application. You can think of it as a separate solution, though. It will include a domain project with entities, value objects, and commands; a services project with command handlers and query handlers; and transport projects, such as an application that uses a service bus for commands and events.
Remember that using domain events is crucial to enable inversion of control at the business-logic level: one domain informs everyone else about things that happen, allowing them to react accordingly and carry out the necessary activities. Think of an order-paid event in the Sales domain that triggers a dispatching process in the Inventory domain. The Sales domain does not even need to know about the Inventory domain.
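To make the inversion of control concrete, here is a minimal in-memory sketch: Sales publishes an `OrderPaid` event, Inventory subscribes and starts dispatching, and Sales never references Inventory. The event bus and all type names are assumptions for illustration; in practice this would go over a service bus:

```typescript
// Hypothetical domain event published by the Sales context.
type OrderPaid = { type: "OrderPaid"; orderId: string; lines: { sku: string; qty: number }[] };

type Handler<E> = (event: E) => void;

// Toy in-memory event bus standing in for a real service bus.
class EventBus {
  private handlers = new Map<string, Handler<any>[]>();
  subscribe<E extends { type: string }>(type: E["type"], handler: Handler<E>): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler);
    this.handlers.set(type, list);
  }
  publish<E extends { type: string }>(event: E): void {
    for (const h of this.handlers.get(event.type) ?? []) h(event);
  }
}

const bus = new EventBus();

// Inventory context reacts to the event; Sales knows nothing about it.
const dispatched: string[] = [];
bus.subscribe<OrderPaid>("OrderPaid", (e) => {
  for (const line of e.lines) dispatched.push(line.sku); // kick off dispatching
});

// Sales context publishes the event when an order is paid.
bus.publish<OrderPaid>({ type: "OrderPaid", orderId: "o-1", lines: [{ sku: "book-42", qty: 1 }] });
```

The key point survives the toy setup: the publisher's code contains no reference to any subscriber.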
You may want to display the inventory level on the web page, or you may want to display the edition number of the inventory in stock (imagine your inventory is books, magazines, etc.). This information comes from the Inventory domain.
The main thing to notice at this point is that you are talking about a view, which is to say that using stale data is acceptable.
That being said, you don't need to be interacting with the aggregates (which are responsible for preventing changes from violating the business invariant), but with a representation of a recent copy of the aggregate's state.
So what I would normally expect is a query run against the Product Catalog, and another run against the Inventory, and something to compose the two into the DTO that you need to support the view.
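A sketch of that composition, under the assumption that each context exposes a read-side query (the query functions here are hard-coded stand-ins, and all names are illustrative):

```typescript
// Read models returned by each context's query side (hypothetical shapes).
type CatalogEntry = { productId: string; title: string; edition: number };
type StockLevel = { productId: string; inStock: number };

// Stand-ins for queries against the Product Catalog and Inventory read models;
// in reality these would be calls to two separate services.
function queryCatalog(productId: string): CatalogEntry {
  return { productId, title: "DDD Quarterly", edition: 3 };
}
function queryInventory(productId: string): StockLevel {
  return { productId, inStock: 12 };
}

// The DTO the view actually needs, composed from both answers.
type ProductPageDto = { title: string; edition: number; inStock: number };

function composeProductPage(productId: string): ProductPageDto {
  const catalog = queryCatalog(productId);
  const stock = queryInventory(productId);
  return { title: catalog.title, edition: catalog.edition, inStock: stock.inStock };
}
```

Note that no aggregate is loaded anywhere; the composer only handles state representations, which is why stale data is acceptable here.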
Load both the Product domain and the Inventory domain aggregates?
So that's close. We don't need to load the aggregates, because we aren't going to change anything. But we need their state; so we could load that. That said, I would normally expect the two domains to be running in different processes. Therefore, we'd be calling both, not loading both.
Would you hold some properties on your Product domain entity for number in stock, and edition in stock, and then use Domain Events to update these when the Inventory entity is updated?
"Don't cross the streams. It would be bad."
Using events to coordinate information across domain contexts: great idea. Pushing concepts that belong in one domain into another: opposite of a great idea, except more so.
You want to keep the domains clean. For the applications that interact with the domains, it's not so important. So, for instance, it is reasonable for the Inventory application to call a service in the Product application to query some product-specific concepts to add to a view, or vice versa.
I don't know of any reason that a single application needs to be restricted to a single domain. So long as there is a single source of truth, you can distribute the transactions any way you like.
But just to think this through: in the example above we would end up with potentially two DB tables, one for the product catalog and one for the product inventory. Now, do we use the same identifier in both, since it's the same product?
That would be the easy way. In larger terms, you use the same identifier because the real world entity is the same; the two different bounded contexts model that entity differently, but the model isn't the real world entity.
When that doesn't work, then you'll need some query to use to bridge the gap. I think the most common variation of this is that the newer entity preserves the id of the older entity. You'll see this within a single BC as well: applicants, when approved, become clients. It's a different aggregate (the state associated with a client is subject to a different invariant than that of the applicant); so if your persistence layer is using event streams, the stream for the new aggregate will need a different identifier. So there will be a bit of state somewhere that says "this applicant became this client".
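That "bit of state somewhere" can be as simple as a mapping from the old identifier to the new one. A minimal sketch, with an invented id scheme (in an event-sourced store this mapping would itself be persisted):

```typescript
// "This applicant became this client" — a persisted lookup in real life.
const applicantToClient = new Map<string, string>();

function approveApplicant(applicantId: string): string {
  // New aggregate, new stream, so a new identifier (scheme is illustrative).
  const clientId = `client-${applicantId}`;
  applicantToClient.set(applicantId, clientId);
  return clientId;
}

function clientFor(applicantId: string): string | undefined {
  return applicantToClient.get(applicantId);
}
```

Queries that need to bridge the gap between the two aggregates (or the two bounded contexts) go through this lookup rather than assuming the ids match.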
Or, could we use 1 table and 1 table row for the data and simply map the relevant data onto the aggregate properties?
YIKES! No, don't do that. You're adding transaction contention without any business reason for doing so.
Best Answer
I would definitely try to avoid Option #1. Synchronous dependencies between services are an anti-pattern and just complicate operations.
Replicating information is not a bad thing in itself when there is a clear producer and clear consumers. Although it should be kept as low as possible, I would have no problem having some data replicated.
The point is that even when other services are down or overloaded, the product page would still work. That is something you probably want for an e-commerce site.
The downside is that the information on that page could be somewhat out of date. That is normally not a problem, since you will see the exact information when you switch from "browsing" to actual checkout, which should be another application holding the "master" data.
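A sketch of that replication pattern, assuming Inventory publishes stock-change events that the browsing application consumes into a local copy (event shape and names are invented):

```typescript
// Hypothetical event published by the Inventory context.
type StockChanged = { productId: string; inStock: number };

// The browsing app's local, possibly stale replica of stock levels.
const localStock = new Map<string, number>();

// Event consumer: keeps the replica up to date; no synchronous call to Inventory.
function onStockChanged(e: StockChanged): void {
  localStock.set(e.productId, e.inStock);
}

// The product page reads only the local copy, so it works even if Inventory is down.
function renderStock(productId: string): string {
  const n = localStock.get(productId);
  return n === undefined ? "availability unknown" : `${n} in stock`;
}
```

The page degrades gracefully ("availability unknown") instead of failing when no event has arrived yet, while checkout remains the place that consults the master data.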
Of course there is always a third option: to try to rearrange the service boundaries in a way that does not require so much data to be replicated.