Long story short: all in one.
If you consider one of these as the "master" and the other as an augmented replica, such that updates are replicated one way only, and you can survive large latencies in synchronisation (and I'll let you define "large"), then two separate DBs will work. For anything else I'd suggest a single DB.
If you have two-way updates, will they be synchronous or asynchronous? If synchronous, you have tightly coupled the two DBs. If asynchronous, you must have conflict-resolution measures in place, adding complexity. With a single DB you have the whole wealth of concurrency technology to support you.
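To make "conflict-resolution measures" concrete, here is a minimal sketch of the simplest common policy, last-writer-wins; the row shape and names are hypothetical:

```typescript
// A minimal sketch of one conflict-resolution measure: last-writer-wins.
// Assumes (hypothetically) that every replicated row carries an updatedAt
// timestamp set by the writing application.
interface ReplicatedRow {
  id: string;
  value: string;
  updatedAt: number; // epoch milliseconds
}

// When the same row has been changed in both databases since the last
// sync, keep the most recent write and discard the other.
function resolveConflict(a: ReplicatedRow, b: ReplicatedRow): ReplicatedRow {
  return a.updatedAt >= b.updatedAt ? a : b;
}
```

Note that even this simple policy silently discards one of the two writes; anything smarter (field-level merges, queuing conflicts for manual review) adds still more of the complexity mentioned above.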
With any copy process there will be latency, even if it is only milli- or micro-seconds. That's all it takes to introduce inconsistency between the source's and replica's data, however, and then you have the whole reconciliation & resolution thing again.
How will you handle DR? How will you ensure a common sync point across two DBs, potentially on different servers (or data centres)? How do you ensure you get the two matching tapes back from the off-site vault first time, every time?
Any schema change which adds tables or columns will be transparent to the other application. (You're not writing SELECT *, are you? ARE YOU?) Removing objects should only affect modules which use them, which you'd want to change anyway. (Two DBs with a redundant table in one but removed from the other smells like a potential source of great confusion to me.) Changing schema because business rules have changed may cause some duplication of work; but it seems to me that if one application isn't implementing that business rule, it had no call to be in that part of the DB anyway.
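As a quick illustration of why the explicit column list matters, here is a sketch with a hypothetical customer table:

```typescript
// With an explicit column list, a column added to the table for the other
// application changes nothing here: the row shape this code sees is fixed.
interface CustomerRow {
  id: number;
  name: string;
}
const safeQuery = "SELECT id, name FROM customer WHERE id = $1";

// With SELECT *, a newly added column silently changes the returned row
// shape, exposing mapping code in *both* applications to the change.
const fragileQuery = "SELECT * FROM customer WHERE id = $1";
```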
Could you store the ORM-implementing code files in a way that makes them common to both applications? Then schema changes will affect only one set of code.
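As a sketch of what that might look like, with hypothetical module names:

```typescript
// shared-orm/customer.ts — one entity definition imported by both
// applications, so a schema change touches exactly one file.
export class Customer {
  constructor(
    public id: number,
    public name: string,
    public email: string,
  ) {}
}

// Each application then does:
//   import { Customer } from "shared-orm/customer";
// and maps rows through the same class instead of maintaining two copies.
```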
"Database as single point of failure." Well, fair point. But then it has been on absolutely every application I've ever worked on since they've all had a DB. That's why we have cluster/ mirror/ high-availability built into the DBMS products. If you have two DBs and one fails, what then? Back to the reconciliation/ conflict resolution cycle. It is also a single point of tuning; fix it one place and it's fixed everywhere!
I would have thought views, stored procedures, the ORM itself, and limiting the items in any SELECT query would have been able to isolate any part of the application from data in which it has no interest.
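For instance, a view can give each application a narrow, stable window onto the shared schema. A minimal sketch, assuming a hypothetical invoicing application that needs only a few customer columns:

```typescript
// The view is the contract the invoicing application codes against.
// Columns added to the base customer table for other applications
// never appear here.
const createView = `
  CREATE VIEW invoicing_customer AS
  SELECT id, name, billing_address
  FROM customer
`;

// The application queries the view, never the base table.
const fetchOne =
  "SELECT id, name, billing_address FROM invoicing_customer WHERE id = $1";
```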
My reading of your question is that your two applications are actually quite closely coupled. My response is coloured by that. I've written plenty of systems where we bulk copy or two-way replicate data and deal with the consequences.
It seems to me your question is split into two parts. The first revolves around duplication of interfaces; the second, around dependencies.
First, duplication of interfaces. You probably want to duplicate them in the case you are describing.
A good rule of thumb when breaking up a monolith is to keep your boundaries clearly defined. If you have two domains, it can feel counter-intuitive, but any given domain object needs to be modeled in a way that is unique to each domain.
For instance, say you have a customer interface, ICustomer, and two domains, Orders and Invoices. Although both orders and invoices use something called a customer, it is often better to define the interface twice than to force a single interface on both (a mistake usually rooted in a misunderstanding of DRY). It might seem intuitive that Orders and Invoices should share the same ICustomer, but in truth they probably should not: a customer from the perspective of the Orders domain is a very different beast from a customer in the Invoices domain, and the two will change for very different reasons.
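A minimal sketch of what that duplication might look like; the field names are hypothetical, chosen only to show how the two shapes diverge:

```typescript
// orders/customer.ts — the customer as the Orders domain sees it
export interface OrdersCustomer {
  id: string;
  shippingAddress: string; // where the goods go
  creditLimit: number;     // whether the order can be accepted
}

// invoices/customer.ts — the customer as the Invoices domain sees it
export interface InvoicesCustomer {
  id: string;
  billingAddress: string;  // where the bill goes
  taxId: string;           // how the tax is reported
  paymentTermsDays: number;
}
```

The only overlap today is id; forcing one shared ICustomer would couple both domains to every future change in either.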
So if you want to split your project into domain-driven microservices, create libraries around each domain. Be less concerned about how well you can re-use code (or other resources) across domains than about how easily the code in one domain can be changed without breaking code in an unrelated domain.
Second, dependencies. This will mostly solve itself after you address the above issue. Keeping dependencies simple is a matter of keeping dependencies secluded to their proper domain, and then pushing the interfaces for more generalized dependencies down into your base common libraries.
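A minimal sketch of that dependency direction, with hypothetical module names; each domain depends on the common base library, and domains never depend on each other:

```typescript
// common/money.ts — a generalized dependency pushed down to the base library
export interface Money {
  amount: number;
  currency: string;
}

// orders/order.ts — depends only on common, never on invoices
import { Money } from "../common/money";
export interface Order {
  id: string;
  total: Money;
}

// invoices/invoice.ts — likewise depends only on common
import { Money } from "../common/money";
export interface Invoice {
  id: string;
  amountDue: Money;
}
```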
Best Answer
If you go the service route, all your applications will fail if the single service fails. Of course, it means if you need to modify the code that handles the common functionality, you only have to do it in one place (the service).
If you go the library route, your applications are more independent (in fact, they could be deployed onto different servers that do not even communicate) and don't all rely on a single service. Of course, if you need to modify the code that handles the common functionality, you will probably have to rebuild and redeploy all the applications with the new libraries. This could also be an advantage if you want Application A to use version 3 of Some Library and Application B to use version 4.2 of the same library. It's not a common case, but I have seen it happen once or twice in very specific situations.
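A minimal sketch of the difference from a consuming application's point of view; the package name and endpoint are hypothetical:

```typescript
// Library route: the common code is built into this application. A fix to
// calculateTax means rebuilding and redeploying every consumer, but each
// application can pin its own version of the library.
import { calculateTax } from "shared-tax-lib"; // hypothetical package

function libraryRoute(): number {
  return calculateTax(100.0);
}

// Service route: the common code runs in one place behind a network call.
// A fix is deployed once, but the service is now a shared point of failure.
async function serviceRoute(): Promise<number> {
  const res = await fetch("https://tax.internal/calculate?amount=100");
  const body = await res.json();
  return body.tax;
}
```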
Testing efforts for both should be the same, because even if only the service "application" is modified, all the user applications should still be retested as if the service were internal to them.