I'll try to tackle part of your question: why do I need a DAO layer if I never intend to swap databases?
As another respondent mentioned, part of your goal when writing any class is the Single Responsibility Principle: a class should have only one reason to change. Simply put, you don't want to modify code without necessity, because doing so introduces risk. To minimize that risk, we try to avoid touching the implementation of a class for too many reasons. If I had to alter my class because the business rules changed, or because I decided to change how I map it into persistent storage, I would have two reasons to change my code. By removing one of those responsibilities, database mapping, I can avoid the risk of a mistake that impacts the other, and the burden of testing both. I don't want to re-check my DB mappings just because I changed a business rule. You may also hear people talk about 'separation of concerns'; that's another way of expressing the same idea. A class that handles my logic for arranging shipping containers should not care about persistence.
It also comes into play when we think about unit testing. When I unit test my shipping container arrangement class, I don't want those tests to be coupled to other concerns, such as persistence; I want to be able to test in isolation from them. If my class contains no persistence logic, I can easily test it in memory, without worrying about how it is persisted.
This brings an additional benefit: it becomes easier to reason about our code. If we don't perform data access at arbitrary points in our code, but only through the DAO via an application service layer, working with the objects it returns, I will never have unexpected I/O suddenly running in the middle of my business logic. It is likewise much easier to reason about the persistence code itself when it is not interspersed with business logic. We can follow the domain logic without our understanding being polluted by access to persistent storage, and the ability to reason about code easily is a key to productivity.
The ultimate expression of many of these layering ideas is a hexagonal architecture.
You should be aware, though, that a domain model is not ideal for all scenarios, particularly where you have little or no business logic. In that case a transaction script (one thing after another) might be the right solution. Some patterns, such as CQRS, exist to gain the benefits of a domain model without its costs for simple read access, for example when displaying a web page.
I'd say 99 times out of 100 the answer is to grit your teeth and re-use the existing COBOL code. It sounds like it has some fairly complex logic behind the scenes; unless you actually have an up-to-date and accurate spec, you might have trouble reproducing the logic in all its nuances. This is bad mojo on a financial application, as the bean counters will want bug-for-bug compatibility with the legacy code.
So, as a first stop, I would suggest that you seriously investigate what is needed to wrap your legacy process or integrate it with the front-end. You might take a look at the COBOL-Java integration options available from Micro Focus.
If you really can't recycle the existing code then you have a significant problem. While a stored procedure might well do what you want, you have the problem of making something mimic the exact behaviour of an apparently complex and probably poorly documented COBOL application.
Unless your replacement process is radically slower, wrapping 2000 DMLs in a transaction isn't going to be a major issue. It's possible to wrap entire ETL processes in a transaction and have them roll back on a failure. I have seen an application where it was necessary to do this, although it is unusual. Splitting a financial transaction is a no-no, especially on a double entry system. In almost all cases it should commit or roll back atomically. Stick with the single transaction.
The fact that the COBOL application does computations one DML statement at a time and uses the results in the next step of the computation does not bode well for conversion to set operations. Be prepared to have to do this row-by-row and find a way to make it run fast. A stored procedure may well be the way to do this. You don't say what DBMS you're using (DB2 for z/OS?).
Also, you might have to drop down to XA transactions (the Java Transaction API) and plain JDBC, especially if you need to work in the client layer. If you want good performance on 2000 row-based operations, you might have to go below the framework.
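To make the single-transaction point concrete, here is a minimal sketch of wrapping row-by-row DML in one JDBC transaction so the whole batch commits or rolls back atomically. The class name, table, and columns are invented for illustration, not from your application.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class LedgerBatch {

    /** Applies every posting, or none of them, in one transaction. */
    public static void post(Connection con, List<long[]> postings) throws SQLException {
        boolean oldAutoCommit = con.getAutoCommit();
        con.setAutoCommit(false);              // open the transaction
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE account SET balance = balance + ? WHERE id = ?")) {
            for (long[] p : postings) {        // p[0] = account id, p[1] = amount
                ps.setLong(1, p[1]);
                ps.setLong(2, p[0]);
                ps.executeUpdate();            // row-by-row, as the COBOL job does
            }
            con.commit();                      // all rows succeed together...
        } catch (SQLException e) {
            con.rollback();                    // ...or none at all
            throw e;
        } finally {
            con.setAutoCommit(oldAutoCommit);
        }
    }
}
```

Even at 2000 statements, a single transaction like this is usually fine; the database only has to keep the undo information until the final commit.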
Sorry for the rather long answer.
I think that for a modern application approach, you should consider having the connection injected into the DAO, not taken from a factory.
Your problem, as I understand it, is that from the perspective of the service layer, you want fine-grained DAO functionality (CRUD) that can be combined into more complex database operations at the service layer, and you want to ensure the integrity of these complex operations using database transactions. The issue that arises is the visibility of transaction handling and database access outside the DAO classes: your business code gets dirty with transaction and connection handling code, distracting from the business logic.
This is a common problem with ORM-based approaches as well. For example, in plain Hibernate applications you need to decide whether you are already in a running session or need to start a new one. If you encapsulate the Hibernate code into separate classes, you need to handle the opening and closing of the session in a smart way to avoid code duplication.
Another issue that arises is the stacking of methods: if each method handles its own connection/transaction, you cannot easily have one method call another.
I have some ideas for you, that you may want to consider:
First on the stacking issue:
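The counting idea could look something like this (a minimal sketch with class and method names of my own choosing; only the outermost begin/commit pair touches the database):

```java
import java.sql.Connection;
import java.sql.SQLException;

public class CountingTransaction {
    private final Connection connection;
    private int depth = 0;                      // how many begins are stacked

    public CountingTransaction(Connection connection) {
        this.connection = connection;
    }

    public void beginTransaction() throws SQLException {
        if (depth == 0) {
            connection.setAutoCommit(false);    // real transaction starts here
        }
        depth++;                                // nested calls only count up
    }

    public void commit() throws SQLException {
        if (depth == 0) {
            throw new IllegalStateException("commit without beginTransaction");
        }
        depth--;
        if (depth == 0) {                       // outermost commit reached
            connection.commit();
            connection.setAutoCommit(true);
        }
    }

    public void rollback() throws SQLException {
        depth = 0;                              // abandon all nested levels
        connection.rollback();
        connection.setAutoCommit(true);
    }
}
```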
This allows you to start transactions and connections at any level of your code: calling beginTransaction is idempotent with respect to the database, but increments a counter. You need to take great care not to skip a commit anywhere in your code; otherwise the outermost commit, the one that actually closes the transaction boundary, never executes.
Used naively, this method will clutter your code at every level with obtainConnection/startTransaction/commit/rollback/returnConnection calls; there is a lot of boilerplate here.
Still, keep this method in mind; I solved the stacking issue this way in my own frameworks. Now we need to find ways to remove all this connection clutter from our code. You may find the following approaches useful, with an emphasis on the third one.
Dependency injection of the connection: your DAOs are not singletons but throw-away objects that receive the connection at creation time. The calling code controls the connection creation for you.
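A sketch of this first approach; the class, table, and column names are mine, not from your code:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class AccountDao {
    private final Connection connection;        // injected, never created here

    public AccountDao(Connection connection) {
        this.connection = connection;
    }

    public void addToBalance(long accountId, long delta) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(
                "UPDATE account SET balance = balance + ? WHERE id = ?")) {
            ps.setLong(1, delta);
            ps.setLong(2, accountId);
            ps.executeUpdate();
        }
    }
}
```

The service layer obtains the connection, starts the transaction, constructs `new AccountDao(con)` for the duration of the unit of work, and commits or rolls back afterwards.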
Bind the connection/transaction to the running thread: your DAOs can be singletons. You bind the runtime information in a thread-local context object, and the business layer initializes the context for the DAO. For web applications, this can happen at request start.
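A sketch of this second approach; the class name and method names are illustrative:

```java
import java.sql.Connection;

public final class ConnectionContext {
    private static final ThreadLocal<Connection> CURRENT = new ThreadLocal<>();

    private ConnectionContext() {}

    /** Called by the business layer, e.g. at the start of a web request. */
    public static void bind(Connection connection) {
        CURRENT.set(connection);
    }

    /** Called by singleton DAO code whenever it needs the connection. */
    public static Connection current() {
        Connection c = CURRENT.get();
        if (c == null) {
            throw new IllegalStateException("no connection bound to this thread");
        }
        return c;
    }

    /** Called at the end of the request, before the thread is reused. */
    public static void unbind() {
        CURRENT.remove();
    }
}
```

Remember to unbind in a finally block; web containers pool threads, and a stale connection left in the ThreadLocal would leak into the next request.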
Use proxy classes to implement the connection handling transparently, via a context object bound to the thread, during service method invocation:
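A sketch of what such a proxy could look like using JDK dynamic proxies. The Runnable hooks stand in for real begin/commit/rollback code (for instance the counting transaction above bound to the thread); all names here are mine:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public final class TransactionalProxy {

    /** Wraps every call on the service interface in begin/commit,
        rolling back on failure, so the service implementation
        contains no connection handling at all. */
    public static Object wrap(Class<?> iface, Object target,
                              Runnable begin, Runnable commit, Runnable rollback) {
        InvocationHandler handler = (proxy, method, args) -> {
            begin.run();                        // transaction opens before the call
            try {
                Object result = method.invoke(target, args);
                commit.run();                   // and commits after it returns
                return result;
            } catch (Exception e) {
                rollback.run();                 // any failure rolls back
                throw e;
            }
        };
        return Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[]{iface}, handler);
    }
}
```

The service layer then hands out only proxied instances of its interfaces; callers never see a transaction.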
I have implemented all three approaches (without Spring et al.). The last one is especially powerful and cleans up your service infrastructure a lot. It demystifies the frameworks, and it is fun to do :-)
Yet, depending on your needs, you may be better off with a lightweight framework provided by some existing project.
For the case where you need a transaction boundary inside an otherwise transaction-less service function, you can open, commit, and roll back the transaction explicitly around just that piece of work.
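One way to sketch such an explicit, local boundary (names are mine; this pairs naturally with the injected-connection approach):

```java
import java.sql.Connection;
import java.sql.SQLException;

public final class Tx {

    public interface Work<T> {
        T run(Connection con) throws SQLException;
    }

    /** Runs the given work inside its own transaction boundary. */
    public static <T> T runInTransaction(Connection con, Work<T> work)
            throws SQLException {
        boolean oldAutoCommit = con.getAutoCommit();
        con.setAutoCommit(false);               // boundary starts here
        try {
            T result = work.run(con);
            con.commit();                       // boundary ends on success
            return result;
        } catch (SQLException e) {
            con.rollback();                     // or rolls back on failure
            throw e;
        } finally {
            con.setAutoCommit(oldAutoCommit);
        }
    }
}
```

The service method stays free of connection plumbing everywhere except this one explicit call, e.g. `Tx.runInTransaction(con, c -> dao.doSomething(c))`.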