It’s hard and unrealistic to maintain large mock data. It’s even harder when the database structure undergoes changes.
False.
Unit testing doesn't require "large" mock data. It requires enough mock data to test the scenarios and nothing more.
Also, the truly lazy programmers ask the subject matter experts to create simple spreadsheets of the various test cases. Just a simple spreadsheet.
Then the lazy programmer writes a simple script to transform the spreadsheet rows into unit test cases. It's pretty simple, really.
When the product evolves, the spreadsheets of test cases are updated and new unit tests generated. I do this all the time. It really works.
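A minimal sketch of that kind of script, assuming a hypothetical cases.csv export of the spreadsheet (with case_id, input and expected columns) and a hypothetical compute() function under test:

```python
import csv
import unittest

# Hypothetical module and function under test -- swap in the real business rule.
from myapp.rules import compute


def load_cases(path="cases.csv"):
    """Read the subject matter experts' spreadsheet, saved as CSV."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


class SpreadsheetCases(unittest.TestCase):
    def test_spreadsheet_rows(self):
        for row in load_cases():
            # One sub-test per spreadsheet row, named by its case_id column.
            with self.subTest(case=row["case_id"]):
                self.assertEqual(compute(row["input"]), row["expected"])


if __name__ == "__main__":
    unittest.main()
```

When the spreadsheet grows, the suite grows with it; nothing else has to change.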
Even with MVVM and the ability to test the GUI, it takes a lot of code to reproduce the GUI scenario.
What? "Reproduce"?
The point of TDD (Test-Driven Development) is to design things for testability. If the GUI is that complex, then it has to be redesigned to be simpler and more testable. Simpler also means faster, more maintainable and more flexible. But mostly, simpler will mean more testable.
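To illustrate the design point, here is a hypothetical view-model, sketched in Python, whose decision logic can be exercised without instantiating any GUI toolkit at all:

```python
import unittest


class OrderViewModel:
    """Hypothetical view-model: all of the decision logic, none of the widgets."""

    def __init__(self, order):
        self.order = order

    @property
    def submit_enabled(self):
        # The GUI only binds to this property; the rule itself lives here.
        return bool(self.order.get("items")) and bool(self.order.get("address"))


class TestOrderViewModel(unittest.TestCase):
    def test_submit_disabled_without_address(self):
        vm = OrderViewModel({"items": ["book"], "address": ""})
        self.assertFalse(vm.submit_enabled)

    def test_submit_enabled_when_order_is_complete(self):
        vm = OrderViewModel({"items": ["book"], "address": "10 Main St"})
        self.assertTrue(vm.submit_enabled)


if __name__ == "__main__":
    unittest.main()
```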
I have experience that TDD works well if you limit it to simple business logic. However, complex business logic is hard to test since the number of combinations of tests (the test space) is very large.
That can be true.
However, asking the subject matter experts to provide the core test cases in a simple form (like a spreadsheet) really helps.
The spreadsheets can become rather large. But that's okay, since I used a simple Python script to turn the spreadsheets into test cases.
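One way to do that conversion is to generate a test module straight from the spreadsheet rows. A sketch, assuming a hypothetical business_rules.csv export (with case_id, input and expected columns, where case_id values are simple identifiers) and a hypothetical rules.check() function under test:

```python
import csv

HEADER = '''\
# Generated from business_rules.csv -- do not edit by hand.
import unittest
from myapp import rules   # hypothetical module under test


class GeneratedCases(unittest.TestCase):
'''

CASE = '''\
    def test_case_{id}(self):
        self.assertEqual(rules.check({input!r}), {expected!r})
'''

with open("business_rules.csv", newline="") as src, \
        open("test_generated.py", "w") as out:
    out.write(HEADER)
    for row in csv.DictReader(src):
        out.write(CASE.format(id=row["case_id"],
                              input=row["input"],
                              expected=row["expected"]))
```

Re-running the generator whenever the spreadsheet changes keeps the tests in lock-step with the experts' examples.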
And. I did have to write some test cases manually because the spreadsheets were incomplete.
However. When the users reported "bugs", I simply asked which test case in the spreadsheet was wrong.
At that moment, the subject matter experts would either correct the spreadsheet or they would add examples to explain what was supposed to happen. The bug reports could -- in many cases -- be clearly defined as a test case problem. Indeed, in my experience, defining the bug as a broken test case makes the discussion much, much simpler.
Rather than having the experts try to explain a super-complex business process, the experts have to produce concrete examples of the process.
TDD requires that requirements are 100% correct. In such cases one could expect that conflicting requirements would be captured during the creation of tests. But the problem is that this isn’t the case in complex scenarios.
Not using TDD absolutely mandates that the requirements be 100% correct. Some claim that TDD can tolerate incomplete and changing requirements, whereas a non-TDD approach can't work with incomplete requirements.
If you don't use TDD, the contradiction is found late, during the implementation phase.
If you use TDD, the contradiction is found earlier, when the code passes some tests and fails other tests. Indeed, TDD gives you proof of a contradiction earlier in the process, long before user acceptance testing (and the arguments that come with it).
You have code which passes some tests and fails others. You look at only those tests and you find the contradiction. It works out really, really well in practice because now the users have to argue about the contradiction and produce consistent, concrete examples of the desired behavior.
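A contrived sketch of what that proof looks like: a hypothetical discount() rule and two expert-supplied rows that no single implementation can satisfy, so one of these tests must fail:

```python
import unittest


def discount(customer_type, order_total):
    """Hypothetical rule, written to one possible reading of the requirements."""
    if customer_type == "loyal" and order_total >= 100:
        return 0.10
    return 0.0


class ConflictingRequirements(unittest.TestCase):
    def test_sales_spreadsheet_row(self):
        # Sales: loyal customers always get 10%, regardless of order total.
        self.assertEqual(discount("loyal", 50), 0.10)

    def test_finance_spreadsheet_row(self):
        # Finance: no discount on any order under 100, ever.
        self.assertEqual(discount("loyal", 50), 0.0)


if __name__ == "__main__":
    unittest.main()
```

However discount() is written, one of the two tests fails; the contradiction is now concrete and the stakeholders have to resolve it.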
attempting it in the fashion of TDD will merely make it a maintenance nightmare and impossible for the team to maintain.
You can't win that argument. They're making this up. Sadly, you have no real facts, either. Any example you provide can be disputed.
The only way to make this point is to have code which costs less to maintain.
Furthermore, as it's a front-end application (not web-based), adding tests is pointless,
Everyone says this. It may even be partially true. If the application is reasonably well designed, the front-end does very little.
If the application is poorly designed, however, the front-end does too much and is difficult to test. This is a design problem, not a testing problem.
as the business drives changes (by changes they mean improvements, of course), the tests will become out of date; other developers who come onto the project in the future will not maintain them, and the tests will become more of a burden for them to fix, etc.
This is the same argument as above.
You can't win the argument. So don't argue.
"I am fully responsible for the rewrite of this product"
In that case,
Add tests anyway. But add tests as you go, incrementally. Don't spend a long time getting tests written first. Convert a little. Test a little. Convert a little more. Test a little more.
Use those tests until someone figures out that testing is working and asks why things go so well.
I had the same argument on a rewrite (from C++ to Java) and I simply used the tests even though they told me not to.
I was developing very quickly. I asked for concrete examples of correct results, which they sent in spreadsheets. I turned the spreadsheets into unittest.TestCase (without telling them) and used these to test.
When we were in user acceptance testing -- and mistakes were found -- I just asked for the spreadsheets with the examples to be reviewed, corrected and expanded to cover the problems found during acceptance testing.
I turned the corrected spreadsheets into unittest.TestCase (without telling them) and used these to test.
No one needs to know in detail why you are successful.
Just be successful.
Adding this as an answer (because it's long :) ), but the "best" structure of your project is subjective.
I tend to apply this exact architecture for my C++ projects, both at home (CMake + XCode) and at work (Visual Studio):
It is a good idea to keep the tests separate from the code (either as an extra library, or written directly into the test applications).
At home, I have the following:
static libs:
test libs: