When you plan, don't plan every possible thing about the application in advance. Plan in baby steps. What is the absolute minimum functionality you need to start using the application? Start there.
When you start your project, only code out the absolute minimum functionality. When you do code it out, make sure you are writing good, clean code with smart encapsulation. This will minimize errors that come from making changes later.
Iterate on that minimum functionality until you are happy with it. Then start adding in new functionality and enhancements, one at a time. Again focus on writing good, clean code with smart encapsulation.
If you plan in baby steps and write clean code, you will minimize the number of changes you actually need to make. By the time you've finished writing that first feature, you should have adopted the patterns your application's foundation will sit on. If there are problems with that foundation, your next features should quickly reveal them. It will be easier to see how the pieces integrate. The changes you do make should, at this point, cause minimal disruption.
Your goal is simply to avoid compiling greet.cpp
twice (once for the application and once for the tests). Correct?
Like you said, there are lots of ways of doing this. It's really a question of what build system you're using and how your code is (or could be) organized.
Use a library
This is your last suggestion: "Make a greeting library and link both main and unittests targets to it."
This is the cleanest solution:
- It makes it obvious what you're doing and it should work well with any build system or IDE out there.
- Having the application and test suite share the same library ensures that you're testing production-ready code (instead of, for example, accidentally using different compiler options for application versus test suite and failing to catch problems).
- Putting your unit testable code in a library can help encourage good design (by keeping you from slipping dependencies on UI code or other non-unit-testable code into your library).
On the other hand, it may be more hassle to set up initially, or your project design may make it hard to cleanly split into libraries.
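As a rough sketch of the library approach in plain GNU Make (the file names greet.cpp, main.cpp, and testgreet.cpp come from your question; the library name libgreet.a is made up for illustration):

```make
%.o: %.cpp
	$(CXX) -c $<

# Package the shared code as a static library.
libgreet.a: greet.o
	$(AR) rcs $@ $^

# Both targets link against the same library, so the tests exercise
# exactly the code the application ships with.
main_program: main.o libgreet.a
	$(CXX) -o $@ main.o -L. -lgreet

unit_tests: testgreet.o libgreet.a
	$(CXX) -o $@ testgreet.o -L. -lgreet
```

With CMake the same idea is `add_library(greet greet.cpp)` plus a `target_link_libraries` call on each of the two executables.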
Share the object files
Your first three suggestions are variations on this. Some build systems make this easy. For example, the following Makefile would work:
%.o: %.cpp
	$(CXX) -c $<

main_program: greet.o main.o
	$(CXX) -o $@ $+

unit_tests: greet.o testgreet.o
	$(CXX) -o $@ $+
If you're using something like Make, this should be very easy to set up and get going. (Simply for the sake of organizing your source files, you may want to put the test sources in a separate directory, as you mentioned.)
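If you do put the test sources in their own directory, GNU Make's VPATH is usually enough to keep a Makefile like the one above working; the tests/ directory name here is just an assumption:

```make
# Also search tests/ for source files (directory name assumed).
VPATH = tests

%.o: %.cpp
	$(CXX) -c $<

unit_tests: greet.o testgreet.o
	$(CXX) -o $@ $+
```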
Other build systems make this harder. For example, you could use IDE-managed project files and configure the two IDE projects to share an output directory, so that they see each other's output. That should work, but it's a bit hackish, and it invites problems if one project's settings diverge at all from the other's: a project build may think an object file is suitable for its own use even though the other project built it with incompatible settings.
Best Answer
In the old dark ages, software was built using the famous waterfall approach: plan, analyse requirements, design the system, build the system, test the system, and run the system.
This goes back to the fifties, a time when this separation of duties and specialisation of tasks was a strong reality in a Tayloristic environment.
I think that is the origin of the plan/build/run concept, long before COBIT and ISACA. Some clever consultant just left the detailed stages out, to make it easy to grasp for non-technical managers and auditors.
Nowadays, big consulting firms continue to sell the idea as a proven path to success.
However, everyone who has been involved in real software development knows that you can't plan all the details from scratch, and that you need some degree of flexibility to cope with uncertainty. This is why agile is so popular today: adaptive and iterative planning goes along with development. And the growing popularity and success of DevOps proves that it's better to integrate development (build) and operations (run).
Look at project management itself. PMBOK explains that complex projects require progressive elaboration. PMBOK and ISO 21500 both see planning not as a phase (as in plan/build/run) but as a set of processes carried out throughout the project.
With this in mind, how can plan/build/run be implemented? Project managers in a plan department, gradually losing their understanding of the technology and the business? Developers in a build department who must go through the plan department to organize their work? And after go-live, the same developers no longer involved in support (even though they know the system best), because support is run and not build?
I have personally witnessed such a transformation. The end result was that it became difficult to get projects done, with a huge overhead of interdepartmental communication, where the same people had delivered efficiently as an integrated team before the plan/build/run organization.