Let's start at the beginning: mixed C and C++ code is fairly common, so you're in a big club to start with. There are huge C codebases in the wild, but once programmers have access to C++ in the same compiler, many refuse to write at least the new stuff in C: new modules start to be written in C++, at first just leaving the existing parts alone.
Then eventually some existing files get recompiled as C++, and some bridges can be deleted... but that may take a really long time.
You are somewhat ahead: your full system is already C++, just most of it is written "C-style". And you see the mix of styles as a problem, which you should not: C++ is a multi-paradigm language that supports many styles and allows them to co-exist for good reason. That is actually its main strength: you are not forced into a single style -- one that would be suboptimal here and there, and with some luck not everywhere.
Reworking the codebase is a good idea IF it is broken, or if it is in the way of development. But if it works (in the original sense of the word), please follow the most basic engineering principle: if it ain't broke, don't fix it. Leave the cold parts alone and put your effort where it counts: on the parts that are bad or dangerous, or on new features, refactoring the surrounding parts only as much as needed to make room for them.
If you seek general things to address, here's what is worth evicting from a C codebase:
- all the str* functions and char[] buffers -- replace them with a string class
- if you use sprintf, create a version that returns a string with the result (or writes it into one), and replace the usages. (If you never bothered with iostreams, do yourself a favor and just skip them, unless you like them; gcc provides perfect type safety for format checking out of the box -- just add the proper format attribute.)
- most malloc and free -- NOT replaced with new and delete, but with vector, list, map and other collections
- the rest of the memory management (after the previous two points it should be pretty rare) -- cover it with smart pointers or implement your own special collections
- replace all other resource usage (FILE*, mutexes, locks, etc.) with RAII wrappers or classes
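The sprintf point above can be sketched roughly like this -- a minimal, assumed implementation (the name `format` and the vsnprintf measure-then-write approach are my choices, not from the original answer):

```cpp
#include <cstdarg>
#include <cstdio>
#include <string>

// Hypothetical sprintf replacement that returns a std::string instead of
// filling a caller-supplied char buffer. The attribute (GCC/Clang only)
// makes the compiler type-check the format arguments.
#if defined(__GNUC__)
__attribute__((format(printf, 1, 2)))
#endif
std::string format(const char* fmt, ...) {
    va_list args;
    va_start(args, fmt);
    va_list copy;
    va_copy(copy, args);
    int len = std::vsnprintf(nullptr, 0, fmt, copy);  // first pass: measure
    va_end(copy);
    std::string result(len, '\0');
    std::vsnprintf(&result[0], len + 1, fmt, args);   // second pass: write
    va_end(args);
    return result;
}
```

With that in place, `sprintf(buf, "%d items", n)` call sites become `std::string s = format("%d items", n);` and the char buffers disappear.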
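For the RAII point, one low-effort sketch (an assumption on my part, not the only option) is a std::unique_ptr with a custom deleter wrapping FILE*:

```cpp
#include <cstdio>
#include <memory>
#include <stdexcept>
#include <string>

// Custom deleter so unique_ptr closes the FILE* automatically.
struct FileCloser {
    void operator()(std::FILE* f) const { if (f) std::fclose(f); }
};
using unique_file = std::unique_ptr<std::FILE, FileCloser>;

// Illustrative helper: open or throw; fclose runs when the handle
// goes out of scope, on every exit path including exceptions.
unique_file open_file(const std::string& path, const char* mode) {
    unique_file f(std::fopen(path.c_str(), mode));
    if (!f) throw std::runtime_error("cannot open " + path);
    return f;
}
```

The same pattern covers mutexes (std::lock_guard), sockets, and any other handle with a release function.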
When you're done with that, you approach the point where the codebase can be reasonably exception-safe, so you can drop the return-code football and use exceptions instead, with rare try/catch blocks in high-level functions only.
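That shift looks roughly like this (an illustrative sketch; the function names are made up):

```cpp
#include <stdexcept>
#include <string>

// Low-level code throws instead of returning an error code and
// making every caller check and propagate it.
void load_config(const std::string& path) {
    // ... instead of `return -1` on failure:
    throw std::runtime_error("config not found: " + path);
}

// The high-level entry point is the one rare place with try/catch.
bool run() {
    try {
        load_config("app.conf");
        return true;
    } catch (const std::exception& e) {
        // log e.what() and fail gracefully
        return false;
    }
}
```

Everything between `load_config` and `run` needs no error-handling code at all, which is where the "football" disappears.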
Beyond that, just write new code in healthy C++, and if some classes emerge that are good replacements in existing code, pick them up.
I didn't mention syntax-related stuff. Obviously use references instead of pointers in all new code, but rewriting old C parts just for that change is not good value. Casts you must address: eliminate all you can, and use the C++ cast variants in wrapper functions for the remainder. And very importantly, add const wherever applicable. These interleave with the earlier bullets. Also consolidate your macros, replacing what you can with an enum, inline function or template.
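The macro consolidation typically looks like this (the names here are illustrative, not from any particular codebase):

```cpp
// Before: untyped, unscoped, and SQUARE evaluates its argument twice.
#define BUFFER_SIZE 256
#define SQUARE(x) ((x) * (x))

// After: a typed, scoped constant...
constexpr int kBufferSize = 256;

// ...and a template that evaluates its argument exactly once
// and participates in overload resolution and type checking.
template <typename T>
constexpr T square(T x) { return x * x; }
```

Unlike `SQUARE(i++)`, `square(i++)` has well-defined behavior, and both replacements are visible to the debugger and respect namespaces.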
I suggest reading Sutter/Alexandrescu's C++ Coding Standards if not yet done and follow them closely.
Most compilers implement references as pointers. So the practical answer to your question is that there will generally be no difference in performance between the two. (As far as I know, it doesn't change aliasing analysis either.)
If you want to be 100% sure of that statement, inspect your compiler's output.
struct Small {
    int s;
};

void foo(Small* s)
{
    s->s = 1;
}

void bar(Small& s)
{
    s.s = 1;
}
Compiled with clang++ -O2 -S, the generated assembly is:
_Z3fooP5Small: # @_Z3fooP5Small
.cfi_startproc
# BB#0:
movl $1, (%rdi)
ret
_Z3barR5Small: # @_Z3barR5Small
.cfi_startproc
# BB#0:
movl $1, (%rdi)
ret
You can try that with a large struct or an enormously complex struct -- it doesn't matter; all you're passing to the function is a pointer.
That being said, there are semantic differences between the two. The most important one being that, as long as your program is free of undefined behavior, the overload that takes a reference is guaranteed to get a reference to a valid, live object. The pointer overload isn't.
Also, assigning to s in these two examples has completely different meanings. In the first function it would replace the pointer itself (i.e. whatever it pointed to remains unchanged but becomes unreachable from within that function; the caller is unaffected by the assignment).
In the second, it would call the appropriate assignment operator on the object passed in, with the effect visible to the caller.
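Concretely, reusing the Small struct from above (the assigned values are my own illustration):

```cpp
struct Small {
    int s;
};

void foo(Small* s) {
    s = nullptr;    // reseats the local pointer only; the caller's object
                    // is untouched and the caller's pointer still valid
}

void bar(Small& s) {
    s = Small{42};  // calls Small's assignment operator on the caller's
                    // object; the caller observes s.s == 42 afterwards
}
```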
So your choice shouldn't be made on a potential performance difference (there will generally be none), but on semantics. What you need the function to be able to do, and how you should be able to call it, will dictate what overload(s) you need to provide.
Your .cpp files all compile into the same binary, so separating code into files is merely a development convenience that keeps the code modular. After parsing the command-line parameters, you can decide which calls to perform in your application.
You iterate through the application's arguments via the argv[] array passed to your main() entry point.
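A minimal sketch of that iteration -- the helper name `collect_args` is hypothetical, just a convenient way to get the arguments into C++ strings:

```cpp
#include <string>
#include <vector>

// Copy the C-style argv array into a vector of std::string,
// skipping argv[0], which holds the program name.
std::vector<std::string> collect_args(int argc, char* argv[]) {
    return std::vector<std::string>(argv + 1, argv + argc);
}

// In main(int argc, char* argv[]) you would then write:
//   for (const std::string& arg : collect_args(argc, argv)) { ... }
```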
As for best practices, it's best not to reinvent the wheel: use a command-line parser library, as answered here.