Short answer: My current opinion on `auto` is that you should use `auto` by default unless you explicitly want a conversion. (Slightly more precisely: "... unless you want to explicitly commit to a type, which is nearly always because you want a conversion.")
Longer answer and rationale:
Write an explicit type (rather than `auto`) only when you really want to explicitly commit to a type, which nearly always means you want to explicitly get a conversion to that type. Off the top of my head, I recall two main cases:
- (Common) The `initializer_list` surprise: `auto x = { 1 };` deduces `initializer_list`. If you don't want `initializer_list`, say the type -- i.e., explicitly ask for a conversion.
- (Rare) The expression templates case, such as `auto x = matrix1 * matrix2 + matrix3;`, which captures a helper or proxy type not meant to be visible to the programmer. In many cases it's fine and benign to capture that type, but if you really want the expression to collapse and do the computation, then say the type -- i.e., again explicitly ask for a conversion.
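The common case above fits in a few lines; here is a minimal sketch (the helper function name is mine, just for illustration):

```cpp
#include <initializer_list>
#include <type_traits>

// auto with a braced initializer deduces std::initializer_list,
// not the element type; saying the type asks for the conversion.
inline bool deduction_matches_expectation() {
    auto x = { 1 };   // deduces std::initializer_list<int>
    int  y = { 1 };   // explicitly committing to int
    (void)x; (void)y;
    return std::is_same<decltype(x), std::initializer_list<int>>::value
        && std::is_same<decltype(y), int>::value;
}
```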
Routinely use `auto` by default otherwise, because using `auto` avoids pitfalls and makes your code more correct, more maintainable and robust, and more efficient. Roughly in order from most to least important, in the spirit of "write for clarity and correctness first":
- Correctness: Using `auto` guarantees you'll get the right type. As the saying goes, if you repeat yourself (say the type redundantly), you can and will lie (get it wrong). Here's a usual example: `void f( const vector<int>& v ) { for( /*...*/ )` -- at this point, if you write the iterator's type explicitly, you have to remember to write `const_iterator` (did you?), whereas `auto` just gets it right.
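A sketch of that point, under the same `const vector<int>&` setup (the function name is illustrative):

```cpp
#include <type_traits>
#include <vector>

// Inside a function taking const vector<int>&, begin() yields a
// const_iterator; auto deduces that so you can't get it wrong.
inline bool auto_gets_const_iterator(const std::vector<int>& v) {
    auto it = v.begin();
    return std::is_same<decltype(it),
                        std::vector<int>::const_iterator>::value;
}
```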
- Maintainability and robustness: Using `auto` makes your code more robust in the face of change, because when the expression's type changes, `auto` will continue to resolve to the correct type. If you instead commit to an explicit type, changing the expression's type will inject silent conversions when the new type converts to the old type, or needless build breaks when the new type still works like the old type but doesn't convert to it. For example, if you change a `map` to an `unordered_map` (which is always fine if you aren't relying on order), then with `auto` for your iterators you'll seamlessly switch from `map<>::iterator` to `unordered_map<>::iterator`, whereas writing `map<>::iterator` everywhere explicitly means you'll be wasting your valuable time on a mechanical code-fix ripple (unless an intern is walking by and you can foist the boring work off on them).
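A sketch of that robustness point, using a hypothetical counting function: the loop body never names the iterator type, so switching the parameter from `map` to `unordered_map` would require no edits here.

```cpp
#include <map>
#include <string>

// Count positive values; auto tracks whatever iterator type the
// container provides, so the body survives a container switch.
inline int count_positive(const std::map<std::string, int>& m) {
    int n = 0;
    for (auto it = m.begin(); it != m.end(); ++it)
        if (it->second > 0) ++n;
    return n;
}
```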
- Performance: Because `auto` guarantees no implicit conversion will happen, it guarantees better performance by default. If instead you say the type, and it requires a conversion, you will often silently get that conversion whether you expected it or not.
- Usability: Using `auto` is your only good option for hard-to-spell and unutterable types, such as lambdas and template helpers, short of resorting to repetitive `decltype` expressions or less-efficient indirections like `std::function`.
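For instance, a lambda's closure type cannot be spelled at all; `auto` binds it with zero overhead (this tiny example is mine, not from the original):

```cpp
// The closure type is compiler-generated and unutterable; auto is
// the only zero-overhead way to bind it to a named variable.
inline int squared_twice(int x) {
    auto square = [](int n) { return n * n; };
    return square(square(x));
}
```

Wrapping the same lambda in a `std::function<int(int)>` would work too, but adds type-erasure overhead.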
- Convenience: And, yes, `auto` is less typing. I mention that last for completeness, because it's a common reason to like it, but it's not the biggest reason to use it.
Hence: Prefer to say `auto` by default. It offers so much simplicity, performance, and clarity goodness that you're only hurting yourself (and your code's future maintainers) if you don't. Only commit to an explicit type when you really mean it, which nearly always means you want an explicit conversion.
Yes, there is (now) a GotW about this.
Yes, Virginia, there is a Santa Claus.
The notion of using programs to modify programs has been around a long time. The original idea came from John von Neumann in the form of stored-program computers. But machine code modifying machine code in arbitrary ways is pretty inconvenient.
People generally want to modify source code. This is mostly realized in the form of program transformation systems (PTS).
PTS generally offer, for at least one programming language, the ability to parse to ASTs, manipulate that AST, and regenerate valid source text. If you dig around, for most mainstream languages somebody has built such a tool (Clang is an example for C++, the Java compiler offers this capability as an API, Microsoft offers Roslyn, Eclipse has the JDT, ...) with a procedural API that is actually pretty useful. For the broader community, almost every language-specific community can point to something like this, implemented with various levels of maturity (usually modest, many being "just parsers producing ASTs"). Happy metaprogramming.
[There's a reflection-oriented community that tries to do metaprogramming from inside the programming language, but it only achieves "runtime" behaviour modification, and only to the extent that the language compilers made some information available via reflection. With the exception of LISP, there are always details about the program that are not available by reflection ("Luke, you need the source"), and these always limit what reflection can do.]
The more interesting PTS do this for arbitrary languages (you give the tool a language description as a configuration parameter, including at a minimum the BNF). Such PTS also allow you to do "source to source" transformation, i.e., specify patterns directly using the surface syntax of the targeted language; using such patterns, you can match code fragments of interest, and/or find and replace code fragments. This is far more convenient than a programming API, because you don't have to know every microscopic detail of the ASTs to do most of your work. Think of this as meta-metaprogramming :-}
A downside: unless the PTS offers various kinds of useful static analyses (symbol tables, control- and data-flow analyses), it is hard to write really interesting transformations this way, because you need to check types and verify information flows for most practical tasks. Unfortunately, this capability is in fact rare among general PTS. (It is always unavailable with the ever-proposed "If I just had a parser..." approach; see my bio for a longer discussion of "Life After Parsing".)
There's a theorem that says if you can do string rewriting [thus tree rewriting] you can do arbitrary transformation; and thus a number of PTS lean on this to claim you can metaprogram anything with just the tree rewrites they offer. While the theorem is satisfying in the sense you are now sure you can do anything, it is unsatisfying in the same way that a Turing Machine's ability to do anything doesn't make programming a Turing Machine the method of choice. (The same holds true for systems with just procedural APIs, if they will let you make arbitrary changes to the AST [and in fact I think this is not true of Clang]).
What you want is the best of both worlds: a system that offers you the generality of the language-parameterized kind of PTS (even handling multiple languages), plus the additional static analyses, plus the ability to mix source-to-source transformations with procedural APIs. I only know of two that do this:
- Rascal (MPL) MetaProgramming Language
- our DMS Software Reengineering Toolkit
Unless you want to write the language descriptions and static analyzers yourself (for C++ this is a tremendous amount of work, which is why Clang was constructed both as a compiler and as a general procedural metaprogramming foundation), you will want a PTS with mature language descriptions already available. Otherwise you will spend all your time configuring the PTS, and none doing the work you actually wanted to do. [If you pick a random, non-mainstream language, this step is very hard to avoid.]
Rascal tries to do this by co-opting "OPP" (Other People's Parsers), but that doesn't help with the static analysis part. I think they have Java pretty well in hand, but I'm very sure they don't do C or C++. Still, it's an academic research tool; hard to blame them.
I emphasize, our [commercial] DMS tool does have Java, C, C++ full front ends available. For C++, it covers almost everything in C++14 for GCC and even Microsoft's variations (and we are polishing now), macro expansion and conditional management, and method-level control and data flow analysis. And yes, you can specify grammar changes in a practical way; we built a custom VectorC++ system for a client that radically extended C++ to use what amount to F90/APL data-parallel array operations. DMS has been used to carry out other massive metaprogramming tasks on large C++ systems (e.g., application architectural reshaping). (I am the architect behind DMS).
Happy meta-metaprogramming.
First, some rules of thumb:
- Use `std::unique_ptr` as a no-overhead smart pointer. You shouldn't need to bother with raw pointers all that often. `std::shared_ptr` is likewise unnecessary in most cases; a desire for shared ownership often betrays a lack of thought about ownership in the first place.
- Use `std::array` for static-length arrays and `std::vector` for dynamic ones.
- Use generic algorithms extensively, in particular those in `<algorithm>`, `<numeric>`, `<iterator>`, and `<functional>`.
- Use `auto` and `decltype()` wherever they benefit readability. In particular, when you want to declare a thing, but of a type that you don't care about, such as an iterator or a complex template type, use `auto`. When you want to declare a thing in terms of the type of another thing, use `decltype()`.
- Make things type-safe when you can. When you have assertions that enforce invariants on a particular kind of thing, that logic can be centralised in a type, and this doesn't necessarily make for any runtime overhead. It should also go without saying that C-style casts (`(T)x`) should be avoided in favour of the more explicit (and searchable!) C++-style casts (e.g., `static_cast`).
- Finally, know how the rule of three has become the rule of five with the addition of the move constructor and move assignment operator. And understand rvalue references in general and how to avoid copying.
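As a small sketch of the first rule of thumb (the `Widget` type and function name are hypothetical, just for illustration):

```cpp
#include <memory>
#include <utility>

struct Widget { int id; };

// unique_ptr: sole ownership, no overhead, deterministic destruction.
// Copying won't compile; ownership is transferred explicitly via move.
inline bool unique_ptr_demo() {
    std::unique_ptr<Widget> w(new Widget{42});
    std::unique_ptr<Widget> w2 = std::move(w);  // w is now empty
    return !w && w2 && w2->id == 42;
}
```

(In C++14 and later, `std::make_unique<Widget>(...)` is preferable to spelling out `new`.)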
C++ is a complex language, so it’s difficult to characterise how best to use all of it. But the practices of good C++ development haven’t changed fundamentally with C++11. You should still prefer memory-managed containers over manual memory management—smart pointers make it easy to efficiently do this.
I would say that modern C++ is indeed mostly free of manual memory management—the advantage to C++’s memory model is that it’s deterministic, not that it’s manual. Predictable deallocations make for more predictable performance.
As for a compiler, G++ and Clang are both competitive in terms of C++11 features, and rapidly catching up on their deficiencies. I don’t use Visual Studio, so I can speak neither for nor against it.
Finally, a note about `std::for_each`: avoid it in general. `transform`, `accumulate`, and `erase`-`remove_if` are good old functional `map`, `fold`, and `filter`. But `for_each` is more general, and therefore less meaningful -- it doesn't express any intent other than looping. Besides that, it's used in the same situations as range-based `for`, and is syntactically heavier, even when used point-free. Consider: