In short: no, there is no benefit to using "binary arithmetic" (in the sense the question asks about) or "C style" in C++.
What makes you believe bitwise arithmetic would be any faster? You can scatter bitwise operations all over the code, but what exactly would make it faster?
The thing is, almost any trivial problem you are trying to solve can be solved more easily with the standard high-level features (including the standard library, the STL and its algorithms). In some specific cases you may indeed want to manipulate individual bits, for example when working with bitmasks, or when storing very compact data, such as in a compression algorithm, a dense file format, or code for an embedded system.
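Bitmasks are one of the legitimate cases. Here is a minimal sketch (the flag names are made up for illustration) showing raw bit operations next to std::bitset, which expresses the same idea more readably:

```cpp
#include <bitset>
#include <cstdint>
#include <iostream>

// Hypothetical permission flags packed into one byte.
enum Permission : std::uint8_t {
    Read    = 1u << 0,
    Write   = 1u << 1,
    Execute = 1u << 2
};

int main() {
    // "C style": manual bit twiddling on an integer.
    std::uint8_t flags = Read | Write;
    flags |= Execute;                              // set a bit
    flags &= static_cast<std::uint8_t>(~Write);    // clear a bit
    bool canRead = (flags & Read) != 0;

    // Modern C++: std::bitset says the same thing more clearly.
    std::bitset<3> perms;
    perms.set(0).set(2);                           // Read and Execute
    bool canExecute = perms.test(2);

    std::cout << canRead << ' ' << canExecute << '\n';
}
```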
If you are concerned with performance, always write a simple, straightforward algorithm first. Just make it work. Then measure the time it takes on typical input. Only if it is too slow at that point should you try optimizing it by hand with these "bitwise arithmetic" tricks; and once you are done, measure whether the code actually got faster. The chances are that it did not, unless you really know what you are doing in that specific situation.
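As a rough sketch of "measure first" (the input and the operation being timed are just placeholders), std::chrono is enough to time the obvious implementation against typical data before you reach for anything clever:

```cpp
#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <vector>

int main() {
    // Typical input: a large vector of pseudo-random values.
    std::vector<int> v(1'000'000);
    std::mt19937 gen(42);
    std::uniform_int_distribution<int> dist(0, 1'000'000);
    for (auto& x : v) x = dist(gen);

    // Measure the simple, obvious solution before trying to be clever.
    auto start = std::chrono::steady_clock::now();
    std::sort(v.begin(), v.end());
    auto stop = std::chrono::steady_clock::now();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
    std::cout << "sort took " << ms.count() << " ms\n";
}
```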
Frankly, the best way to understand these kinds of low-level, performance-oriented constructs is to study assembly language. It really makes you realize that no, writing some bit-manipulating wizardry is not any faster than calling sort(begin(v), end(v)). Just because you operate at a low level doesn't mean you operate fast. In general, algorithms matter more than implementation details!
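To make that last point concrete, here is a small sketch: a hand-"optimized" linear scan is still O(n), while the plain standard-library call on sorted data is O(log n). The better algorithm wins no matter how cleverly the loop is written.

```cpp
#include <algorithm>
#include <vector>

// Hand-rolled linear search, "optimized" with pointer arithmetic.
// Still O(n): no amount of low-level cleverness changes that.
bool containsLinear(const std::vector<int>& v, int key) {
    for (const int *p = v.data(), *last = p + v.size(); p != last; ++p)
        if (*p == key) return true;
    return false;
}

// The obvious high-level version: O(log n) on sorted data.
bool containsBinary(const std::vector<int>& v, int key) {
    return std::binary_search(v.begin(), v.end(), key);
}

int main() {
    std::vector<int> v{1, 3, 5, 7, 9, 11};
    bool a = containsLinear(v, 7);   // true, but scans element by element
    bool b = containsBinary(v, 7);   // true, via the better algorithm
    return (a && b) ? 0 : 1;
}
```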
Finally, whatever "C style" means exactly, please stay away from it when writing C++. C and C++ are two very different languages; don't mix their idioms.
Bjarne Stroustrup gave a great talk about C++ style at Microsoft's GoingNative 2012 conference in February; please take a look: http://channel9.msdn.com/Events/GoingNative/GoingNative-2012/Keynote-Bjarne-Stroustrup-Cpp11-Style
The part between roughly the 10- and 15-minute marks is especially good, where he compares old C-style code with modern C++ style.
Everything has a cost, even if it isn't measured in runtime performance.
Encoding such assumptions into the type system sounds like a good idea. But it is not without its flaws. In particular, it requires you to have and use a bunch of increasingly specific types for increasingly specific assumptions.
Let's say that you have a function that takes an array from the user and modifies the first three elements in it. Now, this function makes two assumptions: that there's actually an array and that the array is at least 3 elements long.
There are types which can encode both of these assumptions. The Guideline Support Library type span can cover both of them. But just look at the code for that type; if it weren't available, you probably wouldn't write it yourself.
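As a sketch of what that buys you (using std::span from C++20 here, which plays the same role as the GSL type), the signature itself states "a contiguous range of elements" instead of a bare pointer, and only the length assumption is left to a runtime check:

```cpp
#include <cassert>
#include <span>
#include <vector>

// The type documents "there is an array"; the assert documents
// "it has at least three elements" until a richer type exists.
void touchFirstThree(std::span<int> data) {
    assert(data.size() >= 3);
    data[0] = 1;
    data[1] = 2;
    data[2] = 3;
}

int main() {
    std::vector<int> v(5);
    touchFirstThree(v);      // a vector converts to span implicitly

    int raw[4] = {};
    touchFirstThree(raw);    // so does a built-in array
}
```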
The more such assumptions you have, and the more special-case they get, the harder it is to write a type just for them. After all, span only solves this particular problem as a partial by-product of solving its real problem: having a way to represent an array of some size.
So it's a balancing act. You don't want to spend more time writing special-case types, but you do need some to cover a lot of bases. Where exactly you draw the line depends on your needs, but I don't feel that trying to encode everything into the type system is worthwhile.
Also, having contracts as part of C++, which people are working on (see the proposal PDF), would bridge the gap here in many of the special cases.
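A rough sketch of what such a contract would express, using plain assertions as a stand-in for the proposed contract syntax (the actual proposal attaches the preconditions to the declaration rather than burying them in the body):

```cpp
#include <cassert>
#include <cstddef>

// What a language-level contract would state on the declaration,
// approximated here with runtime assertions in the body:
//   "arr is not null and len is at least 3".
void touchFirstThree(int* arr, std::size_t len) {
    assert(arr != nullptr);
    assert(len >= 3);
    arr[0] = 1;
    arr[1] = 2;
    arr[2] = 3;
}

int main() {
    int buffer[4] = {};
    touchFirstThree(buffer, 4);
}
```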
There is also the issue of dealing with combinations of such contracts. The not_null contract is generally a good idea, but by its very nature it cannot work with move-only types that leave the moved-from object null. Thus, not_null<unique_ptr<T>> is not a functional type.
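A minimal sketch of why the two contracts clash (this is a toy wrapper, not the GSL implementation): a not_null-style type must hold a non-null value at all times, but handing the unique_ptr onward necessarily moves it out and leaves null behind.

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Toy not_null wrapper: the invariant is "the held value is never null".
template <typename Ptr>
class NotNull {
public:
    explicit NotNull(Ptr p) : ptr_(std::move(p)) { assert(ptr_ != nullptr); }

    // To pass ownership on, we have to move the pointer out,
    // which leaves ptr_ null and silently breaks the invariant.
    Ptr release() { return std::move(ptr_); }

    decltype(auto) operator*() const { return *ptr_; }

private:
    Ptr ptr_;
};

int main() {
    NotNull<std::unique_ptr<int>> p(std::make_unique<int>(42));
    auto owned = p.release();   // p now holds null: the "not null" promise is gone
    assert(*owned == 42);
    // *p at this point would dereference a null unique_ptr: undefined behavior.
}
```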
Again, that's not to say that you shouldn't have these. But you really need to think about when it is truly appropriate to have a type encapsulate a contract and when it is not.
Best Answer
As noted in the comments, size of the binary could be very important for some embedded systems - especially old ones.
However, as you've noted in the update to the question, this is one of the most pointy-haired schemes I've heard of in a long time. You'll be penalized for including a library that's well tested and solves a lot of problems, but they'll let a bubble sort slip past?
Seriously, it would be useful to see some justification for their main argument that the binary size correlates with other qualities of the code. It's entirely possible that I'm dead wrong and there is such a correlation, but I kind of doubt it.