Such a language extension would be significantly more complicated and invasive than you seem to think. You can't just add
if the lifetime of a variable of a stack-bound type ends, call Dispose
on the object it refers to
to the relevant section of the language spec and be done. I'll ignore the problem of temporary values (new Resource().doSomething()), which can be solved by slightly more general wording; it is not the most serious issue. For example, this code would be broken (and this sort of thing probably becomes impossible to do in general):
File openSavegame(string id) {
string path = ... id ...;
File f = new File(path);
// do something, perhaps logging
return f;
} // f goes out of scope, caller receives a closed file
Now you need user-defined copy constructors (or move constructors) and must start invoking them everywhere. Not only does this carry performance implications, it also makes these things effectively value types, whereas almost all other objects are reference types. In Java's case, this is a radical deviation from how objects work. In C# less so (it already has structs, but no user-defined copy constructors for them, AFAIK), but it still makes these RAII objects more special. Alternatively, a limited form of linear types (cf. Rust) could also solve the problem, at the cost of prohibiting aliasing, including parameter passing (unless you want to introduce even more complexity by adopting Rust-like borrowed references and a borrow checker).
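To make the trade-off concrete, here is a minimal C++ sketch (names like FileHandle are hypothetical) of how a move constructor solves the openSavegame problem above: the move leaves the source object empty, so its destructor becomes a no-op and the caller receives a still-open handle. This is exactly the machinery the paragraph says Java lacks.

```cpp
#include <cstdio>
#include <string>
#include <utility>

// Sketch only: a minimal RAII file handle with a move constructor.
class FileHandle {
public:
    explicit FileHandle(const std::string& path)
        : f_(std::fopen(path.c_str(), "w")) {}

    FileHandle(FileHandle&& other) noexcept : f_(other.f_) {
        other.f_ = nullptr;              // source no longer owns the handle
    }
    FileHandle(const FileHandle&) = delete;
    FileHandle& operator=(const FileHandle&) = delete;

    ~FileHandle() {
        if (f_) std::fclose(f_);         // only the final owner closes
    }

    bool isOpen() const { return f_ != nullptr; }

private:
    std::FILE* f_;
};

FileHandle openSavegame(const std::string& id) {
    FileHandle f("savegame_" + id + ".dat");
    // do something, perhaps logging
    return f;                            // moved (or elided), not closed
}
```

Note that deleting the copy constructor is what forces single ownership; without move semantics (as in Java), the only options left are the reference semantics and finalizer problems the answer describes.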
It can be done technically, but you end up with a category of things which are very different from everything else in the language. This is almost always a bad idea, with consequences for implementers (more edge cases, more time/cost in every department) and users (more concepts to learn, more possibility of bugs). It's not worth the added convenience.
Mandatory disclaimers
(1) Because people who have seen the code can't say anything about it, and people who can freely comment on it have never seen the actual code, all we can do here is speculate. Therefore, this is not an answer, just speculation.
(2) This is not the typical way I write C++, because most of the projects I work on allow exceptions, at least on a local basis (i.e. not crossing application boundaries), and the coding standard ensures that there are always appropriate exception handlers in the right places. This answer was written as an exploration of an interesting thought, not as a sharing of experience.
My opinion is that to avoid the issue of partially constructed objects (or "state"), one must first fundamentally change the way parameter (precondition) validation is performed.
The change is this: instead of validating and assigning parameters one-by-one, one must perform the complete validation of all parameters together, in a side-effect-free manner.
In addition to that change, the role of the class constructor also changes. Instead of handling both precondition validation and state initialization, it "outsources" both to someone else; it retains only the responsibilities that are "failproof" (incapable of failing).
For example, assigning a primitive value (e.g. an integer) to a primitive variable is failproof, provided that the variable has valid storage. Another example is the ownership transfer of a pointer from one smart pointer to another.
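The two changes above can be sketched together in C++ (the class and its parameters are hypothetical): a static factory performs the complete, side-effect-free validation of all parameters, and the private constructor is left with only failproof assignments of already-validated values.

```cpp
#include <optional>
#include <string>
#include <utility>

// Sketch only: validation is outsourced to a factory; the constructor
// performs nothing but failproof member initialization.
class Temperature {
public:
    static std::optional<Temperature> tryCreate(double celsius,
                                                std::string sensorId) {
        // Complete, side-effect-free validation of all parameters together.
        if (celsius < -273.15) return std::nullopt;
        if (sensorId.empty())  return std::nullopt;
        return Temperature(celsius, std::move(sensorId));
    }

    double celsius() const { return celsius_; }

private:
    // Private constructor: only failproof operations (a plain assignment
    // and a noexcept string move) remain here.
    Temperature(double celsius, std::string sensorId)
        : celsius_(celsius), sensorId_(std::move(sensorId)) {}

    double celsius_;
    std::string sensorId_;
};
```

With this shape, a caller can never observe a partially constructed Temperature: either tryCreate returns a fully valid object, or it returns std::nullopt without any state having been touched.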
One of the biggest advances of C++11 is that smart pointers (and many other things) move toward making at least some operations "failproof", by offering the option of isolating the "failable" (potentially failing) operations into separate methods.
Ultimately, however, I must say that the "no exceptions" rule is sometimes impractical for at least some types of application development. How else would one deal with std::bad_alloc without exceptions? Should the system crash and burn?
Mission-critical systems prevent out-of-memory issues by ensuring system-wide determinism in memory usage. Everything is preallocated; objects are merely placement-newed on allocators. There is a maximum number of instances prescribed for each type of object; an attempt to exceed the maximum will either be rejected, or result in the eviction of another less important, not-actively-in-use object.
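A toy sketch of that preallocation scheme (the type names and the capacity of 4 are invented for illustration): all storage is reserved up front, objects are placement-newed into slots, and the prescribed maximum can never be exceeded, so allocation can never fail at runtime.

```cpp
#include <cstddef>
#include <new>

// Hypothetical tracked object; trivially destructible, so the pool
// never needs to run destructors explicitly in this sketch.
struct Track { int id; };

class TrackPool {
public:
    static const std::size_t kMax = 4;   // prescribed maximum

    // Returns nullptr instead of throwing: the excess object is rejected.
    Track* add(int id) {
        if (count_ == kMax) return nullptr;
        return new (&storage_[count_++ * sizeof(Track)]) Track{id};
    }

    std::size_t size() const { return count_; }

private:
    // All storage preallocated as part of the pool object itself.
    alignas(Track) unsigned char storage_[kMax * sizeof(Track)];
    std::size_t count_ = 0;
};
```

A production pool would also support removal and the "evict the least important object" policy the paragraph mentions; this sketch only shows the reject-on-full half.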
This may be why we keep hearing those memes about "this enemy tracking system is capable of simultaneously tracking 256 different objects." When a 257th object wants to be added to the system, one of the least important objects must go. Since none of us commenting here have seen any of the code, this is just speculation.
Best Answer
When you acquire any resource in the member functions of your class and haven't delegated the responsibility of releasing it to another object/function, you need to make sure that you release that resource in the destructor.
If your class acquires a file handle by using std::ifstream::open(), you shouldn't have to explicitly call std::ifstream::close(), since the destructor of std::ifstream will take care of it, i.e. the responsibility has been delegated to another object. However, if your class acquires a file handle by using fopen(), you will have to make sure that the destructor of your class calls fclose() appropriately.
As a rule of thumb, you should assume that each class you use, whether from the standard library, a vendor's library, or an in-house library, adheres to this principle. If it doesn't, that's a defect and must be fixed by the party responsible for it.
You shouldn't have to call std::ifstream::close(). If the file doesn't get closed, you should inform the vendor that they need to fix the problem. In the meantime, write a wrapper class that takes care of the problem and use that wrapper class in the rest of your code.