C++ – What are the consequences of no virtual destructor for this base class

c++ virtual-functions

I found the following sample code here:
https://stackoverflow.com/a/5854862/257299

struct Base { virtual Base& operator+=(int) = 0; };

struct X : Base
{
    X(int n) : n_(n) { }
    X& operator+=(int n) { n_ += n; return *this; }
    int n_;
};

struct Y : Base
{
    Y(double n) : n_(n) { }
    Y& operator+=(int n) { n_ += n; return *this; }
    double n_;
};

void f(Base& x) { x += 2; } // run-time polymorphic dispatch

I have read many Q&As about when (and when not) to use a virtual destructor, but I am stumped by this sample code. My textbook understanding of C++ says: "Aha! A base class without a virtual destructor is bad. A memory leak or undefined behaviour may occur." I also think this style appears in the C++ standard library, but I can't think of an example off the top of my head.

What happens if I use X (and Y) like this…?

X* x = new X(5);
// Scenario 1
delete x;  // undefined behaviour...?  Does default dtor for `Base` get called?
// Scenario 2
Base* b = x;
delete b;  // undefined behaviour...?  (deleting a derived object through a base pointer)

Perhaps I am confused about (a) using a class safely even though it lacks a virtual destructor versus (b) designing the class so that it cannot be misused (safety by design).

In case (a), to use the class safely, only ever stack-allocate it. This requires discipline!

In case (b), either add a virtual destructor to Base or force stack allocation, e.g. via a private constructor and a public static factory method. This does not require discipline. (Although this comment from David Rodríguez confused me even more: "Why would you want to do this [prevent heap allocation]?")
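To make case (b) concrete, here is a rough sketch of what I have in mind (the names Base1, Base2, and X1 are mine, purely for illustration). Option 1 adds the virtual destructor; option 2 instead makes the dangerous usage impossible to write, using a protected destructor plus a deleted operator new rather than the factory-method idea, since that seems like the more direct way to rule out the bad delete:

#include <cstddef>

// Option 1: a public virtual destructor, so deleting a derived object
// through a Base1 pointer is well-defined.
struct Base1
{
    virtual Base1& operator+=(int) = 0;
    virtual ~Base1() = default;
};

struct X1 : Base1
{
    X1(int n) : n_(n) { }
    X1& operator+=(int n) override { n_ += n; return *this; }
    int n_;
};

// Option 2: keep the destructor non-virtual, but make the dangerous case
// unwritable: the protected destructor means `delete` through a Base2*
// doesn't compile, and the deleted operator new means derived objects
// can't be created with plain `new` in the first place.
struct Base2
{
    virtual Base2& operator+=(int) = 0;
    static void* operator new(std::size_t) = delete;
protected:
    ~Base2() = default;
};

int main()
{
    Base1* b = new X1(5);
    *b += 2;     // run-time polymorphic dispatch, as before
    delete b;    // fine: Base1 has a virtual destructor
}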

Best Answer

In your first scenario, the behavior is defined. You have a pointer to X and you destroy an X via that pointer. The fact that X is derived from Base doesn't cause a problem.

The second scenario is where the big problem arises. Here you have undefined behavior, because you're destroying a derived object via a pointer to the base object. In such a case you must have a virtual dtor to get defined behavior.

The exact result of doing this varies. In some cases the program crashes. In others, the base dtor is invoked but the derived dtor isn't, so the object is partially (but not completely) destroyed. I gave some concrete examples of what can actually happen in one particular case in an answer on SO. That's just one case though; changing the compiler or the code could change the result completely.
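To make the "partially destroyed" outcome concrete, here's a small sketch of my own (not the SO answer mentioned above; the names are made up) showing what commonly happens when the derived part owns a resource:

#include <iostream>

struct base_no_virt {
    ~base_no_virt() { std::cout << "~base_no_virt\n"; }  // note: not virtual
};

struct holder : base_no_virt {
    char *buf = new char[64];                             // resource owned by the derived part
    ~holder() { std::cout << "~holder\n"; delete[] buf; }
};

int main() {
    base_no_virt *p = new holder;
    delete p;  // undefined behavior: typically only ~base_no_virt runs,
               // so ~holder never executes and buf is leaked
}

Again, that's only what commonly happens in practice; since the behavior is undefined, a different compiler (or optimization level) could do something else entirely.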

At least in my opinion, attempting to eliminate dynamic allocation isn't really a cure. For example, consider code like this:

#include <iostream>

struct base {
    virtual void show() { std::cout << "type: base\n"; }
};


struct derived : public base {
    derived operator+(int) { return derived(); }
    virtual void show() { std::cout << "type: derived\n"; }
};

void f(derived &d) {
    base &&bb{ d + 1 };   // bind a base reference to the temporary derived returned by d+1
    bb.show();
    // boom! the temporary is destroyed here, when bb goes out of scope
}

int main() {
    derived d;
    f(d);
}

When we invoke bb.show(), it (as expected) shows that the type is derived. That is, we have a reference to base bound to a temporary object of type derived. At the end of the function (marked "boom!") that temporary object of type derived is destroyed via a reference to base. We haven't used any dynamic (or static) allocation, but we still have undefined behavior.

This is probably why David Rodríguez (who's a smart guy and knows C++ well; keep paying attention to what he says) asked the question he did: at least for the situation under discussion, preventing dynamic allocation doesn't prevent problems.
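For completeness, the usual remedy in all of these examples is a one-line change: declare the base class's destructor virtual. Here's a sketch of my example above with that fix applied (using std::unique_ptr only to show that owning a derived object through a base pointer is now fine):

#include <iostream>
#include <memory>

struct base {
    virtual void show() { std::cout << "type: base\n"; }
    virtual ~base() = default;  // the one-line fix
};

struct derived : public base {
    void show() override { std::cout << "type: derived\n"; }
    ~derived() override { std::cout << "destroying derived\n"; }
};

int main() {
    std::unique_ptr<base> p = std::make_unique<derived>();
    p->show();  // prints "type: derived"
}   // p destroys the derived object through a base pointer: well-defined now,
    // ~derived runs first, then ~base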