Why exactly is calling the destructor for the second time undefined behavior in C++?

As mentioned in this answer, simply calling the destructor a second time is already undefined behavior per 12.4/14 (3.8).

For example:

class Class {
public:
    ~Class() {}
};
// somewhere in code:
{
    Class* object = new Class();
    object->~Class();
    delete object; // UB because at this point the destructor call is attempted again
}

In this example the class is designed in such a way that the destructor can safely be called multiple times; nothing like a double free can happen. The memory is still allocated at the point where delete is called: an explicit destructor call does not invoke ::operator delete() to release the memory.
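For contrast, the well-defined way to decouple destruction from deallocation is to manage the raw storage yourself. A minimal sketch (placement new needs the <new> header):

#include <new>
// somewhere in code:
{
    void* raw = ::operator new(sizeof(Class)); // allocate raw storage only
    Class* object = new (raw) Class();         // construct the object in it
    object->~Class();                          // destroy it - exactly once
    ::operator delete(raw);                    // release storage; no destructor call involved
}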

For example, in Visual C++ 9 the above code appears to work. Even the C++ definition of UB doesn't directly prohibit things qualified as UB from working. So for the code above to break, some implementation and/or platform specifics are required.

Why exactly would the above code break, and under what conditions?


I think your question aims at the rationale behind the standard. Think about it the other way around:

  1. Defining the behavior of calling a destructor twice creates work, possibly a lot of work.
  2. Your example only shows that in some trivial cases it wouldn't be a problem to call the destructor twice. That's true but not very interesting.
  3. You did not give a convincing use case (and I doubt you can) in which calling the destructor twice is in any way a good idea / makes code easier / makes the language more powerful / cleans up semantics / or anything else.

So why again should this not cause undefined behavior?


The reason for the formulation in the standard is most probably that anything else would be vastly more complicated: the standard would have to define exactly when double destruction is possible (and when it isn't) – i.e. either only with a trivial destructor, or with a destructor whose side effects can be discarded.

On the other hand, there's no benefit to this behaviour. In practice, you cannot profit from it because you can't know in general whether a class destructor fits the above criteria or not. No general-purpose code could rely on this. It would be very easy to introduce bugs that way. And finally, how does it help? It just makes it possible to write sloppy code that doesn't track the lifetime of its objects – under-specified code, in other words. Why should the standard support this?


Will existing compilers/runtimes break your particular code? Probably not – unless they have special run-time checks to prevent illegal access (to prevent what looks like malicious code, or simply leak protection).


The object no longer exists after you call the destructor.

So if you call it again, you're calling a method on an object that doesn't exist.

Why would this ever be defined behavior? The compiler may choose to zero out the memory of an object which has been destructed, for debugging/security/some reason, or recycle its memory with another object as an optimisation, or whatever. The implementation can do as it pleases. Calling the destructor again is essentially calling a method on arbitrary raw memory - a Bad Idea (tm).


When you use the facilities of C++ to create and destroy your objects, you agree to use its object model, however it's implemented.

Some implementations may be more sensitive than others. For example, an interactive interpreted environment or a debugger might try harder to be introspective. That might even include specifically alerting you to double destruction.

Some objects are more complicated than others. For example, virtual destructors with virtual base classes can be a bit hairy. The dynamic type of an object changes over the execution of a sequence of virtual destructors, if I recall correctly. That could easily lead to invalid state at the end.
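A minimal sketch of what that means, with a hypothetical Base/Derived hierarchy (the comments describe one plausible failure, not a guaranteed one):

class Base {
public:
    virtual ~Base() {}
};

class Derived : public Base {
public:
    ~Derived() {}
};

// somewhere in code:
{
    Derived* d = new Derived();
    Base* p = d;
    p->~Base();  // virtual dispatch runs ~Derived, then ~Base; during that
                 // sequence the dynamic type of *p degrades from Derived to Base
    p->~Base();  // second call dispatches through a vptr the implementation
                 // may already have repurposed - anything can happen
    ::operator delete(d); // release the raw storage without a further destructor call
}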

It's easy enough to declare properly named functions to use instead of abusing the constructor and destructor. Object-oriented straight C is still possible in C++, and may be the right tool for some job… in any case, the destructor isn't the right construct for every destruction-related task.


Destructors are not regular functions. Calling one doesn't call one function; it calls many functions. It's the magic of destructors. While you have provided a trivial destructor with the sole intent of making it hard to show how it might break, you have failed to demonstrate what the other functions that get called do. And neither does the standard. It's in those functions that things can potentially fall apart.

As a trivial example, let's say the compiler inserts code to track object lifetimes for debugging purposes. The constructor [which is also a magic function that does all sorts of things you didn't ask it to] stores some data somewhere that says "Here I am." Before the destructor is called, it changes that data to say "There I go." After the destructor is called, it gets rid of the information it used to find that data. So the next time you call the destructor, you get an access violation.
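That bookkeeping is hypothetical, but the idea is easy to sketch by hand (the Tracked class and live_objects registry below are invented for illustration):

#include <cassert>
#include <unordered_set>

std::unordered_set<void*> live_objects; // the "Here I am" registry

class Tracked {
public:
    Tracked() { live_objects.insert(this); }
    ~Tracked() {
        // a second destructor call finds no entry and trips the assert
        assert(live_objects.count(this) != 0 && "destructor called twice");
        live_objects.erase(this); // "There I go"
    }
};

// somewhere in code:
{
    Tracked* t = new Tracked();
    t->~Tracked(); // removes the registry entry
    delete t;      // destructor runs again - the assert fires
}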

You could probably also come up with examples that involve virtual tables, but your sample code didn't include any virtual functions so that would be cheating.


The following class will crash on Windows on my machine if you call the destructor twice:

class Class {
public:
    Class()
    {
        x = new int;
    }
    ~Class() 
    {
        delete x;             // first call: frees the int
        x = (int*)0xbaadf00d; // poison the pointer; a second destructor call
                              // deletes this garbage address and crashes
    }

    int* x;
};

I can imagine an implementation where it would crash even with a trivial destructor. For instance, such an implementation could remove destroyed objects from physical memory, and any access to them would then lead to a hardware fault. Visual C++ doesn't look like that sort of implementation, but who knows.


Standard 12.4/14

Once a destructor is invoked for an object, the object no longer exists; the behavior is undefined if the destructor is invoked for an object whose lifetime has ended (3.8).

I think this section refers to invoking the destructor via delete. In other words, the gist of this paragraph is that "deleting an object twice is undefined behavior". So that's why your code example works fine.

Nevertheless, this question is rather academic. Destructors are meant to be invoked via delete (apart from the exception of objects allocated via placement new, as sharptooth correctly observed). If you want to share code between the destructor and a second function, simply extract that code into a separate function and call it from your destructor.
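For completeness, the placement-new case looks roughly like this (a sketch reusing the Class from the question):

#include <new>
// somewhere in code:
{
    alignas(Class) unsigned char buffer[sizeof(Class)];
    Class* object = new (buffer) Class(); // construct in caller-provided storage
    object->~Class(); // here the explicit destructor call is the correct cleanup
    // no delete: the buffer's storage goes away with the enclosing scope
}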


Since what you're really asking for is a plausible implementation in which your code would fail, suppose that your implementation provides a helpful debugging mode in which it tracks all memory allocations and all calls to constructors and destructors. So after the explicit destructor call, it sets a flag to say that the object has been destructed. delete checks this flag and halts the program when it detects evidence of a bug in your code.

To make your code "work" as you intended, this debugging implementation would have to special-case your do-nothing destructor, and skip setting that flag. That is, it would have to assume that you're deliberately destroying twice because (you think) the destructor does nothing, as opposed to assuming that you're accidentally destroying twice, but failed to spot the bug because the destructor happens to do nothing. Either you're careless or you're a rebel, and there's more mileage in debug implementations helping out people who are careless than there is in pandering to rebels ;-)


One important example of an implementation which could break:

A conforming C++ implementation can support garbage collection; this has been a longstanding design goal. A GC may assume that an object can be collected immediately when its dtor is run, so each dtor call updates the GC's internal bookkeeping. The second time the dtor is called for the same pointer, the GC data structures might very well become corrupted.


By definition, the destructor 'destroys' the object, and destroying an object twice makes no sense.

Your example happens to work, but it's unlikely to work in general.


I guess it's been classified as undefined because most double deletes are dangerous and the standards committee didn't want to add an exception to the standard for the relatively few cases where they don't have to be.

As for where your code could break: you might find that it breaks in debug builds on some compilers; many compilers treat UB as 'do whatever doesn't impact performance for well-defined behaviour' in release builds and 'insert checks to detect bad behaviour' in debug builds.


Basically, as already pointed out, calling the destructor a second time will fail for any class destructor that performs work.


It's undefined behavior because the standard made it clear what a destructor is used for, and didn't decide what should happen if you use it incorrectly. Undefined behavior doesn't necessarily mean "crashy smashy," it just means the standard didn't define it so it's left up to the implementation.

While I'm not too fluent in C++, my gut tells me that the implementation is welcome to either treat the destructor as just another member function, or to actually destroy the object when the destructor is called. So it might break in some implementations but maybe it won't in others. Who knows, it's undefined (look out for demons flying out your nose if you try).


It is undefined because otherwise every implementation would have to track, via some metadata, whether an object is still alive or not. You would have to pay that cost for every single object, which goes against basic C++ design rules.


The reason is that your class might be, for example, a reference-counted smart pointer, whose destructor decrements the reference counter; once that counter hits 0, the actual object is cleaned up.

But if you call the destructor twice, the count gets messed up.
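A minimal sketch of that failure mode (RefPtr and Counted are hypothetical names):

struct Counted {
    int refs;
    Counted() : refs(1) {}
};

class RefPtr {
public:
    explicit RefPtr(Counted* c) : counted(c) {}
    ~RefPtr() {
        if (--counted->refs == 0)
            delete counted; // last owner frees the shared state
    }
private:
    Counted* counted;
};

// somewhere in code:
{
    RefPtr p(new Counted());
    p.~RefPtr(); // refs drops to 0 and the control block is deleted
    // p's destructor runs again at the end of the scope: --counted->refs
    // now touches freed memory - the count is corrupted, behavior undefined
}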

Same idea for other situations too. Maybe the destructor writes 0s to a piece of memory and then deallocates it (so you don't accidentally leave a user's password in memory). If you try to write to that memory again - after it has been deallocated - you will get an access violation.

It just makes sense for objects to be constructed once and destructed once.


The reason is that, in the absence of that rule, your programs would become less strict. Being more strict – even when it's not enforced at compile time – is good, because in return you gain more predictability of how a program will behave. This is especially important when the source code of the classes is not under your control.

A lot of concepts – RAII, smart pointers, and just generic allocation and freeing of memory – rely on this rule. The number of times the destructor will be called (exactly once) is essential to them. So the documentation for such things usually promises: "Use our classes according to C++ language rules, and they will work correctly!"

If there weren't such a rule, every specification would have to read: "Use our classes according to C++ language rules, and, by the way, don't call the destructor twice; then they will work correctly." A lot of specifications would sound that way. The concept is just too important for the language to leave it out of the standard document.

This is the reason. Not anything related to binary internals (which are described in Potatoswatter's answer).
