Is Foo* f = new Foo good C++ code?
Reading through an old C++ Journal I had, I noticed something.
One of the articles asserted that
Foo *f = new Foo();
was nearly unacceptable as professional C++ code by and large, and an automatic memory management solution was appropriate.
Is this so?
edit: rephrased: is direct memory management unacceptable for new C++ code, in general? Should auto_ptr (or the other management wrappers) be used for most new code?
This example is very Java-like.
In C++ we only use dynamic memory management if it is required.
A better alternative is just to declare a local variable.
{
Foo f;
// use f
} // f goes out of scope and is immediately destroyed here.
If you must use dynamic memory then use a smart pointer.
// In C++14
{
std::unique_ptr<Foo> f = std::make_unique<Foo>(); // no need for new anymore
}
// In C++11
{
std::unique_ptr<Foo> f(new Foo); // See Description below.
}
// In C++03
{
std::auto_ptr<Foo> f(new Foo); // the smart pointer f owns the pointer.
// At some point f may give up ownership to another
// object. If not then f will automatically delete
// the pointer when it goes out of scope.
}
There are a whole bunch of smart pointers provided in std:: and boost:: (now some are in std::tr1); pick the appropriate one and use it to manage the lifespan of your object.
See Smart Pointers: Or who owns you baby?
Technically you can use new/delete to do memory management.
But in real C++ code it is almost never done. There is nearly always a better alternative to doing memory management by hand.
A simple example is the std::vector. Under the covers it uses new and delete. But you would never be able to tell from the outside. This is completely transparent to the user of the class. All that the user knows is that the vector will take ownership of the object and it will be destroyed when the vector is destroyed.
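For example, a minimal illustration of that ownership (assuming some class Foo as in the rest of this thread):
#include <vector>

int main() {
    std::vector<Foo> foos;   // no new or delete anywhere in user code
    foos.push_back(Foo());   // the vector copies the Foo into heap storage it manages
    foos.push_back(Foo());   // and grows that storage as needed
    return 0;
}                            // every Foo is destroyed when foos goes out of scope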
No.
There are very good reasons not to use automatic memory management systems in certain cases. These can be performance, complexity of data structures due to cyclical referencing, etc.
However, I recommend only using a raw pointer with new/malloc if you have a good reason not to use something smarter. Seeing unprotected allocations scares me and makes me hope the coder knows what they're doing.
Some kind of smart pointer class like boost::shared_ptr or boost::scoped_ptr would be a good start. (These will be part of the C++0x standard so don't be scared of them ;) )
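To illustrate the cyclical-referencing pitfall mentioned above, here is a sketch using boost::shared_ptr and boost::weak_ptr (the Node type is just an example, not from the answer):
#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>

struct Node {
    boost::shared_ptr<Node> next;  // owning link
    boost::weak_ptr<Node>   prev;  // non-owning back link; a shared_ptr here would
                                   // form a reference cycle and both nodes would leak
};

int main() {
    boost::shared_ptr<Node> a(new Node);
    boost::shared_ptr<Node> b(new Node);
    a->next = b;
    b->prev = a;   // weak_ptr does not bump the reference count
    return 0;
}                  // both nodes are destroyed when a and b go out of scope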
I think, the problem of all these "...best practices..." questions is that they all consider the code without context. If you ask "in general", I have to admit that direct memory management is perfectly acceptable. It is syntactically legal and it does not violate any language semantics.
As for the alternatives (stack variables, smart pointers, etc.), they all have their drawbacks. And none of them has the flexibility that direct memory management has. The price you have to pay for such flexibility is your debugging time, and you should be aware of all the risks.
With some kind of smart pointer scheme you can get automatic memory management, reference counting, etc., with only a small amount of overhead. You pay for that (in memory or performance), but it may be worth it to pay for it instead of having to worry about it all the time.
If you are using exceptions, that kind of code is practically guaranteed to lead to resource leaks. Even if you disable exceptions, cleaning up is very easy to screw up when manually pairing new with delete.
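A minimal sketch of that failure mode (doWork is a hypothetical function that may throw, and Foo is the class used throughout this thread):
#include <memory>

void doWork();   // hypothetical; may throw

void leaky() {
    Foo* f = new Foo();
    doWork();            // if this throws, the delete below is never reached
    delete f;            // Foo is leaked on the exceptional path
}

void safe() {
    std::unique_ptr<Foo> f(new Foo());
    doWork();            // if this throws, ~unique_ptr still runs during unwinding
}                        // f is released here on the normal path as well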
It depends on exactly what we mean.
- Should new never be used to allocate memory? Of course it should; we have no other option. new is the way to dynamically allocate objects in C++. When we need to dynamically allocate an object of type T, we do new T(...).
- Should new be called by default when we want to instantiate a new object? No. In Java or C#, new is used to create new objects, so you use it everywhere. In C++, it is only used for heap allocations. Almost all objects should be stack-allocated (or created in-place as class members) so that the language's scoping rules help us manage their lifetimes. new isn't often necessary. Usually, when we want to allocate new objects on the heap, we do it as part of a larger collection, in which case you should just push the object onto your STL container and let it worry about allocating and deallocating memory. If you just need a single object, it can typically be created as a class member or a local variable, without using new.
- Should new be present in your business logic code? Rarely, if ever. As mentioned above, it can and should typically be hidden away inside wrapper classes. std::vector, for example, dynamically allocates the memory it needs, so the user of the vector doesn't have to care. I just create a vector on the stack, and it takes care of the heap allocations for me. When a vector or other container class isn't suitable, we may want to write our own RAII wrapper, which allocates some memory in the constructor with new and releases it in the destructor (see the sketch after this list). And that wrapper can then be stack-allocated, so the user of the class never has to call new.
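A minimal sketch of such a wrapper, assuming a class named FooHolder (the name and the copy-prevention details are illustrative, not from the answer):
class FooHolder {
public:
    FooHolder() : f_(new Foo()) {}   // acquire the resource in the constructor
    ~FooHolder() { delete f_; }      // release it in the destructor
    Foo& get() { return *f_; }
private:
    FooHolder(const FooHolder&);            // declared but not defined to disallow
    FooHolder& operator=(const FooHolder&); // copying (C++03 idiom)
    Foo* f_;
};

void useFoo() {
    FooHolder h;          // the wrapper lives on the stack
    Foo& foo = h.get();
    // use foo ...
}                         // ~FooHolder runs here and deletes the heap Foo, even on exceptions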
One of the articles asserted that
Foo *f = new Foo();
was nearly unacceptable as professional C++ code by and large, and an automatic memory management solution was appropriate.
If they mean what I think they mean, then they are right. As I said above, new should usually be hidden away in wrapper classes, where automatic memory management (in the shape of scoped lifetime and objects having their destructors called when they go out of scope) can take care of it for you. The article doesn't say "never allocate anything on the heap" or "never use new", but simply "When you do use new, don't just store a pointer to the allocated memory. Place it inside some kind of class that can take care of releasing it when it goes out of scope."
Rather than Foo *f = new Foo();
, you should use one of these:
Scoped_Foo f; // just create a wrapper which *internally* allocates what it needs on the heap and frees it when it goes out of scope
shared_ptr<Foo> f(new Foo()); // if you *do* need to dynamically allocate an object, place the resulting pointer inside a smart pointer of some sort. Depending on circumstances, scoped_ptr or auto_ptr may be preferable. Or in C++0x, unique_ptr
std::vector<Foo> v; v.push_back(Foo()); // place the object in a vector or another container, and let that worry about memory allocations.
I stopped writing such code some time ago. There are several alternatives:
Scope-based deletion
{
Foo foo;
// done with foo, release
}
scoped_ptr for scope-based dynamic allocation
{
scoped_ptr<Foo> foo( new Foo() );
// done with foo, release
}
shared_ptr for things that should be handled in many places
shared_ptr<Foo> foo;
{
foo.reset( new Foo() );
}
// still alive
shared_ptr<Foo> bar = foo; // pointer copy
...
foo.reset(); // Foo still lives via bar
bar.reset(); // released
Factory-based resource management
Foo* foo = fooFactory.build();
...
fooFactory.release( foo ); // or it will be
// automatically released
// on factory destruction
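A rough sketch of what such a factory might look like (this FooFactory is purely illustrative, not a library class):
#include <algorithm>
#include <vector>

class FooFactory {
public:
    Foo* build() {
        Foo* f = new Foo();
        owned_.push_back(f);            // remember everything we hand out
        return f;
    }
    void release(Foo* f) {
        owned_.erase(std::remove(owned_.begin(), owned_.end(), f), owned_.end());
        delete f;
    }
    ~FooFactory() {                     // whatever was never released explicitly goes here
        for (std::vector<Foo*>::size_type i = 0; i < owned_.size(); ++i)
            delete owned_[i];
    }
private:
    std::vector<Foo*> owned_;
};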
In general, no, but the general case is not the common case, which is why automatic schemes like RAII were invented in the first place.
From an answer I wrote to another question:
The job of a programmer is to express things elegantly in his language of choice.
C++ has very nice semantics for construction and destruction of objects on the stack. If a resource can be allocated for the duration of a scope block, then a good programmer will probably take that path of least resistance. The object's lifetime is delimited by braces which are probably already there anyway.
If there's no good way to put the object directly on the stack, maybe it can be put inside another object as a member. Now its lifetime is a little longer, but C++ still does a lot automatically. The object's lifetime is delimited by a parent object; the problem has been delegated.
There might not be one parent, though. The next best thing is a sequence of adoptive parents. This is what auto_ptr is for. Still pretty good, because the programmer should know what particular parent is the owner. The object's lifetime is delimited by the lifetime of its sequence of owners. One step down the chain in determinism and per se elegance is shared_ptr: lifetime delimited by the union of a pool of owners.
But maybe this resource isn't concurrent with any other object, set of objects, or control flow in the system. It's created upon some event happening and destroyed upon another event. Although there are a lot of tools for delimiting lifetimes by delegations and other lifetimes, they aren't sufficient for computing any arbitrary function. So the programmer might decide to write a function of several variables to determine whether an object is coming into existence or disappearing, and call new and delete.
Finally, writing functions can be hard. Maybe the rules governing the object would take too much time and memory to actually compute! And it might just be really hard to express them elegantly, getting back to my original point. So for that we have garbage collection: the object's lifetime is delimited by when you want it and when you don't.
First of all, I believe it should be Foo *f = new Foo();
And the reason I don't like using that syntax is that it is easy to forget to add a delete at the end of the code and leave your memory a-leakin'.
In general, your example is not exception safe and therefore shouldn't be used. What if the line directly following the new throws? The stack unwinds and you have just leaked memory. A smart pointer will take care of it for you as part of the stack unwind. If you tend not to handle exceptions, then there is no drawback outside of RAII issues.