Dynamic allocation or automatic recommended for subsystem creation?
I am a hobbyist C++ programmer and currently working on a game (using Ogre3D) and I have a question regarding the memory allocation for my main classes.
I have read a lot on memory allocation, allocating automatically on the stack and dynamically on the heap, and their differences (performance, limited stack size). Still I am not sure what to use for my main class (Application) and some other 'factory' classes (created by a single instance of the Application class), which will all have a single instance existing throughout the entire execution.
Below is a simplified snippet of the layout:
int main()
{
    // like this (automatic)
    Application app;
    app.create();    // initializing
    app.run();       // runs the game-loop

    // or like this (dynamic)
    Application* app;
    app = new Application();
    app->create();
    app->run();

    return 0;        // only reached after exiting game
}
class Application
{
public:
    Application();    // ctor
    ~Application();   // dtor

    // like this, using 'new' in ctor and 'delete' in dtor (dynamic)
    SceneManager* sceneManager_;   // a factory for handling scene objects
    DebugManager* debugManager_;   // a factory for handling debugging objects

    // or like this (automatic)
    SceneManager sceneManager_;
    DebugManager debugManager_;
};
Is it better to allocate memory on the stack or on the heap (both for the Application class and the factory classes)? And by what arguments?
Thanks in advance!
Always prefer automatic allocation over dynamic allocation. And when you need dynamic allocation, make sure its lifetime is managed by automatically allocated resource wrappers, like smart pointers.
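A minimal sketch of both styles, using a hypothetical stand-in for the poster's Application class: the first version is plain automatic allocation, the second shows dynamic allocation whose lifetime is managed by std::unique_ptr instead of a raw new/delete pair.

```cpp
#include <memory>

// Hypothetical stand-in for the poster's Application class.
struct Application {
    bool created = false;
    void create() { created = true; }
    void run()    { /* game loop would go here */ }
};

// Automatic allocation: app lives on the stack and is destroyed
// when the function returns -- no cleanup code needed.
bool automatic_style() {
    Application app;
    app.create();
    app.run();
    return app.created;
} // ~Application() runs here automatically

// If dynamic allocation is really needed, wrap it in a smart pointer
// so destruction is still automatic and exception-safe.
bool smart_pointer_style() {
    auto app = std::make_unique<Application>(); // heap-allocated
    app->create();
    app->run();
    return app->created;
} // unique_ptr deletes the Application here; no manual delete
```

Either way, the object is cleaned up without an explicit delete, which is the point of preferring automatic lifetime management.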
In this situation I think it all comes down to size. You don't want to waste stack space, so either use dynamic allocation with new, or put Application as a global variable outside main().
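The global-variable alternative can be sketched like this (again with a hypothetical stand-in for Application): an object at namespace scope has static storage duration, so it consumes neither stack space nor heap space and lives for the entire run of the program.

```cpp
// Hypothetical stand-in for the poster's Application class.
struct Application {
    bool created = false;
    void create() { created = true; }
    void run()    { /* game loop would go here */ }
};

// Static storage duration: constructed before main() starts,
// destroyed after main() returns -- no stack or heap involved.
Application app;

int run_game() {
    app.create();
    app.run();
    return 0;
}
```

The usual caveats about globals apply (construction order across translation units, hidden dependencies), so this is a trade-off rather than a free win.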
In C++, the question is rather more complicated, but in general you can't avoid allocating on the heap. For example, your new operation is allocating your Application object on the heap -- new allocates memory dynamically at run time, whereas the allocation of auto memory is determined at compile time. (It's actually allocated at run time, of course -- but it's allocated as the startup code creates the stack for main, according to the compiled-in allocations.)
Now, why would you want to avoid allocating on the heap? It might be because of limited heap sizes, but with modern machines that's rarely a problem, even in hand-held devices. Stack space might well be limited, however. So that argues for heap.
Of course, auto memory doesn't "leak" -- but memory allocated in main (or at file scope, although that's static) doesn't get freed until the program exits, so one could almost claim it "automatically" leaked.
I think the essential question here is really "why wouldn't you allocate on the heap?" The main reason is usually to avoid memory leaks, but care in using new/delete can protect you from that -- and given the ubiquity of dynamic allocation in libraries and such, you can't stop thinking about good memory hygiene even if you contrive to avoid dynamic allocation.
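For the factory members in the question, one way to get that memory hygiene without hand-written new in the ctor and delete in the dtor is to hold them as std::unique_ptr members. This is a sketch with hypothetical stand-ins for SceneManager and DebugManager; the managers are still heap-allocated, but freed exactly once and automatically, even if construction throws partway through.

```cpp
#include <memory>

// Hypothetical stand-ins for the poster's factory classes.
struct SceneManager { int objects = 0; };
struct DebugManager { int objects = 0; };

// Member-wise RAII: Application never calls new/delete directly.
// The unique_ptr members delete their managers in ~Application(),
// so no user-written destructor is needed.
class Application {
public:
    Application()
        : sceneManager_(std::make_unique<SceneManager>()),
          debugManager_(std::make_unique<DebugManager>()) {}

    SceneManager& scene() { return *sceneManager_; }
    DebugManager& debug() { return *debugManager_; }

private:
    std::unique_ptr<SceneManager> sceneManager_;
    std::unique_ptr<DebugManager> debugManager_;
};
```

Compared with raw pointer members plus manual delete in the destructor, this removes the chance of forgetting a delete or double-deleting after a copy.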