IronPython memory leak?
Run this:
for i in range(1000000000):
    a = []
The list objects being created never seem to get marked for garbage collection. In a memory profiler, it looks like the interpreter's stack frame is holding onto all of them, so the GC can never reclaim anything.
Is this by design?
EDIT:
Here is a better example of the problem. Run the code below with a memory profiler:
a = [b for b in range(1000000)]
a = [b for b in range(1000000)]
a = [b for b in range(1000000)]
a = [b for b in range(1000000)]
a = [b for b in range(1000000)]
a = [b for b in range(1000000)]
a = [b for b in range(1000000)]
You will see that the memory allocated during the list comprehensions never gets garbage collected. This is because all the objects created are being referenced by an InterpreterFrame object in the DLR.
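If you want to see this without a full profiler, here is a rough harness that runs the same script through the hosting API and then checks the managed heap after a forced collection. Sketch only: the hosting setup, the class name, and the GC.GetTotalMemory check are just my way of measuring it, not anything IronPython documents for this.

using System;
using System.Linq;
using IronPython.Hosting;
using Microsoft.Scripting;
using Microsoft.Scripting.Hosting;

class InterpreterFrameRetention
{
    static void Main()
    {
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();

        // Same repeated top-level comprehension as above, run as a single script.
        string script = string.Concat(
            Enumerable.Repeat("a = [b for b in range(1000000)]\n", 7));
        engine.CreateScriptSourceFromString(script, SourceCodeKind.Statements).Execute(scope);

        // Force a full collection; if the InterpreterFrame is still rooting the
        // intermediate lists, the reported heap size stays large.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        Console.WriteLine("{0:N0} bytes still reachable", GC.GetTotalMemory(true));
    }
}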
Now run this:
def get():
    return [b for b in range(1000000)]
a = get()
a = get()
a = get()
a = get()
a = get()
a = get()
a = get()
Under a profiler, you can see that the memory here does get garbage collected as it should. I am guessing this works because the InterpreterFrame of the function is cleared when the function exits.
So, is this a bug? It seems like this will lead to some pretty bad memory leaks within frames (contexts?) of an IronPython script.
Try setting "LightweightScopes" when you are constructing IronPython engine. This solved a lot of garbage collection problems for me.
using System.Collections.Generic;
using IronPython.Hosting;

var engineOptions = new Dictionary<string, object> { ["LightweightScopes"] = true };
var scriptEngine = Python.CreateEngine(engineOptions);
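As a quick sanity check (sketch only, continuing from the scriptEngine above; the script text is just the comprehension from the question, and the expected result is my assumption), you can run the code and see how much memory survives a forced collection:

// Quick check, continuing from the scriptEngine created above.
var scope = scriptEngine.CreateScope();
scriptEngine.CreateScriptSourceFromString(
    "a = [b for b in range(1000000)]\n" +
    "a = [b for b in range(1000000)]",
    Microsoft.Scripting.SourceCodeKind.Statements).Execute(scope);

System.GC.Collect();
System.GC.WaitForPendingFinalizers();
// With LightweightScopes the old lists should now be collectable, so this number
// should come out much smaller than the same run on a default engine.
System.Console.WriteLine(System.GC.GetTotalMemory(true));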