Very quickly getting total size of folder
I want to quickly find the total size of any folder using Python.
import os
from os.path import join, getsize, isfile, isdir, splitext

def GetFolderSize(path):
    TotalSize = 0
    for item in os.walk(path):
        for file in item[2]:
            try:
                TotalSize = TotalSize + getsize(join(item[0], file))
            except OSError:
                print("error with file: " + join(item[0], file))
    return TotalSize

print(float(GetFolderSize("C:\\")) / 1024 / 1024 / 1024)
That's the simple script I wrote to get the total size of the folder; it took around 60 seconds (±5 seconds). By using multiprocessing I got it down to 23 seconds on a quad-core machine.
Using the Windows file explorer it takes only ~3 seconds (right-click -> Properties to see for yourself). So is there a faster way of finding the total size of a folder, close to the speed at which Windows can do it?
Windows 7, Python 2.6 (I did search, but most of the time people used a method very similar to my own). Thanks in advance.
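The multiprocessing variant mentioned in the question isn't shown, but it can be sketched roughly like this (a minimal sketch in modern Python; the function names folder_size and folder_size_parallel are my own, and the split is per top-level subdirectory):

```python
import os
from multiprocessing import Pool
from os.path import getsize, isdir, join

def folder_size(path):
    """Sequentially sum file sizes under one directory tree."""
    total = 0
    for root, dirs, files in os.walk(path):
        for name in files:
            try:
                total += getsize(join(root, name))
            except OSError:
                pass  # skip unreadable files, as the original script does
    return total

def folder_size_parallel(path):
    """Hand each top-level subdirectory to a separate worker process."""
    entries = [join(path, name) for name in os.listdir(path)]
    subdirs = [e for e in entries if isdir(e)]
    # files directly inside `path` are summed in the parent process
    total = sum(getsize(e) for e in entries if not isdir(e))
    if subdirs:
        with Pool() as pool:
            total += sum(pool.map(folder_size, subdirs))
    return total
```

Note that this only helps if the top-level subdirectories are of comparable size; one huge subtree still serializes on a single worker.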
You are at a disadvantage.
Windows Explorer almost certainly uses FindFirstFile/FindNextFile to both traverse the directory structure and collect size information (through lpFindFileData) in one pass, making what is essentially a single system call per file.
Python is unfortunately not your friend in this case. Thus:
- os.walk first calls os.listdir (which internally calls FindFirstFile/FindNextFile) -- any additional system calls made from this point onward can only make you slower than Windows Explorer
- os.walk then calls isdir for each file returned by os.listdir (which internally calls GetFileAttributesEx -- or, prior to Win2k, a GetFileAttributes + FindFirstFile combo) to redetermine whether to recurse or not
- os.walk and os.listdir will perform additional memory allocation, string and array operations etc. to fill out their return value
- you then call getsize for each file returned by os.walk (which again calls GetFileAttributesEx)
That is 3x more system calls per file than Windows Explorer, plus memory allocation and manipulation overhead.
You can either use Anurag's solution, or try to call FindFirstFile/FindNextFile directly and recursively (which should be comparable to the performance of a cygwin or other win32 port of du -s some_directory).
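A minimal ctypes sketch of that direct FindFirstFileW/FindNextFileW approach might look like the following (Windows-only at call time; the names get_size_win32 and _win32 are my own, and error handling is kept to a bare minimum):

```python
import ctypes
import ctypes.wintypes as wt
from os.path import join

INVALID_HANDLE_VALUE = ctypes.c_void_p(-1).value
FILE_ATTRIBUTE_DIRECTORY = 0x10

class WIN32_FIND_DATAW(ctypes.Structure):
    # Layout of the lpFindFileData buffer filled in by FindFirstFileW.
    _fields_ = [
        ("dwFileAttributes", wt.DWORD),
        ("ftCreationTime", wt.FILETIME),
        ("ftLastAccessTime", wt.FILETIME),
        ("ftLastWriteTime", wt.FILETIME),
        ("nFileSizeHigh", wt.DWORD),
        ("nFileSizeLow", wt.DWORD),
        ("dwReserved0", wt.DWORD),
        ("dwReserved1", wt.DWORD),
        ("cFileName", wt.WCHAR * 260),
        ("cAlternateFileName", wt.WCHAR * 14),
    ]

_kernel32 = None

def _win32():
    # Load kernel32 lazily so this module still imports on non-Windows.
    global _kernel32
    if _kernel32 is None:
        k = ctypes.WinDLL("kernel32", use_last_error=True)
        k.FindFirstFileW.argtypes = [wt.LPCWSTR, ctypes.POINTER(WIN32_FIND_DATAW)]
        k.FindFirstFileW.restype = wt.HANDLE
        k.FindNextFileW.argtypes = [wt.HANDLE, ctypes.POINTER(WIN32_FIND_DATAW)]
        k.FindNextFileW.restype = wt.BOOL
        k.FindClose.argtypes = [wt.HANDLE]
        _kernel32 = k
    return _kernel32

def get_size_win32(path):
    """Total bytes under `path`: one FindFirstFile/FindNextFile pass per
    directory, reading sizes from the find data (no per-file stat calls)."""
    k32 = _win32()
    data = WIN32_FIND_DATAW()
    handle = k32.FindFirstFileW(join(path, "*"), ctypes.byref(data))
    if handle == INVALID_HANDLE_VALUE:
        return 0
    total = 0
    try:
        while True:
            if data.cFileName not in (".", ".."):
                if data.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY:
                    total += get_size_win32(join(path, data.cFileName))
                else:
                    total += (data.nFileSizeHigh << 32) + data.nFileSizeLow
            if not k32.FindNextFileW(handle, ctypes.byref(data)):
                break
    finally:
        k32.FindClose(handle)
    return total
```

This makes exactly one FindFirst/FindNext pass per directory, which is the same access pattern the answer attributes to Explorer; a production version would also want to skip reparse points to avoid cycles.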
Refer to os.py for the implementation of os.walk, and to posixmodule.c for the implementation of listdir and win32_stat (invoked by both isdir and getsize).
Note that Python's os.walk is suboptimal on all platforms (Windows and *nices), up to and including Python 3.1. On both Windows and *nices os.walk could achieve traversal in a single pass without calling isdir, since both FindFirst/FindNext (Windows) and opendir/readdir (*nix) already return the file type via lpFindFileData->dwFileAttributes (Windows) and dirent::d_type (*nix).
Perhaps counterintuitively, on most modern configurations (e.g. Win7 and NTFS, and even some SMB implementations) GetFileAttributesEx is twice as slow as FindFirstFile of a single file (possibly even slower than iterating over a directory with FindNextFile).
Update: Python 3.5 includes the new PEP 471 os.scandir() function, which solves this problem by returning file attributes along with the filename. This new function is used to speed up the built-in os.walk() (on both Windows and Linux). You can use the scandir module on PyPI to get this behavior for older Python versions, including 2.x.
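On Python 3.5+, a single-pass version of the original script might be sketched like this (the function name tree_size is my own):

```python
import os

def tree_size(path):
    """Sum file sizes under `path` in one scandir pass per directory."""
    total = 0
    with os.scandir(path) as entries:
        for entry in entries:
            try:
                if entry.is_dir(follow_symlinks=False):
                    total += tree_size(entry.path)
                elif entry.is_file(follow_symlinks=False):
                    # On Windows, entry.stat() reuses the data fetched by
                    # the directory scan, so no extra system call is made.
                    total += entry.stat(follow_symlinks=False).st_size
            except OSError:
                pass  # unreadable entry; skip it like the original script
    return total
```

is_dir(), is_file() and (on Windows) stat() are answered from the data the directory scan already returned, so this avoids the extra GetFileAttributesEx calls described above.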
If you want the same speed as Explorer, why not use Windows scripting to access the same functionality, using pythoncom, e.g.:
import win32com.client as com
folderPath = r"D:\Software\Downloads"
fso = com.Dispatch("Scripting.FileSystemObject")
folder = fso.GetFolder(folderPath)
MB = 1024 * 1024.0
print("%.2f MB" % (folder.Size / MB))
It will work the same as Explorer; you can read more about the Scripting runtime at http://msdn.microsoft.com/en-us/library/bstcxhf7(VS.85).aspx.
I ran the Python code against a 15k-directory tree containing 190k files and compared it with the du(1) command, which presumably goes about as fast as the OS can. The Python code took 3.3 seconds versus 0.8 seconds for du. This was on Linux.
I'm not sure there is much to squeeze out of the Python code. Note too that the first run of du took 45 seconds, which was obviously before the relevant i-nodes were in the block cache; this performance is therefore heavily dependent on how well the system is managing its store. It wouldn't surprise me if either or both:
- os.path.getsize is sub-optimal on Windows
- Windows caches directory contents size once calculated