
Analysing resource leaks in network applications (socket handles, etc.)

For memory leaks, there are many tools available, such as Valgrind, that you can use to figure out where the leaks are coming from. For OpenGL leaks, there was gDEBugger, which was very useful for that kind of thing.

Is there any similar tool for network programming? In particular, for working with Linux sockets and pipes.

Perhaps there's a POSIX-oriented tool that will measure how many POSIX resources a program is using (how many sockets, how many threads, how many mutexes, etc.)?

Also, correct me if I'm wrong, but would higher-level languages (Java, Python, etc., as opposed to C++) take care of this resource management for you, the way they do with memory management?


Valgrind has the ability to track a few of the resources you are interested in:

  • Memory via memcheck

  • File descriptors via the --track-fds=yes memcheck option

  • Threads and locks via Helgrind and DRD

The generated information is not always detailed, but it can be quite helpful.


strace and lsof can be of (some) help in identifying leaks. There's almost certainly some memory allocated along with the sockets, pipes, etc., which you may be able to track with the memory-debugging tools, especially if you have custom classes wrapping the resource allocators. In that case it can be practical to add a giant chunk of unused data to those classes, then look for those giant blocks in memcheck's results.

Higher-level languages may or may not manage these resources. If you're calling the same low-level functions from a high-level language, the resources most likely aren't being managed. But if the resources are wrapped in classes that can be garbage-collected, then yes, garbage collection can take care of those resources as well.
