
Parsing large PLIST and memory footprint

In this question (http://stackoverflow.com/questions/1267474/itunes-xml-parsing-in-cocoa), Sreelal asks how to improve the performance of loading/parsing a large PLIST. The question, however, never got a real answer (although some very useful pointers were given by Alex).

Peter Hosey pointed out that the whole file still gets loaded into memory even when the PLIST is parsed rather than dumped into an NSDictionary.

In a Cocoa application, I am working with Aperture libraries, and they too have large PLIST files. What is the best approach to get good performance (speed) without having the app take up all of the system's memory?

Is NSXMLParser a good approach? I prefer to stick to Apple's own frameworks if possible.
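
For reference, the straightforward (non-streaming) way to pull the library in would be something like the sketch below; the path is just a placeholder, but this is the pattern that ends up holding the whole plist in memory at once:

    // Naive load: NSDictionary materializes the entire plist in memory.
    NSString *path = @"/path/to/ApertureLibrary.aplibrary/Info.plist"; // placeholder path
    NSDictionary *library = [NSDictionary dictionaryWithContentsOfFile:path];
    NSLog(@"Loaded %lu top-level entries", (unsigned long)library.count);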

Thanks


When dealing with large files, I would use a combination of NSXMLParser and NSFileHandle; this lets you load parts of the data at a time instead of reading everything into memory at once. Apple has an entire WWDC video on developer.apple.com (if you're a registered developer) called Advanced Performance Optimizations on iPhone OS. In it they recommend against using the PLIST format for extremely large files, but they also discuss how you can load files in parts, which you could then hand to NSXMLParser to parse in small chunks (work that could even be divided across several threads). Hope this helps!
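
To make that concrete, here is a minimal sketch of the streaming idea using only Apple's frameworks. It relies on NSXMLParser's initWithStream: (feeding the file through an NSInputStream rather than hand-rolled NSFileHandle chunks), and the element names and path are purely illustrative; a real Aperture/iTunes plist walker would need more state to track nested dicts and arrays:

    #import <Foundation/Foundation.h>

    // Streaming delegate: reacts to elements as they arrive instead of
    // building the whole plist in memory. Here it just counts <key> elements.
    @interface PlistStreamDelegate : NSObject <NSXMLParserDelegate>
    @property (nonatomic) NSUInteger keyCount;
    @property (nonatomic, strong) NSMutableString *currentText;
    @end

    @implementation PlistStreamDelegate

    - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
      namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName
        attributes:(NSDictionary *)attributeDict {
        if ([elementName isEqualToString:@"key"]) {
            self.currentText = [NSMutableString string];
        }
    }

    - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
        [self.currentText appendString:string]; // no-op when not inside a <key>
    }

    - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName
      namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName {
        if ([elementName isEqualToString:@"key"]) {
            self.keyCount++;
            // Act on self.currentText here, then throw it away so memory stays flat.
            self.currentText = nil;
        }
    }

    @end

    int main(int argc, const char *argv[]) {
        @autoreleasepool {
            NSString *path = @"/path/to/ApertureLibrary.aplibrary/Info.plist"; // placeholder
            NSInputStream *stream = [NSInputStream inputStreamWithFileAtPath:path];
            NSXMLParser *parser = [[NSXMLParser alloc] initWithStream:stream];
            PlistStreamDelegate *delegate = [[PlistStreamDelegate alloc] init];
            parser.delegate = delegate;
            if ([parser parse]) {
                NSLog(@"Found %lu keys", (unsigned long)delegate.keyCount);
            } else {
                NSLog(@"Parse failed: %@", parser.parserError);
            }
        }
        return 0;
    }

The key point is that the delegate only ever holds the current element's text, so memory use stays roughly constant no matter how large the plist is.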

