iPhone OS: Strategies for high density image work
I have a project coming around the bend this summer that will potentially involve an extremely high volume of image data for display. We are talking hundreds of roughly 640x480 images in a given application session (scaled down when displayed), and handfuls of very large (1280x1024 or higher) images at a time.
I've already done some preliminary work, and I've found that the typical 640x480-ish image is just a shade under 1 MB in memory once placed into a UIImageView and displayed (a decoded image costs roughly width × height × 4 bytes)... but the very large images can be a whopping 5+ MB in some cases.
This project is actually targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100 MB of addressable physical memory.
Details aside, I need to start thinking of how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be high visibility.
I'm probably on the higher ends of intermediate at Objective-C... so I am looking for some solid articles and advice on the following:
1) Responsible management of UIImage and UIImageView in the name of conserving physical RAM
2) Merits of using CGImage over UIImage, particularly for the huge images, and whether there will be any performance gain
3) Anything dealing with memory paging, particularly as it pertains to images
I will epilogue by saying that the numbers above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself, as opposed to being loaded in from an external server.
CATiledLayer is probably the way to go. The only reason to create a UIImageView for each image is if you need the interactivity managed for you. CATiledLayer will allow you to load and draw images asynchronously from background threads as needed. Just use CGImage since that is what you will draw into the layer anyway.
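To make that concrete, here is a minimal sketch of a CATiledLayer-backed view. The class name `TiledImageView` and the `copyTileForRect:` helper are my own placeholders, not part of UIKit; the essential pieces are overriding `+layerClass` and letting the tiled layer call `drawRect:` from its background threads.

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface TiledImageView : UIView
@end

@implementation TiledImageView

// Backing the view with CATiledLayer gets us asynchronous,
// tile-by-tile drawing for free.
+ (Class)layerClass {
    return [CATiledLayer class];
}

// CATiledLayer invokes this on background threads, one tile rect at
// a time, so the main thread stays responsive while tiles fill in.
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGImageRef tile = [self copyTileForRect:rect]; // your loader/cache
    if (tile) {
        CGContextSaveGState(ctx);
        // CGContextDrawImage uses Quartz's flipped coordinate space;
        // undo UIKit's flip so the tile isn't drawn upside down.
        CGContextTranslateCTM(ctx, 0,
            CGRectGetMinY(rect) + CGRectGetMaxY(rect));
        CGContextScaleCTM(ctx, 1, -1);
        CGContextDrawImage(ctx, rect, tile);
        CGContextRestoreGState(ctx);
        CGImageRelease(tile);
    }
}

// Placeholder: pull the tile from your cache, or decode it here.
- (CGImageRef)copyTileForRect:(CGRect)rect {
    return NULL;
}

@end
```

You can also tune `tileSize` and `levelsOfDetail` on the layer itself to match your image dimensions and zoom behavior.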
You will probably want to implement your own threaded image cache so you can impose a cap on the number of images kept in memory and start image loads when you predict they will be needed soon. If a load is not finished when a draw request comes in, you can block the draw thread.
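A rough sketch of such a cache, assuming iOS 4 APIs: `NSCache` evicts automatically under memory pressure and `countLimit` caps how many decoded images stay resident, while an `NSOperationQueue` does the loading off the main thread. The class and method names here are illustrative, not a standard API.

```objc
#import <UIKit/UIKit.h>

@interface ImageCache : NSObject {
    NSCache *cache;
    NSOperationQueue *loadQueue;
}
- (void)prefetchImageNamed:(NSString *)name;
- (UIImage *)imageNamed:(NSString *)name;
@end

@implementation ImageCache

- (id)init {
    if ((self = [super init])) {
        cache = [[NSCache alloc] init];
        [cache setCountLimit:30];  // tune against Instruments numbers
        loadQueue = [[NSOperationQueue alloc] init];
        [loadQueue setMaxConcurrentOperationCount:2];
    }
    return self;
}

// Kick off a background load for an image you predict will be
// needed soon.
- (void)prefetchImageNamed:(NSString *)name {
    if ([cache objectForKey:name]) return;
    [loadQueue addOperationWithBlock:^{
        UIImage *image = [UIImage imageWithContentsOfFile:
            [[NSBundle mainBundle] pathForResource:name ofType:@"png"]];
        if (image) [cache setObject:image forKey:name];
    }];
}

// Returns nil on a miss; the caller can show a spinner and retry,
// or block the draw thread until the pending load completes.
- (UIImage *)imageNamed:(NSString *)name {
    return [cache objectForKey:name];
}

- (void)dealloc {
    [cache release];
    [loadQueue release];
    [super dealloc];
}

@end
```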
Images generally have a lot of repeating data and are good candidates for compression, though this depends on what format you use and whether it has built-in compression. My first thought is to stick with PNG, since it's native to the iPhone, and to keep the bulk of your images in a compression archive of some kind, such as .zip or .rar.
If you can determine batches of images the user is likely to encounter, you could unarchive those ahead of time and present them to the user. A subset "thumbnail" archive could be useful as well.
I'm not sure what all this compressing and decompressing will do to your response times, but it's one idea for keeping your memory footprint lower.
This is a pretty big question, so I'll point you to a project that is an open source map view. You may be able to re-use this project wholesale by adding your server as a data source (if it's lng/lat based that may make sense), or you can take some of the design patterns and implement your requirements directly.
http://code.google.com/p/route-me/source/browse/trunk#trunk/MapView/Map
The fact that your images are 1 MB or 5 MB isn't a problem, since you should only be displaying one or four images at once. Add an asynchronous cache loader that fills and ages your cache of images based on what the user may look at next. Then, as the user interacts, take the image from the cache (or show a spinner on a cache miss), add it to a CALayer, and pop it into the CALayer hierarchy.
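The last step is just a few lines. A sketch, where `cachedImageRef` (a `CGImageRef` returned by your cache) and `parentView` are assumed to exist:

```objc
#import <QuartzCore/QuartzCore.h>

// Hand the decoded CGImage to a plain CALayer via its contents
// property, then attach it to the layer hierarchy.
CALayer *tileLayer = [CALayer layer];
tileLayer.frame = CGRectMake(0, 0, 320, 240);
tileLayer.contents = (id)cachedImageRef;  // CGImageRef from the cache
[parentView.layer addSublayer:tileLayer];
```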
There are a lot of performance/usage questions at play here. I would get in touch with someone on the project above, or just follow their lead. Based on what I know of the project, that means one UIView for the parent container that passes in events, with CALayers for all of the tiling. CGImageRefs are used in the CALayers, so stick with those too.
Hope this helps.