Dynamic image rendering on iOS

I have a programming task for an application I am writing for the iPad and the documentation is not clear about how to go about doing this. I am hoping for some good advice on approaching this problem.

Basically, I have a memory buffer that stores raw RGB for a 256x192 pixel image. This image will be written to regularly and I wish to display this to a 768x576 pixel area on the screen on an update call. I would like this to be relatively quick and maybe optimise it by only processing the areas of the image that actually change.

How would I go about doing this? My initial thought is to create a CGBitmapContext to manage the 256x192 image, then create a CGImage from this, then create a UIImage from this and change the image property of a UIImageView instance. This sounds like a rather slow process.

Am I on the right lines, or should I be looking at something different? One other note: this image must co-exist with other UIKit views on the screen.

Thanks for any help you can provide.


In my experience, obtaining an image from a bitmap context is actually very quick. The real performance hit, if any, will be in the drawing operations themselves. Since you are scaling the resultant image, you might obtain better results by creating the bitmap context at the final size, and drawing everything scaled to begin with.
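If the buffer is filled pixel-by-pixel rather than through Quartz drawing calls, the same idea can be applied by writing a full-size buffer directly: the 768x576 target is exactly 3x the 256x192 source, so a nearest-neighbour expansion is cheap. A framework-free sketch of that expansion (the function name is mine, buffer sizes are from the question):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define SRC_W 256
#define SRC_H 192
#define SCALE 3   /* 768x576 is exactly 3x 256x192 */

/* Expand each source pixel into a SCALE x SCALE block in dst.
   dst must hold (SRC_W*SCALE) * (SRC_H*SCALE) pixels. */
static void upscale3x(const uint32_t *src, uint32_t *dst)
{
    const size_t dstW = SRC_W * SCALE;
    for (size_t y = 0; y < SRC_H; y++) {
        uint32_t *row = dst + y * SCALE * dstW;
        /* Write one expanded row... */
        for (size_t x = 0; x < SRC_W; x++) {
            uint32_t p = src[y * SRC_W + x];
            for (int i = 0; i < SCALE; i++)
                row[x * SCALE + i] = p;
        }
        /* ...then duplicate it for the remaining SCALE-1 rows. */
        for (int i = 1; i < SCALE; i++)
            memcpy(row + i * dstW, row, dstW * sizeof(uint32_t));
    }
}
```

Whether this beats letting UIImageView (or the bitmap context) do the scaling is worth profiling; it trades a bigger memcpy for avoiding a per-frame interpolated scale.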

If you do use a bitmap context, however, you must make sure to add an alpha channel (RGBA or ARGB), as CGBitmapContext does not support just RGB.
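If the source buffer really is packed 24-bit RGB, it has to be widened to 32 bits per pixel before a bitmap context will accept it. A minimal, framework-free sketch of that widening (the function name and the RGBX byte layout, i.e. an "alpha skipped" format, are my assumptions):

```c
#include <stdint.h>
#include <stddef.h>

/* Widen packed 24-bit RGB to 32-bit RGBX. The fourth byte is set to 0xFF
   but ignored, matching a context created with "alpha skipped" bitmap info. */
static void rgb24_to_rgbx32(const uint8_t *rgb, uint32_t *out, size_t pixels)
{
    for (size_t i = 0; i < pixels; i++) {
        const uint8_t *p = rgb + i * 3;
        /* Bytes land in memory as R,G,B,X on a little-endian CPU. */
        out[i] = ((uint32_t)p[0]) | ((uint32_t)p[1] << 8)
               | ((uint32_t)p[2] << 16) | 0xFF000000u;
    }
}
```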


OK, I've come up with a solution. Thanks Justin for giving me the confidence to use the bitmap contexts. In the end I used this bit of code:

CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;

// Wrap the existing buffer without copying it; kCFAllocatorNull stops
// CFData from trying to free memory it does not own.
CFDataRef data = CFDataCreateWithBytesNoCopy(kCFAllocatorDefault, (UInt8*)screenBitmap, sizeof(UInt32)*256*192, kCFAllocatorNull);
CGDataProviderRef provider = CGDataProviderCreateWithCFData(data);
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();

// 256x192, 8 bits per component, 32 bits per pixel, 1024 bytes per row;
// NULL decode array, no interpolation, default rendering intent.
CGImageRef image = CGImageCreate(256, 192, 8, 32, sizeof(UInt32)*256, colourSpace, bitmapInfo, provider, NULL, NO, kCGRenderingIntentDefault);

CGColorSpaceRelease(colourSpace);
CGDataProviderRelease(provider);
CFRelease(data);

self.image = [UIImage imageWithCGImage:image];

CGImageRelease(image);

Also note that screenBitmap is my UInt32 array of size 256x192, and self is a UIImageView-derived object. This code works well, but is it the right way of doing it?
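On the original question's idea of only processing the areas that change: a cheap approach is to track one dirty rectangle as pixels are written, then rebuild or redraw only that region each update (for instance by cropping with CGImageCreateWithImageInRect before drawing). A framework-free sketch of the bookkeeping (all names here are mine):

```c
#include <stdbool.h>
#include <limits.h>

typedef struct {
    int minX, minY, maxX, maxY;  /* inclusive bounds; empty when minX > maxX */
} DirtyRect;

/* Mark the region empty, e.g. after each screen update. */
static void dirtyReset(DirtyRect *r)
{
    r->minX = r->minY = INT_MAX;
    r->maxX = r->maxY = INT_MIN;
}

static bool dirtyIsEmpty(const DirtyRect *r) { return r->minX > r->maxX; }

/* Call whenever pixel (x, y) is written; grows the rectangle to cover it. */
static void dirtyMark(DirtyRect *r, int x, int y)
{
    if (x < r->minX) r->minX = x;
    if (x > r->maxX) r->maxX = x;
    if (y < r->minY) r->minY = y;
    if (y > r->maxY) r->maxY = y;
}
```

If writes cluster in one area this keeps the per-frame work proportional to the changed region; if they are scattered, the single rectangle degenerates to the whole image and you are back to the simple full-frame path above.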

