Creating an image from an array of colors on iPhone
I have an array with colors as its objects and I want to create an image out of it. What's happening is: I take the pixel value of each pixel of an image, modify it, and store the modified value as an object in a mutable array. Now I want to draw an image from this array. How can I do that? Any idea?
-(UIImage *)modifyPixels:(UIImage *)originalImage {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSMutableArray *result = [[NSMutableArray alloc] init];
    int width = originalImage.size.width;
    int height = originalImage.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), originalImage.CGImage);
    CGContextRelease(context);
    // rawData now contains the image in the RGBA8888 pixel format,
    // stored row by row, so iterate rows (yy) in the outer loop.
    int byteIndex = 0;
    for (int yy = 0; yy < height; yy++) {
        for (int xx = 0; xx < width; xx++) {
            NSLog(@"(%d,%d) R=%u G=%u B=%u A=%u", xx, yy,
                  rawData[byteIndex], rawData[byteIndex + 1],
                  rawData[byteIndex + 2], rawData[byteIndex + 3]);
            // Average the three channels to get a grayscale value.
            // UIColor expects components in the 0.0-1.0 range, so divide by 255.
            CGFloat red = (rawData[byteIndex] + rawData[byteIndex + 1] + rawData[byteIndex + 2]) / (3 * 255.0);
            CGFloat green = red;
            CGFloat blue = red;
            CGFloat alpha = 1.0;
            byteIndex += 4;
            UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
            [result addObject:acolor];
        }
    }
    free(rawData);
    UIImage *newImage;
    // CREATE NEW UIIMAGE (newImage) HERE from result (array of colors)
    // this is the portion I'm in trouble with
    [pool release];
    return newImage;
}
As far as I understand, you are trying to average all the channels, i.e. convert the image to grayscale. You could try the following approach, which uses Core Graphics:
CGImageRef inImage = mainImageView.image.CGImage;
// Copy the image's raw pixel data into a buffer we can modify.
CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 *pixelBuffer = (UInt8 *)CFDataGetBytePtr(dataRef);
CFIndex length = CFDataGetLength(dataRef);
// Assuming an alpha-first (ARGB) layout: byte 0 of each pixel is alpha,
// bytes 1-3 are red, green, and blue.
for (CFIndex index = 0; index < length; index += 4)
{
    pixelBuffer[index + 1] = (pixelBuffer[index + 1] + pixelBuffer[index + 2] + pixelBuffer[index + 3]) / 3;
    pixelBuffer[index + 2] = pixelBuffer[index + 1];
    pixelBuffer[index + 3] = pixelBuffer[index + 1];
}
// Wrap the modified buffer in a bitmap context and create a new image from it.
CGContextRef ctx = CGBitmapContextCreate(pixelBuffer,
                                         CGImageGetWidth(inImage),
                                         CGImageGetHeight(inImage),
                                         8,
                                         CGImageGetBytesPerRow(inImage),
                                         CGImageGetColorSpace(inImage),
                                         kCGImageAlphaPremultipliedFirst);
CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
CGContextRelease(ctx);
CFRelease(dataRef);
CGImageRelease(imageRef);
The result is stored in rawImage.
You could also take a look at Apple's GLImageProcessing sample, which demonstrates some basic image-processing techniques on the iPhone using OpenGL.