How to process CIFilter using CPU instead of GPU?
Does anyone know how to tell Core Image to process a CIImage through a CIFilter using the CPU instead of the GPU? I need to process some very large images and I get strange results using the GPU. I don't care how long it takes; CPU will be fine.
kCIContextUseSoftwareRenderer is the key here:

+ (CIContext *)coreContextFor:(NSGraphicsContext *)context forceSoftware:(BOOL)forceSoftware
{
    //CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSDictionary *contextOptions = [NSDictionary dictionaryWithObjectsAndKeys:
                                    (id)colorSpace, kCIContextWorkingColorSpace,
                                    (id)colorSpace, kCIContextOutputColorSpace,
                                    [NSNumber numberWithBool:forceSoftware], kCIContextUseSoftwareRenderer,
                                    nil];
    CIContext *result = [CIContext contextWithCGContext:(CGContextRef)[context graphicsPort] options:contextOptions];
    CGColorSpaceRelease(colorSpace);
    return result;
}
Rendering in software mode (CPU) solves some issues, but the performance penalty is so severe on modern machines that I can't call it a general solution. In my apps I use Core Image with the GPU for on-screen rendering and force the CPU only when saving. I have noticed that CPU rendering is a bit more accurate on my test hardware, and since saving a filtered image is a long process anyway, I can sacrifice speed there.
Here is a Swift version (updated to current Swift, where the options dictionary is keyed by CIContextOption and CGColorSpaceCreateDeviceRGB() is no longer optional):

func coreContextFor(context: CGContext, forceSoftware force: Bool) -> CIContext {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let options: [CIContextOption: Any] = [
        .workingColorSpace: colorSpace,
        .outputColorSpace: colorSpace,
        .useSoftwareRenderer: NSNumber(value: force)
    ]
    return CIContext(cgContext: context, options: options)
}
Not an answer, but you can always rewrite the CIKernel as a function that operates on a pixel's RGB values, apply that function in a loop over the image's unsigned char [] data, and convert the result back to a CIImage.