Any point in converting CVImageBufferRef to CGImageRef to deal with pixels directly?
I want to analyze the pixels on a per-frame basis coming from the iOS camera. I'm wondering if there's any reason to convert from CVImageBufferRef to CGImageRef and then read the pixels from the CGImageRef, or whether the data is essentially the same either way. One thing that comes to mind is color space conversion that might take place during the CGImageRef conversion.
I was just looking into this issue and found that a CVImageBufferRef can normally be read directly as 32 BPP (BGRA) pixels, matching the width * sizeof(uint32_t) arithmetic in the code below. The tricky part is checking the result of CVPixelBufferGetBytesPerRow(imageBuffer) to handle the case where the returned bytes per row is larger than width * sizeof(uint32_t), because rows can be padded for alignment. In my code, I need to flatten the input pixels into a buffer that contains no padding bytes, so here is what I do:
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
...
int outputBytesPerRow = width * sizeof(uint32_t);

if (bytesPerRow == outputBytesPerRow) {
  // In this optimized case, just copy the entire framebuffer in one go
  memcpy(frameBuffer.pixels, baseAddress, frameBuffer.numBytes);
} else {
  // Non-simple layout where padding bytes appear after the pixels, need to copy
  // one row at a time into the flat pixel buffer layout.
  for (int rowi = 0; rowi < height; rowi++) {
    void *inputRowStartPtr = baseAddress + (rowi * bytesPerRow);
    void *outputRowStartPtr = frameBuffer.pixels + (rowi * outputBytesPerRow);
    memcpy(outputRowStartPtr, inputRowStartPtr, outputBytesPerRow);
  }
}
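For completeness, here is a self-contained sketch of the same approach showing the setup the ... above elides, assuming the camera is configured for kCVPixelFormatType_32BGRA. FlatFrameBuffer and copyFrame are hypothetical names standing in for the frameBuffer structure in the snippet; the lock/unlock calls around the base-address access are required by CoreVideo:

#include <CoreVideo/CoreVideo.h>
#include <string.h>

// Hypothetical flat pixel buffer, standing in for frameBuffer above.
typedef struct {
  uint32_t *pixels;   // tightly packed BGRA pixels, width * height entries
  size_t numBytes;    // width * height * sizeof(uint32_t)
} FlatFrameBuffer;

// Copy one camera frame into a tightly packed buffer, handling row padding.
static void copyFrame(CVImageBufferRef imageBuffer, FlatFrameBuffer *frameBuffer) {
  CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

  uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
  size_t width = CVPixelBufferGetWidth(imageBuffer);
  size_t height = CVPixelBufferGetHeight(imageBuffer);
  size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
  size_t outputBytesPerRow = width * sizeof(uint32_t);

  if (bytesPerRow == outputBytesPerRow) {
    // No padding: copy the entire framebuffer in one go.
    memcpy(frameBuffer->pixels, baseAddress, frameBuffer->numBytes);
  } else {
    // Padded rows: copy one row at a time into the flat layout.
    for (size_t rowi = 0; rowi < height; rowi++) {
      memcpy((uint8_t *)frameBuffer->pixels + (rowi * outputBytesPerRow),
             baseAddress + (rowi * bytesPerRow),
             outputBytesPerRow);
    }
  }

  CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
}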
Since this question is about 10 years old, people might not care anymore, but these days VideoToolbox gives the most direct answer: VTCreateCGImageFromCVPixelBuffer.
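For reference, a minimal sketch of that call (declared in VideoToolbox's VTUtilities.h, available since iOS 9 / macOS 10.11). pixelBuffer here stands for the CVImageBufferRef coming from the capture callback; the options parameter is currently required to be NULL:

#include <VideoToolbox/VideoToolbox.h>

CGImageRef cgImage = NULL;
OSStatus status = VTCreateCGImageFromCVPixelBuffer(pixelBuffer, NULL, &cgImage);
if (status == noErr && cgImage != NULL) {
  // Use the CGImage, then release it when done.
  CGImageRelease(cgImage);
}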