
How do I create/render a UIImage from a 3D transformed UIImageView?

After applying a 3D transform to a UIImageView's layer, I need to save the resulting "view" as a new UIImage... It seemed like a simple task at first :-) but no luck so far, and searching hasn't turned up any clues :-( so I'm hoping someone will be kind enough to point me in the right direction.

A very simple iPhone project is available here.

Thanks.

- (void)transformImage {
    float degrees = 12.0;
    float zDistance = 250;
    CATransform3D transform3D = CATransform3DIdentity;
    transform3D.m34 = 1.0 / zDistance; // the m34 cell of the matrix controls perspective, and zDistance affects the "sharpness" of the transform
    transform3D = CATransform3DRotate(transform3D, DEGREES_TO_RADIANS(degrees), 1, 0, 0); // rotate around the x-axis
    imageView.layer.transform = transform3D;
}

/* FAIL : capturing layer contents doesn't get the transformed image -- just the original

CGImageRef newImageRef = (CGImageRef)imageView.layer.contents;

UIImage *image = [UIImage imageWithCGImage:newImageRef];

*/


/* FAIL : the docs for renderInContext: state that it does not render layers with 3D transforms

UIGraphicsBeginImageContext(imageView.image.size);

[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];

UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

UIGraphicsEndImageContext();

*/
//
// header
//
#import <QuartzCore/QuartzCore.h>
#define DEGREES_TO_RADIANS(x) ((x) * M_PI / 180.0)
UIImageView *imageView;
@property (nonatomic, retain) IBOutlet UIImageView *imageView;

//
// code
//
@synthesize imageView;

- (void)transformImage {
    float degrees = 12.0;
    float zDistance = 250;
    CATransform3D transform3D = CATransform3DIdentity;
    transform3D.m34 = 1.0 / zDistance; // the m34 cell of the matrix controls perspective, and zDistance affects the "sharpness" of the transform
    transform3D = CATransform3DRotate(transform3D, DEGREES_TO_RADIANS(degrees), 1, 0, 0); // rotate around the x-axis
    imageView.layer.transform = transform3D;
}

- (UIImage *)captureView:(UIImageView *)view {
    UIGraphicsBeginImageContext(view.frame.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

- (void)imageSavedToPhotosAlbum:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo {
    NSString *title = @"Save to Photo Album";
    NSString *message = (error ? [error description] : @"Success!");
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:title message:message delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
    [alert show];
    [alert release];
}

- (IBAction)saveButtonClicked:(id)sender {
    UIImage *newImage = [self captureView:imageView];
    UIImageWriteToSavedPhotosAlbum(newImage, self, @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), nil);
}


I ended up writing a per-pixel render method that runs on the CPU, using the inverse of the view transform.

Basically, it first renders the original UIImageView into a UIImage. Then, for every pixel of the output image, it applies the inverse transform to find the corresponding source pixel in that UIImage.

RenderUIImageView.h

#import <UIKit/UIKit.h>
#import <QuartzCore/CATransform3D.h>
#import <QuartzCore/CALayer.h>

@interface RenderUIImageView : UIImageView

- (UIImage *)generateImage;

@end

RenderUIImageView.m

#import "RenderUIImageView.h"

@interface RenderUIImageView()

@property (assign) CATransform3D transform;
@property (assign) CGRect rect;

@property (assign) float denominatorx;
@property (assign) float denominatory;
@property (assign) float denominatorw;

@property (assign) float factor;

@end

@implementation RenderUIImageView


- (UIImage *)generateImage
{

    _transform = self.layer.transform;

    _denominatorx = _transform.m12 * _transform.m21 - _transform.m11 * _transform.m22
        + _transform.m14 * _transform.m22 * _transform.m41 - _transform.m12 * _transform.m24 * _transform.m41
        - _transform.m14 * _transform.m21 * _transform.m42 + _transform.m11 * _transform.m24 * _transform.m42;

    _denominatory = -_transform.m12 * _transform.m21 + _transform.m11 * _transform.m22
        - _transform.m14 * _transform.m22 * _transform.m41 + _transform.m12 * _transform.m24 * _transform.m41
        + _transform.m14 * _transform.m21 * _transform.m42 - _transform.m11 * _transform.m24 * _transform.m42;

    _denominatorw = _transform.m12 * _transform.m21 - _transform.m11 * _transform.m22
        + _transform.m14 * _transform.m22 * _transform.m41 - _transform.m12 * _transform.m24 * _transform.m41
        - _transform.m14 * _transform.m21 * _transform.m42 + _transform.m11 * _transform.m24 * _transform.m42;

    _rect = self.bounds;

    if (UIGraphicsBeginImageContextWithOptions != NULL) {

        UIGraphicsBeginImageContextWithOptions(_rect.size, NO, 0.0);
    } else {
        UIGraphicsBeginImageContext(_rect.size);
    }

    if ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)] &&
        ([UIScreen mainScreen].scale == 2.0)) {
        _factor = 2.0f;
    } else {
        _factor = 1.0f;
    }


    UIImageView *img = [[UIImageView alloc] initWithFrame:_rect];
    img.image = self.image;

    [img.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *source = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGContextRef ctx;
    CGImageRef imageRef = [source CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    unsigned char *inputData = malloc(height * width * 4);
    unsigned char *outputData = malloc(height * width * 4);

    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;

    CGContextRef context = CGBitmapContextCreate(inputData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    context = CGBitmapContextCreate(outputData, width, height,
                                    bitsPerComponent, bytesPerRow, colorSpace,
                                    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace); // release the color space only after both contexts have been created


    for (int ii = 0 ; ii < width * height ; ++ii)
    {
        int x = ii % width;
        int y = ii / width;
        int indexOutput = 4 * x + 4 * width * y;

        CGPoint p = [self modelToScreen:(x*2/_factor - _rect.size.width)/2.0 :(y*2/_factor - _rect.size.height)/2.0];

        p.x *= _factor;
        p.y *= _factor;

        int indexInput = 4 * (int)p.x + 4 * width * (int)p.y;

        if (p.x >= width || p.x < 0 || p.y >= height || p.y < 0 || indexInput >= width * height * 4)
        {
            // Source point falls outside the image: emit a transparent pixel
            outputData[indexOutput] = 0;
            outputData[indexOutput+1] = 0;
            outputData[indexOutput+2] = 0;
            outputData[indexOutput+3] = 0;
        }
        else
        {
            outputData[indexOutput] = inputData[indexInput];
            outputData[indexOutput+1] = inputData[indexInput + 1];
            outputData[indexOutput+2] = inputData[indexInput + 2];
            outputData[indexOutput+3] = 255;
        }
    }

    ctx = CGBitmapContextCreate(outputData, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef), 8,
                                CGImageGetBytesPerRow(imageRef), CGImageGetColorSpace(imageRef),
                                kCGImageAlphaPremultipliedLast);

    CGImageRef outputRef = CGBitmapContextCreateImage(ctx);
    UIImage *rawImage = [UIImage imageWithCGImage:outputRef];
    CGImageRelease(outputRef); // the UIImage retains it; release our ownership
    CGContextRelease(ctx);
    free(inputData);
    free(outputData);
    return rawImage;
}

- (CGPoint)modelToScreen:(float)x :(float)y
{
    float xp = (_transform.m22 * _transform.m41 - _transform.m21 * _transform.m42 - _transform.m22 * x + _transform.m24 * _transform.m42 * x + _transform.m21 * y - _transform.m24 * _transform.m41 * y) / _denominatorx;
    float yp = (-_transform.m11 * _transform.m42 + _transform.m12 * (_transform.m41 - x) + _transform.m14 * _transform.m42 * x + _transform.m11 * y - _transform.m14 * _transform.m41 * y) / _denominatory;
    float wp = (_transform.m12 * _transform.m21 - _transform.m11 * _transform.m22 + _transform.m14 * _transform.m22 * x - _transform.m12 * _transform.m24 * x - _transform.m14 * _transform.m21 * y + _transform.m11 * _transform.m24 * y) / _denominatorw;

    CGPoint result = CGPointMake(xp/wp, yp/wp);
    return result;
}

@end


Theoretically, you could use the (now-allowed) undocumented call UIGetScreenImage() after quickly rendering it to the screen on a black background, but in practice this will be slow and ugly, so don't use it ;P.


I had the same problem as you, and I found a solution! I wanted to rotate the UIImageView (because I have an animation) and then save the image. For that I use this method:

void CGContextConcatCTM(CGContextRef c, CGAffineTransform transform)

The transform parameter is the transform of your UIImageView, so anything you have done to the imageView will apply to the image as well. I have written a category method on UIImage:

- (UIImage *)imageRotateByTransform:(CGAffineTransform)transform {
    // Calculate the size of the rotated view's containing box for our drawing space
    UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.size.width, self.size.height)];
    rotatedViewBox.transform = transform;
    CGSize rotatedSize = rotatedViewBox.frame.size;
    [rotatedViewBox release];

    // Create the bitmap context
    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();

    // Move the origin to the middle of the image so we rotate and scale around the center
    CGContextTranslateCTM(bitmap, rotatedSize.width / 2, rotatedSize.height / 2);

    // Rotate the image context using the transform
    CGContextConcatCTM(bitmap, transform);

    // Now draw the rotated/scaled image into the context (flipping y for CGContextDrawImage)
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    CGContextDrawImage(bitmap, CGRectMake(-self.size.width / 2, -self.size.height / 2, self.size.width, self.size.height), [self CGImage]);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

Hope this will help you.


Have you had a look at this? UIImage from UIView


I had the same problem, and I was able to use UIView's drawViewHierarchyInRect:afterScreenUpdates: method, available from iOS 7.0 (Documentation).

It draws the whole tree as it appears on the screen.

UIGraphicsBeginImageContextWithOptions(viewToRender.bounds.size, YES, 0);
[viewToRender drawViewHierarchyInRect:viewToRender.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();


Let's say you have a UIImageView called imageView. If you apply a 3D transform and try to render this view with UIGraphicsImageRenderer, the transform is ignored.

imageView.layer.transform = someTransform3d

But if you convert the CATransform3D to a CGAffineTransform using CATransform3DGetAffineTransform and apply it to the transform property of the image view, it works.

 imageView.transform = CATransform3DGetAffineTransform(someTransform3d)

Then you can use the extension below to save it as a UIImage:

extension UIView {
    func asImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
    }
}

And just call

let image = imageView.asImage()


In your captureView: method, try replacing this line:

[view.layer renderInContext:UIGraphicsGetCurrentContext()];

with this:

[view.layer.superlayer renderInContext:UIGraphicsGetCurrentContext()];

You may have to adjust the size you use to create the image context.

I don't see anything in the API doc that says renderInContext: ignores 3D transformations. However, the transformations apply to the layer, not its contents, which is why you need to render the superlayer to see the transformation applied.

Note that calling drawRect: on the superview definitely won't work, as drawRect: does not draw subviews.


3D transform on UIImage / CGImageRef

I've improved Marcos Fuentes's answer so you can calculate the mapping of each pixel yourself. It's not perfect, but it does the trick...

It is available on this repository http://github.com/hfossli/AGGeometryKit/

The interesting files are:

https://github.com/hfossli/AGGeometryKit/blob/master/Source/AGTransformPixelMapper.m

https://github.com/hfossli/AGGeometryKit/blob/master/Source/CGImageRef%2BCATransform3D.m

https://github.com/hfossli/AGGeometryKit/blob/master/Source/UIImage%2BCATransform3D.m


3D transform on UIView / UIImageView

https://stackoverflow.com/a/12820877/202451

Then you will have full control over each point in the quadrilateral. :)


A solution I found that at least worked in my case was to subclass CALayer. When a renderInContext: message is sent to a layer, that layer automatically forwards that message to all its sublayers. So all I had to do was to subclass CALayer and override the renderInContext: method and render what I needed to be rendered in the provided context.

For example, in my code I had a layer for which I was setting its contents to an image of an arrow:

UIImage *image = [UIImage imageNamed:@"arrow.png"];
MyLayer *myLayer = [[MyLayer alloc] init];
[myLayer setContents:(__bridge id)[image CGImage]];
[self.mainLayer addSublayer:myLayer];

Now when I was applying a 3D 180 degree rotation over the Y-axis on the arrow and was trying to do a [self.mainLayer renderInContext:context] afterwards I was still getting the un-rotated image.

So in my subclass MyLayer I overrode renderInContext: and used an already rotated image to draw in provided context:

- (void)renderInContext:(CGContextRef)ctx
{
    NSLog(@"Rendered in context");
    UIImage *image = [UIImage imageNamed:@"arrow_rotated.png"];
    CGContextDrawImage(ctx, self.bounds, image.CGImage);
}

This worked in my case, however I can see that if you are doing lots of 3D transforms you may not be able to have an image ready for every possible scenario. In many other cases though it should be possible to render the result of 3D transform using 2D transforms in the passed context. For example in my case instead of using a different image arrow_rotated.png I could use the arrow.png image and mirror it and draw it in the context.
