
How to crop a UIImage?

I'm developing an application in which I process an image using its pixels, but that image processing takes a lot of time. Therefore I want to crop the UIImage (keep only the middle part of the image, i.e. remove/crop the border of the image). The code I have so far is:

- (NSInteger)processImage1:(UIImage *)image
{
    CGFloat width  = image.size.width;
    CGFloat height = image.size.height;
    struct pixel *pixels = (struct pixel *)calloc(1, width * height * sizeof(struct pixel));
    if (pixels != NULL)
    {
        // Create a new bitmap context backed by the pixel buffer
        CGContextRef context = CGBitmapContextCreate(
                    (void *)pixels,
                    width,
                    height,
                    8,              // bits per component
                    width * 4,      // bytes per row (RGBA)
                    CGImageGetColorSpace(image.CGImage),
                    kCGImageAlphaPremultipliedLast);
        if (context != NULL)
        {
            // Draw the image into the bitmap
            CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), image.CGImage);
            NSUInteger numberOfPixels = width * height;

            NSMutableArray *numberOfPixelsArray = [[[NSMutableArray alloc] initWithCapacity:numberOfPixels] autorelease];

            // ... per-pixel processing here ...

            CGContextRelease(context);
        }
        free(pixels);
    }
    return 0;
}

How can I crop away the border and take just the middle part of the UIImage?


Try something like this:

CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
image = [UIImage imageWithCGImage:imageRef]; 
CGImageRelease(imageRef);

Note: cropRect is a smaller rectangle covering the middle part of the image...
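
In Swift the same call is cgImage.cropping(to:). One thing to watch: the CGImage works in pixels while UIImage.size is in points, so build the rect from the CGImage's own pixel dimensions (or multiply point values by image.scale). A minimal sketch of a centered crop; centerCrop and fraction are just illustrative names, not part of any API:

import UIKit

func centerCrop(_ image: UIImage, keeping fraction: CGFloat = 0.5) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }

    // Pixel dimensions of the backing CGImage (not points).
    let pixelWidth  = CGFloat(cgImage.width)
    let pixelHeight = CGFloat(cgImage.height)

    // Keep the middle `fraction` of the image, dropping an equal border on each side.
    // Note: this rect is relative to the raw CGImage, i.e. before any UIImage orientation is applied.
    let cropRect = CGRect(x: pixelWidth  * (1 - fraction) / 2,
                          y: pixelHeight * (1 - fraction) / 2,
                          width:  pixelWidth  * fraction,
                          height: pixelHeight * fraction)

    guard let croppedCG = cgImage.cropping(to: cropRect) else { return nil }

    // Preserve scale and orientation so the result displays like the original.
    return UIImage(cgImage: croppedCG, scale: image.scale, orientation: image.imageOrientation)
}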


I was looking for a way to get an arbitrary rectangular crop (ie., sub-image) of a UIImage.

Most of the solutions I tried do not work if the orientation of the image is anything but UIImageOrientationUp.

For example:

http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/

Typically, if you use your iPhone camera, images will have other orientations like UIImageOrientationLeft, and you will not get a correct crop with the above. This is because CGImageRef/CGContextDrawImage use a coordinate system that differs from UIImage's.

The code below uses UI* methods (no CGImageRef), and I have tested this with up/down/left/right oriented images, and it seems to work great.


// get sub image
- (UIImage*) getSubImageFrom: (UIImage*) img WithRect: (CGRect) rect {

    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // translated rectangle for drawing sub image 
    CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y, img.size.width, img.size.height);

    // clip to the bounds of the image context
    // not strictly necessary as it will get clipped anyway?
    CGContextClipToRect(context, CGRectMake(0, 0, rect.size.width, rect.size.height));

    // draw image
    [img drawInRect:drawRect];

    // grab image
    UIImage* subImage = UIGraphicsGetImageFromCurrentImageContext();

    UIGraphicsEndImageContext();

    return subImage;
}


Because I needed it just now, here is M-V's code in Swift 4:

func imageWithImage(image: UIImage, croppedTo rect: CGRect) -> UIImage {

    UIGraphicsBeginImageContext(rect.size)
    let context = UIGraphicsGetCurrentContext()

    let drawRect = CGRect(x: -rect.origin.x, y: -rect.origin.y, 
                          width: image.size.width, height: image.size.height)

    context?.clip(to: CGRect(x: 0, y: 0, 
                             width: rect.size.width, height: rect.size.height))

    image.draw(in: drawRect)

    let subImage = UIGraphicsGetImageFromCurrentImageContext()

    UIGraphicsEndImageContext()
    return subImage!
}
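
For example, cropping out the middle half of an image might look like this (photo is a hypothetical UIImage; the rect is in points, since UIGraphicsBeginImageContext and draw(in:) work in points):

let photo = UIImage(named: "sample")!   // hypothetical input image
let cropRect = CGRect(x: photo.size.width * 0.25,
                      y: photo.size.height * 0.25,
                      width: photo.size.width * 0.5,
                      height: photo.size.height * 0.5)
let middle = imageWithImage(image: photo, croppedTo: cropRect)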


It would ultimately be faster, with a lot less image creation from sprite atlases, if you could set not only the image for a UIImageView, but also the top-left offset to display within that UIImage. Maybe this is possible. It would certainly eliminate a lot of effort!

Meanwhile, I created these useful functions in a utility class that I use in my apps. It creates a UIImage from part of another UIImage, with options to rotate, scale, and flip using standard UIImageOrientation values to specify. The pixel scaling is preserved from the original image.

My app creates a lot of UIImages during initialization, and this necessarily takes time. But some images aren't needed until a certain tab is selected. To give the appearance of a quicker load, I could create them on a separate thread spawned at startup, and then just wait for it to finish when that tab is selected.
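
A rough sketch of that idea using GCD (the names here are hypothetical, not from the original post): kick off the expensive UIImage creation in a background block tracked by a DispatchGroup, and wait() on that group when the tab is first selected, so you only block if the work hasn't finished yet.

import UIKit

let imagePreloadGroup = DispatchGroup()
var preloadedImages: [UIImage] = []

// Placeholder for whatever slow UIImage construction the app actually does.
func buildExpensiveImages() -> [UIImage] {
    return []
}

// At startup: build the expensive images off the main thread.
func preloadImages() {
    DispatchQueue.global(qos: .userInitiated).async(group: imagePreloadGroup) {
        preloadedImages = buildExpensiveImages()
    }
}

// When the tab is selected: wait() returns once the block above has finished,
// so this blocks only if preloading isn't done yet.
func imagesForTab() -> [UIImage] {
    imagePreloadGroup.wait()
    return preloadedImages
}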

The cropping code below is also posted at Most efficient way to draw part of an image in iOS

+ (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture {
    return [ChordCalcController imageByCropping:imageToCrop toRect:aperture withOrientation:UIImageOrientationUp];
}

// Draw a full image into a crop-sized area and offset to produce a cropped, rotated image
+ (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture withOrientation:(UIImageOrientation)orientation {

            // convert y coordinate to origin bottom-left
    CGFloat orgY = aperture.origin.y + aperture.size.height - imageToCrop.size.height,
            orgX = -aperture.origin.x,
            scaleX = 1.0,
            scaleY = 1.0,
            rot = 0.0;
    CGSize size;

    switch (orientation) {
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            size = CGSizeMake(aperture.size.height, aperture.size.width);
            break;
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
        case UIImageOrientationUp:
        case UIImageOrientationUpMirrored:
            size = aperture.size;
            break;
        default:
            assert(NO);
            return nil;
    }


    switch (orientation) {
        case UIImageOrientationRight:
            rot = 1.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationRightMirrored:
            rot = 1.0 * M_PI / 2.0;
            scaleY = -1.0;
            break;
        case UIImageOrientationDown:
            scaleX = scaleY = -1.0;
            orgX -= aperture.size.width;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationDownMirrored:
            orgY -= aperture.size.height;
            scaleY = -1.0;
            break;
        case UIImageOrientationLeft:
            rot = 3.0 * M_PI / 2.0;
            orgX -= aperture.size.height;
            break;
        case UIImageOrientationLeftMirrored:
            rot = 3.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            orgX -= aperture.size.width;
            scaleY = -1.0;
            break;
        case UIImageOrientationUp:
            break;
        case UIImageOrientationUpMirrored:
            orgX -= aperture.size.width;
            scaleX = -1.0;
            break;
    }

    // set the draw rect to pan the image to the right spot
    CGRect drawRect = CGRectMake(orgX, orgY, imageToCrop.size.width, imageToCrop.size.height);

    // create a context for the new image
    UIGraphicsBeginImageContextWithOptions(size, NO, imageToCrop.scale);
    CGContextRef gc = UIGraphicsGetCurrentContext();

    // apply rotation and scaling
    CGContextRotateCTM(gc, rot);
    CGContextScaleCTM(gc, scaleX, scaleY);

    // draw the image to our clipped context using the offset rect
    CGContextDrawImage(gc, drawRect, imageToCrop.CGImage);

    // pull the image from our cropped context
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();

    // pop the context to get back to the default
    UIGraphicsEndImageContext();

    // Note: this is autoreleased
    return cropped;
}


Very small/simple Swift 5 version:

Note: You shouldn't mix UI and CG objects; they sometimes have very different coordinate spaces. This can make you sad.
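
The snippet itself appears to be missing from this copy of the post. As a stand-in, here is a minimal Swift 5 sketch that stays entirely in UIKit (per the note above), using UIGraphicsImageRenderer so the original image scale is preserved; the extension and method name are just illustrative:

import UIKit

extension UIImage {
    // `rect` is in the image's point coordinate system.
    func cropped(to rect: CGRect) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: rect.size, format: imageRendererFormat)
        return renderer.image { _ in
            // Draw the full image offset so that `rect` lands at the origin;
            // everything outside the renderer's bounds is clipped away.
            draw(at: CGPoint(x: -rect.origin.x, y: -rect.origin.y))
        }
    }
}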
