
What is the best way to get a breakdown of an image in a UIImage?

For example: if I have one image of a man, I want to trigger an event from different parts of the body (from the head, the stomach).

So what is the best way to prepare that image and trigger these events programmatically from a UIImageView?


I would figure out the regions of the image that should trigger different events, and then use either a UIGestureRecognizer or an override of the "touches" methods of UIView to determine whether a touch falls within one of these regions. You then need to determine whether the touch location lies inside one of the regions you've defined using a 2D point-in-shape test (there are good worked examples of this). The test can be more complicated depending on the complexity of the regions you've defined.
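
A minimal sketch of the gesture-recognizer approach with simple rectangular regions (the `bodyImageView` property, the handler name, and the rectangles are all illustrative; for arbitrary outlines, a CGPath and CGPathContainsPoint do the same job):

    // Illustrative region hit-testing with a tap gesture recognizer.
    - (void)viewDidLoad
    {
        [super viewDidLoad];

        self.bodyImageView.userInteractionEnabled = YES;   // UIImageView defaults to NO
        UITapGestureRecognizer *tap =
            [[UITapGestureRecognizer alloc] initWithTarget:self
                                                    action:@selector(handleBodyTap:)];
        [self.bodyImageView addGestureRecognizer:tap];
    }

    - (void)handleBodyTap:(UITapGestureRecognizer *)recognizer
    {
        CGPoint p = [recognizer locationInView:self.bodyImageView];

        // Hypothetical regions, expressed in the image view's coordinate space.
        CGRect headRegion    = CGRectMake(110,  10, 100, 120);
        CGRect stomachRegion = CGRectMake(100, 250, 120, 100);

        if (CGRectContainsPoint(headRegion, p))
        {
            // head tapped
        }
        else if (CGRectContainsPoint(stomachRegion, p))
        {
            // stomach tapped
        }
    }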


I got a solution this way:

I have slice images of the different regions, all the same size, and I overlay them so they look like a single image. Now, on a tap, I just check which slice image was actually hit.

This is the code for that:

    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
    {
        UITouch *touch = [touches anyObject];
        CGPoint point  = [touch locationInView:self.view];

        // Each slice image is transparent everywhere except its own region,
        // so a tap hits a body part when the point is NOT transparent in that slice.
        BOOL alphaFace       = [self isPointTransparent:point addImage:[UIImage imageNamed:@"face.png"]];
        BOOL alphaHand       = [self isPointTransparent:point addImage:[UIImage imageNamed:@"hand.png"]];
        BOOL alphaMidStomach = [self isPointTransparent:point addImage:[UIImage imageNamed:@"midStomach.png"]];

        // Feedback sound for a hit (playback code not shown).
        NSString *filePath = [[NSBundle mainBundle] pathForResource:@"beep" ofType:@"wav"];
        NSURL *fileURL     = [[NSURL alloc] initFileURLWithPath:filePath];

        if (!alphaFace)
        {
            // Tapped on the face image
        }
        else if (!alphaHand)
        {
            // Tapped on the hand image
        }
        else if (!alphaMidStomach)
        {
            // Tapped on the stomach image
        }
        // i.e. just check the alpha: if it is NO (not transparent), handle that slice.
    }
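
One caveat with the snippet above: `locationInView:` returns a point in view coordinates, while the alpha buffer built below is indexed in image pixels, so the two only line up when each slice image is displayed at its native pixel size. A minimal conversion sketch, assuming a scale-to-fill image view (the property name `bodyImageView` is illustrative):

    // Maps a touch point from the image view's coordinate space to pixel
    // coordinates of the backing CGImage (assumes UIViewContentModeScaleToFill).
    - (CGPoint)imagePointForViewPoint:(CGPoint)viewPoint
    {
        CGImageRef cgImage = self.bodyImageView.image.CGImage;   // hypothetical image view property
        CGSize viewSize = self.bodyImageView.bounds.size;
        CGFloat scaleX = CGImageGetWidth(cgImage)  / viewSize.width;
        CGFloat scaleY = CGImageGetHeight(cgImage) / viewSize.height;
        return CGPointMake(viewPoint.x * scaleX, viewPoint.y * scaleY);
    }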
    // Plain C helper: renders an image's alpha channel into an 8-bit,
    // one-byte-per-pixel bitmap context.
    CGContextRef CreateARGBBitmapContext(CGImageRef inImage)
    {
        CGContextRef    context    = NULL;
        CGColorSpaceRef colorSpace = NULL;   // NULL color space: we only want kCGImageAlphaOnly
        void           *bitmapData;
        int             bitmapByteCount;
        int             bitmapBytesPerRow;

        size_t pixelsWide = CGImageGetWidth(inImage);
        size_t pixelsHigh = CGImageGetHeight(inImage);
        bitmapBytesPerRow = (pixelsWide * 1);                  // 1 byte (8 bpp) per pixel
        bitmapByteCount   = (bitmapBytesPerRow * pixelsHigh);

        bitmapData = calloc(1, bitmapByteCount);
        if (bitmapData == NULL)
        {
            return NULL;
        }

        context = CGBitmapContextCreate(bitmapData,
                                        pixelsWide,
                                        pixelsHigh,
                                        8,
                                        bitmapBytesPerRow,
                                        colorSpace,
                                        kCGImageAlphaOnly);
        if (context == NULL)
        {
            free(bitmapData);
            fprintf(stderr, "Context not created!");
        }

        return context;
    }

    // Despite the name, this returns only the 8-bit alpha channel, one byte per pixel.
    - (NSData *)ARGBData:(UIImage *)image
    {
        CGContextRef cgctx = CreateARGBBitmapContext(image.CGImage);
        if (cgctx == NULL)
            return nil;

        size_t w = CGImageGetWidth(image.CGImage);
        size_t h = CGImageGetHeight(image.CGImage);
        CGRect rect = {{0, 0}, {w, h}};
        CGContextDrawImage(cgctx, rect, image.CGImage);

        unsigned char *data = CGBitmapContextGetData(cgctx);
        CGContextRelease(cgctx);
        if (!data)
            return nil;

        size_t dataSize = 1 * w * h;   // 1 byte (8-bit alpha) per pixel

        NSData *result = [NSData dataWithBytes:data length:dataSize];   // copies the bytes
        free(data);   // the buffer was calloc'd in CreateARGBBitmapContext
        return result;
    }

    - (BOOL)isPointTransparent:(CGPoint)point addImage:(UIImage *)image
    {
        NSData *rawData = [self ARGBData:image];   // or cache this per image
        if (rawData == nil)
            return NO;

        // Just 8 bits (1 byte) per pixel: the alpha component only.
        size_t bpp = 1;
        size_t bpr = CGImageGetWidth(image.CGImage) * bpp;   // use the image width, not a hard-coded 320

        NSUInteger index = ((NSUInteger)point.x * bpp) + ((NSUInteger)point.y * bpr);
        const unsigned char *rawDataBytes = (const unsigned char *)[rawData bytes];
        if (index >= [rawData length])
            return YES;   // points outside the image count as transparent

        return rawDataBytes[index] == 0;
    }
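
As the comment in `isPointTransparent:addImage:` hints, `ARGBData:` redraws the image on every touch, so caching the alpha buffers is worthwhile. A minimal sketch using a dictionary keyed by image name (the `alphaCache` property and the method name are illustrative):

    // Render each slice's alpha buffer once and reuse it on later touches.
    // Assumes an NSMutableDictionary property named alphaCache on this class.
    - (NSData *)cachedAlphaDataForImageNamed:(NSString *)name
    {
        if (self.alphaCache == nil)
            self.alphaCache = [NSMutableDictionary dictionary];

        NSData *data = [self.alphaCache objectForKey:name];
        if (data == nil)
        {
            data = [self ARGBData:[UIImage imageNamed:name]];
            if (data != nil)
                [self.alphaCache setObject:data forKey:name];
        }
        return data;
    }

`isPointTransparent:addImage:` could then look up the cached data by image name instead of rebuilding it for every slice on every tap.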