Why is scaling down a UIImage from the camera so slow?

Tags: iphone, performance, uiimage, uiimagepickercontroller

Resizing a camera UIImage returned by the UIImagePickerController takes a ridiculously long time if you do it the usual way, as in this post.
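For reference, here is a minimal sketch of the kind of resize I mean, going through CGBitmapContextCreate and CGContextDrawImage. The method name, the 640×480 target, and the choice of pixel format are just examples for illustration:

    // Sketch of the usual CGBitmapContextCreate / CGContextDrawImage resize path.
    // Requires UIKit / CoreGraphics.
    - (UIImage *)scaledImageFromImage:(UIImage *)image toSize:(CGSize)size
    {
        CGImageRef cgImage = image.CGImage;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

        // 32-bit RGB context, premultiplied alpha first.
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     size.width,
                                                     size.height,
                                                     8,
                                                     size.width * 4,
                                                     colorSpace,
                                                     kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(colorSpace);

        // This is the call that takes ~1.5 s on a full-resolution camera image.
        CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), cgImage);

        CGImageRef scaledCGImage = CGBitmapContextCreateImage(context);
        UIImage *scaled = [UIImage imageWithCGImage:scaledCGImage];
        CGImageRelease(scaledCGImage);
        CGContextRelease(context);
        return scaled;
    }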

[update: last call for creative ideas here! my next option is to go ask Apple, I guess.]

Yes, it's a lot of pixels, but the graphics hardware on the iPhone is perfectly capable of drawing lots of 1024×1024 textured quads onto the screen in 1/60th of a second, so there really should be a way of resizing a 2048×1536 image down to 640×480 in a lot less than 1.5 seconds.

So why is it so slow? Is the underlying image data the OS returns from the picker somehow not ready to be drawn, so that it has to be swizzled in some fashion that the GPU can't help with?

My best guess is that it needs to be converted from RGBA to ABGR or something like that; can anybody think of a way to convince the system to give me the data quickly, even if it's in the wrong format? I'll deal with it myself later.

As far as I know, the iPhone doesn't have any dedicated "graphics" memory, so there shouldn't be a question of moving the image data from one place to another.

So, the question: is there some alternative drawing method besides just using CGBitmapContextCreate and CGContextDrawImage that takes more advantage of the GPU?

Something to investigate: if I start with a UIImage of the same size that's not from the image picker, is it just as slow? Apparently not…
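A rough sketch of that test, assuming the scaledImageFromImage:toSize: helper from the snippet above: build a synthetic image of the same dimensions that never went near the picker, then time the same resize path against it.

    // Synthetic 2048x1536 UIImage (not from the picker), then time the resize.
    UIGraphicsBeginImageContext(CGSizeMake(2048, 1536));
    [[UIColor grayColor] setFill];
    UIRectFill(CGRectMake(0, 0, 2048, 1536));
    UIImage *synthetic = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
    UIImage *scaled = [self scaledImageFromImage:synthetic toSize:CGSizeMake(640, 480)];
    NSLog(@"resize to %@ took %.3f s",
          NSStringFromCGSize(scaled.size),
          CFAbsoluteTimeGetCurrent() - start);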

Update: Matt Long found that it only takes 30ms to resize the image you get back from the picker in [info objectForKey:@"UIImagePickerControllerEditedImage"], if you've enabled cropping with the manual camera controls. That isn't helpful for the case I care about, where I'm using takePicture to take pictures programmatically. I see that the edited image is kCGImageAlphaPremultipliedFirst but the original image is kCGImageAlphaNoneSkipFirst.
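If you want to check the pixel formats on your own images, here is a sketch of how I looked at them, inside the picker delegate's didFinishPickingMediaWithInfo: callback:

    // Compare the alpha info of the edited and original images from the picker.
    UIImage *edited   = [info objectForKey:@"UIImagePickerControllerEditedImage"];
    UIImage *original = [info objectForKey:@"UIImagePickerControllerOriginalImage"];

    CGImageAlphaInfo editedAlpha   = CGImageGetAlphaInfo(edited.CGImage);
    CGImageAlphaInfo originalAlpha = CGImageGetAlphaInfo(original.CGImage);
    NSLog(@"edited alpha info: %d, original alpha info: %d",
          (int)editedAlpha, (int)originalAlpha);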

Further update: Jason Crawford suggested CGContextSetInterpolationQuality(context, kCGInterpolationLow), which does in fact cut the time from about 1.5 sec to 1.3 sec, at a cost in image quality, but that's still far from the speed the GPU should be capable of!
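For completeness, that call just goes on the bitmap context before the draw; a sketch, assuming the context, size, and cgImage variables from the resize snippet above:

    // Lowering interpolation quality before the draw shaves a bit of time
    // (roughly 1.5 s -> 1.3 s here) at some cost in output quality.
    CGContextSetInterpolationQuality(context, kCGInterpolationLow);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), cgImage);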

Last update before the week runs out: user refulgentis did some profiling which seems to indicate that the 1.5 seconds is spent writing the captured camera image out to disk as a JPEG and then reading it back in. If true, very bizarre.

Best Answer

Use Shark, profile it, figure out what's taking so long.

I have to work a lot with MediaPlayer.framework, and when you get properties for songs on the iPod, the first property request is insanely slow compared to subsequent requests, because in the first request MobileMediaPlayer packages up a dictionary with all the properties and passes it to my app.

I'd be willing to bet that there is a similar situation occurring here.

EDIT: I was able to do a time profile in Shark of both Matt Long's UIImagePickerControllerEditedImage situation and the generic UIImagePickerControllerOriginalImage situation.

In both cases, a majority of the time is taken up by CGContextDrawImage. In Matt Long's case, the UIImagePickerController takes care of this in between the user capturing the image and the image entering 'edit' mode.

Normalizing the time spent in CGContextDrawImage to 100%: CGContextDelegateDrawImage takes 100%, then ripc_DrawImage (from libRIP.A.dylib) takes 100%, and then ripc_AcquireImage (which appears to decompress the JPEG, spending most of its time in _cg_jpeg_idct_islow, vec_ycc_bgrx_convert, decompress_onepass, and sep_upsample) takes 93% of the time. Only 7% of the time is actually spent in ripc_RenderImage, which I assume is the actual drawing.