I use HTML5 canvas elements to resize images in my browser. It turns out that the quality is very low. I found this: Disable Interpolation when Scaling a <canvas>, but it does not help to increase the quality.
Below are my CSS and JS code, as well as the image scaled with Photoshop and scaled with the canvas API.
What do I have to do to get optimal quality when scaling an image in the browser?
Note: I want to scale down a large image to a small one, modify the colors in a canvas and send the result from the canvas to the server.
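For reference, the upload step at the end would be roughly this (a sketch only: the /upload endpoint is a placeholder, and canvas.toBlob may need a polyfill in older browsers):

// Sketch: upload the processed canvas to the server as a JPEG.
function uploadCanvas(canvas) {
    canvas.toBlob(function (blob) {
        var form = new FormData();
        form.append('image', blob, 'resized.jpg');
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/upload'); // placeholder endpoint
        xhr.send(form);
    }, 'image/jpeg', 0.9);
}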
CSS:
canvas, img {
    image-rendering: optimizeQuality;
    image-rendering: -moz-crisp-edges;
    image-rendering: -webkit-optimize-contrast;
    image-rendering: optimize-contrast;
    -ms-interpolation-mode: nearest-neighbor;
}
JS:
var $img = $('<img>');
var $originalCanvas = $('<canvas>');
$img.load(function () {
    var originalContext = $originalCanvas[0].getContext('2d');
    originalContext.imageSmoothingEnabled = false;
    originalContext.webkitImageSmoothingEnabled = false;
    originalContext.mozImageSmoothingEnabled = false;
    originalContext.drawImage(this, 0, 0, 379, 500);
});
The image resized with Photoshop:
The image resized on canvas:
Edit:
I tried downscaling in more than one step, as proposed in:
Resizing an image in an HTML5 canvas and
Html5 canvas drawImage: how to apply antialiasing
This is the function I have used:
function resizeCanvasImage(img, canvas, maxWidth, maxHeight) {
    var imgWidth = img.width,
        imgHeight = img.height;
    var ratio = 1, ratio1 = 1, ratio2 = 1;
    ratio1 = maxWidth / imgWidth;
    ratio2 = maxHeight / imgHeight;
    // Use the smallest ratio so that the image best fits into the maxWidth x maxHeight box.
    if (ratio1 < ratio2) {
        ratio = ratio1;
    }
    else {
        ratio = ratio2;
    }
    var canvasContext = canvas.getContext("2d");
    var canvasCopy = document.createElement("canvas");
    var copyContext = canvasCopy.getContext("2d");
    var canvasCopy2 = document.createElement("canvas");
    var copyContext2 = canvasCopy2.getContext("2d");
    canvasCopy.width = imgWidth;
    canvasCopy.height = imgHeight;
    copyContext.drawImage(img, 0, 0);
    // init
    canvasCopy2.width = imgWidth;
    canvasCopy2.height = imgHeight;
    copyContext2.drawImage(canvasCopy, 0, 0, canvasCopy.width, canvasCopy.height, 0, 0, canvasCopy2.width, canvasCopy2.height);
    var rounds = 2;
    var roundRatio = ratio * rounds;
    for (var i = 1; i <= rounds; i++) {
        console.log("Step: " + i);
        // tmp
        canvasCopy.width = imgWidth * roundRatio / i;
        canvasCopy.height = imgHeight * roundRatio / i;
        copyContext.drawImage(canvasCopy2, 0, 0, canvasCopy2.width, canvasCopy2.height, 0, 0, canvasCopy.width, canvasCopy.height);
        // copy back
        canvasCopy2.width = imgWidth * roundRatio / i;
        canvasCopy2.height = imgHeight * roundRatio / i;
        copyContext2.drawImage(canvasCopy, 0, 0, canvasCopy.width, canvasCopy.height, 0, 0, canvasCopy2.width, canvasCopy2.height);
    } // end for
    // copy back to canvas
    canvas.width = imgWidth * roundRatio / rounds;
    canvas.height = imgHeight * roundRatio / rounds;
    canvasContext.drawImage(canvasCopy2, 0, 0, canvasCopy2.width, canvasCopy2.height, 0, 0, canvas.width, canvas.height);
}
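For reference, I call it like this (with the hard-coded rounds changed to 2, 3, 4 and 20 for the results below):

resizeCanvasImage($img[0], $originalCanvas[0], 379, 500);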
Here is the result if I use a 2-step downsizing:
Here is the result if I use a 3-step downsizing:
Here is the result if I use a 4-step downsizing:
Here is the result if I use a 20-step downsizing:
Note: It turns out that from 1 step to 2 steps there is a large improvement in image quality, but the more steps you add to the process, the fuzzier the image becomes.
Is there a way to solve the problem that the image gets fuzzier the more steps you add?
Edit 2013-10-04: I tried GameAlchemist's algorithm. Here is the result compared to Photoshop.
Photoshop image:
GameAlchemist's algorithm:
Best Answer
Since your problem is to downscale your image, there is no point in talking about interpolation, which is about creating pixels. The issue here is downsampling.
To downsample an image, we need to turn each square of p * p pixels in the original image into a single pixel in the destination image.
For performance reasons, browsers do a very simple downsampling: to build the smaller image, they just pick ONE pixel in the source and use its value for the destination, which 'forgets' some details and adds noise.
Yet there's an exception to that: since 2X image downsampling is very simple to compute (average 4 pixels to make one) and is used for retina/HiDPI displays, this case is handled properly; the browser really does use 4 pixels to make one.
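To see what that clean 2X step does, here it is written out by hand with getImageData. (Just an illustrative sketch; the helper name halveCanvas is made up, and it assumes even dimensions and an opaque image.)

// Halve a canvas by averaging each 2x2 block into one pixel -
// the same box filter browsers apply for the clean 2X case.
function halveCanvas(src) {
    var w = src.width >> 1, h = src.height >> 1;
    var sData = src.getContext('2d').getImageData(0, 0, src.width, src.height).data;
    var dst = document.createElement('canvas');
    dst.width = w;
    dst.height = h;
    var ctx = dst.getContext('2d');
    var out = ctx.createImageData(w, h);
    for (var y = 0; y < h; y++) {
        for (var x = 0; x < w; x++) {
            var i = 4 * (2 * y * src.width + 2 * x); // top-left pixel of the 2x2 block
            var j = i + 4 * src.width;               // bottom-left pixel
            var o = 4 * (y * w + x);
            for (var c = 0; c < 3; c++) {
                // Rounded mean of the 4 pixels in the block.
                out.data[o + c] = (sData[i + c] + sData[i + 4 + c] +
                                   sData[j + c] + sData[j + 4 + c] + 2) >> 2;
            }
            out.data[o + 3] = 255; // assume an opaque image
        }
    }
    ctx.putImageData(out, 0, 0);
    return dst;
}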
BUT... if you apply a 2X downsampling several times in a row, you'll face the issue that the successive rounding errors add too much noise.
What's worse, you won't always resize by a power of two, and resizing to the nearest power of two plus one last resizing is very noisy.
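Spelled out, that halve-then-finish strategy looks something like this (again only a sketch; the name stepDownScale is made up):

function stepDownScale(img, targetW, targetH) {
    var cur = document.createElement('canvas');
    cur.width = img.width;
    cur.height = img.height;
    cur.getContext('2d').drawImage(img, 0, 0);
    // Halve while the half is still at least as large as the target:
    // each pass uses the browser's good 2X path, but also rounds the
    // result back to 8 bits, which is where the noise accumulates.
    while (cur.width / 2 >= targetW && cur.height / 2 >= targetH) {
        var half = document.createElement('canvas');
        half.width = Math.floor(cur.width / 2);
        half.height = Math.floor(cur.height / 2);
        half.getContext('2d').drawImage(cur, 0, 0, cur.width, cur.height,
                                        0, 0, half.width, half.height);
        cur = half;
    }
    // The final, non-power-of-two step: this is the noisy one.
    var out = document.createElement('canvas');
    out.width = targetW;
    out.height = targetH;
    out.getContext('2d').drawImage(cur, 0, 0, cur.width, cur.height,
                                   0, 0, targetW, targetH);
    return out;
}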
What you seek is pixel-perfect downsampling, that is: a re-sampling of the image that takes all input pixels into account, whatever the scale.
To do that, we must compute, for each input pixel, its contribution to one, two, or four destination pixels, depending on whether the scaled projection of the input pixel lies entirely inside a destination pixel, overlaps an X border, a Y border, or both.
(A diagram would be nice here, but I don't have one.)
Here's an example of canvas scaling vs. my pixel-perfect scaling at a 1/3 scale of a wombat.
Notice that the picture might get scaled in your browser, and has been JPEG-compressed by S.O.
Yet we see that there's much less noise, especially in the grass behind the wombat and in the branches on its right. The noise in the fur makes it more contrasted, but it looks like the animal has white hairs, unlike in the source picture.
The right image is less catchy but definitely nicer.
Here's the code to do the pixel-perfect downscaling:
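(The complete, optimized version is in the fiddles below; what follows is a minimal sketch of the same idea, assuming an opaque RGB source and a scale below 1.)

// Downscale srcCanvas by 'scale' (< 1), pixel-perfect: every source
// pixel contributes its exact covered area to the destination pixels
// its projection overlaps (1, 2, or 4 of them).
// Sketch: edge pixels can be very slightly under-weighted when
// width * scale is not an integer.
function downScaleCanvas(srcCanvas, scale) {
    var sw = srcCanvas.width, sh = srcCanvas.height;
    var tw = Math.floor(sw * scale), th = Math.floor(sh * scale);
    var sData = srcCanvas.getContext('2d').getImageData(0, 0, sw, sh).data;
    // Float accumulation buffer (3 channels) - this is the extra memory.
    var buf = new Float32Array(3 * tw * th);
    for (var sy = 0; sy < sh; sy++) {
        for (var sx = 0; sx < sw; sx++) {
            // Projection of this source pixel onto the destination grid.
            var x0 = sx * scale, x1 = x0 + scale;
            var y0 = sy * scale, y1 = y0 + scale;
            var si = 4 * (sy * sw + sx);
            var r = sData[si], g = sData[si + 1], b = sData[si + 2];
            for (var ty = Math.floor(y0); ty < y1 && ty < th; ty++) {
                // Height of the overlap with destination row ty.
                var wy = Math.min(y1, ty + 1) - Math.max(y0, ty);
                for (var tx = Math.floor(x0); tx < x1 && tx < tw; tx++) {
                    // Width of the overlap with destination column tx.
                    var wx = Math.min(x1, tx + 1) - Math.max(x0, tx);
                    var w = wx * wy; // covered area = weight; sums to 1 per destination pixel
                    var ti = 3 * (ty * tw + tx);
                    buf[ti] += r * w;
                    buf[ti + 1] += g * w;
                    buf[ti + 2] += b * w;
                }
            }
        }
    }
    // Round the float buffer back to 8-bit RGBA once, at the very end.
    var res = document.createElement('canvas');
    res.width = tw;
    res.height = th;
    var rCtx = res.getContext('2d');
    var rImg = rCtx.createImageData(tw, th);
    for (var i = 0, j = 0; j < tw * th * 3; i += 4, j += 3) {
        rImg.data[i] = Math.round(buf[j]);
        rImg.data[i + 1] = Math.round(buf[j + 1]);
        rImg.data[i + 2] = Math.round(buf[j + 2]);
        rImg.data[i + 3] = 255; // opaque
    }
    rCtx.putImageData(rImg, 0, 0);
    return res;
}

Usage: var result = downScaleCanvas(sourceCanvas, 1 / 3);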
fiddle result : http://jsfiddle.net/gamealchemist/r6aVp/embedded/result/
fiddle itself : http://jsfiddle.net/gamealchemist/r6aVp/
It is quite memory greedy, since a float buffer is required to store the intermediate values of the destination image (if we count the result canvas, we use 6 times the source image's memory in this algorithm).
It is also quite expensive, since each source pixel is processed whatever the destination size, and we have to pay for getImageData / putImageData, which are quite slow as well.
But there's no way to be faster than processing each source value in this case, and the situation is not that bad: for my 740 * 556 image of a wombat, processing takes between 30 and 40 ms.