
Bicubic: Bicubic interpolation (the default); the output pixel value is a weighted average of the pixels in the nearest 4-by-4 neighborhood.

Bicubic Interpolation
For bicubic interpolation, the block uses the weighted average of four translated pixel values for each output pixel value. For example, suppose a matrix represents your input image, and you want to translate this image 0.5 pixel in the positive horizontal direction using bicubic interpolation. The Translate block's bicubic interpolation algorithm is illustrated by the following steps:

1. Zero-pad the input matrix and translate it by 0.5 pixel to the right.

2. Create the output matrix by replacing each input pixel value with the weighted average of the two translated values on either side. The result is a matrix with one more column than the input matrix.
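A minimal sketch of such a translation, assuming a small illustrative matrix (the matrices from the original example are not reproduced here) and using SciPy's spline-based shift, whose kernel differs from the cubic convolution described above but which follows the same zero-pad-and-weighted-average structure:

```python
import numpy as np
from scipy.ndimage import shift

# Hypothetical 3x3 input image; values are for illustration only.
img = np.array([[1., 2., 3.],
                [4., 5., 6.],
                [7., 8., 9.]])

# Translate 0.5 pixel in the positive horizontal direction.
# order=3 selects cubic interpolation; mode='constant' zero-pads,
# as in step 1 above.
out = shift(img, shift=(0, 0.5), order=3, mode='constant', cval=0.0)
print(out.shape)   # (3, 3)
```

Note that unlike the matrix example above, `shift` keeps the output the same size as the input rather than adding a column.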

Bicubic interpolation example

In a bicubic interpolation, first the position of each pixel in the output map is determined; then the values of the 16 surrounding pixels in the input map are used to calculate an interpolated value for each output pixel. Figure 1 below shows the position of a 'new' pixel in the output map, and the positions and values of the 16 surrounding pixels in the input map.

Fig. 1: Principle of bicubic interpolation. The value of the 'new' pixel in the output map is calculated by first performing 4 interpolations in the y-direction, then 1 interpolation in the x-direction (between the 4 intermediate values, represented in red).

A third-order polynomial is fitted through each set of 4 known points, and from it the value at the intermediate position is obtained. A bicubic interpolation gives a better estimate of the output value than a bilinear interpolation.
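The two-stage scheme in Fig. 1 can be sketched directly, assuming a Catmull-Rom cubic (one common choice of third-order polynomial through 4 equally spaced points; the source does not specify which cubic is used):

```python
import numpy as np

def cubic_1d(p0, p1, p2, p3, t):
    """Catmull-Rom cubic through four equally spaced samples; returns
    the value at fractional position t in [0, 1] between p1 and p2."""
    return p1 + 0.5 * t * (p2 - p0 + t * (2*p0 - 5*p1 + 4*p2 - p3
                                          + t * (3*(p1 - p2) + p3 - p0)))

def bicubic(patch, tx, ty):
    """patch: a 4x4 neighbourhood of the input map; (tx, ty): the
    fractional position of the 'new' pixel inside the central cell.
    First four interpolations along y, then one along x, as in Fig. 1."""
    cols = [cubic_1d(patch[0, i], patch[1, i], patch[2, i], patch[3, i], ty)
            for i in range(4)]
    return cubic_1d(cols[0], cols[1], cols[2], cols[3], tx)

patch = np.arange(16.0).reshape(4, 4)
print(bicubic(patch, 0.0, 0.0))   # reproduces the central sample patch[1, 1]
```

At t = 0 the cubic returns p1 exactly, which is the property noted below: at a sample position the interpolant reproduces the sample value itself.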

bicubic interpolation
September 24, 2001, by Ben Long. Filed under: Glossary_B. A method of interpolation used in a resampling process, and usually the best interpolation choice when you scale an image. Resampling is the process of computing new pixel information to scale an image to a larger or smaller size. Photoshop offers a choice of several algorithms for computing new pixels when resampling; bicubic is the best choice for almost all images.

When using bicubic interpolation, Photoshop examines all of the surrounding pixels for each pixel in the image. A weighted average of the color values of these neighboring pixels is used to determine the color of new pixels. If you're sampling up, this color is assigned to the new pixels that are created to increase the size of your image. If you're sampling down, this color is assigned to a single pixel that replaces a larger area of pixels, to decrease the size of your image.

Bicubic interpolation uses the information from an original pixel and the sixteen surrounding pixels to determine the color of the new pixels created from the original pixel. It is a big improvement over the previous two interpolation methods for two reasons: (1) it uses data from a larger number of pixels, and (2) it uses a cubic calculation that is more sophisticated than the calculations of the previous interpolation methods. Bicubic interpolation is capable of producing photo-quality results and is probably the most commonly used method.
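The up- and down-sampling described above can be illustrated with SciPy's spline-based zoom; this is a stand-in for Photoshop's resampler, whose exact algorithm is not publicly specified:

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
img = rng.random((8, 8))   # a small stand-in image

# Sampling up: new pixels are computed from weighted neighbourhoods.
up = zoom(img, 2.0, order=3)
# Sampling down: a single pixel replaces a larger area of pixels.
down = zoom(img, 0.5, order=3)
print(up.shape, down.shape)   # (16, 16) (4, 4)
```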

This is a reproduction of a couple of graphics from an old classic article about cubic interpolation for image processing: "Cubic convolution interpolation for digital image processing" by R. Keys (1981). The version available online is a bad scan of what looks like an originally bad printing, with heavy dithering. I am going to use this kind of interpolation in my work, and because this is a very important topic, and those figures are very instructive and interesting, I decided to take a little detour from my specific application and make a reproduction of them to leave available on the internet.

The main application of this kind of interpolation is to produce intermediate values from a set of regularly spaced samples. This is usually the way signals of all sorts are stored, from 1D audio, to images, and even 3D fields obtained via e.g. ultrasound. You need these intermediate values for "zooming in", for example, to present the signal on a monitor that has a higher resolution than the signal is sampled at. You also need them when you are transforming an image, be it just a simple scaling (more specifically an upscaling, since you would need a smoothing filter to downscale), or non-linear transformations like rotation and all sorts of deformations. In the case of 3D fields, you need them to find contour surfaces at given levels.

The theoretically correct way to do that is to calculate the convolution of the signal, considered as a modulated impulse train, with the sinc function. There are some problems with that, though. First, it is hard to calculate, and second, it may look bad, especially in the case of images. We usually don't expect a sample value to have any correlation with the function at a very distant point, and the sinc function causes exactly that; this is just the nature of bandlimited signals. So we prefer to convolve the impulse train with a function of limited support.

The article by Keys shows how to calculate the coefficients of a cubic function that implements such a kernel. You can actually look at it as just an approximation of the sinc function by a cubic. While the sinc function is the theoretically perfect but more complicated alternative, there are also simpler alternatives to this cubic interpolation. The most important are:

1. Just picking the nearest value, which is equivalent to convolving with a boxcar function.

2. Linear interpolation: a first-order polynomial; just connect two given samples with a straight line and pick the corresponding value at any intermediate position. Bilinear interpolation is the application of linear interpolation in both dimensions: you find the intermediate "x" value from two lines of two pixels, at the surrounding whole-number y positions, then interpolate between them.

Bilinear interpolation is quite common, and serves most purposes in simpler applications. But if you start pushing its limits you will get upset with it very quickly. The first reason is that the first derivative of the resulting function is discontinuous; there is an angle at each sample. One important characteristic of these interpolation techniques, by the way, is that the output of the calculation at a sample position is the original sample value itself.

The first figure below is an example of one-dimensional interpolation. The top curve is the original function that we want to interpolate, a sinusoid. The samples are at the integers. The second graphic shows the individual cubic curves modulated by the sample values. Their sum is the result of the convolution operation, and is the blue curve in the third graphic. The wiggly red curve is the error; its vertical axis is the right one, in a different scale.

The next figure is a reproduction of the article's experiment. The function that was interpolated is sin(0.5*r**2), shown in the top left picture. This function was chosen because we can see what happens as the direction of the curves moves from horizontal, to inclined, to vertical. Also, the frequency grows as we move from the top left to the bottom right corner of the image. In the article the images are 350-ish pixels wide; mine are 513×513. The top right picture is the one created by interpolating values from a regular grid of 65×65 samples selected from the original values. That means one pixel every 8 in the vertical and horizontal directions. It is hard to see the differences in this figure, but the full-scale images are at the end of this post for you to appreciate. The bilinear interpolation is there too.
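The Keys kernel itself, with the common choice a = -0.5, can be sketched as follows; the 1D interpolant mirrors the "individual cubic curves modulated by the sample values" in the first figure:

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    """Keys (1981) cubic convolution kernel; a = -0.5 gives the
    third-order-accurate approximation to sinc."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    near = x <= 1
    far = (x > 1) & (x < 2)
    out[near] = (a + 2) * x[near]**3 - (a + 3) * x[near]**2 + 1
    out[far] = a * (x[far]**3 - 5 * x[far]**2 + 8 * x[far] - 4)
    return out

def interp1d_cubic(samples, xs):
    """Evaluate the cubic-convolution interpolant of samples taken at
    integer positions 0..len(samples)-1 (implicit zero padding)."""
    xs = np.asarray(xs, dtype=float)
    result = np.zeros_like(xs)
    for k, s in enumerate(samples):
        result += s * keys_kernel(xs - k)
    return result

# At the sample positions the interpolant reproduces the samples exactly:
t = np.arange(8)
s = np.sin(t)
print(np.allclose(interp1d_cubic(s, t), s))   # True
```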

The two bottom pictures are the absolute errors. The maximum error is 0.196 for the bicubic and 0.234 for the bilinear. But that is for the whole area of the image. If you look at them, you can see that the top left region has smaller errors, especially in the case of the bicubic. That is because of the lower frequencies there. The maximum error in this top left quarter of the image is 0.059 for the bilinear and 0.004 for the bicubic, a great improvement in this case.

The next images are the original 513×513 values, for your pleasure. Look for differences in the interpolations at the lower right corners. It's easy to see the blocks where each different set of reference samples is used. I also think it's cool to see in the error images how there are black lines over the reference coordinates, showing that over the samples the error is very small. It's also interesting that the bilinear error follows the shape of the original surface, while the bicubic error has a less obvious shape.

EDIT: The code I used to create these figures is available at http://pastebin.com/DxLr2btC
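A rough reproduction of the error experiment can be sketched with SciPy, using smaller grids than the post's 513×513 / 65×65 and spline interpolants rather than Keys' kernel, so the error values differ from 0.196 / 0.234, but the bicubic-beats-bilinear trend holds:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

n, step = 129, 8
c = np.arange(n) / (n - 1) * 4.0              # coordinates 0..4 (arbitrary scale)
x, y = np.meshgrid(c, c)
full = np.sin(0.5 * (x**2 + y**2))            # the test function sin(0.5*r**2)

cs = c[::step]                                # one sample every 8 pixels
coarse = full[::step, ::step]

# kx=ky=1 -> bilinear; kx=ky=3 -> bicubic spline interpolation
bilin = RectBivariateSpline(cs, cs, coarse, kx=1, ky=1)(c, c)
bicub = RectBivariateSpline(cs, cs, coarse, kx=3, ky=3)(c, c)

err_bilin = np.abs(bilin - full).max()
err_bicub = np.abs(bicub - full).max()
print(err_bilin, err_bicub)
```

As in the figures above, both interpolants pass exactly through the reference samples, which is why the error images show black lines over the sample coordinates.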

EDIT2: Here's a much prettier image of the error. White is small error. Blue and red are negative and positive errors.
