
Local Edge-Preserving Multiscale Decomposition for High Dynamic Range Image Tone Mapping

ABSTRACT:
A novel filter is proposed for edge-preserving decomposition of an image. It is different from previous filters in its locally adaptive property. The filtered image contains local means everywhere and preserves local salient edges. Comparisons are made between our filtered result and the results of three other methods. A detailed analysis is also made on the behavior of the filter. A multiscale decomposition with this filter is proposed for manipulating a high dynamic range image, which has three detail layers and one base layer. The multiscale decomposition with the filter addresses three assumptions: 1) the base layer preserves local means everywhere; 2) every scale's salient edges are relatively large gradients in a local window; and 3) all of the nonzero gradient information belongs to the detail layer. An effective function is also proposed for compressing the detail layers. The reproduced image gives a good visualization. Experimental results on real images demonstrate that our algorithm is especially effective at preserving or enhancing local details.

Introduction To Image Processing


1.1 What is an image?

An image is an array, or a matrix, of square pixels (picture elements) arranged in columns and rows.

Figure 1: An image: an array or a matrix of pixels arranged in columns and rows. In an 8-bit greyscale image each picture element has an assigned intensity that ranges from 0 to 255. A greyscale image is what people normally call a black and white image, but the name emphasizes that such an image will also include many shades of grey.

Figure 2: Each pixel has a value from 0 (black) to 255 (white). The possible range of the pixel values depends on the colour depth of the image; here 8 bit = 256 tones or greyscales.

A normal greyscale image has 8 bit colour depth = 256 greyscales. A "true colour" image has 24 bit colour depth = 8 x 8 x 8 bits = 256 x 256 x 256 colours = ~16 million colours.
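The colour-depth arithmetic above can be checked with a few lines of Python (used here purely for illustration; the report's own tool is MATLAB, and the function name is ours):

```python
# Bit depth determines how many distinct values a pixel can take.
def tone_count(bits_per_channel, channels=1):
    """Number of representable values for a given bit depth and channel count."""
    return (2 ** bits_per_channel) ** channels

greyscale_8bit = tone_count(8)           # 8-bit greyscale
true_colour = tone_count(8, channels=3)  # 24-bit "true colour"

print(greyscale_8bit)  # 256
print(true_colour)     # 16777216 (~16.7 million)
```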

Figure 3: A true-colour image assembled from three greyscale images coloured red, green and blue. Such an image may contain up to 16 million different colours. Some greyscale images have more greyscales, for instance 16 bit = 65,536 greyscales. In principle three greyscale images can be combined to form an image with 281,474,976,710,656 greyscales. There are two general groups of 'images': vector graphics (or line art) and bitmaps (pixel-based 'images'). Some of the most common file formats are: GIF, an 8-bit (256 colour), non-destructively compressed bitmap format, mostly used for the web; it has several sub-standards, one of which is the animated GIF.

JPEG, a very efficient (i.e. much information per byte) destructively compressed 24 bit (16 million colours) bitmap format, widely used, especially for the web and Internet (bandwidth-limited). TIFF, the standard 24 bit publication bitmap format, which compresses non-destructively with, for instance, Lempel-Ziv-Welch (LZW) compression. PS: Postscript, a standard vector format; it has numerous sub-standards and can be difficult to transport across platforms and operating systems. PSD: a dedicated Photoshop format that keeps all the information in an image, including all the layers.

Pictures are the most common and convenient means of conveying or transmitting information. A picture is worth a thousand words. Pictures concisely convey information about positions, sizes and interrelationships between objects. They portray spatial information that we can recognize as objects. Human beings are good at deriving information from such images, because of our innate visual and mental abilities. About 75% of the information received by humans is in pictorial form. An image is digitized to convert it to a form which can be stored in a computer's memory or on some form of storage media such as a hard disk or CD-ROM. This digitization procedure can be done by a scanner, or by a video camera connected to a frame grabber board in a computer. Once the image has been digitized, it can be operated upon by various image processing operations. Image processing operations can be roughly divided into three major categories: Image Compression, Image Enhancement and

Restoration, and Measurement Extraction. Image Compression involves reducing the amount of memory needed to store a digital image. Image defects which could be caused by the digitization process or by faults in the imaging set-up (for example, bad lighting) can be corrected using Image Enhancement techniques. Once the image is in good condition, the Measurement Extraction operations can be used to obtain useful information from the image. Some examples of Image Enhancement and Measurement Extraction are given below. The examples shown all operate on 256-grey-scale images. This means that each pixel in the image is stored as a number between 0 and 255, where 0 represents a black pixel, 255 represents a white pixel, and values in between represent shades of grey. These operations can be extended to operate on colour images. The examples below represent only a few of the many techniques available for operating on images. Details about the inner workings of the operations have not been given, but some references to books containing this information are given at the end for the interested reader.

Images and pictures

As we mentioned in the preface, human beings are predominantly visual creatures: we rely heavily on our vision to make sense of the world around us. We not only look at things to identify and classify them, but we can scan for differences, and obtain an overall rough feeling for a scene with a quick glance. Humans have evolved very precise visual skills: we can identify a face in an instant; we can differentiate colors; we can process a large amount of visual information very quickly. However, the world is in constant motion: stare at something for long enough and it will change in some way. Even a large solid structure, like a building or a mountain, will change its appearance

depending on the time of day (day or night), the amount of sunlight (clear or cloudy), or various shadows falling upon it. We are concerned with single images: snapshots, if you like, of a visual scene. Although image processing can deal with changing scenes, we shall not discuss it in any detail in this text. For our purposes, an image is a single picture which represents something. It may be a picture of a person, of people or animals, or of an outdoor scene, or a microphotograph of an electronic component, or the result of medical imaging. Even if the picture is not immediately recognizable, it will not be just a random blur. Image processing involves changing the nature of an image in order to either: 1. improve its pictorial information for human interpretation, or 2. render it more suitable for autonomous machine perception.

We shall be concerned with digital image processing, which involves using a computer to change the nature of a digital image. It is necessary to realize that these two aims represent two separate but equally important aspects of image processing. A procedure which satisfies condition 1, a procedure which makes an image look better, may be the very worst procedure for satisfying condition 2. Humans like their images to be sharp, clear and detailed; machines prefer their images to be simple and uncluttered.

Images and digital images

Suppose we take an image, a photo, say. For the moment, let's make things easy and suppose the photo is black and white (that is, lots of shades of grey), so no colour. We may consider this image as being a two-dimensional function, where the function values give the brightness

of the image at any given point. We may assume that in such an image brightness values can be any real numbers in the range 0 (black) to 1 (white). A digital image differs from a photo in that the values are all discrete. Usually they take on only integer values, with brightness values also ranging from 0 (black) to 255 (white). A digital image can be considered as a large array of discrete dots, each of which has a brightness associated with it. These dots are called picture elements, or more simply pixels. The pixels surrounding a given pixel constitute its neighborhood. A neighborhood can be characterized by its shape in the same way as a matrix: we can speak of an m-by-n neighborhood. Except in very special circumstances, neighborhoods have odd numbers of rows and columns; this ensures that the current pixel is in the centre of the neighborhood.

Image Processing Fundamentals: Pixel: In order for any digital computer processing to be carried out on an image, it must first be stored within the computer in a suitable form that can be manipulated by a computer program. The most practical way of doing this is to divide the image up into a collection of discrete (and usually small) cells, which are known as pixels. Most commonly, the image is divided up into a rectangular grid of pixels, so that each pixel is itself a small rectangle. Once this has been done, each pixel is given a pixel value that represents the color of that pixel. It is assumed that the whole pixel is the same color, and so any color variation that did exist within the area of the pixel before the image was discretized is lost. However, if the area of each pixel is very small, then the discrete nature of the image is often not visible to the human eye.

Other pixel shapes and formations can be used, most notably the hexagonal grid, in which each pixel is a small hexagon. This has some advantages in image processing, including the fact that pixel connectivity is less ambiguously defined than with a square grid, but hexagonal grids are not widely used. Part of the reason is that many image capture systems (e.g. most CCD cameras and scanners) intrinsically discretize the captured image into a rectangular grid in the first instance.

Pixel Connectivity
The notion of pixel connectivity describes a relation between two or more pixels. For two pixels to be connected they have to fulfill certain conditions on the pixel brightness and spatial adjacency. First, in order for two pixels to be considered connected, their pixel values must both be from the same set of values V. For a grayscale image, V might be any range of graylevels, e.g. V={22,23,...,40}; for a binary image we simply have V={1}. To formulate the adjacency criterion for connectivity, we first introduce the notion of a neighborhood. For a pixel p with the coordinates (x,y), the set of pixels given by:

N4(p) = { (x+1,y), (x-1,y), (x,y+1), (x,y-1) }

is called its 4-neighbors. Its 8-neighbors are defined as

N8(p) = N4(p) ∪ { (x+1,y+1), (x+1,y-1), (x-1,y+1), (x-1,y-1) }

From this we can infer the definition of 4- and 8-connectivity: two pixels p and q, both having values from a set V, are 4-connected if q is in the set N4(p), and 8-connected if q is in the set N8(p).

General connectivity can either be based on 4- or 8-connectivity; for the following discussion we use 4-connectivity. A pixel p is connected to a pixel q if p is 4-connected to q or if p is 4-connected to a third pixel which itself is connected to q. Or, in other words, two pixels q and p are connected if there is a path from p to q on which each pixel is 4-connected to the next one. A set of pixels in an image which are all connected to each other is called a connected component. Finding all connected components in an image and marking each of them with a distinctive label is called connected component labeling. An example of a binary image with two connected components which are based on 4-connectivity can be seen in Figure 1. If the connectivity were based on 8-neighbors, the two connected components would merge into one.

Figure 1: Two connected components based on 4-connectivity.
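The connected component labeling just described can be sketched as a breadth-first flood fill over the 4-neighbors (an illustrative Python sketch, not a production implementation; the function and variable names are our own):

```python
from collections import deque

def label_components(img, value_set={1}):
    """Label the 4-connected components of pixels whose values are in value_set."""
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] in value_set and labels[r][c] == 0:
                current += 1                      # start a new distinctive label
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:
                    y, x = queue.popleft()
                    # visit the 4-neighbors (x+1,y), (x-1,y), (x,y+1), (x,y-1)
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and img[ny][nx] in value_set and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return current, labels

# Two diagonal foreground pixels: 4-connectivity sees two components,
# whereas 8-connectivity would merge them into one.
binary = [[1, 0],
          [0, 1]]
n, _ = label_components(binary)
print(n)  # 2
```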

Pixel Values
Each of the pixels that represents an image stored inside a computer has a pixel value which describes how bright that pixel is, and/or what color it should be. In the simplest case of binary images, the pixel value is a 1-bit number indicating either foreground or background. For a grayscale image, the pixel value is a single number that represents the brightness of the pixel. The most common pixel format is the byte image, where this number is stored as an 8-bit integer giving a range of possible values from 0 to 255. Typically zero is taken to be black, and 255 is taken to be white. Values in between make up the different shades of gray. To represent colour images, separate red, green and blue components must be specified for each pixel (assuming an RGB colour space), and so the pixel 'value' is actually a vector of three numbers. Often the three

different components are stored as three separate 'grayscale' images known as color planes (one for each of red, green and blue), which have to be recombined when displaying or processing. Multispectral images can contain even more than three components for each pixel, and by extension these are stored in the same kind of way, as a vector pixel value, or as separate color planes. The actual grayscale or color component intensities for each pixel may not actually be stored explicitly. Often, all that is stored for each pixel is an index into a colour map in which the actual intensity or colors can be looked up. Although simple 8-bit integers or vectors of 8-bit integers are the most common sorts of pixel values used, some image formats support different types of value, for instance 32-bit signed integers or floating point values. Such values are extremely useful in image processing as they allow processing to be carried out on the image where the resulting pixel values are not necessarily 8-bit integers. If this approach is used then it is usually necessary to set up a colormap which relates particular ranges of pixel values to particular displayed colors.
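The three representations described above, a byte image, RGB vector pixels, and an indexed image with a colour map, can be illustrated with plain Python data structures (the specific values here are made up for illustration):

```python
# A greyscale "byte image": each pixel is one 8-bit integer (0..255).
grey = [[0, 128],
        [200, 255]]

# An RGB image: each pixel 'value' is a vector of three 8-bit numbers.
rgb = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]

# An indexed image: pixels store only an index into a colour map,
# and the actual colours are looked up when displaying.
colormap = {0: (0, 0, 0), 1: (255, 0, 0), 2: (255, 255, 255)}
indexed = [[1, 0],
           [0, 2]]

# Looking up index 2 recovers the actual colour (white).
print(colormap[indexed[1][1]])  # (255, 255, 255)
```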

Pixels, with a neighborhood:

Color scale
The two main color spaces are RGB and CMYK.

RGB
The RGB color model is an additive color model in which red, green, and blue light are added together in various ways to reproduce a broad array of colors. RGB uses additive color mixing and is the basic color model used in television or any other medium that projects color with light. It is the basic color model used in computers and for web graphics, but it cannot be used for print production.

The secondary colors of RGB (cyan, magenta, and yellow) are formed by mixing two of the primary colors (red, green or blue) and excluding the third color. Red and green combine to make yellow, green and blue to make cyan, and blue and red form magenta. The combination of red, green, and blue in full intensity makes white [Figure 1].

Figure [1]: The additive model of RGB. Red, green, and blue are the primary stimuli for human color perception and are the primary additive colours.

To see how different RGB components combine together, here is a selected repertoire of colors and their respective relative intensities for each of the red, green, and blue components:
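The additive combinations described above can be sketched as channel-wise addition clipped to the 8-bit maximum (an illustrative Python sketch; the function name is ours):

```python
def add_rgb(c1, c2):
    """Additive mixing: add the channels, clipping to the 8-bit maximum of 255."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

RED   = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE  = (0, 0, 255)

print(add_rgb(RED, GREEN))                 # (255, 255, 0)   -> yellow
print(add_rgb(GREEN, BLUE))                # (0, 255, 255)   -> cyan
print(add_rgb(BLUE, RED))                  # (255, 0, 255)   -> magenta
print(add_rgb(add_rgb(RED, GREEN), BLUE))  # (255, 255, 255) -> white
```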

"#y$ical uses o% M&#'&B include:(


- Math and computation
- Algorithm development
- Data acquisition
- Modeling, simulation, and prototyping
- Data analysis, exploration, and visualization
- Scientific and engineering graphics
- Application development, including graphical user interface building

Some applications:

Image processing has an enormous range of applications; almost every area of science and technology can make use of image processing methods. Here is a short list just to give some indication of the range of image processing applications.

1. Medicine: Inspection and interpretation of images obtained from X-rays, MRI or CAT scans; analysis of cell images and of chromosome karyotypes.

". Agriculture 4atelliteJaerial views of land, for e'ample to determine how much land is being used for different purposes, or to investigate the suitability of different regions for different crops, inspection of fruit and vegetables distinguishing good and fresh produce from old. $. Industry Automatic inspection of items on a production line, inspection of paper samples. 2. @aw enforcement

Fingerprint analysis, sharpening or de-blurring of speed-camera images.

Aspects of image processing:

It is convenient to subdivide different image processing algorithms into broad subclasses. There are different algorithms for different tasks and problems, and often we would like to distinguish the nature of the task at hand. Image enhancement: This refers to processing an image so that the result is more suitable for a particular application. Examples include: sharpening or de-blurring an out-of-focus image, highlighting edges, improving image contrast, brightening an image, or removing noise.

Image restoration. This may be considered as reversing the damage done to an image by a known cause, for example: removing blur caused by linear motion, removal of optical distortions, removing periodic interference.

Image segmentation. This involves subdividing an image into constituent parts, or isolating certain aspects of an image: finding circles or particular shapes in an image, or, in an aerial photograph, identifying cars, trees, buildings, or roads.

These classes are not disjoint; a given algorithm may be used both for image enhancement and for image restoration. However, we should be able to decide what it is that we are trying to do with our image: simply make it look better (enhancement), or remove damage (restoration).

An image processing task

We will look in some detail at a particular real-world task, and see how the above classes may be used to describe the various stages in performing this task. The job is to obtain, by an automatic process, the postcodes from envelopes. Here is how this may be accomplished:

Ac"uiring the image: 9irst we need to produce a digital image from a paper envelope. This can be done using either a CCB camera, or a scanner.

Preprocessing: This is the step taken before the major image processing task. The problem here is to perform some basic tasks in order to render the resulting image more suitable for the job to follow. In this case it may involve enhancing the contrast, removing noise, or identifying regions likely to contain the postcode.

Segmentation: Here is where we actually get the postcode; in other words we extract from the image that part of it which contains just the postcode.

Representation and description: These terms refer to extracting the particular features which allow us to differentiate between objects. Here we will be looking for curves, holes and corners which allow us to distinguish the different digits which constitute a postcode. Recognition and interpretation: This means assigning labels to objects based on their descriptors (from the previous step), and assigning meanings to those labels. So we identify particular digits, and we interpret a string of four digits at the end of the address as the postcode.

EXISTING SYSTEM:
E. Land and McCann proposed the Retinex theory in 1971. It simulates a feature of the HVS and decomposes an image into an illumination image and a reflectance image. The illumination image is always assumed to be the low-frequency component, and the reflectance image corresponds to the high-frequency component. This theory is usually used in enhancing images. Recently, it has also been used to reproduce HDR images due to its dynamic range compression feature. The decomposition process is usually based on Gaussian filtering to estimate the surround or adaptive illumination in Center/Surround Retinex. This causes significant halo artifacts in result images [8]. Later, bilateral filtering was used to replace the Gaussian filtering, and it produces much better results. However, it is hard to determine the parameters of bilateral filtering, which still suffers from halo artifacts.

PROPOSED SYSTEM:
In this paper, we adopt the nice feature of the multiscale edge-preserving decomposition. The salient edges are no longer thought of as large gradients of the whole image; they are locally adaptive. This is intuitive: one large gradient may not be a salient edge at a larger scale or in the whole image. In other words, one small gradient may also be an important edge locally. So, our definition of a salient edge is different from Farbman's. There, a salient edge is defined as a globally large gradient, while we define a salient edge as a relatively large gradient locally. Therefore, the decomposition process is different in that a locally salient but small gradient will be decomposed into the base layer. We call our filter the local edge-preserving (LEP) filter.

MODULES:
Preprocessing
HDR Generation
Logarithm
LEP Filter
Color Reproduction

Module Description: Preprocessing:


In this module, any noise in the input image is removed using a Median Filter. Median filtering replaces each pixel value in an image by the median of its neighborhood. Procedure of median filtering: sort the pixel values in the neighborhood, find the median, and replace the pixel value by the median.
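The sort/find/replace procedure above can be sketched as follows (an illustrative Python version with a square window; the report itself works in MATLAB, and the function name and border handling are our own choices):

```python
def median_filter(img, radius=1):
    """Replace each pixel by the median of its (2r+1)x(2r+1) neighborhood.
    Border pixels use only the part of the window inside the image."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            window = [img[y][x]
                      for y in range(max(0, r - radius), min(rows, r + radius + 1))
                      for x in range(max(0, c - radius), min(cols, c + radius + 1))]
            window.sort()                         # step 1: sort the pixel values
            out[r][c] = window[len(window) // 2]  # steps 2-3: take the median
    return out

# A single impulse-noise pixel (255) in a flat region is removed.
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter(noisy)[1][1])  # 10
```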

HDR Generation
HDR is constructed by merging several shots taken with multiple exposures.

Logarithm
The logarithm of luminance approximates the perceived lightness. To sufficiently use the domain of the logarithm function, we arbitrarily magnify the luminance 10^6 times. It is calculated as follows: L = ln(Lin x 10^6 + 1), where ln() represents the natural logarithm. Finally, the gray image is found by scaling L into the range [0, 1]: L = L / max(L), where max(L) represents the maximum value of L.
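The two formulas above translate directly into code; a minimal Python sketch over a 1-D strip of luminances (the function name is ours, and the 10^6 magnification and max-scaling follow the text):

```python
import math

def log_luminance(lum):
    """Map HDR luminance to a [0, 1] gray image via L = ln(Lin * 10^6 + 1),
    then scale by the maximum value of L."""
    logged = [math.log(v * 1e6 + 1) for v in lum]  # natural logarithm
    peak = max(logged)
    return [v / peak for v in logged]

# HDR luminances spanning several orders of magnitude compress into [0, 1].
gray = log_luminance([1e-4, 1e-2, 1.0, 100.0])
print(all(0 <= g <= 1 for g in gray))  # True
print(gray[-1])                        # 1.0 (the maximum maps to 1)
```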

LEP Filter
There are two parameters for the LEP filter, and they govern the filter's sensitivity to gradients. More gradients will be treated as salient edges when either parameter is small. Conversely, when either parameter is large, the filtered output will be over-smoothed (fewer gradients will be treated as salient edges). The effect of the parameters on a real image is shown in Fig. 3: nine results are presented in a matrix, with one parameter varying vertically and the other varying horizontally. The image becomes blurred as the parameters increase, while the details are kept as they decrease. We find that setting the two parameters to 0.1 and 1 always produces satisfactory results, blurring details while preserving salient edges.
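The LEP filter itself is derived from a local optimization that is not reproduced here. As a rough illustration of the underlying mechanism only, assuming in each window a linear relation between input and filtered output whose coefficients are then averaged over overlapping windows (the same local-linear-model idea as a guided filter, which is a different method), here is a 1-D sketch; the `eps` parameter is an illustrative stand-in for the sensitivity parameters discussed above, not the paper's exact formulation:

```python
def box_mean(x, r):
    """Mean of x over a sliding window of radius r (shrunk at the borders)."""
    n = len(x)
    return [sum(x[max(0, i - r):min(n, i + r + 1)]) /
            len(x[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def local_linear_smooth(signal, radius=2, eps=0.01):
    """1-D self-guided local-linear filter (guided-filter style sketch,
    NOT the paper's LEP formulation): in each window the output is modeled
    as a*I + b; small eps preserves edges, large eps smooths more."""
    mean_i = box_mean(signal, radius)
    mean_ii = box_mean([v * v for v in signal], radius)
    var = [mii - mi * mi for mii, mi in zip(mean_ii, mean_i)]
    a = [v / (v + eps) for v in var]             # near 1 at edges, near 0 in flat areas
    b = [mi * (1 - ai) for mi, ai in zip(mean_i, a)]
    mean_a, mean_b = box_mean(a, radius), box_mean(b, radius)
    return [ma * s + mb for ma, mb, s in zip(mean_a, mean_b, signal)]

step = [0.0] * 8 + [1.0] * 8
out = local_linear_smooth(step, radius=2, eps=0.001)
# The step edge survives: the output stays near 0 on the left, near 1 on the right.
print(out[0] < 0.1 and out[-1] > 0.9)  # True
```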

Color Reproduction
We restore the color information in proportion to its original ratio.
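Restoring colour in proportion to its original ratio is commonly written as C_out = (C_in / L_in) x L_out, where L is luminance. A minimal sketch under that assumption (the paper may additionally apply a saturation exponent to the ratio, which is omitted here; the function name is ours):

```python
def reproduce_color(rgb_in, lum_in, lum_out):
    """Restore each channel in proportion to the original
    channel-to-luminance ratio: C_out = (C_in / L_in) * L_out."""
    return tuple((c / lum_in) * lum_out for c in rgb_in)

# Original HDR pixel, its luminance, and the tone-mapped luminance:
out = reproduce_color((0.8, 0.4, 0.2), lum_in=0.5, lum_out=0.25)
print(out)  # (0.4, 0.2, 0.1): channel ratios preserved, brightness halved
```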

System Architecture:

Software Specification:

Hardware Requirements:
• Pentium IV, 2.7 GHz
• 1 GB DDR RAM

• Hard Disk

So tware Re"uirement:

• Operating System: Windows XP
• Tool: MATLAB

• Version:

Literature Survey

Properties and performance of a center/surround retinex:


The last version of Land's (1986) retinex model for human vision's lightness and color constancy has been implemented and tested in image processing experiments. Previous research has established the mathematical foundations of Land's retinex but has not subjected his lightness theory to extensive image processing experiments. We have sought to define a practical implementation of the retinex without particular concern for its validity as a model for human lightness and color perception. We describe the trade-off between rendition and dynamic range compression that is governed by the surround space constant. Further, unlike previous results, we find that the placement of the logarithmic function is important and produces best results when placed after the surround formation. Also unlike previous results, we find the best rendition for a "canonical" gain/offset applied after the retinex operation. Various functional forms for the retinex surround are evaluated, and a Gaussian form is found to perform better than the inverse square suggested by Land. Images that violate the gray world assumptions (implicit to this retinex) are investigated to provide insight into cases where this retinex fails to produce a good rendition.

High Dynamic Range Image Display with Halo and Clipping Prevention

The dynamic range of an image is defined as the ratio between the highest and the lowest luminance level. In a high dynamic range (HDR) image, this value exceeds the capabilities of conventional display devices; as a consequence, dedicated visualization techniques are required. In particular, it is possible to process an HDR image in order to reduce its dynamic range without producing a significant change in the visual sensation experienced by the observer. In this paper, we propose a dynamic range reduction algorithm that produces high-quality results with a low computational cost and a limited number of parameters. The algorithm belongs to the category of methods based upon the Retinex theory of vision and was specifically designed in order to prevent the formation of common artifacts, such as halos around sharp edges and clipping of the highlights, that often affect methods of this kind. After a detailed analysis of the state of the art, we describe the method and compare the results and performance with those of two techniques recently proposed in the literature and one commercial software package.

Compressing and Companding High Dynamic Range Images with Subband Architectures

High dynamic range (HDR) imaging is an area of increasing importance, but most display devices still have limited dynamic range (LDR). Various techniques have been proposed for compressing the dynamic range while retaining important visual information. Multiscale image processing techniques, which are widely used for many image processing tasks, have a reputation for causing halo artifacts when used for range compression. However, we demonstrate that they can work when properly implemented. We use a symmetrical analysis-synthesis filter bank, and apply local gain control to the subbands. We also show that the technique can be adapted for the related problem of "companding", in which an HDR image is converted to an LDR image, and later expanded back to high dynamic range.


Software description: MATLAB is a high-level technical computing language and interactive environment for algorithm development, data visualization, data analysis, and numerical computation. Using MATLAB, you can solve technical computing problems faster than with traditional programming languages, such as C, C++, and Fortran. Matlab is a data analysis and visualization tool which has been designed with powerful support for matrices and matrix operations. As well as this, Matlab has excellent graphics capabilities, and its own powerful programming language. One of the reasons that Matlab has become such an important tool is through the use of sets of Matlab programs designed to support a particular task. These sets of programs are called toolboxes,

and the particular toolbox of interest to us is the image processing toolbox. Rather than give a description of all of Matlab's capabilities, we shall restrict ourselves to just those aspects concerned with the handling of images. We shall introduce functions, commands and techniques as required. A Matlab function is a keyword which accepts various parameters, and produces some sort of output: for example a matrix, a string, a graph. Examples of such functions are sin, imread, imclose. There are many functions in Matlab, and as we shall see, it is very easy (and sometimes necessary) to write our own. Matlab's standard data type is the matrix: all data are considered to be matrices of some sort. Images, of course, are matrices whose elements are the grey values (or possibly the RGB values) of its pixels. Single values are considered by Matlab to be 1-by-1 matrices, while a string is merely a 1-by-n matrix of characters, n being the string's length. In this chapter we will look at the more generic Matlab commands, and discuss images in further chapters.

When you start up Matlab, you have a blank window called the Command Window, in which you enter commands. Given the vast number of Matlab's functions, and the different parameters they can take, a command line style interface is in fact much more efficient than a complex sequence of pull-down menus. You can use MATLAB in a wide range of applications, including signal and image processing, communications, control design, test and measurement, and financial modeling and analysis. Add-on toolboxes (collections of special-purpose MATLAB functions) extend the MATLAB environment to solve particular classes of problems in these application areas.

MATLAB provides a number of features for documenting and sharing your work. You can integrate your MATLAB code with other languages and applications, and distribute your MATLAB algorithms and applications. When working with images in Matlab, there are many things to keep in mind, such as loading an image, using the right format, saving the data as different data types, how to display an image, and conversion between different image formats. Image Processing Toolbox provides a comprehensive set of reference-standard algorithms and graphical tools for image processing, analysis, visualization, and algorithm development. You can perform image enhancement, image deblurring, feature detection, noise reduction, image segmentation, spatial transformations, and image registration. Many functions in the toolbox are multithreaded to take advantage of multicore and multiprocessor computers.
MATLAB and images

The help in MATLAB is very good: use it! An image in MATLAB is treated as a matrix. Every pixel is a matrix element. All the operators in MATLAB defined on matrices can be used on images: +, -, *, /, ^, sqrt, sin, cos etc.
MATLAB can import/export several image formats

BMP (Microsoft Windows Bitmap), GIF (Graphics Interchange Format), HDF (Hierarchical Data Format), JPEG (Joint Photographic Experts Group), PCX (Paintbrush), PNG (Portable Network Graphics), TIFF (Tagged Image File Format), XWD (X Window Dump). MATLAB can also load raw data or other types of image data.
Data types in MATLAB

Double (64-bit double-precision floating point)
Single (32-bit single-precision floating point)
Int32 (32-bit signed integer)
Int16 (16-bit signed integer)
Int8 (8-bit signed integer)
Uint32 (32-bit unsigned integer)
Uint16 (16-bit unsigned integer)
Uint8 (8-bit unsigned integer)
Images in MATLAB

• Binary images: {0,1}
• Intensity images: [0,1] or uint8, double etc.
• RGB images: m-by-n-by-3
• Indexed images: m-by-3 color map
• Multidimensional images: m-by-n-by-p (p is the number of layers)
IMAGE TYPES IN MATLAB

Outside Matlab, images may be of three types, i.e. black & white, grey scale and colored. In Matlab, however, there are four types of images. Black & white images are called binary images, containing 1 for white and 0 for black. Grey scale images are called intensity images, containing numbers in the range of 0 to 255 or 0 to 1. Colored images may be represented as an RGB image or an indexed image. In RGB images there exist three component planes: the first contains all the red portion of the image, the second the green, and the third the blue portion. So for a 640 x 480 sized image the matrix will be 640 x 480 x 3. An alternate method of colored image representation is the indexed image. It actually consists of two matrices, namely an image matrix and a map matrix.

Each color in the image is given an index number, and in the image matrix each color is represented as an index number. The map matrix contains the database of which index number belongs to which color.
IMAGE TYPE CONVERSION

RGB image to intensity image (rgb2gray). RGB image to indexed image (rgb2ind). RGB image to binary image (im2bw). Indexed image to RGB image (ind2rgb). Indexed image to intensity image (ind2gray). Indexed image to binary image (im2bw). Intensity image to indexed image (gray2ind). Intensity image to binary image (im2bw). Intensity image to RGB image (gray2ind, then ind2rgb).
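The two most common of these conversions can be sketched outside MATLAB as well. This Python fragment imitates rgb2gray's documented luminance weighting (0.2989 R + 0.5870 G + 0.1140 B) and im2bw's thresholding (default level 0.5); the helper names are illustrative, not MATLAB's:

```python
def rgb_to_gray(rgb):
    """Luminance-weighted greyscale, imitating MATLAB's rgb2gray weights."""
    return [[0.2989 * r + 0.5870 * g + 0.1140 * b for (r, g, b) in row]
            for row in rgb]

def to_binary(gray, level=0.5):
    """Threshold an intensity image, imitating im2bw (1 above level, else 0)."""
    return [[1 if p > level else 0 for p in row] for row in gray]

rgb = [[(1.0, 1.0, 1.0), (0.0, 0.0, 0.0)],
       [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]]

gray = rgb_to_gray(rgb)   # white ~0.9999, black 0, red 0.2989, green 0.5870
bw = to_binary(gray)
print(bw)  # [[1, 0], [0, 1]]
```

Note that pure red falls below the 0.5 threshold while pure green falls above it, because the eye (and hence the weighting) is most sensitive to green.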

Key Features

High-level language for technical computing. Development environment for managing code, files, and data. Interactive tools for iterative exploration, design, and problem solving.

Mathematical functions for linear algebra, statistics, Fourier analysis, filtering, optimization, and numerical integration.

"-B and $-B graphics functions for visuali%ing data Tools for building custom graphical user interfaces 9unctions for integrating ;AT@AL based algorithms with e'ternal applications and languages, such as C, CVV, 9ortran, =ava, CH;, and ;icrosoft &'cel.

Conclusion:
We have presented three assumptions for our multiscale edge-preserving image decomposition. A local edge-preserving filter has been derived from the assumptions, and we have also explored its connection with previous algorithms. Only two parameters (besides the window radius) are needed for our filter, and they can always be set to default values for good results. Our filter is capable of coarsening an image at multiple scales while keeping the local shape of the signal. We have also presented a process with our filter to reproduce HDR images. The results are compared with those of some recent effective algorithms. The comparisons show that our algorithm is good at compressing the high dynamic range while preserving tiny local details, and the global view is appealing. The process is very efficient, with asymptotic time complexity linear in the image size. We have arbitrarily assumed a linear function between the input and the filtered output in a local window in the filter design, and then averaged all the output values globally. The linear operations may be a cause of artifacts in results, since they may unsuitably reduce the gradients. Another drawback of our filter may

be the preservation of the local shape near a salient edge. It can be seen from Fig. 1(h) that the details near an edge are preserved when they should be smoothed. This may be another source of artifacts near edges. A nonlinear function may be a prospect for avoiding these disadvantages.
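The "linear function in a local window, averaged globally" design discussed above is the same local linear model popularized by guided filtering [13]. This 1-D self-guided Python sketch illustrates the idea only; it is not the paper's actual filter, and the window radius and epsilon are illustrative defaults:

```python
def local_linear_filter(signal, radius=2, eps=0.01):
    """Edge-aware smoothing via a local linear model q = a*I + b per window,
    with (a, b) averaged over all windows covering each sample."""
    n = len(signal)
    a_sum, b_sum, count = [0.0] * n, [0.0] * n, [0] * n
    for c in range(n):
        lo, hi = max(0, c - radius), min(n, c + radius + 1)
        win = signal[lo:hi]
        mean = sum(win) / len(win)
        var = sum((v - mean) ** 2 for v in win) / len(win)
        a = var / (var + eps)          # ~1 at strong edges, ~0 in flat areas
        b = (1 - a) * mean             # flat areas collapse to the local mean
        for i in range(lo, hi):
            a_sum[i] += a
            b_sum[i] += b
            count[i] += 1
    # Globally averaged coefficients give the filtered output.
    return [(a_sum[i] / count[i]) * signal[i] + b_sum[i] / count[i]
            for i in range(n)]

step = [0.0] * 8 + [1.0] * 8           # an ideal edge survives (a ~ 1 there)
out = local_linear_filter(step)
```

Because the averaged coefficients remain a linear map of the input near the edge, the jump is largely preserved while flat regions are driven to their local means, which is exactly where the gradient-reduction artifacts mentioned above can creep in.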

References:
[1] P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in Proc. SIGGRAPH, 1997, pp. 369-378. [2] J. M. DiCarlo and B. A. Wandell, "Rendering high dynamic range images," Proc. SPIE, vol. 3965, pp. 392-401, May 2000. [3] E. Reinhard, M. M. Stark, P. Shirley, and J. A. Ferwerda, "Photographic tone reproduction for digital images," in Proc. SIGGRAPH, 2002, pp. 267-276. [4] E. H. Land and J. J. McCann, "Lightness and retinex theory," J. Opt. Soc. Amer., vol. 61, no. 1, pp. 1-11, Jan. 1971. [5] Z. Rahman, D. J. Jobson, and G. A. Woodell, "Retinex processing for automatic image enhancement," J. Electron. Imag., vol. 13, no. 1, pp. 100-110, 2004. [6] S. Battiato, A. Castorina, and M. Mancuso, "High dynamic range imaging for digital still camera: An overview," J. Electron. Imag., vol. 12, no. 3, pp. 459-469, 2003. [7] D. J. Jobson, Z. Rahman, and G. A. Woodell, "Properties and performance of a center/surround retinex," IEEE Trans. Image Process., vol. 6, no. 3, pp. 451-462, Mar. 1997. [8] M. Elad, "Retinex by two bilateral filters," in Proc. 5th Int. Conf. Scale Space PDE Methods Comput. Vis., vol. 3459, 2005, pp. 217-229. [9] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," ACM Trans. Graph., vol. 27, no. 3, pp. 1-10, Aug. 2008.

[10] K. Subr, C. Soler, and F. Durand, "Edge-preserving multiscale image decomposition based on local extrema," ACM Trans. Graph., vol. 28, no. 5, pp. 147-155, Dec. 2009. [11] R. Kimmel, M. Elad, D. Shaked, R. Keshet, and I. Sobel, "A variational framework for retinex," Int. J. Comput. Vis., vol. 52, no. 1, pp. 7-23, 2003. [12] G. Guarnieri, S. Marsi, and G. Ramponi, "High dynamic range image display with halo and clipping prevention," IEEE Trans. Image Process., vol. 20, no. 5, pp. 1351-1362, May 2011. [13] K. He, J. Sun, and X. Tang, "Guided image filtering," in Proc. Eur. Conf. Comput. Vis., vol. 1, 2010, pp. 1-14. [14] R. Fattal, D. Lischinski, and M. Werman, "Gradient domain high dynamic range compression," ACM Trans. Graph., vol. 21, no. 3, pp. 249-256, 2002. [15] F. Durand and J. Dorsey, "Fast bilateral filtering for the display of high-dynamic-range images," in Proc. SIGGRAPH, 2002, pp. 257-266.
