
Analyzing and manipulating images with a computer. Image processing generally involves three steps:

1. Import an image with an optical scanner or directly through digital photography.
2. Manipulate or analyze the image in some way. This stage can include image enhancement and data compression, or the image may be analyzed to find patterns that are not visible to the human eye. For example, meteorologists use image processing to analyze satellite photographs.
3. Output the result. The result might be the image altered in some way, or it might be a report based on analysis of the image.

Image processing: the manipulation of an image to improve or change some quality of the image.
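The three steps above can be sketched in code. This is a minimal illustration, not a real scanner workflow: a synthetic 8-bit grayscale array stands in for the imported image, the manipulation step is a simple contrast stretch, and the output step is a short analysis report.

```python
import numpy as np

# 1. Import: synthesize a low-contrast 8-bit image in place of a scanned input.
rng = np.random.default_rng(0)
image = rng.integers(100, 156, size=(64, 64), dtype=np.uint8)  # values 100..155

# 2. Manipulate: a simple contrast stretch to the full 0..255 range.
lo, hi = image.min(), image.max()
stretched = ((image - lo).astype(np.float64) * 255.0 / (hi - lo)).astype(np.uint8)

# 3. Output: a report based on analysis of the image.
print("value range before:", lo, hi)
print("value range after: ", stretched.min(), stretched.max())
```

The same skeleton applies whatever the manipulation step is; enhancement, compression, or pattern analysis would simply replace the contrast stretch.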

Image processing is any form of signal processing for which the input is an image, such as a photograph or a frame of video; the output of image processing can be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it. Image processing usually refers to digital image processing, but optical and analog image processing are also possible. This article is about general techniques that apply to all of them.

Image Processing Techniques
Once a conventional radiograph has been processed, the image is permanent and further adjustments cannot be made: if the image is too dark or too light, it has to be retaken. This is not the case with digital images. All digital systems are built on a stable electronic unit called a bit, or binary digit. A circuit holding a bit can be switched electronically between two states, off or on; off is represented by a zero and on by a one. If a shade or color is assigned to the zero and another to the one, then only two colors can be used, black or white. Digital devices used in radiographic imaging must be able to represent more than two colors, so to image several shades of gray there must be more than one bit. The number of bits determines the number of gray levels displayed by a particular system and is calculated as L = 2^n, where L is the number of gray levels and n the number of bits. For example, an 8-bit unit can display 2^8 = 256 shades of gray in an image. Since a digital image is made up of pixels, each pixel is assigned a numerical value corresponding to a shade of gray; the density and contrast of the image are therefore adjusted by varying the numerical value of each pixel. Human vision can differentiate only approximately 32 gray levels, which means that the dynamic range of the X-ray detection system and that of the human eye do not match. As a result, the computer must map the data so that the final image shows the proper density and contrast. Most manufacturers treat the raw data with firmware before the image is displayed; that is, the software in the system uses certain algorithms, or mathematical computations set by the manufacturer, to optimize the image. Once the image is displayed, however, it can be further processed by the operator to change parameters as desired.
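The L = 2^n relationship above is trivial to compute; a short sketch makes the bit-depth-to-gray-level correspondence concrete:

```python
def gray_levels(n_bits: int) -> int:
    """Return the number of gray shades an n-bit system can represent (L = 2**n)."""
    return 2 ** n_bits

# 1 bit gives only black and white; 8 bits gives the common 256-shade scale;
# many medical detectors use 12 bits or more.
for n in (1, 8, 12):
    print(f"{n}-bit system: {gray_levels(n)} gray levels")
```

Note that the gray levels a system can record (often 12 bits or more) usually exceed both what an 8-bit display shows and the roughly 32 levels the eye can distinguish, which is exactly why the windowing controls described next are needed.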

The factors controlling the dynamic range are the window level and the window width. The window level is the level, within the possible shades, that is used to create the middle density in the image; this means the window level controls the image density. The window width is the range of gray shades that will be used in creating the image, and it therefore controls the contrast of the digital image. A computer usually uses 256 shades of gray to display the image. An increase in window width means that more shades of gray are displayed in the image, resulting in a decrease in image contrast. A window width that is too narrow or too wide can cause information to be missed in the resulting image. When the entire range of densities is displayed, the image will have lower contrast, or more shades of gray. When a smaller range of densities is displayed, the image will have higher contrast, or fewer shades of gray. Most software programs are equipped with multiple processing tools and filters, but the most widely used are brightness and contrast adjustments. Many of the image processing tools, such as color conversion and three-dimensional filtering, have no known diagnostic value. Since a digital image can be processed, more sophisticated processing methods can also be used, among them digital subtraction, image synthesis, image restoration, and image analysis. The potential of digital imaging in dentistry lies in the development of practical techniques to perform digital subtraction, Tuned Aperture Computed Tomography (TACT), fractal analysis, and the creation of decision support systems.
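The window level/width mapping described above can be sketched as a simple linear rescale. The formula below is a common textbook form, not any specific manufacturer's algorithm: raw values below (level - width/2) render black, values above (level + width/2) render white, and the window in between is stretched across the 256 display shades.

```python
import numpy as np

def apply_window(pixels: np.ndarray, level: float, width: float) -> np.ndarray:
    """Map raw pixel values to 8-bit display shades using window level/width.

    Values below (level - width/2) become black, values above
    (level + width/2) become white; the window is stretched over 0..255.
    """
    lo = level - width / 2.0
    scaled = (pixels.astype(np.float64) - lo) / width * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

raw = np.array([0, 64, 128, 192, 255], dtype=np.uint8)

wide = apply_window(raw, level=128, width=256)   # wide window: low contrast
narrow = apply_window(raw, level=128, width=64)  # narrow window: high contrast
print(wide)    # close to the original values: many gray shades survive
print(narrow)  # extremes clipped to pure black/white: fewer shades, more contrast
```

Running the narrow window shows exactly the effect the text describes: the outer densities collapse to black and white, so contrast rises but information outside the window is lost.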

Image stabilization (IS) is a family of techniques for increasing the stability of an image. It is used in image-stabilized binoculars, photography, videography, and astronomical telescopes. With still cameras, camera shake is particularly problematic at slow shutter speeds or with long focal length (telephoto) lenses. With video cameras, camera shake causes visible frame-to-frame jitter in the recorded video. In astronomy, the problems of lens shake are compounded by variations in the atmosphere over time, which cause the apparent position of objects to move. Image editing encompasses the processes of altering images, whether they are digital photographs, traditional analog photographs, or illustrations. Traditional analog image editing is known as photo retouching, using tools such as an airbrush to modify photographs, or editing illustrations with any traditional art medium. Graphic software programs, which can be broadly grouped into vector graphics editors, raster graphics editors, and 3D modelers, are the primary tools with which a user may manipulate, enhance, and transform images. Many image editing programs are also used to render or create computer art from scratch. Image registration is the process of transforming different sets of data into one coordinate system. Registration is necessary in order to compare or integrate the data obtained from different measurements.

Conclusion
Digital image processing is far from being a simple transposition of audio signal principles to a two-dimensional space. An image signal has its own particular properties, and therefore we have to deal with it in a specific way. The Fast Fourier Transform, for example, which was such a practical tool in audio processing, is of far less direct use in image processing. Conversely, digital filters are easier to create directly in image processing, without any signal transforms. Digital image processing has become a vast domain of modern signal technologies. Its applications extend far beyond simple aesthetic considerations: they include medical imaging, television and multimedia signals, security, portable digital devices, video compression, and even digital movies. We have only skimmed over some elementary notions of image processing, and there is still a lot more to explore. If you are beginning in this topic, I hope this paper has given you a taste for it and the motivation to carry on.

Medical imaging refers to the techniques and processes used to create images of the human body (or parts thereof) for clinical purposes (medical procedures seeking to reveal, diagnose, or examine disease) or for medical science (including the study of normal anatomy and physiology). As a discipline, and in its widest sense, it is part of biological imaging and incorporates radiology (in the wider sense), radiological sciences, endoscopy, (medical) thermography, medical photography, and microscopy (e.g. for human pathological investigations). Measurement and recording techniques that are not primarily designed to produce images, such as electroencephalography (EEG) and magnetoencephalography (MEG), but that produce data which can be represented as maps (i.e., containing positional information), can also be seen as forms of medical imaging.

In computer vision and image processing, the concept of feature detection refers to methods that compute abstractions of image information and make a local decision at every image point as to whether an image feature of a given type is present there. The resulting features will be subsets of the image domain, often in the form of isolated points, continuous curves, or connected regions. Computer vision is the science and technology of machines that see. As a scientific discipline, computer vision is concerned with the theory behind building artificial systems that obtain information from images. The image data can take many forms, such as a video sequence, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems. Examples of applications of computer vision systems include systems for:

- Controlling processes (e.g. an industrial robot or an autonomous vehicle).
- Detecting events (e.g. for visual surveillance or people counting).
- Organizing information (e.g. for indexing databases of images and image sequences).
- Modeling objects or environments (e.g. industrial inspection, medical image analysis, or topographical modeling).
- Interaction (e.g. as the input to a device for computer-human interaction).

Computer vision can also be described as a complement (but not necessarily the opposite) of biological vision. In biological vision, the visual perception of humans and various animals is studied, resulting in models of how these systems operate in terms of physiological processes. Computer vision, on the other hand, studies and describes artificial vision systems that are implemented in software and/or hardware. Interdisciplinary exchange between biological and computer vision has proven increasingly fruitful for both fields. Sub-domains of computer vision include scene reconstruction, event detection, tracking, object recognition, learning, indexing, ego-motion, and image restoration. Face detection is a computer technology that determines the locations and sizes of human faces in arbitrary (digital) images. It detects facial features and ignores anything else, such as buildings, trees, and bodies. Image segmentation refers to the process of partitioning a digital image into multiple regions (sets of pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.[1] Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images.
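The simplest form of the segmentation just described is thresholding: every pixel is assigned to foreground or background by comparison with a cutoff value. This is only a minimal sketch with a hand-picked threshold; practical systems usually choose the cutoff automatically (e.g. Otsu's method) and use far richer criteria.

```python
import numpy as np

# A tiny synthetic image: dark background (~10) with a bright object (~200).
image = np.array([
    [ 10,  12, 200, 210],
    [ 11, 180, 205,  15],
    [  9, 190,  14,  13],
], dtype=np.uint8)

threshold = 128
mask = image > threshold  # boolean region map: True = foreground object

print(mask.astype(int))
print("foreground pixels:", int(mask.sum()))
```

The boolean mask is the "partition into regions (sets of pixels)" from the definition above: one region where the mask is True, another where it is False, which downstream steps can use to locate object boundaries.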

Advantages and Disadvantages of Digital Imaging


One of the biggest advantages of digital imaging is the ability of the operator to post-process the image. Post-processing allows the operator to manipulate the pixel shades to correct image density and contrast, as well as perform other processing functions that could result in improved diagnosis and fewer repeated examinations (Figure 5). With the advent of electronic record systems, images can be stored in computer memory, easily retrieved on the same computer screen, saved indefinitely, or printed on paper or film if necessary. All digital imaging systems can be networked into practice management software programs, facilitating integration of data. With networks, the images can be viewed in more than one room and can be used in conjunction with pictures obtained with an optical camera to enhance the patient's understanding of treatment (Figure 6). Digital imaging allows the electronic transmission of images to third-party providers, referring dentists, consultants, and insurance carriers via a modem. Digital imaging is also environmentally friendly, since it does not require chemical processing. It is well known that used film processing chemicals contaminate the water supply system with harmful metals such as the silver found in used fixer solution.5,6 Radiation dose reduction is also a benefit derived from the use of digital systems. Some manufacturers have claimed a 90% decrease in radiation exposure, but the real savings depend on what is being compared. For example, the dose savings will be different if Insight film (F-speed film) with rectangular collimation is used versus Ultra-Speed film (D-speed film) with round collimation. Clearly, a much greater dose reduction will result from the change from Ultra-Speed film with round collimation to Insight film with rectangular collimation.

Disadvantages

There are also disadvantages associated with the use of digital systems. The initial cost can be high, depending on the system used, the number of detectors purchased, and so on. Competency with the software can take time to master, depending on the computer literacy of team members. The detectors, as well as the phosphor plates, cannot be sterilized or autoclaved, and in some cases CCD/CMOS detectors pose positioning limitations because of their size and rigidity. This is not the case with phosphor plates; however, if a patient has a small mouth, the plates cannot be bent, because they will become permanently damaged (Figure 7). Phosphor plates cost an average of $25 to replace, and CCD/CMOS detectors can cost more than $5,000 per unit. Finally, since digital imaging in dentistry is not standardized, professionals are unable to exchange information without going through an intermediary process. Hopefully, this will change within the next few years as manufacturers of digital equipment become DICOM compliant.

Digital image processing remains a challenging domain of programming, for several reasons. First, the issue of digital image processing appeared relatively late in computer history: it had to wait for the arrival of the first graphical operating systems to become a real concern. Second, digital image processing requires the most careful optimisations, especially for real-time applications. Comparing image processing with audio processing is a good way to fix ideas. Consider the memory bandwidth needed to examine the pixels of a 320x240, 32-bit bitmap 30 times a second: about 10 MB/s. By the same quality standard, real-time processing of a stereo audio stream needs 44100 (samples per second) x 2 (bytes per sample per channel) x 2 (channels) = about 176 KB/s, roughly 50 times less. Obviously we will not be able to use the same signal processing techniques in both audio and image. Finally, digital image processing is by definition a two-dimensional domain; this somewhat complicates things when elaborating digital filters. We will explore some of the existing methods used to deal with digital images, starting with a very basic approach to colour interpretation. At a more advanced level of interpretation come matrix convolution and digital filters. Finally, we will have an overview of some applications of image processing. This guide assumes the reader has a basic understanding of signal processing (convolutions and correlations should sound commonplace) and also some algorithmic notions (complexity, optimisations). The aim of this document is to give the reader a short overview of the existing techniques in digital image processing. We will go deep neither into theory nor into the coding itself; we will concentrate more on the algorithms themselves, the methods. In any case, this document should be used as a source of ideas only, and not as a source of code. If you have a question or a comment about this text, please send it to the above email address; I'll be happy to answer as soon as possible. Please enjoy your reading.
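The bandwidth comparison above is easy to verify. Using the figures stated in the text (a 320x240, 32-bit frame at 30 fps versus 16-bit stereo audio at 44.1 kHz):

```python
# Video: 320x240 pixels, 4 bytes per 32-bit pixel, 30 frames per second.
video_bytes_per_sec = 320 * 240 * 4 * 30

# Audio: 44100 samples/s, 2 bytes per sample per channel, 2 channels.
audio_bytes_per_sec = 44100 * 2 * 2

print(f"video: {video_bytes_per_sec / 1e6:.1f} MB/s")  # ~9.2 MB/s, i.e. about 10 MB/s
print(f"audio: {audio_bytes_per_sec / 1e3:.1f} KB/s")  # ~176.4 KB/s
print(f"ratio: {video_bytes_per_sec / audio_bytes_per_sec:.0f}x")
```

The exact ratio comes out to about 52x, consistent with the text's "roughly 50 times less" for audio.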

Digital image processing allows one to enhance image features of interest while attenuating detail irrelevant to a given application, and then to extract useful information about the scene from the enhanced image. This introduction is a practical guide to the challenges, and to the hardware and algorithms used to meet them. Images are produced by a variety of physical devices, including still and video cameras, X-ray devices, electron microscopes, radar, and ultrasound, and they are used for a variety of purposes, including entertainment, medical, business (e.g. documents), industrial, military, civil (e.g. traffic), security, and scientific applications. The goal in each case is for an observer, human or machine, to extract useful information about the scene being imaged.
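Enhancing features of interest while attenuating irrelevant detail is typically done with the matrix convolutions mentioned earlier. The sketch below uses plain loops for clarity rather than speed, and a 3x3 box-blur kernel (symmetric, so the usual kernel flip in convolution is omitted); an isolated noise spike is attenuated by averaging it with its neighbours.

```python
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D convolution (no padding) of image with a symmetric kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Weighted sum of the kernel-sized neighbourhood around (i, j).
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

smooth = np.full((3, 3), 1.0 / 9.0)  # box blur: each output is a local average

# An isolated bright spike (value 9) in an otherwise flat image...
img = np.zeros((5, 5))
img[2, 2] = 9.0

# ...is flattened by the filter: the spike's energy is spread over its neighbourhood.
result = convolve2d(img, smooth)
print(result)
```

Swapping in a different kernel (e.g. an edge-detection or sharpening matrix) turns the same loop into a feature-enhancing filter instead of a detail-attenuating one.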
