
IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE, VOL. 7, NO. 4, DECEMBER 2003

Design and Analysis of a Content-Based Pathology Image Retrieval System


Lei Zheng, Arthur W. Wetzel, John Gilbertson, and Michael J. Becich
Abstract: A prototype content-based image retrieval system has been built, employing a client/server architecture to access supercomputing power from the physician's desktop. The system retrieves images and their associated annotations from a networked microscopic pathology image database based on content similarity to user-supplied query images. Similarity is evaluated from four image feature types: color histogram, image texture, Fourier coefficients, and wavelet coefficients, using the vector dot product as a distance metric. Current retrieval accuracy varies across pathological categories, depending on the number of available training samples and the effectiveness of the feature set. The distance measure of the search algorithm was validated by agglomerative cluster analysis in light of medical domain knowledge. Results show a correlation between pathological significance and the image-document distance value generated by the computer algorithm, and this correlation agrees with observed visual similarity. This validation method has an advantage over traditional statistical evaluation methods when the sample size is small and domain knowledge is important. A multidimensional scaling analysis shows the low-dimensional nature of the embedded space for the current test set.

Index Terms: Cluster analysis, content-based image retrieval (CBIR), medical imaging.

Manuscript received June 11, 2001; revised January 3, 2002. L. Zheng, J. Gilbertson, and M. J. Becich are with the Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, PA 15261 USA (e-mail: becich+@pitt.edu). A. W. Wetzel is with the Pittsburgh Supercomputing Center, Pittsburgh, PA 15213 USA. Digital Object Identifier 10.1109/TITB.2003.822952

I. BACKGROUND

Large image databases have been built to store satellite images [1], facial images [2], [3], patent images [4], and artistic photographs, as well as medical images in radiology [5], [6] and pathology [7], [8]. Some image databases are constructed with broad content coverage specifically to demonstrate certain retrieval systems [9]. Currently, most image databases are indexed by keywords, classified by human experts, and accessed from text-based database systems, where retrieval is based on the association of the image and textual portions of each record. Text-based information retrieval works well with existing tools and is supported by the framework of all standard database management systems (DBMS). However, despite its efficiency, text-based image retrieval is unable to meet the growing demand for large-scale image capture and archiving. The sheer content volume of very large image databases is simply beyond the manual indexing capability of human experts.

Image capture and archiving have become increasingly popular with the rapidly increasing affordability of internal and external storage. Supercomputers can now handle intensive image analysis and indexing tasks regarded as impossible or impractical a decade ago. Content-based image retrieval (CBIR), indexing and retrieving images in a database based on their content, has become an attractive approach to managing large image archives.

An image feature can be defined as the value generated by a predefined image-processing algorithm applied to the image of interest. Global features are extracted from the content of the entire image. For other applications, features extracted from local regions of an image, corresponding to objects within the image, may be more useful. Image content can thus be defined as the set of all possible features, or combinations of basic features, over a target image, and can be perceived at three levels of increasing complexity. First, the machine vision level extracts primitive features including, but not limited to, color, texture, and shape [10]. A second level combines primitive features to form higher level groupings and semantic relationships among multiple objects in the same image. Finally, at the highest level, image content can be symbolized by the concepts that it conveys [11], [12]; in our application, this can be the pathology diagnosis.

Some CBIR systems have achieved limited success using retrieval algorithms operating on one or two primitive image features [4], [13], [15], [16]. Progress has also been made in applying artificial neural network classifiers directly to digitized images [2], [17]. However, the validity of retrieval based on common primitive features is largely questionable [18] because of the fundamental gap between image processing and numerical feature extraction on the one hand, and the understanding of image semantics and visual language on the other. The task of building an all-purpose CBIR system thus becomes virtually equivalent to building an image understanding system that duplicates human visual perception, reasoning, and specific domain knowledge. Therefore, we are investigating the capabilities of more limited systems that use easily computed image features.

There are two categories of features we may consider. A feature is considered strong if the mapping is supported by domain knowledge, and weak if it is backed only by statistical association. The distinction arises primarily from the ultimate difference between machine vision and human visual perception and subsequent understanding. Strong features take advantage of historical knowledge and classifications but can be difficult to implement, as most historical knowledge is based on verbal descriptions of visual features rather than on algorithms. Weak features are typically more easily described in the form of computational algorithms but may not adequately capture the level of distinction contained in strong features.



However, by carefully controlling image quality and by limiting the retrieval problem to a well-defined domain, it is frequently possible to map primitive features to content semantics. Restricting the scope of a CBIR system to a set of specific problems in a particular domain also helps to formalize the evaluation metric of system performance by making the relevance judgment meaningful.

Pathology is a medical discipline that inherently relies heavily on images and image analysis. A good deal of research on managing pathology image databases has been conducted in the last decade [7], [10], [14], [19], [20]. Nearly all of these systems rely on text-based retrieval algorithms as their fundamental operating mechanism, which has proved to be an effective way of managing small image collections with a predetermined domain context. Pathology image databases deal with two major image types: gross and microscopic. Microscopic image content is especially important in that it is the foundation upon which pathologists make their diagnoses. Therefore, CBIR of microscopic pathology images is becoming an increasingly important and necessary methodology. We have developed a content-based pathology image retrieval system, based on four types of image features, that begins to address this need.

II. MATERIALS AND METHODS

A. Retrieval Algorithm

Four primitive image feature types were used to define the content of microscopic pathology images. Each image was processed to obtain a color histogram [21]-[23], a texture representation [24]-[27], Fourier transformation coefficients [28], and wavelet coefficients [14]. The values of these features from any single image combine to form a long vector encoding the primitive features into a unit that we call a signature. The set of signatures collected from all images in the database constitutes a signature file, which is the searchable database structure. Similarity between the signature of a query image and the signatures of images in the database is computed with the weighted cosine measure, the normalized vector inner (dot) product, of their representative signature vectors. Mathematically, this is expressed as

$$ \text{Cosine measure: } s_{ij} = \frac{\sum_{k} d_{ijk}\, q_{jk}}{\sqrt{\sum_{k} d_{ijk}^{2}}\;\sqrt{\sum_{k} q_{jk}^{2}}} \qquad (1) $$

where $s_{ij}$ is the cosine measure of the $j$th feature of a document $D_i$ from the archive and the query image $Q$, and $d_{ijk}$ and $q_{jk}$ are the values of the $k$th element of that feature for the image document and the query image, respectively. This similarity value ranges from 0 to 1, where 0 corresponds to no correlation and 1 to complete correlation, which would only occur for identical signatures [29]. Individual weights can be applied to each component of the signature to control the relative importance of different signature components. Although we currently treat the entire signature as a single vector in our prototype, it is also possible to break the computation into parts that can be computed independently. Those partial results can be combined in the following signature similarity computation, as commonly used in text-based information retrieval applications:

$$ \text{Image document similarity} = \sum_{j=1}^{n} w_j\, s_{ij} \qquad (2) $$

where $w_j$ is the weight assigned to the $j$th of the $n$ features, and $s_{ij}$ is the raw value of that feature's cosine measure before adjustment.

A content-based image search engine implementing the preceding mechanisms has been built and populated with a collection of representative microscopic images. A Cray J90 vector supercomputer at the Pittsburgh Supercomputing Center hosts the database as well as the software search engine. In our implementation, all images in the database were preprocessed offline to generate the master signature file. The same signature computation process is applied to each query image when it is submitted to the system.
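The similarity computation in (1) and (2) can be made concrete with a short sketch. The following Python fragment is illustrative only: the per-feature dictionary layout, feature names, and weights are assumptions for exposition (the prototype actually treats the whole signature as a single vector), not the system's actual data structures.

```python
import numpy as np

def feature_cosine(d_feat, q_feat):
    """Cosine measure (1) between one feature block of an archive image and the query."""
    denom = np.linalg.norm(d_feat) * np.linalg.norm(q_feat)
    return float(np.dot(d_feat, q_feat) / denom) if denom else 0.0

def image_document_similarity(doc_sig, query_sig, weights):
    """Weighted combination (2) of per-feature cosine measures.

    doc_sig, query_sig : dicts mapping feature names to 1-D NumPy arrays,
                         e.g. 'color_histogram', 'texture', 'fourier', 'wavelet'.
    weights            : dict of per-feature weights w_j (illustrative).
    """
    return sum(weights[name] * feature_cosine(doc_sig[name], query_sig[name])
               for name in doc_sig)

# Ranking a small archive against a query (archive and weights are hypothetical):
# scores = {img_id: image_document_similarity(sig, query_sig, weights)
#           for img_id, sig in archive.items()}
# best = sorted(scores, key=scores.get, reverse=True)[:10]
```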


TABLE I
CLIENT/SERVER COMMUNICATION

B. Interface

The client/server communication is summarized in Table I. The primary goal of the graphical user interface (GUI) design is ease of use for both first-time and experienced users. All requests can be entered with the mouse; no keyboard typing is required. All functional components are laid out on one screen instead of being hidden in layered pull-down menus. The user can adjust slider bars to refine the search criteria. The GUI includes browsing windows for the query images, for the list of retrieved records and their scores, and for the retrieved images themselves, as well as a text area for displaying text descriptors. Thumbnails and full images can be displayed in separate windows.

C. Image Acquisition

Tissue specimens were collected from a wide variety of organs, including prostate, liver, and heart. The test database consisted of 675 high-quality images. Images were captured from H&E-stained glass slides with a 20× objective lens on image capture stations equipped with an Olympus BX40 microscope, a Sony CCD camera [DXC-970MD 3-chip camera with a 0.45× adapter (Diagnostic Instruments Inc., Sterling Heights, MI); horizontal resolution: 750 TV lines], and a Pentium-class PC with a Matrox Meteor/RGB capture card (Matrox Electronic Systems Ltd., QC, Canada). The images were stored in 640 × 480, 24-bit uncompressed Microsoft bitmap format.

Prior to image capture, the imaging stations were calibrated, and possible defects were measured qualitatively and quantitatively with a standard calibration slide; the calibration included white balance, illumination adjustment, and a flat-field measurement. Microscopes that failed to produce a flat field were avoided. The illumination was checked prior to each capture session to make sure that the histogram was not out of range, since human experts prefer brighter illumination than is appropriate for image capture. All images used in this study were captured on a single capture station with the best settings, in an effort to keep capture artifacts from contributing to the retrieval stage. Images were diagnosed according to the consensus of three pathologists from the University of Pittsburgh Medical Center. Brief comments were recorded in a separate annotation file for each image, which can be retrieved along with the database image.

Fig. 1. Screen capture showing the GUI running one query session. The query image is a Gleason grade 3 prostate tumor (upper left image in the upper window, with a red border). The retrieved images are mostly Gleason grade 2 and 3 images. (A list of records and their scores is shown in the list window on the right. The lower window on the left shows icons of the top six retrieved images.)

III. RESULTS

A. System Performance

The system has been tested with the client code running on a Sun Ultra SPARC (Sun Microsystems), an SGI Indy (Silicon Graphics, Inc.), and PCs running Windows 9X/NT 4.0 and Linux, over different Internet connection speeds. Performance differed only with the network connection speed. In the best case, with the fastest network settings, one query session took less than 2 s to complete. Feedback during multiple demonstration sessions for different audience and user groups showed that researchers, clinicians, and nonexpert audiences all found the interface stable, intuitive, and easy to use (see Fig. 1). They were able to focus on their information needs and on the accuracy of the image retrieval algorithms.

The server currently runs on a Cray J90 supercomputer. The database search, which is currently sequential, could be optimized further: one could perform a dimensionality reduction and use one of the spatial access methods (SAMs) to improve search efficiency at a reasonable cost in accuracy. SAMs reduce the search complexity to logarithmic, so search time does not grow linearly with the size of the database. In our current implementation, however, the sequential search is already well optimized and the database size remains manageable. Most of the CPU time in a query session is expected to be spent on query image feature extraction, especially the Fourier and wavelet transformations, so a fast supercomputer helps provide prompt responses at the client side.
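As an illustration of the kind of access method mentioned above, the sketch below indexes L2-normalized signature vectors with a k-d tree; for unit-length vectors, Euclidean nearest neighbors coincide with the highest (unweighted) cosine similarities. This is not part of the deployed system, and the array shapes and parameters are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_index(signatures):
    """Index L2-normalized signatures; signatures is an (n_images, n_dims) array."""
    unit = signatures / np.linalg.norm(signatures, axis=1, keepdims=True)
    return cKDTree(unit)

def query_index(tree, query_signature, k=10):
    """Return indices and cosine similarities of the k nearest archive images."""
    q = query_signature / np.linalg.norm(query_signature)
    dist, idx = tree.query(q, k=k)        # sublinear lookup instead of a linear scan
    return idx, 1.0 - (dist ** 2) / 2.0   # ||a - b||^2 = 2 - 2*cos for unit vectors
```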


In this prototype, the query images were part of the image archive stored on the supercomputer, so the top match was always the query image itself and was not taken into account for accuracy. Retrieval performance varied across different tissues. The best performance was observed with prostate query images, where the system could reliably retrieve tumors with the same histology type and Gleason grade. Retrieval precision was not predictable for other tissues with only a few cases in the database. The difference may be due to our group's expertise in that particular area of pathology.

Fig. 2. Dendrogram generated by hierarchical agglomerative grouping with the centroid clustering method (HAGCCM) in the SPSS package. Squared Euclidean distance is used, as suggested by SPSS. The red line was added arbitrarily to separate low-level and high-level clusters. HAGCCM takes a proximity matrix (a distance matrix in this case), progressively groups the closest samples together while conserving the original pairwise distance measure as much as possible, and replaces each generated group with its centroid, the arithmetic average of its samples. The lengths of the lines in this figure indicate the distances between groups.

B. Empirical Evaluation and Validation With Domain Knowledge in Medicine

Due to the difficulty of accumulating more images from rare categories, a thorough evaluation of the performance of the feature extraction and feature comparison algorithms is currently impractical. Nor would we claim that the images in the archive are adequate for extensive reliability and accuracy tests against a gold standard such as human experts. This will be addressed as the image collection is expanded. Instead, we turned to cluster analysis techniques to validate the retrieval algorithm. Twenty-four images from 24 different pathology categories were selected from the database, and their pairwise distances were computed with the same code used in the retrieval process. The result, a 24 × 24 symmetric matrix, was loaded into the SPSS statistical package for analysis. The types of images used and the clusters they formed are as follows.

Prostate Cancer Cluster: Gleason grades 1 to 5 (images 8 to 12).
Colon and Vas Deferens Cluster (images 3 and 24).
Ductal and Thyroid Cancer Cluster (images 4 and 22).
Skin, Thyroid, and Small Intestine Cluster (images 19, 23, and 14).
Bladder and Spleen Cluster (images 1 and 20).
Lymph Node, Parotid Gland, and Normal Prostate Cluster (images 16, 17, and 18).
Breast and Breast Fibro-adenoma Cluster (images 2 and 7).
Esophagus and Stomach Mucosa Cluster (images 5 and 21).

Both centroid clustering and single-linkage clustering (data not shown) produced dendrograms with common features; see Figs. 2 and 3. Results from centroid clustering accurately reflected the pathological categorization of the samples. Notably, the dendrogram shows the five Gleason-grade prostate cancer images (node numbers 8 to 12) clustered together. These cancer slides show various degrees of irregularity of the glandular structure while still sharing enough similar coloration and sub-cellular structure to group them closely.


Other tissues with similar pathological morphology, or tissues that share similar prototypical origins, also group together. Liver, heart, and fat tissues are unique enough by themselves that they alone form three branches of the dendrogram. The rest of the tissues form eight tight clusters [30]. Tissues that are visually similar, and which medical students often have difficulty telling apart, also lie close together in the dendrogram. Tissues such as bladder, spleen, lymph node, normal prostate, and parotid gland have a characteristically lower level of cellularity and together form a bigger cluster. The cancer images in this small data set are located in just two clusters (1 and 3), indicating that the feature extraction and the distance metric make useful semantic distinctions of image content in a global scheme.

Fig. 3. Two prominent clusters in the dendrogram. (a) From left to right, upper to lower: prostate cancer, Gleason grades 1 to 5. (b) From left to right, upper to lower: bladder, lymph node, parotid gland, normal prostate, and spleen.
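The dendrogram-based validation can be reproduced on any precomputed similarity matrix with standard tools. The sketch below uses SciPy rather than SPSS; the variable names are illustrative, and SciPy's centroid linkage treats the supplied distances as Euclidean, so it only approximates the HAGCCM run described above.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

def plot_validation_dendrogram(similarity, labels):
    """Agglomerative clustering of a precomputed image-to-image similarity matrix.

    similarity : (n, n) symmetric matrix of pairwise similarities in [0, 1],
                 e.g. the 24 x 24 matrix produced by the retrieval code.
    labels     : tissue/diagnosis names for the n images.
    """
    distance = 1.0 - similarity                    # convert similarity to distance
    np.fill_diagonal(distance, 0.0)
    condensed = squareform(distance, checks=True)  # condensed form expected by linkage()
    # Centroid linkage stands in for the SPSS HAGCCM analysis; SciPy assumes the
    # supplied distances are Euclidean, so this is an approximation of that run.
    Z = linkage(condensed, method="centroid")
    dendrogram(Z, labels=labels, leaf_rotation=90)
    plt.ylabel("inter-cluster distance")
    plt.tight_layout()
    plt.show()
```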

C. Multidimensional Scaling Analysis

Nonmetric multidimensional scaling (MDS) analysis with the kyst program [31], [32] gave a table of stress values for the final configuration of the 24 points in one to six dimensions; see Fig. 4. Kyst attempts to discover the hidden structure of the data by mapping the data points to configurations in $k$-dimensional spaces while decreasing the stress value, a measure of badness-of-fit between the configuration of points and the data:

$$ \text{stress} = \sqrt{\frac{\sum_{i<j}\bigl(f(d_{ij}) - \hat{d}_{ij}\bigr)^{2}}{\sum_{i<j}\hat{d}_{ij}^{2}}} \qquad (3) $$

where $d_{ij}$ is the distance between image $i$ and image $j$, $\hat{d}_{ij}$ is the distance between image $i$ and image $j$ in the reduced space, and $f$ is the function transforming from the sample space to the reduced space. The data strongly suggest an elbow at a dimensionality of 3, where the stress value is a marginal 0.102; hence, the dimensionality of the problem is roughly 3 [32].

Fig. 4. Scatter plot of stress value against dimensionality. (The kyst program was developed by J. B. Kruskal, F. W. Young, and J. B. Seery, and is maintained locally at the University of Pittsburgh by Dr. S. Hirtle.)
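For readers who wish to repeat the dimensionality check, the sketch below computes a stress curve for a precomputed distance matrix. It uses metric MDS from scikit-learn as a stand-in for the nonmetric kyst procedure and the metric form of (3), so its values would not match the reported 0.102 exactly; names and parameters are assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

def kruskal_stress(distance, coords):
    """Stress between original distances and a k-dimensional configuration.

    Metric form of (3): uses the distances directly rather than the monotone
    transform f(.) fitted by a nonmetric program such as kyst.
    """
    d_hat = squareform(pdist(coords))
    iu = np.triu_indices_from(distance, k=1)        # count each pair once
    return float(np.sqrt(np.sum((distance[iu] - d_hat[iu]) ** 2)
                         / np.sum(d_hat[iu] ** 2)))

def stress_by_dimension(distance, max_dim=6):
    """Stress of a metric MDS embedding for dimensionalities 1..max_dim."""
    stresses = []
    for k in range(1, max_dim + 1):
        coords = MDS(n_components=k, dissimilarity="precomputed",
                     random_state=0).fit_transform(distance)
        stresses.append(kruskal_stress(distance, coords))
    return stresses
```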


IV. DISCUSSION

Most previous research in CBIR has used only one or two image features at a time, usually color and texture, the most widely used, effective, and relatively well-studied features [1], [4]. However, the use of only one or two image features often leads to over-simplified algorithms and unpredictable retrieval performance. Researchers are beginning to explore how linear or nonlinear combinations of several features can yield a very effective classification metric, leading to efficient retrieval [2], [3].

Two major problems are associated with the high dimensionality inherent in image retrieval. First, high dimensionality brings computational overhead to the training and classification process. Second, over-fitting is unavoidable, as real-world problems often do not provide enough data samples to support a complex, high-dimensional model. It is therefore useful to create a simple model that gives satisfactory performance. Such a model can be constructed from many candidate features through:

Feature selection: selecting a subset of features that provide the most discriminating power and eliminating those that contribute little to the classification. In this study, we picked four feature types that have been widely accepted and used.

Dimensionality reduction: mapping the high-dimensional feature space into a reduced space that can be searched effectively. Traditional methods of choice are pattern recognition techniques such as the Karhunen-Loève transform (KLT), singular value decomposition (SVD), and principal component analysis (PCA); an illustrative sketch appears later in this section.

Clustering is also used to build a hierarchical structure in the search space. This work shows that, within a specific domain, the cluster structure can also be used to verify the design of the system. The similarity measure defines the character of the search space of the retrieval system, and cluster analysis helps to visualize the transformed space in a layout comparable to the pathology taxonomy. This provides a valuable way to incorporate domain knowledge not only before modeling, but also during and after modeling and implementation. The revealed internal structure of the search space can also provide insight into the task of designing a navigation interface for content-based retrieval and/or concept-based browsing of a large image database. Finally, the clustered structure provides informative guidance for implementing relevance feedback and query expansion techniques, both of which were originally invented and have matured in solving text retrieval problems.

Image database designs range from those based on a PC platform [33] to those utilizing supercomputing power, such as the one we have adopted. In a networked system, the user interface component can be kept separate from the image database and content-based search engine. This design helps to keep the client software light, requires no extra hardware or local installation, and leaves almost all computation to the powerful server machine, on which many algorithms can be fine-tuned for efficiency.
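The dimensionality-reduction option listed above can be sketched as follows; the component count and array shapes are illustrative assumptions, and this is not part of the described system.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_signatures(signatures, n_components=32):
    """Project high-dimensional signature vectors into a smaller search space.

    signatures   : (n_images, n_features) array of concatenated signature vectors.
    n_components : size of the reduced space (illustrative value).
    Returns the fitted PCA model and the reduced signatures.
    """
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(signatures)
    return pca, reduced

# A query signature is mapped with the same model before similarity search:
# query_reduced = pca.transform(query_signature.reshape(1, -1))
```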

Although the methodology adopted here can be generalized to other multimedia information retrieval within a domain context, the results obtained in this study should be interpreted with caution. A limited number of samples was used for the analysis, and only one representative image was selected for each diagnostic category, which cannot serve as a definitive example of all variations of image content. Future research will address the within-group variation of this pathology image data set.

We are currently working on improving the feature selection algorithm to take advantage of compressed image file formats, as well as incorporating useful information retrieval techniques to improve database search efficiency. Another enhancement to the system will be to integrate the existing text-based search component into the content-based search system. A unified client will allow access to the two separate search engines: users will be able to view results from both subsystems in one window and use Boolean algebra to compose complicated queries. This will also provide an opportunity to evaluate the performance of the CBIR technique against the text-based approach.

ACKNOWLEDGMENT

The authors would like to thank Dr. R. Dawson of InterScope Technologies, Inc. (http://www.interscopetech.com) and Dr. R. Crowley of the Center of Pathology Informatics (http://path.upmc.edu/cpi/), Department of Pathology, University of Pittsburgh Medical Center, for image classification and their valuable insights, and S. Leeds for her generous help with the preparation of the manuscript.

REFERENCES
[1] L. D. Bergman, V. Castelli, and C. S. Li, Progressive content-based retrieval from satellite image archives, D-Lib Mag., Oct. 1997.
[2] H. A. Rowley, S. Baluja, and T. Kanade, Neural network-based face detection, IEEE Trans. Pattern Anal. Machine Intell., vol. 20, pp. 23-38, Jan. 1998.
[3] A. Aslandogan and C. T. Yu, Multiple evidence combination in image retrieval: Diogenes searches for people on the web, in Proc. SIGIR '00, July 2000, pp. 88-95.
[4] M. Das, R. Manmatha, and E. Riseman, Indexing flower patent images using domain knowledge, IEEE Intell. Syst., pp. 24-36, Sept./Oct. 1999.
[5] E. Leisch, S. Sartzetakis, M. Tsiknakis, and S. C. Orphanoudakis, A framework for the integration of distributed autonomous healthcare information systems, Med. Inform., Special Issue, vol. 22, no. 4, pp. 325-335, Oct.-Dec. 1997.


[6] M. Tsiknakis, C. E. Chronaki, S. Kapidakis, C. Nikolaou, and S. C. Orphanoudakis, An integrated architecture for the provision of health telematic services based on digital library technologies, Int. J. Digital Libraries, Special Issue on Digital Libraries in Medicine, vol. 1, no. 3, pp. 257-277, Dec. 1997.
[7] A. W. Wetzel, Computational aspects of pathology image classification and retrieval, J. Supercomput., vol. 11, pp. 279-293, Nov. 1997.
[8] [Online]. Available: http://www.nlm.nih.gov/research/visible/visible_human.html
[9] M. Flickner et al., Query by image and video content: The QBIC system, IEEE Comput., vol. 28, pp. 23-32, Sept. 1995.
[10] Y. Rui, T. S. Huang, and S. F. Chang, Image retrieval: Current techniques, promising directions and open issues, J. Visual Commun. Image Represent., vol. 10, pp. 39-62, Mar. 1999.
[11] J. P. Eakins, Automatic image content retrieval: Are we getting anywhere?, in Proc. 3rd Int. Conf. Electronic Library and Visual Information Research (ELVIRA3), Milton Keynes, U.K., May 1996, pp. 123-135.
[12] J. P. Eakins, Techniques for image retrieval, in Library and Information Briefings. London, U.K.: South Bank University, Library Information Technology Centre, 1998, vol. 85.
[13] J. R. Smith and S. F. Chang, Automated image retrieval using color and texture, Columbia Univ., New York, Tech. Rep. 414-95-20, 1995.
[14] J. Z. Wang, G. Wiederhold, O. Firschein, and S. X. Wei, Content-based image indexing and searching using Daubechies wavelets, Int. J. Digital Libraries (IJODL), vol. 1, no. 4, pp. 311-328, Mar. 1998.
[15] S. Brandt, Use of shape features in content-based image retrieval, M.S. thesis, Dept. Eng. Phys. Math., Helsinki Univ. Technol., Helsinki, Finland, 1999.
[16] D. M. Squire and T. M. Caelli, Shift, rotation and scale invariant signatures for two-dimensional contours, in a neural network architecture, presented at the 1st Int. Conf. Mathematics of Neural Networks and Applications, Lady Margaret Hall, Oxford, U.K., July 1995.
[17] Y. Liu, F. Dellaert, and W. Rothfus, Classification Driven Semantic Based Medical Image Retrieval, submitted for publication.
[18] R. E. Dayhoff, P. M. Kuymak, and B. Shepard, Integrating medical images into hospital information systems, J. Digit. Imag., vol. 4, pp. 87-93, May 1991.
[19] H. D. Tagare, C. C. Jaffe, and J. Duncan, Medical image databases: A content-based retrieval approach, J. Amer. Med. Inform. Assoc., vol. 4, no. 3, May/June 1997.
[20] J. Z. Wang, Pathfinder: Multiresolution region-based searching of pathology images using IRM, in J. Amer. Med. Inform. Assoc., Symp. Suppl., Los Angeles, CA, Nov. 2000, pp. 883-887.
[21] M. J. Swain and D. H. Ballard, Color indexing, Int. J. Comput. Vision, vol. 7, no. 1, pp. 11-32, Nov. 1991.
[22] J. R. Smith and S.-F. Chang, Local color and texture extraction and spatial query, in Proc. IEEE Int. Conf. Image Processing, vol. 3, Lausanne, Switzerland, Sept. 1996, pp. 1011-1014.
[23] H. Zhang, Y. Gong, C. Y. Low, and S. W. Smoliar, Image retrieval based on color features: An evaluation study, in Proc. SPIE Digital Image Storage and Archiving Systems, vol. 2606, 1995, pp. 212-220.

[24] W. Y. Ma, Y. Deng, and B. S. Manjunath, Tools for texture/color based search of images, in Proc. SPIE Human Vision and Electronic Imaging II, vol. 3106, San Jose, CA, Feb. 1997, pp. 496-507.
[25] W. Y. Ma and B. S. Manjunath, Image indexing using a texture dictionary, in Proc. SPIE Digital Image Storage and Archiving Systems, vol. 2606, Philadelphia, PA, Oct. 1995, pp. 288-298.
[26] B. S. Manjunath and W. Y. Ma, Texture features for browsing and retrieval of image data, IEEE Trans. Pattern Anal. Machine Intell., vol. 18, pp. 837-842, Aug. 1996.
[27] W. Y. Ma and B. S. Manjunath, Texture features and learning similarity, in Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, San Francisco, CA, June 1996, pp. 425-430.
[28] W. H. Press et al., Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. Cambridge, U.K.: Cambridge Univ. Press, 1997.
[29] R. R. Korfhage, Information Storage and Retrieval. New York: Wiley, 1997, pp. 84-85.
[30] M. S. Aldenderfer and R. K. Blashfield, Cluster Analysis, ser. Quantitative Applications in the Social Sciences, no. 07-044. Thousand Oaks, CA: Sage Publications, 1984.
[31] J. B. Kruskal, Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis, Psychometrika, vol. 29, no. 1, pp. 1-27, Mar. 1964.
[32] J. B. Kruskal and M. Wish, Multidimensional Scaling, ser. Quantitative Applications in the Social Sciences, no. 07-011. Thousand Oaks, CA: Sage Publications, 1978.
[33] J. N. Stahl and B. Kranmann, Customized medical image databases: A low-cost approach, Comput. Med. Imag. Graphics, vol. 21, no. 6, pp. 345-350, Nov.-Dec. 1997.

Lei Zheng, photograph and biography not available at the time of publication.

Arthur W. Wetzel, photograph and biography not available at the time of publication.

John Gilbertson, photograph and biography not available at the time of publication.

Michael J. Becich, photograph and biography not available at the time of publication.
