
A Short Primer on Chemometrics for Spectroscopists

Steven D. Brown
Department of Chemistry and Biochemistry
University of Delaware
Newark, DE 19716 USA

Introduction

For a number of good reasons, spectral measurement is often the method of choice in qualitative and quantitative analysis of chemical mixtures. It is relatively easy to generate a good deal of data in a short time by proper use of spectroscopy. Getting useful results from a set of spectral data is not always straightforward, however. Determining the amounts of the components of a mixture can often be problematic without a prior separation step because of the overlap of spectral responses. Identifying the components of a mixture can also be challenging because of the similarity of many spectral responses. Often, the solution to these problems has been to increase spectral resolution or, as in the case of quantitative analysis, to enhance the effective resolution by means of a prior separation step. Many of these spectroscopic fixes to the problem of extracting results from data work less well than one might expect, given the apparent information in a spectral scan. For this reason, spectroscopists have increasingly turned to chemometrics for help in dealing with spectral data.

To understand why chemometrics has had such good success in dealing with apparently intractable problems in spectroscopy, one must understand the distinction between data and information. Spectroscopic measurements often generate huge amounts of data, but because the chemical and physical basis for the spectroscopic transition(s) observed is neither perfectly unique to a single species nor perfectly confined to an isolated set of energies, the data generated are highly connected (mathematically speaking, they have high correlation) from one measurement channel to the next and from one chemical species to the next over those same channels. This high serial correlation decreases the apparent information content of the data; for example, it is very hard to find a wavelength channel in a UV-visible spectrum that distinguishes alanine from glycine, as both tend to absorb over the same sets of wavelengths. Yet, if an approach is used that expects serial correlation, it is possible to turn the correlation to good advantage by exploiting its redundancy, in the same way that a mean takes advantage of the redundancy (the sameness) of a set of numbers: one can use redundancy to gain precision. Chemometric methods are efficient at extracting unique and redundant information from multichannel data such as spectra because these methods presume the serial correlation found in spectra and are designed to use it to improve the precision of any estimate made from the data.

To get the best from a chemometric method, it is helpful to have a basic understanding of the principles under which it functions and the assumptions implicit in its use. This brief tutorial is intended to introduce the practicing spectroscopist to the basics of several popular chemometric methods. A full treatment of the application of chemometrics to spectroscopy is given in a recent textbook (Pelikán, Čeppan, and Liška, 1994).

Soft Modeling in Latent Variables

Many of the methods employed in chemometrics are based on the concept of soft modeling, a linear modeling approach that originated in the field of multivariate statistical analysis but which has become synonymous with the term chemometrics. The focus of the soft modeling method on the properties of the signal, rather than on the noise, helps to distinguish chemometrics from statistics, where the emphasis is usually on the structure and properties of the error term. Chemists often confuse the two fields, but remembering the difference in focus makes distinguishing them relatively simple. Because of the heavy emphasis on soft modeling in chemometrics, the field developed around an algorithmic rather than a theoretical framework, an attribute that is now beginning to change. Discussions of soft modeling in the current literature are more likely to focus on the linear algebraic theory of the modeling than on the specific steps needed to form the model. It is useful to have an appreciation for some of the key approaches and assumptions of soft modeling, as these underlie the logic of many of the chemometric methods (Wold, 1995). More details are available in the Wiley Encyclopedia of Analytical Chemistry article Soft Modeling of Analytical Data (de Juan, Casassas and Tauler, 2000), so it is appropriate to provide only a brief overview here.

Traditionally, modeling in spectroscopy has been done using first-principles (hard) models. A hard model is one that describes the system in terms of mathematical relationships developed using the measurement variables as independent variables and the desired outputs as dependent variables. Because the chemical systems studied are complex, the hard modeling used in chemistry has either been applied to simplified systems or has involved limiting laws or other approximations and restrictions on the region of application of the hard model. The ubiquitous Beer's Law relation used in quantitative spectroscopy is an example of one such first-principles model, with well-known places where deviations are seen because of the simplifications made in the theory.

Soft modeling sees the modeling problem from an entirely different logical perspective: it presumes that the chemical system under study is complex, and that it is not possible or economically feasible to adequately describe the behavior of the system using a hard model. The soft model is based on variation and correlation in the data, as captured in a covariance matrix, which can be thought of as a measure of the overall fluctuation in each independent variable present in the data set, as well as of the variable-variable interactions. The first step in soft modeling is to express the data in terms of a new set of axes based on the different contributions to variation in the data. This is conveniently done by a mathematical conversion from measured data to new axes based on covariance in the data set. A set of orthogonal components made from linear combinations of the independent (spectral data) variables is created to describe independent sources of the observed variation in the covariance matrix created from the data set analyzed, according to the equation

X = UV^T    (1)

where matrix V, the loadings of the set of spectral data described by matrix X, contains the linear combinations of the original measurement variables that define the new, variation-based coordinate system spanning the data in X, and matrix U, called the scores of X, contains the coordinates of the data X in that variation-based coordinate system (see Jackson, 1991). It is useful to note that equation 1 amounts to a rotation of the spectroscopic variables defining X onto a new matrix U; the rotation matrix is V, because XV = U. Equation 1 is accompanied by a small, important, but often unmentioned detail: it is usual to center matrix X (i.e., make the average of the data in X equal 0 for each of the measured spectroscopic variables) prior to performing the rotation defined in the equation. The centering operation can be thought of as a translation of X from the measured center (the average spectrum of the data) to a new one with an average spectrum of 0. The linear combinations of the measured spectral variables that result from the rotation are called latent variables because they are derived rather than measured. The latent variables extracted to describe a data set are ordered in terms of the size of the independent sources of variation that they explain: the first latent variable explains the largest independent source of variance in the data, the second latent variable the second largest, and so on, until all variation in the data set is accounted for by one of the linear combinations of the measurements.

The second step in soft modeling is the elimination of non-informative latent variables. Generally, re-expression of the data set X in terms of latent variables is not useful unless a decision is also made on the number of latent variables that are needed to adequately explain the systematic variation in the data X. It is at this point that we must distinguish between variables that help us answer the questions posed (and therefore must contain signal) and those that do not help (because they contain noise). Note that signal and noise are defined here in a way consistent with the usual understanding of experimental scientists but very differently than is done in traditional statistics. This removal of non-informative latent variables is a projection from a high dimension (the number of latent variables, usually equal to the number of samples measured) to a much smaller dimension. Thus, as Figure 1 shows, the PCA algorithm is merely a re-expression of a part of the spectroscopic data by translation, rotation and projection. For those readers more comfortable with quantum mechanics, this process is identical to an eigenanalysis of the scatter matrix X^T X, where the vectors making up the columns of the V matrix are the eigenvectors.

Figure 1: Finding the Latent Variables in a Set of NIR Spectra

Figure 1a: The data set (Weyer, 1996).

(translation)

Figure 1b: Mean-centered (by column) spectral data, ready for soft modeling.

(rotation)

Figure 1c: Soft-modeled spectral data.

(truncation)

Table 1: Percent Variance Captured by PCA Model of Spectra

Principal    Eigenvalue    % Variance Captured    % Variance Captured
Component    of Cov(X)     (This PC)              (Total)
---------    ----------    -------------------    -------------------
    1        9.65e+000           96.19                  96.19
    2        3.30e-001            3.29                  99.48
    3        4.11e-002            0.41                  99.89
    4        5.42e-003            0.05                  99.94
    5        2.47e-003            0.02                  99.97
    6        1.05e-003            0.01                  99.98
    7        7.61e-004            0.01                  99.99
    8        3.29e-004            0.00                  99.99

Figure 1d: Truncated scores data containing ~99.5% of the variance of the original data (see Table 1 above)
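The translation-rotation-truncation sequence illustrated in Figure 1 takes only a few lines of linear algebra. The following minimal sketch is illustrative only: it assumes numpy is available and uses random numbers as a stand-in for a real matrix of spectra X (rows are samples, columns are wavelength channels).

import numpy as np

# Stand-in for a real (samples x wavelength channels) matrix of spectra
rng = np.random.default_rng(0)
X = rng.random((10, 100))

# Translation: mean-center each wavelength channel (column)
Xc = X - X.mean(axis=0)

# Rotation: the SVD gives Xc = U S V^T; the columns of V are the loadings,
# and U*S holds the scores (coordinates in the variation-based system)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                        # identical to Xc @ Vt.T
loadings = Vt.T

# Percent variance captured by each latent variable (cf. Table 1)
eig = s**2 / (X.shape[0] - 1)         # eigenvalues of Cov(X)
pct = 100 * eig / eig.sum()
print(np.round(np.cumsum(pct)[:3], 2))

# Truncation: keep only the first k latent variables (cf. Figure 1d)
k = 2
T = scores[:, :k]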

Multivariate (First-Order) Calibration

Any correlation between the dependent variable(s) and these latent variables is captured by means of a regression model. The regression step can be done after the creation of latent variables for the independent variables and the removal of sources of variation believed to be unrelated to the systematic effects under study. Building a regression model by first soft modeling and truncating the independent (measured) variables is known as principal components regression (PCR) in statistics and chemometrics; alternatively, the regression can be done in concert with the extraction of the latent variables, using a modeling method known as partial least squares (PLS) regression. This is the subject of a well-known text (Martens and Næs, 1989) and an Encyclopedia of Analytical Chemistry article, Multivariate Calibration of Analytical Data (Wold and Josephson, 2000). A very brief overview of first-order calibration is provided here. Figure 2 shows the result of the linear regression of the water content on the first principal component of the data. The red line is the regression.

Figure 2: Inverse regression of water in mixtures on the first PC of the spectral data. Note that we actually regress the property on the score here. Doing the regression in this way permits us to analyze single components in mixtures (Brown, 1982).

The inverse relationship between spectral responses R and concentration or other property information C is expressed as

C = RB    (2)

where C is an m x n matrix of concentrations, R is an m x p matrix of responses, and B is a p x n matrix describing the calibration relationship. Equation 2 is often called an inverse Beer's Law relation, as it implies that concentration C is dependent on the spectral response variables in R, and C also carries all of the error. There are some subtle but important details associated with the choice of inverse regression, and the interested reader would benefit from further study (Martens and Næs, 1989). Both PCR and PLS are used with inverse models in chemometrics. PCR implements the inverse relation as shown in Figure 2, although more than one set of scores can be used in the PCR regression, analogous to conventional multiple linear regression. PLS regression also decomposes the property into another set of scores, and the two sets of scores are then regressed. A schematic of the way in which PLS implements equation 2 is shown in Figure 3. By soft modeling the spectral response R (and also C, with PLS), then truncating the soft model to decrease the variance unrelated to the calibration relationship, a useful calibration model often results.
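To make the PCR route through equation 2 concrete, here is a minimal sketch under the same assumptions as before (numpy available; R, c and the number of latent variables k are placeholders for real calibration data). It is one simple way to implement the idea, not the only one.

import numpy as np

def pcr_fit(R, c, k):
    """Inverse calibration c = R b using the first k principal components of R."""
    Rbar, cbar = R.mean(axis=0), c.mean(axis=0)
    U, s, Vt = np.linalg.svd(R - Rbar, full_matrices=False)
    T = (U * s)[:, :k]                                 # truncated scores
    q, *_ = np.linalg.lstsq(T, c - cbar, rcond=None)   # regress property on scores
    b = Vt[:k].T @ q                                   # rotate back to the channels
    return b, Rbar, cbar

def pcr_predict(Rnew, b, Rbar, cbar):
    """Predict the property for new spectra with the centered inverse model."""
    return (Rnew - Rbar) @ b + cbar

With a calibration set in hand, b, Rbar, cbar = pcr_fit(R, c, k=2) followed by pcr_predict(Rnew, b, Rbar, cbar) gives property estimates for new spectra.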

Figure 3: PLS regression of a soft model for x onto a soft model for y involves simultaneous selection of model latent variables and maximization of the x-y relation.

The most widely known success of first-order calibration methods has come in near-infrared (NIR) spectrometry, a fairly general analytical measurement that, previously, was often useless for direct quantitative measurements because of the lack of specificity of NIR bands as compared to those from infrared or Raman spectroscopy (Martens et al., 1987; Næs and Martens, 1988). Efforts to enhance the specificity or selectivity of analytical instrumentation, especially spectroscopic instrumentation, continue to make up a sizable fraction of chemometric research. Multivariate calibration using PLS for the modeling of the inverse response-concentration relationship has become commonplace in the last ten years (Martens and Næs, 1989). Published applications of multivariate calibration abound, as is clear from the number of citations in the Fundamental Reviews reported over the last ten years (Lavine, 2000a), but it is the author's experience that many more applications of multivariate calibration are put into practice than are reported. A good, short introduction to PLS theory and coding is also available (Geladi and Kowalski, 1986a, 1986b). The current trend in first-order calibration is to explain the calibration in terms of the net analyte signal (Booksh and Kowalski, 1994; Faber et al., 1997).
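For PLS, a library routine is normally used rather than hand-rolled code. A minimal sketch with scikit-learn's PLSRegression (assuming scikit-learn is installed; the arrays are again stand-ins for real data):

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
R = rng.random((30, 200))         # stand-in calibration spectra
c = rng.random(30)                # stand-in property (e.g., concentration)

# The number of latent variables plays the same role as the PCR truncation
pls = PLSRegression(n_components=3)
pls.fit(R, c)
c_hat = pls.predict(R)            # here, fitted values for the calibration set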

Second- and Higher-Order Calibration

Second-order calibration, where the analytical response gives rise to a matrix rather than a vector of data, is a very active area of research at present. This area is the subject of an Encyclopedia of Analytical Chemistry article, Second-order Calibration and Higher (Fleming and Kowalski, 2000), so again, only an overview is provided here. There are two reasons for the interest. One is practical: a large number of analytical measurements give rise to a matrix of data (e.g., fluorescence excitation-emission data, GC-MS, or a liquid chromatographic separation monitored with a diode-array detector). There is also a theoretical second-order advantage in using certain of these data in a calibration, in that the mathematics permits analysis of a calibrated component in the presence of an uncalibrated interference (Kowalski and Seasholz, 1992). First-order calibration can compensate for interferences, even unknown ones, so long as they are represented in the calibration set; only second- and higher-order calibration can compensate for uncalibrated interferences. To date, the number of practical applications of the second-order advantage is still rather small.

The analysis of higher-order chemical data (that is, data with dimension higher than one) involves a choice of the way that the data are unfolded to make matrices of data (where the rows are defined by the individual samples and the columns by the components of the one-dimensional measurements) for more conventional matrix-oriented analysis; a small numerical sketch of sample-mode unfolding is given at the end of this section. Smilde has reviewed different unfolding methods, including the various Tucker unfolding schemes and parallel factor analysis (PARAFAC) modeling, and offered a discussion of the history and applications of higher-order analysis (Smilde, 1992). Like multivariate calibration of first-order data, higher-order analysis is also based on soft modeling. It is no surprise that it carries some of the defects inherent in soft modeling, including sensitivity to noise, difficulty in getting the correct model size (called the rank here), and problems in assuring the match of bases. There are some important differences, however: the rank annihilation method deals with information in multiple measurement dimensions, and it is far less susceptible to the effect of an interference that shows in a prediction sample but not in any calibration sample. Higher-order calibration is very dependent on a good match of the calibration model with the response data to be determined.
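Sample-mode unfolding, as described above, is just a reshaping of the data cube. A minimal sketch, assuming numpy and a stand-in excitation-emission data cube:

import numpy as np

# Stand-in cube: 12 samples x 20 excitation channels x 50 emission channels
cube = np.random.default_rng(2).random((12, 20, 50))

# Unfold: one row per sample; each row strings a sample's matrix out as a vector
X = cube.reshape(cube.shape[0], -1)     # shape (12, 1000)

# X can now be treated with the matrix-oriented soft modeling methods above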

Semi-Quantitative Methods Using Soft Modeling

In many studies, the goal is not quantitative, indirect measurement of one or more known, calibrated chemical species in a series of mixtures. The goal may be discovery of what chemical species are involved in a dynamic system, or possibly discovery of the number of species changing in that system. It may just be an objective measure that the dynamic system has not changed in its overall make-up. Chemometric methods based on soft modeling of the multivariate sensor measurements taken on these systems are well-suited to obtaining information on the identity and number of chemical species.

When little is known about a data set, self-modeling can be attempted. The loadings used to describe a set of data are rotated to find a set of non-orthogonal axes that have physical significance. The number of possible rotations is infinite, so some set of external constraints must be used to limit the region searched for rotations that make spectroscopic and physical sense. A constraint, such as requiring all-positive spectra, defines a mathematical inequality that corresponds to restricting the set of possible rotations to a region of measurement space called a limiting convex hull. Another common constraint, namely requiring all-positive amounts or concentrations of component species, defines a second convex hull that also bounds the set of possible solutions. Possible rotations satisfying the two inequalities used as constraints on the solution can occur in the space between the two limiting convex hulls. Figure 4 shows a well-known two-component mixture and its resolution by constrained rotations. The convex hulls are the dashed lines here, and the solution must lie between the outer set of dashed lines and the inner set of dashed lines. The component spectra generated at the limiting lines are shown as ranges, as additional constraints would be needed to reduce the range of acceptable rotations (i.e., ones that produce positive spectra and positive amounts for each component).
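For a two-component system, the constrained-rotation idea can be illustrated numerically: scan candidate rotations of the two leading loadings and keep those whose rotated loading is an all-positive spectrum. This sketch (numpy assumed; X is a placeholder for a raw, non-negative two-component data set) applies only the spectral non-negativity constraint; the full Lawton-Sylvestre treatment adds the non-negative-concentration constraint to bound the solutions from the other side.

import numpy as np

def feasible_angles(X, n_angles=3600):
    """Rotation angles giving an all-positive candidate spectrum (one constraint
    of self-modeling curve resolution, for a two-component data set X)."""
    # Note: the raw (uncentered) data are decomposed in self-modeling
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    v1, v2 = Vt[0], Vt[1]
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    keep = [t for t in angles
            if np.all(np.cos(t) * v1 + np.sin(t) * v2 >= 0.0)]
    return keep   # the feasible band(s) correspond to the dashed hulls in Figure 4b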

Figure 4: Self-Modeling of a Spectral Data Set

Figure 4a: The spectral data (Lawton and Sylvestre, 1971).

Figure 4b: The scores of the data, plotted as score(1) versus score(2), and the limiting convex hulls (dashed lines).
Figure 4c: Limiting solutions for the first component of the mixture (absorbance versus wavelength).

Figure 4d: Limiting solutions for the second component of the mixture.

It is now known that the solutions are found near the apexes of the convex hulls, but there still is no systematic means to map out convex hulls and locate their apexes in the presence of noise and confounding effects. SIMPLISMA, one popular method for self-modeling, uses a visual examination to help find suitable rotations of the soft model (Windig, 1988). The solutions that result from this self-modeling are subject to uncertainty unless each species has a signature that is independent of the others at one or more of the wavelength channels used to collect the multivariate set. Not surprisingly, the more similar the species' true responses over the sensor set used, the larger the uncertainty in the self-modeled responses for these species will tend to be.

Often, self-modeling is needed because of a lack of solid information on the composition of the system, but it may be that the data are collected in a way that ensures their serial correlation in both the wavelength axis and another axis, often time. In these cases, the additional correlation offers an additional constraint that helps to remove ambiguity in the rotated solution. Now, it is possible to develop a series of soft models that evolve as the data evolve in time, in both the forward (increasing time) and backward (decreasing time) directions. Tracking the number of significant principal components in the soft models over the time axis permits an estimate of the number of species that vary over the data set, as well as where in the data the different species enter into the chemical system's response. Rotation of the soft model to simultaneously describe both the spectral response and the time-dependent data axes, subject to constraints in spectra and amounts, gives semi-quantitative estimates of pure responses and relative amounts. Malinowski discusses this approach in more detail and reviews early work showing that this modeling approach can be used to discover relationships in data (Malinowski, 1991).
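The forward and backward evolving soft models described in the last paragraph can be sketched directly. Assuming numpy and a time-ordered data matrix X (one spectrum per row; a placeholder here), the following computes the leading singular values of growing forward and shrinking backward submatrices; a singular value that rises above the noise level marks a species entering (forward) or leaving (backward) the system.

import numpy as np

def evolving_singular_values(X, k=3):
    """Leading singular values of the forward and backward submatrices of X."""
    n = X.shape[0]
    fwd = [np.linalg.svd(X[:i + 1], compute_uv=False)[:k] for i in range(n)]
    bwd = [np.linalg.svd(X[i:], compute_uv=False)[:k] for i in range(n)]
    return fwd, bwd   # plot versus time to count species, as in evolving factor analysis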

Classification and Clustering of Chemical Data

It may be that there is grouping of the samples in the data according to some latent property. Often, a set of data is heterogeneous, in that the samples clump together rather than spread evenly over the multivariate space. Building a single PLS regression model on the full set of multivariate data is inappropriate because the variation captured in the modeling will be heavily influenced by variation between groups of data and will be less influenced by the relation between the independent and dependent variables. In these cases, a grouping step is needed before regression modeling to ensure that a homogeneous structure is present in some subset of the data prior to modeling that portion of the data. If a multivariate grouping structure present in the data can be guessed or determined, separate soft models can be developed for each group in the data. Soft modeling can then be used to develop class regression models for different values of the latent property. New data can be classified into the appropriate group on the basis of the distance of the new data to each of the class models. An F test is used to decide on membership, using the residual variance of the fit and of the model. The SIMCA method focuses on modeling the classes rather than on finding an optimal classifier (Wold, 1976; Wold et al., 1984). See Figure 5 for an example of the SIMCA classification approach. By examining the mathematics behind SIMCA, it has become apparent that there may be better classifiers, depending on the structure of the data (Frank and Friedman, 1989; McLachlan, 1992). Background and current research in classification methods are discussed in the Encyclopedia article Clustering and Classification of Analytical Data (Lavine, 2000b).
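A minimal sketch of the SIMCA class-modeling step, assuming numpy: each class gets its own truncated principal component model, and a new spectrum is assigned by comparing its residuals after projection onto each class model. For brevity the sketch assigns to the smallest residual; a full SIMCA implementation would instead compare each residual with the class residual variance by an F test.

import numpy as np

class PCAClassModel:
    """Truncated principal component (soft) model of one class."""
    def __init__(self, X, k):
        self.mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.V = Vt[:k].T                   # class basis: first k loadings

    def residual(self, x):
        d = x - self.mean
        r = d - self.V @ (self.V.T @ d)     # the part the class model cannot explain
        return float(r @ r)

def classify(x, models):
    """Assign x to the class whose soft model leaves the smallest residual."""
    return min(models, key=lambda name: models[name].residual(x))

# e.g., models = {'A': PCAClassModel(XA, 2), 'B': PCAClassModel(XB, 2)}
#       label = classify(new_spectrum, models)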

Figure 5: SIMCA classification of two classes, as shown in the soft modeling representation of the data. Note the outlier datum to the right and the overlap of the classes at the center. SIMCA permits multi-class assignments and identifies samples that do not fit any class model in a way consistent with the other samples.

Strengths and Weaknesses of the Soft Modeling Approach

The strength of soft modeling with latent variables is that systematic sources of variation in the data that contain signal do not go unmodeled, as they might under hard models developed with inadequate theory. With soft modeling, relationships in data can be modeled even when the existing theory is incomplete, wrong, or missing entirely. The goal is a trade of a small amount of bias in the model for a larger decrease in the imprecision of the modeling, as shown in Figure 6a. This is done by examination of the quality of the model as a function of the number of latent variables, as shown in Figure 6b.

The ability to model linear relationships in the absence of suitable theory has made soft modeling an attractive approach for a wide range of problems in applied and analytical chemistry, especially in modeling of quantitative relationships in spectroscopy. Soft modeling with latent variables also has uses in semi-quantitative modeling, where latent variables can be used to group or to classify spectra of samples by some latent property. Examination of the latent variables can often provide insights into the chemistry behind the observations. And, because latent variables are linear combinations of what may be many measurements, they offer a way to screen measurements for appropriateness and impact on the modeling. For this reason, soft modeling in latent variables is also often used for discovery of relationships in data prior to building hard models.

Figure 6: The tradeoff between bias and variance in soft modeling.
Figure 6a: A well-executed soft model introduces a small amount of bias but significantly decreases the imprecision of the results of the modeling. (The figure shows distributions of results, plotted as relative frequency versus result, for an unbiased method and for a biased method with minimum variance.)

Figure 6b: The optimal soft model is selected by minimization of the combined bias and variance contributions to the model. (The figure shows MSE, bias, and variance as a function of model complexity, q; models below the optimal complexity underfit, and models above it overfit.)
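In practice, the minimum of the curve in Figure 6b is located by cross-validation rather than from the bias and variance, which are not known. A minimal sketch for a PCR model, assuming numpy (R and c are placeholders for calibration spectra and property values), computes the leave-one-out prediction error sum of squares (PRESS) for each candidate model size:

import numpy as np

def press_curve(R, c, max_k):
    """Leave-one-out PRESS versus number of latent variables for PCR."""
    m = R.shape[0]
    press = np.zeros(max_k)
    for i in range(m):                       # leave sample i out
        mask = np.arange(m) != i
        Rtr, ctr = R[mask], c[mask]
        Rbar, cbar = Rtr.mean(axis=0), ctr.mean()
        U, s, Vt = np.linalg.svd(Rtr - Rbar, full_matrices=False)
        T = U * s                            # scores of the training spectra
        for k in range(1, max_k + 1):
            q, *_ = np.linalg.lstsq(T[:, :k], ctr - cbar, rcond=None)
            b = Vt[:k].T @ q                 # k-component regression vector
            pred = (R[i] - Rbar) @ b + cbar
            press[k - 1] += (pred - c[i]) ** 2
    return press

# Model size: k_opt = 1 + int(np.argmin(press_curve(R, c, 10)))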

Soft modeling comes with several inherent defects, however. The most serious is that the latent variables created in the soft modeling are local to the data set analyzed. The soft model developed explains only the variation seen in the data set used to create it. Unfortunately, quantitative modeling often requires a predictive model, and there is strong incentive to take a soft model developed on one set of data for use on an entirely different set. Any use of this model on new data is an extrapolation and, unless great care is taken in collecting and analyzing the data, the soft model may have little predictive use. Because the soft model defines a truncated mathematical basis for a series of measurements, it is possible to create a soft model on one set of data and to use it on a closely related set. To make this possible, though, the truncated basis defining the latent variables developed in the soft model must span the bases defining any new set of experimental data subjected to the model. The subtle and not-so-subtle consequences of this requirement force the user of soft models to consider the experimental design of data sets and the sensitivity of the models to individual samples to a degree that goes far beyond what is done with more traditional modeling methods.

A second difficulty with soft modeling arises from the fact that the latent variables describe orthogonal mathematical effects, not physical effects. Most physical effects show high correlation between measurements, especially between adjacent measurement channels (e.g., individual wavelength channels in spectra). For example, suppose that in a chemical calibration, the varying amounts of two different compounds with different spectra are responsible for the two different sources of variation in the analytical response. These two chemically distinct sources may not be mathematically independent, however, because of the redundant nature of most multi-channel spectral responses measured with chemical instrumentation. Described in terms of latent variables, though, the two compounds describe different sources of variation and therefore must be mathematically independent. The result is the mixing of separate, chemically significant effects in each of the latent variables and the spread of these effects over several latent variables. The latent variables describe variation in measurements rather than some easily understood measurement phenomenon, such as the absorbance at a particular wavelength. Direct interpretation of the chemical or spectral significance of these latent variables is difficult or impossible.

Usually, conversion of mathematically significant effects, as evidenced in the latent variables surviving truncation of the soft model, back to physically significant effects is needed to extract chemically relevant information. In essence, the conversion needed is an oblique rotation of the orthogonal basis defined by the latent variables into the non-orthogonal basis defined by the nature of the physically significant spectral information desired. But which rotation is the correct one? Without external constraints or some prior knowledge of the solution, constraining the large number of possible rotations is impossible in most cases. The conversion of a mathematically useful latent variable set to a chemically relevant basis is therefore a stumbling block in getting usable chemical information from the soft modeling. This fact, more than anything, limits the use of chemometrics to those secure enough in the mathematics to apply it without many of the usual safety lines associated with visual interpretation or simple statistical inference (confidence intervals, etc.).

Another defect in soft modeling is the lack of an easy route to distinguishing the sources of variation attributable to systematic, chemical effects in the data from other sources of variation. Accidental correlations arising between systematic, intentional variation in the data and various kinds of noise, present either in the measurements themselves or in the taking of the data, can corrupt the latent variables by embedding non-informative variation within useful variation. One consequence of this accidental correlation is the aliasing of measurement variables weighted heavily in the latent variable; it is possible to see increased or decreased weights given to measurements in a latent variable as a consequence of noise-induced artifactual correlation with another measurement variable with large variation. These artifacts in the correlation and variation present in the data set frustrate the modeling effort in several ways. First, when present, accidental correlation induced by noise effects can make direct interpretation of the latent variables difficult, an important part of extracting information from the soft model. Second, incorrect weighting of the measurement variables in the latent variables can corrupt the rotation of latent variables back into chemically significant information presented in terms of the original measurement variables. Third, the noise-related artifacts complicate statistical decisions on the number of latent variables needed to describe a systematic effect in data, a key part of the development of the soft model.

Key Reviews of the Field of Chemometrics

Beginning in 1976, a detailed review of the methodology and practice of data analysis in chemistry has appeared in the biennial Fundamental Reviews issue of the journal Analytical Chemistry (the most recent as of this writing is that of Lavine, 2000a). These reviews provide a comprehensive analysis of the state of chemometrics and, when examined together, also provide a glimpse of the evolution of concepts in chemometrics over the past 25 years. Along with the reviews of basic research in chemometrics, three major reviews on chemometrics applied to problems arising in process analytical chemistry have also appeared in the Applications Reviews issue of Analytical Chemistry.

References Cited

Booksh, K.S. and Kowalski, B.R. Theory of analytical chemistry. Anal. Chem. 66, 782A-791A (1994).

Brown, P.J. Multivariate calibration (with discussion). J. Roy. Stat. Soc., Ser. B 44, 287-321 (1982).

de Juan, A., Casassas, E., and Tauler, R. Soft modeling of analytical data. In Encyclopedia of Analytical Chemistry. Myers, R.A., Ed. Vol. 11, Wiley and Sons: Chichester, UK, pp. 9800-9837 (2000).

Faber, K., Lorber, A. and Kowalski, B.R. Net analyte signal calculation in multivariate calibration. Anal. Chem. 69, 1620-1626 (1997).

Fleming, C. and Kowalski, B.R. Second-order calibration and higher. In Encyclopedia of Analytical Chemistry. Myers, R.A., Ed. Vol. 11, Wiley and Sons: Chichester, UK, pp. 9736-9764 (2000).

Frank, I.E. and Friedman, J.H. Classification: Oldtimers and newcomers. J. Chemometrics 3, 463-475 (1989).

Geladi, P. and Kowalski, B.R. Partial least squares regression: A tutorial. Anal. Chim. Acta 185, 1-17 (1986a).

Geladi, P. and Kowalski, B.R. An example of 2-block predictive partial least squares regression with simulated data. Anal. Chim. Acta 185, 19-32 (1986b).

Jackson, J.E. A User's Guide to Principal Components. Wiley-Interscience, New York (1991).

Kowalski, B.R. and Seasholz, M.B. Recent developments in multivariate calibration. J. Chemometrics 5, 129-146 (1992).

Lavine, B.K. Chemometrics: Fundamental review. Anal. Chem. 72, 91R-98R (2000a).

Lavine, B.K. Clustering and classification of analytical data. In Encyclopedia of Analytical Chemistry. Myers, R.A., Ed. Vol. 11, Wiley and Sons: Chichester, UK, pp. 9689-9710 (2000b).

Lawton, W.H. and Sylvestre, E.A. Self-modeling curve resolution. Technometrics 13, 617-633 (1971).

MacGregor, J.F. Statistical process control of multivariable processes. Control Eng. Practice 3, 403-414 (1995).

Malinowski, E.R. Factor Analysis in Chemistry. 2nd ed. Wiley-Interscience, New York (1991).

Martens, H., Karstang, T. and Næs, T. Improved selectivity in spectroscopy by multivariate calibration. J. Chemometrics 1, 201-219 (1987).

Martens, H. and Næs, T. Multivariate Calibration. John Wiley and Sons, Chichester, UK (1989).

McLachlan, G.J. Discriminant Analysis and Statistical Pattern Recognition. Wiley-Interscience, New York (1992).

Miller, C.E. The use of chemometric techniques in process analytical method development and operation. Chemom. Intell. Lab. Syst. 30, 11-22 (1995).

Næs, T. and Martens, H. Principal component regression in NIR analysis: Viewpoints, background details and selection of components. J. Chemometrics 2, 155-168 (1988).

Pelikán, P., Čeppan, M., and Liška, M. Applications of Numerical Methods in Molecular Spectroscopy. CRC Press: Boca Raton, FL (1994).

Smilde, A.K. Three-way analyses: Problems and prospects. Chemom. Intell. Lab. Syst. 15, 143-157 (1992).

Weyer, L.G. Chemometric Methods Applied to the Near Infrared and Raman Analysis of Water-Alcohol-Acetone Mixtures. Dissertation, University of Delaware (1996).

Windig, W. Mixture analysis of spectral data by multivariate methods. Chemom. Intell. Lab. Syst. 4, 201-213 (1988).

Wold, S. Pattern recognition by means of disjoint principal component models. Patt. Recog. 8, 127-139 (1976).

Wold, S. Chemometrics: What do we mean with it, and what do we want from it? Chemom. Intell. Lab. Syst. 30, 109-115 (1995).

Wold, S., Albano, C., Dunn, W.J., III, Edlund, U., Esbensen, K., Geladi, P., Hellberg, S., Johansson, E., Lindberg, W., and Sjöström, M. Multivariate data analysis in chemistry. In Chemometrics: Mathematics and Statistics in Chemistry. Kowalski, B.R., Ed. Vol. 138, NATO ASI Series, Reidel: Dordrecht, Holland, pp. 17-95 (1984).

Wold, S. and Josephson, M. Multivariate calibration of analytical data. In Encyclopedia of Analytical Chemistry. Myers, R.A., Ed. Vol. 11, Wiley and Sons: Chichester, UK, pp. 9710-9736 (2000).
