
Talanta 48 (1999) 729–736

Short communication

Intra-laboratory testing of method accuracy from recovery assays


A. Gustavo González*, M. Ángeles Herrador, Agustín G. Asuero

Department of Analytical Chemistry, University of Seville, 41012 Seville, Spain

Received 27 April 1998; received in revised form 6 August 1998; accepted 7 August 1998

Abstract

A revision of intra-laboratory testing of the accuracy of analytical methods from recovery assays is given. Procedures based on spiked matrices and spiked samples are presented and discussed. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Accuracy; Recovery assays; Spiked samples; Spiked matrices

1. Introduction

The accuracy of an analytical method is a key feature for validation purposes [1,2]. Four principal approaches have been proposed for the study of the accuracy of analytical methods [3–5]. They are based on: (i) the use of certified reference materials (CRMs); (ii) the comparison of the proposed method with a reference one; (iii) the use of recovery assays on matrices or samples; and (iv) round robin studies (collaborative tests). CRMs, when available, are the preferred control materials because they are directly traceable to international standards or units. The procedure consists in analyzing a sufficient number of CRMs
* Corresponding author. Tel.: +34 5 4557173; fax: +34 5 4557168; e-mail: agonzale@cica.es

and comparing the results against the certified values [4,6,7]. The Community Bureau of Reference of the Commission of the European Community (BCR, Bureau Communautaire de Référence), the Laboratory of the Government Chemist of Middlesex (LGC), the National Institute of Standards and Technology of the USA (NIST) and the National Institute for Environmental Studies of Japan provide a general coverage of certified reference materials [8–10]. Nevertheless, a series of shortcomings and limitations of CRMs have been pinpointed [11], especially their cost, the small amounts that may be purchased and the narrow range of matrices and analytes covered. The performance of a newly developed method can be assessed by comparing the results obtained by it with those found with a reference or comparison method of known accuracy and precision [12–16].

0039-9140/99/$ – see front matter © 1999 Elsevier Science B.V. All rights reserved. PII: S0039-9140(98)00271-9


The use of collaborative studies to control methodological bias is a very important topic [17,18]. However, proficiency testing and round robin studies will not be considered here, this paper dealing with the internal (intra-laboratory) control of accuracy. Unfortunately, within the realm of environmental, toxicological and pharmaceutical analysis, neither CRMs nor alternative methods are available for new contaminant, toxic and drug-related analytes. Accordingly, the remaining approach for checking method accuracy, that is, recovery assays, is the tool of the trade for the study of accuracy in pharmaceutical analysis. The aim of the present paper is to revise, outline and discuss suitable procedures for testing accuracy in pharmaceutical analysis from recovery assays. For the sake of illustration, some examples taken from the literature are discussed.

2. General overview on recovery assays

Before embarking on the body of the subject, some terminology should be established. According to the IUPAC paper (1990) on nomenclature for sampling in analytical chemistry [19,20], we will use the term test portion for the quantity of material removed from the test sample which is suitable in size to measure the concentration of the determinant by using the selected analytical method. The test portion may be dissolved, with or without reaction, to give the test solution. An aliquot (fractional part) of the test solution is then measured by following the analytical operating procedure. The test portion consists of analyte plus matrix [21]. Accordingly, if a test portion of weight m is dissolved into a total volume V, the test solution will have a concentration c = m/V, which is the sum of the analyte concentration (x) plus the matrix concentration (z). Consider now the newly proposed analytical method, applied to dissolved test portions of a given sample within the linear dynamic range of the analytical response (Y). This response may be expressed by the following relationship involving both analyte and matrix amounts [22]:

Y = A + Bx + Cz + Dxz   (1)

where A, B, C and D are constants.

A is a constant that does not change when the concentrations of the matrix, z, and/or the analyte, x, change. It is called the true sample blank [23] and may be evaluated by using the Youden sample plot [24–26], which is defined as the sample response curve [23]. In our terminology, the application of the selected analytical operating procedure to different test portions, m (different masses taken from the test sample), produces different analytical responses Y as outputs. The plot of Y versus m is the Youden sample plot, and the intercept of the corresponding regression line is the so-called Total Youden Blank (TYB), which is the true sample blank [23–30]. As will be discussed below, when a matrix without analyte is available, the term A can be determined more easily.

Bx is the fundamental term that justifies the analytical method and is directly related to the analytical sensitivity [31].

Cz is the contribution from the matrix, depending only on its amount, z. When this term occurs, the matrix is called interferent. In general this kind of interference is very infrequent, because a validated analytical method should be selective enough with respect to the potential interferences appearing in the samples where the analyte is determined [32]. The USP monograph [33] defines the selectivity of an analytical method as its ability to measure an analyte accurately in the presence of interference, such as synthetic precursors, excipients, enantiomers and known or likely degradation products that may be present in the sample matrix. Accordingly, the majority of validated methods do not suffer from such a direct matrix interference. In any case, as discussed below, the method accuracy may be tested even when faced with interferent matrices.

Dxz is an analyte/matrix interaction term. This matrix effect occurs when the sensitivity of


the instrument to the analyte is dependent on the presence of the other species (the matrix) in the sample [31]. For the purpose of determining analytes, this effect may be overcome by using the method of standard additions (MOSA) [23–30]. Certain types of samples, of which pharmaceutical dosage forms are just an example, enable the sample matrix to be simulated by a laboratory preparation procedure with all the excipients present in their corresponding amounts except the analyte of interest. In other cases, the sample matrix may be a synthetic mixture of naturally complex substances, such as analyte-free body fluids from unmedicated patients. In both cases, it is said that the placebo is available and recoveries are obtained from spiked placebos. The term placebo was used by Cardone [28] to refer to analyte-free materials. However, in order to avoid confusion and unify the jargon, the word matrix (or blank matrix) will be used throughout the text. Sometimes, however, it is not possible to prepare a matrix without the presence of the analyte. This may occur, for instance, with lyophilized materials, in which the speciation is significantly different when the analyte is absent [3]. In these cases, the matrix is not available and the MOSA must be applied [23–30], the recoveries being obtained from spiked samples. In spiked matrices or spiked samples, the analyte addition cannot be blindly performed. Addition should be accomplished over the suitable analyte range. Some analytes, when incorporated naturally into the matrix, are chemically bound to the constituents of the matrix. In such cases, the mere addition to the sample or matrix will not mirror what happens in practice. It is recommended that the analyte is added to the matrix and then left in contact for several hours, preferably overnight, before applying the analytical method, to allow analyte/matrix interactions to occur [34].
Spiked samples or matrices are also called fortified ones, the concentration of analyte added being the corresponding fortification level. In the following, the two procedures for demonstrating accuracy from recovery tests, namely spiked matrices and spiked samples, will be outlined. For the sake of generality, in all cases the coefficient C of Eq. (1) will be considered significant.

3. Recovery tests from spiked matrices

The recovery test is carried out from spiked matrices over the analyte range of interest, generally 75–125% of the expected assay value (label claim or theory), holding the matrix at the nominal constant level z = z₀. The analytical response when analyte is added follows the equation

Y = A + Bx + Cz₀ + Dxz₀ = (A + Cz₀) + (B + Dz₀)x = A′ + B′x   (2)

where A′ and B′ are the intercept and the slope of the calibration line in the matrix environment, which are constant because B, C, D and z₀ are also fixed. By using this calibration function, several amounts of analyte are added to the matrix. From the analytical signal at each addition i, Yᵢ, the amount of analyte found is estimated as x̂ᵢ = (Yᵢ − A′)/B′. The recovery (Rec) may be estimated as the average of the individual recoveries obtained at each spike i (Recᵢ = x̂ᵢ/xᵢ). Alternatively, a regression analysis of found versus added analyte concentrations may be performed, and the slope may be taken as the average recovery, as will be discussed in Section 5.
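The bookkeeping of Eq. (2) can be sketched in a few lines of Python. This is only an informal illustration, not part of the original method description: the calibration constants A′ and B′ and the responses Yᵢ below are invented numbers.

```python
# Sketch of recovery estimation from a spiked matrix (Eq. (2)).
# A_p (A') and B_p (B') are the intercept and slope of the calibration
# line in the matrix environment; all values here are hypothetical.
A_p, B_p = 0.05, 2.00

x_added = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0]   # spike levels
Y = [10.1, 20.0, 30.2, 39.9, 50.3, 60.1]        # hypothetical responses

# Amount found at each spike: x_hat_i = (Y_i - A') / B'
x_found = [(y - A_p) / B_p for y in Y]

# Individual recoveries Rec_i = x_hat_i / x_i and their average
rec = [f / a for f, a in zip(x_found, x_added)]
rec_mean = sum(rec) / len(rec)
print(round(rec_mean, 3))
```

With responses close to the calibration line, the average recovery comes out close to unity, as expected for an accurate method.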

4. Recovery tests from spiked samples

In a way similar to the preceding section, the application of the MOSA to the test portions (matrix plus analyte) is performed. An important requirement of this technique is that all solutions, unspiked and spiked test portions, be diluted to the same final volume. If we carry out analyte additions, x, on the final solution of a given test portion of concentration c₀ = x₀ + z₀ (x₀ being the concentration of analyte coming from the sample, present in the final solution), then the analytical response will be

Y = A + B(x₀ + x) + Cz₀ + D(x₀ + x)z₀ = (A + Cz₀) + (B + Dz₀)(x₀ + x) = A′ + B′(x₀ + x) = A′ + B′x₀ + B′x = A″ + B′x   (3)


Note that both A″ and B′ are constant because A, B, C, D, x₀ and z₀ are also constants. By using this calibration function, from the analytical signal at each addition i, Yᵢ, the amount of analyte found is estimated as x̂ᵢ = (Yᵢ − A″)/B′. This is an estimate of the added analyte concentration (xᵢ) rather than of the total analyte concentration (x₀ + xᵢ). A crucial requirement is that the total final analyte concentration obtained for the maximum amount of analyte spiked should remain within the linear range obtained at the method development step. The remainder is the same as discussed above.
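Analogously to the spiked-matrix case, the spiked-sample estimate of Eq. (3) can be sketched numerically. The calibration constants A″ and B′ and the responses below are invented for illustration; the point to notice is that (Yᵢ − A″)/B′ estimates the added, not the total, analyte.

```python
# Hypothetical standard-additions calibration for a spiked sample:
# Y = A'' + B'x, where A'' absorbs the blank, matrix and native-analyte
# contributions (Eq. (3)); all numbers here are invented.
A_pp, B_p = 12.3, 2.00

x_spiked = [0.0, 5.0, 10.0, 15.0]      # added analyte (0 = unspiked portion)
Y = [12.3, 22.2, 32.4, 42.3]           # hypothetical responses

# Estimated *added* concentration at each spike: (Y_i - A'')/B'
x_hat = [(y - A_pp) / B_p for y in Y]

# Recoveries are computed only for the non-zero spikes
rec = [h / a for h, a in zip(x_hat[1:], x_spiked[1:])]
print([round(r, 3) for r in rec])
```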

5. Evaluation of the recovery from spiked matrices or samples and significance tests for assessing accuracy

The calculation of recoveries from spiked matrices or samples may be performed (i) by computing the individual recoveries for each spiked amount (fortification level) of analyte and evaluating the recovery as their average; and (ii) from linear regression analysis of added versus found data.

5.1. Method of the averaged recovery

In this case, for each fortification level or analyte spike i, we have the added analyte concentration xᵢ. From the calibration graph, the estimated analyte concentration x̂ᵢ is obtained. An individual recovery is then calculated as Recᵢ = x̂ᵢ/xᵢ. The mean recovery, R̄ec, is calculated as the average of the individual ones:

R̄ec = (1/n) Σᵢ₌₁ⁿ Recᵢ   (4)

The average recovery may be tested for significance by using the Student t-test, the null hypothesis being that the recovery is unity (or 100 in percentage) and the method is accurate:

t = |R̄ec − 1| / (s_Rec/√n)   (5)

with

s_Rec = √[Σᵢ₌₁ⁿ (R̄ec − Recᵢ)² / (n − 1)]   (6)

n being the number of spike levels. If the t value obtained from Eq. (5) is less than the tabulated value for n − 1 degrees of freedom at a given significance level, then the null hypothesis is accepted and the method is accurate.

5.2. Regression analysis of added versus found data

Both in spiked matrices and in spiked samples, a regression analysis of estimated (found) against spiked (added) analyte concentration is performed. These studies are not new, indeed. In a landmark paper [35], Mandel and Linnig studied accuracy in chemical analysis using linear calibration curves, applying regression analysis to the linear relationship

x_found = a + b·x_added   (7)

where x_found and x_added refer to the concentrations of analyte estimated (x̂) and spiked (x) defined above. The theory predicts a value of 1 for the slope, b, and a value of 0 for the intercept, a. However, the occurrence of systematic and random errors in the analytical procedure may produce deviations from this ideal situation [36]. Thus, it may occur that the straight line has a slope of 1 but a non-zero intercept, coming from a wrongly estimated background signal and reflecting the need for a blank correction in the calibration graph. Another possibility is that the slope is significantly different from unity, indicating a source of proportional error in the proposed analytical method. The plot may present curvature or may even exhibit peculiar behaviour in case of analyte speciation.

Once the parameters a and b have been calculated from the linear fit of x_found versus x_added, and before evaluating the recovery, diagnostic checking of the residuals (responses − model predictions), that is, x_found − a − b·x_added at each spike level, should be applied to assess the validity of the model fit [37]. Certain underlying assumptions have been outlined for the regression analysis, such as independence of the random errors, homoscedasticity and Gaussian distribution of the random errors [38]. If the model represents the data suitably, the residuals should be randomly distributed about the value predicted by the model equation, with a normal distribution. A plot of the residuals on normal probability paper is a useful technique [39]: if the error distribution is normal, the plot will be linear. On the other hand, examination of plots of residuals against the independent variable (here x_added) may be of great help in the diagnosis of regression models [40]. Certain systematic patterns indicate that the model is incorrect in some way. A sector pattern indicates heteroscedasticity in the data [41,42]. A non-linear pattern indicates that the present model is incorrect [43]. Residuals may also be used to detect outliers. A very straightforward way is to consider as an outlier any calibration point whose residual is greater than twice the standard deviation of the regression line, although jackknife residuals or the Cook distance method are more accurate tools for detecting outliers [44].

The heteroscedasticity revealed by residual analysis of the added versus found data comes from heteroscedasticity in the responses (Y) of the calibration curve, which is propagated to the estimates x_found. In such a case, instead of using ordinary least squares to fit the calibration straight line, the use of weighted least squares is advised. The weights wᵢ are given by wᵢ = 1/sᵢ², where sᵢ is the standard deviation of the responses replicated at the analyte concentration xᵢ [41,42,45]. Non-linear patterns detected by residual analysis of the added versus found plots arise because the calibration graph used for estimating the analyte concentration is far from linear. Curved patterns suggest the inclusion of a quadratic term in x² in the calibration function. In these situations, suitable non-linear calibration curves should be established in order to obtain unbiased estimates of x_found. Three methods are available for this purpose, namely the method of linear segments, the method of the three-parameter function and the method based on polynomial functions [46,47]. After the validity of the fit has been appraised, statistical comparison of a and b with their idealistic values, 0 and 1, must be performed.
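The ordinary least-squares fit and the simple 2s outlier screen described above can be sketched as follows; the function and variable names are my own, and the routine is a minimal illustration rather than a full diagnostic suite (no jackknife residuals or Cook distances).

```python
import math

def residual_diagnostics(x_added, x_found):
    """Fit x_found = a + b*x_added by ordinary least squares and flag
    points whose |residual| exceeds twice the regression standard
    deviation (the simple outlier rule mentioned in the text)."""
    n = len(x_added)
    mx = sum(x_added) / n
    my = sum(x_found) / n
    sxx = sum((x - mx) ** 2 for x in x_added)
    sxy = sum((x - mx) * (y - my) for x, y in zip(x_added, x_found))
    b = sxy / sxx
    a = my - b * mx
    resid = [y - (a + b * x) for x, y in zip(x_added, x_found)]
    s = math.sqrt(sum(r ** 2 for r in resid) / (n - 2))  # regression s
    outliers = [i for i, r in enumerate(resid) if abs(r) > 2 * s]
    return a, b, resid, s, outliers

# Applied to the trigonelline data of Table 2 (Section 6):
a, b, resid, s, outliers = residual_diagnostics(
    [5, 10, 15, 20, 25, 30],
    [5.02, 9.89, 14.94, 19.81, 25.30, 30.22])
print(round(b, 3), round(a, 2), outliers)
```

Run on the Table 2 data, it reproduces the slope 1.012 and intercept −0.18 quoted in Section 6 and flags no outlier.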

Conventional individual confidence intervals for the slope and the intercept, once their standard deviations s_b and s_a are calculated, based on the t-test (t_a = |a|/s_a; t_b = |b − 1|/s_b), although frequently used by workers [48–50], can lead to erroneous conclusions, because these tests, when carried out independently of each other, ignore the strong correlation between slope and intercept [51]. Instead of these individual tests, the elliptic joint confidence region (EJCR) for the true slope (β) and intercept (α), derived by Working and Hotelling [52] and adopted by Mandel and Linnig [35], is recommended, whose equation is

n(a − α)² + 2(Σxᵢ)(a − α)(b − β) + (Σxᵢ²)(b − β)² = 2s²F₂,ν   (8)

where n is the number of points, s² the regression variance and F₂,ν the critical value of the Snedecor–Fisher statistic with 2 and ν = n − 2 degrees of freedom at a given P% confidence level, usually 95% [53,54]. The centre of the ellipse is (a, b). Any point (α, β) which lies inside the EJCR is compatible with the data at the chosen confidence level P. In order to check for constant (translational) or proportional (rotational) bias, the values α = 0 and β = 1 are compared with the estimates a and b using the EJCR. If the point (0, 1) lies inside the EJCR, then bias is absent [13] and, consequently, the recovery may be taken as unity (or 100% on the percentile scale). This can be done from easy calculations as described in Appendix A. Once the recovery is computed (from one or the other procedure), it should be checked for fulfilling the accuracy criteria according to the AOAC guidelines [55], as indicated in Table 1. Note that for trace analysis, e.g. drug residues in tissues, recoveries of about 50% are often the best that can be achieved.

6. Worked example

For the sake of illustration, a case study was selected. It deals with the recovery of trigonelline in coffee extracts by ion chromatography.

Table 2
Recovery study of trigonelline on coffee extracts

x_added   x_found   Rec      Estimated x_found   Residual
5         5.02      1.004    4.88                 0.14
10        9.89      0.989    9.94                −0.05
15        14.94     0.996    15.00               −0.06
20        19.81     0.990    20.06               −0.25
25        25.30     1.012    25.12                0.18
30        30.22     1.007    30.18                0.04

Trigonelline is determined in green and roasted coffee extracts by an ion chromatography procedure using polybutadiene–maleic acid (PBDMA) coated on silica as stationary phase, 2 mmol l⁻¹ aqueous hydrochloric acid (pH 3) as eluent and UV detection at 254 nm. Test solutions were prepared by refluxing test portions of 3 g of dried coffee (green or roasted) with hot water (80°C) for 1 h. The extract was filtered and diluted to 250 ml. For trigonelline analysis, the test solution is diluted 1:5 (v/v) and 3 ml of the resulting solution is passed through a C18 SPE cartridge; then the eluent is also passed, to collect a total volume of 10 ml. An aliquot of this latter solution was filtered through a 0.45 µm filter unit and subsequently injected into the HPLC system [56]. Owing to the lack of suitable certified reference materials, the validation methodology was based on recovery assays from spiked extracts of green and roasted coffees. Roasted coffees have trigonelline contents within 0.50–0.85% (w/w, dry basis). The corresponding extracts will present trigonelline concentrations in the range 60–102 mg l⁻¹, which corresponds to 15–25.5 mg trigonelline. Six spikes, additions or fortification levels were selected. Spiked extracts were allowed to stand overnight and then analyzed. The added and found amounts of trigonelline are shown in Table 2. The individual recoveries at each fortification level are also indicated in Table 2. The averaged recovery is 0.9997 and its standard deviation 0.0094. By applying Eq. (5), the observed t value is 0.078. The critical value for 5 degrees of freedom at a 95% confidence level is 2.015,
Table 1
Analyte recovery depending on the concentration range

Analyte concentration (%)   Recovery range (%)
≥10                          98–102
≥1                           97–103
≥0.1                         95–105
≥0.01                        90–107
≥0.001 to ≥0.00001           80–110
≥0.000001                    60–115
≥0.0000001                   40–120
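The Table 1 look-up is easy to automate. In the sketch below the thresholds are transcribed from the table and the function name is my own invention:

```python
# AOAC acceptable mean-recovery ranges (Table 1), as
# (concentration threshold in %, lower limit %, upper limit %);
# the first row whose threshold the analyte concentration meets applies.
AOAC_RANGES = [
    (10, 98, 102),
    (1, 97, 103),
    (0.1, 95, 105),
    (0.01, 90, 107),
    (0.00001, 80, 110),
    (0.000001, 60, 115),
    (0.0000001, 40, 120),
]

def recovery_limits(analyte_pct):
    """Return the (lo, hi) recovery range for an analyte concentration in %."""
    for threshold, lo, hi in AOAC_RANGES:
        if analyte_pct >= threshold:
            return lo, hi
    return None  # below the last row of the table

print(recovery_limits(0.5))   # trigonelline in coffee, 0.5-0.85% -> (95, 105)
```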

In Table 2, x_added and x_found refer to the amounts (in mg) of trigonelline added and found by analysis in the different extracts.

and therefore the null hypothesis is accepted: the recovery does not differ statistically from 100%. The same conclusion can be drawn from comparison of the averaged recovery on the percentile scale, 99.97, with the ranges provided by Table 1. Considering that the trigonelline content in coffee is 0.5–0.85%, the third row of the table (≥0.1%) is selected and the corresponding recovery range is 95–105%. The averaged recovery, 99.97, lies within this interval and consequently meets the requirements of the AOAC guidelines. On the other hand, the regression approach can be carried out. The plot of x_found versus x_added was linear, with an intercept of −0.18 and a slope of 1.012. The correlation coefficient was about 0.9999 and the regression variance s² = 0.0355. In Table 2 the estimated values of x_found and the corresponding residuals are presented. Residual analysis did not show any pathology, with the exception of a large value (−0.25) for the point corresponding to the addition of 20 mg. However, the absolute value of this residual is less than twice the standard deviation of the regression line (s = 0.1884) and hence it cannot be considered an outlier. A plot of the residuals on normal probability paper was fairly linear, which ratifies the model validity. In order to test whether the slope and intercept simultaneously do not differ from the idealistic values α = 0 and β = 1, the procedure based on the EJCR was considered, as explained


in Appendix A. Thus, by taking the critical value of the Snedecor–Fisher statistic at a 95% confidence level, F₂,₄ = 6.94, we obtain β₁ = 0.2304 < 1 and β₂ = 1.7770 > 1. This indicates that the point (0, 1) lies inside the EJCR; hence the intercept may be considered to be zero and the slope to be unity, which leads to the conclusion that the recovery can be considered as 100%.
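The averaged-recovery figures of this worked example can be reproduced from the Table 2 data with a few lines of Python (standard library only). Note that the last decimal place depends on whether the individual recoveries are rounded before averaging, so the values below agree with those quoted in the text only to within rounding.

```python
import math

# Table 2: trigonelline added and found (mg)
x_added = [5, 10, 15, 20, 25, 30]
x_found = [5.02, 9.89, 14.94, 19.81, 25.30, 30.22]

rec = [f / a for f, a in zip(x_found, x_added)]   # individual recoveries
n = len(rec)
rec_mean = sum(rec) / n                                             # Eq. (4)
s_rec = math.sqrt(sum((rec_mean - r) ** 2 for r in rec) / (n - 1))  # Eq. (6)
t = abs(rec_mean - 1) / (s_rec / math.sqrt(n))                      # Eq. (5)

# Close to the 0.9997 and 0.0094 quoted in the text
print(round(rec_mean, 4), round(s_rec, 4))
print(t < 2.015)   # -> True: accurate at the 95% confidence level
```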

References
[1] ICH, International Conference on Harmonization, Validation of Analytical Procedures, Note for Guidance, Commission of the European Community, Brussels, 1995.
[2] M. Thompson, Analyst 121 (1996) 285–288.
[3] J.M. Green, Anal. Chem. News Features, May 1 (1996) 305A–309A.
[4] M. Thompson, Anal. Proc. 27 (1990) 142–144.
[5] J.K. Taylor, Anal. Chem. 55 (1983) 600A–608A.
[6] R. Sutarno, H.F. Steger, Talanta 32 (1985) 439–445.
[7] M. Valcárcel, A. Ríos, Analyst 120 (1995) 2291–2297.
[8] E. Prichard, Quality in the Analytical Chemistry Laboratory, ACOL Series, Appendix 3: Some Sources of Reference Materials, Wiley, Chichester, UK, 1995, pp. 255–256.
[9] M. Valcárcel, A. Ríos, Materiales de referencia, in: M. Valcárcel, A. Ríos (Eds.), La calidad en los laboratorios analíticos, Editorial Reverté, Barcelona, 1992, ch. 6, pp. 177–222.
[10] K. Lamble, S.J. Hill, Analyst 120 (1995) 413–417.
[11] Analytical Methods Committee, Analyst 120 (1995) 29–34.
[12] A.C. Metha, Analyst 122 (1997) 83R–88R.
[13] A.G. González, A.G. Asuero, Fresenius J. Anal. Chem. 346 (1993) 885–887.
[14] A.G. González, A. Márquez, J. Fernández-Sanz, Comput. Chem. 16 (1992) 25–27.
[15] M. Thompson, Anal. Chem. 61 (1989) 1942–1945.
[16] B.D. Ripley, M. Thompson, Analyst 112 (1987) 377–383.
[17] J.K. Taylor, J. Assoc. Off. Anal. Chem. 69 (1986) 398–400.
[18] G.T. Vernimont, Interlaboratory evaluation of an analytical process, in: W. Spendley (Ed.), Use of Statistics to Develop and Evaluate Analytical Methods, ch. 4, AOAC, Arlington, VA, 1990, pp. 87–143.
[19] W. Horwitz, Pure Appl. Chem. 62 (1990) 1193–1208.
[20] R.E. Majors, LC-GC Int. 5 (1992) 8–14.
[21] U.R. Kunze, Probenahme und Probenvorbereitung, in: Grundlagen der quantitativen Analyse, ch. 2, 3rd ed., Georg Thieme Verlag, Stuttgart, 1990, pp. 2–4.
[22] R. Ferrús, Analytical function, calibration, interference, and modellization in quantitative chemical analysis, in: Miscel·lània Enric Casassas, Universitat Autònoma de Barcelona, Bellaterra, 1991, pp. 147–150.
[23] M.J. Cardone, Anal. Chem. 58 (1986) 438–445.
[24] W.J. Youden, Anal. Chem. 19 (1947) 946–950.
[25] W.J. Youden, Biometrics 3 (1947) 61.
[26] W.J. Youden, Mater. Res. Stand. 1 (1961) 268–271.
[27] M.J. Cardone, J. Assoc. Off. Anal. Chem. 66 (1983) 1257–1282.
[28] M.J. Cardone, J. Assoc. Off. Anal. Chem. 66 (1983) 1283–1294.
[29] M.J. Cardone, J.G. Lehman, J. Assoc. Off. Anal. Chem. 68 (1985) 199–202.
[30] L. Cuadros Rodríguez, A.M. García Campaña, F. Alés Barrero, C. Jiménez Linares, M. Román Ceba, J. AOAC Int. 78 (1995) 471–476.

Appendix A

The equation of the isoprobability ellipse is given by Eq. (8). One very easy way to determine whether the point (0, 1), corresponding to the joint null hypothesis α = 0 and β = 1, lies inside the ellipse (and therefore whether the null hypothesis is accepted) is to consider the intersections of the straight line α = 0 with the ellipse. Only when the point (0, 1) lies inside the ellipse do the straight line α = 0 and the ellipse intersect at two points (0, β₁) and (0, β₂) fulfilling (for β₂ > β₁): β₂ > 1 and β₁ < 1 simultaneously. Otherwise, the point (0, 1) is outside the ellipse. If in Eq. (8) we set α = 0 and z = b − β, the following expression is obtained:

(Σxᵢ²)z² + 2a(Σxᵢ)z + [na² − 2s²F₂,ν] = 0   (9)

After the following changes

L = Σxᵢ²
M = 2aΣxᵢ
N = na² − 2s²F₂,ν

the roots for z will be

z₁ = [−M + √(M² − 4LN)]/(2L)   (10)

z₂ = [−M − √(M² − 4LN)]/(2L)   (11)

and consequently β₁ = b − z₁ and β₂ = b − z₂, which are the parameters needed to check whether the point (0, 1) lies inside or outside the EJCR.
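The Appendix A calculation translates directly into code. The sketch below applies Eqs. (9)–(11) to the trigonelline regression of Section 6, fitting a and b from Table 2 itself, so the intermediate values may differ somewhat from the rounded ones quoted in the text; the inside/outside conclusion for the point (0, 1) is what matters.

```python
import math

# Table 2 data (Section 6)
x_added = [5, 10, 15, 20, 25, 30]
x_found = [5.02, 9.89, 14.94, 19.81, 25.30, 30.22]

# Ordinary least-squares fit x_found = a + b*x_added
n = len(x_added)
mx, my = sum(x_added) / n, sum(x_found) / n
sxx = sum((x - mx) ** 2 for x in x_added)
b = sum((x - mx) * (y - my) for x, y in zip(x_added, x_found)) / sxx
a = my - b * mx
s2 = sum((y - (a + b * x)) ** 2
         for x, y in zip(x_added, x_found)) / (n - 2)  # regression variance

F = 6.94  # F(2, 4) at the 95% confidence level, as in the text

# Coefficients of Eq. (9)/(10): L*z**2 + M*z + N = 0, with z = b - beta
L = sum(x ** 2 for x in x_added)
M = 2 * a * sum(x_added)
N = n * a ** 2 - 2 * s2 * F

disc = math.sqrt(M ** 2 - 4 * L * N)
z1, z2 = (-M + disc) / (2 * L), (-M - disc) / (2 * L)   # Eqs. (10), (11)
beta1, beta2 = b - z1, b - z2

# (0, 1) lies inside the EJCR when beta1 < 1 < beta2
print(beta1 < 1 < beta2)
```

As in the worked example, the test confirms that (0, 1) lies inside the EJCR, so the recovery can be taken as 100%.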

[31] K.S. Booksh, B.R. Kowalski, Anal. Chem. 66 (1994) 782A–791A.
[32] L. Huber, LC-GC Int. 11 (1998) 96–105.
[33] United States Pharmacopeia XXIII, National Formulary XVIII, The United States Pharmacopeial Convention, Rockville, MD, 1995, pp. 1610–1612.
[34] E. Prichard, Selecting the method, in: Quality in the Analytical Chemistry Laboratory, ACOL Series, ch. 3, Wiley, Chichester, UK, 1995, pp. 67–101.
[35] J. Mandel, F.J. Linnig, Anal. Chem. 29 (1957) 743–749.
[36] J.C. Miller, J.N. Miller, Errors in instrumental analysis; regression and correlation, in: Statistics for Analytical Chemistry, 3rd ed., ch. 5, Prentice Hall, Chichester, UK, 1993, pp. 101–139.
[37] W.P. Gardiner, Statistical Analysis Methods for Chemists, The Royal Society of Chemistry, Cambridge, 1997, pp. 182–185.
[38] A.G. González, Anal. Chim. Acta 360 (1998) 227–241.
[39] E. Morgan, Chemometrics: Experimental Design, ACOL Series, Wiley, Chichester, UK, 1991, pp. 126–128.
[40] M. Meloun, J. Militky, M. Forina, Chemometrics for Analytical Chemistry, vol. 2, Ellis Horwood, London, 1994, pp. 64–69.
[41] J.S. Garden, D.G. Mitchell, W.N. Mills, Anal. Chem. 52 (1980) 2310–2315.
[42] M. Davidian, P.D. Haaland, Chemom. Intell. Lab. Syst. 9 (1990) 231–248.
[43] P.C. Meier, R.E. Zünd, Statistical Methods in Analytical Chemistry, Wiley, New York, 1993, pp. 92–94.
[44] J.N. Miller, Analyst 118 (1993) 455–461.
[45] Analytical Methods Committee, Analyst 119 (1994) 2363–2366.
[46] L.M. Schwartz, Anal. Chem. 49 (1977) 2062–2068.
[47] L.M. Schwartz, Anal. Chem. 51 (1979) 723–727.
[48] Y. Lacroix, Analyse chimique, interprétation des résultats par le calcul statistique, Masson et Cie, Paris, 1962, pp. 31–33.
[49] K. Doerfel, Statistik in der analytischen Chemie, 4th ed., VCH, Weinheim, 1987, pp. 137–155.
[50] R.J. Tallarida, R.B. Murray, Manual of Pharmacologic Calculations with Computer Programs, 2nd ed., Springer, New York, 1987, p. 16.
[51] P.D. Lark, Anal. Chem. 26 (1954) 1712–1725.
[52] H. Working, H. Hotelling, Proc. J. Am. Stat. Assoc. 24 (1929) 73–85.
[53] J.S. Hunter, J. Assoc. Off. Anal. Chem. 64 (1981) 574–583.
[54] K.A. Brownlee, Statistical Theory and Methodology in Science and Engineering, Wiley, New York, 1965, pp. 362–366.
[55] AOAC, Peer Verified Methods Program, Manual on Policies and Procedures, Arlington, VA, November 1993.
[56] M.J. Martín, F. Pablos, M.A. Bello, A.G. González, Fresenius J. Anal. Chem. 357 (1997) 357–358.
