
Least Square Method

The least squares method determines the curve that best describes the relationship between expected and observed sets of data by minimizing the sum of the squares of the deviations between observed and expected values. It is a statistical method used to determine a line of best fit: a "square" is obtained by squaring the distance between a data point and the regression line, and the method limits the total distance between the fitted function and the data points it is trying to explain. It is used in regression analysis, often in nonlinear regression modeling, in which a curve is fitted to a set of data.

The least squares approach is a popular method for determining regression equations. Instead of trying to solve an equation exactly, it produces a close approximation (which, under common error assumptions, coincides with the maximum-likelihood estimate). Modeling methods often used when fitting a function to data include the straight-line, polynomial, logarithmic, and Gaussian methods.

The method of least squares is a standard approach to the approximate solution of overdetermined systems, i.e., sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation. The most important application is in data fitting. The best fit in the least-squares sense minimizes the sum of squared residuals, a residual being the difference between an observed value and the fitted value provided by a model. When the problem has substantial uncertainties in the independent variable (the x variable), simple regression and least squares methods run into difficulty; in such cases, the methodology for fitting errors-in-variables models may be considered instead of that for least squares.
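As a minimal sketch of the idea, the following pure-Python example fits a straight line by the standard closed-form least-squares formulas and shows that the fitted line minimizes the sum of squared residuals. The data points are hypothetical and chosen only for illustration.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = b0 + b1*x: minimizes the sum of squared residuals."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Closed-form solution: slope from centered cross- and auto-sums.
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sxy / sxx        # slope
    b0 = my - b1 * mx     # intercept
    return b0, b1

def sum_squared_residuals(xs, ys, b0, b1):
    """S = sum of (observed - predicted)^2 over all data points."""
    return sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))

# Hypothetical noisy observations scattered around y = 1 + 2x.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
b0, b1 = fit_line(xs, ys)
```

Nudging the fitted slope in either direction can only increase the residual sum, which is what "least squares" means in practice.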
Least squares problems fall into two categories: linear (ordinary) least squares and non-linear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. A closed-form solution (or closed-form expression) is any formula that can be evaluated in a finite number of standard operations. The non-linear problem has no closed-form solution and is usually solved by iterative refinement; at each iteration the system is approximated by a linear one, so the core calculation is similar in both cases. Least squares corresponds to the maximum likelihood criterion if the experimental errors have a normal distribution, and can also be derived as a method of moments estimator. The following discussion is mostly presented in terms of linear functions, but the use of least squares is valid and practical for more general families of functions. Also, by iteratively applying a local quadratic approximation to the likelihood (through the Fisher information), the least-squares method may be used to fit a generalized linear model. For the topic of approximating a function by a sum of others using an objective function based on squared distances, see least squares (function approximation).

The objective consists of adjusting the parameters of a model function to best fit a data set. A simple data set consists of n points (data pairs) (x_i, y_i), i = 1, ..., n, where x_i is an independent variable and y_i is a dependent variable whose value is found by observation. The model function has the form f(x, β), where the m adjustable parameters are held in the vector β. The goal is to find the parameter values for the model which "best" fit the data. The least squares method finds its optimum when the sum, S, of squared residuals

S = Σ r_i²,  i = 1, ..., n,

is a minimum. A residual is defined as the difference between the actual value of the dependent variable and the value predicted by the model: r_i = y_i − f(x_i, β). An example of a model is that of the straight line in two dimensions. Denoting the intercept as β₀ and the slope as β₁, the model function is given by f(x, β) = β₀ + β₁x. See linear least squares for a fully worked out example of this model. A data point may consist of more than one independent variable. For example, when fitting a plane to a set of height measurements, the plane is a function of two independent variables, say x and z. In the most general case there may be one or more independent variables and one or more dependent variables at each data point.

Adjusting Level Nets

When a level survey system covers a large area, you, in turn, adjust the interconnecting network in the whole system. Adjustment of an interconnecting network of level circuits consists of adjusting, in turn, each separate figure in the net, with the adjusted values for each circuit used in the adjustment of adjacent circuits. This process is repeated for as many cycles as necessary to balance the values for the whole net. Within each circuit the error of closure is normally distributed to the various sides in proportion to their lengths. Figure 7-5 represents a level net made up of circuits BCDEB, AEDA, and EABE. Along each side of each circuit is shown the length of the side in miles and the observed difference in elevation in feet between terminal BMs. The difference in elevation (plus or minus) is in the direction indicated by the arrows. Within each circuit is shown its total length (L) and the error of closure (Ec), determined by summing the differences in elevation in a clockwise direction. Figure 7-6 shows the computations required to balance the net. The circuits, sides, distances (expressed in miles and as percentages of the total), and differences in elevation (DE) are listed.
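The error of closure is simply the clockwise sum of the observed differences in elevation around a circuit, which would be zero for error-free observations. A small sketch of that check (the side values here are hypothetical — only the +10.94 ft and +5.23 ft figures appear in the text; the other two are invented for illustration):

```python
def error_of_closure(diffs):
    """Sum the observed differences in elevation (feet) around a closed
    circuit, taken in a consistent (e.g. clockwise) direction.
    A perfect circuit would sum to zero; anything left over is Ec."""
    return sum(diffs)

# Hypothetical four-sided circuit, DEs in feet, clockwise.
circuit = [+10.94, -3.20, -12.57, +5.23]
Ec = error_of_closure(circuit)
```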

Adjustment of level nets. For circuit BCDEB, the error of closure is 0.40 ft. This is distributed among the lines in proportion to their lengths.

Thus, for the line BC, the correction is the error of closure, sign reversed, prorated by BC's share of the circuit length: correction = −Ec × (length of BC ÷ L) = +0.07 ft. (Notice that the sign is opposite to that of the error of closure.) The correction of +0.07 ft is entered on the first line of the column headed CORR and is added to the difference in elevation (+10.94 + 0.07 = +11.01). That sum is entered on the first line under the heading CORR DE (corrected difference in elevation). The same procedure is followed for the remaining lines CD, DE, and EB of circuit BCDEB. The sum of the corrections must equal the error of closure with the opposite sign, and the algebraic sum of the corrected differences in elevation must equal zero. The lines in circuit AEDA are corrected in the same manner as those of BCDEB, except that the corrected value of ED (+27.08 instead of +27.15) is used. The lines of EABE are corrected using the corrected values of EA (+17.97 instead of +17.91) and BE (+5.13 instead of +5.23). In the column Cycle II, the procedure of Cycle I is repeated. Always list the latest corrected value from previously adjusted circuits before computing the new error of closure. The cycles are continued until the corrections become zero. The sequence in which the circuits are taken is immaterial as long as they are repeated in the same order for each cycle. Computations may be based on corrections rather than on differences in elevation.
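One cycle of the circuit adjustment described above can be sketched as follows. The side lengths and most DE values are hypothetical (Figure 7-5's actual figures are not reproduced in this text); the point is the invariant the text states — the corrections sum to the error of closure with opposite sign, and the corrected DEs close to zero.

```python
def adjust_circuit(lengths, diffs):
    """One adjustment cycle for a single level circuit.

    lengths: side lengths (miles), clockwise.
    diffs:   observed differences in elevation (feet), clockwise.
    Distributes the error of closure among the sides in proportion to
    their lengths; each correction carries the opposite sign of Ec.
    """
    Ec = sum(diffs)                 # error of closure
    L = sum(lengths)                # total circuit length
    corrections = [-Ec * (l / L) for l in lengths]
    corrected = [d + c for d, c in zip(diffs, corrections)]
    return corrections, corrected

# Hypothetical circuit: four sides, lengths in miles, DEs in feet.
lengths = [3.5, 2.5, 2.0, 2.0]
diffs = [+10.94, -3.20, -12.57, +5.23]
corr, adj = adjust_circuit(lengths, diffs)
```

In a full net adjustment this routine would be applied circuit by circuit, carrying each circuit's corrected values into its neighbors, and the cycles repeated until the corrections vanish.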
