International Journal of Navigation and Port Research, Vol. 28, No. 7, pp. 629-634, 2004 (ISSN 1598-7701)

Charted Depth Interpolation: Neuron Network Approaches

Shi Chaojian*
Shanghai Maritime University
* Corresponding Author: Shi Chaojian, cjshi@shmtu.edu.cn

Abstract: Continuous depth data are often required in applications of both onboard systems and maritime simulation, but the data available are usually discrete and irregularly distributed. Based on the neuron network technique, methods of interpolating the charted depth are suggested in this paper. Two algorithms, based on Levenberg-Marquardt backpropagation and on radial-basis function networks respectively, are investigated. A dynamic neuron network system is developed which satisfies both real-time and mass-processing applications. Using a hyperbolic paraboloid and a typical chart area, the effectiveness of the algorithms is tested and an error analysis is presented. Special processes in practical applications, such as partition of larger areas, normalization and selection of depth contour data, are also illustrated.

Key words: charted depth, neuron network, function approximation, spatial interpolation

1. Introduction

In practical applications such as geographic information systems (Wu, 2002), maritime simulation, ship automation, and harbor and waterway design (Shi, 2003; Kiso, 2002), we usually need continuous water depth data. Because of restricted observation measures and limited information resources, the depth data available are usually discrete and irregularly distributed, such as those on a paper chart or in an ECDIS database (IHO, 1996). A systematic approach to interpolation is therefore required for practical purposes. This kind of problem, known as spatial interpolation, can be addressed using various methods, including inverse distance weighting, interpolating polynomials, splines, power and Fourier series fitting, and others (Cressie, 1968). Some of these methods lack the desired accuracy and some are hard to implement; more complicated methods like kriging perform well but need much effort and computer time (Zimmerman, 1999). Neuron networks behave very well in nonlinear function approximation in a low-dimensional environment and are worth investigating.

2. Neuron Network Approach

2.1 Function Approximation

Let the surface of the sea bottom be denoted by z = f(x, y), where x and y are the horizontal coordinates and z is the water depth. Exploiting the superior performance of neuron networks in approximating nonlinear functions, f(x, y) can be evaluated effectively by a properly designed network architecture.

The universal approximation theorem for a nonlinear input-output mapping can be applied to the problem. Let $\varphi(\cdot)$ be a nonconstant, bounded, and monotone increasing continuous function, and let $I_{m_0}$ denote the $m_0$-dimensional unit hypercube $[0, 1]^{m_0}$. The space of continuous functions on $I_{m_0}$ is denoted by $C(I_{m_0})$. Then, given any function $f \in C(I_{m_0})$ and $\varepsilon > 0$, there exist an integer $m_1$ and sets of real constants $\alpha_i$, $b_i$ and $w_{ij}$, where $i = 1, \dots, m_1$ and $j = 1, \dots, m_0$, such that we may define

$$F(x_1, \dots, x_{m_0}) = \sum_{i=1}^{m_1} \alpha_i \, \varphi\Big( \sum_{j=1}^{m_0} w_{ij} x_j + b_i \Big)$$

as an approximate realization of the function $f(\cdot)$; that is,

$$\big| F(x_1, \dots, x_{m_0}) - f(x_1, \dots, x_{m_0}) \big| < \varepsilon$$

for all $x_1, x_2, \dots, x_{m_0}$ that lie in the input space.
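For illustration (this sketch is not from the paper), the following Python code fits a network of exactly the form of $F$ above, with a logistic-sigmoid hidden layer and a linear output, to scattered depth samples. It uses SciPy's Levenberg-Marquardt least-squares solver in the spirit of the LMBP training discussed below; the synthetic surface, the sample count and the hidden-layer size are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Scattered stand-in "soundings": inputs normalized to the unit square,
# depths taken from a smooth synthetic surface (an assumption for demo).
X = rng.uniform(0.0, 1.0, size=(200, 2))
d = np.sin(2.0 * X[:, 0]) * np.cos(2.0 * X[:, 1])

M1 = 10  # number of hidden neurons (illustrative)

def unpack(p):
    W = p[:2 * M1].reshape(M1, 2)   # hidden-layer weights w_ij
    b = p[2 * M1:3 * M1]            # hidden-layer biases  b_i
    a = p[3 * M1:4 * M1]            # output weights       alpha_i
    c = p[-1]                       # output bias
    return W, b, a, c

def forward(p, X):
    W, b, a, c = unpack(p)
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))  # logsig hidden layer
    return H @ a + c                          # purelin output

def residuals(p):
    return forward(p, X) - d

p0 = rng.normal(scale=0.5, size=4 * M1 + 1)
fit = least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt

print("RMS training error:", np.sqrt(np.mean(fit.fun ** 2)))
```

Unlike the RBF interpolant of Section 2.2, this fitted surface will generally not pass exactly through the training points; it minimizes the squared error over them.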
It can be accepted that the surface of the sea bottom meets the requirement of a continuous function, so that the normalized surface function $f \in C(I_2)$. Selecting logsig as the transfer function of the neurons meets the need for a nonconstant, bounded, and monotone increasing continuous function. To approximate the sea bottom surface, therefore, we can employ the network architecture shown in Fig. 1: the transfer function of the hidden layer is logsig, and that of the output layer is purelin. The free parameter vector of the network comprises the weights $w_{ij}$, the biases $b_i$ and the output weights $\alpha_i$.

Fig. 1 Function approximation BP network

The charted depth data in a proper area can be used as the training set. The network is trained by the backpropagation algorithm, and the free parameter vector is modified to approximate effectively the surface function f(x, y) of the sea bottom.

The standard backpropagation algorithm is popular and easy to implement, but its converging speed is slow. Heuristic modifications such as the use of momentum or a variable learning rate show some improvement but are hardly satisfactory in practical applications. Levenberg-Marquardt backpropagation (LMBP) proves to be an effective algorithm; it appears to be the fastest neuron network training algorithm for a moderate number of network parameters (Hagan, 1996).

2.2 Function Interpolation

The performance of the backpropagation network approximation generally meets the requirements of practical application. But the approximated surface usually does not pass through the observed data, which are used as the training data points, and errors exist over the known data points. In special applications that require accurate values over the observed data points, the radial-basis function (RBF) network is preferred.

The interpolation problem may be stated as follows: given a set of $N$ different points $\{x_i \in \mathbb{R}^{m_0} \mid i = 1, 2, \dots, N\}$ and a corresponding set of $N$ real numbers $\{d_i \in \mathbb{R}^1 \mid i = 1, 2, \dots, N\}$, find a function $F: \mathbb{R}^{m_0} \to \mathbb{R}^1$ that satisfies the interpolation condition

$$F(x_i) = d_i, \quad i = 1, 2, \dots, N \qquad (1)$$

Using the radial-basis function technique, choose a function $F$ of the following form:

$$F(x) = \sum_{i=1}^{N} w_i \, \varphi(\lVert x - x_i \rVert) \qquad (2)$$

where $\{\varphi(\lVert x - x_i \rVert) \mid i = 1, 2, \dots, N\}$ is a set of $N$ nonlinear functions, known as radial-basis functions, and $\lVert \cdot \rVert$ denotes a norm. The known data points $x_i \in \mathbb{R}^{m_0}$, $i = 1, 2, \dots, N$, are taken as the centers of the radial-basis functions.

From (1) and (2) we can write

$$\Phi w = d \qquad (3)$$

where $\Phi = [\varphi_{ji}]$ is the $N$-by-$N$ matrix with elements $\varphi_{ji} = \varphi(\lVert x_j - x_i \rVert)$, $w = [w_1, w_2, \dots, w_N]^T$ and $d = [d_1, d_2, \dots, d_N]^T$.

By Micchelli's theorem, if $\{x_i\}_{i=1}^{N} \subset \mathbb{R}^{m_0}$ is a set of distinct points, then the $N$-by-$N$ interpolation matrix $\Phi$ is nonsingular (Micchelli, 1986). Therefore we can write

$$w = \Phi^{-1} d \qquad (4)$$

A large class of radial-basis functions is covered by Micchelli's theorem. The following ones are commonly applied (Haykin, 1999):

Multiquadrics: $\varphi(r) = (r^2 + c^2)^{1/2}$, $c > 0$
Inverse multiquadrics: $\varphi(r) = 1 / (r^2 + c^2)^{1/2}$, $c > 0$
Gaussian: $\varphi(r) = \exp(-r^2 / 2\sigma^2)$, $\sigma > 0$

Fig. 2 Radial-basis function network

The architecture of the radial-basis function network for function interpolation is shown in Fig. 2. The number of neurons in the hidden layer is identical to the number of data points in the training set, $N$. The weight vector $w$ in (2) is the free parameter vector: the output of the network is given by (2), and the free parameters are determined by (4).

3. Test and Implementation Analysis

3.1 Curve Function Test

In order to verify the performance of the network for interpolation, tests have been performed on a hyperbolic paraboloid. The result of the interpolation is shown in Fig. 3: the surface is generated by the radial-basis function network, and the dark points represent the training data.

Table 1 Interpolated values from the tested function
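As a minimal sketch of equations (2)-(4), not code from the paper, the following Python snippet builds the multiquadric interpolation matrix, solves $\Phi w = d$, and evaluates the interpolant; the point set, depths and shape parameter c are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# N scattered data points (normalized coordinates) with known depths d.
Xc = rng.uniform(0.0, 1.0, size=(50, 2))
d = np.sin(2.0 * Xc[:, 0]) * np.cos(2.0 * Xc[:, 1])

c = 0.3  # multiquadric shape parameter (illustrative)

def phi(r):
    return np.sqrt(r ** 2 + c ** 2)  # multiquadric radial-basis function

def dist(A, B):
    # Pairwise Euclidean distances ||a_j - b_i||
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)

Phi = phi(dist(Xc, Xc))        # N-by-N interpolation matrix, eq. (3)
w = np.linalg.solve(Phi, d)    # w = Phi^{-1} d, eq. (4)

def F(Xq):
    # RBF interpolant, eq. (2)
    return phi(dist(Xq, Xc)) @ w

# Interpolation condition (1): the surface passes through every data point.
print("max error at data points:", np.max(np.abs(F(Xc) - d)))
```

Solving the dense $N$-by-$N$ system costs $O(N^3)$ time and $O(N^2)$ memory, which is one practical reason for the partitioning of larger chart areas that the paper mentions.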
[Table 1 data: paired columns of network output and error at sample test points]
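To mirror the curve-function test, the RBF interpolant can also be checked away from the training points on a hyperbolic paraboloid. The surface $z = x^2 - y^2$, its domain and the sample counts below are assumed stand-ins; the paper's exact test parameters are not reproduced above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed stand-in for the paper's hyperbolic paraboloid test surface.
def f(x, y):
    return x ** 2 - y ** 2

def phi(r, c=0.5):
    return np.sqrt(r ** 2 + c ** 2)  # multiquadric basis

def dist(A, B):
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)

Xc = rng.uniform(-1.0, 1.0, size=(100, 2))       # training points
d = f(Xc[:, 0], Xc[:, 1])
w = np.linalg.solve(phi(dist(Xc, Xc)), d)        # fit the interpolant

Xq = rng.uniform(-1.0, 1.0, size=(500, 2))       # held-out test points
err = phi(dist(Xq, Xc)) @ w - f(Xq[:, 0], Xq[:, 1])
print("max abs error off the data points:", np.max(np.abs(err)))
```

The error is exactly zero at the training points and small but nonzero elsewhere, which is the kind of behavior Table 1 reports.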