
Review: Normal vs Optimal Average

Normal average (equal weights):

$$ \bar X = \frac{1}{N} \sum_i X_i , \qquad \sigma^2(\bar X) = \frac{1}{N^2} \sum_i \sigma_i^2 . $$

Optimal average (1/variance weights):

$$ \bar X = \frac{\sum_i X_i / \sigma_i^2}{\sum_i 1/\sigma_i^2} , \qquad \sigma^2(\bar X) = \frac{1}{\sum_i 1/\sigma_i^2} . $$

Adding up Noisy Data

$A$ = area under the curve, e.g. flux of the star. How to measure $A$? Simple method: add up the data.
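As a quick numerical check of the two averages, here is a sketch in Python with numpy (the true value, error bars, and seed are invented for illustration, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup: five measurements of the same quantity, two with much
# larger error bars (all values are made up for illustration).
mu_true = 10.0
sigma = np.array([0.5, 0.5, 0.5, 5.0, 5.0])
X = rng.normal(mu_true, sigma)

# Normal average: equal weights.
xbar_normal = X.mean()
var_normal = (sigma**2).sum() / len(X)**2

# Optimal average: 1/variance weights.
w = 1.0 / sigma**2
xbar_opt = (w * X).sum() / w.sum()
var_opt = 1.0 / w.sum()

print(f"normal : {xbar_normal:6.2f} +/- {np.sqrt(var_normal):.2f}")
print(f"optimal: {xbar_opt:6.2f} +/- {np.sqrt(var_opt):.2f}")
```

The optimal error bar is far smaller because the two noisy points are down-weighted rather than allowed to dominate the variance.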

$$ \hat A \equiv \sum_{i=1}^{N} X_i \quad\Rightarrow\quad \langle \hat A \rangle = A \sum_{i=1}^{N} P_i , \qquad \sigma^2[\hat A] = \sum_{i=1}^{N} \sigma_i^2 , $$

where $P_i$ = fraction of photons in pixel $i$, with $\sum_i P_i = 1$.

Biased if $N$ too small. Noisy if $N$ too large. Dilemma: how many data points to include?

Equal weights: poor data degrades the result, so it is better to ignore bad data, but information is lost. Optimal weights: new data always improves the result. Use ALL the data, but with appropriate 1/variance weights. Must have good error bars.

Can we do better? Yes, if the pattern $P_i$ is known.

Optimal Scaling of a Pattern

Scale the pattern $P_i$ by a factor $A$ to fit the data. Each data point gives an unbiased estimate of $A$:

$$ \hat A_i \equiv \frac{X_i}{P_i} , \qquad \text{unbiased: } \langle \hat A_i \rangle = A , \qquad \mathrm{Var}[\hat A_i] = \left( \frac{\sigma_i}{P_i} \right)^2 . $$

Optimal average of the unbiased estimates, with weights $w_i = 1/\mathrm{Var}[\hat A_i] = (P_i/\sigma_i)^2$:

$$ \hat A = \frac{\sum_i w_i \hat A_i}{\sum_i w_i} = \frac{\sum_i (P_i/\sigma_i)^2 \, (X_i/P_i)}{\sum_i (P_i/\sigma_i)^2} = \frac{\sum_i X_i P_i / \sigma_i^2}{\sum_i P_i^2 / \sigma_i^2} . $$

Sum the Data vs Optimal Scaling

Data: $X_i \pm \sigma_i$.  Model: $\langle X_i \rangle = A\,P_i$.  Pattern: $P_i$.

Sum up the data:

$$ \hat A \equiv \sum_{i=1}^{N} X_i , \qquad \langle \hat A \rangle = A \sum_{i=1}^{N} P_i , \qquad \sigma^2[\hat A] = \sum_{i=1}^{N} \sigma_i^2 . $$

Biased if $N$ too small. Noisy if $N$ too large.

Optimal scaling of the known pattern:

$$ \hat A = \frac{\sum_i X_i P_i / \sigma_i^2}{\sum_i P_i^2 / \sigma_i^2} , \qquad \mathrm{Var}[\hat A] = \frac{\sum_i (P_i/\sigma_i^2)^2 \, \mathrm{Var}[X_i]}{\left( \sum_i P_i^2 / \sigma_i^2 \right)^2} = \frac{1}{\sum_i P_i^2 / \sigma_i^2} . $$

No bias. Result improves with N.
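The bias/variance trade-off of the plain sum versus optimal scaling can be compared analytically, without any random data. A sketch in Python (the Gaussian profile width, pixel grid, and noise level are made-up illustration values):

```python
import numpy as np

# Hypothetical Gaussian star profile on 41 pixels.
x = np.arange(-20, 21)
P = np.exp(-0.5 * (x / 3.0)**2)
P /= P.sum()                 # normalise so that sum_i P_i = 1
sigma = 0.01                 # same error bar on every pixel

results = {}
order = np.argsort(P)[::-1]  # include the brightest pixels first
for N in (5, 15, 41):
    idx = order[:N]
    bias_frac = 1.0 - P[idx].sum()                 # flux fraction missed by the sum
    var_sum = N * sigma**2                         # variance of the plain sum
    var_opt = 1.0 / (P[idx]**2 / sigma**2).sum()   # variance of optimal scaling
    results[N] = (bias_frac, var_sum, var_opt)
    print(N, results[N])
```

As N grows, the sum trades bias for noise, while the optimal-scaling variance only ever decreases (each extra pixel adds a non-negative term to the denominator).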

The Golden Rule of Optimal Data Analysis: Optimal Scaling

Data: $X_i \pm \sigma_i$.  Model: $\langle X_i \rangle = A\,P_i$.  Optimal scaling:

$$ \hat A = \frac{\sum_i X_i P_i / \sigma_i^2}{\sum_i P_i^2 / \sigma_i^2} , \qquad \mathrm{Var}[\hat A] = \frac{1}{\sum_i P_i^2 / \sigma_i^2} . $$

Memorise this result. Know how to derive it.

Optimal Average is a special case of Optimal Scaling, with pattern $P_i = 1$.

Fitting Models by minimising $\chi^2$

Data: $X_i \pm \sigma_i$, $i = 1 \ldots N$.  Parameters: $\alpha_k$, $k = 1 \ldots M$.  Model: $\langle X_i \rangle = \mu_i(\alpha)$.

Error: $\epsilon_i \equiv X_i - \mu_i(\alpha)$.  Normalised error: $\eta_i \equiv \dfrac{\epsilon_i}{\sigma_i} = \dfrac{X_i - \mu_i(\alpha)}{\sigma_i}$.

Badness-of-fit statistic:

$$ \chi^2(X, \sigma, \alpha) \equiv \sum_{i=1}^{N} \eta_i^2 = \sum_{i=1}^{N} \left[ \frac{X_i - \mu_i(\alpha)}{\sigma_i} \right]^2 . $$

Best-fit parameters $\hat\alpha$ minimise $\chi^2$.
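The badness-of-fit statistic is straightforward to evaluate in code. A minimal Python sketch (the data values are invented; the grid search stands in for a proper minimiser):

```python
import numpy as np

def chi2(X, sigma, mu_model):
    """Badness-of-fit: sum of squared normalised errors (X_i - mu_i)/sigma_i."""
    eta = (X - mu_model) / sigma
    return float((eta**2).sum())

# Toy data (made-up values) fit by a constant model mu_i(alpha) = alpha.
X = np.array([9.8, 10.3, 9.9])
sigma = np.array([0.2, 0.3, 0.2])

# Evaluate chi^2 on a parameter grid; the best fit minimises chi^2.
alphas = np.linspace(9.0, 11.0, 2001)
chis = np.array([chi2(X, sigma, a) for a in alphas])
alpha_best = alphas[chis.argmin()]
print(alpha_best)
```

The grid minimum lands (to within the grid spacing) on the optimal average, anticipating the worked example below.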


Example: Estimate $\langle X \rangle$ by $\chi^2$ Fitting

Model: $\langle X_i \rangle = \mu$.  Badness-of-fit statistic:

$$ \chi^2 = \sum_i \left( \frac{X_i - \mu}{\sigma_i} \right)^2 . $$

Minimise $\chi^2$:

$$ 0 = \frac{\partial \chi^2}{\partial \mu} = -2 \sum_i \frac{X_i - \mu}{\sigma_i^2} \quad\Rightarrow\quad \sum_i \frac{X_i}{\sigma_i^2} = \mu \sum_i \frac{1}{\sigma_i^2} \quad\Rightarrow\quad \hat\mu = \frac{\sum_i X_i / \sigma_i^2}{\sum_i 1/\sigma_i^2} = \bar X . $$

The Optimal Average minimises $\chi^2$!

Parameter Error Bars: $1\sigma$ at $\Delta\chi^2 = 1$

Expand $\chi^2$ about its minimum, using $\dfrac{\partial^2 \chi^2}{\partial \mu^2} = 2 \sum_i \dfrac{1}{\sigma_i^2}$:

$$ \chi^2 = \chi^2_{\min} + \frac{1}{2} \frac{\partial^2 \chi^2}{\partial \mu^2} (\Delta\mu)^2 + \ldots = \chi^2_{\min} + \left( \sum_i \frac{1}{\sigma_i^2} \right) (\Delta\mu)^2 + \ldots = \chi^2_{\min} + \left( \frac{\Delta\mu}{\sigma(\hat\mu)} \right)^2 + \ldots $$

Must have $\sigma^2(\hat\mu) = \sigma^2(\bar X) = \dfrac{1}{\sum_i 1/\sigma_i^2}$, so $\Delta\chi^2 = 1$ at $\Delta\mu = \pm\sigma(\hat\mu)$.

"Fill with water to a depth of 1": the $1\sigma$ error bar spans the region of the $\chi^2$ parabola where $\chi^2 \le \chi^2_{\min} + 1$.
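The $\Delta\chi^2 = 1$ rule can be verified directly, since for this model $\chi^2$ is exactly quadratic in $\mu$. A Python sketch with invented data values:

```python
import numpy as np

# Toy data with known error bars (made-up values for illustration).
X = np.array([9.8, 10.3, 9.9, 10.1])
sigma = np.array([0.2, 0.3, 0.2, 0.4])

w = 1.0 / sigma**2
mu_hat = (w * X).sum() / w.sum()      # optimal average = chi^2 minimum
sig_mu = np.sqrt(1.0 / w.sum())       # 1-sigma error bar

def chi2(mu):
    return float((((X - mu) / sigma)**2).sum())

chi2_min = chi2(mu_hat)
# chi^2 is exactly quadratic in mu here, so Delta chi^2 = 1
# exactly at mu_hat +/- sig_mu (up to rounding).
print(chi2(mu_hat + sig_mu) - chi2_min)
print(chi2(mu_hat - sig_mu) - chi2_min)
```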

Parameter Error Bars: $1\sigma$ from $\chi^2$ Curvature

Near the minimum,

$$ \chi^2(\alpha) \approx \chi^2_{\min} + \left( \frac{\alpha - \hat\alpha}{\sigma(\hat\alpha)} \right)^2 = \chi^2_{\min} + \frac{1}{2} \left. \frac{\partial^2 \chi^2}{\partial \alpha^2} \right|_{\hat\alpha} (\alpha - \hat\alpha)^2 , $$

so $\Delta\chi^2 = 1$ gives

$$ \sigma^{-2}(\hat\alpha) = \frac{1}{2} \left. \frac{\partial^2 \chi^2}{\partial \alpha^2} \right|_{\alpha = \hat\alpha} . $$

Test Understanding

[Figure: two $\chi^2(\alpha)$ curves, one from Dataset 1 and one from Dataset 2.]

Which dataset (1 or 2) gives the better estimate of the parameter $\alpha$? How would you combine these datasets?
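The curvature recipe also works numerically when no analytic second derivative is available. A sketch using a central finite difference on the same kind of toy data (values invented for illustration):

```python
import numpy as np

# Toy data (made-up values); constant model mu.
X = np.array([9.8, 10.3, 9.9])
sigma = np.array([0.2, 0.3, 0.2])

def chi2(mu):
    return float((((X - mu) / sigma)**2).sum())

w = 1.0 / sigma**2
mu_hat = (w * X).sum() / w.sum()

# Curvature of chi^2 at the minimum via a central finite difference,
# then sigma(mu) = [ (1/2) d^2 chi^2 / d mu^2 ]^(-1/2).
h = 1e-4
curv = (chi2(mu_hat + h) - 2.0 * chi2(mu_hat) + chi2(mu_hat - h)) / h**2
sig_from_curv = (0.5 * curv) ** -0.5

sig_analytic = (1.0 / w.sum()) ** 0.5
print(sig_from_curv, sig_analytic)
```

The two error bars agree because the exact curvature is $2 \sum_i 1/\sigma_i^2$, which the finite difference recovers to rounding accuracy.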

Scaling a Pattern by $\chi^2$ Minimisation

Model: $\mu_i = \langle X_i \rangle = A\,P_i$.  Badness-of-fit:

$$ \chi^2 = \sum_i \left( \frac{X_i - A P_i}{\sigma_i} \right)^2 . $$

Minimise:

$$ 0 = \frac{\partial \chi^2}{\partial A} = -2 \sum_i \frac{(X_i - A P_i) P_i}{\sigma_i^2} \quad\Rightarrow\quad \sum_i \frac{X_i P_i}{\sigma_i^2} = A \sum_i \frac{P_i^2}{\sigma_i^2} \quad\Rightarrow\quad \hat A = \frac{\sum_i X_i P_i / \sigma_i^2}{\sum_i P_i^2 / \sigma_i^2} . $$

Same result as Optimal Scaling. The curvature gives the same error bar:

$$ \frac{\partial^2 \chi^2}{\partial A^2} = 2 \sum_i \frac{P_i^2}{\sigma_i^2} \quad\Rightarrow\quad \sigma^2(\hat A) = \frac{1}{\sum_i P_i^2 / \sigma_i^2} . $$

Summary

Two 1-parameter models:
Estimating $\langle X \rangle$: $\mu_i = \langle X_i \rangle = \mu$.
Scaling a pattern: $\mu_i = \langle X_i \rangle = A\,P_i$.

Two equivalent methods:

1. Algebra of random variables: Optimal Average and Optimal Scaling:

$$ \bar X = \frac{\sum_i X_i / \sigma_i^2}{\sum_i 1/\sigma_i^2} , \quad \sigma^2(\bar X) = \frac{1}{\sum_i 1/\sigma_i^2} ; \qquad \hat A = \frac{\sum_i X_i P_i / \sigma_i^2}{\sum_i P_i^2 / \sigma_i^2} , \quad \sigma^2(\hat A) = \frac{1}{\sum_i P_i^2 / \sigma_i^2} . $$

2. Minimising $\chi^2$ gives the same results, with $1\sigma$ error bars from $\Delta\chi^2 = 1$, i.e. $\sigma^{-2}(\hat\alpha) = \frac{1}{2} \dfrac{\partial^2 \chi^2}{\partial \alpha^2}$.
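To close the loop, the optimal-scaling formula can be exercised on simulated data. A Python sketch (the pattern shape, amplitude, noise level, and seed are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical star profile: known pattern P_i, unknown amplitude A.
x = np.arange(-10, 11)
P = np.exp(-0.5 * (x / 2.5)**2)
P /= P.sum()
A_true = 500.0
sigma = np.full(x.size, 2.0)
X = rng.normal(A_true * P, sigma)

# Optimal scaling = chi^2 minimum for the model <X_i> = A P_i.
A_hat = (X * P / sigma**2).sum() / (P**2 / sigma**2).sum()
sig_A = np.sqrt(1.0 / (P**2 / sigma**2).sum())
print(f"A_hat = {A_hat:.1f} +/- {sig_A:.1f}")

# At the minimum, d(chi^2)/dA = -2 sum (X_i - A P_i) P_i / sigma_i^2 = 0.
grad_at_min = ((X - A_hat * P) * P / sigma**2).sum()
```

The vanishing gradient confirms that the closed-form estimate is indeed the $\chi^2$ minimum, with no iterative fitting required.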

