Introduction to Optimization and Economic Dispatch
Baosen Zhang

Perspective
• Conventional power system economics problems:
– Economic Dispatch (ED)
– Unit Commitment (UC)
– Optimal Power Flow (OPF)
• Assume a monopoly, vertically integrated utility
• Why?
– Conceptually simpler
– Variants exist for competitive electricity markets
– Comparison with a “benevolent monopolist”
© 2017 B. Zhang and University of Washington 2
Economic dispatch problem

[Figure: generating units A, B, C supplying a load L]

actual load − uncontrolled renewable generation = net load = controllable generation
• Thermal generating units
• Consider the running costs only
• Input/Output curve
– Fuel vs. electric power
• Fuel consumption measured by its energy content
• Upper and lower limits on the output of the generating unit

[Figure: input/output curve, fuel input vs. electric power output (MW); cost ($/h) vs. output (MW), showing the no-load cost and the limits Pmin and Pmax]
Incremental Cost Curve

• Incremental cost curve: Δ FuelCost vs. Δ Power (ΔF/ΔP)
• Derivative of the cost curve
• Expressed in $/MWh
• Incremental cost = cost of the next MWh [$/MWh]

[Figure: cost curve ($/h) vs. output (MW), and the corresponding incremental cost curve ($/MWh) vs. output (MW)]
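To make the definition concrete, here is a minimal sketch (the quadratic cost coefficients are hypothetical examples, not from the slides) computing a unit's incremental cost both analytically and as a finite-difference ΔF/ΔP:

```python
# Hedged sketch: hypothetical quadratic cost curve C(P) = a + b*P + c*P^2 in $/h.
a, b, c = 100.0, 8.0, 0.1  # example coefficients, not from the slides

def cost(p):
    # fuel cost in $/h at output p MW
    return a + b * p + c * p * p

def incremental_cost(p):
    # dC/dP in $/MWh (derivative of the cost curve)
    return b + 2 * c * p

# Finite-difference approximation of the incremental cost: ΔF/ΔP
p, dp = 150.0, 1e-6
approx = (cost(p + dp) - cost(p)) / dp

print(incremental_cost(p))   # 38.0 $/MWh
print(round(approx, 3))      # ≈ 38.0
```

For a quadratic cost curve the incremental cost is linear in P, which is why the incremental cost curves later in the example have the form b + c·P.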
Mathematical formulation

• Objective: Minimize
C = CA(PA) + CB(PB) + CC(PC)
• Constraints
– Load / Generation balance:
L = PA + PB + PC
– Unit constraints:
PAmin ≤ PA ≤ PAmax
PBmin ≤ PB ≤ PBmax
PCmin ≤ PC ≤ PCmax

This is an optimization problem
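The formulation can be written out directly in code. This is a rough sketch with hypothetical quadratic cost curves and limits (not given in the slides): it enumerates PA and PB on a 1 MW grid, sets PC = L − PA − PB from the balance constraint, and keeps the cheapest point that satisfies the unit limits:

```python
# Hedged sketch: brute-force search over the feasible set of the ED problem.
# Quadratic cost curves and limits are hypothetical examples, not from the slides.
def cost_A(p): return 8.0 * p + 0.010 * p * p    # $/h
def cost_B(p): return 7.0 * p + 0.012 * p * p
def cost_C(p): return 9.0 * p + 0.014 * p * p

limits = {"A": (50, 400), "B": (50, 500), "C": (50, 260)}
L = 600.0  # net load in MW (assumed value)

best = None
for pa in range(limits["A"][0], limits["A"][1] + 1):      # 1 MW grid
    for pb in range(limits["B"][0], limits["B"][1] + 1):
        pc = L - pa - pb                   # balance: L = PA + PB + PC
        if limits["C"][0] <= pc <= limits["C"][1]:
            c = cost_A(pa) + cost_B(pb) + cost_C(pc)
            if best is None or c < best[0]:
                best = (c, pa, pb, pc)

print(best)  # (minimum cost, PA, PB, PC) on the grid
```

Brute force is only a sanity check; the rest of the lecture develops the proper optimality conditions for solving this problem analytically.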
[Figure: a function f(x) with a maximum at x*, shown together with −f(x); df/dx > 0 to the left of x* and df/dx < 0 to the right]

If x = x* maximises f(x) then:
f(x) < f(x*) for x < x* ⇒ df/dx > 0 for x < x*
f(x) < f(x*) for x > x* ⇒ df/dx < 0 for x > x*
Necessary Condition for Optimality

[Figure: f(x) with a maximum at x*, where the tangent is horizontal]

If x = x* maximises f(x), then df/dx = 0 for x = x*

For what values of x is df/dx = 0?
In other words, for what values of x is the necessary condition for optimality satisfied?
[Figure: f(x) with stationary points A, B, C, D]

• A, B, C, D are stationary points
• A and D are maxima
• B is a minimum
• C is an inflexion point

For x = A and x = D, we have: d²f/dx² < 0
For x = B, we have: d²f/dx² > 0
For x = C, we have: d²f/dx² = 0

Feasible Set

[Figure: f(x) restricted to the feasible set xMIN ≤ x ≤ xMAX, which contains the stationary points A and D]

The values of the objective function outside the feasible set do not matter
Two-Dimensional Case

[Figure: surface f(x1, x2) with a maximum at (x1*, x2*), and the corresponding contour view]

∂f(x1, x2)/∂x1 = 0 at (x1*, x2*)
∂f(x1, x2)/∂x2 = 0 at (x1*, x2*)
Multi-Dimensional Case

At a maximum or minimum value of f(x1, x2, x3, …, xn) we must have:
∂f/∂x1 = 0
∂f/∂x2 = 0
⋮
∂f/∂xn = 0
Sufficient Conditions for Optimality

[Figure: surface f(x1, x2) with a saddle point]
Sufficient Conditions for Optimality

Calculate the Hessian matrix at the stationary point:

⎛ ∂²f/∂x1²    ∂²f/∂x1∂x2  …  ∂²f/∂x1∂xn ⎞
⎜ ∂²f/∂x2∂x1  ∂²f/∂x2²    …  ∂²f/∂x2∂xn ⎟
⎜     ⋮            ⋮               ⋮      ⎟
⎝ ∂²f/∂xn∂x1  ∂²f/∂xn∂x2  …  ∂²f/∂xn²   ⎠

Calculate the eigenvalues λ of this matrix A, i.e. the solutions of det(λI − A) = 0, where I is the identity matrix:

⎛ 1 0 0 ⎞
⎜ 0 1 0 ⎟
⎝ 0 0 1 ⎠

If all eigenvalues are positive, the stationary point is a minimum; if all are negative, a maximum; if some are positive and some negative, a saddle point.
Contours

A contour is the locus of all the points that give the same value to the objective function.

[Figure: contours in the (x1, x2) plane closing around a minimum or maximum]
Example 1

Minimise C = x1² + 4x2² − 2x1x2

Hessian matrix:
⎛ ∂²C/∂x1²    ∂²C/∂x1∂x2 ⎞   ⎛  2  −2 ⎞
⎝ ∂²C/∂x2∂x1  ∂²C/∂x2²   ⎠ = ⎝ −2   8 ⎠

[Figure: elliptical contours C = 1, 4, 9 around the minimum at C = 0]
Eigenvalues:
⎢ λ + 2    −2   ⎥
⎢  −2     λ − 6 ⎥ = 0 ⇒ λ² − 4λ − 16 = 0

⇒ λ = (4 + √80)/2 > 0  or  λ = (4 − √80)/2 < 0

One eigenvalue is positive and one is negative: the stationary point is a saddle point.
Example 2

[Figure: hyperbolic contours C = 0, 1, 4, 9 around a saddle point]
Minimise f(x1, x2) = 0.25x1² + x2² subject to ω(x1, x2) ≡ 5 − x1 − x2 = 0

[Figure: elliptical contours of f(x1, x2) with the constraint line 5 − x1 − x2 = 0; the constrained minimum lies on the line]
Example 2: Economic Dispatch

[Figure: generators G1 and G2 with outputs x1 and x2 supplying a load L]

Optimization problem:
Minimise C = a1 + a2 + b1x1² + b2x2²
Subject to: x1 + x2 = L
Solution by substitution

Minimise C = a1 + a2 + b1x1² + b2x2²
Subject to: x1 + x2 = L
⇒ x2 = L − x1
⇒ C = a1 + a2 + b1x1² + b2(L − x1)²

Unconstrained minimization:
dC/dx1 = 2b1x1 − 2b2(L − x1) = 0
⇒ x1 = b2L/(b1 + b2)  (⇒ x2 = b1L/(b1 + b2))

d²C/dx1² = 2b1 + 2b2 > 0 ⇒ minimum
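The closed-form substitution solution can be checked numerically; this sketch uses hypothetical values for a1, a2, b1, b2 and L (not from the slides):

```python
# Hedged sketch: check the substitution solution x1 = b2*L/(b1+b2) numerically.
# a1, a2, b1, b2 and L are hypothetical example values, not from the slides.
a1, a2, b1, b2, L = 10.0, 20.0, 0.01, 0.02, 900.0

def total_cost(x1):
    x2 = L - x1                     # substitute the balance constraint
    return a1 + a2 + b1 * x1 * x1 + b2 * x2 * x2

x1_star = b2 * L / (b1 + b2)        # closed-form stationary point
x2_star = b1 * L / (b1 + b2)

# dC/dx1 = 2*b1*x1 - 2*b2*(L - x1) should vanish at the optimum
slope = 2 * b1 * x1_star - 2 * b2 * (L - x1_star)
print(round(x1_star), round(x2_star), round(slope, 9))   # 600 300 0.0

# Nearby points must not be cheaper (second derivative 2*b1 + 2*b2 > 0)
assert total_cost(x1_star) <= min(total_cost(x1_star - 1), total_cost(x1_star + 1))
```

Note how the cheaper unit (the one with the smaller b coefficient) ends up carrying the larger share of the load.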
Solution by substitution

• Difficult
• Usually impossible when constraints are non-linear
• Provides little or no insight into the solution
➙ Solution using Lagrange multipliers
[Figure: contours f(x1, x2) = 0.25x1² + x2² (levels 5 and 6) with the constraint ω(x1, x2) = 5 − x1 − x2; the gradients ∇f and ∇ω are drawn at several points along the constraint]
Lagrange multipliers

• The solution must be on the constraint
• To reduce the value of f, we must move in a direction opposite to the gradient

[Figure: contours f(x1, x2) = 5 and 6 with the constraint ω(x1, x2); ∇f and ∇ω drawn along the constraint]

At the optimum, the gradient of the function is parallel to the gradient of the constraint.
Lagrange multipliers

At the optimum, we must have: ∇f ∥ ∇ω
Which can be expressed as: ∇f + λ∇ω = 0

In terms of the co-ordinates:
∂f/∂x1 + λ ∂ω/∂x1 = 0
∂f/∂x2 + λ ∂ω/∂x2 = 0

Define the Lagrangian:
ℒ(x1, x2, λ) = f(x1, x2) + λ ω(x1, x2)

The necessary conditions for optimality are then given by the partial derivatives of the Lagrangian:
∂ℒ(x1, x2, λ)/∂x1 = ∂f/∂x1 + λ ∂ω/∂x1 = 0
∂ℒ(x1, x2, λ)/∂x2 = ∂f/∂x2 + λ ∂ω/∂x2 = 0
∂ℒ(x1, x2, λ)/∂λ = ω(x1, x2) = 0
Example

Minimise f(x1, x2) = 0.25x1² + x2² subject to ω(x1, x2) ≡ 5 − x1 − x2 = 0

ℒ(x1, x2, λ) = 0.25x1² + x2² + λ(5 − x1 − x2)

∂ℒ/∂x1 ≡ 0.5x1 − λ = 0 ⇒ x1 = 2λ
∂ℒ/∂x2 ≡ 2x2 − λ = 0 ⇒ x2 = λ/2
∂ℒ/∂λ ≡ 5 − x1 − x2 = 0 ⇒ 5 − 2λ − λ/2 = 0

⇒ λ = 2
⇒ x1 = 4
⇒ x2 = 1
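A quick numerical check of this solution:

```python
# Sketch: verify that (x1, x2, λ) = (4, 1, 2) satisfies the stationarity
# conditions of L(x1, x2, λ) = 0.25*x1^2 + x2^2 + λ*(5 - x1 - x2).
x1, x2, lam = 4.0, 1.0, 2.0

dL_dx1 = 0.5 * x1 - lam          # ∂L/∂x1 = 0.5*x1 - λ
dL_dx2 = 2.0 * x2 - lam          # ∂L/∂x2 = 2*x2 - λ
dL_dlam = 5.0 - x1 - x2          # ∂L/∂λ  = ω(x1, x2)

print(dL_dx1, dL_dx2, dL_dlam)   # 0.0 0.0 0.0

def f(x1, x2):
    return 0.25 * x1 * x1 + x2 * x2

# Any feasible perturbation along the constraint x1 + x2 = 5 costs more
for eps in (-0.5, 0.5):
    assert f(x1 + eps, x2 - eps) > f(x1, x2)
```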
Example

Minimise f(x1, x2) = 0.25x1² + x2² subject to ω(x1, x2) ≡ 5 − x1 − x2 = 0

[Figure: contour f(x1, x2) = 5 with the constraint line; the minimum is at (x1, x2) = (4, 1)]
Important Note!

If the constraint is of the form ax1 + bx2 = L, it must be included in the Lagrangian as:
ℒ = f(x1, …, xn) + λ(L − ax1 − bx2)
and not as:
ℒ = f(x1, …, xn) + λ(ax1 + bx2)
ℒ(x1, x2, λ) = C1(x1) + C2(x2) + λ(L − x1 − x2)

∂ℒ/∂x1 ≡ dC1/dx1 − λ = 0
∂ℒ/∂x2 ≡ dC2/dx2 − λ = 0          ⇒ dC1/dx1 = dC2/dx2 = λ
∂ℒ/∂λ ≡ L − x1 − x2 = 0

Equal incremental cost solution
Incremental Cost

[Figure: cost curves C1(x1), C2(x2) and the corresponding incremental cost curves dC1/dx1, dC2/dx2]
Interpretation of this solution

[Figure: both units operate at the same incremental cost λ; the resulting outputs x1 and x2 are summed and compared with the load L]

If L − x1 − x2 < 0, reduce λ
If L − x1 − x2 > 0, increase λ
Physical interpretation

dC/dx = lim(Δx→0) ΔC/Δx

[Figure: cost curve C(x); the incremental cost is its slope]

dC1/dx1 = dC2/dx2 = λ

With m equality constraints, ∂ℒ/∂λm = ωm(x1, …, xn) = 0: n + m equations in n + m variables
Optimization with inequality constraints

Minimise f(x1, x2, …, xn)            (Objective function)

subject to:
ω1(x1, x2, …, xn) = 0
⋮                                    (Equality constraints)
ωm(x1, x2, …, xn) = 0

and:
g1(x1, x2, …, xn) ≤ 0
⋮                                    (Inequality constraints)
gp(x1, x2, …, xn) ≤ 0
Example: Economic Dispatch

[Figure: generators G1 and G2 with outputs x1 and x2 supplying a load L]

Minimise C = a1 + b1x1² + a2 + b2x2²

Subject to:
x1 + x2 = L               (Equality constraint)

x1 − x1max ≤ 0
x1min − x1 ≤ 0
x2 − x2max ≤ 0            (Inequality constraints)
x2min − x2 ≤ 0

(i.e. x1min ≤ x1 ≤ x1max and x2min ≤ x2 ≤ x2max)
Example: Economic Dispatch

Minimise C = a1 + b1x1² + a2 + b2x2²  (family of ellipses)

[Figure: elliptical cost contours with the box x1min ≤ x1 ≤ x1max, x2min ≤ x2 ≤ x2max and the solution at point A]
Example: Economic Dispatch

What is the solution for a larger load?

[Figure: constraint lines x1 + x2 = L and x1 + x2 = L′ with points A and B inside the box of unit limits]
Example: Economic Dispatch

C is the solution because it is the point on the equality constraint that satisfies the inequality constraints at minimum cost.

[Figure: constraint lines x1 + x2 = L and x1 + x2 = L′ with points A, B and the constrained solution C on the boundary of the unit limits]
Binding Inequality Constraints

• A binding inequality constraint is an inequality constraint that is satisfied exactly
• Example:
– If we must have x1 ≤ x1max
– And at the solution we have x1 = x1max
– Then the constraint x1 ≤ x1max is said to be binding
• ALL of the inequality constraints must be satisfied
• Only a FEW will be binding (or active) at any given time
• But we don’t know ahead of time which inequality constraints will be binding!
• All equality constraints are always binding

Lagrangian function:
ℒ(x1, …, xn, λ1, …, λm, µ1, …, µp) = f(x1, …, xn) + Σ(i=1..m) λi ωi(x1, …, xn) + Σ(j=1..p) µj gj(x1, …, xn)
Optimality Conditions

(Known as the Karush-Kuhn-Tucker (KKT) conditions)

ℒ(x, λ, µ) = f(x) + Σ(i=1..m) λi ωi(x) + Σ(j=1..p) µj gj(x)

∂ℒ(x, λ, µ)/∂xi ≡ ∂f(x)/∂xi + Σ(k=1..m) λk ∂ωk(x)/∂xi + Σ(j=1..p) µj ∂gj(x)/∂xi = 0,   i = 1, …, n
∂ℒ(x, λ, µ)/∂λk ≡ ωk(x) = 0,   k = 1, …, m
gj(x) ≤ 0,   j = 1, …, p
µj gj(x) = 0 and µj ≥ 0,   j = 1, …, p   (Complementary slackness conditions)
Complementary slackness conditions

µj gj(x) = 0 and µj ≥ 0

For each j, either gj(x) = 0 (the constraint is binding) or µj = 0.
Example

Minimise f(x1, x2) = 0.25x1² + x2²
Subject to:
ω(x1, x2) ≡ 5 − x1 − x2 = 0
g(x1, x2) ≡ x1 + 0.2x2 − 3 ≤ 0

[Figure: contours of f with the equality constraint line and the boundary of the inequality constraint]
Example

ℒ(x1, x2, λ, µ) = f(x1, x2) + λω(x1, x2) + µg(x1, x2)
= 0.25x1² + x2² + λ(5 − x1 − x2) + µ(x1 + 0.2x2 − 3)

∂ℒ/∂x1 ≡ 0.5x1 − λ + µ = 0
∂ℒ/∂x2 ≡ 2x2 − λ + 0.2µ = 0
∂ℒ/∂λ ≡ 5 − x1 − x2 = 0
∂ℒ/∂µ ≡ x1 + 0.2x2 − 3 ≤ 0
µ g(x) ≡ µ(x1 + 0.2x2 − 3) = 0 and µ ≥ 0
Example

The KKT conditions do not tell us whether the inequality constraint is binding.
We must use a trial-and-error approach.

Trial 1: Assume the inequality constraint is not binding ⇒ µ = 0.
The conditions then reduce to those of the equality-constrained problem, giving x1 = 4, x2 = 1, λ = 2 as before. But then g(x1, x2) = 4 + 0.2 − 3 = 1.2 > 0: the inequality constraint is violated, so this trial fails.

Trial 2: Assume the inequality constraint is binding ⇒ g(x1, x2) = 0:

∂ℒ/∂λ ≡ 5 − x1 − x2 = 0        ⇒ x1 = 2.5
∂ℒ/∂µ ≡ x1 + 0.2x2 − 3 = 0     ⇒ x2 = 2.5
∂ℒ/∂x1 ≡ 0.5x1 − λ + µ = 0     ⇒ λ = 5.9375
∂ℒ/∂x2 ≡ 2x2 − λ + 0.2µ = 0    ⇒ µ = 4.6875

Since µ ≥ 0, all the optimality conditions are satisfied.
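The same trial-and-error computation can be sketched in code:

```python
# Sketch of the trial-and-error KKT solution for:
#   minimise f = 0.25*x1^2 + x2^2
#   s.t. 5 - x1 - x2 = 0  and  g = x1 + 0.2*x2 - 3 <= 0
def g(x1, x2):
    return x1 + 0.2 * x2 - 3.0

# Trial 1: inequality not binding (mu = 0) -> equality-only solution
x1, x2, lam = 4.0, 1.0, 2.0
print(g(x1, x2))                 # 1.2 > 0 -> constraint violated, trial fails

# Trial 2: inequality binding -> solve the two constraint equations
#   5 - x1 - x2 = 0  and  x1 + 0.2*x2 - 3 = 0
x2 = (5.0 - 3.0) / 0.8           # eliminate x1 = 5 - x2
x1 = 5.0 - x2
# Stationarity: 0.5*x1 - lam + mu = 0  and  2*x2 - lam + 0.2*mu = 0
mu = (2.0 * x2 - 0.5 * x1) / 0.8
lam = 0.5 * x1 + mu

print(round(x1, 4), round(x2, 4), round(lam, 4), round(mu, 4))
# 2.5 2.5 5.9375 4.6875
assert mu >= 0 and abs(g(x1, x2)) < 1e-9   # all KKT conditions hold
```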
[Figure: contours of f(x1, x2) = 0.25x1² + x2² with ω(x1, x2) ≡ 5 − x1 − x2 = 0, marking the solution without constraints, the solution without the inequality constraint, and the solution with the inequality constraint]
Application to Economic Dispatch

[Figure: generators G1 and G2 with outputs x1 and x2 supplying a load L]

minimise f(x1, x2) = C1(x1) + C2(x2)
s.t. ω(x1, x2) ≡ L − x1 − x2 = 0
g1(x1, x2) ≡ x1 − x1max ≤ 0
g2(x1, x2) ≡ x1min − x1 ≤ 0      (x1min ≤ x1 ≤ x1max)
g3(x1, x2) ≡ x2 − x2max ≤ 0
g4(x1, x2) ≡ x2min − x2 ≤ 0      (x2min ≤ x2 ≤ x2max)

Lagrangian:
ℒ = C1(x1) + C2(x2) + λ(L − x1 − x2) + µ1(x1 − x1max) + µ2(x1min − x1) + µ3(x2 − x2max) + µ4(x2min − x2)

KKT Conditions:
∂ℒ/∂x1 ≡ dC1/dx1 − λ + µ1 − µ2 = 0
∂ℒ/∂x2 ≡ dC2/dx2 − λ + µ3 − µ4 = 0
∂ℒ/∂λ ≡ L − x1 − x2 = 0
Application to Economic Dispatch

KKT Conditions (continued):

∂ℒ/∂µ1 ≡ x1 − x1max ≤ 0     µ1(x1 − x1max) = 0 ;  µ1 ≥ 0
∂ℒ/∂µ2 ≡ x1min − x1 ≤ 0     µ2(x1min − x1) = 0 ;  µ2 ≥ 0
∂ℒ/∂µ3 ≡ x2 − x2max ≤ 0     µ3(x2 − x2max) = 0 ;  µ3 ≥ 0
∂ℒ/∂µ4 ≡ x2min − x2 ≤ 0     µ4(x2min − x2) = 0 ;  µ4 ≥ 0

If none of the inequality constraints is binding, all the µ’s are zero and the conditions reduce to:

∂ℒ/∂x1 ≡ dC1/dx1 − λ = 0
∂ℒ/∂x2 ≡ dC2/dx2 − λ = 0          ⇒ dC1/dx1 = dC2/dx2 = λ
∂ℒ/∂λ ≡ L − x1 − x2 = 0
Minimise C(x)
subject to: ω(x) = L ⇔ L − ω(x) = 0
and: g(x) ≥ K ⇔ K − g(x) ≤ 0

ℒ(x, λ, µ) = C(x) + λ(L − ω(x)) + µ(K − g(x))

At the optimum (x*, λ*, µ*), the last two terms are zero, so:
ℒ(x*, λ*, µ*) = C(x*)

∂ℒ/∂L |optimum = λ*     Marginal cost of the equality constraint
∂ℒ/∂K |optimum = µ*     Marginal cost of the inequality constraint
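The marginal-cost interpretation of λ can be checked numerically on the earlier two-unit quadratic dispatch (coefficients are hypothetical examples): re-solve for a slightly larger load and compare the cost increase with λ*:

```python
# Hedged sketch: verify that dC*/dL = λ* for the two-unit quadratic dispatch
# C = a1 + a2 + b1*x1^2 + b2*x2^2 with x1 + x2 = L (example coefficients).
a1, a2, b1, b2 = 10.0, 20.0, 0.01, 0.02

def optimal_cost(L):
    x1 = b2 * L / (b1 + b2)          # substitution solution from earlier
    x2 = b1 * L / (b1 + b2)
    cost = a1 + a2 + b1 * x1 ** 2 + b2 * x2 ** 2
    return cost, 2 * b1 * x1         # (optimal cost, λ = dC1/dx1 at optimum)

L, dL = 900.0, 1e-4
c0, lam = optimal_cost(L)
c1, _ = optimal_cost(L + dL)

marginal = (c1 - c0) / dL            # ≈ ∂C*/∂L
print(round(lam, 6), round(marginal, 3))   # both ≈ 12.0
```

So λ* is exactly the cost of serving one more MW of load, which is why it is interpreted as the system marginal cost (and, in market variants, the price).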
Physical Interpretation of Lagrange Multipliers

ℒ = C1(P1) + C2(P2) + C3(P3)
+ λ(L − P1 − P2 − P3)
+ µ1(P1 − 400) + µ2(230 − P1)
+ µ3(P2 − 500) + µ4(100 − P2)
+ µ5(P3 − 260) + µ6(100 − P3)
Example: Optimality Conditions

Set the partial derivatives of the Lagrangian with respect to the decision variables and λ to zero:

∂ℒ/∂P1 ≡ 8 + 0.2P1 − λ + µ1 − µ2 = 0    (1)
∂ℒ/∂P2 ≡ 7 + 0.12P2 − λ + µ3 − µ4 = 0   (2)
∂ℒ/∂P3 ≡ 9 + 0.14P3 − λ + µ5 − µ6 = 0   (3)
∂ℒ/∂λ ≡ L − P1 − P2 − P3 = 0            (4)
Example: Optimality Conditions

The partial derivatives of the Lagrangian with respect to the μ’s give back the inequality constraints; with them come the complementary slackness conditions:

∂ℒ/∂µ1 ≡ P1 − 400 ≤ 0      µ1·(P1 − 400) = 0 ;  µ1 ≥ 0
∂ℒ/∂µ2 ≡ 230 − P1 ≤ 0      µ2·(230 − P1) = 0 ;  µ2 ≥ 0
∂ℒ/∂µ3 ≡ P2 − 500 ≤ 0      µ3·(P2 − 500) = 0 ;  µ3 ≥ 0
∂ℒ/∂µ4 ≡ 100 − P2 ≤ 0      µ4·(100 − P2) = 0 ;  µ4 ≥ 0
∂ℒ/∂µ5 ≡ P3 − 260 ≤ 0      µ5·(P3 − 260) = 0 ;  µ5 ≥ 0
∂ℒ/∂µ6 ≡ 100 − P3 ≤ 0      µ6·(100 − P3) = 0 ;  µ6 ≥ 0
Example: First Trial (1)

Assume that none of the inequality constraints is binding.
This means that all the μ’s are zero.
Set the μ’s equal to zero in the optimality conditions. Equations (1) to (3) become:

∂ℒ/∂P1 ≡ 8 + 0.2P1 − λ = 0    ⇒ P1 = (λ − 8)/0.2
∂ℒ/∂P2 ≡ 7 + 0.12P2 − λ = 0   ⇒ P2 = (λ − 7)/0.12
∂ℒ/∂P3 ≡ 9 + 0.14P3 − λ = 0   ⇒ P3 = (λ − 9)/0.14
Example: First Trial (2)

Inserting these expressions in Equation (4), we get:

P1 = 194.8 MW < P1min = 230 MW
P2 = 333.0 MW
P3 = 271.2 MW > P3max = 260 MW

Not a valid solution → trial fails
Example: Second Trial (1)

We must try another combination of binding constraints. Let us try:

P1 = P1min = 230 MW ⇒ µ2 ≠ 0
P3 = P3max = 260 MW ⇒ µ5 ≠ 0
µ1 = µ3 = µ4 = µ6 = 0
Example: Second Trial (2)

With P1 = P1min = 230 MW and P3 = P3max = 260 MW, P2 follows from
L − P1 − P2 − P3 = 0

First trial:

[Figure: bar diagrams of P1, P2, P3 against their limits; P1 falls below P1min and P3 exceeds P3max, while P2 is within its limits]
Example: Second Trial (5)

In the second trial, we “pushed” P1 towards its lower limit and P3 towards its upper limit. However, there was no need to push P3 because increasing P1 would naturally reduce P3 below its limit.

Second trial:

[Figure: bar diagrams of P1, P2, P3 against their limits; P1 = P1min and P3 = P3max, with P2 within its limits]
Example: Third Trial (1)

P1 = P1min = 230 MW ⇒ µ2 ≠ 0
µ1 = µ3 = µ4 = µ5 = µ6 = 0

∂ℒ/∂P2 ≡ 7 + 0.12P2 − λ = 0
∂ℒ/∂P3 ≡ 9 + 0.14P3 − λ = 0      ⇒ λ = 44.75 $/MWh, P2 = 315 MW, P3 = 255 MW
∂ℒ/∂λ ≡ L − P1 − P2 − P3 = 0
Example: Third Trial (2)

∂ℒ/∂P1 ≡ 8 + 0.2P1 − λ − µ2 = 0 ⇒ µ2 = 9.25 $/MWh

Solution that satisfies all the optimality conditions:

P1 = 230 MW
P2 = 315 MW
P3 = 255 MW
λ = 44.75 $/MWh ← Cost of an extra MW of load
µ2 = 9.25 $/MWh ← Value of reducing P1min by 1 MW

Third trial:

[Figure: bar diagrams of P1, P2, P3 against their limits; P1 = P1min, with P2 and P3 within their limits]
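The load is not stated explicitly in this excerpt, but the balance equation implies L = 230 + 315 + 255 = 800 MW; under that assumption the third-trial solution can be checked against all the optimality conditions:

```python
# Sketch: check the third-trial solution against the KKT conditions.
# The load L = 800 MW is inferred from P1 + P2 + P3 = 230 + 315 + 255.
L = 800.0
P1, P2, P3 = 230.0, 315.0, 255.0
lam, mu2 = 44.75, 9.25              # all other mu's are zero

# Stationarity (equations (1)-(3), with mu1 = mu3 = mu4 = mu5 = mu6 = 0);
# a small slack absorbs the slides' rounding of lam, P2, P3
assert abs(8 + 0.2 * P1 - lam - mu2) < 0.1    # (1)
assert abs(7 + 0.12 * P2 - lam) < 0.1         # (2)
assert abs(9 + 0.14 * P3 - lam) < 0.1         # (3)

# Balance (equation (4)) and unit limits
assert abs(L - P1 - P2 - P3) < 1e-9
assert 230 <= P1 <= 400 and 100 <= P2 <= 500 and 100 <= P3 <= 260

# Complementary slackness: mu2 > 0 requires P1 = P1min
assert mu2 >= 0 and P1 == 230.0
print("all KKT conditions satisfied")
```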
Practical Economic Dispatch

Equal Incremental Cost Dispatch

[Figure: incremental cost curves dCA/dPA, dCB/dPB, dCC/dPC; a common value of λ sets the outputs PA, PB, PC, which add up to PA + PB + PC]
Lambda search algorithm

1. Choose a starting value for λ
2. Calculate the output PA, PB, PC of each unit at this incremental cost
3. If one of these values exceeds its lower or upper limit, fix it at that limit
4. Calculate PTOTAL = PA + PB + PC
5. If PTOTAL > L, reduce λ; if PTOTAL < L, increase λ; stop when PTOTAL = L
6. Go To Step 2
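The steps above can be sketched with the three-unit example data (incremental costs 8 + 0.2P1, 7 + 0.12P2, 9 + 0.14P3 and the limits from the example; the load L = 800 MW is assumed), using bisection on λ:

```python
# Sketch of the lambda search algorithm on the three-unit example.
# Incremental costs and limits follow the example; L = 800 MW is assumed.
units = [  # (b, c, Pmin, Pmax) with incremental cost b + c*P
    (8.0, 0.20, 230.0, 400.0),
    (7.0, 0.12, 100.0, 500.0),
    (9.0, 0.14, 100.0, 260.0),
]
L = 800.0

def dispatch(lam):
    # Step 2: output of each unit at incremental cost lam,
    # Step 3: fixed at a limit if it falls outside [Pmin, Pmax].
    out = []
    for b, c, pmin, pmax in units:
        p = (lam - b) / c
        out.append(min(max(p, pmin), pmax))
    return out

lo, hi = 0.0, 100.0          # bracketing values of λ (Step 1)
for _ in range(60):          # Steps 4-6: adjust λ until PTOTAL = L
    lam = 0.5 * (lo + hi)
    if sum(dispatch(lam)) < L:
        lo = lam             # PTOTAL < L -> increase λ
    else:
        hi = lam             # PTOTAL > L -> reduce λ

p = dispatch(lam)
print(round(lam, 1), [round(x) for x in p])   # 44.8 [230, 315, 255]
```

This reproduces the third-trial solution (λ ≈ 44.75 $/MWh with P1 held at its 230 MW lower limit) without enumerating combinations of binding constraints by hand.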
Linear Cost Curves

[Figure sequence: linear cost curves CA(PA) and CB(PB) up to the limits PAMAX and PBMAX, together with the corresponding incremental cost curves dCA/dPA and dCB/dPB, which are constant; successive slides show the dispatch of the two units as λ moves relative to the two constant incremental costs]
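With linear cost curves the incremental costs are constant, so a unit's output jumps from zero to its maximum as λ passes its incremental cost. The slides do not spell out the resulting algorithm, but a plausible sketch of the merit-order dispatch they illustrate (with hypothetical example data) is:

```python
# Hedged sketch: dispatch with constant incremental costs (linear cost curves).
# Units are loaded in order of increasing incremental cost until the load is met.
# The data below are hypothetical example values.
units = [                 # (name, incremental cost $/MWh, Pmax MW)
    ("A", 20.0, 300.0),
    ("B", 35.0, 200.0),
]
L = 400.0                 # load to be served, MW

dispatch = {}
remaining = L
for name, ic, pmax in sorted(units, key=lambda u: u[1]):  # cheapest first
    p = min(pmax, remaining)       # full output, or whatever load remains
    dispatch[name] = p
    remaining -= p

print(dispatch)           # {'A': 300.0, 'B': 100.0}
```

Only the marginal unit (B here) runs between its limits; its incremental cost sets λ.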