
A Dynamic Algorithm for Reachability Games Played on Trees

Bakhadyr Khoussainov1 , Jiamou Liu2? , Imran Khaliq1


1 Department of Computer Science, University of Auckland, New Zealand
2 Universität Leipzig, Institut für Informatik, Germany
bmk@cs.auckland.ac.nz, jiamou@informatik.uni-leipzig.de
ikha020@aucklanduni.ac.nz

Abstract. This paper starts the investigation of dynamic algorithms for solving games
that are played on finite graphs. The dynamic game determinacy problem calls for finding
efficient algorithms that decide the winner of the game when the underlying graph under-
goes repeated modifications. In this paper, we focus on turn-based reachability games. We
provide an algorithm that solves the dynamic reachability game problem played on trees.
The amortized time complexity of our algorithm is O(log² n) for updates and O(log n) for
queries, where n is the number of nodes in the current graph.

1 Introduction
We start the investigation of dynamic algorithms for solving games played on finite graphs. Games
played on graphs, with reachability, Büchi, Muller, Streett, parity and similar winning conditions,
have recently attracted great attention due to connections with model checking and verification
problems, automata and logic [7][12][14][19]. Here we focus on two-player games that are turn-
based, deterministic and with perfect information. Each such game is defined on a finite directed
graph. The two players play the game by moving a token on the underlying graph in turn. The
goal of one player is to move the token along a path that satisfies the winning condition, while the
other player wants the opposite. Given one of these games, to solve the game means to design an
(efficient) algorithm that tells us from which nodes a given player wins the game. Formally, the
game determinacy problem is defined as follows:
INPUT: A game G and a node u on the underlying graph of G.
QUESTION: Does Player 0 win the game G starting from the node u?
Polynomial time algorithms exist to solve some of the games mentioned above, while efficient
algorithms for other games remain unknown. For example, on a graph with n nodes and m edges,
the reachability game problem is in O(n + m) and is PTIME-complete [9], and Büchi games are in
O(n · m) [2]. Parity games are known to be in NP ∩ Co-NP but not known to be in P.
The game determinacy problem can be answered both by a static algorithm or a dynamic
algorithm. In the setting of the static algorithm, the games, once given as the input, remain
unchanged over time. All the works on games played on graphs mentioned above belong to this
category. On the other hand, in the setting of the dynamic algorithm, the game is dynamically
modified. Examples of situations where such dynamic algorithms are of interest include: (1) the
game models a system which undergoes certain changes over time; (2) when we do not have full
information about the game, we may solve the game by performing a series of refinements on
approximations of the game. Each approximation is itself a game and each refinement can be
thought of as an update to the previous approximation. We pose the dynamic
game determinacy problem as follows:

We would like to maintain the graph of the game that undergoes a sequence of update and
query operations in such a way that facilitates an efficient solution of the current game.
? The second author is supported by the DFG research project GELO.
Contrary to the static case, the dynamic determinacy problem takes as input a game G and
a (finite or infinite) sequence α1 , α2 , α3 , . . . of update or query operations. Each update operation
makes a unit change to the current game such as inserting or deleting a node or an edge. Since
each time the change to the game is small, we hope to handle these updates more efficiently than
re-solving the game from scratch. The dynamic algorithm that solves this problem is a collection
of computations that handle all the operations.
There has recently been increasing interest in dynamic graph algorithms (See, for example,
[5][6]). The dynamic reachability problem on graphs has been investigated in a series of papers
by King [10], Demetrescu and Italiano [4], Roditty [15] and Roditty and Zwick [16][17]. In [17],
it is shown that for directed graphs with m edges and n nodes, there is a dynamic algorithm
for the reachability problem which has an amortized update time of O(m + n log n) and a worst-
case query time of O(n). This paper extends this line of research to dynamic reachability game
algorithms. In the setting of games, for a given directed graph G and a player σ, a set of nodes T
is reachable from a node u in G means that there is a strategy for player σ such that starting from
u, all paths produced by player σ following that strategy reach T , regardless of the actions of the
opponent. In this manner, graphs can be seen as games where one of the players has no power to
change the course of the play. Hence, the dynamic reachability game problem can be viewed as a
generalization of the dynamic reachability problem for graphs.
In this paper we describe a dynamic algorithm that solves dynamic reachability games played
on trees. We analyze the amortized time complexity of the algorithm, which measures the average
running time per operation over a worst-case sequence of operations1 . We concentrate on trees
because: (1) Trees are simple data structures, and the study of dynamic algorithms on trees is
the first step towards the dynamic game determinacy problem. (2) Even in the case of trees the
techniques one needs to employ are non-trivial. (3) The amortized time analysis for the dynamic
reachability game problem on graphs, in the general case, is an interesting hard problem. (4) Finally, we
give a satisfactory solution to the problem on trees. We show that the amortized time complexity
of our algorithm is of order O(log² n) for updates and O(log n) for queries, where n is the number
of nodes in the tree.
The rest of the paper is organized as follows. Section 2 describes the known static algorithm
that solves reachability games. Section 3 lays out the basic framework of dynamic reachability
game problem and reachability game played on trees. Section 4 describes the data structure we use
in the dynamic algorithm. A crucial technique in the algorithm is that the tree is partitioned into a
collection of paths. Nodes on the same path are processed collectively under an update operation.
Sections 5 and 6 describe the algorithm in detail. Finally, Section 7 analyzes the amortized time
complexity of the algorithm.

2 A Static Algorithm for Reachability Games

We now describe two-person reachability games played on directed finite graphs. The two players
are Player 0 and Player 1. The arena A of the game is a directed graph (V0 , V1 , E), where V0 is a
finite set of 0-nodes, V1 is a finite set of 1-nodes disjoint from V0 , and E ⊆ V0 × V1 ∪ V1 × V0 is the
edge relation. We use V to denote V0 ∪ V1 . A reachability game G is a pair (A, T ) where A is the
arena and T ⊆ V is the set of target nodes for Player 0. We call (V, E) the underlying graph of G.
The players start by placing a token on some initial node v ∈ V and then move the token in
rounds. At each round, the token is moved along an edge by respecting the direction of the edge.
If the token is placed at u ∈ Vσ , where σ ∈ {0, 1}, then Player σ moves the token from u to a
node v such that (u, v) ∈ E. The play stops when the token reaches a node with no outgoing edge
or a target node. Otherwise, the play continues forever. Formally, a play is a (finite or infinite)
sequence π = v0 v1 v2 . . . such that (vi , vi+1 ) ∈ E for all i. Player 0 wins the play π if π is finite
and the last node in π is in T . Otherwise, Player 1 wins the play.
1 See a standard textbook such as [3] (Chapter 17) for a thorough introduction to amortized complexity analysis.
A (memoryless) strategy for Player σ is a partial function fσ : Vσ → V1−σ . A play π = v0 v1 ...
is consistent with fσ if vi+1 = fσ (vi ) whenever vi ∈ Vσ and fσ is defined on vi for all i. All strategies
in this paper are memoryless. A winning strategy for Player σ from v is a strategy fσ such that
Player σ wins all plays starting from v that are consistent with fσ . A node u is a winning position
for Player σ, if Player σ has a winning strategy from u. The σ-winning region, denoted Wσ , is the
set of all winning positions for Player σ. Note that the winning regions are defined for memoryless
strategies. A game enjoys memoryless determinacy if the regions W0 and W1 partition V .
The rest of the section describes the known static algorithm for solving reachability games.
Theorem 1 (Reachability game determinacy [8]). Reachability games enjoy memoryless de-
terminacy. Moreover, there is an algorithm that computes W0 and W1 in time O(n + m), where n
and m are respectively the number of nodes and directed edges in A.

Proof. For Y ⊆ V , set Pre(Y ) = {v ∈ V0 | ∃u[(v, u) ∈ E ∧ u ∈ Y ]} ∪ {v ∈ V1 | ∀u[(v, u) ∈ E → u ∈ Y ]}.
Define a sequence T0 , T1 , . . . such that T0 = T and, for i > 0, Ti = Pre(Ti−1 ) ∪ Ti−1 . Since
A is finite, there is an s such that Ts = Ts+1 . We say a node u has rank r, r ≥ 0, if u ∈ Tr − Tr−1 .
A node u has infinite rank if u ∉ Ts .
By induction on the rank of u, one proves that u ∈ W0 if u has a finite rank. On the other
hand, if u ∈ W0 , then there is a winning strategy f0 of Player 0 such that all plays consistent with
f0 starting from u reach T . Let π be a play consistent with f0 starting from u. Note that π must
reach T in less than n steps as otherwise π will continue forever without reaching T . Therefore
one may prove easily that u ∈ Tn . Hence W0 = {u | u has a finite rank}.
The algorithm finds W0 inductively. It sets T0 = T . At each round i, the algorithm computes
all the nodes of rank i + 1 by examining each edge (u, v) where v has rank i. The procedure
examines each node and each edge at most once and hence takes time O(n + m). ⊓⊔
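To make the procedure concrete, the attractor computation of this proof can be sketched in Python as follows. The function name and the set-based game encoding are ours, not the paper's; the sketch follows the paper's convention that a play stops at a target node or at a node with no outgoing edge.

```python
from collections import deque

def solve_reachability(v0, v1, edges, targets):
    """Compute Player 0's winning region W0 of a reachability game.

    v0, v1: disjoint sets of 0-nodes and 1-nodes; edges: set of pairs (u, v);
    targets: the target set T. Runs in O(n + m) time, as in Theorem 1.
    """
    nodes = v0 | v1
    succ = {u: [] for u in nodes}
    pred = {u: [] for u in nodes}
    for (u, v) in edges:
        succ[u].append(v)
        pred[v].append(u)
    # For a 1-node u, count[u] tracks how many successors are not yet in W0.
    count = {u: len(succ[u]) for u in nodes}
    w0 = set(targets)             # the rank-0 nodes
    queue = deque(targets)
    while queue:
        v = queue.popleft()
        for u in pred[v]:
            if u in w0:
                continue
            if u in v0:           # Player 0 needs just one successor in W0
                w0.add(u)
                queue.append(u)
            else:                 # Player 1: all successors must be in W0
                count[u] -= 1
                if count[u] == 0:
                    w0.add(u)
                    queue.append(u)
    return w0
```

Every node of finite rank enters the queue exactly once, and each edge is inspected once when its head leaves the queue, which gives the O(n + m) bound.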

3 Basic Framework

3.1 Dynamic reachability game problem

As mentioned above, the dynamic game determinacy problem takes as input a reachability game
G = (A, T ) and a sequence α1 , α2 , . . . of update and query operations. The operations produce the
sequence of games G0 , G1 , . . . such that Gi is obtained from Gi−1 by applying the operation αi .
A dynamic algorithm should solve the game Gi for each i. We define the following seven update
operations and one query operation:

1. InsertNode(u, i, j) operation, where i, j ∈ {0, 1}, creates a new node u in Vi . The node u is
set as a target if j = 1 and as a non-target if j = 0.
2. DeleteNode(u) deletes the node u ∈ V .
3. InsertEdge(u, v) operation inserts an edge from u to v.
4. DeleteEdge(u, v) operation deletes the edge from u to v.
5. SetTarget(u) operation sets node u as a target.
6. UnsetTarget(u) operation sets node u as a non-target.
7. SwitchPosition(u) operation changes u from a σ-node to a (1 − σ)-node where u originally
belongs to Vσ .
8. Query(u) operation returns true if u ∈ W0 and false if u ∈ W1 .
By convention, we assume that the initial game G0 is empty (where the underlying graph is empty).
At stage s, s > 0, the algorithm applies the operation αs to the current game.
Using the static algorithm from Theorem 1, one produces two lazy dynamic algorithms for
reachability games. The first algorithm runs the static algorithm after each update and therefore
query takes constant time at each stage. The second algorithm modifies the game graph after each
update operation without re-computing the winning positions, but the algorithm runs the static
algorithm for the Query(u) operation. In this way, the update operations take constant time, but
Query(u) takes the same time as the static algorithm. The amortized time complexity in both
algorithms is the same as the static algorithm. Our goal is to improve upon these two algorithms.
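The second lazy baseline can be sketched as a thin wrapper that applies updates in O(1) time and re-runs a static solver only when a query arrives. The class and its `solver` parameter are illustrative stand-ins, not part of the paper's construction; `solver` may be any static algorithm, such as the one of Theorem 1.

```python
class LazyReachabilityGame:
    """Second lazy baseline: O(1) updates, static-time queries.

    The game is stored as plain sets; solver(v0, v1, edges, targets)
    must return Player 0's winning region W0.
    """
    def __init__(self, solver):
        self.solver = solver
        self.v0, self.v1 = set(), set()
        self.edges, self.targets = set(), set()
        self.dirty = True          # the stored winning region is stale
        self.w0 = set()

    def insert_node(self, u, i, j):
        (self.v0 if i == 0 else self.v1).add(u)
        if j == 1:
            self.targets.add(u)
        self.dirty = True

    def insert_edge(self, u, v):
        self.edges.add((u, v))
        self.dirty = True

    def query(self, u):
        if self.dirty:             # pay the static cost only when needed
            self.w0 = self.solver(self.v0, self.v1, self.edges, self.targets)
            self.dirty = False
        return u in self.w0
```

The first baseline is obtained symmetrically by moving the re-solving step into each update and making `query` a constant-time lookup.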

3.2 Reachability games played on trees

We view trees as directed acyclic weakly-connected graphs where each node has a set of zero or
more children nodes, and at most one parent node. The node with no incoming edge is called the
root. Nodes with no children are called leaves. A forest consists of pairwise disjoint trees. Since
the underlying tree of the game undergoes changes, the game will in fact be played on forests. We
however still say that a reachability game G is played on trees if its arena is a forest F . A node u
is an ancestor of v (and v is a descendant of u) in forest F if there is a path in F that goes from
u to v. We write u ≤F v to denote that u is an ancestor of v in the forest F. For two nodes u, v with
u ≤F v, we say the path from u to v is the set Path[u, v] = {w | u ≤F w ≤F v}.
Let G = (V0 , V1 , E) be a game played on trees. Recall that for σ ∈ {0, 1}, Wσ denotes the
σ-winning region in the forest F = (V, E). By Theorem 1, each node of F belongs to either W0 or
W1 . We make the following definition.

Definition 1. A node u is in state σ if u ∈ Wσ . We denote the state of u by State(u).

For any node u ∈ V , the value of State(u) depends on the states of the children of u. The following
lemma is immediate.

Lemma 1. The value of State(u) is determined as follows:

1. If u is a target then State(u) = 0.


2. If u is a leaf and is not a target, then State(u) = 1.
3. In all other cases
– If u ∈ V0 , then State(u) = 0 if and only if some child of u has state 0.
– If u ∈ V1 , then State(u) = 0 if and only if all children of u have state 0.
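Lemma 1 is effectively a one-node evaluation rule, which could be sketched as follows. The accessor arguments are illustrative (the paper keeps this information in the base structure described below), and the states of the children are assumed to be already computed.

```python
def node_state(u, is_target, pos, children, child_state):
    """Evaluate State(u) according to Lemma 1.

    is_target(u): whether u is a target; pos(u): 0 if u is a 0-node, 1 otherwise;
    children(u): the children of u; child_state(v): the known state of child v.
    """
    if is_target(u):
        return 0                       # case 1: a target has state 0
    kids = children(u)
    if not kids:
        return 1                       # case 2: a non-target leaf has state 1
    if pos(u) == 0:                    # Player 0 needs some child of state 0
        return 0 if any(child_state(v) == 0 for v in kids) else 1
    else:                              # Player 1 must see only state-0 children
        return 0 if all(child_state(v) == 0 for v in kids) else 1
```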

In subsequent sections, we describe a dynamic algorithm for solving reachability games played
on trees. The algorithm maintains a data structure (the base structure) which stores the current
game and an auxiliary data structure to facilitate efficient solutions to the query operations.
Essentially, the problem amounts to efficiently updating the auxiliary structure.
We describe the base structure used to store a reachability game played on trees. The underlying
forest F is implemented as a doubly linked list List(F) of nodes. A node u is represented by the
tuple (p(u), pos(u), tar(u)) where p(u) is a pointer to the parent of u (p(u) = null if u is a root),
pos(u) = σ if u ∈ Vσ , and tar(u) is a Boolean variable with tar(u) = true iff u is a target.
We make the following assumption about the operations and their implementations:

– Inputs of the update and query operations are given as pointers to their representatives in
base structure.
– The InsertNode(u) operation adds an isolated node2 to the current forest F.
– We assume that DeleteNode(u) is only performed on the root. The operation replaces the tree
containing u with several trees, each having a child of u as its root. When u is not a root,
we can first perform DeleteEdge(p(u), u) to make u the root.
2 A node is isolated if it has no incoming or outgoing edges.
– To preserve the forest structure, the InsertEdge(u, v) operation is applied only when v is the root
of a tree not containing u. InsertEdge(u, v) links the trees containing u and v. DeleteEdge(u, v)
does the opposite by splitting the tree containing u and v into two trees. One contains u and
the other has v as its root.
– For simplicity, we assume that SetTarget(u) is applied only when u is not a target node;
similarly UnsetTarget(u) is applied only when u is a target node.

4 Data structures

4.1 Splay trees

This algorithm makes use of the splay tree data structure introduced by Sleator and Tarjan [18].
Splay trees form a dynamic data structure for maintaining elements drawn from a totally ordered
domain D. Each splay tree is itself a tree which is identified by its root element. Elements in D are
arranged in a collection PD of splay trees with the requirement that each element of D belongs to
some splay tree in PD and no element appears in two different splay trees in PD . The data structure
supports the following splay tree operations.

– Splay(A, u): This operation reorganizes the splay tree A so that u is at the root if u ∈ A.
– Join(A, B): This operation joins two splay trees A, B ∈ PD , where each element in A is less
than each element in B, into one tree.
– Split(A, u): This operation splits the splay tree A ∈ PD into two new splay trees

R(u) = {x ∈ A | x > u} and L(u) = {x ∈ A | x ≤ u}.

– Max(A)/Min(A): This operation returns the Max/Min element in A ∈ PD .

The reader is referred to standard textbooks such as [11] or [13] for proofs of the following
theorem.
Theorem 2 (splay trees). For the splay trees on PD , the amortized time of the operations above
is O(log n), where n is the cardinality of D.
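As a sanity check of these contracts, the following stand-in mimics one tree of PD with the Join/Split/Max/Min interface. It is backed by a sorted Python list, so each operation costs O(n) rather than the amortized O(log n) of Theorem 2; a genuine splay tree achieves the bound by rotating each accessed element to the root. The class name is ours.

```python
import bisect

class SortedPath:
    """O(n) stand-in for one splay tree in P_D over a totally ordered domain."""

    def __init__(self, elems=()):
        self.elems = sorted(elems)

    def join(self, other):
        # Precondition of Join(A, B): every element of A is below every element of B.
        assert not self.elems or not other.elems or self.elems[-1] < other.elems[0]
        self.elems += other.elems
        return self

    def split(self, u):
        """Split at u: keep L(u) = {x | x <= u} here, return R(u) = {x | x > u}."""
        i = bisect.bisect_right(self.elems, u)
        right = SortedPath(self.elems[i:])
        self.elems = self.elems[:i]
        return right

    def min(self):
        return self.elems[0]

    def max(self):
        return self.elems[-1]
```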

4.2 Partition a forest by paths

For a reachability game played on the forest F, we make the following definition.
Definition 2. 1. A node u ∈ V is stable if either u is a target node, or u ∈ Vσ ∩ Wσ for some
σ ∈ {0, 1}, and
|{v | State(u) = State(v) ∧ (u, v) ∈ E}| ≥ 2.
We use Z to denote the set of stable nodes.
2. A path Path[u, v] in F is homogeneous if all nodes in Path[u, v] have the same state.
3. A path Path[u, v] is stable if it contains at most one stable node and the stable node can only
appear as the ≤F -maximum element in Path[u, v].
As the auxiliary data structure, the algorithm maintains a partition V Path of nodes V where
each element P ∈ V Path is a homogeneous and stable path in F. The partition V Path is rep-
resented using the splay tree data structure as described above, where each element of V Path
forms a splay tree. We assume V Path is equipped with the Splay(A, u), Join(A, B), Split(A, u) and
Max(A)/Min(A) operations. The assumed “total order” on the nodes is the forest order ≤F . The
order relation ≤F is not total, however, it becomes a total order when restricted to a particular
path. Therefore, when applying the Join(A, B) operation, we need to make sure that the resulting
set A ∪ B again forms a path in the forest. Note that the structure

F Path = (V Path , {(P1 , P2 ) | ∃u ∈ P1 ∃v ∈ P2 : (u, v) ∈ E})


again forms a forest, which we call the partition forest of F.
We use Pu to denote the path containing u in V Path . We assume that from each node u in
the linked list List(F) there is a pointer to the corresponding element u in the path Pu . Hence
accessing Pu from u takes constant time.
The algorithm maintains the following additional variables as auxiliary data structures:
1. For each Pu ∈ V Path , the algorithm maintains State(Pu ) which is the state of all nodes in Pu .
This variable is linked to by a pointer from the root of the splay tree representing Pu . It can
be accessed from Pu by performing the Splay(Pu , u) operation.
2. For each node u, this algorithm maintains

h(u) = |{v | (u, v) ∈ E}|.

3. For each node u, the algorithm maintains Stable(u) ∈ { true, false } such that

Stable(u) = true if and only if u is stable.

4. For each stable node u in F, the algorithm maintains

ν(u) = |{v | State(u) = State(v) ∧ (u, v) ∈ E}|.

Accessing h(u), Stable(u) and ν(u) from u takes constant time.
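To pin down the invariants these variables must satisfy, here is a from-scratch recomputation under Definition 2. This full pass is exactly what the dynamic algorithm avoids on every update; the dictionary-based encoding is ours.

```python
def recompute_aux(children, state, is_target, pos):
    """Recompute h(u), nu(u) and Stable(u) for every node from scratch.

    children: u -> list of children; state: u -> 0/1 (winning region of u);
    is_target, pos: maps u -> bool / 0-1. Returns the maps h, nu, stable.
    """
    h, nu, stable = {}, {}, {}
    for u in children:
        h[u] = len(children[u])
        same = sum(1 for v in children[u] if state[v] == state[u])
        # Definition 2: u is stable if it is a target, or u lies in its own
        # winning region (pos(u) == state(u)) with >= 2 same-state children.
        stable[u] = is_target[u] or (pos[u] == state[u] and same >= 2)
        if stable[u]:
            nu[u] = same      # nu(u) is maintained only for stable nodes
    return h, nu, stable
```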


Let α0 , α1 , . . . be a sequence of operations described above. We sometimes use the notation

Fi = (V0,i ∪ V1,i , Ei ), Vi , Ti , Wσ,i , hi (u), νi (u), Zi , Stablei (u), ViPath , Pi,u , FiPath

to denote the underlying forest F, the set of nodes V , the target set T , σ-winning region Wσ,i ,
the variables h(u) and ν(u), Z, Stable(u), V Path , Pu and F Path as they appear at stage i. We say
that the node u changes state at stage i + 1, if u is moved either from W0,i to W1,i+1 or from W1,i
to W0,i+1 .

5 Trace up and change state


Suppose an update operation is applied at stage s + 1. This operation results in a modified forest
Fs+1 . Let u be the ≤Fs -maximum node that changes state at stage s + 1. This change may trigger
a sequence of state changes on the ancestors of u according to Lemma 1 (nodes that are not
ancestors are not affected). Let Ps (u) be the maximal homogeneous stable path 3 that contains
u at stage s. The following lemma shows how a state change on u influences its ancestors in the
forest.

Lemma 2. Suppose the node u changes state at stage s + 1. Let x be the ≤Fs -least node in Ps (u).
Then a node y ≤Fs u changes state at stage s + 1 if and only if y ∈ Path[x, u].

Proof. Let x0 >Fs x1 >Fs . . . >Fs xm be the sequence of nodes in Path[x, u]. Note that x0 = u and
xm = x. Suppose x0 ∈ W0,s . By the assumption that Path[x, u] is homogeneous, States (xi ) = 0
for each 0 ≤ i ≤ m. Suppose for 0 ≤ i < m, xi changes state from 0 to 1. By the assumption that
xi+1 is not a stable node at stage s, xi+1 is not a target. Furthermore, if xi+1 ∈ V0,s , then xi+1
has exactly one child in state 0 at stage s; this child is xi . If xi+1 ∈ V1,s , then all children of xi+1
are in state 0 at stage s. In both cases, xi+1 changes state at stage s + 1. This proves that all
nodes in Path[x, u] change state.
Now take y <Fs u. If y is stable, then either y is a target, in which case it does not change
state at stage s + 1, or y ∈ Vσ,s ∩ Wσ,s for some σ ∈ {0, 1} and y has two children v1 , v2 with the
same states as y at stage s. In the later case, at most one of v1 , v2 changes state at stage s + 1,
which means that y will stay in the σ-winning region at stage s + 1.
3 A homogeneous stable path is maximal if it is not a proper subset of any other homogeneous stable path.
If States (y) = States+1 (x0 ) = 1, then when y ∈ V0,s , all children of y are in W1,s ; when
y ∈ V1,s , at least one child of y is in W1,s . In both cases, y does not change state at stage s + 1.
Consequently by Lemma 1, no node ≤Fs y changes state at stage s + 1.
This finishes the lemma for the case when x0 ∈ W0,s . The case when x0 ∈ W1,s can be proved
in a similar way as above by interchanging 0 and 1. ⊓⊔

We describe a TraceUp(u) operation which locates the ≤Fs -least node x in Ps (u) and returns
the path Path[x, u]. The algorithm first performs Split(Ps,u , u), which divides Ps,u into two paths
L(u) and R(u) and Pu becomes L(u). Note now that u is the ≤F -maximum element in Pu . Let
x = Min(Pu ). The algorithm stops and outputs Pu if any one of the following three conditions
holds:

1. x is a root
2. the parent p(x) of x is stable
3. State(p(x)) ≠ State(x).

If none of these conditions holds, let w = p(x). The algorithm runs Split(Pw , w) that divides Pw
into two paths L(w) and R(w) and Pw becomes L(w). Note also that elements in Pu ∪ Pw are
totally ordered by ≤F so that Pu ∪ Pw forms a path in F that contains both u and w . This allows
the algorithm to apply Join(Pu , Pw ). Note that after this is done, Pu is enlarged to include w. This
process is iterated until one of the above three conditions holds. See Algorithm 1.

Algorithm 1 TraceUp(u).
1: Split(Pu , u); x ← Min(Pu )
2: while p(x) ≠ null ∧ ¬Stable(p(x)) ∧ State(Pp(x) ) = State(Pu ) do
3: w ← p(x); Split(Pw , w); Join(Pw , Pu )
4: x ← Min(Pu )
5: end while
6: return Pu

It is easy to see that the last node assigned to the variable x is the ≤Fs -least node in Ps (u).
Therefore by Lemma 2, TraceUp(u) contains all nodes that change state at stage s + 1.
We describe a ChangeState(u) algorithm which carries out the necessary updates when u is
the ≤Fs -maximum node that changes state at stage s + 1. ChangeState(u) first computes
TraceUp(u), then changes the state of all nodes in Pu by setting States+1 (Pu ) to 1 − States (Pu ).
Finally, the algorithm maintains the stableness property for the updated game Gs+1 . The next
lemma characterizes how the state change of u influences the stable nodes.

Lemma 3. The following hold for any node v 6= u:

1. If v ≮Fs+1 u, then v ∈ Zs+1 if and only if v ∈ Zs .


2. If v <Fs+1 u and v ∈ Zs , then v ∈ Zs+1 if and only if v is a target or νs+1 (v) ≥ 2.
3. If v <Fs+1 u and v ∉ Zs , then v ∈ Zs+1 if and only if v is the parent of the ≤Fs+1 -least node that
changes state at stage s + 1.

Proof. Statement (1) is clear. For (2), suppose v ∈ Zs . If v is a target then v stays stable at stage
s + 1; otherwise, v ∈ Vσ,s ∩ Wσ,s for some σ ∈ {0, 1} and νs (v) ≥ 2. Since v does not change state,
and there is at most one child of v that changes state at stage s + 1, we have v ∈ Vσ,s+1 ∩ Wσ,s+1 .
Therefore v is stable if and only if νs+1 (v) ≥ 2.
We now prove (3). Fix v ∈ V0,s . Suppose one of v’s children, say v0 , is the ≤Fs+1 -least node
that changes its state at stage s + 1. If States (v) = 1, then all children of v have state 1. Thus
States+1 (v0 ) = 0 and v should also change to state 0. This is impossible by ≤Fs+1 -minimality of
v0 . Therefore States (v) = 0. Since v is not stable at stage s, exactly one of its children, say v1 , is in
W0,s . If v0 = v1 , then no child of v is in W0,s+1 and States (v) ≠ States+1 (v), which is again impossible.
Therefore v0 ≠ v1 . Hence at stage s + 1, v has exactly two children in W0,s+1 and v ∈ Zs+1 .
On the other hand, suppose v ∈ Zs+1 . This means that v ∈ W0,s+1 and v has two children
v0 , v1 in W0,s+1 . Note that for any x, at most one child of x may change state at any given stage.
Therefore, at most one of v0 , v1 , say v1 , is in W0,s . Hence v ∈ W0,s , and States (v) = States+1 (v).
Therefore v0 is the ≤Fs+1 -least node that changes state at stage s + 1.
This proves (3) for the case when v ∈ V0,s . The case when v ∈ V1,s is proved using the same
argument by interchanging 0 and 1. ⊓⊔

Let w be the parent of the ≤Fs+1 -least node in Pu at stage s + 1. The algorithm checks if
w is a stable node at stage s + 1 according to the condition given in Lemma 3. If w ∉ Zs , then
w ∈ Zs+1 . In this case, the algorithm sets Stables+1 (w) to true and applies Split(Pw , w) to ensure
that the path Pw is stable. If w ∈ Zs and w ∉ Zs+1 , the algorithm sets Stables+1 (w) to false and
applies TraceUp(w). The reason for running TraceUp(w) here is to optimize the amortized time
complexity of the algorithm and will be explained in Section 7.

Algorithm 2 ChangeState(u).
1: P ← TraceUp(u)
2: State(P ) ← 1 − State(P )
3: w ← p(Min(P ))
4: if Stable(w) then
5: ν(w) ← [State(Pw ) = State(Pu ) ? ν(w) + 1 : ν(w) − 1]
6: if ν(w) < 2 ∧ ¬tar(w) then
7: Stable(w) ← false; Run TraceUp(w)
8: end if
9: else
10: Stable(w) ← true; Split(Pw , w); ν(w) ← 2
11: end if

From Lemmas 2 and 3, one may easily prove the following lemma, which implies the correctness
of Algorithm 2.

Lemma 4. Suppose u is the ≤Fs -maximum node that changes state at stage s + 1. The following
holds after applying ChangeState(u) to the updated game Gs+1 :

– A node v ∈ Vs+1 has state States+1 (Pv ).
– If v ≠ u, then v is stable if and only if Stables+1 (v) = true.
– If v ≠ u and v is stable, then νs+1 (v) = |{w | (v, w) ∈ E ∧ State(v) = State(w)}|.
– If v ≠ u, then v has hs+1 (v) children.

6 Update and query operations

This section explains the computation for handling each query and update operation. The Query(u)
operation takes a parameter u and returns State(Pu ) where Pu is the element in the current
partition VsPath that contains u.
The InsertNode(u, i, j) and DeleteNode(u) operations require changing the data structure in
the following way.

1. The InsertNode(u, i, j) operation adds a new node u at the front of List(F) and sets the
appropriate values of pos(u), tar(u), Stable(u) and State(Pu ) according to the input parameters
i, j. It sets h(u) = 0.
2. The DeleteNode(u) operation performs Split(Pu , u) to separate u from the other nodes in Pu . Note
that the updated Pu is the singleton {u}. The operation then deletes u by setting its entry to null in
List(F).
In principle, the InsertEdge(u, v), DeleteEdge(u, v), SetTarget(u), UnsetTarget(u) and SwitchPosition(u)
operations all perform the following three steps. (1) First, the algorithm carries out the update operation
on the base structure List(F) and correspondingly updates the partition V Path if necessary. (2)
Second, it decides whether this update causes u to change state. If so, u is the ≤F -maximum node that
changes state at this stage and hence ChangeState(u) is called. (3) Last, it updates the additional
variables h(u), ν(u) and Stable(u) for u. Task (1) is straightforward: change the values of
p(v), pos(u) and tar(u). Tasks (2) and (3) can be done using a fixed number of if statements, each
with a fixed Boolean condition involving comparisons on the variables. We next describe these two
tasks separately.
The above update operations require updating the base structure List(F) and the partition
V Path in the following way.

1. The InsertEdge(u, v) operation inserts the edge (u, v) by setting p(v) to u.


2. The DeleteEdge(u, v) operation deletes the edge (u, v) by setting p(v) to null and applies
Split(Pu , u) to separate Pu from Pv .
3. The SetTarget(u) operation sets u to a target node by setting tar(u) to true.
4. The UnsetTarget(u) operation sets u to a non-target node by setting tar(u) to false.
5. The SwitchPosition(u) operation sets pos(u) to 1 − pos(u).

6.1 Update State(Pu )

The following lemma lists the conditions under which u changes state during an update operation.

Lemma 5. Suppose an update operation is performed at stage s + 1.

1. If InsertEdge(u, v) is performed, then u changes state if and only if

¬tars (u) and States (Pu ) ≠ States (Pv ) and (hs (u) = 0 or States (Pu ) ≠ poss (u)).    (1)

2. If DeleteEdge(u, v) is performed, then u changes state if and only if

¬Stables (u) and States (Pu ) = States (Pv ) and ((hs (u) = 1 and States (Pu ) = 0) or States (Pu ) = poss (u)).    (2)

3. If SetTarget(u) is performed, then u changes state if and only if States (Pu ) = 1.


4. If UnsetTarget(u) is performed, then u changes state if and only if

(poss (u) = 0 and νs (u) = 0) or (poss (u) = 1 and νs (u) < hs (u)).    (3)

5. If SwitchPosition(u) is performed, then u changes state if and only if

¬tars (u) and poss (u) = States (Pu ) and ((Stables (u) and hs (u) > νs (u)) or (¬Stables (u) and hs (u) > 1)).    (4)

Proof. (1) Say InsertEdge(u, v) is performed at stage s + 1. Suppose u changes state. It must be
that u ∉ Ts and States (u) ≠ States (v). Suppose further that u is not a leaf at stage s. If
u ∈ V0,s ∩ W0,s , there is a child w of u which is also in W0,s . This means that States (u) =
States+1 (u). Similarly, one may conclude that u does not change state if u ∈ V1,s ∩ W1,s .
Therefore u ∈ (V0,s ∩ W1,s ) ∪ (V1,s ∩ W0,s ). Hence (1) holds.
On the other hand, suppose (1) holds. This means that u is not a target and States (u) ≠
States (v). If u is a leaf at stage s, then States+1 (u) = States+1 (v) ≠ States (u). Similarly, if
u ∈ Vσ,s ∩ W1−σ,s for some σ ∈ {0, 1}, States+1 (u) ≠ States (u).
(2) Say DeleteEdge(u, v) is performed at stage s + 1. Suppose u changes state. It must be that
u ∉ Zs and States (u) = States (v). Suppose further that u is not a leaf at stage s + 1. If
u ∈ V0,s ∩ W1,s , there is another child w of u which is in W1,s and u does not change state at
stage s + 1. Similarly, u does not change state if u ∈ V1,s ∩ W0,s . Therefore u ∈ (V0,s ∩ W0,s ) ∪ (V1,s ∩ W1,s ). Now
suppose u is a leaf at stage s + 1. Then States+1 (u) = 1 and States (u) = States (v) = 0.
Therefore (2) holds.
On the other hand, suppose u ∉ Zs and States (u) = States (v). If v was the only child of u and
u ∈ W0,s , then States+1 (u) = 1 ≠ States (u). If u ∈ Vσ,s ∩ Wσ,s for some σ ∈ {0, 1}, all
children of u apart from v are in state 1 − σ as otherwise u is stable. Therefore States+1 (u) =
1 − σ ≠ States (u).
(3) Say SetTarget(u) is performed at stage s + 1. Then States+1 (u) = 0 and hence u changes state
if and only if States (Pu ) = 1.
(4) Say UnsetTarget(u) is performed at stage s + 1. By assumption u is a target node at stage s,
thus States (u) = 0. If u ∈ V0,s , then by Lemma 1, States+1 (u) = 1 if and only if all children of
u have state 1 if and only if νs (u) = 0. If u ∈ V1,s , then by Lemma 1 again, States+1 (u) = 1 if
and only if some child of u has state 1 if and only if hs (u) − νs (u) > 0. Hence u changes state
if and only if (3) holds.
(5) Say SwitchPosition(u) is performed at stage s + 1. Suppose u changes state. Then u is not a
target node at stage s. If poss (u) ≠ States (Pu ), then by Lemma 1 all children of u have state
States (Pu ) and thus States+1 (u) = States (u). Therefore it must be that poss (u) = States (Pu ).
If u is stable, then some child of u has state 1 − States (Pu ), as otherwise u would not
change state. This implies hs (u) − νs (u) > 0. If u is not stable, then u has exactly one child with
state States (Pu ) and some other children with state 1 − States (Pu ). This implies hs (u) > 1.
Hence (4) holds.
On the other hand, if (4) holds, then u is not a target and u ∈ Vσ,s ∩ Wσ,s for some σ ∈ {0, 1}.
If u is stable and hs (u) > νs (u), then some child of u has state 1 − σ at stage s + 1. If u
is not stable and hs (u) > 1, then again some child of u has state 1 − σ at stage s + 1. In
both cases u changes state. ⊓⊔
For each update operation above, the algorithm checks whether the respective condition listed in
Lemma 5 is met. If the condition holds, then the algorithm applies ChangeState(u). The following
lemma follows easily from Lemmas 4 and 5.
Lemma 6. Suppose an update operation is applied at stage s + 1. Then u is a winning position
for Player States+1 (Pu ) in the game Gs+1 .
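To make the checks concrete, the conditions of Lemma 5 for SetTarget, UnsetTarget and SwitchPosition can be phrased as boolean predicates over the per-node variables. This is only an illustrative sketch: the Node record and its field names are ours, not part of the paper's data structure, and the fields are assumed to hold the stage-s values.

```python
from dataclasses import dataclass

@dataclass
class Node:
    pos: int          # pos_s(u): the player owning u (0 or 1)
    h: int            # h_s(u): number of children of u
    nu: int           # nu_s(u): children whose state equals State_s(u)
    stable: bool      # Stable_s(u): whether u is in Z_s
    is_target: bool   # whether u is in T_s
    block_state: int  # State_s(P_u): state of the canonical element of P_u

def changes_on_set_target(u: Node) -> bool:
    # Lemma 5(3): u changes state iff State_s(P_u) = 1
    return u.block_state == 1

def changes_on_unset_target(u: Node) -> bool:
    # Lemma 5(4), condition (3)
    return (u.pos == 0 and u.nu == 0) or (u.pos == 1 and u.nu < u.h)

def changes_on_switch_position(u: Node) -> bool:
    # Lemma 5(5), condition (4)
    if u.is_target or u.pos != u.block_state:
        return False
    return u.h > u.nu if u.stable else u.h > 1
```

For instance, a stable non-target node owned by Player 1 with block state 1, three children and only two of them sharing its state satisfies condition (4) and hence changes state under SwitchPosition.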

6.2 Update the variables h(u), ν(u) and Stable(u)


It remains to describe the computation for updating the values of h(u), ν(u) and Stable(u) after
applying an update operation. Note that h(u) can be updated easily: in the case of InsertEdge(u, v),
hs+1 (u) is set to hs (u) + 1; in the case of DeleteEdge(u, v), hs+1 (u) is set to hs (u) − 1;
SetTarget(u), UnsetTarget(u) and SwitchPosition(u) do not change h(u).
Recall from Section 4.2 that ν(u) is only defined on the set of stable nodes. For simplicity, we
also allow ν(u) to be defined when u is not stable. When u is not stable, ν(u) need not be equal
to |{v | (u, v) ∈ E, State(u) = State(v)}|. Below we list the computations for updating ν(u) and
Stable(u) after each update operation.
1. Suppose InsertEdge(u, v) is performed at stage s + 1. Then set

   νs+1 (u) = νs (u) + 1   if u ∈ Zs and States (u) = States (v);
              2            if u ∉ Zs , hs (u) > 0 and States (u) = States (v) = poss (u);
              νs (u)       otherwise;

   Stables+1 (u) = true          if u ∉ Zs , hs (u) > 0 and States (u) = States (v) = poss (u);
                   Stables (u)   otherwise.
2. Suppose DeleteEdge(u, v) is performed at stage s + 1. Then set

   νs+1 (u) = νs (u) − 1   if u ∈ Zs and States (u) = States (v);
              νs (u)       otherwise;

   Stables+1 (u) = false         if u ∈ Zs and νs+1 (u) < 2;
                   Stables (u)   otherwise.
3. Suppose SetTarget(u) is performed at stage s + 1. Then set

   νs+1 (u) = 0                 if hs (u) = 0 or u ∈ V0,s ∩ W1,s ;
              hs (u)            if u ∈ V1,s ∩ W0,s ;
              νs (u)            if u ∈ Zs ∩ V0,s ;
              hs (u) − νs (u)   if u ∈ Zs ∩ V1,s ;
              1                 if u ∉ Zs and u ∈ V0,s ∩ W0,s ;
              hs (u) − 1        if u ∉ Zs and u ∈ V1,s ∩ W1,s ;

   Stables+1 (u) = true.

4. Suppose UnsetTarget(u) is performed at stage s + 1. Then set

   νs+1 (u) = hs (u) − νs (u)   if States+1 (u) ≠ States (u);
              νs (u)            otherwise;

   Stables+1 (u) = true    if νs+1 (u) > 1 and poss+1 (u) = States+1 (Pu );
                   false   otherwise.
5. Suppose SwitchPosition(u) is performed at stage s + 1. Then set

   νs+1 (u) = hs (u)            if States (u) ≠ poss (u);
              hs (u) − νs (u)   if u ∈ Zs , u ∉ Ts and hs (u) > νs (u);
              hs (u) − 1        if u ∉ Zs and States (u) = poss (u) and hs (u) ≥ 1;
              νs (u)            otherwise;

   Stables+1 (u) = true          if u ∉ Zs , States+1 (u) = poss+1 (u), νs+1 (u) > 1 and hs (u) > 0;
                   false         if u ∈ Zs , u ∉ Ts and νs+1 (u) < 2;
                   Stables (u)   otherwise.

The algorithm updates the variables ν(u) and Stable(u) using the computation above. If u
turns into a stable node, the algorithm applies Split(Pu , u) to preserve the stableness property of
Pu . This finishes the description of the algorithm at stage s + 1.
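As a sketch, items 1 and 2 above (together with the h(u) update from the start of this subsection) translate into straightforward field updates. The Node record below is our own illustration, not the paper's data structure; all fields are read at their stage-s values before being overwritten.

```python
from dataclasses import dataclass

@dataclass
class Node:
    state: int    # State_s(u)
    pos: int      # pos_s(u)
    h: int        # h_s(u)
    nu: int       # nu_s(u)
    stable: bool  # Stable_s(u), i.e. whether u is in Z_s

def on_insert_edge(u: Node, v: Node) -> None:
    """Item 1: update h, nu, Stable at u after InsertEdge(u, v)."""
    old_h = u.h
    u.h = old_h + 1                                     # u gains the child v
    if u.stable and u.state == v.state:
        u.nu += 1                                       # one more same-state child
    elif not u.stable and old_h > 0 and u.state == v.state == u.pos:
        u.nu = 2                                        # the unique old same-state child plus v
        u.stable = True

def on_delete_edge(u: Node, v: Node) -> None:
    """Item 2: update h, nu, Stable at u after DeleteEdge(u, v)."""
    u.h -= 1                                            # u loses the child v
    if u.stable and u.state == v.state:
        u.nu -= 1
    if u.stable and u.nu < 2:
        u.stable = False                                # too few same-state children
```

For example, inserting an edge to a same-state child below a non-stable node that already has one same-state child makes the node stable with ν = 2; deleting that edge again reverses both changes.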
The correctness of the algorithm is proved by Lemma 6 and the following lemma.
Lemma 7. Suppose an update operation is applied at stage s + 1. The following hold at stage
s + 1.
1. If u is stable then νs+1 (u) = |{v | (u, v) ∈ Es+1 , States+1 (u) = States+1 (v)}|.
2. The node u is stable if and only if Stables+1 (u) is true.
3. The node u has exactly hs+1 (u) children.
Proof. The third statement is immediate from the description of the algorithm. To prove the first
statement, we use n(u) to denote the number
|{v | (u, v) ∈ Es+1 , States+1 (u) = States+1 (v)}|
and prove that u ∈ Zs+1 implies νs+1 (u) = n(u). We prove the first two statements of the lemma
for each type of update operation in turn.
1. Say InsertEdge(u, v) is applied at stage s + 1. Suppose u ∈ Zs+1 . If u ∈ Zs , then by Lemma 5,
u does not change state at stage s + 1. Therefore n(u) = νs (u) + 1 if States (u) = States (v)
and n(u) = νs (u) otherwise. If u ∉ Zs , then u turns into a stable node at stage s + 1 only if
States (u) = poss (u) and States (v) = States (u), and in this case νs+1 (u) = 2. This proves the
first statement of the lemma.
If u ∈ Zs , then u remains a stable node at stage s + 1. Suppose u ∉ Zs . If hs (u) > 0 and
States (u) = States (v) = poss (u), then u has a unique child w at stage s with States (w) =
States (u). Thus u ∈ Zs+1 . On the other hand, suppose u ∈ Zs+1 . If States (u) ≠ States+1 (u),
then States (u) ≠ poss (u). This means that all children of u are in WStates (u),s and u ∉ Zs+1 .
Therefore it must be that u did not change state. By the definition of a stable node, hs (u) > 0 and
States (u) = States (v) = poss (u). This proves the second statement.
2. Say DeleteEdge(u, v) is applied at stage s + 1. Suppose u ∈ Zs+1 . If u ∈ Zs then deleting the
edge (u, v) will not cause u to change state. Therefore n(u) = νs (u) − 1 if States (u) = States (v)
and n(u) = νs (u) otherwise. If u ∉ Zs , then u must change its state at stage s + 1 to become
stable. This means that poss (u) ≠ States (u) and all children of u have state States (u) at stage
s. By Lemma 5, this means that u is a leaf at stage s + 1 and States (u) = 0. However then u ∉ Zs+1 ,
a contradiction. Therefore u ∈ Zs+1 implies u ∈ Zs and νs+1 (u) = n(u). This
proves the first statement.
We proved above that u ∉ Zs implies u ∉ Zs+1 . Suppose u ∈ Zs . Then u does not
change its state at stage s + 1. Therefore u ∉ Zs+1 if and only if States (u) = States (v) and
νs+1 (u) < 2. Hence Stables+1 (u) is true if and only if u ∈ Zs+1 . The second statement of the
lemma is proved.
3. Say SetTarget(u) is applied at stage s+1. By definition u ∈ Zs+1 and thus the second statement
is proved. If hs (u) = 0 (u is a leaf) then n(u) = 0. Note that States+1 (u) = 0. If u ∈ V0,s ∩W1,s ,
all children of u have state 1 at stage s and thus again n(u) = 0. If u ∈ V1,s ∩ W0,s , then all
children of u have state 0 at stage s and thus n(u) is the number hs (u) of children of u.
Checking n(u) = νs+1 (u) in all other cases is similar in spirit to the argument above and is
straightforward.
4. Say UnsetTarget(u) is applied at stage s + 1. Suppose u ∈ Zs+1 . Since u is no longer a target,
it must be that States+1 (u) = poss+1 (u). If States (u) = States+1 (u), u does not change state
at stage s + 1 and thus n(u) = νs (u). Otherwise u changes state and n(u) = hs (u) − νs (u).
This proves the first statement.
By the definition of a stable node, it is easy to see that u ∉ Zs+1 unless poss+1 (u) = States+1 (Pu )
and νs+1 (u) > 1. Hence Stables+1 (u) is true if and only if u ∈ Zs+1 .
5. Say SwitchPosition(u) is applied at stage s + 1. Suppose u ∈ Zs+1 . If u is a target at stage
s, then it is stable at stage s + 1 and n(u) = νs (u) = νs+1 (u). If States (u) ≠ poss (u), then
all children of u have state States (u) at stage s and States+1 (u) = States (u). In this case
n(u) = hs (u) = νs+1 (u). Now suppose States (u) = poss (u) and u is not a target. Then after
applying the operation u must change state in order to be stable at stage s + 1. This means
that there are some children of u that have state 1 − States (u) = States+1 (u) at stage s. If
u ∈ Zs , then the number of such nodes is hs (u) − νs (u), and if u ∉ Zs , then the number of
such nodes is hs (u) − 1, because there is exactly one child of u with state States (u). In all cases
n(u) = νs+1 (u). This proves the first statement of the lemma.
Suppose u ∉ Zs . Then either States (u) ≠ poss (u), or States (u) = poss (u) and no
more than one child of u has state States (u) at stage s. In the former case, u ∈ Zs+1 if
and only if States+1 (u) = poss+1 (u) and hs (u) = νs+1 (u) > 1, if and only if Stables+1 (u)
is true. In the latter case, u ∈ Zs+1 if and only if hs (u) > 0, States+1 (u) = poss+1 (u) and
νs+1 (u) = hs (u) − 1 > 1, if and only if Stables+1 (u) is true. Now suppose u ∈ Zs . Then
u ∉ Zs+1 if and only if u is not a target and νs+1 (u) = hs (u) − νs (u) < 2, if and only if
Stables+1 (u) is false. This shows that Stables+1 (u) is true if and only if u ∈ Zs+1 , and
the second statement is proved. ⊓⊔
7 Amortized Complexity

Lastly, we analyze the amortized complexity of the algorithm. In the analysis, we count opera-
tions such as pointer manipulations and comparisons as low-level operations with constant time
complexity, while counting each splay tree operation as a unit high-level operation. Recall from
Theorem 2 that each splay tree operation has amortized time complexity O(log n), where n denotes
the number of nodes in the underlying forest. We discuss the amortized time complexity of each
operation below:

– Each Query(u) operation takes the parameter u and searches for the canonical element in Pu .
This requires applying the Splay(Pu , u) operation. By Theorem 2, Query(u) has amortized time
complexity O(log n).
– Each InsertNode(u, i, j) operation runs a fixed number of low-level operations. Hence by The-
orem ?? it takes constant time. For the same reason, DeleteNode(u) also takes constant time
under the assumption that u is the root. When u is not the root, DeleteNode(u) applies
DeleteEdge(p(u), u) and hence has the same time complexity as the DeleteEdge operation.
– Each of the InsertEdge(u, v), DeleteEdge(u, v), SetTarget(u), UnsetTarget(u) and SwitchPosition(u)
algorithms involves applying the ChangeState(u) algorithm at most once. Additionally, it includes
a fixed number of splay tree operations and low-level operations. The ChangeState(u) algo-
rithm applies the TraceUp operation at most twice, plus a fixed number of other splay tree or
low-level operations. In turn, TraceUp(u) iteratively runs a while-loop (see Alg. ??), each
iteration of which contains a fixed number of splay tree or low-level operations.

From the analysis above, the time it takes to perform each of InsertEdge(u, v), DeleteEdge(u, v),
SetTarget(u), UnsetTarget(u) and SwitchPosition(u) is O(log n + t log n), where t is the number of
iterations of the while loop run by the TraceUp(u) algorithm. Recall from Section 5 that Ps (u)
denotes the maximal homogeneous stable path that contains u at stage s. For any u ∈ Vs , define
Ts (u) as the subgraph of Fs restricted to the nodes

{v | Ps (u) ∩ Ps (v) ≠ ∅}.

It is clear that Ts (u) is a tree and v ∈ Ts (u) implies Pv ⊆ Ts (u). Therefore we let TsPath (u) be the
subgraph of the partition forest FsPath restricted to the set

{Pv | v ∈ Ts (u)}.

Suppose one of the above update operations is applied at stage s + 1 and it runs ChangeState(u).
Let w be the parent of the ≤Fs+1 -least node in Pu at stage s + 1. By the description of the
ChangeState(u) algorithm, the number t of while-loop iterations run at stage s + 1 is at most
the sum of the number of ancestors of Pu in the tree TsPath (u) and the number of ancestors of
Pw in TsPath (w). In the worst case, before running TraceUp(u), each ancestor of Pu in TsPath (u)
and of Pw in TsPath (w) contains exactly one node, and the TraceUp(u) operation will run exactly
|Path[r, u]| while-loop iterations, where r is the root of the tree Ts (w). This leads to an O(n log n)
time cost. On the other hand, we will argue below that over a sequence of operations, the total
time used is small.

Lemma 8. The amortized number of while-loop iterations run by TraceUp(u) is O(log n).

We use a credit accounting scheme (see [11]) to analyze the amortized number of while-loop
iterations run by TraceUp(u). At each stage s, each element P in the partition forest stores a
certain number of credits cs (P ). Instead of recording the number of while-loop iterations that
have already occurred, these credits record the number of iterations that we can “afford” in
the future. Our plan is:

– At each stage s, we introduce in total O(log n) new credits, which are added to the credits
of some P ∈ VsPath .
– To run a while-loop iteration, we first need to deduct one credit from some P ∈ Vs+1Path ; this
is called paying for the iteration.
– The credits which are not spent at this stage are carried over to subsequent stages. They can
be used to pay for while-loop iterations in the future.

We want to make sure that at each stage, the total number of credits stored in the forest is nonnegative.
In this way, we can make sure that the amortized number of while-loop iterations performed at
this stage is O(log n).
We define Ts (Pu ) as the subtree of TsPath (u) rooted at Pu and let

δs (Pu ) = ⌊log |Ts (Pu )|⌋.

In the rest of the section, we describe a way to create and allocate credits at each stage s that
preserves the following invariant:

(I) For all P ∈ VsPath , cs (P ) ≥ δs (P ) after performing operation αs .

To prove Lemma 8, we first describe a way to create and allocate credits for the TraceUp(u)
operation alone (without state changes or any changes to the underlying graph F). We will then
describe how to take into account the state changes and the changes to F at stage s + 1.
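The bookkeeping behind this scheme can be isolated in a small ledger: deposits introduce new credits, each while-loop iteration deducts one credit from some block, and the invariant (I) is checked against the current δ values. The class below is a toy model of the accounting only, with made-up block names and δ values; it is not the TraceUp algorithm itself.

```python
class CreditLedger:
    """Toy credit-accounting model: blocks carry credits; invariant (I)
    requires every block P to store at least delta(P) credits."""

    def __init__(self):
        self.credits = {}
        self.total_created = 0   # all credits ever introduced

    def deposit(self, block, amount):
        # introduce `amount` new credits and store them at `block`
        self.total_created += amount
        self.credits[block] = self.credits.get(block, 0) + amount

    def pay_iteration(self, block):
        # one while-loop iteration costs one credit stored at `block`
        assert self.credits.get(block, 0) >= 1, "iteration not paid for"
        self.credits[block] -= 1

    def invariant_holds(self, delta):
        # invariant (I): c(P) >= delta(P) for every block P
        return all(self.credits.get(p, 0) >= d for p, d in delta.items())

ledger = CreditLedger()
ledger.deposit("P0", 3)      # O(log n) new credits at this stage
ledger.pay_iteration("P0")   # pay for one while-loop iteration
```

The amortized cost per stage is exactly the number of credits deposited, since every iteration is paid for out of the stored balance.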

7.1 Credit analysis for TraceUp(u)


In this section we assume that no state changes or changes to the underlying graph F take place. Our
goal is to prove the following lemma.
Lemma 9. Suppose TraceUp(u) is applied. We can create O(log n) new credits to pay for the
while-loop iterations and preserve the invariant (I).
Proof. Suppose (I) holds at stage s and TraceUp(u) is applied at stage s + 1. When u is not the
≤Fs -maximum element in Pu , the operation splits Pu into two paths L(u) and R(u), where
L(u) becomes the new Pu . Let δs,0 (P ), Ts,0 (P ) and cs,0 (P ) denote respectively the updated values
of δs (P ), Ts (P ) and cs (P ). We move the cs (Pu ) credits to the new Pu ; in other words, we set
cs,0 (Pu ) = cs (Pu ). Note that R(u) is a new element in V Path that has no credits assigned to it.
Therefore we create δs,0 (R(u)) new credits and assign them to R(u) to preserve (I).
Let P0 >FsPath P1 >FsPath . . . >FsPath Pm be the sequence of ancestors of Pu in the tree TsPath (u).
Note that Pu = P0 . For 0 < j ≤ m, we use δs,j (P ), Ts,j (P ) and cs,j (P ) to denote respectively the
values of δs (P ), Ts (P ) and cs (P ) in the updated partition forest after running j iterations of the
while loop.
Suppose (I) holds after running j − 1 iterations of the while loop, 0 < j ≤ m. During the jth
iteration, let w = p(Min(Pu )). The algorithm splits Pj into two paths P′ = L(w) and P′′ = R(w).
It then joins the current Pu with P′ to form the new Pu , which we denote by Pu′ . Note that

|Ts,j (Pu′ )| = |Ts,j−1 (Pj )|

and therefore we “move” the cs,j−1 (Pj ) credits on Pj to Pu′ by setting cs,j (Pu′ ) = cs,j−1 (Pj ). This
satisfies the invariant (I) on the updated Pu′ .
Then we create

2(δs,0 (Pj ) − δs,0 (Pj−1 ))

new credits, among which 1 credit is used to pay for this iteration. The remaining new credits,
together with the cs,j−1 (Pu ) credits that were assigned to Pu at stage s, are assigned to P′′ . In
other words, we let

cs,j (P′′ ) = 2(δs,0 (Pj ) − δs,0 (Pj−1 )) + cs,j−1 (Pu ) − 1.

By (I), cs,j−1 (Pu ) ≥ δs,j−1 (Pu ). Thus

cs,j (P′′ ) ≥ 2(δs,0 (Pj ) − δs,0 (Pj−1 )) + δs,j−1 (Pu ) − 1 ≥ δs,j−1 (Pu ) − 1.

Therefore if δs,j−1 (Pu ) > δs,j (P′′ ), then

cs,j (P′′ ) ≥ δs,j (P′′ ),

and (I) is satisfied for P′′ . Therefore, we assume

δs,j−1 (Pu ) ≤ δs,j (P′′ ). (5)

Suppose δs,0 (Pj ) = δs,0 (Pj−1 ). Note that

|Ts,j−1 (Pu )| = |Ts,0 (Pj−1 )| (6)

and

|Ts,j (P′′ )| ≤ |Ts,j−1 (Pj )| = |Ts,0 (Pj )|. (7)

By (5) and (6) we have

δs,j (P′′ ) ≥ δs,j−1 (Pu ) = δs,0 (Pj−1 ).

By (7) we have

δs,j (P′′ ) ≤ δs,j−1 (Pj ) = δs,0 (Pj ).

Therefore

δs,0 (Pj−1 ) = δs,j (P′′ ) = δs,0 (Pj ).

Note also that

|Ts,0 (Pj )| ≥ |Ts,j (P′′ )| + |Ts,0 (Pj−1 )| + 1.

This implies

δs,0 (Pj ) ≥ ⌊log(|Ts,0 (Pj−1 )| + |Ts,j (P′′ )| + 1)⌋
         ≥ ⌊log(2|Ts,0 (Pj−1 )|)⌋
         = 1 + ⌊log(|Ts,0 (Pj−1 )|)⌋ > δs,0 (Pj−1 ),

a contradiction with the assumption that δs,0 (Pj ) = δs,0 (Pj−1 ). Hence

δs,0 (Pj ) > δs,0 (Pj−1 ). (8)

Therefore we have

cs,j (P′′ ) = 2(δs,0 (Pj ) − δs,0 (Pj−1 )) + cs,j−1 (Pu ) − 1
          ≥ δs,0 (Pj ) − δs,0 (Pj−1 ) + cs,j−1 (Pu )     (by (8))
          ≥ δs,0 (Pj ) − δs,0 (Pj−1 ) + δs,j−1 (Pu )     (by the inductive assumption)
          ≥ δs,0 (Pj )                                   (by (6))
          ≥ δs,j (P′′ ).                                 (by (7))

Hence (I) is satisfied after j iterations of the while loop.

Summing up, we create δs,0 (R(u)) new credits before running the while loop and 2(δs,0 (Pj ) −
δs,0 (Pj−1 )) new credits for the jth iteration of the while loop. Hence the total number of new
credits created is

δs,0 (R(u)) + Σ1≤j≤m 2(δs,0 (Pj ) − δs,0 (Pj−1 )) ≤ 2δs,0 (Pm ) − δs,0 (Pu ) ∈ O(log n). ⊓⊔
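The final bound is just a telescoping sum; with hypothetical δ values (chosen strictly increasing along the ancestor path, as (8) forces whenever an iteration deposits credits, and with δs,0 (R(u)) ≤ δs,0 (Pu ) since R(u)'s subtree lies inside the original subtree of Pu ) the arithmetic can be checked directly:

```python
# Hypothetical delta values along the ancestor path P_0, P_1, ..., P_m;
# (8) forces delta_{s,0}(P_j) > delta_{s,0}(P_{j-1}) for each paying iteration.
deltas = [1, 3, 4, 7, 10]   # delta_{s,0}(P_0), ..., delta_{s,0}(P_m)
delta_R = 1                 # delta_{s,0}(R(u)), at most delta_{s,0}(P_0)

total = delta_R + sum(2 * (deltas[j] - deltas[j - 1]) for j in range(1, len(deltas)))

# the sum telescopes to 2*(delta(P_m) - delta(P_0)) ...
assert total == delta_R + 2 * (deltas[-1] - deltas[0])
# ... so with delta_R <= delta(P_0) the total is at most 2*delta(P_m) - delta(P_0)
assert total <= 2 * deltas[-1] - deltas[0]
```

Since each δ value is at most ⌊log n⌋, the right-hand side of the last inequality is O(log n).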
7.2 Credit analysis for the update operations

Suppose the invariant (I) holds at stage s and an update operation is applied at stage s + 1
which also runs ChangeState(u). Let w be the parent of the ≤Fs+1 -least node in Pu after running
ChangeState(u). Suppose w ∉ Zs ; then by Lemma 3, w will turn into a stable node at stage s + 1.
In this case the algorithm splits Pw into L(w) and R(w), where L(w) becomes the new Pw . Since
w ∈ Zs+1 , δs+1 (Pw ) ≤ δs (Pw ) and (I) is preserved for Pw . The path R(w) is a newly created
block in Vs+1Path which does not have any credits. Therefore we preserve (I) on R(w) by creating
δs+1 (R(w)) new credits and assigning them to R(w). Note that δs+1 (R(w)) ∈ O(log n).
Suppose w ∈ Zs ; then the ChangeState(u) operation may result in w turning into a non-stable
node. In this case, the ChangeState(u) operation runs TraceUp(w). By Lemma 9, we create O(log n)
new credits to perform the TraceUp(w) operation. Afterwards, Pw has at least δ′s+1 (Pw ) credits,
where δ′s+1 (Pw ) is the value of δs+1 (Pw ) assuming w is stable. Note that the updated Pw is the
root of the tree Ts+1Path (w). Since w becomes non-stable, the tree Ts+1Path (w) is expanded by attaching
all trees

T ∈ {TsPath (v) | (v, w) ∈ Es+1 , States+1 (v) = States+1 (w)}

as subtrees of Pw . Therefore we preserve (I) on Pw by creating δs+1 (Pw ) new credits and assigning
them to Pw . Note that δs+1 (Pw ) ∈ O(log n). Since Pw is the root of Ts+1Path (w), (I) is preserved on Pv
for all v <Fs+1 w.
We now look at the path Pu . By Lemma 9, we create O(log n) new credits for running the
TraceUp(u) operation. Afterwards, for all Pv ≥Fs+1Path Pu , Pv has at least δ′s+1 (Pv ) credits, where
δ′s+1 (Pv ) is the value of δs+1 (Pv ) without taking into account the state changes and the changes on
F. Note that for Pv <Fs+1Path Pu , the value of δ′s+1 (Pv ) coincides with the actual value of δs+1 (Pv ).
Therefore, to preserve (I) on Pu , we only need to create and assign to Pu δs+1 (Pu ) additional new
credits. This finishes the description of the accounting scheme at stage s + 1. The above argument
shows that at stage s + 1, we create and allocate O(log n) new credits in total to pay for the
while-loop iterations and preserve the invariant (I). This proves Lemma 8.
In conclusion, this paper proves the following theorem.
Theorem 3. There exists a dynamic algorithm to solve reachability games played on trees which
supports the query and update operations with the following amortized time complexity:

– Query(u) runs in O(log n) time;
– InsertNode(u, i, j) runs in constant time;
– DeleteNode(u) runs in O(log n) time;
– InsertEdge(u, v), DeleteEdge(u, v), SetTarget(u), UnsetTarget(u) and SwitchPosition(u) run
in O(log2 n) time;

where n is the number of nodes in the underlying forest. The algorithm uses a data structure of
size O(n).

Proof. The time complexity of the algorithm is analyzed above. By Lemma 8, each update opera-
tion (except InsertNode(u, i, j) and DeleteNode(u)) runs amortized O(log n) while-loop iterations,
each of which takes time O(log n). Hence these update operations have amortized time complexity
O(log2 n).
At each stage, the algorithm maintains the base structure F and the partition V Path , where
each block of V Path is represented as a splay tree. The size of each splay tree is linear in the
number of nodes in its block. For each node u in F, the algorithm maintains a constant number
of additional pointers and variables. Therefore, the data structure has size O(n). ⊓⊔
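For contrast with the amortized O(log2 n) update bound of Theorem 3, recomputing the winning states from scratch takes time linear in the tree. The sketch below follows the characterization used throughout the paper (a target has state 0, a non-target leaf has state 1, a Player-0 node has state 0 iff some child does, a Player-1 node iff all children do); the class and function names are ours, not the paper's.

```python
from dataclasses import dataclass, field

@dataclass
class GameNode:
    pos: int                      # owner of the node (0 or 1)
    is_target: bool = False
    children: list = field(default_factory=list)

def state(u: GameNode) -> int:
    """From-scratch O(n) evaluation: state 0 means Player 0 can force
    the token into a target node starting from u."""
    if u.is_target:
        return 0
    if not u.children:            # non-target leaf: Player 0 loses
        return 1
    child_states = [state(c) for c in u.children]
    if u.pos == 0:                # Player 0 moves: wins if some child is winning
        return 0 if 0 in child_states else 1
    return 0 if all(s == 0 for s in child_states) else 1
```

On a root owned by Player 1 with one target child and one plain leaf child, Player 1 escapes to the plain leaf, so the root has state 1; if Player 0 owned the same root it would have state 0. The dynamic algorithm of this paper avoids exactly this linear recomputation after each update.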

References
1. Balmin, A., Papakonstantinou, Y., Vianu, V.: Incremental validation of XML documents. In: ACM Trans.
Database Syst., 29(4):710-751. (2004)
2. Chatterjee, K., Henzinger, T., Piterman, N.: Algorithms for Büchi games. In: Proceedings of the 3rd Workshop
of Games in Design and Verification (GDV’06). (2006)
3. Cormen, T., Leiserson, C., Rivest, R., Stein, C.: Introduction to algorithms, Second Edition. MIT Press and
McGraw-Hill. (2001)
4. Demetrescu, C., and Italiano, G.: Fully dynamic transitive closure: Breaking through the O(n2 ) barrier. In:
Proceedings of FOCS’00, pp. 381-389. (2000)
5. Demetrescu, C., and Italiano, G.: A new approach to dynamic all pairs shortest paths. In: Proceedings of
STOC’03, pp. 159-166. (2003)
6. Eppstein, D., Galil, Z., Italiano, G.: Dynamic graph algorithms. In: Algorithms and Theoretical Computing
Handbook, M. Atallah, ed., CRC Press. (1999)
7. Grädel, E.: Model checking games. In: Proceedings of WOLLIC’02, vol.67 of Electronic Notes in Theoretical
Computer Science. Elsevier. (2002)
8. Grädel, E., Thomas, W., Wilke, T.: Automata, logics, and infinite games. LNCS 2500, Springer, Heidelberg.
(2002)
9. Immerman, N.: Number of quantifiers is better than number of tape cells. In: Journal of Computer and System
Sciences, 22:384-406. (1981)
10. King, V.: Fully dynamic algorithms for maintaining all-pairs shortest paths and transitive closure in digraphs.
In: Proceedings of FOCS’99, pp 81-91. (1999)
11. Kozen, D.: The design and analysis of algorithms (Monographs in Computer Science). Springer-Verlag, New
York. (1992)
12. Murawski, A.: Reachability games and game semantics: Comparing nondeterministic programs. In: Proceedings
of LICS’08, pp. 353-363. (2008)
13. Tarjan, R.: Data structures and network algorithms, vol 44 of Regional Conference Series in Applied Mathe-
matics. SIAM (1983)
14. Radmacher, F., Thomas, W.: A game theoretic approach to the analysis of dynamic networks. In: Proceedings of
the 1st Workshop on Verification of Adaptive Systems, VerAS 2007, vol 200(2) of Electronic Notes in Theoretical
Computer Science, pp. 21-37. Kaiserslautern, Germany (2007)
15. Roditty, L.: A faster and simpler fully dynamic transitive closure. In: Proceedings of SODA’03, pp. 401-412.
(2003)
16. Roditty, L., Zwick, U.: Improved dynamic reachability algorithm for directed graphs. In: Proceedings of
FOCS’02, pp. 679-689. (2002)
17. Roditty, L., Zwick, U.: A fully dynamic reachability algorithm for directed graphs with an almost linear update
time. In: Proceedings of 36th ACM Symposium on Theory of Computing (STOC’04), pp. 184-191. (2004)
18. Sleator, D., Tarjan, R.: Self-adjusting binary search trees. In: Journal of the ACM, 32(3):652-686. (1985)
19. Thomas, W.: Infinite games and verification. In: Proceedings of the International Conference on Computer
Aided Verification CAV’02, LNCS 2404:58-64. (2002)
