
Fredholm Determinants,
Jimbo-Miwa-Ueno τ-Functions,
and Representation Theory
ALEXEI BORODIN
Institute for Advanced Study
AND
PERCY DEIFT
Courant Institute
Abstract
The authors show that a wide class of Fredholm determinants arising in the representation theory of big groups, such as the infinite-dimensional unitary group, solve Painlevé equations. Their methods are based on the theory of integrable operators and the theory of Riemann-Hilbert problems. © 2002 Wiley Periodicals, Inc.
Contents
Introduction 1161
1. Harmonic Analysis on the Infinite-Dimensional Unitary Group 1166
2. Continuous $_2F_1$ Kernel: Setting of the Problem 1174
3. The Resolvent Kernel and the Corresponding Riemann-Hilbert Problem 1183
4. Associated System of Linear Differential Equations with Rational Coefficients 1190
5. General Setting 1197
6. Isomonodromy Deformations: The Jimbo-Miwa-Ueno τ-Function 1200
7. Painlevé VI 1206
8. Other Kernels 1210
9. Differential Equations: A General Approach 1222
Appendix. Integrable Operators and Riemann-Hilbert Problems 1224
Bibliography 1227
Communications on Pure and Applied Mathematics, Vol. LV, 1160-1230 (2002)
FREDHOLM DETERMINANTS 1161
Introduction
Consider the kernel
\[
(0.1)\qquad K(x,y)=\frac{A(x)B(y)-B(x)A(y)}{x-y}\,\sqrt{\psi(x)\psi(y)}\,,
\qquad x,y\in\Bigl(\tfrac12,+\infty\Bigr),
\]
where
\[
\psi(x)=\frac{\sin\pi z\,\sin\pi z'}{\pi^{2}}
\Bigl(x-\tfrac12\Bigr)^{-z-z'}\Bigl(x+\tfrac12\Bigr)^{-w-w'},
\]
\[
A(x)=\Bigl(\frac{x+\frac12}{x-\frac12}\Bigr)^{w'}\,
{}_2F_1\Bigl[{z+w',\;z'+w'\atop z+z'+w+w'}\Bigm|\frac{1}{\frac12-x}\Bigr],
\]
\[
B(x)=\frac{\Gamma(z+w+1)\,\Gamma(z+w'+1)\,\Gamma(z'+w+1)\,\Gamma(z'+w'+1)}
{\Gamma(z+z'+w+w'+1)\,\Gamma(z+z'+w+w'+2)}\,
\frac{1}{x-\frac12}\,\Bigl(\frac{x+\frac12}{x-\frac12}\Bigr)^{w'}\,
{}_2F_1\Bigl[{z+w'+1,\;z'+w'+1\atop z+z'+w+w'+2}\Bigm|\frac{1}{\frac12-x}\Bigr].
\]
Here ${}_2F_1\bigl[{a,b\atop c}\bigm|x\bigr]$ stands for the Gauss hypergeometric function, and $z$, $z'$, $w$, and $w'$ are some complex numbers. We call $K(x,y)$ the continuous ${}_2F_1$ kernel, or simply the ${}_2F_1$ kernel.
The basic problem considered in this paper is the derivation of an ordinary differential equation for the Fredholm determinant $D(s)=\det(1-K|_{(s,+\infty)})$.
This kernel originates in the representation theory of the infinite-dimensional unitary group $U(\infty)$. Briefly, the decomposition of a certain natural representation of $U(\infty)$ into irreducibles is described by a probability measure on the infinite-dimensional space of all irreducible representations; a projection of this measure onto a one-dimensional subspace has distribution function equal to $D(s)=\det(1-K|_{(s,+\infty)})$, where $K$ is as above. The study of this representation-theoretic problem is the main subject of two recent papers [16, 50]. For a more detailed description of the problem and the results in these papers, the reader is referred to Section 1 below.
The problem of deriving differential equations for determinants of the form $D(s)$ as above has a long history. In their pioneering work [33] in 1980, M. Jimbo, T. Miwa, Y. Môri, and M. Sato considered the so-called sine kernel, which has the form (0.1) with $\psi(x)=1/\pi$, $A(x)=\sin x$, and $B(x)=\cos x$. They showed that the determinant of the identity operator minus this kernel restricted to an interval of varying length $s$ can be expressed through a solution of the Painlevé V equation. Their proof was based on the theory of isomonodromy deformations of linear systems of differential equations with rational coefficients. This theory in turn goes back to the work of Riemann, Schlesinger, Fuchs, Garnier, and others. [33] used the results of [32, 34], where the theory of isomonodromy deformations was developed in a setting more general than in the classic papers mentioned above. Along with the one-interval case, [33] also considered the restriction of the sine kernel to
a union of a finite number of intervals. They showed that the corresponding Fredholm determinant, as a function of the endpoints of the intervals, is a τ-function (in the sense of [34]) of the corresponding isomonodromy problem. In other words, it can be expressed through a solution of a completely integrable system of partial differential equations called the Schlesinger equations.
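Determinants of this kind are easy to explore numerically. The sketch below (our illustration, not part of the paper) approximates the sine-kernel determinant $\det(1-K|_{(-s,s)})$ by the standard Nyström recipe: discretize on Gauss-Legendre nodes $x_i$ with weights $w_i$ and take the determinant of the matrix $\delta_{ij}-\sqrt{w_i w_j}\,K(x_i,x_j)$.

```python
import math

def gauss_legendre(n):
    # Gauss-Legendre nodes/weights on [-1, 1] via Newton iteration
    nodes, weights = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))  # classical initial guess
        for _ in range(100):
            p0, p1 = 1.0, x
            for k in range(2, n + 1):  # three-term recurrence for Legendre P_n
                p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
            dp = n * (x * p1 - p0) / (x * x - 1)
            dx = p1 / dp
            x -= dx
            if abs(dx) < 1e-15:
                break
        nodes.append(x)
        weights.append(2.0 / ((1 - x * x) * dp * dp))
    return nodes, weights

def det(m):
    # determinant by Gaussian elimination with partial pivoting
    a = [row[:] for row in m]
    n, d = len(a), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(a[r][c]))
        if p != c:
            a[c], a[p] = a[p], a[c]
            d = -d
        d *= a[c][c]
        for r in range(c + 1, n):
            f = a[r][c] / a[c][c]
            for k in range(c, n):
                a[r][k] -= f * a[c][k]
    return d

def sine_gap(s, n=40):
    # Nystrom approximation of det(1 - K) on (-s, s),
    # with K(x, y) = sin(pi (x - y)) / (pi (x - y))
    t, w = gauss_legendre(n)
    xs = [s * ti for ti in t]
    ws = [s * wi for wi in w]
    def K(x, y):
        return 1.0 if x == y else math.sin(math.pi * (x - y)) / (math.pi * (x - y))
    return det([[(1.0 if i == j else 0.0) - math.sqrt(ws[i] * ws[j]) * K(xs[i], xs[j])
                 for j in range(n)] for i in range(n)])
```

For a short interval the value is close to one minus the expected number of particles, and it decreases toward 0 as the interval grows.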
Kernels of the form (0.1) are of great interest in random matrix theory. Indeed, the Fredholm determinant related to the kernel (0.1) restricted to a domain $J$, with $A$ and $B$ being the $n^{\text{th}}$ and $(n-1)^{\text{th}}$ orthogonal polynomials with the weight function $\psi$, measures the probability of having no particles in $J$ for certain $n$-particle systems called orthogonal polynomial ensembles. Such systems describe the spectra of random unitary and Hermitian matrices. We refer the reader to [41] for details.
The results of [33] attracted considerable attention in the random matrix community. In 1992 M. L. Mehta [42] rederived the Painlevé V equation for the sine kernel. Approximately at the same time, C. Tracy and H. Widom [58] gave their own derivation of this result. Moreover, they produced a general algorithm (see [61]) to obtain a system of partial differential equations for a Fredholm determinant associated with a kernel of type (0.1) restricted to a union of intervals in the case where the functions $\psi$, $A$, and $B$ satisfy a differential equation of the form
\[
(0.2)\qquad \frac{d}{dx}
\begin{pmatrix}\sqrt{\psi(x)}\,A(x)\\[2pt]\sqrt{\psi(x)}\,B(x)\end{pmatrix}
=R(x)\begin{pmatrix}\sqrt{\psi(x)}\,A(x)\\[2pt]\sqrt{\psi(x)}\,B(x)\end{pmatrix},
\]
where $R(x)$ is a traceless rational $2\times2$ matrix. Using their method, they derived different Painlevé equations for a number of kernels relevant to random matrix theory [58]-[61].
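For the sine kernel this hypothesis is immediate: with $\psi=1/\pi$, $A=\sin$, $B=\cos$, the vector $(\sqrt{\psi}A,\sqrt{\psi}B)^{t}$ satisfies a system of type (0.2) with the constant (hence rational) traceless matrix $R=\bigl(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\bigr)$. A finite-difference sanity check (our illustration, not from the paper):

```python
import math

def vec(x):
    # (sqrt(psi) A, sqrt(psi) B) for the sine kernel: psi = 1/pi, A = sin, B = cos
    c = 1.0 / math.sqrt(math.pi)
    return (c * math.sin(x), c * math.cos(x))

def deriv(x, h=1e-6):
    # central finite difference of vec
    a1, b1 = vec(x + h)
    a0, b0 = vec(x - h)
    return ((a1 - a0) / (2 * h), (b1 - b0) / (2 * h))

def rhs(x):
    # R = [[0, 1], [-1, 0]] applied to vec(x); note tr R = 0
    a, b = vec(x)
    return (b, -a)
```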
Shortly after, J. Palmer [51] showed that the partial differential equations arising in the Tracy-Widom method are precisely the Schlesinger equations for an associated isomonodromy problem.
Among more recent papers, we mention (in no particular order) the works [1, 2], where a different approach to the kernels arising from matrix models can be found; the paper [27], where the Painlevé VI equation for the Jacobi kernel was derived; the paper [22], where the theory of Riemann-Hilbert problems was applied to derive the Schlesinger equations for certain kernels and to analyze the asymptotics of solutions; the paper [28], where a multidimensional analogue of the sine kernel was treated using the isomonodromy deformation method; and the papers [26, 64, 65], where, in particular, a two-interval situation was reduced to an ordinary differential equation in one variable.
Returning to our specific ${}_2F_1$ kernel, we find that our functions $\psi$, $A$, and $B$ satisfy an equation of the form (0.2) (see Remark 4.8).
However, the method in [61] leads in our case to considerable algebraic complexity, and we have not been able to see our way through the calculation. A similar situation arose in the case of the (simpler) Jacobi kernel, for which the method in [61] leads to a third-order differential equation. This equation was shown to be equivalent to the (second-order) Painlevé VI equation only in the later work of Haine and Semengue [27]. In the face of these difficulties, we decided to look for a different approach.
The representation-theoretic origin of the ${}_2F_1$ kernel suggests a new approach. It turns out that the construction of the kernel $K$ (see Section 1) strongly indicates that $K$ should have a simple resolvent kernel $L=K(1-K)^{-1}$, in the sense that the formula for $L(x,y)$ should not involve any special functions! At the formal level $\det(1-K)=(\det(1+L))^{-1}$. However, we are interested in the restricted operator $K|_{(s,+\infty)}$, and it is not at all clear that the simple kernel $L$ can be used in any way to compute $D(s)=\det(1-K|_{(s,+\infty)})$. It is the basic observation of this paper that the kernel $L$ can indeed be used to compute $D(s)$, and this leads, as we will see, to the desired differential equations.
In the analysis that follows, a crucial fact is that both kernels $K$ and $L$ are integrable in the sense of [30]. We refer the reader to the appendix for the definition and basic properties of integrable operators and also for the definition of a Riemann-Hilbert problem (RHP). Our method is as follows (see Section 5):
Step 1. The kernel $K(x,y)$ is expressed through an explicit solution of an RHP $(\mathbb{R},v)$, where $v$ comes from $L$ and is simple.
Step 2. $D(s)=\det(1-K|_{(s,+\infty)})$ is expressed through the solution $m_s$ of a normalized RHP $((s,+\infty),v')$, where $v'$ involves special functions as in (0.1).
Step 3. The product $m_s m$ satisfies the RHP $(\mathbb{R}\setminus(s,+\infty)=(-\infty,s],\,v)$, where $v$ is again the simple jump matrix occurring in step 1.
Step 3, which is the key fact, is a consequence of the theory of integrable operators and the following elementary observation: Let $Y=Y_1\cup Y_2\subset\mathbb{C}$ be a union of two contours. Let $m$ and $m_1$ be solutions of the RHPs $(Y,v)$ and $(Y_1,v)$, respectively. Then $m_2\equiv m_1 m^{-1}$ solves the RHP $(Y_2,\,v_2=m_+v^{-1}m_+^{-1}=m_-v^{-1}m_-^{-1})$. Conversely, if $m$ and $m_2$ are solutions of the RHPs $(Y,v)$ and $(Y_2,v_2)$, then $m_1=m_2m$ solves the RHP $(Y_1,v)$.
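In the scalar case the mechanics of such jump relations can be made completely explicit. For a constant jump $v$ on the contour $(-1,1)$, the function $m(\zeta)=\bigl(\frac{\zeta-1}{\zeta+1}\bigr)^{\nu}$ with $\nu=\log v/(2\pi i)$ is analytic off the contour, tends to 1 at infinity, and satisfies $m_+=m_-v$. The sketch below (our illustration, with a hypothetical jump value) checks this numerically:

```python
import cmath

def m(z, v):
    # solution of the scalar RHP m_+ = m_- v on (-1, 1), normalized m -> 1
    # at infinity; the principal branch cut of u**nu is crossed exactly
    # when z crosses (-1, 1), which produces the multiplicative jump v
    nu = cmath.log(v) / (2j * cmath.pi)
    return ((z - 1) / (z + 1)) ** nu
```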
As we will see, if $v$ is the jump matrix associated via the theory of integrable operators with the kernel $L$, then $m_+v^{-1}m_+^{-1}=m_-v^{-1}m_-^{-1}$ is the jump matrix $v'$ associated with the kernel $K$.
In the RH framework, differential equations are deduced from the fact that the jump matrix for the problem at hand can be conjugated to a form that does not depend on the parameters relevant for the problem. A prototypical calculation, which can be traced essentially to the beginning of inverse scattering theory, is as follows (see, e.g., [21] and references therein). The defocusing nonlinear Schrödinger (NLS) equation is associated with the RHP $(Y=\mathbb{R},\,v_{x,t}=e^{i\theta\sigma_3}ve^{-i\theta\sigma_3})$, where
\[
\theta=x\lambda-t\lambda^{2},\qquad
\sigma_3=\begin{pmatrix}1&0\\0&-1\end{pmatrix},\qquad
v=\begin{pmatrix}1-|r(\lambda)|^{2}&-\overline{r(\lambda)}\\ r(\lambda)&1\end{pmatrix},
\]
for some reflection coefficient $r$. If $m$ is a solution of $(\mathbb{R},v_{x,t})$, then $\Psi=me^{i\theta\sigma_3}$ solves the RHP $(\mathbb{R},v)$, which is independent of $x$ and $t$. It follows that $\Psi_x$ and $\Psi_t$ solve the same RHP, and hence $\Psi_x\Psi^{-1}$ and $\Psi_t\Psi^{-1}$ have no jump across $Y=\mathbb{R}$. A short calculation then leads to the Lax pair $\Psi_x=P\Psi$ and $\Psi_t=L\Psi$ for some polynomial matrices $P=P(\lambda)$ and $L=L(\lambda)$. Cross-differentiation $(\Psi_x)_t=(\Psi_t)_x$ then leads to the NLS equation.
As we will see in Section 4, the jump matrix $v$ in steps 1 and 3 is easily conjugated to a jump matrix $V$ that is piecewise constant. In the spirit of the above calculation for NLS, this means that a solution $M$ of the RHP $M_+=M_-V$ can be differentiated with respect to the variable on the contour, and also with respect to $s$, leading as above to relations of the form $\frac{\partial M}{\partial\lambda}=PM$ and $\frac{\partial M}{\partial s}=LM$, where $P=P(\lambda)$ and $L=L(\lambda)$ are now rational. Cross-differentiation then leads to a set of differential relations. In order to extract specific equations, such as PVI for $D(s)=\det(1-K|_{(s,+\infty)})$, we recall the result in [51]. As $V$ is piecewise constant, the above equations $\frac{\partial M}{\partial\lambda}=PM$ and $\frac{\partial M}{\partial s}=LM$ describe an isomonodromy deformation, and hence one can construct an associated tau-function $\tau=\tau(s)$ as in [34]. A separate calculation (Section 6) shows that in fact $D(s)=\tau(s)$, and PVI follows using calculations similar to those in [32, appendix C]. The above calculations generalize immediately to the case where the interval $(s,+\infty)$ is replaced by a union of intervals $J$.
The idea of reducing the Riemann-Hilbert problem for $m_s$ to a problem with a piecewise constant jump matrix has been recently used in [22, 28, 36, 51]; see also [29]. However, the method outlined above of performing the reduction seems to be new.
Our main result (see equations (6.4) and (6.5), and Theorems 6.5 and 7.1) can be stated as follows: Let
\[
J=(a_1,a_2)\cup(a_3,a_4)\cup\cdots\cup(a_{2m-1},a_{2m})\subset\mathbb{R},
\qquad -\infty\le a_1<a_2<\cdots<a_{2m}\le+\infty,
\]
be a union of disjoint (possibly infinite) intervals inside the real line such that the closure of $J$ does not contain the points $\pm\frac12$. Denote by $K_J$ the restriction of the continuous ${}_2F_1$ kernel $K(x,y)$ introduced above to $J$ (see Section 2 for the complete definition of $K(x,y)$). Then under suitable restrictions on the parameters $(z,z',w,w')$, the integral operator defined by $K_J$ is trace class, and $\det(1-K_J)$
is a τ-function for the system of Schlesinger equations
\[
\frac{\partial A}{\partial a_i}=\frac{[C_i,A]}{a_i-\frac12}\,,\qquad
\frac{\partial B}{\partial a_i}=\frac{[C_i,B]}{a_i+\frac12}\,,\qquad
\frac{\partial C_j}{\partial a_i}=\frac{[C_i,C_j]}{a_i-a_j}\,,\quad i\neq j\,;
\]
\[
\frac{\partial C_i}{\partial a_i}=\frac{[A,C_i]}{a_i-\frac12}
+\frac{[B,C_i]}{a_i+\frac12}
+\sum_{j\neq i}\frac{[C_j,C_i]}{a_i-a_j}\,,
\]
where $A$, $B$, $\{C_i\}_{i=1}^{2m}$ are nonzero $2\times2$ matrices (if $a_1=-\infty$ or $a_{2m}=+\infty$, then the corresponding matrices and equations are removed) satisfying
\[
\operatorname{tr}A=\operatorname{tr}B=\operatorname{tr}C_1=\operatorname{tr}C_2=\cdots=\operatorname{tr}C_{2m}=0\,,
\]
\[
\det A=\Bigl(\frac{z-z'}{2}\Bigr)^{2},\qquad
\det B=\Bigl(\frac{w-w'}{2}\Bigr)^{2},\qquad
\det C_1=\det C_2=\cdots=\det C_{2m}=0\,,
\]
\[
A+B+\sum_{i=1}^{2m}C_i=
\begin{pmatrix}-\dfrac{z+z'+w+w'}{2}&0\\[6pt]0&\dfrac{z+z'+w+w'}{2}\end{pmatrix};
\]
that is,
\[
d\,\ln\det(1-K_J)=\sum_{i=1}^{2m}\biggl(
\frac{\operatorname{tr}(AC_i)}{a_i-\frac12}
+\frac{\operatorname{tr}(BC_i)}{a_i+\frac12}
+\sum_{j\neq i}\frac{\operatorname{tr}(C_jC_i)}{a_i-a_j}\biggr)\,da_i\,.
\]
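Each Schlesinger equation above deforms a matrix by a commutator, and commutator flows are isospectral; this is what makes the constraints listed above (tr A = 0, det A fixed, and so on) consistent with the deformation. A minimal numerical sketch (ours, with hypothetical 2 x 2 data; for illustration we also freeze C, although in the full system the matrices C_i evolve too):

```python
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def comm(X, Y):  # [X, Y] = XY - YX
    XY, YX = mul(X, Y), mul(Y, X)
    return [[XY[i][j] - YX[i][j] for j in range(2)] for i in range(2)]

def axpy(X, Y, h):  # X + h Y
    return [[X[i][j] + h * Y[i][j] for j in range(2)] for i in range(2)]

def rk4_step(A, C, a, h):
    # one Runge-Kutta step of dA/da = [C, A] / (a - 1/2)
    f = lambda a_, A_: [[c / (a_ - 0.5) for c in row] for row in comm(C, A_)]
    k1 = f(a, A)
    k2 = f(a + h / 2, axpy(A, k1, h / 2))
    k3 = f(a + h / 2, axpy(A, k2, h / 2))
    k4 = f(a + h, axpy(A, k3, h))
    inc = [[(k1[i][j] + 2 * k2[i][j] + 2 * k3[i][j] + k4[i][j]) / 6
            for j in range(2)] for i in range(2)]
    return axpy(A, inc, h)

def tr(X):
    return X[0][0] + X[1][1]

def det2(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]
```

Along the flow both the trace and the determinant of A stay (numerically) constant, as the isospectral picture predicts.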
If $J=(s,+\infty)$, then with the notation
\[
\nu_1=\nu_2=\frac{z+z'+w+w'}{2}\,,\qquad
\nu_3=\frac{z-z'+w-w'}{2}\,,\qquad
\nu_4=\frac{z-z'-w+w'}{2}\,,
\]
the function
\[
\sigma(s)=\Bigl(s-\tfrac12\Bigr)\Bigl(s+\tfrac12\Bigr)
\frac{d\,\ln\det(1-K|_{(s,+\infty)})}{ds}
-\nu_1^{2}\,s+\frac{\nu_3\nu_4}{2}
\]
solves the σ-form of Painlevé VI:
\[
-\sigma'\Bigl(\bigl(s-\tfrac12\bigr)\bigl(s+\tfrac12\bigr)\sigma''\Bigr)^{2}
=\bigl(2(s\sigma'-\sigma)\sigma'-\nu_1\nu_2\nu_3\nu_4\bigr)^{2}
-\bigl(\sigma'+\nu_1^{2}\bigr)\bigl(\sigma'+\nu_2^{2}\bigr)
\bigl(\sigma'+\nu_3^{2}\bigr)\bigl(\sigma'+\nu_4^{2}\bigr)\,.
\]
As noted above, the property of the kernel $K$ that is important for us is the existence of a simple resolvent kernel $L=K(1-K)^{-1}$. This property seems to be new and was first observed in the context of the representation theory of the infinite symmetric group $S(\infty)$ in [11]. In random matrix theory the operators $K$ that arise are projection operators (of Christoffel-Darboux type) or their scaling limits. All these kernels have norm 1, and hence the operator $L=K(1-K)^{-1}$ is not defined. However, our problem has a different origin that makes it possible not only to define $L$ but also to express it in an explicit way [11, 16].
The method that we introduce can be used to recover the results in [61] for integrable operators with entries satisfying equations of type (0.2). We will illustrate the situation in the specific case of the Airy kernel in Section 9.
In the remainder of the paper we consider a variety of kernels similar to (0.1). First, we apply our methods to the Jacobi kernel and prove that the determinant of the identity minus the Jacobi kernel restricted to a finite union of intervals is the τ-function of the corresponding isomonodromy problem. For the one-interval case we again get the Painlevé VI equation, re-proving the result of [27].
Second, we apply our formalism to the so-called Whittaker kernel and its special case, the Laguerre kernel. The Whittaker kernel appeared in works on the representation theory of the infinite symmetric group [5, 6, 7, 11, 12, 48, 49]. The calculations for the ${}_2F_1$ kernel are applicable to (the simpler case of) the Whittaker kernel. We prove that the Fredholm determinant of the Whittaker kernel on a union of intervals is a τ-function of an isomonodromy problem, and we derive Painlevé V in the one-interval case. This last result was proven in [57], and in [61] for the special case of the Laguerre kernel.
Finally, we observe that the ${}_2F_1$ kernel degenerates in a certain limit to a kernel that we call the confluent hypergeometric kernel. This kernel appears in the problem of decomposing a remarkable family of probability measures on the space of infinite Hermitian matrices into ergodic components; see [14]. It can also be obtained as a scaling limit of Christoffel-Darboux kernels for the so-called pseudo-Jacobi orthogonal polynomials; see [14, 64]. We show that the Fredholm determinant in the one-interval case for this kernel can be expressed in terms of a solution of the Painlevé V equation. The confluent hypergeometric kernel depends on one complex parameter $r$, and for real values of $r$ this last result was proven in [64]. For $r=0$ the kernel turns into the sine kernel, which recovers the original result of [33].
The paper is organized as follows. In Section 1 we describe the representation-theoretic origin of the problem. In Section 2 we introduce the ${}_2F_1$ kernel and study its properties. In Section 3 the resolvent kernel $L$ is defined, and the matrix $m$ in step 1 above is considered. In Section 4 we derive the Lax pair for $M$ as above. In Section 5 we describe the general setting in which our method is applicable. The reader interested primarily in the derivation of the differential equations might want to start reading the paper with this section. In Section 6 we prove that the Fredholm determinants of kernels that satisfy the general conditions of Section 5 are τ-functions of associated isomonodromy problems. In Section 7 we solve our initial problem: the Painlevé VI equation for $\det(1-K|_{(s,+\infty)})$ is derived. Section 8 deals with the applications of our method to the Jacobi, Whittaker, and confluent hypergeometric kernels. Section 9 presents a general approach to kernels of the form (0.1) subject to (0.2), worked out in the case of the Airy kernel. Finally, the appendix contains a brief description of the formalism of integrable operators and Riemann-Hilbert problems.
A discrete version of many of the results in this paper is given in [9].
1 Harmonic Analysis on the Infinite-Dimensional Unitary Group
By a character of a (topological) group $K$ (in the sense of von Neumann) we mean any central (continuous) positive definite function $\chi$ on $K$ normalized by the condition $\chi(e)=1$. Recall that centrality means $\chi(gh)=\chi(hg)$ for any $g,h\in K$, and positive definiteness means $\sum_{i,j}z_i\bar z_j\,\chi(g_ig_j^{-1})\ge0$ for any $z_i\in\mathbb{C}$, $g_i\in K$, $i=1,2,\ldots,n$. The characters form a convex set. The extreme points
of this set are called indecomposable characters, and the other points are called
decomposable characters.
The characters of K give rise to representations in two ways.
Through the Gelfand-Naimark-Segal (GNS) construction each character $\chi$ determines a unitary representation of $K$ that will be denoted as $H(\chi)$. When $\chi$ is indecomposable, $H(\chi)$ is a factor representation of finite type in the sense of von Neumann; see [56]. Recall that $H(\chi)$ being a factor representation means that if $S$ commutes with $\{H(\chi)(g),\,g\in K\}$ and $S$ lies in the weak closure $W$ of $\{H(\chi)(g),\,g\in K\}$, then $S$ is a multiple of the identity. Finite type means that $W$ carries a finite trace function.
Alternatively (see [47]), set $G=K\times K$ and let $\operatorname{diag}K$ denote the diagonal subgroup in $G$ that is isomorphic to $K$. We interpret $\chi$ as a function on the first copy of $K$ in $G$, and then extend it to a function $\tilde\chi$ on the whole group $G$ by the formula
\[
\tilde\chi(g_1,g_2)=\chi(g_1g_2^{-1})\,,\qquad (g_1,g_2)\in G\,.
\]
Note that $\tilde\chi$ is the only extension of $\chi$ that is a $\operatorname{diag}K$-bi-invariant function on $G$. The function $\tilde\chi$ is also positive definite, so the GNS construction assigns to it a unitary representation that we will denote by $T(\chi)$. By its very construction, it possesses a distinguished $\operatorname{diag}K$-invariant vector.
If $\chi$ is indecomposable, then $T(\chi)$ is irreducible. The representations of the form $T(\chi)$ with indecomposable $\chi$'s are exactly the irreducible unitary representations of the group $G$ possessing a $K$-invariant vector. See [47] for details.
If $K$ is a finite or compact group, then the indecomposable characters of $K$ are all of the form
\[
(1.1)\qquad \chi^{\pi}(g)=\frac{\operatorname{tr}(\pi(g))}{\dim\pi}\,,
\]
where $\pi$ is an irreducible (finite-dimensional) representation of $K$, and $\dim\pi$ is its dimension. Moreover, any character can be written in a unique way as a convex linear combination of indecomposable ones:
\[
(1.2)\qquad \chi(g)=\sum_{\pi\in\operatorname{Irr}(K)}P(\pi)\,\chi^{\pi}(g)\,,\qquad
P(\pi)\ge0\,,\quad \sum_{\pi\in\operatorname{Irr}(K)}P(\pi)=1\,.
\]
If $\chi=\chi^{\pi}$ is of the form (1.1) with an irreducible $\pi$, then $H(\chi)=\dim\pi\cdot\pi$ and $T(\chi)=\pi\otimes\bar\pi$, where $\bar\pi$ denotes the representation conjugate to $\pi$. If $\pi$ acts in $V$, then $\pi\otimes\bar\pi$ acts in $V\otimes V^{*}\cong\operatorname{End}(V)$, and $\operatorname{Id}\in\operatorname{End}(V)$ is the $\operatorname{diag}K$-invariant vector for $T(\chi)$.
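For a finite abelian group the decomposition (1.2) is completely explicit and may serve as a toy model: for $K=\mathbb{Z}/n$ every irreducible representation is one-dimensional, $\chi^{j}(k)=e^{2\pi ijk/n}$, and the weights $P(j)$ are recovered from $\chi$ by an inverse discrete Fourier transform. A sketch (our illustration, not from the paper):

```python
import cmath

def decompose(chi, n):
    # weights P(j) in chi(k) = sum_j P(j) exp(2 pi i j k / n)  (inverse DFT);
    # for a genuine character the P(j) are nonnegative and sum to chi(0) = 1
    return [sum(chi[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)).real / n
            for j in range(n)]
```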
In particular, if $K=U(N)$, the group of $N\times N$ unitary matrices, then the irreducible representations of $K$ are parametrized by the highest weights (see, e.g., [66])
\[
\lambda=(\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_N)\,,\qquad
\lambda_i\in\mathbb{Z}\,,\quad i=1,2,\ldots,N\,,
\]
and every character can be written in the form (1.2):
\[
(1.3)\qquad \chi=\sum_{\lambda}P_N(\lambda)\,\chi^{\lambda}\,,
\]
where $\chi^{\lambda}$ is the normalized (as in (1.1)) character of $U(N)$ corresponding to $\lambda$. Note that the coordinates of $\lambda$ may be negative.
Now let $K=U(\infty)$ be the infinite-dimensional unitary group defined as the inductive limit of the finite-dimensional unitary groups $U(N)$ with respect to the natural embeddings $U(N)\hookrightarrow U(N+1)$. Equivalently, $U(\infty)$ is the group of matrices $U=[u_{ij}]_{i,j=1}^{\infty}$ such that all but finitely many off-diagonal entries are 0, all but finitely many diagonal entries are equal to 1, and $U^{*}=U^{-1}$.
A fundamental result of the representation theory of the group $U(\infty)$ is a complete description of its indecomposable characters. They are naturally parameterized by the points
\[
\omega=(\alpha^{+},\beta^{+},\alpha^{-},\beta^{-},\gamma^{+},\gamma^{-})\in\mathbb{R}^{4\infty+2}
\]
such that
\[
(1.4)\qquad
\begin{gathered}
\alpha_1^{+}\ge\alpha_2^{+}\ge\cdots\ge0\,,\quad
\beta_1^{+}\ge\beta_2^{+}\ge\cdots\ge0\,,\quad
\alpha_1^{-}\ge\alpha_2^{-}\ge\cdots\ge0\,,\quad
\beta_1^{-}\ge\beta_2^{-}\ge\cdots\ge0\,,\\
\gamma^{+}\ge0\,,\quad\gamma^{-}\ge0\,,\quad
\sum_{i=1}^{\infty}\bigl(\alpha_i^{+}+\beta_i^{+}+\alpha_i^{-}+\beta_i^{-}\bigr)<\infty\,,\quad
\beta_1^{+}+\beta_1^{-}\le1\,.
\end{gathered}
\]
The values of the extreme characters are provided by Voiculescu's formulae [63]. This classification result can be established in two ways: by reduction to a deep theorem due to Edrei [23] about two-sided, totally positive sequences (see [17, 62]), and by applying the Kerov-Vershik asymptotic approach (see [46, 62]).
We denote the set of all points $\omega$ satisfying (1.4) by $\Omega$. The coordinates $\alpha_i^{+}$, $\beta_i^{+}$, $\alpha_i^{-}$, $\beta_i^{-}$, $\gamma^{+}$, and $\gamma^{-}$ are called the Voiculescu parameters.
Instead of giving a more detailed description of the indecomposable characters (which is rather simple and can be found in [63]), we will explain why such a parameterization is natural. It can be shown that every indecomposable character $\chi^{\omega}$ of $U(\infty)$ is a limit of indecomposable characters $\chi^{\lambda(N)}$ of growing finite-dimensional unitary groups $U(N)$ as $N\to\infty$. Here $\lambda(N)=(\lambda_1(N)\ge\lambda_2(N)\ge\cdots\ge\lambda_N(N))$ is a highest weight of $U(N)$. The label $\omega\in\Omega$ of the character $\chi^{\omega}$ can be viewed as a limit of the $\lambda(N)$'s as $N\to\infty$ in the following way.
We write the set of nonzero coordinates of $\lambda(N)$ as a union of two sequences of positive and negative coordinates:
\[
\{\lambda_i(N)\neq0\}=\lambda^{+}(N)\cup\bigl(-\lambda^{-}(N)\bigr)\,,
\]
\[
\lambda^{+}(N)=\bigl(\lambda_1^{+}(N)\ge\lambda_2^{+}(N)\ge\cdots\ge\lambda_k^{+}(N)\bigr)\,,
\qquad
\lambda^{-}(N)=\bigl(\lambda_1^{-}(N)\ge\lambda_2^{-}(N)\ge\cdots\ge\lambda_l^{-}(N)\bigr)\,,
\]
where $\lambda_i^{+}>0$ and $\lambda_i^{-}>0$ for all $i$, and $k$ and $l$ are the numbers of positive and negative coordinates in $\lambda(N)$, respectively. Note that $k+l\le N$. We now regard $\lambda^{+}(N)$ and $\lambda^{-}(N)$ as Young diagrams (of length $k$ and $l$, respectively), and write them in the Frobenius notation (see [38, sect. 1] for the definition):
\[
\lambda^{+}(N)=\bigl(p_1^{+}(N)>p_2^{+}(N)>\cdots\bigm|q_1^{+}(N)>q_2^{+}(N)>\cdots\bigr)\,,
\]
\[
\lambda^{-}(N)=\bigl(p_1^{-}(N)>p_2^{-}(N)>\cdots\bigm|q_1^{-}(N)>q_2^{-}(N)>\cdots\bigr)\,.
\]
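Frobenius coordinates are easy to compute in practice: $p_i=\lambda_i-i$ is the arm of the $i$-th diagonal box and $q_i=\lambda'_i-i$ is its leg, where $\lambda'$ is the conjugate diagram. A small helper (ours, not from the paper):

```python
def frobenius(la):
    # Frobenius coordinates (p_1 > p_2 > ... | q_1 > q_2 > ...) of a partition
    conj = [sum(1 for x in la if x >= j)
            for j in range(1, (la[0] if la else 0) + 1)]  # conjugate partition
    d = sum(1 for i, x in enumerate(la, 1) if x >= i)     # diagonal boxes
    p = [la[i - 1] - i for i in range(1, d + 1)]
    q = [conj[i - 1] - i for i in range(1, d + 1)]
    return p, q
```

For example, $(4,3,1)$ has Frobenius coordinates $(3,1\,|\,2,0)$, and $|\lambda|=d+\sum p_i+\sum q_i$.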
Then, if $\chi^{\omega}$ is a limit of $\chi^{\lambda(N)}$ as $N\to\infty$, we must have
\[
(1.5)\qquad
\alpha_i^{+}=\lim_{N\to\infty}\frac{p_i^{+}(N)}{N}\,,\quad
\beta_i^{+}=\lim_{N\to\infty}\frac{q_i^{+}(N)}{N}\,,\quad
\alpha_i^{-}=\lim_{N\to\infty}\frac{p_i^{-}(N)}{N}\,,\quad
\beta_i^{-}=\lim_{N\to\infty}\frac{q_i^{-}(N)}{N}\,,
\]
for all $i=1,2,\ldots$; see [46, 62]. The parameters $\gamma^{+}$ and $\gamma^{-}$ can also be described in a similar manner. Since we will not be concerned with them, we refer the interested reader to [46, 62] for the asymptotic meaning of $\gamma^{+}$ and $\gamma^{-}$.
Observe that the condition $\beta_1^{+}+\beta_1^{-}\le1$ in (1.4) is now easily explained: it follows from the relation $q_1^{+}+q_1^{-}=k+l-2\le N$.
The next question that we address is how the characters of $U(\infty)$ decompose in terms of the indecomposable ones.
THEOREM 1.1 [50] Let $\chi$ be a character of $U(\infty)$. Then there exists a unique probability measure $P$ on $\Omega$ such that
\[
(1.6)\qquad \chi=\int_{\Omega}\chi^{\omega}\,P(d\omega)\,,
\]
where $\chi^{\omega}$ is the indecomposable character of $U(\infty)$ corresponding to $\omega\in\Omega$.
The measure $P$ is called the spectral measure of the character $\chi$. The problem of finding the spectral measure for a given character $\chi$ is referred to as the problem of harmonic analysis for $\chi$.
The decomposition (1.6) is the infinite-dimensional analogue of (1.3).
Since the indecomposable characters $\chi^{\omega}$ are limits of the normalized characters $\chi^{\lambda(N)}$ of $U(N)$, it is natural to expect that the measure $P$ from Theorem 1.1 can be approximated by the discrete measures $P_N$ from (1.3) as $N\to\infty$. To formulate the exact result we need more notation.
Define $\Omega^{\circ}$ as the set of points $\omega^{\circ}=(\alpha^{+},\beta^{+},\alpha^{-},\beta^{-})\in\mathbb{R}^{4\infty}$ satisfying conditions (1.4). There is a natural projection $\Omega\to\Omega^{\circ}$ that consists of omitting the two gammas. Denote by $P^{\circ}$ the push-forward of the measure $P$ under this projection. Because we will only be concerned with statistical quantities depending on $\omega^{\circ}$ and not on $\gamma^{+}$ and $\gamma^{-}$, it is enough to consider $P^{\circ}$ instead of $P$.
For every $N=1,2,\ldots,$ define a map $i_N$ that embeds the set of all highest weights $\lambda(N)$ of $U(N)$ into $\Omega^{\circ}$ as follows: For $\lambda(N)=(\lambda_1(N)\ge\lambda_2(N)\ge\cdots\ge\lambda_N(N))$, using the above notation, we set
\[
i_N(\lambda)=\Bigl(
\alpha_i^{+}=\frac{p_i^{+}(N)}{N}\,,\ \
\beta_i^{+}=\frac{q_i^{+}(N)}{N}\,,\ \
\alpha_i^{-}=\frac{p_i^{-}(N)}{N}\,,\ \
\beta_i^{-}=\frac{q_i^{-}(N)}{N}\Bigr)\in\Omega^{\circ}\,.
\]
THEOREM 1.2 [50] Let $\chi$ be a character of $U(\infty)$, $\chi_N$ be its restriction to $U(N)$, and
\[
(1.7)\qquad \chi|_{U(N)}=\sum_{\lambda}P_N(\lambda)\,\chi^{\lambda}\,,\qquad
P_N(\lambda)\ge0\,,\quad \sum_{\lambda}P_N(\lambda)=1\,,
\]
be the decomposition of $\chi_N$ on indecomposable characters. Then the projection $P^{\circ}$ of the spectral measure $P$ of $\chi$ is the weak limit of the push-forwards of the measures $P_N$ under the embeddings $i_N$. In other words, if $F$ is a bounded continuous function on $\Omega^{\circ}$, then
\[
\lim_{N\to\infty}\sum_{\lambda}F(i_N(\lambda))\,P_N(\lambda)
=\int_{\Omega^{\circ}}F(\omega)\,P^{\circ}(d\omega)\,.
\]
Now, following [16], we apply the above general theory to a specific family of decomposable characters of $U(\infty)$ constructed in [50]. The group $U(\infty)$ does not carry a Haar measure, and hence the naive definition of the regular representation fails. The representations in [50] should be viewed as analogues of the nonexisting regular representation of $U(\infty)$. A beautiful geometric construction of these representations can also be found in [50].
For every $N=1,2,\ldots$ and a highest weight $\lambda=(\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_N)$, set
\[
(1.8)\qquad
P_N(\lambda)=c_N\,\dim_N^{2}(\lambda)\prod_{i=1}^{N}f(\lambda_i-i)\,,
\]
\[
f(x)=\frac{1}{\Gamma(z-x)\,\Gamma(z'-x)\,\Gamma(w+N+1+x)\,\Gamma(w'+N+1+x)}\,,
\]
\[
c_N=\prod_{i=1}^{N}
\frac{\Gamma(z+w+i)\,\Gamma(z+w'+i)\,\Gamma(z'+w+i)\,\Gamma(z'+w'+i)\,\Gamma(i)}
{\Gamma(z+z'+w+w'+i)}\,,
\]
where $\dim_N(\lambda)$ is the dimension of the irreducible representation of $U(N)$ corresponding to $\lambda$,
\[
\dim_N\lambda=\prod_{1\le i<j\le N}\frac{\lambda_i-i-\lambda_j+j}{j-i}\,;
\]
see, e.g., [66]. Here $z$, $z'$, $w$, and $w'$ are complex parameters such that $P_N(\lambda)>0$ for all $N$ and $\lambda$. This implies that
(1) $z'=\bar z\in\mathbb{C}\setminus\mathbb{Z}$, or $k<z,z'<k+1$ for some $k\in\mathbb{Z}$, and
(2) $w'=\bar w\in\mathbb{C}\setminus\mathbb{Z}$, or $l<w,w'<l+1$ for some $l\in\mathbb{Z}$.
We also want the series $\sum_{\lambda}P_N(\lambda)$ to converge, and this condition is equivalent to the additional inequality
(3) $z+z'+w+w'>-1$.
Under these conditions the choice of $c_N$ makes $P_N$ into a probability distribution.
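The dimension formula for $\dim_N(\lambda)$ quoted above is elementary to evaluate. A sketch in exact rational arithmetic (our illustration), checked on familiar representations of $U(3)$:

```python
from fractions import Fraction

def dim_u(la):
    # Weyl dimension formula: prod_{i<j} (la_i - i - la_j + j) / (j - i)
    N = len(la)
    d = Fraction(1)
    for i in range(1, N + 1):
        for j in range(i + 1, N + 1):
            d *= Fraction(la[i - 1] - i - la[j - 1] + j, j - i)
    return d
```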
THEOREM 1.3 [50] Let $z$, $z'$, $w$, and $w'$ satisfy conditions (1)-(3) above. Then there exists a character $\chi=\chi^{(z,z',w,w')}$ of $U(\infty)$ such that
\[
\chi|_{U(N)}=\sum_{\lambda}P_N(\lambda)\,\chi^{\lambda}
\]
with $P_N(\lambda)$ given by (1.8).
In order to describe the spectral measures for $\chi^{(z,z',w,w')}$, we need to switch to a different representation for the $\lambda$'s. First, we describe the measures $P_N$ in a different way.
Consider the lattice
\[
X^{(N)}=\begin{cases}\mathbb{Z}\,,&N\text{ is odd}\,,\\[2pt]
\mathbb{Z}+\frac12\,,&N\text{ is even}\,,\end{cases}
\]
and divide it into two parts
\[
X^{(N)}=X^{(N)}_{\mathrm{in}}\cup X^{(N)}_{\mathrm{out}}\,,
\]
\[
X^{(N)}_{\mathrm{in}}=\Bigl\{-\tfrac{N-1}{2},\,-\tfrac{N-3}{2},\,\ldots,\,\tfrac{N-3}{2},\,\tfrac{N-1}{2}\Bigr\}\,,
\qquad \bigl|X^{(N)}_{\mathrm{in}}\bigr|=N\,,
\]
\[
X^{(N)}_{\mathrm{out}}=\Bigl\{\ldots,\,-\tfrac{N+3}{2},\,-\tfrac{N+1}{2}\Bigr\}\cup
\Bigl\{\tfrac{N+1}{2},\,\tfrac{N+3}{2},\,\ldots\Bigr\}\,,\qquad
\bigl|X^{(N)}_{\mathrm{out}}\bigr|=\infty\,.
\]
Let us associate to every highest weight $\lambda=\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_N$ a finite point configuration $X(\lambda)\subset X^{(N)}$ as follows:
\[
X(\lambda)=\Bigl\{p_i^{+}+\tfrac{N+1}{2}\Bigr\}\cup
\Bigl\{\tfrac{N-1}{2}-q_i^{+}\Bigr\}\cup
\Bigl\{-p_j^{-}-\tfrac{N+1}{2}\Bigr\}\cup
\Bigl\{-\tfrac{N-1}{2}+q_j^{-}\Bigr\}\,,
\]
where the $p$'s and $q$'s are the Frobenius coordinates of the positive and negative parts of $\lambda$ as explained above. Note that $\lambda$ can be reconstructed if we know $X(\lambda)$.
The probability measure $P_N(\lambda)$ makes these point configurations random, and, according to the usual terminology [19], we obtain a random point process. We will denote this process by $P_N$.
Introduce a matrix $L^{(N)}$ on $X^{(N)}\times X^{(N)}$ that in the block form corresponding to the splitting $X^{(N)}=X^{(N)}_{\mathrm{out}}\cup X^{(N)}_{\mathrm{in}}$ is given by
\[
L^{(N)}=\begin{pmatrix}0&A^{(N)}\\ -\bigl(A^{(N)}\bigr)^{t}&0\end{pmatrix},
\]
where $A^{(N)}$ is a matrix on $X^{(N)}_{\mathrm{out}}\times X^{(N)}_{\mathrm{in}}$,
\[
A^{(N)}(a,b)=\frac{\sqrt{\psi^{(N)}_{\mathrm{out}}(a)\,\psi^{(N)}_{\mathrm{in}}(b)}}{a-b}\,,
\qquad a\in X^{(N)}_{\mathrm{out}}\,,\quad b\in X^{(N)}_{\mathrm{in}}\,,
\]
\[
\psi^{(N)}_{\mathrm{in}}(x)=f(x)\Bigl(\Gamma\bigl(x+\tfrac{N+1}{2}\bigr)\,
\Gamma\bigl(-x+\tfrac{N+1}{2}\bigr)\Bigr)^{2}\,,
\]
\[
\psi^{(N)}_{\mathrm{out}}(x)=
\begin{cases}
\Biggl(\dfrac{\Gamma\bigl(x+\frac{N+1}{2}\bigr)}{\Gamma\bigl(x-\frac{N-1}{2}\bigr)}\Biggr)^{2}f(x)\,,
& x\ge\frac{N+1}{2}\,,\\[12pt]
\Biggl(\dfrac{\Gamma\bigl(-x+\frac{N+1}{2}\bigr)}{\Gamma\bigl(-x-\frac{N-1}{2}\bigr)}\Biggr)^{2}f(x)\,,
& x\le-\frac{N+1}{2}\,,
\end{cases}
\]
and $f(x)$ was introduced in (1.8).
PROPOSITION 1.4 [16] For any highest weight $\lambda=(\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_N)$,
\[
P_N(\lambda)=\frac{\det L^{(N)}_{X(\lambda)}}{\det\bigl(1+L^{(N)}\bigr)}\,,
\]
where $L^{(N)}_{X(\lambda)}$ denotes the finite submatrix of $L^{(N)}$ on $X(\lambda)\times X(\lambda)$. Moreover, if a finite point configuration $X\subset X^{(N)}$ is not of the form $X=X(\lambda)$ for some highest weight $\lambda$, then $\det L^{(N)}_{X}=0$.
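The normalization in Proposition 1.4 relies on the standard L-ensemble identity: the sum of $\det L_X$ over all finite subsets $X$ (with the empty set contributing 1) equals $\det(1+L)$. The sketch below (our illustration, with hypothetical entries in the same block shape as $L^{(N)}$) verifies this:

```python
from itertools import combinations

def det(m):
    # Laplace expansion; adequate for the tiny matrices used here
    n = len(m)
    if n == 0:
        return 1.0
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(n))

def total(L):
    # sum of all principal minors det L_S, S running over subsets of indices
    n = len(L)
    return sum(det([[L[i][j] for j in S] for i in S])
               for k in range(n + 1)
               for S in combinations(range(n), k))

def det_one_plus(L):
    n = len(L)
    return det([[L[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
                for i in range(n)])
```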
Proposition 1.4 implies that $P_N$ is a determinantal point process (see [10, appendix] and [16, 54] for a general discussion of such processes). In particular, this implies the following claim:
COROLLARY 1.5 [16] The matrix $L^{(N)}$ defines a finite rank (and hence trace class) operator in $\ell^{2}(X^{(N)})$. The correlation functions
\[
\rho^{(N)}_{k}(x_1,x_2,\ldots,x_k)=P_N\bigl\{\lambda\bigm|\{x_1,x_2,\ldots,x_k\}\subset X(\lambda)\bigr\}
\]
of the process $P_N$ have the determinantal form
\[
\rho^{(N)}_{k}(x_1,x_2,\ldots,x_k)=\det\bigl[K^{(N)}(x_i,x_j)\bigr]_{i,j=1}^{k}\,,
\qquad k=1,2,\ldots,
\]
where $K^{(N)}(x,y)$ is the matrix of the operator $K^{(N)}=L^{(N)}/(1+L^{(N)})$ in $\ell^{2}(X^{(N)})$. Explicit formulae for $K^{(N)}$ can be found in [16].
Now we will describe the limit situation as $N\to\infty$. Define the continuous phase space
\[
X=X^{(\infty)}=\mathbb{R}\setminus\bigl\{\pm\tfrac12\bigr\}
\]
and divide it into two parts:
\[
X=X_{\mathrm{in}}\cup X_{\mathrm{out}}\,,\qquad
X_{\mathrm{in}}=\bigl(-\tfrac12,\tfrac12\bigr)\,,\qquad
X_{\mathrm{out}}=\bigl(-\infty,-\tfrac12\bigr)\cup\bigl(\tfrac12,+\infty\bigr)\,.
\]
To each point $\omega\in\Omega^{\circ}$ we associate a point configuration in $X$ as follows:
\[
\omega=(\alpha^{+},\beta^{+};\alpha^{-},\beta^{-})\ \mapsto\ X(\omega)
=\Bigl\{\alpha_i^{+}+\tfrac12\Bigr\}\cup\Bigl\{\tfrac12-\beta_i^{+}\Bigr\}\cup
\Bigl\{-\alpha_j^{-}-\tfrac12\Bigr\}\cup\Bigl\{-\tfrac12+\beta_j^{-}\Bigr\}\,,
\]
where we omit possible zeros in $\alpha^{+}$, $\beta^{+}$, $\alpha^{-}$, and $\beta^{-}$, and possible ones in $\beta^{+}$ and $\beta^{-}$.
Denote by $P=P^{(z,z',w,w')}$ the spectral measure for the character $\chi^{(z,z',w,w')}$ given by Theorem 1.3, and let $P^{\circ}$ be its push-forward to $\Omega^{\circ}$. Then, using the above correspondence between points in $\Omega^{\circ}$ and point configurations, $P^{\circ}$ can be interpreted as a measure on the space of locally finite point configurations in $X$, that is, as a point process. We will denote this process by $P$.
Since the measures $P_N$ converge to the spectral measure $P^{\circ}$ as $N\to\infty$ (Theorem 1.2), we should expect the correlation functions $\rho^{(N)}_{k}$ to converge to the correlation functions of $P$ as $N\to\infty$.
For any $x\in X$ we will denote by $x_N$ the point of the lattice $X^{(N)}$ that is closest to $xN$.
THEOREM 1.6 [16] The correlation functions
\[
\rho_k(x_1,x_2,\ldots,x_k)=
\lim_{\Delta x_1,\Delta x_2,\ldots,\Delta x_k\to+0}
\frac{P^{\circ}\{\omega\mid X(\omega)\text{ intersects each interval }
(x_i,x_i+\Delta x_i)\,,\ i=1,2,\ldots,k\}}
{\Delta x_1\,\Delta x_2\cdots\Delta x_k}
\]
of the process $P$ have the determinantal form
\[
\rho_k(x_1,x_2,\ldots,x_k)=\det[K(x_i,x_j)]_{i,j=1}^{k}\,,\qquad k=1,2,\ldots,
\]
where $K(x,y)$ is a kernel on $X$ that is the scaling limit of the kernels $K^{(N)}(x,y)$ introduced above:
\[
(1.9)\qquad K(x,y)=\lim_{N\to\infty}N\,K^{(N)}(x_N,y_N)\,,\qquad x,y\in X\,.
\]
The kernel $K(x,y)$ is called the continuous ${}_2F_1$ kernel and is precisely the kernel in (0.1) for $x,y>\frac12$. Explicit formulae for $K(x,y)$ can be found in the next section. This kernel is a real-analytic function of the parameters $(z,z',w,w')$. We will use the same notation for its natural analytic continuation.
It is worth noting that the correlation functions $\rho_k(x_1,x_2,\ldots,x_k)$ determine the process $P$ uniquely.
It is a well-known elementary observation that the probability that a determinantal point process with a correlation kernel $K$ does not have particles in a given part $J$ of the phase space is equal to the Fredholm determinant $\det(1-K|_{J})$; see, e.g., [54, 58].¹
In what follows we study determinants of the form $\det(1-K|_{J})$, where $K$ is the continuous ${}_2F_1$ kernel and $J$ is a union of finitely many (possibly infinite) intervals.
¹ If the correlation kernel is self-adjoint and this probability is nonzero, then the integral operator defined by the kernel $K$ is of trace class and the determinant is well-defined; see [54, theorem 4]. For kernels that are not self-adjoint, the existence of the determinant, generally speaking, needs to be justified; see, e.g., the end of Section 2.
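In the finite setting of Section 1 this gap-probability identity can be verified by brute force: summing $\det L_S/\det(1+L)$ over the configurations $S$ that avoid $J$ must reproduce $\det(1-K|_J)$ with $K=L(1+L)^{-1}$. A sketch with a hypothetical 3-point L-ensemble (ours, not from the paper):

```python
from itertools import combinations

def det(m):
    # Laplace expansion; adequate for tiny matrices
    n = len(m)
    if n == 0:
        return 1.0
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(n))

def inv(m):
    # inverse via cofactors (adjugate over determinant); fine at this size
    n, d = len(m), det(m)
    return [[(-1) ** (i + j) *
             det([r[:i] + r[i + 1:] for k, r in enumerate(m) if k != j]) / d
             for j in range(n)] for i in range(n)]

def sub(L, S):
    return [[L[i][j] for j in S] for i in S]

def gap_direct(L, J):
    # sum of det L_S / det(1+L) over subsets S avoiding J
    n = len(L)
    rest = [i for i in range(n) if i not in J]
    Z = sum(det(sub(L, S)) for k in range(n + 1)
            for S in combinations(range(n), k))
    num = sum(det(sub(L, S)) for k in range(len(rest) + 1)
              for S in combinations(rest, k))
    return num / Z

def gap_fredholm(L, J):
    # det(1 - K|_J) with K = L (1 + L)^{-1}
    n = len(L)
    ip = inv([[L[i][j] + (i == j) for j in range(n)] for i in range(n)])
    K = [[sum(L[i][k] * ip[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    KJ = sub(K, J)
    return det([[(a == b) - KJ[a][b] for b in range(len(J))]
                for a in range(len(J))])
```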
2 Continuous ${}_2F_1$ Kernel: Setting of the Problem
Following [16] we consider the continuous ${}_2F_1$ kernel with parameters satisfying conditions (1) through (3) of Section 1.
To avoid unnecessary complications (poles in certain formulae below), we exclude the set where $z+z'+w+w'=0$ from our consideration. Most of the results, however, can be extended to this set by analytic continuation in one of the parameters.
Recall that in Section 1 we introduced the space
\[
X=\mathbb{R}\setminus\bigl\{\pm\tfrac12\bigr\}
\]
and divided it into two parts,
\[
X=X_{\mathrm{out}}\cup X_{\mathrm{in}}\,,\qquad
X_{\mathrm{out}}=\bigl(-\infty,-\tfrac12\bigr)\cup\bigl(\tfrac12,+\infty\bigr)\,,\qquad
X_{\mathrm{in}}=\bigl(-\tfrac12,\tfrac12\bigr)\,.
\]
Introduce the functions $\psi_{\mathrm{out}}:X_{\mathrm{out}}\to\mathbb{R}_{+}$ and $\psi_{\mathrm{in}}:X_{\mathrm{in}}\to\mathbb{R}_{+}$,
\[
\psi_{\mathrm{out}}(x)=
\begin{cases}
C(z,z')\bigl(x-\tfrac12\bigr)^{-z-z'}\bigl(x+\tfrac12\bigr)^{-w-w'}\,, & x>\tfrac12\,,\\[4pt]
C(w,w')\bigl(-x-\tfrac12\bigr)^{-w-w'}\bigl(-x+\tfrac12\bigr)^{-z-z'}\,, & x<-\tfrac12\,,
\end{cases}
\]
\[
\psi_{\mathrm{in}}(x)=\bigl(\tfrac12-x\bigr)^{z+z'}\bigl(\tfrac12+x\bigr)^{w+w'}\,,
\qquad -\tfrac12<x<\tfrac12\,,
\]
\[
C(z,z')=\frac{\sin(\pi z)\sin(\pi z')}{\pi^{2}}\,,\qquad
C(w,w')=\frac{\sin(\pi w)\sin(\pi w')}{\pi^{2}}\,.
\]
Note that $C(z,z')>0$ and $C(w,w')>0$, so that $\psi_{\mathrm{out}}(x)$ and $\psi_{\mathrm{in}}(x)$ are positive.
We now define the ${}_2F_1$ kernel on $X$. It is convenient to write it in block form corresponding to the splitting $X = X_{\mathrm{out}} \sqcup X_{\mathrm{in}}$:
\[
K = \begin{pmatrix} K_{\mathrm{out,out}} & K_{\mathrm{out,in}} \\ K_{\mathrm{in,out}} & K_{\mathrm{in,in}} \end{pmatrix}.
\]
We set
\[
K_{\mathrm{out,out}}(x,y) = \sqrt{\psi_{\mathrm{out}}(x)\psi_{\mathrm{out}}(y)}\;
\frac{R_{\mathrm{out}}(x)S_{\mathrm{out}}(y) - S_{\mathrm{out}}(x)R_{\mathrm{out}}(y)}{x-y},
\]
\[
K_{\mathrm{out,in}}(x,y) = \sqrt{\psi_{\mathrm{out}}(x)\psi_{\mathrm{in}}(y)}\;
\frac{R_{\mathrm{out}}(x)R_{\mathrm{in}}(y) - S_{\mathrm{out}}(x)S_{\mathrm{in}}(y)}{x-y},
\]
\[
K_{\mathrm{in,out}}(x,y) = \sqrt{\psi_{\mathrm{in}}(x)\psi_{\mathrm{out}}(y)}\;
\frac{R_{\mathrm{in}}(x)R_{\mathrm{out}}(y) - S_{\mathrm{in}}(x)S_{\mathrm{out}}(y)}{x-y},
\]
\[
K_{\mathrm{in,in}}(x,y) = \sqrt{\psi_{\mathrm{in}}(x)\psi_{\mathrm{in}}(y)}\;
\frac{R_{\mathrm{in}}(x)S_{\mathrm{in}}(y) - S_{\mathrm{in}}(x)R_{\mathrm{in}}(y)}{x-y},
\]
where
\[
R_{\mathrm{out}}(x) = \left(\frac{x+\frac12}{x-\frac12}\right)^{w'}
{}_2F_1\!\left[\begin{matrix} z+w',\; z'+w' \\ z+z'+w+w' \end{matrix}\,\middle|\, \frac{1}{\frac12 - x}\right],
\]
\[
S_{\mathrm{out}}(x) = \Gamma\!\left[\begin{matrix} z+w+1,\; z+w'+1,\; z'+w+1,\; z'+w'+1 \\ z+z'+w+w'+1,\; z+z'+w+w'+2 \end{matrix}\right]
\frac{1}{x-\frac12}\left(\frac{x+\frac12}{x-\frac12}\right)^{w'}
{}_2F_1\!\left[\begin{matrix} z+w'+1,\; z'+w'+1 \\ z+z'+w+w'+2 \end{matrix}\,\middle|\, \frac{1}{\frac12 - x}\right],
\]
\[
\begin{aligned}
R_{\mathrm{in}}(x) = {}& \frac{\sin(\pi z')}{\pi}\,
\Gamma\!\left[\begin{matrix} z'-z,\; z+w+1,\; z+w'+1 \\ z+z'+w+w'+1 \end{matrix}\right]
\big(\tfrac12+x\big)^{-w}\big(\tfrac12-x\big)^{-z'}
{}_2F_1\!\left[\begin{matrix} z+w'+1,\; -z'-w \\ z-z'+1 \end{matrix}\,\middle|\, \tfrac12 - x\right] \\
{}+{}& \frac{\sin(\pi z)}{\pi}\,
\Gamma\!\left[\begin{matrix} z-z',\; z'+w+1,\; z'+w'+1 \\ z+z'+w+w'+1 \end{matrix}\right]
\big(\tfrac12+x\big)^{-w}\big(\tfrac12-x\big)^{-z}
{}_2F_1\!\left[\begin{matrix} z'+w'+1,\; -z-w \\ z'-z+1 \end{matrix}\,\middle|\, \tfrac12 - x\right],
\end{aligned}
\]
\[
\begin{aligned}
S_{\mathrm{in}}(x) = {}& \frac{\sin(\pi z')}{\pi}\,
\Gamma\!\left[\begin{matrix} z'-z,\; z+z'+w+w' \\ z'+w,\; z'+w' \end{matrix}\right]
\big(\tfrac12+x\big)^{-w}\big(\tfrac12-x\big)^{-z'}
{}_2F_1\!\left[\begin{matrix} z+w',\; -z'-w+1 \\ z-z'+1 \end{matrix}\,\middle|\, \tfrac12 - x\right] \\
{}+{}& \frac{\sin(\pi z)}{\pi}\,
\Gamma\!\left[\begin{matrix} z-z',\; z+z'+w+w' \\ z+w,\; z+w' \end{matrix}\right]
\big(\tfrac12+x\big)^{-w}\big(\tfrac12-x\big)^{-z}
{}_2F_1\!\left[\begin{matrix} z'+w',\; -z-w+1 \\ z'-z+1 \end{matrix}\,\middle|\, \tfrac12 - x\right].
\end{aligned}
\]
Here ${}_2F_1\!\left[\begin{smallmatrix} a,\,b \\ c \end{smallmatrix}\,\middle|\, x\right]$ is the Gauss hypergeometric function; see, e.g., [24, chap. 2], and the notation $\Gamma\!\left[\begin{smallmatrix} a,\,b,\,\dots \\ c,\,d,\,\dots \end{smallmatrix}\right]$ means $\dfrac{\Gamma(a)\Gamma(b)\cdots}{\Gamma(c)\Gamma(d)\cdots}$.
Note that for $z = z'$, the functions $R_{\mathrm{in}}$ and $S_{\mathrm{in}}$ are, formally speaking, not defined because of the presence of the factors $\Gamma(z-z')$ and $\Gamma(z'-z)$. However, the formulae have a well-defined limit as $z' \to z$, because the second summands in the formulae for $R_{\mathrm{in}}$ and $S_{\mathrm{in}}$ are equal to the first summands with $z$ and $z'$ interchanged.

In what follows we will need to know certain analytic properties of the ${}_2F_1$ kernel. We discuss these properties below.
2.1 Smoothness

$K(x,y)$ is a real-analytic function in two variables defined on $X \times X$. Its values on the diagonal are determined by L'Hôpital's rule:
\[
K(x,x) =
\begin{cases}
\psi_{\mathrm{out}}(x)\big(R'_{\mathrm{out}}(x)S_{\mathrm{out}}(x) - S'_{\mathrm{out}}(x)R_{\mathrm{out}}(x)\big), & x \in X_{\mathrm{out}},\\[1mm]
\psi_{\mathrm{in}}(x)\big(R'_{\mathrm{in}}(x)S_{\mathrm{in}}(x) - S'_{\mathrm{in}}(x)R_{\mathrm{in}}(x)\big), & x \in X_{\mathrm{in}}.
\end{cases}
\]
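The L'Hôpital computation behind the diagonal values is generic: for any smooth $A, B$, the ratio $(A(x)B(y)-B(x)A(y))/(x-y)$ extends to $y=x$ with value $A'(x)B(x)-A(x)B'(x)$. A minimal illustration (not from the paper, using $A=\sin$, $B=\cos$ as stand-ins):

```python
# Illustration: for smooth A, B the integrable-kernel ratio
# (A(x)B(y) - B(x)A(y))/(x - y) extends to y = x with diagonal value
# A'(x)B(x) - A(x)B'(x), exactly as in the L'Hopital formula for K(x,x).
import math

def kernel(A, B, x, y):
    return (A(x) * B(y) - B(x) * A(y)) / (x - y)

x, h = 0.7, 1e-6
# With A = sin, B = cos the kernel is sin(x-y)/(x-y), and the diagonal value
# is A'B - AB' = cos^2 + sin^2 = 1.
approx = kernel(math.sin, math.cos, x, x + h)
print(approx)
```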
2.2 Symmetries of $R_{\mathrm{out}}$, $S_{\mathrm{out}}$, $R_{\mathrm{in}}$, and $S_{\mathrm{in}}$

All four functions $R_{\mathrm{out}}$, $S_{\mathrm{out}}$, $R_{\mathrm{in}}$, and $S_{\mathrm{in}}$ are invariant with respect to the transpositions $z \leftrightarrow z'$ and $w \leftrightarrow w'$. This follows easily from the above formulae and the identities
\[
{}_2F_1\!\left[\begin{matrix} a,\; b \\ c \end{matrix}\,\middle|\, \zeta\right]
= (1-\zeta)^{c-a-b}\,{}_2F_1\!\left[\begin{matrix} c-a,\; c-b \\ c \end{matrix}\,\middle|\, \zeta\right],
\qquad
{}_2F_1\!\left[\begin{matrix} a,\; b \\ c \end{matrix}\,\middle|\, \zeta\right]
= {}_2F_1\!\left[\begin{matrix} b,\; a \\ c \end{matrix}\,\middle|\, \zeta\right].
\]
Since
\[
\overline{{}_2F_1\!\left[\begin{matrix} a,\; b \\ c \end{matrix}\,\middle|\, \zeta\right]}
= {}_2F_1\!\left[\begin{matrix} \bar a,\; \bar b \\ \bar c \end{matrix}\,\middle|\, \bar\zeta\right],
\]
where the bar means complex conjugation, and the parameters $(z,z')$, as well as $(w,w')$, are either real or complex conjugate, the functions $R_{\mathrm{out}}$, $S_{\mathrm{out}}$, $R_{\mathrm{in}}$, and $S_{\mathrm{in}}$ take real values on $X_{\mathrm{out}}$ and $X_{\mathrm{in}}$, respectively.

Further, let us denote by $\mathcal C$ the following change of parameters and independent variable: $(z, z', w, w', x) \mapsto (w, w', z, z', -x)$. Then
\[
\mathcal C(\psi_{\mathrm{out}}) = \psi_{\mathrm{out}}, \qquad \mathcal C(\psi_{\mathrm{in}}) = \psi_{\mathrm{in}},
\]
\[
\mathcal C(R_{\mathrm{out}}) = R_{\mathrm{out}}, \qquad \mathcal C(S_{\mathrm{out}}) = -S_{\mathrm{out}}, \qquad
\mathcal C(R_{\mathrm{in}}) = R_{\mathrm{in}}, \qquad \mathcal C(S_{\mathrm{in}}) = -S_{\mathrm{in}}.
\]
For $\psi_{\mathrm{out}}$ and $\psi_{\mathrm{in}}$ the claim is obvious from the definition. For $R_{\mathrm{out}}$ and $S_{\mathrm{out}}$, the symmetry relation follows from the identity
\[
{}_2F_1\!\left[\begin{matrix} a,\; b \\ c \end{matrix}\,\middle|\, \zeta\right]
= (1-\zeta)^{-a}\,{}_2F_1\!\left[\begin{matrix} a,\; c-b \\ c \end{matrix}\,\middle|\, \frac{\zeta}{\zeta-1}\right]
= (1-\zeta)^{-b}\,{}_2F_1\!\left[\begin{matrix} c-a,\; b \\ c \end{matrix}\,\middle|\, \frac{\zeta}{\zeta-1}\right].
\]
For $R_{\mathrm{in}}$ and $S_{\mathrm{in}}$, the symmetry is a corollary of the symmetries of $\psi_{\mathrm{in}}$, $R_{\mathrm{out}}$, $S_{\mathrm{out}}$, and the branching relation (2.1) below.
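The Euler and Pfaff transformations invoked above are easy to verify numerically from the Gauss series; the following sketch (an illustration with arbitrarily chosen parameter values, not part of the paper) checks both at a sample point:

```python
# Numerical check of the Euler transformation
#   2F1(a,b;c;z) = (1-z)^{c-a-b} 2F1(c-a, c-b; c; z)
# and the Pfaff transformation
#   2F1(a,b;c;z) = (1-z)^{-a} 2F1(a, c-b; c; z/(z-1)),
# both used in Section 2.2.

def hyp2f1(a, b, c, u, terms=400):
    s, t = 1.0, 1.0
    for k in range(terms):
        t *= (a + k) * (b + k) / ((c + k) * (k + 1.0)) * u
        s += t
    return s

a, b, c, zeta = 0.4, 0.7, 1.3, 0.3   # sample values, chosen for illustration
lhs = hyp2f1(a, b, c, zeta)
euler = (1 - zeta) ** (c - a - b) * hyp2f1(c - a, c - b, c, zeta)
pfaff = (1 - zeta) ** (-a) * hyp2f1(a, c - b, c, zeta / (zeta - 1))
print(lhs, euler, pfaff)
```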
2.3 Symmetries of the Kernel

Since the functions $R_{\mathrm{out}}$, $S_{\mathrm{out}}$, $R_{\mathrm{in}}$, and $S_{\mathrm{in}}$ take real values, the kernel $K(x,y)$ is real. Moreover, from the explicit formulae for the kernel, it follows that
\[
K_{\mathrm{out,out}}(x,y) = K_{\mathrm{out,out}}(y,x), \qquad K_{\mathrm{in,in}}(x,y) = K_{\mathrm{in,in}}(y,x), \qquad
K_{\mathrm{in,out}}(x,y) = -K_{\mathrm{out,in}}(y,x).
\]
This means that the kernel $K(x,y)$ is (formally) symmetric with respect to the indefinite metric $\mathrm{id} \oplus (-\mathrm{id})$ on $L^2(X, dx) = L^2(X_{\mathrm{out}}, dx) \oplus L^2(X_{\mathrm{in}}, dx)$.
2.4 Branching of Analytic Continuations

The formulae for $R_{\mathrm{out}}$, $S_{\mathrm{out}}$, $R_{\mathrm{in}}$, and $S_{\mathrm{in}}$ above provide analytic continuations of these functions. We can view $R_{\mathrm{out}}$ and $S_{\mathrm{out}}$ as functions that are analytic and single-valued on $\mathbb{C} \setminus \overline{X_{\mathrm{in}}}$, and $R_{\mathrm{in}}$ and $S_{\mathrm{in}}$ as functions that are analytic and single-valued on $\mathbb{C} \setminus \overline{X_{\mathrm{out}}}$. (Recall that the Gauss hypergeometric function can be viewed as an analytic and single-valued function on $\mathbb{C} \setminus [1, +\infty)$.)

For a function $F(\zeta)$ defined on $\mathbb{C} \setminus \mathbb{R}$, we will denote by $F_+$ and $F_-$ its boundary values:
\[
F_+(x) = F(x + i0), \qquad F_-(x) = F(x - i0).
\]
We will show below that
\[
\text{on } X_{\mathrm{in}}: \qquad
\frac{1}{\psi_{\mathrm{in}}}\,\frac{S_{\mathrm{out},-} - S_{\mathrm{out},+}}{2\pi i} = R_{\mathrm{in}}, \qquad
\frac{1}{\psi_{\mathrm{in}}}\,\frac{R_{\mathrm{out},-} - R_{\mathrm{out},+}}{2\pi i} = S_{\mathrm{in}}, \tag{2.1}
\]
\[
\text{on } X_{\mathrm{out}}: \qquad
\frac{1}{\psi_{\mathrm{out}}}\,\frac{S_{\mathrm{in},-} - S_{\mathrm{in},+}}{2\pi i} = R_{\mathrm{out}}, \qquad
\frac{1}{\psi_{\mathrm{out}}}\,\frac{R_{\mathrm{in},-} - R_{\mathrm{in},+}}{2\pi i} = S_{\mathrm{out}}. \tag{2.2}
\]
We will use the following formula for the analytic continuation of the Gauss hypergeometric function (see [24, 2.1.4(17)]):
\[
\begin{aligned}
{}_2F_1\!\left[\begin{matrix} a,\; b \\ c \end{matrix}\,\middle|\, \zeta\right]
= {}&\frac{\Gamma(b-a)\Gamma(c)}{\Gamma(b)\Gamma(c-a)}\,(-\zeta)^{-a}\,
{}_2F_1\!\left[\begin{matrix} a,\; 1-c+a \\ 1-b+a \end{matrix}\,\middle|\, \frac{1}{\zeta}\right] \\
{}+{}&\frac{\Gamma(a-b)\Gamma(c)}{\Gamma(a)\Gamma(c-b)}\,(-\zeta)^{-b}\,
{}_2F_1\!\left[\begin{matrix} b,\; 1-c+b \\ 1-a+b \end{matrix}\,\middle|\, \frac{1}{\zeta}\right].
\end{aligned} \tag{2.3}
\]
This formula is valid if $b - a \notin \mathbb{Z}$, $c \notin \{0, -1, -2, \dots\}$, and $\zeta \notin \mathbb{R}_+$.

Both of the formulae in (2.1) are direct consequences of (2.3) and the trivial relation
\[
\text{on } \mathbb{R}_+: \qquad \frac{\big((-\zeta)^u\big)_- - \big((-\zeta)^u\big)_+}{2\pi i} = \frac{\sin(\pi u)}{\pi}\,\zeta^u, \qquad u \in \mathbb{C}.
\]
To verify the first formula of (2.2), we use the relation (2.3) for both hypergeometric functions in the definition of $S_{\mathrm{in}}$. Thus, we get four summands in total. After computing the jump $(S_{\mathrm{in},-} - S_{\mathrm{in},+})/2\pi i$, the second and the fourth summands cancel out. As for the first and the third summands, they produce exactly $\psi_{\mathrm{out}} R_{\mathrm{out}}$, which can be seen from the identities
\[
\Gamma(s)\Gamma(1-s) = \frac{\pi}{\sin(\pi s)}, \qquad s \in \mathbb{C},
\]
\[
\frac{\sin(\pi(z+w))\,\sin(\pi(z+w'))}{\sin(\pi(z+z'+w+w'))\,\sin(\pi(z-z'))}
+ \frac{\sin(\pi(z'+w))\,\sin(\pi(z'+w'))}{\sin(\pi(z+z'+w+w'))\,\sin(\pi(z'-z))} = 1.
\]
The second part of (2.2) is proved similarly.

The restriction $b - a \notin \mathbb{Z}$ for (2.3) in our situation means that our proof works when $z' \ne z$. For $z' = z$ the result is obtained by the limit transition $z' \to z$ in (2.1) and (2.2).
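The trigonometric identity used above is elementary but easy to get wrong by hand; a direct numerical check (an illustration with hypothetical parameter values avoiding zero denominators) is:

```python
# Numerical check of the sine identity from Section 2.4:
#   sin(pi(z+w))sin(pi(z+w')) / [sin(pi(z+z'+w+w'))sin(pi(z-z'))]
# + sin(pi(z'+w))sin(pi(z'+w')) / [sin(pi(z+z'+w+w'))sin(pi(z'-z))] = 1.
import math

def s(t):
    return math.sin(math.pi * t)

vals = []
for (z, zp, w, wp) in [(0.3, 0.1, 0.2, 0.1), (0.37, 0.12, 0.21, 0.08)]:
    lhs = (s(z + w) * s(z + wp)) / (s(z + zp + w + wp) * s(z - zp)) \
        + (s(zp + w) * s(zp + wp)) / (s(z + zp + w + wp) * s(zp - z))
    vals.append(lhs)
print(vals)
```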
2.5 Differential Equations (due to G. Olshanski)

We use Riemann's notation
\[
P\left\{\begin{matrix} t_1 & t_2 & t_3 \\ a & b & c \\ a' & b' & c' \end{matrix}\;;\;\zeta\right\}
\]
to denote the two-dimensional space of solutions to the second-order Fuchs equation with singular points $t_1$, $t_2$, and $t_3$ and exponents $a$ and $a'$, $b$ and $b'$, and $c$ and $c'$; see, e.g., [24, §2.6]. If $a - a' \notin \mathbb{Z}$, then this means that about $t_1$ there are two solutions of the form
\[
(\zeta - t_1)^{a}\,\{\text{a holomorphic function}\}, \qquad (\zeta - t_1)^{a'}\,\{\text{a holomorphic function}\}.
\]
If $a = a'$, then the basis of the space of solutions near $t_1$ has the form
\[
(\zeta - t_1)^{a}\,\{\text{a holomorphic function}\}, \qquad
\ln(\zeta - t_1)\,(\zeta - t_1)^{a}\,\{\text{a holomorphic function}\}.
\]
The holomorphic functions above must take nonzero values at $t_1$. For $t_2$ and $t_3$ the picture is similar.

We always have $a + a' + b + b' + c + c' = 1$.

The Gauss hypergeometric function ${}_2F_1\!\left[\begin{smallmatrix} a,\,b \\ c \end{smallmatrix}\,\middle|\, \zeta\right]$ belongs to the space
\[
P\left\{\begin{matrix} 0 & \infty & 1 \\ 0 & a & 0 \\ 1-c & b & c-a-b \end{matrix}\;;\;\zeta\right\},
\]
and, since it is holomorphic around the origin, it corresponds to the exponent $0$ at the origin.
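Membership in the $P$-space above is equivalent to ${}_2F_1$ solving the hypergeometric equation $\zeta(1-\zeta)u'' + [c - (a+b+1)\zeta]u' - ab\,u = 0$, which one can verify numerically from the series and its term-wise derivatives (illustration only; parameter values are arbitrary):

```python
# Check that 2F1 satisfies the hypergeometric ODE
#   u(1-u) F'' + [c - (a+b+1)u] F' - ab F = 0,
# using d/du 2F1(a,b;c;u) = (ab/c) 2F1(a+1,b+1;c+1;u).

def F(a, b, c, u, d=0, terms=300):
    """d-th derivative of 2F1(a,b;c;.) at u, |u| < 1."""
    coef = 1.0
    for j in range(d):
        coef *= (a + j) * (b + j) / (c + j)
    s, t = 1.0, 1.0
    for k in range(terms):
        t *= (a + d + k) * (b + d + k) / ((c + d + k) * (k + 1.0)) * u
        s += t
    return coef * s

a, b, c, u = 0.4, 0.7, 1.3, 0.25
residual = u * (1 - u) * F(a, b, c, u, 2) \
         + (c - (a + b + 1) * u) * F(a, b, c, u, 1) \
         - a * b * F(a, b, c, u, 0)
print(residual)
```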
Riemann showed (see [24, §2.6.1]) that
\[
\left(\frac{\zeta-t_1}{\zeta-t_2}\right)^{\!\mu}\left(\frac{\zeta-t_3}{\zeta-t_2}\right)^{\!\nu}
P\left\{\begin{matrix} t_1 & t_2 & t_3 \\ a & b & c \\ a' & b' & c' \end{matrix}\;;\;\zeta\right\}
= P\left\{\begin{matrix} t_1 & t_2 & t_3 \\ a+\mu & b-\mu-\nu & c+\nu \\ a'+\mu & b'-\mu-\nu & c'+\nu \end{matrix}\;;\;\zeta\right\}, \tag{2.4}
\]
where if $t_n = \infty$, then the factor $\zeta - t_n$ should be replaced by $1$, and
\[
P\left\{\begin{matrix} t_1 & t_2 & t_3 \\ a & b & c \\ a' & b' & c' \end{matrix}\;;\;\zeta\right\}
= P\left\{\begin{matrix} s_1 & s_2 & s_3 \\ a & b & c \\ a' & b' & c' \end{matrix}\;;\;\eta\right\},
\]
where
\[
\eta = \frac{A\zeta + B}{C\zeta + D}, \qquad s_n = \frac{At_n + B}{Ct_n + D}, \quad n = 1, 2, 3,
\qquad A, B, C, D \in \mathbb{C}, \quad AD - CB \ne 0.
\]
Using these facts, we immediately see that (denote $\Sigma = z + z' + w + w' \ne 0$)
\[
R_{\mathrm{out}}(x) \in P\left\{\begin{matrix} -\tfrac12 & \infty & \tfrac12 \\ w & 0 & z \\ w' & 1-\Sigma & z' \end{matrix}\;;\;x\right\}. \tag{2.5}
\]
Moreover, $R_{\mathrm{out}}$ is the only element of this space that corresponds to the exponent $0$ at infinity and has asymptotics $1$ there.

Similarly,
\[
S_{\mathrm{out}}(x) \in P\left\{\begin{matrix} -\tfrac12 & \infty & \tfrac12 \\ w & 1 & z \\ w' & -\Sigma & z' \end{matrix}\;;\;x\right\}, \tag{2.6}
\]
and this is the only element of this space, up to a multiplicative constant, with the asymptotics $\mathrm{const}\cdot x^{-1}$ at infinity.
Hence, by (2.1) and (2.4), we get
\[
R_{\mathrm{in}}(x) \in P\left\{\begin{matrix} -\tfrac12 & \infty & \tfrac12 \\ -w' & 0 & -z' \\ -w & 1+\Sigma & -z \end{matrix}\;;\;x\right\},
\qquad
S_{\mathrm{in}}(x) \in P\left\{\begin{matrix} -\tfrac12 & \infty & \tfrac12 \\ -w' & 1 & -z' \\ -w & \Sigma & -z \end{matrix}\;;\;x\right\}. \tag{2.7}
\]
2.6 Asymptotics at Singular Points

The results of the previous subsection (see (2.5)–(2.7)) imply that near $\zeta = \frac12$, if $z \ne z'$ then
\[
\begin{aligned}
R_{\mathrm{out}}(\zeta) &= c_1\big(\zeta-\tfrac12\big)^{z}\big(1 + O\big(\zeta-\tfrac12\big)\big)
+ c_2\big(\zeta-\tfrac12\big)^{z'}\big(1 + O\big(\zeta-\tfrac12\big)\big), \\
S_{\mathrm{out}}(\zeta) &= c_3\big(\zeta-\tfrac12\big)^{z}\big(1 + O\big(\zeta-\tfrac12\big)\big)
+ c_4\big(\zeta-\tfrac12\big)^{z'}\big(1 + O\big(\zeta-\tfrac12\big)\big), \\
R_{\mathrm{in}}(\zeta) &= c_5\big(\zeta-\tfrac12\big)^{-z}\big(1 + O\big(\zeta-\tfrac12\big)\big)
+ c_6\big(\zeta-\tfrac12\big)^{-z'}\big(1 + O\big(\zeta-\tfrac12\big)\big), \\
S_{\mathrm{in}}(\zeta) &= c_7\big(\zeta-\tfrac12\big)^{-z}\big(1 + O\big(\zeta-\tfrac12\big)\big)
+ c_8\big(\zeta-\tfrac12\big)^{-z'}\big(1 + O\big(\zeta-\tfrac12\big)\big).
\end{aligned}
\]
Here and below we denote constants by the letters $c_i$, $i = 1, 2, \dots$.

If $z = z'$, we have
\[
\begin{aligned}
R_{\mathrm{out}}(\zeta) &= c_1\big(\zeta-\tfrac12\big)^{z}\big(1 + O\big(\zeta-\tfrac12\big)\big)
+ c_2 \ln\big(\zeta-\tfrac12\big)\big(\zeta-\tfrac12\big)^{z}\big(1 + O\big(\zeta-\tfrac12\big)\big), \\
S_{\mathrm{out}}(\zeta) &= c_3\big(\zeta-\tfrac12\big)^{z}\big(1 + O\big(\zeta-\tfrac12\big)\big)
+ c_4 \ln\big(\zeta-\tfrac12\big)\big(\zeta-\tfrac12\big)^{z}\big(1 + O\big(\zeta-\tfrac12\big)\big),
\end{aligned}
\]
\[
\begin{aligned}
R_{\mathrm{in}}(\zeta) &= c_5\big(\zeta-\tfrac12\big)^{-z}\big(1 + O\big(\zeta-\tfrac12\big)\big)
+ c_6 \ln\big(\zeta-\tfrac12\big)\big(\zeta-\tfrac12\big)^{-z}\big(1 + O\big(\zeta-\tfrac12\big)\big), \\
S_{\mathrm{in}}(\zeta) &= c_7\big(\zeta-\tfrac12\big)^{-z}\big(1 + O\big(\zeta-\tfrac12\big)\big)
+ c_8 \ln\big(\zeta-\tfrac12\big)\big(\zeta-\tfrac12\big)^{-z}\big(1 + O\big(\zeta-\tfrac12\big)\big).
\end{aligned}
\]
Similar formulae hold near $\zeta = -\frac12$ with the parameters $(z, z')$ substituted by $(w, w')$.
Since the Gauss hypergeometric function is holomorphic around the origin, the definitions of $R_{\mathrm{out}}$ and $S_{\mathrm{out}}$ imply that as $\zeta \to \infty$,
\[
\begin{aligned}
R_{\mathrm{out}}(\zeta) &= 1 + O(\zeta^{-1}), & S_{\mathrm{out}}(\zeta) &= c_1 \zeta^{-1}\big(1 + O(\zeta^{-1})\big), \\
R'_{\mathrm{out}}(\zeta) &= c_2 \zeta^{-2}\big(1 + O(\zeta^{-1})\big), & S'_{\mathrm{out}}(\zeta) &= c_3 \zeta^{-2}\big(1 + O(\zeta^{-1})\big), \\
R''_{\mathrm{out}}(\zeta) &= c_4 \zeta^{-3}\big(1 + O(\zeta^{-1})\big), & S''_{\mathrm{out}}(\zeta) &= c_5 \zeta^{-3}\big(1 + O(\zeta^{-1})\big).
\end{aligned} \tag{2.8}
\]
As for $R_{\mathrm{in}}$ and $S_{\mathrm{in}}$, the results of the previous subsection (see (2.7)) imply that, as $\zeta \to \infty$,
\[
\begin{aligned}
R_{\mathrm{in}}(\zeta) &= c_1\big(1 + O(\zeta^{-1})\big) + c_2 \zeta^{-1-\Sigma}\big(1 + O(\zeta^{-1})\big), \\
S_{\mathrm{in}}(\zeta) &=
\begin{cases}
c_2 \zeta^{-1}\big(1 + O(\zeta^{-1})\big) + c_3 \zeta^{-\Sigma}\big(1 + O(\zeta^{-1})\big), & \Sigma \ne 1, \\
c_4 \zeta^{-1}\big(1 + O(\zeta^{-1})\big) + c_5 \ln(\zeta)\,\zeta^{-1}\big(1 + O(\zeta^{-1})\big), & \Sigma = 1.
\end{cases}
\end{aligned} \tag{2.9}
\]
We will need the exact value of $c_1$ in (2.9) later on. In fact, $c_1 = 1$, and
\[
R_{\mathrm{in}}(\zeta) = 1 + O(\zeta^{-1}) + O(\zeta^{-1-\Sigma}), \qquad \zeta \to \infty. \tag{2.10}
\]
To prove this, we do a similar calculation as in the verification of (2.2) above. That is, we use relation (2.3) for both hypergeometric functions in the definition of $R_{\mathrm{in}}$. Then out of the four summands that arise, the first and the third summands give contributions of order $\zeta^{-1-\Sigma}$ and higher, while the second and the fourth ones produce a function in the variable $(\zeta + \frac12)^{-1}$ holomorphic near the origin with constant coefficient $1$.
Now we are ready to formulate the problem. Let
\[
J = (a_1, a_2) \cup (a_3, a_4) \cup \dots \cup (a_{2m-1}, a_{2m}) \subset \mathbb{R},
\qquad -\infty \le a_1 < a_2 < \dots < a_{2m} \le +\infty, \tag{2.11}
\]
be a union of disjoint (possibly infinite) intervals inside the real line such that the closure of $J$ does not contain the points $\pm\frac12$. Denote by $K_J$ the restriction of the continuous ${}_2F_1$ kernel $K(x,y)$ introduced above to $J \times J$. Our primary goal is to study the Fredholm determinant $\det(1 - K_J)$.

In the last part of this section we justify the existence of this determinant.
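For a trace class kernel, determinants of the form $\det(1 - K|_J)$ can be approximated by a Nyström-type discretization, $\det(I - \sqrt{w_i}\,K(x_i, x_j)\sqrt{w_j})$, on quadrature nodes $x_i$ with weights $w_i$. The sketch below is a generic illustration (not the paper's ${}_2F_1$ kernel): it uses a rank-one stand-in kernel $K(x,y) = xy$ on $J = (0,1)$, for which $\det(1 - K|_J) = 1 - \int_0^1 x^2\,dx = \frac23$ exactly.

```python
# Hedged illustration: Nystrom approximation of a Fredholm determinant
# det(1 - K|_J) ~ det(I - diag(sqrt(w)) K(x_i, x_j) diag(sqrt(w)))
# on Gauss-Legendre nodes. Tested on a rank-one kernel with known answer.
import numpy as np

def fredholm_det(kernel, a, b, n=40):
    t, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (b + a)
    w = 0.5 * (b - a) * w
    sw = np.sqrt(w)
    M = sw[:, None] * kernel(x[:, None], x[None, :]) * sw[None, :]
    return np.linalg.det(np.eye(n) - M)

d = fredholm_det(lambda x, y: x * y, 0.0, 1.0)
print(d)   # approx 2/3
```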
Denote
\[
J_{\mathrm{out}} = J \cap X_{\mathrm{out}}, \qquad J_{\mathrm{in}} = J \cap X_{\mathrm{in}},
\]
\[
K_J^{\mathrm{out,out}} = K\big|_{J_{\mathrm{out}} \times J_{\mathrm{out}}}, \qquad
K_J^{\mathrm{in,in}} = K\big|_{J_{\mathrm{in}} \times J_{\mathrm{in}}}, \qquad
K_J^{\mathrm{out,in}} = K\big|_{J_{\mathrm{out}} \times J_{\mathrm{in}}}, \qquad
K_J^{\mathrm{in,out}} = K\big|_{J_{\mathrm{in}} \times J_{\mathrm{out}}}.
\]
PROPOSITION 2.1 The kernels $K_J^{\mathrm{out,out}}(x,y)$ and $K_J^{\mathrm{in,in}}(x,y)$ define positive trace class operators in $L^2(J_{\mathrm{out}}, dx)$ and $L^2(J_{\mathrm{in}}, dx)$, respectively.

PROOF: Sections 2.1 and 2.3 above imply that the kernels $K_J^{\mathrm{out,out}}(x,y)$ and $K_J^{\mathrm{in,in}}(x,y)$ are smooth, real-valued, and symmetric. Moreover, the principal minors of these kernels are always nonnegative, because the kernel $K$ was obtained as a limit of matrices with nonnegative principal minors; see Section 1. Thus, it remains to prove that the integrals
\[
\int_{J_{\mathrm{out}}} K_J^{\mathrm{out,out}}(x,x)\,dx \qquad\text{and}\qquad \int_{J_{\mathrm{in}}} K_J^{\mathrm{in,in}}(x,x)\,dx
\]
converge. For the second integral the claim is obvious, since $\overline{J_{\mathrm{in}}} \subset (-\frac12, \frac12)$, and the integrand is bounded on $J_{\mathrm{in}}$. For the first integral we need to control the behavior of the integrand near infinity (if $J_{\mathrm{out}}$ is not bounded). Since $\psi_{\mathrm{out}}(x) = O(x^{-\Sigma})$ as $x \to \infty$, by Section 2.1 and (2.8) we see that
\[
K(x,x) = O(x^{-\Sigma-2}), \qquad x \to \infty.
\]
As $\Sigma > -1$, the integral converges. $\square$
We will assume that $K_J^{\mathrm{out,in}}(x,y) = 0$ and $K_J^{\mathrm{in,out}}(x,y) = 0$ if $(x,y)$ does not belong to the domain of definition of the corresponding kernel ($J_{\mathrm{out}} \times J_{\mathrm{in}}$ for the first kernel and $J_{\mathrm{in}} \times J_{\mathrm{out}}$ for the second one).

PROPOSITION 2.2 The kernel $K_0(x,y) = K_J^{\mathrm{out,in}}(x,y) + K_J^{\mathrm{in,out}}(x,y)$ defines a trace class operator in $L^2(J, dx)$.

PROOF: Consider the operator $-\frac{d^2}{dx^2}$ acting, respectively, on
\[
\text{(i) } C_0^\infty(\mathbb{R}), \qquad \text{(ii) } C_0^\infty(J), \qquad \text{(iii) } C_0^\infty(\mathbb{R} \setminus \bar J).
\]
In all three cases the operator is essentially self-adjoint, giving rise to the positive self-adjoint operators $H$, $H_J$, and $H_{\mathbb{R}\setminus J}$ in $L^2(\mathbb{R})$, $L^2(J)$, and $L^2(\mathbb{R}\setminus J)$, respectively. It is well-known (see, e.g., [52, theorem XI.21]) that the operator $T = (1+x^2)^{-1}(1+H)^{-1}$ is trace class in $L^2(\mathbb{R})$. A direct proof can be given as follows. Let $p$ denote the (self-adjoint) closure of $-i\frac{d}{dx}$ acting on $C_0^\infty$; then $H = p^2$. Commuting $(1-ix)^{-1}$ and $(1+ip)^{-1}$ in the representation
\[
T = (1+ix)^{-1}(1-ix)^{-1}(1+ip)^{-1}(1-ip)^{-1},
\]
we obtain the formula
\[
\begin{aligned}
T = {}&\big[(1+ix)^{-1}(1+ip)^{-1}\big]\big[(1-ix)^{-1}(1-ip)^{-1}\big] \\
{}+{}&(1+ix)^{-1}(1+ip)^{-1}(1-ix)^{-1}[x,p](1-ix)^{-1}(1+ip)^{-1}(1-ip)^{-1}.
\end{aligned} \tag{2.12}
\]
But a simple computation shows that $(1+ix)^{-1}(1+ip)^{-1}$ has kernel
\[
(1+ix)^{-1}\chi_0(y-x)\,e^{x-y},
\]
where $\chi_0$ denotes the characteristic function of $(0,\infty)$, and as
\[
\int_{y>x} (1+x^2)^{-1} e^{x-y}\,dx\,dy < \infty,
\]
it follows that $(1+ix)^{-1}(1+ip)^{-1}$ is Hilbert–Schmidt. The same is true for $(1-ix)^{-1}(1-ip)^{-1}$, and as $[x,p] = i$, the trace class property for $T$ follows immediately from (2.12).
For $f \in L^2(\mathbb{R})$, set
\[
g = \big[(1+H)^{-1} - \big(1 + (H_J \oplus H_{\mathbb{R}\setminus J})\big)^{-1}\big] f.
\]
The function $g$ solves $(-\frac{d^2}{dx^2} + 1)g = 0$ in the following weak sense: If $\varphi \in C_0^\infty(\mathbb{R}\setminus\{a_1, a_2, \dots, a_{2m}\})$, then $\int_{\mathbb{R}} \big((-\frac{d^2}{dx^2}+1)\varphi\big)\,g\,dx = 0$. It follows that in each component of $\mathbb{R}\setminus\{a_1, a_2, \dots, a_{2m}\}$, $g$ is a linear combination of the functions $e^x$ and $e^{-x}$, and hence the operator $(1+H)^{-1} - (1+(H_J \oplus H_{\mathbb{R}\setminus J}))^{-1}$ is of finite rank. Because $T$ is trace class, it follows, in particular, that $(1+x^2)^{-1}(1+H_J)^{-1}$ is trace class in $L^2(J)$.
Observe that the kernel $K_{\mathrm{out,in}}(x,y)$ has the form
\[
\frac{F_1(x)G_1(y) + F_2(x)G_2(y)}{x-y}
\]
for suitable functions $F_i$ and $G_j$. For $x \in X_{\mathrm{out}}$ and $y \in X_{\mathrm{in}}$, set
\[
K_1(x,y) = K_{\mathrm{out,in}}(x,y) - \left[\left(\frac{F_1(x)}{x}G_1(y) + \frac{F_2(x)}{x}G_2(y)\right)
+ \left(\frac{F_1(x)}{x^2}\,yG_1(y) + \frac{F_2(x)}{x^2}\,yG_2(y)\right)\right] V(x).
\]
Here $V(x)$ is a smooth function on $\mathbb{R}$ that is zero for $|x| \le L = \max\{|a_i| : |a_i| < \infty\}$, and $V(x) = 1$ for $|x| \ge L+1$.
Finally, for $x \in X_{\mathrm{out}}$ and $y \in X_{\mathrm{in}}$, set
\[
K_2(x,y) = K_1(x,y) - \sum_{|a_i| < \infty} \varphi_{a_i}(x) K_1(a_i, y),
\]
where the sum is taken over all the finite endpoints of $J$. Here $\varphi_{a_i}(x)$ is a smooth function compactly supported in the closure of $J$ that equals $1$ in a neighborhood of $a_i$ and that vanishes at $a_j$ for $j \ne i$. Clearly $K_2(x,y) = 0$ for $x \in \partial J$, which implies that $K_2(\cdot, y) \in \operatorname{dom} H_J$ for all $y \in X_{\mathrm{out}}$. By using the decay conditions (2.8) (each differentiation with respect to $x$ gives an extra power of decay), it follows that $(1+H_J)(1+x^2)K_2(x,y)$ gives rise to a bounded operator on $L^2(J)$, and hence
\[
K_2 = \big[(1+x^2)^{-1}(1+H_J)^{-1}\big]\big[(1+H_J)(1+x^2)K_2\big]
\]
is trace class. But clearly $K_2$ is a finite-rank perturbation of $K_{\mathrm{out,in}}$. A similar computation is true for $K_{\mathrm{in,out}}$, and we conclude that $K_0$ is trace class on $L^2(J)$. $\square$

Propositions 2.1 and 2.2 prove that the operator
\[
K_J = \begin{pmatrix} K_J^{\mathrm{out,out}} & K_J^{\mathrm{out,in}} \\ K_J^{\mathrm{in,out}} & K_J^{\mathrm{in,in}} \end{pmatrix}
\]
is trace class. This shows that the determinant $\det(1 - K_J)$ is well-defined.
3 The Resolvent Kernel and the Corresponding Riemann–Hilbert Problem

Starting from this point we assume that the reader is familiar with the material in the appendix.

As was explained in Section 1 (see Theorem 1.6 et seq.), the ${}_2F_1$ kernel $K$ is a limit of certain discrete kernels that we denoted as $K^{(N)}$. Moreover, these discrete kernels have rather simple resolvent kernels $L^{(N)} = K^{(N)}/(1 - K^{(N)})$; see Corollary 1.5. The kernels $L^{(N)}$ are integrable, and thus the kernels $K^{(N)}$ can be found through solving (discrete) Riemann–Hilbert problems; see [8].

Our first observation is that the kernel $L^{(N)}$ admits a scaling limit as $N \to \infty$. Recall that for $x \in X$, we denote by $x_N$ the point of the lattice $X^{(N)}$ that is closest to $xN$.

The proof of the following proposition is straightforward.

PROPOSITION 3.1 [16] The limit
\[
L(x,y) = \lim_{N\to\infty} N\,L^{(N)}(x_N, y_N), \qquad x, y \in X,
\]
exists. In the block form corresponding to the splitting $X = X_{\mathrm{out}} \sqcup X_{\mathrm{in}}$, the kernel $L(x,y)$ has the following representation:
\[
L = \begin{pmatrix} 0 & A \\ -A^t & 0 \end{pmatrix},
\]
where $A$ is a kernel on $X_{\mathrm{out}} \times X_{\mathrm{in}}$ of the form
\[
A(x,y) = \frac{\sqrt{\psi_{\mathrm{out}}(x)\psi_{\mathrm{in}}(y)}}{x-y},
\]
where the functions $\psi_{\mathrm{out}}$ and $\psi_{\mathrm{in}}$ were introduced at the beginning of Section 2.
Now an obvious conjecture would be that $K = L(1+L)^{-1}$, and $K$ can be obtained through a solution of the corresponding Riemann–Hilbert problem. Both claims are true, at least under certain restrictions on the set of parameters $(z, z', w, w')$. We begin by showing how to obtain $K$ from an RHP.

Observe that the formulae for the ${}_2F_1$ kernel given in Section 2 are identical to (A.2) in the appendix with
\[
m = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix}
= \begin{pmatrix} R_{\mathrm{out}} & S_{\mathrm{in}} \\ S_{\mathrm{out}} & R_{\mathrm{in}} \end{pmatrix},
\qquad h_{\mathrm{I}} = \sqrt{\psi_{\mathrm{out}}}, \qquad h_{\mathrm{II}} = \sqrt{\psi_{\mathrm{in}}}. \tag{3.1}
\]
In particular, this means that the ${}_2F_1$ kernel is integrable. Clearly, the matrix-valued function $m$ is holomorphic in $\mathbb{C}\setminus\mathbb{R}$, and as we will see, $\det m(\zeta) \equiv 1$ (see the proof of Proposition 3.3).
PROPOSITION 3.2 The matrix $m$ solves the Riemann–Hilbert problem $(X, v)$ with
\[
v(x) =
\begin{cases}
\begin{pmatrix} 1 & -2\pi i\,\psi_{\mathrm{out}}(x) \\ 0 & 1 \end{pmatrix}, & x \in X_{\mathrm{out}}, \\[3mm]
\begin{pmatrix} 1 & 0 \\ -2\pi i\,\psi_{\mathrm{in}}(x) & 1 \end{pmatrix}, & x \in X_{\mathrm{in}}.
\end{cases} \tag{3.2}
\]
If in addition $z + z' + w + w' > 0$, then $m(\zeta) \to I$ as $\zeta \to \infty$.

PROOF: The jump condition $m_+ = m_- v$ is equivalent to (2.1) and (2.2). The asymptotic relation $m \to I$ at infinity follows from (2.8), (2.9), and (2.10). $\square$

Note that the condition $z + z' + w + w' > 0$ is only needed to guarantee the decay of $m_{12} = S_{\mathrm{in}}$ at infinity; see (2.9).
Now we investigate the nature of the singularities of $m$ near the points $\pm\frac12$ of discontinuity of the jump matrix $v$. We will need this information further on.

Introduce the matrix
\[
C(\zeta) = \begin{pmatrix}
\big(\zeta-\tfrac12\big)^{\frac{z+z'}{2}}\big(\zeta+\tfrac12\big)^{\frac{w+w'}{2}} & 0 \\
0 & \big(\zeta-\tfrac12\big)^{-\frac{z+z'}{2}}\big(\zeta+\tfrac12\big)^{-\frac{w+w'}{2}}
\end{pmatrix}. \tag{3.3}
\]
Observe that $C$ is holomorphic in $\mathbb{C}\setminus(-\infty, \frac12]$. Furthermore, on $(-\infty, \frac12)$,
\[
C_-(x)\big(C_+(x)\big)^{-1} =
\begin{cases}
\begin{pmatrix} e^{-i\pi(z+z')} & 0 \\ 0 & e^{i\pi(z+z')} \end{pmatrix}, & x \in \left(-\tfrac12, \tfrac12\right), \\[3mm]
\begin{pmatrix} e^{-i\pi(z+z'+w+w')} & 0 \\ 0 & e^{i\pi(z+z'+w+w')} \end{pmatrix}, & x \in \left(-\infty, -\tfrac12\right),
\end{cases} \tag{3.4}
\]
is clearly a piecewise constant matrix.
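The piecewise-constant jump in (3.4) comes purely from the branch cuts of the complex powers, and can be checked numerically. The sketch below is an illustration with hypothetical parameter values; since the overall sign in the exponent depends on the branch-cut and orientation conventions, the test only asserts unimodularity and membership in the two candidate values $e^{\pm i\pi(z+z'+w+w')}$:

```python
# Numerical check of the jump of the scalar
#   c(zeta) = (zeta - 1/2)^{(z+z')/2} (zeta + 1/2)^{(w+w')/2}
# (principal branches) across x < -1/2: the ratio c(x - i eps)/c(x + i eps)
# is a unimodular constant exp(-+ i pi (z+z'+w+w')).
import cmath

z, zp, w, wp = 0.3, 0.2, 0.25, 0.15   # illustration values
sig = z + zp + w + wp

def c(zeta):
    return (zeta - 0.5) ** ((z + zp) / 2) * (zeta + 0.5) ** ((w + wp) / 2)

x, eps = -2.0, 1e-9
r = c(x - 1j * eps) / c(x + 1j * eps)
cand = [cmath.exp(-1j * cmath.pi * sig), cmath.exp(1j * cmath.pi * sig)]
print(r)
```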
PROPOSITION 3.3 (i) Assume that $z \ne z'$. Then near the point $\zeta = \frac12$,
\[
m(\zeta)C^{-1}(\zeta) =
\begin{cases}
H_{1/2}(\zeta)\begin{pmatrix} \big(\zeta-\tfrac12\big)^{\frac{z-z'}{2}} & 0 \\ 0 & \big(\zeta-\tfrac12\big)^{\frac{z'-z}{2}} \end{pmatrix} U_1, & \Im\zeta > 0, \\[4mm]
H_{1/2}(\zeta)\begin{pmatrix} \big(\zeta-\tfrac12\big)^{\frac{z-z'}{2}} & 0 \\ 0 & \big(\zeta-\tfrac12\big)^{\frac{z'-z}{2}} \end{pmatrix} U_2, & \Im\zeta < 0,
\end{cases}
\]
for some nondegenerate constant matrices $U_1$ and $U_2$ and a locally holomorphic function $H_{1/2}(\zeta)$ such that $H_{1/2}(\frac12)$ is also nondegenerate.

(ii) Assume $z = z'$. Then near the point $\zeta = \frac12$,
\[
m(\zeta)C^{-1}(\zeta) =
\begin{cases}
H_{1/2}(\zeta)\begin{pmatrix} 1 & \ln\big(\zeta-\tfrac12\big) \\ 0 & 1 \end{pmatrix} V_1, & \Im\zeta > 0, \\[3mm]
H_{1/2}(\zeta)\begin{pmatrix} 1 & \ln\big(\zeta-\tfrac12\big) \\ 0 & 1 \end{pmatrix} V_2, & \Im\zeta < 0,
\end{cases}
\]
for some nondegenerate constant matrices $V_1$ and $V_2$ and a locally holomorphic function $H_{1/2}(\zeta)$ such that $H_{1/2}(\frac12)$ is also nondegenerate.
PROOF: Let us assume first that $z \ne z'$. Define a new matrix $\tilde m(\zeta)$ as follows:
\[
\tilde m(\zeta) =
\begin{cases}
m(\zeta)C^{-1}(\zeta), & \Im\zeta > 0, \\[1mm]
m(\zeta)C^{-1}(\zeta)\begin{pmatrix} 1 & -2\pi i\,C(z,z') \\ 0 & 1 \end{pmatrix}, & \Im\zeta < 0.
\end{cases}
\]
(The constants $C(z,z')$ and $C(w,w')$ were defined at the beginning of Section 2.)

By (3.2) we see that the jump matrix $\tilde v$ for $\tilde m$ locally near the point $\frac12$ has the form
\[
\tilde v = \begin{pmatrix} 1 & 2\pi i\,C(z,z') \\ 0 & 1 \end{pmatrix} C_- v\,C_+^{-1}
=
\begin{cases}
I, & x > \tfrac12, \\[1mm]
\begin{pmatrix} 1 & 2\pi i\,C(z,z') \\ 0 & 1 \end{pmatrix}\begin{pmatrix} e^{-i\pi(z+z')} & 0 \\ -2\pi i & e^{i\pi(z+z')} \end{pmatrix}, & x < \tfrac12.
\end{cases}
\]
Note that this matrix is piecewise constant.

An easy computation shows that for a certain nondegenerate matrix $U$,
\[
\begin{pmatrix} 1 & 2\pi i\,C(z,z') \\ 0 & 1 \end{pmatrix}\begin{pmatrix} e^{-i\pi(z+z')} & 0 \\ -2\pi i & e^{i\pi(z+z')} \end{pmatrix}
= U^{-1}\begin{pmatrix} e^{i\pi(z-z')} & 0 \\ 0 & e^{i\pi(z'-z)} \end{pmatrix} U.
\]
This implies that
\[
m_0(\zeta) = \begin{pmatrix} \big(\zeta-\tfrac12\big)^{\frac{z-z'}{2}} & 0 \\ 0 & \big(\zeta-\tfrac12\big)^{\frac{z'-z}{2}} \end{pmatrix} U
\]
locally solves the RHP with the jump matrix $\tilde v$.

Our conditions on the parameters $(z, z', w, w')$ imply that $|\Re(z-z')| < 1$. Then the asymptotic formulae of Section 2.6 imply that $\tilde m$ is locally square integrable near $\zeta = \frac12$, and so are $m_0$ and $m_0^{-1}$, as follows from the formula above. Since $\tilde m$ and $m_0$ locally solve the same RHP, we obtain that $\tilde m\,m_0^{-1}$ has no jump on $\mathbb{R}$ near $\zeta = \frac12$, and it is locally integrable as a product of two locally square integrable functions. Hence, this ratio is a locally holomorphic function. We denote this holomorphic function by $H_{1/2}(\zeta)$, and set
\[
U_1 = U, \qquad U_2 = U\begin{pmatrix} 1 & 2\pi i\,C(z,z') \\ 0 & 1 \end{pmatrix}.
\]
Because $v(x)$ in (3.2) has determinant $1$, it follows that $\det m_+(x) = \det m_-(x)$. Also, as above, $\det m(\zeta)$ is locally integrable. Thus, $\det m(\zeta)$ is entire. If $z+z'+w+w' > 0$, then as noted in Proposition 3.2, $m(\zeta) \to I$ as $\zeta \to \infty$, and hence, by Liouville's theorem, $\det m(\zeta) \equiv 1$. Analytic continuation in the parameters $z, z', w$, and $w'$ ensures that the same is true for all (allowable) values of the parameters. The fact that $H_{1/2}(\frac12)$ is invertible now follows from the fact that $\det m(\zeta) \equiv \det C(\zeta) \equiv 1$, and $\det U_1$ and $\det U_2$ are nonzero. The proof of (i) is complete.

Assume now that $z = z'$. Then there exists a nondegenerate matrix $V$ such that
\[
\begin{pmatrix} 1 & 2\pi i\,C(z,z') \\ 0 & 1 \end{pmatrix}\begin{pmatrix} e^{-i\pi(z+z')} & 0 \\ -2\pi i & e^{i\pi(z+z')} \end{pmatrix}
= V^{-1}\begin{pmatrix} 1 & 2\pi i \\ 0 & 1 \end{pmatrix} V,
\]
and the local solution of the RHP with the jump matrix $\tilde v$ has the form
\[
m_0(\zeta) = \begin{pmatrix} 1 & \ln\big(\zeta-\tfrac12\big) \\ 0 & 1 \end{pmatrix} V.
\]
Repeating word for word the argument above, we get (ii) with
\[
V_1 = V, \qquad V_2 = V\begin{pmatrix} 1 & 2\pi i\,C(z,z') \\ 0 & 1 \end{pmatrix}. \qquad \square
\]
Similarly to Proposition 3.3 we have the following:

PROPOSITION 3.4 (i) Assume that $w \ne w'$. Then near the point $\zeta = -\frac12$,
\[
m(\zeta)C^{-1}(\zeta) =
\begin{cases}
H_{-1/2}(\zeta)\begin{pmatrix} \big(\zeta+\tfrac12\big)^{\frac{w-w'}{2}} & 0 \\ 0 & \big(\zeta+\tfrac12\big)^{\frac{w'-w}{2}} \end{pmatrix} U_1, & \Im\zeta > 0, \\[4mm]
H_{-1/2}(\zeta)\begin{pmatrix} \big(\zeta+\tfrac12\big)^{\frac{w-w'}{2}} & 0 \\ 0 & \big(\zeta+\tfrac12\big)^{\frac{w'-w}{2}} \end{pmatrix} U_2, & \Im\zeta < 0,
\end{cases}
\]
for some nondegenerate constant matrices $U_1$ and $U_2$ and a locally holomorphic function $H_{-1/2}(\zeta)$ such that $H_{-1/2}(-\frac12)$ is also nondegenerate.

(ii) Assume $w = w'$. Then near the point $\zeta = -\frac12$,
\[
m(\zeta)C^{-1}(\zeta) =
\begin{cases}
H_{-1/2}(\zeta)\begin{pmatrix} 1 & \ln\big(\zeta+\tfrac12\big) \\ 0 & 1 \end{pmatrix} V_1, & \Im\zeta > 0, \\[3mm]
H_{-1/2}(\zeta)\begin{pmatrix} 1 & \ln\big(\zeta+\tfrac12\big) \\ 0 & 1 \end{pmatrix} V_2, & \Im\zeta < 0,
\end{cases}
\]
for some nondegenerate constant matrices $V_1$ and $V_2$ and a locally holomorphic function $H_{-1/2}(\zeta)$ such that $H_{-1/2}(-\frac12)$ is also nondegenerate.
We now return to the question raised after Proposition 3.1 of whether the kernel $L(x,y)$ provides a resolvent operator for the ${}_2F_1$ kernel $K$. The reason that we cannot immediately apply the general theory of the appendix in this case is that the functions $f_i$ and $g_i$ (or $h_{\mathrm{I}} = \sqrt{\psi_{\mathrm{out}}}$ and $h_{\mathrm{II}} = \sqrt{\psi_{\mathrm{in}}}$) in the notation of the appendix are not bounded on the contour as required by (A.1). We proceed rather by direct calculation.

First of all, we determine when the operator $L$ is bounded.

PROPOSITION 3.5 The kernel $L(x,y)$ defines a bounded operator in $L^2(X, dx)$ if and only if $|z+z'| < 1$ and $|w+w'| < 1$.
PROOF: It suffices to consider the operator $A : L^2(X_{\mathrm{in}}, dx) \to L^2(X_{\mathrm{out}}, dx)$ with the kernel $A(x,y) = \sqrt{\psi_{\mathrm{out}}(x)\psi_{\mathrm{in}}(y)}/(x-y)$.

If $|z+z'| \ge 1$, say, $z+z' \ge 1$, then the restriction of $A(x,y)$ to $(\frac12, 1)\times(-\frac12, 0)$ is a positive function in two variables bounded from below by
\[
\tfrac23\sqrt{\psi_{\mathrm{out}}(x)\psi_{\mathrm{in}}(y)}
= \tfrac23\sqrt{C(z,z')\big(x-\tfrac12\big)^{-z-z'}\big(x+\tfrac12\big)^{-w-w'}\big(\tfrac12-y\big)^{z+z'}\big(\tfrac12+y\big)^{w+w'}}\,.
\]
This kernel has $(x-\frac12)^{-\frac{z+z'}{2}}$ behavior near $x = \frac12$. Thus, $A$ is unbounded. Similarly, we see that $A$ is unbounded if $z+z' \le -1$ or $|w+w'| \ge 1$.

Now assume that $|z+z'| < 1$ and $|w+w'| < 1$. Let $\chi$ be the characteristic function of the set $(-\infty, -\frac12-\varepsilon) \cup (\frac12+\varepsilon, +\infty)$ for some $\varepsilon > 0$. Then the kernel $\chi(x)A(x,y)$ defines a Hilbert–Schmidt (hence, bounded) operator on $L^2(X, dx)$. Indeed,
\[
\iint_{X_{\mathrm{out}}\times X_{\mathrm{in}}} |\chi(x)A(x,y)|^2\,dx\,dy
\le \left[\int_{\frac12+\varepsilon}^{+\infty} \frac{\psi_{\mathrm{out}}(x)}{(x-\frac12)^2}\,dx
+ \int_{-\infty}^{-\frac12-\varepsilon} \frac{\psi_{\mathrm{out}}(x)}{(x+\frac12)^2}\,dx\right]
\cdot \int_{X_{\mathrm{in}}} \psi_{\mathrm{in}}(y)\,dy < \infty.
\]
Hence, in order to prove that $L$ is bounded, it is enough to show that for any compactly supported smooth functions $f$ on $X_{\mathrm{out}}$, $\operatorname{supp} f \subset [-\frac12-\varepsilon, -\frac12) \cup (\frac12, \frac12+\varepsilon]$, and $g$ on $X_{\mathrm{in}}$,
\[
\left|\iint_{X_{\mathrm{out}}\times X_{\mathrm{in}}} A(x,y)\,f(x)\,g(y)\,dx\,dy\right| \le \mathrm{const}\,\|f\|_2\|g\|_2. \tag{3.5}
\]
We will assume that $f$ is supported on $(\frac12, \frac12+\varepsilon]$. The case when $f$ is supported on $[-\frac12-\varepsilon, -\frac12)$ is handled similarly. Assume that $g$ is supported on $[0, \frac12)$. Let us introduce the polar coordinates $(r, \theta)$ by
\[
x - \tfrac12 = r\cos\theta, \qquad \tfrac12 - y = r\sin\theta, \qquad 0 \le \theta \le \tfrac\pi2, \quad 0 \le r \le r(\theta),
\]
for some $r(\theta) \le \mathrm{const} < \infty$. Then the integral above takes the form
\[
\sqrt{C(z,z')}\int_0^{\pi/2}\!\!\int_0^{r(\theta)}
\frac{(\cos\theta)^{-\frac{z+z'}{2}}(1+r\cos\theta)^{-\frac{w+w'}{2}}(\sin\theta)^{\frac{z+z'}{2}}(1-r\sin\theta)^{\frac{w+w'}{2}}}{\cos\theta + \sin\theta}\,
f\big(r\cos\theta + \tfrac12\big)\,g\big(\tfrac12 - r\sin\theta\big)\,dr\,d\theta. \tag{3.6}
\]
Here $r(\theta)$ is a uniformly bounded continuous function of $\theta$. Clearly, the factors $|x+\frac12|^{-\frac{w+w'}{2}} = (1+r\cos\theta)^{-\frac{w+w'}{2}}$ and $|y+\frac12|^{\frac{w+w'}{2}} = (1-r\sin\theta)^{\frac{w+w'}{2}}$ are bounded on the domain of integration. Using the inequalities
\[
\left|\int_0^\infty f\big(r\cos\theta + \tfrac12\big)\,g\big(\tfrac12 - r\sin\theta\big)\,dr\right|
\le (\cos\theta\,\sin\theta)^{-\frac12}\,\|f\|_2\|g\|_2, \qquad \cos\theta + \sin\theta \ge 1,
\]
we see that the integral (3.6) is bounded by
\[
\mathrm{const}\int_0^{\pi/2} (\cos\theta)^{-\frac{z+z'+1}{2}}(\sin\theta)^{\frac{z+z'-1}{2}}\,d\theta\;\|f\|_2\|g\|_2
\le \mathrm{const}\,\|f\|_2\|g\|_2.
\]
If $f$ is supported on $(\frac12, \frac12+\varepsilon]$ and $g$ is supported on $(-\frac12, 0]$, then the denominator in $A(x,y)$ is bounded away from zero, and $A$ is bounded by simple estimates. This completes the proof of (3.5) in the case where $f$ is supported on $(\frac12, \frac12+\varepsilon]$ and $g$ is supported on $(-\frac12, \frac12)$. $\square$
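The proof twice uses the standard fact that a Hilbert–Schmidt kernel defines a bounded operator whose norm is at most its Hilbert–Schmidt norm. A finite-dimensional illustration (generic random matrix, not tied to the kernels of this paper):

```python
# Finite-dimensional analogue: the operator (spectral) norm is bounded by
# the Hilbert-Schmidt (Frobenius) norm, ||M||_op <= ||M||_HS.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((30, 40))
op = np.linalg.norm(M, 2)        # operator norm = largest singular value
hs = np.linalg.norm(M, 'fro')    # Hilbert-Schmidt norm = sqrt(sum M_ij^2)
print(op, hs)
```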
Since $L^* = -L$, we know that if $L$ is bounded, then $(1+L)$ is invertible. It seems very plausible that whenever the operator $L$ is bounded, the relation $K = L(1+L)^{-1}$ should hold. We are able to prove this under the additional restriction $z+z'+w+w' > 0$.

PROPOSITION 3.6 Assume that $z+z'+w+w' > 0$, $|z+z'| < 1$, and $|w+w'| < 1$. Then $K = L(1+L)^{-1}$.

PROOF: Since $L$ is bounded and $L = -L^*$, $L$ has a pure imaginary spectrum, and $1+L$ is invertible. Hence, it is enough to show that $K + KL = L$. The restrictions on the parameters and the asymptotics of the functions $R_{\mathrm{out}}$, $S_{\mathrm{out}}$, $R_{\mathrm{in}}$, and $S_{\mathrm{in}}$ from Section 2.6 imply that the relations (2.1) and (2.2) can be rewritten in the integral form:
\[
\begin{aligned}
\int_{X_{\mathrm{out}}} \frac{\psi_{\mathrm{out}}(x)R_{\mathrm{out}}(x)}{x-y}\,dx &= -S_{\mathrm{in}}(y), &
\int_{X_{\mathrm{out}}} \frac{\psi_{\mathrm{out}}(x)S_{\mathrm{out}}(x)}{x-y}\,dx &= 1 - R_{\mathrm{in}}(y), \\
\int_{X_{\mathrm{in}}} \frac{\psi_{\mathrm{in}}(x)S_{\mathrm{in}}(x)}{x-y}\,dx &= 1 - R_{\mathrm{out}}(y), &
\int_{X_{\mathrm{in}}} \frac{\psi_{\mathrm{in}}(x)R_{\mathrm{in}}(x)}{x-y}\,dx &= -S_{\mathrm{out}}(y).
\end{aligned} \tag{3.7}
\]
The $1$s on the right-hand side appear because $R_{\mathrm{out}}(\zeta) \to 1$ and $R_{\mathrm{in}}(\zeta) \to 1$ as $\zeta \to \infty$. The restriction $z+z'+w+w' > 0$ is needed to ensure the convergence of the first integral at infinity. Indeed, $\psi_{\mathrm{out}}(x)R_{\mathrm{out}}(x) \asymp x^{-z-z'-w-w'}$ as $x \to \infty$.

The identity
\[
K(x,y) + \int_X L(x,\zeta)K(\zeta,y)\,d\zeta = L(x,y) \tag{3.8}
\]
for all $x, y \in X$ follows directly from the relations (3.7); see [13, theorem 3.3] for a similar computation. On the other hand, by (2.8) we see that for any $g \in C_0^\infty(X)$,
\[
G(\zeta) = \int_X K(\zeta, y)g(y)\,dy = Kg(\zeta)
\]
lies in $L^2(X, d\zeta)$. Integrating (3.8) against $g(y)$, we see that $(1+L)G = Lg$ and hence $Kg = (1+L)^{-1}Lg$ in $L^2(X)$. It follows that $K$ extends to a bounded operator $(1+L)^{-1}L = L(1+L)^{-1}$ in $L^2(X)$. Conversely, we see that the bounded operator $L(1+L)^{-1}$ has a kernel action given by the ${}_2F_1$ kernel $K(x,y)$. $\square$
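The operator structure in Proposition 3.6 has a transparent finite-dimensional analogue, which the following sketch illustrates (generic skew-symmetric matrix standing in for the anti-self-adjoint $L$; not the actual operator of the paper):

```python
# Finite-dimensional analogue of Proposition 3.6: if L^T = -L then the
# spectrum of L is purely imaginary, 1 + L is invertible, and
# K = L(1+L)^{-1} satisfies K + K L = L.
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
L = B - B.T                                  # skew-symmetric: L^T = -L
eig = np.linalg.eigvals(L)                   # purely imaginary
K = L @ np.linalg.inv(np.eye(6) + L)
print(np.max(np.abs(eig.real)))
```

Indeed $K + KL = K(1+L) = L(1+L)^{-1}(1+L) = L$, which is exactly the identity verified in the proof.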
Proposition 3.6 has the following corollary, which will be important for us later.

COROLLARY 3.7 Assume that $z+z'+w+w' > 0$, $|z+z'| < 1$, and $|w+w'| < 1$. Then, in the notation of Section 2, the operator $1 - K_J$ is invertible.

PROOF: In the block form corresponding to the splitting $J = J_{\mathrm{out}} \sqcup J_{\mathrm{in}}$, the operator $1 - K_J$ has the form
\[
1 - K_J = \begin{pmatrix} 1 - K_J^{\mathrm{out,out}} & -K_J^{\mathrm{out,in}} \\ -K_J^{\mathrm{in,out}} & 1 - K_J^{\mathrm{in,in}} \end{pmatrix}.
\]
But it is easy to see that an operator written in the block form $\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)$ is invertible if $a$ is invertible and $(d - ca^{-1}b)$ is invertible. Therefore, it is enough to prove that
\[
1 - K_J^{\mathrm{out,out}} \qquad\text{and}\qquad
\big(1 - K_J^{\mathrm{in,in}}\big) - K_J^{\mathrm{in,out}}\big(1 - K_J^{\mathrm{out,out}}\big)^{-1} K_J^{\mathrm{out,in}}
\]
are invertible.

Proposition 3.6 and the definition of the operator $L$ imply that
\[
K_{\mathrm{out,out}} = 1 - (1 + AA^*)^{-1}, \qquad K_{\mathrm{in,in}} = 1 - (1 + A^*A)^{-1}.
\]
Hence, $K_{\mathrm{out,out}}$ and $K_{\mathrm{in,in}}$ are positive operators that are strictly less than $1$. Thus, the same is true for $K_J^{\mathrm{out,out}}$ and $K_J^{\mathrm{in,in}}$. In particular, $1 - K_J^{\mathrm{out,out}}$ is invertible. Further, $K_J^{\mathrm{out,in}} = -(K_J^{\mathrm{in,out}})^*$. Hence,
\[
\big(1 - K_J^{\mathrm{in,in}}\big) - K_J^{\mathrm{in,out}}\big(1 - K_J^{\mathrm{out,out}}\big)^{-1} K_J^{\mathrm{out,in}}
= \big(1 - K_J^{\mathrm{in,in}}\big) + \text{bounded positive operator}
\]
is invertible. $\square$

Remark 3.8. It is plausible that the operator $1 - K_J$ is invertible without any restrictions on the parameters (as opposed to the full operator $1 - K$, which definitely ceases to be invertible if we remove the restrictions $|z+z'| < 1$ and $|w+w'| < 1$). However, we do not have a proof of this. In a similar but simpler situation of the Whittaker kernel, we will prove the corresponding statement in Section 8.2; see part (iii) of Proposition 8.4.
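The block-invertibility criterion used in the proof is the Schur-complement fact, which in finite dimensions comes with the exact determinant identity $\det\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right) = \det(a)\det(d - ca^{-1}b)$. A generic numerical illustration (random blocks, unrelated to the operators above):

```python
# Schur-complement identity for block matrices:
#   det([[a, b], [c, d]]) = det(a) * det(d - c a^{-1} b),  a invertible,
# which underlies the invertibility criterion in the proof of Corollary 3.7.
import numpy as np

rng = np.random.default_rng(2)
n = 4
a = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # safely invertible
d = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal((n, n))
c = rng.standard_normal((n, n))

M = np.block([[a, b], [c, d]])
schur = d - c @ np.linalg.inv(a) @ b
lhs = np.linalg.det(M)
rhs = np.linalg.det(a) * np.linalg.det(schur)
print(lhs, rhs)
```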
4 Associated System of Linear Differential Equations with Rational Coefficients

Our goal in this section is to show that the kernel of the (trace class and hence Hilbert–Schmidt) operator
\[
R_J = \frac{K_J}{1 - K_J}
\]
can be expressed through a solution of a system of linear differential equations with rational coefficients. This result will be crucial in our study of the Fredholm determinant $\det(1 - K_J)$ in the next section.

In what follows we assume that $\Sigma = z+z'+w+w' > 0$.

As noted at the beginning of Section 3, $K$ is an integrable kernel:
\[
K(x,y) = \frac{F_1(x)G_1(y) + F_2(x)G_2(y)}{x-y}.
\]
Hence, $K_J$ is an integrable kernel. Since $J$ is bounded away from the points $\pm\frac12$, it is easy to see that the functions $F_i$ and $G_i$ (which are, in fact, the functions $\sqrt{\psi_{\mathrm{out}}}\,R_{\mathrm{out}}$, $\sqrt{\psi_{\mathrm{out}}}\,S_{\mathrm{out}}$, $\sqrt{\psi_{\mathrm{in}}}\,R_{\mathrm{in}}$, and $\sqrt{\psi_{\mathrm{in}}}\,S_{\mathrm{in}}$ rearranged in a certain way) belong to $L^p(J, dx) \cap L^\infty(J, dx)$ for any $p > 2\Sigma^{-1}$. This follows from (2.8) and (2.9).
Set
\[
v_J = I - 2\pi i\,FG^t =
\begin{pmatrix} 1 - 2\pi i\,F_1G_1 & -2\pi i\,F_1G_2 \\ -2\pi i\,F_2G_1 & 1 - 2\pi i\,F_2G_2 \end{pmatrix}.
\]
Note that $F^t(x)G(x) = F_1(x)G_1(x) + F_2(x)G_2(x) = 0$.
PROPOSITION 4.1 Assume that the operator $1 - K_J$ is invertible. Then there exists a solution $m_J$ of the normalized RHP $(J, v_J)$ such that the kernel of the operator $R_J = K_J(1-K_J)^{-1}$ has the form
\[
R_J(x,y) = \frac{\tilde F_1(x)\tilde G_1(y) + \tilde F_2(x)\tilde G_2(y)}{x-y},
\]
\[
\tilde F = m_{J+}F = m_{J-}F, \qquad \tilde G = \big(m_{J+}^t\big)^{-1}G = \big(m_{J-}^t\big)^{-1}G.
\]
The matrix $m_J$ is locally square integrable near the endpoints of $J$.

PROOF: See Proposition A.2 and the succeeding comment. $\square$

Concerning the invertibility of $(1 - K_J)$, see Corollary 3.7 and Remark 3.8.
Later on we will need the following property of the decay of $m_J$ at infinity:

PROPOSITION 4.2 As $\zeta \to \infty$, $\zeta \in \mathbb{C}\setminus\mathbb{R}$, we have $m'_J(\zeta)m_J^{-1}(\zeta) = o(|\zeta|^{-1})$.

PROOF: We will give the proof for $J = (s, +\infty)$, $s > \frac12$. The proof for general $J$ is similar.

Observe that $\det v_J \equiv 1$. Then $\det m_J$ has no jump on $J$. Since $m_J$ is square integrable near $t = s$, $\det m_J$ is locally integrable. Moreover, $\det m_J(\zeta) \to 1$ as $\zeta \to \infty$, because $m_J(\zeta) \to I$. Again by Liouville's theorem, $\det m_J \equiv 1$, and $m_J^{-1}$ is bounded near $\zeta = \infty$. Therefore, it suffices to show that $m'_J(\zeta) = o(|\zeta|^{-1})$.

The proof of Proposition A.2 given in [20] implies that for $\zeta \in \mathbb{C}\setminus\mathbb{R}$
\[
m_J(\zeta) = I - \int_s^{+\infty} \frac{m_{J+}(t)F(t)G^t(t)}{t-\zeta}\,dt
= I - \int_s^{+\infty} \frac{m_{J-}(t)F(t)G^t(t)}{t-\zeta}\,dt;
\]
therefore,
\[
m'_J(\zeta) = -\int_s^{+\infty} \frac{m_{J+}(t)F(t)G^t(t)}{(t-\zeta)^2}\,dt
= -\int_s^{+\infty} \frac{m_{J-}(t)F(t)G^t(t)}{(t-\zeta)^2}\,dt.
\]
If $|\Im\zeta| > \mathrm{const}\,|\zeta|$, then $|t-\zeta| > \mathrm{const}\,|\zeta|$. That is, the distance of the point $\zeta$ to the contour of integration is of order $|\zeta|$. Since $m_{J\pm}(t)$ is bounded and $F(t)G^t(t)$ decays at infinity like a negative power of $t$, we see that $m'_J(\zeta) = o(|\zeta|^{-1})$.

If the point $\zeta$ is closer to the real line and, say, $\Im\zeta < 0$, we can deform the line of integration up to the line $s + te^{i\varphi}$, $0 < \varphi < \frac\pi2$. In other words,
\[
m'_J(\zeta) = -\int_0^{+\infty} \frac{m_J(s+te^{i\varphi})\,F(s+te^{i\varphi})\,G^t(s+te^{i\varphi})}{(s+te^{i\varphi}-\zeta)^2}\,e^{i\varphi}\,dt.
\]
Here it is crucial that the vector functions $F$ and $G$ (which are expressed in terms of the functions $\sqrt{\psi_{\mathrm{out}}}\,R_{\mathrm{out}}$ and $\sqrt{\psi_{\mathrm{out}}}\,S_{\mathrm{out}}$) have analytic continuations in the sector $0 \le \arg(\zeta - s) \le \varphi$. Now the distance of the point $\zeta$ to the contour of integration is again of order $|\zeta|$, and the argument above again implies $m'_J(\zeta) = o(|\zeta|^{-1})$. If $\Im\zeta > 0$, the proof is similar with the line of integration deformed down. $\square$
We now describe a general procedure (cf. steps 1, 2, and 3 in the introduction) to convert RHPs with "complicated" jump matrices to RHPs with "simple" jump matrices. The procedure will be used again in Sections 8 and 9 to analyze a variety of other examples of integrable kernels.

LEMMA 4.3 Suppose $Y = Y_1 \cup Y_2$ and $Y_1 \cap Y_2 = \emptyset$ is a decomposition of the oriented contour $Y \subset \mathbb{C}$ into two disjoint parts. Suppose $v$ is a function on $Y$ with values in $\mathrm{Mat}(k, \mathbb{C})$. Suppose $m$ and $m_1$ solve the RHPs $(Y, v)$ and $(Y_1, v)$, respectively. Then if $m^{-1}$ exists, $m_2 = m_1 m^{-1}$ solves the RHP $(Y_2, v_2)$ where $v_2 = m_+ v^{-1} m_+^{-1} = m_- v^{-1} m_-^{-1}$. Conversely, if $m$ and $m_2$ solve the RHPs $(Y, v)$ and $(Y_2, v_2)$, respectively, then $m_1 = m_2 m$ solves the RHP $(Y_1, v)$.

PROOF: The proof is by direct calculation. $\square$
Recall that, as noted at the beginning of Section 3, the formulae for the kernel $K$ are identical to (A.2) with $m$, $h^{\mathrm{I}}$, and $h^{\mathrm{II}}$ given by (3.1). This, in particular, means that
\[
F = m_\pm f\,, \qquad G = (m_\pm^t)^{-1} g\,,
\]
with
\[
f_1(x) = g_2(x) = \begin{cases} \psi_{\mathrm{out}}(x)\,, & x \in X_{\mathrm{out}}\,, \\ 0\,, & x \in X_{\mathrm{in}}\,, \end{cases}
\qquad
f_2(x) = g_1(x) = \begin{cases} 0\,, & x \in X_{\mathrm{out}}\,, \\ \psi_{\mathrm{in}}(x)\,, & x \in X_{\mathrm{in}}\,. \end{cases}
\]
Note that the matrix $v$ in (3.2) has the form $I + 2\pi i\, fg^t$.
LEMMA 4.4 If the matrix $v$ in Lemma 4.3 has the form $v = I + 2\pi i\, fg^t$ for (arbitrary) $f$ and $g$ with $f^t(x)g(x) = 0$, then
\[
(4.1)\qquad v_2 = I - 2\pi i\, FG^t\,,
\]
where $F = m_+ f = m_- f$ and $G = (m_+^t)^{-1} g = (m_-^t)^{-1} g$.

PROOF: We have
\[
v_2 = m_+\big(I - 2\pi i\, fg^t\big)m_+^{-1} = I - 2\pi i\,(m_+ f)\big((m_+^t)^{-1} g\big)^t. \qquad\square
\]
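Though stated for boundary values of an RHP solution, Lemma 4.4 is pure matrix algebra, so it can be checked numerically with a random invertible matrix standing in for $m_+$. The sketch below is an illustration only and not part of the original text; it verifies both the inversion formula $v^{-1} = I - 2\pi i\,fg^t$ (valid because $(fg^t)^2 = (g^t f)\,fg^t = 0$) and the conjugation identity (4.1).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# Random vectors subject to the bilinear orthogonality f^t g = 0 assumed in Lemma 4.4.
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = g - f * (f @ g) / (f @ f)          # bilinear projection, so f @ g == 0

v = np.eye(N) + 2j * np.pi * np.outer(f, g)

# (I + 2 pi i f g^t)^(-1) = I - 2 pi i f g^t, since (f g^t)^2 = 0
v_inv = np.eye(N) - 2j * np.pi * np.outer(f, g)
assert np.allclose(v @ v_inv, np.eye(N))

# A random invertible matrix plays the role of the boundary value m_+.
m = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

F = m @ f                              # F = m_+ f
G = np.linalg.solve(m.T, g)            # G = (m_+^t)^(-1) g

v2 = m @ v_inv @ np.linalg.inv(m)      # v2 = m_+ v^(-1) m_+^(-1)
assert np.allclose(v2, np.eye(N) - 2j * np.pi * np.outer(F, G))
print("Lemma 4.4 identity verified")
```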

Let $\Sigma = X$, $\Sigma_1 = X \setminus J$, and $\Sigma_2 = J$. Now $m_J$ solves the RHP $(\Sigma_2, v_2)$ with $v_2 = v_J = I - 2\pi i\, FG^t$. But as noted above, $F = m_- f$ and $G = (m_-^t)^{-1} g$, and so it follows by Lemmas 4.3 and 4.4 that
\[
m_1 = m_{X\setminus J} \equiv m_J m
\]
satisfies the RHP $(\Sigma_1, v)$, where $v = I + 2\pi i\, fg^t$ as before. We think of $v_2$ as the complicated jump matrix and $v$ as the simple jump matrix. The formula $m_J = m_{X\setminus J}\, m^{-1}$ shows that the analysis of the solution of the complicated RHP $(J, v_J)$ reduces to the analysis of the solutions of two simple RHPs $(X\setminus J, v)$ and $(X, v)$.
These two RHPs are simple for the following reason. Recall that in Section 3 we introduced a matrix $C(\zeta)$; see (3.3).

Set $M = m_{X\setminus J}\, C^{-1}$. This is a holomorphic function on $\mathbb{C} \setminus \mathbb{R}$ that has boundary values $M_\pm(x)$ on $\mathbb{R}$.
LEMMA 4.5 The matrix-valued function $M(\zeta)$ satisfies the jump relation $M_+ = M_- V$, where the jump matrix $V$ has the form
\[
V(x) = \begin{cases}
\begin{pmatrix} 1 & 2\pi i\,C(z,z')\chi(x) \\ 0 & 1 \end{pmatrix}, & x > \frac12\,, \\[2mm]
\begin{pmatrix} e^{\pi i(z+z')} & 0 \\ 2\pi i\,\chi(x) & e^{-\pi i(z+z')} \end{pmatrix}, & \frac12 > x > -\frac12\,, \\[2mm]
\begin{pmatrix} e^{\pi i(z+z'+w+w')} & 2\pi i\,C(w,w')\chi(x) \\ 0 & e^{-\pi i(z+z'+w+w')} \end{pmatrix}, & x < -\frac12\,,
\end{cases}
\]
where $\chi(x) = \chi_{X\setminus J}(x)$ is the characteristic function of the set $X\setminus J$.

PROOF: On $X\setminus J$ we have
\[
V = M_-^{-1}M_+ = C_-\, v\, C_+^{-1} = C_- C_+^{-1} + 2\pi i\, C_- f\,\big((C_+^t)^{-1} g\big)^t,
\]
and on $J$ we have $V = C_- C_+^{-1}$. The jump relation (3.4) and explicit formulae for $C$, $f$, and $g$ conclude the proof. $\square$

The important fact about the jump matrix $V$ is that it is piecewise constant. As discussed in the introduction, this allows us to prove the following central claim. Recall that $J$ is a union of $m$ intervals with endpoints $\{a_j\}_{j=1}^{2m}$; see (2.11).
THEOREM 4.6 The matrix $M$ satisfies the differential equation
\[
M'(\zeta) = \left(\frac{A}{\zeta-\frac12} + \frac{B}{\zeta+\frac12} + \sum_{j=1}^{2m}\frac{C_j}{\zeta-a_j}\right) M(\zeta)
\]
with some constant matrices $A$, $B$, and $\{C_j\}_{j=1}^{2m}$. If $a_1 = -\infty$, then $C_1 = 0$, and if $a_{2m} = +\infty$, then $C_{2m} = 0$. Other than that, all matrices $A$, $B$, and $\{C_j\}_{j=1}^{2m}$ are nonzero. Moreover,
\[
\operatorname{tr} A = \operatorname{tr} B = \operatorname{tr} C_j = 0\,,
\]
\[
\det A = -\Big(\frac{z-z'}{2}\Big)^2\,, \qquad \det B = -\Big(\frac{w-w'}{2}\Big)^2\,, \qquad \det C_j = 0\,,
\]
for all $j = 1, 2, \ldots, 2m$, and
\[
A + B + \sum_{j=1}^{2m} C_j = -\frac{z+z'+w+w'}{2}\,\sigma_3\,, \qquad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
\]
PROOF: Since $M$ satisfies the jump condition with a piecewise constant jump matrix $V$ (Lemma 4.5), $M'$ satisfies the jump condition with exactly the same jump matrix. Therefore, $M'M^{-1}$ has no jump across $X$. Note that
\[
\det M = \det m_J\, \det m\, (\det C)^{-1} \equiv 1\,,
\]
and hence $M^{-1}$ exists.

Thus, we know that $M'M^{-1}$ is a holomorphic function away from the points $\{\pm\frac12\} \cup \{a_i\}_{i=1}^{2m}$. We now investigate the behavior of $M$ near these points.
Near $\zeta = \frac12$, $m_J(\zeta)$ is holomorphic, and the behavior of $m(\zeta)C^{-1}(\zeta)$ is described by Proposition 3.3. This implies, in the notation of Proposition 3.3, that for $z \ne z'$
\[
M'(\zeta)M^{-1}(\zeta) = \frac{1}{\zeta-\frac12}\, m_J\big(\tfrac12\big)\, H_{1/2}\big(\tfrac12\big)
\begin{pmatrix} \frac{z-z'}{2} & 0 \\ 0 & \frac{z'-z}{2} \end{pmatrix}
H_{1/2}^{-1}\big(\tfrac12\big)\, m_J^{-1}\big(\tfrac12\big) + O(1)\,,
\]
and for $z = z'$
\[
M'(\zeta)M^{-1}(\zeta) = \frac{1}{\zeta-\frac12}\, m_J\big(\tfrac12\big)\, H_{1/2}\big(\tfrac12\big)
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
H_{1/2}^{-1}\big(\tfrac12\big)\, m_J^{-1}\big(\tfrac12\big) + O(1)\,.
\]
Similarly, near $\zeta = -\frac12$ we have the following: For $w \ne w'$,
\[
M'(\zeta)M^{-1}(\zeta) = \frac{1}{\zeta+\frac12}\, m_J\big(-\tfrac12\big)\, H_{-1/2}\big(-\tfrac12\big)
\begin{pmatrix} \frac{w-w'}{2} & 0 \\ 0 & \frac{w'-w}{2} \end{pmatrix}
H_{-1/2}^{-1}\big(-\tfrac12\big)\, m_J^{-1}\big(-\tfrac12\big) + O(1)\,,
\]
and for $w = w'$,
\[
M'(\zeta)M^{-1}(\zeta) = \frac{1}{\zeta+\frac12}\, m_J\big(-\tfrac12\big)\, H_{-1/2}\big(-\tfrac12\big)
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
H_{-1/2}^{-1}\big(-\tfrac12\big)\, m_J^{-1}\big(-\tfrac12\big) + O(1)\,.
\]
As for the points $\{a_j\}_{j=1}^{2m}$, we will prove the following claim:
LEMMA 4.7 In a neighborhood of any finite endpoint $a_j$, $j = 1, 2, \ldots, 2m$,
\[
M'(\zeta)M^{-1}(\zeta) = \frac{C_j}{\zeta-a_j} + h_j(\zeta)\,,
\]
where $h_j(\zeta)$ is holomorphic near $a_j$, and $C_j$ is a nonzero nilpotent matrix.
Let us postpone the proof of this lemma and proceed with the proof of Theorem 4.6. Observe that if we set
\[
A = \begin{cases}
m_J\big(\frac12\big)\, H_{1/2}\big(\frac12\big)\begin{pmatrix}\frac{z-z'}{2} & 0\\ 0 & \frac{z'-z}{2}\end{pmatrix} H_{1/2}^{-1}\big(\frac12\big)\, m_J^{-1}\big(\frac12\big)\,, & z \ne z'\,,\\[2mm]
m_J\big(\frac12\big)\, H_{1/2}\big(\frac12\big)\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix} H_{1/2}^{-1}\big(\frac12\big)\, m_J^{-1}\big(\frac12\big)\,, & z = z'\,,
\end{cases}
\]
\[
B = \begin{cases}
m_J\big(-\frac12\big)\, H_{-1/2}\big(-\frac12\big)\begin{pmatrix}\frac{w-w'}{2} & 0\\ 0 & \frac{w'-w}{2}\end{pmatrix} H_{-1/2}^{-1}\big(-\frac12\big)\, m_J^{-1}\big(-\frac12\big)\,, & w \ne w'\,,\\[2mm]
m_J\big(-\frac12\big)\, H_{-1/2}\big(-\frac12\big)\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix} H_{-1/2}^{-1}\big(-\frac12\big)\, m_J^{-1}\big(-\frac12\big)\,, & w = w'\,,
\end{cases}
\]
then the function
\[
(4.2)\qquad M'(\zeta)M^{-1}(\zeta) - \left(\frac{A}{\zeta-\frac12} + \frac{B}{\zeta+\frac12} + \sum_{j=1}^{2m}\frac{C_j}{\zeta-a_j}\right)
\]
is entire. At infinity we have, using $m_{X\setminus J} = m_J m$,
\[
M'M^{-1} = m_J' m_J^{-1} + m_J\, m' m^{-1}\, m_J^{-1} + m_J\, m\, (C^{-1})' C\, m^{-1} m_J^{-1}\,.
\]
We know that $m \to I$, $m_J \to I$, $m_J' m_J^{-1} = o(|\zeta|^{-1})$ (see Proposition 4.2), and $m'm^{-1} = o(|\zeta|^{-1})$ (which follows from (2.8) and differentiation of (2.9)), and by direct computation
\[
(C^{-1})'(\zeta)\, C(\zeta) = -\frac{z+z'+w+w'}{2}\,\sigma_3\,\zeta^{-1}\,.
\]
This implies that
\[
M'(\zeta)M^{-1}(\zeta) = -\frac{z+z'+w+w'}{2}\,\sigma_3\,\zeta^{-1} + o(|\zeta|^{-1})\,.
\]
Then, by Liouville's theorem, the expression (4.2) is identically equal to zero. Multiplying it by $\zeta$ and passing to the limit $\zeta \to \infty$, we see that
\[
A + B + \sum_{j=1}^{2m} C_j + \frac{z+z'+w+w'}{2}\,\sigma_3 = 0\,.
\]
The remaining properties of $A$ and $B$ follow directly from their definitions. This concludes the proof of the theorem modulo Lemma 4.7.
PROOF OF LEMMA 4.7: Let us give a proof for an odd value of $j$. The proof for the even $j$'s is obtained by changing the sign of $\zeta$. We will omit the subscript $j$ in $a_j$ and $C_j$.

Near the point $a$ the jump matrix for $M$ has the form (Lemma 4.5)
\[
(M_-(x))^{-1}M_+(x) = \begin{cases} C^o + 2\pi i\, f^o(g^o)^t\,, & x < a\,,\\ C^o\,, & x > a\,, \end{cases}
\]
where $f^o = C_- f$ and $g^o = (C_+^t)^{-1} g$ are locally constant vectors, and $C^o = C_- C_+^{-1}$ is a locally constant matrix. Note that $\big((C^o)^{-1}f^o\big)^t g^o = f^t g = 0$.

Set
\[
\widetilde M(\zeta) = \begin{cases} M(\zeta)\,, & \Im\zeta > 0\,,\\ M(\zeta)\,C^o\,, & \Im\zeta < 0\,. \end{cases}
\]
Then
\[
(\widetilde M_-(x))^{-1}\widetilde M_+(x) = \begin{cases} I + 2\pi i\,(C^o)^{-1}f^o(g^o)^t\,, & x < a\,,\\ I\,, & x > a\,, \end{cases}
\]
and we note that
\[
\widetilde M^o = \exp\Big(\frac{1}{2\pi i}\ln\big(I + 2\pi i\,(C^o)^{-1}f^o(g^o)^t\big)\ln(\zeta-a)\Big)
= \exp\big((C^o)^{-1}f^o(g^o)^t\ln(\zeta-a)\big)
= I + (C^o)^{-1}f^o(g^o)^t\ln(\zeta-a)
\]
is also a solution of this local RHP. (Here we use the fact that $\big((C^o)^{-1}f^o(g^o)^t\big)^2 = 0$.) Hence,
\[
M^o(\zeta) = \begin{cases} \widetilde M^o(\zeta)\,, & \Im\zeta > 0\,,\\ \widetilde M^o(\zeta)\,(C^o)^{-1}\,, & \Im\zeta < 0\,, \end{cases}
\]
is a local solution of the RHP for $M$ near $\zeta = a$.

Since $M = m_J m C^{-1}$, $m$ and $C^{-1}$ are bounded near $a$, and $m_J$ is square integrable near $a$ (Proposition 4.1), we conclude that $M$ is square integrable near $a$. Clearly, $M^o$ is also locally square integrable, and $\det M^o \equiv 1$. Hence, $H_a(\zeta) \equiv M(\zeta)(M^o(\zeta))^{-1}$ is locally integrable and does not have any jump across $\mathbb{R}$ near $a$. Therefore, $H_a(\zeta)$ is holomorphic near $a$. Since $\det M \equiv 1$, $H_a$ is nonsingular. We obtain
\[
(4.3)\qquad M(\zeta) = H_a(\zeta)\,M^o(\zeta)\,.
\]
Computing $M'M^{-1}$ explicitly, we arrive at the desired claim with
\[
h = H_a' H_a^{-1} \qquad\text{and}\qquad C = H_a(a)\,(C^o)^{-1}f^o(g^o)^t\, H_a^{-1}(a)\,.
\]
Since $(C^o)^{-1}f^o(g^o)^t$ is nilpotent and nonzero, the proof of Lemma 4.7 and Theorem 4.6 is complete. $\square$
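The two algebraic facts driving this local construction, the nilpotency of $N = (C^o)^{-1}f^o(g^o)^t$ and the resulting collapse of the matrix exponential, can be confirmed numerically. The sketch below is an illustration only (not from the paper); random data model $C^o$, $f^o$, $g^o$ subject to the constraint $((C^o)^{-1}f^o)^t g^o = 0$.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
N_dim = 3

Co = rng.standard_normal((N_dim, N_dim)) + 1j * rng.standard_normal((N_dim, N_dim))
fo = rng.standard_normal(N_dim) + 1j * rng.standard_normal(N_dim)
go = rng.standard_normal(N_dim) + 1j * rng.standard_normal(N_dim)

# enforce ((Co)^(-1) fo)^t go = 0, the relation inherited from f^t g = 0
h = np.linalg.solve(Co, fo)
go = go - h * (h @ go) / (h @ h)

Nmat = np.outer(h, go)                # N = (Co)^(-1) fo (go)^t
assert np.allclose(Nmat @ Nmat, 0)    # N is nilpotent

# exp(N log(zeta - a)) collapses to I + N log(zeta - a)
L = 0.7 - 0.3j                        # stands for log(zeta - a)
assert np.allclose(expm(Nmat * L), np.eye(N_dim) + Nmat * L)

# Boundary values of log(zeta - a) on the cut x < a differ by 2 pi i,
# so the local solution I + N log(zeta - a) has jump I + 2 pi i N there.
lp, lm = np.log(2.0) + 1j * np.pi, np.log(2.0) - 1j * np.pi
Mp = np.eye(N_dim) + Nmat * lp
Mm = np.eye(N_dim) + Nmat * lm
assert np.allclose(np.linalg.inv(Mm) @ Mp, np.eye(N_dim) + 2j * np.pi * Nmat)
print("local RHP solution verified")
```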
Remark 4.8. Arguing in exactly the same way as we did in the proof of Theorem 4.6 above, it is not hard to prove the equation
\[
\big(m(\zeta)C^{-1}(\zeta)\big)' = \left(\frac{A_0}{\zeta-\frac12} + \frac{B_0}{\zeta+\frac12}\right) m(\zeta)C^{-1}(\zeta)
\]
with some constant matrices $A_0$ and $B_0$. These matrices can be explicitly computed (as opposed to the matrices $A$, $B$, and $\{C_j\}$ in Theorem 4.6!). The resulting system of differential equations is equivalent to (2.5), (2.6), and (2.7).
5 General Setting
As noted earlier, the arguments that we used to derive Theorem 4.6 and will use
to derive further results can be applied to other kernels as well (see Section 8 for
examples). In this short section we place the results of the previous section in a
general framework.
Let $K(x,y)$ be a smooth integrable kernel
\[
K(x,y) = \frac{\sum_{j=1}^N F_j(x)G_j(y)}{x-y}\,, \qquad \sum_{j=1}^N F_j(x)G_j(x) = 0\,,
\]
defined on a subset $X$ of the real line. Let us assume that $X$ is a finite union of (possibly infinite) disjoint intervals.
We list the necessary conditions on the kernel $K$.

(1) Assume we are given functions $\{f_j, g_j\}_{j=1}^N$ on $X$ for which there exists a solution $m$ of the RHP $(X, v)$ with the jump matrix
\[
v = I + 2\pi i\, fg^t = \big(\delta_{kl} + 2\pi i\, f_k g_l\big)_{k,l=1}^N
\]
such that the relations $F = m_+ f = m_- f$ and $G = (m_+^t)^{-1} g = (m_-^t)^{-1} g$ are satisfied. Then necessarily
\[
\sum_{j=1}^N f_j(x)g_j(x) = f^t(x)g(x) = (mf)^t(x)\big((m^t)^{-1}g\big)(x) = F^t(x)G(x) = 0\,.
\]
For such functions $\{f_j, g_j\}_{j=1}^N$ the kernel $L(x,y)$ defined by
\[
L(x,y) = \frac{\sum_{j=1}^N f_j(x)g_j(y)}{x-y}
\]
formally satisfies the relation $K = L(1+L)^{-1}$; see Proposition A.2. As we have seen in Section 3, it can happen that the solution $m$ of the RHP is defined but the integral operator $L$ in $L^2(X, dx)$ given by the kernel $L(x,y)$ is unbounded. In such cases, greater care must be taken in assigning a meaning to $L(1+L)^{-1}$.
Let $J$ be a subset of $X$ formed by a union of finitely many possibly infinite disjoint intervals:
\[
J = (a_1, a_2) \cup (a_3, a_4) \cup \cdots \cup (a_{2m-1}, a_{2m}) \subset X\,, \qquad
-\infty \le a_1 < a_2 < \cdots < a_{2m} \le +\infty\,.
\]
The endpoints $\{a_j\}$ of $J$ are allowed to coincide with the endpoints of $X$.

(2) Assume that the kernel $K_J = K|_J$ defines a trace class integral operator in $L^2(J, dx)$.

(3) Assume also that the operator $1 - K_J$ is invertible in $L^2(J, dx)$.

(4) Further assume that the restrictions of the functions $F_j$ and $G_j$ to $J$ lie in $L^p(J, dx) \cap L^\infty(J, dx)$ for some $p$, $1 < p < \infty$.
Then by Proposition A.2 there exists a solution $m_J$ of the normalized RHP $(J, v_J)$ with $v_J = I - 2\pi i\, FG^t$, and the kernel of the operator $R_J = K_J(1-K_J)^{-1}$ has the form
\[
R_J(x,y) = \frac{\sum_{j=1}^N \mathbf{F}_j(x)\,\mathbf{G}_j(y)}{x-y}\,,
\]
\[
\mathbf{F} = (m_J)_\pm F = (m_J)_\pm\, m_\pm f\,, \qquad
\mathbf{G} = \big((m_J^t)_\pm\big)^{-1} G = \big((m_J^t)_\pm\big)^{-1}(m_\pm^t)^{-1} g\,.
\]
Set $m_{X\setminus J} = m_J m$. As in Section 4, we see that $m_{X\setminus J}$ satisfies the RHP $(X\setminus J, v)$.
The crucial condition is that this RHP can be reduced to an RHP with a piecewise constant jump matrix. We formulate this more precisely as follows:

(5) Assume that there exists a matrix-valued holomorphic function $C : \mathbb{C}\setminus\mathbb{R} \to \mathrm{Mat}(N,\mathbb{C})$ such that
(a) $C$ is invertible;
(b) $f^o = C_- f$ is a piecewise constant vector on $X$;
(c) $g^o = (C_+^t)^{-1} g$ is a piecewise constant vector on $X$;
(d) $C^o = C_- C_+^{-1}$ is an invertible piecewise constant matrix on $X$;
(e) $(C^{-1})'(\zeta)\,C(\zeta) = D\,\zeta^{-1} + o(|\zeta|^{-1})$ as $\zeta \to \infty$, where $D$ is a constant matrix.

Now form the matrix $M = m_{X\setminus J}\, C^{-1} = m_J m C^{-1}$. Condition (5) implies that the jump matrix for $M$, which is equal to
\[
V = M_-^{-1}M_+ = C^o + 2\pi i\, f^o(g^o)^t\,\chi_{X\setminus J}
\]
(cf. Lemma 4.5), is piecewise constant.
Now in order to ensure the existence of a differential equation for $M$ with respect to $\zeta$, we need to know something about the local behavior of $M$ near the points of discontinuity of $V$ and near infinity.

To state the condition on the local behavior of $M$ we have to be sure that the matrix $M^{-1}$ exists. Note that the determinants of $v$ and $v_J$ are identically equal to 1, because both $v$ and $v_J$ are equal to the identity plus a nilpotent matrix. This means that the scalar functions $\det m$ and $\det m_J$ have no jump across $X$. As $m$ and $m_J$ tend to $I$ at infinity, $\det m$ and $\det m_J$ tend to 1 at infinity. Modulo certain regularity conditions on $\det m$ and $\det m_J$ near the endpoints of $X$ and $J$ (which are always satisfied in the applications), Liouville's theorem implies that $\det m = \det m_J \equiv 1$, and the matrices $m$, $m_J$, and $M$ are invertible.

(6) Assume that $M'(\zeta)M^{-1}(\zeta) = O(|\zeta - a|^{-1})$ at any endpoint $a$ of $X$.

(7) Assume that $m_{X\setminus J}'(\zeta)\, m_{X\setminus J}^{-1}(\zeta) = o(|\zeta|^{-1})$ as $\zeta \to \infty$ (recall that $m_{X\setminus J} = m_J m$).
Before going any further, we indicate where we proved that conditions (1) through (7) hold for the ${}_2F_1$ kernel.

Condition (1) is verified in Proposition 3.2; (2) follows from Propositions 2.1 and 2.2; (3) is Corollary 3.7 (here we needed additional restrictions on the parameters $z$, $z'$, $w$, and $w'$); (4) is a corollary of (2.8) and (2.9); (5) consists of obvious properties of the matrix (3.3); (6) follows from Propositions 3.3 and 3.4; (7) is a corollary of (2.8), (2.9), and Proposition 4.2.

Denote by $\{b_j\}_{j=1}^n$ all finite endpoints of $X$ and $J$.
THEOREM 5.1 Under conditions (1) through (7) above, there exist constant matrices $\{B_j\}_{j=1}^n$ such that the matrix $M$ satisfies the following linear differential equation:
\[
(5.1)\qquad M'(\zeta) = \sum_{j=1}^n \frac{B_j}{\zeta-b_j}\, M(\zeta)\,.
\]
If $b_j$ is an endpoint of $J$ but not an endpoint of $X$, then the corresponding matrix $B_j$ is nilpotent and nonzero. Moreover, $\sum_{j=1}^n B_j = D$, where the constant matrix $D$ is given in (5e).
PROOF: We will follow the proof of Theorem 4.6. Since $M$ has a constant jump matrix, the matrix $M'M^{-1}$ has no jump across $X$. If $b_j$ is an endpoint of $X$, then by (6), $b_j$ is either a regular point or a first-order pole of $M'M^{-1}$. If $b_j$ is an endpoint of $J$ and not an endpoint of $X$, then the proof of Lemma 4.7 (which can be repeated word for word in the general setting) shows that near $b_j$
\[
M'(\zeta)M^{-1}(\zeta) = \frac{B_j}{\zeta-b_j} + \text{a locally holomorphic function}
\]
with a nilpotent constant matrix $B_j$. Thus,
\[
(5.2)\qquad M'(\zeta)M^{-1}(\zeta) - \sum_{j=1}^n \frac{B_j}{\zeta-b_j}
\]
is an entire function for appropriate (constant) matrices $\{B_j\}_{j=1}^n$.

Near $\zeta = \infty$,
\[
M'M^{-1} = m_{X\setminus J}'\, m_{X\setminus J}^{-1} + m_{X\setminus J}\,(C^{-1})'C\, m_{X\setminus J}^{-1} = D\,\zeta^{-1} + o(|\zeta|^{-1})\,,
\]
as follows from (5e) and (7). Hence, by Liouville's theorem, the function (5.2) is identically zero, and computing the terms of order $\zeta^{-1}$ at infinity we see that $\sum_{j=1}^n B_j = D$. $\square$
Remark 5.2. Arguing as above and substituting the estimate $m'(\zeta)\,m^{-1}(\zeta) = o(|\zeta|^{-1})$ as $\zeta \to \infty$ for condition (7), one can easily prove that
\[
\big(m(\zeta)C^{-1}(\zeta)\big)' = \sum_{j=1}^l \frac{B_j^0}{\zeta-b_j^0}\; m(\zeta)C^{-1}(\zeta)\,,
\]
where $\{b_j^0\}_{j=1}^l$ are the endpoints of $X$ and $\{B_j^0\}$ are some constant matrices; cf. Remark 4.8.
If we allow the differential equation to have an irregular singularity at infinity, then condition (5e) on the matrix $C$ can be relaxed. Let us introduce the condition

(5e$'$) $(C^{-1})'(\zeta)\,C(\zeta) = D + o(1)$ as $\zeta \to \infty$, where $D$ is a constant matrix.

We can then relax condition (7) to the following:

(7$'$) Assume that $m_{X\setminus J}'(\zeta)\, m_{X\setminus J}^{-1}(\zeta) = o(1)$ as $\zeta \to \infty$.
The following claim is proved in exactly the same way as Theorem 5.1:

THEOREM 5.3 Under conditions (1)–(4), (5a–d), (5e$'$), (6), and (7$'$) above, there exist constant matrices $\{B_j\}_{j=1}^n$ such that the matrix $M$ satisfies the following linear differential equation:
\[
(5.3)\qquad M'(\zeta) = \left(\sum_{j=1}^n \frac{B_j}{\zeta-b_j} + D\right) M(\zeta)\,.
\]
If $b_j$ is an endpoint of $J$ but not an endpoint of $X$, then the corresponding matrix $B_j$ is nilpotent and nonzero.
Remark 5.4. Once again, if $m'(\zeta)\,m^{-1}(\zeta) \to 0$ as $\zeta \to \infty$, then
\[
\big(m(\zeta)C^{-1}(\zeta)\big)' = \left(\sum_{j=1}^l \frac{B_j^0}{\zeta-b_j^0} + D\right) m(\zeta)C^{-1}(\zeta)\,,
\]
where $\{b_j^0\}_{j=1}^l$ are the endpoints of $X$, and $\{B_j^0\}$ are some constant matrices; cf. Remarks 4.8 and 5.2.
6 Isomonodromy Deformations: The Jimbo-Miwa-Ueno τ-Function

Let $M(\zeta)$ be a matrix-valued function on the complex $\zeta$-plane satisfying a linear differential equation of the form $M'(\zeta) = B(\zeta)M(\zeta)$, where $B(\zeta)$ is a rational matrix.

Fix a fundamental solution $M$ of this equation. In general, $M(\zeta)$ is a multivalued function. If $\{b_1, b_2, \ldots, b_n\}$ are the poles of $B$, then $\{b_1, b_2, \ldots, b_n, \infty\}$ are the branch points for $M$. When we continue $M$ along a closed path $\gamma$ avoiding the branch points, the column vectors of $M$ are changed into some linear combinations of the columns of the original matrix: $M(\zeta) \mapsto M(\zeta)X_{[\gamma]}$. Here $X_{[\gamma]}$ is a constant invertible matrix depending on the homotopy class $[\gamma]$ of the path $\gamma$. Thus, the $X_{[\gamma]}$'s provide a monodromy representation of the fundamental group of $\mathbb{C}\setminus\{b_1, b_2, \ldots, b_n\}$:
\[
X : \pi_1\big(\mathbb{C}\setminus\{b_1, b_2, \ldots, b_n\}\big) \to GL(N, \mathbb{C})\,, \qquad [\gamma] \mapsto X_{[\gamma]}\,.
\]
Now view the singular points $\{b_1, b_2, \ldots, b_n\}$ as variables. It may happen that moving these points a little and changing the rational matrix $B(\zeta)$ in an appropriate way, we do not change the monodromy representation. In such a case we say that we have an isomonodromy deformation of the initial differential equation.

For general information on isomonodromy deformations, we refer the reader to [31, 34].
Without loss of generality, we can assume that, in the notation of Section 5, the first $k \le n$ points $\{b_1, b_2, \ldots, b_k\}$ of the set $\{b_j\}_{j=1}^n$ are exactly those endpoints of $J$ that are not the endpoints of $X$. Clearly, $\{b_j\}_{j=1}^k \subset \{a_j\}_{j=1}^{2m}$.
The following statement is immediate:

PROPOSITION 6.1 Under the assumptions of Theorem 5.1 (or Theorem 5.3), there exists $\epsilon > 0$ with the property that moving the points $b_1, b_2, \ldots, b_k$ within their $\epsilon$-neighborhoods inside $\mathbb{R}$ provides an isomonodromy deformation of the equation (5.1) (or of the equation (5.3), respectively).

Note that the matrices $\{B_j\}_{j=1}^n$ are now functions of $b_1, b_2, \ldots, b_k$.
PROOF: Choose $\epsilon > 0$ so that the points $b_1, b_2, \ldots, b_k$ cannot collide between themselves or with the other endpoints $b_{k+1}, b_{k+2}, \ldots, b_n$. Since the matrix $M = m_J m C^{-1}$ has a nonzero determinant, this matrix can be viewed as a fundamental solution of (5.1). The monodromy of this solution, as we go along any closed curve that avoids the singular points, is equal to the product of the values of the jump matrix $V$ or their inverses at the points where the curve meets $X$. Since $V$ does not depend on $b_1, b_2, \ldots, b_k$, the proof is complete. $\square$
In 1912, Schlesinger realized that if the matrix $B(\zeta)$ has simple poles, then a deformation of the $b_j$'s preserves monodromy if and only if the residues $\{B_j\}$ of $B$ at the singular points, as functions of the $b_j$'s, satisfy a certain system of nonlinear partial differential equations. These equations are called the Schlesinger equations. The analogues of the Schlesinger equations in the case when $B$ has higher-order poles were derived in [34].

In what follows we will use the Schlesinger equations arising from the isomonodromy deformation described in Proposition 6.1. Since our situation is simpler than the general case in [34], it is more instructive to rederive the equations that we need rather than to refer to the general theory.
PROPOSITION 6.2 (Schlesinger Equations) (i) The matrices $\{B_j\}_{j=1}^n$ from formula (5.1), as functions in $b_1, b_2, \ldots, b_k$, satisfy the equations
\[
(6.1)\qquad
\frac{\partial B_l}{\partial b_j} = \frac{[B_j, B_l]}{b_j-b_l}\,, \qquad
\frac{\partial B_j}{\partial b_j} = \sum_{\substack{1\le l\le n\\ l\ne j}} \frac{[B_j, B_l]}{b_l-b_j}\,,
\]
where $j = 1, 2, \ldots, k$ and $l = 1, 2, \ldots, n$.

(ii) The matrices $\{B_j\}_{j=1}^n$ from (5.3), as functions in $b_1, b_2, \ldots, b_k$, satisfy the equations
\[
(6.2)\qquad
\frac{\partial B_l}{\partial b_j} = \frac{[B_j, B_l]}{b_j-b_l}\,, \qquad
\frac{\partial B_j}{\partial b_j} = \sum_{\substack{1\le l\le n\\ l\ne j}} \frac{[B_j, B_l]}{b_l-b_j} - [B_j, D]\,,
\]
where $j = 1, 2, \ldots, k$ and $l = 1, 2, \ldots, n$.
SKETCH OF THE PROOF: Since $M$ satisfies an RHP with a constant jump matrix $V$, the derivative $M_{b_j} = \frac{\partial M}{\partial b_j}$ satisfies the same jump condition, $j = 1, 2, \ldots, k$. Hence, the matrix $M_{b_j}M^{-1}$ has no jump across $X$. Thus, it is holomorphic in $\mathbb{C}\setminus\{b_j\}$. As was shown in the proof of Lemma 4.7, locally near $\zeta = b_j$ we have
\[
M(\zeta) = H(\zeta)\exp\!\big((C^o)^{-1}f^o(g^o)^t\ln(\zeta-b_j)\big)
= H(\zeta)\big(I + (C^o)^{-1}f^o(g^o)^t\ln(\zeta-b_j)\big)\,,
\]
where $H$ is holomorphic. With some additional effort, one can show that $H$ is differentiable with respect to $b_j$, and differentiating with respect to $b_j$, we see that
\[
M_{b_j}(\zeta)M^{-1}(\zeta) = -\frac{H(b_j)\,(C^o)^{-1}f^o(g^o)^t\, H^{-1}(b_j)}{\zeta-b_j} + O(1) = -\frac{B_j}{\zeta-b_j} + O(1)\,.
\]
Since $m_{X\setminus J} \to I$ at $\zeta = \infty$ and $C$ does not depend on $b_j$, one can show that $M_{b_j}M^{-1} \to 0$ as $\zeta \to \infty$. By Liouville's theorem, $M_{b_j}M^{-1} + B_j/(\zeta-b_j) \equiv 0$, and
\[
(6.3)\qquad M_{b_j} = -\frac{B_j}{\zeta-b_j}\, M\,.
\]
The linear equations (5.1) and (6.3) form a Lax pair for (6.1).

Differentiating (5.1) with respect to $b_j$ and (6.3) with respect to $\zeta$, subtracting the results, and multiplying the difference by $M^{-1}$ on the right, we obtain
\[
\sum_{l=1}^n \frac{\partial B_l}{\partial b_j}\,\frac{1}{\zeta-b_l}
= \frac{1}{\zeta-b_j}\sum_{\substack{1\le l\le n\\ l\ne j}} \frac{[B_l, B_j]}{\zeta-b_l}\,.
\]
The equality of residues at the points $\{b_l\}_{l=1}^n$ on both sides of this identity gives (6.1). The equations (6.2) are proved in exactly the same way. $\square$
COROLLARY 6.3 In the notation of Theorem 4.6,
\[
(6.4)\qquad
\frac{\partial A}{\partial a_j} = \frac{[C_j, A]}{a_j-\frac12}\,, \qquad
\frac{\partial B}{\partial a_j} = \frac{[C_j, B]}{a_j+\frac12}\,,
\]
\[
(6.5)\qquad
\frac{\partial C_l}{\partial a_j} = \frac{[C_j, C_l]}{a_j-a_l}\,, \qquad
\frac{\partial C_j}{\partial a_j} = -\frac{[C_j, A]}{a_j-\frac12} - \frac{[C_j, B]}{a_j+\frac12}
- \sum_{\substack{1\le l\le 2m\\ l\ne j}} \frac{[C_j, C_l]}{a_j-a_l}\,.
\]
Here $j, l = 1, 2, \ldots, 2m$, and if $a_1 = -\infty$ or $a_{2m} = +\infty$, then the corresponding terms and equations are removed.

PROOF: The proof is a direct application of Proposition 6.2. $\square$
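Two structural features of the system (6.4)–(6.5) are worth making explicit: each matrix evolves by a commutator, so the spectrum (hence the trace and determinant) of each of $A$, $B$, $C_j$ is preserved along the flow, and the right-hand sides sum to zero, so $A + B + \sum_j C_j$ is a first integral, consistent with Theorem 4.6. A numerical sketch with random $2\times 2$ data in place of the actual monodromy data (an illustration only, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_traceless():
    m = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    return m - np.trace(m) / 2 * np.eye(2)

comm = lambda X, Y: X @ Y - Y @ X

A, B = rand_traceless(), rand_traceless()
C = [rand_traceless(), rand_traceless()]     # C_1, C_2 (one interval, m = 1)
a = [1.3, 2.7]                               # endpoints a_1 < a_2

j = 0                                        # differentiate in a_1, equations (6.4)-(6.5)
dA = comm(C[j], A) / (a[j] - 0.5)
dB = comm(C[j], B) / (a[j] + 0.5)
dC = [comm(C[j], C[l]) / (a[j] - a[l]) if l != j else None for l in range(2)]
dC[j] = -dA - dB - sum(dC[l] for l in range(2) if l != j)

# A + B + sum_j C_j is a first integral: the right-hand sides cancel pairwise
assert np.allclose(dA + dB + sum(dC), 0)

# commutator flows preserve the trace and (by Jacobi's formula) the determinant
assert abs(np.trace(dA)) < 1e-12
assert abs(np.trace(np.linalg.inv(A) @ dA)) < 1e-10   # d(det A) = det A * tr(A^-1 dA)
print("Schlesinger first integrals verified")
```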
It is known that for any solution of the Schlesinger equations there exists an associated remarkable 1-form $\omega$ that is closed; see [34, 53]. For the equations (6.1), the form $\omega$ is as follows:
\[
(6.6)\qquad \omega = \sum_{j=1}^k \sum_{\substack{1\le l\le n\\ l\ne j}} \frac{\operatorname{tr}(B_j B_l)}{b_j-b_l}\, db_j\,,
\]
while for the equations (6.2) the form is different:
\[
(6.7)\qquad \omega = \sum_{j=1}^k \Bigg(\sum_{\substack{1\le l\le n\\ l\ne j}} \frac{\operatorname{tr}(B_j B_l)}{b_j-b_l} + \operatorname{tr}(B_j D)\Bigg)\, db_j\,.
\]
DEFINITION 6.4 [34] A function $\tau = \tau(b_1, b_2, \ldots, b_k)$ is called a $\tau$-function for the system of Schlesinger equations (6.1) (or (6.2)) if
\[
d\ln\tau = \omega
\]
with $\omega$ given by (6.6) (or (6.7), respectively).

The definition can be extended to the most general case of an arbitrary rational matrix $B(\zeta)$; see [34].

Since $d\omega = 0$, the $\tau$-function is defined at least locally. Clearly, the $\tau$-function is unique up to a multiplicative constant.
The following claim is a corollary of much more general statements proven in [39] and [43].

PAINLEVÉ PROPERTY Any solutions $\{B_j\}_{j=1}^n$ of the Schlesinger equations (6.1) or (6.2) are analytic functions in $(b_1, b_2, \ldots, b_k)$ that have at most poles in addition to the fixed singularities $b_j = b_l$ for some $j \ne l$.

The corresponding $\tau$-function is holomorphic everywhere on the universal covering manifold of
\[
\mathbb{C}^k \setminus \{(b_1, b_2, \ldots, b_k) \mid b_j = b_l \text{ for some } j \ne l,\ j = 1, 2, \ldots, k,\ l = 1, 2, \ldots, n\}\,.
\]
Let us now return to the general setting of Section 5. The next statement is our main result in this section.

THEOREM 6.5 Under the assumptions of Theorem 5.1 (or Theorem 5.3), the Fredholm determinant $\det(1-K_J)$ is the $\tau$-function for the system of Schlesinger equations (6.1) (or (6.2), respectively).
PROOF: We will give a proof under the assumptions of Theorem 5.1; the case of Theorem 5.3 is handled similarly.

First of all, by condition (2) of Section 5 the operator $K_J$ is trace class. Hence, $\det(1-K_J)$ is well-defined. Note that $(1-K_J)$ is invertible by condition (3). By a well-known formula from functional analysis, we have that
\[
\frac{\partial \ln\det(1-K_J)}{\partial b_j} = \pm R_J(b_j, b_j)\,, \qquad j = 1, 2, \ldots, k\,,
\]
where $R_J = K_J(1-K_J)^{-1}$, the sign $+$ is chosen if $b_j$ is a left endpoint of $J$, and the sign $-$ is chosen if $b_j$ is a right endpoint of $J$. Thus, in order to verify that $d\ln\det(1-K_J) = \omega$, we must prove that
\[
(6.8)\qquad \pm R_J(b_j, b_j) = \sum_{\substack{1\le l\le n\\ l\ne j}} \frac{\operatorname{tr}(B_j B_l)}{b_j-b_l}\,, \qquad j = 1, 2, \ldots, k\,.
\]
We give a proof when $b_j$ is a left endpoint of an interval from $J$. The proof for the right endpoints is obtained by changing the sign of $\zeta$.

We have
\[
R_J(b_j, b_j) = \lim_{x,y\to b_j} \frac{\mathbf{G}^t(y)\,\mathbf{F}(x)}{x-y}
= \lim_{x,y\to b_j} \frac{\big(\big((m_{X\setminus J}^t)_+\big)^{-1}(y)\, g\big)^t\, (m_{X\setminus J})_-(x)\, f}{x-y}
\]
\[
= \lim_{x,y\to b_j} \frac{\big(\big((M_+ C_+)^t\big)^{-1}(y)\, g\big)^t\, (M_- C_-)(x)\, f}{x-y}
= \lim_{x,y\to b_j} \frac{(g^o)^t\, M_+^{-1}(y)\, M_-(x)\, f^o}{x-y}
= (g^o)^t \big(M_+^{-1}M_-\big)'(b_j)\, f^o\,.
\]
The local representation (4.3) of the matrix $M(\zeta)$ near the point $\zeta = b_j$ implies that
\[
M(\zeta) = \begin{cases}
H_{b_j}(\zeta)\exp\!\big((C^o)^{-1}f^o(g^o)^t\ln(\zeta-b_j)\big)\,, & \Im\zeta > 0\,,\\[1mm]
H_{b_j}(\zeta)\exp\!\big((C^o)^{-1}f^o(g^o)^t\ln(\zeta-b_j)\big)\,(C^o)^{-1}\,, & \Im\zeta < 0\,.
\end{cases}
\]
Hence, for $x \in X$ near $b_j$,
\[
M_+^{-1} = \exp\!\big(-(C^o)^{-1}f^o(g^o)^t\ln(x-b_j)\big)\, H_{b_j}^{-1}\,,
\]
\[
M_-' = H_{b_j}'\, \exp\!\big((C^o)^{-1}f^o(g^o)^t\ln(x-b_j)\big)(C^o)^{-1}
+ H_{b_j}\, \frac{(C^o)^{-1}f^o(g^o)^t}{x-b_j}\, \exp\!\big((C^o)^{-1}f^o(g^o)^t\ln(x-b_j)\big)(C^o)^{-1}\,,
\]
\[
\big(M_+^{-1}M_-\big)' = \frac{(C^o)^{-1}f^o(g^o)^t(C^o)^{-1}}{x-b_j}
+ \exp\!\big(-(C^o)^{-1}f^o(g^o)^t\ln(x-b_j)\big)\, H_{b_j}^{-1}H_{b_j}'\, \exp\!\big((C^o)^{-1}f^o(g^o)^t\ln(x-b_j)\big)(C^o)^{-1}\,,
\]
where $H_{b_j}$ is a function holomorphic near $b_j$ and $\det H_{b_j}(b_j) \ne 0$.

Since $(g^o)^t(C^o)^{-1}f^o = \big((C^o)^{-1}f^o\big)^t g^o = 0$, we have
\[
(g^o)^t\,\frac{(C^o)^{-1}f^o(g^o)^t(C^o)^{-1}}{x-b_j} = 0\,,
\]
\[
(g^o)^t \exp\!\big(-(C^o)^{-1}f^o(g^o)^t\ln(x-b_j)\big) = (g^o)^t\,,
\]
\[
\exp\!\big((C^o)^{-1}f^o(g^o)^t\ln(x-b_j)\big)\,(C^o)^{-1}f^o = (C^o)^{-1}f^o\,;
\]
therefore,
\[
(6.9)\qquad R(b_j, b_j) = (g^o)^t \big(M_+^{-1}M_-\big)'(b_j)\, f^o
= (g^o)^t \big(H_{b_j}^{-1}H_{b_j}'\big)(b_j)\,(C^o)^{-1} f^o\,.
\]
On the other hand, let us compute the right-hand side of (6.8) through $C^o$, $f^o$, $g^o$, and $H_{b_j}$. As above, locally near $b_j$ we have
\[
M'(\zeta)M^{-1}(\zeta) = H_{b_j}'(\zeta)\,H_{b_j}^{-1}(\zeta) + \frac{H_{b_j}(\zeta)\,(C^o)^{-1}f^o(g^o)^t\, H_{b_j}^{-1}(\zeta)}{\zeta-b_j}\,.
\]
Comparing with (5.1), we conclude that
\[
B_j = H_{b_j}(b_j)\,(C^o)^{-1}f^o(g^o)^t\, H_{b_j}^{-1}(b_j)
\]
and
\[
\sum_{\substack{1\le l\le n\\ l\ne j}} \frac{B_l}{b_j-b_l}
= H_{b_j}'(b_j)\,H_{b_j}^{-1}(b_j)
+ H_{b_j}'(b_j)\,(C^o)^{-1}f^o(g^o)^t\, H_{b_j}^{-1}(b_j)
- H_{b_j}(b_j)\,(C^o)^{-1}f^o(g^o)^t\, H_{b_j}^{-1}(b_j)\, H_{b_j}'(b_j)\, H_{b_j}^{-1}(b_j)\,.
\]
Multiplying these two relations, taking the trace of both sides, and using the fact that $(g^o)^t(C^o)^{-1}f^o = 0$, we obtain
\[
\operatorname{tr}\Big(H_{b_j}'(b_j)\,(C^o)^{-1}f^o(g^o)^t\, H_{b_j}^{-1}(b_j)\Big)
= \sum_{\substack{1\le l\le n\\ l\ne j}} \frac{\operatorname{tr}(B_l B_j)}{b_j-b_l}\,.
\]
But the left-hand side of the last equality equals the right-hand side of (6.9). This concludes the proof of (6.8). $\square$
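The endpoint-variation formula $\partial\ln\det(1-K_J)/\partial b_j = \pm R_J(b_j,b_j)$ used at the beginning of this proof can be illustrated on a toy rank-one kernel, where everything is explicit. For $K(x,y) = e^{-x-y}$ on $J = (s, +\infty)$ (a made-up example, not one of the kernels of this paper), the only nonzero eigenvalue is $\lambda(s) = \int_s^\infty e^{-2x}\,dx = e^{-2s}/2$, so $\det(1-K_s) = 1 - e^{-2s}/2$ and the resolvent kernel is $R_s(x,y) = K(x,y)/(1-\lambda)$:

```python
import numpy as np

def logdet(s):
    # K(x, y) = exp(-x - y) on (s, +inf) is rank one with eigenvalue
    # lam(s) = int_s^inf exp(-2x) dx = exp(-2 s) / 2
    return np.log(1 - np.exp(-2 * s) / 2)

s = 0.4
lam = np.exp(-2 * s) / 2

# resolvent of a rank-one kernel: R = K (1 - K)^(-1) has R(x, y) = K(x, y) / (1 - lam)
R_ss = np.exp(-2 * s) / (1 - lam)

# numerical derivative of ln det(1 - K_s); s is a LEFT endpoint, so the sign is "+"
h = 1e-6
d_logdet = (logdet(s + h) - logdet(s - h)) / (2 * h)
assert abs(d_logdet - R_ss) < 1e-6
print("endpoint-variation formula verified on a rank-one kernel")
```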
COROLLARY 6.6 Let $K$ be the continuous ${}_2F_1$ kernel of Section 2 and assume that
\[
z+z'+w+w' > 0\,, \qquad |z+z'| < 1\,, \qquad |w+w'| < 1\,.
\]
Then, in the notation of Theorem 4.6, $\det(1-K_J)$ is the $\tau$-function of the Schlesinger equations (6.4) and (6.5), where the matrices $A$, $B$, and $\{C_j\}_{j=1}^{2m}$ satisfy the conditions stated in Theorem 4.6.

PROOF: The proof is a direct application of Theorem 6.5. $\square$

Note that the restrictions on the parameters $z$, $z'$, $w$, and $w'$ come from Corollary 3.7 (see also Remark 3.8).
7 Painlevé VI

In this section we consider the case of the ${}_2F_1$ kernel acting on $J = (s, +\infty)$ for $s > \frac12$. We will show that the Fredholm determinant $\det(1-K_s) = \det(1-K|_{(s,+\infty)})$ can be expressed through a solution of the Painlevé VI equation. The appearance of the PVI equation is to be expected from the general results of [34]; the precise form of the equation is not clear in general and requires considerable calculation, as we now show.
Our goal is to prove the following claim:

THEOREM 7.1 Let $K_s$ be the restriction of the continuous ${}_2F_1$ kernel to the interval $(s, +\infty)$, $s > \frac12$. Assume that $S = z+z'+w+w' > 0$, $|z+z'| < 1$, and $|w+w'| < 1$. Then the function
\[
\sigma(s) = \Big(s-\frac12\Big)\Big(s+\frac12\Big)\,\frac{d\ln\det(1-K_s)}{ds} - \nu_1^2\, s + \frac{\nu_3\nu_4}{2}
\]
satisfies the differential equation
\[
(7.1)\qquad
-\sigma'\Big(\Big(s-\frac12\Big)\Big(s+\frac12\Big)\sigma''\Big)^2
= \big(2(s\sigma'-\sigma)\sigma' - \nu_1\nu_2\nu_3\nu_4\big)^2
- \big(\sigma'+\nu_1^2\big)\big(\sigma'+\nu_2^2\big)\big(\sigma'+\nu_3^2\big)\big(\sigma'+\nu_4^2\big)\,,
\]
where
\[
\nu_1 = \nu_2 = \frac{z+z'+w+w'}{2}\,, \qquad
\nu_3 = \frac{z-z'+w-w'}{2}\,, \qquad
\nu_4 = \frac{z-z'-w+w'}{2}\,.
\]
Remark 7.2. (i) The equation (7.1) is the so-called Jimbo-Miwa $\sigma$-version of the Painlevé VI equation; see [32, appendix C]. It is easily reduced to the standard form of the Painlevé VI; see [32, 40].

(ii) As $s \to +\infty$,
\[
(7.2)\qquad \frac{d\ln\det(1-K_s)}{ds} \sim K(s,s) = \psi_{\mathrm{out}}(s)\big(R_{\mathrm{out}}'(s)S_{\mathrm{out}}(s) - S_{\mathrm{out}}'(s)R_{\mathrm{out}}(s)\big)\,.
\]
The error term in this asymptotic relation is of order $\int_s^{+\infty} K(s,y)K(y,s)\,dy$. Using the leading asymptotic terms
\[
\psi_{\mathrm{out}}(s) \sim \frac{\sin\pi z\,\sin\pi z'}{\pi^2}\, s^{-S}\,, \qquad R_{\mathrm{out}}(s) \to 1\,, \qquad S_{\mathrm{out}}(s) \sim \mathrm{const}\cdot s^{-1}\,,
\]
we see that $K(s,s) = O(s^{-S-2})$ and $\int_s^{+\infty} K(s,y)K(y,s)\,dy = O(s^{-2S-3})$; hence,
\[
\sigma(s) = -\nu_1^2\, s + \frac{\nu_3\nu_4}{2} + \frac{\sin\pi z\,\sin\pi z'}{\pi^2}\, s^{-2\nu_1} + o\big(s^{-2\nu_1}\big)\,.
\]
This expansion determines $\sigma(s)$ uniquely as a solution of (7.1) by a result of O. Costin and R. D. Costin [18].

(iii) The restrictions $S > 0$, $|z+z'| < 1$, and $|w+w'| < 1$ are taken from Corollary 3.7. Most likely, they can be removed from Corollary 3.7, and hence from Theorem 7.1; see Remark 3.8. Another possible way of removing these restrictions from Theorem 7.1 is to prove that the Fredholm determinant $\det(1-K_s)$ and its derivatives with respect to $s$, which are well-defined for all admissible sets of parameters (see the end of Section 2), are real-analytic functions of the parameters. Then the result would follow by analytic continuation.

(iv) The equation (7.1) depends only on three independent parameters: the simultaneous shifts
\[
z \mapsto z+\epsilon\,, \qquad z' \mapsto z'+\epsilon\,, \qquad w \mapsto w-\epsilon\,, \qquad w' \mapsto w'-\epsilon
\]
do not change the values of $\nu_1, \nu_2, \nu_3, \nu_4$. However, the solution of (7.1) that is of interest here depends nontrivially on all four parameters, as can be seen from the above asymptotic expansion.

(v) The proof of Theorem 7.1 follows the derivation of the Painlevé VI equation from the Schlesinger equations given in [32, appendix C]; see also [40] for a more detailed description.
PROOF OF THEOREM 7.1: By Theorem 4.6, the matrix $M$ satisfies a differential equation
\[
\frac{d}{d\zeta}M(\zeta) = \left(\frac{A}{\zeta-\frac12} + \frac{B}{\zeta+\frac12} + \frac{C}{\zeta-s}\right) M(\zeta)
\]
with some constant matrices $A$, $B$, and $C$ such that
\[
\operatorname{tr}A = \operatorname{tr}B = \operatorname{tr}C = 0\,, \qquad
\det A = -\Big(\frac{z-z'}{2}\Big)^2\,, \quad \det B = -\Big(\frac{w-w'}{2}\Big)^2\,, \quad \det C = 0\,,
\]
and
\[
A + B + C = -\frac{S}{2}\,\sigma_3\,, \qquad \sigma_3 = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}.
\]
By Corollary 6.3, the matrices $A$, $B$, and $C$ satisfy the Schlesinger equations
\[
(7.3)\qquad \frac{dA}{ds} = \frac{[C,A]}{s-\frac12}\,, \qquad \frac{dB}{ds} = \frac{[C,B]}{s+\frac12}\,,
\]
\[
(7.4)\qquad \frac{dC}{ds} = -\frac{[C,A]}{s-\frac12} - \frac{[C,B]}{s+\frac12}\,.
\]
Introduce the notation
\[
\theta_A = \frac{z-z'}{2}\,, \qquad \theta_B = \frac{w-w'}{2}\,.
\]
Set
\[
\hat\sigma(s) = \Big(s-\frac12\Big)\Big(s+\frac12\Big)\frac{d\ln\det(1-K_s)}{ds}
= \sigma(s) + \frac{S^2}{4}\, s - \frac{\theta_A^2-\theta_B^2}{2}\,.
\]
LEMMA 7.3 $\hat\sigma(s) = \operatorname{tr}\big(\big(\big(s+\frac12\big)A + \big(s-\frac12\big)B\big)C\big)$.

PROOF: This lemma follows from (6.6), Definition 6.4, and Corollary 6.6. $\square$
Write the matrices $A$ and $C$ in the form
\[
(7.5)\qquad A = \begin{pmatrix} z_A & x_A \\ y_A & -z_A \end{pmatrix}, \qquad
C = \begin{pmatrix} z_C & x_C \\ y_C & -z_C \end{pmatrix},
\]
with
\[
(7.6)\qquad x_A y_A = -\det A - z_A^2 = \theta_A^2 - z_A^2\,, \qquad
x_C y_C = -\det C - z_C^2 = -z_C^2\,.
\]
LEMMA 7.4 $\hat\sigma'(s) = -S z_C$.

PROOF: Lemma 7.3 implies
\[
\hat\sigma'(s) = \operatorname{tr}\big((A+B)C\big)
+ \Big(s+\frac12\Big)\operatorname{tr}(A'C)
+ \Big(s-\frac12\Big)\operatorname{tr}(B'C)
+ \operatorname{tr}\Big(\Big(\Big(s+\frac12\Big)A + \Big(s-\frac12\Big)B\Big)C'\Big)\,.
\]
The Schlesinger equations (7.3) and (7.4) imply that the last three terms vanish due to the identity $\operatorname{tr}([X,Y]X) = 0$. Further, since $A+B = -\frac{S}{2}\sigma_3 - C$ and $C^2 = 0$, we have
\[
\hat\sigma'(s) = \operatorname{tr}\Big(\Big(-\frac{S}{2}\sigma_3 - C\Big)C\Big) = -\frac{S}{2}\operatorname{tr}(\sigma_3 C) = -S z_C\,. \qquad\square
\]
LEMMA 7.5 $\big(s-\frac12\big)\big(s+\frac12\big)\hat\sigma''(s) = -\frac{S}{2}\operatorname{tr}(\sigma_3[A,C]) = S(x_C y_A - x_A y_C)$.

PROOF: Differentiating the equality $\hat\sigma'(s) = -\frac{S}{2}\operatorname{tr}(\sigma_3 C)$ and using the equation (7.4), we get
\[
\Big(s-\frac12\Big)\Big(s+\frac12\Big)\hat\sigma''(s)
= -\frac{S}{2}\operatorname{tr}\Big(\sigma_3\Big[\Big(s+\frac12\Big)A + \Big(s-\frac12\Big)B,\, C\Big]\Big)\,.
\]
Substituting $B = -\frac{S}{2}\sigma_3 - A - C$ and simplifying, we arrive at the first equality. The second equality follows from the explicit form of the matrices $A$ and $C$; see (7.5). $\square$
LEMMA 7.6 $\big(s-\frac12\big)\hat\sigma'(s) - \hat\sigma(s) = -\operatorname{tr}(AC) = -(x_A y_C + x_C y_A + 2 z_A z_C)$.

PROOF: We have
\[
\Big(s-\frac12\Big)\hat\sigma'(s) - \hat\sigma(s)
= -\frac{S}{2}\Big(s-\frac12\Big)\operatorname{tr}(\sigma_3 C)
- \operatorname{tr}\Big(\Big(\Big(s+\frac12\Big)A + \Big(s-\frac12\Big)B\Big)C\Big)
\]
\[
= -\frac{S}{2}\Big(s-\frac12\Big)\operatorname{tr}(\sigma_3 C)
- \operatorname{tr}\Big(\Big(\Big(s+\frac12\Big)A + \Big(s-\frac12\Big)\Big(-\frac{S}{2}\sigma_3 - A - C\Big)\Big)C\Big)
= -\operatorname{tr}(AC)\,,
\]
where we used Lemmas 7.3 and 7.4 and the relations $B = -\frac{S}{2}\sigma_3 - A - C$ and $C^2 = 0$. The second equality follows from (7.5). $\square$
LEMMA 7.7 $\big(s+\frac12\big)\hat\sigma'(s) - \hat\sigma(s) = S z_A + \theta_A^2 - \theta_B^2 + \frac{S^2}{4}$.

PROOF: We have
\[
-\operatorname{tr}(AC) = \operatorname{tr}\Big(A\Big(\frac{S}{2}\sigma_3 + B + A\Big)\Big)
= \frac12\big(\operatorname{tr}(A+B)^2 + \operatorname{tr}A^2 - \operatorname{tr}B^2\big) + \frac{S}{2}\operatorname{tr}(A\sigma_3)
\]
\[
= \frac12\Big(\operatorname{tr}\Big(\frac{S}{2}\sigma_3 + C\Big)^2 + \operatorname{tr}A^2 - \operatorname{tr}B^2\Big) + S z_A
= S z_A + S z_C + \theta_A^2 - \theta_B^2 + \frac{S^2}{4}\,,
\]
where we used the equalities
\[
\operatorname{tr}A^2 = 2\theta_A^2\,, \qquad \operatorname{tr}B^2 = 2\theta_B^2\,, \qquad C^2 = 0\,, \qquad \operatorname{tr}\sigma_3^2 = 2\,.
\]
Lemmas 7.4 and 7.6 conclude the proof. $\square$
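The $2\times 2$ trace identities used in Lemmas 7.4–7.7 are elementary and can be spot-checked numerically in the parametrization (7.5). The sketch below is an illustration only, with random real entries in place of the actual matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
zA, xA, yA = rng.standard_normal(3)
zC, xC, yC = rng.standard_normal(3)

A = np.array([[zA, xA], [yA, -zA]])     # traceless, as in (7.5)
C = np.array([[zC, xC], [yC, -zC]])
s3 = np.diag([1.0, -1.0])               # sigma_3

tr = np.trace
# identities used in the proofs of Lemmas 7.4-7.7
assert np.isclose(tr(s3 @ C), 2 * zC)
assert np.isclose(-0.5 * tr(s3 @ (A @ C - C @ A)), xC * yA - xA * yC)
assert np.isclose(tr(A @ C), 2 * zA * zC + xA * yC + xC * yA)
assert np.isclose(tr(A @ A), 2 * (zA**2 + xA * yA))
print("trace identities verified")
```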
Now we use the following trick to derive the differential equation for $\hat\sigma$. We learned this trick from [32], in which the authors refer further to [44].

From Lemmas 7.5 and 7.6 we know that
\[
x_C y_A - x_A y_C = \frac{1}{S}\Big(s-\frac12\Big)\Big(s+\frac12\Big)\hat\sigma''(s)\,,
\]
\[
-(x_A y_C + x_C y_A) = \Big(s-\frac12\Big)\hat\sigma'(s) - \hat\sigma(s) + 2 z_A z_C\,.
\]
Squaring these equalities and then subtracting the first one from the second one, we obtain
\[
4\, x_A x_C y_A y_C
= \Big(\Big(s-\frac12\Big)\hat\sigma'(s) - \hat\sigma(s) + 2 z_A z_C\Big)^2
- \frac{1}{S^2}\Big(\Big(s-\frac12\Big)\Big(s+\frac12\Big)\hat\sigma''(s)\Big)^2\,.
\]
But (7.6) implies that $x_A x_C y_A y_C = (z_A^2 - \theta_A^2)\, z_C^2$. This gives
\[
(7.7)\qquad 4\big(z_A^2 - \theta_A^2\big) z_C^2
= \Big(\Big(s-\frac12\Big)\hat\sigma'(s) - \hat\sigma(s) + 2 z_A z_C\Big)^2
- \frac{1}{S^2}\Big(\Big(s-\frac12\Big)\Big(s+\frac12\Big)\hat\sigma''(s)\Big)^2\,.
\]
Next, Lemmas 7.4 and 7.7 provide expressions for $z_C$ and $z_A$ via $\hat\sigma(s)$; namely,
\[
z_C = -\frac{\hat\sigma'(s)}{S}\,, \qquad
z_A = \frac{1}{S}\Big(\Big(s+\frac12\Big)\hat\sigma'(s) - \hat\sigma(s) - \theta_A^2 + \theta_B^2 - \frac{S^2}{4}\Big)\,.
\]
But we can also rewrite everything in terms of (s). We have

(s) =

(s) .
_
s
1
2
_

(s) (s) =
_
s
1
2
_

(s) (s) +

2
A

2
B
2

S
2
8
.
z
C
=
1
S
_

(s) +
S
2
4
_
.
z
A
=
1
S
_
_
s +
1
2
_

(s) (s)

2
A

2
B
2

S
2
8
_
.
Substituting this into (7.7) we have
\[
-\frac1{S^2}\Bigl(\Bigl(s-\tfrac12\Bigr)\Bigl(s+\tfrac12\Bigr)\tilde\sigma''(s)\Bigr)^2
=\frac4{S^2}\Bigl(\frac1{S^2}\Bigl(\Bigl(s+\tfrac12\Bigr)\tilde\sigma'(s)-\tilde\sigma(s)-\frac{\omega_A^2-\omega_B^2}2-\frac{S^2}8\Bigr)^2-\omega_A^2\Bigr)\Bigl(\tilde\sigma'(s)+\frac{S^2}4\Bigr)^2
\]
\[
-\biggl(\Bigl(s-\tfrac12\Bigr)\tilde\sigma'(s)-\tilde\sigma(s)+\frac{\omega_A^2-\omega_B^2}2-\frac{S^2}8
-\frac2{S^2}\Bigl(\Bigl(s+\tfrac12\Bigr)\tilde\sigma'(s)-\tilde\sigma(s)-\frac{\omega_A^2-\omega_B^2}2-\frac{S^2}8\Bigr)\Bigl(\tilde\sigma'(s)+\frac{S^2}4\Bigr)\biggr)^2.
\]
Purely algebraic manipulations show that the equation above, after multiplication by $-S^2\tilde\sigma'(s)$, turns into equation (7.1). Note that in this notation
\[
\nu_1=\nu_2=\frac S2\,,\qquad \nu_3=\omega_A+\omega_B\,,\qquad \nu_4=\omega_A-\omega_B\,.\qquad\square
\]
8 Other Kernels
8.1 The Jacobi Kernel
We introduce some notation related to the Jacobi polynomials. Our notation
follows [24, 10.8].
Let $\{P_n=P_n^{(\alpha,\beta)}(x)\}_{n=0}^\infty$ be the system of orthogonal polynomials on $(-1,1)$, $\deg P_n=n$, with respect to the weight function $w(x)=(1-x)^\alpha(1+x)^\beta$, where $\alpha$ and $\beta$ are real constants, $\alpha,\beta>-1$. The normalization is determined from the relation
\[
P_n(1)=\binom{n+\alpha}n=\frac{(\alpha+1)_n}{n!}\,,
\]
where $(a)_k=\Gamma(a+k)/\Gamma(a)$ is the Pochhammer symbol. The $P_n$'s are the Jacobi polynomials with parameters $\alpha$ and $\beta$. Let us denote by $h_n$ the square of the norm of $P_n$ in $L^2((-1,1),w(x)dx)$ and by $k_n>0$ the highest coefficient of $P_n$:
\[
h_n=\int_{-1}^1P_n^2(x)w(x)\,dx\,,\qquad P_n(x)=k_nx^n+\{\text{lower-order terms}\}\,.
\]
The explicit form of these constants is known; see [24, 10.8]:
\[
h_n=\frac{2^{\alpha+\beta+1}\,\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}{(2n+\alpha+\beta+1)\,n!\,\Gamma(n+\alpha+\beta+1)}\,,\qquad
k_n=2^{-n}\binom{2n+\alpha+\beta}n\,.
\]
The Jacobi polynomials are expressed through the Gauss hypergeometric function $_2F_1$:
\[
P_n(x)=\binom{n+\alpha}n\,{}_2F_1\Bigl[{-n,\ n+\alpha+\beta+1\atop \alpha+1}\Bigm|\frac{1-x}2\Bigr]\,.
\]
The Jacobi functions of the second kind $Q_n(x)=Q_n^{(\alpha,\beta)}(x)$ are defined by the formula
\[
Q_n(x)=\frac{2^{n+\alpha+\beta}\,\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}{\Gamma(2n+\alpha+\beta+2)}\,(x-1)^{-n-\alpha-1}(x+1)^{-\beta}\,{}_2F_1\Bigl[{n+1,\ n+\alpha+1\atop 2n+\alpha+\beta+2}\Bigm|\frac2{1-x}\Bigr]\,.
\]
$Q_n$ satisfies the same second-order differential equation as $P_n$. The Jacobi functions of the second kind are related to the Jacobi polynomials by a number of well-known formulas; see [24, 6.8] and [55, sect. 4.6] for details.
PROPOSITION 8.1 For any $n=1,2,\dots$, take two arbitrary integers $k$ and $l$ such that $k+l=n$ and set
\[
z=k\,,\quad z'=k+\alpha\,,\quad w=l\,,\quad w'=l+\beta\,;
\]
then
\[
\psi_{\mathrm{out}}(x)\equiv0\,,\qquad
\psi_{\mathrm{in}}(x)=\frac{(1-2x)^{2k+\alpha}(1+2x)^{2l+\beta}}{2^{2n+\alpha+\beta}}=\frac{(1-2x)^{2k}(1+2x)^{2l}\,w(2x)}{2^{2n+\alpha+\beta}}\,,
\]
\[
R_{\mathrm{in}}(x)=(-1)^{k+1}\,\frac{2^n\,n!\,\Gamma(n+\alpha+\beta+1)}{\Gamma(2n+\alpha+\beta+1)}\,(1-2x)^{-k}(1+2x)^{-l}P_n(2x)\,,
\]
\[
S_{\mathrm{in}}(x)=(-1)^{k+1}\,\frac{2^{-n}\,\Gamma(2n+\alpha+\beta)}{\Gamma(n+\alpha)\Gamma(n+\beta)}\,(1-2x)^{-k}(1+2x)^{-l}P_{n-1}(2x)\,.
\]
PROOF: The proposition follows from the direct comparison of formulas. The relation
\[
\sin(\pi z')\,\Gamma(z-z')=(-1)^k\sin(\pi(z'-z))\,\Gamma(z-z')=\frac{(-1)^{k+1}\pi}{\Gamma(1+z'-z)}=\frac{(-1)^{k+1}\pi}{\Gamma(1+\alpha)}
\]
should be used along the way. □
Remark 8.2. The functions $R_{\mathrm{out}}$ and $S_{\mathrm{out}}$ can be similarly expressed through $Q_{n-1}$ and $Q_n$, respectively. Since we do not use the corresponding formulae below, we leave their derivation to the interested reader.
The $n$th Christoffel–Darboux kernel for the Jacobi polynomials is given by the formula
\[
K^{\mathrm{CD}}_n(x,y)=\sum_{j=0}^{n-1}\frac{P_j(x)P_j(y)}{h_j}
=\frac{k_{n-1}}{k_nh_{n-1}}\,\frac{P_n(x)P_{n-1}(y)-P_{n-1}(x)P_n(y)}{x-y}\,.
\]
We define the $n$th Jacobi kernel on the interval $(-\frac12,\frac12)$ by the formula
\[
K^{\mathrm{Jac}}_n(x,y)=2K^{\mathrm{CD}}_n(2x,2y)\sqrt{w(2x)w(2y)}\,,\qquad x,y\in\Bigl(-\tfrac12,\tfrac12\Bigr)\,.
\]
The corresponding integral operator $K^{\mathrm{Jac}}_n$ is the orthogonal projection in $L^2((-\frac12,\frac12),dx)$ onto the $n$-dimensional subspace spanned by
\[
\Bigl(\tfrac12-x\Bigr)^{\frac\alpha2}\Bigl(\tfrac12+x\Bigr)^{\frac\beta2},\;
x\Bigl(\tfrac12-x\Bigr)^{\frac\alpha2}\Bigl(\tfrac12+x\Bigr)^{\frac\beta2},\;\dots,\;
x^{n-1}\Bigl(\tfrac12-x\Bigr)^{\frac\alpha2}\Bigl(\tfrac12+x\Bigr)^{\frac\beta2}\,.
\]
PROPOSITION 8.3 Under the assumptions of Proposition 8.1,
\[
K_{\mathrm{out,out}}=K_{\mathrm{out,in}}=K_{\mathrm{in,out}}=0\,,\qquad K_{\mathrm{in,in}}=K^{\mathrm{Jac}}_n\,,
\]
where $K$ is the continuous $_2F_1$ kernel.
PROOF: The vanishing follows from the vanishing of $\psi_{\mathrm{out}}$, which, in turn, follows from the vanishing of $\sin\pi z$ and $\sin\pi w$. The equality $K_{\mathrm{in,in}}=K^{\mathrm{Jac}}_n$ follows from the definition of both kernels and Proposition 8.1. □
Thus, the Jacobi kernel can be viewed as a special case of the $_2F_1$ kernel. Our next step is to extend the results of Sections 6 and 7 to this kernel.
Let $J=(a_1,a_2)\cup(a_3,a_4)\cup\dots\cup(a_{2m-1},a_{2m})$ be a finite union of disjoint intervals inside $(-\frac12,\frac12)$. It may happen that $a_1=-\frac12$ or $a_{2m}=\frac12$. However, we require $J$ to be a proper subset of $(-\frac12,\frac12)$.
PROPOSITION 8.4 Assume that $0<\alpha<\frac12$ and $0<\beta<\frac12$. Then the Jacobi kernel satisfies conditions (1) through (7) of Section 5.
PROOF: (1) follows from the fact that $K^{\mathrm{Jac}}_n$ coincides with the $_2F_1$ kernel for a specific set of parameters (Proposition 8.3), and for that kernel the condition was verified in Proposition 3.2.
(2) is obvious, since $K^{\mathrm{Jac}}_n$ is a finite rank operator.
(3) follows from the fact that $K^{\mathrm{Jac}}_n$ is a projection onto a finite-dimensional space, and the range of this projection intersects $L^2(J,dx)$ trivially (here we used the condition that $J$ is a proper subset of $(-\frac12,\frac12)$).
(4) follows from the explicit form of the kernel (here we use the condition $\alpha,\beta>0$, which guarantees boundedness near the points $\pm\frac12$).
(5) is exactly the same as for the $_2F_1$ kernel.
(6) is the only nontrivial condition. If $a_1\ne-\frac12$ and $a_{2m}\ne\frac12$, then the claim follows from Propositions 3.3 and 3.4, as for the $_2F_1$ kernel. Now assume that $a_1=-\frac12$. Since $0<\beta<\frac12$, Proposition 3.4 implies that $m(\zeta)C^{-1}(\zeta)$ is locally in $L^4$ on any smooth curve passing through $\zeta=-\frac12$. By Proposition A.2, $m_J$ is locally $L^2$; hence, $M=m_JmC^{-1}$ is locally in $L^{4/3}$.
The jump matrix $V$ for $M$ locally near $-\frac12$ coincides with the jump matrix $C_v=C_-C_+^{-1}=C_+^{-1}C_-$ for $C^{-1}$; see Section 5. This means that $H(\zeta)C^{-1}(\zeta)$ is a local solution of the RHP for $M$ for any locally holomorphic $H(\zeta)$. Set $M_o=HC^{-1}$ with
\[
H(\zeta)=\begin{pmatrix}(\zeta+\frac12)^{w}&0\\0&(\zeta+\frac12)^{-w}\end{pmatrix}.
\]
Note that $H$ has no branch at $-\frac12$, because $w=l\in\mathbb Z$. Then
\[
M_o(\zeta)=\begin{pmatrix}(\zeta-\frac12)^{-\frac{z+z'}2}(\zeta+\frac12)^{\frac{w-w'}2}&0\\0&(\zeta-\frac12)^{\frac{z+z'}2}(\zeta+\frac12)^{\frac{w'-w}2}\end{pmatrix}.
\]
Hence, $(M_o(\zeta))^{-1}$ (as well as $M_o(\zeta)$) is locally in $L^4$, because $w'-w=\beta\in(0,\frac12)$. Thus, $M(M_o)^{-1}$ is a locally $L^1$-function with no jump across $\mathbb R$. This means that near $\zeta=-\frac12$
\[
M(\zeta)=H_{-1/2}(\zeta)\begin{pmatrix}(\zeta+\frac12)^{-\frac\beta2}&0\\0&(\zeta+\frac12)^{\frac\beta2}\end{pmatrix}
\]
for some locally holomorphic function $H_{-1/2}(\zeta)$ such that $H_{-1/2}(-\frac12)$ is nonsingular. Hence
\[
M'M^{-1}(\zeta)=\frac B{\zeta+\frac12}+\text{a locally holomorphic function}\,,
\]
where $B$ has eigenvalues $-\beta/2$ and $\beta/2$.
The argument in the case $a_{2m}=\frac12$ is similar, and the eigenvalues of the residue $A$ of $M'M^{-1}$ at $\zeta=\frac12$ are equal to $-\alpha/2$ and $\alpha/2$.
Finally, condition (7) for the Jacobi kernel follows from that for the $_2F_1$ kernel. □
Now, by Theorem 5.1, for $\alpha,\beta\in(0,\frac12)$, the matrix $M$ corresponding to the Jacobi kernel satisfies the differential equation (cf. Theorem 4.6)
\[
M'(\zeta)=\biggl(\frac A{\zeta-\frac12}+\frac B{\zeta+\frac12}+\sum_{j=1}^{2m}\frac{C_j}{\zeta-a_j}\biggr)M(\zeta)
\]
for some constant matrices $A$, $B$, and $\{C_j\}_{j=1}^{2m}$. If $a_1=-\frac12$ then $C_1=0$, and if $a_{2m}=\frac12$ then $C_{2m}=0$.
Moreover,
\[
\operatorname{tr}A=\operatorname{tr}B=\operatorname{tr}C_j=0\,,\quad \det A=-\frac{\alpha^2}4\,,\quad \det B=-\frac{\beta^2}4\,,\quad \det C_j=0\,,
\]
for all $j=1,2,\dots,2m$, and
\[
A+B+\sum_{j=1}^{2m}C_j=-\Bigl(n+\frac{\alpha+\beta}2\Bigr)\sigma_3\,,\qquad \sigma_3=\begin{pmatrix}1&0\\0&-1\end{pmatrix}.
\]
Further, by Proposition 6.2 the matrices $A$, $B$, and $\{C_j\}_{j=1}^{2m}$ satisfy the Schlesinger equations (6.4) and (6.5). Finally, Theorem 6.5 implies the following:
THEOREM 8.5 Assume that $0<\alpha,\beta<\frac12$. Then the Fredholm determinant $\det(1-K^{\mathrm{Jac}}_n|_J)$, where $K^{\mathrm{Jac}}_n$ is the Jacobi kernel, is the $\tau$-function for the system of Schlesinger equations (6.4) and (6.5) with matrices $A$, $B$, and $\{C_j\}_{j=1}^{2m}$ satisfying the conditions stated above.
Similarly to the $_2F_1$ kernel, the cases when $J=(-\frac12,s)$ or $J=(s,\frac12)$ lead to the Painlevé VI equation. Note that there are no restrictions on $\alpha$ and $\beta$.
THEOREM 8.6 [27] Let $K_s$ be the restriction of the Jacobi kernel $K^{\mathrm{Jac}}_n$ to either the interval $(-\frac12,s)$, $s<\frac12$, or to the interval $(s,\frac12)$, $s>-\frac12$. Then the function
\[
\sigma(s)=\Bigl(s-\tfrac12\Bigr)\Bigl(s+\tfrac12\Bigr)\frac{d\ln\det(1-K_s)}{ds}-\Bigl(n+\frac{\alpha+\beta}2\Bigr)^2s+\frac{\alpha^2-\beta^2}8
\]
satisfies the differential equation
\[
\sigma'\Bigl(\Bigl(s-\tfrac12\Bigr)\Bigl(s+\tfrac12\Bigr)\sigma''\Bigr)^2
=(\sigma'+\nu_1^2)(\sigma'+\nu_2^2)(\sigma'+\nu_3^2)(\sigma'+\nu_4^2)-\bigl(2(s\sigma'-\sigma)\sigma'-\nu_1\nu_2\nu_3\nu_4\bigr)^2\,,
\]
where
\[
\nu_1=\nu_2=n+\frac{\alpha+\beta}2\,,\qquad \nu_3=\frac{\alpha+\beta}2\,,\qquad \nu_4=\frac{\alpha-\beta}2\,.
\]
PROOF: Simply repeat the proof of Theorem 7.1. Note that in this way we only prove the theorem for $\alpha,\beta\in(0,\frac12)$. But for the finite-dimensional Jacobi kernel it is obvious that the determinant $\det(1-K_s)$ and all its derivatives depend on the parameters $\alpha$ and $\beta$ analytically. That is why we can remove the additional restrictions on $\alpha$ and $\beta$. □
8.2 The Whittaker Kernel
The Whittaker kernel, which we are about to introduce, plays the same role in harmonic analysis on the infinite symmetric group as the $_2F_1$ kernel plays in the harmonic analysis on the infinite-dimensional unitary group; see Section 1. The problem for the infinite symmetric group was investigated by G. Olshanski and one of the authors in a series of papers; see [5, 6, 7, 11, 12, 13, 15, 48, 49]. For a brief summary, we refer the reader to [7, introduction], [11], and [15, sect. 3].
Split the space $X=\mathbb R\setminus\{0\}$ into two parts
\[
X=X_+\sqcup X_-\,,\qquad X_+=\mathbb R_+\,,\qquad X_-=\mathbb R_-\,.
\]
Let $z$ and $z'$ be two complex nonintegral numbers such that either $z'=\bar z$ or $z$ and $z'$ are both real and $k<z,z'<k+1$ for some $k\in\mathbb Z$.
The functions
\[
\psi_+\colon X_+\to\mathbb R_+\,,\qquad \psi_-\colon X_-\to\mathbb R_+
\]
are defined by the formulae
\[
\psi_+(x)=C(z,z')\,x^{-z-z'}e^{-x}\,,\qquad \psi_-(x)=(-x)^{z+z'}e^{x}\,,
\]
where $C(z,z')=\sin\pi z\,\sin\pi z'/\pi^2$, as before. The Whittaker kernel is a kernel on $X$, which in block form
\[
K=\begin{pmatrix}K_{+,+}&K_{+,-}\\K_{-,+}&K_{-,-}\end{pmatrix}
\]
corresponding to the splitting $X=X_+\sqcup X_-$, is given by
\[
K_{+,+}(x,y)=\sqrt{\psi_+(x)\psi_+(y)}\;\frac{R_+(x)S_+(y)-S_+(x)R_+(y)}{x-y}\,,
\]
\[
K_{+,-}(x,y)=\sqrt{\psi_+(x)\psi_-(y)}\;\frac{R_+(x)R_-(y)-S_+(x)S_-(y)}{x-y}\,,
\]
\[
K_{-,+}(x,y)=\sqrt{\psi_-(x)\psi_+(y)}\;\frac{R_-(x)R_+(y)-S_-(x)S_+(y)}{x-y}\,,
\]
\[
K_{-,-}(x,y)=\sqrt{\psi_-(x)\psi_-(y)}\;\frac{R_-(x)S_-(y)-S_-(x)R_-(y)}{x-y}\,,
\]
where
\[
R_+(x)=x^{\frac{z+z'-1}2}e^{-\frac x2}\,W_{\frac{z+z'+1}2,\frac{z-z'}2}(x)\,,
\]
\[
S_+(x)=\Gamma(z+1)\Gamma(z'+1)\,x^{\frac{z+z'-1}2}e^{-\frac x2}\,W_{\frac{z+z'-1}2,\frac{z-z'}2}(x)\,,
\]
\[
R_-(x)=(-x)^{\frac{-z-z'-1}2}e^{\frac x2}\,W_{\frac{-z-z'+1}2,\frac{z-z'}2}(-x)\,,
\]
\[
S_-(x)=\frac1{\Gamma(-z)\Gamma(-z')}\,(-x)^{\frac{-z-z'-1}2}e^{\frac x2}\,W_{\frac{-z-z'-1}2,\frac{z-z'}2}(-x)\,.
\]
Here $W_{\kappa,\mu}(x)$ is the Whittaker function; see [24, 6.9].
In the definition of the Whittaker kernel above we have switched the signs of the parameters $z$ and $z'$ compared to the standard notation. The reason for the switch is the following:
PROPOSITION 8.7 The Whittaker kernel $K^W$ can be realized as a scaling limit of the $_2F_1$ kernel $K=K^F$ introduced in Section 2,
\[
K^W(x,y)=\lim_{\epsilon\to+0}\epsilon\,K^F\Bigl(\frac12+\epsilon x,\frac12+\epsilon y\Bigr)\,,\qquad x,y\in\mathbb R\setminus\{0\}\,,
\]
where for the $_2F_1$ kernel we set $w=\frac1\epsilon$, $w'=0$, and the parameters $(z,z')$ for both kernels are the same.
PROOF: Using the well-known limit relations ($a,b,c\in\mathbb C$)
\[
\lim_{\epsilon\to+0}(1+\epsilon a)^{1/\epsilon}=e^a\,,\qquad
\lim_{\epsilon\to+0}\epsilon^{a-b}\,\frac{\Gamma(a+\frac1\epsilon)}{\Gamma(b+\frac1\epsilon)}=1\,,
\]
\[
\lim_{\epsilon\to+0}{}_2F_1\Bigl[{a,\ b\atop \frac1\epsilon+c}\Bigm|1-\frac1{\epsilon x}\Bigr]
=x^{\frac{a+b-1}2}e^{\frac x2}\,W_{\frac{1-a-b}2,\frac{a-b}2}(x)\,,\qquad x\notin\mathbb R_-\,,
\]
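The first two limit relations are elementary and easy to test numerically (the third would require a Whittaker-function implementation, which we omit). A small pure-Python sketch, with `lgamma` used to keep the $\Gamma$-ratio stable; the function names are ours:

```python
import math

def pow_limit(a, eps):
    # (1 + eps*a)^(1/eps), which tends to e^a as eps -> +0
    return (1 + eps * a) ** (1 / eps)

def gamma_ratio_limit(a, b, eps):
    # eps^(a-b) * Gamma(a + 1/eps) / Gamma(b + 1/eps), computed in log space;
    # tends to 1 as eps -> +0
    return math.exp(math.lgamma(a + 1 / eps) - math.lgamma(b + 1 / eps)
                    + (a - b) * math.log(eps))
```

Both quantities converge at rate $O(\epsilon)$, which the assertions below reflect.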
we see that (remember $w=\frac1\epsilon$, $w'=0$)
\[
\lim_{\epsilon\to+0}\epsilon^{z+z'}\,\psi_{\mathrm{out}}\Bigl(\frac12+\epsilon x\Bigr)=\psi_+(x)\,,\quad x>0\,,\qquad
\lim_{\epsilon\to+0}\epsilon^{-z-z'}\,\psi_{\mathrm{in}}\Bigl(\frac12+\epsilon x\Bigr)=\psi_-(x)\,,\quad x<0\,,
\]
\[
\lim_{\epsilon\to+0}R_{\mathrm{out}}\Bigl(\frac12+\epsilon x\Bigr)=R_+(x)\,,\qquad
\lim_{\epsilon\to+0}\epsilon^{-z-z'}\,S_{\mathrm{out}}\Bigl(\frac12+\epsilon x\Bigr)=S_+(x)\,,\quad x>0\,.
\]
Further, if we identify $R_+$ and $S_+$ with their analytic continuations, then on $\mathbb R_-$ we have
\[
\frac1{\psi_+}\cdot\frac{S_{+,+}-S_{+,-}}{-2\pi i}=R_-\,,\qquad
\frac1{\psi_+}\cdot\frac{R_{+,+}-R_{+,-}}{2\pi i}=S_-\,,
\]
where we denote by $F_+$ and $F_-$ the boundary values of a function $F$:
\[
F_+(x)=F(x+i0)\,,\qquad F_-(x)=F(x-i0)\,;
\]
see [24, 6.5(7), 6.8(15), 6.9(4)]. Comparing these relations with (2.1), we conclude that, for $x<0$,
\[
\lim_{\epsilon\to+0}R_{\mathrm{in}}\Bigl(\frac12+\epsilon x\Bigr)=R_-(x)\,,\qquad
\lim_{\epsilon\to+0}\epsilon^{z+z'}\,S_{\mathrm{in}}\Bigl(\frac12+\epsilon x\Bigr)=S_-(x)\,.
\]
The result now follows from the explicit form of the kernels. □
Let $J=(a_1,a_2)\cup(a_3,a_4)\cup\dots\cup(a_{2m-1},a_{2m})$ be a union of disjoint, possibly infinite, intervals such that the closure of $J$ does not contain the origin.
PROPOSITION 8.8 The Whittaker kernel satisfies conditions (1)–(4), (5a)–(5d), (5e′), (6), and (7) of Section 5 with the matrices
\[
C(\zeta)=\begin{pmatrix}\zeta^{\frac{z+z'}2}e^{-\zeta/2}&0\\0&\zeta^{-\frac{z+z'}2}e^{\zeta/2}\end{pmatrix},\qquad
D=-\frac12\begin{pmatrix}1&0\\0&-1\end{pmatrix}=-\frac{\sigma_3}2\,.
\]
PROOF: The proof of (1) is very similar to the case of the $_2F_1$ kernel. The kernel $L(x,y)$ has the form
\[
L=\begin{pmatrix}0&A\\-A^t&0\end{pmatrix},
\]
where $A$ is a kernel on $\mathbb R_+\times\mathbb R_-$ of the form
\[
A(x,y)=\frac{\sqrt{\psi_+(x)\psi_-(y)}}{x-y}
=\frac{\sqrt{C(z,z')}\;x^{-\frac{z+z'}2}e^{-\frac x2}\,(-y)^{\frac{z+z'}2}e^{\frac y2}}{x-y}\,.
\]
The jump condition is verified using the formulae [24, 6.5(7), 6.8(15), 6.9(4)].
(2) can be either verified in the same way as for the $_2F_1$ kernel or deduced from the fact that the Whittaker kernel is the correlation kernel of a determinantal point process that has finitely many particles in $J$ almost surely (see [54, theorem 4] for the general theorem about determinantal point processes, and [11, 7] for the needed property of the Whittaker kernel).
(3) If $|z+z'|<1$, then the kernel $L$ introduced above defines a skew-symmetric, bounded operator in $L^2(X,dx)$ and $K=L(1+L)^{-1}$; see [11, 49]. Then, similarly to Corollary 3.7, we can prove that $1-K_J$ is invertible.
However, for the restricted operator $K_J$, we can prove the invertibility of $1-K_J$ for all admissible values of $(z,z')$. The following argument is due to G. Olshanski.
Write $K_J$ in the block form
\[
K_J=\begin{pmatrix}K_J^{+,+}&K_J^{+,-}\\K_J^{-,+}&K_J^{-,-}\end{pmatrix}
\]
corresponding to the splitting $J=(J\cap\mathbb R_+)\cup(J\cap\mathbb R_-)$. Since $K$ is a correlation kernel, $K_J^{+,+}$ and $K_J^{-,-}$ are positive definite. Moreover, $K_J^{-,+}(x,y)=-K_J^{+,-}(y,x)$ by definition of the Whittaker kernel. Thus, it is enough to prove the invertibility of $1-K_J^{+,+}$ and $1-K_J^{-,-}$ (see the proof of Corollary 3.7).
We consider $K_J^{+,+}$; the proof for $K_J^{-,-}$ is similar. By [54, theorem 3], $\|K^{+,+}\|\le1$ and $\|K_J^{+,+}\|=\|K^{+,+}|_J\|\le1$. The only way $K_J^{+,+}$ can have norm 1 (remember that $K_J^{+,+}$ is of trace class and hence compact) is that $K^{+,+}$ has an eigenfunction with eigenvalue 1 that is supported on $J\cap\mathbb R_+$. By [49, prop. 3.1] (see also [11]),
$K^{+,+}=K^{+,+}(x,y)$ commutes with a Sturm–Liouville operator
\[
D_x=-\frac d{dx}\,x^2\,\frac d{dx}+\frac{(z+z'+x)^2}4
\]
in the sense that
\[
K^{+,+}(x,y)D_y=D_xK^{+,+}(x,y)
\]
for all $x,y>0$. Suppose $f\in L^2(\mathbb R_+)$ is an eigenfunction of $K^{+,+}$ with eigenvalue 1 and supported in $J\cap\mathbb R_+$, i.e.,
\[
\int_{\mathbb R_+}K^{+,+}(x,y)f(y)\,dy=\int_{J\cap\mathbb R_+}K^{+,+}(x,y)f(y)\,dy=f(x)\,,\qquad x>0\,.
\]
Then using the decay and smoothness properties of $K^{+,+}(x,y)$, which follow easily from the known properties of the Whittaker function, one sees that $D_xf$ also belongs to $L^2(\mathbb R_+)$ and
\[
\int_{\mathbb R_+}K^{+,+}(x,y)D_yf(y)\,dy=D_xf(x)\,;
\]
thus
\[
V=\operatorname{Span}\bigl\{D_x^kf:k\ge0\bigr\}\subset\operatorname{Ker}\bigl(1-K_J^{+,+}\bigr)\subset L^2(\mathbb R_+)\,.
\]
But because $K_J^{+,+}$ is compact, $\dim V<\infty$, and hence $V$ is a finite-dimensional invariant subspace for $D_x$. It follows that there exists a nonzero $v\in V$ such that $D_xv=\lambda v$ for some scalar $\lambda$. But since $v\in V$, it must vanish in a neighborhood of $x=0$, which is not possible for nontrivial solutions $v(x)$ of the differential equation $D_xv=\lambda v$. Thus, we obtain a contradiction, and hence $\|K_J^{+,+}\|<1$ and $1-K_J^{+,+}$ is invertible. The proof of (3) is now complete.
(4) and (5a)–(5d), (5e′) are easily verified. The proofs of (6) and (7) are similar to the case of the $_2F_1$ kernel, and we do not reproduce them here. □
By Theorem 5.3, the matrix $M$ for the Whittaker kernel satisfies the differential equation
\[
M'(\zeta)=\biggl(\frac A\zeta+\sum_{j=1}^{2m}\frac{C_j}{\zeta-a_j}-\frac{\sigma_3}2\biggr)M(\zeta)\,.
\]
The matrices $\{C_j\}_{j=1}^{2m}$ are nilpotent (if $a_1=-\infty$ or $a_{2m}=+\infty$, then $C_1=0$ or $C_{2m}=0$, respectively), and an analogue of Proposition 3.3 shows that
\[
\operatorname{tr}A=0\,,\qquad \det A=-\Bigl(\frac{z-z'}2\Bigr)^2\,.
\]
THEOREM 8.9 The Fredholm determinant $\det(1-K|_J)$, where $K$ is the Whittaker kernel, is the $\tau$-function for the system of Schlesinger equations
\[
(8.1)\qquad \frac{\partial A}{\partial a_j}=\frac{[C_j,A]}{a_j}\,,\qquad
\frac{\partial C_l}{\partial a_j}=\frac{[C_j,C_l]}{a_j-a_l}\,,\quad l\ne j\,,
\]
\[
\frac{\partial C_j}{\partial a_j}=-\frac{[A,C_j]}{a_j}-\sum_{\substack{1\le l\le 2m\\l\ne j}}\frac{[C_j,C_l]}{a_j-a_l}+\frac{[C_j,\sigma_3]}2\,.
\]
The matrices $\{C_j\}_{j=1}^{2m}$ are nilpotent (if $a_1=-\infty$ or $a_{2m}=+\infty$, then $C_1=0$ or $C_{2m}=0$, respectively), and
\[
\operatorname{tr}A=0\,,\qquad \det A=-\Bigl(\frac{z-z'}2\Bigr)^2\,.
\]
PROOF: These results follow from Theorem 6.5. □
The next step is to consider $J=(s,+\infty)$, $s>0$. It turns out that in this case the Schlesinger equations reduce to the $\sigma$-form of the Painlevé V equation. This reduction can be performed in the spirit of Section 7, following the corresponding part of [32, appendix C]. Although we do not perform the computation here, let us state the result.
THEOREM 8.10 [57] Assume that $s>0$. Then the function
\[
\sigma(s)=s\,\frac{d\ln\det(1-K|_{(s,+\infty)})}{ds}
\]
satisfies the $\sigma$-form of the Painlevé V equation
\[
(8.2)\qquad (s\sigma'')^2=\bigl(2(\sigma')^2-s\sigma'+\sigma+(\nu_1+\nu_2+\nu_3+\nu_4)\sigma'\bigr)^2-4(\sigma'+\nu_1)(\sigma'+\nu_2)(\sigma'+\nu_3)(\sigma'+\nu_4)\,,
\]
where
\[
\nu_1=\nu_2=0\,,\qquad \nu_3=z\,,\qquad \nu_4=z'\,.
\]
This result can of course also be obtained from Theorem 7.1 via the limit transition discussed in Proposition 8.7.
Very much in the same way as the $_2F_1$ kernel becomes the Jacobi kernel at integral values of $z$ and $w$, the Whittaker kernel becomes the Laguerre kernel if either of the parameters $z$ or $z'$ is an integer; see [12, remark 2.4]. Without giving any details, we formulate the results that can be obtained using this specialization.
Let $J$ be a proper subset of $\mathbb R_+$ whose left endpoint is allowed to coincide with 0 and whose right endpoint is allowed to coincide with $+\infty$.
THEOREM 8.11 Assume that $0<\alpha<\frac12$. Then the Fredholm determinant $\det(1-K^{\mathrm{Lag}}_n|_J)$, where $K^{\mathrm{Lag}}_n$ is the $n$th Laguerre kernel with parameter $\alpha$, is the $\tau$-function for the system of Schlesinger equations (8.1). The matrices $\{C_j\}_{j=1}^{2m}$ are nilpotent (if $a_1=0$ or $a_{2m}=+\infty$ then $C_1=0$ or $C_{2m}=0$, respectively), and the eigenvalues of $A$ are equal to $\pm\alpha/2$.
THEOREM 8.12 [61] Assume that $s>0$ and $\alpha>-1$. Let $K_s$ be the $n$th Laguerre kernel with parameter $\alpha$ restricted to either $(0,s)$ or $(s,+\infty)$. Then the function
\[
\sigma(s)=s\,\frac{d\ln\det(1-K_s)}{ds}
\]
satisfies the differential equation (8.2) with $\nu_1=\nu_2=0$, $\nu_3=n$, and $\nu_4=n+\alpha$.
8.3 The Confluent Hypergeometric Kernel
This subsection is based on the following observation:
PROPOSITION 8.13 Set
\[
(8.3)\qquad z=z_0+i\omega\,,\quad z'=\bar z_0-i\omega\,,\quad w=w_0+i\omega\,,\quad w'=\bar w_0-i\omega\,.
\]
Then the $_2F_1$ kernel $K^F$ has the following scaling limit:
\[
K(x,y)=\lim_{\omega\to+\infty}\frac\omega{|xy|}\,K^F\Bigl(\frac\omega x,\frac\omega y\Bigr)\,,\qquad x,y\ne0\,,
\]
where the limit kernel depends on one complex parameter $r=z_0+w_0$, $\Re r>-\frac12$, and has the form
\[
K(x,y)=\frac1{2\pi}\,\frac{\Gamma(r+1)\Gamma(\bar r+1)}{\Gamma(r+\bar r+1)\Gamma(r+\bar r+2)}\;\frac{Q(x)P(y)-P(x)Q(y)}{x-y}\,,
\]
\[
P(x)=|2x|^re^{ix+\pi ir\,\mathrm{sgn}(x)/2}\,{}_1F_1\Bigl[{r\atop r+\bar r}\Bigm|-2ix\Bigr]\,,
\]
\[
Q(x)=2x\,|2x|^re^{ix+\pi ir\,\mathrm{sgn}(x)/2}\,{}_1F_1\Bigl[{r+1\atop r+\bar r+2}\Bigm|-2ix\Bigr]\,.
\]
Here $_1F_1[{a\atop c}|x]$ is the confluent hypergeometric function, which is also denoted as $\Phi(a,c;x)$; see [24, 6.1].
The determinantal point process with the correlation kernel $K(x,y)$ describes the decomposition of a remarkable family of measures on infinite Hermitian matrices on the ergodic (with respect to the $U(\infty)$ action) measures; see [14]. We will call $K(x,y)$ the confluent hypergeometric kernel.
For real values of $r$ this kernel was obtained in [64] as a scaling limit of Christoffel–Darboux kernels for a certain system of orthogonal polynomials (called the pseudo-Jacobi polynomials). For complex values of $r$ such a limit transition can be carried out as well; see [14, sect. 2].
PROOF: This is a direct computation. The relevant limit relation for the hypergeometric functions in this case has the form
\[
\lim_{\omega\to+\infty}{}_2F_1\Bigl[{a,\ b+2i\omega\atop c}\Bigm|\Bigl(\frac12-\frac\omega x\Bigr)^{-1}\Bigr]={}_1F_1\Bigl[{a\atop c}\Bigm|-2ix\Bigr]\,.\qquad\square
\]
The determinantal point process defined by $K$ has locally finite point configurations almost surely; see [14]. Hence [54, theorem 4] the restriction $K_t=K|_{(0,t)}$ of $K$ to any finite interval $(0,t)$ defines an operator of trace class, and $\det(1-K_t)$ is well-defined. It is natural to conjecture that this Fredholm determinant satisfies a differential equation obtained by taking the corresponding scaling limit of the Painlevé VI equation of Theorem 7.1.
In the proposition below we check that the limit of the differential equation exists, and we observe that it is a $\sigma$-form of the Painlevé V equation. In [64] it was proven that for real values of $r$ the determinant $\det(1-K_t)$ does indeed satisfy this equation. The justification of this statement for all values of $r$ in our setup requires a proof that the corresponding restriction of the $_2F_1$ kernel converges to $K_t$ in the trace norm, and we leave this technical issue aside here.
PROPOSITION 8.14 Under the change of parameters (8.3) and the change of the independent variable $s=\omega/t$, equation (7.1) converges to the following $\sigma$-version of the Painlevé V:
\[
(8.4)\qquad (t\sigma'')^2=-\bigl(2(t\sigma'-\sigma)+(\sigma')^2+i(\bar r-r)\sigma'\bigr)^2+(\sigma')^2(\sigma'-2ir)(\sigma'+2i\bar r)\,,
\]
where $r=z_0+w_0$ and
\[
\tilde\sigma(s)=-\frac\omega t\Bigl(\sigma(t)-\frac{i(r-\bar r)}2\,t+\frac{(r+\bar r)^2}4\Bigr)\,.
\]
PROOF: We derive (8.4) from (7.1), assuming that certain limits exist, as noted above. Keeping in mind the relation $s=\omega/t$, we have
\[
\tilde\sigma(s)=\Bigl(\frac\omega t-\frac12\Bigr)\Bigl(\frac\omega t+\frac12\Bigr)\Bigl(-\frac{t^2}\omega\Bigr)\frac d{dt}\bigl[\ln\det\bigl(1-K^F|_{(\omega/t,+\infty)}\bigr)\bigr]-\frac{(r+\bar r)^2}4\cdot\frac\omega t+\frac12\Bigl(2i\omega+\frac{z_0+w_0-\bar z_0-\bar w_0}2\Bigr)\frac{r-\bar r}2
\]
\[
=-\omega\Bigl(\frac{d\ln\det(1-K|_t)}{dt}+\frac{(r+\bar r)^2}{4t}-\frac{i(r-\bar r)}2\Bigr)+O(1)\,.
\]
Hence, anticipating that
\[
\sigma(t)=t\,\frac{d\ln\det(1-K|_t)}{dt}+o(1)\qquad\text{as }\omega\to+\infty\,,
\]
we define $\sigma(t)$ by the relation
\[
\tilde\sigma(s)=-\frac\omega t\Bigl(\sigma(t)-\frac{i(r-\bar r)}2\,t+\frac{(r+\bar r)^2}4\Bigr)\,,
\]
which leads to
\[
\frac{d\tilde\sigma(s)}{ds}=-\frac{t^2}\omega\,\frac d{dt}\Bigl[-\frac\omega t\Bigl(\sigma(t)-\frac{i(r-\bar r)}2\,t+\frac{(r+\bar r)^2}4\Bigr)\Bigr]=t\,\frac{d\sigma(t)}{dt}-\sigma(t)-\frac{(r+\bar r)^2}4\,,
\]
\[
s\,\frac{d\tilde\sigma(s)}{ds}-\tilde\sigma(s)=\omega\Bigl(\frac{d\sigma(t)}{dt}-\frac{i(r-\bar r)}2\Bigr)\,,
\]
\[
\frac{d^2\tilde\sigma(s)}{ds^2}=-\frac{t^2}\omega\,\frac d{dt}\Bigl[t\,\frac{d\sigma(t)}{dt}-\sigma(t)-\frac{(r+\bar r)^2}4\Bigr]=-\frac{t^3}\omega\,\frac{d^2\sigma(t)}{dt^2}\,,
\]
\[
\Bigl(s-\frac12\Bigr)\Bigl(s+\frac12\Bigr)\frac{d^2\tilde\sigma(s)}{ds^2}=-\omega t\,\frac{d^2\sigma(t)}{dt^2}+O(1)\,.
\]
Substituting these relations into equation (7.1) and passing to the limit $\omega\to+\infty$, we obtain
\[
-\Bigl(t\frac{d\sigma(t)}{dt}-\sigma(t)-\frac{(r+\bar r)^2}4\Bigr)\Bigl(t\frac{d^2\sigma(t)}{dt^2}\Bigr)^2
=\Bigl(2\Bigl(\frac{d\sigma(t)}{dt}-\frac{i(r-\bar r)}2\Bigr)\Bigl(t\frac{d\sigma(t)}{dt}-\sigma(t)-\frac{(r+\bar r)^2}4\Bigr)-2i\,\frac{(r+\bar r)^2}4\cdot\frac{r-\bar r}2\Bigr)^2
\]
\[
+4\Bigl(t\frac{d\sigma(t)}{dt}-\sigma(t)\Bigr)^2\Bigl(t\frac{d\sigma(t)}{dt}-\sigma(t)-\frac{(r+\bar r)^2}4+\frac{(r-\bar r)^2}4\Bigr)\,.
\]
It is readily seen that the right-hand side of this equation is divisible by
\[
\Bigl(t\frac{d\sigma(t)}{dt}-\sigma(t)-\frac{(r+\bar r)^2}4\Bigr)\,,
\]
and after this cancellation the equation exactly coincides with (8.4). □
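The divisibility claim can be spot-checked numerically: with the sign conventions used above, the right-hand side should vanish whenever the common factor $F=t\sigma'-\sigma-(r+\bar r)^2/4$ does. A minimal sketch in pure Python (variable names and sample values are ours):

```python
def rhs_on_factor_locus(r, t, sp):
    # Evaluate the right-hand side of the limiting equation with sigma(t) chosen
    # so that F = t*sigma' - sigma - (r + rbar)^2/4 vanishes exactly.
    rb = r.conjugate()
    s = t * sp - (r + rb) ** 2 / 4       # forces F = 0
    F = t * sp - s - (r + rb) ** 2 / 4
    G = t * sp - s
    return (2 * (sp - 1j * (r - rb) / 2) * F
            - 2j * ((r + rb) ** 2 / 4) * ((r - rb) / 2)) ** 2 \
           + 4 * G ** 2 * (F + (r - rb) ** 2 / 4)
```

The two remaining terms cancel identically on this locus, which is exactly the content of the divisibility statement.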
Remark 8.15. For $r=0$ the kernel $K$ becomes the sine kernel
\[
\frac{\sin(x-y)}{\pi(x-y)}\,;
\]
see [14, sect. 2]. Accordingly, equation (8.4) takes the form
\[
(8.5)\qquad (t\sigma'')^2=-4(t\sigma'-\sigma)\bigl(t\sigma'-\sigma+(\sigma')^2\bigr)\,.
\]
This agrees with the celebrated result of [33], which states that equation (8.5) is satisfied by the function
\[
\sigma(t)=t\,\frac d{dt}\ln\det\Bigl(1-\frac{\sin(x-y)}{\pi(x-y)}\Big|_{(0,t)}\Bigr)\,.
\]
9 Differential Equations: A General Approach
Lemmas 4.3 and 4.4 point to a general method for proving that a wide class of
determinants satisfy Painlev equations; see the introduction and [58][61]. We
illustrate the method in the case of the Airy kernel
A(x. y) =
Ai (x)Ai

(y) Ai (x)Ai

(y)
x y
where Ai (x) is the well-known Airy function. This kernel arises in random matrix
theory [25, 59] and plays a central role in the interaction of combinatorics and
random matrix theory; see, e.g., [3, 4, 10, 35, 45].
For s R, let A
s
denote the operator obtained by restricting the kernel A(x. y)
to L
2
(s. +). The basic result of Tracy and Widom [59] is that

d
2
ds
2
ln det(1 A
s
) = u
2
(s) .
FREDHOLM DETERMINANTS 1223
where u(s) solves the Painlev II equation
u

= 2u
3
+su
with initial conditions
u(s) Ai (s) as s +.
We now outline a proof of this fact using Lemmas 4.3 and 4.4. It will be clear
to the reader that the method extends, in particular, to the general class of kernels
considered in [61].
In the notation of Lemmas 4.3 and4.4, let Y = R, Y
2
= J = (s. +), and
Y
1
= Y \ Y
2
= (. s]. Let B( ) be a 2 2 fundamental solution of the
differential equation
dB( )
d
=
_
0
1 0
_
B( ) . det B( ) 1 .
with B
11
( ) = Ai

( ) and B
21
( ) = Ai ( ). Set
m( ) =
_
B( ) . > 0 .
B( )
_
1 2i
0 1
_
. - 0 .
Then m satises the jump relation m
+
(x) = m

(x):(x), x R, where
:(x)
_
1 2i
0 1
_
.
Set
f (x)
_
1
0
_
. g(x)
_
0
1
_
. x R.
and note that : = I +2i f g
t
. Also,
F(x) = m(x) f (x) =
_
Ai

(x)
Ai (x)
_
. G(x) = m
t
(x)g(x) =
_
Ai (x)
Ai

(x)
_
.
and we set
:
2
(x) = I 2i F(x)G
t
(x) . x Y
2
= (s. +) .
Let m
s
solve the normalized RHP (Y
2
. :
2
), m
s
( ) I as . By Lemma 4.4,
M( ) m
s
( )m( )
solves the simple jump relation M
+
= M

: on Y
1
. Standard arguments as in
Theorem 5.1 and Proposition 6.2 now imply that M satises the Lax pair
dM( )
d
=
__
a +b
1 a
_
+
1
s
_
p q
r p
_ _
M( ) . (9.1)
dM( )
ds
=
1
s
_
p q
r p
_
M( ) . (9.2)
1224 A. BORODIN AND P. DEIFT
where [
p q
r p
] is nilpotent. Here a, b, p, q, and r are suitable constants that depend
only on s. By (6.9),
d
ds
ln det(1 A
s
) =
_
H
1
(s)H

(s)
_
21
.
where the prime refers to the derivative with respect to , and
(9.3) M( ) = H( )
_
1 2i ln( s)
0 1
_
.
As noted, in Section 4 (see (4.3)), det H( ) 1 and H( ) is analytic near = s
(in fact, H( ) is entire). Using (9.1) and (9.3), we nd
M

M
1
= H

H
1
+
1
s
H
_
0 1
0 0
_
H
1
=
_
a +b
1 a
_
+
1
s
_
p q
r p
_
which leads to the relation
(H
1
(s)H

(s))
21
= 2ap +q +(s +b)r .
The compatibility of the Lax pair equations (9.1) and (9.2) yields (see section 2 in
[36]) the relations 2ap + q + (s + b)r = a,
da
ds
= r, where r solves the Painlev
34 equation
d
2
r
ds
2
=
1
2r
_
dr
ds
_
2
4r
2
+2sr .
Writing r = u
2
, a simple calculation shows that u solves the Painlev II equation.
This veries the above claim for

d
2
ds
2
ln det(1 A
s
) =
da
ds
= r = u
2
.
Note that one can also show that det(1 A
s
) is the -function for the isomon-
odromy deformation described by (9.1) and (9.2).
Appendix: Integrable Operators and Riemann-Hilbert Problems
This appendix contains a brief summary of results on integrable operators and
corresponding Riemann-Hilbert problems that can be found in [20, 30, 37].
Let Y be an oriented contour in C. We call an operator L acting in L
2
(Y. |d |)
integrable if its kernel has the form
L(.

) =

N
j =1
f
j
( )g
j
(

. .

Y.
for some functions f
j
and g
j
, j = 1. 2. . . . . N. We shall always assume that
N

j =1
f
j
( )g
j
( ) = 0 . Y.
so that the kernel L(.

) is nonsingular (this assumption is not necessary for the


general theory).
FREDHOLM DETERMINANTS 1225
We do not impose here any restrictions on the functions f
i
or g
i
or on the
contour Y. For our purposes it sufces to assume that Y is a nite union of disjoint
(possibly innite) intervals on the real line, f
i
and g
i
are smooth functions inside
S, and
(A.1) f
i
. g
i
L
p
(Y. |d |) L

(Y. |d |) for some p. 1 - p - +.


These restrictions guarantee, in particular, that L is a bounded operator in L
2
(Y).
Particular examples of integrable operators appeared in the mathematical phy-
sics literature a long time ago. However, integrable operators were rst singled out
as a distinguished class in [30].
It turns out that for an integrable operator L such that (1 + L)
1
exists, the
operator K = L(1 + L)
1
is also integrable.
PROPOSITION A.1 [30] Let L be an integrable operator as described above, and
let K = L(1 + L)
1
. Then the kernel K(.

) has the form


K(.

) =

N
j =1
F
j
( )G
j
(

. .

Y.
where
F
j
= (1 + L)
1
f
j
. G
j
= (1 + L
t
)
1
g
j
. j = 1. 2. . . . . N.
If

N
j =1
f
j
( )g
j
( ) = 0 on Y, then

N
j =1
F
j
( )G
j
( ) = 0 on Y as well.
A remarkable fact is that F
j
and G
j
can be expressed through a solution of an
associated Riemann-Hilbert problem (RHP, for short).
Let : be a map from Y to mat(k. C), where k is a xed integer.
We say that a matrix function m : C\ Y mat(k. C) is a solution of the RHP
(Y. :) if the following conditions are satised:
m( ) is analytic in C \ Y,
m
+
( ) = m

( ):( ), Y, where m

( ) = lim

()-side
m(

),
If in addition
m( ) I as ,
we say that m solves the normalized RHP (Y. :).
The matrix :( ) is called the jump matrix.
PROPOSITION A.2 [30] Let L be an integrable operator as described above such
that the operator 1 + L is invertible. Then there exists a unique solution m( ) of
the normalized RHP (Y. :) with
:( ) = I +2i f ( )g( )
T
mat(N. C) .
where
f = ( f
1
. f
2
. . . . . f
N
)
T
. g = (g
1
. g
2
. . . . . g
N
)
T
.
1226 A. BORODIN AND P. DEIFT
and the kernel of the operator K = L(1 + L)
1
has the form
K(.

) =
G
t
(

)F( )

. .

Y.
where
F = (F
1
. F
2
. . . . . F
N
)
T
. G = (G
1
. G
2
. . . . . G
N
)
T
.
are given by
F( ) = m
+
( ) f ( ) = m

( ) f ( ) . G( ) = m
t
+
( )g( ) = m
t

( )g( ) .
In other words, the inverse (1 + L)
1
of 1 plus an integrable operator can be
expressed in terms of the solution of an associated problem in complex variables.
The function m( ) may have singularities at the points of discontinuity of the
jump matrix : (e.g., at the endpoints of S). Unless specied otherwise, we assume
that m( ) belongs to the L
2
-space locally on any smooth curve passing through the
singular point. Under our restrictions on f
i
, g
i
, and S (see above), the solution
m( ) in Proposition A.2 satises this condition.
Discrete versions of Propositions A.1 and A.2 are given in [8].
Let now Y = Y
I
Y
II
be a union of two contours. Assume that the operator L
in the block form corresponding to this splitting is as follows:
L(x. y) =
_
0
h
I
(x)h
II
(y)
xy
h
I
(y)h
II
(x)
xy
0
_
for some functions h
I
( ) and h
II
( ) dened on Y
I
and Y
II
, respectively.
Then the operator L is integrable with N = 2. Indeed,
L(x. y) =
f
1
(x)g
1
(y) + f
2
(x)g
2
(y)
x y
. x. y Y.
where
f
1
(x) = g
2
(x) =
_
h
I
(x) . x Y
I
.
0 . x Y
II
.
f
2
(x) = g
1
(x) =
_
0 . x Y
I
.
h
II
(x) . x Y
II
.
The jump matrix :(x) of the corresponding RHP has the form
:(x) =
_

_
_
1 2i h
2
I
(x)
0 1
_
. x Y
I
.
_
1 0
2i h
2
II
(x) 1
_
. x Y
II
.
It can be easily seen that the RHP in such a situation is equivalent to the following
set of conditions:
matrix elements m
11
and m
21
are holomorphic in C \ Y
II
;
matrix elements m
12
and m
22
are holomorphic in C \ Y
I
;
FREDHOLM DETERMINANTS 1227
on Y
II
the following relations hold:
m
11+
(x) m
11
(x) = 2i h
2
I
(x)m
12
(x) .
m
21+
(x) m
21
(x) = 2i h
2
I
(x)m
22
(x) ;
on Y
I
the following relations hold:
m
12+
(x) m
12
(x) = 2i h
2
II
(x)m
11
(x) .
m
22+
(x) m
22
(x) = 2i h
2
II
(x)m
21
(x) ;
m(x) I as x .
According to Proposition A.2, the kernel K(x. y) in block form corresponding
to the splitting Y = Y
I
Y
II
is given by
(A.2) K(x. y) =
_
h
I
(x)h
I
(y)(m
11
(x)m
21
(y)+m
21
(x)m
11
(y))
xy
h
I
(x)h
II
(y)(m
11
(x)m
22
(y)m
21
(x)m
12
(y))
xy
h
II
(x)h
I
(y)(m
22
(x)m
11
(y)m
12
(x)m
21
(y))
xy
h
II
(x)h
II
(y)(m
22
(x)m
12
(y)+m
12
(x)m
22
(y))
xy
_
.
Acknowledgments. The authors would like to thank A. Kitaev for important
discussions about this work and O. Costin and R. D. Costin for informing us of
their calculations on Painlev VI. The authors would also like to thank A. Its and
G. Olshanski for many useful discussions. This research was partially conducted
during the period the rst author served as a Clay Mathematics Institute Long-Term
Prize Fellow. The work of the rst author was also supported in part by NSF Grant
DMS-9729992, and the work of the second author was supported in part by NSF
Grant DMS-0003268.
Bibliography
[1] Adler, M.; Shiota, T.; van Moerbeke, P. Random matrices, Virasoro algebras, and noncommu-
tative KP. Duke Math. J. 94 (1998), no. 2, 379431.
[2] Adler, M.; van Moerbeke, P. Hermitian, symmetric and symplectic random ensembles: PDEs
for the distribution of the spectrum. Ann. of Math. (2) 153 (2001), no. 1, 149189.
[3] Baik, J.; Deift, P.; Johansson, K. On the distribution of the length of the longest increasing
subsequence of random permutations. J. Amer. Math. Soc. 12 (1999), no. 4, 11191178.
[4] Baik, J.; Deift, P.; Johansson, K. On the distribution of the length of the second row of a Young
diagram under Plancherel measure. Geom. Funct. Anal. 10 (2000), no. 4, 702731.
[5] Borodin, A. Point processes and the innite symmetric group. II. Higher correlation functions.
Preprint, 1998.
[6] Borodin, A. Point processes and the innite symmetric group. IV. Matrix Whittaker kernel.
Preprint, 1998.
[7] Borodin, A. Harmonic analysis on the innite symmetric group, and the Whittaker kernel. Al-
gebra i Analiz 12 (2000), no. 5, 2863; translation in St. Petersburg Math. J. 12 (2001), no. 5,
733759.
[8] Borodin, A. Riemann-Hilbert problem and the discrete Bessel kernel. Internat. Math. Res. No-
tices 2000, no. 9, 467494.
[9] Borodin, A. Discrete gap probabilities and discrete Painlev equations. Duke Math. J., in press.
1228 A. BORODIN AND P. DEIFT
[10] Borodin, A.; Okounkov, A.; Olshanski, G. Asymptotics of Plancherel measures for symmetric
groups. J. Amer. Math. Soc. 13 (2000), no. 3, 481515.
[11] Borodin, A.; Olshanski, G. Point processes and the innite symmetric group. Math. Res. Lett.
5 (1998), no. 6, 799816.
[12] Borodin, A.; Olshanski, G. Point processes and the innite symmetric group. III. Fermion point
processes. Preprint, 1998.
[13] Borodin, A.; Olshanski, G. Distributions on partitions, point processes, and the hypergeometric
kernel. Comm. Math. Phys. 211 (2000), no. 2, 335358.
[14] Borodin, A.; Olshanski, G. Innite random matrices and ergodic measures. Comm. Math. Phys.
223 (2001), no. 1, 87123.
[15] Borodin, A.; Olshanski, G. z-measures on partitions, Robinson-Schensted-Knuth correspondence, and β = 2 random matrix ensembles. Random matrix models and their applications, 71–94. Mathematical Sciences Research Institute Publications, 40. Cambridge University, Cambridge, 2001.
[16] Borodin, A.; Olshanski, G. Harmonic analysis on the infinite-dimensional unitary group.
Preprint, 2001.
[17] Boyer, R. P. Infinite traces of AF-algebras and characters of U(∞). J. Operator Theory 9 (1983), no. 2, 205–236.
[18] Costin, O.; Costin, R. D. Special solutions of PVI. In preparation.
[19] Daley, D. J.; Vere-Jones, D. An introduction to the theory of point processes. Springer Series in
Statistics. Springer, New York, 1988.
[20] Deift, P. Integrable operators. Differential operators and spectral theory, 69–84. American
Mathematical Society Translations, Series 2, 189. American Mathematical Society, Providence,
R.I., 1999.
[21] Deift, P. A.; Its, A. R.; Zhou, X. Long-time asymptotics for integrable nonlinear wave equations.
Important developments in soliton theory, 181–204. Springer Series in Nonlinear Dynamics.
Springer, Berlin, 1993.
[22] Deift, P. A.; Its, A. R.; Zhou, X. A Riemann-Hilbert approach to asymptotic problems arising in
the theory of random matrix models, and also in the theory of integrable statistical mechanics.
Ann. of Math. (2) 146 (1997), no. 1, 149–235.
[23] Edrei, A. On the generating function of a doubly-infinite, totally positive sequence. Trans. Amer. Math. Soc. 74 (1953), no. 3, 367–383.
[24] Erdélyi, A.; Magnus, W.; Oberhettinger, F.; Tricomi, F. G. Higher transcendental functions. Vols. I, II. McGraw-Hill, New York–Toronto–London, 1953.
[25] Forrester, P. J. The spectrum edge of random matrix ensembles. Nuclear Phys. B 402 (1993),
no. 3, 709–728.
[26] Forrester, P. J.; Witte, N. S. Application of the τ-function theory of Painlevé equations to random matrices: PIV, PII and the GUE. Comm. Math. Phys. 219 (2001), no. 2, 357–398.
[27] Haine, L.; Semengue, J.-P. The Jacobi polynomial ensemble and the Painlevé VI equation. J. Math. Phys. 40 (1999), no. 4, 2117–2134.
[28] Harnad, J.; Its, A. R. Integrable Fredholm operators and dual isomonodromic deformations,
Comm. Math. Phys. 226 (2002), 497–530.
[29] Its, A. R. A Riemann-Hilbert approach to the distribution functions of random matrix theory.
Lectures in Canterbury, May 2000.
[30] Its, A. R.; Izergin, A. G.; Korepin, V. E.; Slavnov, N. A. Differential equations for quantum
correlation functions. Internat. J. Modern Phys. B 4 (1990), no. 5, 1003–1037.
[31] Its, A. R.; Novokshenov, V. Y. The isomonodromic deformation method in the theory of Painlevé
equations. Lecture Notes in Mathematics, 1191. Springer, Berlin, 1986.
[32] Jimbo, M.; Miwa, T. Monodromy preserving deformation of linear ordinary differential equations with rational coefficients. II. Phys. D 2 (1981), no. 3, 407–448.
[33] Jimbo, M.; Miwa, T.; Môri, Y.; Sato, M. Density matrix of an impenetrable Bose gas and the fifth Painlevé transcendent. Phys. D 1 (1980), no. 1, 80–158.
[34] Jimbo, M.; Miwa, T.; Ueno, K. Monodromy preserving deformation of linear ordinary differential equations with rational coefficients. I. General theory and τ-function. Phys. D 2 (1981), no. 2, 306–352.
[35] Johansson, K. Discrete orthogonal polynomial ensembles and the Plancherel measure. Ann. of
Math. (2) 153 (2001), no. 1, 259–296.
[36] Kapaev, A. A.; Hubert, E. A note on the Lax pairs for Painlevé equations. J. Phys. A 32 (1999), no. 46, 8145–8156.
[37] Korepin, V. E.; Bogoliubov, N. M.; Izergin, A. G. Quantum inverse scattering method and
correlation functions. Cambridge Monographs on Mathematical Physics. Cambridge University
Press, Cambridge, 1993.
[38] Macdonald, I. G. Symmetric functions and Hall polynomials. Second edition. Oxford Mathe-
matical Monographs. Oxford Science Publications. Clarendon, Oxford University, New York,
1995.
[39] Malgrange, B. Sur les déformations isomonodromiques. I. Singularités régulières. Mathematics and physics (Paris, 1979/1982), 401–426. Progress in Mathematics, 37. Birkhäuser, Boston,
1983.
[40] Mahoux, G. Introduction to the theory of isomonodromic deformations of linear ordinary differential equations with rational coefficients. The Painlevé property, 35–76. CRM Series in
Mathematical Physics. Springer, New York, 1999.
[41] Mehta, M. L. Random matrices. Second edition. Academic, Boston, 1991.
[42] Mehta, M. L. A nonlinear differential equation and a Fredholm determinant. J. Physique I 2
(1992), no. 9, 1721–1729.
[43] Miwa, T. Painlevé property of monodromy preserving deformation equations and the analyticity of τ-functions. Publ. Res. Inst. Math. Sci. 17 (1981), no. 2, 703–721.
[44] Okamoto, K. Polynomial Hamiltonians associated with Painlevé equations. I. Proc. Japan Acad. Ser. A Math. Sci. 56 (1980), no. 6, 264–268.
[45] Okounkov, A. Random matrices and random permutations. Internat. Math. Res. Notices 2000,
no. 20, 1043–1095.
[46] Okounkov, A.; Olshanski, G. Asymptotics of Jack polynomials as the number of variables goes
to infinity. Internat. Math. Res. Notices 1998, no. 13, 641–682.
[47] Ol'shanski, G. I. Unitary representations of infinite-dimensional pairs (G, K) and the formalism of R. Howe. Representation of Lie groups and related topics, 269–463. Advanced Studies in Contemporary Mathematics, 7. Gordon and Breach, New York, 1990.
[48] Olshanski, G. Point processes and the infinite symmetric group. I. The general formalism and
the density function. Preprint, 1998.
[49] Olshanski, G. Point processes and the infinite symmetric group. V. Analysis of the matrix Whittaker kernel. Preprint, 1998.
[50] Olshanski, G. An introduction to harmonic analysis on the infinite-dimensional unitary group.
Preprint, 2001.
[51] Palmer, J. Deformation analysis of matrix models. Phys. D 78 (1994), no. 3-4, 166–185.
[52] Reed, M.; Simon, B. Methods of modern mathematical physics. III. Scattering theory. Academic [Harcourt Brace Jovanovich], New York–London, 1979.
[53] Sato, M.; Miwa, T.; Jimbo, M. Holonomic quantum fields. II. The Riemann-Hilbert problem. Publ. Res. Inst. Math. Sci. 15 (1979), no. 1, 201–278.
[54] Soshnikov, A. Determinantal random point fields. Uspekhi Mat. Nauk 55 (2000), no. 5(335), 107–160; translation in Russian Math. Surveys 55 (2000), no. 5, 923–975.
[55] Szegő, G. Orthogonal polynomials. Fourth edition. American Mathematical Society Colloquium Publications, 23. American Mathematical Society, Providence, R.I., 1975.
[56] Thoma, E. Characters of infinite groups. Operator algebras and group representations, Vol. II (Neptun, 1980), 211–216. Monographs and Studies in Mathematics, 18. Pitman, Boston, 1984.
[57] Tracy, C. A. Whittaker kernel and the fifth Painlevé transcendent. Unpublished letter to
A. Borodin and G. Olshanski, April 29, 1998.
[58] Tracy, C. A.; Widom, H. Introduction to random matrices. Geometric and quantum aspects
of integrable systems (Scheveningen, 1992), 103–130. Lecture Notes in Physics, 424. Springer,
Berlin, 1993.
[59] Tracy, C. A.; Widom, H. Level-spacing distributions and the Airy kernel. Comm. Math. Phys.
159 (1994), no. 1, 151–174.
[60] Tracy, C. A.; Widom, H. Level spacing distributions and the Bessel kernel. Comm. Math. Phys.
161 (1994), no. 2, 289–309.
[61] Tracy, C. A.; Widom, H. Fredholm determinants, differential equations and matrix models.
Comm. Math. Phys. 163 (1994), no. 1, 33–72.
[62] Vershik, A. M.; Kerov, S. V. Characters and factor-representations of the infinite unitary group. Dokl. Akad. Nauk SSSR 267 (1982), no. 2, 272–276; translation in Soviet Math. Dokl. 26 (1982), 570–574.
[63] Voiculescu, D. Représentations factorielles de type II₁ de U(∞). J. Math. Pures Appl. (9) 55 (1976), no. 1, 1–20.
[64] Witte, N. S.; Forrester, P. J. Gap probabilities in the finite and scaled Cauchy random matrix ensembles. Nonlinearity 13 (2000), no. 6, 1965–1986.
[65] Witte, N. S.; Forrester, P. J.; Cosgrove, Christopher M. Gap probabilities for edge intervals in
finite Gaussian and Jacobi unitary matrix ensembles. Nonlinearity 13 (2000), no. 5, 1439–1464.
[66] Želobenko, D. P. Compact Lie groups and their representations. Nauka, Moscow, 1970; translation in Translations of Mathematical Monographs, 40. American Mathematical Society, Providence, R.I., 1973.
ALEXEI BORODIN
Institute for Advanced Study
School of Mathematics
Einstein Drive
Princeton, NJ 08540
E-mail: borodine@math.upenn.edu

PERCY DEIFT
Courant Institute
251 Mercer Street
New York, NY 10012-1185
E-mail: deift@cims.nyu.edu
Received November 2001.