LIMITED FEEDBACK
18307
Abstract
The recent increase in demand for high data-rate wireless applications
has resulted in a need for wireless communication systems with high
spectral efficiencies. One of the most promising ways to achieve such
high spectral efficiencies is a transmission technique called spatial
multiplexing. Over the recent years, spatial multiplexing has been
extensively studied for single-user wireless systems as well as for multiuser wireless networks such as broadcast networks and interference
networks. In particular, it has been shown that achievability of full
spatial multiplexing gain in both broadcast and interference networks
depends critically on the channel knowledge available at the sources.
Since sources often obtain channel knowledge through limited capacity
feedback links, it is important to develop transmission schemes that
can achieve full spatial multiplexing gain in broadcast and interference networks with limited feedback. In this thesis, we consider three
important classes of interference networks and a class of broadcast
networks, and propose, for each class, a (limited capacity) feedback
and transmission scheme that achieves full spatial multiplexing gain.
We first consider a large interference network that models the
second hop of the multiuser relaying protocol proposed by Bölcskei
et al. (2006). The results in Bölcskei et al. (2006) imply that full
spatial multiplexing gain is achievable in this network, provided that
each source has perfect knowledge of its channel to its corresponding
destination. We show that full spatial multiplexing gain is achievable
even if the perfect channel knowledge assumption is relaxed to having
an error-free 1-bit broadcast feedback link from each destination
Kurzfassung

The recent rise in demand for wireless devices with high data rates has led to a need for wireless communication systems with high spectral efficiency. Spatial multiplexing [...] for each of these classes a (limited-capacity) feedback and transmission scheme that achieves full spatial multiplexing gain.

First, we consider a large interference network that constitutes the second hop of the multiuser relaying protocol proposed by Bölcskei et al. (2006). The results in Bölcskei et al. (2006) imply that full spatial multiplexing gain is achievable in this network, provided that each source has perfect knowledge of its channel to the corresponding destination. We show that full spatial multiplexing gain is achievable even if the assumption of perfect channel knowledge is relaxed to having an error-free 1-bit feedback channel from each destination to its corresponding sources. In particular, we present an iterative (multiuser) beamforming algorithm that is based on feedback information. We show that spatial multiplexing based on the beamforming weights produced by this algorithm achieves full spatial multiplexing gain, provided that the duration of the algorithm's training period grows linearly with the number of sources in the network. Moreover, we show that the iterative algorithm is asymptotically optimal in the sense that full spatial multiplexing gain cannot be achieved with a training period that grows sublinearly in the number of sources.

Second, we consider a network consisting of M single-antenna source/destination pairs, in which all channels are frequency-selective with impulse responses of length L and each source has an average transmit power of P/M. Recently published results by Cadambe and Jafar (2008) and Grokop et al. (2009) imply that full spatial multiplexing gain is achievable in this network through a novel transmission scheme known as interference alignment. For this to be possible, every node (that is, every source and every destination) must know all channels in the network perfectly. We show that full spatial multiplexing gain is achievable even if the assumption of perfect channel knowledge is relaxed [...] equipped. Under the assumption that all channels in this network are frequency-selective with impulse responses of length L,
Acknowledgments
As far as learning is concerned, the last four years constitute the
most productive period of my life so far. The lessons in patience,
self-discipline, attention to detail, and professional ethics, and the
confidence to question conventional wisdom that I have developed
over these years form an integral part of my personality today. And
no other person deserves more credit for this learning than Prof. Dr.
Helmut Bölcskei. Without his guidance and support, neither this
learning nor this dissertation would have been possible. In particular,
I can never forget his painstakingly going through each and every line
of our Allerton paper and converting it from a mathematical mess
that only I understood to a piece of research that any person with
suitable background can understand. I am also extremely thankful to
him for his numerous teachings, including his belief that it is better to
have one great contribution than to have ten mediocre contributions.
An old saying in my home country is that the best way to thank
a teacher is to follow his teachings. I therefore hope to follow Prof.
Bölcskei's teachings for the rest of my life.
I would also like to thank the members of the communication
theory group, whose support was critical to the accomplishment of
this thesis. In particular, I would like to thank Dr. Ulrich Schuster
and Dr. Giuseppe Durisi for teaching me the importance of being
precise in my formulations. I would like to thank Cemal Akcaba for his
friendship and support as a colleague throughout the last four years. I
am grateful to Veniamin Morgenshtern for all the diverse discussions
we had during our dinners at Mensa, on topics ranging from biology
Jatin Thukral
April 29, 2009
Contents

Appendices
C. Notation
    C.1. Miscellaneous
    C.2. Linear Algebra
    C.3. Probability and Statistics
    C.4. Information Theory
D. Acronyms
References
Curriculum Vitae
CHAPTER 1

1.1. INTERFERENCE NETWORKS
1.2. BROADCAST NETWORKS
1.3. OUTLINE OF THE THESIS

CHAPTER 2

1-BIT FEEDBACK
both its channel from its corresponding source and its channel to its
corresponding destination, perfectly. While a relay can easily acquire
close-to-perfect channel knowledge about its channel from the source,
it requires an infinite capacity feedback link from its destination to
know its channel to the destination perfectly. A natural question to
ask therefore is how the sum capacity scaling behavior changes if the
capacity of the feedback links from the destinations to the relays is
limited. To investigate the effect of this limited feedback on the sum
capacity scaling law, we focus in this chapter only on the second hop
of the protocol, that is, on the communication between the M K relays
and the M destinations. Specifically, we assume that the first hop
transmissions are error-free, so that the second hop is itself equivalent
to a large interference network with M K sources and M destinations,
as shown in Fig. 2.1. The details of this network are as follows.
The MK non-cooperating sources communicate with the M non-cooperating destinations over slow-fading frequency-flat channels.
Each source and each destination is equipped with a single antenna.
The sources are divided into M mutually exclusive groups $G_1,\dots,G_M$
of K sources each so that every group Gi is dedicated to transmit a
common message to its unique destination Di . All transmissions occur
concurrently and in the same frequency band. The K non-cooperating
single-antenna sources in group $G_i$ are denoted by $S_i^1,\dots,S_i^K$. The
results of Bölcskei et al. (2006) imply that if there exist infinite
capacity feedback links from each destination to its corresponding
group of sources, then the sum capacity of this network scales as
M log(1 + cK), with c a constant independent of K, for large values
of K. We assume, in contrast to the perfect feedback assumption
of Bölcskei et al. (2006), that there exists an error-free dedicated
1-bit broadcast feedback channel from each destination Di to its
corresponding group of sources in Gi . Our main contribution in this
chapter is to show that even with this relaxed assumption, the sum
capacity of the network scales with K as M log(1 + cK), that is, full
spatial multiplexing gain of M and a per-stream (distributed) array
Recall that the relays are the sources for the second hop transmission.
[Fig. 2.1: The MK sources, divided into groups $G_1,\dots,G_M$ of K sources each ($S_i^1,\dots,S_i^K$ in group $G_i$), communicating with the destinations $D_1,\dots,D_M$.]
2.1.
$$y_i[n] = \sum_{k=1}^{K} h_{i,i}^k\,\beta_i^k[l]\,x_i[n] + \underbrace{\sum_{\substack{r=1\\ r\neq i}}^{M}\sum_{k=1}^{K} h_{i,r}^k\,\beta_r^k[l]\,x_r[n]}_{\text{interference}} + w_i[n],\qquad i=1,\dots,M, \tag{2.1}$$

where $l = \lfloor n/T_f\rfloor$, $y_i[n]\in\mathbb{C}$ denotes the symbol received at $D_i$ in the $n$-th time slot, $h_{i,r}^k\in\mathbb{C}$ stands for the channel coefficient between $S_r^k$ and $D_i$, and $w_i[n]$ denotes the $\mathcal{CN}(0,2N_o)$ i.i.d. noise sequence at $D_i$. The channel coefficients $h_{i,r}^k$, for all $i,r,k$, are assumed i.i.d. $\mathcal{CN}(0,2)$. The signals $x_i[n]$ obey the average power constraint $\mathbb{E}|x_i[n]|^2 \le P/K$, $i=1,\dots,M$.
$$\sum_{k=1}^{K} \Re\{h_{i,i}^k\,\beta_i^k\} = \sum_{k=1}^{K} |\Re\{h_{i,i}^k\}|,\qquad \text{for group } G_i,$$
2.2.

$$y_i[n] = \sum_{k=1}^{K} h_{i,i}^k\,\beta_i^k\,x_i[n] + w_i[n]. \tag{2.2}$$

If the sources are allowed to cooperate, the destination $D_i$ can estimate each $h_{i,i}^k$ (by simply silencing all $S_i^l$, $l\neq k$, in sequence) and feed back its phase to $S_i^k$. Distributed beamforming would then be achieved in a straightforward manner.
$$\max_{\beta_i^k,\,k}\ \log\!\left(1+\frac{\big(\sum_{k=1}^{K} h_{i,i}^k\,\beta_i^k\big)^2\,\mathbb{E}|x_i[n]|^2}{2N_o}\right) \tag{2.4}$$
$$\le \log\!\left(1+\frac{\big(\sum_{k=1}^{K}|h_{i,i}^k|\big)^2 P}{2KN_o}\right) \qquad (\text{since } |\beta_i^k|\le 1,\ \forall i,k) \tag{2.5}$$
$$= \log\!\left(1+\frac{\big(K\,\mathbb{E}|h_{i,i}^k|\big)^2 P}{2KN_o}\right) \qquad \Big(\text{since } \textstyle\sum_{k=1}^{K}|h_{i,i}^k| \to K\,\mathbb{E}|h_{i,i}^k| \text{ for } K\to\infty\Big) \tag{2.6}$$
$$= \log\!\left(1+\frac{K\,\big(\mathbb{E}|h_{i,i}^k|\big)^2 P}{2N_o}\right) \tag{2.7}$$
$$= \log(1+cK), \tag{2.8}$$

where

$$c = \frac{\big(\mathbb{E}|h_{i,i}^k|\big)^2 P}{2N_o}, \tag{2.9}$$

so that

$$\sum_{i=1}^{M} R_i \le M\log(1+cK). \tag{2.10}$$
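The law-of-large-numbers step in (2.6) can be checked numerically. The sketch below is our own illustration, assuming real-valued $\mathcal{N}(0,1)$ coefficients (cf. the real-value assumption below), for which $\mathbb{E}|h^k| = \sqrt{2/\pi}$; the parameter values are arbitrary.

```python
import numpy as np

# Numerical check of the law-of-large-numbers step (2.6): the upper bound
# (sum_k |h^k|)^2 * P / (2*K*No) in (2.5) behaves as c*K with
# c = (E|h^k|)^2 * P / (2*No) from (2.9), where E|h^k| = sqrt(2/pi) for
# real-valued N(0,1) coefficients (an assumption of this sketch).
rng = np.random.default_rng(0)
K, P, No = 100_000, 1.0, 0.5
h = rng.standard_normal(K)

bound = (np.abs(h).sum() ** 2) * P / (2 * K * No)   # upper bound in (2.5)
c = (2 / np.pi) * P / (2 * No)                      # (E|h^k|)^2 * P / (2*No), (2.9)
print(bound / (c * K))   # close to 1 for large K
```

The ratio concentrates around one, confirming that the beamforming upper bound on each rate grows as $\log(1+cK)$.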
Real-value assumption
For ease of exposition, we shall first establish all our results in Sections 2.3–2.6 under the simplifying assumption that all the channel coefficients as well as all the signals are real-valued, that is, we will assume $x_i[n]\in\mathbb{R}$, $y_i[n]\in\mathbb{R}$, $h_{i,r}^k\in\mathbb{R}$ (so that $\Re\{h_{i,r}^k\} = h_{i,r}^k$) and $w_i[n]\sim\mathcal{N}(0,N_o)$. We will then show in Section 2.7 that all our results can be extended to complex-valued channel coefficients and signals in a (relatively) straightforward fashion.
2.3.

We shall next describe the algorithm carried out during the training phase. The overall training phase is assumed to consist of $T_{tr} = n_o MK$ frames (each of which contains $T_f$ time slots) divided into M blocks of $n_o K$ frames each. The role of the parameter $n_o$ will become clear later. During each of these M blocks, precisely one of the groups $G_i$ follows the three-step iterative distributed beamforming algorithm, described below, while all the other groups of sources $G_r$, $r\neq i$, remain silent. At the end of the training phase of $n_o MK$ frames, each of the groups $G_i$ is in (close-to) beamforming configuration with respect to (w.r.t.) its assigned destination. The order in which the groups follow the three-step procedure below can be decided offline and communicated to all the nodes in the network. Without loss of generality (w.l.o.g.), we assume that the group $G_i$ is being processed during the i-th block.
Step 1. Initialization of the received signal level: This step pertains to the zeroth frame in the block. Each of the sources $S^k$ initializes its beamforming weight according to $\beta^k[0]=1$, initializes an auxiliary beamforming weight as $\tilde\beta^k[0]=\beta^k[0]$, and starts transmitting the pilot symbol $\sqrt{P/K}$. The corresponding received signal at destination $D$ is given by

$$y[n] = \sqrt{\frac{P}{K}}\sum_{k=1}^{K} h^k\,\beta^k[0] + w[n] = \sqrt{\frac{P}{K}}\sum_{k=1}^{K} h^k + w[n],\qquad n\in F_0,$$

and the corresponding received signal level is estimated according to

$$L_{\max} = \frac{1}{T_f}\sum_{n\in F_0} y[n] = \sqrt{\frac{P}{K}}\sum_{k=1}^{K} h^k.$$
In each subsequent frame $l$ of the block, each source $S^k$ updates its auxiliary beamforming weight according to

$$\tilde\beta^k[l] = \begin{cases} \beta^k[l-1], & \text{w.p. } 1-\dfrac{1}{K},\\[4pt] -\beta^k[l-1], & \text{w.p. } \dfrac{1}{K}. \end{cases} \tag{2.11}$$

Each of the sources $S^k$ then transmits the pilot symbol $\sqrt{P/K}$ throughout the frame, using the beamforming weight $\tilde\beta^k[l]$, that is, $S^k$ transmits the signal $\tilde\beta^k[l]\sqrt{P/K}$. At the end of the frame under consideration, $D$ estimates the corresponding received signal level, as in the initialization step, according to

$$L_{rx} = \frac{1}{T_f}\sum_{n\in F_l} y[n] = \sqrt{\frac{P}{K}}\sum_{k=1}^{K} h^k\,\tilde\beta^k[l] \tag{2.12}$$

and, through the 1-bit broadcast feedback channel, informs all the sources in $G$ whether $L_{rx} > L_{\max}$ or not. Based on the received feedback, the sources $S^k$, $k=1,\dots,K$, update their beamforming weights as follows:

$$\beta^k[l] = \begin{cases} \tilde\beta^k[l], & \text{if } L_{rx} > L_{\max},\\ \beta^k[l-1], & \text{otherwise.} \end{cases}$$
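The three-step procedure above can be sketched in simulation. The following is our own simplified rendition for a single group: real-valued channels, noise-free received-level estimates, and illustrative values of K and $n_o$ (all assumptions of this sketch, not parameters from the analysis).

```python
import numpy as np

# Sketch of the three-step 1-bit-feedback beamforming algorithm for one
# group (simplifying assumptions: real-valued channels, noise-free level
# estimates; K and n_o are illustrative values).
rng = np.random.default_rng(1)
K = 200                      # sources in the group
n_o = 8                      # training period length is n_o * K frames
h = rng.standard_normal(K)   # real-valued channel coefficients

beta = np.ones(K)            # Step 1: beta^k[0] = 1 for all k
L_max = float(h @ beta)      # initial received pilot level

for _ in range(n_o * K):
    # Step 2: each source independently flips its auxiliary weight w.p. 1/K,
    # cf. (2.11), and the pilot is retransmitted with the trial weights.
    flip = rng.random(K) < 1.0 / K
    beta_trial = np.where(flip, -beta, beta)
    L_rx = float(h @ beta_trial)
    # Step 3: the destination broadcasts one bit: did the level improve?
    if L_rx > L_max:
        beta, L_max = beta_trial, L_rx   # keep the trial weights

# Close-to-beamforming configuration: sum_k h^k beta^k approaches sum_k |h^k|.
print(L_max / np.abs(h).sum())   # close to 1 after the training period
```

With the training period scaling linearly in K (here $n_oK$ iterations), the achieved level approaches $\sum_k|h^k|$, in line with the convergence analysis of Section 2.4.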
2.4. CONVERGENCE OF THE ITERATIVE ALGORITHM

$$\sum_{k=1}^{K} h^k\,\beta^k = \sum_{k=1}^{K} h^k\,\mathrm{sign}(h^k) = \sum_{k=1}^{K} |h^k| \tag{2.17}$$
$$P\!\left(\sum_{S^k\in G} h^k\,\beta^k[t] > z_o\right) \ge P\!\left((1-2\epsilon_o)\sum_{S^k\in G}|h^k| > z_o\right),\qquad t\ge n_oK,\quad z_o\in(-\infty,\infty), \tag{2.18}$$

$$\mathbb{E}\!\left[\sum_{S^k\in G} h^k\,\beta^k[t]\right] \ge (1-2\epsilon_o)\,\mathbb{E}\!\left[\sum_{S^k\in G}|h^k|\right],\qquad t\ge n_oK, \tag{2.19}$$

where $A[t]$ denotes the set of sources $S^k\in G$, $k=1,\dots,K$, whose weights are aligned after the t-th iteration, that is, $\beta^k[t] = \mathrm{sign}(h^k)$, and

$$P\!\left(\sum_{S^k\in A[t]}|h^k| - \sum_{S^k\notin A[t]}|h^k| > z_o\right) \ge P\!\left(\sum_{k=1}^{(1-\lambda)K}|a^k| - \sum_{k=(1-\lambda)K+1}^{K}|a^k| > z_o\right). \tag{2.22}$$
Proof: Let $\lambda[t] \in \{0, 1/K, 2/K, \dots, 1\}$ denote a minimum convergence-in-probability level that is guaranteed at the end of the t-th iteration, assuming that the above training algorithm is followed. We trivially note that at the end of the zeroth iteration,

$$P\!\left(\sum_{S^k\in G} h^k\,\beta^k[0] > z_o\right) = P\!\left(\sum_{S^k\in G} h^k > z_o\right) \ge P\!\left(-\sum_{k=1}^{K}|a^k| > z_o\right) \qquad (\text{since } a^k \sim h^k,\ \forall k). \tag{2.23}$$

we have

$$P\!\left(\sum_{S^k\in G} h^k\,\beta^k[t] > z_o\right) \ge P\!\left(\sum_{S^k\in G} h^k\,\beta^k[t-1] > z_o\right)$$
$$\ge P\!\left(\sum_{k=1}^{(1-\lambda[t-1])K}|a^k| - \sum_{k=(1-\lambda[t-1])K+1}^{K}|a^k| > z_o\right),\qquad t = 1,\dots,n_oK, \tag{2.25}$$

so that it remains to be shown that

$$\sum_{t=1}^{n_oK}\Delta[t] \ge (1-\epsilon_o). \tag{2.27}$$
where (2.33) follows because $0 \le \lambda[t-1] < \epsilon_o$ would imply that at least $(1-\epsilon_o)K$ sources are aligned, and hence, convergence in probability to the $\epsilon_o$-level is already achieved.

Using the fact that (from (2.33))

$$\tilde P\!\left(\lambda[t] = \frac{1}{K}\right) \ge \frac{\epsilon_o}{e},\qquad \forall t, \tag{2.34}$$

we define

$$\Delta[t] \triangleq \begin{cases} \dfrac{1}{K}, & \text{w.p. } \dfrac{\epsilon_o}{e},\\[6pt] 0, & \text{w.p. } 1-\dfrac{\epsilon_o}{e}, \end{cases}$$

so that $\Delta[t]$, $t=1,\dots,n_oK$, are i.i.d. Bernoulli distributed random variables (Papoulis and Pillai, 2002, Sec. 4.3) with probabilities

$$P\!\left(\Delta[t]=\frac{1}{K}\right) = \frac{\epsilon_o}{e},\qquad P(\Delta[t]=0) = 1-\frac{\epsilon_o}{e},\qquad \forall t, \tag{2.35}$$

$$\lambda[n_oK] \ge \sum_{t=1}^{n_oK}\Delta[t], \tag{2.36}$$

and hence,

$$P\big(\lambda[n_oK] \ge (1-\epsilon_o)\big) \ge P\!\left(\sum_{t=1}^{n_oK}\Delta[t] \ge (1-\epsilon_o)\right).$$

Finally, noting that $\sum_{t=1}^{n_oK}\Delta[t]\,K$ is equal to the number of times $\Delta[t]$ takes on the value $1/K$ out of a total of $n_oK$ Bernoulli trials and therefore follows a binomial distribution (Papoulis and Pillai, 2002, Sec. 4.3) with mean

$$\mu = n_oK\,\frac{\epsilon_o}{e} \tag{2.37}$$

and standard deviation

$$\sigma = \sqrt{n_oK\,\frac{\epsilon_o}{e}\left(1-\frac{\epsilon_o}{e}\right)}, \tag{2.38}$$

and assuming that $n_o > (1-\epsilon_o)e/\epsilon_o$, the probability $P\big(\sum_{t=1}^{n_oK}\Delta[t] \ge 1-\epsilon_o\big)$ can be lower-bounded according to

$$P\!\left(\sum_{t=1}^{n_oK}\Delta[t] \ge 1-\epsilon_o\right) = 1 - P\!\left(\sum_{t=1}^{n_oK}\Delta[t]\,K < (1-\epsilon_o)K\right) \tag{2.41}$$
$$= 1 - P\!\left(\frac{n_oK\epsilon_o}{e} - \sum_{t=1}^{n_oK}\Delta[t]\,K > \frac{n_oK\epsilon_o}{e} - (1-\epsilon_o)K\right) \tag{2.42}$$
$$\ge 1 - P\!\left(\left|\sum_{t=1}^{n_oK}\Delta[t]\,K - \frac{n_oK\epsilon_o}{e}\right| > \frac{n_oK\epsilon_o}{e} - (1-\epsilon_o)K\right) \tag{2.43}$$
$$\ge 1 - \frac{n_oK\,\frac{\epsilon_o}{e}\big(1-\frac{\epsilon_o}{e}\big)}{\big(\frac{n_oK\epsilon_o}{e} - (1-\epsilon_o)K\big)^2} \to 1 \qquad \text{as } K\to\infty, \tag{2.44}$$

since, with

$$n_o > \frac{(1-\epsilon_o)e}{\epsilon_o}, \tag{2.47}$$

the denominator in (2.44) grows as $K^2$ while the numerator grows only as $K$. We therefore have

$$P\!\left(\sum_{t=1}^{n_oK}\Delta[t] \ge 1-\epsilon_o\right) \to 1\qquad \text{for } K\to\infty. \tag{2.48}$$
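The Chebyshev step above can be illustrated numerically; the sketch below uses our own choices for $\epsilon_o$ and picks the smallest integer $n_o$ satisfying (2.47).

```python
import math

# Numeric illustration of the Chebyshev step (2.41)-(2.48): the number of
# successes among n_o*K Bernoulli(eps_o/e) trials has mean mu = n_o*K*eps_o/e,
# which exceeds (1 - eps_o)*K whenever n_o > (1 - eps_o)*e/eps_o (2.47), and
# the Chebyshev lower bound on P(sum >= (1 - eps_o)) then tends to 1 with K.
eps_o = 0.1                                          # our illustrative choice
n_o = math.ceil((1 - eps_o) * math.e / eps_o) + 1    # satisfies (2.47)

for K in (100, 1_000, 10_000):
    mu = n_o * K * eps_o / math.e                            # mean, (2.37)
    var = n_o * K * (eps_o / math.e) * (1 - eps_o / math.e)  # variance, cf. (2.38)
    gap = mu - (1 - eps_o) * K                               # positive by (2.47)
    lower = 1 - var / gap**2                                 # Chebyshev bound, (2.44)
    print(K, lower)   # approaches 1 as K grows
```

The bound is vacuous for small K but tends to one, matching (2.48): a training period linear in K suffices.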
$$\mathbb{E}_{h^k}\,\mathbb{E}_{\{\beta^k[l]\}_{l=0}^{t}|h^k}\!\left[\sum_{S^k\in A[t]}|h^k| - \sum_{S^k\notin A[t]}|h^k|\right] \ge K(1-2\epsilon_o)\,\mathbb{E}|h^k|. \tag{2.50}$$

$$\sum_{t=1}^{n_oK-1}\mathbb{E}_{h^k}\,\mathbb{E}_{\{\beta^k[l]\}_{l=0}^{t}|h^k}\!\left[S^{new}[t]\right] + \mathbb{E}_{h^k}\!\left[\sum_{S^k\in G} h^k\right] \tag{2.52}$$
$$\ge \sum_{t=1}^{n_oK-1}\mathbb{E}_{h^k}\,\mathbb{E}_{\{\beta^k[l]\}_{l=0}^{t}|h^k}\!\left[S^{new}[t]\right] \tag{2.53}$$
$$\ge P(q_a[t]=0)\sum_{s=0}^{K} P(\bar q_a[t]=s)\;\mathbb{E}_{h^k}\,\mathbb{E}_{\{\beta^k[l]\}_{l=0}^{t}|h^k}\!\left[S^{new}[t]\,\big|\,q_a[t]=0,\ \bar q_a[t]=s\right]. \tag{2.54}$$

We next show that the expected value inside the summation in (2.54) can be lower-bounded by $c_o s$, where $c_o$ is a constant independent of K. To this end, we start by noting that $q_a[t]=0$, $\bar q_a[t]=s$ corresponds to the case where precisely s sources move from $\bar A[t-1]$ to $A[t]$ and none of the sources moves from $A[t-1]$ to $\bar A[t]$. We denote the set of these s sources as R, and note that since the s sources in R are chosen from the K sources in $\bar A[t-1]$, there are precisely $\binom{K}{s}$ possible choices for R, with the corresponding sets denoted as $R_1, R_2, \dots, R_{\binom{K}{s}}$.
which implies

$$\mathbb{E}_{\{\beta^k[l]\}_{l=0}^{t}|h^k}\!\left[S^{new}[t]\,\big|\,q_a[t]=0,\ \bar q_a[t]=s\right] = \frac{1}{\binom{K}{s}}\sum_{R_i\in\{R_1,\dots,R_{\binom{K}{s}}\}}\mathbb{E}_{\{\beta^k[l]\}_{l=0}^{t}|h^k}\!\left[S^{new}[t]\,\big|\,R_i\right] \overset{(a)}{=} \frac{2s}{K}\sum_{S^k\in \bar A[t-1]}|h^k|,$$

where Step (a) is a result of the fact that each source in $\bar A[t-1]$ is present in precisely $\binom{K-1}{s-1}$ of the sets $R_i$. We therefore get

$$\mathbb{E}_{h^k}\,\mathbb{E}_{\{\beta^k[l]\}_{l=0}^{t}|h^k}\!\left[S^{new}[t]\,\big|\,q_a[t]=0,\ \bar q_a[t]=s\right] = \frac{2s}{K}\,\mathbb{E}_{h^k}\!\left[\sum_{S^k\in\bar A[t-1]}|h^k|\right] \overset{(a)}{=} 2s\,\mathbb{E}_{h^k}\!\big[|h^k|\,\big|\,S^k\in\bar A[t-1]\big] = 2s\int_x x\,f_{|h^k|\,|\,\bar A[t-1]}(x)\,dx, \tag{2.56}$$

where Step (a) follows from the fact that the $h^k$ are identically distributed. Next, we need to show that the integral on the right-hand side (RHS) of (2.56) can be lower-bounded by a constant independent of K. To this end, we start by noting that

$$f_{|h^k|}(x) = P(\bar A[t-1])\,f_{|h^k|\,|\,\bar A[t-1]}(x) + P(A[t-1])\,f_{|h^k|\,|\,A[t-1]}(x).$$
$$P(\bar A[t-1])\,f_{|h^k|\,|\,\bar A[t-1]}(x) \le f_{|h^k|}(x) \quad\Longrightarrow\quad f_{|h^k|\,|\,\bar A[t-1]}(x) \le \frac{f_{|h^k|}(x)}{\epsilon_o}.$$

Consider the class of functions $f(\cdot)$ satisfying

$$f(x) = 0,\ x<0;\qquad f(x)\ge 0,\ x\ge 0;\qquad f(x)\le\frac{f_{|h^k|}(x)}{\epsilon_o};\qquad \int_{\mathbb{R}} f(x)\,dx = 1. \tag{2.57, 2.58}$$

Minimizing $\int_x x f(x)\,dx$ over this class guarantees that this minimum is a lower bound on $\int_x x\,f_{|h^k|\,|\,\bar A[t-1]}(x)\,dx$, as $f_{|h^k|\,|\,\bar A[t-1]}(\cdot)$ is a member of this class of functions. Concretely, we want to determine

$$\min_{f(\cdot)}\int_x x f(x)\,dx \tag{2.59}$$

where the minimization is under the constraints (2.58). This minimization problem can be solved as follows. We start by setting

$$x_o = Q^{-1}\!\left(\frac{1-\epsilon_o}{2}\right) \tag{2.60}$$

where $Q(v) = \int_v^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{u^2}{2}}\,du$, so that, using $f_{|h^k|}(x) = \sqrt{\frac{2}{\pi}}\,e^{-\frac{x^2}{2}}$, $x\ge 0$,
$$\int_0^{x_o}\frac{f_{|h^k|}(x)}{\epsilon_o}\,dx = 1. \tag{2.61}$$

For any $f(\cdot)$ satisfying the constraints (2.58), we then have

$$\int_x x f(x)\,dx \ge \int_0^{x_o} x f(x)\,dx + x_o\int_{x_o}^{\infty} f(x)\,dx \tag{2.62}$$
$$\ge \int_0^{x_o} x f(x)\,dx + x_o\!\left(1-\int_0^{x_o} f(x)\,dx\right) \tag{2.63}$$
$$= \int_0^{x_o} x f(x)\,dx + x_o\!\left(\int_0^{x_o}\frac{f_{|h^k|}(x)}{\epsilon_o}\,dx - \int_0^{x_o} f(x)\,dx\right) \tag{2.64}$$
$$= \int_0^{x_o} x\,\frac{f_{|h^k|}(x)}{\epsilon_o}\,dx + \int_0^{x_o}(x_o-x)\!\left(\frac{f_{|h^k|}(x)}{\epsilon_o} - f(x)\right)dx \tag{2.65}$$
$$\ge \int_0^{x_o} x\,\frac{f_{|h^k|}(x)}{\epsilon_o}\,dx, \tag{2.68}$$

with equality achieved by

$$f^*(x) = \begin{cases}\dfrac{f_{|h^k|}(x)}{\epsilon_o}, & 0\le x\le x_o,\\[6pt] 0, & \text{otherwise}.\end{cases} \tag{2.69}$$

This yields

$$\int_x x f^*(x)\,dx = \frac{1}{\epsilon_o}\sqrt{\frac{2}{\pi}}\left(1-e^{-\frac{\left(Q^{-1}\left(\frac{1-\epsilon_o}{2}\right)\right)^2}{2}}\right) \triangleq \frac{c_o}{2}. \tag{2.70}$$
Substituting (2.70) into (2.56) and the result thereof into (2.54), we obtain

$$\mathbb{E}_{h^k}\,\mathbb{E}_{\{\beta^k[l]\}_{l=0}^{t}|h^k}\!\left[S^{new}[t]\right] \ge c_o\,P(q_a[t]=0)\sum_{s=0}^{K} s\,P(\bar q_a[t]=s) \tag{2.71}$$
$$= c_o\left(1-\frac{1}{K}\right)^{(1-\lambda)K}\sum_{s=1}^{\lambda K} s\binom{\lambda K}{s}\left(\frac{1}{K}\right)^{s}\left(1-\frac{1}{K}\right)^{\lambda K-s} \tag{2.72}$$
$$= c_o\,\lambda\left(1-\frac{1}{K}\right)^{(1-\lambda)K} \qquad \left(\text{since } \sum_{k=1}^{K}k\binom{K}{k}x^{k-1} = K(1+x)^{K-1}\right) \tag{2.76}$$
$$\to c_o\,\lambda\,e^{-(1-\lambda)} \qquad (\text{for large } K) \tag{2.77}$$
$$> c_o\,\epsilon_o\,e^{-(1-\epsilon_o)} \qquad (\text{since } \lambda > \epsilon_o). \tag{2.79}$$

Summing over $t=1,\dots,n_oK-1$ then yields

$$\sum_{t=1}^{n_oK-1}\mathbb{E}_{h^k}\,\mathbb{E}_{\{\beta^k[l]\}_{l=0}^{t}|h^k}\!\left[S^{new}[t]\right] > (n_oK-1)\,\frac{c_o\,\epsilon_o}{e^{1-\epsilon_o}}, \tag{2.80}$$

and choosing

$$n_o \ge \frac{(1-2\epsilon_o)\,e^{1-\epsilon_o}}{c_o\,\epsilon_o}\,\mathbb{E}|h^k|$$

yields

$$\mathbb{E}_{h^k}\,\mathbb{E}_{\{\beta^k[l]\}_{l=0}^{n_oK-1}|h^k}\!\left[\sum_{S^k\in G} h^k\,\beta^k[n_oK-1]\right] > K(1-2\epsilon_o)\,\mathbb{E}|h^k|. \tag{2.81}$$
in Section 2.3. In particular, Figs. 2.2, 2.3, 2.4 and 2.5 show how $\mathbb{E}\big[\sum_{S^k\in G} h^k\,\beta^k[t]\big]$ (averaged over 50 realizations of the channel co-

[Figs. 2.2–2.5: Convergence of the iterative distributed beamforming algorithm. Each plot shows the magnitude of $\mathbb{E}\big[\sum_k h^k\,\beta^k[t]\big]$ approaching $\mathbb{E}\big[\sum_k|h^k|\big]$ as a function of the iteration index $t$ (horizontal axes extending to 1000, 5000, 10000, and 50000 iterations, respectively).]
2.5.

$$y_i[n] = \left(\sum_{S_i^k\in A_i}|h_{i,i}^k| - \sum_{S_i^k\in \bar A_i}|h_{i,i}^k|\right)x_i[n] + \underbrace{\sum_{\substack{r=1\\ r\neq i}}^{M}\sum_{k=1}^{K} h_{i,r}^k\,\beta_r^k\,x_r[n]}_{\text{interference}} + w_i[n],\qquad i=1,\dots,M.$$
The coefficients $h_{i,i}^k$, $S_i^k\in A_i$, and $h_{i,i}^k$, $S_i^k\in\bar A_i$, are distributed according to $\mathcal{N}(0,1)$. We furthermore assume Gaussian codebooks. Under the assumption of the destination $D_i$ having perfect knowledge of its effective channel coefficient $\big(\sum_{S_i^k\in A_i}|h_{i,i}^k| - \sum_{S_i^k\in\bar A_i}|h_{i,i}^k|\big)$ and of the coefficients $\sum_{k=1}^{K} h_{i,r}^k\,\beta_r^k$, for all $r\neq i$, corresponding to the effective interference channels, the outage probability for the link $G_i \to D_i$ is given by

$$P_i^{out}(R_i) = P\!\left(\frac{1}{2}\log(1+\mathrm{SINR}_i) < R_i\right) \tag{2.82}$$
$$= P\!\left(\mathrm{SINR}_i < 2^{2R_i}-1\right) \tag{2.83}$$
$$= P\!\left(\frac{\big(\sum_{A_i}|h_{i,i}^k| - \sum_{\bar A_i}|h_{i,i}^k|\big)^2\,\frac{P}{K}}{\sum_{r\neq i}\big(\sum_k h_{i,r}^k\,\beta_r^k\big)^2\,\frac{P}{K} + N_o} < 2^{2R_i}-1\right) \tag{2.84}$$
$$= P\!\left(\frac{\big(\sum_{A_i}|h_{i,i}^k| - \sum_{\bar A_i}|h_{i,i}^k|\big)^2}{\sum_{r\neq i}\big(\sum_k h_{i,r}^k\,\beta_r^k\big)^2 + \frac{KN_o}{P}} < 2^{2R_i}-1\right). \tag{2.85}$$

To upper-bound the outage probability, we use the (union) bounding techniques introduced in Morgenshtern and Bölcskei (2007). In particular, employing the union bounds provided in Appendix A.1, for any positive $n_1$, $n_2$ and $n_3$ with $n_1 > n_2$, the following upper bound holds:

$$P\!\left(\frac{\big(\sum_{A_i}|h_{i,i}^k| - \sum_{\bar A_i}|h_{i,i}^k|\big)^2}{\sum_{r\neq i}\big(\sum_k h_{i,r}^k\,\beta_r^k\big)^2 + \frac{KN_o}{P}} < \frac{(n_1-n_2)^2}{n_3}\right)$$
$$\le P\!\left(\Big(\sum_{A_i}|h_{i,i}^k| - \sum_{\bar A_i}|h_{i,i}^k|\Big)^2 < (n_1-n_2)^2\right) + P\!\left(\sum_{r\neq i}\Big(\sum_k h_{i,r}^k\,\beta_r^k\Big)^2 + \frac{KN_o}{P} > n_3\right) \tag{2.86}$$
$$\le P\!\left(\sum_{A_i}|h_{i,i}^k| < n_1\right) + P\!\left(\sum_{\bar A_i}|h_{i,i}^k| > n_2\right) + (M-1)\,P\!\left(\left|\sum_k h_{i,r}^k\,\beta_r^k\right| > \sqrt{\frac{n_3-\frac{KN_o}{P}}{M-1}}\right). \tag{2.89}$$

Employing the large-deviations bounds

$$P\!\left(\sum_{k=1}^{S}|h_{i,i}^k| < m\right) \le \left(\frac{em}{S}\sqrt{\frac{2}{\pi}}\right)^{\!S},\qquad
P\!\left(\sum_{k=1}^{S}|h_{i,i}^k| > m\right) \le 2e^{-\frac{m^2}{2S^2}},\qquad
P\!\left(\left|\sum_{k=1}^{S} h_{i,r}^k\,\beta_r^k\right| \ge m\right) \le e^{-\frac{m^2}{2S}},$$

and choosing

$$n_1 = \frac{(1-\epsilon_o)K}{e},\qquad n_2 = \sqrt{2(1+\ln 2)}\left(\epsilon_oK+\sqrt K\right),\qquad n_3 = (M-1)K^{1+\delta} + \frac{KN_o}{P} \tag{2.92}$$

with the constant $\delta > 0$. Note that the condition $n_1 > n_2$ implies that $0 < \epsilon_o \le 0.2419$. With the above choices,

$$\frac{(n_1-n_2)^2}{n_3} \ge \frac{K^{1-\delta}}{M-1}\left(\frac{1-\epsilon_o}{e} - \sqrt{2(1+\ln 2)}\,\epsilon_o\right)^2 \triangleq c_o K^{1-\delta}, \tag{2.93}$$

so that $R_i = \frac{1}{2}\log\big(1+c_oK^{1-\delta}\big)$. Substituting (2.91) and (2.92) into (2.90), in the large-K limit, we finally obtain

$$P_i^{out}\!\left(\frac{1}{2}\log\big(1+c_oK^{1-\delta}\big)\right) \le e^{-(1-\epsilon_o)K} + e^{-\epsilon_oK} + 2(M-1)\,e^{-\frac{K^{2\delta}}{2}} \tag{2.94}$$
$$\to 0 \qquad (\text{as } K\to\infty). \tag{2.95}$$

$$R_i = \frac{1}{2}\log(1+c_iK) \tag{2.96}$$

$$\sum_{i=1}^{M} R_i = \frac{1}{2}\sum_{i=1}^{M}\log(1+c_iK) \tag{2.97}$$
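The scaling behavior behind (2.94)–(2.97) can be illustrated by a small Monte Carlo experiment. The sketch below is our own simplification: real-valued coefficients, a fixed fraction $\epsilon_o$ of each group misaligned, interfering weights modeled as random signs, and arbitrary parameter values.

```python
import numpy as np

# Monte Carlo sketch of the SINR scaling in Section 2.5: with a fraction
# (1 - eps_o) of each group in beamforming configuration, the useful power
# grows like (sum|h|)^2 * P/K ~ K*P while the interference (random-sign
# combining across groups) stays O(P), so SINR grows linearly in K.
# All parameter values and simplifications here are our own choices.
rng = np.random.default_rng(2)
M, P, No, eps_o = 4, 1.0, 0.5, 0.1

def sinr_samples(K, trials=500):
    out = np.empty(trials)
    for t in range(trials):
        h_own = np.abs(rng.standard_normal(K))       # |h| of the own group
        n_mis = int(eps_o * K)
        sig = h_own.sum() - 2 * h_own[:n_mis].sum()  # eps_o*K sources misaligned
        interf = 0.0
        for _ in range(M - 1):
            h_int = rng.standard_normal(K)
            b = rng.choice([-1.0, 1.0], size=K)      # interfering weights look random
            interf += (h_int @ b) ** 2
        out[t] = (sig**2 * P / K) / (interf * P / K + No)
    return out

meds = {K: float(np.median(sinr_samples(K))) for K in (50, 200, 800)}
for K, m in meds.items():
    print(K, m / K)   # roughly constant across K, i.e., SINR grows linearly in K
```

The median SINR divided by K stays roughly constant, consistent with a per-stream rate of the form $\frac12\log(1+c_iK)$.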
2.6.

upper-bounded according to

$$\lim_{K\to\infty} \frac{\sum_i R_i}{\frac{1}{2}\log K} \le \begin{cases} 1, & \text{if } 0 < r < \frac{1}{2}+\frac{1}{2M},\\[6pt] 2Mr - M, & \text{if } \frac{1}{2}+\frac{1}{2M} < r < 1. \end{cases} \tag{2.98}$$
Proof: We start by noting that the outage probability for the link $G_i \to D_i$ is lower-bounded, for any positive $n_1$, $n_2$, according to

$$P_i^{out}(R_i) = P\!\left(\frac{1}{2}\log(1+\mathrm{SINR}_i) < R_i\right) \tag{2.99}$$
$$= P\!\left(\mathrm{SINR}_i < 2^{2R_i}-1\right) \tag{2.100}$$
$$= P\!\left(\frac{\big(\sum_k h_{i,i}^k\,\beta_i^k\big)^2\,\frac{P}{K}}{\sum_{r\neq i}\big(\sum_k h_{i,r}^k\,\beta_r^k\big)^2\,\frac{P}{K} + N_o} < 2^{2R_i}-1\right) \tag{2.101}$$
$$= P\!\left(\frac{\big(\sum_k h_{i,i}^k\,\beta_i^k\big)^2}{\sum_{r\neq i}\big(\sum_k h_{i,r}^k\,\beta_r^k\big)^2 + \frac{KN_o}{P}} < 2^{2R_i}-1\right) \tag{2.102}$$
$$\ge P\!\left(\sum_k h_{i,i}^k\,\beta_i^k < n_1+n_2\right) P\!\left(\sum_{r\neq i}\Big(\sum_k h_{i,r}^k\,\beta_r^k\Big)^2 + \frac{KN_o}{P} > \frac{(n_1+n_2)^2}{2^{2R_i}-1}\right) \tag{2.104}$$
$$\ge \min_{\beta_i^k,\,k}\,P\!\left(\sum_k h_{i,i}^k\,\beta_i^k < n_1+n_2\right) P\!\left(\sum_{r\neq i}\Big(\sum_k h_{i,r}^k\,\beta_r^k\Big)^2 + \frac{KN_o}{P} > \frac{(n_1+n_2)^2}{2^{2R_i}-1}\right) \tag{2.105}$$
where (2.104) follows from the intersection bound for division provided in Appendix A.3 and the minimization of the first probability term in (2.105) is over all possible feedback-based training algorithms, where each training algorithm results in a selection of beamforming weights $\beta_i^k$, $\forall k$.

We shall next focus on the minimization

$$\min_{\beta_i^k,\,k}\,P\!\left(\sum_k h_{i,i}^k\,\beta_i^k < n_1+n_2\right). \tag{2.106}$$

Noting that the feedback rate from $D_i$ to all the sources in $G_i$ over a training period length scaling as $K^r$ is upper-bounded, according to the cut-set bound with the cut shown in Fig. 2.6, by $K^r$, we then have

$$\min_{\beta_i^k,\,k}\,P\!\left(\sum_k h_{i,i}^k\,\beta_i^k < n_1+n_2\right)$$
$$\ge P\!\left(\sum_{k=1}^{K^r} h_{i,i}^k\,\mathrm{sign}(h_{i,i}^k) + \sum_{k=K^r+1}^{K} h_{i,i}^k\,\beta_i^k < n_1+n_2\right) \tag{2.108}$$
$$= P\!\left(\sum_{k=1}^{K^r} |h_{i,i}^k| + \sum_{k=K^r+1}^{K} h_{i,i}^k\,\beta_i^k < n_1+n_2\right),\qquad \forall n_1, n_2, \tag{2.109}$$

where $P(\beta_i^k = +1) = P(\beta_i^k = -1) = 1/2$, for $k = K^r+1,\dots,K$, and (2.108) follows because at most $K^r$ realizations of $\mathrm{sign}(h_{i,i}^k)$ can be communicated to the sources in $G_i$ by employing $K^r$ bits of feedback. (Note that the second probability term in (2.105) corresponds to interference from other groups of sources and would therefore not depend on the training algorithm.)

[Fig. 2.6: Cut-set upper bound on the feedback rate from $D_i$ to all the sources in $G_i$.]

$$P\!\left(\sum_{k=1}^{K^r}|h_{i,i}^k| + \sum_{k=K^r+1}^{K} h_{i,i}^k\,\beta_i^k < n_1+n_2\right) \ge P\!\left(\sum_{k=1}^{K^r}|h_{i,i}^k| < n_1\right) P\!\left(\sum_{k=K^r+1}^{K} h_{i,i}^k\,\beta_i^k < n_2\right). \tag{2.111}$$

Our final goal is to show that any scaling behavior of $R_i$ that violates (2.98) leads to an outage probability of 1 in (2.105) and thereby precludes achievability of full spatial multiplexing gain of M along with a (per-stream) array gain proportional to K. The key to this result is a judicious choice of $n_1$ and $n_2$ that drives both the probability
Choosing $n_1 = K^r\sqrt{2(1+\epsilon)\ln K}$, we have

$$P\!\left(\sum_{k=1}^{K^r}|h_{i,i}^k| < K^r\sqrt{2(1+\epsilon)\ln K}\right) \tag{2.112}$$
$$\ge P\!\left(|h_{i,i}^k| < \sqrt{2(1+\epsilon)\ln K},\ k=1,\dots,K\right) \tag{2.115}$$
$$= \left(1-2Q\!\left(\sqrt{2(1+\epsilon)\ln K}\right)\right)^{K} \tag{2.116}$$
$$\ge \left(1-\frac{1}{K^{1+\epsilon}}\right)^{K} \qquad \left(\text{using } Q(l)\le\tfrac12 e^{-\frac{l^2}{2}},\ l>0\right) \tag{2.117}$$
$$= \left(\left(1-\frac{1}{K^{1+\epsilon}}\right)^{K^{1+\epsilon}}\right)^{K^{-\epsilon}} \to 1 \qquad \text{as } K\to\infty. \tag{2.122}$$
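The two ingredients of this step, the Gaussian tail bound used in (2.117) and the limit $(1-K^{-(1+\epsilon)})^K \to 1$, can be checked numerically; the values of $l$, $\epsilon$ and K below are our own.

```python
import math

# Check of the Gaussian tail bound used in (2.117), Q(l) <= (1/2)*exp(-l^2/2),
# and of the resulting limit (1 - K^-(1+eps))^K -> 1 as K grows.
def Q(l):
    # Gaussian tail probability, computed via the complementary error function
    return 0.5 * math.erfc(l / math.sqrt(2))

checks = [Q(l) <= 0.5 * math.exp(-l * l / 2) for l in (0.5, 1.0, 2.0, 4.0)]
print(checks)   # True for each l > 0

eps = 0.5
limits = {K: (1 - K ** -(1 + eps)) ** K for K in (10, 100, 1000)}
print(limits)   # tends to 1 as K grows
```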
Moreover, since $\sum_{k=K^r+1}^{K} h_{i,i}^k\,\beta_i^k$ follows a Gaussian distribution with zero mean and variance $K-K^r$, choosing $n_2 = K^{\frac12+\epsilon}$ yields

$$P\!\left(\sum_{k=K^r+1}^{K} h_{i,i}^k\,\beta_i^k < K^{\frac12+\epsilon}\right) = 1 - P\!\left(\sum_{k=K^r+1}^{K} h_{i,i}^k\,\beta_i^k \ge K^{\frac12+\epsilon}\right) \tag{2.124}$$
$$\ge 1 - 2Q\!\left(\frac{K^{\frac12+\epsilon}}{\sqrt{K-K^r}}\right) \ge 1 - e^{-\frac{K^{1+2\epsilon}}{2(K-K^r)}} \qquad \left(\text{using } Q(l)\le\tfrac12 e^{-\frac{l^2}{2}}\right) \tag{2.127}$$
$$\to 1 \qquad \text{as } K\to\infty. \tag{2.130}$$
The second probability term in (2.105) can be lower-bounded according to

$$P\!\left(\sum_{r\neq i}\Big(\sum_k h_{i,r}^k\,\beta_r^k\Big)^2 + \frac{KN_o}{P} > \frac{(n_1+n_2)^2}{2^{2R_i}-1}\right)$$
$$\ge P\!\left(\sum_{r\neq i}\Big(\sum_k h_{i,r}^k\,\beta_r^k\Big)^2 > \frac{\big(K^r\sqrt{\ln K^{2(1+\epsilon)}} + K^{\frac12+\epsilon}\big)^2}{2^{2R_i}-1}\right) \tag{2.133}$$
$$\ge \left(P\!\left(\Big(\sum_k h_{i,r}^k\,\beta_r^k\Big)^2 > \frac{\big(K^r\sqrt{\ln K^{2(1+\epsilon)}} + K^{\frac12+\epsilon}\big)^2}{(2^{2R_i}-1)(M-1)}\right)\right)^{\!M-1} \tag{2.134}$$
$$= \left(2P\!\left(\sum_k h_{i,r}^k\,\beta_r^k > \frac{K^r\sqrt{\ln K^{2(1+\epsilon)}} + K^{\frac12+\epsilon}}{\sqrt{(2^{2R_i}-1)(M-1)}}\right)\right)^{\!M-1} \tag{2.135}$$
$$= \left(2Q\!\left(\frac{K^r\sqrt{\ln K^{2(1+\epsilon)}} + K^{\frac12+\epsilon}}{K^{\frac12}\sqrt{(M-1)\big(2^{2R_i}-1\big)}}\right)\right)^{\!M-1} \qquad \left(\text{since } \sum_{k=1}^{K} h_{i,r}^k\,\beta_r^k \sim \mathcal{N}(0,K),\ r\neq i\right) \tag{2.136}$$

and therefore the outage probability tends to one unless

$$2^{2R_i} \ge c\,K^{2r-1}\,\ln K^{2(1+\epsilon)} \qquad (\text{for large } K) \tag{2.141}$$

for some constant $c$.
Case 1, $0 < r < \frac12$: The outage probability in this case will always be one because (2.141) cannot be satisfied for $\lim_{K\to\infty}\frac{R_i}{\frac12\log K} > 0$. The best strategy might be to simply perform time-sharing so that

$$\lim_{K\to\infty}\frac{\sum_{i=1}^{M}R_i}{\frac12\log K} = 1, \tag{2.142}$$

that is, a spatial multiplexing gain of one and an array gain proportional to K is achieved.

Case 2, $\frac12 < r \le \frac12+\frac{1}{2M}$: In this case, $P_i^{out}(R_i)$ is less than one only if

$$2^{2R_i} \le c\,K^{2r-1}\,\ln K^{2(1+\epsilon)} \tag{2.143}$$
$$\Longleftrightarrow\quad R_i \le \left(r-\frac12\right)\log K + \frac12\log\!\big(c\ln K^{2(1+\epsilon)}\big), \tag{2.144}$$

so that

$$\lim_{K\to\infty}\frac{R_i}{\frac12\log K} < 2r-1 \quad\Longrightarrow\quad \lim_{K\to\infty}\frac{\sum_{i=1}^{M}R_i}{\frac12\log K} < 2Mr-M \le 1. \tag{2.145}$$

Case 3: Therefore, if $\frac12+\frac{1}{2M} < r < 1$, the spatial multiplexing gain in the sum rate is upper-bounded by $2Mr-M$ for an array gain proportional to K.

Combining the above three cases completes the proof.
2.7.
$$\sum_{k}\Re\{h^k\,\beta^k[t]\} \to \sum_{k}|\Re\{h^k\}|,\qquad t\ge n_oK. \tag{2.146}$$
$$y_i[n] = \left(\sum_{S_i^k\in A_i}|\Re\{h_{i,i}^k\}| - \sum_{S_i^k\in\bar A_i}|\Re\{h_{i,i}^k\}|\right)x_i[n] + \underbrace{\sum_{r\neq i}\sum_{k=1}^{K} h_{i,r}^k\,\beta_r^k\,x_r[n] + \sum_{k=1}^{K}\Im\{h_{i,i}^k\}\,\beta_i^k\,x_i[n]}_{\text{interference}} + w_i[n], \tag{2.147}$$
$$P_i^{out}(R_i) = P\big(\log(1+\mathrm{SINR}_i) < R_i\big) \tag{2.148}$$
$$= P\big(\mathrm{SINR}_i < 2^{R_i}-1\big)$$
$$= P\!\left(\frac{\big(\sum_{A_i}|\Re\{h_{i,i}^k\}| - \sum_{\bar A_i}|\Re\{h_{i,i}^k\}|\big)^2\,\frac{P}{K}}{\sum_{r\neq i}\big(\sum_k h_{i,r}^k\,\beta_r^k\big)^2\frac{P}{K} + \big(\sum_k\Im\{h_{i,i}^k\}\,\beta_i^k\big)^2\frac{P}{K} + 2N_o} < 2^{R_i}-1\right)$$
$$= P\!\left(\frac{\big(\sum_{A_i}|\Re\{h_{i,i}^k\}| - \sum_{\bar A_i}|\Re\{h_{i,i}^k\}|\big)^2}{\sum_{r\neq i}\big(\sum_k h_{i,r}^k\,\beta_r^k\big)^2 + \big(\sum_k\Im\{h_{i,i}^k\}\,\beta_i^k\big)^2 + \frac{2KN_o}{P}} < 2^{R_i}-1\right).$$

In a manner similar to what we did for the real-valued case, for any positive $n_1$, $n_2$ and $n_3$ such that $n_1 > n_2$, we can obtain the following upper bound:

$$P\!\left(\frac{\big(\sum_{A_i}|\Re\{h_{i,i}^k\}| - \sum_{\bar A_i}|\Re\{h_{i,i}^k\}|\big)^2}{\sum_{r\neq i}\big(\sum_k h_{i,r}^k\,\beta_r^k\big)^2 + \big(\sum_k\Im\{h_{i,i}^k\}\,\beta_i^k\big)^2 + \frac{2KN_o}{P}} < \frac{(n_1-n_2)^2}{n_3}\right)$$
$$\le P\!\left(\sum_{A_i}|\Re\{h_{i,i}^k\}| < n_1\right) + P\!\left(\sum_{\bar A_i}|\Re\{h_{i,i}^k\}| > n_2\right)$$
$$\quad + P\!\left(\sum_{r\neq i}\Big(\sum_k h_{i,r}^k\,\beta_r^k\Big)^2 + \Big(\sum_k\Im\{h_{i,i}^k\}\,\beta_i^k\Big)^2 + \frac{2KN_o}{P} > n_3\right) \tag{2.149}$$
$$\le P\!\left(\sum_{A_i}|\Re\{h_{i,i}^k\}| < n_1\right) + P\!\left(\sum_{\bar A_i}|\Re\{h_{i,i}^k\}| > n_2\right)$$
$$\quad + P\!\left(\sum_{r\neq i}\Big(\sum_k\Re\{h_{i,r}^k\}\,\beta_r^k\Big)^2 + \sum_{r\neq i}\Big(\sum_k\Im\{h_{i,r}^k\}\,\beta_r^k\Big)^2 + \Big(\sum_k\Im\{h_{i,i}^k\}\,\beta_i^k\Big)^2 + \frac{2KN_o}{P} > n_3\right) \tag{2.150}$$
$$\le P\!\left(\sum_{A_i}|\Re\{h_{i,i}^k\}| < n_1\right) + P\!\left(\sum_{\bar A_i}|\Re\{h_{i,i}^k\}| > n_2\right) + (2M-1)\,P\!\left(\left|\sum_k\Im\{h_{i,r}^k\}\,\beta_r^k\right| > \sqrt{\frac{n_3-\frac{2KN_o}{P}}{2M-1}}\right) \tag{2.151}$$
where (2.151) follows from the union bound for summation in Appendix A.1 and the fact that $\Im\{h_{i,i}^k\} \sim \Im\{h_{i,r}^k\} \sim \Re\{h_{i,r}^k\}$, $r\neq i$. Each of the three terms in (2.151) can be upper-bounded using large-deviations bounds of Shwartz and Weiss (1995). In particular, employing (from (A.10), (A.17) and (A.18))

$$P\!\left(\sum_{i=1}^{S}|\Re\{h_{i,i}^k\}| < m\right) \le \left(\frac{em}{S}\sqrt{\frac{2}{\pi}}\right)^{\!S},\qquad
P\!\left(\sum_{i=1}^{S}|\Re\{h_{i,i}^k\}| > m\right) \le 2e^{-\frac{m^2}{2S^2}},\qquad
P\!\left(\sum_{i=1}^{S}\Im\{h_{i,i}^k\}\,\beta^k \ge m\right) \le e^{-\frac{m^2}{2S}}$$

in (2.151), we get, in the large-K limit,

$$P_i^{out}(R_i) \le e^{-(1-\epsilon_o)K} + e^{-\epsilon_oK} + 2(2M-1)\,e^{-\frac{K^{2\delta}}{2}} \to 0 \qquad (\text{as } K\to\infty) \tag{2.152}$$

with $c_i < c_o$, for all $i$, and get $P_i^{out}(R_i)\to 0$, for all $i$, as $K\to\infty$, so that a sum rate of

$$\sum_{i=1}^{M} R_i = \sum_{i=1}^{M}\log(1+c_iK) \tag{2.156}$$
2.7.
(apart from the real part), then this pre-factor would be reduced
to 2(2M 2). However, to achieve coherent combining along the
imaginary part as well as the real part, the training period length
would have to be doubled. Thus, there is a clear tradeoff between
outage probability and length of the training period.
$$P\!\left(\frac{\big(\sum_k h_{i,i}^k\,\beta_i^k\big)^2}{\sum_{r\neq i}\big(\sum_k h_{i,r}^k\,\beta_r^k\big)^2 + \frac{2KN_o}{P}} < 2^{R_i}-1\right) \tag{2.160}$$
$$= P\!\left(\frac{\big(\sum_k\Re\{h_{i,i}^k\}\,\beta_i^k\big)^2 + \big(\sum_k\Im\{h_{i,i}^k\}\,\beta_i^k\big)^2}{\sum_{r\neq i}\big(\sum_k h_{i,r}^k\,\beta_r^k\big)^2 + \frac{2KN_o}{P}} < 2^{R_i}-1\right) \tag{2.161}$$
$$\ge P\!\left(\frac{\big(\sum_k\Re\{h_{i,i}^k\}\,\beta_i^k\big)^2}{\sum_{r\neq i}\big(\sum_k h_{i,r}^k\,\beta_r^k\big)^2 + \frac{2KN_o}{P}} < \frac{2^{R_i}-1}{2}\right)\,P\!\left(\frac{\big(\sum_k\Im\{h_{i,i}^k\}\,\beta_i^k\big)^2}{\sum_{r\neq i}\big(\sum_k h_{i,r}^k\,\beta_r^k\big)^2 + \frac{2KN_o}{P}} < \frac{2^{R_i}-1}{2}\right) \tag{2.162}$$
With $\Re\{h_{i,r}^k\}\,\beta_r^k \sim \Im\{h_{i,r}^k\}\,\beta_r^k$, $r\neq i$, our analysis in Section 2.6 (from (2.102) onwards) implies that, with a training period length scaling as $K^r$ (for $r<1$), each of the two probability terms in (2.162) is lower-bounded by 1 (that is, the outage probability is one) unless the sum rate satisfies

$$\lim_{K\to\infty}\frac{\sum_i R_i}{\log K} \le \begin{cases} 1, & \text{if } 0 < r < \frac12+\frac{1}{2M},\\[6pt] 2Mr-M, & \text{if } \frac12+\frac{1}{2M} < r < 1, \end{cases} \tag{2.163}$$

and hence full spatial multiplexing gain along with a per-stream array gain proportional to K cannot be achieved with a training period length scaling sub-linearly in K.
2.8. OBSERVATIONS
CHAPTER 3

[Fig.: An interference network with M source/destination pairs $S_1\to D_1,\dots,S_M\to D_M$.]

$$\lim_{P\to\infty}\frac{R_i+R_k}{\log P} \le 1, \tag{3.1}$$

$$\lim_{P\to\infty}\frac{\sum_{i=1}^{M}\sum_{k\neq i}(R_i+R_k)}{\log P} \le M(M-1), \tag{3.2}$$

$$\lim_{P\to\infty}\frac{2(M-1)\sum_{i=1}^{M}R_i}{\log P} \le M(M-1), \tag{3.3}$$

$$\lim_{P\to\infty}\frac{\sum_{i=1}^{M}R_i}{\log P} \le \frac{M}{2}. \tag{3.4}$$
3.1. SYSTEM MODEL

$$\qquad r = 0,\dots,N-1, \tag{3.5}$$

is the channel coefficient between source $S_k$ and destination $D_i$, corresponding to the r-th tone. The input-output relation between $S_i$
[(3.6)–(3.8): input-output relation, with the interference terms from $S_k$, $k\neq i$.]
Finally, we shall also need the channel vector $\mathbf{h}_{i,k}$ and the normalized channel vector $\mathbf{w}_{i,k}$ corresponding to the link $S_k \to D_i$, defined as $\mathbf{h}_{i,k} = [h_{i,k}[0]\ \cdots\ h_{i,k}[L-1]]^T \in \mathbb{C}^{L\times 1}$ and $\mathbf{w}_{i,k} = \mathbf{h}_{i,k}/\|\mathbf{h}_{i,k}\| \in \mathbb{C}^{L\times 1}$, respectively.
We assume that each destination Di knows its channels from each
of the sources Sk perfectly, that is, Di knows hi,k , k. We further
assume that there exist dedicated non-interfering error-free broadcast
feedback links from each destination Di to all the other nodes in
the network, that is, to the sources Sk , k, and to the destinations
Dk , k 6= i. In the remainder of this chapter, we distinguish between a
We
$$\lim_{P\to\infty}\frac{R_{\text{sum}}}{\log P} = \frac{M}{2}. \qquad (3.9)$$
3.2.

$$x_k = \sum_{m=1}^{d_k} v_k^m x_k^m, \qquad k = 1, \ldots, M \qquad (3.10)$$

where $x_k^m \in \mathbb{C}$, $v_k^m \in \mathbb{C}^{N\times 1}$ with $\|v_k^m\|^2 = 1$, and $\mathbb{E}\big[|x_k^m|^2\big] = P/(M d_k)$, $\forall k, m$. Setting $Q = (M-1)(M-2)-1$, the number of data symbols $d_k$ (corresponding to $S_k$) and the number of tones $N$ are chosen according to (Cadambe and Jafar, 2008, Appendix III)

$$d_k = \begin{cases} (t+1)^Q, & k = 1 \\ t^Q, & k = 2, \ldots, M \end{cases} \qquad (3.11)$$

$$N = (t+1)^Q + t^Q \qquad (3.12)$$
$$(u_i^m)^H y_i = (u_i^m)^H H_{i,i} v_i^m x_i^m + \underbrace{\sum_{p\neq m} (u_i^m)^H H_{i,i} v_i^p x_i^p}_{\text{interference}} + \underbrace{\sum_{k\neq i}\sum_{p=1}^{d_k} (u_i^m)^H H_{i,k} v_k^p x_k^p}_{\text{interference}} + (u_i^m)^H z_i \qquad (3.13)$$

for $m = 1, \ldots, d_i$, $i = 1, \ldots, M$, where $u_i^m \in \mathbb{C}^{N\times 1}$ with $\|u_i^m\| = 1$, $\forall i, m$. Choosing $x_k^m$, $\forall k, m$, to be i.i.d. Gaussian, treating the two interference terms in (3.13) as additional noise, and assuming that $D_i$ knows the effective channel coefficient $(u_i^m)^H H_{i,i} v_i^m$, $\forall m$, perfectly, the rate of communication over the link $S_i \to D_i$ is lower-bounded

We will later let $t \to \infty$ and see that the difference between the multiplexing gain achieved by IA and the maximum possible multiplexing gain can be made arbitrarily small.
according to

$$R_i \ge \frac{1}{N}\sum_{m=1}^{d_i}\log\!\left(1 + \frac{\frac{P}{M d_i}\,\big|(u_i^m)^H H_{i,i} v_i^m\big|^2}{I_{i,1} + I_{i,2} + N_o}\right) \qquad (3.14)$$

with

$$I_{i,1} = \sum_{p\neq m}\frac{P}{M d_i}\,\big|(u_i^m)^H H_{i,i} v_i^p\big|^2 \qquad (3.15)$$

$$I_{i,2} = \sum_{k\neq i}\sum_{p=1}^{d_k}\frac{P}{M d_k}\,\big|(u_i^m)^H H_{i,k} v_k^p\big|^2. \qquad (3.16)$$

Interference alignment requires

$$\big|(u_i^m)^H H_{i,i} v_i^m\big| \ge c > 0, \qquad \forall i, m \qquad (3.17)$$

$$(u_i^m)^H H_{i,i} v_i^p = 0, \qquad \forall i,\; m \neq p \qquad (3.18)$$

$$(u_i^m)^H H_{i,k} v_k^p = 0, \qquad k \neq i,\; \forall m, p \qquad (3.19)$$
$$\lim_{P\to\infty}\frac{R_{\text{sum}}}{\log P} = \frac{(t+1)^Q + (M-1)\,t^Q}{(t+1)^Q + t^Q} \xrightarrow{t\to\infty} \frac{M}{2} \qquad (3.21)$$

that is, full spatial multiplexing gain, in the sense of (3.9), is achieved.
Under the assumption of every node (i.e., every source and every destination) knowing all the channels in the network perfectly, one way to find vectors $u_i^m$, $v_i^p$ satisfying (3.17), (3.18) and (3.19) is provided in (Cadambe and Jafar, 2008, Appendix III) (or see Appendix A.4 of this thesis). The basic idea is that each $S_k$ computes, based on its knowledge of all the channels in the network, a set of linearly independent transmit direction vectors $v_k^1, \ldots, v_k^{d_k}$ such that all the vectors corresponding to interference from $S_k$, $k \neq i$, at $D_i$ (that is, the vectors $H_{i,k} v_k^p$, $k \neq i$, $p = 1, \ldots, d_k$) span an $(N - d_i)$-dimensional complex subspace of $\mathbb{C}^N$. Consequently, $d_i$ dimensions remain completely interference-free. Each $D_i$, in turn, computes, based on its knowledge of all the channels in the network, a set of $d_i$ unit-norm receive direction vectors $u_i^1, \ldots, u_i^{d_i}$ that spans the $d_i$-dimensional interference-free subspace corresponding to the link $S_i \to D_i$, thereby satisfying (3.19). Moreover, it was shown in (Cadambe and Jafar, 2008, Appendix III) that if the vectors $H_{i,i} v_i^m$, $\forall m$, along with the vectors $H_{i,k} v_k^p$, $k \neq i$, $\forall p$, span $\mathbb{C}^N$, then $D_i$ can choose the $d_i$ receive direction vectors $u_i^m$, $\forall m$, such that along with (3.19), both (3.17) and (3.18) are satisfied as well. It turns out that, in the frequency-selective case, this is possible provided that $L > ((t+1)^Q - 1)/(3tQ)$ (the proof of this statement is similar to (Grokop et al., 2009, Th. 6.4) and the details are provided in Appendix A.5 of this thesis).
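For concreteness, the signal-space bookkeeping of the construction in (3.11)-(3.12) can be checked numerically; the helper name below is illustrative only, and the printed ratio is the per-network multiplexing gain $\sum_k d_k / N$, which approaches $M/2$ as $t$ grows:

```python
def ia_dimensions(M: int, t: int):
    """Signal-space sizes for the interference-alignment construction, per (3.11)-(3.12)."""
    Q = (M - 1) * (M - 2) - 1
    d = [(t + 1) ** Q] + [t ** Q] * (M - 1)   # d_1, d_2, ..., d_M
    N = (t + 1) ** Q + t ** Q                 # number of tones
    return d, N

# sum(d)/N approaches M/2 = 1.5 for M = 3 as t grows:
for t in (1, 10, 100):
    d, N = ia_dimensions(M=3, t=t)
    print(t, sum(d) / N)
```

For $M = 3$ (so $Q = 1$) the ratio is $(3t+1)/(2t+1)$, visibly converging to $3/2$.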
The developments in the remainder of this chapter are based on the simple observation that if the interference power terms $I_{i,1}$ and $I_{i,2}$, for all $i$, are not equal to zero, but upper-bounded by a constant independent of $P$, full spatial multiplexing gain is still achieved. The key to realizing this will be a vector quantization scheme, which satisfies (3.17) and ensures that both $\big|(u_i^m)^H H_{i,i} v_i^p\big|^2$, $\forall i$, $m \neq p$, and $\big|(u_i^m)^H H_{i,k} v_k^p\big|^2$, $k \neq i$, $\forall m, p$, scale as $1/P$ when $P \to \infty$. It will turn out that the vector quantization scheme developed in Mukkavilli et al. (2003) and Love et al. (2003) for beamforming in single-user frequency-flat multi-input multi-output (MIMO) channels satisfies this condition.
3.3.

and set

$$\{\hat{w}_1, \ldots, \hat{w}_{2^{N_d}}\} = \{p_1, \ldots, p_{N_{\text{pack}}}\}. \qquad (3.23)$$
This approach was used in Mukkavilli et al. (2003) and Love et al. (2003) for beamforming in single-user MIMO channels.

Quantization error

We define the quantization error $d(w_{i,k}, \hat{w}_{i,k})$ as

$$d(w_{i,k}, \hat{w}_{i,k}) \triangleq \sqrt{1 - \big|w_{i,k}^H \hat{w}_{i,k}\big|^2}.$$

The maximum quantization error $d^{\max}$ is then given by

$$d^{\max} = \max_{x\in\mathbb{C}^L,\;\|x\|=1}\sqrt{1 - |x^H \hat{w}_x|^2} \qquad (3.24)$$

where $\hat{w}_x \in \mathcal{A}$ is the unit-magnitude quantized version of $x \in \mathbb{C}^L$ obtained according to (3.22).

We will need an upper bound on $d^{\max}$ in terms of $N_d$. While such a bound is known in the literature (see Mukkavilli et al. (2003)), we will provide a derivation, partly for completeness, and partly to get the bound in a form required for our proof. We start by noting the following two key properties of the chosen set of quantization vectors:

i) The following relation holds between $N_d$ and $\sin(\delta)$ (see (Love et al., 2003, Th. 3) or Appendix A.6 of this thesis):

$$\sin(\delta) \le 2\sin\!\left(\frac{\delta}{2}\right) \le 2\left(\frac{1}{2^{N_d}}\right)^{\frac{1}{2(L-1)}}. \qquad (3.25)$$

ii) The maximum quantization error $d^{\max}$ is upper-bounded by $\sin(\delta)$. This can be proved by contradiction as follows:
Suppose, to the contrary, that $d^{\max} > \sin(\delta)$, and let $x_o$ be the maximizer in (3.24), that is,

$$x_o = \arg\max_{x\in\mathbb{C}^L,\;\|x\|=1}\sqrt{1 - |x^H \hat{w}_x|^2}. \qquad (3.26)$$

Then, we have

$$\sqrt{1 - \big|x_o^H \hat{w}_{x_o}\big|^2} > \sin(\delta) \qquad (3.27)$$

$$\Rightarrow\; 1 - \big|x_o^H \hat{w}_{x_o}\big|^2 > \sin^2(\delta) \qquad (3.28)$$

$$\Rightarrow\; \big|x_o^H \hat{w}_{x_o}\big|^2 < 1 - \sin^2(\delta) = \cos^2(\delta) \qquad (3.29)$$

$$\Rightarrow\; \big|x_o^H \hat{w}_{x_o}\big| < \cos(\delta) \qquad (3.30)$$

$$\Rightarrow\; \big|x_o^H p_l\big| < \cos(\delta), \qquad l = 1, \ldots, 2^{N_d} \qquad (3.31)$$

that is, $x_o$ has absolute inner product less than $\cos(\delta)$ with every quantization vector, contradicting the line-packing property of the set $\{p_1, \ldots, p_{2^{N_d}}\}$.
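The quantization rule and the error measure above are easy to simulate. The sketch below uses a random codebook as a stand-in for the line packing $\{p_1, \ldots, p_{N_{\text{pack}}}\}$ (a real scheme would use the Mukkavilli et al. (2003) / Love et al. (2003) construction), with unit-norm vectors in $\mathbb{C}^L$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit(L):
    x = rng.standard_normal(L) + 1j * rng.standard_normal(L)
    return x / np.linalg.norm(x)

# Random stand-in codebook for the 2^Nd quantization vectors.
L, Nd = 4, 8
codebook = np.array([random_unit(L) for _ in range(2 ** Nd)])

def quantize(w):
    """Pick the codeword maximizing |p_r^H w|."""
    return codebook[np.argmax(np.abs(codebook.conj() @ w))]

def quant_error(w, w_hat):
    """d(w, w_hat) = sqrt(1 - |w^H w_hat|^2)."""
    return np.sqrt(max(0.0, 1.0 - np.abs(np.vdot(w, w_hat)) ** 2))

errs = [quant_error(w, quantize(w)) for w in (random_unit(L) for _ in range(200))]
print(max(errs))  # strictly below 1; a good packing pushes this towards sin(delta)
```

Even this unstructured codebook keeps the error strictly below 1; the packing-based construction is what turns this into the explicit bound (3.25).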
Each destination broadcasts a total of

$$N_f = M N_d \;\text{bits}. \qquad (3.33)$$

$$r = 0, \ldots, N-1 \qquad (3.34)$$

(3.35)
IA is now performed naively assuming that $H_{i,k} = \widehat{W}_{i,k}$, $\forall i, k$, that is, each source $S_k$ computes its transmit direction vectors $\hat{v}_k^m$, $m = 1, \ldots, d_k$, and each destination $D_i$ computes its receive direction vectors $\hat{u}_i^m$, $m = 1, \ldots, d_i$, from $\widehat{W}_{i,k}$, $\forall i, k$ (rather than from $H_{i,k}$). $S_k$ transmits a linear combination of $d_k$ scalar symbols, $x_k^1, \ldots, x_k^{d_k}$, in $N$ frequency-slots by modulating the symbols onto the vectors $\hat{v}_k^1, \ldots, \hat{v}_k^{d_k}$, that is,

$$\hat{x}_k = \sum_{m=1}^{d_k} \hat{v}_k^m x_k^m, \qquad k = 1, \ldots, M \qquad (3.36)$$
where $x_k^m \in \mathbb{C}$, $\hat{v}_k^m \in \mathbb{C}^{N\times 1}$ with $\|\hat{v}_k^m\|^2 = 1$, and $\mathbb{E}\big[|x_k^m|^2\big] = P/(M d_k)$, $\forall k, m$. The number of data symbols $d_k$ (corresponding to $S_k$) and the number of tones $N$ are chosen according to (3.11) and (3.12), respectively. Each destination $D_i$ computes the projections of its received signal $y_i$ onto the receive direction vectors $\hat{u}_i^1, \ldots, \hat{u}_i^{d_i}$, resulting in a total of $\sum_i d_i$ effective input-output relations given by

$$(\hat{u}_i^m)^H y_i = (\hat{u}_i^m)^H H_{i,i} \hat{v}_i^m x_i^m + \sum_{p\neq m}(\hat{u}_i^m)^H H_{i,i} \hat{v}_i^p x_i^p + \sum_{k\neq i}\sum_{p=1}^{d_k}(\hat{u}_i^m)^H H_{i,k} \hat{v}_k^p x_k^p + (\hat{u}_i^m)^H z_i, \qquad \forall i, m \qquad (3.37)$$

where $\hat{u}_i^m \in \mathbb{C}^{N\times 1}$ with $\|\hat{u}_i^m\| = 1$, $\forall i, m$. Defining $h_{i,k} \triangleq [h_{i,k}(0) \ldots h_{i,k}(N-1)]^T$ and, correspondingly, vectors $\hat{b}_{i,k}^{m,p}$ such that $(\hat{u}_i^m)^H H_{i,k} \hat{v}_k^p = h_{i,k}^H \hat{b}_{i,k}^{m,p}$, this can be rewritten as

$$(\hat{u}_i^m)^H y_i = h_{i,i}^H \hat{b}_{i,i}^{m,m} x_i^m + \sum_{p\neq m} h_{i,i}^H \hat{b}_{i,i}^{m,p} x_i^p + \sum_{k\neq i}\sum_{p=1}^{d_k} h_{i,k}^H \hat{b}_{i,k}^{m,p} x_k^p + (\hat{u}_i^m)^H z_i, \qquad \forall i, m. \qquad (3.38)$$
Choosing $x_i^m$, $\forall i, m$, to be i.i.d. Gaussian, treating the two interference terms in (3.38) as additional noise and assuming that the destination $D_i$ knows the effective channel coefficients $h_{i,i}^H \hat{b}_{i,i}^{m,m}$, $\forall m$, perfectly, the rate of communication over the link $S_i \to D_i$ is then lower-bounded according to

$$R_i \ge \frac{1}{N}\sum_{m=1}^{d_i}\log\!\left(1 + \frac{\frac{P}{M d_i}\,\big|h_{i,i}^H \hat{b}_{i,i}^{m,m}\big|^2}{\hat{I}_{i,1} + \hat{I}_{i,2} + N_o}\right) \qquad (3.39)$$

with

$$\hat{I}_{i,1} = \sum_{p\neq m}\frac{P}{M d_i}\,\big|h_{i,i}^H \hat{b}_{i,i}^{m,p}\big|^2$$

$$\hat{I}_{i,2} = \sum_{k\neq i}\sum_{p=1}^{d_k}\frac{P}{M d_k}\,\big|h_{i,k}^H \hat{b}_{i,k}^{m,p}\big|^2.$$
Recall that in IA with perfect CSI, we had $\big|(u_i^m)^H H_{i,i} v_i^m\big| \ge c > 0$, $\forall i, m$ (with $c$ independent of $P$), $(u_i^m)^H H_{i,i} v_i^p = 0$, $\forall i$, $m \neq p$, and $(u_i^m)^H H_{i,k} v_k^p = 0$, $k \neq i$, $\forall m, p$. Defining $b_{i,k}^{m,p}$ through $(u_i^m)^H H_{i,k} v_k^p = h_{i,k}^H b_{i,k}^{m,p}$, these conditions are equivalent to

$$\big|h_{i,i}^H b_{i,i}^{m,m}\big| \ge c > 0, \qquad \forall i, m \qquad (3.40)$$

$$h_{i,i}^H b_{i,i}^{m,p} = 0, \qquad \forall i,\; m \neq p \qquad (3.41)$$

$$h_{i,k}^H b_{i,k}^{m,p} = 0, \qquad k \neq i,\; \forall m, p. \qquad (3.42)$$

Defining $\hat{w}_{i,k} \triangleq [\hat{w}_{i,k}(0)\;\hat{w}_{i,k}(1) \ldots \hat{w}_{i,k}(N-1)]^T$, naive IA therefore entails finding vectors $\hat{u}_i^m$, $\hat{v}_i^p$ satisfying the following conditions:

$$\big|\hat{w}_{i,i}^H \hat{b}_{i,i}^{m,m}\big| \ge c > 0, \qquad \forall i, m \qquad (3.43)$$

$$\hat{w}_{i,i}^H \hat{b}_{i,i}^{m,p} = 0, \qquad \forall i,\; m \neq p \qquad (3.44)$$

$$\hat{w}_{i,k}^H \hat{b}_{i,k}^{m,p} = 0, \qquad k \neq i,\; \forall m, p. \qquad (3.45)$$
Expanding $h_{i,i}$ into an orthonormal basis containing $\hat{w}_{i,i}$ and $\hat{b}_{i,i}^{m,p}/\|\hat{b}_{i,i}^{m,p}\|$ yields

$$\|h_{i,i}\|^2 \ge \big|h_{i,i}^H \hat{w}_{i,i}\big|^2 + \frac{\big|h_{i,i}^H \hat{b}_{i,i}^{m,p}\big|^2}{\|\hat{b}_{i,i}^{m,p}\|^2}, \qquad \forall i,\; m \neq p$$

which yields

$$\frac{P}{M d_i}\big|h_{i,i}^H \hat{b}_{i,i}^{m,p}\big|^2 \le \frac{P}{M d_i}\,\|\hat{b}_{i,i}^{m,p}\|^2\left(\|h_{i,i}\|^2 - \big|h_{i,i}^H \hat{w}_{i,i}\big|^2\right) \qquad (3.47)$$

$$= \frac{P}{M d_i}\,\|\hat{b}_{i,i}^{m,p}\|^2\,\|h_{i,i}\|^2\left(1 - \left|\frac{h_{i,i}^H}{\|h_{i,i}\|}\,\hat{w}_{i,i}\right|^2\right) \qquad (3.48)$$

$$= \frac{P}{M d_i}\,\|\hat{b}_{i,i}^{m,p}\|^2\,\|h_{i,i}\|^2\left(1 - \big|w_{i,i}^H \hat{w}_{i,i}\big|^2\right) \qquad (3.49)$$

$$\le \frac{P}{M d_i}\,\|\hat{b}_{i,i}^{m,p}\|^2\,\|h_{i,i}\|^2\,(d^{\max})^2 \qquad (3.50)$$

$$\le \frac{4P}{M d_i}\,\|\hat{b}_{i,i}^{m,p}\|^2\,\|h_{i,i}\|^2\left(\frac{1}{2^{N_d}}\right)^{\frac{1}{L-1}}, \qquad \forall i,\; m \neq p \qquad (3.51)$$
Similarly, expanding $h_{i,k}$, $k \neq i$, into an orthonormal basis containing $\hat{w}_{i,k}$ and $\hat{b}_{i,k}^{m,p}/\|\hat{b}_{i,k}^{m,p}\| \qquad (3.52)$

yields

$$\frac{P}{M d_k}\big|h_{i,k}^H \hat{b}_{i,k}^{m,p}\big|^2 \le \frac{4P}{M d_k}\,\|\hat{b}_{i,k}^{m,p}\|^2\,\|h_{i,k}\|^2\left(\frac{1}{2^{N_d}}\right)^{\frac{1}{L-1}}, \qquad k \neq i,\; \forall m, p. \qquad (3.53)$$
Choosing $N_d$ such that $(1/2^{N_d})^{1/(L-1)} = 1/P$, the right-hand sides of (3.51) and (3.53) become

$$\epsilon_{i,i}^{m,p} \triangleq \frac{4\,\|\hat{b}_{i,i}^{m,p}\|^2\,\|h_{i,i}\|^2}{M d_i}, \qquad \forall i,\; m \neq p$$

$$\epsilon_{i,k}^{m,p} \triangleq \frac{4\,\|\hat{b}_{i,k}^{m,p}\|^2\,\|h_{i,k}\|^2}{M d_k}, \qquad k \neq i,\; \forall m, p$$

both independent of $P$, so that

$$\hat{I}_{i,1} + \hat{I}_{i,2} \le \sum_{p\neq m}\epsilon_{i,i}^{m,p} + \sum_{k\neq i}\sum_{p=1}^{d_k}\epsilon_{i,k}^{m,p}. \qquad (3.54)$$

Moreover, (3.43) ensures that $\big|h_{i,i}^H \hat{b}_{i,i}^{m,m}\big| \ge \|h_{i,i}\|\,c > 0$. Consequently, using (3.54), the spatial
multiplexing gain achieved by naive IA satisfies

$$\lim_{P\to\infty}\sum_{i=1}^{M}\sum_{m=1}^{d_i}\frac{1}{N\log P}\,\log\!\left(1 + \frac{\frac{P}{M d_i}\,\big|h_{i,i}^H \hat{b}_{i,i}^{m,m}\big|^2}{\sum_{p\neq m}\epsilon_{i,i}^{m,p} + \sum_{k\neq i}\sum_{p=1}^{d_k}\epsilon_{i,k}^{m,p} + N_o}\right) = \frac{\sum_i d_i}{N} \xrightarrow{t\to\infty} \frac{M}{2} \qquad (3.55)$$
which proves that full spatial multiplexing gain is achieved. We complete the proof by noting that the number of bits fed back (broadcast) by each destination for achievability of full spatial multiplexing gain using naive IA is given, according to (3.33), by $N_f = M N_d = M(L-1)\log P$.
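The scaling of the feedback rate with $P$ can be made concrete. The sketch below (hypothetical helper name) converts an SNR in dB into the $N_d$ for which $(1/2^{N_d})^{1/(L-1)} = 1/P$, so that the residual interference terms in (3.51)/(3.53) stay bounded:

```python
import math

def feedback_bits_per_vector(P_dB: float, L: int) -> float:
    """N_d = (L-1) log2 P makes (1/2^Nd)^(1/(L-1)) = 1/P."""
    P = 10 ** (P_dB / 10)
    return (L - 1) * math.log2(P)

# Per-destination total for naive IA is N_f = M * N_d:
M, L = 3, 5
for P_dB in (10, 20, 30):
    Nd = feedback_bits_per_vector(P_dB, L)
    print(P_dB, round(Nd, 1), round(M * Nd, 1))
```

The feedback budget grows only logarithmically in the transmit power, which is what makes the scheme "limited feedback" in a meaningful sense.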
3.4.

3.5. OBSERVATIONS

CHAPTER 4
Fig. 4.1: A MISO interference network with M sources, each equipped with M antennas, and M destinations, each equipped with a single antenna.
4.1. SYSTEM MODEL

The impulse response of the frequency-selective MISO channel between the source $S_k$ and the destination $D_i$ is given by the taps $h_{i,k}[l] = [h_{i,k}^1[l] \ldots h_{i,k}^M[l]]^T \in \mathbb{C}^{M\times 1}$, for $l = 0, \ldots, L-1$, where $h_{i,k}^m[l]$ denotes the $l$-th tap of the channel impulse response between the $m$-th transmit antenna of $S_k$ and the single-antenna destination $D_i$. The channel coefficients $h_{i,k}^m[l]$ are assumed to remain constant throughout the time interval of interest (outage setting) and are drawn independently (across $i, k, m, l$) from a continuous probability density function such that $0 < |h_{i,k}^m[l]| < \infty$, for all $i, k, l, m$, with probability 1. We use a cyclic signal model (such as in orthogonal frequency division multiplexing (OFDM)) to convert the $L$-tap frequency-selective MISO channel $S_k \to D_i$ to $N$ parallel frequency-flat MISO channels given by $h_{i,k}(r) = [h_{i,k}^1(r) \ldots h_{i,k}^M(r)]^T \in \mathbb{C}^{M\times 1}$, for $r = 0, \ldots, N-1$, where

$$h_{i,k}^m(r) = \mathcal{F}\{h_{i,k}^m[n]\} \qquad (4.1)$$
The received symbol at $D_i$ on the $r$-th tone is then given by

$$y_i(r) = h_{i,i}^H(r)\,x_i(r) + \underbrace{\sum_{k\neq i} h_{i,k}^H(r)\,x_k(r)}_{\text{interference}} + z_i(r) \qquad (4.2)$$

(4.4)

We shall also need the channel vector $h_{i,k}$ and the normalized channel vector $w_{i,k}$ corresponding to the link $S_k \to D_i$, defined as $h_{i,k} = [h_{i,k}^T[0] \cdots h_{i,k}^T[L-1]]^T \in \mathbb{C}^{ML\times 1}$ and $w_{i,k} = h_{i,k}/\|h_{i,k}\| \in \mathbb{C}^{ML\times 1}$, respectively. Finally, defining $w_{i,k}[l] = h_{i,k}[l]/\|h_{i,k}\| \in \mathbb{C}^{M\times 1}$, for all $i, k, l$, we have $w_{i,k} = [w_{i,k}^T[0] \ldots w_{i,k}^T[L-1]]^T$.
We assume that each destination $D_i$ knows its channel from each of the sources perfectly, that is, $D_i$ knows $h_{i,k}$, $\forall k$. We further assume that there exist dedicated non-interfering error-free broadcast feedback links from each destination $D_i$ to its non-corresponding sources $S_k$, $k \neq i$. In the remainder of this section, we distinguish between a channel feedback phase during which $N_f$ bits of feedback are broadcast by each destination and a data transmission phase following the

4.2.

$$\lim_{P\to\infty}\frac{R_{\text{sum}}}{\log P} = M. \qquad (4.5)$$
(4.7)

$$h_{i,k}^H v_k = 0, \qquad i \neq k \qquad (4.8)$$

or, equivalently,

$$w_{i,k}^H v_k = 0, \qquad i \neq k \qquad (4.9)$$

(since $w_{i,k} = h_{i,k}/\|h_{i,k}\|$). Each source $S_k$ therefore chooses

$$v_k = \frac{\big[\text{Cof}[\{W_k\}_{1,k}] \;\cdots\; \text{Cof}[\{W_k\}_{M,k}]\big]^H}{\big\|\big[\text{Cof}[\{W_k\}_{1,k}] \;\cdots\; \text{Cof}[\{W_k\}_{M,k}]\big]\big\|}. \qquad (4.10)$$

Thanks to

$$\text{Cof}[\{W_k\}_{m,k}] = \{\text{adj}(W_k)\}_{k,m} = \det(W_k)\,\{W_k^{-1}\}_{k,m}$$

we have $v_k^H w_{i,k} = 0$, and hence, $h_{i,k}^H v_k = 0$, for all $i \neq k$. The received symbol at $D_i$ is given by

$$y_i[n] = h_{i,i}^H v_i x_i[n] + \sum_{k\neq i} h_{i,k}^H v_k x_k[n] + z_i[n]. \qquad (4.11)$$

$$R_i \ge \log\!\left(1 + \frac{\frac{P}{M}\,\big|h_{i,i}^H v_i\big|^2}{\sum_{k\neq i}\frac{P}{M}\,\big|h_{i,k}^H v_k\big|^2 + N_o}\right) \qquad (4.12)$$
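Numerically, the cofactor/adjugate construction above is just a null-space computation: $v_k$ is the (up to phase) unique unit vector orthogonal to the $M-1$ vectors $w_{i,k}$, $i \neq k$. A minimal sketch, with random stand-ins for the (quantized) channel directions:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4  # antennas per source; M-1 non-corresponding destinations

# Stand-ins for the normalized channel vectors w_{i,k}, i != k (rows).
W = rng.standard_normal((M - 1, M)) + 1j * rng.standard_normal((M - 1, M))
W /= np.linalg.norm(W, axis=1, keepdims=True)

# v spans the one-dimensional null space of the (M-1) x M system
# w_{i,k}^H v = 0 -- the same vector, up to phase, as the
# cofactor/adjugate construction in the text.
_, _, Vh = np.linalg.svd(W.conj())
v = Vh[-1].conj()
v /= np.linalg.norm(v)

print(np.max(np.abs(W.conj() @ v)))  # ~0: interference nulled at every i != k
```

The SVD route is numerically preferable to explicit cofactors for anything beyond small $M$, but both pick the same one-dimensional null space for generic channels.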
The idea is, using a limited feedback scheme, to upper-bound $\sum_{k\neq i}(P/M)\big|h_{i,k}^H \hat{v}_k\big|^2$ by a constant, say $c_o$, independent of $P$. The resulting lower bound on multiplexing gain is then given by

$$\lim_{P\to\infty}\frac{R_{\text{sum}}}{\log P} \ge \lim_{P\to\infty}\frac{\sum_{i=1}^{M}\log\!\left(1 + \frac{\frac{P}{M}\,|h_{i,i}^H \hat{v}_i|^2}{c_o + N_o}\right)}{\log P} = M \qquad (4.14)$$

and hence the full spatial multiplexing gain is achieved. In the rest of this section we analyze this case in detail and indeed show that SIN, along with the vector quantization scheme of Mukkavilli et al. (2003), results in full spatial multiplexing gain with the number of feedback bits as specified in (4.7).
$$d^{\max} = \max_{x\in\mathbb{C}^{M\times 1},\,\|x\|=1}\sqrt{1 - |x^H \hat{w}_x|^2} \le 2\left(\frac{1}{2^{N_d}}\right)^{\frac{1}{2(M-1)}}. \qquad (4.17)$$
Total feedback rate. Each destination $D_i$ feeds back $N_d$ bits to each $S_k$, $k \neq i$. The total number of feedback bits broadcast by each destination is therefore given by $N_f = (M-1)N_d$. Consequently, each source $S_k$ receives $N_d$ bits of feedback from each of its non-corresponding destinations.
Thanks to

$$\text{Cof}[\{\widehat{W}_k\}_{m,k}] = \{\text{adj}(\widehat{W}_k)\}_{k,m} = \det(\widehat{W}_k)\,\{\widehat{W}_k^{-1}\}_{k,m}$$

we have $\hat{v}_k^H \hat{w}_{i,k} = 0$, for all $i \neq k$. The source $S_k$ transmits the vector signal

$$x_k[n] = \hat{v}_k\,x_k[n] \qquad (4.18)$$
Assuming that $D_i$ knows the effective channel coefficient $h_{i,i}^H \hat{v}_i$ and the interference power $\sum_{k\neq i}(P/M)\big|h_{i,k}^H \hat{v}_k\big|^2$, the rate of communication $R_i$ over the link $S_i \to D_i$ is lower-bounded according to

$$R_i \ge \log\!\left(1 + \frac{\frac{P}{M}\,\big|h_{i,i}^H \hat{v}_i\big|^2}{\sum_{k\neq i}\frac{P}{M}\,\big|h_{i,k}^H \hat{v}_k\big|^2 + N_o}\right) \qquad (4.20)$$

(4.22)

$$\big|h_{i,k}^H \hat{v}_k\big|^2 = \|h_{i,k}\|^2\,\big|w_{i,k}^H \hat{v}_k\big|^2, \qquad i \neq k. \qquad (4.23)$$

Furthermore, since $\|\hat{w}_{i,k}\| = 1$, $\|\hat{v}_k\| = 1$ and $\hat{w}_{i,k}, \hat{v}_k \in \mathbb{C}^{M\times 1}$, for all $i \neq k$, we can always find $q_1, \ldots, q_{M-2}$ such that

$$\{\hat{w}_{i,k},\, \hat{v}_k,\, q_1, \ldots, q_{M-2}\}$$
is an orthonormal basis for the $M$-dimensional complex inner product space. $\qquad (4.24)$

Expanding $w_{i,k}$, $i \neq k$, into this basis and using $\|w_{i,k}\| = 1$ yields

$$\|w_{i,k}\|^2 = \big|w_{i,k}^H \hat{w}_{i,k}\big|^2 + \big|w_{i,k}^H \hat{v}_k\big|^2 + \sum_{t=1}^{M-2}\big|w_{i,k}^H q_t\big|^2 \qquad (4.25)$$

so that

$$\frac{P}{M}\big|h_{i,k}^H \hat{v}_k\big|^2 = \frac{P}{M}\,\|h_{i,k}\|^2\,\big|w_{i,k}^H \hat{v}_k\big|^2 \qquad (4.26)$$

$$\le \frac{P}{M}\,\|h_{i,k}\|^2\left(1 - \big|w_{i,k}^H \hat{w}_{i,k}\big|^2\right) \qquad (4.27)$$

$$\le \frac{P}{M}\,\|h_{i,k}\|^2\,(d^{\max})^2 \qquad (4.28)$$

$$\le \frac{4P}{M}\,\|h_{i,k}\|^2\left(\frac{1}{2^{N_d}}\right)^{\frac{1}{M-1}} \qquad (4.29)$$

$$= \frac{4\,\|h_{i,k}\|^2}{M} \qquad \left(\text{choosing } N_d = (M-1)\log P \text{ so that } \left(\tfrac{1}{2^{N_d}}\right)^{\frac{1}{M-1}} = \tfrac{1}{P}\right) \qquad (4.30)$$

The spatial multiplexing gain achieved by SIN with quantized feedback therefore satisfies

$$\lim_{P\to\infty}\frac{R_{\text{sum}}}{\log P} \ge \lim_{P\to\infty}\frac{\sum_{i=1}^{M}\log\!\left(1 + \frac{\frac{P}{M}\,|h_{i,i}^H \hat{v}_i|^2}{\sum_{k\neq i}\frac{4\|h_{i,k}\|^2}{M} + N_o}\right)}{\log P} = M \qquad (4.31)$$
4.3.

(4.32)

$$y_i[n] = (h_{i,i}^H \circledast x_i)[n] + \underbrace{\sum_{k\neq i}(h_{i,k}^H \circledast x_k)[n]}_{\text{interference}} + z_i[n], \qquad n = 0, \ldots, N-1 \qquad (4.33)$$

(4.34)

(4.35)

Defining

$$\tilde{y}_i \triangleq [y_i[0] \ldots y_i[N-1]]^T \in \mathbb{C}^{N\times 1}, \qquad \tilde{x}_i \triangleq [x_i[0] \ldots x_i[N-1]]^T \in \mathbb{C}^{N\times 1}, \qquad \tilde{z}_i \triangleq [z_i[0] \ldots z_i[N-1]]^T \in \mathbb{C}^{N\times 1}$$
and the circulant matrix

$$U_{i,k} \triangleq \begin{bmatrix} (w_{i,k}^H \circledast \hat{v}_k)[0] & \cdots & (w_{i,k}^H \circledast \hat{v}_k)[1] \\ (w_{i,k}^H \circledast \hat{v}_k)[1] & \cdots & (w_{i,k}^H \circledast \hat{v}_k)[2] \\ \vdots & \ddots & \vdots \\ (w_{i,k}^H \circledast \hat{v}_k)[N-1] & \cdots & (w_{i,k}^H \circledast \hat{v}_k)[0] \end{bmatrix} \in \mathbb{C}^{N\times N}$$

the rate $R_i$ is lower-bounded according to

$$R_i \ge \frac{1}{N}\log\frac{\det\!\left(\sum_{k=1}^{M}\frac{P}{M}\,\|h_{i,k}\|^2\, U_{i,k} U_{i,k}^H + N_o I_N\right)}{\det\!\left(\sum_{k\neq i}\frac{P}{M}\,\|h_{i,k}\|^2\, U_{i,k} U_{i,k}^H + N_o I_N\right)}. \qquad (4.37)$$
Using the fact that every circulant matrix is diagonalized by the DFT, $U_{i,k} = F^H D_{i,k} F$, with $F$ the unitary $N\times N$ DFT matrix and $D_{i,k}$ diagonal, we obtain

$$R_i \ge \frac{1}{N}\log\frac{\det\!\left(\sum_{k=1}^{M}\frac{P}{M}\,\|h_{i,k}\|^2\, F^H D_{i,k} D_{i,k}^H F + N_o I_N\right)}{\det\!\left(\sum_{k\neq i}\frac{P}{M}\,\|h_{i,k}\|^2\, F^H D_{i,k} D_{i,k}^H F + N_o I_N\right)} \qquad (4.38)$$

$$= \frac{1}{N}\log\frac{\det\!\left(\sum_{k=1}^{M}\frac{P}{M}\,\|h_{i,k}\|^2\, D_{i,k} D_{i,k}^H + N_o I_N\right)}{\det\!\left(\sum_{k\neq i}\frac{P}{M}\,\|h_{i,k}\|^2\, D_{i,k} D_{i,k}^H + N_o I_N\right)} \qquad (4.39)$$
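The step from (4.37) to (4.39) rests on the standard fact that any circulant matrix is diagonalized by the DFT, with the eigenvalues given by the DFT of the first column. This is quickly verified numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Circulant matrix with first column c: U[i, j] = c[(i - j) mod N].
U = np.array([np.roll(c, k) for k in range(N)]).T
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix
D = np.diag(np.fft.fft(c))               # eigenvalues = DFT of first column

print(np.max(np.abs(U - F.conj().T @ D @ F)))  # ~0
```

Because $F$ is unitary, the determinants in (4.38) are unchanged by the conjugation $F^H(\cdot)F$, which is exactly what yields (4.39).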
$$= \frac{1}{N}\log\frac{\prod_{r=0}^{N-1}\left(\sum_{k=1}^{M}\frac{P}{M}\,\|h_{i,k}\|^2\,|d_{i,k}(r)|^2 + N_o\right)}{\prod_{r=0}^{N-1}\left(\sum_{k\neq i}\frac{P}{M}\,\|h_{i,k}\|^2\,|d_{i,k}(r)|^2 + N_o\right)} \qquad (4.40)$$

$$= \frac{1}{N}\sum_{r=0}^{N-1}\log\!\left(1 + \frac{\frac{P}{M}\,\|h_{i,i}\|^2\,|d_{i,i}(r)|^2}{\sum_{k\neq i}\frac{P}{M}\,\|h_{i,k}\|^2\,|d_{i,k}(r)|^2 + N_o}\right). \qquad (4.41)$$

It therefore suffices to upper-bound, independently of $P$, the interference term

$$\sum_{k\neq i}\frac{P}{M}\,\|h_{i,k}\|^2\,\big|\mathcal{F}\{(w_{i,k}^H \circledast \hat{v}_k)[n]\}\big|^2 \qquad (4.42)$$
Each source $S_k$ transmits

$$\mathbf{x}_k(r) = \hat{v}_k(r)\,x_k(r), \qquad r = 0, \ldots, N-1 \qquad (4.43)$$

on the $r$-th tone, where $x_k(r)$ denotes a scalar symbol with variance $P/M$ and $\hat{v}_k(r) = [\hat{v}_k^1(r) \ldots \hat{v}_k^M(r)]^T$, with $\hat{v}_k^m(r) = \mathcal{F}\{\hat{v}_k^m[n]\}$, is the SIN vector for the $r$-th tone. (We will later assume that $x_k(r)$, for all $r$, are i.i.d., and thus ensure that the power constraint (4.4) is satisfied.) The received symbol at $D_i$ on the $r$-th tone is

$$y_i(r) = h_{i,i}^H(r)\,\hat{v}_i(r)\,x_i(r) + \sum_{k\neq i} h_{i,k}^H(r)\,\hat{v}_k(r)\,x_k(r) + z_i(r), \qquad r = 0, \ldots, N-1. \qquad (4.44)$$

The resulting interference power at $D_i$ is

$$\sum_{k\neq i}\frac{P}{M}\,\big|\mathcal{F}\{(h_{i,k}^H \circledast \hat{v}_k)[n]\}\big|^2 \qquad (4.46)$$

$$= \sum_{k\neq i}\frac{P}{M}\,\|h_{i,k}\|^2\,\big|\mathcal{F}\{(w_{i,k}^H \circledast \hat{v}_k)[n]\}\big|^2 \qquad (4.47)$$
$$= \sum_{k\neq i}\frac{P}{M}\,\|h_{i,k}\|^2\,\frac{1}{N}\left|\sum_{n=0}^{N-1}(w_{i,k}^H \circledast \hat{v}_k)[n]\,e^{-\frac{j2\pi rn}{N}}\right|^2 \qquad (4.48)$$

$$\le \sum_{k\neq i}\frac{P}{M}\,\|h_{i,k}\|^2\,\frac{1}{N}\left(\sum_{n=0}^{N-1}\big|(w_{i,k}^H \circledast \hat{v}_k)[n]\big|\,\big|e^{-\frac{j2\pi rn}{N}}\big|\right)^2 \qquad (4.49)$$

$$\qquad\left(\text{using }\Big|\sum_i a_i b_i\Big| \le \sum_i |a_i|\,|b_i|, \text{ for all } a_i, b_i \in \mathbb{C}\right)$$

$$\le \sum_{k\neq i}\frac{P}{M}\,\|h_{i,k}\|^2\,\frac{N^2}{N}\left(\max_n\big|(w_{i,k}^H \circledast \hat{v}_k)[n]\big|\right)^2 \qquad (4.50)$$

$$\qquad\left(\text{since } |e^{js}| = 1, \text{ for all real scalars } s\right)$$

$$= \sum_{k\neq i}\frac{PN}{M}\,\|h_{i,k}\|^2\left(\max_n\big|(w_{i,k}^H \circledast \hat{v}_k)[n]\big|\right)^2. \qquad (4.51)$$

It therefore suffices to ensure that

$$\|h_{i,k}\|^2\,\big|(w_{i,k}^H \circledast \hat{v}_k)[n]\big|^2 \le \frac{c_o M}{PN}, \qquad \forall n. \qquad (4.52)$$
In other words, we need

$$\sum_{k\neq i}\|h_{i,k}\|^2\,\big|(w_{i,k}^H \circledast \hat{v}_k)[n]\big|^2 - \frac{c_o M}{PN} \le 0, \qquad \forall n \qquad (4.53)$$

which, since $P \to \infty$, requires

$$\big|(w_{i,k}^H \circledast \hat{v}_k)[n]\big| = \left|\sum_{l=0}^{L-1} w_{i,k}^H[l]\,\hat{v}_k[n-l]\right| \approx 0, \qquad k \neq i,\; \forall n \qquad (4.54)$$

where

$$\hat{v}_k \triangleq [\hat{v}_k^T[0] \ldots \hat{v}_k^T[N-1]]^T \qquad (4.55)$$

is $MN$-dimensional and hence a non-trivial solution (that is, $\hat{v}_k \neq 0_{MN}$) exists. Next, consider the case where a random tap, say the $l_o$-th tap, is not jointly quantized with the other taps. Then, the information about the relative magnitudes $\|w_{i,k}[l_o]\|/\|w_{i,k}[l]\|$, for all $l \neq l_o$, will be lost. In such a case, the only way to satisfy (4.54) would be by choosing $\hat{v}_k$ such that both $w_{i,k}^H[l_o]\,\hat{v}_k[n-l_o]$ and $\sum_{l\neq l_o} w_{i,k}^H[l]\,\hat{v}_k[n-l]$ in (4.54) are individually zero. However, this would imply that we
4.3.
the dimension of v
k is fixed to be M N . Therefore, one may argue
that if we, for example, double the number of antennas at Sk to 2M ,
then we can do without one of the relative magnitudes. The problem
is that the resulting decrease of one degree of freedom in feedback
is outnumbered by the increase of M L degrees of freedom because
of each normalized channel vector now being 2M dimensional (recall
the L channel vector taps with M antennas have L(M 1) degrees of
freedom, but if Sk has 2M antennas, each destination Di , i 6= k will
have L(2M 1) degrees of freedom to feedback to Sk ).
In summary, the taps wi,k [l], for all l, must be quantized jointly
as normalized channel vector wi,k in order to satisfy (4.52). We next
present such a quantization scheme (that is, a quantization scheme
that satisfies (4.52)) and show that SIN, in conjunction with this
scheme, results in full spatial multiplexing gain.
$$d^{\max} = \max_{x\in\mathbb{C}^{ML\times 1},\,\|x\|=1}\sqrt{1 - |x^H \hat{w}_x|^2} \qquad (4.57)$$

$$\le 2\left(\frac{1}{2^{N_d}}\right)^{\frac{1}{2(ML-1)}}. \qquad (4.58)$$
Note that in contrast to our analysis in Section 3.3.1, where a given normalized channel vector is quantized over the $L$-dimensional complex space, the normalized channel vectors in the MISO frequency-selective case are quantized over the $ML$-dimensional complex space, resulting in a factor of $ML-1$ (rather than $L-1$ as in (3.32)) in the denominator of the power in (4.58).
Total feedback rate. As in the frequency-flat case, each destination $D_i$ feeds back $N_d$ bits to each $S_k$, $k \neq i$, resulting in a total feedback of $N_f = (M-1)N_d$ bits per destination. Each source $S_k$ therefore receives $N_d$ bits of (error-free) feedback from each of its non-corresponding destinations. Using this feedback, $S_k$ recreates the quantized normalized channel vectors $\hat{w}_{i,k}$, for all $i \neq k$. Furthermore, it computes the quantized normalized channel vectors $\hat{w}_{i,k}[l] \in \mathbb{C}^{M\times 1}$, for $l = 0, \ldots, L-1$, for all $i \neq k$, according to

$$[\hat{w}_{i,k}^T[0] \ldots \hat{w}_{i,k}^T[L-1]]^T = \hat{w}_{i,k}. \qquad (4.59)$$

Define the cyclically shifted vectors

$$g_{i,k}[n] = \text{Cyclic}_{nM}\Big[w_{i,k}^T[0]\;\underbrace{0_M^T \ldots 0_M^T}_{N-L \text{ times}}\;w_{i,k}^T[L-1] \ldots w_{i,k}^T[1]\Big]^T$$

for $n = 0, \ldots, N-1$, so that

$$(w_{i,k}^H \circledast \hat{v}_k)[n] = g_{i,k}^H[n]\,\hat{v}_k. \qquad (4.60)$$
Analogously, $S_k$ computes the vectors $\hat{g}_{i,k}[n]$ according to

$$\hat{g}_{i,k}[n] = \text{Cyclic}_{nM}\Big[\hat{w}_{i,k}^T[0]\;\underbrace{0_M^T \ldots 0_M^T}_{N-L \text{ times}}\;\hat{w}_{i,k}^T[L-1] \ldots \hat{w}_{i,k}^T[1]\Big]^T.$$

The matrix $\widehat{G}_k \in \mathbb{C}^{MN\times MN}$ is then computed as

$$\widehat{G}_k = [\hat{g}_{1,k}[0] \ldots \hat{g}_{1,k}[N-1] \;\cdots\; \hat{g}_{M,k}[0] \ldots \hat{g}_{M,k}[N-1]]$$

with $\hat{g}_{k,k}[n]$, for $n = 0, \ldots, N-1$, replaced by (dummy) all-one vectors (recall that $S_k$ does not know $\hat{w}_{k,k}[l]$, for $l = 0, \ldots, L-1$, and hence cannot compute $\hat{g}_{k,k}[n]$). Each source $S_k$ computes $\hat{v}_k$ (and hence $\hat{v}_k[n] \in \mathbb{C}^{M\times 1}$, for $n = 0, \ldots, N-1$) as

$$\hat{v}_k = \frac{\big[\text{Cof}[\{\widehat{G}_k\}_{1,kN}] \;\cdots\; \text{Cof}[\{\widehat{G}_k\}_{MN,kN}]\big]^H}{\big\|\big[\text{Cof}[\{\widehat{G}_k\}_{1,kN}] \;\cdots\; \text{Cof}[\{\widehat{G}_k\}_{MN,kN}]\big]\big\|}.$$

Thanks to

$$\text{Cof}[\{\widehat{G}_k\}_{n+1,kN}] = \{\text{adj}(\widehat{G}_k)\}_{kN,n+1} = \det(\widehat{G}_k)\,\{\widehat{G}_k^{-1}\}_{kN,n+1} \qquad (4.61)$$

we have

$$\hat{g}_{i,k}^H[n]\,\hat{v}_k = 0, \qquad n = 0, \ldots, N-1,\; i \neq k. \qquad (4.62)$$

Furthermore, since $\|\hat{g}_{i,k}[n]\| = 1$, $\|\hat{v}_k\| = 1$ and $\hat{g}_{i,k}[n], \hat{v}_k \in \mathbb{C}^{MN\times 1}$, for $n = 0, \ldots, N-1$, we can always find unit-norm vectors $q_1, \ldots, q_{MN-2}$ such that

$$\{\hat{g}_{i,k}[n],\, \hat{v}_k,\, q_1, \ldots, q_{MN-2}\}$$
is an orthonormal basis for the $MN$-dimensional complex inner product space. Expanding $g_{i,k}[n]$, $i \neq k$, into this orthonormal basis, we get

$$\|g_{i,k}[n]\|^2 = \big|g_{i,k}^H[n]\,\hat{g}_{i,k}[n]\big|^2 + \big|g_{i,k}^H[n]\,\hat{v}_k\big|^2 + \sum_{t=1}^{MN-2}\big|g_{i,k}^H[n]\,q_t\big|^2 \qquad (4.63)$$

$$\ge \big|g_{i,k}^H[n]\,\hat{g}_{i,k}[n]\big|^2 + \big|g_{i,k}^H[n]\,\hat{v}_k\big|^2 \qquad (4.64)$$

so that

$$\big|g_{i,k}^H[n]\,\hat{v}_k\big| \le \sqrt{\|g_{i,k}[n]\|^2 - \big|g_{i,k}^H[n]\,\hat{g}_{i,k}[n]\big|^2} \qquad (4.65)$$

$$= \sqrt{1 - \big|w_{i,k}^H \hat{w}_{i,k}\big|^2} \le d^{\max}, \qquad i \neq k,\; \forall n \qquad (4.66)$$

where we used

$$\|g_{i,k}[n]\| = 1 \qquad (4.67)$$

$$g_{i,k}^H[n]\,\hat{g}_{i,k}[n] = w_{i,k}^H \hat{w}_{i,k}. \qquad (4.68)$$

Combining this with (4.48)-(4.51) and (4.58), the interference power is upper-bounded as

$$\sum_{k\neq i}\frac{P}{M}\,\|h_{i,k}\|^2\,\big|\mathcal{F}\{(w_{i,k}^H \circledast \hat{v}_k)[n]\}\big|^2 \le \sum_{k\neq i}\frac{4PN\,\|h_{i,k}\|^2}{M}\left(\frac{1}{2^{N_d}}\right)^{\frac{1}{ML-1}}. \qquad (4.70)$$
Choosing $N_d = (ML-1)\log P$, so that $(1/2^{N_d})^{1/(ML-1)} = 1/P$, yields

$$\sum_{k\neq i}\frac{P}{M}\,\|h_{i,k}\|^2\,\big|\mathcal{F}\{(w_{i,k}^H \circledast \hat{v}_k)[n]\}\big|^2 \le \sum_{k\neq i}\frac{4PN\,\|h_{i,k}\|^2}{M}\left(\frac{1}{2^{N_d}}\right)^{\frac{1}{ML-1}} \qquad (4.71)$$

$$= \sum_{k\neq i}\frac{4N\,\|h_{i,k}\|^2}{M} \qquad (4.72)$$

The spatial multiplexing gain therefore satisfies

$$\lim_{P\to\infty}\frac{\sum_{i=1}^{M}\frac{1}{N}\sum_{r=0}^{N-1}\log\!\left(1 + \frac{\frac{P}{M}\,\|h_{i,i}\|^2\,\big|\mathcal{F}\{(w_{i,i}^H \circledast \hat{v}_i)[n]\}\big|^2}{\sum_{k\neq i}\frac{4N\|h_{i,k}\|^2}{M} + N_o}\right)}{\log P} = M \qquad (4.73)$$
which proves that full spatial multiplexing gain is, indeed, achieved. Finally, we complete the proof by noting that the number of bits broadcast by each destination to its non-corresponding sources is given by $N_f = (M-1)N_d = (M-1)(ML-1)\log P$.

4.4. OBSERVATIONS
corresponding destinations. For the case where the source knows its channels to each of its non-corresponding destinations perfectly, the cancellation is perfect and each destination receives an interference-free signal from its corresponding source. This naturally results in full spatial multiplexing gain. On the other hand, if each source has only partial CSI of its channels to its non-corresponding destinations, then the cancellation can still be made near-perfect by employing the vector quantization scheme of Mukkavilli et al. (2003) and a transmission scheme called spatial interference-nulling. It was shown that this near-perfect cancellation yields full spatial multiplexing gain, provided that the number of feedback bits used to quantize each channel scales appropriately with the sum transmit power P.

In contrast to spatial multiplexing with interference alignment, which depends critically on the assumptions of an extremely large number of OFDM tones, a large number of taps L in the frequency-selective channels, and feedback from each destination to all the other destinations, full spatial multiplexing gain is achievable with spatial interference-nulling without any such constraints. The tradeoff, of course, is that multiple antennas are required at each source for feasibility of spatial interference-nulling, while interference alignment yields full spatial multiplexing gain even with a single antenna at each source.
CHAPTER 5

Fig. 5.1: A MISO broadcast network with a source equipped with M antennas, and K single-antenna destinations.
5.1. SYSTEM MODEL

(5.1)

(5.2)

[Figure: The time axis is divided into an iterative feedback phase of length $T_F$ followed by a data transmission phase of length $T_{TX}$.]

(5.3)

We assume that

$$\frac{T_F}{T_F + T_{TX}} \ll 1$$
and the transmission rate loss due to the iterative feedback scheme
can be neglected in our discussion. We assume that each destination
Di knows its channel from the source perfectly, that is, Di knows
$$\lim_{P\to\infty}\frac{R_{\text{sum}}}{\log P} = M. \qquad (5.5)$$

5.2. THE K = M CASE

(5.6)
The proof for the above result is refreshingly simple. In fact, with
slight modification, the feedback and transmission scheme discussed for
the MISO interference network case achieves full spatial multiplexing
gain in MISO broadcast networks as well. The details are as follows:
$$\mathbf{x}(r) = \sum_{k=1}^{M}\hat{v}_k(r)\,x_k(r) \qquad (5.8)$$
$$\sum_{k\neq i}\frac{P}{M}\,\big|h_i^H(r)\,\hat{v}_k(r)\big|^2 \le \frac{4N(M-1)\,\|h_i\|^2}{M}. \qquad (5.9)$$
Choosing $x_k(r)$, $\forall k, r$, to be i.i.d. Gaussian and assuming that destination $D_i$ knows its effective channel coefficient $h_i^H(r)\,\hat{v}_i(r)$ and its interference power $\sum_{k\neq i}(P/M)\big|h_i^H(r)\,\hat{v}_k(r)\big|^2$, the rate of communication over the link $S \to D_i$ is then lower-bounded according to

$$R_i \ge \frac{1}{N}\sum_{r=0}^{N-1}\log\!\left(1 + \frac{\frac{P}{M}\,\big|h_i^H(r)\,\hat{v}_i(r)\big|^2}{\sum_{k\neq i}\frac{P}{M}\,\big|h_i^H(r)\,\hat{v}_k(r)\big|^2 + N_o}\right) \qquad (5.10)$$

$$\ge \frac{1}{N}\sum_{r=0}^{N-1}\log\!\left(1 + \frac{\frac{P}{M}\,\big|h_i^H(r)\,\hat{v}_i(r)\big|^2}{\frac{4N(M-1)\|h_i\|^2}{M} + N_o}\right) \qquad (5.11)$$

so that

$$\lim_{P\to\infty}\frac{R_{\text{sum}}}{\log P} = \lim_{P\to\infty}\frac{\sum_{i=1}^{M} R_i}{\log P} = M \qquad (5.12)$$
and hence, the full spatial multiplexing gain is achieved. Note that
this result generalizes the result in Jindal (2006) in three aspects,
namely i) to any fading distribution with finite energy; ii) to the
frequency-selective case and iii) to a per channel realization basis
rather than just in the sense of averages over the random channel.
5.3. THE K ≫ M CASE

(5.13)

(5.14)

[Figure: The quantization space is divided into non-overlapping regions $G_i$, $G_j$, with centers $g_i$, $g_j$; each region is further divided into quantization regions $Q_{i,1}, Q_{i,2}, \ldots$]
contradiction.

We can now obtain Bound 1 as follows. The condition (5.17) implies that each unit-magnitude vector $u \in \mathbb{C}^{ML\times 1}$ lies in at least one of the extended regions $\widetilde{G}_1, \ldots, \widetilde{G}_K$, where

$$\widetilde{G}_s \triangleq \left\{r : r \in \mathbb{C}^{ML\times 1},\, \|r\|^2 = 1 \text{ and } |r^H g_s| \ge \cos(\theta)\right\}. \qquad (5.18)$$

Therefore, the sum of the surface areas of all the extended regions $\widetilde{G}_s$ must be greater than the total surface area of the unit hypersphere in the $ML$-dimensional complex space. Since each of the extended regions $\widetilde{G}_s$ has the same surface area $2\pi^{ML}(\sin(\theta))^{2(ML-1)}/(ML-1)!$ (see Appendix A.8) and the total surface area of the unit hypersphere is given by $2\pi^{ML}/(ML-1)!$ (see Appendix A.7), we obtain

$$K\,\frac{2\pi^{ML}\,(\sin(\theta))^{2(ML-1)}}{(ML-1)!} \ge \frac{2\pi^{ML}}{(ML-1)!} \qquad (5.19)$$

$$(\sin(\theta))^{2(ML-1)} \ge \frac{1}{K} \qquad (5.20)$$

$$\left(2\sin\!\left(\frac{\theta}{2}\right)\cos\!\left(\frac{\theta}{2}\right)\right)^{2(ML-1)} \ge \frac{1}{K} \qquad (5.21)$$

$$\left(\sin\!\left(\frac{\theta}{2}\right)\right)^{2(ML-1)} \ge \frac{1}{K\,2^{2(ML-1)}} \qquad (5.22)$$
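Bound 1 can be read as a floor on how narrow the K regions can be: from (5.20), $\theta \ge \arcsin\!\big(K^{-1/(2(ML-1))}\big)$, so the minimum opening angle shrinks only polynomially in $K$. A small sketch (helper name illustrative):

```python
import math

def min_angle_lower_bound(K: int, M: int, L: int) -> float:
    """From (5.20): covering the unit sphere in C^{ML} with K caps forces
    sin(theta) >= K^(-1/(2(ML-1))), i.e. theta >= arcsin of that."""
    d = M * L - 1
    return math.asin(K ** (-1.0 / (2 * d)))

# More destinations allow narrower regions, but only slowly:
for K in (4, 64, 1024):
    print(K, min_angle_lower_bound(K, M=2, L=2))
```

For $M = L = 2$ (so $ML - 1 = 3$), $K = 64$ gives exactly $\sin(\theta) \ge 1/2$, i.e. $\theta \ge \pi/6$.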
and

(5.23)

(5.25)

none of the quantization regions $Q_{s,p}$, $\forall s, p$, overlap with another quantization region. Since there are a total of $K2^{N_f}$ quantization regions $Q_{s,p}$, each $Q_{s,p}$ has a surface area equal to $2\pi^{ML}(\sin(\theta_1/2))^{2(ML-1)}/(ML-1)!$ (see Appendix A.8), and the total surface area of the $ML$-dimensional unit hypersphere is $2\pi^{ML}/(ML-1)!$ (see Appendix A.7), we obtain

$$K2^{N_f}\,\frac{2\pi^{ML}\left(\sin\frac{\theta_1}{2}\right)^{2(ML-1)}}{(ML-1)!} \le \frac{2\pi^{ML}}{(ML-1)!} \qquad (5.26)$$

$$\left(\sin\frac{\theta_1}{2}\right)^{2(ML-1)} \le \frac{1}{K2^{N_f}} \qquad (5.27)$$

$$\ge \frac{1}{K\,2^{4(ML-1)}}. \qquad (5.28)$$
less than $\cos(\theta_1)$ and the absolute value of the inner product of each vector with $g_s$ being greater than $\cos\!\left(\frac{\theta-\theta_1}{2}\right)$. Therefore, $2^{N_f}$ can not be the solution of the line-packing problem, which results in a contradiction.

We can now obtain Bound 3 as follows. The condition (5.28) implies that each unit-magnitude vector $u \in \widetilde{G}_s$ lies in at least one of the extended regions $\widetilde{Q}_{s,p}$, $p = 1, \ldots, 2^{N_f}$, where

$$\widetilde{Q}_{s,p} \triangleq \left\{r : r \in \mathbb{C}^{ML\times 1},\, \|r\|^2 = 1 \text{ and } |r^H q_{s,p}| \ge \cos(\theta_1)\right\}.$$

Therefore, the sum of the surface areas of the extended regions $\widetilde{Q}_{s,p}$, $\forall p$, must be greater than the total surface area of $\widetilde{G}_s$. Since each of the extended regions $\widetilde{Q}_{s,p}$ has the same surface area $2\pi^{ML}(\sin(\theta_1))^{2(ML-1)}/(ML-1)!$ and the total surface area of $\widetilde{G}_s$ is given by $2\pi^{ML}(\sin(\theta-\theta_1))^{2(ML-1)}/(ML-1)!$ (see Appendix A.8), we obtain the following lower-bound:

$$2^{N_f}\,\frac{2\pi^{ML}\,(\sin(\theta_1))^{2(ML-1)}}{(ML-1)!} \ge \frac{2\pi^{ML}\,(\sin(\theta-\theta_1))^{2(ML-1)}}{(ML-1)!} \qquad (5.29)$$

$$2^{N_f}\,(\sin(\theta_1))^{2(ML-1)} \ge (\sin(\theta-\theta_1))^{2(ML-1)} \ge \left(\sin\frac{\theta}{2}\right)^{2(ML-1)} \qquad (5.30)$$

(since $\theta_1 < \theta/2$), which together with (5.22) and $\sin(\theta_1) \le 2\sin(\theta_1/2)$ yields

$$2^{N_f}\left(\sin\frac{\theta_1}{2}\right)^{2(ML-1)} \ge \frac{1}{K\,2^{4(ML-1)}} \qquad (5.31)$$

and hence Bound 3 is proved.
$$\left(\sin\frac{\theta_1}{2}\right)^{2(ML-1)} \le \frac{1}{K2^{N_f}} \qquad \text{(from Bound 2)}. \qquad (5.34)$$

This upper bound will be later used to prove achievability of full spatial multiplexing gain.
(5.35)

$$y_i[L-1] = \sqrt{\frac{P}{M}}\,\sum_{l=0}^{L-1} h_i^H[l]\,g_s[l] + z_i[L-1] \qquad (5.37)$$

$$= \sqrt{\frac{P}{M}}\,\|h_i\|\,w_i^H g_s + z_i[L-1] \qquad (5.38)$$

so that each destination $D_i$ can compute

$$\big|w_i^H g_s\big|^2 \approx \frac{M}{P}\,\frac{|y_i[L-1]|^2}{\|h_i\|^2}. \qquad (5.39)$$
(5.40)

(5.41)

$$\ge \frac{1}{K\,2^{4(ML-1)}} \qquad \text{(from Bound 3)}. \qquad (5.42)$$

(5.45)
Since the above probability goes to zero for large but finite (independent of K and P ) values of To , a finite number of iterations is
sufficient to identify at least one destination with arbitrarily high
probability, and hence, a finite number of iterations is sufficient to
identify M destinations with arbitrarily high probability.
$$\sin\frac{\theta_1}{2} \le \left(\frac{1}{K2^{N_f}}\right)^{\frac{1}{2(ML-1)}}. \qquad (5.47)$$

$$R_i \ge \frac{1}{N}\sum_{r=0}^{N-1}\log\!\left(1 + \frac{\frac{P}{M}\,\big|h_i^H(r)\,\hat{v}_i(r)\big|^2}{\sum_{k\neq i}\frac{PN}{M}\,\|h_i\|^2\,(d^{\max})^2 + N_o}\right) \qquad (5.49)$$

$$\ge \frac{1}{N}\sum_{r=0}^{N-1}\log\!\left(1 + \frac{\frac{P}{M}\,\big|h_i^H(r)\,\hat{v}_i(r)\big|^2}{\sum_{k\neq i}\frac{4PN}{M}\,\|h_i\|^2\left(\frac{1}{K2^{N_f}}\right)^{\frac{1}{ML-1}} + N_o}\right) \qquad (5.50)$$

which, upon choosing $N_f = (ML-1)\log P - \log K$, yields

$$R_i \ge \frac{1}{N}\sum_{r=0}^{N-1}\log\!\left(1 + \frac{\frac{P}{M}\,\big|h_i^H(r)\,\hat{v}_i(r)\big|^2}{\sum_{k\neq i}\frac{4N}{M}\,\|h_i\|^2 + N_o}\right) \qquad (5.51)$$

$$\lim_{P\to\infty}\frac{\sum_{i=1}^{K} R_i}{\log P} = \lim_{P\to\infty}\frac{\sum_{i=1}^{M} R_i}{\log P} = M \qquad (5.52)$$
5.4. OBSERVATIONS
A multi-input single-output (MISO) broadcast network was investigated for feasibility of spatial multiplexing with limited feedback. It was assumed that the source is equipped with M antennas, that there are K single-antenna destinations and that all the channels in the network are frequency-selective with L taps. Spatial multiplexing to M of the K destinations by employing spatial interference-nulling precoding vector sequences, computed according to Section 4.3.5 of Chapter 4, was shown to result in full spatial multiplexing gain. For the case where the number of destinations K is equal to the number of source antennas M, it was observed that a feedback rate of (ML − 1) log P from each destination to the source guarantees the achievability of full spatial multiplexing gain. However, if the number of destinations K is much larger than the number of source antennas M, then a measurably smaller feedback rate of (ML − 1) log P − log K from M of the K destinations was found to be sufficient for achievability of full spatial multiplexing gain. The key to this reduction of the feedback rate by log K is a two-step channel vector quantization scheme, where the quantization space of the normalized channel vectors is first divided into K non-overlapping regions and each region is then divided into a number of Voronoi regions. The source can identify M destinations and the corresponding M non-overlapping regions with very few bits of feedback using an iterative algorithm, and the M destinations therefore need to feed back only the indices of their Voronoi regions within their respective non-overlapping regions. Provided that each of the K non-overlapping regions is approximately a (1/K)-th fraction of the total quantization space, each of the M destinations requires log K fewer bits to feed back the index of its Voronoi region compared to the case where the iterative algorithm is not followed.
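The log K saving can be made concrete with a short sketch (the helper name is hypothetical; the formulas are those of Sections 5.2 and 5.3):

```python
import math

def feedback_bits(M, L, P_dB, K=None):
    """Per-destination feedback: (ML-1) log2 P for the K = M case,
    reduced by log2 K when K >> M (two-step quantization, Section 5.3)."""
    P = 10 ** (P_dB / 10)
    bits = (M * L - 1) * math.log2(P)
    if K is not None:
        bits -= math.log2(K)
    return bits

print(feedback_bits(4, 3, 30))           # K = M case
print(feedback_bits(4, 3, 30, K=1024))   # K >> M case: exactly 10 bits fewer
```

The saving is additive in log K and hence modest next to the (ML − 1) log P term, but it comes essentially for free once the iterative region-identification phase is in place.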
Throughout this chapter, two important considerations were ignored. First, fairness of rates of communication to different destinations was not considered, that is, some destinations may get very high
rates of communication while others may have a zero rate. Second,
multiuser diversity gain that is inherent in the system due to the
CHAPTER 6
Spatial multiplexing is one of the most promising transmission techniques to achieve high communication rates in future wireless systems.
In some wireless systems such as interference and broadcast networks
however, spatial multiplexing is feasible only if the source(s) have
at least some channel knowledge. Since channel knowledge is often
obtained at the sources through limited capacity feedback links, it
is important to develop transmission schemes that can achieve full
spatial multiplexing gain in broadcast and interference networks with
limited feedback. In this thesis, we proposed (limited) feedback and
transmission schemes for three different classes of interference networks and a class of broadcast networks and showed that the schemes
achieve full spatial multiplexing gain.
APPENDIX A

A.1. UNION BOUNDS

Assume the real-valued random variables $X_1, X_2, X_3$ are such that

$$\mathrm{P}(|X_i| \ge C_i) \le P_i, \quad i = 1, 2, \quad \text{and} \quad \mathrm{P}(|X_3| \ge C_3) \le P_3. \qquad (A.1)$$

$$\le P_1 + P_3. \qquad (A.2)$$

$$\frac{|X_3|}{C_3} \qquad (A.3)$$
Bound 1:

$$\mathrm{P}\!\left(\sum_{i=1}^{M}|h_i| < k\right) = \mathrm{P}\!\left(e^{-s\sum_{i=1}^{M}|h_i|} > e^{-sk}\right) \le \min_{s>0}\, e^{sk}\,\mathrm{E}\!\left[e^{-s|h_i|}\right]^M \qquad (A.8)$$

$$\le \min_{s>0}\, e^{sk}\left[\sqrt{\frac{2}{\pi}}\,\frac{1}{s}\right]^M = e^{sk}\left[\sqrt{\frac{2}{\pi}}\,\frac{1}{s}\right]^M\Bigg|_{s=\frac{M}{k}} \qquad (A.9)$$

$$= \left(\sqrt{\frac{2}{\pi}}\,\frac{ek}{M}\right)^M. \qquad (A.10)$$

Bound 2:

$$\mathrm{P}\!\left(\sum_{i=1}^{M}|h_i| > k\right) = \mathrm{P}\!\left(e^{s\sum_{i=1}^{M}|h_i|} > e^{sk}\right) \qquad (A.11)$$

$$\le e^{-sk}\,\mathrm{E}\!\left[e^{s\sum_{i=1}^{M}|h_i|}\right] \qquad \text{(by Chebyshev's inequality)} \qquad (A.12)$$

$$= e^{-sk}\left(\mathrm{E}\!\left[e^{s|h_i|}\right]\right)^M \qquad \text{(since the } h_i \text{ are i.i.d.)} \qquad (A.13)$$

$$= e^{-sk}\left[e^{\frac{s^2}{2}}\,2\big(1 - Q(s)\big)\right]^M \qquad (A.14)$$

$$\le e^{-sk}\left[2e^{\frac{s^2}{2}}\right]^M \qquad (A.15)$$

$$\le \min_{s>0}\left[2e^{\frac{s^2}{2}}\right]^M e^{-sk} = \left[2e^{\frac{s^2}{2}}\right]^M e^{-sk}\Bigg|_{s=\frac{k}{M}} \qquad (A.16, A.17)$$

$$= \left(2e^{-\frac{k^2}{2M^2}}\right)^M. \qquad (A.18)$$
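Bound 2 is a standard Chernoff bound and can be sanity-checked by Monte Carlo; the bound is loose but valid for i.i.d. standard Gaussians $h_i$:

```python
import math
import random

random.seed(0)
M, k, trials = 4, 10.0, 20000

# Chernoff bound (A.18): P(sum_i |h_i| > k) <= (2 exp(-k^2 / (2 M^2)))^M
bound = (2 * math.exp(-k ** 2 / (2 * M ** 2))) ** M

exceed = sum(
    1 for _ in range(trials)
    if sum(abs(random.gauss(0.0, 1.0)) for _ in range(M)) > k
)
print(exceed / trials, "<=", bound)
```

With these parameters the bound is on the order of $10^{-5}$ while the true tail probability is far smaller, which is typical of Chernoff-type estimates.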
A.3. INTERSECTION BOUND

Assume the real-valued random variables $X_1, X_2, X_3$ are such that

$$\mathrm{P}(|X_i| \ge C_i) \ge P_i, \quad i = 1, 2, \quad \text{and} \quad \mathrm{P}(|X_3| \le C_3) \ge P_3. \qquad (A.19)$$

$$\ge P_1 P_3. \qquad (A.20)$$

$$\frac{|X_3|}{C_3} \qquad (A.21)$$
Defining

$$V_k \triangleq [v_k^1 \;\ldots\; v_k^{d_k}] \in \mathbb{C}^{N\times d_k} \qquad (A.22)$$

$$x_k \triangleq [x_k^1 \;\ldots\; x_k^{d_k}]^T \in \mathbb{C}^{d_k\times 1} \qquad (A.23)$$

so that

$$\tilde{x}_k = V_k x_k \qquad (A.24)$$

the received signal at $D_i$ is

$$\tilde{y}_i = \sum_{k=1}^{M} H_{i,k}^H V_k x_k + \tilde{z}_i. \qquad (A.25)$$
(A.26)

(A.27)

$$k \notin \{1, i\},\; i \neq 1 \qquad (A.28)$$

(A.29)

where

$$E = (H_{2,1}^H)^{-1} H_{2,3}^H V_3 \in \mathbb{C}^{N\times d_3} \qquad (A.30)$$

$$S_k = (H_{1,k}^H)^{-1} H_{1,3}^H (H_{2,3}^H)^{-1} H_{2,1}^H \in \mathbb{C}^{N\times N}, \qquad k = 2, \ldots, M \qquad (A.31)$$

$$D_{i,k} = (H_{i,1}^H)^{-1} H_{i,k}^H S_k \in \mathbb{C}^{N\times N}, \qquad i = 2, 3, \ldots, M,\; k = 2, 3, \ldots, i-1, i+1, \ldots, M. \qquad (A.32)$$
where $\alpha_{i,k} \in \{0, 1, \ldots, t-1\}$, $(i,k) \in S$. Notice that since each $\alpha_{i,k}$ can take $t$ values and there are $Q = (M-1)(M-2)-1$ terms in the product in (A.34), precisely $t^Q$ column vectors of the form (A.34) exist. The $(t+1)^Q$ column vectors of $V_1$ are chosen to be of the same form as the column vectors of $E$, that is,

$$\prod_{(i,k)\in S}(D_{i,k})^{\alpha_{i,k}}\,1_N \qquad (A.35)$$

(A.36)

and hence, (A.29) is satisfied. For the general case where $(i,k) \in S$, let us consider any column vector of the matrix $E$, say $e$, that corresponds to a given set $\{\alpha_{i,k} : (i,k) \in S\}$. Then,

$$D_{i,k}\,e = D_{i,k}\prod_{(p,q)\in S}(D_{p,q})^{\alpha_{p,q}}\,1_N \qquad (A.37)$$

$$= g \qquad (A.38)$$

(A.39)

and hence,

$$D_{i,k}E \subseteq V_1, \qquad (i,k) \in S. \qquad (A.40)$$
Using

$$H_{i,j}^H = \sum_{l=0}^{L-1} h_{i,j}[l]\,F^l, \qquad \forall i, j \qquad (A.41)$$

where $F$ denotes the $N\times N$ cyclic-shift matrix, each column vector $v_k^m$, $\forall m$, is of the form

$$\prod_{(i,k)\in S}\left((H_{i,1}^H)^{-1} H_{i,k}^H S_k\right)^{\alpha_{i,k}}\,1_N \qquad (A.42)$$

$$= \prod_{(i,k)\in S}\left(H_{i,1}^H H_{1,k}^H H_{2,3}^H\right)^{-\alpha_{i,k}}\left(H_{i,k}^H H_{1,3}^H H_{2,1}^H\right)^{\alpha_{i,k}}\,1_N$$

(using the commutative property of the diagonal matrices $H_{i,k}$, $\forall i, k$). Multiplying by the common factor $\prod_{(i,k)\in S}(H_{i,1}^H H_{1,k}^H H_{2,3}^H)^{t}$ yields column vectors of the form

$$\prod_{(i,k)\in S}\left(H_{i,1}^H H_{1,k}^H H_{2,3}^H\right)^{t-\alpha_{i,k}}\left(H_{i,k}^H H_{1,3}^H H_{2,1}^H\right)^{\alpha_{i,k}}\,1_N \qquad (A.43)$$

so that, expanding the resulting polynomial in $F$,

$$v_1^m = \prod_{(i,k)\in S}\left(H_{i,1}^H H_{1,k}^H H_{2,3}^H\right)^{t}\;\sum_{g=0}^{3tQ(L-1)}\beta_{1,g}^m\,F^g\,1_N \qquad (A.44)$$

where $\beta_{1,g}^m$, $g = 1, \ldots, 3tQ(L-1)$, are some polynomial functions in $h_{i,j}[l]$, $\forall i, j, l$, with powers of $h_{i,j}[l]$ being functions of $\alpha_{i,j}$. Defining $f_g \in \mathbb{C}^{N\times 1}$ as $f_g \triangleq F^g\,1_N$, we can further simplify (A.44) as

$$v_1^m = \underbrace{\prod_{(i,k)\in S}\left(H_{i,1}^H H_{1,k}^H H_{2,3}^H\right)^{t}}_{(a)}\;\underbrace{\sum_{g=0}^{3tQ(L-1)}\beta_{1,g}^m\,f_g}_{(b)}. \qquad (A.45)$$
Since term (a) is the same for all $m$ and term (b) is a linear combination of the vectors $f_g$, $g = 0, \ldots, 3tQ(L-1)$, the vectors $v_1^m$, $m = 1, \ldots, (t+1)^Q$, can be linearly independent only if
$$3tQ(L-1) + 1 \geq (t+1)^Q \tag{A.46}$$
$$L \geq \frac{(t+1)^Q - 1}{3tQ} + 1 \tag{A.47}$$
$$L > \frac{(t+1)^Q - 1}{3tQ}, \tag{A.48}$$
that is, $L$ must be greater than $((t+1)^Q - 1)/(3tQ)$ for feasibility of interference alignment. Also note that satisfying (A.48) indeed guarantees that $v_1^m$, $m = 1, \ldots, (t+1)^Q$, are linearly independent. This is because $h_{i,j}[l]$, $\forall i, j, l$, are drawn independently from a continuous probability density function, and hence, the probability of the coefficient vectors $[\psi_{1,0}^{m}\; \psi_{1,1}^{m}\; \cdots\; \psi_{1,3tQ(L-1)}^{m}]$, $\forall m$, being linearly dependent is zero.

Finally, we note that the above analysis can be easily extended to show that $V_k$, $k \neq 1$, will be of rank $t^Q$ (that is, $v_k^m$, $\forall m$, will be linearly independent) if and only if
$$L > \frac{t^Q - 1}{3tQ}. \tag{A.49}$$
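For concreteness, the feasibility condition (A.48) can be evaluated numerically. The sketch below computes the smallest admissible number of taps $L$, taking $Q = (M-1)(M-2) - 1$ as in the text; the values of $M$ and $t$ are illustrative choices.

```python
import math

# Smallest integer L satisfying the feasibility condition (A.48):
# L > ((t+1)^Q - 1) / (3 t Q), with Q = (M-1)(M-2) - 1.
def min_taps(t: int, M: int) -> int:
    Q = (M - 1) * (M - 2) - 1
    # strict inequality, so take floor of the threshold and add one
    return math.floor(((t + 1) ** Q - 1) / (3 * t * Q)) + 1

for t in (1, 2, 4):
    print(t, min_taps(t, M=4))
```

Note how the required channel length grows rapidly with $t$, reflecting the $(t+1)^Q$ growth of the precoder dimension.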
A.6. LINE PACKING BOUND

We use a bounding technique from Mukkavilli et al. (2003) to upper-bound $2^{N_d}$ in terms of $\sin(\delta)$. In particular, we first define a spherical cap around each unit-magnitude quantization vector $p_r$, $r = 1, \ldots, N_{\mathrm{pack}}$, as
$$\Gamma(p_r) = \left\{ x \in \mathbb{C}^{M} : \|x\|^2 = 1 \text{ and } |p_r^H x|^2 \geq \cos^2\!\left(\frac{\delta}{2}\right) \right\}.$$

The spherical caps $\Gamma(p_r)$ have the following two key properties:
1. There is no overlap between any two of the spherical caps. This follows from Appendix A.9.
2. The surface area of the spherical caps satisfies (see Appendix A.8)
$$|\Gamma(p_r)| = \frac{2\pi^M \left(\sin(\delta/2)\right)^{2(M-1)}}{(M-1)!}, \quad r = 1, \ldots, 2^{N_d}.$$

Noting that the total surface area of the $M$-dimensional unit hypersphere is given by $|\Gamma| = 2\pi^M/(M-1)!$ (see Appendix A.7) and using the fact that the spherical caps are non-overlapping, we get the following upper-bound:
$$2^{N_d}\, \frac{2\pi^M}{(M-1)!} \left(\sin\frac{\delta}{2}\right)^{2(M-1)} \leq \frac{2\pi^M}{(M-1)!} \tag{A.50}$$
$$2^{N_d} \leq \left(\sin\frac{\delta}{2}\right)^{-2(M-1)} \tag{A.51}$$
$$\leq \left(\frac{\sin(\delta)}{2}\right)^{-2(M-1)}. \tag{A.52}$$
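The packing bound (A.52) directly limits the size of any quantization codebook with a prescribed angular separation. The sketch below simply evaluates this limit; the antenna count $M$ and the separation angle are illustrative choices.

```python
import math

# Evaluate the line packing bound (A.52): the number of unit vectors with
# pairwise angular separation delta is at most (sin(delta)/2)^(-2(M-1)).
def max_codebook_size(M: int, delta: float) -> float:
    return (math.sin(delta) / 2) ** (-2 * (M - 1))

# e.g. M = 4 dimensions and 30-degree separation
print(max_codebook_size(4, math.pi / 6))
```

With $M = 4$ and $\delta = \pi/6$, the bound evaluates to $(0.25)^{-6} = 4096$ vectors, i.e. at most $N_d = 12$ feedback bits at this separation.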
A.7. SURFACE AREA OF M-DIMENSIONAL HYPERSPHERE

In this appendix, we elaborate on the computation of the surface area of an $M$-dimensional complex hypersphere, as given in Appendix I of Mukkavilli et al. (2003) and based on a methodology given in Kendall (1961).

We will first compute the volume of the complex sphere $\|h\| \leq r$, using the change of variables
$$x_{jr} = r_j \cos(\theta_j) \tag{A.53}$$
$$x_{ji} = r_j \sin(\theta_j) \tag{A.54}$$
where the Jacobian is block-diagonal with blocks
$$T_j = \begin{bmatrix} \cos(\theta_j) & \sin(\theta_j) \\ -r_j \sin(\theta_j) & r_j \cos(\theta_j) \end{bmatrix}, \quad j = 1, \ldots, M. \tag{A.55}$$
Therefore,
$$|J| = \det\big(\mathrm{diag}(T_1, T_2, \ldots, T_M)\big) \tag{A.57}$$
$$= \prod_{j=1}^{M} \det(T_j) \tag{A.58}$$
$$= r_1 r_2 \cdots r_M. \tag{A.59}$$
$$V_M(r) = \int \cdots \int_{\sum_{j=1}^{M}\left(x_{jr}^2 + x_{ji}^2\right) \leq r^2} \prod_{j=1}^{M} dx_{jr}\, dx_{ji} \tag{A.60}$$
which, upon the change of variables (A.53)–(A.54) with $0 \leq \theta_i \leq 2\pi$, becomes
$$= (2\pi)^M \int \cdots \int_{\sum_{j=1}^{M} r_j^2 \leq r^2} r_1 r_2 \cdots r_M\, dr_1 \cdots dr_M. \tag{A.63}$$
We next apply the spherical substitution (with $c_j \triangleq \cos(\phi_j)$, $s_j \triangleq \sin(\phi_j)$, $0 \leq \phi_j \leq \pi/2$)
$$r_{M-j+1} = u\, c_1 c_2 \cdots c_{j-1} s_j, \quad j = 1, \ldots, M-1, \tag{A.64}$$
$$r_1 = u\, c_1 c_2 \cdots c_{M-1}, \tag{A.65}$$
so that
$$V_M(r) = (2\pi)^M \int_{u=0}^{r} u^{2M-1}\, du \int_{\phi_1=0}^{\pi/2} \int_{\phi_2=0}^{\pi/2} \cdots \int_{\phi_{M-1}=0}^{\pi/2} \left(c_1^{2M-3} s_1\right)\left(c_2^{2M-5} s_2\right) \cdots \left(c_{M-2}^{3} s_{M-2}\right)\left(c_{M-1} s_{M-1}\right)\, d\phi_1\, d\phi_2 \cdots d\phi_{M-1} \tag{A.66}$$
$$= (2\pi)^M\, \frac{r^{2M}}{2M} \cdot \frac{1}{2M-2} \cdot \frac{1}{2M-4} \cdots \frac{1}{4} \cdot \frac{1}{2} \tag{A.67}$$
$$= \frac{(2\pi)^M r^{2M}}{2^M M!}. \tag{A.68}$$
The surface area then follows by differentiation:
$$A_M(r) = \frac{dV_M(r)}{dr} = \frac{2\pi^M r^{2M-1}}{(M-1)!}. \tag{A.69}$$
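The closed form (A.68) can be checked by Monte Carlo integration, since the complex ball $\{h \in \mathbb{C}^M : \|h\| \leq r\}$ is simply a real $2M$-dimensional ball. The values of $M$, $r$ and the sample count below are illustrative choices.

```python
import math
import numpy as np

# Monte Carlo check of (A.68): vol{h in C^M : ||h|| <= r}
#   = (2*pi)^M r^(2M) / (2^M M!) = pi^M r^(2M) / M!.
rng = np.random.default_rng(2)
M, r, n = 2, 1.0, 1_000_000
# sample uniformly in the cube [-r, r]^{2M} and count points inside the ball
pts = rng.uniform(-r, r, size=(n, 2 * M))
inside = np.mean((pts**2).sum(axis=1) <= r**2)
vol_mc = inside * (2 * r) ** (2 * M)
vol_formula = math.pi**M * r ** (2 * M) / math.factorial(M)
print(vol_mc, vol_formula)
assert abs(vol_mc - vol_formula) < 0.05
```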
A.8. SURFACE AREA OF THE SPHERICAL CAP

$$V_M(r, r_o) = (2\pi)^M \int_{r_1 = r_o}^{r} r_1 \left[ \int \cdots \int_{\sum_{j=2}^{M} r_j^2 \leq r^2 - r_1^2} r_2 \cdots r_M\, dr_2 \cdots dr_M \right] dr_1 \tag{A.70}$$
Note that the multiple integral in brackets in (A.70) is nothing but the volume of an $(M-1)$-dimensional complex hypersphere of radius $\sqrt{r^2 - r_1^2}$ with a scaling factor of $(2\pi)^{M-1}$, which is given by (A.68). On substitution, we get
$$V_M(r, r_o) = \frac{(2\pi)^M}{2^{M-1}(M-1)!} \int_{r_1 = r_o}^{r} r_1 \left(r^2 - r_1^2\right)^{M-1}\, dr_1 \tag{A.71}$$
$$= \frac{(2\pi)^M}{2^{M-1}(M-1)!} \int_{u = r_o^2}^{r^2} \frac{\left(r^2 - u\right)^{M-1}}{2}\, du, \quad \text{where } u = r_1^2 \tag{A.72}$$
$$= \frac{(2\pi)^M}{2^M M!} \left(r^2 - r_o^2\right)^M. \tag{A.73}$$
Finally, the surface area of the spherical cap is obtained by differentiating $V_M(r, r_o)$ as follows:
$$A_M(r, r_o) = \frac{dV_M(r, r_o)}{dr} = \frac{2\pi^M}{(M-1)!}\, r \left(r^2 - r_o^2\right)^{M-1}. \tag{A.74}$$
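The cap-volume formula (A.73) can also be verified by Monte Carlo, interpreting $V_M(r, r_o)$ as the volume of the region $\{h \in \mathbb{C}^M : \|h\| \leq r,\ |h_1| \geq r_o\}$ as set up in (A.70). The values of $M$, $r$, $r_o$ and the sample count are illustrative choices.

```python
import math
import numpy as np

# Monte Carlo check of (A.73):
# vol{h in C^M : ||h|| <= r, |h_1| >= r_o} = pi^M (r^2 - r_o^2)^M / M!.
rng = np.random.default_rng(3)
M, r, ro, n = 2, 1.0, 0.5, 1_000_000
pts = rng.uniform(-r, r, size=(n, 2 * M))
norm_ok = (pts**2).sum(axis=1) <= r**2
h1_ok = pts[:, 0] ** 2 + pts[:, 1] ** 2 >= ro**2  # |h_1|^2 from its real/imag parts
vol_mc = np.mean(norm_ok & h1_ok) * (2 * r) ** (2 * M)
vol_formula = math.pi**M * (r**2 - ro**2) ** M / math.factorial(M)
print(vol_mc, vol_formula)
assert abs(vol_mc - vol_formula) < 0.05
```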
A.9. NON-OVERLAPPING SPHERICAL CAPS

If the spherical caps are defined as
$$\Gamma(p_i) \triangleq \left\{ r \in \mathbb{C}^{M} : \|r\|^2 = 1 \text{ and } |p_i^H r|^2 \geq \cos^2\!\left(\frac{\delta}{2}\right) \right\}, \quad i = 1, 2,$$
and $|p_1^H p_2| < \cos(\delta)$, then $\Gamma(p_1)$ and $\Gamma(p_2)$ do not overlap.

This can be proved by contradiction as follows: Let us assume that the spherical caps $\Gamma(p_1)$ and $\Gamma(p_2)$ overlap. Then, the intersections of $\Gamma(p_1)$ and $\Gamma(p_2)$ with the linear span of $p_1$ and $p_2$ (i.e., the two-dimensional plane containing all possible linear combinations of $p_1$ and $p_2$) must also overlap. Let $z_o$ be a unit-magnitude vector that lies in the region where the two caps overlap, so that
$$|p_1^H z_o| \geq \cos\!\left(\frac{\delta}{2}\right) \quad \text{and} \quad |p_2^H z_o| \geq \cos\!\left(\frac{\delta}{2}\right).$$
The angular triangle inequality then gives $\arccos(|p_1^H p_2|) \leq \arccos(|p_1^H z_o|) + \arccos(|p_2^H z_o|) \leq \delta$, that is, $|p_1^H p_2| \geq \cos(\delta)$, which contradicts the assumption $|p_1^H p_2| < \cos(\delta)$.
Furthermore, since $|q_{s,p}^H g_s| \geq \cos(\delta_1/2)$, we have
$$\arccos\big(|q_{s,p}^H g_s|\big) \leq \frac{\delta_1}{2}. \tag{A.10.77}$$
Consider
$$\cos\big(\arccos(|u^H q_{s,p}|) + \arccos(|q_{s,p}^H g_s|)\big) \tag{A.10.79}$$
$$= |u^H q_{s,p}|\,|q_{s,p}^H g_s| - \sqrt{1 - |u^H q_{s,p}|^2}\, \sqrt{1 - |q_{s,p}^H g_s|^2}. \tag{A.10.80}$$
Let $\{q_{s,p}, t_1, \ldots, t_{ML-1}\}$ be a set of orthonormal vectors in the $ML$-dimensional complex space such that $u$ and $g_s$ can be expanded on this set as
$$u = u_1 q_{s,p} + \sum_{r=1}^{ML-1} u_r t_r \tag{A.10.81}$$
$$g_s = g_{s,1} q_{s,p} + \sum_{r=1}^{ML-1} g_{s,r} t_r. \tag{A.10.82}$$
Then, since $|u^H q_{s,p}| = |u_1|$, $|q_{s,p}^H g_s| = |g_{s,1}|$, and both $u$ and $g_s$ have unit norm,
$$\cos\big(\arccos(|u^H q_{s,p}|) + \arccos(|q_{s,p}^H g_s|)\big) = |u_1|\,|g_{s,1}| - \sqrt{\sum_{r=1}^{ML-1} |u_r|^2}\, \sqrt{\sum_{r=1}^{ML-1} |g_{s,r}|^2} \tag{A.10.83}$$
$$= |u_1 g_{s,1}| - \sqrt{\sum_{r=1}^{ML-1} |u_r|^2}\, \sqrt{\sum_{r=1}^{ML-1} |g_{s,r}|^2} \tag{A.10.84}$$
$$\leq |u_1 g_{s,1}| - \sum_{r=1}^{ML-1} |u_r g_{s,r}| \tag{A.10.85}$$
$$\leq \left| \bar{u}_1 g_{s,1} + \sum_{r=1}^{ML-1} \bar{u}_r g_{s,r} \right| \tag{A.10.86}$$
$$= |u^H g_s|, \tag{A.10.87}$$
where (A.10.85) follows from the Cauchy–Schwarz inequality and (A.10.86) from the triangle inequality.
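The chain of inequalities above establishes the angular triangle inequality $\arccos(|u^H g|) \leq \arccos(|u^H q|) + \arccos(|q^H g|)$ for unit-norm complex vectors, which can be confirmed numerically. The dimension and trial count below are illustrative choices.

```python
import numpy as np

# Numerical check of the angular triangle inequality behind (A.10.79)-(A.10.87).
rng = np.random.default_rng(4)
dim, trials = 8, 5_000

def rand_unit(rng, dim):
    v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def angle(a, b):
    # principal angle between the complex lines spanned by a and b
    return np.arccos(min(1.0, abs(np.vdot(a, b))))

max_gap = -np.inf
for _ in range(trials):
    u, q, g = rand_unit(rng, dim), rand_unit(rng, dim), rand_unit(rng, dim)
    max_gap = max(max_gap, angle(u, g) - (angle(u, q) + angle(q, g)))
print(max_gap)  # never exceeds zero (up to floating-point rounding)
assert max_gap <= 1e-9
```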
APPENDIX B
restated as
$$y[n] = h\,x[n] + z[n]. \tag{B.1.4}$$
In the frequency-selective case, the IO relation is
$$y[n] = \sum_{l=0}^{L-1} h[l]\,x[n-l] + z[n] \tag{B.1.6}$$
where $h[l]$, $l = 0, \ldots, L-1$, denotes the $l$-th tap of the channel impulse response between S and D.
D. Orthogonal frequency division multiplexing

Orthogonal frequency division multiplexing (OFDM) is a common transmission technique to convert an $L$-tap frequency-selective channel into $N$ parallel frequency-flat channels with $N \geq L$. Specifically, OFDM entails that the source S computes its transmit symbol $x[n]$, $n = 0, \ldots, N-1$, as the inverse discrete Fourier transform (IDFT) of an $N$-point frequency-domain sequence $x(r)$, $r = 0, \ldots, N-1$, according to
$$x[n] = \mathcal{F}^{-1}\{x(r)\} \tag{B.1.7}$$
and the destination D computes an $N$-point frequency-domain sequence $y(r)$, $r = 0, \ldots, N-1$, as the discrete Fourier transform (DFT) of its received symbol $y[n]$ according to
$$y(r) = \mathcal{F}\{y[n]\}. \tag{B.1.8}$$
It can then be shown that $x(r)$ and $y(r)$ follow the input-output relation
$$y(r) = h(r)\,x(r) + z(r), \quad r = 0, \ldots, N-1 \tag{B.1.9}$$
and the corresponding capacity is given by
$$\frac{1}{N} \sum_{r=0}^{N-1} \log\left(1 + \frac{|h(r)|^2 P}{N_o}\right). \tag{B.1.10}$$
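The per-tone relation (B.1.9) rests on the fact that, under a cyclic signal model, time-domain circular convolution becomes element-wise multiplication after the DFT. The numpy sketch below illustrates this; the values of $N$, $L$ and the random symbols are arbitrary, and noise is omitted for clarity.

```python
import numpy as np

# Sketch of the OFDM relations (B.1.7)-(B.1.9): with a cyclic signal model,
# L-tap circular convolution in time becomes per-tone scalar multiplication.
rng = np.random.default_rng(5)
N, L = 16, 4
h = rng.standard_normal(L) + 1j * rng.standard_normal(L)    # channel taps h[l]
xf = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # frequency-domain symbols x(r)
x = np.fft.ifft(xf)                                         # (B.1.7): x[n] = IDFT of x(r)
# received samples: circular convolution of x[n] with the channel taps
y = np.array([sum(h[l] * x[(n - l) % N] for l in range(L)) for n in range(N)])
yf = np.fft.fft(y)                                          # (B.1.8): y(r) = DFT of y[n]
hf = np.fft.fft(h, N)                                       # per-tone channel gains h(r)
assert np.allclose(yf, hf * xf)                             # (B.1.9): y(r) = h(r) x(r)
```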
Since a spatial multiplexing gain of one is in any case achievable without channel feedback in (B.1.5) and (B.1.10), it trivially follows that channel feedback is not required for achievability of full spatial multiplexing gain in a SISO channel. To be sure, channel feedback can be used in the frequency-selective case to increase the term inside the $\log(\cdot)$ in the capacity expression (B.1.10) through water-filling (Tse and Viswanath, 2005, Sec. 5.4.6). However, in this thesis, we have restricted our analysis to the role of channel feedback in increasing spatial multiplexing gain, and have ignored other potential benefits of feedback.
$$y[n] = H\,x[n] + z[n] \tag{B.1.11}$$
where $y[n] \in \mathbb{C}^{K \times 1}$ denotes the vector received along the $K$ antennas of D, $x[n] \in \mathbb{C}^{M \times 1}$ is the input vector transmitted along the $M$ antennas of S and $z[n] \in \mathbb{C}^{K \times 1}$ denotes the noise vector at D, all for the $n$-th time index. $H \in \mathbb{C}^{K \times M}$ is the channel matrix between S and D with its $(i,k)$-th element $\{H\}_{i,k}$ being the channel coefficient between the $k$-th antenna of S and the $i$-th antenna of D. The input vector $x[n]$ obeys the power constraint
$$E\big[\|x[n]\|^2\big] \leq P. \tag{B.1.12}$$
$$y[n] = \sum_{l=0}^{L-1} H[l]\,x[n-l] + z[n] \tag{B.1.15}$$
where $y[n]$, $x[n]$ and $z[n]$ are the same as defined for the frequency-flat case and $H[l]$, $l = 0, \ldots, L-1$, is the channel matrix between S and D with its $(i,k)$-th element $\{H[l]\}_{i,k}$ being the $l$-th tap of the channel impulse response between the $k$-th antenna of S and the $i$-th antenna of D.
With OFDM,
$$x[n] = \mathcal{F}^{-1}\{x(r)\}, \quad n = 0, \ldots, N-1 \tag{B.1.16}$$
$$y(r) = H(r)\,x(r) + z(r), \quad r = 0, \ldots, N-1 \tag{B.1.17}$$
where $H(r)$ is the channel matrix between S and D for the $r$-th tone and its $(i,k)$-th element is computed as $\{H(r)\}_{i,k} = \mathcal{F}\{\{H[l]\}_{i,k}\}$, for all $i, k$. An $L$-tap frequency-selective MIMO channel is therefore equivalent to $N$ parallel frequency-flat MIMO channels (represented by (B.1.17)) and since full spatial multiplexing gain is achievable in a frequency-flat MIMO channel without any channel feedback, it follows that channel feedback is not required for achieving full spatial multiplexing gain in frequency-selective MIMO channels as well.
$$y[n] = \sum_{m=1}^{M} h_m\, x_m[n] + z[n] \tag{B.2.18}$$
where $y[n] \in \mathbb{C}^{K \times 1}$ is the symbol vector received along the $K$ antennas of D, $x_m[n] \in \mathbb{C}$ denotes the symbol transmitted by $S_m$ and $z[n] \in \mathbb{C}^{K \times 1}$ is the noise vector along the $K$ antennas of D, all for the $n$-th time index. The channel coefficient from $S_m$ to the $i$-th antenna of D is denoted by $h_m^i$, so that $h_m = [h_m^1 \cdots h_m^K]^T$ is the single-input multi-output channel vector between $S_m$ and D. The input symbols $x_m[n]$ obey the power constraint
$$E\big[|x_m[n]|^2\big] \leq \frac{P}{M}. \tag{B.2.19}$$
$$\sum_{i=1}^{M} \log\left(1 + \frac{\lambda_i P}{M N_o}\right) \tag{B.2.21}$$
$$y_k[n] = h_k^T x[n] + z_k[n], \quad k = 1, \ldots, K \tag{B.2.22}$$
For the case $K \geq M$, Yoo et al. (2007) showed that full spatial multiplexing gain is achievable, provided that all channels are frequency-flat and that the feedback rate $r_f$ from each $D_k$ to S scales with the sum (across all source antennas) transmit power $P$ and the number of destinations $K$ according to
$$r_f = (M-1)\log P - \log K. \tag{B.2.25}$$
The results by Yoo et al. (2007) imply that the presence of a large number of destinations can reduce the feedback load required for achievability of full spatial multiplexing gain.
B. Frequency-selective case

A frequency-selective broadcast network is represented by the IO relations
$$y_k[n] = \sum_{l=0}^{L-1} h_k^T[l]\, x[n-l] + z_k[n], \quad k = 1, \ldots, K \tag{B.2.26}$$
where $y_k[n]$, $x[n]$ and $z_k[n]$ are the same as defined for the frequency-flat case and $h_k[l] = [h_k^1[l] \cdots h_k^M[l]]^T$ with $h_k^m[l]$ being the $l$-th tap of the channel impulse response between the $m$-th antenna of S and $D_k$. If a cyclic signal model such as OFDM is employed to convert all the $L$-tap frequency-selective channels to $N$ (with $N \geq L$) frequency-flat channels, the time-domain IO relations (B.2.26) can be rewritten as
$$y_k(r) = h_k^T(r)\, x(r) + z_k(r), \quad k = 1, \ldots, K. \tag{B.2.27}$$
Applying the frequency-flat result of Jindal (2006) separately on each of the $N$ tones then requires a feedback rate of
$$r_f = N(M-1)\log P. \tag{B.2.28}$$
Similarly, the results by Yoo et al. (2007) imply that for $K \geq M$ and all channels frequency-selective (with $L$ taps), the full spatial multiplexing gain of $M$ is achievable in the sum rate, provided that the feedback rate $r_f$ from each $D_k$ to S scales with the sum (across all source antennas) transmit power $P$ according to
$$r_f = N\big((M-1)\log P - \log K\big). \tag{B.2.29}$$
The feedback scaling laws in (B.2.28) and (B.2.29) are, however, unlikely to be fundamental because of their dependence on the number of tones $N$. In fact, it has been proved in this thesis (in Chapter 5) that the results by Jindal (2006) and Yoo et al. (2007) can be generalized to the frequency-selective case with feedback rates scaling as $(ML-1)\log P$ and $(ML-1)\log P - \log K$, respectively.
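The practical difference between the tone-dependent and tone-independent scalings can be made concrete with a small numerical comparison. The sketch below evaluates (B.2.29) against the Chapter 5 scaling $(ML-1)\log P - \log K$; the operating point ($N$, $M$, $L$, $K$, transmit power) and the choice of base-2 logarithms are illustrative assumptions, not values from the text.

```python
import math

# Compare the per-tone feedback scaling (B.2.29), r_f = N((M-1)log P - log K),
# with the generalized frequency-selective scaling (ML-1)log P - log K.
def rf_per_tone(N, M, P_dB, K):
    P = 10 ** (P_dB / 10)
    return N * ((M - 1) * math.log2(P) - math.log2(K))

def rf_selective(M, L, P_dB, K):
    P = 10 ** (P_dB / 10)
    return (M * L - 1) * math.log2(P) - math.log2(K)

N, M, L, K, P_dB = 64, 4, 4, 16, 30  # illustrative operating point
print(rf_per_tone(N, M, P_dB, K), rf_selective(M, L, P_dB, K))
```

For this operating point the tone-independent scaling requires roughly an order of magnitude fewer feedback bits, illustrating why the $N$-dependent laws are unlikely to be fundamental.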
$$y_i^k[n] = \sum_{m=1}^{M} g_{i,m}^k\, x_m[n] + z_i^k[n], \quad k = 1, \ldots, K,\; i = 1, \ldots, M \tag{B.2.30}$$
where $y_i^k[n] \in \mathbb{C}$ is the symbol received at $S_i^k$, $x_m[n]$ denotes the input symbol for $S_m$ and $z_i^k[n]$ is $\mathcal{CN}(0, N_o)$ noise at $S_i^k$, all for the $n$-th time index. In addition, $g_{i,m}^k$ stands for the channel coefficient between the single-antenna source $S_m$ and the single-antenna relay $S_i^k$, and the input symbols obey the power constraints
$$E\big[|x_m[n]|^2\big] \leq \frac{P}{M}, \quad m = 1, \ldots, M. \tag{B.2.31}$$
$$y_i[n] = \sum_{m=1}^{M} \sum_{k=1}^{K} h_{i,m}^k\, x_m^k[n] + z_i[n], \quad i = 1, \ldots, M \tag{B.2.32}$$
where $y_i[n] \in \mathbb{C}$ is the symbol received at $D_i$, $x_m^k[n]$ denotes the input symbol for $S_m^k$ and $z_i[n]$ is $\mathcal{CN}(0, N_o)$ noise at $D_i$, all for the $n$-th time index. In addition, $h_{i,m}^k$ stands for the channel coefficient between the single-antenna relay $S_m^k$ and the single-antenna destination $D_i$, and the input symbols obey the power constraints
$$E\big[|x_m^k[n]|^2\big] \leq \frac{P}{N}, \quad k = 1, \ldots, K,\; m = 1, \ldots, M. \tag{B.2.33}$$
Under the assumption of perfect knowledge of both $g_{i,i}^k$ and $h_{i,i}^k$ at each $S_i^k$, Morgenshtern and Bölcskei (2007) showed that for $M$ finite and $K \to \infty$, the full spatial multiplexing gain of $M$ is achievable.
168
M
X
i = 1, . . . , M
(B.2.34)
k=1
169
and each destination knows all the channels in the network and that ii) all the channels are time-selective. Satisfying these two conditions, however, requires not only infinite-capacity feedback links, but also non-causal feedback. Although Grokop et al. (2009) generalized the results of Cadambe and Jafar (2008) to time-flat (but frequency-selective) single-antenna interference networks and therefore obviated the need for non-causal feedback, the results of Grokop et al. (2009) still depend critically on the availability of infinite-capacity feedback links. The achievability of full spatial multiplexing gain with limited-capacity feedback links has so far not been established in the literature. Nevertheless, we have shown in this thesis (in Chapter 3) that naive interference alignment, along with a vector quantization scheme proposed for single-user beamforming by Mukkavilli et al. (2003) and Love et al. (2003), achieves the full spatial multiplexing gain of $M/2$, provided that each destination can broadcast at least $M(L-1)\log P$ bits in an error-free fashion to all the sources and destinations in the network.
C. MISO interference networks

MISO interference networks are defined as interference networks where each $S_i$ is equipped with multiple antennas and each $D_i$ is equipped with a single antenna. Denoting the number of antennas at $S_i$, $\forall i$, by $M_i$, the IO relations for frequency-selective MISO interference networks are given by
$$y_i[n] = \sum_{k=1}^{M} \sum_{l=0}^{L-1} h_{i,k}^T[l]\, x_k[n-l] + z_i[n], \quad i = 1, \ldots, M \tag{B.2.35}$$
with the power constraints
$$E\big[\|x_i[n]\|^2\big] \leq \frac{P}{M}. \tag{B.2.36}$$
APPENDIX C

Notation

C.1. MISCELLANEOUS

SD
GD
$x[n]$: time-domain sequence
$x(r)$: frequency-domain sequence
$|A|$
$\mathbb{R}$: set of real numbers
$\mathbb{C}$: set of complex numbers
$\mathbb{R}^{N \times M}$: set of $N \times M$ real-valued matrices
$\mathbb{C}^{N \times M}$: set of $N \times M$ complex-valued matrices
$\mathbb{Z}^{+}$: set of positive integers
$\log(x)$: logarithm of $x$
$\ln(x)$: natural logarithm of $x$
$\arccos(a)$: inverse cosine of $a$
$\Re\{a\}$: real part of $a$
$\Im\{a\}$: imaginary part of $a$
$|x|$: absolute value of $x$
$j$: $\sqrt{-1}$
$a \gg b$: $a$ is much greater than $b$
$a \ll b$: $a$ is much less than $b$
$\lfloor a \rfloor$: greatest integer that is smaller than or equal to the real number $a$
$A = B$: the sets $A$ and $B$ are equal
$(A \circledast B)[n]$: circular convolution between the $N$-point matrix sequences $A[n] \in \mathbb{C}^{P \times Q}$ and $B[n] \in \mathbb{C}^{Q \times R}$, $n = 0, \ldots, N-1$, computed as $\sum_{s=0}^{N-1} A[n-s]B[s]$, where $A[-m] = A[N-m]$, for $m = 1, \ldots, N-1$
$\mathrm{Cyclic}_s\{a\}$: vector obtained by cyclically shifting the elements of the vector $a$ downwards by $s$ positions
$A \subseteq B$: the set of column vectors of $A \in \mathbb{C}^{P \times Q}$ is a subset of the set of column vectors of $B \in \mathbb{C}^{P \times R}$
$A \cup B$: union of the sets $A$ and $B$
$\mathbf{1}_N$: $N \times 1$ all-ones vector
C.2. VECTORS AND MATRICES

lowercase letters (e.g., $a$): scalars
boldface lowercase letters (e.g., $\mathbf{a}$): vectors
boldface uppercase letters (e.g., $\mathbf{A}$): matrices
$a^T$, $A^T$: transpose of the vector $a$ and the matrix $A$
$a^*$, $A^*$: element-wise complex conjugate of the scalar $a$, the vector $a$, and the matrix $A$
$a^H$, $A^H$: Hermitian transpose of the vector $a$ and the matrix $A$
$F$: $N \times N$ DFT matrix, $\{F\}_{i,k} = (1/\sqrt{N})\, e^{-j2\pi(i-1)(k-1)/N}$, $i = 1, \ldots, N$, $k = 1, \ldots, N$
C.4. INFORMATION THEORY

$I(x, y)$: mutual information between $x$ and $y$
$h(x)$: differential entropy of $x$
APPENDIX D

Acronyms

AWGN: additive white Gaussian noise
CDF: cumulative distribution function
CSI: channel state information
DFT: discrete Fourier transform
IDFT: inverse discrete Fourier transform
IEEE: Institute of Electrical and Electronics Engineers
IA: interference alignment
IO: input-output
LHS: left-hand side
MIMO: multiple-input multiple-output
MISO: multiple-input single-output
MMSE: minimum mean-squared error
OFDM: orthogonal frequency division multiplexing
PDF: probability density function
RHS: right-hand side
SIMO: single-input multiple-output
SISO: single-input single-output
SNR: signal-to-noise ratio
SINR: signal-to-interference-and-noise ratio
References
Curriculum Vitae

Jatin Thukral
born 10 February 1982

Education
05/2005 – 04/2009
10/2003 – 04/2005
08/1999 – 05/2003

Professional Experience
05/2005 – 04/2009
06/2003 – 09/2003
06/2002 – 08/2002