max_{v,α,β,θ}  L(v, α, β, θ | B)                          (A.2)

s.t.  v_i, α_i ≥ 1,   β_s ≥ β_{s+1} ≥ 0,
      0 < θ_i, β_s < 1.                                   (A.3)

This optimization problem is not convex. To avoid being trapped in a bad local maximum, we can find a set of local maxima with different starting points and choose the best one.
Since in SSAs the allocation rule is not a continuous function of the ad CTR profile θ, the utility of a bidder is not continuous with respect to θ; as a result, the likelihood defined in Eq. (A.1) is not continuous with respect to θ.
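This jump can be made concrete with a toy score-based auction. The two-bidder setup, the specific bids and values, and the second-price payment rule below are illustrative assumptions for the sketch, not the paper's exact mechanism:

```python
# Toy illustration: in a score-based auction the allocation -- and hence a
# bidder's utility -- jumps as the ad CTR crosses a ranking threshold, so any
# likelihood built on utilities inherits the discontinuity.
# All numbers are made up for the demo.

def utility_bidder1(theta1, b1=2.0, v1=3.0, b2=1.5, theta2=0.4):
    """Single-slot auction ranked by score = theta * bid.
    Bidder 1 wins iff theta1*b1 > theta2*b2 and then pays the runner-up
    score divided by its own CTR (second-price-style rule)."""
    if theta1 * b1 > theta2 * b2:
        payment = theta2 * b2 / theta1      # per-click payment
        return theta1 * (v1 - payment)      # expected utility
    return 0.0                              # loses the slot: zero utility

threshold = 0.4 * 1.5 / 2.0                 # theta1 at which the ranking flips
below = utility_bidder1(threshold - 1e-6)
above = utility_bidder1(threshold + 1e-6)
print(below, above)   # utility jumps from 0 to a strictly positive value
```

An infinitesimal change of θ across the threshold produces a finite jump in utility, which is exactly the discontinuity the two-group optimization scheme works around.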
To address the discontinuity of the likelihood function, we split the unknown parameters into two groups and sequentially optimize them in each iteration: we treat v, α, β as one group and θ as the other; in each iteration, we first optimize v, α, β and then θ.
The function L(v, α, β, θ | B) in Eq. (A.4) is continuous with respect to the parameters in the first group. We can learn a better set of v, α, β by solving the following sub-optimization problem:
max_{v,α,β}  L(v, α, β, θ | B)                            (A.4)

s.t.  v_i, α_i ≥ 1,   β_s ≥ β_{s+1} ≥ 0,
      0 < β_s < 1.                                        (A.5)
Algorithm A.1:
  L ← −∞;
  Randomly generate an ad CTR profile θ;
  while True do
      Fix θ and update v, α, β by solving the problem shown in Eqs. (A.4) and (A.5);
      for i ← 1, 2, …, N do
          Fix θ_j, j ≠ i, and v, α, β, and update θ_i by solving the subproblem shown in Eq. (A.6) in each continuous interval;
      Calculate the likelihood L′ with the updated parameters v, α, β, θ;
      if L′ > L then L ← L′;
      else return the learned parameters v, α, β, θ;
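The alternating loop can be sketched in Python. The quadratic stand-in likelihood and the naive random hill-climbing routine below are placeholders for the actual likelihood of Eq. (A.1) and the solvers of Eqs. (A.4)–(A.6); the loop structure (update one group with the other fixed, stop when the likelihood no longer improves) mirrors the pseudocode:

```python
import random

def likelihood(v, theta):
    # Hypothetical smooth stand-in for the likelihood; maximized at (2.0, 0.3).
    return -(v - 2.0) ** 2 - (theta - 0.3) ** 2

def update_group(make_args, free, step=0.05, tries=200):
    """Crude random local search on one coordinate, the other group fixed."""
    best = free
    for _ in range(tries):
        cand = best + random.uniform(-step, step)
        if likelihood(*make_args(cand)) > likelihood(*make_args(best)):
            best = cand
    return best

random.seed(0)
v, theta = 5.0, 0.9                  # random starting point
L = float("-inf")
while True:
    v = update_group(lambda x: (x, theta), v)       # first group (here just v)
    theta = update_group(lambda x: (v, x), theta)   # then the CTR profile
    L_new = likelihood(v, theta)
    if L_new > L + 1e-9:             # likelihood improved: keep iterating
        L = L_new
    else:                            # no improvement: return the parameters
        break
print(round(v, 2), round(theta, 2))
```

In the paper's setting the first update is a constrained continuous optimization and the second is done coordinate-wise per continuous interval; here both are collapsed into the same toy local search to keep the sketch short.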
Since the above optimization problem is non-convex, it is difficult to find the global maximum. We instead find a set of local maxima from different starting points and choose the best one, which improves the chance of reaching the global maximum of the subproblem.
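A minimal sketch of this multi-start strategy, assuming a made-up one-dimensional non-convex objective in place of the actual likelihood:

```python
import random

def f(x):
    # Illustrative non-convex objective with two local maxima;
    # the one near x = 1.06 is the global maximum.
    return -(x ** 2 - 1) ** 2 + 0.5 * (x - 1)

def local_maximize(x, step=0.01, iters=2000):
    """Naive hill-climbing from a single starting point."""
    for _ in range(iters):
        for cand in (x + step, x - step):
            if f(cand) > f(x):
                x = cand
    return x

random.seed(1)
starts = [random.uniform(-2.0, 2.0) for _ in range(10)]  # random restarts
maxima = [local_maximize(x0) for x0 in starts]
best = max(maxima, key=f)          # keep the best local maximum found
print(round(best, 2))
```

A single restart can end in the inferior basin near x = −1; running several restarts and keeping the best result is what makes reaching the global maximum likely.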
As aforementioned, the likelihood function is not continuous with respect to θ. Here we do not optimize the bidders' ad CTRs simultaneously; instead, we deal with them one by one. Let us take the ad CTR θ_i of bidder i as an example and keep θ_j, j ≠ i, fixed. Given that all the other parameters are fixed, the likelihood is continuous in θ_i on each interval on which the allocation does not change, and we update θ_i by solving the following subproblem in each such continuous interval:

max_{θ_i}  L(v, α, β, θ | B)      s.t.  0 < θ_i < 1.      (A.6)
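The per-interval treatment can be sketched as follows. The breakpoints and branch functions below are invented for illustration; they stand in for the points where the slot allocation changes as θ_i varies:

```python
# Maximizing a piecewise-continuous function of one CTR: search inside each
# continuity interval separately, then keep the best interior maximum.

def piecewise(theta, breakpoints=(0.3, 0.7)):
    # Hypothetical branches; each corresponds to one allocation outcome.
    if theta < breakpoints[0]:
        return 0.2 * theta                    # branch: bidder loses the slot
    elif theta < breakpoints[1]:
        return -(theta - 0.5) ** 2 + 0.4      # branch: wins the lower slot
    return -(theta - 0.8) ** 2 + 0.3          # branch: wins the top slot

def maximize_per_interval(f, breakpoints, grid=10000):
    edges = [0.0, *breakpoints, 1.0]
    best_theta, best_val = None, float("-inf")
    for lo, hi in zip(edges, edges[1:]):      # one search per open interval
        for k in range(1, grid):              # dense grid strictly inside
            t = lo + (hi - lo) * k / grid
            if f(t) > best_val:
                best_theta, best_val = t, f(t)
    return best_theta, best_val

theta_star, val = maximize_per_interval(piecewise, (0.3, 0.7))
print(round(theta_star, 3), round(val, 3))
```

Restricting the search to each interval avoids ever comparing gradients or candidates across a discontinuity; only the finished per-interval maxima are compared.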
B.1
(v: bidder values; N: numbers of observations; θ: ad CTRs; β: slot CTRs)

        Bidder 1   Bidder 2   Bidder 3     Bidder 1   Bidder 2   Bidder 3
v       10.92      15.05      9.60         6.00       1.65       5.67
N       4711       2945       3060         6251       222        4700
θ       .1770      .1683      .3685        .1348      .6466      .1827
β       .2137      .1898      .0742        .1515      .1011      .0504
[Table: for each bidder i = 1, 2, 3, columns O, true ad CTR θ_i, learned ad CTR θ̂_i, estimation error |θ_i − θ̂_i|, and utility u_i, reported for several values of O.]
(v: bidder values; θ: ad CTRs; β: slot CTRs; L: log-likelihood)

        Query 1                      Query 2                           Query 3
        1        2        3          1         2         3             1         2         3
v       18.16    12.80    112.28     1.68×10⁶  1.29×10⁶  0.80×10⁶      8.37×10⁶  4.75×10⁶  3.08×10⁶
θ       .5066    .9091    .1641      .3614     .6129     .3459         .0220     .0474     .0910
β       .8292    .0186    .0153      .0750     .0377     .0376         .0700     .0692     .0691
L       -3.5835                      -2.0794                           -4.7594
B.2
We apply Eq. (A.7) in Algorithm A.1 to estimate the parameters of the three queries presented in the experiment section. We run 10 iterations; the results are reported in Table B.4.