Expressing ξ as above and replacing b, we rewrite the Lagrangian as
$$\tilde{L}(\tau) = -\frac{\tau^2}{2}\left(\left\|x_t - \bar{x}_t\mathbf{1}\right\|^2 + \frac{1}{2C}\right) + \tau\left(b_t \cdot x_t - \epsilon\right).$$
Taking the derivative with respect to τ and setting it to zero, we can get
$$0 = \frac{\partial \tilde{L}}{\partial \tau} = -\tau\left(\left\|x_t - \bar{x}_t\mathbf{1}\right\|^2 + \frac{1}{2C}\right) + \left(b_t \cdot x_t - \epsilon\right).$$
Then we get the update scheme of τ and project it to $[0, +\infty)$:

$$\tau_t = \max\left(0,\ \frac{b_t \cdot x_t - \epsilon}{\left\|x_t - \bar{x}_t\mathbf{1}\right\|^2 + \frac{1}{2C}}\right) = \frac{\ell_t}{\left\|x_t - \bar{x}_t\mathbf{1}\right\|^2 + \frac{1}{2C}},$$

where $\ell_t = \max\left(0,\ b_t \cdot x_t - \epsilon\right)$ denotes the $\epsilon$-insensitive loss.
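For concreteness, the step size above can be computed in a few lines. The following is a minimal sketch; the function name `pamr2_tau` and the test values are ours, not from the text:

```python
import numpy as np

def pamr2_tau(b_t, x_t, eps, C):
    # epsilon-insensitive loss: ell_t = max(0, b_t . x_t - eps)
    loss = max(0.0, float(np.dot(b_t, x_t)) - eps)
    # denominator: ||x_t - xbar_t 1||^2 + 1/(2C)
    x_bar = x_t.mean()
    denom = float(np.sum((x_t - x_bar) ** 2)) + 1.0 / (2.0 * C)
    return loss / denom
```

Because the loss is clipped at zero first, the projection onto $[0, +\infty)$ is built in.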
B.3 Derivations of CWMR
B.3.1 Proof of Proposition 10.1
Proof Since considering the nonnegativity constraint introduces too much complexity, first we relax the optimization problem without it, and later we project the solution onto the simplex domain to obtain the required portfolio.
The Lagrangian for the optimization problem (10.3) is
$$L = \frac{1}{2}\left(\log\frac{\det\Sigma_t}{\det\Sigma} + \operatorname{Tr}\left(\Sigma_t^{-1}\Sigma\right) + (\mu_t - \mu)^{\top}\Sigma_t^{-1}(\mu_t - \mu)\right) + \lambda\left(\phi\, x_t^{\top}\Sigma x_t + \mu^{\top}x_t - \epsilon\right) + \eta\left(\mu^{\top}\mathbf{1} - 1\right).$$
Taking the derivative of the Lagrangian with respect to μ and setting it to zero, we
can get the update of μ
$$0 = \frac{\partial L}{\partial \mu} = \Sigma_t^{-1}(\mu - \mu_t) + \lambda x_t + \eta\mathbf{1} \;\Longrightarrow\; \mu_{t+1} = \mu_t - \Sigma_t\left(\lambda x_t + \eta\mathbf{1}\right), \quad \text{(B.7)}$$
where $\Sigma_t$ is assumed to be nonsingular. Multiplying both sides by $\mathbf{1}^{\top}$, we can get

$$1 = 1 - \mathbf{1}^{\top}\Sigma_t\left(\lambda x_t + \eta\mathbf{1}\right) \;\Longrightarrow\; \eta = -\lambda\bar{x}_t, \quad \text{(B.8)}$$
where $\bar{x}_t = \frac{\mathbf{1}^{\top}\Sigma_t x_t}{\mathbf{1}^{\top}\Sigma_t\mathbf{1}}$ denotes the confidence-weighted average of the t-th price relatives.
Plugging Equation B.8 into Equation B.7, we can get

$$\mu_{t+1} = \mu_t - \lambda\,\Sigma_t\left(x_t - \bar{x}_t\mathbf{1}\right). \quad \text{(B.9)}$$
Moreover, taking the derivative of the Lagrangian with respect to $\Sigma$ and setting it to zero, we can have the update of $\Sigma$:

$$0 = \frac{\partial L}{\partial \Sigma} = -\frac{1}{2}\Sigma^{-1} + \frac{1}{2}\Sigma_t^{-1} + \lambda\phi\, x_t x_t^{\top} \;\Longrightarrow\; \Sigma_{t+1}^{-1} = \Sigma_t^{-1} + 2\lambda\phi\, x_t x_t^{\top}. \quad \text{(B.10)}$$
T&F Cat #K23731 — K23731_A002 — page 181 — 9/28/2015 — 20:47
Now let us solve the Lagrange multiplier λ using the KKT conditions. First, following Dredze et al. (2008), we can compute the inverse using the Woodbury identity (Golub and Van Loan 1996):
$$\Sigma_{t+1} = \left(\Sigma_t^{-1} + 2\lambda\phi\, x_t x_t^{\top}\right)^{-1} = \Sigma_t - \Sigma_t x_t \frac{2\lambda\phi}{1 + 2\lambda\phi\, x_t^{\top}\Sigma_t x_t}\, x_t^{\top}\Sigma_t. \quad \text{(B.11)}$$
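The rank-one Woodbury step in Equation B.11 is easy to check numerically. A minimal sketch; all values below are arbitrary test data, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Sigma_t = A @ A.T + np.eye(3)  # a symmetric, nonsingular covariance matrix
x = rng.standard_normal(3)
lam, phi = 0.7, 1.3

# Left side: invert Sigma_{t+1}^{-1} = Sigma_t^{-1} + 2*lam*phi*x x^T directly.
direct = np.linalg.inv(np.linalg.inv(Sigma_t) + 2 * lam * phi * np.outer(x, x))
# Right side: the Woodbury form of Equation B.11 (no full matrix inversion).
coef = 2 * lam * phi / (1 + 2 * lam * phi * (x @ Sigma_t @ x))
woodbury = Sigma_t - coef * np.outer(Sigma_t @ x, Sigma_t @ x)
assert np.allclose(direct, woodbury)
```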
The KKT conditions imply that either λ = 0, and no update is needed; or the constraint in the optimization problem (10.3) is an equality after the update. Substituting Equation B.9 and Equation B.11 into the equality version of the first constraint, we can get
$$\left(\mu_t - \lambda\Sigma_t\left(x_t - \bar{x}_t\mathbf{1}\right)\right)\cdot x_t = \epsilon - \phi\, x_t^{\top}\left(\Sigma_t - \Sigma_t x_t\frac{2\lambda\phi}{1 + 2\lambda\phi\, x_t^{\top}\Sigma_t x_t}\, x_t^{\top}\Sigma_t\right)x_t.$$
Let $M_t = \mu_t^{\top} x_t$ be the return mean, $V_t = x_t^{\top}\Sigma_t x_t$ be the return variance of the t-th trading period before updating, and $W_t = x_t^{\top}\Sigma_t\mathbf{1}$ be the return variance of the t-th price relative with cash. We can simplify the preceding equation to
$$\lambda^2\left(2\phi V_t^2 - 2\phi\bar{x}_t V_t W_t\right) + \lambda\left(2\phi\epsilon V_t - 2\phi V_t M_t + V_t - \bar{x}_t W_t\right) + \left(\epsilon - M_t - \phi V_t\right) = 0. \quad \text{(B.12)}$$
Let us define $a = 2\phi V_t^2 - 2\phi\bar{x}_t V_t W_t$, $b = 2\phi\epsilon V_t - 2\phi V_t M_t + V_t - \bar{x}_t W_t$, and $c = \epsilon - M_t - \phi V_t$. Note that the above quadratic equation may have two, one, or zero real roots. We can calculate its real roots (two real roots case: $\gamma_{t1}$ and $\gamma_{t2}$; one real root case: $\gamma_{t3}$) as follows:
$$\gamma_{t1} = \frac{-b + \sqrt{b^2 - 4ac}}{2a}, \quad \gamma_{t2} = \frac{-b - \sqrt{b^2 - 4ac}}{2a}, \quad \text{or} \quad \gamma_{t3} = -\frac{c}{b}.$$
To ensure the nonnegativity of the Lagrangian multiplier, we can project its value to
[0, +∞):
$$\lambda = \max\left\{\gamma_{t1}, \gamma_{t2}, 0\right\}, \quad \text{or} \quad \lambda = \max\left\{\gamma_{t3}, 0\right\}, \quad \text{or} \quad \lambda = 0.$$
Note that the above equations, respectively, correspond to three cases of real roots
(two, one, or zero).
In practical computation, as we only adopt the diagonal elements of a covariance
matrix, it is equivalent to compute λ from Equation B.12 but update the covariance
matrix with the following rule instead of Equation B.10:
$$\Sigma_{t+1}^{-1} = \Sigma_t^{-1} + 2\lambda\phi\operatorname{diag}^2\left(x_t\right),$$

where $\operatorname{diag}(x_t)$ denotes a diagonal matrix with the elements of $x_t$ on its main diagonal.
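Putting Equations B.9, B.12, and the diagonal update together, one such update step with a diagonal covariance can be sketched as follows. The naming (`cwmr_var_step`, `s` for the diagonal of $\Sigma_t$) is ours, and the final simplex projection of $\mu_{t+1}$ mentioned at the start of the proof is omitted:

```python
import numpy as np

def cwmr_var_step(mu, s, x, eps, phi):
    # scalar statistics of the t-th period (s holds the diagonal of Sigma_t)
    M = float(mu @ x)                       # return mean M_t
    V = float(x @ (s * x))                  # return variance V_t = x^T Sigma x
    W = float(x @ s)                        # W_t = x^T Sigma 1
    x_bar = float((s * x).sum() / s.sum())  # confidence-weighted average
    # coefficients of the quadratic (B.12) in lambda
    a = 2 * phi * V * V - 2 * phi * x_bar * V * W
    b = 2 * phi * eps * V - 2 * phi * V * M + V - x_bar * W
    c = eps - M - phi * V
    disc = b * b - 4 * a * c
    if abs(a) > 1e-12 and disc >= 0:
        r = np.sqrt(disc)
        lam = max((-b + r) / (2 * a), (-b - r) / (2 * a), 0.0)
    elif abs(b) > 1e-12:
        lam = max(-c / b, 0.0)              # single real root
    else:
        lam = 0.0                           # zero real roots: no update
    mu_new = mu - lam * s * (x - x_bar)     # Equation B.9 with diagonal Sigma
    s_new = 1.0 / (1.0 / s + 2 * lam * phi * x**2)  # diagonal variant of B.10
    return mu_new, s_new, lam
```

Note that the mean update leaves the budget constraint intact ($\mathbf{1}^{\top}\mu_{t+1} = 1$) and the diagonal variances can only shrink, since $2\lambda\phi x_t^2 \geq 0$.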
B.3.2 Proof of Proposition 10.2
Proof Similar to the proof of Proposition 10.1, we relax the optimization problem
without the nonnegativity constraint and project the solution to the simplex domain
to obtain the required portfolio.
The Lagrangian for the optimization problem (10.4) is
$$L = \frac{1}{2}\left(\log\frac{\det\Upsilon_t^2}{\det\Upsilon^2} + \operatorname{Tr}\left(\Upsilon_t^{-2}\Upsilon^2\right) + (\mu_t - \mu)^{\top}\Upsilon_t^{-2}(\mu_t - \mu)\right) + \lambda\left(\phi\left\|\Upsilon x_t\right\| + \mu^{\top}x_t - \epsilon\right) + \eta\left(\mu^{\top}\mathbf{1} - 1\right).$$
Taking the derivative of the Lagrangian with respect to μ and setting it to zero, we
can get the update of μ,
$$0 = \frac{\partial L}{\partial \mu} = \Upsilon_t^{-2}(\mu - \mu_t) + \lambda x_t + \eta\mathbf{1} \;\Longrightarrow\; \mu_{t+1} = \mu_t - \Upsilon_t^2\left(\lambda x_t + \eta\mathbf{1}\right),$$
where $\Upsilon_t$ is nonsingular. Multiplying both sides by $\mathbf{1}^{\top}$, we can get

$$1 = 1 - \mathbf{1}^{\top}\Upsilon_t^2\left(\lambda x_t + \eta\mathbf{1}\right) \;\Longrightarrow\; \eta = -\lambda\bar{x}_t,$$
where $\bar{x}_t = \frac{\mathbf{1}^{\top}\Upsilon_t^2 x_t}{\mathbf{1}^{\top}\Upsilon_t^2\mathbf{1}}$ is the confidence-weighted average of the t-th price relatives.
Plugging it into the update scheme of $\mu_{t+1}$, we can get

$$\mu_{t+1} = \mu_t - \lambda\Upsilon_t^2\left(x_t - \bar{x}_t\mathbf{1}\right).$$
Moreover, taking the derivative of the Lagrangian with respect to ϒ and setting it to
zero, we have
$$0 = \frac{\partial L}{\partial \Upsilon} = -\Upsilon^{-1} + \frac{1}{2}\Upsilon_t^{-2}\Upsilon + \frac{1}{2}\Upsilon\Upsilon_t^{-2} + \lambda\phi\frac{x_t x_t^{\top}\Upsilon}{2\sqrt{x_t^{\top}\Upsilon^2 x_t}} + \lambda\phi\frac{\Upsilon x_t x_t^{\top}}{2\sqrt{x_t^{\top}\Upsilon^2 x_t}}.$$
We can solve the preceding equation to obtain $\Upsilon_{t+1}^{-2}$:

$$\Upsilon_{t+1}^{-2} = \Upsilon_t^{-2} + \lambda\phi\frac{x_t x_t^{\top}}{\sqrt{x_t^{\top}\Upsilon_{t+1}^2 x_t}}.$$
The preceding two updates can be expressed in terms of the covariance matrix,
$$\mu_{t+1} = \mu_t - \lambda\Sigma_t\left(x_t - \bar{x}_t\mathbf{1}\right), \qquad \Sigma_{t+1}^{-1} = \Sigma_t^{-1} + \lambda\phi\frac{x_t x_t^{\top}}{\sqrt{x_t^{\top}\Sigma_{t+1}x_t}}. \quad \text{(B.13)}$$
Here, $\Sigma_{t+1}$ is positive semidefinite (PSD) and nonsingular.
Now, let us solve the Lagrangian multiplier using its KKT condition. Following Crammer et al. (2008), we compute the inverse using the Woodbury identity (Golub and Van Loan 1996):
$$\Sigma_{t+1} = \Sigma_t - \Sigma_t x_t\frac{\lambda\phi}{\sqrt{x_t^{\top}\Sigma_{t+1}x_t} + \lambda\phi\, x_t^{\top}\Sigma_t x_t}\, x_t^{\top}\Sigma_t. \quad \text{(B.14)}$$
Similar to the proof of Proposition 10.1, we set $M_t = \mu_t^{\top}x_t$, $V_t = x_t^{\top}\Sigma_t x_t$, $W_t = x_t^{\top}\Sigma_t\mathbf{1}$, and $U_t = x_t^{\top}\Sigma_{t+1}x_t$. Multiplying the preceding equation by $x_t^{\top}$ (left) and $x_t$ (right), we get

$$U_t = V_t - V_t\frac{\lambda\phi}{\sqrt{U_t} + \lambda\phi V_t}V_t,$$

which can be solved for $U_t$:
$$U_t = \left(\frac{-\lambda\phi V_t + \sqrt{\lambda^2\phi^2 V_t^2 + 4V_t}}{2}\right)^2. \quad \text{(B.15)}$$
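One can verify numerically that the closed form in Equation B.15 satisfies the fixed-point relation it was solved from; the values below are arbitrary positive test data, not from the text:

```python
import math

lam, phi, V = 0.8, 0.6, 1.7  # arbitrary positive test values
# Equation B.15: sqrt(U) is the positive root of u^2 + lam*phi*V*u - V = 0
sqrt_U = (-lam * phi * V + math.sqrt(lam**2 * phi**2 * V**2 + 4 * V)) / 2
U = sqrt_U**2
# plug back into U = V - V * lam*phi/(sqrt(U) + lam*phi*V) * V
residual = U - (V - V * lam * phi / (sqrt_U + lam * phi * V) * V)
assert abs(residual) < 1e-9
```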
The KKT condition implies that either λ = 0, and no update is needed; or the constraint in the optimization problem (10.4) is an equality after the update. Substituting Equations B.13 and B.15 into the equality version of the constraint, after rearranging in terms of λ, we get
$$\lambda^2\left(\left(V_t - \bar{x}_t W_t + \frac{\phi^2 V_t}{2}\right)^2 - \frac{\phi^4 V_t^2}{4}\right) + 2\lambda\left(\epsilon - M_t\right)\left(V_t - \bar{x}_t W_t + \frac{\phi^2 V_t}{2}\right) + \left(\epsilon - M_t\right)^2 - \phi^2 V_t = 0. \quad \text{(B.16)}$$
Let $a = \left(V_t - \bar{x}_t W_t + \frac{\phi^2 V_t}{2}\right)^2 - \frac{\phi^4 V_t^2}{4}$, $b = 2\left(\epsilon - M_t\right)\left(V_t - \bar{x}_t W_t + \frac{\phi^2 V_t}{2}\right)$, and $c = \left(\epsilon - M_t\right)^2 - \phi^2 V_t$. Note that we only consider real roots of the quadratic equation. Thus, we can obtain $\gamma_t$ as its roots (two real roots case: $\gamma_{t1}$ and $\gamma_{t2}$; one real root case: $\gamma_{t3}$):
$$\gamma_{t1} = \frac{-b + \sqrt{b^2 - 4ac}}{2a}, \quad \gamma_{t2} = \frac{-b - \sqrt{b^2 - 4ac}}{2a}, \quad \text{or} \quad \gamma_{t3} = -\frac{c}{b}.$$
To ensure the nonnegativity of the Lagrangian multiplier, we project the roots to
[0, +∞):
$$\lambda = \max\left\{\gamma_{t1}, \gamma_{t2}, 0\right\}, \quad \text{or} \quad \lambda = \max\left\{\gamma_{t3}, 0\right\}, \quad \text{or} \quad \lambda = 0,$$
which corresponds to three cases (two, one, or zero real roots), respectively.
Following the Proof of Proposition 10.1, we can update the diagonal covariance
matrix as
$$\Sigma_{t+1}^{-1} = \Sigma_t^{-1} + \lambda\frac{\phi}{\sqrt{U_t}}\operatorname{diag}^2\left(x_t\right),$$
where diag(x
t
) denotes the diagonal matrix with the elements of x
t
on its main
diagonal.
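As in the previous proof, the multiplier can be computed from the coefficients of Equation B.16. A minimal sketch under our own naming (`cwmr_stdev_lambda`; the scalar inputs are the statistics $M_t$, $V_t$, $W_t$, $\bar{x}_t$ defined above):

```python
import math

def cwmr_stdev_lambda(M, V, W, x_bar, eps, phi):
    base = V - x_bar * W + phi**2 * V / 2
    a = base**2 - phi**4 * V**2 / 4    # coefficient of lambda^2
    b = 2 * (eps - M) * base           # coefficient of lambda
    c = (eps - M)**2 - phi**2 * V      # constant term
    disc = b * b - 4 * a * c
    if abs(a) > 1e-12 and disc >= 0:
        r = math.sqrt(disc)
        return max((-b + r) / (2 * a), (-b - r) / (2 * a), 0.0)
    if abs(b) > 1e-12:
        return max(-c / b, 0.0)        # single real root
    return 0.0                         # zero real roots: no update
```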
B.4 Derivation of OLMAR
B.4.1 Proof of Proposition 11.1
Proof Since introducing the nonnegativity constraint of the simplex domain causes much difficulty (Helmbold et al. 1998), first we do not consider it and finally project the solution onto the simplex domain.
The Lagrangian of the optimization problem OLMAR is
$$L(b, \lambda, \eta) = \frac{1}{2}\left\|b - b_t\right\|^2 + \lambda\left(\epsilon - b\cdot\tilde{x}_{t+1}\right) + \eta\left(b\cdot\mathbf{1} - 1\right),$$
where $\lambda \geq 0$ and $\eta$ are the Lagrangian multipliers. Taking the gradient with respect to $b$ and setting it to zero, we get

$$0 = \frac{\partial L}{\partial b} = \left(b - b_t\right) - \lambda\tilde{x}_{t+1} + \eta\mathbf{1} \;\Longrightarrow\; b = b_t + \lambda\tilde{x}_{t+1} - \eta\mathbf{1}.$$
Multiplying both sides by $\mathbf{1}^{\top}$, we get

$$1 = 1 + \lambda\,\tilde{x}_{t+1}\cdot\mathbf{1} - \eta m \;\Longrightarrow\; \eta = \lambda\bar{x}_{t+1},$$
where $\bar{x}_{t+1}$ denotes the average predicted price relative (the market). Plugging the above equation into the update of $b$, we get

$$b = b_t + \lambda\left(\tilde{x}_{t+1} - \bar{x}_{t+1}\mathbf{1}\right).$$
To solve the Lagrangian multiplier, let us plug the above equation into the Lagrangian:

$$L(\lambda) = \lambda\left(\epsilon - b_t\cdot\tilde{x}_{t+1}\right) - \frac{1}{2}\lambda^2\left\|\tilde{x}_{t+1} - \bar{x}_{t+1}\mathbf{1}\right\|^2.$$
Taking the derivative with respect to λ and setting it to zero, we get

$$0 = \frac{\partial L}{\partial \lambda} = \left(\epsilon - b_t\cdot\tilde{x}_{t+1}\right) - \lambda\left\|\tilde{x}_{t+1} - \bar{x}_{t+1}\mathbf{1}\right\|^2 \;\Longrightarrow\; \lambda = \frac{\epsilon - b_t\cdot\tilde{x}_{t+1}}{\left\|\tilde{x}_{t+1} - \bar{x}_{t+1}\mathbf{1}\right\|^2}.$$
Further projecting $\lambda$ to $[0, +\infty)$, we get

$$\lambda = \max\left\{0,\ \frac{\epsilon - b_t\cdot\tilde{x}_{t+1}}{\left\|\tilde{x}_{t+1} - \bar{x}_{t+1}\mathbf{1}\right\|^2}\right\}.$$
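The full update then consists of computing λ, moving $b_t$ along $\tilde{x}_{t+1} - \bar{x}_{t+1}\mathbf{1}$, and projecting back onto the simplex. A minimal sketch, with the projection following Duchi et al. (2008); the function names are ours, not from the text:

```python
import numpy as np

def simplex_projection(v):
    # Euclidean projection onto the probability simplex (Duchi et al. 2008)
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def olmar_update(b_t, x_pred, eps):
    # lambda = max(0, (eps - b_t . x~) / ||x~ - x_bar 1||^2)
    x_bar = x_pred.mean()
    dev = x_pred - x_bar
    denom = float(dev @ dev)
    lam = 0.0 if denom == 0 else max(0.0, (eps - float(b_t @ x_pred)) / denom)
    # b = b_t + lambda (x~ - x_bar 1), then project onto the simplex
    return simplex_projection(b_t + lam * dev)
```

When the predicted price relatives all equal their mean, the denominator vanishes and the portfolio is left unchanged, matching the λ = 0 case above.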