Hierarchical risk parity

The key idea of hierarchical risk parity (HRP) is to apply hierarchical clustering to the covariance matrix so that assets with similar correlation patterns are grouped together. This reduces the number of degrees of freedom because the algorithm only treats similar assets as substitutes when constructing the portfolio (see the notebook and Python files in the hierarchical_risk_parity subfolder for details).

The first step is to convert the correlation matrix into a distance matrix that expresses the proximity of correlated assets and satisfies the requirements of a distance metric. The resulting matrix becomes the input to SciPy's hierarchical clustering function, which computes the successive clusters using one of the several available methods discussed earlier:

import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

def get_distance_matrix(corr):
    """Compute distance matrix from correlation; 0 <= d[i,j] <= 1"""
    return np.sqrt((1 - corr) / 2)

distance_matrix = get_distance_matrix(corr)
# squareform condenses the symmetric matrix into the vector format
# that linkage expects; 'single' selects single-linkage clustering
linkage_matrix = linkage(squareform(distance_matrix), 'single')
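
For context, here is a minimal sketch of how the corr and cov inputs might be derived from a DataFrame of asset returns; the returns data and the file name are illustrative assumptions, not part of the original example:

import pandas as pd

# hypothetical DataFrame of daily returns, one column per ticker
returns = pd.read_csv('asset_returns.csv', index_col=0, parse_dates=True)
corr = returns.corr()  # pairwise correlation matrix
cov = returns.cov()    # covariance matrix, used for the HRP weights below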

The linkage_matrix can be used as input to the seaborn.clustermap function to visualize the resulting hierarchical clustering. The dendrogram displayed by seaborn shows how individual assets and clusters of assets are merged based on their relative distances:

import seaborn as sns

# cmap was undefined in the excerpt; a diverging palette is an illustrative choice
cmap = sns.diverging_palette(10, 250, as_cmap=True)
clustergrid = sns.clustermap(distance_matrix,
                             method='single',
                             row_linkage=linkage_matrix,
                             col_linkage=linkage_matrix,
                             cmap=cmap,
                             center=0)
sorted_idx = clustergrid.dendrogram_row.reordered_ind
sorted_tickers = corr.index[sorted_idx].tolist()

Figure: Heatmap of the original correlation matrix (left panel) and the data sorted by cluster (right panel)

Compared to a seaborn.heatmap of the original correlation matrix, there is now significantly more structure in the sorted data (right panel).
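
A side-by-side comparison like the one in the preceding figure could be reproduced along the following lines; the figure layout and panel titles are illustrative assumptions:

import matplotlib.pyplot as plt

fig, axes = plt.subplots(ncols=2, figsize=(12, 5))
sns.heatmap(corr, ax=axes[0], cmap=cmap, center=0)
axes[0].set_title('Original correlation matrix')
# reorder rows and columns by the ticker order induced by the clustering
sns.heatmap(corr.loc[sorted_tickers, sorted_tickers],
            ax=axes[1], cmap=cmap, center=0)
axes[1].set_title('Sorted by cluster')
fig.tight_layout()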

Using the tickers sorted according to the hierarchy induced by the clustering algorithm, HRP now proceeds to compute a top-down inverse-variance allocation that successively adjusts weights depending on the variance of the subclusters further down the tree:

def get_cluster_var(cov, cluster_items):
    """Compute variance of the inverse-variance portfolio of a cluster"""
    cov_ = cov.loc[cluster_items, cluster_items]  # matrix slice
    w_ = get_inverse_var_pf(cov_)
    return (w_ @ cov_ @ w_).item()
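
The get_inverse_var_pf helper is defined in the accompanying notebook; a minimal sketch consistent with standard inverse-variance weighting, where each asset's weight is proportional to the reciprocal of its variance, would be:

def get_inverse_var_pf(cov):
    """Compute inverse-variance portfolio weights"""
    ivp = 1 / np.diag(cov)   # reciprocal of each asset's variance
    return ivp / ivp.sum()   # normalize so the weights sum to 1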

To this end, the algorithm recursively bisects the sorted asset list and allocates the variance of each cluster to its two subclusters based on their relative riskiness:

def get_hrp_allocation(cov, tickers):
    """Compute top-down HRP weights"""

    weights = pd.Series(1., index=tickers)  # float to allow scaling
    clusters = [tickers]  # initialize one cluster with all assets

    while len(clusters) > 0:
        # recursive bisection: split each cluster into two halves
        clusters = [c[start:stop] for c in clusters
                    for start, stop in ((0, int(len(c) / 2)),
                                        (int(len(c) / 2), len(c)))
                    if len(c) > 1]
        for i in range(0, len(clusters), 2):  # parse in pairs
            cluster0 = clusters[i]
            cluster1 = clusters[i + 1]

            cluster0_var = get_cluster_var(cov, cluster0)
            cluster1_var = get_cluster_var(cov, cluster1)

            # down-weight the riskier cluster, up-weight the safer one
            weight_scaler = 1 - cluster0_var / (cluster0_var + cluster1_var)
            weights[cluster0] *= weight_scaler
            weights[cluster1] *= 1 - weight_scaler
    return weights

The resulting portfolio allocation produces weights that sum to 1 and reflect the structure present in the correlation matrix (see notebook for details).
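
As a usage sketch, assuming cov contains the covariance matrix for the same assets as above:

hrp_weights = get_hrp_allocation(cov, sorted_tickers)
print(hrp_weights.sort_values(ascending=False).head())
print(f'Total weight: {hrp_weights.sum():.4f}')  # sums to 1, per the text above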
