Examples
This example walks through the mean-CVaR optimization and Entropy Pooling functionality and illustrates how the two methods can be combined.
We first import the necessary packages, load the data, and print some P&L statistics:
import numpy as np
import pandas as pd
import fortitudo.tech as ft
R = pd.read_csv('pnl.csv')
instrument_names = list(R.columns)
means = np.mean(R, axis=0)
vols = np.std(R, axis=0)
stats_prior = pd.DataFrame(
    np.vstack((means, vols)).T, index=instrument_names, columns=['Mean', 'Volatility'])
print(np.round(stats_prior * 100, 1))
This gives the following result:
Mean Volatility
Gov & MBS -0.7 3.2
Corp IG -0.4 3.4
Corp HY 1.9 6.1
EM Debt 2.7 7.5
DM Equity 6.4 14.9
EM Equity 8.0 26.9
Private Equity 13.7 27.8
Infrastructure 5.9 10.8
Real Estate 4.3 8.1
Hedge Funds 4.8 7.2
Next, we extract P&L dimension parameters \(S\) and \(I\), specify a prior probability vector \(p\), and create some portfolio constraints:
S, I = R.shape
p = np.ones((S, 1)) / S
G_pf = np.vstack((np.eye(I), -np.eye(I)))
h_pf = np.hstack((0.25 * np.ones(I), np.zeros(I)))
The above portfolio constraints simply specify a long-only portfolio with an upper bound of 25% on individual assets. This ensures that the optimized portfolios invest in at least four assets, imposing some diversification.
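To see why the stacked matrices encode exactly these bounds, note that G @ w <= h with G = [I; -I] and h = [0.25·1; 0] is equivalent to 0 <= w <= 0.25 elementwise. A minimal sketch (the `feasible` helper and the small dimension are illustrative, not part of the package):

```python
import numpy as np

# Hypothetical check of the constraint encoding: with G = [I; -I] and
# h = [0.25 * 1; 0], the system G @ w <= h is equivalent to 0 <= w <= 0.25.
I = 3  # small illustrative dimension, not the 10 assets above
G_pf = np.vstack((np.eye(I), -np.eye(I)))
h_pf = np.hstack((0.25 * np.ones(I), np.zeros(I)))

def feasible(w):
    return bool(np.all(G_pf @ w <= h_pf + 1e-12))

print(feasible(np.array([0.25, 0.25, 0.25])))   # at the upper bound
print(feasible(np.array([0.30, 0.10, 0.10])))   # violates the 25% cap
print(feasible(np.array([-0.05, 0.20, 0.10])))  # violates long-only
```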
The next step is to pass the P&L, constraints, and probability vector to the MeanCVaR object and optimize two portfolios:
ft.cvar_options['demean'] = False
R = R.values
cvar_opt = ft.MeanCVaR(R, G=G_pf, h=h_pf, p=p)
w_min = cvar_opt.efficient_portfolio()
w_target = cvar_opt.efficient_portfolio(return_target=0.05)
Note that the MeanCVaR object demeans the P&L by default when optimizing the portfolio’s CVaR, as we believe it is best not to let the expected return estimates enter both the risk measure and the expectation. The example above illustrates how to disable this feature so that the optimization computes portfolio CVaR including the portfolio’s expected return.
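To make the demeaning choice concrete, here is a minimal sketch of discrete CVaR from equally weighted scenarios. This is not the library's implementation, and the confidence level alpha = 0.95 is an assumption for illustration; it only shows that demeaning shifts the CVaR by the scenario mean:

```python
import numpy as np

# Minimal sketch of discrete CVaR from equally weighted P&L scenarios.
# NOT the library's implementation; alpha = 0.95 is an assumed level.
def cvar(pnl, alpha=0.95, demean=True):
    x = pnl - pnl.mean() if demean else pnl  # optionally remove the mean
    losses = -x                              # P&L to losses
    var = np.quantile(losses, alpha)         # Value-at-Risk threshold
    return losses[losses >= var].mean()      # average loss in the tail

rng = np.random.default_rng(0)
pnl = rng.normal(0.05, 0.1, size=10_000)  # toy scenario P&L
# Demeaning shifts every loss by the mean, so the two CVaRs differ by it:
print(np.isclose(cvar(pnl, demean=True), cvar(pnl, demean=False) + pnl.mean()))
```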
Let us now assume that we have done some analysis and concluded that the mean of Private Equity should be 10%, while its volatility should be greater than or equal to 33%. Entropy Pooling allows us to incorporate this market view into our P&L assumption in a way that introduces the least amount of spurious structure, which is measured by the relative entropy between our prior and posterior probability vectors.
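The relative entropy objective itself is easy to compute. A minimal sketch (illustrative code, not the package's internals):

```python
import numpy as np

# Sketch of the relative entropy D(q || p) = sum_s q_s * ln(q_s / p_s)
# that Entropy Pooling minimizes subject to the view constraints.
def relative_entropy(q, p):
    q = np.asarray(q, dtype=float).ravel()
    p = np.asarray(p, dtype=float).ravel()
    mask = q > 0  # terms with q_s = 0 contribute zero by convention
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

p = np.ones(4) / 4
q = np.array([0.4, 0.3, 0.2, 0.1])
print(relative_entropy(p, p))  # zero when the posterior equals the prior
print(relative_entropy(q, p))  # positive when the posterior deviates
```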
The above views for Private Equity are implemented below:
expected_return_row = R[:, 6][np.newaxis, :]  # Private Equity P&L column
variance_row = (expected_return_row - 0.1)**2
A = np.vstack((np.ones((1, S)), expected_return_row))
b = np.array([[1], [0.1]])
G = -variance_row
h = np.array([[-0.33**2]])
q = ft.entropy_pooling(p, A, b, G, h)
means_post = q.T @ R
vols_post = np.sqrt(q.T @ (R - means_post)**2)
stats_post = pd.DataFrame(
    np.vstack((means_post, vols_post)).T, index=instrument_names, columns=['Mean', 'Volatility'])
print(np.round(stats_post * 100, 1))
This gives the following posterior means and volatilities:
Mean Volatility
Gov & MBS -0.5 3.2
Corp IG -0.5 3.4
Corp HY 1.2 6.4
EM Debt 2.3 7.6
DM Equity 4.4 16.4
EM Equity 5.2 29.2
Private Equity 10.0 33.0
Infrastructure 5.1 11.1
Real Estate 3.6 8.5
Hedge Funds 3.8 8.0
We note that our views regarding Private Equity are satisfied. In addition, we note that volatilities of the riskier assets have increased, while their expected returns have decreased. This illustrates how Entropy Pooling incorporates views/stress-tests in a way that tries to respect the dependencies of the prior distribution.
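As a side note, the inequality-view encoding can be sanity-checked in isolation: with the mean pinned to 10% by the equality constraints, the view E_q[(X − 0.1)²] ≥ 0.33² is linear in the probabilities and fits the G @ q <= h form. A toy check on synthetic data (not the P&L from this example):

```python
import numpy as np

# Toy check of the inequality-view encoding G @ q <= h used above.
# The scenario sample below is synthetic, not the example's P&L.
rng = np.random.default_rng(0)
S = 1000
x = rng.normal(0.1, 0.4, size=S)   # toy sample with volatility 0.4 > 0.33
q = np.ones((S, 1)) / S            # uniform probabilities for the check
variance_row = ((x - 0.1) ** 2)[np.newaxis, :]
G = -variance_row
h = np.array([[-0.33 ** 2]])
# G @ q <= h is equivalent to E_q[(x - 0.1)^2] >= 0.33^2,
# which should hold for this toy sample:
print(bool((G @ q <= h).all()))
```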
With the posterior probabilities at hand, we want to examine the effect of our views on the efficient CVaR portfolios. This is easy to do by simply specifying that the posterior probability vector \(q\) should be used in the CVaR optimization:
cvar_opt_post = ft.MeanCVaR(R, G=G_pf, h=h_pf, p=q)
w_min_post = cvar_opt_post.efficient_portfolio()
w_target_post = cvar_opt_post.efficient_portfolio(return_target=0.05)
We can then print the results of the optimization and compare allocations. First for the minimum risk portfolios:
min_risk_pfs = pd.DataFrame(
    np.hstack((w_min, w_min_post)), index=instrument_names, columns=['Prior', 'Posterior'])
print(np.round(min_risk_pfs * 100, 2))
This gives the following output:
Prior Posterior
Gov & MBS 25.00 25.00
Corp IG 25.00 25.00
Corp HY 0.50 6.45
EM Debt 3.87 5.00
DM Equity 0.00 0.00
EM Equity -0.00 0.00
Private Equity -0.00 0.00
Infrastructure 6.89 6.87
Real Estate 14.50 17.65
Hedge Funds 24.25 14.02
And then for the portfolios with an expected return target of 5%:
target_return_pfs = pd.DataFrame(
    np.hstack((w_target, w_target_post)), index=instrument_names, columns=['Prior', 'Posterior'])
print(np.round(target_return_pfs * 100, 2))
This gives the following output:
Prior Posterior
Gov & MBS 0.00 -0.00
Corp IG 0.00 0.00
Corp HY 0.00 0.00
EM Debt 19.81 8.08
DM Equity 0.00 0.00
EM Equity 0.00 0.00
Private Equity 5.19 16.92
Infrastructure 25.00 25.00
Real Estate 25.00 25.00
Hedge Funds 25.00 25.00
It should be straightforward to make sense of these results. In the minimum risk case, we allocate less to the riskier assets, whose risk has increased due to the higher volatility view. In the 5% target return case, we must allocate more to the riskier assets in order to reach the 5% expected return target.
From the allocation results, we note that the portfolios suffer from the well-known issues of concentrated portfolios. There are several ways of addressing this issue in practice, e.g., taking parameter uncertainty into account or introducing transaction costs or turnover constraints relative to an initially diversified portfolio. These topics are, however, beyond the scope of this example and package.