Basic Introduction to BaseOptimizer

In this tutorial, we show how to use the BaseOptimizer to optimize a hypothetical portfolio.

In this portfolio, we have 2 assets with different expected returns and volatilities. Our task is to find the optimal weights subject to some risk constraints. Let’s assume Asset \(A\) has an annual return of 12% with volatility of 4%, Asset \(B\) has a historical annual return of 4% with volatility of 0.14%, and the two have a covariance of 0.2%. We start off by simulating 500 instances of their one-year-ahead returns.

import numpy as np
from scipy.stats import multivariate_normal as mvn

assets_mean = [0.12, 0.04]  # asset mean returns vector
assets_std = [
    [0.04, 0.002],
    [0.002, 0.0014]
]  # asset covariance matrix

# hypothetical returns series
returns = mvn.rvs(mean=assets_mean, cov=assets_std, size=500, random_state=88)
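Before optimizing, it helps to sanity-check the simulated series. A quick sketch (repeating the setup above so it runs standalone) confirms the sample moments are close to the inputs:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

assets_mean = [0.12, 0.04]
assets_std = [
    [0.04, 0.002],
    [0.002, 0.0014]
]

returns = mvn.rvs(mean=assets_mean, cov=assets_std, size=500, random_state=88)

print(returns.shape)         # (500, 2): 500 trials for the 2 assets
print(returns.mean(axis=0))  # roughly [0.12, 0.04]
print(np.cov(returns.T))     # roughly the covariance matrix above
```

With 500 samples the estimates will not match the inputs exactly, but they should be in the right neighbourhood.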

Now that we have the returns series, our job is to optimize the portfolio where our objective is to maximize the expected returns subject to certain risk budgets. Let’s assume we are only comfortable with taking a volatility of at most 10%.

Our problem is thus given by

\[\begin{split}\begin{gather*} \underset{\mathbf{w}}{\max} \frac{1}{N}\sum_n^N\sum_i^2 w_i \cdot r_{i, n} \\ s.t. \\ \sqrt{\frac{\sum_n^N \left(\sum_i^2 w_i \cdot r_{i, n} - \frac{1}{N}\sum_n^N\sum_i^2 w_i \cdot r_{i, n} \right)^2}{N-1}} \leq 0.1 \end{gather*}\end{split}\]

It looks complicated, but we can simplify it with some vector notation. Letting \(r_n\) be the portfolio return at trial \(n\) after accounting for the weights \(\mathbf{w}\), and \(\mu\) be the mean portfolio return across trials, the problem can be specified as

\[\begin{split}\begin{gather*} \underset{\mathbf{w}}{\max} \frac{1}{N}\sum_n^N r_n \\ s.t. \\ \sqrt{\frac{\sum_n^N \left(r_n - \mu \right)^2}{N-1}} \leq 0.1 \end{gather*}\end{split}\]

We can now translate this into code.

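The summation form and the vector form above are equivalent. A quick numeric check with an arbitrary, hypothetical weight vector \(w = [0.5, 0.5]\) confirms this:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

# the same simulated returns as above
returns = mvn.rvs(mean=[0.12, 0.04], cov=[[0.04, 0.002], [0.002, 0.0014]],
                  size=500, random_state=88)
w = np.array([0.5, 0.5])  # an arbitrary trial weight vector
N = len(returns)

# summation form: portfolio return per trial, then sample mean and std
r_n = (w * returns).sum(axis=1)
mean_sum = r_n.sum() / N
vol_sum = np.sqrt(((r_n - mean_sum) ** 2).sum() / (N - 1))

# vector form (ddof=1 matches the N-1 in the equation)
mean_vec = (returns @ w).mean()
vol_vec = (returns @ w).std(ddof=1)

print(np.isclose(mean_sum, mean_vec), np.isclose(vol_sum, vol_vec))  # True True
```

Note that the constraint code below uses the population standard deviation (`.std()` with `ddof=0`) rather than the \(N-1\) sample version; for 500 trials the difference is negligible.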
from allopy.optimize import BaseOptimizer

def objective(w):
    return (returns @ w).mean()

def constraint(w):
    # we need to convert the constraint to standard form. So c(w) - K <= 0
    return (returns @ w).std() - 0.1

prob = BaseOptimizer(2)  # initialize the optimizer with 2 asset classes

# set the objective function
prob.set_max_objective(objective)

# set the inequality constraint function
prob.add_inequality_constraint(constraint)

# set lower and upper bounds to 0 and 1 for all free variables (weights)
prob.set_bounds(0, 1)

# set equality matrix constraint, Ax = b. Weights sum to 1
prob.add_equality_matrix_constraint([[1, 1]], [1])

sol = prob.optimize()
print('Solution: ', sol)
Solution:  [0.47209577 0.52790423]
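As a sanity check, we can plug the reported solution back into the returns series and confirm that the weights sum to 1 and the risk budget is respected (the exact figures depend on the simulated data):

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

# regenerate the same hypothetical returns used above
returns = mvn.rvs(mean=[0.12, 0.04], cov=[[0.04, 0.002], [0.002, 0.0014]],
                  size=500, random_state=88)

sol = np.array([0.47209577, 0.52790423])  # the solution printed above

print(sol.sum())               # weights sum to 1
print((returns @ sol).mean())  # expected portfolio return at the optimum
print((returns @ sol).std())   # portfolio volatility, near the 10% budget
```

Since the inequality constraint turns out to be tight, the portfolio volatility should sit at (or numerically very close to) the 10% limit.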

Don’t be alarmed if you notice the print outs that read Setting gradient for .... By default, you have to set the gradient (and possibly the Hessian) for your functions yourself, which gives you more control over the optimization program. However, understanding that this can be tedious, we have opted to set the gradient for you if you didn’t do so.

This only applies if you’re using a gradient-based optimizer. In that case, the default gradient is computed using a second-order numerical derivative.
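If you prefer to supply your own gradient, a second-order (central-difference) numerical gradient can be sketched as below. This illustrates the technique, not necessarily the exact routine the library uses:

```python
import numpy as np

def numerical_gradient(f, w, eps=1e-6):
    """Central-difference gradient: (f(w + h) - f(w - h)) / (2h) per coordinate."""
    w = np.asarray(w, dtype=float)
    grad = np.empty_like(w)
    for i in range(w.size):
        step = np.zeros_like(w)
        step[i] = eps
        grad[i] = (f(w + step) - f(w - step)) / (2 * eps)
    return grad

# example: the gradient of f(w) = w0^2 + 3*w1 at (1, 2) is (2, 3)
g = numerical_gradient(lambda w: w[0] ** 2 + 3 * w[1], [1.0, 2.0])
print(g)  # approximately [2., 3.]
```

The central difference has error of order \(\epsilon^2\), which is why it is called a second-order scheme.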

Also notice the solution given above: the optimizer reports that it has successfully found a solution. To get even more information, we can use the .summary() method as seen below.


prob.summary()

Portfolio Optimizer

Algorithm: Sequential Quadratic Programming (SQP) (local, derivative)

[The summary also prints tables with the problem setup, the optimizer setup, and each weight’s lower and upper bounds; they are omitted here.]


Program found a solution

Solution: [0.472096, 0.527904]

The following inequality constraints were tight:
  • 1: constraint
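As a final cross-check under the same assumptions, the problem can also be solved with SciPy’s SLSQP solver (SciPy’s `minimize` minimizes, so we negate the objective); the weights should land close to the solution above:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal as mvn

# the same hypothetical returns series
returns = mvn.rvs(mean=[0.12, 0.04], cov=[[0.04, 0.002], [0.002, 0.0014]],
                  size=500, random_state=88)

res = minimize(
    lambda w: -(returns @ w).mean(),  # maximize the mean return
    x0=[0.5, 0.5],
    bounds=[(0, 1), (0, 1)],
    constraints=[
        {"type": "ineq", "fun": lambda w: 0.1 - (returns @ w).std()},  # vol <= 10%
        {"type": "eq", "fun": lambda w: w.sum() - 1},                  # fully invested
    ],
)
print(res.x)  # should be close to [0.472096, 0.527904]
```

Agreement between two independent solvers is a cheap but useful check that the problem was specified correctly.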