Regret Optimizer¶
The RegretOptimizer inherits from the DiscreteUncertaintyOptimizer. Minimum regret optimization is a technique from decision theory for making decisions under uncertainty.

class allopy.optimize.RegretOptimizer(num_assets, num_scenarios, prob=None, algorithm=40, c_eps=None, xtol_abs=None, xtol_rel=None, ftol_abs=None, ftol_rel=None, max_eval=None, verbose=False, sum_to_1=True, max_attempts=5)[source]¶
__init__(num_assets, num_scenarios, prob=None, algorithm=40, c_eps=None, xtol_abs=None, xtol_rel=None, ftol_abs=None, ftol_rel=None, max_eval=None, verbose=False, sum_to_1=True, max_attempts=5)[source]¶
The RegretOptimizer is a convenience class for scenario-based optimization.
Notes
The term regret refers to the instance where after having decided on one alternative, the choice of a different alternative would have led to a more optimal (better) outcome when the eventual scenario transpires.
The RegretOptimizer employs a two-stage optimization process. In the first stage, the optimizer calculates the optimal weights for each scenario. In the second stage, the optimizer minimizes the regret function to give the final optimal portfolio weights.
Assuming the objective is to maximize returns subject to some volatility constraint, the first-stage optimization is
\[\begin{split}\begin{gather*} \underset{w_s}{\max} \; R_s(w_s) \quad \forall s \in S \\ s.t. \\ \sigma_s(w_s) \leq \Sigma \end{gather*}\end{split}\]where \(R_s(\cdot)\) is the returns function for scenario \(s\), \(\sigma_s(\cdot)\) is the volatility function for scenario \(s\) and \(\Sigma\) is the volatility threshold. Subsequently, to minimize the regret across all scenarios, \(S\),
\[\begin{gather*} \underset{w}{\min} \sum_{s \in S} p_s \cdot D(R_s(w_s) - R_s(w)) \end{gather*}\]where \(D(\cdot)\) is a distance function (usually quadratic) and \(p_s\) is the discrete probability of scenario \(s\) occurring.
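As a minimal sketch of the second-stage objective (the linear returns functions and numbers are hypothetical illustrations, not allopy's internal implementation), the regret of a candidate weight vector can be computed as:

```python
import numpy as np

# Hypothetical setup: 2 scenarios, 3 assets, linear returns R_s(w) = mu_s @ w
mu = np.array([[0.08, 0.05, 0.03],   # expected returns under scenario 1
               [0.02, 0.06, 0.07]])  # expected returns under scenario 2
prob = np.array([0.6, 0.4])          # scenario probabilities p_s

# First-stage optima w_s* (taken as given here for illustration)
w_star = np.array([[1.0, 0.0, 0.0],  # best under scenario 1
                   [0.0, 0.0, 1.0]]) # best under scenario 2
r_star = np.einsum("sa,sa->s", mu, w_star)  # R_s(w_s*) for each scenario

def regret(w, dist=np.square):
    """Second-stage objective: sum_s p_s * D(R_s(w_s*) - R_s(w))."""
    return float(prob @ dist(r_star - mu @ w))

# A compromise portfolio carries some regret under both scenarios
print(regret(np.array([0.5, 0.0, 0.5])))  # → 0.000625
```

The second stage searches over \(w\) for the portfolio that minimizes this probability-weighted regret.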
 Parameters
num_assets (int) – Number of assets
num_scenarios (int) – Number of scenarios
prob (Union[Iterable[float], Iterable[int], ndarray, None]) – Vector containing the probability of each scenario occurring
algorithm – The optimization algorithm. The default algorithm is Sequential Least Squares Quadratic Programming (SLSQP).
c_eps (float, optional) – Constraint epsilon is the tolerance for the inequality and equality constraint functions. Any value that is less than the constraint epsilon is considered to be within the boundary.
xtol_abs (float or np.ndarray, optional) – Set absolute tolerances on optimization parameters. tol is an array giving the tolerances: stop when an optimization step (or an estimate of the optimum) changes every parameter x[i] by less than tol[i]. For convenience, if a scalar tol is given, it will be used to set the absolute tolerances in all n optimization parameters to the same value. Criterion is disabled if tol is non-positive.
xtol_rel (float or np.ndarray, optional) – Set relative tolerance on optimization parameters: stop when an optimization step (or an estimate of the optimum) causes a relative change in the parameters x of less than tol, i.e. \(\|\Delta x\|_w < tol \cdot \|x\|_w\) measured by a weighted \(L_1\) norm \(\|x\|_w = \sum_i w_i |x_i|\), where the weights \(w_i\) default to 1. (If there is any chance that the optimal \(x\) is close to zero, you might want to set an absolute tolerance with xtol_abs as well.) Criterion is disabled if tol is non-positive.
ftol_abs (float, optional) – Set absolute tolerance on function value: stop when an optimization step (or an estimate of the optimum) changes the function value by less than tol. Criterion is disabled if tol is non-positive.
ftol_rel (float, optional) – Set relative tolerance on function value: stop when an optimization step (or an estimate of the optimum) changes the objective function value by less than tol multiplied by the absolute value of the function value. (If there is any chance that your optimum function value is close to zero, you might want to set an absolute tolerance with ftol_abs as well.) Criterion is disabled if tol is non-positive.
max_eval (int, optional) – Stop when the number of function evaluations exceeds maxeval. (This is not a strict maximum: the number of function evaluations may exceed maxeval slightly, depending upon the algorithm.) Criterion is disabled if maxeval is non-positive.
verbose (bool) – If True, the optimizer will report its operations
sum_to_1 (bool) – If True, the optimal weights for each first-level scenario must sum to 1.
max_attempts (int) – Number of times to retry the optimization. This is useful when the optimization is in a highly unstable or non-convex space.
See also
DiscreteUncertaintyOptimizer
Discrete Uncertainty Optimizer

add_equality_constraint
(functions)[source]¶ Adds the equality constraint function in standard form, A = b. If the gradient of the constraint function is not specified and the algorithm used is a gradient-based one, the optimizer will attempt to insert a smart numerical gradient for it.
The list of functions needs to match the number of scenarios. The function at index 0 will be assigned as a constraint function to the first optimization regime.
 Parameters
functions (Callable or List of Callable) – Constraint function. The function signature should be such that the first argument takes in a weight vector and outputs a numeric (float). The second argument is optional and contains the gradient. If given, the user must adjust the gradients in-place. If only a single function is given, the same function will be used for all the scenarios
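The expected signature can be sketched as follows (the weights-sum-to-one constraint is a hypothetical example; the in-place gradient convention follows the description above):

```python
import numpy as np

def weights_sum_to_one(w, grad=None):
    """Equality constraint written as h(w) = 0, i.e. sum(w) - 1 = 0."""
    if grad is not None and grad.size > 0:
        grad[:] = 1.0  # d/dw_i of (sum(w) - 1) is 1 for every i; set in-place
    return float(np.sum(w) - 1.0)

# The same callable could be passed for every scenario, or a list with
# one per-scenario callable could be supplied instead.
print(weights_sum_to_one(np.array([0.3, 0.3, 0.4])))  # → 0.0
```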

add_equality_matrix_constraint
(Aeq, beq)[source]¶ Sets equality constraints in standard matrix form.
For equality, \(\mathbf{A} \cdot \mathbf{x} = \mathbf{b}\)
 Parameters
Aeq ({iterable float, ndarray}) – Equality matrix. Must be 2 dimensional
beq ({scalar, ndarray}) – Equality vector or scalar. If scalar, it will be propagated

add_inequality_constraint
(functions)[source]¶ Adds the inequality constraint function in standard form, A <= b. If the gradient of the constraint function is not specified and the algorithm used is a gradient-based one, the optimizer will attempt to insert a smart numerical gradient for it.
The list of functions needs to match the number of scenarios. The function at index 0 will be assigned as a constraint function to the first optimization regime.
 Parameters
functions (Callable or List of Callable) – Constraint functions. The function signature should be such that the first argument takes in a weight vector and outputs a numeric (float). The second argument is optional and contains the gradient. If given, the user must adjust the gradients in-place. If only a single function is given, the same function will be used for all the scenarios

add_inequality_matrix_constraint
(A, b)[source]¶ Sets inequality constraints in standard matrix form.
For inequality, \(\mathbf{A} \cdot \mathbf{x} \leq \mathbf{b}\)
 Parameters
A ({iterable float, ndarray}) – Inequality matrix. Must be 2 dimensional.
b ({scalar, ndarray}) – Inequality vector or scalar. If scalar, it will be propagated.
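For instance, capping every asset at 50% of the portfolio can be expressed in this matrix form (a hypothetical illustration, checked here with plain numpy):

```python
import numpy as np

n = 3
A = np.eye(n)        # one row per asset: w_i <= 0.5
b = np.full(n, 0.5)  # a scalar b would be propagated to such a vector

w = np.array([0.4, 0.35, 0.25])
print(np.all(A @ w <= b))  # → True, the weights satisfy A·w <= b
```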

property
lower_bounds
¶ Lower bound of each variable for all the optimization models in the first and second stages

optimize
(x0_first_level=None, x0_prop=None, initial_solution='random', approx=True, dist_func=<ufunc 'square'>, random_state=None)[source]¶ Finds the minimal regret solution across the range of scenarios
Notes
The exact (actual) objective function to minimize regret is given below,
\[\begin{gather*} \underset{w}{\min} \sum_{s \in S} p_s \cdot D(R_s(w_s) - R_s(w)) \end{gather*}\]However, given certain problem formulations where the objective and constraint functions are linear and convex, the problem can be transformed to
\[\begin{gather*} \underset{a}{\min} \sum_{s \in S} p_s \cdot D(R_s(w_s) - R_s(W \cdot a)) \end{gather*}\]where \(W\) is a matrix in which each row represents a single scenario, \(s\), and each column represents an asset class. This formulation solves for \(a\), which represents the proportion of each scenario that contributes to the final portfolio weights. Thus if there are 3 scenarios and \(a\) is [0.3, 0.5, 0.2], it means that the final portfolio took 30% from scenario 1, 50% from scenario 2 and 20% from scenario 3. This formulation makes a strong assumption that the final minimal-regret portfolio is a linear combination of the weights from each scenario's optimal.
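Under this approximation, recovering the final weights from the proportions \(a\) is just a matrix product (the first-stage weights below are hypothetical):

```python
import numpy as np

# Rows of W are the first-stage optimal weights, one row per scenario
W = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
a = np.array([0.3, 0.5, 0.2])  # proportion taken from each scenario

w_final = a @ W  # equivalently W.T @ a
print(w_final)   # → [0.3 0.4 0.3]
```

Because each row of W sums to 1 and the proportions sum to 1, the final weights also sum to 1.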
The following lists the options for finding an initial solution for the optimization problem. If the user already knows the region to search, it is best to supply an initial value instead of using the heuristics provided.
 random
Randomly generates "bound-feasible" starting points for the decision variables. Note that these variables may not fulfil the other constraints. For problems where the bounds have been tightly defined, this often yields a good solution.
 min_constraint_norm
Solves the optimization problem listed below. The objective is to minimize the \(L_2\) norm of the constraint functions while keeping the decision variables bounded by the original problem’s bounds.
\[\begin{split}\min \| \text{constraint} \|^2 \\ s.t. \\ LB \leq x \leq UB\end{split}\]
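This heuristic can be sketched with scipy (a standalone illustration of the idea, not allopy's internal routine; the sum-to-one residual is a hypothetical constraint):

```python
import numpy as np
from scipy.optimize import minimize

def constraint_residual(x):
    return np.array([np.sum(x) - 1.0])  # h(x) = 0 residuals

def objective(x):
    return float(np.sum(constraint_residual(x) ** 2))  # ||constraint||^2

n = 3
res = minimize(objective, x0=np.full(n, 0.0),
               bounds=[(0.0, 1.0)] * n)  # keep LB <= x <= UB
print(res.x, objective(res.x))
```

The minimizer of the constraint norm is then used as the starting point for the actual problem.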
 Parameters
x0_first_level (list of list of floats or ndarray, optional) – List of initial solution vectors for each scenario optimization. If provided, the list must have the same length in the first dimension as the number of scenarios.
x0_prop (list of floats, optional) – Initial solution vector for the regret optimization (2nd level). This can either be the final optimization weights if approx is False, or the scenario proportions otherwise.
initial_solution (str, optional) – The method to find the initial solution if the initial vector x0 is not specified. Set as None to disable. However, if disabled, the initial vector must be supplied. See the notes on Initial Solution for more information.
approx (bool) – If True, a linear approximation will be used to calculate the regret optimal. See Notes.
dist_func (Callable) – A callable function that will be applied as a distance metric for the regret function. The default is a quadratic function. See Notes.
random_state (int, optional) – Random seed. Applicable if initial_solution is "random"
 Returns
Regret optimal solution weights
 Return type
np.ndarray
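The effect of dist_func can be illustrated directly: swapping the default quadratic penalty (np.square) for an absolute-value penalty changes how heavily large single-scenario regrets are weighted (the shortfall numbers are hypothetical):

```python
import numpy as np

prob = np.array([0.6, 0.4])
shortfall = np.array([0.01, 0.10])  # R_s(w_s*) - R_s(w) per scenario

quadratic = float(prob @ np.square(shortfall))  # the default dist_func
absolute = float(prob @ np.abs(shortfall))      # an alternative dist_func

print(quadratic, absolute)  # → 0.00406 0.046
```

The quadratic penalty punishes the 0.10 shortfall far more than the 0.01 one, so it favours portfolios that avoid large regret in any single scenario.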

property
prob
¶ Vector containing probability of each scenario occurring

set_bounds
(lb, ub)[source]¶ Sets the lower and upper bounds
 Parameters
lb ({int, float, ndarray}) – Vector of lower bounds. If array, must be same length as number of free variables. If float or int, the value will be propagated to all variables.
ub ({int, float, ndarray}) – Vector of upper bounds. If array, must be same length as number of free variables. If float or int, the value will be propagated to all variables.

set_epsilon_constraint
(eps)[source]¶ Sets the tolerance for the constraint functions
 Parameters
eps (float) – Tolerance

set_ftol_abs
(tol)[source]¶ Set absolute tolerance on objective function value.
The absolute tolerance on function value: stop when an optimization step (or an estimate of the optimum) changes the function value by less than tol. Criterion is disabled if tol is non-positive.
 Parameters
tol (float) – Absolute tolerance of objective function value

set_ftol_rel
(tol)[source]¶ Set relative tolerance on objective function value.
Set relative tolerance on function value: stop when an optimization step (or an estimate of the optimum) changes the objective function value by less than tol multiplied by the absolute value of the function value. (If there is any chance that your optimum function value is close to zero, you might want to set an absolute tolerance with ftol_abs as well.) Criterion is disabled if tol is non-positive.
 Parameters
tol (float, optional) – Relative tolerance of objective function value

set_max_objective
(functions)[source]¶ Sets the optimizer to maximize the objective function. If the gradient of the objective function is not set and the algorithm used to optimize is gradient-based, the optimizer will attempt to insert a smart numerical gradient for it.
The list of functions needs to match the number of scenarios. The function at index 0 will be assigned as the objective function to the first optimization regime.
 Parameters
functions (Callable or List of Callable) – Objective function. The function signature should be such that the first argument takes in a weight vector and outputs a numeric (float). The second argument is optional and contains the gradient. If given, the user must adjust the gradients in-place. If only a single function is given, the same function will be used for all the scenarios

set_maxeval
(n)[source]¶ Sets maximum number of objective function evaluations.
Stop when the number of function evaluations exceeds maxeval. (This is not a strict maximum: the number of function evaluations may exceed maxeval slightly, depending upon the algorithm.) Criterion is disabled if maxeval is non-positive.
 Parameters
n (int) – maximum number of evaluations

set_meta
(*, assets=None, scenarios=None)[source]¶ Sets metadata which will be used for the result summary
 Parameters
assets (list of str, optional) – Names of each asset class
scenarios (list of str, optional) – Names of each scenario

set_min_objective
(functions)[source]¶ Sets the optimizer to minimize the objective function. If the gradient of the objective function is not set and the algorithm used to optimize is gradient-based, the optimizer will attempt to insert a smart numerical gradient for it.
The list of functions needs to match the number of scenarios. The function at index 0 will be assigned as the objective function to the first optimization regime.
 Parameters
functions (Callable or List of Callable) – Objective function. The function signature should be such that the first argument takes in a weight vector and outputs a numeric (float). The second argument is optional and contains the gradient. If given, the user must adjust the gradients in-place. If only a single function is given, the same function will be used for all the scenarios

set_xtol_abs
(tol)[source]¶ Sets absolute tolerances on optimization parameters.
The absolute tolerances on optimization parameters. tol is an array giving the tolerances: stop when an optimization step (or an estimate of the optimum) changes every parameter x[i] by less than tol[i]. For convenience, if a scalar tol is given, it will be used to set the absolute tolerances in all n optimization parameters to the same value. Criterion is disabled if tol is non-positive.
 Parameters
tol (float or np.ndarray) – Absolute tolerance for each of the free variables

set_xtol_rel
(tol)[source]¶ Sets relative tolerances on optimization parameters.
Set relative tolerance on optimization parameters: stop when an optimization step (or an estimate of the optimum) causes a relative change in the parameters x of less than tol, i.e. \(\|\Delta x\|_w < tol \cdot \|x\|_w\) measured by a weighted \(L_1\) norm \(\|x\|_w = \sum_i w_i |x_i|\), where the weights \(w_i\) default to 1. (If there is any chance that the optimal \(x\) is close to zero, you might want to set an absolute tolerance with xtol_abs as well.) Criterion is disabled if tol is non-positive.
 Parameters
tol (float or np.ndarray) – Relative tolerance for each of the free variables

property
upper_bounds
¶ Upper bound of each variable for all the optimization models in the first and second stages
