Base Optimizer

The BaseOptimizer is the underlying object used to optimize anything. All other optimizers inherit from this class. It offers the most flexibility in modelling.
-
class allopy.optimize.BaseOptimizer(n, algorithm=40, *args, **kwargs)
__init__(n, algorithm=40, *args, **kwargs)
The BaseOptimizer is the raw optimizer with minimal support. For advanced users, this class provides the most flexibility. The default algorithm used is Sequential Least Squares Quadratic Programming (SLSQP).
- Parameters
n (int) – number of assets
algorithm (int or str) – the optimization algorithm
args – other arguments to set up the optimizer
kwargs – other keyword arguments
-
add_equality_constraint(fn, tol=None)
Adds an equality constraint function in standard form, A = b. If the gradient of the constraint function is not specified and the algorithm used is gradient-based, the optimizer will attempt to insert a smart numerical gradient for it.
- Parameters
fn (Callable[[ndarray], float]) – Constraint function
tol (float, optional) – A tolerance in judging feasibility for the purposes of stopping the optimization
- Returns
Own instance
- Return type
BaseOptimizer
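For illustration, a constraint function in the expected Callable[[ndarray], float] shape might look like the sketch below. The full-investment constraint is a made-up example, and the BaseOptimizer calls are shown as comments following the signatures documented on this page (they assume allopy is installed):

```python
import numpy as np

def weights_sum_to_one(w: np.ndarray) -> float:
    # Equality constraint in the documented Callable[[ndarray], float]
    # shape: equals zero exactly when the weights are fully invested.
    return float(np.sum(w) - 1.0)

# Usage sketch against the documented API (assumes allopy is available):
# from allopy.optimize import BaseOptimizer
# opt = BaseOptimizer(3)
# opt.add_equality_constraint(weights_sum_to_one, tol=1e-8)
```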
-
add_equality_matrix_constraint(Aeq, beq, tol=None)
Sets equality constraints in standard matrix form.
For equality, \(\mathbf{A} \cdot \mathbf{x} = \mathbf{b}\)
- Parameters
Aeq – Equality matrix. Must be 2 dimensional
beq – Equality vector or scalar. If scalar, it will be propagated
tol – A tolerance in judging feasibility for the purposes of stopping the optimization
- Returns
Own instance
- Return type
BaseOptimizer
-
add_inequality_constraint(fn, tol=None)
Adds an inequality constraint function in standard form, A <= b. If the gradient of the constraint function is not specified and the algorithm used is gradient-based, the optimizer will attempt to insert a smart numerical gradient for it.
- Parameters
fn (Callable[[ndarray], float]) – Constraint function
tol (float, optional) – A tolerance in judging feasibility for the purposes of stopping the optimization
- Returns
Own instance
- Return type
BaseOptimizer
-
add_inequality_matrix_constraint(A, b, tol=None)
Sets inequality constraints in standard matrix form.
For inequality, \(\mathbf{A} \cdot \mathbf{x} \leq \mathbf{b}\)
- Parameters
A – Inequality matrix. Must be 2 dimensional.
b – Inequality vector or scalar. If scalar, it will be propagated.
tol – A tolerance in judging feasibility for the purposes of stopping the optimization
- Returns
Own instance
- Return type
BaseOptimizer
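As a sketch, the matrix forms can be assembled with NumPy. The 3-asset problem below is illustrative, and the optimizer calls are commented (they assume allopy is available); the helper simply checks the standard-form constraints directly:

```python
import numpy as np

# Illustrative 3-asset problem.
# Equality: weights sum to 1  ->  Aeq @ x = beq
Aeq = np.ones((1, 3))
beq = 1.0  # scalar, will be propagated

# Inequality: no asset above 60%  ->  A @ x <= b
A = np.eye(3)
b = 0.6  # scalar, will be propagated

def is_feasible(x, A, b, Aeq, beq, tol=1e-8):
    # Checks the standard-form matrix constraints directly.
    return (np.all(A @ x <= b + tol)
            and np.all(np.abs(Aeq @ x - beq) <= tol))

# Usage sketch against the documented API:
# from allopy.optimize import BaseOptimizer
# opt = BaseOptimizer(3)
# opt.add_equality_matrix_constraint(Aeq, beq)
# opt.add_inequality_matrix_constraint(A, b)
```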
-
property lower_bounds
Lower bound of each variable
-
property model
The underlying optimizer. Use this if you need to access lower level settings for the optimizer
-
optimize(x0=None, *args, initial_solution='random', random_state=None)
Runs the optimizer and returns the optimal results, if any.
Notes
An initial vector must be set, and the quality of any solution (especially for gradient-based algorithms) depends on this initial vector. Alternatively, the optimizer will attempt to randomly generate a feasible one if the initial_solution argument is set to "random". However, feasibility is not guaranteed: finding a feasible solution in high-dimensional spaces is a hard problem in itself, let alone an optimal one, so use the random initial solution at your own risk.
The following lists the options for finding an initial solution to the optimization problem. If the user already knows the region to search, it is best to supply an initial value instead of relying on these heuristics.
- random
Randomly generates "bound-feasible" starting points for the decision variables. Note that these variables may not fulfil the other constraints. For problems where the bounds have been tightly defined, this often yields a good solution.
- min_constraint_norm
Solves the optimization problem listed below. The objective is to minimize the \(L_2\) norm of the constraint functions while keeping the decision variables bounded by the original problem's bounds.
\[\begin{split}\min \| \text{constraint} \|^2 \\ s.t. \\ LB \leq x \leq UB\end{split}\]
- Parameters
x0 (iterable float) – Initial vector. Starting position for free variables. In many cases, especially for derivative-based optimizers, it is important for the initial vector to be already feasible.
args – other arguments to pass into the optimizer
initial_solution (str, optional) – The method to find the initial solution if the initial vector x0 is not specified. Set as None to disable. However, if disabled, the initial vector must be supplied. See the Notes above for more information.
random_state (int, optional) – Random seed. Applicable if initial_solution is not None
- Returns
Values of free variables at optimality
- Return type
ndarray
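Putting the pieces together, a minimal end-to-end sketch might look like the following. The returns vector and constraint are made up for illustration; the allopy calls follow the signatures documented on this page and are commented since they assume the package is installed:

```python
import numpy as np

# Illustrative 3-asset data.
mu = np.array([0.05, 0.07, 0.03])  # hypothetical expected returns

def objective(w):
    # Expected portfolio return, to be maximized.
    return float(w @ mu)

def fully_invested(w):
    # Equality constraint: sum(w) - 1 = 0.
    return float(np.sum(w) - 1.0)

x0 = np.ones(3) / 3  # feasible starting point: equal weights

# Usage sketch against the documented API:
# from allopy.optimize import BaseOptimizer
# opt = BaseOptimizer(3)
# opt.set_bounds(0, 1)
# opt.set_max_objective(objective)
# opt.add_equality_constraint(fully_invested)
# sol = opt.optimize(x0)  # ndarray of weights at optimality
```

Supplying a feasible x0, as here, avoids relying on the "random" initial-solution heuristic described in the Notes.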
-
set_bounds(lb, ub)
Sets the lower and upper bounds
- Parameters
lb (Union[ndarray, Iterable, int, float, complex]) – Vector of lower bounds. If array, must be same length as number of free variables. If float or int, value will be propagated to all variables.
ub (Union[ndarray, Iterable, int, float, complex]) – Vector of upper bounds. If array, must be same length as number of free variables. If float or int, value will be propagated to all variables.
- Returns
Own instance
- Return type
BaseOptimizer
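The scalar-propagation behaviour described above can be mimicked in NumPy. The helper below is a hypothetical illustration, not part of allopy:

```python
import numpy as np

def expand_bound(bound, n):
    # Hypothetical illustration of scalar propagation: a scalar
    # bound becomes a length-n vector; an array is used as-is.
    arr = np.asarray(bound, dtype=float)
    if arr.ndim == 0:
        return np.full(n, float(arr))
    assert len(arr) == n, "bound vector must match number of variables"
    return arr

# expand_bound(0, 3) behaves like passing lb=0 for 3 variables.
```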
-
set_epsilon(eps)
Sets the step difference used when calculating the gradient for derivative-based optimization algorithms. This can be ignored if you use a derivative-free algorithm or if you specify your gradient explicitly.
- Parameters
eps (float) – The gradient step
- Returns
Own instance
- Return type
BaseOptimizer
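To see what the step size controls, here is a forward-difference numerical gradient of the kind such optimizers typically build internally. This is a generic sketch, not allopy's actual implementation:

```python
import numpy as np

def numerical_gradient(fn, x, eps=1e-6):
    # Forward-difference approximation: g_i ~ (f(x + eps*e_i) - f(x)) / eps
    g = np.zeros_like(x, dtype=float)
    fx = fn(x)
    for i in range(len(x)):
        step = x.copy()
        step[i] += eps
        g[i] = (fn(step) - fx) / eps
    return g

# For f(x) = sum(x**2), the exact gradient is 2x, so the
# approximation error shrinks with eps.
```

A too-large eps gives an inaccurate gradient; a too-small one amplifies floating-point noise, which is why the step is exposed as a setting.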
-
set_epsilon_constraint(eps)
Sets the tolerance for the constraint functions
- Parameters
eps (float) – Tolerance
- Returns
Own instance
- Return type
BaseOptimizer
-
set_ftol_abs(tol)
Sets the absolute tolerance on the objective function value
- Parameters
tol (float) – Absolute tolerance of objective function value
- Returns
Own instance
- Return type
BaseOptimizer
-
set_ftol_rel(tol)
Sets the relative tolerance on the objective function value
- Parameters
tol (float) – Relative tolerance of objective function value
- Returns
Own instance
- Return type
BaseOptimizer
-
set_lower_bounds(lb)
Sets the lower bounds
- Parameters
lb (Union[ndarray, Iterable, int, float, complex]) – Vector of lower bounds. If vector, must be same length as number of free variables. If float or int, value will be propagated to all variables.
- Returns
Own instance
- Return type
BaseOptimizer
-
set_max_objective(fn, *args)
Sets the optimizer to maximize the objective function. If the gradient of the objective function is not set and the algorithm used is gradient-based, the optimizer will attempt to insert a smart numerical gradient for it.
- Parameters
fn (Callable) – Objective function
args – Other arguments to pass to the objective function. This can be ignored in most cases
- Returns
Own instance
- Return type
BaseOptimizer
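The optional args are forwarded to the objective on each evaluation. A sketch with a hypothetical risk-penalized objective (the allopy calls are commented and assume the package is installed):

```python
import numpy as np

def risk_adjusted_return(w, mu, lam):
    # Hypothetical objective: expected return minus a simple
    # variance-style penalty, parameterized by mu and lam.
    return float(w @ mu - lam * float(w @ w))

mu = np.array([0.05, 0.07, 0.03])

# Usage sketch: extra arguments after fn are passed through to fn.
# from allopy.optimize import BaseOptimizer
# opt = BaseOptimizer(3)
# opt.set_max_objective(risk_adjusted_return, mu, 0.5)
```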
-
set_maxeval(n)
Sets the maximum number of objective function evaluations.
After the maximum number of evaluations, optimization will stop. Set 0 or a negative value for no limit.
- Parameters
n (int) – maximum number of evaluations
- Returns
Own instance
- Return type
BaseOptimizer
-
set_min_objective(fn, *args)
Sets the optimizer to minimize the objective function. If the gradient of the objective function is not set and the algorithm used is gradient-based, the optimizer will attempt to insert a smart numerical gradient for it.
- Parameters
fn (Callable) – Objective function
args – Other arguments to pass to the objective function. This can be ignored in most cases
- Returns
Own instance
- Return type
BaseOptimizer
-
set_stopval(stopval)
Stops the optimization when an objective value of at least stopval (when maximizing) or at most stopval (when minimizing) is found.
- Parameters
stopval (float) – Stopping value
- Returns
Own instance
- Return type
BaseOptimizer
-
set_upper_bounds(ub)
Sets the upper bounds
- Parameters
ub (Union[ndarray, Iterable, int, float, complex]) – Vector of upper bounds. If vector, must be same length as number of free variables. If float or int, value will be propagated to all variables.
- Returns
Own instance
- Return type
BaseOptimizer
-
set_xtol_abs(tol)
Sets the absolute tolerances on the optimization parameters.
The tol input must be an array of the length n specified at initialization. Alternatively, pass a single number to set the same tolerance for all optimization parameters.
- Parameters
tol (float or ndarray) – Absolute tolerance for each of the free variables
- Returns
Own instance
- Return type
BaseOptimizer
-
set_xtol_rel(tol)
Sets the relative tolerances on the optimization parameters.
The tol input must be an array of the length n specified at initialization. Alternatively, pass a single number to set the same tolerance for all optimization parameters.
- Parameters
tol (float or ndarray, optional) – Relative tolerance for each of the free variables
- Returns
Own instance
- Return type
BaseOptimizer
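For intuition, a parameter-tolerance stopping check of the kind such settings control can be sketched as follows. This is a generic illustration of how the criterion is typically evaluated, not allopy's internal code:

```python
import numpy as np

def x_converged(x_prev, x_new, xtol_rel=1e-8, xtol_abs=0.0):
    # Stop when every parameter moved less than the absolute
    # tolerance, or less than the relative tolerance times |x|.
    dx = np.abs(np.asarray(x_new) - np.asarray(x_prev))
    limit = np.maximum(xtol_abs, xtol_rel * np.abs(np.asarray(x_new)))
    return bool(np.all(dx <= limit))
```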
-
property summary
Prints a summary report of the optimizer
-
property upper_bounds
Upper bound of each variable