Release 0.3.3.dev0
Mike McKerns
2.26 search module
2.27 solvers module
2.28 strategy module
2.29 support module
2.30 svc module
2.31 svr module
2.32 symbolic module
2.33 termination module
2.34 tools module
CHAPTER 1
The mystic framework provides a collection of optimization algorithms and tools that allow the user to more
robustly (and easily) solve hard optimization problems. All optimization algorithms included in mystic provide
workflow at the fitting layer, not just access to the algorithms as function calls. mystic gives the user fine-grained
power to both monitor and steer optimizations as the fit processes are running. Optimizers can advance one iteration
with Step, or run to completion with Solve. Users can customize optimizer stop conditions, where both compound
and user-provided conditions may be used. Optimizers can save state, can be reconfigured dynamically, and can be
restarted from a saved solver or from a results file. All solvers can also leverage parallel computing, either within each
iteration or as an ensemble of solvers.
Where possible, mystic optimizers share a common interface, and thus can be easily swapped without the user
having to write any new code. mystic solvers all conform to a solver API, thus also have common method calls to
configure and launch an optimization job. For more details, see mystic.abstract_solver. The API also makes
it easy to bind a favorite 3rd party solver into the mystic framework.
Optimization algorithms in mystic can accept parameter constraints, either in the form of penalties (which “penalize”
regions of solution space that violate the constraints), or as constraints (which “constrain” the solver to only search
in regions of solution space where the constraints are respected), or both. mystic provides a large selection of
constraints, including probabilistic and dimensionally reducing constraints. By providing a robust interface designed to
enable the user to easily configure and control solvers, mystic greatly reduces the barrier to solving hard optimization
problems.
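A minimal sketch of the two coupling styles in plain Python (illustrative only; mystic supplies its own penalty and constraint machinery):

```python
# A toy objective: minimize the distance of x from the origin.
def cost(x):
    return sum(xi**2 for xi in x)

# Penalty style: violating x[0] >= 1 adds to the cost, but the solver
# may still visit infeasible points.
def penalty(x):
    violation = max(0.0, 1.0 - x[0])
    return 100.0 * violation**2

def penalized_cost(x):
    return cost(x) + penalty(x)

# Constraint style: every candidate is mapped into the feasible region
# before evaluation, so infeasible points are never scored.
def constrain(x):
    x = list(x)
    x[0] = max(x[0], 1.0)
    return x

def constrained_cost(x):
    return cost(constrain(x))

print(penalized_cost([0.0, 0.0]))   # cost 0 + penalty 100.0
print(constrained_cost([0.0, 0.0])) # x mapped to [1.0, 0.0] -> cost 1.0
```

Both forms leave the solver's search loop untouched; only the objective (or the candidate) is transformed.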
mystic is in active development, so any user feedback, bug reports, comments, or suggestions are highly appreciated.
A list of known issues is maintained at http://trac.mystic.cacr.caltech.edu/project/mystic/query.html, with a public
ticket list at https://github.com/uqfoundation/mystic/issues.
mystic Documentation, Release 0.3.3.dev0
• a common interface
• a control handler with: pause, continue, exit, and callback
• ease in selecting initial population conditions: guess, random, etc
• ease in checkpointing and restarting from a log or saved state
• the ability to leverage parallel & distributed computing
• the ability to apply a selection of logging and/or verbose monitors
• the ability to configure solver-independent termination conditions
• the ability to impose custom and user-defined penalties and constraints
To get up and running quickly, mystic also provides infrastructure to:
• easily generate a model (several standard test models are included)
• configure and auto-generate a cost function from a model
• configure an ensemble of solvers to perform a specific task
You can get the latest development version with all the shiny new features at:
https://github.com/uqfoundation
If you have a new contribution, please submit a pull request.
1.5 Installation
mystic is packaged to install from source, so you must download the tarball, unpack it, and run the installer:
[download]
$ tar -xvzf mystic-0.3.2.tar.gz
$ cd mystic-0.3.2
$ python setup.py build
$ python setup.py install
You will be warned of any missing dependencies and/or settings after you run the “build” step above. mystic
depends on dill, numpy and sympy, so you should install them first. There are several functions within mystic
where scipy is used if it is available; however, scipy is an optional dependency. Having matplotlib installed
is necessary for running several of the examples, so while it is not a hard requirement, you should probably
go get it. matplotlib is required for the results visualization provided by the scripts packaged with mystic.
Alternately, mystic can be installed with pip or easy_install:
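For example (assuming a standard pip setup):

```shell
$ pip install mystic
```

(With setuptools installed, `easy_install mystic` works similarly.)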
1.6 Requirements
mystic requires:
• python, version >= 2.6 or version >= 3.1, or pypy
• numpy, version >= 1.0
• sympy, version >= 0.6.7
• dill, version >= 0.2.8.2
• klepto, version >= 0.1.5.2
Optional requirements:
• setuptools, version >= 0.6
• matplotlib, version >= 0.91
• scipy, version >= 0.6.0
• pathos, version >= 0.2.2.1
• pyina, version >= 0.2.0
Probably the best way to get started is to look at the documentation at http://mystic.rtfd.io. Also see mystic.tests
for a set of scripts that demonstrate several of the many features of the mystic framework. You can run the test suite
with python -m mystic.tests. There are several plotting scripts that are installed with mystic, the primary
ones being mystic_log_reader (also available with python -m mystic) and mystic_model_plotter
(also available with python -m mystic.models). Several other plotting scripts that come with
mystic are detailed elsewhere in the documentation. See mystic.examples for examples that demonstrate
the basic use cases for configuration and launching of optimization jobs using one of the sample models provided
in mystic.models. Many of the included examples are standard optimization test problems. The use of constraints
and penalties are detailed in mystic.examples2, while more advanced features leveraging ensemble solvers and
dimensional collapse are found in mystic.examples3. The scripts in mystic.examples4 demonstrate
leveraging pathos for parallel computing, as well as some auto-partitioning schemes. mystic has the
ability to work in product measure space, and the scripts in mystic.examples5 show how to work with product
measures. The source code is generally well documented, so further questions may be resolved by inspecting the code
itself. Please feel free to submit a ticket on github, or ask a question on stackoverflow (@Mike McKerns). If you
would like to share how you use mystic in your work, please send an email (to mmckerns at uqfoundation dot
org).
Instructions on building a new model are in mystic.models.abstract_model. mystic provides base classes
for two types of models:
1.8 Citation
If you use mystic to do research that leads to publication, we ask that you acknowledge use of mystic by citing
the following in your publication:
or as a function call:
Parameters
• model (str) – full import path for the model (e.g. mystic.models.rosen)
• logfile (str, default=None) – name of convergence logfile (e.g. log.txt)
Returns None
Notes
• The option out takes a string of the filepath for the generated plot.
• The option bounds takes an indicator string, where bounds are given as comma-separated slices. For
example, using bounds = "-1:10, 0:20" will set lower and upper bounds for x to be (-1,10) and
y to be (0,20). The “step” can also be given, to control the number of lines plotted in the grid. Thus
"-1:10:.1, 0:20" sets the bounds as above, but uses increments of .1 along x and the default step
along y. For models > 2D, the bounds can be used to specify 2 dimensions plus fixed values for remaining
dimensions. Thus, "-1:10, 0:20, 1.0" plots the 2D surface where the z-axis is fixed at z=1.0.
When called from a script, slice objects can be used instead of a string, thus "-1:10:.1, 0:20, 1.0"
becomes (slice(-1,10,.1), slice(20), 1.0).
• The option label takes comma-separated strings. For example, label = "x,y," will place ‘x’ on the
x-axis, ‘y’ on the y-axis, and nothing on the z-axis. LaTeX is also accepted. For example, label = "$
h $, $ {\alpha}$, $ v$" will label the axes with standard LaTeX math formatting. Note that the
leading space is required, while a trailing space aligns the text with the axis instead of the plot frame.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option iter takes an integer of the largest iteration to plot.
• The option reduce can be given to reduce the output of a model to a scalar, thus converting
model(params) to reduce(model(params)). A reducer is given by the import path (e.g.
numpy.add).
• The option scale will convert the plot to log-scale, and scale the cost by z=log(4*z*scale+1)+2.
This is useful for visualizing small contour changes around the minimum.
• If using log-scale produces negative numbers, the option shift can be used to shift the cost by z=z+shift.
Both shift and scale are intended to help visualize contours.
• The option fill takes a boolean, to plot using filled contours.
• The option depth takes a boolean, to plot contours in 3D.
• The option dots takes a boolean, to show trajectory points in the plot.
• The option join takes a boolean, to connect trajectory points.
• The option verb takes a boolean, to print the model documentation.
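The bounds indicator string described above can be converted into slice objects with a few lines of Python. This is a hypothetical helper (parse_bounds is not part of mystic), shown only to clarify the string-to-slice correspondence:

```python
# Hypothetical helper: parse a bounds indicator string such as
# "-1:10:.1, 0:20, 1.0" into the tuple of slices and fixed values
# that can be passed when calling from a script.
def parse_bounds(spec):
    parsed = []
    for part in spec.split(","):
        part = part.strip()
        if ":" in part:
            # fields are lower:upper[:step]; empty fields become None
            fields = [float(f) if f else None for f in part.split(":")]
            parsed.append(slice(*fields))
        else:
            parsed.append(float(part))  # fixed value for this dimension
    return tuple(parsed)

print(parse_bounds("-1:10:.1, 0:20, 1.0"))
# (slice(-1.0, 10.0, 0.1), slice(0.0, 20.0, None), 1.0)
```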
log_reader(filename, **kwds)
plot parameter convergence from file written with LoggingMonitor
Available from the command shell as:
or as a function call:
mystic.log_reader(filename, **options)
Notes
• The option out takes a string of the filepath for the generated plot.
• The option dots takes a boolean, and will show data points in the plot.
• The option line takes a boolean, and will connect the data with a line.
• The option iter takes an integer of the largest iteration to plot.
• The option legend takes a boolean, and will display the legend.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option param takes an indicator string. The indicator string is built from comma-separated array slices.
For example, params = ":" will plot all parameters. Alternatively, params = ":2, 3:" will plot
all parameters except for the third parameter, while params = "0" will only plot the first parameter.
collapse_plotter(filename, **kwds)
generate cost convergence rate plots from file written with write_support_file
Available from the command shell as:
or as a function call:
mystic.collapse_plotter(filename, **options)
Notes
• The option dots takes a boolean, and will show data points in the plot.
• The option linear takes a boolean, and will plot in a linear scale.
• The option out takes a string of the filepath for the generated plot.
• The option iter takes an integer of the largest iteration to plot.
• The option label takes a label string. For example, label = "y" will label the plot with a ‘y’, while
label = " log-cost, $ log_{10}(\hat{P} - \hat{P}_{max})$" will label the y-axis
with standard LaTeX math formatting. Note that the leading space is required, and that the text is aligned
along the axis.
• The option col takes a string of comma-separated integers indicating iteration numbers where parameter
collapse has occurred. If a second set of integers is provided (delineated by a semicolon), the additional
set of integers will be plotted with a different linestyle (to indicate a different type of collapse).
This module contains the base class for launching several mystic solver instances – utilizing a parallel map
function to enable parallel computing. This module describes the ensemble solver interface. As with the
AbstractSolver, the _Step method must be overwritten with the derived solver’s optimization algorithm.
Similar to AbstractMapSolver, a call to map is required. In addition to the class interface, a simple function interface
for a derived solver class is often provided. For an example, see the following.
The default map API settings are provided within mystic, while distributed and parallel computing maps can be
obtained from the pathos package (http://dev.danse.us/trac/pathos).
Examples
2.1.1 Handler
All solvers packaged with mystic include a signal handler that provides the following options:
Handlers are enabled with the enable_signal_handler method, and are configured through the solver’s Solve
method. Handlers trigger when a signal interrupt (usually, Ctrl-C) is given while the solver is running.
Notes
Additional inputs:
SetDistribution(dist=None)
Set the distribution used for determining solver starting points
Inputs:
• dist: a mystic.math.Distribution instance
SetInitialPoints(x0, radius=0.05)
Set Initial Points with Guess (x0)
input::
• x0: must be a sequence of length self.nDim
• radius: generate random points within [-radius*x0, radius*x0] for i!=0 when a simplex-type
initial guess is required
* this method must be overwritten *
SetMultinormalInitialPoints(mean, var=None)
Generate Initial Points from Multivariate Normal.
input::
• mean must be a sequence of length self.nDim
• var can be None (var becomes the identity), a scalar (var becomes scalar * I), or the
variance matrix (which must be the right size!)
* this method must be overwritten *
SetNestedSolver(solver)
set the nested solver
input::
• solver: a mystic solver instance (e.g. NelderMeadSimplexSolver(3) )
SetRandomInitialPoints(min=None, max=None)
Generate Random Initial Points within given Bounds
input::
• min, max: must be a sequence of length self.nDim
• each min[i] should be <= the corresponding max[i]
* this method must be overwritten *
SetSampledInitialPoints(dist=None)
Generate Random Initial Points from Distribution (dist)
input::
• dist: a mystic.math.Distribution instance
* this method must be overwritten *
Solve(cost, termination=None, ExtraArgs=(), **kwds)
Minimize a ‘cost’ function with given termination conditions.
Uses an ensemble of optimizers to find the minimum of a function of one or more variables.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• disp (bool, default=False) – if True, print convergence messages.
Returns None
Terminated(disp=False, info=False, termination=None)
check if the solver meets the given termination conditions
Input::
• disp = if True, print termination statistics and/or warnings
• info = if True, return termination message (instead of boolean)
• termination = termination conditions to check against
Notes:: If no termination conditions are given, the solver’s stored termination conditions will be used.
_AbstractEnsembleSolver__get_solver_instance()
ensure the solver is a solver instance
_InitialPoints()
Generate a grid of starting points for the ensemble of optimizers
* this method must be overwritten *
__init__(dim, **kwds)
Takes one initial input:
Additional inputs:
__module__ = 'mystic.abstract_ensemble_solver'
_update_objective()
decorate the cost function with bounds, penalties, monitors, etc
This module contains the base classes for pathos pool and pipe objects, and describes the map and pipe interfaces. A
pipe is defined as a connection between two ‘nodes’, where a node is something that does work. A pipe may be a
one-way or two-way connection. A map is defined as a one-to-many connection between nodes. In both map and pipe
connections, results from the connected nodes can be returned to the calling node. There are several variants of pipe
and map, such as whether the connection is blocking, or ordered, or asynchronous. For pipes, derived methods must
overwrite the ‘pipe’ method, while maps must overwrite the ‘map’ method. Pipes and maps are available from worker
pool objects, where the work is done by any of the workers in the pool. For more specific point-to-point connections
(such as a pipe between two specific compute nodes), use the pipe object directly.
2.2.1 Usage
Notes
Each of the pathos worker pools relies on a different transport protocol (e.g. threads, multiprocessing, etc), where the
use of each pool comes with a few caveats. See the usage documentation and examples for each worker pool for more
information.
class AbstractPipeConnection(*args, **kwds)
Bases: object
AbstractPipeConnection base class for pathos pipes.
Required input: ???
Additional inputs: ???
Important class members: ???
Other class members: ???
__module__ = 'mystic.abstract_launcher'
__repr__() <==> repr(x)
__weakref__
list of weak references to the object (if defined)
_serve(*args, **kwds)
Create a new server if one isn’t already initialized
amap(f, *args, **kwds)
run a batch of jobs with an asynchronous map
Returns a results object which contains the results of applying the function f to the items of the argument
sequence(s). If more than one sequence is given, the function is called with an argument list consisting
of the corresponding item of each sequence. To retrieve the results, call the get() method on the returned
results object. The call to get() is blocking, until all results are retrieved. Use the ready() method on the
result object to check if all results are ready.
apipe(f, *args, **kwds)
submit a job asynchronously to a queue
Returns a results object which contains the result of calling the function f on a selected worker. To retrieve
the results, call the get() method on the returned results object. The call to get() is blocking, until the result
is available. Use the ready() method on the results object to check if the result is ready.
clear()
Remove server with matching state
imap(f, *args, **kwds)
run a batch of jobs with a non-blocking and ordered map
Returns a list iterator of results of applying the function f to the items of the argument sequence(s). If more
than one sequence is given, the function is called with an argument list consisting of the corresponding
item of each sequence.
map(f, *args, **kwds)
run a batch of jobs with a blocking and ordered map
Returns a list of results of applying the function f to the items of the argument sequence(s). If more than
one sequence is given, the function is called with an argument list consisting of the corresponding item of
each sequence.
pipe(f, *args, **kwds)
submit a job and block until results are available
Returns result of calling the function f on a selected worker. This function will block until results are
available.
uimap(f, *args, **kwds)
run a batch of jobs with a non-blocking and unordered map
Returns a list iterator of results of applying the function f to the items of the argument sequence(s). If more
than one sequence is given, the function is called with an argument list consisting of the corresponding
item of each sequence. The order of the resulting sequence is not guaranteed.
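The semantics of these map variants mirror the Python standard library's pool API, which can serve as a stand-in illustration (pathos maps behave analogously, but the stdlib names differ: imap_unordered plays the role of uimap, map_async the role of amap):

```python
from multiprocessing.pool import ThreadPool

def square(x):
    return x * x

pool = ThreadPool(4)

# blocking, ordered map: returns a complete list of results
assert pool.map(square, [1, 2, 3]) == [1, 4, 9]

# non-blocking, ordered map: returns an iterator over results
assert list(pool.imap(square, [1, 2, 3])) == [1, 4, 9]

# non-blocking, unordered map: result order is not guaranteed
assert sorted(pool.imap_unordered(square, [1, 2, 3])) == [1, 4, 9]

# asynchronous map: get() blocks until all results are retrieved
result = pool.map_async(square, [1, 2, 3])
assert result.get() == [1, 4, 9]

pool.close(); pool.join()
```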
This module contains the base class for mystic solvers that utilize a parallel map function to enable parallel
computing. This module describes the map solver interface. As with the AbstractSolver, the _Step method must be
overwritten with the derived solver’s optimization algorithm. Additionally, for the AbstractMapSolver, a call
to map is required. In addition to the class interface, a simple function interface for a derived solver class is often
provided. For an example, see the following.
The default map API settings are provided within mystic, while distributed and parallel computing maps can be
obtained from the pathos package (http://dev.danse.us/trac/pathos).
Examples
2.3.1 Handler
All solvers packaged with mystic include a signal handler that provides the following options:
Handlers are enabled with the enable_signal_handler method, and are configured through the solver’s Solve
method. Handlers trigger when a signal interrupt (usually, Ctrl-C) is given while the solver is running.
Notes
Additional inputs:
npop -- size of the trial solution population. [default = 1]
Inputs:
• scheduler – scheduler function (see pyina.launchers) [default: None]
• queue – queue name string (see pyina.launchers) [default: None]
Additional inputs:
• timelimit – time string in HH:MM:SS format [default: ‘00:05:00’]
SelectServers(servers, ncpus=None)
Select the compute server.
Description:
Accepts a tuple of (‘hostname:port’,), listing each available computing server.
If ncpus=None, then ‘autodetect’; or if ncpus=0, then ‘no local’. If servers=(‘*’,), then
‘autodetect’; or if servers=(), then ‘no remote’.
SetLauncher(launcher, nnodes=None)
Set launcher and (optionally) number of nodes.
Description:
Uses a launcher to provide the solver with the syntax to configure and launch optimization jobs
on the selected resource.
SetMapper(map, strategy=None)
Set the map function and the mapping strategy.
Description:
Sets a mapping function to perform the map-reduce algorithm. Uses a mapping strategy to provide
the algorithm for distributing the work list of optimization jobs across available resources.
Inputs:
• map – the mapping function [default: python_map]
• strategy – map strategy (see pyina.mappers) [default: worker_pool]
__init__(dim, **kwds)
Takes one initial input:
Additional inputs:
__module__ = 'mystic.abstract_map_solver'
This module contains the base class for mystic solvers, and describes the mystic solver interface. The _Step method
must be overwritten with the derived solver’s optimization algorithm. In addition to the class interface, a simple
function interface for a derived solver class is often provided. For an example, see mystic.scipy_optimize,
and the following.
Examples
An equivalent, but less flexible, call using the function interface is:
2.4.1 Handler
All solvers packaged with mystic include a signal handler that provides the following options:
Handlers are enabled with the enable_signal_handler method, and are configured through the solver’s Solve
method. Handlers trigger when a signal interrupt (usually, Ctrl-C) is given while the solver is running.
class AbstractSolver(dim, **kwds)
Bases: object
AbstractSolver base class for mystic optimizers.
Takes one initial input:
Additional inputs:
Collapse(disp=False)
if solver has terminated by collapse, apply the collapse (unless both collapse and “stop” are simultaneously
satisfied)
Collapsed(disp=False, info=False)
check if the solver meets the given collapse conditions
Input::
• disp = if True, print details about the solver state at collapse
• info = if True, return collapsed state (instead of boolean)
Finalize(**kwds)
cleanup upon exiting the main optimization loop
SaveSolver(filename=None, **kwds)
save solver state to a restart file
SetConstraints(constraints)
apply a constraints function to the optimization
input::
• a constraints function of the form: xk’ = constraints(xk), where xk is the current parameter vector.
Ideally, this function is constructed so the parameter vector it passes to the cost function will
satisfy the desired (i.e. encoded) constraints.
SetEvaluationLimits(generations=None, evaluations=None, new=False, **kwds)
set limits for generations and/or evaluations
input::
• generations = maximum number of solver iterations (i.e. steps)
• evaluations = maximum number of function evaluations
SetEvaluationMonitor(monitor, new=False)
select a callable to monitor (x, f(x)) after each cost function evaluation
SetGenerationMonitor(monitor, new=False)
select a callable to monitor (x, f(x)) after each solver iteration
SetInitialPoints(x0, radius=0.05)
Set Initial Points with Guess (x0)
input::
• x0: must be a sequence of length self.nDim
• radius: generate random points within [-radius*x0, radius*x0] for i!=0 when a simplex-type
initial guess is required
SetMultinormalInitialPoints(mean, var=None)
Generate Initial Points from Multivariate Normal.
input::
• mean must be a sequence of length self.nDim
• var can be None (var becomes the identity), a scalar (var becomes scalar * I), or the
variance matrix (which must be the right size!)
SetObjective(cost, ExtraArgs=None)
decorate the cost function with bounds, penalties, monitors, etc
SetPenalty(penalty)
apply a penalty function to the optimization
input::
• a penalty function of the form: y’ = penalty(xk), with y = cost(xk) + y’, where xk is the current
parameter vector. Ideally, this function is constructed so a penalty is applied when the desired
(i.e. encoded) constraints are violated. Equality constraints should be considered satisfied when
the penalty condition evaluates to zero, while inequality constraints are satisfied when the penalty
condition evaluates to a non-positive number.
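The sign convention above can be sketched with a quadratic penalty for an inequality condition (a generic construction, shown for illustration; mystic.penalty provides decorators of this kind, e.g. quadratic_inequality):

```python
# Quadratic penalty for an inequality condition g(x) <= 0:
# satisfied (non-positive) conditions contribute nothing to the cost.
def quadratic_inequality_penalty(g, k=100.0):
    def penalty(x):
        return k * max(0.0, g(x))**2
    return penalty

# condition: x[0] - x[1] <= 0
penalty = quadratic_inequality_penalty(lambda x: x[0] - x[1])

print(penalty([1.0, 2.0]))  # satisfied: contributes 0.0
print(penalty([3.0, 1.0]))  # violated by 2: contributes 100*2**2 = 400.0
```

The solver then minimizes cost(x) + penalty(x), so feasible regions are preferred without ever forbidding infeasible candidates outright.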
SetRandomInitialPoints(min=None, max=None)
Generate Random Initial Points within given Bounds
input::
• min, max: must be a sequence of length self.nDim
• each min[i] should be <= the corresponding max[i]
SetReducer(reducer, arraylike=False)
apply a reducer function to the cost function
input::
• a reducer function of the form: y’ = reducer(yk), where yk is a results vector and y’ is a single
value. Ideally, this method is applied to a cost function with a multi-value return, to reduce the
output to a single value. If arraylike, the reducer provided should take a single array as input and
produce a scalar; otherwise, the reducer provided should meet the requirements of python’s
builtin ‘reduce’ method (e.g. lambda x,y: x+y), taking two scalars and producing a scalar.
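The arraylike and pairwise reducer flavors can be illustrated with builtins (a sketch only; any cost function with a multi-valued return would do):

```python
from functools import reduce

# a toy cost function with a multi-valued return
def multi_cost(x):
    return [xi**2 for xi in x]

# arraylike reducer: takes the whole results vector at once
array_reducer = sum

# pairwise reducer: meets the requirements of builtin reduce,
# taking two scalars and producing a scalar
pair_reducer = lambda a, b: a + b

yk = multi_cost([1, 2, 3])          # [1, 4, 9]
assert array_reducer(yk) == 14      # arraylike=True style
assert reduce(pair_reducer, yk) == 14  # arraylike=False style
```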
SetSampledInitialPoints(dist=None)
Generate Random Initial Points from Distribution (dist)
input::
• dist: a mystic.math.Distribution instance
SetSaveFrequency(generations=None, filename=None, **kwds)
set frequency for saving solver restart file
input::
• generations = number of solver iterations before next save of state
• filename = name of file in which to save solver state
note:: SetSaveFrequency(None) will disable saving solver restart file
SetStrictRanges(min=None, max=None)
ensure solution is within bounds
input::
Notes
To run the solver until termination, call Solve(). Alternately, use Terminated() as the stop condition
in a while loop over Step.
If the algorithm does not meet the given termination conditions after the call to Step, the solver may be
left in an “out-of-sync” state. When abandoning a non-terminated solver, one should call Finalize()
to make sure the solver is fully returned to a “synchronized” state.
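The while-loop-over-Step pattern can be sketched with a toy stand-in (ToySolver is illustrative only; a real mystic solver exposes the same Step/Terminated/Finalize calls, with the stop condition supplied by a termination object):

```python
# A toy "solver" that halves its candidate each Step, illustrating
# Terminated() as the stop condition in a while loop over Step.
class ToySolver:
    def __init__(self, x0):
        self.bestSolution = x0
        self.generations = 0
    def Step(self):
        self.bestSolution *= 0.5
        self.generations += 1
    def Terminated(self):
        return abs(self.bestSolution) < 1e-3
    def Finalize(self):
        pass  # cleanup hook; a no-op in this sketch

solver = ToySolver(1.0)
while not solver.Terminated():
    solver.Step()
solver.Finalize()
print(solver.generations, solver.bestSolution)
```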
Additional inputs:
__module__ = 'mystic.abstract_solver'
__weakref__
list of weak references to the object (if defined)
_bootstrap_objective(cost=None, ExtraArgs=None)
HACK to enable not explicitly calling _decorate_objective
_clipGuessWithinRangeBoundary(x0, at=True)
ensure that initial guess is set within bounds
input::
• x0: must be a sequence of length self.nDim
_decorate_objective(cost, ExtraArgs=None)
decorate the cost function with bounds, penalties, monitors, etc
_process_inputs(kwds)
process and activate input settings
_update_objective()
decorate the cost function with bounds, penalties, monitors, etc
bestEnergy
get the bestEnergy (default – bestEnergy = popEnergy[0])
bestSolution
get the bestSolution (default – bestSolution = population[0])
disable_signal_handler()
disable workflow interrupt handler while solver is running
enable_signal_handler()
enable workflow interrupt handler while solver is running
energy_history
get the energy_history (default – energy_history = _stepmon._y)
evaluations
get the number of function calls
generations
get the number of iterations
solution_history
get the solution_history (default – solution_history = _stepmon.x)
decorators for caching function outputs, with function inputs as the keys
Notes
_index_selector(mask)
generate a selector for a mask of indices
_pair_selector(mask)
generate a selector for a mask of tuples (pairs)
_position_filter(mask)
generate a filter for a position mask (dict, set, or where)
_split_mask(mask)
separate a mask into a list of ints and list of tuples (pairs). mask should be composed of indices and pairs of
indices
_weight_filter(mask)
generate a filter for a weight mask (dict, set, or where)
collapse_as(stepmon, offset=False, tolerance=0.005, generations=50, mask=None)
return a set of pairs of indices where the parameters exhibit a dimensional collapse. Dimensional collapse is
defined by: max(pairwise(parameters)) <= tolerance over N generations (offset=False), or ptp(pairwise(parameters))
<= tolerance over N generations (offset=True).
collapse will be ignored at any pairs of indices specified in the mask. If single indices are provided, all
pairs with the given indices are ignored.
collapse_at(stepmon, target=None, tolerance=0.005, generations=50, mask=None)
return a set of indices where the parameters exhibit a dimensional collapse at the specified target. Dimensional
collapse is defined by: change(param[i]) <= tolerance over N generations, where change(param[i]) =
max(param[i]) - min(param[i]) if target = None, or change(param[i]) = abs(param[i] - target) otherwise.
target can be None, a single value, or a list of values of param length
collapse will be ignored at any indices specified in the mask
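The collapse test can be sketched directly from the definition (a standalone illustration, not mystic's implementation; collapsed_at is a hypothetical name):

```python
# check change(param[i]) over the last N generations of a parameter trace
def collapsed_at(history, tolerance=0.005, generations=50, target=None):
    window = history[-generations:]
    if target is None:
        change = max(window) - min(window)      # max - min over the window
    else:
        change = max(abs(v - target) for v in window)  # worst deviation from target
    return change <= tolerance

# a parameter trace that has settled near 1.0
trace = [0.5, 0.9, 1.001, 1.002, 1.001, 1.0015]
assert collapsed_at(trace, generations=4)              # last 4 span ~0.001
assert collapsed_at(trace, generations=4, target=1.0)  # all within 0.005 of 1.0
assert not collapsed_at(trace, generations=6)          # includes early wandering
```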
Additional Inputs:
• args – arguments for the constraints solver [default: ()]
• kwds – keyword arguments for the constraints solver [default: {}]
• k – penalty multiplier
• h – iterative multiplier
as_constraint(penalty, *args, **kwds)
Convert a penalty function to a constraints solver.
Inputs: penalty – a penalty function
Additional Inputs:
• lower_bounds – list of lower bounds on solution values
• upper_bounds – list of upper bounds on solution values
• nvars – number of parameter values
• solver – the mystic solver to use in the optimization
• termination – the mystic termination to use in the optimization
NOTE: The default solver is ‘diffev’, with npop=min(40, ndim*5). The default termination is
ChangeOverGeneration(), and the default guess is randomly selected points between the upper and
lower bounds.
with_mean(target)
bind a mean constraint to a given constraints function.
Inputs: target – the target mean
A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling
of “impose_mean” onto another constraints function c(x), such that: x’ = impose_mean(target, c(x)).
For example:
>>> @with_mean(5.0)
... def constraint(x):
...     x[-1] = x[0]
...     return x
...
>>> x = constraint([1,2,3,4])
>>> print(x)
[4.25, 5.25, 6.25, 4.25]
>>> mean(x)
5.0
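The coupling can be sketched with a simplified impose_mean that shifts every entry by a constant (illustration only; mystic provides the real impose_mean and with_mean):

```python
# simplified impose_mean: shift every entry so the mean hits the target
def impose_mean(target, x):
    shift = target - sum(x) / len(x)
    return [xi + shift for xi in x]

# outer coupling: x' = impose_mean(target, c(x))
def with_mean(target):
    def decorator(c):
        def constraint(x):
            return impose_mean(target, c(x))
        return constraint
    return decorator

@with_mean(5.0)
def constraint(x):
    x[-1] = x[0]
    return x

x = constraint([1, 2, 3, 4])   # c(x) = [1,2,3,1], mean 1.75, shift by 3.25
print(x)                       # [4.25, 5.25, 6.25, 4.25]
assert sum(x) / len(x) == 5.0
```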
with_variance(target)
bind a variance constraint to a given constraints function.
Inputs: target – the target variance
A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling
of “impose_variance” onto another constraints function c(x), such that: x’ = impose_variance(target, c(x)).
For example:
>>> @with_variance(1.0)
... def constraint(x):
...     x[-1] = x[0]
...     return x
...
>>> x = constraint([1,2,3])
>>> print(x)
[0.6262265521467858, 2.747546895706428, 0.6262265521467858]
>>> variance(x)
0.99999999999999956
with_std(target)
bind a standard deviation constraint to a given constraints function.
Inputs: target – the target standard deviation
A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling
of “impose_std” onto another constraints function c(x), such that: x’ = impose_std(target, c(x)).
For example:
>>> @with_std(1.0)
... def constraint(x):
...     x[-1] = x[0]
...     return x
...
>>> x = constraint([1,2,3])
>>> print(x)
[0.6262265521467858, 2.747546895706428, 0.6262265521467858]
>>> std(x)
0.99999999999999956
with_spread(target)
bind a range constraint to a given constraints function.
Inputs: target – the target range
A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling
of “impose_spread” onto another constraints function c(x), such that: x’ = impose_spread(target, c(x)).
For example:
>>> @with_spread(10.0)
... def constraint(x):
...     return [i**2 for i in x]
...
>>> x = constraint([1,2,3,4])
>>> print(x)
[3.1666666666666665, 5.1666666666666661, 8.5, 13.166666666666666]
>>> spread(x)
10.0
normalized(mass=1.0)
bind a normalization constraint to a given constraints function.
Inputs: mass – the target sum of normalized weights
A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling
of “normalize” onto another constraints function c(x), such that: x’ = normalize(c(x), mass).
For example:
>>> @normalized()
... def constraint(x):
...     return x
...
>>> constraint([1,2,3])
[0.16666666666666666, 0.33333333333333331, 0.5]
issolution(constraints, guess, tol=0.001)
Returns whether the guess is a solution to the constraints
Input: constraints – a constraints solver function or a penalty function. guess – list of parameter values proposed
to solve the constraints. tol – residual error magnitude for which constraints are considered solved.
For example:
>>> @normalized()
... def constraint(x):
...     return x
...
>>> constraint([.5,.5])
[0.5, 0.5]
>>> issolution(constraint, [.5,.5])
True
>>>
>>> from mystic.penalty import quadratic_inequality
>>> @quadratic_inequality(lambda x: x[0] - x[1] + 10)
... def penalty(x):
...     return 0.0
...
>>> penalty([-10,.5])
0.0
>>> issolution(penalty, [-10,.5])
True
solve(constraints, guess=None, nvars=None, solver=None, lower_bounds=None, upper_bounds=None, termination=None)
Use optimization to find a solution to a set of constraints.
Inputs: constraints – a constraints solver function or a penalty function
Additional Inputs: guess – list of parameter values proposed to solve the constraints. lower_bounds – list
of lower bounds on solution values. upper_bounds – list of upper bounds on solution values. nvars –
number of parameter values. solver – the mystic solver to use in the optimization. termination – the mystic
termination to use in the optimization.
NOTE: The resulting constraints will likely be more expensive to evaluate and less accurate than writing
the constraints solver from scratch.
NOTE: The ensemble solvers are available, using the default NestedSolver, where the keyword ‘guess’ can
be used to set the number of solvers.
NOTE: The default solver is ‘diffev’, with npop=min(40, ndim*5). The default termination is
ChangeOverGeneration(), and the default guess is randomly selected points between the upper and
lower bounds.
discrete(samples, index=None)
impose a discrete set of input values for the selected function
The function’s input will be mapped to the given discrete set
>>> squared([0,2,4,6,8,10])
[1, 4, 16, 25, 64, 100]
integers(ints=True, index=None)
impose the set of integers (by rounding) for the given function
The function’s input will be mapped to the ints, where:
• if ints is True, return results as ints; otherwise, use floats
• if index tuple provided, only round at the given indices
>>> @integers()
... def identity(x):
... return x
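The rounding behavior can be sketched in plain Python (to_integers below is an illustrative stand-in for the mapping the decorator applies, not mystic's code):

```python
# A plain-Python sketch of the rounding described above: map each
# selected entry to the nearest integer, as an int or a float.
def to_integers(x, ints=True, index=None):
    idx = range(len(x)) if index is None else index
    out = list(x)
    for i in idx:
        r = round(out[i])                 # nearest integer
        out[i] = int(r) if ints else float(r)
    return out

to_integers([0.2, 1.7, 2.2])              # [0, 2, 2]
to_integers([0.2, 1.7, 2.2], index=(1,))  # [0.2, 2, 2.2]
```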
near_integers(x)
the sum of all deviations from int values
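One plausible reading of this helper, sketched in plain Python (an assumption; mystic's exact formula is not shown here):

```python
# Sketch: sum of each entry's distance to the nearest integer.
def near_integers_sketch(x):
    return sum(abs(i - round(i)) for i in x)

near_integers_sketch([1.0, 2.9, 4.0])  # ~0.1
```

A value of zero means every entry is already an integer, so this quantity can serve as a penalty that drives solutions toward integer values.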
unique(seq, full=None)
replace the duplicate values with unique values in ‘full’
If full is a type (int or float), then unique values of the given type are selected from range(min(seq),max(seq)).
If full is a dict of {‘min’:min, ‘max’:max}, then unique floats are selected from range(min(seq),max(seq)). If
full is a sequence (list or set), then unique values are selected from the given sequence.
For example:
>>> unique([1,2,3,1,2,4], range(11))
[1, 2, 3, 9, 8, 4]
>>> unique([1,2,3,1,2,9], range(11))
[1, 2, 3, 8, 5, 9]
>>> try:
...     unique([1,2,3,1,2,13], range(11))
... except ValueError:
...     pass
...
>>>
>>> unique([1,2,3,1,2,4], {'min':0, 'max':11})
[1, 2, 3, 4.175187820357143, 2.5407265707465716, 4]
>>> unique([1,2,3,1,2,4], {'min':0, 'max':11, 'type':int})
[1, 2, 3, 6, 8, 4]
>>> unique([1,2,3,1,2,4], float)
[1, 2, 3, 1.012375036824941, 3.9821250727509905, 4]
>>> unique([1,2,3,1,2,10], int)
[1, 2, 3, 9, 6, 10]
>>> try:
...     unique([1,2,3,1,2,4], int)
... except ValueError:
...     pass
...
has_unique(x)
check for uniqueness of the members of x
impose_unique(seq=None)
ensure all values are unique and found in the given set
For example:
>>> @impose_unique(range(11))
... def doit(x):
...     return x
...
>>> doit([1,2,3,1,2,4])
[1, 2, 3, 9, 8, 4]
>>> doit([1,2,3,1,2,10])
[1, 2, 3, 8, 5, 10]
>>> try:
...     doit([1,2,3,1,2,13])
... except ValueError:
...     print("Bad Input")
...
Bad Input
bounded(seq, bounds, index=None, clip=True, nearest=True)
bound a sequence by bounds = [min,max]
For example:
>>> sequence = [0.123, 1.244, -4.755, 10.731, 6.207]
>>>
>>> bounded(sequence, (0,5))
array([0.123, 1.244, 0.   , 5.   , 5.   ])
>>>
>>> bounded(sequence, (0,5), index=(0,2,4))
array([ 0.123,  1.244,  0.   , 10.731,  5.   ])
>>>
>>> bounded(sequence, (0,5), clip=False)
array([0.123     , 1.244     , 3.46621839, 1.44469038, 4.88937466])
>>>
>>> bounds = [(0,5),(7,10)]
>>> my.constraints.bounded(sequence, bounds)
array([ 0.123,  1.244,  0.   , 10.   ,  7.   ])
>>> my.constraints.bounded(sequence, bounds, nearest=False)
array([ 0.123,  1.244,  7.   , 10.   ,  5.   ])
>>> my.constraints.bounded(sequence, bounds, nearest=False, clip=False)
array([0.123 ,
>>> @impose_as([(0,1),(3,1),(4,5),(5,6),(5,7)])
... def same(x):
... return x
...
>>> same([9,8,7,6,5,4,3,2,1])
[9, 9, 7, 9, 5, 5, 5, 5, 1]
>>> same([0,1,0,1])
[0, 0, 0, 0]
>>> same([-1,-2,-3,-4,-5,-6,-7])
[-1, -1, -3, -1, -5, -5, -5]
>>>
>>> @impose_as([(0,1),(3,1),(4,5),(5,6),(5,7)], 10)
... def doit(x):
... return x
...
>>> doit([9,8,7,6,5,4,3,2,1])
[9, 19, 7, 9, 5, 15, 25, 25, 1]
>>> doit([0,1,0,1])
[0, 10, 0, 0]
>>> doit([-1,-2,-3,-4,-5,-6])
[-1, 9, -3, -1, -5, 5]
>>> doit([-1,-2,-3,-4,-5,-6,-7])
[-1, 9, -3, -1, -5, 5, 15]
impose_at(index, target=0.0)
generate a function, where some input is set to the target
index should be a set of indices to be fixed at the target. The target can either be a single value (e.g. float), or a
list of values.
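A sketch of the described behavior (impose_at_sketch is illustrative, not mystic's implementation):

```python
# Sketch of a decorator that fixes the entries at the given indices
# to the target(s) after the wrapped function runs.
def impose_at_sketch(index, target=0.0):
    try:
        targets = list(target)             # a target per index
    except TypeError:
        targets = [target] * len(index)    # a single shared target
    def decorator(f):
        def wrapper(x):
            x = list(f(x))
            for i, t in zip(index, targets):
                x[i] = t
            return x
        return wrapper
    return decorator

@impose_at_sketch((1, 3), target=-99)
def same(x):
    return x

same([1, 2, 3, 4, 5])  # [1, -99, 3, -99, 5]
```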
>>>
>>> @impose_measure(npts, {}, wts)
... def doit(x):
... return x
...
>>> doit([.5, 0., .5, 2., 4., 6., .25, .5, .25, 6., 4., 2.])
[0.5, 0.0, 0.5, 2.0, 4.0, 6.0, 1.0, 0.0, 0.0, 4.0, 2.0, 0.0]
>>>
impose_position(npts, tracking)
generate a function, that constrains measure positions
npts is a tuple of the product_measure dimensionality
tracking is a dict of collapses, or a tuple of dicts of collapses. A tracking collapse is a dict of {measure: {pairs
of indices}}, where the pairs of indices are where the positions will be constrained to have the same value; the
weight from the second index in each pair is removed and added to the weight of the first index.
>>>
impose_weight(npts, noweight)
generate a function, that constrains measure weights
npts is a tuple of the product_measure dimensionality
noweight is a dict of collapses, or a tuple of dicts of collapses. A noweight collapse is a dict of {measure:
{indices}}, where the indices are where the measure will be constrained to have zero weight.
and_(*constraints, **settings)
combine several constraints into a single constraint
Inputs: constraints – constraint functions
Additional Inputs: maxiter – maximum number of iterations to attempt to solve [default: 100]
or_(*constraints, **settings)
create a constraint that is satisfied if any constraints are satisfied
Inputs: constraints – constraint functions
Additional Inputs: maxiter – maximum number of iterations to attempt to solve [default: 100]
not_(constraint, **settings)
invert the region where the given constraints are valid, then solve
Inputs: constraint – constraint function
Additional Inputs: maxiter – maximum number of iterations to attempt to solve [default: 100]
Function Couplers
These methods can be used to couple two functions together, and represent some common patterns found in applying
constraints and penalty methods.
For example, the “outer” method called on y = f(x), with outer=c(x), will convert y = f(x) to y’ = c(f(x)). Similarly,
the “inner” method called on y = f(x), with inner=c(x), will convert y = f(x) to y’ = f(c(x)).
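Both patterns can be written down in a few lines of plain Python (a sketch; mystic's couplers additionally preserve signatures and pass extra arguments):

```python
# "outer": y' = c(f(x));  "inner": y' = f(c(x))
def outer(c):
    def coupler(f):
        return lambda x: c(f(x))
    return coupler

def inner(c):
    def coupler(f):
        return lambda x: f(c(x))
    return coupler

def constrain(x):                  # a toy constraints function
    return [i + 1 for i in x]

@outer(constrain)
def f_out(x):
    return [i**2 for i in x]

@inner(constrain)
def f_in(x):
    return [i**2 for i in x]

f_out([1, 2, 3])  # constrain applied to the output: [2, 5, 10]
f_in([1, 2, 3])   # constrain applied to the input:  [4, 9, 16]
```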
additive(penalty=<function <lambda>>, args=None, kwds=None)
penalize a function with another function: y = f(x) to y’ = f(x) + p(x)
This is useful, for example, in penalizing a cost function where the constraints are violated; thus, satisfying
the constraints will be preferred at every cost function evaluation.
For example:
>>> def squared(x):
...     return x**2
...
>>> # equivalent to: (x+1) + (x**2)
>>> @additive(squared)
... def constrain(x):
...     return x+1
...
>>> from numpy import array
>>> x = array([1,2,3,4,5])
>>> constrain(x)
array([ 3,  7, 13, 21, 31])
additive_proxy(penalty=<function <lambda>>, args=None, kwds=None)
penalize a function with another function: y = f(x) to y’ = f(x) + p(x)
This is useful, for example, in penalizing a cost function where the constraints are violated; thus, satisfying
the constraints will be preferred at every cost function evaluation.
This function does not preserve decorated function signature, but passes args and kwds to the penalty function.
and_(*penalties, **settings)
combine several penalties into a single penalty function by summation
Inputs: penalties – penalty functions
Additional Inputs: ptype – penalty function type [default: linear_equality]. args – arguments for the penalty
function [default: ()]. kwds – keyword arguments for the penalty function [default: {}]. k – penalty
multiplier [default: 1]. h – iterative multiplier [default: 5].
NOTE: The defaults provide a linear combination of the individual penalties without any scaling. A different
ptype (from ‘mystic.penalty’) will apply a nonlinear scaling to the combined penalty, while a different
k will apply a linear scaling.
NOTE: This function is also useful for combining constraints solvers into a single constraints solver; however,
it cannot do so directly. Constraints solvers must first be converted to penalty functions (i.e. with
‘as_penalty’), then combined, then converted back to a constraints solver (i.e. with ‘as_constraint’). The
resulting constraints will likely be more expensive to evaluate and less accurate than writing the constraints
solver from scratch.
inner(inner=<function <lambda>>, args=None, kwds=None)
nest a function within another function: convert y = f(x) to y’ = f(c(x))
This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable
x as input, returning a modified x. The “inner” coupler is utilized by mystic.solvers to bind constraints to a cost
function; thus the constraints are imposed every cost function evaluation.
For example:
>>> def squared(x):
...     return x**2
...
>>> # equivalent to: ((x**2)+1)
>>> @inner(squared)
... def constrain(x):
...     return x+1
...
>>> from numpy import array
>>> x = array([1,2,3,4,5])
>>> constrain(x)
array([ 2,  5, 10, 17, 26])
inner_proxy(inner=<function <lambda>>, args=None, kwds=None)
nest a function within another function: convert y = f(x) to y’ = f(c(x))
This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable
x as input, returning a modified x.
This function applies the “inner” coupler pattern. However, it does not preserve decorated function signature –
it passes args and kwds to the inner function instead of the decorated function.
not_(penalty, **settings)
invert, so penalizes the region where the given penalty is valid
Inputs: penalty – a penalty function
Additional Inputs: ptype – penalty function type [default: linear_equality]. args – arguments for the penalty
function [default: ()]. kwds – keyword arguments for the penalty function [default: {}]. k – penalty
multiplier [default: 1]. h – iterative multiplier [default: 5].
or_(*penalties, **settings)
create a single penalty that selects the minimum of several penalties
Inputs: penalties – penalty functions
Additional Inputs: ptype – penalty function type [default: linear_equality]. args – arguments for the penalty
function [default: ()]. kwds – keyword arguments for the penalty function [default: {}]. k – penalty
multiplier [default: 1]. h – iterative multiplier [default: 5].
NOTE: The defaults provide a linear combination of the individual penalties without any scaling. A different
ptype (from ‘mystic.penalty’) will apply a nonlinear scaling to the combined penalty, while a different
k will apply a linear scaling.
NOTE: This function is also useful for combining constraints solvers into a single constraints solver; however,
it cannot do so directly. Constraints solvers must first be converted to penalty functions (i.e. with
‘as_penalty’), then combined, then converted back to a constraints solver (i.e. with ‘as_constraint’). The
resulting constraints will likely be more expensive to evaluate and less accurate than writing the constraints
solver from scratch.
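Stripped of the ptype/k/h scaling, the two combination rules reduce to a sum and a minimum; a plain-Python sketch (illustrative names, not mystic's API):

```python
# and_: satisfied only when every penalty is zero (sum of penalties)
def and_sketch(*penalties):
    return lambda x: sum(p(x) for p in penalties)

# or_: satisfied when any one penalty is zero (minimum of penalties)
def or_sketch(*penalties):
    return lambda x: min(p(x) for p in penalties)

p1 = lambda x: max(0.0, x[0] - 1.0)   # zero when x[0] <= 1
p2 = lambda x: max(0.0, -x[0])        # zero when x[0] >= 0
both = and_sketch(p1, p2)
either = or_sketch(p1, p2)
both([2.0])    # 1.0 (p1 is violated)
either([2.0])  # 0.0 (p2 is satisfied)
```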
outer(outer=<function <lambda>>, args=None, kwds=None)
wrap a function around another function: convert y = f(x) to y’ = c(f(x))
This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable
x as input, returning a modified x.
For example:
>>> def squared(x):
...     return x**2
...
>>> # equivalent to: ((x+1)**2)
>>> @outer(squared)
... def constrain(x):
...     return x+1
...
>>> from numpy import array
>>> x = array([1,2,3,4,5])
>>> constrain(x)
array([ 4,  9, 16, 25, 36])
outer_proxy(outer=<function <lambda>>, args=None, kwds=None)
wrap a function around another function: convert y = f(x) to y’ = c(f(x))
This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable
x as input, returning a modified x.
This function applies the “outer” coupler pattern. However, it does not preserve decorated function signature –
it passes args and kwds to the outer function instead of the decorated function.
2.9.1 Solvers
This module contains a collection of optimization routines based on Storn and Price’s differential evolution algorithm.
The core solver algorithm was adapted from Phillips’s DETest.py. An alternate solver is provided that follows the logic
in Price, Storn, and Lampinen – in that both a current generation and a trial generation are maintained, and all
vectors for creating difference vectors and mutations draw from the current generation, which remains invariant
until the end of the iteration.
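The generation loop described above can be sketched compactly (a toy Best1Bin generation in plain Python; mystic's solvers add monitors, termination conditions, constraints, and the map interface):

```python
import random

def de_generation(pop, cost, F=0.8, CR=0.9):
    """One DE generation: mutate around the best vector (Best1Bin),
    cross over per parameter, and greedily select trial vs. parent."""
    best = min(pop, key=cost)
    nxt = []
    for i, x in enumerate(pop):
        a, b = random.sample([p for j, p in enumerate(pop) if j != i], 2)
        jrand = random.randrange(len(x))   # force at least one mutation
        trial = [best[k] + F * (a[k] - b[k])
                 if (k == jrand or random.random() < CR) else x[k]
                 for k in range(len(x))]
        nxt.append(trial if cost(trial) <= cost(x) else x)
    return nxt

# usage: minimize the 2-D sphere function
random.seed(0)
sphere = lambda x: sum(i * i for i in x)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
for _ in range(50):
    pop = de_generation(pop, sphere)
# min(map(sphere, pop)) is now near zero
```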
A minimal interface that mimics a scipy.optimize interface has also been implemented, and functionality from the
mystic solver API has been added with reasonable defaults.
Minimal function interface to optimization routines::
• diffev – Differential Evolution (DE) solver
• diffev2 – Price & Storn’s Differential Evolution solver
The corresponding solvers built on mystic’s AbstractSolver are::
• DifferentialEvolutionSolver – a DE solver
• DifferentialEvolutionSolver2 – Storn & Price’s DE solver
Mystic solver behavior activated in diffev and diffev2::
• EvaluationMonitor = Monitor()
• StepMonitor = Monitor()
• strategy = Best1Bin
• termination = ChangeOverGeneration(ftol, gtol) if gtol is provided; otherwise, termination = VTRChangeOverGeneration(ftol)
Storn & Price’s DE Solver has also been implemented to use the “map” interface. Mystic enables the user to override
the standard python map function with their own ‘map’ function, or one of the map functions provided by the pathos
package (see http://dev.danse.us/trac/pathos) for distributed and high-performance computing.
2.9.2 Usage
Practical advice for how to configure the Differential Evolution Solver for your own objective function can be found
on R. Storn’s web page (http://www.icsi.berkeley.edu/~storn/code.html), and is reproduced here:
First try the following classical settings for the solver configuration:
Choose a crossover strategy (e.g. Rand1Bin), set the number of parents
NP to 10 times the number of parameters, select ScalingFactor=0.8, and
CrossProbability=0.9.
It has been found recently that selecting ScalingFactor from the interval
[0.5, 1.0] randomly for each generation or for each difference vector,
a technique called dither, improves convergence behaviour significantly,
especially for noisy objective functions.
If you still get misconvergence you might want to instead try a different
crossover strategy. The most commonly used are Rand1Bin, Rand1Exp,
Best1Bin, and Best1Exp. The crossover strategy is not so important a
choice, although K. Price claims that binomial (Bin) is never worse than
exponential (Exp).
All solvers included in this module provide the standard signal handling. For more information, see
mystic.abstract_solver.
References
1. Storn, R. and Price, K. Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over
Continuous Spaces. Journal of Global Optimization 11: 341-359, 1997.
2. Price, K., Storn, R., and Lampinen, J. - Differential Evolution, A Practical Approach to Global Optimization.
Springer, 1st Edition, 2005.
class DifferentialEvolutionSolver(dim, NP=4)
Bases: mystic.abstract_solver.AbstractSolver
Differential Evolution optimization.
Takes two initial inputs: dim – dimensionality of the problem. NP – size of the trial solution population
[requires: NP >= 4].
All important class members are inherited from AbstractSolver.
SetConstraints(constraints)
apply a constraints function to the optimization
input::
• a constraints function of the form: xk’ = constraints(xk), where xk is the current parameter vector.
Ideally, this function is constructed so the parameter vector it passes to the cost function will
satisfy the desired (i.e. encoded) constraints.
Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using differential evolution.
Uses a differential evolution algorithm to find the minimum of a function of one or more variables.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.
• CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.
• ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• disp (bool, default=False) – if True, print convergence messages.
Returns None
UpdateGenealogyRecords(id, newchild)
Override me for more refined behavior. Currently all changes are logged.
_Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration. Note that ExtraArgs should be a tuple of extra arguments.
__init__(dim, NP=4)
Takes two initial inputs: dim – dimensionality of the problem. NP – size of the trial solution population
[requires: NP >= 4].
All important class members are inherited from AbstractSolver.
__module__ = 'mystic.differential_evolution'
_decorate_objective(cost, ExtraArgs=None)
decorate cost function with bounds, penalties, monitors, etc
_process_inputs(kwds)
process and activate input settings
class DifferentialEvolutionSolver2(dim, NP=4)
Bases: mystic.abstract_map_solver.AbstractMapSolver
Differential Evolution optimization, using Storn and Price’s algorithm.
Alternate implementation:
• utilizes a map-reduce interface, extensible to parallel computing
• both a current and a next generation are kept, while the current generation is invariant during the main
DE logic
Takes two initial inputs: dim – dimensionality of the problem. NP – size of the trial solution population
[requires: NP >= 4].
All important class members are inherited from AbstractSolver.
SetConstraints(constraints)
apply a constraints function to the optimization
input::
• a constraints function of the form: xk’ = constraints(xk), where xk is the current parameter vector.
Ideally, this function is constructed so the parameter vector it passes to the cost function will
satisfy the desired (i.e. encoded) constraints.
Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using differential evolution.
Uses a differential evolution algorithm to find the minimum of a function of one or more variables. This
implementation holds the current generation invariant until the end of each iteration.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.
• CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.
• ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
Notes
2.10.1 Solvers
This module contains a collection of optimization routines that use “map” to distribute several optimizer instances
over parameter space. Each solver accepts an imported solver object as the “nested” solver, which becomes the target
of the map function.
The set of solvers built on mystic’s AbstractEnsembleSolver are::
• LatticeSolver – start from center of N grid points
• BuckshotSolver – start from N random points in parameter space
• SparsitySolver – start from N points sampled in sparse regions of space
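The difference between the lattice and buckshot starting strategies can be sketched as follows (helper names are illustrative, not mystic's API):

```python
import itertools
import random

def lattice_points(bounds, nbins):
    # one start point at the center of each grid cell
    axes = [[lo + (i + 0.5) * (hi - lo) / n for i in range(n)]
            for (lo, hi), n in zip(bounds, nbins)]
    return list(itertools.product(*axes))

def buckshot_points(bounds, npts):
    # N uniformly random start points in the box
    return [tuple(random.uniform(lo, hi) for lo, hi in bounds)
            for _ in range(npts)]

lattice_points([(0, 1), (0, 1)], (2, 2))
# [(0.25, 0.25), (0.25, 0.75), (0.75, 0.25), (0.75, 0.75)]
```

Each start point would then seed one nested solver instance, with the ensemble's best result returned.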
2.10.2 Usage
_InitialPoints()
Generate a grid of starting points for the ensemble of optimizers
__init__(dim, npts=8, rtol=None)
Takes three initial inputs: dim – dimensionality of the problem. npts – number of parallel solver instances.
rtol – size of radial tolerance for sparsity.
All important class members are inherited from AbstractEnsembleSolver.
__module__ = 'mystic.ensemble'
lattice(cost, ndim, nbins=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using the lattice ensemble solver.
Uses a lattice ensemble algorithm to find the minimum of a function of one or more variables. Mimics the
scipy.optimize.fmin interface. Starts N solver instances at regular intervals in parameter space, determined
by nbins (N = numpy.prod(nbins); len(nbins) == ndim).
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• ndim (int) – dimensionality of the problem.
• nbins (tuple(int), default=8) – total bins, or # of bins in each dimension.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.
• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=10) – maximum iterations to run without improvement.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• solver (solver, default=None) – override the default nested Solver instance.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current
parameter vector, and xk' is a parameter vector that satisfies the encoded constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y == 0 when the encoded constraints are satisfied (and y > 0
otherwise).
• map (func, default=None) – a (parallel) map function y = map(f, x).
Notes
Input/output ‘filters’
Identity(x)
identity filter, F, where F(x) yields x
NullChecker(params, evalpts, *args)
null validity check
PickComponent(n, multiplier=1.0)
component filter, F, where F(x) yields x[n]
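These filters can be sketched in plain Python (illustrative stand-ins; mystic's actual implementations may differ in detail):

```python
# identity filter: F(x) yields x
def Identity(x):
    return x

# null validity check: never objects to the inputs
def NullChecker(params, evalpts, *args):
    return None

# component filter: F(x) yields multiplier * x[n]
def PickComponent(n, multiplier=1.0):
    def component_filter(x):
        return multiplier * x[n]
    return component_filter

pick = PickComponent(1, multiplier=2.0)
pick([3.0, 4.0, 5.0])  # 8.0
```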
This module contains classes that aid in constructing cost functions. Cost functions can easily be created by hand;
however, mystic also provides an automated method that allows the dynamic wrapping of forward models into cost
function objects.
2.12.1 Usage
The basic usage pattern for a cost factory is to generate a cost function from a set of data points and a corresponding
set of evaluation points. The cost factory requires a “model factory”, which is just a generator of model function
instances from a list of coefficients. The following example uses numpy.poly1d, which provides a factory for generating
polynomials. An expanded version of the following can be found in mystic.examples.example12.
>>> # get a model factory
>>> import numpy as np
>>> FunctionFactory = np.poly1d
>>>
>>> # generate some evaluation points
>>> xpts = 0.1 * np.arange(-50.,51.)
>>>
>>> # we don't have real data, so generate fake data from target and model
>>> target = [2.,-5.,3.]
>>> ydata = FunctionFactory(target)(xpts)
>>> noise = np.random.normal(0,1,size=len(ydata))
>>> ydata = ydata + noise
>>>
>>> # get a cost factory
>>> from mystic.forward_model import CostFactory
>>> C = CostFactory()
>>>
>>> # generate a cost function for the model factory
>>> metric = lambda x: np.sum(x*x)
>>> C.addModel(FunctionFactory, inputs=len(target))
>>> cost = C.getCostFunction(evalpts=xpts, observations=ydata,
... sigma=1.0, metric=metric)
>>>
>>> # pass the cost function to the optimizer
>>> from mystic.solvers import fmin_powell
>>> initial_guess = [1.,-2.,1.]
>>> solution = fmin_powell(cost, initial_guess)
>>> print(solution)
[ 2.00495233 -5.0126248 2.72873734]
In general, a user will be required to write their own model factory. See the examples contained in mystic.models for
more information.
The CostFactory can be used to couple models together into a single cost function. For an example, see mys-
tic.examples.forward_model.
class CostFactory
Bases: object
A cost function generator.
CostFactory builds a list of forward model factories, and maintains a list of associated model names and number
of inputs. Can be used to combine several models into a single cost function.
Takes no initial inputs.
__init__()
CostFactory builds a list of forward model factories, and maintains a list of associated model names and
number of inputs. Can be used to combine several models into a single cost function.
Takes no initial inputs.
__module__ = 'mystic.forward_model'
__repr__() <==> repr(x)
__weakref__
list of weak references to the object (if defined)
addModel(model, inputs, name=None, outputFilter=<function Identity>, inputChecker=<function
NullChecker>)
Adds a forward model factory to the cost factory.
Inputs: model – a callable function factory object inputs – number of input arguments to model name – a
string representing the model name
Example
getCostFunctionSlow(evalpts, observations)
Get a cost function that allows simultaneous evaluation of all forward models for the same set of evaluation
points and observation points.
Parameters
• evalpts (list(float)) – a list of evaluation points (i.e. input).
• observations (list(float)) – a list of data points (i.e. output).
Notes
The cost metric is hard-wired to be the sum of the real part of |x|^2, where x is the VectorCostFunction
for a given set of parameters.
Input parameters do NOT go through filters registered as inputCheckers.
Examples
getForwardEvaluator(evalpts)
Get a model factory that allows simultaneous evaluation of all forward models for the same set of evalua-
tion points.
Inputs: evalpts – a list of evaluation points
Example
getParameterList()
Get a ‘pretty’ listing of the input parameters and corresponding models.
getRandomParams()
getVectorCostFunction(evalpts, observations)
Get a vector cost function that allows simultaneous evaluation of all forward models for the same set of
evaluation points and observation points.
Inputs: evalpts – a list of evaluation points observations – a list of data points
The vector cost metric is hard-wired to be the sum of the difference of getForwardEvaluator(evalpts) and
the observations.
NOTE: Input parameters do NOT go through filters registered as inputCheckers.
Example
_extend_mask(condition, mask)
extend the mask in the termination condition with the given mask
_replace_mask(condition, mask)
replace the mask in the termination condition with the given mask
_update_masks(condition, mask, kind='', new=False)
update the termination condition with the given mask
get_mask(condition)
get mask from termination condition
update_mask(condition, collapse, new=False)
update the termination condition with the given collapse (dict)
update_position_masks(condition, mask, new=False)
update all position masks in the given termination condition
update_weight_masks(condition, mask, new=False)
update all weight masks in the given termination condition
2.16.1 Functions
Mystic provides a set of mathematical functions that support various advanced optimization features such as
uncertainty analysis and parameter sensitivity.
2.16.2 Tools
Mystic also provides a set of mathematical tools that support advanced features such as parameter space partitioning
and Monte Carlo estimation. These mathematical tools are provided:
Notes
The tolerance values are positive, typically very small numbers. The relative difference (rtol * abs(b)) and the
absolute difference atol are added together to compare against the absolute difference between a and b.
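The rule in the note can be written out directly (a minimal sketch of the scalar case; the array version applies it elementwise):

```python
# the test described above: |a - b| <= atol + rtol * |b|
def within_tolerance(a, b, rtol=1e-5, atol=1e-8):
    return abs(a - b) <= atol + rtol * abs(b)

within_tolerance(1.00001, 1.0)  # True:  1e-5 <= 1e-8 + 1e-5
within_tolerance(1.1, 1.0)      # False: 0.1 is far above the threshold
```

Note the asymmetry: the relative term scales with |b|, so swapping a and b can change the result when rtol dominates.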
Parameters
• a, b (array_like) – Input arrays to compare.
• rtol (float) – The relative tolerance parameter (see Notes).
• atol (float) – The absolute tolerance parameter (see Notes).
Returns y – Returns True if the two arrays are equal within the given tolerance; False otherwise. If
either array contains NaN, then False is returned.
Return type bool
Notes
Examples
approx module
Notes
Examples
Notes
If x and y are floats, return True if y is within either absolute error tol or relative error rel of x. You can disable
either the absolute or relative check by passing None as tol or rel (but not both).
For any other objects, x and y are checked in that order for a method __approx_equal__, and the result of
that is returned as a bool. Any optional arguments are passed to the __approx_equal__ method.
__approx_equal__ can return NotImplemented to signal it doesn’t know how to perform the specific
comparison, in which case the other object is checked instead. If neither object has the method, or both defer by
returning NotImplemented, then fall back on the same numeric comparison that is used for floats.
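The dispatch described above can be sketched as follows (illustrative; mystic's approx_equal handles more edge cases):

```python
# Sketch: check x then y for an __approx_equal__ hook; if neither
# handles the comparison, fall back on the numeric float test.
def approx_equal_sketch(x, y, tol=1e-12, rel=1e-7):
    for a, b in ((x, y), (y, x)):
        method = getattr(type(a), '__approx_equal__', None)
        if method is not None:
            result = method(a, b, tol, rel)
            if result is not NotImplemented:
                return bool(result)
    # numeric fallback: pass if either enabled test passes;
    # passing None as tol (or rel) disables that test
    tests = []
    if tol is not None:
        tests.append(abs(x - y) <= tol)                        # absolute
    if rel is not None:
        tests.append(abs(x - y) <= rel * max(abs(x), abs(y)))  # relative
    return any(tests)

approx_equal_sketch(1.0, 1.0 + 1e-13)  # True (absolute test passes)
approx_equal_sketch(1.0, 1.1)          # False
```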
Examples
compressed module
binary(n)
converts an int to binary (returned as a string). Hence, int(binary(x), base=2) == x.
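The round-trip property can be checked with a minimal stand-in (binary_sketch is illustrative, not mystic's implementation):

```python
# minimal stand-in for binary: an int rendered as a base-2 string
def binary_sketch(n):
    return format(n, 'b')   # e.g. 10 -> '1010'

s = binary_sketch(10)
int(s, base=2) == 10        # the round-trip holds
```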
binary2coords(binaries, positions, **kwds)
convert a list of binary strings to product measure coordinates
differs_by_one(ith, binaries, all=True, index=True)
get the binary string that differs by exactly one index
Parameters
• ith (int) – the target index
• binaries (list) – a list of binary strings
• all (bool, default=True) – if False, return only the results for indices < i
• index (bool, default=True) – if True, return the index of the results (not the results themselves)
index2binary(index, npts=None)
convert a list of integers to a list of binary strings
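The documented invariant int(binary(x), base=2) == x can be illustrated with a small stand-alone sketch; the padding rule in index2binary is an assumption, since the docs do not state how npts determines the string width:

```python
def binary(n):
    # documented invariant: int(binary(x), base=2) == x
    return format(n, 'b')

def index2binary(index, npts=None):
    # sketch: pad each string to the width needed to index npts points
    # (this padding rule is an assumption, not taken from the docs)
    width = max(npts - 1, 1).bit_length() if npts else 1
    return [format(i, 'b').zfill(width) for i in index]
```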
discrete module
Classes for discrete measure data objects. Includes point_mass, measure, product_measure, and scenario classes.
compose(samples, weights=None)
Generate a product_measure object from a nested list of N x 1D discrete measure positions and a nested list of
N x 1D weights. If weights are not provided, a uniform distribution with norm = 1.0 will be used.
decompose(c)
Decomposes a product_measure object into a nested list of N x 1D discrete measure positions and a nested list
of N x 1D weights.
unflatten(params, npts)
Map a list of random variables to N x 1D discrete measures in a product_measure object.
flatten(c)
Flattens a product_measure object into a list.
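The parameter layout shared by flatten and unflatten (and documented under the discrete classes below) can be sketched with plain lists; flatten_measures and unflatten_measures are hypothetical helpers, not the module's actual API:

```python
def flatten_measures(positions, weights):
    """Pack nested N x 1D positions and weights into the documented layout:
    [wx1, ..., wxM, x1, ..., xM, wy1, ..., wyN, y1, ..., yN, ...]."""
    params = []
    for w, x in zip(weights, positions):
        params.extend(w)  # M weights for this measure...
        params.extend(x)  # ...followed by M corresponding positions
    return params

def unflatten_measures(params, npts):
    """Invert the packing, given pts = (M, N, ...)."""
    positions, weights, i = [], [], 0
    for n in npts:
        weights.append(params[i:i + n])
        positions.append(params[i + n:i + 2 * n])
        i += 2 * n
    return positions, weights
```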
bounded_mean(mean_x, samples, xmin, xmax, wts=None)
norm_wts_constraintsFactory(pts)
factory for a constraints function that normalizes weights
mean_y_norm_wts_constraintsFactory(target, pts)
factory for a constraints function that imposes a mean on scenario values and normalizes weights
impose_feasible(cutoff, data, guess=None, **kwds)
impose shortness on a given scenario
This function attempts to minimize the infeasibility between observed data and a scenario of synthetic data by performing an optimization on w,x,y over the given bounds.
Parameters
• cutoff (float) – maximum acceptable deviation from shortness
• data (mystic.math.discrete.scenario) – a dataset of observed points
• guess (mystic.math.discrete.scenario, default=None) – the synthetic points
• tol (float, default=0.0) – maximum acceptable optimizer termination for
sum(infeasibility).
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)
• constraints (func, default=None) – a function x' = constraints(x), where x is a scenario that has been converted into a list of parameters (e.g. with scenario.flatten), and x' is the list of parameters after the encoded constraints have been satisfied.
Notes
Here, tol is used to set the optimization termination for minimizing the sum(infeasibility), while cutoff
is used in defining the deviation from shortness for observed x,y and synthetic x',y'.
guess can be either a scenario providing an initial guess at feasibility, or a tuple of the dimensions of the desired scenario, where initial values will be chosen at random.
impose_valid(cutoff, model, guess=None, **kwds)
impose model validity on a given scenario
This function attempts to minimize the graph distance between reality (data), y = G(x), and an approximating function, y' = F(x'), by performing an optimization on w,x,y over the given bounds.
Parameters
• cutoff (float) – maximum acceptable model invalidity |y - F(x')|.
• model (func) – the model function, y' = F(x').
• guess (scenario, default=None) – a scenario, defines y = G(x).
• hausdorff (bool, default=False) – hausdorff norm, where if given, then ytol = |y -
F(x')| + |x - x'|/norm
• xtol (float, default=0.0) – maximum acceptable pointwise graphical distance between model
and reality.
• tol (float, default=0.0) – maximum acceptable optimizer termination for sum(graphical
distances).
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)
• constraints (func, default=None) – a function x' = constraints(x), where x is a scenario that has been converted into a list of parameters (e.g. with scenario.flatten), and x' is the list of parameters after the encoded constraints have been satisfied.
Returns a scenario with the desired model validity
Notes
xtol defines the n-dimensional base of a pillar of height cutoff, centered at each point. The region inside the pillar defines the space where a “valid” model must intersect. If xtol is not specified, then the base of the pillar will be a Dirac at x' = x. This function performs an optimization to find a set of points where the model is valid.
Here, tol is used to set the optimization termination for minimizing the sum(graphical_distances),
while cutoff is used in defining the graphical distance between x,y and x',F(x').
guess can be either a scenario providing an initial guess at validity, or a tuple of the dimensions of the desired scenario, where initial values will be chosen at random.
class point_mass(position, weight=1.0)
Bases: object
a point mass object with weight and position
Parameters
• position (tuple(float)) – position of the point mass
• weight (float, default=1.0) – weight of the point mass
Notes
class measure
Bases: list
a 1D collection of point masses; a discrete measure
center_mass
sum of weights * positions
ess_maximum(f, tol=0.0)
calculate the maximum for the support of a given function
Parameters
• f (func) – a function that takes a list and returns a number
• tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the maximum value of f over all measure positions with support
ess_minimum(f, tol=0.0)
calculate the minimum for the support of a given function
Parameters
• f (func) – a function that takes a list and returns a number
• tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the minimum value of f over all measure positions with support
expect(f)
calculate the expectation for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the expectation of f over all measure positions
expect_var(f)
calculate the expected variance for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the expected variance of f over all measure positions
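The semantics of expect and expect_var can be sketched with plain lists; this simplified version uses scalar positions and a scalar-valued f, while the library operates on measure objects whose f takes a list:

```python
def expect(f, positions, weights):
    """E[f] = sum of weight_i * f(x_i) over all point masses
    (weights are assumed normalized to 1.0)."""
    return sum(w * f(x) for w, x in zip(weights, positions))

def expect_var(f, positions, weights):
    """Var[f] = E[(f - E[f])**2], the weighted second central moment."""
    m = expect(f, positions, weights)
    return sum(w * (f(x) - m) ** 2 for w, x in zip(weights, positions))
```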
mass
readonly – the sum of the weights
maximum(f)
calculate the maximum for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the maximum value of f over all measure positions
minimum(f)
calculate the minimum for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the minimum value of f over all measure positions
normalize()
normalize the weights to 1.0
Parameters None
Returns None
npts
readonly – the number of point masses in the measure
positions
a list of positions for all point masses in the measure
range
|max - min| for the positions
set_expect(expected, f, bounds=None, constraints=None, **kwds)
impose an expectation on the measure by adjusting the positions
Parameters
• expected (float) – target expected mean
• f (func) – a function that takes a list and returns a number
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)
• constraints (func, default=None) – a function c' = constraints(c), where c is a product measure, and c' is a product measure where the encoded constraints are satisfied.
• tol (float, default=None) – maximum allowable deviation from expected
• npop (int, default=200) – size of the trial solution population
• maxiter (int, default=1000) – the maximum number of iterations to perform
• maxfun (int, default=1e+6) – the maximum number of function evaluations
Returns None
Notes
Expectation E is calculated by minimizing mean(f(x)) - expected, over the given bounds, and will
terminate when E is found within deviation tol of the target mean expected. If tol is not provided,
then a relative deviation of 1% of expected will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from which to draw the mean, variance, etc.
bounds is a tuple with len(bounds) == 2, composed of all the lower bounds, then all the upper bounds, for each parameter.
set_expect_mean_and_var(expected, f, bounds=None, constraints=None, **kwds)
impose expected mean and var on the measure by adjusting the positions
Parameters
• expected (tuple(float)) – (expected mean, expected var)
• f (func) – a function that takes a list and returns a number
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)
• constraints (func, default=None) – a function c' = constraints(c), where c is a product measure, and c' is a product measure where the encoded constraints are satisfied.
• tol (float, default=None) – maximum allowable deviation from expected
• npop (int, default=200) – size of the trial solution population
• maxiter (int, default=1000) – the maximum number of iterations to perform
• maxfun (int, default=1e+6) – the maximum number of function evaluations
Returns None
Notes
Expected mean E and expected variance R are calculated by minimizing the sum of the absolute values
of mean(f(x)) - m and variance(f(x)) - v over the given bounds, and will terminate when
E and R are found within tolerance tol of the target mean m and variance v, respectively. If tol is not
provided, then a relative deviation of 1% of max(m,v) will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from which to draw the mean, variance, etc.
bounds is a tuple with len(bounds) == 2, composed of all the lower bounds, then all the upper bounds, for each parameter.
set_expect_var(expected, f, bounds=None, constraints=None, **kwds)
impose an expected variance on the measure by adjusting the positions
Parameters
• expected (float) – target expected variance
• f (func) – a function that takes a list and returns a number
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)
• constraints (func, default=None) – a function c' = constraints(c), where c is a product measure, and c' is a product measure where the encoded constraints are satisfied.
• tol (float, default=None) – maximum allowable deviation from expected
• npop (int, default=200) – size of the trial solution population
• maxiter (int, default=1000) – the maximum number of iterations to perform
• maxfun (int, default=1e+6) – the maximum number of function evaluations
Returns None
Notes
Expected var E is calculated by minimizing var(f(x)) - expected, over the given bounds, and will
terminate when E is found within deviation tol of the target variance expected. If tol is not provided,
then a relative deviation of 1% of expected will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from which to draw the mean, variance, etc.
bounds is a tuple with len(bounds) == 2, composed of all the lower bounds, then all the upper bounds, for each parameter.
support(tol=0)
get the positions with non-zero weight (i.e. support)
Parameters tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the list of positions with support
support_index(tol=0)
get the indices where there is support (i.e. non-zero weight)
Parameters tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the list of indices where there is support
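A minimal sketch of the support semantics above, using plain lists in place of the measure object; per the docs, any weight <= tol is treated as zero:

```python
def support(positions, weights, tol=0.0):
    """positions whose weight exceeds tol (i.e. the support)."""
    return [x for x, w in zip(positions, weights) if w > tol]

def support_index(weights, tol=0.0):
    """indices where the weight exceeds tol (i.e. where there is support)."""
    return [i for i, w in enumerate(weights) if w > tol]
```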
var
mean(|positions - mean(positions)|**2)
weights
a list of weights for all point masses in the measure
class product_measure
Bases: list
a N-d measure-theoretic product of discrete measures
Parameters iterable (list) – a list of mystic.math.discrete.measure objects
Notes
center_mass
sum of weights * positions
differs_by_one(ith, all=True, index=True)
get the coordinates where the associated binary string differs by exactly one index
Parameters
• ith (int) – the target index
• all (bool, default=True) – if False, only return results for indices < i
• index (bool, default=True) – if True, return the indices of the results instead of the results
themselves
Returns the coordinates where the associated binary string differs by one, or if index is True,
return the corresponding indices
ess_maximum(f, tol=0.0)
calculate the maximum for the support of a given function
Parameters
• f (func) – a function that takes a list and returns a number
• tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the maximum value of f over all measure positions with support
ess_minimum(f, tol=0.0)
calculate the minimum for the support of a given function
Parameters
• f (func) – a function that takes a list and returns a number
• tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the minimum value of f over all measure positions with support
expect(f)
calculate the expectation for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the expectation of f over all measure positions
expect_var(f)
calculate the expected variance for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the expected variance of f over all measure positions
flatten()
convert a product measure to a single list of parameters
Parameters None
Returns a list of parameters
Notes
Given product_measure.pts = (M, N, ...), then the returned list is params = [wx1, ..., wxM, x1, ..., xM, wy1, ..., wyN, y1, ..., yN, ...]. Thus, params will have M weights and M corresponding positions, followed by N weights and N corresponding positions, with this pattern followed for each new dimension of the desired product measure.
load(params, pts)
load the product measure from a list of parameters
Parameters
• params (list(float)) – parameters corresponding to N 1D discrete measures
• pts (tuple(int)) – number of point masses in each of the discrete measures
Notes
To append len(pts) new discrete measures to the product measure, it is assumed params either corresponds to the correct number of weights and positions specified by pts, or params has additional values (typically output values) which will be ignored. It is assumed that len(params) >= 2 * sum(product_measure.pts).
Given the value of pts = (M, N, ...), it is assumed that params = [wx1, ..., wxM, x1,
..., xM, wy1, ..., wyN, y1, ..., yN, ...]. Thus, params should have M weights and M
corresponding positions, followed by N weights and N corresponding positions, with this pattern followed
for each new dimension of the desired product measure.
mass
readonly – a list of weight norms
maximum(f)
calculate the maximum for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the maximum value of f over all measure positions
minimum(f)
calculate the minimum for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the minimum value of f over all measure positions
npts
readonly – the total number of point masses in the product measure
pof(f)
calculate probability of failure for a given function
Parameters f (func) – a function returning True for ‘success’ and False for ‘failure’
Returns the probability of failure, a float in [0.0,1.0]
Notes
pos
readonly – a list of positions for each discrete measure
positions
a list of positions for all point masses in the product measure
pts
readonly – the number of point masses for each discrete measure
sampled_pof(f, npts=10000)
use sampling to calculate probability of failure for a given function
Parameters
• f (func) – a function returning True for ‘success’ and False for ‘failure’
• npts (int, default=10000) – the number of point masses sampled from the underlying
discrete measures
Returns the probability of failure, a float in [0.0,1.0]
Notes
sampled_support(npts=10000)
randomly select support points from the underlying discrete measures
Parameters npts (int, default=10000) – the number of sampled points
Returns a list of len(product measure) lists, each of length npts
select(*index, **kwds)
generate product measure positions for the selected position indices
Parameters index (tuple(int)) – tuple of position indices
Returns a list of product measure positions for the selected indices
Examples
>>> r
[[9, 8], [1, 3], [4, 2]]
>>> r.select(*range(r.npts))
[(9, 1, 4), (8, 1, 4), (9, 3, 4), (8, 3, 4), (9, 1, 2), (8, 1, 2), (9, 3, 2), (8, 3, 2)]
>>> _pack(r)
[(9, 1, 4), (8, 1, 4), (9, 3, 4), (8, 3, 4), (9, 1, 2), (8, 1, 2), (9, 3, 2), (8, 3, 2)]
Notes
set_expect(expected, f, bounds=None, constraints=None, **kwds)
impose an expectation on the measure by adjusting the positions
Notes
Expectation E is calculated by minimizing mean(f(x)) - expected, over the given bounds, and will
terminate when E is found within deviation tol of the target mean expected. If tol is not provided,
then a relative deviation of 1% of expected will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from which to draw the mean, variance, etc.
bounds is a tuple with len(bounds) == 2, composed of all the lower bounds, then all the upper bounds, for each parameter.
set_expect_mean_and_var(expected, f, bounds=None, constraints=None, **kwds)
impose expected mean and var on the measure by adjusting the positions
Parameters
• expected (tuple(float)) – (expected mean, expected var)
• f (func) – a function that takes a list and returns a number
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)
• constraints (func, default=None) – a function c' = constraints(c), where c is a product measure, and c' is a product measure where the encoded constraints are satisfied.
• tol (float, default=None) – maximum allowable deviation from expected
• npop (int, default=200) – size of the trial solution population
• maxiter (int, default=1000) – the maximum number of iterations to perform
• maxfun (int, default=1e+6) – the maximum number of function evaluations
Returns None
Notes
Expected mean E and expected variance R are calculated by minimizing the sum of the absolute values
of mean(f(x)) - m and variance(f(x)) - v over the given bounds, and will terminate when
E and R are found within tolerance tol of the target mean m and variance v, respectively. If tol is not
provided, then a relative deviation of 1% of max(m,v) will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from which to draw the mean, variance, etc.
bounds is a tuple with len(bounds) == 2, composed of all the lower bounds, then all the upper bounds, for each parameter.
set_expect_var(expected, f, bounds=None, constraints=None, **kwds)
impose an expected variance on the measure by adjusting the positions
Notes
Expected var E is calculated by minimizing var(f(x)) - expected, over the given bounds, and will
terminate when E is found within deviation tol of the target variance expected. If tol is not provided,
then a relative deviation of 1% of expected will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from which to draw the mean, variance, etc.
bounds is a tuple with len(bounds) == 2, composed of all the lower bounds, then all the upper bounds, for each parameter.
support(tol=0)
get the positions with non-zero weight (i.e. support)
Parameters tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the list of positions with support
support_index(tol=0)
get the indices where there is support (i.e. non-zero weight)
Parameters tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the list of indices where there is support
update(params)
update the product measure from a list of parameters
Parameters params (list(float)) – parameters corresponding to N 1D discrete measures
Returns the product measure itself
Return type self (measure)
Notes
The dimensions of the product measure will not change upon update, and it is assumed params either corresponds to the correct number of weights and positions for the existing product_measure, or params has additional values (typically output values) which will be ignored. It is assumed that len(params) >= 2 * sum(product_measure.pts).
If product_measure.pts = (M, N, ...), then it is assumed that params = [wx1, ...,
wxM, x1, ..., xM, wy1, ..., wyN, y1, ..., yN, ...]. Thus, params should have M
weights and M corresponding positions, followed by N weights and N corresponding positions, with this
pattern followed for each new dimension of the desired product measure.
weights
a list of weights for all point masses in the product measure
wts
readonly – a list of weights for each discrete measure
class scenario(pm=None, values=None)
Bases: mystic.math.discrete.product_measure
a N-d product measure with associated data values
A scenario is a measure-theoretic product of discrete measures that also includes a list of associated values, with the values corresponding to measured or synthetic data for each measure position. Each point mass in the product measure is paired with a value; thus, essentially, a scenario is equivalent to a mystic.math.legacydata.dataset stored in a product_measure representation.
Parameters
• pm (mystic.math.discrete.product_measure, default=None) – a product measure
• values (list(float), default=None) – values associated with each position
Notes
flatten(all=True)
convert a scenario to a single list of parameters
Parameters all (bool, default=True) – if True, append the scenario values
Returns a list of parameters
Notes
Given scenario.pts = (M, N, ...), then the returned list is params = [wx1, ..., wxM,
x1, ..., xM, wy1, ..., wyN, y1, ..., yN, ...]. Thus, params will have M weights
and M corresponding positions, followed by N weights and N corresponding positions, with this pattern
followed for each new dimension of the scenario. If all is True, then the scenario.values will be
appended to the list of parameters.
load(params, pts)
load the scenario from a list of parameters
Parameters
• params (list(float)) – parameters corresponding to N 1D discrete measures
• pts (tuple(int)) – number of point masses in each of the discrete measures
Returns the scenario itself
Return type self (scenario)
Notes
To append len(pts) new discrete measures to the scenario, it is assumed params either corresponds to the correct number of weights and positions specified by pts, or params has additional values which will be saved as the scenario.values. It is assumed that len(params) >= 2 * sum(scenario.pts).
Given the value of pts = (M, N, ...), it is assumed that params = [wx1, ..., wxM, x1, ..., xM, wy1, ..., wyN, y1, ..., yN, ...]. Thus, params should have M weights and M corresponding positions, followed by N weights and N corresponding positions, with this pattern followed for each new dimension of the desired scenario. Any remaining parameters will be treated as scenario.values.
mean_value()
calculate the mean of the associated values for a scenario
Parameters None
Returns the weighted mean of the scenario values
pof_value(f)
calculate probability of failure for a given function
Parameters f (func) – a function returning True for ‘success’ and False for ‘failure’
Returns the probability of failure, a float in [0.0,1.0]
Notes
• the function f should take a list of values (for example, scenario.values) and return a single
value (e.g. 0.0 or False)
set_mean_value(m)
set the mean for the associated values of a scenario
Parameters m (float) – the target weighted mean of the scenario values
Returns None
set_valid(model, cutoff=0.0, bounds=None, constraints=None, **kwds)
impose model validity on a scenario by adjusting positions and values
This function attempts to minimize the graph distance between reality (data), y = G(x), and an approximating function, y' = F(x'), by performing an optimization on w,x,y over the given bounds.
Parameters
• model (func) – a model y' = F(x') that approximates reality y = G(x)
• cutoff (float, default=0.0) – acceptable model invalidity |y - F(x')|
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)
• constraints (func, default=None) – a function x' = constraints(x), where x is a scenario that has been converted into a list of parameters (e.g. with scenario.flatten), and x' is the list of parameters after the encoded constraints have been satisfied.
• hausdorff (bool, default=False) – hausdorff norm, where if given, then ytol = |y -
F(x')| + |x - x'|/norm
• xtol (float, default=0.0) – maximum acceptable pointwise graphical distance between
model and reality.
• tol (float, default=0.0) – maximum acceptable optimizer termination for
sum(graphical distances).
Returns None
Notes
xtol defines the n-dimensional base of a pillar of height cutoff, centered at each point. The region inside the pillar defines the space where a “valid” model must intersect. If xtol is not specified, then the base of the pillar will be a Dirac at x' = x. This function performs an optimization to find a set of points where the model is valid. Here, tol is used to set the optimization termination for minimizing the sum(graphical_distances), while cutoff is used in defining the graphical distance between x,y and x',F(x').
short_wrt_data(data, L=None, blamelist=False, pairs=True, all=False, raw=False, **kwds)
check for shortness with respect to the given data
Parameters
• data (list) – a list of data points or dataset to compare against.
• L (float, default=None) – the lipschitz constant, if different than in data.
• blamelist (bool, default=False) – if True, indicate the infeasible points.
• pairs (bool, default=True) – if True, indicate indices of infeasible points.
• all (bool, default=False) – if True, get results for each individual point.
• raw (bool, default=False) – if False, get boolean results (i.e. non-float).
• tol (float, default=0.0) – maximum acceptable deviation from shortness.
• cutoff (float, default=tol) – zero out distances less than cutoff.
Notes
Each point x,y can be thought of as having an associated double-cone with slope equal to the lipschitz constant. Shortness with respect to another point is defined by the first point not being inside the cone of the second. We can allow for some error in shortness, a short tolerance tol, for which the point x,y is some acceptable y-distance inside the cone. While very tightly related, cutoff and tol play distinct roles; tol is subtracted from the calculation of the lipschitz_distance, while cutoff zeros out the value of any element less than the cutoff.
short_wrt_self(L, blamelist=False, pairs=True, all=False, raw=False, **kwds)
check for shortness with respect to the scenario itself
Parameters
• L (float) – the lipschitz constant.
• blamelist (bool, default=False) – if True, indicate the infeasible points.
• pairs (bool, default=True) – if True, indicate indices of infeasible points.
• all (bool, default=False) – if True, get results for each individual point.
• raw (bool, default=False) – if False, get boolean results (i.e. non-float).
• tol (float, default=0.0) – maximum acceptable deviation from shortness.
• cutoff (float, default=tol) – zero out distances less than cutoff.
Notes
Each point x,y can be thought of as having an associated double-cone with slope equal to the lipschitz constant. Shortness with respect to another point is defined by the first point not being inside the cone of the second. We can allow for some error in shortness, a short tolerance tol, for which the point x,y is some acceptable y-distance inside the cone. While very tightly related, cutoff and tol play distinct roles; tol is subtracted from the calculation of the lipschitz_distance, while cutoff zeros out the value of any element less than the cutoff.
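The double-cone picture can be sketched as follows; the exact formula for the lipschitz distance is an assumption inferred from the description above (per-dimension constants L, with tol subtracted from the distance), so this is illustrative rather than the library's implementation:

```python
def lipschitz_metric(L, x1, y1, x2, y2):
    """|y - y'| - sum(L_i * |x_i - x_i'|): positive when the pair violates
    the L-cone, i.e. the data cannot come from an L-lipschitz function
    (a sketch of the documented double-cone picture)."""
    return abs(y1 - y2) - sum(l * abs(a - b) for l, a, b in zip(L, x1, x2))

def is_short(L, points, tol=0.0):
    """True if every pair of (x, y) points respects the cone within tol;
    points is a list of (position-list, value) pairs."""
    return all(lipschitz_metric(L, x1, y1, x2, y2) - tol <= 0.0
               for i, (x1, y1) in enumerate(points)
               for (x2, y2) in points[i + 1:])
```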
update(params)
update the scenario from a list of parameters
Parameters params (list(float)) – parameters corresponding to N 1D discrete measures
Returns the scenario itself
Return type self (scenario)
Notes
The dimensions of the scenario will not change upon update, and it is assumed params either corresponds
to the correct number of weights and positions for the existing scenario, or params has additional
values which will be saved as the scenario.values. It is assumed that len(params) >= 2 *
sum(scenario.pts).
If scenario.pts = (M, N, ...), then it is assumed that params = [wx1, ..., wxM,
x1, ..., xM, wy1, ..., wyN, y1, ..., yN, ...]. Thus, params should have M weights
and M corresponding positions, followed by N weights and N corresponding positions, with this pattern
followed for each new dimension of the desired scenario.
valid_wrt_model(model, blamelist=False, pairs=True, all=False, raw=False, **kwds)
check for scenario validity with respect to the model
Parameters
• model (func) – the model function, y' = F(x').
• blamelist (bool, default=False) – if True, indicate the infeasible points.
• pairs (bool, default=True) – if True, indicate indices of infeasible points.
• all (bool, default=False) – if True, get results for each individual point.
• raw (bool, default=False) – if False, get boolean results (i.e. non-float).
• ytol (float, default=0.0) – maximum acceptable difference |y - F(x')|.
• xtol (float, default=0.0) – maximum acceptable difference |x - x'|.
• cutoff (float, default=ytol) – zero out distances less than cutoff.
• hausdorff (bool, default=False) – hausdorff norm, where if given, then ytol = |y -
F(x')| + |x - x'|/norm.
Notes
xtol defines the n-dimensional base of a pillar of height ytol, centered at each point. The region inside the pillar defines the space where a “valid” model must intersect. If xtol is not specified, then the base of the pillar will be a Dirac at x' = x. This function performs an optimization for each x to find an appropriate x'.
ytol is a single value, while xtol is a single value or an iterable. cutoff takes a float or a boolean, where
cutoff=True will set the value of cutoff to the default. Typically, the value of cutoff is ytol, 0.0, or
None. hausdorff can be False (e.g. norm = 1.0), True (e.g. norm = spread(x)), or a list of points
of len(x).
While cutoff and ytol are very tightly related, they play a distinct role; ytol is used to set the optimization
termination for an acceptable |y - F(x')|, while cutoff is applied post-optimization.
If we are using the hausdorff norm, then ytol will set the optimization termination for an acceptable |y -
F(x')| + |x - x'|/norm, where the x values are normalized by norm = hausdorff.
values
a list of values corresponding to output data for all point masses in the underlying product measure
distance module
Notes
• axis (int, default=None) – if not None, reduce across the given axis
Returns an array of absolute distances between points
Notes
most common usage has pair=False and axis=0, or pairwise distance with pair=True and axis=1
euclidean(x, xp=None, pair=False, dmin=0, axis=None)
L-2 norm distance between points in euclidean space
d(2) = sqrt(sum(|x[0] - x[0]'|^2, |x[1] - x[1]'|^2, ..., |x[n] - x[n]'|^2))
Parameters
• x (array) – an array of points, x
• xp (array, default=None) – a second array of points, x'
• pair (bool, default=False) – if True, return the pairwise distances
• dmin (int, default=0) – upconvert x,x' to dimension >= dmin
• axis (int, default=None) – if not None, reduce across the given axis
Returns an array of absolute distances between points
Notes
most common usage has pair=False and axis=0, or pairwise distance with pair=True and axis=1
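A simplified numpy sketch of the documented L-2 distance; the library's pair, dmin, and axis options are richer than shown here, so this only illustrates the elementwise and pairwise forms:

```python
import numpy as np

def euclidean_sketch(x, xp, pair=False):
    """L-2 distance as documented: sqrt(sum |x[i] - x[i]'|^2).
    pair=True returns the full pairwise distance matrix (a simplified
    reading of the library's pair/axis options)."""
    x, xp = np.asarray(x, dtype=float), np.asarray(xp, dtype=float)
    if pair:
        # broadcast to an (len(x), len(xp)) matrix of distances
        diff = x[:, None, :] - xp[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1))
    return np.sqrt(((x - xp) ** 2).sum(axis=-1))
```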
graphical_distance(model, points, **kwds)
find the radius(x') that minimizes the graph between reality (data), y = G(x), and an approximating
function, y' = F(x').
Parameters
• model (func) – a model y' = F(x') that approximates reality y = G(x)
• points (mystic.math.legacydata.dataset) – a dataset, defines y = G(x)
• ytol (float, default=0.0) – maximum acceptable difference |y - F(x')|.
• xtol (float, default=0.0) – maximum acceptable difference |x - x'|.
• cutoff (float, default=ytol) – zero out distances less than cutoff.
• hausdorff (bool, default=False) – hausdorff norm, where if given, then ytol = |y -
F(x')| + |x - x'|/norm.
Returns the radius (the minimum distance x,G(x) to x',F(x') for each x)
Notes
ytol is a single value, while xtol is a single value or an iterable. cutoff takes a float or a boolean, where
cutoff=True will set the value of cutoff to the default. Typically, the value of cutoff is ytol, 0.0, or None.
hausdorff can be False (e.g. norm = 1.0), True (e.g. norm = spread(x)), or a list of points of len(x).
While cutoff and ytol are very tightly related, they play a distinct role; ytol is used to set the optimization
termination for an acceptable |y - F(x')|, while cutoff is applied post-optimization.
If we are using the hausdorff norm, then ytol will set the optimization termination for an acceptable |y -
F(x')| + |x - x'|/norm, where the x values are normalized by norm = hausdorff.
hamming(x, xp=None, pair=False, dmin=0, axis=None)
zero ‘norm’ distance between points in euclidean space
d(0) = sum(x[0] != x[0]', x[1] != x[1]', ..., x[n] != x[n]')
Parameters
• x (array) – an array of points, x
• xp (array, default=None) – a second array of points, x'
• pair (bool, default=False) – if True, return the pairwise distances
• dmin (int, default=0) – upconvert x,x' to dimension >= dmin
• axis (int, default=None) – if not None, reduce across the given axis
Returns an array of absolute distances between points
Notes
most common usage has pair=False and axis=0, or pairwise distance with pair=True and axis=1
infeasibility(distance, cutoff=0.0)
amount by which the distance exceeds the given cutoff distance
Parameters
• distance (array) – the measure of feasibility for each point
• cutoff (float, default=0.0) – maximum acceptable distance
Returns an array of distances by which each point is infeasible
is_feasible(distance, cutoff=0.0)
determine whether the distance is within the given cutoff distance
Parameters
• distance (array) – the measure of feasibility for each point
• cutoff (float, default=0.0) – maximum acceptable distance
Returns bool array, with True where the distance is less than cutoff
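The two feasibility helpers above can be sketched together; these are hypothetical versions written from the descriptions (mystic's own implementations may differ in masking details, e.g. whether negative slack is retained):

```python
import numpy as np

def infeasibility(distance, cutoff=0.0):
    # amount by which each distance exceeds the cutoff (clipped at zero here)
    d = np.asarray(distance, dtype=float)
    return np.where(d > cutoff, d - cutoff, 0.0)

def is_feasible(distance, cutoff=0.0):
    # True where the distance is less than the cutoff
    return np.asarray(distance, dtype=float) < cutoff

d = [0.0, 0.5, 2.0]
print(infeasibility(d, cutoff=1.0))  # → [0.  0.  1.]
print(is_feasible(d, cutoff=1.0))    # → [ True  True False]
```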
lipschitz_distance(L, points1, points2, **kwds)
calculate the lipschitz distance between two sets of datapoints
Parameters
• L (list) – a list of lipschitz constants
• points1 (mystic.math.legacydata.dataset) – a dataset
• points2 (mystic.math.legacydata.dataset) – a second dataset
minkowski(x, xp=None, pair=False, dmin=0, p=3, axis=None)
p-norm distance between points in euclidean space
d(p) = sum(|x[0] - x[0]'|^p, |x[1] - x[1]'|^p, ..., |x[n] - x[n]'|^p)^(1/p)
Parameters
• x (array) – an array of points, x
• xp (array, default=None) – a second array of points, x'
• pair (bool, default=False) – if True, return the pairwise distances
• dmin (int, default=0) – upconvert x,x' to dimension >= dmin
• p (int, default=3) – value of p for the p-norm
• axis (int, default=None) – if not None, reduce across the given axis
Returns an array of absolute distances between points
Notes
most common usage has pair=False and axis=0, or pairwise distance with pair=True and axis=1
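The p-norm formula above generalizes the other distances in this module (p=2 recovers euclidean). A minimal sketch, with `p_norm_distance` as an illustrative name:

```python
import numpy as np

def p_norm_distance(x, xp, p=3):
    """Minkowski p-norm distance between corresponding points, per the
    formula above (illustrative sketch, not mystic's implementation)."""
    diff = np.abs(np.asarray(x, dtype=float) - np.asarray(xp, dtype=float))
    return (diff ** p).sum(axis=-1) ** (1.0 / p)

x, xp = [[0, 0]], [[3, 4]]
print(p_norm_distance(x, xp, p=2))  # → [5.]  (euclidean)
print(p_norm_distance(x, xp, p=1))  # → [7.]  (manhattan)
```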
grid module
integrate module
References
1. “A Primer on Scientific Programming with Python”, by Hans Petter Langtangen, page 443-445, 2014.
2. http://en.wikipedia.org/wiki/Monte_Carlo_integration
3. http://math.fullerton.edu/mathews/n2003/MonteCarloMod.html
legacydata module
Notes
a datapoint can have an assigned id and cone; also has utilities for comparison against other datapoints (intended
for use in a dataset)
collisions(pts)
return True where a point exists with same ‘position’ and different ‘value’
conflicts(pts)
return True where a point exists with same ‘id’ but different ‘raw’
duplicates(pts)
return True where a point exists with same ‘raw’ and ‘id’
position
repeats(pts)
return True where a point exists with same ‘raw’ but different ‘id’
value
class dataset
Bases: list
a collection of data points s = dataset([point1, point2, . . . , pointN])
queries:
    s.values – returns list of values
    s.coords – returns list of positions
    s.ids – returns list of ids
    s.raw – returns list of points
    s.npts – returns the number of points
    s.lipschitz – returns list of lipschitz constants
settings:
    s.lipschitz = [s1, s2, ..., sn] – sets lipschitz constants
short -- check for shortness with respect to given data (or self)
valid -- check for validity with respect to given model
update -- update the positions and values in the dataset
load -- load a list of positions and a list of values to the dataset
fetch -- fetch the list of positions and the list of values in the dataset
intersection -- return the set intersection between self and query
filter -- return dataset entries where mask array is True
has_id -- return True where dataset ids are in query
has_position -- return True where dataset coords are in query
has_point -- return True where dataset points are in query
has_datapoint -- return True where dataset entries are in query
Notes
collisions
conflicts
coords
duplicates
fetch()
fetch the list of positions and the list of values in the dataset
filter(mask)
return dataset entries where mask array is True
Parameters
• data (list, default=None) – a list of data points, or the dataset itself.
• L (float, default=None) – the lipschitz constant, or the dataset’s constant.
• blamelist (bool, default=False) – if True, indicate the infeasible points.
• pairs (bool, default=True) – if True, indicate indices of infeasible points.
• all (bool, default=False) – if True, get results for each individual point.
• raw (bool, default=False) – if False, get boolean results (i.e. non-float).
• tol (float, default=0.0) – maximum acceptable deviation from shortness.
• cutoff (float, default=tol) – zero out distances less than cutoff.
Notes
Each point x,y can be thought to have an associated double-cone with slope equal to the lipschitz constant.
Shortness with respect to another point is defined by the first point not being inside the cone of the second.
We can allow for some error in shortness, a short tolerance tol, for which the point x,y is some acceptable
y-distance inside the cone. While very tightly related, cutoff and tol play distinct roles; tol is subtracted
from calculation of the lipschitz_distance, while cutoff zeros out the value of any element less than the
cutoff.
update(positions, values)
update the positions and values in the dataset
Returns the dataset itself
Return type self (dataset)
Notes
xtol defines the n-dimensional base of a pillar of height ytol, centered at each point. The region inside the
pillar defines the space where a “valid” model must intersect. If xtol is not specified, then the base of the
pillar will be a dirac at x' = x. This function performs an optimization for each x to find an appropriate
x'.
ytol is a single value, while xtol is a single value or an iterable. cutoff takes a float or a boolean, where
cutoff=True will set the value of cutoff to the default. Typically, the value of cutoff is ytol, 0.0, or
None. hausdorff can be False (e.g. norm = 1.0), True (e.g. norm = spread(x)), or a list of points
of len(x).
While cutoff and ytol are very tightly related, they play a distinct role; ytol is used to set the optimization
termination for an acceptable |y - F(x')|, while cutoff is applied post-optimization.
If we are using the hausdorff norm, then ytol will set the optimization termination for an acceptable |y -
F(x')| + |x - x'|/norm, where the x values are normalized by norm = hausdorff.
values
class lipschitzcone(datapoint, slopes=None)
Bases: list
Lipschitz double cone around a data point, with vertex and slope
queries:
    vertex – coordinates of lipschitz cone vertex
    slopes – lipschitz slopes for the cone (should be same dimension as ‘vertex’)
contains -- return True if a given point is within the cone
distance -- sum of lipschitz-weighted distance between a point and the vertex
contains(point)
return True if a given point is within the cone
distance(point)
sum of lipschitz-weighted distance between a point and the vertex
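One plausible reading of the double-cone geometry described above: a point is inside the cone when its value gap to the vertex is below the lipschitz-weighted distance between the positions. The helper names and the exact comparison are assumptions, not mystic's implementation:

```python
def cone_distance(vertex_pos, point_pos, slopes):
    # sum of lipschitz-weighted coordinate distances to the vertex
    return sum(L * abs(p - v) for L, p, v in zip(slopes, point_pos, vertex_pos))

def cone_contains(vertex_pos, vertex_val, point_pos, point_val, slopes):
    # inside the double cone when the value gap is below the weighted distance
    return abs(point_val - vertex_val) < cone_distance(vertex_pos, point_pos, slopes)

# vertex at x=(0,0), y=0, slopes L=(1,1): point x=(1,1), y=1.5 is inside
print(cone_distance((0, 0), (1, 1), (1, 1)))            # → 2
print(cone_contains((0, 0), 0.0, (1, 1), 1.5, (1, 1)))  # → True
print(cone_contains((0, 0), 0.0, (1, 1), 2.5, (1, 1)))  # → False
```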
load_dataset(filename, filter=None)
read dataset from selected file
filename – string name of dataset file
filter – tuple of points to select (‘False’ to ignore filter stored in file)
class point(position, value)
Bases: object
n-d data point with position and value but no id (i.e. ‘raw’)
queries:
    p.value – returns value
    p.position – returns position
    p.rms – returns the square root of sum of squared position
settings:
    p.value = v1 – set the value
    p.position = (x1, x2, ..., xn) – set the position
rms
save_dataset(data, filename=’dataset.txt’, filter=None, new=True)
save dataset to selected file
data – data set
filename – string name of dataset file
filter – tuple, filter to apply to dataset upon reading
new – boolean, False if appending to existing file
measures module
Parameters
• f (func) – a function that takes a list and returns a number
• samples (list) – a list of sample points
Returns the minimum output value for a function at the given inputs
ess_minimum(f, samples, weights=None, tol=0.0)
calculate the min of function for support on the given list of points
ess_minimum(f,x,w) = min(f(support(x,w)))
Parameters
• f (func) – a function that takes a list and returns a number
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
• tol (float, default=0.0) – a tolerance, where any weight <= tol is zero
Returns the minimum output value for a function at the given support points
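A sketch of the documented behavior: the minimum is taken over the support points only, i.e. those whose weight exceeds tol. This is an illustrative re-implementation, not mystic's code:

```python
def ess_minimum(f, samples, weights=None, tol=0.0):
    """min of f over the support points (weight > tol), per the description
    above (illustrative sketch)."""
    if weights is None:
        return min(f(x) for x in samples)
    support = [x for x, w in zip(samples, weights) if w > tol]
    return min(f(x) for x in support)

f = lambda x: x[0] ** 2
samples = [[1], [2], [-3]]
print(ess_minimum(f, samples, weights=[0.5, 0.5, 0.0]))  # → 1 ([-3] has no weight)
print(ess_minimum(f, samples))                           # → 1
```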
expectation(f, samples, weights=None, tol=0.0)
calculate the (weighted) expectation of a function for a list of points
Parameters
• f (func) – a function that takes a list and returns a number
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
• tol (float, default=0.0) – a tolerance, where any weight <= tol is zero
Returns the weighted expectation for a list of sample points
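The weighted expectation is E[f] = sum(w_i * f(x_i)) / sum(w_i), with weights <= tol dropped. A minimal sketch of that behavior (illustrative, not mystic's implementation):

```python
def expectation(f, samples, weights=None, tol=0.0):
    """weighted expectation of f over the sample points, dropping any
    weight <= tol (illustrative sketch)."""
    if weights is None:
        weights = [1.0] * len(samples)
    pairs = [(w, f(x)) for x, w in zip(samples, weights) if w > tol]
    total = sum(w for w, _ in pairs)
    return sum(w * fx for w, fx in pairs) / total

f = lambda x: x[0]
print(expectation(f, [[1], [3]], weights=[0.5, 0.5]))          # → 2.0
print(expectation(f, [[1], [3], [100]], weights=[0.5, 0.5, 0.0]))  # → 2.0
```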
expected_variance(f, samples, weights=None, tol=0.0)
calculate the (weighted) expected variance of a function
Parameters
• f (func) – a function that takes a list and returns a number
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
• tol (float, default=0.0) – a tolerance, where any weight <= tol is zero
Returns the weighted expected variance of f on a list of sample points
expected_std(f, samples, weights=None, tol=0.0)
calculate the (weighted) expected standard deviation of a function
Parameters
• f (func) – a function that takes a list and returns a number
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
• tol (float, default=0.0) – a tolerance, where any weight <= tol is zero
Returns the weighted expected standard deviation of f on a list of sample points
variance(samples, weights=None)
calculate the (weighted) variance for a list of points
Parameters
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
Returns the weighted variance for a list of sample points
std(samples, weights=None)
calculate the (weighted) standard deviation for a list of points
Parameters
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
Returns the weighted standard deviation for a list of sample points
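For the moment functions above, the weighted variance is the weighted expectation of the squared deviation from the weighted mean; std is its square root. An illustrative sketch over scalar points:

```python
def variance(samples, weights=None):
    """weighted variance sum(w_i*(x_i - mean)^2)/sum(w_i) (illustrative
    sketch of the description above)."""
    if weights is None:
        weights = [1.0] * len(samples)
    total = sum(weights)
    mean = sum(w * x for x, w in zip(samples, weights)) / total
    return sum(w * (x - mean) ** 2 for x, w in zip(samples, weights)) / total

print(variance([1.0, 3.0]))                      # → 1.0
print(variance([1.0, 3.0], weights=[1.0, 0.0]))  # → 0.0 (only one point supported)
```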
skewness(samples, weights=None)
calculate the (weighted) skewness for a list of points
Parameters
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
Returns the weighted skewness for a list of sample points
kurtosis(samples, weights=None)
calculate the (weighted) kurtosis for a list of points
Parameters
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
Returns the weighted kurtosis for a list of sample points
impose_mean(m, samples, weights=None)
impose a mean on a list of (weighted) points
Parameters
• m (float) – the target mean
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
Returns a list of sample points with the desired weighted mean
Notes
this function does not alter the weighted range or the weighted variance
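The note above follows from the mechanism: shifting every point by the same constant moves the weighted mean to the target without changing the spread. A minimal sketch (illustrative, not mystic's implementation):

```python
def impose_mean(m, samples, weights=None):
    """shift all points by a constant so the weighted mean becomes m; a
    uniform shift leaves the weighted range and variance unchanged."""
    if weights is None:
        weights = [1.0] * len(samples)
    mean = sum(w * x for x, w in zip(samples, weights)) / sum(weights)
    shift = m - mean
    return [x + shift for x in samples]

print(impose_mean(5.0, [1.0, 2.0, 3.0]))  # → [4.0, 5.0, 6.0]
```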
impose_variance(v, samples, weights=None)
impose a variance on a list of (weighted) points
Parameters
• v (float) – the target variance
Notes
Expectation value E is calculated by minimizing mean(f(x)) - m, over the given bounds, and will terminate
when E is found within deviation tol of the target mean m. If tol is not provided, then a relative deviation of
1% of m will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from
which to draw them.
bounds is a tuple with length(bounds) == 2, composed of all the lower bounds, then all the upper bounds,
for each parameter
Examples
Notes
Expected variance E is calculated by minimizing variance(f(x)) - v, over the given bounds, and will
terminate when E is found within deviation tol of the target variance v. If tol is not provided, then a relative
deviation of 1% of v will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from
which to draw them.
bounds is a tuple with length(bounds) == 2, composed of all the lower bounds, then all the upper bounds,
for each parameter
Examples
Notes
Expected std E is calculated by minimizing std(f(x)) - s, over the given bounds, and will terminate when
E is found within deviation tol of the target std s. If tol is not provided, then a relative deviation of 1% of s
will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from
which to draw them.
bounds is a tuple with length(bounds) == 2, composed of all the lower bounds, then all the upper bounds,
for each parameter
Examples
Notes
Expected mean E and expected variance R are calculated by minimizing the sum of the absolute values of
mean(f(x)) - m and variance(f(x)) - v over the given bounds, and will terminate when E and R
are found within tolerance tol of the target mean m and variance v, respectively. If tol is not provided, then
a relative deviation of 1% of max(m,v) will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from
which to draw them.
bounds is a tuple with length(bounds) == 2, composed of all the lower bounds, then all the upper bounds,
for each parameter
Examples
>>> impose_support([0,1],[1,2,3,4,5],[.2,.2,.2,.2,.2])
([2.5, 3.5, 4.5, 5.5, 6.5], [0.5, 0.5, 0.0, 0.0, 0.0])
>>> impose_support([0,1,2,3],[1,2,3,4,5],[.2,.2,.2,.2,.2])
([1.5, 2.5, 3.5, 4.5, 5.5], [0.25, 0.25, 0.25, 0.25, 0.0])
>>> impose_support([4],[1,2,3,4,5],[.2,.2,.2,.2,.2])
([-1.0, 0.0, 1.0, 2.0, 3.0], [0.0, 0.0, 0.0, 0.0, 1.0])
>>> impose_unweighted([0,1,2],[1,2,3,4,5],[.2,.2,.2,.2,.2])
([-0.5, 0.5, 1.5, 2.5, 3.5], [0.0, 0.0, 0.0, 0.5, 0.5])
>>> impose_unweighted([3,4],[1,2,3,4,5],[.2,.2,.2,.2,.2])
([2.0, 3.0, 4.0, 5.0, 6.0], [0.33333333333333331, 0.33333333333333331, 0.33333333333333331, 0.0, 0.0])
>>> impose_collapse({(0,1),(0,2)},[1,2,3,4,5],[.2,.2,.2,.2,.2])
([1.5999999999999996, 1.5999999999999996, 1.5999999999999996, 4.5999999999999996, 5.5999999999999996], [0.6000000000000001, 0.0, 0.0, 0.2, 0.2])
>>> impose_collapse({(0,1),(3,4)},[1,2,3,4,5],[.2,.2,.2,.2,.2])
([1.3999999999999999, 1.3999999999999999, 3.3999999999999999, 4.4000000000000004, 4.4000000000000004], [0.4, 0.0, 0.2, 0.4, 0.0])
Inputs:
    params – a flat list of weights and positions (formatted as noted below)
    npts – a tuple describing the shape of the target lists
For example:
>>> nx = 3; ny = 2; nz = 1
>>> par = ['wx']*nx + ['x']*nx + ['wy']*ny + ['y']*ny + ['wz']*nz + ['z']*nz
>>> weights, positions = split_param(par, (nx,ny,nz))
>>> weights
['wx','wx','wx','wy','wy','wz']
>>> positions
['x','x','x','y','y','z']
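The behavior shown in the doctest can be sketched as follows: each group of size n contributes n weights followed by n positions, and the groups are concatenated. This is an illustrative re-implementation, not mystic's code:

```python
def split_param(params, npts):
    """split a flat per-group [weights..., positions...] list into one
    weights list and one positions list (illustrative sketch)."""
    weights, positions = [], []
    i = 0
    for n in npts:
        weights.extend(params[i:i + n])            # n weights for this group
        positions.extend(params[i + n:i + 2 * n])  # then n positions
        i += 2 * n
    return weights, positions

nx, ny, nz = 3, 2, 1
par = ['wx'] * nx + ['x'] * nx + ['wy'] * ny + ['y'] * ny + ['wz'] * nz + ['z'] * nz
w, p = split_param(par, (nx, ny, nz))
print(w)  # → ['wx', 'wx', 'wx', 'wy', 'wy', 'wz']
print(p)  # → ['x', 'x', 'x', 'y', 'y', 'z']
```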
poly module
samples module
stats module
shortcut math tools related to statistics; also, math tools related to gaussian distributions
cdf_factory(mean, variance)
Returns cumulative distribution function (as a Python function) for a Gaussian, given the mean and variance
erf(x)
evaluate the error function at x
gamma(x)
evaluate the gamma function at x
lgamma(x)
evaluate the natural log of the abs value of the gamma function at x
mcdiarmid_bound(mean, diameter)
calculates McDiarmid bound given mean and McDiarmid diameter
mean(expectation, volume)
calculates mean given expectation and volume
meanconf(std, npts, percent=95)
mean confidence interval: returns conf, where interval = mean +/- conf
pdf_factory(mean, variance)
Returns a probability density function (as a Python function) for a Gaussian, given the mean and variance
prob_mass(volume, norm)
calculates probability mass given volume and norm
sampvar(var, npts)
sample variance from variance
stderr(std, npts)
standard error
varconf(var, npts, percent=95, tight=False)
var confidence interval: returns max interval distance from var
volume(lb, ub)
calculates volume for a uniform distribution in n-dimensions
metropolis_hastings(proposal, target, x)
Proposal(x) -> next point. The proposal must be symmetric, since otherwise the PDF of the proposal density
would be needed (not just a way to draw from the proposal)
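The symmetry requirement can be seen in a minimal sketch of one Metropolis-Hastings step: with a symmetric proposal, the acceptance ratio reduces to target(x')/target(x), so the proposal density never appears. The helper name is illustrative, not mystic's implementation:

```python
import math
import random

random.seed(0)  # reproducible chain for this sketch

def metropolis_hastings_step(proposal, target, x):
    """one MH step with a symmetric proposal (illustrative sketch)."""
    x_new = proposal(x)
    # accept with probability min(1, target(x_new)/target(x))
    if random.random() < min(1.0, target(x_new) / target(x)):
        return x_new
    return x

# symmetric gaussian random-walk proposal targeting a 1-d standard normal
target = lambda x: math.exp(-0.5 * x * x)
proposal = lambda x: x + random.gauss(0.0, 1.0)
x = 0.0
for _ in range(200):
    x = metropolis_hastings_step(proposal, target, x)
print(abs(x) < 10.0)  # the chain stays near the mode → True
```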
2.18.1 Functions
Mystic provides a set of standard fitting functions that derive from the function API found in mystic.models.
abstract_model. These standard functions are provided:
2.18.2 Models
Mystic also provides a set of example models that derive from the model API found in mystic.models.
abstract_model. These standard models are provided:
Additionally, circle has been extended to provide three additional models, each with different packing densities:
Further, poly provides additional models for 2nd, 4th, 6th, 8th, and 16th order Chebyshev polynomials:
Also, rosen has been modified to provide models for the 0th and 1st derivative of the Rosenbrock function:
class AbstractFunction(ndim=None)
Bases: object
Base class for mystic functions
The ‘function’ method must be overwritten, thus allowing calls to the class instance to mimic calls to the function
object.
For example, if function is overwritten with the Rosenbrock function:
CostFactory(target, pts)
generates a cost function instance from list of coefficients and evaluation points
CostFactory2(pts, datapts, nparams)
generates a cost function instance from datapoints and evaluation points
ForwardFactory(coeffs)
generates a forward model instance from a list of coefficients
__init__(name=’dummy’, metric=<function <lambda>>, sigma=1.0)
Provides a base class for mystic models.
Inputs::
    name – a name string for the model
    metric – the cost metric object [default => lambda x: numpy.sum(x*x)]
    sigma – a scaling factor applied to the raw cost
__module__ = 'mystic.models.abstract_model'
__weakref__
list of weak references to the object (if defined)
evaluate(coeffs, x)
takes list of coefficients & evaluation points, returns f(x)
ackley(x)
evaluates Ackley’s function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = -20 * exp(-0.2 * sqrt(1/N * sum_(i=0)^(N-1) x_(i)^(2))) and: f_1(x) = -exp(1/N *
sum_(i=0)^(N-1) cos(2 * pi * x_(i))) + 20 + exp(1)
Inspect with mystic_model_plotter using:: mystic.models.ackley -b “-10:10:.1, -10:10:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
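The formula above can be written out directly, and the stated minimum checked at x_i=0 (illustrative sketch, not mystic's implementation):

```python
import math

def ackley(x):
    """Ackley's function f(x) = f_0(x) + f_1(x), per the formula above."""
    n = len(x)
    f0 = -20.0 * math.exp(-0.2 * math.sqrt(sum(xi * xi for xi in x) / n))
    f1 = -math.exp(sum(math.cos(2 * math.pi * xi) for xi in x) / n) + 20.0 + math.e
    return f0 + f1

print(abs(ackley([0.0, 0.0])) < 1e-9)  # global minimum f(0)=0 → True
print(ackley([1.0, 1.0]) > 0.0)        # → True
```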
branins(x)
evaluates Branins’s function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = a * (x_1 - b * x_(0)^(2) + c * x_0 - d)^2 and f_1(x) = e * (1 - f) * cos(x_0) + e and a=1,
b=5.1/(4*pi^2), c=5/pi, d=6, e=10, f=1/(8*pi)
Inspect with mystic_model_plotter using:: mystic.models.branins -b “-10:20:.1, -5:25:.1” -d -x 1
The minimum is f(x)=0.397887 at x=((2 +/- (2*i)+1)*pi, 2.275 + 10*i*(i+1)/2) for all i
corana(x)
evaluates a 4-D Corana’s parabola function for a list of coeffs
f(x) = sum_(i=0)^(3) f_0(x)
Where for abs(x_i - z_i) < 0.05: f_0(x) = 0.15*(z_i - 0.05*sign(z_i))^(2) * d_i and otherwise: f_0(x) = d_i *
x_(i)^(2), with z_i = floor(abs(x_i/0.2)+0.49999)*sign(x_i)*0.2 and d_i = 1,1000,10,100.
For len(x) == 1, x = x_0,0,0,0; for len(x) == 2, x = x_0,0,x_1,0; for len(x) == 3, x = x_0,0,x_1,x_2; for len(x)
>= 4, x = x_0,x_1,x_2,x_3.
Inspect with mystic_model_plotter using:: mystic.models.corana -b “-1:1:.01, -1:1:.01” -d -x 1
The minimum is f(x)=0 for abs(x_i) < 0.05 for all i.
easom(x)
evaluates Easom’s function for a list of coeffs
rosen0der(x)
evaluates an N-dimensional Rosenbrock saddle for a list of coeffs
f(x) = sum_(i=0)^(N-2) 100*(x_(i+1) - x_(i)^(2))^(2) + (1 - x_(i))^(2)
Inspect with mystic_model_plotter using:: mystic.models.rosen -b “-3:3:.1, -1:5:.1, 1” -d -x 1
The minimum is f(x)=0.0 at x_i=1.0 for all i
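The saddle formula above is easy to transcribe, and the stated minimum at x_i=1 checks out (illustrative sketch, not mystic's implementation):

```python
def rosen(x):
    """N-dimensional Rosenbrock saddle, per the formula above."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

print(rosen([1.0, 1.0, 1.0]))  # → 0.0 (the global minimum)
print(rosen([0.0, 0.0]))       # → 1.0
```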
rosen1der(x)
evaluates an N-dimensional Rosenbrock derivative for a list of coeffs
The minimum is f’(x)=[0.0]*n at x=[1.0]*n, where len(x) >= 2.
schwefel(x)
evaluates Schwefel’s function for a list of coeffs
f(x) = sum_(i=0)^(N-1) -x_i * sin(sqrt(abs(x_i)))
Where abs(x_i) <= 500.
Inspect with mystic_model_plotter using:: mystic.models.schwefel -b “-500:500:10, -500:500:10” -d
The minimum is f(x)=-(N+1)*418.98288727243374 at x_i=420.9687465 for all i
shekel(x)
evaluates a 2-D Shekel’s Foxholes function for a list of coeffs
f(x) = 1 / (0.002 + f_0(x))
Where: f_0(x) = sum_(i=0)^(24) 1 / (i + sum_(j=0)^(1) (x_j - a_ij)^(6)) with a_ij=(-32,-16,0,16,32). for j=0 and
i=(0,1,2,3,4), a_i0=a_k0 with k=i mod 5 also j=1 and i=(0,5,10,15,20), a_i1=a_k1 with k=i+k’ and k’=(1,2,3,4).
Inspect with mystic_model_plotter using:: mystic.models.shekel -b “-50:50:1, -50:50:1” -d -x 1
The minimum is f(x)=0 for x=(-32,-32)
sphere(x)
evaluates an N-dimensional spherical function for a list of coeffs
f(x) = sum_(i=0)^(N-1) x_(i)^2
Inspect with mystic_model_plotter using:: mystic.models.sphere -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
step(x)
evaluates an N-dimensional step function for a list of coeffs
f(x) = f_0(x) + p_i(x), with i=0,1
Where for abs(x_i) <= 5.12: f_0(x) = 30 + sum_(i=0)^(N-1) floor(x_i) and for x_i > 5.12: p_0(x) = 30 * (1 + (x_i
- 5.12)) and for x_i < -5.12: p_1(x) = 30 * (1 + (5.12 - x_i)) Otherwise, f_0(x) = 0 and p_i(x)=0 for i=0,1.
Inspect with mystic_model_plotter using:: mystic.models.step -b “-10:10:.2, -10:10:.2” -d -x 1
The minimum is f(x)=(30 - 6*N) for all x_i=[-5.12,-5)
venkat91(x)
evaluates Venkataraman’s sinc function for a list of coeffs
f(x) = -20 * sin(r(x))/r(x)
Where: r(x) = sqrt((x_0 - 4)^2 + (x_1 - 4)^2 + 0.1)
Inspect with mystic_model_plotter using:: mystic.models.venkat91 -b “-10:10:.1, -10:10:.1” -d
The minimum is f(x)=-19.668329370585823 at x=(4.0, 4.0)
wavy1(x)
evaluates the wavy1 function for a list of coeffs
f(x) = abs(x + 3*sin(x + pi) + pi)
Inspect with mystic_model_plotter using:: mystic.models.wavy1 -b “-20:20:.5, -20:20:.5” -d -r numpy.add
The minimum is f(x)=0.0 at x_i=-pi for all i
wavy2(x)
evaluates the wavy2 function for a list of coeffs
f(x) = 4*sin(x)+sin(4*x)+sin(8*x)+sin(16*x)+sin(32*x)+sin(64*x)
Inspect with mystic_model_plotter using:: mystic.models.wavy2 -b “-10:10:.2, -10:10:.2” -d -r numpy.add
The function has degenerate global minima of f(x)=-6.987594 at x_i = 4.489843526 + 2*k*pi for all i, and k is
an integer
zimmermann(x)
evaluates a Zimmermann function for a list of coeffs
f(x) = max(f_0(x), p_i(x)), with i = 0,1,2,3
Where: f_0(x) = 9 - x_0 - x_1 with for x_0 < 0: p_0(x) = -100 * x_0 and for x_1 < 0: p_1(x) = -100 * x_1 and
for c_2(x) > 16 and c_3(x) > 14: p_i(x) = 100 * c_i(x), with i = 2,3 c_2(x) = (x_0 - 3)^2 + (x_1 - 2)^2 c_3(x) =
x_0 * x_1 Otherwise, p_i(x)=0 for i=0,1,2,3 and c_i(x)=0 for i=2,3.
Inspect with mystic_model_plotter using:: mystic.models.zimmermann -b “-5:10:.1, -5:10:.1” -d -x 1
The minimum is f(x)=0.0 at x=(7.0,2.0)
abstract_model module
Base classes for mystic’s provided models::
    AbstractFunction – evaluates f(x) for given evaluation points x
    AbstractModel – generates f(x,p) for given coefficients p
class AbstractFunction(ndim=None)
Bases: object
Base class for mystic functions
The ‘function’ method must be overwritten, thus allowing calls to the class instance to mimic calls to the function
object.
For example, if function is overwritten with the Rosenbrock function:
function(coeffs)
takes a list of coefficients x, returns f(x)
minimizers = None
class AbstractModel(name=’dummy’, metric=<function <lambda>>, sigma=1.0)
Bases: object
Base class for mystic models
The ‘evaluate’ and ‘ForwardFactory’ methods must be overwritten, thus providing a standard interface for
generating a forward model factory and evaluating a forward model. Additionally, two common ways to generate
a cost function are built into the model. For “standard models”, the cost function generator will work with no
modifications.
See mystic.models.poly for a few basic examples.
Provides a base class for mystic models.
Inputs::
    name – a name string for the model
    metric – the cost metric object [default => lambda x: numpy.sum(x*x)]
    sigma – a scaling factor applied to the raw cost
CostFactory(target, pts)
generates a cost function instance from list of coefficients and evaluation points
CostFactory2(pts, datapts, nparams)
generates a cost function instance from datapoints and evaluation points
ForwardFactory(coeffs)
generates a forward model instance from a list of coefficients
__init__(name=’dummy’, metric=<function <lambda>>, sigma=1.0)
Provides a base class for mystic models.
Inputs::
    name – a name string for the model
    metric – the cost metric object [default => lambda x: numpy.sum(x*x)]
    sigma – a scaling factor applied to the raw cost
evaluate(coeffs, x)
takes list of coefficients & evaluation points, returns f(x)
br8 module
References
1. “Data Reduction and Error Analysis for the Physical Sciences”, Bevington & Robinson, Second Edition,
McGraw-Hill, New York (1992).
class BevingtonDecay(name=’decay’, metric=<function <lambda>>)
Bases: mystic.models.abstract_model.AbstractModel
Computes dual exponential decay [1]. y = a1 + a2 Exp[-t / a4] + a3 Exp[-t/a5]
CostFactory(target, pts)
generates a cost function instance from list of coefficients & evaluation points
CostFactory2(pts, datapts, nparams)
generates a cost function instance from datapoints & evaluation points
ForwardFactory(coeffs)
generates a dual decay model instance from a list of coefficients
evaluate(coeffs, evalpts)
evaluate dual exponential decay with given coeffs over given evalpts, where coeffs = (a1,a2,a3,a4,a5)
circle module
References
None
class Circle(packing=None, name=’circle’, sigma=1.0)
Bases: mystic.models.abstract_model.AbstractModel
Computes a 2D array representation of a circle, where the circle minimally bounds the 2D data points.
Data points are packed with [minimal, sparse, or dense] packing = [~0.2, ~1.0, or ~5.0]; setting packing = None
will constrain all points to the circle’s radius.
CostFactory(target, npts=None)
generates a cost function instance from list of coefficients & number of evaluation points (x,y,r) = target
coeffs
CostFactory2(datapts)
generates a cost function instance from a 2D array of datapoints
ForwardFactory(coeffs)
generates a circle instance from a list of coefficients (x,y,r) = coeffs
forward(coeffs, npts=None)
generate a 2D array of points contained within a circle built from a list of coefficients (x,y,r) = coeffs
default npts = packing * floor( pi*radius^2 )
gencircle(coeffs, interval=0.02)
generate a 2D array representation of a circle of given coeffs coeffs = (x,y,r)
gendata(coeffs, npts=20)
Generate a 2D dataset of npts enclosed in circle of given coeffs, where coeffs = (x,y,r).
NOTE: if npts == None, constrain all points to circle of given radius
dejong module
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘De Jong’ function definitions drawn
from [3].
References
1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over
Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over
Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. and Rosen, B. “Genetic Algorithms and Very Fast Simulated Reannealing: A Comparison” J. of
Mathematical and Computer Modeling 16(11), 87-100, 1992.
class Quartic(ndim=30)
Bases: mystic.models.abstract_model.AbstractFunction
a De Jong quartic function generator
De Jong’s quartic function [1,2,3] is designed to test the behavior of minimizers in the presence of noise. The
function’s global minimum depends on the expectation value of a random variable, and also includes several
randomly distributed local minima.
The generated function f(x) is a modified version of equation (20) of [2], where len(x) >= 0.
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘De Jong’ function definitions
drawn from [3].
References
1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. and Rosen, B. “Genetic Algorithms and Very Fast Simulated Reannealing: A Comparison” J. of
Mathematical and Computer Modeling 16(11), 87-100, 1992.
function(coeffs)
evaluates an N-dimensional quartic function for a list of coeffs
f(x) = sum_(i=0)^(N-1) (x_(i)^4 * (i+1) + k_i)
Where k_i is a random variable with uniform distribution bounded by [0,1).
Inspect with mystic_model_plotter using:: mystic.models.quartic -b “-3:3:.1, -3:3:.1” -d -x 1
The minimum is f(x)=N*E[k] for x_i=0.0, where E[k] is the expectation of k, and thus E[k]=0.5 for a
uniform distribution bounded by [0,1).
minimizers = None
class Rosenbrock(ndim=2, axis=None)
Bases: mystic.models.abstract_model.AbstractFunction
a Rosenbrock’s Saddle function generator
Rosenbrock’s Saddle function [1,2,3] has the reputation of being a difficult minimization problem. In two
dimensions, the function is a saddle with an inverted basin, where the global minimum occurs along the rim of
the inverted basin.
The generated function f(x) is a modified version of equation (18) of [2], where len(x) >= 0.
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘De Jong’ function definitions
drawn from [3].
References
1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. and Rosen, B. “Genetic Algorithms and Very Fast Simulated Reannealing: A Comparison” J. of
Mathematical and Computer Modeling 16(11), 87-100, 1992.
derivative(coeffs)
evaluates an N-dimensional Rosenbrock derivative for a list of coeffs
The minimum is f’(x)=[0.0]*n at x=[1.0]*n, where len(x) >= 2.
function(coeffs)
evaluates an N-dimensional Rosenbrock saddle for a list of coeffs
f(x) = sum_(i=0)^(N-2) 100*(x_(i+1) - x_(i)^(2))^(2) + (1 - x_(i))^(2)
Inspect with mystic_model_plotter using:: mystic.models.rosen -b “-3:3:.1, -1:5:.1, 1” -d -x 1
The minimum is f(x)=0.0 at x_i=1.0 for all i
hessian(coeffs)
evaluates an N-dimensional Rosenbrock hessian for the given coeffs
The function f''(x) requires len(x) >= 2.
hessian_product(coeffs, p)
evaluates an N-dimensional Rosenbrock hessian product for p and the given coeffs
The hessian product requires both p and coeffs to have len >= 2.
minimizers = [1.0]
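The derivative noted above (f'(x)=[0.0]*n at x=[1.0]*n) can be checked against the standard closed-form gradient of the Rosenbrock saddle; the sketch below is illustrative and not mystic's implementation:

```python
import numpy as np

def rosen_der(x):
    """gradient of the N-d Rosenbrock saddle (standard closed form)."""
    x = np.asarray(x, dtype=float)
    der = np.zeros_like(x)
    der[0] = -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0])
    der[-1] = 200.0 * (x[-1] - x[-2] ** 2)
    der[1:-1] = (200.0 * (x[1:-1] - x[:-2] ** 2)
                 - 400.0 * x[1:-1] * (x[2:] - x[1:-1] ** 2)
                 - 2.0 * (1.0 - x[1:-1]))
    return der

print(rosen_der([1.0, 1.0, 1.0]))  # → [0. 0. 0.] at the minimum
```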
class Shekel(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Shekel’s Foxholes function generator
Shekel’s Foxholes function [1,2,3] has a generally flat surface with several narrow wells. The function’s global
minimum is at (-32, -32), with local minima at (i,j) in (-32, -16, 0, 16, 32).
The generated function f(x) is a modified version of equation (21) of [2], where len(x) == 2.
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘De Jong’ function definitions
drawn from [3].
References
1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. and Rosen, B. “Genetic Algorithms and Very Fast Simulated Reannealing: A Comparison” J. of
Mathematical and Computer Modeling 16(11), 87-100, 1992.
function(coeffs)
evaluates a 2-D Shekel’s Foxholes function for a list of coeffs
f(x) = 1 / (0.002 + f_0(x))
References
1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. and Rosen, B. “Genetic Algorithms and Very Fast Simulated Reannealing: A Comparison” J. of
Mathematical and Computer Modeling 16(11), 87-100, 1992.
function(coeffs)
evaluates an N-dimensional spherical function for a list of coeffs
f(x) = sum_(i=0)^(N-1) x_(i)^2
Inspect with mystic_model_plotter using:: mystic.models.sphere -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
minimizers = [0.0]
class Step(ndim=5)
Bases: mystic.models.abstract_model.AbstractFunction
a De Jong step function generator
De Jong’s step function [1,2,3] has several plateaus, which pose difficulty for many optimization algorithms.
Degenerate global minima occur for all x_i on the lowest plateau, with degenerate local minima on all other
plateaus.
The generated function f(x) is a modified version of equation (19) of [2], where len(x) >= 0.
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘De Jong’ function definitions
drawn from [3].
References
1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. and Rosen, B. “Genetic Algorithms and Very Fast Simulated Reannealing: A Comparison” J. of
Mathematical and Computer Modeling 16(11), 87-100, 1992.
function(coeffs)
evaluates an N-dimensional step function for a list of coeffs
f(x) = f_0(x) + p_i(x), with i=0,1
Where for abs(x_i) <= 5.12: f_0(x) = 30 + sum_(i=0)^(N-1) floor(x_i), and for x_i > 5.12: p_0(x) = 30 * (1
+ (x_i - 5.12)), and for x_i < -5.12: p_1(x) = 30 * (1 + (5.12 - x_i)). Otherwise, f_0(x) = 0 and p_i(x)=0 for
i=0,1.
Inspect with mystic_model_plotter using:: mystic.models.step -b “-10:10:.2, -10:10:.2” -d -x 1
The minimum is f(x)=(30 - 6*N) for all x_i=[-5.12,-5)
minimizers = None
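The piecewise definition above can be sketched in plain Python. This is one plausible reading of the docstring (an illustrative helper, not the mystic source; the packaged callable is mystic.models.step):

```python
import math

def step(x):
    """De Jong step function: floor terms inside [-5.12, 5.12],
    linear penalties p_0/p_1 outside the bounds."""
    total = 0.0
    bounded = [xi for xi in x if abs(xi) <= 5.12]
    if bounded:
        total += 30 + sum(math.floor(xi) for xi in bounded)  # f_0
    for xi in x:
        if xi > 5.12:
            total += 30 * (1 + (xi - 5.12))   # p_0
        elif xi < -5.12:
            total += 30 * (1 + (5.12 - xi))   # p_1
    return total

# the minimum f(x) = 30 - 6*N, attained for all x_i in [-5.12, -5)
step([-5.1, -5.1])  # 18.0
```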
functions module
For len(x) == 1, x = x_0,0,0,0; for len(x) == 2, x = x_0,0,x_1,0; for len(x) == 3, x = x_0,0,x_1,x_2; for len(x)
>= 4, x = x_0,x_1,x_2,x_3.
Inspect with mystic_model_plotter using:: mystic.models.corana -b “-1:1:.01, -1:1:.01” -d -x 1
The minimum is f(x)=0 for abs(x_i) < 0.05 for all i.
easom(x)
evaluates Easom’s function for a list of coeffs
f(x) = -cos(x_0) * cos(x_1) * exp(-((x_0-pi)^2+(x_1-pi)^2))
Inspect with mystic_model_plotter using:: mystic.models.easom -b “-5:10:.1, -5:10:.1” -d
The minimum is f(x)=-1.0 at x=(pi,pi)
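The formula above is a product of two cosines under a Gaussian envelope centered at (pi, pi); a minimal standalone sketch (not the mystic source; the packaged callable is mystic.models.easom):

```python
import math

def easom(x):
    """Easom's function: nearly flat everywhere, with a single
    narrow well of depth -1 at (pi, pi)."""
    x0, x1 = x
    return (-math.cos(x0) * math.cos(x1)
            * math.exp(-((x0 - math.pi) ** 2 + (x1 - math.pi) ** 2)))

easom([math.pi, math.pi])  # -1.0
```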
ellipsoid(x)
evaluates the rotated hyper-ellipsoid function for a list of coeffs
f(x) = sum_(i=0)^(N-1) (sum_(j=0)^(i) x_j)^2
Inspect with mystic_model_plotter using:: mystic.models.ellipsoid -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
fosc3d(x)
evaluates the fOsc3D function for a list of coeffs
f(x) = f_0(x) + p(x)
Where: f_0(x) = -4 * exp(-x_(0)^2 - x_(1)^2) + sin(6*x_(0)) * sin(5*x_(1)), with p(x) = 100.*x_(1)^2 for
x_1 < 0, and p(x) = 0 otherwise.
Inspect with mystic_model_plotter using:: mystic.models.fosc3d -b “-5:5:.1, 0:5:.1” -d
The minimum is f(x)=-4.501069742528923 at x=(-0.215018, 0.240356)
goldstein(x)
evaluates Goldstein-Price’s function for a list of coeffs
f(x) = (1 + (x_0 + x_1 + 1)^2 * f_0(x)) * (30 + (2*x_0 - 3*x_1)^2 * f_1(x))
Where: f_0(x) = 19 - 14*x_0 + 3*x_(0)^2 - 14*x_1 + 6*x_(0)*x_(1) + 3*x_(1)^2 and f_1(x) = 18 - 32*x_0 +
12*x_(0)^2 + 48*x_1 - 36*x_(0)*x_(1) + 27*x_(1)^2
Inspect with mystic_model_plotter using:: mystic.models.goldstein -b “-5:5:.1, -5:5:.1” -d -x 1
The minimum is f(x)=3.0 at x=(0,-1)
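Assembling the f_0 and f_1 terms above gives a short standalone sketch (an illustrative helper, not the mystic source; the packaged callable is mystic.models.goldstein):

```python
def goldstein(x):
    """Goldstein-Price's function:
    (1 + (x0+x1+1)^2 * f0) * (30 + (2*x0-3*x1)^2 * f1)"""
    x0, x1 = x
    f0 = 19 - 14 * x0 + 3 * x0 ** 2 - 14 * x1 + 6 * x0 * x1 + 3 * x1 ** 2
    f1 = 18 - 32 * x0 + 12 * x0 ** 2 + 48 * x1 - 36 * x0 * x1 + 27 * x1 ** 2
    return ((1 + (x0 + x1 + 1) ** 2 * f0)
            * (30 + (2 * x0 - 3 * x1) ** 2 * f1))

goldstein([0, -1])  # 3
```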
griewangk(x)
evaluates an N-dimensional Griewangk’s function for a list of coeffs
f(x) = f_0(x) - f_1(x) + 1
Where: f_0(x) = sum_(i=0)^(N-1) x_(i)^(2) / 4000. and: f_1(x) = prod_(i=0)^(N-1) cos( x_i / (i+1)^(1/2) )
Inspect with mystic_model_plotter using:: mystic.models.griewangk -b “-10:10:.1, -10:10:.1” -d -x 5
The minimum is f(x)=0.0 for x_i=0.0
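The two terms above (a scaled sphere minus a cosine product) translate directly into a standalone sketch (not the mystic source; the packaged callable is mystic.models.griewangk):

```python
import math

def griewangk(x):
    """Griewangk's function: f_0(x) - f_1(x) + 1, where f_0 is a
    scaled sphere and f_1 a product of frequency-scaled cosines."""
    f0 = sum(xi ** 2 for xi in x) / 4000.0
    f1 = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return f0 - f1 + 1

griewangk([0.0, 0.0])  # 0.0
```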
michal(x)
evaluates Michalewicz’s function for a list of coeffs
f(x) = -sum_(i=0)^(N-1) sin(x_i) * (sin((i+1) * (x_i)^(2) / pi))^(20)
Inspect with mystic_model_plotter using:: mystic.models.michal -b “0:3.14:.1, 0:3.14:.1, 1.28500168,
1.92305311, 1.72047194” -d
For x=(2.20289811, 1.57078059, 1.28500168, 1.92305311, 1.72047194, . . . )[:N] and c=(-0.801303, -1.0, -
0.959092, -0.896699, -1.030564, . . . )[:N], the minimum is f(x)=sum(c) for all x_i=(0,pi)
nmin51(x)
evaluates the NMinimize51 function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = exp(sin(50*x_0)) + sin(60*exp(x_1)) + sin(70*sin(x_0)) and f_1(x) = sin(sin(80*x_1)) -
sin(10*(x_0 + x_1)) + (x_(0)^2 + x_(1)^2)/4
Inspect with mystic_model_plotter using:: mystic.models.nmin51 -b “-5:5:.1, 0:5:.1” -d
The minimum is f(x)=-3.306869 at x=(-0.02440313,0.21061247)
paviani(x)
evaluates Paviani’s function for a list of coeffs
f(x) = f_0(x) - f_1(x)
Where: f_0(x) = sum_(i=0)^(N-1) (ln(x_i - 2)^2 + ln(10 - x_i)^2) and f_1(x) = prod_(i=0)^(N-1) x_(i)^(.2)
Inspect with mystic_model_plotter using:: mystic.models.paviani -b “2:10:.1, 2:10:.1” -d
For N=1, the minimum is f(x)=2.133838 at x_i=8.501586, for N=3, the minimum is f(x)=7.386004 at
x_i=8.589578, for N=5, the minimum is f(x)=9.730525 at x_i=8.740743, for N=8, the minimum is f(x)=-
3.411859 at x_i=9.086900, for N=10, the minimum is f(x)=-45.778470 at x_i=9.350241.
peaks(x)
evaluates a 2-dimensional peaks function for a list of coeffs
f(x) = f_0(x) - f_1(x) - f_2(x)
Where: f_0(x) = 3 * (1 - x_0)^2 * exp(-x_0^2 - (x_1 + 1)^2) and f_1(x) = 10 * (.2 * x_0 - x_0^3 - x_1^5) *
exp(-x_0^2 - x_1^2) and f_2(x) = exp(-(x_0 + 1)^2 - x_1^2) / 3
Inspect with mystic_model_plotter using:: mystic.models.peaks -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=-6.551133332835841 at x=(0.22827892, -1.62553496)
powers(x)
evaluates the sum of different powers function for a list of coeffs
f(x) = sum_(i=0)^(N-1) abs(x_(i))^(i+2)
Inspect with mystic_model_plotter using:: mystic.models.powers -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
quartic(x)
evaluates an N-dimensional quartic function for a list of coeffs
f(x) = sum_(i=0)^(N-1) (x_(i)^4 * (i+1) + k_i)
Where k_i is a random variable with uniform distribution bounded by [0,1).
Inspect with mystic_model_plotter using:: mystic.models.quartic -b “-3:3:.1, -3:3:.1” -d -x 1
The minimum is f(x)=N*E[k] for x_i=0.0, where E[k] is the expectation of k, and thus E[k]=0.5 for a uniform
distribution bounded by [0,1).
rastrigin(x)
evaluates Rastrigin’s function for a list of coeffs
f(x) = 10 * N + sum_(i=0)^(N-1) (x_(i)^2 - 10 * cos(2 * pi * x_(i)))
Inspect with mystic_model_plotter using:: mystic.models.rastrigin -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
venkat91(x)
evaluates Venkataraman’s sinc function for a list of coeffs
f(x) = -20 * sin(r(x))/r(x)
Where: r(x) = sqrt((x_0 - 4)^2 + (x_1 - 4)^2 + 0.1)
Inspect with mystic_model_plotter using:: mystic.models.venkat91 -b “-10:10:.1, -10:10:.1” -d
The minimum is f(x)=-19.668329370585823 at x=(4.0, 4.0)
wavy1(x)
evaluates the wavy1 function for a list of coeffs
f(x) = abs(x + 3*sin(x + pi) + pi)
Inspect with mystic_model_plotter using:: mystic.models.wavy1 -b “-20:20:.5, -20:20:.5” -d -r numpy.add
The minimum is f(x)=0.0 at x_i=-pi for all i
wavy2(x)
evaluates the wavy2 function for a list of coeffs
f(x) = 4*sin(x)+sin(4*x)+sin(8*x)+sin(16*x)+sin(32*x)+sin(64*x)
Inspect with mystic_model_plotter using:: mystic.models.wavy2 -b “-10:10:.2, -10:10:.2” -d -r numpy.add
The function has degenerate global minima of f(x)=-6.987594 at x_i = 4.489843526 + 2*k*pi for all i, and k is
an integer
zimmermann(x)
evaluates a Zimmermann function for a list of coeffs
f(x) = max(f_0(x), p_i(x)), with i = 0,1,2,3
Where: f_0(x) = 9 - x_0 - x_1, with p_0(x) = -100 * x_0 for x_0 < 0, and p_1(x) = -100 * x_1 for x_1 < 0, and
p_i(x) = 100 * c_i(x) with i = 2,3 for c_2(x) > 16 and c_3(x) > 14, where c_2(x) = (x_0 - 3)^2 + (x_1 - 2)^2 and
c_3(x) = x_0 * x_1. Otherwise, p_i(x)=0 for i=0,1,2,3 and c_i(x)=0 for i=2,3.
Inspect with mystic_model_plotter using:: mystic.models.zimmermann -b “-5:10:.1, -5:10:.1” -d -x 1
The minimum is f(x)=0.0 at x=(7.0,2.0)
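The max-of-penalties structure above can be sketched in plain Python. This is one plausible reading of the penalty conditions, applying each constraint separately (an illustrative helper, not the mystic source; the packaged callable is mystic.models.zimmermann):

```python
def zimmermann(x):
    """Zimmermann's function: the objective f_0 combined with
    sign and constraint penalties via a max."""
    x0, x1 = x
    f0 = 9 - x0 - x1
    p = [0.0, 0.0, 0.0, 0.0]
    if x0 < 0:
        p[0] = -100 * x0
    if x1 < 0:
        p[1] = -100 * x1
    c2 = (x0 - 3) ** 2 + (x1 - 2) ** 2
    c3 = x0 * x1
    if c2 > 16:
        p[2] = 100 * c2
    if c3 > 14:
        p[3] = 100 * c3
    return max([f0] + p)

zimmermann([7.0, 2.0])  # 0.0
```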
lorentzian module
References
None
class Lorentzian(name=’lorentz’, metric=<function <lambda>>, sigma=1.0)
Bases: mystic.models.abstract_model.AbstractModel
Computes lorentzian
ForwardFactory(coeffs)
generates a lorentzian model instance from a list of coefficients
evaluate(coeffs, evalpts)
evaluate lorentzian with given coeffs over given evalpts, where coeffs = (a1,a2,a3,A0,E0,G0,n)
mogi module
Mogi’s model of surface displacements from a point spherical source in an elastic half space
References
1. Mogi, K. “Relations between the eruptions of various volcanoes and the deformations of the ground surfaces
around them”, Bull. Earthquake. Res. Inst., 36, 99-134, 1958.
class Mogi(name=’mogi’, metric=<function <lambda>>, sigma=1.0)
Bases: mystic.models.abstract_model.AbstractModel
Computes surface displacements Ux, Uy, Uz in meters from a point spherical pressure source in an elastic half
space [1].
CostFactory(target, pts)
generates a cost function instance from list of coefficients & evaluation points
CostFactory2(pts, datapts, nparams)
generates a cost function instance from datapoints & evaluation points
ForwardFactory(coeffs)
generates a mogi source instance from a list of coefficients
evaluate(coeffs, evalpts)
evaluate a single Mogi peak over a 2D (2 by N) numpy array of evalpts, where coeffs = (x0,y0,z0,dV)
nag module
This is drawn from examples in the NAG Library, with the ‘peaks’ function definition found in [1].
References
1. Numerical Algorithms Group, “NAG Library”, Oxford UK, Mark 24, 2013. http://www.nag.co.uk/numeric/CL/
nagdoc_cl24/pdf/E05/e05jbc.pdf
class Peaks(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a peaks function generator
A peaks function [1] is essentially flat, with three wells and three peaks near the origin. The global minimum is
separated from the local minima by peaks.
The generated function f(x) is identical to the ‘peaks’ function in section 10 of [1], and requires len(x) == 2.
This is drawn from examples in the NAG Library, with the ‘peaks’ function definition found in [1].
References
1. Numerical Algorithms Group, “NAG Library”, Oxford UK, Mark 24, 2013. http://www.nag.co.uk/
numeric/CL/nagdoc_cl24/pdf/E05/e05jbc.pdf
function(coeffs)
evaluates a 2-dimensional peaks function for a list of coeffs
f(x) = f_0(x) - f_1(x) - f_2(x)
Where: f_0(x) = 3 * (1 - x_0)^2 * exp(-x_0^2 - (x_1 + 1)^2) and f_1(x) = 10 * (.2 * x_0 - x_0^3 - x_1^5)
* exp(-x_0^2 - x_1^2) and f_2(x) = exp(-(x_0 + 1)^2 - x_1^2) / 3
Inspect with mystic_model_plotter using:: mystic.models.peaks -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=-6.551133332835841 at x=(0.22827892, -1.62553496)
minimizers = [(0.22827892, -1.62553496), (-1.34739625, 0.20451886), (0.29644556, 0.3201
pohlheim module
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5], [6],
and [7].
References
1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version 3.80,
2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK, 1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston MA,
1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin, Hei-
delberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear Equa-
tions”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville KY,
1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.
class Ackley(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
an Ackley’s path function generator
At a very coarse level, Ackley’s path function [1,3] is a slightly parabolic plane, with a sharp cone-shaped
depression at the origin. The global minimum is found at the origin. There are several local minima evenly
distributed across the function surface, where the surface modulates similarly to cosine.
The generated function f(x) is identical to function (10) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].
References
1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.
function(coeffs)
evaluates Ackley’s function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = -20 * exp(-0.2 * sqrt(1/N * sum_(i=0)^(N-1) x_(i)^(2))) and: f_1(x) = -exp(1/N *
sum_(i=0)^(N-1) cos(2 * pi * x_(i))) + 20 + exp(1)
Inspect with mystic_model_plotter using:: mystic.models.ackley -b “-10:10:.1, -10:10:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
minimizers = None
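The f_0 and f_1 terms above combine into a short standalone sketch (not the mystic source; the packaged callable is mystic.models.ackley):

```python
import math

def ackley(x):
    """Ackley's path function: an exponential of the RMS of x plus
    an exponential of the mean cosine modulation, shifted so that
    f(0) = 0."""
    n = len(x)
    f0 = -20 * math.exp(-0.2 * math.sqrt(sum(xi ** 2 for xi in x) / n))
    f1 = -math.exp(sum(math.cos(2 * math.pi * xi) for xi in x) / n) + 20 + math.e
    return f0 + f1
```

At the origin the two exponentials cancel the constant terms, recovering the stated minimum of 0.0.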
class Branins(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Branins’s rcos function generator
Branins’s function [1,5] is very similar to Rosenbrock’s saddle function. However unlike Rosenbrock’s saddle,
Branins’s function has a degenerate global minimum.
The generated function f(x) is identical to function (13) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].
References
1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.
function(coeffs)
evaluates Branins’s function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = a * (x_1 - b * x_(0)^(2) + c * x_0 - d)^2 and f_1(x) = e * (1 - f) * cos(x_0) + e and a=1,
b=5.1/(4*pi^2), c=5/pi, d=6, e=10, f=1/(8*pi)
Inspect with mystic_model_plotter using:: mystic.models.branins -b “-10:20:.1, -5:25:.1” -d -x 1
The minimum is f(x)=0.397887 at x=((2 +/- (2*i)+1)*pi, 2.275 + 10*i*(i+1)/2) for all i
minimizers = None
class DifferentPowers(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Pohlheim’s sum of different powers function generator
Pohlheim’s sum of different powers function [1] is unimodal, and similar to the hyper-ellipsoid and De Jong’s
sphere. The global minimum is at the origin, at the center of a broad basin.
The generated function f(x) is identical to function (9) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].
References
1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.
function(coeffs)
evaluates the sum of different powers function for a list of coeffs
f(x) = sum_(i=0)^(N-1) abs(x_(i))^(i+2)
Inspect with mystic_model_plotter using:: mystic.models.powers -b “-5:5:.1, -5:5:.1” -d
References
1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.
function(coeffs)
evaluates Easom’s function for a list of coeffs
f(x) = -cos(x_0) * cos(x_1) * exp(-((x_0-pi)^2+(x_1-pi)^2))
Inspect with mystic_model_plotter using:: mystic.models.easom -b “-5:10:.1, -5:10:.1” -d
The minimum is f(x)=-1.0 at x=(pi,pi)
minimizers = [(3.141592653589793, 3.141592653589793)]
class GoldsteinPrice(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Goldstein-Price’s function generator
Goldstein-Price’s function [1,7] provides a function with several peaks surrounding a roughly flat valley. There
are a few shallow scorings across the valley, where the global minimum is found at the intersection of the deepest
of the two scorings. Local minima occur at other intersections of scorings.
The generated function f(x) is identical to function (15) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].
References
1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.
function(coeffs)
evaluates Goldstein-Price’s function for a list of coeffs
f(x) = (1 + (x_0 + x_1 + 1)^2 * f_0(x)) * (30 + (2*x_0 - 3*x_1)^2 * f_1(x))
Where: f_0(x) = 19 - 14*x_0 + 3*x_(0)^2 - 14*x_1 + 6*x_(0)*x_(1) + 3*x_(1)^2 and f_1(x) = 18 - 32*x_0
+ 12*x_(0)^2 + 48*x_1 - 36*x_(0)*x_(1) + 27*x_(1)^2
Inspect with mystic_model_plotter using:: mystic.models.goldstein -b “-5:5:.1, -5:5:.1” -d -x 1
The minimum is f(x)=3.0 at x=(0,-1)
minimizers = [(0, -1), (-0.6, -0.4), (1.8, 0.2)]
class HyperEllipsoid(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Pohlheim’s rotated hyper-ellipsoid function generator
Pohlheim’s rotated hyper-ellipsoid function [1] is continuous, convex, and unimodal. The global minimum is
located at the center of the N-dimensional axis parallel hyper-ellipsoid.
The generated function f(x) is identical to function (1b) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].
References
1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.
function(coeffs)
evaluates the rotated hyper-ellipsoid function for a list of coeffs
f(x) = sum_(i=0)^(N-1) (sum_(j=0)^(i) x_j)^2
Inspect with mystic_model_plotter using:: mystic.models.ellipsoid -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
minimizers = [0.0]
class Michalewicz(ndim=5)
Bases: mystic.models.abstract_model.AbstractFunction
a Michalewicz’s function generator
Michalewicz’s function [1,4] in general evaluates to zero. However, there are long narrow channels that create
local minima. At the intersection of the channels, the function additionally has sharp dips – one of which is the
global minimum.
The generated function f(x) is identical to function (12) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].
References
1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.
function(coeffs)
evaluates Michalewicz’s function for a list of coeffs
f(x) = -sum_(i=0)^(N-1) sin(x_i) * (sin((i+1) * (x_i)^(2) / pi))^(20)
Inspect with mystic_model_plotter using:: mystic.models.michal -b “0:3.14:.1, 0:3.14:.1, 1.28500168,
1.92305311, 1.72047194” -d
For x=(2.20289811, 1.57078059, 1.28500168, 1.92305311, 1.72047194, . . . )[:N] and c=(-0.801303, -1.0,
-0.959092, -0.896699, -1.030564, . . . )[:N], the minimum is f(x)=sum(c) for all x_i=(0,pi)
minimizers = None
class Rastrigin(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Rastrigin’s function generator
Rastrigin’s function [1] is essentially De Jong’s sphere with the addition of cosine modulation to produce several
regularly distributed local minima. The global minimum is at the origin.
The generated function f(x) is identical to function (6) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].
References
1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.
function(coeffs)
evaluates Rastrigin’s function for a list of coeffs
f(x) = 10 * N + sum_(i=0)^(N-1) (x_(i)^2 - 10 * cos(2 * pi * x_(i)))
Inspect with mystic_model_plotter using:: mystic.models.rastrigin -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
minimizers = None
class Schwefel(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Schwefel’s function generator
Schwefel’s function [1,2] has alternating rows of peaks and valleys, with the global minimum near the edge of
the bounded parameter space. This function can be misleading for optimizers, as the next best local minima are
near the other corners of the bounded parameter space. The intensity of the peaks and valleys increases as one
moves away from the origin.
The generated function f(x) is identical to function (7) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].
References
1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.
function(coeffs)
evaluates Schwefel’s function for a list of coeffs
f(x) = sum_(i=0)^(N-1) -x_i * sin(sqrt(abs(x_i)))
Where abs(x_i) <= 500.
Inspect with mystic_model_plotter using:: mystic.models.schwefel -b “-500:500:10, -500:500:10” -d
The minimum is f(x)=-(N+1)*418.98288727243374 at x_i=420.9687465 for all i
minimizers = None
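The one-line formula above can be sketched directly (not the mystic source; the packaged callable is mystic.models.schwefel):

```python
import math

def schwefel(x):
    """Schwefel's function: each coordinate contributes
    -x_i * sin(sqrt(abs(x_i))), for abs(x_i) <= 500."""
    return sum(-xi * math.sin(math.sqrt(abs(xi))) for xi in x)
```

Each coordinate attains its minimum near x_i = 420.9687, contributing roughly -418.9829 to the sum.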
poly module
References
1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over
Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over
Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Storn, R. “Constrained Optimization” Dr. Dobb’s Journal, May, 119-123, 1995.
class Chebyshev(order=8, name=’poly’, metric=<function <lambda>>, sigma=1.0)
Bases: mystic.models.poly.Polynomial
Chebyshev polynomial models and functions, including specific methods for Tn(z) n=2,4,6,8,16, Equation (27-
33) of [2]
NOTE: default is T8(z)
CostFactory(target, pts)
generates a cost function instance from list of coefficients & evaluation points
CostFactory2(pts, datapts, nparams)
generates a cost function instance from datapoints & evaluation points
ForwardFactory(coeffs)
generates a 1-D polynomial instance from a list of coefficients
cost(trial, M=61)
The costfunction for order-n Chebyshev fitting. M evaluation points between [-1, 1], and two end points
forward(x)
forward Chebyshev function
class Polynomial(name=’poly’, metric=<function <lambda>>, sigma=1.0)
Bases: mystic.models.abstract_model.AbstractModel
1-D Polynomial models and functions
ForwardFactory(coeffs)
generates a 1-D polynomial instance from a list of coefficients using numpy.poly1d(coeffs)
evaluate(coeffs, x)
takes list of coefficients & evaluation points, returns f(x) thus, [a3, a2, a1, a0] yields a3 x^3 + a2 x^2 + a1
x^1 + a0
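The coefficient ordering above (highest power first) matches numpy.poly1d; a minimal Horner-scheme equivalent, using a hypothetical helper name for illustration:

```python
def poly_evaluate(coeffs, x):
    """Evaluate a 1-D polynomial with numpy.poly1d coefficient ordering:
    [a3, a2, a1, a0] -> a3*x**3 + a2*x**2 + a1*x + a0 (Horner's scheme)."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

poly_evaluate([3, 2, 1, 0], 2)  # 3*8 + 2*4 + 1*2 + 0 = 34
```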
chebyshev16cost(trial, M=61)
The costfunction for order-n Chebyshev fitting. M evaluation points between [-1, 1], and two end points
chebyshev2cost(trial, M=61)
The costfunction for order-n Chebyshev fitting. M evaluation points between [-1, 1], and two end points
chebyshev4cost(trial, M=61)
The costfunction for order-n Chebyshev fitting. M evaluation points between [-1, 1], and two end points
chebyshev6cost(trial, M=61)
The costfunction for order-n Chebyshev fitting. M evaluation points between [-1, 1], and two end points
chebyshev8cost(trial, M=61)
The costfunction for order-n Chebyshev fitting. M evaluation points between [-1, 1], and two end points
chebyshevcostfactory(target)
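The Tn(z) fitting targets above (default T8) can be generated independently via the standard three-term recurrence; the helper below is illustrative, not part of mystic:

```python
import math

def chebyshev(n, z):
    """Chebyshev polynomial of the first kind, T_n(z), via the
    recurrence T_0 = 1, T_1 = z, T_(k+1) = 2*z*T_k - T_(k-1)."""
    if n == 0:
        return 1.0
    t_prev, t = 1.0, z
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * z * t - t_prev
    return t

# sanity check via the identity T_n(cos t) = cos(n*t)
chebyshev(8, math.cos(0.3))  # approximately math.cos(2.4)
```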
schittkowski module
This is part of Hock and Schittkowski’s test suite in [1], with function definitions drawn from [1] and [2].
References
1. Hock, W. and Schittkowski, K. “Test Examples for Nonlinear Programming Codes”, Lecture Notes in Eco-
nomics and Mathematical Systems, Vol. 187, Springer, 1981. http://www.ai7.uni-bayreuth.de/test_problem_
coll.pdf
2. Paviani, D.A. “A new method for the solution of the general nonlinear programming problem”, Ph.D. disserta-
tion, The University of Texas, Austin, TX, 1969.
class Paviani(ndim=10)
Bases: mystic.models.abstract_model.AbstractFunction
a Paviani’s function generator
Paviani’s function [1,2] is a relatively flat basin that quickly jumps to infinity for x_i >= 10 or x_i <= 2. The
global minimum is located near one of the corners of the basin. There are local minima in the corners
adjacent to the global minimum.
The generated function f(x) is identical to function (110) of [1], where len(x) >= 0.
This is part of Hock and Schittkowski’s test suite in [1], with function definitions drawn from [1] and [2].
References
1. Hock, W. and Schittkowski, K. “Test Examples for Nonlinear Programming Codes”, Lecture Notes in
Economics and Mathematical Systems, Vol. 187, Springer, 1981. http://www.ai7.uni-bayreuth.de/test_
problem_coll.pdf
2. Paviani, D.A. “A new method for the solution of the general nonlinear programming problem”, Ph.D.
dissertation, The University of Texas, Austin, TX, 1969.
function(coeffs)
evaluates Paviani’s function for a list of coeffs
f(x) = f_0(x) - f_1(x)
Where: f_0(x) = sum_(i=0)^(N-1) (ln(x_i - 2)^2 + ln(10 - x_i)^2) and f_1(x) = prod_(i=0)^(N-1) x_(i)^(.2)
Inspect with mystic_model_plotter using:: mystic.models.paviani -b “2:10:.1, 2:10:.1” -d
For N=1, the minimum is f(x)=2.133838 at x_i=8.501586, for N=3, the minimum is f(x)=7.386004 at
x_i=8.589578, for N=5, the minimum is f(x)=9.730525 at x_i=8.740743, for N=8, the minimum is f(x)=-
3.411859 at x_i=9.086900, for N=10, the minimum is f(x)=-45.778470 at x_i=9.350241.
minimizers = None
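The formula above can be checked directly against the tabulated minima. A minimal stand-alone sketch (plain Python, not mystic's generated function object):

```python
import math

def paviani(x):
    """Paviani's function: f(x) = f_0(x) - f_1(x), per the definition above."""
    f0 = sum(math.log(xi - 2)**2 + math.log(10 - xi)**2 for xi in x)
    f1 = math.prod(xi**0.2 for xi in x)  # prod_(i=0)^(N-1) x_(i)^(.2)
    return f0 - f1

# per the table above: f ~= -45.778470 at x_i = 9.350241 for N=10
print(paviani([9.350241] * 10))
```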
storn module
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘Corana’ function definitions drawn
from [3,4], ‘Griewangk’ function definitions drawn from [5], and ‘Zimmermann’ function definitions drawn from [6].
References
1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over
Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over
Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. “Simulated Annealing: Practice Versus Theory” J. of Mathematical and Computer Modeling 18(11),
29-57, 1993.
4. Corana, A. and Marchesi, M. and Martini, C. and Ridella, S. “Minimizing Multimodal Functions of Continuous
Variables with the ‘Simulated Annealing Algorithm’” ACM Transactions on Mathematical Software, March,
272-280, 1987.
5. Griewangk, A.O. “Generalized Descent for Global Optimization” Journal of Optimization Theory and Applica-
tions 34: 11-39, 1981.
6. Zimmermann, W. “Operations Research” Oldenbourg Munchen, Wien, 1990.
class Corana(ndim=4)
Bases: mystic.models.abstract_model.AbstractFunction
a Corana’s parabola function generator
Corana’s parabola function [1,2,3,4] defines a paraboloid whose axes are parallel to the coordinate axes. This
function has a large number of wells that increase in depth with proximity to the origin. The global minimum is
a plateau around the origin.
The generated function f(x) is a modified version of equation (22) of [2], where len(x) <= 4.
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘Corana’ function definitions
drawn from [3,4], ‘Griewangk’ function definitions drawn from [5], and ‘Zimmermann’ function definitions
drawn from [6].
References
1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. “Simulated Annealing: Practice Versus Theory” J. of Mathematical and Computer Modeling
18(11), 29-57, 1993.
4. Corana, A. and Marchesi, M. and Martini, C. and Ridella, S. “Minimizing Multimodal Functions of Con-
tinuous Variables with the ‘Simulated Annealing Algorithm’” ACM Transactions on Mathematical Soft-
ware, March, 272-280, 1987.
5. Griewangk, A.O. “Generalized Descent for Global Optimization” Journal of Optimization Theory and
Applications 34: 11-39, 1981.
6. Zimmermann, W. “Operations Research” Oldenbourg Munchen, Wien, 1990.
function(coeffs)
evaluates a 4-D Corana’s parabola function for a list of coeffs
f(x) = sum_(i=0)^(3) f_0(x)
Where for abs(x_i - z_i) < 0.05: f_0(x) = 0.15*(z_i - 0.05*sign(z_i))^(2) * d_i and otherwise: f_0(x) = d_i
* x_(i)^(2), with z_i = floor(abs(x_i/0.2)+0.49999)*sign(x_i)*0.2 and d_i = 1,1000,10,100.
For len(x) == 1, x = x_0,0,0,0; for len(x) == 2, x = x_0,0,x_1,0; for len(x) == 3, x = x_0,0,x_1,x_2; for
len(x) >= 4, x = x_0,x_1,x_2,x_3.
Inspect with mystic_model_plotter using:: mystic.models.corana -b “-1:1:.01, -1:1:.01” -d -x 1
The minimum is f(x)=0 for abs(x_i) < 0.05 for all i.
minimizers = None
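A stand-alone sketch of the 4-D formula above (expects len(x) == 4; mystic pads shorter inputs as described):

```python
import math

def sign(v):
    # sign(0) == 0, matching the convention used in the formula above
    return (v > 0) - (v < 0)

def corana(x):
    """4-D Corana's parabola, per the definition above."""
    d = [1., 1000., 10., 100.]
    total = 0.
    for i in range(4):
        xi = x[i]
        zi = math.floor(abs(xi / 0.2) + 0.49999) * sign(xi) * 0.2
        if abs(xi - zi) < 0.05:
            total += 0.15 * (zi - 0.05 * sign(zi))**2 * d[i]
        else:
            total += d[i] * xi**2
    return total

print(corana([0., 0., 0., 0.]))  # 0.0, the plateau around the origin
```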
class Griewangk(ndim=10)
Bases: mystic.models.abstract_model.AbstractFunction
a Griewangk’s function generator
Griewangk’s function [1,2,5] is a multi-dimensional cosine function that provides several periodic local minima,
with the global minimum at the origin. The local minima are fractionally more shallow than the global minimum,
such that when viewed at a very coarse scale the function appears as a multi-dimensional parabola similar to De
Jong’s sphere.
The generated function f(x) is a modified version of equation (23) of [2], where len(x) >= 0.
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘Corana’ function definitions
drawn from [3,4], ‘Griewangk’ function definitions drawn from [5], and ‘Zimmermann’ function definitions
drawn from [6].
References
1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. “Simulated Annealing: Practice Versus Theory” J. of Mathematical and Computer Modeling
18(11), 29-57, 1993.
4. Corana, A. and Marchesi, M. and Martini, C. and Ridella, S. “Minimizing Multimodal Functions of Con-
tinuous Variables with the ‘Simulated Annealing Algorithm’” ACM Transactions on Mathematical Soft-
ware, March, 272-280, 1987.
5. Griewangk, A.O. “Generalized Descent for Global Optimization” Journal of Optimization Theory and
Applications 34: 11-39, 1981.
6. Zimmermann, W. “Operations Research” Oldenbourg Munchen, Wien, 1990.
function(coeffs)
evaluates an N-dimensional Griewangk’s function for a list of coeffs
f(x) = f_0(x) - f_1(x) + 1
Where: f_0(x) = sum_(i=0)^(N-1) x_(i)^(2) / 4000. and: f_1(x) = prod_(i=0)^(N-1) cos( x_i / (i+1)^(1/2)
)
Inspect with mystic_model_plotter using:: mystic.models.griewangk -b “-10:10:.1, -10:10:.1” -d -x 5
The minimum is f(x)=0.0 for x_i=0.0
minimizers = [0.0]
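A stand-alone sketch of the formula above (plain Python, not mystic's generated function object):

```python
import math

def griewangk(x):
    """N-dimensional Griewangk's function, per the definition above."""
    f0 = sum(xi**2 for xi in x) / 4000.
    f1 = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return f0 - f1 + 1

print(griewangk([0.0] * 10))  # 0.0, the global minimum at the origin
```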
class Zimmermann(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Zimmermann function generator
A Zimmermann function [1,2,6] poses difficulty for minimizers as the minimum is located at the corner of the
constrained region. A penalty is applied to all values outside the constrained region, creating a local minimum.
The generated function f(x) is a modified version of equation (24-26) of [2], and requires len(x) == 2.
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘Corana’ function definitions
drawn from [3,4], ‘Griewangk’ function definitions drawn from [5], and ‘Zimmermann’ function definitions
drawn from [6].
References
1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. “Simulated Annealing: Practice Versus Theory” J. of Mathematical and Computer Modeling
18(11), 29-57, 1993.
4. Corana, A. and Marchesi, M. and Martini, C. and Ridella, S. “Minimizing Multimodal Functions of Con-
tinuous Variables with the ‘Simulated Annealing Algorithm’” ACM Transactions on Mathematical Soft-
ware, March, 272-280, 1987.
5. Griewangk, A.O. “Generalized Descent for Global Optimization” Journal of Optimization Theory and
Applications 34: 11-39, 1981.
6. Zimmermann, W. “Operations Research” Oldenbourg Munchen, Wien, 1990.
function(coeffs)
evaluates a Zimmermann function for a list of coeffs
f(x) = max(f_0(x), p_i(x)), with i = 0,1,2,3
Where: f_0(x) = 9 - x_0 - x_1, with p_0(x) = -100 * x_0 for x_0 < 0, p_1(x) = -100 * x_1 for x_1 < 0,
p_2(x) = 100 * c_2(x) for c_2(x) > 16, and p_3(x) = 100 * c_3(x) for c_3(x) > 14, where
c_2(x) = (x_0 - 3)^2 + (x_1 - 2)^2 and c_3(x) = x_0 * x_1. Otherwise, p_i(x)=0 for i=0,1,2,3 and c_i(x)=0 for i=2,3.
Inspect with mystic_model_plotter using:: mystic.models.zimmermann -b “-5:10:.1, -5:10:.1” -d -x 1
The minimum is f(x)=0.0 at x=(7.0,2.0)
minimizers = [(7.0, 2.0), (2.3547765, 5.948322)]
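A stand-alone sketch of the formula above, reading each penalty p_i as active when its own condition is violated (an interpretation of the prose; mystic's generated function is authoritative):

```python
def zimmermann(x):
    """Zimmermann's function, per the definition above."""
    x0, x1 = x
    f0 = 9 - x0 - x1
    c2 = (x0 - 3)**2 + (x1 - 2)**2
    c3 = x0 * x1
    p0 = -100 * x0 if x0 < 0 else 0   # penalize x_0 < 0
    p1 = -100 * x1 if x1 < 0 else 0   # penalize x_1 < 0
    p2 = 100 * c2 if c2 > 16 else 0   # penalize c_2(x) > 16
    p3 = 100 * c3 if c3 > 14 else 0   # penalize c_3(x) > 14
    return max(f0, p0, p1, p2, p3)

print(zimmermann((7.0, 2.0)))  # 0.0, at the corner of the constrained region
```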
venkataraman module
This is drawn from examples in Applied Optimization with MATLAB programming, with the function definition
found in [1].
References
1. Venkataraman, P. “Applied Optimization with MATLAB Programming”, John Wiley and Sons, Hoboken NJ,
2nd Edition, 2009.
class Sinc(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Venkataraman’s sinc function generator
Venkataraman’s sinc function [1] has the global minimum at the center of concentric rings of local minima, with
well depth decreasing with distance from center.
The generated function f(x) is identical to equation (9.5) of example 9.1 of [1], and requires len(x) == 2.
This is drawn from examples in Applied Optimization with MATLAB programming, with the function definition
found in [1].
References
1. Venkataraman, P. “Applied Optimization with MATLAB Programming”, John Wiley and Sons, Hoboken
NJ, 2nd Edition, 2009.
function(coeffs)
evaluates Venkataraman’s sinc function for a list of coeffs
f(x) = -20 * sin(r(x))/r(x)
Where: r(x) = sqrt((x_0 - 4)^2 + (x_1 - 4)^2 + 0.1)
Inspect with mystic_model_plotter using:: mystic.models.venkat91 -b “-10:10:.1, -10:10:.1” -d
The minimum is f(x)=-19.668329370585823 at x=(4.0, 4.0)
minimizers = None
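A stand-alone sketch of the formula above:

```python
import math

def sinc(x):
    """Venkataraman's sinc function, per the definition above."""
    r = math.sqrt((x[0] - 4)**2 + (x[1] - 4)**2 + 0.1)
    return -20. * math.sin(r) / r

print(sinc((4.0, 4.0)))  # the stated minimum, ~ -19.668329
```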
wavy module
Multi-minima example functions with vector outputs, which require a ‘reducing’ function to provide scalar return
values.
References
None
class Wavy1(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a wavy1 function generator
A wavy1 function has a vector return value, and oscillates similarly to x+sin(x) in each direction. When a
reduction function, like ‘numpy.add’ is applied, the surface can be visualized. The global minimum is at the
center of a cross-hairs running along x_i = -pi, with periodic local minima in each direction.
The generated function f(x) requires len(x) > 0, and a reducing function for use in most optimizers.
Multi-minima example functions with vector outputs, which require a ‘reducing’ function to provide scalar
return values.
References
None
function(coeffs)
evaluates the wavy1 function for a list of coeffs
f(x) = abs(x + 3*sin(x + pi) + pi)
Inspect with mystic_model_plotter using:: mystic.models.wavy1 -b “-20:20:.5, -20:20:.5” -d -r
numpy.add
The minimum is f(x)=0.0 at x_i=-pi for all i
minimizers = [-3.141592653589793]
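A stand-alone sketch of the vector-valued formula above, with `sum` standing in for a reducing function like numpy.add:

```python
import math

def wavy1(x):
    """vector-valued wavy1, per the definition above"""
    return [abs(xi + 3 * math.sin(xi + math.pi) + math.pi) for xi in x]

# reduce the vector output to a scalar, as most optimizers require
def wavy1_reduced(x):
    return sum(wavy1(x))

print(wavy1_reduced([-math.pi, -math.pi]))  # 0.0, the global minimum
```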
class Wavy2(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
function(coeffs)
evaluates the wavy2 function for a list of coeffs
f(x) = 4*sin(x)+sin(4*x)+sin(8*x)+sin(16*x)+sin(32*x)+sin(64*x)
wolfram module
This is drawn from Mathematica’s example suites, with the ‘fOsc3D’ function definition found in [1], and the ‘XXX’
function found in [2].
References
1. Trott, M. “The Mathematica GuideBook for Numerics”, Springer-Verlag, New York, 2006.
2. Champion, B. and Strzebonski, A. “Wolfram Mathematica Tutorial Collection on Constrained Optimization”,
Wolfram Research, USA, 2008. http://reference.wolfram.com/language/guide/Optimization.html
class NMinimize51(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a NMinimize51 function generator
A NMinimize51 function [2] has many local minima. The minima are periodic over parameter space, and
modulate the surface of a parabola at the coarse scale. The global minimum is located at the deepest of the many
periodic wells.
The generated function f(x) is identical to equation (51) of the ‘NMinimize’ section in [2], and requires len(x)
== 2.
This is drawn from Mathematica’s example suites, with the ‘fOsc3D’ function definition found in [1], and the
‘XXX’ function found in [2].
References
1. Trott, M. “The Mathematica GuideBook for Numerics”, Springer-Verlag, New York, 2006.
2. Champion, B. and Strzebonski, A. “Wolfram Mathematica Tutorial Collection on Constrained Optimiza-
tion”, Wolfram Research, USA, 2008. http://reference.wolfram.com/language/guide/Optimization.html
function(coeffs)
evaluates the NMinimize51 function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = exp(sin(50*x_0)) + sin(60*exp(x_1)) + sin(70*sin(x_0)) and f_1(x) = sin(sin(80*x_1)) -
sin(10*(x_0 + x_1)) + (x_(0)^2 + x_(1)^2)/4
Inspect with mystic_model_plotter using:: mystic.models.nmin51 -b “-5:5:.1, 0:5:.1” -d
The minimum is f(x)=-3.306869 at x=(-0.02440313,0.21061247)
minimizers = [(-0.02440313, 0.21061247)]
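A stand-alone sketch of the formula above (this function also appears as problem 4 of Trefethen's SIAM 100-Digit Challenge):

```python
import math

def nmin51(x):
    """NMinimize51, per the definition above."""
    x0, x1 = x
    f0 = (math.exp(math.sin(50 * x0)) + math.sin(60 * math.exp(x1))
          + math.sin(70 * math.sin(x0)))
    f1 = (math.sin(math.sin(80 * x1)) - math.sin(10 * (x0 + x1))
          + (x0**2 + x1**2) / 4.)
    return f0 + f1

print(nmin51((-0.02440313, 0.21061247)))  # the stated minimum, ~ -3.306869
```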
class fOsc3D(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a fOsc3D function generator
A fOsc3D function [1] for positive x_1 values yields small sinusoidal oscillations on a flat plane, where a
sinkhole containing the global minimum and a few local minima is found in a small region near the origin. For
negative x_1 values, a parabolic penalty is applied that decreases as x_1 approaches zero.
The generated function f(x) is identical to equation (75) of section 1.10 of [1], and requires len(x) == 2.
This is drawn from Mathematica’s example suites, with the ‘fOsc3D’ function definition found in [1], and the
‘XXX’ function found in [2].
References
1. Trott, M. “The Mathematica GuideBook for Numerics”, Springer-Verlag, New York, 2006.
2. Champion, B. and Strzebonski, A. “Wolfram Mathematica Tutorial Collection on Constrained Optimiza-
tion”, Wolfram Research, USA, 2008. http://reference.wolfram.com/language/guide/Optimization.html
function(coeffs)
evaluates the fOsc3D function for a list of coeffs
f(x) = f_0(x) + p(x)
Where: f_0(x) = -4 * exp(-x_(0)^2 - x_(1)^2) + sin(6*x_(0)) * sin(5*x_(1)) with for x_1 < 0: p(x) =
100.*x_(1)^2 and otherwise: p(x) = 0.
Inspect with mystic_model_plotter using:: mystic.models.fosc3d -b “-5:5:.1, 0:5:.1” -d
The minimum is f(x)=-4.501069742528923 at x=(-0.215018, 0.240356)
minimizers = [(-0.215018, 0.240356)]
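A stand-alone sketch of the formula above:

```python
import math

def fosc3d(x):
    """fOsc3D function, per the definition above."""
    x0, x1 = x
    f0 = -4 * math.exp(-x0**2 - x1**2) + math.sin(6 * x0) * math.sin(5 * x1)
    p = 100. * x1**2 if x1 < 0 else 0.   # parabolic penalty for x_1 < 0
    return f0 + p

print(fosc3d((-0.215018, 0.240356)))  # the stated minimum, ~ -4.501070
```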
2.19.1 Monitors
Monitors provide the ability to monitor progress as the optimization is underway. Monitors also can be used to extract
and prepare information for mystic’s analysis viewers. Each of mystic’s monitors is customizable, and provides the
user with a different type of output. The following monitors are available:
2.19.2 Usage
Typically monitors are either bound to a model function by a modelFactory, or bound to a cost function by a
Solver.
Examples
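A minimal stand-in illustrating the calling pattern (not mystic's actual Monitor class): a monitor is called with the parameters and cost at each step, and accumulates the histories that are later read back as x (Params) and y (Costs).

```python
class SimpleMonitor:
    """sketch of the monitor pattern: record (params, cost) at each step"""
    def __init__(self):
        self.x, self.y = [], []   # parameter history and cost history
    def __call__(self, params, cost):
        self.x.append(list(params))
        self.y.append(cost)

mon = SimpleMonitor()
for step in range(3):
    params = [step, step + 1]
    mon(params, sum(p**2 for p in params))   # record each "iteration"

print(len(mon.x), mon.y)   # 3 [1, 5, 13]
```

In mystic, a monitor instance like this is typically passed to a solver (e.g. via SetGenerationMonitor), which calls it for the user.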
_Monitor__step()
__add__(monitor)
add the contents of self and the given monitor
__call__(...) <==> x(...)
__dict__ = dict_proxy({'iy': <property object>, '_Monitor__step': <function __step>,
__init__(**kwds)
x.__init__(. . . ) initializes x; see help(type(x)) for signature
__len__()
__module__ = 'mystic.monitors'
__weakref__
list of weak references to the object (if defined)
_get_y(monitor)
avoid double-conversion by combining k’s
_ik(y, k=False, type=<type ’list’>)
_k(y, type=<type ’list’>)
_pos
Positions
_step
_wts
Weights
ax
Params
ay
Costs
extend(monitor)
append the contents of the given monitor
get_ax()
get_ay()
get_id()
get_info()
get_ipos()
get_iwts()
get_ix()
get_iy()
get_pos()
get_wts()
get_x()
get_y()
id
Id
info(message)
ix
Params
iy
Costs
pos
Positions
prepend(monitor)
prepend the contents of the given monitor
wts
Weights
x
Params
y
Costs
class VerboseMonitor(interval=10, xinterval=inf, all=True, **kwds)
Bases: mystic.monitors.Monitor
A verbose version of the basic Monitor.
Prints output ‘y’ every ‘interval’, and optionally prints input parameters ‘x’ every ‘xinterval’.
__call__(...) <==> x(...)
__init__(interval=10, xinterval=inf, all=True, **kwds)
x.__init__(. . . ) initializes x; see help(type(x)) for signature
__module__ = 'mystic.monitors'
info(message)
class LoggingMonitor(interval=1, filename=’log.txt’, new=False, all=True, info=None, **kwds)
Bases: mystic.monitors.Monitor
A basic Monitor that writes to a file at specified intervals.
Logs output ‘y’ and input parameters ‘x’ to a file every ‘interval’.
__call__(...) <==> x(...)
__init__(interval=1, filename=’log.txt’, new=False, all=True, info=None, **kwds)
x.__init__(. . . ) initializes x; see help(type(x)) for signature
__module__ = 'mystic.monitors'
__reduce__()
helper for pickle
__setstate__(state)
info(message)
class VerboseLoggingMonitor(interval=1, yinterval=10, xinterval=inf, filename=’log.txt’,
new=False, all=True, info=None, **kwds)
Bases: mystic.monitors.LoggingMonitor
A Monitor that writes to a file and the screen at specified intervals.
Logs output ‘y’ and input parameters ‘x’ to a file every ‘interval’, and also prints to the screen every ‘yinterval’.
__call__(...) <==> x(...)
__init__(interval=1, yinterval=10, xinterval=inf, filename=’log.txt’, new=False, all=True,
info=None, **kwds)
x.__init__(. . . ) initializes x; see help(type(x)) for signature
__module__ = 'mystic.monitors'
__reduce__()
helper for pickle
__setstate__(state)
info(message)
CustomMonitor(*args, **kwds)
generate a custom Monitor
Parameters
• args (tuple(str)) – tuple of the required Monitor inputs (e.g. x).
• kwds (dict(str)) – dict of {"input":"doc"} (e.g. x='Params').
Returns a customized monitor instance
Examples
_solutions(monitor, last=None)
return the params from the last N entries in a monitor
_measures(monitor, last=None, weights=False)
return positions or weights from the last N entries in a monitor
this function requires a monitor that is monitoring a product_measure
_positions(monitor, last=None)
return positions from the last N entries in a monitor
this function requires a monitor that is monitoring a product_measure
_weights(monitor, last=None)
return weights from the last N entries in a monitor
this function requires a monitor that is monitoring a product_measure
_load(path, monitor=None, verbose=False)
load npts, params, and cost into monitor from file at given path
__orig_write_converge_file(mon, log_file=’paramlog.py’)
__orig_write_support_file(mon, log_file=’paramlog.py’)
converge_to_support(steps, energy)
converge_to_support_converter(file_in, file_out)
isNull(mon)
logfile_reader(filename)
old_to_new_support_converter(file_in, file_out)
raw_to_converge(steps, energy)
raw_to_converge_converter(file_in, file_out)
raw_to_support(steps, energy)
raw_to_support_converter(file_in, file_out)
read_converge_file(file_in)
read_history(source)
read parameter history and cost history from the given source
‘source’ can be a monitor, logfile, support file, or solver restart file
read_import(file, *targets)
import the targets; targets are name strings
read_monitor(mon, id=False)
read_old_support_file(file_in)
read_raw_file(file_in)
read_support_file(file_in)
read_trajectories(source)
read trajectories from a convergence logfile or a monitor
source can either be a monitor instance or a logfile path
sequence(x)
True if x is a list, tuple, or a ndarray
write_converge_file(mon, log_file=’paramlog.py’, **kwds)
write_monitor(steps, energy, id=[], k=None)
write_raw_file(mon, log_file=’paramlog.py’, **kwds)
write_support_file(mon, log_file=’paramlog.py’, **kwds)
Examples
References
1. http://en.wikipedia.org/wiki/Penalty_method
2. “Applied Optimization with MATLAB Programming”, by Venkataraman, Wiley, 2nd edition, 2009.
3. http://www.srl.gatech.edu/education/ME6103/Penalty-Barrier.ppt
4. “An Augmented Lagrange Multiplier Based Method for Mixed Integer Discrete Continuous Optimization and
Its Applications to Mechanical Design”, by Kannan and Kramer, 1994.
barrier_inequality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)
apply an infinite barrier if the given inequality constraint is violated, and a logarithmic penalty if the inequality
constraint is satisfied
penalty is p(x) = inf if constraint is violated, otherwise penalty is p(x) = -1/pk*log(-f(x)), with pk = 2k*pow(h,n)
and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) <= 0.0
lagrange_equality(condition=<function <lambda>>, args=None, kwds=None, k=20, h=5)
apply a quadratic penalty if the given equality constraint is violated
penalty is p(x) = pk*f(x)**2 + lam*f(x), with pk = k*pow(h,n) and n=0 also lagrange multiplier lam = 2k*f(x)
where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) = 0.0
lagrange_inequality(condition=<function <lambda>>, args=None, kwds=None, k=20, h=5)
apply a quadratic penalty if the given inequality constraint is violated
penalty is p(x) = pk*mpf**2 + beta*mpf, with pk = k*pow(h,n) and n=0 also mpf = max(-beta/2k, f(x)) and
lagrange multiplier beta = 2k*mpf where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) <= 0.0
linear_equality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)
apply a linear penalty if the given equality constraint is violated
penalty is p(x) = pk*abs(f(x)), with pk = k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) == 0.0
linear_inequality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)
apply a linear penalty if the given inequality constraint is violated
penalty is p(x) = pk*abs(f(x)), with pk = 2k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) <= 0.0
quadratic_equality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)
apply a quadratic penalty if the given equality constraint is violated
penalty is p(x) = pk*f(x)**2, with pk = k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) == 0.0
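The p(x) = pk*f(x)**2 construction described above can be sketched in plain Python. This is an illustration of the math, not mystic.penalty's implementation; in particular, the closure-based iter hook is an assumption about how f.iter() might increment n:

```python
def quadratic_equality_penalty(condition, k=100, h=5):
    """sketch: p(x) = pk * f(x)**2 with pk = k * h**n, per the text above;
    condition is f(x), satisfied when f(x) == 0; p.iter() increments n"""
    state = {'n': 0}
    def p(x):
        pk = k * h**state['n']
        return pk * condition(x)**2
    p.iter = lambda: state.__setitem__('n', state['n'] + 1)
    return p

# penalize violation of the equality constraint f(x) = x0 + x1 - 1 = 0
p = quadratic_equality_penalty(lambda x: x[0] + x[1] - 1)
print(p([0.5, 0.5]))  # 0.0: the constraint is satisfied
print(p([1.0, 1.0]))  # 100.0: pk * f(x)**2 = 100 * 1**2
p.iter()              # n -> 1, so pk = k * h**1 = 500
print(p([1.0, 1.0]))  # 500.0
```

Adding such a penalty to a cost function steers the optimizer toward the feasible region, with the pressure increasing each time iter() is called.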
This module contains map and pipe interfaces to standard (i.e. serial) python.
Pipe methods provided: pipe - blocking communication pipe [returns: value]
Map methods provided: map - blocking and ordered worker pool [returns: list] imap - non-blocking and ordered
worker pool [returns: iterator]
2.22.1 Usage
A typical call to a pathos python map will roughly follow this example:
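A minimal sketch using Python's built-ins, which these serial maps wrap (the pool method names map and imap follow the listing above; this is not the pathos API itself):

```python
def f(x):
    return x * x

args = [1, 2, 3, 4]

# blocking, ordered map: returns a list
results = list(map(f, args))
print(results)          # [1, 4, 9, 16]

# non-blocking, ordered imap: returns an iterator, consumed lazily
it = map(f, args)       # built-in map is already lazy in Python 3
print(next(it))         # 1
```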
Notes
This worker pool leverages the built-in python maps, and thus does not have limitations due to serialization of the
function f or the sequences in args. The maps in this worker pool have full functionality whether run from a script or
in the python interpreter, and work reliably for both imported and interactively-defined functions.
Defaults for mapper and launcher. These should be available as a minimal (dependency-free) pure-python install from
pathos:
Notes
serial_launcher(kdict={})
prepare launch for standard execution syntax: (python) (program) (progargs)
Notes
run non-python shell commands by setting python to a null string: kdict = {'python':'', ...}
worker_pool()
use the ‘worker pool’ strategy; hence one job is allocated to each worker, and the next new work item is provided
when a node completes its work
Implements the “Shuffled Complex Evolution Metropolis” Algorithm of Vrugt et al. [1]
Reference:
[1] Jasper A. Vrugt, Hoshin V. Gupta, Willem Bouten, and Soroosh Sorooshian A Shuffled Complex
Evolution Metropolis algorithm for optimization and uncertainty assessment of hydrologic model param-
eters, WATER RESOURCES RESEARCH, VOL. 39, NO. 8, 1201, doi:10.1029/2002WR001642, 2003
http://www.agu.org/pubs/crossref/2003/2002WR001642.shtml
[2] Vrugt JA, Nuallain, Robinson BA, Bouten W, Dekker SC, Sloot PM Application of parallel computing to stochastic
parameter estimation in environmental models, Computers & Geosciences, Vol. 32, No. 8. (October 2006), pp. 1139-
1155. http://www.science.uva.nl/research/scs/papers/archive/Vrugt2006b.pdf
multinormal_pdf(mean, var)
var must be symmetric positive definite
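For illustration, the density itself can be sketched for the 2-D case with a symmetric positive definite var (explicit 2x2 inverse and determinant; mystic's implementation works for general dimensions via numpy):

```python
import math

def multinormal_pdf(mean, var):
    """sketch: 2-D multivariate normal density with covariance var"""
    (a, b), (c, d) = var            # 2x2 covariance; b == c by symmetry
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    def pdf(x):
        u = [x[0] - mean[0], x[1] - mean[1]]
        q = (u[0] * (inv[0][0] * u[0] + inv[0][1] * u[1])
             + u[1] * (inv[1][0] * u[0] + inv[1][1] * u[1]))
        return math.exp(-0.5 * q) / math.sqrt((2 * math.pi)**2 * det)
    return pdf

pdf = multinormal_pdf([0., 0.], [[1., 0.], [0., 1.]])
print(pdf([0., 0.]))    # 1/(2*pi) ~= 0.1591549 at the mean
```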
myinsert(a, x)
remix(Cs, As)
Mixing and dealing the complexes. The types of Cs and As are very important.
scem(Ck, ak, Sk, Sak, target, cn)
This is the SCEM algorithm starting from line [35] of the reference [1].
• [inout] Ck is the kth ‘complex’ with m points. This should be an m by n array, n being the dimensionality
of the density function; i.e., the data are arranged in rows.
Ck is assumed to be sorted according to the target density.
• [inout] ak, the density of the points of Ck.
• [inout] Sk, the entire chain. (should be a list)
• [inout] Sak, the cost of the entire chain (should be a list)
Sak would be more convenient to use if it is a numpy array, but we need to append to it frequently.
• [in] target: target density function
• [in] cn: jumprate (see Paragraph 37 of [1])
• The invariants: ak is always aligned with Ck, and holds the cost of Ck
• Similarly, Sak is always aligned with Sk in the same way.
• On return, the sort order in Ck/ak is destroyed; but see sort_complex2
sequential_deal(inarray, n)
• inarray: should be a set of N objects (the objects can be vectors themselves), but inarray should be index-
able like a list. It is coerced into a numpy array because the last operation requires that it is also indexable
by a ‘list.’
• it should have a length divisible by n, otherwise the reshape will fail (this is a feature !)
• sequential_deal(range(20), 5) will return a 5-element list, each element being a 4-list of indices. (see below)
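A plausible pure-Python reading of the behavior described above, dealing N indexable items into n groups. Mystic's actual implementation uses a numpy reshape, and its ordering (sequential blocks vs. a round-robin deal) may differ from this sketch:

```python
def sequential_deal(inarray, n):
    """sketch: split len(inarray) items into n equal groups, in order"""
    items = list(inarray)
    size = len(items) // n          # len(items) must be divisible by n
    return [items[i * size:(i + 1) * size] for i in range(n)]

hands = sequential_deal(range(20), 5)
print(len(hands), len(hands[0]))    # 5 4
```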
sort_ab_with_b(a, b, ord=-1)
default is descending
sort_and_deal(cards, target, nplayers)
sort_complex(c, a)
sort_complex0(c, a)
sort_complex2(c, a)
• c and a are partially sorted (either the last one is bad, or the first one)
• pos: 0 (first one out of order) or -1 (last one out of order)
2.25.1 Solvers
This module contains a collection of optimization routines adapted from scipy.optimize. The minimal scipy interface
has been preserved, and functionality from the mystic solver API has been added with reasonable defaults.
Minimal function interface to optimization routines::
fmin – Nelder-Mead Simplex algorithm (uses only function calls)
fmin_powell – Powell’s (modified) level set method (uses only function calls)
The corresponding solvers built on mystic’s AbstractSolver are::
NelderMeadSimplexSolver – Nelder-Mead Simplex algorithm
PowellDirectionalSolver – Powell’s (modified) level set method
Mystic solver behavior activated in fmin::
• EvaluationMonitor = Monitor()
• StepMonitor = Monitor()
• termination = CandidateRelativeTolerance(xtol,ftol)
Mystic solver behavior activated in fmin_powell::
• EvaluationMonitor = Monitor()
• StepMonitor = Monitor()
• termination = NormalizedChangeOverGeneration(ftol)
2.25.2 Usage
References
1. Nelder, J.A. and Mead, R. (1965), “A simplex method for function minimization”, The Computer Journal, 7, pp.
308-313.
2. Wright, M.H. (1996), “Direct Search Methods: Once Scorned, Now Respectable”, in Numerical Analysis 1995,
Proceedings of the 1995 Dundee Biennial Conference in Numerical Analysis, D.F. Griffiths and G.A. Watson
(Eds.), Addison Wesley Longman, Harlow, UK, pp. 191-208.
3. Gao, F. and Han, L. (2012), “Implementing the Nelder-Mead simplex algorithm with adaptive parameters”,
Computational Optimization and Applications. 51:1, pp. 259-277.
4. Powell M.J.D. (1964) An efficient method for finding the minimum of a function of several variables without
calculating derivatives, Computer Journal, 7 (2):155-162.
5. Press W., Teukolsky S.A., Vetterling W.T., and Flannery B.P.: Numerical Recipes (any edition), Cambridge
University Press
class NelderMeadSimplexSolver(dim)
Bases: mystic.abstract_solver.AbstractSolver
Nelder Mead Simplex optimization adapted from scipy.optimize.fmin.
Takes one initial input: dim – dimensionality of the problem
The size of the simplex is dim+1.
Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using the downhill simplex algorithm.
Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• disp (bool, default=False) – if True, print convergence messages.
• radius (float, default=0.05) – percentage change for initial simplex values.
• adaptive (bool, default=False) – adapt algorithm parameters to the dimensionality of the
initial parameter vector x.
Returns None
_SetEvaluationLimits(iterscale=200, evalscale=200)
set the evaluation limits
_Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration. Note that ExtraArgs should be a tuple of extra arguments
__init__(dim)
Takes one initial input: dim – dimensionality of the problem
The size of the simplex is dim+1.
__module__ = 'mystic.scipy_optimize'
_decorate_objective(cost, ExtraArgs=None)
decorate the cost function with bounds, penalties, monitors, etc
_process_inputs(kwds)
process and activate input settings
_setSimplexWithinRangeBoundary(radius=None)
ensure that initial simplex is set within bounds - radius: size of the initial simplex [default=0.05]
class PowellDirectionalSolver(dim)
Bases: mystic.abstract_solver.AbstractSolver
Powell Direction Search optimization, adapted from scipy.optimize.fmin_powell.
Takes one initial input: dim – dimensionality of the problem
Finalize(**kwds)
cleanup upon exiting the main optimization loop
Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using modified Powell’s method.
Uses a modified Powell Directional Search algorithm to find the minimum of a function of one or more
variables.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• direc (tuple, default=None) – the initial direction set.
• xtol (float, default=1e-4) – line-search error tolerance.
• imax (float, default=500) – line-search maximum iterations.
• disp (bool, default=False) – if True, print convergence messages.
Returns None
_PowellDirectionalSolver__generations()
get the number of iterations
_SetEvaluationLimits(iterscale=1000, evalscale=1000)
set the evaluation limits
_Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration. Note that ExtraArgs should be a tuple of extra arguments
__init__(dim)
Takes one initial input: dim – dimensionality of the problem
__module__ = 'mystic.scipy_optimize'
_process_inputs(kwds)
process and activate input settings
generations
get the number of iterations
fmin(cost, x0, args=(), bounds=None, xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None,
full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using the downhill simplex algorithm.
Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables. This
algorithm only uses function values, not derivatives or second derivatives. Mimics the scipy.optimize.
fmin interface.
This algorithm has a long history of successful use in applications. It will usually be slower than an algorithm
that uses first or second derivative information. In practice it can have poor performance in high-dimensional
problems and is not robust to minimizing complicated functions. Additionally, there currently is no complete
theory describing when the algorithm will successfully converge to the minimum, or how fast it will if it does.
Both the ftol and xtol criteria must be met for convergence.
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• x0 (ndarray) – the initial guess parameter vector x.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• xtol (float, default=1e-4) – acceptable absolute error in xopt for convergence.
• ftol (float, default=1e-4) – acceptable absolute error in cost(xopt) for convergence.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk' is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y == 0 when the encoded constraints are satisfied (and y > 0
otherwise).
Returns (xopt, {fopt, iter, funcalls, warnflag}, {allvecs})
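For concreteness, the callable shapes expected by the constraints and penalty arguments can be sketched as follows (the box bounds and the sum constraint here are illustrative examples, not part of mystic):

```python
# Sketch of the callable shapes fmin expects for ``constraints`` and
# ``penalty``. The specific bounds and constraint are illustrative only.

def constraints(xk):
    # project each parameter into [0, 1], returning a feasible xk'
    return [min(max(x, 0.0), 1.0) for x in xk]

def penalty(xk):
    # zero when sum(xk) <= 1 (constraint satisfied), positive otherwise
    excess = sum(xk) - 1.0
    return max(0.0, excess)

print(constraints([-0.5, 1.7]))  # -> [0.0, 1.0]
print(penalty([0.3, 0.4]))       # -> 0.0 (constraint satisfied)
```

Both callables take and return plain parameter vectors, so they compose with any of the solvers documented here.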
Notes
or as a function call:
Parameters
• model (str) – full import path for the model (e.g. mystic.models.rosen)
• logfile (str, default=None) – name of convergence logfile (e.g. log.txt)
Returns None
Notes
• The option out takes a string of the filepath for the generated plot.
• The option bounds takes an indicator string, where bounds are given as comma-separated slices. For
example, using bounds = "-1:10, 0:20" will set lower and upper bounds for x to be (-1,10) and
y to be (0,20). The “step” can also be given, to control the number of lines plotted in the grid. Thus
"-1:10:.1, 0:20" sets the bounds as above, but uses increments of .1 along x and the default step
along y. For models > 2D, the bounds can be used to specify 2 dimensions plus fixed values for remaining
dimensions. Thus, "-1:10, 0:20, 1.0" plots the 2D surface where the z-axis is fixed at z=1.0.
When called from a script, slice objects can be used instead of a string, thus "-1:10:.1, 0:20, 1.0" becomes (slice(-1,10,.1), slice(20), 1.0).
• The option label takes comma-separated strings. For example, label = "x,y," will place ‘x’ on the
x-axis, ‘y’ on the y-axis, and nothing on the z-axis. LaTeX is also accepted. For example, label = "$ h $, $ {\alpha}$, $ v$" will label the axes with standard LaTeX math formatting. Note that the
leading space is required, while a trailing space aligns the text with the axis instead of the plot frame.
• The option nid takes an integer of the nth simultaneous points to plot.
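A sketch of how such a bounds indicator string maps to slice objects (parse_bounds is a hypothetical helper for illustration, not mystic's actual parser):

```python
# Parse a bounds indicator string ("-1:10:.1, 0:20, 1.0") into slice
# objects and fixed values, as described above. Illustrative only.

def parse_bounds(spec):
    out = []
    for part in spec.split(','):
        part = part.strip()
        if ':' in part:
            # "lo:hi" or "lo:hi:step"; empty fields become None
            fields = [float(f) if f else None for f in part.split(':')]
            out.append(slice(*fields))
        else:
            out.append(float(part))  # a fixed value for that dimension
    return out

print(parse_bounds("-1:10:.1, 0:20, 1.0"))
# -> [slice(-1.0, 10.0, 0.1), slice(0.0, 20.0, None), 1.0]
```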
log_reader(filename, **kwds)
plot parameter convergence from file written with LoggingMonitor
Available from the command shell as:
or as a function call:
mystic.log_reader(filename, **options)
Notes
• The option out takes a string of the filepath for the generated plot.
• The option dots takes a boolean, and will show data points in the plot.
• The option line takes a boolean, and will connect the data with a line.
• The option iter takes an integer of the largest iteration to plot.
• The option legend takes a boolean, and will display the legend.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option param takes an indicator string. The indicator string is built from comma-separated array slices.
For example, params = ":" will plot all parameters. Alternatively, params = ":2, 3:" will plot
all parameters except for the third parameter, while params = "0" will only plot the first parameter.
collapse_plotter(filename, **kwds)
generate cost convergence rate plots from file written with write_support_file
Available from the command shell as:
or as a function call:
mystic.collapse_plotter(filename, **options)
Notes
• The option dots takes a boolean, and will show data points in the plot.
• The option linear takes a boolean, and will plot in a linear scale.
• The option out takes a string of the filepath for the generated plot.
• The option iter takes an integer of the largest iteration to plot.
• The option label takes a label string. For example, label = "y" will label the plot with a ‘y’, while
label = " log-cost, $ log_{10}(\hat{P} - \hat{P}_{max})$" will label the y-axis
with standard LaTeX math formatting. Note that the leading space is required, and that the text is aligned
along the axis.
• The option col takes a string of comma-separated integers indicating iteration numbers where parameter
collapse has occurred. If a second set of integers is provided (delineated by a semicolon), the additional
set of integers will be plotted with a different linestyle (to indicate a different type of collapse).
a global searcher
class Searcher(npts=4, retry=1, tol=8, memtol=1, map=None, archive=None, sprayer=None,
seeker=None, traj=False, disp=False)
Bases: object
searcher, which searches for all minima of a response surface
Input:
• npts – number of solvers in the ensemble
• retry – max consecutive retries w/o an archive ‘miss’
• tol – rounding precision for the minima comparator
• memtol – rounding precision for memoization
• map – map used for spawning solvers in the ensemble
• archive – the sampled point archive(s)
• sprayer – the mystic.ensemble instance
• seeker – the mystic.solvers instance
• traj – if True, save the parameter trajectories
• disp – if True, be verbose
Coordinates(unique=False)
return the sequence of stored parameter trajectories
Input: unique: if True, only return unique values
Output: a list of parameter trajectories
Minima(tol=None)
return a dict of (coordinates,values) of all discovered minima
Input: tol: tolerance within which to consider a point a minimum
Output: a dict of (coordinates,values) of all discovered minima
Reset(archive=None, inv=None)
clear the archive of sampled points
Input:
• archive – the sampled point archive(s)
• inv – if True, reset the archive for the inverse of the objective
Samples()
return array of (coordinates, cost) for all trajectories
Search(model, bounds, stop=None, monitor=None, traj=None, disp=None)
use an ensemble of optimizers to search for all minima
Inputs:
• model – function z=f(x) to be used as the objective of the Searcher
• bounds – tuple of floats (min,max), bounds on the search region
• stop – termination condition
• monitor – mystic.monitor instance to store parameter trajectories
• traj – klepto.archive to store sampled points
• disp – if True, be verbose
Trajectories()
return tuple (iteration, coordinates, cost) of all trajectories
UseTrajectories(traj=True)
save all sprayers, thus save all trajectories
Values(unique=False)
return the sequence of stored response surface outputs
Input: unique: if True, only return unique values
Output: a list of stored response surface outputs
Verbose(disp=True)
be verbose
__dict__ = dict_proxy({'Reset': <function Reset>, '__module__': 'mystic.search', '_pr
__init__(npts=4, retry=1, tol=8, memtol=1, map=None, archive=None, sprayer=None,
seeker=None, traj=False, disp=False)
searcher, which searches for all minima of a response surface
Input:
• npts – number of solvers in the ensemble
• retry – max consecutive retries w/o an archive ‘miss’
• tol – rounding precision for the minima comparator
• memtol – rounding precision for memoization
• map – map used for spawning solvers in the ensemble
• archive – the sampled point archive(s)
• sprayer – the mystic.ensemble instance
• seeker – the mystic.solvers instance
• traj – if True, save the parameter trajectories
• disp – if True, be verbose
__module__ = 'mystic.search'
__weakref__
list of weak references to the object (if defined)
_configure(model, bounds, stop=None, monitor=None)
generate ensemble solver from objective, bounds, termination, monitor
_memoize(solver, tol=1)
apply caching archive to ensemble solver instance
_print(solver, tol=8)
print bestSolution and bestEnergy for each sprayer
_search(sid)
run the solver, store the trajectory, and cache to the archive
_solve(id=None, disp=None)
run the solver (i.e. search for the minima)
_summarize()
provide a summary of the search results
All of mystic’s optimizers derive from the solver API, which provides each optimizer with a standard, but highly-
customizable interface. A description of the solver API is found in mystic.models.abstract_model, and in
each derived optimizer. Mystic’s optimizers are:
** Global Optimizers **
DifferentialEvolutionSolver -- Differential Evolution algorithm
DifferentialEvolutionSolver2 -- Price & Storn's Differential Evolution
** Pseudo-Global Optimizers **
SparsitySolver -- N Solvers sampled where point density is low
BuckshotSolver -- Uniform Random Distribution of N Solvers
LatticeSolver -- Distribution of N Solvers on a Regular Grid
** Local-Search Optimizers **
NelderMeadSimplexSolver -- Nelder-Mead Simplex algorithm
PowellDirectionalSolver -- Powell's (modified) Level Set algorithm
Most of mystic’s optimizers can be called from a minimal (i.e. one-line) interface. The collection of arguments is
often unique to the optimizer, and if the underlying solver derives from a third-party package, the original interface is
reproduced. Minimal interfaces to these optimizers are provided:
** Global Optimizers **
diffev -- DifferentialEvolutionSolver
diffev2 -- DifferentialEvolutionSolver2
** Pseudo-Global Optimizers **
sparsity -- SparsitySolver
buckshot -- BuckshotSolver
lattice -- LatticeSolver
** Local-Search Optimizers **
fmin -- NelderMeadSimplexSolver
fmin_powell -- PowellDirectionalSolver
For more information, please see the solver documentation found here:
• mystic.differential_evolution [differential evolution solvers]
• mystic.scipy_optimize [scipy local-search solvers]
• mystic.ensemble [pseudo-global solvers]
or the API documentation found here:
• mystic.abstract_solver [the solver API definition]
• mystic.abstract_map_solver [the parallel solver API]
• mystic.abstract_ensemble_solver [the ensemble solver API]
UpdateGenealogyRecords(id, newchild)
Override me for more refined behavior. Currently all changes are logged.
_Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration. Note that ExtraArgs should be a tuple of extra arguments
__init__(dim, NP=4)
Takes two initial inputs: dim – dimensionality of the problem NP – size of the trial solution population.
[requires: NP >= 4]
All important class members are inherited from AbstractSolver.
__module__ = 'mystic.differential_evolution'
_decorate_objective(cost, ExtraArgs=None)
decorate cost function with bounds, penalties, monitors, etc
_process_inputs(kwds)
process and activate input settings
class DifferentialEvolutionSolver2(dim, NP=4)
Bases: mystic.abstract_map_solver.AbstractMapSolver
Differential Evolution optimization, using Storn and Price’s algorithm.
Alternate implementation:
• utilizes a map-reduce interface, extensible to parallel computing
• both a current and a next generation are kept, while the current generation is invariant during the main
DE logic
Takes two initial inputs: dim – dimensionality of the problem NP – size of the trial solution population. [re-
quires: NP >= 4]
All important class members are inherited from AbstractSolver.
SetConstraints(constraints)
apply a constraints function to the optimization
Input:
• a constraints function of the form: xk' = constraints(xk), where xk is the current parameter vector. Ideally, this function is constructed so the parameter vector it passes to the cost function will satisfy the desired (i.e. encoded) constraints.
Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using differential evolution.
Uses a differential evolution algorithm to find the minimum of a function of one or more variables. This
implementation holds the current generation invariant until the end of each iteration.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial so-
lutions.
• CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.
• ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.
Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• disp (bool, default=False) – if True, print convergence messages.
• radius (float, default=0.05) – percentage change for initial simplex values.
• adaptive (bool, default=False) – adapt algorithm parameters to the dimensionality of the
initial parameter vector x.
Returns None
_SetEvaluationLimits(iterscale=200, evalscale=200)
set the evaluation limits
_Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration. Note that ExtraArgs should be a tuple of extra arguments
__init__(dim)
Takes one initial input: dim – dimensionality of the problem
The size of the simplex is dim+1.
__module__ = 'mystic.scipy_optimize'
_decorate_objective(cost, ExtraArgs=None)
decorate the cost function with bounds, penalties, monitors, etc
_process_inputs(kwds)
process and activate input settings
_setSimplexWithinRangeBoundary(radius=None)
ensure that initial simplex is set within bounds - radius: size of the initial simplex [default=0.05]
class PowellDirectionalSolver(dim)
Bases: mystic.abstract_solver.AbstractSolver
Powell Direction Search optimization, adapted from scipy.optimize.fmin_powell.
Takes one initial input: dim – dimensionality of the problem
Finalize(**kwds)
cleanup upon exiting the main optimization loop
Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using modified Powell’s method.
Uses a modified Powell Directional Search algorithm to find the minimum of a function of one or more
variables.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• xtol (float, default=1e-4) – acceptable relative error in xopt for convergence.
• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=2) – maximum iterations to run without improvement.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• direc (tuple, default=None) – the initial direction set.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk' is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y == 0 when the encoded constraints are satisfied (and y > 0
otherwise).
Returns (xopt, {fopt, iter, funcalls, warnflag, direc}, {allvecs})
Notes
Uses a lattice ensemble algorithm to find the minimum of a function of one or more variables. Mimics the
scipy.optimize.fmin interface. Starts N solver instances at regular intervals in parameter space, deter-
mined by nbins (N = numpy.prod(nbins); len(nbins) == ndim).
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• ndim (int) – dimensionality of the problem.
• nbins (tuple(int), default=8) – total bins, or # of bins in each dimension.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=10) – maximum iterations to run without improvement.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• solver (solver, default=None) – override the default nested Solver instance.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk' is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y == 0 when the encoded constraints are satisfied (and y > 0
otherwise).
• map (func, default=None) – a (parallel) map function y = map(f, x).
• dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting
position using the given distribution.
Returns (xopt, {fopt, iter, funcalls, warnflag, allfuncalls}, {allvecs})
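The regular-interval placement of the N = numpy.prod(nbins) solver instances can be illustrated with a small sketch (lattice_points is a hypothetical helper, not mystic's implementation):

```python
import numpy as np

# Place one starting point at the center of each bin of a regular grid,
# giving N = numpy.prod(nbins) start points. Illustrative sketch only.

def lattice_points(bounds, nbins):
    centers = []
    for (lo, hi), n in zip(bounds, nbins):
        edges = np.linspace(lo, hi, n + 1)
        centers.append((edges[:-1] + edges[1:]) / 2.0)  # bin centers
    grid = np.meshgrid(*centers, indexing='ij')
    return np.stack([g.ravel() for g in grid], axis=-1)  # shape (N, ndim)

pts = lattice_points([(0.0, 1.0), (0.0, 2.0)], nbins=(2, 2))
print(pts)  # 4 start points: (0.25, 0.5), (0.25, 1.5), (0.75, 0.5), (0.75, 1.5)
```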
Notes
Rand1Bin(inst, candidate)
trial solution is randomly chosen candidate plus scaled difference of two other randomly chosen candidates;
mutates at random
trial = candidate1 + scale*(candidate2 - candidate3)
Rand1Exp(inst, candidate)
trial solution is randomly chosen candidate plus scaled difference of two other randomly chosen candidates;
mutates until random stop
trial = candidate1 + scale*(candidate2 - candidate3)
Rand2Bin(inst, candidate)
trial solution is randomly chosen candidate plus scaled contributions of four other randomly chosen candidates;
mutates at random
trial = candidate1 + scale*(candidate2 + candidate3 - candidate4 - candidate5)
Rand2Exp(inst, candidate)
trial solution is randomly chosen candidate plus scaled contributions from four other randomly chosen candi-
dates; mutates until random stop
trial = candidate1 + scale*(candidate2 + candidate3 - candidate4 - candidate5)
RandToBest1Bin(inst, candidate)
trial solution is itself plus scaled difference of best solution and trial solution, plus the difference of two randomly
chosen candidates; mutates until random stop
trial += scale*(best - trial) + scale*(candidate1 - candidate2)
RandToBest1Exp(inst, candidate)
trial solution is itself plus scaled difference of best solution and trial solution, plus the difference of two randomly
chosen candidates; mutates at random
trial += scale*(best - trial) + scale*(candidate1 - candidate2)
get_random_candidates(NP, exclude, N)
select N random candidates from a population of size NP, where exclude is the candidate to exclude from selection.
Thus, get_random_candidates(x,1,2) randomly selects two members x[i] of the population x, where i != 1
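The mutation formulas above can be written out for concreteness (simplified numpy sketches of the strategies, not mystic's implementations):

```python
import numpy as np

# Sketches of two differential-evolution mutation formulas from above.

def rand1(c1, c2, c3, scale=0.8):
    # trial = candidate1 + scale*(candidate2 - candidate3)
    return c1 + scale * (c2 - c3)

def rand_to_best1(trial, best, c1, c2, scale=0.8):
    # trial += scale*(best - trial) + scale*(candidate1 - candidate2)
    return trial + scale * (best - trial) + scale * (c1 - c2)

c1, c2, c3 = np.array([1., 1.]), np.array([2., 0.]), np.array([0., 2.])
trial = rand1(c1, c2, c3)   # [1 + 0.8*2, 1 - 0.8*2] = [2.6, -0.6]
```

In the full strategies, a crossover rule (Bin or Exp) then decides which components of the trial replace components of the candidate.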
or as a function call:
mystic.support.convergence(filename, **options)
Notes
• The option out takes a string of the filepath for the generated plot.
• The option iter takes an integer of the largest iteration to plot.
• The option param takes an indicator string. The indicator string is built from comma-separated array slices.
For example, params = ":" will plot all parameters in a single plot. Alternatively, params = ":2,
2:" will split the parameters into two plots, and params = "0" will only plot the first parameter.
• The option label takes comma-separated strings. For example, label = "x,y," will label the y-axis
of the first plot with ‘x’, a second plot with ‘y’, and not add a label to a third or subsequent plots. If more
labels are given than plots, then the last label will be used for the y-axis of the ‘cost’ plot. LaTeX is also
accepted. For example, label = "$ h$, $ a$, $ v$" will label the axes with standard LaTeX
math formatting. Note that the leading space is required, and the text is aligned along the axis.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option cost takes a boolean, and will also plot the parameter cost.
• The option legend takes a boolean, and will display the legend.
hypercube(filename, **kwds)
generate parameter support plots from file written with write_support_file
Available from the command shell as:
or as a function call:
mystic.support.hypercube(filename, **options)
Notes
• The option out takes a string of the filepath for the generated plot.
• The options bounds, axes, and iters all take indicator strings. The bounds should be given as comma-
separated slices. For example, using bounds = "60:105, 0:30, 2.1:2.8" will set the lower
and upper bounds for x to be (60,105), y to be (0,30), and z to be (2.1,2.8). Similarly, axes also accepts
comma-separated groups of ints; however, for axes, each entry indicates which parameters are to be plotted
along each axis – the first group for the x direction, the second for the y direction, and third for z. Thus,
axes = "2 3, 6 7, 10 11" would set 2nd and 3rd parameters along x. Iters also accepts strings
built from comma-separated array slices. For example, iters = ":" will plot all iters in a single plot.
Alternatively, iters = ":2, 2:" will split the iters into two plots, while iters = "0" will only
plot the first iteration.
• The option label takes comma-separated strings. Thus label = "x,y," will place ‘x’ on the x-axis,
‘y’ on the y-axis, and nothing on the z-axis. LaTeX, such as label = "$ h $, $ a$, $ v$" will
label the axes with standard LaTeX math formatting. Note that the leading space is required, while a
trailing space aligns the text with the axis instead of the plot frame.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option scale takes an integer as a grayscale contrast multiplier.
hypercube_measures(filename, **kwds)
generate measure support plots from file written with write_support_file
Available from the command shell as:
or as a function call:
mystic.support.hypercube_measures(filename, **options)
Notes
• The option out takes a string of the filepath for the generated plot.
• The options bounds, axes, weight, and iters all take indicator strings. The bounds should be given
as comma-separated slices. For example, using bounds = "60:105, 0:30, 2.1:2.8" will set
lower and upper bounds for x to be (60,105), y to be (0,30), and z to be (2.1,2.8). Similarly, axes also
accepts comma-separated groups of ints; however, for axes, each entry indicates which parameters are to
be plotted along each axis – the first group for the x direction, the second for the y direction, and third for z.
Thus, axes = "2 3, 6 7, 10 11" would set 2nd and 3rd parameters along x. The corresponding
weights are used to color the measure points, where 1.0 is black and 0.0 is white. For example, using
weight = "0 1, 4 5, 8 9" would use the 0th and 1st parameters to weight x. Iters is also similar,
however only accepts comma-separated ints. Hence, iters = "-1" will plot the last iteration, while
iters = "0, 300, 700" will plot the 0th, 300th, and 700th in three plots.
• The option label takes comma-separated strings. Thus label = "x,y," will place ‘x’ on the x-axis,
‘y’ on the y-axis, and nothing on the z-axis. LaTeX, such as label = "$ h $, $ a$, $ v$" will
label the axes with standard LaTeX math formatting. Note that the leading space is required, while a
trailing space aligns the text with the axis instead of the plot frame.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option scale takes an integer as a grayscale contrast multiplier.
• The option flat takes a boolean, to plot results in a single plot.
Warning: This function is intended to visualize weighted measures (i.e. weights and positions), where the
weights must be normalized (to 1) or an error will be thrown.
or as a function call:
Parameters
• filename (str) – name of the convergence logfile (e.g. paramlog.py)
• datafile (str, default=None) – name of the dataset file (e.g. data.txt)
Returns None
Notes
• The option out takes a string of the filepath for the generated plot.
• The options bounds, dim, and iters all take indicator strings. The bounds should be given as comma-
separated slices. For example, using bounds = ".062:.125, 0:30, 2300:3200" will set lower
and upper bounds for x to be (.062,.125), y to be (0,30), and z to be (2300,3200). If the bounds are not to be strictly enforced, append an asterisk * to the string. The dim (dimensions of the scenario) should be comma-separated ints. For example, dim = "1, 1, 2" will convert the params to a two-member 3-D dataset.
Iters accepts a string built from comma-separated array slices. Thus, iters = ":" will plot all iters
in a single plot. Alternatively, iters = ":2, 2:" will split the iters into two plots, while iters =
"0" will only plot the first iteration.
• The option label takes comma-separated strings. Thus label = "x,y," will place ‘x’ on the x-axis,
‘y’ on the y-axis, and nothing on the z-axis. LaTeX, such as label = "$ h $, $ a$, $ v$" will
label the axes with standard LaTeX math formatting. Note that the leading space is required, while a
trailing space aligns the text with the axis instead of the plot frame.
• The option “filter” is used to select datapoints from a given dataset, and takes comma-separated ints.
• A “mask” is given as comma-separated ints. When the mask has more than one int, the plot will be 2D.
• The option “vertical” will plot the dataset values on the vertical axis; for 2D plots, cones are always plotted
on the vertical axis.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option scale takes an integer as a grayscale contrast multiplier.
• The option gap takes an integer distance from cone center to vertex.
• The option data takes a boolean, to plot legacy data, if provided.
• The option cones takes a boolean, to plot cones, if provided.
• The option flat takes a boolean, to plot results in a single plot.
best_dimensions(n)
get the ‘best’ dimensions (i x j) for arranging plots
Parameters n (int) – number of plots
Returns tuple (i,j) of i rows j columns, where i*j is roughly n
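One plausible way to compute such dimensions (a sketch satisfying the stated "i*j is roughly n" contract, not necessarily mystic's algorithm):

```python
import math

# Arrange roughly-square plot grids: i rows by j columns with i*j >= n.
# A hypothetical implementation of the behavior described above.

def best_dims(n):
    i = math.ceil(math.sqrt(n))  # rows: smallest square-ish height
    j = math.ceil(n / i)         # columns: just enough to fit n plots
    return i, j

print(best_dims(7))   # -> (3, 3)
print(best_dims(12))  # -> (4, 3)
```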
swap(alist, index=None)
swap the selected list element with the last element in alist
Parameters
• alist (list) – a list of objects
• index (int, default=None) – the selected element
dot(a, b, out=None)
Dot product of two arrays. Specifically,
• If both a and b are 1-D arrays, it is inner product of vectors (without complex conjugation).
• If both a and b are 2-D arrays, it is matrix multiplication, but using matmul() or a @ b is preferred.
• If either a or b is 0-D (scalar), it is equivalent to multiply() and using numpy.multiply(a, b)
or a * b is preferred.
• If a is an N-D array and b is a 1-D array, it is a sum product over the last axis of a and b.
• If a is an N-D array and b is an M-D array (where M>=2), it is a sum product over the last axis of a and
the second-to-last axis of b:
Parameters
• a (array_like) – First argument.
• b (array_like) – Second argument.
• out (ndarray, optional) – Output argument. This must have the exact kind that would be re-
turned if it was not used. In particular, it must have the right type, must be C-contiguous, and
its dtype must be the dtype that would be returned for dot(a,b). This is a performance fea-
ture. Therefore, if these conditions are not met, an exception is raised, instead of attempting
to be flexible.
Returns output – Returns the dot product of a and b. If a and b are both scalars or both 1-D arrays
then a scalar is returned; otherwise an array is returned. If out is given, then it is returned.
Return type ndarray
Raises ValueError – If the last dimension of a is not the same size as the second-to-last dimen-
sion of b.
See also:
Examples
>>> np.dot(3, 4)
12
>>> a = np.arange(3*4*5*6).reshape((3,4,5,6))
>>> b = np.arange(3*4*5*6)[::-1].reshape((5,4,6,3))
>>> np.dot(a, b)[2,3,2,1,2,2]
499128
>>> sum(a[2,3,2,:] * b[1,2,:,2])
499128
• out (ndarray, optional) – Alternative output array in which to place the result. It must have
the same shape as the expected output, but the type of the output values will be cast if
necessary.
• keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in
the result as dimensions with size one. With this option, the result will broadcast correctly
against the input array.
If the default value is passed, then keepdims will not be passed through to the sum method
of sub-classes of ndarray, however any non-default value will be. If the sub-class’ method
does not implement keepdims any exceptions will be raised.
• initial (scalar, optional) – Starting value for the sum. See ~numpy.ufunc.reduce for details.
New in version 1.15.0.
Returns sum_along_axis – An array with the same shape as a, with the specified axis removed. If
a is a 0-d array, or if axis is None, a scalar is returned. If an output array is specified, a reference
to out is returned.
Return type ndarray
See also:
mean(), average()
Notes
Arithmetic is modular when using integer types, and no error is raised on overflow.
The sum of an empty array is the neutral element 0:
>>> np.sum([])
0.0
Examples
You can also start the sum with a value other than zero:
>>> np.sum([10], initial=5)
15
transpose(a, axes=None)
Permute the dimensions of an array.
Parameters
• a (array_like) – Input array.
• axes (list of ints, optional) – By default, reverse the dimensions, otherwise permute the axes
according to the values given.
Returns p – a with its axes permuted. A view is returned whenever possible.
Return type ndarray
See also:
moveaxis(), argsort()
Notes
Use transpose(a, argsort(axes)) to invert the transposition of tensors when using the axes keyword argument.
Transposing a 1-D array returns an unchanged view of the original array.
Examples
>>> x = np.arange(4).reshape((2,2))
>>> x
array([[0, 1],
[2, 3]])
>>> np.transpose(x)
array([[0, 2],
[1, 3]])
Additional Inputs:
markers – desired variable name. Default is ‘$’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base.
For example:
>>> variables = ['x1','x2','x3']
>>> constraints = "min(x1*x2) - sin(x3)"
>>> print(replace_variables(constraints, variables, ['x','y','z']))
min(x*y) - sin(z)
get_variables(constraints, variables='x')
extract a list of the string variable names from constraints string
Inputs:
constraints – a string of symbolic constraints, with one constraint equation per line. Constraints can
be equality and/or inequality constraints. Standard python syntax should be followed (with the math
and numpy modules already imported).
For example:
>>> constraints = '''
... x1 + x2 = x3*4
... x3 = x2*x4'''
>>> get_variables(constraints)
['x1', 'x2', 'x3', 'x4']
Additional Inputs:
variables – desired variable name. Default is ‘x’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base, and can include variables that are not found
in the constraints equation string.
For example:
>>> constraints = '''
... y = min(u,v) - z*sin(x)
... z = x**2 + 1.0
... u = v*z'''
merge(*equations, **kwds)
merge bounds in a sequence of equations (e.g. [A<0, A>0] --> [A!=0])
Parameters
• equations (tuple(str)) – a sequence of equations
• inclusive (bool, default=True) – if False, bounds are exclusive
Returns tuple sequence of equations, where the bounds have been merged
Notes
Examples
>>> merge(*['A > 0', 'A > 0', 'B >= 0', 'B <= 0'], inclusive=False)
('A > 0', 'B = 0')
>>> merge(*['A > 0', 'A > 0', 'B >= 0', 'B <= 0'], inclusive=True)
('A > 0',)
Additional Inputs:
variables – desired variable name. Default is ‘x’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base, and can include variables that are not found
in the constraints equation string.
target – list providing the order for which the variables will be solved. If there are “N” constraint
equations, the first “N” variables given will be selected as the dependent variables. By default, in-
creasing order is used.
For example:
Further Inputs:
locals – a dictionary of additional variables used in the symbolic constraints equations, and their de-
sired values.
simplify(constraints, variables='x', target=None, **kwds)
simplify a system of symbolic constraints equations.
Returns a system of equations where a single variable has been isolated on the left-hand side of each constraints
equation, thus all constraints are of the form “x_i = f(x)”.
Inputs:
constraints – a string of symbolic constraints, with one constraint equation per line. Standard python
syntax should be followed (with the math and numpy modules already imported).
For example:
Additional Inputs:
variables – desired variable name. Default is ‘x’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base, and can include variables that are not found
in the constraints equation string.
target – list providing the order for which the variables will be solved. If there are “N” constraint
equations, the first “N” variables given will be selected as the dependent variables. By default, in-
creasing order is used.
Further Inputs:
locals – a dictionary of additional variables used in the symbolic constraints equations, and their de-
sired values.
cycle – boolean to cycle the order for which the variables are solved. If cycle is True, there should be
more variety on the left-hand side of the simplified equations. By default, the variables do not cycle.
all – boolean to return all simplifications due to negative values. When dividing by a possibly negative
variable, an inequality may flip, thus creating alternate simplifications. If all is True, return all possible
simplifications due to negative values in an inequality. The default is False, returning only one possible
simplification.
comparator(equation)
identify the comparator (e.g. '<', '=', ...) in a constraints equation
flip(equation, bounds=False)
flip the inequality in the equation (i.e. '<' to '>'), if one exists
Inputs:
equation – an equation string; can be an equality or inequality
bounds – if True, ensure set boundaries are respected (i.e. '<' to '>=')
_flip(cmp, bounds=False)
flip the comparator (i.e. '<' to '>', or '<' to '>=' if bounds=True)
condense(*equations, **kwds)
condense tuples of equations to the simplest representation
Inputs: equations – tuples of inequalities or equalities
For example:
>>> condense(('C <= 0', 'B <= 0'), ('C <= 0', 'B >= 0'))
[('C <= 0',)]
>>> condense(('C <= 0', 'B <= 0'), ('C >= 0', 'B <= 0'))
[('B <= 0',)]
>>> condense(('C <= 0', 'B <= 0'), ('C >= 0', 'B >= 0'))
[('C <= 0', 'B <= 0'), ('C >= 0', 'B >= 0')]
Additional Inputs: verbose – if True, print diagnostic information. Default is False.
equals(before, after, vals=None, **kwds)
check if equations before and after are equal at the given vals
Inputs:
before – an equation string
after – an equation string
vals – a dict with variable names as keys and floats as values
Additional Inputs:
variables – a list of variable names
locals – a dict with variable names as keys and 'fixed' values
error – if False, ZeroDivisionError evaluates as None
variants – a list of ints to use as variants for fractional powers
penalty_parser(constraints, variables=’x’, nvars=None)
parse symbolic constraints into penalty constraints. Returns a tuple of inequality constraints and a tuple of
equality constraints.
Inputs:
constraints – a string of symbolic constraints, with one constraint equation per line. Constraints can
be equality and/or inequality constraints. Standard python syntax should be followed (with the math
and numpy modules already imported).
For example:
Additional Inputs:
nvars – number of variables. Includes variables not explicitly given by the constraint equations (e.g.
‘x2’ in the example above).
variables – desired variable name. Default is ‘x’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base, and can include variables that are not found
in the constraints equation string.
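Conceptually, each parsed inequality becomes a penalty term that is zero when the constraint is satisfied and positive when it is violated. A minimal sketch of that idea (not mystic's actual parser or penalty machinery):

```python
def violation(lhs_value, rhs_value):
    # sketch: for a constraint of the form lhs >= rhs, the penalty
    # contribution is max(0, rhs - lhs)
    return max(0.0, rhs_value - lhs_value)

# x0 + x1 >= 3 with x = [2, 2] is satisfied: no penalty
assert violation(2 + 2, 3) == 0.0
# with x = [1, 1] it is violated by 1
assert violation(1 + 1, 3) == 1.0
```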
constraints_parser(constraints, variables=’x’, nvars=None)
parse symbolic constraints into a tuple of constraints solver equations. The left-hand side of each constraint
must be simplified to support assignment.
Inputs:
constraints – a string of symbolic constraints, with one constraint equation per line. Constraints can
be equality and/or inequality constraints. Standard python syntax should be followed (with the math
and numpy modules already imported).
For example:
Additional Inputs:
nvars – number of variables. Includes variables not explicitly given by the constraint equations (e.g.
‘x2’ in the example above).
variables – desired variable name. Default is ‘x’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base, and can include variables that are not found
in the constraints equation string.
generate_conditions(constraints, variables=’x’, nvars=None, locals=None)
generate penalty condition functions from a set of constraint strings
Inputs:
constraints – a string of symbolic constraints, with one constraint equation per line. Constraints can
be equality and/or inequality constraints. Standard python syntax should be followed (with the math
and numpy modules already imported).
NOTE: Alternately, constraints may be a tuple of strings of symbolic constraints. Will return a tuple
of penalty condition functions.
For example:
Additional Inputs:
nvars – number of variables. Includes variables not explicitly given by the constraint equations (e.g.
‘x2’ in the example above).
variables – desired variable name. Default is ‘x’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base, and can include variables that are not found
in the constraints equation string.
locals – a dictionary of additional variables used in the symbolic constraints equations, and their de-
sired values. Default is {'tol': 1e-15, 'rel': 1e-15}, where 'tol' and 'rel' are the absolute and relative
difference from the extremal value in a given inequality. For more details, see mystic.math.tolerance.
generate_solvers(constraints, variables=’x’, nvars=None, locals=None)
generate constraints solver functions from a set of constraint strings
Inputs:
constraints – a string of symbolic constraints, with one constraint equation per line. Constraints can
be equality and/or inequality constraints. Standard python syntax should be followed (with the math
and numpy modules already imported). The left-hand side of each equation must be simplified to
support assignment.
NOTE: Alternately, constraints may be a tuple of strings of symbolic constraints. Will return a tuple
of constraint solver functions.
For example:
Additional Inputs:
nvars – number of variables. Includes variables not explicitly given by the constraint equations (e.g.
‘x2’ in the example above).
variables – desired variable name. Default is ‘x’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base, and can include variables that are not found
in the constraints equation string.
locals – a dictionary of additional variables used in the symbolic constraints equations, and their de-
sired values. Default is {'tol': 1e-15, 'rel': 1e-15}, where 'tol' and 'rel' are the absolute and relative
difference from the extremal value in a given inequality. For more details, see mystic.math.tolerance.
generate_penalty(conditions, ptype=None, join=None, **kwds)
converts a penalty constraint function to a mystic.penalty function.
Parameters
• conditions (object) – a penalty constraint function, or list of penalty constraint functions.
• ptype (object, default=None) – a mystic.penalty type, or a list of mystic.penalty
types of the same length as conditions.
• join (object, default=None) – and_ or or_ from mystic.coupler.
• k (int, default=None) – penalty multiplier.
Notes
If join=None, then apply the given penalty constraints iteratively. Otherwise, couple the penalty constraints
with the selected coupler.
Examples
Notes
If join=None, then apply the given constraints iteratively. Otherwise, couple the constraints with the selected
coupler.
Warning: This constraint generator doesn’t check for conflicts in conditions, but simply applies conditions
in the given order. This constraint generator assumes that a single variable has been isolated on the left-hand
side of each constraints equation, thus all constraints are of the form “x_i = f(x)”. This solver picks speed
over robustness, and relies on the user to formulate the constraints so that they do not conflict.
Examples
Standard python math conventions are used. For example, if an int is used in a constraint equation, one or
more variables may evaluate to an int – this can affect the solved values for the variables.
__module__ = 'mystic.termination'
static __new__(self, *args)
Takes one or more termination conditions: args – tuple of termination conditions
Usage:
ChangeOverGeneration(tolerance=1e-06, generations=30)
change in cost is < tolerance over a number of generations:
cost[-g] - cost[-1] <= tolerance, with g=generations
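The stop test above can be sketched directly from the stated formula (a plain-Python illustration, not mystic's implementation):

```python
def change_over_generation(costs, tolerance=1e-6, generations=30):
    # terminate when cost[-g] - cost[-1] <= tolerance, with g = generations
    g = generations
    if len(costs) < g:
        return False
    return (costs[-g] - costs[-1]) <= tolerance

# a cost history that stalls for 30 generations triggers termination
assert change_over_generation([1.0] * 30)
# a still-improving history does not
assert not change_over_generation(list(range(100, 0, -1)), generations=30)
```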
CollapseAs(offset=False, tolerance=0.0001, generations=50, mask=None)
max(pairwise(x)) is < tolerance over a number of generations, and mask is column indices of selected params:
bool(collapse_as(monitor, **kwds))
CollapseAt(target=None, tolerance=0.0001, generations=50, mask=None)
change(x[i]) is < tolerance over a number of generations, where target can be a single value or a list of values
the length of x; change(x[i]) = max(x[i]) - min(x[i]) if target=None, else abs(x[i] - target); and mask is column
indices of selected params:
bool(collapse_at(monitor, **kwds))
CollapseCost(clip=False, limit=1.0, samples=50, mask=None)
cost(x) - min(cost) is >= limit for all samples within an interval, where if clip is True, then clip beyond the
space sampled by the optimizer, and mask is a dict of {index:bounds} where bounds are provided as an interval
(min,max), or a list of intervals:
bool(collapse_cost(monitor, **kwds))
CollapsePosition(tolerance=0.005, generations=50, mask=None, **kwds)
max(pairwise(positions)) < tolerance over a number of generations, where (measures,indices) are (row,column)
indices of selected positions:
bool(collapse_position(monitor, **kwds))
CollapseWeight(tolerance=0.005, generations=50, mask=None, **kwds)
value of weights are < tolerance over a number of generations, where mask is (row,column) indices of the
selected weights:
bool(collapse_weight(monitor, **kwds))
EvaluationLimits(generations=None, evaluations=None)
number of iterations is > generations, or number of function calls is > evaluations:
iterations >= generations or fcalls >= evaluations
GradientNormTolerance(tolerance=1e-05, norm=inf)
gradient norm is < tolerance, given user-supplied norm:
sum( abs(gradient)**norm )**(1.0/norm) <= tolerance
Lnorm(weights, p=1, axis=None)
calculate L-p norm of weights
Parameters
• weights (array(float)) – an array of weights
• p (int, default=1) – the power of the p-norm, where p in [0,inf]
• axis (int, default=None) – axis used to take the norm along
Returns a float distance norm for the weights
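A sketch of the L-p norm computation for finite p (mystic's Lnorm also handles p=0 and p=inf; this illustration assumes 1 <= p < inf):

```python
import numpy as np

def lnorm(weights, p=1, axis=None):
    # sum(|w|**p)**(1/p) along the given axis
    w = np.abs(np.asarray(weights, dtype=float))
    return np.sum(w**p, axis=axis)**(1.0/p)

assert lnorm([3.0, 4.0], p=2) == 5.0   # the familiar Euclidean norm
assert lnorm([1.0, -2.0, 3.0], p=1) == 6.0
```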
NormalizedChangeOverGeneration(tolerance=0.0001, generations=10)
normalized change in cost is < tolerance over number of generations:
(cost[-g] - cost[-1]) / (0.5*(abs(cost[-g]) + abs(cost[-1]))) <= tolerance
__call__(solver, info=False)
check if the termination conditions are satisfied.
Inputs: solver – the solver instance
Additional Inputs: info – if True, return information about the satisfied conditions
__module__ = 'mystic.termination'
static __new__(self, *args)
Takes one or more termination conditions: args – tuple of termination conditions
Usage:
class When
Bases: tuple
provide a termination condition with more reporting options.
Terminates when the given condition is satisfied.
Takes a termination condition: arg – termination condition
Usage:
__call__(solver, info=False)
check if the termination conditions are satisfied.
Inputs: solver – the solver instance
Additional Inputs: info – if True, return information about the satisfied conditions
__module__ = 'mystic.termination'
static __new__(self, arg)
Takes a termination condition: arg – termination condition
Usage:
_inverted(pairs)
return a list of tuples, where each tuple has been reversed
_kdiv(num, denom, type=None)
'special' scalar division for 'k'
_multiply(x, n)
elementwise multiplication of x by n, as if x were an array
_symmetric(pairs)
returns a set of tuples, where each tuple includes its inverse
chain(*decorators)
chain together decorators into a single decorator
For example:
>>> wm = with_mean(5.0)
>>> wv = with_variance(5.0)
>>>
>>> @chain(wm, wv)
... def doit(x):
... return x
...
>>> res = doit([1,2,3,4,5])
>>> mean(res), variance(res)
(5.0, 5.0000000000000018)
interval_overlap(bounds1, bounds2)
find the intersection of intervals in the given bounds
bounds1 and bounds2 are a dict of {index:bounds}, where bounds is a list of tuples [(lo,hi),. . . ]
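A sketch of the interval intersection for the simplest case of one (lo,hi) tuple per index (the real function accepts a list of tuples per index):

```python
def interval_overlap(bounds1, bounds2):
    # intersect the intervals that appear under the same index in both dicts
    out = {}
    for i, (lo1, hi1) in bounds1.items():
        if i in bounds2:
            lo2, hi2 = bounds2[i]
            lo, hi = max(lo1, lo2), min(hi1, hi2)
            if lo <= hi:                # keep only non-empty intersections
                out[i] = (lo, hi)
    return out

assert interval_overlap({0: (0, 5)}, {0: (3, 9)}) == {0: (3, 5)}
assert interval_overlap({0: (0, 1)}, {0: (2, 3)}) == {}
```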
isNull(mon)
isiterable(x)
check if an object is iterable
itertype(x, default=<type 'tuple'>)
get the 'underlying' type used to construct x
list_or_tuple(x)
True if x is a list or a tuple
list_or_tuple_or_ndarray(x)
True if x is a list, tuple, or a ndarray
listify(x)
recursively convert all members of a sequence to a list
masked(mask=None)
generate a masked function, given a function and mask provided
mask should be a dictionary of the positional index and a value (e.g. {0:1.0}), where keys must be integers, and
values can be any object (typically a float).
functions are expected to take a single argument, an n-dimensional list or array, where the mask will be applied to
the input array. Hence, instead of masking the inputs, the function is "masked". Conceptually, f(mask(x)) ==>
f'(x), instead of f(mask(x)) ==> f(x').
For example:
>>> @masked({0:10,3:-1})
... def same(x):
... return x
...
>>> same([1,2,3])
[10, 1, 2, -1, 3]
>>>
>>> @masked({0:10,3:-1})
... def foo(x):
...     w,x,y,z = x  # requires a length-4 sequence
...     return w+x+y+z
...
>>> foo([-5,2])  # produces [10,-5,2,-1]
6
measure_indices(npts)
get the indices corresponding to weights and to positions
multiply(x, n, type=<type 'list'>, recurse=False)
multiply: recursive elementwise casting multiply of x by n
pairwise(x, indices=False)
convert an array of positions to an array of pairwise distances
if indices=True, also return indices to relate input and output arrays
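A sketch of the pairwise-distance idea for scalar positions (mystic's version works on position arrays and can also return the relating indices):

```python
from itertools import combinations

def pairwise(x, indices=False):
    # distances between every unordered pair of positions in x
    pairs = list(combinations(range(len(x)), 2))
    dists = [abs(x[i] - x[j]) for i, j in pairs]
    return (dists, pairs) if indices else dists

assert pairwise([0.0, 1.0, 3.0]) == [1.0, 3.0, 2.0]
dists, idx = pairwise([0.0, 1.0, 3.0], indices=True)
assert idx == [(0, 1), (0, 2), (1, 2)]
```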
partial(mask)
generate a function, where some input has fixed values
mask should be a dictionary of the positional index and a value (e.g. {0:1.0}), where keys must be integers, and
values can be any object (typically a float).
functions are expected to take a single argument, an n-dimensional list or array, where the mask will be applied
to the input array.
For example:
>>> @partial({0:10,3:-1})
... def same(x):
... return x
...
>>> same([-5,9])
[10, 9]
>>> same([0,1,2,3,4])
[10, 1, 2, -1, 4]
class permutations
Bases: object
permutations(iterable[, r]) --> permutations object
Return successive r-length permutations of elements in the iterable.
permutations(range(3), 2) --> (0,1), (0,2), (1,0), (1,2), (2,0), (2,1)
__getattribute__
x.__getattribute__('name') <==> x.name
__iter__
__new__(S, ...) → a new object with type S, a subtype of T
next
random_seed(s=None)
sets the seed for calls to ‘random()’
random_state(module='random', new=False, seed='!')
return a (optionally manually seeded) random generator
For a given module, return an object that has random number generation (RNG) methods available. If
new=False, use the global copy of the RNG object. If seed='!', do not reseed the RNG (using seed=None
'removes' any seeding). If seed='*', use a seed that depends on the process id (PID); this is useful for building
RNGs that are different across multiple threads or processes.
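The reproducibility this enables can be sketched with the stdlib random module (the default module='random'); random_state is assumed to wrap behavior like this:

```python
import random

rng_a = random.Random(17)   # a new, manually seeded generator
rng_b = random.Random(17)   # a second generator with the same seed

# identical seeds yield identical streams, so runs are reproducible
a = [rng_a.random() for _ in range(3)]
b = [rng_b.random() for _ in range(3)]
assert a == b
```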
reduced(reducer=None, arraylike=False)
apply a reducer function to reduce output to a single value
For example:
select_params(params, index)
get params for the given indices as a tuple of index,values
solver_bounds(solver)
return a dict {index:bounds} of tightest bounds defined for the solver
suppress(x, tol=1e-08, clip=True)
suppress small values less than tol
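The thresholding behavior can be sketched as follows (a simplified illustration; mystic's suppress also supports clip semantics for weights):

```python
def suppress(x, tol=1e-8):
    # zero-out entries whose magnitude falls below tol
    return [0.0 if abs(i) < tol else i for i in x]

assert suppress([1e-10, 0.5, -1e-9, 2.0]) == [0.0, 0.5, 0.0, 2.0]
```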
suppressed(tol=1e-08, exit=False, clip=True)
generate a function, where values less than tol are suppressed
For example:
>>> @suppressed(1e-8)
... def square(x):
...     return [i**2 for i in x]
synchronized(mask)
generate a function, where some input tracks another input
mask should be a dictionary of positional index and tracked index (e.g. {0:1}), where keys and values should
be different integers. However, if a tuple is provided instead of the tracked index (e.g. {0:(1,lambda x:2*x)} or
{0:(1,2)}), the second member of the tuple will be used to scale the tracked index.
functions are expected to take a single argument, an n-dimensional list or array, where the mask will be applied
to the input array.
operations within a single mask are unordered. If a specific ordering of operations is required, apply multiple
masks in the desired order.
For example:
>>> @synchronized({0:1,3:-1})
... def same(x):
... return x
...
>>> same([-5,9])
[9, 9]
>>> same([0,1,2,3,4])
[1, 1, 2, 4, 4]
>>> same([0,9,2,3,6])
[9, 9, 2, 6, 6]
>>>
>>> @synchronized({0:(1,lambda x:1/x),3:(1,-1)})
... def doit(x):
... return x
...
>>> doit([-5.,9.])
[0.1111111111111111, 9.0]
>>> doit([0.,1.,2.,3.,4.])
[1.0, 1.0, 2.0, -1.0, 4.0]
>>> doit([0.,9.,2.,3.,6.])
[0.1111111111111111, 9.0, 2.0, -9.0, 6.0]
>>>
>>> @synchronized({1:2})
... @synchronized({0:1})
... def invert(x):
... return [-i for i in x]
...
unpair(pairs)
convert a 1D array of N pairs to two 1D arrays of N values
For example:
>>> unpair([(a0,b0),(a1,b1),(a2,b2)])
[a0,a1,a2],[b0,b1,b2]
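The symbolic example above can be made concrete (a plain-Python sketch of the same operation):

```python
def unpair(pairs):
    # split N (a, b) pairs into two N-length lists
    a, b = zip(*pairs)
    return list(a), list(b)

assert unpair([(1, 4), (2, 5), (3, 6)]) == ([1, 2, 3], [4, 5, 6])
```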
Examples
generate cost convergence rate plots from file written with write_support_file
Available from the command shell as:
or as a function call:
mystic.collapse_plotter(filename, **options)
Notes
• The option dots takes a boolean, and will show data points in the plot.
• The option linear takes a boolean, and will plot in a linear scale.
• The option out takes a string of the filepath for the generated plot.
• The option iter takes an integer of the largest iteration to plot.
• The option label takes a label string. For example, label = "y" will label the plot with a ‘y’, while label
= " log-cost, $ log_{10}(\hat{P} - \hat{P}_{max})$" will label the y-axis with standard
LaTeX math formatting. Note that the leading space is required, and that the text is aligned along the axis.
• The option col takes a string of comma-separated integers indicating iteration numbers where parameter collapse
has occurred. If a second set of integers is provided (delineated by a semicolon), the additional set of integers
will be plotted with a different linestyle (to indicate a different type of collapse).
or as a function call:
mystic.log_reader(filename, **options)
Notes
• The option out takes a string of the filepath for the generated plot.
• The option dots takes a boolean, and will show data points in the plot.
• The option line takes a boolean, and will connect the data with a line.
• The option iter takes an integer of the largest iteration to plot.
• The option legend takes a boolean, and will display the legend.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option param takes an indicator string. The indicator string is built from comma-separated array slices.
For example, params = ":" will plot all parameters. Alternatively, params = ":2, 3:" will plot all
parameters except for the third parameter, while params = "0" will only plot the first parameter.
generate surface contour plots for model, specified by full import path; and generate model trajectory from logfile (or
solver restart file), if provided
Available from the command shell as:
or as a function call:
Parameters
• model (str) – full import path for the model (e.g. mystic.models.rosen)
• logfile (str, default=None) – name of convergence logfile (e.g. log.txt)
returns None
Notes
• The option out takes a string of the filepath for the generated plot.
• The option bounds takes an indicator string, where bounds are given as comma-separated slices. For example,
using bounds = "-1:10, 0:20" will set lower and upper bounds for x to be (-1,10) and y to be (0,20).
The "step" can also be given, to control the number of lines plotted in the grid. Thus "-1:10:.1, 0:20" sets
the bounds as above, but uses increments of .1 along x and the default step along y. For models > 2D, the bounds
can be used to specify 2 dimensions plus fixed values for remaining dimensions. Thus, "-1:10, 0:20, 1.0"
plots the 2D surface where the z-axis is fixed at z=1.0. When called from a script, slice objects can be used
instead of a string, thus "-1:10:.1, 0:20, 1.0" becomes (slice(-1,10,.1), slice(20), 1.0).
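The indicator-string convention described above might be parsed roughly as follows (a hypothetical helper, not mystic's implementation; parse_bounds is an invented name):

```python
def parse_bounds(spec):
    # "-1:10:.1, 0:20, 1.0" -> [slice(-1,10,.1), slice(0,20), 1.0]
    out = []
    for part in spec.split(','):
        fields = part.split(':')
        if len(fields) == 1:
            out.append(float(fields[0]))   # a fixed value for that axis
        else:
            out.append(slice(*(float(f) for f in fields)))
    return out

b = parse_bounds("-1:10:.1, 0:20, 1.0")
assert b[0] == slice(-1.0, 10.0, 0.1)
assert b[1] == slice(0.0, 20.0)
assert b[2] == 1.0
```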
• The option label takes comma-separated strings. For example, label = "x,y," will place ‘x’ on the x-axis,
‘y’ on the y-axis, and nothing on the z-axis. LaTeX is also accepted. For example, label = "$ h $, $
{\alpha}$, $ v$" will label the axes with standard LaTeX math formatting. Note that the leading space
is required, while a trailing space aligns the text with the axis instead of the plot frame.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option iter takes an integer of the largest iteration to plot.
• The option reduce can be given to reduce the output of a model to a scalar, thus converting model(params)
to reduce(model(params)). A reducer is given by the import path (e.g. numpy.add).
• The option scale will convert the plot to log-scale, and scale the cost by z=log(4*z*scale+1)+2. This is
useful for visualizing small contour changes around the minimum.
• If using log-scale produces negative numbers, the option shift can be used to shift the cost by z=z+shift.
Both shift and scale are intended to help visualize contours.
• The option fill takes a boolean, to plot using filled contours.
• The option depth takes a boolean, to plot contours in 3D.
• The option dots takes a boolean, to show trajectory points in the plot.
• The option join takes a boolean, to connect trajectory points.
• The option verb takes a boolean, to print the model documentation.
or as a function call:
mystic.support.convergence(filename, **options)
Notes
• The option out takes a string of the filepath for the generated plot.
• The option iter takes an integer of the largest iteration to plot.
• The option param takes an indicator string. The indicator string is built from comma-separated array slices.
For example, params = ":" will plot all parameters in a single plot. Alternatively, params = ":2, 2:"
will split the parameters into two plots, and params = "0" will only plot the first parameter.
• The option label takes comma-separated strings. For example, label = "x,y," will label the y-axis of the
first plot with ‘x’, a second plot with ‘y’, and not add a label to a third or subsequent plots. If more labels are
given than plots, then the last label will be used for the y-axis of the ‘cost’ plot. LaTeX is also accepted. For
example, label = "$ h$, $ a$, $ v$" will label the axes with standard LaTeX math formatting. Note
that the leading space is required, and the text is aligned along the axis.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option cost takes a boolean, and will also plot the parameter cost.
• The option legend takes a boolean, and will display the legend.
or as a function call:
mystic.support.hypercube(filename, **options)
Notes
• The option out takes a string of the filepath for the generated plot.
• The options bounds, axes, and iters all take indicator strings. The bounds should be given as comma-separated
slices. For example, using bounds = "60:105, 0:30, 2.1:2.8" will set the lower and upper bounds
for x to be (60,105), y to be (0,30), and z to be (2.1,2.8). Similarly, axes also accepts comma-separated groups
of ints; however, for axes, each entry indicates which parameters are to be plotted along each axis – the first
group for the x direction, the second for the y direction, and third for z. Thus, axes = "2 3, 6 7, 10
11" would set 2nd and 3rd parameters along x. Iters also accepts strings built from comma-separated array
slices. For example, iters = ":" will plot all iters in a single plot. Alternatively, iters = ":2, 2:"
will split the iters into two plots, while iters = "0" will only plot the first iteration.
• The option label takes comma-separated strings. Thus label = "x,y," will place ‘x’ on the x-axis, ‘y’ on
the y-axis, and nothing on the z-axis. LaTeX, such as label = "$ h $, $ a$, $ v$" will label the
axes with standard LaTeX math formatting. Note that the leading space is required, while a trailing space aligns
the text with the axis instead of the plot frame.
• The option nid takes an integer of the nth simultaneous points to plot.
or as a function call:
mystic.support.hypercube_measures(filename, **options)
Notes
• The option out takes a string of the filepath for the generated plot.
• The options bounds, axes, weight, and iters all take indicator strings. The bounds should be given as comma-
separated slices. For example, using bounds = "60:105, 0:30, 2.1:2.8" will set lower and upper
bounds for x to be (60,105), y to be (0,30), and z to be (2.1,2.8). Similarly, axes also accepts comma-separated
groups of ints; however, for axes, each entry indicates which parameters are to be plotted along each axis – the
first group for the x direction, the second for the y direction, and third for z. Thus, axes = "2 3, 6 7,
10 11" would set 2nd and 3rd parameters along x. The corresponding weights are used to color the measure
points, where 1.0 is black and 0.0 is white. For example, using weight = "0 1, 4 5, 8 9" would use
the 0th and 1st parameters to weight x. Iters is also similar, however only accepts comma-separated ints. Hence,
iters = "-1" will plot the last iteration, while iters = "0, 300, 700" will plot the 0th, 300th, and
700th in three plots.
• The option label takes comma-separated strings. Thus label = "x,y," will place ‘x’ on the x-axis, ‘y’ on
the y-axis, and nothing on the z-axis. LaTeX, such as label = "$ h $, $ a$, $ v$" will label the
axes with standard LaTeX math formatting. Note that the leading space is required, while a trailing space aligns
the text with the axis instead of the plot frame.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option scale takes an integer as a grayscale contrast multiplier.
• The option flat takes a boolean, to plot results in a single plot.
Warning: This function is intended to visualize weighted measures (i.e. weights and positions), where the weights
must be normalized (to 1) or an error will be thrown.
generate scenario support plots from file written with write_support_file; and generate legacy data and cones
from a dataset file, if provided
or as a function call:
Parameters
• filename (str) – name of the convergence logfile (e.g. paramlog.py)
• datafile (str, default=None) – name of the dataset file (e.g. data.txt)
returns None
Notes
• The option out takes a string of the filepath for the generated plot.
• The options bounds, dim, and iters all take indicator strings. The bounds should be given as comma-separated
slices. For example, using bounds = ".062:.125, 0:30, 2300:3200" will set lower and upper
bounds for x to be (.062,.125), y to be (0,30), and z to be (2300,3200). If the bounds are not to be strictly
enforced, append an asterisk * to the string. The dim (dimensions of the scenario) should be comma-separated
ints. For example, dim = "1, 1, 2" will convert the params to a two-member 3-D dataset. Iters accepts
a string built from comma-separated array slices. Thus, iters = ":" will plot all iters in a single plot.
Alternatively, iters = ":2, 2:" will split the iters into two plots, while iters = "0" will only plot
the first iteration.
• The option label takes comma-separated strings. Thus label = "x,y," will place ‘x’ on the x-axis, ‘y’ on
the y-axis, and nothing on the z-axis. LaTeX, such as label = "$ h $, $ a$, $ v$" will label the
axes with standard LaTeX math formatting. Note that the leading space is required, while a trailing space aligns
the text with the axis instead of the plot frame.
• The option “filter” is used to select datapoints from a given dataset, and takes comma-separated ints.
• A “mask” is given as comma-separated ints. When the mask has more than one int, the plot will be 2D.
• The option “vertical” will plot the dataset values on the vertical axis; for 2D plots, cones are always plotted on
the vertical axis.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option scale takes an integer as a grayscale contrast multiplier.
• The option gap takes an integer distance from cone center to vertex.
• The option data takes a boolean, to plot legacy data, if provided.
• The option cones takes a boolean, to plot cones, if provided.
• The option flat takes a boolean, to plot results in a single plot.
_
_mystic_collapse_plotter, 193
_mystic_log_reader, 194
_mystic_model_plotter, 194
_support_convergence, 195
_support_hypercube, 196
_support_hypercube_measures, 197
_support_hypercube_scenario, 197
a
mystic.abstract_ensemble_solver, 9
mystic.abstract_launcher, 13
mystic.abstract_map_solver, 15
mystic.abstract_solver, 18
c
mystic.cache, 25
mystic.collapse, 25
mystic.constraints, 26
mystic.coupler, 33
d
mystic.differential_evolution, 35
e
mystic.ensemble, 41
f
mystic.filters, 46
mystic.forward_model, 47
h
mystic.helputil, 50
l
mystic.linesearch, 50
m
mystic, ??
mystic.mask, 51
mystic.math, 51
mystic.math.approx, 53
mystic.math.compressed, 54
mystic.math.discrete, 55
mystic.math.distance, 71
mystic.math.grid, 75
mystic.math.integrate, 76
mystic.math.legacydata, 76
mystic.math.measures, 81
mystic.math.poly, 93
mystic.math.samples, 93
mystic.math.stats, 94
mystic.metropolis, 94
mystic.models, 95
mystic.models.abstract_model, 101
mystic.models.br8, 102
mystic.models.circle, 103
mystic.models.dejong, 103
mystic.models.functions, 107
mystic.models.lorentzian, 111
mystic.models.mogi, 112
mystic.models.nag, 112
mystic.models.pohlheim, 113
mystic.models.poly, 120
mystic.models.schittkowski, 121
mystic.models.storn, 122
mystic.models.venkataraman, 125
mystic.models.wavy, 126
mystic.models.wolfram, 127
mystic.monitors, 128
mystic.munge, 133
p
mystic.penalty, 134
mystic.pools, 136
mystic.python_map, 138
s
mystic.scemtools, 138
mystic.scipy_optimize, 140
mystic.scripts, 145
mystic.search, 147
mystic.solvers, 149
mystic.strategy, 163
mystic.support, 164
mystic.svc, 168
mystic.svr, 172
mystic.symbolic, 173
t
mystic.termination, 181
mystic.tools, 184
Index
Best1Bin() (in module mystic.strategy), 163
Best1Exp() (in module mystic.strategy), 163
Best2Bin() (in module mystic.strategy), 163
Best2Exp() (in module mystic.strategy), 163
best_dimensions() (in module mystic.support), 167
bestEnergy (AbstractSolver attribute), 24
bestSolution (AbstractSolver attribute), 24
BevingtonDecay (class in mystic.models.br8), 102
Bias() (in module mystic.svc), 168
Bias() (in module mystic.svr), 173
binary() (in module mystic.math.compressed), 54
binary2coords() (in module mystic.math.compressed), 55
bounded() (in module mystic.constraints), 29
bounded_mean() (in module mystic.math.discrete), 55
Branins (class in mystic.models.pohlheim), 114
branins() (in module mystic.models), 97
branins() (in module mystic.models.functions), 107
buckshot() (in module mystic.ensemble), 44
buckshot() (in module mystic.solvers), 154
BuckshotSolver (class in mystic.ensemble), 42
BuckshotSolver (class in mystic.solvers), 149

C
CandidateRelativeTolerance() (in module mystic.termination), 181
cdf_factory() (in module mystic.math.stats), 94
center_mass (measure attribute), 57
center_mass (product_measure attribute), 60
chain() (in module mystic.tools), 186
ChangeOverGeneration() (in module mystic.termination), 181
Chebyshev (class in mystic.models.poly), 120
chebyshev() (in module mystic.math.distance), 71
chebyshev16cost() (in module mystic.models.poly), 121
chebyshev2cost() (in module mystic.models.poly), 121
chebyshev4cost() (in module mystic.models.poly), 121
chebyshev6cost() (in module mystic.models.poly), 121
chebyshev8cost() (in module mystic.models.poly), 121
chebyshevcostfactory() (in module mystic.models.poly), 121
Circle (class in mystic.models.circle), 103
citation() (in module mystic), 5
clear() (AbstractWorkerPool method), 15
clear() (SerialPool method), 137
clipped() (in module mystic.tools), 186
close() (SerialPool method), 137
Collapse() (AbstractSolver method), 20
collapse_as() (in module mystic.collapse), 25
collapse_at() (in module mystic.collapse), 25
collapse_cost() (in module mystic.collapse), 25
collapse_plotter() (in module mystic), 7
collapse_plotter() (in module mystic.scripts), 146
collapse_position() (in module mystic.collapse), 26
collapse_weight() (in module mystic.collapse), 26
CollapseAs() (in module mystic.termination), 182
CollapseAt() (in module mystic.termination), 182
CollapseCost() (in module mystic.termination), 182
Collapsed() (AbstractSolver method), 20
collapsed() (in module mystic.collapse), 26
CollapsePosition() (in module mystic.termination), 182
CollapseWeight() (in module mystic.termination), 182
collisions (dataset attribute), 77
collisions() (datapoint method), 76
commandfy() (in module mystic.helputil), 50
commandstring() (in module mystic.helputil), 50
comparator() (in module mystic.symbolic), 177
compose() (in module mystic.math.discrete), 55
condense() (in module mystic.symbolic), 177
conflicts (dataset attribute), 77
conflicts() (datapoint method), 76
connected() (in module mystic.tools), 186
constraints_parser() (in module mystic.symbolic), 178
contains() (lipschitzcone method), 80
converge_to_support() (in module mystic.munge), 133
converge_to_support_converter() (in module mystic.munge), 133
convergence() (in module mystic.support), 164
Coordinates() (Searcher method), 147
coords (dataset attribute), 77
Corana (class in mystic.models.storn), 122
corana() (in module mystic.models), 97
corana() (in module mystic.models.functions), 107
CosineKernel() (in module mystic.svr), 173
cost() (Chebyshev method), 121
CostFactory (class in mystic.forward_model), 47
CostFactory() (AbstractModel method), 96, 102
CostFactory() (BevingtonDecay method), 102
CostFactory() (Chebyshev method), 120
CostFactory() (Circle method), 103
CostFactory() (Mogi method), 112
CostFactory2() (AbstractModel method), 97, 102
CostFactory2() (BevingtonDecay method), 102
CostFactory2() (Chebyshev method), 121
CostFactory2() (Circle method), 103
CostFactory2() (Mogi method), 112
CustomMonitor() (in module mystic.monitors), 132

D
datapoint (class in mystic.math.legacydata), 76
dataset (class in mystic.math.legacydata), 77
decompose() (in module mystic.math.discrete), 55
derivative() (Rosenbrock method), 105
DifferentialEvolutionSolver (class in mystic.differential_evolution), 37
DifferentialEvolutionSolver (class in mystic.solvers), 150
DifferentialEvolutionSolver2 (class in mystic.differential_evolution), 38
X
x (Monitor attribute), 132
x (Null attribute), 130
Y
y (Monitor attribute), 132
y (Null attribute), 130
Z
Zimmermann (class in mystic.models.storn), 124
zimmermann() (in module mystic.models), 101
zimmermann() (in module mystic.models.functions), 111