
mystic Documentation

Release 0.3.3.dev0

Mike McKerns

Oct 26, 2018


Contents:

1 mystic: highly-constrained non-convex optimization and uncertainty quantification
1.1 About Mystic
1.2 Major Features
1.3 Current Release
1.4 Development Version
1.5 Installation
1.6 Requirements
1.7 More Information
1.8 Citation

2 mystic module documentation
2.1 abstract_ensemble_solver module
2.2 abstract_launcher module
2.3 abstract_map_solver module
2.4 abstract_solver module
2.5 cache module
2.6 collapse module
2.7 constraints module
2.8 coupler module
2.9 differential_evolution module
2.10 ensemble module
2.11 filters module
2.12 forward_model module
2.13 helputil module
2.14 linesearch module
2.15 mask module
2.16 math module
2.17 metropolis module
2.18 models module
2.19 monitors module
2.20 munge module
2.21 penalty module
2.22 pools module
2.23 python_map module
2.24 scemtools module
2.25 scipy_optimize module
2.26 search module
2.27 solvers module
2.28 strategy module
2.29 support module
2.30 svc module
2.31 svr module
2.32 symbolic module
2.33 termination module
2.34 tools module

3 mystic scripts documentation
3.1 mystic_collapse_plotter script
3.2 mystic_log_reader script
3.3 mystic_model_plotter script
3.4 support_convergence script
3.5 support_hypercube script
3.6 support_hypercube_measures script
3.7 support_hypercube_scenario script

4 Indices and tables

Python Module Index

CHAPTER 1

mystic: highly-constrained non-convex optimization and uncertainty quantification

1.1 About Mystic

The mystic framework provides a collection of optimization algorithms and tools that allows the user to more
robustly (and easily) solve hard optimization problems. All optimization algorithms included in mystic provide
workflow at the fitting layer, not just access to the algorithms as function calls. mystic gives the user fine-grained
power to both monitor and steer optimizations as the fit processes are running. Optimizers can advance one iteration
with Step, or run to completion with Solve. Users can customize optimizer stop conditions, where both compound
and user-provided conditions may be used. Optimizers can save state, can be reconfigured dynamically, and can be
restarted from a saved solver or from a results file. All solvers can also leverage parallel computing, either within each
iteration or as an ensemble of solvers.
Where possible, mystic optimizers share a common interface, and thus can be easily swapped without the user
having to write any new code. mystic solvers all conform to a solver API, thus also have common method calls to
configure and launch an optimization job. For more details, see mystic.abstract_solver. The API also makes
it easy to bind a favorite 3rd party solver into the mystic framework.
Optimization algorithms in mystic can accept parameter constraints, either in the form of penalties (which “penalize”
regions of solution space that violate the constraints), or as constraints (which “constrain” the solver to only search
in regions of solution space where the constraints are respected), or both. mystic provides a large selection of
constraints, including probabilistic and dimensionally reducing constraints. By providing a robust interface designed to
enable the user to easily configure and control solvers, mystic greatly reduces the barrier to solving hard optimization
problems.
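A rough sketch of this workflow, using the bundled rosen test model (the solver, termination, and monitor choices here are illustrative, not prescriptive):

from mystic.solvers import NelderMeadSimplexSolver   # any mystic solver can be swapped in
from mystic.termination import VTR
from mystic.monitors import VerboseMonitor
from mystic.models import rosen

solver = NelderMeadSimplexSolver(3)
solver.SetInitialPoints([0.8, 1.2, 0.7])
solver.SetGenerationMonitor(VerboseMonitor(10))  # print progress every 10 iterations
solver.SetTermination(VTR(1e-7))                 # stop when the cost drops below 1e-7
solver.Solve(rosen)                              # or advance manually with solver.Step(rosen)
print(solver.bestSolution)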
mystic is in active development, so any user feedback, bug reports, comments, or suggestions are highly appreciated.
A list of known issues is maintained at http://trac.mystic.cacr.caltech.edu/project/mystic/query.html, with a public
ticket list at https://github.com/uqfoundation/mystic/issues.

1.2 Major Features

mystic provides a stock set of configurable, controllable solvers with:


• a common interface
• a control handler with: pause, continue, exit, and callback
• ease in selecting initial population conditions: guess, random, etc
• ease in checkpointing and restarting from a log or saved state
• the ability to leverage parallel & distributed computing
• the ability to apply a selection of logging and/or verbose monitors
• the ability to configure solver-independent termination conditions
• the ability to impose custom and user-defined penalties and constraints
To get up and running quickly, mystic also provides infrastructure to:
• easily generate a model (several standard test models are included)
• configure and auto-generate a cost function from a model
• configure an ensemble of solvers to perform a specific task

1.3 Current Release

This documentation is for version mystic-0.3.3.dev0.


The latest released version of mystic is available from:
https://pypi.org/project/mystic
mystic is distributed under a 3-clause BSD license.

>>> import mystic
>>> print(mystic.license())

1.4 Development Version

You can get the latest development version with all the shiny new features at:
https://github.com/uqfoundation
If you have a new contribution, please submit a pull request.

1.5 Installation

mystic is packaged to install from source, so you must download the tarball, unzip, and run the installer:

[download]
$ tar -xvzf mystic-0.3.2.tar.gz
$ cd mystic-0.3.2
$ python setup.py build
$ python setup.py install


You will be warned of any missing dependencies and/or settings after you run the “build” step above. mystic
depends on dill, numpy and sympy, so you should install them first. There are several functions within mystic
where scipy is used if it is available; however, scipy is an optional dependency. Having matplotlib installed
is necessary for running several of the examples, and you should probably go get it even though it’s not required.
matplotlib is required for results visualization available in the scripts packaged with mystic.
Alternately, mystic can be installed with pip or easy_install:

$ pip install mystic

1.6 Requirements

mystic requires:
• python, version >= 2.6 or version >= 3.1, or pypy
• numpy, version >= 1.0
• sympy, version >= 0.6.7
• dill, version >= 0.2.8.2
• klepto, version >= 0.1.5.2
Optional requirements:
• setuptools, version >= 0.6
• matplotlib, version >= 0.91
• scipy, version >= 0.6.0
• pathos, version >= 0.2.2.1
• pyina, version >= 0.2.0

1.7 More Information

Probably the best way to get started is to look at the documentation at http://mystic.rtfd.io. Also see mystic.tests
for a set of scripts that demonstrate several of the many features of the mystic framework. You can run the test suite
with python -m mystic.tests. There are several plotting scripts that are installed with mystic, the primary
ones being mystic_log_reader (also available with python -m mystic) and mystic_model_plotter
(also available with python -m mystic.models). There are several other plotting scripts that come with
mystic, and they are detailed elsewhere in the documentation. See mystic.examples for examples that demon-
strate the basic use cases for configuration and launching of optimization jobs using one of the sample models provided
in mystic.models. Many of the included examples are standard optimization test problems. The use of constraints
and penalties are detailed in mystic.examples2, while more advanced features leveraging ensemble solvers and
dimensional collapse are found in mystic.examples3. The scripts in mystic.examples4 demonstrate lever-
aging pathos for parallel computing, as well as demonstrate some auto-partitioning schemes. mystic has the
ability to work in product measure space, and the scripts in mystic.examples5 show how to work with product
measures. The source code is generally well documented, so further questions may be resolved by inspecting the code
itself. Please feel free to submit a ticket on github, or ask a question on stackoverflow (@Mike McKerns). If you
would like to share how you use mystic in your work, please send an email (to mmckerns at uqfoundation dot
org).
Instructions on building a new model are in mystic.models.abstract_model. mystic provides base classes
for two types of models:


• AbstractFunction [evaluates f(x) for given evaluation points x]
• AbstractModel [generates f(x,p) for given coefficients p]
mystic also provides some convenience functions to help you build a model instance and a cost function instance
on-the-fly. For more information, see mystic.forward_model. It is, however, not necessary to use base classes
or the model builder in building your own model or cost function, as any standard python function can be used as long
as it meets the basic AbstractFunction interface of cost = f(x).
All mystic solvers are highly configurable, and provide a robust set of methods to help customize the solver for
your particular optimization problem. For each solver, a minimal (scipy.optimize) interface is also provided for
users who prefer to configure and launch their solvers as a single function call. For more information, see mystic.
abstract_solver for the solver API, and each of the individual solvers for their minimal functional interface.
mystic enables solvers to use parallel computing whenever the user provides a replacement for the (serial)
python map function. mystic includes a sample map in mystic.python_map that mirrors the behavior
of the built-in python map, and a pool in mystic.pools that provides map functions using the pathos
(i.e. multiprocessing) interface. mystic solvers are designed to utilize distributed and parallel tools pro-
vided by the pathos package. For more information, see mystic.abstract_map_solver, mystic.
abstract_ensemble_solver, and the pathos documentation at http://dev.danse.us/trac/pathos.
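As a rough illustration (assuming pathos is installed; the solver and settings are arbitrary), a parallel map can be handed to an ensemble solver through SetMapper:

from pathos.pools import ProcessPool
from mystic.solvers import BuckshotSolver, PowellDirectionalSolver
from mystic.termination import VTR
from mystic.models import rosen

solver = BuckshotSolver(3, 8)                  # 3 dimensions, 8 solver instances
solver.SetNestedSolver(PowellDirectionalSolver)
solver.SetStrictRanges([0.0]*3, [2.0]*3)
solver.SetMapper(ProcessPool(nodes=4).map)     # replaces the default serial python_map
solver.SetTermination(VTR(1e-6))
solver.Solve(rosen)
print(solver.bestSolution)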
Important classes and functions are found here:
• mystic.solvers [solver optimization algorithms]
• mystic.termination [solver termination conditions]
• mystic.strategy [solver population mutation strategies]
• mystic.monitors [optimization monitors]
• mystic.symbolic [symbolic math in constraints]
• mystic.constraints [constraints functions]
• mystic.penalty [penalty functions]
• mystic.collapse [checks for dimensional collapse]
• mystic.coupler [decorators for function coupling]
• mystic.pools [parallel worker pool interface]
• mystic.munge [file readers and writers]
• mystic.scripts [model and convergence plotting]
• mystic.support [hypercube measure support plotting]
• mystic.forward_model [cost function generator]
• mystic.tools [constraints, wrappers, and other tools]
• mystic.cache [results caching and archiving]
• mystic.models [models and test functions]
• mystic.math [mathematical functions and tools]
Important functions within mystic.math are found here:
• mystic.math.Distribution [a sampling distribution object]
• mystic.math.legacydata [classes for legacy data observations]
• mystic.math.discrete [classes for discrete measures]
• mystic.math.measures [tools to support discrete measures]

• mystic.math.approx [tools for measuring equality]
• mystic.math.grid [tools for generating points on a grid]
• mystic.math.distance [tools for measuring distance and norms]
• mystic.math.poly [tools for polynomial functions]
• mystic.math.samples [tools related to sampling]
• mystic.math.integrate [tools related to integration]
• mystic.math.stats [tools related to distributions]
Solver and model API definitions are found here:
• mystic.abstract_solver [the solver API definition]
• mystic.abstract_map_solver [the parallel solver API]
• mystic.abstract_ensemble_solver [the ensemble solver API]
• mystic.models.abstract_model [the model API definition]
mystic also provides several convenience scripts that are used to visualize models, convergence, and support on the
hypercube. These scripts are installed to a directory on the user’s $PATH, and thus can be run from anywhere:
• mystic_log_reader [parameter and cost convergence]
• mystic_collapse_plotter [convergence and dimensional collapse]
• mystic_model_plotter [model surfaces and solver trajectory]
• support_convergence [convergence plots for measures]
• support_hypercube [parameter support on the hypercube]
• support_hypercube_measures [measure support on the hypercube]
• support_hypercube_scenario [scenario support on the hypercube]
Typing --help as an argument to any of the above scripts will print out an instructive help message.

1.8 Citation

If you use mystic to do research that leads to publication, we ask that you acknowledge use of mystic by citing
the following in your publication:

M.M. McKerns, L. Strand, T. Sullivan, A. Fang, M.A.G. Aivazis,
"Building a framework for predictive science", Proceedings of
the 10th Python in Science Conference, 2011;
http://arxiv.org/pdf/1202.1056

Michael McKerns, Patrick Hung, and Michael Aivazis,
"mystic: highly-constrained non-convex optimization and UQ", 2009- ;
http://trac.mystic.cacr.caltech.edu/project/mystic

Please see http://trac.mystic.cacr.caltech.edu/project/mystic or http://arxiv.org/pdf/1202.1056 for further information.


license()
print license
citation()
print citation


model_plotter(model, logfile=None, **kwds)


generate surface contour plots for model, specified by full import path; and generate model trajectory from
logfile (or solver restart file), if provided
Available from the command shell as:

mystic_model_plotter.py model (logfile) [options]

or as a function call:

mystic.model_plotter(model, logfile=None, **options)

Parameters
• model (str) – full import path for the model (e.g. mystic.models.rosen)
• logfile (str, default=None) – name of convergence logfile (e.g. log.txt)
Returns None

Notes

• The option out takes a string of the filepath for the generated plot.
• The option bounds takes an indicator string, where bounds are given as comma-separated slices. For
example, using bounds = "-1:10, 0:20" will set lower and upper bounds for x to be (-1,10) and
y to be (0,20). The “step” can also be given, to control the number of lines plotted in the grid. Thus
"-1:10:.1, 0:20" sets the bounds as above, but uses increments of .1 along x and the default step
along y. For models > 2D, the bounds can be used to specify 2 dimensions plus fixed values for remaining
dimensions. Thus, "-1:10, 0:20, 1.0" plots the 2D surface where the z-axis is fixed at z=1.0.
When called from a script, slice objects can be used instead of a string, thus "-1:10:.1, 0:20, 1.0"
becomes (slice(-1,10,.1), slice(20), 1.0).
• The option label takes comma-separated strings. For example, label = "x,y," will place ‘x’ on the
x-axis, ‘y’ on the y-axis, and nothing on the z-axis. LaTeX is also accepted. For example, label = "$
h $, $ {\alpha}$, $ v$" will label the axes with standard LaTeX math formatting. Note that the
leading space is required, while a trailing space aligns the text with the axis instead of the plot frame.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option iter takes an integer of the largest iteration to plot.
• The option reduce can be given to reduce the output of a model to a scalar, thus converting
model(params) to reduce(model(params)). A reducer is given by the import path (e.g.
numpy.add).
• The option scale will convert the plot to log-scale, and scale the cost by z=log(4*z*scale+1)+2.
This is useful for visualizing small contour changes around the minimum.
• If using log-scale produces negative numbers, the option shift can be used to shift the cost by z=z+shift.
Both shift and scale are intended to help visualize contours.
• The option fill takes a boolean, to plot using filled contours.
• The option depth takes a boolean, to plot contours in 3D.
• The option dots takes a boolean, to show trajectory points in the plot.
• The option join takes a boolean, to connect trajectory points.
• The option verb takes a boolean, to print the model documentation.
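A hypothetical function-interface call (assumes a convergence log log.txt written by a LoggingMonitor already exists):

import mystic
mystic.model_plotter('mystic.models.rosen', 'log.txt',
                     bounds="-2:2:.1, -2:2", fill=True)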


log_reader(filename, **kwds)
plot parameter convergence from file written with LoggingMonitor
Available from the command shell as:

mystic_log_reader.py filename [options]

or as a function call:

mystic.log_reader(filename, **options)

Parameters filename (str) – name of the convergence logfile (e.g. log.txt).


Returns None

Notes

• The option out takes a string of the filepath for the generated plot.
• The option dots takes a boolean, and will show data points in the plot.
• The option line takes a boolean, and will connect the data with a line.
• The option iter takes an integer of the largest iteration to plot.
• The option legend takes a boolean, and will display the legend.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option param takes an indicator string. The indicator string is built from comma-separated array slices.
For example, params = ":" will plot all parameters. Alternatively, params = ":2, 3:" will plot
all parameters except for the third parameter, while params = "0" will only plot the first parameter.
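A hypothetical function-interface call (again assuming a log file log.txt exists):

import mystic
mystic.log_reader('log.txt', dots=True, legend=True)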

collapse_plotter(filename, **kwds)
generate cost convergence rate plots from file written with write_support_file
Available from the command shell as:

mystic_collapse_plotter.py filename [options]

or as a function call:

mystic.collapse_plotter(filename, **options)

Parameters filename (str) – name of the convergence logfile (e.g. paramlog.py).


Returns None

Notes

• The option dots takes a boolean, and will show data points in the plot.
• The option linear takes a boolean, and will plot in a linear scale.
• The option out takes a string of the filepath for the generated plot.
• The option iter takes an integer of the largest iteration to plot.


• The option label takes a label string. For example, label = "y" will label the plot with a ‘y’, while
label = " log-cost, $ log_{10}(\hat{P} - \hat{P}_{max})$" will label the y-axis
with standard LaTeX math formatting. Note that the leading space is required, and that the text is aligned
along the axis.
• The option col takes a string of comma-separated integers indicating iteration numbers where parameter
collapse has occurred. If a second set of integers is provided (delineated by a semicolon), the additional
set of integers will be plotted with a different linestyle (to indicate a different type of collapse).
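A hypothetical function-interface call (the iteration numbers passed to col are illustrative):

import mystic
mystic.collapse_plotter('paramlog.py', dots=True, col="50,100;75")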



CHAPTER 2

mystic module documentation

2.1 abstract_ensemble_solver module

This module contains the base class for launching several mystic solver instances – utilizing a parallel map
function to enable parallel computing. This module describes the ensemble solver interface. As with the
AbstractSolver, the _Step method must be overwritten with the derived solver’s optimization algorithm. Similar
to AbstractMapSolver, a call to map is required. In addition to the class interface, a simple function interface
for a derived solver class is often provided. For an example, see the following.
The default map API settings are provided within mystic, while distributed and parallel computing maps can be
obtained from the pathos package (http://dev.danse.us/trac/pathos).

Examples

A typical call to an ‘ensemble’ solver will roughly follow this example:


>>> # the function to be minimized and the initial values
>>> from mystic.models import rosen
>>> lb = [0.0, 0.0, 0.0]
>>> ub = [2.0, 2.0, 2.0]
>>>
>>> # get monitors and termination condition objects
>>> from mystic.monitors import Monitor
>>> stepmon = Monitor()
>>> from mystic.termination import CandidateRelativeTolerance as CRT
>>>
>>> # select the parallel launch configuration
>>> from pyina.launchers import Mpi as Pool
>>> NNODES = 4
>>> nbins = [4,4,4]
>>>
>>> # instantiate and configure the solver
>>> from mystic.solvers import NelderMeadSimplexSolver
>>> from mystic.solvers import LatticeSolver
>>> solver = LatticeSolver(len(nbins), nbins)
>>> solver.SetNestedSolver(NelderMeadSimplexSolver)
>>> solver.SetStrictRanges(lb, ub)
>>> solver.SetMapper(Pool(NNODES).map)
>>> solver.SetGenerationMonitor(stepmon)
>>> solver.SetTermination(CRT())
>>> solver.Solve(rosen)
>>>
>>> # obtain the solution
>>> solution = solver.Solution()

2.1.1 Handler

All solvers packaged with mystic include a signal handler that provides the following options:

sol: Print current best solution.


cont: Continue calculation.
call: Executes sigint_callback, if provided.
exit: Exits with current best solution.

Handlers are enabled with the enable_signal_handler method, and are configured through the solver’s Solve
method. Handlers trigger when a signal interrupt (usually, Ctrl-C) is given while the solver is running.

Notes

The handler is currently disabled when the solver is run in parallel.


class AbstractEnsembleSolver(dim, **kwds)
Bases: mystic.abstract_map_solver.AbstractMapSolver
AbstractEnsembleSolver base class for mystic optimizers that are called within a parallel map. This allows
pseudo-global coverage of parameter space using non-global optimizers.
Takes one initial input:

dim -- dimensionality of the problem.

Additional inputs:

npop -- size of the trial solution population. [default = 1]


nbins -- tuple of number of bins in each dimension. [default = [1]*dim]
npts -- number of solver instances. [default = 1]
rtol -- size of radial tolerance for sparsity. [default = None]

Important class members:

nDim, nPop = dim, npop


generations - an iteration counter.
evaluations - an evaluation counter.
bestEnergy - current best energy.
bestSolution - current best parameter set. [size = dim]
popEnergy - set of all trial energy solutions. [size = npop]
population - set of all trial parameter solutions. [size = dim*npop]
solution_history - history of bestSolution status. [StepMonitor.x]
energy_history - history of bestEnergy status. [StepMonitor.y]
signal_handler - catches the interrupt signal. [***disabled***]

SetDistribution(dist=None)
Set the distribution used for determining solver starting points
Inputs:
• dist: a mystic.math.Distribution instance
SetInitialPoints(x0, radius=0.05)
Set Initial Points with Guess (x0)
input::
• x0: must be a sequence of length self.nDim
• radius: generate random points within [-radius*x0, radius*x0] for i!=0 when a simplex-type
initial guess is required
* this method must be overwritten *
SetMultinormalInitialPoints(mean, var=None)
Generate Initial Points from Multivariate Normal.
input::
• mean must be a sequence of length self.nDim
• var can be... None: var becomes the identity; scalar: var becomes scalar * I; or a matrix: var is the
variance matrix (must be the right size!)
* this method must be overwritten *
SetNestedSolver(solver)
set the nested solver
input::
• solver: a mystic solver instance (e.g. NelderMeadSimplexSolver(3) )
SetRandomInitialPoints(min=None, max=None)
Generate Random Initial Points within given Bounds
input::
• min, max: must be a sequence of length self.nDim
• each min[i] should be <= the corresponding max[i]
* this method must be overwritten *
SetSampledInitialPoints(dist=None)
Generate Random Initial Points from Distribution (dist)
input::
• dist: a mystic.math.Distribution instance
* this method must be overwritten *
Solve(cost, termination=None, ExtraArgs=(), **kwds)
Minimize a ‘cost’ function with given termination conditions.
Uses an ensemble of optimizers to find the minimum of a function of one or more variables.


Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• disp (bool, default=False) – if True, print convergence messages.
Returns None
Terminated(disp=False, info=False, termination=None)
check if the solver meets the given termination conditions
Input::
• disp = if True, print termination statistics and/or warnings
• info = if True, return termination message (instead of boolean)
• termination = termination conditions to check against
Notes:: If no termination conditions are given, the solver’s stored termination conditions will be used.
_AbstractEnsembleSolver__get_solver_instance()
ensure the solver is a solver instance
_InitialPoints()
Generate a grid of starting points for the ensemble of optimizers
* this method must be overwritten *
__init__(dim, **kwds)
Takes one initial input:

dim -- dimensionality of the problem.

Additional inputs:

npop -- size of the trial solution population. [default = 1]


nbins -- tuple of number of bins in each dimension. [default = [1]*dim]
npts -- number of solver instances. [default = 1]
rtol -- size of radial tolerance for sparsity. [default = None]

Important class members:

nDim, nPop = dim, npop


generations - an iteration counter.
evaluations - an evaluation counter.
bestEnergy - current best energy.
bestSolution - current best parameter set. [size = dim]
popEnergy - set of all trial energy solutions. [size = npop]
population - set of all trial parameter solutions. [size = dim*npop]
solution_history - history of bestSolution status. [StepMonitor.x]
energy_history - history of bestEnergy status. [StepMonitor.y]
signal_handler - catches the interrupt signal. [***disabled***]

__module__ = 'mystic.abstract_ensemble_solver'


_update_objective()
decorate the cost function with bounds, penalties, monitors, etc

2.2 abstract_launcher module

This module contains the base classes for pathos pool and pipe objects, and describes the map and pipe interfaces. A
pipe is defined as a connection between two ‘nodes’, where a node is something that does work. A pipe may be a
one-way or two-way connection. A map is defined as a one-to-many connection between nodes. In both map and pipe
connections, results from the connected nodes can be returned to the calling node. There are several variants of pipe
and map, such as whether the connection is blocking, or ordered, or asynchronous. For pipes, derived methods must
overwrite the ‘pipe’ method, while maps must overwrite the ‘map’ method. Pipes and maps are available from worker
pool objects, where the work is done by any of the workers in the pool. For more specific point-to-point connections
(such as a pipe between two specific compute nodes), use the pipe object directly.

2.2.1 Usage

A typical call to a pathos map will roughly follow this example:

>>> # instantiate and configure the worker pool


>>> from pathos.pools import ProcessPool
>>> pool = ProcessPool(nodes=4)
>>>
>>> # do a blocking map on the chosen function
>>> results = pool.map(pow, [1,2,3,4], [5,6,7,8])
>>>
>>> # do a non-blocking map, then extract the results from the iterator
>>> results = pool.imap(pow, [1,2,3,4], [5,6,7,8])
>>> print("...")
>>> results = list(results)
>>>
>>> # do an asynchronous map, then get the results
>>> import time
>>> results = pool.amap(pow, [1,2,3,4], [5,6,7,8])
>>> while not results.ready():
... time.sleep(5); print(".", end=' ')
...
>>> results = results.get()

Notes

Each of the pathos worker pools rely on a different transport protocol (e.g. threads, multiprocessing, etc), where the
use of each pool comes with a few caveats. See the usage documentation and examples for each worker pool for more
information.
class AbstractPipeConnection(*args, **kwds)
Bases: object
AbstractPipeConnection base class for pathos pipes.
Required input: ???
Additional inputs: ???
Important class members: ???
Other class members: ???


__dict__ = dict_proxy({'__module__': 'mystic.abstract_launcher', '__repr__': <functio


__init__(*args, **kwds)
Required input: ???
Additional inputs: ???
Important class members: ???
Other class members: ???
__module__ = 'mystic.abstract_launcher'
__repr__() <==> repr(x)
__weakref__
list of weak references to the object (if defined)
class AbstractWorkerPool(*args, **kwds)
Bases: object
AbstractWorkerPool base class for pathos pools.
Important class members:
  nodes - number (and potentially description) of workers
  ncpus - number of worker processors
  servers - list of worker servers
  scheduler - the associated scheduler
  workdir - associated $WORKDIR for scratch calculations/files
Other class members:
  scatter - True, if uses ‘scatter-gather’ (instead of ‘worker-pool’)
  source - False, if minimal use of TemporaryFiles is desired
  timeout - number of seconds to wait for return value from scheduler
_AbstractWorkerPool__get_nodes()
get the number of nodes in the pool
_AbstractWorkerPool__imap(f, *args, **kwds)
default filter for imap inputs
_AbstractWorkerPool__init(*args, **kwds)
default filter for __init__ inputs
_AbstractWorkerPool__map(f, *args, **kwds)
default filter for map inputs
_AbstractWorkerPool__nodes = 1
_AbstractWorkerPool__pipe(f, *args, **kwds)
default filter for pipe inputs
_AbstractWorkerPool__set_nodes(nodes)
set the number of nodes in the pool
__dict__ = dict_proxy({'map': <function map>, '__module__': 'mystic.abstract_launcher
__enter__()
__exit__(*args)
__init__(*args, **kwds)
Important class members:
  nodes - number (and potentially description) of workers
  ncpus - number of worker processors
  servers - list of worker servers
  scheduler - the associated scheduler
  workdir - associated $WORKDIR for scratch calculations/files
Other class members:
  scatter - True, if uses ‘scatter-gather’ (instead of ‘worker-pool’)
  source - False, if minimal use of TemporaryFiles is desired
  timeout - number of seconds to wait for return value from scheduler


__module__ = 'mystic.abstract_launcher'
__repr__() <==> repr(x)
__weakref__
list of weak references to the object (if defined)
_serve(*args, **kwds)
Create a new server if one isn’t already initialized
amap(f, *args, **kwds)
run a batch of jobs with an asynchronous map
Returns a results object which contains the results of applying the function f to the items of the argument
sequence(s). If more than one sequence is given, the function is called with an argument list consisting
of the corresponding item of each sequence. To retrieve the results, call the get() method on the returned
results object. The call to get() is blocking, until all results are retrieved. Use the ready() method on the
result object to check if all results are ready.
apipe(f, *args, **kwds)
submit a job asynchronously to a queue
Returns a results object which contains the result of calling the function f on a selected worker. To retrieve
the results, call the get() method on the returned results object. The call to get() is blocking, until the result
is available. Use the ready() method on the results object to check if the result is ready.
clear()
Remove server with matching state
imap(f, *args, **kwds)
run a batch of jobs with a non-blocking and ordered map
Returns a list iterator of results of applying the function f to the items of the argument sequence(s). If more
than one sequence is given, the function is called with an argument list consisting of the corresponding
item of each sequence.
map(f, *args, **kwds)
run a batch of jobs with a blocking and ordered map
Returns a list of results of applying the function f to the items of the argument sequence(s). If more than
one sequence is given, the function is called with an argument list consisting of the corresponding item of
each sequence.
pipe(f, *args, **kwds)
submit a job and block until results are available
Returns result of calling the function f on a selected worker. This function will block until results are
available.
uimap(f, *args, **kwds)
run a batch of jobs with a non-blocking and unordered map
Returns a list iterator of results of applying the function f to the items of the argument sequence(s). If more
than one sequence is given, the function is called with an argument list consisting of the corresponding
item of each sequence. The order of the resulting sequence is not guaranteed.

2.3 abstract_map_solver module

This module contains the base class for mystic solvers that utilize a parallel map function to enable parallel comput-
ing. This module describes the map solver interface. As with the AbstractSolver, the _Step method must be


overwritten with the derived solver’s optimization algorithm. Additionally, for the AbstractMapSolver, a call
to map is required. In addition to the class interface, a simple function interface for a derived solver class is often
provided. For an example, see the following.
The default map API settings are provided within mystic, while distributed and parallel computing maps can be
obtained from the pathos package (http://dev.danse.us/trac/pathos).

Examples

A typical call to a ‘map’ solver will roughly follow this example:

>>> # the function to be minimized and the initial values


>>> from mystic.models import rosen
>>> lb = [0.0, 0.0, 0.0]
>>> ub = [2.0, 2.0, 2.0]
>>>
>>> # get monitors and termination condition objects
>>> from mystic.monitors import Monitor
>>> stepmon = Monitor()
>>> from mystic.termination import CandidateRelativeTolerance as CRT
>>>
>>> # select the parallel launch configuration
>>> from pyina.launchers import Mpi as Pool
>>> NNODES = 4
>>> npts = 20
>>>
>>> # instantiate and configure the solver
>>> from mystic.solvers import BuckshotSolver
>>> solver = BuckshotSolver(len(lb), npts)
>>> solver.SetMapper(Pool(NNODES).map)
>>> solver.SetGenerationMonitor(stepmon)
>>> solver.SetTermination(CRT())
>>> solver.Solve(rosen)
>>>
>>> # obtain the solution
>>> solution = solver.Solution()

2.3.1 Handler

All solvers packaged with mystic include a signal handler that provides the following options:

sol: Print current best solution.


cont: Continue calculation.
call: Executes sigint_callback, if provided.
exit: Exits with current best solution.

Handlers are enabled with the enable_signal_handler method, and are configured through the solver’s Solve
method. Handlers trigger when a signal interrupt (usually, Ctrl-C) is given while the solver is running.

Notes

The handler is currently disabled when the solver is run in parallel.


class AbstractMapSolver(dim, **kwds)


Bases: mystic.abstract_solver.AbstractSolver
AbstractMapSolver base class for mystic optimizers that utilize parallel map.
Takes one initial input:
dim -- dimensionality of the problem.

Additional inputs:
npop -- size of the trial solution population. [default = 1]

Important class members:


nDim, nPop = dim, npop
generations - an iteration counter.
evaluations - an evaluation counter.
bestEnergy - current best energy.
bestSolution - current best parameter set. [size = dim]
popEnergy - set of all trial energy solutions. [size = npop]
population - set of all trial parameter solutions. [size = dim*npop]
solution_history - history of bestSolution status. [StepMonitor.x]
energy_history - history of bestEnergy status. [StepMonitor.y]
signal_handler - catches the interrupt signal. [***disabled***]

SelectScheduler(scheduler, queue, timelimit=None)


Select scheduler and queue (and, optionally, a timelimit).
Description:
Takes a scheduler function and a string queue name to submit the optimization job. Additionally
takes string time limit for scheduled job.
Example: scheduler, queue=’normal’, timelimit=‘00:02’

Inputs: scheduler – scheduler function (see pyina.launchers) [DEFAULT: None] queue – queue name
string (see pyina.launchers) [DEFAULT: None]
Additional inputs: timelimit – time string HH:MM:SS format [DEFAULT: ‘00:05:00’]

SelectServers(servers, ncpus=None)
Select the compute server.
Description:
Accepts a tuple of (‘hostname:port’,), listing each available computing server.
If ncpus=None, then ‘autodetect’; or if ncpus=0, then ‘no local’. If servers=(‘*’,), then ‘autode-
tect’; or if servers=(), then ‘no remote’.

Inputs: servers – tuple of compute servers [DEFAULT: autodetect]


Additional inputs: ncpus – number of local processors [DEFAULT: autodetect]

SetLauncher(launcher, nnodes=None)
Set launcher and (optionally) number of nodes.
Description:
Uses a launcher to provide the solver with the syntax to configure and launch optimization jobs
on the selected resource.


Inputs: launcher – launcher function (see pyina.launchers) [DEFAULT: serial_launcher]


Additional inputs: nnodes – number of parallel compute nodes [DEFAULT: 1]

SetMapper(map, strategy=None)
Set the map function and the mapping strategy.
Description:
Sets a mapping function to perform the map-reduce algorithm. Uses a mapping strategy to pro-
vide the algorithm for distributing the work list of optimization jobs across available resources.

Inputs: map – the mapping function [DEFAULT: python_map] strategy – map strategy (see py-
ina.mappers) [DEFAULT: worker_pool]

__init__(dim, **kwds)
Takes one initial input:

dim -- dimensionality of the problem.

Additional inputs:

npop -- size of the trial solution population. [default = 1]

Important class members:

nDim, nPop = dim, npop


generations - an iteration counter.
evaluations - an evaluation counter.
bestEnergy - current best energy.
bestSolution - current best parameter set. [size = dim]
popEnergy - set of all trial energy solutions. [size = npop]
population - set of all trial parameter solutions. [size = dim*npop]
solution_history - history of bestSolution status. [StepMonitor.x]
energy_history - history of bestEnergy status. [StepMonitor.y]
signal_handler - catches the interrupt signal. [***disabled***]

__module__ = 'mystic.abstract_map_solver'

2.4 abstract_solver module

This module contains the base class for mystic solvers, and describes the mystic solver interface. The _Step method
must be overwritten with the derived solver’s optimization algorithm. In addition to the class interface, a simple
function interface for a derived solver class is often provided. For an example, see mystic.scipy_optimize,
and the following.

Examples

A typical call to a solver will roughly follow this example:

>>> # the function to be minimized and the initial values


>>> from mystic.models import rosen
>>> x0 = [0.8, 1.2, 0.7]
>>>
>>> # get monitors and termination condition objects
>>> from mystic.monitors import Monitor
>>> stepmon = Monitor()
>>> evalmon = Monitor()
>>> from mystic.termination import CandidateRelativeTolerance as CRT
>>>
>>> # instantiate and configure the solver
>>> from mystic.solvers import NelderMeadSimplexSolver
>>> solver = NelderMeadSimplexSolver(len(x0))
>>> solver.SetInitialPoints(x0)
>>> solver.SetEvaluationMonitor(evalmon)
>>> solver.SetGenerationMonitor(stepmon)
>>> solver.enable_signal_handler()
>>> solver.SetTermination(CRT())
>>> solver.Solve(rosen)
>>>
>>> # obtain the solution
>>> solution = solver.Solution()

An equivalent, but less flexible, call using the function interface is:

>>> # the function to be minimized and the initial values


>>> from mystic.models import rosen
>>> x0 = [0.8, 1.2, 0.7]
>>>
>>> # configure the solver and obtain the solution
>>> from mystic.solvers import fmin
>>> solution = fmin(rosen,x0)

2.4.1 Handler

All solvers packaged with mystic include a signal handler that provides the following options:

sol: Print current best solution.


cont: Continue calculation.
call: Executes sigint_callback, if provided.
exit: Exits with current best solution.

Handlers are enabled with the enable_signal_handler method, and are configured through the solver’s Solve
method. Handlers trigger when a signal interrupt (usually, Ctrl-C) is given while the solver is running.
class AbstractSolver(dim, **kwds)
Bases: object
AbstractSolver base class for mystic optimizers.
Takes one initial input:

dim -- dimensionality of the problem.

Additional inputs:

npop -- size of the trial solution population. [default = 1]

Important class members:


nDim, nPop = dim, npop


generations - an iteration counter.
evaluations - an evaluation counter.
bestEnergy - current best energy.
bestSolution - current best parameter set. [size = dim]
popEnergy - set of all trial energy solutions. [size = npop]
population - set of all trial parameter solutions. [size = dim*npop]
solution_history - history of bestSolution status. [StepMonitor.x]
energy_history - history of bestEnergy status. [StepMonitor.y]
signal_handler - catches the interrupt signal.

Collapse(disp=False)
if solver has terminated by collapse, apply the collapse (unless both collapse and “stop” are simultaneously
satisfied)
Collapsed(disp=False, info=False)
check if the solver meets the given collapse conditions
Input::
• disp = if True, print details about the solver state at collapse
• info = if True, return collapsed state (instead of boolean)
Finalize(**kwds)
cleanup upon exiting the main optimization loop
SaveSolver(filename=None, **kwds)
save solver state to a restart file
SetConstraints(constraints)
apply a constraints function to the optimization
input::
• a constraints function of the form: xk’ = constraints(xk), where xk is the current parameter vector.
Ideally, this function is constructed so the parameter vector it passes to the cost function will
satisfy the desired (i.e. encoded) constraints.
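A minimal sketch (using the bundled rosen model; the non-negativity clipping constraint is purely illustrative):

from mystic.solvers import NelderMeadSimplexSolver
from mystic.termination import CandidateRelativeTolerance as CRT
from mystic.models import rosen

def nonnegative(xk):
    return [max(xi, 0.0) for xi in xk]   # xk' = constraints(xk)

solver = NelderMeadSimplexSolver(3)
solver.SetInitialPoints([0.8, 1.2, 0.7])
solver.SetConstraints(nonnegative)
solver.SetTermination(CRT())
solver.Solve(rosen)
print(solver.bestSolution)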
SetEvaluationLimits(generations=None, evaluations=None, new=False, **kwds)
set limits for generations and/or evaluations
input::
• generations = maximum number of solver iterations (i.e. steps)
• evaluations = maximum number of function evaluations
SetEvaluationMonitor(monitor, new=False)
select a callable to monitor (x, f(x)) after each cost function evaluation
SetGenerationMonitor(monitor, new=False)
select a callable to monitor (x, f(x)) after each solver iteration
SetInitialPoints(x0, radius=0.05)
Set Initial Points with Guess (x0)
input::
• x0: must be a sequence of length self.nDim
• radius: generate random points within [-radius*x0, radius*x0] for i!=0 when a simplex-type
initial guess is required


SetMultinormalInitialPoints(mean, var=None)
Generate Initial Points from Multivariate Normal.
input::
• mean must be a sequence of length self.nDim
• var can be... None: var becomes the identity; scalar: var becomes scalar * I; or a matrix: var is the
variance matrix (must be the right size!)
SetObjective(cost, ExtraArgs=None)
decorate the cost function with bounds, penalties, monitors, etc
SetPenalty(penalty)
apply a penalty function to the optimization
input::
• a penalty function of the form: y’ = penalty(xk), with y = cost(xk) + y’, where xk is the current
parameter vector. Ideally, this function is constructed so a penalty is applied when the desired
(i.e. encoded) constraints are violated. Equality constraints should be considered satisfied when
the penalty condition evaluates to zero, while inequality constraints are satisfied when the penalty
condition evaluates to a non-positive number.
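A minimal sketch of such a penalty function (the limit value is purely illustrative; solver is an already-configured solver instance, as in the SetConstraints example above):

def penalty(xk):
    violation = sum(xk) - 10.0           # <= 0 when the constraint is satisfied
    return 100.0 * violation**2 if violation > 0 else 0.0

solver.SetPenalty(penalty)               # the solver then minimizes cost(xk) + penalty(xk)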
SetRandomInitialPoints(min=None, max=None)
Generate Random Initial Points within given Bounds
input::
• min, max: must be a sequence of length self.nDim
• each min[i] should be <= the corresponding max[i]
SetReducer(reducer, arraylike=False)
apply a reducer function to the cost function
input::
• a reducer function of the form: y’ = reducer(yk), where yk is a results vector and y’ is a single
value. Ideally, this method is applied to a cost function with a multi-value return, to reduce the
output to a single value. If arraylike, the reducer provided should take a single array as input and
produce a scalar; otherwise, the reducer provided should meet the requirements of the python’s
builtin ‘reduce’ method (e.g. lambda x,y: x+y), taking two scalars and producing a scalar.
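A brief sketch (solver as above; numpy reducers are one convenient choice):

import numpy
solver.SetReducer(numpy.add)                     # pairwise, like Python's built-in reduce
# solver.SetReducer(numpy.sum, arraylike=True)   # or: the reducer takes the whole results array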
SetSampledInitialPoints(dist=None)
Generate Random Initial Points from Distribution (dist)
input::
• dist: a mystic.math.Distribution instance
SetSaveFrequency(generations=None, filename=None, **kwds)
set frequency for saving solver restart file
input::
• generations = number of solver iterations before next save of state
• filename = name of file in which to save solver state
note:: SetSaveFrequency(None) will disable saving solver restart file
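A rough sketch of saving and restoring solver state (assumes LoadSolver from mystic.solvers; the filename and settings are illustrative):

from mystic.solvers import DifferentialEvolutionSolver, LoadSolver
from mystic.termination import VTR
from mystic.models import rosen

solver = DifferentialEvolutionSolver(3, 40)
solver.SetRandomInitialPoints([0.0]*3, [2.0]*3)
solver.SetSaveFrequency(10, 'solver.pkl')   # write a restart file every 10 generations
solver.SetTermination(VTR(1e-8))
solver.Solve(rosen)

restored = LoadSolver('solver.pkl')         # resume from the saved state
print(restored.bestEnergy)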
SetStrictRanges(min=None, max=None)
ensure solution is within bounds
input::


• min, max: must be a sequence of length self.nDim
• each min[i] should be <= the corresponding max[i]
note:: SetStrictRanges(None) will remove strict range constraints
SetTermination(termination)
set the termination conditions
Solution()
return the best solution
Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a ‘cost’ function with given termination conditions.
Uses an optimization algorithm to find the minimum of a function of one or more variables.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• disp (bool, default=False) – if True, print convergence messages.
Returns None
Step(cost=None, termination=None, ExtraArgs=None, **kwds)
Take a single optimization step using the given ‘cost’ function.
Uses an optimization algorithm to take one ‘step’ toward the minimum of a function of one or more
variables.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• disp (bool, default=False) – if True, print convergence messages.
Returns None

Notes

To run the solver until termination, call Solve(). Alternately, use Terminated() as the stop condition
in a while loop over Step.
If the algorithm does not meet the given termination conditions after the call to Step, the solver may be
left in an “out-of-sync” state. When abandoning a non-terminated solver, one should call Finalize()
to make sure the solver is fully returned to a “synchronized” state.
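A rough sketch of that loop (using the bundled rosen model):

from mystic.solvers import NelderMeadSimplexSolver
from mystic.termination import CandidateRelativeTolerance as CRT
from mystic.models import rosen

solver = NelderMeadSimplexSolver(3)
solver.SetInitialPoints([0.8, 1.2, 0.7])
solver.SetTermination(CRT())

while True:
    solver.Step(rosen)           # advance a single iteration
    if solver.Terminated():      # stop once the termination condition is met
        break
solver.Finalize()                # return the solver to a synchronized state
print(solver.Solution())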


Terminated(disp=False, info=False, termination=None)


check if the solver meets the given termination conditions
Input::
• disp = if True, print termination statistics and/or warnings
• info = if True, return termination message (instead of boolean)
• termination = termination conditions to check against
Notes:: If no termination conditions are given, the solver’s stored termination conditions will be used.
_AbstractSolver__bestEnergy()
get the bestEnergy (default: bestEnergy = popEnergy[0])
_AbstractSolver__bestSolution()
get the bestSolution (default: bestSolution = population[0])
_AbstractSolver__energy_history()
get the energy_history (default: energy_history = _stepmon._y)
_AbstractSolver__evaluations()
get the number of function calls
_AbstractSolver__generations()
get the number of iterations
_AbstractSolver__load_state(solver, **kwds)
load solver.__dict__ into self.__dict__; override with kwds
_AbstractSolver__save_state(force=False)
save the solver state, if chosen save frequency is met
_AbstractSolver__set_bestEnergy(energy)
set the bestEnergy (energy=None will sync with popEnergy[0])
_AbstractSolver__set_bestSolution(params)
set the bestSolution (params=None will sync with population[0])
_AbstractSolver__set_energy_history(energy)
set the energy_history (energy=None will sync with _stepmon._y)
_AbstractSolver__set_solution_history(params)
set the solution_history (params=None will sync with _stepmon.x)
_AbstractSolver__solution_history()
get the solution_history (default: solution_history = _stepmon.x)
_SetEvaluationLimits(iterscale=None, evalscale=None)
set the evaluation limits
_Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration
* this method must be overwritten *
__copy__()
__deepcopy__(memo)
__dict__ = dict_proxy({'Terminated': <function Terminated>, '__module__': 'mystic.abs
__init__(dim, **kwds)
Takes one initial input:


dim -- dimensionality of the problem.

Additional inputs:

npop -- size of the trial solution population. [default = 1]

Important class members:

nDim, nPop = dim, npop


generations - an iteration counter.
evaluations - an evaluation counter.
bestEnergy - current best energy.
bestSolution - current best parameter set. [size = dim]
popEnergy - set of all trial energy solutions. [size = npop]
population - set of all trial parameter solutions. [size = dim*npop]
solution_history - history of bestSolution status. [StepMonitor.x]
energy_history - history of bestEnergy status. [StepMonitor.y]
signal_handler - catches the interrupt signal.

__module__ = 'mystic.abstract_solver'
__weakref__
list of weak references to the object (if defined)
_bootstrap_objective(cost=None, ExtraArgs=None)
HACK to enable not explicitly calling _decorate_objective
_clipGuessWithinRangeBoundary(x0, at=True)
ensure that initial guess is set within bounds
input::
• x0: must be a sequence of length self.nDim
_decorate_objective(cost, ExtraArgs=None)
decorate the cost function with bounds, penalties, monitors, etc
_process_inputs(kwds)
process and activate input settings
_update_objective()
decorate the cost function with bounds, penalties, monitors, etc
bestEnergy
get the bestEnergy (default – bestEnergy = popEnergy[0])
bestSolution
get the bestSolution (default – bestSolution = population[0])
disable_signal_handler()
disable workflow interrupt handler while solver is running
enable_signal_handler()
enable workflow interrupt handler while solver is running
energy_history
get the energy_history (default – energy_history = _stepmon._y)
evaluations
get the number of function calls


generations
get the number of iterations
solution_history
get the solution_history (default – solution_history = _stepmon.x)

2.5 cache module

decorators for caching function outputs, with function inputs as the keys

Notes

This module has been deprecated in favor of the klepto package.

2.5.1 mystic.cache module documentation

This module has been deprecated in favor of the klepto package.

2.6 collapse module

_index_selector(mask)
generate a selector for a mask of indices
_pair_selector(mask)
generate a selector for a mask of tuples (pairs)
_position_filter(mask)
generate a filter for a position mask (dict, set, or where)
_split_mask(mask)
separate a mask into a list of ints and list of tuples (pairs). mask should be composed of indices and pairs of
indices
_weight_filter(mask)
generate a filter for a weight mask (dict, set, or where)
collapse_as(stepmon, offset=False, tolerance=0.005, generations=50, mask=None)
return a set of pairs of indices where the parameters exhibit a dimensional collapse. Dimensional collapse is de-
fined by: max(pairwise(parameters)) <= tolerance over N generations (offset=False), ptp(pairwise(parameters))
<= tolerance over N generations (offset=True).
collapse will be ignored at any pairs of indices specified in the mask. If single indices are provided, ignore all
pairs with the given indices.
collapse_at(stepmon, target=None, tolerance=0.005, generations=50, mask=None)
return a set of indices where the parameters exhibit a dimensional collapse at the specified target. Dimen-
sional collapse is defined by: change(param[i]) <= tolerance over N generations, where: change(param[i]) =
max(param[i]) - min(param[i]) if target = None, or change(param[i]) = abs(param[i] - target) otherwise.
target can be None, a single value, or a list of values of param length
collapse will be ignored at any indices specified in the mask
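
A rough sketch of typical use, assuming a mystic.monitors.Monitor that has logged (params, cost) pairs during a solver run (the exact return value depends on the logged history, so none is shown):

>>> from mystic.monitors import Monitor
>>> from mystic.collapse import collapse_at
>>> m = Monitor()
>>> for i in range(60):                       # fake a history where x[0] and x[2] stop changing
...     m([1.0, 10./(i+1), 5.0], 1./(i+1))    # monitor(params, cost)
...
>>> fixed = collapse_at(m, generations=50)    # expected to flag indices 0 and 2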


collapse_cost(stepmon, clip=False, limit=1.0, samples=50, mask=None)


return a dict of {index:bounds} where the parameters exhibit a collapse in bounds for regions of parameter space
with a comparably high cost value. Bounds are provided by an interval (min,max), or a list of intervals. Bounds
collapse will occur when: cost(param) - min(cost) >= limit, for all N samples within an interval.
if clip is True, then clip beyond the space sampled by stepmon
if mask is provided, the intersection of bounds and mask is returned. mask is a dict of {index:bounds}, formatted
same as the return value.
collapse_position(stepmon, tolerance=0.005, generations=50, mask=None)
return a dict of {measure: pairs_of_indices} where the product_measure exhibits a dimensional collapse in
position. Dimensional collapse in position is defined by:
collapse will be ignored at (measure,pairs) as specified in the mask. Format of mask will determine the return
value for this function. Default mask format is dict of {measure: pairs_of_indices}, with alternate formatting
available as a set of tuples of (measure,pair).
collapse_weight(stepmon, tolerance=0.005, generations=50, mask=None)
return a dict of {measure:indices} where the product_measure exhibits a dimensional collapse in weight. Di-
mensional collapse in weight is defined by: max(weight[i]) <= tolerance over N generations.
collapse will be ignored at (measure,indices) as specified in the mask. Format of mask will determine the return
value for this function. Default mask format is dict of {measure: indices}, with alternate formatting available as
a set of tuples of (measure,index).
collapsed(message)
extract collapse result from collapse message
selector(mask)
generate a selector for a mask of pairs and/or indices

2.7 constraints module

Tools for building and applying constraints and penalties.


with_penalty(ptype, *args, **kwds)
convert a condition to a penalty function of the chosen type
condition f(x) is satisfied when f(x) == 0.0 for equality constraints and f(x) <= 0.0 for inequality constraints.
ptype is a mystic.penalty type.
For example:

>>> @with_penalty(quadratic_equality, kwds={'target':5.0})
... def penalty(x, target):
...     return mean(x) - target
...
>>> penalty([1,2,3,4,5])
400.0
>>> penalty([3,4,5,6,7])
7.8886090522101181e-29
with_constraint(ctype, *args, **kwds)
convert a set transformation to a constraints solver of the chosen type
transformation f(x) is a mapping between x and x’, where x’ = f(x). ctype is a mystic.coupler type [inner, outer,
inner_proxy, outer_proxy].
For example:

>>> @with_constraint(inner, kwds={'target':5.0})
... def constraint(x, target):
...     return impose_mean(target, x)
...
>>> x = constraint([1,2,3,4,5])
>>> print(x)
[3.0, 4.0, 5.0, 6.0, 7.0]
>>> mean(x)
5.0
as_penalty(constraint, ptype=None, *args, **kwds)
Convert a constraints solver to a penalty function.
Inputs: constraint – a constraints solver ptype – penalty function type [default: quadratic_equality]


Additional Inputs: args – arguments for the constraints solver [default: ()] kwds – keyword arguments for the
constraints solver [default: {}] k – penalty multiplier h – iterative multiplier
as_constraint(penalty, *args, **kwds)
Convert a penalty function to a constraints solver.
Inputs: penalty – a penalty function
Additional Inputs: lower_bounds – list of lower bounds on solution values. upper_bounds – list of upper
bounds on solution values. nvars – number of parameter values. solver – the mystic solver to use in the
optimization termination – the mystic termination to use in the optimization
NOTE: The default solver is ‘diffev’, with npop=min(40, ndim*5). The default termination is
ChangeOverGeneration(), and the default guess is randomly selected points between the upper and
lower bounds.
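
A rough sketch of converting a constraints solver into a penalty function (a sketch only; mystic.penalty.quadratic_equality is assumed, as documented in the penalty module):

>>> from mystic.constraints import with_mean, as_penalty
>>> from mystic.penalty import quadratic_equality
>>> @with_mean(5.0)                  # a constraints solver imposing mean(x) == 5.0
... def constraint(x):
...     return x
...
>>> p = as_penalty(constraint, quadratic_equality)
>>> p([5.0, 5.0, 5.0]) == 0.0        # no penalty when the constraint is satisfied
True
>>> p([1.0, 2.0, 3.0]) > 0.0         # violations are penalized
True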
with_mean(target)
bind a mean constraint to a given constraints function.
Inputs: target – the target mean
A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling
of “impose_mean” onto another constraints function c(x), such that: x’ = impose_mean(target, c(x)).
For example:

>>> @with_mean(5.0)
... def constraint(x):
...     x[-1] = x[0]
...     return x
...
>>> x = constraint([1,2,3,4])
>>> print(x)
[4.25, 5.25, 6.25, 4.25]
>>> mean(x)
5.0
with_variance(target)
bind a variance constraint to a given constraints function.
Inputs: target – the target variance
A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling
of “impose_variance” onto another constraints function c(x), such that: x’ = impose_variance(target, c(x)).
For example:

>>> @with_variance(1.0)
... def constraint(x):
...     x[-1] = x[0]
...     return x
...
>>> x = constraint([1,2,3])
>>> print(x)
[0.6262265521467858, 2.747546895706428, 0.6262265521467858]
>>> variance(x)
0.99999999999999956
with_std(target)
bind a standard deviation constraint to a given constraints function.
Inputs: target – the target standard deviation
A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling
of “impose_std” onto another constraints function c(x), such that: x’ = impose_std(target, c(x)).
For example:

>>> @with_std(1.0)
... def constraint(x):
...     x[-1] = x[0]
...     return x
...
>>> x = constraint([1,2,3])
>>> print(x)
[0.6262265521467858, 2.747546895706428, 0.6262265521467858]
>>> std(x)
0.99999999999999956
with_spread(target)
bind a range constraint to a given constraints function.
Inputs: target – the target range
A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling
of “impose_spread” onto another constraints function c(x), such that: x’ = impose_spread(target, c(x)).
For example:

>>> @with_spread(10.0)
... def constraint(x):
...     return [i**2 for i in x]
...
>>> x = constraint([1,2,3,4])
>>> print(x)
[3.1666666666666665, 5.1666666666666661, 8.5, 13.166666666666666]
>>> spread(x)
10.0


normalized(mass=1.0)
bind a normalization constraint to a given constraints function.
Inputs: mass – the target sum of normalized weights
A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling
of “normalize” onto another constraints function c(x), such that: x’ = normalize(c(x), mass).
For example:

>>> @normalized()
... def constraint(x):
...     return x
...
>>> constraint([1,2,3])
[0.16666666666666666, 0.33333333333333331, 0.5]
issolution(constraints, guess, tol=0.001)
Returns whether the guess is a solution to the constraints
Input: constraints – a constraints solver function or a penalty function guess – list of parameter values proposed
to solve the constraints tol – residual error magnitude for which constraints are considered solved
For example:

>>> @normalized()
... def constraint(x):
...     return x
...
>>> constraint([.5,.5])
[0.5, 0.5]
>>> issolution(constraint, [.5,.5])
True
>>>
>>> from mystic.penalty import quadratic_inequality
>>> @quadratic_inequality(lambda x: x[0] - x[1] + 10)
... def penalty(x):
...     return 0.0
...
>>> penalty([-10,.5])
0.0
>>> issolution(penalty, [-10,.5])
True
solve(constraints, guess=None, nvars=None, solver=None, lower_bounds=None, upper_bounds=None, termination=None)
Use optimization to find a solution to a set of constraints.
Inputs: constraints – a constraints solver function or a penalty function
Additional Inputs: guess – list of parameter values proposed to solve the constraints. lower_bounds – list
of lower bounds on solution values. upper_bounds – list of upper bounds on solution values. nvars –
number of parameter values. solver – the mystic solver to use in the optimization termination – the mystic
termination to use in the optimization
NOTE: The resulting constraints will likely be more expensive to evaluate and less accurate than writing
the constraints solver from scratch.
NOTE: The ensemble solvers are available, using the default NestedSolver, where the keyword ‘guess’ can
be used to set the number of solvers.
NOTE: The default solver is ‘diffev’, with npop=min(40, ndim*5). The default termination is
ChangeOverGeneration(), and the default guess is randomly selected points between the upper and
lower bounds.
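
A minimal sketch of typical use (the recovered values are approximate, so no output is shown):

>>> from mystic.constraints import solve, with_mean
>>> @with_mean(5.0)
... def constraint(x):
...     return x
...
>>> x = solve(constraint, guess=[1., 2., 3.])
>>> # mean(x) should now be (approximately) 5.0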
discrete(samples, index=None)
impose a discrete set of input values for the selected function
The function’s input will be mapped to the given discrete set

>>> @discrete([1.0, 2.0])
... def identity(x):
...     return x
...
>>> identity([0.123, 1.789, 4.000])
[1.0, 2.0, 2.0]

>>> @discrete([1,3,5,7], index=(0,3))
... def squared(x):
...     return [i**2 for i in x]
...
>>> squared([0,2,4,6,8,10])
[1, 4, 16, 25, 64, 100]


integers(ints=True, index=None)
impose the set of integers (by rounding) for the given function
The function’s input will be mapped to the ints, where:
• if ints is True, return results as ints; otherwise, use floats
• if index tuple provided, only round at the given indices

>>> @integers()
... def identity(x):
...     return x
...
>>> identity([0.123, 1.789, 4.000])
[0, 2, 4]

>>> @integers(ints=float, index=(0,3,4))
... def squared(x):
...     return [i**2 for i in x]
...
>>> squared([0.12, 0.12, 4.01, 4.01, 8, 8])
[0.0, 0.0144, 16.080099999999998, 16.0, 64.0, 64.0]

near_integers(x)
the sum of all deviations from int values
unique(seq, full=None)
replace the duplicate values with unique values in ‘full’
If full is a type (int or float), then unique values of the given type are selected from range(min(seq),max(seq)).
If full is a dict of {‘min’:min, ‘max’:max}, then unique floats are selected from range(min(seq),max(seq)). If
full is a sequence (list or set), then unique values are selected from the given sequence.
For example:

>>> unique([1,2,3,1,2,4], range(11))
[1, 2, 3, 9, 8, 4]
>>> unique([1,2,3,1,2,9], range(11))
[1, 2, 3, 8, 5, 9]
>>> try:
...     unique([1,2,3,1,2,13], range(11))
... except ValueError:
...     pass
...
>>>
>>> unique([1,2,3,1,2,4], {'min':0, 'max':11})
[1, 2, 3, 4.175187820357143, 2.5407265707465716, 4]
>>> unique([1,2,3,1,2,4], {'min':0, 'max':11, 'type':int})
[1, 2, 3, 6, 8, 4]
>>> unique([1,2,3,1,2,4], float)
[1, 2, 3, 1.012375036824941, 3.9821250727509905, 4]
>>> unique([1,2,3,1,2,10], int)
[1, 2, 3, 9, 6, 10]
>>> try:
...     unique([1,2,3,1,2,4], int)
... except ValueError:
...     pass
...
has_unique(x)
check for uniqueness of the members of x
impose_unique(seq=None)
ensure all values are unique and found in the given set
For example:

>>> @impose_unique(range(11))
... def doit(x):
...     return x
...
>>> doit([1,2,3,1,2,4])
[1, 2, 3, 9, 8, 4]
>>> doit([1,2,3,1,2,10])
[1, 2, 3, 8, 5, 10]
>>> try:
...     doit([1,2,3,1,2,13])
... except ValueError:
...     print("Bad Input")
...
Bad Input
bounded(seq, bounds, index=None, clip=True, nearest=True)
bound a sequence by bounds = [min,max]
For example:

>>> sequence = [0.123, 1.244, -4.755, 10.731, 6.207]
>>>
>>> bounded(sequence, (0,5))
array([0.123, 1.244, 0. , 5. , 5. ])
>>>
>>> bounded(sequence, (0,5), index=(0,2,4))
array([ 0.123, 1.244, 0. , 10.731, 5. ])
>>>
>>> bounded(sequence, (0,5), clip=False)
array([0.123 , 1.244 , 3.46621839, 1.44469038, 4.88937466])
>>>
>>> bounds = [(0,5),(7,10)]
>>> my.constraints.bounded(sequence, bounds)
array([ 0.123, 1.244, 0. , 10. , 7. ])
>>> my.constraints.bounded(sequence, bounds, nearest=False)
array([ 0.123, 1.244, 7. , 10. , 5. ])
>>> my.constraints.bounded(sequence, bounds, nearest=False, clip=False)
array([0.123 , 1.244 , 0.37617154, 8.79013111, 7.40864242])
>>> my.constraints.bounded(sequence, bounds, clip=False)
array([0.123 , 1.244 , 2.38186577, 7.41374049, 9.14662911])
>>>
impose_bounds(bounds, index=None, clip=True, nearest=True)
generate a function where bounds=[min,max] on a sequence are imposed
For example:

>>> sequence = [0.123, 1.244, -4.755, 10.731, 6.207]
>>>
>>> @impose_bounds((0,5))
... def simple(x):
...     return x
...
>>> simple(sequence)
[0.123, 1.244, 0.0, 5.0, 5.0]
>>>
>>> @impose_bounds((0,5), index=(0,2,4))
... def double(x):
...     return [i*2 for i in x]
...
>>> double(sequence)
[0.246, 2.488, 0.0, 21.462, 10.0]
>>>
>>> @impose_bounds((0,5), index=(0,2,4), clip=False)
... def square(x):
...     return [i*i for i in x]
...
>>> square(sequence)
[0.015129, 1.547536, 14.675791119810688, 115.154361, 1.399551896073788]
>>>
>>> @impose_bounds([(0,5),(7,10)])
... def simple(x):
...     return x
...
>>> simple(sequence)
[0.123, 1.244, 0.0, 10.0, 7.0]
>>>
>>> @impose_bounds([(0,5),(7,10)], nearest=False)
... def simple(x):
...     return x
...
>>> simple(sequence)
[0.123, 1.244, 0.0, 5.0, 5.0]
>>> simple(sequence)
[0.123, 1.244, 7.0, 10.0, 5.0]
>>>
>>> @impose_bounds({0:(0,5), 2:(0,5), 4:[(0,5),(7,10)]})
... def simple(x):
...     return x
...
>>> simple(sequence)
[0.123, 1.244, 0.0, 10.731, 7.0]
>>>
>>> @impose_bounds({0:(0,5), 2:(0,5), 4:[(0,5),(7,10)]}, index=(0,2))
... def simple(x):
...     return x
...
>>> simple(sequence)
[0.123, 1.244, 0.0, 10.731, 6.207]
impose_as(mask, offset=None)
generate a function, where some input tracks another input
mask should be a set of tuples of positional index and tracked index, where the tuple should contain two different
integers. The mask will be applied to the input, before the decorated function is called.
The offset is applied to the second member of the tuple, and can accumulate.
For example,

>>> @impose_as([(0,1),(3,1),(4,5),(5,6),(5,7)])
... def same(x):
... return x
...
>>> same([9,8,7,6,5,4,3,2,1])
[9, 9, 7, 9, 5, 5, 5, 5, 1]
>>> same([0,1,0,1])
[0, 0, 0, 0]
>>> same([-1,-2,-3,-4,-5,-6,-7])
[-1, -1, -3, -1, -5, -5, -5]
>>>
>>> @impose_as([(0,1),(3,1),(4,5),(5,6),(5,7)], 10)
... def doit(x):
... return x
...
>>> doit([9,8,7,6,5,4,3,2,1])
[9, 19, 7, 9, 5, 15, 25, 25, 1]
>>> doit([0,1,0,1])
[0, 10, 0, 0]
>>> doit([-1,-2,-3,-4,-5,-6])
[-1, 9, -3, -1, -5, 5]
>>> doit([-1,-2,-3,-4,-5,-6,-7])
[-1, 9, -3, -1, -5, 5, 15]

impose_at(index, target=0.0)
generate a function, where some input is set to the target
index should be a set of indices to be fixed at the target. The target can either be a single value (e.g. float), or a
list of values.
For example,


>>> @impose_at([1,3,4,5,7], -99)
... def same(x):
... return x
...
>>> same([1,1,1,1,1,1,1])
[1, -99, 1, -99, -99, -99, 1]
>>> same([1,1,1,1])
[1, -99, 1, -99]
>>> same([1,1])
[1, -99]
>>>
>>> @impose_at([1,3,4,5,7], [0,2,4,6])
... def doit(x):
... return x
...
>>> doit([1,1,1,1,1,1,1])
[1, 0, 1, 2, 4, 6, 1]
>>> doit([1,1,1,1])
[1, 0, 1, 2]
>>> doit([1,1])
[1, 0]

impose_measure(npts, tracking={}, noweight={})


generate a function, that constrains measure positions and weights
npts is a tuple of the product_measure dimensionality
tracking is a dict of collapses, or a tuple of dicts of collapses. a tracking collapse is a dict of {measure: {pairs
of indices}}, where the pairs of indices are where the positions will be constrained to have the same value, and
the weight from the second index in the pair will be removed and added to the weight of the first index
noweight is a dict of collapses, or a tuple of dicts of collapses. a noweight collapse is a dict of {measure:
{indices}), where the indices are where the measure will be constrained to have zero weight
For example,

>>> pos = {0: {(0,1)}, 1:{(0,2)}}
>>> wts = {0: {1}, 1: {1, 2}}
>>> npts = (3,3)
>>>
>>> @impose_measure(npts, pos)
... def same(x):
... return x
...
>>> same([.5, 0., .5, 2., 4., 6., .25, .5, .25, 6., 4., 2.])
[0.5, 0.0, 0.5, 2.0, 2.0, 6.0, 0.5, 0.5, 0.0, 5.0, 3.0, 5.0]
>>> same([1./3, 1./3, 1./3, 1., 2., 3., 1./3, 1./3, 1./3, 1., 2., 3.])
[0.6666666666666666, 0.0, 0.3333333333333333, 1.3333333333333335, 1.3333333333333335, 3.3333333333333335, 0.6666666666666666, 0.3333333333333333, 0.0, 1.6666666666666667, 2.666666666666667, 1.6666666666666667]

>>>
>>> @impose_measure(npts, {}, wts)
... def doit(x):
... return x
...
>>> doit([.5, 0., .5, 2., 4., 6., .25, .5, .25, 6., 4., 2.])
[0.5, 0.0, 0.5, 2.0, 4.0, 6.0, 1.0, 0.0, 0.0, 4.0, 2.0, 0.0]
>>> doit([1./3, 1./3, 1./3, 1., 2., 3., 1./3, 1./3, 1./3, 1., 2., 3.])
[0.5, 0.0, 0.5, 1.0, 2.0, 3.0, 1.0, 0.0, 0.0, 2.0, 3.0, 4.0]
>>>
>>> @impose_measure(npts, pos, wts)
... def both(x):
... return x
...
>>> both([1./3, 1./3, 1./3, 1., 2., 3., 1./3, 1./3, 1./3, 1., 2., 3.])
[0.66666666666666663, 0.0, 0.33333333333333331, 1.3333333333333335, 1.3333333333333335, 3.3333333333333335, 1.0, 0.0, 0.0, 2.0, 3.0, 2.0]

>>>

impose_position(npts, tracking)
generate a function, that constrains measure positions
npts is a tuple of the product_measure dimensionality
tracking is a dict of collapses, or a tuple of dicts of collapses. a tracking collapse is a dict of {measure: {pairs
of indices}}, where the pairs of indices are where the positions will be constrained to have the same value, and
the weight from the second index in the pair will be removed and added to the weight of the first index
For example,

>>> pos = {0: {(0,1)}, 1:{(0,2)}}
>>> npts = (3,3)
>>>
>>> @impose_position(npts, pos)
... def same(x):
... return x
...
>>> same([.5, 0., .5, 2., 4., 6., .25, .5, .25, 6., 4., 2.])
[0.5, 0.0, 0.5, 2.0, 2.0, 6.0, 0.5, 0.5, 0.0, 5.0, 3.0, 5.0]
>>> same([1./3, 1./3, 1./3, 1., 2., 3., 1./3, 1./3, 1./3, 1., 2., 3.])
[0.6666666666666666, 0.0, 0.3333333333333333, 1.3333333333333335, 1.3333333333333335, 3.3333333333333335, 0.6666666666666666, 0.3333333333333333, 0.0, 1.6666666666666667, 2.666666666666667, 1.6666666666666667]

>>>

impose_weight(npts, noweight)
generate a function, that constrains measure weights
npts is a tuple of the product_measure dimensionality
noweight is a dict of collapses, or a tuple of dicts of collapses. a noweight collapse is a dict of {measure:
{indices}), where the indices are where the measure will be constrained to have zero weight
For example,

>>> wts = {0: {1}, 1: {1, 2}}
>>> npts = (3,3)
>>>
>>> @impose_weight(npts, wts)
... def doit(x):
... return x
...
>>> doit([.5, 0., .5, 2., 4., 6., .25, .5, .25, 6., 4., 2.])
[0.5, 0.0, 0.5, 2.0, 4.0, 6.0, 1.0, 0.0, 0.0, 4.0, 2.0, 0.0]
>>> doit([1./3, 1./3, 1./3, 1., 2., 3., 1./3, 1./3, 1./3, 1., 2., 3.])
[0.5, 0.0, 0.5, 1.0, 2.0, 3.0, 1.0, 0.0, 0.0, 2.0, 3.0, 4.0]
>>>

and_(*constraints, **settings)
combine several constraints into a single constraint
Inputs: constraints – constraint functions
Additional Inputs: maxiter – maximum number of iterations to attempt to solve [default: 100]

Note: If a repeating cycle is detected, some of the inputs may be randomized.
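
For example, a rough sketch of combining two constraints solvers (the results are approximate floating point values, so no output is shown):

>>> from mystic.constraints import and_, with_mean, with_spread
>>> @with_mean(5.0)
... def mean_of_five(x):
...     return x
...
>>> @with_spread(4.0)
... def spread_of_four(x):
...     return x
...
>>> both = and_(mean_of_five, spread_of_four)
>>> x = both([1., 2., 3., 4.])
>>> # x now (approximately) has mean 5.0 and spread (max - min) 4.0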

or_(*constraints, **settings)
create a constraint that is satisfied if any constraints are satisfied
Inputs: constraints – constraint functions
Additional Inputs: maxiter – maximum number of iterations to attempt to solve [default: 100]

Note: If a repeating cycle is detected, some of the inputs may be randomized.

not_(constraint, **settings)
invert the region where the given constraints are valid, then solve
Inputs: constraint – constraint function
Additional Inputs: maxiter – maximum number of iterations to attempt to solve [default: 100]

Note: If a repeating cycle is detected, some of the inputs may be randomized.

2.8 coupler module

Function Couplers
These methods can be used to couple two functions together, and represent some common patterns found in applying
constraints and penalty methods.
For example, the “outer” method called on y = f(x), with outer=c(x), will convert y = f(x) to y’ = c(f(x)). Similarly,
the “inner” method called on y = f(x), with inner=c(x), will convert y = f(x) to y’ = f(c(x)).
additive(penalty=<function <lambda>>, args=None, kwds=None)
penalize a function with another function: y = f(x) to y’ = f(x) + p(x)
This is useful, for example, in penalizing a cost function where the constraints are violated; thus, the satisfying
the constraints will be preferred at every cost function evaluation.
For example:

>>> def squared(x):
...     return x**2
...
>>> # equivalent to: (x+1) + (x**2)
>>> @additive(squared)
... def constrain(x):
...     return x+1
...
>>> from numpy import array
>>> x = array([1,2,3,4,5])
>>> constrain(x)
array([ 3,  7, 13, 21, 31])
additive_proxy(penalty=<function <lambda>>, args=None, kwds=None)
penalize a function with another function: y = f(x) to y’ = f(x) + p(x)


This is useful, for example, in penalizing a cost function where the constraints are violated; thus, the satisfying
the constraints will be preferred at every cost function evaluation.
This function does not preserve decorated function signature, but passes args and kwds to the penalty function.
and_(*penalties, **settings)
combine several penalties into a single penalty function by summation
Inputs: penalties – penalty functions
Additional Inputs: ptype – penalty function type [default: linear_equality] args – arguments for the penalty
function [default: ()] kwds – keyword arguments for the penalty function [default: {}] k – penalty multi-
plier [default: 1] h – iterative multiplier [default: 5]
NOTE: The defaults provide a linear combination of the individual penalties without any scaling. A dif-
ferent ptype (from ‘mystic.penalty’) will apply a nonlinear scaling to the combined penalty, while a differ-
ent k will apply a linear scaling.
NOTE: This function is also useful for combining constraints solvers into a single constraints solver, how-
ever can not do so directly. Constraints solvers must first be converted to penalty functions (i.e. with
‘as_penalty’), then combined, then can be converted to a constraints solver (i.e. with ‘as_constraint’). The
resulting constraints will likely be more expensive to evaluate and less accurate than writing the constraints
solver from scratch.
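
For example, a rough sketch of summing two penalties (a sketch only, using the condition-decorator pattern from mystic.penalty shown in the constraints module):

>>> from mystic.coupler import and_
>>> from mystic.penalty import quadratic_equality, quadratic_inequality
>>> @quadratic_equality(lambda x: x[0] - 1.0)     # require x[0] == 1.0
... def p1(x):
...     return 0.0
...
>>> @quadratic_inequality(lambda x: x[1] - 2.0)   # require x[1] <= 2.0
... def p2(x):
...     return 0.0
...
>>> p = and_(p1, p2)          # the combined (summed) penalty
>>> p([1.0, 0.0]) == 0.0      # both conditions satisfied
True
>>> p([3.0, 5.0]) > 0.0       # violations are penalized
True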
inner(inner=<function <lambda>>, args=None, kwds=None)
nest a function within another function: convert y = f(x) to y’ = f(c(x))
This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable
x as input, returning a modified x. The “inner” coupler is utilized by mystic.solvers to bind constraints to a cost
function; thus the constraints are imposed every cost function evaluation.
For example:

>>> def squared(x):
...     return x**2
...
>>> # equivalent to: ((x**2)+1)
>>> @inner(squared)
... def constrain(x):
...     return x+1
...
>>> from numpy import array
>>> x = array([1,2,3,4,5])
>>> constrain(x)
array([ 2,  5, 10, 17, 26])
inner_proxy(inner=<function <lambda>>, args=None, kwds=None)
nest a function within another function: convert y = f(x) to y’ = f(c(x))
This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable
x as input, returning a modified x.
This function applies the “inner” coupler pattern. However, it does not preserve decorated function signature –
it passes args and kwds to the inner function instead of the decorated function.
not_(penalty, **settings)
invert, so penalizes the region where the given penalty is valid
Inputs: penalty – a penalty function
Additional Inputs: ptype – penalty function type [default: linear_equality] args – arguments for the penalty
function [default: ()] kwds – keyword arguments for the penalty function [default: {}] k – penalty multi-
plier [default: 1] h – iterative multiplier [default: 5]
or_(*penalties, **settings)
create a single penalty that selects the minimum of several penalties
Inputs: penalties – penalty functions
Additional Inputs: ptype – penalty function type [default: linear_equality] args – arguments for the penalty
function [default: ()] kwds – keyword arguments for the penalty function [default: {}] k – penalty multi-
plier [default: 1] h – iterative multiplier [default: 5]


NOTE: The defaults provide a linear combination of the individual penalties without any scaling. A dif-
ferent ptype (from ‘mystic.penalty’) will apply a nonlinear scaling to the combined penalty, while a differ-
ent k will apply a linear scaling.
NOTE: This function is also useful for combining constraints solvers into a single constraints solver, how-
ever can not do so directly. Constraints solvers must first be converted to penalty functions (i.e. with
‘as_penalty’), then combined, then can be converted to a constraints solver (i.e. with ‘as_constraint’). The
resulting constraints will likely be more expensive to evaluate and less accurate than writing the constraints
solver from scratch.
outer(outer=<function <lambda>>, args=None, kwds=None)
wrap a function around another function: convert y = f(x) to y’ = c(f(x))
This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable
x as input, returning a modified x.
For example:

>>> def squared(x):
...     return x**2
...
>>> # equivalent to: ((x+1)**2)
>>> @outer(squared)
... def constrain(x):
...     return x+1
...
>>> from numpy import array
>>> x = array([1,2,3,4,5])
>>> constrain(x)
array([ 4,  9, 16, 25, 36])
outer_proxy(outer=<function <lambda>>, args=None, kwds=None)
wrap a function around another function: convert y = f(x) to y’ = c(f(x))
This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable
x as input, returning a modified x.
This function applies the “outer” coupler pattern. However, it does not preserve decorated function signature –
it passes args and kwds to the outer function instead of the decorated function.

2.9 differential_evolution module

2.9.1 Solvers

This module contains a collection of optimization routines based on Storn and Price’s differential evolution algorithm.
The core solver algorithm was adapted from Phillips’s DETest.py. An alternate solver is provided that follows the logic
in Price, Storn, and Lampinen – in that both a current generation and a trial generation are maintained, and all vectors
for creating difference vectors and mutations draw from the current generation, which remains invariant until the
end of the iteration.
A minimal interface that mimics a scipy.optimize interface has also been implemented, and functionality from the
mystic solver API has been added with reasonable defaults.
Minimal function interface to optimization routines:
    diffev – Differential Evolution (DE) solver
    diffev2 – Price & Storn's Differential Evolution solver
The corresponding solvers built on mystic's AbstractSolver are:
    DifferentialEvolutionSolver – a DE solver
    DifferentialEvolutionSolver2 – Storn & Price's DE solver
Mystic solver behavior activated in diffev and diffev2:
• EvaluationMonitor = Monitor()
• StepMonitor = Monitor()
• strategy = Best1Bin
• termination = ChangeOverGeneration(ftol,gtol), if gtol provided; otherwise, termination = VTRChangeOverGenerations(ftol)


Storn & Price’s DE Solver has also been implemented to use the “map” interface. Mystic enables the user to override
the standard python map function with their own ‘map’ function, or one of the map functions provided by the pathos
package (see http://dev.danse.us/trac/pathos) for distributed and high-performance computing.

2.9.2 Usage

Practical advice for how to configure the Differential Evolution Solver for your own objective function can be found
on R. Storn’s web page (http://www.icsi.berkeley.edu/~storn/code.html), and is reproduced here:

First try the following classical settings for the solver configuration:
Choose a crossover strategy (e.g. Rand1Bin), set the number of parents
NP to 10 times the number of parameters, select ScalingFactor=0.8, and
CrossProbability=0.9.

It has been found recently that selecting ScalingFactor from the interval
[0.5, 1.0] randomly for each generation or for each difference vector,
a technique called dither, improves convergence behaviour significantly,
especially for noisy objective functions.

It has also been found that setting CrossProbability to a low value,
e.g. CrossProbability=0.2 helps optimizing separable functions since
it fosters the search along the coordinate axes. On the contrary,
this choice is not effective if parameter dependence is encountered,
something which is frequently occurring in real-world optimization
problems rather than artificial test functions. So for parameter
dependence the choice of CrossProbability=0.9 is more appropriate.

Another interesting empirical finding is that raising NP above, say, 40
does not substantially improve the convergence, independent of the
number of parameters. It is worthwhile to experiment with these suggestions.

Make sure that you initialize your parameter vectors by exploiting
their full numerical range, i.e. if a parameter is allowed to exhibit
values in the range [-100, 100] it's a good idea to pick the initial
values from this range instead of unnecessarily restricting diversity.

Keep in mind that different problems often require different settings
for NP, ScalingFactor and CrossProbability (see Ref 1, 2). If you
experience misconvergence, you typically can increase the value for NP,
but often you only have to adjust ScalingFactor to be a little lower or
higher than 0.8. If you increase NP and simultaneously lower ScalingFactor
a little, convergence is more likely to occur but generally takes longer,
i.e. DE is getting more robust (a convergence speed/robustness tradeoff).

If you still get misconvergence you might want to instead try a different
crossover strategy. The most commonly used are Rand1Bin, Rand1Exp,
Best1Bin, and Best1Exp. The crossover strategy is not so important a
choice, although K. Price claims that binomial (Bin) is never worse than
exponential (Exp).

In case of continued misconvergence, check the choice of objective function.
There might be a better one to describe your problem. Any knowledge that
you have about the problem should be worked into the objective function.
A good objective function can make all the difference.

See mystic.examples.test_rosenbrock for an example of using DifferentialEvolutionSolver. DifferentialEvolutionSolver2 has the identical interface and usage.
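
As a rough illustration, the classical settings above map onto the minimal interface approximately as follows (a sketch only; mystic.models.rosen is assumed as a test objective, and output is omitted since it depends on the random initial population):

>>> from mystic.solvers import diffev2
>>> from mystic.models import rosen
>>> ndim = 3
>>> bounds = [(-5., 5.)] * ndim               # exploit the full parameter range
>>> result = diffev2(rosen, x0=bounds, npop=10*ndim, cross=0.9, scale=0.8,
...                  bounds=bounds, ftol=1e-8, gtol=100, disp=0, full_output=1)
>>> xopt, fopt = result[0], result[1]         # minimizer and its cost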


All solvers included in this module provide the standard signal handling. For more information, see mystic.mystic.abstract_solver.

References

1. Storn, R. and Price, K. Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over
Continuous Spaces. Journal of Global Optimization 11: 341-359, 1997.
2. Price, K., Storn, R., and Lampinen, J. - Differential Evolution, A Practical Approach to Global Optimization.
Springer, 1st Edition, 2005.
class DifferentialEvolutionSolver(dim, NP=4)
Bases: mystic.abstract_solver.AbstractSolver
Differential Evolution optimization.
Takes two initial inputs: dim – dimensionality of the problem NP – size of the trial solution population. [re-
quires: NP >= 4]
All important class members are inherited from AbstractSolver.
SetConstraints(constraints)
apply a constraints function to the optimization
input::
• a constraints function of the form: xk’ = constraints(xk), where xk is the current parameter vector.
Ideally, this function is constructed so the parameter vector it passes to the cost function will
satisfy the desired (i.e. encoded) constraints.
Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using differential evolution.
Uses a differential evolution algorithm to find the minimum of a function of one or more variables.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial so-
lutions.
• CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.
• ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• disp (bool, default=False) – if True, print convergence messages.
Returns None
UpdateGenealogyRecords(id, newchild)
Override me for more refined behavior. Currently all changes are logged.
_Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration Note that ExtraArgs should be a tuple of extra arguments


__init__(dim, NP=4)
Takes two initial inputs: dim – dimensionality of the problem NP – size of the trial solution population.
[requires: NP >= 4]
All important class members are inherited from AbstractSolver.
__module__ = 'mystic.differential_evolution'
_decorate_objective(cost, ExtraArgs=None)
decorate cost function with bounds, penalties, monitors, etc
_process_inputs(kwds)
process and activate input settings
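
A sketch of class-based usage (a sketch only; mystic.termination.ChangeOverGeneration, mystic.strategy.Best1Bin, and mystic.models.rosen are assumed, as documented elsewhere in this manual):

>>> from mystic.solvers import DifferentialEvolutionSolver
>>> from mystic.termination import ChangeOverGeneration as COG
>>> from mystic.strategy import Best1Bin
>>> from mystic.models import rosen
>>> ndim, npop = 3, 30
>>> solver = DifferentialEvolutionSolver(ndim, npop)
>>> solver.SetRandomInitialPoints(min=[-5.]*ndim, max=[5.]*ndim)
>>> solver.SetStrictRanges(min=[-5.]*ndim, max=[5.]*ndim)
>>> solver.Solve(rosen, termination=COG(1e-10, 30), strategy=Best1Bin,
...              CrossProbability=0.9, ScalingFactor=0.8, disp=False)
>>> xopt = solver.bestSolution    # approximately [1., 1., 1.]
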
class DifferentialEvolutionSolver2(dim, NP=4)
Bases: mystic.abstract_map_solver.AbstractMapSolver
Differential Evolution optimization, using Storn and Price’s algorithm.
Alternate implementation:
• utilizes a map-reduce interface, extensible to parallel computing
• both a current and a next generation are kept, while the current generation is invariant during the main
DE logic
Takes two initial inputs: dim – dimensionality of the problem NP – size of the trial solution population. [re-
quires: NP >= 4]
All important class members are inherited from AbstractSolver.
SetConstraints(constraints)
apply a constraints function to the optimization
input::
• a constraints function of the form: xk’ = constraints(xk), where xk is the current parameter vector.
Ideally, this function is constructed so the parameter vector it passes to the cost function will
satisfy the desired (i.e. encoded) constraints.
Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using differential evolution.
Uses a differential evolution algorithm to find the minimum of a function of one or more variables. This
implementation holds the current generation invariant until the end of each iteration.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial so-
lutions.
• CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.
• ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.


• disp (bool, default=False) – if True, print convergence messages.


Returns None
UpdateGenealogyRecords(id, newchild)
Override me for more refined behavior. Currently all changes are logged.
_Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration Note that ExtraArgs should be a tuple of extra arguments
__init__(dim, NP=4)
Takes two initial inputs: dim – dimensionality of the problem NP – size of the trial solution population.
[requires: NP >= 4]
All important class members are inherited from AbstractSolver.
__module__ = 'mystic.differential_evolution'
_decorate_objective(cost, ExtraArgs=None)
decorate cost function with bounds, penalties, monitors, etc
_process_inputs(kwds)
process and activate input settings
diffev(cost, x0, npop=4, args=(), bounds=None, ftol=0.005, gtol=None, maxiter=None, maxfun=None,
cross=0.9, scale=0.8, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using differential evolution.
Uses a differential evolution algorithm to find the minimum of a function of one or more variables. Mimics a
scipy.optimize style interface.
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• x0 (ndarray) – the initial guess parameter vector x if desired start is a single point, otherwise
takes a list of (min,max) bounds that define a region from which random initial points are
drawn.
• npop (int, default=4) – size of the trial solution population.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• ftol (float, default=5e-3) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=None) – maximum iterations to run without improvement.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• cross (float, default=0.9) – the probability of cross-parameter mutations.
• scale (float, default=0.8) – multiplier for mutations on the trial solution.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.


• handler (bool, default=False) – if True, enable handling interrupt signals.


• strategy (strategy, default=None) – override the default mutation strategy.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0
otherwise).
• map (func, default=None) – a (parallel) map function y = map(f, x).
Returns (xopt, {fopt, iter, funcalls, warnflag}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function


• fopt (float): value of cost function at minimum: fopt = cost(xopt)
• iter (int): number of iterations
• funcalls (int): number of function calls
• warnflag (int): warning flag:
– 1 : Maximum number of function evaluations
– 2 : Maximum number of iterations
• allvecs (list): a list of solutions at each iteration
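
For example, when both full_output and retall are requested, the flattened return tuple can be unpacked as follows (a sketch only; mystic.models.rosen is assumed, and numerical values are omitted since they depend on the run):

>>> from mystic.solvers import diffev
>>> from mystic.models import rosen
>>> results = diffev(rosen, x0=[(-5., 5.)]*3, npop=30, ftol=1e-8, gtol=100,
...                  full_output=1, retall=1, disp=0)
>>> xopt, fopt, iters, funcalls, warnflag, allvecs = results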

diffev2(cost, x0, npop=4, args=(), bounds=None, ftol=0.005, gtol=None, maxiter=None, maxfun=None, cross=0.9, scale=0.8, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using Storn & Price’s differential evolution.
Uses Storn & Prices’s differential evolution algorithm to find the minimum of a function of one or more vari-
ables. Mimics a scipy.optimize style interface.
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• x0 (ndarray) – the initial guess parameter vector x if desired start is a single point, otherwise
takes a list of (min,max) bounds that define a region from which random initial points are
drawn.
• npop (int, default=4) – size of the trial solution population.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• ftol (float, default=5e-3) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=None) – maximum iterations to run without improvement.
• maxiter (int, default=None) – the maximum number of iterations to perform.


• maxfun (int, default=None) – the maximum number of function evaluations.


• cross (float, default=0.9) – the probability of cross-parameter mutations.
• scale (float, default=0.8) – multiplier for mutations on the trial solution.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• strategy (strategy, default=None) – override the default mutation strategy.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0
otherwise).
• map (func, default=None) – a (parallel) map function y = map(f, x).
Returns (xopt, {fopt, iter, funcalls, warnflag}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function


• fopt (float): value of cost function at minimum: fopt = cost(xopt)
• iter (int): number of iterations
• funcalls (int): number of function calls
• warnflag (int): warning flag:
– 1 : Maximum number of function evaluations
– 2 : Maximum number of iterations
• allvecs (list): a list of solutions at each iteration

2.10 ensemble module

2.10.1 Solvers

This module contains a collection of optimization routines that use “map” to distribute several optimizer instances
over parameter space. Each solver accepts an imported solver object as the “nested” solver, which becomes the target
of the map function.


The set of solvers built on mystic's AbstractEnsembleSolver are:
    LatticeSolver – start from center of N grid points
    BuckshotSolver – start from N random points in parameter space
    SparsitySolver – start from N points sampled in sparse regions of space

2.10.2 Usage

See mystic.examples.buckshot_example06 for an example of using BuckshotSolver. See mystic.examples.lattice_example06 for an example of using LatticeSolver.
All solvers included in this module provide the standard signal handling. For more information, see mystic.mystic.abstract_solver.
class LatticeSolver(dim, nbins=8)
Bases: mystic.abstract_ensemble_solver.AbstractEnsembleSolver
parallel mapped optimization starting from the centers of N grid points
Takes two initial inputs: dim – dimensionality of the problem nbins – tuple of number of bins in each dimen-
sion
All important class members are inherited from AbstractEnsembleSolver.
_InitialPoints()
Generate a grid of starting points for the ensemble of optimizers
__init__(dim, nbins=8)
Takes two initial inputs: dim – dimensionality of the problem nbins – tuple of number of bins in each
dimension
All important class members are inherited from AbstractEnsembleSolver.
__module__ = 'mystic.ensemble'
class BuckshotSolver(dim, npts=8)
Bases: mystic.abstract_ensemble_solver.AbstractEnsembleSolver
parallel mapped optimization starting from N uniform randomly sampled points
Takes two initial inputs: dim – dimensionality of the problem npts – number of parallel solver instances
All important class members are inherited from AbstractEnsembleSolver.
_InitialPoints()
Generate a grid of starting points for the ensemble of optimizers
__init__(dim, npts=8)
Takes two initial inputs: dim – dimensionality of the problem npts – number of parallel solver instances
All important class members are inherited from AbstractEnsembleSolver.
__module__ = 'mystic.ensemble'
class SparsitySolver(dim, npts=8, rtol=None)
Bases: mystic.abstract_ensemble_solver.AbstractEnsembleSolver
parallel mapped optimization starting from N points sampled from sparse regions
Takes three initial inputs: dim – dimensionality of the problem npts – number of parallel solver instances rtol
– size of radial tolerance for sparsity
All important class members are inherited from AbstractEnsembleSolver.


_InitialPoints()
Generate a grid of starting points for the ensemble of optimizers
__init__(dim, npts=8, rtol=None)
Takes three initial inputs: dim – dimensionality of the problem npts – number of parallel solver instances
rtol – size of radial tolerance for sparsity
All important class members are inherited from AbstractEnsembleSolver.
__module__ = 'mystic.ensemble'
lattice(cost, ndim, nbins=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None,
full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using the lattice ensemble solver.
Uses a lattice ensemble algorithm to find the minimum of a function of one or more variables. Mimics the
scipy.optimize.fmin interface. Starts N solver instances at regular intervals in parameter space, deter-
mined by nbins (N = numpy.prod(nbins); len(nbins) == ndim).
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• ndim (int) – dimensionality of the problem.
• nbins (tuple(int), default=8) – total bins, or # of bins in each dimension.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=10) – maximum iterations to run without improvement.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• solver (solver, default=None) – override the default nested Solver instance.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0
otherwise).
• map (func, default=None) – a (parallel) map function y = map(f, x).


• dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting


position using the given distribution.
Returns (xopt, {fopt, iter, funcalls, warnflag, allfuncalls},
{allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function


• fopt (float): value of cost function at minimum: fopt = cost(xopt)
• iter (int): number of iterations
• funcalls (int): number of function calls
• warnflag (int): warning flag:
– 1 : Maximum number of function evaluations
– 2 : Maximum number of iterations
• allfuncalls (int): total function calls (for all solver instances)
• allvecs (list): a list of solutions at each iteration
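
A sketch of typical use (a sketch only, assuming mystic.models.rosen; output is omitted):

>>> from mystic.solvers import lattice
>>> from mystic.models import rosen
>>> # a 2x2x2 lattice of nested solvers over a 3-dimensional box
>>> result = lattice(rosen, 3, nbins=(2,2,2), bounds=[(-5., 5.)]*3,
...                  ftol=1e-6, disp=0, full_output=1)
>>> xopt, fopt = result[0], result[1]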

buckshot(cost, ndim, npts=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using the buckshot ensemble solver.
Uses a buckshot ensemble algorithm to find the minimum of a function of one or more variables. Mimics the
scipy.optimize.fmin interface. Starts npts solver instances at random points in parameter space.
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• ndim (int) – dimensionality of the problem.
• npts (int, default=8) – number of solver instances.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=10) – maximum iterations to run without improvement.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• solver (solver, default=None) – override the default nested Solver instance.
• handler (bool, default=False) – if True, enable handling interrupt signals.


• itermon (monitor, default=None) – override the default GenerationMonitor.


• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0
otherwise).
• map (func, default=None) – a (parallel) map function y = map(f, x).
• dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting
position using the given distribution.
Returns (xopt, {fopt, iter, funcalls, warnflag, allfuncalls},
{allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function


• fopt (float): value of cost function at minimum: fopt = cost(xopt)
• iter (int): number of iterations
• funcalls (int): number of function calls
• warnflag (int): warning flag:
– 1 : Maximum number of function evaluations
– 2 : Maximum number of iterations
• allfuncalls (int): total function calls (for all solver instances)
• allvecs (list): a list of solutions at each iteration

sparsity(cost, ndim, npts=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using the sparsity ensemble solver.
Uses a sparsity ensemble algorithm to find the minimum of a function of one or more variables. Mimics the
scipy.optimize.fmin interface. Starts npts solver instances at points in parameter space where existing
points are sparse.
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• ndim (int) – dimensionality of the problem.
• npts (int, default=8) – number of solver instances.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=10) – maximum iterations to run without improvement.


• rtol (float, default=None) – minimum acceptable distance from other points.


• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• solver (solver, default=None) – override the default nested Solver instance.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0
otherwise).
• map (func, default=None) – a (parallel) map function y = map(f, x).
• dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting
position using the given distribution.
Returns (xopt, {fopt, iter, funcalls, warnflag, allfuncalls},
{allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function


• fopt (float): value of cost function at minimum: fopt = cost(xopt)
• iter (int): number of iterations
• funcalls (int): number of function calls
• warnflag (int): warning flag:
– 1 : Maximum number of function evaluations
– 2 : Maximum number of iterations
• allfuncalls (int): total function calls (for all solver instances)
• allvecs (list): a list of solutions at each iteration
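
The buckshot and sparsity interfaces are used in the same way; a brief sketch (assuming mystic.models.rosen; output is omitted):

>>> from mystic.solvers import buckshot, sparsity
>>> from mystic.models import rosen
>>> xopt = buckshot(rosen, 3, npts=8, bounds=[(-5., 5.)]*3, ftol=1e-6, disp=0)
>>> xopt = sparsity(rosen, 3, npts=8, bounds=[(-5., 5.)]*3, ftol=1e-6, disp=0)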

2.11 filters module

Input/output ‘filters’


Identity(x)
identity filter, F, where F(x) yields x
NullChecker(params, evalpts, *args)
null validity check
PickComponent(n, multiplier=1.0)
component filter, F, where F(x) yields x[n]

2.12 forward_model module

This module contains classes that aid in constructing cost functions. Cost function can easily be created by hand;
however, mystic also provides an automated method that allows the dynamic wrapping of forward models into cost
function objects.

2.12.1 Usage

The basic usage pattern for a cost factory is to generate a cost function from a set of data points and a corresponding
set of evaluation points. The cost factory requires a “model factory”, which is just a generator of model function in-
stances from a list of coefficients. The following example uses numpy.poly1d, which provides a factory for generating
polynomials. An expanded version of the following can be found in mystic.examples.example12.
>>> # get a model factory
>>> import numpy as np
>>> FunctionFactory = np.poly1d
>>>
>>> # generate some evaluation points
>>> xpts = 0.1 * np.arange(-50.,51.)
>>>
>>> # we don't have real data, so generate fake data from target and model
>>> target = [2.,-5.,3.]
>>> ydata = FunctionFactory(target)(xpts)
>>> noise = np.random.normal(0,1,size=len(ydata))
>>> ydata = ydata + noise
>>>
>>> # get a cost factory
>>> from mystic.forward_model import CostFactory
>>> C = CostFactory()
>>>
>>> # generate a cost function for the model factory
>>> metric = lambda x: np.sum(x*x)
>>> C.addModel(FunctionFactory, inputs=len(target))
>>> cost = C.getCostFunction(evalpts=xpts, observations=ydata,
... sigma=1.0, metric=metric)
>>>
>>> # pass the cost function to the optimizer
>>> from mystic.solvers import fmin_powell
>>> initial_guess = [1.,-2.,1.]
>>> solution = fmin_powell(cost, initial_guess)
>>> print(solution)
[ 2.00495233 -5.0126248 2.72873734]

In general, a user will be required to write their own model factory. See the examples contained in mystic.models for
more information.
The CostFactory can be used to couple models together into a single cost function. For an example, see mystic.examples.forward_model.


class CostFactory
Bases: object
A cost function generator.
CostFactory builds a list of forward model factories, and maintains a list of associated model names and number
of inputs. Can be used to combine several models into a single cost function.
Takes no initial inputs.
__dict__ = dict_proxy({'__module__': 'mystic.forward_model', 'getCostFunction': <func
__init__()
CostFactory builds a list of forward model factories, and maintains a list of associated model names and
number of inputs. Can be used to combine several models into a single cost function.
Takes no initial inputs.
__module__ = 'mystic.forward_model'
__repr__() <==> repr(x)
__weakref__
list of weak references to the object (if defined)
addModel(model, inputs, name=None, outputFilter=<function Identity>, inputChecker=<function NullChecker>)
Adds a forward model factory to the cost factory.
Inputs:
• model – a callable function factory object
• inputs – number of input arguments to model
• name – a string representing the model name

Example

>>> import numpy as np


>>> C = CostFactory()
>>> C.addModel(np.poly, inputs=3)

getCostFunction(evalpts, observations, sigma=None, metric=<function <lambda>>)


Get a cost function that allows simultaneous evaluation of all forward models for the same set of evaluation
points and observation points.
Inputs:
• evalpts – a list of evaluation points
• observations – a list of data points
• sigma – a scaling factor applied to the raw cost
• metric – the cost metric object
The cost metric should be a function of one parameter (possibly an array) that returns a scalar. The default
is L2. When called, the “misfit” will be passed in.
NOTE: Input parameters WILL go through filters registered as inputCheckers.

Example

>>> import numpy as np


>>> C = CostFactory()
>>> C.addModel(np.poly, inputs=3)
>>> x = np.array([-2., -1., 0., 1., 2.])
>>> y = np.array([-4., -2., 0., 2., 4.])
>>> F = C.getCostFunction(x, y, metric=lambda x: np.sum(x))
>>> F([1,0,0])
0.0
>>> F([2,0,0])
10.0
>>> F = C.getCostFunction(x, y)
>>> F([2,0,0])
34.0

getCostFunctionSlow(evalpts, observations)
Get a cost function that allows simultaneous evaluation of all forward models for the same set of evaluation
points and observation points.
Parameters
• evalpts (list(float)) – a list of evaluation points (i.e. input).
• observations (list(float)) – a list of data points (i.e. output).

Notes

The cost metric is hard-wired to be the sum of the real part of |x|^2, where x is the VectorCostFunction
for a given set of parameters.
Input parameters do NOT go through filters registered as inputCheckers.

Examples

>>> import numpy as np


>>> C = CostFactory()
>>> C.addModel(np.poly, inputs=3)
>>> x = np.array([-2., -1., 0., 1., 2.])
>>> y = np.array([-4., -2., 0., 2., 4.])
>>> F = C.getCostFunctionSlow(x, y)
>>> F([1,0,0])
0.0
>>> F([2,0,0])
100.0

getForwardEvaluator(evalpts)
Get a model factory that allows simultaneous evaluation of all forward models for the same set of evalua-
tion points.
Inputs: evalpts – a list of evaluation points

Example

>>> import numpy as np


>>> C = CostFactory()
>>> C.addModel(np.poly, inputs=3)
>>> F = C.getForwardEvaluator([1,2,3,4,5])
>>> F([1,0,0])
[array([ 1, 4, 9, 16, 25])]
>>> F([0,1,0])
[array([1, 2, 3, 4, 5])]


getParameterList()
Get a ‘pretty’ listing of the input parameters and corresponding models.
getRandomParams()
getVectorCostFunction(evalpts, observations)
Get a vector cost function that allows simultaneous evaluation of all forward models for the same set of
evaluation points and observation points.
Inputs:
• evalpts – a list of evaluation points
• observations – a list of data points
The vector cost metric is hard-wired to be the sum of the difference of getForwardEvaluator(evalpts) and
the observations.
NOTE: Input parameters do NOT go through filters registered as inputCheckers.

Example

>>> import numpy as np


>>> C = CostFactory()
>>> C.addModel(np.poly, inputs=3)
>>> x = np.array([-2., -1., 0., 1., 2.])
>>> y = np.array([-4., -2., 0., 2., 4.])
>>> F = C.getVectorCostFunction(x, y)
>>> F([1,0,0])
0.0
>>> F([2,0,0])
10.0

2.13 helputil module

Tools for prettifying help


Some of the following code is taken from Ka-Ping Yee’s pydoc module
commandfy(text)
Format a command string
commandstring(text, BoldQ)
Bolds all lines in text for which the predicate BoldQ returns true.
paginate(text, BoldQ=<function <lambda>>)
break printed content into pages

2.14 linesearch module

local copy of scipy.optimize.linesearch


line_search(f, myfprime, xk, pk, gfk, old_fval, old_old_fval, args=(), c1=0.0001, c2=0.9, amax=50)


2.15 mask module

_extend_mask(condition, mask)
extend the mask in the termination condition with the given mask
_replace_mask(condition, mask)
replace the mask in the termination condition with the given mask
_update_masks(condition, mask, kind='', new=False)
update the termination condition with the given mask
get_mask(condition)
get mask from termination condition
update_mask(condition, collapse, new=False)
update the termination condition with the given collapse (dict)
update_position_masks(condition, mask, new=False)
update all position masks in the given termination condition
update_weight_masks(condition, mask, new=False)
update all weight masks in the given termination condition

2.16 math module

math: mathematical functions and tools for use in mystic

2.16.1 Functions

Mystic provides a set of mathematical functions that support various advanced optimization features such as uncertainty analysis and parameter sensitivity.

2.16.2 Tools

Mystic also provides a set of mathematical tools that support advanced features such as parameter space partitioning and Monte Carlo estimation. These mathematical tools are provided:

polyeval -- fast evaluation of an n-dimensional polynomial


poly1d -- generate a 1d polynomial instance
gridpts -- generate a set of regularly spaced points
fillpts -- generate a set of space-filling points
samplepts -- generate a set of randomly sampled points
tolerance -- absolute difference plus relative difference
almostEqual -- test if equal within some absolute or relative tolerance
Distribution -- generate a sampling distribution instance

class Distribution(generator=None, *args, **kwds)


Bases: object
Sampling distribution for mystic optimizers
generate a sampling distribution with interface dist(size=None)
input::
• generator: a ‘distribution’ method from scipy.stats or numpy.random


• args: positional arguments for the distribution object


• kwds: keyword arguments for the distribution object
note:: this method only accepts numpy.random methods with the keyword ‘size’
__call__(size=None)
generate a sample of given size (tuple) from the distribution
__dict__ = dict_proxy({'__module__': 'mystic.math', '__dict__': <attribute '__dict__'
__init__(generator=None, *args, **kwds)
generate a sampling distribution with interface dist(size=None)
input::
• generator: a ‘distribution’ method from scipy.stats or numpy.random
• args: positional arguments for the distribution object
• kwds: keyword arguments for the distribution object
note:: this method only accepts numpy.random methods with the keyword ‘size’
__module__ = 'mystic.math'
__weakref__
list of weak references to the object (if defined)
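A minimal sketch of the documented interface, wrapping a numpy.random method (only methods that accept the size keyword are supported):

>>> import numpy as np
>>> from mystic.math import Distribution
>>> dist = Distribution(np.random.normal, 0.0, 1.0)   # mean=0, std=1
>>> samples = dist(size=4)
>>> len(samples) == 4
True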
polyeval(coeffs, x)
takes a list of coefficients and evaluation point(s), and returns f(x); thus, [a3, a2, a1, a0] yields a3 x^3 + a2 x^2 + a1 x + a0
poly1d(coeff)
generates a 1-D polynomial instance from a list of coefficients using numpy.poly1d(coeffs)
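A minimal sketch, evaluating the polynomial 3x^2 + 2x + 1 at x = 2 (equality checks are used so the exact numeric return type does not matter):

>>> from mystic.math import polyeval, poly1d
>>> polyeval([3, 2, 1], 2) == 3*2**2 + 2*2 + 1
True
>>> p = poly1d([3, 2, 1])        # a numpy.poly1d instance
>>> p(2) == 17
True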
gridpts(q, dist=None)
takes a list of lists of arbitrary length q = [[1,2],[3,4]] and produces a list of gridpoints g = [[1,3],[1,4],[2,3],[2,4]]

Notes

if a mystic.math.Distribution is provided, use it to inject randomness


samplepts(lb, ub, npts, dist=None)
takes lower and upper bounds (e.g. lb = [0,3], ub = [2,4]) and produces a list of npts sample points, e.g. s = [[1,3],[1,4],[2,3],[2,4]]
Inputs:
• lb – a list of the lower bounds
• ub – a list of the upper bounds
• npts – number of sample points
• dist – a mystic.math.Distribution instance
fillpts(lb, ub, npts, data=None, rtol=None, dist=None)
takes lower and upper bounds (e.g. lb = [0,3], ub = [2,4]), finds npts that are at least rtol away from legacy data, and produces a list of sample points, e.g. s = [[1,3],[1,4],[2,3],[2,4]]
Inputs:
• lb – a list of the lower bounds
• ub – a list of the upper bounds
• npts – number of sample points
• data – a list of legacy sample points
• rtol – target radial distance from each point
• dist – a mystic.math.Distribution instance
Notes: if rtol is None, use max rtol; if rtol < 0, use quick-n-dirty method
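A minimal sketch of the point generators above (only lengths are checked, since samplepts draws random points):

>>> from mystic.math import gridpts, samplepts
>>> len(gridpts([[1, 2], [3, 4]]))    # a 2 x 2 grid yields 4 points
4
>>> s = samplepts([0, 3], [2, 4], 5)  # 5 random points within the bounds
>>> len(s)
5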
almostEqual(x, y, tol=1e-18, rel=1e-07)
Returns True if two arrays are element-wise equal within a tolerance.


The tolerance values are positive, typically very small numbers. The relative difference (rtol * abs(b)) and the
absolute difference atol are added together to compare against the absolute difference between a and b.
Parameters
• a, b (array_like) – Input arrays to compare.
• rtol (float) – The relative tolerance parameter (see Notes).
• atol (float) – The absolute tolerance parameter (see Notes).
Returns y – Returns True if the two arrays are equal within the given tolerance; False otherwise. If
either array contains NaN, then False is returned.
Return type bool

Notes

If the following equation is element-wise True, then almostEqual returns True.


absolute(a - b) <= (atol + rtol * absolute(b))

Examples

>>> almostEqual([1e10,1.2345678], [1e10,1.2345677])
True
>>> almostEqual([1e10,1.234], [1e10,1.235])
False

tolerance(x, tol=1e-15, rel=1e-15)


relative plus absolute difference

2.16.3 mystic.math module documentation

approx module

tools for measuring equality


almostEqual(x, y, tol=1e-18, rel=1e-07)
Returns True if two arrays are element-wise equal within a tolerance.
The tolerance values are positive, typically very small numbers. The relative difference (rtol * abs(b)) and the
absolute difference atol are added together to compare against the absolute difference between a and b.
Parameters
• a, b (array_like) – Input arrays to compare.
• rtol (float) – The relative tolerance parameter (see Notes).
• atol (float) – The absolute tolerance parameter (see Notes).
Returns y – Returns True if the two arrays are equal within the given tolerance; False otherwise. If
either array contains NaN, then False is returned.
Return type bool


Notes

If the following equation is element-wise True, then almostEqual returns True.


absolute(a - b) <= (atol + rtol * absolute(b))

Examples

>>> almostEqual([1e10,1.2345678], [1e10,1.2345677])
True
>>> almostEqual([1e10,1.234], [1e10,1.235])
False

approx_equal(x, y, *args, **kwargs)


Return True if x and y are approximately equal, otherwise False.
Parameters
• x (object) – first object to compare
• y (object) – second object to compare
• tol (float, default=1e-18) – absolute error
• rel (float, default=1e-7) – relative error
Returns True if x and y are equal within tolerance, otherwise returns False.

Notes

If x and y are floats, return True if y is within either absolute error tol or relative error rel of x. You can disable
either the absolute or relative check by passing None as tol or rel (but not both).
For any other objects, x and y are checked in that order for a method __approx_equal__, and the result of
that is returned as a bool. Any optional arguments are passed to the __approx_equal__ method.
__approx_equal__ can return NotImplemented to signal it doesn’t know how to perform the specific
comparison, in which case the other object is checked instead. If neither object has the method, or both defer by
returning NotImplemented, then fall back on the same numeric comparison that is used for floats.

Examples

>>> approx_equal(1.2345678, 1.2345677)
True
>>> approx_equal(1.234, 1.235)
False

tolerance(x, tol=1e-15, rel=1e-15)


relative plus absolute difference

compressed module

helpers for compressed format for measures


binary(n)
converts an int to binary (returned as a string); hence, int(binary(x), base=2) == x
binary2coords(binaries, positions, **kwds)
convert a list of binary strings to product measure coordinates
differs_by_one(ith, binaries, all=True, index=True)
get the binary string that differs by exactly one index
Inputs:
• ith = the target index
• binaries = a list of binary strings
• all = if False, return only the results for indices < i
• index = if True, return the index of the results (not results themselves)
index2binary(index, npts=None)
convert a list of integers to a list of binary strings
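A minimal sketch of the round trip described for binary:

>>> from mystic.math.compressed import binary
>>> b = binary(5)        # a string of '0's and '1's
>>> int(b, base=2)
5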

discrete module

Classes for discrete measure data objects. Includes point_mass, measure, product_measure, and scenario classes.
compose(samples, weights=None)
Generate a product_measure object from a nested list of N x 1D discrete measure positions and a nested list of
N x 1D weights. If weights are not provided, a uniform distribution with norm = 1.0 will be used.
decompose(c)
Decomposes a product_measure object into a nested list of N x 1D discrete measure positions and a nested list
of N x 1D weights.
unflatten(params, npts)
Map a list of random variables to N x 1D discrete measures in a product_measure object.
flatten(c)
Flattens a product_measure object into a list.
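A minimal sketch of composing and flattening a small product measure (two 1D measures with two positions each; weights default to a uniform distribution with norm 1.0):

>>> from mystic.math.discrete import compose, flatten
>>> c = compose([[0., 1.], [4., 5.]])   # two 1D measures, two positions each
>>> c.npts
4
>>> params = flatten(c)                 # [wx1, wx2, x1, x2, wy1, wy2, y1, y2]
>>> len(params) == 2 * sum(c.pts)
True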
bounded_mean(mean_x, samples, xmin, xmax, wts=None)
norm_wts_constraintsFactory(pts)
factory for a constraints function that: - normalizes weights
mean_y_norm_wts_constraintsFactory(target, pts)
factory for a constraints function that: - imposes a mean on scenario values - normalizes weights
impose_feasible(cutoff, data, guess=None, **kwds)
impose shortness on a given scenario
This function attempts to minimize the infeasibility between observed data and a scenario of synthetic data by performing an optimization on w,x,y over the given bounds.
Parameters
• cutoff (float) – maximum acceptable deviation from shortness
• data (mystic.math.discrete.scenario) – a dataset of observed points
• guess (mystic.math.discrete.scenario, default=None) – the synthetic points
• tol (float, default=0.0) – maximum acceptable optimizer termination for
sum(infeasibility).
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)
• constraints (func, default=None) – a function x' = constraints(x), where x is a scenario that has been converted into a list of parameters (e.g. with scenario.flatten), and x' is the list of parameters after the encoded constraints have been satisfied.


Returns a scenario with desired shortness

Notes

Here, tol is used to set the optimization termination for minimizing the sum(infeasibility), while cutoff
is used in defining the deviation from shortness for observed x,y and synthetic x',y'.
guess can be either a scenario providing initial guess at feasibility, or a tuple of the dimensions of the desired
scenario, where initial values will be chosen at random.
impose_valid(cutoff, model, guess=None, **kwds)
impose model validity on a given scenario
This function attempts to minimize the graph distance between reality (data), y = G(x), and an approximating function, y' = F(x'), by performing an optimization on w,x,y over the given bounds.
Parameters
• cutoff (float) – maximum acceptable model invalidity |y - F(x')|.
• model (func) – the model function, y' = F(x').
• guess (scenario, default=None) – a scenario, defines y = G(x).
• hausdorff (bool, default=False) – hausdorff norm, where if given, then ytol = |y -
F(x')| + |x - x'|/norm
• xtol (float, default=0.0) – maximum acceptable pointwise graphical distance between model
and reality.
• tol (float, default=0.0) – maximum acceptable optimizer termination for sum(graphical
distances).
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)
• constraints (func, default=None) – a function x' = constraints(x), where x is a scenario that has been converted into a list of parameters (e.g. with scenario.flatten), and x' is the list of parameters after the encoded constraints have been satisfied.
Returns a scenario with the desired model validity

Notes

xtol defines the n-dimensional base of a pillar of height cutoff, centered at each point. The region inside the pillar defines the space where a “valid” model must intersect. If xtol is not specified, then the base of the pillar will be a dirac at x' = x. This function performs an optimization to find a set of points where the model is valid.
Here, tol is used to set the optimization termination for minimizing the sum(graphical_distances),
while cutoff is used in defining the graphical distance between x,y and x',F(x').
guess can be either a scenario providing initial guess at validity, or a tuple of the dimensions of the desired
scenario, where initial values will be chosen at random.
class point_mass(position, weight=1.0)
Bases: object
a point mass object with weight and position
Parameters
• position (tuple(float)) – position of the point mass


• weight (float, default=1.0) – weight of the point mass


Variables
• position (tuple(float)) – position of the point mass
• weight (float) – weight of the point mass
rms
readonly – square root of the sum of squared position
class measure
Bases: list
a 1-d collection of point masses forming a ‘discrete measure’
Parameters iterable (list) – a list of mystic.math.discrete.point_mass objects

Notes

• assumes only contains mystic.math.discrete.point_mass objects


• assumes measure.n = len(measure.positions) == len(measure.weights)
• relies on constraints to impose notions such as sum(weights) == 1.0

center_mass
sum of weights * positions
ess_maximum(f, tol=0.0)
calculate the maximum for the support of a given function
Parameters
• f (func) – a function that takes a list and returns a number
• tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the maximum value of f over all measure positions with support
ess_minimum(f, tol=0.0)
calculate the minimum for the support of a given function
Parameters
• f (func) – a function that takes a list and returns a number
• tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the minimum value of f over all measure positions with support
expect(f )
calculate the expectation for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the expectation of f over all measure positions
expect_var(f )
calculate the expected variance for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the expected variance of f over all measure positions


mass
readonly – the sum of the weights
maximum(f )
calculate the maximum for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the maximum value of f over all measure positions
minimum(f )
calculate the minimum for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the minimum value of f over all measure positions
normalize()
normalize the weights to 1.0
Parameters None
Returns None
npts
readonly – the number of point masses in the measure
positions
a list of positions for all point masses in the measure
range
|max - min| for the positions
set_expect(expected, f, bounds=None, constraints=None, **kwds)
impose an expectation on the measure by adjusting the positions
Parameters
• expected (float) – target expected mean
• f (func) – a function that takes a list and returns a number
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)
• constraints (func, default=None) – a function c' = constraints(c), where c is a product measure, and c' is a product measure where the encoded constraints are satisfied.
• tol (float, default=None) – maximum allowable deviation from expected
• npop (int, default=200) – size of the trial solution population
• maxiter (int, default=1000) – the maximum number of iterations to perform
• maxfun (int, default=1e+6) – the maximum number of function evaluations
Returns None

Notes

Expectation E is calculated by minimizing mean(f(x)) - expected, over the given bounds, and will
terminate when E is found within deviation tol of the target mean expected. If tol is not provided,
then a relative deviation of 1% of expected will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from which to draw the mean, variance, etc.


bounds is a tuple with length(bounds) == 2, composed of all the lower bounds, then all the upper bounds, for each parameter.
set_expect_mean_and_var(expected, f, bounds=None, constraints=None, **kwds)
impose expected mean and var on the measure by adjusting the positions
Parameters
• expected (tuple(float)) – (expected mean, expected var)
• f (func) – a function that takes a list and returns a number
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)
• constraints (func, default=None) – a function c' = constraints(c), where c is a product measure, and c' is a product measure where the encoded constraints are satisfied.
• tol (float, default=None) – maximum allowable deviation from expected
• npop (int, default=200) – size of the trial solution population
• maxiter (int, default=1000) – the maximum number of iterations to perform
• maxfun (int, default=1e+6) – the maximum number of function evaluations
Returns None

Notes

Expected mean E and expected variance R are calculated by minimizing the sum of the absolute values
of mean(f(x)) - m and variance(f(x)) - v over the given bounds, and will terminate when
E and R are found within tolerance tol of the target mean m and variance v, respectively. If tol is not
provided, then a relative deviation of 1% of max(m,v) will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from which to draw the mean, variance, etc.
bounds is a tuple with length(bounds) == 2, composed of all the lower bounds, then all the upper bounds, for each parameter.
set_expect_var(expected, f, bounds=None, constraints=None, **kwds)
impose an expected variance on the measure by adjusting the positions
Parameters
• expected (float) – target expected variance
• f (func) – a function that takes a list and returns a number
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)
• constraints (func, default=None) – a function c' = constraints(c), where c is a product measure, and c' is a product measure where the encoded constraints are satisfied.
• tol (float, default=None) – maximum allowable deviation from expected
• npop (int, default=200) – size of the trial solution population
• maxiter (int, default=1000) – the maximum number of iterations to perform
• maxfun (int, default=1e+6) – the maximum number of function evaluations
Returns None


Notes

Expected var E is calculated by minimizing var(f(x)) - expected, over the given bounds, and will
terminate when E is found within deviation tol of the target variance expected. If tol is not provided,
then a relative deviation of 1% of expected will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from which to draw the mean, variance, etc.
bounds is a tuple with length(bounds) == 2, composed of all the lower bounds, then all the upper bounds, for each parameter.
support(tol=0)
get the positions with non-zero weight (i.e. support)
Parameters tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the list of positions with support
support_index(tol=0)
get the indices where there is support (i.e. non-zero weight)
Parameters tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the list of indices where there is support
var
mean(|positions - mean(positions)|**2)
weights
a list of weights for all point masses in the measure
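A minimal sketch of building a measure from point masses and normalizing its weights (only invariant checks are shown, since exact representations may vary):

>>> from mystic.math.discrete import point_mass, measure
>>> m = measure([point_mass(1.0, weight=1.0), point_mass(3.0, weight=3.0)])
>>> m.npts
2
>>> m.normalize()        # rescale so the weights sum to 1.0
>>> m.mass == 1.0
True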
class product_measure
Bases: list
a N-d measure-theoretic product of discrete measures
Parameters iterable (list) – a list of mystic.math.discrete.measure objects

Notes

• all measures are treated as if they are orthogonal


• assumes only contains mystic.math.discrete.measure objects
• assumes len(product_measure.positions) == len(product_measure.weights)
• relies on constraints to impose notions such as sum(weights) == 1.0
• relies on constraints to impose expectation (within acceptable deviation)
• positions are (xi,yi,zi) with weights (wxi,wyi,wzi), where weight wxi at xi should be the
same for each (yj,zk). Similarly for each wyi and wzi.

center_mass
sum of weights * positions
differs_by_one(ith, all=True, index=True)
get the coordinates where the associated binary string differs by exactly one index
Parameters
• ith (int) – the target index


• all (bool, default=True) – if False, only return results for indices < i
• index (bool, default=True) – if True, return the indices of the results instead of the results
themselves
Returns the coordinates where the associated binary string differs by one, or if index is True,
return the corresponding indices
ess_maximum(f, tol=0.0)
calculate the maximum for the support of a given function
Parameters
• f (func) – a function that takes a list and returns a number
• tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the maximum value of f over all measure positions with support
ess_minimum(f, tol=0.0)
calculate the minimum for the support of a given function
Parameters
• f (func) – a function that takes a list and returns a number
• tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the minimum value of f over all measure positions with support
expect(f )
calculate the expectation for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the expectation of f over all measure positions
expect_var(f )
calculate the expected variance for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the expected variance of f over all measure positions
flatten()
convert a product measure to a single list of parameters
Parameters None
Returns a list of parameters

Notes

Given product_measure.pts = (M, N, ...), then the returned list is params = [wx1, .
.., wxM, x1, ..., xM, wy1, ..., wyN, y1, ..., yN, ...]. Thus, params will have
M weights and M corresponding positions, followed by N weights and N corresponding positions, with this
pattern followed for each new dimension of the desired product measure.
load(params, pts)
load the product measure from a list of parameters
Parameters
• params (list(float)) – parameters corresponding to N 1D discrete measures


• pts (tuple(int)) – number of point masses in each of the discrete measures


Returns the product measure itself
Return type self (measure)

Notes

To append len(pts) new discrete measures to the product measure, it is assumed params either cor-
responds to the correct number of weights and positions specified by pts, or params has additional
values (typically output values) which will be ignored. It is assumed that len(params) >= 2 *
sum(product_measure.pts).
Given the value of pts = (M, N, ...), it is assumed that params = [wx1, ..., wxM, x1,
..., xM, wy1, ..., wyN, y1, ..., yN, ...]. Thus, params should have M weights and M
corresponding positions, followed by N weights and N corresponding positions, with this pattern followed
for each new dimension of the desired product measure.
mass
readonly – a list of weight norms
maximum(f )
calculate the maximum for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the maximum value of f over all measure positions
minimum(f )
calculate the minimum for a given function
Parameters f (func) – a function that takes a list and returns a number
Returns the minimum value of f over all measure positions
npts
readonly – the total number of point masses in the product measure
pof(f )
calculate probability of failure for a given function
Parameters f (func) – a function returning True for ‘success’ and False for ‘failure’
Returns the probability of failure, a float in [0.0,1.0]

Notes

• the function f should take a list of positions (for example, scenario.positions or product_measure.positions) and return a single value (e.g. 0.0 or False)

pos
readonly – a list of positions for each discrete measure
positions
a list of positions for all point masses in the product measure
pts
readonly – the number of point masses for each discrete measure


sampled_pof(f, npts=10000)
use sampling to calculate probability of failure for a given function
Parameters
• f (func) – a function returning True for ‘success’ and False for ‘failure’
• npts (int, default=10000) – the number of point masses sampled from the underlying
discrete measures
Returns the probability of failure, a float in [0.0,1.0]

Notes

• the function f should take a list of positions (for example, scenario.positions or product_measure.positions) and return a single value (e.g. 0.0 or False)

sampled_support(npts=10000)
randomly select support points from the underlying discrete measures
Parameters npts (int, default=10000) – the number of sampled points
Returns a list of len(product measure) lists, each of length len(npts)
select(*index, **kwds)
generate product measure positions for the selected position indices
Parameters index (tuple(int)) – tuple of position indices
Returns a list of product measure positions for the selected indices

Examples

>>> r
[[9, 8], [1, 3], [4, 2]]
>>> r.select(*range(r.npts))
[(9, 1, 4), (8, 1, 4), (9, 3, 4), (8, 3, 4), (9, 1, 2), (8, 1, 2), (9, 3, 2), (8, 3, 2)]
>>>
>>> _pack(r)
[(9, 1, 4), (8, 1, 4), (9, 3, 4), (8, 3, 4), (9, 1, 2), (8, 1, 2), (9, 3, 2), (8, 3, 2)]

Notes

This only works for product measures of dimension 2^K


set_expect(expected, f, bounds=None, constraints=None, **kwds)
impose an expectation on the measure by adjusting the positions
Parameters
• expected (float) – target expected mean
• f (func) – a function that takes a list and returns a number
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)


• constraints (func, default=None) – a function c' = constraints(c), where c is a product measure, and c' is a product measure where the encoded constraints are satisfied.
• tol (float, default=None) – maximum allowable deviation from expected
• npop (int, default=200) – size of the trial solution population
• maxiter (int, default=1000) – the maximum number of iterations to perform
• maxfun (int, default=1e+6) – the maximum number of function evaluations
Returns None

Notes

Expectation E is calculated by minimizing mean(f(x)) - expected, over the given bounds, and will
terminate when E is found within deviation tol of the target mean expected. If tol is not provided,
then a relative deviation of 1% of expected will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from which to draw the mean, variance, etc.
bounds is a tuple with length(bounds) == 2, composed of all the lower bounds, then all the upper bounds, for each parameter.
set_expect_mean_and_var(expected, f, bounds=None, constraints=None, **kwds)
impose expected mean and var on the measure by adjusting the positions
Parameters
• expected (tuple(float)) – (expected mean, expected var)
• f (func) – a function that takes a list and returns a number
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)
• constraints (func, default=None) – a function c' = constraints(c), where c is a product measure, and c' is a product measure where the encoded constraints are satisfied.
• tol (float, default=None) – maximum allowable deviation from expected
• npop (int, default=200) – size of the trial solution population
• maxiter (int, default=1000) – the maximum number of iterations to perform
• maxfun (int, default=1e+6) – the maximum number of function evaluations
Returns None

Notes

Expected mean E and expected variance R are calculated by minimizing the sum of the absolute values
of mean(f(x)) - m and variance(f(x)) - v over the given bounds, and will terminate when
E and R are found within tolerance tol of the target mean m and variance v, respectively. If tol is not
provided, then a relative deviation of 1% of max(m,v) will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from which to draw the mean, variance, etc.
bounds is a tuple with length(bounds) == 2, composed of all the lower bounds, then all the upper bounds, for each parameter.


set_expect_var(expected, f, bounds=None, constraints=None, **kwds)


impose an expected variance on the measure by adjusting the positions
Parameters
• expected (float) – target expected variance
• f (func) – a function that takes a list and returns a number
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)
• constraints (func, default=None) – a function c' = constraints(c), where c is a product measure, and c' is a product measure where the encoded constraints are satisfied.
• tol (float, default=None) – maximum allowable deviation from expected
• npop (int, default=200) – size of the trial solution population
• maxiter (int, default=1000) – the maximum number of iterations to perform
• maxfun (int, default=1e+6) – the maximum number of function evaluations
Returns None

Notes

Expected var E is calculated by minimizing var(f(x)) - expected, over the given bounds, and will
terminate when E is found within deviation tol of the target variance expected. If tol is not provided,
then a relative deviation of 1% of expected will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples from which to draw the mean, variance, etc.
bounds is a tuple with length(bounds) == 2, composed of all the lower bounds, then all the upper bounds, for each parameter.
support(tol=0)
get the positions with non-zero weight (i.e. support)
Parameters tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the list of positions with support
support_index(tol=0)
get the indices where there is support (i.e. non-zero weight)
Parameters tol (float, default=0.0) – tolerance, where any weight <= tol is zero
Returns the list of indices where there is support
update(params)
update the product measure from a list of parameters
Parameters params (list(float)) – parameters corresponding to N 1D discrete measures
Returns the product measure itself
Return type self (measure)


Notes

The dimensions of the product measure will not change upon update, and it is assumed params either cor-
responds to the correct number of weights and positions for the existing product_measure, or params
has additional values (typically output values) which will be ignored. It is assumed that len(params)
>= 2 * sum(product_measure.pts).
If product_measure.pts = (M, N, ...), then it is assumed that params = [wx1, ...,
wxM, x1, ..., xM, wy1, ..., wyN, y1, ..., yN, ...]. Thus, params should have M
weights and M corresponding positions, followed by N weights and N corresponding positions, with this
pattern followed for each new dimension of the desired product measure.
weights
a list of weights for all point masses in the product measure
wts
readonly – a list of weights for each discrete measure
class scenario(pm=None, values=None)
Bases: mystic.math.discrete.product_measure
a N-d product measure with associated data values
A scenario is a measure-theoretic product of discrete measures that also includes a list of associated values,
with the values corresponding to measured or synthetic data for each measure position. Each point mass in the
product measure is paired with a value, and thus, essentially, a scenario is equivalent to a mystic.math.
legacydata.dataset stored in a product_measure representation.
Parameters
• pm (mystic.math.discrete.product_measure, default=None) – a product measure
• values (list(float), default=None) – values associated with each position

Notes

• all measures are treated as if they are orthogonal


• relies on constraints to impose notions such as sum(weights) == 1.0
• relies on constraints to impose expectation (within acceptable deviation)
• positions are (xi,yi,zi) with weights (wxi,wyi,wzi), where weight wxi at xi should be the
same for each (yj,zk). Similarly for each wyi and wzi.

flatten(all=True)
convert a scenario to a single list of parameters
Parameters all (bool, default=True) – if True, append the scenario values
Returns a list of parameters

Notes

Given scenario.pts = (M, N, ...), then the returned list is params = [wx1, ..., wxM,
x1, ..., xM, wy1, ..., wyN, y1, ..., yN, ...]. Thus, params will have M weights
and M corresponding positions, followed by N weights and N corresponding positions, with this pattern
followed for each new dimension of the scenario. If all is True, then the scenario.values will be
appended to the list of parameters.


load(params, pts)
load the scenario from a list of parameters
Parameters
• params (list(float)) – parameters corresponding to N 1D discrete measures
• pts (tuple(int)) – number of point masses in each of the discrete measures
Returns the scenario itself
Return type self (scenario)

Notes

To append len(pts) new discrete measures to the scenario, it is assumed params either corresponds to
the correct number of weights and positions specified by pts, or params has additional values which will
be saved as the scenario.values. It is assumed that len(params) >= 2 * sum(scenario.
pts).
Given the value of pts = (M, N, ...), it is assumed that params = [wx1, ..., wxM, x1,
..., xM, wy1, ..., wyN, y1, ..., yN, ...]. Thus, params should have M weights and M
corresponding positions, followed by N weights and N corresponding positions, with this pattern followed
for each new dimension of the desired scenario. Any remaining parameters will be treated as scenario.
values.
mean_value()
calculate the mean of the associated values for a scenario
Parameters None
Returns the weighted mean of the scenario values
pof_value(f )
calculate probability of failure for a given function
Parameters f (func) – a function returning True for ‘success’ and False for ‘failure’
Returns the probability of failure, a float in [0.0,1.0]

Notes

• the function f should take a list of values (for example, scenario.values) and return a single
value (e.g. 0.0 or False)

set_feasible(data, cutoff=0.0, bounds=None, constraints=None, with_self=True, **kwds)


impose shortness with respect to the given data points
This function attempts to minimize the infeasibility between observed data and the scenario of synthetic data by performing an optimization on w,x,y over the given bounds.
Parameters
• data (mystic.math.discrete.scenario) – a dataset of observed points
• cutoff (float, default=0.0) – maximum acceptable deviation from shortness
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)


• constraints (func, default=None) – a function x' = constraints(x), where x is a scenario that has been converted into a list of parameters (e.g. with scenario.flatten), and x' is the list of parameters after the encoded constraints have been satisfied.
• with_self (bool, default=True) – if True, shortness is also self-consistent
• tol (float, default=0.0) – maximum acceptable optimizer termination for
sum(infeasibility).
Returns None

Notes

• both scenario.positions and scenario.values may be adjusted.


• if with_self is True, shortness will be measured not only from the scenario to the given data, but also
between scenario datapoints.

set_mean_value(m)
set the mean for the associated values of a scenario
Parameters m (float) – the target weighted mean of the scenario values
Returns None
set_valid(model, cutoff=0.0, bounds=None, constraints=None, **kwds)
impose model validity on a scenario by adjusting positions and values
This function attempts to minimize the graph distance between reality (data), y = G(x), and an approximating function, y' = F(x'), by performing an optimization on w,x,y over the given bounds.
Parameters
• model (func) – a model y' = F(x') that approximates reality y = G(x)
• cutoff (float, default=0.0) – acceptable model invalidity |y - F(x')|
• bounds (tuple, default=None) – (all lower bounds, all upper bounds)
• constraints (func, default=None) – a function x' = constraints(x), where x is a scenario that has been converted into a list of parameters (e.g. with scenario.flatten), and x' is the list of parameters after the encoded constraints have been satisfied.
• hausdorff (bool, default=False) – hausdorff norm, where if given, then ytol = |y -
F(x')| + |x - x'|/norm
• xtol (float, default=0.0) – maximum acceptable pointwise graphical distance between
model and reality.
• tol (float, default=0.0) – maximum acceptable optimizer termination for
sum(graphical distances).
Returns None

Notes

xtol defines the n-dimensional base of a pillar of height cutoff, centered at each point. The region inside the pillar defines the space where a “valid” model must intersect. If xtol is not specified, then the base of the pillar will be a dirac at x' = x. This function performs an optimization to find a set of points where the model is valid. Here, tol is used to set the optimization termination for minimizing the sum(graphical_distances), while cutoff is used in defining the graphical distance between x,y and x',F(x').
short_wrt_data(data, L=None, blamelist=False, pairs=True, all=False, raw=False, **kwds)
check for shortness with respect to the given data
Parameters
• data (list) – a list of data points or dataset to compare against.
• L (float, default=None) – the lipschitz constant, if different than in data.
• blamelist (bool, default=False) – if True, indicate the infeasible points.
• pairs (bool, default=True) – if True, indicate indices of infeasible points.
• all (bool, default=False) – if True, get results for each individual point.
• raw (bool, default=False) – if False, get boolean results (i.e. non-float).
• tol (float, default=0.0) – maximum acceptable deviation from shortness.
• cutoff (float, default=tol) – zero out distances less than cutoff.

Notes

Each point x,y can be thought to have an associated double-cone with slope equal to the lipschitz constant.
Shortness with respect to another point is defined by the first point not being inside the cone of the second.
We can allow for some error in shortness, a short tolerance tol, for which the point x,y is some acceptable
y-distance inside the cone. While very tightly related, cutoff and tol play distinct roles; tol is subtracted
from calculation of the lipschitz_distance, while cutoff zeros out the value of any element less than the
cutoff.
short_wrt_self(L, blamelist=False, pairs=True, all=False, raw=False, **kwds)
check for shortness with respect to the scenario itself
Parameters
• L (float) – the lipschitz constant.
• blamelist (bool, default=False) – if True, indicate the infeasible points.
• pairs (bool, default=True) – if True, indicate indices of infeasible points.
• all (bool, default=False) – if True, get results for each individual point.
• raw (bool, default=False) – if False, get boolean results (i.e. non-float).
• tol (float, default=0.0) – maximum acceptable deviation from shortness.
• cutoff (float, default=tol) – zero out distances less than cutoff.

Notes

Each point x,y can be thought to have an associated double-cone with slope equal to the lipschitz constant.
Shortness with respect to another point is defined by the first point not being inside the cone of the second.
We can allow for some error in shortness, a short tolerance tol, for which the point x,y is some acceptable
y-distance inside the cone. While very tightly related, cutoff and tol play distinct roles; tol is subtracted

from calculation of the lipschitz_distance, while cutoff zeros out the value of any element less than the
cutoff.
update(params)
update the scenario from a list of parameters
Parameters params (list(float)) – parameters corresponding to N 1D discrete measures
Returns the scenario itself
Return type self (scenario)

Notes

The dimensions of the scenario will not change upon update, and it is assumed params either corresponds
to the correct number of weights and positions for the existing scenario, or params has additional
values which will be saved as the scenario.values. It is assumed that len(params) >= 2 *
sum(scenario.pts).
If scenario.pts = (M, N, ...), then it is assumed that params = [wx1, ..., wxM,
x1, ..., xM, wy1, ..., wyN, y1, ..., yN, ...]. Thus, params should have M weights
and M corresponding positions, followed by N weights and N corresponding positions, with this pattern
followed for each new dimension of the desired scenario.
valid_wrt_model(model, blamelist=False, pairs=True, all=False, raw=False, **kwds)
check for scenario validity with respect to the model
Parameters
• model (func) – the model function, y' = F(x').
• blamelist (bool, default=False) – if True, indicate the infeasible points.
• pairs (bool, default=True) – if True, indicate indices of infeasible points.
• all (bool, default=False) – if True, get results for each individual point.
• raw (bool, default=False) – if False, get boolean results (i.e. non-float).
• ytol (float, default=0.0) – maximum acceptable difference |y - F(x')|.
• xtol (float, default=0.0) – maximum acceptable difference |x - x'|.
• cutoff (float, default=ytol) – zero out distances less than cutoff.
• hausdorff (bool, default=False) – hausdorff norm, where if given, then ytol = |y -
F(x')| + |x - x'|/norm.

Notes

xtol defines the n-dimensional base of a pilar of height ytol, centered at each point. The region inside the
pilar defines the space where a “valid” model must intersect. If xtol is not specified, then the base of the
pilar will be a dirac at x' = x. This function performs an optimization for each x to find an appropriate
x'.
ytol is a single value, while xtol is a single value or an iterable. cutoff takes a float or a boolean, where
cutoff=True will set the value of cutoff to the default. Typically, the value of cutoff is ytol, 0.0, or
None. hausdorff can be False (e.g. norm = 1.0), True (e.g. norm = spread(x)), or a list of points
of len(x).


While cutoff and ytol are very tightly related, they play a distinct role; ytol is used to set the optimization
termination for an acceptable |y - F(x')|, while cutoff is applied post-optimization.
If we are using the hausdorff norm, then ytol will set the optimization termination for an acceptable |y -
F(x')| + |x - x'|/norm, where the x values are normalized by norm = hausdorff.
values
a list of values corresponding to output data for all point masses in the underlying product measure
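A minimal sketch of pairing a product measure with data values in a scenario (the weights here are uniform, so the weighted mean of the four values is their simple average):

>>> from mystic.math.discrete import compose, scenario
>>> c = compose([[0., 1.], [4., 5.]])        # 2 x 2 product measure, uniform weights
>>> s = scenario(c, values=[1., 2., 3., 4.])
>>> s.npts
4
>>> s.mean_value() == 2.5
True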

distance module

distances and norms for the legacy data module


Lnorm(weights, p=1, axis=None)
calculate L-p norm of weights
Parameters
• weights (array(float)) – an array of weights
• p (int, default=1) – the power of the p-norm, where p in [0,inf]
• axis (int, default=None) – axis used to take the norm along
Returns a float distance norm for the weights
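A minimal sketch of Lnorm on a small weight vector (the L2 norm of [3, 4] is 5 and the L1 norm is 7):

>>> from mystic.math.distance import Lnorm
>>> Lnorm([3., 4.], p=2) == 5.0
True
>>> Lnorm([3., 4.], p=1) == 7.0
True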
absolute_distance(x, xp=None, pair=False, dmin=0)
pointwise (or pairwise) absolute distance
pointwise = |x.T[:,newaxis] - x'.T| or pairwise = |x.T - x'.T|.T
Parameters
• x (array) – an array of points, x
• xp (array, default=None) – a second array of points, x'
• pair (bool, default=False) – if True, return the pairwise distances
• dmin (int, default=0) – upconvert x,x' to dimension >= dmin
Returns an array of absolute distances between points

Notes

• x'==x is symmetric with zeros on the diagonal


• use dmin=2 for the forced upconversion of 1-D arrays

chebyshev(x, xp=None, pair=False, dmin=0, axis=None)


infinity norm distance between points in euclidean space
d(inf) = max(|x[0] - x[0]'|, |x[1] - x[1]'|, ..., |x[n] - x[n]'|)
Parameters
• x (array) – an array of points, x
• xp (array, default=None) – a second array of points, x'
• pair (bool, default=False) – if True, return the pairwise distances
• dmin (int, default=0) – upconvert x,x' to dimension >= dmin


• axis (int, default=None) – if not None, reduce across the given axis
Returns an array of absolute distances between points

Notes

most common usage has pair=False and axis=0, or pairwise distance with pair=True and axis=1
euclidean(x, xp=None, pair=False, dmin=0, axis=None)
L-2 norm distance between points in euclidean space
d(2) = sqrt(sum(|x[0] - x[0]'|^2, |x[1] - x[1]'|^2, ..., |x[n] - x[n]'|^2))
Parameters
• x (array) – an array of points, x
• xp (array, default=None) – a second array of points, x'
• pair (bool, default=False) – if True, return the pairwise distances
• dmin (int, default=0) – upconvert x,x' to dimension >= dmin
• axis (int, default=None) – if not None, reduce across the given axis
Returns an array of absolute distances between points

Notes

most common usage has pair=False and axis=0, or pairwise distance with pair=True and axis=1
graphical_distance(model, points, **kwds)
find the radius(x') that minimizes the graph between reality (data), y = G(x), and an approximating
function, y' = F(x').
Parameters
• model (func) – a model y' = F(x') that approximates reality y = G(x)
• points (mystic.math.legacydata.dataset) – a dataset, defines y = G(x)
• ytol (float, default=0.0) – maximum acceptable difference |y - F(x')|.
• xtol (float, default=0.0) – maximum acceptable difference |x - x'|.
• cutoff (float, default=ytol) – zero out distances less than cutoff.
• hausdorff (bool, default=False) – hausdorff norm, where if given, then ytol = |y -
F(x')| + |x - x'|/norm.
Returns the radius (the minimum distance x,G(x) to x',F(x') for each x)

Notes

points can be a mystic.math.legacydata.dataset or a list of mystic.math.legacydata.datapoint objects.
xtol defines the n-dimensional base of a pilar of height ytol, centered at each point. The region inside the pilar
defines the space where a “valid” model must intersect. If xtol is not specified, then the base of the pilar will be
a dirac at x' = x. This function performs an optimization for each x to find an appropriate x'.


ytol is a single value, while xtol is a single value or an iterable. cutoff takes a float or a boolean, where
cutoff=True will set the value of cutoff to the default. Typically, the value of cutoff is ytol, 0.0, or None.
hausdorff can be False (e.g. norm = 1.0), True (e.g. norm = spread(x)), or a list of points of len(x).
While cutoff and ytol are very tightly related, they play a distinct role; ytol is used to set the optimization
termination for an acceptable |y - F(x')|, while cutoff is applied post-optimization.
If we are using the hausdorff norm, then ytol will set the optimization termination for an acceptable |y -
F(x')| + |x - x'|/norm, where the x values are normalized by norm = hausdorff.
hamming(x, xp=None, pair=False, dmin=0, axis=None)
zero ‘norm’ distance between points in euclidean space
d(0) = sum(x[0] != x[0]', x[1] != x[1]', ..., x[n] != x[n]')
Parameters
• x (array) – an array of points, x
• xp (array, default=None) – a second array of points, x'
• pair (bool, default=False) – if True, return the pairwise distances
• dmin (int, default=0) – upconvert x,x' to dimension >= dmin
• axis (int, default=None) – if not None, reduce across the given axis
Returns an array of absolute distances between points

Notes

most common usage has pair=False and axis=0, or pairwise distance with pair=True and axis=1
infeasibility(distance, cutoff=0.0)
amount by which the distance exceeds the given cutoff distance
Parameters
• distance (array) – the measure of feasibility for each point
• cutoff (float, default=0.0) – maximum acceptable distance
Returns an array of distances by which each point is infeasible
is_feasible(distance, cutoff=0.0)
determine if the distance exceeds the given cutoff distance
Parameters
• distance (array) – the measure of feasibility for each point
• cutoff (float, default=0.0) – maximum acceptable distance
Returns bool array, with True where the distance is less than cutoff
lipschitz_distance(L, points1, points2, **kwds)
calculate the lipschitz distance between two sets of datapoints
Parameters
• L (list) – a list of lipschitz constants
• points1 (mystic.math.legacydata.dataset) – a dataset
• points2 (mystic.math.legacydata.dataset) – a second dataset


• tol (float, default=0.0) – maximum acceptable deviation from shortness


• cutoff (float, default=tol) – zero out distances less than cutoff
Returns a list of lipschitz distances

Notes

Both points1 and points2 can be a mystic.math.legacydata.dataset, or a list of mystic.math.legacydata.datapoint objects, or a list of lipschitzcone.vertex objects (from mystic.math.legacydata). cutoff takes a float or a boolean, where cutoff=True will set the value of cutoff to the default. Typically, the value of cutoff is tol, 0.0, or None.
Each point x,y can be thought to have an associated double-cone with slope equal to the lipschitz constant.
Shortness with respect to another point is defined by the first point not being inside the cone of the second. We
can allow for some error in shortness, a short tolerance tol, for which the point x,y is some acceptable y-distance
inside the cone. While very tightly related, cutoff and tol play distinct roles; tol is subtracted from calculation
of the lipschitz_distance, while cutoff zeros out the value of any element less than the cutoff.
lipschitz_metric(L, x, xp=None)
sum of lipschitz-weighted distance between points
d = sum(L[i] * |x[i] - x'[i]|)
Parameters
• L (array) – an array of Lipschitz constants, L
• x (array) – an array of points, x
• xp (array, default=None) – a second array of points, x'
Returns an array of absolute distances between points
manhattan(x, xp=None, pair=False, dmin=0, axis=None)
L-1 norm distance between points in euclidean space
d(1) = sum(|x[0] - x[0]'|, |x[1] - x[1]'|, ..., |x[n] - x[n]'|)
Parameters
• x (array) – an array of points, x
• xp (array, default=None) – a second array of points, x'
• pair (bool, default=False) – if True, return the pairwise distances
• dmin (int, default=0) – upconvert x,x' to dimension >= dmin
• axis (int, default=None) – if not None, reduce across the given axis
Returns an array of absolute distances between points

Notes

most common usage has pair=False and axis=0, or pairwise distance with pair=True and axis=1
minkowski(x, xp=None, pair=False, dmin=0, p=3, axis=None)
p-norm distance between points in euclidean space
d(p) = sum(|x[0] - x[0]'|^p, |x[1] - x[1]'|^p, ..., |x[n] - x[n]'|^p)^(1/p)

Parameters
• x (array) – an array of points, x
• xp (array, default=None) – a second array of points, x'
• pair (bool, default=False) – if True, return the pairwise distances
• dmin (int, default=0) – upconvert x,x' to dimension >= dmin
• p (int, default=3) – value of p for the p-norm
• axis (int, default=None) – if not None, reduce across the given axis
Returns an array of absolute distances between points

Notes

most common usage has pair=False and axis=0, or pairwise distance with pair=True and axis=1
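A short sketch contrasting the common element-wise usage with the pairwise mode described in the notes above (the import path and sample points are assumptions for illustration):

>>> # assuming the distance helpers import from mystic.math.distance
>>> from mystic.math.distance import manhattan, minkowski, hamming
>>> x = [[0., 0.], [1., 1.]]
>>> xp = [[1., 2.], [3., 4.]]
>>> manhattan(x, xp, pair=False, axis=0)       # the common element-wise usage
>>> manhattan(x, xp, pair=True, axis=1)        # full matrix of pairwise distances
>>> minkowski(x, xp, pair=True, p=2, axis=1)   # p=2 gives pairwise euclidean distance
>>> hamming([0, 1, 1], [1, 1, 0], axis=0)      # number of coordinates that differ (here 2)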

grid module

tools for generating points on a grid


fillpts(lb, ub, npts, data=None, rtol=None, dist=None)
takes lower and upper bounds (e.g. lb = [0,3], ub = [2,4]) and finds npts new points that are at least rtol away from any legacy data, producing a list of sample points (e.g. s = [[1,3],[1,4],[2,3],[2,4]])
Inputs:
lb – a list of the lower bounds
ub – a list of the upper bounds
npts – number of sample points
data – a list of legacy sample points
rtol – target radial distance from each point
dist – a mystic.math.Distribution instance
Notes: if rtol is None, use max rtol; if rtol < 0, use quick-n-dirty method
gridpts(q, dist=None)
takes a list of lists of arbitrary length q = [[1,2],[3,4]] and produces a list of gridpoints g = [[1,3],[1,4],[2,3],[2,4]]

Notes

if a mystic.math.Distribution is provided, use it to inject randomness


randomly_bin(N, ndim=None, ones=True, exact=True)
generate N bins randomly gridded across ndim dimensions
Inputs:
N – integer number of bins, where N = prod(bins)
ndim – integer length of bins, thus ndim = len(bins)
ones – if False, prevent bins from containing “1s”, wherever possible
exact – if False, find N-1 bins for prime numbers
samplepts(lb, ub, npts, dist=None)
takes lower and upper bounds (e.g. lb = [0,3], ub = [2,4]) and produces a list of sample points (e.g. s = [[1,3],[1,4],[2,3],[2,4]])
Inputs:
lb – a list of the lower bounds
ub – a list of the upper bounds
npts – number of sample points
dist – a mystic.math.Distribution instance
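A small sketch tying the three generators together (assuming they import from mystic.math.grid; the bounds and counts are illustrative):

>>> from mystic.math.grid import gridpts, samplepts, fillpts
>>> gridpts([[1, 2], [3, 4]])                     # -> [[1, 3], [1, 4], [2, 3], [2, 4]], as above
>>> lb = [0., 3.]; ub = [2., 4.]
>>> pts = samplepts(lb, ub, 5)                    # 5 random points within the bounds
>>> new = fillpts(lb, ub, 3, data=pts, rtol=.1)   # 3 more points, each at least rtol from pts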

integrate module

math tools related to integration


integrate(f, lb, ub)
Returns the integral of an n-dimensional function f from lb to ub
Inputs: f – a function that takes a list and returns a number lb – a list of lower bounds ub – a list of upper
bounds
If scipy is installed, and number of dimensions is 3 or less, scipy.integrate is used. Otherwise, use mystic’s
n-dimensional Monte Carlo integrator.
integrated_mean(f, lb, ub)
calculate the integrated mean of a function f
Inputs: f – a function that takes a list and returns a number lb – a list of lower bounds ub – a list of upper
bounds
integrated_variance(f, lb, ub)
calculate the integrated variance of a function f
Inputs: f – a function that takes a list and returns a number lb – a list of lower bounds ub – a list of upper
bounds
monte_carlo_integrate(f, lb, ub, n=10000)
Returns the integral of an m-dimensional function f from lb to ub using a Monte Carlo integration of n points
Inputs: f – a function that takes a list and returns a number. lb – a list of lower bounds ub – a list of upper
bounds n – the number of points to sample [Default is n=10000]

References

1. “A Primer on Scientific Programming with Python”, by Hans Petter Langtangen, page 443-445, 2014.
2. http://en.wikipedia.org/wiki/Monte_Carlo_integration
3. http://math.fullerton.edu/mathews/n2003/MonteCarloMod.html
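A brief sketch of the integrators on a simple quadratic, where the exact answer is known (assuming the import path mystic.math.integrate; the function and bounds are illustrative):

>>> from mystic.math.integrate import integrate, monte_carlo_integrate
>>> f = lambda x: x[0]**2 + x[1]**2              # takes a list, returns a number
>>> lb = [0., 0.]; ub = [1., 1.]
>>> integrate(f, lb, ub)                         # exact value is 2/3; uses scipy when available
>>> monte_carlo_integrate(f, lb, ub, n=100000)   # stochastic estimate, close to 2/3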

legacydata module

data structures for legacy data observations of lipschitz functions


class datapoint(position, value=None, id=None, lipschitz=None)
Bases: object
n-d data point with position and value
queries: p.value – returns value p.position – returns position
settings: p.value = v1 – set the value p.position = (x1, x2, . . . , xn) – set the position

Notes

a datapoint can have an assigned id and cone; also has utilities for comparison against other datapoints (intended
for use in a dataset)
collisions(pts)
return True where a point exists with same ‘position’ and different ‘value’

conflicts(pts)
return True where a point exists with same ‘id’ but different ‘raw’
duplicates(pts)
return True where a point exists with same ‘raw’ and ‘id’
position
repeats(pts)
return True where a point exists with same ‘raw’ but different ‘id’
value
class dataset
Bases: list
a collection of data points s = dataset([point1, point2, . . . , pointN])
queries:
s.values – returns list of values
s.coords – returns list of positions
s.ids – returns list of ids
s.raw – returns list of points
s.npts – returns the number of points
s.lipschitz – returns list of lipschitz constants
settings:
s.lipschitz = [s1, s2, . . . , sn] – sets lipschitz constants
short -- check for shortness with respect to given data (or self)
valid -- check for validity with respect to given model
update -- update the positions and values in the dataset
load -- load a list of positions and a list of values to the dataset
fetch -- fetch the list of positions and the list of values in the dataset
intersection -- return the set intersection between self and query
filter -- return dataset entries where mask array is True
has_id -- return True where dataset ids are in query
has_position -- return True where dataset coords are in query
has_point -- return True where dataset points are in query
has_datapoint -- return True where dataset entries are in query

Notes

• datapoints should not be edited; except possibly for id


• assumes that s.n = len(s.coords) == len(s.values)
• all datapoints in a dataset should have the same cone.slopes

collisions
conflicts
coords
duplicates
fetch()
fetch the list of positions and the list of values in the dataset
filter(mask)
return dataset entries where mask array is True

Inputs: mask – a boolean array of the same length as dataset


has_datapoint(query)
return True where dataset entries are in query

Notes

query must be iterable


has_id(query)
return True where dataset ids are in query

Notes

query must be iterable


has_point(query)
return True where dataset points are in query

Notes

query must be iterable


has_position(query)
return True where dataset coords are in query

Notes

query must be iterable


ids
intersection(query)
return the set intersection between self and query
lipschitz
load(positions, values, ids=[])
load a list of positions and a list of values to the dataset
Returns the dataset itself
Return type self (dataset)

Notes

positions and values provided must be iterable


npts
raw
repeats
short(data=None, L=None, blamelist=False, pairs=True, all=False, raw=False, **kwds)
check for shortness with respect to given data (or self)

Parameters
• data (list, default=None) – a list of data points, or the dataset itself.
• L (float, default=None) – the lipschitz constant, or the dataset’s constant.
• blamelist (bool, default=False) – if True, indicate the infeasible points.
• pairs (bool, default=True) – if True, indicate indices of infeasible points.
• all (bool, default=False) – if True, get results for each individual point.
• raw (bool, default=False) – if False, get boolean results (i.e. non-float).
• tol (float, default=0.0) – maximum acceptable deviation from shortness.
• cutoff (float, default=tol) – zero out distances less than cutoff.

Notes

Each point x,y can be thought to have an associated double-cone with slope equal to the lipschitz constant.
Shortness with respect to another point is defined by the first point not being inside the cone of the second.
We can allow for some error in shortness, a short tolerance tol, for which the point x,y is some acceptable
y-distance inside the cone. While very tightly related, cutoff and tol play distinct roles; tol is subtracted
from calculation of the lipschitz_distance, while cutoff zeros out the value of any element less than the
cutoff.
update(positions, values)
update the positions and values in the dataset
Returns the dataset itself
Return type self (dataset)

Notes

positions and values provided must be iterable


valid(model, blamelist=False, pairs=True, all=False, raw=False, **kwds)
check for validity with respect to given model
Parameters
• model (func) – the model function, y' = F(x').
• blamelist (bool, default=False) – if True, indicate the infeasible points.
• pairs (bool, default=True) – if True, indicate indices of infeasible points.
• all (bool, default=False) – if True, get results for each individual point.
• raw (bool, default=False) – if False, get boolean results (i.e. non-float).
• ytol (float, default=0.0) – maximum acceptable difference |y - F(x')|.
• xtol (float, default=0.0) – maximum acceptable difference |x - x'|.
• cutoff (float, default=ytol) – zero out distances less than cutoff.
• hausdorff (bool, default=False) – hausdorff norm, where if given, then ytol = |y -
F(x')| + |x - x'|/norm.

Notes

xtol defines the n-dimensional base of a pillar of height ytol, centered at each point. The region inside the pillar defines the space where a “valid” model must intersect. If xtol is not specified, then the base of the pillar will be a Dirac at x' = x. This function performs an optimization for each x to find an appropriate x'.
ytol is a single value, while xtol is a single value or an iterable. cutoff takes a float or a boolean, where
cutoff=True will set the value of cutoff to the default. Typically, the value of cutoff is ytol, 0.0, or
None. hausdorff can be False (e.g. norm = 1.0), True (e.g. norm = spread(x)), or a list of points
of len(x).
While cutoff and ytol are very tightly related, they play a distinct role; ytol is used to set the optimization
termination for an acceptable |y - F(x')|, while cutoff is applied post-optimization.
If we are using the hausdorff norm, then ytol will set the optimization termination for an acceptable |y -
F(x')| + |x - x'|/norm, where the x values are normalized by norm = hausdorff.
values
class lipschitzcone(datapoint, slopes=None)
Bases: list
Lipschitz double cone around a data point, with vertex and slope
queries: vertex – coordinates of lipschitz cone vertex slopes – lipschitz slopes for the cone (should be same
dimension as ‘vertex’)
contains -- return True if a given point is within the cone
distance -- sum of lipschitz-weighted distance between a point and the vertex
contains(point)
return True if a given point is within the cone
distance(point)
sum of lipschitz-weighted distance between a point and the vertex
load_dataset(filename, filter=None)
read dataset from selected file
filename – string name of dataset file filter – tuple of points to select (‘False’ to ignore filter stored in file)
class point(position, value)
Bases: object
n-d data point with position and value but no id (i.e. ‘raw’)
queries: p.value – returns value p.position – returns position p.rms – returns the square root of sum of squared
position
settings: p.value = v1 – set the value p.position = (x1, x2, . . . , xn) – set the position
rms
save_dataset(data, filename=’dataset.txt’, filter=None, new=True)
save dataset to selected file
data – a dataset
filename – string name of dataset file
filter – tuple, filter to apply to dataset upon reading
new – boolean, False if appending to existing file
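A compact sketch of the typical round trip: build a dataset, attach lipschitz constants, and test shortness and validity (the points, values, constants, and model below are illustrative, not from the library):

>>> from mystic.math.legacydata import dataset
>>> data = dataset().load([[0., 0.], [1., 1.], [2., 2.]], [0., 1., 4.], ids=[0, 1, 2])
>>> data.lipschitz = [3., 3.]     # one lipschitz constant per input dimension
>>> data.coords, data.values, data.npts
>>> data.short()                  # is the data short with respect to itself?
>>> model = lambda x: x[0]**2     # a hypothetical model y' = F(x')
>>> data.valid(model, ytol=0.5)   # does the model reproduce the data to within ytol?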

measures module

Methods to support discrete measures


weighted_select(samples, weights, mass=1.0)
randomly select a sample from weighted set of samples
Parameters
• samples (list) – a list of sample points
• weights (list) – a list of sample weights
• mass (float, default=1.0) – sum of normalized weights
Returns a randomly selected sample point
spread(samples)
calculate the range for a list of points
spread(x) = max(x) - min(x)
Parameters samples (list) – a list of sample points
Returns the range of the samples
norm(weights)
calculate the norm of a list of points
norm(x) = mean(x)
Parameters weights (list) – a list of sample weights
Returns the mean of the weights
maximum(f, samples)
calculate the max of function for the given list of points
maximum(f,x) = max(f(x))
Parameters
• f (func) – a function that takes a list and returns a number
• samples (list) – a list of sample points
Returns the maximum output value for a function at the given inputs
ess_maximum(f, samples, weights=None, tol=0.0)
calculate the max of function for support on the given list of points
ess_maximum(f,x,w) = max(f(support(x,w)))
Parameters
• f (func) – a function that takes a list and returns a number
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
• tol (float, default=0.0) – a tolerance, where any weight <= tol is zero
Returns the maximum output value for a function at the given support points
minimum(f, samples)
calculate the min of function for the given list of points
minimum(f,x) = min(f(x))

Parameters
• f (func) – a function that takes a list and returns a number
• samples (list) – a list of sample points
Returns the minimum output value for a function at the given inputs
ess_minimum(f, samples, weights=None, tol=0.0)
calculate the min of function for support on the given list of points
ess_minimum(f,x,w) = min(f(support(x,w)))
Parameters
• f (func) – a function that takes a list and returns a number
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
• tol (float, default=0.0) – a tolerance, where any weight <= tol is zero
Returns the minimum output value for a function at the given support points
expectation(f, samples, weights=None, tol=0.0)
calculate the (weighted) expectation of a function for a list of points
Parameters
• f (func) – a function that takes a list and returns a number
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
• tol (float, default=0.0) – a tolerance, where any weight <= tol is zero
Returns the weighted expectation for a list of sample points
expected_variance(f, samples, weights=None, tol=0.0)
calculate the (weighted) expected variance of a function
Parameters
• f (func) – a function that takes a list and returns a number
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
• tol (float, default=0.0) – a tolerance, where any weight <= tol is zero
Returns the weighted expected variance of f on a list of sample points
expected_std(f, samples, weights=None, tol=0.0)
calculate the (weighted) expected standard deviation of a function
Parameters
• f (func) – a function that takes a list and returns a number
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
• tol (float, default=0.0) – a tolerance, where any weight <= tol is zero
Returns the weighted expected standard deviation of f on a list of sample points
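A minimal sketch of the expectation helpers on a hand-made weighted sample set (assuming the import path mystic.math.measures; the points, weights, and f are illustrative):

>>> from mystic.math.measures import expectation, expected_variance, expected_std
>>> samples = [[0., 1.], [1., 1.], [2., 3.]]     # three 2-d sample points
>>> weights = [0.5, 0.25, 0.25]
>>> f = lambda x: x[0] + x[1]                    # takes a list, returns a number
>>> expectation(f, samples, weights)             # 0.5*1 + 0.25*2 + 0.25*5 = 2.25
>>> expected_variance(f, samples, weights)
>>> expected_std(f, samples, weights)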

mean(samples, weights=None, tol=0)


calculate the (weighted) mean for a list of points
Parameters
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
• tol (float, default=0.0) – a tolerance, where any mean <= tol is zero
Returns the weighted mean for a list of sample points
support_index(weights, tol=0)
get the indices of the positions which have non-zero weight
Parameters
• weights (list) – a list of sample weights
• tol (float, default=0.0) – a tolerance, where any weight <= tol is zero
Returns a list of indices of positions with non-zero weight
support(samples, weights, tol=0)
get the positions which have non-zero weight
Parameters
• samples (list) – a list of sample points
• weights (list) – a list of sample weights
• tol (float, default=0.0) – a tolerance, where any weight <= tol is zero
Returns a list of positions with non-zero weight
moment(samples, weights=None, order=1, tol=0)
calculate the (weighted) nth-order moment for a list of points
Parameters
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
• order (int, default=1) – the degree, a positive integer
• tol (float, default=0.0) – a tolerance, where any mean <= tol is zero
Returns the weighted nth-order moment for a list of sample points
standard_moment(samples, weights=None, order=1, tol=0)
calculate the (weighted) nth-order standard moment for a list of points
standard_moment(x,w,order) = moment(x,w,order)/std(x,w)^order
Parameters
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
• order (int, default=1) – the degree, a positive integer
• tol (float, default=0.0) – a tolerance, where any mean <= tol is zero
Returns the weighted nth-order standard moment for a list of sample points

variance(samples, weights=None)
calculate the (weighted) variance for a list of points
Parameters
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
Returns the weighted variance for a list of sample points
std(samples, weights=None)
calculate the (weighted) standard deviation for a list of points
Parameters
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
Returns the weighted standard deviation for a list of sample points
skewness(samples, weights=None)
calculate the (weighted) skewness for a list of points
Parameters
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
Returns the weighted skewness for a list of sample points
kurtosis(samples, weights=None)
calculate the (weighted) kurtosis for a list of points
Parameters
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
Returns the weighted kurtosis for a list of sample points
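A quick sketch of the basic weighted statistics on a small 1-d sample set (values and weights are illustrative; the expected results follow from the definitions above):

>>> from mystic.math.measures import mean, variance, std, moment, support, support_index
>>> x = [1., 2., 3., 4., 5.]
>>> w = [0., .25, .5, .25, 0.]
>>> mean(x, w)                  # weighted mean: 2*.25 + 3*.5 + 4*.25 = 3.0
>>> variance(x, w), std(x, w)   # weighted variance and standard deviation
>>> moment(x, w, order=2)       # weighted 2nd-order moment
>>> support(x, w)               # positions with non-zero weight: [2.0, 3.0, 4.0]
>>> support_index(w)            # their indices: [1, 2, 3]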
impose_mean(m, samples, weights=None)
impose a mean on a list of (weighted) points
Parameters
• m (float) – the target mean
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
Returns a list of sample points with the desired weighted mean

Notes

this function does not alter the weighted range or the weighted variance
impose_variance(v, samples, weights=None)
impose a variance on a list of (weighted) points
Parameters
• v (float) – the target variance

• samples (list) – a list of sample points


• weights (list, default=None) – a list of sample weights
Returns a list of sample points with the desired weighted variance

Notes

this function does not alter the weighted mean


impose_std(s, samples, weights=None)
impose a standard deviation on a list of (weighted) points
Parameters
• s (float) – the target standard deviation
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
Returns a list of sample points with the desired weighted standard deviation

Notes

this function does not alter the weighted mean
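A short sketch showing how the impose_* transforms compose, then verifying the result (targets and samples are illustrative):

>>> from mystic.math.measures import impose_mean, impose_std, mean, std
>>> x = [1., 2., 3., 4., 5.]
>>> w = [.2, .2, .2, .2, .2]
>>> y = impose_mean(5.0, x, w)  # shift so the weighted mean is 5.0
>>> mean(y, w)                  # 5.0
>>> z = impose_std(2.0, y, w)   # rescale so the weighted std is 2.0
>>> std(z, w), mean(z, w)       # (2.0, 5.0); the mean is unaltered, as noted above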


impose_moment(m, samples, weights=None, order=1, tol=0, skew=None)
impose the selected moment on a list of (weighted) points
Parameters
• m (float) – the target moment
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights
• order (int, default=1) – the degree, a positive integer
• tol (float, default=0.0) – a tolerance, where any mean <= tol is zero
• skew (bool, default=None) – if True, allow skew in the samples
Returns a list of sample points with the desired weighted moment

Notes

this function does not alter the weighted mean


if skew is None, then allow skew when order is odd
impose_spread(r, samples, weights=None)
impose a range on a list of (weighted) points
Parameters
• r (float) – the target range
• samples (list) – a list of sample points
• weights (list, default=None) – a list of sample weights

Returns a list of sample points with the desired weighted range

Notes

this function does not alter the weighted mean


impose_expectation(m, f, npts, bounds=None, weights=None, **kwds)
impose a given expectation value E on a given function f, where E = m +/- tol and E = mean(f(x))
for x in bounds
Parameters
• m (float) – target expected mean
• f (func) – a function that takes a list and returns a number
• npts (tuple(int)) – a tuple of dimensions of the target product measure
• bounds (tuple, default=None) – tuple is (lower_bounds, upper_bounds)
• weights (list, default=None) – a list of sample weights
• tol (float, default=None) – maximum allowable deviation from m
• constraints (func, default=None) – a function that takes a nested list of N x 1D discrete
measure positions and weights, with the intended purpose of kernel-transforming x,w as
x' = constraints(x, w)
• npop (int, default=200) – size of the trial solution population
• maxiter (int, default=1000) – the maximum number of iterations to perform
• maxfun (int, default=1e+6) – the maximum number of function evaluations
Returns a list of sample positions, with expectation E

Notes

Expectation value E is calculated by minimizing mean(f(x)) - m, over the given bounds, and will terminate
when E is found within deviation tol of the target mean m. If tol is not provided, then a relative deviation of
1% of m will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples to draw the
mean, variance, and etc from
bounds is tuple with length(bounds) == 2, composed of all the lower bounds, then all the upper bounds,
for each parameter

Examples

>>> # provide the dimensions and bounds


>>> nx = 3; ny = 2; nz = 1
>>> x_lb = [10.0]; y_lb = [0.0]; z_lb = [10.0]
>>> x_ub = [50.0]; y_ub = [9.0]; z_ub = [90.0]
>>>
>>> # prepare the bounds
>>> lb = (nx * x_lb) + (ny * y_lb) + (nz * z_lb)
>>> ub = (nx * x_ub) + (ny * y_ub) + (nz * z_ub)
>>>
>>> # generate a list of samples with mean +/- dev imposed
>>> mean = 2.0; dev = 0.01
>>> samples = impose_expectation(mean, f, (nx,ny,nz), (lb,ub), tol=dev)
>>>
>>> # test the results by calculating the expectation value for the samples
>>> expectation(f, samples)
2.000010010122465

impose_expected_variance(v, f, npts, bounds=None, weights=None, **kwds)


impose a given expected variance E on a given function f, where E = v +/- tol and E =
variance(f(x)) for x in bounds
Parameters
• v (float) – target expected variance
• f (func) – a function that takes a list and returns a number
• npts (tuple(int)) – a tuple of dimensions of the target product measure
• bounds (tuple, default=None) – tuple is (lower_bounds, upper_bounds)
• weights (list, default=None) – a list of sample weights
• tol (float, default=None) – maximum allowable deviation from v
• constraints (func, default=None) – a function that takes a nested list of N x 1D discrete
measure positions and weights, with the intended purpose of kernel-transforming x,w as
x' = constraints(x, w)
• npop (int, default=200) – size of the trial solution population
• maxiter (int, default=1000) – the maximum number of iterations to perform
• maxfun (int, default=1e+6) – the maximum number of function evaluations
Returns a list of sample positions, with expected variance E

Notes

Expected variance E is calculated by minimizing variance(f(x)) - v, over the given bounds, and will
terminate when E is found within deviation tol of the target variance v. If tol is not provided, then a relative
deviation of 1% of v will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples to draw the
mean, variance, and etc from
bounds is tuple with length(bounds) == 2, composed of all the lower bounds, then all the upper bounds,
for each parameter

Examples

>>> # provide the dimensions and bounds


>>> nx = 3; ny = 2; nz = 1
>>> x_lb = [10.0]; y_lb = [0.0]; z_lb = [10.0]
>>> x_ub = [50.0]; y_ub = [9.0]; z_ub = [90.0]
>>>
>>> # prepare the bounds
>>> lb = (nx * x_lb) + (ny * y_lb) + (nz * z_lb)
>>> ub = (nx * x_ub) + (ny * y_ub) + (nz * z_ub)
>>>
>>> # generate a list of samples with variance +/- dev imposed
>>> var = 2.0; dev = 0.01
>>> samples = impose_expected_variance(var, f, (nx,ny,nz), (lb,ub), tol=dev)
>>>
>>> # test the results by calculating the expected variance for the samples
>>> expected_variance(f, samples)
2.000010010122465

impose_expected_std(s, f, npts, bounds=None, weights=None, **kwds)


impose a given expected std E on a given function f, where E = s +/- tol and E = std(f(x)) for x in
bounds
Parameters
• s (float) – target expected standard deviation
• f (func) – a function that takes a list and returns a number
• npts (tuple(int)) – a tuple of dimensions of the target product measure
• bounds (tuple, default=None) – tuple is (lower_bounds, upper_bounds)
• weights (list, default=None) – a list of sample weights
• tol (float, default=None) – maximum allowable deviation from s
• constraints (func, default=None) – a function that takes a nested list of N x 1D discrete
measure positions and weights, with the intended purpose of kernel-transforming x,w as
x' = constraints(x, w)
• npop (int, default=200) – size of the trial solution population
• maxiter (int, default=1000) – the maximum number of iterations to perform
• maxfun (int, default=1e+6) – the maximum number of function evaluations
Returns a list of sample positions, with expected standard deviation E

Notes

Expected std E is calculated by minimizing std(f(x)) - s, over the given bounds, and will terminate when
E is found within deviation tol of the target std s. If tol is not provided, then a relative deviation of 1% of s
will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples to draw the
mean, variance, and etc from
bounds is tuple with length(bounds) == 2, composed of all the lower bounds, then all the upper bounds,
for each parameter

Examples

>>> # provide the dimensions and bounds


>>> nx = 3; ny = 2; nz = 1
>>> x_lb = [10.0]; y_lb = [0.0]; z_lb = [10.0]
>>> x_ub = [50.0]; y_ub = [9.0]; z_ub = [90.0]
>>>
>>> # prepare the bounds
>>> lb = (nx * x_lb) + (ny * y_lb) + (nz * z_lb)
>>> ub = (nx * x_ub) + (ny * y_ub) + (nz * z_ub)
>>>
>>> # generate a list of samples with std +/- dev imposed
>>> std = 2.0; dev = 0.01
>>> samples = impose_expected_std(std, f, (nx,ny,nz), (lb,ub), tol=dev)
>>>
>>> # test the results by calculating the expected std for the samples
>>> expected_std(f, samples)
2.000010010122465

impose_expected_mean_and_variance(param, f, npts, bounds=None, weights=None, **kwds)


impose a given expected mean E on a given function f, where E = m +/- tol and E = mean(f(x))
for x in bounds. Additionally, impose a given expected variance R on f, where R = v +/- tol and R =
variance(f(x)) for x in bounds.
Parameters
• param (tuple(float)) – target parameters, (mean, variance)
• f (func) – a function that takes a list and returns a number
• npts (tuple(int)) – a tuple of dimensions of the target product measure
• bounds (tuple, default=None) – tuple is (lower_bounds, upper_bounds)
• weights (list, default=None) – a list of sample weights
• tol (float, default=None) – maximum allowable deviation from m and v
• constraints (func, default=None) – a function that takes a nested list of N x 1D discrete
measure positions and weights, with the intended purpose of kernel-transforming x,w as
x' = constraints(x, w)
• npop (int, default=200) – size of the trial solution population
• maxiter (int, default=1000) – the maximum number of iterations to perform
• maxfun (int, default=1e+6) – the maximum number of function evaluations
Returns a list of sample positions, with expected mean E and variance R

Notes

Expected mean E and expected variance R are calculated by minimizing the sum of the absolute values of
mean(f(x)) - m and variance(f(x)) - v over the given bounds, and will terminate when E and R
are found within tolerance tol of the target mean m and variance v, respectively. If tol is not provided, then
a relative deviation of 1% of max(m,v) will be used.
This function does not preserve the mean, variance, or range, as there is no initial list of samples to draw the
mean, variance, and etc from
bounds is tuple with length(bounds) == 2, composed of all the lower bounds, then all the upper bounds,
for each parameter

Examples

>>> # provide the dimensions and bounds


>>> nx = 3; ny = 2; nz = 1
>>> x_lb = [10.0]; y_lb = [0.0]; z_lb = [10.0]
>>> x_ub = [50.0]; y_ub = [9.0]; z_ub = [90.0]
>>>
>>> # prepare the bounds
>>> lb = (nx * x_lb) + (ny * y_lb) + (nz * z_lb)
>>> ub = (nx * x_ub) + (ny * y_ub) + (nz * z_ub)
>>>
>>> # generate a list of samples with mean and variance imposed
>>> mean = 5.0; var = 2.0; tol = 0.01
>>> samples = impose_expected_mean_and_variance((mean,var), f, (nx,ny,nz), (lb,ub), tol=tol)
>>>
>>> # test the results by calculating the expected mean for the samples
>>> expected_mean(f, samples)
>>>
>>> # test the results by calculating the expected variance for the samples
>>> expected_variance(f, samples)
2.000010010122465

impose_weight_norm(samples, weights, mass=1.0)


normalize the weights for a list of (weighted) points (this function is ‘mean-preserving’)
Inputs: samples – a list of sample points weights – a list of sample weights mass – float target of normalized
weights
normalize(weights, mass=’l2’, zsum=False, zmass=1.0)
normalize a list of points (e.g. normalize to 1.0)
Inputs:
weights – a list of sample weights
mass – float target of normalized weights (or string for Ln norm)
zsum – use counterbalance when mass = 0.0
zmass – member scaling when mass = 0.0
Notes: if mass=’l1’, will use L1-norm; if mass=’l2’ will use L2-norm; etc.
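A tiny sketch of weight normalization (the weights are illustrative; mass=1.0 requests a unit sum rather than the default L2 norm):

>>> from mystic.math.measures import normalize, impose_weight_norm
>>> w = [2., 2., 4.]
>>> normalize(w, mass=1.0)      # rescale the weights to sum to 1: [0.25, 0.25, 0.5]
>>> x = [1., 2., 3.]
>>> impose_weight_norm(x, w)    # normalize the weights while preserving the weighted mean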
impose_reweighted_mean(m, samples, weights=None, solver=None)
impose a mean on a list of points by reweighting weights
impose_reweighted_variance(v, samples, weights=None, solver=None)
impose a variance on a list of points by reweighting weights
impose_reweighted_std(s, samples, weights=None, solver=None)
impose a standard deviation on a list of points by reweighting weights
median(samples, weights=None)
calculate the (weighted) median for a list of points
Inputs: samples – a list of sample points weights – a list of sample weights
mad(samples, weights=None)
calculate the (weighted) median absolute deviation for a list of points
Inputs: samples – a list of sample points weights – a list of sample weights
impose_median(m, samples, weights=None)
impose a median on a list of (weighted) points (this function is ‘range-preserving’ and ‘mad-preserving’)
Inputs: m – the target median samples – a list of sample points weights – a list of sample weights

impose_mad(s, samples, weights=None)


impose a median absolute deviation on a list of (weighted) points (this function is ‘median-preserving’)
Inputs: s – the target median absolute deviation samples – a list of sample points weights – a list of sample
weights
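A brief sketch of the median tools on data with an outlier (values and target are illustrative):

>>> from mystic.math.measures import median, mad, impose_median
>>> x = [1., 2., 3., 4., 100.]
>>> median(x)                   # 3.0, unaffected by the outlier
>>> mad(x)                      # 1.0, the median absolute deviation
>>> y = impose_median(10., x)   # shift so the median becomes 10.0
>>> median(y), mad(y)           # (10.0, 1.0); the mad is preserved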
tmean(samples, weights=None, k=0, clip=False)
calculate the (weighted) trimmed mean for a list of points
Inputs: samples – a list of sample points weights – a list of sample weights k – percent samples to trim (k%)
[tuple (lo,hi) or float if lo=hi] clip – if True, winsorize instead of trimming k% of samples
NOTE: if all samples are excluded, will return nan
tvariance(samples, weights=None, k=0, clip=False)
calculate the (weighted) trimmed variance for a list of points
Inputs: samples – a list of sample points weights – a list of sample weights k – percent samples to trim (k%)
[tuple (lo,hi) or float if lo=hi] clip – if True, winsorize instead of trimming k% of samples
NOTE: if all samples are excluded, will return nan
tstd(samples, weights=None, k=0, clip=False)
calculate the (weighted) trimmed standard deviation for a list of points
Inputs: samples – a list of sample points weights – a list of sample weights k – percent samples to trim (k%)
[tuple (lo,hi) or float if lo=hi] clip – if True, winsorize instead of trimming k% of samples
NOTE: if all samples are excluded, will return nan
impose_tmean(m, samples, weights=None, k=0, clip=False)
impose a trimmed mean (at k%) on a list of (weighted) points (this function is ‘range-preserving’ and
‘tvariance-preserving’)
Inputs: m – the target trimmed mean samples – a list of sample points weights – a list of sample weights k
– percent samples to be trimmed (k%) [tuple (lo,hi) or float if lo=hi] clip – if True, winsorize instead of
trimming k% of samples
impose_tvariance(v, samples, weights=None, k=0, clip=False)
impose a trimmed variance (at k%) on a list of (weighted) points (this function is ‘tmean-preserving’)
Inputs: v – the target trimmed variance samples – a list of sample points weights – a list of sample weights k
– percent samples to be trimmed (k%) [tuple (lo,hi) or float if lo=hi] clip – if True, winsorize instead of
trimming k% of samples
impose_tstd(s, samples, weights=None, k=0, clip=False)
impose a trimmed std (at k%) on a list of (weighted) points (this function is ‘tmean-preserving’)
Inputs: s – the target trimmed standard deviation samples – a list of sample points weights – a list of sample
weights k – percent samples to be trimmed (k%) [tuple (lo,hi) or float if lo=hi] clip – if True, winsorize
instead of trimming k% of samples
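A short sketch of the trimmed statistics (values and trim levels are illustrative):

>>> from mystic.math.measures import tmean, tstd
>>> x = [1., 2., 3., 4., 100.]
>>> tmean(x, k=20)              # trim roughly 20% from each tail before averaging
>>> tmean(x, k=20, clip=True)   # winsorize (clip) instead of discarding
>>> tstd(x, k=(0, 20))          # trim only the upper tail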
impose_support(index, samples, weights)
set all weights not appearing in ‘index’ to zero
Inputs: samples – a list of sample points weights – a list of sample weights index – a list of desired support
indices (weights will be non-zero)
For example:

>>> impose_support([0,1],[1,2,3,4,5],[.2,.2,.2,.2,.2])
([2.5, 3.5, 4.5, 5.5, 6.5], [0.5, 0.5, 0.0, 0.0, 0.0])
>>> impose_support([0,1,2,3],[1,2,3,4,5],[.2,.2,.2,.2,.2])
([1.5, 2.5, 3.5, 4.5, 5.5], [0.25, 0.25, 0.25, 0.25, 0.0])
>>> impose_support([4],[1,2,3,4,5],[.2,.2,.2,.2,.2])
([-1.0, 0.0, 1.0, 2.0, 3.0], [0.0, 0.0, 0.0, 0.0, 1.0])

Notes: is ‘mean-preserving’ for samples and ‘norm-preserving’ for weights


impose_unweighted(index, samples, weights, nullable=True)
set all weights appearing in ‘index’ to zero
Inputs: samples – a list of sample points weights – a list of sample weights index – a list of indices where
weight is to be zero nullable – if False, avoid null weights by reweighting non-index weights
For example:

>>> impose_unweighted([0,1,2],[1,2,3,4,5],[.2,.2,.2,.2,.2])
([-0.5, 0.5, 1.5, 2.5, 3.5], [0.0, 0.0, 0.0, 0.5, 0.5])
>>> impose_unweighted([3,4],[1,2,3,4,5],[.2,.2,.2,.2,.2])
([2.0, 3.0, 4.0, 5.0, 6.0], [0.33333333333333331, 0.33333333333333331, 0.33333333333333331, 0.0, 0.0])

Notes: is ‘mean-preserving’ for samples and ‘norm-preserving’ for weights


impose_collapse(pairs, samples, weights)
collapse the weight and position of each pair (i,j) in pairs
Collapse is defined as weight[j] += weight[i] and weights[i] = 0, with samples[j] = samples[i].
Inputs: samples – a list of sample points weights – a list of sample weights pairs – set of tuples of indices (i,j)
where collapse occurs
For example:

>>> impose_collapse({(0,1),(0,2)},[1,2,3,4,5],[.2,.2,.2,.2,.2])
([1.5999999999999996, 1.5999999999999996, 1.5999999999999996, 4.5999999999999996, 5.5999999999999996], [0.6000000000000001, 0.0, 0.0, 0.2, 0.2])

>>> impose_collapse({(0,1),(3,4)},[1,2,3,4,5],[.2,.2,.2,.2,.2])
([1.3999999999999999, 1.3999999999999999, 3.3999999999999999, 4.4000000000000004, 4.4000000000000004], [0.4, 0.0, 0.2, 0.4, 0.0])

Notes: is ‘mean-preserving’ for samples and ‘norm-preserving’ for weights


impose_sum(mass, weights, zsum=False, zmass=1.0)
impose a sum on a list of points
Inputs: mass – target sum of weights weights – a list of sample weights zsum – use counterbalance when mass
= 0.0 zmass – member scaling when mass = 0.0
impose_product(mass, weights, zsum=False, zmass=1.0)
impose a product on a list of points
Inputs: mass – target product of weights weights – a list of sample weights zsum – use counterbalance when
mass = 0.0 zmass – member scaling when mass = 0.0
split_param(params, npts)
splits a flat parameter list into a flat list of weights and a flat list of positions; weights and positions are expected
to have the same dimensions (given by npts)

Inputs: params – a flat list of weights and positions (formatted as noted below) npts – a tuple describing the
shape of the target lists
For example:

>>> nx = 3; ny = 2; nz = 1
>>> par = ['wx']*nx + ['x']*nx + ['wy']*ny + ['y']*ny + ['wz']*nz + ['z']*nz
>>> weights, positions = split_param(par, (nx,ny,nz))
>>> weights
['wx','wx','wx','wy','wy','wz']
>>> positions
['x','x','x','y','y','z']

poly module

tools for polynomial functions


poly1d(coeff )
generates a 1-D polynomial instance from a list of coefficients using numpy.poly1d(coeffs)
polyeval(coeffs, x)
takes list of coefficients & evaluation points, returns f(x) thus, [a3, a2, a1, a0] yields a3 x^3 + a2 x^2 + a1 x^1
+ a0
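A minimal sketch of both helpers on the same coefficients (the coefficients are illustrative):

>>> from mystic.math.poly import poly1d, polyeval
>>> coeffs = [3., 0., 1.]       # highest order first: 3*x^2 + 0*x + 1
>>> polyeval(coeffs, 2.)        # 3*4 + 1 = 13.0
>>> p = poly1d(coeffs)          # a numpy.poly1d instance
>>> p(2.)                       # 13.0, the same polynomial as a callable object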

samples module

tools related to sampling


alpha(n, diameter, epsilon=0.01)
random_samples(lb, ub, npts=10000, dist=None, clip=False)
generate npts samples from the given distribution between given lb & ub
Inputs:
lb – a list of the lower bounds
ub – a list of the upper bounds
npts – number of sample points [default = 10000]
dist – a mystic.tools.Distribution instance
clip – if True, clip at bounds, else resample [default = False]
sample(f, lb, ub, npts=10000)
return number of failures and successes for some boolean function f
Inputs: f – a function that returns True for ‘success’ and False for ‘failure’ lb – a list of lower bounds ub – a
list of upper bounds npts – the number of points to sample [Default is npts=10000]
sampled_mean(f, lb, ub, npts=10000)
use random sampling to calculate the mean of a function
Inputs: f – a function that takes a list and returns a number lb – a list of lower bounds ub – a list of upper
bounds npts – the number of points to sample [Default is npts=10000]
sampled_pof(f, lb, ub, npts=10000)
use random sampling to calculate probability of failure for a function
Inputs: f – a function that returns True for ‘success’ and False for ‘failure’ lb – a list of lower bounds ub – a
list of upper bounds npts – the number of points to sample [Default is npts=10000]
sampled_prob(pts, lb, ub)
calculates probability by sampling if points are inside the given bounds
Inputs: pts – a list of sample points lb – a list of lower bounds ub – a list of upper bounds

sampled_pts(pts, lb, ub)


determine the number of sample points inside the given bounds
Inputs: pts – a list of sample points lb – a list of lower bounds ub – a list of upper bounds
sampled_variance(f, lb, ub, npts=10000)
use random sampling to calculate the variance of a function
Inputs: f – a function that takes a list and returns a number lb – a list of lower bounds ub – a list of upper
bounds npts – the number of points to sample [Default is npts=10000]
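A small sketch of the sampling helpers on the unit square, where the exact answers are easy to check by hand (assuming the import path mystic.math.samples; the functions and bounds are illustrative):

>>> from mystic.math.samples import random_samples, sampled_mean, sampled_pof
>>> lb = [0., 0.]; ub = [1., 1.]
>>> pts = random_samples(lb, ub, npts=5)    # five points drawn uniformly within the bounds
>>> f = lambda x: x[0] + x[1]
>>> sampled_mean(f, lb, ub, npts=10000)     # Monte Carlo estimate; the exact mean is 1.0
>>> ok = lambda x: x[0] + x[1] < 1.5        # True is 'success', False is 'failure'
>>> sampled_pof(ok, lb, ub, npts=10000)     # estimate of P(failure); the exact value is 0.125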

stats module

shortcut math tools related to statistics, as well as math tools related to gaussian distributions
cdf_factory(mean, variance)
Returns cumulative distribution function (as a Python function) for a Gaussian, given the mean and variance
erf(x)
evaluate the error function at x
gamma(x)
evaluate the gamma function at x
lgamma(x)
evaluate the natural log of the abs value of the gamma function at x
mcdiarmid_bound(mean, diameter)
calculates McDiarmid bound given mean and McDiarmid diameter
mean(expectation, volume)
calculates mean given expectation and volume
meanconf(std, npts, percent=95)
mean confidence interval: returns conf, where interval = mean +/- conf
pdf_factory(mean, variance)
Returns a probability density function (as a Python function) for a Gaussian, given the mean and variance
prob_mass(volume, norm)
calculates probability mass given volume and norm
sampvar(var, npts)
sample variance from variance
stderr(std, npts)
standard error
varconf(var, npts, percent=95, tight=False)
var confidence interval: returns max interval distance from var
volume(lb, ub)
calculates volume for a uniform distribution in n-dimensions
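A brief sketch of the gaussian factories and a few of the shortcuts (assuming the import path mystic.math.stats; the numbers are illustrative):

>>> from mystic.math.stats import pdf_factory, cdf_factory, meanconf, volume
>>> pdf = pdf_factory(0.0, 1.0)       # standard normal density
>>> cdf = cdf_factory(0.0, 1.0)
>>> pdf(0.0)                          # 1/sqrt(2*pi), roughly 0.3989
>>> cdf(0.0)                          # 0.5, by symmetry
>>> meanconf(1.0, 100)                # 95% half-width of the confidence interval on the mean
>>> volume([0., 0.], [2., 3.])        # (2-0) * (3-0) = 6.0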

2.17 metropolis module

Implements a simple version of the Metropolis-Hastings algorithm

metropolis_hastings(proposal, target, x)
proposal(x) -> next candidate point. The proposal must be symmetric; otherwise the PDF of the proposal density would be needed (not just a way to draw from the proposal).
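A minimal sketch of driving the sampler with a symmetric gaussian proposal and an unnormalized target density (the proposal, target, and chain length are illustrative, not part of the module):

>>> # assuming metropolis_hastings imports from mystic.metropolis
>>> from mystic.metropolis import metropolis_hastings
>>> import random, math
>>> proposal = lambda x: x + random.gauss(0, 1)    # symmetric, as required above
>>> target = lambda x: math.exp(-0.5 * x * x)      # unnormalized standard normal
>>> x = 0.0; chain = []
>>> for i in range(1000):
...     x = metropolis_hastings(proposal, target, x)
...     chain.append(x)

The accumulated chain is a set of correlated draws whose histogram should approximate the (normalized) target density.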

2.18 models module

models: sample models and functions prepared for use in mystic

2.18.1 Functions

Mystic provides a set of standard fitting functions that derive from the function API found in mystic.models.abstract_model. These standard functions are provided:

sphere -- De Jong's spherical function


rosen -- Sum of Rosenbrock's function
step -- De Jong's step function
quartic -- De Jong's quartic function
shekel -- Shekel's function
corana -- Corana's function
fosc3d -- Trott's fOsc3D function
nmin51 -- Champion's NMinimize test function
griewangk -- Griewangk's function
zimmermann -- Zimmermann's function
peaks -- NAG's peaks function
venkat91 -- Venkataraman's sinc function
schwefel -- Schwefel's function
ellipsoid -- Pohlheim's rotated hyper-ellipsoid function
rastrigin -- Rastrigin's function
powers -- Pohlheim's sum of different powers function
ackley -- Ackley's path function
michal -- Michalewicz's function
branins -- Branins's rcos function
easom -- Easom's function
goldstein -- Goldstein-Price's function
paviani -- Paviani's function
wavy1 -- a simple sine-based multi-minima function
wavy2 -- another simple sine-based multi-minima function

2.18.2 Models

Mystic also provides a set of example models that derive from the model API found in mystic.models.abstract_model. These standard models are provided:

poly -- 1d model representation for polynomials


circle -- 2d array representation of a circle
lorentzian -- Lorentzian peak model
br8 -- Bevington & Robinson's model of dual exponential decay
mogi -- Mogi's model of surface displacements from a point spherical
source in an elastic half space

Additionally, circle has been extended to provide three additional models, each with different packing densities:

dense_circle, sparse_circle, and minimal_circle

Further, poly provides additional models for 2nd, 4th, 6th, 8th, and 16th order Chebyshev polynomials:

chebyshev2, chebyshev4, chebyshev6, chebyshev8, chebyshev16

Also, rosen has been modified to provide models for the 0th and 1st derivative of the Rosenbrock function:

rosen0der, and rosen1der

class AbstractFunction(ndim=None)
Bases: object
Base class for mystic functions
The ‘function’ method must be overwritten, thus allowing calls to the class instance to mimic calls to the function
object.
For example, if function is overwritten with the Rosenbrock function:

>>> rosen = Rosenbrock(ndim=3)


>>> rosen([1,1,1])
0.

Provides a base class for mystic functions.


Takes optional input ‘ndim’ (number of dimensions).
__call__(...) <==> x(...)
__dict__ = dict_proxy({'function': <function function>, '__dict__': <attribute '__dic
__init__(ndim=None)
Provides a base class for mystic functions.
Takes optional input ‘ndim’ (number of dimensions).
__module__ = 'mystic.models.abstract_model'
__weakref__
list of weak references to the object (if defined)
function(coeffs)
takes a list of coefficients x, returns f(x)
minimizers = None
class AbstractModel(name=’dummy’, metric=<function <lambda>>, sigma=1.0)
Bases: object
Base class for mystic models
The ‘evaluate’ and ‘ForwardFactory’ methods must be overwritten, thus providing a standard interface for gen-
erating a forward model factory and evaluating a forward model. Additionally, two common ways to generate
a cost function are built into the model. For “standard models”, the cost function generator will work with no
modifications.
See mystic.models.poly for a few basic examples.
Provides a base class for mystic models.
Inputs:: name – a name string for the model metric – the cost metric object [default => lambda x:
numpy.sum(x*x)] sigma – a scaling factor applied to the raw cost

CostFactory(target, pts)
generates a cost function instance from list of coefficients and evaluation points
CostFactory2(pts, datapts, nparams)
generates a cost function instance from datapoints and evaluation points
ForwardFactory(coeffs)
generates a forward model instance from a list of coefficients
__dict__ = dict_proxy({'__module__': 'mystic.models.abstract_model', 'CostFactory2':
__init__(name=’dummy’, metric=<function <lambda>>, sigma=1.0)
Provides a base class for mystic models.
Inputs:: name – a name string for the model metric – the cost metric object [default => lambda x:
numpy.sum(x*x)] sigma – a scaling factor applied to the raw cost
__module__ = 'mystic.models.abstract_model'
__weakref__
list of weak references to the object (if defined)
evaluate(coeffs, x)
takes list of coefficients & evaluation points, returns f(x)
ackley(x)
evaluates Ackley’s function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = -20 * exp(-0.2 * sqrt(1/N * sum_(i=0)^(N-1) x_(i)^(2))) and: f_1(x) = -exp(1/N *
sum_(i=0)^(N-1) cos(2 * pi * x_(i))) + 20 + exp(1)
Inspect with mystic_model_plotter using:: mystic.models.ackley -b “-10:10:.1, -10:10:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
branins(x)
evaluates Branins’s function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = a * (x_1 - b * x_(0)^(2) + c * x_0 - d)^2 and f_1(x) = e * (1 - f) * cos(x_0) + e and a=1,
b=5.1/(4*pi^2), c=5/pi, d=6, e=10, f=1/(8*pi)
Inspect with mystic_model_plotter using:: mystic.models.branins -b “-10:20:.1, -5:25:.1” -d -x 1
The minimum is f(x)=0.397887 at x=((2 +/- (2*i)+1)*pi, 2.275 + 10*i*(i+1)/2) for all i
corana(x)
evaluates a 4-D Corana’s parabola function for a list of coeffs
f(x) = sum_(i=0)^(3) f_0(x)
Where for abs(x_i - z_i) < 0.05: f_0(x) = 0.15*(z_i - 0.05*sign(z_i))^(2) * d_i and otherwise: f_0(x) = d_i * x_(i)^(2), with z_i = floor(abs(x_i/0.2)+0.49999)*sign(x_i)*0.2 and d_i = 1,1000,10,100.
For len(x) == 1, x = x_0,0,0,0; for len(x) == 2, x = x_0,0,x_1,0; for len(x) == 3, x = x_0,0,x_1,x_2; for len(x)
>= 4, x = x_0,x_1,x_2,x_3.
Inspect with mystic_model_plotter using:: mystic.models.corana -b “-1:1:.01, -1:1:.01” -d -x 1
The minimum is f(x)=0 for abs(x_i) < 0.05 for all i.
easom(x)
evaluates Easom’s function for a list of coeffs

f(x) = -cos(x_0) * cos(x_1) * exp(-((x_0-pi)^2+(x_1-pi)^2))


Inspect with mystic_model_plotter using:: mystic.models.easom -b “-5:10:.1, -5:10:.1” -d
The minimum is f(x)=-1.0 at x=(pi,pi)
ellipsoid(x)
evaluates the rotated hyper-ellipsoid function for a list of coeffs
f(x) = sum_(i=0)^(N-1) (sum_(j=0)^(i) x_j)^2
Inspect with mystic_model_plotter using:: mystic.models.ellipsoid -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
fosc3d(x)
evaluates the fOsc3D function for a list of coeffs
f(x) = f_0(x) + p(x)
Where: f_0(x) = -4 * exp(-x_(0)^2 - x_(1)^2) + sin(6*x_(0)) * sin(5*x_(1)) with for x_1 < 0: p(x) =
100.*x_(1)^2 and otherwise: p(x) = 0.
Inspect with mystic_model_plotter using:: mystic.models.fosc3d -b “-5:5:.1, 0:5:.1” -d
The minimum is f(x)=-4.501069742528923 at x=(-0.215018, 0.240356)
goldstein(x)
evaluates Goldstein-Price’s function for a list of coeffs
f(x) = (1 + (x_0 + x_1 + 1)^2 * f_0(x)) * (30 + (2*x_0 - 3*x_1)^2 * f_1(x))
Where: f_0(x) = 19 - 14*x_0 + 3*x_(0)^2 - 14*x_1 + 6*x_(0)*x_(1) + 3*x_(1)^2 and f_1(x) = 18 - 32*x_0 +
12*x_(0)^2 + 48*x_1 - 36*x_(0)*x_(1) + 27*x_(1)^2
Inspect with mystic_model_plotter using:: mystic.models.goldstein -b “-5:5:.1, -5:5:.1” -d -x 1
The minimum is f(x)=3.0 at x=(0,-1)
griewangk(x)
evaluates an N-dimensional Griewangk’s function for a list of coeffs
f(x) = f_0(x) - f_1(x) + 1
Where: f_0(x) = sum_(i=0)^(N-1) x_(i)^(2) / 4000. and: f_1(x) = prod_(i=0)^(N-1) cos( x_i / (i+1)^(1/2) )
Inspect with mystic_model_plotter using:: mystic.models.griewangk -b “-10:10:.1, -10:10:.1” -d -x 5
The minimum is f(x)=0.0 for x_i=0.0
michal(x)
evaluates Michalewicz’s function for a list of coeffs
f(x) = -sum_(i=0)^(N-1) sin(x_i) * (sin((i+1) * (x_i)^(2) / pi))^(20)
Inspect with mystic_model_plotter using:: mystic.models.michal -b “0:3.14:.1, 0:3.14:.1, 1.28500168,
1.92305311, 1.72047194” -d
For x=(2.20289811, 1.57078059, 1.28500168, 1.92305311, 1.72047194, . . . )[:N] and c=(-0.801303, -1.0, -
0.959092, -0.896699, -1.030564, . . . )[:N], the minimum is f(x)=sum(c) for all x_i=(0,pi)
nmin51(x)
evaluates the NMinimize51 function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = exp(sin(50*x_0)) + sin(60*exp(x_1)) + sin(70*sin(x_0)) and f_1(x) = sin(sin(80*x_1)) -
sin(10*(x_0 + x_1)) + (x_(0)^2 + x_(1)^2)/4

Inspect with mystic_model_plotter using:: mystic.models.nmin51 -b “-5:5:.1, 0:5:.1” -d


The minimum is f(x)=-3.306869 at x=(-0.02440313,0.21061247)
paviani(x)
evaluates Paviani’s function for a list of coeffs
f(x) = f_0(x) - f_1(x)
Where: f_0(x) = sum_(i=0)^(N-1) (ln(x_i - 2)^2 + ln(10 - x_i)^2) and f_1(x) = prod_(i=0)^(N-1) x_(i)^(.2)
Inspect with mystic_model_plotter using:: mystic.models.paviani -b “2:10:.1, 2:10:.1” -d
For N=1, the minimum is f(x)=2.133838 at x_i=8.501586, for N=3, the minimum is f(x)=7.386004 at
x_i=8.589578, for N=5, the minimum is f(x)=9.730525 at x_i=8.740743, for N=8, the minimum is f(x)=-
3.411859 at x_i=9.086900, for N=10, the minimum is f(x)=-45.778470 at x_i=9.350241.
peaks(x)
evaluates an 2-dimensional peaks function for a list of coeffs
f(x) = f_0(x) - f_1(x) - f_2(x)
Where: f_0(x) = 3 * (1 - x_0)^2 * exp(-x_0^2 - (x_1 + 1)^2) and f_1(x) = 10 * (.2 * x_0 - x_0^3 - x_1^5) *
exp(-x_0^2 - x_1^2) and f_2(x) = exp(-(x_0 + 1)^2 - x_1^2) / 3
Inspect with mystic_model_plotter using:: mystic.models.peaks -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=-6.551133332835841 at x=(0.22827892, -1.62553496)
powers(x)
evaluates the sum of different powers function for a list of coeffs
f(x) = sum_(i=0)^(N-1) abs(x_(i))^(i+2)
Inspect with mystic_model_plotter using:: mystic.models.powers -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
quartic(x)
evaluates an N-dimensional quartic function for a list of coeffs
f(x) = sum_(i=0)^(N-1) (x_(i)^4 * (i+1) + k_i)
Where k_i is a random variable with uniform distribution bounded by [0,1).
Inspect with mystic_model_plotter using:: mystic.models.quartic -b “-3:3:.1, -3:3:.1” -d -x 1
The minimum is f(x)=N*E[k] for x_i=0.0, where E[k] is the expectation of k, and thus E[k]=0.5 for a uniform
distribution bounded by [0,1).
rastrigin(x)
evaluates Rastrigin’s function for a list of coeffs
f(x) = 10 * N + sum_(i=0)^(N-1) (x_(i)^2 - 10 * cos(2 * pi * x_(i)))
Inspect with mystic_model_plotter using:: mystic.models.rastrigin -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
rosen(x)
evaluates an N-dimensional Rosenbrock saddle for a list of coeffs
f(x) = sum_(i=0)^(N-2) 100*(x_(i+1) - x_(i)^(2))^(2) + (1 - x_(i))^(2)
Inspect with mystic_model_plotter using:: mystic.models.rosen -b “-3:3:.1, -1:5:.1, 1” -d -x 1
The minimum is f(x)=0.0 at x_i=1.0 for all i

rosen0der(x)
evaluates an N-dimensional Rosenbrock saddle for a list of coeffs
f(x) = sum_(i=0)^(N-2) 100*(x_(i+1) - x_(i)^(2))^(2) + (1 - x_(i))^(2)
Inspect with mystic_model_plotter using:: mystic.models.rosen -b “-3:3:.1, -1:5:.1, 1” -d -x 1
The minimum is f(x)=0.0 at x_i=1.0 for all i
rosen1der(x)
evaluates an N-dimensional Rosenbrock derivative for a list of coeffs
The minimum is f’(x)=[0.0]*n at x=[1.0]*n, where len(x) >= 2.
schwefel(x)
evaluates Schwefel’s function for a list of coeffs
f(x) = sum_(i=0)^(N-1) -x_i * sin(sqrt(abs(x_i)))
Where abs(x_i) <= 500.
Inspect with mystic_model_plotter using:: mystic.models.schwefel -b “-500:500:10, -500:500:10” -d
The minimum is f(x)=-(N+1)*418.98288727243374 at x_i=420.9687465 for all i
shekel(x)
evaluates a 2-D Shekel’s Foxholes function for a list of coeffs
f(x) = 1 / (0.002 + f_0(x))
Where: f_0(x) = sum_(i=0)^(24) 1 / (i + sum_(j=0)^(1) (x_j - a_ij)^(6)) with a_ij=(-32,-16,0,16,32). for j=0 and
i=(0,1,2,3,4), a_i0=a_k0 with k=i mod 5 also j=1 and i=(0,5,10,15,20), a_i1=a_k1 with k=i+k’ and k’=(1,2,3,4).
Inspect with mystic_model_plotter using:: mystic.models.shekel -b “-50:50:1, -50:50:1” -d -x 1
The minimum is f(x)=0 for x=(-32,-32)
sphere(x)
evaluates an N-dimensional spherical function for a list of coeffs
f(x) = sum_(i=0)^(N-1) x_(i)^2
Inspect with mystic_model_plotter using:: mystic.models.sphere -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
step(x)
evaluates an N-dimensional step function for a list of coeffs
f(x) = f_0(x) + p_i(x), with i=0,1
Where for abs(x_i) <= 5.12: f_0(x) = 30 + sum_(i=0)^(N-1) floor(x_i) and for x_i > 5.12: p_0(x) = 30 * (1 + (x_i - 5.12)) and for x_i < -5.12: p_1(x) = 30 * (1 + (5.12 - x_i)). Otherwise, f_0(x) = 0 and p_i(x)=0 for i=0,1.
Inspect with mystic_model_plotter using:: mystic.models.step -b “-10:10:.2, -10:10:.2” -d -x 1
The minimum is f(x)=(30 - 6*N) for all x_i=[-5.12,-5)
venkat91(x)
evaluates Venkataraman’s sinc function for a list of coeffs
f(x) = -20 * sin(r(x))/r(x)
Where: r(x) = sqrt((x_0 - 4)^2 + (x_1 - 4)^2 + 0.1)
Inspect with mystic_model_plotter using:: mystic.models.venkat91 -b “-10:10:.1, -10:10:.1” -d
The minimum is f(x)=-19.668329370585823 at x=(4.0, 4.0)

wavy1(x)
evaluates the wavy1 function for a list of coeffs
f(x) = abs(x + 3*sin(x + pi) + pi)
Inspect with mystic_model_plotter using:: mystic.models.wavy1 -b “-20:20:.5, -20:20:.5” -d -r numpy.add
The minimum is f(x)=0.0 at x_i=-pi for all i
wavy2(x)
evaluates the wavy2 function for a list of coeffs
f(x) = 4*sin(x)+sin(4*x)+sin(8*x)+sin(16*x)+sin(32*x)+sin(64*x)
Inspect with mystic_model_plotter using:: mystic.models.wavy2 -b “-10:10:.2, -10:10:.2” -d -r numpy.add
The function has degenerate global minima of f(x)=-6.987594 at x_i = 4.489843526 + 2*k*pi for all i, and k is
an integer
zimmermann(x)
evaluates a Zimmermann function for a list of coeffs
f(x) = max(f_0(x), p_i(x)), with i = 0,1,2,3
Where: f_0(x) = 9 - x_0 - x_1 with for x_0 < 0: p_0(x) = -100 * x_0 and for x_1 < 0: p_1(x) = -100 * x_1 and
for c_2(x) > 16 and c_3(x) > 14: p_i(x) = 100 * c_i(x), with i = 2,3 c_2(x) = (x_0 - 3)^2 + (x_1 - 2)^2 c_3(x) =
x_0 * x_1 Otherwise, p_i(x)=0 for i=0,1,2,3 and c_i(x)=0 for i=2,3.
Inspect with mystic_model_plotter using:: mystic.models.zimmermann -b “-5:10:.1, -5:10:.1” -d -x 1
The minimum is f(x)=0.0 at x=(7.0,2.0)
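As a quick sanity check, the functions can be called directly at their documented minima (a sketch; the handful of functions chosen is arbitrary):

>>> from mystic.models import rosen, sphere, ackley, zimmermann
>>> rosen([1., 1., 1.])         # 0.0 at x_i = 1.0
>>> sphere([0., 0., 0.])        # 0.0 at x_i = 0.0
>>> ackley([0., 0.])            # 0.0 at x_i = 0.0 (up to floating-point noise)
>>> zimmermann([7., 2.])        # 0.0 at x = (7, 2)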

2.18.3 mystic.models module documentation

abstract_model module

Base classes for mystic’s provided models:
AbstractFunction – evaluates f(x) for given evaluation points x
AbstractModel – generates f(x,p) for given coefficients p
class AbstractFunction(ndim=None)
Bases: object
Base class for mystic functions
The ‘function’ method must be overwritten, thus allowing calls to the class instance to mimic calls to the function
object.
For example, if function is overwritten with the Rosenbrock function:

>>> rosen = Rosenbrock(ndim=3)
>>> rosen([1,1,1])
0.0

Provides a base class for mystic functions.


Takes optional input ‘ndim’ (number of dimensions).
__init__(ndim=None)
Provides a base class for mystic functions.
Takes optional input ‘ndim’ (number of dimensions).

function(coeffs)
takes a list of coefficients x, returns f(x)
minimizers = None
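To define a new test function, only the ‘function’ method needs to be overwritten; the class below is a minimal sketch (the ‘Parabola’ name and its formula are illustrative, not part of mystic):

    from mystic.models.abstract_model import AbstractFunction

    class Parabola(AbstractFunction):      # hypothetical example function
        def function(self, coeffs):
            # f(x) = sum_i x_i^2, evaluated for a list of coefficients x
            return sum(x*x for x in coeffs)

    parabola = Parabola(ndim=2)
    print(parabola([1.0, 2.0]))            # the call forwards to function() -> 5.0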
class AbstractModel(name=’dummy’, metric=<function <lambda>>, sigma=1.0)
Bases: object
Base class for mystic models
The ‘evaluate’ and ‘ForwardFactory’ methods must be overwritten, thus providing a standard interface for gen-
erating a forward model factory and evaluating a forward model. Additionally, two common ways to generate
a cost function are built into the model. For “standard models”, the cost function generator will work with no
modifications.
See mystic.models.poly for a few basic examples.
Provides a base class for mystic models.
Inputs::
    name – a name string for the model
    metric – the cost metric object [default => lambda x: numpy.sum(x*x)]
    sigma – a scaling factor applied to the raw cost
CostFactory(target, pts)
generates a cost function instance from list of coefficients and evaluation points
CostFactory2(pts, datapts, nparams)
generates a cost function instance from datapoints and evaluation points
ForwardFactory(coeffs)
generates a forward model instance from a list of coefficients
__init__(name=’dummy’, metric=<function <lambda>>, sigma=1.0)
Provides a base class for mystic models.
Inputs::
    name – a name string for the model
    metric – the cost metric object [default => lambda x: numpy.sum(x*x)]
    sigma – a scaling factor applied to the raw cost
evaluate(coeffs, x)
takes list of coefficients & evaluation points, returns f(x)
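A new model is built by overwriting ‘evaluate’ and ‘ForwardFactory’; the sketch below uses a hypothetical ‘Line’ model (y = a*x + b) that is not part of mystic:

    from mystic.models.abstract_model import AbstractModel

    class Line(AbstractModel):                       # hypothetical linear model
        def evaluate(self, coeffs, x):
            a, b = coeffs                            # y = a*x + b at each evaluation point
            return [a*xi + b for xi in x]
        def ForwardFactory(self, coeffs):
            a, b = coeffs
            return lambda x: [a*xi + b for xi in x]  # forward model with fixed coefficients

    model = Line(name='line')
    print(model.evaluate((2.0, 1.0), [0.0, 1.0, 2.0]))   # [1.0, 3.0, 5.0]
    line = model.ForwardFactory((2.0, 1.0))
    print(line([0.0, 1.0, 2.0]))                         # same result from the forward instance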

br8 module

Bevington & Robinson’s model of dual exponential decay

References

1. “Data Reduction and Error Analysis for the Physical Sciences”, Bevington & Robinson, Second Edition,
McGraw-Hill, New York (1992).
class BevingtonDecay(name=’decay’, metric=<function <lambda>>)
Bases: mystic.models.abstract_model.AbstractModel
Computes dual exponential decay [1]. y = a1 + a2 Exp[-t / a4] + a3 Exp[-t/a5]
CostFactory(target, pts)
generates a cost function instance from list of coefficients & evaluation points
CostFactory2(pts, datapts, nparams)
generates a cost function instance from datapoints & evaluation points

ForwardFactory(coeffs)
generates a dual decay model instance from a list of coefficients
evaluate(coeffs, evalpts)
evaluate dual exponential decay with given coeffs over given evalpts coeffs = (a1,a2,a3,a4,a5)
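For example, the dual decay model can be evaluated over a time grid as in the sketch below (the import path and the coefficient values are assumptions for illustration only):

    import numpy as np
    from mystic.models.br8 import BevingtonDecay     # assumed import path

    decay = BevingtonDecay()
    coeffs = (10.0, 900.0, 80.0, 30.0, 200.0)        # illustrative (a1,a2,a3,a4,a5)
    t = np.linspace(0.0, 900.0, 10)
    y = decay.evaluate(coeffs, t)                    # a1 + a2*exp(-t/a4) + a3*exp(-t/a5)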

circle module

2d array representation of a circle

References

None
class Circle(packing=None, name=’circle’, sigma=1.0)
Bases: mystic.models.abstract_model.AbstractModel
Computes a 2D array representation of a circle, where the circle minimally bounds the 2D data points.
Data points are generated with [minimal, sparse, or dense] packing=[~0.2, ~1.0, or ~5.0]; setting packing = None
will constrain all points to the circle’s radius.
CostFactory(target, npts=None)
generates a cost function instance from list of coefficients & number of evaluation points (x,y,r) = target
coeffs
CostFactory2(datapts)
generates a cost function instance from a 2D array of datapoints
ForwardFactory(coeffs)
generates a circle instance from a list of coefficients (x,y,r) = coeffs
forward(coeffs, npts=None)
generate a 2D array of points contained within a circle built from a list of coefficients (x,y,r) = coeffs
default npts = packing * floor( pi*radius^2 )
gencircle(coeffs, interval=0.02)
generate a 2D array representation of a circle of given coeffs coeffs = (x,y,r)
gendata(coeffs, npts=20)
Generate a 2D dataset of npts enclosed in circle of given coeffs, where coeffs = (x,y,r).
NOTE: if npts == None, constrain all points to circle of given radius
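A minimal sketch of generating circle data with these helpers (the coefficients are illustrative; the import path is assumed):

    from mystic.models.circle import gencircle, gendata   # assumed import path

    coeffs = (0.0, 0.0, 1.0)            # (x, y, r): unit circle at the origin
    data = gendata(coeffs, npts=20)     # 20 points enclosed in the circle
    outline = gencircle(coeffs)         # 2D array tracing the circle itself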

dejong module

This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘De Jong’ function definitions drawn
from [3].

References

1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over
Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over
Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf

3. Ingber, L. and Rosen, B. “Genetic Algorithms and Very Fast Simulated Reannealing: A Comparison” J. of
Mathematical and Computer Modeling 16(11), 87-100, 1992.
class Quartic(ndim=30)
Bases: mystic.models.abstract_model.AbstractFunction
a De Jong quartic function generator
De Jong’s quartic function [1,2,3] is designed to test the behavior of minimizers in the presence of noise. The
function’s global minimum depends on the expectation value of a random variable, and also includes several
randomly distributed local minima.
The generated function f(x) is a modified version of equation (20) of [2], where len(x) >= 0.
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘De Jong’ function definitions
drawn from [3].

References

1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. and Rosen, B. “Genetic Algorithms and Very Fast Simulated Reannealing: A Comparison” J. of
Mathematical and Computer Modeling 16(11), 87-100, 1992.

function(coeffs)
evaluates an N-dimensional quartic function for a list of coeffs
f(x) = sum_(i=0)^(N-1) (x_(i)^4 * (i+1) + k_i)
Where k_i is a random variable with uniform distribution bounded by [0,1).
Inspect with mystic_model_plotter using:: mystic.models.quartic -b “-3:3:.1, -3:3:.1” -d -x 1
The minimum is f(x)=N*E[k] for x_i=0.0, where E[k] is the expectation of k, and thus E[k]=0.5 for a
uniform distribution bounded by [0,1).
minimizers = None
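Because each evaluation draws new uniform noise k_i, repeated calls at the same point return different values; a minimal sketch using the documented mystic.models.quartic function:

    from mystic.models import quartic

    x = [0.0, 0.0, 0.0]                 # N=3, so the expected value at x is N*E[k] = 1.5
    samples = [quartic(x) for _ in range(1000)]
    print(sum(samples) / len(samples))  # roughly 1.5, varying from run to run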
class Rosenbrock(ndim=2, axis=None)
Bases: mystic.models.abstract_model.AbstractFunction
a Rosenbrock’s Saddle function generator
Rosenbrock’s Saddle function [1,2,3] has the reputation of being a difficult minimization problem. In two
dimensions, the function is a saddle with an inverted basin, where the global minimum occurs along the rim of
the inverted basin.
The generated function f(x) is a modified version of equation (18) of [2], where len(x) >= 0.
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘De Jong’ function definitions
drawn from [3].

References

1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.

2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. and Rosen, B. “Genetic Algorithms and Very Fast Simulated Reannealing: A Comparison” J. of
Mathematical and Computer Modeling 16(11), 87-100, 1992.

derivative(coeffs)
evaluates an N-dimensional Rosenbrock derivative for a list of coeffs
The minimum is f’(x)=[0.0]*n at x=[1.0]*n, where len(x) >= 2.
function(coeffs)
evaluates an N-dimensional Rosenbrock saddle for a list of coeffs
f(x) = sum_(i=0)^(N-2) 100*(x_(i+1) - x_(i)^(2))^(2) + (1 - x_(i))^(2)
Inspect with mystic_model_plotter using:: mystic.models.rosen -b “-3:3:.1, -1:5:.1, 1” -d -x 1
The minimum is f(x)=0.0 at x_i=1.0 for all i
hessian(coeffs)
evaluates an N-dimensional Rosenbrock hessian for the given coeffs
The function f’‘(x) requires len(x) >= 2.
hessian_product(coeffs, p)
evaluates an N-dimensional Rosenbrock hessian product for p and the given coeffs
The hessian product requires both p and coeffs to have len >= 2.
minimizers = [1.0]
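The derivative and hessian methods above can be exercised directly; a minimal sketch (the import path is assumed from the module layout shown here):

    from mystic.models.dejong import Rosenbrock   # assumed import path

    rosen = Rosenbrock(ndim=3)
    x = [1.0, 1.0, 1.0]
    print(rosen(x))               # 0.0 at the minimum
    print(rosen.derivative(x))    # gradient; all zeros at the minimum
    H = rosen.hessian(x)          # the hessian evaluated at x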
class Shekel(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Shekel’s Foxholes function generator
Shekel’s Foxholes function [1,2,3] has a generally flat surface with several narrow wells. The function’s global
minimum is at (-32, -32), with local minima at (i,j) in (-32, -16, 0, 16, 32).
The generated function f(x) is a modified version of equation (21) of [2], where len(x) == 2.
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘De Jong’ function definitions
drawn from [3].

References

1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. and Rosen, B. “Genetic Algorithms and Very Fast Simulated Reannealing: A Comparison” J. of
Mathematical and Computer Modeling 16(11), 87-100, 1992.

function(coeffs)
evaluates a 2-D Shekel’s Foxholes function for a list of coeffs
f(x) = 1 / (0.002 + f_0(x))

Where: f_0(x) = sum_(i=0)^(24) 1 / (i + sum_(j=0)^(1) (x_j - a_ij)^(6)) with a_ij=(-32,-16,0,16,32). For j=0 and
i=(0,1,2,3,4), a_i0=a_k0 with k=i mod 5; for j=1 and i=(0,5,10,15,20), a_i1=a_k1 with k=i+k’ and k’=(1,2,3,4).
Inspect with mystic_model_plotter using:: mystic.models.shekel -b “-50:50:1, -50:50:1” -d -x 1
The minimum is f(x)=0 for x=(-32,-32)
i = 32
j = 32
minimizers = [(-32, -32), (-16, -32), (0, -32), (16, -32), (32, -32), (-32, -16), (-16,
class Sphere(ndim=3)
Bases: mystic.models.abstract_model.AbstractFunction
a De Jong spherical function generator
De Jong’s spherical function [1,2,3] is considered to be a simple task for every serious minimization method.
The minimum is located at the center of the N-dimensional sphere. There are no local minima.
The generated function f(x) is identical to equation (17) of [2], where len(x) >= 0.
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘De Jong’ function definitions
drawn from [3].

References

1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. and Rosen, B. “Genetic Algorithms and Very Fast Simulated Reannealing: A Comparison” J. of
Mathematical and Computer Modeling 16(11), 87-100, 1992.

function(coeffs)
evaluates an N-dimensional spherical function for a list of coeffs
f(x) = sum_(i=0)^(N-1) x_(i)^2
Inspect with mystic_model_plotter using:: mystic.models.sphere -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
minimizers = [0.0]
class Step(ndim=5)
Bases: mystic.models.abstract_model.AbstractFunction
a De Jong step function generator
De Jong’s step function [1,2,3] has several plateaus, which pose difficulty for many optimization algorithms.
Degenerate global minima occur for all x_i on the lowest plateau, with degenerate local minima on all other
plateaus.
The generated function f(x) is a modified version of equation (19) of [2], where len(x) >= 0.
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘De Jong’ function definitions
drawn from [3].

References

1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. and Rosen, B. “Genetic Algorithms and Very Fast Simulated Reannealing: A Comparison” J. of
Mathematical and Computer Modeling 16(11), 87-100, 1992.

function(coeffs)
evaluates an N-dimensional step function for a list of coeffs
f(x) = f_0(x) + p_i(x), with i=0,1
Where for abs(x_i) <= 5.12: f_0(x) = 30 + sum_(i=0)^(N-1) floor(x_i) and for x_i > 5.12: p_0(x) = 30 * (1
+ (x_i - 5.12)) and for x_i < -5.12: p_1(x) = 30 * (1 + (5.12 - x_i)). Otherwise, f_0(x) = 0 and p_i(x)=0 for
i=0,1.
Inspect with mystic_model_plotter using:: mystic.models.step -b “-10:10:.2, -10:10:.2” -d -x 1
The minimum is f(x)=(30 - 6*N) for all x_i=[-5.12,-5)
minimizers = None

functions module

convert bound instances into functions


ackley(x)
evaluates Ackley’s function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = -20 * exp(-0.2 * sqrt(1/N * sum_(i=0)^(N-1) x_(i)^(2))) and: f_1(x) = -exp(1/N *
sum_(i=0)^(N-1) cos(2 * pi * x_(i))) + 20 + exp(1)
Inspect with mystic_model_plotter using:: mystic.models.ackley -b “-10:10:.1, -10:10:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
branins(x)
evaluates Branins’s function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = a * (x_1 - b * x_(0)^(2) + c * x_0 - d)^2 and f_1(x) = e * (1 - f) * cos(x_0) + e and a=1,
b=5.1/(4*pi^2), c=5/pi, d=6, e=10, f=1/(8*pi)
Inspect with mystic_model_plotter using:: mystic.models.branins -b “-10:20:.1, -5:25:.1” -d -x 1
The minimum is f(x)=0.397887 at x=((2 +/- (2*i)+1)*pi, 2.275 + 10*i*(i+1)/2) for all i
corana(x)
evaluates a 4-D Corana’s parabola function for a list of coeffs
f(x) = sum_(i=0)^(3) f_0(x)
Where for abs(x_i - z_i) < 0.05: f_0(x) = 0.15*(z_i - 0.05*sign(z_i))^(2) * d_i and otherwise: f_0(x) = d_i *
x_(i)^(2), with z_i = floor(abs(x_i/0.2)+0.49999)*sign(x_i)*0.2 and d_i = 1,1000,10,100.

For len(x) == 1, x = x_0,0,0,0; for len(x) == 2, x = x_0,0,x_1,0; for len(x) == 3, x = x_0,0,x_1,x_2; for len(x)
>= 4, x = x_0,x_1,x_2,x_3.
Inspect with mystic_model_plotter using:: mystic.models.corana -b “-1:1:.01, -1:1:.01” -d -x 1
The minimum is f(x)=0 for abs(x_i) < 0.05 for all i.
easom(x)
evaluates Easom’s function for a list of coeffs
f(x) = -cos(x_0) * cos(x_1) * exp(-((x_0-pi)^2+(x_1-pi)^2))
Inspect with mystic_model_plotter using:: mystic.models.easom -b “-5:10:.1, -5:10:.1” -d
The minimum is f(x)=-1.0 at x=(pi,pi)
ellipsoid(x)
evaluates the rotated hyper-ellipsoid function for a list of coeffs
f(x) = sum_(i=0)^(N-1) (sum_(j=0)^(i) x_j)^2
Inspect with mystic_model_plotter using:: mystic.models.ellipsoid -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
fosc3d(x)
evaluates the fOsc3D function for a list of coeffs
f(x) = f_0(x) + p(x)
Where: f_0(x) = -4 * exp(-x_(0)^2 - x_(1)^2) + sin(6*x_(0)) * sin(5*x_(1)) with for x_1 < 0: p(x) =
100.*x_(1)^2 and otherwise: p(x) = 0.
Inspect with mystic_model_plotter using:: mystic.models.fosc3d -b “-5:5:.1, 0:5:.1” -d
The minimum is f(x)=-4.501069742528923 at x=(-0.215018, 0.240356)
goldstein(x)
evaluates Goldstein-Price’s function for a list of coeffs
f(x) = (1 + (x_0 + x_1 + 1)^2 * f_0(x)) * (30 + (2*x_0 - 3*x_1)^2 * f_1(x))
Where: f_0(x) = 19 - 14*x_0 + 3*x_(0)^2 - 14*x_1 + 6*x_(0)*x_(1) + 3*x_(1)^2 and f_1(x) = 18 - 32*x_0 +
12*x_(0)^2 + 48*x_1 - 36*x_(0)*x_(1) + 27*x_(1)^2
Inspect with mystic_model_plotter using:: mystic.models.goldstein -b “-5:5:.1, -5:5:.1” -d -x 1
The minimum is f(x)=3.0 at x=(0,-1)
griewangk(x)
evaluates an N-dimensional Griewangk’s function for a list of coeffs
f(x) = f_0(x) - f_1(x) + 1
Where: f_0(x) = sum_(i=0)^(N-1) x_(i)^(2) / 4000. and: f_1(x) = prod_(i=0)^(N-1) cos( x_i / (i+1)^(1/2) )
Inspect with mystic_model_plotter using:: mystic.models.griewangk -b “-10:10:.1, -10:10:.1” -d -x 5
The minimum is f(x)=0.0 for x_i=0.0
michal(x)
evaluates Michalewicz’s function for a list of coeffs
f(x) = -sum_(i=0)^(N-1) sin(x_i) * (sin((i+1) * (x_i)^(2) / pi))^(20)
Inspect with mystic_model_plotter using:: mystic.models.michal -b “0:3.14:.1, 0:3.14:.1, 1.28500168,
1.92305311, 1.72047194” -d

For x=(2.20289811, 1.57078059, 1.28500168, 1.92305311, 1.72047194, ...)[:N] and c=(-0.801303, -1.0,
-0.959092, -0.896699, -1.030564, ...)[:N], the minimum is f(x)=sum(c) for all x_i=(0,pi)
nmin51(x)
evaluates the NMinimize51 function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = exp(sin(50*x_0)) + sin(60*exp(x_1)) + sin(70*sin(x_0)) and f_1(x) = sin(sin(80*x_1)) -
sin(10*(x_0 + x_1)) + (x_(0)^2 + x_(1)^2)/4
Inspect with mystic_model_plotter using:: mystic.models.nmin51 -b “-5:5:.1, 0:5:.1” -d
The minimum is f(x)=-3.306869 at x=(-0.02440313,0.21061247)
paviani(x)
evaluates Paviani’s function for a list of coeffs
f(x) = f_0(x) - f_1(x)
Where: f_0(x) = sum_(i=0)^(N-1) (ln(x_i - 2)^2 + ln(10 - x_i)^2) and f_1(x) = prod_(i=0)^(N-1) x_(i)^(.2)
Inspect with mystic_model_plotter using:: mystic.models.paviani -b “2:10:.1, 2:10:.1” -d
For N=1, the minimum is f(x)=2.133838 at x_i=8.501586, for N=3, the minimum is f(x)=7.386004 at
x_i=8.589578, for N=5, the minimum is f(x)=9.730525 at x_i=8.740743, for N=8, the minimum is f(x)=-
3.411859 at x_i=9.086900, for N=10, the minimum is f(x)=-45.778470 at x_i=9.350241.
peaks(x)
evaluates an 2-dimensional peaks function for a list of coeffs
f(x) = f_0(x) - f_1(x) - f_2(x)
Where: f_0(x) = 3 * (1 - x_0)^2 * exp(-x_0^2 - (x_1 + 1)^2) and f_1(x) = 10 * (.2 * x_0 - x_0^3 - x_1^5) *
exp(-x_0^2 - x_1^2) and f_2(x) = exp(-(x_0 + 1)^2 - x_1^2) / 3
Inspect with mystic_model_plotter using:: mystic.models.peaks -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=-6.551133332835841 at x=(0.22827892, -1.62553496)
powers(x)
evaluates the sum of different powers function for a list of coeffs
f(x) = sum_(i=0)^(N-1) abs(x_(i))^(i+2)
Inspect with mystic_model_plotter using:: mystic.models.powers -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
quartic(x)
evaluates an N-dimensional quartic function for a list of coeffs
f(x) = sum_(i=0)^(N-1) (x_(i)^4 * (i+1) + k_i)
Where k_i is a random variable with uniform distribution bounded by [0,1).
Inspect with mystic_model_plotter using:: mystic.models.quartic -b “-3:3:.1, -3:3:.1” -d -x 1
The minimum is f(x)=N*E[k] for x_i=0.0, where E[k] is the expectation of k, and thus E[k]=0.5 for a uniform
distribution bounded by [0,1).
rastrigin(x)
evaluates Rastrigin’s function for a list of coeffs
f(x) = 10 * N + sum_(i=0)^(N-1) (x_(i)^2 - 10 * cos(2 * pi * x_(i)))
Inspect with mystic_model_plotter using:: mystic.models.rastrigin -b “-5:5:.1, -5:5:.1” -d

The minimum is f(x)=0.0 at x_i=0.0 for all i


rosen(x)
evaluates an N-dimensional Rosenbrock saddle for a list of coeffs
f(x) = sum_(i=0)^(N-2) 100*(x_(i+1) - x_(i)^(2))^(2) + (1 - x_(i))^(2)
Inspect with mystic_model_plotter using:: mystic.models.rosen -b “-3:3:.1, -1:5:.1, 1” -d -x 1
The minimum is f(x)=0.0 at x_i=1.0 for all i
rosen0der(x)
evaluates an N-dimensional Rosenbrock saddle for a list of coeffs
f(x) = sum_(i=0)^(N-2) 100*(x_(i+1) - x_(i)^(2))^(2) + (1 - x_(i))^(2)
Inspect with mystic_model_plotter using:: mystic.models.rosen -b “-3:3:.1, -1:5:.1, 1” -d -x 1
The minimum is f(x)=0.0 at x_i=1.0 for all i
rosen1der(x)
evaluates an N-dimensional Rosenbrock derivative for a list of coeffs
The minimum is f’(x)=[0.0]*n at x=[1.0]*n, where len(x) >= 2.
schwefel(x)
evaluates Schwefel’s function for a list of coeffs
f(x) = sum_(i=0)^(N-1) -x_i * sin(sqrt(abs(x_i)))
Where abs(x_i) <= 500.
Inspect with mystic_model_plotter using:: mystic.models.schwefel -b “-500:500:10, -500:500:10” -d
The minimum is f(x)=-(N+1)*418.98288727243374 at x_i=420.9687465 for all i
shekel(x)
evaluates a 2-D Shekel’s Foxholes function for a list of coeffs
f(x) = 1 / (0.002 + f_0(x))
Where: f_0(x) = sum_(i=0)^(24) 1 / (i + sum_(j=0)^(1) (x_j - a_ij)^(6)) with a_ij=(-32,-16,0,16,32). For j=0 and
i=(0,1,2,3,4), a_i0=a_k0 with k=i mod 5; for j=1 and i=(0,5,10,15,20), a_i1=a_k1 with k=i+k’ and k’=(1,2,3,4).
Inspect with mystic_model_plotter using:: mystic.models.shekel -b “-50:50:1, -50:50:1” -d -x 1
The minimum is f(x)=0 for x=(-32,-32)
sphere(x)
evaluates an N-dimensional spherical function for a list of coeffs
f(x) = sum_(i=0)^(N-1) x_(i)^2
Inspect with mystic_model_plotter using:: mystic.models.sphere -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
step(x)
evaluates an N-dimensional step function for a list of coeffs
f(x) = f_0(x) + p_i(x), with i=0,1
Where for abs(x_i) <= 5.12: f_0(x) = 30 + sum_(i=0)^(N-1) floor(x_i) and for x_i > 5.12: p_0(x) = 30 * (1 + (x_i
- 5.12)) and for x_i < -5.12: p_1(x) = 30 * (1 + (5.12 - x_i)). Otherwise, f_0(x) = 0 and p_i(x)=0 for i=0,1.
Inspect with mystic_model_plotter using:: mystic.models.step -b “-10:10:.2, -10:10:.2” -d -x 1
The minimum is f(x)=(30 - 6*N) for all x_i=[-5.12,-5)

venkat91(x)
evaluates Venkataraman’s sinc function for a list of coeffs
f(x) = -20 * sin(r(x))/r(x)
Where: r(x) = sqrt((x_0 - 4)^2 + (x_1 - 4)^2 + 0.1)
Inspect with mystic_model_plotter using:: mystic.models.venkat91 -b “-10:10:.1, -10:10:.1” -d
The minimum is f(x)=-19.668329370585823 at x=(4.0, 4.0)
wavy1(x)
evaluates the wavy1 function for a list of coeffs
f(x) = abs(x + 3*sin(x + pi) + pi)
Inspect with mystic_model_plotter using:: mystic.models.wavy1 -b “-20:20:.5, -20:20:.5” -d -r numpy.add
The minimum is f(x)=0.0 at x_i=-pi for all i
wavy2(x)
evaluates the wavy2 function for a list of coeffs
f(x) = 4*sin(x)+sin(4*x)+sin(8*x)+sin(16*x)+sin(32*x)+sin(64*x)
Inspect with mystic_model_plotter using:: mystic.models.wavy2 -b “-10:10:.2, -10:10:.2” -d -r numpy.add
The function has degenerate global minima of f(x)=-6.987594 at x_i = 4.489843526 + 2*k*pi for all i, and k is
an integer
zimmermann(x)
evaluates a Zimmermann function for a list of coeffs
f(x) = max(f_0(x), p_i(x)), with i = 0,1,2,3
Where: f_0(x) = 9 - x_0 - x_1 with for x_0 < 0: p_0(x) = -100 * x_0 and for x_1 < 0: p_1(x) = -100 * x_1 and
for c_2(x) > 16 and c_3(x) > 14: p_i(x) = 100 * c_i(x), with i = 2,3 c_2(x) = (x_0 - 3)^2 + (x_1 - 2)^2 c_3(x) =
x_0 * x_1 Otherwise, p_i(x)=0 for i=0,1,2,3 and c_i(x)=0 for i=2,3.
Inspect with mystic_model_plotter using:: mystic.models.zimmermann -b “-5:10:.1, -5:10:.1” -d -x 1
The minimum is f(x)=0.0 at x=(7.0,2.0)

lorentzian module

Lorentzian peak model

References

None
class Lorentzian(name=’lorentz’, metric=<function <lambda>>, sigma=1.0)
Bases: mystic.models.abstract_model.AbstractModel
Computes lorentzian
ForwardFactory(coeffs)
generates a lorentzian model instance from a list of coefficients
evaluate(coeffs, evalpts)
evaluate lorentzian with given coeffs over given evalpts coeffs = (a1,a2,a3,A0,E0,G0,n)

gendata(params, xmin, xmax, npts=4000)
Generate a lorentzian dataset of npts between [xmin,xmax] from given params
histogram(data, binwidth, xmin, xmax)
generate a bin-centered histogram of the provided data; returns bins of the given binwidth (and the histogram)
generated between [xmin,xmax]
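A minimal sketch of using these helpers together (the parameter values and the import path are assumptions for illustration only):

    from mystic.models.lorentzian import gendata, histogram   # assumed import path

    params = (1.0, 1.0, 1.0, 1.0, 2.0, 0.5, 1000)   # illustrative (a1,a2,a3,A0,E0,G0,n)
    data = gendata(params, 0.0, 4.0, npts=4000)     # sample a lorentzian dataset on [0,4]
    hist = histogram(data, 0.1, 0.0, 4.0)           # bin-centered histogram, binwidth 0.1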

mogi module

Mogi’s model of surface displacements from a point spherical source in an elastic half space

References

1. Mogi, K. “Relations between the eruptions of various volcanoes and the deformations of the ground surfaces
around them”, Bull. Earthquake. Res. Inst., 36, 99-134, 1958.
class Mogi(name=’mogi’, metric=<function <lambda>>, sigma=1.0)
Bases: mystic.models.abstract_model.AbstractModel
Computes surface displacements Ux, Uy, Uz in meters from a point spherical pressure source in an elastic half
space [1].
CostFactory(target, pts)
generates a cost function instance from list of coefficients & evaluation points
CostFactory2(pts, datapts, nparams)
generates a cost function instance from datapoints & evaluation points
ForwardFactory(coeffs)
generates a mogi source instance from a list of coefficients
evaluate(coeffs, evalpts)
evaluate a single Mogi peak over a 2D (2 by N) numpy array of evalpts, where coeffs = (x0,y0,z0,dV)
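A minimal sketch of evaluating the model at a set of surface stations (the source coefficients are illustrative; the import path is assumed):

    import numpy as np
    from mystic.models.mogi import Mogi       # assumed import path

    mogi = Mogi()
    coeffs = (1000.0, 2000.0, 1500.0, 1e5)    # illustrative (x0, y0, z0, dV)
    stations = np.array([[0.0, 500.0, 1000.0],    # x coordinates of evaluation points
                         [0.0, 500.0, 1000.0]])   # y coordinates (a 2 by N array)
    u = mogi.evaluate(coeffs, stations)       # surface displacements at each station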

nag module

This is drawn from examples in the NAG Library, with the ‘peaks’ function definition found in [1].

References

1. Numerical Algorithms Group, “NAG Library”, Oxford UK, Mark 24, 2013. http://www.nag.co.uk/numeric/CL/
nagdoc_cl24/pdf/E05/e05jbc.pdf
class Peaks(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a peaks function generator
A peaks function [1] is essentially flat, with three wells and three peaks near the origin. The global minimum is
separated from the local minima by peaks.
The generated function f(x) is identical to the ‘peaks’ function in section 10 of [1], and requires len(x) == 2.
This is drawn from examples in the NAG Library, with the ‘peaks’ function definition found in [1].

References

1. Numerical Algorithms Group, “NAG Library”, Oxford UK, Mark 24, 2013. http://www.nag.co.uk/
numeric/CL/nagdoc_cl24/pdf/E05/e05jbc.pdf

function(coeffs)
evaluates an 2-dimensional peaks function for a list of coeffs
f(x) = f_0(x) - f_1(x) - f_2(x)
Where: f_0(x) = 3 * (1 - x_0)^2 * exp(-x_0^2 - (x_1 + 1)^2) and f_1(x) = 10 * (.2 * x_0 - x_0^3 - x_1^5)
* exp(-x_0^2 - x_1^2) and f_2(x) = exp(-(x_0 + 1)^2 - x_1^2) / 3
Inspect with mystic_model_plotter using:: mystic.models.peaks -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=-6.551133332835841 at x=(0.22827892, -1.62553496)
minimizers = [(0.22827892, -1.62553496), (-1.34739625, 0.20451886), (0.29644556, 0.3201

pohlheim module

This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5], [6],
and [7].

References

1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version 3.80,
2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK, 1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston MA,
1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin, Hei-
delberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear Equa-
tions”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville KY,
1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.
class Ackley(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
an Ackley’s path function generator
At a very coarse level, Ackley’s path function [1,3] is a slightly parabolic plane, with a sharp cone-shaped
depression at the origin. The global minimum is found at the origin. There are several local minima evenly
distributed across the function surface, where the surface modulates similarly to cosine.
The generated function f(x) is identical to function (10) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].

References

1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.

function(coeffs)
evaluates Ackley’s function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = -20 * exp(-0.2 * sqrt(1/N * sum_(i=0)^(N-1) x_(i)^(2))) and: f_1(x) = -exp(1/N *
sum_(i=0)^(N-1) cos(2 * pi * x_(i))) + 20 + exp(1)
Inspect with mystic_model_plotter using:: mystic.models.ackley -b “-10:10:.1, -10:10:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
minimizers = None
class Branins(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Branins’s rcos function generator
Branins’s function [1,5] is very similar to Rosenbrock’s saddle function. However, unlike Rosenbrock’s saddle,
Branins’s function has a degenerate global minimum.
The generated function f(x) is identical to function (13) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].

References

1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.

5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.

function(coeffs)
evaluates Branins’s function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = a * (x_1 - b * x_(0)^(2) + c * x_0 - d)^2 and f_1(x) = e * (1 - f) * cos(x_0) + e and a=1,
b=5.1/(4*pi^2), c=5/pi, d=6, e=10, f=1/(8*pi)
Inspect with mystic_model_plotter using:: mystic.models.branins -b “-10:20:.1, -5:25:.1” -d -x 1
The minimum is f(x)=0.397887 at x=((2 +/- (2*i)+1)*pi, 2.275 + 10*i*(i+1)/2) for all i
minimizers = None
class DifferentPowers(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Pohlheim’s sum of different powers function generator
Pohlheim’s sum of different powers function [1] is unimodal, and similar to the hyper-ellipsoid and De Jong’s
sphere. The global minimum is at the origin, at the center of a broad basin.
The generated function f(x) is identical to function (9) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].

References

1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.

function(coeffs)
evaluates the sum of different powers function for a list of coeffs
f(x) = sum_(i=0)^(N-1) abs(x_(i))^(i+2)
Inspect with mystic_model_plotter using:: mystic.models.powers -b “-5:5:.1, -5:5:.1” -d

The minimum is f(x)=0.0 at x_i=0.0 for all i


minimizers = [0.0]
class Easom(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
an Easom’s function generator
Easom’s function [1,6] is a unimodal function that evaluates to zero everywhere except in the region around the
global minimum. The global minimum is at the bottom of a sharp well.
The generated function f(x) is identical to function (14) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].

References

1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.

function(coeffs)
evaluates Easom’s function for a list of coeffs
f(x) = -cos(x_0) * cos(x_1) * exp(-((x_0-pi)^2+(x_1-pi)^2))
Inspect with mystic_model_plotter using:: mystic.models.easom -b “-5:10:.1, -5:10:.1” -d
The minimum is f(x)=-1.0 at x=(pi,pi)
minimizers = [(3.141592653589793, 3.141592653589793)]
class GoldsteinPrice(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Goldstein-Price’s function generator
Goldstein-Price’s function [1,7] provides a function with several peaks surrounding a roughly flat valley. There
are a few shallow scorings across the valley, where the global minimum is found at the intersection of the deepest
of the two scorings. Local minima occur at other intersections of scorings.
The generated function f(x) is identical to function (15) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].

References

1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.

function(coeffs)
evaluates Goldstein-Price’s function for a list of coeffs
f(x) = (1 + (x_0 + x_1 + 1)^2 * f_0(x)) * (30 + (2*x_0 - 3*x_1)^2 * f_1(x))
Where: f_0(x) = 19 - 14*x_0 + 3*x_(0)^2 - 14*x_1 + 6*x_(0)*x_(1) + 3*x_(1)^2 and f_1(x) = 18 - 32*x_0
+ 12*x_(0)^2 + 48*x_1 - 36*x_(0)*x_(1) + 27*x_(1)^2
Inspect with mystic_model_plotter using:: mystic.models.goldstein -b “-5:5:.1, -5:5:.1” -d -x 1
The minimum is f(x)=3.0 at x=(0,-1)
minimizers = [(0, -1), (-0.6, -0.4), (1.8, 0.2)]
class HyperEllipsoid(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Pohlheim’s rotated hyper-ellipsoid function generator
Pohlheim’s rotated hyper-ellipsoid function [1] is continuous, convex, and unimodal. The global minimum is
located at the center of the N-dimensional axis parallel hyper-ellipsoid.
The generated function f(x) is identical to function (1b) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].

References

1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.

5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.

function(coeffs)
evaluates the rotated hyper-ellipsoid function for a list of coeffs
f(x) = sum_(i=0)^(N-1) (sum_(j=0)^(i) x_j)^2
Inspect with mystic_model_plotter using:: mystic.models.ellipsoid -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
minimizers = [0.0]
class Michalewicz(ndim=5)
Bases: mystic.models.abstract_model.AbstractFunction
a Michalewicz’s function generator
Michalewicz’s function [1,4] in general evaluates to zero. However, there are long narrow channels that create
local minima. At the intersection of the channels, the function additionally has sharp dips – one of which is the
global minimum.
The generated function f(x) is identical to function (12) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].

References

1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.

function(coeffs)
evaluates Michalewicz’s function for a list of coeffs
f(x) = -sum_(i=0)^(N-1) sin(x_i) * (sin((i+1) * (x_i)^(2) / pi))^(20)
Inspect with mystic_model_plotter using:: mystic.models.michal -b “0:3.14:.1, 0:3.14:.1, 1.28500168,
1.92305311, 1.72047194” -d

For x=(2.20289811, 1.57078059, 1.28500168, 1.92305311, 1.72047194, ...)[:N] and c=(-0.801303, -1.0,
-0.959092, -0.896699, -1.030564, ...)[:N], the minimum is f(x)=sum(c) for all x_i=(0,pi)
minimizers = None
class Rastrigin(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Rastrigin’s function generator
Rastrigin’s function [1] is essentially De Jong’s sphere with the addition of cosine modulation to produce several
regularly distributed local minima. The global minimum is at the origin.
The generated function f(x) is identical to function (6) of [1], where len(x) >= 0.
This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].

References

1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.

function(coeffs)
evaluates Rastrigin’s function for a list of coeffs
f(x) = 10 * N + sum_(i=0)^(N-1) (x_(i)^2 - 10 * cos(2 * pi * x_(i)))
Inspect with mystic_model_plotter using:: mystic.models.rastrigin -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
minimizers = None
class Schwefel(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Schwefel’s function generator
Schwefel’s function [1,2] has alternating rows of peaks and valleys, with the global minimum near the edge of
the bounded parameter space. This function can be misleading for optimizers as the next best local minima are
near the other corners of the bounded parameter space. The intensity of the peaks and valleys increases as one
moves away from the origin.
The generated function f(x) is identical to function (7) of [1], where len(x) >= 0.

This is part of Pohlheim’s “GEATbx” test suite in [1], with function definitions drawn from [1], [2], [3], [4], [5],
[6], and [7].

References

1. Pohlheim, H. “GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB”, Version
3.80, 2006. http://www.geatbx.com/docu
2. Schwefel, H.-P. “Numerical Optimization of Computer Models”, John Wiley and Sons, Chichester UK,
1981.
3. Ackley, D.H. “A Connectionist Machine for Genetic Hillclimbing”, Kluwer Academic Publishers, Boston
MA, 1987.
4. Michalewicz, Z. “Genetic Algorithms + Data Structures = Evolution Programs”, Springer-Verlag, Berlin,
Heidelberg, New York, 1992.
5. Branin, F.K. “A Widely Convergent Method for Finding Multiple Solutions of Simultaneous Nonlinear
Equations”, IBM J. Res. Develop., 504-522, Sept 1972.
6. Easom, E.E. “A Survey of Global Optimization Techniques”, M. Eng. Thesis, U. Louisville, Louisville
KY, 1990.
7. Goldstein, A.A. and Price, I.F. “On Descent from Local Minima”, Math. Comput., (25) 115, 1971.

function(coeffs)
evaluates Schwefel’s function for a list of coeffs
f(x) = sum_(i=0)^(N-1) -x_i * sin(sqrt(abs(x_i)))
Where abs(x_i) <= 500.
Inspect with mystic_model_plotter using:: mystic.models.schwefel -b “-500:500:10, -500:500:10” -d
The minimum is f(x)=-(N+1)*418.98288727243374 at x_i=420.9687465 for all i
minimizers = None

poly module

1d model representation for polynomials

References

1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over
Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over
Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Storn, R. “Constrained Optimization” Dr. Dobb’s Journal, May, 119-123, 1995.
class Chebyshev(order=8, name=’poly’, metric=<function <lambda>>, sigma=1.0)
Bases: mystic.models.poly.Polynomial
Chebyshev polynomial models and functions, including specific methods for Tn(z) n=2,4,6,8,16, Equation (27-
33) of [2]
NOTE: default is T8(z)

CostFactory(target, pts)
generates a cost function instance from list of coefficients & evaluation points
CostFactory2(pts, datapts, nparams)
generates a cost function instance from datapoints & evaluation points
ForwardFactory(coeffs)
generates a 1-D polynomial instance from a list of coefficients
cost(trial, M=61)
The costfunction for order-n Chebyshev fitting. M evaluation points between [-1, 1], and two end points
forward(x)
forward Chebyshev function
class Polynomial(name=’poly’, metric=<function <lambda>>, sigma=1.0)
Bases: mystic.models.abstract_model.AbstractModel
1-D Polynomial models and functions
ForwardFactory(coeffs)
generates a 1-D polynomial instance from a list of coefficients using numpy.poly1d(coeffs)
evaluate(coeffs, x)
takes list of coefficients & evaluation points, returns f(x) thus, [a3, a2, a1, a0] yields a3 x^3 + a2 x^2 + a1
x^1 + a0
chebyshev16cost(trial, M=61)
The costfunction for order-n Chebyshev fitting. M evaluation points between [-1, 1], and two end points
chebyshev2cost(trial, M=61)
The costfunction for order-n Chebyshev fitting. M evaluation points between [-1, 1], and two end points
chebyshev4cost(trial, M=61)
The costfunction for order-n Chebyshev fitting. M evaluation points between [-1, 1], and two end points
chebyshev6cost(trial, M=61)
The costfunction for order-n Chebyshev fitting. M evaluation points between [-1, 1], and two end points
chebyshev8cost(trial, M=61)
The costfunction for order-n Chebyshev fitting. M evaluation points between [-1, 1], and two end points
chebyshevcostfactory(target)

schittkowski module

This is part of Hock and Schittkowski’s test suite in [1], with function definitions drawn from [1] and [2].

References

1. Hock, W. and Schittkowski, K. “Test Examples for Nonlinear Programming Codes”, Lecture Notes in Eco-
nomics and Mathematical Systems, Vol. 187, Springer, 1981. http://www.ai7.uni-bayreuth.de/test_problem_
coll.pdf
2. Paviani, D.A. “A new method for the solution of the general nonlinear programming problem”, Ph.D. disserta-
tion, The University of Texas, Austin, TX, 1969.
class Paviani(ndim=10)
Bases: mystic.models.abstract_model.AbstractFunction
a Paviani’s function generator

Paviani’s function [1,2] is a relatively flat basin that quickly jumps to infinity for x_i >= 10 or x_i <= 2. The
global minimum is located near one of the basin corners. There are local minima in the corners
adjacent to the global minimum.
The generated function f(x) is identical to function (110) of [1], where len(x) >= 0.
This is part of Hock and Schittkowski’s test suite in [1], with function definitions drawn from [1] and [2].

References

1. Hock, W. and Schittkowski, K. “Test Examples for Nonlinear Programming Codes”, Lecture Notes in
Economics and Mathematical Systems, Vol. 187, Springer, 1981. http://www.ai7.uni-bayreuth.de/test_
problem_coll.pdf
2. Paviani, D.A. “A new method for the solution of the general nonlinear programming problem”, Ph.D.
dissertation, The University of Texas, Austin, TX, 1969.

function(coeffs)
evaluates Paviani’s function for a list of coeffs
f(x) = f_0(x) - f_1(x)
Where: f_0(x) = sum_(i=0)^(N-1) (ln(x_i - 2)^2 + ln(10 - x_i)^2) and f_1(x) = prod_(i=0)^(N-1) x_(i)^(.2)
Inspect with mystic_model_plotter using:: mystic.models.paviani -b “2:10:.1, 2:10:.1” -d
For N=1, the minimum is f(x)=2.133838 at x_i=8.501586, for N=3, the minimum is f(x)=7.386004 at
x_i=8.589578, for N=5, the minimum is f(x)=9.730525 at x_i=8.740743, for N=8, the minimum is f(x)=-
3.411859 at x_i=9.086900, for N=10, the minimum is f(x)=-45.778470 at x_i=9.350241.
minimizers = None

storn module

This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘Corana’ function definitions drawn
from [3,4], ‘Griewangk’ function definitions drawn from [5], and ‘Zimmermann’ function definitions drawn from [6].

References

1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over
Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over
Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. “Simulated Annealing: Practice Versus Theory” J. of Mathematical and Computer Modeling 18(11),
29-57, 1993.
4. Corana, A. and Marchesi, M. and Martini, C. and Ridella, S. “Minimizing Multimodal Functions of Continuous
Variables with the ‘Simulated Annealing Algorithm’” ACM Transactions on Mathematical Software, March,
272-280, 1987.
5. Griewangk, A.O. “Generalized Descent for Global Optimization” Journal of Optimization Theory and Applica-
tions 34: 11-39, 1981.
6. Zimmermann, W. “Operations Research” Oldenbourg Munchen, Wien, 1990.

class Corana(ndim=4)
Bases: mystic.models.abstract_model.AbstractFunction
a Corana’s parabola function generator
Corana’s parabola function [1,2,3,4] defines a paraboloid whose axes are parallel to the coordinate axes. This
function has a large number of wells that increase in depth with proximity to the origin. The global minimum is
a plateau around the origin.
The generated function f(x) is a modified version of equation (22) of [2], where len(x) <= 4.
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘Corana’ function definitions
drawn from [3,4], ‘Griewangk’ function definitions drawn from [5], and ‘Zimmermann’ function definitions
drawn from [6].

References

1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. “Simulated Annealing: Practice Versus Theory” J. of Mathematical and Computer Modeling
18(11), 29-57, 1993.
4. Corana, A. and Marchesi, M. and Martini, C. and Ridella, S. “Minimizing Multimodal Functions of Con-
tinuous Variables with the ‘Simulated Annealing Algorithm’” ACM Transactions on Mathematical Soft-
ware, March, 272-280, 1987.
5. Griewangk, A.O. “Generalized Descent for Global Optimization” Journal of Optimization Theory and
Applications 34: 11-39, 1981.
6. Zimmermann, W. “Operations Research” Oldenbourg Munchen, Wien, 1990.

function(coeffs)
evaluates a 4-D Corana’s parabola function for a list of coeffs
f(x) = sum_(i=0)^(3) f_0(x)
Where for abs(x_i - z_i) < 0.05: f_0(x) = 0.15*(z_i - 0.05*sign(z_i))^(2) * d_i and otherwise: f_0(x) = d_i
* x_(i)^(2), with z_i = floor(abs(x_i/0.2)+0.49999)*sign(x_i)*0.2 and d_i = 1,1000,10,100.
For len(x) == 1, x = x_0,0,0,0; for len(x) == 2, x = x_0,0,x_1,0; for len(x) == 3, x = x_0,0,x_1,x_2; for
len(x) >= 4, x = x_0,x_1,x_2,x_3.
Inspect with mystic_model_plotter using:: mystic.models.corana -b “-1:1:.01, -1:1:.01” -d -x 1
The minimum is f(x)=0 for abs(x_i) < 0.05 for all i.
minimizers = None
class Griewangk(ndim=10)
Bases: mystic.models.abstract_model.AbstractFunction
a Griewangk’s function generator
Griewangk’s function [1,2,5] is a multi-dimensional cosine function that provides several periodic local minima,
with the global minimum at the origin. The local minima are fractionally more shallow than the global minimum,
such that when viewed at a very coarse scale the function appears as a multi-dimensional parabola similar to De
Jong’s sphere.

The generated function f(x) is a modified version of equation (23) of [2], where len(x) >= 0.
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘Corana’ function definitions
drawn from [3,4], ‘Griewangk’ function definitions drawn from [5], and ‘Zimmermann’ function definitions
drawn from [6].

References

1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.
2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. “Simulated Annealing: Practice Versus Theory” J. of Mathematical and Computer Modeling
18(11), 29-57, 1993.
4. Corana, A. and Marchesi, M. and Martini, C. and Ridella, S. “Minimizing Multimodal Functions of Con-
tinuous Variables with the ‘Simulated Annealing Algorithm’” ACM Transactions on Mathematical Soft-
ware, March, 272-280, 1987.
5. Griewangk, A.O. “Generalized Descent for Global Optimization” Journal of Optimization Theory and
Applications 34: 11-39, 1981.
6. Zimmermann, W. “Operations Research” Oldenbourg Munchen, Wien, 1990.

function(coeffs)
evaluates an N-dimensional Griewangk’s function for a list of coeffs
f(x) = f_0(x) - f_1(x) + 1
Where: f_0(x) = sum_(i=0)^(N-1) x_(i)^(2) / 4000. and: f_1(x) = prod_(i=0)^(N-1) cos( x_i / (i+1)^(1/2)
)
Inspect with mystic_model_plotter using:: mystic.models.griewangk -b “-10:10:.1, -10:10:.1” -d -x 5
The minimum is f(x)=0.0 for x_i=0.0
minimizers = [0.0]
class Zimmermann(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Zimmermann function generator
A Zimmermann function [1,2,6] poses difficulty for minimizers as the minimum is located at the corner of the
constrained region. A penalty is applied to all values outside the constrained region, creating a local minimum.
The generated function f(x) is a modified version of equation (24-26) of [2], and requires len(x) == 2.
This is part of Storn’s “Differential Evolution” test suite, as defined in [2], with ‘Corana’ function definitions
drawn from [3,4], ‘Griewangk’ function definitions drawn from [5], and ‘Zimmermann’ function definitions
drawn from [6].

References

1. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” Journal of Global Optimization 11: 341-359, 1997.

2. Storn, R. and Price, K. “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces” TR-95-012, ICSI, 1995. http://www.icsi.berkeley.edu/~storn/TR-95-012.pdf
3. Ingber, L. “Simulated Annealing: Practice Versus Theory” J. of Mathematical and Computer Modeling
18(11), 29-57, 1993.
4. Corana, A. and Marchesi, M. and Martini, C. and Ridella, S. “Minimizing Multimodal Functions of Con-
tinuous Variables with the ‘Simulated Annealing Algorithm’” ACM Transactions on Mathematical Soft-
ware, March, 272-280, 1987.
5. Griewangk, A.O. “Generalized Descent for Global Optimization” Journal of Optimization Theory and
Applications 34: 11-39, 1981.
6. Zimmermann, W. “Operations Research” Oldenbourg Munchen, Wien, 1990.

function(coeffs)
evaluates a Zimmermann function for a list of coeffs
f(x) = max(f_0(x), p_i(x)), with i = 0,1,2,3
Where: f_0(x) = 9 - x_0 - x_1 with for x_0 < 0: p_0(x) = -100 * x_0 and for x_1 < 0: p_1(x) = -100 * x_1
and for c_2(x) > 16 and c_3(x) > 14: p_i(x) = 100 * c_i(x), with i = 2,3 c_2(x) = (x_0 - 3)^2 + (x_1 - 2)^2
c_3(x) = x_0 * x_1 Otherwise, p_i(x)=0 for i=0,1,2,3 and c_i(x)=0 for i=2,3.
Inspect with mystic_model_plotter using:: mystic.models.zimmermann -b “-5:10:.1, -5:10:.1” -d -x 1
The minimum is f(x)=0.0 at x=(7.0,2.0)
minimizers = [(7.0, 2.0), (2.3547765, 5.948322)]

venkataraman module

This is drawn from examples in Applied Optimization with MATLAB programming, with the function definition
found in [1].

References

1. Venkataraman, P. “Applied Optimization with MATLAB Programming”, John Wiley and Sons, Hoboken NJ,
2nd Edition, 2009.
class Sinc(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a Venkataraman’s sinc function generator
Venkataraman’s sinc function [1] has the global minimum at the center of concentric rings of local minima, with
well depth decreasing with distance from center.
The generated function f(x) is identical to equation (9.5) of example 9.1 of [1], and requires len(x) == 2.
This is drawn from examples in Applied Optimization with MATLAB programming, with the function definition
found in [1].

References

1. Venkataraman, P. “Applied Optimization with MATLAB Programming”, John Wiley and Sons, Hoboken
NJ, 2nd Edition, 2009.


function(coeffs)
evaluates Venkataraman’s sinc function for a list of coeffs
f(x) = -20 * sin(r(x)) / r(x)

Where:
r(x) = sqrt((x_0 - 4)^2 + (x_1 - 4)^2 + 0.1)

Inspect with mystic_model_plotter using::
    mystic.models.venkat91 -b "-10:10:.1, -10:10:.1" -d
The minimum is f(x)=-19.668329370585823 at x=(4.0, 4.0)
minimizers = None
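
A minimal sketch of evaluating the generated function, assuming the instance is importable as mystic.models.venkat91 (the name used in the plotter command above):

>>> from mystic.models import venkat91
>>> venkat91([4.0, 4.0])     # the global minimum at the center of the concentric rings
-19.668329370585823
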

wavy module

Multi-minima example functions with vector outputs, which require a ‘reducing’ function to provide scalar return
values.

References

None
class Wavy1(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a wavy1 function generator
A wavy1 function has a vector return value, and oscillates similarly to x+sin(x) in each direction. When a
reduction function, like ‘numpy.add’ is applied, the surface can be visualized. The global minimum is at the
center of a cross-hairs running along x_i = -pi, with periodic local minima in each direction.
The generated function f(x) requires len(x) > 0, and a reducing function for use in most optimizers.
Multi-minima example functions with vector outputs, which require a ‘reducing’ function to provide scalar
return values.

References

None
function(coeffs)
evaluates the wavy1 function for a list of coeffs
f(x) = abs(x + 3*sin(x + pi) + pi)

Inspect with mystic_model_plotter using::
    mystic.models.wavy1 -b "-20:20:.5, -20:20:.5" -d -r numpy.add
The minimum is f(x)=0.0 at x_i=-pi for all i
minimizers = [-3.141592653589793]
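
Because the output is vector-valued, a reducer (such as numpy.add) is needed to obtain a scalar cost. A minimal sketch, assuming the instance is importable as mystic.models.wavy1 (the name used in the plotter command above):

>>> import numpy as np
>>> from mystic.models import wavy1
>>> y = wavy1([-np.pi, -np.pi])    # vector-valued output, one entry per coordinate
>>> float(np.add.reduce(y))        # the reducer collapses the vector to a scalar cost
0.0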
class Wavy2(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
function(coeffs)
evaluates the wavy2 function for a list of coeffs
f(x) = 4*sin(x) + sin(4*x) + sin(8*x) + sin(16*x) + sin(32*x) + sin(64*x)

Inspect with mystic_model_plotter using::
    mystic.models.wavy2 -b "-10:10:.2, -10:10:.2" -d -r numpy.add

The function has degenerate global minima of f(x)=-6.987594 at x_i = 4.489843526 + 2*k*pi for all i, where k is an integer
minimizers = None

wolfram module

This is drawn from Mathematica’s example suites, with the ‘fOsc3D’ function definition found in [1], and the ‘XXX’
function found in [2].

References

1. Trott, M. “The Mathematica GuideBook for Numerics”, Springer-Verlag, New York, 2006.
2. Champion, B. and Strzebonski, A. “Wolfram Mathematica Tutorial Collection on Constrained Optimization”,
Wolfram Research, USA, 2008. http://reference.wolfram.com/language/guide/Optimization.html
class NMinimize51(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a NMinimize51 function generator
A NMinimize51 function [2] has many local minima. The minima are periodic over parameter space, and
modulate the surface of a parabola at the coarse scale. The global minimum is located at the deepest of the many
periodic wells.
The generated function f(x) is identical to equation (51) of the ‘NMinimize’ section in [2], and requires len(x)
== 2.
This is drawn from Mathematica’s example suites, with the ‘fOsc3D’ function definition found in [1], and the
‘XXX’ function found in [2].

References

1. Trott, M. “The Mathematica GuideBook for Numerics”, Springer-Verlag, New York, 2006.
2. Champion, B. and Strzebonski, A. “Wolfram Mathematica Tutorial Collection on Constrained Optimiza-
tion”, Wolfram Research, USA, 2008. http://reference.wolfram.com/language/guide/Optimization.html

function(coeffs)
evaluates the NMinimize51 function for a list of coeffs
f(x) = f_0(x) + f_1(x)

Where:
f_0(x) = exp(sin(50*x_0)) + sin(60*exp(x_1)) + sin(70*sin(x_0))
f_1(x) = sin(sin(80*x_1)) - sin(10*(x_0 + x_1)) + (x_0^2 + x_1^2)/4

Inspect with mystic_model_plotter using::
    mystic.models.nmin51 -b "-5:5:.1, 0:5:.1" -d
The minimum is f(x)=-3.306869 at x=(-0.02440313,0.21061247)
minimizers = [(-0.02440313, 0.21061247)]


class fOsc3D(ndim=2)
Bases: mystic.models.abstract_model.AbstractFunction
a fOsc3D function generator
A fOsc3D function [1] for positive x_1 values yields small sinusoidal oscillations on a flat plane, where a
sinkhole containing the global minimum and a few local minima is found in a small region near the origin. For
negative x_1 values, a parabolic penalty is applied that decreases as x_1 approaches zero.
The generated function f(x) is identical to equation (75) of section 1.10 of [1], and requires len(x) == 2.
This is drawn from Mathematica’s example suites, with the ‘fOsc3D’ function definition found in [1], and the
‘XXX’ function found in [2].

References

1. Trott, M. “The Mathematica GuideBook for Numerics”, Springer-Verlag, New York, 2006.
2. Champion, B. and Strzebonski, A. “Wolfram Mathematica Tutorial Collection on Constrained Optimiza-
tion”, Wolfram Research, USA, 2008. http://reference.wolfram.com/language/guide/Optimization.html

function(coeffs)
evaluates the fOsc3D function for a list of coeffs
f(x) = f_0(x) + p(x)

Where:
f_0(x) = -4 * exp(-x_0^2 - x_1^2) + sin(6*x_0) * sin(5*x_1)
p(x) = 100 * x_1^2, for x_1 < 0
p(x) = 0, otherwise

Inspect with mystic_model_plotter using::
    mystic.models.fosc3d -b "-5:5:.1, 0:5:.1" -d
The minimum is f(x)=-4.501069742528923 at x=(-0.215018, 0.240356)
minimizers = [(-0.215018, 0.240356)]
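
A minimal sketch of minimizing the generated function with a local solver, assuming the instance is importable as mystic.models.fosc3d (the name used in the plotter command above); starting inside the sinkhole, the solver should approach the reported minimum:

>>> from mystic.models import fosc3d
>>> from mystic.solvers import fmin
>>> xopt = fmin(fosc3d, x0=[-0.2, 0.2], disp=0)
>>> # xopt should be near (-0.215018, 0.240356), with fosc3d(xopt) near -4.5011

Note that a starting point farther from the origin may instead converge to one of the nearby local minima.
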

2.19 monitors module

monitors: callable class instances that record data

2.19.1 Monitors

Monitors provide the ability to monitor progress as the optimization is underway. Monitors also can be used to extract
and prepare information for mystic’s analysis viewers. Each of mystic’s monitors is customizable, and provides the
user with a different type of output. The following monitors are available:

Monitor -- the basic monitor; only writes to internal state


LoggingMonitor -- a logging monitor; also writes to a logfile
VerboseMonitor -- a verbose monitor; also writes to stdout/stderr
VerboseLoggingMonitor -- a verbose logging monitor; best of both worlds
CustomMonitor -- a customizable 'n-variable' version of Monitor
Null -- a null object, which reliably does nothing


2.19.2 Usage

Typically monitors are either bound to a model function by a modelFactory, or bound to a cost function by a
Solver.

Examples

>>> # get and configure monitors
>>> from mystic.monitors import Monitor, VerboseMonitor
>>> evalmon = Monitor()
>>> stepmon = VerboseMonitor(5)
>>>
>>> # instantiate and configure the solver
>>> from mystic.solvers import NelderMeadSimplexSolver
>>> from mystic.termination import CandidateRelativeTolerance as CRT
>>> solver = NelderMeadSimplexSolver(len(x0))
>>> solver.SetInitialPoints(x0)
>>>
>>> # associate the monitor with a solver, then solve
>>> solver.SetEvaluationMonitor(evalmon)
>>> solver.SetGenerationMonitor(stepmon)
>>> solver.Solve(rosen, CRT())
>>>
>>> # access the 'iteration' history
>>> stepmon.x # parameters after each iteration
>>> stepmon.y # cost after each iteration
>>>
>>> # access the 'evaluation' history
>>> evalmon.x # parameters after each evaluation
>>> evalmon.y # cost after each evaluation

class Null(*args, **kwargs)


Bases: object
A Null object
Null objects always and reliably “do nothing.”
__bool__()
__call__(...) <==> x(...)
__delattr__(name)
x.__delattr__(‘name’) <==> del x.name
__dict__ = dict_proxy({'__module__': 'mystic.monitors', '__dict__': <attribute '__dic
__getattr__(name)
__getnewargs__()
__init__(*args, **kwargs)
x.__init__(...) initializes x; see help(type(x)) for signature
__len__()
__module__ = 'mystic.monitors'
static __new__(cls, *args, **kwargs)
__nonzero__()


__repr__() <==> repr(x)


__setattr__(name, value)
x.__setattr__(‘name’, value) <==> x.name = value
__weakref__
list of weak references to the object (if defined)
_id = ()
_inst
A Null object
Null objects always and reliably “do nothing.”
_npts = None
_x = ()
_y = ()
info
A Null object
Null objects always and reliably “do nothing.”
k = None
label = None
x = ()
y = ()
class Monitor(**kwds)
Bases: object
Instances of objects that can be passed as monitors. Typically, a Monitor logs a list of parameters and the
corresponding costs, retrievable by accessing the Monitor’s member variables.
example usage...

>>> sow = Monitor()
>>> sow([1,2],3)
>>> sow([4,5],6)
>>> sow.x
[[1, 2], [4, 5]]
>>> sow.y
[3, 6]

_Monitor__step()
__add__(monitor)
add the contents of self and the given monitor
__call__(...) <==> x(...)
__dict__ = dict_proxy({'iy': <property object>, '_Monitor__step': <function __step>,
__init__(**kwds)
x.__init__(...) initializes x; see help(type(x)) for signature
__len__()
__module__ = 'mystic.monitors'


__weakref__
list of weak references to the object (if defined)
_get_y(monitor)
avoid double-conversion by combining k’s
_ik(y, k=False, type=<type ’list’>)
_k(y, type=<type ’list’>)
_pos
Positions
_step
_wts
Weights
ax
Params
ay
Costs
extend(monitor)
append the contents of the given monitor
get_ax()
get_ay()
get_id()
get_info()
get_ipos()
get_iwts()
get_ix()
get_iy()
get_pos()
get_wts()
get_x()
get_y()
id
Id
info(message)
ix
Params
iy
Costs
pos
Positions
prepend(monitor)
prepend the contents of the given monitor


wts
Weights
x
Params
y
Costs
class VerboseMonitor(interval=10, xinterval=inf, all=True, **kwds)
Bases: mystic.monitors.Monitor
A verbose version of the basic Monitor.
Prints output ‘y’ every ‘interval’, and optionally prints input parameters ‘x’ every ‘xinterval’.
__call__(...) <==> x(...)
__init__(interval=10, xinterval=inf, all=True, **kwds)
x.__init__(...) initializes x; see help(type(x)) for signature
__module__ = 'mystic.monitors'
info(message)
class LoggingMonitor(interval=1, filename=’log.txt’, new=False, all=True, info=None, **kwds)
Bases: mystic.monitors.Monitor
A basic Monitor that writes to a file at specified intervals.
Logs output ‘y’ and input parameters ‘x’ to a file every ‘interval’.
__call__(...) <==> x(...)
__init__(interval=1, filename=’log.txt’, new=False, all=True, info=None, **kwds)
x.__init__(...) initializes x; see help(type(x)) for signature
__module__ = 'mystic.monitors'
__reduce__()
helper for pickle
__setstate__(state)
info(message)
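
A minimal sketch of attaching a LoggingMonitor to a solver run, here via the itermon keyword of fmin (documented in the scipy_optimize module below); the resulting logfile can then be visualized with mystic.log_reader:

>>> from mystic.monitors import LoggingMonitor
>>> from mystic.solvers import fmin
>>> from mystic.models import rosen
>>> stepmon = LoggingMonitor(1, filename='log.txt', new=True)   # log every generation
>>> xopt = fmin(rosen, x0=[0., 0., 0.], itermon=stepmon, disp=0)
>>> # 'log.txt' now holds the generation history, e.g. for mystic.log_reader('log.txt')
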
class VerboseLoggingMonitor(interval=1, yinterval=10, xinterval=inf, filename=’log.txt’,
new=False, all=True, info=None, **kwds)
Bases: mystic.monitors.LoggingMonitor
A Monitor that writes to a file and the screen at specified intervals.
Logs output ‘y’ and input parameters ‘x’ to a file every ‘interval’, also print every ‘yinterval’.
__call__(...) <==> x(...)
__init__(interval=1, yinterval=10, xinterval=inf, filename=’log.txt’, new=False, all=True,
info=None, **kwds)
x.__init__(...) initializes x; see help(type(x)) for signature
__module__ = 'mystic.monitors'
__reduce__()
helper for pickle
__setstate__(state)
info(message)


CustomMonitor(*args, **kwds)
generate a custom Monitor
Parameters
• args (tuple(str)) – tuple of the required Monitor inputs (e.g. x).
• kwds (dict(str)) – dict of {"input":"doc"} (e.g. x='Params').
Returns a customized monitor instance

Examples

>>> sow = CustomMonitor('x','y',x="Params",y="Costs",e="Error",d="Deriv")
>>> sow(1,1)
>>> sow(2,4,e=0)
>>> sow.x
[1,2]
>>> sow.y
[1,4]
>>> sow.e
[0]
>>> sow.d
[]

_solutions(monitor, last=None)
return the params from the last N entries in a monitor
_measures(monitor, last=None, weights=False)
return positions or weights from the last N entries in a monitor
this function requires a montor that is monitoring a product_measure
_positions(monitor, last=None)
return positions from the last N entries in a monitor
this function requires a montor that is monitoring a product_measure
_weights(monitor, last=None)
return weights from the last N entries in a monitor
this function requires a montor that is monitoring a product_measure
_load(path, monitor=None, verbose=False)
load npts, params, and cost into monitor from file at given path

2.20 munge module

__orig_write_converge_file(mon, log_file=’paramlog.py’)
__orig_write_support_file(mon, log_file=’paramlog.py’)
converge_to_support(steps, energy)
converge_to_support_converter(file_in, file_out)
isNull(mon)
logfile_reader(filename)


old_to_new_support_converter(file_in, file_out)
raw_to_converge(steps, energy)
raw_to_converge_converter(file_in, file_out)
raw_to_support(steps, energy)
raw_to_support_converter(file_in, file_out)
read_converge_file(file_in)
read_history(source)
read parameter history and cost history from the given source
‘source’ can be a monitor, logfile, support file, or solver restart file
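
A minimal sketch, assuming read_history returns the parameter and cost histories as a (params, cost) pair and that 'log.txt' was written by a LoggingMonitor:

>>> from mystic.munge import read_history
>>> params, cost = read_history('log.txt')
>>> # params holds the parameter history; cost holds the corresponding cost history
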
read_import(file, *targets)
import the targets; targets are name strings
read_monitor(mon, id=False)
read_old_support_file(file_in)
read_raw_file(file_in)
read_support_file(file_in)
read_trajectories(source)
read trajectories from a convergence logfile or a monitor
source can either be a monitor instance or a logfile path
sequence(x)
True if x is a list, tuple, or a ndarray
write_converge_file(mon, log_file=’paramlog.py’, **kwds)
write_monitor(steps, energy, id=[], k=None)
write_raw_file(mon, log_file=’paramlog.py’, **kwds)
write_support_file(mon, log_file=’paramlog.py’, **kwds)

2.21 penalty module

penalty methods: methods used to convert a function into a penalty function


Suppose a given condition f(x) is satisfied when f(x) == 0.0 for equality constraints, and f(x) <= 0.0 for
inequality constraints. This condition f(x) can be used as the basis for a mystic.penalty function.

Examples

>>> def penalty_mean(x, target):
...     return mean(x) - target
...
>>> @quadratic_equality(condition=penalty_mean, kwds={'target':5.0})
... def penalty(x):
...     return 0.0
...
>>> penalty([1,2,3,4,5])
400.0
>>> penalty([3,4,5,6,7])
7.8886090522101181e-29

References

1. http://en.wikipedia.org/wiki/Penalty_method
2. “Applied Optimization with MATLAB Programming”, by Venkataraman, Wiley, 2nd edition, 2009.
3. http://www.srl.gatech.edu/education/ME6103/Penalty-Barrier.ppt
4. “An Augmented Lagrange Multiplier Based Method for Mixed Integer Discrete Continuous Optimization and
Its Applications to Mechanical Design”, by Kannan and Kramer, 1994.
barrier_inequality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)
apply an infinite barrier if the given inequality constraint is violated, and a logarithmic penalty if the inequality
constraint is satisfied
penalty is p(x) = inf if constraint is violated, otherwise penalty is p(x) = -1/pk*log(-f(x)), with pk = 2k*pow(h,n)
and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) <= 0.0
lagrange_equality(condition=<function <lambda>>, args=None, kwds=None, k=20, h=5)
apply a quadratic penalty if the given equality constraint is violated
penalty is p(x) = pk*f(x)**2 + lam*f(x), with pk = k*pow(h,n) and n=0 also lagrange multiplier lam = 2k*f(x)
where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) = 0.0
lagrange_inequality(condition=<function <lambda>>, args=None, kwds=None, k=20, h=5)
apply a quadratic penalty if the given inequality constraint is violated
penalty is p(x) = pk*mpf**2 + beta*mpf, with pk = k*pow(h,n) and n=0 also mpf = max(-beta/2k, f(x)) and
lagrange multiplier beta = 2k*mpf where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) <= 0.0
linear_equality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)
apply a linear penalty if the given equality constraint is violated
penalty is p(x) = pk*abs(f(x)), with pk = k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) == 0.0
linear_inequality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)
apply a linear penalty if the given inequality constraint is violated
penalty is p(x) = pk*abs(f(x)), with pk = 2k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) <= 0.0
quadratic_equality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)
apply a quadratic penalty if the given equality constraint is violated
penalty is p(x) = pk*f(x)**2, with pk = k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) == 0.0


quadratic_inequality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)


apply a quadratic penalty if the given inequality constraint is violated
penalty is p(x) = pk*f(x)**2, with pk = 2k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) <= 0.0
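
A minimal sketch, mirroring the quadratic_equality example above but with an inequality condition (here a hypothetical constraint encoding sum(x) <= 10):

>>> from mystic.penalty import quadratic_inequality
>>> def over_budget(x, limit):     # satisfied when the return value is <= 0.0
...     return sum(x) - limit
...
>>> @quadratic_inequality(condition=over_budget, kwds={'limit':10.0})
... def penalty(x):
...     return 0.0
...
>>> penalty([2, 3, 4])         # constraint satisfied, so no penalty is applied
0.0
>>> penalty([6, 7, 8]) > 0.0   # violated, so a penalty scaling with the squared excess is applied
True
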
uniform_equality(condition=<function <lambda>>, args=None, kwds=None, k=inf, h=5)
apply a uniform penalty if the given equality constraint is violated
penalty is p(x) = pk, with pk = k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) == 0.0
uniform_inequality(condition=<function <lambda>>, args=None, kwds=None, k=inf, h=5)
apply a uniform penalty if the given inequality constraint is violated
penalty is p(x) = pk, with pk = k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) <= 0.0

2.22 pools module

This module contains map and pipe interfaces to standard (i.e. serial) python.
Pipe methods provided:
    pipe - blocking communication pipe [returns: value]

Map methods provided:
    map - blocking and ordered worker pool [returns: list]
    imap - non-blocking and ordered worker pool [returns: iterator]

2.22.1 Usage

A typical call to a pathos python map will roughly follow this example:

>>> # instantiate and configure the worker pool
>>> from pathos.serial import SerialPool
>>> pool = SerialPool()
>>>
>>> # do a blocking map on the chosen function
>>> print(pool.map(pow, [1,2,3,4], [5,6,7,8]))
>>>
>>> # do a non-blocking map, then extract the results from the iterator
>>> results = pool.imap(pow, [1,2,3,4], [5,6,7,8])
>>> print("...")
>>> print(list(results))
>>>
>>> # do one item at a time, using a pipe
>>> print(pool.pipe(pow, 1, 5))
>>> print(pool.pipe(pow, 2, 6))

Notes

This worker pool leverages the built-in python maps, and thus does not have limitations due to serialization of the
function f or the sequences in args. The maps in this worker pool have full functionality whether run from a script or
in the python interpreter, and work reliably for both imported and interactively-defined functions.


class SerialPool(*args, **kwds)


Bases: mystic.abstract_launcher.AbstractWorkerPool
Mapper that leverages standard (i.e. serial) python maps.
Important class members:
    nodes - number (and potentially description) of workers
    ncpus - number of worker processors
    servers - list of worker servers
    scheduler - the associated scheduler
    workdir - associated $WORKDIR for scratch calculations/files

Other class members:
    scatter - True, if uses 'scatter-gather' (instead of 'worker-pool')
    source - False, if minimal use of TemporaryFiles is desired
    timeout - number of seconds to wait for return value from scheduler
_SerialPool__get_nodes()
get the number of nodes in the pool
_SerialPool__set_nodes(nodes)
set the number of nodes in the pool
__module__ = 'mystic.pools'
_exiting = False
_is_alive(negate=False, run=True)
clear()
hard restart
close()
close the pool to any new jobs
imap(f, *args, **kwds)
run a batch of jobs with a non-blocking and ordered map
Returns a list iterator of results of applying the function f to the items of the argument sequence(s). If more
than one sequence is given, the function is called with an argument list consisting of the corresponding
item of each sequence.
join()
cleanup the closed worker processes
map(f, *args, **kwds)
run a batch of jobs with a blocking and ordered map
Returns a list of results of applying the function f to the items of the argument sequence(s). If more than
one sequence is given, the function is called with an argument list consisting of the corresponding item of
each sequence.
nodes
get the number of nodes in the pool
pipe(f, *args, **kwds)
submit a job and block until results are available
Returns result of calling the function f on a selected worker. This function will block until results are
available.
restart(force=False)
restart a closed pool
terminate()
a more abrupt close
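
A minimal sketch of the class documented above, mirroring the pathos usage example but importing from mystic.pools:

>>> from mystic.pools import SerialPool
>>> pool = SerialPool()
>>> pool.map(pow, [1,2,3,4], [5,6,7,8])            # blocking, ordered map
[1, 64, 2187, 65536]
>>> list(pool.imap(pow, [1,2,3,4], [5,6,7,8]))     # non-blocking map, then collect the iterator
[1, 64, 2187, 65536]
>>> pool.pipe(pow, 2, 6)                           # one job at a time
64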


2.23 python_map module

Defaults for mapper and launcher. These should be available as a minimal (dependency-free) pure-python install from
pathos:

serial_launcher -- syntax for standard python execution


python_map -- wrapper around the standard python map
worker_pool -- the worker_pool map strategy

python_map(func, *arglist, **kwds)


maps function func across arguments arglist.
Provides the standard python map function; however, it also accepts kwds in order to conform with the (deprecated)
pyina.ez_map interface.

Notes

The following kwds used in ez_map are accepted, but disabled:


• nodes – the number of parallel nodes
• launcher – the launcher object
• scheduler – the scheduler object
• mapper – the mapper object
• timelimit – string representation of maximum run time (e.g. ‘00:02’)
• queue – string name of selected queue (e.g. ‘normal’)
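
A minimal sketch, assuming results are returned as a list (matching the classic python map behavior) and that the disabled ez_map keywords above are simply accepted and ignored:

>>> from mystic.python_map import python_map
>>> python_map(pow, [1,2,3,4], [5,6,7,8])
[1, 64, 2187, 65536]
>>> python_map(pow, [1,2,3,4], [5,6,7,8], nodes=4)   # 'nodes' is accepted but has no effect
[1, 64, 2187, 65536]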

serial_launcher(kdict={})
prepare launch for standard execution syntax: (python) (program) (progargs)

Notes

run non-python shell commands by setting python to a null string: kdict = {'python':'', ...}
worker_pool()
use the ‘worker pool’ strategy; hence one job is allocated to each worker, and the next new work item is provided
when a node completes its work

2.24 scemtools module

Implements the “Shuffled Complex Evolution Metropolis” Algorithm of Vrugt et al. [1]
Reference:
[1] Jasper A. Vrugt, Hoshin V. Gupta, Willem Bouten, and Soroosh Sorooshian A Shuffled Complex
Evolution Metropolis algorithm for optimization and uncertainty assessment of hydrologic model param-
eters, WATER RESOURCES RESEARCH, VOL. 39, NO. 8, 1201, doi:10.1029/2002WR001642, 2003
http://www.agu.org/pubs/crossref/2003/2002WR001642.shtml
[2] Vrugt JA, Nuallain, Robinson BA, Bouten W, Dekker SC, Sloot PM, Application of parallel computing to stochastic
parameter estimation in environmental models, Computers & Geosciences, Vol. 32, No. 8. (October 2006), pp. 1139-
1155. http://www.science.uva.nl/research/scs/papers/archive/Vrugt2006b.pdf


multinormal_pdf(mean, var)
var must be symmetric positive definite
myinsert(a, x)
remix(Cs, As)
Mixing and dealing the complexes. The types of Cs and As are very important...
scem(Ck, ak, Sk, Sak, target, cn)
This is the SCEM algorithm starting from line [35] of the reference [1].
• [inout] Ck is the kth ‘complex’ with m points. This should be an m by n array n being the dimensionality
of the density function. i.e., the data are arranged in rows.
Ck is assumed to be sorted according to the target density.
• [inout] ak, the density of the points of Ck.
• [inout] Sk, the entire chain. (should be a list)
• [inout] Sak, the cost of the entire chain (should be a list)
Sak would be more convenient to use if it is a numpy array, but we need to append to it frequently.
• [in] target: target density function
• [in] cn: jumprate (see Paragraph 37 of [1])
• The invariants: ak is always aligned with Ck, and are the cost of Ck
• Similarly, Sak is always aligned with Sk in the same way.
• On return... sort order in Ck/ak is destroyed, but see sort_complex2
sequential_deal(inarray, n)
• inarray: should be a set of N objects (the objects can be vectors themselves, but inarray should be indexable like a list). It is coerced into a numpy array because the last operation requires that it is also indexable by a 'list'.
• it should have a length divisible by n, otherwise the reshape will fail (this is a feature!)
• sequential_deal(range(20), 5) will return a 5-element list, each element being a 4-element list of indices (see below)

>>> for l in sequential_deal(range(20),5):
...     print(l)
...
[ 0 5 10 15]
[ 1 6 11 16]
[ 2 7 12 17]
[ 3 8 13 18]
[ 4 9 14 19]

sort_ab_with_b(a, b, ord=-1)
default is descending...
sort_and_deal(cards, target, nplayers)
sort_complex(c, a)
sort_complex0(c, a)
sort_complex2(c, a)
• c and a are partially sorted (either the last one is bad, or the first one)
• pos: 0 (first one out of order) or -1 (last one out of order)


update_complex(Ck, ak, c, a, pos)


• ak is sorted (descending)
• Ck[pos] and ak[pos] will be removed, and then c and a spliced in at the proper place
• pos is 0, or -1

2.25 scipy_optimize module

2.25.1 Solvers

This module contains a collection of optimization routines adapted from scipy.optimize. The minimal scipy interface
has been preserved, and functionality from the mystic solver API has been added with reasonable defaults.
Minimal function interface to optimization routines::
    fmin        -- Nelder-Mead Simplex algorithm (uses only function calls)
    fmin_powell -- Powell's (modified) level set method (uses only function calls)

The corresponding solvers built on mystic's AbstractSolver are::
    NelderMeadSimplexSolver -- Nelder-Mead Simplex algorithm
    PowellDirectionalSolver -- Powell's (modified) level set method
Mystic solver behavior activated in fmin::
• EvaluationMonitor = Monitor()
• StepMonitor = Monitor()
• termination = CandidateRelativeTolerance(xtol,ftol)
Mystic solver behavior activated in fmin_powell::
• EvaluationMonitor = Monitor()
• StepMonitor = Monitor()
• termination = NormalizedChangeOverGeneration(ftol)

2.25.2 Usage

See mystic.examples.test_rosenbrock2 for an example of using NelderMeadSimplexSolver. See
mystic.examples.test_rosenbrock3 for an example of using PowellDirectionalSolver.
All solvers included in this module provide the standard signal handling. For more information, see
mystic.mystic.abstract_solver.
All solvers included in this module provide the standard signal handling. For more information, see mys-
tic.mystic.abstract_solver.

References

1. Nelder, J.A. and Mead, R. (1965), “A simplex method for function minimization”, The Computer Journal, 7, pp.
308-313.
2. Wright, M.H. (1996), “Direct Search Methods: Once Scorned, Now Respectable”, in Numerical Analysis 1995,
Proceedings of the 1995 Dundee Biennial Conference in Numerical Analysis, D.F. Griffiths and G.A. Watson
(Eds.), Addison Wesley Longman, Harlow, UK, pp. 191-208.
3. Gao, F. and Han, L. (2012), “Implementing the Nelder-Mead simplex algorithm with adaptive parameters”,
Computational Optimization and Applications. 51:1, pp. 259-277.


4. Powell M.J.D. (1964) An efficient method for finding the minimum of a function of several variables without
calculating derivatives, Computer Journal, 7 (2):155-162.
5. Press W., Teukolsky S.A., Vetterling W.T., and Flannery B.P.: Numerical Recipes (any edition), Cambridge
University Press
class NelderMeadSimplexSolver(dim)
Bases: mystic.abstract_solver.AbstractSolver
Nelder Mead Simplex optimization adapted from scipy.optimize.fmin.
Takes one initial input: dim – dimensionality of the problem
The size of the simplex is dim+1.
Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using the downhill simplex algorithm.
Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• disp (bool, default=False) – if True, print convergence messages.
• radius (float, default=0.05) – percentage change for initial simplex values.
• adaptive (bool, default=False) – adapt algorithm parameters to the dimensionality of the
initial parameter vector x.
Returns None
_SetEvaluationLimits(iterscale=200, evalscale=200)
set the evaluation limits
_Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration Note that ExtraArgs should be a tuple of extra arguments
__init__(dim)
Takes one initial input: dim – dimensionality of the problem
The size of the simplex is dim+1.
__module__ = 'mystic.scipy_optimize'
_decorate_objective(cost, ExtraArgs=None)
decorate the cost function with bounds, penalties, monitors, etc
_process_inputs(kwds)
process and activate input settings
_setSimplexWithinRangeBoundary(radius=None)
ensure that initial simplex is set within bounds - radius: size of the initial simplex [default=0.05]


class PowellDirectionalSolver(dim)
Bases: mystic.abstract_solver.AbstractSolver
Powell Direction Search optimization, adapted from scipy.optimize.fmin_powell.
Takes one initial input: dim – dimensionality of the problem
Finalize(**kwds)
cleanup upon exiting the main optimization loop
Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using modified Powell’s method.
Uses a modified Powell Directional Search algorithm to find the minimum of a function of one or more
variables.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• direc (tuple, default=None) – the initial direction set.
• xtol (float, default=1e-4) – line-search error tolerance.
• imax (float, default=500) – line-search maximum iterations.
• disp (bool, default=False) – if True, print convergence messages.
Returns None
_PowellDirectionalSolver__generations()
get the number of iterations
_SetEvaluationLimits(iterscale=1000, evalscale=1000)
set the evaluation limits
_Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration Note that ExtraArgs should be a tuple of extra arguments
__init__(dim)
Takes one initial input: dim – dimensionality of the problem
__module__ = 'mystic.scipy_optimize'
_process_inputs(kwds)
process and activate input settings
generations
get the number of iterations
fmin(cost, x0, args=(), bounds=None, xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None,
full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using the downhill simplex algorithm.
Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables. This
algorithm only uses function values, not derivatives or second derivatives. Mimics the scipy.optimize.
fmin interface.


This algorithm has a long history of successful use in applications. It will usually be slower than an algorithm
that uses first or second derivative information. In practice it can have poor performance in high-dimensional
problems and is not robust to minimizing complicated functions. Additionally, there currently is no complete
theory describing when the algorithm will successfully converge to the minimum, or how fast it will if it does.
Both the ftol and xtol criteria must be met for convergence.
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• x0 (ndarray) – the initial guess parameter vector x.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• xtol (float, default=1e-4) – acceptable absolute error in xopt for convergence.
• ftol (float, default=1e-4) – acceptable absolute error in cost(xopt) for convergence.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0
otherwise).
Returns (xopt, {fopt, iter, funcalls, warnflag}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function


• fopt (float): value of cost function at minimum: fopt = cost(xopt)
• iter (int): number of iterations
• funcalls (int): number of function calls
• warnflag (int): warning flag:
– 1 : Maximum number of function evaluations
– 2 : Maximum number of iterations


• allvecs (list): a list of solutions at each iteration
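
A minimal sketch of the interface described above, using the Rosenbrock function from mystic.models; the exact iteration and evaluation counts will vary, and the full_output tuple is assumed to unpack in the order listed in the Notes:

>>> from mystic.solvers import fmin
>>> from mystic.models import rosen
>>> xopt = fmin(rosen, x0=[0.1, 0.1, 0.1], disp=0)
>>> # xopt should be close to [1., 1., 1.], the known minimizer of the Rosenbrock function
>>> xopt, fopt, iters, funcalls, warnflag = fmin(rosen, [0.1, 0.1, 0.1], disp=0, full_output=1)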

fmin_powell(cost, x0, args=(), bounds=None, xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None,


full_output=0, disp=1, retall=0, callback=None, direc=None, **kwds)
Minimize a function using modified Powell’s method.
Uses a modified Powell Directional Search algorithm to find the minimum of a function of one or more variables.
This method only uses function values, not derivatives. Mimics the scipy.optimize.fmin_powell
interface.
Powell’s method is a conjugate direction method that has two loops. The outer loop simply iterates over the
inner loop, while the inner loop minimizes over each current direction in the direction set. At the end of the
inner loop, if certain conditions are met, the direction that gave the largest decrease is dropped and replaced
with the difference between the current estimated x and the estimated x from the beginning of the inner-loop.
The conditions for replacing the direction of largest increase are that: (a) no further gain can be made along the
direction of greatest increase in the iteration, and (b) the direction of greatest increase accounted for a sufficiently
large fraction of the decrease in the function value from the current iteration of the inner loop.
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• x0 (ndarray) – the initial guess parameter vector x.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• xtol (float, default=1e-4) – acceptable relative error in xopt for convergence.
• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=2) – maximum iterations to run without improvement.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• direc (tuple, default=None) – the initial direction set.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0
otherwise).
Returns (xopt, {fopt, iter, funcalls, warnflag, direc}, {allvecs})


Notes

• xopt (ndarray): the minimizer of the cost function


• fopt (float): value of cost function at minimum: fopt = cost(xopt)
• iter (int): number of iterations
• funcalls (int): number of function calls
• warnflag (int): warning flag:
– 1 : Maximum number of function evaluations
– 2 : Maximum number of iterations
• direc (tuple): the current direction set
• allvecs (list): a list of solutions at each iteration

functional interfaces for mystic’s visual analytics scripts


model_plotter(model, logfile=None, **kwds)
generate surface contour plots for model, specified by full import path; and generate model trajectory from
logfile (or solver restart file), if provided
Available from the command shell as:

mystic_model_plotter.py model (logfile) [options]

or as a function call:

mystic.model_plotter(model, logfile=None, **options)

Parameters
• model (str) – full import path for the model (e.g. mystic.models.rosen)
• logfile (str, default=None) – name of convergence logfile (e.g. log.txt)
Returns None

Notes

• The option out takes a string of the filepath for the generated plot.
• The option bounds takes an indicator string, where bounds are given as comma-separated slices. For
example, using bounds = "-1:10, 0:20" will set lower and upper bounds for x to be (-1,10) and
y to be (0,20). The “step” can also be given, to control the number of lines plotted in the grid. Thus
"-1:10:.1, 0:20" sets the bounds as above, but uses increments of .1 along x and the default step
along y. For models > 2D, the bounds can be used to specify 2 dimensions plus fixed values for remaining
dimensions. Thus, "-1:10, 0:20, 1.0" plots the 2D surface where the z-axis is fixed at z=1.0.
When called from a script, slice objects can be used instead of a string, thus "-1:10:.1, 0:20, 1.
0" becomes (slice(-1,10,.1), slice(20), 1.0).
• The option label takes comma-separated strings. For example, label = "x,y," will place ‘x’ on the
x-axis, ‘y’ on the y-axis, and nothing on the z-axis. LaTeX is also accepted. For example, label = "$
h $, $ {\alpha}$, $ v$" will label the axes with standard LaTeX math formatting. Note that the
leading space is required, while a trailing space aligns the text with the axis instead of the plot frame.
• The option nid takes an integer of the nth simultaneous points to plot.


• The option iter takes an integer of the largest iteration to plot.


• The option reduce can be given to reduce the output of a model to a scalar, thus converting
model(params) to reduce(model(params)). A reducer is given by the import path (e.g.
numpy.add).
• The option scale will convert the plot to log-scale, and scale the cost by z=log(4*z*scale+1)+2.
This is useful for visualizing small contour changes around the minimum.
• If using log-scale produces negative numbers, the option shift can be used to shift the cost by z=z+shift.
Both shift and scale are intended to help visualize contours.
• The option fill takes a boolean, to plot using filled contours.
• The option depth takes a boolean, to plot contours in 3D.
• The option dots takes a boolean, to show trajectory points in the plot.
• The option join takes a boolean, to connect trajectory points.
• The option verb takes a boolean, to print the model documentation.
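
For example, a call from the interpreter might look like the following sketch, which assumes the keyword names mirror the options listed above and that matplotlib is available for display:

>>> import mystic
>>> # filled 2D contours of the Rosenbrock model over x in (-2,2) and y in (-1,3)
>>> mystic.model_plotter('mystic.models.rosen', bounds="-2:2:.1, -1:3:.1", fill=True)
>>> # overlay the solver trajectory recorded in a convergence logfile
>>> mystic.model_plotter('mystic.models.rosen', 'log.txt', bounds="-2:2:.1, -1:3:.1", dots=True, join=True)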

log_reader(filename, **kwds)
plot parameter convergence from file written with LoggingMonitor
Available from the command shell as:

mystic_log_reader.py filename [options]

or as a function call:

mystic.log_reader(filename, **options)

Parameters filename (str) – name of the convergence logfile (e.g log.txt).


Returns None

Notes

• The option out takes a string of the filepath for the generated plot.
• The option dots takes a boolean, and will show data points in the plot.
• The option line takes a boolean, and will connect the data with a line.
• The option iter takes an integer of the largest iteration to plot.
• The option legend takes a boolean, and will display the legend.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option param takes an indicator string. The indicator string is built from comma-separated array slices.
For example, params = ":" will plot all parameters. Alternatively, params = ":2, 3:" will plot
all parameters except for the third parameter, while params = "0" will only plot the first parameter.

collapse_plotter(filename, **kwds)
generate cost convergence rate plots from file written with write_support_file
Available from the command shell as:

mystic_collapse_plotter.py filename [options]


or as a function call:

mystic.collapse_plotter(filename, **options)

Parameters filename (str) – name of the convergence logfile (e.g paramlog.py).


Returns None

Notes

• The option dots takes a boolean, and will show data points in the plot.
• The option linear takes a boolean, and will plot in a linear scale.
• The option out takes a string of the filepath for the generated plot.
• The option iter takes an integer of the largest iteration to plot.
• The option label takes a label string. For example, label = "y" will label the plot with a ‘y’, while
label = " log-cost, $ log_{10}(\hat{P} - \hat{P}_{max})$" will label the y-axis
with standard LaTeX math formatting. Note that the leading space is required, and that the text is aligned
along the axis.
• The option col takes a string of comma-separated integers indicating iteration numbers where parameter
collapse has occurred. If a second set of integers is provided (delineated by a semicolon), the additional
set of integers will be plotted with a different linestyle (to indicate a different type of collapse).

2.26 search module

a global searcher
class Searcher(npts=4, retry=1, tol=8, memtol=1, map=None, archive=None, sprayer=None,
seeker=None, traj=False, disp=False)
Bases: object
searcher, which searches for all minima of a response surface
Input:
    npts - number of solvers in the ensemble
    retry - max consecutive retries w/o an archive 'miss'
    tol - rounding precision for the minima comparator
    memtol - rounding precision for memoization
    map - map used for spawning solvers in the ensemble
    archive - the sampled point archive(s)
    sprayer - the mystic.ensemble instance
    seeker - the mystic.solvers instance
    traj - if True, save the parameter trajectories
    disp - if True, be verbose
Coordinates(unique=False)
return the sequence of stored parameter trajectories
Input: unique: if True, only return unique values
Output: a list of parameter trajectories
Minima(tol=None)
return a dict of (coordinates,values) of all discovered minima
Input: tol: tolerance within which to consider a point a minimum
Output: a dict of (coordinates,values) of all discovered minima
Reset(archive=None, inv=None)
clear the archive of sampled points


Input:
    archive - the sampled point archive(s)
    inv - if True, reset the archive for the inverse of the objective
Samples()
return array of (coordinates, cost) for all trajectories
Search(model, bounds, stop=None, monitor=None, traj=None, disp=None)
use an ensemble of optimizers to search for all minima
Inputs:
    model - function z=f(x) to be used as the objective of the Searcher
    bounds - tuple of floats (min,max), bounds on the search region
    stop - termination condition
    monitor - mystic.monitor instance to store parameter trajectories
    traj - klepto.archive to store sampled points
    disp - if True, be verbose
Trajectories()
return tuple (iteration, coordinates, cost) of all trajectories
UseTrajectories(traj=True)
save all sprayers, thus save all trajectories
Values(unique=False)
return the sequence of stored response surface outputs
Input: unique: if True, only return unique values
Output: a list of stored response surface outputs
Verbose(disp=True)
be verbose
__dict__ = dict_proxy({'Reset': <function Reset>, '__module__': 'mystic.search', '_pr
__init__(npts=4, retry=1, tol=8, memtol=1, map=None, archive=None, sprayer=None,
seeker=None, traj=False, disp=False)
searcher, which searches for all minima of a response surface
Input:
    npts - number of solvers in the ensemble
    retry - max consecutive retries w/o an archive 'miss'
    tol - rounding precision for the minima comparator
    memtol - rounding precision for memoization
    map - map used for spawning solvers in the ensemble
    archive - the sampled point archive(s)
    sprayer - the mystic.ensemble instance
    seeker - the mystic.solvers instance
    traj - if True, save the parameter trajectories
    disp - if True, be verbose
__module__ = 'mystic.search'
__weakref__
list of weak references to the object (if defined)
_configure(model, bounds, stop=None, monitor=None)
generate ensemble solver from objective, bounds, termination, monitor
_memoize(solver, tol=1)
apply caching archive to ensemble solver instance
_print(solver, tol=8)
print bestSolution and bestEnergy for each sprayer
_search(sid)
run the solver, store the trajectory, and cache to the archive
_solve(id=None, disp=None)
run the solver (i.e. search for the minima)
_summarize()
provide a summary of the search results
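
A minimal sketch of a search over the Griewangk surface; this assumes the default sprayer and seeker are used when none are given, and that an in-memory archive is acceptable (an explicit klepto archive may otherwise be required):

>>> from mystic.search import Searcher
>>> from mystic.models import griewangk
>>> searcher = Searcher(npts=4, retry=1, tol=8)
>>> searcher.Search(griewangk, bounds=[(-10., 10.)]*2)
>>> searcher.Minima()    # dict of (coordinates, values) for the discovered minima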


2.27 solvers module

solvers: minimal and expanded interfaces for optimization algorithms

2.27.1 Standard Interface

All of mystic’s optimizers derive from the solver API, which provides each optimizer with a standard, but highly-
customizable interface. A description of the solver API is found in mystic.models.abstract_model, and in
each derived optimizer. Mystic’s optimizers are:

** Global Optimizers **
DifferentialEvolutionSolver -- Differential Evolution algorithm
DifferentialEvolutionSolver2 -- Price & Storn's Differential Evolution
** Pseudo-Global Optimizers **
SparsitySolver -- N Solvers sampled where point density is low
BuckshotSolver -- Uniform Random Distribution of N Solvers
LatticeSolver -- Distribution of N Solvers on a Regular Grid
** Local-Search Optimizers **
NelderMeadSimplexSolver -- Nelder-Mead Simplex algorithm
PowellDirectionalSolver -- Powell's (modified) Level Set algorithm

2.27.2 Minimal Interface

Most of mystic’s optimizers can be called from a minimal (i.e. one-line) interface. The collection of arguments is
often unique to the optimizer, and if the underlying solver derives from a third-party package, the original interface is
reproduced. Minimal interfaces to these optimizers are provided:

** Global Optimizers **
diffev -- DifferentialEvolutionSolver
diffev2 -- DifferentialEvolutionSolver2
** Pseudo-Global Optimizers **
sparsity -- SparsitySolver
buckshot -- BuckshotSolver
lattice -- LatticeSolver
** Local-Search Optimizers **
fmin -- NelderMeadSimplexSolver
fmin_powell -- PowellDirectionalSolver
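
As a brief sketch of the minimal interface (here the one-line diffev2, versus configuring the corresponding solver class documented below):

>>> from mystic.solvers import diffev2
>>> from mystic.models import rosen
>>> xopt = diffev2(rosen, x0=[(-3,3)]*3, npop=40, ftol=1e-8, disp=0)
>>> # xopt should be close to [1., 1., 1.], the known minimizer of the Rosenbrock function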

2.27.3 More Information

For more information, please see the solver documentation found here:
• mystic.differential_evolution [differential evolution solvers]
• mystic.scipy_optimize [scipy local-search solvers]
• mystic.ensemble [pseudo-global solvers]
or the API documentation found here:
• mystic.abstract_solver [the solver API definition]
• mystic.abstract_map_solver [the parallel solver API]
• mystic.abstract_ensemble_solver [the ensemble solver API]


class BuckshotSolver(dim, npts=8)


Bases: mystic.abstract_ensemble_solver.AbstractEnsembleSolver
parallel mapped optimization starting from N uniform randomly sampled points
Takes two initial inputs: dim – dimensionality of the problem npts – number of parallel solver instances
All important class members are inherited from AbstractEnsembleSolver.
_InitialPoints()
Generate a grid of starting points for the ensemble of optimizers
__init__(dim, npts=8)
Takes two initial inputs: dim – dimensionality of the problem npts – number of parallel solver instances
All important class members are inherited from AbstractEnsembleSolver.
__module__ = 'mystic.ensemble'
class DifferentialEvolutionSolver(dim, NP=4)
Bases: mystic.abstract_solver.AbstractSolver
Differential Evolution optimization.
Takes two initial inputs: dim – dimensionality of the problem NP – size of the trial solution population. [re-
quires: NP >= 4]
All important class members are inherited from AbstractSolver.
SetConstraints(constraints)
apply a constraints function to the optimization
input::
• a constraints function of the form: xk’ = constraints(xk), where xk is the current parameter vector.
Ideally, this function is constructed so the parameter vector it passes to the cost function will
satisfy the desired (i.e. encoded) constraints.
Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using differential evolution.
Uses a differential evolution algorithm to find the minimum of a function of one or more variables.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial so-
lutions.
• CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.
• ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• disp (bool, default=False) – if True, print convergence messages.
Returns None


UpdateGenealogyRecords(id, newchild)
Override me for more refined behavior. Currently all changes are logged.
_Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration Note that ExtraArgs should be a tuple of extra arguments
__init__(dim, NP=4)
Takes two initial inputs: dim – dimensionality of the problem NP – size of the trial solution population.
[requires: NP >= 4]
All important class members are inherited from AbstractSolver.
__module__ = 'mystic.differential_evolution'
_decorate_objective(cost, ExtraArgs=None)
decorate cost function with bounds, penalties, monitors, etc
_process_inputs(kwds)
process and activate input settings
class DifferentialEvolutionSolver2(dim, NP=4)
Bases: mystic.abstract_map_solver.AbstractMapSolver
Differential Evolution optimization, using Storn and Price’s algorithm.
Alternate implementation:
• utilizes a map-reduce interface, extensible to parallel computing
• both a current and a next generation are kept, while the current generation is invariant during the main
DE logic
Takes two initial inputs: dim – dimensionality of the problem NP – size of the trial solution population. [re-
quires: NP >= 4]
All important class members are inherited from AbstractSolver.
SetConstraints(constraints)
apply a constraints function to the optimization
input::
• a constraints function of the form: xk’ = constraints(xk), where xk is the current parameter vector.
Ideally, this function is constructed so the parameter vector it passes to the cost function will
satisfy the desired (i.e. encoded) constraints.
Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using differential evolution.
Uses a differential evolution algorithm to find the minimum of a function of one or more variables. This
implementation holds the current generation invariant until the end of each iteration.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial so-
lutions.
• CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.
• ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.


• sigint_callback (func, default=None) – callback function for signal handler.


• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• disp (bool, default=False) – if True, print convergence messages.
Returns None
UpdateGenealogyRecords(id, newchild)
Override me for more refined behavior. Currently all changes are logged.
_Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration Note that ExtraArgs should be a tuple of extra arguments
__init__(dim, NP=4)
Takes two initial inputs: dim – dimensionality of the problem NP – size of the trial solution population.
[requires: NP >= 4]
All important class members are inherited from AbstractSolver.
__module__ = 'mystic.differential_evolution'
_decorate_objective(cost, ExtraArgs=None)
decorate cost function with bounds, penalties, monitors, etc
_process_inputs(kwds)
process and activate input settings
class LatticeSolver(dim, nbins=8)
Bases: mystic.abstract_ensemble_solver.AbstractEnsembleSolver
parallel mapped optimization starting from the centers of N grid points
Takes two initial inputs: dim – dimensionality of the problem nbins – tuple of number of bins in each dimen-
sion
All important class members are inherited from AbstractEnsembleSolver.
_InitialPoints()
Generate a grid of starting points for the ensemble of optimizers
__init__(dim, nbins=8)
Takes two initial inputs: dim – dimensionality of the problem nbins – tuple of number of bins in each
dimension
All important class members are inherited from AbstractEnsembleSolver.
__module__ = 'mystic.ensemble'
LoadSolver(filename=None, **kwds)
load solver state from a restart file
class NelderMeadSimplexSolver(dim)
Bases: mystic.abstract_solver.AbstractSolver
Nelder Mead Simplex optimization adapted from scipy.optimize.fmin.
Takes one initial input: dim – dimensionality of the problem
The size of the simplex is dim+1.
Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using the downhill simplex algorithm.


Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• disp (bool, default=False) – if True, print convergence messages.
• radius (float, default=0.05) – percentage change for initial simplex values.
• adaptive (bool, default=False) – adapt algorithm parameters to the dimensionality of the
initial parameter vector x.
Returns None
_SetEvaluationLimits(iterscale=200, evalscale=200)
set the evaluation limits
_Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration. Note that ExtraArgs should be a tuple of extra arguments.
__init__(dim)
Takes one initial input: dim – dimensionality of the problem
The size of the simplex is dim+1.
__module__ = 'mystic.scipy_optimize'
_decorate_objective(cost, ExtraArgs=None)
decorate the cost function with bounds, penalties, monitors, etc
_process_inputs(kwds)
process and activate input settings
_setSimplexWithinRangeBoundary(radius=None)
ensure that initial simplex is set within bounds - radius: size of the initial simplex [default=0.05]
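For orientation, a minimal usage sketch of this class interface (the quadratic cost is illustrative; CandidateRelativeTolerance from mystic.termination and the inherited SetInitialPoints and Solution members are assumed to be available):

>>> from mystic.solvers import NelderMeadSimplexSolver
>>> from mystic.termination import CandidateRelativeTolerance as CRT
>>> cost = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2   # toy cost, minimum at (1, 2)
>>> solver = NelderMeadSimplexSolver(2)                  # dim=2, so the simplex has 3 vertices
>>> solver.SetInitialPoints([0.0, 0.0])                  # starting parameter vector
>>> solver.Solve(cost, termination=CRT())                # iterate until the simplex converges
>>> xopt = solver.Solution()                             # best parameters found; expected near [1.0, 2.0]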
class PowellDirectionalSolver(dim)
Bases: mystic.abstract_solver.AbstractSolver
Powell Direction Search optimization, adapted from scipy.optimize.fmin_powell.
Takes one initial input: dim – dimensionality of the problem
Finalize(**kwds)
cleanup upon exiting the main optimization loop
Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using modified Powell’s method.
Uses a modified Powell Directional Search algorithm to find the minimum of a function of one or more
variables.
Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.

• ExtraArgs (tuple, default=None) – extra arguments for cost.


• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• direc (tuple, default=None) – the initial direction set.
• xtol (float, default=1e-4) – line-search error tolerance.
• imax (float, default=500) – line-search maximum iterations.
• disp (bool, default=False) – if True, print convergence messages.
Returns None
_PowellDirectionalSolver__generations()
get the number of iterations
_SetEvaluationLimits(iterscale=1000, evalscale=1000)
set the evaluation limits
_Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration. Note that ExtraArgs should be a tuple of extra arguments.
__init__(dim)
Takes one initial input: dim – dimensionality of the problem
__module__ = 'mystic.scipy_optimize'
_process_inputs(kwds)
process and activate input settings
generations
get the number of iterations
class SparsitySolver(dim, npts=8, rtol=None)
Bases: mystic.abstract_ensemble_solver.AbstractEnsembleSolver
parallel mapped optimization starting from N points sampled from sparse regions
Takes three initial inputs: dim – dimensionality of the problem; npts – number of parallel solver instances; rtol – size of radial tolerance for sparsity.
All important class members are inherited from AbstractEnsembleSolver.
_InitialPoints()
Generate a grid of starting points for the ensemble of optimizers
__init__(dim, npts=8, rtol=None)
Takes three initial inputs: dim – dimensionality of the problem; npts – number of parallel solver instances; rtol – size of radial tolerance for sparsity.
All important class members are inherited from AbstractEnsembleSolver.
__module__ = 'mystic.ensemble'
buckshot(cost, ndim, npts=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None,
full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using the buckshot ensemble solver.
Uses a buckshot ensemble algorithm to find the minimum of a function of one or more variables. Mimics the
scipy.optimize.fmin interface. Starts npts solver instances at random points in parameter space.
Parameters

• cost (func) – the function or method to be minimized: y = cost(x).


• ndim (int) – dimensionality of the problem.
• npts (int, default=8) – number of solver instances.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=10) – maximum iterations to run without improvement.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• solver (solver, default=None) – override the default nested Solver instance.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0
otherwise).
• map (func, default=None) – a (parallel) map function y = map(f, x).
• dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting
position using the given distribution.
Returns (xopt, {fopt, iter, funcalls, warnflag, allfuncalls},
{allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function


• fopt (float): value of cost function at minimum: fopt = cost(xopt)
• iter (int): number of iterations
• funcalls (int): number of function calls
• warnflag (int): warning flag:
– 1 : Maximum number of function evaluations

– 2 : Maximum number of iterations


• allfuncalls (int): total function calls (for all solver instances)
• allvecs (list): a list of solutions at each iteration
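A minimal usage sketch (the quadratic cost and the bounds are illustrative assumptions, not part of the interface above):

>>> from mystic.solvers import buckshot
>>> cost = lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2   # toy cost, minimum at (1, -2)
>>> xopt = buckshot(cost, 2, npts=4, bounds=[(-5., 5.)]*2, disp=0)
>>> # with full_output=0 and retall=0 only xopt is returned; expected near [1.0, -2.0]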

diffev(cost, x0, npop=4, args=(), bounds=None, ftol=0.005, gtol=None, maxiter=None, maxfun=None, cross=0.9, scale=0.8, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using differential evolution.
Uses a differential evolution algorithm to find the minimum of a function of one or more variables. Mimics a
scipy.optimize style interface.
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• x0 (ndarray) – the initial guess parameter vector x if desired start is a single point, otherwise
takes a list of (min,max) bounds that define a region from which random initial points are
drawn.
• npop (int, default=4) – size of the trial solution population.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• ftol (float, default=5e-3) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=None) – maximum iterations to run without improvement.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• cross (float, default=0.9) – the probability of cross-parameter mutations.
• scale (float, default=0.8) – multiplier for mutations on the trial solution.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• strategy (strategy, default=None) – override the default mutation strategy.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0
otherwise).
• map (func, default=None) – a (parallel) map function y = map(f, x).
Returns (xopt, {fopt, iter, funcalls, warnflag}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function


• fopt (float): value of cost function at minimum: fopt = cost(xopt)
• iter (int): number of iterations
• funcalls (int): number of function calls
• warnflag (int): warning flag:
– 1 : Maximum number of function evaluations
– 2 : Maximum number of iterations
• allvecs (list): a list of solutions at each iteration
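A minimal usage sketch (the cost function and settings are illustrative); here x0 is given as a list of (min,max) bounds, so the initial population is drawn at random from that region:

>>> from mystic.solvers import diffev
>>> cost = lambda x: sum((xi - 3.0)**2 for xi in x)      # toy cost, minimum at (3, 3, 3)
>>> xopt = diffev(cost, x0=[(-10., 10.)]*3, npop=40, ftol=1e-6, disp=0)
>>> # expected to land near [3.0, 3.0, 3.0]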

diffev2(cost, x0, npop=4, args=(), bounds=None, ftol=0.005, gtol=None, maxiter=None, maxfun=None, cross=0.9, scale=0.8, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using Storn & Price’s differential evolution.
Uses Storn & Price's differential evolution algorithm to find the minimum of a function of one or more vari-
ables. Mimics a scipy.optimize style interface.
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• x0 (ndarray) – the initial guess parameter vector x if desired start is a single point, otherwise
takes a list of (min,max) bounds that define a region from which random initial points are
drawn.
• npop (int, default=4) – size of the trial solution population.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• ftol (float, default=5e-3) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=None) – maximum iterations to run without improvement.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• cross (float, default=0.9) – the probability of cross-parameter mutations.
• scale (float, default=0.8) – multiplier for mutations on the trial solution.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• strategy (strategy, default=None) – override the default mutation strategy.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.

• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0
otherwise).
• map (func, default=None) – a (parallel) map function y = map(f, x).
Returns (xopt, {fopt, iter, funcalls, warnflag}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function


• fopt (float): value of cost function at minimum: fopt = cost(xopt)
• iter (int): number of iterations
• funcalls (int): number of function calls
• warnflag (int): warning flag:
– 1 : Maximum number of function evaluations
– 2 : Maximum number of iterations
• allvecs (list): a list of solutions at each iteration

fmin(cost, x0, args=(), bounds=None, xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using the downhill simplex algorithm.
Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables. This
algorithm only uses function values, not derivatives or second derivatives. Mimics the scipy.optimize.
fmin interface.
This algorithm has a long history of successful use in applications. It will usually be slower than an algorithm
that uses first or second derivative information. In practice it can have poor performance in high-dimensional
problems and is not robust to minimizing complicated functions. Additionally, there currently is no complete
theory describing when the algorithm will successfully converge to the minimum, or how fast it will if it does.
Both the ftol and xtol criteria must be met for convergence.
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• x0 (ndarray) – the initial guess parameter vector x.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• xtol (float, default=1e-4) – acceptable absolute error in xopt for convergence.
• ftol (float, default=1e-4) – acceptable absolute error in cost(xopt) for convergence.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.

• disp (bool, default=True) – if True, print convergence messages.


• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0
otherwise).
Returns (xopt, {fopt, iter, funcalls, warnflag}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function


• fopt (float): value of cost function at minimum: fopt = cost(xopt)
• iter (int): number of iterations
• funcalls (int): number of function calls
• warnflag (int): warning flag:
– 1 : Maximum number of function evaluations
– 2 : Maximum number of iterations
• allvecs (list): a list of solutions at each iteration
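A minimal usage sketch (it assumes the Rosenbrock test function rosen shipped in mystic.models; any function with the y = cost(x) interface can be used in its place):

>>> from mystic.solvers import fmin
>>> from mystic.models import rosen                      # Rosenbrock test function
>>> xopt = fmin(rosen, x0=[0.8, 1.2, 0.7], disp=0)
>>> # expected to approach the known minimum at [1.0, 1.0, 1.0]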

fmin_powell(cost, x0, args=(), bounds=None, xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, direc=None, **kwds)
Minimize a function using modified Powell’s method.
Uses a modified Powell Directional Search algorithm to find the minimum of a function of one or more variables.
This method only uses function values, not derivatives. Mimics the scipy.optimize.fmin_powell
interface.
Powell’s method is a conjugate direction method that has two loops. The outer loop simply iterates over the
inner loop, while the inner loop minimizes over each current direction in the direction set. At the end of the
inner loop, if certain conditions are met, the direction that gave the largest decrease is dropped and replaced
with the difference between the current estimated x and the estimated x from the beginning of the inner-loop.
The conditions for replacing the direction of largest increase are that: (a) no further gain can be made along the direction of greatest increase in the iteration, and (b) the direction of greatest increase accounted for a sufficiently large fraction of the decrease in the function value from the current iteration of the inner loop.
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• x0 (ndarray) – the initial guess parameter vector x.
• args (tuple, default=()) – extra arguments for cost.

• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• xtol (float, default=1e-4) – acceptable relative error in xopt for convergence.
• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=2) – maximum iterations to run without improvement.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• direc (tuple, default=None) – the initial direction set.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0
otherwise).
Returns (xopt, {fopt, iter, funcalls, warnflag, direc}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function


• fopt (float): value of cost function at minimum: fopt = cost(xopt)
• iter (int): number of iterations
• funcalls (int): number of function calls
• warnflag (int): warning flag:
– 1 : Maximum number of function evaluations
– 2 : Maximum number of iterations
• direc (tuple): the current direction set
• allvecs (list): a list of solutions at each iteration
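A minimal usage sketch (the cost function is illustrative; with direc=None the default direction set is used):

>>> from mystic.solvers import fmin_powell
>>> cost = lambda x: (x[0] - 2.0)**2 + (x[1] + 1.0)**2 + 1.0   # toy cost, minimum of 1 at (2, -1)
>>> xopt = fmin_powell(cost, x0=[0.0, 0.0], disp=0)
>>> # expected near [2.0, -1.0]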

lattice(cost, ndim, nbins=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using the lattice ensemble solver.

Uses a lattice ensemble algorithm to find the minimum of a function of one or more variables. Mimics the
scipy.optimize.fmin interface. Starts N solver instances at regular intervals in parameter space, deter-
mined by nbins (N = numpy.prod(nbins); len(nbins) == ndim).
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• ndim (int) – dimensionality of the problem.
• nbins (tuple(int), default=8) – total bins, or # of bins in each dimension.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=10) – maximum iterations to run without improvement.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• solver (solver, default=None) – override the default nested Solver instance.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0
otherwise).
• map (func, default=None) – a (parallel) map function y = map(f, x).
• dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting
position using the given distribution.
Returns (xopt, {fopt, iter, funcalls, warnflag, allfuncalls},
{allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function


• fopt (float): value of cost function at minimum: fopt = cost(xopt)
• iter (int): number of iterations

• funcalls (int): number of function calls


• warnflag (int): warning flag:
– 1 : Maximum number of function evaluations
– 2 : Maximum number of iterations
• allfuncalls (int): total function calls (for all solver instances)
• allvecs (list): a list of solutions at each iteration
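A minimal usage sketch (the cost and bounds are illustrative); nbins=(4, 2) starts N = 4*2 = 8 nested solvers on a regular grid over the bounds:

>>> from mystic.solvers import lattice
>>> cost = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2   # toy cost, minimum at (1, 2)
>>> xopt = lattice(cost, 2, nbins=(4, 2), bounds=[(-5., 5.)]*2, disp=0)
>>> # expected near [1.0, 2.0]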

sparsity(cost, ndim, npts=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using the sparsity ensemble solver.
Uses a sparsity ensemble algorithm to find the minimum of a function of one or more variables. Mimics the
scipy.optimize.fmin interface. Starts npts solver instances at points in parameter space where existing
points are sparse.
Parameters
• cost (func) – the function or method to be minimized: y = cost(x).
• ndim (int) – dimensionality of the problem.
• npts (int, default=8) – number of solver instances.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each param-
eter.
• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=10) – maximum iterations to run without improvement.
• rtol (float, default=None) – minimum acceptable distance from other points.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – if True, return list of solutions at each iteration.
• callback (func, default=None) – function to call after each iteration. The interface is
callback(xk), with xk the current parameter vector.
• solver (solver, default=None) – override the default nested Solver instance.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk
is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded
constraints.
• penalty (func, default=None) – a function y = penalty(xk), where xk is the current
parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0
otherwise).

• map (func, default=None) – a (parallel) map function y = map(f, x).


• dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting
position using the given distribution.
Returns (xopt, {fopt, iter, funcalls, warnflag, allfuncalls},
{allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function


• fopt (float): value of cost function at minimum: fopt = cost(xopt)
• iter (int): number of iterations
• funcalls (int): number of function calls
• warnflag (int): warning flag:
– 1 : Maximum number of function evaluations
– 2 : Maximum number of iterations
• allfuncalls (int): total function calls (for all solver instances)
• allvecs (list): a list of solutions at each iteration
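A minimal usage sketch (the cost and bounds are illustrative; rtol is left at its default):

>>> from mystic.solvers import sparsity
>>> cost = lambda x: (x[0] + 1.0)**2 + (x[1] - 3.0)**2   # toy cost, minimum at (-1, 3)
>>> xopt = sparsity(cost, 2, npts=4, bounds=[(-5., 5.)]*2, disp=0)
>>> # expected near [-1.0, 3.0]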

2.28 strategy module

Differential Evolution Strategies


These strategies are to be passed into DifferentialEvolutionSolver’s Solve method, and determine how the candidate
parameter values mutate across a population.
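For example, a sketch of how a strategy is selected (the cost function is illustrative; VTR from mystic.termination and the inherited SetRandomInitialPoints member are assumed to be available):

>>> from mystic.solvers import DifferentialEvolutionSolver
>>> from mystic.strategy import Rand1Bin
>>> from mystic.termination import VTR
>>> cost = lambda x: x[0]**2 + x[1]**2                   # toy cost, minimum of 0 at (0, 0)
>>> solver = DifferentialEvolutionSolver(2, 40)          # dim=2, population size NP=40
>>> solver.SetRandomInitialPoints(min=[-5.]*2, max=[5.]*2)
>>> solver.Solve(cost, termination=VTR(1e-8), strategy=Rand1Bin)
>>> xopt = solver.bestSolution                           # expected near [0.0, 0.0]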
Best1Bin(inst, candidate)
trial solution is current best solution plus scaled difference of two randomly chosen candidates; mutates at
random
trial = best + scale*(candidate1 - candidate2)
Best1Exp(inst, candidate)
trial solution is current best solution plus scaled difference of two randomly chosen candidates; mutates until
random stop
trial = best + scale*(candidate1 - candidate2)
Best2Bin(inst, candidate)
trial solution is current best solution plus scaled contributions of four randomly chosen candidates; mutates at
random
trial = best + scale*(candidate1 + candidate2 - candidate3 - candidate4)
Best2Exp(inst, candidate)
trial solution is current best solution plus scaled contributions from four randomly chosen candidates; mutates
until random stop
trial = best + scale*(candidate1 + candidate2 - candidate3 - candidate4)

Rand1Bin(inst, candidate)
trial solution is randomly chosen candidate plus scaled difference of two other randomly chosen candidates;
mutates at random
trial = candidate1 + scale*(candidate2 - candidate3)
Rand1Exp(inst, candidate)
trial solution is randomly chosen candidate plus scaled difference of two other randomly chosen candidates;
mutates until random stop
trial = candidate1 + scale*(candidate2 - candidate3)
Rand2Bin(inst, candidate)
trial solution is randomly chosen candidate plus scaled contributions of four other randomly chosen candidates;
mutates at random
trial = candidate1 + scale*(candidate2 + candidate3 - candidate4 - candidate5)
Rand2Exp(inst, candidate)
trial solution is randomly chosen candidate plus scaled contributions from four other randomly chosen candi-
dates; mutates until random stop
trial = candidate1 + scale*(candidate2 + candidate3 - candidate4 - candidate5)
RandToBest1Bin(inst, candidate)
trial solution is itself plus scaled difference of best solution and trial solution, plus the difference of two randomly
chosen candidates; mutates at random
trial += scale*(best - trial) + scale*(candidate1 - candidate2)
RandToBest1Exp(inst, candidate)
trial solution is itself plus scaled difference of best solution and trial solution, plus the difference of two randomly
chosen candidates; mutates until random stop
trial += scale*(best - trial) + scale*(candidate1 - candidate2)
get_random_candidates(NP, exclude, N)
select N random candidates from a population of size NP, where exclude is the candidate to exclude from selection. Thus, get_random_candidates(x, 1, 2) randomly selects two members i of a population of size x, where i != 1.

2.29 support module

functional interfaces for mystic's visual diagnostics for support files


convergence(filename, **kwds)
generate parameter convergence plots from file written with write_support_file
Available from the command shell as:

support_convergence.py filename [options]

or as a function call:

mystic.support.convergence(filename, **options)

Parameters filename (str) – name of the convergence logfile (e.g. paramlog.py)


Returns None

Notes

• The option out takes a string of the filepath for the generated plot.
• The option iter takes an integer of the largest iteration to plot.
• The option param takes an indicator string. The indicator string is built from comma-separated array slices.
For example, params = ":" will plot all parameters in a single plot. Alternatively, params = ":2,
2:" will split the parameters into two plots, and params = "0" will only plot the first parameter.
• The option label takes comma-separated strings. For example, label = "x,y," will label the y-axis
of the first plot with ‘x’, a second plot with ‘y’, and not add a label to a third or subsequent plots. If more
labels are given than plots, then the last label will be used for the y-axis of the ‘cost’ plot. LaTeX is also
accepted. For example, label = "$ h$, $ a$, $ v$" will label the axes with standard LaTeX
math formatting. Note that the leading space is required, and the text is aligned along the axis.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option cost takes a boolean, and will also plot the parameter cost.
• The option legend takes a boolean, and will display the legend.
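A usage sketch (the logfile name and the option values below are illustrative assumptions):

>>> import mystic.support as ms
>>> # 'paramlog.py' is a hypothetical logfile written with write_support_file
>>> ms.convergence('paramlog.py', param=':2, 2:', cost=True, out='converge.png')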

hypercube(filename, **kwds)
generate parameter support plots from file written with write_support_file
Available from the command shell as:

support_hypercube.py filename [options]

or as a function call:

mystic.support.hypercube(filename, **options)

Parameters filename (str) – name of the convergence logfile (e.g. paramlog.py)


Returns None

Notes

• The option out takes a string of the filepath for the generated plot.
• The options bounds, axes, and iters all take indicator strings. The bounds should be given as comma-
separated slices. For example, using bounds = "60:105, 0:30, 2.1:2.8" will set the lower
and upper bounds for x to be (60,105), y to be (0,30), and z to be (2.1,2.8). Similarly, axes also accepts
comma-separated groups of ints; however, for axes, each entry indicates which parameters are to be plotted
along each axis – the first group for the x direction, the second for the y direction, and third for z. Thus,
axes = "2 3, 6 7, 10 11" would set 2nd and 3rd parameters along x. Iters also accepts strings
built from comma-separated array slices. For example, iters = ":" will plot all iters in a single plot.
Alternatively, iters = ":2, 2:" will split the iters into two plots, while iters = "0" will only
plot the first iteration.
• The option label takes comma-separated strings. Thus label = "x,y," will place ‘x’ on the x-axis,
‘y’ on the y-axis, and nothing on the z-axis. LaTeX, such as label = "$ h $, $ a$, $ v$" will
label the axes with standard LaTeX math formatting. Note that the leading space is required, while a
trailing space aligns the text with the axis instead of the plot frame.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option scale takes an integer as a grayscale contrast multiplier.

• The option flat takes a boolean, to plot results in a single plot.

hypercube_measures(filename, **kwds)
generate measure support plots from file written with write_support_file
Available from the command shell as:

support_hypercube_measures.py filename [options]

or as a function call:

mystic.support.hypercube_measures(filename, **options)

Parameters filename (str) – name of the convergence logfile (e.g. paramlog.py)


Returns None

Notes

• The option out takes a string of the filepath for the generated plot.
• The options bounds, axes, weight, and iters all take indicator strings. The bounds should be given
as comma-separated slices. For example, using bounds = "60:105, 0:30, 2.1:2.8" will set
lower and upper bounds for x to be (60,105), y to be (0,30), and z to be (2.1,2.8). Similarly, axes also
accepts comma-separated groups of ints; however, for axes, each entry indicates which parameters are to
be plotted along each axis – the first group for the x direction, the second for the y direction, and third for z.
Thus, axes = "2 3, 6 7, 10 11" would set 2nd and 3rd parameters along x. The corresponding
weights are used to color the measure points, where 1.0 is black and 0.0 is white. For example, using
weight = "0 1, 4 5, 8 9" would use the 0th and 1st parameters to weight x. Iters is also similar,
however only accepts comma-separated ints. Hence, iters = "-1" will plot the last iteration, while
iters = "0, 300, 700" will plot the 0th, 300th, and 700th in three plots.
• The option label takes comma-separated strings. Thus label = "x,y," will place ‘x’ on the x-axis,
‘y’ on the y-axis, and nothing on the z-axis. LaTeX, such as label = "$ h $, $ a$, $ v$" will
label the axes with standard LaTeX math formatting. Note that the leading space is required, while a
trailing space aligns the text with the axis instead of the plot frame.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option scale takes an integer as a grayscale contrast multiplier.
• The option flat takes a boolean, to plot results in a single plot.

Warning: This function is intended to visualize weighted measures (i.e. weights and positions), where the
weights must be normalized (to 1) or an error will be thrown.

hypercube_scenario(filename, datafile=None, **kwds)
generate scenario support plots from file written with write_support_file; and generate legacy data and
cones from a dataset file, if provided
Available from the command shell as:

support_hypercube_scenario.py filename (datafile) [options]

or as a function call:

mystic.support.hypercube_scenario(filename, datafile=None, **options)

Parameters
• filename (str) – name of the convergence logfile (e.g. paramlog.py)
• datafile (str, default=None) – name of the dataset file (e.g. data.txt)
Returns None

Notes

• The option out takes a string of the filepath for the generated plot.
• The options bounds, dim, and iters all take indicator strings. The bounds should be given as comma-
separated slices. For example, using bounds = ".062:.125, 0:30, 2300:3200" will set lower
and upper bounds for x to be (.062,.125), y to be (0,30), and z to be (2300,3200). If all bounds are not to be strictly enforced, append an asterisk * to the string. The dim (dimensions of the scenario) should be comma-separated ints. For example, dim = "1, 1, 2" will convert the params to a two-member 3-D dataset.
Iters accepts a string built from comma-separated array slices. Thus, iters = ":" will plot all iters
in a single plot. Alternatively, iters = ":2, 2:" will split the iters into two plots, while iters =
"0" will only plot the first iteration.
• The option label takes comma-separated strings. Thus label = "x,y," will place ‘x’ on the x-axis,
‘y’ on the y-axis, and nothing on the z-axis. LaTeX, such as label = "$ h $, $ a$, $ v$" will
label the axes with standard LaTeX math formatting. Note that the leading space is required, while a
trailing space aligns the text with the axis instead of the plot frame.
• The option “filter” is used to select datapoints from a given dataset, and takes comma-separated ints.
• A “mask” is given as comma-separated ints. When the mask has more than one int, the plot will be 2D.
• The option “vertical” will plot the dataset values on the vertical axis; for 2D plots, cones are always plotted
on the vertical axis.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option scale takes an integer as a grayscale contrast multiplier.
• The option gap takes an integer distance from cone center to vertex.
• The option data takes a boolean, to plot legacy data, if provided.
• The option cones takes a boolean, to plot cones, if provided.
• The option flat takes a boolean, to plot results in a single plot.

best_dimensions(n)
get the ‘best’ dimensions (i x j) for arranging plots
Parameters n (int) – number of plots
Returns tuple (i,j) of i rows j columns, where i*j is roughly n
swap(alist, index=None)
swap the selected list element with the last element in alist
Parameters
• alist (list) – a list of objects
• index (int, default=None) – the selected element

Returns list with the elements swapped as indicated

2.30 svc module

Simple utility functions for SV-classifications


Bias(alpha, X, y, kernel=<built-in function dot>)
Compute classification bias.
KernelMatrix(X, k=<built-in function dot>)
inner product of X with self, using k as elementwise product function
SupportVectors(alpha, y=None, epsilon=0)
indices of nonzero alphas (at tolerance epsilon)
If labels y are provided, then group indices by label
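A small sketch (the alpha values are made up for illustration):

>>> import numpy as np
>>> from mystic.svc import SupportVectors
>>> alpha = np.array([0.0, 0.75, 0.0, 1.5])              # hypothetical multipliers
>>> sv = SupportVectors(alpha, epsilon=1e-6)             # indices of the nonzero alphas (here, 1 and 3)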
WeightVector(alpha, X, y)
asarray(a, dtype=None, order=None)
Convert the input to an array.
Parameters
• a (array_like) – Input data, in any form that can be converted to an array. This includes lists,
lists of tuples, tuples, tuples of tuples, tuples of lists and ndarrays.
• dtype (data-type, optional) – By default, the data-type is inferred from the input data.
• order ({‘C’, ‘F’}, optional) – Whether to use row-major (C-style) or column-major (Fortran-
style) memory representation. Defaults to ‘C’.
Returns out – Array interpretation of a. No copy is performed if the input is already an ndarray
with matching dtype and order. If a is a subclass of ndarray, a base class ndarray is returned.
Return type ndarray
See also:

asanyarray() Similar function which passes through subclasses.


ascontiguousarray() Convert input to a contiguous array.
asfarray() Convert input to a floating point ndarray.
asfortranarray() Convert input to an ndarray with column-major memory order.
asarray_chkfinite() Similar function which checks input for NaNs and Infs.
fromiter() Create an array from an iterator.
fromfunction() Construct an array by executing a function on grid positions.

Examples

Convert a list into an array:

>>> a = [1, 2]
>>> np.asarray(a)
array([1, 2])

Existing arrays are not copied:

>>> a = np.array([1, 2])
>>> np.asarray(a) is a
True

If dtype is set, array is copied only if dtype does not match:

>>> a = np.array([1, 2], dtype=np.float32)
>>> np.asarray(a, dtype=np.float32) is a
True
>>> np.asarray(a, dtype=np.float64) is a
False

Contrary to asanyarray, ndarray subclasses are not passed through:

>>> issubclass(np.recarray, np.ndarray)
True
>>> a = np.array([(1.0, 2), (3.0, 4)], dtype='f4,i4').view(np.recarray)
>>> np.asarray(a) is a
False
>>> np.asanyarray(a) is a
True

dot(a, b, out=None)
Dot product of two arrays. Specifically,
• If both a and b are 1-D arrays, it is inner product of vectors (without complex conjugation).
• If both a and b are 2-D arrays, it is matrix multiplication, but using matmul() or a @ b is preferred.
• If either a or b is 0-D (scalar), it is equivalent to multiply() and using numpy.multiply(a, b)
or a * b is preferred.
• If a is an N-D array and b is a 1-D array, it is a sum product over the last axis of a and b.
• If a is an N-D array and b is an M-D array (where M>=2), it is a sum product over the last axis of a and
the second-to-last axis of b:

dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])

Parameters
• a (array_like) – First argument.
• b (array_like) – Second argument.
• out (ndarray, optional) – Output argument. This must have the exact kind that would be re-
turned if it was not used. In particular, it must have the right type, must be C-contiguous, and
its dtype must be the dtype that would be returned for dot(a,b). This is a performance fea-
ture. Therefore, if these conditions are not met, an exception is raised, instead of attempting
to be flexible.
Returns output – Returns the dot product of a and b. If a and b are both scalars or both 1-D arrays
then a scalar is returned; otherwise an array is returned. If out is given, then it is returned.
Return type ndarray
Raises ValueError – If the last dimension of a is not the same size as the second-to-last dimen-
sion of b.

See also:

vdot() Complex-conjugating dot product.


tensordot() Sum products over arbitrary axes.
einsum() Einstein summation convention.
matmul() ‘@’ operator as method with out parameter.

Examples

>>> np.dot(3, 4)
12

Neither argument is complex-conjugated:

>>> np.dot([2j, 3j], [2j, 3j])
(-13+0j)

For 2-D arrays it is the matrix product:

>>> a = [[1, 0], [0, 1]]
>>> b = [[4, 1], [2, 2]]
>>> np.dot(a, b)
array([[4, 1],
[2, 2]])

>>> a = np.arange(3*4*5*6).reshape((3,4,5,6))
>>> b = np.arange(3*4*5*6)[::-1].reshape((5,4,6,3))
>>> np.dot(a, b)[2,3,2,1,2,2]
499128
>>> sum(a[2,3,2,:] * b[1,2,:,2])
499128

sum(a, axis=None, dtype=None, out=None, keepdims=<no value>, initial=<no value>)
Sum of array elements over a given axis.
Parameters
• a (array_like) – Elements to sum.
• axis (None or int or tuple of ints, optional) – Axis or axes along which a sum is performed.
The default, axis=None, will sum all of the elements of the input array. If axis is negative it
counts from the last to the first axis.
New in version 1.7.0.
If axis is a tuple of ints, a sum is performed on all of the axes specified in the tuple instead
of a single axis or all the axes as before.
• dtype (dtype, optional) – The type of the returned array and of the accumulator in which
the elements are summed. The dtype of a is used by default unless a has an integer dtype of
less precision than the default platform integer. In that case, if a is signed then the platform
integer is used while if a is unsigned then an unsigned integer of the same precision as the
platform integer is used.

• out (ndarray, optional) – Alternative output array in which to place the result. It must have
the same shape as the expected output, but the type of the output values will be cast if
necessary.
• keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in
the result as dimensions with size one. With this option, the result will broadcast correctly
against the input array.
If the default value is passed, then keepdims will not be passed through to the sum method
of sub-classes of ndarray, however any non-default value will be. If the sub-class’ method
does not implement keepdims any exceptions will be raised.
• initial (scalar, optional) – Starting value for the sum. See ~numpy.ufunc.reduce for details.
New in version 1.15.0.
Returns sum_along_axis – An array with the same shape as a, with the specified axis removed. If
a is a 0-d array, or if axis is None, a scalar is returned. If an output array is specified, a reference
to out is returned.
Return type ndarray
See also:

ndarray.sum() Equivalent method.


cumsum() Cumulative sum of array elements.
trapz() Integration of array values using the composite trapezoidal rule.

mean(), average()

Notes

Arithmetic is modular when using integer types, and no error is raised on overflow.
The sum of an empty array is the neutral element 0:

>>> np.sum([])
0.0

Examples

>>> np.sum([0.5, 1.5])
2.0
>>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32)
1
>>> np.sum([[0, 1], [0, 5]])
6
>>> np.sum([[0, 1], [0, 5]], axis=0)
array([0, 6])
>>> np.sum([[0, 1], [0, 5]], axis=1)
array([1, 5])

If the accumulator is too small, overflow occurs:

>>> np.ones(128, dtype=np.int8).sum(dtype=np.int8)
-128

You can also start the sum with a value other than zero:

>>> np.sum([10], initial=5)
15

transpose(a, axes=None)
Permute the dimensions of an array.
Parameters
• a (array_like) – Input array.
• axes (list of ints, optional) – By default, reverse the dimensions, otherwise permute the axes
according to the values given.
Returns p – a with its axes permuted. A view is returned whenever possible.
Return type ndarray
See also:
moveaxis(), argsort()

Notes

Use transpose(a, argsort(axes)) to invert the transposition of tensors when using the axes keyword argument.
Transposing a 1-D array returns an unchanged view of the original array.

Examples

>>> x = np.arange(4).reshape((2,2))
>>> x
array([[0, 1],
[2, 3]])

>>> np.transpose(x)
array([[0, 2],
[1, 3]])

>>> x = np.ones((1, 2, 3))
>>> np.transpose(x, (1, 0, 2)).shape
(2, 1, 3)

2.31 svr module

Simple utility functions for SV-Regressions


LinearKernel(i1, i2=None)
linear kernel for i1 and i2
dot(i1,i2.T), where i2=i1 if i2 is not provided

PolynomialKernel(i1, i2=None, degree=3, gamma=None, coeff=1)
polynomial kernel for i1 and i2
(coeff + gamma * dot(i1,i2.T))**degree, where i2=i1 if i2 is not provided
coeff is a float, default of 1; gamma is a float, default of 1./i1.shape(1); degree is an int, default of 3
SigmoidKernel(i1, i2=None, gamma=None, coeff=1)
sigmoid kernel for i1 and i2
tanh(coeff + gamma * dot(i1,i2.T)), where i2=i1 if i2 is not provided
coeff is a float, default of 1; gamma is a float, default of 1./i1.shape(1)
LaplacianKernel(i1, i2=None, gamma=None)
laplacian kernel for i1 and i2
exp(-gamma * manhattan_distance(i1,i2)), where i2=i1 if i2 is not provided
gamma is a float, default of 1./i1.shape(1)
GaussianKernel(i1, i2=None, gamma=None)
gaussian kernel for i1 and i2
exp(-gamma * euclidean_distance(i1,i2)**2), where i2=i1 if i2 is not provided
gamma is a float, default of 1./i1.shape(1)
CosineKernel(i1, i2=None)
cosine kernel for i1 and i2
dot(i1,i2.T)/(||i1||*||i2||), where i2=i1 if i2 is not provided, and ||i|| is defined as L2-normalized i
KernelMatrix(X, Y=None, kernel=<function LinearKernel>)
outer product, using kernel as elementwise product function
SupportVectors(alpha, epsilon=0)
indices of nonzero alphas (at tolerance epsilon)
Bias(x, y, alpha, epsilon, kernel=<function LinearKernel>)
Compute regression bias for epsilon insensitive loss regression
RegressionFunction(x, y, alpha, epsilon, kernel=<function LinearKernel>)
The Support Vector expansion. f(x) = Sum (ap - am) K(xi, x) + b
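A small sketch of the kernel utilities above (random data for illustration):

>>> import numpy as np
>>> from mystic.svr import KernelMatrix, GaussianKernel
>>> X = np.random.random((5, 2))                         # 5 points in 2 dimensions
>>> K = KernelMatrix(X, kernel=GaussianKernel)           # 5x5 matrix of pairwise kernel values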

2.32 symbolic module

Tools for working with symbolic constraints.


linear_symbolic(A=None, b=None, G=None, h=None)
convert linear equality and inequality constraints from matrices to a symbolic string of the form required by
mystic’s constraint parser.
Inputs:
    A – (ndarray) matrix of coefficients of linear equality constraints
    b – (ndarray) vector of solutions of linear equality constraints
    G – (ndarray) matrix of coefficients of linear inequality constraints
    h – (ndarray) vector of solutions of linear inequality constraints
NOTE: Must provide A and b; G and h; or A, b, G, and h; where Ax = b and Gx <= h.
For example:

>>> A = [[3., 4., 5.],
...      [1., 6., -9.]]
>>> b = [0., 0.]
>>> G = [1., 0., 0.]
>>> h = [5.]
>>> print(linear_symbolic(A,b,G,h))
1.0*x0 + 0.0*x1 + 0.0*x2 <= 5.0
3.0*x0 + 4.0*x1 + 5.0*x2 = 0.0
1.0*x0 + 6.0*x1 + -9.0*x2 = 0.0

replace_variables(constraints, variables=None, markers='$')
replace variables in constraints string with a marker. Returns a modified constraints string.
Inputs:
constraints – a string of symbolic constraints, with one constraint equation per line. Constraints can
be equality and/or inequality constraints. Standard python syntax should be followed (with the math
and numpy modules already imported).
variables – list of variable name strings. The variable names will be replaced in the order that they are
provided, where if the default marker “$i” is used, the first variable will be replaced with “$0”, the
second with “$1”, and so on.
For example:
>>> variables = ['spam', 'eggs']
>>> constraints = '''spam + eggs - 42'''
>>> print(replace_variables(constraints, variables, 'x'))
'x0 + x1 - 42'

Additional Inputs:
markers – desired variable name. Default is ‘$’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base.
For example:
>>> variables = ['x1','x2','x3']
>>> constraints = "min(x1*x2) - sin(x3)"
>>> print(replace_variables(constraints, variables, ['x','y','z']))
'min(x*y) - sin(z)'

get_variables(constraints, variables=’x’)
extract a list of the string variable names from constraints string
Inputs:
constraints – a string of symbolic constraints, with one constraint equation per line. Constraints can
be equality and/or inequality constraints. Standard python syntax should be followed (with the math
and numpy modules already imported).
For example:
>>> constraints = '''
... x1 + x2 = x3*4
... x3 = x2*x4'''
>>> get_variables(constraints)
['x1', 'x2', 'x3', 'x4']

Additional Inputs:
variables – desired variable name. Default is ‘x’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base, and can include variables that are not found
in the constraints equation string.
For example:
>>> constraints = '''
... y = min(u,v) - z*sin(x)
... z = x**2 + 1.0
... u = v*z'''
>>> get_variables(constraints, list('pqrstuvwxyz'))
['u', 'v', 'x', 'y', 'z']

merge(*equations, **kwds)
merge bounds in a sequence of equations (e.g. [A<0, A>0] --> [A!=0])
Parameters
• equations (tuple(str)) – a sequence of equations
• inclusive (bool, default=True) – if False, bounds are exclusive
Returns tuple sequence of equations, where the bounds have been merged

Notes

if bounds are invalid, returns None

Examples

>>> merge(*['A > 0', 'A > 0', 'B >= 0', 'B <= 0'], inclusive=False)
('A > 0', 'B = 0')

>>> merge(*['A > 0', 'A > 0', 'B >= 0', 'B <= 0'], inclusive=True)
('A > 0',)

solve(constraints, variables='x', target=None, **kwds)
Solve a system of symbolic constraints equations.
Inputs:
constraints – a string of symbolic constraints, with one constraint equation per line. Constraints must
be equality constraints only. Standard python syntax should be followed (with the math and numpy
modules already imported).
For example:

>>> constraints = '''
... x0 - x2 = 2.
... x2 = x3*2.'''
>>> print(solve(constraints))
x2 = 2.0*x3
x0 = 2.0 + 2.0*x3
>>> constraints = '''
... spread([x0,x1]) - 1.0 = mean([x0,x1])
... mean([x0,x1,x2]) = x2'''
>>> print(solve(constraints))
x0 = -0.5 + 0.5*x2
x1 = 0.5 + 1.5*x2

Additional Inputs:
variables – desired variable name. Default is ‘x’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base, and can include variables that are not found
in the constraints equation string.

target – list providing the order for which the variables will be solved. If there are “N” constraint
equations, the first “N” variables given will be selected as the dependent variables. By default, in-
creasing order is used.
For example:

>>> constraints = '''
... x0 - x2 = 2.
... x2 = x3*2.'''
>>> print(solve(constraints, target=['x3','x2']))
x3 = -1.0 + 0.5*x0
x2 = -2.0 + x0

Further Inputs:
locals – a dictionary of additional variables used in the symbolic constraints equations, and their de-
sired values.
simplify(constraints, variables=’x’, target=None, **kwds)
simplify a system of symbolic constraints equations.
Returns a system of equations where a single variable has been isolated on the left-hand side of each constraints
equation, thus all constraints are of the form “x_i = f(x)”.
Inputs:
constraints – a string of symbolic constraints, with one constraint equation per line. Standard python
syntax should be followed (with the math and numpy modules already imported).
For example:

>>> constraints = '''
... x0 - x2 <= 2.
... x2 = x3*2.'''
>>> print(simplify(constraints))
x0 <= x2 + 2.0
x2 = 2.0*x3
>>> constraints = '''
... x0 - x1 - 1.0 = mean([x0,x1])
... mean([x0,x1,x2]) >= x2'''
>>> print(simplify(constraints))
x0 = 3.0*x1 + 2.0
x0 >= -x1 + 2*x2

Additional Inputs:
variables – desired variable name. Default is ‘x’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base, and can include variables that are not found
in the constraints equation string.
target – list providing the order for which the variables will be solved. If there are “N” constraint
equations, the first “N” variables given will be selected as the dependent variables. By default, in-
creasing order is used.
Further Inputs:
locals – a dictionary of additional variables used in the symbolic constraints equations, and their de-
sired values.
cycle – boolean to cycle the order for which the variables are solved. If cycle is True, there should be
more variety on the left-hand side of the simplified equations. By default, the variables do not cycle.

all – boolean to return all simplifications due to negative values. When dividing by a possibly negative
variable, an inequality may flip, thus creating alternate simplifications. If all is True, return all possible
simplifications due to negative values in an inequality. The default is False, returning only one possible
simplification.
comparator(equation)
identify the comparator (e.g. ‘<’, ‘=’, . . . ) in a constraints equation
flip(equation, bounds=False)
flip the inequality in the equation (i.e. ‘<’ to ‘>’), if one exists
Inputs:
    equation – an equation string; can be an equality or inequality
    bounds – if True, ensure set boundaries are respected (i.e. '<' to '>=')
_flip(cmp, bounds=False)
flip the comparator (i.e. ‘<’ to ‘>’, or ‘<’ to ‘>=’ if bounds=True)
condense(*equations, **kwds)
condense tuples of equations to the simplest representation
Inputs: equations – tuples of inequalities or equalities
For example:

>>> condense(('C <= 0', 'B <= 0'), ('C <= 0', 'B >= 0'))
[('C <= 0',)]
>>> condense(('C <= 0', 'B <= 0'), ('C >= 0', 'B <= 0'))
[('B <= 0',)]
>>> condense(('C <= 0', 'B <= 0'), ('C >= 0', 'B >= 0'))
[('C <= 0', 'B <= 0'), ('C >= 0', 'B >= 0')]
Additional Inputs: verbose – if True, print diagnostic information. Default is False.
equals(before, after, vals=None, **kwds)
check if equations before and after are equal at the given vals
Inputs:
    before – an equation string
    after – an equation string
    vals – a dict with variable names as keys and floats as values

Additional Inputs:
    variables – a list of variable names
    locals – a dict with variable names as keys and 'fixed' values
    error – if False, ZeroDivisionError evaluates as None
    variants – a list of ints to use as variants for fractional powers
penalty_parser(constraints, variables=’x’, nvars=None)
parse symbolic constraints into penalty constraints. Returns a tuple of inequality constraints and a tuple of
equality constraints.
Inputs:
constraints – a string of symbolic constraints, with one constraint equation per line. Constraints can
be equality and/or inequality constraints. Standard python syntax should be followed (with the math
and numpy modules already imported).
For example:

>>> constraints = '''
... x2 = x0/2.
... x0 >= 0.'''
>>> penalty_parser(constraints, nvars=3)
(('-(x[0] - (0.))',), ('x[2] - (x[0]/2.)',))

Additional Inputs:
nvars – number of variables. Includes variables not explicitly given by the constraint equations (e.g.
‘x2’ in the example above).

variables – desired variable name. Default is ‘x’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base, and can include variables that are not found
in the constraints equation string.
constraints_parser(constraints, variables=’x’, nvars=None)
parse symbolic constraints into a tuple of constraints solver equations. The left-hand side of each constraint
must be simplified to support assignment.
Inputs:
constraints – a string of symbolic constraints, with one constraint equation per line. Constraints can
be equality and/or inequality constraints. Standard python syntax should be followed (with the math
and numpy modules already imported).
For example:

>>> constraints = '''
... x2 = x0/2.
... x0 >= 0.'''
>>> constraints_parser(constraints, nvars=3)
('x[2] = x[0]/2.', 'x[0] = max(0., x[0])')

Additional Inputs:
nvars – number of variables. Includes variables not explicitly given by the constraint equations (e.g.
‘x2’ in the example above).
variables – desired variable name. Default is ‘x’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base, and can include variables that are not found
in the constraints equation string.
generate_conditions(constraints, variables='x', nvars=None, locals=None)
generate penalty condition functions from a set of constraint strings
Inputs:
constraints – a string of symbolic constraints, with one constraint equation per line. Constraints can
be equality and/or inequality constraints. Standard python syntax should be followed (with the math
and numpy modules already imported).
NOTE: Alternately, constraints may be a tuple of strings of symbolic constraints. Will return a tuple
of penalty condition functions.
For example:

>>> constraints = '''
... x0**2 = 2.5*x3 - 5.0
... exp(x2/x0) >= 7.0'''
>>> ineqf,eqf = generate_conditions(constraints, nvars=4)
>>> print(ineqf[0].__doc__)
'-(exp(x[2]/x[0]) - (7.0))'
>>> ineqf[0]([1,0,1,0])
4.2817181715409554
>>> print(eqf[0].__doc__)
'x[0]**2 - (2.5*x[3] - 5.0)'
>>> eqf[0]([1,0,1,0])
6.0

Additional Inputs:
nvars – number of variables. Includes variables not explicitly given by the constraint equations (e.g.
‘x2’ in the example above).


variables – desired variable name. Default is ‘x’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base, and can include variables that are not found
in the constraints equation string.
locals – a dictionary of additional variables used in the symbolic constraints equations, and their
desired values. Default is {'tol': 1e-15, 'rel': 1e-15}, where 'tol' and 'rel' are the absolute and relative
difference from the extremal value in a given inequality. For more details, see mystic.math.tolerance.
generate_solvers(constraints, variables='x', nvars=None, locals=None)
generate constraints solver functions from a set of constraint strings
Inputs:
constraints – a string of symbolic constraints, with one constraint equation per line. Constraints can
be equality and/or inequality constraints. Standard python syntax should be followed (with the math
and numpy modules already imported). The left-hand side of each equation must be simplified to
support assignment.
NOTE: Alternately, constraints may be a tuple of strings of symbolic constraints. Will return a tuple
of constraint solver functions.
For example:

>>> constraints = '''
... x2 = x0/2.
... x0 >= 0.'''
>>> solv = generate_solvers(constraints, nvars=3)
>>> print(solv[0].__doc__)
'x[2] = x[0]/2.'
>>> solv[0]([1,2,3])
[1, 2, 0.5]
>>> print(solv[1].__doc__)
'x[0] = max(0., x[0])'
>>> solv[1]([-1,2,3])
[0.0, 2, 3]

Additional Inputs:
nvars – number of variables. Includes variables not explicitly given by the constraint equations (e.g.
‘x2’ in the example above).
variables – desired variable name. Default is ‘x’. A list of variable name strings is also accepted for
when desired variable names don’t have the same base, and can include variables that are not found
in the constraints equation string.
locals – a dictionary of additional variables used in the symbolic constraints equations, and their
desired values. Default is {'tol': 1e-15, 'rel': 1e-15}, where 'tol' and 'rel' are the absolute and relative
difference from the extremal value in a given inequality. For more details, see mystic.math.tolerance.
generate_penalty(conditions, ptype=None, join=None, **kwds)
converts a penalty constraint function to a mystic.penalty function.
Parameters
• conditions (object) – a penalty constraint function, or list of penalty constraint functions.
• ptype (object, default=None) – a mystic.penalty type, or a list of mystic.penalty
types of the same length as conditions.
• join (object, default=None) – and_ or or_ from mystic.coupler.
• k (int, default=None) – penalty multiplier.


• h (int, default=None) – iterative multiplier.


Returns a mystic.penalty function built from the given constraints

Notes

If join=None, then apply the given penalty constraints iteratively. Otherwise, couple the penalty constraints
with the selected coupler.

Examples

>>> constraints = '''
... x2 = x0/2.
... x0 >= 0.'''
>>> ineqf,eqf = generate_conditions(constraints, nvars=3)
>>> penalty = generate_penalty((ineqf,eqf))
>>> penalty([1.,2.,0.])
25.0
>>> penalty([1.,2.,0.5])
0.0

generate_constraint(conditions, ctype=None, join=None, **kwds)


converts a constraint solver to a mystic.constraints function.
Parameters
• conditions (object) – a constraint solver, or list of constraint solvers.
• ctype (object, default=None) – a mystic.coupler type, or a list of mystic.coupler
types of the same length as conditions.
• join (object, default=None) – and_ or or_ from mystic.constraints.
Returns a mystic.constraints function built from the given constraints

Notes

If join=None, then apply the given constraints iteratively. Otherwise, couple the constraints with the selected
coupler.

Warning: This constraint generator doesn’t check for conflicts in conditions, but simply applies conditions
in the given order. This constraint generator assumes that a single variable has been isolated on the left-hand
side of each constraints equation, thus all constraints are of the form “x_i = f(x)”. This solver picks speed
over robustness, and relies on the user to formulate the constraints so that they do not conflict.

Examples

>>> constraints = '''
... x0 = cos(x1) + 2.
... x1 = x2*2.'''
>>> solv = generate_solvers(constraints)


>>> constraint = generate_constraint(solv)
>>> constraint([1.0, 0.0, 1.0])
[1.5838531634528576, 2.0, 1.0]

Standard python math conventions are used. For example, if an int is used in a constraint equation, one or
more variables may evaluate to an int – this can affect solved values for the variables.

>>> constraints = '''
... x2 = x0/2.
... x0 >= 0.'''
>>> solv = generate_solvers(constraints, nvars=3)
>>> print(solv[0].__doc__)
'x[2] = x[0]/2.'
>>> print(solv[1].__doc__)
'x[0] = max(0., x[0])'
>>> constraint = generate_constraint(solv)
>>> constraint([1,2,3])
[1, 2, 0.5]
>>> constraint([-1,2,-3])
[0.0, 2, 0.0]
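To tie the pieces together, a hedged end-to-end sketch (it assumes the fmin solver from mystic.solvers and its constraints keyword, documented elsewhere in this manual; simplify is used to isolate a single variable on the left-hand side of each inequality before generating solvers):

>>> from mystic.symbolic import simplify, generate_solvers, generate_constraint
>>> from mystic.solvers import fmin
>>> equations = '''
... x0 + x1 >= 2.
... x1 <= 3.'''
>>> cf = generate_constraint(generate_solvers(simplify(equations)))
>>> cost = lambda x: (x[0] - 1)**2 + (x[1] - 1)**2
>>> result = fmin(cost, [5., 5.], constraints=cf, disp=False)  # result obeys both inequalities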

2.33 termination module

Factories that provide termination conditions for a mystic.solver


class And
Bases: mystic.termination.When
couple termination conditions with “and”.
Terminates when all given conditions are satisfied.
Takes one or more termination conditions: args – tuple of termination conditions
Usage:

>>> from mystic.termination import And, VTR, ChangeOverGeneration
>>> term = And( VTR(), ChangeOverGeneration() )
>>> term(solver) # where solver is a mystic.solver instance

__module__ = 'mystic.termination'
static __new__(self, *args)
Takes one or more termination conditions: args – tuple of termination conditions
Usage:

>>> from mystic.termination import And, VTR, ChangeOverGeneration
>>> term = And( VTR(), ChangeOverGeneration() )
>>> term(solver) # where solver is a mystic.solver instance

__repr__() <==> repr(x)


CandidateRelativeTolerance(xtol=0.0001, ftol=0.0001)
absolute difference in candidates is < tolerance:
abs(xi-x0) <= xtol & abs(fi-f0) <= ftol, with x=params & f=cost


ChangeOverGeneration(tolerance=1e-06, generations=30)
change in cost is < tolerance over a number of generations:
cost[-g] - cost[-1] <= tolerance, with g=generations
CollapseAs(offset=False, tolerance=0.0001, generations=50, mask=None)
max(pairwise(x)) is < tolerance over a number of generations, and mask is column indices of selected params:
bool(collapse_as(monitor, **kwds))
CollapseAt(target=None, tolerance=0.0001, generations=50, mask=None)
change(x[i]) is < tolerance over a number of generations, where target can be a single value or a list of values of
x length, change(x[i]) = max(x[i]) - min(x[i]) if target=None else abs(x[i] - target), and mask is column indices
of selected params:
bool(collapse_at(monitor, **kwds))
CollapseCost(clip=False, limit=1.0, samples=50, mask=None)
cost(x) - min(cost) is >= limit for all samples within an interval, where if clip is True, then clip beyond the
space sampled by the optimizer, and mask is a dict of {index:bounds} where bounds are provided as an interval
(min,max), or a list of intervals:
bool(collapse_cost(monitor, **kwds))
CollapsePosition(tolerance=0.005, generations=50, mask=None, **kwds)
max(pairwise(positions)) < tolerance over a number of generations, where (measures,indices) are (row,column)
indices of selected positions:
bool(collapse_position(monitor, **kwds))
CollapseWeight(tolerance=0.005, generations=50, mask=None, **kwds)
value of weights are < tolerance over a number of generations, where mask is (row,column) indices of the
selected weights:
bool(collapse_weight(monitor, **kwds))
EvaluationLimits(generations=None, evaluations=None)
number of iterations is > generations, or number of function calls is > evaluations:
iterations >= generations or fcalls >= evaluations
GradientNormTolerance(tolerance=1e-05, norm=inf )
gradient norm is < tolerance, given user-supplied norm:
sum( abs(gradient)**norm )**(1.0/norm) <= tolerance
Lnorm(weights, p=1, axis=None)
calculate L-p norm of weights
Parameters
• weights (array(float)) – an array of weights
• p (int, default=1) – the power of the p-norm, where p in [0,inf]
• axis (int, default=None) – axis used to take the norm along
Returns a float distance norm for the weights
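For example (assuming the standard p-norm definition above):

>>> from mystic.termination import Lnorm
>>> Lnorm([1., -2., 2.], p=1)
5.0
>>> Lnorm([1., -2., 2.], p=2)
3.0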
NormalizedChangeOverGeneration(tolerance=0.0001, generations=10)
normalized change in cost is < tolerance over number of generations:
(cost[-g] - cost[-1]) / 0.5*(abs(cost[-g]) + abs(cost[-1])) <= tolerance


NormalizedCostTarget(fval=None, tolerance=1e-06, generations=30)


normalized absolute difference from given cost value is < tolerance: (if fval is not provided, then terminate when
no improvement over g iterations)
abs(cost[-1] - fval)/fval <= tolerance or (cost[-1] - cost[-g]) = 0
class Or
Bases: mystic.termination.When
couple termination conditions with “or”.
Terminates when any of the given conditions are satisfied.
Takes one or more termination conditions: args – tuple of termination conditions
Usage:

>>> from mystic.termination import Or, VTR, ChangeOverGeneration
>>> term = Or( VTR(), ChangeOverGeneration() )
>>> term(solver) # where solver is a mystic.solver instance

__call__(solver, info=False)
check if the termination conditions are satisfied.
Inputs: solver – the solver instance
Additional Inputs: info – if True, return information about the satisfied conditions
__module__ = 'mystic.termination'
static __new__(self, *args)
Takes one or more termination conditions: args – tuple of termination conditions
Usage:

>>> from mystic.termination import Or, VTR, ChangeOverGeneration
>>> term = Or( VTR(), ChangeOverGeneration() )
>>> term(solver) # where solver is a mystic.solver instance

__repr__() <==> repr(x)


PopulationSpread(tolerance=1e-06)
normalized absolute deviation from best candidate is < tolerance:
abs(params - params[0]) <= tolerance
SolutionImprovement(tolerance=1e-05)
sum of change in each parameter is < tolerance:
sum(abs(last_params - current_params)) <= tolerance
SolverInterrupt()
handler is enabled and interrupt is given:
_EARLYEXIT == True
VTR(tolerance=0.005, target=0.0)
cost of last iteration is < tolerance from target:
abs(cost[-1] - target) <= tolerance
VTRChangeOverGeneration(ftol=0.005, gtol=1e-06, generations=30, target=0.0)
change in cost is < gtol over a number of generations, or cost of last iteration is < ftol from target:
cost[-g] - cost[-1] <= gtol or abs(cost[-1] - target) <= ftol


class When
Bases: tuple
provide a termination condition with more reporting options.
Terminates when the given condition is satisfied.
Takes a termination condition: arg – termination condition
Usage:

>>> from mystic.termination import When, VTR
>>> term = When( VTR() )
>>> term(solver) # where solver is a mystic.solver instance

__call__(solver, info=False)
check if the termination conditions are satisfied.
Inputs: solver – the solver instance
Additional Inputs: info – if True, return information about the satisfied conditions
__dict__ = dict_proxy({'__module__': 'mystic.termination', '__new__': <staticmethod o
__module__ = 'mystic.termination'
static __new__(self, arg)
Takes a termination condition: arg – termination condition
Usage:

>>> from mystic.termination import When, VTR
>>> term = When( VTR() )
>>> term(solver) # where solver is a mystic.solver instance

__repr__() <==> repr(x)


_type
alias of __builtin__.type
approx_fprime(xk, f, epsilon, *args)
state(condition)
get state (dict of kwds) used to create termination condition
type(condition)
get object that generated the given termination instance
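As a hedged usage sketch, a composite termination condition is typically attached to a solver with SetTermination (the solver interface is documented in the abstract_solver and solvers modules; the settings below are illustrative):

>>> from mystic.termination import Or, VTR, ChangeOverGeneration
>>> from mystic.solvers import DifferentialEvolutionSolver
>>> from mystic.models import rosen
>>> solver = DifferentialEvolutionSolver(3, 40)        # dim=3, npop=40
>>> solver.SetRandomInitialPoints([-5.]*3, [5.]*3)
>>> solver.SetTermination(Or(VTR(1e-8), ChangeOverGeneration(1e-10, 100)))
>>> solver.Solve(rosen)
>>> solver.bestSolution   # parameters at termination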

2.34 tools module

Various python tools


Main functions exported are::
• isiterable: check if an object is iterable
• itertype: get the ‘underlying’ type used to construct x
• multiply: recursive elementwise casting multiply of x by n
• divide: recursive elementwise casting divide of x by n
• factor: generator for factors of a number


• flatten: flatten a sequence


• flatten_array: flatten an array
• getch: provides “press any key to quit”
• random_seed: sets the seed for calls to ‘random()’
• random_state: build a localized random generator
• wrap_nested: nest a function call within a function object
• wrap_penalty: append a function call to a function object
• wrap_function: bind an EvaluationMonitor and an evaluation counter to a function object
• wrap_bounds: impose bounds on a function object
• wrap_reducer: convert a reducer function to an arraylike interface
• reduced: apply a reducer function to reduce output to a single value
• masked: generate a masked function, given a function and mask provided
• partial: generate a function where some input has fixed values
• synchronized: generate a function, where some input tracks another input
• insert_missing: return a sequence with the ‘missing’ elements inserted
• clipped: generate a function where values outside of bounds are clipped
• suppressed: generate a function where values less than tol are suppressed
• suppress: suppress small values less than tol
• chain: chain together decorators into a single decorator
• connected: generate dict of connected members of a set of tuples (pairs)
• unpair: convert a 1D array of N pairs to two 1D arrays of N values
• pairwise: convert an array of positions to an array of pairwise distances
• measure_indices: get the indices corresponding to weights and to positions
• select_params: get params for the given indices as a tuple of index,values
• solver_bounds: return a dict of tightest bounds defined for the solver
• interval_overlap: find the intersection of intervals in the given bounds
• src: extract source code from a python code object
Other tools of interest are in:: mystic.mystic.filters and mystic.models.poly
_adivide(x, n)
elementwise ‘array-casting’ division of x by n
_amultiply(x, n)
elementwise ‘array-casting’ multiplication of x by n
_divide(x, n)
elementwise division of x by n, as if x were an array
_idivide(x, n)
iterator for elementwise ‘array-like’ division of x by n
_imultiply(x, n)
iterator for elementwise ‘array-like’ multiplication of x by n


_inverted(pairs)
return a list of tuples, where each tuple has been reversed
_kdiv(num, denom, type=None)
‘special’ scalar division for ‘k’
_multiply(x, n)
elementwise multiplication of x by n, as if x were an array
_symmetric(pairs)
returns a set of tuples, where each tuple includes its inverse
chain(*decorators)
chain together decorators into a single decorator
For example:
>>> wm = with_mean(5.0)
>>> wv = with_variance(5.0)
>>>
>>> @chain(wm, wv)
... def doit(x):
... return x
...
>>> res = doit([1,2,3,4,5])
>>> mean(res), variance(res)
(5.0, 5.0000000000000018)

clipped(min=None, max=None, exit=False)


generate a function, where values outside of bounds are clipped
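For example, a hedged sketch (it assumes clipped, like suppressed below, operates on the decorated function's output):
>>> @clipped(0.0, 1.0)
... def squared(x):
...     return [i**2 for i in x]
...
>>> squared([-2., 0.5, 1.5])   # -> values clipped into [0.0, 1.0], i.e. [1.0, 0.25, 1.0]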
connected(pairs)
generate dict of connected members of a set of tuples (pairs)
For example:
>>> connected({(0,3),(4,2),(3,1),(4,5),(2,6)})
{0: set([1, 3]), 4: set([2, 5, 6])}
>>> connected({(0,3),(3,1),(4,5),(2,6)})
{0: set([1, 3]), 2: set([6]), 4: set([5])}

divide(x, n, type=<type 'list'>, recurse=False)


elementwise division of x by n, returning the selected type
factor(n)
generator for factors of a number
flatten(sequence, maxlev=999, to_expand=<function list_or_tuple>, lev=0)
flatten a sequence; returns original sequence type
For example:
>>> A = [1,2,3,[4,5,6],7,[8,[9]]]
>>>
>>> # Flatten.
>>> flatten(A)
[1, 2, 3, 4, 5, 6, 7, 8, 9]
>>>
>>> # Flatten only one level deep.
>>> flatten(A,1)
[1, 2, 3, 4, 5, 6, 7, 8, [9]]


>>>
>>> # Flatten twice.
>>> flatten(A,2)
[1, 2, 3, 4, 5, 6, 7, 8, 9]
>>>
>>> # Flatten zero levels deep (i.e. don't flatten).
>>> flatten(A,0)
[1, 2, 3, [4, 5, 6], 7, [8, [9]]]

flatten_array(sequence, maxlev=999, lev=0)


flatten a sequence; returns a ndarray
getch(str='Press any key to continue')
configurable pause of execution
insert_missing(x, missing=None)
return a sequence with the ‘missing’ elements inserted
missing should be a dictionary of positional index and a value (e.g. {0:1.0}), where keys must be integers, and
values can be any object (typically a float).
For example:

>>> insert_missing([1,2,4], missing={0:10, 3:-1})
[10, 1, 2, -1, 4]

interval_overlap(bounds1, bounds2)
find the intersection of intervals in the given bounds
bounds1 and bounds2 are a dict of {index:bounds}, where bounds is a list of tuples [(lo,hi),...]
isNull(mon)
isiterable(x)
check if an object is iterable
itertype(x, default=<type 'tuple'>)
get the ‘underlying’ type used to construct x
list_or_tuple(x)
True if x is a list or a tuple
list_or_tuple_or_ndarray(x)
True if x is a list, tuple, or a ndarray
listify(x)
recursively convert all members of a sequence to a list
masked(mask=None)
generate a masked function, given a function and mask provided
mask should be a dictionary of the positional index and a value (e.g. {0:1.0}), where keys must be integers, and
values can be any object (typically a float).
functions are expected to take a single argument, an n-dimensional list or array, where the mask will be applied to
the input array. Hence, instead of masking the inputs, the function is "masked". Conceptually, f(mask(x)) ==>
f'(x), instead of f(mask(x)) ==> f(x').
For example:


>>> @masked({0:10,3:-1})
... def same(x):
... return x
...
>>> same([1,2,3])
[10, 1, 2, -1, 3]
>>>
>>> @masked({0:10,3:-1})
... def foo(x):
... w,x,y,z = x # requires a length-4 sequence
... return w+x+y+z
...
>>> foo([-5,2]) # produces [10,-5,2,-1]
6

measure_indices(npts)
get the indices corresponding to weights and to positions
multiply(x, n, type=<type 'list'>, recurse=False)
multiply: recursive elementwise casting multiply of x by n
pairwise(x, indices=False)
convert an array of positions to an array of pairwise distances
if indices=True, also return indices to relate input and output arrays
partial(mask)
generate a function, where some input has fixed values
mask should be a dictionary of the positional index and a value (e.g. {0:1.0}), where keys must be integers, and
values can be any object (typically a float).
functions are expected to take a single argument, an n-dimensional list or array, where the mask will be applied
to the input array.
For example:

>>> @partial({0:10,3:-1})
... def same(x):
... return x
...
>>> same([-5,9])
[10, 9]
>>> same([0,1,2,3,4])
[10, 1, 2, -1, 4]

class permutations
Bases: object
permutations(iterable[, r]) --> permutations object
Return successive r-length permutations of elements in the iterable.
permutations(range(3), 2) --> (0,1), (0,2), (1,0), (1,2), (2,0), (2,1)
__getattribute__
x.__getattribute__('name') <==> x.name
__iter__
__new__(S, ...) → a new object with type S, a subtype of T


next
random_seed(s=None)
sets the seed for calls to 'random()'
random_state(module='random', new=False, seed='!')
return an (optionally manually seeded) random generator
For a given module, return an object that has random number generation (RNG) methods available. If
new=False, use the global copy of the RNG object. If seed='!', do not reseed the RNG (using seed=None
'removes' any seeding). If seed='*', use a seed that depends on the process id (PID); this is useful for building
RNGs that are different across multiple threads or processes.
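For example, a hedged sketch (it assumes the returned object exposes the underlying module's RNG methods, e.g. numpy's rand):
>>> from mystic.tools import random_state
>>> rng = random_state(module='numpy.random', new=True, seed=123)
>>> x = rng.rand(3)   # draws from a local generator; the global numpy state is untouched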
reduced(reducer=None, arraylike=False)
apply a reducer function to reduce output to a single value
For example:

>>> @reduced(lambda x,y: x)
... def first(x):
... return x
...
>>> first([1,2,3])
1
>>>
>>> @reduced(min)
... def minimum(x):
... return x
...
>>> minimum([3,2,1])
1
>>> @reduced(lambda x,y: x+y)
... def add(x):
... return x
...
>>> add([1,2,3])
6
>>> @reduced(sum, arraylike=True)
... def added(x):
... return x
...
>>> added([1,2,3])
6

select_params(params, index)
get params for the given indices as a tuple of index,values
solver_bounds(solver)
return a dict {index:bounds} of tightest bounds defined for the solver
suppress(x, tol=1e-08, clip=True)
suppress small values less than tol
suppressed(tol=1e-08, exit=False, clip=True)
generate a function, where values less than tol are suppressed
For example:

>>> @suppressed(1e-8)
... def square(x):


... return [i**2 for i in x]
...
>>> square([1e-8, 2e-8, 1e-9])
[1.00000000e-16, 4.00000000e-16, 0.00000000e+00]
>>>
>>> from mystic.math.measures import normalize
>>> @suppressed(1e-8, exit=True, clip=False)
... def norm(x):
... return normalize(x, mass=1)
...
>>> norm([1e-8, 2e-8, 1e-16, 5e-9])
[0.28571428585034014, 0.5714285707482993, 0.0, 0.14285714340136055]
>>> sum(_)
1.0

synchronized(mask)
generate a function, where some input tracks another input
mask should be a dictionary of positional index and tracked index (e.g. {0:1}), where keys and values should
be different integers. However, if a tuple is provided instead of the tracked index (e.g. {0:(1,lambda x:2*x)} or
{0:(1,2)}), the second member of the tuple will be used to scale the tracked index.
functions are expected to take a single argument, an n-dimensional list or array, where the mask will be applied
to the input array.
operations within a single mask are unordered. If a specific ordering of operations is required, apply multiple
masks in the desired order.
For example:

>>> @synchronized({0:1,3:-1})
... def same(x):
... return x
...
>>> same([-5,9])
[9, 9]
>>> same([0,1,2,3,4])
[1, 1, 2, 4, 4]
>>> same([0,9,2,3,6])
[9, 9, 2, 6, 6]
>>>
>>> @synchronized({0:(1,lambda x:1/x),3:(1,-1)})
... def doit(x):
... return x
...
>>> doit([-5.,9.])
[0.1111111111111111, 9.0]
>>> doit([0.,1.,2.,3.,4.])
[1.0, 1.0, 2.0, -1.0, 4.0]
>>> doit([0.,9.,2.,3.,6.])
[0.1111111111111111, 9.0, 2.0, -9.0, 6.0]
>>>
>>> @synchronized({1:2})
... @synchronized({0:1})
... def invert(x):
... return [-i for i in x]
...


>>> invert([0,1,2,3,4])
[-2, -2, -2, -3, -4]

unpair(pairs)
convert a 1D array of N pairs to two 1D arrays of N values
For example:

>>> unpair([(a0,b0),(a1,b1),(a2,b2)])
[a0,a1,a2],[b0,b1,b2]

wrap_bounds(target_function, min=None, max=None)


impose bounds on a function object
wrap_cf(CF, REG=None, cfmult=1.0, regmult=0.0)
wrap a cost function...
wrap_function(the_function, extra_args, EvaluationMonitor, scale=1)
bind an EvaluationMonitor and evaluation counter to a function object
wrap_nested(outer_function, inner_function)
nest a function call within a function object
This is useful for nesting a constraints function in a cost function; thus, the constraints will be enforced at every
cost function evaluation.
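For example, a hedged sketch (it assumes the returned object computes outer_function(inner_function(x)), per the description above; the cost and constraints functions are toy stand-ins):
>>> from mystic.tools import wrap_nested
>>> cost = lambda x: sum(i*i for i in x)
>>> nonneg = lambda x: [max(0., i) for i in x]   # a toy constraints function
>>> wrapped = wrap_nested(cost, nonneg)
>>> wrapped([-1., 2.])   # cost(nonneg([-1., 2.])) == cost([0., 2.]) == 4.0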
wrap_penalty(cost_function, penalty_function)
append a function call to a function object
This is useful for binding a penalty function to a cost function; thus, the penalty will be evaluated at every cost
function evaluation.
wrap_reducer(reducer_function)
convert a reducer function to have an arraylike interface
Parameters reducer_function (func) – a function f of the form y = reduce(f, x).
Returns a function f of the form y = f(x), where x is array-like.

Examples

>>> acum = wrap_reducer(numpy.add)
>>> acum([1,2,3,4])
10
>>> prod = wrap_reducer(lambda x,y: x*y)
>>> prod([1,2,3,4])
24



CHAPTER 3

mystic scripts documentation

3.1 mystic_collapse_plotter script

generate cost convergence rate plots from file written with write_support_file
Available from the command shell as:

mystic_collapse_plotter.py filename [options]

or as a function call:

mystic.collapse_plotter(filename, **options)

Parameters filename (str) – name of the convergence logfile (e.g. paramlog.py).


returns None

Notes

• The option dots takes a boolean, and will show data points in the plot.
• The option linear takes a boolean, and will plot in a linear scale.
• The option out takes a string of the filepath for the generated plot.
• The option iter takes an integer of the largest iteration to plot.
• The option label takes a label string. For example, label = "y" will label the plot with a ‘y’, while label
= " log-cost, $ log_{10}(\hat{P} - \hat{P}_{max})$" will label the y-axis with standard
LaTeX math formatting. Note that the leading space is required, and that the text is aligned along the axis.
• The option col takes a string of comma-separated integers indicating iteration numbers where parameter collapse
has occurred. If a second set of integers is provided (delineated by a semicolon), the additional set of integers
will be plotted with a different linestyle (to indicate a different type of collapse).
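For example, a hedged sketch of a typical call (the filename follows the example above; the option values are purely illustrative):

>>> import mystic
>>> mystic.collapse_plotter('paramlog.py', dots=True, label=' cost', col='50, 100; 150')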


3.2 mystic_log_reader script

plot parameter convergence from file written with LoggingMonitor


Available from the command shell as:

mystic_log_reader.py filename [options]

or as a function call:

mystic.log_reader(filename, **options)

Parameters filename (str) – name of the convergence logfile (e.g. log.txt).


returns None

Notes

• The option out takes a string of the filepath for the generated plot.
• The option dots takes a boolean, and will show data points in the plot.
• The option line takes a boolean, and will connect the data with a line.
• The option iter takes an integer of the largest iteration to plot.
• The option legend takes a boolean, and will display the legend.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option param takes an indicator string. The indicator string is built from comma-separated array slices.
For example, params = ":" will plot all parameters. Alternatively, params = ":2, 3:" will plot all
parameters except for the third parameter, while params = "0" will only plot the first parameter.
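For example, a hedged sketch of a typical call (the filename and option values are illustrative):

>>> import mystic
>>> mystic.log_reader('log.txt', iter=100, param=':2', dots=True, legend=True)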

3.3 mystic_model_plotter script

generate surface contour plots for model, specified by full import path; and generate model trajectory from logfile (or
solver restart file), if provided
Available from the command shell as:

mystic_model_plotter.py model (logfile) [options]

or as a function call:

mystic.model_plotter(model, logfile=None, **options)

Parameters
• model (str) – full import path for the model (e.g. mystic.models.rosen)
• logfile (str, default=None) – name of convergence logfile (e.g. log.txt)
returns None


Notes

• The option out takes a string of the filepath for the generated plot.
• The option bounds takes an indicator string, where bounds are given as comma-separated slices. For example,
using bounds = "-1:10, 0:20" will set lower and upper bounds for x to be (-1,10) and y to be (0,20).
The “step” can also be given, to control the number of lines plotted in the grid. Thus "-1:10:.1, 0:20" sets
the bounds as above, but uses increments of .1 along x and the default step along y. For models > 2D, the bounds
can be used to specify 2 dimensions plus fixed values for remaining dimensions. Thus, "-1:10, 0:20, 1.0"
plots the 2D surface where the z-axis is fixed at z=1.0. When called from a script, slice objects can be used
instead of a string, thus "-1:10:.1, 0:20, 1.0" becomes (slice(-1,10,.1), slice(20), 1.0).
• The option label takes comma-separated strings. For example, label = "x,y," will place ‘x’ on the x-axis,
‘y’ on the y-axis, and nothing on the z-axis. LaTeX is also accepted. For example, label = "$ h $, $
{\alpha}$, $ v$" will label the axes with standard LaTeX math formatting. Note that the leading space
is required, while a trailing space aligns the text with the axis instead of the plot frame.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option iter takes an integer of the largest iteration to plot.
• The option reduce can be given to reduce the output of a model to a scalar, thus converting model(params)
to reduce(model(params)). A reducer is given by the import path (e.g. numpy.add).
• The option scale will convert the plot to log-scale, and scale the cost by z=log(4*z*scale+1)+2. This is
useful for visualizing small contour changes around the minimum.
• If using log-scale produces negative numbers, the option shift can be used to shift the cost by z=z+shift.
Both shift and scale are intended to help visualize contours.
• The option fill takes a boolean, to plot using filled contours.
• The option depth takes a boolean, to plot contours in 3D.
• The option dots takes a boolean, to show trajectory points in the plot.
• The option join takes a boolean, to connect trajectory points.
• The option verb takes a boolean, to print the model documentation.
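For example, a hedged sketch of a typical call (the logfile name and bound values are illustrative):

>>> import mystic
>>> mystic.model_plotter('mystic.models.rosen', 'log.txt', bounds='-2:2:.1, -1:3', depth=True, join=True)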

3.4 support_convergence script

generate parameter convergence plots from file written with write_support_file


Available from the command shell as:

support_convergence.py filename [options]

or as a function call:

mystic.support.convergence(filename, **options)

Parameters filename (str) – name of the convergence logfile (e.g. paramlog.py)


returns None


Notes

• The option out takes a string of the filepath for the generated plot.
• The option iter takes an integer of the largest iteration to plot.
• The option param takes an indicator string. The indicator string is built from comma-separated array slices.
For example, params = ":" will plot all parameters in a single plot. Alternatively, params = ":2, 2:"
will split the parameters into two plots, and params = "0" will only plot the first parameter.
• The option label takes comma-separated strings. For example, label = "x,y," will label the y-axis of the
first plot with ‘x’, a second plot with ‘y’, and not add a label to a third or subsequent plots. If more labels are
given than plots, then the last label will be used for the y-axis of the ‘cost’ plot. LaTeX is also accepted. For
example, label = "$ h$, $ a$, $ v$" will label the axes with standard LaTeX math formatting. Note
that the leading space is required, and the text is aligned along the axis.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option cost takes a boolean, and will also plot the parameter cost.
• The option legend takes a boolean, and will display the legend.
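For example, a hedged sketch of a typical call (the filename and option values are illustrative):

>>> import mystic.support
>>> mystic.support.convergence('paramlog.py', param=':2, 2:', cost=True, legend=True)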

3.5 support_hypercube script

generate parameter support plots from file written with write_support_file


Available from the command shell as:

support_hypercube.py filename [options]

or as a function call:

mystic.support.hypercube(filename, **options)

Parameters filename (str) – name of the convergence logfile (e.g. paramlog.py)


returns None

Notes

• The option out takes a string of the filepath for the generated plot.
• The options bounds, axes, and iters all take indicator strings. The bounds should be given as comma-separated
slices. For example, using bounds = "60:105, 0:30, 2.1:2.8" will set the lower and upper bounds
for x to be (60,105), y to be (0,30), and z to be (2.1,2.8). Similarly, axes also accepts comma-separated groups
of ints; however, for axes, each entry indicates which parameters are to be plotted along each axis – the first
group for the x direction, the second for the y direction, and third for z. Thus, axes = "2 3, 6 7, 10
11" would set 2nd and 3rd parameters along x. Iters also accepts strings built from comma-separated array
slices. For example, iters = ":" will plot all iters in a single plot. Alternatively, iters = ":2, 2:"
will split the iters into two plots, while iters = "0" will only plot the first iteration.
• The option label takes comma-separated strings. Thus label = "x,y," will place ‘x’ on the x-axis, ‘y’ on
the y-axis, and nothing on the z-axis. LaTeX, such as label = "$ h $, $ a$, $ v$" will label the
axes with standard LaTeX math formatting. Note that the leading space is required, while a trailing space aligns
the text with the axis instead of the plot frame.
• The option nid takes an integer of the nth simultaneous points to plot.


• The option scale takes an integer as a grayscale contrast multiplier.


• The option flat takes a boolean, to plot results in a single plot.
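For example, a hedged sketch of a typical call, reusing the option formats described above (the filename and values are illustrative):

>>> import mystic.support
>>> mystic.support.hypercube('paramlog.py', bounds='60:105, 0:30, 2.1:2.8',
...                          axes='2 3, 6 7, 10 11', iters=':2, 2:')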

3.6 support_hypercube_measures script

generate measure support plots from file written with write_support_file


Available from the command shell as:

support_hypercube_measures.py filename [options]

or as a function call:

mystic.support.hypercube_measures(filename, **options)

Parameters filename (str) – name of the convergence logfile (e.g. paramlog.py)


returns None

Notes

• The option out takes a string of the filepath for the generated plot.
• The options bounds, axes, weight, and iters all take indicator strings. The bounds should be given as comma-
separated slices. For example, using bounds = "60:105, 0:30, 2.1:2.8" will set lower and upper
bounds for x to be (60,105), y to be (0,30), and z to be (2.1,2.8). Similarly, axes also accepts comma-separated
groups of ints; however, for axes, each entry indicates which parameters are to be plotted along each axis – the
first group for the x direction, the second for the y direction, and third for z. Thus, axes = "2 3, 6 7,
10 11" would set 2nd and 3rd parameters along x. The corresponding weights are used to color the measure
points, where 1.0 is black and 0.0 is white. For example, using weight = "0 1, 4 5, 8 9" would use
the 0th and 1st parameters to weight x. Iters is also similar, however only accepts comma-separated ints. Hence,
iters = "-1" will plot the last iteration, while iters = "0, 300, 700" will plot the 0th, 300th, and
700th in three plots.
• The option label takes comma-separated strings. Thus label = "x,y," will place ‘x’ on the x-axis, ‘y’ on
the y-axis, and nothing on the z-axis. LaTeX, such as label = "$ h $, $ a$, $ v$" will label the
axes with standard LaTeX math formatting. Note that the leading space is required, while a trailing space aligns
the text with the axis instead of the plot frame.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option scale takes an integer as a grayscale contrast multiplier.
• The option flat takes a boolean, to plot results in a single plot.

Warning: This function is intended to visualize weighted measures (i.e. weights and positions), where the weights
must be normalized (to 1) or an error will be thrown.
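For example, a hedged sketch of a typical call, reusing the option formats described above (the filename and values are illustrative; the weights in the logfile must be normalized as noted in the warning):

>>> import mystic.support
>>> mystic.support.hypercube_measures('paramlog.py', bounds='60:105, 0:30, 2.1:2.8',
...                                   axes='2 3, 6 7, 10 11', weight='0 1, 4 5, 8 9',
...                                   iters='-1')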

3.7 support_hypercube_scenario script

generate scenario support plots from file written with write_support_file; and generate legacy data and cones
from a dataset file, if provided


Available from the command shell as:

support_hypercube_scenario.py filename (datafile) [options]

or as a function call:

mystic.support.hypercube_scenario(filename, datafile=None, **options)

Parameters
• filename (str) – name of the convergence logfile (e.g. paramlog.py)
• datafile (str, default=None) – name of the dataset file (e.g. data.txt)
returns None

Notes

• The option out takes a string of the filepath for the generated plot.
• The options bounds, dim, and iters all take indicator strings. The bounds should be given as comma-separated
slices. For example, using bounds = ".062:.125, 0:30, 2300:3200" will set lower and upper
bounds for x to be (.062,.125), y to be (0,30), and z to be (2300,3200). If all bounds are to not be strictly
enforced, append an asterisk * to the string. The dim (dimensions of the scenario) should be comma-separated
ints. For example, dim = "1, 1, 2" will convert the params to a two-member 3-D dataset. Iters accepts
a string built from comma-separated array slices. Thus, iters = ":" will plot all iters in a single plot.
Alternatively, iters = ":2, 2:" will split the iters into two plots, while iters = "0" will only plot
the first iteration.
• The option label takes comma-separated strings. Thus label = "x,y," will place ‘x’ on the x-axis, ‘y’ on
the y-axis, and nothing on the z-axis. LaTeX, such as label = "$ h $, $ a$, $ v$" will label the
axes with standard LaTeX math formatting. Note that the leading space is required, while a trailing space aligns
the text with the axis instead of the plot frame.
• The option “filter” is used to select datapoints from a given dataset, and takes comma-separated ints.
• A “mask” is given as comma-separated ints. When the mask has more than one int, the plot will be 2D.
• The option “vertical” will plot the dataset values on the vertical axis; for 2D plots, cones are always plotted on
the vertical axis.
• The option nid takes an integer of the nth simultaneous points to plot.
• The option scale takes an integer as a grayscale contrast multiplier.
• The option gap takes an integer distance from cone center to vertex.
• The option data takes a boolean, to plot legacy data, if provided.
• The option cones takes a boolean, to plot cones, if provided.
• The option flat takes a boolean, to plot results in a single plot.
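For example, a hedged sketch of a typical call (the filenames and values are illustrative):

>>> import mystic.support
>>> mystic.support.hypercube_scenario('paramlog.py', 'data.txt',
...                                   bounds='.062:.125, 0:30, 2300:3200',
...                                   dim='1, 1, 2', data=True, cones=True)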



CHAPTER 4

Indices and tables

• genindex
• modindex
• search

199
mystic Documentation, Release 0.3.3.dev0

200 Chapter 4. Indices and tables


Python Module Index

_ mystic.mask, 51
_mystic_collapse_plotter, 193 mystic.math, 51
_mystic_log_reader, 194 mystic.math.approx, 53
_mystic_model_plotter, 194 mystic.math.compressed, 54
_support_convergence, 195 mystic.math.discrete, 55
_support_hypercube, 196 mystic.math.distance, 71
_support_hypercube_measures, 197 mystic.math.grid, 75
_support_hypercube_scenario, 197 mystic.math.integrate, 76
mystic.math.legacydata, 76
a mystic.math.measures, 81
mystic.abstract_ensemble_solver, 9 mystic.math.poly, 93
mystic.abstract_launcher, 13 mystic.math.samples, 93
mystic.abstract_map_solver, 15 mystic.math.stats, 94
mystic.abstract_solver, 18 mystic.metropolis, 94
mystic.models, 95
c mystic.models.abstract_model, 101
mystic.cache, 25 mystic.models.br8, 102
mystic.collapse, 25 mystic.models.circle, 103
mystic.constraints, 26 mystic.models.dejong, 103
mystic.coupler, 33 mystic.models.functions, 107
mystic.models.lorentzian, 111
d mystic.models.mogi, 112
mystic.models.nag, 112
mystic.differential_evolution, 35
mystic.models.pohlheim, 113
e mystic.models.poly, 120
mystic.models.schittkowski, 121
mystic.ensemble, 41
mystic.models.storn, 122
mystic.models.venkataraman, 125
f mystic.models.wavy, 126
mystic.filters, 46 mystic.models.wolfram, 127
mystic.forward_model, 47 mystic.monitors, 128
mystic.munge, 133
h
mystic.helputil, 50 p
mystic.penalty, 134
l mystic.pools, 136
mystic.linesearch, 50 mystic.python_map, 138
m s
mystic, ?? mystic.scemtools, 138

201
mystic Documentation, Release 0.3.3.dev0

mystic.scipy_optimize, 140
mystic.scripts, 145
mystic.search, 147
mystic.solvers, 149
mystic.strategy, 163
mystic.support, 164
mystic.svc, 168
mystic.svr, 172
mystic.symbolic, 173

t
mystic.termination, 181
mystic.tools, 184

202 Python Module Index


Index

Symbols _AbstractWorkerPool__set_nodes() (AbstractWorker-


_AbstractEnsembleSolver__get_solver_instance() (Ab- Pool method), 14
stractEnsembleSolver method), 12 _InitialPoints() (AbstractEnsembleSolver method), 12
_AbstractSolver__bestEnergy() (AbstractSolver method), _InitialPoints() (BuckshotSolver method), 42, 150
23 _InitialPoints() (LatticeSolver method), 42, 152
_AbstractSolver__bestSolution() (AbstractSolver _InitialPoints() (SparsitySolver method), 42, 154
method), 23 _Monitor__step() (Monitor method), 130
_AbstractSolver__energy_history() (AbstractSolver _PowellDirectionalSolver__generations() (PowellDirec-
method), 23 tionalSolver method), 142, 154
_AbstractSolver__evaluations() (AbstractSolver method), _SerialPool__get_nodes() (SerialPool method), 137
23 _SerialPool__set_nodes() (SerialPool method), 137
_AbstractSolver__generations() (AbstractSolver _SetEvaluationLimits() (AbstractSolver method), 23
method), 23 _SetEvaluationLimits() (NelderMeadSimplexSolver
_AbstractSolver__load_state() (AbstractSolver method), method), 141, 153
23 _SetEvaluationLimits() (PowellDirectionalSolver
_AbstractSolver__save_state() (AbstractSolver method), method), 142, 154
23 _Step() (AbstractSolver method), 23
_AbstractSolver__set_bestEnergy() (AbstractSolver _Step() (DifferentialEvolutionSolver method), 37, 151
method), 23 _Step() (DifferentialEvolutionSolver2 method), 39, 152
_AbstractSolver__set_bestSolution() (AbstractSolver _Step() (NelderMeadSimplexSolver method), 141, 153
method), 23 _Step() (PowellDirectionalSolver method), 142, 154
_AbstractSolver__set_energy_history() (AbstractSolver __add__() (Monitor method), 130
method), 23 __bool__() (Null method), 129
_AbstractSolver__set_solution_history() (AbstractSolver __call__() (AbstractFunction method), 96
method), 23 __call__() (Distribution method), 52
_AbstractSolver__solution_history() (AbstractSolver __call__() (LoggingMonitor method), 132
method), 23 __call__() (Monitor method), 130
_AbstractWorkerPool__get_nodes() (AbstractWorker- __call__() (Null method), 129
Pool method), 14 __call__() (Or method), 183
_AbstractWorkerPool__imap() (AbstractWorkerPool __call__() (VerboseLoggingMonitor method), 132
method), 14 __call__() (VerboseMonitor method), 132
_AbstractWorkerPool__init() (AbstractWorkerPool __call__() (When method), 184
method), 14 __copy__() (AbstractSolver method), 23
_AbstractWorkerPool__map() (AbstractWorkerPool __deepcopy__() (AbstractSolver method), 23
method), 14 __delattr__() (Null method), 129
_AbstractWorkerPool__nodes (AbstractWorkerPool at- __dict__ (AbstractFunction attribute), 96
tribute), 14 __dict__ (AbstractModel attribute), 97
_AbstractWorkerPool__pipe() (AbstractWorkerPool __dict__ (AbstractPipeConnection attribute), 13
method), 14 __dict__ (AbstractSolver attribute), 23
__dict__ (AbstractWorkerPool attribute), 14

203
mystic Documentation, Release 0.3.3.dev0

__dict__ (CostFactory attribute), 48 __module__ (Monitor attribute), 130


__dict__ (Distribution attribute), 52 __module__ (NelderMeadSimplexSolver attribute), 141,
__dict__ (Monitor attribute), 130 153
__dict__ (Null attribute), 129 __module__ (Null attribute), 129
__dict__ (Searcher attribute), 148 __module__ (Or attribute), 183
__dict__ (When attribute), 184 __module__ (PowellDirectionalSolver attribute), 142,
__enter__() (AbstractWorkerPool method), 14 154
__exit__() (AbstractWorkerPool method), 14 __module__ (Searcher attribute), 148
__getattr__() (Null method), 129 __module__ (SerialPool attribute), 137
__getattribute__ (permutations attribute), 188 __module__ (SparsitySolver attribute), 43, 154
__getnewargs__() (Null method), 129 __module__ (VerboseLoggingMonitor attribute), 132
__init__() (AbstractEnsembleSolver method), 12 __module__ (VerboseMonitor attribute), 132
__init__() (AbstractFunction method), 96, 101 __module__ (When attribute), 184
__init__() (AbstractMapSolver method), 18 __new__() (And static method), 181
__init__() (AbstractModel method), 97, 102 __new__() (Null static method), 129
__init__() (AbstractPipeConnection method), 14 __new__() (Or static method), 183
__init__() (AbstractSolver method), 23 __new__() (When static method), 184
__init__() (AbstractWorkerPool method), 14 __new__() (permutations method), 188
__init__() (BuckshotSolver method), 42, 150 __nonzero__() (Null method), 129
__init__() (CostFactory method), 48 __orig_write_converge_file() (in module mystic.munge),
__init__() (DifferentialEvolutionSolver method), 37, 151 133
__init__() (DifferentialEvolutionSolver2 method), 39, __orig_write_support_file() (in module mystic.munge),
152 133
__init__() (Distribution method), 52 __reduce__() (LoggingMonitor method), 132
__init__() (LatticeSolver method), 42, 152 __reduce__() (VerboseLoggingMonitor method), 132
__init__() (LoggingMonitor method), 132 __repr__() (AbstractPipeConnection method), 14
__init__() (Monitor method), 130 __repr__() (AbstractWorkerPool method), 15
__init__() (NelderMeadSimplexSolver method), 141, 153 __repr__() (And method), 181
__init__() (Null method), 129 __repr__() (CostFactory method), 48
__init__() (PowellDirectionalSolver method), 142, 154 __repr__() (Null method), 129
__init__() (Searcher method), 148 __repr__() (Or method), 183
__init__() (SparsitySolver method), 43, 154 __repr__() (When method), 184
__init__() (VerboseLoggingMonitor method), 132 __setattr__() (Null method), 130
__init__() (VerboseMonitor method), 132 __setstate__() (LoggingMonitor method), 132
__iter__ (permutations attribute), 188 __setstate__() (VerboseLoggingMonitor method), 132
__len__() (Monitor method), 130 __weakref__ (AbstractFunction attribute), 96
__len__() (Null method), 129 __weakref__ (AbstractModel attribute), 97
__module__ (AbstractEnsembleSolver attribute), 12 __weakref__ (AbstractPipeConnection attribute), 14
__module__ (AbstractFunction attribute), 96 __weakref__ (AbstractSolver attribute), 24
__module__ (AbstractMapSolver attribute), 18 __weakref__ (AbstractWorkerPool attribute), 15
__module__ (AbstractModel attribute), 97 __weakref__ (CostFactory attribute), 48
__module__ (AbstractPipeConnection attribute), 14 __weakref__ (Distribution attribute), 52
__module__ (AbstractSolver attribute), 24 __weakref__ (Monitor attribute), 130
__module__ (AbstractWorkerPool attribute), 14 __weakref__ (Null attribute), 130
__module__ (And attribute), 181 __weakref__ (Searcher attribute), 148
__module__ (BuckshotSolver attribute), 42, 150 _adivide() (in module mystic.tools), 185
__module__ (CostFactory attribute), 48 _amultiply() (in module mystic.tools), 185
__module__ (DifferentialEvolutionSolver attribute), 38, _bootstrap_objective() (AbstractSolver method), 24
151 _clipGuessWithinRangeBoundary() (AbstractSolver
__module__ (DifferentialEvolutionSolver2 attribute), 39, method), 24
152 _configure() (Searcher method), 148
__module__ (Distribution attribute), 52 _decorate_objective() (AbstractSolver method), 24
__module__ (LatticeSolver attribute), 42, 152 _decorate_objective() (DifferentialEvolutionSolver
__module__ (LoggingMonitor attribute), 132 method), 38, 151

204 Index
mystic Documentation, Release 0.3.3.dev0

_decorate_objective() (DifferentialEvolutionSolver2 _support_hypercube_scenario (module), 197


method), 39, 152 _symmetric() (in module mystic.tools), 186
_decorate_objective() (NelderMeadSimplexSolver _type (in module mystic.termination), 184
method), 141, 153 _update_masks() (in module mystic.mask), 51
_divide() (in module mystic.tools), 185 _update_objective() (AbstractEnsembleSolver method),
_exiting (SerialPool attribute), 137 12
_extend_mask() (in module mystic.mask), 51 _update_objective() (AbstractSolver method), 24
_flip() (in module mystic.symbolic), 177 _weight_filter() (in module mystic.collapse), 25
_get_y() (Monitor method), 131 _weights() (in module mystic.monitors), 133
_id (Null attribute), 130 _wts (Monitor attribute), 131
_idivide() (in module mystic.tools), 185 _x (Null attribute), 130
_ik() (Monitor method), 131 _y (Null attribute), 130
_imultiply() (in module mystic.tools), 185
_index_selector() (in module mystic.collapse), 25 A
_inst (Null attribute), 130 absolute_distance() (in module mystic.math.distance), 71
_inverted() (in module mystic.tools), 185 AbstractEnsembleSolver (class in mys-
_is_alive() (SerialPool method), 137 tic.abstract_ensemble_solver), 10
_k() (Monitor method), 131 AbstractFunction (class in mystic.models), 96
_kdiv() (in module mystic.tools), 186 AbstractFunction (class in mys-
_load() (in module mystic.monitors), 133 tic.models.abstract_model), 101
_measures() (in module mystic.monitors), 133 AbstractMapSolver (class in mys-
_memoize() (Searcher method), 148 tic.abstract_map_solver), 16
_multiply() (in module mystic.tools), 186 AbstractModel (class in mystic.models), 96
_mystic_collapse_plotter (module), 193 AbstractModel (class in mystic.models.abstract_model),
_mystic_log_reader (module), 194 102
_mystic_model_plotter (module), 194 AbstractPipeConnection (class in mys-
_npts (Null attribute), 130 tic.abstract_launcher), 13
_pair_selector() (in module mystic.collapse), 25 AbstractSolver (class in mystic.abstract_solver), 19
_pos (Monitor attribute), 131 AbstractWorkerPool (class in mystic.abstract_launcher),
_position_filter() (in module mystic.collapse), 25 14
_positions() (in module mystic.monitors), 133 Ackley (class in mystic.models.pohlheim), 113
_print() (Searcher method), 148 ackley() (in module mystic.models), 97
_process_inputs() (AbstractSolver method), 24 ackley() (in module mystic.models.functions), 107
_process_inputs() (DifferentialEvolutionSolver method), additive() (in module mystic.coupler), 33
38, 151 additive_proxy() (in module mystic.coupler), 33
_process_inputs() (DifferentialEvolutionSolver2 addModel() (CostFactory method), 48
method), 39, 152 almostEqual() (in module mystic.math), 52
_process_inputs() (NelderMeadSimplexSolver method), almostEqual() (in module mystic.math.approx), 53
141, 153 alpha() (in module mystic.math.samples), 93
_process_inputs() (PowellDirectionalSolver method), amap() (AbstractWorkerPool method), 15
142, 154 And (class in mystic.termination), 181
_replace_mask() (in module mystic.mask), 51 and_() (in module mystic.constraints), 33
_search() (Searcher method), 148 and_() (in module mystic.coupler), 34
_serve() (AbstractWorkerPool method), 15 apipe() (AbstractWorkerPool method), 15
_setSimplexWithinRangeBoundary() (NelderMeadSim- approx_equal() (in module mystic.math.approx), 54
plexSolver method), 141, 153 approx_fprime() (in module mystic.termination), 184
_solutions() (in module mystic.monitors), 133 as_constraint() (in module mystic.constraints), 27
_solve() (Searcher method), 148 as_penalty() (in module mystic.constraints), 26
_split_mask() (in module mystic.collapse), 25 asarray() (in module mystic.svc), 168
_step (Monitor attribute), 131 ax (Monitor attribute), 131
_summarize() (Searcher method), 148 ay (Monitor attribute), 131
_support_convergence (module), 195
_support_hypercube (module), 196 B
_support_hypercube_measures (module), 197 barrier_inequality() (in module mystic.penalty), 135

Index 205
mystic Documentation, Release 0.3.3.dev0

Best1Bin() (in module mystic.strategy), 163 CollapseAs() (in module mystic.termination), 182
Best1Exp() (in module mystic.strategy), 163 CollapseAt() (in module mystic.termination), 182
Best2Bin() (in module mystic.strategy), 163 CollapseCost() (in module mystic.termination), 182
Best2Exp() (in module mystic.strategy), 163 Collapsed() (AbstractSolver method), 20
best_dimensions() (in module mystic.support), 167 collapsed() (in module mystic.collapse), 26
bestEnergy (AbstractSolver attribute), 24 CollapsePosition() (in module mystic.termination), 182
bestSolution (AbstractSolver attribute), 24 CollapseWeight() (in module mystic.termination), 182
BevingtonDecay (class in mystic.models.br8), 102 collisions (dataset attribute), 77
Bias() (in module mystic.svc), 168 collisions() (datapoint method), 76
Bias() (in module mystic.svr), 173 commandfy() (in module mystic.helputil), 50
binary() (in module mystic.math.compressed), 54 commandstring() (in module mystic.helputil), 50
binary2coords() (in module mystic.math.compressed), 55 comparator() (in module mystic.symbolic), 177
bounded() (in module mystic.constraints), 29 compose() (in module mystic.math.discrete), 55
bounded_mean() (in module mystic.math.discrete), 55 condense() (in module mystic.symbolic), 177
Branins (class in mystic.models.pohlheim), 114 conflicts (dataset attribute), 77
branins() (in module mystic.models), 97 conflicts() (datapoint method), 76
branins() (in module mystic.models.functions), 107 connected() (in module mystic.tools), 186
buckshot() (in module mystic.ensemble), 44 constraints_parser() (in module mystic.symbolic), 178
buckshot() (in module mystic.solvers), 154 contains() (lipschitzcone method), 80
BuckshotSolver (class in mystic.ensemble), 42 converge_to_support() (in module mystic.munge), 133
BuckshotSolver (class in mystic.solvers), 149 converge_to_support_converter() (in module mys-
tic.munge), 133
C convergence() (in module mystic.support), 164
CandidateRelativeTolerance() (in module mys- Coordinates() (Searcher method), 147
tic.termination), 181 coords (dataset attribute), 77
cdf_factory() (in module mystic.math.stats), 94 Corana (class in mystic.models.storn), 122
center_mass (measure attribute), 57 corana() (in module mystic.models), 97
center_mass (product_measure attribute), 60 corana() (in module mystic.models.functions), 107
chain() (in module mystic.tools), 186 CosineKernel() (in module mystic.svr), 173
ChangeOverGeneration() (in module mystic.termination), cost() (Chebyshev method), 121
181 CostFactory (class in mystic.forward_model), 47
Chebyshev (class in mystic.models.poly), 120 CostFactory() (AbstractModel method), 96, 102
chebyshev() (in module mystic.math.distance), 71 CostFactory() (BevingtonDecay method), 102
chebyshev16cost() (in module mystic.models.poly), 121 CostFactory() (Chebyshev method), 120
chebyshev2cost() (in module mystic.models.poly), 121 CostFactory() (Circle method), 103
chebyshev4cost() (in module mystic.models.poly), 121 CostFactory() (Mogi method), 112
chebyshev6cost() (in module mystic.models.poly), 121 CostFactory2() (AbstractModel method), 97, 102
chebyshev8cost() (in module mystic.models.poly), 121 CostFactory2() (BevingtonDecay method), 102
chebyshevcostfactory() (in module mystic.models.poly), CostFactory2() (Chebyshev method), 121
121 CostFactory2() (Circle method), 103
Circle (class in mystic.models.circle), 103 CostFactory2() (Mogi method), 112
citation() (in module mystic), 5 CustomMonitor() (in module mystic.monitors), 132
clear() (AbstractWorkerPool method), 15
clear() (SerialPool method), 137 D
clipped() (in module mystic.tools), 186 datapoint (class in mystic.math.legacydata), 76
close() (SerialPool method), 137 dataset (class in mystic.math.legacydata), 77
Collapse() (AbstractSolver method), 20 decompose() (in module mystic.math.discrete), 55
collapse_as() (in module mystic.collapse), 25 derivative() (Rosenbrock method), 105
collapse_at() (in module mystic.collapse), 25 DifferentialEvolutionSolver (class in mys-
collapse_cost() (in module mystic.collapse), 25 tic.differential_evolution), 37
collapse_plotter() (in module mystic), 7 DifferentialEvolutionSolver (class in mystic.solvers), 150
collapse_plotter() (in module mystic.scripts), 146 DifferentialEvolutionSolver2 (class in mys-
collapse_position() (in module mystic.collapse), 26 tic.differential_evolution), 38
collapse_weight() (in module mystic.collapse), 26

206 Index
mystic Documentation, Release 0.3.3.dev0

DifferentialEvolutionSolver2 (class in mystic.solvers), 151
DifferentPowers (class in mystic.models.pohlheim), 115
differs_by_one() (in module mystic.math.compressed), 55
differs_by_one() (product_measure method), 60
diffev() (in module mystic.differential_evolution), 39
diffev() (in module mystic.solvers), 156
diffev2() (in module mystic.differential_evolution), 40
diffev2() (in module mystic.solvers), 157
disable_signal_handler() (AbstractSolver method), 24
discrete() (in module mystic.constraints), 28
distance() (lipschitzcone method), 80
Distribution (class in mystic.math), 51
divide() (in module mystic.tools), 186
dot() (in module mystic.svc), 169
duplicates (dataset attribute), 77
duplicates() (datapoint method), 77

E
Easom (class in mystic.models.pohlheim), 116
easom() (in module mystic.models), 97
easom() (in module mystic.models.functions), 108
ellipsoid() (in module mystic.models), 98
ellipsoid() (in module mystic.models.functions), 108
enable_signal_handler() (AbstractSolver method), 24
energy_history (AbstractSolver attribute), 24
equals() (in module mystic.symbolic), 177
erf() (in module mystic.math.stats), 94
ess_maximum() (in module mystic.math.measures), 81
ess_maximum() (measure method), 57
ess_maximum() (product_measure method), 61
ess_minimum() (in module mystic.math.measures), 82
ess_minimum() (measure method), 57
ess_minimum() (product_measure method), 61
euclidean() (in module mystic.math.distance), 72
evaluate() (AbstractModel method), 97, 102
evaluate() (BevingtonDecay method), 103
evaluate() (Lorentzian method), 111
evaluate() (Mogi method), 112
evaluate() (Polynomial method), 121
EvaluationLimits() (in module mystic.termination), 182
evaluations (AbstractSolver attribute), 24
expect() (measure method), 57
expect() (product_measure method), 61
expect_var() (measure method), 57
expect_var() (product_measure method), 61
expectation() (in module mystic.math.measures), 82
expected_std() (in module mystic.math.measures), 82
expected_variance() (in module mystic.math.measures), 82
extend() (Monitor method), 131

F
factor() (in module mystic.tools), 186
fetch() (dataset method), 77
fillpts() (in module mystic.math), 52
fillpts() (in module mystic.math.grid), 75
filter() (dataset method), 77
Finalize() (AbstractSolver method), 20
Finalize() (PowellDirectionalSolver method), 142, 153
flatten() (in module mystic.math.discrete), 55
flatten() (in module mystic.tools), 186
flatten() (product_measure method), 61
flatten() (scenario method), 66
flatten_array() (in module mystic.tools), 187
flip() (in module mystic.symbolic), 177
fmin() (in module mystic.scipy_optimize), 142
fmin() (in module mystic.solvers), 158
fmin_powell() (in module mystic.scipy_optimize), 144
fmin_powell() (in module mystic.solvers), 159
forward() (Chebyshev method), 121
forward() (Circle method), 103
ForwardFactory() (AbstractModel method), 97, 102
ForwardFactory() (BevingtonDecay method), 102
ForwardFactory() (Chebyshev method), 121
ForwardFactory() (Circle method), 103
ForwardFactory() (Lorentzian method), 111
ForwardFactory() (Mogi method), 112
ForwardFactory() (Polynomial method), 121
fOsc3D (class in mystic.models.wolfram), 127
fosc3d() (in module mystic.models), 98
fosc3d() (in module mystic.models.functions), 108
function() (AbstractFunction method), 96, 101
function() (Ackley method), 114
function() (Branins method), 115
function() (Corana method), 123
function() (DifferentPowers method), 115
function() (Easom method), 116
function() (fOsc3D method), 128
function() (GoldsteinPrice method), 117
function() (Griewangk method), 124
function() (HyperEllipsoid method), 118
function() (Michalewicz method), 118
function() (NMinimize51 method), 127
function() (Paviani method), 122
function() (Peaks method), 113
function() (Quartic method), 104
function() (Rastrigin method), 119
function() (Rosenbrock method), 105
function() (Schwefel method), 120
function() (Shekel method), 105
function() (Sinc method), 125
function() (Sphere method), 106
function() (Step method), 107
function() (Wavy1 method), 126
function() (Wavy2 method), 126

function() (Zimmermann method), 125

G
gamma() (in module mystic.math.stats), 94
GaussianKernel() (in module mystic.svr), 173
gencircle() (in module mystic.models.circle), 103
gendata() (in module mystic.models.circle), 103
gendata() (in module mystic.models.lorentzian), 111
generate_conditions() (in module mystic.symbolic), 178
generate_constraint() (in module mystic.symbolic), 180
generate_penalty() (in module mystic.symbolic), 179
generate_solvers() (in module mystic.symbolic), 179
generations (AbstractSolver attribute), 24
generations (PowellDirectionalSolver attribute), 142, 154
get_ax() (Monitor method), 131
get_ay() (Monitor method), 131
get_id() (Monitor method), 131
get_info() (Monitor method), 131
get_ipos() (Monitor method), 131
get_iwts() (Monitor method), 131
get_ix() (Monitor method), 131
get_iy() (Monitor method), 131
get_mask() (in module mystic.mask), 51
get_pos() (Monitor method), 131
get_random_candidates() (in module mystic.strategy), 164
get_variables() (in module mystic.symbolic), 174
get_wts() (Monitor method), 131
get_x() (Monitor method), 131
get_y() (Monitor method), 131
getch() (in module mystic.tools), 187
getCostFunction() (CostFactory method), 48
getCostFunctionSlow() (CostFactory method), 49
getForwardEvaluator() (CostFactory method), 49
getParameterList() (CostFactory method), 49
getRandomParams() (CostFactory method), 50
getVectorCostFunction() (CostFactory method), 50
goldstein() (in module mystic.models), 98
goldstein() (in module mystic.models.functions), 108
GoldsteinPrice (class in mystic.models.pohlheim), 116
GradientNormTolerance() (in module mystic.termination), 182
graphical_distance() (in module mystic.math.distance), 72
gridpts() (in module mystic.math), 52
gridpts() (in module mystic.math.grid), 75
Griewangk (class in mystic.models.storn), 123
griewangk() (in module mystic.models), 98
griewangk() (in module mystic.models.functions), 108

H
hamming() (in module mystic.math.distance), 73
has_datapoint() (dataset method), 78
has_id() (dataset method), 78
has_point() (dataset method), 78
has_position() (dataset method), 78
has_unique() (in module mystic.constraints), 29
hessian() (Rosenbrock method), 105
hessian_product() (Rosenbrock method), 105
histogram() (in module mystic.models.lorentzian), 112
hypercube() (in module mystic.support), 165
hypercube_measures() (in module mystic.support), 166
hypercube_scenario() (in module mystic.support), 166
HyperEllipsoid (class in mystic.models.pohlheim), 117

I
i (Shekel attribute), 106
id (Monitor attribute), 131
Identity() (in module mystic.filters), 46
ids (dataset attribute), 78
imap() (AbstractWorkerPool method), 15
imap() (SerialPool method), 137
impose_as() (in module mystic.constraints), 30
impose_at() (in module mystic.constraints), 30
impose_bounds() (in module mystic.constraints), 30
impose_collapse() (in module mystic.math.measures), 92
impose_expectation() (in module mystic.math.measures), 86
impose_expected_mean_and_variance() (in module mystic.math.measures), 89
impose_expected_std() (in module mystic.math.measures), 88
impose_expected_variance() (in module mystic.math.measures), 87
impose_feasible() (in module mystic.math.discrete), 55
impose_mad() (in module mystic.math.measures), 90
impose_mean() (in module mystic.math.measures), 84
impose_measure() (in module mystic.constraints), 31
impose_median() (in module mystic.math.measures), 90
impose_moment() (in module mystic.math.measures), 85
impose_position() (in module mystic.constraints), 32
impose_product() (in module mystic.math.measures), 92
impose_reweighted_mean() (in module mystic.math.measures), 90
impose_reweighted_std() (in module mystic.math.measures), 90
impose_reweighted_variance() (in module mystic.math.measures), 90
impose_spread() (in module mystic.math.measures), 85
impose_std() (in module mystic.math.measures), 85
impose_sum() (in module mystic.math.measures), 92
impose_support() (in module mystic.math.measures), 91
impose_tmean() (in module mystic.math.measures), 91
impose_tstd() (in module mystic.math.measures), 91
impose_tvariance() (in module mystic.math.measures), 91
impose_unique() (in module mystic.constraints), 29

impose_unweighted() (in module mystic.math.measures), 92
impose_valid() (in module mystic.math.discrete), 56
impose_variance() (in module mystic.math.measures), 84
impose_weight() (in module mystic.constraints), 32
impose_weight_norm() (in module mystic.math.measures), 90
index2binary() (in module mystic.math.compressed), 55
infeasibility() (in module mystic.math.distance), 73
info (Null attribute), 130
info() (LoggingMonitor method), 132
info() (Monitor method), 131
info() (VerboseLoggingMonitor method), 132
info() (VerboseMonitor method), 132
inner() (in module mystic.coupler), 34
inner_proxy() (in module mystic.coupler), 34
insert_missing() (in module mystic.tools), 187
integers() (in module mystic.constraints), 28
integrate() (in module mystic.math.integrate), 76
integrated_mean() (in module mystic.math.integrate), 76
integrated_variance() (in module mystic.math.integrate), 76
intersection() (dataset method), 78
interval_overlap() (in module mystic.tools), 187
is_feasible() (in module mystic.math.distance), 73
isiterable() (in module mystic.tools), 187
isNull() (in module mystic.munge), 133
isNull() (in module mystic.tools), 187
issolution() (in module mystic.constraints), 28
itertype() (in module mystic.tools), 187
ix (Monitor attribute), 131
iy (Monitor attribute), 131

J
j (Shekel attribute), 106
join() (SerialPool method), 137

K
k (Null attribute), 130
KernelMatrix() (in module mystic.svc), 168
KernelMatrix() (in module mystic.svr), 173
kurtosis() (in module mystic.math.measures), 84

L
label (Null attribute), 130
lagrange_equality() (in module mystic.penalty), 135
lagrange_inequality() (in module mystic.penalty), 135
LaplacianKernel() (in module mystic.svr), 173
lattice() (in module mystic.ensemble), 43
lattice() (in module mystic.solvers), 160
LatticeSolver (class in mystic.ensemble), 42
LatticeSolver (class in mystic.solvers), 152
lgamma() (in module mystic.math.stats), 94
license() (in module mystic), 5
line_search() (in module mystic.linesearch), 50
linear_equality() (in module mystic.penalty), 135
linear_inequality() (in module mystic.penalty), 135
linear_symbolic() (in module mystic.symbolic), 173
LinearKernel() (in module mystic.svr), 172
lipschitz (dataset attribute), 78
lipschitz_distance() (in module mystic.math.distance), 73
lipschitz_metric() (in module mystic.math.distance), 74
lipschitzcone (class in mystic.math.legacydata), 80
list_or_tuple() (in module mystic.tools), 187
list_or_tuple_or_ndarray() (in module mystic.tools), 187
listify() (in module mystic.tools), 187
Lnorm() (in module mystic.math.distance), 71
Lnorm() (in module mystic.termination), 182
load() (dataset method), 78
load() (product_measure method), 61
load() (scenario method), 67
load_dataset() (in module mystic.math.legacydata), 80
LoadSolver() (in module mystic.solvers), 152
log_reader() (in module mystic), 7
log_reader() (in module mystic.scripts), 146
logfile_reader() (in module mystic.munge), 133
LoggingMonitor (class in mystic.monitors), 132
Lorentzian (class in mystic.models.lorentzian), 111

M
mad() (in module mystic.math.measures), 90
manhattan() (in module mystic.math.distance), 74
map() (AbstractWorkerPool method), 15
map() (SerialPool method), 137
masked() (in module mystic.tools), 187
mass (measure attribute), 57
mass (product_measure attribute), 62
maximum() (in module mystic.math.measures), 81
maximum() (measure method), 58
maximum() (product_measure method), 62
mcdiarmid_bound() (in module mystic.math.stats), 94
mean() (in module mystic.math.measures), 82
mean() (in module mystic.math.stats), 94
mean_value() (scenario method), 67
mean_y_norm_wts_constraintsFactory() (in module mystic.math.discrete), 55
meanconf() (in module mystic.math.stats), 94
measure (class in mystic.math.discrete), 57
measure_indices() (in module mystic.tools), 188
median() (in module mystic.math.measures), 90
merge() (in module mystic.symbolic), 175
metropolis_hastings() (in module mystic.metropolis), 94
michal() (in module mystic.models), 98
michal() (in module mystic.models.functions), 108
Michalewicz (class in mystic.models.pohlheim), 118
Minima() (Searcher method), 147
minimizers (AbstractFunction attribute), 96, 102
minimizers (Ackley attribute), 114

minimizers (Branins attribute), 115
minimizers (Corana attribute), 123
minimizers (DifferentPowers attribute), 116
minimizers (Easom attribute), 116
minimizers (fOsc3D attribute), 128
minimizers (GoldsteinPrice attribute), 117
minimizers (Griewangk attribute), 124
minimizers (HyperEllipsoid attribute), 118
minimizers (Michalewicz attribute), 119
minimizers (NMinimize51 attribute), 127
minimizers (Paviani attribute), 122
minimizers (Peaks attribute), 113
minimizers (Quartic attribute), 104
minimizers (Rastrigin attribute), 119
minimizers (Rosenbrock attribute), 105
minimizers (Schwefel attribute), 120
minimizers (Shekel attribute), 106
minimizers (Sinc attribute), 126
minimizers (Sphere attribute), 106
minimizers (Step attribute), 107
minimizers (Wavy1 attribute), 126
minimizers (Wavy2 attribute), 127
minimizers (Zimmermann attribute), 125
minimum() (in module mystic.math.measures), 81
minimum() (measure method), 58
minimum() (product_measure method), 62
minkowski() (in module mystic.math.distance), 74
model_plotter() (in module mystic), 5
model_plotter() (in module mystic.scripts), 145
Mogi (class in mystic.models.mogi), 112
moment() (in module mystic.math.measures), 83
Monitor (class in mystic.monitors), 130
monte_carlo_integrate() (in module mystic.math.integrate), 76
multinormal_pdf() (in module mystic.scemtools), 138
multiply() (in module mystic.tools), 188
myinsert() (in module mystic.scemtools), 139
mystic (module), 1
mystic.abstract_ensemble_solver (module), 9
mystic.abstract_launcher (module), 13
mystic.abstract_map_solver (module), 15
mystic.abstract_solver (module), 18
mystic.cache (module), 25
mystic.collapse (module), 25
mystic.constraints (module), 26
mystic.coupler (module), 33
mystic.differential_evolution (module), 35
mystic.ensemble (module), 41
mystic.filters (module), 46
mystic.forward_model (module), 47
mystic.helputil (module), 50
mystic.linesearch (module), 50
mystic.mask (module), 51
mystic.math (module), 51
mystic.math.approx (module), 53
mystic.math.compressed (module), 54
mystic.math.discrete (module), 55
mystic.math.distance (module), 71
mystic.math.grid (module), 75
mystic.math.integrate (module), 76
mystic.math.legacydata (module), 76
mystic.math.measures (module), 81
mystic.math.poly (module), 93
mystic.math.samples (module), 93
mystic.math.stats (module), 94
mystic.metropolis (module), 94
mystic.models (module), 95
mystic.models.abstract_model (module), 101
mystic.models.br8 (module), 102
mystic.models.circle (module), 103
mystic.models.dejong (module), 103
mystic.models.functions (module), 107
mystic.models.lorentzian (module), 111
mystic.models.mogi (module), 112
mystic.models.nag (module), 112
mystic.models.pohlheim (module), 113
mystic.models.poly (module), 120
mystic.models.schittkowski (module), 121
mystic.models.storn (module), 122
mystic.models.venkataraman (module), 125
mystic.models.wavy (module), 126
mystic.models.wolfram (module), 127
mystic.monitors (module), 128
mystic.munge (module), 133
mystic.penalty (module), 134
mystic.pools (module), 136
mystic.python_map (module), 138
mystic.scemtools (module), 138
mystic.scipy_optimize (module), 140
mystic.scripts (module), 145
mystic.search (module), 147
mystic.solvers (module), 149
mystic.strategy (module), 163
mystic.support (module), 164
mystic.svc (module), 168
mystic.svr (module), 172
mystic.symbolic (module), 173
mystic.termination (module), 181
mystic.tools (module), 184

N
near_integers() (in module mystic.constraints), 29
NelderMeadSimplexSolver (class in mystic.scipy_optimize), 141
NelderMeadSimplexSolver (class in mystic.solvers), 152
next (permutations attribute), 188
nmin51() (in module mystic.models), 98
nmin51() (in module mystic.models.functions), 109

NMinimize51 (class in mystic.models.wolfram), 127
nodes (SerialPool attribute), 137
norm() (in module mystic.math.measures), 81
norm_wts_constraintsFactory() (in module mystic.math.discrete), 55
normalize() (in module mystic.math.measures), 90
normalize() (measure method), 58
normalized() (in module mystic.constraints), 27
NormalizedChangeOverGeneration() (in module mystic.termination), 182
NormalizedCostTarget() (in module mystic.termination), 182
not_() (in module mystic.constraints), 33
not_() (in module mystic.coupler), 34
npts (dataset attribute), 78
npts (measure attribute), 58
npts (product_measure attribute), 62
Null (class in mystic.monitors), 129
NullChecker() (in module mystic.filters), 47

O
old_to_new_support_converter() (in module mystic.munge), 133
Or (class in mystic.termination), 183
or_() (in module mystic.constraints), 33
or_() (in module mystic.coupler), 34
outer() (in module mystic.coupler), 35
outer_proxy() (in module mystic.coupler), 35

P
paginate() (in module mystic.helputil), 50
pairwise() (in module mystic.tools), 188
partial() (in module mystic.tools), 188
Paviani (class in mystic.models.schittkowski), 121
paviani() (in module mystic.models), 99
paviani() (in module mystic.models.functions), 109
pdf_factory() (in module mystic.math.stats), 94
Peaks (class in mystic.models.nag), 112
peaks() (in module mystic.models), 99
peaks() (in module mystic.models.functions), 109
penalty_parser() (in module mystic.symbolic), 177
permutations (class in mystic.tools), 188
PickComponent() (in module mystic.filters), 47
pipe() (AbstractWorkerPool method), 15
pipe() (SerialPool method), 137
pof() (product_measure method), 62
pof_value() (scenario method), 67
point (class in mystic.math.legacydata), 80
point_mass (class in mystic.math.discrete), 56
poly1d() (in module mystic.math), 52
poly1d() (in module mystic.math.poly), 93
polyeval() (in module mystic.math), 52
polyeval() (in module mystic.math.poly), 93
Polynomial (class in mystic.models.poly), 121
PolynomialKernel() (in module mystic.svr), 173
PopulationSpread() (in module mystic.termination), 183
pos (Monitor attribute), 131
pos (product_measure attribute), 62
position (datapoint attribute), 77
positions (measure attribute), 58
positions (product_measure attribute), 62
PowellDirectionalSolver (class in mystic.scipy_optimize), 141
PowellDirectionalSolver (class in mystic.solvers), 153
powers() (in module mystic.models), 99
powers() (in module mystic.models.functions), 109
prepend() (Monitor method), 131
prob_mass() (in module mystic.math.stats), 94
product_measure (class in mystic.math.discrete), 60
pts (product_measure attribute), 62
python_map() (in module mystic.python_map), 138

Q
quadratic_equality() (in module mystic.penalty), 135
quadratic_inequality() (in module mystic.penalty), 135
Quartic (class in mystic.models.dejong), 104
quartic() (in module mystic.models), 99
quartic() (in module mystic.models.functions), 109

R
Rand1Bin() (in module mystic.strategy), 163
Rand1Exp() (in module mystic.strategy), 164
Rand2Bin() (in module mystic.strategy), 164
Rand2Exp() (in module mystic.strategy), 164
random_samples() (in module mystic.math.samples), 93
random_seed() (in module mystic.tools), 189
random_state() (in module mystic.tools), 189
randomly_bin() (in module mystic.math.grid), 75
RandToBest1Bin() (in module mystic.strategy), 164
RandToBest1Exp() (in module mystic.strategy), 164
range (measure attribute), 58
Rastrigin (class in mystic.models.pohlheim), 119
rastrigin() (in module mystic.models), 99
rastrigin() (in module mystic.models.functions), 109
raw (dataset attribute), 78
raw_to_converge() (in module mystic.munge), 134
raw_to_converge_converter() (in module mystic.munge), 134
raw_to_support() (in module mystic.munge), 134
raw_to_support_converter() (in module mystic.munge), 134
read_converge_file() (in module mystic.munge), 134
read_history() (in module mystic.munge), 134
read_import() (in module mystic.munge), 134
read_monitor() (in module mystic.munge), 134
read_old_support_file() (in module mystic.munge), 134
read_raw_file() (in module mystic.munge), 134
read_support_file() (in module mystic.munge), 134

read_trajectories() (in module mystic.munge), 134
reduced() (in module mystic.tools), 189
RegressionFunction() (in module mystic.svr), 173
remix() (in module mystic.scemtools), 139
repeats (dataset attribute), 78
repeats() (datapoint method), 77
replace_variables() (in module mystic.symbolic), 173
Reset() (Searcher method), 147
restart() (SerialPool method), 137
rms (point attribute), 80
rms (point_mass attribute), 57
rosen() (in module mystic.models), 99
rosen() (in module mystic.models.functions), 110
rosen0der() (in module mystic.models), 99
rosen0der() (in module mystic.models.functions), 110
rosen1der() (in module mystic.models), 100
rosen1der() (in module mystic.models.functions), 110
Rosenbrock (class in mystic.models.dejong), 104

S
sample() (in module mystic.math.samples), 93
sampled_mean() (in module mystic.math.samples), 93
sampled_pof() (in module mystic.math.samples), 93
sampled_pof() (product_measure method), 62
sampled_prob() (in module mystic.math.samples), 93
sampled_pts() (in module mystic.math.samples), 93
sampled_support() (product_measure method), 63
sampled_variance() (in module mystic.math.samples), 94
samplepts() (in module mystic.math), 52
samplepts() (in module mystic.math.grid), 75
Samples() (Searcher method), 148
sampvar() (in module mystic.math.stats), 94
save_dataset() (in module mystic.math.legacydata), 80
SaveSolver() (AbstractSolver method), 20
scem() (in module mystic.scemtools), 139
scenario (class in mystic.math.discrete), 66
Schwefel (class in mystic.models.pohlheim), 119
schwefel() (in module mystic.models), 100
schwefel() (in module mystic.models.functions), 110
Search() (Searcher method), 148
Searcher (class in mystic.search), 147
select() (product_measure method), 63
select_params() (in module mystic.tools), 189
selector() (in module mystic.collapse), 26
SelectScheduler() (AbstractMapSolver method), 17
SelectServers() (AbstractMapSolver method), 17
sequence() (in module mystic.munge), 134
sequential_deal() (in module mystic.scemtools), 139
serial_launcher() (in module mystic.python_map), 138
SerialPool (class in mystic.pools), 136
set_expect() (measure method), 58
set_expect() (product_measure method), 63
set_expect_mean_and_var() (measure method), 59
set_expect_mean_and_var() (product_measure method), 64
set_expect_var() (measure method), 59
set_expect_var() (product_measure method), 64
set_feasible() (scenario method), 67
set_mean_value() (scenario method), 68
set_valid() (scenario method), 68
SetConstraints() (AbstractSolver method), 20
SetConstraints() (DifferentialEvolutionSolver method), 37, 150
SetConstraints() (DifferentialEvolutionSolver2 method), 38, 151
SetDistribution() (AbstractEnsembleSolver method), 11
SetEvaluationLimits() (AbstractSolver method), 20
SetEvaluationMonitor() (AbstractSolver method), 20
SetGenerationMonitor() (AbstractSolver method), 20
SetInitialPoints() (AbstractEnsembleSolver method), 11
SetInitialPoints() (AbstractSolver method), 20
SetLauncher() (AbstractMapSolver method), 17
SetMapper() (AbstractMapSolver method), 18
SetMultinormalInitialPoints() (AbstractEnsembleSolver method), 11
SetMultinormalInitialPoints() (AbstractSolver method), 20
SetNestedSolver() (AbstractEnsembleSolver method), 11
SetObjective() (AbstractSolver method), 21
SetPenalty() (AbstractSolver method), 21
SetRandomInitialPoints() (AbstractEnsembleSolver method), 11
SetRandomInitialPoints() (AbstractSolver method), 21
SetReducer() (AbstractSolver method), 21
SetSampledInitialPoints() (AbstractEnsembleSolver method), 11
SetSampledInitialPoints() (AbstractSolver method), 21
SetSaveFrequency() (AbstractSolver method), 21
SetStrictRanges() (AbstractSolver method), 21
SetTermination() (AbstractSolver method), 22
Shekel (class in mystic.models.dejong), 105
shekel() (in module mystic.models), 100
shekel() (in module mystic.models.functions), 110
short() (dataset method), 78
short_wrt_data() (scenario method), 69
short_wrt_self() (scenario method), 69
SigmoidKernel() (in module mystic.svr), 173
simplify() (in module mystic.symbolic), 176
Sinc (class in mystic.models.venkataraman), 125
skewness() (in module mystic.math.measures), 84
Solution() (AbstractSolver method), 22
solution_history (AbstractSolver attribute), 25
SolutionImprovement() (in module mystic.termination), 183
Solve() (AbstractEnsembleSolver method), 11
Solve() (AbstractSolver method), 22
Solve() (DifferentialEvolutionSolver method), 37, 150

Solve() (DifferentialEvolutionSolver2 method), 38, 151
solve() (in module mystic.constraints), 28
solve() (in module mystic.symbolic), 175
Solve() (NelderMeadSimplexSolver method), 141, 152
Solve() (PowellDirectionalSolver method), 142, 153
solver_bounds() (in module mystic.tools), 189
SolverInterrupt() (in module mystic.termination), 183
sort_ab_with_b() (in module mystic.scemtools), 139
sort_and_deal() (in module mystic.scemtools), 139
sort_complex() (in module mystic.scemtools), 139
sort_complex0() (in module mystic.scemtools), 139
sort_complex2() (in module mystic.scemtools), 139
sparsity() (in module mystic.ensemble), 45
sparsity() (in module mystic.solvers), 162
SparsitySolver (class in mystic.ensemble), 42
SparsitySolver (class in mystic.solvers), 154
Sphere (class in mystic.models.dejong), 106
sphere() (in module mystic.models), 100
sphere() (in module mystic.models.functions), 110
split_param() (in module mystic.math.measures), 92
spread() (in module mystic.math.measures), 81
standard_moment() (in module mystic.math.measures), 83
state() (in module mystic.termination), 184
std() (in module mystic.math.measures), 84
stderr() (in module mystic.math.stats), 94
Step (class in mystic.models.dejong), 106
Step() (AbstractSolver method), 22
step() (in module mystic.models), 100
step() (in module mystic.models.functions), 110
sum() (in module mystic.svc), 170
support() (in module mystic.math.measures), 83
support() (measure method), 60
support() (product_measure method), 65
support_index() (in module mystic.math.measures), 83
support_index() (measure method), 60
support_index() (product_measure method), 65
SupportVectors() (in module mystic.svc), 168
SupportVectors() (in module mystic.svr), 173
suppress() (in module mystic.tools), 189
suppressed() (in module mystic.tools), 189
swap() (in module mystic.support), 167
synchronized() (in module mystic.tools), 190

T
terminate() (SerialPool method), 137
Terminated() (AbstractEnsembleSolver method), 12
Terminated() (AbstractSolver method), 22
tmean() (in module mystic.math.measures), 91
tolerance() (in module mystic.math), 53
tolerance() (in module mystic.math.approx), 54
Trajectories() (Searcher method), 148
transpose() (in module mystic.svc), 172
tstd() (in module mystic.math.measures), 91
tvariance() (in module mystic.math.measures), 91
type() (in module mystic.termination), 184

U
uimap() (AbstractWorkerPool method), 15
unflatten() (in module mystic.math.discrete), 55
uniform_equality() (in module mystic.penalty), 136
uniform_inequality() (in module mystic.penalty), 136
unique() (in module mystic.constraints), 29
unpair() (in module mystic.tools), 191
update() (dataset method), 79
update() (product_measure method), 65
update() (scenario method), 70
update_complex() (in module mystic.scemtools), 139
update_mask() (in module mystic.mask), 51
update_position_masks() (in module mystic.mask), 51
update_weight_masks() (in module mystic.mask), 51
UpdateGenealogyRecords() (DifferentialEvolutionSolver method), 37, 150
UpdateGenealogyRecords() (DifferentialEvolutionSolver2 method), 39, 152
UseTrajectories() (Searcher method), 148

V
valid() (dataset method), 79
valid_wrt_model() (scenario method), 70
value (datapoint attribute), 77
values (dataset attribute), 80
values (scenario attribute), 71
Values() (Searcher method), 148
var (measure attribute), 60
varconf() (in module mystic.math.stats), 94
variance() (in module mystic.math.measures), 83
venkat91() (in module mystic.models), 100
venkat91() (in module mystic.models.functions), 110
Verbose() (Searcher method), 148
VerboseLoggingMonitor (class in mystic.monitors), 132
VerboseMonitor (class in mystic.monitors), 132
volume() (in module mystic.math.stats), 94
VTR() (in module mystic.termination), 183
VTRChangeOverGeneration() (in module mystic.termination), 183

W
Wavy1 (class in mystic.models.wavy), 126
wavy1() (in module mystic.models), 100
wavy1() (in module mystic.models.functions), 111
Wavy2 (class in mystic.models.wavy), 126
wavy2() (in module mystic.models), 101
wavy2() (in module mystic.models.functions), 111
weighted_select() (in module mystic.math.measures), 81
weights (measure attribute), 60
weights (product_measure attribute), 66
WeightVector() (in module mystic.svc), 168

When (class in mystic.termination), 183
with_constraint() (in module mystic.constraints), 26
with_mean() (in module mystic.constraints), 27
with_penalty() (in module mystic.constraints), 26
with_spread() (in module mystic.constraints), 27
with_std() (in module mystic.constraints), 27
with_variance() (in module mystic.constraints), 27
worker_pool() (in module mystic.python_map), 138
wrap_bounds() (in module mystic.tools), 191
wrap_cf() (in module mystic.tools), 191
wrap_function() (in module mystic.tools), 191
wrap_nested() (in module mystic.tools), 191
wrap_penalty() (in module mystic.tools), 191
wrap_reducer() (in module mystic.tools), 191
write_converge_file() (in module mystic.munge), 134
write_monitor() (in module mystic.munge), 134
write_raw_file() (in module mystic.munge), 134
write_support_file() (in module mystic.munge), 134
wts (Monitor attribute), 131
wts (product_measure attribute), 66

X
x (Monitor attribute), 132
x (Null attribute), 130

Y
y (Monitor attribute), 132
y (Null attribute), 130

Z
Zimmermann (class in mystic.models.storn), 124
zimmermann() (in module mystic.models), 101
zimmermann() (in module mystic.models.functions), 111