fssa package¶
Submodules¶
fssa.fssa module¶
Low-level routines for finite-size scaling analysis
See also
fssa – The high-level module
Notes
The fssa package provides routines to perform finite-size scaling analyses on experimental data [10] [11].
It has been inspired by Oliver Melchert and his superb autoScale package [3].
References
[10] M. E. J. Newman and G. T. Barkema, Monte Carlo Methods in Statistical Physics (Oxford University Press, 1999)
[11] K. Binder and D. W. Heermann, Monte Carlo Simulation in Statistical Physics (Springer, Berlin, Heidelberg, 2010)
[3] O. Melchert, arXiv:0910.5403 (2009)
class fssa.fssa.ScaledData[source]¶
Bases: fssa.fssa.ScaledData
A namedtuple for scaledata() output
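As a namedtuple, a ScaledData instance can be unpacked positionally or accessed by field name. A minimal sketch, using a stand-in definition with the x, y, dy fields documented for scaledata() below (the stand-in class itself is an assumption for illustration, not the package's own definition):

```python
from collections import namedtuple

# Stand-in for fssa.fssa.ScaledData (assumed field names x, y, dy,
# as documented for the scaledata() output):
ScaledData = namedtuple('ScaledData', ['x', 'y', 'dy'])

sd = ScaledData(x=[[0.0]], y=[[1.0]], dy=[[0.01]])

# Fields are available by name or by tuple unpacking:
x, y, dy = sd
assert x is sd.x and y is sd.y and dy is sd.dy
```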
fssa.fssa.autoscale(l, rho, a, da, rho_c0, nu0, zeta0, x_bounds=None, **kwargs)[source]¶
Automatically scale finite-size data and fit critical point and exponents
Parameters:
- l, rho, a, da – input for the scaledata() function
- rho_c0, nu0, zeta0 – initial guesses for the critical point and exponents
- x_bounds (tuple of floats, optional) – lower and upper bound for the scaled data x to consider
Returns:
- res (OptimizeResult)
- res['success'] (bool) – indicates whether the optimization algorithm terminated successfully
- res['x'] (ndarray)
- res['rho'], res['nu'], res['zeta'] (float) – the fitted critical point and exponents, with res['x'] == [res['rho'], res['nu'], res['zeta']]
- res['drho'], res['dnu'], res['dzeta'] (float) – the respective standard errors derived from fitting the curvature at the minimum, with res['errors'] == [res['drho'], res['dnu'], res['dzeta']]
- res['errors'], res['varco'] (ndarray) – the standard errors as a vector, and the full variance–covariance matrix (whose diagonal entries are the squared standard errors), with np.sqrt(np.diag(res['varco'])) == res['errors']
See also
- scaledata() – for the l, rho, a, da input parameters
- quality() – the goal function of the optimization
- scipy.optimize.minimize() – the optimization wrapper routine
- scipy.optimize.OptimizeResult – the return type
Notes
This implementation uses the quality function by Houdayer & Hartmann [8] which measures the quality of the data collapse, see the sections The data collapse method and The quality function in the manual.
This function and the whole fssa package have been inspired by Oliver Melchert and his superb autoScale package [9].
The critical point and exponents, including their standard errors and (co)variances, are fitted by the Nelder–Mead algorithm; see the section The Nelder–Mead algorithm in the manual.
References
[8] J. Houdayer and A. Hartmann, Physical Review B 70, 014418+ (2004), doi:10.1103/physrevb.70.014418
[9] O. Melchert, arXiv:0910.5403 (2009)
Examples
>>> # generate artificial scaling data from master curve
>>> # with rho_c == 1.0, nu == 2.0, zeta == 0.0
>>> import numpy as np
>>> import fssa
>>> l = [10, 100, 1000]
>>> rho = np.linspace(0.9, 1.1)
>>> l_mesh, rho_mesh = np.meshgrid(l, rho, indexing='ij')
>>> master_curve = lambda x: 1. / (1. + np.exp(-x))
>>> x = np.power(l_mesh, 0.5) * (rho_mesh - 1.)
>>> y = master_curve(x)
>>> dy = y / 100.
>>> y += np.random.randn(*y.shape) * dy
>>> a = y
>>> da = dy
>>>
>>> # run autoscale
>>> res = fssa.autoscale(l=l, rho=rho, a=a, da=da, rho_c0=0.9, nu0=2.0, zeta0=0.0)
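The documented relations among the result entries can be illustrated with a stand-in dict shaped like the OptimizeResult described above (all numbers here are made up for illustration, not real fit output):

```python
import numpy as np

# Stand-in for an autoscale() result (hypothetical values):
res = {
    'rho': 1.0, 'nu': 2.0, 'zeta': 0.0,
    'drho': 0.01, 'dnu': 0.05, 'dzeta': 0.02,
    # variance-covariance matrix: diagonal = squared standard errors
    'varco': np.diag([0.01 ** 2, 0.05 ** 2, 0.02 ** 2]),
}

# Documented relation: errors are the square roots of the diagonal
res['errors'] = np.sqrt(np.diag(res['varco']))
assert np.allclose(res['errors'], [res['drho'], res['dnu'], res['dzeta']])

print("rho_c = {rho} +/- {drho}".format(**res))
```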
fssa.fssa.quality(x, y, dy, x_bounds=None)[source]¶
Quality of data collapse onto a master curve defined by the data
This is the reduced chi-square statistic for a data fit except that the master curve is fitted from the data itself.
Parameters:
- x, y, dy – output from scaledata(): the scaled data x and y with standard errors dy
- x_bounds (tuple of floats, optional) – lower and upper bound for the scaled data x to consider
Returns: the quality of the data collapse
Return type: float
Raises: ValueError – if not all arrays x, y, dy have dimension 2, if not all arrays are of the same shape, if x is not sorted along rows (axis=1), or if dy does not have only positive entries
Notes
This is the implementation of the reduced \(\chi^2\) quality function \(S\) by Houdayer & Hartmann [6]. It should attain a minimum of around \(1\) for an optimal fit, and be much larger otherwise.
For further information, see the section The quality function in the manual.
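The idea behind \(S \approx 1\) can be illustrated with plain numpy by computing a reduced \(\chi^2\) of noisy data against a *known* master curve. This is only a sketch of the statistic; the actual quality() function estimates the master curve from the data itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data scattered around a known master curve
x = np.linspace(-1.0, 1.0, 200)
f = 1.0 / (1.0 + np.exp(-x))        # true master curve
dy = np.full_like(x, 0.01)          # standard errors
y = f + rng.normal(scale=dy)        # noisy observations

# Reduced chi-square: ~1 when scatter matches the quoted errors,
# much larger when the data do not collapse onto the curve
chi2_red = np.mean(((y - f) / dy) ** 2)
assert 0.5 < chi2_red < 2.0
```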
References
[6] J. Houdayer and A. Hartmann, Physical Review B 70, 014418+ (2004), doi:10.1103/physrevb.70.014418
fssa.fssa.scaledata(l, rho, a, da, rho_c, nu, zeta)[source]¶
Scale experimental data according to critical exponents
Parameters:
- l, rho – finite system sizes l and parameter values rho
- a, da – experimental data a with standard errors da obtained at the finite system sizes l and parameter values rho, with a.shape == da.shape == (l.size, rho.size)
- rho_c (float in range [rho.min(), rho.max()]) – (assumed) critical parameter value with rho_c >= rho.min() and rho_c <= rho.max()
- nu, zeta – (assumed) critical exponents
Returns:
- ScaledData – scaled data x, y with standard errors dy
- x, y, dy (ndarray) – two-dimensional arrays of shape (l.size, rho.size)
Notes
Scale data points \((\varrho_j, a_{ij}, da_{ij})\) observed at finite system sizes \(L_i\) and parameter values \(\varrho_j\) according to the finite-size scaling ansatz
\[L_i^{-\zeta/\nu} a_{ij} = \tilde{f}\left( L_i^{1/\nu} (\varrho_j - \varrho_c) \right).\]
The output is the scaled data points \((x_{ij}, y_{ij}, dy_{ij})\) with
\[\begin{split}x_{ij} & = L_i^{1/\nu} (\varrho_j - \varrho_c) \\ y_{ij} & = L_i^{-\zeta/\nu} a_{ij} \\ dy_{ij} & = L_i^{-\zeta/\nu} da_{ij}\end{split}\]
such that all data points collapse onto the single curve \(\tilde{f}(x)\) with the right choice of \(\varrho_c, \nu, \zeta\) [4] [5].
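The three formulas above translate directly into broadcasted array operations. A minimal numpy sketch (an illustrative re-implementation under the stated ansatz, not the package's own code):

```python
import numpy as np

def scale(l, rho, a, da, rho_c, nu, zeta):
    """Sketch of the finite-size scaling transformation (not fssa's code)."""
    l = np.asarray(l, dtype=float)[:, np.newaxis]      # column: shape (l.size, 1)
    rho = np.asarray(rho, dtype=float)[np.newaxis, :]  # row: shape (1, rho.size)
    x = l ** (1. / nu) * (rho - rho_c)       # x_ij = L_i^{1/nu} (rho_j - rho_c)
    y = l ** (-zeta / nu) * np.asarray(a)    # y_ij = L_i^{-zeta/nu} a_ij
    dy = l ** (-zeta / nu) * np.asarray(da)  # dy_ij = L_i^{-zeta/nu} da_ij
    return x, y, dy

x, y, dy = scale(l=[10, 100], rho=[0.9, 1.0, 1.1],
                 a=np.ones((2, 3)), da=0.01 * np.ones((2, 3)),
                 rho_c=1.0, nu=2.0, zeta=0.0)
# Broadcasting yields two-dimensional (l.size, rho.size) arrays:
assert x.shape == y.shape == dy.shape == (2, 3)
```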
Raises: ValueError – if l or rho is not 1-D array_like, if a or da is not 2-D array_like, or if the shape of a or da differs from (l.size, rho.size)
References
[4] M. E. J. Newman and G. T. Barkema, Monte Carlo Methods in Statistical Physics (Oxford University Press, 1999)
[5] K. Binder and D. W. Heermann, Monte Carlo Simulation in Statistical Physics (Springer, Berlin, Heidelberg, 2010)
fssa.optimize module¶
Module contents¶
Implements algorithmic finite-size scaling analysis at phase transitions
This module implements the algorithmic finite-size scaling analysis at phase transitions as demonstrated by Oliver Melchert and his superb autoscale.py script.
The fssa module provides these high-level functions from the fssa.fssa module:
fssa.scaledata(l, rho, a, da, rho_c, nu, zeta) | Scale experimental data according to critical exponents
fssa.quality(x, y, dy[, x_bounds]) | Quality of data collapse onto a master curve defined by the data
fssa.autoscale(l, rho, a, da, rho_c0, nu0, zeta0) | Automatically scale finite-size data and fit critical point and exponents
See also
fssa.fssa – low-level functions
Notes
The fssa.scaledata() function scales finite-size data so that the data hopefully collapse onto a single universal scaling function, also known as the master curve.
The fssa.quality() function assesses the quality of this data collapse onto a single curve.
Finally, the fssa.autoscale() function frames the data collapse as an optimization problem and searches for the critical values that minimize the quality function.
The fssa package expects finite-size data in the following setting:
- l is like a 1-D numpy array which contains the finite system sizes \(L\).
- rho is like a 1-D numpy array which contains the parameter values \(\varrho\).
- a is like a 2-D numpy array which contains the observations (the data) \(A_L(\varrho)\), where a[i, j] is the data at the i-th system size and the j-th parameter value.
- da is like a 2-D numpy array which contains the standard errors in the observations.
This implementation uses the quality function by Houdayer & Hartmann [1], which measures the quality of the data collapse; see the sections The data collapse method and The quality function in the manual.
The whole fssa package has been inspired by Oliver Melchert and his superb autoScale package [2].
The critical point and exponents, including their standard errors and (co)variances, are fitted by the Nelder–Mead algorithm; see the section The Nelder–Mead algorithm in the manual.
Currently, the module only implements homogeneous data arrays: Data must be available for all finite system sizes and parameter values.
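A sketch of that expected input layout, with hypothetical sizes and values chosen only for illustration:

```python
import numpy as np

l = np.array([16, 32, 64])          # finite system sizes L, 1-D
rho = np.linspace(0.5, 1.5, 11)     # parameter values rho, 1-D
a = np.ones((l.size, rho.size))     # observations A_L(rho), 2-D
da = np.full_like(a, 0.01)          # standard errors, same shape as a

# Homogeneous data: a[i, j] must exist for every pair (L_i, rho_j)
assert a.shape == da.shape == (l.size, rho.size)
```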
References
[1] J. Houdayer and A. Hartmann, Physical Review B 70, 014418+ (2004), doi:10.1103/physrevb.70.014418
[2] O. Melchert, arXiv:0910.5403 (2009)