# The quality of data collapse

## The quality function

In the following, we present a measure by Houdayer & Hartmann [HH04] for the quality of the data collapse. Melchert [Mel09] points to alternative measures, for example [BS01] and [WBJS08], and to applications of these measures in the literature.

Houdayer & Hartmann [HH04] refine a method proposed by Kawashima & Ito [KI93]. They define the quality as the reduced \(\chi^2\) statistic

\[
S = \frac{1}{\mathcal{N}} \sum_{i,j} \frac{(y_{ij} - Y_{ij})^2}{dy_{ij}^2 + dY_{ij}^2},
\]

where the values \(y_{ij}, dy_{ij}\) are the scaled observations and their standard errors at \(x_{ij}\), and the values \(Y_{ij}, dY_{ij}\) are the estimated value of the master curve and its standard error at \(x_{ij}\).

The quality \(S\) is the mean square of the weighted deviations from the master curve. As we expect the individual deviations \(y_{ij} - Y_{ij}\) to be of the order of the individual error \(\sqrt{dy_{ij}^2 + dY_{ij}^2}\) for an optimal fit, the quality \(S\) should attain its minimum \(S_{\min}\) at around \(1\) and be much larger otherwise [BR03].
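As a minimal sketch, the quality \(S\) can be computed with NumPy as follows. The helper name `quality_s` and the convention that undefined master-curve points are marked `NaN` are illustrative assumptions, not the package API:

```python
import numpy as np

def quality_s(y, dy, Y, dY):
    """Reduced chi-square quality S of a data collapse (illustrative helper).

    y, dy : scaled observations and their standard errors
    Y, dY : master-curve estimates and their standard errors at the same points
    NaN entries in Y mark points where the master curve is undefined.
    """
    y, dy, Y, dY = map(np.asarray, (y, dy, Y, dY))
    mask = ~np.isnan(Y)               # keep only terms where Y is defined
    n = mask.sum()                    # N, the number of such terms
    return np.sum((y[mask] - Y[mask]) ** 2
                  / (dy[mask] ** 2 + dY[mask] ** 2)) / n
```

If every deviation \(y_{ij} - Y_{ij}\) equals its combined error \(\sqrt{dy_{ij}^2 + dY_{ij}^2}\), each term contributes \(1\) and \(S = 1\), matching the expectation for an optimal fit.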

Let \(i = 1, \ldots, k\) enumerate the system sizes \(L_i\) and let \(j = 1, \ldots, n\) enumerate the parameters \(\varrho_j\) with \(\varrho_1 < \varrho_2 < \ldots < \varrho_n\). The scaled data are

\[
x_{ij} = L_i^{1/\nu} (\varrho_j - \varrho_c), \qquad
y_{ij} = L_i^{-\zeta/\nu} a_{ij}, \qquad
dy_{ij} = L_i^{-\zeta/\nu} da_{ij},
\]

where \(a_{ij} \pm da_{ij}\) are the unscaled observations, \(\varrho_c\) is the critical parameter value, and \(\nu, \zeta\) are the critical exponents.

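Under the standard finite-size scaling ansatz \(x_{ij} = L_i^{1/\nu}(\varrho_j - \varrho_c)\), \(y_{ij} = L_i^{-\zeta/\nu} a_{ij}\), the transformation can be sketched as follows; the function name and the raw-data arrays `a`, `da` are illustrative assumptions:

```python
import numpy as np

def scale_data(L, rho, a, da, rho_c, nu, zeta):
    """Scale raw observations a[i, j] +/- da[i, j] for sizes L[i] at rho[j]."""
    L = np.asarray(L, float)[:, np.newaxis]   # shape (k, 1) for broadcasting
    rho = np.asarray(rho, float)[np.newaxis, :]  # shape (1, n)
    x = L ** (1.0 / nu) * (rho - rho_c)       # x_ij = L_i^{1/nu} (rho_j - rho_c)
    y = L ** (-zeta / nu) * np.asarray(a)     # y_ij = L_i^{-zeta/nu} a_ij
    dy = L ** (-zeta / nu) * np.asarray(da)   # errors scale like the values
    return x, y, dy
```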
The sum in the quality function \(S\) only involves terms for which the estimated value \(Y_{ij}\) of the master curve at \(x_{ij}\) is defined. The number of such terms is \(\mathcal{N}\).

The master curve itself depends on the scaled data. For a given \(i\), we estimate the master curve at \(x_{ij}\) from the pairs of points of all other system sizes that enclose \(x_{ij}\): for each \(i' \neq i\), let \(j'\) be such that \(x_{i'j'} \leq x_{ij} \leq x_{i'(j'+1)}\), and select the points \((x_{i'j'}, y_{i'j'}, dy_{i'j'})\) and \((x_{i'(j'+1)}, y_{i'(j'+1)}, dy_{i'(j'+1)})\). If there is no such \(j'\) for some \(i'\), no points from that system size are selected. If there is no such \(j'\) for any \(i'\), the master curve remains undefined at \(x_{ij}\).
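The selection procedure can be sketched as follows (an illustrative helper, not the package API; each row of `x` is assumed sorted in increasing order):

```python
import numpy as np

def select_points(x, y, dy, i, j):
    """Collect, from every other system size i', the two points enclosing x[i, j].

    x, y, dy are 2-D arrays of shape (k, n): rows are system sizes.
    Returns lists of the selected x_l, y_l, dy_l; they are empty when the
    master curve is undefined at x[i, j].
    """
    xs, ys, dys = [], [], []
    target = x[i, j]
    for ip in range(x.shape[0]):
        if ip == i:
            continue
        row = x[ip]
        # j' with row[j'] <= target <= row[j'+1], if any
        jp = np.searchsorted(row, target, side='right') - 1
        jp = min(jp, row.size - 2)   # allow target equal to the last point
        if jp < 0 or not (row[jp] <= target <= row[jp + 1]):
            continue                 # no enclosing pair for this i'
        xs.extend(row[jp:jp + 2])
        ys.extend(y[ip, jp:jp + 2])
        dys.extend(dy[ip, jp:jp + 2])
    return xs, ys, dys
```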

Given the selected points \((x_l, y_l, dy_l)\), the local approximation of the master curve is the linear fit

\[
y = mx + b
\]

with weighted least squares [Str11]. The weights \(w_l\) are the reciprocal variances, \(w_l := 1/dy_l^2\). The estimates and (co)variances of the slope \(m\) and intercept \(b\) are

\[
\hat{m} = \frac{K K_{xy} - K_x K_y}{\Delta}, \qquad
\hat{b} = \frac{K_{xx} K_y - K_x K_{xy}}{\Delta},
\]

\[
\hat{\sigma}_m^2 = \frac{K}{\Delta}, \qquad
\hat{\sigma}_b^2 = \frac{K_{xx}}{\Delta}, \qquad
\hat{\sigma}_{mb} = -\frac{K_x}{\Delta},
\]

with \(K_{nm} := \sum w_l x_l^n y_l^m\), \(K := K_{00}\), \(K_x := K_{10}\), \(K_y := K_{01}\), \(K_{xx} := K_{20}\), \(K_{xy} := K_{11}\), \(\Delta := KK_{xx} - K_x^2\).
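With the \(K\)-sums just defined, a minimal weighted least-squares line fit reads (a sketch; the function and variable names are illustrative):

```python
import numpy as np

def wls_line(xl, yl, dyl):
    """Weighted least-squares fit y = m*x + b to points (x_l, y_l +/- dy_l)."""
    xl, yl = np.asarray(xl, float), np.asarray(yl, float)
    w = 1.0 / np.asarray(dyl, float) ** 2        # weights w_l = 1 / dy_l^2
    K, Kx, Ky = w.sum(), (w * xl).sum(), (w * yl).sum()
    Kxx, Kxy = (w * xl ** 2).sum(), (w * xl * yl).sum()
    Delta = K * Kxx - Kx ** 2
    m = (K * Kxy - Kx * Ky) / Delta              # slope estimate
    b = (Kxx * Ky - Kx * Kxy) / Delta            # intercept estimate
    var_m = K / Delta                            # variance of the slope
    var_b = Kxx / Delta                          # variance of the intercept
    cov_mb = -Kx / Delta                         # slope-intercept covariance
    return m, b, var_m, var_b, cov_mb
```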

Hence, the estimated value of the master curve at \(x_{ij}\) is

\[
Y_{ij} = \hat{m} x_{ij} + \hat{b},
\]

with error propagation

\[
dY_{ij}^2 = \hat{\sigma}_b^2 + 2 x_{ij} \hat{\sigma}_{mb} + x_{ij}^2 \hat{\sigma}_m^2.
\]
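A sketch of this prediction step, assuming the propagated variance \(dY^2 = \hat{\sigma}_b^2 + 2 x \hat{\sigma}_{mb} + x^2 \hat{\sigma}_m^2\) of the linear fit (names are illustrative):

```python
def predict(x_ij, m, b, var_m, var_b, cov_mb):
    """Master-curve estimate and its standard error at x_ij from a linear fit."""
    Y = m * x_ij + b                                      # Y_ij = m x + b
    dY2 = var_b + 2 * x_ij * cov_mb + x_ij ** 2 * var_m   # propagated variance
    return Y, dY2 ** 0.5
```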
## Refinement of the quality function

The fssa package further refines the quality function. The original sum involves only terms for which the master curve is defined. As the number of missing terms in general differs from system size to system size, the sum implicitly weights system sizes differently. This implicit weighting is unintended, especially for scalings with sparser coverage of the critical region at large system sizes.

To alleviate this, we modify the sum as follows:

\[
S = \frac{1}{k} \sum_{i=1}^{k} \frac{1}{\mathcal{N}_i} \sum_{j}
\frac{(y_{ij} - Y_{ij})^2}{dy_{ij}^2 + dY_{ij}^2},
\]

where the number of system sizes is \(k\) (as before), and \(\mathcal{N}_i\) is the number of terms for the \(i\)-th system size. By separately averaging over all available terms for each system size, and then averaging over all system sizes, the contributions of each system size have equal weight in the final sum.
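The per-size averaging can be sketched with NumPy's `nanmean`, marking undefined master-curve points as `NaN` (an illustrative convention, not necessarily the package's):

```python
import numpy as np

def quality_refined(y, dy, Y, dY):
    """Quality with equal weight per system size (illustrative sketch).

    All inputs are 2-D arrays of shape (k, n); NaN entries in Y mark
    points where the master curve is undefined.
    """
    terms = (y - Y) ** 2 / (dy ** 2 + dY ** 2)    # weighted square deviations
    # mean over the N_i defined terms of each system size, then over the k sizes
    return np.mean(np.nanmean(terms, axis=1))
```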

## Implementation in the fssa package

### Routines

| Routine | Description |
| --- | --- |
| `fssa.fssa.quality` | Quality of data collapse onto a master curve defined by the data |