Solow Test

The Solow test, described by Solow (1987), is a statistical test for detecting a slope change in a time series. The test divides the available data into two phases at some point in time, and finds the line of best fit in each phase, with the restriction that the two lines of best fit must meet at the dividing point.

The test searches for the dividing point that produces the lowest overall sum of squared errors between the data points and the best-fit curve, and then checks whether that two-phase fit is significantly better than a single-phase fit. If so, it concludes that a statistically significant slope change occurs at that dividing point.
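The sketch below illustrates this procedure in Python, assuming numpy and scipy are available. The function name solow_test, the exhaustive search over interior dividing points, and the simple F-test are illustrative assumptions, not Windographer's implementation; in particular, the F-test shown here does not account for the fact that the dividing point was chosen by the search, so it tends to overstate significance.

```python
# Minimal sketch of a continuous two-phase (piecewise linear) fit and a naive
# significance check. Illustrative only; not Windographer's implementation.
import numpy as np
from scipy import stats

def solow_test(t, y):
    """Search all interior dividing points for the continuous two-phase
    linear fit with the lowest sum of squared errors, then compare it to a
    single straight-line fit with a simple F-test."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(t)

    # Single-phase fit: y = a + b*t
    X1 = np.column_stack([np.ones(n), t])
    sse1 = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)

    best = None
    for i in range(2, n - 2):            # candidate dividing points
        c = t[i]
        # Continuous two-phase model: y = a + b*t + d*max(t - c, 0).
        # The max() term adds a slope change at t = c while forcing the two
        # line segments to meet at the dividing point.
        X2 = np.column_stack([np.ones(n), t, np.maximum(t - c, 0.0)])
        beta, _, _, _ = np.linalg.lstsq(X2, y, rcond=None)
        sse2 = np.sum((y - X2 @ beta) ** 2)
        if best is None or sse2 < best[1]:
            best = (c, sse2, beta)

    c, sse2, beta = best
    # Naive F-test for the extra slope parameter (ignores the breakpoint
    # search, so the p-value is only indicative).
    f_stat = (sse1 - sse2) / (sse2 / (n - 3))
    p_value = stats.f.sf(f_stat, 1, n - 3)
    return c, beta, f_stat, p_value
```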

The example below shows the result of a Solow test applied to the annual time series of global mean temperature between 1880 and 2017. The total sum of squared errors is minimized when the dividing point is the year 1974, making that the most likely point of discontinuity. In this example the significance of the change is near 100%, meaning the test is nearly certain that a slope change occurred near the year 1974.
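As a purely hypothetical illustration of this kind of analysis, the snippet below runs the sketch above on synthetic annual data with a built-in slope change in 1974; it is not the observed temperature series and its output is not the result described above.

```python
# Hypothetical usage on synthetic data with an artificial slope change in 1974.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2018)
signal = 0.002 * (years - 1880) + 0.02 * np.maximum(years - 1974, 0)
series = signal + rng.normal(scale=0.1, size=years.size)

c, beta, f_stat, p_value = solow_test(years, series)
print(f"Dividing point: {c:.0f}, p-value: {p_value:.3g}")
```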

Tip: The Easterling-Peterson test is a similar test for discontinuity that does not require the lines of best fit to meet.

Windographer implements the Solow test in the Long Term Trends window.

See also

Easterling-Peterson test

Long Term Trends window


Written by: Tom Lambert
Contact: windographer.support@ul.com
Last modified: November 9, 2017