Easterling-Peterson Test

The Easterling-Peterson test, described by Easterling and Peterson (1995), is a statistical test for detecting a discontinuity in a time series. The test divides the available data into two phases at some point in time and finds the line of best fit in each phase. It does not require the two lines of best fit to meet at the dividing point.
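The Python sketch below illustrates the two-phase fit at a given dividing point. The function name two_phase_sse and the array arguments are illustrative only; they are not part of Windographer or of the original paper.

import numpy as np

def two_phase_sse(t, y, k):
    """Fit an independent least-squares line to each phase, splitting
    the data after index k, and return the combined sum of squared
    errors. The two lines are not forced to meet at the split."""
    sse = 0.0
    for ts, ys in ((t[:k], y[:k]), (t[k:], y[k:])):
        slope, intercept = np.polyfit(ts, ys, 1)  # line of best fit
        residuals = ys - (slope * ts + intercept)
        sse += np.sum(residuals ** 2)
    return sse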

The test searches for the dividing point that produces the lowest overall sum of squared errors between the data points and the two best-fit lines, then checks whether that two-phase fit is significantly better than a single-phase fit. If it is, the test concludes that a statistically significant discontinuity occurs at that dividing point. It tests separately for the significance of the change in slope and of the change in mean value.
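Continuing the sketch above, the following example searches every admissible dividing point for the lowest combined sum of squared errors and then compares the winning two-phase fit against a single-line fit. Note two assumptions: a single nested-model F-test stands in for the separate slope and mean tests described above, and the significance calculation ignores the selection effect of searching over dividing points, so this is an approximation of the idea rather than the exact Easterling-Peterson statistic.

import numpy as np
from scipy import stats

def find_discontinuity(t, y, min_phase=3):
    """Return the most likely dividing point and an approximate
    significance for the two-phase fit versus a one-phase fit."""
    n = len(t)
    # Single-phase (null) fit: one straight line through all the data.
    slope, intercept = np.polyfit(t, y, 1)
    sse1 = np.sum((y - (slope * t + intercept)) ** 2)
    # Try every split that leaves at least min_phase points per phase,
    # keeping the one with the lowest combined sum of squared errors.
    k_best, sse2 = min(
        ((k, two_phase_sse(t, y, k))
         for k in range(min_phase, n - min_phase + 1)),
        key=lambda kv: kv[1],
    )
    # Nested-model F-test: the two-phase fit has two extra parameters
    # (a second slope and a second intercept), so dfn=2 and dfd=n-4.
    f = ((sse1 - sse2) / 2) / (sse2 / (n - 4))
    p_value = stats.f.sf(f, 2, n - 4)
    return t[k_best], 1 - p_value  # dividing point, fit significance

Calling find_discontinuity on a series with a clear break returns the first point of the second phase as the dividing point, with a significance near 1.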

The example below shows the result of an Easterling-Peterson test applied to the annual time series of global mean temperature between 1880 and 2017. The total sum of squared errors is minimized when the dividing point is the year 1963, making that the most likely point of discontinuity. In this example the significance of both the change in slope and the change in mean value is near 100%, meaning it is nearly certain that a discontinuity occurred around the year 1963.

Tip: The Solow test is a similar test for discontinuity, but it requires that the lines of best fit meet at the discontinuity point.

Windographer implements the Easterling-Peterson test in the Long Term Trends window.

See also

Solow test

Long Term Trends window


Written by: Tom Lambert
Contact: windographer.support@ul.com
Last modified: November 9, 2017