Smoothing

Figure: simple exponential smoothing example. Raw data: mean daily temperatures at the Paris-Montsouris weather station (France) from 1960/01/01 to 1960/02/29; smoothed data computed with smoothing factor α = 0.1.
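
The smoothed curve in the figure follows the simple exponential smoothing recurrence s_t = α·x_t + (1 − α)·s_{t−1}. The short Python sketch below applies that recurrence with α = 0.1; the temperature values are illustrative placeholders, not the actual Montsouris measurements.

```python
# Minimal sketch of simple exponential smoothing: s_t = alpha*x_t + (1 - alpha)*s_{t-1}.
# The input values are illustrative placeholders, not the Montsouris data.

def exponential_smoothing(data, alpha=0.1):
    """Return the exponentially smoothed series for `data` with smoothing factor `alpha`."""
    smoothed = [data[0]]                       # initialise with the first observation
    for x in data[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

raw = [3.2, 4.1, 2.8, 5.0, 4.4, 3.9, 6.1]      # placeholder daily mean temperatures (°C)
print(exponential_smoothing(raw, alpha=0.1))
```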

In statistics and image processing, to smooth a data set is to create an approximating function that attempts to capture important patterns in the data while leaving out noise or other fine-scale structures and rapid phenomena. In smoothing, the data points of a signal are modified so that individual points that are higher than the adjacent points (presumably because of noise) are reduced, and points that are lower than the adjacent points are increased, leading to a smoother signal. Smoothing can aid data analysis in two important ways: (1) it can extract more information from the data, as long as the assumption underlying the smoothing is reasonable, and (2) it can provide analyses that are both flexible and robust.[1] Many different algorithms are used in smoothing.
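
As one concrete illustration of this behaviour, the sketch below uses a three-point moving average, one of the simplest smoothing algorithms: a point that sits above its neighbours (presumably because of noise) is pulled down toward them, and a point that sits below its neighbours is pulled up. The data are made-up example values.

```python
# Three-point centred moving average: each point is replaced by the mean of itself
# and its neighbours (a shorter window is used at the ends), damping isolated
# spikes and dips in the signal.
def moving_average(data, window=3):
    half = window // 2
    smoothed = []
    for i in range(len(data)):
        lo, hi = max(0, i - half), min(len(data), i + half + 1)
        smoothed.append(sum(data[lo:hi]) / (hi - lo))
    return smoothed

noisy = [1.0, 1.1, 3.0, 1.2, 1.0, -0.8, 1.1]   # spike at index 2, dip at index 5
print(moving_average(noisy))                   # the spike is reduced, the dip is raised
```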

Smoothing may be distinguished from the related and partially overlapping concept of curve fitting in the following ways:

  • curve fitting often involves the use of an explicit functional form for the result, whereas the immediate result of smoothing is the set of "smoothed" values, with no later use made of a functional form even if one exists;
  • the aim of smoothing is to give a general idea of relatively slow changes of value, with little attention paid to the close matching of data values, while curve fitting concentrates on achieving as close a match as possible;
  • smoothing methods often have an associated tuning parameter, which is used to control the extent of smoothing, while curve fitting will adjust any number of parameters of the function to obtain the 'best' fit (see the sketch after this list).
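
A minimal sketch of this contrast, assuming NumPy is available and using made-up example data: the moving-average smoother is controlled by a single tuning parameter (the window width) and only returns smoothed values, while polynomial curve fitting with numpy.polyfit returns an explicit functional form whose parameters are adjusted to match the data as closely as possible.

```python
import numpy as np

x = np.arange(10, dtype=float)
y = np.array([0.0, 0.8, 1.7, 3.4, 3.9, 5.2, 5.8, 7.3, 7.9, 9.1])  # illustrative data

# Smoothing: a moving average controlled by one tuning parameter (the window width).
# The output is just another sequence of values; no functional form is produced.
window = 3
smoothed = np.convolve(y, np.ones(window) / window, mode="same")

# Curve fitting: an explicit functional form (here a straight line) whose parameters
# are adjusted to match the data values as closely as possible (least squares).
slope, intercept = np.polyfit(x, y, deg=1)

print(smoothed)             # smoothed values only
print(slope, intercept)     # explicit model: y ≈ slope * x + intercept
```
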
  1. ^ Simonoff, Jeffrey S. (1998). Smoothing Methods in Statistics (2nd ed.). Springer. ISBN 978-0387947167.