In regression analysis, an interval predictor model (IPM) is an approach to regression in which bounds on the function to be approximated are obtained. This differs from most other machine-learning techniques, which usually estimate either point values or an entire probability distribution. Interval predictor models are sometimes described as a nonparametric regression technique, because a potentially infinite set of functions is contained by the IPM and no specific distribution is assumed for the regressed variables.
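For illustration, the sketch below shows one common formulation: a linear-in-parameters IPM whose lower and upper bounding functions are fitted by linear programming so that the average interval width is minimised while every observation is enclosed. The polynomial basis, variable names and synthetic data are illustrative assumptions, not taken from any specific reference implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic data standing in for observations of an unknown function.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.0, 1.0, 60))
y = np.sin(2.5 * x) + 0.15 * rng.normal(size=x.size)

# Linear-in-parameters model with a polynomial basis (illustrative choice).
Phi = np.vander(x, 4, increasing=True)            # columns 1, x, x^2, x^3
N, d = Phi.shape

# Decision variables z = [a, b]: coefficients of the lower and upper bounds.
# Minimise the average width  mean_i Phi_i @ (b - a)
# subject to  Phi_i @ a <= y_i <= Phi_i @ b  for every data point i.
c = np.concatenate([-Phi.mean(axis=0), Phi.mean(axis=0)])
A_ub = np.block([[Phi, np.zeros((N, d))],          # Phi @ a <= y
                 [np.zeros((N, d)), -Phi]])        # -Phi @ b <= -y
b_ub = np.concatenate([y, -y])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * d))
a_hat, b_hat = res.x[:d], res.x[d:]

# The fitted IPM maps each x to the interval [Phi(x) @ a_hat, Phi(x) @ b_hat].
lower, upper = Phi @ a_hat, Phi @ b_hat
print("all observations enclosed:",
      bool(np.all((lower <= y + 1e-8) & (y <= upper + 1e-8))))
print("average interval width:", float((upper - lower).mean()))
```

In this formulation the tightness of the predicted interval is governed by the data and the chosen basis; richer bases generally yield tighter but more complex bounds.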
Multiple-input multiple-output IPMs for the multi-point data commonly used to represent functions have recently been developed.[1] These IPMs prescribe the parameters of the model as a path-connected, semi-algebraic set using sliced-normal[2] or sliced-exponential distributions.[3] A key advantage of this approach is its ability to characterize complex parameter dependencies at varying levels of fidelity, which enables the analyst to adjust the desired level of conservatism in the prediction.
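A schematic sketch of the sliced-normal idea follows, under illustrative assumptions: points are lifted to degree-two monomials, a Gaussian is fitted in the lifted space, and a sublevel set of the resulting quadratic form defines a semi-algebraic set in the original space. The data, polynomial degree and coverage threshold are hypothetical, and the full construction in [2] involves additional details.

```python
import numpy as np

# Hypothetical two-dimensional samples with a nonlinear dependency.
rng = np.random.default_rng(1)
t = rng.normal(size=500)
samples = np.column_stack([t, t**2 + 0.1 * rng.normal(size=t.size)])

def lift(delta):
    """Degree-2 monomials of a 2-D point; higher degrees raise the fidelity."""
    d1, d2 = delta[:, 0], delta[:, 1]
    return np.column_stack([d1, d2, d1**2, d1 * d2, d2**2])

# Fit a Gaussian in the lifted (monomial) space.
Z = lift(samples)
mu = Z.mean(axis=0)
Sigma = np.cov(Z, rowvar=False) + 1e-8 * np.eye(Z.shape[1])   # regularised
Sigma_inv = np.linalg.inv(Sigma)

def mahalanobis_sq(delta):
    """Squared Mahalanobis distance of lifted points; a polynomial in delta."""
    diff = lift(delta) - mu
    return np.einsum("ij,jk,ik->i", diff, Sigma_inv, diff)

# The sublevel set {delta : mahalanobis_sq(delta) <= gamma} is semi-algebraic;
# the threshold gamma controls how conservative the enclosure is.
gamma = np.quantile(mahalanobis_sq(samples), 0.95)
inside = mahalanobis_sq(samples) <= gamma
print("fraction of samples inside the set:", float(inside.mean()))
```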
As a consequence of the theory of scenario optimization, in many cases rigorous predictions can be made regarding the performance of the model at test time.[4] Hence an interval predictor model can be seen as a guaranteed bound on quantile regression. Interval predictor models can also be viewed as a way to prescribe the support of random predictor models, of which a Gaussian process is a specific case.[5]
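As an illustration of the kind of guarantee scenario theory provides, the sketch below evaluates the standard bound for convex scenario programs: with N independent data points and d decision variables, the probability that the fitted interval fails to contain a new point with probability greater than eps is itself at most the binomial tail sum over i = 0, ..., d - 1 of C(N, i) eps^i (1 - eps)^(N - i). The function name and the numbers in the example are illustrative.

```python
from scipy.stats import binom
from scipy.optimize import brentq

def scenario_epsilon(N, d, beta=1e-6):
    """Smallest violation level eps such that, with confidence at least 1 - beta,
    a convex scenario program with d decision variables fitted to N i.i.d.
    samples has violation probability at most eps."""
    # The confidence complement sum_{i=0}^{d-1} C(N, i) eps^i (1 - eps)^(N - i)
    # equals the binomial CDF at d - 1; solve for the eps where it equals beta.
    return brentq(lambda eps: binom.cdf(d - 1, N, eps) - beta, 1e-12, 1 - 1e-12)

# Example: an interval model with 8 bound coefficients fitted to 1000 data points.
eps = scenario_epsilon(N=1000, d=8, beta=1e-6)
print(f"P(new point falls outside the interval) <= {eps:.3f}, "
      f"with confidence at least 1 - 1e-6")
```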