Random feature

Random features (RF) are a technique used in machine learning to approximate kernel methods, introduced by Ali Rahimi and Benjamin Recht in their 2007 paper "Random Features for Large-Scale Kernel Machines"[1] and extended in their later work.[2][3] RF approximates a kernel function by Monte Carlo sampling: data points are mapped through a randomly sampled feature map whose inner products approximate the kernel in expectation. It is used for datasets that are too large for traditional kernel methods such as support vector machines, kernel ridge regression, and Gaussian process regression.
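
The following is a minimal sketch of the idea for the Gaussian (RBF) kernel, using random Fourier features of the kind described by Rahimi and Recht.[1] It assumes NumPy; the function name and parameters are illustrative rather than part of any library.

```python
import numpy as np

def random_fourier_features(X, n_features=500, gamma=1.0, seed=None):
    """Map X (n_samples, d) to random features whose inner products
    approximate the RBF kernel exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies are drawn from the Fourier transform of the RBF kernel,
    # a Gaussian with standard deviation sqrt(2 * gamma); phases are uniform.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Compare the approximate kernel Z @ Z.T with the exact RBF kernel.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, n_features=10000, gamma=0.5, seed=1)
approx = Z @ Z.T
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
exact = np.exp(-0.5 * sq_dists)
print(np.max(np.abs(approx - exact)))  # small when n_features is large
```

Because the feature map is explicit and finite-dimensional, a linear model trained on Z (for example, ordinary ridge regression) approximates the corresponding kernel machine at much lower cost on large datasets.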

  1. ^ Rahimi, Ali; Recht, Benjamin (2007). "Random Features for Large-Scale Kernel Machines". Advances in Neural Information Processing Systems. 20.
  2. ^ Rahimi, Ali; Recht, Benjamin (September 2008). "Uniform Approximation of Functions with Random Bases". 2008 46th Annual Allerton Conference on Communication, Control, and Computing. IEEE. doi:10.1109/allerton.2008.4797607.
  3. ^ Rahimi, Ali; Recht, Benjamin (2008). "Weighted Sums of Random Kitchen Sinks: Replacing Minimization with Randomization in Learning". Advances in Neural Information Processing Systems. 21. Curran Associates, Inc.