In cryptography, learning with errors (LWE) is a mathematical problem that is widely used to create secure encryption algorithms.[1] It is based on the idea of representing secret information as a set of equations with errors. In other words, LWE is a way to hide the value of a secret by introducing noise to it.[2] In more technical terms, it refers to the computational problem of inferring a linear $n$-ary function $f$ over a finite ring from given samples $y_i = f(\mathbf{x}_i)$, some of which may be erroneous. The LWE problem is conjectured to be hard to solve,[1] and therefore to be useful in cryptography.
More precisely, the LWE problem is defined as follows. Let $\mathbb{Z}_q$ denote the ring of integers modulo $q$ and let $\mathbb{Z}_q^n$ denote the set of $n$-vectors over $\mathbb{Z}_q$. There exists a certain unknown linear function $f : \mathbb{Z}_q^n \to \mathbb{Z}_q$, and the input to the LWE problem is a sample of pairs $(\mathbf{x}, y)$, where $\mathbf{x} \in \mathbb{Z}_q^n$ and $y \in \mathbb{Z}_q$, so that with high probability $y = f(\mathbf{x})$. Furthermore, the deviation from the equality is according to some known noise model. The problem calls for finding the function $f$, or some close approximation thereof, with high probability.
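As a rough illustration of this definition, the following Python sketch generates LWE samples for a secret vector $s$, where $f(\mathbf{x}) = \langle \mathbf{x}, s \rangle \bmod q$. The specific parameters ($q = 97$, $n = 4$, $m = 8$) and the toy noise model (errors drawn uniformly from $\{-1, 0, 1\}$) are illustrative assumptions only; practical instantiations use much larger parameters and typically draw the errors from a discrete Gaussian distribution.

```python
# Minimal illustrative sketch of LWE sample generation (not a secure implementation).
# Assumed toy parameters: q = 97, n = 4, m = 8, noise uniform on {-1, 0, 1}.
import numpy as np

rng = np.random.default_rng(0)

q = 97   # modulus: all arithmetic is over the ring Z_q
n = 4    # dimension of the secret vector s
m = 8    # number of samples (x, y)

# The unknown linear function is f(x) = <x, s> mod q for a secret s in Z_q^n.
s = rng.integers(0, q, size=n)

# Each sample is a pair (x, y) with y = <x, s> + e (mod q),
# where e is a small error drawn from the noise model.
X = rng.integers(0, q, size=(m, n))
e = rng.integers(-1, 2, size=m)   # toy noise: uniform on {-1, 0, 1}
y = (X @ s + e) % q

print("secret s:", s)
for xi, yi in zip(X, y):
    print("sample x:", xi, "-> y:", int(yi))

# Without the errors e, the secret s could be recovered from n independent
# samples by Gaussian elimination over Z_q; the small errors are what is
# conjectured to make recovering s (or approximating f) hard.
```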
1. Oded Regev, “On lattices, learning with errors, random linear codes, and cryptography,” in Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing (Baltimore, MD, USA: ACM, 2005), 84–93, http://portal.acm.org/citation.cfm?id=1060590.1060603.
2. Chris Peikert, “Public-key cryptosystems from the worst-case shortest vector problem: extended abstract,” in Proceedings of the 41st Annual ACM Symposium on Theory of Computing (Bethesda, MD, USA: ACM, 2009), 333–342, http://portal.acm.org/citation.cfm?id=1536414.1536461.