In computer vision and computer graphics, the 3D Morphable Model (3DMM) is a generative technique that uses methods of statistical shape analysis to model 3D objects. The model follows an analysis-by-synthesis approach over a dataset of 3D example shapes of a single class of objects (e.g., face, hand). The main prerequisite is that all the 3D shapes are in dense point-to-point correspondence, meaning that each point has the same semantic meaning across all the shapes. In this way, meaningful statistics can be extracted from the dataset and used to represent new plausible shapes of the object's class. Given a 2D image, the corresponding 3D shape can be recovered via a fitting process, and novel shapes can be generated by directly sampling from the statistical shape distribution of that class.[1]
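The statistical model described above is commonly built with principal component analysis (PCA): the mean shape and the principal directions of variation are computed from the corresponded example shapes, and new shapes are generated by sampling coefficients along those directions. The following sketch illustrates this idea with NumPy; the dataset here is random placeholder data, and all variable names are illustrative rather than part of any specific 3DMM implementation.

```python
import numpy as np

# Placeholder dataset: n_shapes example shapes, each with n_points 3D
# vertices in dense point-to-point correspondence (the same vertex index
# has the same semantic meaning in every shape).
rng = np.random.default_rng(0)
n_shapes, n_points = 50, 100
shapes = rng.normal(size=(n_shapes, 3 * n_points))  # rows: flattened (x, y, z, ...)

# Mean shape and PCA of the deviations from the mean.
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt                       # principal shape-variation directions
sigmas = s / np.sqrt(n_shapes - 1)    # standard deviation along each direction

# Sample a new plausible shape from the learned (Gaussian) distribution:
# draw a coefficient per component, scaled by that component's spread.
coeffs = rng.normal(size=sigmas.shape) * sigmas
new_shape = mean_shape + coeffs @ components
new_vertices = new_shape.reshape(n_points, 3)
```

Fitting a 3DMM to a 2D image works in the opposite direction: the coefficients are optimized so that the rendered model matches the image, rather than being sampled at random.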
The question that initiated the research on 3DMMs was how a visual system could handle the vast variety of images produced by a single class of objects, and how these images could be represented. The primary assumption in developing 3DMMs was that prior knowledge about object classes is crucial in vision. 3D face morphable models are the most popular 3DMMs, as they were the first to be developed, in the field of facial recognition.[2] The technique has also been applied to the whole human body,[3] the hand,[4] the ear,[5] cars,[6] and animals.[7]