Structured sparsity regularization is a class of methods, and an area of research in statistical learning theory, that extend and generalize sparsity regularization learning methods.[1] Both sparsity and structured sparsity regularization methods seek to exploit the assumption that the output variable (i.e., response, or dependent variable) to be learned can be described by a reduced number of variables in the input space (i.e., the domain, space of features, or explanatory variables). Sparsity regularization methods focus on selecting the input variables that best describe the output. Structured sparsity regularization methods generalize and extend sparsity regularization methods by allowing for optimal selection over structures such as groups or networks of input variables.[2][3]
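As an illustration of group-level selection, the sketch below implements block soft-thresholding, the proximal operator of the group lasso penalty (the sum over groups of the Euclidean norms of each group's coefficients). This is a minimal example, not taken from any cited reference: the function name, the example coefficient vector, and the grouping are all illustrative. Groups whose norm falls below the regularization level are zeroed out as a whole block, which is what produces selection over groups rather than over individual variables.

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal operator of the group lasso penalty lam * sum_g ||w_g||_2.

    Each group of coefficients is shrunk toward zero as a block; a group
    whose Euclidean norm is at most lam is removed entirely, yielding
    group-level (structured) sparsity instead of entrywise sparsity.
    """
    w = np.asarray(w, dtype=float)
    out = np.zeros_like(w)
    for idx in groups:
        block = w[idx]
        norm = np.linalg.norm(block)
        if norm > lam:
            # Scale the whole group by the same shrinkage factor.
            out[idx] = (1.0 - lam / norm) * block
    return out

# Illustrative data: two groups of two coefficients each. The second
# group's norm (~0.22) is below lam = 1.0, so it is zeroed as a block,
# while the first group is only shrunk.
w = np.array([3.0, 4.0, 0.1, 0.2])
groups = [[0, 1], [2, 3]]
print(group_soft_threshold(w, groups, lam=1.0))  # → [2.4 3.2 0.  0. ]
```

In proximal-gradient algorithms for group lasso, this operator is applied after each gradient step on the data-fitting term; replacing the per-group norm with an ordinary elementwise absolute value recovers the standard (unstructured) lasso soft-thresholding.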
Common motivations for the use of structured sparsity methods are model interpretability, high-dimensional learning (where the dimensionality of the input space may be higher than the number of observations), and reduction of computational complexity.[4] Moreover, structured sparsity methods allow the incorporation of prior assumptions on the structure of the input variables, such as overlapping groups,[2] non-overlapping groups, and acyclic graphs.[3] Examples of uses of structured sparsity methods include face recognition,[5] magnetic resonance imaging (MRI) processing,[6] socio-linguistic analysis in natural language processing,[7] and analysis of genetic expression in breast cancer.[8]