Generic-case complexity is a subfield of computational complexity theory that studies the complexity of computational problems on "most inputs".
It measures the complexity of a computational problem by neglecting a small set of unrepresentative inputs and considering worst-case complexity on the rest, where "small" is defined in terms of asymptotic density. Generic-case complexity appears effective because, for a wide variety of concrete computational problems, the most difficult instances seem to be rare, while typical instances are relatively easy.
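The notion of "small" can be made precise via asymptotic density; the following is one standard formulation (the notation is illustrative rather than quoted from the cited papers). If $I$ is the set of all possible inputs and $B_n \subseteq I$ is the set of inputs of size at most $n$, the asymptotic density of a subset $X \subseteq I$ is

$$\rho(X) \;=\; \lim_{n \to \infty} \frac{|X \cap B_n|}{|B_n|},$$

provided the limit exists. A set $X$ is called generic if $\rho(X) = 1$ and negligible if $\rho(X) = 0$; generic-case complexity then refers to the worst-case behavior of an algorithm restricted to a generic set of inputs.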
This approach to complexity originated in combinatorial group theory, which has a computational tradition going back to the beginning of the last century. The notion of generic complexity was introduced in a 2003 paper,[1] in which the authors showed that, for a large class of finitely generated groups, the generic-case time complexity of several classical decision problems from combinatorial group theory, namely the word problem, the conjugacy problem, and the membership problem, is linear.
A detailed introduction to generic-case complexity can be found in the surveys.[2][3]