Common knowledge is a special kind of knowledge for a group of agents. There is common knowledge of p in a group of agents G when all the agents in G know p, they all know that they know p, they all know that they all know that they know p, and so on ad infinitum.[1] It can be denoted as $C_G p$.
The concept was first introduced in the philosophical literature by David Kellogg Lewis in his study Convention (1969). The sociologist Morris Friedell defined common knowledge in a 1969 paper.[2] It was first given a mathematical formulation in a set-theoretical framework by Robert Aumann (1976). Computer scientists took an interest in epistemic logic in general – and in common knowledge in particular – starting in the 1980s.[1] There are numerous puzzles based upon the concept which have been extensively investigated by mathematicians such as John Conway.[3]
The philosopher Stephen Schiffer, in his 1972 book Meaning, independently developed a notion he called "mutual knowledge" ($E_G p$), which functions quite similarly to Lewis's and Friedell's 1969 "common knowledge".[4] If a trustworthy announcement is made in public, then it becomes common knowledge; however, if it is transmitted to each agent in private, it becomes mutual knowledge but not common knowledge. Even if the fact that "every agent in the group knows p" ($E_G p$) is transmitted to each agent in private, it is still not common knowledge: $E_G E_G p \not\Rightarrow C_G p$. But, if any agent $i$ publicly announces their knowledge of p, then it becomes common knowledge that they know p (viz. $C_G K_i p$). If every agent publicly announces their knowledge of p, p becomes common knowledge: $C_G E_G p \Rightarrow C_G p$.
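The gap between "everyone knows p" and common knowledge can be made concrete in a small possible-worlds (Kripke) model. The sketch below is an illustrative assumption, not drawn from Lewis, Friedell, or Aumann: two hypothetical agents each have an equivalence partition over four worlds, an agent knows a fact at a world if the fact holds throughout their cell, "everyone knows" intersects the agents' knowledge, and common knowledge is the fixed point of iterating that operator.

```python
# Illustrative sketch (names, worlds, and partitions are assumptions,
# not from the text). An agent knows `fact` at world w if `fact` holds
# at every world in the agent's partition cell containing w.

def cell(partition, w):
    """The block of `partition` containing world w."""
    return next(block for block in partition if w in block)

def K(partition, fact, worlds):
    """Worlds where the agent with this partition knows `fact`."""
    return {w for w in worlds if cell(partition, w) <= fact}

def E(partitions, fact, worlds):
    """'Everyone knows': worlds where every agent knows `fact`."""
    result = set(worlds)
    for partition in partitions:
        result &= K(partition, fact, worlds)
    return result

def C(partitions, fact, worlds):
    """Common knowledge: iterate E to a fixed point. On a finite model
    with reflexive (equivalence) relations the sequence E p, E E p, ...
    is decreasing, so this terminates."""
    current = set(fact)
    while True:
        nxt = E(partitions, current, worlds)
        if nxt == current:
            return nxt
        current = nxt

worlds = {0, 1, 2, 3}
p = {0, 1, 2}              # p holds everywhere except world 3
alice = [{0, 1}, {2, 3}]   # Alice cannot tell 0 from 1, nor 2 from 3
bob = [{0}, {1, 2}, {3}]   # Bob cannot tell 1 from 2

print(E([alice, bob], p, worlds))  # {0, 1}: at world 0 everyone knows p
print(C([alice, bob], p, worlds))  # set(): p is common knowledge nowhere
```

At world 0 both agents know p, and indeed both know that both know p, yet one more iteration of E fails, so common knowledge never obtains anywhere in this model.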