Bradford's law is a pattern first described by Samuel C. Bradford in 1934 that estimates the exponentially diminishing returns of searching for references in science journals. One formulation is that if journals in a field are sorted by number of articles into three groups, each with about one-third of all articles, then the number of journals in each group will be proportional to 1 : n : n².[1] There are a number of related formulations of the principle.
In many disciplines, this pattern is called a Pareto distribution. As a practical example, suppose that a researcher has five core scientific journals for his or her subject. Suppose that in a month there are 12 articles of interest in those journals. Suppose further that in order to find another dozen articles of interest, the researcher would have to go to an additional 10 journals. Then that researcher's Bradford multiplier bm is 2 (i.e. 10/5). For each new dozen articles, that researcher will need to look in bm times as many journals. After looking in 5, 10, 20, 40, etc. journals, most researchers quickly realize that there is little point in looking further.
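The zoning described above can be sketched in code. The following is a minimal illustration (not from the article) that partitions a set of journals, sorted by productivity, into three zones of roughly equal article yield and reports how many journals each zone needs; the journal counts are synthetic, chosen to match the worked example's multiplier of bm = 2.

```python
# Minimal sketch of Bradford's 1 : n : n^2 zoning on synthetic data.
# The article counts below are invented for illustration.

def bradford_zones(article_counts, zones=3):
    """Partition journals (sorted by productivity, most productive first)
    into `zones` groups, each holding roughly an equal share of all
    articles; return the number of journals in each zone."""
    counts = sorted(article_counts, reverse=True)
    target = sum(counts) / zones      # articles each zone should contribute
    sizes, acc, n = [], 0, 0
    for c in counts:
        acc += c
        n += 1
        if acc >= target and len(sizes) < zones - 1:
            sizes.append(n)           # close this zone, start the next
            acc, n = 0, 0
    sizes.append(n)                   # remaining journals form the last zone
    return sizes

# 5 core journals yield as many articles as the next 10, which yield
# as many as the next 20 -- i.e. a Bradford multiplier bm of 2.
counts = [24] * 5 + [12] * 10 + [6] * 20
print(bradford_zones(counts))  # -> [5, 10, 20], growing as 1 : bm : bm^2
```

Each successive zone requires bm times as many journals for the same number of relevant articles, which is why searching beyond the first few zones quickly stops paying off.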
Different researchers have different numbers of core journals and different Bradford multipliers. But the pattern holds quite well across many subjects, and may be a general pattern for human interactions in social systems. Like Zipf's law, to which it is related, it has no good theoretical explanation, but knowing that it holds is very useful for librarians. It means that for each specialty it is sufficient to identify the "core publications" for that field and stock only those; researchers will rarely need to go outside that set.
However, its impact has been far greater than that. Armed with this idea and inspired by Vannevar Bush's famous article As We May Think, Eugene Garfield at the Institute for Scientific Information in the 1960s developed a comprehensive index of how scientific thinking propagates. His Science Citation Index (SCI) made it easy to identify exactly which scientists did science that had an impact, and which journals that science appeared in. It also led to the unexpected discovery that a few journals, such as Nature and Science, were core for all of hard science. The same pattern does not appear in the humanities or the social sciences.
The result is pressure on scientists to publish in the best journals, and pressure on universities to ensure access to that core set of journals. On the other hand, the set of "core journals" may vary considerably among individual researchers, and even more strongly along schools-of-thought divides. There is also a danger of over-representing majority views if journals are selected in this fashion.