In computer science, a weak ontology is an ontology that is not sufficiently rigorous to allow software to infer new facts without intervention by humans (the end users of the software system). In other words, it does not contain enough explicitly stated, machine-interpretable information to support automated inference.[1]
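The distinction can be illustrated with a brief sketch (the class names and relationships below are hypothetical, not drawn from the cited source): an ontology that states subclass axioms explicitly allows software to derive facts that were never asserted, whereas the same vocabulary reduced to a flat list of labels supports no such inference.

```python
# Hypothetical example: a small "strong" ontology stated as subclass axioms.
subclass_of = {
    "Dog": "Mammal",
    "Mammal": "Animal",
    "Animal": "LivingThing",
}

def inferred_superclasses(cls, axioms):
    """Follow subclass axioms transitively to derive unstated facts."""
    derived = []
    while cls in axioms:
        cls = axioms[cls]
        derived.append(cls)
    return derived

# "Dog is a LivingThing" was never asserted, yet it can be inferred.
print(inferred_superclasses("Dog", subclass_of))  # ['Mammal', 'Animal', 'LivingThing']

# The same terms as a weak ontology: a flat set of labels with no stated
# relationships, from which nothing new can be inferred without human help.
labels = {"Dog", "Mammal", "Animal", "LivingThing"}
```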
By this standard, which evolved as artificial intelligence methods became more sophisticated and computers were used to model decisions with high human impact, most databases use weak ontologies.
A weak ontology is adequate for many purposes, including education, where one teaches a set of distinctions and tries to instil in the student the ability to make those distinctions. Stronger ontologies tend to evolve only as the weaker ones prove deficient. This strengthening of ontologies over time parallels observations from folk taxonomy: as a society practices more labour specialization, it tends to become intolerant of confusions and mixed metaphors, and sorts them into formal professions or practices. Ultimately, these professions are expected to reason about such distinctions in common, with mathematics, especially statistics and logic, as the common ground.
On the World Wide Web, folksonomy in the form of tag schemas and typed links has tended to evolve slowly in a variety of forums, and then be standardized in schemes such as microformats as more and more forums agree. These weak ontology constructs become strong only in response to growing demand for a more powerful form of search than is possible with keywords alone, as sketched below.
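As a rough sketch of that limitation (the data, field names, and values below are hypothetical, loosely analogous to a microformat such as h-event): free-form tags support only keyword matching, whereas typed fields allow structured queries.

```python
from datetime import date

# Hypothetical folksonomy tagging: a flat set of free-form keywords.
tagged_event = {"tags": {"conference", "semantic-web", "berlin", "2024"}}
# Keyword search works, but a query such as "events after May 2024"
# cannot be expressed against an untyped tag set.
print("semantic-web" in tagged_event["tags"])    # True

# Hypothetical typed record, loosely analogous to a microformat.
typed_event = {
    "name": "Semantic Web Conference",
    "start": date(2024, 6, 3),
    "location": "Berlin",
}
# Typed fields make structured queries possible.
print(typed_event["start"] > date(2024, 5, 1))   # True
```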