One of the beauties of wikis in general -- and Wikipedia in particular -- is that there is no rigid process that specifies how content evolves. The lack of a rigid process allows each of us to develop a personal style of editing. Some people create stubs, some fix typos, some facilitate discussions, and some bring articles to featured status. The community polices itself somewhat indirectly, through discussions and the creation of guidelines. In essence, it is the status quo that drives Wikipedia. The community gets involved only when there is a dispute; the bulk of editing happens without discussion, by a community that follows the status quo. Community discussions help clarify and crystallize the status quo, and occasionally steer it in a new direction. Policy is mostly descriptive.
Reading through this discussion, I find that the practice of using a bot to create pages is in keeping with the status quo. Hundreds of pages have already been created this way. Fritzpoll makes a strong case that using a bot to create an article is like using a can-opener to open cans: it is a tool that speeds up the task. Many editors use automated tools to speed up their work, and the wiki itself is an automated tool for creating web pages that we all use.
Bots are very powerful tools. Because they can make bad edits just as fast as good ones, we don't let just anyone use them. The community has created policies and procedures to keep bots from running amok, and we rely on bot operators to be responsible for the use of their bots. Fritzpoll has been extremely responsible in adapting the proposal so that the choice of "cans to be opened" is under the control of individuals or groups of individuals. Clearly, some work remains to optimize the criteria for selecting the places to be created, and I expect these criteria can and will be refined. With the adaptations already made to the proposal, I don't see any likelihood that a million or so articles will quickly flood Wikipedia. The bot has only been approved for a limited run. So far, all our status quo practices and policies have been followed.
There need to be strong, convincing reasons to put limits on editing that is in keeping with the status quo. The practice being limited has to be shown to be disruptive, contrary to the goals of the project, or detrimental to our core principles such as verifiability and transparency. A large majority of opinion in this discussion has been in favor of allowing the proposal. Nevertheless, I don't believe in percentages: a single objection showing a proposal to be disruptive, harmful, or detrimental to the project would outweigh any majority. The main objections raised can be roughly summarized as follows:
"New articles should be created by people, not bots". However, there is precedent for creating articles by bot, and the articles still remain years later. There seem to be positives and negatives to creating large numbers of new articles using a bot. I don't find any convincing argument for the need to ban bot created articles simply because they are created by a bot. On the contrary, I think there is a long standing consensus that all edits created by a bot are the creation of the person running the bot, and the responsibility of the person running the bot.
"Red-links inspire article creation more than stubs". I would have fully agreed with this statement when I joined Wikipedia in 2004. At that time every article was filled with red-links and many editors were driven to turn as many of them to blue as they could. Since then, red-links have become a source of embarrassment to many editors, who see them as pointing out the deficiencies of Wikipedia. I don't think anyone has made a conclusive argument either way. Certainly, to the ordinary user of Wikipedia, some information is better than nothing.
"Places are not inherently notable". I would say that this is the most convincing argument against running the bot. The community expends quite a bit of effort arguing about notability. It seems to have different meanings to different people. It is important to have standards for notability so that Wikipedia does not devolve into facebook. In this regard, notability is a way that we judge the verifiability of information. I don't think anyone is proposing creating articles about places who's existence is unverifiable. I find that Fritzpoll with the input of the larger community has made a concerted effort to limit the bot to creating articles of places that the community deems notable. With these limitations, the notability argument is moot.
"This will set a bad precedent that will cause a massive increase in the number of articles". I don't see convincing evidence that such an increase would be a bad thing. Some people see it as good, some do not. Since everything is undo-able, and the bot is going to ramp up slowly, there is no chance of this being an irreversible problem if it is found to have a negative effect. If anything, there is an argument to be made that the community input that happened here is a good precedent for any future project that would involve a bot creating a large number of articles.
"These pages should be created in a new separate project -- a Wikiatlas". I am not convinced that this would make Wikipedia any better. Users commonly click on place names in articles to find out where they are. Putting all these places in a different project would just make it more difficult for our users to find the information they are looking for.
I don't find any of these arguments convincing enough to override our established guidelines and practices. All things considered, I find that there is a consensus for going forward with the proposal. I commend everyone involved for working to create a proposal that takes the concerns of the community into account. -- ☑ SamuelWantman 00:06, 10 June 2008 (UTC)
New proposal formulated -- old discussion archived to talk page.