Wikipedia:Automated moderation

Automated moderation on Wikipedia is the use of Wikipedia:Bots to promote good behavior in our shared wiki environment.

From 2001 to 2019, Wikipedia's bots mostly executed simple commands as directed by humans. Most of these commands were editorial, focused on Wikipedia's content; bot operators less often directed bots to intervene in human conduct.

With advances in data science, and nearly 20 years of data on human activity in Wikipedia, it has become possible for bots to patrol Wikipedia and detect various kinds of misconduct. For example, with thousands of examples of humans applying the Wikipedia:Blocking policy to user misconduct, bots can use machine learning to identify patterns of misconduct in past cases and then apply what they learned to situations which humans have not yet evaluated. This is "automated moderation".
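As an illustration only, not a description of any actual Wikipedia bot, a minimal sketch of this approach might look like the following. The file name, feature columns, and label are all hypothetical placeholders for the kind of hand-labelled block-log data described above.

<syntaxhighlight lang="python">
# Sketch only: trains a simple classifier on hypothetical labelled data
# about past blocks, then reports how well it generalizes to held-out cases.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical per-user activity features; real bots would derive
# their own features from edit histories and logs.
FEATURES = ["edits_reverted", "warnings_received", "edit_rate_per_hour"]

# "block_log_examples.csv" is a hypothetical dataset of past cases,
# labelled 1 where admins applied the blocking policy and 0 otherwise.
data = pd.read_csv("block_log_examples.csv")
X = data[FEATURES]
y = data["was_blocked"]

# Hold out some past cases to measure the model before trusting it.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression()
model.fit(X_train, y_train)

# Evaluate on the held-out cases; in practice, anything the model flags
# would still go to a human for review, per the section below.
print(classification_report(y_test, model.predict(X_test)))
</syntaxhighlight>

A design note on this sketch: the output is an evaluation report rather than an automatic block, reflecting the principle that a model's predictions should only ever flag situations for human evaluation.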

Automated moderation should complement human evaluation, never replace it. The Wikipedia community should keep all of its moderation human-centered, and should always value and demand transparency both in automated judgements and in the community processes that act on them.