Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules, and public sector policies for the promotion and regulation of algorithms, particularly in artificial intelligence and machine learning.[1][2][3] For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union.[4] Regulation of AI is considered necessary both to encourage AI and to manage associated risks, but it is challenging.[5] Another emerging topic is the regulation of blockchain algorithms, particularly the use of smart contracts, which is discussed alongside the regulation of AI algorithms.[6] Many countries have enacted regulations of high-frequency trading, a field that technological progress is shifting into the realm of AI algorithms.[citation needed]
The motivation for regulation of algorithms is the apprehension of losing control over algorithms whose impact on human life is increasing. Multiple countries have already introduced regulations for automated credit score calculation: a right to explanation is mandatory for those algorithms.[7][8] The IEEE, for example, has begun developing a new standard to explicitly address ethical issues and the values of potential future users.[9] Concerns about bias, transparency, and ethics have emerged with respect to the use of algorithms in domains ranging from criminal justice[10] to healthcare[11], and many fear that artificial intelligence could replicate existing social inequalities along lines of race, class, gender, and sexuality.
^ Law Library of Congress (U.S.), Global Legal Research Directorate, issuing body. Regulation of Artificial Intelligence in Selected Jurisdictions. OCLC 1110727808.