There are a few terms that have been thrown around a lot lately: AI, DL, NN, ML, NLP, and more. While a precise definition of all these terms would take multiple paragraphs, the thing they have in common is that a computer is doing some stuff.
For anyone who is not familiar with this alphabet soup, I've written a fairly comprehensive overview of the field's origins and history, as well as an explanation of the technologies involved, here, and ask forgiveness for starting the explanation of a 2019 software release in 1951.
In recent years, the field of machine learning has advanced at a pace which is, depending on who you ask, somewhere between "astounding", "terrifying", "overhyped" and "revolutionary". For example, GPT (2018) was a mildly interesting research tool, GPT-2 (2019) could write human-level text but was barely capable of staying on topic for more than a couple paragraphs, and GPT-3 (2020–22) wrote this month's arbitration report (a full explanation of what I did, how I did it, and responses to the most obvious questions can be found below).
The generative pre-trained transformers (this is what "GPT" stands for) are a family of large language models developed by OpenAI, similar to BERT and XLNet. Perhaps as a testament to the rapidity of developments in the field, even Wikipedia (famous for articles written within minutes of speeches being made and explosions being heard) currently has a redlink for large language models. Much ink has already been spilled on claims of GPTs' sentience, bias, and potential. It's obvious that a computer program capable of writing on the level of humans would have enormous implications for the corporate, academic, journalistic, and literary worlds. While there are certainly some unrealistically hyped-up claims, it's hard to overstate how much these things are capable of, despite their constraints.