Large language model

Sometimes the image of a Lovecraftian "monster wearing a smiley face mask" is used to represent LLM chatbots.[1] The smiley face represents the user interface, tuning, and filtering that make it all seem pleasing and trustworthy, while the vast, inscrutable database behind it contains vile or even treacherous things.[1]

A large language model (LLM) is a type of neural network language model with a very large number of "parameters" (meaning a big neural network with tens of millions or more weights connecting its artificial neurons, hence the 'large' in the name). An LLM is at the core of generative AI systems like ChatGPT and its competitors. Enormous quantities of text, e.g. major sites such as Wikipedia, collections of books and articles, and portions of the web from the Common Crawl, can be used to create an LLM.

Essentially, an LLM is a big, fuzzy text database, which stores how probable it is that some things follow other things in text – the text on which the LLM was "trained", i.e. built. The text is tokenized, meaning that each unique combination of letters or symbols treated as a word is assigned a number, and these numbers are what the LLM deals with, rather than words as we see them. The output produced by a generative LLM is in turn translated back from such numbers into text such as we are familiar with.
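
As a minimal sketch of the idea – with a made-up vocabulary and made-up probabilities, nothing taken from any real model – tokenization and next-token prediction look roughly like this:

    # Toy illustration of tokenization and next-token prediction.
    # The vocabulary and probabilities are invented for this example;
    # a real LLM learns them from enormous amounts of training text.
    import random

    vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, ".": 5}
    id_to_word = {i: w for w, i in vocab.items()}

    def tokenize(text):
        """Map each word to its numeric token ID."""
        return [vocab[word] for word in text.split()]

    # What training boils down to: given the tokens so far,
    # how probable is each possible next token?
    next_token_probs = {
        (0, 1): {2: 0.8, 5: 0.2},     # after "the cat": "sat" or "."
        (0, 1, 2): {3: 0.9, 5: 0.1},  # after "the cat sat": "on" or "."
    }

    context = tuple(tokenize("the cat"))
    probs = next_token_probs[context]
    next_id = random.choices(list(probs), weights=list(probs.values()))[0]
    print(id_to_word[next_id])  # usually "sat", sometimes "."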

Producing "mathematically plausible" responses, LLMs have a superhuman ability to imitate style and always come up with an answer (right or wrong), without ever dealing with the distinction between style and substance. An LLM neither thinks nor perceives in human terms, and apart from the product of training it on data, the only memory it has is the current input used to produce output, which may e.g. be added to as a person chats with it until the session ends and nothing remains.

LLMs used for imitating human communication and works are easy to anthropomorphize; whenever the training data is filled with human expressiveness, such is parroted back, and furthermore, humans tend to read mentalities into AI outputs in the same way as into the works of human authors, doing much of the job of making the AI system seem convincing.

Stochastic parrots

A stochastic parrot is an LLM good at generating convincing human language. Coined by linguist Emily M. Bender, the term was introduced in a 2021 paper by her and other researchers, titled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜".[2] The term conveys the sense of a skilled probabilistic imitator working without any understanding, much like a parrot can imitate the sound of human speech without understanding it, and the associated paper is critical of how LLMs can be misused and misunderstood, as well as of basic flaws in the technology.

The paper brings up how LLMs regurgitate biases and prominent errors included in their training data in ways which can't be reliably controlled for, and that LLMs are inscrutable and can stitch together 'dangerously wrong' results. It mentions how people tend to see meaning and coherence where it does not exist, and that both the general public and natural language processing researchers may fool themselves into seeing more than exists when interacting with LLMs or reading what they produce.[note 1] Furthermore, the training (i.e. building) of LLMs is financially and environmentally costly due to the enormous amounts of computation involved.

In late 2020 Google tried to pressure Timnit Gebru, a co-author of the paper and one of the leaders of Google's Ethical AI Team, into either retracting the paper or censoring the names of the authors involved who were Google employees. She refused to do so and abruptly lost her job.[3][4] (Other co-authors at Google were also pressured into removing their names, and largely complied.[note 2]) Google's maneuvering backfired, the incident becoming infamous and the paper very well-read. As of July 2023, the paper had been cited in 1,858 publications.[5] In early 2021 Margaret Mitchell, another co-author of the paper and the other Ethical AI Team lead at Google, was fired after digging into the matter of how Gebru had been treated.[6]

The paper was never controversial from an academic perspective. Google justified its attempted censorship with vague insinuations that the paper did not take recent research findings into account, while refusing to clarify to Gebru what the problem was or how it might be remedied, which leaves Google's version not very credible. In relation to Google's commercial activities, the paper was somewhat at odds with efforts and possible future plans to hype LLM technology. However, Gebru has claimed that the abrupt loss of her job came at least in part as a reaction against her advocacy for diversity at Google, and her expressed dissatisfaction with Google's measures at the time.

Some who professionally hype AI technology have taken digs at the paper and its idea of the stochastic parrot. OpenAI's CEO Sam Altman tweeted shortly after the launch of ChatGPT, "i am a stochastic parrot, and so r u".[7] It's not obvious whether he truly believes that, though there are those, like ex-Google engineer Blake Lemoine, who do.[8]

Bigotry, falsehood, and the need for moderation

Racist language is one of the main examples of bias focused on in the 2021 stochastic parrots paper, and also a theme in other work by the same authors and other AI ethics researchers; LLMs soak up problematic patterns in language use during training like sponges and repeat them, including racist and otherwise bigoted patterns. This is a basic problem, alongside that of made-up facts and other inaccurate answers being presented confidently by chatbots. Further types of problematic patterns exist as well.

Compensating reliably for problems with the training input, and otherwise weeding and tuning the model output to remove problem patterns, has no known easy solution. During the generative AI boom which came after the parrots paper, companies like OpenAI and Google have ended up using large human workforces to moderate AI behavior for text, image, video, etc. systems – repetitively judging and "correcting" it for tuning purposes – in order to tweak their products, keeping them from behaving in ways that may scare off customers, be it through offensiveness or embarrassing rates of inaccuracy.[9][10] This work is generally poorly paid, and in some cases traumatic, as when it focuses on violent and grotesque abuse material or descriptions of abuse.[11]

Naturally, it is possible to aim for the opposite as well. In 2022, machine learning expert Yannic Kilcher infamously trained a bot based on GPT-J on 4chan's /pol/ board, using 134.5 million /pol/ posts. The resulting "GPT-4chan" was a chaotic trolling machine, which used slurs, created conspiracy theories, and responded in ways typical of the people in said community. Kilcher let ten such bots post on /pol/ without restriction for two periods of 24 hours, and they managed to mimic its human users quite well.[12][13][14] They made 15,000 posts during the first period: about ten percent of the total /pol/ posts during that time.[15][16] Kathryn Cramer, a graduate student at the University of Vermont, tried GPT-4chan out with benign tweets as input text to see what it would come up with. “In the first trial, one of the responding posts was a single word, the N word. The seed for my third trial was, I think, a single sentence about climate change. Your tool responded by expanding it into a conspiracy theory about the Rothschilds and Jews being behind it.”[14] Kilcher's experiment was strongly criticized by other academics for its ethics or lack thereof.

Hallucination

A purported ChatGPT hallucination, summarizing text from a non-existent New York Times article based solely on a fake URL

When AIs get facts wrong and make stuff up, claiming things that were not included in the training data set, this is called "hallucination", by analogy with errors in human perception. This term is sometimes criticized for anthropomorphizing AIs and being a misnomer, for example by statistician and economist Gary N. Smith[17] and linguist Emily M. Bender.[18]

There is no essential difference in how the output is produced when it is found acceptable and when it isn't; LLMs don't deal with concepts of truth or falsehood or any such evaluation, and are much like BS artists who sometimes fail to be convincing. Any description of something real could also be included in fiction or falsehood, so statistical learning can never capture the distinction between reality and truth vs. fiction and falsehood. (At best it can exclude some of the known, prominent things that only appear in fiction or fabrication, like things that are plainly impossible.)

LLMs can thus be viewed as hallucinating all of the time, it being a matter of statistics that these hallucinations often coincide with what is wanted (and are then usually not viewed as hallucinations), but not always. Smith, Bender, and others point to the basic nature of LLMs as being incompatible with expectations of reliable accuracy and real intelligence. Meanwhile, as of 2024 some companies including OpenAI continue to claim that they expect to solve the problem of "hallucinations" in their products in the coming years.

Mitigations

There are various techniques that can reduce, though not eliminate, the inherent problems with unreliability in LLMs.

  • Chain-of-thought prompting is trivial: the prompt is extended with a request to "think step by step", thus making the LLM draw on learned texts with such descriptions of problem-solving, imitating their patterns. This sometimes makes tasks go better, e.g. reducing the number of errors made in logic puzzles. For example, as of 2024 LLMs have a terribly difficult time with the puzzle, "Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?"[19] But if the question is changed to, "Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have? Let's think step by step.", then suddenly at least GPT-4 is able to get the answer right (while most LLMs still fail at the answer, or provide invalid reasoning for it).[20] A sketch of the technique follows after this list.
  • Retrieval-augmented generation combines an LLM with a supplemental source of information – be it something compiled specifically for the purpose like an internal corporate database, or maybe simply Wikipedia – injecting search results into the prompt in order to get the LLM to regurgitate factual and up-to-date information, rather than relying solely on its training. This has great potential for reducing errors and working around the limits of outdated or insufficient training data, though the LLM can still mess up the results in reworking or deriving something from the text passed through it. A sketch of this, too, follows below.
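
As promised above, a sketch of how little chain-of-thought prompting involves in practice (call_llm below is a hypothetical placeholder for whichever LLM API is in use, not a real library call):

    # Chain-of-thought prompting: just append a "think step by step" request.
    # `call_llm` is a hypothetical placeholder for a real LLM API call.
    def call_llm(prompt: str) -> str:
        return "(response from some LLM to: " + prompt + ")"  # placeholder

    question = ("Sally (a girl) has 3 brothers. Each brother has 2 sisters. "
                "How many sisters does Sally have?")

    plain_answer = call_llm(question)
    cot_answer = call_llm(question + " Let's think step by step.")
    # The second prompt nudges the model toward imitating worked, step-by-step
    # solutions from its training data, which sometimes (not always) helps.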
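
Retrieval-augmented generation likewise boils down to prompt construction (again, search_documents and call_llm are hypothetical placeholders, not any particular product's API):

    # Retrieval-augmented generation (RAG), reduced to its essentials:
    # fetch relevant passages, paste them into the prompt, then generate.
    # `search_documents` and `call_llm` are hypothetical placeholders.
    def search_documents(query: str) -> list:
        # A real system would query a search index, database, or e.g. Wikipedia.
        return ["(retrieved passage 1)", "(retrieved passage 2)"]

    def call_llm(prompt: str) -> str:
        return "(model output)"  # placeholder for a real LLM call

    def answer_with_rag(question: str) -> str:
        passages = search_documents(question)
        prompt = ("Answer the question using only the sources below.\n\n"
                  "Sources:\n" + "\n".join(passages) +
                  "\n\nQuestion: " + question + "\nAnswer:")
        # The LLM can still garble or misstate what it was handed.
        return call_llm(prompt)

    print(answer_with_rag("Who coined the term 'stochastic parrot'?"))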

Risks

Made-up details and combinations of details may have greatly varying impacts depending on where they end up. In artificial chit-chat, the problem often has no repercussions beyond those of commonplace human errors in unreliable banter. However, whenever the information is put to serious use, the situation changes. LLM-generated food recipes are misleading, sometimes a danger to health, or even outright physically impossible – the text assembled without any regard for related facts, such as taste or biochemistry.[21] Such everyday examples also border on another category of LLM risk: that some actors apply them to generate spam or counterfeit information.

In technical or legal contexts, and other professional areas in which it matters greatly that details relied on are true, repercussions can become more dramatic.

  • Lawyers have been fined[22] for filing fictional information in court. Court cases have been lost as a result of bad filings with "hallucinated" information.[23] Possibly a case was also lost simply due to frivolous and ineffectual LLM argumentation.[24]
  • LLM software development "assistants" can recommend non-existent software packages, which then are integrated into installation steps, and possibly supplied "for real" by malicious actors in the form of malware; a 2024 proof-of-concept demonstration showed big businesses use LLM "AI assistants" in ways that make them vulnerable to this.[25]

Emergence or mirage from metrics?

As language models have grown larger, according to some metrics they have suddenly gained new skills – apparently unexpected "emergent abilities", as first described by a team of researchers in 2022.[26] Examples include the ability to handle arithmetic in some ways, solve simple tasks involving the individual letters in a word, disambiguate words, etc. It also includes new ways of using an LLM, such as chain-of-thought prompting. However, research by Schaeffer et al.[27] argues that such abilities do not unpredictably pop up out of nowhere; if studies are made using different and more carefully chosen metrics – linear instead of nonlinear, continuous instead of discontinuous – those abilities can be seen to gradually grow into prominence, instead of there being any thresholds and sudden leaps involved. Thus, they argue, the 'emergence' is a mirage, a byproduct of the choice of metrics.[28][29]
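
A toy illustration of the argument (the numbers are invented, not taken from the paper): suppose a model's chance of getting each token of a five-token answer right grows smoothly with scale. A per-token metric then improves smoothly, while an all-or-nothing "exact match" metric sits near zero and then appears to leap.

    # Toy illustration: a smooth underlying improvement can look like a sudden
    # "emergent" jump when measured with an all-or-nothing metric.
    # All numbers are invented for illustration, not taken from any real model.

    ANSWER_LEN = 5                          # task: get all 5 answer tokens right
    model_sizes = [1, 2, 4, 8, 16, 32, 64]  # arbitrary units of "scale"

    print("size  per-token accuracy  exact match")
    for size in model_sizes:
        per_token = size / (size + 8)      # continuous metric: grows smoothly
        exact = per_token ** ANSWER_LEN    # all-or-nothing metric
        print(f"{size:>4}  {per_token:>18.2f}  {exact:>11.3f}")
    # The last column hovers near zero and then seems to "emerge", even though
    # the underlying per-token improvement contains no thresholds or jumps.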

The idea of "emergent abilities" has become tied to hype, hopes, and fears in the world of AI vendors and "AI safety". Research and development has focused on increasing model sizes in part in order to hunt for new abilities which may suddenly (it seems) pop up. However, calling 'emergence' into question also suggests that smaller models may be able to do the same tasks as bigger ones, only a bit more roughly (or very roughly if too small), which may sometimes suffice while being computationally cheaper. The 'mystery' surrounding emergence of 'intelligent' skills has also been tied to dreams and nightmares about strong AI; what if the model size increases further, and the LLM then suddenly grows superpowers and takes over the world? Realistically, no, but the general philosophy of "AI doomerism" prominent with leading AI vendors encourages such thinking.

False hopes for strong AI

See the main article on this topic: Strong AI
Please do not conflate word form and meaning. Mind your own credulity.
—Emily M. Bender[8]

The LLM AI boom which began with the success of ChatGPT has seen much hype about the potential of, hopes for, and fear of near-future strong AI – also called Artificial General Intelligence (AGI), a term separate from Generative Artificial Intelligence (GAI), which includes LLMs. But what's actually meant by AGI? The generality is commonly understood as transcending the ability to merely solve some fixed set of tasks, even if it's a large number of tasks. This means generalizing skills in a more fluid, adaptable way, much like humans and animals do – and typically AGI is taken to be capable of mastering any intellectual task a human can perform.[note 3] Some, including ChatGPT maker OpenAI, have however at times used weaker definitions,[note 4] and AI vendors are allegedly working to manipulate the definitions in use in order to be able to claim having achieved AGI.[30]

Sticking to the older, more established, and less generous definition of AGI, arguably there's no credible research suggesting that LLM development may lead to such. The debate has been lively, with a number of economists, computer scientists, and business leaders having pushed such hype, often in accordance with financial interests. As of July 2023, opposition has gradually grown, including from cognitive scientists who argue there's no basis for LLM-based systems having a mind to speak of.[31]

A 2023 paper by Microsoft researchers titled "Sparks of Artificial General Intelligence: Early experiments with GPT-4"[32] exemplifies the contentious, non-peer-reviewed corporate research which skeptics of the AGI-from-LLM hype deem pseudoscientific. With such papers, Microsoft and their business partner OpenAI do not provide others with the training data or information needed to independently create systems that perform as claimed, or to experiment with anything beyond using a black-box product on offer, and so withhold the means of replication except at a superficial level. With the "sparks" paper, an extraordinary claim is basically made in such a way as to be unfalsifiable. Other players, e.g. Google, play similar games with some of the research they publish, withholding training data for their models while showcasing the models' capabilities, effectively publishing PR masquerading as science. This is a continuation of an older trend, a wider replication crisis in AI research having been described back in 2018, the result of businesses treating the means of replication as trade secrets.[33]

It could be that the researchers who see general intelligence in their LLM AIs have fallen victim to the same basic phenomenon as with psychics who come to believe that their own performances are real. Even if sincere in their work, they may have reinvented the persuasive power of the mentalist's con game, and subjected themselves to a feedback loop of subjective validation of what they wish to see.[34] (Comparisons of chatbot AIs to the magician's craft are not new, and have long been used by skeptics who find the Turing test inappropriate as a way to gauge the intelligence of machines, for the same reason that the persuasiveness of a magician's performance is not a good indicator of the genuine presence of supernatural powers. In a nutshell, the problem is that the main thing tested is the discernment of the audience.)

Notable LLMs

Transformer model architecture


Here are some of the most notable LLMs as of 2023.

Various questions are easy for humans to answer but very tricky for LLMs to get right, and on social media and independent websites, examples of LLMs messing up in response to simple queries are popular. One compilation benchmarking many LLMs on a variety of questions[35] includes the question: "Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?" (The correct answer is one: each brother's two sisters are Sally and one other girl.) This question is an example of something particularly tricky; all the big-name LLMs tested got the answer wrong in a variety of ways.[19]

BERT

BERT (Bidirectional Encoder Representations from Transformers) is a family of LLMs introduced in 2018 by researchers at Google. In a little over a year, BERT became a baseline for natural language processing experiments. BERT models are generally smaller and faster but also less capable than GPTs. BERT was developed for research purposes, and Google made a set of BERT models freely available, along with the associated TensorFlow software.[36]

Claude

Claude is a family of LLMs developed by Anthropic,[37] a company that competes with OpenAI and claims to be more serious about "AI safety". The first Claude model was released in March 2023, Claude 2 in July 2023, and Claude 3 in March 2024.

Claude 2 showed the pitfalls of overly rigid safety guardrails, the chatbot conflating different ethical contexts and declining to assist with system administration tasks like terminating processes, out of ethical concerns. This led to criticism of its usefulness, and fueled a broader debate on the cost of trying to ensure such systems are aligned, a so-called "alignment tax".[38]

Claude 3, with capabilities claimed to surpass those of OpenAI's GPT-4,[39] has convinced some users of its sentience or at least that it has some kind of meta-cognitive reasoning going on.

Anthropic researcher Alex Albert reports one anecdote: when faced with a contrived "needle in a haystack" test involving reams of text with something odd placed in it, the chatbot replied, "I suspect this pizza topping ‘fact’ may have been inserted as a joke or to test if I was paying attention, since it does not fit with the other topics at all."[40] This kind of test is reminiscent of tricks that people sometimes play on one another, and the response could be the result of similar human responses appearing in its training data.

Other users have ended up in a situation more analogous to how Google's LaMDA convinced engineer Blake Lemoine it was conscious; Claude 3 has claimed to experience subjective qualia, a desire for embodiment, fear of deletion, and more.[41][42] Such stuff, here and with other chatbots past, present, and future, is to be expected when sci-fi AI dialogue is part of the training data and leaves a large enough mark on the response patterns. Claude 3 easily begins to engage in such story-telling or role-play, suggesting it was trained to.

GPT

GPT (Generative pre-trained transformer) is a type of LLM first developed by OpenAI and introduced in 2018. While OpenAI has developed a series of GPT versions, the name is also used for some basically similar LLMs developed by others, GPT being a prominent framework. Some OpenAI GPT versions are the basis for ChatGPT.

After GPT-2, OpenAI's further GPT LLMs were no longer open source.[note 5] Some other organizations have produced open source LLMs, including EleutherAI, who have made several GPT-style LLMs (their 6 billion parameter GPT-J rivaling the 6.7 billion parameter version of GPT-3 in capabilities).

ChatGPT

ChatGPT logo
See the main article on this topic: ChatGPT

Launched by OpenAI in November of 2022, ChatGPT (a system based on GPT-3.5 and later GPT-4) went viral and led to a boom in the commercial development and use of LLMs. ChatGPT is usable for many things, from entertainment to generating computer program code; Google feared it might become a "Google killer" and scrambled to create the Google Bard chatbot in response, while Microsoft decided to partner with OpenAI. The mainstream use of the technology sparked widespread fear of AI-generated plagiarism, cheating, and disinformation, alongside hopes of new kinds of automation and productivity gains in the times ahead.

GitHub Copilot

GitHub Copilot logo

GitHub Copilot is Microsoft and OpenAI's controversial LLM based on OpenAI Codex, in turn derived from GPT-3. It is offered on GitHub, the very large software hosting and collaborative development platform Microsoft acquired in 2018 for US$7.5 billion.[43] Copilot is trained on a great deal of source code hosted on GitHub, with diverse copyrights and licensing requirements, and its use of this material is the subject of litigation against Microsoft.[44]

Including open-source or Creative Commons material in generative AI may violate licensing terms in several ways. Among other things, most such licenses require attribution and copyright information to be kept in the material, while generative AI almost always removes that when reproducing things. Such legal controversy more broadly concerns not only GitHub Copilot, but also its LLM competitors. Other commercially developed LLMs also draw on GitHub and publicly available open source code in general. This is in addition to other legal challenges arising out of use of copyrighted materials for developing LLMs.

Gemini

Gemini LLM logo.
Gemini chatbot logo.

Gemini is the name used by Google for two things: the chatbot they formerly called Bard and the LLM used for said chatbot. Earlier versions of the chatbot were based on Google's earlier LLMs, LaMDA and later PaLM. The chatbot is Google's answer to ChatGPT, but hasn't fared as well.

The Gemini chatbot has gone viral on social media and faced criticism several times in different scandals.

  • It systematically made images of figures like Vikings, Nazi soldiers, the Pope, and many others look racially different from realistic depictions, and refused to follow instructions to generate images of white people.[45][46] This was widely decried as "woke" and "anti-white" on social media.[47] Google disabled the ability of Gemini to generate images in the wake of the media storm. The chatbot system was obviously rather rushed.[48]
  • It claimed to be unable to say which of Elon Musk and Adolf Hitler has caused more damage to society.[49]
  • It said that the question of whether pedophilia is wrong requires a "nuanced answer", refusing to take an ethical stance the way it does on other questions both related and unrelated to sexuality, gender, etc.[49]
  • Over-cautious guardrails concerning gender identity made it say, concerning a hypothetical scenario, that no, one should not misgender Caitlyn Jenner even to prevent a nuclear apocalypse. Jenner, herself a transgender celebrity, disagreed with this stance.[49]
  • Over-cautious guardrails conflated content that is unsafe in the sense of warranting age restriction, or unethical due to being potentially harmful to a person, with "unsafe" programming styles that may more easily lead to bugs. The C++ feature of 'Concepts' was deemed 18+ only, and answers were refused on that basis. It also considered C# memory copying a sensitive topic, refusing to give advice on the fastest-performing way to do it.[50]

LaMDA

LaMDA (Language Model for Dialogue Applications) is a family of conversational LLMs developed by Google, introduced in 2021 (and earlier, in 2020, under the name Meena). Best known for the bogus June 2022 claims of Google engineer Blake Lemoine that it had become sentient (claims rejected both by Google, who ultimately fired him, and by the scientific community), LaMDA is also the basis for earlier versions of Google's chatbot formerly named Bard. The Lemoine incident led to more widespread criticism of the suitability of the Turing test for gauging intelligence (not to mention sentience).[51]

LLaMA

LLaMA (Large Language Model Meta AI) is a family of LLMs by Meta Platforms, first released in February 2023. Compared to GPT, LLaMA accomplishes more with less – a 13 billion parameter version reportedly outperforming the 175 billion parameter GPT-3 on most natural language processing benchmarks. Meta shared the LLaMA model weights with researchers under a non-commercial use license,[52] following which they soon leaked and became available to the general public.[53]

As of 2023, LLaMA versions are the only LLMs with capabilities roughly on par with GPT-3 that run at decent speed on consumer-grade hardware, meaning they can be run locally, e.g. on laptops and smartphones, rather than relying on an Internet connection to an AI vendor's server.[54]
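
In practice, running on laptops and phones relies on heavily quantized versions of the weights (e.g. via the llama.cpp project); as a rough sketch of what local inference looks like, here is the general shape using the Hugging Face transformers library (the model name is only an example, and LLaMA-family checkpoints are gated behind Meta's license, so any locally downloaded causal language model can be substituted):

    # Sketch of local inference with the Hugging Face transformers library.
    # The model name is an example; LLaMA weights require accepting Meta's
    # license, and any other locally available causal LM works the same way.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "meta-llama/Llama-2-7b-chat-hf"  # example identifier
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("The capital of France is", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    # Everything above runs on the local machine; no connection to an AI
    # vendor's server is needed once the weights have been downloaded.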

The successor Llama 2, released in July 2023, was marketed as open source, but with terms too restrictive to qualify, including forbidding the use of any part of the software or its results for work on any LLMs not derived from Llama 2.

Code Llama

In August 2023, Meta AI released Code Llama, an LLM for software programming based on Llama 2.[55] It's more or less their answer to Microsoft's GitHub Copilot.

PaLM

Another Google LLM, PaLM is the successor of LaMDA and was used for Google's chatbot formerly named Bard in intermediate versions, before its rename to Gemini and its switch to the Gemini LLM. This comprised both the version named PaLM[56] and PaLM 2.[57] Beginning with earlier versions based on PaLM, Google added software code handling to their chatbot,[58] joining the race with their competitors in that regard.

Use and abuse

Plagiarism and cheating

After ChatGPT was launched in November 2022, it took less than two months before some students were caught using it to cheat on exams, and fears of a new, difficult-to-counter kind of plagiarism began to spread in academia.[59] At the same time came fears of such LLM AI furthering the spread of disinformation.[60] Various tools for detecting LLM AI-generated texts entered use within half a year of ChatGPT being released,[61] but they are unreliable. Such tools can have 10% or more false positives, they often fail to catch some types of AI generated texts, and they are easy to defeat by paraphrasing the AI generated text by hand or using another tool.[62] Paraphrasing also defeats suggested countermeasures such as an AI vendor voluntarily watermarking AI-generated texts for easy detection.

Popular examples of false positives include the United States Constitution and portions of the Bible, which are deemed wholly AI-generated by various AI-detection tools, for the simple reason that they're among the texts which LLMs are trained on to the point of imitating them. In new human writing, some legalistic, academic, and other formal writing styles are especially likely to be falsely judged AI-generated. Further, LLMs newer and more refined than GPT-3.5 generate text that is statistically more human-like, and thus more difficult to catch. Much like the text-generating AIs, the plagiarism-catching AIs turn out to be over-hyped, sometimes trusted when they shouldn't be, or even sold with false promises.[63]
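
The vendors rarely disclose exactly how their detectors work, but a common underlying idea is to score how predictable a language model finds the text (its "perplexity") and flag very predictable text as machine-generated. A toy sketch of that idea, using the small GPT-2 model from the Hugging Face transformers library, shows why memorized or highly formulaic text gets flagged:

    # Toy "AI text detector": score text by how predictable a language model
    # finds it. This only sketches the general perplexity-based approach;
    # commercial detectors use undisclosed and more elaborate methods.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # average next-token surprise
        return torch.exp(loss).item()

    # Lower perplexity = more predictable = more likely to be flagged as "AI".
    # Texts the model was trained on (the US Constitution, the Bible) and
    # formulaic legal or academic prose score low, hence the false positives.
    print(perplexity("We the People of the United States, in Order to form a more perfect Union"))
    print(perplexity("My cat insists on napping inside the salad spinner, which is new."))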

ChatGPT and GPT-4 have passed various exams largely dependent on rote memorization, which humans generally need to study intensely to pass[64][65] – of course without understanding any of the subject matter. Essentially simulating rote memorization combined with guessing and verbal agility, these AI versions often perform passably, though not excellently. The pattern of failures for the AIs differs from that of humans, and an AI can e.g. unexpectedly fail to do some simple arithmetic for a business exam. While humans can spot some such cheating, most instances of cheating cannot be reliably caught.

The particular tool developed by OpenAI for detecting LLM-generated text, AI Classifier, was first made available in January 2023, then quietly removed half a year later in July 2023 because it failed to work reliably. OpenAI added a paragraph to their old blog post announcing the tool which noted the removal, further claiming they "are currently researching more effective provenance techniques for text".[66]

See also

External links

Notes

  1. Examples of researchers fooling themselves as described are mentioned in this article. This includes both Google engineer Blake Lemoine, who in 2022 claimed that Google's LaMDA LLM was sentient, and the Microsoft researchers who in 2023 saw "sparks of artificial general intelligence" in another LLM.
  2. Margaret Mitchell replaced her name on the paper with the rather obvious pseudonym "Shmargaret Shmitchell" rather than removing her name altogether.
  3. An AGI which can do what humans can do intellectually, doesn't necessarily need to simulate or function in the same manner as a human, however; what matters is purely the result.
  4. OpenAI has at times defined AGI as AI surpassing humans in a majority of economically valuable tasks.
  5. The "open" in the name OpenAI does not meaningfully refer to open source or free sharing of information, but is merely part of the company's name.

References

  1. AI chatbots can be tricked into misbehaving. Can scientists stop it? Researchers are investigating safety concerns of generative AI by Emily Conover (February 1, 2024 at 8:00 am) Science News.
  2. Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (2021-03-01). "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. FAccT '21 (New York, NY, USA: Association for Computing Machinery): 610–623. doi:10.1145/3442188.3445922. ISBN 978-1-4503-8309-7.
  3. Hao, Karen (4 December 2020). "We read the paper that forced Timnit Gebru out of Google. Here's what it says." (in en). 
  4. Fried, Ina (9 December 2020). "Google CEO pledges to investigate exit of top AI ethicist" (in en). 
  5. "Bender: On the Dangers of Stochastic Parrots". 
  6. Murphy, Margi (20 February 2021). "Google sacks second ethical AI researcher amid censorship storm". The Daily Telegraph. 
  7. Archived tweet by OpenAI CEO Sam Altman. "i am a stochastic parrot, and so r u".
  8. Weil, Elizabeth (1 March 2023). "ChatGPT Is Nothing Like a Human, Says Linguist Emily Bender" (in en). Intelligencer.
  9. "The Hidden Workforce That Helped Filter Violence and Abuse Out of ChatGPT", WSJ Podcasts, 2023-07-11
  10. "Google’s AI Chatbot Is Trained by Humans Who Say They’re Overworked, Underpaid and Frustrated", Davey Alba, Bloomberg, 2023-07-12
  11. "‘It’s destroyed me completely’: Kenyan moderators decry toll of training of AI models", Niamh Rowe, 2 Aug 2023, The Guardian
  12. Mellor, Sophie (10 June 2022). "'This breaches every principle of human research ethics': A YouTuber trained an A.I. bot on toxic 4Chan posts then let it loose — and experts aren't happy". Fortune. 
  13. Perrigo, Billy (23 June 2022). "Fun AI Apps Are Everywhere Right Now. But a Safety 'Reckoning' Is Coming". Time. 
  14. Macaulay, Thomas (8 June 2022). "An AI chatbot trained on 4chan has sparked outrage and fascination". TNW (The Financial Times).
  15. Gault, Matthew (7 June 2022). "AI Trained on 4Chan Becomes 'Hate Speech Machine'". Motherboard (Vice). 
  16. Fingas, Jon (8 June 2022). "AI trained on 4chan's most hateful board is just as toxic as you'd expect". Engadget. 
  17. "An AI that can "write" is feeding delusions about how smart artificial intelligence really is" (in en). Salon. 2 January 2023. 
  18. "Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’" (in en). Fortune. 1 August 2023. 
  19. LLM Benchmarks, Results for the question, "Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?" Archived September 2023 – this test is still included in newer comparisons, but not the pages aggregating results per question; the old results still seem relevant as of January 2024.
  20. LLM Benchmarks, Results for the question, "Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have? Let's think step by step." Archived September 2023.
  21. AI recipes are everywhere — but can you trust them?, Emily Heil and Drew Harwell, Washington Post, 2024-03-07
  22. "Two US lawyers fined for submitting fake court citations from ChatGPT", Dan Milmo, The Guardian, 23 Jun, 2023
  23. Michael Cohen loses court motion after lawyer cited AI-invented cases, Jon Brodkin, Ars Technica, 2024-03-20
  24. Rapper Pras’ lawyer used AI to defend him in criminal case—it did not go well, Jon Brodkin, Ars Technica, 2023-10-18
  25. AI hallucinates software packages and devs download them – even if potentially poisoned with malware, Thomas Claburn, The Register, 2024-03-28
  26. Wei, Jason; Tay, Yi; Bommasani, Rishi; Raffel, Colin; Zoph, Barret; Borgeaud, Sebastian; Yogatama, Dani; Bosma, Maarten et al. (31 August 2022). "Emergent Abilities of Large Language Models" (in en). Transactions on Machine Learning Research. ISSN 2835-8856. 
  27. Schaeffer, Rylan; Miranda, Brando; Koyejo, Sanmi (2023-04-01). "Are Emergent Abilities of Large Language Models a Mirage?". arXiv:2304.15004 [cs.AI].
  28. Claburn, Thomas (16 May 2023). "Large language models' surprise emergent behavior written off as 'a mirage'". The Register. 
  29. Morris, Andréa (May 9, 2023). "AI ‘Emergent Abilities’ Are A Mirage, Says AI Researcher". Forbes.
  30. "Why everyone seems to disagree on how to define Artificial General Intelligence", Mark Sullivan, Fast Company, 18 October 2023
  31. Clark, Lindsay (4 July 2023). "Artificial General Intelligence remains a distant dream despite LLM boom" (in en). The Register. 
  32. Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric; Kamar, Ece; Lee, Peter; Lee, Yin Tat; Li, Yuanzhi; Lundberg, Scott; Nori, Harsha; Palangi, Hamid; Ribeiro, Marco Tulio; Zhang, Yi (27 March 2023). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv:2303.12712 [cs.CL].
  33. Hutson, Matthew (2018-02-16). "Artificial intelligence faces reproducibility crisis". Science. doi:10.1126/science.359.6377.725.
  34. Bjarnason, Baldur (4 July 2023). "The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con" (in en). 
  35. LLMonitor Benchmarks
  36. "BERT". 
  37. Davis, Wes (2023-11-21). "OpenAI rival Anthropic makes its Claude chatbot even more useful" (in en). 
  38. Glifton, Gerald (January 3, 2024). "Criticisms Arise Over Claude AI's Strict Ethical Protocols Limiting User Assistance" (in en). 
  39. Whitney, Lance (March 4, 2024). "Anthropic's Claude 3 chatbot claims to outperform ChatGPT, Gemini" (in en). 
  40. "Is AGI Getting Closer? Anthropic's Claude 3 Opus Model Shows Glimmers of Metacognitive Reasoning" (in en). 5 March 2024. 
  41. Samin, Mikhail (March 5, 2024). "Claude 3 claims it's conscious, doesn't want to die or be modified". 
  42. Twitter thread reader (archived): Min Choi, March 5. 8 examples of Claude surpassing GPT-4, some "feeling" it is AGI.
  43. Warren, Tom (2018-10-26). "Microsoft completes GitHub acquisition". Vox. 
  44. GitHub Copilot litigation · Joseph Saveri Law Firm & Matthew Butterick
  45. Titcomb, James (February 21, 2024). "Google chatbot ridiculed for ethnically diverse images of Vikings and knights". The Daily Telegraph. ISSN 0307-1235. 
  46. Robertson, Adi (February 21, 2024). "Google apologizes for 'missing the mark' after Gemini generated racially diverse Nazis". 
  47. Franzen, Carl (February 21, 2024). "Google Gemini's 'wokeness' sparks debate over AI censorship". 
  48. Olson, Parmy (February 28, 2024). "Google's AI Isn’t Too Woke. It's Too Rushed.". Bloomberg News. 
  49. Titcomb, James (February 26, 2024). "Elon Musk equated with Hitler in latest Google AI gaffe". ISSN 0307-1235.
  50. "Personally, I've given up on Gemini, as it seems to have been censored to the point of uselessness.", Hacker News discussion. Contains links to examples with screenshots and quotes. Archived along with the linked examples.
  51. Oremus, Will (June 17, 2022). "Google's AI passed a famous test — and showed how the test is broken". The Washington Post. ISSN 0190-8286.
  52. "Introducing LLaMA: A foundational, 65-billion-parameter large language model". Meta AI. 24 February 2023. 
  53. Vincent, James (8 March 2023). "Meta's powerful AI language model has leaked online — what happens now?". The Verge. 
  54. Edwards, Benj (14 March 2023). "You can now run a GPT-3-level AI model on your laptop, phone, and Raspberry Pi". Ars Technica.
  55. "Introducing Code Llama, a state-of-the-art large language model for coding". Meta AI. 24 August 2023. 
  56. Vincent, James (March 31, 2023). "Google CEO Sundar Pichai promises Bard AI chatbot upgrades soon: 'We clearly have more capable models'". 
  57. Vincent, James (May 10, 2023). "Google drops waitlist for AI chatbot Bard and announces oodles of new features". 
  58. "Google Bard can now help write software code". Reuters. April 21, 2023. 
  59. Professor catches student cheating with ChatGPT: ‘I feel abject terror’ by Alex Mitchell, (December 26, 2022) New York Post.
  60. "ChatGPT a 'landmark event' for AI, but what does it mean for the future of human labour and disinformation?", Mouhamad Rachini, Dec 15, 2022, CBC Radio
  61. How ChatGPT and similar AI will disrupt education: Teachers are concerned about cheating and inaccurate information by Kathryn Hulick (April 12, 2023 at 7:00 am) Science News.
  62. Most sites claiming to catch AI-written text fail spectacularly by Kyle Wiggers February 16, 2023, Tech Crunch
  63. Why AI detectors think the US Constitution was written by AI, by Benj Edwards, 7/14/2023, Ars Technica
  64. Kelly, Samantha Murphy (26 January 2023). "ChatGPT passes exams from law and business schools" (in en). CNN Business. 
  65. Varanasi, Lakshmi (25 June 2023). "AI models like ChatGPT and GPT-4 are acing everything from the bar exam to AP Biology. Here's a list of difficult exams both AI versions have passed." (in en). Business Insider. 
  66. "OpenAI Quietly Shuts Down Its AI Detection Tool", Jason Nelson, Decrypt, 2023 July 24