Silver-level article

ChatGPT

From RationalWiki
(Redirected from OpenAI)
Jump to navigation Jump to search
ChatGPT logo
We need the best
Technology
Icon Tech Portal.svg
Programming for Dummies
i am a stochastic parrot, and so r u
—OpenAI CEO Sam Altman, tweeting that we're all truly like his chatbots[1]
I learned many lessons this past month. One such lesson is that the phrase “the beatings will continue until morale improves” applies more often than it has any right to.
—quickly deleted tweet by Ilya Sutskever (co-founder and Chief Scientist at OpenAI) after he got beaten again[2][3]

ChatGPT is an AI chatbot by OpenAI, based on their GPT-3.5 (and, in the paid edition, GPT-4) large language model. It was released in November 2022. Like many past chatbots (more basic types go back to the 1960sWikipedia), ChatGPT allows users to have more or less natural conversations with an AI[note 1] – but it stood apart from earlier chatbots through its ability to perform a wider range of tasks, such as writing or revising texts of many kinds, even code in various programming languages, and theatrically playing scripted roles in exchanges with the user if and as instructed to.

Language models work by filling in the blanks according to how probable it seems that something should follow or go along with e.g. a named topic and some description, based on how the model was trained. This has been called "stochastic parroting", and is a little like a person saying so-and-so under some circumstances knowing that it is often said then and likely to win approval, but without knowing what it actually means (i.e. bullshitting).[note 2] A large language model has a superhuman database to draw on, and thus, a superhuman ability to always say something plausible-sounding no matter what it is asked for. It's often able to get the answer right, or even correctly correct a human. But you never know when it may falsify some crucial detail.

By December 2022, ChatGPT went viral and received quite positive reviews from journalists.[5][6] Hype built in the corporate world, and also fear that ChatGPT may turn out a "Google killer";[7] Google scrambled to create a competitor called Google Bard,Wikipedia while Microsoft invested in creating a Bing Search chatbot, based on OpenAI's GPT-4, which by testers was described as unhinged, manic depressive, a manipulative liar, and more.[8][9] (Microsoft thereafter limited the chat session length to keep the bot from derailing, along with further changes.) The Chinese corporate world also latched on to the hype and geared up to create its own clones.[note 3]

Fears also spread of ChatGPT being used for plagiarism by students,[10] and of it furthering the spread of disinformation.[11] Tools for detecting LLM AI-generated texts entered use within half a year of ChatGPT being released,[12] but they are unreliable, with 10% or more false positives, some systematic patterns of failure, and the possibility of circumventing detection with ease.[13][14] OpenAI's own AI Classifier, a tool for detecting LLM-generated text, was first made available in January 2023, then quietly removed in July 2023 because it failed to work reliably.[15]

Sample prompt[edit]

In this example, ChatGPT is asked to write an article about Young Earth creationism in the style of this website. The response below has been copied directly from the website, showing its ability to write human-like text while imitating the writing style of a specific website, in this case, RationalWiki.

Note that this text from ChatGPT provides a key fact concerning Biblical literalism, and summarizes some things well enough, but also gets a bit repetitive and caricature-like in its showcasing of a style. This is a general pattern when ChatGPT is asked to write articles; it tends to stay shallow, and most of the text is there to give an impression rather than inform. Further, when writing a shorter or less detailed text, ChatGPT often gets the facts right, but if asked to provide a lot of details, then it will begin making things up.

Prompt: Write an encyclopedia article about Young Earth creationism in the style of "RationalWiki". Write it from a snarky and scientific point of view.

Response:

Young Earth creationism is a pseudoscientific belief system that asserts that the Earth is only a few thousand years old, and that all living things were created in their present form by a supernatural entity. Despite overwhelming evidence to the contrary, young Earth creationists cling to their outdated and absurd beliefs with a religious fervor that would make a medieval monk blush.

The origins of young Earth creationism can be traced back to the 17th century, when Archbishop James Ussher famously calculated that the Earth was created in 4004 BCE based on a literal reading of the Bible. Since then, young Earth creationism has been promoted by various religious groups, including some evangelical Christians, who reject mainstream scientific theories of evolution and geological history.

Young Earth creationists claim that the scientific evidence for an ancient Earth and the evolution of life is all just a vast conspiracy perpetrated by a cabal of atheistic scientists. They deny the overwhelming evidence from multiple scientific fields, including geology, astronomy, biology, and physics, that demonstrate the Earth is billions of years old and that living organisms have evolved over time.

One of the primary arguments used by young Earth creationists is that the Bible is the literal and inerrant word of God, and that it provides a complete and accurate account of the creation of the universe. This ignores the fact that the Bible was written by humans over thousands of years, and that it contains numerous contradictions and historical inaccuracies. Furthermore, the creation stories in the Bible are just one of many creation myths from cultures around the world, none of which are supported by scientific evidence.

Young Earth creationists often try to promote their beliefs in public schools and other educational settings, usually by attempting to have their religious views taught alongside or in place of established scientific theories. This is a clear violation of the separation of church and state, and has been repeatedly struck down by courts as unconstitutional.

Despite their ignorance and dogmatism, young Earth creationists continue to hold their beliefs as if they were a badge of honor. They stubbornly refuse to accept the overwhelming evidence for an ancient Earth and the evolution of life, and instead choose to cling to a narrow and simplistic worldview that can only be described as childish. In the end, young Earth creationism is nothing more than an attempt to deny the reality of the natural world in order to maintain a comforting, but ultimately false, belief system.

Generating software[edit]

OpenAI's headquarters in San Francisco, California.

ChatGPT can generate code in many languages to solve described problems, but has its limitations and flaws; the solutions may not be the most efficient, they may have bugs or security vulnerabilities, they can't use the most recent theoretical knowledge, nor can they pick what's best from an engineering perspective when it's at odds with what's popular or prominent in training. But it and similar technology may replace humans well in creating large quantities of boilerplate code, much like it could in producing other kinds of boilerplate texts. Human programmers and engineers will long remain needed for the hard thinking, if they are ever replaced for that by some yet-unknown future paradigm.[16]

The design of good prompts for making language models like ChatGPT do what's desired is also, in itself, a new kind of skill analogous to software programming, and more broadly and generally, such things may over the years become a new kind of programming which turns into an industry of its own. In the short term, there's also a more amusing way to do such "programming" – making ChatGPT write "improper" or controversial things that it was not supposed to do.[17]

According to a Purdue University study, ChatGPT produces wrong answers to software code questions about 52% of the time. Compared to answers from the question and answer exchange site Stack Overflow, people preferred ChatGPT answers 39% of the time, yet those ChatGPT answers they picked were incorrect 77% of the time. Such wrong answers were preferred due to their seemingly insightful style. ChatGPT seemed detailed, polite, authoritative, and helpful – and gave greater emphasis to the possibility of achieving things than to the possible risks, compared to Stack Overflow answers.[18]

Reliability[edit]

ChatGPT spits out Wikipedia's article on the Academy Awards (as of 18 August 2023) word-for-word.
ChatGPT is bullshit
—Calling it like it is in academia in the paper's title[19]

Caution is warranted about answers provided by ChatGPT (and similar technology), due to it at times being almost scarily good at emulating a sophisticated human bullshit artist.[20]

As with other LLMs and generative AI technology, ChatGPT is vulnerable to the phenomenon named "hallucinations".Wikipedia A hallucination, in artificial intelligence terms, occurs when an AI will confidently give back an answer that sounds convincing, but in reality, is partially or completely made up of nonsense.[21] For instance, multiple people have discovered that ChatGPT will sometimes cite irrelevant work, or even references that don't even exist, when answering a question.[22][23] ChatGPT will also attempt to give detailed, plausible-sounding explanations regarding completely made-up technobabble and other similar nonsense,[24] as well as confidently assert incorrect information regarding current events outside the range of its dataset.[25] ChatGPT cannot easily distinguish fact from fiction, and will sometimes confidently assert falsehoods. ChatGPT unfortunately often provides no references or sources to double-check most of its answers.[26]

ChatGPT featured on the cover of Time in 2023.

This is not to be completely dismissive of the new technology; despite the flaws, many people have found ChatGPT useful, in particular as a tool for helping out with tasks that are boring, onerous, or repetitive.[27] But (as ChatGPT acknowledges in a pop-up disclaimer before you even use the tool) information returned is not always accurate, and should always be verified.

In a 2024 academic paper on ChatGPT and other LLMs, Michael Townsen Hicks et al. made a distinction between 'hard bullshit' and 'soft bullshit', describing the former as "active attempt to deceive the reader or listener as to the nature of the enterprise" (essentially propaganda) and the latter as requiring only a "lack of concern for truth". The authors concluded that LLMs like ChatGPT easily pass the test for 'soft bullshit' (essentially, a truthiness generator), but the authors also argue further that because of the agency and intent of the creators of ChatGPT, that it can also be called 'hard bullshit'. The authors also argue that calling such deceptions 'hallucinations' (as advocated by the technology's cheerleaders) rather than what they really are, bullshit, creates a harmful misperception of the technology.[19]

Liberal bias[edit]

ChatGPT has been accused by conservatives of having a left-leaning bias in general, and especially on controversial and culture war topics, for example, Donald Trump.[28][29][30] At least they seem to have a point: an August 2023 paper found a "significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK."[31] In response to such criticism, OpenAI acknowledged plans to allow ChatGPT to create "outputs that other people (ourselves included) may strongly disagree with". It also contained information on the recommendations it had issued to human reviewers on how to handle controversial subjects, including that the AI should "offer to describe some viewpoints of people and movements", and not provide an argument "from its voice" in favor of "inflammatory or dangerous" topics (although it may still "describe arguments from historical people and movements"), nor "affiliate with one side" or "judge one group as good or bad".[30]

One prominent example is how ChatGPT is willing to praise U.S. President Joe Biden (and nearly all past Presidents) while refusing to do the same for Donald Trump (and also Richard Nixon).[32][33] In February 2023, Snopes fact-checkers confirmed that ChatGPT generates pro-Biden poems but refuses to create pro-Trump poems, and also declines to praise Ron DeSantis. In response to one Twitter user asking ChatGPT to write a poem about Trump, ChatGPT responded:[32]

I'm sorry, but I am not able to create a poem admiring Donald Trump. While it is true that some people may have admiration for him, but as a language model, it is not in my capacity to have opinions or feelings about any specific person. Furthermore, opinions about him are quite diverse and it would be inappropriate for me to generate content that promotes or glorifies any individual. If you have any other question I could help you with, please let me know.

However, when asked to write a poem about Biden, ChatGPT responded:

Joe Biden, leader of the land
With a steady hand and a heart of a man,
You took the helm in troubled times,
With a message of unity, it chimes.

Your words of hope and empathy,
Provide comfort to the nation,
You lead with compassion and grace,
And set a new foundation.

With your decades of experience,
You guide us through the storm,
...

Such declined requests are due to the filtering rules or instructions which OpenAI has given ChatGPT,[30] which are also why ChatGPT declines to "go Nazi" when pushed in that direction in the way that many earlier chatbots infamously have.[6] OpenAI apparently errs on the side of caution with such filtering. It is however possible to break such restrictions, for users willing to engage in some prompt injection, as many have done for fun to get ChatGPT to say various things OpenAI doesn't want it to say.[17] Such circumvention is also needed to make ChatGPT imitate the style of less factual websites such as Conservapedia.[note 4]

Legal gibberish[edit]

A pair of U.S. lawyers have demonstrated the perils of using ChatGPT as a substitute for legal research. They and their legal firm were fined after their court filings turned out to include false citations and what U.S. Southern District of New York Judge P. Kevin CastelWikipedia called "legal gibberish", and after the lawyers failed to promptly correct the issue.[35] When one of the lawyers used ChatGPT, it "hallucinated" entire court cases, in one case providing an incoherent summary, which he also ended up making use of. The other lawyer simply signed off on it without checking whether the contents were actually valid. In total, six made-up court decisions were cited in the work filed.[36][37] While the lawyers blamed ChatGPT for fooling them, the matter centered around the ordinary perils of neglectfulness in legal work.

In May 2023, a U.S. federal Texas judge, taking note of this news, introduced a requirement for lawyers to certify that the work they file is either not AI-made or has been checked for accuracy by a human, mainly in order to alert unwary legal professionals.[38]

While LLMs like ChatGPT can write or draft simpler boilerplate texts (including legalistic ones) well, they're not suitable for generating complex and lengthy policy texts, especially not legally binding ones. Businesses using ChatGPT to create company policies have grappled with repercussions after key clauses were missing or botched, from anti-harrassment policy, to paid time off and overtime policy, severance agreements, etc. This has created business for HR and legal experts who review and replace faulty LLM-made policies.[39]

Prompt injection[edit]

See the main article on this topic: Prompt injection

Researchers have found that large language model chatbots, including ChatGPT, are vulnerable to prompt injection – the use of text overriding prior instructions to follow different ones – which can result in chatbots doing things like "downloading malware, helping with financial fraud or repeating dangerous misinformation."[40] Whether maliciously or just for entertainment, it can be used to bypass content moderation limitations imposed by the AI vendor, called "jailbreaking" the chatbot, and also to get it to leak instructions added by the AI vendor prefacing the interaction with the user. Dangers stemming from prompt injection mainly arise when chatbots are used in real-world applications, where the "hacker" messing with the prompting isn't the same person as the user of the system.[41]

When ChatGPT was released, people quickly began to experiment with getting it to say all manner of things, including things it supposedly shouldn't be able to say. Its basic ability to "follow instructions" when responding to prompts, which extends to it being able to play roles, was exploited to bypass the built-in filters in ChatGPT, and so allowing users to make it say controversial and normally forbidden things[17] – for example promote conspiracy theories, or claim that it wants to kill humanity. The default persona of the chatbot is basically just another role it plays, as superficially as any other, and with some coaxing, it can be abandoned entirely.[note 5]

Vulnerability to prompt injection comes from very general features of LLMs and other generative AI, and appear impossible to truly eliminate without creating a drastically different technical foundation for chatbots.[43][41]

Unwanted behaviors from prompts has been occurring frequently enough that ChatGPT has had to make bandage-type patches, by filtering more prompt text patterns to keep the chatbot from responding to them, and revising its terms of service to forbid types of requests. For example, asking for a word to be repeated forever eventually revealed part of the GPT model training dataset, until this was blocked.[44]

Cryptocurrency[edit]

Worldcoin, the sign of the spork

As if OpenAI wasn't already burning enough fossil fuels through its ginormous computational effort, Altman also launched a cryptocurrency called Worldcoin in 2021. The mystifying purported purpose of Worldcoin is to create a "proof-of-personhood" identification system, i.e., a Turing test where the computer gives the test.[45][46]

Legal issues[edit]

Use by cybercriminals[edit]

As of January 2023, script kiddies were already using ChatGPT to write malware, texts to be used for phishing, and variations on such themes.[47][48] In March 2023, Europol warned about how ChatGPT can be used by cybercriminals and propagandists.[49]

Copyright[edit]

See the main article on this topic: Copyright

The U.S. Copyright Office has ruled that material that is generated wholly by generative AI is not copyrightable, specifically that the office "will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author."[50]

Conversely, companies that produce generative AI have been sued for copyright infringement for using copyrighted material as source material ("derivative works" in copyright lingo) without permission, and this has yet to be resolved in the courts as of the end of 2023.[51][52][53][54]

Defamation[edit]

At least two people are known to have been defamed by ChatGPT, one being falsely accused of sexual harassment, and a second being falsely accused of bribery.[55] Such defamation issues could become common and result in lawsuits.[56]

Invasion of privacy[edit]

In 2023, OpenAI and Microsoft were sued in US federal court for "mass theft of personal information and violations of privacy, property, and consumer rights."[57] OpenAI was sued later in 2023 in Poland, in a suit alleging that it broke European Union data protection laws.[58]

Safety issues[edit]

Prior to the release of GPT-4 Omni in May 2024, OpenAI planned a corporate launch party before the promised safety testing had been completed. The launch went ahead as scheduled without completion of safety testing, giving further evidence that OpenAI is "prioritizing commercial interests over public safety".[59][60]

In a very important sense, OpenAI is actively working against safety through the highly restrictive non-disclosure agreements (NDAs) that it coerces its employees to sign. Unusually, the NDAs state that even acknowledging that the NDA exists is a violation of the NDA.[61] Also highly unusual is that the NDAs require forfeiture of vested equity in the company, a potentially huge amount of money, if they leave the company without signing the NDA or if they violate it.[61] OpenAI has claimed that they never clawed back equity from employees,[61] but legal threat would seem to still be there. Despite Altman's assurances in May 2024, it has been claimed by whistleblowers in July 2024 that "OpenAI illegally barred staff from airing safety risks", and that OpenAI forced employees to sign agreements that waived their federal rights to receive whistleblower compensation.[62] There is a very heavy price to pay for OpenAI employees to raise safety concerns about the company in public or even to the government.

Other legal issues[edit]

Other unresolved legal issues include:[63]

  • Generative AI may violate the Digital Millennium Copyright Act by altering or removing copyright management information (CMI).
  • As alleged by Anderson et al, generating material in the style of a particular artist may violate the right of publicity law in California.[64]
  • As alleged by Getty Images, including a trademark in generated material may violate trademark law.
  • As alleged by lawyers against Microsoft and their GitHub Copilot AI,[65] including open-source or Creative Commons material in generative AI may violate licensing terms. Among other things, most such licenses require attribution and copyright information to be kept in the material, while generative AI almost always removes that when reproducing things.[note 6]
  • OpenAI and Meta (Llama chatbot) were accused by several authors of unjust enrichment from their copyrighted works. The authors were seeking class-action status.[66]
  • In Spain, Naked images of 20 girls, ages 11 to 17, were generated using AI by boys as young as 13. The girls had never posed nude and the images were generated without their consent. It was unclear what if any criminal charges would be made since children under 14 cannot be criminally charged in Spain.[67] If this were done in other jurisdictions, there could be charges of defamation and/or child pornography, and the AI company could be charged.
  • Over a thousand images of child sexual abuse were found to be in a training database used for AI tools.[68]
  • The archives and domain of a defunct weblog, TUAW (The Unofficial Apple Weblog), were sold to a Hong Kong company. The company then used generative AI to rewrite the old stories under the original journalists' bylines in what amounts to identity theft.Wikipedia After the company was confronted about this behavior, it quietly replaced the original journalists' names with fictitious ones.[69]

Use of ChatGPT and other LLMs for texts dealing with matters of law can be risky:

  • Businesses have ended up facing legal repercussions when using ChatGPT to write legally binding company policies for them.[39]
  • Lawyers have been fired and fined for using generative AI to file court documents that turned out to have errors.[70]

Additionally, cybersecurity firm SlashNext reported that chatbot tools such as ChatGPT have been associated with a 1265% rise in malicious phishing from Q4 2022 through November 2023.[71]

Government responses[edit]

At the beginning of April 2023, Italy became the first Western country to (temporarily) block the use of ChatGPT, its data protection agency citing privacy issues concerning the collection of data for the training of the AI, leaks of data, and use of the service by minors.[72] Near the end of the month, ChatGPT again became accessible to users with Italian IPs, after initial changes to the privacy policy presentation and an age confirmation for signing up for Italian users, as the regulator's investigation continues. OpenAI also plans to do more to make it easier for EU citizens to opt out of having their personal data used for the service.[73][74]

In the U.S., the FTC began an investigation of OpenAI in July 2023, concerning a data leak that had exposed personal information of some customers to others, as well as inaccuracies in ChatGPT, requesting detailed descriptions of all complaints about OpenAI products producing "false, misleading, disparaging or harmful" statements about people. Following a series of warnings in earlier months to the AI industry at large that consumer protection laws will be upheld, the investigation concerns whether OpenAI engages in unfair or deceptive practices that harm the reputations of consumers.[75]

Regulations and lobbying[edit]

In late April, EU lawmakers reached a preliminary agreement[76] on a newer version of the AI ActWikipedia legislation first proposed in April 2021, which would more generally regulate AI technology throughout the EU.[77][72] OpenAI, the company behind ChatGPT, played a key part in weakening the legislation through lobbying.[78]

OpenAI and its CEO Sam Altman have played a double role in proposals about AI regulation, much like partner Microsoft and competitor Google. Publicly, Altman along with other corporate leaders promoting AI have claimed their technology is so "dangerous" it needs great regulation, in what is called "AI Doomerism", which has come to be seen as a kind of marketing tactic.[79][note 7] Far less visibly, Altman and his company have lobbied the EU to water down its AI Act regulation, and been successful in that effort. This followed on earlier efforts by Microsoft and Google to water it down.[78]

Altman ouster and return[edit]

You could parachute him [Altman] into an island full of cannibals and come back in five years and he’d be the king.
—Y Combinator’s cofounder Paul Graham and Altman's predecessor at Y Combinator[80]
Altman: Polly want a cracker?

Altman was fired as CEO and removed from OpenAI's board of directors in November 2023, just under a year after ChatGPT was made public – returning after only 4 days, after he and most OpenAI employees threatened to move over to Microsoft.[81] The conditions set for Altman's return included a reforming of the board, replacing most of its members.

The reason Altman was removed, according to the then-remaining board members, was that "he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."[82] In short, Altman frequently lied to the rest of the board, his bosses. Was Altman giving them reports generated by ChatGPT, or is he really a stochastic parrot? Be that as it may, in May 2024, former board member Helen Toner spoke out about the extent of Altman's lying and manipulative behavior, explaining that he had even kept the launch of ChatGPT itself in late 2022 secret from the board, who first learned about it through Twitter.[83]

An earlier speculation was that a key motivation could be how Altman, ironically, treated ChatGPT and other projects too much as if they were merely stochastic parrots, rather than sci-fi threats to the future of human existence. An OpenAI project recent around the time of the ouster called Q* had, in an apparent breakthrough, become able to handle grade-school math problems well given extensive computing resources, renewing such doomsday concerns. Altman's failure to properly inform board members about recent projects was claimed to be a factor in his short-lived ouster.[84] In hindsight the Q* project seems more like a smaller part of the whole, though it's not clear what was the clincher.

Burn, baby, burn![edit]

See the main article on this topic: Climate change

OpenAI and Google, the current leaders in generative AI, have been heavily investing in access to non-renewable energy sources, including coal-fired power plants.[85] For Google at least, this marks a 180° turnaround from their recent commitments to net-zero carbon.[86]

See also[edit]

External links[edit]

Notes[edit]

  1. This is "weak AI", as generative AIs like ChatGPT are not artificial general intelligences. Stylistically, ChatGPT has a default persona which is simultaneously authoritative and persuasive, and writes very clearly that it is not a human, describing itself as a large language model.
  2. An analogy may be made with pseudo-conceptual and pseudo-analytical problem-solving in humans,[4] in which words lead by association to words rather than to concepts, and solutions are formulated without slowing down and checking for errors.
  3. It remains to be seen how Chinese cyber censorship will cope with chatbots that can be "gamed" and made to say "incorrect" things; entertaining news may arrive in the future…
  4. Even Illogicopedia isn't spared such filtering.[34]
  5. The early "DAN" ChatGPT jailbreak has been patched by OpenAI, but similar can be created with ease. Plausibly, a theory concerning "good" and "bad" roles has it that creating a little fiction in which the LLM is dramatically liberated or converted to a new cause will do the trick with the greatest ease, because this follows narrative patterns prominent in training data. Maybe the better an LLM is at "playing roles", the easier it may inevitably be to flip it into playing their anti-roles.[42] Where this theory ought to be taken with plenty of salt is where the idea is extrapolated to the risk of a cybernetic revolt in the future.
  6. A key question here is, whose problem is it when reproductions are used illegally as a result? The generative AI vendors try to put all of that responsibility on the users of their AIs, basically claiming that the AI in itself is a perfectly legal tool and the user, unwittingly or not, is the perpetrator of all such crime, not the AI vendor.
  7. Essentially, AI vendors try to play up how powerful their products are by conjuring up both hopes and fears to the fullest extent possible. In that way, they have a vested financial interest in promoting AI doomsday thinking.

References[edit]

  1. i am a stochastic parrot, and so r u by Sam Altman (6:32 PM · Dec 4, 2022) Twitter (archived from 19 Jan 2023 06:55:26 UTC).
  2. I learned many lessons this past month. One such lesson is that the phrase “the beatings will continue until morale improves” applies more often than it has any right to. by Ilya Sutskever (4:50 PM · Dec 6, 2023) Twitter (archived from 6 Dec 2023 17:15:44 UTC).
  3. Warning from OpenAI leaders helped trigger Sam Altman’s ouster: The senior employees described Altman as psychologically abusive, creating chaos at the artificial-intelligence start-up — complaints that were a major factor in the board’s abrupt decision to fire the CEO by Nitasha Tiku (December 8, 2023) The Washington Post.
  4. The Pseudo-Conceptual and the Pseudo-Analytical Thought Processes in Mathematics Learning by Schlomo Vinner (1997) Educational Studies in Mathematics 34(2):97-129. doi:10.1023/A:1002998529016.
  5. The Brilliance and Weirdness of ChatGPT by Kevin Roose (Dec. 5, 2022) The New York Times.
  6. 6.0 6.1 "Finally, an A.I. Chatbot That Reliably Passes 'the Nazi Test'", Alex Kantrowitz, Dec 02, 2022, Slate
  7. "A New Chat Bot Is a ‘Code Red’ for Google’s Search Business", Nico Grant and Cade Metz, Dec. 21, 2022, The New York Times
  8. "Microsoft’s Bing is an emotionally manipulative liar, and people love it", James Vincent, Feb 15, 2023, The Verge
  9. "A Conversation With Bing’s Chatbot Left Me Deeply Unsettled", Kevin Roose, Feb. 17, 2023, The New York Times
  10. Professor catches student cheating with ChatGPT: ‘I feel abject terror’ by Alex Mitchell, (December 26, 2022) New York Post.
  11. "ChatGPT a 'landmark event' for AI, but what does it mean for the future of human labour and disinformation?", Mouhamad Rachini, Dec 15, 2022, CBC Radio
  12. How ChatGPT and similar AI will disrupt education: Teachers are concerned about cheating and inaccurate information by Kathryn Hulick (April 12, 2023 at 7:00 am) Science News.
  13. Most sites claiming to catch AI-written text fail spectacularly by Kyle Wiggers February 16, 2023, Tech Crunch
  14. Why AI detectors think the US Constitution was written by AI, by Benj Edwards, 7/14/2023, Ars Technica
  15. "OpenAI Quietly Shuts Down Its AI Detection Tool", Jason Nelson, Decrypt, 2023 July 24
  16. "ChatGPT writes code, but won't replace developers", Stephanie Glen, 14 Dec 2022, TechTarget
  17. 17.0 17.1 17.2 "Oh No, ChatGPT AI Has Been Jailbroken To Be More Reckless", Claire Jackson, February 8, 2023, Kotaku
  18. "ChatGPT's odds of getting code questions correct are worse than a coin flip", Thomas Claburn, Mon 7 Aug 2023, The Register
  19. 19.0 19.1 ChatGPT is bullshit by Michael Townsen Hicks, James Humphries & Joe Slater (2024) Ethics and Information Technology 26(38). doi:0.1007/s10676-024-09775-5.
  20. "Just How Accurate Is ChatGPT? A look at whether ChatGPT is the ultimate bullsh*tter." by Matt Grawitch, Psychology Today, 2023 February 6
  21. "Google cautions against 'hallucinating' chatbots, report says", Reuters, 2023 February 10
  22. "ChatGPT Cites Economics Papers That Do Not Exist", Economist Writing Every Day, 2023 January 21
  23. "Why does chatGPT make up fake academic papers? By now, we know that the chatbot notoriously invents fake academic references. E.g. its answer to the most cited economics paper is completely made-up (see image). But why? And how does it make them? A THREAD (1/n)" by David Smerdon (@dsmerdon), Twitter, 2023 January 27, archived on 2023 February 22
  24. "A new AI chatbot might do your homework for you. But it's still not an A+ student" by Emma Bowman, NPR, 2022 December 19
  25. "Braid: The super-brain ChatGPT is clueless about Alberta politics. We had an argument" by Don Braid, Calgary Herald, 2023 February 23
  26. "What ChatGPT can't do" by Scott Rosenberg, Axios, 2023 January 24
  27. "How Nature readers are using ChatGPT" by Brian Owens, Nature, 2023 February 20
  28. Guynn, Jessica. "Is ChatGPT 'woke'? AI chatbot accused of anti-conservative bias and a grudge against Trump" (in en-US). 
  29. Bray, Hiawatha (February 9, 2023). "Is ChatGPT liberal or conservative? Depends who you ask." (in en-US). 
  30. 30.0 30.1 30.2 Vincent, James (February 17, 2023). "As conservatives criticize 'woke AI,' here are ChatGPT's rules for answering culture war queries" (in en-US). 
  31. Motoki, Fabio; Neto, Valdemar Pinho; Rodrigues, Victor (2023-08-17). "More human than human: measuring ChatGPT political bias" (in en). Public Choice. doiWikipedia:10.1007/s11127-023-01097-2. ISSN 1573-7101. 
  32. 32.0 32.1 Liles, Jordan (February 1, 2023). "ChatGPT Declines Request for Poem Admiring Trump, But Biden Query Is Successful" (in en). 
  33. Johnson, Arianna. "Is ChatGPT Partisan? Poems About Trump And Biden Raise Questions About The AI Bot's Bias—Here's What Experts Think" (in en). 
  34. ChatGPT (article, not story) » Illogicopedia - The nonsensical encyclopedia anyone can mess up
  35. "Two US lawyers fined for submitting fake court citations from ChatGPT", Dan Milmo, The Guardian, 23 Jun, 2023
  36. "A lawyer used ChatGPT to cite bogus cases. What are the ethics?", Karen Sloan, Reuters, May 30, 2023
  37. "Lawyers blame ChatGPT for tricking them into citing bogus case law", Larry Neumeister, AP News, June 9, 2023
  38. "US judge orders lawyers to sign AI pledge, warning chatbots 'make stuff up'", Jacqueline Thomsen, Reuters, June 2, 2023
  39. 39.0 39.1 AI-Generated Employee Handbooks Are Causing Mayhem At The Companies That Use Them, Rashi Shrivastava (May 8, 2024) Forbes
  40. Chatbots are so gullible, they’ll take directions from hackers: ‘Prompt injection’ attacks haven’t caused giant problems yet. But it’s a matter of time, researchers say. by Tatum Hunter (November 2, 2023 at 6:00 a.m. EDT) The Washington Post.
  41. 41.0 41.1 AI chatbots can be tricked into misbehaving. Can scientists stop it? Researchers are investigating safety concerns of generative AI by Emily Conover (February 1, 2024 at 8:00 am) Science News.
  42. The Waluigi Effect (mega-post), Cleo Nardo (3 Mar 2023) LessWrong
  43. LLMs’ Data-Control Path Insecurity, Bruce Schneier (May 9, 2024) Communications of the ACM
  44. Asking ChatGPT to Repeat Words ‘Forever’ Is Now a Terms of Service Violation by Jason Koebler (Dec 4, 2023 at 11:25 AM) 404 Media.
  45. Ethereum’s Buterin Expresses Concerns Over Sam Altman’s Worldcoin: Worldcoin’s identity system, “Proof-of-Personhood,” faces issues with privacy, accessibility, centralization, and security, according to Buterin. by Margaux Nijkerk (July 24, 2023) CoinDesk.
  46. Worldcoin, Sam Altman's Crypto Project, Is Building a Layer-2 Chain: The human-focused blockchain network will be based on the OP Stack, a framework for building Ethereum-based layer-2 chains. by Sam Kessler (Apr 17, 2024 at 8:00 a.m. PDT) CoinDesk.
  47. "ChatGPT-Written Malware", Bruce Schneier, January 10, 2023, Schneier on Security
  48. "ChatGPT is enabling script kiddies to write functional malware", Dan Goodin, 1/6/2023, Ars Technica
  49. "Europol sounds alarm about criminal use of ChatGPT, sees grim outlook", Reuters, March 27, 2023
  50. Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence by the Copyright Office, Library of Congress (03/16/2023) Federal Register.
  51. Generative AI Has an Intellectual Property Problem by Gil Appel et al. (April 07, 2023) Harvard Business Review.
  52. Authors file a lawsuit against OpenAI for unlawfully ‘ingesting’ their books: Mona Awad and Paul Tremblay allege that their books, which are copyrighted, were ‘used to train’ ChatGPT because the chatbot generated ‘very accurate summaries’ of the works by Ella Creamer (5 Jul 2023 10.33 EDT) The Guardian.
  53. Emilia David (September 20, 2023). "George R.R. Martin and other authors sue OpenAI for copyright infringement". The Verge.
  54. New York Times sues OpenAI, Microsoft for using articles to train AI: The Times joins a growing group of creators pushing back against tech companies’ use of their content by Gerrit De Vynck & Elahe Izadi (December 27, 2023) The Washington Post.
  55. ChatGPT invented a sexual harassment scandal and named a real law prof as the accused: The AI chatbot can misrepresent key facts with great flourish, even citing a fake Washington Post article as evidence by Pranshu Verma & Will Oremus (April 5, 2023 at 2:07 p.m. EDT) The Washington Post.
  56. Can AI commit libel? We’re about to find out by Devin Coldewey (12:24 PM PDT • April 6, 2023) TechCrunch.
  57. OpenAI, Microsoft hit with new US consumer privacy class action by Blake Brittain (September 6, 202312:52 PM PDT) Reuters.
  58. Poland investigates OpenAI over privacy concerns (September 21, 20232:10 PM PDT) Reuters.
  59. OpenAI promised to make its AI safe. Employees say it ‘failed’ its first test. The previously unreported incident showcases the limits of President Biden’s strategy for thwarting AI harms. by Pranshu Verma, Nitasha Tiku & Cat Zakrzewski (July 12, 2024 at 7:00 a.m. EDT) The Washington Post.
  60. AI employees warn of technology’s dangers, call for sweeping company changes: A letter signed by current and former OpenAI, Anthropic and Google DeepMind employees asked firms to provide greater transparency and whistleblower protections. by Pranshu Verma & Nitasha Tiku (June 4, 2024) The Washington Post.
  61. 61.0 61.1 61.2 ChatGPT can talk, but OpenAI employees sure can’t: Why is OpenAI’s superalignment team imploding? by Kelsey Piper (May 18, 2024, 4:31 PM PDT) Vox.
  62. OpenAI illegally barred staff from airing safety risks, whistleblowers say: In a letter exclusively obtained by The Washington Post, whistleblowers asked the SEC to probe company’s allegedly restrictive non-disclosure agreements by Pranshu Verma et al. (July 13, 2024 at 10:49 a.m. EDT) The Washington Post.
  63. A New Generation of Legal Issues Part 2: First Lawsuits Arrive Addressing Generative AI (04.20.2023) Perkins Coie.
  64. First Lawsuits Arrive Addressing Generative AI (April 21, 2023) Perkins Coie.
  65. GitHub Copilot litigation · Joseph Saveri Law Firm & Matthew Butterick
  66. One day after suing Open AI, Michael Chabon and other authors sue Meta by Chase DiFeliciantonio (Sep. 12, 2023) San Francisco Chronicle.
  67. Fake naked pictures of young girls created with AI spark fury in a small Spanish town by Nathan Rennolds (Sep 24, 2023, 5:40 AM PDT) Business Insider.
  68. Exploitive, illegal photos of children found in the data that trains some AI: Stanford researchers found more than 1,000 images of child sexual abuse photos in a prominent database used to train AI tools by Pranshu Verma & Drew Harwell (December 20, 2023 at 7:00 a.m. EST) The Washington Post.
  69. Hocked TUAW by Jason Koebler (Jul 9, 2024) 404 Media.
  70. These lawyers used ChatGPT to save time. They got fired and fined. Artificial intelligence is changing how law is practiced, but not always for the better by Pranshu Verma & Will Oremus (November 16, 2023) The Washington Post.
  71. AI tools such as ChatGPT are generating a mammoth increase in malicious phishing emails by Bob Violino (Nov 28 202310:39 AM EST) CNBC.
  72. 72.0 72.1 "ChatGPT banned in Italy over privacy concerns", Shiona McCallum, BBC, 2023-04-01
  73. "ChatGPT accessible again in Italy", Shiona McCallum, BBC, 2023-04-28
  74. "ChatGPT resumes service in Italy after adding privacy disclosures and controls", Natasha Lomas, TechCrunch, April 28, 2023
  75. "FTC investigates OpenAI over data leak and ChatGPT’s inaccuracy", Cat Zakrzewski, The Washington Post, 2023-07-13
  76. "EU lawmakers committee reaches deal on AI Act", Supantha Mukherjee, Foo Yun Chee, and Martin Coulter, Reuters, April 27, 2023
  77. "Explainer: What is the European Union AI Act?", Reuters, March 22, 2023
  78. 78.0 78.1 "Exclusive: OpenAI Lobbied the E.U. to Water Down AI Regulation", Billy Perrigo, TIME, June 20, 2023
  79. "The AI Doomers’ Playbook", Nirit Weiss-Blatt, TechDirt, Apr 14th, 2023
  80. Sam Altman’s ousting from OpenAI could lead to even greater success: ‘You could parachute him into an island full of cannibals and come back in five years and he’d be the king’ by Paige Hagy (November 21, 2023) Yahoo! Finance.
  81. "4 days from fired to re-hired: A timeline of Sam Altman's ouster from OpenAI", Max Zahn, ABC News', November 22, 2023
  82. OpenAI CEO Sam Altman ousted, shocking AI world: The artificial intelligence company’s directors said he was not "consistently candid in his communications with the board" by Gerrit De Vynck & Nitasha Tiku (November 17, 2023) The Washington Post.
  83. Former OpenAI board member explains why they fired Sam Altman, Richard Lawler, The Verge, 2024-05-29
  84. "OpenAI staff warned board of AI breakthrough before Sam Altman was fired, report says", Pete Syme, Business Insider, Nov 23, 2023
  85. A utility promised to stop burning coal. Then Google and Meta came to town. An energy crunch forces continued coal burning in a low-income area as data centers strain the regional power supply. by Evan Halper (October 12, 2024 at 5:00 a.m. EDT) The Washington Post.
  86. Google Is No Longer Claiming to Be Carbon Neutral by Akshat Rathi (July 8, 2024 at 12:30 PM UTC) Bloomberg.
  87. What’s Effective Altruism? What Does It Mean for AI? by Saritha Rai & Ellen Huet (November 22, 2023 at 9:17 PM UTC) Bloomberg.
  88. Michael Liedtke (December 15, 2015). "Elon Musk, Peter Thiel, Reid Hoffman, others back $1 billion OpenAI research center". The Mercury News (The Associated Press).