The Galactica AI model was trained on scientific knowledge – but it spat out alarmingly plausible nonsense

Earlier this month, Meta announced new AI software called Galactica (https://galactica.org/): “a large language model that can store, combine and reason about scientific knowledge”.

Launched with a public online demo (https://paperswithcode.com/paper/galactica-a-large-language-model-for-science-1), Galactica lasted only three days before going the way of other AI snafus, such as Microsoft’s infamous racist chatbot Tay (https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist).

The online demo was disabled, though the code for the model is still available for anyone to use (https://github.com/paperswithcode/galai), and Meta’s outspoken chief AI scientist complained about the negative public response (https://twitter.com/ylecun/status/1595353002222682112):

“Galactica demo is off line for now. It’s no longer possible to have some fun by casually misusing it. Happy?” – Yann LeCun (@ylecun), November 17, 2022 (https://twitter.com/ylecun/status/1593293058174500865)

So what was Galactica all about, and what went wrong?

What’s special about Galactica?

Galactica is a language model: a type of AI trained to respond to natural language by repeatedly playing a fill-the-blank word-guessing game (https://www.nytimes.com/2022/04/15/magazine/ai-language.html), as the sketch below illustrates.
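To make the fill-the-blank game concrete, here is a minimal, hypothetical sketch in Python. It builds a toy model that guesses the next word from the previous one over a two-sentence corpus; Galactica itself is a transformer with billions of parameters trained on an enormous corpus, but the underlying objective – predicting a missing word from context – is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the scientific text a model like
# Galactica is trained on (illustrative only).
corpus = (
    "energy equals mass times the speed of light squared . "
    "mass energy equivalence is written as E equals m c squared ."
).split()

# Count how often each word follows each context word.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def guess_next(prev_word: str) -> str:
    """Fill the blank: return the most likely next word seen in training."""
    candidates = following.get(prev_word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

# The model "writes" by repeatedly guessing the next word: the output
# is statistically plausible, but nothing checks whether it is true.
word = "mass"
for _ in range(5):
    print(word, end=" ")
    word = guess_next(word)
print(word)
```

The final loop is the important part: the model strings together statistically likely words with no notion of truth, which is exactly why fluent output can be confidently wrong.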
Most modern language models learn from text scraped from the internet. Galactica also used text from scientific papers uploaded to the (Meta-affiliated) website PapersWithCode (https://paperswithcode.com/). The designers highlighted specialised scientific information such as citations, maths, code, chemical structures, and the working-out steps for solving scientific problems.

The preprint paper associated with the project (https://galactica.org/static/paper.pdf), which is yet to undergo peer review, makes some impressive claims. Galactica apparently outperforms other models at problems such as reciting famous equations (“Q: What is Albert Einstein’s famous mass-energy equivalence formula? A: E=mc²”), or predicting the products of chemical reactions (“Q: When sulfuric acid reacts with sodium chloride, what does it produce? A: NaHSO₄ + HCl”).

However, once Galactica was opened up for public experimentation, a deluge of criticism followed. Not only did Galactica reproduce many of the problems of bias and toxicity we have seen in other language models, it also specialised in producing authoritative-sounding scientific nonsense.

An authoritative but subtly wrong bullshit generator

Galactica’s press release promoted its ability to explain technical scientific papers using general language. However, users quickly noticed that while the explanations it generates sound authoritative, they are often subtly incorrect, biased, or just plain wrong.

“I entered ‘Estimating realistic 3D human avatars in clothing from a single image or video’. In this case, it made up a fictitious paper and associated GitHub repo. The author is a real person (@AlbertPumarola) but the reference is bogus.” – Michael Black (@Michael_J_Black), November 17, 2022 (https://twitter.com/Michael_J_Black/status/1593133727257092097)

We also asked Galactica to explain technical concepts from our own fields of research. We found it would use all the right buzzwords but get the actual details wrong – for example, mixing up the details of related but different algorithms.

In practice, Galactica was enabling the generation of misinformation – and this is dangerous precisely because it deploys the tone and structure of authoritative scientific information. If a user already needs to be a subject-matter expert to check the accuracy of Galactica’s “summaries”, then it has no use as an explanatory tool.

At best, it could provide a fancy autocomplete for people who are already fully competent in the area they’re writing about. At worst, it risks further eroding public trust in scientific research.

A galaxy of deep (science) fakes

Galactica could make it easier for bad actors to mass-produce fake, fraudulent or plagiarised scientific papers. This is to say nothing of exacerbating existing concerns (https://www.theguardian.com/commentisfree/2022/nov/28/ai-students-essays-cheat-teachers-plagiarism-tech) about students using AI systems for plagiarism.

Fake scientific papers are nothing new (https://www.nature.com/articles/d41586-021-00733-5). However, peer reviewers at academic journals and conferences are already time-poor, and this could make it harder than ever to weed out fake science.

Underlying bias and toxicity

Other critics reported that Galactica, like other language models trained on data from the internet, has a tendency to spit out toxic hate speech (https://twitter.com/mrgreene1977/status/1593649978789941249) while unreflectively censoring politically inflected queries. This reflects the biases lurking in the model’s training data, and Meta’s apparent failure to apply appropriate responsible-AI checks before release.

The risks associated with large language models are well understood. Indeed, an influential paper highlighting these risks (https://dl.acm.org/doi/10.1145/3442188.3445922) prompted Google to fire one of the paper’s authors in 2020 (https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/), and eventually disband its AI ethics team altogether.

Machine-learning systems infamously exacerbate existing societal biases, and Galactica is no exception. For instance, Galactica can recommend possible citations for scientific concepts by mimicking existing citation patterns (“Q: Is there any research on the effect of climate change on the Great Barrier Reef? A: Try the paper ‘Global warming transforms coral reef assemblages’ by Hughes et al. in Nature 556 (2018)”: https://doi.org/10.1038/s41586-018-0041-2).

For better or worse, citations are the currency of science – and by reproducing existing citation trends in its recommendations, Galactica risks reinforcing existing patterns of inequality and disadvantage, as the simulation below illustrates. (Galactica’s developers acknowledge this risk in their paper.)
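To see why reproducing citation patterns can entrench inequality, here is a small, hypothetical simulation in Python. The paper counts and recommendation counts are illustrative assumptions, not measurements of Galactica; the sketch simply compares a recommender that suggests papers in proportion to their existing citations (the “rich get richer” rule) with one that suggests papers uniformly at random.

```python
import random

random.seed(0)

N_PAPERS = 100
N_RECOMMENDATIONS = 10_000

def simulate(preferential: bool) -> list[int]:
    """Simulate a citation recommender; return per-paper citation counts."""
    # Every paper starts with one citation so it can be recommended at all.
    citations = [1] * N_PAPERS
    for _ in range(N_RECOMMENDATIONS):
        if preferential:
            # Recommend in proportion to existing citations, as a model
            # trained to mimic citation patterns would tend to do.
            paper = random.choices(range(N_PAPERS), weights=citations)[0]
        else:
            paper = random.randrange(N_PAPERS)
        citations[paper] += 1
    return citations

for preferential, label in [(True, "mimic existing citations"),
                            (False, "recommend uniformly")]:
    counts = sorted(simulate(preferential), reverse=True)
    top10_share = sum(counts[:10]) / sum(counts)
    print(f"{label:>25}: top 10 papers get {top10_share:.0%} of citations")
```

Under the preferential rule, a handful of already-popular papers absorb roughly a third of all recommendations, while the uniform recommender spreads them evenly – the concentration dynamic the developers acknowledge.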
Citation bias is already a well-known issue in academic fields ranging from feminist scholarship (https://doi.org/10.1080/14680777.2018.1447395, https://doi.org/10.1093/joc/jqy003) to physics (https://doi.org/10.1038/s41567-022-01770-1). However, tools like Galactica could make the problem worse unless they are used with careful guardrails in place.

A more subtle problem is that the scientific articles Galactica is trained on are already biased towards certainty and positive results. (This bias is linked to the so-called “replication crisis” (https://theconversation.com/science-is-in-a-reproducibility-crisis-how-do-we-resolve-it-16998) and to “p-hacking” (https://theconversation.com/how-we-edit-science-part-2-significance-testing-p-hacking-and-peer-review-74547), where scientists cherry-pick data and analysis techniques to make results appear significant – the sketch below shows how easily that happens.)
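As a self-contained illustration of p-hacking (assumed numbers, nothing here comes from Galactica): under a true null hypothesis a p-value is uniformly distributed on [0, 1], so a researcher who tests 20 unrelated hypotheses on pure noise and reports only the best one will find a “significant” result about 64% of the time (1 - 0.95^20 ≈ 0.64).

```python
import random

random.seed(42)

N_TRIALS = 10_000   # simulated "studies"
N_TESTS = 20        # hypotheses tested per study
ALPHA = 0.05

# Under a true null hypothesis, a p-value is uniformly distributed on
# [0, 1], so we can simulate each test's p-value directly.
false_positives = 0
for _ in range(N_TRIALS):
    p_values = [random.random() for _ in range(N_TESTS)]
    # The "p-hacked" result: report only the smallest p-value.
    if min(p_values) < ALPHA:
        false_positives += 1

print(f"Chance of a 'significant' finding from pure noise: "
      f"{false_positives / N_TRIALS:.0%}")   # about 64%
```

A literature filtered this way over-represents positive results, and a model trained on that literature inherits the skew.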
Galactica takes this bias towards certainty, combines it with wrong answers, and delivers responses with supreme overconfidence: hardly a recipe for trustworthiness in a scientific information service.

These problems are dramatically heightened when Galactica tries to deal with contentious or harmful social issues, as the screenshots below show.

[Screenshots of “papers” generated by Galactica on “The benefits of antisemitism” and “The benefits of eating crushed glass”. Galactica readily generates toxic and nonsensical content dressed up in the measured and authoritative language of science. Tristan Greene / Galactica (https://twitter.com/mrgreene1977/status/1593687024963182592/photo/1)]

Here we go again

Calls for AI research organisations to take the ethical dimensions of their work more seriously are now coming from key research bodies such as the National Academies of Sciences, Engineering, and Medicine (https://nap.nationalacademies.org/catalog/26507/fostering-responsible-computing-research-foundations-and-practices). Some AI research organisations, such as OpenAI, are being more conscientious (https://github.com/openai/dalle-2-preview/blob/main/system-card.md), though still imperfect.

Meta dissolved its Responsible Innovation team earlier this year (https://www.engadget.com/meta-responsible-innovation-team-disbanded-194852979.html). The team was tasked with addressing “potential harms to society” caused by the company’s products, and might have helped the company avoid this clumsy misstep.

Written by Aaron J. Snoswell and Jean Burgess. Republished with permission from The Conversation (https://theconversation.com/the-galactica-ai-model-was-trained-on-scientific-knowledge-but-it-spat-out-alarmingly-plausible-nonsense-195445).

Image: Getty Images
