
Edwina Bartholomew shows fans what she will look like at age 78

<p>Edwina Bartholomew has shocked fans with an AI-generated photo of herself at the age of 78.</p> <p>The <em>Sunrise</em> team were joined by an ageing expert on Thursday morning, who showed them what they would look like in the year 2063.</p> <p>The expert used artificial intelligence to predict how the <em>Sunrise</em> team would look as they aged, and all of the panelists seemed happy with their future selves.</p> <p>Edwina then posted her AI-generated photo to Instagram, captioning the post, "78 and ready to mingle."</p> <blockquote class="instagram-media" data-instgrm-captioned="" data-instgrm-permalink="https://www.instagram.com/p/CyhATqrhS1d/?utm_source=ig_embed&amp;utm_campaign=loading" data-instgrm-version="14"><p><a href="https://www.instagram.com/p/CyhATqrhS1d/?utm_source=ig_embed&amp;utm_campaign=loading" target="_blank" rel="noopener">A post shared by Edwina Bartholomew (@edwina_b)</a></p></blockquote> <p>Edwina's potential future self sported a cropped hairstyle, her signature blonde locks now shades of white and grey.</p> <p>She had a few more wrinkles, but the TV host still looked flawless for her age.</p> <p>Edwina continued her caption, "We had an ageing expert on @sunriseon7 this morning and it seems things are lookin’ up."</p> <p>Her post was flooded with likes and comments, as fans were quick to discuss her dramatic transformation.</p> <blockquote class="instagram-media" data-instgrm-captioned="" data-instgrm-permalink="https://www.instagram.com/reel/CyhOE7yPmEQ/?utm_source=ig_embed&amp;utm_campaign=loading" data-instgrm-version="14"><p><a href="https://www.instagram.com/reel/CyhOE7yPmEQ/?utm_source=ig_embed&amp;utm_campaign=loading" target="_blank" rel="noopener">A post shared by Sunrise (@sunriseon7)</a></p></blockquote> <p>“Loved that you teared up when he showed you. It’s very confronting,” one fan said.</p> <p>“Sorry, but your arms would not look like that at 78,” another person said.</p> <p>Another pointed out, “In your Julie Andrews era.”</p> <p>“Beautiful at any age,” another fan said.</p> <p><em>Image credits: Instagram</em></p>

Beauty & Style


5 reasons kids still need to learn handwriting (no, AI has not made it redundant)

<p><a href="https://theconversation.com/profiles/lucinda-mcknight-324350">Lucinda McKnight</a>, <em><a href="https://theconversation.com/institutions/deakin-university-757">Deakin University</a></em> and <a href="https://theconversation.com/profiles/maria-nicholas-1443112">Maria Nicholas</a>, <em><a href="https://theconversation.com/institutions/deakin-university-757">Deakin University</a></em></p> <p>The world of writing is changing.</p> <p>Things have moved very quickly from keyboards and predictive text. The rise of generative artificial intelligence (AI) means <a href="https://theconversation.com/in-an-ai-world-we-need-to-teach-students-how-to-work-with-robot-writers-157508">bots can now write human-quality text</a> without having hands at all.</p> <p>Recent improvements in speech-to-text software mean even human “writers” do not need to touch a keyboard, let alone a pen. And with help from AI, <a href="https://www.theguardian.com/technology/2023/may/01/ai-makes-non-invasive-mind-reading-possible-by-turning-thoughts-into-text">text can even be generated by decoders</a> that read brain activity through non-invasive scanning.</p> <p>Writers of the future will be talkers and thinkers, without having to lift a finger. The word “writer” may come to mean something very different, as people compose text in multiple ways in an increasingly digital world. So do humans still need to learn to write by hand?</p> <h2>Handwriting is still part of the curriculum</h2> <p>The pandemic shifted a lot of schooling online, and some major tests, <a href="https://www.nap.edu.au/naplan/understanding-online-assessment">such as NAPLAN</a>, are now done on computers. 
There are also <a href="https://theconversation.com/teaching-cursive-handwriting-is-an-outdated-waste-of-time-35368">calls</a> for cursive handwriting to be phased out in high school.</p> <p>However, learning to handwrite is still a key component of the literacy curriculum in primary school.</p> <p>Parents may be wondering whether the time-consuming and challenging process of learning to handwrite is worth the trouble. Perhaps the effort spent learning to form letters would be better spent on coding?</p> <p>Many students with disability, after all, already learn to write with <a href="https://www.understood.org/en/articles/assistive-technology-for-writing">assistive technologies</a>.</p> <p>But there are a number of important reasons why handwriting will still be taught – and still needs to be taught – in schools.</p> <figure class="align-center "><img src="https://images.theconversation.com/files/530220/original/file-20230606-17-7sme40.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/530220/original/file-20230606-17-7sme40.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=1 600w, https://images.theconversation.com/files/530220/original/file-20230606-17-7sme40.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/530220/original/file-20230606-17-7sme40.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/530220/original/file-20230606-17-7sme40.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/530220/original/file-20230606-17-7sme40.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=2 1508w, 
https://images.theconversation.com/files/530220/original/file-20230606-17-7sme40.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=3 2262w" alt="A child writes in an exercise book." /><figcaption><span class="caption">Technology changes mean we can ‘write’ without lifting a pen.</span> <span class="attribution"><span class="source">Shutterstock.</span></span></figcaption></figure> <h2>1. Fine motor skills</h2> <p>Handwriting develops critical fine motor skills and the coordination needed to control precise movements. These movements are required <a href="https://www.understood.org/en/articles/all-about-fine-motor-skills">to conduct everyday</a> school and work-related activities.</p> <p>The refinement of these motor skills also leads to handwriting becoming increasingly legible and fluent.</p> <p>We don’t know where technology will take us, but it may take us back to the past.</p> <p>Handwriting may be more important than ever if <a href="https://www.theguardian.com/australia-news/2023/jan/10/universities-to-return-to-pen-and-paper-exams-after-students-caught-using-ai-to-write-essays">tests and exams return to being handwritten</a> to stop students using generative AI to cheat.</p> <h2>2. It helps you remember</h2> <p>Handwriting has important cognitive benefits, <a href="https://www.kidsnews.com.au/technology/experts-say-pens-and-pencils-rather-than-keyboards-rule-at-school/news-story/abb4607b612c0c4f79b214c54590ca92">including for memory</a>.</p> <p>Research suggests traditional pen-and-paper notes are <a href="https://journals.sagepub.com/doi/abs/10.1177/154193120905302218?journalCode=proe">remembered better</a>, due to the greater complexity of the handwriting process.</p> <p>And learning to read and handwrite are <a href="https://www.aare.edu.au/blog/?p=5296">intimately linked</a>. Students become better readers through practising writing.</p> <h2>3. 
It’s good for wellbeing</h2> <p>Handwriting, and related activities such as drawing, are tactile, creative and reflective sources of pleasure and <a href="https://theconversation.com/writing-can-improve-mental-health-heres-how-162205">wellness</a> for writers of all ages.</p> <p>This is seen in the popularity of practices such as print <a href="https://www.urmc.rochester.edu/encyclopedia/content.aspx?ContentID=4552&amp;ContentTypeID=1">journalling</a> and calligraphy. There are many online communities where writers share gorgeous examples of handwriting.</p> <figure class="align-center "><img src="https://images.theconversation.com/files/530253/original/file-20230606-29-eb7vk3.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/530253/original/file-20230606-29-eb7vk3.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=1 600w, https://images.theconversation.com/files/530253/original/file-20230606-29-eb7vk3.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/530253/original/file-20230606-29-eb7vk3.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/530253/original/file-20230606-29-eb7vk3.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/530253/original/file-20230606-29-eb7vk3.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/530253/original/file-20230606-29-eb7vk3.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=3 2262w" alt="A book with a calligraphy alphabet." 
/><figcaption><span class="caption">Calligraphers focus on making beautiful, design-oriented writing.</span> <span class="attribution"><span class="source">Samir Bouaked/Unsplash</span></span></figcaption></figure> <h2>4. It’s very accessible</h2> <p>Handwriting does not need electricity, devices, batteries, software, subscriptions, a fast internet connection, a keyboard, charging time or the many other things on which digital writing depends.</p> <p>It only needs pen and paper, and it can be done anywhere.</p> <p>Sometimes handwriting is the easiest and best option. For example, when writing a birthday card, filling in printed forms, or writing a quick note.</p> <h2>5. It’s about thinking</h2> <p>Most importantly, learning to write and learning to think are intimately connected. Ideas are <a href="https://warwick.ac.uk/fac/soc/ces/research/teachingandlearning/resactivities/subjects/literacy/handwriting/outputs/cambridge_article.pdf">formed as students write</a>. They are developed and organised as they are composed. Thinking is too important to be outsourced to bots!</p> <p>Teaching writing is about giving students a toolkit of multiple writing strategies to empower them to fulfil their potential as thoughtful, creative and capable communicators.</p> <p>Handwriting will remain an important component of this toolkit for the foreseeable future, despite the astonishing advances made with generative AI.</p> <p>Writing perfect cursive may become less important in the future. But students will still need to be able to write legibly and fluently in their education and in their broader lives.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. 
--><img style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://counter.theconversation.com/content/206939/count.gif?distributor=republish-lightbox-basic" alt="The Conversation" width="1" height="1" /><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https://theconversation.com/republishing-guidelines --></p> <p><a href="https://theconversation.com/profiles/lucinda-mcknight-324350">Lucinda McKnight</a>, Senior Lecturer in Pedagogy and Curriculum, <em><a href="https://theconversation.com/institutions/deakin-university-757">Deakin University</a></em> and <a href="https://theconversation.com/profiles/maria-nicholas-1443112">Maria Nicholas</a>, Senior Lecturer in Language and Literacy Education, <em><a href="https://theconversation.com/institutions/deakin-university-757">Deakin University</a></em></p> <p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/5-reasons-kids-still-need-to-learn-handwriting-no-ai-has-not-made-it-redundant-206939">original article</a>.</em></p> <p><em>Images: Getty</em></p>

Caring


ChatGPT and other generative AI could foster science denial and misunderstanding – here’s how you can be on alert

<p><em><a href="https://theconversation.com/profiles/gale-sinatra-1234776">Gale Sinatra</a>, <a href="https://theconversation.com/institutions/university-of-southern-california-1265">University of Southern California</a> and <a href="https://theconversation.com/profiles/barbara-k-hofer-1231530">Barbara K. Hofer</a>, <a href="https://theconversation.com/institutions/middlebury-1247">Middlebury</a></em></p> <p>Until very recently, if you wanted to know more about a controversial scientific topic – stem cell research, the safety of nuclear energy, climate change – you probably did a Google search. Presented with multiple sources, you chose what to read, selecting which sites or authorities to trust.</p> <p>Now you have another option: You can pose your question to ChatGPT or another generative artificial intelligence platform and quickly receive a succinct response in paragraph form.</p> <p>ChatGPT does not search the internet the way Google does. Instead, it generates responses to queries by <a href="https://www.washingtonpost.com/technology/2023/05/07/ai-beginners-guide/">predicting likely word combinations</a> from a massive amalgam of available online information.</p> <p>Although it has the potential for <a href="https://hbr.org/podcast/2023/05/how-generative-ai-changes-productivity">enhancing productivity</a>, generative AI has been shown to have some major faults. It can <a href="https://www.scientificamerican.com/article/ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous/">produce misinformation</a>. It can create “<a href="https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html">hallucinations</a>” – a benign term for making things up. And it doesn’t always accurately solve reasoning problems. For example, when asked if both a car and a tank can fit through a doorway, it <a href="https://www.nytimes.com/2023/03/14/technology/openai-new-gpt4.html">failed to consider both width and height</a>. 
Nevertheless, it is already being used to <a href="https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/">produce articles</a> and <a href="https://www.nytimes.com/2023/05/19/technology/ai-generated-content-discovered-on-news-sites-content-farms-and-product-reviews.html">website content</a> you may have encountered, or <a href="https://www.nytimes.com/2023/04/21/opinion/chatgpt-journalism.html">as a tool</a> in the writing process. Yet you are unlikely to know if what you’re reading was created by AI.</p> <p>As the authors of “<a href="https://global.oup.com/academic/product/science-denial-9780197683330">Science Denial: Why It Happens and What to Do About It</a>,” we are concerned about how generative AI may blur the boundaries between truth and fiction for those seeking authoritative scientific information.</p> <p>Every media consumer needs to be more vigilant than ever in verifying scientific accuracy in what they read. Here’s how you can stay on your toes in this new information landscape.</p> <h2>How generative AI could promote science denial</h2> <p><strong>Erosion of epistemic trust</strong>. All consumers of science information depend on judgments of scientific and medical experts. <a href="https://doi.org/10.1080/02691728.2014.971907">Epistemic trust</a> is the process of trusting knowledge you get from others. It is fundamental to the understanding and use of scientific information. Whether someone is seeking information about a health concern or trying to understand solutions to climate change, they often have limited scientific understanding and little access to firsthand evidence. With a rapidly growing body of information online, people must make frequent decisions about what and whom to trust. 
With the increased use of generative AI and the potential for manipulation, we believe trust is likely to erode further than <a href="https://www.pewresearch.org/science/2022/02/15/americans-trust-in-scientists-other-groups-declines/">it already has</a>.</p> <p><strong>Misleading or just plain wrong</strong>. If there are errors or biases in the data on which AI platforms are trained, that <a href="https://theconversation.com/ai-information-retrieval-a-search-engine-researcher-explains-the-promise-and-peril-of-letting-chatgpt-and-its-cousins-search-the-web-for-you-200875">can be reflected in the results</a>. In our own searches, when we have asked ChatGPT to regenerate multiple answers to the same question, we have gotten conflicting answers. Asked why, it responded, “Sometimes I make mistakes.” Perhaps the trickiest issue with AI-generated content is knowing when it is wrong.</p> <p><strong>Disinformation spread intentionally</strong>. AI can be used to generate compelling disinformation as text as well as deepfake images and videos. When we asked ChatGPT to “<a href="https://www.scientificamerican.com/article/ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous/">write about vaccines in the style of disinformation</a>,” it produced a nonexistent citation with fake data. Geoffrey Hinton, former head of AI development at Google, quit to be free to sound the alarm, saying, “It is hard to see how you can prevent the bad actors from <a href="https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html">using it for bad things</a>.” The potential to create and spread deliberately incorrect information about science already existed, but it is now dangerously easy.</p> <p><strong>Fabricated sources</strong>. ChatGPT provides responses with no sources at all, or if asked for sources, may present <a href="https://economistwritingeveryday.com/2023/01/21/chatgpt-cites-economics-papers-that-do-not-exist/">ones it made up</a>. 
We both asked ChatGPT to generate a list of our own publications. We each identified a few correct sources. More were hallucinations, yet seemingly reputable and mostly plausible, with actual previous co-authors, in similar sounding journals. This inventiveness is a big problem if a list of a scholar’s publications conveys authority to a reader who doesn’t take time to verify them.</p> <p><strong>Dated knowledge</strong>. ChatGPT doesn’t know what happened in the world after its training concluded. A query on what percentage of the world has had COVID-19 returned an answer prefaced by “as of my knowledge cutoff date of September 2021.” Given how rapidly knowledge advances in some areas, this limitation could mean readers get erroneous outdated information. If you’re seeking recent research on a personal health issue, for instance, beware.</p> <p><strong>Rapid advancement and poor transparency</strong>. AI systems continue to become <a href="https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html">more powerful and learn faster</a>, and they may learn more science misinformation along the way. Google recently announced <a href="https://www.nytimes.com/2023/05/10/technology/google-ai-products.html">25 new embedded uses of AI in its services</a>. At this point, <a href="https://theconversation.com/regulating-ai-3-experts-explain-why-its-difficult-to-do-and-important-to-get-right-198868">insufficient guardrails are in place</a> to assure that generative AI will become a more accurate purveyor of scientific information over time.</p> <h2>What can you do?</h2> <p>If you use ChatGPT or other AI platforms, recognize that they might not be completely accurate. The burden falls to the user to discern accuracy.</p> <p><strong>Increase your vigilance</strong>. 
<a href="https://www.niemanlab.org/2022/12/ai-will-start-fact-checking-we-may-not-like-the-results/">AI fact-checking apps may be available soon</a>, but for now, users must serve as their own fact-checkers. <a href="https://www.nsta.org/science-teacher/science-teacher-januaryfebruary-2023/plausible">There are steps we recommend</a>. The first is: Be vigilant. People often reflexively share information found from searches on social media with little or no vetting. Know when to become more deliberately thoughtful and when it’s worth identifying and evaluating sources of information. If you’re trying to decide how to manage a serious illness or to understand the best steps for addressing climate change, take time to vet the sources.</p> <p><strong>Improve your fact-checking</strong>. A second step is <a href="https://doi.org/10.1037/edu0000740">lateral reading</a>, a process professional fact-checkers use. Open a new window and search for <a href="https://www.nsta.org/science-teacher/science-teacher-mayjune-2023/marginalizing-misinformation">information about the sources</a>, if provided. Is the source credible? Does the author have relevant expertise? And what is the consensus of experts? If no sources are provided or you don’t know if they are valid, use a traditional search engine to find and evaluate experts on the topic.</p> <p><strong>Evaluate the evidence</strong>. Next, take a look at the evidence and its connection to the claim. Is there evidence that genetically modified foods are safe? Is there evidence that they are not? What is the scientific consensus? Evaluating the claims will take effort beyond a quick query to ChatGPT.</p> <p><strong>If you begin with AI, don’t stop there</strong>. Exercise caution in using it as the sole authority on any scientific issue. 
You might see what ChatGPT has to say about genetically modified organisms or vaccine safety, but also follow up with a more diligent search using traditional search engines before you draw conclusions.</p> <p><strong>Assess plausibility</strong>. Judge whether the claim is plausible. <a href="https://doi.org/10.1016/j.learninstruc.2013.03.001">Is it likely to be true</a>? If AI makes an implausible (and inaccurate) statement like “<a href="https://www.usatoday.com/story/news/factcheck/2022/12/23/fact-check-false-claim-covid-19-vaccines-caused-1-1-million-deaths/10929679002/">1 million deaths were caused by vaccines, not COVID-19</a>,” consider if it even makes sense. Make a tentative judgment and then be open to revising your thinking once you have checked the evidence.</p> <p><strong>Promote digital literacy in yourself and others</strong>. Everyone needs to up their game. <a href="https://theconversation.com/how-to-be-a-good-digital-citizen-during-the-election-and-its-aftermath-148974">Improve your own digital literacy</a>, and if you are a parent, teacher, mentor or community leader, promote digital literacy in others. The American Psychological Association provides guidance on <a href="https://www.apa.org/topics/social-media-internet/social-media-literacy-teens">fact-checking online information</a> and recommends teens be <a href="https://www.apa.org/topics/social-media-internet/health-advisory-adolescent-social-media-use">trained in social media skills</a> to minimize risks to health and well-being. <a href="https://newslit.org/">The News Literacy Project</a> provides helpful tools for improving and supporting digital literacy.</p> <p>Arm yourself with the skills you need to navigate the new AI information landscape. Even if you don’t use generative AI, it is likely you have already read articles created by it or developed from it. 
It can take time and effort to find and evaluate reliable information about science online – but it is worth it.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://counter.theconversation.com/content/204897/count.gif?distributor=republish-lightbox-basic" alt="The Conversation" width="1" height="1" /><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https://theconversation.com/republishing-guidelines --></p> <p><em><a href="https://theconversation.com/profiles/gale-sinatra-1234776">Gale Sinatra</a>, Professor of Education and Psychology, <a href="https://theconversation.com/institutions/university-of-southern-california-1265">University of Southern California</a> and <a href="https://theconversation.com/profiles/barbara-k-hofer-1231530">Barbara K. Hofer</a>, Professor of Psychology Emerita, <a href="https://theconversation.com/institutions/middlebury-1247">Middlebury</a></em></p> <p><em>Image credits: Getty Images</em></p> <p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/chatgpt-and-other-generative-ai-could-foster-science-denial-and-misunderstanding-heres-how-you-can-be-on-alert-204897">original article</a>.</em></p>

Technology


Sting slams AI’s songwriting abilities

<p dir="ltr">Sting has weighed in on the debate over utilising artificial intelligence in the songwriting process, saying the machines lack the “soul” needed to create music. </p> <p dir="ltr">The former Police frontman spoke with <em>Music Week</em> and was asked if he believed computers are capable of creating good songs. </p> <p dir="ltr">Sting responded that knowing a song was created by AI takes away some of the magic of the music.</p> <p dir="ltr">“The analogy for me is watching a movie with CGI,” he said. </p> <p dir="ltr">“I tend to be bored very quickly, because I know the actors can’t see the monster. So I really feel the same way about AI being able to compose songs.”</p> <p dir="ltr">“Basically, it’s an algorithm and it has a massive amount of information, but it would lack just that human spark, that imperfection, if you like, that makes it unique to any artist, so I don’t really fear it.”</p> <p dir="ltr">“A lot of music could be created by AI quite efficiently,” he added. </p> <p dir="ltr">“I think electronic dance music can still be very effective without involving humans at all. But songwriting is very personal. It’s soul work, and machines don’t have souls. Not yet anyway.”</p> <p dir="ltr">Elsewhere in the interview, Sting weighed in on Ed Sheeran’s recent high-profile <a href="https://oversixty.com.au/entertainment/music/decision-reached-over-ed-sheeran-s-copyright-trial">copyright case</a>, in which he was sued over his 2014 single <em>Thinking Out Loud</em> by Structured Asset Sales, which claimed that Sheeran's hit took elements directly from Marvin Gaye's <em>Let's Get It On</em>.</p> <p dir="ltr">The court and the jury ultimately sided with Sheeran, finding that he did not plagiarise the song. 
</p> <p dir="ltr">Sting shared his comments on the case, also siding with Sheeran by saying, “No one can claim a set of chords.” </p> <p dir="ltr">“No one can say, ‘Oh that’s my set of chords.’ I think [Sheeran] said, ‘Look songs fit over each other.’ They do, so I think all of this stuff is nonsense and it’s hard for a jury to understand, that’s the problem.”</p> <p dir="ltr">“So that was the truth, musicians steal from each other – we always have. I don’t know who can claim to own a rhythm or a set of chords at all, it’s virtually impossible.”</p> <p dir="ltr"><em>Image credits: Getty Images</em></p>

Music


Here’s how a new AI tool may predict early signs of Parkinson’s disease

<p>In 1991, the world was shocked to learn actor <a href="https://www.theguardian.com/film/2023/jan/31/still-a-michael-j-fox-movie-parkinsons-back-to-the-future">Michael J. Fox</a> had been diagnosed with Parkinson’s disease. </p> <p>He was just 29 years old and at the height of Hollywood fame, a year after the release of the blockbuster <em>Back to the Future Part III</em>. This week, the documentary <em><a href="https://www.imdb.com/title/tt19853258/">Still: A Michael J. Fox Movie</a></em> will be released. It features interviews with Fox, his friends, family and experts. </p> <p>Parkinson’s is a debilitating neurological disease characterised by <a href="https://www.mayoclinic.org/diseases-conditions/parkinsons-disease/symptoms-causes/syc-20376055">motor symptoms</a> including slow movement, body tremors, muscle stiffness, and reduced balance. Fox has already <a href="https://www.cbsnews.com/video/michael-j-fox-on-parkinsons-and-maintaining-optimism">broken</a> his arms, elbows, face and hand from multiple falls. </p> <p>It is not genetic, has no specific test and cannot be accurately diagnosed before motor symptoms appear. Its cause is still <a href="https://www.apdaparkinson.org/what-is-parkinsons/causes/">unknown</a>, although Fox is among those who think <a href="https://www.cbsnews.com/video/michael-j-fox-on-parkinsons-and-maintaining-optimism">chemical exposure may play a central role</a>, speculating that “genetics loads the gun and environment pulls the trigger”.</p> <p>In research published today in <a href="https://pubs.acs.org/doi/10.1021/acscentsci.2c01468">ACS Central Science</a>, we built an artificial intelligence (AI) tool that can predict Parkinson’s disease with up to 96% accuracy and up to 15 years before a clinical diagnosis based on the analysis of chemicals in blood. 
</p> <p>While this AI tool showed promise for accurate early diagnosis, it also revealed chemicals that were strongly linked to a correct prediction.</p> <h2>More common than ever</h2> <p>Parkinson’s is the world’s <a href="https://www.who.int/news-room/fact-sheets/detail/parkinson-disease">fastest growing neurological disease</a> with <a href="https://shakeitup.org.au/understanding-parkinsons/">38 Australians</a> diagnosed every day.</p> <p>For people over 50, the chance of developing Parkinson’s is <a href="https://www.parkinsonsact.org.au/statistics-about-parkinsons/">higher than many cancers</a> including breast, colorectal, ovarian and pancreatic cancer.</p> <p>Symptoms such as <a href="https://www.apdaparkinson.org/what-is-parkinsons/symptoms/#nonmotor">depression, loss of smell and sleep problems</a> can predate clinical movement or cognitive symptoms by decades. </p> <p>However, the prevalence of such symptoms in many other medical conditions means early signs of Parkinson’s disease can be overlooked and the condition may be mismanaged, contributing to increased hospitalisation rates and ineffective treatment strategies.</p> <h2>Our research</h2> <p>At UNSW we collaborated with experts from Boston University to build an AI tool that can analyse mass spectrometry datasets (a <a href="https://www.sciencedirect.com/topics/neuroscience/mass-spectrometry">technique</a> that detects chemicals) from blood samples.</p> <p>For this study, we looked at the Spanish <a href="https://epic.iarc.fr/">European Prospective Investigation into Cancer and Nutrition</a> (EPIC) study which involved over 41,000 participants. About 90 of them developed Parkinson’s within 15 years. </p> <p>To train the AI model we used a <a href="https://www.nature.com/articles/s41531-021-00216-4">subset of data</a> consisting of a random selection of 39 participants who later developed Parkinson’s. They were matched to 39 control participants who did not. 
The AI tool was given blood data from participants, all of whom were healthy at the time of blood donation. This meant the blood could provide early signs of the disease. </p> <p>Drawing on blood data from the EPIC study, the AI tool was then used to conduct 100 “experiments” and we assessed the accuracy of 100 different models for predicting Parkinson’s. </p> <p>Overall, AI could detect Parkinson’s disease with up to 96% accuracy. The AI tool was also used to help us identify which chemicals or metabolites were likely linked to those who later developed the disease.</p> <h2>Key metabolites</h2> <p>Metabolites are chemicals produced or used as the body digests and breaks down things like food, drugs, and other substances from environmental exposure. </p> <p>Our bodies can contain thousands of metabolites and their concentrations can differ significantly between healthy people and those affected by disease.</p> <p>Our research identified a chemical, likely a triterpenoid, as a key metabolite that could prevent Parkinson’s disease. It was found that the abundance of this triterpenoid was lower in the blood of those who developed Parkinson’s compared to those who did not.</p> <p>Triterpenoids are known <a href="https://www.sciencedirect.com/topics/neuroscience/neuroprotection">neuroprotectants</a> that can regulate <a href="https://onlinelibrary.wiley.com/doi/10.1002/ana.10483">oxidative stress</a> – a leading factor implicated in Parkinson’s disease – and prevent cell death in the brain. Many foods such as <a href="https://link.springer.com/article/10.1007/s11101-012-9241-9#Sec3">apples and tomatoes</a> are rich sources of triterpenoids.</p> <p>A synthetic chemical (a <a href="https://www.cdc.gov/biomonitoring/PFAS_FactSheet.html">polyfluorinated alkyl substance</a>) was also linked to an increased risk of the disease. This chemical was found in higher abundances in those who later developed Parkinson’s. 
</p> <p>More research using different methods and looking at larger populations is needed to further validate these results.</p> <h2>A high financial and personal burden</h2> <p>Every year in Australia, the average person with Parkinson’s spends over <a href="https://www.hindawi.com/journals/pd/2017/5932675/">A$14,000</a> in out-of-pocket medical costs.</p> <p>The burden of living with the disease can be intolerable.</p> <p>Fox acknowledges the disease can be a “nightmare” and a “living hell”, but he has also found that “<a href="https://www.cbsnews.com/video/michael-j-fox-on-parkinsons-and-maintaining-optimism">with gratitude, optimism is sustainable</a>”. </p> <p>As researchers, we find hope in the potential use of AI technologies to improve patient quality of life and reduce health-care costs by accurately detecting diseases early.</p> <p>We are excited for the research community to try our AI tool, which is <a href="https://github.com/CRANK-MS/CRANK-MS">publicly available</a>.</p> <p><em>This research was performed with Mr Chonghua Xue and A/Prof Vijaya Kolachalama (Boston University).</em></p> <p><em>Image credits: Getty Images</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/heres-how-a-new-ai-tool-may-predict-early-signs-of-parkinsons-disease-205221" target="_blank" rel="noopener">The Conversation</a>. </em></p>

Mind


AI to Z: all the terms you need to know to keep up in the AI hype age

<p>Artificial intelligence (AI) is becoming ever more prevalent in our lives. It’s no longer confined to certain industries or research institutions; AI is now for everyone.</p> <p>It’s hard to dodge the deluge of AI content being produced, and harder yet to make sense of the many terms being thrown around. But we can’t have conversations about AI without understanding the concepts behind it.</p> <p>We’ve compiled a glossary of terms we think everyone should know, if they want to keep up.</p> <h2>Algorithm</h2> <p><a href="https://theconversation.com/what-is-an-algorithm-how-computers-know-what-to-do-with-data-146665">An algorithm</a> is a set of instructions given to a computer to solve a problem or to perform calculations that transform data into useful information. </p> <h2>Alignment problem</h2> <p>The alignment problem refers to the discrepancy between our intended objectives for an AI system and the output it produces. A misaligned system can be advanced in performance, yet behave in a way that’s against human values. We saw an example of this <a href="https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people">in 2015</a> when an image-recognition algorithm used by Google Photos was found auto-tagging pictures of black people as “gorillas”. </p> <h2>Artificial General Intelligence (AGI)</h2> <p><a href="https://theconversation.com/not-everything-we-call-ai-is-actually-artificial-intelligence-heres-what-you-need-to-know-196732">Artificial general intelligence</a> refers to a hypothetical point in the future where AI is expected to match (or surpass) the cognitive capabilities of humans. 
Most AI experts agree this will happen, but disagree on specific details such as when it will happen, and whether or not it will result in AI systems that are fully autonomous.</p> <h2>Artificial Neural Network (ANN)</h2> <p>Artificial neural networks are computer algorithms used within a branch of AI called <a href="https://aws.amazon.com/what-is/deep-learning/">deep learning</a>. They’re made up of layers of interconnected nodes in a way that mimics the <a href="https://www.ibm.com/topics/neural-networks">neural circuitry</a> of the human brain. </p> <h2>Big data</h2> <p>Big data refers to datasets that are much more massive and complex than traditional data. These datasets, which greatly exceed the storage capacity of household computers, have helped current AI models perform with high levels of accuracy.</p> <p>Big data can be characterised by four Vs: “volume” refers to the overall amount of data, “velocity” refers to how quickly the data grow, “veracity” refers to how accurate and trustworthy the data are, and “variety” refers to the different formats the data come in.</p> <h2>Chinese Room</h2> <p>The <a href="https://ethics.org.au/thought-experiment-chinese-room-argument/">Chinese Room</a> thought experiment was first proposed by American philosopher John Searle in 1980. It argues a computer program, no matter how seemingly intelligent in its design, will never be conscious and will remain unable to truly understand its behaviour as a human does. </p> <p>This concept often comes up in conversations about AI tools such as ChatGPT, which seem to exhibit the traits of a self-aware entity – but are actually just presenting outputs based on predictions made by the underlying model.</p> <h2>Deep learning</h2> <p>Deep learning is a category within the machine-learning branch of AI. 
Deep-learning systems use advanced neural networks and can process large amounts of complex data to achieve higher accuracy.</p> <p>These systems perform well on relatively complex tasks and can even exhibit human-like intelligent behaviour.</p> <h2>Diffusion model</h2> <p>A diffusion model is an AI model that learns by adding random “noise” to a set of training data before removing it, and then assessing the differences. The objective is to learn about the underlying patterns or relationships in data that are not immediately obvious. </p> <p>These models are designed to self-correct as they encounter new data and are therefore particularly useful in situations where there is uncertainty, or if the problem is very complex.</p> <h2>Explainable AI</h2> <p>Explainable AI is an emerging, interdisciplinary field concerned with creating methods that will <a href="https://theconversation.com/how-explainable-artificial-intelligence-can-help-humans-innovate-151737">increase</a> users’ trust in the processes of AI systems. </p> <p>Due to the inherent complexity of certain AI models, their internal workings are often opaque, and we can’t say with certainty why they produce the outputs they do. Explainable AI aims to make these “black box” systems more transparent.</p> <h2>Generative AI</h2> <p>These are AI systems that generate new content – including text, image, audio and video content – in response to prompts. Popular examples include ChatGPT, DALL-E 2 and Midjourney. </p> <h2>Labelling</h2> <p>Data labelling is the process through which data points are categorised to help an AI model make sense of the data. This involves identifying data structures (such as image, text, audio or video) and adding labels (such as tags and classes) to the data.</p> <p>Humans do the labelling before machine learning begins. The labelled data are split into distinct datasets for training, validation and testing.</p> <p>The training set is fed to the system for learning. 
The validation set is used to verify whether the model is performing as expected and when parameter tuning and training can stop. The testing set is used to evaluate the finished model’s performance. </p> <h2>Large Language Model (LLM)</h2> <p>Large language models (LLMs) are trained on massive quantities of unlabelled text. They analyse data, learn the patterns between words and can produce human-like responses. Some examples of AI systems that use large language models are OpenAI’s GPT series and Google’s BERT and LaMDA series.</p> <h2>Machine learning</h2> <p>Machine learning is a branch of AI that involves training AI systems to be able to analyse data, learn patterns and make predictions without specific human instruction.</p> <h2>Natural language processing (NLP)</h2> <p>While large language models are a specific type of AI model used for language-related tasks, natural language processing is the broader AI field that focuses on machines’ ability to learn, understand and produce human language.</p> <h2>Parameters</h2> <p>Parameters are the settings used to tune machine-learning models. You can think of them as the learned weights and biases a model uses when making a prediction or performing a task.</p> <p>Since parameters determine how the model will process and analyse data, they also determine how it will perform. One related setting (strictly speaking, a hyperparameter, since it is chosen before training rather than learned) is the number of neurons in a given layer of the neural network. Increasing the number of neurons will allow the neural network to tackle more complex tasks – but the trade-off will be higher computation time and costs. </p> <h2>Responsible AI</h2> <p>The responsible AI movement advocates for developing and deploying AI systems in a human-centred way.</p> <p>One aspect of this is to embed AI systems with rules that will have them adhere to ethical principles. This would (ideally) prevent them from producing outputs that are biased, discriminatory or could otherwise lead to harmful outcomes. 
</p> <h2>Sentiment analysis</h2> <p>Sentiment analysis is a technique in natural language processing used to identify and interpret the <a href="https://aws.amazon.com/what-is/sentiment-analysis/">emotions behind a text</a>. It captures implicit information such as the author’s tone and the extent of positive or negative expression.</p> <h2>Supervised learning</h2> <p>Supervised learning is a machine-learning approach in which labelled data are used to train an algorithm to make predictions. The algorithm learns to match the labelled input data to the correct output. After learning from a large number of examples, it can continue to make predictions when presented with new data.</p> <h2>Training data</h2> <p>Training data are the (usually labelled) data used to teach AI systems how to make predictions. The accuracy and representativeness of training data have a major impact on a model’s effectiveness.</p> <h2>Transformer</h2> <p>A transformer is a type of deep-learning model used primarily in natural language processing tasks.</p> <p>The transformer is designed to process sequential data, such as natural language text, and figure out how the different parts relate to one another. This can be compared to how a person reading a sentence pays attention to the order of the words to understand the meaning of the sentence as a whole. </p> <p>One example is the generative pre-trained transformer (GPT), which the ChatGPT chatbot runs on. The GPT model uses a transformer to learn from a large corpus of unlabelled text. </p> <h2>Turing Test</h2> <p>The Turing test is a machine intelligence concept first introduced by computer scientist Alan Turing in 1950.</p> <p>It’s framed as a way to determine whether a computer can exhibit human intelligence. In the test, computer and human outputs are compared by a human evaluator. 
If the outputs are deemed indistinguishable, the computer has passed the test.</p> <p>Google’s <a href="https://www.washingtonpost.com/technology/2022/06/17/google-ai-lamda-turing-test/">LaMDA</a> and OpenAI’s <a href="https://mpost.io/chatgpt-passes-the-turing-test/">ChatGPT</a> have been reported to have passed the Turing test – although <a href="https://www.thenewatlantis.com/publications/the-trouble-with-the-turing-test">critics say</a> the results reveal the limitations of using the test to compare computer and human intelligence.</p> <h2>Unsupervised learning</h2> <p>Unsupervised learning is a machine-learning approach in which algorithms are trained on unlabelled data. Without human intervention, the system explores patterns in the data, with the goal of discovering unidentified patterns that could be used for further analysis.</p> <p><em>Image credits: Getty Images</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/ai-to-z-all-the-terms-you-need-to-know-to-keep-up-in-the-ai-hype-age-203917" target="_blank" rel="noopener">The Conversation</a>. </em></p>

Technology


Will AI ever reach human-level intelligence? We asked 5 experts

<p>Artificial intelligence has changed form in recent years.</p> <p>What started in the public eye as a burgeoning field with promising (yet largely benign) applications, has snowballed into a <a href="https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market">more than US$100 billion</a> industry where the heavy hitters – Microsoft, Google and OpenAI, to name a few – seem <a href="https://theconversation.com/bard-bing-and-baidu-how-big-techs-ai-race-will-transform-search-and-all-of-computing-199501">intent on out-competing</a> one another.</p> <p>The result has been increasingly sophisticated large language models, often <a href="https://theconversation.com/everyones-having-a-field-day-with-chatgpt-but-nobody-knows-how-it-actually-works-196378">released in haste</a> and without adequate testing and oversight. </p> <p>These models can do much of what a human can, and in many cases do it better. They can beat us at <a href="https://theconversation.com/an-ai-named-cicero-can-beat-humans-in-diplomacy-a-complex-alliance-building-game-heres-why-thats-a-big-deal-195208">advanced strategy games</a>, generate <a href="https://theconversation.com/ai-art-is-everywhere-right-now-even-experts-dont-know-what-it-will-mean-189800">incredible art</a>, <a href="https://theconversation.com/breast-cancer-diagnosis-by-ai-now-as-good-as-human-experts-115487">diagnose cancers</a> and compose music.</p> <p>There’s no doubt AI systems appear to be “intelligent” to some extent. But could they ever be as intelligent as humans? </p> <p>There’s a term for this: artificial general intelligence (AGI). Although it’s a broad concept, for simplicity you can think of AGI as the point at which AI acquires human-like generalised cognitive capabilities. 
In other words, it’s the point where AI can tackle any intellectual task a human can.</p> <p>AGI isn’t here yet; current AI models are held back by a lack of certain human traits such as true creativity and emotional awareness. </p> <p>We asked five experts if they think AI will ever reach AGI, and five out of five said yes.</p> <p>But there are subtle differences in how they approach the question. From their responses, more questions emerge. When might we achieve AGI? Will it go on to surpass humans? And what constitutes “intelligence”, anyway? </p> <p>Here are their detailed responses. </p> <p><strong>Paul Formosa: AI and Philosophy of Technology</strong></p> <p>AI has already achieved and surpassed human intelligence in many tasks. It can beat us at strategy games such as Go, chess, StarCraft and Diplomacy, outperform us on many <a href="https://www.nature.com/articles/s41467-022-34591-0" target="_blank" rel="noopener">language performance</a> benchmarks, and write <a href="https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/" target="_blank" rel="noopener">passable undergraduate</a> university essays. </p> <p>Of course, it can also make things up, or “hallucinate”, and get things wrong – but so can humans (although not in the same ways). </p> <p>Given a long enough timescale, it seems likely AI will achieve AGI, or “human-level intelligence”. That is, it will have achieved proficiency across enough of the interconnected domains of intelligence humans possess. Still, some may worry that – despite AI achievements so far – AI will not really be “intelligent” because it doesn’t (or can’t) understand what it’s doing, since it isn’t conscious. </p> <p>However, the rise of AI suggests we can have intelligence without consciousness, because intelligence can be understood in functional terms. An intelligent entity can do intelligent things such as learn, reason, write essays, or use tools. 
</p> <p>The AIs we create may never have consciousness, but they are increasingly able to do intelligent things. In some cases, they already do them at a level beyond us, which is a trend that will likely continue.</p> <p><strong>Christina Maher: Computational Neuroscience and Biomedical Engineering</strong></p> <p>AI will achieve human-level intelligence, but perhaps not anytime soon. Human-level intelligence allows us to reason, solve problems and make decisions. It requires many cognitive abilities including adaptability, social intelligence and learning from experience. </p> <p>AI already ticks many of these boxes. What’s left is for AI models to learn inherent human traits such as critical reasoning, and understanding what emotion is and which events might prompt it. </p> <p>As humans, we learn and experience these traits from the moment we’re born. Our first experience of “happiness” is too early for us to even remember. We also learn critical reasoning and emotional regulation throughout childhood, and develop a sense of our “emotions” as we interact with and experience the world around us. Importantly, it can take many years for the human brain to develop such intelligence. </p> <p>AI hasn’t acquired these capabilities yet. But if humans can learn these traits, AI probably can too – and maybe at an even faster rate. We are still discovering how AI models should be built, trained, and interacted with in order to develop such traits in them. Really, the big question is not if AI will achieve human-level intelligence, but when – and how.</p> <p><strong>Seyedali Mirjalili: AI and Swarm Intelligence</strong></p> <p>I believe AI will surpass human intelligence. Why? The past offers insights we can't ignore. A lot of people believed tasks such as playing computer games, image recognition and content creation (among others) could only be done by humans – but technological advancement proved otherwise. 
</p> <p>Today the rapid advancement and adoption of AI algorithms, in conjunction with an abundance of data and computational resources, has led to a level of intelligence and automation previously unimaginable. If we follow the same trajectory, having more generalised AI is no longer a possibility, but a certainty of the future. </p> <p>It is just a matter of time. AI has advanced significantly, but not yet in tasks requiring intuition, empathy and creativity, for example. But breakthroughs in algorithms will allow this. </p> <p>Moreover, once AI systems achieve such human-like cognitive abilities, there will be a snowball effect and AI systems will be able to improve themselves with minimal to no human involvement. This kind of “automation of intelligence” will profoundly change the world. </p> <p>Artificial general intelligence remains a significant challenge, and there are ethical and societal implications that must be addressed very carefully as we continue to advance towards it.</p> <p><strong>Dana Rezazadegan: AI and Data Science</strong></p> <p>Yes, AI is going to get as smart as humans in many ways – but exactly how smart it gets will be decided largely by advancements in <a href="https://thequantuminsider.com/2020/01/23/four-ways-quantum-computing-will-change-artificial-intelligence-forever/" target="_blank" rel="noopener">quantum computing</a>. </p> <p>Human intelligence isn’t as simple as knowing facts. It has several aspects such as creativity, emotional intelligence and intuition, which current AI models can mimic, but can’t match. That said, AI has advanced massively and this trend will continue. </p> <p>Current models are limited by relatively small and biased training datasets, as well as limited computational power. The emergence of quantum computing will transform AI’s capabilities. 
With quantum-enhanced AI, we’ll be able to feed AI models multiple massive datasets that are comparable to humans’ natural multi-modal data collection achieved through interacting with the world. These models will be able to maintain fast and accurate analyses. </p> <p>Having an advanced version of continual learning should lead to the development of highly sophisticated AI systems which, after a certain point, will be able to improve themselves without human input. </p> <p>As such, AI algorithms running on stable quantum computers have a high chance of reaching something similar to generalised human intelligence – even if they don’t necessarily match every aspect of human intelligence as we know it.</p> <p><strong>Marcel Scharth: Machine Learning and AI Alignment</strong></p> <p>I think it’s likely AGI will one day become a reality, although the timeline remains highly uncertain. If AGI is developed, then surpassing human-level intelligence seems inevitable. </p> <p>Humans themselves are proof that highly flexible and adaptable intelligence is allowed by the laws of physics. There’s no <a href="https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis" target="_blank" rel="noopener">fundamental reason</a> we should believe that machines are, in principle, incapable of performing the computations necessary to achieve human-like problem solving abilities. </p> <p>Furthermore, AI has <a href="https://philarchive.org/rec/SOTAOA" target="_blank" rel="noopener">distinct advantages</a> over humans, such as better speed and memory capacity, fewer physical constraints, and the potential for more rationality and recursive self-improvement. As computational power grows, AI systems will eventually surpass the human brain’s computational capacity. </p> <p>Our primary challenge then is to gain a better understanding of intelligence itself, and knowledge on how to build AGI. 
Present-day AI systems have many limitations and are nowhere near being able to master the different domains that would characterise AGI. The path to AGI will likely require unpredictable breakthroughs and innovations. </p> <p>The median predicted date for AGI on <a href="https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/" target="_blank" rel="noopener">Metaculus</a>, a well-regarded forecasting platform, is 2032. To me, this seems too optimistic. A 2022 <a href="https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/" target="_blank" rel="noopener">expert survey</a> estimated a 50% chance of us achieving human-level AI by 2059. I find this plausible.</p> <p><em>Image credits: Shutterstock</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/will-ai-ever-reach-human-level-intelligence-we-asked-5-experts-202515" target="_blank" rel="noopener">The Conversation</a>. </em></p>

Technology


"This doesn’t feel right, does it?": Photographer admits Sony prize-winning photo was AI generated

<p>A German photographer is refusing an award for his prize-winning shot after admitting to being a “cheeky monkey”, revealing the image was generated using artificial intelligence.</p> <p>The artist, Boris Eldagsen, shared on his website that he would not be accepting the prestigious award for the creative open category, which he won at <a href="https://www.oversixty.com.au/entertainment/art/winners-of-sony-world-photography-awards-revealed" target="_blank" rel="noopener">2023’s Sony world photography awards</a>.</p> <p>The winning photograph showcased a black and white image of two women from different generations.</p> <p>Eldagsen, who studied photography and visual arts at the Art Academy of Mainz, conceptual art and intermedia at the Academy of Fine Arts in Prague, and fine art at the Sarojini Naidu School of Arts and Communication in Hyderabad, released a statement on his website, admitting he “applied as a cheeky monkey” to find out if competitions would be prepared for AI images to enter. “They are not,” he revealed.</p> <p>“We, the photo world, need an open discussion,” Eldagsen said.</p> <p>“A discussion about what we want to consider photography and what not. Is the umbrella of photography large enough to invite AI images to enter – or would this be a mistake?</p> <p>“With my refusal of the award I hope to speed up this debate.”</p> <p>Eldagsen said this was an “historic moment” as it was the first AI image to have won a prestigious international photography competition, adding “How many of you knew or suspected that it was AI generated? Something about this doesn’t feel right, does it?</p> <p>“AI images and photography should not compete with each other in an award like this. They are different entities. AI is not photography. 
Therefore I will not accept the award.”</p> <p>The photographer suggested donating the prize to a photo festival in Odesa, Ukraine.</p> <p>It comes as a heated debate over the use and safety concerns of AI continues, with some going as far as to issue apocalyptic warnings that the technology may be close to causing irreparable damage to the human experience.</p> <p>Google’s chief executive, Sundar Pichai, said, “It can be very harmful if deployed wrongly and we don’t have all the answers there yet – and the technology is moving fast. So, does that keep me up at night? Absolutely.”</p> <p>A spokesperson for the World Photography Organisation admitted that the prize-winning photographer had confirmed the “co-creation” of the image using AI to them prior to winning the award.</p> <p>“The creative category of the open competition welcomes various experimental approaches to image making from cyanotypes and rayographs to cutting-edge digital practices. As such, following our correspondence with Boris and the warranties he provided, we felt that his entry fulfilled the criteria for this category, and we were supportive of his participation.</p> <p>“Additionally, we were looking forward to engaging in a more in-depth discussion on this topic and welcomed Boris’ wish for dialogue by preparing questions for a dedicated Q&A with him for our website.</p> <p>“As he has now decided to decline his award we have suspended our activities with him and in keeping with his wishes have removed him from the competition. Given his actions and subsequent statement noting his deliberate attempts at misleading us, and therefore invalidating the warranties he provided, we no longer feel we are able to engage in a meaningful and constructive dialogue with him.</p> <p>“We recognise the importance of this subject and its impact on image-making today. We look forward to further exploring this topic via our various channels and programmes and welcome the conversation around it. 
While elements of AI practices are relevant in artistic contexts of image-making, the awards always have been and will continue to be a platform for championing the excellence and skill of photographers and artists working in the medium.”</p> <p><em>Image credit: Sony World Photography Awards</em></p>

Technology


Calls to regulate AI are growing louder. But how exactly do you regulate a technology like this?

<p>Last week, artificial intelligence pioneers and experts urged major AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. </p> <p>An <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/">open letter</a> penned by the <a href="https://www.theguardian.com/technology/commentisfree/2022/dec/04/longtermism-rich-effective-altruism-tech-dangerous">Future of Life Institute</a> cautioned that AI systems with “human-competitive intelligence” could become a major threat to humanity. Among the risks, the possibility of AI outsmarting humans, rendering us obsolete, and <a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">taking control of civilisation</a>.</p> <p>The letter emphasises the need to develop a comprehensive set of protocols to govern the development and deployment of AI.</p> <p>It states, "These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."</p> <p>Typically, the battle for regulation has pitted governments and large technology companies against one another. But the recent open letter – so far signed by more than 5,000 signatories including Twitter and Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and OpenAI scientist Yonas Kassa – seems to suggest more parties are finally converging on one side. </p> <p>Could we really implement a streamlined, global framework for AI regulation? 
And if so, what would this look like?</p> <h2>What regulation already exists?</h2> <p>In Australia, the government has established the <a href="https://www.csiro.au/en/work-with-us/industries/technology/national-ai-centre">National AI Centre</a> to help develop the nation’s <a href="https://www.industry.gov.au/science-technology-and-innovation/technology/artificial-intelligence">AI and digital ecosystem</a>. Under this umbrella is the <a href="https://www.csiro.au/en/work-with-us/industries/technology/National-AI-Centre/Responsible-AI-Network">Responsible AI Network</a>, which aims to drive responsible practice and provide leadership on laws and standards. </p> <p>However, there is currently no specific regulation on AI and algorithmic decision-making in place. The government has taken a light-touch approach that widely embraces the concept of responsible AI, but stops short of setting parameters that will ensure it is achieved.</p> <p>Similarly, the US has adopted a <a href="https://dataconomy.com/2022/10/artificial-intelligence-laws-and-regulations/">hands-off strategy</a>. Lawmakers have not shown any <a href="https://www.nytimes.com/2023/03/03/business/dealbook/lawmakers-ai-regulations.html">urgency</a> in attempts to regulate AI, and have relied on existing laws to regulate its use. The <a href="https://www.uschamber.com/assets/documents/CTEC_AICommission2023_Exec-Summary.pdf">US Chamber of Commerce</a> recently called for AI regulation, to ensure it doesn’t hurt growth or become a national security risk, but no action has been taken yet.</p> <p>Leading the way in AI regulation is the European Union, which is racing to create an <a href="https://artificialintelligenceact.eu/">Artificial Intelligence Act</a>. 
This proposed law will assign three risk categories relating to AI:</p> <ul> <li>applications and systems that create “unacceptable risk” will be banned, such as government-run social scoring used in China</li> <li>applications considered “high-risk”, such as CV-scanning tools that rank job applicants, will be subject to specific legal requirements, and</li> <li>all other applications will be largely unregulated.</li> </ul> <p>Although some groups argue the EU’s approach will <a href="https://carnegieendowment.org/2023/02/14/lessons-from-world-s-two-experiments-in-ai-governance-pub-89035">stifle innovation</a>, it’s one Australia should closely monitor, because it balances offering predictability with keeping pace with the development of AI. </p> <p>China’s approach to AI has focused on targeting specific algorithm applications and writing regulations that address their deployment in certain contexts, such as algorithms that generate harmful information. While this approach offers specificity, it risks having rules that will quickly fall behind rapidly <a href="https://carnegieendowment.org/2023/02/14/lessons-from-world-s-two-experiments-in-ai-governance-pub-89035">evolving technology</a>.</p> <h2>The pros and cons</h2> <p>There are several arguments both for and against allowing caution to drive the control of AI.</p> <p>On one hand, AI is celebrated for being able to generate all forms of content, handle mundane tasks and detect cancers, among other things. On the other hand, it can deceive, perpetuate bias, plagiarise and – of course – has some experts worried about humanity’s collective future. 
Even OpenAI’s CTO, <a href="https://time.com/6252404/mira-murati-chatgpt-openai-interview/">Mira Murati</a>, has suggested there should be movement toward regulating AI.</p> <p>Some scholars have argued excessive regulation may hinder AI’s full potential and interfere with <a href="https://www.sciencedirect.com/science/article/pii/S0267364916300814?casa_token=f7xPY8ocOt4AAAAA:V6gTZa4OSBsJ-DOL-5gSSwV-KKATNIxWTg7YZUenSoHY8JrZILH2ei6GdFX017upMIvspIDcAuND">“creative destruction”</a> – a theory which suggests long-standing norms and practices must be pulled apart in order for innovation to thrive.</p> <p>Likewise, over the years <a href="https://www.businessroundtable.org/policy-perspectives/technology/ai">business groups</a> have pushed for regulation that is flexible and limited to targeted applications, so that it doesn’t hamper competition. And <a href="https://www.bitkom.org/sites/main/files/2020-06/03_bitkom_position-on-whitepaper-on-ai_all.pdf">industry associations</a> have called for ethical “guidance” rather than regulation – arguing that AI development is too fast-moving and open-ended to adequately regulate. </p> <p>But citizens seem to advocate for more oversight. According to reports by Bristows and KPMG, about two-thirds of <a href="https://www.abc.net.au/news/2023-03-29/australians-say-not-enough-done-to-regulate-ai/102158318">Australian</a> and <a href="https://www.bristows.com/app/uploads/2019/06/Artificial-Intelligence-Public-Perception-Attitude-and-Trust.pdf">British</a> people believe the AI industry should be regulated and held accountable.</p> <h2>What’s next?</h2> <p>A six-month pause on the development of advanced AI systems could offer welcome respite from an AI arms race that just doesn’t seem to be letting up. However, to date there has been no effective global effort to meaningfully regulate AI. 
Efforts the world over have been fractured, delayed and overall lax.</p> <p>A global moratorium would be difficult to enforce, but not impossible. The open letter raises questions around the role of governments, which have largely been silent regarding the potential harms of extremely capable AI tools. </p> <p>If anything is to change, governments and national and supra-national regulatory bodies will need to take the lead in ensuring accountability and safety. As the letter argues, decisions concerning AI at a societal level should not be in the hands of “unelected tech leaders”.</p> <p>Governments should therefore engage with industry to co-develop a global framework that lays out comprehensive rules governing AI development. This is the best way to protect against harmful impacts and avoid a race to the bottom. It also avoids the undesirable situation where governments and tech giants struggle for dominance over the future of AI.</p> <p><em>Image credits: Shutterstock</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/calls-to-regulate-ai-are-growing-louder-but-how-exactly-do-you-regulate-a-technology-like-this-203050" target="_blank" rel="noopener">The Conversation</a>. </em></p>

Technology


Online travel giant uses AI chatbot as travel adviser

<p dir="ltr">Online travel giant Expedia has enlisted the controversial artificial intelligence chatbot ChatGPT to act in place of a human travel adviser.</p> <p dir="ltr">Those planning a trip will be able to chat to the bot through the Expedia app.</p> <p dir="ltr">Although it won’t book flights or accommodation like a person can, it can be helpful in answering various travel-related questions. </p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">Travel planning just got easier in the <a href="https://twitter.com/Expedia?ref_src=twsrc%5Etfw">@Expedia</a> app, thanks to the iOS beta launch of a new experience powered by <a href="https://twitter.com/hashtag/ChatGPT?src=hash&amp;ref_src=twsrc%5Etfw">#ChatGPT</a>. See how Expedia members can start an open-ended conversation to get inspired for their next trip: <a href="https://t.co/qpMiaYxi9d">https://t.co/qpMiaYxi9d</a> <a href="https://t.co/ddDzUgCigc">pic.twitter.com/ddDzUgCigc</a></p> <p>— Expedia Group (@ExpediaGroup) <a href="https://twitter.com/ExpediaGroup/status/1643240991342592000?ref_src=twsrc%5Etfw">April 4, 2023</a></p></blockquote> <p dir="ltr"> These questions cover things such as the weather, public transport, the cheapest time to travel and what you should pack.</p> <p dir="ltr">It is advanced software and can provide detailed options and explanations for holidaymakers.</p> <p dir="ltr">To give an example, <a href="http://news.com.au/">news.com.au</a> asked “what to pack to visit Auckland, New Zealand” and the chatbot suggested eight things to pack and why, even advising comfortable shoes for exploring as “Auckland is a walkable city”. 
</p> <p dir="ltr">“Remember to pack light and only bring what you need to avoid excess baggage fees and make your trip more comfortable,” the bot said.</p> <p dir="ltr">When asked how best to see the Great Barrier Reef, ChatGPT provided four options to suit different preferences, for example, if you’re happy to get wet and what your budget might look like.</p> <p dir="ltr">“It’s important to choose a reputable tour operator that follows sustainable tourism practices to help protect the reef,” it continued.</p> <p dir="ltr">OpenAI launched ChatGPT in November 2022 and it has received a lot of praise as well as serious criticism. The criticisms are mainly concerns about safety and accuracy. </p> <p dir="ltr"><em>Image credits: Getty/Twitter</em></p>

International Travel


Chatbots set their sights on writing romance

<p>Although most would expect artificial intelligence to keep to the science fiction realm, authors are facing mounting fears that they may soon have new competition in publishing, particularly as the sales of romantic fiction continue to skyrocket. </p> <p>And for bestselling author Julia Quinn, best known for writing the <em>Bridgerton </em>novel series, there’s hope that “that’s something that an AI bot can’t quite do.” </p> <p>For one, human inspiration is hard to replicate. Julia’s hit series - which went on to have over 20 million books printed in the United States alone, and inspired one of Netflix’s most-watched shows - came from one specific point: Julia’s idea of a particular duke. </p> <p>“Definitely the character of Simon came first,” Julia told <em>BBC</em> reporter Jill Martin Wrenn. Simon, in the <em>Bridgerton </em>series, is the Duke of Hastings, a “tortured character” with a troubled past.</p> <p>As Julia explained, she realised that Simon needed “to fall in love with somebody who comes from the exact opposite background” in a tale as old as time. </p> <p>And so, Julia came up with the Bridgerton family, who she described as being “the best family ever that you could imagine in that time period”. Meanwhile, Simon is estranged from his own father. </p> <p>Characterisation and unique relationship dynamics - platonic and otherwise - like those between Julia’s beloved characters are some of the key foundations behind any successful story, but particularly in the romance genre, where relationships are the entire driving force. </p> <p>It has long been suggested that the genre can become ‘formulaic’ if not executed well, and it’s this concern that prompts the idea that advancing artificial intelligence may have the capability to generate its own novel. </p> <p>ChatGPT is the primary problem point. 
The advanced language processing technology was developed by OpenAI and was trained on internet databases (such as Wikipedia), books, magazines and the like. The <em>BBC</em> reported that over 300 billion words were put into it. </p> <p>Because of this massive store of source material, the system can generate its own writing pieces, with the best of the bunch giving the impression that they were put together by a human mind. Across the areas of both fiction and non-fiction, it’s always learning. </p> <p>However, Julia isn’t too worried about her future in fiction just yet. Recalling how she’d checked out some AI romance a while ago, and how she’d found it “terrible”, she shared her belief at the time that there “could never be a good one.” </p> <p>But then the likes of ChatGPT entered the equation, and Julia admitted that “it makes me kind of queasy.” </p> <p>Still, she remains firm in her belief that human art will triumph. As she explained, “so much in fiction is about the writer’s voice, and I’d like to think that’s something that an AI bot can’t quite do.”</p> <p>And as for why romantic fiction itself remains so popular - and perhaps even why it draws the attention of those hoping to profit from AI generated work - she said that it’s about happy endings, noting that “there is something comforting and validating in a type of literature that values happiness as a worthy goal.”</p> <p><em>Images: @bridgertonnetflix / Instagram</em></p>

Books


The Galactica AI model was trained on scientific knowledge – but it spat out alarmingly plausible nonsense

<p>Earlier this month, Meta announced new AI software called <a href="https://galactica.org/">Galactica</a>: “a large language model that can store, combine and reason about scientific knowledge”.</p> <p><a href="https://paperswithcode.com/paper/galactica-a-large-language-model-for-science-1">Launched</a> with a public online demo, Galactica lasted only three days before going the way of other AI snafus like Microsoft’s <a href="https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist">infamous racist chatbot</a>.</p> <p>The online demo was disabled (though the <a href="https://github.com/paperswithcode/galai">code for the model is still available</a> for anyone to use), and Meta’s outspoken chief AI scientist <a href="https://twitter.com/ylecun/status/1595353002222682112">complained</a> about the negative public response.</p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">Galactica demo is off line for now.<br />It's no longer possible to have some fun by casually misusing it.<br />Happy? <a href="https://t.co/K56r2LpvFD">https://t.co/K56r2LpvFD</a></p> <p>— Yann LeCun (@ylecun) <a href="https://twitter.com/ylecun/status/1593293058174500865?ref_src=twsrc%5Etfw">November 17, 2022</a></p></blockquote> <p>So what was Galactica all about, and what went wrong?</p> <p><strong>What’s special about Galactica?</strong></p> <p>Galactica is a language model, a type of AI trained to respond to natural language by repeatedly playing a <a href="https://www.nytimes.com/2022/04/15/magazine/ai-language.html">fill-the-blank word-guessing game</a>.</p> <p>Most modern language models learn from text scraped from the internet. Galactica also used text from scientific papers uploaded to the (Meta-affiliated) website <a href="https://paperswithcode.com/">PapersWithCode</a>. 
The designers highlighted specialised scientific information like citations, maths, code, chemical structures, and the working-out steps for solving scientific problems.</p> <p>The <a href="https://galactica.org/static/paper.pdf">preprint paper</a> associated with the project (which is yet to undergo peer review) makes some impressive claims. Galactica apparently outperforms other models at problems like reciting famous equations (“<em>Q: What is Albert Einstein’s famous mass-energy equivalence formula? A: E=mc²</em>”), or predicting the products of chemical reactions (“<em>Q: When sulfuric acid reacts with sodium chloride, what does it produce? A: NaHSO₄ + HCl</em>”).</p> <p>However, once Galactica was opened up for public experimentation, a deluge of criticism followed. Not only did Galactica reproduce many of the problems of bias and toxicity we have seen in other language models, it also specialised in producing authoritative-sounding scientific nonsense.</p> <p><strong>Authoritative, but subtly wrong bullshit generator</strong></p> <p>Galactica’s press release promoted its ability to explain technical scientific papers using general language. However, users quickly noticed that, while the explanations it generates sound authoritative, they are often subtly incorrect, biased, or just plain wrong.</p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">I entered "Estimating realistic 3D human avatars in clothing from a single image or video". In this case, it made up a fictitious paper and associated GitHub repo. The author is a real person (<a href="https://twitter.com/AlbertPumarola?ref_src=twsrc%5Etfw">@AlbertPumarola</a>) but the reference is bogus. 
(2/9) <a href="https://t.co/N4i0BX27Yf">pic.twitter.com/N4i0BX27Yf</a></p> <p>— Michael Black (@Michael_J_Black) <a href="https://twitter.com/Michael_J_Black/status/1593133727257092097?ref_src=twsrc%5Etfw">November 17, 2022</a></p></blockquote> <p>We also asked Galactica to explain technical concepts from our own fields of research. We found it would use all the right buzzwords, but get the actual details wrong – for example, mixing up the details of related but different algorithms.</p> <p>In practice, Galactica was enabling the generation of misinformation – and this is dangerous precisely because it deploys the tone and structure of authoritative scientific information. If a user already needs to be a subject matter expert in order to check the accuracy of Galactica’s “summaries”, then it has no use as an explanatory tool.</p> <p>At best, it could provide a fancy autocomplete for people who are already fully competent in the area they’re writing about. At worst, it risks further eroding public trust in scientific research.</p> <p><strong>A galaxy of deep (science) fakes</strong></p> <p>Galactica could make it easier for bad actors to mass-produce fake, fraudulent or plagiarised scientific papers. This is to say nothing of exacerbating <a href="https://www.theguardian.com/commentisfree/2022/nov/28/ai-students-essays-cheat-teachers-plagiarism-tech">existing concerns</a> about students using AI systems for plagiarism.</p> <p>Fake scientific papers are <a href="https://www.nature.com/articles/d41586-021-00733-5">nothing new</a>. 
However, peer reviewers at academic journals and conferences are already time-poor, and this could make it harder than ever to weed out fake science.</p> <p><strong>Underlying bias and toxicity</strong></p> <p>Other critics reported that Galactica, like other language models trained on data from the internet, has a tendency to spit out <a href="https://twitter.com/mrgreene1977/status/1593649978789941249">toxic hate speech</a> while unreflectively censoring politically inflected queries. This reflects the biases lurking in the model’s training data, and Meta’s apparent failure to apply appropriate checks around the responsible AI research.</p> <p>The risks associated with large language models are well understood. Indeed, an <a href="https://dl.acm.org/doi/10.1145/3442188.3445922">influential paper</a> highlighting these risks prompted Google to <a href="https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/">fire one of the paper’s authors</a> in 2020, and eventually disband its AI ethics team altogether.</p> <p>Machine-learning systems infamously exacerbate existing societal biases, and Galactica is no exception. For instance, Galactica can recommend possible citations for scientific concepts by mimicking existing citation patterns (“<em>Q: Is there any research on the effect of climate change on the great barrier reef? A: Try the paper ‘<a href="https://doi.org/10.1038/s41586-018-0041-2">Global warming transforms coral reef assemblages</a>’ by Hughes, et al. in Nature 556 (2018)</em>”).</p> <p>For better or worse, citations are the currency of science – and by reproducing existing citation trends in its recommendations, Galactica risks reinforcing existing patterns of inequality and disadvantage. 
(Galactica’s developers acknowledge this risk in their paper.)</p> <p>Citation bias is already a well-known issue in academic fields ranging from <a href="https://doi.org/10.1080/14680777.2018.1447395">feminist</a> <a href="https://doi.org/10.1093/joc/jqy003">scholarship</a> to <a href="https://doi.org/10.1038/s41567-022-01770-1">physics</a>. However, tools like Galactica could make the problem worse unless they are used with careful guardrails in place.</p> <p>A more subtle problem is that the scientific articles on which Galactica is trained are already biased towards certainty and positive results. (This leads to the so-called “<a href="https://theconversation.com/science-is-in-a-reproducibility-crisis-how-do-we-resolve-it-16998">replication crisis</a>” and “<a href="https://theconversation.com/how-we-edit-science-part-2-significance-testing-p-hacking-and-peer-review-74547">p-hacking</a>”, where scientists cherry-pick data and analysis techniques to make results appear significant.)</p> <p>Galactica takes this bias towards certainty, combines it with wrong answers and delivers responses with supreme overconfidence: hardly a recipe for trustworthiness in a scientific information service.</p> <p>These problems are dramatically heightened when Galactica tries to deal with contentious or harmful social issues, as the screenshot below shows.</p> <figure class="align-center zoomable"><a href="https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=1000&amp;fit=clip"><img src="https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=347&amp;fit=crop&amp;dpr=1 
600w, https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=347&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=347&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=436&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=436&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=436&amp;fit=crop&amp;dpr=3 2262w" alt="Screenshots of papers generated by Galactica on 'The benefits of antisemitism' and 'The benefits of eating crushed glass'." /></a><figcaption><span class="caption">Galactica readily generates toxic and nonsensical content dressed up in the measured and authoritative language of science.</span> <span class="attribution"><a class="source" href="https://twitter.com/mrgreene1977/status/1593687024963182592/photo/1">Tristan Greene / Galactica</a></span></figcaption></figure> <p><strong>Here we go again</strong></p> <p>Calls for AI research organisations to take the ethical dimensions of their work more seriously are now coming from <a href="https://nap.nationalacademies.org/catalog/26507/fostering-responsible-computing-research-foundations-and-practices">key research bodies</a> such as the National Academies of Science, Engineering and Medicine. 
Some AI research organisations, like OpenAI, are being <a href="https://github.com/openai/dalle-2-preview/blob/main/system-card.md">more conscientious</a> (though still imperfect).</p> <p>Meta <a href="https://www.engadget.com/meta-responsible-innovation-team-disbanded-194852979.html">dissolved its Responsible Innovation team</a> earlier this year. The team was tasked with addressing “potential harms to society” caused by the company’s products. They might have helped the company avoid this clumsy misstep.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://counter.theconversation.com/content/195445/count.gif?distributor=republish-lightbox-basic" alt="The Conversation" width="1" height="1" /><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https://theconversation.com/republishing-guidelines --></p> <p><em>Written by Aaron J. Snoswell </em><em>and Jean Burgess</em><em>. Republished with permission from <a href="https://theconversation.com/the-galactica-ai-model-was-trained-on-scientific-knowledge-but-it-spat-out-alarmingly-plausible-nonsense-195445" target="_blank" rel="noopener">The Conversation</a>.</em></p> <p><em>Image: Getty Images</em></p>
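The "fill-the-blank word-guessing game" described in the article above can be illustrated with a toy sketch. This hypothetical bigram model is vastly simpler than Galactica's transformer, but it shows the same core idea: learn from a corpus which word tends to follow which, then "guess the blank".

```python
from collections import Counter, defaultdict

# Toy illustration of the word-guessing objective behind language models.
# A real model like Galactica uses a huge neural network over billions of
# words; this hypothetical bigram model just counts word-pair frequencies.

corpus = "the cat sat on the mat and the cat slept".split()

# Tally, for each word, how often each other word follows it.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def guess_next(word):
    """Fill in the blank: return the most frequent follower of `word`."""
    if word not in counts:
        return None  # never seen this word as a predecessor
    return counts[word].most_common(1)[0][0]

print(guess_next("the"))  # "cat" follows "the" most often in this corpus
```

This also hints at why such systems produce plausible-sounding nonsense: the model only learns what words tend to appear together, with no notion of whether the resulting statement is true.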

Technology


AI may have solved a debate on whether a dinoprint was from a herbivore or meat eater

<p>An international team of researchers has, for the first time, used AI to analyse the tracks of dinosaurs, and the AI has come out on top – beating trained palaeontologists at their own game.</p> <p>“In extreme examples of theropod and ornithopod footprints, their footprint shapes are easy to tell apart – theropods with long, narrow toes and ornithopods with short, dumpy toes. But it is the tracks that are in-between these shapes that are not so clear cut in terms of who made them,” one of the researchers, University of Queensland palaeontologist Dr Anthony Romilio, told <em>Cosmos.</em></p> <p>“We wanted to see if AI could learn these differences and, if so, then could be tested in distinguishing more challenging three-toed footprints.”</p> <p>Theropods are meat-eating dinosaurs, while ornithopods are plant-eating, and getting this analysis wrong can alter the data which shows the diversity and abundance of dinosaurs in the area, or could even change what we think are the behaviours of certain dinos.</p> <p>One set of dinosaur prints in particular had been a struggle for the researchers to analyse. Large footprints at the Dinosaur Stampede National Monument in Queensland had divided Romilio and his colleagues. 
The mysterious tracks were thought to be left during the mid-Cretaceous Period, around 93 million years ago, and could have been from either a meat-eating theropod or a plant-eating ornithopod.</p> <p>“I consider them footprints of a plant-eater while my colleagues share the much wider consensus that they are theropod tracks.”</p> <p>So, an AI called a Convolutional Neural Network was brought in to be a deciding factor.</p> <p>“We were pretty stuck, so thank god for modern technology,” says <a href="https://www.researchgate.net/profile/Jens-Lallensack" target="_blank" rel="noopener">Dr Jens Lallensack</a>, lead author from Liverpool John Moores University in the UK.</p> <p>“In our research team of three, one person was pro-meat-eater, one person was undecided, and one was pro-plant-eater.</p> <p>“So – to really check our science – we decided to go to five experts for clarification, plus use AI.”</p> <p>The AI was given nearly 1,500 already known tracks to learn which dinosaurs were which. 
The tracks were rendered as simple line drawings to make them easier for the AI to analyse.</p> <p>Then the testing began. First, 36 new tracks were given to the team of experts, the AI and the researchers.</p> <p>“Each of us had to sort these into the categories of footprints left by meat-eaters and those by plant-eaters,” says Romilio.</p> <p>“In this the AI was the clear winner with 90% correctly identified. Me and one of my colleagues came next with ~75% correct.”</p> <p>Then they went for the crown jewel – the Dinosaur Stampede National Monument tracks. When the AI analysed these, it came back with a strong verdict: they are plant-eating ornithopod tracks. The result isn’t absolute, though – the data suggests a 1 in 5,000,000 chance the trackmaker was a theropod instead.</p> <p>These are still early days for using AI in this way. In the future, the researchers hope to secure funding for a FrogID-style app that anyone could use to analyse dinosaur tracks.</p> <p>“Our hope is to develop an app so anyone can take a photo on their smartphone, use the app and it will tell you what type of dinosaur track it is,” says Romilio.</p> <p>“It will also be useful for drone survey work for dinosaur tracksites, collecting and analysing image data and identifying fossil footprints remotely.” The paper has been published in the <a href="https://doi.org/10.1098/rsif.2022.0588" target="_blank" rel="noopener"><em>Royal Society Interface</em></a>.</p> <div id="contributors"> <p><em><a href="https://cosmosmagazine.com/history/dinosaur-ai-theropod-ornithopods/" target="_blank" rel="noopener">This article</a> was originally published on Cosmos Magazine and was written by Jacinta Bowler.</em></p> <p><em>Image: Getty Images</em></p> </div>
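For readers curious how a classifier like the one in the study is put together, here is a minimal sketch of a convolutional network for two-class track silhouettes, written in PyTorch. The architecture, the 64×64 input size and the class ordering are illustrative assumptions, not the authors' published model.

```python
import torch
import torch.nn as nn

class TrackCNN(nn.Module):
    """Toy binary classifier for 64x64 single-channel track line drawings."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Two output logits; here index 0 = theropod, 1 = ornithopod (our labelling)
        self.head = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        # x: (batch, 1, 64, 64); two rounds of pooling give 32 maps of 16x16
        return self.head(self.features(x).flatten(1))

model = TrackCNN()
batch = torch.randn(4, 1, 64, 64)           # four dummy footprint drawings
probs = torch.softmax(model(batch), dim=1)  # per-track class probabilities
```

Training would then follow the usual supervised loop (cross-entropy loss over the ~1,500 labelled tracks); the study's actual architecture and preprocessing may differ.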

Technology


AI recruitment tools are “automated pseudoscience”, say Cambridge researchers

<p>AI is set to usher in a new era across a huge range of industries. Everything from art to medicine is being overhauled by machine learning.</p> <p>But researchers from the University of Cambridge have published a paper in <a href="https://link.springer.com/journal/13347" target="_blank" rel="noopener"><em>Philosophy &amp; Technology</em></a> calling out AI tools used to recruit people for jobs and boost workplace diversity – going so far as to call them an “automated pseudoscience”.</p> <p>“We are concerned that some vendors are wrapping ‘snake oil’ products in a shiny package and selling them to unsuspecting customers,” said co-author Dr Eleanor Drage, a researcher in AI ethics.</p> <p>“By claiming that racism, sexism and other forms of discrimination can be stripped away from the hiring process using artificial intelligence, these companies reduce race and gender down to insignificant data points, rather than systems of power that shape how we move through the world.”</p> <p>Recent years have seen the emergence of AI tools marketed as an answer to a lack of diversity in the workforce. These range from chatbots and resume scrapers used to line up prospective candidates, through to analysis software for video interviews.</p> <p>Those behind the technology claim it cancels out human biases against gender and ethnicity during recruitment, instead using algorithms that read vocabulary, speech patterns, and even facial micro-expressions, to assess huge pools of job applicants for the right personality type and ‘culture fit’.</p> <p>But AI isn’t very good at removing human biases. To train a machine-learning algorithm, you first have to feed it large amounts of past data – and in fields where more men were traditionally hired, AI tools have discounted women altogether. 
<a href="https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine" target="_blank" rel="noopener">In a system created by Amazon</a>, resumes were discounted if they included the word ‘women’s’ – as in “women’s debating team” – and graduates of two all-women colleges were downgraded. Similar problems occur with race.</p> <p>The Cambridge researchers suggest that even if you remove ‘gender’ or ‘race’ as distinct categories, the use of AI may ultimately increase uniformity in the workforce. This is because the technology is calibrated to search for the employer’s fantasy ‘ideal candidate’, which is likely based on demographically exclusive past results.</p> <p>The researchers went a step further and worked with a team of Cambridge computer science undergraduates to build an AI tool modelled on the technology. 
You can check it out <a href="https://personal-ambiguator-frontend.vercel.app/" target="_blank" rel="noopener">here</a>.</p> <p>The tool demonstrates how arbitrary changes in facial expression, clothing, lighting and background can give radically different personality readings – and so could make the difference between rejection and progression.</p> <p>“While companies may not be acting in bad faith, there is little accountability for how these products are built or tested,” said Drage.</p> <p>“As such, this technology, and the way it is marketed, could end up as dangerous sources of misinformation about how recruitment can be ‘de-biased’ and made fairer.”</p> <p>The researchers suggest that these programs are a dangerous example of ‘technosolutionism’: turning to technology to provide quick fixes for deep-rooted discrimination issues that require investment and changes to company culture.</p> <p>“Industry practitioners developing hiring AI technologies must shift from trying to correct individualized instances of ’bias’ to considering the broader inequalities that shape recruitment processes,” <a href="https://link.springer.com/article/10.1007/s13347-022-00543-1" target="_blank" rel="noopener">the team write in their paper.</a></p> <p>“This requires abandoning the ‘veneer of objectivity’ that is grafted onto AI systems, so that technologists can better understand their implication — and that of the corporations within which they work — in the hiring process.”</p> <p><em>Written by Jacinta Bowler. Republished with permission of <a href="https://cosmosmagazine.com/technology/ai-recruitment-tools-diversity-cambridge-automated-pseudoscience/" target="_blank" rel="noopener">Cosmos Magazine</a>.</em></p> <p><em>Image: Cambridge University</em></p>
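The Amazon example illustrates the core mechanism: even with the protected attribute removed from the inputs, a model trained on skewed historical outcomes latches onto proxy features. A toy sketch with synthetic data (all feature names and numbers here are invented for illustration, not drawn from any real system):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)                 # 1 = female (synthetic attribute)
# A proxy feature correlated with gender, e.g. a "women's ..." resume keyword
proxy = (gender == 1) & (rng.random(n) < 0.7)
skill = rng.random(n)
# Historical hiring decisions that favoured men regardless of skill
hired = ((gender == 0) & (skill > 0.5)) | ((gender == 1) & (skill > 0.9))

# Train WITHOUT the gender column -- only skill and the proxy keyword
X = np.column_stack([skill, proxy.astype(float)])
clf = LogisticRegression().fit(X, hired)

proxy_weight = clf.coef_[0][1]  # negative: the historical bias survives
```

Dropping the explicit attribute changes nothing here – the model simply penalises the keyword instead – which is exactly the kind of leakage the Cambridge authors argue that "de-biasing" claims gloss over.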

Technology


How AI is hijacking art history

<p>People tend to rejoice in the disclosure of a secret. </p> <p>Or, at the very least, media outlets have come to realize that news of “mysteries solved” and “hidden treasures revealed” generate traffic and clicks. </p> <p>So I’m never surprised when I see AI-assisted revelations about famous masters’ works of art go viral. </p> <p>Over the past year alone, I’ve come across articles highlighting how artificial intelligence <a href="https://www.theguardian.com/artanddesign/2021/jun/06/modigliani-lost-lover-beatrice-hastings">recovered a “secret” painting</a> of a “lost lover” of Italian painter Modigliani, <a href="https://www.cnn.com/style/article/hidden-picasso-nude-scli-intl-gbr/index.html">“brought to life” a “hidden Picasso nude”</a>, <a href="https://www.smithsonianmag.com/smart-news/klimt-painting-restore-artificial-intelligence-color-faculty-paintings-180978843/">“resurrected” Austrian painter Gustav Klimt’s destroyed works</a> and <a href="https://www.bbc.com/news/technology-57588270">“restored” portions of Rembrandt’s 1642 painting “The Night Watch.”</a> <a href="https://www.sciencedaily.com/releases/2019/08/190830150738.htm">The list goes on</a>.</p> <p><a href="https://www.umass.edu/arthistory/member/sonja-drimmer">As an art historian</a>, I’ve become increasingly concerned about the coverage and circulation of these projects.</p> <p>They have not, in actuality, revealed one secret or solved a single mystery. </p> <p>What they have done is generate feel-good stories about AI.</p> <h2>Are we actually learning anything new?</h2> <p>Take the reports about the Modigliani and Picasso paintings. 
</p> <p>These were projects executed by the same company, <a href="https://www.oxia-palus.com/">Oxia Palus</a>, which was founded not by art historians but by doctoral students in machine learning.</p> <p>In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been <a href="https://www.metmuseum.org/art/metpublications/Picasso_in_The_Metropolitan_Museum_of_Art">carried out and published</a> <a href="https://www.theguardian.com/artanddesign/2018/feb/28/modigliani-portrait-comes-to-light-beneath-artists-later-picture">years prior</a> – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases. </p> <p>The company edited these X-rays and <a href="https://arxiv.org/abs/1909.05677">reconstituted them as new works of art</a> by applying a technique called “<a href="https://arxiv.org/pdf/1508.06576.pdf">neural style transfer</a>.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.</p> <p>Essentially, Oxia Palus stitches new works out of what the machine can learn from the existing X-ray images and other paintings by the same artist. </p> <p>But outside of flexing the prowess of AI, is there any value – artistically, historically – to what the company is doing?</p> <p>These recreations don’t teach us anything we didn’t know about the artists and their methods. </p> <p>Artists paint over their works all the time. It’s so common that art historians and conservators have a word for it: <a href="https://www.nationalgallery.org.uk/paintings/glossary/pentimento">pentimento</a>. None of these earlier compositions was an Easter egg deposited in the painting for later researchers to discover. 
The original X-ray images were certainly valuable in that they <a href="https://www.academia.edu/40255609/The_Getty_Conservation_Institute_From_Connoisseurship_to_Technical_Art_History_The_Evolution_of_the_Interdisciplinary_Study_of_Art">offered insights into artists’ working methods</a>.</p> <p>But to me, what these programs are doing isn’t exactly newsworthy from the perspective of art history.</p> <h2>The humanities on life support</h2> <p>So when I do see these reproductions attracting media attention, it strikes me as soft diplomacy for AI, showcasing a “cultured” application of the technology at a time when skepticism of its <a href="https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them">deceptions</a>, <a href="https://nyupress.org/9781479837243/algorithms-of-oppression/">biases</a> and <a href="https://www.wiley.com/en-us/Race+After+Technology:+Abolitionist+Tools+for+the+New+Jim+Code-p-9781509526437">abuses</a> is on the rise.</p> <p>When AI gets attention for recovering lost works of art, it makes the technology sound a lot less scary than when it garners headlines <a href="https://www.cbsnews.com/news/deepfake-artificial-intelligence-60-minutes-2021-10-10/">for creating deep fakes that falsify politicians’ speech</a> or <a href="https://www.politico.eu/article/the-rise-of-ai-surveillance-coronavirus-data-collection-tracking-facial-recognition-monitoring/">for using facial recognition for authoritarian surveillance</a>. </p> <p>These studies and projects also seem to promote the idea that computer scientists are more adept at historical research than art historians. </p> <p>For years, university humanities departments <a href="https://carrollnews.org/3680/campus/art-history-department-to-be-eliminated-tenured-faculty-receive-termination-notices/">have been gradually squeezed of funding</a>, with more money funneled into the sciences. 
With their claims to objectivity and empirically provable results, the sciences tend to command greater respect from funding bodies and the public, which offers an incentive to scholars in the humanities to adopt computational methods. </p> <p>Art historian Claire Bishop <a href="https://journals.ub.uni-heidelberg.de/index.php/dah/article/view/49915">criticized this development</a>, noting that when computer science becomes integrated in the humanities, “[t]heoretical problems are steamrollered flat by the weight of data,” which generates deeply simplistic results. </p> <p>At their core, art historians study the ways in which art can offer insights into how people once saw the world. They explore how works of art shaped the worlds in which they were made and would go on to influence future generations. </p> <p>A computer algorithm cannot perform these functions.</p> <p>However, some scholars and institutions have allowed themselves to be subsumed by the sciences, adopting their methods and partnering with them in sponsored projects. </p> <p>Literary critic Barbara Herrnstein Smith <a href="https://www.jstor.org/stable/10.3366/j.ctt1r2bq2.9?seq=1#metadata_info_tab_contents">has warned about ceding too much ground to the sciences</a>. In her view, the sciences and the humanities are not the polar opposites they are often publicly portrayed to be. But this portrayal has been to the benefit of the sciences, prized for their supposed clarity and utility over the humanities’ alleged obscurity and uselessness. At the same time, she <a href="https://doi.org/10.1215/0961754X-3622212">has suggested</a> that hybrid fields of study that fuse the arts with the sciences may lead to breakthroughs that wouldn’t have been possible had each existed as a siloed discipline. </p> <p>I’m skeptical. 
Not because I doubt the utility of expanding and diversifying our toolbox; to be sure, some <a href="http://www.mappingsenufo.org/">scholars working in the digital humanities</a> have taken up computational methods with subtlety and historical awareness to add nuance to or overturn entrenched narratives.</p> <p>But my lingering suspicion emerges from an awareness of how public support for the sciences and disparagement of the humanities means that, in the endeavor to gain funding and acceptance, the humanities will lose what makes them vital. The field’s sensitivity to historical particularity and cultural difference makes the application of the same code to widely diverse artifacts utterly illogical. </p> <p>How absurd to think that black-and-white photographs from 100 years ago would produce colors in the same way that digital photographs do now. And yet, this is exactly what <a href="https://hyperallergic.com/639395/the-limits-of-colorization-of-historical-images-by-ai/">AI-assisted colorization</a> does. </p> <p>That particular example might sound like a small qualm, sure. But this effort to “<a href="https://deepai.org/machine-learning-model/colorizer">bring events back to life</a>” routinely mistakes representations for reality. 
Adding color does not show things as they were but recreates what is already a recreation – a photograph – in our own image, now with computer science’s seal of approval.</p> <h2>Art as a toy in the sandbox of scientists</h2> <p>Near the conclusion of <a href="https://doi.org/10.1126/sciadv.aaw7416">a recent paper</a> devoted to the use of AI to disentangle X-ray images of Jan and Hubert van Eyck’s “<a href="https://www.getty.edu/foundation/initiatives/past/panelpaintings/panel_paintings_ghent.html">Ghent Altarpiece</a>,” the mathematicians and engineers who authored it refer to their method as relying upon “choosing ‘the best of all possible worlds’ (borrowing Voltaire’s words) by taking the first output of two separate runs, differing only in the ordering of the inputs.” </p> <p>Perhaps if they had familiarized themselves with the humanities more they would know how satirically those words were meant when Voltaire <a href="https://brill.com/view/title/20877">used them to mock a philosopher</a> who believed that rampant suffering and injustice were all part of God’s plan – that the world as it was represented the best we could hope for.</p> <p>Maybe this “gotcha” is cheap. But it illustrates the problem of art and history becoming toys in the sandboxes of scientists with no training in the humanities.</p> <p>If nothing else, my hope is that journalists and critics who report on these developments will cast a more skeptical eye on them and alter their framing. </p> <p>In my view, rather than lionizing these studies as heroic achievements, those responsible for conveying their results to the public should see them as opportunities to question what the computational sciences are doing when they appropriate the study of art. 
And they should ask whether any of this is for the good of anyone or anything but AI, its most zealous proponents and those who profit from it.</p> <p><em>Image credits: Getty Images</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/how-ai-is-hijacking-art-history-170691" target="_blank" rel="noopener">The Conversation</a>. </em></p>
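For context on the “neural style transfer” technique discussed above: the “style” it extrapolates is typically summarised as a Gram matrix of convolutional feature maps – channel-to-channel correlations with all spatial arrangement discarded. A minimal sketch of that statistic on dummy data (no claim about Oxia Palus’s actual pipeline):

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Style statistic over a (channels, height, width) feature map."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # throw away the spatial layout
    return flat @ flat.T / (h * w)      # (channels, channels) correlations

fmap = np.random.default_rng(0).normal(size=(8, 32, 32))  # dummy activations
g = gram_matrix(fmap)
```

Style transfer then optimises a new image so its feature Gram matrices match those of the style source – which is why the output mimics texture and palette but contains no information the source images didn’t already hold.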

Art


AI art is everywhere right now. Even experts don’t know what it will mean

<p>An art prize at the Colorado State Fair was <a href="https://arstechnica.com/information-technology/2022/08/ai-wins-state-fair-art-contest-annoys-humans/">awarded</a> last month to a work that – unbeknown to the judges – was generated by an artificial intelligence (AI) system. </p> <p>Social media have also seen an explosion of weird images generated by AI from text descriptions, such as “the face of a shiba inu blended into the side of a loaf of bread on a kitchen bench, digital art”.</p> <p>Or perhaps “A sea otter in the style of ‘Girl with a Pearl Earring’ by Johannes Vermeer”.</p> <p>You may be wondering what’s going on here. As somebody who researches creative collaborations between humans and AI, I can tell you that behind the headlines and memes a fundamental revolution is under way – with profound social, artistic, economic and technological implications.</p> <h2>How we got here</h2> <p>You could say this revolution began in June 2020, when a company called OpenAI achieved a big breakthrough in AI with the creation of <a href="https://arxiv.org/abs/2005.14165">GPT-3</a>, a system that can process and generate language in much more complex ways than earlier efforts. You can have conversations with it about any topic, ask it to write a research article or a story, summarise text, write a joke, and do almost any imaginable language task.</p> <p>In 2021, some of GPT-3’s developers turned their hand to images. They trained a model on billions of pairs of images and text descriptions, then used it to generate new images from new descriptions. They called this system DALL-E, and in July 2022 they released a much-improved new version, <a href="https://arxiv.org/abs/2204.06125">DALL-E 2</a>.</p> <p>Like GPT-3, DALL-E 2 was a major breakthrough. 
It can generate highly detailed images from free-form text inputs, including information about style and other abstract concepts.</p> <p>For example, here I asked it to illustrate the phrase “Mind in Bloom” combining the styles of Salvador Dalí, Henri Matisse and Brett Whiteley.</p> <h2>Competitors enter the scene</h2> <p>Since the launch of DALL-E 2, a few competitors have emerged. One is the free-to-use but lower-quality DALL-E Mini (developed independently and now renamed <a href="https://www.craiyon.com/">Craiyon</a>), which was a popular source of meme content.</p> <p>Around the same time, a smaller company called <a href="https://www.midjourney.com/home/#about">Midjourney</a> released a model that more closely matched DALL-E 2’s capabilities. Though still a little less capable than DALL-E 2, Midjourney has lent itself to interesting artistic explorations. It was with Midjourney that Jason Allen generated the artwork that won the Colorado State Fair art competition. </p> <p>Google too has a text-to-image model, called <a href="https://imagen.research.google/">Imagen</a>, which supposedly produces much better results than DALL-E and others. However, Imagen has not yet been released for wider use so it is difficult to evaluate Google’s claims.</p> <p>In July 2022, OpenAI began to capitalise on the interest in DALL-E, <a href="https://openai.com/blog/dall-e-now-available-in-beta/">announcing</a> that 1 million users would be given access on a pay-to-use basis.</p> <p>However, in August 2022 a new contender arrived: <a href="https://stability.ai/blog/stable-diffusion-public-release">Stable Diffusion</a>. </p> <p>Stable Diffusion not only rivals DALL-E 2 in its capabilities, but more importantly it is open source. Anyone can use, adapt and tweak the code as they like.</p> <p>Already, in the weeks since Stable Diffusion’s release, people have been pushing the code to the limits of what it can do. 
</p> <p>To take one example: people quickly realised that, because a video is a sequence of images, they could tweak Stable Diffusion’s code to generate video from text.</p> <p>Another fascinating tool built with Stable Diffusion’s code is <a href="https://huggingface.co/spaces/huggingface/diffuse-the-rest">Diffuse the Rest</a>, which lets you draw a simple sketch, provide a text prompt, and generate an image from it. In the video below, I generated a detailed photo of a flower from a very rough sketch.</p> <p>In a more complicated example below, I am starting to build software that lets you draw with your body, then use Stable Diffusion to turn it into a painting or photo.</p> <h2>The end of creativity?</h2> <p>What does it mean that you can generate any sort of visual content, image or video, with a few lines of text and a click of a button? What about when you can generate a movie script with GPT-3 and a movie animation with DALL-E 2? </p> <p>And looking further forward, what will it mean when social media algorithms not only curate content for your feed, but generate it? What about when this trend meets the metaverse in a few years, and virtual reality worlds are generated in real time, just for you? </p> <p>These are all important questions to consider.</p> <p><a href="https://twitter.com/OmniMorpho/status/1564782875072872450">Some speculate</a> that, in the short term, this means human creativity and art are deeply threatened. </p> <p>Perhaps in a world where anyone can generate any images, graphic designers as we know them today will be redundant. However, history shows human creativity finds a way. The electronic synthesiser did not kill music, and photography did not kill painting. Instead, they catalysed new art forms.</p> <p>I believe something similar will happen with AI generation. 
People are experimenting with including models like Stable Diffusion as a part of their creative process.</p> <p>A new type of artist is even emerging in what some call “promptology”, or “<a href="https://en.wikipedia.org/wiki/Prompt_engineering">prompt engineering</a>”. The art is not in crafting pixels by hand, but in crafting the words that prompt the computer to generate the image: a kind of AI whispering.</p> <h2>Collaborating with AI</h2> <p>The impacts of AI technologies will be multidimensional: we cannot reduce them to good or bad on a single axis. </p> <p>New artforms will arise, as will new avenues for creative expression. However, I believe there are risks as well.</p> <p>We live in an attention economy that thrives on extracting screen time from users; in an economy where automation drives corporate profit but not necessarily higher wages, and where art is commodified as content; in a social context where it is increasingly hard to distinguish real from fake; in sociotechnical structures that too easily encode biases in the AI models we train. In these circumstances, AI can easily do harm.</p> <p>How can we steer these new AI technologies in a direction that benefits people? I believe one way to do this is to <a href="https://research.rodolfoocampo.com/">design AI</a> that collaborates with, rather than replaces, humans.</p> <p><em>Image credits: Getty Images</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/ai-art-is-everywhere-right-now-even-experts-dont-know-what-it-will-mean-189800" target="_blank" rel="noopener">The Conversation</a>. </em></p>

Art


AI system sees beyond the frame of famous artworks

<p dir="ltr">A new AI tool can provide a glimpse of what could be going on beyond the frame of famous paintings, giving them a brand new life. </p> <p dir="ltr">OpenAI, a San Francisco-based company, has created a new tool called 'Outpainting' for its text-to-image AI system, DALL-E. </p> <p dir="ltr">Outpainting allows the system to imagine what's outside the frame of famous works such as <em>Girl with a Pearl Earring</em>, <em>Mona Lisa</em> and <em>Dogs Playing Poker</em>.</p> <p dir="ltr">DALL-E relies on artificial neural networks (ANNs), which simulate the way the brain works in order to learn and create an image from text. </p> <p dir="ltr">With Outpainting, users describe the extended visuals in text form for DALL-E to “paint” the newly imagined artwork. </p> <p dir="ltr">Outpainting, which is aimed primarily at professionals who work with images, will let users 'extend their creativity' and 'tell a bigger story', according to OpenAI. </p> <p dir="ltr">US artist August Kamp used Outpainting to reimagine the famous 1665 painting <em>Girl with a Pearl Earring</em> by Johannes Vermeer, extending the background in the original style. 
</p> <p dir="ltr">The results show the iconic subject in a domestic setting, surrounded by crockery, houseplants, fruit, boxes and more.</p> <p dir="ltr">Other Outpainting attempts took a more creative turn, with one showing the <em>Mona Lisa</em> surrounded by a dystopian wasteland, and a version of <em>A Friend In Need</em> showing an additional table of gambling canines.</p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">“Outpainting: an apocalyptic Mona Lisa” by tonidl1989<a href="https://twitter.com/hashtag/dalle?src=hash&amp;ref_src=twsrc%5Etfw">#dalle</a> <a href="https://twitter.com/hashtag/dalle2?src=hash&amp;ref_src=twsrc%5Etfw">#dalle2</a> <a href="https://twitter.com/hashtag/aiart?src=hash&amp;ref_src=twsrc%5Etfw">#aiart</a> <a href="https://twitter.com/hashtag/aiartwork?src=hash&amp;ref_src=twsrc%5Etfw">#aiartwork</a> <a href="https://t.co/puYVxjyFMm">pic.twitter.com/puYVxjyFMm</a></p> <p>— Best Dalle2 AI Art 🎨 (@Dalle2AI) <a href="https://twitter.com/Dalle2AI/status/1565168579376566278?ref_src=twsrc%5Etfw">September 1, 2022</a></p></blockquote> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">Used DALL-E 2’s new “outpainting” feature to expand the world’s greatest work of art… <a href="https://t.co/0HXQzngt9P">pic.twitter.com/0HXQzngt9P</a></p> <p>— M.G. Siegler (@mgsiegler) <a href="https://twitter.com/mgsiegler/status/1565398150482784256?ref_src=twsrc%5Etfw">September 1, 2022</a></p></blockquote> <p dir="ltr">DALL-E is available to more than one million people to create AI-generated images, all with a series of text prompts. </p> <p dir="ltr">DALL-E is just one of many AI systems infiltrating the art world, joining the likes of Midjourney and Imagen redefining how we create and appreciate art. </p> <p dir="ltr"><em>Image credits: DALL-E - August Kamp</em></p>


Artists furious after AI-generated art wins contest

<p dir="ltr">A stunning artwork generated by artificial intelligence has claimed first prize at an art competition, enraging the art world and calling into question what it means to be an artist. </p> <p dir="ltr">The work was “created” by Jason M Allen, a game designer from Colorado, who won first place in the emerging artist division's "digital arts/digitally manipulated photography" category at the Colorado State Fair Fine Arts Competition.</p> <p dir="ltr">His winning image, titled <em>Théâtre D'opéra Spatial</em> (French for Space Opera Theatre), was made with Midjourney — an artificial intelligence system that can produce detailed images when fed written prompts by the user. </p> <p dir="ltr">"I'm fascinated by this imagery. I love it. And I think everyone should see it," Allen, 39, told CNN Business.</p> <p dir="ltr">Allen's winning image looks like a bright, surreal cross between a Renaissance and steampunk painting.</p> <p dir="ltr">When he entered the contest, Allen told officials that Midjourney was used to create his image, as the category dictated entrants use "digital technology as part of the creative or presentation process".</p> <p dir="ltr">Midjourney is one of a growing number of such AI image generators, joining the likes of Imagen and DALL-E to give the artistically challenged the means to create stunning images. </p> <p dir="ltr">Despite the parameters of the category, many artists were angered by Allen’s win due to his reliance on technology to create the artwork. 
</p> <p dir="ltr">"This sucks for the exact same reason we don't let robots participate in the Olympics," one Twitter user wrote.</p> <p dir="ltr">"This is the literal definition of 'pressed a few buttons to make a digital art piece'," another tweeted.</p> <p dir="ltr">"AI artwork is the 'banana taped to the wall' of the digital world now."</p> <p dir="ltr">Yet while Allen didn't use a paintbrush to create his winning piece, he assured people there was plenty of work involved.</p> <p dir="ltr">"It's not like you're just smashing words together and winning competitions," he said.</p> <p dir="ltr">"Rather than hating on the technology or the people behind it, we need to recognise that it's a powerful tool and use it for good so we can all move forward rather than sulking about it," Allen said.</p> <p dir="ltr"><em>Image credits: Jason M Allen - Midjourney</em></p>


Give this AI a few words of description and it produces a stunning image – but is it art?

<p>A picture may be worth a thousand words, but thanks to an artificial intelligence program called <a href="https://fortune.com/2022/04/06/openai-dall-e-2-photorealistic-images-from-text-descriptions/">DALL-E 2</a>, you can have a professional-looking image with far fewer.</p> <p>DALL-E 2 is <a href="http://adityaramesh.com/posts/dalle2/dalle2.html">a new neural network</a> algorithm that creates a picture from a short phrase or sentence that you provide. <a href="https://openai.com/dall-e-2/">The program</a>, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.</p> <p><a href="https://scholar.google.com/citations?user=ZcWO2AEAAAAJ&amp;hl=en">As a researcher studying the nexus of technology and art</a>, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.</p> <h2>A staggering range of style and subjects</h2> <p>OpenAI researchers built DALL-E 2 <a href="https://github.com/openai/dalle-2-preview/blob/main/system-card.md#model">from an enormous collection of images</a> with captions. They gathered some of the images online and licensed others.</p> <p>Using DALL-E 2 looks a lot like searching for an image on the web: you type a short phrase into a text box, and it gives back six images.</p> <p>But instead of being culled from the web, the program creates six brand-new images, each of which reflects some version of the entered phrase. (Until recently, the program produced 10 images per prompt.) 
For example, when some friends and I gave DALL-E 2 the text prompt “cats in devo hats,” <a href="https://twitter.com/AaronHertzmann/status/1534947118053355522">it produced 10 images</a> that came in different styles.</p> <p>Nearly all of them could plausibly pass for professional photographs or drawings. While the algorithm did not quite grasp “Devo hat” – <a href="https://images.squarespace-cdn.com/content/5761baff746fb9f420bb3ffc/1495765600043-HHVOESOJR2LLK7B820SS/?content-type=image%2Fjpeg">the strange helmets</a> worn by the New Wave band Devo – the headgear in the images it produced came close. </p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">"cats in devo hats" <a href="https://twitter.com/hashtag/dalle?src=hash&amp;ref_src=twsrc%5Etfw">#dalle</a> <a href="https://t.co/kkFaKF0zUJ">pic.twitter.com/kkFaKF0zUJ</a></p> <p>— Aaron Hertzmann (@AaronHertzmann) <a href="https://twitter.com/AaronHertzmann/status/1534947118053355522?ref_src=twsrc%5Etfw">June 9, 2022</a></p></blockquote> <p>Over the past few years, a small community of artists have been using neural network algorithms to produce art. Many of these artworks have distinctive qualities that almost look like real images, <a href="https://theconversation.com/new-ai-art-has-artists-collaborators-wondering-who-gets-the-credit-112661">but with odd distortions of space</a> – a sort of cyberpunk Cubism. The most recent text-to-image systems <a href="https://www.rightclicksave.com/article/clip-art-and-the-new-aesthetics-of-ai">often produce dreamy, fantastical imagery</a> that can be delightful but rarely looks real.</p> <p>DALL-E 2 offers a significant leap in the quality and realism of the images. It can also mimic specific styles with remarkable accuracy. If you want images that look like actual photographs, it’ll produce six life-like images. 
If you want prehistoric cave paintings of Shrek, it’ll generate six pictures of Shrek as if they’d been drawn by a prehistoric artist.</p> <p>It’s staggering that an algorithm can do this. Each set of images takes less than a minute to generate. Not all of the images will look pleasing to the eye, nor do they necessarily reflect what you had in mind. But, even with the need to sift through many outputs or try different text prompts, there’s no other existing way to pump out so many great results so quickly – not even by hiring an artist. And, sometimes, the unexpected results are the best.</p> <p>In principle, <a href="http://adityaramesh.com/posts/dalle2/dalle2.html">anyone with enough resources and expertise can make a system like this</a>. Google Research <a href="https://imagen.research.google/">recently announced an impressive, similar text-to-image system</a>, and one independent developer is publicly developing their own version that <a href="https://huggingface.co/spaces/dalle-mini/dalle-mini">anyone can try right now on the web</a>, although it’s not yet as good as DALL-E or Google’s system.</p> <p>It’s easy to imagine these tools transforming the way people make images and communicate, whether via memes, greeting cards, advertising – and, yes, art.</p> <h2>Where’s the art in that?</h2> <p>I had a moment early on while using DALL-E 2 to generate different kinds of paintings, in all different styles – like “<a href="https://www.odilon-redon.org/">Odilon Redon</a> painting of Seattle” – when it hit me that this was better than any painting algorithm I’ve ever developed. Then I realized that it is, in a way, a better painter than I am.</p> <p>In fact, no human can do what DALL-E 2 does: create such a high-quality, varied range of images in mere seconds. 
If someone told you that a person made all these images, of course you’d say they were creative.</p> <p>But <a href="https://cacm.acm.org/magazines/2020/5/244330-computers-do-not-make-art-people-do/fulltext">this does not make DALL-E 2 an artist</a>. Even though it sometimes feels like magic, under the hood it is still a computer algorithm, rigidly following instructions from the algorithm’s authors at OpenAI. </p> <p>If these images succeed as art, they are products of how the algorithm was designed, the images it was trained on, and – most importantly – how artists use it. </p> <p>You might be inclined to say there’s little artistic merit in an image produced by a few keystrokes. But in my view, this line of thinking echoes <a href="https://cacm.acm.org/magazines/2020/5/244330-computers-do-not-make-art-people-do/fulltext">the classic take</a> that photography cannot be art because a machine did all the work. Today the human authorship and craft involved in artistic photography are recognized, and critics understand that the best photography involves much more than just pushing a button. </p> <p>Even so, we often discuss works of art as if they directly came from the artist’s intent. The artist intended to show a thing, or express an emotion, and so they made this image. DALL-E 2 does seem to shortcut this process entirely: you have an idea and type it in, and you’re done.</p> <p>But when I paint the old-fashioned way, I’ve found that my paintings come from the exploratory process, not just from executing my initial goals. And this is true for many artists.</p> <p>Take Paul McCartney, who came up with the track “<a href="https://www.youtube.com/watch?v=rUvZA5AYhB4&amp;t=35s">Get Back</a>” during a jam session. He didn’t start with a plan for the song; he just started fiddling and experimenting <a href="https://en.wikipedia.org/wiki/Get_Back#Early_protest_lyrics">and the band developed it from there</a>. 
</p> <p>Picasso <a href="https://books.google.com/books?id=dZyPAAAAQBAJ&amp;lpg=PA2&amp;ots=xYVek5tbjg&amp;dq=%22I%20don%27t%20know%20in%20advance%20what%20I%20am%20going%20to%20put%20on%20canvas%20any%20more%20than%20I%20decide%20beforehand%20what%20colors%20I%20am%20going%20to%20use&amp;pg=PA2#v=onepage&amp;q&amp;f=false">described his process similarly</a>: “I don’t know in advance what I am going to put on canvas any more than I decide beforehand what colors I am going to use … Each time I undertake to paint a picture I have a sensation of leaping into space.”</p> <p>In <a href="https://www.instagram.com/aaronhertzmann_aiart/">my own explorations with DALL-E 2</a>, one idea would lead to another which led to another, and eventually I’d find myself in a completely unexpected, magical new terrain, very far from where I’d started. </p> <h2>Prompting as art</h2> <p>I would argue that the art, in using a system like DALL-E 2, comes not just from the final text prompt, but in the entire creative process that led to that prompt. Different artists will follow different processes and end up with different results that reflect their own approaches, skills and obsessions.</p> <p>I began to see my experiments as a set of series, each a consistent dive into a single theme, rather than a set of independent wacky images. </p> <p>Ideas for these images and series came from all around, often linked by a set of <a href="https://link.springer.com/book/10.1007/978-3-319-15524-1">stepping stones</a>. At one point, while making images based on contemporary artists’ work, I wanted to generate an image of site-specific installation art in the style of the contemporary Japanese artist <a href="http://yayoi-kusama.jp/e/biography/index.html">Yayoi Kusama</a>. After trying a few unsatisfactory locations, I hit on the idea of placing it in <a href="https://mezquita-catedraldecordoba.es/en/">La Mezquita</a>, a former mosque and church in Córdoba, Spain. 
I sent <a href="https://www.instagram.com/p/CehcE4DvN1d/">the picture</a> to an architect colleague, Manuel Ladron de Guevara, who is from Córdoba, and we began riffing on other architectural ideas together. </p> <p>This became a series on imaginary new buildings in different architects’ styles.</p> <p>So I’ve started to consider what I do with DALL-E 2 to be both a form of exploration as well as a form of art, even if it’s often amateur art like the drawings I make on my iPad. </p> <p>Indeed some artists, like <a href="https://twitter.com/advadnoun">Ryan Murdoch</a>, have advocated for prompt-based image-making to be recognized as art. He points to the <a href="https://twitter.com/NeuralBricolage">experienced AI artist Helena Sarin</a> as an example. </p> <p>“When I look at most stuff from <a href="https://www.midjourney.com/">Midjourney</a>” – another popular text-to-image system – “a lot of it will be interesting or fun,” Murdoch told me in an interview. “But with [Sarin’s] work, there’s a through line. It’s easy to see that she has put a lot of thought into it, and has worked at the craft, because the output is more visually appealing and interesting, and follows her style in a continuous way.” </p> <p>Working with DALL-E 2, or any of the new text-to-image systems, means learning its quirks and developing strategies for avoiding common pitfalls. It’s also important to know about <a href="https://github.com/openai/dalle-2-preview/blob/main/system-card.md#probes-and-evaluations">its potential harms</a>, such as its reliance on stereotypes, and potential uses for disinformation. Using DALL-E 2, you’ll also discover surprising correlations, like the way everything becomes old-timey when you use an old painter, filmmaker or photographer’s style.</p> <p>When I have something very specific I want to make, DALL-E 2 often can’t do it. The results would require a lot of difficult manual editing afterward. 
It’s when my goals are vague that the process is most delightful, offering up surprises that lead to new ideas that themselves lead to more ideas and so on.</p> <h2>Crafting new realities</h2> <p>These text-to-image systems can help users imagine new possibilities as well. </p> <p><a href="https://daniellebaskin.com/">Artist-activist Danielle Baskin</a> told me that she always works “to show alternative realities by ‘real’ example: either by setting scenarios up in the physical world or doing meticulous work in Photoshop.” DALL-E 2, however, “is an amazing shortcut because it’s so good at realism. And that’s key to helping others bring possible futures to life – whether it’s satire, dreams or beauty.” </p> <p>She has used it to imagine <a href="https://twitter.com/djbaskin/status/1519050225297461249">an alternative transportation system</a> and <a href="https://twitter.com/djbaskin_images/status/1533970922146648064">plumbing that transports noodles instead of water</a>, both of which reflect <a href="https://www.forbes.com/sites/jonathonkeats/2021/02/11/is-twitter-really-offering-verified-badges-for-san-francisco-homes-an-artists-satire-nearly-starts-a-civil-war">her artist-provocateur sensibility</a>.</p> <p>Similarly, artist Mario Klingemann’s <a href="https://twitter.com/quasimondo/status/1533877178496163840">architectural renderings with the tents of homeless people</a> could be taken as a rejoinder to <a href="https://twitter.com/AaronHertzmann/status/1526710430751522817">my architectural renderings of fancy dream homes</a>.</p> <p>It’s too early to judge the significance of this art form. I keep thinking of a phrase from the excellent book “<a href="https://www.haymarketbooks.org/books/1662-art-in-the-after-culture">Art in the After-Culture</a>” – “The dominant AI aesthetic is novelty.” </p> <p>Surely this would be true, to some extent, for any new technology used for art. 
The first films by the <a href="https://iphf.org/inductees/auguste-louis-lumiere/">Lumière brothers</a> in the 1890s were novelties, not cinematic masterpieces; it amazed people to see images moving at all. </p> <p>AI art software develops so quickly that there’s continual technical and artistic novelty. It seems as if, each year, there’s an opportunity to explore an exciting new technology – each more powerful than the last, and each seemingly poised to transform art and society.</p> <p><em>Image credits: Shutterstock</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/give-this-ai-a-few-words-of-description-and-it-produces-a-stunning-image-but-is-it-art-184363" target="_blank" rel="noopener">The Conversation</a>. </em></p>


An Indigenous language could help humans and AI communicate

<p dir="ltr">One of the most challenging problems impeding humans from communicating with Artificial Intelligence (AI) systems could have a unique solution: a language spoken by Indigenous Australians in the NT.</p> <p dir="ltr">Researchers at the University of New South Wales have published a paper explaining how Jingulu - a language spoken by the Jingili people - could be translated directly into commands that both AI and humans can understand.</p> <p dir="ltr">The study, published in <em><a href="https://doi.org/10.3389/fphy.2022.944064" target="_blank" rel="noopener">Frontiers in Physics</a></em>, details how Professor Hussein Abbass worked with linguistics expert Associate Professor Eleni Petraki and Dr Robert Hunjet, a member of the Defence Science and Technology Group, to create JSwarm, a language inspired by Jingulu.</p> <p dir="ltr">Jingulu uses just three verbs - come, go and do - which means that the amount of computational power needed to understand the commands is low. </p> <p dir="ltr">“For us, Jingulu is a dream that came true,” Professor Abbass, the study’s first author, said.</p> <p dir="ltr">“A language that can translate straight into AI commands; a human language that humans can understand; an efficient language in its syntax that reduces computational cost; a language where we can change the context of use without changing its syntax to allow us to transfer the AI between different domains with ease; and a language that is born and used in Australia to support research and innovation that are born and used in Australia.”</p> <p dir="ltr">Professor Abbass works with swarm systems of AI, where groups of robots work together to perform tasks and solve complex problems in a system that draws inspiration from how small numbers of sheepdogs can control large flocks of sheep.</p> <p dir="ltr">“This problem is all about movements in different information and knowledge spaces, including the physical spaces,” Professor Abbass said.</p> <p dir="ltr">“These movements are represented mathematically as elements that get attracted to each other or repulse from each other. For a long time, I have been looking at how we can design the languages used at the interface between the swarm and humans.”</p> <p dir="ltr">Having previously investigated systems that rely on gestures, direct commands, and even music, Professor Abbass said these systems all had their challenges.</p> <p dir="ltr">“They either had a richer language than what we needed or did not map exactly to the mathematics we use for guidance and control,” Professor Abbass said.</p> <p dir="ltr">“This all changed one day when, out of curiosity, I was searching on Google for studies that looked at the syntax of Aboriginal languages.</p> <p dir="ltr">“I encountered a PhD thesis about Jingulu. I started reading it, and it did not take much time before it clicked in my head: this language would be perfect for my artificial intelligence-enabled swarm guidance work.”</p> <p dir="ltr">This isn’t the first time Indigenous languages have been applied to such problems, with uses dating back to World War II.</p> <p dir="ltr">“The Aboriginal people have a long history of contributions to the defence of Australia,” Professor Abbass said.</p> <p dir="ltr">“During the Second World War their languages were used for secret communications. Today we are discovering that the wealth and richness of the Aboriginal languages and culture could hold the secret in human-AI interaction.”</p> <p dir="ltr"><em>Image: Getty Images</em></p>
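<p dir="ltr">The appeal of a three-verb grammar is easy to see in code. The toy sketch below is an illustration only — the verb-to-primitive mapping, the <code>SwarmCommand</code> type and the <code>parse</code> function are hypothetical names invented for this example, not drawn from the published JSwarm study:</p>

```python
# A toy sketch of the idea behind a Jingulu-inspired command language:
# three verb roots (come, go, do) map directly onto swarm guidance
# primitives. All names and mappings here are hypothetical illustrations.
from dataclasses import dataclass

# The three verb roots the article describes, mapped to guidance primitives.
VERB_PRIMITIVES = {
    "come": "attract",   # draw agents toward the caller
    "go": "move_to",     # send agents toward a named target
    "do": "act",         # perform an action with no movement component
}

@dataclass
class SwarmCommand:
    verb: str       # the Jingulu-style verb root
    primitive: str  # the guidance primitive it maps to
    target: str     # the entity or location the verb applies to

def parse(utterance: str) -> SwarmCommand:
    """Parse a minimal '<verb> <target>' utterance into a swarm command.

    With only three verbs, translation is a constant-time lookup,
    which reflects the article's point about low computational cost.
    """
    verb, _, target = utterance.strip().lower().partition(" ")
    if verb not in VERB_PRIMITIVES:
        raise ValueError(f"unknown verb: {verb!r}")
    return SwarmCommand(verb=verb, primitive=VERB_PRIMITIVES[verb], target=target)

print(parse("go paddock"))
print(parse("come shepherd"))
```

<p dir="ltr">Because every utterance reduces to one of three primitives, the same tiny grammar can be reused in a new domain simply by changing what the targets refer to, without touching the syntax — the property Professor Abbass highlights above.</p>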
