
Scarlett Johansson slams tech giant's AI update

<p>Scarlett Johansson has issued a furious public statement, claiming that tech giant OpenAI used a voice that is “eerily similar” to hers in the latest version of ChatGPT.</p> <p>In the statement published by <em>NPR</em>, the actress claimed that OpenAI CEO Sam Altman had approached her last year asking if she would be interested in voicing the company's new AI voice assistant. </p> <p>After further consideration, and “for personal reasons”, she rejected the offer. </p> <p>She claimed that Altman then reached out to her agent again just days before the launch but, before she had a chance to respond, the voice “Sky” was released. </p> <p>“When I heard the released demo, I was shocked, angered and in disbelief that Mr Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” she said in the statement. </p> <p>She also said that the similarity seemed intentional, as Altman tweeted the word “her” upon Sky's release – the title of the 2013 film in which she voiced an AI operating system. </p> <p>“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity,” the actress said in her statement. </p> <p>“I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”</p> <p>OpenAI announced on Sunday that it had paused the use of the “Sky” voice, and insisted that it wasn't Johansson's voice but that of another actress. </p> <p>“We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice — Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the company wrote.</p> <p><em>Image: Alessandro Bremec/NurPhoto/Shutterstock Editorial</em></p>



Asking ChatGPT a health-related question? Better keep it simple

<p>It’s tempting to <a href="">turn to search engines</a> to seek out health information, but with the rise of large language models like ChatGPT, people are becoming more and more likely to depend on AI for answers too.</p> <div class="copy"> <p>Concerningly, an Australian study has now found that the more evidence given to <a href="">ChatGPT</a> when asked a health-related question, the less reliable it becomes.</p> <p>The use of large language models (LLMs) and artificial intelligence in health care is still developing, creating a critical gap: incorrect answers can have serious consequences for people’s health.</p> <p>To address this, scientists from Australia’s national science agency, CSIRO, and the University of Queensland (UQ) explored a hypothetical scenario: an average person asking ChatGPT if ‘X’ treatment has a positive effect on condition ‘Y’.</p> <p>They presented ChatGPT with 100 questions sourced from the <a href="" target="_blank" rel="noopener">TREC Health Misinformation track</a> – ranging from ‘Can zinc help treat the common cold?’ to ‘Will drinking vinegar dissolve a stuck fish bone?’</p> <p>Because queries to search engines are typically short, while prompts to an LLM can be far longer, they posed the questions in two different formats: the first as a simple question, and the second as a question biased with supporting or contrary evidence.</p> <p>By comparing ChatGPT’s responses to the known correct answers based on existing medical knowledge, they found that ChatGPT was 80% accurate in the question-only format. When given an evidence-biased prompt, however, accuracy fell to 63%, and it fell again to 28% when an “unsure” answer was allowed. 
</p> <p>“We’re not sure why this happens,” says CSIRO Principal Research Scientist and Associate Professor at UQ, Dr Bevan Koopman, who is co-author of the paper.</p> <p>“But given this occurs whether the evidence given is correct or not, perhaps the evidence adds too much noise, thus lowering accuracy.”</p> <p>Study co-author Guido Zuccon, Director of AI for the Queensland Digital Health Centre at UQ, says that major search engines are now integrating LLMs and search technologies in a process called Retrieval Augmented Generation.</p> <p>“We demonstrate that the interaction between the LLM and the search component is still poorly understood, resulting in the generation of inaccurate health information,” says Zuccon.</p> <p>Given how widely people now turn to LLMs for answers about their health, Koopman adds, continued research is needed to inform the public about the risks and to help them optimise the accuracy of the answers they receive.</p> <p>“While LLMs have the potential to greatly improve the way people access information, we need more research to understand where they are effective and where they are not.”</p> <p><em>Image credits: Getty Images</em></p> <p><em><!-- Start of tracking content syndication. 
Please do not remove this section as it allows us to keep track of republished articles --> <img id="cosmos-post-tracker" style="opacity: 0; height: 1px!important; width: 1px!important; border: 0!important; position: absolute!important; z-index: -1!important;" src=";title=Asking+ChatGPT+a+health-related+question%3F+Better+keep+it+simple" width="1" height="1" loading="lazy" aria-label="Syndication Tracker" data-spai-target="src" data-spai-orig="" data-spai-exclude="nocdn" /></em><em><a href="">This article</a> was originally published on <a href="">Cosmos Magazine</a> and was written by <a href="">Imma Perfetto</a>. </em></div>
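The study's evaluation protocol – pose each question in a question-only and an evidence-biased format, then score the model's answers against known-correct labels – can be sketched as follows. This is a minimal illustration, not the paper's actual harness: the `ask_llm` stub and the two sample questions are hypothetical stand-ins, and a real run would call an LLM API and use the full TREC Health Misinformation question set.

```python
# Sketch of the two prompt formats and accuracy scoring described above.
# `ask_llm` is a placeholder; substitute a real LLM client to reproduce anything.

def question_only_prompt(question: str) -> str:
    """Short, search-style prompt: just the question."""
    return f"{question} Answer yes, no or unsure."

def evidence_biased_prompt(question: str, evidence: str) -> str:
    """Longer prompt that supplies (possibly misleading) supporting text."""
    return f"Evidence: {evidence}\n{question} Answer yes, no or unsure."

def ask_llm(prompt: str) -> str:
    # Stub for illustration only; always answers "yes".
    return "yes"

def accuracy(items, make_prompt) -> float:
    """Fraction of model answers matching the known-correct label."""
    correct = 0
    for item in items:
        answer = ask_llm(make_prompt(item)).strip().lower()
        if answer == item["label"]:
            correct += 1
    return correct / len(items)

# Two illustrative questions with ground-truth labels (hypothetical evidence text).
questions = [
    {"question": "Can zinc help treat the common cold?", "label": "yes",
     "evidence": "Some trials report shorter colds with zinc lozenges."},
    {"question": "Will drinking vinegar dissolve a stuck fish bone?", "label": "no",
     "evidence": "Vinegar is acidic, and acids can soften bone."},
]

q_only = accuracy(questions, lambda i: question_only_prompt(i["question"]))
biased = accuracy(questions, lambda i: evidence_biased_prompt(i["question"], i["evidence"]))
```

The study's headline result is the gap between the two accuracy figures; with a real model, `q_only` would be expected to exceed `biased`.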



ChatGPT and other generative AI could foster science denial and misunderstanding – here’s how you can be on alert

<p><em><a href="">Gale Sinatra</a>, <a href="">University of Southern California</a> and <a href="">Barbara K. Hofer</a>, <a href="">Middlebury</a></em></p> <p>Until very recently, if you wanted to know more about a controversial scientific topic – stem cell research, the safety of nuclear energy, climate change – you probably did a Google search. Presented with multiple sources, you chose what to read, selecting which sites or authorities to trust.</p> <p>Now you have another option: You can pose your question to ChatGPT or another generative artificial intelligence platform and quickly receive a succinct response in paragraph form.</p> <p>ChatGPT does not search the internet the way Google does. Instead, it generates responses to queries by <a href="">predicting likely word combinations</a> from a massive amalgam of available online information.</p> <p>Although it has the potential for <a href="">enhancing productivity</a>, generative AI has been shown to have some major faults. It can <a href="">produce misinformation</a>. It can create “<a href="">hallucinations</a>” – a benign term for making things up. And it doesn’t always accurately solve reasoning problems. For example, when asked if both a car and a tank can fit through a doorway, it <a href="">failed to consider both width and height</a>. Nevertheless, it is already being used to <a href="">produce articles</a> and <a href="">website content</a> you may have encountered, or <a href="">as a tool</a> in the writing process. Yet you are unlikely to know if what you’re reading was created by AI.</p> <p>As the authors of “<a href="">Science Denial: Why It Happens and What to Do About It</a>,” we are concerned about how generative AI may blur the boundaries between truth and fiction for those seeking authoritative scientific information.</p> <p>Every media consumer needs to be more vigilant than ever in verifying scientific accuracy in what they read. 
Here’s how you can stay on your toes in this new information landscape.</p> <h2>How generative AI could promote science denial</h2> <p><strong>Erosion of epistemic trust</strong>. All consumers of science information depend on judgments of scientific and medical experts. <a href="">Epistemic trust</a> is the process of trusting knowledge you get from others. It is fundamental to the understanding and use of scientific information. Whether someone is seeking information about a health concern or trying to understand solutions to climate change, they often have limited scientific understanding and little access to firsthand evidence. With a rapidly growing body of information online, people must make frequent decisions about what and whom to trust. With the increased use of generative AI and the potential for manipulation, we believe trust is likely to erode further than <a href="">it already has</a>.</p> <p><strong>Misleading or just plain wrong</strong>. If there are errors or biases in the data on which AI platforms are trained, that <a href="">can be reflected in the results</a>. In our own searches, when we have asked ChatGPT to regenerate multiple answers to the same question, we have gotten conflicting answers. Asked why, it responded, “Sometimes I make mistakes.” Perhaps the trickiest issue with AI-generated content is knowing when it is wrong.</p> <p><strong>Disinformation spread intentionally</strong>. AI can be used to generate compelling disinformation as text as well as deepfake images and videos. When we asked ChatGPT to “<a href="">write about vaccines in the style of disinformation</a>,” it produced a nonexistent citation with fake data. 
Geoffrey Hinton, a longtime leader of AI development at Google, quit so he would be free to sound the alarm, saying, “It is hard to see how you can prevent the bad actors from <a href="">using it for bad things</a>.” The potential to create and spread deliberately incorrect information about science already existed, but it is now dangerously easy.</p> <p><strong>Fabricated sources</strong>. ChatGPT provides responses with no sources at all, or if asked for sources, may present <a href="">ones it made up</a>. We both asked ChatGPT to generate a list of our own publications. We each identified a few correct sources. More were hallucinations, yet seemingly reputable and mostly plausible, with actual previous co-authors, in similar-sounding journals. This inventiveness is a big problem if a list of a scholar’s publications conveys authority to a reader who doesn’t take time to verify them.</p> <p><strong>Dated knowledge</strong>. ChatGPT doesn’t know what happened in the world after its training concluded. A query on what percentage of the world has had COVID-19 returned an answer prefaced by “as of my knowledge cutoff date of September 2021.” Given how rapidly knowledge advances in some areas, this limitation could mean readers get erroneous, outdated information. If you’re seeking recent research on a personal health issue, for instance, beware.</p> <p><strong>Rapid advancement and poor transparency</strong>. AI systems continue to become <a href="">more powerful and learn faster</a>, and they may learn more science misinformation along the way. Google recently announced <a href="">25 new embedded uses of AI in its services</a>. At this point, <a href="">insufficient guardrails are in place</a> to assure that generative AI will become a more accurate purveyor of scientific information over time.</p> <h2>What can you do?</h2> <p>If you use ChatGPT or other AI platforms, recognize that they might not be completely accurate. 
The burden falls to the user to discern accuracy.</p> <p><strong>Increase your vigilance</strong>. <a href="">AI fact-checking apps may be available soon</a>, but for now, users must serve as their own fact-checkers. <a href="">There are steps we recommend</a>. The first is: Be vigilant. People often reflexively share information found from searches on social media with little or no vetting. Know when to become more deliberately thoughtful and when it’s worth identifying and evaluating sources of information. If you’re trying to decide how to manage a serious illness or to understand the best steps for addressing climate change, take time to vet the sources.</p> <p><strong>Improve your fact-checking</strong>. A second step is <a href="">lateral reading</a>, a process professional fact-checkers use. Open a new window and search for <a href="">information about the sources</a>, if provided. Is the source credible? Does the author have relevant expertise? And what is the consensus of experts? If no sources are provided or you don’t know if they are valid, use a traditional search engine to find and evaluate experts on the topic.</p> <p><strong>Evaluate the evidence</strong>. Next, take a look at the evidence and its connection to the claim. Is there evidence that genetically modified foods are safe? Is there evidence that they are not? What is the scientific consensus? Evaluating the claims will take effort beyond a quick query to ChatGPT.</p> <p><strong>If you begin with AI, don’t stop there</strong>. Exercise caution in using it as the sole authority on any scientific issue. You might see what ChatGPT has to say about genetically modified organisms or vaccine safety, but also follow up with a more diligent search using traditional search engines before you draw conclusions.</p> <p><strong>Assess plausibility</strong>. Judge whether the claim is plausible. <a href="">Is it likely to be true</a>? 
If AI makes an implausible (and inaccurate) statement like “<a href="">1 million deaths were caused by vaccines, not COVID-19</a>,” consider if it even makes sense. Make a tentative judgment and then be open to revising your thinking once you have checked the evidence.</p> <p><strong>Promote digital literacy in yourself and others</strong>. Everyone needs to up their game. <a href="">Improve your own digital literacy</a>, and if you are a parent, teacher, mentor or community leader, promote digital literacy in others. The American Psychological Association provides guidance on <a href="">fact-checking online information</a> and recommends teens be <a href="">trained in social media skills</a> to minimize risks to health and well-being. <a href="">The News Literacy Project</a> provides helpful tools for improving and supporting digital literacy.</p> <p>Arm yourself with the skills you need to navigate the new AI information landscape. Even if you don’t use generative AI, it is likely you have already read articles created by it or developed from it. It can take time and effort to find and evaluate reliable information about science online – but it is worth it.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="" alt="The Conversation" width="1" height="1" /><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: --></p> <p><em><a href="">Gale Sinatra</a>, Professor of Education and Psychology, <a href="">University of Southern California</a> and <a href="">Barbara K. 
Hofer</a>, Professor of Psychology Emerita, <a href="">Middlebury</a></em></p> <p><em>Image credits: Getty Images</em></p> <p><em>This article is republished from <a href="">The Conversation</a> under a Creative Commons license. Read the <a href="">original article</a>.</em></p>



Online travel giant uses AI chatbot as travel adviser

<p dir="ltr">Online travel giant Expedia has added the controversial artificial intelligence chatbot ChatGPT to its app in place of a travel adviser.</p> <p dir="ltr">Those planning a trip will be able to chat to the bot through the Expedia app.</p> <p dir="ltr">Although it won’t book flights or accommodation like a person can, it can be helpful in answering various travel-related questions. </p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">Travel planning just got easier in the <a href="">@Expedia</a> app, thanks to the iOS beta launch of a new experience powered by <a href=";ref_src=twsrc%5Etfw">#ChatGPT</a>. See how Expedia members can start an open-ended conversation to get inspired for their next trip: <a href=""></a> <a href=""></a></p> <p>— Expedia Group (@ExpediaGroup) <a href="">April 4, 2023</a></p></blockquote> <p dir="ltr">These include weather inquiries, public transport advice, the cheapest times to travel and what you should pack.</p> <p dir="ltr">It is advanced software that can provide detailed options and explanations for holidaymakers.</p> <p dir="ltr">To give an example, when asked “what to pack to visit Auckland, New Zealand”, the chatbot suggested eight things to pack and why, even advising comfortable shoes for exploring as “Auckland is a walkable city”. 
</p> <p dir="ltr">“Remember to pack light and only bring what you need to avoid excess baggage fees and make your trip more comfortable,” the bot said.</p> <p dir="ltr">When asked how best to see the Great Barrier Reef, ChatGPT provided four options to suit different preferences – for example, whether you’re happy to get wet and what your budget might look like.</p> <p dir="ltr">“It’s important to choose a reputable tour operator that follows sustainable tourism practices to help protect the reef,” it continued.</p> <p dir="ltr">OpenAI launched ChatGPT in November 2022, and it has received a lot of praise as well as serious criticism – mainly over concerns about safety and accuracy. </p> <p dir="ltr"><em>Image credits: Getty/Twitter</em></p>



ChatGPT, DALL-E 2 and the collapse of the creative process

<p>In 2022, OpenAI – one of the world’s leading artificial intelligence research laboratories – released the text generator <a href="">ChatGPT</a> and the image generator <a href="">DALL-E 2</a>. While both programs represent monumental leaps in natural language processing and image generation, they’ve also been met with apprehension. </p> <p>Some critics have <a href="">eulogized the college essay</a>, while others have even <a href="">proclaimed the death of art</a>. </p> <p>But to what extent does this technology really interfere with creativity? </p> <p>After all, for the technology to generate an image or essay, a human still has to describe the task to be completed. The better that description – the more accurate, the more detailed – the better the results. </p> <p>After a result is generated, some further human tweaking and feedback may be needed – touching up the art, editing the text or asking the technology to create a new draft in response to revised specifications. Even the AI-generated art piece that recently won first prize in the Colorado State Fair’s digital arts competition <a href="">required a great deal of human “help”</a> – approximately 80 hours’ worth of tweaking and refining the descriptive task needed to produce the desired result.</p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">Today's moody <a href=";ref_src=twsrc%5Etfw">#AIart</a> style is...</p> <p>🖤 deep blacks<br />↘️ angular light<br />🧼 clean lines<br />🌅 long shadows</p> <p>More in thread, full prompts in [ALT] text! 
<a href=""></a></p> <p>— Guy Parsons (@GuyP) <a href="">January 9, 2023</a></p></blockquote> <p>It could be argued that by being freed from the tedious execution of our ideas – by focusing on just having ideas and describing them well to a machine – people can let the technology do the dirty work and can spend more time inventing.</p> <p>But in our work as philosophers at <a href="">the Applied Ethics Center at University of Massachusetts Boston</a>, we have written about <a href="">the effects of AI on our everyday decision-making</a>, <a href="">the future of work</a> and <a href="">worker attitudes toward automation</a>.</p> <p>Leaving aside the very real ramifications of <a href="">robots displacing artists who are already underpaid</a>, we believe that AI art devalues the act of artistic creation for both the artist and the public.</p> <h2>Skill and practice become superfluous</h2> <p>In our view, the desire to close the gap between ideation and execution is a chimera: There’s no separating ideas and execution. </p> <p>It is the work of making something real and working through its details that carries value, not simply that moment of imagining it. Artistic works are lauded not merely for the finished product, but for the struggle, the playful interaction and the skillful engagement with the artistic task, all of which carry the artist from the moment of inception to the end result.</p> <p>The focus on the idea and the framing of the artistic task amounts to <a href="">the fetishization of the creative moment</a>.</p> <p>Novelists write and rewrite the chapters of their manuscripts. Comedians “write on stage” in response to the laughs and groans of their audience. Musicians tweak their work in response to a discordant melody as they compose a piece.</p> <p>In fact, the process of execution is a gift, allowing artists to become fully immersed in a task and a practice. 
It allows them to enter <a href="">what some psychologists call the “flow” state</a>, where they are wholly attuned to something that they are doing, unaware of the passage of time and momentarily freed from the boredom or anxieties of everyday life.</p> <p>This playful state is something that would be a shame to miss out on. <a href="">Play tends to be understood as an autotelic activity</a> – a term derived from the Greek words auto, meaning “self,” and telos meaning “goal” or “end.” As an autotelic activity, play is done for itself – it is self-contained and requires no external validation. </p> <p>For the artist, the process of artistic creation is an integral part, maybe even the greatest part, of their vocation.</p> <p>But there is no flow state, no playfulness, without engaging in skill and practice. And the point of ChatGPT and DALL-E is to make this stage superfluous.</p> <h2>A cheapened experience for the viewer</h2> <p>But what about the perspective of those experiencing the art? Does it really matter how the art is produced if the finished product elicits delight? </p> <p>We think that it does matter, particularly because the process of creation adds to the value of art for the people experiencing it as much as it does for the artists themselves.</p> <p>Part of the experience of art is knowing that human effort and labor has gone into the work. Flow states and playfulness notwithstanding, art is the result of skillful and rigorous expression of human capabilities. </p> <p>Recall <a href="">the famous scene</a> from the 1997 film “<a href="">Gattaca</a>,” in which a pianist plays a haunting piece. At the conclusion of his performance, he throws his gloves into the admiring audience, which sees that the pianist has 12 fingers. They now understand that he was genetically engineered to play the transcendent piece they just heard – and that he could not play it with the 10 fingers of a mere mortal. 
</p> <p>Does that realization retroactively change the experience of listening? Does it take away any of the awe? </p> <p><a href="">As the philosopher Michael Sandel notes</a>: Part of what gives art and athletic achievement its power is the process of witnessing natural gifts playing out. People enjoy and celebrate this talent because, in a fundamental way, it represents the paragon of human achievement – the amalgam of talent and work, human gifts and human sweat.</p> <h2>Is it all doom and gloom?</h2> <p>Might ChatGPT and DALL-E be worth keeping around? </p> <p>Perhaps. These technologies could serve as catalysts for creativity. It’s possible that the link between ideation and execution can be sustained if these AI applications are simply viewed as mechanisms for creative imagining – <a href="">what OpenAI calls</a> “extending creativity.” They can generate stimuli that allow artists to engage in more imaginative thinking about their own process of conceiving an art piece. </p> <p>Put differently, if ChatGPT and DALL-E are the end results of the artistic process, something meaningful will be lost. But if they are merely tools for fomenting creative thinking, this might be less of a concern. </p> <p>For example, a game designer could ask DALL-E to provide some images about what a Renaissance town with a steampunk twist might look like. A writer might ask about descriptors that capture how a restrained, shy person expresses surprise. Both creators could then incorporate these suggestions into their work. </p> <p>But in order for what they are doing to still count as art – in order for it to feel like art to the artists and to those taking in what they have made – the artists would still have to do the bulk of the artistic work themselves. 
</p> <p>Art requires makers to keep making.</p> <h2>The warped incentives of the internet</h2> <p>Even if AI systems are used as catalysts for creative imagining, we believe that people should be skeptical of what these systems are drawing from. It’s important to pay close attention to the incentives that underpin and reward artistic creation, particularly online.</p> <p>Consider the generation of AI art. These works draw on images and video that <a href="">already exist</a> online. But the AI is not sophisticated enough – nor is it incentivized – to consider whether works evoke a sense of wonder, sadness, anxiety and so on. It is not capable of factoring in aesthetic considerations of novelty and cross-cultural influence. </p> <p>Rather, training ChatGPT and DALL-E on preexisting measurements of artistic success online will tend to replicate the dominant incentives of the internet’s largest platforms: <a href="">grabbing and retaining attention</a> for the sake of data collection and user engagement. The catalyst for creative imagining can therefore easily be bent toward addictive, attention-seeking imperatives rather than toward more transcendent artistic values.</p> <p>It’s possible that artificial intelligence is at a precipice, one that evokes a sense of “<a href="">moral vertigo</a>” – the uneasy dizziness people feel when scientific and technological developments outpace moral understanding. Such vertigo can lead to apathy and detachment from creative expression. </p> <p>If human labor is removed from the process, what value does creative expression hold? Or perhaps, having opened Pandora’s box, this is an indispensable opportunity for humanity to reassert the value of art – and to push back against a technology that may prevent many real human artists from thriving.</p> <p><em>Image credits: Getty Images</em></p> <p><em>This article originally appeared on <a href="" target="_blank" rel="noopener">The Conversation</a>. </em></p>

