
Which of these pictures is a deepfake? Your brain knows the answer before you do

<p>Deepfakes – AI-generated videos and pictures of people – are becoming increasingly realistic. This makes them a perfect weapon for disinformation and fraud.</p> <p>But while you might be consciously tricked by a deepfake, new evidence suggests that your brain knows better. Fake portraits trigger different signals on brain scans, according to a paper <a href="" target="_blank" rel="noreferrer noopener">published</a> in <em>Vision Research</em>.</p> <p>While you can’t consciously spot the fake (for those playing at home, the face on the right is the phony), your neurons are more reliable.</p> <p>“Your brain sees the difference between the two images. You just can’t see it yet,” says co-author Associate Professor Thomas Carlson, a researcher at the University of Sydney’s School of Psychology.</p> <p>The researchers asked volunteers to view several hundred photos, some real and some fakes generated by a GAN (a generative adversarial network, a common way of making deepfakes).</p> <p>One group of 200 participants was asked to guess, by pressing a button, which images were real and which were fake.</p> <p>A different group of 22 participants didn’t guess, but instead underwent electroencephalography (EEG) while viewing the images.</p> <p>The EEGs showed distinct signals when participants were viewing deepfakes, compared to real images.</p> <p>“The brain is responding differently than when it sees a real image,” says Carlson.</p> <p>“It’s sort of difficult to figure out what exactly it’s picking up on, because all you can really see is that it is different – that’s something we’ll have to do more research to figure out.”</p> <p>The EEG scans weren’t foolproof: they could only spot deepfakes 54% of the time. But that’s significantly better than the participants who were guessing consciously. 
People only found deepfakes 37% of the time – worse than if they’d just flipped a coin.</p> <p>“The fact that the brain can detect deepfakes means current deepfakes are flawed,” says Carlson.</p> <p>“If we can learn how the brain spots deepfakes, we could use this information to create algorithms to flag potential deepfakes on digital platforms like Facebook and Twitter.”</p> <p>It could also be used to prevent fraud and theft.</p> <p>“EEG-enabled helmets could have been helpful in preventing recent bank heist and corporate fraud cases in Dubai and the UK, where scammers used cloned voice technology to steal tens of millions of dollars,” says Carlson.</p> <p>“In these cases, finance personnel thought they heard the voice of a trusted client or associate and were duped into transferring funds.”</p> <p>But this is by no means a guarantee. The researchers point out in their paper that, even while they were doing the research, GANs got more advanced and generated better fake images than the ones they used in their study. It’s possible that, once the algorithms exist, deepfakers will just figure out ways to circumvent them.</p> <p>“That said, the deepfakes are always being generated by a computer that has an ‘idea’ of what a face is,” says Carlson.</p> <p>“As long as it’s generating these things from this ‘idea’, there might be just the slightest thing that’s wrong. 
It’s a matter of figuring out what’s wrong with it this time.”</p> <div id="contributors"> <p><em><a href="" target="_blank" rel="noopener">This article</a> was originally published on <a href="" target="_blank" rel="noopener">Cosmos Magazine</a> and was written by <a href="" target="_blank" rel="noopener">Ellen Phiddian</a>. Ellen Phiddian is a science journalist at Cosmos. She has a BSc (Honours) in chemistry and science communication, and an MSc in science communication, both from the Australian National University.</em></p> <p><em>Image: Moshel et al. 2022, Vision Research.</em></p> </div>
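<p>The chance-level comparison above (EEG at 54%, conscious guessing at 37%, against a 50% coin-flip baseline) can be illustrated with a quick one-sided binomial test. The sketch below uses a normal approximation and a purely hypothetical trial count of 1,000 – the study’s actual number of presentations isn’t given here – so it demonstrates the reasoning about chance, not the paper’s statistics.</p>

```python
import math

def one_sided_p(successes, trials, p0=0.5, alternative="greater"):
    """One-sided binomial test against chance level p0, via a normal
    approximation to the binomial distribution."""
    p_hat = successes / trials
    se = math.sqrt(p0 * (1 - p0) / trials)
    z = (p_hat - p0) / se
    if alternative == "less":
        z = -z
    # upper-tail probability of the standard normal distribution
    return 0.5 * math.erfc(z / math.sqrt(2))

TRIALS = 1000  # hypothetical number of image presentations, for illustration

p_eeg = one_sided_p(540, TRIALS, alternative="greater")   # EEG: 54% correct
p_guess = one_sided_p(370, TRIALS, alternative="less")    # guessing: 37% correct

print(f"EEG above chance:      p = {p_eeg:.4f}")
print(f"Guessing below chance: p = {p_guess:.2e}")
```

<p>With 1,000 trials, 54% accuracy is comfortably above chance (p &lt; 0.01), while 37% is far <em>below</em> chance – consistent with the article’s point that the guessers did worse than a coin flip.</p>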



How Frank Sinatra was caught singing 20 years after his death

<p><span style="font-weight: 400;">Fans were confused in 2020 when footage of what appeared to be Frank Sinatra singing about hot tubs went viral.</span></p> <p><span style="font-weight: 400;">The iconic singer died in 1998, so many wondered where old audio clips of him had surfaced from – but the audio was actually new.</span></p> <p><span style="font-weight: 400;">And it wasn’t Frank Sinatra singing at all.</span></p> <p><span style="font-weight: 400;">The song, titled “Hot Tub Christmas”, was the product of a technology known as a “deepfake” that mimicked Sinatra’s unmistakable voice.</span></p> <p><span style="font-weight: 400;">The track came from a San Francisco tech company whose AI system, known as Jukebox, generates new songs and vocals that sound almost exactly like real artists.</span></p> <p><strong>So, what is a deepfake?</strong></p> <p><span style="font-weight: 400;">Deepfakes are realistic videos or audio clips of events that never actually took place, generated by artificial intelligence.</span></p> <p><span style="font-weight: 400;">These videos have been used to trick online users into thinking their favourite celebrities said things they never actually did.</span></p> <p><span style="font-weight: 400;">The tech has been used to create fake videos of Hollywood actor Tom Cruise, which set off alarm bells in national security circles.</span></p> <p><span style="font-weight: 400;">Deepfakes can also be used to manipulate images, inserting people’s faces into random events and videos.</span></p> <p><span style="font-weight: 400;">Audio deepfakes, like this unusual Frank Sinatra track, have so far received less attention in the media.</span></p> <p><span style="font-weight: 400;">One audio deepfake that has garnered a lot of criticism is a recreation of the voice of late chef Anthony Bourdain for use in his upcoming documentary. 
</span></p> <p><strong>How are deepfakes made?</strong></p> <p><span style="font-weight: 400;">The audio is created by an artificial intelligence that has ingested and examined 1.2 million songs, along with their corresponding lyrics and metadata such as artist names, genres and years of release.</span></p> <p><span style="font-weight: 400;">Using this data, the AI can create new music samples from scratch and make them seem like they came from the original artist. </span></p> <p><span style="font-weight: 400;">While some celebrities who have been spoofed in deepfakes have expressed their discomfort and irritation with the new tech, one singer, Holly Herndon, believes they are here to stay.</span></p> <p><span style="font-weight: 400;">She said, "Vocal deepfakes are here to stay. A balance needs to be found between protecting artists and encouraging people to experiment with a new and exciting technology."</span></p> <p><em><span style="font-weight: 400;">Image credit: Getty Images</span></em></p>
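<p>Jukebox itself is a large neural network, but the “ingest examples, then generate new material in the same style” idea can be sketched with a much simpler stand-in: a character-level Markov chain. The toy below is a drastically simplified analogy – the three-line “corpus” is placeholder text, not real lyrics, and nothing here reflects how Jukebox works internally.</p>

```python
import random
from collections import defaultdict

# Placeholder "training data" standing in for a real lyrics corpus.
corpus = [
    "fly me to the stars tonight",
    "let me sing among the lights",
    "in other words hold my hand",
]

def train(lines, order=2):
    """Ingest the corpus: map each `order`-character context to the
    characters observed to follow it."""
    model = defaultdict(list)
    for line in lines:
        padded = " " * order + line
        for i in range(len(line)):
            context = padded[i:i + order]
            model[context].append(padded[i + order])
    return model

def generate(model, order=2, length=40, seed=0):
    """Generate new text one character at a time by sampling from the
    learned contexts, mimicking the style of the training lines."""
    rng = random.Random(seed)
    context = " " * order
    out = []
    for _ in range(length):
        choices = model.get(context)
        if not choices:
            break
        ch = rng.choice(choices)
        out.append(ch)
        context = context[1:] + ch
    return "".join(out)

model = train(corpus)
print(generate(model))
```

<p>The output is new text that never appears in the corpus but borrows its letter patterns – the same in-principle trick, at a vastly smaller scale, as a system that learns from 1.2 million songs and then produces vocals “in the style of” an artist.</p>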

