
“Unbelievably legitimate”: Deb Knight falls victim to popular scam

<p>Deb Knight has shared how she fell victim to a popular scam, losing $1,200 while trying to get Taylor Swift tickets for her daughter's birthday.</p> <p>Like many people around Australia, the veteran journalist was eager to get her hands on tickets to the highly anticipated Eras Tour as a once-in-a-lifetime surprise for her eight-year-old daughter's birthday.</p> <p>After missing out on tickets through all official channels, Deb thought hope was lost – until a friend reached out to her.</p> <p>“A really good friend, who I’ve known all my life, contacted me and said, ‘do you still want Taylor Swift tickets?’” Knight told <em>A Current Affair</em>.</p> <p>“It was my daughter’s eighth birthday and getting my hands on these tickets would be the best present ever.”</p> <p>“My friend put me in contact with her friend who had the tickets – or so I thought.”</p> <p>Knight had received a phone call from her close friend who said her cousin was selling tickets, but unbeknownst to everyone involved, the friend’s Facebook account had been hacked.</p> <p>Deb promised to pay half the cost of the tickets as a bond, then pay the rest after she had seen the tickets, which she said looked “unbelievably legitimate”.</p> <p>Tech expert Trevor Long joined Deb on <em>ACA</em> and spotted one major flaw in the fake tickets.</p> <p>“The difference is a genuine Taylor Swift ticket in an Apple Wallet right now does not have that barcode.”</p> <p>Alarm bells started ringing for the veteran journalist when the so-called seller claimed the payment had not come through – but by then it was too late.</p> <p>Deb contacted her bank, but was unable to get her $1,200 back, and her hunt for Taylor Swift tickets continued.</p> <p>“I realised I’d been scammed. I felt sick to the stomach, absolutely humiliated. I also felt embarrassed and ashamed,” she said.</p> <p>“I was reluctant to speak publicly about this but I think we’ve got to. We have to normalise it so people feel there’s less of a stigma about it.”</p> <p>“It happens to everyone, even Deb Knight – it’s disgusting, what’s happening, so something needs to be done.”</p> <p>Police have warned Swifties who missed out on tickets to the singer’s upcoming tour not to fall prey to ticketing scams, and to only purchase tickets through official channels such as Ticketek Marketplace.</p> <p>Since tickets for the Eras Tour went on sale last June and sold out in record time, Victoria Police said there had been more than 250 reports of ticketing scams for Taylor Swift shows alone.</p> <p><em>Image credits: A Current Affair</em></p>



Give this AI a few words of description and it produces a stunning image – but is it art?

<p>A picture may be worth a thousand words, but thanks to an artificial intelligence program called <a href="https://fortune.com/2022/04/06/openai-dall-e-2-photorealistic-images-from-text-descriptions/">DALL-E 2</a>, you can have a professional-looking image with far fewer.</p> <p>DALL-E 2 is <a href="http://adityaramesh.com/posts/dalle2/dalle2.html">a new neural network</a> algorithm that creates a picture from a short phrase or sentence that you provide. <a href="https://openai.com/dall-e-2/">The program</a>, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.</p> <p><a href="https://scholar.google.com/citations?user=ZcWO2AEAAAAJ&amp;hl=en">As a researcher studying the nexus of technology and art</a>, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.</p> <h2>A staggering range of style and subjects</h2> <p>OpenAI researchers built DALL-E 2 <a href="https://github.com/openai/dalle-2-preview/blob/main/system-card.md#model">from an enormous collection of images</a> with captions. They gathered some of the images online and licensed others.</p> <p>Using DALL-E 2 looks a lot like searching for an image on the web: you type a short phrase into a text box, and it gives back six images.</p> <p>But instead of being culled from the web, the program creates six brand-new images, each of which reflects some version of the entered phrase. (Until recently, the program produced 10 images per prompt.)
For example, when some friends and I gave DALL-E 2 the text prompt “cats in devo hats,” <a href="https://twitter.com/AaronHertzmann/status/1534947118053355522">it produced 10 images</a> that came in different styles.</p> <p>Nearly all of them could plausibly pass for professional photographs or drawings. While the algorithm did not quite grasp “Devo hat” – <a href="https://images.squarespace-cdn.com/content/5761baff746fb9f420bb3ffc/1495765600043-HHVOESOJR2LLK7B820SS/?content-type=image%2Fjpeg">the strange helmets</a> worn by the New Wave band Devo – the headgear in the images it produced came close. </p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">"cats in devo hats" <a href="https://twitter.com/hashtag/dalle?src=hash&amp;ref_src=twsrc%5Etfw">#dalle</a> <a href="https://t.co/kkFaKF0zUJ">pic.twitter.com/kkFaKF0zUJ</a></p> <p>— Aaron Hertzmann (@AaronHertzmann) <a href="https://twitter.com/AaronHertzmann/status/1534947118053355522?ref_src=twsrc%5Etfw">June 9, 2022</a></p></blockquote> <p>Over the past few years, a small community of artists have been using neural network algorithms to produce art. Many of these artworks have distinctive qualities that almost look like real images, <a href="https://theconversation.com/new-ai-art-has-artists-collaborators-wondering-who-gets-the-credit-112661">but with odd distortions of space</a> – a sort of cyberpunk Cubism. The most recent text-to-image systems <a href="https://www.rightclicksave.com/article/clip-art-and-the-new-aesthetics-of-ai">often produce dreamy, fantastical imagery</a> that can be delightful but rarely looks real.</p> <p>DALL-E 2 offers a significant leap in the quality and realism of the images. It can also mimic specific styles with remarkable accuracy. If you want images that look like actual photographs, it’ll produce six life-like images. 
If you want prehistoric cave paintings of Shrek, it’ll generate six pictures of Shrek as if they’d been drawn by a prehistoric artist.</p> <p>It’s staggering that an algorithm can do this. Each set of images takes less than a minute to generate. Not all of the images will look pleasing to the eye, nor do they necessarily reflect what you had in mind. But, even with the need to sift through many outputs or try different text prompts, there’s no other existing way to pump out so many great results so quickly – not even by hiring an artist. And, sometimes, the unexpected results are the best.</p> <p>In principle, <a href="http://adityaramesh.com/posts/dalle2/dalle2.html">anyone with enough resources and expertise can make a system like this</a>. Google Research <a href="https://imagen.research.google/">recently announced an impressive, similar text-to-image system</a>, and one independent developer is publicly developing their own version that <a href="https://huggingface.co/spaces/dalle-mini/dalle-mini">anyone can try right now on the web</a>, although it’s not yet as good as DALL-E or Google’s system.</p> <p>It’s easy to imagine these tools transforming the way people make images and communicate, whether via memes, greeting cards, advertising – and, yes, art.</p> <h2>Where’s the art in that?</h2> <p>I had a moment early on while using DALL-E 2 to generate different kinds of paintings, in all different styles – like “<a href="https://www.odilon-redon.org/">Odilon Redon</a> painting of Seattle” – when it hit me that this was better than any painting algorithm I’ve ever developed. Then I realized that it is, in a way, a better painter than I am.</p> <p>In fact, no human can do what DALL-E 2 does: create such a high-quality, varied range of images in mere seconds. 
If someone told you that a person made all these images, of course you’d say they were creative.</p> <p>But <a href="https://cacm.acm.org/magazines/2020/5/244330-computers-do-not-make-art-people-do/fulltext">this does not make DALL-E 2 an artist</a>. Even though it sometimes feels like magic, under the hood it is still a computer algorithm, rigidly following instructions from the algorithm’s authors at OpenAI. </p> <p>If these images succeed as art, they are products of how the algorithm was designed, the images it was trained on, and – most importantly – how artists use it. </p> <p>You might be inclined to say there’s little artistic merit in an image produced by a few keystrokes. But in my view, this line of thinking echoes <a href="https://cacm.acm.org/magazines/2020/5/244330-computers-do-not-make-art-people-do/fulltext">the classic take</a> that photography cannot be art because a machine did all the work. Today the human authorship and craft involved in artistic photography are recognized, and critics understand that the best photography involves much more than just pushing a button. </p> <p>Even so, we often discuss works of art as if they directly came from the artist’s intent. The artist intended to show a thing, or express an emotion, and so they made this image. DALL-E 2 does seem to shortcut this process entirely: you have an idea and type it in, and you’re done.</p> <p>But when I paint the old-fashioned way, I’ve found that my paintings come from the exploratory process, not just from executing my initial goals. And this is true for many artists.</p> <p>Take Paul McCartney, who came up with the track “<a href="https://www.youtube.com/watch?v=rUvZA5AYhB4&amp;t=35s">Get Back</a>” during a jam session. He didn’t start with a plan for the song; he just started fiddling and experimenting <a href="https://en.wikipedia.org/wiki/Get_Back#Early_protest_lyrics">and the band developed it from there</a>. 
</p> <p>Picasso <a href="https://books.google.com/books?id=dZyPAAAAQBAJ&amp;lpg=PA2&amp;ots=xYVek5tbjg&amp;dq=%22I%20don%27t%20know%20in%20advance%20what%20I%20am%20going%20to%20put%20on%20canvas%20any%20more%20than%20I%20decide%20beforehand%20what%20colors%20I%20am%20going%20to%20use&amp;pg=PA2#v=onepage&amp;q&amp;f=false">described his process similarly</a>: “I don’t know in advance what I am going to put on canvas any more than I decide beforehand what colors I am going to use … Each time I undertake to paint a picture I have a sensation of leaping into space.”</p> <p>In <a href="https://www.instagram.com/aaronhertzmann_aiart/">my own explorations with DALL-E 2</a>, one idea would lead to another which led to another, and eventually I’d find myself in a completely unexpected, magical new terrain, very far from where I’d started. </p> <h2>Prompting as art</h2> <p>I would argue that the art, in using a system like DALL-E 2, comes not just from the final text prompt, but in the entire creative process that led to that prompt. Different artists will follow different processes and end up with different results that reflect their own approaches, skills and obsessions.</p> <p>I began to see my experiments as a set of series, each a consistent dive into a single theme, rather than a set of independent wacky images. </p> <p>Ideas for these images and series came from all around, often linked by a set of <a href="https://link.springer.com/book/10.1007/978-3-319-15524-1">stepping stones</a>. At one point, while making images based on contemporary artists’ work, I wanted to generate an image of site-specific installation art in the style of the contemporary Japanese artist <a href="http://yayoi-kusama.jp/e/biography/index.html">Yayoi Kusama</a>. After trying a few unsatisfactory locations, I hit on the idea of placing it in <a href="https://mezquita-catedraldecordoba.es/en/">La Mezquita</a>, a former mosque and church in Córdoba, Spain. 
I sent <a href="https://www.instagram.com/p/CehcE4DvN1d/">the picture</a> to an architect colleague, Manuel Ladron de Guevara, who is from Córdoba, and we began riffing on other architectural ideas together. </p> <p>This became a series on imaginary new buildings in different architects’ styles.</p> <p>So I’ve started to consider what I do with DALL-E 2 to be both a form of exploration and a form of art, even if it’s often amateur art like the drawings I make on my iPad. </p> <p>Indeed, some artists, like <a href="https://twitter.com/advadnoun">Ryan Murdoch</a>, have advocated for prompt-based image-making to be recognized as art. He points to the <a href="https://twitter.com/NeuralBricolage">experienced AI artist Helena Sarin</a> as an example. </p> <p>“When I look at most stuff from <a href="https://www.midjourney.com/">Midjourney</a>” – another popular text-to-image system – “a lot of it will be interesting or fun,” Murdoch told me in an interview. “But with [Sarin’s] work, there’s a through line. It’s easy to see that she has put a lot of thought into it, and has worked at the craft, because the output is more visually appealing and interesting, and follows her style in a continuous way.” </p> <p>Working with DALL-E 2, or any of the new text-to-image systems, means learning its quirks and developing strategies for avoiding common pitfalls. It’s also important to know about <a href="https://github.com/openai/dalle-2-preview/blob/main/system-card.md#probes-and-evaluations">its potential harms</a>, such as its reliance on stereotypes, and potential uses for disinformation. Using DALL-E 2, you’ll also discover surprising correlations, like the way everything becomes old-timey when you use an old painter, filmmaker or photographer’s style.</p> <p>When I have something very specific I want to make, DALL-E 2 often can’t do it. The results would require a lot of difficult manual editing afterward.
It’s when my goals are vague that the process is most delightful, offering up surprises that lead to new ideas that themselves lead to more ideas and so on.</p> <h2>Crafting new realities</h2> <p>These text-to-image systems can help users imagine new possibilities as well. </p> <p><a href="https://daniellebaskin.com/">Artist-activist Danielle Baskin</a> told me that she always works “to show alternative realities by ‘real’ example: either by setting scenarios up in the physical world or doing meticulous work in Photoshop.” DALL-E 2, however, “is an amazing shortcut because it’s so good at realism. And that’s key to helping others bring possible futures to life – whether it’s satire, dreams or beauty.” </p> <p>She has used it to imagine <a href="https://twitter.com/djbaskin/status/1519050225297461249">an alternative transportation system</a> and <a href="https://twitter.com/djbaskin_images/status/1533970922146648064">plumbing that transports noodles instead of water</a>, both of which reflect <a href="https://www.forbes.com/sites/jonathonkeats/2021/02/11/is-twitter-really-offering-verified-badges-for-san-francisco-homes-an-artists-satire-nearly-starts-a-civil-war">her artist-provocateur sensibility</a>.</p> <p>Similarly, artist Mario Klingemann’s <a href="https://twitter.com/quasimondo/status/1533877178496163840">architectural renderings with the tents of homeless people</a> could be taken as a rejoinder to <a href="https://twitter.com/AaronHertzmann/status/1526710430751522817">my architectural renderings of fancy dream homes</a>.</p> <p>It’s too early to judge the significance of this art form. I keep thinking of a phrase from the excellent book “<a href="https://www.haymarketbooks.org/books/1662-art-in-the-after-culture">Art in the After-Culture</a>” – “The dominant AI aesthetic is novelty.” </p> <p>Surely this would be true, to some extent, for any new technology used for art.
The first films by the <a href="https://iphf.org/inductees/auguste-louis-lumiere/">Lumière brothers</a> in the 1890s were novelties, not cinematic masterpieces; it amazed people to see images moving at all. </p> <p>AI art software develops so quickly that there’s continual technical and artistic novelty. It seems as if, each year, there’s an opportunity to explore an exciting new technology – each more powerful than the last, and each seemingly poised to transform art and society.</p> <p><em>Image credits: Shutterstock</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/give-this-ai-a-few-words-of-description-and-it-produces-a-stunning-image-but-is-it-art-184363" target="_blank" rel="noopener">The Conversation</a>. </em></p>

