Have you ever considered that what you type into Google, or the ironic memes you laugh at on Facebook, might be building a more dangerous online environment?

Regulation of online spaces is starting to gather momentum, with governments, consumer groups, and even digital companies themselves calling for more control over what is posted and shared online.

Yet we often fail to recognise the role that we, as ordinary citizens, play in shaping the digital world.

The privilege of being online comes with rights and responsibilities, and we need to actively ask what kind of digital citizenship we want to encourage in Australia and beyond.

Beyond the knee-jerk

The Christchurch terror attack prompted policy change by governments in both New Zealand and Australia.

Australia recently passed a new law that imposes penalties on social media platforms if they fail to remove violent content promptly after it appears online.

Platforms may well be lagging behind in their content moderation responsibilities, and still need to do better in this regard. But this kind of knee-jerk policy response won't stop the spread of problematic content on social media.

Addressing hate online requires coordinated efforts. Platforms must improve the enforcement of their rules (not just announce tougher measures) to guarantee users' safety. They might also consider a serious redesign, because the way they currently organise, select, and recommend information often amplifies systemic problems in society, such as racism.

Discrimination is entrenched

Of course, biased beliefs and content don’t just live online.

In Australia, racial discrimination  has been perpetuated  in public policy, and the country has an  unreconciled history  of Indigenous dispossession and oppression.

Today, Australia’s political mainstream  is still lenient  with bigots, and the media  often contributes  to fearmongering about immigration.

However, we can all play a part in reducing harm online.

There are three things we might reconsider when interacting online so as to deny oxygen to racist ideologies:

  • better understanding how platforms work
  • developing empathy to identify differences in interpretation when engaging with media (rather than focusing on intent)
  • working towards a more productive anti-racism online.

Online lurkers and the amplification of harm

White supremacists and other reactionary pundits seek attention on mainstream and social media. New Zealand Prime Minister Jacinda Ardern refused to name the Christchurch gunman to prevent fuelling his desired notoriety, and so did some media outlets.

The rest of us might draw comfort from not having contributed to amplifying the Christchurch attacker’s desired fame. It’s likely we didn’t watch his video or read his manifesto, let alone upload or share this content on social media.

But what about apparently less harmful practices, such as searching on Google and social media sites for keywords related to the gunman’s manifesto or his live video?

It’s not the intent behind these practices that should be the focus of this debate, but their consequences. Our everyday interactions on platforms influence search autocomplete algorithms and the hierarchical organisation and recommendation of information.

In the Christchurch tragedy, even if we didn’t share or upload the manifesto or the video, the zeal to access this information drove traffic to problematic content and amplified harm for the Muslim community.

Normalisation of hate through seemingly lighthearted humour

Reactionary groups know how to capitalise on memes and other jokey content that degrades and dehumanises.

By using irony to deny the racism in these jokes, these far-right groups connect and immerse new members in an online culture that deliberately uses memetic media to have fun at the expense of others.

The Christchurch terrorist attack showed this connection between online irony and the radicalisation of white men.

However, humour, irony and play – which are protected under platform policies – serve to cloak racism in more mundane and everyday contexts.

Just as everyday racism shares discourses and vocabularies with white supremacy, lighthearted racist and sexist jokes are as harmful as online fascist irony.

Humour and satire should not be hiding places for ignorance and bigotry. As digital citizens we should be more careful about what kind of jokes we engage with and laugh at on social media.

What’s harmful and what’s a joke might not be apparent when interpreting content from a limited worldview. The development of empathy to others’ interpretations of the same content is a useful skill to minimise the amplification of racist ideologies online.

As scholar danah boyd argues:

The goal is to understand the multiple ways of making sense of the world and use that to interpret media.

Effective anti-racism on social media

A common practice in challenging racism on social media is to publicly call it out, and to show support for its victims. But critics of social media’s callout culture and performances of solidarity maintain that these tactics often fail as effective anti-racism tools, because they are performative rather than genuinely advancing advocacy.

An alternative is to channel outrage into more productive forms of anti-racism. For example, you can report hateful online content either individually or through organisations that are already working on these issues, such as The Online Hate Prevention Institute and the Islamophobia Register Australia.

Most major social media platforms struggle to understand how hate is articulated in non-US contexts. Reporting content can help platforms recognise culturally specific coded words, expressions, and jokes (most of which are mediated through visual media) that moderators might not understand and algorithms can’t identify.

As digital citizens we can work together to deny attention to those who seek to discriminate and inflict harm online.

We can also learn how our everyday interactions might have unintended consequences and actually amplify hate.

However, these ideas do not diminish the responsibility of platforms to protect users, nor do they negate the role of governments to find effective ways to regulate platforms in collaboration and consultation with civil society and industry.

Written by Ariadna Matamoros-Fernández. Republished with permission of The Conversation.