‘Anti-Social Media’: The Changing Tech of Terror
Amid the white noise generated by mainstream social media channels and apps, a trend of ‘anti-social media’ has emerged in recent years: users abandon mainstream platforms, reduce screen time, and seek private, intimate, or even ‘analogue’ communication to avoid algorithm-driven polarization, surveillance and loneliness. But some of these so-called anti-social media platforms have also become unconventional channels for disseminating extremist propaganda.
Content tailored to the limited attention spans of today’s dopamine-fed, adrenaline-rushed social media users, whether on TikTok, X, YouTube Shorts or Instagram, often bypasses conscious, cerebral processing and directly targets the pre-cognitive, instinctual and visceral seat of the collective unconscious.
Thus, a serious debate is currently raging over whether anti-social elements or violent extremists exploit social media platforms for their insidious purposes, or whether most social media outlets and their apps intentionally design provocative hashtags to spur prolonged, polarizing debates and profit from them.
Because many platforms grant content creators anonymity and operate with minimal censorship regulations, the steady livestream of visceral online responses has become difficult to regulate, given the speed at which messages are communicated and exchanged.
The Medium Is the Message
In fact, the business models of many social media platforms are based on engagement algorithms, hashtags and rabbit holes that spur further online debate and thereby increase advertising revenue. In the words of Carlos Diaz Ruiz, “Incendiary, shocking content – whether it is true or not – is an easy way to get our attention, which means advertisers can end up funding fake news and hate speech.” [i]
Thus, Marshall McLuhan’s famous phrase “The Medium is the Message”[ii] renders highly interactive social media platforms a real-time hazard to public safety and security, particularly in relation to sensitive societal issues.
In an August 2019 internal memo (leaked in 2021), a Facebook staffer admitted that “the mechanics of our platforms are not neutral”[iii] and concluded that maximizing profit requires optimizing for engagement; and because hate and misinformation drive engagement, they become profitable. Thus, the memo states: “The more incendiary the material, the more it keeps users engaged (and) the more it is boosted by the algorithm.”[iv] Although Facebook has taken commendable steps to prevent incendiary material from appearing on its platforms, the complexities of regulating problematic content appear to be increasing.
According to a 2018 MIT Sloan study, false rumors spread faster and wider than true information, supporting the adage that ‘a lie can travel halfway around the world while the truth is still putting on its shoes’. The study found that falsehoods are 70 per cent more likely to be retweeted than the truth and reach their first 1,500 people six times faster. The effect is more pronounced for political news and for content targeting particular races, nationalities, or religions.
