When AI Blurs Reality: The Rise of Hyperreal Digital Culture
“Individuals experiencing stress or social isolation may be more prone to believe deepfakes,” De Choudhury explained. “Such content often reinforces existing beliefs or fills gaps in social connection.”
Such AI-generated content challenges our understanding of authenticity, trust, and digital identity. It also raises questions about consent, misinformation, and the psychological effects of interacting with synthetic personas. Gen Z users, she notes, often judge AI content by emotional resonance rather than factual accuracy, while older users may struggle to detect synthetic cues at all.
Platforms, Persuasion, and Misinformation
Riedl emphasizes that AI storytelling tools can be used to sway public opinion through “narrative transportation,” a psychological phenomenon in which audiences become immersed in a story and are less likely to question its truth.
“Storytelling is a means of persuasive communication,” he said. “Our brains are attuned to stories in a way that can bypass critical thinking.”
Recent incidents highlight the changing landscape. Deepfakes of public figures such as Taylor Swift and Tom Hanks have surged in 2025, with more than 179 incidents recorded in the first four months of the year alone, surpassing the total for all of 2024. These deepfakes range from humorous impersonations to fraudulent and explicit content, raising ethical and legal concerns about identity misuse and misinformation. Riedl notes that video misinformation has historically been hard to produce, but generative tools now make it both easier to create and easier to tailor to niche audiences.
Social media companies face mounting pressure to take action. De Choudhury argues that labeling AI-generated content is necessary but insufficient. “Platforms must invest in user-centered design, digital literacy interventions, and transparency about how algorithms surface such content,” she said.
The stakes are especially high in mental health communities, where authenticity and lived experience are critical. “Users often feel overwhelmed or deceived when they encounter synthetic content without clear cues of its artificial origin,” she added.
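De Choudhury’s “necessary but insufficient” framing has a concrete technical basis. The sketch below is a minimal illustration, assuming a platform has already parsed a C2PA-style provenance manifest into a dictionary (the field name is hypothetical; the trainedAlgorithmicMedia value comes from the IPTC digital source type vocabulary that C2PA manifests can carry). The asymmetry is the point: surviving metadata can establish synthetic origin, but screenshots and re-encoding routinely strip it, so a missing manifest says nothing about authenticity.

```python
def ai_label(manifest: dict | None) -> str | None:
    """Decide whether to surface an 'AI-generated' label on an upload.

    Assumes a C2PA-style provenance manifest parsed into a dict; the key
    name here is illustrative. Labels are one-directional evidence:
    present provenance can prove synthetic origin, but absence proves
    nothing, since re-encoding or screenshotting strips the metadata.
    """
    if not manifest:
        return None  # no provenance survived: cannot conclude anything
    if manifest.get("digital_source_type") == "trainedAlgorithmicMedia":
        return "AI-generated"
    return None
```

Everything beyond this check, such as how prominently a label is shown and how recommendation algorithms treat labeled content, is the user-centered design work De Choudhury describes.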
Governance in a Globalized AI Era
Milton Mueller, professor in the Jimmy and Rosalynn Carter School of Public Policy, argues that regulation may be ineffective or even counterproductive in a decentralized digital ecosystem.
“Generative AI is part of a globalized and distributed digital ecosystem,” Mueller said. “So, which regulatory authority are you talking about, and how does it gain the leverage needed to control the outputs?”
While the EU’s AI Act mandates labeling and imposes steep fines, U.S. efforts remain fragmented. The Federal Communications Commission has ruled that robocalls using AI-generated voices are illegal, exposing violators to fines, and several states are pushing for watermarking requirements and criminal penalties for political deepfakes. But experts warn that First Amendment protections complicate enforcement.
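Watermarking is not a single technique; it differs by medium. As one concrete illustration for AI-generated text, a published family of approaches (statistical “green list” watermarking in the style of Kirchenbauer et al., 2023) biases a model toward a keyed pseudorandom subset of the vocabulary, and a detector checks whether that subset is overrepresented. The toy detector below is a minimal sketch under simplifying assumptions: whitespace tokenization, an illustrative shared key, and a hash standing in for the generator-side sampling. Production schemes operate on model vocabularies at generation time, and image and video watermarks work on pixels and frames instead.

```python
import hashlib
import math

def green_fraction(tokens: list[str], key: str = "demo-shared-key",
                   green_ratio: float = 0.5) -> float:
    """Fraction of tokens landing in the keyed 'green list'.

    The previous token plus a shared key seed a pseudorandom partition of
    the vocabulary; a watermarking generator would bias sampling toward
    the green half, so a detector only has to recompute membership.
    """
    if len(tokens) < 2:
        return 0.0
    hits = 0
    for prev, tok in zip(tokens, tokens[1:]):
        digest = hashlib.sha256(f"{key}|{prev}|{tok}".encode()).digest()
        if digest[0] < green_ratio * 256:  # tok falls in this context's green set
            hits += 1
    return hits / (len(tokens) - 1)

def z_score(frac: float, n: int, green_ratio: float = 0.5) -> float:
    """One-proportion z-test against the chance rate.

    Unwatermarked text hits the green list about green_ratio of the time;
    a large positive z suggests the bias a watermarking sampler leaves behind.
    """
    return (frac - green_ratio) * math.sqrt(n / (green_ratio * (1 - green_ratio)))
```

The fragility that worries experts is visible even in this toy: paraphrasing or translating the text re-rolls the token sequence and erases the statistical signal, which is part of why watermark mandates are easier to legislate than to enforce.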
Mueller cautions that governments are already using AI as a geopolitical tool, which could undermine global cooperation and lead to strategic escalation. “Instead of freely trading data and establishing common rules, governments are asserting digital sovereignty,” he said.
He advocates addressing AI-generated misinformation through decentralized governance, public debate, and media literacy rather than centralized regulation or automated controls. Content moderation, he argues, should be guided by open processes, with existing legal remedies applied after the fact.
As AI-generated content becomes more sophisticated and widespread, researchers say the challenge lies not only in technological safeguards but in how society adapts. Experts at Georgia Tech emphasize the need for transparency, interdisciplinary collaboration, and public engagement. The future of hyperreal media, they say, will depend on how well platforms, policymakers, and users navigate its risks and possibilities.