TRUTH DECAY
Empowering Users to Discern Fact from Fiction in the Age of AI
A new project will investigate interventions that enable individuals to effectively harness AI while building the literacy needed to avoid scams and other forms of abuse.
Summary
● Stanford’s Social Media Lab is developing interventions to improve digital and AI literacy among diverse communities.
● Specific educational tools, such as video tutorials on lateral reading, have proven effective in improving digital literacy and can be adapted for AI education.
● Building community trust is essential for the success of interventions, as it fosters resilience against misinformation and enhances user engagement.
Just a few years ago, artificial intelligence (AI) was akin to science fiction for many people. Today, hundreds of millions of people use it regularly. Many more interact with AI without even knowing it.
Separating fact from fiction has never been easy online. But the proliferation of AI makes it even more challenging. How can people build the skills to judge when AI-generated information is trustworthy, and avoid AI-powered methods that are designed to deceive them?
Empowering Diverse Digital Citizens, a research project led by Stanford Social Media Lab’s founding director, Jeffrey Hancock, will investigate what types of interventions best inform and encourage people to interact with AI with confidence. The team hopes to develop tools that allow diverse groups of users to reap the benefits of the evolving technology while avoiding its pitfalls.
The Social Media Lab has already designed, with funding from Stanford Impact Labs (SIL), a set of interventions to boost digital literacy: the skills that allow people to access, evaluate, and understand information (and misinformation) shared online. Now, Hancock and his collaborators are turning their focus to AI, with the help of a new SIL investment. Addressing the ways that AI complicates information-sharing online is a challenging – but essential – mandate as the technology increasingly infiltrates daily life.
“Digital literacy – which we define as the ability to find, evaluate, use, and create information with digital tools safely, ethically, and effectively – has a fair bit of background and research into what works and what doesn’t,” said Hancock. “AI literacy, in many ways, is in a very different stage … it’s an evolving tool that is changing, literally, as we speak.”
To conduct this research, Hancock, the Harry and Norman Chandler Professor of Communication in Stanford’s School of Humanities and Sciences, will partner with prior collaborators, including the American Library Association, Jigsaw, and the Poynter Institute’s MediaWise initiative, as well as new partners, including Common Sense Media and the News Literacy Project.
