From Help to Harm: How the Government Is Quietly Repurposing Everyone’s Data for Surveillance
Palantir, a private data firm and prominent federal contractor, supplies investigative platforms to agencies such as Immigration and Customs Enforcement, the Department of Defense, the Centers for Disease Control and Prevention and the Internal Revenue Service. These platforms aggregate data from many sources – driver’s license photos, social service records, financial information, educational data – and present it in centralized dashboards designed for predictive policing and algorithmic profiling. These tools extend government reach in ways that challenge existing norms of privacy and consent.
The Role of AI
Artificial intelligence has further accelerated this shift.
Predictive algorithms now scan vast amounts of data to generate risk scores, detect anomalies and flag potential threats.
These systems ingest data from school enrollment records, housing applications, utility usage and even social media, all made available through contracts with data brokers and tech companies. Because these systems rely on machine learning, their inner workings are often proprietary, opaque even to their operators and beyond meaningful public accountability.
Sometimes the results are inaccurate, generated by AI hallucinations – responses AI systems produce that sound convincing but are incorrect, made up or irrelevant. Minor data discrepancies can lead to major consequences: job loss, denial of benefits and wrongful targeting in law enforcement operations. Once flagged, individuals rarely have a clear pathway to contest the system’s conclusions.
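To make that fragility concrete, consider a minimal sketch of the kind of anomaly-based risk scoring such systems use. This is not Palantir’s or any agency’s actual code: the record fields, the model choice (scikit-learn’s IsolationForest, a standard anomaly detector) and the thresholds are all illustrative assumptions. The point it demonstrates is structural – a single data-entry error in one field can flip a person from “normal” to “flagged.”

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical aggregated records, one row per person:
# [monthly_income, address_changes_per_year,
#  benefits_applications, missed_utility_payments]
# The fields and distributions are purely illustrative.
rng = np.random.default_rng(0)
population = np.column_stack([
    rng.normal(3000, 400, 500),  # monthly income
    rng.poisson(1, 500),         # address changes per year
    rng.poisson(2, 500),         # benefits applications
    rng.poisson(1, 500),         # missed utility payments
])

# Train an off-the-shelf anomaly detector on the population.
model = IsolationForest(random_state=0).fit(population)

# The same person twice: a clean record, and the record after a
# clerical typo inflates income by a factor of ten (30000 vs 3000).
record = np.array([[3000, 1, 2, 1]])
record_with_typo = np.array([[30000, 1, 2, 1]])

for label, x in [("clean record", record),
                 ("record with typo", record_with_typo)]:
    flag = model.predict(x)[0]             # 1 = normal, -1 = flagged
    score = model.decision_function(x)[0]  # lower = more anomalous
    print(f"{label}: score={score:.3f}, "
          f"flagged={'yes' if flag == -1 else 'no'}")
```

In this toy setting, the clean record scores as normal while the typo-laden one is flagged as anomalous. Real deployments use far richer data and more complex models, but the structural point stands: the flag is a statistical inference, not a verified fact, and the person being scored typically sees neither the inputs nor the threshold that produced it.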
Digital Profiling
Participating in civic life, applying for a loan, seeking disaster relief and requesting student aid all now contribute to a person’s digital footprint. Government entities could later interpret that data in ways that allow them to deny access to assistance. Data collected under the banner of care could be mined for evidence to justify placing someone under surveillance. And with growing dependence on private contractors, the boundaries between public governance and corporate surveillance continue to erode.
Artificial intelligence, facial recognition and predictive profiling systems operate with little independent oversight. They also disproportionately affect low-income individuals, immigrants and people of color, who are more frequently flagged as risks.
Initially built for benefits verification or crisis response, these data systems now feed into broader surveillance networks. The implications are profound. What began as a system targeting noncitizens and fraud suspects could easily be generalized to everyone in the country.
Eyes on Everyone
This is not merely a question of data privacy. It is a broader transformation in the logic of governance. Systems once designed for administration have become tools for tracking and predicting people’s behavior. In this new paradigm, oversight is sparse and accountability is minimal.
AI allows for the interpretation of behavioral patterns at scale without direct interrogation or verification. Inferences replace facts. Correlations replace testimony.
The risk extends to everyone. While these technologies are often first deployed at the margins of society – against migrants, welfare recipients or those deemed “high risk” – there’s little to limit their scope. As the infrastructure expands, so does its reach into the lives of all citizens.
With every form submitted, interaction logged and device used, a digital profile deepens, often out of sight. The infrastructure for pervasive surveillance is in place. What remains uncertain is how far it will be allowed to go.
Nicole M. Bennett is a Ph.D. candidate in Geography and Assistant Director at the Center for Refugee Studies, Indiana University. This article is published courtesy of The Conversation.