In Appearance Before Congress, Bruce Schneier Raises Concerns about DOGE Data Handling Practices
In a warning to lawmakers, cybersecurity expert Bruce Schneier testified before the House Committee on Oversight and Government Reform, sharply criticizing the Department of Government Efficiency’s (DOGE) handling of federal data. Describing DOGE’s security protocols as dangerously inadequate, Schneier warned that the agency’s practices have put sensitive government and citizen information at risk of exploitation by foreign adversaries and criminal networks.
In testimony before the House Committee on Oversight and Government Reform, Harvard Kennedy School’s Bruce Schneier sounded the alarm about the Department of Government Efficiency’s (DOGE) data security protocols. He warned that highly sensitive federal data could fall into the hands of hostile nations or criminal groups.
“Data security breaches present significant dangers to everyone in the United States, from private citizens to corporations to government agencies to elected officials,” said Schneier, an internationally recognized security technologist who teaches cybersecurity policy at the Kennedy School. He described DOGE’s approach toward data security as “reckless” and urged Congress to rein in the agency’s attempts to consolidate federal data and remove key privacy and security controls.
“Their actions have weakened security within the federal government by bypassing and disabling critical security measures, exporting sensitive data to environments with less security, and consolidating disparate data streams to create a massively attractive target for any adversary,” Schneier told the committee.
In his testimony, Schneier outlined what he called a “DOGE approach” to data handling, with four distinct features:
Data consolidation: Exfiltrating and connecting massive U.S. databases to create a single pool of data covering all citizens.
Reduced security protocols: Removing access controls and audit logs, creating unmonitored copies of data, exposing highly sensitive data to cloud-based tools, seeking maximally permissive data access waivers, and eliminating previously required security protocols for vetting staff.
AI training and processing: Using AI tools to process data outside of carefully monitored environments.
Outsourcing: Transferring control over data access to private companies.
Taken together, Schneier argued, these steps have already caused significant damage to the data security of the federal government. “By following the DOGE approach, the current administration has increased both the likelihood and the potential scale of attacks against us and endangered our safety, both individually and collectively. A decisive shift in the administration’s approach to data security can begin to right the ship.”
The views expressed in this article are those of the author(s) alone and do not necessarily represent the positions of the Ash Center or its affiliates.