Feature  

In Appearance Before Congress, Bruce Schneier Raises Concerns about DOGE Data Handling Practices

Cybersecurity expert Bruce Schneier testified before the House Committee on Oversight and Government Reform, sharply criticizing the Department of Government Efficiency’s (DOGE) handling of federal data. Describing DOGE’s security protocols as dangerously inadequate, he warned that the agency’s practices have put sensitive government and citizen information at risk of exploitation by foreign adversaries and criminal networks.


Testifying before the committee, Schneier, of Harvard Kennedy School, sounded the alarm about DOGE’s data security protocols, warning that highly sensitive federal data could fall into the hands of hostile nations or criminal groups.

“Data security breaches present significant dangers to everyone in the United States, from private citizens to corporations to government agencies to elected officials,” said Schneier, an internationally recognized security technologist who teaches cybersecurity policy at the Kennedy School. He described DOGE’s approach toward data security as “reckless” and urged Congress to rein in the agency’s attempts to consolidate federal data and remove key privacy and security controls.


“Their actions have weakened security within the federal government by bypassing and disabling critical security measures, exporting sensitive data to environments with less security, and consolidating disparate data streams to create a massively attractive target for any adversary,” Schneier told the committee.

In his testimony, Schneier outlined what he called a “DOGE approach” to data handling, with four distinct features:

  • Data consolidation: Exfiltrating and connecting massive U.S. databases to create a single pool of data covering all citizens.
  • Reduced security protocols: Removing access controls and audit logs, creating unmonitored copies of data, exposing highly sensitive data to cloud-based tools, seeking maximally permissive data access waivers, and eliminating previously required security protocols for vetting staff.
  • AI training and processing: Using AI tools to process data outside of carefully monitored environments.
  • Outsourcing: Transferring control over data access to private companies.

Taken together, Schneier argued, these steps have already caused significant damage to the data security of the federal government. “By following the DOGE approach, the current administration has increased both the likelihood and the potential scale of attacks against us and endangered our safety, both individually and collectively. A decisive shift in the administration’s approach to data security can begin to right the ship.”


The views expressed in this article are those of the author(s) alone and do not necessarily represent the positions of the Ash Center or its affiliates.
