Feature  

In Appearance Before Congress, Bruce Schneier Raises Concerns about DOGE Data Handling Practices

In a warning to lawmakers, cybersecurity expert Bruce Schneier testified before the House Committee on Oversight and Government Reform, sharply criticizing the Department of Government Efficiency’s (DOGE) handling of federal data. Describing DOGE’s security protocols as dangerously inadequate, Schneier warned that the agency’s practices have put sensitive government and citizen information at risk of exploitation by foreign adversaries and criminal networks.



“Data security breaches present significant dangers to everyone in the United States, from private citizens to corporations to government agencies to elected officials,” said Schneier, an internationally recognized security technologist who teaches cybersecurity policy at the Kennedy School. He described DOGE’s approach to data security as “reckless” and urged Congress to rein in the agency’s attempts to consolidate federal data and remove key privacy and security controls.


“Their actions have weakened security within the federal government by bypassing and disabling critical security measures, exporting sensitive data to environments with less security, and consolidating disparate data streams to create a massively attractive target for any adversary,” Schneier told the committee.

In his testimony, Schneier outlined what he called a “DOGE approach” to data handling, with four distinct features:

  • Data consolidation: Exfiltrating and connecting massive U.S. databases to create a single pool of data covering all citizens.
  • Reduced security protocols: Removing access controls and audit logs, creating unmonitored copies of data, exposing highly sensitive data to cloud-based tools, seeking maximally permissive data access waivers, and eliminating previously required security protocols for vetting staff.
  • AI training and processing: Using AI tools to process data outside of carefully monitored environments.
  • Outsourcing: Transferring control over data access to private companies.

Taken together, Schneier argued, these steps have already caused significant damage to the data security of the federal government. “By following the DOGE approach, the current administration has increased both the likelihood and the potential scale of attacks against us and endangered our safety, both individually and collectively. A decisive shift in the administration’s approach to data security can begin to right the ship.”


The views expressed in this article are those of the author(s) alone and do not necessarily represent the positions of the Ash Center or its affiliates.

More from this Program

Podcast

Inside Trump’s White House

White House reporter Annie Linskey offers a closer look at how the Trump White House makes decisions and what recent actions reveal about its strategy.

Podcast

So, Is It Fascism?

Jonathan Rauch joins the podcast to discuss why he now believes “fascism” accurately describes Trump’s governing style.

Podcast

Beyond MAGA: What Trump’s Coalition Really Looks Like

Drawing on new data from more than 10,000 Trump voters, this episode of Terms of Engagement unpacks the diverse constituencies behind the MAGA label.

More on this Issue

Additional Resource

The Ecosystem of Deliberative Technologies for Public Input

Ensuring public opinion and policy preferences are reflected in policy outcomes is essential to a functional democracy. A growing ecosystem of deliberative technologies aims to improve the input-to-action loop between people and their governments.

Occasional Paper

Ethical-Moral Intelligence of AI

In a new working paper, “Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?,” Allen Lab authors Sarah Hubbard, David Kidd, and Andrei Stupu introduce an ethical-moral intelligence framework for evaluating AI models across dimensions of moral expertise, sensitivity, coherence, and transparency.

Open Access Resource

Sunset Section 230 and Unleash the First Amendment

Allen Lab for Democracy Renovation Senior Fellow Allison Stanger, in collaboration with Jaron Lanier and Audrey Tang, envisions a post-Section 230 landscape that fosters innovation in digital public spaces using models optimized for public interest rather than attention metrics.