Building a Safer Future for AI Research

Artificial intelligence research has grown rapidly over the past five years, drawing significant federal investment. In fiscal year 2025 alone, the National Science Foundation allocated $2 billion to AI-related research and development, underscoring the technology’s importance to U.S. innovation and global leadership.

As AI research becomes a strategic national asset tied to competitiveness and security, it also faces heightened risks, including espionage, misuse, and ethical misconduct. To address these challenges, researchers at Virginia Tech have received a $300,000 grant from the National Science Foundation to strengthen the security, resilience, and responsibility of AI research practices.

Expanding concerns over research security

Historically, research security efforts focused largely on military applications or commercially sensitive technologies. That focus has since broadened.

“There’s now a concern with the entire research life cycle, especially with emerging technologies like AI and biotechnologies, in a way that there just wasn’t before,” said Rockwell Clancy, a research scientist in Virginia Tech’s Department of Engineering Education. “The risks of stolen intellectual property can arise during data collection, while co-developing models with international collaborators, throughout evaluation and publication, or even in routine conversations about project progress.”

Several factors are driving the push for stronger safeguards. Other countries are often able to move quickly from research breakthroughs to applied technologies, raising concerns that the United States could lose its competitive edge. At the same time, cases involving intellectual property diversion and illicit technology transfer have increased. The sensitive data, models, and methods central to AI research can be exploited well before projects are completed, and federal agencies acknowledge that existing standards do not fully address these vulnerabilities.

Developing practical protection tools

While many universities are discussing research security, few are producing evidence-based tools that researchers can integrate into their daily work. National efforts have largely remained conceptual, relying on policy papers and high-level guidance rather than discipline-specific training.

Federal agencies are now seeking actionable resources tailored to individual fields that help researchers recognize concrete threats in their own work.

“Our team here at Virginia Tech is one of the few groups developing evidence-based scenario tools to help researchers understand and determine what threats across the AI research life cycle look like,” said Qin Zhu, associate professor of engineering education and the project’s principal investigator.

The research team plans to interview and survey stakeholders involved in research security, including AI researchers, to gather firsthand accounts of security risks. Using that information, the group will create fictional but realistic scenarios that reflect potential breaches or misconduct at different stages of the research process. These scenarios will be refined and packaged into a digital toolkit designed to help universities, funding agencies, and industry partners better identify and respond to security threats.

“Our ultimate goal is to show our industry partners and funding agencies that we are knowledgeable and care deeply about secure research,” said John Talerico, assistant vice president for research security. “We want to be able to say, ‘Come sponsor your research here at Virginia Tech. Your work is safe with us.’”

Research team members

The project is led by Qin Zhu, associate professor in the Department of Engineering Education. Other team members include Rockwell Clancy, research scientist in engineering education; Lisa M. Lee, senior associate vice president for the Office of Research and Innovation and director of the Division of Scholarly Integrity and Research Compliance; and John Talerico, assistant vice president for research security and chief research officer.