NCITE Selects Two New Research Teams to Study Cyber Threats
Researchers from the Georgia Institute of Technology and the University of Oklahoma will spearhead NCITE's push to probe cyber threats to the nation's critical infrastructure.
- published: 2024/12/19
- contact: NCITE Communications
- phone: 402.554.6423
- email: ncite@unomaha.edu
- search keywords:
  - cybersecurity
  - deepfake
  - terrorist
  - artificial intelligence
Our nation’s many interconnected cyber networks power businesses, schools, government, hospitals, and more. Malign profiteers have already found the cyber domain to be an attractive target, hacking into systems and holding them for ransom. How might terrorist or extremist actors attack critical infrastructure’s cyber networks?
This question arose during ENVISION24, NCITE’s conference that brought government stakeholders and academics together in Omaha in June 2024. Because the cyber domain is a highly salient but understudied area, NCITE launched its first targeted request for proposals, seeking research projects examining cyber threats to critical infrastructure.
Leading academics from across the country applied. NCITE chose two projects. One will create a cyber threat risk modeling tool; the other will examine the threat deepfakes and synthetic media pose.
“We live in an ultra-connected world,” said Bret Blackman, vice president for information technology and chief information officer for the University of Nebraska system. “These projects will help defend against a growing number of cyber threats to critical infrastructure, including our universities.”
A Risk Model for Identifying Cyber Threats to Critical Infrastructure
A team led by researchers at the Georgia Institute of Technology (Georgia Tech) will develop a model to measure the relative terrorist and extremist threat against the nation’s 16 critical infrastructure sectors and offer risk scores for each. The goal is to create a tool that not only maps threats to one sector, like energy, but also shows how disruptions could impact interdependent sectors, like health care. By doing so, the model can help identify which of the critical infrastructure sectors are most attractive to terrorist actors, in an effort to better address vulnerabilities and increase protections.
This interdisciplinary team is led by Ryan Shandler, Ph.D., a political scientist at Georgia Tech. Other key personnel include Saman Zonouz, Ph.D., who leads Georgia Tech’s Cyber-Physical Systems Security Lab, and Jon Lindsay, Ph.D., a political scientist with expertise in computer science. They will work with the federal Cybersecurity and Infrastructure Security Agency (CISA), a component of the Department of Homeland Security (DHS).
Together, these researchers will use controlled experimental conditions to identify the social and economic impact of cyberattacks and to model vulnerabilities that may facilitate current and future threats. The team will also identify adversaries’ motivations and cyber capabilities and measure the societal upheaval such attacks could wreak.
This three-year project kicks off in January 2025. Findings will be shared with government and industry stakeholders, along with the public, through briefings, events, and publications.
How Deepfakes and Other Synthetic Media Threaten Critical Infrastructure
A team led by the University of Oklahoma will examine how emerging technologies, namely artificial intelligence (AI) and deepfakes, threaten our national security and critical infrastructure. This project will investigate how terrorist and extremist actors could use deepfakes to harm a critical infrastructure sector’s organizational reputation, financial health, and data security.
This research team is led by a veteran NCITE principal investigator, Matthew Jensen, Ph.D., co-director of the Center for Applied Social Research at the University of Oklahoma. Other experts on the team include Allen Johnston, Ph.D., a cybersecurity and management information systems professor at the University of Alabama, and Deanna House, Ph.D., a cybersecurity professor at the University of Nebraska at Omaha and head of NCITE’s Cyber Threat Analysis Lab.
Their joint effort will respond to an alarm raised in 2023 by the National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and CISA. These agencies warned that synthetic media – created using AI and increasingly available and efficient – will increasingly be used by malicious actors intent on causing widespread panic and violence. These synthetic media tools, the researchers note, are quick, effective, and difficult to counter. Such misuse has already occurred abroad, and the researchers cite a recent Microsoft threat intelligence report attributing malicious use of AI in U.S. government and election contexts to actors in Iran, Russia, and China.
This 2½-year project kicks off in January 2025. The research team aims to provide DHS stakeholders with analysis and insight regarding the use of deepfakes in cyber terror attacks against organizations critical to the nation’s functioning.