An artist’s rendition of a broad, synchronized cyber and psychological operations attack. Image: AI by Gerardo Mena/Army University Press

Charles Chen knows what it takes to bring cutting-edge technology into the heart of US diplomacy and security.


As the former Director of the State Department’s AI and Emerging Technology Office and now Senior Advisor at Resecurity, he’s spent nearly 30 years navigating IT infrastructure, cybersecurity, and threat intelligence.

In this exclusive interview, he unpacks where AI meets national security — and what’s next for cyber defense.

How is the State Department using AI in cybersecurity and national security?

The State Department has recognized that AI is not just a technological innovation but a strategic asset for efficiency, automation, and national security. 

During my tenure, we focused on embedding AI capabilities into our core operations, with emphasis on aggregating quality data and improving the accuracy of data analytics, which fed into various AI platforms to generate deeper insights and higher confidence in predictive analysis.

This was particularly relevant in the Diplomatic Technology Bureau, the Diplomatic Security Service Cyber Threat and Investigations Office, the Office of Management Strategy and Solutions’ Center for Analytics, the Bureau of Intelligence and Research, as well as numerous missions and partners.

Although each organizational group has its specific responsibilities, functions, and authorities, mutual collaboration and intelligence sharing collectively strengthened US diplomacy in the digital age.

The goal was to counter cyber threats and malicious activities while supporting mission success and adapting quickly to an ever-changing threat landscape. 

Our approach leveraged AI for rapid threat detection, automation of repetitive tasks, and enhanced data analysis for actionable intelligence.

Importantly, this integration was always paired with strong governance and human oversight to ensure reliability and ethical use.

A soldier updates personnel records in a deployed battalion aid station during an exercise. Photo: Spc. Hunter Carpenter/US Army
What are the most significant recent AI advances in cybersecurity and threat intelligence?

AI and machine learning have revolutionized how we process and respond to cyber threats.

One of the most significant developments is the use of AI for real-time threat detection and response. AI models can sift through enormous datasets to identify anomalies and breaches far faster than traditional methods.
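The anomaly-spotting idea can be sketched very simply. The function and data below are illustrative assumptions, not State Department tooling: it flags points whose modified z-score, built on the median absolute deviation (robust to the very outliers being hunted), exceeds a cutoff.

```python
from statistics import median

def flag_anomalies(values, cutoff=3.5):
    """Return indices whose modified z-score exceeds `cutoff`.

    Uses the median absolute deviation (MAD) rather than the standard
    deviation, so a single large spike cannot mask itself by inflating
    the spread estimate."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread at all: nothing stands out
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > cutoff]

# Hypothetical hourly failed-login counts for one host; index 5 is the spike.
logins = [12, 9, 11, 10, 13, 250, 12, 8]
print(flag_anomalies(logins))  # → [5]
```

Real deployments use far richer models, but the principle is the same: establish a baseline from the bulk of the data, then surface what deviates from it.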

They are also being used to automate incident triage, generate defensive code, and even predict emerging threats based on historical patterns.

The Department of State and other agencies are actively deploying these models to enhance national resilience.

For example, CONTEXT AI by Resecurity accelerates decision-making, optimizes resource allocation, enables parallel cybersecurity operations across units, and supports scalability.

The challenge, however, lies in integrating these tools with legacy systems and ensuring that AI-driven decisions remain transparent and explainable.

Furthermore, accurate analysis and predictive capabilities always begin with quality data inputs. However rapidly AI in cybersecurity and threat intelligence advances, data cleaning will remain a foundational element of success.
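The data-cleaning point can be made concrete with a minimal sketch. The event schema and field names here are assumptions for illustration: deduplicate records, normalize timestamps, and drop malformed entries rather than guessing at them.

```python
from datetime import datetime

def clean_events(raw_events):
    """Deduplicate and normalize raw log events before analysis.

    Each event is assumed to be a dict with 'id', 'source', and an
    ISO 8601 'timestamp'. Records that cannot be parsed are dropped."""
    seen, cleaned = set(), []
    for ev in raw_events:
        try:
            ts = datetime.fromisoformat(ev["timestamp"])
        except (KeyError, ValueError, TypeError):
            continue  # malformed record: drop rather than guess
        key = (ev.get("id"), ev.get("source"))
        if key in seen:
            continue  # exact duplicate of an event already kept
        seen.add(key)
        cleaned.append({**ev, "timestamp": ts.isoformat()})
    return cleaned

events = [
    {"id": 1, "source": "fw-1", "timestamp": "2025-06-13T10:00:00"},
    {"id": 1, "source": "fw-1", "timestamp": "2025-06-13T10:00:00"},  # duplicate
    {"id": 2, "source": "fw-2", "timestamp": "not-a-date"},           # malformed
    {"id": 3, "source": "ids-1", "timestamp": "2025-06-13T10:05:00"},
]
print(len(clean_events(events)))  # → 2
```

Any model fed by this pipeline inherits its discipline: duplicates and garbage removed before analysis, not after.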

How does the State Department coordinate with other agencies and international partners on cyber and threat intelligence?

Coordination is critical. The State Department works closely with agencies such as the Cybersecurity and Infrastructure Security Agency, the Department of Homeland Security, and the intelligence community to share actionable threat intelligence. 

We also engage with international partners to establish norms and share best practices for cyber defense. 

For example, the Bureau of Cyberspace and Digital Policy has played a key role in counter-spyware agreements and in addressing state-sponsored cyber threats.

Our overarching goal is to build a unified front against cyber adversaries by integrating intelligence from multiple sources and ensuring that our diplomatic efforts support global cybersecurity resilience.

US Cyber Command members working in the Integrated Cyber Center. Photo: Josef Cole/DVIDS
What risks come with AI in national security?

The main challenges include ensuring the security and integrity of AI systems themselves, managing the risks of adversarial attacks on AI models, and safeguarding sensitive data.

Trust is also critical — AI systems must be transparent, explainable, and subject to human oversight. 

Another challenge is integrating AI with legacy systems that often lack the flexibility to support new technologies. 

Finally, there are ethical concerns, such as data privacy and the potential misuse of AI for offensive cyber operations.

Addressing these challenges requires robust governance, continuous monitoring, and international cooperation to establish shared standards and best practices.

How do you see the future of human-machine teaming in cybersecurity and threat intelligence?

Human-machine teaming is the future of cybersecurity. AI can augment human analysts by automating routine tasks, accelerating detection, and extracting insights from complex datasets. Yet humans remain essential for providing context, making strategic decisions, and ensuring responsible use. 

The most effective cybersecurity operations will combine AI’s speed and scale with human judgment and expertise. 

This approach not only enhances defensive capabilities but also ensures that ethical and legal considerations remain at the forefront.
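One way to picture this division of labor is as a routing policy. The thresholds and alert fields below are hypothetical, not a description of any real system: the machine closes or escalates only the clear-cut cases, and anything ambiguous or high-impact goes to a human analyst.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    score: float   # model confidence that the event is malicious, 0..1
    impact: str    # "low", "medium", or "high"

def route(alert, auto_close=0.05, auto_escalate=0.95):
    """Let automation handle clear-cut cases; keep humans in the loop
    for anything ambiguous or consequential."""
    if alert.impact == "high":
        return "human-review"      # never auto-decide high-impact events
    if alert.score <= auto_close:
        return "auto-close"        # confidently benign
    if alert.score >= auto_escalate:
        return "auto-escalate"     # confidently malicious
    return "human-review"          # ambiguous: human judgment required

print(route(Alert("port-scan", 0.02, "low")))     # → auto-close
print(route(Alert("beaconing", 0.97, "medium")))  # → auto-escalate
print(route(Alert("possible-exfil", 0.60, "high")))  # → human-review
```

The design choice worth noting is that impact overrides confidence: speed and scale come from the automated paths, while accountability stays with the analyst.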

What ethical guidelines should researchers and practitioners follow when working with AI in sensitive government contexts?

Ethical guidelines are critical, particularly when handling sensitive government data. 

Researchers should obtain informed consent, protect confidentiality, and ensure participants fully understand the scope and risks of any project. Activities should be reviewed by an ethics board, with clear boundaries between research and operational work. 

Cultural sensitivity, respect for autonomy, and creating a safe environment are equally important.

Ultimately, the priority is to balance innovation with responsibility, ensuring AI serves the public good without compromising individual rights or national security.
