US Cyber Command and personnel. Photo: Josef Cole/DVIDS

Cybersecurity upskilling platform Hack The Box (HTB) has launched HTB AI Range, a “first-of-its-kind” cyber range built to test and benchmark autonomous AI security agents.

The platform drops AI models and human operators into the same high-pressure, realistic attack scenarios, making it possible to compare how each performs under identical conditions.

“AI is now part of the cyber battle, and we’re building the arena where it can be safely tested and used for responsible defense,” said Haris Pylarinos, CEO and founder of HTB.

The company has spent over two years developing AI-driven labs where humans and machines train and compete together.

A conceptual illustration representing Hack The Box’s AI Range, where autonomous agents and human operators train in realistic cyber scenarios. Image: Hack The Box

In April, an event organized by HTB tested autonomous AI teams in a Capture the Flag competition, where participants solved digital challenges to complete specific objectives.

The AI teams cleared 19 of the 20 easier challenges, keeping pace with 403 human red teams, but stumbled on the final multi-step task, which demanded more complex problem-solving.

HTB noted that attackers are already using AI to automate their operations at scale, arguing that defenders will need similar AI-driven capabilities to keep up.

“AI is fundamentally reshaping the threat landscape,” said Dawn-Marie Vaughan, Global Offering Lead – Cybersecurity at DXC. “Early research is already showing how AI can automate reconnaissance and link potential exploit paths in ways that were extremely difficult just a year ago.”

Where AI Meets Chaos

Rather than testing AI in clean, artificial environments, HTB AI Range throws models into scenarios that mirror real-world attacks: confusing signals, chained exploits, unexpected behavior, and time pressure.

Teams simulate the kinds of attacks adversaries might attempt, from simple prompt injections to complex, multi-step intrusions requiring calculated strategy.

The AI Range creates a continuous evaluation loop.

Each AI agent is tested on security-critical tasks and measured on success rates, adaptability, and how it fails. This gives teams a clear picture of AI performance and improvement over time.

The platform also includes an Assistants & Agents feature, designed to train AI to work alongside humans.

These agents take on realistic roles such as detecting threats, triaging incidents, or defending systems, learning through human oversight and reinforcement.
