A new uncensored AI on the darknet called DIG AI is rapidly becoming the go‑to tool for cybercriminals, extremists, and terrorists.
Detected in late 2025 and accessible via Tor with no registration required, DIG AI strips away safety guardrails, according to a Resecurity threat report. At the click of a button, it can churn out malware, backdoor scripts, explosive device instructions, fraud campaigns, and even highly realistic child sexual abuse material.
Unlike traditional AI platforms with content safeguards, DIG AI is designed for illegal use, letting bad actors scale operations and bypass protective filters that would stop harmful outputs on mainstream models.

Underground AI Economy
Researchers warn that such “dark LLMs” are fueling a booming underground AI economy just as major global events like the 2026 Winter Olympics and World Cup approach, creating fresh targets and new complications for military cyber defenders and law enforcement alike.
Resecurity threat teams found the tool capable of generating functional malicious code and detailed illicit guidance — capabilities that blur the line between cybercrime and asymmetric warfare in the digital domain.
With mentions of malicious AI tools up more than 200 percent this year, Resecurity forecasts that weaponized AI will reshape the threat landscape in 2026, demanding new strategies from the militaries and security agencies fighting in what some call the fifth domain of warfare: cyber.