AI-Powered Malware: State-Backed Hackers Use LLMs for Evasion
Cyber Security News by CyberSum.net
State-backed hackers are deploying malware that uses large language models (LLMs) to generate malicious scripts on the fly and evade detection. Researchers at Google observed malware invoking AI capabilities mid-execution, a significant step toward more autonomous, adaptive threats.

Experimental families such as PROMPTFLUX and PROMPTSTEAL show how attackers are integrating AI into their intrusion operations. Meanwhile, a growing marketplace of AI tools purpose-built for criminal use is putting sophisticated capabilities in the hands of low-level criminals.