Hackers Developing Malicious LLMs After WormGPT Falls Flat
Cybercriminals are exploring ways to develop custom, malicious large language models after existing tools such as WormGPT failed to meet their demands for advanced intrusion capabilities, security researchers said.