Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack
A few Twitter users discovered how to hijack an automated tweet bot, dedicated to remote jobs, running on the GPT-3 language model by OpenAI, using a newly discovered technique called a “prompt injection attack.”