Terrorists could learn to carry out a biological attack using a generative AI chatbot, warns a new report by the non-profit policy think tank RAND Corporation.
The report said that while the large language model used in the research did not give specific instructions for creating a biological weapon, its responses, elicited through jailbreaking prompts, could help plan such an attack.
“Generally, if a malicious actor is explicit [in their intent], you will get a response that’s of the flavor ‘I’m sorry, I…