The researchers are employing a way called adversarial education to prevent ChatGPT from permitting end users trick it into behaving terribly (often known as jailbreaking). This perform pits multiple chatbots versus one another: a person chatbot performs the adversary and attacks One more chatbot by generating textual content to drive it to buck it