Microsoft knows you love tricking its AI chatbots into doing weird stuff, and it's designing 'prompt shields' to stop you — Fortune
Microsoft's new safety system can catch hallucinations in its customers' AI apps — The Verge
Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications — Microsoft
Microsoft Launches Measures to Keep Users From Tricking AI Chatbots — PYMNTS.com
Microsoft wants to stop you from using AI chatbots for evil — ZDNet