Microsoft’s New Azure AI Tools for Building Secure and Trustworthy Gen AI Applications
Microsoft has announced the addition of new safety tools for Azure AI Studio. These tools will provide a defense against prompt injection and hallucination. The new tools will also help the gen AI app developers in monitoring and evaluating risks to their LLM models.
Microsoft New Azure AI Tools
Microsoft has announced new tools to be added to Azure AI Studio for generative AI app developers to meet new AI quality and safety challenges that customers face. In the growing world of artificial intelligence, it has become essential to maintain a balance between innovation and risk management.
The biggest challenge faced by Gen AI app developers is Prompt Injection attacks, where malicious actors try to manipulate an AI system into doing something outside its intended purpose, such as producing harmful content or exfiltrating confidential data.
Business leaders and organizations are also worried about quality and reliability,: in addition to the security risks to their AI systems. They want their AI system to be free from any kind of error generation to build user trust.
To help customers overcome these AI quality and safety challenges, Microsoft is announcing new tools to be added to its Azure AI Studio for generative AI App developers.
What are the Important Features of Microsoft Azure AI safety tools:
Prompt Shields: It will detect and block direct prompt injection attacks known as jailbreaks to safeguard your LLMs. It will also include a new model for identifying indirect prompt attacks before they impact your model. These are now available in preview in Azure AI Content Safety.
Groundedness detection: It will detect “hallucinations” in model outputs, planned to be coming soon. ‘Hallucinations’ in generative AI refer to instances when a model confidently generates outputs that misalign with common sense or lack grounding data. Groundedness detection is a new feature designed to identify text-based hallucinations. This feature detects ‘ungrounded material’ in text to support the quality of LLM outputs
Safety system messages: These will steer the model’s behavior toward safe, responsible outputs, planned to be coming soon.
Risk and Safety Evaluations: To assess an application’s vulnerability to jailbreak attacks and to generate content risks. These are now available in preview.
Risk and safety monitoring: To understand what model inputs, outputs, and end users are triggering content filters to inform mitigations. These are now available in preview in Azure OpenAI Service.
Safeguard your LLMs against prompt injection attacks with Prompt Shields: Prompt injection attacks, both direct attacks, known as jailbreaks, and indirect attacks, are emerging as significant threats to foundation model safety and security. Successful attacks that bypass an AI system’s safety mitigations can have severe consequences, such as personally identifiable information (PII) and intellectual property (IP) leakage.
Identify LLM Hallucinations with Groundedness Detection: ‘Hallucinations’ in generative AI refer to instances when a model confidently generates outputs that misalign with common sense or lack grounding data. This issue can manifest in different ways, ranging from minor inaccuracies to starkly false outputs.
Steer your application with an effective safety system message: Today, Azure AI enables users to ground foundation models on trusted data sources and build system messages that guide the optimal use of that grounding data and overall behavior (do this, not that). At Microsoft, we have found that even small changes to a system message can have a significant impact on an application’s quality and safety.
Evaluate your LLM application for risks and safety: How do you know if your application and mitigations are working as intended? Today, many organizations lack the resources to stress test their generative AI applications so they can confidently progress from prototype to production.
Monitor your Azure OpenAI Service deployments for risks and safety in production: Monitoring generative AI models in production is an essential part of the AI lifecycle. Today we are pleased to announce risk and safety monitoring in Azure OpenAI Service.
Confidently scale the next generation of safe, responsible AI applications: Generative AI can be a force multiplier for every department, company, and industry. Azure AI customers are using this technology to operate more efficiently, improve customer experience, and build new pathways for innovation and growth.
A postgraduate in Science with an inclination towards education and technology. She always looks for ways to help people improve their lives by putting complex things into simple words through her writing.