News

Microsoft’s New Azure AI Tools for Building Secure and Trustworthy Gen AI Applications

Microsoft has announced the addition of new safety tools for Azure AI Studio. These tools will provide a defense against prompt injection and hallucination. The new tools will also help the gen AI app developers in monitoring and evaluating risks to their LLM models.

Microsoft has announced new tools to be added to Azure AI Studio for generative AI app developers to meet new AI quality and safety challenges that customers face. In the growing world of artificial intelligence, it has become essential to maintain a balance between innovation and risk management. 

The biggest challenge faced by Gen AI app developers is Prompt Injection attacks, where malicious actors try to manipulate an AI system into doing something outside its intended purpose, such as producing harmful content or exfiltrating confidential data. 

Also Read: What is a Microsoft Surface PC with a dedicated Copilot AI button?

Business leaders and organizations are also worried about quality and reliability,: in addition to the security risks to their AI systems. They want their AI system to be free from any kind of error generation to build user trust.

To help customers overcome these AI quality and safety challenges, Microsoft is announcing new tools to be added to its Azure AI Studio for generative AI App developers. 

What are the Important Features of Microsoft Azure AI safety tools:

  • Prompt Shields: It will detect and block direct prompt injection attacks known as jailbreaks to safeguard your LLMs. It will also include a new model for identifying indirect prompt attacks before they impact your model. These are now available in preview in Azure AI Content Safety.
  • Groundedness detection: It will detect “hallucinations” in model outputs, planned to be coming soon. ‘Hallucinations’ in generative AI refer to instances when a model confidently generates outputs that misalign with common sense or lack grounding data. Groundedness detection is a new feature designed to identify text-based hallucinations. This feature detects ‘ungrounded material’ in text to support the quality of LLM outputs
  • Safety system messages: These will steer the model’s behavior toward safe, responsible outputs, planned to be coming soon.
  • Risk and Safety Evaluations: To assess an application’s vulnerability to jailbreak attacks and to generate content risks. These are now available in preview.
  • Risk and safety monitoring: To understand what model inputs, outputs, and end users are triggering content filters to inform mitigations. These are now available in preview in Azure OpenAI Service.
  • Safeguard your LLMs against prompt injection attacks with Prompt Shields: Prompt injection attacks, both direct attacks, known as jailbreaks, and indirect attacks, are emerging as significant threats to foundation model safety and security. Successful attacks that bypass an AI system’s safety mitigations can have severe consequences, such as personally identifiable information (PII) and intellectual property (IP) leakage.
  • Identify LLM Hallucinations with Groundedness Detection: ‘Hallucinations’ in generative AI refer to instances when a model confidently generates outputs that misalign with common sense or lack grounding data. This issue can manifest in different ways, ranging from minor inaccuracies to starkly false outputs.
  • Steer your application with an effective safety system message: Today, Azure AI enables users to ground foundation models on trusted data sources and build system messages that guide the optimal use of that grounding data and overall behavior (do this, not that). At Microsoft, we have found that even small changes to a system message can have a significant impact on an application’s quality and safety.
  • Evaluate your LLM application for risks and safety: How do you know if your application and mitigations are working as intended? Today, many organizations lack the resources to stress test their generative AI applications so they can confidently progress from prototype to production.
  • Monitor your Azure OpenAI Service deployments for risks and safety in production: Monitoring generative AI models in production is an essential part of the AI lifecycle. Today we are pleased to announce risk and safety monitoring in Azure OpenAI Service.
  • Confidently scale the next generation of safe, responsible AI applications: Generative AI can be a force multiplier for every department, company, and industry. Azure AI customers are using this technology to operate more efficiently, improve customer experience, and build new pathways for innovation and growth.

Read the official document Click Here

Also Read: Microsoft Taps DeepMind Co-Founder Suleyman to Lead New AI Division

Microsoft’s new tool Azure AI Studio for Generative AI app Developers will help to evaluate, mitigate, and monitor risk to realize their goals.

This post was last modified on March 28, 2024 10:42 pm

Kumud Sahni Pruthi

A postgraduate in Science with an inclination towards education and technology. She always looks for ways to help people improve their lives by putting complex things into simple words through her writing.

Recent Posts

Rish Gupta Net Worth: CEO & Co-Founder of Spot AI

Rish Gupta is an Indian entrepreneur who serves as the chief executive officer (CEO) of…

April 19, 2025

Top 10 Robotics Skills Required for Engineering Career Growth

Are you looking to advance your engineering career in the field of robotics? Check out…

April 18, 2025

Top 20 Books on AI in 2025: The Ultimate Reading List on Artificial Intelligence

Artificial intelligence is a topic that has recently made internet users all over the world…

April 18, 2025

Top 10 Best AI Communities in 2025

Boost your learning journey with the power of AI communities. The article below highlights the…

April 18, 2025

Artificial Intelligence (AI) Glossary and Terminologies – Complete Cheat Sheet List

Demystify the world of Artificial Intelligence with our comprehensive AI Glossary and Terminologies Cheat Sheet.…

April 18, 2025

Scott Wu Net Worth: Devin AI Software Engineer, CEO of Cognition Labs

Scott Wu is the co-founder and Chief Executive Officer of Cognition Labs, an artificial intelligence…

April 17, 2025