News

Microsoft’s New Azure AI Tools for Building Secure and Trustworthy Gen AI Applications

Microsoft has announced new tools to be added to Azure AI Studio for generative AI app developers to meet new AI quality and safety challenges that customers face. In the growing world of artificial intelligence, it has become essential to maintain a balance between innovation and risk management. 

The biggest challenge faced by Gen AI app developers is Prompt Injection attacks, where malicious actors try to manipulate an AI system into doing something outside its intended purpose, such as producing harmful content or exfiltrating confidential data. 

Also Read: What is a Microsoft Surface PC with a dedicated Copilot AI button?

Business leaders and organizations are also worried about quality and reliability,: in addition to the security risks to their AI systems. They want their AI system to be free from any kind of error generation to build user trust.

To help customers overcome these AI quality and safety challenges, Microsoft is announcing new tools to be added to its Azure AI Studio for generative AI App developers. 

What are the Important Features of Microsoft Azure AI safety tools:

  • Prompt Shields: It will detect and block direct prompt injection attacks known as jailbreaks to safeguard your LLMs. It will also include a new model for identifying indirect prompt attacks before they impact your model. These are now available in preview in Azure AI Content Safety.
  • Groundedness detection: It will detect “hallucinations” in model outputs, planned to be coming soon. ‘Hallucinations’ in generative AI refer to instances when a model confidently generates outputs that misalign with common sense or lack grounding data. Groundedness detection is a new feature designed to identify text-based hallucinations. This feature detects ‘ungrounded material’ in text to support the quality of LLM outputs
  • Safety system messages: These will steer the model’s behavior toward safe, responsible outputs, planned to be coming soon.
  • Risk and Safety Evaluations: To assess an application’s vulnerability to jailbreak attacks and to generate content risks. These are now available in preview.
  • Risk and safety monitoring: To understand what model inputs, outputs, and end users are triggering content filters to inform mitigations. These are now available in preview in Azure OpenAI Service.
  • Safeguard your LLMs against prompt injection attacks with Prompt Shields: Prompt injection attacks, both direct attacks, known as jailbreaks, and indirect attacks, are emerging as significant threats to foundation model safety and security. Successful attacks that bypass an AI system’s safety mitigations can have severe consequences, such as personally identifiable information (PII) and intellectual property (IP) leakage.
  • Identify LLM Hallucinations with Groundedness Detection: ‘Hallucinations’ in generative AI refer to instances when a model confidently generates outputs that misalign with common sense or lack grounding data. This issue can manifest in different ways, ranging from minor inaccuracies to starkly false outputs.
  • Steer your application with an effective safety system message: Today, Azure AI enables users to ground foundation models on trusted data sources and build system messages that guide the optimal use of that grounding data and overall behavior (do this, not that). At Microsoft, we have found that even small changes to a system message can have a significant impact on an application’s quality and safety.
  • Evaluate your LLM application for risks and safety: How do you know if your application and mitigations are working as intended? Today, many organizations lack the resources to stress test their generative AI applications so they can confidently progress from prototype to production.
  • Monitor your Azure OpenAI Service deployments for risks and safety in production: Monitoring generative AI models in production is an essential part of the AI lifecycle. Today we are pleased to announce risk and safety monitoring in Azure OpenAI Service.
  • Confidently scale the next generation of safe, responsible AI applications: Generative AI can be a force multiplier for every department, company, and industry. Azure AI customers are using this technology to operate more efficiently, improve customer experience, and build new pathways for innovation and growth.

Read the official document Click Here

Also Read: Microsoft Taps DeepMind Co-Founder Suleyman to Lead New AI Division

Microsoft’s new tool Azure AI Studio for Generative AI app Developers will help to evaluate, mitigate, and monitor risk to realize their goals.

Kumud Sahni Pruthi

A postgraduate in Science with an inclination towards education and technology. She always looks for ways to help people improve their lives by putting complex things into simple words through her writing.

Recent Posts

India has less than 2000 Artificial Intelligence (AI) Engineers

According to a recent study, India has less than 2000 AI engineers. This underwhelming figure…

8 hours ago

AlphaFold AI model from Google DeepMind promises to speed up drug discovery

AlphaFold 3 will revolutionize drug discovery and our understanding of biology by precisely predicting the…

8 hours ago

What is NVIDIA DGX SuperPOD for the US Government’s Generative AI Projects?

NVIDIA's DGX SuperPOD will support the US government's generative AI efforts, driven by President Biden's…

8 hours ago

Top 15 Countries with the Most AI Startups [2024]

The top three nations for AI startups are the United States, China, and the United…

11 hours ago

Can you find the hidden snake in the grass in 11 seconds?

A snake is hiding in plain sight in the grass in this image. Only eagle-eyed…

11 hours ago

Top 10 AI Startups and Companies in Bangalore 2024

Learn about the top 10 AI startups and companies in Bangalore for 2024. Explore the…

14 hours ago