Azure-AI-Content-Safety
Azure-AI-Content-Safety
Version: 1
MicrosoftLast updated September 2024

Azure AI Content Safety

Introduction

Azure AI Content Safety is a safety system for monitoring content generated by both foundation models and humans. Detect and block potential risks, threats, and quality problems. You can build an advanced safety system for foundation models to detect and mitigate harmful content and risks in user prompts and AI-generated outputs. Use Prompt Shields to detect and block prompt injection attacks, groundedness detection to pinpoint ungrounded or hallucinated materials, and protected material detection to identify copyrighted or owned content.

Core Features

  • Block harmful input and output
    • Description: Detect and block violence, hate, sexual, and self-harm content for both text, images and multimodal. Configure severity thresholds for your specific use case and adhere to your responsible AI policies.
    • Key Features: Violence, hate, sexual, and self-harm content detection. Custom blocklist.
  • Policy customization with custom categories
    • Description: Create unique content filters tailored to your requirements using custom categories. Quickly train a new custom category by providing examples of content you need to block.
    • Key Features: Custom categories
  • Identify the security risks
    • Description: Safeguard your AI applications against prompt injection attacks and jailbreak attempts. Identify and mitigate both direct and indirect threats with prompt shields.
    • Key Features: Direct jailbreak attack, indirect prompt injection from docs.
  • Detect and correct Gen AI hallucinations
    • Description: Identify and correct generative AI hallucinations and ensure outputs are reliable, accurate, and grounded in data with groundedness detection.
    • Key Features: Groundedness detection, reasoning, and correction.
  • Identify protected material
    • Description: Pinpoint copyrighted content and provide sources for preexisting text and code with protected material detection.
    • Key Features: Protected material for code, protected material for text

Use Cases

  • Generative AI services screen user-submitted prompts and generated outputs to ensure safe and appropriate content.
  • Online marketplaces monitor and filter product listings and other user-generated content to prevent harmful or inappropriate material.
  • Gaming platforms manage and moderate user-created game content and in-game communication to maintain a safe environment.
  • Social media platforms review and regulate user-uploaded images and posts to enforce community standards and prevent harmful content.
  • Enterprise media companies implement centralized content moderation systems to ensure the safety and appropriateness of their published materials.
  • K-12 educational technology providers filter out potentially harmful or inappropriate content to create a safe learning environment for students and educators.

Benefits

  • No ML experience required: Incorporate content safety features into your projects with no machine learning experience required.
  • Effortlessly customize your RAI policies: Customizing your content safety classifiers can be done with one line of description, a few samples using Custom Categories.
  • State of the art models: ready for use APIs, SOTA models, and flexible deployment options reduce the need for ongoing manual training or extensive customization. Microsoft has a science team and policy experts working on the frontier of Gen AI to constantly improve the safety and security models to ensure our customers can develop and deploy generative AI safely and responsibly.
  • Global Reach: Support more than 100 languages, enabling businesses to communicate effectively with customers, partners, and employees worldwide.
  • Scalable and Reliable: Built on Azure’s cloud infrastructure, the Azure AI Content Safety service scales automatically to meet demand, from small business applications to global enterprise workloads.
  • Security and Compliance: Azure AI Content Safety runs on Azure’s secure cloud infrastructure, ensuring data privacy and compliance with global standards. User data is not stored after the translation process.
  • Flexible deployment: Azure AI Content Safety can be deployed on cloud, on premises and on devices.

Technical Details

Pricing

Explore pricing options here: Azure AI Content Safety - Pricing | Microsoft Azure .
Model Specifications
Last UpdatedSeptember 2024
PublisherMicrosoft